There are plenty of ways this could go wrong: developing hidden biases around gender or ethnicity; accidentally letting the wrong people through (the developers are only aiming to get the software to 85% accuracy, which leaves plenty of room for error)… but isn’t that just the same as human border control agents?
Border control is heavily based on the assessments of human beings, with their own biases and foibles. I’ve experienced it first-hand; more than once I’ve been left in the hands of border control agents to decide whether I would be allowed into a country.
The wrong visa? Wearing unauthorised tech? I’m sure being a polite, British, white man worked in my favour in these cases. It’s not right that I had an advantage because of my background and characteristics, but at least it was obvious to everyone that this was all based on human behaviour.
If an automated system turned me away, the fact that I’d ‘failed’ against the computer would no doubt count against me, even if I later got to speak to a human about it.
Like everything in life, border control relies on the unpredictable world of human interaction. A.I. and algorithms have biases too: the biases of the people who coded them and the data they’re trained on. But software can have an air of definitive authority that human interactions don’t. “Computer says no,” as the old comedy sketches went.
I fear we’ll lose a lot if we place all our trust in automation over human interactions – and we might not always even realise it.