A.I. straight out of the 1800s
When I started studying A-Level Sociology at the age of 16, the first topic we looked at was crime. To see how society’s understanding of crime had evolved, we first looked at 19th-century theories which proposed that criminals had certain physical features that distinguished them from law-abiding folk.
This is obviously rubbish, and we quickly moved on to more advanced studies of how society drives certain people to crime. But the idea that people used to think criminals looked a certain way has stuck with me over the years, because it’s so easy to mock.
Now the idea is back. Motherboard reports that over 1,000 A.I. experts have signed a letter asking a scientific publisher not to release a paper that claims software can predict who will commit a crime based on their appearance alone.
“As numerous scholars have demonstrated, historical court and arrest data reflect the policies and practices of the criminal justice system,” the letter states. “These data reflect who police choose to arrest, how judges choose to rule, and which people are granted longer or more lenient sentences […] Thus, any software built within the existing criminal legal framework will inevitably echo those same prejudices and fundamental inaccuracies when it comes to determining if a person has the ‘face of a criminal.’”
While you can’t (yet) convict someone of a crime without any evidence that they committed it, systems like this can certainly lead to harassment of innocent people by law enforcement. And given the data the system would be trained on, it’s pretty obvious that the people harassed would be the kind of people the police already hassle needlessly.
So such a system would be pointless, cruel, and based on assumptions that sociologists debunked over 100 years ago. Any intelligence involved in it is certainly artificial.