There are a ton of research papers about using deep learning to analyze medical images and diagnose conditions, but that hypothetical "this is very promising" feeling morphed into something resembling unease at the prospect of this stuff actually being applied in the wild. That was especially true of the stroke scenario, where decisions need to be made in real time to prevent brain damage. It also did not escape me that the partnership between Samsung and MedyMatch is "pending regulatory approval" of the latter's technology.
I'm assuming that regulators want to make sure the algorithms are accurate, which is a reasonable thing to demand. But, per the HBR authors, where do they draw that line? Seventy-five percent? Ninety percent? One hundred percent? How that accuracy compares with the accuracy of humans and other technologies on the same task also seems relevant.
What worries me more than accuracy, though, are the externalities. What happens if, down the line, doctors start relying too much on the AI and either forget what they've learned or never really amass the kind of knowledge gained only through experience? Nick Carr wrote a whole book about the topic, and countless others have weighed in on it, but we're now at the point where automation is really kicking into high gear, including in professions such as health care, law and banking.
It's also worth considering the level of blowback that will follow any high-profile mistakes or other issues caused by AI in a field like medicine. Take driverless cars as an example. Humans crash cars ALL THE TIME—I saw three separate accidents involving at least eight cars while driving my kid home from skating lessons last night—but it's major news when an autonomous driving system does the same. Regulators and politicians need to do some serious thinking about what level of perfection we expect from AI systems—in automobiles and elsewhere—and where we're willing to place the blame when something goes wrong.
The big question isn't whether AI and automated systems will be better than humans at certain things—they will—but, rather, how logically and fairly human institutions will react when something goes wrong. If the risk of adopting AI is too high for either consumers or companies, then uptake might be slower than we'd all like to see.
I couldn't find a good place to fit this news about targeted phishing attacks on prominent GitHub developers, so I'm just including it up here. There are many scary things about the methods and the actual malware, but the scariest part might be the Dimnie trojan's roots in espionage. Depending on who's behind the attacks, the particular GitHub repos might not be the ultimate target, but rather the internal networks and IP of the large companies for which those developers work.