Big Revolution - The paranoia trap

April 4 · Issue #389
Welcome to Thursday’s newsletter. Let’s dive straight into today’s news…
– Martin from Big Revolution

Big things you need to know today
  • 540m Facebook user records were discovered on publicly accessible third-party servers. The data was removed yesterday, but had been exposed since at least January.
  • WordPress has named the 12 publishers that will collaborate with it on Newspack, the new version of the CMS focused on news publishers.
The big thought
Credit: Ken Treloar on Unsplash
The paranoia trap
What might future malware attacks look like? Forget stealing your data or locking up your computer until you pay a bitcoin ransom: a new experiment has shown just how much we’ll all have to be on the lookout for distorted reality in the future.
Security researchers in Israel have hacked cancer screening equipment to hide real tumors and insert entirely fake ones.
“Even when the radiologists were made aware that the scans were being altered, they still struggled to make a correct diagnosis. When they were given a second set of images with a warning that some had been changed, the medical professionals were still tricked into thinking computer-generated nodules were real 60 percent of the time.
"When the malware was used to remove nodules, 87 percent of the readings incorrectly determined the patient was healthy. The humans put through the test shouldn’t feel too bad, though—screening software used to confirm diagnoses fell for the malware’s tricks every single time.”
Ouch.
This was only a research demonstration, but it shows just how sophisticated future malware from determined attackers could be.
Security software is going to have to become more sophisticated, but we as individuals are going to have to become more sophisticated too. And I don’t know how we do that without becoming paranoid conspiracy theorists who trust nothing. If you look at certain quarters of social media, some of us have already got there. And that’s exactly what some of the people peddling misinformation want.
The future is certainly going to be an information verification nightmare.
One big read
The problem with AI ethics
“Big tech companies like Google and Microsoft have embraced AI ethics boards and charters, assuring the public that they’re essential to keeping the worst effects of machine learning in control. But are these projects actually helping anyone?”
That’s all for today...
Catch you in your inbox tomorrow. In the meantime, if you enjoy this newsletter, please recommend it to a friend or colleague. They can subscribe at this link.