‘Fake news’ isn’t a tech problem – it’s human
In recent months, I’ve seen countless pitches for, and discussion about, technology solutions to misinformation being spread online. That’s understandable – if fake stories are spreading via tech, it makes sense that the solution should come through tech too.
But automated language analysis and fact-checking apps are still at a relatively early stage. Tech can’t prove something to be false, especially when there’s nuance to propaganda – it’s rarely outright false, just dishonestly twisted for emotional impact.
And then there’s the whole idea of ‘bias’ – a tech person might try to find a solution to bias in the media by surfacing ‘just the facts,’ but media people know that whenever you publish a set of facts you’re exhibiting a form of bias by deciding what’s relevant and important.
Facebook might be able to show a user alternative viewpoints when their friend shares a disputed story, and it might be able to display a debunk from Snopes, but people are still going to believe the stories that fit their worldview over more accurate accounts. And people are still going to abuse technology to exploit that weakness.
Until we all become more willing to listen to voices outside our own worldview, tech solutions are only going to plug holes in an increasingly weakened dam.