If you’re enjoying this newsletter, the podcast or the website, please feel free to share it via social media or good, old-fashioned word of mouth.
Like most people, I don’t have a good answer to the question of what Facebook should do about filter bubbles, privacy, harassment or any of the other issues currently facing the company. But, like most people, I also have an opinion.
What I know for sure about Facebook is that the company’s work (and open spirit) in the fields of data infrastructure, data science and, now, artificial intelligence has been hugely beneficial to the technology community. To the extent its technologies and practices are adopted outside of Facebook and used in beneficial ways, the company has made a difference well beyond the four walls of its platform. Some of the advances Facebook has helped spur in deep learning, for example, could literally save lives when applied in health care.
I also know that Facebook has been a truly useful tool in certain crisis situations—from protests to natural disasters—and that its work on projects like Internet.org, while controversial in some respects, is also very promising. I also tend to believe Zuckerberg still thinks Facebook can make the world a better place, and that he isn’t just blowing hot air.
However, Facebook’s problems are also real, and I don’t think the company can simply data-science or AI—if you will—its way out of them. Indecent and criminal images and posts? Sure. Harassment? Sure.
Privacy? Not likely. Filter bubbles and an increasingly polarized society? No way. Those are the types of issues that data science and AI arguably helped to create. Facebook continuously tweaks its algorithms to give people more of what—and whom—they want to see. It can tag people and activities in images with remarkable accuracy.
On the one hand, these capabilities make the platform more engaging for users. On the other hand …
But short of dialing down the algorithms and turning back the clock on Facebook’s features by a decade—making investors’ heads explode in the process—it’s hard to see how some of these problems will be resolved. Users, apparently, cannot be trusted to police themselves, or even to apply common sense in many cases.
Perhaps there is a clever way to engineer a solution, but perhaps adding more data or more algorithms will only make things worse.