Just because you can do something doesn’t mean you should. That’s an argument that has been leveled against the tech industry in general lately, and it applies equally (or perhaps especially) to the field of artificial intelligence.
Today, for example, I came across a story about Danish brewery Carlsberg running its beer through sensors, and then AI models, to profile its flavor. I’m aware that brewing beer is a scientific endeavor and that, at the scale of a company like Carlsberg, it’s very much a capitalistic endeavor, too.
But there’s something about beer that, to me at least, is very human. When you tour a craft brewery (or when you watch Samuel Adams commercials) you see the people responsible for producing what you’re drinking, and you can feel the connection they have with their creation. We laud breweries that are hundreds of years old and adhere to tradition in the face of mass-market fads.
For some reason, adding AI into the mix cheapens the process to me. Even at a large-scale brewery where that personal connection probably hasn’t existed for a while, the idea that we need algorithms to help determine what might taste good is disheartening. I guess that’s because this isn’t pharmaceutical research, where the right or wrong chemistry has life-or-death consequences. It’s just beer.
It reminds me of efforts to have AI systems create music, artwork or literature. I might be in the minority here, but I don’t care about much of that work beyond, for example, its providing proof that AI pattern recognition is improving. Often, it’s the flawed and imperfect human experience that makes great art so great.
It’s not really Condé Nast’s fault. The company publishes fashion magazines, and using AI to improve its product is just smart business. But at a time when we’re publicly debating the value (and values) of social media and the web, not to mention the effects of rampant consumerism, an AI model that can distinguish Gucci bags from Prada bags seems kind of superfluous.
Here are a few more items that caught my eye today and seem worth pointing out:
A.I. researchers leave Elon Musk lab to begin robotics start-up (New York Times): I’ll forgive the NYT for going with the clickbait headline, but the actual story is that Pieter Abbeel (formerly of UC-Berkeley and OpenAI) just launched a new industrial robotics company called Embodied Intelligence. I’m planning to have him on the ARCHITECHT Show podcast later this week, so keep your eyes peeled for that.
AI could set us back 100 years when it comes to how we consume news (MIT Technology Review): Mostly because we’ll be less able to tell whether pictures and video are real or fake. This is sad and, frankly, kind of scary.
Google and AWS add more security: AWS added new encryption features to S3, along with visual warnings to users whose data is exposed to the public web. Google added DNSSEC to its Cloud DNS service. I think we’re at an interesting point in the evolution of cloud computing: we understand the value of centralized management and respect the security practices of companies like Google and Amazon, but we also increasingly question our reliance on a handful of companies for all of our personal and business needs. We also ding these same companies about privacy a lot. But, really, if they can actually help bring some order to the madness that is cybersecurity, I think people will overlook a lot of sins elsewhere.