I never really gave much thought to smartphone launches until AI became one of their dominant themes. Now that I’m paying some attention, I can’t help but feel like Google and Apple have found themselves a hammer with computer vision and speech recognition, and, accordingly, they now see a lot of things that need to be nailed. I’m pretty certain nobody was dying for the Google Clips camera
(which sits on your counter and automatically captures “endearing, heartwarming moments”) or the Pixel 2’s music-identification service. And now that they’re here, I’m pretty sure most people will still find a way to live without them.
If you read my take on the hubbub over the iPhone 8’s new AI features, it pretty much sums up how I feel about the Pixel 2. These are useful devices, and it’s amazing how far phones have come in the last decade, but from an AI perspective they’re kind of underwhelming.
That being said, the real-time translation feature of the Google Pixel (ear) Buds
does sound very cool in theory. I’m just not sure how often I’ll come across someone who happens to be wearing a pair, doesn’t speak English, and with whom I absolutely need to speak. Take away that first dependency (the other person owning a pair), though, and you have something really meaningful.
And all of that being said, no one has ever really come to me for opinions on gadgets, so I’m not sure why I’m wasting anybody’s time with them ;-)
DeepMind’s new Ethics & Society unit
This new unit will help us explore and understand the real-world impacts of AI. It has a dual aim: to help technologists put ethics into practice, and to help society anticipate and direct the impact of AI so that it works for the benefit of all.
I say it might
be important because, as with all things fusing technology and ethics, the devil is in the details. In this case, the details are how much money, resources and time DeepMind and its “fellows” are willing to put into this initiative, and how independent its outcomes will be from the profit motives of DeepMind and Google. For more on that, I think Natasha Lomas at TechCrunch offered some fair criticisms of the effort
and valid questions for DeepMind to answer.
But conflicts of interest aside, I’m also concerned about the efficacy of think tanks and other institutions that rely too heavily on academics and research types to set their direction. This is mostly because (and I’m certain I’ve talked about this in an earlier issue of this newsletter) ever since the advent of “big data” several years ago, we’ve had really smart people talking about many of the same ethical issues that now arise with AI. But for all the justified concerns over algorithmic bias, privacy and filter bubbles, it seems to me that governments and companies (at least in the United States) have been pretty slow to act.
Why another ethics committee, even one managed by DeepMind, would change anything is a mystery to me.
What might be really useful—especially now that AI is not just a hypothetical, but a real thing in the wild (and in our pockets) and advancing every day—is some more significant dialogue among a broader range of folks, from CEOs to blue-collar workers, and from politicians to academics. I don’t think any group is equipped to seriously tackle the issue of AI on its own, but a concerted effort might result in some real progress or at least open people’s eyes to different points of view.
More new AI chips are on the way
If you can look past the hyperbole, here are two more promising projects in the AI-chip space: