I’m not hugely concerned about the China vs. U.S. artificial intelligence race,
as I’ve explained before. To recap, that’s because (1) so much of early AI usage is in the consumer space, (2) so much AI work is done in the open (even if it’s not always open source), and (3) the United States is still home to Google, OpenAI, Microsoft, Facebook and lots of other leading AI research institutions. Yeah, the U.S. government could do a better job mapping out its AI future, like China has done, but at the moment the state of AI in the U.S. looks pretty good.
Such is not the case in the world of supercomputing, a field once dominated by the United States. For the last several years, Chinese systems have topped the TOP500 list of the world's fastest supercomputers, and this year
Chinese systems claim the top two slots. The highest-ranked U.S. system, Oak Ridge National Lab's Titan supercomputer, comes in at No. 5. (An optimist might point out that the United States holds spots 5 through 8 – giving it four top-10 systems to China's two – but
Chinese systems account for 35.4 percent of the list's aggregate performance compared with 29.6 percent for U.S. systems.)
Even if the United States does retake the throne next year, which it promises to do with Summit, an IBM system being built at ORNL, China expects to beat the world to the first-ever exascale system by 2020.
Nobody thinks that having the fastest supercomputer makes you the world’s biggest superpower, but it can be a pretty big deal symbolically. It suggests a country that takes science, and computer science, very seriously. And, practically speaking, bigger systems can handle bigger, more complex applications.
It’s possible all of this international one-upmanship is a waste of time and money, but if we’re going to concern ourselves with it, we probably don’t want to overlook current challenges because we’re too distracted by newer, shinier ones.