
“Securities” by Lux Capital: Defense Fordism

Danny Crichton
Autonomous warfare / Nvidia’s coming acquisition wave / chiplets / etc.

Defense Fordism and AI
Image Credits: U.S. Navy / MC3 Connor Loessin
The war in Ukraine, if it comes (or has already come by the time you read this; these things move fast), may well be one of the last wars we can process with human perception.
Much as the Boer War in South Africa at the turn of the twentieth century offered a savage preview of the weapons and tactics to come in the Great War, today’s conflicts foretell how war will be experienced in the coming decade. Unmanned and in some cases autonomous drones have already been used offensively in North Africa and Syria. Moore’s Law, which so far seems to apply as much to drone technology as it did to semiconductors, implies that the capabilities of autonomous hardware will increase exponentially even as costs decrease. Fully autonomous war is just around the corner.
The Pentagon is certainly aware of the future, but is it positioned to succeed? Drones and autonomous weapons have been studied for decades, and concerns about doctrine have been just as forthcoming. Paul Scharre at the Center for a New American Security wrote a great book, Army of None, exploring the intricacies of this coming form of warfare back in 2018. Yet it’s one thing to ponder and strategize, and another to actually experience and fight such a war firsthand.
Sue Halpern of The New Yorker had a great story this week on how this revolution is going to be experienced. She explores DARPA’s autonomous dogfighting challenge, with a specific focus on how human pilots are adapting to this new future of artificial intelligence:
In one scrimmage, his plane and the adversary’s chased each other around and around—on the screen, it looked like water circling a drain. The pilot told [Katharine] Woodruff [a researcher working with SoarTech] that, though he let the A.I. keep fighting, “I know it is not going to gun this guy anytime soon. In real life, if you keep going around like that you’re either going to run out of gas or another bad guy will come up from behind and kill you.” In an actual battle, he would have accepted more risk in order to get to a better offensive angle. “I mean, A.I. should be so much smarter than me,” he said. “So if I’m looking out there, thinking I could have gained some advantage here and A.I. isn’t, I have to start asking why.”
Latent throughout the piece is the same quiet dread that has animated American manufacturing for the past few decades: the sense that machines are taking over, that production is being outsourced to “others,” that a way of life is vanishing. The Fordism that powered the postwar economic golden era in America has become today’s wasteland of fentanyl and social bile. Dreams of being a fighter pilot will soon go the way of the high-paying union manufacturing job in a prosperous industrial Midwest town.
Yet despite awareness of the future, one senses that nostalgia and inertia are still driving U.S. security policy. Far from punching ahead, the Pentagon’s goal with the Air Combat Evolution program is to support a human-computer hybrid model where humans are “battle managers” overseeing the work of their AI fighting counterpart. Yet, it’s already obvious that the artificial intelligence has little need for the human intelligence sitting in the cockpit:
Trust will also be crucial because, with planes flying at speeds of up to five hundred miles an hour, algorithms won’t always be able to keep pilots in the loop. [Peter Hancock, a psychology professor at the University of Central Florida], calls the discrepancy in reaction time “temporal dissonance.” As an analogy, he pointed to air bags, which deploy within milliseconds, below the threshold of human perception. “As soon as you put me in that loop,” he said, “you’ve defeated the whole purpose of the air bag, which is to inflate almost instantaneously.”
The dogfighting trials are early and ongoing, but AI pilots are arriving quickly. In fact, we will almost certainly deploy AI fighter planes before autonomous vehicles reach our roads. Our cars have to halt at fallen stop signs, weave around construction sites, and navigate the intricacies of pedestrians, delivery vans, bikes, and more, all without making a single, tragic mistake. An AI fighter plane? It may hesitate to target and neutralize an enemy aircraft, but the constraints of the ground fall away in the air.
There is a discontinuity coming in war, where the costs won’t be borne by the humans doing the fighting, but exclusively by the humans being fought over. If, as some political scientists believe, the elimination of an active draft in the United States encouraged military adventurism by lowering the costs of going to war, the same will hold even more true when there are hardly any costs at all. Even worse, that calculus will apply equally to America’s adversaries, from China and Russia to insurgents.
Swarms of cheap, violent autonomous weapons that can’t be stopped with conventional hardware. It’s the new Fordism of defense, but unfortunately, the United States is still obsessed with the old Fordism.
The Pentagon’s testing report on the new Gerald R. Ford class of aircraft carrier, which was obtained by Bloomberg this week and is expected to be submitted to Congress later this year, notes widespread shortcomings in the ship’s ability to defend itself as well as in its operational systems. The ship, which cost about $13 billion and whose construction began in 2005, could see operational deployment later this year.
(That’s only slightly better than the James Webb Space Telescope, which finally reached its destination this week after nearly 24 years and about $8.8 billion of planning and construction.)
Compare that prodigious spend on a single vessel to the entire autonomous warfare budget: according to Halpern at The New Yorker, the Pentagon will spend $1 billion on AI this year.
That’s the fundamental tension of defense acquisition today. The multi-decade commitments that advanced warfighting platforms like the Gerald R. Ford require are slamming straight into the Moore’s Law of autonomous machines. The Ford’s construction was underway before the introduction of the iPhone, and it’s still not in service, even as experimental Chinese weapons like hypersonic missiles will likely prove effective at neutering it (an exercise Beijing has already made a clear priority).
There’s the canonical line attributed to William Gibson about the future not being evenly distributed, but what happens when the future is staring you right in the face and nostalgic blindness prevents a leap to the next generation? It’s a Brave New World out there, and the Fordism of the Navy’s past, and of the rest of defense, is running up against the Fordism of the future. Cheap and abundant will beat expensive and rare.
There are some positive moves to adapt to these new threats. Lux portfolio company Anduril announced a nearly $1 billion contract with the U.S. Special Operations Command (SOCOM) on an indefinite-delivery, indefinite-quantity basis (government-speak for a contract with a fixed term but open-ended order volume). Anduril will work as a systems integration partner with SOCOM on countering unmanned threats.
Down payments on the future are good. But we need more, faster — just like the AI weapons systems that will buzz around past the barrier of human perception.
Speaking of ARMing yourself for the future
Image Credits: Nvidia Corporation
Nvidia may not be ARMing itself with ARM after all. The $40 billion September 2020 marriage proposal was one of the most important acquisition announcements in recent tech memory, but a year and a half later, the chipmaker looks set to abandon the deal, according to a report in Bloomberg this week.
This was a fraught deal right from the beginning. Given ARM’s centrality to chip design, particularly in mobile and increasingly in embedded and automotive applications, customers cried foul, worried that Nvidia would leverage its influence over ARM’s design roadmap for its own market gains. Then there were the antitrust reviews triggered in four(!) jurisdictions (U.S., U.K., EU and China), all of them quite frosty on the intersection of national security and semiconductors.
It was something of a slow-motion train wreck, but Nvidia will skip ahead just fine. The same cannot be said of SoftBank, which currently owns ARM and paid $32 billion for the company back in 2016. SoftBank desperately needed liquidity from the ARM sale to pay back its colossal debt, and a breakup would be damaging. SoftBank has lost 55% of its value since a peak in February 2021, and Masayoshi Son’s right-hand man, Marcelo Claure, left the firm over what was reported to be a pay dispute.
With the deal’s ambiguities eliminated and tens of billions of acquisition money back on the table, though, Nvidia has the war chest to go shopping for other properties, which bodes well for semiconductor exits, particularly given wider market turbulence.
A different direction on chips
Image Credits: DARPA
Chiplets, a perennial favorite of our very own Shahin Farshchi, are in the limelight. As he puts it: “Chiplets are here, and they will become ubiquitous.”
Today, a single chip is often a combination of multiple processing systems manufactured together (what’s dubbed a “system on a chip,” or SoC). If you’ve seen Apple’s M1, this will be familiar: separate CPU, GPU, and machine-learning cores all packed together onto one piece of silicon.
That model has great economics, but there is a clear downside to the all-in-one approach. Fabs are overwhelmed with demand right now, putting an extremely high premium on the smallest nodes, such as TSMC’s 5nm and upcoming 3nm processes. Machine-learning cores need to run at the highest performance, but the same is not true for, say, memory or a wireless modem. Right now, whichever subsystem requires the smallest node dictates the choice of fab, and of process, for the entire chip.
Chiplets unbundle the systems on a chip. Each system can be manufactured on its own silicon (perhaps at a larger node with cheaper economics) and packaged together later. What gets more interesting is that engineers are stacking these chips in the Z-axis for much greater performance while using significantly less power. By working in three dimensions rather than two, the distance — and thus latency — between systems can be significantly reduced.
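The yield argument behind that unbundling can be made concrete with a toy sketch using the classic Poisson defect model. All of the numbers here (die costs, the defect density, the packaging cost and yield) are illustrative assumptions, not real fab figures, but the shape of the result holds: because yield falls off exponentially with die area, several small chiplets can come out cheaper per good part than one large monolithic die, even after paying for packaging.

```python
import math

def die_yield(area_mm2: float, defect_density_per_cm2: float) -> float:
    """Poisson yield model: probability a die of this area has no killer defects."""
    return math.exp(-defect_density_per_cm2 * area_mm2 / 100.0)

def cost_per_good_die(die_cost: float, area_mm2: float, d0: float) -> float:
    """Amortize the cost of bad dies over the good ones."""
    return die_cost / die_yield(area_mm2, d0)

D0 = 0.2  # killer defects per cm^2 (illustrative, not a real fab number)

# One monolithic 600 mm^2 SoC at an assumed $100 of wafer cost...
monolithic = cost_per_good_die(100.0, 600.0, D0)

# ...versus four 150 mm^2 chiplets at $25 each, plus an assumed
# $10 packaging step that itself yields 98% of assembled parts.
chiplets = 4 * cost_per_good_die(25.0, 150.0, D0)
packaged = (chiplets + 10.0) / 0.98

print(f"monolithic: ${monolithic:.2f} per good die")
print(f"chiplets:   ${packaged:.2f} per packaged part")
```

Under these made-up inputs, the big die yields only about 30% while each small chiplet yields about 74%, so the chiplet part ends up less than half the cost of the monolithic one; that exponential gap is a large part of why the industry is moving this way.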
Once experimental, these technologies are hitting early scale, but challenges linger. AMD and Intel (via its Foveros technology) have incompatible standards and approaches, which is slowing progress, plus the fabs aren’t entirely ready for this new paradigm.
Nonetheless, there’s an opportunity here for startups to come in and build something just as this evolution in semiconductors crescendos. One key area Shahin has been interested in is technology that makes the multi-chip packages connecting chiplets into a single unit more efficient and less expensive. Another opportunity is to build a “chiplet-native” startup that uses the enhanced performance of this chip design to outcompete on certain workloads.
Chiplets and 3D stacking are just getting underway, but they will be definitive in the coming years and represent one of the largest macro trends in semiconductors.
That’s it, folks. Have questions, comments, or ideas? This newsletter is sent from my email, so you can just click reply.
Danny Crichton @Lux_Capital

"Securities" is a weekly newsletter on science, technology, finance and the human condition

Created with Revue by Twitter.
Lux Capital, 920 Broadway, 11th Floor, New York, NY 10010