Dispatch from Mitch
Not just rhetoric
The New Yorker’s Kyle Chayka argues (4/15) that the AI industry’s own apocalyptic rhetoric is to blame for recent violence directed at data centers and Sam Altman.
Altman’s famous 2015 quote makes an appearance: “A.I. will probably most likely lead to the end of the world, but in the meantime there’ll be great companies.” Also getting a mention is an Onion headline (satire) that seemed to resonate with people: “Sam Altman: ‘If I Don’t End the World, Someone Far More Dangerous Will.’”

Chayka says it’s hard to tell when existential warnings are just marketing hype, and implies we should mostly take them as a product of inflated egos.
There is a persistent delusion of grandeur among those leading the A.I. charge. In his blog post, Altman wrote, without apparent irony, that the prospect of controlling artificial general intelligence was like the “ring of power” from “The Lord of the Rings.”
It is certainly odd that some of the most prominent voices warning about the extinction threat from advanced AI are the very people building it, but I place “marketing hype” pretty far down my list of likely explanations.
After all, most of these folks were warning about the dangers of superhuman AIs years before they were put in a position to build them, and long before they had anything to gain via the dubious marketing strategy of saying their industry could kill everyone. (Do you remember when the oil companies or cigarette companies used that playbook? Me neither.) Also, like that Onion headline implies, most of the leading AI CEOs started building AIs because they were worried that the other guys were more likely to screw it up. There are delusions of grandeur here, but they are mostly in individuals thinking that they are more capable than the next person of building superintelligence safely; the truth is that, at present, no one can.
Chayka wraps up the piece by saying the AI companies should get off their high horses, drop the consultations with philosophers and Christian leaders, and “stop appointing themselves as the only arbiters of safety.” He says it’s a “system of external accountability” that’s needed, “with input and involvement from the public.” I agree that’s better than what we’ve got. But also, the race to superintelligence must stop.
Dispatches from Joe
AI, jobs, and “jagged intelligence”
Noam Scheiber of the New York Times reports (4/15) that white-collar workers are automating computer tasks but not social ones, such as meetings. Featured is AI power user Dan Sirk, currently serving as a “fractional executive” for two companies and planning to take on a third. The piece frames social skills as durable moats, on the dubious assumption that AI can never learn to schmooze with clients and stakeholders like Dan Sirk can. Nor does it really explore the implications (good or bad) of one executive doing the work of three.
Fortune’s Nick Lichtenberg makes a contrasting case (4/16), rounding up recent surveys to argue that workers resisting AI are falling behind.
The NYT’s Cade Metz splits the difference (4/15) with a profile of “jagged intelligence,” a term coined by Andrej Karpathy to describe AI’s current tendency to be superhuman in many domains while struggling in others (helpful graphics here). Metz frames this jaggedness as a stable property of AI rather than a feature of the weird transition we’re in, citing models’ poor performance on François Chollet’s new ARC-AGI 3 benchmark. But the piece concludes with an acknowledgement that AI is rapidly improving.
U.S. Senator and earnest firebrand Bernie Sanders capitalized on job concerns in a Fox News op-ed (4/16), framing AI as a job-destroying repeat of offshoring and his data center moratorium bill as a protection for American workers. The piece focuses primarily on jobs, propaganda, and privacy, but closes by calling AI an “existential threat” that could “function independently of human control, with possible catastrophic outcomes.”
“That obviously must not be allowed to happen,” Sanders adds. “The international community must come together to prevent this nightmarish scenario.”
China bets on tokens
Across the Pacific, China is spinning up a busy trade in AI “tokens,” the basic units of text that AI models read and generate. Daily token consumption in China hit 140 trillion in March, up 40% in three months. Robyn Mak of Reuters Breakingviews argues (4/15) that China’s bet may be misguided; quality matters as much as quantity, and tokens from cheap AIs aren’t exactly fungible with those from U.S. frontier labs. Meanwhile, export controls look set to keep the highest-end AI chips out of Chinese hands, making it harder to compete on either front. Since China still lacks the centralized compute capacity to train large models, its labs are mostly stuck waiting for U.S. labs to push out more capable ones.
Gimmicky step makes investors leap
The pivot to AI by shoe company Allbirds is making prints everywhere, so I guess we’ll grudgingly cover what’s afoot. The New York Times reports that the rebranded “NewBird AI” is receiving $50 million from an unnamed investor to buy GPUs. That’s more than the company soled for after its value plummeted 99%, and only the barest foothold when it comes to compute. The company saw its stock jump in the wake of this pivot, sparking widespread buzz, but this says less about AI than you might think; such buy-and-pivot effects aren’t unheard-of in companies coming unlaced (and they sometimes involve legally dubious schemes). Catchy as this story is, I wouldn’t run with it.