Dispatches from Mitch
The winner of any AI race between the U.S. and China
Researcher and former OpenAI board member Helen Toner shared a sobering exchange she had with Senator Josh Hawley this week at a Senate Judiciary hearing. This was way more interesting than you would expect, so forgive me for sharing more blockquotes than usual.
During a hearing ostensibly about Chinese theft of intellectual property (IP), Hawley asked for clarifications on some of Toner’s earlier remarks:
You said to [Senator Durbin] that, regarding American AI companies, you said that it is hard to believe but nevertheless true that American AI companies are working as hard and as fast as they can to try to develop technology that will displace many millions of workers and potentially pose existential risks.
Now that’s my gloss, maybe you wanna correct the record exactly as you said it before. I thought that was very interesting and very important. Could you just reiterate that for us?
Toner replied:
Yes. AI is a very fast-moving field, and I think it is important that as we think about what AI’s implications are for our society, for our civilization, we don’t merely look at the AI systems that we have today—chatbots, starting to be agents that can help a little bit with some professional tasks—but instead we take seriously the goals of the companies that are building these systems.
Over the past 10 or 20 years, it’s gone from a very abstract idea that we might build AI that can outperform humans at any intellectual task, to a pretty concrete idea that some of the most well-capitalized companies in the history of the planet are driving towards as fast as they can. They may fail! It may turn out to be harder than they think to build systems that are that capable.
Personally, I’m skeptical of some of the extremely short timelines that they name, saying we might have these superintelligent AI systems within, you know, one to three years. But it seems so clear that there’s a real possibility that they build these systems within three years, 10 years. If they build it within 10 years, that’s when my daughter is entering high school.
That’s not very long. That is an extremely radical thing to be trying to do, to build computer systems that can outperform humans, that may escape the control of humans, and the companies are telling us they’re doing it, and I think we don’t take them seriously, and we should.
Hawley then mentioned that while the AI companies say they have to beat China, it sounds like the goals of these CEOs “are every bit as nefarious”.
Will it do us any good if these American AI companies are able to pursue their designs without any hindrance? Will it do any good that we beat China if in fact they succeed in displacing millions of American workers, gobbling up all of Americans’ data, completely destroying our IP system, etc.?
Toner:
I think the way I’ve heard this put best is: Right now, the way that we build AI and the level of control we have over it, which is not great, the winner of any AI race between the U.S. and China is the AI. And I think we need to be working to make sure that is not the case. I think it is very important that the U.S. AI sector remains ahead of the Chinese AI sector, but if that’s at the expense of AI overrunning the entire planet, then that is, you know, that hasn’t benefited us.
She goes on to talk about how the U.S. could be doing more to constrain the growth of China’s AI sector. Chip export bans are the obvious move, but this next one is equally important:
I’ll also call your attention to semiconductor manufacturing equipment, what goes in the fabrication facilities. I think it’s even more strategically clear that we should not be allowing China access to advanced tools. That is something that has gotten lip service from the past three administrations but enforcement has been very weak. And I think ensuring that the most advanced lithography tools, the most advanced design software, other aspects of the semiconductor supply chain are not being exported to China to let them build their own indigenous supply chain is also one of the simplest and most important levers we have available.
The most complex devices
Speaking of semiconductor manufacturing equipment, if there’s one name worth knowing in this space, it’s ASML.
The Dutch company was just profiled in a Wall Street Journal piece by Kim Mackrael. It is the world’s only supplier of the extreme ultraviolet lithography machines needed to make high-end AI chips. The machines cost more than $400 million, are the size of a school bus, and can only be made and used in extreme clean-room conditions.
[They] are among the most complex devices humans have ever created. Inside, a high-powered laser fires bursts of light to flatten and vaporize tiny drops of molten tin. The process creates an explosion of extreme ultraviolet light, which the machine uses to print microscopic patterns onto silicon discs.
(For more about the crazy engineering behind these machines, check out this great Veritasium video.)
It would be extraordinarily difficult for any other company or national project to replicate what ASML does on a timescale of less than a decade or two. If it were merely very difficult, I think it would have already happened. ASML is predicting $47 billion in sales this year, with more demand than they can fill.
Good! People like me who want to see international controls on frontier AI development are glad the global manufacturing chain for AI chips relies on irreplaceable equipment from a single company from a single, friendly nation. In some ways, this makes chips much easier to control than nukes. Many countries have uranium deposits, but only the Netherlands has ASML.
Crossover point?
A vice president at Nvidia, which designs and sells most of the world’s most powerful AI chips, says that “For my team, the cost of [AI] compute is far beyond the costs of the employees.”
That’s according to reporting from Axios’s Madison Mills, who gives a few other cases of executives boasting about their huge AI bills relative to payroll. I’m sure some of these claims are true, and that many more will be soon. For now, though, I see a trend where people play up their AI usage in hopes that investors will see them as adaptive and forward-looking.
I also think that looking at AI spend vs. payroll fails to capture a more important trend where solo entrepreneurs and small teams are finding they no longer need to hire more humans in order to build their capacity. It’s easy to see when a payroll shrinks, but much harder to notice one that just doesn’t grow.
Electricity price shock
In an earlier dispatch this week, I reported on a story about competitive U.S. House races in eastern Pennsylvania potentially hinging on responses to datacenter backlash. Some of that backlash is driven by a 21.7% increase in state electricity rates in 2025.
A CBS News story yesterday from Georgia profiled an Atlanta homeowner whose electricity bill has nearly doubled in two years. This fits with September reporting from Bloomberg that found that Americans near data centers were paying more than twice what they were paying two years earlier.
I was, and am, somewhat skeptical of that analysis, because state power rates vary a lot for reasons that have nothing to do with data centers, while data centers are not evenly distributed among the states. There’s also no inherent reason why data centers must increase rates for consumers, if they are normal paying customers.
But in practice, grid operators scrambling to meet unexpected demand end up procuring power from more expensive sources. They also end up charging customers for new construction happening on hasty, unfavorable terms. With data centers currently the chief source of unexpected new demand, a data center owner might pay the same high rate as everyone else but be at least indirectly responsible for driving up that rate.
Politico also reported from Georgia today that the state’s data center boom is reshaping its 2026 governor’s race and Senate contest.
According to recent polling, 47% of Georgia voters oppose data centers being built in their community (5 points above the national figure). It’s not always about the electricity, but the anger is bipartisan, and so are the power bills.
The Wall(-E)s have ears
NBC News’s Jared Perlo reported that Congressional Republicans and Democrats alike are alarmed about AI’s potential to supercharge warrantless surveillance of Americans.
A chief mechanism by which this has long been happening is Section 702 of the Foreign Intelligence Surveillance Act (FISA). It allows agencies to eavesdrop on the communications of foreigners outside the country, but also to collect the messages, emails, and other communications of Americans in contact with those same foreigners.
A second way Americans are surveilled without warrants is through the government’s ability to purchase commercially available data sets, the kind acquired through ads and other consumer tracking technologies.
Americans have always enjoyed some privacy thanks to the sheer expense of combing through all their communications. But with AI able to process vast amounts of data cheaply, that protection is disappearing. So there’s resistance in Congress to renewing Section 702 without modifications to close those loopholes. With no agreement yet reached, the existing law has been temporarily extended.
Sen. Ron Wyden sent a letter to the major U.S. labs asking whether they allow the government to use their tech to surveil Americans. In response, Anthropic disclosed it grants “a small number of national-security customers” an exception permitting Claude to do foreign-intelligence analysis “even if it includes incidentally collected U.S.-person information.” Google was the only other major lab to reply.
Not so anonymous
The Washington Post’s Megan McArdle demonstrated another way AI is threatening online privacy: its uncanny ability to identify the author of unattributed text. Replicating the work of technology reporter Kelsey Piper, she tested Claude Opus 4.7 against her own unpublished writing and found that Claude could identify her from 1,441 words of an old romance novel, 1,132 words of a sci-fi draft, and just 124 words of her mother’s eulogy.
She writes:
We stand to lose much more from de-anonymization than we gain from shaming internet trolls into silence. Unfortunately, at this point, there’s no way to stop it. Like nuclear weapons, as soon as such power became possible, it also became inevitable.
The analyses and opinions expressed on AI StopWatch reflect the views of the individual analysts and the sources they cover, and should not be taken as official positions of the Machine Intelligence Research Institute.