Dispatches from Mitch
Bargaining, chips
With President Trump in Beijing tomorrow for a U.S.-China summit, we’re seeing a surge of articles about the two countries’ bargaining positions with regard to AI. Who leads? Who needs?
The most important data point today might be the one offered by Dustin Volz, Julian Barnes, Sheera Frenkel, and Tripp Mickle in today’s New York Times. They broke the news that officials from U.S. lab Anthropic met last month with a representative of a Chinese think tank, who “insisted” that Beijing be given access to the company’s powerful new Mythos model — the AI currently shared only with select cyberdefenders.

(As we reported yesterday, even European countries don’t have Mythos access yet.)
This made the U.S. intelligence community perk up, because Beijing is understood to keep a tight leash on Chinese think tanks when they are engaged in unofficial diplomacy. The request was therefore a strong indication of China’s priorities: They want access to the models American companies are using to shore up their cyberdefenses before corresponding offensive capabilities are widespread.
So, access is one potential bargaining chip at the summit, which is increasingly expected to have a strong AI focus. Officials on both sides of the Pacific are comparing it to Cold War-era arms talks: An unnamed senior U.S. official told reporters Sunday that the U.S. and China were interested in setting up a “deconfliction” channel for discussing and mitigating AI risks. (Think “red phone”. You love to see it!)
And a CNBC report from Evelyn Cheng today quoted someone at a Chinese think tank saying that the two countries could work on “a global treaty to regulate the use of AI in the military.” He said that an arms race isn’t just bad for both countries, but for humanity.
What else might be on the summit agenda?
Open weights. Per the Times piece we led with, the White House wishes China would stop letting its AI companies share the weights of their best models. (While mostly uninterpretable to humans, the weights are the AI’s code; if you own the weights, you own the model.) Open weights releases are a boon to cybercriminals, because guardrails against misuse can be trivially removed and the models copied onto private hardware. There’s no taking back an open release.
Chinese exports. Per reporting by Bloomberg today, roughly half of China’s recent growth in exports has come from AI-related hardware. The tariff exemptions President Trump quietly made for data center materials earlier this year indicate some potential leverage for China there.
U.S. chips. Access to U.S. chips remains the greatest limiting factor for China’s AI industry. A different piece in the New York Times today, by Meaghan Tobin, reports on the country’s drive to develop an independent tech stack — AIs optimized to run on Chinese-made chips, themselves optimized to run Chinese AIs. Chinese chips are becoming “good enough” for many tasks, but they are still in short supply, and are substantially worse than the best from America and its Taiwanese manufacturing partners. This is an especially acute problem for Chinese companies trying to train new frontier models.
Unfortunately, while the two countries are expected to talk about potential guardrails around AI use, they are not expected to discuss stopping the AI race itself. But never say never? The shape of this summit already looks so different from what it did a few weeks ago. By the end of this one — or the start of the next — who knows what could be on the table?
People like us often invite you (in the U.S.) to contact your representatives, but did you know you can also contact the White House?
Dispatches from Joe
Proving grounds
When a new state-of-the-art AI is trained, no one knows exactly what it can do. The work of figuring out the capabilities of each new model is done by third-party “evaluators” and by some government agencies.
After a sudden shift in priorities, two federal agencies are now wrestling for the right to evaluate AIs. The Washington Post describes the conflict as a “turf battle,” but the jurisdictional squabbles sit atop a genuinely important question about how the federal government views AI.
Our first contender is the Center for AI Standards and Innovation (CAISI), a branch of the National Institute of Standards and Technology in the Department of Commerce. Originally created in 2023, CAISI has a voluntary arrangement with many AI companies, who share their models before deployment for testing purposes.
In the wake of Mythos and recent developments in automated cybercrime, some members of the national security community are saying this isn’t good enough. The Office of the National Cyber Director, which reports directly to the President, proposed a new center for AI evaluations under the Office of the Director of National Intelligence (ODNI). Parallel to this, some in the administration are arguing that federal evaluations of frontier AIs ought to be mandatory.
I find this development fascinating because until recently, the administration was largely in favor of unrestricted AI development, to the point of trying repeatedly to quash state regulations. In 2025, CAISI pivoted towards innovation and away from a focus on “safety” — it had previously been the U.S. AI Safety Institute.
My inner cynic suggests this move might be primarily a power grab by an agency that’s noticed the growing influence of AI.
But I’m tentatively optimistic that the intelligence community is taking note of AI’s destructive potential, and I’m glad that at least some voices in the administration are raising the question of whether it’s wise to let AI development proceed unchallenged.
A hasty revision
Two years ago, Colorado became the first state to pass substantial regulation on AI. The law requires companies using AI to make hiring, housing, and lending decisions to disclose details about how those decisions are made.
...Or it would, if the legislature had not gutted the law before it came into effect. Jesse Paul of the Colorado Sun recounts the rise and fall of SB 205.
The original bill admittedly had serious flaws. It was criticized as vague and overbroad, with definitions so loose that, absent an explicit exemption, even a spreadsheet could in theory count as AI. Among the industry complaints was the fact that model developers and users can’t fully explain how their systems make decisions, because modern AIs are black boxes that no one fully understands.
An April lawsuit by Elon Musk’s xAI challenged the bill, and a White House executive order threatened to cut funding to “States with onerous AI laws.” It’s a decent guess that this pressure contributed to the hastiness of the revision.
It’s genuinely hard to design good state-level AI regulation, especially when so many of the risks cross state lines. There have been other attempts more narrowly aimed at frontier model developers, like New York’s RAISE Act and California’s SB 53, but they’ve shared the pattern of being watered down before passage.
Dispatches from Beck
Harvested by the future
In a new essay in the New York Times, author Yi-Ling Liu argues that Americans and Chinese share a common set of anxieties inflicted by AI. It’s worth the read.
To Liu, Silicon Valley meme-ers advising how to avoid the “permanent underclass” through hustle culture and “grindset” echo the Chinese tech workers’ lived experience of “996” (a work schedule running from 9am – 9pm, 6 days a week). American influencers flying to China for drone-delivered KFC replicate the Chinese fixation with “American consumer abundance — its shopping malls and sprawling suburbs.” And she observes young people in both nations who are uninterested in having children, with many turning to AI in a lonely world.

Liu calls out the “US v China race” as a narrative produced by “Silicon Valley executives and Washington policy wonks,” to “justify sprinting ahead without guardrails.” On both sides of the Pacific, the race narrative masks the divide between those benefiting from AI and those being harmed.
One user of RedNote (a Chinese social media app) described the reality of the AI-exposed worker as “being harvested by the future.” Workers are increasingly tracked, hired, and fired with algorithmic supervision, only to then find cheap solace where they can, perhaps in chatbot companions, or in rose-tinted nostalgia for better times.
Liu cites “gradual disempowerment” (from this 2025 paper about humans increasingly handing over decision making to AIs) as not just a future risk but “a diagnosis of the present day.” While some surrender, accepting the Chinese internet idiom to “let it rot,” Liu argues that collaboration is the answer. That we can and should be like the scientists and policymakers at last year’s World AI Conference in Shanghai, who called for international cooperation “to ensure that advanced A.I. systems remain aligned with human values.”
I agree, and am heartened to see this sane analysis enter the discourse.
Watchin’ races
Politico reports that Alex Bores, a New York Assemblyman and U.S. House candidate, has been endorsed by Rep. Pat Ryan, who cites AI policy as the deciding factor. Bores co-sponsored the NY state RAISE Act, described by the campaign as “the toughest AI safety law in the nation,” and has been the target of attack ads by anti-regulation super PACs. The millions that have been spent against Bores have boosted his profile and brought national prominence to the race.
The Bores campaign said that he and Ryan “share a belief that the next Congress must take decisive action to regulate artificial intelligence before this transformative technology outpaces the rules meant to govern it.”
Who knows how history will write this story, but surely campaign staff across the country will be watching closely to see how the increasingly salient issue of AI affects the election. Primary voting takes place on June 23rd.
Dispatches from Stefan
What workers actually want from AI
The Guardian’s Michael Sainato reported today on a new AFL-CIO poll, conducted by David Binder Research, showing what American workers want from AI on the job.
With the caveat that this is a commissioned poll from a labor organization, here are some key numbers from Sainato’s writeup: Over 90% support having unions negotiate rules around AI at work, 95% say a human should make the final call on decisions that affect them, and 94% want to be told when AI is monitoring them — though only 7% say their employer actually does say when and how they’re monitored. The poll memo adds an urgency number Sainato leaves out: Nearly eight in ten say it’s extremely or very important that something be done to protect workers from AI, soon.
The memo also fills in what workers are actually worried about, in their own words: 71% said they’d be uncomfortable with their employer analyzing their screens during work hours. 70% felt the same about location tracking. Focus-group participants described the monitoring as “creepy” and a threat to their autonomy, and worried about AI making judgments about their behavior with no human ever stepping in.
One participant said, “An algorithm can’t be held accountable for unfair labor practices. So these companies can just defer and say, ‘Oh, well, nobody made that decision. It was the AI.’”
The analyses and opinions expressed on AI StopWatch reflect the views of the individual contributors and the sources they cover, and should not be taken as official positions of the Machine Intelligence Research Institute.