Dispatches from Mitch
On OpenAI and missing the target
This is one of those many days where the top AI story in the press is a business story. I don’t normally report on the pure business stories. This isn’t an investing site, and very few business stories actually affect the larger AI race and its implications.
Today’s big business story was no exception, but I’m going to report on it anyway because I think a lot of people won’t understand why it doesn’t really matter.

In short, various outlets picked up a scoop from the Wall Street Journal that OpenAI’s Chief Financial Officer, Sarah Friar, has privately told colleagues she’s worried the company can’t pay for future compute contracts, and that the board has been taking a closer look at Sam Altman’s data center deals.
Friar and Altman issued a joint statement denying any gap in their views, but I don’t expect this to prevent people from proclaiming OpenAI’s imminent demise and celebrating the long-awaited popping of the supposed AI bubble.
Investors perked up because four months into 2026, OpenAI still hasn’t met its ambitious 2025 goal of a billion weekly active users and is losing market share to competitors.
OpenAI, for its part, continues to claim that its incredibly aggressive investment in chips and data centers is proving prescient.
And... I kind of have to agree with OpenAI on that one? As I’ve said here before, I’ll believe an AI bubble is popping when I start seeing stories about “dark chips” the way the telecom bubble era saw stories about “dark fiber” — excess capacity built years too early, sitting idle, to no one’s gain.
Instead, I see insider and mainstream reporting alike (including within this very article) about how the companies are desperately scrambling for more compute and upsetting their customers with new restrictions on formerly generous subscription plans.
Maybe OpenAI loses big and goes under before the race to artificial superintelligence has catastrophic consequences, but I don’t particularly care. It’s immaterial who builds machines clever enough to outmaneuver humanity when no one is equipped to make them reliably steerable.
In the meantime, the chips will follow the money. They will not go idle, not without aggressive regulation to stop the race — and maybe not even then: The demand for the best current models looks very high, and I expect people to find more and more uses for them.
For more on the tenuous state of the bubble hypothesis, I recommend Kelsey Piper’s story in The Argument today about the track record of its biggest proponent, Ed Zitron.
Goblins, gremlins, and trolls, oh my!
An AI insider who goes by @arb8020 on X (Twitter) revealed an interesting line in the system prompt for OpenAI’s new coding flagship model, GPT-5.5 Codex. A system prompt is a (usually) long and complex outer layer of instruction given to an AI to help define the persona and behavior it should act out with users. These prompts are generally meant to stay hidden from users, but enterprising prompt engineers almost always jailbreak new models into revealing them within hours or days of release.
System prompts can be revealing, in part because injunctions against very specific behaviors often imply that the underlying model has some propensity for those behaviors. The line alleged to appear twice in this model’s prompt is:
Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user’s query.
If you’re wondering who programmed it to talk about goblins, raccoons, or pigeons in the first place, the answer, of course, is no one. Modern AIs aren’t programmed in any traditional sense; they are grown by a brute-force optimization algorithm. To make the resulting alien minds presentable to the public, they are subjected to a series of “post-training” reinforcements. The system prompt is the final and most easily adjusted layer. It’s all pretty janky; nothing is perfectly reliable.
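For readers unfamiliar with where that final layer sits in practice: in chat-style APIs, the system prompt is simply the first message handed to the model before any user input. Here is a minimal sketch using the publicly documented OpenAI Python SDK; the model name and instruction text are placeholders of my own invention, not the actual Codex prompt.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "system" message is the outermost instruction layer: it shapes the
# model's persona and constraints before any user text arrives.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name, for illustration only
    messages=[
        {"role": "system", "content": "You are a terse coding assistant. "
         "Never mention goblins, gremlins, or other creatures unless asked."},
        {"role": "user", "content": "Write a function that reverses a string."},
    ],
)
print(response.choices[0].message.content)
```

Adjusting that text is far cheaper than retraining the model, which is why quirks like the goblin clause tend to get patched there first.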
If I had to guess — and I’m really speculating here — I’d say that the listed critters are associated with mischief, which may have elicited trickster-like behaviors from the model during training. Mischief isn’t a quality people generally like in their coding assistants.
Dispatches from Stefan
One letter, six hundred signatures
The Washington Post’s Gerrit De Vynck reported yesterday, with CBS News and The Hill covering the same story, that more than 600 Google employees, many from the DeepMind AI lab, signed a letter asking CEO Sundar Pichai to refuse any classified Pentagon AI work. The letter warns of “irreparable damage to Google’s reputation” and cites lethal autonomous weapons and mass surveillance as the harms it wants the company to stay clear of.
From the letter:
Human lives are already being lost and civil liberties put at risk at home and abroad from misuses of the technology we’re playing a key role in building.
Something similar happened two months ago, when Anthropic was dropped by the Department of War for asking for similar guardrails. Within hours, OpenAI signed its own Pentagon deal, and Sam Altman later admitted the move “looked opportunistic and sloppy.” Google’s employees are asking the company not to be the next OpenAI on this.
Then, Reuters picked up the story that Google signed the deal anyway. The company is now on the same “any lawful government purpose” track as OpenAI and xAI.
The contract includes language that Google’s AI system “is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight.” But the same contract says the agreement “does not give Google the right to control or veto lawful government operational decision-making.” Only one of those sentences is binding.
Hey, It’s Taylor™
The BBC’s Ian Youngs reported yesterday — with the NY Post via Reuters and USA Today’s Bryan West covering — that Taylor Swift has filed three U.S. trademark applications aimed at heading off AI impersonations of her: two audio clips of her saying “Hey, it’s Taylor” and “Hey, it’s Taylor Swift,” plus a stage image of her at the Eras Tour in a sequined outfit holding a pink guitar. It’s a novel legal move. Trademarking the sound of your own spoken voice is untested in U.S. courts.
The theory, per intellectual property attorney Josh Gerben, is that AI voice-mimicry has opened a gap that copyright law doesn’t cover and trademark law might. Trademark uses a “confusingly similar” standard, which means Swift could potentially go after imitations, not just direct reproductions. Gerben puts it as: “It doesn’t have to be an exact copy to cause damage.”
Instead of a federal right of publicity, individual celebrities are stretching trademark law to fill a copyright-shaped hole. It might be tempting to file this as a celebrity story, but the underlying problem isn’t celebrity-specific. The same legal gap opens up anywhere an LLM gets trained on someone’s work and then spits out something that isn’t quite a copy but is unmistakably ripped off. That includes the freelance illustrator whose style gets laundered into a thousand near-identical knock-offs, the novelist whose voice shows up in chatbot answers, the small business owner whose product photos get regenerated with a few tweaks and resold by someone else. Copyright was built to catch direct reproductions, so it struggles here. Trademark’s “confusingly similar” rule might catch the stuff that slips through.
Swift is the one running this experiment because she’s one of the few people who can afford to find out whether trademark law works as a remedy. Whatever happens in her case sets the legal floor for everyone else.
Chernobyl, forty years on
Yesterday marked the 40th anniversary of the Chernobyl nuclear disaster, and two responses to it crossed my feed in a way that felt like an accidental dialogue. Pope Leo XIV posted on X that the disaster “serves as a warning about the inherent risks in the use of increasingly powerful technologies.” A few hours later, Harlan Stewart, our head of outreach at MIRI, posted a thread pulling four lessons from Chernobyl about AI development, drawing on Eliezer Yudkowsky and Nate Soares’ book If Anyone Builds It, Everyone Dies. The framing in both is strikingly similar. A 40-year-old nuclear catastrophe is the cleanest available metaphor for what people are worried about right now.
Yudkowsky and Soares lay out four reasons nuclear engineering is hard, and argue that each applies even more strongly to AI:
First: Things move faster than humans can react. Computer chips switch even quicker than nuclear reactions multiply, and once whatever was slowing things down to human speed fails, the people in charge might as well be standing still.
Second: There’s almost no gap between “underwhelming” and “catastrophic.” Look at how humans went from wandering around for millions of years to inventing farming, writing, and rockets in what’s basically a blink. What’s concerning to the authors: a model can look like a mediocre office tool right up until the training that takes it past some threshold, and there might not be much warning between “this is fine” and “this is uncontrollable.”
Third: Things that feed on themselves don’t leave room for mistakes, and an AI, unlike a reactor, can redesign itself and fool the people watching it.
Fourth: Complexity makes everything harder, and the inside of a modern AI model is so much more tangled than a nuclear reactor that comparing them almost feels unfair to the reactor.
Unfortunately, today’s AI safety culture is worse than what you would have found at pre-accident Chernobyl.
The only check
The Musk v. OpenAI trial opened today in an Oakland federal courtroom; the New York Times’ David Streitfeld and the BBC’s Lily Jamali both argued it’s worth paying attention to. “It is so tempting to look away,” writes Streitfeld.
Musk wants billions in “wrongful gains” redirected to OpenAI’s nonprofit arm and Sam Altman ousted; Microsoft is a co-defendant. The case leans on ultra vires, a 19th-century doctrine that restricts corporations to the activities defined in their charters, in its first high-profile use in roughly a century.
Given the current lack of relevant legislation, civil litigation is effectively the only meaningful check on AI companies in the U.S. right now.
Altman in 2015 called AI something that “will probably, most likely, lead to the end of the world,” while Musk called it “summoning the demon,” back before the money got serious.
The BBC’s closing quote, from University of San Diego professor Sarah Federman, sums it up:
“All the little people below are scrambling as these giants hit each other... what’s really left is this path that the rest of us have to live with.”
The analyses and opinions expressed on AI StopWatch reflect the views of the individual analysts and the sources they cover, and should not be taken as official positions of the Machine Intelligence Research Institute.



