Dispatches from Mitch
“They know my face.”
At the start of the current conflict with Iran, we saw a lot of stories about the Project Maven software used by the U.S. military to identify targets and coordinate strikes. We heard that AI, specifically Anthropic’s Claude, had turbocharged the whole system.
The LA Times’s Nabih Bulos reports today on Israel’s equivalent AI-enhanced system, and, chillingly, how it looks to those in its crosshairs.

The piece opens in Lebanon with a 62-year-old receiving a phone call from the Israeli military: “Ahmad, you want to die with those around you or alone?” Ahmad Turmus, a Hezbollah liaison, answered “Alone.” He hung up, told his family to leave, and got in his car.
Israel’s system fuses smartphones, security and traffic cameras, Wi-Fi signals, drones, government databases, and social media. Identities are linked across sources. Relationships and routines are mapped.
An Israeli colonel claims the system can do in seconds what once took human analysts several weeks.
An AI specialist who left defense work over Gaza concerns warns that these systems cause flawed inputs to get repeated “faster and with more confidence,” sometimes turning “correlation into action without always having context.”
Turmus, told by his family to flee, refused. “They know my face,” he replied. “There’s nothing we can do against this.”
Some thirty seconds after he got in his car, it was struck by two missiles.
For richer, for poorer
The New York Times reported Friday that Maryland became the first U.S. state to ban AI-driven surveillance pricing in groceries.
Surveillance pricing is when sellers offer different prices to different buyers based on what they know about them. Estimates of what a buyer might be willing to pay in a given moment can be greatly improved through timely fusion of more data from more sources — something AI is great at.
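To make the mechanics concrete, here is a toy sketch of what a personalized-pricing function might look like. The signals, discounts, and markups are invented for illustration; they do not come from any reported system.

```python
# Hypothetical surveillance-pricing sketch. Signal names and the
# adjustment factors below are made up for illustration only.

LIST_PRICE = 10.00

def personalized_price(signals: dict) -> float:
    """Adjust the list price using crude willingness-to-pay signals."""
    price = LIST_PRICE
    # A buyer who comparison-shops a lot looks price-sensitive: discount.
    if signals.get("comparison_visits", 0) >= 3:
        price *= 0.85
    # A buyer browsing late at night on a premium device may be less
    # price-sensitive: markup.
    if signals.get("premium_device") and signals.get("late_night"):
        price *= 1.10
    return round(price, 2)

# The same item, two different buyers:
bargain_hunter = {"comparison_visits": 5}
impulse_buyer = {"premium_device": True, "late_night": True}
print(personalized_price(bargain_hunter))  # below the list price
print(personalized_price(impulse_buyer))   # above the list price
```

Note that the same mechanism moves prices in both directions, which is the point made above: it can squeeze the inattentive, and it can also put an item within reach of a shopper who would otherwise have passed.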
The practice probably sounds like unalloyed evil — it can certainly be used to exploit people in tough situations, or who have more money than time to spend shopping around. But my shoulder economist obligates me to point out that the practice cuts both ways, sometimes bringing a price within reach to someone who would otherwise have passed on it.
A traditional way to selectively offer a lower price is through tedious discount programs like coupons, a way to find people with more time than money. If AI can identify such people without their having to waste that time, isn’t that a win?
(If you’re wondering why they can’t just offer everyone the lower price: Sometimes they could, but many businesses couldn’t stay afloat that way, and their exit would leave us with fewer options. In some industries, wealthy “whale” customers subsidize the rest of us.)
That said, if you think companies already know too much about us, and you would rather not be squeezed harder just because they know you’ll still pay, I’m with you. And so are legislators in the thirty-three other states where bills similar to Maryland’s are under consideration.
In sickness, and in health
What about surveillance pricing... in healthcare?
A story in The Guardian today finds that this is essentially what Kenya’s new “AI-powered” health insurance system is actually doing.
Intended to replace the country’s national insurance system, it uses an opaque formula and “a predictive machine learning algorithm” to calculate how much patients can afford to pay.
Collaborative investigations with local journalists found the system was “systematically overcharging the poorest Kenyans, overestimating their incomes, while undercharging the wealthiest by underestimating their incomes.”
This has sometimes led to patients not getting the care they need.
One doctor calls the system “a really poor tool for identifying poor households. A great tool for helping the government run away from responsibility.”
It does sound to me like the problem here isn’t that AI couldn’t do the job well, but that the people responsible for the system benefit from it running poorly.
AI-run government?
Fox News’s Kurt “CyberGuy” Knutsson reported yesterday that the United Arab Emirates plans to have an “AI-run government within two years.” But the fine print is a lot less cyberpunk.
The actual plan seems to be to embed more agentic AIs into the bureaucracy for stuff like permit applications.
I claim that the real story here, which is barely a story, and definitely isn’t news, is that the UAE wants to be seen as a high-tech oasis for overseas investment.
I share the story as a state-level example of the trend where individuals and organizations overstate the degree to which they are embracing AI in hopes of looking like a good bet for the future. This definitely happens, but that doesn’t mean AI is just hype.
Yes, most businesses using the word “blockchain” in 2017 were full of crap. But retailers who failed to start saying “internet” in the late ’90s really were cooked.
Lockstep dealmaking
Business stories in the Wall Street Journal and Bloomberg were mirror images of each other this morning. The Journal confirmed rumors that Anthropic was going into a $1.5 billion joint venture with some consultancies and private-equity firms, and that OpenAI was about to do something similar.
Bloomberg reported that OpenAI raised more than $4 billion for a new joint venture with a different set of consulting and private-equity firms, and that Anthropic was about to do something similar.
The name of the game for all involved is cost-cutting and efficiency. These are alliances built to overhaul businesses from the inside out. This morning’s stories are sowing the seeds for the headlines you’ll see in the coming months about ailing corporations gutting their workforces as part of “AI-first” restructurings.
“Most hated men in America”
Reuters and others reported that two days before the trial, Elon Musk approached OpenAI President Greg Brockman to gauge interest in settling his suit against Sam Altman and others at the company they originally co-founded. This is according to new documents filed in the case yesterday.
Allegedly, Brockman proposed both sides drop their claims, but Musk replied: “By the end of this week, you and Sam will be the most hated men in America. If you insist, so it will be.”
Altman, Brockman, and Microsoft chief Satya Nadella are expected to testify in the next few weeks.
A national security risk
Dean Ball and Ben Buchanan, former AI advisors to Trump and Biden, respectively, jointly argue in a New York Times op-ed today that AI is a national security risk, and that Washington isn’t doing nearly enough. The piece was motivated by two recent events:
Last month’s announcement that Claude Mythos had found vulnerabilities in much of the world’s critical and most-used software.
Mythos, and OpenAI’s GPT-5.4, matching or exceeding human performance in bioweapons-related tasks.
To get it out of the way, I disagree with Ball on important matters not discussed in this piece. The ultimate national security risk from AI is the extinction threat from artificial superintelligence. Ball acknowledges that frontier AI poses catastrophic risks but has previously argued that governments are doomed to make it worse and that hopes for international cooperation are unrealistic.
But he and Buchanan are right, in this op-ed, that we need to do more about the cybersecurity and bioterror threats, whether or not we are “competing with authoritarian powers for control of A.I.’s future.”
I think their prescriptions are too focused on China, but that doesn’t mean they’re bad. They say we should:
Tighten export restrictions on advanced A.I. chips
Crack down on smuggling of said chips
Close loopholes that let Chinese firms rent advanced chips remotely
In what for Ball is a change, or perhaps a carveout, he and Buchanan express optimism about the potential for diplomacy:
The United States will have to cooperate with China and other competitors on catastrophic risks that threaten all of society, such as the potential terrorist use of A.I.-enabled bioweapons. In these negotiations, China will no doubt complain that U.S. restrictions hold it back. But the United States has repeatedly struck agreements with hostile countries on controlling the use and spread of other dangerous technologies, such as nuclear weapons, even as it has continued to deny them access to cutting-edge U.S. systems. The Trump administration and Congress should do the same thing with A.I.
It’s time for bipartisan work on these fronts, they say. On that, I wholly agree.
The analyses and opinions expressed on AI StopWatch reflect the views of the individual analysts and the sources they cover, and should not be taken as official positions of the Machine Intelligence Research Institute.


