"To benefit humanity"
DeepMind unionization, White House plans, Brockman on the stand, mistaken identity
Dispatch from Stefan
London DeepMind lab votes to unionize
WIRED, The National, and The Guardian all reported today that Google DeepMind staff in London have voted overwhelmingly to seek union recognition, with the explicit goal of blocking the lab’s AI from being used by the U.S. and Israeli militaries.
Per The National, 98% of Communication Workers Union members at DeepMind backed the move, which would secure representation for at least 1,000 staff at the London office. Workers have asked management to recognize the CWU and Unite the Union as joint representatives. If management doesn’t engage, they’ll petition the UK’s Central Arbitration Committee to compel recognition.
The trigger, per WIRED, was Alphabet’s February 2025 decision to remove its no-weapons-or-surveillance pledge from its ethics guidelines. Earlier this week, we covered how the company had agreed to the “any lawful purpose” clause in the Department of Defense’s new AI contracts.
One anonymous employee told WIRED, “A lot of people here bought into the Google DeepMind tagline ‘to build AI responsibly to benefit humanity.’” Another employee told The National that workers feel “betrayed.”
Previously, two internal petitions, each with hundreds of signatures, had received only (in the words of another employee) “non-answers that the corporate comms team had come up with.”
If recognized, the union plans to demand that Google exit Project Nimbus (its $1.2 billion Israeli military cloud-and-AI contract) and provide transparency on how DeepMind models get deployed. Other demands include an independent ethics board, whistleblower protections, the explicit right for any employee to abstain from working on a project that violates their conscience, and meaningful consultation before AI is used to automate workers’ own jobs.
The union is already discussing what direct action could look like: a research strike, or a halt to work on Google’s Gemini models.
Dispatch from Beck
What’s the US policy on AI again?
Top AI companies have voluntarily agreed to submit new AI models to a government program for security evaluations, the Wall Street Journal reports. Google, xAI, and Microsoft agreed this week to join Anthropic and OpenAI in giving their top models to the US Center for AI Standards and Innovation (CAISI). These models are shared with safeguards “reduced or removed” to enable national security applications, which leaves them more willing to engage in potentially unsafe behaviors.
Meanwhile, The New York Times reports that the White House is weighing an executive order that would require formal review of new AI models before they are released. According to “people briefed on the conversation,” the executive order would give the government first access but “not block the release.” That contrasts with the government’s earlier handling of Mythos, Anthropic’s model with advanced cybersecurity capabilities: Anthropic had moved to expand access to additional security-critical companies but pulled back after pushback from the administration.
This could be a significant change in White House AI policy. Prior actions by the administration have been largely anti-regulation, including supporting federal preemption of state AI regulation and rescinding Biden’s comprehensive executive order on AI, which limited chip exports while requiring transparency for models above a certain size. Vice President JD Vance, during his speech at the 2025 Paris AI Action Summit, argued that AI was “not going to be won by hand-wringing about safety... it will be won by building.”
For those of us concerned about safety, oversight and review are good news on the margin, but the mixed messaging and ad hoc decision-making make the administration’s trajectory hard to predict. And it remains unclear whether the order would actually increase oversight or simply formalize the current voluntary process.
Dispatches from Donald
Musk v. Altman et al: Brockman on the Stand
Elon Musk’s lawsuit against OpenAI has entered its second week. Yesterday saw testimony from Greg Brockman, OpenAI’s president, who co-founded the company alongside Musk, Sam Altman, and others.
USA Today’s Deepa Seetharaman and Jonathan Stempel think that Brockman might play a key role in the trial’s outcome: Among “thousands of pages of internal documents” revealed earlier in the trial is a 2017 diary entry from Brockman. Thinking over Musk’s desire to become OpenAI’s CEO and how to respond, Brockman asked himself, “Financially, what will take me to $1B?”
Brockman surely succeeded beyond that modest dream: He testified yesterday that he has a stake in OpenAI worth almost $30 billion. Barbara Ortutay, of the Associated Press, writes that, if this figure is accurate, it would “put him in the Forbes list of the world’s richest people, with wealth comparable to Melinda French Gates.”
The Wall Street Journal previously reported on entanglements between OpenAI and other companies in which Sam Altman had invested. In its coverage of the trial’s second week, it notes that these entanglements extend to Brockman, who also held financial stakes in several companies that OpenAI did business with.
Now, I’m not the judge overseeing this case, but none of this sounds like Brockman’s foremost interest was the betterment of humankind. Presumably, Musk hopes the jury will see it the same way and conclude that Brockman failed to uphold the duty he owed to the original OpenAI nonprofit by helping to create a for-profit arm in which he held a substantial stake.
Google’s “AI Summary” Blends Two Biographies
The Guardian’s Sian Cain reports (5/4) that acclaimed Canadian fiddle player Ashley MacIsaac filed a civil lawsuit against Google. An AI-generated summary of MacIsaac’s career claimed that he had been convicted of several crimes, including sexual assault, and had been put on the national sex offender registry. None of this is true.
This was not a “hallucination” in the way people usually mean it, where an AI invents facts from whole cloth. Per reporting from Billboard Canada, the problem is that there is another man named MacIsaac to whom these things do apply, and Google’s AI Summary merged the two men’s life stories into a single narrative.
MacIsaac lost at least one gig to the misinformation: a concert appearance was canceled after complaints that repeated the false claims as fact. He has also spoken of a “tangible fear” of performing in public. Google fixed the summary once it was brought to the company’s attention (though it did not apologize, MacIsaac says), but the false story may still be circulating.
Google had nothing illuminating to say, offering only a boilerplate non-apology that denied any real wrongdoing.
Like MacIsaac, I think that Google is trying to have it both ways: happy to talk about how capable its systems are, but quick to shed responsibility as soon as something goes wrong. Unfortunately, some mistakes can’t be undone as easily as they’re made. As AI models become more powerful and are given more responsibilities, the potential consequences of a “mistake” continue to grow.
The analyses and opinions expressed on AI StopWatch reflect the views of the individual analysts and the sources they cover, and should not be taken as official positions of the Machine Intelligence Research Institute.