"Talking about one thing but doing the other"
China summit, robot live-stream, AI making AI, and more
Dispatches from Mitch
Contradictions go to China
The Trump-led U.S. delegation arrived in China today for talks on trade and AI. Along for the ride: a slew of contradictions that may complicate negotiations.
The New York Times’s Vivian Wang provides choice coverage of it. On the one hand, she says:
American scholars have generally sought to discuss existential risks — such as A.I.-designed pathogens or accidental nuclear war — and the possibility of super-intelligence, or artificial intelligence with capabilities that exceed the human brain.
But on the other hand:
Jiang Tianjiao, a professor at Fudan University in Shanghai who has participated in many discussions with U.S. scholars, said that many Chinese scholars, especially in the security and defense communities, were skeptical of their U.S. counterparts’ intentions. They pointed to Mr. Trump’s efforts to loosen domestic restrictions on A.I. at home as proof that safety discussions were a trap to slow China’s development.
Jiang puts it bluntly: “These people believe the U.S. is talking about one thing but doing the other.”
Another contradiction: U.S. Treasury Secretary Scott Bessent is supposed to take a leading role in talks. But Wang points out that Bessent had criticized Senator Bernie Sanders for holding an AI forum two weeks ago with American and Chinese scientists, tweeting that “The real threat to AI safety is letting any nation other than the United States set the global standard.”
The last-minute inclusion of Nvidia CEO Jensen Huang in the American delegation — reported by POLITICO and other outlets — also presents something of a contradiction. The White House has aggressively painted China as an AI rival to be kept in check; it recently accused Chinese companies of engaging in “industrial-scale” campaigns to clone the capabilities of America’s best AIs. But Huang and his American company want to sell Chinese firms the very high-end chips needed to train their own frontier models. The White House has gone back and forth on letting him do so.
Not all the contradictions are American. In summit coverage today, Reuters’s Laurie Chen notes that the head of a Beijing-based AI safety consultancy has suggested the two nations set up a “no-blame hotline” for warning each other about suspected AI incidents, yet China has reportedly failed to pick up American calls to an established hotline for defusing military incidents.
Chen, at least, understands why a hotline would be nice to have. Her list of the stakes includes “‘rogue’ systems acting on their own.”
The box-flipper grind
Yes, it’s a marketing stunt. No, I don’t think it’s doing actual paid work. No, I don’t think its performance is satisfactory.
But because almost all the impressive robot footage out there is cut from who-knows-how-many takes, it’s still a reality check to watch this robot from Figure grinding away at an irregular, physical task and know that it’s working on it whether you are looking or not.
The live feed claims this is “a team of humanoid robots running a full 8-hr shift at human performance levels.” It might still be going when you see this post.
If you watch for long enough, you are sure to see it screw up. When that happens, remember the mantra for all new technologies: “This is the worst it will ever be.”
Meta employees disgruntled
It seems at least some employee dignity can be found even at Meta. The Facebook parent recently installed screen and mouse-tracking software on employee computers to help train its AIs, and is planning to lay off 10% of its workforce next week.
But some workers have had enough. Reuters reports that flyers spotted around multiple U.S. offices ask workers if they really want to work at an “Employee Data Extraction Factory.” The flyers cite the right to organize under the National Labor Relations Act and encourage signing an online petition.
Asked for comment, a company rep pointed to an earlier statement that “If we’re building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them — things like mouse movements, clicking buttons, and navigating dropdown menus.”
I have been unable to determine whether anyone in the company’s AI divisions is actively protesting the mouse-tracking policy — or whether they are even subject to it. Given that Meta has offered pay packages worth north of $300 million for top AI talent, I find it hard to believe they would risk irritating said talent just to harvest a few extra mouse clicks.
Honor, code
There’s been a fair amount of chatter the past year or so about how high schools and universities might need to return to proctored, hand-written exams, thanks to AI-assisted cheating. Remember blue books?
Well, The Wall Street Journal reported yesterday that Princeton has ended its 133-year-old tradition of relying on its honor code for exam integrity; the school that used to ban proctoring during exams will now require it, starting this summer.
The school claims this comes at the request of “significant numbers” of undergrads and faculty who believe cheating has become widespread.
A student newspaper survey found 30% of seniors admitted cheating on an assignment or exam, and nearly half knew of an honor code violation — but less than 1% reported one, and reports tend to be anonymous, making them harder to investigate. Tipsters fear being called out on social media.
This one hits close to home for me. I have my own school-aged kid who is furious at all the cheaters she sees. When schools and students fail to police cheating, it’s the honest students who pay.
A professor quoted in the piece asks if you would hire a lawyer who used AI to take the bar exam. But if we don’t do more to stop cheating (and, you know, make sure we don’t go extinct), that’s the only kind of lawyer there will be.
Dispatches from Alana
Blind acceleration
Recursive self-improvement is the idea that an AI can improve itself, taking on the work of an AI researcher and effectively automating the creation of better AI. (Researchers and industry leaders are rapidly pursuing this goal, with aspects of it already underway.)
Once an AI becomes sufficiently capable at improving AI, each improvement could make the system better at generating the next improvement, potentially creating a feedback loop of rapidly escalating capability and a very short road to an AI that is vastly better than humans across every domain (“superintelligence”).
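The feedback-loop intuition above can be sketched with a toy calculation (a deliberately simplistic illustration, not a model of any real system): if each round of improvement scales with the system’s current capability, growth compounds, whereas fixed, human-paced improvement only adds up linearly.

```python
# Toy illustration of the recursive self-improvement feedback loop.
# All numbers here are arbitrary; this is intuition, not prediction.

def compounding(capability: float, gain_rate: float, rounds: int) -> float:
    """Each improvement is proportional to current capability:
    a better system makes bigger improvements next round."""
    for _ in range(rounds):
        capability += gain_rate * capability
    return capability

def fixed_pace(capability: float, gain: float, rounds: int) -> float:
    """Each improvement adds a constant amount, as when the pace
    is set by outside (human) researchers."""
    return capability + gain * rounds

start = 1.0
print(compounding(start, 0.10, 50))  # roughly 117x after 50 rounds at 10% per round
print(fixed_pace(start, 0.10, 50))   # 6x after 50 rounds at the same per-round gain
```

Same per-round gain, wildly different endpoints: that gap is why people worry the road from “pretty good AI researcher” to “vastly better than humans” could be short.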
AI experts have warned about the dangers of recursive self-improvement for at least a decade (the first main section of this Stanford Law piece provides a good summary). We don’t currently know, and aren’t close to knowing, how to steer a superintelligence, and we likely can’t control the specific ways the AI will choose to “improve” itself — it will decide what should stay and go using whatever strange mix of preferences its initial training left it with, making it even harder than it already is to instill safeguards and guardrails.
These concerns are completely ignored in a New York Times piece covering the startup Recursive Superintelligence, where eight researchers who previously worked at Google, Meta, and OpenAI are coming together to deliberately pursue recursive self-improvement. The company has a $4B valuation and has raised $650M from venture capital firms and chip manufacturers. The article notes that it shouldn’t be confused with the company “Ricursive Intelligence” (yes, that’s how they spell it), coincidentally also valued at $4B, and also pursuing recursive self-improvement, “which has been an obsession among Silicon Valley technologists for decades.”
A specialty of the new startup’s researchers? Letting systems run for “days, months or even years in pursuit of goals set by the researchers.”
If Times readers need any help understanding why it’s incredibly dangerous to let systems that researchers openly admit are effectively black boxes run uninterrupted in the hopes of facilitating recursive self-improvement, I recommend 80,000 Hours’ episode on the scenario covered in If Anyone Builds It, Everyone Dies. The scenario begins with developers letting a new system think uninterrupted for a mere 16 hours, and ends in...well...you can probably guess from the book title.
If we exist
It was interesting to see director James Cameron “yes but” Oscar-nominated actress Demi Moore’s reassurances that AI will not replace human creativity. As reported by Breitbart, Moore told Variety at the Cannes Film Festival that AI is here and we need to find a way to work with it. She’s quoted in the article giving a common refrain — in short, we don’t have to worry about AI because it will never be human: “The truth is there really isn’t anything to fear because what it can never replace is what true art comes from, which is not the physical, it comes from the soul.”
I find myself agreeing with Cameron, who points out (emphasis mine): “We honor and celebrate actors. We don’t replace actors. That’s going to find its level. I think Hollywood will be self-policing on that. We’ll find our way through that. But we can only find our way through it as artists if we exist. So it’s the existential threat from big AI that worries me more than all that stuff.”
The analyses and opinions expressed on AI StopWatch reflect the views of the individual contributors and the sources they cover, and should not be taken as official positions of the Machine Intelligence Research Institute.




