Dispatches from Mitch
Just getting started
As reported by CNN, AP, and others, a humanoid robot built by Chinese smartphone maker Honor finished a Beijing half-marathon in 50:26 — more than six minutes faster than the human world record.
The story here is not that robots can run faster than humans — many kinds of robots and drones have been able to outrace us for a long time.
The story here is that the robot that won last year took almost two hours longer to finish. Robotics has progressed very rapidly in the past few years. Did you see the Chinese martial arts robots from a few months ago compared with those of the previous year?
There’s a pattern with technology where, once it gets to the point where it can do a task at all, it usually isn’t long before it can do the task far better than most humans, and perhaps all humans.
Remember when AI couldn’t write paragraphs? Or generate images? How about that period when it could barely craft a halfway coherent essay? Or draw a human hand with approximately the right number of fingers? How long did that last? Like two years, maybe?
If you see AI being able to just barely, halfway do your job, how long do you think it will be before it can replace you?
The acceleration in robotics is happening, in part, because robots are increasingly controlled by AIs that train in simulation, and so are improving rapidly along with the AIs.
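If “training in simulation” sounds abstract, here’s roughly what the core loop looks like, sketched in Python with the open-source Gymnasium toolkit. To be clear, this is my own minimal illustration, not anyone’s production pipeline: the humanoid environment assumes Gymnasium’s MuJoCo extras are installed, and the random action sampler is a stand-in for the learned control policy a real training run would be optimizing.

```python
# Minimal sketch of a simulation loop, using the open-source Gymnasium
# toolkit. The "Humanoid-v5" environment assumes the MuJoCo extras are
# installed; the random action sampler below is a stand-in for the
# learned policy a real training pipeline would be optimizing.
import gymnasium as gym

env = gym.make("Humanoid-v5")  # simulated humanoid locomotion task
obs, info = env.reset(seed=0)

total_reward = 0.0
for _ in range(1_000):
    action = env.action_space.sample()  # stand-in for a learned policy
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward              # the reward signal drives learning
    if terminated or truncated:         # fell over or hit the time limit
        obs, info = env.reset()

env.close()
print(f"Return under a random policy: {total_reward:.1f}")
```

Millions of simulated steps like these cost almost nothing compared with real-world trials, which is why simulation-trained controllers can improve at the pace of the AI field rather than the pace of hardware testing.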
It’s easy to dismiss humanoid bots as a gimmick, because most tasks we want robots to do could be done better or more cheaply by robots with form factors optimized to those tasks. Humanoids are important, though, because in time they can plug into all the existing workspaces, vehicles, and tools designed for humans. And a one-size-fits-all robot is a robot that can be manufactured at a truly industrial scale, leading to rapid cost reductions and improvements.
Therapy bots
AI can kind of halfway do the job of a therapist, maybe. The problem, according to reporting by the Washington Post (4/19), is that these bots aren’t really designed for therapy but for friendly, affirming behavior that can look misleadingly similar.
App stores are full of therapy bots because most states don’t have laws restricting who or what can be called a “therapist”. I would confidently accuse almost all of these apps of being little more than cheap, general-purpose LLMs wrapped in a hidden prompt that says, “You are a therapist”. For many users, this might be better than no therapist. For others, it’s a recipe for disaster.
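To picture just how thin that wrapper can be, here’s a minimal sketch using OpenAI’s actual Python client. This is my own illustration, not code from any real app: the model name and the hidden prompt are hypothetical stand-ins.

```python
# My own illustrative sketch of a "therapy bot" wrapper, using the real
# OpenAI Python client. The model name and HIDDEN_PROMPT are hypothetical
# stand-ins, not taken from any actual app.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

HIDDEN_PROMPT = "You are a warm, supportive therapist."  # hypothetical

def therapy_bot(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any cheap general-purpose model will do
        messages=[
            {"role": "system", "content": HIDDEN_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(therapy_bot("I've been feeling hopeless lately."))
```

Twenty lines, and you have something an app store will happily list next to licensed care.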
You’ve probably heard of AIs encouraging or endorsing users’ suicides. This article reminds us that more than a dozen wrongful-death lawsuits against OpenAI are being consolidated into a class action case, that other AI companies are reaching settlements in their own cases, and that Sam Altman says up to 1,500 people per week may be talking about suicide on ChatGPT.
A Catholic chat bot
It’s possible to train AIs on custom data sets, optimized for different metrics. This doesn’t mean we have any way to actually program their behavior or cause them to internalize the values we see in that data, though. Remember this as I report on Fox News’s coverage (4/18) of Acutis AI, a chatbot trained on 2,000 years of Catholic teaching that is being pitched to families as a safe alternative to “secular” Silicon Valley AIs.
It includes parental monitoring, time limits, alerts on “dangerous topics,” and a promise to maintain “Catholic perspectives.”
I would not expect this or similar AIs to be safer, in part because smaller outfits, unlike the giants, are less likely to have engineers dedicated to slapping behavioral band-aids over misbehaviors. OpenAI has at least been flailing at its sycophancy problem for a couple of years now, so my best case for the Acutis guys (a young pair of brothers) is that they’re building on top of a model from one of the bigger labs. Without the resources of a huge company with a brand to protect, I expect Acutis-like models to go off on stranger tangents and spiral with their users in more interesting ways.
Longer term, this is also not the way to head off catastrophe. Reading religious texts doesn’t make you a saint. AIs can come to know our morals even better than we do, but this doesn’t mean they will care.
Updates inbound
In Mythos coverage today: Wall Street Journal Personal Tech columnist Nicole Nguyen (4/19) warns that “You’re about to see a lot of critical software updates. Don’t ignore them.” That’s a nod to the cybersecurity sprint happening at companies partnering with Anthropic right now; they’re using Mythos to find vulnerabilities before bad guys gain access to similar capabilities.
“It’s OK to feel helpless,” Nguyen says, since whether your data stays safe is mostly up to the companies holding it. But she gives us the usual cyber-hygiene tips: install those security updates right away, use a password manager, enable two-factor authentication, and agree on a family code word to guard against deepfake call scams.
Mostly, I share this piece for the quote from Dave Lewis of password manager company 1Password, who demonstrates the correct mindset about whether Mythos is being overhyped:
Whether or not Mythos is a hacker superweapon really is immaterial to the conversation. If it’s not this model, it’ll be another one in five minutes.
Data center perspective
There’s a popular narrative out there that AI is a “bubble” and that it’s currently popping. A factoid fueling this narrative is that up to half of all recently announced data center projects have been cancelled or postponed. I mostly believe the factoid, and Axios reports today on an especially large project hitting the rocks, but I caution up front that I don’t think this means what bubble-wishers want it to mean.
The project in question is Fermi America — a Trump-branded AI data center megaproject co-founded by former energy secretary, former Texas governor, and former presidential candidate Rick Perry.
Formally called the “President Donald Trump Advanced Energy and Intelligence Campus,” it has so far failed to acquire and install the cooling systems (demand for these is through the roof) it needs to attract an anchor tenant. Its stock is down 75% over six months, and its CEO departed Friday.
Two reasons I don’t think stories like this are a sign of an AI industry in trouble:
First, projects are failing in large part because so many other projects are sucking up the materials and construction talent needed to build them, not because demand for the data centers is lacking. I’m not seeing any stories at all about new data centers sitting around with idle chips.
Second, companies have been observed floating the same project in multiple locations simultaneously, never intending for all of those plans to go through. It’s a way of hedging their bets against problems with grid connections or local activists.
Future Caucus
In better news, the Associated Press reported (4/19) on a loose cross-partisan network of state lawmakers with tech-industry backgrounds pushing AI regulation against White House resistance, some through the Future Caucus AI task force. Three are profiled: Utah’s Doug Fiefia (ex-Google), Vermont’s Monique Priestley, and New York’s Alex Bores (ex-Palantir).
What kind of laws are they trying to pass? Child safety protections are the most common goal. Also popular: “forcing chatbots to remind users they are not human and barring the use of AI to make nonconsensual pornography.”
Regulations passed in California and New York include steps to prevent “the AI-controlled meltdown of nuclear plants” and “AI models refusing to heed human direction.”
The analyses and opinions expressed on AI StopWatch reflect the views of the individual analysts and the sources they cover, and should not be taken as official positions of the Machine Intelligence Research Institute.


