Dispatches from Mitch
Roundtable
The AP reported (4/16) on an AI roundtable of the House Oversight subcommittee. On display were bipartisan anxieties about the technology, including the threat of human extinction.
Rep. Eli Crane (R-Ariz.), a former Navy SEAL:
I recognize AI is not going anywhere. That being said, does anyone on this panel feel or believe, in any way, that as we are going down the road in this AI race, we might be simultaneously engineering our own destruction?
Others discussed nudification apps, the strain on natural resources, fears of morally squeamish AIs in the military, and the cybersecurity implications of Anthropic’s unreleased Mythos model.
Rep. Maxwell Frost (D-Fla.), currently the youngest member of Congress, said, “I don’t have faith in this institution to actually put the common sense guardrails in place. And then we fast forward ten years, and the house is on fire.”
Mythos, Mythos, everywhere
I thought I had grabbed all of today’s Claude Mythos stories for myself, but from what you’ll see in Donald and Beck’s dispatches, Mythos, and what it represents, are now haunting the whole economic and geopolitical scene.
That took a little while! Last week, when the model was first announced (and the week before that, when news of the model first leaked), folks in our corner of Twitter dryly observed the scarcity and diminutive positioning of Mythos headlines in the mainstream media.
You could read such tweets as evidence that, like many people, AI experts overestimate the importance of their field. But I think it’s more the case that, with AI already being so capable and plugged into so many things, models that advance the frontier can potentially impact (seemingly) everything, everywhere, all at once. So when something like Mythos is revealed, the media does us a disservice by treating it as tech news, or AI news. It should be the news.
To be fair, Mythos wasn’t released straight to everyone’s desktops and phones. We haven’t actually seen what happens when tens of millions of coding agents and autonomous OpenClaw agents suddenly get a major upgrade at the same time. Agents were far fewer, and more niche, when the previous step change landed. That change, in late November, was the bump to Anthropic’s best public model, Claude Opus. That upgrade kicked off this whole phenomenon of non-coders finding they can make productive use of these agents.
(Anthropic did just update Opus from 4.6 to 4.7 yesterday, though, and it's a sizable bump if the company's benchmark reports are to be believed, if not in the same league as Mythos. It's certainly good at running a simulated vending machine, according to a popular independent benchmark. I'm still getting a feel for what it can do in my own usage, reading up on its other characteristics, and waiting for more field reports to come in. Watch this space.)
Anyway, with Mythos, we’ve actually progressed far enough in the news cycle for the counter-cycle to start to rival it. If the first stories were about the model’s frightening cybersecurity implications, then the next ones had to push back on that narrative, questioning whether the capabilities are overstated, or whether holding back the model simply reflects the company’s limited compute capacity, or whether their Project Glasswing program to shore up the world’s critical cyber infrastructure is just a marketing ploy ahead of Anthropic’s IPO. Or some combination of all three!
I think there could be a touch of truth to all those theories, but bankers, EU regulators, President Trump, and cybersecurity experts all seem pretty concerned.
Dispatch from Beck
Spoken too soon?
If you haven’t been following the saga between the Pentagon and Anthropic: the two have been publicly feuding since February, when Sec. of War Pete Hegseth declared Anthropic a Supply Chain Risk (SCR), a legal designation that bars the government from doing business with the designated company. Historically it’s a rarely used designation, meant to prevent wartime sabotage by foreign companies. But despite the designation, the two remain entangled: “The same night that the Trump administration said it would cut ties with Anthropic, its system was put to use to aid the bombing campaign in Iran” (the Washington Post reports). In the weeks since, multiple lawsuits have been filed. Anthropic’s preliminary success in a San Francisco court was stayed by a separate DC appeals court ruling that the Pentagon may continue to cut legal ties while the lawsuits are ongoing. Anthropic has since hired White House-friendly lobbyists to help normalize relations.
The latest news is that Anthropic CEO Dario Amodei is scheduled to meet with White House Chief of Staff Susie Wiles on Friday, the WSJ reports. Just six weeks after the SCR designation, Washington wants access to the latest in AI: Mythos. Anthropic’s latest model has significant cyber capabilities; it has found serious flaws in critical software infrastructure, including a full-control exploit in the Linux kernel, which underpins much of the internet. Mythos is currently released only to a small number of software companies (under Project Glasswing) so that these flaws can be patched. While Anthropic’s legal status with the government remains unclear, the Office of Management and Budget has told agencies to prepare for access to Mythos (Bloomberg).
Government access to Mythos is probably good for cyberdefense, but a worrying sign if you are, like me, concerned about the power a small handful of CEOs and politicians will accumulate. These specific bugs will be patched, but next year’s models will be more powerful and the stakes more dire, and I worry our government will keep failing to respond coherently.
Dispatches from Donald
AI Bill of Rights
Politico’s Gary Fineout and Kimberly Leonard reported (4/15) that Florida governor Ron DeSantis is pushing an “AI Bill of Rights.” GOP representatives in the state House resisted DeSantis in the past on this issue, citing Trump’s desire to leave AI regulations up to the federal government. DeSantis’s original “Citizen Bill of Rights for Artificial Intelligence” would limit the ways that both companies and private citizens could use AI (e.g. prohibiting deep fakes, restricting the use of AI to deny insurance claims) and regulate the circumstances under which data centers could be built.
The legislation is supported by Florida Senate Minority Leader Lori Berman, who is concerned about the impact of AI and the prospect of falling behind if the state government doesn’t act quickly.
Thinking about thinking
WSJ columnist Christopher Mims presents (4/17) a measured form of superintelligence skepticism: recent gains in AI capabilities have come, Mims argues, not from radical improvements in the models themselves but from the adaptations we have built around them:
Increasing amounts of fresh information from human professionals and search-engine scraping.
The use of tools, like “chain of thought” reasoning, traditional software, the capacity to write code, and even calculators.
Cross-model verification, or the use of multiple models to work on the same problem and thereby identify errors more easily.
Mims concludes that Claude and other LLMs are nothing to worry about because, while they may be growing more effective, they are not “reasoning the way humans do.” Left unmentioned is that, whether or not LLM “thinking” maps cleanly onto human thinking, LLMs still take actions to fulfill goals, and they may perform those actions in unexpected ways or pursue goals no one expected them to have.
Meta’s new challenger
Fox News’ Jesse Watters (4/17) writes about Muse Spark, the first model to be released from Meta Superintelligence Labs, which Mark Zuckerberg founded nine months ago. It is due to roll out across Facebook, Instagram, Meta’s AI glasses, and other Meta services. Muse Spark uses multiple autonomous subagents working in parallel to accomplish complex tasks, which the article likens to “how a capable human research team actually operates.”
Watters seems uninterested in the privacy concerns surrounding Meta’s AI glasses, which are sure to be compounded by the addition of AI. His article is all about the benefits, without any mention that everything the user sees and records will be made into training data for AI models like Muse Spark.
Banking on Mythos
The Guardian’s Kalyeena Makortoff reports (4/17) that British banks will receive access to Anthropic’s new AI model, Claude Mythos, by the end of next week. The U.S. treasury secretary, Scott Bessent, also met with bank leaders last week to talk about the new model, which has massive offensive and defensive capabilities in the domain of cybersecurity.
In connection to Mythos, the president of the European Central Bank, Christine Lagarde, said, “I don’t think there is a governance framework that is there to actually mind those things. We need to work on that.”
As Andrew Bailey, the governor of the Bank of England, said, Mythos “is a very serious challenge for all of us. It reminds us how fast the AI world moves.”
The analyses and opinions expressed on AI StopWatch reflect the views of the individual analysts and the sources they cover, and should not be taken as official positions of the Machine Intelligence Research Institute.





