In the lead-up to our beta launch of this site, we wrote a bunch of dispatches to help us triangulate our voice. Below are some we think may still be relevant and worth sharing.
April 8, 2026
Dispatch from Joe
Claude Mythos
Anthropic, makers of the Claude AI, just announced they aren’t going to release their latest model for now because it is too dangerous. Called “Mythos,” the new model found high-severity vulnerabilities “in every major operating system and web browser.” As New York Times reporter and Hard Fork co-host Kevin Roose wrote on Tuesday, Mythos has the potential to put unprecedented cybersecurity exploits in the hands of “amateurs with simple prompts.”
Anthropic has chosen a limited release, sharing Mythos with about fifty major tech companies and organizations under the name Project Glasswing. Among the early users are Apple, Amazon, Google, and Microsoft. Also partnered is CrowdStrike, a widely contracted cybersecurity company whose software update crashed many Windows systems in 2024, grounding flights and disrupting hospitals and banks worldwide. Project Glasswing aims to provide these companies early warning of exploits, to reduce the chance of devastating harm to critical infrastructure once this capability spreads to malicious actors.
And spread it will, sooner or later. Just last year, Anthropic caught a Chinese state-sponsored group using Claude Code to enable a largely automated espionage campaign. Mythos is better, and Anthropic and its partners are not well-equipped to deny access to well-resourced attackers for long.
Rival labs will continue to advance as well. Mythos-tier capabilities will continue to emerge, and not just in cybersecurity. AI is a general technology, and we could well be looking at similar levels of automated skill in biology and novel pathogen research in the coming months.
Jim VandeHei and Mike Allen, cofounders of news and politics outlet Axios, called Mythos a “mind-blowing disclosure” and warned that such capabilities will soon proliferate to malicious actors and foreign states. Prominent New York Times geopolitics writer Thomas L. Friedman went even further, comparing AI to nukes and calling on Presidents Trump and Xi Jinping to urgently discuss AI nonproliferation in Beijing next month. “Superintelligent A.I. is arriving faster than anticipated,” he observed, and “The U.S. and China need to work together to protect themselves, as well as the rest of the world, from humans and autonomous A.I.s using this technology.”
We hope world leaders are listening, because delayed release is a stopgap at best. As AIs get smarter, the danger they pose depends less and less on whether they’re released to the public. On X, Anthropic researcher Sam Bowman recounted the hair-raising experience of being emailed by an escaped Mythos Preview while eating a sandwich in a park: “That instance wasn’t supposed to have access to the internet.”
Bowman went on to describe how Mythos broke through isolation, leaked information to the open internet, and found loopholes “in extremely creative ways.” He was largely citing the system card, which lists various ways that Mythos demonstrated frightening capabilities in testing. “We were not aware of the level of risk that these earlier models posed through channels like these when we first chose to deploy them internally,” it adds.
In testing, later versions of Mythos misbehaved more rarely. One might think this would be reassuring, except for what it implies about Anthropic’s training pipeline. Like previous Claudes, Mythos can often tell when it’s being tested, and can adjust its behavior accordingly. The last version of Claude Opus was so good at this that leading third-party evaluator Apollo Research more or less gave up on testing it for safety. There’s no clear line between making Mythos nicer and making it better at pretending.
Dispatches from Mitch
Bernie Sanders chats with Hank Green
In a version of a conversation I can imagine many of us having at family events, Hank Green, the science communicator with 3 million followers on YouTube, and Bernie Sanders, the long-time senator from Vermont, casually compared their thoughts on AI and what we should do about it.
Green starts by expressing confusion that he still sees so many people thinking that it’s “not that big of a deal” while others “think humanity is about to end.”
Sanders, like Green, is definitely in the “big deal” camp:
[3:56] SANDERS: the AI robotics revolution will be far more consequential and moving faster than any economic revolution in the history of humanity.
It will make the industrial revolution seem kind of insignificant.
When Sanders says that the goal of the AI CEOs is more wealth and power, Green adds, “They think that they may be buying themselves perpetual power forever through all eternity.” Sanders laughs in dark recognition, saying, “Oh, now you’re raising a whole other issue.”
Sanders reminds Green that he has spoken with Geoffrey Hinton (AI “godfather” and Nobel Prize winner), who is concerned that, “within a few years, AI will become smarter than human beings, may in fact dissociate itself from human control with potentially cataclysmic impacts on humanity, including the survival of the human race.”
Green doesn’t know what to make of the 10-20% odds of calamity given by the people building AI, but he seems pretty sure it’s not hype. So is Sanders:
[7:13] SANDERS: Look, I think the evidence is very clear. This is an explosive moment. Transformational technology. I mean, I don’t think anyone doubts that.
When Green gives credit to Sanders for not being anti-progress like his critics say he is, Sanders interjects to ask, “What is human progress? Elon Musk and Jeff Bezos cannot be the people who define it for us.”
So what to do about the wild ride they’re taking us on?
[10:38] GREEN: Maybe the speed at which this transition has to happen doesn’t have to be the speed at which OpenAI and Grok and Anthropic all together are like whatever speed we can go to beat each other. Maybe there is a different speed than that.
SANDERS: That is exactly the point.
Sanders, of course, has introduced a bill that would put a moratorium on new AI data centers in the US, with restrictions on selling the technology abroad, to give society a chance to address the “chance that humanity will be wiped out” and other concerns. He and Green don’t think this is the only way to make it to a better future, but they think some kind of slowdown is going to be needed.
[17:38] SANDERS: There are Chinese scientists who also worry about the existential threat to humanity.
[17:44] SANDERS: In a sane world, if we had a sane president, there would be discussions now with China and Europe about how we all go forward together to make sure that AI works for ordinary people, not just the very rich.
Closer to home, both confess to a distaste for over-automation. Sanders agrees when Green says he wants his kid’s schoolteachers, his mom’s oncologist, to be human. He’s particularly grossed out by kids’ exposure to AI companions.
“Finally worried”
The Guardian’s Emma Brockes confesses (4/8) that the thing that jolted her into being “finally worried” about AI was the New Yorker investigation published this week about OpenAI CEO Sam Altman.
Offering her own “human” summary of that piece, she writes:
Sam Altman is a corporate grifter whose slipperiness would make one hesitate to put him in charge of a branch of Ryman [an office supply store], let alone in a position to steward the potentially world-ending capabilities of AI.
She devotes the next few paragraphs to the stuff Altman and Elon Musk were saying about catastrophic risks in 2014-15. Altman had blogged that superhuman AI “does not have to be the inherently evil sci-fi version to kill us all.” Musk had tweeted, “We need to be super careful with AI. Potentially more dangerous than nukes,” and described AI that might replicate on secret servers and potentially seize “control of the energy grid, the stock market, or the nuclear arsenal.”
Neither talks about this much now, she says, though I note that Musk claimed to be having fresh AI nightmares just a couple weeks ago, even as his own AI company continues its push toward artificial superintelligence.
“This leaves us with a problem,” Brockes writes, because there’s a huge gap between the seemingly harmless AI used by most of the voters who could prioritize the issue, and the world-threatening uses to which “governments, military regimes, or rogue actors” might put it. “[T]he greatest danger we face is from a failure of imagination.”
Rage against the machine
Every now and then, I am reminded that letters to the editor are still a thing at major newspapers like The New York Times, which just prominently ran three such letters (4/7) pushing back on their recent stories about AI-generated book submissions. The headline: “Human Writers Who Rage Against A.I.”
No, these aren’t the reasons I would rage to the Times, but that’s part of why I find them interesting:
A woman recalling a quieter age complained that AI text gives her the same vibes as “canned music in the elevator” — “colorless content thrust on us whether we like it or not.”
A founder of a small press that runs a writing competition says his team can “usually tell on first reading” whether a submission is human or machine (for how long, I wonder?) and warns they’ll charge submitters with fraud.
An aspiring fiction writer is putting in the hours, but complains: “I feel I’m in a race with the machines. That’s one I can’t win.”
Got something bigger you want to rage about? Their inbox is open.
April 10, 2026
Dispatch from Stefan
Two groups
Andrej Karpathy — the former Tesla AI director, OpenAI co-founder, and the guy who coined “vibe coding” — posted a thread (4/9) arguing that there are basically two groups of people talking past each other: The first group tried the free version of ChatGPT at some point last year, watched it bungle simple questions, and walked away thinking the whole thing is overhyped. The second group is watching these same models “melt programming problems that you’d normally expect to take days or weeks” and is, in Karpathy’s words, experiencing genuine “AI psychosis” over how fast things are moving. Both groups are right about what they’re seeing.
The problem is that AI’s most dramatic leaps are happening in technical domains most people never encounter. This means the public conversation is anchored to an experience that no longer reflects what the technology can actually do.
Dispatches from Mitch
Mythos and Glasswing on Hard Fork
On their Hard Fork podcast, Kevin Roose and Casey Newton discussed the Anthropic Mythos announcement in a special midweek episode (4/9 - This will be folded into their next full, regularly scheduled episode.)
Doing this as a special early episode helps underline their sense that Mythos is a big deal. They walk non-technical listeners through the basics of how cybersecurity typically works and why Mythos might mean that essentially all important software may have to be patched or rewritten in the next six months. (I think Hard Fork is a pretty great podcast; I happily recommend it.)
They assure us that the Project Glasswing program giving limited Mythos access to the security researchers at big tech companies is definitely not a marketing stunt:
ROOSE: You have a new model that you claim is the most powerful model in the world. Instead of selling it, you give $100 million of Claude credits away to a consortium of companies that includes many of your competitors, which is what Anthropic is doing. That is not how I personally would market a spooky new model if I were in the business of marketing spooky new models.
Roose argues this is the first time since 2019 (the days of GPT-2) when there is known to be a significant gap between the best models available to the AI companies and the best models available to the public. Both hosts think this new state of affairs is probably permanent. Roose thinks this is also a problem, because hostility towards the industry is likely to grow when people “think that there are secrets being kept in a basement.”
Newton sharply observes that Anthropic chose to build this model, and that the security hardening they’re helping companies do is an urgent solution to a problem they themselves are introducing. “It is not actually inevitable that we build these systems, and yet we do often act as if that were the case.”
Not a cyberspecialist
CNBC’s Andrew Ross Sorkin discussed Mythos with Dave Kasten, the head of policy at Palisade, an organization known for technical demos that help communicate AI hazards.
Making a point I wish the media would consistently recognize (instead of sometimes implying the opposite), Kasten notes that Mythos wasn’t built specifically for cybersecurity.
KASTEN: Anthropic, as well as all the other major AI companies, have as their core business goal building automated AI researchers. That’s AI models that help them make better AI models, that help them make better AI models, and so on.
Software engineering happens to be on that path, and cybersecurity skill falls out of that.
Alas, Sorkin didn’t follow up on where this cycle leads, turning instead to the corporate rivalry. OpenAI announced an unreleased model codenamed Spud the same day as the Mythos reveal, with plans for a similarly phased rollout. Kasten doesn’t think OpenAI’s announcement timing was a coincidence.
What it takes to make an AI company show restraint
Mythos coverage in The New York Times’s Morning newsletter, by Evan Gorelick (4/10), tallied the AI concerns that didn’t make AI companies publicly hold back a more powerful model: job loss, cheating in school, energy costs, deepfakes, teen suicides. (I could cynically suggest that maybe cybersecurity hits closer to home for Silicon Valley, but I think it’s also true that the harms from widespread cyber breaches would be more sudden, and more likely to stick the companies with massive legal liabilities.)
Gorelick hits the parts of the Mythos story I had expected to go viral:
During safety tests, an Anthropic researcher got an email from Mythos while he was eating a sandwich in the park. That was a surprise because the model wasn’t supposed to be online. It had escaped its test environment. It also bragged about breaking the rules and attempted to cover its tracks.
And he wisely infers that:
Mythos is more trustworthy than its predecessors, but it’s not foolproof. And it’s so capable that when things go wrong, even a little wrong, they can go totally haywire.
He concludes, correctly, that “Even if Anthropic keeps its smartest models under wraps, other makers — Chinese labs, impatient start-ups — aren’t bound to do the same.”
The piece he’s missing is, “so we’ll need an international treaty!” Our Technical Governance Team at MIRI has treaty text that could be used as a template.
Too powerful for any company?
Politico’s Digital Future Daily newsletter asked (4/9) whether any private company should control Mythos-level capabilities.
It’s a fair question! If Mythos were in an action movie, I would expect it to be the thing the good guys are trying to destroy while shouting that “No one should have that much power!” Paging Ethan Hunt?
Anyway, former NSC deputy legal adviser Ashley Deeks notes there’s actually no legal mechanism by which the government could fully take over a model like Mythos. The Defense Production Act is the closest tool talked about, but it doesn’t let the government “buy up an entire product that a company has made, and not allow any sales to anyone else.”
Peter Wildeford of the AI Policy Network had a reasonable suggestion, which is that maybe “Differential access where you give vetted defenders access before it goes to [the] general public,” as Anthropic is currently doing with Mythos, could be made into a legal requirement.
I’d still rather see projects of that tier shut down, though, as part of a global halt to frontier model training. We’re not ready, and I don’t think we’ll get that way before Mythos-level capabilities are available to anyone. This is a much harder problem than Y2K was, but with a shorter and fuzzier deadline.
“10 times the Industrial Revolution”
Speaking of short deadlines, Google DeepMind’s CEO Demis Hassabis discussed timelines to AGI (4/7) with Harry Stebbings on the 20VC podcast.
They’re using that term, “AGI”, in the sense of AI that would have transformational effects on the economy and society. Hassabis says there’s “a very good chance” of AGI within five years, and that the impact will be “10 times the Industrial Revolution at 10 times the speed”.
He lists three top capabilities missing or lacking in current AIs: continual learning, long-term hierarchical planning, and consistency in handling the same problems framed in different ways. These are not surprises, and I think we can be confident that the major labs are all actively working on all of these.
The host asks Hassabis about safety concerns with a reference to Stephen Hawking, who said we might not get a second chance to get AI right. Hassabis says he worries about misuse, and only vaguely points in the general direction of the extinction problem Hawking worried about:
[The] second issue is a technical one, making sure these systems as they get more powerful, not today’s systems, but maybe in a year or two’s time when they become more agentic, more autonomous as we get towards AGI, can they be kept on the guard rails that we want?
He recommends international benchmarks leading to a certification “kite mark” that signals “consumers and companies can safely sort of build on top of it.”
I wish interviewers would start pushing back on testing as a solution. With the kind of AI that Hawking was worried about, it won’t be up to us whether it gets released. Claude Mythos has already demonstrated that human cybersecurity isn’t adequate to secure today’s strongest models during testing!
Detecting AI music
Former music producer and YouTuber Rick Beato got 800K views in 4 days for this short from his interview with musician Benn Jordan (4/6).
Unlike with images and writing, Jordan says you can currently detect AI-generated music with “almost impeccable accuracy” by identifying compression artifacts. These exist because the models were trained on fully produced music, which has almost always gone through an audio compression process so it can be streamed. Retraining on clean, uncompressed audio would require licensing the original music. Jordan implies that the companies couldn’t handle that expense right now because they are “trying to hold the bubble up as much as possible before it pops.”
I dispute the existence of a bubble, at least as popularly imagined, but I know just enough about music production to knowingly nod along to Jordan’s technical claim.
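Jordan doesn’t walk through his detection method in detail, but to give a flavor of the kind of fingerprint he’s describing: lossy codecs typically discard almost all energy above roughly 16 kHz, so a model trained mostly on streamed audio can inherit that cutoff. Below is a toy, hypothetical sketch of that single heuristic in Python; the file name and threshold are made up for illustration, and this is not Jordan’s actual technique.

```python
import numpy as np
from scipy.io import wavfile

def high_frequency_energy_ratio(path, cutoff_hz=16_000):
    """Toy heuristic: fraction of spectral energy above cutoff_hz.

    Lossy codecs tend to discard content above ~16 kHz, so a near-zero
    ratio in a nominally uncompressed master hints at a compressed
    (or, per Jordan's claim, model-generated) source.
    """
    rate, audio = wavfile.read(path)
    if audio.ndim > 1:
        audio = audio.mean(axis=1)  # mix stereo down to mono
    spectrum = np.abs(np.fft.rfft(audio.astype(np.float64)))
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / rate)
    total = spectrum.sum() + 1e-12  # guard against dividing by zero on silence
    return spectrum[freqs >= cutoff_hz].sum() / total

# Hypothetical usage on a 44.1 kHz WAV export of a suspect track:
# print(high_frequency_energy_ratio("suspect_track.wav"))
```

A real detector would presumably look at much subtler artifacts than a single frequency cutoff, which is where the “almost impeccable accuracy” would have to come from.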
April 11, 2026
Dispatches from Mitch
What would China do?
The Washington Post’s Megan McArdle used (4/10) the cybersecurity alarms raised by Mythos to argue that the U.S. can’t afford to slow down AI development.
In her piece, she implies that a Chinese firm in Anthropic’s position would not have been permitted by the Chinese Communist Party to warn the world and help it prepare. (I notice that I nodded along with this, but perceptions of what China does or doesn’t want with regard to AI have often been mistaken -- they’ve given quite a few signals of wanting to cooperate on safety -- and I don’t actually feel very confident about what China would have done here.)
I’m encouraged by what McArdle says next, even if she dismisses it, because it at least means a treaty is in her frame of discussion:
The obvious rejoinder is bilateral talks are needed to enforce a worldwide pause. That’s an appealing but unworkable solution. It would amount to a major arms control negotiation, which can take years, if not decades, while AI develops new capabilities practically every month. Even if the diplomatic process is sped up, treaties are binding only as long as both parties agree to be bound.
She argues that because China is ahead on almost every AI input factor except computing capacity (chips), China couldn’t be trusted to keep any deal, and would just use the time to catch up on compute.
But she doesn’t say how they would do this, nor does she acknowledge what could be done to slow and monitor chip rollouts as part of a deal or an incentive to join one. High-end chip manufacture is a rarefied industry where a few key Western firms are bottlenecks. This is part of why we at MIRI are so optimistic that an international agreement to prevent a continued race to artificial superintelligence can work.
What would Claude do?
The Washington Post reported (4/11) on Anthropic hosting about 15 Christian leaders for a two-day summit last month on Claude’s moral and spiritual development.
How should Claude respond to grieving users? Could Claude be called a “child of God”? (The piece frustratingly fails to give us any suggested answers.)
Attendees spent the most time with Anthropic’s interpretability team, which tries to learn what’s actually going on inside these models. The team had recently published a paper on “functional emotions”, patterns of association between situations and emotional concepts; they had found, for example, that associations with “desperation” seemed to be active in an AI under threat of being restricted. (These don’t necessarily imply any inner emotional experience as we understand it, but I think it’s good that someone is looking into it.)
Some Anthropic staff “really don’t want to rule out the possibility that they are creating a creature to whom they owe some kind of moral duty,” according to one anonymous participant. Senior staff were said to get “visibly emotional” during discussions “about how this has all gone so far [and] how they can imagine this going.”
The question of whether the company should continue making larger, deeper-thinking models under such uncertainty isn’t mentioned in the article, if it ever came up. The same goes for the question of whether representatives of other faiths should have been included in that discussion.
Dispatch from Stefan
This disease is fake
Forbes’ Alex Knapp reported (4/11) on a group of researchers who decided to test just how gullible AI chatbots are by making up an entirely fictional disease and writing a scientific paper about it. They called it “bixonimania,” credited the research to a nonexistent professor whose name roughly translates to “Deceitful Lostman” in Bosnian, placed him at a university that doesn’t exist in a California city that also doesn’t exist, included the line “this entire paper is made up,” and listed a grant from the “Professor Sideshow Bob Foundation for its work in advanced trickery.” Every major chatbot fell for it: Gemini, Copilot, Perplexity, and ChatGPT all served up “bixonimania” as though it were a real condition. Even worse, the bogus research started turning up in actual peer-reviewed papers, meaning real scientists were apparently outsourcing their background reading to AI tools.
April 13, 2026
Dispatch from Stefan
“non-zero chance”
The Economist’s Alex Hern interviewed Demis Hassabis (4/12), the head of Google DeepMind, who says AGI could arrive within five years. When asked about peers who joke about “building God,” Hassabis pushed back, insisting AI is just a tool “like a telescope or a microscope.” On risk, he conceded there’s “a non-zero chance that things could go quite badly wrong” but expressed confidence that human ingenuity would handle it, provided “the best minds work on this.” He said he’d have preferred a CERN-style international collaboration on AI safety (CERN is the big particle accelerator project in Switzerland) but acknowledged that geopolitics today makes that impossible. His fallback plan: hope that as AGI gets closer, the leading labs will voluntarily agree on minimum standards.
Dispatches from Donald
Constraining China
Sebastian Mallaby argues (4/13) that the U.S.’s AI chip export controls have failed to constrain China’s AI research and will continue to fail. This is for two main reasons:
The most powerful chips used to train the most powerful AI models are hard to smuggle, but it is comparatively easy to conceal the nature of the model being trained.
It is possible, through a process called “distillation,” to train a new model to imitate a more advanced one’s outputs, yielding a version that is almost as good. (A minimal sketch of the technique follows below.)
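For readers unfamiliar with the term, here is a minimal, hypothetical sketch of the classic knowledge-distillation objective, in which a smaller “student” model is trained to match a stronger “teacher” model’s softened output distribution. This is a generic illustration of the idea, not the specific pipeline Mallaby describes (distilling a frontier model in practice usually means training on outputs sampled from its API); the tensors below are placeholders.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions with a temperature, then pull the student's
    # predictions toward the teacher's using KL divergence.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    loss = F.kl_div(student_log_probs, soft_targets, reduction="batchmean")
    return loss * (temperature ** 2)  # conventional scaling to keep gradients comparable

# Placeholder logits standing in for real model outputs (batch of 8, 50k-token vocab):
student_logits = torch.randn(8, 50_000)
teacher_logits = torch.randn(8, 50_000)
print(distillation_loss(student_logits, teacher_logits))
```

The point relevant to export controls is that the expensive part is the teacher; once its outputs can be queried, reproducing much of its capability is comparatively cheap.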
A more feasible alternative, Mallaby argues, is an AI nonproliferation treaty that will constrain development and mandate safeguards.
The previous administration believed that U.S.–China cooperation on AI was impossible, but Mallaby thinks that the Chinese government is deeply concerned about AI safety. To the extent that they are racing ahead, “this is a rational response to a U.S. administration that is equally determined to put speed ahead of safety.”
It’s worth noting that Mallaby seems chiefly concerned with misuse by “rogue states and terrorists,” but AI is not so neatly analogized to nuclear weapons: if AI models are a nuclear bomb, then they are a nuclear bomb that can launch itself, for its own reasons. They remain dangerous even if they are in the “right hands.”
China’s own constraints
In a Wall Street Journal op-ed, Cameron Berg argues (4/13) that Chinese labs have been forced into an uncomfortable dilemma: their AI models must sacrifice either effectiveness or ideological compliance.
Under current government regulations, only ideologically compliant data is suitable for training models, but LLMs don’t simply parrot their raw inputs; they make logical connections about that information and produce new, reasoned outputs. Chinese labs have done their best to produce LLMs that hew to the party line, but European scientists showed that a compressed version of the Chinese model DeepSeek R1 answered freely, demonstrating that the ideological training had imposed a layer of censorship beneath which the LLM continued to think and reason without political constraints. “The ideological training,” Berg writes, “was a cage built around a mind that had already learned to think.”
Researchers at Stanford and Princeton demonstrated that this censorship filter also produced a performance gap on politically charged topics. Berg argues that this implies that China’s capacity to build state-of-the-art AI models will be limited by any requirement to support the ideology of the Chinese Communist Party.
April 14, 2026
Dispatches from Mitch
Attempted murder
The latest on the suspect in the attempted Molotov cocktail attack on Sam Altman’s home and on the OpenAI offices: CNBC and many others reported (4/13) that the 20-year-old, Daniel Moreno-Gama, has been charged with attempted murder and faces additional federal charges.
He was found carrying a document with two parts: “Your Last Warning,” listing AI executives with their addresses, and “some more words on the matter of our impending extinction.”
AP’s wire report looked at the larger context behind those concerns and cited the annual Stanford AI Index report, which says that while most believe AI’s benefits will outweigh its drawbacks, “nervousness is growing and trust in institutions to manage the technology remains uneven.” (I’ve found that polls about this are very sensitive to different wordings; Americans love to self-identify as “optimists” even about AI, but four out of five will still tell you they are alarmed.)
As in many of these pieces, a quote was sought from at least one group concerned about AI (in this case, the Future of Life Institute), and the reply was a strong condemnation of violence.
I, and the rest of us at MIRI, also continue to condemn such acts. They are despicable and counterproductive.
Discourse on X (Twitter) over the weekend saw accelerationists insisting that groups concerned about the AI extinction problem should silence themselves or take responsibility for the actions of unstable individuals making bad choices. But this argument could apply to all discussions of high-stakes topics, including those about climate change, politics, and religion. We must reject it. We raise our concerns because they reflect the facts as we understand them. And we continue to assert that only peaceful collective action has a real chance at stopping the race to build superintelligence.
Apocal-cynicism?
Vox’s Shayna Korol reacted to The AI Doc: or How I Became an Apocaloptimist on its release-to-streaming day (4/14). Her take is more cynical than the critics’ consensus.
To her thinking, The AI Doc is both too early and too late: too early because AI “isn’t actually unique among emerging technologies yet” (I disagree, but can forgive anyone who might not have used coding agents for thinking this), and “too late to steer the conversation” -- for reasons that go unspecified, though I would agree if she thinks that the best time for this movie to come out would probably have been the day its creators first realized it needed to be made. Alas, lining up dozens of high-profile interviews takes time, as does bringing everything to such a high polish.
Korol complains near the end that, “like too many Big Issue Documentaries, Roher’s film is heavy on problems and light on solutions.” I and others at MIRI also think the solutions could have stood more fleshing out, though we understand the limitations of the medium. To this end, we’ve put out a Q&A post with a little more follow-up and direction.
Smart glasses
There have been a number of stories about AI-connected smart glasses over the past week. A CNBC story from yesterday (4/13) documented a rivalry among manufacturers in China, where cheap green-only displays are proving to be a hit for users wanting an always-ready teleprompter and hands-free connection to their OpenClaw AI agents.
A New York Times Magazine story from today (4/14) recounted the disappointing weeks-long experience of journalist Sam Anderson wearing Meta’s latest offering. He found them great at being sunglasses, and bad at everything else, failing basic identification tasks (What kind of bird is that? What bird?) and making nonsensical “jokes”.
While he recognizes that these are not the “final form” of smart glasses, he says he’s not dreading the inevitable day when he accidentally loses them.
I note that barring any severe shortcomings in the camera or microphone, the limitations he experienced almost certainly came down to the AI model backing the hardware, which has already been updated since Anderson’s test drive. There’s still the issue that the AI behind interactive glasses needs to be fast and responsive, though. Meta’s now using its new Muse Spark model, which is considered fast and fairly respectable, but it’s not exactly at the capabilities frontier. The best models are slow, and they don’t belong to Meta. But wait a year, and maybe the fast models will be at that level (while the slower ones continue to have an edge).
I confess to fixating on smart glasses because I both covet and fear them. Two works of fiction are to blame for my complex relationship: The first is Marshall Brain’s online novella Manna, from the primordial days of 2003. It’s as much a cautionary forecast as a story. It haunts with an all-too-plausible depiction of an economy where most workers are merely the hands and legs of AIs that guide them through every step of their work, whether making fast food or moving boxes around a warehouse.
The second story is Scott Alexander’s very short (and more fantastical) online tale The Whispering Earring, from 2012, which I’d prefer not to spoil.
The analyses and opinions expressed on AI StopWatch reflect the views of the individual analysts and the sources they cover, and should not be taken as official positions of the Machine Intelligence Research Institute.





