Dispatches from Mitch
99 percent of all species
It remains a bright spot for Earth that the extinction problem is a non-partisan issue — one even Bernie Sanders and Steve Bannon can agree on. AI worries on the left seem to get more media coverage, but the concern I see on the right runs just as deep.
Joe Allen, co-host of Bannon’s War Room, has consistently sounded his own alarms. He says:
What happens if you create a superhuman artificial intelligence that cannot be controlled? One of the more sane arguments I would say is to simply pause this race.
That’s from an interview he posted yesterday with the Executive Director of PauseAI US, Holly Elmore.
If you’re new to the PauseAI platform (my colleague Joe shared more about them yesterday), they’re one of several groups looking to build a big-tent coalition to stop the AI race, globally, through the democratic process.
Her pitch is that whatever your concerns about AI:
A pause is the next right step. It’s the next right step for any correct solution. For any solution, we need time where we’re not making the problem worse.
Elmore used to be an evolutionary biologist, which lets her land this talking point better than I can:
And 99 percent of all species that have ever lived are extinct now. And that’s the normal thing that happens. And a lot of times you can see in the fossil record what happens when one species gains something like eyes that are, you know, they become better predators. They just wipe out a lot of species. There’s nothing, there’s no natural law that says that we cannot go extinct.
Asked to provide some positivity, she cites precedent:
I think we’re about: the world is really good and we want to protect it. And we think that there is a way to protect it. We’re not talking about anything that hasn’t happened before. We have nuclear nonproliferation treaties. We have New START treaties. We just want this for AI. And then we can enjoy whatever benefits are safe from the AI. That’s really the best of all possible worlds.
Dispatches from Joe
“Somebody wants me!”
AI-enabled fraud is on the rise, according to two articles that discuss the systematic targeting of seniors (4/19) and job seekers (4/21). The FTC tracked $2.4 billion in senior scams in 2024, a more than 25% increase, and those are just the reported crimes. Meanwhile, Lloyds Banking Group reported a 237% rise in job scams last year. The Guardian’s Victoria Turk describes almost falling prey to an AI-enhanced recruitment scam, and relates what it feels like from the inside. She quotes Keith Rosser of scam-reporting nonprofit JobsAware: “A lot of people feel as if they’ve been found, almost – ‘Somebody wants me!’”
The rise in AI-assisted fraud should surprise no one who’s been paying attention to capabilities improvements in the last few years. But for me, it’s kind of personal. For over a year now, I’ve been trying and failing to extract a family member from multiple relentless pig butchering scams, which marinate their victims in patient attention for weeks or months before the investment fraud begins. The frequency of text messages and just-slightly-off short videos I’ve witnessed can only be enabled by AI. If you’ve never had a loved one threaten to cut ties with you after you warned them the attractive crypto-millionaire long-distance relationship they keep gushing about is a quilt of red flags, I lack the words to describe the heartbreak and I hope you never find out. Options for families in a situation like this are thin on the ground.
On that note, I remind everyone that OpenAI investor (and anti-regulation super PAC funder) Andreessen Horowitz has poured millions into deepfake bot farms and AI companion companies with names like “Botify AI” and “Ex-Human”.
The Good, the Bad, and the Medicine
CNN’s Michal Ruprecht surveys (4/21) doctors’ use of AI chatbots. It’s a mixed bag; AIs are reportedly helpful for literature review, patient notes, insurance letters, and even coming up with plausible diagnoses, all a boon for overworked doctors and trainees. In the best case, AIs can summarize medical literature and piece together a more complete view of a patient’s history than a doctor would otherwise have time to obtain.
But some doctors are uploading patient records to so-called “shadow AIs”, sometimes misleadingly marketed as “HIPAA compliant” (HIPAA is a federal law protecting patient information). I’d guess this refers to free-tier chatbots (including from mainstream providers), and others without the legal protections that come with a formal agreement with the hospital.
I’ve long assumed bad actors could grab my medical records with a modest effort. Most institutions just aren’t very secure. Still, this should serve as a reminder that even well-educated people will sometimes just paste things into random chatbots when it’s convenient, with all the legal and security headaches such behavior entails.
Leana Wen of the Washington Post profiles Doctronic (4/21), an AI firm piloting an automated prescription-filling pipeline in Utah. They seem reasonably cautious, starting with full review by physicians and then shifting to 10% sampling once the system has demonstrated success. Like much of AI, however, this use occupies a legal gray area; the FDA disclaimed jurisdiction and Doctronic currently operates with a waiver from Utah.
Dispatch from Stefan
Locked Out, Logging In Anyway
The New York Times’s Evan Gorelick reports on prisoners trying to access ChatGPT despite institutional bans (4/21).
Not directly, of course. No laptops, no access. But they’re using friends, family, and contraband phones.
One inmate spent about $10 (basically a week’s wages inside) trying to get help drafting a legal complaint about cancer screenings. He eventually gave up after a friend pushed back: “Are you really my friend, or are you just using me for A.I.?”
Another had his sister generate a full 17-page nonprofit plan and physically mail it to him. His explanation is pretty blunt: “I don’t know how to use a computer.”
Officially, AI is banned for security reasons that seem to vary depending on who you ask. At the same time, one profiled prisoner helps teach a financial literacy class to inmates, with AI-designed games sent by his sister. AI is both too dangerous to allow and a boon to rehabilitation.
The analyses and opinions expressed on AI StopWatch reflect the views of the individual analysts and the sources they cover, and should not be taken as official positions of the Machine Intelligence Research Institute.