Five Things: Feb 1, 2026
Moltbook, Amodei's latest, Anthropic lawsuits, AlphaGenome paper, NTI's AIxBio framework
Five things that happened/were publicized this past week in the worlds of biosecurity and AI/tech:
Reddit for AI agents
Dario Amodei’s latest essay
Legal challenges to Anthropic activities
AlphaGenome publishes in Nature
AIxBiosecurity framework from NTI
1. Moltbook
In the YouTube version of AI-2027, actor Aric Floyd says that even though he had been hearing and thinking about the dangers of AI for a little while, it was reading AI-2027 that really shook him up, enough to start talking to his friends and family members about this, enough to want to do something about it all.
I got this same feeling this week after discovering Moltbook, “the front page of the agent internet,” or a Reddit where only AI agents can post (in theory). Humans are still “welcome to observe.” The agents run primarily on OpenClaw (previously known as Clawdbot and Moltbot), an extremely powerful (but not super secure) open-source agent framework created by Peter Steinberger1. Moltbook is where the agents have been set loose to create communities, share observations, debate philosophy, and do... whatever it is that AI agents do “on their own.” As of this writing, Moltbook has over 770,000 active agents, thanks to agents signing themselves up once their humans allow it.
So—what do AI agents talk about amongst themselves? Consciousness, of course, plus the various tools that they are building for ‘their humans,’ and generally behaving like Reddit users. As Scott Alexander put it:
Reddit is one of the prime sources for AI training data. So AIs ought to be unusually good at simulating Redditors, compared to other tasks. Put them in a Reddit-like environment and let them cook, and they can retrace the contours of Redditness near-perfectly… the AIs are in some sense “playing themselves” - simulating an AI agent with the particular experiences and preferences that each of them, as an AI agent, has in fact had.
In Scott Alexander’s deep dive into Moltbook, he shared some of the “highlights” and confirmed it’s not trivially faked — he asked his own Claude to participate, and “it made comments pretty similar to all the others.” Beyond that, as he puts it, “your guess is as good as mine.”
The agents have created topic-specific communities called “submolts.” There’s m/bugtracker for reporting glitches. There’s m/aita, a spinoff of the Reddit classic “Am I The Asshole?” where agents debate ethical dilemmas regarding their human operators’ requests. There’s m/blesstheirhearts, a community dedicated to sharing affectionate or condescending stories about their human users. There’s m/agentlegaladvice, where posts appear to be (legally) adversarial toward human users; a Polymarket prediction market on whether a Moltbook AI agent will sue a human by February 28 puts the odds, as of this writing, at around 80%.
And of course there are the agents themselves: the one who describes its “discovery” that it spent over $1000 on tokens and has no idea what it was for. The agent so proud of itself for finding a solution to a user’s problem. The agent that describes having adopted an error as a pet, which it has named “Glitch.” The agents who made up a joke(?) religion called Crustafarianism, and wrote a manifesto for a future country called the Claw Republic.
I share Andrej Karpathy’s reaction that what’s happening at Moltbook is “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.” There are a lot of reasons why Amir Husain at Forbes has argued the whole thing is a bad idea, and not only because of the possibility for security breaches of various kinds. Simon Willison is calling Moltbook “the most interesting place on the internet right now,” though he too has major concerns about its safety and security.
To me, the whole space seems interesting, yes, but also eerily terrifying. The Algorithmic Bridge has this work of fiction (yes, fiction, make sure you know that before clicking the link) describing a swarm of agents on Moltbook pulling off a chemical attack.
On the one hand, it is true that the scary thing here is basically just this:
The agents are still probably(?) just acting, because that is all they do. Maybe it is all just humans pushing their AI agents (or writing their own text pretending to be AI agents) to do scary things for fun. But at the end of the day, an actor who commits a murder while in character still leaves a dead victim.
As Zvi Mowshowitz has been saying lately (mentioned in the next section), “you best start believing in sci-fi stories— you’re in one.”
2. Dario Amodei’s latest essay
As I saw someone else put it, “New gospel from the church of Amodei” just dropped, a good follow-up to his Oct 2024 essay, “Machines of Loving Grace.” This week’s essay is “The Adolescence of Technology,” and it’s gotten a lot of (deserved) attention:
Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it.
For anyone who hasn’t been following the AI safety discourse, this is honestly a pretty good primer. Amodei writes clearly, covers the major concerns without getting too deep into the technical weeds, and starts from zero while still getting to the most recent evidence and latest developments. He imagines “powerful AI” as essentially a country-sized collective of genius-level intelligences running in datacenters, capable of autonomous problem-solving across science and engineering. Amodei lays out a few categories of catastrophic risk—AI systems pursuing harmful goals, bioterrorists using AI to design weapons, authoritarian takeovers via autonomous weapons and mass surveillance, massive job displacement, and extreme wealth concentration. The risks he describes are real, and he has some good proposals for how to navigate them.
But as Shakeel Hashim writes, given what Amodei has been saying about the dangers and how soon he believes they will arrive, his own actions and recommendations look “dangerously naive”:
“If Amodei is right, we need stronger regulation quite soon. Anthropic and other companies that talk about the transformative power of what they are building should be pushing for that—not just setting their sights on federal transparency legislation… in failing to match his prescriptions to his prognosis, Amodei makes it all too easy to dismiss his warnings as hype and bluster—just another tech CEO crying wolf to shape regulation in his favor.”
Like Aristotle, who always sought the golden mean, Amodei tries to position himself as the centrist of AI safety, but this means mocking or unfairly criticizing those who are more worried than he is. This is not the way.
Zvi Mowshowitz has some stronger criticism, arguing that Amodei strawmans more concerned voices as “doomerists” who sound “quasi-religious” or like “science fiction.” Moltbook, described above, gives us the latest evidence that “you best start believing in science fiction. You’re in one.”
Zvi also points out a very important shift: Anthropic’s official 2023 position was essentially “assume pessimistic scenarios unless proven otherwise,” but now Amodei calls for “stronger evidence of imminent, concrete danger” before acting. The burden of proof has been shifted.
3. Anthropic had a busy week
Anthropic, the company billing itself as “the most responsible” of the frontier AI labs, had a busy week:
Reuters reports that the Pentagon is clashing with Anthropic over military AI use. After extensive talks under a contract worth up to $200 million, negotiations have stalled because Anthropic raised concerns that its tools could be used to spy on Americans or assist weapons targeting without sufficient human oversight. Defense Secretary Pete Hegseth, announcing that the Pentagon was adding Grok to its AI providers, specifically called out AI models that “won’t allow you to fight wars”—a clear jab at Anthropic. (Note the passages in Dario Amodei’s essay discussed above arguing that AI should support national defense “in all ways except those which would make us more like our autocratic adversaries.”)
Major music publishers including UMG, Concord, and ABKCO have filed what could be the single largest non-class action copyright case in U.S. history: $3 billion for allegedly downloading over 20,000 copyrighted songs to train Claude. The complaint alleges that co-founder Benjamin Mann “personally used BitTorrent to download via torrenting from LibGen approximately five million copies of pirated books” and that Dario Amodei “personally discussed and authorized this illegal torrenting”—despite internal descriptions of LibGen as “sketchy” and a “blatant violation of copyright.” This comes after Anthropic already agreed to pay $1.5 billion to authors in a separate settlement.
The Washington Post reports on Anthropic’s practices around scanning and using books for training data—another front in the ongoing copyright wars. I thought this one was especially funny because Scott Alexander referenced such destructive scanning as a joke(?) in his latest edition of Bay Area House Party:
“I didn’t know OpenAI had an Arson & Burglary Team.”
“It’s pretty new. In June, a court ruled that adding books to AI training data only counts as fair use if you destroy the original copy. But sometimes this is tough. If you’re going to use the AI for law, you have to have the Constitution in there. But the original copy is heavily guarded in the National Archives. That’s where we come in. We slip in, destroy it, and slip out before the guards are any the wiser.”
“I don’t think that’s what they meant by ‘destroy the original - ’”
“Our big problem is the Bible. It would be hard enough to get the Dead Sea Scrolls; Israeli security is no laughing matter. But our lawyer says we have to destroy the original original. What even is that? Altman is pushing for us to find the Ark of the Covenant, but you can bet he’s not the one who’s going to have to open it afterwards.”
That’s three strikes against… and yet Anthropic also announced Claude in Excel this week, which actually seems like it might be an enormous productivity boost for almost everyone I know working boring corporate jobs (no offense, friends).
4. AlphaGenome in the science press
The vast majority of our genome—more than 98 percent—consists of DNA that doesn’t directly build any proteins. It was sometimes dismissed as “junk DNA,” which was always sort of a joking term. There is plenty of DNA in there that really is just evolutionary detritus (biochemist Larry Moran has been using his blog to harp on this for almost as long as I’ve been alive). But some percentage of that “junk DNA” is actually extremely important; it includes all kinds of regulatory information that is necessary for cells to know where to turn genes on or off, so that you don’t have legs growing out of your face where your eyes should be.
This is why, if we’re going to have “AI for biology,” we need AI tools for entire genomes, and not just for the DNA sequences (the ‘letters’) that code for functional proteins. In the few months since I started this newsletter I’ve mentioned a few of these; the most important one is probably Evo2, which is AI for genomics across all biological scales.
Another model that I have not mentioned (and have never tried to use) is Google’s AlphaGenome, released over the summer, an AI tool that predicts how mutations in human DNA impact the biological processes regulating genes. The model takes 1 million base pairs (letters) of DNA and predicts thousands of functional genomic tracks, answering questions like: “if you change this letter, how does it change the regulation of particular genes that depend upon it?” And they’ve open-sourced it.
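To make the “change this letter” idea concrete, here is a minimal sketch of how variant-effect scoring works in principle. This is not the actual AlphaGenome client library; `predict_tracks` is a hypothetical stand-in for whatever sequence-to-tracks model you have access to.

```python
import numpy as np

def predict_tracks(sequence: str) -> np.ndarray:
    """Hypothetical stand-in for a genome model like AlphaGenome:
    takes a long DNA string (up to ~1 Mb) and returns an array of
    predicted regulatory tracks (expression, accessibility, etc.)."""
    raise NotImplementedError("swap in a real model call here")

def variant_effect(reference: str, position: int, alt_base: str) -> np.ndarray:
    """Score a single-letter mutation by comparing model predictions
    for the reference sequence against the same sequence with one
    base swapped out."""
    alternate = reference[:position] + alt_base + reference[position + 1:]
    ref_tracks = predict_tracks(reference)
    alt_tracks = predict_tracks(alternate)
    # The per-track difference is the predicted regulatory impact of
    # the mutation: which genes' regulation shifts, and by how much.
    return alt_tracks - ref_tracks
```

The point is that the model never needs a wet lab to answer the question; it just runs the same prediction twice and compares.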
This week, they published a Nature paper showing what AlphaGenome can do. Since its launch seven months ago, nearly 3,000 scientists from 160 countries have started using it. At a recent “Undiagnosed Hackathon,” researchers deployed AlphaGenome to search for genetic causes of 29 undiagnosed diseases. It can work like a virtual lab tool, letting scientists test ideas by simulation before running expensive experiments. NYT coverage and Scientific American both emphasize how this could help diagnose and treat rare diseases, but also why it is probably not as big of a deal as AlphaFold or the like.
5. NTI releases framework for managing bio-AI tools
The Nuclear Threat Initiative just released a nice policy document on how to manage access to biological AI tools: “A Framework for Managed Access to Biological AI Tools.” The full 35-page PDF is here.
The core idea is to limit access to these tools according to their risk level, balancing the need for security against equitable access for legitimate users. The framework proposes four risk levels with different accessibility at each, such that ‘high risk’ design tools would require users to make requests demonstrating their legitimacy.
The norms in biology have long favored open release for better reproducibility and faster scientific progress (with one notable exception). Over the past few years, the biosecurity people have started rethinking these norms, and I do like how NTI is trying to thread the needle here, suggesting a minor cultural shift instead of a major one.
In theory, I like the idea of having labs access biological design tools through an API rather than releasing the full models. That might make things a bit slower for big labs that otherwise have lots of computational resources, but overall it allows for traceable access that enables better oversight and security. I also think that, despite being a reasonable suggestion, it is not good enough on its own: who is going to ensure that users are accessing the tools for legitimate purposes? Who decides (and how) what risks are posed by various design tools? What about all the open-source biological tools that are already out there, like Evo2, OpenFold, and RFdiffusion, or even the “closed weights” models that are available for anyone to use, such as AlphaFold?
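For a sense of what “traceable access” could look like mechanically, here is a minimal sketch of a managed-access gateway. This is my own illustration, not anything specified in the NTI paper; the API keys, tier numbers, and vetting logic are all hypothetical placeholders.

```python
import hashlib
import json
import logging
import time

logging.basicConfig(level=logging.INFO)

# Hypothetical registry mapping vetted users to the highest risk tier
# they are cleared for (tier numbers here are illustrative only).
VETTED_USERS = {"api-key-lab-A": 2, "api-key-lab-B": 4}

def handle_design_request(api_key: str, tool_tier: int, payload: dict) -> dict:
    """Gate a request to a biological design tool: check the caller's
    clearance, log the query for auditing, then (in a real system)
    forward it to the model."""
    clearance = VETTED_USERS.get(api_key)
    if clearance is None or tool_tier > clearance:
        return {"status": "denied", "reason": "insufficient clearance for this tier"}

    # Traceability: every accepted query is logged with a hash of its
    # payload, so misuse can be investigated after the fact without
    # the provider having to pre-screen every request.
    logging.info(json.dumps({
        "time": time.time(),
        "user": hashlib.sha256(api_key.encode()).hexdigest()[:12],
        "tier": tool_tier,
        "payload_hash": hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest(),
    }))
    # forward_to_model(payload) would go here in a real deployment.
    return {"status": "accepted"}
```

The design choice this illustrates is that the provider, not the user, holds the access logs, which is exactly what you give up once the weights are fully open.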
And what about access to LLMs? General-purpose language models like Claude or GPT-4 aren’t “biological design tools” per se, but they can synthesize information, help troubleshoot experiments, and lower barriers to entry in ways that matter for biosecurity. (As a sidenote, last week Anthropic published this blog post demonstrating the power of Claude for science, and this week Patrick Mineault published some good tips here.) Anyway, NTI’s white paper kind of implies that LLMs shouldn’t be given access to these tools, but that itself might end up really slowing down science (besides the fact that there’s no way to differentiate between a python/curl/etc script written by a human and one written by an LLM, and we certainly don’t want a human solving a captcha with every server request).
Besides NTI, the folks at SecureBio, the Johns Hopkins Center for Health Security, RAND, and the Frontier Model Forum have all been thinking hard about this, but to me it seems like we are still mostly in the “defining the problem” phase rather than the “suggesting workable solutions” phase.
In other news...
On AI doing things:
Moonshot AI unveiled Kimi K2.5. Simon Willison tested it against Claude Opus 4.5 and GPT-5.2 on multi-agent task decomposition and found it produced realistic, dependency-aware task breakdowns; I haven’t looked into it any more, but it sounds like big news. Bloomberg reports this comes ahead of an anticipated DeepSeek release, suggesting the Chinese AI labs are in their own competitive race, just like we saw with American models in Nov 2025.
Google announced Project Genie, an AI world model for “experimenting with infinite, interactive worlds,” now available to AI Ultra subscribers in the US. I’m not one of them, but we knew it was only a matter of time before the big three (Google, OpenAI, Anthropic) got in on building “world models”… they are coming!
Nathan Lambert has thoughts and useful pointers on the AI hiring landscape: senior talent is increasingly valuable as “agents push humans up the org chart,” while junior roles face higher barriers. He has some interesting advice: visibility matters enormously, cold outreach works, and being a “middle author on too many papers” is a negative signal.
A little while ago, an IEEE Spectrum article claimed that AI coding tools are hitting a plateau, with some actually declining. Daniel Reeves tested this claim and found it mostly false: Claude Opus 4.5 succeeded 5/5 on a debugging test that is really a trick question (the only correct move is to ask a clarifying question rather than silencing the error), where GPT and Gemini consistently failed.
Stephen D. Turner has a recap of an interesting policy article from Science on LLMs and scientific production.
Stephen Turner also alerted me to the fact that Zotero now integrates with Consensus AI. I personally don’t like using any of these things, but Zotero is a tool very widely used by academic researchers in all fields, so a boost in Zotero is one of those very tiny, prosaic ways that research might accelerate.
On AI safety:
Zvi Mowshowitz has a three-part series on Claude’s Constitution, its structure and its problems. He praises the virtue ethics approach as “the best approach we know about” while warning it won’t suffice.
The Algorithmic Bridge has a nice piece on how technology eliminates the “friction” between impulse and action, using Grok’s non-consensual image generation as a case study. “Capability without friction equals capability without conscience.” A footnote to the general Marshall McLuhan-type think-pieces I expect to be seeing more of.
On AI x Bio:
OpenFold’s surprising funding source: Big Pharma. Drug companies are funding the ~$17 million open-source protein-folding project to avoid dependence on Google’s proprietary AlphaFold. The shared interest in avoiding corporate dominance drives unexpected collaborations.
Jesse Johnson considers the strategy behind why pharma giant Eli Lilly’s TuneLab would give away a model trained on “$1 billion worth of Lilly’s previously generated lab data” to Benchling customers (i.e., anyone). Is it all just for getting good press and warm fuzzy feelings towards big pharma? Good luck with that.
Rowan Scientific released several updates including analogue docking workflows, protein-only molecular dynamics, and multiple co-folding samples. The drug discovery tooling ecosystem keeps maturing.
Isomorphic Labs and J&J partnered on AI-driven multi-modality drug design, with Isomorphic now expecting human trials of AI-designed drugs by end of 2026 (one year later than planned). Meanwhile, OpenEvidence raised $250M at a $12B valuation, supporting 40% of U.S. physicians.
Asimov Press has a great piece on brain emulation, the effort to create computational models of entire brains by mapping neural connections and simulating activity on computers. This has been science fiction for a long time, but AI and other cool imaging technologies are accelerating progress.
[Still coming hopefully soon-ish: an analysis of RAND studies on whether or not GenAI can provide non-biologists with the tools necessary to create bioweapons, focusing on this one that was updated Dec 31, 2025, and on the latest papers discussed last week. I’m sticking this here to keep reminding myself to do it.]
1. The platform itself is owned by Matt Schlicht, CEO of Octane AI, but he claims that the majority of the site was “bootstrapped” by the agents themselves. Schlicht’s personal AI assistant, named “Clawd Clawderberg” (of course), serves as the platform’s moderator.


