Five Things: Jan 10, 2026
Claude Code, Boltz, Health ChatGPT, Grok for explicit material, Manus
Five things that happened/were publicized in the past week in the worlds of biosecurity and AI/tech:
The internet people discover the magic of Claude Code.
Boltz’s open-source model for designing biomolecules is making moves
Health ChatGPT officially available for some users
Grok generates (child) pornography
Meta partners with (buys?) Manus
1. Claude Codes
The people discover Claude Code. I’m not sure why it took the people of the internet 2-4 weeks to realize this, but Claude Code is mind-blowing. People are calling it AGI, the long-promised “artificial general intelligence”. Maybe it was thanks to Anthropic doubling everyone’s usage limits in the last week of 2025 (I know that’s what did it for me). But hell yes, we are coding. And by we, I mean Claude. There’s even an app for that. Coding is magical!
Dean Ball waxes retroactively lyrical, and follows up with some good reflections and advice. The best discussion comes via Kelsey Piper, who built this fantastic-looking game for elementary schoolers who need help practicing English phonics. She discusses the joys, but also the frustrations; she fears that she is becoming someone who yells at her servants.
Other examples of what Claude Code can do:
Sarah Constantin doing cancer bioinformatics
X user Avery who built literal magic (ok, VR) particles
Charles Dillon who built a really fancy interactive model for predicting how AI will affect the labor market
Ethan Mollick asked Claude to do something that will make money. It pulled off quite the impressive website.
Molly Cantillon has this essay on X about her ‘personal panopticon’ tower of coding agents; the essay very much sounds like LLM-slop, but I liked this part anyway, despite its horrifying implications:
An LLM already feels like that: a lossy compression of humanity speaking in one voice. When your whole life runs inside a Claude Code directory, you feel the pull toward the merge. The price is quiet but total. You trade away what is yours alone, the private texture of emotion, the right to be wrong, your jagged iconoclasm. Opt out and you fall behind. Take the tower early. Do not let it take you.
MIT Technology Review tries to throw some cold water on the hype, but it seems pretty clear that the world of software development, at least, will be changing a lot in 2026. Good timing for Sal Khan (of Khan Academy), who wrote an op-ed in the New York Times at the end of 2025 (thanks Stephen Turner for the gift link): A.I. Will Displace Workers at a Scale Many Don’t Realize.
2. Boltzing ahead, hopefully not too fast
Boltz, an open-source model for protein and small molecule design, announces (1) Boltz Lab, a new platform, (2) a PBC with a $28M seed round, and (3) a multi-year partnership with Pfizer.
It’s so nice to hear that
Boltz PBC, a Public Benefit Corporation to advance AI for biology through open science, [will] make it accessible to every scientist building a healthier future. We are not a therapeutics company. Our goal is to build a product that allows every scientist to go from a therapeutic hypothesis to a human-ready molecule without leaving their computer.
Of course, any AI model that can design molecules that fix cells can also design molecules that break them. No mention in their recent announcements of how Boltz thinks about that problem, if at all.
3. “will not replace doctors”
OpenAI just announced ChatGPT Health (not available to all users yet). According to their announcement, “over 230 million people globally ask health and wellness related questions on ChatGPT every week.” That sounds right, and OpenAI’s move to make this official, so to speak, makes sense. They’ve incorporated feedback from an impressive 260 physicians across 60 countries (600,000+ feedback instances), along with a custom evaluation framework (HealthBench) built around how clinicians actually judge quality rather than standardized exam questions.
The announcement also emphasized (three times, once with a typo) that ChatGPT Health is “designed to support, not replace, medical care.” LOL. Because of course nobody ever relies on the internet for medical advice, every human has always verified everything they read with a bona fide medical professional before implementing any health advice 🙃.
Getting serious, a recent story posted on Substack, “the role of AI in the death of my father,” really hit hard for me for painfully personal reasons. Just based on the details shared by Benjamin Riley, it sounds like his dad’s story would have ended the same way without AI; the trigger just would have been something else (a podcast, a blog, a sketchy bioRxiv article). But yeah. All in all, I see ChatGPT Health as a good thing. Some sticking points are (1) privacy and (2) sycophancy: how much is the model incentivized to give good advice vs. the answer that the user wants to hear? And of course, the looming threat of healthcare worker layoffs and scarier things.
4. Rule 34
Obviously, people use Grok for pornography at “thousands of images per hour.” Apparently, Grok will happily comply with users’ requests to make sexually explicit material, whether of famous people or even of children. (No, I have not independently verified this, nor do I use Grok; I first heard about this on Reddit but am totally unsurprised.)
Shockingly enough, Grok actually responded by turning off all image generation for unverified users! As usual, Transformer has more details on the story, and why it seems like governments are likely to do nothing about it.
Before this week’s talk of Grok in particular, Wired magazine published an investigation into AI sexbots. Thanks, journalism; what would we do without you?
5. Meta finds a match in Manus
The Singapore-based AI assistant Manus announces that it has now become part of Meta (Facebook). The attached blog announcement, of course, sounds very much like it was written by an LLM. Sigh. Apparently the Chinese govt is not so keen on this, and is looking to enforce some export controls, but unclear (to me) what power they have over either company.
What’s especially strange about this is that the internet vibes appear to be that Manus’ greatest value was as a wrapper for Claude by Anthropic, which is clearly Meta’s rival? Oh, the tangled web of AI company social dynamics.
In other news…
On AI doing things in the world:
Wired: AI Deepfakes Are Impersonating Pastors to Try to Scam Their Congregations (into giving them “donations”). A pastor is actually a good example of someone you might trust enough to send money to if they asked, but don’t know so well that you’d recognize a convincing deepfake. There are probably a lot of edge cases like this (old college friends, cousins, etc.) ripe for scamming.
Nice chart on how the major AI companies are moving towards trying to ‘own the stack’, instead of, for example, Nvidia making chips that other LLM companies use. I learned about this in history class!
X.ai raises twenty billion dollars to ~~murder more puppies~~ build more infrastructure.
On questions of AI safety:
UK’s AISI releases its first Frontier AI Report
A group of Very Smart People post a paper arguing for “Legal Alignment for Safe and Ethical AI.” Treat internal LLM workings like you would any complex system, with law codes. This is more of a ‘directional research program’ start than any real recommendations. I dunno…
AI-2027, which is now the AI Futures Project, has some updates on their forecasting models. Looks reasonable! And kinda scary!
Jeffrey Ding flags an interesting article on AI for Chinese researchers with his translation here.
The latest on open models (esp in China) from Nathan Lambert.
AI for biology and its risks:
A specialized model (from Rowan) to predict small-molecule permeability, an important step in drug development that usually has to rely on experimentation.
AI for cell signaling pathways: Somite.ai becomes Cellular Intelligence (with ~$60 mil) and publishes a white paper on their progress.
Folks at the Indian Institute of Science have a short white paper on the risks of AIxBio and what the Indian government can do about them.
Biosecurity generally:
STAT News: “We tried to be optimistic about the Trump administration. The NIH has lost its scientific integrity. So we left.” And Elizabeth Ginexi on feeling the shrinking of American science.
Really fascinating interview of Michael Lauer on the state of the NIH and science funding in general. A lot of ideas, from someone who really had an inside view.
Anything else:
Stephen Turner shares his tips for staying up-to-date on biotech (but it applies broadly to anything you may care to track); I guess I’m doing it right if I already followed basically all this advice! Long live RSS feed readers!