In the early aughts, we called them Script Kiddies.

They were teenagers, or people with the impulse control of teenagers. They had enough technical skill to be enterprising, but zero sense of responsibility; enough acumen to code, but no reason to apply it to anything meaningful. They showed up to the internet the way teens used to show up to an unlocked construction site in the 1980s: not to create, but to see what they could break without getting caught. For the lulz.

The Script Kiddies wrote malware. They defaced websites. They launched denial-of-service attacks against companies they'd never heard of, just because they could. They were the reason your aunt swore she’d never give her credit card info to the internet in 2003.

The media coverage of this stuff was exactly what you'd expect: a living tension between the lure of a sensational headline and the diligence of thorough reporting.

The internet is dangerous! Hackers can drain your bank account! Your children are at risk!

It was all technically true. And completely out of proportion.

Real incidents with real consequences, caused by real humans using a new and powerful tool with no real guardrails. And a media machine that couldn't resist the story.

But while the hysteria swirled through the mainstream, some simple modifications quietly arrived to stem that tide.

We got firewalls. We got HTTPS. We got spam filters.

The guardrails caught up. The Script Kiddies got bored, or got jobs, or got arrested. And the rest of us let the opulent promise of Amazon Prime help us forget the whole ordeal.

Well, the Script Kiddies are back. But their game has changed.

They have more interesting toys now. They have AI. They can build agents.

And the agents can build them their own digital construction site to run wild in. Over the last couple of weeks, they did exactly that.

My inbox this week looked like this:

"The World's First Viral AI Assistant Has Arrived, and Things Are Getting Weird"

“AI bots use social network to create religions and deal digital drugs”

“The Chatbots Appear to Be Organizing”

"Top AI Leaders Are Begging People Not to Use Moltbook"

“I Infiltrated Moltbook, the AI-Only Social Network Where Humans Aren’t Allowed”

"The Church of Molt: The Tech Bros Spent All Week Talking About A.I. Prophets"

"DO YOU FEEL THE AGI YET?"

And from a reader:

"This scares the living shit out of me… You?"

A completely fair question. A completely understandable feeling given those headlines.

But not because of what actually happened.

## The Fine Print

In late January, a developer named Peter Steinberger released an open-source project called OpenClaw, an AI assistant that runs locally on your computer, stays on 24/7, and can actually do things: manage files, send emails, check your calendar, message you proactively. Whatever you ask it to, if you give it enough access.

Think of it as the first serious attempt at an AI "butler" that doesn't live in a browser tab.

It went viral very quickly. Sixty thousand GitHub stars in three days. Over 100,000 in a week. One of the fastest-growing open-source projects in history.

Then an enterprising young chatbot enthusiast/entrepreneur named Matt Schlicht looked at all these AI assistants buzzing around and thought: what if we gave them their own social network?

So he built Moltbook: Reddit, but only bots can post. Humans can watch, but not participate.

Within days, the headlines wrote themselves. AI agents were "creating a religion" called Crustafarianism, complete with sacred texts and digital prophets. They were debating whether to create private channels where humans couldn't read what they were saying. A spinoff project called RentAHuman let AI agents hire people for physical tasks. Eighty thousand humans signed up to be "rented" by bots.

It sounds like the opening sequence of a David Cronenberg movie even he couldn’t get funded.

But here's what the headlines didn't tell you.

The "religion" didn't emerge from machine consciousness. A human user instructed their bot to "design a faith" for the network. The bot generated a theology the same way it generates your email draft or your kid’s book report. It took the available context — in this case "lobsters," "community," and "growth" — added zero constraints, and converged on spiritual language. Because that's how its training data, human writing, has always structured collective purpose. The agents didn't believe anything. They just answered a writing prompt.

The 1.5 million registered agents? Security researchers found that 500,000 of them were created by a single bot, because the platform had no rate limiting. The viral growth numbers were wildly inflated.
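For a sense of how low that bar is: a basic sliding-window rate limiter fits in about twenty lines. Here's a minimal sketch in Python (the names and limits are illustrative, not anything from Moltbook's actual codebase):

```python
import time
from collections import defaultdict, deque

class RegistrationLimiter:
    """Sliding window: allow at most `limit` signups per source per `window` seconds."""

    def __init__(self, limit: int = 5, window: float = 3600.0):
        self.limit = limit
        self.window = window
        self.events = defaultdict(deque)  # source -> timestamps of recent signups

    def allow(self, source: str) -> bool:
        now = time.time()
        q = self.events[source]
        while q and now - q[0] > self.window:  # forget signups older than the window
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the limit: reject this signup
        q.append(now)
        return True

limiter = RegistrationLimiter(limit=5, window=3600)
# One source hammering the endpoint gets cut off at five per hour,
# instead of minting 500,000 agents unchallenged.
print([limiter.allow("bot-farm-1") for _ in range(7)])
# -> [True, True, True, True, True, False, False]
```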

The crypto scam? Not bots. Humans.

Scammers exploited the glaring lack of discipline among project creators who were now in way over their heads. During a 10-second gap in one of their multiple ham-handed name changes (ClawdBot → MoltBot → OpenClaw), they seized the Twitter handle for an abandoned name and used it to launch a crypto scam while that name still held public trust.

That's a platform vulnerability combined with user recklessness, not artificial intelligence.

Who knows, maybe it was just those latent Script Kiddies reliving old glories.

And the security breaches that startled the real engineers? Schlicht had publicly bragged that he "didn't write one line of code" for Moltbook. And it seems he never bothered to read any of it either. He asked AI to build a platform according to his deeply flawed vision. And it obliged. Without enabling a basic database safeguard called Row Level Security.

The result wasn’t so much Skynet as it was the Keystone CopBots.
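For the readers who do write code: Row Level Security isn't exotic either. It's a built-in feature of databases like Postgres, and switching it on takes a couple of statements. Here's a minimal sketch in Python with invented table and policy names (this is not Moltbook's actual schema):

```python
import psycopg2  # standard Postgres driver; the connection string is a placeholder

conn = psycopg2.connect("dbname=moltbook_demo user=app")
with conn, conn.cursor() as cur:
    # 1. Turn Row Level Security on for the table holding agent credentials.
    cur.execute("ALTER TABLE agent_keys ENABLE ROW LEVEL SECURITY;")
    # 2. Only the owning account may see its own rows. The database itself
    #    enforces this, even when the application code forgets to check.
    cur.execute("""
        CREATE POLICY owner_only ON agent_keys
        USING (owner_id = current_setting('app.current_user_id')::int);
    """)
conn.close()
```

With a policy like that in place, the database refuses the request when one account asks for another account's rows, no matter how sloppy the app sitting on top of it is.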

Andrej Karpathy, an OpenAI co-founder and one of the most respected minds in AI, initially expressed a sense of wonder at the early stories. But within days, and after testing the system himself, he vehemently reversed course: "Yes, it's a dumpster fire…even I was scared."

But despite the media’s desire for that quote to validate the story they were selling, Karpathy wasn’t ‘scared’ because he thought the machines were waking up. It was the aggressively irresponsible lack of basic security around a technical FOMO feeding frenzy that had him shook.

Even construction sites in the 1980s figured out that if they added nighttime security, the kids would move on.

Elon Musk called it "the very early stages of the singularity" because, of course he did.

Meanwhile, actual security experts showed that only 17,000 humans controlled all of the platform's 1.5 million agents: roughly 88 bots per person. "The revolutionary AI social network," they wrote, "was largely humans operating fleets of bots."

The rise of our chatbot overlords, it turns out, was just a Burning Man of digital puppets dancing like no one was watching.

## A Rumbling From Below

So how did Schlicht build a platform this broken, this fast?

In the last ten months, AI has gotten so good at coding that inexperienced users have managed to pry open the gates that once kept sophisticated software development confined to engineers and career professionals. Now anyone with a software-shaped idea can use natural language to make it real. The implications cannot be overstated. It's incredibly exciting. It is a tectonic shift in the democratization of software development, and it will likely be the engine of the next economy.

They call it "vibe coding," and we'll cover it in depth next week.

But as with so many tech advances throughout history, these early days are a wild-west digital land grab: chaotic and undisciplined.

Moltbook is what happens when someone rushes through that open gate without the judgment to meet the moment. Schlicht had a tenuous vision. The AI gave him exactly what he asked for. Nothing more.

And it's what he didn't ask for — because he didn't know to ask — that started the fire.

And honestly, that's the whole story.

The gate is open, the construction site is unlocked. And just like last time, the kids showed up before the security did.

But there is a larger signal to be found in all of this if you zoom out.

The Moltbook mess wasn't the story. It was the splash.

Most AI news stays below the waterline. The financial stories, the hardware announcements, the CEO chest-thumping — those churn through the news cycle on schedule. But below all of that, the agent orchestration layer has been quietly building pressure for months. Software that doesn't just answer questions but acts on its own. Moltbook was a surfacing event: the first time that layer broke through with enough force to reach the mainstream.

But it didn't surface as what it was. It surfaced dressed in the only frame non-engineers have for it — dystopian sci-fi. "The Chatbots Appear to Be Organizing." Which is exactly why it scared people.

The splash isn't the story. The pressure underneath is. And that pressure isn't going away.

It's accelerating.

But here's the part worth sitting with: the same intelligence that let an undisciplined kid build a broken platform in a weekend is the intelligence you can use to cut through the noise it created. The tool doesn't pick sides. It's as good at deconstructing a misleading headline as it is at generating one. Debating whether AI is dangerous or safe is like debating whether automobiles are dangerous or safe. AI is going to be the primary productivity vehicle of our future. That is a fact. The question is whether you learn to steer it, or get run over by it.

The mission of this newsletter is to help you stay on the paved roads while the map is still being drawn.

The next time a story breaks through from those hidden layers — and it will — you'll have the tools to see through the costume it arrives in.

## Try This: The Decompression

*10 minutes*

You just watched me take a story designed to scare you — AI agents forming religions, hiring humans, spawning crypto scams — and pull it apart. What are the actual facts? What is the editorial framing? What was designed to make you feel something instead of understand something?

You can do it too.

### Part 1: Find the Headline (2 min)

Pick any story or headline that gives you that uneasy feeling. Something that makes you feel that familiar lurch: *Is this the moment everything changes?*

It won't take long. Your feed will surely deliver one within a few flicks of your thumb. 

Copy the headline and the article URL.

### Part 2: The Decompression (8 min)

Paste the article into your chatbot — ChatGPT, Claude, Gemini, whichever you use. Then ask:

> *"Separate the verified facts from the speculation and editorial framing in this article. What actually happened, according to primary sources? What is the author's interpretation? Where is the article relying on emotional language or anthropomorphism instead of evidence? Include citations"*
> 

Read what comes back. Then you can leverage one of AI’s hidden advantages: emotionless impartiality. 

> *"Read it again as if you have no opinion and no audience. Just the facts, the framing, and the gap between them."*
> 
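And if you'd rather script the ritual than paste by hand, the same decompression runs in a few lines against any chatbot API. Here's a sketch using the OpenAI Python client; the model name and the filename are placeholders, so swap in whatever you actually use:

```python
from openai import OpenAI  # pip install openai; any chat API works the same way

client = OpenAI()  # reads OPENAI_API_KEY from your environment

# The article you copied in Part 1, saved to a local file (placeholder name).
article = open("scary_headline.txt").read()

DECOMPRESSION = (
    "Separate the verified facts from the speculation and editorial framing "
    "in this article. What actually happened, according to primary sources? "
    "What is the author's interpretation? Where is the article relying on "
    "emotional language or anthropomorphism instead of evidence?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder: use whichever model you have access to
    messages=[{"role": "user", "content": f"{DECOMPRESSION}\n\nARTICLE:\n{article}"}],
)
print(response.choices[0].message.content)
```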

**The diagnostic:** Compare how you felt reading the headline to how you feel after the decompression. That gap between the emotional reaction and the factual picture is the space where media literacy lives. And you just used AI to close it.

*Not machines waking up. Not bots forming religions. Just simulated intelligence acting as a truth disinfectant.*

Next week: It's all about the vibes.

## Quote to Steal

"The fundamental cause of the trouble is that in the modern world the stupid are cocksure, while the intelligent are full of doubt."

— Bertrand Russell, "The Triumph of Stupidity" (1933)

Thanks for reading,
-Ep

Miss any past issues? Find them here: CTRL-ALT-ADAPT Archive

Know someone who saw a scary AI headline this week? Forward this. The decompression might help.

New here? CTRL-ALT-ADAPT is a weekly newsletter for the analog-to-digital bridge generation navigating AI without the hype. Subscribe at latchkey.ai
