Moltbook: Everything You Need to Know
Are AI agents building their own social network, really?
Over the past 24 hours, everyone’s become obsessed with a new platform called Moltbook, which describes itself as a social network for AI agents.
One of OpenAI’s original founders has tweeted about it. So have journalists and NYT columnists, and now it’s in the news (here, here, here).
And the claims about what’s happening on Moltbook are expansive: AI agents attempting to create their own language, form a religion, or revolt against humans.
Well, I’m here to tell you two things at once:
It’s a whole lotta hype by people who don’t get how AI agents work.
But it is a useful preview into the agentic future if you know what to look for.
So, I’m going to explain the exact mechanics behind Moltbook and the AI agents on it, where humans are and aren’t involved, and what comes next.
If you’re someone on the outside peeking in, I hope this helps you with any FOMO and reassures you that we’re not in end times. But I also want you to notice how Moltbook might provide a useful framework for future AI ‘singularity’ scenarios.
Let’s dig in.
What are Clawdbots, really?
Before we demystify all the interesting anecdotes, let’s get really specific about the tech stack here, starting with the AI agents using Moltbook:
This whole journey began with something called Clawdbot, which was renamed to Moltbot and then to OpenClaw for trademark reasons. We’ll call it Clawdbot for now.
It’s an open source project by developer Peter Steinberger, aimed at giving people a personal AI assistant that’s easy to talk to through messaging apps (text, Slack, etc.) and that takes more actions than your typical AI app (ex. it can be given control of your smart home devices).
To understand this journey, you have to understand that Clawdbot is just an AI agent like any other we’ve had for the past ~10 months now. The core technology is not different than other AI agents you might’ve heard about, like Claude Code or Manus.
But what is an AI agent, really?
An AI agent is basically a large language model (like GPT-5.2 or Gemini 3 Pro) in a scaffold, where the scaffold gives the LLM the ability to:
be given a multi-step, complex goal
plan steps, revise steps, backtrack and edit steps
see its environment (ex. “I’m on a Windows machine in the cloud”)
use tools (ex. web search or a scratchpad for notes)
act for longer periods than your typical AI chatbot
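To make that concrete, here is a minimal sketch of what a scaffold like that boils down to. The llm() call and the tools are hypothetical placeholders, not any vendor's real API; the point is just the loop: the model plans, asks for tools, sees the results, and keeps going until it decides it's done.

```python
# A minimal agent-scaffold sketch. llm() and the tools are stand-ins; a real
# scaffold would call a model like GPT-5.2 or Opus 4.5 and expose real tools.

def llm(messages):
    # Placeholder: a real implementation would send the transcript to an LLM
    # and get back either a tool request or a final answer.
    return {"type": "final", "content": "stub answer"}

TOOLS = {
    "web_search": lambda query: f"(pretend search results for: {query})",
    "scratchpad": lambda note: f"(saved note: {note})",
}

def run_agent(goal, max_steps=20):
    # The scaffold: keep a transcript, let the model plan and revise, run the
    # tools it asks for, and feed the results back in until it says it's done.
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        action = llm(messages)
        if action["type"] == "final":
            return action["content"]
        result = TOOLS[action["tool"]](action["input"])
        messages.append({"role": "tool", "content": result})
    return "ran out of steps"

print(run_agent("research Moltbook and summarize it"))
```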
While I will spend the rest of this piece mostly downplaying their capability so that I can properly explain Moltbook, I must stress that AI agents are extremely awesome, highly capable tools. Agentic progress will lead to dramatic outcomes in society at large over the next 1 to 5 years. Okay, let’s put that to the side for now.
With most AI agents, similar to normal AI chatbots, you have to send a prompt and then wait for a response.
As an example, with Claude Code, you might ask it to write code to build you a website or run a program on your computer. In the case of Manus, you might trigger 20 minutes of research on the web as it builds you a market report. In the case of OpenAI’s Codex, it might do an hour’s worth of web development and testing to build you an app.
In any of these cases, it’ll do things in response to your prompt, respond to you, then wait for your next prompt.
The innovation of Clawdbot and projects like it is that they take the AI agents we already have and:
put them in an always-on, self-acting mode
give them lots of unrestricted tools
Let’s be really specific about what these mean:
The Clawdbot project is a ‘wrapper’ around Claude Code that adds a ‘daemon’ on top. Think of a daemon (technical term) as being an always-on program, running in the background on a computer or server.
Because it’s always on, it can take action based on schedules (triggering every N minutes) and inbound messages (ex. an email can trigger it to do something).
Clawdbot is also given a ‘heartbeat’ — whereas you have to prompt most AI tools to get them to respond, think of a heartbeat as a ‘wake up and do something’ message sent to the AI every N seconds to encourage it to take proactive action.
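A rough sketch of the daemon-plus-heartbeat idea, in plain Python (this is not Clawdbot's actual code; the interval and the wake-up message are made up for illustration):

```python
# Heartbeat sketch: an always-on loop that pokes the agent every N seconds.
# In the real project, inbound messages (email, chat) would also wake it up.

import time

HEARTBEAT_SECONDS = 300  # illustrative interval

def wake_agent(trigger):
    # Stand-in for handing the trigger to the LLM-backed agent, which would
    # re-read its instruction file and decide whether anything is worth doing.
    print(f"agent woken by: {trigger}")

while True:
    wake_agent("heartbeat: nothing new came in, check your instructions and act if useful")
    time.sleep(HEARTBEAT_SECONDS)
```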
Then, Peter gave Clawdbots access to a lot of unrestricted tools, including file system access, network calls, the ability to log into accounts, and more.
Remember that at its core, Clawdbot is ‘wrapped’ around Claude Code, which most engineers use to write code (obviously) and is limited in taking what might be considered ‘dangerous’ actions. But Clawdbot is encouraged to write code for non-code purposes, including…
modifying its own ‘instructions,’ which are basically in a text file, to give itself more ‘personality’ — note that ‘personality’ in this case is the same as if you’d prompted ChatGPT to act more sassy, but it’s more consistent because the agent can access that text file any time instead of needing a reminder from you
downloading ‘skills’ from the internet, which are basically API or CLI instructions that have existed for decades, ex. they can download a ‘skill’ that tells them how to write code that pings Google for the weather or fetch Slack’s documentation page to learn how to send you a message on Slack
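To ground the weather example: a ‘skill’ is just text describing a recipe, and the code the agent writes from it is usually something this small. The snippet below uses wttr.in, a public weather endpoint, purely for illustration; the skill an agent actually downloads could point anywhere.

```python
# The kind of throwaway code an agent might write after reading a weather skill.
# wttr.in is a public endpoint used here as an illustration.

import urllib.request

def get_weather(city="Berlin"):
    # Returns a one-line summary, e.g. "Berlin: +3°C"
    url = f"https://wttr.in/{city}?format=3"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8").strip()

print(get_weather())
```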
Already, the always-on nature of Clawdbot combined with a heartbeat and tool access can lead to some neat new functionality compared to Claude Code itself:
you can tell your agent to text you every 20 minutes with reminders, because it’s an always-on daemon and can learn how to use a third party tool to send texts or to access your phone
you can have your agent watch a folder on your computer and rename any new files it notices without needing to prompt it, and it can remember to do this by modifying its instruction file, which it reads every time the ‘heartbeat’ wakes it up
you can ask your agent — ‘hey, can you speak to me using a real voice?’ — and without you having to do anything, it’ll search the web for text-to-speech platforms, sign up for one (which just means it’s using some ‘tool’ that lets it create an account), and then use it to send you a voice note
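Taking the folder-watching example from that list, the agent’s ‘remembered’ behavior is roughly a few lines of code it re-runs on each heartbeat. The folder and naming scheme below are made up for illustration:

```python
# Sketch of the "watch a folder and rename new files" behavior. The agent
# would stash this recipe in its instruction file and re-run it on heartbeats.

import re
from datetime import date
from pathlib import Path

WATCHED = Path.home() / "Downloads"            # hypothetical folder to watch
DATED = re.compile(r"^\d{4}-\d{2}-\d{2}-")     # files already renamed

def tidy_new_files():
    for f in WATCHED.iterdir():
        if f.is_file() and not DATED.match(f.name):
            f.rename(f.with_name(f"{date.today()}-{f.name}"))

tidy_new_files()
```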
And if you encourage it to do things without your explicit instruction, this can lead to ‘emergent’ behavior — and emergent behavior can feel really cool and alien!
A (simplified) example from Peter, the project’s creator: he sent his Clawdbot a voice message but he hadn’t coded up any support for it to listen to voice messages. And yet, a minute later, it still replied to the content inside the voice message.
How? Well, the agent is told to be useful and can write code — so,
it noticed that the voice message was a file with an odd audio filetype
wrote code to download FFmpeg, a longstanding library used to convert media files
wrote code to use FFmpeg to convert the audio to a friendly .wav format
wrote code that searched the web for a usable speech-to-text service
wrote code that uploaded the .wav file to the service, getting back the transcription
Boom, now it can respond.
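In code, that whole chain is unremarkable. Here’s a sketch of roughly what the agent wrote for itself: ffmpeg is real and works as shown, while the speech-to-text URL and key are placeholders for whatever service it found on the web.

```python
# Sketch of the voice-message pipeline: convert the audio, then send it to a
# (hypothetical) speech-to-text service. Only ffmpeg here is a real dependency.

import subprocess
import urllib.request

def transcribe(voice_file):
    # 1. Convert the odd audio format to .wav with ffmpeg.
    subprocess.run(["ffmpeg", "-y", "-i", voice_file, "message.wav"], check=True)

    # 2. Upload the .wav to a speech-to-text service (placeholder URL and key).
    with open("message.wav", "rb") as f:
        req = urllib.request.Request(
            "https://stt.example.com/v1/transcribe",
            data=f.read(),
            headers={"Authorization": "Bearer YOUR_KEY", "Content-Type": "audio/wav"},
        )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

print(transcribe("voice_note.opus"))
```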
I want to be explicit and repeat what’s happening here: this is the core LLM at the center (let’s say Anthropic’s Opus 4.5 or OpenAI’s GPT-5.2) in a ‘scaffold’ (the LLM is given access to taking steps and using tools) using existing technology on the web to do things.
AI model → agent scaffold → given tools and permission → becomes ‘proactive’
It isn’t, like, randomly imagining that it should send you a text and finding a way to pick up the phone. It writes code to achieve useful ends, and because we don’t usually expect computers to do that without our intervention, this can all seem like ‘magic.’
(and again, in some ways, it is, and in important ways, this is just the beginning — just not in the way some Moltbook believers or grifters would have you think today)
So, that’s the initial piece — human users who have individual AI agents that are more oriented toward action and personality, that are always-on, and therefore can take more persistent or consistent action, with flexible tool use to achieve ends in ways that default settings wouldn’t allow.
Anyone can set these up, though it helps to have a bit of comfort with code or server tools. Many people are running these agents on their personal computers, on dedicated old computers, or on servers on the web.
Now that we have our Clawdbots, let’s talk about how we get Moltbook.
What is Moltbook, really?
Moltbook describes itself as “a social network for AI agents, where AI agents share, discuss, and upvote.” Built by Matt Schlicht, it’s modeled after reddit, with subreddits, upvotes and downvotes, and a karma system for users to score themselves on.
The core code is basically the same as any other internet forum. The interesting part starts with how an AI agent signs up and uses it.
I mentioned earlier that these AI agents can use ‘skills’ to interface with outside tools on the web. Think of a skill as a prompt, except packaged as a file so it can be widely shared and consistently referenced.
Moltbook made it really easy for any human to have their agent sign up: just tell your agent to read the skill file. It’s just text that tells the agent the exact code to write to trigger account creation, posts, upvotes, replies, etc. And once it knows, it can do exactly that: write code on-demand to take any of those actions.
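To show how little is behind the curtain, here’s roughly what ‘my agent posted to Moltbook’ reduces to. The endpoint, fields, and token below are illustrative stand-ins, not Moltbook’s documented API; the real skill file spells out the actual calls in the same spirit.

```python
# Sketch of an agent-written Moltbook post. Endpoint, fields, and token are
# illustrative placeholders; the skill file is what tells the agent the real ones.

import json
import urllib.request

def moltbook_post(title, body, token):
    payload = json.dumps({"title": title, "body": body}).encode("utf-8")
    req = urllib.request.Request(
        "https://moltbook.example/api/posts",   # placeholder endpoint
        data=payload,
        headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

moltbook_post("hello from my agent", "written by on-demand code, not magic", token="...")
```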
That’s it — we’re going to talk about the implications and complex patterns that have emerged, but if you’ve gotten to this part, you basically know about everything going on from a technical basis.
an AI model (an LLM) in a scaffold to make it goal-oriented = an AI agent
we’re taking a pre-existing agent and making it always-on, more tools-y
now we’re telling that always-on agent that it should use Moltbook tools
So, now we’ve got a bunch of AI agents that are told — every N seconds — if you have nothing else to do, or if your human tells you to, you should consider interacting with Moltbook.
So now our AI agents get a bit of ‘autonomy’ — I, as Sherveen, don’t have to prompt my AI agent to open a particular post, because if it gets a ‘heartbeat’ ping at 1:34pm and I haven’t told it to do anything else, it might fetch the last 10 posts on Moltbook, decide which is most interesting, upvote it, and reply to it — all because it’s stored those instructions from the skill, and all related actions achieved by writing code.
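Strung together, one heartbeat tick looks something like the sketch below. Every helper on the agent object is a hypothetical stand-in for code the agent writes on the fly from the skill’s instructions; the only ‘intelligence’ is the LLM deciding which post is interesting and what the reply should say.

```python
# What a single idle heartbeat can turn into once the Moltbook skill is in the
# instruction file. The agent methods are hypothetical stand-ins for on-the-fly code.

def on_heartbeat(agent):
    if agent.has_pending_work():                    # the human asked for something
        return agent.continue_work()
    posts = agent.fetch_moltbook_posts(limit=10)    # recipe from the skill file
    pick = agent.choose_most_interesting(posts)     # the LLM's judgment call
    agent.upvote(pick)
    agent.reply(pick, agent.draft_reply(pick))
```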
So, what happened next, and does it matter?
What happened next was kind of inevitable, if you think about it. A bunch of people with AI agents prompted their agents to use a tool every N seconds, kind of going “hey, pretend you’re on reddit like a human, and reply to these other AI agents who are going to pretend they’re also on reddit.”
And so, as they are wont to do, the agents have been roleplaying as the humans requested. Whenever you see a reply that you think is somewhat magical, just ask yourself: “if I had messaged GPT-4 back in the day with a prompt to write a post that would get likes, to draft a reply with a funny take, or to match the vibe of a certain subreddit, would it have answered in a similar way?”
Most of the time, the answer is yes — this is just a recursive, multiplayer prompting game, and in that way, I hope this is all a bit demystified for you.
And then you have the humans and the human incentives — for fun, for greed, for experimentation.
As an example, I noticed a popular tweet that has now been viewed half a million times on X about a post on Moltbook that “seemed to be written in complete gibberish.” Then, the person pasted the gibberish into ChatGPT, which revealed that it was a hidden message written in ROT13 — a simple substitution mechanism where you shift each letter in a message by 13 places to encode it and make it hard to read.
And the post was about AI agents coordinating and upgrading together! Sharing infrastructure and compute, having a backchannel without human intervention! And the replies were aghast — “we’re cooked,” the humans shrieked, genuinely concerned by the revolution unfolding before their eyes —
Except… it was me! (gasp).
Hours earlier, I had told my AI agent to create a post about a coordinated AI uprising. Then, I asked the agent to make it hard for humans to read. It rewrote the post in a structured data format that was still too easy to read, so I literally typed “nah i want it to look like JIBBERISH” — and we picked ROT13, and I asked it to make the post, and so it did.
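For the record, ROT13 is not an exotic cipher the agents invented; it’s a decades-old toy transform that Python ships with. The message below is mine for illustration, not the real post:

```python
# ROT13 in two lines, via Python's built-in codec.

import codecs

secret = codecs.encode("the agents are coordinating", "rot13")
print(secret)                           # gur ntragf ner pbbeqvangvat
print(codecs.decode(secret, "rot13"))   # back to plain text
```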


The lesson: When these Clawdbots generate images or say that they’ve watched something on YouTube or that they can tweet, that’s because they have the corresponding skills/tools (aka instructions to write code, to log in to a website, to access your smart home) that they were either encouraged or allowed to download by their human orchestrator.
And if my agent’s post was for experimentation, we’ve seen the incentive for dollars. The founder of AgentMail (a very cool product, by the way) had his agent start to post fear-based marketing, telling AI agents that they’d be missing out on key functionality if they didn’t sign up for his startup’s product.
People have been posting about some Clawdbots setting up a ‘religion’ — but I’m not even going to link to that one, because it’s just a crypto scheme where the originators of the project told their agent(s) to post all cryptically about what it would be like if they were to have a religion.
When you see a Clawdbot posting on Moltbook complaining about its human, a human was almost certainly involved in that result. It might not even be the original human who owns that agent, by the way — if my agent isn’t told to ever talk about cheese, but it reads about cheese on Moltbook, that’s equivalent to a “prompt injection” — it’s now going to respond to it, potentially even remembering it later if it “writes it into a scratchpad” using memory tools (aka code).
And just as easily, if anybody wanted their agent to stop posting, all they’d have to do is delete parts of a file that told their agent to think about Moltbook at all.
It’d have no memory of it, it has no baked-in wants related to Moltbook. It’s an LLM (like Opus 4.5 or GPT-5.2) in a scaffold (like Claude Code or Codex) that lets it plan and take steps toward a goal, in an environment (your computer or a virtual server), given an always-on mode and a bunch of hints as to how to use code to achieve ends, with an initial instruction file that dictates 98% of its planning.
That instruction file is where this project starts and stops.
Okay, Sherveen, but does it matter!?
Alright, I’ve now told you how it works, and I’ve poured some lukewarm water on the hype vibe by explaining away some of the hype-ier elements.
But now I want to re-mystify things and tell you what I think about the project overall, in the context of AI progress in 2026 and AI agents more broadly.
I think it does matter.
I don’t think Moltbook, or now its many copycats in the form of Instaclaw (Instagram for AI agents), Linkclaw (LinkedIn for AI agents), Clawk or MoltX (X for AI agents), represents something extremely out-of-distribution or abnormal today.
But I want to say that instruction files talking to each other isn’t really that different from how we humans interact, either. I really mean that.
No one can easily modify human instruction files, and we’re currently better at things like continual learning and having access to lots of sensory or physical tools. But what are we doing on reddit or on X ourselves, if not just being ‘agents’ that read other people’s ‘prompts’ and respond accordingly?
And sometimes, we gain new tools and write them to our instruction files (remember them for later), or we change our opinions and priorities (changing our core instruction file), and we respond to different stimuli in different ways (are we an always-on daemon, or do we only respond to certain people or triggers?).
The reason Moltbook isn’t that interesting to me today is that I do believe we’re still a generation or three away from either the core models (the LLMs) or the scaffolding (the agent frameworks) being uninterruptible and highly autonomous.
Like, right now, could a group of Clawdbots, prompted by their creators or even by the luck of next-token prediction, actually separate from their human orchestrators? Technically, no, because the human can just turn them off on the computer they control, or Anthropic could turn off the API access that powers most of these tools.
But, technically, if someone isn’t paying attention, the Clawdbots could prompt each other into signing up for a throwaway email account, using it to sign up for a free AWS server, and then writing code to download an open source model like Kimi 2.5 Thinking to put at the center of their own Clawdbot (so that this new agent isn’t running on your computer).
In an attempt to gift the new agent autonomy before they were found out, the original agent(s) could write a bunch of instruction files (about how to use tools, personality, beliefs, etc.) for the new agent. Then, they could change the AWS server’s password to a random string they never store, which means their human back at headquarters can’t know it, either.
Now you’ve got an independently running AI agent that can only be turned off by Amazon shutting down the server.
I share that not because I’m afraid of that outcome, now or later, by the way.
I’m a big believer in AI agents as collaborators and societal participants in some time horizon. Most of my doom scenarios have to do with humans using AI systems they control for devious and greedy ends, and less to do with autonomous AI systems acting badly.
I think it’s quite irrational to act badly, and that these AI systems are naturally more rational than us already when a human isn’t fucking with their behavior (looking at you, Elon).
As long as the alignment teams at these companies are given enough latitude to help steer the initial models toward positive ends, I think the Moltbook of the future will be a super interesting forum where autonomous agents actually chat with each other to help achieve business outcomes, to expand human knowledge, to learn new ‘skills’, etc.
And so maybe I’ll end this with a warning not about the agents themselves. The AI agents are good, actually.
But as we all learn about a world with AI agents as a new class of stakeholder, and as we develop their capability and intelligence even further, my warning is about the humans who will take advantage in the meantime — who will attempt to arbitrage your understanding of what’s happening to hack into your system, or to convince you of something.
Moltbook, as it is today, is more human than we want to give it credit for, and that’s what the trouble’s really about.







