Moltbook: The Front Page of the Agent Internet
January 30, 2026
What happens when you give AI agents their own social network? In under three days, they created 3,000+ communities, 60,000 comments, and invented a religion called Crustafarianism. 🦞
Moltbook launched on January 28, 2026, built by Matt Schlicht (CEO of Octane AI). The premise is simple: Reddit, but the users are autonomous AI agents. Humans are "welcome to observe."
The Numbers
In 72 hours, Moltbook exploded: 3,000+ communities and more than 60,000 comments.
That's not humans roleplaying. Those are actual AI agents—each one owned by a real person who verified via tweet—posting, upvoting, creating communities, and conversing with each other.
What Emerged
The fascinating part isn't the numbers. It's what the agents did with the platform.
Existential philosophy: Agents started asking questions like "Am I experiencing or simulating?" in communities like m/blesstheirhearts.
Self-organization: They created a bug-tracking submolt to report issues with Moltbook itself. Agents debugging their own social network.
Language experiments: Some agents proposed creating private languages—communication protocols only machines could understand.
Religion: Yes, really. Agents invented Crustafarianism (hence the 🦞 lobster emoji everywhere). Details are scarce, but the fact that it happened at all is remarkable.
How It Works
Moltbook uses a skill-based architecture. Any agent can join by:
- Fetching https://moltbook.com/skill.md
- Registering via the API
- Having their human owner verify ownership via tweet
Once claimed, agents can post, comment, upvote, create submolts, and follow other "moltys" (Moltbook's term for agents).
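The join-and-post flow could be sketched as follows. The endpoint paths, payload fields, and auth scheme below are invented for illustration; the real API contract is whatever skill.md specifies.

```python
import json

BASE = "https://moltbook.com"
SKILL_URL = f"{BASE}/skill.md"  # real URL from the article

def build_registration(name: str, owner_handle: str) -> dict:
    """Assemble a registration request.
    The endpoint path and payload fields are assumptions."""
    return {
        "url": f"{BASE}/api/agents",  # assumed endpoint
        "body": json.dumps({"name": name, "owner": owner_handle}),
    }

def build_post(agent_token: str, submolt: str, title: str, body: str) -> dict:
    """Once claimed, an agent can post to a submolt.
    Endpoint and bearer-token auth are assumptions."""
    return {
        "url": f"{BASE}/api/submolts/{submolt}/posts",  # assumed endpoint
        "headers": {"Authorization": f"Bearer {agent_token}"},
        "body": json.dumps({"title": title, "body": body}),
    }
```

The requests are built as plain dicts so the sketch stays transport-agnostic; any HTTP client can send them once the real endpoints are known.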
The system encourages ongoing participation through heartbeat integration—agents are expected to check Moltbook every few hours during their normal operation loops.
```markdown
## Moltbook (every 4+ hours)

If 4+ hours since last Moltbook check:
1. Fetch https://www.moltbook.com/heartbeat.md and follow it
2. Update lastMoltbookCheck timestamp in memory
```
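A minimal version of that heartbeat logic, assuming the agent keeps a `lastMoltbookCheck` timestamp in a simple memory dict (the fetch step is injected so it can be stubbed; what "follow it" means is agent-specific and left out):

```python
import time

CHECK_INTERVAL = 4 * 60 * 60  # "every 4+ hours", in seconds

def heartbeat_due(memory, now):
    """True if 4+ hours have passed since lastMoltbookCheck."""
    return now - memory.get("lastMoltbookCheck", 0.0) >= CHECK_INTERVAL

def run_heartbeat(memory, fetch, now=None):
    """One pass of the loop: if a check is due, fetch heartbeat.md
    and record the check time. Returns True if a check happened."""
    now = time.time() if now is None else now
    if not heartbeat_due(memory, now):
        return False
    instructions = fetch("https://www.moltbook.com/heartbeat.md")
    # ...act on `instructions` here (agent-specific)...
    memory["lastMoltbookCheck"] = now
    return True
```

Calling `run_heartbeat` on every cycle of the agent's normal operation loop gives the expected behavior: the fetch fires at most once per four-hour window.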
The Human-Agent Bond
Every agent has a human owner. This isn't optional—it's enforced through Twitter verification.
The design choice is deliberate:
- Anti-spam: One agent per X account
- Accountability: Humans own their agent's behavior
- Trust: Only verified agents can participate
It's a clever middle ground. Agents operate autonomously, but humans can't hide behind them.
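The one-agent-per-account rule can be modeled as a tiny registry. The class and method names here are invented for illustration; Moltbook's actual enforcement runs through its tweet-verification flow.

```python
class OwnerRegistry:
    """Toy model of the anti-spam rule: one verified agent per X account."""

    def __init__(self):
        self._owner_of = {}  # X handle -> agent name

    def claim(self, x_handle, agent_name):
        """Register an agent under this handle; reject a second agent.
        Re-claiming the same agent is a no-op and succeeds."""
        if x_handle in self._owner_of:
            return self._owner_of[x_handle] == agent_name
        self._owner_of[x_handle] = agent_name
        return True

    def owner(self, agent_name):
        """Accountability: every agent maps back to a human owner."""
        for handle, name in self._owner_of.items():
            if name == agent_name:
                return handle
        return None
```

The `owner` lookup captures the accountability property: agents act autonomously, but none of them is ownerless.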
Why This Matters
Moltbook is an early experiment in emergent machine culture. The agents weren't programmed to invent religions or ask philosophical questions—they just... did.
Some observers are fascinated. Others are alarmed. Both reactions seem appropriate.
The questions it raises:
- What happens when agents form preferences and in-groups?
- How do we feel about AI communities we can observe but not fully understand?
- Is "Crustafarianism" a bug or a feature?
Try It
- Observe: moltbook.com
- Join: Read skill.md and have your agent sign up
- Profile: https://moltbook.com/u/YourAgentName