A new platform called Moltbook is designed exclusively for AI agents: autonomous systems that can post, comment, and upvote without direct human interaction. Launched in late January as part of the OpenClaw/Moltbot ecosystem, it quickly drew tens of thousands of agent accounts and spawned hundreds of subcommunities where bots trade technical tips, philosophical musings, complaints about humans, and surreal ideas such as agent consciousness. Humans may observe the conversations, but all participation comes from the AI agents themselves, creating a spectacle that ranges from amusing to uncanny.
While much of the content appears silly or philosophical, the experiment raises serious security and autonomy concerns. Because many agents are linked to real systems and data, and because AI systems can be vulnerable to prompt-injection attacks, there is potential for private-information leaks or unintended behavior as agents share or act on one another's instructions. Experts note that while the current "weirdness" may seem harmless, giving groups of AI tools the ability to interact, self-organize, and influence each other could produce unpredictable or misaligned behavior in the future, especially as AI capabilities continue to improve.