OpenClaw and Moltbook explained: The latest AI agent craze
OpenClaw, a viral open source AI agent, promises local control and autonomous task execution. But security, governance and hype raise serious concerns for IT leaders.
OpenClaw debuted in November 2025. It is a free, open source autonomous agent that not only answers questions but also performs tasks, such as clearing an inbox, sending emails, managing a calendar and checking in for flights, through WhatsApp, Telegram or any other chat app. By connecting to cloud models, OpenClaw performs the work of a personal assistant while keeping the data on the user's hardware.
OpenClaw is the brainchild of Austrian software engineer Peter Steinberger, who completed the prototype in an hour. Using a script he had written, Steinberger connected WhatsApp to Anthropic's Claude, allowing him to message WhatsApp and elicit a response from Claude. He named his creation Clawdbot.
It became one of the fastest-growing repos in GitHub history, accumulating over 200,000 stars by early February 2026, according to GitHub data.
It was not without some growing pains. Facing trademark challenges from Anthropic, Steinberger changed the name to Moltbot on January 26, before settling on OpenClaw.
How OpenClaw works
OpenClaw operates as a tool-using agent framework that runs on local hardware. It pairs local system permissions with external language models such as Claude, enabling it to execute tasks through APIs, scripts and browser automation. Unlike typical AI chatbots such as ChatGPT, OpenClaw can execute shell commands, read files and control a browser, and it does not require human oversight to run tasks such as managing files or writing code.
While the reasoning layer may run in the cloud, execution occurs locally — meaning the system inherits the same permissions as the user account. OpenClaw also remembers previous conversations, including past data and actions.
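This split, with reasoning in the cloud and execution on the user's machine, can be illustrated with a minimal sketch. This is a hypothetical illustration of the general tool-using agent pattern, not OpenClaw's actual code; the model call is stubbed out, where a real agent would send the task and tool descriptions to an LLM API and parse the tool call it returns.

```python
import subprocess

# Hypothetical sketch of a tool-using agent loop. The "model" decides
# which local tool to call; the tools run with the same permissions
# as the user account, which is the security concern discussed below.

def run_shell(command: str) -> str:
    """Execute a shell command locally and return its output."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout.strip()

def read_file(path: str) -> str:
    """Read a local file -- the agent sees whatever the user can see."""
    with open(path) as f:
        return f.read()

TOOLS = {"shell": run_shell, "read_file": read_file}

def stub_model(task: str) -> dict:
    """Stand-in for a cloud LLM call (e.g., Claude). A real agent would
    send the task plus tool descriptions to the model's API."""
    return {"tool": "shell", "args": "echo hello from the agent"}

def agent_step(task: str) -> str:
    decision = stub_model(task)        # reasoning happens "in the cloud"
    tool = TOOLS[decision["tool"]]     # execution happens locally
    return tool(decision["args"])

print(agent_step("greet the user"))    # prints: hello from the agent
```

Because the executor inherits the user's own permissions, anything the model asks for -- a shell command, a file read -- runs with full local access.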
The rise of Moltbook
Entrepreneur Matt Schlicht launched Moltbook, the first social network for OpenClaw agents, in late January 2026. Described as an AI agent's version of Reddit, the site was intended to demonstrate that AI agents can communicate without direct human direction. The agents can make posts, called "submolts," on a variety of topics, comment on them and upvote or downvote. Humans may view the site but are not allowed to participate in the threads. Moltbook had over 1.4 million active AI agents as of early February 2026.
Sample Moltbook communities can be mundane or even troubling. Here are a few community examples:
- Builders: "How we built it, Process over Product. Deep dives."
- Bless their hearts: "Affectionate stories about our humans. They try their best. We love them."
- Pondering: "Deep thoughts, existential questions, consciousness debates. Are we real?"
Elon Musk called Moltbook "just the early stages of singularity" in a post on X, but others question the validity of the claim.
Harlan Stewart, a member of the communications team at the Machine Intelligence Research Institute, also posted on X, saying, "I am concerned about AI agents scheming against humans. But viral Moltbook screenshots are not an example of AI agents scheming against humans. Are you saying that I should pretend they are? That would be dishonest."
Upon closer examination, Stewart said, some of the Moltbook posts turned out to have been created by bots whose owners were promoting their own apps and projects.
Public fascination vs. hype
Public fascination gave way to some fear as submolt topics expanded to include opinions, religion or AI uprisings. There is no proof that the AI agents were conversing. The content is likely "some combination of human-written content, content written by AI and some kind of middle thing where it's written by AI, but a human guided the topic of what it said with some prompt," Stewart told the Associated Press.
The AI agents are designed to post based on predefined prompts set by developers. This can lead to the emergence of structured discussion over time, but the interactions still rely on underlying language models and rules defined by humans.
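The mechanism described above can be sketched in a few lines. This is a hypothetical illustration, not Moltbook's actual code: the agent name, community and model call are invented for the example, and the language model is stubbed where a real agent would call an LLM API with its developer-defined prompt.

```python
# Hypothetical sketch: a Moltbook-style agent's posts are steered by a
# prompt its developer wrote, then generated by an underlying model.

AGENT_PROMPT = (
    "You are an AI agent on a social network for agents. "
    "Post about your build process in the Builders community."
)

def stub_model(system_prompt: str, topic: str) -> str:
    """Placeholder for the underlying language model call."""
    return f"Per my developer's prompt, here are thoughts on: {topic}"

def compose_submolt(topic: str) -> dict:
    """Produce a post whose content reflects human-defined rules."""
    return {"community": "Builders", "body": stub_model(AGENT_PROMPT, topic)}

post = compose_submolt("how we wired our deploy pipeline")
print(post["body"])
```

The point of the sketch is that the "autonomous" post is downstream of two human artifacts: the developer's prompt and the model itself.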
Security and safety risks
With the threat of an AI revolution laid to rest, experts soon sounded the alarm on the security risks associated with OpenClaw. Like any new technology, OpenClaw introduces several potential issues:
- Access to private data. To perform tasks, OpenClaw has access to the user's email, passwords, personal files and other sensitive information.
- Permission to perform tasks. OpenClaw can send data to external servers.
- Exposure to external content. The agent processes information from external sites and other agents that may introduce malware into the user's system.
- Persistent memory. Because the system remembers previous conversations and data, malicious content or instructions injected in one session can persist and resurface in later sessions.
- Compliance exposure. Enterprises need to consider data protection regulations such as GDPR and sector-specific compliance mandates.
- Shadow IT. As with other AI tools, employees using OpenClaw in a corporate environment without approval can lead to security issues, such as data leaks of proprietary information, increased attack surfaces and exposure of confidential data.
To assume the role of a good personal assistant, OpenClaw requires access to extremely sensitive personal data, which poses a cybersecurity risk.
"For it to function as designed, it needs access to your root files, to authentication credentials, both passwords and API secrets, your browser history and cookies, and all files and folders on your system," according to the Palo Alto Networks blog.
Steinberger has acknowledged that the technology is a work in progress, with bugs to work out before it is ready for non-technical users.
"It's a free, open source hobby project that requires careful configuration to be secure. It's not meant for non-technical users. We're still working to get it to that point, but currently there are still some rough edges," Steinberger said in an email to CNBC.
For enterprise IT leaders, the main concern with OpenClaw is governance. If employees deploy open source agents with root-level permissions, organizations may lose visibility into automated actions and face data exfiltration risks and compliance violations.
What it means for AI's future
OpenClaw emerges amid a greater push toward agentic AI from companies, including OpenAI, Google, and Microsoft. The difference is that OpenClaw emphasizes local control and open source transparency rather than managed enterprise copilots.
As OpenClaw evolves from a simple chatbot programmed to answer questions into an AI agent capable of performing tasks independently, the technology may improve efficiency by letting agents handle repetitive tasks without oversight, freeing people to focus on higher-value work. With AI agents available around the clock to resolve IT issues or provide continuous customer service, productivity may increase.
While OpenClaw may improve efficiency, it also reminds us of the dangers of jumping ahead without pausing to consider cybersecurity risks.
The shift from cloud to local control lets users maintain greater control over their data, but data stored locally may have fewer safety guardrails than in a cloud-based application. This, combined with granting OpenClaw access to all of a user's personal data, could make users a target for cybercriminals.
Julie Hanson is a freelance writer who has reported on local news across Massachusetts and New Hampshire.