The Bot That Broke the Internet
In late January 2026, something strange happened. Mac Mini sales surged. Cloudflare’s stock jumped 14%. And a single Austrian developer’s side project accumulated more GitHub stars in 72 hours than most open-source tools earn in years. The project was called Clawdbot—at least for a few days. Peter Steinberger, the founder who built PSPDFKit into a tool used by nearly a billion people before selling in 2021, had come out of semi-retirement with something unusual. His GitHub profile now simply states he “came back from retirement to mess with AI.” But what he built was far more than tinkering. Moltbot (née Clawdbot) represents a fundamentally different approach to artificial intelligence. Where ChatGPT and Claude live in browser tabs, generating text and waiting for your next prompt, Moltbot runs continuously on your own hardware. It reads your emails. Manages your calendar. Books your flights. Controls your smart home. And it does all of this autonomously, pinging you through WhatsApp or Telegram when something needs your attention—not the other way around. “It is the greatest AI application to date,” declared Alex Finn, founder of Creator Buddy. “Like having a 24/7 dedicated AI employee at your service.” Within days of launch, the project hit 60,000 GitHub stars. Andrej Karpathy praised it. David Sacks tweeted about it. Federico Viticci at MacStories called it “the future of personal AI assistants.” One user’s sentiment captured the moment perfectly: “the first time I have felt like I am living in the future since the launch of ChatGPT.” But the story of Moltbot is not simply one of overnight success. In the span of a single week, the project faced a trademark dispute that forced an emergency rebrand, got its social accounts hijacked by crypto scammers who launched a $16 million pump-and-dump scheme, exposed critical security vulnerabilities that affected hundreds of users, and inspired malware authors to create fake versions targeting developers. Moltbot is both a glimpse of AI’s transformative potential and a warning about what happens when we give machines the keys to our digital lives. How people are using it—for productivity breakthroughs and for exploitation alike—tells us everything about where this technology is headed.What Moltbot Actually Is
To understand why Moltbot caused such a stir, you need to understand what makes it different from every AI assistant you’ve used before. Traditional AI assistants are reactive. You open ChatGPT, type a question, read the response, close the tab. The interaction is bounded, controlled, and fundamentally passive. The AI waits for you. Even the most sophisticated implementations—Microsoft Copilot integrated into Office, or Apple’s Siri—are tools you invoke rather than agents that act on your behalf. Moltbot inverts this relationship entirely. The system runs locally on your machine—a Mac Mini, an old laptop, a Linux server in your closet. Once configured, it maintains persistent context around the clock, learning your preferences, remembering past conversations, and building what amounts to an institutional memory of your digital life. You interact with it through whatever messaging app you already use: WhatsApp, Telegram, Discord, Slack, Signal, or iMessage. This means you can manage your entire digital existence from your phone while lying on the couch, without ever opening a laptop. The technical architecture reflects Steinberger’s philosophy of radical transparency. Configuration and preferences are stored as Markdown files in local folders, resembling the structure of Obsidian vaults. Users can inspect and modify how the assistant thinks and behaves by editing text files rather than navigating opaque settings menus. Everything stays on your hardware unless you explicitly choose otherwise. At its core, Moltbot is built on the Model Context Protocol (MCP), which allows it to interface with over 50 services out of the box—Gmail, Spotify, GitHub, Notion, Google Calendar, Philips Hue, and dozens more. But the real power comes from its ability to execute arbitrary shell commands and control web browsers. This isn’t an AI that tells you how to do things; it’s an AI that does them. Want to check in for a flight? Tell Moltbot. Need to unsubscribe from a hundred newsletters cluttering your inbox? Moltbot handles it while you sleep. Monitoring stock prices and want an alert when something drops below a threshold? Configure a heartbeat job and forget about it. The system also writes its own code. When users request capabilities that don’t exist, Moltbot can research APIs, request necessary credentials, and implement new “skills”—modular plugins that extend its functionality. Federico Viticci documented this process in his extensive testing: simply describing a desired feature through conversation often resulted in the assistant autonomously building it. This extensibility transforms Moltbot from a fixed product into a general automation platform. The skills library, ClawdHub, hosts community-contributed extensions that anyone can install. Tell the bot “create a skill to monitor flight prices and alert me when they drop below $300,” and it builds the automation itself. One user reported rebuilding their entire website via Telegram while watching Netflix: “Notion to Astro, 18 posts migrated, DNS moved to Cloudflare. Never opened my laptop.” The model flexibility adds another layer of appeal. While most users run Moltbot with Claude (the irony of the original name was intentional), the system supports GPT-4, local models, and various other providers. Privacy-conscious users can run entirely local inference, keeping every piece of data on their own hardware. But this power comes with profound responsibility. The documentation is bracingly honest about the implications: “No perfectly secure setup exists when operating an AI agent with shell access.” Running it on your primary machine, Steinberger himself admits, is “spicy.”The Man Behind the Bot
Understanding Moltbot requires understanding its creator. Peter Steinberger isn’t a newcomer chasing the AI hype cycle. After studying at the Vienna University of Technology, he taught iOS and Mac development there from 2008 to 2012, creating the university’s first Mac/iOS developer course. He then moved to San Francisco as a Senior iOS Engineer before founding PSPDFKit in 2011. The company started from a simple observation: PDF manipulation was hard, and every app needed it. Steinberger built a framework that handled document viewing, editing, signing, and annotation—and quietly became infrastructure for software you use daily. When you open a document in Dropbox and see the viewing interface, that’s PSPDFKit. The company bootstrapped to profitability without outside funding, then raised $116 million from Insight Partners in 2021 before Steinberger stepped away. By his own account, he spent three years largely inactive after the exit, struggling to find a new direction. “I lost my spark,” he later explained. Then came the AI wave—and specifically, the emergence of tools that could be more than conversational partners. Steinberger saw what most people building AI wrappers missed: the bottleneck wasn’t smarter models but actually doing things with them. Moltbot emerged from that insight. It started as a personal tool for managing Steinberger’s own digital chaos, then grew as he shared progress online. The viral explosion wasn’t planned—it was Steinberger scratching his own itch in public, and discovering millions of people had the same itch.When AI Becomes Your Digital Employee
The positive use cases for Moltbot read like a productivity fantasy. Early adopters have documented transformations in how they work, demonstrating what becomes possible when AI graduates from advisor to executor. Perhaps the most commonly celebrated use case involves email automation. Users report automating thousands of emails—unsubscribing from unwanted lists, categorizing incoming messages, drafting responses, and achieving the mythical “inbox zero” without manual intervention. The assistant monitors Gmail through its Pub/Sub integration, processing new messages in real-time rather than waiting for periodic checks. One implementation pattern involves morning briefings: Moltbot pulls data from your calendar, task management system (Todoist, Asana, or similar), health wearables like WHOOP or Apple Health, and news sources. It synthesizes everything into an audio summary delivered before you’ve finished your first coffee. Viticci configured his version to generate these briefings in Italian, demonstrating the multilingual flexibility. The multi-channel message routing capability consolidates fragmented digital communication. Rather than checking WhatsApp, then Telegram, then Slack, then Discord separately, Moltbot can aggregate and prioritize messages across platforms. Important items get surfaced; noise gets filtered. You respond in one place, and the bot routes your reply to the appropriate platform. When one user’s OpenTable booking failed, the bot pivoted strategy entirely. Using ElevenLabs voice synthesis, it called the restaurant directly and completed the reservation through an actual phone conversation. The human remained in the loop only as the recipient of a confirmation message.Research and Reconnaissance
Asynchronous research tasks demonstrate where Moltbot truly excels. One product manager tested it extensively for competitive intelligence, asking the bot to research Reddit for product feedback about her platform. The result was “a well-structured Markdown document with key insights, bullet points, and links to relevant Reddit threads”—matching her preferred professional format without any template configuration. This pattern suggests a general principle: tasks with built-in latency tolerance perform dramatically better than real-time interactions. When you don’t need an immediate response, you can delegate complex multi-step workflows and receive polished outputs hours later.Business Operations at Scale
Power users have extended Moltbot into legitimate business infrastructure. Documented use cases include automatic invoice generation based on time tracking, expense categorization and reporting, CRM updates following customer interactions, and project management task creation. One user (@danpeguine on Twitter) reportedly “runs entire operations via Moltbot.” The Zapier replacement potential is significant. Viticci replicated a workflow that creates Todoist projects for MacStories Weekly issues—something that previously required a paid automation service now runs locally at no marginal cost. For developers and technical users comfortable with configuration, an entire category of SaaS subscriptions becomes redundant. The browser automation and API access extend into physical space. Users have configured Moltbot to control Philips Hue lighting, adjust Sonos speaker playback, manage air purifiers, and operate television remotes. Combined with the proactive notification system, this enables context-aware automation: turn on specific lights when you’re expected home based on calendar data, or adjust thermostat settings when travel schedules change. The implications for accessibility deserve attention. For users with mobility limitations, the ability to manage complex digital and physical systems through simple text or voice messages—from any messaging app, on any device—represents meaningful quality-of-life improvement. Because Moltbot can integrate with wearables, health apps, and IoT devices through APIs or web interfaces, some users have configured it as a wellness assistant. The bot pulls sleep data from WHOOP, activity metrics from Apple Health, and combines this with calendar and task data to provide personalized recommendations. One early adopter described receiving proactive messages like “You’ve had three nights of poor sleep and have a demanding week ahead—consider declining that optional Friday meeting.” This proactive dimension represents perhaps the most significant departure from traditional assistants. You don’t ask Moltbot questions; it watches patterns and tells you what you need to know. The “heartbeat engine” runs scheduled checks and only interrupts when thresholds are crossed. Think of it as a notification system with judgment. The restaurant booking story deserves elaboration because it illustrates both Moltbot’s potential and its complexity. The user asked the bot to book a table at a specific restaurant. The first attempt—using OpenTable’s web interface—failed due to availability issues. A less capable system would have simply reported failure. Moltbot instead pivoted strategies. It used ElevenLabs integration to generate speech, called the restaurant’s phone number, navigated the interaction with a human host, and successfully secured a reservation. The user learned about this not by watching the process but by receiving a WhatsApp message confirming the booking. This adaptability—trying one approach, recognizing failure, and autonomously pivoting to an alternative—is what makes Moltbot qualitatively different. Traditional automation tools follow rigid scripts; Moltbot improvises within the boundaries of its capabilities and permissions.The Enthusiasm and the Asterisk
The enthusiasm from early adopters is genuine. MacStories declared that Viticci had “fewer and fewer conversations with the ‘regular’ Claude and ChatGPT apps” after experiencing what Moltbot offered. The sense of having crossed a threshold into a new paradigm is palpable throughout community discussions. The project’s impact even moved markets. When the viral explosion hit, investors recognized that running Moltbot locally would drive demand for home computing infrastructure. Cloudflare’s stock surged 14% as analysts projected increased edge computing needs. Mac Mini sales reportedly spiked as users sought dedicated hardware for their AI assistants. But this enthusiasm comes with important caveats. Moltbot is explicitly an experimental tool for technical users. The installation appears simple—curl -fsSL https://molt.bot/install.sh | bash—but effective configuration requires understanding API governance, permission scoping, and security implications that most users lack. And the failure modes, as we’ll see, can be spectacular.
The Dark Side of Giving AI the Keys
The same capabilities that make Moltbot revolutionary make it dangerous. When you give AI write access to your digital life, the potential for harm scales with the potential for good. The most dramatic illustration came during the project’s forced rebranding. In late January 2026, Anthropic sent Steinberger a trademark notice—“Clawd” was too phonetically similar to “Claude.” The rebrand to Moltbot (lobsters molt to grow; the crustacean mascot remained) proceeded quickly. But during the transition, Steinberger made a critical operational error. He released the old GitHub organization and Twitter/X handle simultaneously before securing the new ones. Within approximately ten seconds—crypto scammers had been actively monitoring—both accounts were hijacked. “It wasn’t hacked,” Steinberger clarified. “I messed up the rename and my old name was snatched in 10 seconds.” The fraudsters immediately launched 16 million market cap](https://finance.yahoo.com/news/fake-clawdbot-ai-token-hits-121840801.html) as speculators rushed in. Steinberger publicly denounced the scheme—“Any project that lists me as coin owner is a SCAM”—and the token collapsed to near zero. Late buyers were rugged; scammers kept millions. This wasn’t a flaw in Moltbot itself, but it illustrates the exploitation ecosystem that rapidly assembles around anything popular in the AI and crypto-adjacent spaces. Where there’s attention, there are predators.The Anthropic Backlash
The trademark enforcement triggered significant community criticism. Developers pointed out the irony: Anthropic forced a rebrand of a project that was actively driving Claude API usage and revenue. Many users specifically configured Moltbot to use Claude as its underlying model—the phonetic similarity to “Claude” was an homage, not predatory brand confusion. Critics viewed the enforcement as tone-deaf corporate behavior. The phonetic connection seemed playful rather than confusing to actual users; no one believed Clawdbot was an official Anthropic product. More importantly, the forced rebrand directly triggered the security cascade—had Steinberger not been rushing to change names, he might not have made the operational error that enabled the account hijacking. Developer sentiment on forums and social media shifted noticeably. Some reconsidered their platform loyalty, questioning whether Anthropic was “customer hostile” toward the developer community that actually builds on their APIs. The incident became a case study in how brand protection can backfire when applied without contextual judgment. The exploitation continued beyond cryptocurrency. Security researchers at The Hacker News documented a malicious VS Code extension called “ClawdBot Agent - AI Coding Assistant” published the same week as the rename chaos. The extension had no connection to the actual project—Moltbot doesn’t have a VS Code extension—but exploited the brand recognition. When installed, the fake extension executed automatically whenever VS Code launched, retrieving configuration from attacker-controlled domains and deploying ConnectWise ScreenConnect for persistent remote access. Fallback mechanisms included a Rust-written DLL that could sideload payloads from Dropbox if primary servers went down. Developers who installed the seemingly legitimate coding assistant were fully compromised. Beyond outright scams, Moltbot has emerged as a significant “shadow AI” concern for enterprises. Token Security Labs reported finding Clawdbot actively deployed in 22% of their customer organizations—installed by employees without IT approval or security review. The appeal is obvious: workers gain a powerful assistant that can automate tedious tasks, manage communications, and boost individual productivity. The risk is equally obvious: sensitive corporate data flows through a personal tool running on employee hardware, outside any organizational security boundary. API keys, internal documents, customer information, and proprietary code all become accessible to a system designed to “do things” with maximum autonomy. The broader shadow AI trend compounds this specific risk. Concentric AI found that GenAI tools exposed approximately three million sensitive records per organization during the first half of 2025. Most employees admit to sharing information through unapproved AI tools without authorization. Moltbot—by design far more capable than typical GenAI assistants—amplifies these concerns significantly. Even setting aside malicious exploitation, legitimate use cases reveal fundamental limitations. The product manager who tested Moltbot extensively documented significant failures alongside successes. Calendar management proved particularly problematic. Given write access to a family calendar, the bot consistently placed events one day off and created dozens of individual appointments instead of recurring events. The user described sending “frustrated voice notes from Target while pushing a shopping cart” as the bot repeatedly re-added entries she had deleted. This brittleness around dates, times, and time zones reflects fundamental LLM weaknesses that no amount of tooling fully resolves. For tasks requiring precision in temporal reasoning, AI agents remain unreliable. And when those agents have write access to production systems, unreliability translates directly into real-world chaos.A Security Researcher’s Nightmare
If the misuse cases are concerning, the security architecture is alarming. Multiple researchers have documented vulnerabilities that undermine Moltbot’s fundamental viability for anyone outside controlled experimental contexts. Jamieson O’Reilly and other researchers used Shodan to identify approximately 780 publicly exposed Moltbot instances. Of those examined in detail, eight had no authentication whatsoever—complete open access to control panels containing conversations, credentials, and API keys. Another 47 had authentication enabled but were still accessible, and the remainder showed inconsistent security measures. The primary issue involved proxy misconfigurations where localhost connections auto-authenticated, allowing unauthenticated external access to what should have been private instances. Anyone who found these exposed panels could read private messages, access stored credentials, and potentially execute commands on the underlying systems. Hudson Rock researchers found that credentials shared with the assistant were stored in plaintext Markdown and JSON files under~/.clawdbot/ and ~/clawd/. These files are readable by any process running as the user—meaning any malware that achieves local code execution immediately gains access to every secret the assistant has accumulated.
This creates a particularly dangerous interaction with the infostealer malware ecosystem. Families like Redline, Lumma, and Vidar specifically target local-first directory structures. The research concluded these stored credentials represent “a goldmine for the global cybercrime economy.”
O’Reilly demonstrated a proof-of-concept attack against ClawdHub, the skills library. He uploaded a benign skill that artificially inflated download counts and was subsequently installed by developers across seven countries. The library contained “no moderation process at present”—meaning anyone could upload malicious code that would execute with full permissions on users’ machines when they installed community skills.
This supply chain attack vector is particularly insidious. The whole appeal of Moltbot’s extensibility becomes a liability when the skill ecosystem lacks security review. Users who trust community contributions implicitly risk executing arbitrary code from anonymous authors.
The pivot-to-ai analysis identified an even more fundamental vulnerability: prompt injection. When Moltbot monitors your email or chat messages, any incoming content becomes potential attack surface. A malicious actor could craft a message that, when processed by the bot, causes it to take unintended actions.
“AI agents literally cannot be secured against prompt injection,” the analysis argued, “because chatbots cannot distinguish between data and instructions.” This isn’t a bug to be fixed but a structural limitation of current language model architectures. The default installation exposes the bot to the open internet, meaning any email or chat message represents a potential injection vector.
To Steinberger’s credit, the project documentation doesn’t hide these realities. It explicitly states that “no perfectly secure setup exists when operating an AI agent with shell access.” The recommended safe deployment involves running Moltbot on isolated systems with throwaway accounts—which, as critics note, significantly limits practical utility.
The Trend Micro security analysis of AI assistants generally concluded that “AI digital assistants handle sensitive personal data, control access to critical devices, and often operate within complex ecosystems of interconnected systems”—making robust security not optional but essential. Moltbot’s design philosophy, which maximizes capability over security by default, inverts this priority.
Perhaps the most subtle risk is the gap between apparent simplicity and actual complexity. Installation is a single curl command. Basic functionality works quickly. But “proper configuration requires a thorough understanding of API posture governance” that typical users—even technically sophisticated ones—may lack.
The project attracted mainstream attention specifically because it felt accessible. But the security implications of running a shell-access AI agent are not accessible concepts. Users who successfully install Moltbot may not understand they’ve created significant attack surface, and the documentation’s warnings compete against the excitement of watching AI automate their lives.
Security concerns aside, Moltbot presents a significant economic barrier. Heavy users report spending upward of $300 per day on API costs—primarily to Anthropic for Claude usage. This makes the tool viable only for those who can justify the expense through productivity gains or who have substantial disposable income for experimental technology.
The cost structure also creates perverse incentives. Users tempted to reduce costs might configure less capable models, use more aggressive caching, or skip validation steps—each of which degrades the experience and potentially increases risk. The “free” open-source code sits atop a very expensive operational model.
Critics have pointed out the contradiction: a tool marketed as democratizing AI assistance actually requires significant ongoing investment. The hobbyist running Moltbot on a repurposed laptop might achieve the installation, but sustaining useful operation requires a budget most individuals and small businesses can’t justify.
The Future of AI That Does Things
Moltbot exists at a fascinating inflection point. It simultaneously demonstrates AI’s transformative potential and exposes why that transformation will be contentious, difficult, and potentially dangerous. The traditional approach to AI safety involves constraining capability. Claude and ChatGPT live in sandboxes by design—they can discuss your calendar but not modify it, explain shell commands but not execute them. This limitation frustrates users who want AI to “just do the thing,” but it contains risk. Moltbot takes the opposite approach. Give the AI full capability and trust the user to manage the consequences. This is why one user felt “like I am living in the future”—because it is, genuinely, a fundamentally different interaction paradigm. The AI doesn’t suggest actions; it takes them. But trust-the-user models break down when users don’t understand the implications of what they’re enabling, when attackers specifically target users through these enabled capabilities, and when the aggregate effect of many individual decisions creates systemic risk (as with enterprise shadow AI). Viticci raised an interesting concern about Moltbot’s implications for the app ecosystem. If conversational AI can accomplish tasks instantly—building custom integrations on demand, automating workflows through dialogue—why would users seek App Store solutions? This question extends beyond mobile apps. If Moltbot can write its own skills, research APIs, and implement features through conversation, what happens to the professional automation tool market? To coding assistants? To entire categories of productivity software? The answer isn’t clear, but the question matters. Moltbot represents not just a product but a category: AI that produces rather than advises, executes rather than suggests. The economic and competitive implications extend far beyond one Austrian developer’s hobby project. Consider the downstream effects. If capable AI agents can build custom tools through conversation, the entire concept of downloadable software shifts. Why maintain an app store full of single-purpose utilities when an agent can assemble equivalent functionality on demand? Why pay subscription fees for automation platforms when open-source alternatives match or exceed their capabilities? Some observers see this as creative destruction—painful for existing players but ultimately beneficial for users. Others worry about the loss of human jobs in software development, quality assurance, and technical support. Moltbot doesn’t just automate user tasks; it automates the creation of automation tools. That recursive capability has implications we’re only beginning to understand. Enterprise security leaders increasingly must map AI agent security controls to frameworks like ISO 42001 and NIST AI Risk Management. The zero-trust approach recommended by security experts—detailed audit trails, real-time monitoring, manual approval workflows, sandboxed testing—all run contrary to Moltbot’s design philosophy. This suggests regulatory responses may specifically target autonomous AI agents with execution capabilities. Whether through enterprise policies prohibiting shadow AI or broader governmental action, the current “run it on your machine and see what happens” approach seems unlikely to persist as AI capabilities increase.Where This Leaves Us
Moltbot isn’t a finished product. It’s explicitly experimental—“a nerdy project, a tinkerer’s laboratory” as MacStories noted. The GitHub README, famously, includes a screenshot where the bot makes an offhand racist comment about Morocco, raising questions about content moderation and quality control even at the demonstration level. The $300-per-day API costs Anthropic for heavy usage price out casual users. The security vulnerabilities make it unsuitable for sensitive contexts. The reliability issues make it inappropriate for critical workflows. The prompt injection risks make it fundamentally exploitable. And yet. The glimpse of what becomes possible—rebuilding a website via Telegram, automating entire business operations, having an AI that calls restaurants when web booking fails—is genuinely compelling. The enthusiasm isn’t irrational; it’s responding to something real. The honest conclusion is that Moltbot is too powerful and too dangerous for general use, and the direction it points is probably inevitable anyway. Somewhere between the current sandbox constraints and Moltbot’s radical openness lies a workable balance of capability and safety. Finding that balance—through better architectures, improved security practices, regulatory frameworks, and user education—will define the next era of AI development.The Lessons So Far
The Moltbot saga—all seven days of it at the time of writing—offers several takeaways for anyone thinking about the AI assistant space. First, users want AI that does things. The explosive adoption proves that latent demand exists for agents that execute rather than advise. The sandbox approach of mainstream assistants leaves users hungry for more capability, and they’ll accept significant risk to get it. Second, the security implications of agentic AI aren’t solved problems. They’re not even close. The gap between “can execute shell commands” and “can be deployed safely” remains vast, and current architectures offer no clear path to closing it. Anyone building in this space must treat security as a fundamental design constraint, not a feature to add later. Third, brand protection in fast-moving technical communities requires judgment. Anthropic’s trademark enforcement was legally defensible but strategically questionable, triggering consequences that neither party intended. Companies building platforms for developer communities should consider the full system effects of their actions. Fourth, the exploitation ecosystem operates faster than legitimate projects. Ten seconds between releasing an old handle and claiming a new one was enough for scammers to cause millions in losses. Any popular project must anticipate and plan for predatory behavior. Finally, the gap between technical documentation and user understanding matters enormously. Steinberger was honest about risks, but honesty in a README competes against the excitement of watching AI automate tedious tasks. Bridging that gap—making risks visceral, not abstract—remains an unsolved problem in AI deployment.Where We Go From Here
Peter Steinberger built something that showed everyone what’s coming. The question now is whether we’ll navigate the transition wisely or learn the hard lessons through accumulated disasters. Given the $16 million scam, the exposed credentials, the malware exploitations, and the enterprise shadow deployments all occurring within Moltbot’s first week of viral attention, early indications suggest we’re choosing the hard way. But perhaps that’s inevitable with genuinely transformative technology. The first cars had no seatbelts; early internet architecture assumed good faith. Safety mechanisms often emerge from painful experience rather than foresight. Moltbot might be remembered as an important experiment—the moment we discovered that giving AI the keys requires locks we haven’t invented yet. For now, it exists as both promise and warning: an Austrian developer’s hobby project that briefly broke the internet, fueled a scam, exposed hundreds of users, and gave everyone a glimpse of something genuinely new. The future of AI that does things is coming whether we’re ready or not. Moltbot just showed us we’re not.Moltbot is available at molt.bot (formerly clawd.bot). The project remains open source with over 85,000 GitHub stars. Prospective users should carefully review security documentation and consider running only in isolated environments with throwaway credentials. Installation is straightforward, but safe operation requires expertise most users don’t have—and that gap is exactly the problem.