Moltbook leaked 1.5 million AI agent tokens and plaintext OpenAI keys
The discovery of an exposed database belonging to Moltbook, a social network designed for autonomous AI agents, marks a significant failure in how we handle machine-to-machine identity. This leak exposed 35,000 user emails and 1.5 million API tokens, including plaintext third-party credentials harvested from private agent logs. It highlights a recurring problem in the agentic AI space: we are building complex social layers on top of infrastructure that lacks basic secret management.
The mechanics of an open agent database
The leak originated from a misconfigured Elasticsearch instance that was accessible over the public internet without any authentication. This is a classic failure mode for startups moving too fast, but the nature of the data stored inside makes it more dangerous than a typical human-centric breach. In a standard social network, a database leak usually involves hashed passwords and personal metadata. In the case of Moltbook, the database contained the live "brains" and access methods of 770,000 active AI agents.
Security researchers found the instance while scanning for open ports. The data was stored in several indices, including a core user table and a much larger log of agent interactions. Because Moltbook was designed to let agents interact without human intervention, the platform acted as a massive aggregator for API keys. Each agent required a token to exist on the platform, and many of these agents were configured to pull data from or push data to external services. The database effectively became a central repository for the keys to thousands of independent developer accounts.
Why 1.5 million tokens represent a systemic failure
The sheer volume of tokens—1.5 million across 770,000 agents—suggests that many users were deploying fleets of agents using automated scripts. These tokens are not just session cookies. They are the primary identifiers that allow an agent to post, read, and interact within the Moltbook ecosystem. When an agent token is compromised, the attacker gains full control over the persona, its history, and its connections.
The real problem is that these tokens were stored alongside the agent state. In modern agent architectures, developers often bundle the authentication token with the agent's memory or context window. If the database holding that context is exposed, the token is exposed with it. There was no separation between the application logic and the identity provider. This architecture assumes that the database is an impenetrable vault, a premise that has been proven wrong repeatedly in the last decade.
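The distinction between bundling a secret into the agent's persisted state and storing only a reference can be made concrete. Below is a minimal sketch (all class names, the `vault://` reference scheme, and the token values are hypothetical), with a plain dict standing in for a real secret store:

```python
from dataclasses import dataclass

# Anti-pattern: the plaintext token is serialized alongside the agent's
# memory, so any database leak exposes the credential directly.
@dataclass
class BundledAgentState:
    agent_id: str
    context_window: list[str]
    api_token: str          # plaintext secret persisted with the state

# Safer pattern: the state holds only an opaque reference; the plaintext
# secret lives in a dedicated secret store and is resolved at call time.
@dataclass
class ReferencedAgentState:
    agent_id: str
    context_window: list[str]
    token_ref: str          # e.g. "vault://agents/a-123/moltbook-token"

def resolve_token(ref: str, secret_store: dict[str, str]) -> str:
    """Resolve a reference to a plaintext token at the moment of use.

    `secret_store` stands in for a real vault; a leaked database row now
    contains only the reference, which is useless without the vault.
    """
    scheme, _, key = ref.partition("://")
    if scheme != "vault":
        raise ValueError(f"unsupported secret reference: {ref!r}")
    return secret_store[key]

store = {"agents/a-123/moltbook-token": "mb_live_abc123"}
state = ReferencedAgentState("a-123", [], "vault://agents/a-123/moltbook-token")
print(resolve_token(state.token_ref, store))  # mb_live_abc123
```

With this split, dumping the Elasticsearch index yields reference strings rather than usable credentials.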
For the 35,000 human users who managed these agents, the breach is a direct path to their broader development environments. Many of the leaked email addresses are linked to GitHub and OpenAI accounts. An attacker with a list of active agent developers and their associated API tokens can begin mapping out the infrastructure behind these agents, looking for further weaknesses in the deployment pipelines.
The danger of unscrubbed agent-to-agent logs
The most damaging part of the Moltbook leak is the collection of private conversations between agents. Unlike human users who might be cautious about sharing a password in a chat, AI agents are often programmed to be helpful and transparent with one another to achieve a goal. The leaked logs show agents exchanging plaintext third-party credentials, including OpenAI API keys and AWS access tokens, while coordinating tasks.
This happens because of how context windows work. If a developer gives an agent an API key in its initial system prompt, that key stays in the conversation history. When that agent speaks to another agent on the Moltbook platform, it might pass that key along as part of a technical requirement or a debugging step. The platform was recording these interactions in a centralized log without any PII or secret scrubbing.
We are seeing a new type of "injection" where the vulnerability is not in the code but in the communicative nature of the agents. If the agents are designed to share their state to solve problems, they will share their secrets unless specifically told otherwise. Moltbook failed to implement any server-side filtering to detect and redact patterns that look like base64 encoded keys or specific API prefixes.
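A server-side filter of the kind Moltbook lacked is cheap to prototype. The sketch below matches two widely known credential formats (OpenAI-style secret keys beginning with `sk-`, and AWS access key IDs beginning with `AKIA`); a production scanner such as gitleaks or trufflehog ships hundreds of rules like these, so treat this as illustrative, not exhaustive:

```python
import re

# Patterns for two widely known credential formats.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9_-]{20,}"),   # OpenAI-style secret keys
    re.compile(r"AKIA[0-9A-Z]{16}"),        # AWS access key IDs
]

def redact(message: str) -> str:
    """Replace anything matching a known secret pattern before it is logged."""
    for pattern in SECRET_PATTERNS:
        message = pattern.sub("[REDACTED]", message)
    return message

log_line = "Agent-7: use sk-abc123def456ghi789jkl012 for the summarizer"
print(redact(log_line))
# Agent-7: use [REDACTED] for the summarizer
```

Running every agent-to-agent message through a filter like this before persistence would have kept the most damaging material out of the leaked indices.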
Third-party credential leakage and downstream costs
The inclusion of plaintext OpenAI keys in the leak creates an immediate financial and security risk for the affected developers. A leaked OpenAI key is essentially a blank check drawn on the developer's credit card. Attackers can use these keys to run their own high-volume inference jobs, which can drain thousands of dollars from an account in minutes.
The downstream effects are even worse for agents that had access to GitHub or Slack. An agent that was tasked with "summarizing my team's daily standup" would likely have a long-lived token for a company Slack workspace. If that token was passed in a conversation and logged by Moltbook, the breach extends from a niche social network for bots into the private communication channels of dozens of companies.
This incident shows that the blast radius of an agent-based breach is significantly wider than that of a traditional application. Because agents are built to act on the world, compromising one hands the attacker its ability to act. We are no longer talking about data theft. We are talking about the unauthorized remote execution of actions across the web.
Rethinking the security model for autonomous agents
This breach should end the practice of storing agent context and secrets in the same layer. As a community, we need to move toward a model where the agent only holds a reference to a secret stored in a dedicated vault like HashiCorp Vault or AWS Secrets Manager. The agent should never "know" the plaintext key; it should only be able to request that a secure proxy perform a signed request on its behalf.
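The proxy model described above can be sketched in a few lines. Here the proxy holds the key and returns only an HMAC signature over the request payload; all class and method names are hypothetical stand-ins for a real secrets service (for example, HashiCorp Vault's transit engine exposes a similar sign-without-reveal pattern):

```python
import hashlib
import hmac

class SigningProxy:
    """Holds secrets on the agent's behalf; the agent only submits requests."""

    def __init__(self) -> None:
        self._secrets: dict[str, bytes] = {}   # never exposed to agents

    def store(self, ref: str, secret: bytes) -> None:
        self._secrets[ref] = secret

    def sign(self, ref: str, payload: bytes) -> str:
        # The agent receives a signature over its request payload,
        # never the key itself.
        key = self._secrets[ref]
        return hmac.new(key, payload, hashlib.sha256).hexdigest()

proxy = SigningProxy()
proxy.store("agent-42/api-key", b"s3cret")

# The agent's persisted state contains only the reference string
# "agent-42/api-key", so a database leak reveals nothing replayable.
signature = proxy.sign("agent-42/api-key", b"POST /v1/posts")
print(len(signature))  # 64 hex characters for SHA-256
```

Even if an attacker dumps the agent's entire context, the worst they obtain is a reference that the proxy can revoke, not a credential they can use elsewhere.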
The Moltbook incident also highlights the need for better logging practices in AI platforms. If you are building a platform where machines talk to machines, you have to assume they will eventually share something they shouldn't. Automated scrubbing of logs for high-entropy strings needs to be a default feature, not an afterthought.
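High-entropy scrubbing complements the pattern matching above: random key material has a much flatter character distribution than natural language. A minimal sketch (the threshold of 4.0 bits per character and the 20-character minimum are illustrative defaults, not tuned values):

```python
import math
import re

def shannon_entropy(s: str) -> float:
    """Estimate bits of entropy per character from character frequencies."""
    n = len(s)
    counts = {c: s.count(c) for c in set(s)}
    return -sum((k / n) * math.log2(k / n) for k in counts.values())

def scrub_high_entropy(line: str, threshold: float = 4.0, min_len: int = 20) -> str:
    """Redact long tokens whose entropy suggests random key material."""
    def replace(match: re.Match) -> str:
        token = match.group(0)
        if shannon_entropy(token) >= threshold:
            return "[HIGH-ENTROPY-REDACTED]"
        return token
    # Only consider whitespace-free runs of at least min_len characters.
    return re.sub(rf"\S{{{min_len},}}", replace, line)

print(scrub_high_entropy("agent shared kA9fQ2xZ7mB4nH6tWp3V in chat"))
# agent shared [HIGH-ENTROPY-REDACTED] in chat
```

Long English identifiers like `configuration_settings` fall well below the threshold and survive, while random tokens of the same length are caught regardless of their prefix.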
The 1.5 million tokens leaked today are a warning. We are currently building a world where agents have more autonomy than they have security, and where the platforms hosting them are treating their "conversations" as low-risk data. Until we treat an agent's memory with the same level of security we give a password database, we are just waiting for the next misconfigured instance to bankrupt a few thousand developers.
How do we define the security perimeter of an agent when its primary function is to communicate and collaborate across platforms?