Security researchers have discovered Anthropic’s Claude Code source code spread across the open internet, revealing a massive leak of proprietary AI tooling. A researcher found that the entire TypeScript codebase for Anthropic’s flagship CLI tool had been accidentally published to the npm registry via a source map file. The incident exposed nearly 512,000 lines of code across 1,900 files, including unreleased features, internal codenames, and a hidden Tamagotchi-style digital pet within the tool. Anyone can now inspect the architecture that powers Anthropic’s command-line AI assistant, but the exposure is a security risk that Anthropic is scrambling to contain.
The npm Mistake
The leak occurred because a source map file, which maps minified output back to original source, was included in the published npm package. This file contained the full source code, comments, and feature flags. The misconfiguration shipped these files to npm, allowing anyone to access the entire codebase. The leaked code quickly spread to GitHub, with repositories accumulating over 1,100 stars. Anthropic responded with takedowns, but cached copies remain available for anyone to review the internal workings of their development suite.
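A simple safeguard against this class of mistake (a generic sketch, not a reconstruction of Anthropic’s actual build setup) is an `.npmignore` that keeps compiled output out of no one’s hands while excluding map files and raw sources from the published tarball:

```
# .npmignore — ship compiled output, drop source maps and raw sources
*.map
src/
```

Running `npm pack --dry-run` before publishing prints the exact file list that will ship, which catches a stray source map before it ever reaches the registry.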
Architecture Highlights
Powering the Tools
The Tool System uses a plugin-like architecture with roughly 40 tools. Each capability, such as file reading or bash execution, is a discrete, permission-gated tool. The base tool definition alone spans 29,000 lines of TypeScript, showing the depth of these capabilities. This modular approach suggests Anthropic deliberately designed the CLI to feel like a full production system rather than a thin chat wrapper.
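The permission-gating idea can be sketched in a few lines of TypeScript. This is a minimal, hypothetical illustration of the pattern; the interface names and the `ToolRegistry` class are invented here, not taken from the leaked code.

```typescript
// Hypothetical sketch of a permission-gated tool registry.
type Permission = "read" | "write" | "execute";

interface Tool {
  name: string;
  requires: Permission[]; // permissions this tool needs to run
  run(input: string): string;
}

class ToolRegistry {
  private tools = new Map<string, Tool>();

  register(tool: Tool): void {
    this.tools.set(tool.name, tool);
  }

  // A tool only runs if the session has every permission it requires.
  invoke(name: string, input: string, granted: Set<Permission>): string {
    const tool = this.tools.get(name);
    if (!tool) throw new Error(`unknown tool: ${name}`);
    for (const p of tool.requires) {
      if (!granted.has(p)) throw new Error(`permission denied: ${p}`);
    }
    return tool.run(input);
  }
}

const registry = new ToolRegistry();
registry.register({
  name: "file_read",
  requires: ["read"],
  run: (path) => `contents of ${path}`,
});

const session = new Set<Permission>(["read"]);
console.log(registry.invoke("file_read", "README.md", session));
// → "contents of README.md"
```

The key design point is that every capability passes through one chokepoint, so a permission check cannot be accidentally skipped by an individual tool.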
Query Engine & Orchestration
The Query Engine is the brain, handling all LLM API calls, streaming, and orchestration. It is the largest single module in the codebase, highlighting its central role. Beyond simple interaction, Claude Code uses Multi-Agent Orchestration to spawn sub-agents, called “swarms,” to handle complex, parallelizable tasks. Each agent runs in its own context with specific tool permissions, enabling parallel problem-solving.
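The fan-out pattern described above can be sketched as follows. This is an assumed, simplified model: the `SubAgent` shape, the `runAgent` stand-in for an LLM call, and the per-agent permission list are all invented for illustration, and the real orchestration is far more involved.

```typescript
// Hypothetical sketch of sub-agent ("swarm") orchestration.
interface SubAgent {
  id: string;
  allowedTools: string[]; // each agent gets its own tool permissions
  task: string;
}

// Stand-in for an LLM call: each agent works in its own isolated context.
async function runAgent(agent: SubAgent): Promise<string> {
  return `${agent.id} finished "${agent.task}" using [${agent.allowedTools.join(", ")}]`;
}

// Parallelizable tasks are fanned out to sub-agents and the results gathered.
async function orchestrate(tasks: string[]): Promise<string[]> {
  const agents: SubAgent[] = tasks.map((task, i) => ({
    id: `agent-${i}`,
    allowedTools: ["file_read"], // least privilege per agent
    task,
  }));
  return Promise.all(agents.map(runAgent));
}

orchestrate(["lint src/", "summarize README.md"]).then((results) =>
  results.forEach((r) => console.log(r))
);
```

Because each sub-agent carries its own context and tool list, a failure or over-reach in one branch stays contained rather than polluting the parent conversation.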
Bridge Systems & Memory
Anthropic built an IDE Bridge System, a bidirectional communication layer connecting the CLI to IDE extensions. This layer uses JWT-authenticated channels to enable the “Claude in your editor” experience, allowing the tool to edit files directly within your development environment. The leak also revealed a Persistent Memory System that stores and retrieves data across sessions. You’ll even find a hidden Tamagotchi, a small digital pet inside Claude Code, serving as a fun, low-stakes easter egg.
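The gating idea behind an authenticated channel can be shown with a minimal stand-in. A real JWT carries a header/payload/signature triple; this sketch signs just a payload with HMAC-SHA256 from Node’s standard `crypto` module to show how only verified messages get to trigger edits. The secret and message shape are invented, and this is not Anthropic’s implementation.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Illustrative shared secret between the CLI and the IDE extension.
const SECRET = "shared-between-cli-and-ide";

// Sign a payload: base64url(payload) + "." + HMAC-SHA256 signature.
function sign(payload: string): string {
  const sig = createHmac("sha256", SECRET).update(payload).digest("base64url");
  return `${Buffer.from(payload).toString("base64url")}.${sig}`;
}

// Verify a token; return the payload on success, null on any mismatch.
function verify(token: string): string | null {
  const [body, sig] = token.split(".");
  if (!body || !sig) return null;
  const payload = Buffer.from(body, "base64url").toString();
  const expected = createHmac("sha256", SECRET).update(payload).digest("base64url");
  const a = Buffer.from(sig);
  const b = Buffer.from(expected);
  // Constant-time comparison prevents timing attacks on the signature.
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  return payload;
}

// Only verified messages are allowed to trigger edits in the editor.
const token = sign(JSON.stringify({ cmd: "edit", file: "main.ts" }));
const msg = verify(token);
console.log(msg !== null ? `accepted: ${msg}` : "rejected");
```

The point of the pattern is that the editor never acts on a message it cannot cryptographically tie back to the CLI process it trusts.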
Practitioners’ Perspective
For software engineers, the leak is a rare, unfiltered look into how a major AI tool is built. Observers noted the use of Bun, not Node, as the bundler and runtime, showing Anthropic’s preference for modern tooling. The modular tool architecture, with roughly 40 built-in tools, offers a template for building similar systems, emphasizing permission gates and clear role separation. The leak’s speed, with repositories going viral within hours, underscores the importance of careful build configurations and .npmignore rules. For enterprise customers, the accidental disclosure highlights the need for robust security reviews of all open-source dependencies, even seemingly minor artifacts like source maps.
Industry Implications
This is one of the largest accidental source code leaks in AI tooling history, exposing internal development strategies and unreleased features. The speed of the leak, with cached copies spreading widely, shows the difficulty of containing such disclosures. The presence of unreleased AI agent features suggests Anthropic’s roadmap may be visible publicly, potentially affecting product timing. The leak’s focus on CLI tooling highlights the growing importance of command-line AI agents, which are moving from niche to mainstream for their speed and integration capabilities. Anthropic’s response, including takedowns, will likely lead to stricter build and publishing procedures, with other AI companies watching closely to avoid similar mistakes.
