Anthropic has accidentally released a massive amount of internal source code for Claude Code onto the open internet. The company confirmed a release packaging error resulted in roughly 510,000 lines of internal TypeScript code being publicly accessible. It’s a significant leak, particularly since Claude Code isn’t just a simple script; it’s the sophisticated harness that allows Claude to interact with external software tools.
How the Leak Occurred
The mistake was discovered when a security researcher noticed that Anthropic’s Claude Code CLI source maps were publicly readable. The Register reported the issue stemmed from a reference to unobfuscated TypeScript source within a map file included in the Claude Code npm package. Anthropic quickly clarified that, while embarrassing, the exposure wasn’t a security breach, and assured users that no sensitive customer data or credentials were involved. A spokesperson called it a simple “release packaging issue caused by human error,” and the company says it has already implemented measures to prevent a recurrence.
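To see why a stray map file matters: a JavaScript source map can embed the original, unminified source verbatim in its `sourcesContent` field. The sketch below is purely illustrative (the file name, function, and map contents are invented, not from the leaked package); it shows how anyone who downloads a package shipping such a map can recover the original TypeScript.

```typescript
// Illustrative example only: a minimal Source Map v3 object with the
// original TypeScript embedded via "sourcesContent". The names and
// contents here are hypothetical, not taken from Anthropic's package.
const exampleMap = JSON.stringify({
  version: 3,
  file: "cli.js",
  sources: ["../src/cli.ts"], // path back to the original source file
  sourcesContent: [
    "export function runTool(name: string) { /* original TS here */ }",
  ],
  mappings: "AAAA",
});

// Recovering the source is as simple as parsing the map and reading
// the embedded content back out:
const map = JSON.parse(exampleMap);
for (let i = 0; i < map.sources.length; i++) {
  console.log(`--- ${map.sources[i]} ---`);
  console.log(map.sourcesContent?.[i] ?? "(not embedded)");
}
```

This is why build pipelines typically either strip `.map` files from published npm packages or omit `sourcesContent` from production maps.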
Anthropic’s Repeated Security Lapses
It’s becoming a habit for Anthropic to accidentally expose internal data. In the same week, Fortune reported that Anthropic had also made draft blog posts publicly accessible; one leaked draft discussed a new cybersecurity model called “Capybara.” A few weeks prior, Anthropic exposed 5,000 files in a similar incident, including another draft blog post about a powerful new cybersecurity model. It’s one security lapse after another, and it’s not exactly inspiring confidence in the company’s internal controls.
Why the Code Matters to You
When you use Claude Code, you aren’t just talking to a model; you’re interacting with a complex software harness. The leaked code offered a peek into that intellectual property: competitors could theoretically reverse-engineer how Anthropic structures tool interactions, either to build a better product or simply to copy the functionality. The leak also provided further evidence that Anthropic is working on a new model, likely named “Capybara.” As Roy Paz, a senior AI security researcher at LayerX Security, noted, Anthropic may release “fast” and “slow” versions of the new model, or perhaps make it the company’s most advanced option yet.
