On March 31, the internal source code of ‘Claude Code’, an AI-powered coding assistant by Anthropic, was leaked. Software developers across the world can now look into how the tool works, with access to 512,000 lines of its code.
The leak, Anthropic confirmed, was due to a “release packaging issue caused by human error” and not a targeted hack or security breach. When a team member manually ran a production build for the Claude Code version 2.1.88 update, the build process accidentally published an internal file publicly, allowing anyone who found it to download it as a zip file.
Anthropic fixed the issue soon after, but by then the code had already been mirrored on GitHub, where at least 50,000 copies have since been downloaded.
CNBC reports that the leak is a blow to the startup: Anthropic’s competitors can now get an insight into how it built its viral coding tool. Little wonder that a post on X linking to the leaked code has amassed more than 21 million views since the leak became public.
More amusing still, a new debate has taken centre stage: was the leak real, or was it actually an April Fools’ Day prank? Rubén Domínguez Ibar, an angel investor and founder of AI Corner, claimed on LinkedIn that a fake codebase had been planted in a staging environment and deliberately left unsecured. He added that 44 fictional feature flags were designed to be just believable enough, and that cybersecurity researchers from Cambridge and LayerX spent an entire weekend analysing the fake documents. He also attached a picture of a statement, purportedly from Claude, declaring the whole affair an April Fools’ joke; however, the source of that statement cannot be traced, and the speculation that the leak was a deliberate stunt remains unverified.
If, however, the leak was indeed accidental, critics note the irony: a company known for intense security controls (e.g., the Pentagon case) ended up shipping its own blueprints to the public through a basic configuration error that should have been caught in a routine code review. The leak may not sink Anthropic’s business, but it gives every competitor a free engineering education in how to build a good AI coding agent and which tools to focus on.

