
Anthropic has built its public image around being the careful AI company. It publishes extensive research on AI risks, hires top safety researchers in the field, and speaks often about the responsibilities of building such powerful technology; it is so outspoken, in fact, that it is currently in a dispute with the Department of Defense. On Tuesday, unfortunately, someone forgot to check a box.
It's the second such incident in a week. Last Thursday, Fortune reported that Anthropic had accidentally made almost 3,000 internal documents publicly accessible, including a draft blog post describing a major new model the company had not yet announced.
Here's what happened on Tuesday: when Anthropic shipped version 2.1.88 of its Claude Code package, it accidentally included a file that exposed nearly 2,000 source code files and more than 512,000 lines of code, effectively the complete blueprint for one of its flagship products. A security researcher named Chaofan Shou spotted the problem almost immediately and posted about it on X. Anthropic's statement to various outlets was fairly relaxed as these things go: "This was a release packaging issue triggered by human error, not a security compromise." (One suspects the internal tone was less measured.)
Claude Code is no minor product. It's a command-line tool that lets developers use Anthropic's AI to write and edit code, and it has become successful enough to rattle competitors. According to the WSJ, OpenAI shut down its video generation service Sora just six months after launch to refocus on developers and enterprises, in part because of Claude Code's growing traction.
What leaked was not the AI model itself but the software scaffolding around it: the instructions that shape the model's behavior, the tools it can call, and the limits placed on it. Developers began publishing detailed teardowns almost immediately, with one describing the product as "a production-grade developer experience, not merely an interface for an API."
Whether any of this matters in the long run is a question for developers. Competitors may learn something from the architecture, but the industry is moving fast.
Still, somewhere at Anthropic, one imagines a very talented engineer spending the rest of the day quietly wondering about their job security. One can only hope it's not the same engineer, or engineering team, as last week.

