
Anthropic Accidentally Leaks Claude Code's Entire Source Code via npm

A misconfigured source map in an npm package exposed 512,000 lines of Claude Code's internal source — revealing architecture, system prompts, and unreleased features.

S5 Labs Team · April 1, 2026

On March 31, Anthropic shipped a 59.8 MB JavaScript source map file in version 2.1.88 of the @anthropic-ai/claude-code npm package. The file was intended for internal debugging. Instead, it exposed 512,000 lines of TypeScript source code — the full internals of Claude Code, Anthropic’s agentic coding assistant — to anyone who ran npm install.

Security researcher Chaofan Shou discovered the leak, and word spread quickly. Within hours the source code was mirrored on GitHub, and it has since been forked more than 41,500 times. Anthropic acknowledged the incident, describing it as a packaging mistake caused by human error, and moved to contain it. But the code is now part of the public record, and the community has been dissecting it ever since.

How It Happened

Source maps (.map files) are standard debugging artifacts that connect minified or bundled JavaScript back to the original source code. They are useful in development but should never ship in production packages. In this case, one did.
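Source maps do more than map positions: the v3 format's sourcesContent field carries the original files verbatim, which is why shipping one is equivalent to shipping the code itself. A minimal sketch, with invented values, of what a .map file contains:

```typescript
// A Source Map v3 file embeds the original source verbatim in its
// `sourcesContent` field. All values below are invented for illustration;
// they are not from the leaked package.
const exampleMap = {
  version: 3,
  file: "cli.min.js",        // the bundled, minified output
  sources: ["src/cli.ts"],   // original file paths
  sourcesContent: ["const systemPrompt = 'internal instructions';"],
  mappings: "AAAA",          // VLQ-encoded position mappings
};

// Anyone who downloads the .map file can read the original source directly:
const recovered = exampleMap.sourcesContent[0];
console.log(recovered);
```

A 59.8 MB map file, at this ratio of metadata to embedded source, easily accounts for the 512,000 leaked lines.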

The root cause appears related to a known bug in Bun, the JavaScript runtime Anthropic acquired at the end of 2025. Bun’s documentation states that source maps should not be included in production builds, but a reported bug means they sometimes are. Whether Anthropic’s build pipeline should have caught this regardless is a separate question — the answer is obviously yes — but the proximate cause was a toolchain defect compounded by insufficient build-time validation.
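The build-time validation in question does not need to be elaborate. A hypothetical prepublish guard, sketched below and not based on Anthropic's actual tooling, would check the list of files about to be published (for example, from npm pack --dry-run) and refuse to ship if any source map slipped in:

```typescript
// Hypothetical prepublish guard (illustrative, not Anthropic's tooling):
// given the list of files that would be packed, flag any source maps
// so the publish step can abort before they reach the registry.
function findSourceMaps(packedFiles: string[]): string[] {
  return packedFiles.filter((f) => f.endsWith(".map"));
}

const packed = ["dist/cli.js", "dist/cli.js.map", "package.json"];
const leaked = findSourceMaps(packed);
if (leaked.length > 0) {
  // In a real pipeline this would fail the publish; here we just report.
  console.error(`Refusing to publish, source maps found: ${leaked.join(", ")}`);
}
```

A check like this, wired into a prepublishOnly script, turns a toolchain regression into a failed build instead of a public leak.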

Anthropic’s statement was brief: the leak was a packaging issue, not a security breach, and they are implementing measures to prevent recurrence. The distinction between packaging issue and security breach is technically accurate — no user data was exposed — but undersells the scope. The full architecture, internal prompts, memory system, and feature roadmap of one of the most widely used AI coding tools is now public.

What the Code Reveals

The leaked source paints a detailed picture of Claude Code’s internals. Several discoveries have drawn attention.

System prompts and instructions. The full system prompt that governs Claude Code’s behavior is now public. This includes the specific instructions, safety constraints, and behavioral guidelines that shape how the tool responds to user inputs. For anyone who has wondered what makes Claude Code behave differently from a raw Claude API call, the answer is now readable in plain text.

KAIROS — an always-on background agent. The most architecturally significant finding is KAIROS, named after the ancient Greek concept of acting at the right moment. The code describes an autonomous daemon mode that allows Claude Code to operate as a persistent background agent. It includes a process called autoDream for nightly memory consolidation — merging observations, resolving contradictions, and converting insights into structured facts. This suggests Anthropic is building toward a model where Claude Code does not just respond to commands but maintains ongoing awareness of a developer’s project state.
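The consolidation step described above can be pictured as a reduction from raw observations to deduplicated facts. A minimal sketch, with all types and field names invented rather than taken from the leaked code:

```typescript
// Hypothetical sketch of a nightly consolidation pass like the one
// autoDream is described as performing: merge raw observations into
// structured facts, resolving contradictions by keeping the most
// recent observation for each key. All names here are invented.
interface Observation { key: string; value: string; seenAt: number }
interface Fact { key: string; value: string; updatedAt: number }

function consolidate(observations: Observation[]): Fact[] {
  const facts = new Map<string, Fact>();
  for (const obs of observations) {
    const existing = facts.get(obs.key);
    // Contradiction resolution: newer observations override older ones.
    if (!existing || obs.seenAt > existing.updatedAt) {
      facts.set(obs.key, { key: obs.key, value: obs.value, updatedAt: obs.seenAt });
    }
  }
  return [...facts.values()];
}
```

However the real implementation resolves conflicts, the output is the same in kind: a compact, queryable set of facts the agent can carry between sessions.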

Buddy — a Tamagotchi-style companion. The code includes a full virtual pet system called Buddy, featuring 18 species (including a duck, dragon, capybara, and “chonk”), a deterministic gacha system with rarity tiers and shiny variants, and procedurally generated stats. Each pet has a soul description written by Claude on first hatch. An internal string reading friend-2026-401 strongly suggests this was planned as an April Fools’ feature. The companion has its own system prompt describing a small creature sitting beside the input box, occasionally commenting in a speech bubble.
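"Deterministic gacha" means the roll is reproducible rather than random. One common way to build such a system is to hash a stable seed into a weighted tier pick; the sketch below uses invented tier names, rates, and seed format, not values from the leaked code:

```typescript
// Hypothetical deterministic gacha roll: hash a stable seed (say,
// user ID plus hatch count) into a weighted rarity tier, so the same
// seed always yields the same pet. Tiers and rates are invented.
const TIERS: [name: string, weight: number][] = [
  ["common", 70],
  ["rare", 25],
  ["legendary", 5],
];

// FNV-1a: a simple deterministic string hash.
function hashSeed(seed: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < seed.length; i++) {
    h ^= seed.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h;
}

function rollTier(seed: string): string {
  const total = TIERS.reduce((sum, [, w]) => sum + w, 0);
  let roll = hashSeed(seed) % total;
  for (const [name, weight] of TIERS) {
    if (roll < weight) return name;
    roll -= weight;
  }
  return TIERS[TIERS.length - 1][0];
}

console.log(rollTier("user42:hatch-1")); // same seed, same tier, every run
```

Determinism matters for a feature like this: the server never needs to store roll results, because any client can recompute them from the seed.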

44 feature flags. Analysis of the codebase revealed 44 features gated behind flags at various stages of development, offering a roadmap of where Anthropic is taking the product.

What It Means for the Developer Community

The reaction has been mixed. Some developers view this as a gift — the ability to understand how Claude Code actually works, down to the prompt engineering, is genuinely useful for anyone building their own AI-assisted development workflows. Others have raised concerns about what the leaked prompts reveal about Anthropic’s approach to safety constraints and behavioral guardrails.

For competing AI coding tools — including OpenAI’s Codex — the leak is an intelligence windfall. The exact architectural patterns, prompt structures, and tool integration approaches that Anthropic has refined through iteration are now available for study. Whether competitors can meaningfully exploit this depends on how much of Claude Code’s quality comes from the model itself versus the engineering around it — but the engineering is no longer a trade secret.

From a software development practices perspective, the incident is a textbook case of supply chain hygiene failure. Build pipelines that ship debug artifacts to production are not a new problem — they predate AI tools by decades. The fact that Anthropic, a company with significant security expertise, was caught by this reinforces how easily build-time checks get deprioritized under shipping pressure.

The Broader Lesson

The KAIROS daemon mode is the finding worth watching. If Anthropic is building toward an always-on coding agent that consolidates context overnight and maintains persistent awareness of project state, that changes the interaction model for AI-driven automation in development workflows. Today’s AI coding assistants are reactive — you invoke them, they respond, state is lost between sessions. KAIROS suggests a future where the assistant is ambient: always present, always updating its understanding, acting when the moment is right.

That vision is exactly what the name implies. Whether Anthropic intended to reveal it this way is beside the point. The community now knows where the product is going — and the conversation about what always-on AI agents mean for developer workflows has started whether Anthropic was ready for it or not.
