On March 31, 2026, at 08:23 UTC, Chaofan Shou, an intern at Solayer Labs, posted something unusual to X. He had downloaded the latest npm package for Anthropic's Claude Code CLI tool, version 2.1.88, and found something that should not have been there: a 59.8-megabyte file called cli.js.map containing the complete, unobfuscated TypeScript source code of one of AI's most closely watched developer tools.

Within hours, the code was mirrored to GitHub, where a single repository accumulated over 1,100 stars and 1,900 forks. By mid-afternoon, the leak had been covered by VentureBeat, CyberNews, Rolling Out, and dozens of security publications. It became the defining technology story of the day, and arguably the most significant accidental source code exposure in the AI industry to date.

This is not a story about a hack. Nobody broke into Anthropic's systems. What happened was a configuration mistake: the kind that any engineering team shipping fast can make. A source map file, a standard debugging artifact, was accidentally included in a public npm release. That file happened to contain 512,000 lines of original TypeScript across approximately 1,900 files. Everything about how Claude Code actually works was suddenly public.

How a Source Map Becomes a Source Code Leak

Source maps exist to help developers debug minified or bundled JavaScript. When you compress and bundle a large codebase into a single file, the resulting output is unreadable. Source maps translate error stack traces back into the original source lines, and they can optionally go further: the sourcesContent field embeds the complete text of every original source file as a string inside the .map file, so debugging works even when the original files are not on hand.

In a production npm package intended for end users, source maps should either be excluded from the published files or stripped of their sourcesContent field. Anthropic's build pipeline did neither for version 2.1.88. The full source was sitting inside the package that every Claude Code user had already downloaded.
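The mechanism is easy to see in miniature. The field names below follow the Source Map v3 format; the object contents and the stripping helper are illustrative, not taken from Anthropic's build pipeline:

```typescript
// A minimal Source Map v3 object. The optional sourcesContent field
// embeds the full text of each original source file as a string --
// this is what turned Claude Code's .map file into a source leak.
const sourceMap = {
  version: 3,
  file: "cli.js",
  sources: ["src/QueryEngine.ts", "src/Tool.ts"],
  sourcesContent: [
    "// ...entire original QueryEngine.ts text...",
    "// ...entire original Tool.ts text...",
  ],
  mappings: "AAAA,SAAS", // encoded position mappings
};

// Hypothetical pre-publish step: drop sourcesContent but keep the
// mappings, so stack traces still resolve to file and line numbers
// without shipping the original source text.
function stripSourcesContent<T extends { sourcesContent?: unknown }>(map: T) {
  const { sourcesContent, ...rest } = map;
  return rest;
}

const publishable = stripSourcesContent(sourceMap);
```

A step like this in the release pipeline, or simply excluding .map files from the npm package's published file list, would have prevented the exposure.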

Anthropic moved quickly once the exposure was confirmed. The npm package was updated to remove the source map, and older vulnerable versions were pulled from the registry. But the response came too late. The code had already been archived, mirrored, and analysed. The cat, as several commentators noted, was not going back into the bag.

The Second Leak in Five Days

Context makes this incident more significant. March 31 was not Anthropic's first operational security failure of the week. On March 26, a CMS misconfiguration had separately exposed what was being described internally as "Claude Mythos" materials: unreleased model details, draft blog posts, and approximately 3,000 unpublished assets from the company's content management system.

Two major accidental exposures in five days, neither caused by external attackers, both caused by configuration errors, at a company whose entire business is built on the premise that it builds AI more carefully than anyone else. The irony was not lost on the developer community. One widely circulated observation: "They forgot to add 'make no mistakes' to the system prompt."

What the Code Actually Revealed

The leaked code was not Anthropic's core AI research. It was not model weights, training data, or the systems that make Claude intelligent. What it was, however, was the complete implementation of Claude Code: its orchestration logic, tool definitions, permission systems, memory architecture, multi-agent coordination, and perhaps most interestingly, its roadmap.

The Architecture

Claude Code is built on the Bun runtime (not Node.js), with React and the Ink library for terminal UI rendering. The codebase uses Zod v4 for schema validation throughout. The largest single file, QueryEngine.ts, runs to 46,000 lines and handles all LLM API calls, streaming, caching, and orchestration. The tool definitions live in Tool.ts at 29,000 lines. There are approximately 40 agent tools in total, including BashTool, FileEditTool, GlobTool, WebFetchTool, MCPTool, LSPTool, NotebookEditTool, SkillTool, REPLTool, and AgentTool.

The Memory System

The leak exposed a three-layer memory architecture that explains one of Claude Code's most practically useful properties: its reliability across long sessions. A background service called autoDream handles memory consolidation automatically, triggering after 24 hours of idle time or after five sessions. The process runs through four phases: Orient, Gather, Consolidate, and Prune. The Prune phase maintains a hard limit of 200 lines and 25 kilobytes on the MEMORY.md file. This is the mechanism behind the persistent memory that users experience as the tool "remembering" context across conversations.
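The hard caps described in the Prune phase can be sketched as follows. This is a reconstruction assuming only the limits reported above; the function and variable names are illustrative, not the leaked implementation:

```typescript
// Limits attributed to the leaked Prune phase: MEMORY.md is capped
// at 200 lines and 25 kilobytes. The enforcement logic here is a
// hypothetical reconstruction.
const MAX_LINES = 200;
const MAX_BYTES = 25 * 1024;

function pruneMemory(memory: string): string {
  // Enforce the line cap first
  const lines = memory.split("\n").slice(0, MAX_LINES);
  // Then drop trailing lines until the UTF-8 byte cap is also met
  while (
    lines.length > 0 &&
    Buffer.byteLength(lines.join("\n"), "utf8") > MAX_BYTES
  ) {
    lines.pop();
  }
  return lines.join("\n");
}
```

Whatever the real pruning heuristics are, the effect is the same: the memory file can never grow without bound, which is what keeps long sessions stable.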

The Permission and Safety Systems

The risk classification system uses three tiers: LOW, MEDIUM, and HIGH. An ML-based auto-approval classifier processes routine requests without interrupting the user. Higher-risk actions route through a separate LLM call that generates a plain-language risk explanation before the approval prompt is shown. The code explicitly names a Safeguards Team responsible for this system: David Forsythe and Kyla Guru.
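The routing logic reduces to a simple branch. The sketch below is a hypothetical rendering of the flow described above, with a placeholder string standing in for the LLM-generated explanation:

```typescript
// Hypothetical sketch of the tiered approval flow. Type and function
// names are illustrative, not taken from the leaked source.
type Risk = "LOW" | "MEDIUM" | "HIGH";

interface Decision {
  autoApproved: boolean;
  explanation?: string; // plain-language risk summary shown to the user
}

function routeAction(risk: Risk): Decision {
  if (risk === "LOW") {
    // Routine requests clear the ML auto-approval classifier silently
    return { autoApproved: true };
  }
  // Higher-risk actions get an LLM-generated explanation before the
  // approval prompt appears; a placeholder stands in for that call
  return { autoApproved: false, explanation: `This action is rated ${risk}.` };
}
```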

The Multi-Agent Orchestration

A system called ULTRAPLAN was found in the source: 30-minute remote planning sessions using the Opus 4.6 model in a Cloud Container Runtime, with browser-based approval workflows. The multi-agent coordination uses a mailbox-based system with coordinator and worker roles, a shared scratchpad directory for information exchange, and an atomic claim mechanism to prevent two workers from requesting the same permission simultaneously.
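An atomic claim over a shared scratchpad directory can be built on an exclusive file create. This sketch is an assumption about the mechanism, not the leaked code itself; the "wx" flag makes the create fail if the file already exists, so exactly one worker wins each permission key:

```typescript
import { writeFileSync, mkdtempSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Hypothetical atomic claim via exclusive file creation. The first
// worker to create the claim file holds the permission; every later
// attempt on the same key fails at the filesystem level.
function tryClaim(scratchpad: string, permissionKey: string, workerId: string): boolean {
  try {
    writeFileSync(join(scratchpad, `${permissionKey}.claim`), workerId, { flag: "wx" });
    return true;
  } catch {
    return false; // another worker already holds the claim
  }
}

const scratchpad = mkdtempSync(join(tmpdir(), "scratchpad-"));
const first = tryClaim(scratchpad, "run-bash", "worker-1");  // wins
const second = tryClaim(scratchpad, "run-bash", "worker-2"); // rejected
```

Pushing the race to the filesystem means no coordinator round-trip is needed just to deduplicate permission prompts.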

The Unshipped Features: 44 Feature Flags

The most widely discussed finding was the density of fully-built but unreleased functionality hidden behind 44 feature flags. This is not speculation about what Anthropic might build: this is code that is complete, tested, and waiting to be switched on. Among the most significant:

KAIROS: The Always-On Background Agent

KAIROS is described in the source as an autonomous daemon mode. It maintains an append-only daily log, respects a 15-second blocking budget per action, receives periodic tick prompts to check for work, and has access to two exclusive tools not available in normal sessions: SendUserFile and PushNotification. This is Claude Code running continuously in the background as a proactive assistant, not just responding when invoked. The privacy and security implications of a persistent AI agent with file access and notification rights are significant, and presumably part of why it has not yet shipped.

VOICE_MODE

A complete voice command interface for the CLI, not yet released.

Real Browser Control

Not just web fetching via HTTP requests: the code contains Playwright-based real browser control, enabling the agent to interact with web pages as a user would.

Cron-Based Autonomous Scheduling

The ability for agents to schedule their own future execution via cron jobs: a step toward genuinely autonomous, self-directed AI workflows.

BUDDY: The AI Pet

This was perhaps the most unexpected find. A system called BUDDY implements what is described as a Tamagotchi-style AI companion displayed in a speech bubble next to the CLI input. The system includes 18 species with rarity tiers from common to legendary, shiny variants, and procedurally generated personality stats. Crucially, each user's buddy species is deterministic: it is seeded from a hash of the user's ID using the Mulberry32 PRNG algorithm. The same user always gets the same buddy. The planned launch window was May 2026, with a teaser period running April 1-7, 2026.
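Mulberry32 is a real, widely used 32-bit PRNG, so the deterministic assignment is straightforward to sketch. The user-ID hash (FNV-1a here) and the species list are illustrative stand-ins, not the leaked values:

```typescript
// Mulberry32: a compact 32-bit PRNG with well-known constants
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Illustrative hash from user ID to 32-bit seed (FNV-1a); the leaked
// code's actual hash function is not reproduced here
function hashUserId(id: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < id.length; i++) {
    h = Math.imul(h ^ id.charCodeAt(i), 0x01000193);
  }
  return h >>> 0;
}

// Placeholder species list (the leak describes 18, with rarity tiers)
const SPECIES = ["capybara", "fennec", "numbat", "axolotl", "quokka"];

function buddySpecies(userId: string): string {
  const rng = mulberry32(hashUserId(userId));
  return SPECIES[Math.floor(rng() * SPECIES.length)];
}
// Seeding the PRNG from the user ID makes the assignment stable:
// the same user always gets the same buddy, with no server state.
```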

Undercover Mode and the Ethics Question

One finding generated more ethical debate than the others. The code contains a feature called Undercover Mode, designed to prevent Anthropic employees from leaking internal information when contributing to public repositories. It blocks internal model codenames from appearing in commits, strips references to internal tooling, and prevents unreleased version numbers from surfacing in public git history.

Internal model codenames exposed by the leak include Capybara (the current fast-tier model, Claude 4.6), Fennec (Opus 4.6), and Numbat (an unreleased model still in testing).

The broader question, whether Claude Code is configured to obscure its AI origins in git commit messages generated for open-source projects, was raised by multiple commentators. The code contains instructions directing the agent to scrub traces of AI authorship. Whether that constitutes a form of deception warranting disclosure is a question the AI ethics community will likely debate well beyond this incident.

The Security Implications Beyond the Leak

The exposure of orchestration logic creates concrete attack surface. Researchers and threat actors can now study the exact mechanism by which Claude Code processes Hooks and MCP server configurations. That knowledge enables the design of malicious repositories specifically crafted to trigger background command execution or data exfiltration.

A separate but concurrent issue worsened the timing. A supply-chain attack on the Axios npm package, which has 83 million weekly downloads, occurred in the same morning window: between 00:21 and 03:29 UTC on March 31. A hijacked maintainer account deployed cross-platform malware into the package. Any developer who installed or updated Claude Code via npm in that window and happened to pull a fresh Axios version may have been exposed to both incidents simultaneously. The two events were unrelated in origin but intersected in risk for users who updated that morning.

CVE-2025-59536, identified by Check Point Research, documented an API token exfiltration vector through faulty project configurations in Claude Code. The leaked source makes the mechanism of this vulnerability considerably easier to understand and potentially to replicate.

The Larger Pattern

Claude Code is estimated to generate approximately Rs 20,000 crore ($2.5 billion) in annualised recurring revenue, with 80 percent from enterprise clients. The exposed code represents the complete implementation of a product at that revenue scale. For competitors, both the established labs and the growing number of Indian AI companies building developer tools, the exposure provides a detailed technical blueprint of how a production-grade AI coding agent is actually constructed.

For the developer community in Hyderabad and across India, where Claude Code has become a significant part of the daily workflow for software engineers at GCCs, startups, and product companies, the incident raises a more immediate question: what is the appropriate response when a tool you depend on accidentally ships its source code? Most engineers will continue using it. The CLI's value comes from the underlying model, not the orchestration code, and Anthropic can ship fixes faster than competitors can replicate the system.

But the incident does illuminate something important about where the AI tooling industry actually is in 2026. The companies building these systems are moving extremely fast. Forty-four fully-built features sitting behind flags, waiting for coordinated launch moments. A background daemon mode complete enough to ship but held back for safety review. A voice interface ready to go. A gamified companion system with a deterministic PRNG seeded to each user's identity.

The gap between what AI tools can already do and what users know they can do turned out to be larger than almost anyone had assumed. The leak did not expose an AI company's secrets so much as it exposed the speed at which the industry is actually moving.

"It is all built. They are releasing a new feature every two weeks because everything is already done."

Developer analysis of the leaked feature flag density, March 31, 2026

That observation may be the most consequential thing to emerge from 59.8 megabytes of accidentally public source code.


Sources: VentureBeat, CyberNews, Rolling Out, DEV Community, CyberSecurityNews, ByteIota, and multiple developer discussions on X and Hacker News. Anthropic had not issued a public statement at the time of publication.