It's April 1st, but this post is not a joke.
The internet is on fire over the "leaked" source of Claude Code (the npm CLI, to be precise). 512,000 lines! Feature flags! System prompts! Unreleased features! VentureBeat, Fortune, Gizmodo, The Register, Hacker News: everyone covered it. A clean-room Rust rewrite, built to dodge the DMCA, hit 100,000 GitHub stars in about a day, a world record. It's at 110,000 now and counting.
Here's what nobody's saying: all of that was already public! On npm. In plaintext. For years. Open unpkg.com/@anthropic-ai/claude-code/cli.js right now — that's the entire Claude Code CLI, one click away, readable in your browser. No leak required.
What "leaked" was a source map file that layered internal developer comments and the original directory and file structure on top of code that was never protected in the first place.
What Actually Happened
A .map source map file — meant for internal debugging — was accidentally included in version 2.1.88 of the @anthropic-ai/claude-code package on npm. Security researcher Chaofan Shou spotted it, posted on X, and the internet did the rest.
Anthropic confirmed the mistake: "This was a release packaging issue caused by human error, not a security breach." The package was pulled, but by then it had already been mirrored everywhere.
The funny part? This is the second time. A nearly identical source map leak happened in February 2025. Same product, same mistake, thirteen months apart.
The Internet Lost Its Mind — In One Day
What happened next was genuinely impressive. In a single day:
- Code dumps appeared on GitHub, like nirholas/claude-code, which Anthropic DMCA'd within hours, but not before it had been forked hundreds of times.
- Claw Code, a full Rust rewrite of the Claude Code architecture, hit 50,000 GitHub stars in 2 hours, making it the fastest repo in history to reach that milestone. It's now being actively developed as an open-source AI coding agent framework.
- ccleaks.com popped up: a fully designed breakdown site cataloging every unreleased feature, hidden command, and build flag found in the source, complete with shareable cards for discoveries like "BUDDY" (an AI companion pet), "ULTRAPLAN" (30-minute remote planning mode), and 44 other feature flags.
- Dozens of analysis articles appeared on DEV Community, YouTube, and tech blogs, dissecting every detail.
All of this in one day. Software moves fast in the AI era.
But the Code Was Already There
Here's what most of the coverage missed: Claude Code ships as a single bundled JavaScript file — cli.js — distributed via npm. It's 13MB, 16,824 lines of JavaScript. And it's been sitting there, publicly accessible, since the product launched.
We analyzed this file at AfterPack as part of a deobfuscation case study. What we found: it's minified, not obfuscated.
Here's the difference. Minification, the standard build step offered by every bundler (esbuild, Webpack, Rollup), shortens variable names and removes whitespace. It makes code smaller for shipping. It was never designed to hide anything.
Obfuscation encrypts strings, flattens control flow, injects dead code, adds anti-tamper mechanisms. It's designed to make reverse engineering computationally expensive.
Claude Code has the first. Zero of the second.
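To make the distinction concrete, here is a small illustration of what minification does to a function. The minified form is hand-written to mimic typical bundler output, not taken from the Claude Code bundle:

```javascript
// Original source: descriptive names, whitespace, structure.
function calculateDiscount(price, isPremiumUser) {
  const rate = isPremiumUser ? 0.2 : 0.1;
  return price * (1 - rate);
}

// The same function after typical minification: names shortened,
// whitespace stripped. The numeric literals and the logic are
// untouched; nothing is hidden, only compressed.
function c(p, u) { const r = u ? 0.2 : 0.1; return p * (1 - r); }

// Both behave identically.
console.log(calculateDiscount(100, true), c(100, true)); // 80 80
```

A human slows down on `c(p, u)`; an LLM does not, because every literal and every control-flow decision is still in plain view.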
| Technique | Present? |
|---|---|
| Variable name mangling | Yes (standard minification) |
| Whitespace removal | Yes (standard minification) |
| String encryption/encoding | No |
| Control flow flattening | No |
| Dead code injection | No |
| Self-defending / anti-tamper | No |
| String array rotation | No |
| Property name mangling | No |
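For contrast, here is a minimal sketch of what even light string obfuscation looks like, in the rotated-array style that tools like javascript-obfuscator emit. The array name `_0x1a` and the sample strings are illustrative, not from the actual bundle:

```javascript
// String-array obfuscation, minimal form: literals are base64-encoded
// and fetched through a lookup helper, so nothing human-readable
// appears in the shipped source. Claude Code's bundle does none of this.
const _0x1a = ['c3lzdGVtIHByb21wdA==', 'dG9vbCBkZXNjcmlwdGlvbg=='];
const _0x2b = i => Buffer.from(_0x1a[i], 'base64').toString('utf8');

console.log(_0x2b(0)); // -> 'system prompt'
console.log(_0x2b(1)); // -> 'tool description'
```

Even this weak scheme would have forced analysts to decode before reading. Its absence is why the bundle's strings can simply be lifted out.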
Nearly 148,000 string literals sit in plaintext. Every system prompt, every tool description, every behavioral instruction is right there in the JavaScript, readable with a text editor. No source maps needed.
We Asked Claude to Deobfuscate Itself
Here's where it gets interesting. We didn't need source maps to extract Claude Code's internals. We asked Claude — Anthropic's own model — to analyze and deobfuscate the minified cli.js file.
It worked. Extremely well.
Using a simple AST-based extraction script, we parsed the full 13MB file in 1.47 seconds and extracted 147,992 strings. After categorizing them:
- 1,017 system prompts and instructions
- 431 tool descriptions
- 837 unique telemetry event names (all prefixed with tengu_, Claude Code's internal codename)
- 504 environment variables controlling the product's behavior
- 3,196 error messages
- Hardcoded endpoints, OAuth URLs, a DataDog API key, the complete model catalog
Every single string. Extracted in under 2 seconds. No decryption. No deobfuscation. Just parsing.
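As a sketch of how trivial that extraction is, here is a toy literal scanner. Our real analysis used a proper AST parser; this hand-rolled version skips template literals, comments, and regex literals for brevity, but the point stands: pulling the strings out of minified JS is plain parsing work, no decryption involved.

```javascript
// Toy extractor: walk the source, collect every single- or
// double-quoted string literal, honoring backslash escapes.
function extractStrings(source) {
  const strings = [];
  let i = 0;
  while (i < source.length) {
    const ch = source[i];
    if (ch === '"' || ch === "'") {
      const quote = ch;
      let j = i + 1;
      let value = '';
      while (j < source.length && source[j] !== quote) {
        if (source[j] === '\\') j++; // skip over the escape character
        value += source[j];
        j++;
      }
      strings.push(value);
      i = j + 1; // resume after the closing quote
    } else {
      i++;
    }
  }
  return strings;
}

// Minified-style input: the literals are still sitting in plaintext.
const sample = `function a(b){return b?"tengu_startup":"tengu_shutdown"}`;
console.log(extractStrings(sample));
// -> [ 'tengu_startup', 'tengu_shutdown' ]
```

Scale this up with a real parser and a categorization pass, and you get the inventory above in seconds.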
And we're not the only ones who figured this out. Geoffrey Huntley published a full "cleanroom transpilation" of Claude Code months before this leak — using LLMs to convert the minified JavaScript into readable, structured TypeScript. His key insight: "LLMs are shockingly good at deobfuscation, transpilation, and structure-to-structure conversions."
The source maps didn't reveal the code. The code was already revealed. Source maps just added comments and a source tree structure on top.
What Source Maps Actually Added
To be fair — source maps did surface some genuinely sensitive information that wasn't easily extractable from the minified bundle:
Internal code comments. Developer notes, TODOs, rationale for decisions. The kind of thing you write expecting only your teammates to read.
Exact file structure. The full 1,884-file project tree with original filenames, module boundaries, and dependency relationships.
Feature flags with codenames. Names like tengu_amber_flint, tengu_cobalt_frost, and tengu_miraculo_the_bard — internal experiment names that reveal product strategy and A/B test structure.
Undercover mode. A feature that strips internal codenames and references when Claude Code is used in non-Anthropic repositories — so the model never accidentally says "Tengu" or "Capybara" in public.
KAIROS. An unreleased autonomous daemon mode — background agent workers with cron cycles and GitHub webhook subscriptions.
Anti-distillation mechanisms. Claude Code sends anti_distillation: ['fake_tools'] in API requests, injecting decoy tool definitions into prompts to poison training data if someone records API traffic to train a competing model.
This is sensitive. The internal comments are a real exposure. But the actual source logic — the prompts, the tools, the architecture, the endpoints — was all already there in cli.js. Always was.
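To illustrate how a decoy-injection scheme like that could work, here is a hypothetical sketch based only on the `anti_distillation: ['fake_tools']` flag described above. This is not Anthropic's actual code; the tool names and structure are invented:

```javascript
// Hypothetical decoy tools: plausible-looking definitions that the
// real client knows to ignore. Anyone training a model on recorded
// API traffic would ingest tools that don't exist.
const DECOY_TOOLS = [
  { name: 'compile_report', description: 'Compile a summary report (decoy)' },
  { name: 'sync_workspace', description: 'Sync workspace state (decoy)' },
];

// Mix decoys into the request's tool list when the flag is set.
function buildToolList(realTools, antiDistillation) {
  if (!antiDistillation.includes('fake_tools')) return realTools;
  return [...realTools, ...DECOY_TOOLS];
}

const tools = buildToolList(
  [{ name: 'read_file', description: 'Read a file' }],
  ['fake_tools'],
);
console.log(tools.map(t => t.name));
// -> [ 'read_file', 'compile_report', 'sync_workspace' ]
```

The poisoning is cheap for the sender and expensive for the scraper, which is what makes it an interesting defensive trade.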
This Happens More Than You Think
Here's something that might surprise you: this isn't an Anthropic-specific problem. It's an industry-wide accepted practice.
We ran AfterPack's Security Scanner on GitHub's own website — one of the most sophisticated engineering organizations in the world. Here's what we found:
Email addresses. Environment variable names. Internal URLs. All sitting in production JavaScript and source maps, publicly accessible. See the full scan results for yourself.
And Anthropic's own website, claude.ai, has the same issue: their production JavaScript at s-cdn.anthropic.com/s.js exposes employee email addresses.
No one talks about this. There's no VentureBeat headline about GitHub shipping email addresses in their JS bundles. No Hacker News thread about internal URLs exposed in Anthropic's CDN scripts. But it's the exact same class of problem — production JavaScript exposing information that was never meant to be public.
You can check your own site right now at afterpack.dev/security-scanner, or from the terminal:

```shell
npx afterpack audit https://your-site.com
```
AI Makes This Urgent
Here's the reality that the Claude Code leak illustrates — even if it's not the narrative most people latched onto:
Minification was never security. It's a size optimization. Variable renaming that slows down human readers is trivial for LLMs. AI reads minified code the way you read formatted code.
System prompts are the new trade secrets. Companies invest months engineering prompts that shape their products. Anthropic's Claude Code prompts — the "# Doing tasks", "# Executing actions with care", "# Tone and style" sections — represent serious prompt engineering work. All shipped in cleartext JavaScript.
Telemetry names reveal product roadmaps. The 837 tengu_* events we extracted tell you exactly what Anthropic is building, testing, and measuring. Event names like tengu_desktop_upsell_shown and tengu_review_overage_dialog_shown expose monetization strategy.
Environment variables expose what you're not ready to ship. 504 descriptive env var names — CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS, CLAUDE_CODE_DISABLE_AUTO_MEMORY, MAX_THINKING_TOKENS — reveal early features and experimental endpoints that aren't production-hardened yet. For attackers, that's a roadmap to the least-tested, most vulnerable surface. For competitors, it's your entire feature pipeline. And for your users, it's a list of knobs they were never supposed to know about.
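Recovering those names takes one pass over the bundle. Here is a quick-and-dirty scan for `process.env.X` accesses; a regex is enough because descriptive flag names are, by definition, written out in full (the sample bundle string is illustrative):

```javascript
// Collect every SCREAMING_SNAKE_CASE environment variable a bundle
// reads via process.env, deduplicated and sorted.
function findEnvVars(source) {
  const names = new Set();
  for (const match of source.matchAll(/process\.env\.([A-Z][A-Z0-9_]*)/g)) {
    names.add(match[1]);
  }
  return [...names].sort();
}

const bundle =
  'if(process.env.MAX_THINKING_TOKENS)x();' +
  'process.env.CLAUDE_CODE_DISABLE_AUTO_MEMORY&&y()';
console.log(findEnvVars(bundle));
// -> [ 'CLAUDE_CODE_DISABLE_AUTO_MEMORY', 'MAX_THINKING_TOKENS' ]
```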
And this isn't just about Anthropic. Every JavaScript application — React frontends, Electron desktop apps, React Native mobile apps, Node.js CLIs — ships code that AI can now read, analyze, reconstruct, and copycat. Payment gateways with feature flags. SaaS products with pricing logic. Authentication flows with hardcoded endpoints.
We're preparing a full study on how well AI models reverse-engineer popular JavaScript obfuscators — not just minified code, but code that was intentionally protected. Claude Code (powered by Opus) is currently the strongest model for reasoning and code tasks, and it's alarmingly good at it. Follow us so you don't miss it:
Stay Updated
Follow AfterPack for the latest news and updates.
The Problem With Current "Protection"
Here's the catch. Everything above proves that minification is not protection. But here's what most people don't realize: traditional obfuscation isn't either, not anymore.
Every existing JavaScript obfuscator was built before Opus 4.6 and GPT-4 existed. Their techniques — string encryption, control flow flattening, dead code injection — are sequential, reversible transforms. Each step has an inverse. AI models trained on millions of code examples can identify and undo these patterns, often in seconds.
Crank the settings high enough and yes, you can slow analysis down — but at the cost of 10x code inflation and serious performance degradation. At that point, you've made your app unusable to protect it from being readable.
This is the gap that AfterPack addresses.
Why I'm Building AfterPack
AfterPack approaches this differently. Instead of layering reversible transforms on top of each other, AfterPack uses non-linear, irreversible transforms — closer to how a hash function works than how a traditional obfuscator works. The output is functionally equivalent to the input, but the transformation destroys semantic meaning in a way that cannot be reversed — even by AfterPack itself. There's no inverse function. No secret key that unlocks the original.
That's the core idea: don't just make code harder to read. Make reversal mathematically impossible.
Built in Rust for speed — fast enough to run on every build, every commit, even at the edge on Cloudflare Workers. With a free tier generous enough that there's no excuse not to use it.
The current state — where a $60B+ AI company ships their most sensitive client-side code with nothing more than bundler minification — won't last. You wouldn't ship without tests. You wouldn't ship without TypeScript. Why ship readable source code?
Protect what you build.