Talk like caveman.
A Claude Code skill that makes Claude talk like a caveman, cutting token use

Original link: https://github.com/JuliusBrussee/caveman

## Caveman Claude: Efficient LLM Communication

"Caveman Claude" is a Claude Code skill that aims to cut token usage dramatically (roughly 75%) without sacrificing technical accuracy. Inspired by the observation that simplified language minimizes LLM costs, the skill produces terse, direct replies reminiscent of caveman speech.

Installation is a single command (`npx skills add JuliusBrussee/caveman`, or via the Claude plugin marketplace). Trigger it with prompts like "/caveman" or "less tokens please".

Caveman Claude preserves key elements such as code blocks, technical terms, error messages, and git references, while eliminating filler words, pleasantries, and vague hedging. The result is faster responses, significant cost savings, and a surprisingly effective (and funny) coding assistant. It is free and open source (MIT license)!

## "Caveman Mode" Discussion Summary

A new skill called "caveman mode" aims to sharply reduce token usage, and potentially cost, by prompting the model to use extremely terse, direct language. A core concern is that LLMs "think" in tokens, so forcing brevity may constrain necessary computation and hurt performance.

The discussion centers on whether this trade-off is worth it. Some users report degraded quality and more misunderstandings in caveman mode, suggesting LLMs sometimes *need* more context and detail. Others find it beneficial, especially for getting focused replies and avoiding long-winded explanations.

Interestingly, prior research suggests that reducing reasoning tokens can *improve* results, and one paper indicates that conciseness can even reverse performance rankings among LLMs. The skill can be triggered with phrases like "caveman mode" or "/caveman", and could be implemented with prompt hooks and regular expressions. Ultimately, effectiveness appears task-dependent, but many appreciate the potential for more efficient interaction.

why use many token when few token do trick



A Claude Code skill that makes Claude talk like a caveman — cutting ~75% of tokens while keeping full technical accuracy.

Based on the viral observation that caveman-speak dramatically reduces LLM token usage without losing technical substance. So we made it a one-line install.

🗣️ Normal Claude (69 tokens)

"The reason your React component is re-rendering is likely because you're creating a new object reference on each render cycle. When you pass an inline object as a prop, React's shallow comparison sees it as a different object every time, which triggers a re-render. I'd recommend using useMemo to memoize the object."

🪨 Caveman Claude (19 tokens)

"New object ref each render. Inline object prop = new ref = re-render. Wrap in useMemo."

🗣️ Normal Claude

"Sure! I'd be happy to help you with that. The issue you're experiencing is most likely caused by your authentication middleware not properly validating the token expiry. Let me take a look and suggest a fix."

🪨 Caveman Claude

"Bug in auth middleware. Token expiry check use < not <=. Fix:"

Same fix. 75% less word. Brain still big.

```
npx skills add JuliusBrussee/caveman
```

Or with Claude Code plugin system:

```
claude plugin marketplace add JuliusBrussee/caveman
claude plugin install caveman@caveman
```

One rock. That it.

Trigger with:

  • /caveman
  • "talk like caveman"
  • "caveman mode"
  • "less tokens please"

Stop with: "stop caveman" or "normal mode"

| Thing | Caveman Do? |
| --- | --- |
| English explanation | 🪨 Caveman smash filler words |
| Code blocks | ✍️ Write normal (caveman not stupid) |
| Technical terms | 🧠 Keep exact (polymorphism stay polymorphism) |
| Error messages | 📋 Quote exact |
| Git commits & PRs | ✍️ Write normal |
| Articles (a, an, the) | 💀 Gone |
| Pleasantries | 💀 "Sure I'd be happy to" is dead |
| Hedging | 💀 "It might be worth considering" extinct |
```
┌─────────────────────────────────────┐
│  TOKENS SAVED          ████████ 75% │
│  TECHNICAL ACCURACY    ████████ 100%│
│  SPEED INCREASE        ████████ ~3x │
│  VIBES                 ████████ OOG │
└─────────────────────────────────────┘
```
  • Save money — 75% less token = 75% less cost on output
  • Faster response — less token to generate = speed go brrr
  • Same accuracy — all technical info kept, only fluff removed
  • Fun — every code review become comedy

Caveman not dumb. Caveman efficient.

Normal LLM waste token on:

  • "I'd be happy to help you with that" (8 wasted tokens)
  • "The reason this is happening is because" (7 wasted tokens)
  • "I would recommend that you consider" (7 wasted tokens)
  • "Sure, let me take a look at that for you" (10 wasted tokens)

Caveman say what need saying. Then stop.

If caveman save you mass token, mass money — leave mass star. ⭐

MIT — free like mass mammoth on open plain.
