Show HN: I built a game on my old phone without knowing what I was building

Original link: https://www.kikkupico.com/posts/vibe-discovery/

## Vibe Discovery: Building Without a Plan

This post describes a new approach to software development the author calls “Vibe Discovery,” in which the goal is *not* defined up front. Using Termux, Claude Code, and a rapid build-test-iterate loop, the author built “Inertia,” a WebGL marble game, entirely on a five-year-old Android phone, starting from the single prompt: “Create a web-based game that leverages the accelerometer creatively.” Unlike “vibe coding” (where AI implements an idea you already have), Vibe Discovery means *discovering* the product through iterative, AI-assisted prototyping. The author never set out to make a marble game; it emerged over six iterations, each responding to the shortcomings of the previous version. Speed and environment ownership are key to the process: developing directly on the phone, using the accelerometer as the core input, and avoiding sandboxed generators or cloud IDEs gives instant feedback and complete tooling freedom. The author envisions automating the human feedback loop with analytics, automated testing, and simulated users, eventually creating a fully autonomous development cycle. The project shows that AI can not only *build* what you want but also *help you figure out what you want*. You can play the final game and explore the code’s evolution via the links provided.

A developer going by kikkupico shared their project: a WebGL marble game built entirely on a Redmi Note 9 phone using Termux and Claude Code. They call the process “vibe discovery” - a step beyond “vibe coding” - because they started out only wanting to use the accelerometer, and the game concept took shape naturally over six iterations. Kikkupico highlights the development speed of coding and testing on the same device, contrasting it with the traditional laptop-to-phone deployment cycle. The post sparked a brief discussion in the comments, where another user lamented the trend of quickly generated but flawed projects that lean heavily on LLMs and questioned the quality of the code appearing online.

Original post

The Starting Point

“Create a web-based game that leverages the accelerometer creatively.”

That was the entire spec. No wireframes, no design doc, no feature list. I typed this into Claude Code running on my Redmi Note 9 - a 5-year-old Android phone with 4GB RAM. Three hours later, I had Inertia - a WebGL marble game with procedural terrain and dynamic camera. The interesting part isn’t the game itself. It’s that I didn’t know I was building a marble game until iteration 4.

Inertia gameplay showing the ring-sphere player navigating 3D terrain

This Isn’t Vibe Coding

“Vibe coding” typically means: you know what you want to build, you just let AI handle the implementation details. You have a mental image of the end product - maybe a todo app, a chat interface, a dashboard - and you describe it loosely while the AI figures out the code.

What I’m describing is different. I’ll call it Vibe Discovery: you don’t know what you’re building. The requirements themselves are undefined. You’re not just discovering implementation - you’re discovering what the product should be.

The distinction matters:

| “Build me a todo app with drag-and-drop” | “Build me something fun with the accelerometer” |
| --- | --- |
| Requirements known, implementation fuzzy | Requirements unknown, discovered through building |
| AI translates intent to code | AI proposes, human reacts, product emerges |
| End state imagined upfront | End state discovered through iteration |

In Vibe Discovery, you’re not directing - you’re reacting. Each prototype teaches you what you actually want.

The Setup

Termux running Claude Code on a phone screen

The whole thing ran on a Redmi Note 9 - a 5-year-old basic Android phone with 4GB RAM. Termux for the terminal, Claude Code for the AI, Node.js http-server for testing, GitHub for deployment. The entire feedback loop - build, test, react, iterate - happened in seconds on a single device. That speed is what makes Vibe Discovery work.

Why the Phone?

You might ask: “Why not just do this on a laptop?”

  1. Sensor Loop: I was building a game that relies on the accelerometer. Developing on a laptop would mean a painful “Code → Deploy → Pick up phone → Test” loop. Coding directly on the device meant the dev environment was the test environment (see the sketch after this list).
  2. The “Lying Down” Factor: This is purely subjective, but my ideas flow differently when I’m lying down. It puts me in a “tinkering” mindset rather than a “working” mindset. A laptop forces you to sit up and be productive. A phone lets you relax and discover.
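
To make the sensor loop concrete, here is a minimal sketch (not the game’s actual code) of how a browser game reads tilt: the DeviceOrientation API is the only input, so serving the page locally on the phone and tilting it is the entire test cycle. The `step` function and the `sensitivity` constant are invented for illustration.

```ts
// Minimal sketch, assuming a plain browser page: tilt from the
// DeviceOrientation API becomes acceleration on the marble.
let tiltX = 0; // left/right tilt in degrees
let tiltY = 0; // front/back tilt in degrees

window.addEventListener("deviceorientation", (e: DeviceOrientationEvent) => {
  tiltX = e.gamma ?? 0; // gamma: left/right, roughly -90..90
  tiltY = e.beta ?? 0;  // beta: front/back, roughly -180..180
});

// Called once per frame from the game loop (names are illustrative).
function step(velocity: { x: number; y: number }, dt: number): void {
  const sensitivity = 0.02; // assumed value, tuned by feel
  velocity.x += tiltX * sensitivity * dt;
  velocity.y += tiltY * sensitivity * dt;
}
```

On the phone, the whole loop is: edit in Termux, refresh the page served by http-server, tilt, react.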

One caveat: Termux isn’t compatible with Android’s voice typing feature. If you want to use voice input, you need to voice-type into a notes app and paste into Termux. It’s an extra step, but workable. On the flip side, editing text in vim mode is surprisingly efficient on a phone keyboard - the modal editing paradigm actually works well with touch input.

Why Not Just Use v0, Lovable, or Cloud Agents?

You might ask: “Why hack around in a terminal on a tiny screen when tools like Lovable, v0, or Bolt.new exist?”

It comes down to one thing: Environment Ownership.

Web-based generators are incredible, but they are “Sandboxed Gardens.” They own the runtime. If you want to run a custom Python script to generate sound assets, use a specific linter, or pipe a log file into a debugger, you can’t. You are limited to the tools they built into their UI.

Cloud-based agents (like Jules or browser-based IDEs) often suffer from the “Git Anchor” problem. To maintain state between sessions, they usually require you to attach to a GitHub repository immediately. The “Vibe Discovery” phase is transient; I don’t want to create a repo on GitHub just to get started. Moreover, Jules and Claude Code Web are in the ‘research preview’ phase and they don’t always work as expected.

Termux + Agent is different:

  • I own the runtime: I can open neovim to tweak a config file manually while the AI is thinking.
  • Tooling Freedom: The AI isn’t stuck in a browser tab. It has access to the actual OS. If I want it to use gh cli to create a PR or run a local script, it just does it.
  • Local First: The state lives on my device, not in a temporary cloud container that might time out.

In short: Generators give you a fish. Local Agents give you a fishing rod, a boat, and the entire ocean.

Six Iterations, Six Discoveries

Here’s how requirements emerged from nothing:

Iteration 1: “Accelerometer game, surprise me” → Claude builds particle art tool with tilt-controlled gravity → I try it: “Okay-ish. Not really a game though.” → Discovered: I want gameplay, not just visuals

Cosmic Painter

Iteration 2: “Something more fun” → Endless runner with tilt controls → “Better! But I want something more complex” → Discovered: I like objectives, want more depth

Tilt Runner

Iteration 3: “More complicated, different art style” → Isometric puzzle game with physics → “Love the physics, but the tilt controls are confusing” → Discovered: Good mechanics can’t overcome bad feedback

Sky Garden

Iteration 4: “Keep the physics, make controls intuitive” → Sandbox with tilt indicator → “The indicator works but the perspective is wrong” → Discovered: I want a marble game with 3rd-person view

Marble Sandbox

Iteration 5: “3rd-person marble game” → Marble game with calibration system → “Close! Controls are too sensitive, movement feels sticky” → Discovered: Fine-tuning matters more than features

Marble Chase

Iteration 6: “Make it beautiful, show acceleration visually” → Wireframe game with procedural terrain → “Terrain looks flat, camera should follow like driving” → Discovered: Need WebGL for proper 3D, dynamic camera sells the experience

Final Inertia game

Notice what happened: the final product (WebGL marble game with ring-sphere player and dynamic camera) wasn’t anywhere in my head at the start. Each feature emerged from reacting to the previous prototype. “The terrain looks flat” led to WebGL. “Can’t tell which way I’m accelerating” led to the ring-sphere design. “Camera feels static” led to the dynamic look-ahead system.

I didn’t design the product. I discovered it.

Why This Works

The feedback loop is fast enough that you can think by building:

  1. Describe what’s wrong (takes 5 seconds)
  2. Claude implements a fix (takes 30-60 seconds)
  3. Test it (takes 10 seconds)
  4. React to the result
  5. Repeat

Traditional development has too much friction for this. By the time you’ve written a spec, assigned the work, reviewed the PR, and deployed, you’ve forgotten what you were reacting to. Vibe Discovery keeps the reaction immediate.

It also works because AI can interpret vague feedback. “Make it more fun” isn’t actionable for a human developer without a long conversation about what “fun” means. But Claude can just try something - add obstacles, change mechanics, adjust physics - and I can react to the result. The conversation happens through prototypes, not words.

The Interesting Implication

Diagram: the Vibe Discovery loop (Generate → Deploy → Test → Analyze → Iterate). Today a human in the loop provides the feedback - taste, judgment, and mechanical observations - which is the bottleneck; automated sources (testing, analytics, simulated users) could provide much of it, with an orchestration layer as the missing piece.

Right now, Vibe Discovery needs a human in the loop. Someone has to play the game and say “this feels sticky” or “the camera is weird.” That’s the bottleneck.

But that feedback could come from other sources:

  • Automated testing (“users drop off after 30 seconds”)
  • Analytics (“no one uses feature X”)
  • Simulated users (“agent reports confusion at step 3”)

The pieces exist: AI that codes, systems that deploy, tools that measure. What’s missing is the orchestration layer that ties them together. Such an orchestrator would also keep the feedback loop from being overfit to one person’s taste.
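
As a thought experiment, here is what that orchestration layer might look like. This is a hypothetical sketch, not anything that exists in the project: `generateWithAgent`, `deploy`, and `collectFeedback` are invented stand-ins for a coding agent, a deploy pipeline, and measurement tooling.

```ts
// Hypothetical orchestration loop. The stubs below exist only so the sketch
// compiles and runs; in practice they would wrap a coding agent, a hosting
// pipeline, and automated testing / analytics / simulated users.
type Feedback = { source: "tests" | "analytics" | "simulated-user"; message: string };

async function generateWithAgent(spec: string): Promise<string> {
  return `build/artifact-for(${spec.slice(0, 40)})`; // stub: pretend the agent produced a build
}

async function deploy(artifact: string): Promise<string> {
  return `https://example.invalid/${artifact.length}`; // stub: pretend it shipped somewhere
}

async function collectFeedback(_url: string): Promise<Feedback[]> {
  return []; // stub: no automated feedback wired up yet
}

async function vibeDiscoveryLoop(initialPrompt: string, maxIterations = 6): Promise<void> {
  let spec = initialPrompt;
  for (let i = 0; i < maxIterations; i++) {
    const artifact = await generateWithAgent(spec); // "AI that codes"
    const url = await deploy(artifact);             // "systems that deploy"
    const feedback = await collectFeedback(url);    // "tools that measure"
    if (feedback.length === 0) break;               // nothing left to react to
    // The next prompt is just the reaction, phrased the way a human would.
    spec = feedback.map((f) => `${f.source}: ${f.message}`).join("\n");
  }
}

// Example kickoff, same prompt that started Inertia:
// vibeDiscoveryLoop("Create a web-based game that leverages the accelerometer creatively");
```

Six iterations of that loop is exactly what happened with Inertia; the only change is who writes the reaction.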

What’s Next? Reality Check

Now that I’ve indulged my prophetic streak, it’s time for a reality check. Next, I’ll be putting Vibe Discovery to the test along two different dimensions:

  1. Taking over “Inertia” and refining it by hand.
  2. Vibe discovering with a rather picky co-creator - my 6-year-old daughter - as we collaborate with AI to build a game called “Man and the Apple”.

These adventures will be the subject of subsequent blog posts. Stay tuned!

Technical Notes

The final game uses:

  • WebGL 1.0 with custom shaders
  • Device Orientation API with calibration
  • Procedural terrain from layered sine waves
  • Dynamic camera (height and distance scale with speed)
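
For a feel of the last two items, here are minimal sketches reconstructed from the descriptions above rather than taken from the repository; the layer constants and gains are invented.

```ts
// Procedural terrain from layered sine waves: sum a few "octaves" with
// different frequencies, amplitudes, and phases (all constants illustrative).
function terrainHeight(x: number, z: number): number {
  const layers = [
    { freq: 0.05, amp: 4.0, phase: 0.0 },
    { freq: 0.11, amp: 1.5, phase: 1.3 },
    { freq: 0.23, amp: 0.6, phase: 2.7 },
  ];
  let h = 0;
  for (const { freq, amp, phase } of layers) {
    h += amp * Math.sin(x * freq + phase) * Math.cos(z * freq - phase);
  }
  return h;
}

// Dynamic camera: height and follow distance scale with the marble's speed,
// so faster movement pulls the view up and back like a driving game.
function cameraOffset(speed: number): { height: number; distance: number } {
  const baseHeight = 3;
  const baseDistance = 6;
  const gain = 0.5; // assumed; the real game would tune this by feel
  return {
    height: baseHeight + gain * speed,
    distance: baseDistance + gain * speed,
  };
}
```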

Deployment was gh repo create + GitHub Pages API. Under a minute from local to production.

The repository includes all six iterations as separate HTML files, so you can see the evolution.

Try It

Play: https://kikkupico.github.io/inertia/

(On a laptop, you can still play the game using the arrow keys. Of course, the controls will be much less intuitive than tilt controls on a phone.)

Code: https://github.com/kikkupico/inertia

Replicate: