LLM plays an 8-bit Commander X16 game using structured "smart senses"

Original link: https://pvp-ai.russell-harper.com

## PvP-AI: Reviving an 8-Bit Game and Challenging AI

PvP-AI is a recreation of a game originally written in 1990, now reborn on the Commander X16 retro-computer. While the emulator reaches a playable 8.6 frames/s, a VERA-module issue limits the actual hardware to 4 frames/s. The game has distinctive mechanics: a dynamically changing arena, intelligent missiles, and a scoring system that prioritizes eliminations over raw score. Inspired by other LLM-plus-retro-game integrations, the author explores using ChatGPT (gpt-4o) to play PvP-AI. To avoid relying on visual input, a "smart senses" system was implemented: text-based data representing touch and EMF detection that supplies the LLM with the key game state. To enable communication, a new VIA2-socket feature was added to the x16-emulator, allowing two-way interaction between the LLM, a PHP interface, and the game. Initial tests, recorded across a series of consecutive games, demonstrate ChatGPT's ability to learn and develop a winning strategy while carrying persistent notes between matches. Future work aims to extend the "smart senses" to include vision and hearing, further deepening the AI's interaction with this classic 8-bit world.


Original article
PvP-AI

Creating the Game

PvP-AI is a recreation of an 8-bit game I wrote back in 1990. The only traces left of the original are a few drawings and handwritten notes. Back then, writing for an 8-bit platform, it took every bit of memory and CPU to eke out 4 frames/s with the simplest of animations and backgrounds. Alas, by the time I finished it, the 1990 recession had begun, affecting many industries, including personal computers, so nothing came of it.

A few years ago, a YouTube channel by David Murray, The8BitGuy, caught my eye, in particular his Commander X16 retro-computer. Looking over the specifications, I realized it might be able to handle a newer incarnation of PvP-AI.

As it turns out, the emulator was able to handle it very nicely, running at almost 8.6 frames/s, with more detail and better AI! Here’s a video of it in action. The actual hardware, though… it turns out there’s a line drawing issue in the VERA module such that certain kinds of lines aren’t rendered correctly, meaning it has to fall back on a slower method. The end result is only 4 frames/s on hardware.

If you want to try it out, you can download the files from my Google Drive. I recommend using the x16-emulator, specifically R49, to run it. More details are in my GitHub EXPLORE repository under CX16 v2 – AI Demo (a.k.a. PvP-AI).

Of note are some peculiarities in gameplay, differentiating it from your typical “shoot-’em-up” 8-bit game:

  • the arena layout changes based on the highest level reached by any player
  • missiles have limited intelligence and can navigate around and above walls
  • if a player brings another player’s health to zero, both players advance in level
  • only at the final level (level 6) does losing all health eliminate a player from the game
  • scores are reduced when players have their health brought to zero
  • scores are increased when players bring another player’s health to zero or eliminate another player
  • the winner is always the last remaining player, regardless of score
  • second place goes to the eliminated player with the highest score
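The knockout rules above can be sketched in code. This is an illustration only, not the actual PvP-AI source: the function name, point values, and respawn health are assumptions, and whether the attacker also advances on a final-level knockout isn't stated in the post.

```python
# Illustrative sketch of the knockout/level rules; values are assumed.
FINAL_LEVEL = 6
KO_POINTS = 10  # assumed point swing per knockout

def apply_knockout(attacker, victim):
    """Apply the rules when `attacker` brings `victim`'s health to zero."""
    attacker["score"] += KO_POINTS                          # scorer gains points
    victim["score"] = max(0, victim["score"] - KO_POINTS)   # victim loses points
    if victim["level"] >= FINAL_LEVEL:
        # only at the final level does losing all health eliminate a player
        victim["eliminated"] = True
    else:
        # otherwise, both players advance in level
        attacker["level"] += 1
        victim["level"] += 1
        victim["health"] = 100                              # assumed respawn health
    return attacker, victim
```

Note how this structure rewards aggression: a knockout helps the attacker's score *and* pushes both players toward the decisive final level.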

It turns out this game lends itself very well to “alternate strategies” one might encounter when integrating with AI — which leads us to…

Integrating with AI

Inspired by other attempts to have LLMs interact with 8-bit systems, most notably ChatGPT vs Atari 2600 Video Chess, I wanted to explore what it takes for an LLM to interact successfully with a simple game.

Unlike approaches that require the LLM to interpret visual or audio output directly, my method uses what I call “smart senses”. These are structured, text-based representations of the game world that abstract away heavy perception tasks. This lets the LLM spend less time deciphering raw data and more time doing what it excels at: reasoning about state and planning actions.

To that end, these were the accommodations to make the game compatible with the LLM:

  • turn-based play: the live game runs too fast for the LLM; converting to turn-based gives the LLM the time it needs
  • persistent notes: the LLM is instructed to maintain in-play and after-game notes so that it can carry strategies forward across games
  • smart sensory input, equivalent to what the 8-bit AI uses:
    • touch: awareness of contact with missiles, walls, and arena boundaries
    • EMF detection: awareness of other players’ positions and states
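To make the idea concrete, the two smart senses could be serialized into a compact text payload for the LLM prompt. This is a sketch of the concept only; the field names and structure below are invented for illustration, as the post doesn't show the project's actual format.

```python
import json

# Illustrative only: the real "smart sense" format isn't shown in the post.
def build_senses(touch_contacts, emf_readings):
    """Pack the two smart senses into a JSON string for the LLM prompt."""
    return json.dumps({
        "touch": touch_contacts,   # contacts with missiles, walls, boundaries
        "emf": emf_readings,       # other players' positions and states
    }, indent=2)

payload = build_senses(
    touch_contacts=[{"object": "wall", "side": "north"}],
    emf_readings=[{"player": 2, "x": 40, "y": 12, "health": 80}],
)
```

Because the payload is already symbolic, the LLM skips perception entirely and can reason directly about positions, contacts, and health.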

Like other researchers, I’m using the ChatGPT API (model gpt-4o) as the LLM because it offers strong reasoning, stable structured outputs, and affordable per-call pricing. I’m also most familiar with PHP, so I’m using it as the interface layer that connects the LLM to the game. The last missing piece was enabling two-way communication between PHP and the emulator:


	┌───────┐     ┌───────────┐     ┌───┐     ┌───────────┐     ┌────────────┐     ┌──────┐
	│“Cloud”│ <─> │ChatGPT API│ <─> │PHP│ ××× │    ???    │ ××× │x16-emulator│ <─> │PvP-AI│
	└───────┘     └───────────┘     └───┘     └───────────┘     └────────────┘     └──────┘
		

After some investigation into the existing capabilities of the emulator, I was able to piggyback a new feature, currently a pull request under review, and that completed the “chain”:


	┌───────┐     ┌───────────┐     ┌───┐     ┌───────────┐     ┌────────────┐     ┌──────┐
	│“Cloud”│ <─> │ChatGPT API│ <─> │PHP│ <─> │VIA2-socket│ <─> │x16-emulator│ <─> │PvP-AI│
	└───────┘     └───────────┘     └───┘     └───────────┘     └────────────┘     └──────┘
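A minimal sketch of what the PHP side of that chain does, rendered here in Python for brevity. Everything below is assumed: the post doesn't specify the VIA2-socket transport or message format, so this pretends it is a local stream socket with a newline-terminated command/reply exchange, and HOST/PORT are placeholders.

```python
import socket

# Hypothetical endpoint; the actual VIA2-socket address is defined by the
# emulator pull request, which isn't detailed in the post.
HOST, PORT = "127.0.0.1", 5555

def exchange(command: str) -> str:
    """Send one command line to the emulator side and read one reply line."""
    with socket.create_connection((HOST, PORT)) as sock:
        sock.sendall(command.encode() + b"\n")          # e.g. a joystick action
        reply = sock.makefile("r").readline()           # e.g. the next game state
    return reply.strip()
```

The important property is bidirectionality: the interface layer can both inject inputs into the game and pull sensory state back out, which is what the original emulator lacked.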
		

After enabling on-demand screen captures and removing sound and other non-essentials, I had a working platform to research with. And one final allowance for budgeting reasons: instead of making an API call every frame, I chose to make one every alternate frame.
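The alternate-frame budgeting can be sketched as a simple cadence loop. This is an assumption-laden illustration: `query_llm` and `send_action` are stand-ins, not real API names, and repeating the previous action on the skipped frame is my guess at the in-between behavior.

```python
# Sketch of the cost-saving cadence: consult the LLM only on every other
# frame, reusing its last action in between. All names are illustrative.
def run_turns(frames, query_llm, send_action):
    """Return how many LLM calls were made over `frames` turns."""
    action = None
    calls = 0
    for frame in range(frames):
        if frame % 2 == 0:           # only consult the LLM on even frames
            action = query_llm(frame)
            calls += 1
        send_action(action)          # skipped frames reuse the last action
    return calls
```

Halving the call rate halves the API bill while still letting the LLM react within one frame of any event.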

As part of my investigations, I’ve recorded a series of three sequential games “ChatGPT vs PvP-AI” with persistent notes from game to game. They provide a very interesting arc from experimentation to a winning strategy by the LLM.

Further details about the LLM interface and technical setup are in my GitHub EXPLORE repository under CX16 v3 – LLM vs PvP-AI.

Given the encouraging results, for future research, I’m looking into even more advanced “smart senses” like vision, hearing, and balance.

© 2026 Russell Harper
