Waypoint-1: Real-Time Interactive Video Diffusion from Overworld

原始链接: https://huggingface.co/blog/waypoint-1

## Waypoint-1: Interactive Real-Time World Generation

Overworld's Waypoint-1 is a new kind of interactive video diffusion model that lets users step into and control procedurally generated worlds in real time. Unlike existing models fine-tuned for simple controls, Waypoint-1 is *trained* for interactivity, enabling free camera movement via mouse and keyboard input with zero latency.

Waypoint-1 is built on a transformer trained on 10,000 hours of game footage, using "diffusion forcing" and "self forcing" techniques to achieve realistic, stable frame generation. This delivers a smooth experience even on consumer hardware.

The **WorldEngine** inference library powers Waypoint-1: a high-performance Python toolkit optimized for speed and developer ease of use, currently reaching 30-60 FPS on a 5090 GPU.

Overworld is hosting a hackathon on January 20, 2026 to encourage building with WorldEngine, with a 5090 GPU as the prize.

**Try it:** [https://overworld.stream](https://overworld.stream)

## Waypoint-1: Real-Time Interactive Video Diffusion

A new open project named Waypoint-1 enables real-time interactive video diffusion, creating dynamically evolving virtual environments. Users are experimenting with the model through plugins such as Scope-Overworld and Runpod, noting its ability to generate diverse scenes from prompts, from fantasy landscapes to cyberpunk UIs.

Despite the excitement, early feedback also points to limitations: inconsistent spatial memory, a lack of object persistence, and a tendency to drift from the initial prompt. The developers acknowledge these issues, comparing them to the early days of GPT-3, but remain optimistic about the model's potential.

The project is licensed under Apache (small model) or CC BY-NC-SA 4.0 (medium model), and aims to foster open development of "world models". The Overworld team is actively engaging with the community, resolving login issues, and welcomes support so that it can enable payments.

Original Article

[Image: waypoint launch grid]

Try Out The Model

Overworld Stream: https://overworld.stream

What is Waypoint-1?

Waypoint-1 is Overworld’s real-time, interactive video diffusion model, controllable and promptable via text, mouse, and keyboard. You can give the model some frames, run it, and have it create a world you can step into and interact with.

The backbone of the model is a frame-causal rectified flow transformer trained on 10,000 hours of diverse video game footage paired with control inputs and text captions. Waypoint-1 is a latent model, meaning that it is trained on compressed frames.

The standard approach among existing world models has become taking pre-trained video models and fine-tuning them on brief, simplified control inputs. In contrast, Waypoint-1 is trained from the get-go with a focus on interactive experiences. With other models, controls are simple: you can move and rotate the camera once every few frames, with severe latency issues. With Waypoint-1, controls are unrestricted: you can move the camera freely with the mouse and press any key on the keyboard, all with zero latency. Each frame is generated with your controls as context. Additionally, the model runs fast enough to provide a seamless experience even on consumer hardware.

How was it trained?

Waypoint-1 was pre-trained via diffusion forcing, a technique with which the model learns to denoise future frames given past frames. A causal attention mask is applied such that a token in any given frame can only attend to tokens in its own frame, or past frames, but not future frames. Each frame is noised randomly, and as such the model learns to denoise each frame separately. During inference, you can then denoise new frames one at a time, allowing you to generate a procedural stream of new frames.
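To make the scheme concrete, here is a small framework-free sketch of the two ingredients described above: a frame-causal attention mask, and an independent noise level per frame. The helper names are illustrative, not the model's actual code; the real model applies this over latent tokens inside attention.

```python
import random

def frame_causal_mask(num_frames, tokens_per_frame):
    """Boolean attention mask for diffusion forcing: entry [q][k] is True
    when query token q may attend to key token k, i.e. when k's frame is
    its own frame or a past frame, never a future frame."""
    n = num_frames * tokens_per_frame
    frame_of = [i // tokens_per_frame for i in range(n)]
    return [[frame_of[q] >= frame_of[k] for k in range(n)] for q in range(n)]

def sample_frame_timesteps(num_frames, rng=random):
    """Each frame gets its own independent noise level in [0, 1), so the
    model learns to denoise each frame separately."""
    return [rng.random() for _ in range(num_frames)]

mask = frame_causal_mask(num_frames=3, tokens_per_frame=2)
assert mask[1][0]        # same frame: allowed
assert mask[4][1]        # past frame: allowed
assert not mask[0][2]    # future frame: blocked
```

During inference the same mask lets you append freshly denoised frames to the context and keep rolling forward one frame at a time.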

While diffusion forcing provides a strong baseline, randomly noising all frames is misaligned with a frame-by-frame autoregressive rollout. This train/inference mismatch results in error accumulation and noisy long rollouts. To address this problem we post-train with self forcing, a technique that trains the model to produce realistic outputs under a regime that matches inference behavior. Self forcing via DMD (distribution matching distillation) has the added benefits of one-pass CFG (classifier-free guidance) and few-step denoising.

The Inference Library: WorldEngine

WorldEngine is Overworld’s high‑performance inference library for interactive world model streaming. It provides the core tooling for building inference applications in pure Python, optimized for low latency, high throughput, extensibility, and developer simplicity. The runtime loop is designed for interactivity: it consumes context frame images, keyboard/mouse inputs, and text, and outputs image frames for real‑time streaming.

On Waypoint‑1‑Small (2.3B) running on a 5090, WorldEngine sustains ~30,000 token‑passes/sec (single denoising pass; 256 tokens per frame) and achieves 30 FPS at 4 steps or 60 FPS at 2 steps.
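As a sanity check, the frame rates follow directly from the figures quoted above: each frame costs 256 token-passes per denoising step, so the throughput implied at 2 and 4 steps is:

```python
tokens_per_frame = 256          # quoted above
token_passes_per_sec = 30_000   # single denoising pass, on a 5090

# A frame costs tokens_per_frame token-passes per denoising step.
fps_at_4_steps = token_passes_per_sec / (tokens_per_frame * 4)
fps_at_2_steps = token_passes_per_sec / (tokens_per_frame * 2)

print(round(fps_at_4_steps, 1))  # 29.3 -> matches the quoted ~30 FPS
print(round(fps_at_2_steps, 1))  # 58.6 -> matches the quoted ~60 FPS
```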

Performance comes from four targeted optimizations:

  • AdaLN feature caching: caches and reuses the AdaLN conditioning projections, recomputing them only when the prompt conditioning or timestep changes between forward passes.
  • Static Rolling KV Cache + Flex Attention
  • Matmul fusion: Standard inference optimization using fused QKV projections.
  • Torch Compile using `torch.compile(fullgraph=True, mode="max-autotune", dynamic=False)`
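The first optimization can be illustrated with a minimal memo keyed on the conditioning inputs. This is a sketch of the caching idea, not WorldEngine's actual implementation; all names here are invented:

```python
class AdaLNFeatureCache:
    """Caches the AdaLN conditioning projection (scale/shift features
    derived from the prompt and timestep) and reuses it for as long as
    the (prompt, timestep) pair is unchanged between forward passes."""

    def __init__(self, project):
        self._project = project   # the expensive conditioning projection
        self._key = None
        self._features = None
        self.recomputes = 0       # for demonstration only

    def __call__(self, prompt, timestep):
        key = (prompt, timestep)
        if key != self._key:      # recompute only when conditioning changes
            self._features = self._project(prompt, timestep)
            self._key = key
            self.recomputes += 1
        return self._features

cache = AdaLNFeatureCache(lambda p, t: (len(p), t))  # stand-in projection
for _ in range(4):                       # same prompt and timestep...
    cache("herd goats in a valley", 0.5)
assert cache.recomputes == 1             # ...projected exactly once
cache("herd goats in a valley", 0.25)    # timestep changed
assert cache.recomputes == 2
```

Since the prompt and timestep schedule are constant across most consecutive denoising passes in an interactive stream, the projection is almost always a cache hit.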
A minimal usage example:

```python
from world_engine import WorldEngine, CtrlInput

# Load the model onto the GPU
engine = WorldEngine("Overworld/Waypoint-1-Small", device="cuda")

# Set the text prompt that conditions generation
engine.set_prompt("A game where you herd goats in a beautiful valley")

# Seed the context with an initial frame (a uint8 image array)
img = engine.append_frame(uint8_img)

# Generate frames one at a time, each conditioned on the controls
for controller_input in [
    CtrlInput(button={48, 42}, mouse=[0.4, 0.3]),
    CtrlInput(mouse=[0.1, 0.2]),
    CtrlInput(button={95, 32, 105}),
]:
    img = engine.gen_frame(ctrl=controller_input)
```

Build with World Engine

We’re running a world_engine hackathon on 1/20/2026; you can RSVP here. Teams of 2-4 are welcome, and the prize is a 5090 GPU, awarded on the spot. We’d love to see what you come up with to extend world_engine, and it should be a great event to meet like-minded founders, engineers, hackers, and investors. We hope you can join us at 10am PST on January 20th for 8 hours of friendly competition!

Stay in Touch
