Nvidia DGX Spark: great hardware, early days for the ecosystem

Original link: https://simonwillison.net/2025/Oct/14/nvidia-dgx-spark/

## NVIDIA DGX Spark: first impressions of the "AI supercomputer"

NVIDIA recently sent out preview units of its new DGX Spark desktop AI computer (retail price around $4,000), a compact machine roughly the size of a Mac mini. It's a powerful ARM64 system with 20 CPU cores, 128GB of memory, and a Blackwell GPU reporting 119.68GB of memory.

The target audience is AI researchers training and running models, but initial setup was challenging because so much software assumes an x86 architecture. Getting PyTorch running meant digging into the ARM ecosystem, though NVIDIA's recently published documentation and official Docker containers have simplified the process considerably. Claude Code proved extremely useful for troubleshooting Ubuntu, CUDA drivers, and software installation.

The Spark's ecosystem is evolving quickly, with projects like Ollama, llama.cpp, LM Studio, and vLLM already offering optimized builds. Benchmarks show excellent performance, with llama.cpp reaching up to 3,600 tokens/second. It's too early to say whether the DGX Spark can be recommended, but the recent improvements point to strong support ahead. Easy remote access to the machine via Tailscale adds to its appeal. The coming weeks will show how well supported this new platform ultimately becomes.


Original article

14th October 2025

NVIDIA sent me a preview unit of their new DGX Spark desktop “AI supercomputer”. I’ve never had hardware to review before! You can consider this my first ever sponsored post if you like, but they did not pay me any cash and aside from an embargo date they did not request (nor would I grant) any editorial input into what I write about the device.

The device retails for around $4,000. They officially go on sale tomorrow.

First impressions are that this is a snazzy little computer. It’s similar in size to a Mac mini, but with an exciting textured surface that feels refreshingly different and a little bit science fiction.

[Photo: the DGX Spark sitting on the box it came in, which has NVIDIA DGX Spark written on it in white text on green. The machine is about the width of a Mac mini, with an NVIDIA logo on a reflective handle section at one end, a textured silver metal front, and another reflective handle at the other end. It's pretty and a bit weird looking.]

There is a very powerful machine tucked into that little box. Here are the specs, which I had Claude Code figure out for me by poking around on the device itself:

Hardware Specifications

  • Architecture: aarch64 (ARM64)
  • CPU: 20 cores
    • 10x Cortex-X925 (performance cores)
    • 10x Cortex-A725 (efficiency cores)
  • RAM: 119 GB total (112 GB available)—I’m not sure why Claude reported it differently here; the machine is listed as 128GB (though 128 GB decimal works out to roughly 119 GiB, which may explain the gap)
  • Storage: 3.7 TB (6% used, 3.3 TB available)

GPU Specifications

  • Model: NVIDIA GB10 (Blackwell architecture)
  • Compute Capability: sm_121 (12.1)
  • Memory: 119.68 GB
  • Multi-processor Count: 48 streaming multiprocessors
  • Architecture: Blackwell

Short version: this is an ARM64 device with 128GB of memory that’s available to both the GPU and the 20 CPU cores at the same time, strapped onto a 4TB NVMe SSD.
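
If you want to sanity-check numbers like these yourself rather than trusting an agent's report, standard tools will do it. A quick sketch:

# CPU architecture and core count
lscpu | grep -E 'Architecture|Model name'
nproc

# Unified memory as the OS sees it
free -h

# GPU name, memory and driver version
nvidia-smi --query-gpu=name,memory.total,driver_version --format=csv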

The Spark is firmly targeted at “AI researchers”. It’s designed for both training and running models.

The tricky bit: CUDA on ARM64

Until now almost all of my own model running experiments have taken place on a Mac. This has gotten far less painful over the past year and a half thanks to the amazing work of the MLX team and community, but it’s still left me deeply frustrated at my lack of access to the NVIDIA CUDA ecosystem. I’ve lost count of the number of libraries and tutorials which expect you to be able to use Hugging Face Transformers or PyTorch with CUDA, and leave you high and dry if you don’t have an NVIDIA GPU to run things on.

Armed (ha) with my new NVIDIA GPU I was excited to dive into this world that had long eluded me... only to find that there was another assumption baked into much of this software: x86 architecture for the rest of the machine.

This resulted in all kinds of unexpected new traps for me to navigate. I eventually managed to get a PyTorch 2.7 wheel for CUDA on ARM, but failed to do so for 2.8. I’m not confident a 2.8 wheel doesn’t exist; I’m finding the PyTorch ARM ecosystem pretty confusing to navigate.
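
For reference, here's a minimal sketch of the install-and-verify loop involved. The cu128 index URL is an assumption on my part about which CUDA build applies; that's exactly the part of the ARM wheel story I found confusing:

# Install a CUDA-enabled PyTorch wheel (assumes this index serves aarch64 builds)
pip install torch --index-url https://download.pytorch.org/whl/cu128

# Verify that PyTorch can actually see the GPU
python -c "import torch; print(torch.__version__); print(torch.cuda.is_available())"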

NVIDIA are trying to make this easier, with mixed success. A lot of my initial challenges got easier when I found their official Docker container, so now I’m figuring out how best to use Docker with GPUs. Here’s the current incantation that’s been working for me:

docker run -it --gpus=all \
  -v /usr/local/cuda:/usr/local/cuda:ro \
  nvcr.io/nvidia/cuda:13.0.1-devel-ubuntu24.04 \
  bash

I have not yet got my head around the difference between CUDA 12 and 13. 13 appears to be very new, and a lot of the existing tutorials and libraries appear to expect 12.
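
A useful distinction here: the driver and the toolkit are versioned independently, and you can check each separately. The nvidia-smi header reports the maximum CUDA version the driver supports, while nvcc reports the toolkit actually installed:

# Driver side: the header shows the supported CUDA version
nvidia-smi

# Toolkit side: the CUDA compiler version on this system (or in this container)
nvcc --version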

The missing documentation isn’t missing any more

When I first received this machine around a month ago there was very little in the way of documentation to help get me started. This meant climbing the steep NVIDIA+CUDA learning curve mostly on my own.

This has changed substantially in just the last week. NVIDIA now have extensive guides for getting things working on the Spark and they are a huge breath of fresh air—exactly the information I needed when I started exploring this hardware.

Here’s the getting started guide and the essential collection of playbooks. There’s still a lot I haven’t tried yet just in this official set of guides.

Claude Code for everything

Claude Code was an absolute lifesaver for me while I was trying to figure out how best to use this device. My Ubuntu skills were a little rusty, and I also needed to figure out CUDA drivers and Docker incantations and how to install the right versions of PyTorch. Claude 4.5 Sonnet is much better than me at all of these things.

Since many of my experiments took place in disposable Docker containers I had no qualms at all about running it in YOLO mode:

claude --dangerously-skip-permissions

Claude understandably won’t let you do this as root, even in a Docker container, so I found myself using the following incantation in a fresh nvcr.io/nvidia/cuda:13.0.1-devel-ubuntu24.04 instance pretty often:

apt-get update && apt-get install -y sudo
# pick the first free UID >=1000
U=$(for i in $(seq 1000 65000); do if ! getent passwd $i >/dev/null; then echo $i; break; fi; done)
echo "Chosen UID: $U"
# same for a GID
G=$(for i in $(seq 1000 65000); do if ! getent group $i >/dev/null; then echo $i; break; fi; done)
echo "Chosen GID: $G"
# create user+group
groupadd -g "$G" devgrp
useradd -m -u "$U" -g "$G" -s /bin/bash dev
# enable password-less sudo:
printf 'dev ALL=(ALL) NOPASSWD:ALL\n' > /etc/sudoers.d/90-dev-nopasswd
chmod 0440 /etc/sudoers.d/90-dev-nopasswd
# Install npm
DEBIAN_FRONTEND=noninteractive TZ=Etc/UTC apt-get install -y npm
# Install Claude
npm install -g @anthropic-ai/claude-code

Then switch to the dev user and run Claude for the first time:

su - dev
claude --dangerously-skip-permissions

This will provide a URL which you can visit to authenticate with your Anthropic account, then confirm by copying a token back and pasting it into the terminal.

Docker tip: you can create a snapshot of the current image (with Claude installed) by running docker ps to get the container ID and then:

docker commit --pause=false <container_id> cc:snapshot

Then later you can start a similar container using:

docker run -it \
  --gpus=all \
  -v /usr/local/cuda:/usr/local/cuda:ro \
  cc:snapshot bash

Here’s an example of the kinds of prompts I’ve been running in Claude Code inside the container:

I want to run https://huggingface.co/unsloth/Qwen3-4B-GGUF using llama.cpp - figure out how to get llama cpp working on this machine such that it runs with the GPU, then install it in this directory and get that model to work to serve a prompt. Goal is to get this command to run: llama-cli -hf unsloth/Qwen3-4B-GGUF -p "I believe the meaning of life is" -n 128 -no-cnv

That one worked flawlessly—Claude checked out the llama.cpp repo, compiled it for me and iterated on it until it could run that model on the GPU. Here’s a full transcript, converted from Claude’s .jsonl log format to Markdown using a script I vibe coded just now.
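
The core of a CUDA-enabled llama.cpp build is short enough to reproduce here. This is a sketch of the standard recipe rather than the exact steps from Claude's transcript:

git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp

# Configure with CUDA support enabled
cmake -B build -DGGML_CUDA=ON

# Build the binaries (llama-cli ends up in build/bin/)
cmake --build build --config Release -j

# The goal command from the prompt above
./build/bin/llama-cli -hf unsloth/Qwen3-4B-GGUF \
  -p "I believe the meaning of life is" -n 128 -no-cnv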

I later told it:

Write out a markdown file with detailed notes on what you did. Start with the shortest form of notes on how to get a successful build, then add a full account of everything you tried, what went wrong and how you fixed it.

Which produced this handy set of notes.

Tailscale was made for this

Having a machine like this on my local network is neat, but what’s even neater is being able to access it from anywhere else in the world, from both my phone and my laptop.

Tailscale is perfect for this. I installed it on the Spark (using the Ubuntu instructions here), signed in with my SSO account (via Google)... and the Spark showed up in the “Network Devices” panel on my laptop and phone instantly.
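
For anyone else setting this up, the install really is just a couple of commands. This is Tailscale's standard Linux path, with the script detecting Ubuntu for you:

# Install Tailscale via its standard Linux install script
curl -fsSL https://tailscale.com/install.sh | sh

# Bring the machine up on your tailnet (prints an auth URL to visit)
sudo tailscale up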

I can SSH in from my laptop or using the Termius iPhone app on my phone. I’ve also been running tools like Open WebUI which give me a mobile-friendly web interface for interacting with LLMs on the Spark.
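
Once the Spark is on the tailnet its hostname resolves from any signed-in device. A sketch, with spark and simon standing in for whatever your machine and user are actually called:

# SSH straight to the Spark from anywhere via its tailnet hostname
ssh simon@spark

# Or tunnel a web UI (like Open WebUI) running on the Spark to your laptop
ssh -L 8080:localhost:8080 simon@spark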

Here comes the ecosystem

The embargo on these devices dropped yesterday afternoon, and it turns out a whole bunch of relevant projects had similar preview access to mine. This is fantastic news, as many of the things I’ve been trying to figure out myself suddenly got a whole lot easier.

Four particularly notable examples:

  • Ollama works out of the box (see the sketch after this list). They actually had a build that worked a few weeks ago, and were the first success I had running an LLM on the machine.
  • llama.cpp creator Georgi Gerganov just published extensive benchmark results from running llama.cpp on a Spark. He’s getting ~3,600 tokens/second to read the prompt and ~59 tokens/second to generate a response with the MXFP4 version of GPT-OSS 20B and ~817 tokens/second to read and ~18 tokens/second to generate for GLM-4.5-Air-GGUF.
  • LM Studio now have a build for the Spark. I haven’t tried this one yet as I’m currently using my machine exclusively via SSH.
  • vLLM—one of the most popular engines for serving production LLMs—had early access and there’s now an official NVIDIA vLLM NGC Container for running their stack.
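
As an example of the Ollama path flagged above, going from a bare machine to a first local model is about as short as it gets. A sketch using Ollama's standard Linux installer and the 20B model from Georgi's benchmarks:

# Install Ollama via its standard Linux install script
curl -fsSL https://ollama.com/install.sh | sh

# Pull and run GPT-OSS 20B, then chat with it in the terminal
ollama run gpt-oss:20b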

Should you get one?

It’s a bit too early for me to provide a confident recommendation for this machine. As indicated above, I’ve had a tough time figuring out how best to put it to use, largely due to my own inexperience with CUDA, ARM64 and Ubuntu GPU machines in general.

The ecosystem improvements in just the past 24 hours have been very reassuring though. I expect it will be clear within a few weeks how well supported this machine is going to be.
