(comments)

Original link: https://news.ycombinator.com/item?id=43358980

TinyKVM, a fast sandbox built on top of Varnish, is drawing lively discussion on Hacker News. Developers praise its near-native speed and its ability to reset a VM to a predefined state, which is ideal for network-facing services under constant attack. It supports copy-on-write, allowing heavy workloads such as large language models (LLMs) to share memory efficiently. Users are discussing potential use cases, including running native code faster than WASM, sandboxing JavaScript runtimes like Deno, and migrating program execution between machines. The discussion also touches on its limitations: it was originally designed without file I/O or network access, though these are being addressed through custom APIs. While it is Linux-specific due to its KVM foundation, TinyKVM strikes a compelling balance between security and performance, sitting between WASM and full containers, making it a valuable tool for a wide range of sandboxing needs.


Original
TinyKVM: Fast sandbox that runs on top of Varnish (varnish-software.com)
319 points by perbu 1 day ago | 59 comments

I love this. Please never stop doing what you’re doing.

edit: Of course you’re the top contributor to IncludeOS. That was the first project I thought of while reading this blog post. I’ve been obsessed with the idea of Network Function Virtualization for a long time. It’s the most natural boundary for separating units of work in a distributed system and produces such clean abstractions and efficient scaling mechanisms.

(I’m also a very happy user of Varnish in production btw. It’s by far the most reliable part of the stack, even more than nginx. Usually I forget it’s even there. It’s never been the cause of a bug, once I got it configured properly.)



Oh. It's like Firecracker, only much faster 8-)

What I like most is the ability to instantly reset the state of the VM to a known predefined state. It's like restarting the VM without any actual restart. It looks like an ideal course of action for network-facing services that are constantly under attack: even if an attack succeeds, the result is erased on the next request.

Easy COW page sharing for programs that are not written with that in mind, like ML model runners, is also pretty nice.



It also sounds ideal for resuming memory intensive per-user programs, like LLMs with a large context window. You can basically have an executable (and its memory) attached to a user session, but only pay the cost for it while the user session has an open request.


Yes

> TinyKVM can fork itself into copies that use copy-on-write to allow for huge workloads like LLMs to share most memory. As an example, 6GB weights required only 260MB working memory per instance, making it highly scalable.
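
To put that number in context (a back-of-the-envelope calculation using only the figures quoted above): 20 forked instances would need roughly 6 GB + 20 × 260 MB ≈ 11 GB in total, versus about 120 GB if every instance carried its own full copy of the weights.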



yes that's the durable objects ~ durable agents model that cloudflare is building


It's more like gVisor (or QEMU's user-mode emulation, except that the latter doesn't support KVM, only dynamic code translation).


Original post: https://fwsgonzo.medium.com/tinykvm-the-fastest-sandbox-564a...

You can find a bunch of posts related to this topic there as well.



This is really exciting. The 2.5us snapshot restore performance is on a par with Wasmtime, but with the huge advantage of being able to run native code, albeit with the disadvantage of much slower (though still microsecond-scale) interop.

I see there is a QuickJS demo in the tinykvm_examples repo already, but it'd be great to see if it's possible to get a JIT-capable JavaScript runtime working, as that would be an order of magnitude faster. From my experiments with server-rendering a React app, native QuickJS was about 12-20ms while v8 was 2-4ms after JIT warmup.

I need to study this some more, but I'd love to get to the point where there was a single Deno-like executable that ran inside the sandbox and made all HTTP requests through Varnish itself. A snapshot would be taken after importing the specified JS URL, and then each request would run in an isolated snapshot.

Probably needs a mechanism to reset the random seed per request.



You can run v8 jitless, if you want. It's going to be much faster than QuickJS. Adding JIT support means adding a fixed executable range, which you also can do already, but you can't run it in the dumb CLI example. JITs love to be W+X. So, not sure if it's an afternoon amount of work yet, due to security implications.

I have experience with this from libriscv, where I also embed JIT run-times like v8 and LuaJIT already.



From my tests v8 jitless was about 50% faster than QuickJS but still almost an order of magnitude slower than with JIT.

Note that I mistranscribed the numbers above: QuickJS was 18-24ms while v8 without warmup was 12-20ms (which I think is similar to jitless perf) and warmed jit was 2-4ms when I benchmarked a couple of years back. https://news.ycombinator.com/item?id=33793181

Thanks for the complexity warning. Sounds like I need to wait for an embedded JIT example using fixed executable range before I start playing around. But it would be fun to try and make Deno run inside it somehow, perhaps building on deno_runtime and hooking the http client user agent to make requests through Varnish. Deno's permission system should allow cleanly disabling unavailable functionality like access to the file system.

I see some examples that seem to use glibc but I was under the impression only musl binaries can be truly static? Can binaries built against glibc be used with TinyKVM?



Yes, you can use glibc with -static!
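
For anyone who wants a quick smoke test, something like the following builds into a fully static binary with plain glibc (the build flags here are just the obvious generic ones, nothing TinyKVM-specific):

    // hello.cpp -- build with: g++ -O2 -static hello.cpp -o hello
    // The resulting binary has no dynamic loader or shared-library
    // dependencies, which is what makes it straightforward to load
    // into a minimal guest.
    #include <cstdio>

    int main() {
        std::puts("hello from a fully static glibc binary");
        return 0;
    }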


Isn't this basically libkrun? https://github.com/containers/libkrun


Fascinating but I'm having trouble understanding the big picture. This runs a user process in a VM with no kernel? Does every system call become a VM exit and get proxied to the host? Or are there no system calls?


IIUC there's no need for system calls because there's no I/O. There's just program arguments and shared memory.


You need a few syscalls: to grow your heap size (brk) or to exit your program (exit). I took a quick look at their code and here are the syscalls and arguments implemented: https://github.com/varnish/tinykvm/blob/master/src/functions...

It’s a bit more than running a program under seccomp strict mode, but conceptually similar, so running anything too complicated likely won't work. You certainly won’t be able to sandbox chromium for taking website snapshots for example.



There's many ways to go about it, but essentially yes, brk and mmap and a few others just to get into main() for some common run-times.

But you can do whatever you want. For example in libriscv I override the global allocator in my guest programs to use a host-managed heap. That way heap usage has native performance in interpreter mode, while also allowing me full control of the heap from the outside. I wrote about this here: https://medium.com/@fwsgonzo/using-c-as-a-scripting-language...
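
For a rough idea of what that allocator override looks like, here is a minimal sketch. Note that sys_host_malloc/sys_host_free are hypothetical stand-ins for whatever guest-to-host call the embedder actually exposes (they are not real TinyKVM or libriscv APIs) and are stubbed with malloc/free so the sketch compiles on its own:

    #include <cstddef>
    #include <cstdlib>
    #include <new>

    // Hypothetical guest->host calls; in a real embedding these would be thin
    // wrappers around the sandbox's hypercall mechanism.
    static void* sys_host_malloc(std::size_t size) { return std::malloc(size); }
    static void  sys_host_free(void* ptr)          { std::free(ptr); }

    // Route every C++ heap allocation in the guest through the host-managed heap.
    void* operator new(std::size_t size) {
        if (void* p = sys_host_malloc(size)) return p;
        throw std::bad_alloc{};
    }
    void operator delete(void* ptr) noexcept { sys_host_free(ptr); }
    void operator delete(void* ptr, std::size_t) noexcept { sys_host_free(ptr); }

    int main() {
        auto* data = new int[1024];   // goes through sys_host_malloc
        delete[] data;                // goes through sys_host_free
    }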



Well, yes, there's a few system calls like `brk()`/`sbrk()`/`getrandom()`, and what not. But you don't get to open any files or sockets or devices.


For the Varnish integration I added permission-based access to local files. Network stuff can be accessed through custom APIs. A simple fetch(url, options)-like system call. Just have a look at the VMOD repository. It's something I'd like to move into TinyKVM when I feel like it.


What do you mean by I/O exactly? Because to me handling HTTP requests definitely requires I/O, no matter how you technically implement it. Does the program start anew with new arguments for each HTTP request, and if so how is that an improvement over I/O syscalls?


Handling HTTP requests can be done entirely via stdin + stdout. But it won't be too useful if you could not even talk to a database.

The VM may (and should) be limited to a small subset of what's available on the host though.
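
A toy illustration of that model (nothing TinyKVM-specific, just a program that treats stdin as the request and stdout as the response, which is roughly the CGI-like shape being described):

    #include <iostream>
    #include <sstream>
    #include <string>

    int main() {
        // Read the entire "request" from stdin.
        std::ostringstream request;
        request << std::cin.rdbuf();

        // Produce the "response" on stdout; the host (e.g. a Varnish VMOD)
        // decides how this maps onto an actual HTTP response.
        std::string body = "you sent " + std::to_string(request.str().size()) + " bytes\n";
        std::cout << "Content-Type: text/plain\n\n" << body;
        return 0;
    }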



I mean you don't get to open files, sockets, devices, etc. in the sandboxed program. You get to do just a few minimal things like I/O on stdin/stdout/stderr, use shared memory, maybe allocate memory.


I believe it ships with its own kernel

> The TinyKVM guest has a tiny kernel which cannot be modified.



this is really cool if it works for your use cases.

Some notes from the post

> I found that TinyKVM ran at 99.7% native speed

> As long as they are static and don’t need file or network access, they might just run out-of-the box.

> The TinyKVM guest has a tiny kernel which cannot be modified



And unlike WebAssembly, it can leverage specialized CPU instructions. This is huge for cryptographic implementations, video codecs, LLMs, etc.


This is so cool.

I’m exploring micro-VMs for my self-hosted PaaS, https://lunni.dev/ – and something with such little overhead seems like a really interesting option!



Not entirely what this is intended for, but does anyone have experience running an X server (or Wayland, I don't care)?

I'm doing some dev (on Mac) against an RDP server and occasionally have other needs like that for a client. Currently I use UTM (nice QEMU Mac frontend) along with a DietPi (super stripped-down Debian) VM for these sorts of things.

I'm pretty familiar with Docker, but have a good idea of what sorts of hoop-jumping might be needed to get a graphics server to run there. Wondering if there's a simpler path.



Interesting to see the performance gain. But without file i/o and network access, what are the use cases?


You can call host functions doing whatever you want. Similar to what WebAssembly does.


There's nothing in the article that suggests that it runs on top of Varnish; in fact, the author even says it's not intended to run Varnish in it.


There's a contradiction in the text, I'll give you that, but at the end he clearly links both:

* a Varnish module using this <https://github.com/varnish/libvmod-tinykvm>
* a set of examples in multiple languages <https://github.com/varnish/tinykvm_examples>


I still believe the nexus needs to be described more clearly and strongly in the story in order to support the title here on HN that it runs on top of Varnish. Even the blog title itself does not make such a claim.


I'm curious: would it be a good idea to switch my desktop Linux pc to using huge pages across the board?


Could this be used to migrate execution of a single program between two different machines?


Yep. I could imagine a deterministic method of just sending the executable + changed pages. Then load the program in the same way on the other machine, and then apply the changed pages. It would be a minimal transfer. Thread state can also be migrated, but Linux-kernel stuff like FDs cannot, or at least that's not my area of expertise!
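
A back-of-the-envelope version of the "send only changed pages" idea (generic code, not the TinyKVM API: it just compares a current memory image against a baseline snapshot page by page and collects the dirty ones that would need to be shipped):

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <cstring>
    #include <vector>

    constexpr std::size_t PAGE_SIZE = 4096;

    struct DirtyPage {
        std::size_t index;          // page number within the region
        const std::uint8_t* data;   // pointer into the current image
    };

    // Compare the current memory image against the baseline snapshot and
    // return only the pages that differ; these (plus the executable) are
    // what would be transferred to the other machine.
    std::vector<DirtyPage> collect_dirty_pages(const std::uint8_t* baseline,
                                               const std::uint8_t* current,
                                               std::size_t size) {
        std::vector<DirtyPage> dirty;
        for (std::size_t off = 0; off < size; off += PAGE_SIZE) {
            const std::size_t len = std::min(PAGE_SIZE, size - off);
            if (std::memcmp(baseline + off, current + off, len) != 0)
                dirty.push_back({off / PAGE_SIZE, current + off});
        }
        return dirty;
    }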


There was Condor for this[1], a couple of decades ago. Condor would checkpoint the process and restart it on another machine entirely at user level (but requiring processes to link to their library) by continuing to forward system calls. It of course had plenty of limitations, and some of their decisions would be considered serious security risks now (e.g. they intercept open() and record the name, and assume that it's safe to reopen a file by the same name after migration), but it was an interesting system.

I think migrating cooperating processes would be fairly simple, and the big challenge is rather to decide on the right set of tradeoffs.

[1] https://chtc.cs.wisc.edu/doc/ckpt97.pdf



> the big challenge is rather to decide on the right set of tradeoffs

Agreed. That's a good way to sum it up.



Yeah, that would be very cool!

With a read-only operating system that is identical across machines (i.e. NixOS or Silverblue), you would only have to send the dirty pages, too!



I don't see why not; over ten years ago the OpenVZ VM code had a way to rsync a container across the network: syncing everything, then only the pages that had changed since the start of the sync, then the final pages that had changed in the last few seconds. There was a tiny delay to pause the container on the old host and start it on the new one, but I am sure that this could be reduced further.


In case the author is around: Are there any plans to wrap this in an OCI-compliant runtime?


(not the author, but a friend of friend ;)

Could you specify this a bit? @codethief

The way it's phrased makes it sound like you want to stuff TinyKVM into a container, but I suspect what you are actually asking is to implement an OCI runtime with TinyKVM https://github.com/opencontainers/runtime-spec/blob/main/spe...

Does that make more sense?



I'm new to this area, can someone ELI5 this? What's the difference/advantages/disadvantages compared to other process isolation like containers?

Would I use this to run a distributed infra on a server a bit like docker-compose? or it's not related?



quick someone make rust bindings


No need to wait, you can start playing right away: https://github.com/varnish-rs/varnish-rs


did someone call?


Is this a modern version of CGI with process isolation?


It's rather something that sits between WebAssembly and containers, combining the sandboxing guarantees of the former with the performance of the latter. From a security perspective, the composition is also really good (WebAssembly enforces memory limits, but doesn't have memory protection, NULL pointers are writable, etc. and this is solved here). But unlike WebAssembly, it is Linux-only. So, not something that can run in Web browsers.


Quicky someone make Zig bindings.


man see virtualization man happy man see it no crossplatform man sad


I mean. It's built on KVM and integrates deeply with how processes work; I'm not sure it's possible to make it portable without a lot of engineering time, performance hit, or both.


no i get it its amazing engineering same thing with firecracker wish there was something like that lighter than docker for all 3 major platforms


Well even docker only runs on Linux with some magic to make it look local. Probably the closest similar thing would be WASM stuff.


Someone joined my discord recently and showed me this one: https://github.com/ashishbijlani/kwarantine

It runs on bare metal, though. I just thought it was very interesting to see. Must have been a lot of work.



A four-year-old repo with no code? Not sure what you mean by "showed me".


Looks interesting, but "coming soon" with last commit 3 months ago... I'm sure they are working on it. But soon is probably not correct.


> Kwarantine is a thin hypervisor that directly runs containers/workers on the hardware (no VMs).

> hypervisor

> no VMs

Um?



I really should ask him what's up with that.


There actually is a true native Windows docker, as grotesque of a concept as that is:

https://hub.docker.com/r/microsoft/windows

My understanding is that it... Doesn't work all that well.



it's worked for "run this docker to use this code" sort of things on windows for me. That's all i use it for, it's an inconvenience. Docker, that is. Not docker on windows. Docker in general.


Oh okay yes I definitely agree that it would be great to have that:)





