Show HN: RunMat – runtime with auto CPU/GPU routing for dense math

Original link: https://github.com/runmat-org/runmat

## RunMat: Fast MATLAB-style computing with automatic GPU acceleration

RunMat is a pre-release, open-source runtime for high-performance numerical computing in familiar MATLAB syntax. It automatically optimizes and fuses operations and routes them intelligently to the CPU or GPU, often matching or beating hand-tuned CUDA performance across a range of hardware (NVIDIA, AMD, Apple Silicon, Intel) with no code rewrites and no device management.

Key features include a MATLAB-compatible language, automatic CPU/GPU selection based on data size, and a modern runtime built in Rust for memory safety and cross-platform compatibility. A tiered runtime (the Ignition interpreter and the Turbine JIT) delivers fast startup and optimized execution.

Basic plotting is available today, with more sophisticated visualization planned. RunMat ships with a CLI, Jupyter kernel support, and a package system for extending functionality. Benchmarks show significant speedups over NumPy and PyTorch across a range of workloads.

**Contributions welcome!** Developers can contribute code, domain experts can add functions, and users can provide feedback and bug reports. RunMat is MIT-licensed with an attribution requirement, and aims to build an open, accessible future for scientific computing.

## RunMat: Fast MATLAB-style computing

Nabeel Nallana has released "RunMat Accelerate", an open-source runtime for MATLAB code designed for dramatic speedups. Instead of requiring CUDA or kernel programming, RunMat automatically optimizes computation and distributes it across the CPU and GPU.

The runtime builds a graph of array math operations, fuses them into efficient kernels, and manages data placement intelligently. Benchmarks on an Apple M2 Max show RunMat running 2.8x faster than PyTorch and 130x faster than NumPy on Monte Carlo simulation, 1.8x faster than PyTorch and 10x faster than NumPy on image processing, and even larger gains on elementwise operations (up to 140x faster than PyTorch).

The project is available on GitHub ([https://github.com/runmat-org](https://github.com/runmat-org)) with runnable benchmarks, and a blog post explaining the underlying technology is at [https://runmat.org/blog/runmat-accel-intro-blog](https://runmat.org/blog/runmat-accel-intro-blog). The developer welcomes feedback and bug reports.

## Original

RunMat automatically fuses operations and intelligently routes between CPU and GPU. MATLAB syntax. No kernel code, no rewrites.


🌐 Website · 📖 Documentation


Status: Pre-release (v0.2)

RunMat is an early build. The core runtime and GPU engine already pass thousands of tests, but some plotting features are still missing or buggy. Expect a few rough edges. Feedback and bug reports help us decide what to fix next.


With RunMat you write your math in clean, readable MATLAB-style syntax. RunMat automatically fuses your operations into optimized kernels and runs them wherever they execute fastest, CPU or GPU. On GPU, it can often match or beat hand-tuned CUDA on many dense numerical workloads.

It runs on whatever GPU you have — NVIDIA, AMD, Apple Silicon, Intel — through native APIs (Metal / DirectX 12 / Vulkan). No device management. No vendor lock-in. No rewrites.

Core ideas:

  • MATLAB syntax, not a new language
  • Fast on CPU and GPU, with one runtime
  • No device flags — Fusion automatically chooses CPU vs GPU based on data size and transfer cost heuristics
  • MATLAB language

    • Familiar .m files, arrays, control flow
    • Many MATLAB / Octave scripts run with few or no changes
  • Fusion: automatic CPU+GPU choice

    • Builds an internal graph of array ops
    • Fuses elementwise ops and reductions into bigger kernels
    • Chooses CPU or GPU per kernel based on shape and transfer cost
    • Keeps arrays on device when that is faster
  • Modern CPU runtime

    • Ignition interpreter for fast startup
    • Turbine JIT (Cranelift) for hot paths
    • Generational GC tuned for numeric code
    • Memory-safe by design (Rust)
  • Cross-platform GPU backend

    • Uses wgpu / WebGPU
    • Supports Metal (macOS), DirectX 12 (Windows), Vulkan (Linux)
    • Falls back to CPU when workloads are too small for GPU to win
  • Plotting and tooling (pre-release)

    • Simple 2D line and scatter plots work today
    • Plots that use filled shapes or meshes (box plots, violin plots, surfaces, many 3D views) are not wired up yet
    • 3D plots and better camera controls are on the roadmap
    • VS Code / Cursor extensions are also on the roadmap
  • Open source

    • MIT License with attribution
    • Small binary, CLI-first design
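The "data size and transfer cost heuristics" above can be pictured as a small cost model: offload only when the fused chain's compute savings outweigh the transfer and dispatch overhead. This is an illustrative sketch, not RunMat's actual planner; every constant here (link bandwidth, FLOP rates, launch overhead) is invented for the example.

```python
def choose_device(n, fused_ops=10, bytes_per_elem=4,
                  link_gbps=25.0, gpu_gflops=10_000.0, cpu_gflops=100.0,
                  launch_overhead_s=50e-6, flops_per_op=4):
    """Toy CPU-vs-GPU router: the host->device transfer is paid once and
    amortized over the whole fused chain; all constants are illustrative."""
    transfer_s = n * bytes_per_elem / (link_gbps * 1e9)
    work_flops = n * flops_per_op * fused_ops
    cpu_s = work_flops / (cpu_gflops * 1e9)
    gpu_s = work_flops / (gpu_gflops * 1e9) + transfer_s + launch_overhead_s
    return "gpu" if gpu_s < cpu_s else "cpu"

print(choose_device(10_000))       # small array: overhead dominates -> "cpu"
print(choose_device(100_000_000))  # large fused chain -> "gpu"
```

The same shape of reasoning explains the benchmark tables below: small arrays stay on CPU, and the GPU advantage grows with element count and chain length.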

📊 Performance highlights

These are large workloads where Fusion chooses GPU.
Hardware: Apple M2 Max, Metal, each point is the mean of 3 runs.

4K Image Pipeline Perf Sweep (B = image batch size)

| B | RunMat (ms) | PyTorch (ms) | NumPy (ms) | NumPy ÷ RunMat | PyTorch ÷ RunMat |
|---:|---:|---:|---:|---:|---:|
| 4 | 217.9 | 922.9 | 548.4 | 2.52x | 4.23x |
| 8 | 270.3 | 960.1 | 989.6 | 3.66x | 3.55x |
| 16 | 317.4 | 1,040.7 | 1,859.1 | 5.86x | 3.28x |
| 32 | 520.5 | 1,178.3 | 3,698.6 | 7.11x | 2.26x |
| 64 | 893.8 | 1,379.6 | 7,434.6 | 8.32x | 1.54x |

Monte Carlo Perf Sweep (M = paths)

| M | RunMat (ms) | PyTorch (ms) | NumPy (ms) | NumPy ÷ RunMat | PyTorch ÷ RunMat |
|---:|---:|---:|---:|---:|---:|
| 250,000 | 179.8 | 955.4 | 4,252.3 | 23.65x | 5.31x |
| 500,000 | 203.1 | 1,021.8 | 9,319.9 | 45.90x | 5.03x |
| 1,000,000 | 243.3 | 1,283.9 | 17,946.4 | 73.78x | 5.28x |
| 2,000,000 | 372.0 | 1,469.4 | 38,826.8 | 104.36x | 3.95x |
| 5,000,000 | 678.1 | 1,719.5 | 95,539.2 | 140.89x | 2.54x |

Elementwise Math Perf Sweep (points)

| points | RunMat (ms) | PyTorch (ms) | NumPy (ms) | NumPy ÷ RunMat | PyTorch ÷ RunMat |
|---:|---:|---:|---:|---:|---:|
| 1,000,000 | 197.1 | 820.8 | 68.3 | 0.35x | 4.16x |
| 2,000,000 | 211.4 | 896.2 | 76.7 | 0.36x | 4.24x |
| 5,000,000 | 207.7 | 1,104.7 | 111.9 | 0.54x | 5.32x |
| 10,000,000 | 173.8 | 1,426.1 | 166.6 | 0.96x | 8.20x |
| 100,000,000 | 170.9 | 16,878.8 | 1,098.8 | 6.43x | 98.77x |
| 200,000,000 | 202.8 | 17,393.0 | 2,188.9 | 10.79x | 85.76x |
| 500,000,000 | 171.8 | 18,880.2 | 5,946.9 | 34.61x | 109.87x |
| 1,000,000,000 | 199.4 | 22,652.0 | 12,570.0 | 63.04x | 113.61x |

On smaller arrays, Fusion keeps work on CPU so you still get low overhead and a fast JIT.

Benchmarks run on Apple M2 Max with BLAS/LAPACK optimization and GPU acceleration. See benchmarks/ for reproducible test scripts, detailed results, and comparisons against NumPy, PyTorch, and Julia.
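The speedup columns in the tables above are simple per-row timing ratios. As a sanity check, recomputing the NumPy ÷ RunMat column of the 4K image pipeline table from the raw millisecond timings (the PyTorch column is derived the same way):

```python
# Timings (ms) copied from the 4K Image Pipeline table, B = 4..64
runmat_ms = [217.9, 270.3, 317.4, 520.5, 893.8]
numpy_ms  = [548.4, 989.6, 1859.1, 3698.6, 7434.6]

# Speedup = competitor time divided by RunMat time, rounded to 2 decimals
numpy_speedup = [round(n / r, 2) for n, r in zip(numpy_ms, runmat_ms)]
print(numpy_speedup)  # [2.52, 3.66, 5.86, 7.11, 8.32]
```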


# Quick install (Linux/macOS)
curl -fsSL https://runmat.org/install.sh | sh

# Quick install (Windows PowerShell)
iwr https://runmat.org/install.ps1 | iex

# Or install from crates.io
cargo install runmat --features gui

# Or build from source
git clone https://github.com/runmat-org/runmat.git
cd runmat && cargo build --release --features gui

For BLAS/LAPACK acceleration on Linux, install the system OpenBLAS package before building:

sudo apt-get update && sudo apt-get install -y libopenblas-dev

# Start the interactive REPL
runmat

# Or run an existing .m file
runmat script.m

# Or pipe a script into RunMat
echo "a = 10; b = 20; c = a + b" | runmat

# Check GPU acceleration status
runmat accel-info

# Benchmark a script
runmat benchmark script.m --iterations 5 --jit

# View system information
runmat info

# Register RunMat as a Jupyter kernel
runmat --install-kernel

# Launch JupyterLab with RunMat support
jupyter lab

% RunMat automatically uses GPU when beneficial
x = rand(10000, 1, 'single');
y = sin(x) .* x + 0.5;  % Automatically fused and GPU-accelerated
mean(y)  % Result computed on GPU

% Your existing MATLAB code just works
A = [1 2 3; 4 5 6; 7 8 9];
B = A' * A;
eigenvals = eig(B);
plot(eigenvals);

% RunMat automatically fuses this chain into a single GPU kernel
% No kernel code, no rewrites—just MATLAB syntax
x = rand(1024, 1, 'single');
y = sin(x) .* x + 0.5;        % Fused: sin, multiply, add
m = mean(y, 'all');            % Reduction stays on GPU
fprintf('m=%.6f\n', double(m)); % Single download at sink

% Simple 2D line plot (works in the pre-release)
x = linspace(0, 2*pi, 1000);
y = sin(x);

plot(x, y);
grid on;
title("Sine wave");

🧱 Architecture: CPU+GPU performance

RunMat uses a tiered CPU runtime plus a fusion engine that automatically picks CPU or GPU for each chunk of math.

| Component | Purpose | Technology / Notes |
|---|---|---|
| ⚙️ runmat-ignition | Baseline interpreter for instant startup | HIR → bytecode compiler, stack-based interpreter |
| ⚡ runmat-turbine | Optimizing JIT for hot code | Cranelift backend, tuned for numeric workloads |
| 🧠 runmat-gc | High-performance memory management | Generational GC with pointer compression |
| 🚀 runmat-accelerate | GPU acceleration subsystem | Fusion engine + auto-offload planner + wgpu backend |
| 🔥 Fusion engine | Collapses op chains, chooses CPU vs GPU | Builds op graph, fuses ops, estimates cost, keeps tensors on device |
| 🎨 runmat-plot | Plotting layer (pre-release) | 2D line/scatter plots work today; 3D, filled shapes, and full GPU plotting are on the roadmap |
| 📸 runmat-snapshot | Fast startup snapshots | Binary blob serialization / restore |
| 🧰 runmat-runtime | Core runtime + 200+ builtin functions | BLAS/LAPACK integration and other CPU/GPU-accelerated operations |
  • Tiered CPU execution gives quick startup and strong single-machine performance.
  • Fusion engine removes most manual device management and kernel tuning.
  • GPU backend runs on NVIDIA, AMD, Apple Silicon, and Intel through Metal / DirectX 12 / Vulkan, with no vendor lock-in.

🚀 GPU Acceleration: Fusion & Auto-Offload

RunMat automatically accelerates your MATLAB code on GPUs without requiring kernel code or rewrites. The system works through four stages:

1. Build the Acceleration Graph

RunMat builds an "acceleration graph" that captures the intent of your operations: shapes, operation categories, dependencies, and constants. This graph provides a complete view of what your script computes.

2. Decide What Should Run on GPU

The fusion engine detects long chains of elementwise operations and linked reductions, planning to execute them as combined GPU programs. The auto-offload planner estimates break-even points and routes work intelligently:

  • Fusion detection: Combines multiple operations into single GPU dispatches
  • Auto-offload heuristics: Considers element counts, reduction sizes, and matrix multiply saturation
  • Residency awareness: Keeps tensors on device once they're worth it
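A toy version of the fusion-detection step: walk a linear op list and group consecutive elementwise ops, letting a trailing reduction join and close the chain, while anything else (say, a matrix multiply) gets its own dispatch. This is a deliberate simplification; RunMat's real planner works on a full dependency graph, not a flat list.

```python
ELEMENTWISE = {"sin", "cos", "exp", "mul", "add", "sub"}
REDUCTION = {"mean", "sum", "max"}

def fuse(ops):
    """Group runs of elementwise ops (optionally capped by one reduction)
    into single fused kernels; other ops become standalone dispatches."""
    kernels, chain = [], []
    for op in ops:
        if op in ELEMENTWISE:
            chain.append(op)
        elif op in REDUCTION:
            chain.append(op)          # a reduction joins and ends the chain
            kernels.append(chain)
            chain = []
        else:                         # e.g. matmul: flush chain, own dispatch
            if chain:
                kernels.append(chain)
                chain = []
            kernels.append([op])
    if chain:
        kernels.append(chain)
    return kernels

# y = sin(x) .* x + 0.5; m = mean(y)  ->  one fused GPU dispatch
print(fuse(["sin", "mul", "add", "mean"]))  # [['sin', 'mul', 'add', 'mean']]
print(fuse(["matmul", "sin", "add"]))       # [['matmul'], ['sin', 'add']]
```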

3. Generate Portable WGSL Kernels

RunMat generates portable WGSL (WebGPU Shading Language) kernels that work across platforms:

  • Metal on macOS
  • DirectX 12 on Windows
  • Vulkan on Linux

Kernels are compiled once and cached for subsequent runs, eliminating recompilation overhead.

4. Minimize Data Transfers

The runtime minimizes host↔device transfers by:

  • Uploading tensors once and keeping them resident
  • Executing fused kernels directly on GPU memory
  • Only gathering results when needed (e.g., for fprintf or display)

Example: Automatic GPU Fusion

% This code automatically fuses into a single GPU kernel
x = rand(1024, 1, 'single');
y = sin(x) .* x + 0.5;  % Fused: sin, multiply, add
m = mean(y, 'all');      % Reduction stays on GPU
fprintf('m=%.6f\n', double(m));  % Single download at sink

RunMat detects the elementwise chain (sin, .*, +), fuses them into one GPU dispatch, keeps y resident on GPU, and only downloads m when needed for output.
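Numerically, the fused dispatch computes the same result as evaluating the chain step by step. Here is a NumPy reference of the same pipeline, showing the semantics only (NumPy materializes each intermediate on the host, which is exactly the overhead fusion avoids; the 1-D array stands in for MATLAB's 1024x1 column vector):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(1024, dtype=np.float32)  # rand(1024, 1, 'single')
y = np.sin(x) * x + 0.5                 # the chain RunMat fuses into one kernel
m = y.mean(dtype=np.float64)            # mean(y, 'all'); one value leaves "device"
print(f"m={m:.6f}")
```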

For more details, see Introduction to RunMat GPU and How RunMat Fusion Works.

🎨 Modern Developer Experience

Rich REPL with Intelligent Features

runmat> .info
🦀 RunMat v0.1.0 - High-Performance MATLAB Runtime
⚡ JIT: Cranelift (optimization: speed)
🧠 GC: Generational (heap: 45MB, collections: 12)
🚀 GPU: wgpu provider (Metal/DX12/Vulkan)
🎨 Plotting: GPU-accelerated (wgpu)
📊 Functions loaded: 200+ builtins + 0 user-defined

runmat> .stats
Execution Statistics:
  Total: 2, JIT: 0, Interpreter: 2
  Average time: 0.12ms

runmat> accel-info
GPU Acceleration Provider: wgpu
Device: Apple M2 Max
Backend: Metal
Fusion pipeline cache: 45 hits, 2 misses

First-Class Jupyter Support

  • Rich output formatting with LaTeX math rendering
  • Interactive widgets for parameter exploration
  • Full debugging support with breakpoints

// Adding a new builtin function is trivial
#[runtime_builtin("myfunction")]
fn my_custom_function(x: f64, y: f64) -> f64 {
    x.powf(y) + x.sin()
}

RunMat includes a comprehensive CLI with powerful features:

# Check GPU acceleration status
runmat accel-info

# Benchmark a script
runmat benchmark my_script.m --iterations 5 --jit

# Create a snapshot for faster startup
runmat snapshot create -o stdlib.snapshot

# GC statistics and control
runmat gc stats
runmat gc major

# System information
runmat info

See CLI Documentation for the complete command reference.

RunMat's package system enables both systems programmers and MATLAB users to extend the runtime. The core stays lean while packages provide domain-specific functionality.

High-performance built-ins implemented in Rust:

#[runtime_builtin(
    name = "norm2",
    category = "math/linalg",
    summary = "Euclidean norm of a vector.",
    examples = "n = norm2([3,4])  % 5"
)]
fn norm2_builtin(a: Value) -> Result<Value, String> {
    let t: Tensor = (&a).try_into()?;
    let s = t.data.iter().map(|x| x * x).sum::<f64>().sqrt();
    Ok(Value::Num(s))
}

Native packages get type-safe conversions, deterministic error IDs, and zero-cost documentation generation.

MATLAB source packages compile to RunMat bytecode:

% +mypackage/norm2.m
function n = norm2(v)
    n = sqrt(sum(v .^ 2));
end

Both package types appear identically to users—functions show up in the namespace, reference docs, and tooling (help, search, doc indexing).
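That "both look identical" property can be pictured as a single function table that native and source packages register into: same lookup, same docs surface, regardless of origin. A simplified sketch (the registry shape and `register` helper are invented for illustration):

```python
import math

registry = {}  # name -> {"fn": callable, "origin": str, "summary": str}

def register(name, fn, origin, summary=""):
    """Both package kinds land in the same namespace with the same metadata."""
    registry[name] = {"fn": fn, "origin": origin, "summary": summary}

# "Native" builtin (would be Rust in RunMat)
register("norm2", lambda v: math.sqrt(sum(x * x for x in v)),
         origin="native", summary="Euclidean norm of a vector.")

# "Source" package function (would be compiled from +mypackage/norm2.m)
register("mynorm", lambda v: sum(x * x for x in v) ** 0.5,
         origin="matlab-source", summary="Euclidean norm, MATLAB source.")

# Callers cannot tell the two origins apart at the call site
print(registry["norm2"]["fn"]([3, 4]))   # 5.0
print(registry["mynorm"]["fn"]([3, 4]))  # 5.0
```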

# Declare dependencies in .runmat
[packages]
linalg-plus = { source = "registry", version = "^1.2" }
viz-tools = { source = "git", url = "https://github.com/acme/viz-tools" }

# Install packages
runmat pkg install

# Publish your package
runmat pkg publish

Note: Package manager CLI is currently in beta. See Package Manager Documentation for design details.

RunMat follows a minimal core, fast runtime, open extension model philosophy:

  • Full language support: The core implements the complete MATLAB grammar and semantics, not a subset
  • Extensive built-ins: The standard library aims for complete base MATLAB built-in coverage (200+ functions)
  • Tiered execution: Ignition interpreter for fast startup, Turbine JIT for hot code
  • GPU-first math: Fusion engine automatically turns MATLAB code into fast GPU workloads
  • Small, portable runtime: Single static binary, fast startup, modern CLI, Jupyter kernel support
  • Toolboxes as packages: Signal processing, statistics, image processing, and other domains live as packages
What RunMat is:

  • A modern, high-performance runtime for MATLAB code
  • A minimal core with a thriving package ecosystem
  • GPU-accelerated by default with intelligent CPU/GPU routing
  • Open source and free forever

What RunMat is not:

  • A reimplementation of MATLAB-in-full (toolboxes are packages)
  • A compatibility layer (we implement semantics, not folklore)
  • An IDE (use any editor: Cursor, VSCode, IntelliJ, etc.)

RunMat keeps the core small and uncompromisingly high-quality; everything else is a package. This enables:

  • Fast iteration without destabilizing the runtime
  • Domain experts shipping features without forking
  • A smaller trusted compute base, easier auditing
  • Community-driven package ecosystem

See Design Philosophy for the complete design rationale.

RunMat is built for array-heavy math in many domains.

Examples:

  • Imaging / geospatial: 4K+ tiles, normalization, radiometric correction, QC metrics
  • Quant / simulation: Monte Carlo risk, scenario analysis, covariance, factor models
  • Signal processing / control: filters, NLMS, large time-series jobs
  • Researchers and students: MATLAB background, need faster runs on laptops or clusters

If you write math in MATLAB and hit performance walls on CPU, RunMat is built for you.

RunMat is more than just software—it's a movement toward open, fast, and accessible scientific computing. We're building the future of numerical programming, and we need your help.

🛠️ How to Contribute

🚀 For Rust Developers

  • Implement new builtin functions
  • Optimize the JIT compiler
  • Enhance the garbage collector
  • Build developer tooling

Contribute Code →

🔬 For Domain Experts

  • Add mathematical functions
  • Write comprehensive tests
  • Create benchmarks

Join Discussions →

📚 For Everyone Else

  • Report bugs and feature requests
  • Improve documentation
  • Create tutorials and examples
  • Spread the word

Get Started →

RunMat is licensed under the MIT License with Attribution Requirements. This means:

  • Free for everyone - individuals, academics, most companies
  • Open source forever - no vendor lock-in or license fees
  • Commercial use allowed - embed in your products freely
  • ⚠️ Attribution required - credit "RunMat by Dystr" in public distributions
  • ⚠️ Special provisions - large scientific software companies must keep modifications open source

See LICENSE.md for complete terms or visit runmat.org/license for FAQs.


Built with ❤️ by Dystr Inc. and the RunMat community

Star us on GitHub if RunMat is useful to you.

🚀 Get Started · 🐦 Follow @dystr


MATLAB® is a registered trademark of The MathWorks, Inc. RunMat is not affiliated with, endorsed by, or sponsored by The MathWorks, Inc.
