Show HN: XLA-based array computing framework for R

原始链接: https://github.com/r-xla/anvil

## Anvil: fast numerical computing in R

{anvil} is an R framework for high-performance numerical computing built on just-in-time (JIT) compilation and automatic differentiation. It can substantially speed up numerical programs and runs on both CPU and GPU. To use it, you wrap an R function in `jit()` and call it on `AnvilTensor` values; this triggers compilation into an optimized kernel. {anvil} currently supports reverse-mode automatic differentiation, making gradient computation convenient. It can be installed from source (requiring a C++20 compiler and the protobuf tools) or from the r-universe repository. For suitable problems, such as optimizing Bayesian models and training neural networks, {anvil} is very fast, but its compilation overhead means it is best suited to workloads whose input shapes change infrequently. The package is written mostly in R and builds on the OpenXLA project, with extension points for new primitive operations and backends.

Posted to Hacker News by sebffischer (5 points, 1 hour ago).


Composable code transformation framework for R, allowing you to run numerical programs at the speed of light. It currently implements JIT compilation for very fast execution and backward-mode automatic differentiation. Programs can run on various hardware backends, including CPU and GPU.

In order to install from source, you need a C++20 compiler, as well as libprotobuf and the protobuf-compiler.
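A source install can be scripted from within R; the snippet below is a sketch that assumes the package builds via the {remotes} helper against the GitHub repository linked above, with the C++20 toolchain and protobuf dependencies already present on the system.

```r
# Assumed workflow: build {anvil} from source off GitHub using {remotes}.
# Requires a C++20 compiler, libprotobuf, and protobuf-compiler to be
# installed on the system beforehand.
install.packages("remotes")
remotes::install_github("r-xla/anvil")
```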

You can also install from r-universe by adding the code below to your .Rprofile.

options(repos = c(
  rxla = "https://r-xla.r-universe.dev",
  CRAN = "https://cloud.r-project.org/"
))
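With those repositories configured, installation proceeds as usual; this assumes the package is published on r-universe under the name "anvil".

```r
# After adding the r-universe repository above to your .Rprofile,
# install the package by its (assumed) name:
install.packages("anvil")
```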

You can also use one of the prebuilt Docker images.

Below, we create a standard R function. We cannot run this function through anvil directly; we first need to wrap it in a jit() call. If the resulting function is then called on AnvilTensors – the primary data type in {anvil} – it is JIT compiled and subsequently executed.

library(anvil)
f <- function(a, b, x) {
  a * x + b
}
f_jit <- jit(f)

a <- nv_scalar(1.0, "f32")
b <- nv_scalar(-2.0, "f32")
x <- nv_scalar(3.0, "f32")

f_jit(a, b, x)
#> AnvilTensor
#>  1.0000
#> [ CPUf32{} ]

Through automatic differentiation, we can also obtain the gradient of the above function.

g_jit <- jit(gradient(f, wrt = c("a", "b")))
g_jit(a, b, x)
#> $a
#> AnvilTensor
#>  3.0000
#> [ CPUf32{} ] 
#> 
#> $b
#> AnvilTensor
#>  1.0000
#> [ CPUf32{} ]
Features:

  • Automatic Differentiation:
    • Gradients for functions with scalar outputs are supported.
  • Fast:
    • Code is JIT compiled into a single kernel.
    • Runs on different hardware backends, including CPU and GPU.
  • Extendable:
    • It is possible to add new primitives, transformations, and (with some effort) new backends.
    • The package is written almost entirely in R.

When to use this package?

While {anvil} can run certain types of programs extremely fast, it only applies to a particular class of problems. Specifically, it is suited to numerical algorithms, such as optimizing Bayesian models, training neural networks, or numerical optimization more generally. Another restriction is that {anvil} needs to re-compile the code for each new unique input shape. This has the advantage that the compiler can make memory optimizations, but the compilation overhead may be a problem for fast-running programs.
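The compile-once, call-many pattern this favors can be sketched with only the jit(), gradient(), and nv_scalar() calls shown earlier; since every loop iteration uses identically shaped inputs, each kernel is compiled once and then reused.

```r
library(anvil)

# Minimize f(x) = (x - target)^2 by gradient descent.
# A sketch assuming the jit()/gradient()/nv_scalar() API from the
# examples above; the $x field of the gradient result follows the
# named-list convention shown in the README output.
f <- function(x, target) (x - target) * (x - target)

g_jit <- jit(gradient(f, wrt = "x"))         # compiled on first call
step  <- jit(function(x, g, lr) x - lr * g)  # compiled on first call

x      <- nv_scalar(0.0, "f32")
target <- nv_scalar(3.0, "f32")
lr     <- nv_scalar(0.1, "f32")

for (i in 1:100) {
  grads <- g_jit(x, target)      # same input shapes: reuses the compiled kernel
  x     <- step(x, grads$x, lr)  # should converge toward 3
}
x
```

Because the scalar shapes never change across iterations, the compilation cost is paid once and amortized over all 100 steps; if the shapes varied between calls, each new shape would trigger a recompile.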

  • This work is supported by MaRDI.
  • The design of this package was inspired by and borrows from:
  • For JIT compilation, we leverage the OpenXLA project.