Making Wolfram Tech Available as a Foundation Tool for LLM Systems

Original link: https://writings.stephenwolfram.com/2026/02/making-wolfram-tech-available-as-a-foundation-tool-for-llm-systems/

Large language models (LLMs) are impressive, but they lack precise computation and deep knowledge. To overcome this, a "foundation tool" such as Wolfram Language is needed to supplement LLMs and give them these capabilities. Wolfram Language has been built over more than 40 years to make the world computable through algorithms, methods, and data, fueling discoveries across many fields. Now LLMs can draw on that power. The combination of LLMs and Wolfram Language is enabled by "computation-augmented generation" (CAG), which injects real-time computational capability into LLM output, a major extension of traditional retrieval-augmented generation (RAG). Wolfram is releasing three new ways for LLMs to access its technology: integration via the widely supported MCP protocol, a "universal agent" offered as a drop-in replacement for LLM APIs, and direct, customizable access to Wolfram tech. This lets any LLM benefit from precise computation and knowledge, ultimately creating a more powerful, more reliable AI system.


Original Article

LLMs don’t—and can’t—do everything. What they do is very impressive—and useful. It’s broad. And in many ways it’s human-like. But it’s not precise. And in the end it’s not about deep computation.

So how can we supplement LLM foundation models? We need a foundation tool: a tool that’s broad and general and does what LLMs themselves don’t: provides deep computation and precise knowledge.

And, conveniently enough, that’s exactly what I’ve been building for the past 40 years! My goal with Wolfram Language has always been to make everything we can about the world computable. To bring together in a coherent and unified way the algorithms, the methods and the data to do precise computation whenever it’s possible. It’s been a huge undertaking, but I think it’s fair to say it’s been a hugely successful one—that’s fueled countless discoveries and inventions (including my own) across a remarkable range of areas of science, technology and beyond.

But now it’s not just humans who can take advantage of this technology; it’s AIs—and in particular LLMs—as well. LLM foundation models are powerful. But LLM foundation models with our foundation tool are even more so. And with the maturing of LLMs we’re finally now in a position to provide LLMs with access to Wolfram tech in a standard, general way.

It is, I believe, an important moment of convergence. My concept over the decades has been to build very broad and general technology—which is now a perfect fit for the breadth of LLM foundation models. LLMs can call specific specialized tools, and that will be useful for plenty of specific specialized purposes. But what Wolfram Language uniquely represents is a general tool—with general access to the great power that precise computation and knowledge bring.

But there’s actually also much more. I designed Wolfram Language from the beginning to be a powerful medium not only for doing computation but also for representing and thinking about things computationally. I’d always assumed I was doing this for humans. But it now turns out that AIs need the same things—and that Wolfram Language provides the perfect medium for AIs to “think” and “reason” computationally.

There’s another point as well. In its effort to make as much as possible computable, Wolfram Language not only has an immense amount inside, but also provides a uniquely unified hub for connecting to other systems and services. And that’s part of why it’s now possible to make such an effective connection between LLM foundation models and the foundation tool that is the Wolfram Language.

On January 9, 2023, just weeks after ChatGPT burst onto the scene, I posted a piece entitled “Wolfram|Alpha as the Way to Bring Computational Knowledge Superpowers to ChatGPT”. Two months later we released the first Wolfram plugin for ChatGPT (and in between I wrote what quickly became a rather popular little book entitled What Is ChatGPT Doing … and Why Does It Work?). The plugin was a modest but good start. But at the time LLMs and the ecosystem around them weren’t really ready for the bigger story.

Would LLMs even in the end need tools at all? Or—despite the fundamental issues that seemed at least to me scientifically rather clear right from the start—would LLMs somehow magically find a way to do deep computation themselves? Or to guarantee to get precise, reliable results? And even if LLMs were going to use tools, how would that process be engineered, and what would the deployment model for it be?

Three years have now passed, and much has clarified. The core capabilities of LLMs have come into better focus (even though there’s a lot we still don’t know scientifically about them). And it’s become much clearer that—at least for the modalities LLMs currently address—most of the growth in their practical value is going to have to do with how they are harnessed and connected. And this understanding highlights more than ever the broad importance of providing LLMs with the foundation tool that our technology represents.

And the good news is that there are now streamlined ways to do this—using protocols and methods that have emerged around LLMs, and using new technology that we’ve developed. The tighter the integration between foundation models and our foundation tool, the more powerful the combination will be. Ultimately it’ll be a story of aligning the pre-training and core engineering of LLMs with our foundation tool. But an approach that’s immediately and broadly applicable today—and for which we’re releasing several new products—is based on what we call computation-augmented generation, or CAG.

The key idea of CAG is to inject in real time capabilities from our foundation tool into the stream of content that LLMs generate. In traditional retrieval-augmented generation, or RAG, one is injecting content that has been retrieved from existing documents. CAG is like an infinite extension of RAG, in which an infinite amount of content can be generated on the fly—using computation—to feed to an LLM. Internally, CAG is a somewhat complex piece of technology that has taken a long time for us to develop. But in its deployment it’s something that we’ve made easy to integrate into existing LLM-related systems and workflows. And today we’re launching it, so that going forward any LLM system—and LLM foundation model—can count on being able to access our Foundation Tool, and being able to supplement their capabilities with the superpower of precise, deep computation and knowledge.
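The CAG loop described above can be sketched in miniature. The placeholder syntax, the detection step, and the `compute` function below are all hypothetical stand-ins for illustration, not Wolfram's actual implementation; a real system would dispatch the expression to Wolfram tech rather than evaluate simple arithmetic locally.

```python
import re

def compute(expression: str) -> str:
    """Hypothetical stand-in for the foundation tool: perform a precise
    computation that the LLM itself cannot do reliably."""
    # Only simple arithmetic is handled here, as an illustration.
    if not re.fullmatch(r"[0-9+\-*/(). ]+", expression):
        raise ValueError(f"unsupported expression: {expression}")
    return str(eval(expression))  # a real system would call Wolfram tech

def computation_augmented_generation(draft: str) -> str:
    """Replace tool-call placeholders in an LLM draft with freshly
    computed results: content generated on the fly (CAG) rather than
    retrieved from existing documents (RAG)."""
    # Assume the LLM emits placeholders like {{compute: 3*17}}.
    return re.sub(
        r"\{\{compute:\s*(.*?)\}\}",
        lambda m: compute(m.group(1)),
        draft,
    )

draft = "The answer is {{compute: 3*17}}, computed precisely."
print(computation_augmented_generation(draft))
```

The point of the sketch is the shape of the loop: the LLM's stream is intercepted, computation is injected in real time, and generation continues with a precise result in place.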

The Practicalities

Today we’re launching three primary methods for accessing our Foundation Tool, all based on computation-augmented generation (CAG), and all leveraging our rather huge software engineering technology stack.

MCP Service: Immediately call our Foundation Tool from within any MCP-compatible LLM-based system. Most consumer LLM-based systems now support MCP, making this extremely easy to set up. Our main MCP Service is a web API, but there’s also a version that can use a local Wolfram Engine.
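Most MCP-compatible clients are pointed at a remote server through a small configuration entry. The shape below follows the common `mcpServers` convention used by several MCP clients; the server name and URL are hypothetical placeholders, not Wolfram's published endpoint.

```python
import json

# Hypothetical client configuration entry for an MCP server exposed as
# a web API. The actual name and URL would come from Wolfram's docs.
config = {
    "mcpServers": {
        "wolfram": {
            "url": "https://example.wolfram.com/mcp"  # placeholder URL
        }
    }
}

print(json.dumps(config, indent=2))
```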

Agent One API: A one-stop-shop “universal agent” combining an LLM foundation model with our Foundation Tool. Set up as a drop-in replacement for traditional LLM APIs.
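Since the Agent One API is described as a drop-in replacement for traditional LLM APIs, a client would presumably only need to repoint its base URL. The endpoint, path, and model name below are illustrative assumptions following the common chat-completions request shape, not Wolfram's documented interface.

```python
import json
import urllib.request

# Hypothetical base URL; in a drop-in-replacement setup this is the
# only thing that changes relative to a traditional LLM API client.
BASE_URL = "https://example.wolframcloud.com/agent-one/v1"

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Construct a chat-completions-style request; the model name and
    endpoint path are assumptions, not the documented API."""
    body = json.dumps({
        "model": "agent-one",
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Integrate x^2 sin(x) dx")
print(req.full_url)
```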

CAG Component APIs: Direct fine-grained access to Wolfram tech for LLM systems, supporting optimized, custom integration into LLM systems of any scale. (All Wolfram tech is available in both hosted and on-premise form.)

Wolfram Foundation Tool Capabilities Listing »

For further information on access and integration options, contact our Partnerships group »
