Amazon launches Trainium3

Original link: https://techcrunch.com/2025/12/02/amazon-releases-an-impressive-new-ai-chip-and-teases-a-nvidia-friendly-roadmap/

At AWS re:Invent 2025, Amazon Web Services introduced Trainium3, its latest AI training chip, built on a 3-nanometer process. This generation delivers a 4x improvement in performance and memory over its predecessor and scales to as many as 1 million chips across thousands of interconnected UltraServers. Notably, Trainium3 is also 40% more energy efficient, in line with AWS's focus on cutting costs for itself and its customers. Early adopters such as Anthropic and Karakuri have already seen lower inference costs with Trainium3. Looking ahead, AWS teased Trainium4, which is already in development. A key feature of Trainium4 will be compatibility with Nvidia's NVLink Fusion technology, enabling seamless integration with Nvidia GPUs and potentially attracting more AI applications built on the CUDA platform. While no release timeline has been announced, more details are expected at next year's conference.


Original article

Amazon Web Services, which has been building its own AI training chips for years now, just introduced a new version known as Trainium3 that comes with some impressive specs.

The cloud provider, which made the announcement Tuesday at AWS re:Invent 2025, also teased the next product on its AI training product roadmap: Trainium4, which is already in the works and will be able to work with Nvidia’s chips.

AWS used its annual tech conference to formally launch Trainium3 UltraServer, a system powered by the company's state-of-the-art, 3-nanometer Trainium3 chip, as well as its homegrown networking tech. As you might expect, the third-generation chip and system offer big bumps in performance for AI training and inference over the second-generation chip, according to AWS.

AWS says the system is more than 4x faster, with 4x more memory, not just for training, but for delivering AI apps at peak demand. Additionally, thousands of UltraServers can be linked together to provide an app with up to 1 million Trainium3 chips — 10x the previous generation. Each UltraServer can host 144 chips, according to the company. 
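A quick back-of-envelope check of the scale figures above (the per-server and per-cluster counts come from the article; the division is my own sanity check):

```python
import math

# Figures reported by AWS for Trainium3 UltraServer
chips_per_ultraserver = 144
max_chips_per_cluster = 1_000_000

# UltraServers that must be linked to reach the 1-million-chip ceiling
servers_needed = math.ceil(max_chips_per_cluster / chips_per_ultraserver)
print(servers_needed)  # 6945 — consistent with "thousands of UltraServers"
```
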

Perhaps more importantly, AWS says the chips and systems are also 40% more energy efficient than the previous generation. While the world races to build ever-bigger data centers that draw gigawatts of electricity, data center giant AWS is trying to make systems that drink less power, not more.

It is, obviously, in AWS's direct interest to do so. But in classic, cost-conscious Amazon fashion, it promises that these systems save its AI cloud customers money, too.

AWS customers like Anthropic (in which Amazon is also an investor), Japan's LLM developer Karakuri, SplashMusic, and Decart have already been using the third-gen chip and system and have significantly cut their inference costs, Amazon said.


AWS also presented a bit of a roadmap for the next chip, Trainium4, which is already in development. AWS promised the chip will provide another big step up in performance and support Nvidia’s NVLink Fusion high-speed chip interconnect technology.  

This means that AWS Trainium4-powered systems will be able to interoperate with, and extend their performance using, Nvidia GPUs while still relying on Amazon's homegrown, lower-cost server rack technology.

It’s worth noting, too, that Nvidia’s CUDA (Compute Unified Device Architecture) has become the de facto standard that all the major AI apps are built to support. The Trainium4-powered systems may make it easier to woo big AI apps built with Nvidia GPUs in mind to Amazon’s cloud.

Amazon did not announce a timeline for Trainium4. If the company follows previous rollout timelines, we’ll likely hear more about Trainium4 at next year’s conference.

Follow along with all of TechCrunch’s coverage of the annual enterprise tech event here.
