CES 2026: Taking the Lids Off AMD's Venice and MI400 SoCs

Original link: https://chipsandcheese.com/p/ces-2026-taking-the-lids-off-amds

## AMD Venice & MI400 Series Shown at CES 2026

At CES 2026, AMD showed off its upcoming Venice server CPUs and MI400 datacenter accelerators, adding detail to what it revealed at its earlier AI event. This is the first public showing of silicon for both product lines.

Venice tops out at 256 cores, built from 8 CCDs, each with 32 Zen 6 cores and potentially 128MB of L3 cache. The headline changes are more advanced packaging — a move away from the organic substrate toward a Strix Halo-like design — and *two* large IO dies (roughly 700mm² combined) flanked by structural/power-delivery dies.

The MI400 accelerator is a massive package with 12 HBM4 dies plus a mix of 2nm/3nm compute and IO dies. It uses a two-base-die design, with additional dies for off-package connectivity.

AMD also announced additions to the lineup: the MI440X, which fits into existing infrastructure, and Venice-X, a potential V-Cache version of Venice with up to 3GB of L3 cache per chip. Both Venice and MI400 are expected to launch later this year.

## AMD's Next-Gen Chips: Venice & MI400 at CES 2026

AMD is preparing to launch its next-generation Venice and MI400 chips, with impressive core counts and cache sizes. A key development is that high-core-count CCDs (Core Complex Dies — the chiplets holding CPU cores and cache) may finally be able to support V-Cache, potentially reaching up to 3GB of L3 cache per chip with 32 cores per CCD.

Discussion focused on the sheer scale — 256 cores per chip — and how software can exploit that much parallelism. Some suggested mapping MPI processes onto the chiplets to optimize performance, while others pointed to NUMA node configuration options. Comparisons were drawn to Nvidia's Blackwell architecture (752 cores) and Intel's Clearwater Forest (288 cores).

The conversation also touched on cooling, with commenters arguing that the large surface area could allow effective air cooling even at high power. Ultimately, these chips target demanding workloads such as scientific simulation rather than typical desktop or gaming use, though future software scalability was also discussed. The article and discussion were praised for filling the gap in detailed silicon analysis left by Anandtech.

Original Article

Hello you fine Internet folks,

Here at CES 2026, AMD showed off their upcoming Venice series of server CPUs and their upcoming MI400 series of datacenter accelerators. AMD talked about the specifications of both Venice and the MI400 series at their Advancing AI event back in June of 2025, but this is the first time AMD has shown off the silicon for both product lines.

Starting with Venice, the first thing to notice is that the packaging connecting the CCDs to the IO dies is different. Instead of running the wires between the CCDs and the IO dies through the organic substrate of the package, as AMD has done since EPYC Rome, Venice appears to be using a more advanced form of packaging similar to Strix Halo or MI250X. Another change is that Venice appears to have two IO dies instead of the single IO die found on prior EPYC CPUs.

Venice has 8 CCDs, each with 32 cores, for a total of up to 256 cores per Venice package. Doing some measuring of each of the dies, each CCD comes out to approximately 165mm2 of N2 silicon. If AMD has stuck to 4MB of L3 per core, then each of these CCDs has 32 Zen 6 cores and 128MB of L3 cache, along with the die-to-die interface for CCD <-> IO die communication. At approximately 165mm2 per CCD, that would make a Zen 6 core plus its 4MB of L3 about 5mm2, which is similar to Zen 5's approximately 5.34mm2 on N3 when counting both the Zen 5 core and its 4MB of L3 cache.
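To make that per-core estimate explicit, here is a minimal back-of-the-envelope sketch in Python. The ~165mm2 CCD measurement, the 32-core count, and the 4MB-per-core figure are the only inputs, and it ignores the area spent on the die-to-die interface, so treat the result as a rough estimate rather than a measurement.

```python
# Rough per-core area estimate for a Venice CCD, assuming the ~165mm2
# measurement above and 32 Zen 6 cores with 4MB of L3 each.
ccd_area_mm2 = 165          # estimated from the package shots, N2 silicon
cores_per_ccd = 32
l3_per_core_mb = 4          # assumed, matching Zen 5's per-core L3

area_per_core_mm2 = ccd_area_mm2 / cores_per_ccd
print(f"~{area_per_core_mm2:.2f} mm2 per Zen 6 core + {l3_per_core_mb}MB L3")
# ~5.16 mm2, versus ~5.34 mm2 for a Zen 5 core + 4MB L3 on N3
```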

Moving to the IO dies, each appears to be approximately 353mm2, for a total of just over 700mm2 of silicon dedicated to IO. This is a massive increase from the approximately 400mm2 that prior EPYC CPUs dedicated to their IO dies. The two IO dies appear to be using advanced packaging of some kind, similar to the CCDs. Next to the IO dies are 8 little dies, 4 on each side of the package, which are likely either structural silicon or deep trench capacitor dies meant to improve power delivery to the CCDs and IO dies.

Shifting off of Venice and on to the MI400 accelerator, this is a massive package with 12 HBM4 dies and “twelve 2 nanometer and 3 nanometer compute and IO dies”. It appears as if there are two base dies, just like MI350. But unlike MI350, there also appear to be two extra dies at the top and bottom of the base dies. These two extra dies are likely for off-package IO such as PCIe, UALink, etc.

Measuring the base dies and the IO dies, each of the two base dies comes out to approximately 747mm2, with the off-package IO dies each being approximately 220mm2. As for the compute dies, while the packaging precludes any visual demarcation of the individual compute dies, there are likely 8 compute dies, with 4 compute dies on each base die. So while we can't figure out the exact die size of the compute dies, the maximum size is approximately 180mm2. The compute chiplet is likely in the 140mm2 to 160mm2 range, but that is a best guess that will have to wait to be confirmed.
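As a rough sanity check on that ceiling, here is a small sketch that simply divides the measured base-die area by an assumed four compute dies per base die. It ignores die-to-die spacing and anything else sharing the base-die footprint, which is why it lands slightly above the ~180mm2 figure quoted above.

```python
# Upper bound on MI400 compute-die size, assuming 4 compute dies
# stacked on each ~747mm2 base die (8 compute dies total).
base_die_area_mm2 = 747       # estimated per base die
compute_dies_per_base = 4     # assumed split

max_compute_die_mm2 = base_die_area_mm2 / compute_dies_per_base
print(f"<= ~{max_compute_die_mm2:.0f} mm2 per compute die")
# ~187 mm2 before accounting for spacing; the working estimate above
# puts the compute chiplet itself at 140-160 mm2.
```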

The MI455X and Venice are the two SoCs that are going to be powering AMD's Helios AI Rack, but they aren't the only new Zen 6 and MI400 series products that AMD announced at CES. AMD announced a third member of the MI400 family, the MI440X, joining the MI430X and MI455X. The MI440X is designed to fit into 8-way UBB boxes as a direct replacement for the MI300/350 series.

AMD also announced Venice-X, which is likely going to be a V-Cache version of Venice. This is interesting because not only did AMD skip Turin-X, but if there is a 256 core version of Venice-X, this would be the first time that a high core count CCD has the ability to support a V-Cache die. If AMD sticks to the same ratio of base die cache to V-Cache die cache, then each 32 core CCD would have up to 384MB of L3 cache, which equates to 3 gigabytes of L3 cache across the chip.
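Here is a minimal sketch of that cache math, assuming a 128MB base-die L3 per CCD, the same 1:2 ratio of base-die L3 to stacked V-Cache L3 that current V-Cache parts use, and 8 CCDs per package; all three inputs are assumptions carried over from the estimates above, not confirmed specifications.

```python
# L3 totals for a hypothetical 256-core Venice-X, assuming the same
# 1:2 ratio of base-die L3 to V-Cache L3 as current V-Cache parts.
base_l3_per_ccd_mb = 128          # 32 cores x 4MB, assumed
vcache_ratio = 2                  # V-Cache die assumed to hold 2x the base L3
ccds = 8

l3_per_ccd_mb = base_l3_per_ccd_mb * (1 + vcache_ratio)   # 384 MB per CCD
total_l3_gb = l3_per_ccd_mb * ccds / 1024                 # 3 GB per package
print(f"{l3_per_ccd_mb} MB per CCD, {total_l3_gb:.0f} GB across the chip")
```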

Both Venice and the MI400 series are due to launch later this year and I can’t wait to learn more about the underlying architectures of both SoCs.

If you like the content, consider heading over to the Patreon or PayPal if you want to toss a few bucks to Chips and Cheese, and also consider joining the Discord.
