Intel CEO: 'The entire industry is motivated to eliminate the CUDA market'

原始链接: https://www.tomshardware.com/tech-industry/artificial-intelligence/intel-ceo-attacks-nvidia-on-ai-the-entire-industry-is-motivated-to-eliminate-the-cuda-market

At a recent Intel-hosted event, CEO Pat Gelsinger delivered remarks aimed squarely at rival Nvidia's CUDA technology, which is widely used for AI training. By stressing the benefits of a Pythonic programming layer and open standards over proprietary solutions, Gelsinger signaled a notable shift in Intel's focus. He argued that while inference will remain essential to AI, training itself may soon lose its central importance as competition in the industry intensifies. Although Intel has traditionally been known for proprietary technology, its recent embrace of open standards points toward greater collaboration in the field. In his talk, Gelsinger praised examples such as Google, OpenAI, and MLIR, highlighting their commitment to open AI frameworks. By showing Gaudi 3 on stage (which, according to Gelsinger, will deliver four times the performance of the current model when it arrives in 2024), Intel also demonstrated its intent to get ahead of emerging AI trends. Overall, Intel appears poised to push high-end AI innovation with the help of open standards, aiming to stay ahead of competitors in what many see as the future of technology. Whether Intel can fully displace Nvidia's dominance remains uncertain, and only time will tell whether these ambitions bear fruit. Still, growing interest and investment in both AI and open-source software suggest a promising outlook for the field in the months and years ahead.

Overall, although AMD has tried to build ROCm for discrete professional GPUs, it still faces challenges competing with CUDA because of its limited support for older hardware, which can keep smaller-budget institutions from contributing the fixes the ecosystem needs. AMD's continued investment in HPC (for example, its exascale collaboration with Argonne National Laboratory) may raise interest in its AI efforts, but it does not necessarily solve availability and pricing problems for students, hobbyists, and small institutions. Adapting to a fast-moving technology landscape also remains essential: affordable rental time on high-end GPUs, or prioritizing current-generation cards over older hardware, can serve as alternatives to buying expensive GPUs outright. Fragmentation and a lack of commitment among stakeholders, however, still make it difficult to build a contender that can challenge CUDA's dominance. Even so, some success stories show what sustained investment can achieve, most recently the addition of RX 7900 XT support to PyTorch under ROCm. Ultimately, clear communication, a commitment to continued investment in innovation, and collaboration with academic institutions and major industry players could drive further progress in challenging CUDA's influence and its position as the entrenched stack.
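The portability win behind that RX 7900 XT milestone can be sketched with PyTorch's device-agnostic idiom: ROCm builds of PyTorch expose the same `torch.cuda` namespace as CUDA builds, so code written against it runs unchanged on either vendor's GPU, or falls back to CPU. A minimal illustration (the toy model and shapes are made up for this example, not from the article):

```python
import torch

# PyTorch's ROCm builds reuse the torch.cuda API, so the same code
# targets an NVIDIA GPU, a supported AMD GPU (e.g. an RX 7900 XT under
# ROCm), or falls back to CPU -- with no vendor-specific branches.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(4, 2).to(device)   # toy model
x = torch.randn(8, 4, device=device)       # toy input batch

with torch.no_grad():
    y = model(x)

print(y.shape)  # torch.Size([8, 2]) on any backend
```

This is why upstream ROCm support matters so much: existing CUDA-era scripts need no rewrite, only a ROCm-enabled PyTorch build.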
Related articles

Original article

Intel CEO Pat Gelsinger came out swinging at Nvidia's CUDA technology, claiming that inference technology will be more important than training for AI as he launched Intel Core Ultra and 5th Gen Xeon datacenter chips in an event here in New York City. Taking questions at the NASDAQ, Gelsinger suggested that Nvidia’s CUDA dominance in training wouldn't last forever.
"You know, the entire industry is motivated to eliminate the CUDA market," Gelsinger said. He cited examples such as MLIR, Google, and OpenAI, suggesting that they are moving to a "Pythonic programming layer" to make AI training more open.
"We think of the CUDA moat as shallow and small," Gelsinger went on. "Because the industry is motivated to bring a broader set of technologies for broad training, innovation, data science, et cetera."
But Intel isn't relying just on training. Instead, it thinks inference is the way to go.

"As inferencing occurs, hey, once you've trained the model… There is no CUDA dependency," Gelsinger continued. "It's all about, can you run that model well?" He suggested that with Gaudi 3, shown on stage for the first time, that Intel will be up to the challenge, and will be able to do it as well with Xeon and edge PCs. Not that Intel won't compete in training, but "fundamentally, the inference market is where the game will be at," Gelsinger said.

Intel Gaudi 3, shown on stage for the first time

(Image credit: Tom's Hardware)
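Gelsinger's "no CUDA dependency" point is easy to see concretely: once training has produced the weights, inference is just a forward pass that any hardware can execute. A toy sketch in plain NumPy (the weights and tiny two-layer network are invented for this illustration, not a real model):

```python
import numpy as np

# Pretend these weights came out of a training run (done anywhere, on
# any accelerator). Serving the model afterwards is only arithmetic.
W1 = np.array([[0.5, -0.2], [0.1, 0.8]])
b1 = np.array([0.0, 0.1])
W2 = np.array([0.7, -0.4])
b2 = 0.05

def relu(x):
    return np.maximum(x, 0.0)

def infer(x):
    """Forward pass on plain CPU arrays: hardware-agnostic inference."""
    h = relu(x @ W1 + b1)
    return float(h @ W2 + b2)

print(f"{infer(np.array([1.0, 2.0])):.2f}")  # prints -0.06
```

Nothing in the forward pass cares whether training used CUDA; that is the opening Intel is targeting with Gaudi, Xeon, and edge inference.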

He also took the opportunity to push OpenVINO, the toolkit Intel has rallied around for its AI efforts, and predicted a world of mixed computing, with some workloads running in the cloud and others on your PC.
Sandra Rivera, executive vice president and general manager of the Data Center and AI Group at Intel, added that Intel's scale from the data center to the PC may make it a partner of choice, as it can produce at volume.
"We're going to compete three ways for 100% of the datacenter AI TAM." Gelsinger said, tacking onto Rivera's comment. "With our leadership CEOs, leadership accelerators, and as a foundry. Every one of those internal opportunities is available to us: The TPUs, the inferentias, the trainiums, et cetera. We're going to pursue all of those. And we're going to pursue every commercial opportunity as well, with NVIDIA, with AMD, et cetera. We're going to be a foundry player."
It's a bold strategy, and Gelsinger appeared confident as he led his team through presentations today. Can he truly take on CUDA? Only time will tell as applications for the chips Intel launched today — and that his competitors are also working on — become more widespread.
