NVIDIA becomes major Intel CPU buyer in $5B collaboration


“The cognac was excellent; just not enough of it. I think [it] was from 1912,” Jensen Huang said, moments before the press conference began. Intel’s Lip-Bu Tan answered, “Wow. 1912.” Wow, indeed. Decades ago, it would have seemed unthinkable that NVIDIA would be in a position to invest $5 billion in Intel, and to buy custom Intel x86 CPUs for its own AI platforms. Yet a deal is a deal. This one pulls x86 directly into NVIDIA’s NVLink 72 fabric for data centers, and builds x86 SoCs with RTX GPU chiplets for PCs.

“Innovation opened new frontiers in science and industry, and it sparked the big bang of artificial intelligence,” Huang said when kicking off the official remarks. “Today, we’re taking the next great step. This partnership is a recognition that computing has fundamentally changed—the era of accelerated and AI computing has arrived.”

The deal’s specifics: Intel will design and manufacture custom x86 CPUs for NVIDIA’s data center platforms. NVIDIA said it will buy Intel x86 server CPUs for integration into its AI platforms, a shift from relying solely on Arm-based Grace for tightly coupled NVLink systems. For PCs, Intel will build x86 SoCs integrating NVIDIA RTX GPU chiplets, targeting the 150-million-unit annual notebook market. NVIDIA’s $5 billion equity investment in Intel, at $23.28 per share, signals confidence in Intel’s execution. “I’d like to thank Jensen for the confidence in me and our team,” Lip-Bu Tan said. “Intel will work really hard to make sure it’s a good return for you.”

At the heart of the deal is a technical plan to pull x86 directly into NVIDIA’s NVLink fabric, a rack-scale interconnect that Huang said has until now been available only for the Arm-based Vera CPU that NVIDIA builds itself.

NVLink pulls x86 into rack-scale AI

With the headline items on the table, Huang drilled into what changes technically. “This NVLink 72 rack-scale architecture is only available for the Vera CPU that we build… For the x86 ecosystem, it’s really unavailable… The first opportunity is that we can now, with an Intel x86 CPU, integrate it directly into the NVLink ecosystem and create these rack-scale AI supercomputers,” Huang said.

Huang framed the addressable market: “I think it’s safe to say that the partnership… is going to address some $25 to $50 billion of annual opportunity.” The data-center CPU segment alone is “about $25 billion or so annually,” with the x86 footprint still dominant across enterprise cloud instances, he added.

What changes for PCs

A second track of the partnership targets client computing. Here, the companies plan x86 SoCs “that integrate NVIDIA RTX GPU chiplets,” with Huang pointing to a large but under-served segment: “There are 150 million laptops sold per year… we’re creating an SoC that fuses the CPU and an NVIDIA RTX GPU using NVLink… into one virtual giant SoC.”
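
The article does not spell out how software would see such a fused part, but on NVIDIA’s existing NVLink-C2C systems the practical payoff of this kind of coupling is a shared address space between CPU and GPU. The snippet below is a minimal, hypothetical CUDA sketch of that idea, assuming only the standard unified (managed) memory that current NVIDIA platforms already expose; nothing in it is specific to the announced Intel SoC.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative only: assumes a CPU+GPU system with unified (managed) memory,
// as on current NVIDIA platforms. Not specific to the announced Intel parts.
__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float *buf = nullptr;

    // One allocation visible to both CPU and GPU; no explicit cudaMemcpy.
    cudaMallocManaged(&buf, n * sizeof(float));

    for (int i = 0; i < n; ++i) buf[i] = 1.0f;      // CPU writes

    scale<<<(n + 255) / 256, 256>>>(buf, n, 2.0f);  // GPU updates in place
    cudaDeviceSynchronize();

    printf("buf[0] = %f\n", buf[0]);                // CPU reads the result
    cudaFree(buf);
    return 0;
}
```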

Both executives stressed that packaging is the practical enabler for these designs. Huang highlighted: “Intel has the Foveros multi-technology packaging capability… connecting NVIDIA’s GPU… chiplet with Intel CPUs in a multi-technology packaging capability and multi-process packaging technology.” Tan added: “Also the EMIB is a really good technology… we will definitely continue to refine it.”

How NVIDIA says the money will flow

Huang also outlined how the partnership will transact in practice. “With this partnership, we’re essentially going to be a major customer of Intel server CPUs… We’re going to be quite a large supplier of GPU chiplets into Intel x86 CPUs,” Huang said. On PCs, NVIDIA “sell[s] NVIDIA’s GPU chiplet either in a pass-through way with Intel or [it is] sold to Intel, and that is then packaged into an SoC,” while in servers NVIDIA would “buy those CPUs from Intel… [and] connect them into Superchips… integrated into a rack-scale AI supercomputer.”

In terms of scope, the collaboration expands x86 options without displacing NVIDIA’s Arm plans. Huang emphasized that NVIDIA’s Arm roadmap continues (“we’re fully committed to the Arm roadmap”), but the collaboration lets x86 systems scale beyond PCIe-only designs: “In the case of x86, we scale up to NVLink 8… And so now, with x86, we can scale up also to NVLink 72.”
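
In software terms, “scaling up” means more GPUs become directly reachable peers inside one NVLink domain rather than devices isolated behind separate PCIe trees. As a rough, hypothetical illustration, the CUDA sketch below simply asks the runtime which GPU pairs on whatever machine it runs on can access each other’s memory directly; it makes no claims about the announced products.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative only: reports which GPU pairs on the local system can access
// each other's memory directly (peer access), a rough proxy for the size of
// the scale-up domain visible to software.
int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("Visible GPUs: %d\n", count);

    for (int src = 0; src < count; ++src) {
        int peers = 0;
        for (int dst = 0; dst < count; ++dst) {
            if (src == dst) continue;
            int ok = 0;
            cudaDeviceCanAccessPeer(&ok, src, dst);
            peers += ok;
        }
        printf("GPU %d can directly access %d peer GPU(s)\n", src, peers);
    }
    return 0;
}
```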

On governance and manufacturing, the policy backdrop helps explain Huang’s distancing language: Intel’s funding and ownership were already under unusual scrutiny ahead of the NVIDIA deal. The Biden-Harris administration awarded Intel up to $7.865 billion in CHIPS incentives on Nov. 26, 2024. Then, in late August 2025, the U.S. government took a 9.9% equity stake in Intel (433.3 million shares at $20.47 per share), funded by reallocating previously awarded support, per Intel’s 8-K and contemporaneous reports.

Against that backdrop, when asked about politics and foundry choices, Huang said, “The Trump administration had no involvement in this partnership at all,” positioning the NVIDIA-Intel collaboration as a commercial decision. He also reiterated respect for TSMC while keeping the day’s focus on product: custom x86 CPUs tied into NVIDIA’s NVLink fabric for data centers, and x86 SoCs with RTX GPU chiplets for PCs.

