There are only a handful of people in the semiconductor industry who can genuinely claim to have lived through nearly every major shift in modern compute from the inside. Pat Gelsinger is one of them. His career stretches back through Intel’s formative decades, an era when CPU performance scaling still defined the direction of the industry, and through the build-out of enterprise infrastructure and virtualization. He then returned to Intel for one of the most difficult and closely watched rebuilds in semiconductor history. For years, I have covered Pat in different contexts: as Intel’s technologist, as the public face of its attempt to reassert manufacturing leadership, and as one of the most vocal believers that Moore’s Law, in one form or another, will continue almost unabated. More recently, I spoke with him at Intel Foundry Connect in the middle of the “five nodes in four years” push. This conversation comes from a very different moment in his career.
Today, Pat is no longer running Intel. Instead, he is operating from Playground Global, where he has shifted to helping identify and shape the next generation of hard technology and hard physics bets. But in many ways, the themes are familiar. He is still thinking about the future of compute, he is still thinking about architectures, manufacturing, software, scaling, and he is still thinking about which new ideas have the technical depth and leadership to matter. In the interview, he describes this stage of life as focusing on things that matter, with people he enjoys, and says Playground gives him a way to stay close to companies he believes could shape the industry.
That broader perspective is what makes this conversation interesting from my point of view. This is not just a retrospective on Pat’s past, and it is not just a venture capital conversation either. It is a discussion with someone who has spent decades building and leading in classical compute, and who is now looking outward at AI accelerators, dataflow systems, resilient networking, quantum computing, and the growing pressure to make inference dramatically more efficient. Pat argues that inference still needs to improve by orders of magnitude, that the future is fundamentally heterogeneous, and that the next real breakthroughs may come from combining what he calls a “trinity of computing”: classical, AI, and quantum systems. Whether you agree with all of that or not, it is a view worth hearing from someone with Pat’s knowledge and experience.
In this interview, we talk about what changes when a former Intel CEO starts evaluating startups instead of product roadmaps, how he thinks about proof, market fit, and leadership teams, why he believes the industry is still early in the real buildout of agentic and scientific AI workloads, and where he sees the next pressure points in compute, from precision and resilience to optical links, dataflow machines, and the search for architectures that move beyond a purely von Neumann view of the world.
The following is a transcript of the video interview embedded above. Phrases have been adjusted slightly to improve readability.
Ian Cutress: We’re here at Playground Global, which is Pat’s new home. How’s it going?
Pat Gelsinger: I’ve just turned 65, and I’d say at this phase of my career, I want to do things that matter - things that, if they succeed, make a difference - with people I enjoy. Playground is a place I can do that.
I have about 10 companies that I’m deeply involved in right now. I’m on the board of most of those, and I think some of these are going to be industry-shaping companies. If I get to help build great companies with great leadership teams at this phase of my career, this is good. And so far this year, [although] I’m a rookie in venture capital, the team has embraced me and we’re making a difference.
Ian Cutress: Did you ever see yourself eventually migrating into this sort of role? It’s a bit hard to go from a sustained big company sort of position into venture capital.
Pat Gelsinger: Remember, I was in the industry for 45 years! When I left Intel the first time, to EMC, literally I retired one night, and started the next morning. When I went from EMC to VMware, I literally did both jobs for two weeks. When I left VMware to come back to Intel, I did both jobs for four weeks.
It has been 45 incredible years. So after leaving Intel, my wife walks into my study literally the day after the announcement and she said, “you’re not done yet”. I think that means “don’t be home too much”! I have a little bit too much energy, I guess!
That began a 100-day journey for me of figuring out what I wanted to do next. I interviewed with a lot of venture firms, private equity firms, a bunch of CEO roles, and government roles - it was when the Administration was coming in - university roles too. At the end of that, I decided I wanted to make a difference, and Playground came out at the top of that list.
Then there’s also my Gloo company, the faith technology company. So I’m splitting my time between the two of those. I want to make a difference eternally and make a difference in the future of science, and bring those two together. That’s a pretty good position in life.
Ian Cutress: I can’t imagine how many people came out of the woodwork to offer you advice on the next stage?
Pat Gelsinger: Lots. Of course I wanted their advice too. I had what felt like 100 meetings in 100 days to figure out what was next - and here we are a year later and it’s going pretty well. A lot of it is in semiconductors, but you know, I’m doing Alva Energy, nuclear upgrading. I didn’t know anything about nuclear energy. Now I’m doing that too.
I am learning a lot too. I’m doing superconducting, Josephson junctions and such. It’s sort of in the computing ecosystem, but wow, those are really different-looking transistors from what we knew - to now worrying about cryogenics. It’s stretching my mind into quantum computing - and for a digital Boolean guy my entire life, it’s like you’ve got to turn your brain to a 90-degree angle to think about qubits: how they function, how they’re entangled, and how you use them to compute. So it’s a lot of learning for me personally. I say the only disappointment I have at this phase of my career is that I’m not 35 years younger! This is the greatest time to be a technologist in human history.
Ian Cutress: So in my mind I’m seeing you at a desk, poring over papers from potential startups, trying to find the one that’s got something a bit unique.
Pat Gelsinger: I’m reading a lot, so lots of papers, and that’s part of it. A lot of that work gets done by our associates, so I’m like, “that one sounds pretty interesting”, and they’ll seek my advice on how it would fit into the industry. They’ll go do work, we’ll meet. I like reading, but even more so, I like interacting. To me, if I could read for ten hours or meet with the company for one hour, that one hour is more valuable.
Ian Cutress: So you’d rather have the college lecture?
Pat Gelsinger: Well, it’s not the college lecture. It’s the questioning, the interaction. I want a few smart people at the table. I want to ask questions and I want to hear their questions, how they respond, and that’s my best learning modality. But yeah, I mean, my brain is being stretched in so many dimensions. I’m learning about bioengineering, how semiconductors are being applied to bioscience as well.
It’s what I call the trinity of computing - the fusion of classical, AI, and quantum effects coming together. I think that will literally define the problems that we will now be able to compute. Almost everything that we’ve been able to express mathematically will now become computable.
I mean, you just have this powerful new tool coming over the horizon with quantum computing, where many of those quadratic functions that were uncomputable before now become tractable. It’s sort of like, yeah, great, let’s go attack those portions of the problem as well.
Ian Cutress: So how do you marry the fact that you’re not completely green and brand new, but these are new degrees of freedom for you to explore, with evaluating whether a startup is actually worth the investment? There is something to be said for the person who has spent 40 years in a space and what they add to that discussion!
Pat Gelsinger: It sort of comes down to three questions. One is, can we make the technology work - is there real proof? We like to do hard-tech things, so there really is this question: does the tech work? Then you’re sort of saying, okay, how does it get to market, how do we bring it to an at-scale deployment? Then the third question is whether we have the team to do it.
With the leadership team, we have brilliant technologists asking the hard questions: what are the milestones and the way to show that we get there, the market insertion. By the way, that’s part of my unique value. I’m able to call up CEOs throughout the industry and say, hey, don’t let this one get stuck down here as some entry-level person as you evaluate it - this one matters. Let’s work together on how we differentiate your offerings, your GPUs, your networking components, your business using these. That connectivity helps quite a lot with number two, driving that business development and accelerating it into the marketplace.
And then, with 45 years of major leadership roles, my job is to shape leaders into great CEOs, great CXO teams. I give them some pretty tough feedback sometimes! Some of them are great leaders but can’t present slides to save their life. Okay, we’re going to make you at least a mediocre presenter; you’re going to be able to do that. In other cases, they’re great presenters but not great leaders. How do we build a leadership team around you? In other cases, we’ve got to show them how to make a sale. There are all these things associated with building a great company. Since I’ve been in these big leadership roles, I really enjoy bringing that to the CEOs. But the hardest thing, Ian, is that I don’t have my hands on the wheel. Every once in a while it’s sort of like [mimes grasping the wheel]!
Ian Cutress: You can lead a horse to water, right?
Pat Gelsinger: Yeah, for some of those! But it (the new job) also does give me a lot more flexibility as well. “So, tell me how you solve this one? On my desk by Monday morning because I’m taking the weekend and taking my grandkids skiing.”
Ian Cutress: I’ve been seeing some of the magic you guys have done with our good friend Mark Wade over at Ayar Labs. They just did their Series E. Mightily impressive, and they seem to have funding from everyone now.
Pat Gelsinger: Well, I’ll say there’s this characteristic where you get a term sheet, somebody ready to lead the round, price the round, and then people are ready to jump in. And there’s this sort of interesting thing - what’s going to cause that dam to break?
Ian Cutress: Did you mean that in the sense of people don’t want to fund, or it just may end up being oversubscribed?
Pat Gelsinger: Everybody is sort of around the table, but who’s going to put the lead term sheet down for this round? Who’s going to price the round? Who’s going to say I’m going to put the biggest check into this round? And as soon as it’s them, or maybe it’s two who are sort of co-leading the round, then others are ready to come into it. The round is going to get formed. We’re going to see it now, we want to be part of that as well.
That was very much the case with Mark and the Series E. Once we had that lead term sheet, it sort of broke the dike a bit, and then the energy was unleashed of many other people wanting to come into the round, quite a few strategics as well. So to me, as I’m learning the venture game, if we can call it that, it’s about having that network of people who I am confident are going to lead with me to form rounds in a timely fashion - because if it takes us three months to form a round, that’s three months later in the market as well. And I don’t want to let some of the hottest companies languish in competitiveness by not moving them forward effectively.
Ian Cutress: Speaking to a few people in the last couple of months, since Nvidia’s acquisition of Groq, a lot of people who are raising have suddenly found it’s getting easier in 2026. Are you finding similar?
Pat Gelsinger: Clearly that was catalytic! Nvidia is not going to acquire ten companies in that space - they were clearly deficient in having optimized inference, and as I said at GTC last year in the pre-show, we need to make inferencing 10,000 times better. Not 10x, not 100x, 10,000 times.
Ian Cutress: That’s a great headline! 10,000x vs 10x. But as always, the question is how?
Pat Gelsinger: When you think about it, it’s replacing search. Now with OpenClaw, something none of us quite predicted, even though everybody was predicting agentic. It’s just demonstrating that, okay, we’ve got to make inferencing a lot better. My “10,000x” was sort of a number that I pulled out based on some math of where search was in terms of energy, compute, and cost. But as proud as NVIDIA is, and should be, of the incredible progression of the GPU, it got them to say the GPU is great for training, it’s great for some of the waterfall from training into inferencing, but it’s not an optimized inference chip. And that led them to (acquire) Groq. But now there are 20 companies pursuing that assignment, asking how they can be 10x or 100x better than where Nvidia just described the LPU with the Groq design.
Ian Cutress: I hate to tell you, but I’m tracking 150 these days [laughs]. It’s anything from pre-seed all the way up to IPO. A variety of SRAM-based designs, dataflow, people talking about HBM5 and high-bandwidth flash.
Pat Gelsinger: I’ve got to see your list!
Ian Cutress: When I look at the Playground companies that put you in their press releases, it seems like a lot of bets you’re making are non-von Neumann these days. Take Snowcap for example, an absolute zero type of computing. Is that where you see the future going, or is this just a “we have to optimize for every workload” sort of play?
Pat Gelsinger: Part of that is the language I’ve used: I’ve called it the trinity of computing. We’ve always believed in accelerators - I’ve always believed in accelerators - whether they were SIMD accelerators, SIMT accelerators, MIMD, these other forms of how you attack those workloads. To me, trying to fit the world through a von Neumann lens means that all workloads look von Neumann.
Ian Cutress: A case of ‘is the workload defined by the architecture, or is the architecture defined by the workload?’
Pat Gelsinger: In this sense, what we’re now seeing, and I think clearly the surge in AI has said: “hey, different architectures enable different workloads to emerge at scale.” That’s what’s made this period exciting, that all of a sudden those data-centric workloads, with matrix functions against them, are able to do really interesting things. But it’s the workload that matters.
I was saying that before even Jensen was saying that, in some of our debates. So it’s the workload that matters. It’s about which workload is interesting, solving unique and wonderful problems, and then what’s the best architecture to run it on. There’s nothing that you saw yesterday at GTC that can’t run on the CPU, right? That’s the beauty of von Neumann type machines.
Ian Cutress: Can you run Jensen on a CPU? Maybe he’s got a little bit of magic there, but I know what you mean.
Pat Gelsinger: It may not run efficiently, it may take a long time. As we said, there are algorithms that can only run on a quantum machine. But actually, no, I can run them on an AI or classical machine - it’s just that they take a billion years and run out of memory.
I can’t get there from here using that computing architecture. So I do think in that sense we’re going to see this multiplicity, or heterogeneity, of what the compute architectures are, and most of the interesting problems will combine them. Yesterday, Jensen announced a CPU. What a backward thing for him to do! Why are we going back to the CPU?
Ian Cutress: It was funny, because at the beginning of the presentation - I’m not sure if you caught it - he said the CPU is old, it’s the legacy. And then, as you say, he announced a CPU.
Pat Gelsinger: Of course I noticed that! I’m a CPU guy, of course I notice it! So first we’re going to spend half the keynote throwing it under the bus, we don’t need it anymore. Then we’re going to announce that we now have the best one because we need it.
So which is it? Well, the answer is the workloads need both.
There are things that run like crap on a GPU. If I have control-flow-related functions, they’re terrible on a GPU. Your basic if-then-else is a terrible thing to do on a long-pipeline GPU. It may be able to run six different dimensions of parallelism, but if-then-else is not a parallel function. So in that sense I really do see it as a heterogeneous view, and that’s why I call it the trinity of computing. Classical, that’s going to be control flow, tool based, analytics based, operating systems, all of those things, more on the CPU. Then I’m going to have this raft of things that are data-centric and algorithmic that are going to be great on the AI. And then I’m going to have these things that only work because of a quantum machine, being able to apply entangled qubits against those problems. With that, now we can open up a whole lot of workloads that are not computable today.
Ian Cutress: I know people are going to be shouting at the camera, because we discussed this back in the day, and your good friend Greg Lavender was on top of this: it’s the software strategy. You can’t have a heterogeneous installation without the software that attacks it. I’ve always been a vocal critic of write-once, compile-to-target, because it’s never as efficient as you need. We know, working with hyperscalers, that they’re going to extract every percentage point of power and efficiency out of their code, and you can’t do that at a high level with this heterogeneous ecosystem.

So… it sounds blasé to ask, but what can we do here? We’ve had standards in machine learning, for example Windows ML, and Intel tried to do it with oneAPI and everything else, but there just doesn’t seem to be any consolidation right now like we’ve had in GPUs.
Pat Gelsinger: Yeah, I do think we sort of abstract, abstract, abstract. Then all of a sudden we need to collapse. I think everybody knows, a little bit over a year ago, was the DeepSeek moment. Well, what did they do? I’ll say they collapsed. They went to really understanding what the machine was doing, getting very targeted on how to use the machines to be much more efficient.
What does an engineer do? They produce great results in the constraints that they have. To me that was part of what made DeepSeek such a defining moment, because they sort of tunneled through the stack, said we really know what’s going on in hardware, we’re going to align our algorithmic functions against what hardware we have available and is capable. I think we’re going to see some of that happening again.
I also think one of our portfolio companies, NextSilicon, building a programmable dataflow machine, is saying “hey, this has all gotten too complicated.” We have to create an abstraction sitting beneath a more programmable hardware embodiment, that is able to statically and dynamically reconfigure itself both in network topology and in compute resource to the most efficient way of the current running workload.
As we’ve all seen in AI workloads, the characteristics of the workload change dramatically. Am I in the prefill phase? As you go through the different phases of computation, big GPUs are turning on and then they’re turning off and power rails are bouncing across the scale, networks are being overloaded and then they’re fallow. So for that I think there has to be this layer of programmability, and that’s part of the reason we’re quite excited about NextSilicon.
But clearly we have DRAM shortages today. Imagine that: four years of a great DRAM business - that has never happened in the industry’s history, right? And now you have extraordinary logic and DRAM demand at the same time.
Ian Cutress: We’re all so used to these bubbles in commodity markets. And the question I get asked a lot by investors, and I’m sure you do too, is: when is it going to end?

I’m not sure about you, but I struggle to predict that end, because realistically the only way it ends, from my perspective, is if somehow the bottom comes out of the market.
Pat Gelsinger: Well, you know, to some degree - the ideal gas law, Jevons paradox, whichever way you view it - we’re in a Promethean period of compute expansion. I think OpenClaw has clearly been that next accelerant: all of a sudden, the question is how many tokens per day you’re going to use as an engineer. Because I’m going to spend $100,000 on tokens and I’m going to give you a $200,000 salary. At what point does that become balanced?
Ian Cutress: That’s the argument I’ve made. It’s how much do you want to put into an engineer?
Pat Gelsinger: You know there doesn’t seem to be any end in that value proposition right now. Now obviously as I said we have to make inference 10,000 times better. I do think some of these compute memory architectures are going to have meaningful breakthroughs, that are going to make inferencing dramatically cheaper and not just 10x like we saw from Groq yesterday but 1000x or 10,000x better.
Ian Cutress: Did you see the Taalas announcement earlier in the year?
Pat Gelsinger: No, I didn’t follow that one.
Ian Cutress: So they’re doing what looks like a structured ASIC design: you bake your model into the communication metal layers. And they’re seeing 10,000-14,000 tokens per second. Pure ASIC style - a traditional ASIC by definition. I’m not sure how you feel about people calling things GPUs versus ASICs, being the traditionalist you are.
Pat Gelsinger: You know, tomato, tomahto, right?
Ian Cutress: But investors sometimes don’t realize!
Pat Gelsinger: Yeah, but at the end of the day, market acceptance will be based on tokens per second, tokens per second per watt, aggregate throughput capabilities, latency. It will reduce to real, measurable engineering results over time.
Ian Cutress: So my argument against that has always been that it doesn’t matter how many tokens you produce if your tokens are useless. It’s about the value of a token. I’m sure you’re seeing the same thing I am: the biggest workload where the token output has value is code - people are willing to pay if you accelerate your engineers. I often cite IBM too, because they have a consultancy business where they generate tokens for their clients, they can upsell, and the client just sees the reduction in cost. Is there anything we can do to stop talking about just garbage tokens?
Pat Gelsinger: Let me disagree with you a little bit. I’m now starting to see very meaningful agentic business process workflows, where people are turning nominally low-end white-collar business process flows into highly agentic workflows - where I truly am putting agents to work solving things for me, making me more productive, more scalable as well. By the way, that’s sort of what led to the whole SaaS implosion.
As people started to see, “oh, what’s going to happen to my Salesforce, my Oracle” all of those start changing as well. And those are very real. I think that probably becomes the AI pathway into most enterprise workloads as well - once we solve security problems and people get comfortable with where they’re running, whether that’s on prem or in the cloud, or what the security model is, I think that’s powerful.
I do think there’s always going to be a little bit of “my token is different than your token!”, so there’s going to be a question of what a token really is. But good benchmarking is going to help us sort that out over time. I’ve done more benchmarks in my career than most humans ever should consider. There’s that phrase, ‘there are lies, damned lies, and benchmarks’, but you’re still always seeking the best metrics that allow us to deduce differences in hardware. That’s going to get harder, as we’ve already talked about, with regard to heterogeneous architectures as well. The role of good benchmarking is going to get more important in this next phase.
By the way, it’s going to be critical because people are going to be making billions or tens of billions of dollars of capital decisions based on that work. So it’s going to get more, not less important, because otherwise I’m going to build data centers that produce crappy tokens.
Ian Cutress: I think we started calling it “benchmarketing”. Well, it depends on the graph you get from the vendor!
Pat Gelsinger: Remember my code is in the SPEC benchmark, right? One of the most venerable benchmarks of all time. You know, my code is still in there.
Ian Cutress: In 2006 or 2017?
Pat Gelsinger: Yes, all, from the very earliest days of SPECint, my code was there.
Ian Cutress: So which test, which subtest?
Pat Gelsinger: Well, if you go into the SPEC benchmark and look, the espresso flow - that was mine. Associated with it, we had the compiler flow - that was my code that we were using in the compiler when we initially did it. I think I have a third one in there as well.
Ian Cutress: Would you trust yourself to do that again?
Pat Gelsinger: Oh my, no! At the time it was great. Of course, when people give me those numbers today, I think they suck, right!? It’s like everybody was trying to break cache sizes and so on with them at the time. Today, it’s no longer a valid view. I would just yell at our engineers in my last four years at Intel when they would show me SPECint numbers or SPECint rate numbers. That code is so old, it’s been so tortured into cache footprints, that it’s no longer a meaningful view of system performance.
Ian Cutress: There’s a small argument here because I’ll speak to Arm and they still value SPEC2006 because of the embedded market. The embedded market still relies on it. But I completely understand your point.
Ian Cutress: Going back a little bit: when we have these agentic setups, I often see a lot of people playing with them, but it seems a very personal thing - people improving their own workflows. I struggle to see where it will be offered at scale. The only workload where I see it actually being applied at scale is chip design, because our good friends at Synopsys and Cadence are leaning on it more heavily than almost anyone else.

So how do we bridge that gap? Is that just a time thing, or a familiarity thing?
Pat Gelsinger: That’s a great place to start - so let’s just say that’s wonderful. But there’s a whole lot of other engineering workflows other than that, in material science, CFD, and now we’re worried about hypersonics, so there are just all sorts of wonderful things that open up across the greater EDA space, not just chip design.
The explosions that are happening in bioengineering - now you’ve created orders of magnitude of additional complexity as you start looking at three-dimensional molecular models. You can’t make the nice 2D simplifications like you can in a lot of the chip design flows.
So I do think we’re at the beginning of these processes, not anywhere close to the end. And I do think that every one of these, as you open one up, somebody’s going to be the one that jumps ahead, and then what happens is sort of like cockroaches - there’s a little bit of food in that corner, and a lot more industry innovation will run in that direction.
Ian Cutress: Given your background - and it relates a little bit to my background on the HPC side - I speak with the people over there almost on a weekly basis, and they’re getting frustrated by the whole march of machine learning. Especially as we look at 64-bit precision being reduced in traditional accelerators, and I know a lot of them are wary of 64-bit emulation on the 8-bit side. Do you have any good news for them? Or whatever your opinion is on it?
Pat Gelsinger: Well, a couple of thoughts. I mean, when people started talking to me about BF4 (4-bit brainfloats), I was like, what are you talking about? You know, I was worried.
I was worried whether 64 bits was good enough. We needed the 80-bit modes that we put into IEEE 754 for the precision. Now we’re saying that four bits is good enough. Even there, people are doing interesting work, but what they find is that models actually don’t settle very well when you get to a precision that low. So useful bits seem to be more in the 8- to 16-bit range for most model work. I’m not a model expert in that regard, but they run some of those benchmarks at four bits; the reality is most machines aren’t actually operating there. Emulating 64 bits in a 16-bit environment is very different from saying we’re going to do it in a four-bit environment.
But I do think as we move to the next phases of science, and not just doing large language model performance, as we move to the next phase of science, where the LLM almost becomes a leaf node, not the core computational node, you’re going to be back to worrying very much about precision. So I think there will be, over the next couple of years, the revenge of the HPC guys!
Because now when I want to start using LLMs in the context of a CFD for my airplane or hypersonic design, I’m going to be fusing these together and I’m not going to be able to give up precision for this portion of the workload. Even though the LLM portion may be perfectly happy running on a BF8 machine, I’m going to need them both. So I do think the next phase of application of AI, is going to be increasingly in the science domain, increasingly in three dimensions, not in flatter language representations. And as a result, all this pursuit of getting to four bit, two bit, etc, I think some of that’s going to be for naught, because the real machines are going to be combining those together in a much more central way in workloads.
Now maybe I’m just trying to convince myself that all that hard work I did on 64-bit and 32-bit floating point is going to come back to reality! But I think it’s based on the view of workloads. The next phase is not doing more language modeling, it’s much more science modeling, and as you get to science, I need real precision to look at many of those algorithmic domains. Again, it comes back to the workload.
Ian Cutress: Speaking to a few people who run university supercomputers, they’re saying that because of the price of accelerators these days and the high demand, they’re almost seeing cloud computing be more cost effective. If there’s one thing you don’t want, it’s a student accidentally clogging up a thousand GPUs in the cloud with bad code. I think they’re struggling, and as you mentioned before, our friends at NextSilicon are trying to solve part of that. I think there’s a community who really want more love.
Pat Gelsinger: I think so as well. Remember, NVIDIA began its journey into AI through the HPC door. So I do think there’s going to be good science being done there. I think the National Labs matter a lot. I think some of the government programs matter a lot here, and people like NextSilicon I think will be helpful. But I do think there needs to be more networking work.
I joke that NVL72 is an engineering marvel and a manufacturing nightmare, right? How do you build those things, scale them? You’re hitting just the limits of copper, we saw a bunch of optical conversations. I think we need new models that become resilient networks, like we’re doing with Delos Data, so that we create more flexibility in the underlying network, because we can’t have the failure rates that we’re seeing today for these super large configurations.
Then you’re just spending all your time either restarting and redoing, or checkpointing, right? And that’s not a computing architecture suitable for a lot of the workloads that we want to get to. So I do think some of these things are going to move us more rapidly to optical, more radically to resilient networks, and more radically back toward dataflow machines with the full gamut of precision. And then of course we’re going to do radical things like Snowcap, which are just 1000x better.
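[Editor's note: the restart-versus-checkpoint trade-off Pat describes can be sketched with Young's classic approximation for the optimal checkpoint interval. The MTBF and checkpoint-cost numbers below are illustrative assumptions of mine, not figures from the conversation.]

```python
import math

# Sketch of why very large clusters drown in checkpointing: with N devices,
# each with a given MTBF, the cluster MTBF shrinks as MTBF/N, and Young's
# approximation gives the optimal interval between checkpoints as
# sqrt(2 * checkpoint_cost * cluster_MTBF).
def checkpoint_overhead(n_devices, device_mtbf_h=50_000, ckpt_minutes=5):
    cluster_mtbf_h = device_mtbf_h / n_devices
    ckpt_h = ckpt_minutes / 60
    interval_h = math.sqrt(2 * ckpt_h * cluster_mtbf_h)   # Young's formula
    # rough fraction of wall-clock time lost to checkpoint writes alone
    return ckpt_h / (interval_h + ckpt_h)

for n in (1_000, 10_000, 100_000):
    print(n, f"{checkpoint_overhead(n):.1%}")
```

Under these assumed numbers the overhead grows from a few percent at a thousand devices to over a fifth of all wall-clock time at a hundred thousand, before counting the restart and redo time on actual failures.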
Ian Cutress: I’ll go speak to people like IBM with the Z processors and then some of the automotive players, and they speak about resilience as a very hardware-focused element. Yet with Delos Data and others we’re seeing it more as a software application on top of the hardware. Is there a right balance?
Pat Gelsinger: I think the workload again is the determinant, in the sense that, hey, if the software has assumed faulty hardware and has built resilience into the software layer, then software is the right answer. But for a whole lot of workloads, it’s sort of the difference between a virtual machine and a Kubernetes machine: who is taking care of resilience? In the VM, I’m assuming the hardware is, and I’ll run any software including containers. In the opposite view I can only run containerized software, because I’m not presuming resilience in the hardware. And for a lot of things, hardware is better for resilience, right? I can see memory errors, I can correct memory errors, I can see link failures...
Ian Cutress: But it costs die area that I could spend on another ALU...
Pat Gelsinger: You certainly could, and I think now, as we’re looking at machine learning and the failure rates we’re seeing, we need more resilience in the hardware. It’s not resilient enough anymore for some of these incredibly large cluster sizes that we’re seeing emerge. So I don’t think there’s a one-size-fits-all answer to that question. I do like Jensen’s co-design perspective here, because there are certain things that I think are done better in software resilience, but there’s a whole lot of things where you don’t let the hardware guys off the hook. You’re going to build five-nines hardware. You’re going to be able to prove 10 to the 12th, 10 to the 14th error rates. And until your material structures have proven that, get back in the lab and finish your freaking work, right? Prove something that I can really build on into the next generation.
Ian Cutress: Last time we recorded one of these, it was at Intel Foundry Connect, I’m sure you remember. It was all about the Five Nodes in Four Years. How much do you spend these days thinking about next generation foundry and packaging technology?
Pat Gelsinger: Well, obviously here at Playground we have investments in Xlite, so I’m spending a lot of time on how we build the next generation. Or as I call it, waking Moore’s Law up from its nap.
Ian Cutress: So it’s not dead?
Pat Gelsinger: No, absolutely not! I think we’ve put it on an economic pause, because transistors haven’t gotten cheaper. I can still build more of them, but they’re not cheaper anymore, and they’re not as effective a tool as they have been in the past. That said, Xlite is bringing a better light source to EUV, enabling us to think beyond today’s 13.5nm wavelength of light, moving past and solving the stochastics, the polarization...
Ian Cutress: You really think there’s going to be something less than 13.5nm?
Pat Gelsinger: Absolutely!
Ian Cutress: I think I disagree.
Pat Gelsinger: Oh why do you disagree?
Ian Cutress: I don’t think I’ll see anything beyond EUV in my lifetime. Because of the stochastics, because of the energy!
Pat Gelsinger: Oh, but with a free electron laser, I can give you so many more photons, right?
Ian Cutress: Yes, but in terms of making it commercially viable at scale?
Pat Gelsinger: Absolutely, not the slightest hesitation in my mind. We’re going to pull that off over the next decade. Absolutely the case. And whether it ends up being in the 7nm range or the 4nm range, absolutely, there will be a next generation wavelength of light. It’s going to be at much, much higher power levels. Free electron lasers can produce 2000, 3000, 4000 watts of delivered energy, which allows you to radically reduce the stochastics. Even if you apply that only to 13.5nm, you get a big win in the yield characteristics. I’m going to be able to go from double patterning to single patterning, so I get more out of my capital efficiency.
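[Editor's note: the stochastics argument follows from photon shot noise. Dose variation scales roughly as one over the square root of the photon count, so at a fixed scan speed, more source power means proportionally more photons and less relative noise. A back-of-envelope sketch of mine, using the wattages mentioned above as inputs.]

```python
import math

# At fixed scan speed, delivered dose scales with source power, and photon
# shot noise (one driver of stochastic printing defects) scales as
# 1/sqrt(dose). Baseline is a ~600 W EUV source.
def relative_shot_noise(power_watts, baseline_watts=600):
    dose_ratio = power_watts / baseline_watts
    return 1 / math.sqrt(dose_ratio)   # noise relative to the baseline

for p in (600, 1000, 2000, 4000):
    print(p, round(relative_shot_noise(p), 2))
```

By this simple scaling, a 2,000 W source cuts relative shot noise to roughly half that of a 600 W source, which is the "big win in yield characteristics even at 13.5nm" argument.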
Ian Cutress: But if you can’t reflect it, that means you have to scan the wafer vertically in order to do it.
Pat Gelsinger: Hey, there are going to be new material structures; we’re going to innovate. So absolutely, you know, you skeptic for the advancement of science and new materials. This is going to be thrilling.
Ian Cutress: It’s not that I’m an absolute skeptic - I’m just pulling on my materials science background.
Pat Gelsinger: Okay, for your audience here, you and I are going to take a bet right now that we will see wavelengths below 13.5nm in the next decade in production deployment.
Ian Cutress: Okay, in mass production, yeah, a dollar is always a good bet. I always bet a dollar.
Pat Gelsinger: Yeah, you know, pick your favorite bottle of wine, or favorite thing that you like, whatever it is, because you’re paying me, buddy.
Ian Cutress: We’ll see!
Ian Cutress: We recently saw a startup promising what they call X-ray lithography, and I was doing the mathematics - the video on it might actually be out before this, because we’ve just done the final cut. I spent an hour going through the math of how many photons you need compared to EUV for effective use, how you scan, and how you produce that light. An EUV machine will do 150 wafer layers per hour, and the best I could get a synchrotron to do across all the beamlines was 40 wafers per day. Per day!
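[Editor's note: for scale, the throughput gap Ian describes works out as follows, a simple reconstruction of the arithmetic using only the numbers in the transcript.]

```python
# An EUV scanner exposes ~150 wafer layers per hour; Ian's estimate for a
# synchrotron across all beamlines was ~40 wafers per day.
euv_per_day = 150 * 24          # wafer layers per day for one EUV tool
synchrotron_per_day = 40        # wafer layers per day, all beamlines

ratio = euv_per_day / synchrotron_per_day
print(euv_per_day, ratio)       # 3600 layers/day: a 90x throughput gap
```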
Pat Gelsinger: Okay, well, with free electron lasers we believe we’re going to be able to deliver dose at full productivity, equivalent to in excess of 2,000 watts, compared to today’s 500 or 600 watts in the EUV machine. They just announced 1,000 watts. So we believe we can go much, much higher. And also the spectral purity of a free electron laser: instead of splattering tin like EUV, we’re producing finely precise light sources as well. This is a pretty magic technology, so that’s one example.
Ian Cutress: A small story: I found the supplier who provides tin for ASML, and they wouldn’t tell me anything! I reached out to the supplier and eventually worked out that you’re using only grams per day in an EUV machine. So the cheapest part of the machine is the tin.
Pat Gelsinger: So yeah, I definitely think there will be that next generation of light. We see that light also enabling many other aspects of semiconductors: new forms of metrology, new forms of packaging as well. I think it will initiate a whole set of equipment development, as well as new materials development, for mask refraction technologies. New forms of chemistry can be improved as well: if I give you more photons, I’ll be able to move to harder resists, which have natural benefits in and of themselves, not just new chemistry, when I go to a 4.5nm wavelength of light. So yeah, I’m an optimist there. And of course, as we’re seeing just extraordinary capital investments going into today’s level of machines, I think people will start to see, “Oh, there’s real value in some of the critical layers moving beyond that.”
Ian Cutress: Here’s another one of my critical points, and I want to cite the work that was done on 450mm wafers, the 18-inch transition. You know as well as I do that between IBM, Intel, and others, this whole consortium came together, solved it, and then the industry turned around and said, yeah, but it’s going to cost us a trillion dollars to transition, so we’re not going to. I have a great picture of me holding one of those 18-inch wafers, and I asked IBM, can I have it? And then we realized there may be export restrictions on it. Do you think about that when you’re considering the next phase of semiconductors?
Pat Gelsinger: Oh absolutely, you have to think about the adoption: the insertion point in volume supply chains, what the capital requirements are to move it into volume deployment, what the business models associated with it will be. One of the things with the Xlite business model is we’re going to move to photons as a service.
Ian Cutress: Oh no! That sounds terrible!
Pat Gelsinger: Well, I call it God said, “Let there be light,” right? Light as a service. And just like today, you have chemical supplies sitting outside of the fab and energy substations out of the fab, you will have light substations out of the fab. It’ll become a utility going into the fab. And that has so many benefits, right? I use different pools of capital to go accomplish that, I’m not burdening the fab with the capital requirements. Also, that’s very long life. I can keep upgrading the free electron laser. I can attach it to many different types of equipment, metrology equipment, packaging equipment, lithography equipment.
Ian Cutress: Assuming you can bend it?
Pat Gelsinger: Oh yeah, not that hard, right? Grazing incidence mirrors and other things that are pretty robust technologies. So we see this entering into a next phase of how you build much, much higher productivity into the most expensive capital equipment industry on earth, the semiconductor industry. We have to bring capital efficiency to the semiconductor industry if we’re going to continue to have this dramatic expansion that Moore’s Law was enabling. It took a nap, and now we’re going to wake it from its slumber.
Ian Cutress: I’m seeing this interesting dichotomy here, because on the one hand you paint a very vivid picture about the future of where we are in semis, whether that’s architecture, whether it’s manufacturing. But then also it’s the ‘solve the problems of the day’, right? That’s where the business is. It’s all very well having your blue sky investments, but realistically you still need to make money today, not just tomorrow. Do you find that now, in this new stage in your career, you’re favoring one more than the other?
Pat Gelsinger: Well, I’ve always been a tech guy - a deep tech guy. So I’m sort of drawn to those 10-year projects. But I also realize I have to have a portfolio that investors can look at, and we have limited partners as well. So I need some where, okay, this is going to be a two or three year win solving a today problem.
A company like PowerLattice, which we announced - that one’s going to be much faster, because that is a today problem. And some of the other power related companies, I think those will have two to three year cycles, which will give me some time to work on my 10-year cycles as well. But even there, a company like Snowcap, I think we’re going to have real commercial use cases around that in two to three years. As dramatic as superconducting logic is, satellites happen to operate in 4 Kelvin space. If I have 2 or 4 Kelvin superconducting logic, that’s sort of a native ambient environment, and I produce so little heat that I don’t have much to radiate. That’s actually pretty good.
Ian Cutress: Well, or anything you do radiate is a larger proportion of what could affect it?
Pat Gelsinger: Yeah, so that’s pretty clever in that regard.
Also, every quantum project will want more things sitting in the cryogenic temperature range, so all of those projects will want more of what we’re doing with Snowcap. And there are some nice problems that fit beautifully into a smaller logic footprint: all the signal intelligence problems that give me smaller footprints of compute, where I don’t need as much memory, are going to fit nicely into it.
So I think there are going to be great commercial applications even before we hit the big wins of creating AI inferencing centers that are 1,000 times more energy efficient than today’s. That’s the holy grail, but there are some really good milestones along the way to commercialization.
Ian Cutress: So what does it take to get on Pat Gelsinger’s radar?
Pat Gelsinger: Well, being super smart, right? People who have really worked on problems, like the Snowcap team - they have worked on that problem for 20 to 30 years, deeply in academia or national labs. They have really deep expertise on problems that are meaningful. If we solve that problem, wow, that matters. And then having real views. Fusion, for example, is not one that I’m particularly excited about, because I need 10 billion dollars to prove it, right? Versus, okay, I can get $50 million to prove critical technology milestones. Those are dramatically different problems. So we like those kinds of problems where we can say, boy, we have tangible milestones that allow us to get to the point where I have to go raise billions of dollars, but I have something that I’ve proven, and now I’m ready to go raise the billions. I’m not afraid of the billions of dollars, but I want to have proof points along the way.
Ian Cutress: And that’s obviously a point where governments start mattering.
Pat Gelsinger: Absolutely.
Ian Cutress: So, how often these days are you having to interact with the Administration?
Pat Gelsinger: Yeah, quite regularly. Xlite was the first recipient of the CHIPS Act under this Administration, so we’re quite excited about that. We have a number of other BAAs in process, the new process for the CHIPS Act, from our portfolio companies, but other governments are interacting as well. For example Japan: they do a whole lot in the semiconductor industry, so we’re working with them. A number of our companies leverage Australia - at PsiQuantum, half the team was from Australia, and I have a packaging company that we’re forming out of Australia doing 3D packaging technology. Some are out of the UK, and we have a presence for one of our companies in Germany. So it really is across the spectrum, leveraging non-dilutive capital where governments see critical roles for themselves. There’s a good amount of that, and I’d say the network I built with the CEOs of the industry, combined with many of the governments, gives Playground a pretty unique perspective as we’re in this next phase of taking some of these really cool companies and scaling them to be things that change the world.
Ian Cutress: I know back at Intel, one of your things was to reinvigorate the American semiconductor industry. I assume it still matters, but it sounds like you’re being a bit more holistic now.
Pat Gelsinger: Yeah, I’d say yes to that, but also yes that I’m still deeply committed to reinvigorating the US semiconductor industry. I’m a US citizen. I consider my story the American dream: a farm kid becoming CEO, leading technology. That’s the American dream coming to reality. I feel a deep loyalty to the nation in that regard. I helped to bring the CHIPS Act into existence. And I do think the absence of explicit industrial policy by our government renders us at risk as a nation.
A brownout in Taiwan has an economic impact that is twice as great as the Great Depression. A brownout, right? Remember, Taiwan has three weeks of energy reserves on the island. This is not a stable situation for the world’s supply chains of technology. I want more of that in the US. I want more manufacturing in the US. I want more of our supply chains for critical minerals and semiconductors. We’ve radically underinvested in the energy capacity of the nation: for a decade, just a one to one and a half percent increase in the energy capacity of our nation. We spent so much time preoccupied with renewables, we forgot that the only thing that really matters is how much energy we have. There are 39 nuclear reactors being built in China today. How many in the US? Zero. 39 compared to zero. That’s terrible, right? Just terrible.
Ian Cutress: If I recall China last year in March stood up 90 gigawatts of solar?
Pat Gelsinger: Yeah. But again, they’re investing in their energy infrastructure. In the AI age, energy capacity equals economic capacity. It is critical that we accelerate that. That’s why Alva Energy, our nuclear operating company, is so important. We just brought that company out of stealth. How do we get more from our current fleet, as well as restart the industry to build a lot more?
Ian Cutress: Well, you should come to Europe, because what we have more than anyone else is regulation. It’s an interesting dynamic, because obviously I’m Europe based, but I spend a lot of time here dealing with companies, especially in the Bay Area. I know you’ve highlighted in the past, for example, optical packaging up in Scotland, and there are other facilities in Germany, which is quite a hub for that. But I often find that even with all the investment over there, all the startups I deal with, even if they’re based there, will slowly migrate to the US.
Pat Gelsinger: Yeah, and I think of Europe as having two problems that way. One is capital: Europe has a lot of low-capital startups, but that mid-capital range, where you move from tens of millions to hundreds of millions, very little of that happens in Europe. So capital formation in that critical middle phase of a company is very hard in Europe. And then I think the regulatory domain is ridiculously challenging there.
As a result of that, there’s the occasional Arm that pops through, but they’re occasional. Extraordinary support, but then also hurdles that make it hard for them to pop through. And that’s why so many companies - and by the way, some of these are coming to Playground - sort of say, hey, let’s keep a footprint in Europe, but I want my domicile to be in the US or in the Bay Area, because of the vibrancy of the community, the research community, but also just the speed at which we can get things done and the capital formation.
Ian Cutress: Well, it’s good that you bring up Arm, because people always say, well, what does the UK bring to the table, right? Okay, we have Arm, we had Graphcore, whatever that architecture was worth in the end. But I’ve always said the one thing we’ve done well is produce CPU architects and engineers. I know you’ve probably had the same conversations I’ve had with people like Philip Wong at Stanford - part of the difficulty of this industry is bringing in new chip designers, partly because the US is such a software driven economy compared to Taiwan or South Korea. Even though Playground is very much a startup incubator and accelerator, do you do more holistic stuff on how we get people to become engineers?
Pat Gelsinger: Clearly the UK and Europe have a lot of great things going for them in that sense, and I’m quite excited about many of those, and they continue to produce that talent. I was also meeting with one of the ministers in the UK recently, and I was highlighting that there’s more quantum based work emanating from Europe and the UK than maybe anywhere else in the world. Why should Quantum Valley, if I analogize to Silicon Valley, be presumed to be in the US? I do think there are a lot of these major new technologies, yet to be developed, that are going to be impactful for the entirety of the world, and where will those centers be? I think many of them could be there. I think it takes capital formation. I was recently having a conversation with one of the ministers in that regard; they gave me their initial proposal for how much capital they were going to put behind it, and I said 10x.
Ian Cutress: I think they just announced two billion this morning or something.
Pat Gelsinger: Anyway, 10x that. I said 10x what they originally said to me, and then you’re demonstrating to the world you’re serious. At their proposal it’s sort of like, oh okay, that’s nice!
Ian Cutress: I was at a Ministry of Defence sort of conference where they asked me to keynote, and I did the wave of new AI hardware. We had somebody from Adiabatic Computing, somebody from Quantum Computing, and then at the end of the day there was a Q&A, and they said, well, the government just issued a billion dollars for new semiconductor investment - where should we spend it? And a billion dollars just isn’t enough to fund startups. It isn’t enough to restart any sort of manufacturing or wafer production. So my response was invest in people - to which the response was that we do invest in people, but then they move to the US. I said, well, what if you don’t invest in people and they stay? So my question is, what can we do to invest in people? I mean, it’s all very well having AI in chip design meaning you can design a chip with 15 people rather than 500, but we still need chip designers.
Pat Gelsinger: Well, I do think there’s a lot of great raw talent there. And when you think about the Oxford/Cambridge core...
Ian Cutress: I’m not even just speaking UK, I’m speaking about getting more chip designers in the US too.
Pat Gelsinger: But the reality is there is a wave back toward hardware. There is this enthusiasm again. How many AI dating apps do we need, right? There’s just been such a move to software, to SaaS, to different application things, but all of a sudden hardware is getting cool again. And I think that’s being recognized in our universities and in the startup community as well.
Some of my friends leading some of the big venture firms - one of them confided in me and said, I think we forgot how to do hard, right? And to me it’s like, but that’s where you started! How did you lose your heritage, your foundation? And I think now you’re seeing that whole resurgence come back, whether that’s physics, material science, chemistry, biology, chip development, or the manufacturing sciences as well. That renaissance of activity in that space is starting to emerge, and I’m super excited. Some universities, like Purdue, never walked away from it, and then you have some of the great schools, Stanford and Berkeley in the Bay Area, MIT on the East Coast. To me, it’s the resurgence of the hard sciences: material science, the hard engineering sciences. This is our day, again.
Ian Cutress: So, what does 2026 look like for Pat Gelsinger?
Pat Gelsinger: Well, I want to have several good exits of our companies!
Ian Cutress: Have you had an exit yet?
Pat Gelsinger: Not one that I’ve led, but obviously the recent announcement with Ayar Labs was a nice step forward, along with a couple of good exits from our portfolio companies. I want to do six to eight foundational investments this year, companies that really matter, and then key milestones of some of the next rounds of our companies, bringing them to that next phase of reality. And clearly on my Gloo side, putting a platform in place that really starts to scale the faith ecosystem.
And then I’ve got eight great grandkids, so we’re enjoying time with them and our family. I have more freedom to invest in them and their lives. And then of course I have about a dozen philanthropies that I spend a lot of time on, so seeing some of those milestones as well. I have a lot of wonderful things happening in my life.
Ian Cutress: So you’re getting the grandkids into semis, that’s what you’re saying?
Pat Gelsinger: We’re working on it!