(comments)

Original link: https://news.ycombinator.com/item?id=39197619

Overall, Intel is unlikely to disappear entirely: they have remained dominant in certain markets and continue to innovate with new technology such as Raptor Lake and the upcoming Ponte Vecchio. However, as noted, the company faces challenges, including missing the mobile opportunity and struggling to stay relevant against newer chipmakers. As also noted, Amazon Web Services (AWS), Google Cloud Platform (GCP), and others have begun offering dense core options for virtual instances. Time will ultimately tell whether Intel remains an industry leader or falls victim to the same trends that have affected other major players. Ultimately, patience seems key when assessing the potential longevity of a storied tech company, especially considering how long technology and industry landscapes take to evolve.

Original thread
Intel's Humbling (stratechery.com)
532 points by jseliger | 382 comments

I'm kind of bullish on Intel right now. They've moved up so many process nodes so quickly and have made some earnest headway in being an actual fab. Let's ignore the elephant in the room, which is Taiwan and its sovereignty, and only focus on the core R&D.

Intel flopped so hard on process nodes for 4 years up until Gelsinger took the reins... it was honestly unprecedented levels of R&D failure. What happened over the 8 years prior was hedge funds and banks had saddled up on Intel stock which was paying healthy dividends due to cost cutting and "coasting". This sudden shock of "we're going to invest everything in R&D and catch back up" was news that a lot of Intel shareholders didn't want to hear. They dumped the stock and the price adjusted in kind.

Intel's 18A is roughly 6 months ahead of schedule, set to begin manufacturing in the latter half of 2024. Most accounts put this ahead of TSMC's equivalent N2 node...

Fab investments have a 3 year lag on delivering value. We're only starting to see the effect of putting serious capital and focus on this, as of this year. I also think we'll see more companies getting smart about having all of their fabrication eggs in one of two baskets (Samsung or TSMC), both within a 500 mile radius circle in the South China Sea.

Intel has had 4 years of technical debt on its fabrication side, negative stock pressure from the vacuum created by AMD and Nvidia, and is still managing to be profitable.

I think the market (and analysts like this) are all throwing the towel in on the one company that has quite a lot to gain at this point after losing a disproportionate amount of share value and market.

I just hope they keep Pat at the helm for another 2 years to fully deliver on his strategy or Intel will continue where it was headed 4 years ago.



There is a good chance for Intel to recover, but that remains to be proven.

Of the long pipeline of future CMOS manufacturing processes with which Intel hopes to close the performance gap with TSMC, for now there exists a single commercial product: Meteor Lake, which consists mostly of dies made by TSMC, with one single Intel 4 die, the CPU tile.

The Meteor Lake CPU seems to have finally reached the energy efficiency of the TSMC 5-nm process of almost 4 years ago, but it also has obvious difficulties in reaching high clock frequencies, exactly like Ice Lake in the past, so once more Intel has been forced to accompany Meteor Lake with Raptor Lake Refresh made in the old technology, to cover the high-performance segment.

Nevertheless, Meteor Lake demonstrates that Intel has reached the first step with Intel 4.

If they succeed in launching their Intel 3-based server products on time and with good performance later this year, that will be a much stronger demonstration of real progress than this Meteor Lake preview, which has also retained the old microarchitecture for the big cores, so it shows nothing new there.

Only by the end of 2024, after seeing the Arrow Lake microarchitecture and the Intel 20A manufacturing process, will it become known whether Intel has really become competitive again.



> TSMC 5-nm process of almost 4 years ago

N5 is interesting because it's the first process fully designed around EUV and because it was pretty much exclusive to Apple for almost two years. It launched in Apple products in late 2020, then crickets until about late 2022 (Zen 4, RTX 4000, Radeon 7000). Launches of the other vendors were still on N7 or older processes in 2020 - RTX 3000 for example used some 10nm Samsung process in late 2020. All of those were DUV (including Intel 7 / 10ESF). That's the step change we are looking at.



Exactly. N5 is sort of an outlier, it's a process where a bunch of technology bets and manufacturing investment all came together to produce a big leap in competitive positioning. It's the same kind of thing we saw with Intel 22nm[1], where Ivy Bridge was just wiping the floor with the rest of the industry.

Improvements since have been modest, to the extent that N3 is only barely any better (cf. the Apple M3 is... still a really great CPU, but not actually that much of an upgrade over the M2).

There's a hole for Intel to aim at now. We'll see.

[1] Also 32nm and 45nm, really. It's easy to forget now, but Intel strung together a just shocking number of dominant processes in the 00's.



> The Meteor Lake CPU [...] has obvious difficulties in reaching high clock frequencies,

Not sure where that's coming from? The released parts are mobile chips, and the fastest is a 45W TDP unit that boosts at 5.1GHz. AMD's fastest part in that power range (8945HS) reaches 5.2GHz. Apple seems to do just fine at 4GHz with the M3.

I'm guessing you're looking at some numbers for socketed chips with liquid cooling?



The 5.1 GHz Intel Core Ultra 9 185H is the replacement for the previous year's 5.4 GHz Intel Core i9-13900H. Both are 45-W CPUs with big integrated GPUs and almost identical features in the SoC.

No liquid cooling needed for either of them, just standard 14" or 15" laptops without special cooling, or NUC-like small cases, because they do not need discrete GPUs.

Both CPUs have the same microarchitecture of the big cores.

If Intel had been able to match the clock frequencies of their previous generation, they would have done so, because it is embarrassing that Meteor Lake wins only the multi-threaded benchmarks, due to the improved energy efficiency, but loses in the single-threaded benchmarks, due to lower turbo clock frequency, compared to last year's products.

Moreover, Intel could easily have launched a Raptor Lake Refresh variant of the i9-13900H, with a clock frequency increased to 5.6 GHz. They have not done this only to avoid internal competition for Meteor Lake, so they have launched only HX models of Raptor Lake Refresh, which do not compete directly with Meteor Lake (because they need a discrete GPU).

During the last decade, the products made at TSMC on successive generations of their processes saw a continuous increase in clock frequencies.

On the other hand, Intel has had a drop in clock frequency at every manufacturing-process switch: at 14 nm with the first Broadwell models, then at 10 nm with Cannon Lake and Ice Lake (and even Tiger Lake could not reach clock frequencies high enough for desktops), and now with Meteor Lake on the new Intel 4 process.

With 14 nm and 10 nm (now rebranded as Intel 7), Intel succeeded in greatly increasing the maximum clock frequencies after many years of tuning and tweaking. With Meteor Lake this will not happen, because they will move immediately to different, better manufacturing processes.

According to rumors, the desktop variant of Arrow Lake, i.e. Arrow Lake S, will be manufactured at TSMC in order to ensure high-enough clock frequencies, and not with the Intel 20A, which will be used only for the laptop products.

Intel 18A is supposed to be the process that Intel will be able to use for several years, like their previous processes. It remains to be seen how long it will take until Intel can again reach 6.0 GHz, on the Intel 18A process.



That's getting a little convoluted. I still don't see how this substantiates that Intel 4 "has obvious difficulties in reaching high clock frequencies".

Intel is shipping competitive clock frequencies on Intel 4 vs. everyone in the industry except the most recent generation of their own RPL parts, which have the advantage of being up-bins of an evolved and mature process.

That sounds pretty normal to me? New processes launch with conservative binning and as yields improve you can start selling the outliers in volume. And... it seems like you agree, by pointing out that this happened with Intel 7 and 14nm too.

Basically: this sounds like you're trying to spin routine manufacturing practices as a technical problem. Intel bins differently than AMD (and especially Apple, who barely distinguish parts at all), and they always have.



I have also pointed out that while for Intel this repeats their previous two process launches, which is not a good sign, TSMC has never had such problems recently.

One reason why TSMC did not have such problems is that they have made more incremental changes from one process variant to another, avoiding any big risks. The other reason is that Intel has repeatedly acted as if they were unable to estimate from simulations the performance characteristics of their future processes: they have always been caught by surprise by experimental results inferior to predictions, so during the last decade they have always had to switch product lines from plan A to plan B, unlike the previous decade when everything appeared to go as planned.

A normal product replacement strategy is for the new product to match most of the characteristics of the old product that is replaced, but improve on a few of them.

Much too frequently in recent years, new Intel products have improved some characteristics only at the price of making others worse: for example, raising the clock frequency at the price of increased power consumption, increasing the number of cores but removing AVX-512, or, as in Meteor Lake, raising the all-cores-active clock frequency at the price of lowering the few-cores-active clock frequency.

While during the last decade Intel has frequently progressed in the best case by making two steps forward and one step backward, all competitors have marched steadily forwards.



> I have also pointed out that while for Intel this repeats their previous two process launches, which is not a good sign, TSMC has never had such problems recently.

I'll be blunt: you're interpreting a "problem" where none exists. I went back and checked: when Ivy Bridge parts launched the 22nm process (UNDENIABLY the best process in the world at that moment, and by quite a bit) the highest-clocked part from Intel was actually a 4.0 GHz Sandy Bridge SKU, and would be for a full 18 months until the 4960X matched it.

This is just the way Intel ships CPUs. They bin like crazy and ship dozens and dozens of variants. The parts at the highest end need to wait for yields to improve to the point where there's enough volume to sell. That's not a "problem", it's just a manufacturing decision.



What I worry about with Intel is that they have gotten too much into politics: relying on the CHIPS Act and other subsidies, encouraging sanctions on Chinese competitors while relying on full access to the Chinese market for sales.

It is not a good long-term strategy: the winds of politics may change, politicians may set more terms (labour and environment), and foreign market access may become politicized too (US politicians will have to sell chips like they sell airplanes on foreign trips).

So Intel will end up like the old US car makers or Boeing - no longer driven by technological innovation but instead by its relationship to Washington.



"This investment, at a time when … wages war against utter wickedness, a war in which good must defeat evil, is an investment in the right and righteous values that spell progress for humanity"

That is not a partner for creating logical systems. It's very clear their current decisions are political.



That bananas quote is from an Israeli minister.

Imagine what it would do if Intel became strongly associated with one side in the Israel-Palestine conflict. It could really hurt their business.

Usually business leaders are smart enough to stay out of politics.



They are taking sides. That is easily seen in an interview with the CEO, who almost cried talking about the events of October 7th. Intel will give a $5,000 war grant to its Israeli employees. One of Intel's largest fabs is a 20-minute drive from where the massacres occurred.


Do you know if AMD has any presence in Israel? Intel has already sold me multiple garbage-dump products in the past, so if I can minimize my Israel-related purchases I'd prefer to do that.

The Mac is annoying since I think some pieces of their silicon designs come from Israel (storage controller). Can someone correct me if I am wrong on that?



What company doing VLSI, with more than say 500 employees, doesn't have some kind of presence in Israel? That's got to be a short list.


AMD is big in Israel as well. Most of the tech stuff is developed in Israel, a side effect of future-oriented democracy I imagine. Boycotting things is useless virtue signalling of the woke disease. I would suggest going to pro-Palestinian protests and trying to explain to them that raping, kidnapping and mutilating children is not going to bring peace and a country to Palestinians.


It's fine to be future-oriented, to share development; the problem is any religious destiny/racist element.

I'm not sure what you mean by "woke disease," but consumerism involves evaluation.

Oct 7 was horrible, but it didn't come out of nowhere. Sabra and Shatila, for example (Waltz with Bashir being a very good Israeli film on the topic), and the many thousands of people killed, mutilated, or displaced in their usual, unhelpfully disproportionate response.



How is AMD big in Israel?


Apparently they have become strongly associated, since that quote is part of the release for their new plant. It is sickening to me that this kind of hard-right religious zealotry is part of tech companies' decisions. I am avoiding Intel as much as possible now, and I hope others will consider this too.


That quote didn't come out of Intel's mouth.


It is directly associated with the deal they are making. Would you let that quote be used with a deal your company is making? Intel knows full well how this is being spun.


Basically all the big semiconductor companies do R&D in Israel


Yes, but it's all relative. A giant new development is different from a branch office. Though in all cases there is no doubt overlap with their military-industrial complex. But I don't think we will normally see such religious extremism tied to projects, and that should be called out, loudly and clearly, as not ok.


Intel has used political incentives often through its history to great effect. I think it's a much smaller issue than you think. It's been part of their standard game plan for over 30 years. The issue with Boeing is becoming a contract company that contracts out all their work, which is self-defeating and leads to brain drain. E.g. the door lacking bolts, because Boeing doesn't even build its own fuselages anymore and has let its standards fall, wholly depending on contractors with little oversight.


Boeing was culturally taken over by McDonnell Douglas leadership.

What's fascinating albeit somewhat depressing is that it seems something similar happened when McDonnell merged with/took over Douglas as well: https://admiralcloudberg.medium.com/a-legal-and-moral-questi...



Have you read Chip War?

It will challenge your concerns.



Yeah. Too much of the Cold War angle. I think he overstates the role of government/military and underestimates how much the consumer market has driven the process innovations that have made computing cheap and ubiquitous.


If you think the concern over China and Taiwan is understated I think you'd do well to look at how both the US and China are putting insane amounts of resources behind this.


>have made some earnest headway in being an actual fab

In terms of end product - not really. The last 3-4 gens are indistinguishable to the end user. It's a combined effect of marketing failure and really underwhelming gains - when marketing screams "breakthrough gen" but what you get is +2% ST perf for another *Lake, you can't sell it.

They might've built a foundation, and that might be a deliberate tactic to get back into the race; we'll see. But I'm not convinced for now.



Depends who your user is. From the desktop side you're probably not going to notice, because desktop CPU requirements have been stagnant for years; desktop is all about GPU. On the server side, Sapphire Rapids and Emerald Rapids are Intel getting back in the game, and the game is power and market share.

See, there are only 2 or 3 more obvious generations of die shrinks available. Beyond those generations we'll have to innovate some other way, so whoever grabs the fab market for these nodes now gets a longer period to enjoy the fruits of their innovation.

Meanwhile server CPU TDPs are hitting the 400W+ mark and DC owners are looking dubiously at big copper busbars; die shrinks tend to reduce the watts per calculation, so they're appealing. In the current power market, more efficient computing translates into actual savings on your power bill. There's still demand for better processors, even if we are sweating those assets for 5-7 years now.
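
To make that concrete, here's a back-of-the-envelope sketch in Python; every number in it (electricity rate, PUE, the before/after TDPs) is an illustrative assumption, not vendor data:

    # Rough annual electricity cost of one server; all numbers are assumptions.
    HOURS_PER_YEAR = 24 * 365
    PRICE_PER_KWH = 0.15   # USD, assumed rate
    PUE = 1.5              # assumed datacenter overhead (cooling, power delivery)

    def annual_cost(watts: float) -> float:
        """Cost of running a server flat-out for a year at the given draw."""
        return watts / 1000 * HOURS_PER_YEAR * PRICE_PER_KWH * PUE

    old_tdp, new_tdp = 400, 300   # hypothetical TDPs before/after a die shrink
    saving = annual_cost(old_tdp) - annual_cost(new_tdp)
    print(f"~${saving:,.0f} saved per server per year")   # ~$197 with these numbers

Multiply by tens of thousands of servers and a 5-7 year service life and the efficiency argument writes itself.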



Intel is still behind TSMC at this point in terms of raw process efficiency, but the rate of change is quick, and I posit that the products released later this year will have a process-efficiency advantage over AMD's offerings for the first time since AMD abandoned GlobalFoundries.


What you quoted refers to Intel's efforts to act as a fab for external customers.


raptor lake is the same as coffee lake/comet lake? nah


> They've moved up so many process nodes so quickly and have made some earnest headway in being an actual fab.

I'd buy this if they'd actually built a fab, but right now this seems too-little, too-late for a producer's economy.

The rest frankly doesn't matter much. Intel processors are only notable in small sections of the market.

And frankly, as counter-intuitive as this may seem to such an investor-bullish forum, the death knell was the government chip subsidy. I simply can't imagine American government and private enterprise collaborating to produce anything useful in 2024, especially when the federal government has shown such a deep disinterest in holding the private economy accountable to any kind of commitment. Why would Intel bother?



Licking County (next-gen, post-18A) has already broken ground and is in assembly; Magdeburg and Ireland (18A) are also well underway and in production. Arizona's 20A facilities (Fab 52 and Fab 62) have been done for half a year and are already in tape-out. Not sure what is up for debate here; you can't really hide a $5BN infrastructure project from the public.

I think it's safe to say that $80BN+ in subsidies is already well in the process of being deployed. Intel, along with Samsung and TSMC, is heavily subsidized and has been for a very long time. Any government with modest intelligence understands the gravity of having microchip manufacturing secured.



Intel's biggest problem is that a lot of good people left during the previous years of shitty management. Pouring money into R&D certainly helps, but with the wrong people in key positions the efficiency of the investments will be low.


Gelsinger put a $4BN compensation package in effect for securing and retaining talent within his first 6 months of taking the role; brain drain to competitors was one of the first things he noted.

>Intel Poaches Head Apple Silicon Architect Who Led Transition To Arm And M1 Chips. Intel has reacquired the services of Jeff Wilcox, who spearheaded the transition to Arm and M1 chips for Apple. Wilcox will oversee architecture for all Intel system-on-a-chip (SoC) designs

note re-acquired.

Raja Koduri also came back to Intel (from AMD Radeon) and only recently left to dabble in VFX, as opposed to working for a competitor to Intel.

Anton Kaplanyan (father of RTX at Nvidia) is at Intel now.

I think people are not checking LinkedIn when they make the claim that Intel's talent has been drained and there is nobody left at home. Where there is remuneration and opportunity you will find talent. I think it's safe to say no industry experts have written off Intel.

edit: first foundry customer online in New Mexico fab: https://www.intel.com/content/www/us/en/newsroom/news/intel-...



There are a few areas where they are under pressure:

- The Wintel monopoly is losing its relevance now that ARM chips are creeping into the Windows laptop market and now that Apple has proven that ARM is fantastic for low power & high performance solutions. Nobody cares about x86 that much any more. It's lost its shine as the "fastest" thing available.

- The AI & GPU market is where the action is, and Intel is a no-show there so far. It's not about adding AI/GPU features to cheap laptop chips but about high-end workstations and dedicated solutions for large-scale compute. Intel's GPUs lack credibility for this so far. Apple's laptops seem popular with AI researchers lately, and the go-to high-performance solutions seem to be provided by Nvidia.

- Apple has been leading the way with ARM-based, high-performance integrated chips powering phones, laptops, and recently AR/VR. Neither AMD nor Intel has a good answer to that so far. Though AMD at least has a foot in the door, with e.g. Xbox and the Steam Deck depending on its integrated chips, and it still has credible solutions for gaming. Nvidia also has lots of credibility in this space.

- Cloud computing is increasingly shifting to cheap ARM powered hardware. Mostly the transition is pretty seamless. Cost and energy usage are the main drivers here.



> Apple has proven that ARM is fantastic for low power & high performance solutions

Apple has proven that Apple Silicon on TSMC's best process is great. There are no other ARM vendors competing well in that space yet. SOCs that need to compete with Intel and AMD on the same nodes are still stuck at the low margin end of the market.



They will be manufacturing those ARM CPUs


Has that been announced? Or is it more a matter of Intel producing some unannounced product on an unannounced timeline, with a feature set that has yet to be announced, on an architecture that may or may not involve ARM? Intel walking away from x86 would be a big step for them. First, they don't own ARM, and second, all their high-end stuff is x86.




They will be manufacturing them for their customers.

Also, the ARM ISA is just an ISA.

You seem to focus on it too much when it isn't THAT relevant.

An ISA doesn't imply perf/energy characteristics.



> This sudden shock of "we're going to invest everything in R&D and catch back up" was news that a lot of Intel shareholders didn't want to hear. They dumped the stock and the price adjusted in kind.

Why the fuck are shareholders often so short-sighted?

Or do they just genuinely think the R&D investment won't pay off?



They bought the stock on the principle that it was going to pay a consistent 5% dividend every year and weren't looking for moonshots at the cost of that consistent revenue.


Yeah, just look into any investment thread on HN to see how shareholders think; nobody recommends investing in unconventional things. Shareholders are your everyday guy deciding where to put his pension, and that guy picks the safe bet with good returns.


You should be bullish on Intel; they got so many TSMC and Samsung trade secrets through the CHIPS Act that it would be a miracle to mess that up.


How did that work?


This blog

https://semiaccurate.com/

has told the story for more than a decade that Intel has been getting high on its own supply and that the media has been uncritical of the stories it tells.

In particular I think when it comes to the data center they’ve forgotten their roots. They took over the data center in the 1990s because they were producing desktop PCs in such numbers they could afford to get way ahead of the likes of Sun Microsystems, HP, and SGI. Itanium failed out of ignorance and hubris but if they were true evil geniuses they couldn’t have made a better master plan to wipe out most of the competition for the x86 architecture.

Today they take the desktop for granted and make the false claim that their data center business is more significant (not what the financial numbers show). It's highly self-destructive, because when they pander to Amazon, Amazon takes the money they save and spends it on developing Graviton. There is some prestige in making big machines for the national labs, but it is an intellectual black hole because the last thing they want to do is educate anyone else on how to simulate hydrogen bombs in VR.

So we get the puzzle that most of the performance boost customers could be getting comes from SIMD instructions and other "accelerators", but Intel doesn't make a real effort to get this technology working for anyone other than Facebook and the national labs, and, in particular, they drag their feet in getting it available on enough chips that it is worth it for mainstream developers to use this technology.
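
To make the developer-side problem concrete, here's a minimal sketch (x86 Linux only, reading /proc/cpuinfo) of the kind of runtime probe you have to ship before taking an AVX-512 code path; if too few of your users' chips report the flag, the fast path never pays for itself:

    # Minimal runtime feature probe (x86 Linux): which SIMD levels does this CPU expose?
    def cpu_flags() -> set:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    flags = cpu_flags()
    print("AVX2:    ", "avx2" in flags)
    print("AVX-512F:", "avx512f" in flags)  # fused off on many consumer parts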

A while back, IBM had this thing where they might ship you a mainframe with 50 cores and license you to use 30, and if you had a load surge you could call them up and they would turn on another 10 cores at a high price.

I was fooled when I heard this the first time and thought it was smart business but after years of thinking about how to deliver value to customers I realized it’s nothing more than “vice signaling”. It makes them look rapacious and avaricious but really somebody is paying for those 20 cores and if it is not the customer it is the shareholders. It’s not impossible that IBM and/or the customer winds up ahead in the situation but the fact is they paid to make those 20 cores and if those cores are sitting there doing nothing they’re making no value for anyone. If everything was tuned up perfectly they might make a profit by locking them down, but it’s not a given at all that it is going to work out that way.

Similarly, Intel has been hell-bent on fusing off features on their chips, so often you get a desktop part that has a huge die area allocated to AVX features that you're not allowed to use. Either the customer or the shareholders are paying to fabricate a lot of transistors the customer doesn't get to use. It's madness, but except for Charlie Demerjian the whole computer press pretends it is normal.

Apple bailed out on Intel because Intel failed to stick to its roadmap for improving its chips (they're number one, why try harder?) and they are lucky to have customers that accept that a new version of MacOS can drop older chips, which means MacOS benefits from features that were introduced more than ten years ago. Maybe Intel and Microsoft are locked in a deadly embrace, but their saving grace is that every ARM vendor other than Apple has failed to move the needle on ARM performance since 2017, which itself has to be an interesting story that I haven't seen told.



> every ARM vendor other than Apple has failed to move the needle on ARM performance since 2017

You must mean, performance relative to Intel, not absolute performance. Clearly Qualcomm has improved Snapdragon over time as have a number of other Android SOC vendors.

But I wonder if it's even true; have ARM vendors other than Apple failed to move the needle on performance (let's call performance single-thread Geekbench) relative to Intel? If someone is up for tracking down all the numbers I'd read that blog post. :)
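
If anyone does collect the numbers, a sketch of how the comparison could be plotted; the scores.csv file and its columns here are hypothetical, to be filled in by hand:

    # Plot single-thread Geekbench scores over time, one line per vendor.
    # Assumes a hand-collected scores.csv with rows like: 2021-10,Apple,2400
    import csv
    from collections import defaultdict
    import matplotlib.pyplot as plt

    series = defaultdict(list)
    with open("scores.csv") as f:
        for date, vendor, score in csv.reader(f):
            series[vendor].append((date, int(score)))

    for vendor, points in sorted(series.items()):
        points.sort()
        plt.plot([d for d, _ in points], [s for _, s in points], label=vendor)
    plt.ylabel("Geekbench single-thread score")
    plt.legend()
    plt.show()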



> and they are lucky to have customers that accept that a new version of MacOS can drop older chips

Indeed, Apple has shown not just once but multiple times that they'll happily blow up their entire development ecosystem, whether it's software (Mac Finder vs. MacOS X) or hardware (68k, PPC, Intel, and now ARM). I think Intel didn't expect Apple to switch architectures so quickly and thoroughly and got caught flat-footed.



I honestly don't see what you are seeing in terms of Taiwan's future sovereignty. Of course, China would like to do something about Taiwan, especially now with their economy kind of in the dumps and a collapsing real estate bubble. But when you look at the facts of it all, there's absolute ZERO chance China can muster up what it takes to hold their own in such a conflict. Their military isn't up to snuff and they are one broken dam away from a huge mass casualty event.


> there's absolute ZERO chance China can muster up what it takes to hold their own in such a conflict.

However, China is now a full-fledged dictatorship. I'm not sure you can count on them being a rational actor on the world stage.

They can do a lot of damage, but would also get absolutely devastated in return. They are food- and energy-insecure and entirely dependent on exports, after all.



True, but the elite class that's currently profiting from and in control of said country would devastate themselves if they dared. Skepticism about the West's self-inflicted dependency on China is at an all-time high. Terms like "on-" or "friend-shoring" are already coming up now.

You're not wrong; maybe all the scaremongering in the West about China overtaking us got them delusional enough, in a Japanese-nationalist type way, to behave this irrationally, but I highly doubt it. But that can also change pretty quickly if they feel like their back is against the wall; you're not wrong in that regard.



How independent is that elite of Xi? A relatively independent elite is probably a more stable system. But an elite completely subservient to the fearless leader is much more dangerous.


I don’t think Xi is as independent as you believe, but that’s a matter of personal opinion.

I just don't think it's very likely for just about any leader to put themselves into the position you are describing. This is a recurring narrative in Western media, and I'm not here to defend dictators, but I feel like reality is less black and white than that.

Many of the "crazed leaders" we are told are acting irrationally often are not. It's just a very, very different perspective, often a bad one, but regardless.

Let me try to explain what I mean: during the Iraq war, Saddam Hussein was painted as this sort of crazed leader, irrationally deciding to invade Kuwait. But that's not the entire truth. Hussein may have been an evil man, but the way the borders of Iraq were re-drawn, Iraq was completely cut off from any sources of fresh water. As expected, their neighbors cut off their already wonky water supplies and famine followed. One can still think it's not justified to invade Kuwait over this, but there's a clear gain to be had from this "irrational" act. Again, not a statement of personal opinion, just that there IS something to be had. I'm not trying to say that I am certain that Hussein had the prosperity of his people at heart, but I do think that it isn't entirely irrational to acknowledge that every country in human history is 3 missed meals away from revolution. That's not good, even if you are their benevolent god and dictator for lifetime(tm).

Russia "irrationally" invading the Ukraine may seem that way to us, but let's see. Russia's economy is just about entirely dependent on their petrochem industry. Without it, they are broke. The reason why they can still compete in this market is their asset of Soviet infrastructure and industry. A good majority of USSR pipelines run through the Ukraine. I'm not saying it's okay for them to invade, but I can see what they seek to gain and why exactly they fear NATO expansion all that much.

I personally don't see a similar gain to be had from China invading Taiwan, at least right now. They have lots to lose and little to gain. Taiwan's semiconductor industry is useless without Western IP, lithography equipment and customers. There are even emergency plans to destroy Taiwan's fabs in case of invasion. And that's beside the damage done to mainland China itself.

But as I stated, this may very well change when they get more desperate. Hussein fully knew the consequences of screwing with the West's oil supply, but the desperation was too acute.

I just don’t buy irrationality, there’s always something to be had or something to lose. It may be entirely different from our view, but there’s gotta be something.



Russia doesn't fear NATO - see their reaction to Finland joining it. Also, the pipelines were not the reason for the invasion. They were the opposite - a deterrence. As soon as Russia built pipelines that circumvented Ukraine, they decided to invade, thinking that the gas transmission wouldn't be in danger now.


Also Saddam was told by the US ambassador that the US has no opinion on Arab-Arab conflicts...


Yup, there are more examples than I can muster up to write, each more gut-wrenching than the last. The US calling anyone irrational is pretty rich anyways. After all, invoking the use of brainwashing in war after war, instead of accepting the existence of differing beliefs, isn't the pinnacle of rationality either. Neither is kidnapping your own people in an attempt to build your own brand of LSD-based brainwashing. Neither is infiltrating civil rights movements, going so far as attempting to bully MLK into suicide. Neither is spending your people's tax money on 638 foiled assassinations of Castro. Neither is committing false-flag genocides in Vietnam, or PSYOPing civilians into believing they are haunted by the souls of their relatives.

none of those claims are anything but proven, historical facts by the way.

Wanna lose your appetite? The leadership in charge of the described operations in Vietnam gleefully talked about their management genius. They implemented kill quotas.

This list is also anything but exhaustive.



> the way the borders of Iraq were re-drawn, Iraq was completely cut off from any sources of fresh water.

In the geography with which I am familiar, Iraq has two incredibly famous rivers, and the Persian Gulf is actually salt water.



I'm skeptical of your claims about Hussein, but I will admit less familiarity with that topic. Your claims about Russia's motives are bunk.

> Russia "irrationally" invading the Ukraine may seem that way to us, but let’s see.

Invading one of their largest neighbors and ruining their relationship with a nation they had significant cultural exchange and trade with (including many of their weapons factories) is irrational.

But Russia's leaders didn't want a positive neighborly relationship they wanted to conquer Ukraine and restore the empire. Putin has given speeches on this comparing himself to the old conquering czars.

> Russia's economy is just about entirely dependent on their petrochem industry. Without it, they are broke.

True enough

> The reason why they can still compete in this market is their asset of Soviet infrastructure and industry.

Much of the equipment is Western and was installed in the post-Soviet period.

> A good majority of USSR pipelines run through the Ukraine.

Then they probably shouldn't have invaded in 2014? Almost seems like they made a bad, irrational choice. They had other pipelines that bypassed Ukraine, like NS1 and NS2, the latter of which didn't enter service due to the war.

> I'm not saying it's okay for them to invade, but I can see what they seek to gain

Please explain what they tried to gain. Ukraine wouldn't have objected to exports of gas through Ukraine if not for the Russian invasion and they already had pipelines that bypassed Ukraine.

> and why exactly they fear NATO expansion all that much.

They don't fear NATO expansion; they disliked it because it prevented them from conquering or bullying countries with threats of invasion. They've taken troops off the NATO border with Finland (and didn't even invade Finland when Finland joined NATO). Russia acknowledged the right of eastern European nations to join NATO and promised to respect Ukraine's sovereignty and borders.

> I personally don't see a similar gain to be had from China invading Taiwan, at least right now. They have lots to lose and little to gain. Taiwan's semiconductor industry is useless without Western IP, lithography equipment and customers. There are even emergency plans to destroy Taiwan's fabs in case of invasion. And that's beside the damage done to mainland China itself.

The fabs are a red herring, they're largely irrelevant. If China invades (which I hope doesn't happen) it will not be because of any economic gains. There are no possible economic gains that would justify the costs of a war. If they invade it will be for the same reason that Russia did, because of extreme nationalism/revanchism and trying to use that extreme nationalism to maintain popularity among the population.



Problem is, "rational" is not objective. "Rational" is more like "consistent with one's goals (subjective) under one's perception of reality (subjective)".

When you're saying "Putin invaded Ukraine irrationally" you're implicitly projecting your own value system and worldview onto him.

Let's take goals. What do you think Putin's goals are? I don't think it's too fanciful to imagine that welfare of ordinary Russians is less important to him than going down in history as someone who reunited the lost Russian Empire, or even just keeping in power and adored. It's just a fact that the occupation of Crimea was extremely popular and raised his ratings, so why not try the same thing again?

What about the worldview? It is well established that Putin didn't think much of Ukraine's ability to defend itself, having been fed overly positive reports by his servile underlings. Hell, even the Pentagon thought Ukraine would fold, shipping weapons that would work well for guerrilla warfare (Javelins) and dragging their feet on stuff regular armies need (howitzers and shells). Russians did think it'd be a walk in the park; they even had a truck of crowd-control gear in that column attacking Kyiv, thinking they'd need police shields.

So when you put yourself into Putin's shoes, attacking Ukraine Just Makes Sense: a cheap & easy way to boost ratings and raise his profile in history books, what's not to like? It is completely rational - for his goals and his perceived reality.

Sadly, people often fall into the trap of overextending their own worldview/goals onto others, finding a mismatch, and trying to explain that mismatch away with semi-conspiratorial thinking (NATO expansion! Pipelines! Russian speakers!) instead of reevaluating the premise.



I don't accept the subjectivity w.r.t. "perceived reality". Russia's military unreadiness was one of the big reasons I consider the invasion irrational, and I put the blame squarely on Putin, because he could have gotten accurate reports if he weren't such a bad leader. You are responsible for your perceived reality, and part of rationality is acting in a way that makes it match real reality.

(But yeah, clearly his actual goal was to increase his personal prestige. Is that not common knowledge yet?)



I think "economy in the dumps" is a bit too harsh.

China is facing a deflating real estate bubble, but they still managed to grow last year (official sources are disputed, but independent estimates are still positive).



I would refer you to these for the counterpoint to your position [1][2][3].

China is in a world of hurt, but the government is trying desperately to hide how bad it actually is. If this continues for a few more months, it will be an existential situation for their economy.

[1] - https://www.bloomberg.com/news/articles/2024-01-31/china-hom...

[2] - https://www.bloomberg.com/news/articles/2024-01-31/china-sto...

[3] - https://www.piie.com/blogs/realtime-economics/foreign-direct...



It's where the growth is coming from. China's growth (or even just sustenance) isn't coming from a healthy job market and consumer spending. It's mostly fueled by SOEs and prefectures going into debt to keep on investing; many local administrations have found they can get around debt limits by forming state-owned special purpose vehicles that aren't bound by those limits. That's not good at all. There's a reason we are seeing tons of novel Chinese car brands being pushed here in Europe: they massively overproduced and cannot sell them in their own market anymore. It's really not looking great atm.

edit: one should also keep in mind that the Chinese real estate market is entirely different in its importance to the population's wealth. "Buying" real estate is pretty much the only sanctioned market in which to invest your earnings. They still pretend to be communist, after all.



> they are one broken dam away from a huge mass casualty event.

Are there any dam-having countries for which this isn't the case?



None, or VERY few, are even remotely close to the impact a potential breach of the Three Gorges Dam would have. [1] Seriously, it's worth reading up on; it's genuinely hard to overstate.

[1]: https://www.ispsw.com/wp-content/uploads/2020/09/718_Lin.pdf

"In this case, the Three Gorges Dam may become a military target. But if this happens, it would be devastating to China as 400 million people live downstream, as well as the majority of the PLA's reserve forces that are located midstream and downstream of the Yangtze River."



It's grossly overstated because TW doesn't have the type or numbers of ordnance to structurally damage a gravity dam the size of Three Gorges. And realistically they won't, because the amount of conventional munitions needed is staggering, more than TW can muster in a retaliatory strike, unless it's a coordinated preemptive strike, which TW won't do since it's suicide by war crime.

The entire Three Gorges meme originated from Falun Gong / Epoch Times propaganda, including in the linked article (an interview with Simone Gao) and all the dumb Google Maps photos of a deformed dam due to lens distortion. PRC planners there aren't concerned about a dam breach, but about general infra terrorism.

The one piece of infra PRC planners are concerned about is the coastal nuclear plants under construction, which are a much better ordnance trade for TW anyway, and just as much of a war crime.



"This article first appeared in The Times of Israel on September 11, 2020."

Also what does "400 million people live downstream" even mean? There's ten million people living downstream of this dam https://en.wikipedia.org/wiki/Federal_Dam_(Troy), and ten million more living downstream of the various Mississippi dams and so on.



Intel is recipient #1 of CHIPS and similar EU initiatives - and the government may pressure Nvidia and other US companies (e.g. Apple) to move their procurement domestically, Intel being the only player outside of Taiwan and South Korea with the capital and capacity to supply that.


> I think the market (and analysts like this) are all throwing the towel

1) Intel is up 100% from ten years ago, when it was at $23. All that despite revenue being flat/negative, inflation and costs rising, and margins collapsing.

2) Intel is up 60% in the last 12 months alone.

Doesn't look to me like they're throwing in the towel at all.



I appreciate the deep cut. I definitely do not follow companies internally closely enough to see this coming.

> (Samsung or TSMC), both within a 500 mile radius circle in the South China Sea.

Within a 500 mile radius of a great power competitor, perhaps. The closest points on mainland Taiwan and Korea are 700 miles apart. Fabs about 1000 miles, by my loose reckoning.



A 500 mile radius circle has a diameter of 1000 miles, so you're both correct.


Ha, silly of me, quite right. Not exactly what comes to mind when drawing circles to include a city 2/3 south down Taiwan, and 2/3 north up RoK, but fair point.


>What happened over the 8 years prior was hedge funds and banks had saddled up on Intel stock which was paying healthy dividends due to cost cutting and "coasting"

Not clear what the role of activist hedge funds is here, but Intel's top shareholders are mutual funds like Vanguard, which are part of many people's retirement investments. If an activist hedge fund got to run the show, it means they could get these passive shareholders on their side or to abstain. It would have meant those funds, along with pension funds, who should have been in a position to push back against short-term thinking, didn't push back. These funds should really be run much more competently given their outsized influence, but the incentives are not there.



There's probably no need to imagine these conspiracy-like machinations of shareholders. Intel fucked up bad, and process development is a certified crazy train to la-la land.

(dropping molten tin 1000 times a second and then shooting it with a laser, just to get a lamp that can bless you with the hard light you need for your fancy, fine, few-nanometers-thin shadows? sure, why not, but don't forget to shoot the plasma ball with a weaker pulse to nudge it into the shape of a lens, cheerio.

and you know that all the other parts are similarly sci-fi sounding.

and their middle management got greedy and they were bleeding talent for a decade.)



Everyone acts as though Intel should have seen everything coming. Where was AMD? Was AMD really competitive before Ryzen? Nope. The Core 2 series blew them out of the water. Was ARM really competitive until recently? Nope. Intel crushed them. The problem for Intel is the inertia of laziness due to a lack of competition. I wouldn't count them out just yet, however. The company's first true swing at a modern GPU was actually good for a first attempt. Their recent CPUs, while not quite as good as Ryzen, aren't exactly uncompetitive. Their foundry business faltered because they were trying a few things never done before, not because they were incompetent. Also, 20A and 18A are coming along. I am not an Intel fan at all. I run AMD and ARM. My dislike isn't technological though; it's just that I hate their underhanded business practices.


The curse of having weak enemies is that you become complacent.

You're right: AMD wasn't competitive for an incredibly long time and ARM wasn't really meaningful for a long time. That's the perfect situation for some MBAs to come into. You start thinking that you're wasting money on R&D. Why create something 30% better this year when 10% better will cost a lot less and your competitors are so far behind that it doesn't matter?

It's not that Intel should have seen AMD coming or should have seen ARM coming. It's that Intel should have understood that just because you have weak enemies today doesn't mean that you have an unassailable castle. Intel should have been smart enough to understand that backing off of R&D would mean giving up the moat they'd created. Even if it looked like no one was coming for their crown at the moment, you need to understand that disinvestment doesn't get rewarded over the long-run.

Intel should have understood that trying to be cheap about R&D and extract as much money from customers wasn't a long-term strategy. It wasn't the strategy that built them into the dominant Intel we knew. It wouldn't keep them as that dominant Intel.



> It's that Intel should have understood that just because you have weak enemies today doesn't mean that you have an unassailable castle.

Their third employee, who later became their third CEO and guided Intel through the memory-to-processor transition, literally coined the phrase and wrote a book called "Only the Paranoid Survive" [1]. It's inexcusable that management degraded that much.

[1] https://en.wikipedia.org/wiki/Andrew_Grove#Only_the_Paranoid...



Yes, I agree. However, I don’t necessarily see this book title as an imperative to innovate. Patent trolling can also be a way to deal with competitors.

After all, Apple and ARM came from the idea of building better end-user products around softer factors than sheer CPU power. Since Intel's products are neither highly integrated phones nor assembled computers, Intel had no direct stake.

It is complex.



Apple came from recreational "there is now a 10 times cheaper CPU than anything else and I can afford to build my video terminal into a real computer in my bedroom" and "maybe we can actually sell it?". [1]

ARM literally came from “we need a much better and faster processor” and “how hard can this be?” [2]

[1] https://en.wikipedia.org/wiki/History_of_Apple_Inc.#1971%E2%...

[2] https://en.wikipedia.org/wiki/ARM_architecture_family



To be fair, they should have seen Ryzen coming; any long-term AMD user knew years before Ryzen landed that it was going to be a good core, because AMD were very vocal about how badly wrong they bet with Bulldozer (the previous core family).

AMD bet BIG on the software industry leaning heavily on massive thread counts over high-throughput, single-threaded usage... But it never happened, so the cores tanked.

It was never a secret WHY that generation of core sucked, and it was relatively clear what AMD needed to do to fix the problem, and they were VERY vocal about "doing the thing" once it became clear their bet wasn't paying off.



(I'm curious about this story, as I am unfamiliar with it.)

Why did that generation of core (Bulldozer) suck?

What was it that AMD needed to do to fix the problem?

(Links to relevant stories would be sufficient for me!)



Chips and Cheese has probably the most in depth publicly available dive for the tech reasons why Bulldozer was the way it was:

https://chipsandcheese.com/2023/01/22/bulldozer-amds-crash-m...

https://chipsandcheese.com/2023/01/24/bulldozer-amds-crash-m...

---

From a consumer perspective, Bulldozer and revisions as compared to Skylake and revisions were:

+ comparable on highly multi-threaded loads

+ cheaper

- significantly behind on less multi-threaded loads

- had 1 set of FPUs per 2 cores, so workloads with lots of floating point calculations were also weaker

- Most intensive consumer software was still focused on a single thread or a very small number of threads (this was also a problem for Intel in trying to get people to buy more expensive i7s/i9s over i5s in those days)



Bulldozer was contemporary with Sandy Bridge, not Skylake. Piledriver competed with Ivy Bridge and Haswell. The next Construction cores (Steamroller and Excavator) were only found in APUs, and not in desktop FX parts. Around the time of Skylake, AMD didn't have a meaningful presence in the desktop space. All they were selling was quad-core APUs based on minor revisions of Bulldozer, and the highly outdated FX-x3xx Piledrivers.


The Bulldozer design had a few main issues.

1. Bulldozer had a very long pipeline, akin to a Pentium 4's. This allows for high clocks but comparatively little work done per cycle vs. the competition. Since clocks have a ceiling around 5 GHz, they could never push the clocks high enough to compete with Intel.

2. They used an odd core design with 1 FPU for every 2 integer units, instead of the normal 1:1 we have seen on every x86 since the i486. This leads to very weak FPU performance, which many professional applications need. Conversely, it allowed very competitive performance on highly threaded integer applications like rendering. This decision was probably made under the assumption that APUs would integrate their GPUs better and software would be written with that in mind, since a GPU easily outdoes a CPU's FPU but requires more programming. This didn't come to be.

3. They were stuck using GlobalFoundries due to contracts from when they spun it off, requiring AMD to use GloFo. This became an anchor as GloFo fell behind market competitors like TSMC, leaving AMD stuck on 32nm for a long while, until GloFo got 14nm and AMD eventually got out of the contract between Zen 1 and 2.

Bonus: many IC designers have bemoaned how much of Bulldozer's design was automated with few hand modifications, which tends to lead to a less optimized design.



There's been lots written about this but this is my opinion.

Bulldozer seemed to be designed under the assumption that heavy floating point work would be done on the GPU (APU), which all early Construction cores had built in. But no one is going to rewrite all of their software to take advantage of an iGPU that isn't present in existing CPUs and isn't present in the majority of CPUs (Intel), so it sort of smelt like Intel's Itanic moment, only worse.

I think they were desperate to see some near term return on the money they spent on buying ATI. ATI wasn't a bad idea for a purchase but they seemed to heavily overpay for it which probably really clouded management's judgement.



I thought it was a bad idea when I first read of it. It reminded me of Intel's Netburst (Pentium 4) architecture.


They've seen all that. You don't have to have an MBA or an MIT degree to plot the projected performance of your or your competitors' chips.

It was process failures. Their fabs couldn't fab the designs. Tiger Lake was what, 4 years late?



This sounds like Google. Some bean counter is firing people left and right, and somehow they think that's going to save them from the fact that AI answers destroy their business model. They need more people finding solutions, not fewer.


> Was ARM really competitive until recently? Nope. Intel crushed them.

Intel never "crushed" ARM. Intel completely failed to develop a mobile processor, and ARM has a massive market share there.

ARM has always beaten the crap out of Intel at performance per watt, which turned out to be extremely important both in mobile and data center scale.



I got curious about how ARM is doing in the data center and found this:

>Arm now claims to hold a 10.1% share of the cloud computing market, although that's primarily due to Amazon and its increasing use of homegrown Arm chips. According to TrendForce, Amazon Web Services (AWS) was using its custom Graviton chips in 15% of all server deployments in 2021.

https://www.fool.com/investing/2023/09/23/arm-holdings-data-...



ARM would be even more popular in the datacenter if getting access to Ampere CPUs were easier.

I can get a top-of-the-line Xeon Gold basically next day, with incredibly high-quality out-of-band management, from a reputable server provider (HP, Dell).

Ampere? Give it 6 months, €5,000 and maybe you can get one, from Gigabyte. Not known for server quality.

(Yes, I'm salty. I have 4 of these CPUs and it took a really long time to get them, while they cost just as much as AMD EPYC Milans.)



I'm using Ampere-powered servers on Oracle Cloud and boy, they're snappy, even with the virtualization layer on top.

Amazon has its own ARM CPUs on AWS, and you can get them on demand, too.

Xeons and EPYCs are great for "big loads"; however, some supercomputer centers have also started to install "experimental" ARM partitions.

The future is bright not because Intel is floundering, but because there'll be at least three big CPU producers (ARM, AMD and Intel).

Also, don't have prejudices about "brands". Most motherboard brands can design server-class hardware if they wish. They're just making different trade-offs because of the market they're in.

I used servers which randomly fried parts of their motherboard under "real" load. Coming in one morning and having no connectivity, because a top-of-the-line 2-port gigabit onboard Ethernet fried itself on a top-of-the-line flagship server, is funny in its own way.



Since roughly the first year of COVID, supply generally has been quite bad. Yes, I can get _some_ Xeon or EPYC from HPE quickly, but if I care about specific specs it's also a several-month-long wait. For midsized servers (up to about 100 total threads) AMD still doesn't really have competition if you look at price, performance and power - I'm currently waiting for such a machine; the Intel option would've been 30% more expensive at worse specs.


ARM server CPUs are great, I'd move all of our stuff to them once more competition happens. Give it a few more years.


HPE has an Ampere server line that is quite good, especially considering TCO, density, and the IO it can pack. But yeah, you'll have to fork over some cash.


You can get them on Oracle Cloud servers for whatever you choose to do, last I looked and used them.


> Amazon Web Services (AWS) was using its custom Graviton chips in 15% of all server deployments in 2021

I'm guessing this has increased since 2021. I've moved the majority of our AWS workloads to ARM because of the price savings (it mostly 'just works'). If companies are starting to tighten their belts, this could accelerate ARM adoption even more.
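
For anyone scoping a similar migration, a small sketch of the first step - seeing which Graviton (arm64) instance types your region even offers. Assumes boto3 and configured AWS credentials; the filter name is the documented EC2 one:

    # List EC2 instance types in a region that support arm64 (Graviton).
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    arm_types = []
    for page in ec2.get_paginator("describe_instance_types").paginate(
        Filters=[{"Name": "processor-info.supported-architecture",
                  "Values": ["arm64"]}]
    ):
        arm_types += [t["InstanceType"] for t in page["InstanceTypes"]]

    print(f"{len(arm_types)} arm64-capable instance types, e.g.:")
    print(sorted(arm_types)[:10])  # expect t4g.*, m6g/m7g.*, c6g/c7g.* families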



It'll probably get there, but it'll probably be a slow migration. After all, there's no point in tossing out all the Xeons that still have a few years left in them. But I believe Google is now also talking about, or is already working on, their own custom chip similar to Graviton. [1]

[1] https://www.theregister.com/2023/02/14/google_prepares_its_o...



Oracle, and even Microsoft, have decently large arm64 deployments now too (compared to nothing).


The Amazon Graviton started by using stock ARM A72 cores.


> Intel never "crushed" ARM.

They certainly tried selling their chips below cost to move into markets ARM dominated, but "contra revenue" couldn't save them.

> Intel Corp.’s Contra-Revenue Strategy Was a Huge Waste of Money

https://www.fool.com/investing/general/2016/04/21/intel-corp...



The name they chose, to try to make it not sound like anti-competitive practices, just makes it sound like Iran-Contra.


ARM has been really competitive since, well, 2007, when the first iPhone hit the market, and when Android followed in 2008 - that is, the last 15 years or so. Not noticing a hugely growing segment that was bringing insane reams of cash to Apple, Qualcomm, Samsung, and others is not something I could call astute.

Pretty certainly, Intel is improving, and of course should not be written off. But they did get themselves into a hole to dig out from, and not just because the 5nm process was really hard to get working.



> Not noticing a hugely growing segment that was bringing insane reams of cash to Apple, and Qualcomm, Samsung and others involved is not something I could call astute.

And it's not like they didn't notice either. Apple literally asked Intel to supply the chips for the first iPhone, but the Intel CEO at the time "didn't see it".

https://www.theverge.com/2013/5/16/4337954/intel-could-have-...



I agree mobile was a miss, but the linked article actually quotes Intel's former CEO making a pretty good argument for why they missed:

> "The thing you have to remember is that this was before the iPhone was introduced and no one knew what the iPhone would do... At the end of the day, there was a chip that they were interested in that they wanted to pay a certain price for and not a nickel more and that price was below our forecasted cost. I couldn't see it. It wasn't one of these things you can make up on volume. And in hindsight, the forecasted cost was wrong and the volume was 100x what anyone thought."

In that circumstance, I think most people would have made the same decision.



Kind of speaks to how Intel was not competitive in the space at all. If it was truly that the marginal cost per part was higher than the requested price, either Apple was asking for the impossible and settled for a worse deal with an ARM chip, or Intel did not have similar capabilities.


I'm not so sure. He made a choice purely on "will it make money now," not "let's take a chance and see if this pays off big, and if not we'll lose a little money."

It's not like they couldn't afford it, and taking chances is important.



Ok, but you have to view this through the lens of what was on the market at the time and what kind of expectations Intel likely would have had. I can't imagine that Apple told Intel what they were planning. Therefore, it would have been reasonable to look around at the state of what existed at the time (basically, iPods, flip phones, and the various struggling efforts that were trying to become smartphones at the time) and conclude that none of that was going to amount to anything big.

I'm pretty sure most people here panned the iPhone after it came out, so it's not as if anyone would have predicted it prior to even being told it existed.



And that statement is hilarious in light of the many failed efforts (e.g., subsidies for netbooks and their embedded x86 chips) where they lit billions on fire attempting to sway the market.

FWIW I don't buy his explanation anyway. Intel at the time had zero desire to be a fab. Their heart was not in it. They wanted to own all the IP for fat margins. They have yet to prove anything about that has changed despite the noise they repeatedly make about taking on fab customers.



Intel also had a later chance when Apple tried to get off the Qualcomm percent per handset model. This was far after the original iPhone. Apple also got sued for allegedly sharing proprietary Qualcomm trade secrets with Intel. And Intel still couldn’t pull it off despite all these tailwinds.


> In that circumstance, I think most people would have made the same decision.

In that circumstance, I think most MBAs would have made the same decision.

Fixed that for you



"We couldn't figure out how much a chip design would cost to make" is pretty damning, in my book.


That was very lucky for Apple though. Nokia made deals with Intel to provide the CPU for upcoming phone models, and had to scramble to redesign them when it became clear Intel was unable to deliver.


Not quite true - the Intel projects were at a pretty early stage when Elop took over and the whole Microsoft thing happened, and the projects got canned as part of the cleanup and the move to Windows for the phones.

The CPUs were indeed horrible, and would've caused a lot of pain if the projects had actually continued. (Source: I was working on the software side for the early Nokia Intel prototypes.)



Thanks for the insights. The N9 was originally rumored to use Intel, and it was still being speculated [1] half a year before the release. Was that also switched by Elop as part of the whole lineup change, or were these rumors unfounded in the first place?

[1] https://www.gottabemobile.com/meego-powered-nokia-n9-to-laun...



Pretty much all rumors at that time were very entertainingly wrong.

I think at the time that article got published, we didn't even have the Intel devboards distributed (that is, a screen and macroboards - way before it starts looking like a phone). We did have some Intel handsets from a third party for meego work, but that was pretty much just proof of concept - nobody ever really bothered looking into trying to get the modem working, for example.

What became the N9 was always planned as an ARM-based device - the exact name and specs changed a few times, but it was still pretty much developed as a maemo device, just using the meego name for branding, plus having some of the APIs (mainly Qt Mobility and QML) compatible with what was scheduled to become meego. The QML stuff was a late addition there - originally it was supposed to launch with MTF, and the device was a wild mix of both when it launched, with QML having noticeable issues in many areas.

Development on what was supposed to be proper meego (the cooperation with Intel) happened with only a very small team (which I was part of) at that time, and was starting to slowly ramp up - but the massive developer effort from Nokia to actually make a "true" meego phone would've started somewhere in mid-2011.



Very interesting, thanks for setting the record straight!


> ARM has been really competitive since

And a few years prior to that Intel made the most competitive ARM chips (StrongARM). Chances are that an Intel chip would have powered the iPhone had they not scrapped their ARM division due to “reasons”



Intel had purchased/gotten StrongARM from DEC.

DEC had started developing ARM chips as they concluded it was a bad idea to try and scale down their alpha chips to be more energy efficient.

Then, after the success of these ARM chips in the BlackBerry, most of the Palm PDAs, MP3 players, and HTC smartphones, Intel sold it off so it could focus on trying to make its big chips more energy-efficient - making the mistake DEC avoided.

The iPhone was a defining moment, but at the time it was completely obvious that smartphones would be a thing; it's just that people thought the breakthrough product would come from Nokia or Sony Ericsson (who were using ARM SoCs from TI and Qualcomm respectively). Selling off the ARM division would not have been my priority.

So it's a string of unforced errors. Nevertheless, Intel remains an ARM licensee - they didn't give that up when selling StrongARM - so it seems some people still saw the future.



Sounds like the classic Innovator's Dilemma. There wasn't a lot of margin in the ARM chips, so Intel doubled down on their high-margin server and desktop chips. ARM took over the low end in portable devices and is now challenging in the datacenter.


> DEC had started developing ARM chips as they concluded it was a bad idea to try and scale down their alpha chips to be more energy efficient.

I thought this was interesting enough to track down more of the backstory here, and found this fascinating article:

https://archive.computerhistory.org/resources/access/text/20...

Page 60, hard breaks added for readability:

Baum: Apple owned a big chunk of it, but when Apple was really having hard times, they sold off their chunk at quite a profit, but they sold off the chunk. And then-- oh, while Newton was going on, some people from DEC came to visit, and they said, “Hey, we were looking at doing a low power Alpha and decided that just couldn’t be done, and then looked at the ARM. We think we can make an ARM which is really low power, really high performance, really tiny, and cheap, and we can do it in a year. Would you use that in your Newton?” Cause, you know, we were using ARMs in the Newton, and we all kind of went, “Phhht, yeah. You can’t do it, but, yeah, if you could we’d use it.”

That was the basis of StrongARM, which became a very successful business for DEC. And then DEC sued Intel. Well, I worked on the StrongARM 1500, which was a very interesting product. It was an ARM and a DSP kind of highly combined. It was supposed to be like video processing using set top boxes, and things like that. And then we finished that project and our group in Palo Alto, we were just gonna start an Alpha project.

And just then it was announced that DEC was-- no. No. Intel, at that time, Intel, DEC sued Intel for patent violations, didn’t go to them and say, “Hey, pay up or stop using it.” They just sued them. Intel was completely taken by surprise. There was a settlement. The settlement was you have to buy our Microelectronics Division and pay us a whole pile of money, and everything will go away.

So they sold the Microelectronics Division, which we were part of, except for the Alpha Design Group, 'cause they didn’t think that they could sell that to Intel and have the SEC approve, 'cause the two can conflict. So I went away on vacation not knowing whether I would be coming back and working for Intel, or coming back working for DEC. And it turned out they decided to keep the Alpha Design Group, so I was still working for DEC. Except the reason for the lawsuit was Compaq wanted to buy DEC, but didn’t want to buy ‘em with this Fab and Microelectronics Division. So by doing this, they got rid of the Microelectronics Division, and now they could sell themselves to Compaq.



Apple has been working with Arm since 1987, when work on the Apple Newton started: https://www.cpushack.com/2010/10/26/how-the-newton-and-arm-s...


> Where was AMD?

Trying to breathe as Intel was pushing their head under water.

We saw AMD come back after their lawsuit against Intel got through and Intel had to stop paying everyone to not use AMD.



Kind of, but not really in laptops :( they're doing great on handhelds though.


I think they're doing better (disclaimer: writing this from a Ryzen laptop), and their latest chips have better thermals and power consumption, with a decent reputation compared to 10 years ago, for instance. But yes, it's a long road ahead.


> Was AMD really competitive before Ryzen?

No, but ARM should've rung many bells.

Intel poured billions into mobile.

They didn't understand that the future, from smartphones to servers, was about power efficiency and scale.

Eventually their lack of power efficiency made them lose ground in all their core businesses. I hope they get this back, and not just by competing on manufacturing but on architecture too.



> Was ARM really competitive until recently?

The writing was on the wall 6 years ago; Intel was not doing well in mobile, and it was only a matter of time until that tech improved - same as Intel unseating the datacenter chips before it. Ryzen I will give you was a surprise, but in a healthy competitive market, "the competition outengineered us this time" _should_ be a potential outcome.

IMO the interesting question is basically whether Intel could have done anything differently. Clayton Christensen's sustaining-vs-disruptive innovation model is well known in industry, and ARM slowly moving up the value chain is obvious in that framework. Stratechery says they should have opened up their fabs to competitors, but how does that work?



Previously, every time a competitor managed to out-engineer Intel, Intel crushed them, either by having a spare process advantage they could use to brute-force performance, or by locking the competition out of large swathes of the market through illegal deals.


> Nope. Intel crushed them.

The problem is that Intel has had a defensive strategy for a long time. Yes, they crushed many attempts to breach the x86 moat but failed completely and then gave up attempts to reach beyond that moat. Mobile, foundry, GPUs etc have all seen half-hearted or doomed attempts (plus some bizarre attempts to diversify - McAfee!).

I think that, as Ben essentially says, they put too much faith in never-ending process leadership and the ongoing supremacy of x86. And when that came to an end the moat was dry.



Part of the problem is Intel is addicted to huge margins. Many of the areas they have tried to enter are almost commodity products in comparison, so it would take some strong leadership to convince everyone to back off those margins for the sake of diversification.

They should have been worried about their process leadership for a long time. IIRC even the vaunted 14nm that they ended up living on for so long was pretty late. That would have had me making backup plans for 10nm, but it looked more like leadership just went back to the denial well for years instead. It seemed to me like they didn't start backport designs until after Zen 1 launched.



100%! Also in a way reversing what Moore/Grove did when they abandoned commodity memories. Such a hard thing to do.


On the contrary, they tried to pivot so many times and enter different markets; they bought countless small companies (and some big ones), but nothing seemed to stick except the core CPU and datacenter businesses. IIRC Mobileye is one somewhat successful venture.


Except they weren't real pivots. Mobile and GPU were x86-centric; foundry was half-hearted, without buying into what needed to be done. Buying a company is the easy bit.


> Where was AMD?

Crushed by Intel's illegal anticompetitive antics?



Well, yes.

Look at Japan generally and Toyota specifically. In Japan, the best award you can get for having an outstanding company in terms of profit, topline, quality, free cash, people, and all the good measures is the Deming Prize. Deming was our guy (an American), but we Americans in management didn't take him seriously enough.

The Japanese, to their credit, did: they ran with it and made it into their own thing in a good way. The Japanese took 30% of the US auto market in our own backyard. Customers knew Hondas and Toyotas cost more but were worth every dollar. They resold better too. (Yes, some noise about direct government investment in Japanese companies was a factor too, but not the chief factor in the long run.)

We Americans got it "explained to us." We thought we were handling it. Nah, it was BS. But we eventually got our act together. Our Deming award is the Malcolm Baldrige award.

Today, unfortunately, the Japanese economy isn't rocking like it was in the 80s and early 90s. And Toyota isn't the towering example of quality it once was. I think -- if my facts are correct -- they went too McDonald's and got caught up in cutting costs in their materials and supply chain, with bad effects overall.

So things ebb and flow.

The key thing: is management, through action or inaction, allowing stupid inbred company culture to make crappy products? Do they know their customers, etc.? Hell, mistakes, even screw-ups, are not life-ending for companies the size of Intel. But recurring stupidity is. A lot of the time the good guys allow themselves to rot from the inside out. So when is enough enough already?



Intel’s problem was the cultural and structural issues in their organization, plus their decision to bet on strong OEM partner relationships to beat competition. This weakness would prevent them from being ready for any serious threat and is what they should’ve seen coming.


Intel's flaw was trying to push DUV to 10nm (otherwise known as Intel 7).

Had Intel adopted EUV, with its molten-tin light source, the cycle of failure would have been curtailed.

Hats off to SMIC for the DUV 7nm which they produced so quickly. They likely saw quite a bit of failed effort.

And before we discount ARM, we should remember that Acorn produced a 32-bit CPU with a 25k transistor count. The 80386 was years later, with 275k transistors.

Intel should have bought Acorn, not Olivetti.

That's a lot of mistakes, not even counting Itanium.



Acorn's original ARM chip was impressive, but it didn't really capture much market share. The first ARM CPU competed against the 286, and did win. The 386 was a big deal though. First, software was very expensive at the time, and the 386 allowed people to keep their investments. Second, it really was a powerful chip: it delivered 11 MIPS vs ARM3's 13, but the 486 achieved 54 MIPS while ARM6 only hit 28. It's worth noting that the 386 also used 32-bit memory addressing and a 32-bit bus, while ARM was 26-bit addressing with a 16-bit bus.


At the same time, it had unquestioned performance dominance until ARM made the decision to go after embedded.

ARM would have been much more influential under Intel, rather than pursuing the i960 or the iAPX 432.

Just imagine Intel ARM Archimedes. It would have crushed the IBM PS/2.

Whoops.

Seriously, even DEC was smart enough.

https://en.m.wikipedia.org/wiki/StrongARM



> Intel should have bought Acorn, not Olivetti.

Intel had StrongARM though. IIRC they made the best ARM CPUs in the early 2000s and were designing their own cores. Then Intel decided to get rid of it, because obviously they were just wasting money and could design a better x86 mobile chip…



> Acorn produced a 32-bit CPU with a 25k transistor count. The 80386 was years later, with 275k transistors

Coincidentally, ARM1 and 80386 were both introduced in 1985. I'm a big fan of the ARM1 but I should point out that the 386 is at a different level, designed for multitasking operating systems and including a memory management unit for paging.



N7, TSMC's competitor to Intel 7, does not use EUV either.


There are multiple versions of N7. The N7 and N7P are DUV while the N7+ is EUV.


The problem is this... The cash cow is datacenters, and especially top-of-the-line products where there is no competition.

The fastest single-core and multi-core x86 CPUs that money can buy will go to databases and similar vertically scaled systems.

That's where you can put up the most extreme margins. It's "winner takes all the margins". Being somewhat competitive but mostly a bit worse is the worst business position. Also...

I put money on AMD when they were taking the crown.

Thank you for this wakeup call; I'll watch closely whether Intel can deliver on this and take it back, and I'll have to adjust accordingly.



They should have known it was coming because of how many people they were losing to AMD, but there is a blindness in big corps when management decides they are the domain experts and the workers are replaceable.


AMD had Jim Keller join them, that should have been a wake up call for Intel.


Only the Paranoid Survive


Intel's recent CPUs are not as good as Ryzen? That hasn't been correct for a few years now.


> Core 2 series blew them out of the water.

And Sandy Bridge all but assured AMD wouldn't be relevant for the better part of a decade.

It's easy to forget just how fast Sandy Bridge was when it came out; over 12 years later and it can still hold its own as far as raw performance is concerned.



If your process nodes are going way over schedule, it shouldn't take much intelligence to realize that TSMC is catching up fast.

You should probably have some intel (haha) on ARM and AMD chips. They didn't care.

Why? It's monopoly business tactics, except they didn't realize they weren't Microsoft.

It's not like this was overnight. Intel should have watched AMD like a hawk after that slimeball Ruiz was deposed and a real CEO put in charge.

And the Mac chips have been out, what, two years now, and the Apple processors on the iPhones at least 10?

Come on. This is apocalyptic scale incompetence.



Microsoft also got its lunch eaten during this time by mobile. They have a new CEO who's had to work hard to reshape the place as a services company.


>>> Notice what is happening here: TSMC, unlike its historical pattern, is not keeping (all of its) 5nm capacity to make low-cost high-margin chips in fully-depreciated fabs; rather, it is going to repurpose some amount of equipment — probably as much as it can manage — to 3nm, which will allow it to expand its capacity without a commensurate increase in capital costs. This will both increase the profitability of 3nm and also recognizes the reality that is afflicting TSMC’s 7nm node: there is an increasingly large gap between the leading edge and “good enough” nodes for the vast majority of use cases.

My understanding is that 5nm has been and continues to be "problematic" in terms of yield. The move to 3nm seems to not be afflicted by as many issues. There is also a massive drive to get more volume (and a die shrink will do that), due to the demands of all things ML.

I suspect that TSMC's move here is a bit more nuanced than the (valid) point that the article is making on this step...



5nm yields are good at the moment. It's even fully automotive-qualified, which is a testament to its yield. But the performance advantage versus cost doesn't justify moving from 7nm for a lot of designs, so tech adoption is getting stickier. For high-performance designs like Apple CPUs, going to the cutting edge is a given, so once 3nm became available, 5nm lost its appeal. This is new territory for TSMC, but I think they handled it well. Just last year they were gearing up a lot of 5nm capacity in anticipation of people moving from 12nm and 7nm to 5nm. It quickly became clear that this wasn't happening, so they moved some of that capacity to 3nm, and some is going back to 7nm and 6nm (shrunk 7nm), I think. They are also cautious about buying the newest equipment from ASML, unlike Intel and Samsung. This seems to be playing well for TSMC.

I think TSMC learned more from Intel's downfall than Intel did. I don't see any industry traction for IFS. They can research any new technology they want, but without wafer orders it's a recipe for a quick cash burn.



> some is going back to 7nm and 6nm (shrunk 7nm) I think

You can also see the lasting popularity of 7nm-class nodes in consumer products. For example, RDNA3 uses 5nm for the core parts (GCD), but the peripheral parts (the memory chiplets/MCDs) are built on 6nm, and the monolithic low-end parts (RX 7600) are even fully built on 6nm.



With the US re-industrialising, semiconductors are a strategic priority, so Intel will be at the heart of the effort. They're going to be a major beneficiary, here and in Europe. They'll be a significant player.

They got lazy and sat on their laurels when AMD was struggling, and they didn't view ARM as a threat. TSMC was probably a joke to them... until everyone brought out killer products and Intel had no response. They could have been way ahead of the pack by now, but they decided to harvest the market instead of innovating aggressively. Right now they're worth less than $200bn, which is less than half of Broadcom or TSMC, 30% less than AMD, and 10% of Nvidia. Is it intrinsically worth that little? Probably not; I think it's a buy at this price.



Intel's 2010 $7.6B purchase of McAfee was a sign that Intel didn't know what it was doing. In the CEO's words: the future of chips is security on the chip. I was like, no, no it's not! I wanted them to get into mobile and GPUs at the time. Nvidia's market cap was about $9B then. I know it would have been a larger pill to swallow, and they likely would have had to bid a bit more than $9B, but I thought it was possible for Intel at the time.


> The future of chips is security on the chip. I was like no, no its not!

Putting aside whether the statement is considered true or not, buying McAfee under the guise of the kind of security meant when talking about silicon is... weird, to say the least.



McAfee makes their money from people being required to run it for certification. Imagine government/healthcare/banking/etc. customers being obliged to use only Intel chips because they'll fail their audits (which mandate on-chip antivirus) otherwise. I hate it, but I can see the business sense in trying.


Still, $7.6B is ludicrous money for a "try", especially when everyone in the room should have known how shaky the fundamentals were for such a pitch.


I'm not sure McAfee is the go-to for this requirement any longer. Maybe. But across the 4 enterprises I've worked at, they all migrated away from McAfee.


There's definitely a lot that can be critiqued about that period.

Famously they divested their ARM-based mobile processor division just before smartphones took off.

The new CEO, as the article mentions, seems to have a lot more of a clue. We just hope he hasn't arrived too late.



> a lot that can be critiqued about that period.

Like the time they appointed Will.I.Am?

https://youtu.be/gnZ9cYXczQU



>Famously they divested their ARM-based mobile processor division just before smartphones took off.

Wasn't that AMD (or perhaps AMD as well)? Qualcomm Adreno GPUs are ATi Radeon IP, hence the anagram.



Intel sold their XScale family of processors to Marvell in 2006.

I remember very well as back then I was working in University porting Linux to an Intel XScale development platform we had gotten recently.

After I completed the effort, Android was released as a public beta and I dared to port it too to that development board as a side project. I thought back then that Intel was making a big mistake by missing that opportunity. But Intel were firm believers in the x86 architecture, especially in their Atom cores.

Those little Intel PXA chips were actually very capable. Back then I had my own Sharp Zaurus PDA running a full Linux system on an Intel ARM chip, and I loved it. Great performance and great battery life.



Intel divested their StrongARM/XScale product line.


Yes, just before the iPhone came out and with Apple newly fully engaged as a major Intel CPU customer (for x86 Macs) for the first time ever.

Kind of like Decca Records turning down The Beatles.



It's really sort of been downhill since they decided to play the speed-number game over all else with the Pentium 4. Even the Core i7/i9 lines that were good for a long time have gone absolutely crazy lately with heat and power consumption.


That's overly reductionist. Conroe topped out at around 3 GHz, compared to its predecessor Presler achieving 3.6 GHz.

I think Netburst mostly came from a misguided place where Intel thought that clock frequency was in fact the holy grail (and would scale far beyond what actually ended up happening), and that all the IPC issues such as costly mispredicts could be solved by e.g. improving branch prediction.



It is exactly that short-sighted MHz-over-all-else attitude I'm referring to as a fatal mistake.


Intel's market reality is that (perceived) speed sells chips.

It's embarrassing when they go to market and there's no way to say it's faster than the other guy. Currently, they need to pump 400W through the chip to get the clock high enough.

But perf at 200w or even 100w isn't that far below perf at 400w. If you limit power to something like 50w, the compute efficiency is good.

Contrast that to Apple, they don't have to compete in the same way, and they don't let their chips run hot. There's no way to get the extra 1% of perf if you need it.
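On Linux you can see this trade-off for yourself: the package power limit is exposed through the RAPL powercap interface, so capping a desktop chip is a couple of sysfs writes. A minimal sketch, assuming the intel_rapl driver is loaded and root privileges; the path and the 50 W figure are illustrative:

    # Long-term (PL1) package power limit via RAPL powercap, in microwatts.
    RAPL = "/sys/class/powercap/intel-rapl/intel-rapl:0"

    def set_package_limit_watts(watts: float) -> None:
        with open(f"{RAPL}/constraint_0_power_limit_uw", "w") as f:
            f.write(str(int(watts * 1_000_000)))

    def get_package_limit_watts() -> float:
        with open(f"{RAPL}/constraint_0_power_limit_uw") as f:
            return int(f.read()) / 1_000_000

    set_package_limit_watts(50)  # cap the package at roughly 50 W
    print(f"PL1 is now {get_package_limit_watts():.0f} W")

Benchmarking before and after makes the efficiency curve obvious: the last few hundred megahertz cost a disproportionate share of those 400 watts.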



Oh, I'm quite well aware. I traded a space heater of an i9/3090 tower for an M1 Studio.

The difference in performance for 95% of what I do is zero. I even run some (non-AAA) Windows games via Crossover, and that's driving a 1440p 165Hz display. All while it sits there consuming no more than about 35W (well, plus a bit for all my USB SSDs, etc.), and I've never seen the thermals much past 60C, even running natively accelerated LLMs or highly multithreaded chess engines and the like. It usually sits at about 40C at idle.

It's exactly what almost-40-year-old me wants out of a computer. It's quiet, cool, and reliable - but at the same time I'm very picky about input devices, so a bring-your-own-peripherals desktop machine with a ton of USB ports is non-negotiable.



I remember when they did random stuff like the whole IoT push (frankly, their offerings made no sense to humble me... Microsoft had a better IoT story than Intel). They did drone crap... gave a kick-ass keynote at CES, I recall... that also made little sense. Finally, the whole FPGA thing makes little sense. So much value being destroyed :(


The Altera (FPGA) acquisition could have made sense, but they never really followed through and now it's being spun off again.


There were some technical issues with the follow-through that they didn't foresee. CPUs need to closely manage their power usage to extract maximum computing power, and an on-package FPGA means leaving a big chunk of static power on the table in case the FPGA needs it. The idea of putting an FPGA on a die was mostly killed by that.

Regarding other plans, QPI and UPI for cache coherent FPGAs were pretty infeasible to do at the sluggish pace that they need in the logic fabric. CXL doesn't need a close connection between the two chips (or the companies), and just uses the PCIe lanes.

FPGA programming has always been very hard, too, so the dream of them everywhere is just not happening.



That was not the point of the Altera acquisition. The point was to fill Intel's fabs, but the fab fiasco left Altera/Intel-FPGA without a product to sell (Stratix 10 -- 10nm -- got years of delay because of that). Meanwhile Xilinx was racing ahead on TSMC's ever-shrinking process.


I was a process engineer there in the early 2000s; they did crazy random shit then too! They had an "internet TV" PC that was designed to play MP4s in 2001.


I remember when they bought a smart glasses company, then refunded every buyer the full retail price. There hasn't been an Intel acquisition that has worked out in some 20 years now, it seems. Just utterly unserious people.


> There hasn’t been an Intel acquisition that has worked out in some 20 years now it seems.

Maybe Habana Labs?

I can't really tell if it's working out for Intel, but I do hear them mentioned now and then.



Isn't that true for virtually EVERY big tech merger? Like, which ones have actually worked?


Google built its core advertising ecosystem on acquisitions (Applied Semantics, DoubleClick, AdMob, etc) and extended it into the mobile space by buying Android.


Youtube was also an acquisition.

Haven't heard much about successful Google acquisitions lately though.



Facebook bought Instagram and WhatsApp and they were both home runs. Zuckerberg knows how to buy companies.


Facebook bought their eventual competitors by making an offer they couldn't refuse. Zuck knows Metcalfe's law.


Instagram and WhatsApp are still popular with consumers though. Meta didn’t kill them, if anything they’ve grown significantly.


That’s a different type of acquisition, right? Buying your competition. If nothing else you’ve wiped out a competitor.


Even that sometimes flops (HP/Compaq)


Mostly true, but there are exceptions:

Apple does really well on its rare acquisitions, but they aren't very public as they get successfully absorbed. PA Semi, Intrinsity, more I can't remember.

ATi and Xilinx have by all accounts worked out really well for AMD.



The iPod


Android and PA Semi have worked out pretty well...


Broadcom is a good example of successful mergers.


There are the occasional good ones, like Instagram.

But I guess that's the problem - I had to provide an example.



Nvidia Mellanox


He was right but for the wrong reasons.

Had Intel figured out hyperthreading security and avoided all the various exploits that later showed up …



Then they would have worse-performing chips and the market wouldn't care about the security benefits. Cloud providers may grumble, but they aren't the most important market anyway.


Has there ever been an exploit in the wild for rowhammer/whatever the other vulnerabilities were?


Intel pivoting to GPUs was a smart move but they just lacked the tribal knowledge needed to successfully ship a competitive GPU offering. We got Arc instead.


Isn't Arc actually pretty okay?


They mostly work now and they are decent options at the low-end (what used to be the mid-range: $200) where there is shockingly little competition nowadays.

However, they underperform greatly compared to competitors' cards with similar die areas and memory bus widths. For example the Arc A770 is 406mm^2 on TSMC N6 and a 256-bit bus and performs similarly to the RX 6650XT which is 237mm^2 on TSMC N7 with a 128-bit bus. They're probably losing a lot of money on these cards.



It's getting better, and the drivers are improving all the time. I personally liked the Arc for the hardware AV1 encoding. Quick Sync (I use QSVEncC) is actually pretty decent for a hardware encoder. It won't ever beat software encoding, but the speed is hard to ignore. I don't have any experience using it for streaming, but it seems pretty popular there too. Nvidia has NVENC, and reviews say it's good as well, but I've never used it.
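For anyone wanting to try the hardware AV1 path, it is also reachable through ffmpeg's av1_qsv encoder, assuming a build with Quick Sync (oneVPL) support; the file names and bitrate here are placeholders. A minimal sketch driving it from Python:

    import subprocess

    # Hardware AV1 encode on an Intel Arc GPU via Quick Sync (av1_qsv).
    subprocess.run(
        [
            "ffmpeg",
            "-i", "input.mp4",      # source clip
            "-c:v", "av1_qsv",      # Quick Sync AV1 encoder
            "-preset", "medium",    # speed/quality trade-off
            "-b:v", "6M",           # target bitrate; tune to taste
            "-c:a", "copy",         # leave audio untouched
            "output.mkv",
        ],
        check=True,
    )

The same invocation works in a batch-transcoding loop, which is where the hardware speed really pays off.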


This. If you follow GamersNexus, there are stories every month about just how much the Arc drivers have improved. If this rate continues and the next-gen hardware (Battlemage) actually ships, then Intel might be a serious contender for the midrange. I really hope Intel sticks with it this time as we all know it takes monumental effort to enter the discrete GPU market.


When it works, perhaps


Arc seems aimed more at where the GPU market will "be" in another 2-6 years, where Arc's second or third iteration might be more competitive: Vulkan/future focused, and fast enough that translation layers can cover older APIs. If you're hoping for an Nvidia competitor, the units in that market may bring in more per unit, but there's already a 1-ton gorilla there, and AMD can't seem to compete either. Rather, Arc makes sense as an in-house GPU unit to pair with existing silicon (CPUs), and as low/mid-range dGPUs to compete where Nvidia has left that market and where AMD has a lot of lunch to undercut.


One unfortunate note on Nvidia datacenter GPUs: to fully utilize features such as vGPU and Multi-Instance GPU, there is an ongoing licensing fee for the drivers.

I applaud Intel for providing fully capable drivers at no additional cost. Combined with better availability for purchase they are competing in the VDI space.

https://www.intel.com/content/www/us/en/products/docs/discre...



McAfee was Renee James's idea; she was two-in-a-box (Intel-speak for sharing a management spot) with Brian Krzanich.


MBAs eating the world one acquisition at a time.


He was a process engineer


It's never too late to go back to school.


The CEO at the time of the McAfee acquisition was Paul Otellini -- an MBA: https://en.wikipedia.org/wiki/Paul_Otellini.


Intel has an amazing track record with acquisitions -- almost none of them work out. Even for the tiny fraction of actually good companies they acquired, the Intel culture is one of really toxic politics, and it's very hard for acquired people to succeed.

I wish Pat well and I think he might be the only who could save the company if it's not already too late.

Sourced: worked with many ex-Intel people.

POSTSCRIPT: I have seen from the inside (not at Intel) how a politically motivated acquisition failed utterly spectacularly due to that same internal power struggle. I think there are some deeply flawed incentives in corporate America.



Not gonna lie, I had a professor who retired from Intel as a director or something like that. Worst professor I had the entire time. We couldn't have class for a month because he "hurt his back," then half of us saw him playing a round of golf two days later.


Intel should have bought Nvidia.

And acqui-hired Jensen as CEO.



I've heard the reason AMD bought ATI instead of Nvidia is that Jensen wanted to be CEO of the combined company for the deal to go through. I actually think AMD would have been better off if they had taken that deal.

Prior to the ATI acquisition, Nvidia had actually been the motherboard chipset manufacturer of choice for AMD CPUs for a number of years.



AMD is doing fantastically and its CEO is great. It would be a big letdown if they had bought Nvidia, as we'd have a single well-run company instead of two.


Would that make them Ntel or Invidia?


Invidia is sort of how nvidia gets pronounced anyway, so I’d go with that one. Ntel sounds like they make telecommunications equipment in 1993.


It would have had the same fate as the NVIDIA ARM deal.


Unlikely with AMD owning ATI. The reason NVidia was blocked from buying ARM was because of the many, many third parties that were building chips off ARM IP. Nvidia would have become their direct competitor overnight with little indication they would treat third parties fairly. Regulators were rightly concerned it would kill off third party chips. Not to mention the collective lobbying might of all the vendors building ARM chips.

There were and are exactly zero third parties licensing nvidia IP to build competing GPU products.



One example would be the semicustom deal with MediaTek:

https://corp.mediatek.com/news-events/press-releases/mediate...

It's of course dependent on what "build competing products" means, but assuming you mean semicustom (like AMD sells to Sony or Samsung), then Nvidia isn't as intractably opposed as you're implying.

Regulators can be dumb fanboys/lack vision too, and Nvidia very obviously was not spending $40B just to turn around and burn down the ecosystem. Being kingmaker on valuable IP is far more valuable than selling some more Tegras for a couple of years. People get silly when Nvidia is involved and make silly assertions, and most of the stories have become overwrought and passed into mythology. Bumpgate is… something that happened to AMD on that generation of GPUs too, for instance. People baked their 7850s to reflow the solder back then too - did AMD write you a check for their defective GPU?

https://m.youtube.com/watch?v=iLWkNPTyg2k



Maybe, however, the GPU market was not considered so incredibly valuable at the time (particularly by eg politicians in the US, Europe or China). Today it's a critical national security matter, and Nvidia is sitting on the most lucrative semiconductor business in history. Back then it was overwhelmingly a segment for gaming.


I worked at Intel between '97 and '07. MFG was absolute king. Keeping that production line stable and active was the priority that eclipsed all. I was a process engineer, and to change a gas flow rate on some part of the process by a little bit, I'd have to design an experiment, collect data for months, work with various upstream/downstream teams, and write a change-control proposal that would exceed a hundred pages of documentation. AFAIK, that production line was the most complex human process that had happened to date. It was mostly run by 25-30-year-old engineers. That in itself was a miracle.


Off topic, but I find it weird that Intel's CEO does so much religious quoting under the Intel corporate logo.

Here's one example, but there's a pile of them:

https://twitter.com/PGelsinger/status/1751653865009631584

I guess Intel does need the help of a higher power at this stage.



That's not under the Intel corporate logo. It's a personal account. His comments aren't my cup of tea but they certainly aren't generally offensive.


It has implications for his decision making process though. Christianity requires comfortably handling internally inconsistent information and taking a superficial approach to evidence. Whether that is an advantage in a CEO is unclear. It probably helps him remain confident in Intel's outlook.


Almost as if "doing the right thing" requires an underlying moral framework.


I'm not sure I would agree that you need any sort of "moral framework", such as religion, to know how to do the right thing.

Knowing how to do the right thing is simple human decency.



My own humble opinion is that Intel has always suffered from market cannibalization. They are a brand I look for, but many times the iteration of products forces me to go a generation or two older, because I can't argue with the price and features. By the time I was sold on a NUC, they were discontinued. I wanted a discrete GPU when they announced Xe, but it has become Xe Arc Alchemist, Battlemage, Celestial, and Druid; by the time I'm ready to spend some money, it usually becomes something else. Also, they should have snapped up Nuvia. I'm still rooting for them, but really, if they could streamline their products and be willing to take a leap of faith on others in the same space, it would help out a lot.


>I wanted a discrete GPU when they announced Xe but it has become Xe ARC alchemist, battlemage, celestial, and druid.

They've made this situation fairly clear, in my eyes.

Alchemist is the product line for their first attempt at true dedicated GPUs like those Nvidia and AMD produce. It's based on Intel Xe GPU architecture.

It's done decently well, and they've been very diligent about driver updates.

Battlemage is the next architecture that will replace it when it's ready, which I believe was targeted for this year - similar to how Nvidia's 40 series replaced the 30 series before it. Celestial comes a couple of years later, then Druid a couple of years after that, etc. They don't exist simultaneously; they're just the names for successive generations of their GPUs.



I felt that way about their Optane persistent RAM memory (https://arstechnica.com/gadgets/2018/05/intel-finally-announ...).


Brilliant article. He is totally right that what Pat Gelsinger is doing is as brave as what Andy Grove did and just as essential. In hindsight Andy Grove was 100% right and I hope Pat Gelsinger is proved right.

The fact that Intel stock went up during the Brian Krzanich tenure as CEO is simply a reflection of that being the free money era that lifted all boats/stocks. Without that we would be writing Intel’s epitaph now.

You cannot play offense in tech when there is a big market shift.



> I thought that Krzanich should do something similar: Intel should stop focusing its efforts on being an integrated device manufacturer (IDM) — a company that both designed and manufactured its own chips exclusively — and shift to becoming a foundry that also served external customers.

That would only work if Intel has a competitive foundry. Intel produces very high margin chips. Can it be competitive with TSMC in low margin chips where costs must be controlled?

The rumors I've heard (not sure about their credibility) is that Intel is simply not competitive in terms of costs and yields.

And that's even before considering it doesn't really have an effective process competitive with TSMC.

It's easy to say it should become a foundry, it's much harder to actually do that.



Intel is a product of corporate cancer that is management consulting.


What does this even mean? (not sarcastic)


I worked at (American drink company in Japan) previously and saw what the poster may be referring to.

Management consulting sells ideas, many of them silly or unimportant, that are marketed and packaged as era-defining. A manager who implements #FASHIONABLE_IDEA can look forward to upward mobility, while boring, realistic, business-focused ideas from people in the trenches usually get ignored (unless you want a lateral transfer to a similar job). A hashtag collection of ideas is much easier to explain when the time comes for the next step up.

This explains why you get insane things like Metaverse fashion shows that barely manage to look better than a liveleak beheading video. These sorts of things might seem like minor distractions, but getting these sorts of boondoggles up and running creates stress and drowns out other concerns. Once the idea is deployed, the success or failure of the idea is of minimal importance; it must be /made/ successful so that $important_person can get their next job.

These projects starve companies of morale, focus and resources. I recall the struggle for a ~USD $20k budget on a project to automate internal corporate systems, while some consultants received (much) more than 10 times that amount for a report that basically wound up in the bin.

Oddly, this sort of corporate supplication to management consultants worked out for me (personally). I was a dev who wound up as a manager and was able to deliver projects internally, while other decent project managers could not get budget and wound up looking worse for something that wasn't their fault.

I don't think any of the projects brought by management consultants really moved the needle in any meaningful way while I worked for any BigCos.



Oh the PTSD... *twitch*


People come in on a short term basis. They don't know the company, the business, or the employees. They make long term decisions by applying generic and simplified metrics without deep understanding.


TBH that applies to all of the Fortune 500 at this point.


Intel has truly been on a remarkable spree of extremely poor strategic decisions for the last 20 years or so. Missed the boat on mobile, missed the boat on GPUs and AI, focused too much on desktop, and now AMD and ARM-based chips are eating their lunch in the data centre area.


They ran it for max profit in an era when strategic investments needed to be made.

It's remarkably common and heavily incentivized.



You're missing the big one: they missed the boat on 64-bits. It was only because they had a licensing agreement in place with AMD that they were able to wholesale adopt AMD's extensions to deliver 64-bit x86 processors.


That's not at all what happened. Intel's 64-bit story was EPIC/IA-64/Itanium, and it was an attempt to gain a monopoly while keeping x86 for the low end. AMD64 and the Itanic derailed that idea so completely that Intel was forced by Microsoft to adopt the AMD64 ISA; Microsoft refused to port to yet another incompatible ISA.

Had Itanium been a success then Intel would have crushed the competition (however it did succeed in killing Alpha, SPARC, and workstation MIPS).



I don't think it was Itanium that killed SPARC. On workstations it was the improved reliability of Windows and, to some extent, Linux. Sun tried to combat this with lower-cost systems like the Ultra 5, Ultra 10, and Blade 100. Sun fanatics dismissed these systems because they were too PC-like. PC fanatics saw them as overpriced and unfamiliar. With academic pricing, a $3500 Ultra 10 with 512 MB of RAM and an awful IDE drive ran circles around a $10000 HP C180 with 128 MB of RAM and an OK SCSI drive, because the Sun never had to hit swap. I think Dell and HP x86 PC workstations with similar specs as the Ultra 10 were a bit cheaper.

On servers, 32 bit x86 was doing wonders for small workloads. AMD64 quickly chipped away at the places where 1-4 processor SPARC would have previously been used.



Fair point, Itanium's impact on SPARC might be less than I stated, but Alpha is very clearly documented.


I think that Itanium had zero impact on anything (besides draining Intel for money) due to high cost and low performance.

It could not run x86 apps faster than x86 cpus, so it didn't compete in the MS Windows world. Itanium was a headache for compiler writers as it was very difficult to optimize for, so it was difficult to get good performance out of Itanium and difficult to emulate x86.

Itanium was introduced after the dot-com crash, so the market was flooded with cheap, slightly used SPARC systems, putting even more pressure on price.

This is unlike when Apple introduced Macs with PowerPC CPUs: they had much higher performance than the 68040 they replaced, and PowerPC was price-competitive and easy to write optimizing compilers for.



Itanium itself did nothing. But the Itanium announcement basically killed MIPS, Alpha, and PA-RISC. Why invest money into MIPS and Alpha when Itanium is going to come out and destroy everything with its superior performance?

So ironically announcing Itanium was genius, but then they should have just canceled it.



Microsoft did port to Itanium. Customers just didn't buy the chips. They were expensive, the supposed speed gains from "smarter compilers" never materialized, and their support for x86 emulation was dog slow (both hardware and later software).

No one wanted Itanium. It was another political project designed to take the wind out of HP and Sun's sails, with the bonus that it would cut off access to AMD.

Meanwhile AMD released AMD64 (aka x86-64) and customers started buying it in droves. Eventually Intel was forced to admit defeat and adopt AMD64. That was possible because of the long-standing cross-licensing agreement between the two companies that gave AMD rights to x86 way back when nearly all CPUs had to have "second source" vendors. FWIW Intel felt butt-hurt at the time, thinking the chips AMD had to offer (like I/O controllers) weren't nearly as valuable as Intel's CPUs. But a few decades later the agreement ended up doing well for Intel. At one point Intel made some noise about suing AMD for something or another (AVX?) but someone in their legal department quickly got rid of whoever proposed nonsense like that because all current Intel 64-bit CPUs rely on the AMD license.



Maybe I wasn't clear; I meant that after Itanium failed, Microsoft refused to support yet another 64-bit extension of x86, as they already had AMD64/x64 (and IA-64, obviously).


They didn't simply miss the boat on 64 bit.

That was an intentional act to force customers into Itanium, which was 64 bit from the outset.



No, they were on the boat; they just mismanaged it. They used to make ARM chips, but sold that business off just before the first iPhone was released, as they saw no future in mobile CPUs. Same with network processors for routers around the same time.

They have been trying half-heartedly with GPUs on and off since the late 1990's i740 series.

The root cause is probably the management mantra "focus on core competencies". They had an effective monopoly on fast CPUs from 2007 until 2018. This monopoly meant very little improvement in CPU speed.



Xeon is not something people normally run on desktops.

Datacenters were and still are full of monstrous Xeons, and for a good reason.



> Datacenters were and still are full of monstrous Xeons, and for a good reason.

To ask the foolish question, why? My guess would be power efficiency. I've only ever seen them in workstations, and in that use-case it was the number of cores that was the main advantage.



They seem to gate ECC support behind Xeon for higher end processors. You see ECC memory in a lot of workstation class machines.


It's a compact package with many cores, lots of cache memory, lots of RAM channels, and lots of PCIe lanes, all within a large but manageably hot die.

Space and power in datacenters are at a premium; packing so many pretty decent cores into one CPU allows running a ton of cloud VMs on a physically compact server.

AMD EPYC, by the way, follows the same datacenter-oriented pattern.



They still dominate PC’s and the server market.


Looking at Statista [0], I see Intel at 62% and AMD at 35% on desktop for 23Q2. That's a significant gap, and having more than half of the market is nothing to sneeze at, but I think they've moved from being dominant to just being a major player.

IF (big IF) the trend continues, we might see Intel and AMD get pretty close, and a lot more competition on the market again (I hope).

On the server side, I don't have the numbers, but that's probably much harder turf for Intel to protect going forward - if they're even still ahead?

[0] https://www.statista.com/statistics/735904/worldwide-x86-int...



I run all my workloads in AWS on ARM chips. It's half the cost for just as good an experience on my side.


Cloud providers are very careful to make sure of that - they have deliberately eschewed performance increases that are possible (and have occurred in the consumer market) in favor of keeping the "1 vCPU = 1 Sandy Bridge core" equivalence. The latest trend is those "dense cores" - what a shocking coincidence that it's once again Sandy Bridge performance, and you just get more of them.

They don't want to be selling faster x86 CPUs in the cloud; they want you to buy more vCPU units instead, and they want those units to be ARM. And that's how they've structured their offerings. It's not the limit of what's possible, just what is currently the most profitable approach for Amazon and Google.



The trend ain't their friend in those markets though. Many folks are running server workloads on ARM, and their customer base is drastically more concentrated and powerful than it once was. Apple has shown the way forward on PC chips.

They are a dead company walking.



Very hyperbolic take. I agree there is serious competition elsewhere, but "dead company walking" is far from true.


Time will tell. It's not meant to be hyperbolic - I'm short them and lots of others are and expect it will be a disaster going forward. There are obviously people on the other side of that trade with different expectations, so we will see.


And the manufacturing failed to save them:

[...] when Intel’s manufacturing prowess hit a wall Intel’s designs were exposed. Gelsinger told me: 'So all of a sudden, as Warren Buffet says, “You don’t know who’s swimming naked until the tide goes out.” [...]'



Is it just the fate of large successful companies? The parallels with Boeing always come to mind. We’ve seen this play out so many times through history, it’s why investing in the top companies of your era is a terrible idea.

https://money.cnn.com/magazines/fortune/fortune500_archive/f...

I wonder how many of our companies will survive at the top for even 20 more years?

https://companiesmarketcap.com/

Berkshire is the only one I feel sure about because they hold lots of different companies.



Apple makes up 40% of Berkshire's investment portfolio.


https://www.cnbc.com/berkshire-hathaway-portfolio/

As of Nov 2023, BRK has ~915.5M AAPL shares, with a market cap of $172B. BRK’s market cap is $840B.

Per the CNBC link, the market cap of BRK’s AAPL holding is 46% of the market cap of BRK’s publicly listed holdings, but BRK’s market cap being much higher than $172B/46% means there is $840B - $374B = $466B worth of market cap in BRK’s non publicly listed assets.

I would say $172/$840 = 20% is more representative of BRK’s AAPL proportion.



Market forces are hard to fight sometimes. Look at Meta - they were beaten like hell because of their metaverse bet.


Meta was beaten because investors were worried Mark had gone rogue. Now that he's laid a bunch of people off to show that the investors are in control, they're cool with the metaverse.


Or maybe it has something to do with their amazing net income trend in recent quarters:

https://www.macrotrends.net/stocks/charts/META/meta-platform...



Am I missing something? Meta looks like it’s at an all time high


This is so common that it happens all the time with successful companies. They don't have to make good decisions. They have more than enough cash to keep making bad decision after bad decision, whereas a smaller company would collapse.

Apple and Microsoft have both managed to avoid this, and been the exception.





