(comments)

Original link: https://news.ycombinator.com/item?id=40926648

A user questions the view that acquisitions of European startups by US tech giants have a negative effect on the host country's economy. They argue that the startup's continued operation in the country contributes to its gross domestic product (GDP): even though global shareholders may capture the potential profits, local employment and investment still matter. They also cite the example of Finland's strong education system and its appeal to international investors. In addition, they question the view that US companies value software more than European companies do, pointing to AMD's recent acquisition of Silo AI and its focus on software development. Finally, they suggest discussing the effectiveness of various investment strategies rather than focusing on presumed downsides.

Related articles

Original article


I’d argue that a factor in CUDA’s success is their army of in-house researchers which use CUDA to do novel things. Sometimes those things get turned into products (OptiX) other times they are essentially DevRel to show off what the hardware can do and documentation for how to do it. Additionally I’m sure they use pre-release hardware and software and give feedback about how to improve it.

I don’t know what AMD has in mind for this acquisition but I could see there being a lot of value having an in house LLM team to create models for customers to build on, run in benchmarks, and improve their products.



Nvidia also spends a metric shit ton of money to make sure professors use and teach on their platform.

I don’t remember any alternatives in uni. Maybe OpenCL but only lightly mentioned



As someone who has designed and taught those courses, my experience (admittedly only one person's) is that you pick what will work with the least hassle - because you'll have plenty of hassle elsewhere and probably no real time to deal with any of it without making more.



Also did an algorithms in machine learning course in Matlab

It’s a great language choice for it

It weeded out the script kiddies who incorrectly signed up wanting a Tensorflow or PyTorch course

It’s a fairly bland and slow but usable language for the task

Shits me off to no end that a lot of engineering courses more or less indoctrinate their students into using it unconditionally, though

Octave exists but is a relative pain to use



Matlab is fairly easy to work with (initially) and is great when learning a new concept, instead of learning that plus arbitrary syntax of the tool.

It isn't particularly fast though, and the simplicity quickly becomes an obstacle when solving a real problem.



> The software that students use is the software the industry uses about five years later.

which is why it's anti-competitive for a company to sponsor university courses (such as providing educational versions for free). It should be disallowed, unless the course is _specifically_ teaching the software, rather than a general course.



> others are not allowed to do the same.

It's usually the case that the sponsor is the sole sponsor (i.e., the course does not teach both X and Y, especially if X is given to the uni for free).

It's anti-competitive to allow companies to embed themselves in general courses, even if it isn't so by the letter of the law.



Sort of -- but basically no course is going to teach X and Y, if they're functionally equivalent ways to learn about Z, because almost no course is specifically about X or Y, it's about Z, and learning both X and Y isn't germane to learning Z, just learning one is enough.

As long as the companies behind X and Y both have a fair shot at sponsorship, this isn't really anti-competitive. It's literally a competition in which the companies compete for student and faculty attention.

Anti-competitive would be a company saying "you must teach X and not Y in your class about Z because you use Xco's mail services" or some other such abuse of one contractual relationship for an unrelated gain.



They say "hey if you want to teach a class using X, we'll sponsor it."

A competitor can compete for that sponsorship. So long as it's done on the direct merit of the value, there's no problem.

Anti-competitive would be providing products or services and forcibly leveraging that into an unrelated contract.



>Nvidia also spends a metric shit ton of money to make sure professors use and teach on their platform.

Do you have a source for this claim? Or do you simply mean that since they spend money making it better, professors end up using it of their own accord?



I hold an NVidia instructor's cert from when I worked in academia. They even give you access to hardware while you're running courses on it. It's super easy and totally free.



It's definitely both.

I'm sure plenty of professors use CUDA in their courses because it's what they actually use. At the same time, in 2013 when I was in college I took a course on "parallel computing" as a CS elective. The professor told us on day 1 that NVidia was sponsoring the course and had donated a bunch of GPUs to the clusters we could remotely connect into for the sake of the class. Naturally we used CUDA exclusively.

I know for a fact that this happened at a lot of schools. I don't know if it's still happening since I'm not in that world anymore, but don't see why it would have stopped.



CUDA is extremely simple; the classes might as well be on rails. OpenCL is nearly impossible without prior graphics, CUDA, distributed computing, or operating systems experience.
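
To make the "on rails" point concrete, here is a complete CUDA program (a minimal sketch added for illustration, not from the original comment): one kernel, one launch, done. The equivalent OpenCL program needs explicit platform/device/context/queue/program boilerplate before a single kernel can run.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Minimal CUDA vector add: the whole program is one kernel plus a launch.
    __global__ void vecAdd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        float *a, *b, *c;
        // Unified memory keeps the example short; explicit cudaMalloc/cudaMemcpy
        // would work the same way.
        cudaMallocManaged(&a, n * sizeof(float));
        cudaMallocManaged(&b, n * sizeof(float));
        cudaMallocManaged(&c, n * sizeof(float));
        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
        cudaDeviceSynchronize();

        printf("c[0] = %f\n", c[0]);  // expect 3.0
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }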



No, Nvidia makes great tooling. As a startup, if I had to pick a development tool, AMD fails repeatedly while Nvidia tooling is at Matlab levels of usefulness.

Those companies have money to make ‘nice’ things which open source software doesn’t have the time to do.

For $100M you could probably make some pretty sweet clones, if AMD is hiring anybody to man that position.



I'm not sure if I'm in the minority here, but "Matlab levels of tooling" is an insult. Their guides always stopped two or three steps before being useful - just enough to make you think whatever they were selling would solve your problems, but never enough when really building a solution.



I don’t understand what you are disagreeing with.

Nvidia makes software that induces demand for their products. Sometimes that software is a tool, or a platform, or an ML model, or foundational research on algorithms.



More so a lack of will to effectively mass-organise.

Thousands of OSS devs would be willing to devote serious time to it, but can’t/won’t run the gauntlet of starting such a ludicrously large project from scratch

It’s easy to contribute, difficult to be the one organising the contributions

A real “where do I even begin” problem



True, gotta love Nsight Systems and Compute.

That's the first hurdle of working with AMD GPUs, I have no idea what the GPU is actually doing because there is no quality profiler.



The success of CUDA is basically a dual effect of AMD devices being inefficient and bad for years, plus AMD having no answer to CUDA for a solid 7+ years while the foundations of GPGPU were being laid down.

Mindshare shifts slowly.



AMD is (according to their own statements) in the process of picking up a lot of software manpower. And wages in Finland are European tier, not US West Coast. Why lay them off?



Because nobody has ever been fired or fiscally punished for firing an excessive number of people? :) People are notoriously bad at predicting potential positives, so firing people means nobody can prove that something wasn't created. In reverse, it is possible to blame people for overhiring because that can be supported by hard numbers.



I guess their growth strategy was mostly about hiring everyone Finnish, or living in Finland (and later in other countries), with a PhD in some quantitative topic, and then marketing the "we have xx PhDs" consultancy for all your projects. So you are probably right that not all of these are needed anymore?



Huge congratulations to the founders, and what a nice mark for the European (and Nordic) startup community.

It's gonna be quite interesting to see if this works out strategically.

I guess the bet is an in-house army of PhDs vs. having a CUDA - which you don't have as a second mover here - and assuming PhDs tightly coupled with the hardware can outperform an open framework / push Triton to parity with CUDA over time.



Congrats to the founders indeed, but

> what a nice mark for the European (and Nordic) startup community.

Not sure if it is a great win for the EU at large if their AI startups get bought up by American companies though, to be fair.



The economy and startup world isn't a zero sum game.

Ultimately the AI play* is open source for the foreseeable future, even more so for AMD if they want to sell their chips.

And if Silo AI's people accelerate competition in the AI HW space by accelerating Triton development/ raising the industry's competitive edge against Nvidia, we all benefit from stronger competition.

And in most other European startup hot spots, senior staff/ founders with previous exits reinvested their earnings into the domestic startup scene through founding again or becoming Business Angels or going VC.

I see this as a huge net win.

* EDIT: For integrating with compute, I guess.



Actually, it is zero sum. There are finite resources, human talent, and centers of decision making. Yeah, a European startup gets American money today, and the American decision-making center grows larger. Whether the money paid to Europeans now is used to prop up a new generation of startups - in any meaningful way - remains to be seen. Most likely, these senior staff/founders will allocate their cash where it is more efficient, and I doubt that will be (meaningfully) in Europe.



The fact you're posting this comment in a thread about the acquisition of a European startup is in itself a counterexample, wouldn't you agree?

One of the Cofounders of Silo is ex-Nokia...

Should tell you everything about zero-sum games.

Sure, the US is the dominant financial and technological economy on the planet and that will not change for the foreseeable future.

But implying a globalized, technology enabled economy will behave in a zero-sum fashion is just plain wrong.

The US is where it is today because post-WWII it ingeniously recognized the value of free and global trade and invested heavily in its Navy to enable and protect said trade.

Instead of making things on your own in the US, you could sit in New York and invest globally - the value of your investment and access to its dividends guaranteed by the power of the US military.

Relative value against the status quo is created every day everywhere by millions of smart people.

What Europe - and Finland in this example - has is a century-old tradition and established infrastructure for higher education.

That investment will continue to pay off for the foreseeable future.



> ingeniously recognized the value of free and global trade and invested heavily in its Navy to enable and protect said trade

This reads like a person taking credit for the sun rising in the east and setting in the west.

The United States is rich for three reasons:

First, the USA stole textile and other technology from the British Empire.

Second, gen-1 "non-free trade" empires like the British got demolished in the war. All of the world's industrial nations were in ruins.

Third, the "genius" of the Marshall Plan was giving reconstruction loans to the British and the French that they could only spend on American products - remember, their industry was demolished - further stimulating the American economy.

Global trade grew after 1955 when we invented containerisation.

And the USA does not really believe in global free trade - that's for its club of friends. Everyone else gets a sudden 100% solar panel tariff or a 100% EV tariff when they want to export to the US. Or they get a sudden coup if their government wanted to stop exporting bananas to the US.



Oversimplified explanations here - great for relieving some hostility perhaps, but not complete... lots of holes in this blanket explanation.

History of trade protectionism? That is older than sailing ships. Yes, the achievements of old Europe include igniting the largest and deadliest wars in world history. The USA benefited from not being destroyed? Sure, OK... maybe war is a bad idea for prosperity.

Modern trade values might be divided into "oil and gas" and then everything else... "arms trade" and then everything else... Big Pharma? OK, you got me, yes the US rules it financially, but old Europe has some assets like that too, just not on the front stage... thinking of textile dye chemistry as an example.

The USA gets no special award for inventing trade protectionism, just a vigorous practice of it at the right time, thanks to the idiocy of others.

You speak English pretty well... maybe that language is part of the success here? Many other angles easily come to mind...



The world is very much not zero sum and never has been. If it was then we'd be stuck in the stone age because every advancement that benefits one person would hurt the rest. Instead we see that over the course of history the average wealth of the world has gone up. There are certainly some negative intangibles that come from that (eg. climate change or ecosystem collapse) but it's hard to quantify if that makes it zero sum and even so, the context of this thread is about human vs human zero sum games.



The world was effectively a zero-sum game until the industrial revolution. For most of human history, the average growth per capita was something like 0.01%/year. There was some growth, but you could not see it in a single lifetime. Which is why conquering your neighbors, taking their possessions, and enslaving them was such a popular form of business.



Don’t European programmers make much less than Americans? I wouldn’t be surprised if they kept a pretty big footprint over there.

Big picture the US unemployment rate is quite a bit lower than the EU, so I’m sure any global company is happy to draw from the bigger pool.

Finally, benefits can be unbalanced in favor of one entity or another without being zero sum. Even if the US benefits more from this deal, the purchasing company, AMD, still turns sand into extremely valuable electronics. That’s not a zero-sum activity.



> US unemployment rate

Do not believe this number... it is a manipulated statistic on the front line of the old class war of labor versus capital. Hint: capital interests know the Federal government very well.



I’m not sure we agree on what zero sum here means, but one direct consequence of having a decent exit here is that the investors in Silo will get a capital return they can use to raise more funds.

I don’t know what the founders of Silo will do, but the investors are in the business of investing, and incrementally the viability of being an AI VC in this area has gone up (depends on the counterfactual but I think cash exit is better than some chance of IPO).



You say it's zero-sum then in the next sentence say "whether the money paid into Europeans now is used to prop up new generation of startups - in any meaningful way - will remain to be seen", which surely implies that it's not necessarily zero-sum.



>Not sure if it is a great win for the EU at large if their AI startups get bought up by American companies though, to be fair.

That would be a concern if the plan was to move the entire team to the US. But if the Finland based company just becomes a part of AMD then I see little downside. Some very competent people in Finland now have $665M to fund new startups.

Ultimately I think the most important question is where the interesting and high productivity work gets done. That's the place that benefits most.



>That would be a concern if the plan was to move the entire team to the US.

The issue is that all that Finnish labor now fuels a US tech giant whose profit center is in the US, not the EU, thereby mostly boosting the US economy in the process.

Then there are also the trade barriers that come with becoming a US tech company instead of a Finnish one. You can't sell to China and other countries on the US's shit list without Uncle Sam's approval.



>The issue is that all that Finnish labor now fuels a US tech giant whose profit center is in the US, not the EU, thereby mostly boosting the US economy in the process.

No, this is not how it works. Assuming Silo AI continues to operate out of Finland, its investments, the consumption of its employees and its exports will continue to count towards Finland's GDP just like before. Any profits go to AMD shareholders all over the world, not just in the US. The strategic alignment between Silo AI and AMD may well benefit both Finland and the US.

We have a similar debate in the UK regarding DeepMind. And yes it's true, if you assume that DeepMind or Silo AI would have become world dominating tech behemoths in their own right, then it would have been better for Britain/Finland if they hadn't been sold.

But it's also possible that the UK and Finnish operations are ultimately more successful as part of Google/AMD because they benefit from strategic opportunities they wouldn't otherwise have.

I'm not saying that headquarters don't matter or that there are no downsides (e.g wrt corporation tax). What I am saying is that it's not automatically a bad thing for a country if a company gets sold to a foreign corporation.

One thing is for sure. It's far better to have a lot of US subsidiaries in the country than watching your graduates and startup founders leave for the US.



Employee loyalty isn’t a good thing. One of the best things about Silicon Valley is that people can swiftly change companies when they get higher offers. Non-competes are void in California.

There’s a reason US salaries for software devs are 2-5x EU salaries for similar roles.



Anyone could ask you to sign a non-compete. But in California, they have been legally unenforceable for as long as I have been alive.

What changed is that employers now cannot make employment conditional on signing this unenforceable contract.



As someone who has been stuck in Silicon Valley for 20 years I can say hands down the German and European teams I’ve worked with far outshine the hacker ego Hollywood hipster techbros of San Francisco. Yet the latter make 2-5x the income.



Thanks :)

But I guess there are also very capable American teams and narcissistic European computer scientists.

(I guess it is a very good question why this difference exists and how to change economic policy.)



>There’s a reason US salaries for software devs are 2-5x EU salaries for similar roles.

When you account for medical costs, rent (especially compared to the localities in the USA that provide these huge salaries), extra vacation time, and for those with children, education and child care, this gap narrows considerably.

Rent alone... one can find a reasonable spot in Berlin for ~$1300/mo. Good luck finding more than a shared box in the Tenderloin for that much in the Bay Area.



> When you account for medical costs, rent (especially compared to the localities in the USA that provide these huge salaries), extra vacation time, and for those with children, education and child care, this gap narrows considerably.

That's what Europeans generally say to justify or cope with their low salaries, but it's not true. After accounting for all these, an SV, NYC, Seattle, etc., engineer ends up with far more disposable income than their EU counterpart.

The US has the highest average disposable income worldwide; the rest don't even come close [1]. That's why it has much more entrepreneurial activity.

Yes, the US isn't perfect, but the EU doesn't come close to the US in terms of money for highly skilled professional workers.

1- https://www.statista.com/statistics/725764/oecd-household-di...



My cursory understanding is that Silo is a developer of LLMs that run on top of compute platforms. Isn't the problem with no one using AMD's accelerators the fact that their programming environment is sub-par compared to CUDA, or even Apple's?



CUDA is a decent abstraction; as with OpenCL, I wouldn't be surprised if they eventually pick a different abstraction to describe the interface they use for writing programs.



It makes no business sense for them to try to get CUDA compatibility. That would just cement CUDA as the de facto standard, at which point they are locked in to playing catch up forever as nVidia intentionally adds features to break compatibility.

Much more sensible to work on getting rock solid support for their own standards into all the major ML platforms/libraries.



It depends what one means by "business sense". Compatible makers did profit during the PC era. Indeed, one of AMD's core businesses is making x86-compatible CPUs.

Nvidia, as the standard-maker, is limited in what breaking changes it can introduce - these can harm its customers as much as they harm the competition. Intel failed to force all its changes on AMD as the x86 market expanded (notably, the current iteration of the CPU standard was set by AMD after Intel was unable to sell its completely new standard).

Still, I'd acknowledge that "business sense" today follows the approach of only aiming for markets the company can completely control and by that measure, CUDA compatibility isn't desirable.



I think the key is that CUDA is much more like the Microsoft Windows software part of the duopoly than the Intel x86 hardware part of the old Wintel duopoly. At best, back in the glory days of that era, you could have weird hackish things like WINE... until Microsoft's business model changed and it started being interested in supporting virtualization to build up Azure.

The key is that while there were many clones of x86, there never really was an attempt at a company built around "run MS Windows programs natively", because maintaining software compatibility is an order of magnitude harder than doing it for hardware.



CUDA is absolutely not equivalent to Windows as a platform. It's essentially a single API, not a huge, multilayer and futzy platform with multiple weirdly behaved APIs.

Moreover, companies aren't buying GPUs to keep their huge stable of legacy applications running. They want to create new AI applications and CUDA is a simple API for doing that (at a certain level).



CUDA is a programming language with libraries (cuBLAS, cuSPARSE, etc.) that are constantly having things added while trying to maintain backwards compatibility. It's not as big and hefty as all of Win32, sure, but it's still far more difficult than x86 compatibility.
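
As a small illustration of that library surface (a sketch; cublasSgemm is the standard cuBLAS GEMM entry point): even a single matrix multiply pins down handle semantics, column-major layout, leading dimensions, and alpha/beta scaling - behavior a would-be compatible stack has to reproduce exactly.

    #include <cublas_v2.h>

    // C = alpha * A * B + beta * C, all matrices column-major on the device.
    // Even this one call fixes handle lifetime, layout conventions, leading
    // dimensions, and scaling semantics that a clone must match bug-for-bug.
    void sgemm(const float* dA, const float* dB, float* dC, int m, int n, int k) {
        cublasHandle_t handle;
        cublasCreate(&handle);
        const float alpha = 1.0f, beta = 0.0f;
        cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                    m, n, k,
                    &alpha, dA, m,   // lda = m
                    dB, k,           // ldb = k
                    &beta, dC, m);   // ldc = m
        cublasDestroy(handle);
    }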



It makes sense then for AMD to buy them out.

If they've trained LLMs on LUMI, which has a lot of Instinct GPUs, there is a high chance they've had to work through and solve a lot of the gaps in AMD's software support.

They may have already figured out a lot of stuff and kept it all proprietary and AMD buying them out is a quick way to get access to all the solutions.

I suspect AMD is trying to fast track their software stack and this acquisition allows them to do just that.



Poro ("reindeer" in Finnish) is specifically developed to be used in Finnish. GPT and other general models struggle with less-used languages. Unfortunately, this sale likely means this development will cease.



Reindeer is a great name, and gives me an idea - next time I create an Azure OpenAI resource (depending on model availability and data residency requirements, sometimes you need to create more than one) I'm going to start going through Santa's reindeer names.



That is the most sarcastic thing I have read in weeks.

But isn't getting a software stack the exact kind of thing they need? Is there no overlap in the skills at the purchased company and the skills needed to make the AMD software stack not suck?



Going straight by stock price isn't very valuable unless you're selling immediately.

$600 million is a lot, and in order for that $12 billion increase to stick around, this team-up needs to deliver a lot of value. I'm optimistic, but I'm also an outsider.



Yeah, I saw the stock market uptick, but that is a knee-jerk reaction by the public markets. It's not as if public market participants have had ample time to evaluate the merits of the acquisition - and even then, whether they are right remains to be seen.



If it's a culture problem and the C-suite is aware of it, then one reason to buy a company with a working software stack is to percolate their culture into your company so you can be successful.



The company I used to work for is doing this to the engineering org in my current employer. It requires the leadership from the old company to be embedded in very senior positions, and it requires buy-in from the existing C-suite. There's a lot of backroom politics to change culture along with a bunch of work to prove yourselves to people who aren't involved in the backroom. There have been a bunch of points at which I didn't think it would continue but so far the original team has been pretty successful at rising.

Think of it as a reverse McDonnell-Douglas.



I also have this impression. The software problems that are plaguing AMD are in the "less than $10 million" range, if they hired the right people to work on the most severe bugs in their GPU drivers and let them do their job.



Why the personal attack?

I said that I interpreted the previous comment as sarcastic so I could be called out if it wasn't. The author hasn't yet disagreed. And I think sarcasm is warranted in a space that has witnessed so many bad acquisitions.

On software at AMD: if my world is so simple, please explain where I am wrong. I never said this was a simple solution; I implied there was some overlap in the needed skills.

ROCm sucks: it has licensing and apparently usage issues. It has had performance issues, though that is getting better. And it isn't in a lot of the places it needs to be for it to be considered a default choice.

Apparently, Silo uses AMD stuff to do ML work. Apparently, they have domain experts in this space. It seems likely that getting input from such people could positively influence the ML and hardware.

Of course there will be complexity in this process. This is a $600 million deal involving thousands of people (not just Silo employees, but AMD people, regulators, stakeholders, etc.). I don't think anyone is implying this is simple.

I only wanted to say, "This isn't obviously dumb".



I'm curious about these "licensing issues" you speak of. From what I've seen, the vast majority of the ROCm components are MIT licensed, with a few bits of Apache license and NCSA Open Source License mixed in. Could you possibly elaborate on that?



It has been a while, but the last time I got the ROCm drivers and some other items I needed from them, there was a really weird proprietary license. That might not be the case anymore; my information might be stale.



"fix"? What is there to fix? AMD has been simultaneously fighting Intel and Nvidia, two MUCH larger companies, and it's been winning the fight against Intel for close to a decade now.

It's certainly not Lisa Su's fault that the clowns over at Intel got stuck on variations of 14nm (with clever marketing names like 14nm+++++) for nearly a decade, but credit certainly is hers for introducing Zen and putting AMD back on top of the x86 market.

With the new x870(e) motherboards and Granite Ridge chips right around the corner, effortlessly destroying the pyrotechnic processing units known as Raptor Lake, it's honestly a miracle to me that Intel's stock price is still as high as it is.

Guess Wall Street still loves those billions of forcefully confiscated taxpayer dollars being doled out by Uncle Sam to a graying dinosaur like Intel, which couldn't even compete without those handouts... the quality of their marketplace offerings certainly isn't what's keeping that valuation up!



I’m also bullish on Intel, but clearly not as much as you. Intel is transitioning right now. x86 is never going to reclaim the crown of most important architecture, so Intel is trying its best to become a foundry for all the fabless customers out there. It’s going to take a long while, but right now they’re the best company to compete with TSMC in ten years. If Apple uses their foundries next decade, you’ll know Intel is back on top.



> x86 is never going to reclaim the crown of most important architecture

To be clear, I assume you are including 32-bit and 64-bit, e.g., x86-64. I am surprised by this comment. To me, x86 won the architecture battle because of Linux (and, to a lesser extent, Microsoft Windows). Nothing is so cheap to deploy and maintain as a Linux server running x86-64 procs. Yes, I know you can buy single-board computers, but x86 wins in the triangulation of dollars-watts-performance. If you disagree, what do you think is the most important architecture today?


This site is full of people with the west coast VC-driven-tech bizarro world blinders on. If AMD just keeps at what they've been doing well (matching or beating Intel processors) instead of chasing after the latest buzzword grift bubble, they're doomed in the eyes of people with that mindset.



AMD needs to expand the user base of their GPUs away from gaming and desktop graphics. Buying an AI company that is using their stack for compute is a really good way of learning how to do that. It's essentially now an in-house team to dogfood all of your brand new products and tell your other engineering teams what they're doing wrong.

In my mind it's not about AI per se, but about using the hot use case for GPU to drive meaningful change in your software stack. There are tons and tons and tons of GPGPU users out there who aren't training LLMs but who need a high-quality compute stack.



I think AMD's concern is that x86 might not be much of a market in 10 years. Between Apple, Amazon Graviton, and Nvidia Grace Hopper's ARM CPU, we are seeing a sustained, successful attack on x86 the likes of which we haven't seen... ever? Sustained and successful non-x86 desktops, servers, and next-gen datacenter platforms - where does that leave AMD? (Intel has a little more diversification because of its foundry opportunities, but is in the same boat.)



> "fix"? What is there to fix? AMD has been simultaneously fighting Intel and Nvidia, two MUCH larger companies, and it's been winning the fight against Intel for close to a decade now.

There's everything to fix. AMD is sitting on a gold mine and is squandering massive amounts of money every month that they don't just get their shitty software stack in order.

AMD could be as rich as NVIDIA. Instead, Lisa Su for some insane reason refuses to build even the most mediocre ML-capable libraries for their GPUs.

If I could ask anyone in the ML world at the moment what the heck they're thinking, it would be her. Nothing makes sense about AMDs actions for years on this topic. If I was the board, I'd be talking about her exit for wasting such an opportunity.



> Instead, Lisa Su for some insane reason refuses to build even the most mediocre ML-capable libraries for their GPUs.

Spending $665m on a company that builds AI tooling, is a refusal?



I've been thinking that NVDA stock is massively overpriced - yes, AI is a hot topic, but their only advantage is the software stack. It is just a matter of time until Intel and AMD realize that they should join hands and build an open-source CUDA alternative for their respective GPUs (yes, Intel has competitive GPUs, and just like AMD and Nvidia they will try to get a share of the AI chip market).



Problem is, the CUDA advantage is gigantic, and it has been known for years in GPGPU processing, way before AI was a meme. AMD has lost countless developers over the years just on hello-world-style projects. Developers had a solid 6-7 years of living with OpenCL while the green rival had a very mature and nice CUDA sitting there. I've been out of that world for a while now, but it was truly painful and turned a lot of devs off programming AMD devices. Now there's a big moat of entrenched developers that could take decades to displace. It's like trying to displace C++ with Java 22 - possible, but it's a slow, slow trudge and everyone still remembers Java 1.4.



While I agree with the sentiment towards CUDA, the example is a bit off, given that C++ basically lost all mindshare in distributed computing to Java and others, and is hardly visible in the CNCF projects landscape.

Displacing C++ in compiler development and HFT/HPC/GPGPU with Java 22 is most likely not happening; everywhere else it has been losing mindshare, and the current cybersecurity laws versus WG21's attitude towards them don't help.



No, the amount of CUDA code written for PyTorch could easily be rewritten for AMD for a few million or tens of millions in investment. The problem is that it is damn near impossible to get good performance on AMD. For complicated CUDA programs like flash attention (a few hundred lines of code), no number of developers could write those few hundred lines for AMD and get the same performance.



Even worse: GPGPU is not only about LLMs or even ML. It's also for computer vision, signal processing, point-cloud processing, etc. OpenCV has a CUDA backend; Open3D and PCL the same. Even Apple is kind of worse than AMD regarding its ecosystem of libraries and open-source high-performance algorithms - when I tried to port an ICP pipeline to Apple Metal there was nothing there; most libraries and research code target only CUDA.



"AMD is among several companies contributing to the development of an OpenAI-led rival to Cuda, called Triton, which would let AI developers switch more easily between chip providers. Meta, Microsoft and Intel have also worked on Triton."

Last paragraph



This is a bit misleading, since Triton is a bit higher level than CUDA. But the idea is kind of right - there's active development of AMD and Intel backends, and PyTorch is investing in Triton as well.



NVDA's moat is overstated. There are several deep-pocketed players with pretty good AI chips. The big players are training models at such a large scale that they can afford to back them with different architectures. Smaller players use frameworks like PyTorch and TensorFlow, but those are backed by the big players buying from Nvidia.

But valuation isn't the NVDA trade right now; it's that there's still a bigger fool.



I wish people wouldn't post such pointless comments - the only users who get any value from reading your sentence are people who already share your view and can go "hah yeah!", while you couldn't be bothered to explain why it's your view to anyone who doesn't already think the same thing. Literally no benefit over not saying anything. Sorry to be blunt.



In my view, LLM is just the first step in the AI journey. The LLM boom will help NVidia to grow very fast and increase R&D. During this time, I expect new AI leaps that are not LLM-related. To be clear: I'm not talking about AGI, but rather, other practical advances.



> Intel has competitive GPUs

No, they don't. Both Intel and AMD compare their newest GPUs favorably against Nvidia's H100, which has been on the market longer and is soon to be replaced - and then it's never against the H100 NVL, for a reason.

Intel and AMD can sell their GPUs only at a lower profit margin. If they could match FLOPS per total cost of ownership, they would sell much better.

Both are years behind.



Benchmarks were just run; the MI300X is on par with or better than an H100. The next generation of MI (the MI325X) is coming out at the end of the year, and those specs look fantastic too, especially on the all-important memory front. 288GB is fantastic.

Both companies will leapfrog each other with new releases. Anyone who believes that there should only be a single vendor for all AI compute will quickly find themselves on the wrong side of history



> 288GB is fantastic

This reminds me of those "192GB is fantastic" people that bought maxed-out M2 Ultras for AI inference. It can be awesome, but you need a substantial amount of interconnect bandwidth and powerful enough local compute before it's competitive. In products where AI is an afterthought, you're fighting against much different constraints than just having a lot of high-bandwidth memory.
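
To put rough numbers on that point (a back-of-envelope sketch; the bandwidth figures are each part's published spec, and the 70B fp16 model is a hypothetical example): single-stream LLM decoding streams every weight through memory once per generated token, so memory capacity without matching bandwidth doesn't buy speed.

    #include <cstdio>

    // Upper bound for single-stream decoding: tokens/s <= bandwidth / model size,
    // since each token reads all weights. 70B params * 2 bytes (fp16) ~= 140 GB.
    int main() {
        const double model_gb = 140.0;
        const double bw_gbs[] = {800.0, 5300.0};   // M2 Ultra, MI300X (GB/s)
        const char* name[]    = {"M2 Ultra", "MI300X"};
        for (int i = 0; i < 2; ++i)
            printf("%-9s <= %.1f tokens/s (single stream)\n",
                   name[i], bw_gbs[i] / model_gb);
        return 0;
    }

That yields ceilings of roughly 6 vs. 38 tokens/s - which is why a big pool of memory behind slow compute or a weak interconnect isn't automatically competitive.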

I've always rooted for Team Red when they made an effort to do things open-source and transparently. They're a good role-model for the rest of the industry, in a certain sense. But I have to make peace with the fact that client-side AI running on my AMD machines isn't happening. Meanwhile, I've been using CUDA, CUDNN, CUBLAS, DLSS, on my Nvidia machine for years. On Linux!



Comparisons against the H100 I have seen are always:

  8x AMD MI300X (192GB, 750W)
  8x H100 SXM5 (80GB, 700W)

Never against 8x H100 NVL (188GB,

What the customer does not see is how AMD must spend 2 times more money to produce a chip that is competitive against an architecture that is soon 2 years old.



Stocks of companies that develop extremely niche and technical things are a tiny sliver of the stock market, and one that I actually think communities like HN would be better at valuing than the market is.

Technology stocks are the only ones I personally day trade, for that reason. Example: at the beginning of the pandemic lockdowns, any HN user could have anticipated increased internet usage, bought Cloudflare/Fastly stock, and made a lot of money before the rest of the market realized that CDN companies would significantly benefit from that specific macro event.

I'm not convinced the market (or market analysts) have a deep understanding of Nvidia's long-term advantage. If they did, we would have seen a much slower and steadier valuation increase rather than the meteoric rise. A meteoric stock price rise/fall means the market is having trouble valuing the stock.

In other words, stock prices don't add much to the conversation.



Intel's profit and revenue have declined for 3 consecutive years. Their price-to-earnings ratio is 36.

Nvidia's revenue is now greater than Intel's with 20% of the employees that Intel has. Their PE ratio is 78, roughly double that of Intel.

The market valued Nvidia as growing and Intel as not.



AMD has been working on GPGPU at least as long as nVidia.

AMD's "CTM" SDK was released in 2006, the same year as CUDA. In 2007 they released the Stream SDK. Then they had the "APP SDK" for a while, which IIRC coincided with their OpenCL phase. And now they have landed on ROCm.

Meanwhile nvidia has kept trucking with just CUDA.



Happy to see this acquisition landing in Finland, but I have to wonder how the purchase price is justified. Silo AI is primarily a consulting company doing "traditional" kinds of AI consulting projects. Their LLM project is like a side hustle for the company.



Personally, I am a bit sad that nothing stays in Finland. Too many promising companies have been sold to foreign countries recently, just because founders look for an exit strategy (not claiming that is the case here). Not good for Finland in general.



Well, in this case the purchase price appears grossly inflated. So even though Finland lost an AI startup, it gained money that is worth more than the startup. That money will to a large extent flow back into the Finnish economy in the form of taxes, investment in new startups, etc.



> That money will to a large extent flow back into the Finnish economy in the form of taxes, investment in new startups, etc.

Short term gains, in terms of taxes.

Otherwise, there are no guarantees of that. Shareholders might just build themselves a castle somewhere. Who knows. Or move away to a different country.



> Shareholders might just build themselves a castle somewhere.

And then be left with nothing?

Look at Silo's About page.

The people who started this are not slackers, nor did they already have so much money that this exit just buys them a third Porsche.

Do you think these people will pull back and do nothing as their ability to benefit from and shape the technological advances happening just increases with this exit?

I highly doubt that.

> Or move away to different country.

And then?

Capital is global. And as per these statistics [0], Finland ranked 4th in per-capita VC money invested in 2018, far ahead of France and Germany.

As per this [1] article from May, Finland received the most private equity and VC investment adjusted for GDP in all of Europe in 2023.

Finland is an attractive country to invest in, and I highly doubt that native speakers with an excellent local network - i.e., much more expertise than the average non-Finnish-speaking investor - will fail to notice that and capitalize on it.

[0]: https://www.statista.com/statistics/879124/venture-capital-a...

[1]: https://www.goodnewsfinland.com/en/articles/breaking-news/20...



Well, in Finland we seem to produce promising early-stage companies which are then eagerly sold to bigger players, whereas in Sweden there is the will (and capital) to keep growing them.



But if that happens (almost) every time a company shows potential, then you will likely never have a successful company in Finland where the decision making also stays in Finland and the money benefits the country on a larger scale.

There is a saying: "don't sell the cow when you can sell the milk" - maybe there is still some wisdom in it... but Finland keeps selling the cow and buying the milk back over and over again. And then they wonder why the state of the economy is so sad and why they never see a "new Nokia".



When acquiring a telecommunications network, I suspect that network size (user count) is far more relevant for valuation; if anything, having a low employee count with a massive network like WhatsApp's was probably a huge selling point.



Nice to see AMD finally doing something about competing in the compute market (LLM being the hottest thing at the moment)!

Though apparently MI300X is a fine product as well. But it still needs code.



This is an indictment of Lisa Su's own ROCm strategy - an implicit admission of failure, without explicitly admitting it. I predict this acquisition will cause even more software schizophrenia inside AMD as multiple conflicting teams pinball their way around towards nowhere in particular.



Former AMD employee here (2007-2012). AMD dropped the ball BADLY when, in 2012, then-VP Ben Bar-Haim decided to do a software purge and focused on retaining the over-bureaucratic folks of ATI/Markham. Net result: Nvidia could (and did) pick up a lot of very smart researchers and developers from AMD (I know a couple who were thoroughly disgusted with AMD management at that time).

He also trashed a lot of good and useful software projects for seemingly protectionist reasons (if it wasn't ATI/Markham, it was dumped).



Wasn't there a point at which AMD was actually looking at buying Nvidia, but Jensen wanted to be something like CEO? Jensen actually worked at AMD, so there was already a connection there.

Instead AMD bought ATI, which if I remember was barely hanging on. Not saying it was a bad purchase - just interesting that they made a bet on ATI (which always had buggy drivers in my experience), a company that hadn't really demonstrated success... how decisions ripple for a while.



Seems like an excellent exit strategy in hindsight: spend a gazillion dollars of investor money on AMD hardware, then get bought by AMD because you worked out how to use that hardware.



I don’t think it is possible to pay for access to LUMI. I know my company has been in talks about getting free access as it sits under utilized most of the time. These supercomputers are mostly vanity projects for EU politicians, there is no commercial use case.



I don't know about LUMI specifically, but top-tier scientific supercomputers typically have an 80+% utilisation rate:

https://doku.lrz.de/usage-statistics-for-supermuc-ng-1148309...

Smaller machines tend lower, from what I have seen. If you give a large enough pool of scientists access to significant compute resources, they will generally figure out how to saturate them. Also, scientific teams often can't pay top software engineers; lots of hardware is a way to compensate for inefficient code. If LUMI is underutilized to such an extent, someone is funking up.

There is of course no commercial use case for these computers. That's not the point of these machines.



An inverse Nadella, wherein you buy a chunk of OpenAI and they turn around and buy a bunch of Azure time (then give it away to people on ChatGPT cuz ain't no one about making money in that business)



The economic mood in Finland is downright depressed [1]. This kind of news is therefore extremely welcome because it indicates there's a way forward, out of the old industry doldrums where people are still moaning about closed paper mills and Nokia's failure 15 years ago.

$665M USD isn't a staggering number by Silicon Valley standards, but it's very significant for a nation of five million people that hasn't seen global startup successes like neighboring Sweden with Spotify and others.

[1] The actual level of depression is somewhat hard to track because Finns are always pessimistic regardless of how well they're doing. (This also makes them the happiest people on Earth in polls. The situation right now is never quite as bad as one had expected beforehand, so when a pollster calls to ask, the conclusion must be that they're pretty happy with things overall at that specific moment, but surely everything is going in the wrong direction anyway.)



Contrary to what you say, Finnish startups have been very successful. Here are just a couple of examples:

- Supercell sold an 81.4% stake to Tencent in 2018 at a valuation of $10.2 billion.

- Wolt was acquired by DoorDash in 2021 at a valuation of $8.1 billion.

The list is much longer, with startups that currently generate revenues of tens or hundreds of millions a year and that have not been sold.



These two are great success stories, but they’re also the only Finnish unicorn exits in the post-Nokia era.

The exits were somewhat less exciting to founders than these numbers suggest. Supercell sold 51% to SoftBank already in 2013 for 1.1B EUR. And Wolt’s purchase price was paid entirely in DoorDash stock which was down 75% by the time the lockups expired.

Startups generating low-hundreds of millions in annual revenue just aren’t unicorns anymore, unless they happen to be AI.



Both Supercell and Wolt still have their headquarters firmly in Finland. The founders and Finnish early investors have gained hundreds of millions or billions of euros in wealth for themselves, which they have further spent and invested in Finland. They have paid huge amounts of taxes and keep doing all of this since they are still located in Finland. It's hard to downplay the value of that, IMO. Rovio wasn't a complete disaster either: it made billions of euros over many years and was later sold to Sega for >$700 million. It still has its HQ in Finland.

There's plenty of interesting and fast growing startups still left here. For example Supermetrics, Varjo, Smartly, Iceye, Aiven to name a few. IMO you are being pessimistic.



In any case, I agree that the acquisition is great news and that the economy is in a depression. :) A huge part of it is that Finnish mortgages are mostly tied directly to Euribor, unlike in other euro countries, so when interest rates went up post-COVID, Finns got f*cked. Hopefully the Euribor rate will come down and the mortgage payments will start to shrink.



Which is great, but doesn’t move the needle of popular perception the same way as large acquisitions and IPOs do.

The start of the startup investment pipeline in Finland has been flowing pretty well. The outputs at the end of the pipeline have been more questionable. Silo’s acquisition is a positive example of activity at that end.



Does the desirable talent in this case have equity / future vesting equity that is a part of the price?

I just wonder, as many decades ago I was part of a company that wanted to get into a market. They bought a little startup, and over the course of a year everyone quit and the project eventually folded entirely ;) It was sorta hilarious, but also bizarre that the acquiring company didn't think of that.



I mean, buying a "private AI lab" doesn't sound to me like they got the purchase price's worth in IP - IP so desirable that nobody else has unlocked it, and that lends itself particularly well to being integrated with AMD tech.

Let's see if more details come to light, but a good part of that price is spent for sure on people.

It'd be hilarious indeed if they weren't able to retain those people, or hadn't properly incentivized them to stay.



I don't know how this specific acquisition is going to work out, but at least we can say one thing. This represents some kind of response to the constant chorus of "AMD don't appreciate the importance of software. AMD should invest more in software. CUDA, CUDA, CUDA" comments that one always hears when AMD is mentioned.

Of course there's room to debate the details here: would they have, perhaps, been better off investing that money in their existing software team(s)? Or spinning up (a) new team(s) from scratch? Who's to say. But at least it shows some intention on their part to beef up their software stance, and generally speaking that feels like a positive step to me.

But then again, I'm an AMD fan boi who is invested in the ROCm ecosystem, so I'm not entirely unbiased. But I think the overall point stands, regardless of that.



AMD has also been doing a bunch of hiring for their software teams. I've seen a few colleagues that AMD previously couldn't have afforded accept offers to work on GPU stuff.



Which tbf has been an apt description of AMD GPUs for the better part of a decade. Great hardware, god awful software and even worse long term software strategy.

It's why the 'fine wine' spin on the long term performance of AMD GPUs exists in gaming circles.



You're totally right. That said, spending $665M on an AI company seems, at first glance, like a step in the right direction. I'm sure there are a thousand ways they could have spent that much money, but hey... I do appreciate them at least trying to do something to resolve the issue. Another way to think of it: now there is a whole team that isn't dedicated to Nvidia.



Yeah I'm not arguing against this acquisition, just commenting on how things have been so far. At this point I'm kind of apathetic, it's good if whatever they do eventually leads them to fixing their software woes, and I'll come back to their stuff then. If not, I'm fine with sticking to CUDA for now.

Ultimately they're all GPU programming languages, once you're good with one, switching to another one is not that hard (as long as the supporting software is good of course).



I honestly don't understand how paywalled links get so much traction, most people probably can't even engage with the material. Thanks for the direct link to Silo AI's press release!



I'm curious how this deal happened. There are a lot of LLM shops out there - how did this Nordic company get the attention of AMD, and why did AMD think this company stood out from the crowd?



What's the difference to what they did in this acquisition?

Who's gonna improve tooling and develop drivers?

PhD level AI experts such as employed by Silo AI, probably, right?

EDIT: For context [0], Nvidia invested billions into CUDA development way back when it was unsexy.

Clearly a second mover won't need that much, Nvidia proved the market.

But a billion doesn't seem like a large sum for the potential upside of AMD catching a significantly larger share of the budget going into AI - many times the value of this acquisition.

0: https://www.newyorker.com/magazine/2023/12/04/how-jensen-hua...



I always wonder about these thought experiments. Given a few good, talented people and good management... you'd think they'd be able to put a team together, but maybe talent in this area is few and far between?

To be clear, I'm not disagreeing - I really don't know. But yeah, $665M... you could do a lot with that.



You are basically paying some premium for the fact that someone already did the hiring and built the talent pool and a cohesive team. Doing that from scratch is a multi-year project, so they basically bought a shortcut.



Yeah I get the general idea that you're paying more for the assembled team and software / experience.

It's just always wonky, as acquisitions generally don't seem to be 100% known quantities/outcomes. People pay big premiums for what sometimes turns out to be nothing.

That package of talent etc. is handy, but it also seems like it sometimes makes it harder to really know what you'll get out of it. It's an interesting dynamic.



The org structure and culture dynamics of large companies like AMD make it very difficult to achieve quality results when starting from scratch. $665M might well have been too much money, putting too much pressure for results for anything valuable to emerge. A $665M acquisition means they know exactly what they are getting, and they are getting it _now_.



> Imagine AMD simply put that $665M into tooling and driver development

Feels like a company saying they're going to "spend a few weeks paying down tech debt", which generally amounts to nothing getting done. Progress happens in creative pursuit of another goal and with hard constraints, in my experience. You can fix a specific piece of tech debt while working on a product feature that's adjacent to it, and you can create some great tooling and drivers while working on a product that needs them, but just setting aside the money for greenfield development often/usually ends up with it being set alight. I have worked at least one very well-funded place where the lack of product focus and thus lack of any constraints has just led to endless wheel spinning under the guise of "research".



SYCL also doesn't meet the listed requirement of "Intel and AMD teaming up". Intel seems to be the only hardware vendor that actually cares about SYCL, and AMD is instead backing HIP, which is "standardized" but boils down to "just take CUDA and run :s/cu/hip/g".

If AMD were to work on SYCL tooling and, say, build a "syclcc" next to "hipcc" that ingested SYCL and ran it on ROCm, I feel like interest in SYCL could grow, since Intel is already supporting it properly and it would then be an actual cross-vendor standard.

Codeplay (which is part of Intel) does provide "plugins" to run oneAPI (SYCL) on Nvidia and AMD hardware, which is great, but that is still being made, indirectly, by Intel, who in the end wants to sell Intel hardware.
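
For what it's worth, the ":s/cu/hip/g" quip is only mildly exaggerated - the HIP runtime API deliberately mirrors CUDA's name-for-name. A small sketch (CUDA version shown; the comments note the mechanical rename that tools like hipify perform):

    #include <cstdio>
    #include <cuda_runtime.h>   // HIP port: #include <hip/hip_runtime.h>

    __global__ void scale(float* x, float s, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= s;
    }

    int main() {
        const int n = 1024;
        float host[n];
        for (int i = 0; i < n; ++i) host[i] = 1.0f;

        float* dev;
        cudaMalloc(&dev, n * sizeof(float));                              // hipMalloc
        cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice); // hipMemcpy
        scale<<<n / 256, 256>>>(dev, 2.0f, n);  // launch syntax identical in HIP
        cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(dev);                                                    // hipFree
        printf("host[0] = %f\n", host[0]);  // expect 2.0
        return 0;
    }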
