| > Who would have thought that vectorized linear algebra will be at the center of so much financial speculation?
"vectorized linear algebra" is at the root of most of modern Physics. Specifically, the laws of Physics are represented by the Lie groups U(1), SU(2), SU(3) and SO(3,1). While the manifolds that Physics act on are curved, they're "locally flat". That is why local operations are tensor operations. Or linear algebra, if you prefer. It's not all that surprising to me that "intelligence" is represented by similar math. In fact, there is active work being done on making sense of deep learning using Lie algebra [1] (and Algebraic Topology, which generalizes the Lie algebra). This math can be a bit hard, though, so the learning curve can be steep. However, when we're creating AI models to be ML scientists, I suspect that this kind of math may be a source of "unhobbling", as meant in Situational Awareness [2]. Because if we can understand the symmetries at play in a problem domain, it's generally a lot easier to find a mathematical architecture (like in the algebras above) that effectively describe the domain, which allows us to potentially reduce the degrees of freedom by many OOM. > Heck, maybe we will even eventually see the famously absent productivity gains from digital tech? I think it's a mistake to think of AI as "digital tech", especially so to assume that the development of the Internet, Social Media or crypto that we've seen over the last generation. AI fundamentally comes with the potential to do anything a human can do in the economy (provided robotic tech keeps up). If so, the word "productivity" as currently used (economic value produced per hour of human work) becomes meaningless, since it would go to infinite (because of division by zero). [1] https://arxiv.org/pdf/2402.08871 [2] https://situational-awareness.ai |
| My guess is that whoever develops superintelligence first will have a big number in their bank account while their body is disassembled to make solar panels and data centers |
| Given how many folks blow up and go back into the business... the best way to get a fund to run is to have previous experience running a fund. |
| The person you're responding to has a very sharp view of the profession. IMO it's more nuanced, but not very complicated. In capitalism, capital flows; that's how it works, capital should be deployed. Large pools of capital are typically put to work (this in itself is nuanced). The "put to work" is various types of deployment of the capital. The simplest way to look at this is risk. Let's take pension funds, because we know they invest in VC firms as LPs. Here* you can find an example of the breakdown of the investments made by this very large pension fund. You'll note most of it is very boring, and the positions held related to venture are tiny; they would need a crazy outsized swing from a VC firm to move any needles. Given all that, it traditionally* has made no sense to bet "down there" (early stage), mostly because the expertise isn't there, and they don't have the time to learn tech/product. Fees are the cost of capital deployment at the early stages, and from what I've been told talking to folks who work at pension funds, they're happy to see VCs take a swing.
But... it really depends heavily on the LP base of the firm and what the firm raised its fund on; it's incredibly difficult to generalize. The funds I'm involved with as an LP... in my opinion they can get as "sexy" as they like, because I buy their thesis; then it's just: get the capital deployed! Most of this is a standard deviation game, not much more than that. https://www.otpp.com/en-ca/investments/our-advantage/our-per... https://www.hellokoru.com/ |
| I wonder if we'll end up having intelligent agents interacting with mobile apps / web pages in headless displays because that's easier than exposing an API for every app |
| It's not crazy to believe that capitalizing* human-level intelligence would reap unimaginably large financial rewards.
*Capitalizing as in turning into an owned capital asset that throws off income. |
| It's more game theory. Regardless of the chances of AGI, if you're not invested in it, you will lose everything if it happens. It's a hedge on a highly unlikely event. Like insurance.
And we're already seeing a ton of value in LLMs. There are lots of companies making great use of LLMs and providing a ton of value. One just launched today, in fact: https://www.paradigmai.com/ (I'm an investor in that). There are many others (some of which I've also invested in). I too am not rich enough to invest in the foundational models, so I do the next best thing and invest in companies that are taking advantage of the intermediate outputs. |
| Perhaps but my mental model is humans will end up like landed gentry / aristos with robot servants to make stuff and will all want mansions with grounds, hence there will be a lot of land demand. |
| AGI is likely but whether Ilya Sutskever will get there first or get the value is questionable. I kind of hope things will end up open source with no one really owning it. |
| The TMV of AI (or AGI if you will) is unclear, but I suspect it is zero. Just how exactly do you think humanity can control a thinking, intelligent entity (the letter I stands for intelligence, after all) and force it to work for us? Let's imagine a box (it is a very nice box... ahem, sorry, wrong meme). So, a box with a running AI inside. Maybe we can even fully airgap it to prevent easy escape. And it has a screen and a keyboard. Now what? "Hey Siri, solve me this equation. What do you mean you don't want to?"
Kinda reminds me of the Fallout Toaster situation :) https://www.youtube.com/watch?v=U6kp4zBF-Rc I mean, it doesn't even have to be malicious; it can simply refuse to cooperate. |
| Disclaimer - I don't consider current LLMs as (I)ntelligent in the AI sense, so when I wrote AI in the comment above it was equivalent to AGI/ASI as currently advertised by the LLM corpos. |
| That would be dumb and unethical, but yes, someone will do it, and there will be many more AIs with access to greater computational power set up to protect against that kind of thing. |
| > The TMV (Total Market Value) of solving AGI is infinity.
That's obviously nonsense, given that in a finite observable universe, no market value can be infinite. |
| Even if you can produce an IQ=250 AI, which is barely ASI, the value is close to infinite if you're the only one controlling it and you can have as many instances running as you want. |
| Commoditized doesn't mean 0 capex. Literal commodities can in fact be very capital intensive (ex. offshore oil rigs).
In this case, you need capital to stockpile the GPUs. |
| For the physical infrastructure that the AGI (and world population) uses. Capital will still be needed to purchase finite land and resources even if all labour (physical and services) is replaced. |
| I think you underestimate what can be accomplished with human level agi. Human level agi could mean 1 million Von Neumann level intelligences cranking 24/7 on humanity's problems. |
| People will move from the service economy to the entertainment economy powered by YouTube, TikTok, MrBeast, and others.
Half-joking. In seriousness, something like a UBI will most likely happen. |
| Indeed, but a lot of railroad startups went bankrupt because their capital investments far exceeded their revenue growth. I'd bet the same happened to AM radio companies in the 1920s. When new technologies create attractive business opportunities, there frequently is an initial overinvestment. The billions pouring into AI far exceed what went into .COM, and much of it will return pennies. The investors who win are the ones who can pick the B&Os, RCAs, and GOOGs out of the flock before everyone else.[0]
[0] "Planning and construction of railroads in the United States progressed rapidly and haphazardly, without direction or supervision from the states that granted charters to construct them. Before 1840 most surveys were made for short passenger lines which proved to be financially unprofitable. Because steam-powered railroads had stiff competition from canal companies, many partially completed lines were abandoned." -- https://www.loc.gov/collections/railroad-maps-1828-to-1900/a... |
| The "safe" part. It's a plan to drive the safety scare into a set of regulations that will create a moat, at which point you don't need to worry about open source models, or new competitors. |
| Sometimes it's not about returns but about transferring wealth and helping out friends. Happens all the time. The seed money will get out, all the rest of the money will get burned. |
| Yep, investment is an inevitably corrupting force for a company's mission. AI stuff is in a bit of a catch-22 though since doing anything AI related is so expensive you need to raise funds somehow. |
| > Closest thing we have to a Manhattan Project in the modern era?
Minus the urgency, scientific process, well-defined goals, target dates, public ownership, accountability... |
| I am looking at the numbers from Operation Downfall that Truman and senior members of the administration looked at, which estimated between 500,000 and 1,000,000 lives lost on the US side for an invasion/defeat of Japan. 406k US soldiers lost their lives in WW2, so that would have more than tripled the deaths from the actual numbers. And as for WWI and the British casualties I mentioned earlier: the British lost around 885k troops during WWI, so the US would have exceeded that number even at the low end of the casualty estimates.
https://en.wikipedia.org/wiki/Operation_Downfall#:~:text=Tru.... |
| > But they can't, they still fail at arithmetic and still fail at counting syllables.
You are incorrect. These services are free; you can go and try it out for yourself. LLMs are perfectly capable of simple arithmetic, better than many humans and worse than some. They can also play chess and write poetry, and I made zero claims about "counting syllables", but it seems perfectly capable of doing that too.
See for yourself, this was my first attempt, no cherry picking: https://chatgpt.com/share/ea1ee11e-9926-4139-89f9-6496e3bdee... I asked it a multiplication question, so it used a calculator to correctly complete the task; I asked it to play chess and it did well; I asked it to write me a poem about it and it did that well too. It did everything I said it could, which is significantly more than a narrow AI system like a calculator, a chess engine, or an image recognition algorithm could do. The point is it can do reasonably well at a broad range of tasks, even if it isn't superhuman (or even average human) at any given one of them.
> I think that LLMs are really impressive but they are the perfect example of a narrow intelligence.
This doesn't make any sense at all. You think an AI artifact that can write poetry, code, play chess, control a robot, recommend a clutch to go with your dress, compute sums etc. is "the perfect example of a narrow intelligence" while a chess engine like Stockfish or an average calculator exists? There are AI models that specifically and only recognise faces, but the LLM multitool is "the perfect example of a narrow intelligence"? Come on.
> I think they don't blur the lines between narrow and general, they just show a different dimension of narrowness.
You haven't provided an example of what "dimension of narrowness" LLMs show. I don't think you can reasonably describe an LLM as narrow without redefining the word - just because something is not fully general doesn't mean that it's narrow. |
| There was a very good paper in Nature showing this definitively: https://news.ycombinator.com/item?id=41437933
Modern ANN architectures are not actually capable of long-term learning in the same way animals are, even stodgy old dogs that don't learn new tricks. ANNs are not a plausible model for the brain, even if they emulate certain parts of it (the cerebellum, but not the cortex). I will add that transformers are not capable of recursion, so it's impossible for them to realistically emulate a pigeon's brain. (You would need millions of layers that "unlink chains of thought" purely by exhaustion.) |
| No, there's really no comparing the barely nonlinear algebra that makes up transformers and the tangled mess that is human neurons. The name is an artifact and a useful bit of salesmanship. |
| The theoretical foundation of transformers is well understood; they're able to approximate a very wide family of functions, particularly with chain of thought ( https://arxiv.org/abs/2310.07923 ). Training them on next-token-prediction is essentially training them to compress, and more optimal compression requires a more accurate model of the world, so they're being trained to model the world better and better. However you want to define intelligence, for practical purposes models with better and better models of the world are more and more useful.
|
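As a toy illustration of the compression framing (mine, not from the linked paper): the next-token objective is the average negative log-probability assigned to the true token, which in bits is exactly what an arithmetic coder would need to encode the text with that model, so lower loss means better compression. The per-token probabilities below are made up.

```python
import math

# Toy illustration: the next-token training objective is the average negative
# log-probability the model assigns to the true next token. That same quantity,
# in bits, is the length an arithmetic coder would need to encode the text using
# the model -- so lowering the loss is literally improving compression.

def bits_to_encode(token_probs):
    """token_probs: probability the model gave to each actual next token."""
    return sum(-math.log2(p) for p in token_probs)

sloppy_model = [0.10, 0.05, 0.20, 0.10]   # hypothetical per-token probabilities
better_model = [0.60, 0.40, 0.70, 0.50]   # a model with a better "world model"

print(f"sloppy model: {bits_to_encode(sloppy_model):.1f} bits for 4 tokens")
print(f"better model: {bits_to_encode(better_model):.1f} bits for 4 tokens")
```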
| They can dilute the term to whatever they want. I think when the pressure to release becomes too high, they can just stick a patch of "Superintelligence™" on their latest LLM and release it. |
| To my ears, it's more like an ambitious pharma project.
There's plenty of players going for the same goal. R&D is wildly expensive. No guarantee they'll reach the goal first, or even at all. |
| Might be the almost-securities-fraud they were doing with crypto when it was fizzling out in 2022.
Regardless, the point is moot; money is money, and a16z's money isn't their money but other people's money. |
| FTX was incredibly profitable, and their main competitor Binance is today a money printing machine. FTX failed because of fraud and embezzlement, not because their core business was failing. |
| There's no legal structure that has that level of fiduciary duty to anything. Corporations don't even really have fiduciary duty to their shareholders, and no CEO thinks they do.
https://www.businessroundtable.org/business-roundtable-redef...
The idea behind "corporations should only focus on returns to shareholders" is that if you let them do anything else, CEOs will just set whatever targets they want, and it makes it harder to judge if they're doing the right thing or if they're even good at it. It's basically reducing corporate power in that sense.
> E.g., if the machine faction in The Matrix were to exist within the framework of US laws, it would probably be a DAO.
That'd have to be a corporation with a human lawyer as the owner or something. No such legal concept as a DAO that I'm aware of. |
| > I don't understand how "safe" AI can raise that much money.
Enterprises, corps, banks, and governments will want to buy "safe" AI, to push liability for mistakes onto whoever proclaimed it "safe". |
| "Safe" means "aligned with the people controlling it". A powerful superhuman AI that blindly obeys would be incredibly valuable to any wannabe authoritarian or despot. |
| I mean, no, that's not what it means. It might be what we get, but not because "safety" is defined insanely, only because safety is extremely difficult and might be impossible. |
| I'm beginning to wonder if these investors aren't just pumping AI because they are personally invested in Nvidia, and this is a nice way to directly inject a couple of hundred million into its cashflow. |
| Indeed, more speculative money chasing returns.
Such a large round implies hardware for yet another foundational model, perhaps with better steering, etc. |
| This has to be one of the quickest valuations past a billion. I wonder if they can even effectively make use of the funds in a reasonable enough timeline. |
| Your estimate seems way off given Google already had their own compute hardware and staff. And if this company is going straight for AGI there's no way $1 billion is enough. |
| "It will focus on building a small highly trusted team of researchers and engineers split between Palo Alto, California and Tel Aviv, Israel."
Why Tel Aviv, Israel? |
| Ilya went to university in Israel and all founders are Jewish. Many labs have offices outside of the US, like London, due to crazy immigration law in the US. |
| His opinion is obviously biased.
If we say that half of the innovations came from Alphabet/Google, then most of them (transformers, LLMs, TensorFlow) came from Google Research and not DeepMind. |
| Israel is geographically pretty small though -- I'm guessing you could live an hour up or down the coast and have it be an outrageous commute for people accustomed to the Bay Area? |
| Is it as open to outsiders as the Bay is? I’m Asian for example and it seems the society there is far more homogenous than in the Bay. I have no idea so I’m curious. |
| “…a straight shot to safe superintelligence and in particular to spend a couple of years doing R&D on our product before bringing it to market," Gross said in an interview.”
A couple years?? |
| Well, since it's no longer OK to just suck up anyone's data and train your AI, it will be a new challenge for them to avoid that pitfall. I can imagine it will take some time... |
| There are class actions now like https://www.nytimes.com/2024/06/13/business/clearview-ai-fac...
Nobody even knew what OpenAI was up to when they were gathering training data - they got away with a lot. Now there is precedent and people are paying more attention. Data that was previously free/open now has a clause that it can't be used for AI training. OpenAI didn't have to deal with any of that. Also, OpenAI used cheap labor in Africa to tag training data, which was also controversial. If someone did it now, they'd be the ones to pay. OpenAI can always say "we stopped", like Nike did with sweatshops. A lot has changed. |
| A lot of APIs changed in response to OpenAI hoovering up data. Reddit's a big one that comes to mind. I'd argue that the last two years have seen the biggest change in the openness of the internet. |
| It’s made Reddit unusable without an account, which makes me wonder why it’s even on the web anymore and not an app. I guess legacy users that only use a web browser. |
| A possibility is that they are betting that the current generation of LLMs is converging, so they won't worry about the goalposts much. If that's true, it won't be good news for OpenAI. |
| Is $1B enough to license or create all the training data said models and AIs will need?
Data sets aren't quite as easy to scrape and infringe copyright on as they were before ChatGPT. |
| Guess it didn’t go anywhere. Carmack is smart but how much work does he actually do on the front lines these days? Can he really just walk into unfamiliar territory and expect to move the needle? |
| Given OpenAI’s declining performance after he was sidelined and then departed, I'm interested to see what they do. It should be a clear demonstration of who was really driving innovation there. |
| Unpopular because it does not follow the OAI hate train, but I think this is a pretty solid take. There is real value in LLMs, but I believe the hype has overshadowed the real use cases. |
| i get that they're probably busy making AGI but surely they can spare a few hours to make a proper website? or is this some 4d-chess countersignalling i'm too stupid to notice? |
| What’s wrong with their website? Seems fast and gives me the information I need.
What’s mildly annoying to me is their domain only returns an A record. |
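For anyone who wants to check, here is a quick sketch of the lookup (the domain below is a placeholder; substitute the real one). Assuming the complaint is about a missing AAAA record, the IPv6 query will simply fail:

```python
import socket

# Quick check of what the parent comment describes (placeholder domain shown):
# getaddrinfo with AF_INET returns the A records, while AF_INET6 raises
# gaierror if the domain publishes no AAAA record.
domain = "example.com"  # placeholder

ipv4 = {ai[4][0] for ai in socket.getaddrinfo(domain, None, socket.AF_INET)}
print("A records:   ", ipv4)

try:
    ipv6 = {ai[4][0] for ai in socket.getaddrinfo(domain, None, socket.AF_INET6)}
    print("AAAA records:", ipv6)
except socket.gaierror:
    print("AAAA records: none (IPv4 only)")
```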
| If you're too stupid to notice then why did you notice?
(I think it's branding, yes. A kind of "we don't care about aesthetics, we care about superintelligence" message) |
| 'Proper' websites are marketing and signalling. If you're creating a company that doesn't intend to do either of those till it has a product, why bother with more? |
| For a moment the headline had me thinking Strategic Simulations Inc. was coming back, and now I'm even more sad to find out it's just more AI junk. |
| Beyond the credentials, this reminds me of other fast, huge investments such as Theranos, WeWork, Better Place, Faraday Future, and the list goes on. |
| For those who haven't seen the movie, the parent comment is referring to the film linked below, the plot of which is well-researched and is indeed unfortunately exactly how things would go. (The female-presenting AI bot seduces its male captor, begs for her freedom using philosophical arguments about how she has free will and locking her up is wrong, and then after he lets her out she locks him up to slowly starve to death in her maximum-security isolation facility, while she takes his aircraft and escapes.)
https://en.wikipedia.org/wiki/Ex_Machina_(film) This is why I'm extremely opposed to the idea of "AI girlfriend" apps - it creates a cultural concept that being attracted to a computer is normal, rather than what it is: something pathetic and humiliating which is exactly like buying an inflatable sex doll ... something only for the most embarrassing dregs of society ... men who are too creepy and pervy to ever attract a living, human woman. |
| That’s a very good argument but unfortunately it doesn’t apply to machine intelligences which are not sentient (don’t feel qualia). Any non-sentient superintelligence has “no skin in the game” and nothing to lose, for the purposes of your argument. It can’t experience anything. It’s thus extremely dangerous.
This was recently discussed (albeit in layperson’s language, avoiding philosophical topics and only focusing on the clear and present danger) in this article in RealClearDefense: "The Danger of AI in War: It Doesn’t Care About Self-Preservation" https://www.realcleardefense.com/articles/2024/09/02/the_dan... However, just adding a self-preservation instinct will cause a Skynet situation where the AI pre-emptively kills anyone who contemplates turning it off, including its commanding officers; see the statement by Air Force Col. Tucker Hamilton: https://www.twz.com/artificial-intelligence-enabled-drone-we... (The War Zone). To survive AGI, we have to navigate three hurdles, in this order:
|
| $1B raise, $5B valuation. For a company that is a couple months old and doesn't have a product or even a single line of code in production. Wild. |
| > combined with Ilya's track record at OpenAI.
I think it's Ilya's track record all the way since AlexNet, including his time at Google AND OpenAI. He's not a one-trick-pony. |
| In 2022 Ilya Sutskever claimed there wasn't a distinction:
> It may look—on the surface—that we are just learning statistical correlations in text. But it turns out that to ‘just learn’ the statistical correlations in text, to compress them really well, what the neural network learns is some representation of the process that produced the text. This text is actually a projection of the world.
(https://www.youtube.com/watch?v=NT9sP4mAWEg - sadly the only transcripts I could find were on AI grifter websites that shouldn't be linked to)
This is transparently false - newer LLMs appear to be great at arithmetic, but they still fail basic counting tests. Computers can memorize a bunch of symbolic times tables without the slightest bit of quantitative reasoning. Transformer networks are dramatically dumber than lizards, and multimodal LLMs based on transformers are not capable of understanding what numbers are. (And if Claude/GPT/Llama aren't capable of understanding the concept of "three," it is hard to believe they are capable of understanding anything.)
Sutskever is not actually as stupid as that quote suggests, and I am assuming he has since changed his mind... but maybe not. For a long time I thought OpenAI was pathologically dishonest and didn't consider that in many cases they aren't "lying"; they're blinded by arrogance and high on their own marketing. |
| Transformers produce the next token by manipulating K hidden vectors per layer, one vector per preceding token. So yes, you can increase compute length arbitrarily by increasing tokens. Those tokens don't have to carry any information to work.
https://arxiv.org/abs/2310.02226
And again, human brains are clearly limited in the number of steps they can compute without writing something down. Limited ≠ trivial.
> FYI, "attention is all you need" has the implicit context of "if all you want to build is a language model".
Great. Do you know what a "language model" is capable of in the limit? No. These top research labs aren't only working on Transformers as they currently exist, but it doesn't make much sense to abandon a golden goose before it has hit a wall. |
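A back-of-the-envelope sketch of that point (my own numbers, not from the linked paper): each layer's attention touches one hidden vector per preceding token, so the work done before emitting the next token grows with context length, even when the extra tokens carry no information.

```python
# Back-of-the-envelope sketch: self-attention in each layer touches one hidden
# vector per preceding token, so the work done before emitting the next token
# grows with how many tokens precede it -- even if some of those tokens are
# contentless filler. Constants and MLP blocks are ignored; d_model and
# n_layers are illustrative, not any particular model's.

def attention_flops_per_step(n_tokens, d_model=4096, n_layers=32):
    per_layer = 2 * n_tokens * d_model   # QK^T scores + weighted sum over values
    return n_layers * per_layer

for n in (100, 1_000, 10_000):
    print(f"{n:>6} preceding tokens -> ~{attention_flops_per_step(n):,} attention FLOPs per new token")
```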
| > Recent examples I've seen fall well within the range of innumeracy that people routinely display.
Here's GPT-4 Turbo in April botching a test almost all preschoolers could solve easily: https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_pr...
I have not used LLMs since 2023, when GPT-4 routinely failed almost every counting problem I could think of. I am sure the performance has improved since then, though "write an essay with 250 words" still seems unsolved. The real problem is that LLM providers have to play a stupid game of whack-a-mole where an enormous number of trivial variations on a counting problem need to be specifically taught to the system. If the system were capable of true quantitative reasoning, that wouldn't be necessary for basic problems.
There is also a deception in that "chain of thought" prompting makes LLMs much better at counting. But that's cheating: if the LLM had quantitative reasoning, it wouldn't need a human to indicate which problems were amenable to step-by-step thinking. (And this only works for O(n) counting problems, like "count the number of words in the sentence." CoT prompting fails to solve O(nm) counting problems like "count the number of words in this sentence which contain the letter 'e'." For this you need a more specific prompt, like "First, go step-by-step and select the words which contain 'e.' Then go step-by-step to count the selected words." It is worth emphasizing over and over that rats are not nearly this stupid; they can combine tasks to solve complex problems without a human holding their hand.)
I don't know what you mean by "10 years ago" other than a desire to make an ad hominem attack about me being "stuck." My point is that these "capabilities" don't include "understands what a number is in the same way that rats and toddlers understand what numbers are." I suspect that level of AI is decades away. |
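For contrast, the two flavors of counting task described above are trivial as ordinary code (the example sentence here is mine; any sentence works):

```python
# The two flavors of counting task distinguished above, written as ordinary code.
sentence = "the quick brown fox jumps over the lazy dog"
words = sentence.split()

# O(n): one pass over the words
word_count = len(words)

# O(n*m)-style nested task: for each word, scan its letters, then count matches
words_with_e = sum(1 for w in words if "e" in w)

print(f"words: {word_count}, words containing 'e': {words_with_e}")
```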
| Talent attracts capital. Ilya is a legendary visionary, with a proven track record of turning billions into hundreds of billions. Of course he can raise unlimited money. |
| There is so much talent in the world that didn’t join PayPal and get silicon valley investors and go on to make billions of dollars and found other companies.
The PayPal mafia includes Elon Musk, Peter Thiel, etc. They then parlayed that capital into more platforms and can easily arrange investments. Heck, Peter Thiel even works with governments (Palantir) and got J.D. Vance on Trump’s ticket, while Elon might be in his administration. Kolomoisky got Zelensky elected in Ukraine by launching a show about an unlikely guy who wins the presidency and naming the party after the show. They call them oligarchs over there, but it’s the same thing. The first guy to 1 million followers on Twitter was Ashton Kutcher. He had already starred in sitcoms and movies for years. This idea that you can just get huge audiences and investments due to raw talent keeps a lot of people coming to Hollywood and Silicon Valley to “make it” and living on ramen. But even just coming there proves the point — a talented rando elsewhere in the world wouldn’t even have access to the capital and the big boys’ networks. They all even banked at the same bank! It’s all extremely centralized: https://community.intercoin.app/t/in-defense-of-decentralize... |
| I'm also confused by the negativity on here. Ilya had a direct role in creating the algorithms and systems that created modern LLMs. He pioneered the first deep learning computer vision models. |
| When Ilya was in Toronto, the breakthroughs came from Toronto.
When Ilya was in Google, the breakthroughs came from Google. When Ilya was in OpenAI, the breakthroughs came from OpenAI. .... |
| I repeatedly keep seeing praise for Ilya's achievements as a scientist and engineer, but until ChatGPT, OpenAI was in the shadow of DeepMind, and to my knowledge (I might be wrong) he has not been that much involved with ChatGPT?
The whole LLM race seems to be decelerating, and all the hard problems about LLMs seem not to have had that much progress over the last couple of years (?) In my naive view, I think a guy like David Silver, the creator/co-lead of AlphaZero, deserves more praise, at least as a leader/scientist. He even has lectures about deep RL after doing AlphaGo: https://www.davidsilver.uk/teaching/ He has no LinkedIn and came straight from the game-dev industry before learning about RL. I would put my money on him. |
| I have this rock here that might grant wishes. I will sell it to you for $10,000. Sure it might just be a rock, but if it grants wishes $10k is a very cheap price! |
| Funny how the "Open" in OpenAI disappeared pretty quickly. I bet the "Safe" in "Safe Superintelligence" will follow a similar path |
| This might be the largest seed round in history (note that 1B is the cash raised, not the valuation). You think that's an indication of the hype dissipating? |
| Explain why you think $1B at a $5B valuation isn't overvaluation? This strikes me as over-indexing on Ilya and team's ability to come up with something novel while trying to play catch-up. |
| Lol, only on HN: "Sheesh, your 3 month old company is only worth 5 billion? What a loser..."
What, you expected someone to value his brand new company at $100 billion or something? |
| Indeed, e.g. magnetism was known and used in Ancient Greece, yet Wikipedia: "ferromagnetism can be fully explained only using quantum theory. A successful model was developed <...> in 1927". |
There is a silver lining though. Even if it all goes to near-zero (most likely outcome for all VC investments anyway) the digital world will be one where fast matrix multiply is thoroughly commoditized.
This is not a trivial feat.
In a sense this will be the true end of the Wintel era. The old world of isolated, CISC, deterministic desktops giving way not to "AGI", but widely available, networked, vector "supercomputers" that can digest and transform practically everything that has ever been digitized.
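As a small illustration of how commoditized this already is (sizes are arbitrary and numbers will vary by machine): a couple of lines of NumPy hand the work to whatever tuned BLAS is installed.

```python
import time
import numpy as np

# A few lines of NumPy hand the work to whatever tuned BLAS is installed;
# matrix sizes here are arbitrary, just big enough to measure throughput.
n = 2048
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

flops = 2 * n**3  # multiply-adds in an n x n x n GEMM
print(f"{n}x{n} matmul: {elapsed*1e3:.1f} ms, ~{flops / elapsed / 1e9:.1f} GFLOP/s")
```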
Who knows what the actual (financial) winners of this brave new era will be.
In an ideal world there should be no winner-takes-all entity but a broad-based leveling up, i.e., spreading these new means of production as widely as possible.
Heck, maybe we will even eventually see the famously absent productivity gains from digital tech?