(comments)

Original link: https://news.ycombinator.com/item?id=40361128

This post discusses the role of artificial intelligence (AI) in modern society, particularly in building a "brand" or presence through viral media coverage. The author argues that AI, as represented by models like ChatGPT, cannot match the human drama and unpredictability of figures like Elon Musk, Peter Thiel, or Satya Nadella. However, the author acknowledges that AI's value goes beyond its current capabilities, especially for tasks it can consistently perform faster and cheaper than humans, citing education and scheduling as examples. Despite concerns about ethical and environmental impacts, the author believes major advances in AI are possible and desirable, but warns that it may not be as revolutionary or immediately game-changing as some expect. The author also reflects on the historical trajectory of technological innovation, noting long timelines and iterative processes.

Related articles

Original text


Interesting, both Karpathy and Sutskever are gone from OpenAI now. Looks like it is now the Sam Altman and Greg Brockman show.

I have to admit, of the four, Karpathy and Sutskever were the two I was most impressed with. I hope he goes on to do something great.



Top 6 science guys are long gone. OpenAI is run by marketing, business, software and productization people.

When the next wave of new deep learning innovations sweeps the world, Microsoft eats what's left of them. They make lots of money, but don't have a future unless they replace what they lost.



AI has now evolved beyond just the science and its biggest issue is in the productization. Finding use cases for what's already available ALONG with new models will be where success lies.

ChatGPT is the number 1 brand in AI and as such needs to learn what it's selling, not how its technology works. It always sucks when mission and vision don't align with the nerds' ideas, but I think it's probably the best move for both parties.



> AI has now evolved beyond just the science

Pretty weak take there bud. If we just look at the Gartner Hype Cycle that marketing and business people love so much it would seem to me that we are at the peak, just before the downfall.

They are hyping hard to sell more, when they should be prepping for the coming dip, building their tech and research side more to come out the other side.

Regardless, a tech company without the inventors is doomed to fail.



I'm siding with you here. The same is happening at Google, but they definitely have momentum from past decades, so even if they go "full Boeing", there's a long way to fall.

Meanwhile, OpenAI (and the rest of the folks riding the hype train) will soon enter the trough. They're not diversified and I'm not sure that they can keep running at a loss in this post-ZIRP world.



>ChatGPT is the number 1 brand in AI and as such needs to learn what it's selling, not how its technology works.

I'm not as in tune as some people here so: don't they need both? With the rate at which things are moving, how can it be otherwise?



They do need both but it seems like they have enough engineering talent to keep improving. Time will tell now that Ilya is out, but I expect they have enough cultural cachet to attract excellent engineers even if they aren’t as famous as Ilya and Karpathy.

They have a strong focus on making the existing models fast and cheap without sacrificing capability which is music to the ears of those looking to build with them.



> With the rate at which things are moving

Things have been moving fast because we had a bunch of top notch scientists in companies paired with top notch salesmen/hype machines. But you need both in combination.

Hypemen make promises that can't be kept, but get absurd amounts of funding for doing so. Scientists fill in as many of the gaps as possible, but also get crazy resources due to the aforementioned funding. Obviously this train can't go forever, but I think you might understand that one of these groups is a bit more important than the other while one of these groups is more of a catalyst (makes things happen faster) for the other.



Ain’t there this pattern that innovation comes in waves, and that the companies of the first wave most often just die, but the second and third waves build upon their artifacts and can be successful in the longer run?

I see this coming for sure for OpenAI and I do my part by just writing this comment on HN.



> ChatGPT is the number 1 brand in AI

Not for long. They have no moat. Folks who did the science are now doing science for some other company, and will blow the pants off OpenAI.



I think you massively underestimate the power of viral media coverage and the role it plays in building a “brand.” You’ll never replicate the Musk/Altman/Satya soap opera again. ChatGPT will forever be in the history books as the Kleenex of LLM AI.


“Its biggest issue is in the productization.”

That’s not true at all. The biggest issue is that it doesn’t work. You can’t actually trust AI systems and that’s not a product issue.



> That’s not true at all. The biggest issue is that it doesn’t work. You can’t actually trust AI systems and that’s not a product issue.

I don't know about that, it seems to work just fine at creating spam and clone websites.



> You can’t actually trust AI systems

For a lot of (very profitable) use cases, hallucinations and 80/20 are actually more than good enough. Especially when they are replacing solutions that are even worse.



It works fine for some things. You just need a clearly defined task where LLM + human reviewer is on average faster (ie cheaper) than a human doing the same task themselves without that assistance.


Given the fact that you need to review, research, and usually correct every detail of AI output, how can that be faster than just doing it right yourself in the first place? Do you have some examples of such tasks?


It’s a core technology issue.

The AI isn’t the product, e.g. the ChatGPT interface is the main product that is layered above the core AI tech.

The issue is trustworthiness isn’t solvable by applying standard product management techniques on a predictable schedule. It requires scientific research.



Eh maybe from a company point of view.

But this race to add 'AI' into everything is producing a lot of nonsense. I'd rather go full steam ahead on the science and the new models, because that is what will actually get us something decent, rather than milking what we already have.



Agree in general. While issues remain in making and using AI, there is plenty of utility that doesn't require new science, just maturation of deployment. For those who say it's junk, I can only speak for myself and disagree.


If we look at history of innovation and invention it’s very typical the original discovery and final productization are done by different people. For many reasons, but a lot of them are universal I would say.

E.g. Oppenheimer’s team created the bomb, then following experts finetuned the subsequent weapon systems and payload designs. Etc.



> If we look at history of innovation and invention it’s very typical the original discovery and final productization are done by different people.

You don't really need to look at history, that's basically science vs engineering in a nutshell.

Maybe history could tell us if that's an accident or a division that arose out of 'natural' occurrence. But I suppose it's a question for an economist, psychologist, or sociologist how natural it could really be anyway, or whether it's biased by, e.g., academics not being financially motivated because there happens to be no money there, so they don't care about productizing and leave it for others who are so motivated.



> that's basically science vs engineering in a nutshell

Right, because those are two very different things. Science is about figuring out truths of how reality works. Engineering is about taking those truths and using them to make useful things.

People often talk in a way that conflates the two, but they are completely different activities.



With A-bombs, the weapons systems design needed people who just got huge kicks out of explosions (not kidding here). I guess it’s partially about personal internal motivations, and it might be more a matter of chance whether the thing you are intrinsically motivated to do falls under engineering or science (in both cases you get the feeling the greats did the stuff they wanted to do regardless of the categorizations applied to their discipline; you get more capital affinity in engineering, of course).


An AGI could replace human experts at tasks that don't require physical embodiment, like diagnosing patients, drafting contracts, doing your taxes etc. If you still do those manually and don't just offload all of it to ChatGPT, then you would greatly benefit from a real AGI that could do those tasks on its own.

And no, using ChatGPT like you use a search engine isn't ChatGPT solving your problem, that is you solving your problem. ChatGPT solving your problem would mean it drives you, not you driving it like it works today. When I hired people to help me do taxes they told me what papers they needed and then they did my taxes correctly without me having to look it through and correct them, an AGI would work like that for most tasks, it means you no longer need to think or learn to solve problems since the AGI solves them for you.



Let's take a step back from LLMs. Could you accept the network of all interconnected computers as a generally intelligent system? The key part here that drives me to ask this is:

> ChatGPT solving your problem would mean it drives you, not you driving it like it works today.

I had a very bad Reddit addiction in the past. It took me years of consciously trying to quit in order to break the habit. I think I could make a reasonable argument that Reddit was using me to solve its problems, rather than myself using it to solve mine. I think this is also true of a lot of systems - Facebook, TikTok, YouTube, etc.

It's hard to pin down all computers as an "agent" in the way we like to think about that word and assign some degree of intelligence to, but I think it is at least an interesting exercise to try.



Companies are general intelligences and they use people, yes. But that depends on humans interpreting the data Reddit users generate and updating their models, code and algorithms to adapt to that data; the computer systems alone aren't general intelligences if you remove the humans.

An AGI could run such a company without humans anywhere in the loop, just like humans can run such a company without an AGI helping them.

I'd say a strong signal that AGI has happened would be large, fully automated companies without a single human decision-maker in the company, no CEO etc. Until that has happened I'd say AGI isn't here; if that happens it could be AGI, but I can also imagine a good enough script doing it for some simple thing.



> An AGI could replace human experts at tasks that don't require physical embodiment, like diagnosing patients, drafting contracts, doing your taxes etc.

How come the goal posts for AGI are always the best of what people can do?

I can't diagnose anyone, yet I have GI.

Reminds me of:

> Will Smith: Can a robot write a symphony? Can a robot take a blank canvas and turn it into a masterpiece?

> I Robot: Can you?



> How come the goal posts for AGI are always the best of what people can do?

Not the best, I just want it to be able to do what average professionals can do because average humans can become average professionals in most fields.

> I can't diagnose anyone, yet I have GI.

You can learn to, and an AGI system should be able to learn to as well. And since we can copy AGI learning, it means that if it hasn't learned to diagnose people yet then it probably isn't an AGI, because an AGI should be able to learn that without humans changing its code, and once it has learned it once, we copy it forever and now the entire AGI knows how to do it.

So, the AGI should be able to do all the things you could do if we include all versions of you that learned different fields. If the AGI can't do that then you are more intelligent than it in those areas, even if the singular you isn't better at those things than it is.

For these reasons it makes more sense to compare an AGI to humanity rather than individual humans, because for an AGI there is no such thing as "individuals", at least not the way we make AI today.



If they can't learn then they don't have general intelligence; without learning there are many problems you won't be able to solve that average (or even very dumb) people can solve.

Learning is a core part to general intelligence, as general intelligence implies you can learn about new problems so you can solve those. Take away that and you are no longer a general problem solver.



That's a really good point. I want to define what I think of intelligence as being so we are on the same page: it is the combination of knowledge and reason. An example of a system with high knowledge and low reason is Wikipedia. An example of a system with high reason and low knowledge is a scientific calculator. A highly intelligent system exhibits aspects of both.

A rule based expert intelligence system can be highly intelligent, but it is not general, and maybe no arrangement of rules could make one that is general. A general intelligence system must be able to learn and adapt to foreign problems, parameters, and goals dynamically.



Yes, I think that makes sense: you can be intelligent without being generally intelligent. By some definitions a person with Alzheimer's can be more intelligent than someone without, but the person without is more generally intelligent thanks to the ability to learn.

The classical example of a task requiring general intelligence is to get the rules for a new game and then play it adequately; there are AI contests for that. That is easy for humans to do, games are enjoyed even by dumb people, but we have yet to make an AI that can play arbitrary games as well as even dumb humans.

Note that LLMs are more general than previous AIs thanks to in-context learning, so we are making progress, but still far from as general as humans are.



The simplest answer, without adding any extraordinary capabilities to the AGI that veer into magical intelligence, is to have AI assistants that can seamlessly interact with technology the way a human assistant would.

So, if you want to meet with someone, instead of opening your calendar app and looking for an opening, you'd ask your AGI assistant to talk to their AGI assistant and set up a 1h meeting soon. Or, instead of going on Google to find plane tickets, you'd ask your AGI assistant to find the most reasonable tickets for a certain date range.

This would not require any special intelligence more advanced than a human's, but it does require a very general understanding of the human world that is miles beyond what LLMs can achieve today.

Going only slightly further with assumptions about how smart an AGI would be, it could revolutionize education, at any level, by acting as a true personalized tutor for a single student, or even for a small group of students. The single biggest problem in education is that it's impossible to scale the highest quality education - and an AGI with capabilities similar to a college professor would entirely solve that.



The examples you're providing seem to have been thoroughly solved already.

I'm at the European AI Conference for our startup tomorrow, and they use a platform that just booked me 3 meetings automatically with other people there based on our availability... It's not rocket science.

And you don't even need those narrow tools. You could easily ask GPT-4o (or lesser versions) something along the lines of :

> "you're going to interact with another AI assistant to book meetings for me: [here would be the details about the meeting]. Come up with a protocol that you'll send to the other assistant so it can understand what the meetings are about, communicate you their availability, etc. I want you to come up with the entire protocol, send it, and communicate with the other assistant end-to-end. I won't be available to provide any more context ; I just want the meeting to be booked. Go."
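Stripped of the LLM layer, the "protocol" that prompt asks the assistant to invent reduces to something like this toy sketch. All names, time slots, and the protocol itself are invented for illustration; a real system would pull availability from calendar APIs rather than hard-coded sets:

```python
# Hypothetical availability for the two parties, as "YYYY-MM-DD HH:MM" strings.
alice_free = {"2024-06-03 10:00", "2024-06-03 15:00", "2024-06-04 09:00"}
bob_free = {"2024-06-03 15:00", "2024-06-04 11:00"}

def negotiate(slots_a, slots_b):
    """Toy protocol: each side shares its open slots and the
    earliest slot both have free gets booked."""
    common = sorted(slots_a & slots_b)
    return common[0] if common else None

print(negotiate(alice_free, bob_free))  # → 2024-06-03 15:00
```

The hard part being debated here isn't this set intersection; it's reliably extracting the availability and intent from messy real-world accounts in the first place.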



GPT-4(o) lacks the ability to connect to any of the tools needed to achieve what I'm describing. Sure, it maybe could give instructions about how this could be done, but it can't actually do it. It can't send an email to your email account, and it can't check your incoming emails to see if any arrived asking for a meeting. It can't then check your calendar, and propose another email, or book a time if the time is available. It doesn't know that you normally take your lunch at some time, so that even though the spot is free, you wouldn't want a meeting at that time. And even if you did take the considerable effort to hook it up with all of these systems, its failure rate is still far too high to rely on it for such a thing.

And getting it to actually buy stuff like plane tickets on your behalf would be entirely crazy.

Sure, it can be made to do some parts of this for very narrowly defined scenarios, like the specific platform of a single three day conference. But it's nowhere near good enough for dealing with the general case of the messy general world.



This is definitely an interesting way to look at it. My initial reaction is to consider that I can enhance the capabilities of a system without increasing its intelligence. For example, if I give a monkey a hammer, it can do more than it could do when it didn't have the hammer, but it is not more intelligent (though it could probably learn things by interacting with the world with the hammer). That leads me to think: can we enhance the capabilities of what we call "AI systems" to do these things, without increasing their intelligence? It seems like you can glue GPT-4o to some calendar APIs to do exactly this. This seems more like an issue of tooling rather than an issue of intelligence to me.
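The capability-vs-intelligence distinction can be sketched in a toy form: the same fixed "model" gains abilities purely by being handed more tools (all names here are hypothetical, and the `model` function is a crude stand-in for an LLM routing a request to an available tool):

```python
def model(request, tools):
    """Stand-in for an LLM: route the request to the first tool
    whose name appears in it. The 'intelligence' is fixed; only
    the tool set changes."""
    for name, fn in tools.items():
        if name in request:
            return fn()
    return "I can't do that."

# Handing the same model a calendar tool extends what it can do.
calendar_tools = {"book_meeting": lambda: "meeting booked"}

print(model("please book_meeting at 3pm", {}))             # → I can't do that.
print(model("please book_meeting at 3pm", calendar_tools)) # → meeting booked
```

The monkey-and-hammer analogy maps directly: the second call succeeds not because the model got smarter, but because its environment got richer.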

I guess the issue here is: can a system be "generally intelligent" if it doesn't have access to general tools to act on that intelligence? I think so, but I also can see how the line is very fuzzy between an AI system and the tools it can leverage, as really they both do information processing of some sort.

Thanks for the insight.



I'm sure some aspects of this can be achieved by manually programming GPT-4 links to other specific services. And obviously, some interaction tools would have to be written manually even for an AGI.

The difference though is the amount of work. Today if you wanted GPT-4 to work as I describe, you would have to write an integration for Gmail, another one for Office365, another one for Proton etc. You would probably have to create a management interface to give access to your auth tokens for each of these to OpenAI so they can activate these interactions. The person you want to sync with would have to do the same.

In contrast, an AGI that only has average human intelligence, or even below, would just need access to, say, Firefox APIs, and should easily be able to achieve all of this. And it would work regardless if the other side is a different AGI using a different provider, or even if they are just a regular human assistant.



> The single biggest problem in education is that it's impossible to scale the highest quality education

Do you work in education? Because I don't think many who do would agree with this take.

Where I live, the single biggest problem in education is that we can't scale staffing without increasing property taxes, and people don't want to pay higher property taxes. And no, AGI does not fix this problem, because you need staff to be physically present in schools to deal with children.

Even if we had an AGI that could do actual presentation of coursework and grading, you need a human being in there to make sure they behave and to meet the physical needs of the students. Humans aren't software to program around.



Having individual tutors for each child is not often discussed because it is self-evidently impossible at any cost whatsoever; it would require far too high a percentage of a country's workforce to be dedicated to education. But it is the factor most responsible for the difference between the education the elites get, especially the elites of the past, and general education.

Sure, this doesn't mean you could just fire all teachers and dissolve all schools. You still need people to physically be there and interact with the children in various ways. But if you could separate the actual teaching from the child care part, and if you could design individualized courses for each child with something approaching the skill of the best teachers in the whole world, you would get an inconceivably better educational system for the entire population.

And I don't need to work in education for much of this. Like all others, I was intimately acquainted with the educational system (in my country) for 16 years of my life through direct experience, and much more since in increasingly less direct experience. I have very very good and very direct experience of the variance between teachers and the impact that has on how well students understand and interact with the material.



> This would not require any special intelligence more advanced than a human's, but it does require a very general understanding of the human world that is miles beyond what LLMs can achieve today.

Does it? I am quite certain those things are achievable right now without anything like AI in the sense being discussed here.



Show me one product that can offer me an AI assistant that can set up a meeting with you at a time that doesn't contradict any of our plans, given only my and your email address.


A working memory that can preserve information indefinitely outside a particular context window and which can engage in multi-step reasoning that doesn't show up in its outputs.

GPT-4o's context window is 128k tokens which is somewhere on the order of 128kB. Your brain's context window, all the subliminal activations from the nerves in your gut and the parts of your visual field you aren't necessarily paying attention to, is on the order of 2MB. So a similar order of magnitude, though GPT has a sliding window and your brain has more of an exponential decay in activations. That LLMs can accomplish everything they do just with what seems analogous to human reflex rather than human reasoning is astounding and more than a bit scary.



Looking up an estimate of the brain's input bandwidth at 10 million bits per second and multiplying by the second or two a subliminal stimulus can continue to affect a person's behavior. This is a very crude estimate and probably an order of magnitude off, but I don't think it is many orders of magnitude off.
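As a back-of-the-envelope check, the arithmetic described above does land on the same order of magnitude as the ~2 MB figure (both inputs are the comments' rough estimates, not measured values):

```python
bits_per_second = 10_000_000  # estimated brain input bandwidth, bits/s
seconds = 2                   # how long a subliminal stimulus keeps acting

# Convert bits to bytes (divide by 8), then to megabytes.
megabytes = bits_per_second * seconds / 8 / 1e6
print(megabytes)  # → 2.5, same order as the ~2 MB claimed above
```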


Some first ideas coming to mind:

Engineering Level:

  Solve CO2 Levels
  End sickness/death
  Enhance cognition by integrating with willing minds.
  Safe and efficient interplanetary travel.
  Harness vastly higher levels of energy (solar, nuclear) for global benefit.
Science:
  Uncover deeper insights into the laws of nature.
  Explore fundamental mysteries like the simulation hypothesis, Riemann hypothesis, multiverse theory, and the existence of white holes.
  Effective SETI
 
Misc:
  End of violent conflicts
  Fair yet liberal resource allocation (if still needed), "from scarcity to abundance"


The problem with CO2 levels is that no one likes the solution, not that we don't have one. I highly doubt adding AGI to the mix is going to magically make things better. If anything we'll just burn more CO2 providing all the compute resources it needs.

People want their suburban lifestyle with their red meat and their pick-up truck or SUV. They drive fuel inefficient vehicles long-distances to urban work environments and they seem to have very limited interest in changing that. People who like detached homes aren't suddenly affording the rare instances of that closer to their work. We burn lots of oil because we drive fuel inefficient vehicles long distances. This is a problem of changing human preferences which you just aren't going to solve with an AGI.



Assuming embedded AI in every piece of robotics - sometimes directly, sometimes connected to a central server (this is doable even today) - it'll revolutionize industries: human-less mining, processing, manufacturing, services, and transportation. These factories would eventually produce and install enough solar power or build sufficient nuclear plants and energy infrastructure, making energy clean and free.

With abundant electric cars (at this future point in time) and clean electricity powering heating, transportation, and manufacturing, some AIs could be repurposed for CO2 capture.

It sounds deceptively easy, but from an engineering standpoint, it likely holds up. With free energy and AGI handling labor and thinking, we can achieve what a civilization could do and more (because no individual incentives come into play).

However, human factors could be a problem: protests (luddites), wireheading, misuse of AI, and AI-induced catastrophes (alignment).



Having more energy is intrinsically dangerous, though, because it's indiscriminate: more energy cannot enable bigger solutions without also enabling bigger problems. Energy is the limiting factor to how much damage we can do. If we have way more of it, all bets are off. For instance, the current issue may be that we are indirectly cooking the planet through CO2 emissions, so capturing that sounds like a good idea. But even with clean energy, there is a point where we would cook the planet directly via waste heat of AI and gizmos and factories and whatever unforeseen crap we'll conjure just because we can. And given our track record I'm far from confident that we wouldn't do precisely that.


This exactly. Every self-replicating organism will eventually use all the energy available to it; there will never be an abundance. From the dawn of time, mankind has similarly used every bit of energy it generates. From the perspective of a subsistence farmer in the 1600s, if you told them how much energy would be available in 400 years, they would think we surely must live in paradise with no labor. Here we are, still metaphorically tilling the land.


Do you believe the average human has general intelligence, and do you believe the average human can intellectually achieve these things in ways existing technology cannot?


Yes, considering that AI operates differently from human minds, there are several advantages:
  AI does not experience fatigue or distractions => consistent performance.
  AI can scale its processing power significantly, despite the challenges associated with it (I understand the challenges)
  AI can ingest and process new information at an extraordinary speed.
  AIs can rewrite themselves
  AIs can be replicated (solving scarcity of intelligence in manufacturing)
  Once AGI is achieved, progress could compound rapidly, for better or worse, due to the above points.


The first AGI will probably take way too much compute to have a significant effect; unless there is a revolution in architecture that gets us fast and cheap AGI at once, the AGI revolution will be very slow and gradual.

A model that is as good as an average human but costs $10 000 per effective manhour to run is not very useful, but it is still an AGI.



> A model that is as good as an average human but costs $10 000 per effective manhour to run is not very useful, but it is still an AGI.

Geohot (https://geohot.github.io/blog/) estimates that a human-brain equivalent requires 20 PFLOPS. Current top-of-the-line GPUs are around 2 PFLOPS and consume up to 500W. Scaling that linearly means ten GPUs and about 5kW, which translates to approximately 3 EUR per hour if I calculate correctly.
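Spelling that estimate out (the 20 PFLOPS and 500 W figures come from the comment; the electricity price of 0.60 EUR/kWh is an assumption that makes the stated 3 EUR/hour work out, and hardware cost is ignored):

```python
brain_pflops = 20    # Geohot's estimate for a human-brain equivalent
gpu_pflops = 2       # top-of-the-line GPU
gpu_watts = 500      # power draw per GPU
eur_per_kwh = 0.60   # assumed electricity price

gpus_needed = brain_pflops / gpu_pflops    # 10 GPUs
power_kw = gpus_needed * gpu_watts / 1000  # 5 kW total draw
cost_per_hour = power_kw * eur_per_kwh     # electricity cost only
print(cost_per_hour)  # → 3.0 EUR per hour
```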



But is it even clear what the next big leap after LLMs is? I have the feeling many tend to extrapolate the progress of AI from the last 2 years to the next 30 years, but research doesn't always work like that (though improvements in computing power did).


Extrapolating 2 years might give you the wrong idea, but extrapolating the last year suggests that making another leap like GPT-3 or GPT-4 is much, much more difficult. The only considerable breakthrough I can think of is Google's huge context window, which I hope will become the norm one day, but in terms of actual results it's not mind-blowing yet. We see little improvements every day and for sure there will be some leaps, but I wouldn't count on a revolution.


Unlike AI in the past, there are now massive amounts of money going into AI. And the number of things humans are still doing significantly better than AI is going down continuously now.

If something like Q* is provided organically with GPT-5 (which may have a different name), and allows proper planning, error correction and direct interaction with tools, that gap is getting really close to 0.



AI in the past (adjusted for 1980s) was pretty well funded. It's just that fundamental scientific discovery bears little relationship to the pallets of cash.


Funding in the 1980s was sometimes very good. My company bought me an expensive Lisp Machine in 1982 and after that, even in “AI winters” it mostly seemed that money was available.

AI has a certain mystique that helps get money. In the 1980s I was on a DARPA neural network tools advisory panel, and I concurrently wrote a commercial product that included the 12 most common network architectures. That allowed me to step in when a project was failing (a bomb detector we developed for the FAA) that used a linear model, with mediocre results. It was a one day internal consult to provide software for a simple one hidden layer backprop model. During that time I was getting mediocre results using symbolic AI for NLP, but the one success provided runway internally in my company to keep going.



That funding may have felt good at the time compared to some other academic fields.

But compared to the 100s of billions (possibly trillions, globally) that is currently being plowed into AI, that's peanuts.

I think the closest recent analogy to the current spending on AI, was the nuclear arms race during the cold war.

If China is able to field ASI before the US even have full AGI, nukes may not matter much.



You are right about funding levels, even taking inflation into account. Some of the infrastructure, like Connection Machines and Butterfly Machines seemed really expensive at the time though.


They only seem expensive because they're not expected to generate a lot of value (or military/strategic benefit).

Compare that to the 6+ trillion that was spent in the US alone on nuclear weapons, and then consider: what is of greater strategic importance, ASI or nukes?



> AI in the past (adjusted for 1980s) was pretty well funded.

A tiny fraction of the current funding. 2-4 orders of magnitude less.

> It's just that fundamental scientific discovery bears little relationship to the pallets of cash

Heavy funding may not automatically lead to breakthroughs such as Special Relativity or Quantum Mechanics (though it helps there too). But once the most basic ideas are in place, massive funding is what causes breakthroughs like the Manhattan Project and Apollo Program.

And it's not only the money itself. It's the attention and all the talent that is pulled in due to that.

And in this case, there is also the fear that the competition will reach AGI first, whether the competition is a company or a foreign government.

It's certainly possible that the ability to monetize the investments may lead to some kind of slowdown at some point (like if there is a recession).

But it seems to me that such a recession will have no more impact on the development of AGI than the dotcom bust had on the importance of the internet.



> A tiny fraction of the current funding. 2-4 orders of magnitude less.

Operational costs were correspondingly lower, as they didn't need to pay electricity and compute bills for tens of millions of concurrent users.

> But once the most basic ideas are in place, massive funding is what causes breakthroughs like the Manhattan Project and Apollo Program.

There is no reason to think that the ideas are in place. It could be that a local optimum has been reached, as happened in many other technology advances before. The current model is mass-scale data-driven; the Internet has been sucked dry for data and there's not much more coming. This may well require a substantial change in approach, and so far there are no indications of that.

From this pov monetization is irrelevant, as except for a few dozen researchers the rest of the crowd are expensive career tech grunts.



> There is no reason to think that the ideas are in place.

That depends on what you mean when you say "ideas". If you consider ideas at the level of transformers, well then I would consider those of the same magnitude as many of the ideas the Manhattan Project or Apollo Program had to figure out along the way.

If you mean ideas like going from expert system to Neural Networks with backprop, then that's more fundamental and I would agree.

It's certainly still conceivable that Penrose is right in that "true" AGI requires something like microtubules to be built. If so, that would be on the level of going from expert systems to NNs. I believe this is considered extremely exotic in the field, though. Even LeCun probably doesn't believe that. Btw, this is the only case where I would agree that funding is more or less irrelevant.

If we require 1-2 more breakthroughs on par with Transformers, then those could take anything from 2-15 years to be discovered.

For now, though, those who have predicted that AI development will mostly be limited by network size and the compute to train it (like Sutskever or implicitly Kurzweil) have been the ones most accurate in the expected rate of progress. If they're right, then AGI some time between 2025-2030 seems most likely.

Those AGI's may be very large, though, and not economical to run for a wider audience until some time in the 30's.

So, to summarize: Unless something completely fundamental is needed (like microtubules), which happens to be a fringe position, AGI some time between 2025 and 2040 seems likely. The "pessimists" (or optimists, in terms of extinction risk) may think it's closer to 2040, while the optimists seem to think it's arriving very soon.



IMO their next big leap will be to get it cheap enough and integrated with enough real time sources to become the default search engine.

You can really flip the entire ad supported industry upside down if you integrate with a bunch of publishers and offer them a deal where they are paid every time an article from their website is returned. If they make this good enough people will pay $15-20 a month for no ads in a search engine.



I don’t think we’re even close to exhausting the potential of transformer architectures. GPT-4o shows that a huge amount can be gained by implementing work done on understanding other media modalities. There’s a lot of audio that they can still continue to train on, and the voice interactions they collect will go into further fine-tuning.

Even after that plays out there will be video to integrate next, and thanks to physics simulations and 3D rendering there is a potentially endless and readily generated license-free supply of it, at least for the simpler examples. For more complex real-world video they could just set up webcams in public areas around the world where consent isn’t required by law and collect masses of data every second.

Given that audio seems to have enabled emotional understanding and possibly even humour, I can’t imagine what all might fall out of video. At the least it’s going to improve reasoning, since it will involve predicting cause and effect. There are probably a lot of other modalities you could add, though we don’t have large datasets for them.


Not saying it’s going to be the same, but I’m sure computing progress looked pretty unimpressive from, say, 1975 to 1990 for the uninitiated.

By the 90s they were still mainly used as fancy typewriters by “normal” people (my parents, school, etc) although the ridiculous potential was clear from day one.

It just took a looong time to go from pong to ping and then to living online. I’m still convinced even this stage is temporary and only a milestone on the way to bigger and better things. Computing and computational thought still has to percolate into all corners of society.

Again not saying “LLM’s” are the same, but AI in general will probably walk a similar path. It just takes a long time, think decades, not years.

Edit: wanted to mention The Mother of All Demos by Engelbart (1968), which to me looks like it captures all essential aspects of what distributed online computing can do. In a “low resolution”, of course.



Computing progress from 78 to 90 was mind-blowing.

1978: the Apple ][. 1 MHz 8-bit microprocessor, 4 KB of RAM, monochrome all-caps display.

1990: Mac IIci, 25 MHz 32-bit CPU, 4 MB RAM, 640x480 color graphics and an easy-to-use GUI.

Ask any of us who used both of these at the time: it was really amazing.



They were amazing, and the progress was incredible, but both of those computers - while equally exciting and delightful to people who saw the potential - were met with ‘but what can I actually use it for?’ from the vast majority of the population.

By 1990 home computer use was still a niche interest. They were still toys, mainly. DTP, word processing and spreadsheets were a thing, but most people had little use for them - I had access to a Mac IIci with an ImageWriter dot matrix around that time and I remember nervously asking a teacher whether I would be allowed to submit a printed typed essay for a homework project - the idea that you could do all schoolwork on a computer was crazy talk. By then, tools like Mathematica existed but as a curiosity not an essential tool like modern maths workbooks are.

The internet is what changed everything.



> The internet is what changed everything.

Broadband. Dial-up was still too much of an annoyance, too expensive.

Once broadband was ubiquitous in the US and Europe, that's when the real explosion of computer usage happened.



A big obstacle was that everything was on paper. We still had to do massive amounts of data entry.

For some strange reason, HTML forms are an incredibly impotent technology. Pretty standard things are missing, like radio buttons with an "other" text input. 5000+ years ago, form labels aligned perfectly with the value.

I can picture it already, ancient Mesopotamia, the clay tablet needs name and address fields for the user to put their name and address behind. They pull out a stamp or a roller.

Of course if you have a computer you can have stamps with localized name and address formatting, complete with validation, as a basic building block of the form. Then you have a single clay file with all the information neatly wrapped together. You know, a bit like that e-card no one uses, only without half the data mysteriously hidden from the record by some ignorant clerk saboteur.

We've also failed to hook up devices to computers. We went from the beautiful serial port to IoT hell with subscriptions for everything. One could go on all day like that, payments, arithmetic, identification, etc much work still remains. I'm unsure what kind of revolution would follow.

Talking thinking machines will no doubt change everything. That people believe it is possible is probably the biggest driver. You get more people involved, more implementations, more experiments, more papers, improved hardware, more investments.



Honestly mobile totally outstrips this.

One day at work about 10-15 years ago I looked at my daily schedule and found that on that day my team were responsible for delivering a 128kb build of Tetris and a 4GB build of Real Racing.



No, but that’s when “ping” was written, which is what you said.

(And, irrelevant, but my parents were in fact both posting to Usenet in 1983.)



mobile internet and smartphones were the real gamechanger here, which were definitely not linear.

They became viable in the 2000's, let's say 2007 with the iPhone, and by late 2010's everyone was living online, so "decades" is a stretch.



I don't think it particularly matters right now (practically speaking). It's going to take years for businesses and product companies to commoditize applications of LLMs, so while it's valuable for the Ilyas and Andrejs of the world to continue the good work of hard research, it's the startups, hyperscalers and SaaS companies creating business applications for LLMs that are going to be the near-term focus.


LLM is all you need

Attention and scale is all you need

Anything else you do will be overtaken by LLM when it builds its internal structures

Well, LLM and MCTS

The rest is old news. Like Cyc



The majority of developers may know what LLMs are in an abstract sense, but I meet very few that really realize what these are. These LLMs are an exponential leap in computational capability. The next revolution is going to be when people realize what we already have, because it is extremely clear the majority do not. RAG? Chatbots? Those applications are toys compared to what LLMs can do right now, yet everyone is dicking around making lusty chatbots or naked celebrities in private.


I agree with OP, I think we still have no idea yet what dreams may come of the LLM's we have today. So no one will be able to "enlighten us" — perhaps not until we're looking in the rear-view mirror.

I would say instead, stay tuned.



It is both subtle and obvious, yet many are missing this: if you want/need a deep subject matter expert in virtually any subject, write a narrative biography describing your expert using the same language that expert would use to describe themselves; this generates a context within the LLM carrying that subject matter expertise, and now significantly higher quality responses are generated.

Duplicate this process for several instances of your LLM, creating a home-brewed collection of experts, and have them collectively respond to one's prompts as a group privately, and then present their best solution. Now there is a method of generating higher reliability responses.

Now turn to the fact that the LLMs are trained on an Internet corpus of data that contains the documentation and support forums for every major software application; using the building blocks described so far, it is not difficult at all to create agents that sit between the user and pretty much every popular software application and act as co-authors with the user, helping them use that application.

I have integrated 6 independent, specialized "AI attorneys" into a project management system where they are collaborating with "AI web developers", "AI creative writers", "AI spreadsheet gurus", "AI negotiators", "AI financial analysts" and an "AI educational psychologist" that looks at the user, the nature and quality of their requests, and makes a determination of how much help the user really needs, modulating how much help the other agents provide.

I've got a separate implementation that is all home solar do-it-yourself, that can guide someone from nothing all the way to their own self made home solar setup.

Currently working on a new version that exposes my agent creation UI with a boatload of documentation, aimed at general consumers. If one can write well, as in write quality prose, that person can completely master using these LLMs to superior results.
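The persona pattern described above can be sketched in a few lines. To be clear, this is an illustrative sketch, not the commenter's actual system: the biographies and message format are assumptions, and whatever model call sits behind it is left out. The point is just the structure of building one system prompt per expert and fanning the same question out to the panel.

```python
# Hypothetical sketch of the "panel of personas" pattern: one system
# prompt per expert biography, all receiving the same user question.

def persona_prompt(biography: str) -> str:
    """Wrap a narrative biography as a system prompt that sets the persona."""
    return (
        "You are the expert described below. Answer strictly in that "
        "expert's voice, using their terminology.\n\n" + biography
    )

def panel_messages(biographies: list[str], question: str) -> list[list[dict]]:
    """One chat transcript per persona: persona system prompt + shared question."""
    return [
        [
            {"role": "system", "content": persona_prompt(bio)},
            {"role": "user", "content": question},
        ]
        for bio in biographies
    ]

# Example with two of the "experts" mentioned above (biographies invented):
panel = panel_messages(
    [
        "A contracts attorney with 20 years of SaaS licensing experience...",
        "A financial analyst specializing in early-stage unit economics...",
    ],
    "Review this vendor agreement for pricing risks.",
)
```

Each transcript would then be sent to a separate model instance, with a final coordinating step picking or merging the answers, as the comment describes.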



>I have integrated 6 independent, specialized "AI attorneys" into a project management system where they are collaborating with "AI web developers", "AI creative writers", "AI spreadsheet gurus", "AI negotiators", "AI financial analysts" and an "AI educational psychologist" that looks at the user, the nature and quality of their requests, and makes a determination of how much help the user really needs, modulating how much help the other agents provide.

Ah yes, "it's so obvious no one sees it but me". Until you show people your work and have real experts examine the results, I'm going to remain skeptical and assume you have LLMs talking nonsense to each other.



The point is these characters are not doing the work for people; they co-author the work with them. It's just like working with someone highly educated but with next to no experience: they're a great help, but ya gotta look at their work to verify they are on track. This is the same, but with a collection of inexperienced PhDs. The LLMs really are idiot savants, and when you treat them like that, the results better match expectations.


How can no one see what we have today? You only need six instances of an LLM running at the same time, with a system to coordinate between them, and then you have to verify the results manually anyway. Sign me up!


If a certain percent of the work is completed through research synthesis and multiple perspective alignment, why is said novel approach not worth lauding?

I've created a version of one of the resume GPTs that analyses my resume's fit to a position when fed the job description along with a lookup of said company. I then have a streamlined manner in which it points out what needs to be further highlighted or omitted in my resume. It then helps me craft a cover letter based on a template I put together. Should I stop using it just because I can't feed it 50 job roles and have it automatically select which ones to apply to and then create all necessary changes to documents and then apply?



but at some point, probably in the near future, they will. And then this system I have will already be in place, and that added capability will just arrive and integrate into all the LLM integrated systems I've made and they'll just improve.


There are no moats in deep learning, everything changes so fast.

They have the next iteration of GPT that Sutskever helped to finalize. OpenAI has lost its future unless they find new people of the same caliber.



> They have the next iteration of GPT Sutskever helped to finalize

How do you know that they have the next GPT?

How do you know what Sutskever contributed? (There was talk that the most valuable contributions came from the less well known researchers not from him)



Isn't access to massive datasets and computation the moat? If you and your very talented friends wanted to build something like GPT-4, could you?

It's going to get orders of magnitude less expensive, but for now, the capital requirements feel like a pretty deep moat.



How do you know massive datasets are required? Just because that’s how current LLMs operate, doesn’t mean it’s necessarily the only solution.


Then the resources needed to discover an alternative to brute-forcing a large model are a huge barrier.

I think academia and startups are currently better suited to optimize tinyml and edge ai hardware/compilers/frameworks etc.



Except if you're Anthropic or OpenAI you don't care about what your compute provider has done in the AI space - you care about the compute power they can give you.


But how many of those are ordered specifically for OpenAI, and are on order as a result of them to begin with? Do you think if we were in a parallel universe where OpenAI ended up partnering with Google or Amazon instead, the GPU shipments would look the same? I think they would reflect wherever OpenAI ended up doing all their compute showing a pretty similar lion's share.

Your claim was that people should care about compute based on what the provider has done in the AI space, but Microsoft was pretty far behind on that side until OpenAI - Google was really the only player in town. Should they have wanted GCP credits instead? Do you care about their AI results or the ex post facto GPU shipments?

Or, if what you actually want to argue is that Anthropic would be able to get more GPUs with Azure than AWS or GCP then this is a different argument which is going to require different evidence than raw GPU shipments.



The claim being implied was that Anthropic was in a better position because they had partnered with AWS versus Azure and thus they would have more access to GPU.

That isn't the case, at all. All I'm stating is what the chart clearly shows - Azure has invested deeply in this technology and at a rate that far exceeds AWS.



OpenAI most definitely needs the compute from MSFT. It could certainly swap out to another service but given that microsoft invested via credits it would be problematic. They have enmeshed their future.


They seem to have a huge "money moat" now. Partnerships with Apple and MS mean they have a LOT of money to try a lot of things I guess.

Before the Apple partnership, maybe it seemed like the moat was shrinking, but I'm not sure now.

Likely they have access to a LOT of data now too.



How important are top science guys, though? OpenAI has a thousand employees and almost unlimited money, and LLMs are better understood now. I would guess continuous development will beat singular genius heroes?


> OpenAI has a thousand employees and almost unlimited money

You could say the same about Google - and yet they missed the consequences of their own discovery and got behind instead of being leaders. So you need specific talent to pull this off even if in theory you can hire anybody.



I am just curious how this happened to Google. Who were the product managers or others who didn't see an opportunity exactly where the whole thing was invented, when they already had huge amounts of data, basically the whole web, and an amount of video that no one else can ever hope to have?


I’m 100% positive lots of people at Google were chomping at the bit to productize LLMs early on.

But the reality is, LLMs are a cannibalization threat to Search. And the Search Monopoly is the core money making engine of the entire company.

Classic innovators dilemma. No fat-and-happy corporate executive would ever say yes to putting lots of resources behind something risky that might also kill the golden goose.

The only time that happens at a big established company, is when driven by some iconoclastic founder. And Google’s founders have been MIA for over a decade.



The golden goose is already being hoisted onto a spit, and your company is not even going to get the drippings of the fat. I am surprised by the short-sightedness of execs.


I don’t work there, I’ve just worked for lots of big orgs — they are all the same. Any claimed uniqueness in “Organizational structure” and “culture” are just window dressing around good ol’ human nature.

It’s not short sightedness, it’s rational self-interest. The rewards for taking risk as employee #20,768 in a large company are minimal, whereas the downside can be catastrophic for your career & personal life.



I think the discovery of the power of the LLM was almost stumbled upon at OpenAI; they certainly didn't set out initially with the goal of creating them. Afaik they had one guy doing a project creating an LLM from Amazon review text data, and only off the back of playing around with that did they realise its potential.


Data volume isn't that important, that's becoming clearer now. What OpenAI did was pay for a bunch of good labelled data. I'm convinced that's basically the differentiator. It's not an academic or fundamental thing to do, which is why Google didn't do it; it's a pure practical product thing.


Well for one, Ilya was poached from Google to work for OpenAI to eventually help build SOTA models.

Fast forward to today and we a discussing the implications of him leaving OpenAI on this very thread.

Evidence to support the notion that you can’t just throw mountains of cash and engineers at a problem to do something truly trailblazing.



A lot of it was the unwillingness to take risk. LLMs were, and still are, hard to control, in terms of making sure they give correct and reliable answers, making sure they don't say inappropriate things that hurt your brand. When you're the stable leader you don't want to tank your reputation, which makes LLMs difficult to put out there. It's almost good for Google that OpenAI broke this ground for them and made people accepting of this imperfect technology.


Difficult to quantify but as an example the 2017 scientific paper “Attention is all you need” changed the entire AI field dramatically. Without these landmark achievements delivered by highly skilled scientists, OpenAI wouldn’t exist or only be severely limited.


And ironically, even the authors did not fully grasp the paper's importance at the time. Reminds me of when Larry Page and Sergey Brin tried to sell Google for $1 million ...


It depends on your views on LLMs

If your view is that LLMs only need minor improvements to their core technology and that the major engineering focus should be placed on productizing them, then losing a bunch of scientists might not be seen as that big of a deal.

But if your view is that they still need to overcome significant milestones to really unlock their value... then this is a pretty big loss.

I suppose there's a third view, which is: LLMs still need to overcome significant hurdles, but solutions to those hurdles are a decade or more away. So it's best to productize now, establish some positive cashflow and then re-engage with R&D when it becomes cheaper in the future and/or just wait for other people to solve the hard problems.

I would guess the dominant view of the industry right now is #1 or #3.



Agreed - it's good to have some far thinking innovation but really that can be acquired as needed so you really just need a few people with their pulse on innovation which there will always be more of outside a given company than within it.

Right now it's all about reducing transaction costs, small-i innovating, onboarding integrations, maintaining customer and stakeholder trust, getting content, managing stakeholders, and selling.



> They make lots of money

Will they though? Last I heard OpenAI isn't profitable, and I don't know if it's safe to assume they ever will be.

People keep saying that LLMs are an existential threat to search, but I'm not so sure. I did a quick search (didn't verify in any way if this is a feasible number) to find that Google on average makes about 30 cents in revenue per query. They make a good profit on that because processing the query costs them almost nothing.

But if processing a query takes multiple seconds on a high-end GPU, is that still a profitable model? How can they increase revenue per query? A subscription model can do that, but I'd argue that a paywalled service immediately means they're not a threat to traditional ad-supported search engines.
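The comparison in the comment above can be made concrete with a back-of-envelope sketch. The figures below are placeholders, not OpenAI's or Google's actual numbers; only the ~30 cents revenue figure comes from the comment itself, and the GPU price and generation time are assumptions chosen for illustration. The point is just how GPU-seconds per answer translate into per-query margin.

```python
# Hypothetical per-query economics. None of these figures are sourced;
# they only illustrate the shape of the comparison made in the comment.

search_revenue_per_query = 0.30   # the comment's ~30 cents figure (USD)
search_cost_per_query = 0.001     # traditional search: near-zero marginal cost

gpu_cost_per_hour = 2.00          # assumed high-end GPU rental price (USD)
seconds_per_answer = 5            # assumed generation time per query
llm_cost_per_query = gpu_cost_per_hour / 3600 * seconds_per_answer

search_margin = search_revenue_per_query - search_cost_per_query
llm_margin = search_revenue_per_query - llm_cost_per_query

print(f"LLM cost per query: ${llm_cost_per_query:.4f}")
print(f"search margin: ${search_margin:.3f}, LLM margin: ${llm_margin:.3f}")
```

At these made-up numbers the LLM answer is still far cheaper than 30 cents, but the margin gap versus classic search widens quickly if generation takes longer or the model must run on more expensive hardware.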



> When the next wave of new deep learning innovations sweeps the world,

that won't happen, the next scam will be different

it was crypto until FTX collapsed then the usual suspects led by a16z leaned on OpenAI to rush whatever they had on market hence the odd naming of ChatGPT 3.5.

When the hype is finally realized to be just mass printing bullshit -- relevant bullshit, yes, which sometimes can be useful but not billions of dollars of useful -- there will be something else.

Same old, same old. The only difference is there is no new catchy tunes. Yet? https://youtu.be/I6IQ_FOCE6I https://locusmag.com/2023/12/commentary-cory-doctorow-what-k...



Cryptocurrencies have the potential to grow the world economy by about 1-3% as banking fees go down. Other uses of crypto may double or triple that, but that's really speculative.

AI, on the other hand, has a near infinite potential. It's conceivable that it will grow the global economy by 2% OR MORE per MONTH for decades or more.

AI is going to be much more impactful than the internet. Probably more than internal combustion, the steam engine and electricity combined.

The question is about the timescale. It could take 2 years before it really starts to generate profits, or it could take 10 or even more.



>Crypto currencies has the potential to grow the world economy by about 1-3%, as banking fees go down.

Bank fees don't disappear into the ether when they're collected, so I doubt they have this much effect.

Oh, made my very first retail purchase with Bitcoin the other day. While the process was pretty slick and easy, the network charged $15.00 in fees. Long way to go until "free".



> Bank fees don't disappear into the ether when they're collected, so I doubt they have this much affect.

1-3% was intended as a ceiling for what cryptocurrency could bring to the economy, after adjusting for the reduction in inflation once those costs are gone.



He's saying the fees aren't burned like with mining, so they don't hurt the economy by the amount of the fee: the profit portion of them goes into other investments. The fees hinder parts of the economy making some transactions nonviable, but they don't fully translate to "friction" making waste heat so much as something more adiabatic that goes back in. It's largely an extraneous spring in the system, not a damper.


I agree with you on the AI point, but with crypto not all is what it seems.

Yes you may have short term growth, this is solely due to there being less regulation.

Despite what many people think regulation is a good thing, put in place to avoid the excesses that lead to lost livelihoods. It stops whales from exploiting the poor, provides tools for central banks to try avoid depressions.

Cost-wise, banks acting as trust authorities actually can theoretically be cheaper too.



Well, I agree with all that. The 1-3% was meant to come off as a tiny, one-time gain, and an optimistic estimate of that. Not at all worth the hype.

Basically, crypto is more like gold rush than a tech breakthrough. And gold rushes rarely lead to much more than increased inflation.



All crypto "currencies" with a transaction fee are negative-sum games, and as such they are a scam. It's been nine years since the Washington Post, admittedly somewhat clumsily, drew attention to this, and people still insist it's something other than a scam. Despite heady articles about how it's going to solve world hunger, it's just a scam.

This round of AI is only capable of producing bullshit. Relevant bullshit but bullshit. This can be useful https://hachyderm.io/@inthehands/112006855076082650 but it doesn't mean it's more impactful than the Internet.



I agree, 1-3% was a best case. While I agree it's a net zero, even those who argue for it really don't claim much more than a couple of %.

I actually expected objections on the opposite direction. But then, this is not twitter/X.

The point is that something that can easily generate 20%-100% growth per year (AGI/ASI) is so much more important that the best case prediction for crypto's effect on the economy are not even noticeable.

That's why comparing the crypto bubble to AI is so meaningless. Crypto was NEVER* going to be something hugely important, while AI is potentially almost limitless.

*If crypto had anything to offer at all, it would be ways to avoid fees, taxes and the ability to trace transactions.

The thing is, if crypto at any point seriously threatens to replace traditional currencies as stores of value in the US or EU, it will be banned instantly. Simply because it would make it impossible for governments to run budget deficits, prevent tax evasion, and do several other things that governments care about.



What you call "AI" is generally named AGI. LLMs are already a kind of AI, just not generic enough to fully replace all humans.

We don't know if full AGI can be built using just current technology (like transformers) given enough scale, or if 1 or more fundamental breakthroughs are needed beyond just the scale.

My hypothesis has always been that AGI will arrive roughly when the compute power and model size matches the human brain. That means models of about 100 trillion params, which is not that far away now.



> We don't know if full AGI can be built using just current technology (like transformers) given enough scale,

We absolutely do and the answer is such a resounding no it's not even funny.



Actually, we really don't. When GPT-3.5 was released, it was a massive surprise to many, exactly because they didn't believe simply scaling up transformers would end up with something like that.

Now, using transformers doesn't mean they have to be assembled like LLMs. There are other ways to stitch them together to solve a lot of other problems.

We may very well have the basic types of lego pieces needed to build AGI. We won't know until we try to build all the brain's capacities into a model of size of a few 100 trillion parameters.

And if we actually lack some types of pieces, they may even be available by then.



Yeah, I think him leaving was a huge blow to OpenAI that they have maybe not yet recovered from. Clearly there is no moat to transformer-based LLM development (other than money), but in terms of pace of development (insight as to what is important) I think Anthropic have the edge, although Reka are also storming ahead at impressive pace.


The scenario I have in my head is that they had to override the safety team's objections to ship their new models before Google IO happened.


The "safety" team can go eat grass.

I don't believe in AI "safety measures" any more than I do in kitchen cleaver safety measures.

That is, nothing beyond "keep out of kids' reach" and "don't use it like an idiot" but let the cleaver be a damn cleaver.



Resignations lead to more resignations....unless mgmt. can get on top of it and remedy it quickly, which rarely happens. I've seen it happen way too many times working 25 years in tech.


You need to think about OpenAI specifically - Ilya basically attempted a coup last year and failed, stayed in the company for months afterwards, according to rumours had limited contributions to the breakthroughs in research and was assigned to lead the most wishy-washy project of superalignment.

I’m not seeing “the good ones” leaving in this case.



Have they earned a return on it yet?

Seriously asking; I've purchased a GitHub CoPilot license subscription but I don't know what their sales numbers are doing on AI in general. It's to be seen if it can be made more cost-efficient to deliver to consumers.



Because Tesla is. Unlike the traditional automakers, which have no room for growth and are in perpetual stagnation, Tesla has potential, being part automotive and part tech industry. They could even have their own mobile phones if they wanted to. Or robots and stuff.

What Mercedes, Porsche, Audi can do aside continue to produce the cars over and over again until they are overtaken by somebody else? Hell, both EU and USA need tariffs to compete with chinese automakers.



Not quite. Tesla has a high valuation mostly because traditional auto carries an enormous amount of debt on their balance sheets. I think Tesla is one economic downturn in a high interest rate environment from meeting the same fate. Once an auto company is loaded with debt, they get stuck in a low margin cycle where the little profit they make has to go into new debt for retooling and factories. Tesla is very much still coasting from zero interest rate free VC money times.


I guess if they really thought we had something to worry about, they would've stayed just to steer things in the right direction.

Doesn't seem like they felt it was required.

Edit: I'd love to know why the down votes, it's an opinion, not a political statement. This community is quite off lately.

Is this a highly controversial statement ? People are truly worried about the future and this is just an anxiety based reaction ?



Daniel “Quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI”

“I think AGI will probably be here by 2029, and could indeed arrive this year”

Kokotajlo too.

We are so fucked



I am sorry, but there must be some hidden tech, some completely different approach, for them to speak about AGI like this.

I really, really doubt that transformers will become AGI. Maybe I am wrong, I am no expert in this field, but I would love to understand the reasoning behind this "could arrive this year", because it reminds me of cold fusion :X

edit: maybe the term has changed again. AGI to me means true understanding, maybe even some kind of consciousness, not just probability... when I explain something, I have understood it. It's not that I have soaked up so many books that I can just use a probabilistic function to "guess" which word should come next.



Don't worry, this is the "keep the bridge intact" talk of people leaving a more or less glorious workplace. I have worked at several places, and when people left (usually the best-paid ones), they would post LinkedIn/Twitter posts giving kudos and suggesting that the business in question would be at the forefront of its particular niche this year or soon, and that they would always be proud of having been part of it.

Also, when they speak about AGI, it raises their (the leaver's) market value, since others now know they were brilliant enough to have worked on something cool and might also know some secret sauce, which could be acquired at low cost by hiring them immediately[1]. I have seen this kind of talk play out too many times. Last January, one of the senior engineers at my current workplace in aviation left, citing something super secret coming this year or soon, and was immediately hired by a competitor at generous pay to work on that very topic.



> Also, when they speak about AGI, it raises their (the leaver's) market value

Why yes, of course Jan Leike just impromptu resigned, and Daniel Kokotajlo just gave up 85% of his wealth rather than sign a resignation NDA, purely in order to do what you're describing...



While he'll be giving up a lot of wealth, it's unlikely that any meaningful NDA will be applied here. Maybe for products, but definitely not for their research.

There's very few people who can lead in frontier AI research domains - maybe a few dozen worldwide - and there are many active research niches. Applying an NDA to a very senior researcher would be such a massive net-negative for the industry, that it'd be a net-negative for the applying organisation too.

I could see some kind of product-based NDA, like "don't discuss the target release dates for the new models", but "stop working on your field of research" isn't going to happen.



Kokotajlo: “To clarify: I did sign something when I joined the company, so I'm still not completely free to speak (still under confidentiality obligations). But I didn't take on any additional obligations when I left.

Unclear how to value the equity I gave up, but it probably would have been about 85% of my family's net worth at least.

Basically I wanted to retain my ability to criticize the company in the future.”

> but "stop working on your field of research" isn't going to happen.

We’re talking about an NDA; obviously non-competes aren’t legal in CA.

https://www.lesswrong.com/posts/kovCotfpTFWFXaxwi/?commentId...



> Unclear how to value the equity I gave up, but it probably would have been about 85% of my family's net worth at least.

Percentages are nice, but with money and wealth the absolute numbers are what matter. You can live a very, very good life even after losing 85%, if the remaining 15% is USD $1M. And not signing that NDA may help you land another well-paying job, plus the freedom to say whatever you feel is important to say.



> truly understanding… when I explain something, I have understood it

When you have that feeling of understanding, it is important to recognize that it is a feeling.

We hope it’s correlated with some kind of ability to reason, but at the end of the day, you can have the ability to reason about things without realising it, and you can feel that you understand something and be wrong.

It’s not clear to me why this feeling would be necessary for superhuman-level general performance. Nor is it clear to me that a feeling of understanding isn’t what being an excellent token predictor feels like from the inside.

If it walks and talks like an AGI, at some point, don’t we have to concede it may be an AGI?



Okay, what if I put it like this: there is understanding (ability to reason about things), and there is knowing that you understand something.

In people, these are correlated, but one does not necessitate the other.



No I’m with you on this. Next token prediction does lead to impressive emergent phenomena. But what makes people people is an internal drive to attend to our needs, and an LLM exists without that.

A real AGI should be something you can drop in to a humanoid robot and it would basically live as an individual, learning from every moment and every day, growing and changing with time.

LLMs can’t even count the number of letters in a sentence.



>LLMs can’t even count the number of letters in a sentence.

It's a consequence of tokenization. They "see" the world through tokens, and tokenization rules depend on the specific middleware you're using. It's like making someone blind and then claiming they are not intelligent because they can't tell red from green. That's just how they perceive the world and tells nothing about intelligence.
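A toy sketch of that point (invented two-word vocabulary and greedy longest-match splitting; `toy_tokenize` is purely hypothetical, and real tokenizers like BPE are more involved):

```python
# Toy subword tokenizer: greedy longest-match against a tiny invented vocabulary.
# Real tokenizers (e.g. BPE) are more sophisticated, but the effect is the same:
# the model receives opaque token IDs, not individual characters.
VOCAB = {"straw", "berry"}

def toy_tokenize(text, vocab):
    """Split `text` into the longest known subwords, falling back to single chars."""
    tokens, i = [], 0
    while i < len(text):
        match = next(
            (p for p in sorted(vocab, key=len, reverse=True) if text.startswith(p, i)),
            text[i],  # unknown region: emit one character
        )
        tokens.append(match)
        i += len(match)
    return tokens

tokens = toy_tokenize("strawberry", VOCAB)
ids = [sorted(VOCAB).index(t) for t in tokens]  # what the model actually "sees"

print(tokens)  # ['straw', 'berry']
print(ids)     # [1, 0] -- two opaque IDs; the 10 letters are not visible here
```

From the model's side there are two units, not ten letters, so "count the letters" requires it to have memorized spellings rather than read them.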



You seem generally intelligent. Can you tell how many letters are in the following sentence?

"هذا دليل سريع على أنه حتى البشر الأذكياء لا يمكنهم قراءة ”الرموز“ أو ”الحروف“ من لغة لم يتعلموها."



I counted very quickly, but 78? I learned Arabic in kindergarten; I'm not sure what your point was. There are Arabic spelling bees and an alphabet song, just like in English.

The comment you replied to was saying LLMs trained on English can't count letters in English.



By that definition of AGI, AGI is probably quite possible and reachable - but also something pointless, which there are no good reasons to "use" and many good reasons not to.


> maybe the term has changed again. AGI to me means truly understanding, maybe even some kind of consciousness, but not just probability... when I explain something, I have understood it.

The term, and indeed each initial, means different things to different people.

To me, even InstructGPT manages to be a "general" AI, so it counts as AGI — much to the confusion and upset of many like you who think the term requires consciousness, and others who want it to be superhuman in quality.

I would also absolutely agree LLMs are not at all human-like. I don't know whether they do or don't need the various missing parts in order to change the world into a jobless (u/dys)topia.

I also don't have any reason to be for or against any claim about consciousness, given that word also has a broad range of definitions to choose between.

I expect at least one more breakthrough architecture on the scale of Transformers before we get all the missing bits from human cognition, even without "consciousness".

What do you mean by "truly understanding"?



> when I explain something, I have understood it.

Yeah, that's the part I don't understand though - do I understand it? Or do I just think I understand it. How do I know that I am not probabilistic also?

Synthesis is the only thing that comes to mind as a differentiator between me and an LLM.



> Alan Turing on why we should never ever perform a Turing test

Can you cite specifically what in the paper you're basing that on? I skimmed it as well as the Wikipedia summary but I didn't see anywhere that Turing said that the imitation game should not be played.



I think what's missing:

- A way to fact-check the text, for example via the Wolfram math engine or by giving it internet access

- Something like an instinct to fight for its life (seems dangerous)

- Some more subsystems: look at the brain: there's the amygdala, the cerebellum, the hippocampus, and so on, and there must be some evolutionary need for these parts



AGI can’t be defined as autocomplete with a fact checker and an instinct to survive; there’s so, so, so much more hidden in that “subsystems” point. At least if we go by Bostrom’s definition…


Relying on specific people was never a good strategy; people will change, and this will be a good test of their crazy governance structure. I think of it like political systems - if it can't withstand someone fully malicious getting into power, then it's not a good system.


Oops, I thought there was something odd, I got my rationality acronyms mixed up. Hutter's program was called AIXI (MIRI was the research lab).

Here is Leike's paper, coauthored with Hutter:

https://arxiv.org/abs/1510.04931

They can probably sum it up in their own paper better than I can, but AIXI was supposed to be a formalized, objective model of rationality. They knew from the start that it was uncomputable, but I think they hoped to use it as a sort of gold standard that you could approach.

But then it turned out that the choice of Turing machine, which can be (mostly) ignored for Kolmogorov complexity, can not be ignored in AIXI at all.
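For reference, a rough sketch of the AIXI action rule (paraphrased from memory, not quoted from the Leike/Hutter paper) shows exactly where the universal Turing machine $U$ enters, via the $2^{-\ell(q)}$ weighting over programs $q$:

```latex
a_t = \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
      \Big( \sum_{k=t}^{m} r_k \Big)
      \sum_{q \,:\, U(q,\, a_{1:m}) = o_{1:m} r_{1:m}} 2^{-\ell(q)}
```

Swapping $U$ reweights every program length $\ell(q)$, which is why the choice of machine cannot be brushed aside here the way it (mostly) can for Kolmogorov complexity.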



When walking around the U of Toronto, I often think that ~10 years ago Ilya was in a lab next to Alex trying to figure things out. I can't believe this new AI wave started there. Ilya, Karpathy, Jimmy Ba, and many more were at the right time when Hinton was there too.


Oh man that was an amazing time at UoT. We also got GPU versions of btc mining from that group.

We also had Ethereum be born right around there as well around 2014. I remember the first Ethereum meetups around Queen and Spadina with Vitalik.

But to another poster's point: even though we had the father of deep learning, Geoffrey Hinton, and luminaries like Ilya and Vitalik, we didn't manage to get any real benefit from that.



Wow! By the time I arrived, Hinton was gone, as were many great professors who had started their own companies or been poached by big players (e.g. Sanja to Nvidia). At least I got to learn NNs from Jimmy Ba (co-author of Adam). Now he's working at xAI.


I've thought about this one for a long time, having lived in both SV and Canada. It's complicated, but there are a handful of critical roadblocks in Canada that make things more challenging.

(1) Access to size of market even if online being US vs 'foreign' has advantages in political arena/regulatory benefits

(2) Significant tax advantages for US investors vs limited tax advantages for Canada (Angel+VC)

(3) Risk appetite (impacted by size of market) - compounded by tax disadvantages (why would you take risk if you're lining the pockets of the government?)

(4) Bench depth on talent once you really start to scale your company

(5) CAD strength (double edged sword) - talent goes South for better salaries (+ you need to compete), if the company revenue is in USD and employees are paid in CAD

(6) Start-ups paying in equity: early employees taking on that risk will actually get taxed heavily under the new cap-gains rules, so the incentive to work hard for the money is lower.

(7) Network effects of being in the valley - idea percolation, new playbooks, talent, competitiveness, company fitness

I will add that, in this very specific AI case, there is little chance of finding the depth of talent and capital in the country to make that company fly at the scale it needs to.



If high taxes really disincentivized people from building stuff, California would be a wasteland.

But that's not what we see. People build because they still have a chance at making a lot of money.

Also, Canada can build successful tech companies. Yes, I realize there should be more Canadian tech darlings, but I don't think that has to do with high taxes so much as with Canadians being comfortable and not feeling the need to sacrifice everything to try to build the biggest thing.

If you look at Canada's most successful tech companies, the founders usually sell and enjoy a more comfortable existence.



You misunderstand my statement on tax.

Taxes on investors are quite high in Ontario/Canada compared to California. Not only does this shrink the outcome for the investor - it decreases the incentive to make larger bets on big outcomes. In terms of exits, you have a smaller playing field and fewer buyers based in Canada vs. the USA. All these things add up to a smaller opportunity for investors and builders, and you work harder to pay more taxes to the government.

As for the ambition argument - that could be an inherent problem in Canadian culture, that no one wants to change the status quo - it is definitely a different culture than SV's. The largest city is captured by the financial industry for the most part, which doesn't bode well for innovation.



I think it has much more to do with investors' sentiments. Canadian entrepreneurs are not comfortable; that's why they move to the US. But that's not because they don't like Canada. Moving is a big sacrifice -- they move away from their home and community, and also deal with the headache and uncertainty of US immigration. The ones I talk to who have moved down south, they miss Canada and didn't want to leave, but they didn't feel like they'd be able to afford the cost of living in Canada, and didn't think they could launch a successful startup there.

And the cost of living is going up, which is going to make even more talented Canadians uncomfortable. These days if you ever hope to own a house, you basically can't go the stable 9-to-5 route.

If investors in Canada were throwing hundreds of millions into moonshot startups the way that they do in Silicon Valley, probably most Canadian entrepreneurs would build those companies at home. But the investment landscape is such that the investors who have that much money opt to lever up on real estate instead.



The point was about high taxes for investors, not employees.

How much money does OpenAI _directly_ pay to California in taxes? Sure, the employees pay a ton in taxes, but as an investor, if you're going to lose more in taxes by investing in a Canadian company vs. a Californian one, then you invest in California.



Why would anyone start the game on hard mode when easy mode is a border drive away?

The US is so outrageously better than everywhere else that people fly across oceans to start businesses there. Canada, being next door, doesn't even have a distance moat to slow the brain drain.



With regard to talent, there's no particular reason why software centers couldn't be in any major established city in the world. It doesn't take billions of dollars on a highly uncertain bet, the way creating a car company, a rocket-reuse company, or a CPU company does.

A small crew of people could potentially build the next WhatsApp. On Erlang.



There are definitely many good programmers all over the world, but there are more in the US, because that's where all the best companies are. So if you're trying to make a good company and you want good programmers, where do you go?


In the case of AI it absolutely does take billions and billions of dollars on an uncertain bet. They bet that throwing more data, more hardware, more GPU cycles at the problem would yield results and it has.


That statement applies to most industries. Tons of areas have the potential for an industry boom, but Silicon Valley is in California, and semiconductors are in Taiwan.

For many reasons, only some areas succeed while the rest fail. In this case, Canada doesn't have a Silicon Valley, nor does it have a high number of startups.



As another Canadian, I feel the same but I'm not surprised one bit.

Canada is actively hostile towards tech and suffers from cripplingly low salaries and investment. The idea of "business" in this country is buying a house and renting out its basement.

Our government's incompetence is comical; we are nothing more than a tech talent / immigration proxy for the United States at this point.



I don't know, Canada and every-other-country-that's-not-the-US. When there's a neighbor that is flush with cash, where make-or-break is a kind of national disease ... what can you do?

(I'm an American, FWIW.)



FWIW, Israel and China are the other two hotspots for building startups. It’s worth looking at how Israel did it since their model could work elsewhere. For example they have a government funded delegation that goes around to conferences solely to meet with large companies and investors and promote Israeli startups.


It's a mix of "easy" things to fix: streamline the tax code, build desirable office centers, have good internet infrastructure

and "hard" things: the work culture, the cost of living



The Canadian Dream is to get a great education and then move to the US.

You might want to blame the government for this or that, but as a Canadian I've finally come to reckon with the fact that it's just not in the Canadian ethos to do risky things like founding startups. Of course there are exceptions to the rule, but they are very, very rare. Canadian investors don't want to take big risks, and the Americans are just next door, waiting to gobble up the talent in search of capital.



For folks without responsibilities like kids, aging parents, etc. I really don’t think startups are very “risky”.

What’s the worst that happens? It doesn’t work out and after five years you go get a job in boring corp corp with an incredible skillset and vast life experience.

You’ve sacrificed some income perhaps, but so what? People make choices like that all the time. Your working career could easily be 40 or 45 years, 5 is not that much and it’s not like you went bankrupt. Your skillset might even mean you more than make up for lost time.

I don’t understand the talk of “risk” unless you’re Elon Musk betting the farm on your businesses and facing bankruptcy.

Work in your spare time until you have something Angel worthy, then get a modest salary to get to the next level and on you go. Or just bootstrap.

Is it easy? No, it’s the hardest thing you’ll ever do. Is it risky? Not so much.

So why do Canadians and Brits see it as a risky thing to do? I think they don’t. What they see is _uncertainty_ - where will I be in six months? What if it doesn’t work out? What if I fail and people judge me? They don’t like uncertainty. That is conservative with a small c. Probably it’s a cultural artefact rather than anything remotely rational. The problem is you end up in an equilibrium where the society is conservative (“what you wanna do wasting your time with that”) so the ambitious people just leave and go to somewhere like (parts of) the US where people want to change things, make things, improve the world. And the conservative society gets more conservative until it is ossified.

Startups carry high uncertainty but not high risk.



There are countries where the business culture makes you unemployable, and makes it almost impossible to get a loan for the rest of your life, if you have ever failed at a business (badly enough). Many countries aren't as open to failure as the US.


This depends on what you see as risk. I can safely earn way above the national average for 5 years and build a strong nest egg that can provide income forever.

Or I can fail at a startup and be close to zero five years later. The fact that you aren't homeless and starving and can get another job doesn't mean it wasn't risky; you still wasted a bunch of years compared to slow and steady accumulation.

I've read the majority of millionaires in the US get created like this, working and saving through decades.

You're basically repeating investor kool-aid, because for their model to work, 100 people must fail for every 1 that succeeds, so they tell you not to worry if you're among the 99.



Of course you could probably say at least some of the same things about grad degrees that may not really translate into appreciable different/better career outcomes. Of course some say exactly that, especially about PhDs.


The increased cost of living in the last few years has changed this somewhat. That 5 years of lower earnings now means less nice groceries, fewer holidays and being under the yoke of landlords for considerably longer.


I can't speak for Canada and I may be wrong, but it seems to me it's harder to borrow money for a business than in NA. Banks are the ones that don't want to take risks, not necessarily the people with ideas.

Also failures aren't considered the same in every job market.



Turns out there are only enough people with this mindset to fill a couple of hubs around the world. The rest prefer less volatility and happily take on less downside risk in exchange for capped reward and/or less upside.


Canada is addicted to rent seeking, monopoly businesses, corporations that push regulatory capture on the gov't and then parasitize, and -- most of all -- ripping resources out of the ground and selling them cheap, or doing the same with real estate.

My latest annoyance is all the moaning and groaning about the latest capital gains tax increase. People complaining on one hand about how the Canadian economy lacks productivity, and then screaming to high heaven about tax policy that mostly only impacts people making quick speculative cash.

Investment takes no risks in this country because they don't have to. They just dump money into real estate or oil & gas instead and then hang at the lake in the Muskokas.



Not true. I've done SRED every year for the past ~7 years. It is work, but there are specialized consultants that do most of it. If the work is truly R&D (which would be the case for a cutting-edge AI company) and you track your work in JIRA or something like that, then it's mostly just writing a few pages describing the efforts.


The $10B is going to old mates' mates.

No new startups are getting it.

It's also like pulling teeth trying to explain to people that if we don't offer compensation commensurate with what they'd get in the US, people will just leave for the US.

There is some form of brain damage where even people who know how to code assume that because you can get a CRUD developer for $80k a year, you should be able to get an AI researcher for $150,000. It's nearly double, after all.



Great. But anyone good gets offered so much they move eventually.

It might not be a year after uni, might not be 10, but eventually they will move because the pay in the US is just so much better than anywhere else.



There is a vast underestimation of how tedious and time-intensive these tax-credit programs are to apply for. A large company can do so because it can hire people solely to go after them; a new startup (with a headcount that fits in one hand) is too busy actually keeping the business alive to pursue these programs, which oftentimes come with conditions too arbitrary for startups to fulfill.


SRED is basically a subsidy for companies that do your SRED paperwork for you, not the company doing the engineering itself. There's a whole industry of this.

No evidence that SRED has done anything ever for actual R&D. I've seen people get SRED for making web pages in JavaScript&HTML. When I had to fill in the SRED stuff it was ridiculous. Someone doing actual innovation would throw their hands up in the air.



Yeah Canada just spends a ton of taxpayer money to create great institutions like U of T and Waterloo, so that their graduates can all go to Silicon Valley and make 2-3x the money.


> 2-3x the money.

That's if you're stuck in tech support. When you start doing actual groundbreaking work, it starts at 10x and goes up significantly.



Maybe the majority of Canadians think that having great higher-education institutions and the people who work in them is a good fit for their way of life, but that having Silicon Valley companies and people making SV salaries around would make their lives worse? If so, this is great: Canadians don't want to live with the tech crowd, so they provide them with the skills to move elsewhere, make their dreams come true, and not bother the majority who don't want their presence.

NB some actual Canadians in this thread have voiced this possibility.



That makes zero sense; governments invest in education to improve their own country, not to train other countries' workforces. If you read anything about Canada, ever, you will also know they have a bunch of policies to try to stop the brain drain and to recruit tech workers from abroad.


> That makes zero sense, governments invest in education to improve their own country

The idea is precisely that not having SV types around _improves_ the country, i.e. makes it closer to the preferences of Canadians.

And yes, having a foreign tech worker doing 9-to-5 in a large legacy company for thoroughly average salaries is very different from having a SV-style startup culture. There is very little process in Canada to make life difficult for the former style of company, and plenty of process to make operations difficult for the latter.

If not having SV folk improves Canada for Canadians, and having SV folk improves America for Americans, then this is just mutually beneficial trade. Efforts to stop the brain drain still make sense: it's even better if you can convince the citizens you trained to engage in the economic activity you actually want instead of activity you find undesirable, but if you're unable to convince most of them, letting them go is still better than having them stay and engage in their undesirable behavior anyway.

Compare: if a large minority of Icelanders wanted to work for the Navy (which Iceland doesn't have), then stopping the brain drain (convincing them to work in the Merchant Fleet) is the best outcome, but funneling them out (training them in merchant navigation and watching them join the Danish Navy) would still be preferable to them engaging in their desired behavior anyway (forming their own pirate gang preying on the very Merchant Fleet you're trying to advantage).



> And yes, having a foreign tech worker doing 9-to-5 in a large legacy company for thoroughly average salaries is very different from having a SV-style startup culture

Immigrants coming into countries start companies at a disproportionate rate compared to natives.

Other than unquantifiable statements about what "Canadians want", everything you've mentioned so far to justify the idea that "Canada doesn't care if tech graduates leave" is falsifiable by data.



One last time, the claim is not that "Canada doesn't care". It's that it prefers it to the alternative of SV-style companies operating from Canada. Which is consistent both with data, facts on the ground (yes, Canada has laws and administrative processes designed to make SV-style startups difficult to start there, that's precisely what people complain about above!), and the comments of actual Canadians in this very thread.

You're welcome to present data falsifying the actual claim if you think you have it (instead of the "Canada doesn't care" straw man or misunderstanding that you repeat above, noting that so far you have not even refuted your own straw man by presenting any data).



> Maybe the majority of Canadians think that (...) having Silicon Valley companies and people making SV salaries around would make their lives worse

This is the claim of yours that I engaged with. If your claim is true, it literally means that Canadians do not care if those people leave; in fact, they would prefer it. My argument is that you're wrong, and that Canada and its people would rather have more tech workers and more tech companies.

I don't believe I'm misunderstanding so I think we should probably both give up at this point.



The problem is that Canada is basically a European country on the American continent. SV is possible in a place where you can have risk and reward - but where you might also lose everything. In Canada it is hard to become rich, so it's not worth trying; there is also less risk, thanks to better social security, and the baseline is pretty decent. I would not be surprised if there are tons of regulations in Canada too (more than in the USA).

There is a reason why there are not many startups in Europe: if you can have a decent life, a secure job, and nice social security, it's not worth playing risky games. I would not be surprised if sheer layoffs in the USA alone led to more startups than in the whole of Europe.



I don't think it implies that they don't care, only that they find it preferable to one particular alternative (staying AND turning Vancouver into a northern SF; the conjunction is load-bearing), and I think this much looks true and well supported by the facts and revealed preferences. They're not willing to change the rules and procedures that people complain about here, and when people propose that they do so, as many have, they say no explicitly.

But I agree that we should probably disengage, so (barring exceptional new insights on my end) will leave this as my last post in the thread. Thanks for the chat.



> Compare: if a large minority of Icelanders wanted to work for the Navy (which Iceland doesn't have), then stopping the brain drain (convincing them to work in the Merchant Fleet) is the best outcome, but funneling them out (training them in merchant navigation and watching them join the Danish Navy) would still be preferable to them engaging in their desired behavior anyway (forming their own pirate gang preying on the very Merchant Fleet you're trying to advantage).

I read this as if you'd be concerned about Canadians using their tech skills to run malware groups if Canada wouldn't let them leave and join SV companies.



I see why you'd read it that way, but it's meant as a metaphor, not an analogy, to help elucidate that a government may take steps to try to bring about its preferred outcome of retaining people, while also not bringing about what is, from its perspective, an even more undesirable outcome.

It's not perfect, but neither is anything else I could come up with. Take the following:

- Persia would prefer many trained accountants so that PersianAccountants, the shah's preferred supplier of accounting technology, can hire cheaply from a large talent pool.

- Ambitious, trained accountants leave for the U.S. to work on DisruptiveAccounting.io, earning big bucks and disrupting the U.S. accounting sector.

- If Persia changes the kingdom's procedures and incentives, the same accountants would stay and found DisruptiveAccounting East, instead of working for PersianAccountants. This would be strictly worse for the shah than letting them leave.

The problem is, if I were to use this metaphor, people would get hung up on the difference between democracy and monarchy (the preferences of one autocrat vs. those of the majority), and most Americans just straight up do not understand why anybody, much less the majority, would prefer not disrupting the accounting sector.

I.e. if they don't understand what I'm saying about Canada not liking the third option, the metaphor falls flat: they also won't see why the shah doesn't like the third option.

So I had to look for a metaphor where the obvious alternative is undesirable to most Americans. Hence piracy. The problem is that there's another reading now (the software engineers will become criminals).

Do you have a metaphor that would avoid both issues? I'd love to hear it!



It's not as if Canada doesn't benefit from machine learning advances. It just doesn't do so by having many ML start-ups as a tax base.

Canada's skilled immigration policy is a train wreck, but that's another issue.



Canada's skilled immigration policy is amazing: it attracts some of the best talent in the world. What it cannot do is retain that talent, so it just ends up as a stepping stone to the US. All it needs to do is two things: 1. Provide tax deductions for rent and for mortgage interest for new home buyers. 2. Reduce average taxes to 5-10% below the US rate on income up to 500k. Then watch the magic happen.