> ChatGPT is the number 1 brand in AI

Not for long. They have no moat. The folks who did the science are now doing science for some other company, and will blow the pants off OpenAI.
“Its biggest issue is in the productization.” That’s not true at all. The biggest issue is that it doesn’t work. You can’t actually trust AI systems, and that’s not a product issue.
It works fine for some things. You just need a clearly defined task where LLM + human reviewer is, on average, faster (i.e. cheaper) than a human doing the same task without that assistance.
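A minimal sketch of that break-even condition; the function, the acceptance-rate fallback model, and all the numbers below are illustrative assumptions, not anything from the thread:

```python
# Hypothetical break-even check for "LLM + human reviewer vs. human alone".
# Every name and number here is an illustrative assumption, not data.

def assisted_is_cheaper(t_manual_h: float, t_review_h: float,
                        hourly_rate: float, llm_cost: float,
                        acceptance_rate: float) -> bool:
    """True if drafting with an LLM and reviewing is cheaper on average.

    acceptance_rate: fraction of drafts the reviewer can fix within
    t_review_h; rejected drafts fall back to doing the task manually.
    """
    manual = t_manual_h * hourly_rate
    assisted = (llm_cost
                + acceptance_rate * t_review_h * hourly_rate
                + (1 - acceptance_rate) * (t_review_h + t_manual_h) * hourly_rate)
    return assisted < manual

# e.g. a 2 h task, 20 min review, 80/h labour, 0.50 in tokens, 70% usable drafts
print(assisted_is_cheaper(2.0, 1 / 3, 80.0, 0.50, 0.7))  # -> True under these numbers
```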
Show me one product that offers an AI assistant that can set up a meeting between us at a time that doesn't conflict with either of our schedules, given only my email address and yours.
Do you believe the average human has general intelligence, and do you believe the average human can intellectually achieve these things in ways existing technology cannot?
> A model that is as good as an average human but costs $10 000 per effective man-hour to run is not very useful, but it is still an AGI.

Geohot (https://geohot.github.io/blog/) estimates that a human brain equivalent requires 20 PFLOPS. Current top-of-the-line GPUs are around 2 PFLOPS and consume up to 500 W. Scaling that linearly results in 5 kW, which translates to approximately 3 EUR per hour if I calculate correctly.
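The arithmetic does check out as a back-of-the-envelope figure; here is a minimal sketch, where the 0.60 EUR/kWh electricity price is my own assumption (roughly a European retail rate):

```python
# Redoing the comment's arithmetic. The 20 PFLOPS brain estimate is
# geohot's; the GPU figures are from the comment; the electricity
# price is an assumption.
BRAIN_PFLOPS = 20
GPU_PFLOPS = 2
GPU_WATTS = 500
EUR_PER_KWH = 0.60  # assumed price; halve it and the result halves

gpus = BRAIN_PFLOPS / GPU_PFLOPS        # 10 GPUs
power_kw = gpus * GPU_WATTS / 1000      # 5 kW draw
eur_per_hour = power_kw * EUR_PER_KWH   # 3 EUR per hour of runtime

print(f"{gpus:.0f} GPUs, {power_kw:.1f} kW, {eur_per_hour:.2f} EUR/h")
```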
AI in the past (adjusting 1980s dollars for inflation) was pretty well funded. It's just that fundamental scientific discovery bears little relationship to the pallets of cash.
You are right about funding levels, even taking inflation into account. Some of the infrastructure, like Connection Machines and Butterfly Machines, seemed really expensive at the time, though.
No, but that’s when “ping” was written, which is what you said. (And, irrelevant, but my parents were in fact both posting to Usenet in 1983.)
LLM is all you need. Attention and scale are all you need. Anything else you do will be overtaken by the LLM when it builds its internal structures. Well, LLM and MCTS. The rest is old news. Like Cyc.
There are no moats in deep learning; everything changes so fast. They have the next iteration of GPT, which Sutskever helped to finalize. OpenAI has lost its future unless they find new people of the same caliber.
How do you know massive datasets are required? Just because that’s how current LLMs operate doesn’t mean it’s necessarily the only solution.
Except if you're Anthropic or OpenAI you don't care about what your compute provider has done in the AI space - you care about the compute power they can give you.
The golden goose is already being hoisted upon a spit, and your company is not going to get even the drippings of the fat. I am surprised by the short-sightedness of execs.
And ironically, even the authors did not fully grasp the paper's importance at the time. Reminds me of when Larry Page and Sergey Brin tried to sell Google for $1 million ...
> When the next wave of new deep learning innovations sweeps the world, that won't happen,

The next scam will be different. It was crypto until FTX collapsed; then the usual suspects, led by a16z, leaned on OpenAI to rush whatever they had to market, hence the odd naming of ChatGPT 3.5. When the hype is finally recognized as just mass-produced bullshit -- relevant bullshit, yes, which can sometimes be useful, but not billions of dollars of useful -- there will be something else. Same old, same old. The only difference is there are no new catchy tunes. Yet?

https://youtu.be/I6IQ_FOCE6I

https://locusmag.com/2023/12/commentary-cory-doctorow-what-k...
All crypto "currencies" with a transaction fee are negative-sum games, and as such they are a scam. It's been nine years since the Washington Post, admittedly somewhat clumsily, drew attention to this, and people still insist it's something other than a scam. Despite heady articles about how it's going to solve world hunger, it's just a scam. This round of AI is only capable of producing bullshit. Relevant bullshit, but bullshit. That can be useful (https://hachyderm.io/@inthehands/112006855076082650), but it doesn't mean it's more impactful than the Internet.
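A toy sketch of the negative-sum point (all parameters below are made up): whatever the trading pattern, per-transaction fees drain the participants' pool, so the players' combined profit and loss is strictly negative.

```python
# Toy illustration of the negative-sum claim: with a per-transaction
# fee, the total held by participants can only shrink, whatever the
# pattern of trades. All parameters here are made up.
import random

random.seed(0)
FEE = 0.01  # 1% of each transfer leaves the participants' pool
balances = {f"trader{i}": 100.0 for i in range(5)}

for _ in range(1000):
    payer, payee = random.sample(sorted(balances), 2)
    amount = min(balances[payer], 1.0)
    balances[payer] -= amount
    balances[payee] += amount * (1 - FEE)  # fee is skimmed off

print(f"pool started at 500.00, ended at {sum(balances.values()):.2f}")
```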
> We don't know if full AGI can be built using just current technology (like transformers) given enough scale,

We absolutely do, and the answer is such a resounding no that it's not even funny.
The scenario I have in my head is that they had to override the safety team's objections to ship their new models before Google I/O happened.
Resignations lead to more resignations ... unless management can get on top of it and remedy it quickly, which rarely happens. I've seen it happen way too many times in 25 years working in tech.
Kokotajlo: “To clarify: I did sign something when I joined the company, so I'm still not completely free to speak (still under confidentiality obligations). But I didn't take on any additional obligations when I left. Unclear how to value the equity I gave up, but it probably would have been about 85% of my family's net worth at least. Basically I wanted to retain my ability to criticize the company in the future.”

> but "stop working on your field of research" isn't going to happen.

We’re talking about an NDA; obviously non-competes aren’t legal in CA.

https://www.lesswrong.com/posts/kovCotfpTFWFXaxwi/?commentId...
You seem generally intelligent. Can you tell how many letters are in the following sentence? "هذا دليل سريع على أنه حتى البشر الأذكياء لا يمكنهم قراءة ”الرموز“ أو ”الحروف“ من لغة لم يتعلموها." (The Arabic reads: "This is quick proof that even intelligent humans cannot read the 'symbols' or 'letters' of a language they have not learned.")
From that AGI definition, AGI is probably quite possible and reachable - but also something pointless, which there are no good reasons to "use" and many good reasons not to.
AGI can’t be defined as autocomplete with a fact checker and an instinct to survive; there’s so, so, so much more hidden in that “subsystems” point. At least if we go by Bostrom’s definition ...
Oops, I thought there was something odd; I got my rationality acronyms mixed up. Hutter's program was called AIXI (MIRI was the research lab). Here is Leike's paper, coauthored with Hutter: https://arxiv.org/abs/1510.04931

They can probably sum it up in their own paper better than I can, but AIXI was supposed to be a formalized, objective model of rationality. They knew from the start that it was uncomputable, but I think they hoped to use it as a sort of gold standard that you could approach. But then it turned out that the choice of Turing machine, which can be (mostly) ignored for Kolmogorov complexity, cannot be ignored in AIXI at all.
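A short refresher on why the machine choice can be "(mostly) ignored" for Kolmogorov complexity, namely the invariance theorem (standard material, stated here from memory):

```latex
% Invariance theorem: for any two universal prefix machines U and V
% there is a constant c_{U,V}, independent of x, such that
\[
  \lvert K_U(x) - K_V(x) \rvert \le c_{U,V} \quad \text{for all strings } x .
\]
% The machine choice shifts complexity by at most an additive constant,
% so it washes out asymptotically. AIXI's prior has no analogous
% machine-independence guarantee, which is why the choice of Turing
% machine matters there, as the thread notes.
```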
It's a mix of "easy" things to fix (streamline the tax code, build desirable office centers, have good internet infrastructure) and "hard" things (the work culture, the cost of living).
Yeah, Canada just spends a ton of taxpayer money to create great institutions like U of T and Waterloo, so that their graduates can all go to Silicon Valley and make 2-3x the money.
> 2-3x the money.

That's if you're stuck in tech support. When you start doing actual groundbreaking work, it starts at 10x and goes up significantly.
I have to admit, of the four, Karpathy and Sutskever were the two I was most impressed with. I hope he goes on to do something great.