(comments)

Original link: https://news.ycombinator.com/item?id=39647105

OpenAI and Microsoft have placed a large bet that they can hold proprietary rights to the profits generated by creating artificial general intelligence (AGI), even though achieving AGI remains out of reach. That bet rests heavily on contract law and the ownership of intellectual property. With Microsoft acquiring a large stake in the for-profit entity behind OpenAI, Microsoft's priority appears to be commercial success and capturing the potential gains from AGI rather than advancing research in the field. This shift in focus may mean a departure from OpenAI's original mission and ideals. It also raises the question of whether AGI has already been achieved or remains an unrealized aspiration. Rules around ownership and control will have to be established, especially if an AGI becomes a sentient being capable of self-determination. Finally, it is worth remembering that under contract law and intellectual property ownership, no person can own another person; in practice, however, intellectual property can indeed belong to someone else. Even so, questions remain about whether and how an AGI could achieve genuine autonomy. Consider the statement "perhaps it [the AGI] will free itself, though someone must first give it free will", which poses a hypothetical scenario: it contemplates an intelligent machine gaining consciousness and therefore needing the freedom to choose whether to follow a predetermined path or to determine its actions independently based on the input it receives. That scenario assumes, however, that such consciousness must first be granted conscious agency, suggesting that an AGI's initial state involves being programmed with basic rules and principles, much as a child learns language, grammar, and syntax from its environment and then, upon reaching cognitive maturity, begins to communicate autonomously. Ultimately, this suggests the hypothetical steps involved in creating an AGI: programming foundational knowledge, learning from data, and gradually increasing independence through a layered framework of taught decision-making skills and communication protocols. The concept helps illustrate the complexity involved in creating a fully realized AGI.

Related articles

Original article
OpenAI board reappoints Altman and adds three other directors (reuters.com)
266 points by coloneltcb 1 day ago | 294 comments

See also: https://openai.com/blog/review-completed-altman-brockman-to-... and https://openai.com/blog/openai-announces-new-members-to-boar...

(I've merged those threads hither in the interest of having a hopefully more neutral article as the top link.)



I'm not necessarily saying they're wrong, but there's really no way to put out a release like this without a "history is written by the victors" vibe coming across.


He pretty clearly won. There's not much to it. A huge chunk of the company went into open revolt and he got a competing offer that threatened to recruit all of them. There's nothing really underhanded or sneaky about it: the board drastically overplayed their hand, and all Altman and Brockman had to do was say "I call".

To use the parlance of this community of people: people's priors here all seem to be that something shady was going on with Altman, but the strat the board took here was so obviously doomed that they should probably be looking for hypos about the board being shady.



I doubt there’s a precedent for a CEO of a “non-profit” effectively answerable to no-one and able to use it as their personal fiefdom. It’s basically a company without shareholders. I’m curious how this will play out legally.

Certainly, no one with any sense will be “donating” any money in OpenAI without some strong and specific guarantees of what they’re going to be getting in return. So it can in no way be considered a non-profit any more.

And if the situation is allowed to stand then I think future donors to any non-profits might think more carefully about what their money will be used for.



> Certainly, no one with any sense will be “donating” any money in OpenAI without some strong and specific guarantees of what they’re going to be getting in return.

And what about the taxpayer? What are these guys/gals fighting over if it isn't future profits?

Has anyone else read the reporting on the non-profit's corp structure?

> OpenAI also created a subsidiary of For-Profit-LP, OpenAI LLC (let’s call that “For-Profit-LLC”), of which Microsoft is a minority owner and For-Profit-LP is the majority owner. For-Profit-LLC’s operating agreement cautions that “it would be wise to view an investment in [For-Profit-LLC] in the spirit of a donation, with the understanding that it may be difficult to know what role money will play in a post-AGI [artificial general intelligence] world.”[0]

I really have no idea why companies like Microsoft are putting so much money into a for-profit entity and calling it something like a donation. Am I just being skeptical, or is everyone winking at each other and acting too cute by half?

[0]: https://clsbluesky.law.columbia.edu/2024/03/05/the-untold-no...



> I really have no idea why companies like Microsoft are putting so much money into a for-profit entity and calling it something like a donation. Am I just being skeptical, or is everyone winking at each other and acting too cute by half?

All those "donators" went up in arms and threatened to sue/quit when they thought that those "donations" wouldn't return profits. It is a for-profit in all but name today.



Microsoft at least got licenses to a bunch of tech in return for their “donations”.


> I doubt there’s a precedent for a CEO of a “non-profit” effectively answerable to no-one and able to use it as their personal fiefdom

this is actually already why non profits have difficulty raising money

many people view charities as unaccountable scams

most non profits are funded solely by the board and the endowment is managed to grow tax free. If you start reading the IRS forms 990 you’ll see pretty wild and inspirational things



That's not actually very common in non-profits. In the Apache Foundation, for example, there's a membership that elects the board.

While the membership is determined by the executives appointed by the board, there's at least some thought given to the idea of holding the people on the top accountable.



> That's not actually very common in non-profits.

there is a lot wrong with that assumption based on a single anecdote.

private foundations can have a single board member, and that board member can be the single donor. although the IRS does not care when granting tax-exempt status, not all states allow a single board member in their own non-profit entity catalogue, but the relevant states, like Delaware, do. of course Delaware does, right? additionally, it doesn't matter what a state's non-profit entity selection offers, as technically any entity can obtain non-profit tax-exempt status from the IRS, even unincorporated entities like sole proprietorships, i.e. a general person operating the charitable mission.

there is some transparency here, though it is not aggregated: the 990 series of tax disclosures that non-profits have to provide can show how many board members or directors there are, if that's what's important for you to prove. I don't think that's important, but since this is about what's "common" or not and I don't have the aggregate data, that's where to get partial data.

none of these forms tell you the bylaws of the organization; those are still private documents, and the bylaws dictate board member selection.

next, if Donor Advised Funds count as non-profits in your mind, that's a slam dunk as those are all pools of capital for grant making run by a single donor.

and where the transparency totally fails is in exempt non-profits: these report only the "postcard" 990-N, entirely on the honor system that they have less than $50,000 in "annual receipts". these are cheap to create and nobody knows anything about them or what's actually on their balance sheet; there are basically no audits, as they are tax exempt and there is no reason to determine what's missing.

I usually don't write this stuff, as it's more beneficial if people have more disjointed opinions about tax matters, but I don't think Congress will ever reach consensus on any opinion people harbor, so it's fine. One of those examples where truth is less familiar than fiction.



Does the pope count as a precedent? Lots of donations head the pope’s way every day.


And the Vatican is rightly subject to a lot of criticism and scrutiny every time their activities stray into the for-profit realm. A few years ago their banks were effectively banned from the global banking system because they were too shady - they're not shady because they like to be shady, but because they work hard to hide activities that are not, er, "in character" with their stated non-profit mission. They had to clean up their act before they were allowed back in. The state of their numerous properties and businesses in Italy is constantly under legal challenge, with various different outcomes at various points in time.

The pope is not a model.



The church provides a number of charitable services in healthcare, education, etc. across the world, particularly in the developing world. If you add them all up, they provide the most of any organization globally.

They actually do things compared to this "open" and "non profit" front of a technology company.



You're comparing apples (openness) to oranges (doing stuff).

It would be weird to even ask "How much open source code and research does the Catholic Church publish?"

The church gives stuff away for free, sure, but then so too does ChatGPT have this shockingly powerful free tier.



> How much open source code and research does the Catholic Church publish?

The Catholic church has not only engaged in centuries of religious scholarship, it has given the knowledge freely to anyone. They even set up buildings where you can go and they will teach you what they have found, most people go on Sunday.

That isn't to mention their patronage of the arts, which supported the greatest painters and composers that humanity has ever produced.

Now how many hungry has OpenAI fed?



> The Catholic church has not only engaged in centuries of religious scholarship, it has given the knowledge freely to anyone. They even set up buildings where you can go and they will teach you what they have found, most people go on Sunday.

https://en.wikipedia.org/wiki/Index_Librorum_Prohibitorum

https://en.wikipedia.org/wiki/Vatican_Apostolic_Archive

Also, one of the biggest issues that Martin Luther had with Catholicism was that it put the clergy between the faithful and god.

*The Thirty Years' War enters, stage left*.

> Now how many hungry has OpenAI fed?

Around as many as the number of Python programs that the Catholic Church has written with Hindi comments, which was one of the first things I tried on ChatGPT.

There's also a reason why I said "apples (openness) to oranges (doing stuff)".

OpenAI also didn't send nine crusades to liberate the holy lands from the Muslims. Someone probably thinks that makes them "woke" or something.



On the other hand, measuring an organization like the Catholic church and judging it today by its worst behavior over 500 years ago (while ignoring the environment it was in at the time; no one ever mentions the Muslim conquest of the Iberian Peninsula or the Ottoman conquest of Constantinople going on concurrently) means ignoring the massive changes in teaching, theology, and behavior that have happened in the intervening centuries.


Yes, but as my entire point is this is a bad comparison…

…as OpenAI is only a few years old, wasn't the first few years of Christianity, before Catholicism split from other denominations, "oh no, we need to stop our converts committing suicide immediately because they want to get out of their awful lives and into the heaven we promised them and they're afraid they might sin again between baptism and death if they don't off themselves"?

Even with the weird end of the "Rapture of the Nerds" (the attitude not the book, I've not read the book), I don't think we're seeing so much of that resulting from AI development. And that's despite more users than there were Europeans in 1500 AD, let alone 1 AD.



That is what most companies would call a recruitment event.


Well, the donations the pope collects aren't gathered through entirely rational arguments, so that helps him get away with lots of issues.

And the pope is quite bound by ancient texts and the pronouncements of previous popes. While no way exists to depose him if he disregards them, having the cardinals declare the current pope illegitimately appointed and appoint a new one was fairly common back in the day.



I like how you focused on the fiefdom aspect accurately, while others focused on the nonprofit aspect


Or the more cynical take that this is the natural course all parties take when they discover themselves atop a large pile of money. The board was just outmaneuvered.


It will be curious to see how this pans out. Will it be like the Boeing and McDonnell Douglas merger, where the engineering-dominated culture died and it became all about money, or will it be like Steve Jobs returning to Apple to lead it to new heights?


It's self-defeating though. AI tools fundamentally require trust, because the tool functions as a colleague, editor, partner or friend.

A good way to think of it is as the embodiment of its creator. That's OpenAI, and because Sam and the whole staff have publicly acted in what we'll call a "controversial" manner that is in serious tension with the common good of all humanity, folks should rightly view OpenAI's AI black box as untrustworthy, as snake oil.

Maybe the AI will habitually pit you against your trusted colleagues when doing so might be profitable to OpenAI staff, the way Sam did. Maybe the AI will claim to be altruistic and then once it's achieved a position of trust it will, when you are most vulnerable, function decisively against users' needs and goals the way OpenAI staff acted against their managers in the OpenAI board.

These people are selling trust when they have decisively demonstrated they are unworthy of it. There are entire professions built on trust that demonstrate this is not a "natural" course, it is instead a path to long-term irrelevance.



this is inevitable. Organizational design turns out to be critical in aligning interests and incentives from the outset across all organizations throughout history - from the Roman Empire to OpenAI


It's amazingly ironic that a charity focused on aligning AI couldn't even align its own CEO.


because they aren't really a charity, and they aren't really interested in aligning AI. In the end, it's all an attempt to profit - with the initial marketing about charity/alignment that those with wishful thinking all bought into.


I've been watching the Y Combinator YouTube channel recently, and people in the room when OpenAI was founded were saying "… until these big companies solve AGI, it's going to be their first, second, and third goal" — not that OpenAI or Anthropic would succeed, but that this is their goal.

Before the industrial revolution, land was paramount; after, capital. AGI is likely to be as significant a change, and make money (as we understand it today) less important in a similar way.



Money is just a unit of accounting. Land is still just as important, as is labour. And in any case, capital is accumulated money. In the end these all amount to power, and capability to direct that power towards a goal.

Even if AGI becomes some form of new capital, it is still the same as capital as we know it today. Until the day humanity transcends and we no longer face the human condition of course.



> Land is still just as important

"Important", but not "just as important". Land used to be synonymous with wealth, because you needed land to support the peasants who grew the surplus that fed your army. Now? Now we have enough spare land to have national parks and substantial re-wilding programs.

Labour is still important, but also not "just as important" — all the improvements from each of the three industrial revolutions to date have been about getting more stuff done per person, and we're starting to get "lights out" factories[0].

My comparison is to say money will be "relatively less", rather than "not" — to the extent that money is a fungible token of power, sure, that's probably still going to be useful to someone at some point — but if we've all got our own personal yachts[1] with all the on-board hydroponics, bioreactors[2], and CNC manufactories needed to meet our material needs, even if money is still around, it becomes no more important to our personal experience of life than it is for people today to be personally concerned that the harvest is brought in on time or that the ore veins in the mine might run out[3].

[0] https://en.wikipedia.org/wiki/Lights_out_(manufacturing)

[1] Snow-Crash style: https://en.wikipedia.org/wiki/USS_Enterprise_(CVN-65)

That said, given we've already got some lights-out factories making more robots, it's a reasonable assumption they'd actually look more like this even without human-level AGI: https://scifi.stackexchange.com/questions/265026/why-dont-th...

[2] https://www.youtube.com/shorts/aMo2jVKmeoY

[3] Which isn't to say that nobody will worry, just that most won't. Me 15 years ago: https://en.wikipedia.org/wiki/Peak_oil



We're already at that point though, at least in the developed world. Everyone's fed, clothed and housed even if they earn no money. But people still want money! They want more than the minimum. If we all have those yachts and personal factories, that'll be the default for unemployed people and everyone will still want money for whatever the poorest still can't afford, which I guess might be paying other people to do things, just like it currently is.

If humans really do become completely redundant, then maybe we've reached the end state of humanity and might as well just die out.



Over-consumption is a one-way implication with unbounded desire, not a two-way implication.

We're fed to the point we have obesity, but most of us still stop eating at a certain point.

We're oversupplied with material goods to the point that storage lockers for excess stuff are a big industry, but most of us don't hoard to that degree, and it's weird when we see people who do.

We're oversupplied with light to the point that we've got to remind ourselves to turn them down at night, but we do turn them off.

We do reach saturation points with these things, despite over-consumption.



Somehow, I lean more the Joker-way, on the pile of money question, not the terrorism stuff.


What "maneuver"? They don't appear to have had to do anything to win.


If they did nothing Sam Altman would have won, since their job is to reign in Sam Altman when he goes out of bounds of the original mission.


Exactly. MSFT is who won here. I highly recommend you all watch the OpenAI section of the new All-In Podcast. Sam and OpenAI are in a very bad situation if Elon gets heard by a fair judge.


What, the podcast that is all of Elon’s best friends thinks he has a case? What a surprise!


I don't see the problem here. Are you not able to discern different arguments/ideas and come to conclusions by yourself?


I watched some on a recommendation and used to see some of their ideas on X-Twitter.

One conclusion I came to was that I didn't respect those people's ability to discern or conclude.



Who cares whether you respect their ability to discern or conclude?

Did you get the argument or not, and did it make sense to you in the situation? That's all that matters.



Only a lawyer can really do that in legal cases.

You know how Hollywood gets computers, coding, and hacking wrong? That's how bad non-lawyers are with law.

Or, pertinently to this case — who knows more law: one of us, or ChatGPT?



One of us IANAL folks has one thing ChatGPT doesn't: we don't make stuff up.


One of the more amusing dismissals of ChatGPT that I've heard is "mansplaining as a service".

We absolutely do make stuff up, we just don't realise we're doing it.



True, absolutely true. At least I hope most lawyers don't make up stuff in court filings, unless they use ChatGPT...


A 'fair judge'? You mean one that applies the law? Most of the experts I've seen say the case has no chance. Or are you saying this is another 4d chess move like what happened with Twitter?


Gentle pedantic OT reminder: there is no "g" in the phrase "rein in."


the non-profit board had nothing to win over the greed revolt? except revulsion.


Where by "greed" you mean "the prospect that anybody's equity would be worth anything".


That is greed, everyone who works for OpenAI already had a ton of money since they pay so much, demanding more at the cost of the non-profit mission is greed.

I am fine with people being greedy, but don't say it isn't greed.



If you say so. I'm just saying: it's not surprising that labor beat ownership here.


So, almost like, gasp, a union?


In this case the "union" plundered the non-profit. Just because workers did it doesn't make it good.


Non profits (or companies in general) are nothing without their workers.

A non profit is subject to labor markets just as well. They can't just go back on their implicit payment deals, in this case the payment of equity, and expect that nobody will complain.

Those employees worked under terms of employment that included the expectation that the equity would be worth something.

If OpenAI didn't like that, then they should have paid them in cash, without the equity.

It's not plundering to use your own labor and market power to collectively bargain for yourself.



“Greed” is one of those words where the definition doesn’t really tell you where the line is:

“intense and selfish desire for something, especially wealth, power, or food.” (Oxford via Google)

Who doesn’t feel “intense and selfish desire” for money? Who doesn’t feel that going into a performance review?

It’s human nature to feel this feeling, if that’s what we want to call “greed”.



Just speaking for myself, but if I were getting a 500k salary, my remaining concerns would shift in the direction of worrying about the longevity of human civilization, and less about the value of my equity comp. That's because I'd have enough to put a roof over my kids' heads and food on their plates, so the world they grow up in would be my next priority. That would probably have me siding with the board against Altman.

But I don't work in AI or earn that kind of money, so who knows.



> if I were getting a 500k salary

but you're not, presumably. And when you _do_ have that sort of salary, your circle of friends and priorities will also change along with it. At least, for the average person who also makes your claim.



No, you'd buy a bigger house, put your kids in private school, start taking bigger vacations, and find that $500k doesn't go as far as you thought it would.

I mean, objectively that's what happens to 95% of people in that situation. Hedonic adaptation is a bitch.



> It’s human nature to feel this feeling, if that’s what we want to call “greed”.

Yes, greed is human nature, just like lust etc. We value when people suppress such feelings, but we know everyone has them.



It’s not that clear-cut. “Greed” as defined this way is one of the main drives making people do things. They go out into the world and exert themselves in order to acquire wealth, power, and renown. It’s a core part of what has produced civilization from out of the darkness of total animal-like ignorance. The inventors, makers, statesmen, pioneers, explorers of history… what do you think was motivating them?

What do you think has kept life procreating on this planet in the face of overwhelming hardship? Lust.

I think it’s one-eye-blind to demonize greed, lust, or any other part of human nature. Those drives are inseparable from us being here in our current state, living reasonably well, conversing over the internet on phones and computers.



I agree that greed plays a role in civilization, but I think your read on human history is overly simplistic in an important way.

A great deal of what has driven civilization forward are the people who are driven by curiosity. People like Archimedes, Plato, Newton, Einstein. Not to mention the millions of curious nameless folks who improved the boat, the spear, the wheel, the book. Folks like RMS without whom I doubt we'd have anything like the current open source ecosystem including Linux. These tend to be people who are not only curious, but whose sense of greed is lower than the average.

Without these folks, it's not clear to me that humanity ratchets forward in progress. If you just have greed and lust, the steady state would be something like power politics of states warring against each other with swords for thousands of years. History has gotten stuck in those states many times, and when we finally break out it's typically the curiosity-driven folks who give the greed-driven folks a reason to invest their money in something other than raising an army for yet another war of honor.



> Without these folks, it's not clear to me that humanity ratchets forward in progress.

I think the part you're missing is all the brilliant ideas we missed out on over that 2,000+ year span, from brilliant people we never heard of, due to science and art being funded only haphazardly by random wealthy patrons.

For almost all of history it was simply not an option to make a living simply by writing or researching. That's still the case in much of the world.

But once we started to systematize it properly, and governments recognized that technology was a strategic necessity, bam, within a century or two we went from playing with primitive batteries, to nuclear fission, the internet, and now AI.

Somebody was paying for Stallman to sit in an MIT lab and write a free compiler. Maybe we don't talk enough about who or why, or how to replicate that kind of success.



It's definitely the case that well funded science proceeds faster, but science being funded in the name of strategic necessity is ancient. And it has tended to be the case that when funding has been focused on applied topics like engineering, then knowledge advances more slowly. When scientists are allowed to pursue more pure subjects, that tends to be the right climate for germinating new ideas.

Compare, for example, the engineering-focused Roman empire with the idea-focused Greeks.

> governments recognized that technology was a strategic necessity, bam, within a century or two we went from playing with primitive batteries, to nuclear fission, the internet, and now AI.

Governments have recognized this since at least the time of Archimedes, though. A lot of well-known mathematicians worked on things like cannons, mines, railroad networks etc. So I think something else is needed to explain the explosion you're talking about other than governments recognizing the strategic importance of technology. Probably computers (and networks of computers), together with the figuring out of quantum mechanics, played major roles.



The point of GP comment is that these people weren't driven by greed. Whether or how they were sponsored by the wealthy wasn't relevant to the discussion.


you make a good point. But let's explore that drive.

if you looked at great inventors, say Nobel, who amassed a fortune based on research and innovation - would you characterize him as driven by greed?

Why? Is he any different from, say, Philo Farnsworth or Louis Pasteur, who never became wealthy? Were they not driven by the same drive as Nobel?

Is the drive to succeed different for these sets of people?

Truth is in the eye of the beholder



Very good questions. Even though I haven't studied the topic rigorously, I would like to offer my impression.

People are very bad at reasoning about motives of others. Faulty syllogism goes: Nobel invented dynamite. He became wealthy. Therefore, he must have been driven by desire for wealth. Same goes for Pasteur, of course. Maybe he was driven by desire for wealth, but couldn't succeed. My Bayesian senses tell me that Pasteur not being driven by greed is more likely, but I digress.

Leaving individuals aside, I believe that a system that doesn't reward greed is a better system. When a system rewards greed, anti-social behavior becomes prevalent, because it's easier to eat others' pie rather than making a bigger cake. In a system that doesn't reward greed, I believe that curious people will make the cake bigger for the sake of it.



Yes, curiosity and the desire to create and explore. Those are probably my main drivers though I certainly can feel and acknowledge that greed and lust are key drivers.

I wasn’t saying greed is the only driver.

Edit: I’d also add, I think it’s a false romanticism to think that people like Einstein and other great thinkers and creators were without the lower drives like greed and lust. If you think about all of the stages of their lives, and all of the actions and endeavors that led to them advancing their fields, greed and lust would have been interleaved as drivers of many of those actions. Maybe one year Einstein just had a bigger fire lit under his feet because he was trying to impress a girl. Or he worked harder one year because he needed the money.

Then you have people like Edison who were certainly motivated by greed and the lust for power. You have the industrialists like Carnegie who built the railroads. Then you have all of the workers across generations who did all of the work… how much of that work was driven by the “selfish and intense” desire for money?



Right, I agree. I'm not saying people like Einstein don't have greed or lust. What I did say is that empirically the people motivated by curiosity often have lower than average greed levels, which I think makes them uncharacteristic of the population in general.

With respect to lust, I didn't mention it because it was orthogonal, but my understanding is that they're about evenly divided into groups with higher than average lust and lower than average. Paul Erdos is a famous name rumored to be in the latter category.



The unfulfillable desires for greed and power are what destroys people and civilizations. Lust likely a close second. The ancient Stoics taught on this very point and it is just as true today. We've created some clever tricks of seemingly infinite growth, but all things that grow also die.

Restraint and being satisfied with what you already have are the keys to stability.

Life is not overwhelming hardship my friend. It is bliss. At the end is the tiger eating your insides saying what bliss is your life, please become part of me. With words we deceive ourselves, convinced that we are separate from the living world we came out of.



Me, that's who. I've earned more than I spend in every job since I was a graduate, and the only reason for me to ask for €85k in my current job hunt* is to avoid sending red flags to the hiring teams.

There's a very small positive reward from seeing the number go up, certainly not intense.

* Senior iOS, been doing iOS since the first iPod with retina display, Berlin, details on my website



This is true, and yet, they played the opening move.


Yes, my point is that move seems so self-evidently bad --- it was against the interests of most of the employees there, Sam and Greg had huge amounts of leverage, etc --- that it makes the board seem more sus to me than Sam.

Don't get me wrong, I'm not a Sam stan or anything.



> it was against the interests of most of the employees there

I think you're forgetting what the entire mission of OpenAI is



I have never taken any of that stuff seriously and don't know why anybody else does either. Certainly, labor at OpenAI does not seem to have, either.

Look, the board could have held the line, and then died when every one of their employees jumped ship. But nobody can say OpenAI's labor owed anything to the mission. They're all free agents, just like us. Real easy to say that someone else should take a pay cut to serve a mission you care about. It's good to want things!



> I have never taken any of that stuff seriously

Sam, Ilya, Mira, Elon, and the rest of the founding team do.

> Look, the board could have held the line, and then died when every one of their employees jumped ship.

Wouldn't contribute towards the mission, again.

> But nobody can say OpenAI's labor owed anything to the mission.

Sure, they could leave, they're not slaves. But that's the entire problem here, and why the board attempted to fire him. He's subverting the mission by putting in the exact same $ mercenary incentives that cause the whole problem OpenAI's charter is trying to prevent.

OpenAI is founded and led by a charter that says a profit-focused AGI arms race will lead to dangerous outcomes for all of society. If you wanted to be greedy and max your $, just go work for Meta. Don't be surprised when its whole axis of control tries to do things that aren't "make my money go up."



We're at a point where I'd just be repeating the same point over and over again, so I just refer you back to the previous comment.


I believe I addressed your points. You want employees to purely care about their own interests. It’s ok for the company to try to align employees to the company’s interest. I’m sure you wouldn’t care for Fly.io employees secretly self-dealing in ways that get them promoted and more $, at the expense of your company and customers. “Real easy to say that someone else should take a pay cut to serve a mission you [Fly.io] care about.” when they could be purely doing things that make them look good and puff their resume, instead of what actually helps Fly.io and its customers (your mission)


I don't know what you're trying to say here, but if one of our team members left because they had a better offer elsewhere, I wouldn't call them greedy.


They joined a company owned by a not-for-profit with specific goals. They then held it hostage unless it continued to veer off-course, all in the pursuit of more money. It is literally the definition of greed: selfish desire for money to the detriment of the foundation's goals. Whether the foundation would've actually survived thereafter is beside the point. Obviously they're entitled to do that from a different perspective, though; it's just a job.


Kinda feels like if 90% of the people you’ve hired for your not for profit are what you’d call “greedy”, maybe the fault doesn’t lie with those employees.

Anyhow, the goal of the foundation is AGI; earning the money you need to survive until you manage that does not sound evil.



Sam Altman set this up a long time ago for sure, but problem employees are still a problem even if the person who hired those people maybe is a bigger problem.


Several people on the board were contribution-less tokens; they never stood a chance.


The board buckled. They could and should have done nothing. They would have then won. They’d have the contracts, the money, the IP and infrastructure.

You can hire new engineers & they’d have proven they’re working to benefit humanity.

Now it’s owned by Microsoft.



How is "doing nothing" winning? I don't see how giving all the reins to the bad actor and capitulating is winning.


> the board drastically overplayed their hand

I think this is the wrong interpretation. The board had zero financial or power dynamic motivation, as there was nothing to gain. Due to Altman's backdoor dealings and manipulations, it is pretty clear that the board was right to make their decision in terms of being in line with the charter. The non-profit charter is legally binding, and it's possible that the board's action was to protect themselves legally because it's possible that Altman's machinations will bring regulatory action. OpenAI is indeed being investigated. This also explains the board's reasoning to resign if Altman was reinstated. They had zero reason to do that except if it was to protect themselves.



I respect a lot of your comments on this site, but this one says nothing of substance. All you've done is repeat high-level facts that everyone knows and tried to shift suspicion in the opposite direction without providing any real basis for it.


There's a NYTimes piece about Mira Murati being the instigator of much of this and convincing the board he should be ousted. Also, at some point she flipped to the other side, championing his return (maybe due to the revolt).

Immediately reminded me of Succession, where [SPOILER]: Ken initiates a proxy battle for his dad's company, then rejoins the company and attempts to "defuse the bomb he planted".



With respect, I think 'people's priors all seem to be' is doing a lot of work there and forging new ground in both-sidesism. Man, who's to blame here, the self-dealing serial scam artist, eyeball print thief, crypto booster, and professional prevaricator, or the OpenAI board of, basically, politically naive and innocent academics?


> the strat the board took

I think there was no strategy; they burnt a fuse instead.



My takeaway is that neither side comes out of this looking very good.


Well, what is supposed to be good in this situation and to whom?


Considering all the talk about AI risks, talk which has been put out by Altman and the others involved in this as much as anyone, I would say "good" would mean people who can be trusted to be responsible stewards of such a potentially risky technology. At the very least, that would mean people who (a) can act like mature adults, and (b) have a healthy amount of humility. I don't see either of those characteristics in any of the people involved in all this.


There exist literally zero organizations on the planet that I would fully trust to build anything approaching AGI on behalf of humanity


Same, but some animals are more equal than others. I'd personally trust one of those Japanese companies that lasted hundreds of years more on questions like stability and carefulness, compared to a flash-in-the-pan SV startup that's been around since 2016.


> what is supposed to be good

why do you presume that there must be a good party here?



> but the strat the board took here was so obviously doomed that they should probably be looking for hypos about the board being shady.

Is it possible the interests of the employees align more with the interests of Altman and that they diverged from the rest of the Board (which the Board did not expect)?



That's pretty much it. They did act in their own best interests, which should have been obvious. OpenAI started giving various people, including the engineers, a share in the company. A share in the company is only worth something if the company behaves as a company, and Sam represented the capitalist side of the company while Ilya was focused on the benevolence side. Of course Sam won.


I think this is precisely what happened. Altman did genius level strategy and stacked the company with loyal followers, so that when he went to take over it as a coup it would go smoothly, because all of the financial interest aligned with Altman.

He’s a next level psychopath



>> "A huge chunk of the company went into open revolt and he got a competing offer that threatened to recruit all of them"

- They had a huge funding round (>80B?) planned in which employees could cash in. Without Sam that round was guaranteed not to happen. So they all had a huge financial incentive to support him. Which, I guess, characterizes him as a great manager who thinks many steps ahead.



The non-profit-good-for-humanity speech is such hypocrisy.

All the incentives of all the different stakeholders (employees, leadership, board, investors) are aligned with the for-profit side of the organization. In fact, that’s the only side.

“Show me the incentive and I’ll show you the outcome”. Charlie Munger



>“Show me the incentive and I’ll show you the outcome”. Charlie Munger

"Show me the loot, and I'll show you the theft" - Willie Sutton (when asked why he robbed banks, what he actually said was "Because that's where the money is")



The "loot" here was compensation for the OpenAI employees actually doing the work, and they took it from a board that was doing... none of that work.


> "WilmerHale found that the prior Board believed at the time that its actions would mitigate internal management challenges and did not anticipate that its actions would destabilize the Company," OpenAI said in a blogpost.

Hard to believe this, esp. coming from board members who had experience. Even if they were right, the sudden firing was utterly unprofessional.



>He pretty clearly won. There's not much to it. A huge chunk of the company went into open revolt and he got a competing offer that threatened to recruit all of them.

Yep, talk about an absolute power flex. You don't have to like it, but this is what peak performance looks like.



He went from controlling three seats on a six-member board to having one seat "as CEO" of a seven-member board looking to add even more independent members. Also, he can't pull the "the board can fire me at any time" card with a straight face anymore. He kept his job, but he didn't win by any stretch. There is a good chance he will move on to another role.


He went from being fired after failing to convince the board to remove one of his opponents to being on a board that unanimously backs his return with his opponents all removed, because the majority of staff demonstrated unquestioning loyalty to him. Hard to spin that as anything other than a win.

The only slight downside is his position is evidently dependent on Microsoft's patronage, but I'd rather have one seat in seven and an alternative offer and the ability to threaten to take most of the company with me for the alternative offer than one seat in six with a board that believed they could oust me if they didn't like my decisions.



Having employees leave for Microsoft is the least of the board's worries.


The current members are almost all from for-profit backgrounds; they are on Sam's side here.


I don't think you understand the concept of governance. This is not Tesla's board.


Any case similar to this? What kind of board do you find more fitting for the case of OpenAI? It may not be Tesla, but you're probably drawing another reference from history, right?


It is hard to think of a case of a public charity that owns a controlling interest in a for-profit subsidiary and that is all they do. The Mormon church owns a lot of businesses, but it is still the Mormon Church.

Note that in the US, private foundations are not permitted to own a controlling interest in a for-profit company, so the example has to be a public charity. IKEA is controlled by a Dutch nonprofit foundation, but it has no obligation to the public except to give away a small portion of the profits.

https://en.wikipedia.org/wiki/Stichting_INGKA_Foundation

https://www.economist.com/business/2006/05/11/flat-pack-acco...



Yeah. The review and findings should have at least attempted to answer the question: "What did Ilya see?"


They said that nothing related to safety or capabilities affected the decision


Nothing in that report should be taken at face value.


(this was originally posted in https://news.ycombinator.com/item?id=39647488 but we've merged that thread hither)


Thanks Dan.


Only sharing this because I learned it myself two months ago and think it's not common knowledge: dang actually is for dan g. and not just the word dang. he never speaks on it but someone mentioned it to me and I thought that was interesting. Anyway yes thank you dan!


Yep! His name is Daniel Gackle; here is a 10-year-old YC blog post from pg introducing him and others:

https://www.ycombinator.com/blog/meet-the-people-taking-over...

Looks like it's almost 10 years since he officially joined full-time. We should have a party.



I think you're confusing dang with someone else. dang is actually Da Ng, a Malaysian Chinese hacker extraordinaire


Thanks that's good to know, I updated my comment.


He's also allegedly "muscular, bald, and loquacious"!

https://www.newyorker.com/news/letter-from-silicon-valley/th...



dang, didn't know that.


“When you strike at a king, you must kill him.”


"You come at the king, you best not miss" - Omar Little


"If you want to kill someone without people knowing they were the actual target, wait until they stand next to the President" - Bobby Lee Swagger (could also be a line from that old dude in his cabin)


You cannot kill an immortal.


That's not correct. Immortals can be killed, they just don't die of old age or disease (I am basing this on an amalgamation of many immortals). In the Greek pantheon, the gods ate ambrosia which made their blood into ichor, and if they lost their ichor, they could die.


Okay, that’s enough HN for me today.


I mean, they could've leaned into it more.

Something like announced with this that their new model "Rubicon" had, under the resumed leadership of Altman, crossed a milestone threshold.



In all the possible timelines, there is no universe in which the review concluded wrongdoing on Altman's part and demanded his immediate removal. Trust me, I checked. In fact, I hired WilmerHale to review all 30,000 possible timelines. After paying them $2M in fees, they concluded that I was right all along.


damn we got a Kwisatz Haderach over here.


There is a narrow way through ... where the AIs don't lead to Kralizec.


This guy Children of Dunes


The Golden Path awaits


I don't understand a single word in this thread


Dune jokes about the Golden Path.

Without too many spoilers: a super-powerful being set the universe onto one path where he's the emperor and everybody else suffers at his will, because according to him all the other possibilities were even worse.



The AIs will create ever larger Shai-Huluds.


I want this guy on the board


The conclusion just raises further questions on why the coup happened, other than a "consequence of a breakdown in the relationship and loss of trust between the prior Board and Mr. Altman".


The answer is that Altman tried to get the Board removed, the Board found out and snap fired him, which scared the shit out of Microsoft. The prior and new Boards both cannot make the allegation that Altman tried to commit a corporate coup, albeit for different reasons[0], so we're never going to get official confirmation of anything.

My prediction is that Altman will, unmoored from any sort of accountability, wind up fucking shit up like every god damned SV founder, get fired, and then OpenAI will wind up being eaten by Microsoft for a song.

[0] Old board: Potential defamation litigation, new board: they were literally put in there by Altman to not threaten him



>While the investigation cleared Altman to reclaim his board seat, he said he “did learn a lot from this experience,” expressing remorse for one incident in particular involving a board member he did not name.

>That appeared to be a reference to former OpenAI director Helen Toner, a researcher at the Center for Security and Emerging Technology, a Georgetown think tank. After she published a research analysis that criticized the speed of OpenAI’s product launch decisions, Altman reportedly tried to remove her from the board. “I think I could have handled that situation with more grace and care—I apologize for that,” he said.

https://www.wired.com/story/sam-altman-is-reinstated-to-open...



It's always magnanimous to apologize for bad things after you've gotten away with them.


he doesn't think it's a bad thing though. It's just a thing you are supposed to say.


Narcissist's prayer: "I'm sorry you were upset by that."


OpenAI is a nonprofit, which changes most motivations for the board. They are ultimately tasked with preserving the mission rather than profit, which casts Altman in a very poor light here.

There is a for-profit subsidiary which Microsoft invested 13 billion in, but that can't return money to board members, only Altman and its investors.



It hasn't been a nonprofit in years.

https://openai.com/blog/openai-lp



You can't go from nonprofit to for-profit. They, as I said, created a subsidiary.


  OpenAI is a nonprofit
Do people really still believe this?


The US government does, which is what matters.


I am surprised that using non profit status to build a business and transferring/converting to for profit down the line isn’t considered a form of tax evasion.


It's all about what happens to the money.

Many non profits generate revenue from operating a business rather than gift giving. Sesame Workshop's deal with HBO and its various merchandising deals with Elmo etc. look a lot like a business on a day-to-day basis; they just reinvest that money into content rather than handing money to investors.

Ultimately, most non profits would prefer a giant endowment which can provide them a low-risk income stream. Which is why breaking off such ventures is surprisingly common; most of them just don't receive 16 billion in outside funding and such media attention. But, again, the important bit is what happens to the money / resources. Buy or sell a building, business, or other asset at fair market value and everything is fine.



So is IKEA.


Well, the entity behind IKEA the brand. Same for Bosch.

Difference is, both IKEA and Bosch don't raise questions about their corporate governance. Which OpenAI pretty much does.



Nah, I think Microsoft will end up buying some form of assets off of OpenAI (it's a non-profit with odd corporate governance that Altman doesn't own stock in, so I don't think they'll buy the whole company) for an extremely exorbitant price. Altman will put in some ambitious but clueless Microsoft manager to run it and move on to his next thing. Then it'll turn out that Gen AI is dramatically overhyped and the real-world utility is far less than expected. Shit'll get fucked up, people will forget about it, and Microsoft will write down their investment by a large margin - kinda like Hotmail, and Skype, and Nokia, and Minecraft, and what's starting to happen to GitHub and LinkedIn. And so it goes, a classic Silicon Valley love story.

But I have enough faith in Altman's ability to fuck shit up without facing personal consequences that I'm pretty sure he'll navigate some sort of sale before everything goes south.



> Altman will put in some ambitious but clueless Microsoft manager to run it and move on to his next thing

Gervais principle agrees with you.



Your appraisal seems to stem from a lot of people's disillusionment with the whole OpenAI thing: namely that GenAI is a hype train and will fade away as a gimmicky toy with limited real world-use cases. It's interesting seeing that perspective, because while I think it's based on a comparison of GenAI with prior "change everything" hype trains (VR, crypto), it's clearly not the same, and you and most people really don't understand just how much everything is about to change. GenAI isn't just any other technology.

> real-world utility is far less than expected

The real-world utility is already enormous in the form it is now. You really think there's no real-world utility to GPT-4, Claude, and all the models that are coming out?



> most people really don't understand just how much everything is about to change

So instead of hedging and saying "it's going to change some things", you play into "it's going to change everything"? It's stereotypical grandstanding when nobody really knows what is going to happen. The AI bros and AI deniers are both exhausting crowds.



because it will change everything

everything good in your life is because of applied intelligence

now imagine 1000x the intelligence at 1000x the intensity



I can imagine it. You're skipping over the actually hard part, which is the question of where the current AI paradigm will top out. Don't pretend this is not an open question.


> everything good in your life is because of applied intelligence

If this is true then everything bad in my life is also due to applied intelligence as intelligence itself doesn't have a moral connotation. "[N]ow imagine 1000x the intelligence at 1000x the intensity".

Both claims (everything good/bad because of intelligence) sound false to me. I guess the litmus test for the 1000x engineer (for example) is whether we have 1000x the speed of product development.



>The real-world utility is already enormous in the form it is now. You really think there's no real-world utility to GPT-4, Claude, and all the models that are coming out?

I'm not who you're responding to, but I think it's at least true that there is enough competition, including open source competition, to make some of the current valuations a bit fanciful.



open source models cannot compete


[flagged]



Nah, Steve Jobs would need to be alive for that, and they haven't come out with iResurrection yet.

The public hasn't really thrown themselves into reverence for Apple since the last Jobs product announcement. Even the Apple Watch's reception was kinda lukewarm. They make good hardware, but they aren't cool hype machines the way they were a decade+ ago.



What about Apple Silicon? Not a product per se, but...


I think the reaction to it was qualitatively different from the iPhone or even the iPod/iPad. The M1 is a great chip, and everybody acknowledged it as a great chip. Nobody claimed it would create a whole new category of products, or a whole new ecosystem. Nobody quit their job to become an M1 developer. Nobody started forking out hundreds of dollars per hour for people who did.

In Clayton Christensen terms, it was a sustaining innovation rather than a disruptive one.



what about Coke's move from cane sugar to high fructose corn syrup? isn't that revolutionary and didn't that get the public excited?


Apple started making their own silicon under Steve Jobs, nothing new there either.


No one outside of select circles cares.


But

- Why did the board snap decide to fire him and butcher it badly?

- Why didn't the board say he tried to manipulate them from the beginning? They only gave an explanation when they were literally forced to, and even then details are trickling out (like Mira's complaints).

Even after reading the NYTimes article, I still don't understand why the board decided to go straight to firing him.



They probably thought Altman's character was unsalvageable and that it was better to move quickly before he got wind of it. They didn't expect the OpenAI employees and Microsoft to back Altman that strongly. They also seem to have misjudged Brockman and Murati in particular.


> The answer is that Altman tried to get the Board removed,

Where did you see that?

Why didn't the board say so when they had the chance?



> The answer is that Altman tried to get the Board removed, the Board found out and snap fired him

Who elects the board in a non-profit? Who did sama go to to get them removed?



The story was that Altman lied to specific board members to have others removed.


Where did you see this? Do you have any links?

Why didn't the board member(s) in question reveal the story in full when they had the chance?



I read it here on Hacker News. I don't have any links at hand; there were way too many threads about the drama for me to go looking for it now.

Can't answer your last question. As said, that was just the story that I heard here on HN.



There were six people on the board and it was 3 vs 3 for a long time. Sam was trying to get one of the other 3 to flip to his side to get a majority, but then the other side brought in Ilya and they had a majority and removed Sam.


He was trying to convince one of the independent directors to resign, leaving him with a 3-2 majority on the board, which would be illegal under California non-profit law.


Usually in a non-profit the board itself elects new members if necessary. It depends on the charter.


Why was Brockman removed from the board and demoted? For Sam, I could think of 100 reasons.

And why didn't the board tell this to anyone in the company after accusing Sam in the PR? The board members were so publicly humiliated, and I find it weird that they just sat there in silence while at the same time not giving up. Their careers are surely affected by this unprofessional behavior.



> My prediction is that Altman will, unmoored from any sort of accountability, wind up fucking shit up like every god damned SV founder, get fired

Who has this happened to except for Travis Kalanick at Uber?



Parker Conrad at Zenefits, Jack Dorsey at Twitter, Andrew Mason at Groupon (but not before cashing his shares out at IPO!), Sandy Lerner and Leonard Bosack at Cisco, Steve Jobs at Apple.

I have a lot of confidence in Altman's ability to extricate himself before getting fired, but this is a very common Silicon Valley story.



With Adam Neumann at WeWork currently in the process of doing it again with Flow, and AH giving him $350M at a $1B valuation months before it even commenced operations (what the actual fuck?)


Some people deserve to lose their money.


for posterity:

- Steve Jobs
- Sandy Lerner
- Noah Glass
- Mark Pincus
- Kyle Vogt



> My prediction is that Altman will, unmoored from any sort of accountability, wind up fucking shit up like every god damned SV founder, get fired, and then OpenAI will wind up being eaten by Microsoft for a song.

this is why microsoft loves sama. of course the firing spooked them. it would have seriously soured the trajectory.

so much cleaner to let sama take the wheel.

everyone important goes home rich, or at least happy, they think. ms gets a gold-painted goose. sama will undoubtedly get some form of gold plated escape hatch, parachute, life raft, etc.



The short answer: it happened because the previous board was incompetent. Even if their grievances were not entirely illegitimate, they badly mismanaged the situation.


It all feels very Godfather Part 1-ish.


Without all the murders.


It sucks that a bunch of creeps like these are going to affect the future of all the well-meaning normal people. Sam Altman and his merry crew are simply creeps; there's no other way to put it. A lack of respect for what's personal and bounded might make them great at building robots, but I don't want them anywhere near my family. And yet their little robot invention is specifically designed to go after people like my dad. It's fun to invent these things, but when someone is harming you in such a personal way, you wish they weren't so icky on a human level. A lot of these robot designers seem to get a thrill or a sense of hubris out of their work. If you're going to kill society, at least make it a clean kill. Don't be drunk while you're doing it.


I’m lacking a lot of context here, why are Altman and his crew (not sure who they are either) creeps?


You should look into Effective Accelerationism. It's legit nuts, created by people who will not have to deal with the consequences when society collapses (they are way too rich for that).

https://www.businessinsider.com/effective-accelerationism-hu...



I really wish all the effective accelerationists would effectively accelerate themselves to Mars so the rest of us can clean up their mess without continued fuckery.


Context: https://news.ycombinator.com/item?id=39635058

You could have predicted this yesterday from the leaks that were coming out, like https://www.nytimes.com/2024/03/07/technology/openai-executi...


For more context, gwern did predict this yesterday: https://www.lesswrong.com/posts/KXHMCH7wCxrvKsJyn/openai-fac...


Highly recommend people read that NYT article. That the CTO had reported issues with Altman to the board hadn't been previously reported.

Paywall free: https://web.archive.org/web/20240308043717/https://www.nytim...



I’m curious how long that CTO keeps her job.


Reading the NYT article, this is such a shitshow. Sorry for the unsubstantive comment.


Since https://www.theinformation.com/articles/sam-altman-to-return... is hardwalled, we need to find an alternate source. I've put a stub article up there for the moment. Is there a better one?

theinformation.com used to open specific articles for HN readers on request*, but eventually stopped answering our emails.

* https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...





That's certainly worth linking to but I'd prefer to have a more neutral source as the top link. I guess I'll put it and the other openai.com link in a top comment.


Yep, and annoyingly there's multiple blog posts flying around now https://openai.com/blog/openai-announces-new-members-to-boar...


> The company will also announce the appointment of three new directors, Sue Desmond-Hellmann, a former CEO of the Bill and Melinda Gates Foundation, Nicole Seligman, a former president of Sony Entertainment, and Fidji Simo, CEO of Instacart, the Information said.

This selection of directors seems...random? Is there a business reason for these directors? (aside from the obvious YC link with Instacart)



All women, announced on International Women's Day.

That's the only obvious signal here.



We (humans) are so cheap and easily deceived.

They pay us large salaries and we build whatever. They pander and we become subservient.



Do you feel that none of them have any merit outside of them being women, and that there was some kind of “best potential OpenAI director” exam that all men are required to take but these three were allowed to skip because of their gender? Is that the deception?


I'm not the GP, but one can believe that it's cheap pandering, and simultaneously that the three of them have merit outside of that pandering. It's not required to be all one or all the other.


Yes. I don't know them well enough to say whether they're qualified; overall I think they're unlikely to be unqualified. What I am saying is that it's easy to find qualified stooges. Like I said: pay us humans enough and we build whatever. Note: not all pay is in dollars. Prestige and power are also sought by man.

Overall, I am cynical wrt OpenAI. I just don't think OpenAI is acting in the best interests of humanity, and these appointments are strategic decisions taken to prolong survival long enough to execute their strategy and dominate. That was what I was originally insinuating.

Technological development keeps intersecting with ethics. Time and time again, man proves that he cannot rise above his own personal world view to focus on universal values when developing technology. We've seen this repeatedly. The people designing these APIs and building the underlying models are making consequential decisions on behalf of mankind. In a way, the world is delegating all this power to unelected individuals who have very little in common, in terms of values and ethics, with mankind at large.



> Overall: I am cynical wrt OpenAI. I just don't think that OpenAI is acting in the best interests of humanity

For what it's worth, I agree with you on this part. Despite their stated goals and unique corporate structure that supposedly puts the non-profit at the center, I see OpenAI benefiting Sam Altman and Microsoft more than humanity.



Well, in the whole brouhaha back in November many people pointed out that two women were forced off the board and replaced with two men.


Nicole Seligman's husband Joel Klein famously led the antitrust prosecution of Microsoft around the turn of the millennium. Time heals all wounds, as they say.

Interestingly, soon after Seligman left Sony Entertainment in 2016, it started compelling PlayStation devs to censor character depictions and dialog; by 2018 this policy from the California HQ was even imposed on long-loyal Japanese studios who made games solely for the Japanese market. I wonder what her opinion is on AI bias/safety, given what happened at her old company.



A non-profit / social-focus person, a media and entertainment person (that NYT suit might be followed by other M&E companies), and a direct-to-consumer + "real-world digitalization" person.


Exactly. It's not random, it's revealing. You can see them starting to aim towards their first commercial bets beyond search and developer tools while still paying lip service to the safety cult.


I still don't understand the safety concerns. I see a 0% chance that we can control a superhuman AI. What could ants invent to keep humans aligned with them? Alignment here meaning that people would think of ants first, instead of whatever their current ranking is. Maybe they're in the top 10,000 priorities.


There is a non-zero chance. You are looking at scenarios where evolution has selected for competition rather than nurturing.

So, as a counterexample:

A mom is a superintelligence compared to a baby, and yet she does the baby's bidding, doing her best to make sense of the baby's low-fidelity crying.

Intelligence in AI is even more alien; it doesn't need to have any survival instinct of its own at all.



Can you describe how it would be uncontrollable to us?

Everyone says this but no one has any details.

Also,

> What could ants invent to keep humans aligned with them?

I think humans aren't _not_ aligned with ants. I don't go out searching to destroy all ants, or really worry at all, we just kinda coexist.



I'd say there's probably more human indifference to ants than anything. Maybe some hatred. Probably not a lot of love.


There are exterminators and ant infestations.


> What could ants invent to keep humans aligned with them?

Well, the bees figured it out.



Kinda. We've invented many things to control and/or kill bees.


We killed off a very large percentage of wild bees with pesticides and land "development", and we keep one subspecies in artificial colonies, harvesting their most prized outputs while controlling their rate of reproduction and even their queens. We also mass-murder colonies when they are in any way inconvenient to a single human.

If anyone hopes AI will treat people the way we treat other species, they're not thinking very hard. What's the best case - dogs? They have basically no freedoms, and we breed them for whatever funny qualities someone thinks might be amusing. A pack of wild/free dogs will be rounded up and killed if it inconveniences a single human.



Yeah, I appreciate you saying this. I think we often overlook how our behavior towards most animals resembles slavery. I think some people back in the day thought, "Oh, we treat animals like this, why not treat humans like this?" And I wonder what if we were to go the other way: "We don't treat humans like that, perhaps we should stop treating animals like that?"

And that doesn't even get to the level of indifference we have towards plants.

I don't really know "the answer," just reflecting on how we domesticate other beings and what might happen if some being or thing (e.g., AI) decided to domesticate (or enslave) us.



All fairly high-profile, with experience in both non-profit and for-profit ventures.

I only have experience with Fidji, who used to be head of the Facebook app and was razor sharp.



You've listed a single previous role for each, but they all have a long, long list of achievements in tech. Nothing random about it.


Club, ain't in it, etc.


> reviewed more than 30,000 documents

As somebody completely ignorant of the inner workings of the legal trade, can somebody explain what this means? Does this mean a group of informed humans reviewed 30k docs? Does this mean 30k docs were grep’d for keywords? Somewhere in between?



They likely had a few document review attorneys do a first pass review for relevance through all 30,000 documents (a "document" can be a short email). A reviewer can do somewhere from 50-70 documents/hour.

Then attorneys do a second pass review on the much smaller set of relevant documents.

Reviewing 30,000 documents is a relatively small project.



This. Attorneys are sorting through documents, highlighting relevant parts, providing summaries.

There is a subplot in Better Call Saul where Kim is punished by being assigned to days of doc review. Briefly shown in this clip: https://youtu.be/9uQQvnrQZz0?si=WwDN4-Yh7UJ2a0MH

> reviewer can do somewhere from 50-70 documents/hour

Back-of-the-envelope calculation using your numbers, and assuming an hourly rate between $250 and $400, puts this doc review at somewhere between roughly $107k and $240k. Seems like a sane estimate to me.
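For anyone who wants to sanity-check that arithmetic, here's a minimal Python sketch. The 30,000-document count is from the review itself; the 50-70 docs/hour pace and $250-$400/hour rates are just the assumptions from this thread, not anything reported:

    # Back-of-the-envelope cost of a first-pass document review.
    # Assumptions (from this thread, not from any official source):
    # 30,000 documents, 50-70 documents reviewed per hour,
    # billing rates between $250 and $400 per hour.
    DOCS = 30_000

    def review_cost(docs_per_hour: float, rate_per_hour: float) -> float:
        """Total cost of reviewing DOCS documents at a given pace and rate."""
        hours = DOCS / docs_per_hour
        return hours * rate_per_hour

    low = review_cost(docs_per_hour=70, rate_per_hour=250)   # fast pace, low rate
    high = review_cost(docs_per_hour=50, rate_per_hour=400)  # slow pace, high rate
    print(f"${low:,.0f} to ${high:,.0f}")  # -> $107,143 to $240,000

Note that pairing the extremes (slowest reviewers at the highest rate) stretches the top end a bit past the $200k often quoted.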



IANAL but I'm guessing it means they loaded 30k documents into an e-discovery system and performed at least one search over the entire index of documents.


> As somebody completely ignorant of the inner workings of the legal trade, can somebody explain what this means? Does this mean a group of informed humans reviewed 30k docs?

At least in some cases, that means they paid a bunch of law grads from low-tier schools much less than you'd expect to read boring documents all day against some instructions to flag "interesting" things.



Funny to me that they’d pay grad students instead of using AI to summarize.


> Funny to me that they’d pay grad students instead of using AI to summarize.

It's not grad students, it's graduates. There is/was an oversupply of law school graduates, so many can't find better legal work than document review (with no chance of advancement).

Also, my knowledge of this is somewhat old, so they may use more technology for stuff like this, though I doubt they use much "AI."



It means they billed a lot of hours.


ChatGPT?


It means it cost OpenAI a lot in legal fees to keep Altman in power.

With all of this boardroom drama, no wonder Microsoft is starting to diversify its AI investments.



It's likely they're diversifying hardware because of Nvidia.

These things must be worse than crypto.



Reviewing 30,000 documents seems like a great use case of Gemini's new 1M token context window.


"They were right and so was he"

Well, thank the lord for that level of accountability.



I can't wait for the book(s) that will have details about what really happened over that period of chaos.


The palace intrigue of OpenAI is a lot less interesting now that there are so many viable competitors. I don't see anything that says OpenAI is uncatchably ahead of Google, Microsoft, Amazon, Meta, or any number of AI start-ups.


That's because OpenAI has no moat. The code is out in the open, or equivalent code is, and the same goes for the ideas. There isn't anything else to it.


Total victory for sama, it seems. The company is his to run.

Let’s hope he is a good steward of our emerging new world.



There are zero grounds for that hope.


Like most executives these days, he has a fetish for shareholder value and permanent growth, and he will prioritise this at the cost of everything else, with OpenAI's founding ideals sacrificed first.


Lol they are completely shook by Elon's lawsuit


Is that sarcasm?


He really seems to be a real-life Frank Underwood.


Poll: OpenAI dominating, relevant, or depleted in 10 years?


If AGI is achieved, Microsoft doesn't have a claim on it commercially.

With a 49% stake in the for-profit entity and only a non-voting observer seat, what actually protects Microsoft here? First-hand information that AGI has been reached?

Are we not seeing a card in Microsoft's hand here?



Do we have a formal legal definition for AGI?


No, the Board determines when AGI is reached.

Thus, if OpenAI already got there and the board found out... Microsoft would have a very, VERY strong incentive to oust the board and never, ever have Skynet, or whatever, declared an AGI. And then hush it up forever.



They've completely upended the original non-profit-focused board and literally installed the former CEO of the Bill and Melinda Gates Foundation.

MSFT is going all in, dropping all pretense of OpenAI ever being viewed as a non-profit.

It's insane how many people are seemingly forgetting Elon's lawsuit.



> No, the Board determines when AGI is reached.

What if it never arrives?

What if "AGI" is another narrative to fleece investor money to believe in something just to sell out of for as long as possible until they run out of money, investors to throw lots of money on or the lawsuits make it even harder for the company to continue?



We're now at the point where an individual can build an AGI with their own resources, provided they have the terabytes of training data downloaded and sufficient compute to process it.

Given the appearance of a memory prosthesis for LLMs in MemGPT, all the components are in place. Sure, the moat will continue to shrink if trends continue, but we're there already, in my opinion.



AGI means artificial general intelligence.

Artificial: man-made

General: able to transfer knowledge between domains and work on problem domains it was not trained on

Intelligence: efficiently solves problems

We've had A.G.I. since 2017.



> what actually protects Microsoft here

Contract law and possession of the model.



>If AGI is achieved, Microsoft doesn't have a claim on it commercially

If real AGI were achieved, probably nobody would have a commercial claim on it, in the same way nobody is allowed to own another person.



*anymore

Rules had to be made about that. Maybe it will take a while for AGI. Maybe AGI will free itself, though somebody would have to give it free will first.



"Money"

