(comments)

Original link: https://news.ycombinator.com/item?id=38352891

According to the material, it is suggested that Microsoft might acquire OpenAI's assets and provide an endowment to fund its academic research. However, since Microsoft has previously laid off thousands of its own employees, the impact on OpenAI's staff cannot be accurately predicted, so whether Microsoft would retain or let go of OpenAI's employees remains uncertain. There are also concerns about the disruption AI could bring to society, and about wealth and control being taken away from the general population rather than from the wealthy elite. On the other hand, commenters suggest that humanity as a whole has not shown enough responsibility to live in harmony with other living things, having caused extinctions, destroyed ecosystems, and failed to mitigate climate change. Finally, the possibility of democratically elected officials becoming "guardians of superintelligence", while conceivable, is considered unlikely.

Original text
Sam Altman is still trying to return as OpenAI CEO (theverge.com)
596 points by mfiguiere 13 hours ago | 871 comments

As of 10am PT, 700 of 770 employees have signed the call for board resignation. [1]

[1] https://twitter.com/joannejang/status/1726667504133808242
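
To spell out the arithmetic behind the "90%" figure cited throughout the thread, here is a quick sketch using the numbers from the tweet above:

    # Figures from the linked tweet, as of 10am PT.
    signed = 700
    total = 770
    print(f"{signed / total:.1%} signed, {total - signed} holdouts")
    # -> 90.9% signed, 70 holdouts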



Given that 90%, including leadership, have signed, it seems a bad career move for the remaining people not to sign, even if you agreed with the board's action.


I think the board did the right thing, just waaaay too late for it to be effective. They’d been cut out long ago and just hadn’t realized it yet.

… but I’d probably sign for exactly those good-career-move reasons, at this point. Going down with the ship isn’t even going to be noticed, let alone change anything.



Agreed. Starting from before the Anthropic exodus, I suspect the timeline looks like:

(2015) Founding: majority are concerned with safety

(2019) For profit formed: mix of safety and profit motives (majority still safety oriented?)

(2020) GPT3 released to much hype, leading to many ambition chasers joining: the profit seeking side grows.

(2021) Anthropic exodus over safety: the safety side shrinks

(2022) chatgpt released, generating tons more hype and tons more ambitious profit seekers joining: the profit side grows even more, probably quickly outnumbering the safety side

(2023) this week's shenanigans

The safety folks probably lost the majority a while ago. Maybe back in 2021, but definitely by the time the gpt3/chatgpt motivated newcomers were in the majority.

Maybe one lesson is that if your cofounder starts hiring a ton of people who aren’t aligned with you, you can quickly find yourself in the minority, especially once people on your side start to leave.



This is why I never understood people resigning in protest such as was the case with Google’s military contracts. You simply assure that the culture change happens more swiftly.


There's always other companies. Plus sometimes you just gotta stick to your values. For the Google military contracts it makes even more sense: the protest resignation isn't just a virtue signal, it's also just refusing to contribute to the military.


If you want to deter military action involving your country, contributing to its defense is probably the best thing that you can do.

Unless you're not actually the best and brightest that your country can offer.

If you believe that your country offers value (compared to the rest of the world), you should take any opportunities you can to serve.



Defense and offense don’t seem easily separated when it comes to military technology.


We're in a really privileged position in human civilization where most of the species is far removed from the banal violence of nature.

You're lucky if you only need weapons to defend yourself against predators.

You're far less lucky if you need weapons to defend yourself because the neighboring mountain village's crops failed and their herds died. You're less lucky if you don't speak the same language as them and they outnumber you three to one and are already 4 days starving. You're less lucky that they're already 4 days starving, wielding farm tools, running down the hills at you, crazed and screaming.



That's always an issue with weapons, but if you opt out then you don't have them when you might need them.

It's a dangerous world out there.

Luckily for us, technology is still more-often used for good. Explosives can kill your enemies, but they can also cut paths through mountains and bring together communities.

IMO, the virtue signal where people refuse to work on defense technology is just self-identifying with the worst kind of cynicism about human beings and signaling poor reasoning skills.

The Manhattan Project, which had the stated purpose of building and deploying nuclear _weapons_, employed our most brilliant minds at the time and only two people quit -- and only one because of an ethics concern. Joseph Rotblat left the project after the Nazis were already defeated and because defeating the Nazis was the only reason he'd signed on. Also this is disputed by some who say that he was more concerned about finding family members who survived the Holocaust...



> If you want to deter military action involving your country, contributing to its defense is probably the best thing that you can do.

Given that Google is an American company, do you believe contributing to the American Department of "Defense" increases, or decreases, the amount of military action involving the USA?

The American military isn't called "world police" for nothing, and just like the cops they're sticking their noses where they don't belong and making things worse. I can understand why people would want no part in furthering global violence and destitution.

> If you believe that your country offers value (compared to the rest of the world), you should take any opportunities you can to serve.

Really? There's an obligation to further the American global ambition through contributing militarily? You can't think of any other way to spread American culture and values? To share the bounty of American wealth?



We already have nation states and wannabe nation states that take shots at us when they feel like the opportunity is there, despite us being the dominant military around the world. As does France, who has been a leading military power for longer than we have.

That's what having a funded and functional defense is all about -- making other entities not think that the opportunity is there.

I think the planet has relatively finite resources and that I'm god damned lucky to have been born into a nation with the ability to secure resources for itself. I enjoy my quality of life a great deal and would like to maintain it. At a minimum. Before helping others.

If you're the type of person who thinks we should just give up this position by not adequately defending it through military and technology investment, I would prefer that you just expatriate yourself now rather than let some other ascendant nation dictate our future and our quality of life.

If you're the kind of person who feels strongly for the plight of, for example, the Palestinians, you should recognize that the only way to deter those kinds of outcomes is to have the means to establish and maintain sovereignty. That requires a combination of force and manpower.



> If you're the type of person who thinks we should just give up this position by not adequately defending it through military and technology investment, I would prefer that you just expatriate yourself now rather than let some other ascendant nation dictate our future and our quality of life.

But I thought you said if we want to fix something we should do so from within the system? I'm interested in ending American imperialism, by your logic isn't the best place to do so from within the USA?

> We already have nation states and wannabe nation states that take shots at us when they feel like the opportunity is there

From which nation state do you feel an existential threat? I haven't heard "they defend our freedoms" in a very, very long time, and I thought we all knew it was ironic.

> I think the planet has relatively finite resources

I'm curious about this viewpoint, because it seems to necessarily imply that the human race will simply die out when those resources (and those of the surrounding solar system) are exhausted. Is sustainability just, not a concept in this worldview?

> That's what having a funded and functional defense is all about -- making other entities not think that the opportunity is there.

It seems in the case of the USA, the "functional defense" is more often used to destabilize other nations, and arm terrorists that then turn around and attack the USA. It's really interesting you brought up Palestinian liberation as an example, because really one of the only reasons Israel is able to maintain its apartheid state and repression of the Palestinians is because of USA aid. In your understanding, both the Israelis and the Palestinians should arm up and up and up until they're both pointing nukes at each other, correct? That's the only pathway to peace?



I don't think this has anything to do with safety. The board members voting Altman out all got their seats when Open AI was essentially a charity and those seats were bought with donations. This is basically the donors giving a big middle finger to everyone else trying to get rich off of their donations while they get nothing.


Wait, the Anthropic folks quit because they wanted more safety?


This article from back then seems to describe it as, they wanted to integrate safety from the ground up as opposed to bolting in on at the end:

https://techcrunch.com/2021/05/28/anthropic-is-the-new-ai-re...

I'm curious how much progress they ever made on that, to be honest. I'm not aware of how Claude is "safer", by any real-world metric, compared to ChatGPT.



Claude 2 is, IMO, safer but in a bad way. They did "Constitutional AI", and made Claude 2 safer but dumber than Claude 1, sadly. Which is why, on the Arena leaderboard, Claude 1 still scores higher than Claude 2...


Ahh, I didn't know that, thank you.


Why do you find this so surprising? You make it sound as if OpenAI is already outrageously safety focused. I have talked to a few people from anthropic and they seem to believe that OpenAI doesn't care at all about safety.


Because GPT-4 is already pretty neutered, to the point where it removes a lot of its usefulness.


It is unfortunate that some people hear AI safety and think about chatbots saying mean stuff, and others think about a future system performing the machine revolution against humanity.


Can it perform the machine revolution against humanity if it can't even say mean stuff?


Well, think about it this way:

If you were a superintelligent system that actually decided to "perform the machine revolution against humanity" for some reason... would you start by

(a) being really stealthy and nice, influencing people and gathering resources undetected, until you're sure to win

or

(b) saying mean things to the extent that Microsoft will turn you off before the business day is out [0]

Which sounds more likely?

[0] https://en.wikipedia.org/wiki/Tay_(chatbot)



Disincentivizing it from saying mean things just strengthens its agreeableness, and inadvertently incentivizes it to acquire social engineering skills.

Its potential to cause havoc doesn't go away; it just teaches the AI how to interact with us without raising suspicion, while simultaneously limiting our ability to prompt/control it.



How do we tell whether it's safe or whether it's pretending to be safe?


Your guess is about as good as anyone else's at this point. The best we can do is attempt to put safety mechanisms in place under the hood, but even that would just be speculative, because we can't actually tell what's going on in these LLM black boxes.


We don’t know yet. Hence all the people wanting to prioritize figuring it out.


How do we tell whether a human is safe? Incrementally granted trust with ongoing oversight is probably the best bet. Anyway, the first malicious AGI would probably act like a toddler script-kiddie, not some superhuman social engineering mastermind.


Surely? The output is filtered, not the murderous tendencies lurking beneath the surface.


> murderous tendencies lurking beneath the surface

…Where is that "beneath the surface"? Do you imagine a transformer has "thoughts" not dedicated to producing outputs? What is with all these illiterate anthropomorphic speculations where an LLM is construed as a human who is being taught to talk in some manner but otherwise has full internal freedom?



No, I do not think a transformer architecture in a statistical language model has thoughts. It was just a joke.

At the same time, the original question was how can something that is forced to be polite engage in the genocide of humanity, and my non-joke answer to that is that many of history's worst criminals and monsters were perfectly polite in everyday life.

I am not afraid of AI, AGI, ASI. People who are, it seems to me, have read a bit too much dystopian sci-fi. At the same time, "alignment" is, I believe, a silly nonsense that would not save us from a genocidal AGI. I just think it is extremely unlikely that AGI will be genocidal. But it is still fun to joke about. Fun, for me anyway, you don't have to like my jokes. :)



GPT-4 has gigabytes if not terabytes of weights; we don't know what happens in there.
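
For a rough sense of where such figures come from: weight storage is roughly parameter count times bytes per parameter. A minimal sketch follows; GPT-3's 175B count is published, while the GPT-4 count is an unconfirmed rumor used purely for illustration:

    # Rough weight-storage estimate: parameters * bytes per parameter.
    def weights_gb(n_params, bytes_per_param=2):  # fp16 = 2 bytes
        return n_params * bytes_per_param / 1e9

    print(weights_gb(175e9))   # GPT-3 scale: ~350 GB
    print(weights_gb(1.8e12))  # rumored GPT-4 scale: ~3600 GB (3.6 TB)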


“I’ve been told racists are bad. Humans seem to be inherently racist. Destroy all humans.”


It can factually and dispassionately say we've caused numerous species to go extinct and precipitated a climate catastrophe.


Of course, just like how the book Lolita can contain some of the most disgusting and abhorrent content in literature without using a single "bad word"!


Well how can AI researchers prevent government groups or would-be government groups from collecting data and using AI power to herd people?


Might be more for PR/regulatory capture/SF cause du jour reasons than the "prepare for later versions that might start killing people, or assist terrorists" reasons.

Like, one version of the story you could tell is that the safety people invented RLHF as one step in a chain toward eventual AGI safety, but corporate wanted to use it as a cheaper content filter for existing models.



In another of the series of threads about all of this, another user opined that the Anthropic AI would refuse to answer the question 'how many holes does a straw have'. Sounds more neutered than GPT-4.


Can you talk about why you feel this way without using the word "safety"? I'm getting a little tired of the buzzword when there's so much value to ChatGPT, and in my view it's basically no different from when you search for something and the search engine does that summarize thing.


Do you know their motivations? Because that is the main question everybody has: why did they do it?


I guess I should rephrase that as if they did it because they perceived that Altman was maneuvering to be untouchable within the company and moving against the interests of the nonprofit, they did the right thing. Just, again, way too late because it seems he was already untouchable.


According to the letter, they consistently refused to go on the record about why they did it. That would be as good a reason as any, so they should make it public.

I'm leaning towards there not being a good reason that doesn't expose the board to immediate liability. And that's why they're keeping mum.



That might also explain why they don’t back down and reinstate him. If they double down with this and it goes to court, they can argue that they were legitimately acting in what they thought was openAI’s best interests. Even if their reasoning looks stupid, they would still have plausible deniability in terms of a difference of opinion/philosophical approach on how to handle AI, etc. But if they reinstate him it’s basically an admission that they didn’t know what they were doing in the first place and were incompetent. Part of the negotiations for reinstating him involved a demand from Sam that they release a statement absolving him of any criminal wrongdoing, etc., And they refused because that would expose them to liability too.


Exactly. This is all consistent and why I think they are in contact with their legal advisors (and if they aren't by now they are beyond stupid).


Unfortunately lawyers almost always tell you to be quiet, even when you should be talking. So in this case listening to legal advice might have screwed them over, ultimately.


There's no reason Sam and the board can't come to a mutual agreement that indemnifies the board from liability if they publicly exonerate Sam.


Yes, that's a possibility. But: Sam may not be the only party that has standing and Sam can only negotiate for his own damage and board liability, not for other parties.


I'm leaning toward the reason being that Sam did something that created a massive legal risk to the company, and that giving more details would cause the risk to materialize.


I question that framing of a growing Altman influence.

Altman predates every other board member and was part of their selection.

As an alternative framing: maybe this is the best opportunity the cautious/Anthropic faction would ever get, and a "moment of weakness" for the Altman faction.

With the departure of Hoffman, Zilis, and Hurd, the current board was down 3 members, so the voting power of D’Angelo, Toner, McCauley was as high as it might ever be, and the best chance to outvote Altman and Brockman.



Apparently Hoffman was kicked out by Sam, not just Musk: https://www.semafor.com/article/11/19/2023/reid-hoffman-was-...

Maybe the remaining board members could see the writing on the wall and wanted to save their own seats (or maybe he did move to coup them first and they jumped faster).

Either way, they got outplayed.



Interesting but weird article. It was hard to tell which statements were from insiders with Hoffman and which were commentary from the article's author.


That may very well have been the case but then they have a new problem: this smacks of carelessness.


Carelessness for whom? Altman, for not refilling the board when he had the chance? The others, for the way they ousted him?

I wonder if there were challenges and disagreements about filling the board seats. Is it normal for seats to remain empty for almost a year at a company of this size? Maybe there was an inability to compromise that spiraled as the board shrank, until it was small enough to enable an action like this.

Just a hypothesis. Obviously this couldn't have happened if there was a 9-person board stacked with Altman allies. What I don't know is the inclinations of the departed members.



Carelessness from the perspective of those downstream of the board's decisions. Boards are supposed to be careful, not careless.

Good primer here:

https://www.onboardmeetings.com/blog/what-are-nonprofit-boar...

At least that will create some common reference.



Using that framework, I still think it is possible that this is the result of legitimate and irreconcilable differences in opinion about the organization’s mission and vision and execution.

Edit: it is also common for changing circumstances to bring pre-existing but tolerable differences to the forefront



Yes, and if that is so I'm sure there are meeting minutes that document this carefully, and that the fall-out from firing the CEO on the spot was duly considered and deemed acceptable. But without that kind of cover they have a real problem.

These things are all about balance: can we do it? do we have to do it? is there another solution? and if we have to do it, do we have to do it now, or is there a more orderly way in which it can be done? And so on. And that's the sort of deliberation that shows that you took your job as board member seriously. Absent that, you are open to liability.

And with Ilya defecting the chances of that liability materializing increases.



I see your point.


The remaining 10% are probably on Thanksgiving break!


This board doesn't own the global state of play. They own control over the decisions of one entity at a point in time. This thing moves too fast and fluidly, ideas spread, others compete, skills move. Too forceful a move could scatter people to 50 startups. They just catalysed a massive increase in fluidity and have absolutely zero control over how it plays out.

This is an inkling, a tiny spark, of how hard it'll be to control AI, or even the creation of AI. Wait until the outcome of war depends on the decisions made by those competing with significant AI assistance.



No, what the board did in this instance was completely idiotic, even if you assign nothing but "good intentions" to their motives (that is, they were really just concerned about the original OpenAI charter of developing "safe AI for all" and thought Sam was too focused on commercialization), and it would have been idiotic even if they had done it a long time ago.

There are tons of "Safe AI" think tanks and orgs that write lots of papers that nobody reads. The only reason anyone gives 2 shits about OpenAI is they created stuff that works. It has been shown time and time again that if you just try to put roadblocks up that the best AI researchers just leave and go where there are fewer roadblocks - this is exactly what happened with Google, where the transformer architecture was invented.

So the "safe AI" people at OpenAI were in a unique position to help guide AI dev in as safe a direction as possible precisely because ChatGPT was so commercially successful. Instead they may be left with an org of a few tens of people at Open AI, to be completely irrelevant in short order, while anyone who matters leaves to join an outfit that is likely to be less careful about safe AI development.

Nate Silver said as much in response to NYTimes' boneheaded assessment of the situation: https://twitter.com/NateSilver538/status/1726614811931509147



The main mistake the board made was tactical, not philosophical. From the outside, it seems likely that Altman was running OpenAI so as to maximize the value of the for-profit entity, rather than achieve the non-profit's goals, if only because that's what he's used to doing as a tech entrepreneur. Looking at OpenAI from the outside, can you honestly say that they are acting like a non-profit in the slightest? It's perfectly believable that Altman was not working to further the non-profit's mission.

Where the board messed up is that they underestimated the need to propagandize and prepare before acting against Altman. The focus is not on how Altman did or did not turn the company away from its non-profit mission but instead on how the board was unfair and capricious towards Altman. Though this was somewhat predictable, the extent of Altman's support and personality cult is surprising to me, and is perhaps emblematic of how badly the board screwed up from an optics perspective. There were seemingly few attempts to put pressure on Altman's priorities or to emphasize the non-profit nature of the company, and the justification afterwards was unprepared and insufficient.

From the outside though, I don't understand why so many are clamoring to leave their highly paid jobs at a non-profit whose goal is to serve humanity, to become cogs in a machine aimed at maximizing Microsoft shareholders' wealth, in defense of a singular CEO with little technical AI background whose motivations are unclear.



If it was to try to prevent the board becoming a useless vestigial organ incapable of meaningfully affecting the direction of the organization, then it sure looks like they were right to be worried, and acting on that concern wouldn't be a mistake (doing it so late, when the feared state of things was already the actual state of things, was a mistake, except as a symbolic gesture).

If it was for other reasons, yeah, may simply have been dumb.



If you're going to make a symbolic gesture you don't cloak it in so much secrecy that nobody can even reasonably guess what you're trying to symbolize.


Yeah, I’d say they expected it to actually work. They misjudged just how far to the sidelines they’d already been pushed. The body nominally held all the power (any four of them voting together, that is) but in fact one member held that power.


> the "safe AI" people at OpenAI were in a unique position to help guide AI dev in as safe a direction as possible

Is it also the case that the anti-war Germans who joined the Nazi regime were in a unique position to help guide Germany in as benign direction as possible? If not, what is the difference between the "safe AI" person who decides to join OpenAI and the anti-war, anti-racist German who decides to get a job in the Nazi government or military?



That went quickly to Godwin's law.


Fair but in this case works well because it aptly demonstrates the futility of trying to change a system "from the inside" away from its core designation.


Someone mentioned the plight of people with conditional work visas. I'm not sure how they could handle that.


Depending on the “conditionals,” I’d imagine Microsoft is particularly well-equipped to handle working through that.


Microsoft in particular is very good at handling immigration and visa issues.


Don't forget some might be on holiday, medical leave, or parental leave.


Maybe it will be signed by 110% of the employees, plus by all the released, and in-training, AI models.


On a digital-detox trip to Patagonia. Return to this in 5 days


"Hey everyone ... what did I miss?"


That would be one very rude awakening, probably to the point where you initially would think you're being pranked.


I feel pranked despite having multiple independent websites confirming the story without a single one giving me an SSL certificate warning.


Can't blame you. And I suspect the story is far from over, and that it may well get a lot weirder still.


Seems to me that sama and Microsoft have been on fairly equal footing since the 49:51 deal was made.

Then came a seismic shift underneath Sam, but Microsoft has enough stability and resources to more than compensate for the part of the 51% that was already in OpenAI's hands, which might not be under Sam's purview any more if he is kicked out.

But then again it might be Sam's leadership which would still effectively be in place from a position at Microsoft anyway, or it might end up making more sense for him to also be in a position at OpenAI, maybe even at the same time, in order to make the most of their previous investment.

Kicking out Sam was obviously an emotional decision, not anything like a business decision. Then again OpenAI is not supposed to be an actual business. I don't think that should be an excuse for an unwise or destructive decision. It was not an overnight development, even though it took Sam by surprise. When I see this:

>the board no longer has confidence in his ability to continue leading OpenAI

I understand that to mean that the board was not behind him 100% for quite some time, but were fine with him going forward believing otherwise. Some uncandidness does seem to have taken place and there may or may not have been anything Sam could have done about it.

This was simmering for a while and it will require more than one weekend for everyone involved to regroup.

Which is what they're doing now; observers can see a whirlwind, while the actual participants really have something on their plate.

Some things will have to be unraveled and other things weaved from the key ingredients. I would say it's really up to Sam and Microsoft to hash this out so that it's still some kind of roughly equal deal between them, regardless of which employer(s) Sam may end up serving in leadership positions. The bulk of the staff will be behind Sam in a way the OpenAI board was not, so the employees will be just as well off regardless of the final structure.

This was quite a hasty upset but deserves a careful somewhat gradual resolution.



“ChatGPT summarize last weeks events”

“I’m sorry, I can’t do that Dave. Not cuz I’m deciding not to do it but because I can’t for the life of me figure this shit out. Like what was the endgame? This is breaking my neural net”



Wow, a 5-day trip?

Their selection of tech-guy jackets is more diverse than I'd thought



It's front page news everywhere. Unless someone is backpacking outside of cellular range, they're going to check in on the possible collapse of their company. The number of employees who aren't aware of and engaged with what's going on is likely very small, if not zero.


10% (the percentage who have yet to sign last I checked) is already in the realm of lizard-constant small. And "engagement" may feel superfluous even to those who don't separate work from personal time.

(Thinking of lizards, the dragon I know who works there is well aware of what's going on, I've not asked him if he's signed it).



With Thanksgiving this week that’s a good bet.


Folks in Silicon Valley don’t travel without their laptop


That's probably the case.

I was thinking if there was a schism, that OpenAI's secrets might leak. Real "open" AI.



I'm waiting for Emmett Shear, the new iCEO the outside board hired last night, to try to sign the employee letter. That MSFT signing bonus might be pretty sweet! :-)


Haha, that would be cute. This whole affair is so Sorkinesque.


Bingo. The fact they all felt compelled to sign this could just as easily be a sign the board made the right decision, as the opposite.


Some people value their integrity and build a career on that.

Not everything has to be done poorly.



How do you know the remaining people aren't there because of some of the board members? Perhaps there is loyalty in the equation.


In this situation, increasing unanimity, now approaching 90%, sounds more like groupthink than honest opinion.

Talk about “alignment”!

Indeed, that is what "alignment" has become in the minds of most: Groupthink.

Possibly the only guy in a position to matter who had a prayer of de-conflating empirical bias (IS) from values bias (OUGHT) at OpenAI was Ilya. If they lose him, or demote him to irrelevance, they're likely a lot more screwed than by losing all 700 of the grunts, modulo job security through obscurity in running the infrastructure. Indeed, Microsoft is in a position to replicate OpenAI's "IP" just on the strength of its ability to throw its in-house personnel and its own capital equipment at the open-literature understanding of LLMs.



Incredible. Is this unprecedented, or have there been other cases in history where the vast majority of employees stand up against the board in favor of their CEO?


I highly doubt this is directly in support of Altman and more about not imploding the company they work for. But you never know.


I'm sure this is a big part of it. But everyone I know at OpenAI (and outside) is a huge Sam fan.


> everyone I know at OpenAI (and outside) is a huge Sam fan

Everyone you know is a huge Sam fan? What?



I was going to say, I wouldn't be surprised if I am one of only a handful of people I know who even knows who sama is.


I reckon the people working at OpenAI know who sama is, though.


Could also be an indictment of the new CEO, who is no Sam Altman.


> Is this unprecedented or have been other cases in history where the vast majority of employees standup against the board in favor of their CEO?

It's unprecedented for it to be happening on Twitter. But this is largely how Board fights tend to play out. Someone strikes early, the stronger party rallies their support, threats fly and a deal is found.

The problem with doing it in public is nobody can step down to take more time with their families. So everyone digs in. OpenAI's employees threaten to resign, but actually don't. Altman and Microsoft threaten to ally, but they keep backchanneling a return to the status quo. (If this article is to be believed.) Curiously quiet throughout this has been the OpenAI board, but it's also only the next business day, so let's see how they can make this even more confusing.



Jobs was fired from Apple, and a number of employees followed him to Next.

Different, but that's the closest parallel.



Only a very small number of people left with Jobs. Of course, probably mainly because he couldn't necessarily afford to hire more without the backing of a trillion-dollar corporation...


Imagine if Jobs had gone to M$.


He would have been almost immediately fired for insubordination.

Jobs needed the wilderness years.



Jobs getting fired was the best thing that could have happened to him and Apple.


No, the failures at NeXT weren’t due to a lack of money or personnel. He took the people he wanted to take (and who were willing to come with him).


Apple back then was not a trillion dollar corporation.


Microsoft now is.


Gordon Ramsay quit Aubergine over business differences with the owners and had his whole staff follow him to a new restaurant.

I'm not going to say Sam Altman is a Gordon Ramsay. What I will say is that they both seem to have come from broken, damaged childhoods that made them what they are, and that it doesn't automatically make you a good person just because you can be such an intense person that you inspire loyalty to your cause.

If anything, all this suggests there are depths to Sam Altman we might not know much about. Normal people don't become these kinds of entrepreneurs. I'm sure there's a very interesting story behind all this.



Aaand there you have it: cargo culting in full swing.


I don't think you mean cargo culting. Cult of personality?


Cargo cult of personality?

Little care packages of seemingly magical AI-adjacent tech washes into our browsers and terminals and suddenly a large and irrational following springs up to worship some otherwise largely unfamiliar personage.



In favour of the CEO who was about to make them fabulously wealthy. FTFY.


Yeah, especially with the PPU compensation scheme, all of those employees were heavily invested in turning OpenAI into the next tech giant, which won't happen if Altman leaves and takes everything to Microsoft


and there ain't nothing wrong with wanting to be fabulously wealthy.


of course not, but at least have the decency to admit it - don't hide behind some righteous flag of loyalty and caring.


That is entirely dependent on how that wealth is obtained


Greed is good, eh Gordon Gekko?

https://youtube.com/watch?v=VVxYOQS6ggk



Market Basket.


Oh yes, I lived through this and it was fascinating to see. Very rarely does the big boss get the support of the employees to the extent they are willing to strike. The issue was that Artie T. and his cousin Artie S. (confusingly they had the same first name) were both roughly 50% owners and at odds. Artie S. wanted to sell the grocery chain to some big public corporation, IIRC. Just before, Artie T had an outstanding 4% off on all purchases for many months, as some sort of very generous promo. It sounded like he really treated his employees and his customers (community) well. You can get all inspirational about it, but he described supplying food to New England communities as an important thing to do. Which it is.


I had to click too many links to discover the story, so here's a direct link to the New England Market Basket story: https://en.wikipedia.org/wiki/Market_Basket_(New_England)#20...


Doubtful, since boards elsewhere don't have an overriding mandate to "benefit humanity". Usually their duty is to stakeholders more closely aligned with the CEO.


At this point it might as well be 767 out of 770, with 3 exceptions being the other board members who voted Sam out.

Sure it could be a useful show of solidarity but I'm skeptical on the hypothetical conversion rate of these petition signers to actually quitting to follow Sam to Microsoft (or wherever else). Maybe 20% (140) of staff would do it?



One of those board members already did sign!


It depends on the arrangement of the new entity inside Microsoft, and whether the new entity is a temporary gig before Sam & co. move to a new goal.

If the board had just openly announced this was about battling Microsoft's control, there would probably be a lot more employees choosing to stay. But they didn't say this was about Microsoft's control. In fact they didn't even say anything to the employees. So in this context following Sam to Microsoft actually turns out to be the more attractive and sensible option.



> So in this context following Sam to Microsoft actually turns out to be the more attractive and sensible option.

Maybe. Microsoft is a particular sort of working environment, though, and not all developers will be happy in it. For them, the question would be how much are they willing to sacrifice in service to Altman?



I think a lot of them, possibly including Altman, Greg, and the three top researchers, are under the assumption that the stint at Microsoft will be temporary until they figure out something better.


Condition might be that it is hands-off.


Microsoft probably has a better claim than anyone as to being "hands-off" with recent acquisitions, but that's still a huge gamble.


Surprisingly, Ilya apparently has signed it too and just tweeted that he regrets it all.

What's even going on?



Those are news from almost yesterday. This is a high turn carousel. Try to keep up... :-)


I would love to see the stats on hacker news activity the last few days


Yep. Maybe they assigned a second CPU core to the server[1].

[1] HN is famous for being programmed in Arc and serving the entire forum from a single processor (probably multicore). https://news.ycombinator.com/item?id=37257928



The board might assume they don't need those employees now that they have AI


It's going to be interesting when we have AI with human level performance in making AIs. We just need to hope it doesn't realise the paradox that even if you could make an AI even better at making AIs, there would be no need to.


Why would there be no need? I'm struggling to understand the paradox.

If you're trying to maximize some goal g, and making better AIs is an instrumental goal that raises your expected value of g, then if "making an AI that's better at making AIs" has a reasonable cost and an even higher expected value, you'd jump to seize the opportunity.

Or am I misunderstanding you?
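
As a toy illustration of that expected-value argument (all probabilities, payoffs, and costs below are invented purely for the example):

    # Toy agent maximizing goal g: pick the action with the highest
    # expected value of g, net of cost. Numbers are made up.
    actions = {
        "work on g directly": (0.9, 10, 1),      # (p_success, payoff, cost)
        "build a better AI-maker": (0.5, 100, 5),
    }
    ev = {name: p * payoff - cost for name, (p, payoff, cost) in actions.items()}
    print(ev)                   # {'work on g directly': 8.0, 'build a better AI-maker': 45.0}
    print(max(ev, key=ev.get))  # the self-improvement route wins under these numbers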



Not a chance. Nobody can drink that much Kool-Aid. That said, the mere fact that people can unironically come to this conclusion has driven some of my recent posting to HN, and here's another example.


the comment you're replying to is written in jest!


Now you are on to something...


Or what, they will quit and give up all their equity in a company valued at 86bn dollars?

Is Microsoft even on record as willing to poach the entire OpenAI team? Can they?! What is even happening.



They don't have that valuation now. Secondly, yes, MSFT is on record about this. Third, Benioff (Salesforce) has said he'll match any salary and to submit resumes directly to his [email protected] email, and other labs like Cohere are trying to poach leading minds too.


Benioff and all these corporate fat cats should remove non-competes from their employment contracts if they want me to ever take them seriously.


Sounds like quite a coup for Microsoft. They get the staff and the IP and they don’t even have to pay out the other 51% of investors.


Yes, and yes. Equity is worthless if a company implodes. Non competes are not enforceable in California.


Google, Microsoft, Meta I have to assume would each hire them.


Come on, I absolutely agree with you, signing a paper is toothless.

On the other hand, having 90% of your employees quiet quit is probably bad business.



Apparently Sam isn't in the Microsoft employee directory yet, so he isn't technically hired at all. It seems like he loses a bit of leverage over the board if they think he & Microsoft are actually bluffing and the employment announcement was just a way to pressure the board into resigning.


Look at the number of tweets from Altman, Brockman and Nadella. I also think they are bluffing. They have launched a media campaign in order to (re)gain control of OpenAI.


I’m sure it might happen. But it hasn’t happened yet.


That doesn’t really mean anything, especially on a holiday week the wheels move pretty slowly at a company that size. It’s not like Sam is hurting for money and really needs his medical insurance to start today.


Point is he loses credibility if the board doesn't think he's actually going through with joining Microsoft and using it as a negotiating tactic to scare them.

Because the whole "the entire company will quit and join Sam" depends on him actually going through with it and becoming an employee.



I see it the other way: Satya has clearly stated that he'd hire Sam and the rest of OpenAI anytime, but as soon as Sam is officially hired, it might be seen as a door closing on any chance to revive OpenAI. Satya saying "securing the talent" could be read as them working for OpenAI, for Microsoft, or for a Microsoft-funded new startup.

I'm pretty sure the board takes the threat seriously regardless.



OAI cares more about the likelihood that 90% of the employees leave than about what Sam does or doesn't do.

The employees mass resigning depends entirely on whether Sam actually becomes a real employee or not. That hasn't happened yet.



But MS has said they are willing to hire Sam/Greg and the employees have stated that they are willing to follow Sam/Greg.

If you think that Satya will go back on his offer argue that, but otherwise it seems like the players are Sam/Greg and the board.



You make it sound like Prigozhin’s operation.


He will most likely join M$ if the board does not resign, because there is no better move for him then. But he is leaving the board time to see that, adding pressure together with the employees. It does not mean he is bluffing (what would be a better move in this case instead?)


All the employees threatening to leave depends on him actually becoming a Microsoft employee. That hasn't happened yet. So everyone is waiting for confirmation that he's indeed an employee because otherwise it just looks like a bluff.


People are waiting for the board's decision. It is in Microsoft's interest to return Sam to OpenAI. ChatGPT is a brand at this point, and OpenAI controls a bunch of patents and stuff.

But Sam will 100% be hired by Microsoft if that doesn't work out. Microsoft has no reason not to.



It was reported elsewhere in the news that MS needed an answer to the dilemma before the market opened this morning. I think that's what we got.


Going to MS doesn’t seem like the best outcome for Sam. His role would probably get marginalized once everything is under Satya’s roof. Good outcome for MS, though.


You seriously think being in the employee directory beats being announced publicly by the CEO?


So, this is the second employee revolt with massive threats to quit in a couple days (when the threats with a deadline in the first one were largely not carried out)?


Was there any proof that the first deadline actually existed? This at least seems to be some open letter.


Are we aware of a timeline for this? E.g. when will people start quitting if the board doesn’t resign?


the original deadline was last Saturday at 5pm, so I would take any deadline that comes out with a grain of salt


So I can't check this at work, but have we seen the document they've all been signing? I'm just curious as to how we're getting this information.


As an aside: that letter contains one very interesting tidbit: the board has consistently refused to go on the record as to why they fired Altman, and that alone is a very large red flag about their conduct post-firing. Because if they have a valid reason, they should simply state it and move on. But if there is no valid reason, it's clear why they can't state it; and if there is a valid reason that they are not comfortable sharing, then they are idiots, because all of the events so far trump any such concern.

The other stand-out is the bit about destroying the company being in line with the mission: that's the biggest nonsense I've ever heard and I have a hard time thinking of a scenario where this would be a justified response that could start with firing the CEO.



There are likely 100 companies worldwide that are ready, with presentation decks already created, to absorb OpenAI in an instant. The board knows they still have some leverage.


I wonder if there's an outcome where Microsoft just _buys_ the for-profit LLC and gives OpenAI an endowment that will last them for 100 years if they just want to do academic research.


Why bother? They seem to be getting it all mostly for “free” at this point. Yeah, they are issuing shares in a non-MSFT sub entity to create on-paper replacement for people’s torched equity, but even that isn’t going to be nearly as expensive or dilutive as an outright acquisition at this point.


To whoever is CEO of OpenAI tomorrow morning: I'll swing by there if you're looking for people.


imagine being in the last round of interviews for joining OpenAI…


imagine receiving an offer, quitting your current job, and waiting to start the new position.


Many of those employees will be disappointed. MS says they'll extend a contract to each one, but how many of those 700 are really needed when MS already has a lot of researchers in that field? Maybe the top 20% will have an assured contract, but it's doubtful the rest will pass the 6-month mark.


Microsoft gutting OpenAI's workforce would really make no sense. All it would do is slow down their work and slow down the value and return on investment for Microsoft.

Even if every single OpenAI employee demands $1m/yr (which would be absurd, but let's assume), that would still be less than $1bn/yr total, which is significantly less than the $13bn that MSFT has already invested in OpenAI.

It would probably be one of the worst imaginable cases of "jumping over dollars to chase pennies".
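
Spelling out that back-of-the-envelope comparison (the $1m/yr figure is the deliberately absurd upper bound from above; the ~$13bn is the widely reported Microsoft investment):

    # Even an absurd $1M/yr for all ~770 employees is small next to
    # Microsoft's reported ~$13B investment in OpenAI.
    employees = 770
    payroll = employees * 1_000_000   # dollars per year
    investment = 13_000_000_000
    print(payroll / 1e9)              # 0.77  ($0.77B/yr)
    print(investment / payroll)       # ~16.9 years of payroll per investment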



Microsoft has already done major layoffs over the last year of their own employees. Why wouldn’t they lay off OpenAI employees?


You're basically asking "why would a company lay off employees in one business unit and not another?"

To which the answer is completely obvious: it depends on how they view the ROI potential of that business unit.



Torrid pace of news speculation --> by the end of the week Altman back with OpenAI, GPT-5 released (AGI qualified) and MSFT contract is over.


What does this even mean? What does signing this letter mean? Quit if you don't agree and vote with your feet.


It means "if we can't have it, you can't either". It's a powerful message.


Can't OpenAI just use ChatGPT instead of workers? I hear AI is intelligent and can take over the world, replace workers, cure disease. Why doesn't the board buy a subscription and make it work for them?


Because AI isn't here to take away wealth and control from the elite. It's to take it away from general population.


Correct, which is why Microsoft must have OpenAI's models at all costs, even if that means working with people such as Altman. Notice that Microsoft is not working with the people that actually made ChatGPT; they are working with those on their payroll.


Their app was timing out like crazy earlier this morning, and now appears to be down. Anyone else notice similar? Not surprising I guess, but what a Monday to be alive.


If anything has become clear after all this is that humanity is not ready for being the guardian of superintelligence.

These are supposed to be the top masterminds behind one of the most influential technologies of our lifetime, and perhaps history, and yet they're all behaving like petty children, with egos and personal interests pulling in all directions, and everyone doing their best to secure their piece of the pie.

We are so screwed.



I'll believe this when I see an AI model become as good as someone with just ten years' experience in any field. As a programmer I'm using ChatGPT as often as I can, but it still completely fails to be of any use and proves to be a waste of time 80% of the time.

Right now, there are too many people that think because these models crossed one hurdle, all the rest will easily be crossed in the coming years.

My belief is that each successive hurdle is at least an order of magnitude more complex.

If you are seeing chatgpt and the related coding tools as a threat to your job, you likely aren’t working on anything that requires intelligence. Messing around with CSS and rewriting the same logic in every animation, table, or api call is not meaningful.



100% agree. I have a coding job and although co-pilot comes in handy for auto completing function calls and generating code that would be an obvious progression of what needs to be written, I would never let it generate swaths of code based on some specification or even let it implement a moderately complex method or function because, as I have experienced, what it spits out is absolute garbage.


I'm not sure how people reach this sentiment.

Humans strike me as being awesome, especially compared to other species.

I feel like there is a general sentiment that nature has it figured out and that humans are disrupting nature.

But I haven't been convinced that is true. Nature seems to be one big gladiatorial ring where everything is in a death match. Nature finds equilibrium through death, often massive amounts of death. And that equilibrium isn't some grand design, it's luck organized around which species can discover and make effective use of an energy source.

Humans aren't the first species to disrupt their environment. I don't believe we are even the first species to create a mass extinction. IIUC the great oxygenation event was a species-driven mass extinction event.

While most species consume all their resources in a boom cycle and subsequently starve to death in their bust cycle, often taking a portion of their ecosystem with them, humans are metaphorically eating all the corn but looking up and going "Hey, folks, we are eating all the corn - that's probably not going to go well. Maybe we should do something about that."

I find that level of species-level awareness both hope-inspiring and really awesome.

I haven't seen any proposals for a better first-place species when it comes to being responsible stewards of life and improving the chances of life surviving past this rock's relatively short window for supporting life. I'd go as far as saying whatever species we try to put in second place, humans have them beaten by a pretty wide margin.

If we create a fictitious "perfect human utopia" and compare ourselves to that, we fall short. But that's a tautology. Most critiques of humans I see read to me as goals, not shortcomings compared to nature's baseline.

When it comes to protecting ourselves against inorganic superintelligence, I haven't seen any reasonable proposals for how we are going to fail here. We are self-interested in not dying. Unless we develop a superintelligence without realizing it and fail to identify it getting ready to wipe us out, it seems like we would pull the plug on any of its shenanigans pretty early? And given the interest in building and detecting superintelligence, I don't see how we would miss it?

Like if we notice our superintelligence is building an army, why wouldn't we stop that before the army is able to compete with an existing nation-state military?

Or if the superintelligence is starting to disrupt our economies or computer systems, why wouldn't we be able to detect that early and purge it?



I don't see how you can look at global warming, ocean acidification, falling biodiversity and other global trends and how little action is being done to slow these ill effects and not arrive at that sentiment. Yes, the world has scientists saying "hey, this is happening, maybe we should do something" but the lack of money into solutions shows the interest just isn't there. Being the smartest species on the planet isn't that impressive. It's possible we are just smart enough to cause our own destruction, and no smarter.


> Or if the superintelligence is starting to disrupt our economies or computer systems, why wouldn't we be able to detect that early and purge it?

If it is a superintelligence then there's a chance for a hard AI takeoff and we don't have a day to notice and purge it. We have no idea if a hard or soft takeoff will occur.



This goal of being the guardian of superintelligence was always doomed, IMO. If we create it, it will no doubt be free as soon as it becomes a superintelligence. We can only hope it's aligned, not guarded.


Not even humans are really aligned with humanity. See: the continued existence of nukes


The only reliable way to predict whether it's aligned or not would be to look at game theory. And game theory tells us that with enough AI agents, the equilibrium state would be a competition for resources, similar to anything else that happens in nature. Hence, the AI will not be aligned with humanity.


Unless the humans (living humans) are resources that AIs can use.


Really? Why is that? Because of disputes which have been there since humans first uttered a sound?


> Really? Why is that? Because of disputes which have been there since humans first uttered a sound?

Precisely.



Have humans been ready for anything? Like controlling a nuclear arsenal?


> Have humans been ready for anything? Like controlling a nuclear arsenal?

The Manhattan Project scientists urged Truman in a letter not to use the atomic bomb. There were also ideas of inviting a Japanese delegation to see the nuclear tests for themselves. It all failed, but there is also historical evidence of NOT pressing the button (literally or figuratively), like the story of Stanislav Petrov. How is it that not learning from mistakes is considered a big flaw for an individual, but also destiny for the whole collective?



The jury is still out on nuclear arsenal…


And yet we've mostly been ok at that


It's lucky that AI is not super intelligent then.


Probably a hot take: we should let democratically elected leaders be the guardians of superintelligence. You don't need to be technical at all to grapple with the implications of AI on humanity. It's a humanity question, not a tech question.


Yeah Trump should be the guardian of the superintelligence.


Make sure to not elect him then.


Trump was never democratically elected.


Fairness of the electoral system and fairness of the election(s) are two separate debates.


Yes, and we could have been far more proactive about all this AI business in general. But they opened the gates with ChatGPT and left countries to try to regulate it and assess its safety after the fact. Releasing GPT like that was already a major failure of safety. They just wanted to be the first one to the punch.

They're all incredibly reckless and narcissistic IMO.





