(Comments)

Original link: https://news.ycombinator.com/item?id=40447431

The author recounts encountering an overly restrictive employment non-compete clause during a potential job change about five months ago. Despite voicing concerns about the clause, negotiations left it unchanged, leading them to decline the offer. They share examples from early jobs of being underpaid and of undocumented agreements, questioning the legality and significance of such clauses. While open to verbal agreements, they stress consistency and fairness, insisting that agreements either be entirely in writing or not exist at all. The piece closes with observations on how non-competes in tech have evolved, particularly OpenAI's recent controversial conduct.

Related Articles

Original Text


If this really was a mistake the easiest way to deal with it would be to release people from their non disparagement agreements that were only signed by leaving employees under the duress of losing their vested equity.

It's really easy to make people whole for this, so whether that happens or not is the difference between the apologies being real and them just backpedaling because employees got upset.

Edit: Looks like they're doing the right thing here:

> Altman’s initial statement was criticized for doing too little to make things right for former employees, but in an emailed statement, OpenAI told me that “we are identifying and reaching out to former employees who signed a standard exit agreement to make it clear that OpenAI has not and will not cancel their vested equity and releases them from nondisparagement obligations” — which goes much further toward fixing their mistake.



This reads like more than standard restrictions. Like everyone, I hate those; in my opinion they are intended to chill complaints, with just enough doubt to scare average people without legal expertise (like me, like most devs). It's just like how non-competes seemingly used to be used primarily to discourage looking at other jobs, separate from whether they were enforceable - note the recent FTC decision to end non-competes.

About 5 months ago I had a chance to join a company. They had what looked to me like an extreme non-compete: for two years after leaving, you couldn't work for any company that had been a customer of theirs.

I pointed out to them that I wouldn't have been able to join their company if my previous job had had that non-compete clause; it seemed excessive. Eventually I was in meetings with a lawyer at the company who told me it's probably not enforceable, don't worry about it, and the FTC is about to end non-competes. I said great, strike it from the contract and I'll sign it right now. He said he couldn't do that: no one-off contracts. So then I said I'm not working there.



I have worked for multiple startups (Malwarebytes, Vicarious, Rad AI, Explosion AI, Aptible, Kenna Security). Not once have I seen an exit agreement that stated they would steal back my vested equity if I didn't sign. This is definitely not "standard restrictions".



I always respond with "if it's never enforced, then you'll be fine with me taking it out"

Then I strike the offending passage out on both copies of the contract, sign and hand it back to them.

Your move.

¯\_(ツ)_/¯



Yes, I really do this. Have done since I started working.

At one of my first jobs as a student employee they offered me a salary X. In the contract there was some lower number Y. When I pointed this out, they said "X includes the bonus. It's not in the contract but we've never not paid it". OK, if this is really guaranteed, you can make that the salary and put it in writing. They did, my salary was X and that year was the first time they didn't pay the optional bonus. Didn't affect me, because I had my salary X.

IANAL and I don't know how binding this is. I'd think it's crucial for it to be in both copies of the contract, otherwise you could have just crossed it out after the fact, which would of course not be legally binding at all and probably fraud (?)

In practice, it doesn't really come up, because the legal department will produce a modified contract or start negotiating the point. The key is that the ball is now in their court. You've done your part, are ready and rearin' to go, and they are the ones holding things up and being difficult, for something that according to them isn't important.

UPDATE:

I think it's important to note that I am also perfectly fine with a verbal agreement.

A working relationship depends on mutual trust, so a contract is there for putting in a drawer and never looking at it again...and conversely if you are looking at it again after signing, both the trust and the working relationship are most likely over.

But it has to be consistent: if you insist on a binding written agreement, then I will make sure what is written is acceptable to me. You don't get to pick and choose.



Exactly. And just like I have to be fine with not getting the job if my conditions are not acceptable to them, they have to be fine with not getting me if their conditions are not acceptable to me.

Given the considerable effort that has gone into this by the time you are negotiating a contract, letting it fail over something that "is not important" and "is never enforced" would be very stupid of them.

So if they are unwilling to budge, that either means they were lying all along and the thing that's "never enforced" and is "not important" actually is very important to them and definitely will be enforced, or that they are a company that will enforce arbitrary and pointless rules on employees as long as they think they can.

Neither of which is a great advertisement for the company as an employer.



> So if they are unwilling to budge, that either means they were lying all along and the thing that's "never enforced" and is "not important" actually is very important to them and definitely will be enforced, or that they are a company that will enforce arbitrary and pointless rules on employees as long as they think they can.

Most of the time it's basically just FUD, to coerce people into following the rule-that-is-never-enforced.



I’ve heard of some pretty aggressive non-competes in finance, but AFAIU (never worked in Connecticut myself), it’s both the carrot and the stick: you get both paid and a stiff contract if you leave with proprietary alpha between the ears.

In tech I’ve never even heard a rumor of something like this.



It’s got a term - “garden leave” - and yeah, it was prevalent in finance. I say “was” because I think some states are changing laws wrt non-competes, and this is calling the practice into question.



No, you're confusing stuff.

First of all, taking any code with you is theft, and you go to jail, like this poor Goldman Sachs programmer [1]. This will happen even if the code has no alpha.

However, no one can prevent you from taking knowledge (i.e. your memories), so reimplementing alpha elsewhere is fine. Of course, the best alpha is that which cannot simply be replicated, e.g. it depends on proprietary datasets, proprietary hardware (e.g. fast links between exchanges), access to cheap capital, etc.

What hedge funds used to do is give you lengthy non-competes: 6 months for junior staff, 1-2 years for traders, 3+ years in the case of Renaissance Technologies.

In the US, that's now illegal and unenforceable. So what hedge funds do now is lengthy garden(ing) leaves. This means you still work for the company, you still earn a salary, and in some (many? all?) cases also the bonus. But you don't go to the office, you can't access any code, you don't see any trades. The company "moves on" (develops/refines its alpha, including alpha you created) and you don't.

These lengthy garden leaves replaced non-competes, so they're now 1y+. AFAIK they are enforceable, just as non-competes while being employed always have been.

[1] https://nypost.com/2018/10/23/ex-goldman-programmer-sentence...



I think this still leaves garden leave on the table. The thing that can no longer happen is an employer ending its relationship with an employee and preventing them from continuing their career after the fact. Garden leave was in fact one of the least bad outcomes of a non-compete, as I understand it.



Comp clawbacks in exit agreements, that weren't part of the employment agreement?

I've seen equity clawbacks in employment agreements. Specifically, some of the contracts I've signed have said that if I'm fired for cause (and they were a bit more specific, like financial fraud or something) then I'd lose my vested equity. That isn't uncommon, but it's not typically used to silence people, and it's part of the agreement they review and approve before becoming an employee - not a surprise they learn about only as they try to leave.



According to the Vox article, it's much more complicated legally. It's not a clause in each employee's contract that allows this; it's part of the articles of incorporation of the for-profit part of OpenAI.



Must it?

Not clear what you mean.

Do you mean it is generic to do that in contracts? (Been a while since I was offered equity.)

Or do you mean that even OpenAI would not try it without having set it up in the original contract? Because I hate to be the guy with the square brackets ;-)



If it wasn't in the original contracts for the equity, they wouldn't be able to claw back. Fairly obviously, the mechanism can't be in the exit agreement because you didn't sign that yet.

Normally a company has to give you new "consideration" (which is the legal term for something of value) for you to want to sign an exit agreement - otherwise you can just not bother to sign. Usually this is extra compensation. In this case they are saying that they won't exercise some clause in an existing agreement that allows them to claw back.



Per the Vox article, it's not directly in the contract you sign for the equity, it's basically part of the definition of the equity itself (the articles of incorporation of the for-profit company) that OpenAI remains in full control of the equity in this way.



What is the structure of those compensations, and the mechanism for the clawbacks? Equity is taxed when it becomes the full, unrestricted property of the employee, so depending on the structure these threatened clawbacks could have either (1) been very illegal [essentially theft], or (2) could have had drastic and very bad tax consequences for all employees, current and former.

I'm not surprised that they're rapidly backpedaling.



> taxed when it becomes the full, unrestricted property of the employee

I guess these agreements mean that the property isn't full unrestricted property of the employee... and therefore income tax isn't payable when they vest.

The tax isn't avoided - it would just be paid when you sell the shares instead - which for most people would be a worse deal because you'll probably sell them at a higher price than the vest price.
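The trade-off described above can be sketched with a toy calculation. The rates and prices here are entirely hypothetical (this is not tax advice, and real rules vary by jurisdiction and equity type); the point is just that with appreciating shares, paying ordinary-income tax on the full sale price typically costs more than paying it on the vest value and capital-gains rates on the appreciation:

```python
# Toy comparison: tax at vest (income tax on vest value, capital gains
# on the appreciation) vs. tax deferred entirely to sale (everything
# taxed as ordinary income). All numbers are hypothetical.
ORDINARY_RATE = 0.37   # assumed ordinary-income rate
CAP_GAINS_RATE = 0.20  # assumed long-term capital-gains rate

vest_price, sale_price, shares = 10.0, 50.0, 1_000

tax_at_vest = (ORDINARY_RATE * vest_price * shares
               + CAP_GAINS_RATE * (sale_price - vest_price) * shares)
tax_deferred_to_sale = ORDINARY_RATE * sale_price * shares

print(f"taxed at vest:    {tax_at_vest:,.0f}")          # 11,700
print(f"taxed only at sale: {tax_deferred_to_sale:,.0f}")  # 18,500
```

Of course, as the reply below notes, this only looks like a good deal in retrospect if the shares actually appreciate; paying tax up front on shares that later become worthless is the opposite outcome.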



> which for most people would be a worse deal

It's a worse deal in retrospect for a successful company. But there and then, it's not very attractive to pay an up-front tax on something that you can sell at an unknown price in the relatively far future.



> Comp clawbacks are quite common in finance

Common? Absolutely not. It might be common for a tiny fraction of investment bank staff who are considered (1) material risk takers, (2) revenue generators, or (3) senior management.


Can you find any specific examples? I've only seen that apply to severance agreements where you're being paid some additional sum for that non-disparagement clause.

Never seen anything that says money or equity you've already earned could be clawed back.



Right, but would that have been achieved with a clause open-ended enough to allow this additional paperwork on exit?

Or would that have been an "if you break the law" thing?

Seems unlikely that OpenAI are legally in the clear here with nice clear precedent. Why? Because they are backflipping to deny it's something they'd ever do.



I think they are backpedaling rapidly to avoid major discontent among their workers. By the definition of their stock as laid out in their articles of incorporation, they have the right to reduce any former employee's stock to 0, or to prevent them from ever selling it, which is basically the same thing. This makes their stock offers to employees much less valuable than they appear at face value, so their current and future employees may very well start demanding actual dollars instead.



> The ability to do it was almost certainly already in place

Why? OpenAI is a shitshow. Their legal structure is a mess. Yanking vested equity on the basis of a post-purchase agreement signed under duress sounds closer to securities fraud than anything thought out.



I'm not saying it was thought out, I'm saying it was in place. My understanding is that the shareholders agreement had something which enabled the canceling of the shares (not sure if it was all shares, shares granted to employees, or what). I have not seen the document, so you may be right, but that's my understanding.



> the shareholders agreement had something which enabled the canceling of the shares

OpenAI doesn't have shares per se, since they're not a corporation but some newfangled chimeric entity. Given the man who signed the documents allegedly didn't read them, I'm not sure why one would believe everything else is buttoned up.



I think you missed my proviso.

If you can do X in the first place, I don't think there's any general rule that you can't condition X on someone not signing a contract.



I’ve seen that for a well-known large tech company, and I wasn’t even employed in the US, making those seem stranger. Friends and former colleagues pushed back against that (very publicly and for obvious reasons in one case) and didn’t get to keep their vested options: they had to exercise what they had before leaving.

There was one thing that I cared about (anti-competitive behavior; things could technically be illegal, but what counts is policy, so it really depends on what the local authority wants to enforce), so I asked a lawyer, and they said: no way this agreement prevents you from answering that kind of questioning.



A 90-day exercise window is standard (and there are tax implications in play as well).

OpenAI is different: they don’t grant options, but “Units” that are more like RSUs.



The closest thing I've heard of is having to sign anti-disparagement clauses as part of severance when laid off; still pretty shitty, but taking back already vested equity would be on another level.



My understanding is that it's an explicit condition of the equity grant, not something technically first revealed at exit (which would probably be illegal) - but probably granted under the expectation that no one carefully studies the terms of the exit agreement that will be required later, at the point when they are accepting compensation terms that include equity.



> He said I can't do that, no one off contracts.

There was still potential to engage there:

  "That's alright, as you said it's not enforceable anyway just remove it from everyone's
   contract.  It'll just be the new version of the contract for everyone."
Doubt it would have made any difference though, as the lawyer was super likely bullshitting.


Well, you can be surprised. It's surprisingly common, in my experience, to believe people who pretend they are on your side. One interesting and typical case that is documented through countless online videos is police interrogations, where the interrogator is usually an expert in making it seem he (or she) is on your side, despite how obvious it should be that they're not. "Can I get you a meal?", friendly tone, various manipulations and before you know it you've said things that can and will be used against you whether you are guilty or not.

And you don't get the meal, either.



> We can also mention the case of psychiatrists running the "Presence francaise" groups who, appointed to examine the prisoner, started off boasting they were great friends with the defense lawyer and claiming both of them (the lawyer and the psychiatrist) would get the prisoner out. All the prisoners examined by this method were guillotined. These psychiatrists boasted in front of us of this neat method of overcoming "resistance."

- The Wretched of the Earth, Frantz Fanon



Attorneys are like any other profession. The average attorney is just like the average person, except he passed a difficult test.

Exceptions require sign off and thinking. The optimal answer is go with the flow. In an employment situation, these sorts of terms require regulatory intervention or litigation to make them go away, so it’s a good bet that most employees will take no action.



"Probably not enforceable" != unenforceable. Are you worth suing, or does everyone just sign? Are your state laws and jurisprudence going to back you up?

If you are ever going to sign an employee agreement that binds you, consult with an employment attorney first. I did this with a past noncompete and it was the best few hundred I ever spent: my attorney talked with me for an hour about the particulars of my noncompete, pointed out areas to negotiate, and sent back redlines to make the contract more equitable.



The single best professional decision I ever made was to get a business degree. The degree itself wasn’t worth a damn, but the network was invaluable. I have very close friends who are the exact kind of attorney who you would expect to have an undergraduate business degree. They’re greedy, combative people who absolutely relish these sorts of opportunities. And as a bonus, they are MY greedy, combative people who relish these sorts of opportunities.

They’re great partners when confronted with this kind of contract. And fundamentally, if my adversary/future employer retains counsel, I should too. Why be at a disadvantage when it’s so easy to pay money and be at even?

There are some areas my ethics don’t mesh with, but at the end of the day this is my work and I do it for pay. And when I look at results, lawyers are the best investment I have ever made.



At most it would just make that part of the contract void. Almost all contracts with stuff like this have a “severability” clause, which states that if one part of the contract is invalid, the rest is still valid.

But even without that, judges have huge amounts of leeway to “create” an ex post facto contract and say “here’s the version of that contract you would have agreed to; this is now the contract you signed”. A sort of “fixed” version of the contract.



> At most it would just make that part of the contract void. Almost all contracts with stuff like this have a “severability” clause, which states that if one part of the contract is invalid, the rest is still valid.

Severability clauses themselves are not necessarily valid; whether provisions can be severed and how without voiding the contract is itself a legal question that depends on the specific terms and circumstances.



> Then why did they bother to have a meeting?

Because lawyers are in the business of managing risk, and knowing what OC was unhappy about was very much relevant to knowing if he presented a risk.



yup.

companies say that all the time.

another way they do it is to say, it is company policy, sorry, we can't help it.

thereby trying to avoid individual responsibility for the iniquity they are about to perpetrate on you.



You did well: there is never a rule against one-off contracts. I can assure you the CEO has a one-off contract, and that lawyer has a one-off contract, at the very least :D



Not a mistake...

"...But there's a problem with those apologies from company leadership. Company documents obtained by Vox with signatures from Altman and Kwon complicate their claim that the clawback provisions were something they hadn’t known about..."



"We are sorry...that we got caught."

"...and that our PR firm wasn't good enough to squash the story."

They will follow the standard corporate 'disaster recovery' - say something to make it look like they're addressing it, then do nothing and just wait for it to fall out of the news cycle.



Honestly I'm willing to give the benefit of the doubt on that, depending on their actions, because I'm sure they sign so many documents they just rely on their legal teams to ensure they're good.



There's absolutely no way that the officers of the company would be unaware of this.

First of all, it beggars belief that this whole thing could be the work of HR people or lawyers or something, operating under their own initiative. The only way I could believe that is if they deliberately set up a firewall to let people be bad cops while giving the C-suite plausible deniability. Which is no excuse.

But...you don't think they'd have heard about it from at least one departing employee, attempting to appeal the onerous terms of their separation to the highest authority in the company?



Hold up... Do you really think that a C-suite including career venture-capitalists who happen to be leading+owning stock in a private startup which has hit an estimated billion+ valuation are too naive/distracted to be involved in how that stock is used to retain employees?

In other words, I'm pretty sure the Ed Dillingers are already in charge, not Walter Gibbs garage-tinkerers. [0]

[0] https://www.youtube.com/watch?v=atmQjQjoZCQ



> If this really was a mistake

The article makes it clear that it wasn't a mistake at all. It's a lie. They were playing hardball, and when it became public they switched to PR crisis management to try and save their "image", or what's left of it.

They're not the good guys. I'd say they're more of a caricature of bad guys, since they get caught every time. Something between a classic Bond villain and Wile E. Coyote.



>Edit: Looks like they're doing the right thing here

That's like P.Diddy saying I'm sorry.

That's damage control for being caught doing something bad ... again.



"Trust me bro, if it weren't up to me you wouldn't even have to sign that contract. I mean it is up to me, but, like, I won't enforce the thing I made you sign. What? No I won't terminate the contract why don't you trust me bro? I thought we were a family?"



What Sam is saying is very different than what I'm saying. I'm saying he should be proactive and just do it, he's saying that if people explicitly reach out to him then he'll do it specifically for them.



Sure and anyone who has worked in a toxic workplace knows exactly what it means to require a direct path to leadership to resolve an issue instead of just resolving it.



I also notice he conditions it on "any former employee." What about current employees who may be affected by the same legalese?

Either way, I can imagine a subtext of "step forward and get a target on your back."



True, they can't renegotiate agreements that don't yet exist.

However the fact that the corporate leadership could even make those threats to not-yet-departed employees indicates that something is already broken or missing in the legal relationship with current ones.

A simple example might be for the company to clearly state in its handbook--for all current employees--that vested shares cannot be clawed back.



don't go public

don't contact OpenAI legal, which leaves an unsavory paper trail

contact me directly, so we can talk privately on the phone and I can give you a little $$$ to shut you up



That really is not enough. Now that they have been publicly embarrassed and the clause is common knowledge they really have to undo the mistake. If they didn't, they would look like a horrible employer and employees would start valuing their stock at $0, dropping their effective compensations by a ton and then people will leave. Given the situation, undoing the agreement is an act of basic self-preservation at this point.

The documents show this really was not a mistake and "I didn't know what the legal documents I signed meant, which specifically had a weird clause that standard agreements don't" isn't much of a defence either. The whole thing is just one more point in favor of how duplicitous the whole org is, there are many more.



> ”we are identifying and reaching out to former employees who signed a standard exit agreement to make it clear that OpenAI has not and will not cancel their vested equity and releases them from nondisparagement obligations”

Looks like they’re doing that.



Well, they say they are. But the nondisparagement agreement repeatedly forbids revealing the agreement itself, so if it wasn't cancelled those subject to it would be forbidden to point out that the public claim they were going to release people from it was a lie (or done only for people from whom OpenAI was not particularly concerned about potential disparagement.)



> Looks like they're doing the right thing here

Even if that's true (and I'm not saying it is, or it isn't, I don't think anyone on the outside knows enough to say for sure), is it because they genuinely agree they did something egregiously wrong and they will really change their behavior in the future? Or is it just because they got caught this time so they have to fix this particular mistake, but they'll keep on using similar tactics whenever they think they can get away with it?

The impact of such uncertainty on our confidence in their stewardship of AI is left as an exercise for the reader.



> Edit: Looks like they're doing the right thing here:

> Altman’s initial statement was criticized for doing too little to make things right for former employees, but in an emailed statement, OpenAI told me that “we are identifying and reaching out to former employees who signed a standard exit agreement to make it clear that OpenAI has not and will not cancel their vested equity and releases them from nondisparagement obligations” — which goes much further toward fixing their mistake.

1. Get cash infusion from microsoft

2. Do microsoft playbook of 'oh I didn't mean to be shady we will correct' when caught.

3. In the meantime there are uncaught cases as well as the general hand waving away of repeated bad behavior.

4. What sama did would get him banned from some -fetish- circles, if that says something about how his version of 'EA' deals with consent concerns.



Plenty of legitimate things to criticize EA for, no need to smear them by association with someone who's never claimed to be an EA and hasn't obviously behaved like one either.



It shouldn't take a Vox article to ensure employees basic security over their compensation. The fact that this provision existed at all is exceptionally anti-employee.



> Looks like they're doing the right thing here:

Well, no:

> We're removing nondisparagement clauses from our standard departure paperwork, and we're releasing former employees from existing nondisparagement obligations unless the nondisparagement provision was mutual. We'll communicate this message to former employees.

So the employees who were already successfully blackmailed stay blackmailed.



You surely don't actually believe Altman when he says they're doing this? Like Elon Musk, Altman is a known liar and should not be trusted. It's truly unbelievable to me that people take statements like this at face value after having been lied to again and again and again. I think I'm starting to understand how crypto scams work.



The number [and scale] of questionable practices, chaos, and controversies caused by OpenAI since ChatGPT was released are "on par" with the powerful products it has built since... in a negative way!

These are the hottest controversies so far, in chronological order:

  OpenAI's deviation from its original mission (https://news.ycombinator.com/item?id=34979981).
  The Altman Saga (https://news.ycombinator.com/item?id=38309611).
  The return of Altman (within a week) (https://news.ycombinator.com/item?id=38375239).
  Musk vs. OpenAI (https://news.ycombinator.com/item?id=39559966).
  The departure of high-profile employees (Karpathy: https://news.ycombinator.com/item?id=39365935, Sutskever: https://news.ycombinator.com/item?id=40361128).
  "Why can’t former OpenAI employees talk?" (https://news.ycombinator.com/item?id=40393121).


The best case business pitch is total replacement of all white collar jobs. It's even more a "take over the world" pitch than regular tech companies. Now, quite a lot of that is unrealistic and will never be delivered, but which bit?

AI raises all sorts of extremely non-tech questions about power, which causes all the drama.

Edit: also, they've selected for people who won't ask ethical questions. Thus running into the classic villain problem of building an organization out of opportunistic traitors.



Thank you for making me laugh. Seriously, I think working for openai already selected for people who are ok with playing in the grey area. They know they ignore copyright and a few other rules. It's not surprising to me that they would also not be very nice to each other internally.



It's probably the perceived value & power it has.

They think they are about to change the entire world. And a very large part of the world agrees. (I personally think it's a great tool, but exaggerated.)

But that creates a very big power play where people don't act normally anymore and the most power-hungry people come out to play.



Money. The hype is really strong, the hype might even be justified, insane amounts of money flow in. There is a land grab going on. Blood is in the water, all the sharks are circling.



After all that money, nobody can even think of saying that it was wasted. To keep the investment value high and justifiable, they all agree and go on with the hype. Until the end.



Great, if these documents are credible, this is exactly what I was implying[1] yesterday. Here, listen to Altman say how he is "genuinely embarrassed":

"this is on me and one of the few times i've been genuinely embarrassed running openai; i did not know this was happening and i should have."

The first thing the above conjures up is the other disgraced Sam (Bankman-Fried) saying "this is on me" when FTX went bust. I bet euros-to-croissants I'm not the only one to notice this.

Some amount of corporate ruthlessness is part of the game, whether we like it or not. But these SV robber barons really crank it up to something else.

[1] https://news.ycombinator.com/item?id=40425735



"this is on me" --> "look at what a great leader I am, taking responsibility for other people's mistakes"

"i've been genuinely embarrassed" --> "yep, totally not my fault actually"

"I should have known" --> "other people fucked this up, and they didn't even inform me"



It's funny how finding out about corporate misdoing has almost a common ritual attached to it. First shock and dismay are expressed at the findings, then the company leadership has to say it was a mistake (rather than an obvious strategy they literally signed off on), then we bring up the contradiction. Does this display of ignorance from every side really need to take place? Why bother asking for an explanation? They obviously did the thing they obviously did, and will obviously keep doing as much of this kind of thing as they can get away with.



Looking forward to a document leak about OpenAI using YouTube data for training their models. When asked if they use it, Murati (CTO) said she doesn't know, which makes you believe it's 99% certain they are using it.



The number of videos is less relevant than the total duration of high-quality videos (on YouTube, quality can be approximated with metrics such as view and subscriber count). Also, while YouTube videos are not labelled directly, you can extract signal from the title, the captions, and perhaps even the comments. Lastly, many sources online use YouTube to host videos and embed them on their pages, which probably provides more text data that can be used as labels.
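A proxy along those lines could be sketched as follows. This is purely a hypothetical heuristic for illustration (the function name, weights, and log-scaling are all my invention, not anything a lab has described); the idea is that log-scaling keeps a single viral video from dominating the score outright:

```python
import math

def quality_proxy(views: int, subscribers: int) -> float:
    # Hypothetical heuristic: log-scale popularity so a 10x jump in
    # views adds a fixed increment instead of swamping everything else.
    return math.log10(views + 1) + 0.5 * math.log10(subscribers + 1)

videos = [
    {"id": "popular", "views": 1_000_000, "subscribers": 100_000},
    {"id": "obscure", "views": 500, "subscribers": 50},
]
ranked = sorted(videos,
                key=lambda v: quality_proxy(v["views"], v["subscribers"]),
                reverse=True)
print([v["id"] for v in ranked])  # ['popular', 'obscure']
```

In practice a real filtering pipeline would presumably combine many more signals (watch time, language, caption availability), but the thrust of the comment is that such proxies are cheap to compute at scale.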



To be fair I don’t think Google deserves exclusive rights to contents created by others, just because they own a monopolistic video platform. However I do think it should be the content owner’s right to decide if anyone, including Google, gets to use their content for AI.



I find it hard to believe that Sam didn't know about something that draconian in something as sensitive as NDAs that affect equity.

He’s not exactly new to this whole startup thing and getting equity right is not a small part of that



So what happened to Daniel Kokotajlo, the ex-OAI employee who made a comment saying that his equity was clawed back? Was it a miscommunication and he was referring to unvested equity, or is Sama just lying?

In the original context, it sounded very much like he was referring to clawed-back equity. I’m trying to find the link.



In my third world country, when they do something unethical they say "everything is in accordance with the law"; here it's "this is on me". Both are very cynical. From the time they went private, it was apparent that this company is unethical, to say the least. Given what it is building, this can be very dangerous, but I think they are more proficient at creating hype than at actually coming up with something meaningful.



> ..or agreeing not to criticize the company, with no end date

Oh! Free speech is up for trade! We used to hear statements like that from certain political regimes, but this is the first time I've read it in the tech world. Will we live to witness more variations of this behavior on a larger scale?

> High-pressure tactics at OpenAI

> That meant the former employees had a week to decide whether to accept OpenAI’s muzzle or risk forfeiting what could be millions of dollars

> When ex-employees asked for more time to seek legal aid and review the documents, they faced significant pushback from OpenAI.

> “We want to make sure you understand that if you don't sign, it could impact your equity. That's true for everyone, and we're just doing things by the book,”

Although they've been able to build the most capable AI models, ones that could replace a lot of human jobs, they struggle to humanely manage the people behind those models!



Do OpenAI employees actually get equity in the company (e.g. options or RSUs)? I was under the impression that the company awards "profit units" of some kind, and that many employees aren't sure how they work.



I'm not following this very closely, but agreements that block employees from selling (private) vested equity are a market term, not something uniquely aggressive OpenAI does. The Vox article calls this "just as important" as the clawback terms, but, obviously, no.



> agreements that block employees from selling (private) vested equity are a market term

They threatened to block the employee who pushed back on the non-disparagement from participating in tender offers, while allowing other employees to sell their equity (which is what the tender offers are for). This is not a "market term".



Yeah, my impression is that a lot of non-public startups have "secondary market transactions allowed with board approval" clauses, but many of them just default-deny those requests and never have coordinated tender offers pre-IPO.



I'm surprised that an executive or lawyer didn't realise the reputational damage adding these clauses would eventually cause the leadership team.

Were they really stupid enough to think that the amount of money being offered would bend some of the most principled people in the world?

Whoever allowed those clauses to be added and let them remain has done more damage to the public face of OpenAI than any aggravated ex-employee ever could.



If Sam didn't get hired back after the firing, there was a good chance OpenAI would implode and that would be bad news for employee equity. Plus, the board didn't give out any information that could've convinced anyone to side with them. The drama about exit documents and superalignment research appears to have been contained in relatively small circles and did not circulate company-wide until they became public.



I recall that only some wanted him back, and the split was product/research—the “let’s get rich!” types wanted him back, the “let’s do AI!” types adamantly didn’t.



And today: a post about Johansson's voice was on the front page with quite a high score, and then disappeared. This is not the place to discuss OpenAI.



I really wish there were some simple calculation that could be shown for how posts are ranked. E.g., post A has x upvotes and y comments, is z minutes old, and is therefore rank 2; post B has these values, while C is here. Hence this post went down the front page quickly.

It's not that I don't trust the mods explicitly, it's just that showing such numbers (if they exist) would be helpful for transparency.
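For what it's worth, a baseline does exist in public: the "gravity" formula from HN's published Arc source. A minimal sketch, assuming that formula still underlies the live site (which additionally applies moderation penalties and flag weights that are not public):

```python
def rank_score(points: int, age_hours: float, gravity: float = 1.8) -> float:
    """Baseline HN ranking from the published Arc source:
    score = (points - 1) / (age + 2)^gravity.
    The live site is assumed to layer penalties on top of this."""
    return (points - 1) / (age_hours + 2) ** gravity

# A fresh post with fewer votes can outrank an older, higher-voted one:
fresh = rank_score(points=50, age_hours=1)    # 49 / 3^1.8,  roughly 6.8
stale = rank_score(points=200, age_hours=10)  # 199 / 12^1.8, roughly 2.3
assert fresh > stale
```

The gravity exponent above 1 is what makes age dominate points over time, which is why even highly upvoted posts fall off the front page within a day.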



People are always interested in and fascinated by the algorithm whenever it comes up. Dang makes the (correct) assertion that people will much more easily game it if they know the intricacies. PG always churlishly jumps in to say there's nothing interesting about it and any discussion of it is boring.

Pretty asinine response, but I work in Hollywood, and each studio lot has public tours giving anyone who wants one a glimpse behind the curtain. On my shows, we've even allowed those people to get off the studio golf cart to peek inside at our active set, even answering questions they have about what they see, which sometimes explains Hollywood trickery.

I’m sure there’s tons of young programmers that would love to see and understand how such a long-lasting great community like this one persists.



I dunno. This is standard practice for things like SEO algos to try to slow down spammers, or risk algos to slow down scammers.

HN drives a boatload of traffic, so getting on the front page has economic value. That means there are 100% people out there who will abuse a published ranking system to spam us.



wait long enough and the other product will be able to expose the secrets.

future gpt prompt : "Take 200000 random comments and threads from hacker news, look at how they rank over time and make assumptions about how the moderation staff may be affecting what you consume. Precisely consider the threads or comments which have risque topics regarding politics or society or projects that are closely related to Hacker News moderation staff or Y Combinator affiliates."



> Dang makes the (correct) assertion that people will much more easily game it if they know the intricacies.

Which is interesting, because it's sacrilege to insinuate that it's being gamed at all.



It's not sacrilege, it's just that people rarely have any basis for saying this beyond just it kind of feels that way based on one or maybe two datapoints, and feeliness really doesn't count. We take real abuse seriously and I've personally put hundreds (feels like thousands) of hours into that problem over many years - but there has to be some sort of data to go on.



I really don't care about the "algorithm" here. I think this place is distinguished nicely by the fact that I almost never know how much karma a post or user has. If it was in fact a total dictatorship of a few, posing as some democratic reddit thing, who cares? I'm OK as it is, and these things don't last forever anyway.

All you can really do on the internet is ride the waves of synchronicity where the community and moderation are in harmony, and jump ship when they aren't! The conceit that some algorithm or innovation or particular transparency will be a cure-all never seems to pan out; the boring truth is that we are all soft, squishy people.

Show me a message board that is ultimately as harmonious, diverse, and big as this one!



IMHO HN data should be transparent.

The innovation in detecting patterns would be incredible, and ultimately I think it would be best to evolve toward user-chosen ranking algorithms that people personally subscribe to.



That post was, as far as I can tell, basically an opinion piece repeating/summarizing stories that had been on the HN frontpage dozens of times. This post is investigative journalism with significant new information.

It should not be surprising that the outcomes are different.



They're definitely doing this on comments too. I've had comments critical of Altman sitting at the top get dropped below the downvoted ones in the past.



It's standard moderation on HN to downweight subthreads where the root comment is snarky, unsubstantive, or predictable. Most especially when it is unsubstantive + indignant. This is the most important thing we've figured out about improving thread quality in the last 10 years.

But it doesn't vary based on specific persons (not Sam or anyone else). Substantive criticism is fine, but predictable one-liners and that sort of thing are not what we want here—especially since they evoke even worse from others.

The idea of HN is to have an internet forum—to the extent possible—where discussion remains intellectually interesting. The kind of comments we're talking about tend to choke all of that out, so downweighting them is very much in HN's critical path.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...



Lol.

Long story short MO worked for and knew the circle of VCs that pg knew. MO didn’t like them, they didn’t like him calling them out. pg didn’t like him posting about them on HN.

The rest is history.

Hopefully you can see how an egregious conflict of interest can occur here with sama.



PG is Altman's godfather more or less. I am disappoint of these OpenAI news as of late.

5. Sam Altman

I was told I shouldn't mention founders of YC-funded companies in this list. But Sam Altman can't be stopped by such flimsy rules. If he wants to be on this list, he's going to be.

Honestly, Sam is, along with Steve Jobs, the founder I refer to most when I'm advising startups. On questions of design, I ask "What would Steve do?" but on questions of strategy or ambition I ask "What would Sama do?"

What I learned from meeting Sama is that the doctrine of the elect applies to startups. It applies way less than most people think: startup investing does not consist of trying to pick winners the way you might in a horse race. But there are a few people with such force of will that they're going to get whatever they want.

https://paulgraham.com/5founders.html



Who is bullish or bearish on OpenAI?

LLM alternatives are getting better and better, and the competitors are well funded. OpenAI doesn't yet seem to have developed a new, more advanced technology. What's their long-term moat?



I think it’s time to cancel that Chat GPT subscription and move to something else. I am tired of the arrogance of these companies and particularly their narcissistic leaders who constantly want to make themselves the centre of the piece. It’s absolutely ridiculous to run a company as if you’re the lead in a contemporary drama.



> this is on me and one of the few times i've been genuinely embarrassed running openai

This statement seems to suggest that feeling embarrassed by one's actions is a normal part of running a company. In reality, the expectation is that a CEO should strive to lead with integrity and foresight to avoid situations that lead to embarrassment.



>This statement seems to suggest that feeling embarrassed by one's actions is a normal part of running a company.

It suggests humans make mistakes and sometimes own up to them, which is a good thing.

> CEO should strive to lead with integrity and foresight to avoid situations that lead to embarrassment.

There is no human who does this, or are you saying we should turn the CEO role over to AI? :)



Going to be hard to keep claiming you didn’t know something, if your signature is on it. I don’t really think a CEO gets to say he didn’t read what he was signing.



I don't understand why, whenever you read about something like this, the head of HR at a company like this (just google (head of people|hr|"human resources" openai linkedin) and see the first result) doesn't end up on a public blacklist of bad actors who are knowingly aggressive toward employees!



PG is Altman's godfather more or less. I am disappoint of these OpenAI news as of late.

5. Sam Altman

I was told I shouldn't mention founders of YC-funded companies in this list. But Sam Altman can't be stopped by such flimsy rules. If he wants to be on this list, he's going to be.

Honestly, Sam is, along with Steve Jobs, the founder I refer to most when I'm advising startups. On questions of design, I ask "What would Steve do?" but on questions of strategy or ambition I ask "What would Sama do?"

What I learned from meeting Sama is that the doctrine of the elect applies to startups. It applies way less than most people think: startup investing does not consist of trying to pick winners the way you might in a horse race. But there are a few people with such force of will that they're going to get whatever they want.

https://paulgraham.com/5founders.html *edited link due to first post getting deleted



This relationship feels like Michael and Ryan from The Office.

One is a well meaning but very naive older person who desperately wants to be liked by the cool kids, the other is a pretentious young conman who soars to the top by selling his “vision”. Michael is a huge simp for Ryan and thinks of himself as Ryan’s mentor, but is ultimately backstabbed by him just like everyone else.



we're deeply sorry we got caught, we need to do better. i take full responsibility for this mistake, i should have ensured all incriminating documents were destroyed.

ps "responsibility" means "zero consequences"



I don't believe in the AGI claims, or in X-Risk. But I do think it's apparent that AI will only become more powerful and ubiquitous. It is very concerning that someone like Sam, with a history of dishonesty and narcissism that is only becoming more obvious over time, may stand to control a large chunk of this technology.

He can’t be trusted, and as a result OpenAI cannot be trusted.



The company that fails at even a simple good-faith gesture in its employee agreements claims it is the only one who can handle AGI, while lobbying the government to create regulation that locks out open source.



OpenAI's terrible, horrible, no good, very bad month only continues to worsen.

It's pretty established now that they had some exceptionally anti-employee provisions in their exit policies to protect their fragile reputation. Sam Altman is bluntly a liar, and his credibility is gone.

Their stance as a pro-artist platform is a joke after the ScarJo fiasco, that clearly illustrates that creative consent was an afterthought. Litigation is assumed, and ScarJo is directly advocating for legislation to prevent this sort of fiasco in the future. Sam Altman's involvement is again evident from his trite "her" tweet.

And then they fired their "superalignment" safety team for good measure. As if to shred any last measure of doubt that this company is somehow more ethical than any other big tech company in their pursuit of AI.

Frankly, at this point, the board should fire Sam Altman again, this time for good. This is not the company that can, or should, usher humanity into the artificial intelligence era.



What surprises me about these stories surrounding openAI is how they apologize while lying and downplaying any blame. Do they expect anybody to believe they didn’t know about clawback clauses?



Do they care? The mob will shout for a week or two and then turn their attention somewhere else. spez (the reddit chief) said something like that about their users, and he was absolutely right. A few days ago I was re-reading some of those threads about reddit API changes from ten months back where so many users claimed it was their last message and they were leaving for good. Almost none of them did. I checked two dozen profiles and all but one of them had fresh comments posted within that same day.



I went from very active on multiple subreddits to barely posting once every few months. Instead of answering programming questions or helping people get in shape I'm on other sites doing other things.

Changes like that are hard to measure.



> I went from very active on multiple subreddits to barely posting once every few months. Instead of answering programming questions or helping people get in shape I'm on other sites doing other things. Changes like that are hard to measure.

Changes in sentiment can be hard to measure, but changes in posting behavior seems incredibly easy to measure.



It's the rule of ten (I made that up): 1 in 10 users upvote, 1 in 10 of those comment, and 1 in 10 of those post.

The people barking are actually the least worrisome, they’re highly engaged. The meat of your users say nothing and are only visible in-house.

That said, they also don’t give a shit about most of this. They want their content and they want it now. I am very confident spez knows exactly what he’s talking about.



Same. Redditor for 15 years and the API thing was the last straw.

I didn’t post about not engaging with or using the platform anymore. Nor did I delete my account, since it still holds some value to me. But I slinked away into the darkness and now HN is my social media tool.



I didn’t have a schedule, but probably had 5 or 6 accounts over the years… purging, deleting, and a few weeks later rejoining. The last time I deleted everything was before the API changes, and it was the last straw. I haven’t attempted to create a new account and don’t browse at all. I used to spend hours per day there. Now the only time I end up there is if a search engine directs me there for an answer to a specific question I have.



For me it's been every six months. I've even given some creds for burned accounts to the void for the heck of it.

That said, I think you could easily correlate my hn activity with my reddit usage (inverse proportionality). Loving it tbh, higher quality content overall and better than slashdot ever was



I'm guessing the ones who actually left Reddit did what I did - they disengaged from the site and then deleted all their content and accounts. It's pointless to complain without any actual power.

The relevant stakeholders here are the potential future employees, who are seeing in public exactly how OpenAI treats its employees.



When the changes went through I nuked all my comments and then my account. I don't know if many others did the same, but if so it would mean that you wouldn't see our "I'm leaving" comments anymore, i.e. that we wouldn't be included in your samples.



Yeah, reading old threads is weird. The majority of everything is intact, but there are enough deleted or mangled comments that it amounts to a minor but persistent inconvenience.



I actually find myself using reddit much less. It's not that I'm protesting, but it feels like the community changed into something more like the Facebook crowd. It doesn't feel cutting edge anymore; it's much more tame and stale. The fresh stuff isn't on Reddit anymore.



My activity on Reddit has gone way down since they stopped supporting .compact view on mobile. I definitely miss it and want to go back but it’s incredibly hard to engage with the content on mobile browsers now.



I haven't stopped using it immediately, but it definitely added to the growing list of problems. I don't use that site anymore, except when a search result directs me there. Even then it's a second choice of mine, because I need to disable my VPN to access it, and I won't login.



The risk is not users boycotting them. The risk is OpenAI having trouble recruiting and retaining top talent, which will cause them to eventually fall behind the competition, leading users to naturally leave.



> A few days ago I was re-reading some of those threads about reddit API changes from ten months back where so many users claimed it was their last message and they were leaving for good. Almost none of them did. I checked two dozen profiles and all but one of them had fresh comments posted within that same day.

Lots of people have pointed out problems with your determination, but here's another one: can you really tell none of those people are posting to subvert reddit? I'm not going to go into details for privacy reasons, but I've "quit" websites in protest while continuing to post subversive content afterwards. Even after I "quit," I'm sure my activity looked good in the site's internal metrics, even though it was 100% focused on discouraging other users.



> I checked two dozen profiles and all but one of them had fresh comments posted within that same day.

I also remember when the internet was talking about the twenty four Reddit accounts that threatened to quit the site. It’s enlightening to see that the protest the size of Jethro Tull didn’t impact the site



This is them fucking over their employees though, not the public, and in a very concrete manner. Threats to rob them of millions - maybe tens of millions - are going to hurt more than losing access to a third-party Reddit client.

And the employees also have way more leverage than Reddit users; at this point they should still be OpenAI's greatest asset. Even once this is fixed (which they obviously will do, given they got caught), it's still going to cause a major loss of trust in the entire leadership.



Employees are replaceable. Outside of a very specific few, they have very little leverage. If an employee loses trust and leaves or “quiet quits”, they will simply be replaced with one of the hundreds of people clamoring to work for them. This is why unionization is so great.

Just as Reddit users stay on Reddit because there is nowhere else to go, the reality is that everyone worships leadership because they keep their paychecks flowing.



Yes, that "very little leverage" is why engineers & researchers near the bottom of OpenAI's career ladder are getting paid 900k/year (2/3rds funny money, admittedly, though in practice many people _have_ cashed out at very large multiples).



Employees are replaceable, sure, but that doesn't mean that you can't squander your good will with competent employees and end up only being able to hire sub-par employees.



Once you learn that online outrage doesn't actually impact your life that much, its easy to ignore. Gone are the days of public apologies and now we just sweep criticism under the rug and carry on.



Honestly, from a moderation perspective, the dropoff has been stark: the quality of work behind the scenes has fallen off a cliff on most larger subreddits, and the quality of the content those subreddits facilitate has declined in turn.

It's definitely had a very real impact, but since it's not one that's likely to hit the bottom line in the short term, it doesn't matter in any way beyond the user experience.



It is hard to compete for high-end AI research and AI engineering talent. This definitely matters and they definitely should care. Their equity situation was already a bit of a barrier by being so unusual, now it's going to be a harder sell.

I know extremely desirable researchers who refuse to work for Elon because of how he has historically treated employees. Repeated issues like this will slowly add OpenAI to that list for more people.



This is how anything political (big or small P) works.

Aspirations keep people voting against their interests.

I personally worry that the way fans of OpenAI and Stability AI are lining up to criticise artists for demanding to be compensated, or accusing them of “gatekeeping” could be folded into a wider populism, the way 4chan shitposting became a political position. When populism turns on artists it’s usually a bad sign.



It's not about taking sides, it's about not caring. Everyone is tired of getting worked up over super rich CEOs being "aggressive" to their very rich employees and your,"if you're not with us, you're against us" attitude.



Every legal clause that affects company ownership is accepted by the CEO and the board. It's not something VP or general counsel can put there. Lo and behold, signatures from Altman and Kwon are there.

>Company documents obtained by Vox with signatures from Altman and Kwon complicate their claim that the clawback provisions were something they hadn’t known about.

>OpenAI contains multiple passages with language that gives the company near-arbitrary authority to claw back equity from former employees or — just as importantly — block them from selling it.

>Those incorporation documents were signed on April 10, 2023, by Sam Altman in his capacity as CEO of OpenAI.



The public statements would suggest either that Sam Altman is lying or he signs anything that is put in front of him without reading it. I'm inclined to believe that whatever is said is PR (aka BS). In a court of law it is the written and signed contracts that are upheld.



Employment letters, with equity grants, are practically never signed by anyone on the board, and for companies of any size, practically never signed by the CEO. There are a bunch of people who can bind a company legally, and it's one of the things they figure as part of their governing docs, board process etc.



> The text in the letter must be approved

Employment agreements don't even need to be approved by the CEO, let alone the Board. Delegating this responsibility is fairly common.

That said, Sam chose to sign. The buck stops with him, and he has--once again--been caught in a clear and public lie.



I'm not talking about grant-making case by case, but delegation authorization resolution itself. Delaware General Corporation Law has sections describing how it can be done.



> delegation authorization resolution itself. Delaware General Corporation Law has sections describing how it can be done

How what can be done?

Nothing in Delaware law says the CEO--let alone the Board--has to sign off on every form of employment agreement. That's a classic business judgement.



fwiw, re-reading your initial comment, I think you may have meant to say one thing and have inadvertently said two things. The comment sounds like you are saying any clause in any contract which has any effect on equity. You might have intended to say any clause in the contracts you then copy/pasted, which deal with the process around granting equity.

?



Again, wrong.

It's just not how the world works. It's legally not required, and in practice not done. If the board and CEO had to sit around approving every contract change in every country, my goodness they'd have no time left for anything else.



I wrote: Every legal clause that affects company ownership is accepted by the CEO and the board.

I did not write: Every legal clause is accepted by the CEO and the board.



I read it. It's just not right. Companies have all kinds of processes in place for who can negotiate what and up to what limit and who has to agree if it goes above this or changes that or whatever. There are entire software packages around contract management to route contracts to the right place based on the process the company has in place.



You're right that in general Sam Altman isn't countersigning every OpenAI employee contract.

However, at some point a lawyer sat down and worked with people to write the form-contract, and someone said "you sure? you want them to sign an NDA on exit? with a clause that lets you claw back equity? (implicit: because that's not normal)"



The form-contract is changing frequently at a company going through a lot of corporate changes and with a lot of freakishly talented employees who probably negotiate hard on contracts.

To be clear, he may well have known. But it isn't a given and in the grand scheme of things on a CEO brain, it would have been way down the list of capturing mind share.



Agree 100%, something tells me A) he really didn't know B) it's still scuzzy.

I'm taking a mental note to remember why mom always said to read e v e r y word before you sign.

I should have learned this at a younger age, somehow ended up 50/50 in an LLC I always assumed was going to be 70/30. Cost a lotttt of time and energy, essentially let them hold the company hostage for $60K later, after some really strange accounting problems were discovered. (my heart says they didn't take money, but they were distracted and/or incompetent)



Equity grants are usually signed by the board, although a lot of companies treat this as a meaningless formality and present it as though the document your manager signs is the real one. If you take a look at equity grants you've gotten in offer letters in the past, I bet they have "effective on board approval" in there somewhere.



They’re only apologetic because they got caught in a PR shitstorm. They would not otherwise. Being an sh*bag company that claws back equity is a huge red flag and can drive away the critical people who make up the company. They started an arms race, but with companies with much deeper pockets. Meta will be more than happy to gobble up any and every OpenAI employee who no longer wants to work there.



> Do they expect anybody to believe they didn’t know about clawback clauses?

Why wouldn’t they? I’m sure you can think of a couple of politicians and CEOs who in recent years have clearly demonstrated that no matter what they do or say, they will have a strong core of rabid fans eating their every word and defending them.



Not trying to play the devil's advocate here, but I am thinking how this would play out if I ever opened a spinoff...

Let's say I find a profitable niche while working for a project and we decide to open a separate spin off startup to handle that idea. I'd expect legality to be handled for me, inherited from the parent company.

Now let's also say the company turns out to be disproportionately successful. I'd say I would have a lot on my plate to worry about, the least of which the legal part that the company inherited.

In this scenario it is probable that hostile clauses in contracts would be dug up. I surely would be legally responsible for them, but how much would I be to blame for them, truly?

And if the company handles the incident well, how important should that blame putting be?



> I'd expect legality to be handled for me, inherited from the parent company.

That sounds like a really bad idea for many many reasons. Lawyers are cheap compared to losing control, or even your stake, to legal shenanigans.



It has become a hallmark of Western civilization to first of all Cover Your Ass, and where it gets exposed, to Pretend It's Covered, but when its photos get published, to Sincerely Apologize, and when pressed even more, to come out afresh with a Bold Pro-Coverage Vision and Commitment.

But maybe there's a further step that someone like OpenAI seems uniquely capable of evolving.



It's been stultifying the older I get to see how easy it is for people to lie to themselves and others, everywhere.

You have to be really attuned to "is this actually rational or sound right, or am I adding in an implicit 'but we're good people, so,'"



Right. The big change is bad faith argument developing into unapologetic bad faith developing into weaponised bad faith.

It accelerated rapidly with some trends like the Tea Party, Gamergate, Brexit, Andrew Wakefield, covid antivax, and the Ukraine situation, and is in evidence on both sides of the trans rights debate, in doxxing, in almost every single argument on X that goes past ten tweets, etc.

It's something many on the left have generally identified as worse from the right wing or alt.right.

But this is just because it's easier to categorise it when it's pointing at you. It's actually the primary toxicity of all argument in the 21st century.

And the reason is that weaponised bad faith is addictive fun for the operator.

Basically everyone gets to be Lee Atwater or Roger Stone for a bit, and everyone loves it.



> It's something many on the left have generally identified as worse from the right wing or alt.right.

It depends a bit by what you mean by left and right, but if you take something like Marxism that was always 100% a propaganda effort created by people who owned newspapers and the pervasiveness of propaganda has been a through line e.g. in the Soviet Union, agitprop etc. A big part of the Marxist theory is that there is no reality, that social experience completely determines everything, and that sort of ideology naturally lends itself to the belief that blankets of bad faith arguments for "good causes" are a positive good.

This sort of thinking was unpopular on the left for many years, but it's become more hip no doubt thanks to countries like Russia and China trying to re-popularize communism in the West.



Propaganda at a national level, it's always been that, and I take your point for sure.

I think perhaps I didn't really make it totally clear that what I'm mostly talking about is a bit closer to the personal level -- the way people fight their corners, the way twitter level debate works, the way local politicians behave. The individual, ghastly shamelessness of it, more than the organised wall of lies.

Everyone getting to play Roger Stone.

Not so much broadcast bad faith as narrowcast.

I get the impression Stalinism was more like this -- you know, you have your petty level of power and you _lie_ to your superiors to maintain it, but you use weaponised bad faith to those you have power over.

It's a kind of emotional cruelty, to lie to people in ways they know are lies, that make them do things they know are wrong, and to make it obvious you don't care. And we see this everywhere now.



Well, I was referring to Marx and Engels. That's sort of how the whole movement got started. The post-Hegelians who turned away from logic-based philosophical debate to a sort of anti-logical emotional debate where facts mattered less than the arc of history. That got nationalized and industrialized with Lenin and Stalin etc, but that trend precedes them and was more personal. It was hashed out in coffee houses and drinking clubs.

You see the same pattern with social media accounts that claim to be on the Marxist-influenced left. Their tactics are very frequently emotionally abusive or manipulative. It's basically indistinguishable in style from how people on the fringe right behave.

Personally I don't think it's a right vs left thing. It's more about authoritarianism and the desire to crush the people you feel are violating the rules, especially if it seems like they're getting away with violating the rules. There are just some differences about what people think the rules are.



> Personally I don't think it's a right vs left thing. It's more about authoritarianism and the desire to crush the people you feel are violating the rules, especially if it seems like they're getting away with violating the rules. There are just some differences about what people think the rules are.

Oh I agree. I wasn't making it a right-vs-left thing, but rather neutering the idea that people perceive it to be.

I would not place myself on the political right at all -- even in the UK -- but I see this idea that bad-faith is an alt.right thing and I'm inclined to push back, because it's an oversimplification.



I mean it's not like anything is going to happen to them anyway.

People will continue to defend and worship Altman to their last drop of blood on HN and elsewhere, consumers will continue using GPT, businesses will keep hyping it up, and rivers of cash will keep flowing into his pockets per the status quo.

If one truly wants to make a change, one should support alternative open source models to remove our dependency on Altman and co; I fear a day when such powerful technology is tightly controlled by OpenAI. We have already given so much of our computing freedom away to a handful of companies; let's make sure AI doesn't follow.

Honestly, I wonder whether we would ever have gotten access to Linux if it were invented today.



>People will continue to defend and worship Altman until their last drop of blood on HN and elsewhere

The percentage of HN users defending Altman has dropped massively since the board scandal ~6 months ago.

>consumers will continue using GPT, businesses will keep hyping it up

Customers will use the best model. If OpenAI loses investors and talent, their models may not be in the lead.

IMO the best approach is to build your app so it's agnostic to the choice of model, and take corporate ethics into consideration when choosing a model, in addition to performance.



Yes, I've definitely seen people believe that in various discussions. Combine "Altman said they'd totally never done this" with "the ex-employee who first wrote about this didn't write with absolute 100% clarity that this applied to vested equity", and there's enough cover to continue believing what one wants to believe. And if the news cycle dies down before the lie is exposed, then that's a win.

Obviously that should not be possible any more with these leaked documents, given they prove both the existence of the scheme and Altman and other senior leadership knowing about it. Maybe they thought that since they'd already gagged the ex-employees, nobody would dare leak the evidence?
