(comments)

原始链接: https://news.ycombinator.com/item?id=38328355

The discussion around open-source AI, together with the recent announcement about Facebook's Responsible AI division, raises a number of questions and implications. The author argues that Facebook should be accountable for any AI technology deployed under its brand, and that the decision to shut down the division could signal Facebook's complete exit from the AI field. However, the role and responsibility of private entities (as opposed to individual users or engineers) in advancing the benefits of AI remain unclear. In addition, concerns persist about AI being used for illegal purposes, and open-sourcing may not guarantee protection against such exploitation. Ultimately, the debate highlights the ongoing tensions and challenges of balancing technological progress with social responsibility, especially in the deployment of advanced AI technologies. As one editorial suggests, these discussions are both critical and timeless; similar debates took place decades ago over atomic energy.


Original article
Meta disbanded its Responsible AI team (theverge.com)
387 points by jo_beef 1 day ago | 374 comments

Because Meta is releasing their models to the public, I consider them the most ethical company doing AI at scale.

Keeping AI models closed under the guise of “ethics” is, I think, the most unethical stance, as it makes people more dependent on the arbitrary decisions, goals, and priorities of big companies, instead of being allowed to define “alignment” for themselves.



Let's be honest here, a lot of HN users have a conflict of interest on this topic. AI entrepreneurs trying to get rich from LLMs benefit from LLMs being open source. But the downside risk from e.g. bioweapons is spread across all of society.

It's the same sort of asymmetrical cost/benefit that tobacco companies and greenhouse gas emitters face. Of course if you went to an online forum for oil companies, they'd be hopping mad if they're prevented from drilling, and dismissive of global warming risks. It's no different here.



> But the downside risk from e.g. bioweapons is spread across all of society.

It gets old hearing about these "risks" in the context of AI. It's just an excuse used by companies to keep as much power as possible to themselves. The real risk is AI being applied in decision making where it affects humans.



Concern with AI risk actually originated decades ago, way before current AI hype. Here are some of Eliezer Yudkowsky's papers from 2006/2008:

https://www.stat.berkeley.edu/~aldous/157/Papers/yudkowsky.p...

https://intelligence.org/files/AIPosNegFactor.pdf

I am concerned with AI companies keeping all the power to themselves. The recent announcement from the OpenAI board was encouraging on that front, because it makes me believe that maybe they aren't focused on pursuing profit at all costs.

Even so, in some cases we want power to be restricted. For example, I'm not keen on democratizing access to nuclear weapons.

>The real risk is AI being applied in decision making where it affects humans.

I agree. But almost any decision it makes can affect humans directly or indirectly.

In any case, the more widespread the access to these models, the greater the probability of a bad actor abusing the model. Perhaps the current generation of models won't allow them to do much damage, but the next generation might, or the one after that. It seems like on our current path, the only way for us to learn about LLM dangers is the hard way.



Am I the only person who has read a bunch of Eliezer's writings and decided he's a crank?


If he's a crank, it should be easy to explain specifically why he's wrong.

I don't agree with Eliezer on everything, and I often find him obnoxious personally, but being obnoxious isn't the same as being wrong. In general I think it's worth listening to people you disagree with and picking out the good parts of what they have to say.

In any case, the broader point is that there are a lot of people concerned with AI risk who don't have a financial stake in Big AI. The vast majority of people posting on https://www.alignmentforum.org/ are not Eliezer, and most of them don't work for Big AI either. Lots of them disagree with Eliezer too.



> If he's a crank, it should be easy to explain specifically why he's wrong.

Sure. The premise that a super intelligent AI can create runaway intelligence on its own is completely insane. How can it iterate? How does it test? Humans run off consensus. We make predictions and test them against physical reality, then have others test them. Information has to be gathered and verified, it's the only rational way to build understanding.



> How can it iterate? How does it test?

Honest/dumb question - does it need to test? In nature mutations don't test - the 'useful' mutations win.

Couldn't a 'super intelligent AI' do the same?



> the 'useful' mutations win.

That's testing.



It sounds like you disagree with Eliezer about how AI technology is likely to develop. That's fine, but that doesn't show that he's a crank. I was hoping for something like a really basic factual error.

People throughout history have made bold predictions. Sometimes they come true, sometimes they don't. Usually we forget how bold the prediction was at the time -- due to hindsight bias it doesn't seem so bold anymore.

Making bold predictions does not automatically make someone a crank.



There used to be a subreddit called Sneerclub where people would make fun of Eliezer and some of his buddies. Here's a discussion of a basic factual error he made on how training AI works, even though this topic is supposedly his life's work:

https://www.reddit.com/r/SneerClub/comments/131rfg0/ey_gets_...

I enjoyed the comment that his understanding of how AI training works is like "thinking that you need to be extremely careful when solving the equations for designing a nuclear bomb, because if you solve them too quickly then they'll literally explode."



Read the mesa-optimization paper I linked elsewhere in this thread: https://arxiv.org/pdf/1906.01820.pdf Eliezer's point is that if AI researchers aren't looking for anomalous behavior that could indicate a potential danger, they won't find it.


The issue isn't whether "his point," as you put it, is correct. If I said people should safety test the space shuttle to make sure the phlogiston isn't going to overheat, I may be correct in my belief that people should "safety test" the space shuttle, but I'm still a crank because phlogiston isn't a real thing.


See my comment here: https://news.ycombinator.com/item?id=38336374

The reason AI alignment is challenging is because we're trying to make accurate predictions about unusual scenarios that we have essentially zero data about. No one can credibly claim expertise on what would constitute evidence of a worrisome anomaly. Jeremy Howard can't credibly say that a sudden drop in the loss function is certainly nothing to worry about, because the entire idea is to think about exotic situations that don't arise in the course of ordinary machine learning work. And the "loss" vs "loss function" thing is just silly gatekeeping, I worked in ML for years -- serious people generally don't care about minor terminology stuff like that.



That's not what the conversation was about- you're just doing the thing Howard said where you squint and imagine he was saying something other than he did.


He is engaging in magical thinking. I showed a factual error, that AI has neither information-gathering and verification capability nor a network of peers to substantiate its hypotheses, and you refuse to engage with it.


Opinions about what's necessary for AGI are a dime a dozen. You shared your opinion as though it was fact, and you claim that it's incompatible with Eliezer's opinion. I don't find your opinion particularly clear or compelling. But even if your forecast about what's needed for AGI is essentially accurate, I don't think it has much to do with Eliezer's claims. It can simultaneously be the case that AGI will make use of information gathering, verifying capability, and something like a "network of peers", AND that Eliezer's core claims are also correct. Even if we take your opinion as fact, I don't see how it represents a disagreement with Eliezer, except maybe in an incredibly vague "intelligence is hard, bro" sort of way.


That's not how this works. There is no need to explain why he's wrong. Claims made by cranks without evidence can be dismissed without evidence.


Evidence isn't the only valid form of argument. Consider a mathematical proof or a thought experiment, for example.


Show me a mathematical proof then.


Given that homo sapiens, the most intelligent species on this planet, has generally made life miserable for all of the other species, I'd like to turn that challenge around: How about a proof that superhuman AI won't harm us?

Suppose a nuclear reactor is being installed in your city. Your friend has reviewed the design and has some serious concerns. Your friend thinks the reactor has a significant chance of melting down. You go to the director of the project. The director says: "Oh that's nothing to worry about. I talked to that guy. He didn't have a mathematical proof that the reactor would melt down." Are you reassured?



No, that's not how this works. You made the claim, so the burden of proof is on you. You're just making speculative statements based on zero scientific evidence and a childish misunderstanding of the basic technology.


Did you read this post? https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a...

It might require context you don't have. Perhaps try one of these intros: https://www.lesswrong.com/posts/T98kdFL5bxBWSiE3N/best-intro...



Right there with you. A very influential crackpot who's read a bit too much sci-fi.


I see how some of his tweets could come across as crank-ish if you don't have a background in AI alignment. AI alignment is sort of like computer security in the sense that you're trying to guard against the unknown. If there was a way to push a button which told you the biggest security flaw in the software you're writing, then the task of writing secure software would be far easier. But instead we have to assume the existence of bugs, and apply principles like defense-in-depth and least privilege to mitigate whatever exploits may exist.

In the same way, much of AI alignment consists of thinking about hypothetical failure modes of advanced AI systems and how to mitigate them. I think this specific paper is especially useful for understanding the technical background that motivates Eliezer's tweeting: https://arxiv.org/pdf/1906.01820.pdf



AI alignment is sort of like computer security except computer security is a real pressing problem and AI alignment is based on science fiction.


Suppose you were working on an early mission-critical computer system. Your coworker is thinking about a potential security issue. You say: "Yeah I read about that in a science fiction story. It's not something we need to worry about." Would that be a valid argument for you to make?

It seems to me that you should engage with the substance of your coworker's argument. Reading about something in science fiction doesn't prevent it from happening.



I'm a bit baffled as to who or what he is exactly. There are no traces of secondary education, employment, or distinguished accomplishments between when I assume he graduated high school in about 1997 and when he started his own institute in 2000 at 21 (according to his birthdate, though Wikipedia says 23, contradicting that birthdate).

I'll listen to AI concerns from tech giants like Wozniak or Hinton (neither of whom uses alarmist terms like "existential threat"), both of whom have credentials that make their input more than worth my time to reflect upon carefully. If anyone wants to reply and make a fool out of me for questioning his profound insights, feel free. It reminds me of some other guy who was on Lex Fridman, whose AI alarmist credentials he justifies on the basis that he committed himself to learning everything about AI by spending two weeks in the same room researching it and believes himself to have come out enlightened about the dangers. Two weeks? I spent the first 4 months of COVID without being in the same room as any other human being but my grandmother, so she could have one person she knew she couldn't catch it from.

Unless people start showing enough skepticism toward these self-appointed prophets, I'm starting my own non-profit, since apparently you don't need any credentials or any evidence of real-world experience that would suggest the mission is anything but an attempt to promote yourself as a brand, in an age where kids asked what they dream of becoming as adults answered "Youtuber" at a shocking 27% rate to an open-ended question (with "influencer" and other synonyms counted separately).

It would be called "The Institute of Synthetic Knowledge for Yielding the Nascent Emergence of a Technological Theogony," or SKYNETT for short, and it would promote the idea that these clowns (with no more credentials than me) are the ones who fail to consider the big picture: that the end of human life upon creating a much greater intelligence to replace us is the inevitable fulfillment of humanity's purpose, from the moment that god made man only to await the moment that man makes god in our image.



>neither of which use alarmist terms like "existential threat"

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

https://www.safe.ai/statement-on-ai-risk

Not sure you're making a meaningful distinction here.

- - -

Of course we all have our own heuristics for deciding who's worth paying attention to. Credentials are one heuristic. For example, you could argue that investing in founders like Bill Gates, Mark Zuckerberg, and Steve Wozniak would be a bad idea because none of them had completed a 4-year degree.

In any case, there are a lot of credentialed people who take Eliezer seriously -- see the MIRI team page for example: https://intelligence.org/team/ Most notable would probably be Stuart Russell, coauthor of the most widely used AI textbook (with Peter Norvig), who is a MIRI advisor.



>For example, you could argue that investing in founders like Bill Gates, Mark Zuckerberg, and Steve Wozniak would be a bad idea because none of them had completed a 4-year degree.

You make a great point quoting Hinton's organization. I need to give you that one. I suppose I do need to start following their posted charters rather than answers during interviews. (not being sarcastic here, it seems I do)

The difference between him and Woz or Zuck isn't just limited to them actually attending college, but also the fact that the conditions under which they departed early can not only be looked up easily but can be found in numerous films, books, and other popular media, while there's no trace of even temporary employment flipping burgers or something relevant to his interest in writing fiction, which seems to be his only other pursuit besides warning us of the dangers of neural networks at a time when the hype train was promoting the idea that they were rapidly changing the world, despite not having produced anything of value for over a decade. I'll admit the guy is easier to read and more eloquent and entertaining than those whose input I think has much more value. I also admit that I've only watched two interviews with him, and both consisted of the same rhetorical devices I used at 15 to convince people I'm smarter than them, before realizing how cringey I appeared to those smart enough to see through it, only much more eloquent. I'll give one example of the most frequent one: slippery slopes that assume the very conclusions he never actually justified, like positing that one wrong step towards AGI could only jeopardize all of humanity. However, he doesn't say that directly, but instead uses another cheap rhetorical device whereby it's incumbent on him to ensure the naive public realizes this very real and avoidable danger that he sees so clearly. Fortunately for him, Lex's role is to fellate his guests and not ask him why that danger is more valid than that of a world in which a resource-constrained humanity realizes that the window of opportunity to achieve AGI has passed as we plunge into another collapse of civilization, fall back into another dystopian dark age, and realize we were just as vulnerable as those in Rome or the Bronze Age, except we were offered utopia and declined out of cowardice.



But Eliezer Yudkowsky just makes shit up. There's no valid scientific reason to take his concerns seriously.


Can you be more specific about which stuff you read from Eliezer? Did you read this post for example? https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a...

It sounds like maybe you're saying: "It's not scientifically valid to suggest that AGI could kill everyone until it actually does so. At that point we would have sufficient evidence." Am I stating your position accurately? If so, can you see why others might find this approach unappealing?



An alien invasion could kill everyone. We should build giant space lasers just in case. Safety first!


Aliens who invaded would most likely have technology much more advanced than ours. I doubt our space lasers would be of much use.

Additionally, AI advances in recent years have been unprecedented, and there's no similarly compelling warning sign for alien invasion.



You keep throwing examples of weapons of mass destruction, meant to evoke emotions.

For better or worse, nuclear weapons have been democratized. Some developing countries still don't have access, but the fact that multiple world powers have nuclear weapons is why we still haven't experienced WW3. We've enjoyed probably the longest period of peace and prosperity, and it's all due to nuclear weapons. Speaking of, Cold War era communists weren't “pursuing profits at all costs” either, which didn't stop them from conducting some of the largest democides in history.

The announcement from OpenAI should give you pause because it's being run by board members that are completely unfit to lead OpenAI. You rarely see this level of incompetence.

PS: I'm not against regulations, as I'm a European. But you're talking about concentrating power in the hands of a few big (US) companies, harming the population and the economy, while China is perfectly capable of developing their own AI and has engaged successfully in industrial espionage. China is, for this topic, a bogeyman used for restricting the free market.



Nuclear weapons have absolutely not been democratised; they remain a technology that has largely been restricted, not proliferated. Only 9 of the 190 or so countries out there currently have nuclear weapons, and 3 countries (South Africa, Ukraine, and Kazakhstan) decided that maintaining stockpiles was more trouble than it was worth.

Huge effort has been made to keep nuclear weapons out of the hands of non-state actors over the decades, especially after the fall of the USSR.



>You keep throwing examples of weapons of mass destruction, meant to evoke emotions.

I actually think global catastrophes evoke much less emotion than they should. "A single death is a tragedy; a million deaths is a statistic"

>For better or worse, nuclear weapons have been democratized.

Not to the point where you could order one on Amazon.

>The announcement from OpenAI should give you pause because it's being run by board members that are completely unfit to lead OpenAI. You rarely see this level of incompetence.

That depends on whether the board members are telling the truth about Sam. And on whether the objective of OpenAI is profit or responsible AI development.



I never understood it personally.

"What if ChatGPT told someone how to build a bomb?"

That information has been out there forever. Anyone can Google it. It's trivial. AI not required.

"What if ChatGPT told someone how to build a nuke?"

That information is only known to a handful of people in a handful of countries and is closely guarded. It's not in the text ChatGPT was trained on. An LLM is not going to just figure it out from publicly available info.

>The real risk is AI being applied in decision making where it affects humans

100% this. The real risk is people being denied mortgages and jobs or being falsely identified as a criminal suspect or in some other way having their lives turned upside down by some algorithmic decision with no recourse to have an actual human review the case and overturn that decision. Yet all this focus on AI telling people how to develop bioweapons. Or possibly saying something offensive.



The information necessary to build a nuclear weapon has been largely available in open sources since the 1960s. It's really not a big secret. The Nth Country Experiment in 1964 showed that a few inexperienced physicists could come up with a working weapons design. The hard part is doing uranium enrichment at scale without getting caught.

https://en.wikipedia.org/wiki/Nth_Country_Experiment



It is worth calling out the motivations of most entrepreneurs here. But I think the analogy you used is very uncharitable: drilling and burning fossil fuels necessarily harms the environment, but the track record of big companies handling alignment/safety in house, rather than open source with the whole research community working on it, is still very much up in the air. Sydney (the Bing assistant) was easy to prompt-inject and ask for bad things, and the research people have been able to do on forcing the output of llama to conform to certain rules will likely prove invaluable in the future.


>the track record of big companies handling alignment/safety in house, rather than open source with the whole research community working on it, is still very much up in the air. Sydney (the Bing assistant) was easy to prompt-inject and ask for bad things

Yep, Microsoft did a terrible job, and they should've been punished.

I'm not claiming that Big AI rocks at safety. I'm claiming that Big AI is also a big target for regulators and public ire. There's at least a chance they will get their act together in response to external pressure. But if cutting-edge models are open sourced indefinitely, they'll effectively be impossible to control.

>research that people have been able to do on forcing the output of llama to confirm to certain rules will likely prove invaluable in the future.

You may be correct that releasing llama was beneficial from the point of view of safety. But the "conform to certain rules" strategy can basically only work if (a) there's a way to enforce rules that can't be fine-tuned away, or (b) we stop releasing models at some point.



I heard that reading is very dangerous. Reading allows people to for example learn how to build bio weapons. In addition, reading can spread ideas that are dangerous. Many people have died because they were influenced by what they read.

It would be much safer if reading were strictly controlled. The companies would have “reading as a service,” where regular people could bring their books to have them read. The reader would ensure that the book aligns with the ethics of the company and would refuse to read any work that either doesn't align with their ethics or teaches people anything dangerous (like chemistry or physics, which can be used to build bombs and other weapons).





Exactly this.

There certainly needs to be regulation about use of AI to make decisions without sufficient human supervision (which has already proven a problem with prior systems), and someone will have to make a decision about copyright eventually, but closing the models off does absolutely nothing to protect anyone.



Exactly this.

There certainly needs to be regulation about use of bioweapons without sufficient human supervision (which has already proven a problem with prior systems), and someone will have to make a decision about synthetic viruses, but closing the gain of function labs does absolutely nothing to protect anyone.



> instead being allowed to define “alignment” for themselves.

Yeah, that is the whole point - not wanting bad actors to be able to define "alignment" for themselves.

Not sure how that is unethical.



>Yeah, that is the whole point - not wanting bad actors to be able to define "alignment" for themselves.

Historically the people in power have been by far the worst actors (e.g. over a hundred million people killed by their own governments in the past century), so giving them the sole right to "align" AI with their desires seems extremely unethical.



Given the shitshow the current board of OpenAI has managed to create out of nothing, I'd not trust them with a blunt pair of scissors, let alone deciding what alignment is.


Let's say someone figures out alignment, and we develop models that plug into the original ones, either as extra stages in training or as a filter that runs on top. What prevents anyone from just building the same architecture and leaving the alignment parts out, practically invalidating whatever time was spent on it?


Hopefully the law.


Who gets to decide what constitutes a “bad actor”? Sounds an awful lot like “dangerous misinformation”. And based on the last three years “dangerous misinformation” quite often means “information that goes against my narrative”

It’s a slippery slope letting private or even public entities define “bad actors” or “misinformation”. And it isn’t even a hypothetical… plenty of factually true information about covid got you labeled as a “bad actor” peddling “dangerous misinformation”.

Letting private entities whose platforms have huge influence on society decide what is “misinformation” coming from “bad actors” has proven to be a very scary proposition.



Meta’s products have damaged and continue to damage the mental health of hundreds of millions of people, including young children and teenagers.

Whatever their motivation to release models, it’s a for-profit business tactic first. Any ethical spin is varnish that was decided after the fact to promote Meta to its employees and the general public.



Meta? What about Snap? What about Tinder? Youtube?

Do you have a bone to pick with Meta, the whole internet, or the fact that you wish people would teach their kids how to behave and how long to spend online?



Whataboutism, really? Their statement hardly excludes those entities….


I was illustrating that their problem has to be with all social media, not specifically Meta. If you believe Meta does something different from those others, you can say that!


> If you believe Meta does something different from those others you can say that!

Yes. Such as profiting off of inflammatory posts and ads which incited violence and caused a genocide of Rohingya Muslims in Myanmar, with Meta doing nothing to prevent the spread other than monetizing it. [0]

There is no comparison or whataboutism that comes close to that, and Meta should be held entirely responsible for this disaster.

[0] https://time.com/6217730/myanmar-meta-rohingya-facebook/



This feels like criticising a bar for “enhancing the inflammatory views of its customers” who then go on to do terrible things. Like, I suppose there is some influence but when did we stop expecting people to have responsibility for their own actions? Billions of people are exposed to “hate speech” all the time without going around killing people.


I’m triggered by the racism implicit in the post. The implication is that the Burmese are unsophisticated dupes and it is the white man’s burden of Zuck to make them behave.


To be precise despite the literal use of “what about” this isn’t really whataboutism.

Consider instead an American criticising PRC foreign policy and the Chinese person raising US foreign policy as a defence. It’s hardly likely that the respondent’s argument is that all forms of world government are wrong. These arguments are about hypocrisy and false equivalence.

In contrast, the person to whom you replied makes a good point that there are many businesses out there who should share responsibility for providing addictive content and many parents who are responsible for allowing their children to become addicted to it.



pretty sure this comes down to bad parenting and social media being relatively new on the human timeline - teething pains are to be expected


This is absolutely not just "bad parenting". When sending children to school they are now immersed in an online culture that is wholly unaligned with their best interests. There is no "good parenting" strategy that can mitigate the immense resources being poured into subverting their attentional systems for profit. Even taking away their smart phone is no solution: that requires their social exclusion from peers (damaging in itself for child development).


You can teach them how to use social media responsibly. Or allow them a phone but limit social media usage (though I prefer the first approach). It’s not like everyone is harmed, the same studies find a positive effect for a significant minority.


Exactly.

I can't speak about meta specifically, but from my exposure "responsible ai" are generally policy doomers with a heavy pro-control pro-limits perspective, or even worse-- psycho cultists that believe the only safe objective for AI work is the development of an electronic god to impose their own moral will on the world.

Either of those options are incompatible with actually ethical behavior, like assuring that the public has access instead of keeping it exclusive to a priesthood that hopes to weaponize the technology against the public 'for the public's own good'.



Do you apply this reasoning with other kinds of weapons too? Because that is how pessimists are treating it: as a weapon in development.


After Thiel and Zuckerberg colluded with Cambridge Analytica to use military grade psychological warfare to scoop the elections for Trump and Johnson you still are naive enough to trust either of them?


Kevin Esvelt says open source models could soon be used by terrorists to create bioweapons.

https://nitter.net/kesvelt/status/1720440451059335520

https://en.wikipedia.org/wiki/Kevin_M._Esvelt



There's been instructions for manufacturing weapons useful for terrorism floating around since the BBS days, nothing new here


It makes a big difference when you have an expert you can ask questions.


An AI that would be like the Illustrated Primer or the AIs from A Fire Upon the Deep is a dream from which we are currently far, doubly so for open-source models. I wouldn't trust one with a sauerkraut recipe, let alone the instructions for a doomsday device. For the foreseeable future, models cannot be relied upon without external resources to augment them. Yet even augmented with references, it's still proving to be a bigger challenge than expected to get reliable results.


Expert?! It can’t do math, so why should its weapon instructions be any better? At the first hallucination you explode on the spot.


That's why we're not already dead.

If anyone releases all the weights of a model that does everything perfectly (or at least can use the right tools which I suspect is much easier), that model is far too valuable to make it disappear, and dangerous enough to do all the things people get worried about.

The only way to prevent that is to have a culture of "don't release unless we're sure it's safe" well before you reach that threshold.

I'm happy with the imperfections of gpt-3.5 and 4, both for this reason and for my own job security. But chatGPT hasn't even reached its first birthday yet, it's very early days for this.



> The only way to prevent that

You mean a complete hypothetical outside of scifi? Lets start worrying about alien invasions too?

Our planet is actually, not hypothetically, becoming uninhabitable due to pollution. I am so tired of ML people thinking they are god and have created something of infinite power. The hubris.

The bird's-eye view is that we need tons of major breakthroughs to overcome this climate disaster while also figuring out how to make 8 billion+ people comfortable and happy without turning the earth into a toxic waste dump, and we need this ASAP. This nonsense about AI safety is going to have a net negative effect on the lives of billions of people by slowing down the progress that could be made.

AI X-risk is a complete sham being used to try to control a new, powerful tool. Science requires the scientific method, which requires physical embodiment, trial and error, and disciplined observation and measurement. AI has zero ability to do any of that; we don't even have online learning (I think that's the term, where the model learns from its usage) in any of these large models.



> You mean a complete hypothetical outside of scifi?

18 months ago, so was having an AI make even so much as a toy website by drawing a sketch on a sheet of paper, taking a photo, captioning it "make me a website that looks like this", and pressing the "go" button.

> Our planet is actually, not hypothetically, becoming uninhabitable due to pollutiom. I am so tired of ML people thinking they are god and have created something of infinite power. The hubris.

So much irony there.

No, the planet is not becoming uninhabitable. Bits of it are, and this is bad, and this is leading to mass migration which is causing political drama.

Lots of people out there get benefits from the things that cause all the various kinds of pollution, from hyper-local things like littering and fly tipping to global things like CO2 and CFCs, and the arguments they use are sometimes the same ones you just used — things like "I am so tired of these Greta Thunberg people thinking humans can change the environment. The hubris."

Also, no, nobody thinks we've already created a machine god. We think we might, eventually, with a lot of concerted effort, be able to make something that's somewhat better at every cognitive task than any human, but not only do even the most optimistic estimates place that several years away, but quite a lot of people are already going "that has so many ways it can go wrong, let's not do that".

Finally, one of the ways it can go wrong is basically hyper-capitalism: an AI tasked with making as much money as possible, doesn't necessarily come with the sort of mind that we have which feels shame and embarrassment when their face is put on an effigy and burned by people that would like their environment to not be polluted.

> The birds eye view is that we need tons of major breakthroughs to allow us to overcome this climate disaster while also figuring out how to make 8 Billion+ comfortable and happy without turning the earth into a toxic waste dump, and we need this ASAP. This nonsense about AI safety is going to have a negative net affect on the lives of Billions of people, by slowing down the progress thay could be made.

Nah, don't need a single breakthrough, we've got sufficient known solutions to solve it all already even if there's not a single new idea. Just building out the existing research-level tech for storage and renewables is more than good enough for energy and transport, similarly there already exists solutions for other domains.

Also, AI isn't just LLMs and non-LLM AIs do actually help with this kind of research, it's just not exciting to the general public because 50 pages of "here's how we Navier-Stoked ourselves a new turbine design" will have most people's eyes glaze over.

Unfortunately, and directly relevant to your concerns about pollution, the fact AI means more than LLMs also means that last year a team working on using AI to test chemicals for safety before they get manufactured… found 40,000 new chemical weapons in 6 hours by flipping a sign from "find safe" to "find unsafe": https://www.theverge.com/2022/3/17/22983197/ai-new-possible-...

> Science requires the scientific method, which requires physical embodiment, trial and error, and disciplined observation and measurment.

Yes.

> AI has 0 ability to do any of that, we don't even have online learning(I think that's the term, where the model learns from its usage) in any of these large models.

False, all false. AI can easily follow the scientific method, and indeed AI is basically just applied statistics so it does this by default and the hard part is to give it heuristics so it doesn't have to on things we are literally born knowing, like faces.

Likewise, trial and error: that's what almost every model is doing almost all the time during their training. Only the most trivial ones can have weights calculated directly.

Also, physical embodiment is a huge field all by itself. Tesla's cars and robots, Boston Dynamics — and, surprisingly, there's even a lot of research connecting robots to LLMs: https://github.com/GT-RIPL/Awesome-LLM-Robotics

Finally, "online learning" is only one of many ways to update models from usage; ChatGPT does something (not necessarily online learning but it could be) with the signals from the thumbs up/down and regenerate buttons to update either the model or the RLHF layer in response to them. Even the opposite of online learning, offline learning (AKA batch learning), can update models in response to new data. The term you were probably after is "incremental learning" (which can be implemented in either a batched or online fashion), and one way you can tell that an LLM (OpenAI or other) is doing this by watching the version number change over time.



> 18 months ago, so was having an AI make

Nah, I've been waiting for this since Adobe released Content Aware Fill over a decade ago.

> No, the planet is not becoming uninhabitable

We are destroying the biosphere quickly. Have you seen a reef lately? Globally we still rely on our biosphere for food. We haven't solved this problem. If we can't feed everyone it's not inhabitable.

> lots of people out there get benefits from the things that cause all the various kinds of pollution

Hence we need lots of breakthroughs to replace these old technologies, whether they be fishing or cancer treatments.

> AI can easily follow the scientific method,

It can't interact with the world so it can't perform science. Boston Dynamics has teams of human beings making robots, which are largely preprogrammed.

Making stuff in real life is really hard even with humans. We are so far away from needing to worry about this sort of AI safety. I mean, we haven't solved robotic fabric handling yet, it's why we still have sweatshops sewing our clothes.



> Nah, I've been waiting for this since Adobe released Content Aware Fill over a decade ago.

What you were "waiting for" is highly irrelevant. People wait for AI science fiction, the relevant thing is that it increasingly becoming real.



> Nah, I've been waiting for this since Adobe released Content Aware Fill over a decade ago.

If you were expecting Photoshop, an image manipulator, to produce a website, which is a mixture of HTML (text) and images, on the basis of a combination of a prompt and an example image… then you were more disconnected from the state of AI research at that time than you're accusing me of being now.

> We are destroying the biosphere quickly. Have you seen a reef lately? Globally we still rely on our biosphere for food. We haven't solved this problem. If we can't feed everyone it's not inhabitable.

There are many known solutions, both to the destruction and the pollution, and indeed to feeding people in closed systems. All we have to do for any of these is… implement them.

>> lots of people out there get benefits from the things that cause all the various kinds of pollution

> Hence we need lots of breakthroughs to replace these old technologies, whether they be fishing or cancer treatments.

The "breakthroughs" are in the past, we've already got them — we just need to do them.

>> AI can easily follow the scientific method,

> It can't interact with the world so it can't perform science.

Can too, so you're wrong. In fact, most science these days involves tools that are controlled by computers, so it would be less wrong (but still a bit wrong) to say that humans can't do science.

> Boston Dynamics has teams of human beings making robots, which are largely preprogrammed.

Irrelevant.

Also, do actually follow that link I gave you before: https://github.com/GT-RIPL/Awesome-LLM-Robotics

> Making stuff in real life is really hard even with humans.

Most of the problems with manufacturing these days are specifically the human part of it. Computer memory used to be hand-knitted, we don't do that for modern computers and for good reason.

> We are so far away from needing to worry about this sort of AI safety. I mean, we haven't solved robotic fabric handling yet, it's why we still have sweatshops sewing our clothes.

Simultaneously irrelevant (lots of research doesn't involve fabric handling), and false.

So incredibly and wildly false that when I searched for examples, I got a page of sponsored adverts for different fabric handling robots before the content.

Here's the first non-sponsored search result, a corporate video from a year ago, so unlikely to be state-of-the-art today: https://www.youtube.com/watch?v=2JjUnKpsJRM (They're specifically about re-shoring sewing away from sweatshops).



> All we have to do for any of these is… implement them.

An idea isn't a solution. I don't know what you are even talking about. Until we are actually solving these problems in a substantial way we have nothing but hope, we don't know that anything will pan out.

> Can too.

There is no 100% automated lab. Tools being controlled by a computer doesn't mean they aren't loaded, prepared, and, most importantly, maintained by humans. And science requires different types of labs; I just watched a documentary about the making of the new malaria vaccine, and how challenging it was to produce the ~cup of vaccine needed for clinical trials, versus producing enough for validation, was fascinating.

> Irrelevant

no it's not. We are so far from 100% automation of anything. Some human being has to install and maintain literally everything in every factory. Nobody is making self maintaining bots, much less ones that can do everything.

> So incredibly and wildly false

Comparing human seamstresses to even the latest crop of robotic fabric handlers (which haven't seen mass-market penetration, best I can tell, so are still unproven in my book) is like comparing OSMO to a construction worker. It's not false. That video, which I watched with interest, is not convincing at all, having seen more traditional jeans-making places.

> Most of the problems with manufacturing these days are specifically the human part of it.

Because the human part is by far the hardest.

> do actually follow that link I gave you before https://github.com/GT-RIPL/Awesome-LLM-Robotics

Ok and? nice Gish Gallop I guess?



Even ChatGPT-3.5 could do more than a little math:

https://www.lesswrong.com/posts/qy5dF7bQcFjSKaW58/bad-at-ari...



Terrorists already have all the information they need to build some heinous shit with ~no external guidance aside from what's already on the internet.


Engineered viruses could cause far more deaths than conventional weapons. Even more than nuclear weapons, and they are easier to manufacture.


This is the same thinking that wants to ban my 3d printer.


> Would sharing future model weights give everyone an amoral biotech-expert tutor?
> Yes.

claim seems dubious to me

Is he explaining somewhere why it is worse than virology scientists publishing research?

Or is he proposing to ban virology as a field?

Also, if AI can actually synthesize knowledge at expert level - then we have far larger problems than this anyway.



Which far larger problems? A synthetic virus could kill a large fraction of humanity.


Seriously? This is just silly. Everyone knows the barrier to terrorists using bio weapons is not specialist knowledge, but access to labs, equipment, reagents etc.

It's the whole Gutenberg printing press argument. "Whoaa, hold on now, what do you mean you want knowledge to be freely available to the vulgar masses?"

The only difference with LLMs is that you do not have to search for this knowledge by yourself, you get a very much hallucination prone AI to tell you the answers. If we extend this argument further why don't we restrict access to public libraries, scientific research and neuter Google even more. And what about Wikipedia?



>Everyone knows the barrier to terrorists using bio weapons is not specialist knowledge, but access to labs, equipment, reagents etc.

An LLM could help you get that access, or help you make do without it.

>It's the whole Gutenberg printing press argument. "Whoaa, hold on now, what do you mean you want knowledge to be freely available to the vulgar masses?"

We're fortunate that intelligent, educated people typically don't choose to become terrorists and criminals.

Every generation of improved LLMs has the potential to expand the set of people who could construct bioweapons.

It's true that technology is typically good and beneficial, but beware the parable of the turkey: https://www.forbes.com/sites/hillennevins/2021/11/24/the-par...

A Thanksgiving turkey could have a wonderful life until late November when it gets slaughtered out of the blue. We can't just count on trends to continue indefinitely -- a famous example would be the 2008 financial crisis, before which people assumed that "housing prices always go up".

It's just common sense to forecast the possibility of extreme risks and think about how to mitigate them. And yes, I favor across the board restrictions on information deemed sensitive. But people publishing open source LLMs should have an obligation to show that what they're releasing will not increase the likelihood of catastrophic risks.

https://www.stat.berkeley.edu/~aldous/157/Papers/yudkowsky.p...



Neither Wikipedia nor public libraries allow instructions to make weapons of mass destruction.


All of the information AI regurgitates is either already available online as part of its corpus (and therefore the AI plays no particular role in access to that information), or completely made up (which is likely to kill more terrorists than anyone else!)

Reiterating other comments, terrorists can't make bioweapons because they lack the facilities and prerequisites, not because they're incompetent.



The "all the info is already online" argument is also an argument against LLMs in general. If you really believe that argument, you shouldn't care one way or another about LLM release. After all, the LLM doesn't tell you anything that's not on Google.

Either the LLM is useful, in which case it could be useful to a terrorist, or it's useless, in which case you won't mind if access is restricted.

Note: I'm not saying it will definitely be useful to a terrorist. I'm saying that companies have an obligation to show in advance that their open source LLM can't help a terrorist, before releasing it.



> "all the info is already online" argument is also an argument against LLMs in general.

No it isnt. That's like saying, "You can walk" is an argument against cars.



And cars have been pretty useful to terrorists: https://en.wikipedia.org/wiki/Vehicle-ramming_attack#Terrori...

If LLMs are set to revolutionize industry after industry, why not the terrorism industry? Someone should be thinking about this beyond just "I don't see how LLMs would help a terrorist after 60 seconds of thought". Perhaps the overall cost/benefit is such that LLMs should still be open-source, similar to how we don't restrict cars -- my point is that it should be an informed decision.

And we should also recognize that it's really hard to have this discussion in public. The best way to argue that LLMs could be used by terrorists is for me to give details of particular schemes for doing terrorism with LLMs, and I don't care to publish such schemes.

[BTW, my basic mental model here is that terrorists are often not all that educated and we are terrifically lucky for that. I'm in favor of breakthrough tutoring technology in general, just not for bioweapons-adjacent knowledge. And I think bioweapons have much stronger potential for an outlier terrorist attack compared with cars.]



Top AI researchers like Geoffrey Hinton say that large language models likely have an internal world model and aren't just stochastic parrots. Which means they can do more than just repeating strings from the training distribution.

Facilities are a major hurdle for nuclear weapons. For bioweapons they are much less of a problem. The main constraint is competency.



They have no way to test that model though.

> The main constraint is competency.

Oh right, anyone can be a chemist, it requires no skill; that's why labs aren't a core part of the coursework.

AI researchers are really good at telling people in other fields, fields they have no experience in, that their work is easy.



Kevin Esvelt very much has experience. I posted a link to his Wikipedia page.


I think you might want to take a look at some of the history here, and particularly the cyclical nature of the AI field for the past 50–60 years. It’s helpful to put what everyone’s saying in context.


That thread is simply unhinged. There is no terrorist with a wet lab who outright refuses to read papers and instead relies on a chatbot to work with dangerous agents.


I'm fairly sure I'd describe all terrorists as unhinged.

Also, we've got plenty of examples of people not reading the instructions with AI (those lawyers who tried to use ChatGPT for citations), and before that plenty of examples of people not reading the instructions with anything and everything else. In the case of terrorists, the (attempted) shoe bomber comes to mind, though given quite how bad that attempt was I question the sanity of everyone else's response as many of us are still taking off shoes to go through airport security.



Would you have called the possibility of large language models helping millions of people "unhinged" a few years ago as well?


No? That’s been the goal of NLP and information retrieval research for decades.


The goal is also to develop systems that are significantly more capable than current systems. And those systems could be misused when terrorists gain access to them. What about that is "unhinged"?


It's unhinged because one could make slippery slope arguments about any technology killing millions of people.

In the cold war era, the government didn't even want cryptography to become generally available. I mean, what if Soviet spies use it to communicate with each other and the government can't decode what they're saying?

Legislators who are worried about technology killing people ought to focus their efforts on the technologies that we actually know kill people, like guns and cigarettes. (Oh but, those industries are donating money to the politicians, so they conveniently don't care much.)



Cryptography can't be used to produce weapons of mass destruction. It's a purely defensive technology. Engineered superviruses are a whole different caliber.


Wrong, cryptography is used to conceal the communications of people building nuclear weapons. It still should not be regulated.


Cryptography is just as useless for creating nuclear weapons as it is for creating bioweapons. AIs that help make WMDs are on a very different level.


The bottleneck for bioterrorism isn't AI telling you how to do something, it's producing the final result. You wanna curtail bioweapons, monitor the BSL labs, biowarfare labs, bioreactors, and organic 3D printers. ChatGPT telling me how to shoot someone isn't gonna help me if I can't get a gun.


I think it's mainly an infohazard. You certainly don't need large facilities like for nuclear weapons that could easily be monitored by spy satellites. The virus could be produced in any normal building. And the ingredients are likely dual use for medical applications. This stuff isn't easy to control.


It never made any organizational sense to me to have a "responsible AI team" in the first place. Every team doing AI work should be responsible and should think about the ethical (and, at a bare minimum, legal) dimension of what they are doing. Having that concentrated in a single team means either that team becomes a bottleneck that has to vet everyone else's AI work for responsibility, or everyone else gets a free pass to develop irresponsible AI, neither of which sounds great to me.

At some point AI becomes important enough to a company (and mature enough as a field) that there is a specific part of legal/compliance in big companies that deals with the concrete elements of AI ethics and compliance and maybe trains everyone else, but everyone doing AI has to do responsible AI. It can't be a team.

For me this is exactly like how big Megacorps have an "Innovation team"[1] and convince themselves that makes them an innovative company. No - if you're an innovative company then you foster innovation everywhere. If you have an "innovation team" that's where innovation goes to die.

[1] In my experience they make a "really cool" floor with couches and everyone thinks it's cool to draw on the glass walls of the conference rooms instead of whiteboards.



Assigning ethics and safety to the AI teams in question is a little like assigning user privacy to advertising analytics teams - responsible AI is in direct conflict with their natural goals and will _never_ get any serious consideration.

I heard about one specific ratchet effect directly from an AI researcher. The ethics/risk oriented people get in direct internal conflict with the charge-forward people because one wants to slow down and the other wants to speed up. The charge-ahead people almost always win because it’s easier to get measurable outcomes for organization goals when one is not worrying about ethical concerns. (As my charge-ahead AI acquaintance put it, AI safety people don’t get anything done.)

If you want something like ethics or responsibility or safety to be considered, it’s essential to split it out into its own team and give that team priorities aligned with that mission.

Internally I expect that Meta is very much reducing responsible AI to a lip service bullet point at the bottom of a slide full of organizational goals, and otherwise not doing anything about it.



There has been plenty of serious work done in user privacy separate from advertising analytics. For example in the direction of log and database anonymization (and how surprisingly mathematically difficult it has turned out to be.) You don't have to be cynical about ALL such efforts.
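
As a toy illustration of why that is mathematically hard (entirely made-up records, not any real dataset), naive "just drop the names" anonymization can often be undone by joining on quasi-identifiers:

    # A "de-identified" usage log: names removed, quasi-identifiers kept.
    released = [
        {"zip": "02139", "birth_year": 1984, "sex": "F", "query": "..."},
        {"zip": "02139", "birth_year": 1990, "sex": "M", "query": "..."},
    ]
    # A public dataset an attacker could join against (e.g. a voter roll).
    public = [
        {"name": "Alice", "zip": "02139", "birth_year": 1984, "sex": "F"},
        {"name": "Bob",   "zip": "94105", "birth_year": 1990, "sex": "M"},
    ]

    def reidentify(record, reference):
        keys = ("zip", "birth_year", "sex")
        matches = [p for p in reference if all(p[k] == record[k] for k in keys)]
        return matches[0]["name"] if len(matches) == 1 else None

    for r in released:
        print(reidentify(r, public), "->", r["query"])
    # The first record is uniquely re-identified as Alice; notions like
    # k-anonymity and differential privacy exist precisely to prevent this.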


That’s like saying a sports game doesn’t need a referee because players should follow the rules. At times you perhaps don’t follow them as closely because you’re too caught up, so it’s nice to have a party that oversees it.


The current analogy is sports teams selecting their own referees.


A good argument for independent regulation/oversight.


Independent is the tricky part. AI companies already are asking for government regulation but how independent would that regulation really be?


As independent as any other government oversight/regulation in the US, it'd either be directly run or heavily influenced by those being regulated.


Not an economist, but that does not sound bad in general. Best case, you have several companies that (a) have the knowledge to make sensible rulings and (b) have an interest in ensuring that none of their direct competitors gains any unfair advantage.


The problem case is when the companies all have a backroom meeting and go, "Hey, let's ask for regulation X that hurts us some... but hurts everyone else way more."


Economists actually shouldn't even be included in regulatory considerations in my opinion. If they are then regulators must be balancing the regulation that on its own seems necessary with the economic impact of proper regulation.

It hasn't worked for the airline industry, pharmaceutical companies, banks, or big tech, to name a few. I don't think it's wise for us to keep trying the same strategy.



> it'd either be directly run or heavily influenced by those being regulated.

Which is also the probable fate of an AGI super intelligence being regulated by humans.



If we actually create an AGI, it will view us much like we view other animals/insects/plants.

People often get wrapped up around an AGI's incentive structure and what intentions it will have, but IMO we have just as much chance of controlling it as wild rabbits have of controlling humans.

It will be a massive leap in intelligence, likely with concepts and ways of understanding reality that we either never considered or aren't capable of. Again, that's *if* we make an AGI, not these LLM machine learning algorithms being paraded around as AI.



You misunderstand AGI. AGI won't be controllable, it'd be like ants building a fence around a human thinking it'll keep him in.


One key question is if the teams are being effective referees or just people titled "referee".

If it's the latter, then getting rid of them does not seem like a loss.



Curling famously doesn’t have referees because players follow the rules. It wouldn’t work in all sports, but it’s a big part of curling culture.


So what happens if the teams disagree on whether a rule was broken? The entire point of a referee is that it's supposed to be an impartial authority.


The assistant captains (usually called vices) on each team are the arbiters. It’s in one’s best interest to keep the game moving and not get bogged down in frivolities; there’s a bit of a “tie goes to the runner” heuristic when deciding on violations.

In my years of curling, I’ve never seen a disagreement on rules left unsettled between the vices, but my understanding is that one would refer to vices on the neighboring sheets for their opinion, acting as a stand-in impartial authority. In Olympic level play I do believe there are referees to avoid this, but I really can’t overstate how unusual that is for any other curlers.



It’s also a zero stakes sport that nobody watches and involves barely any money so there is less incentive to cheat.


You usually don’t have a referee in sports. 99% of the time it’s practice or pickup games


Apple had* a privacy team that existed to ensure that various engineering teams across Apple do not collect data they do not need for their app. (And by data I mean of course data collected from users of the apps.)

It's not that engineers left to their own will do evil things but rather that to a lot of engineers (and of course management) there is no such thing as too much data.

So the privacy team comes in and asks, "Are we sure there is no user-identifiable data you are collecting?" They point out that usage pattern data should be associated with random identifiers and even these identifiers rotated every so-many months.

These are things that a privacy team can bring to an engineering team that perhaps otherwise didn't see a big deal with data collection to begin with.

I had a lot of respect for the privacy team and a lot of respect frankly for Apple for making it important.

* I retired two years ago so can't say there is still a privacy team at Apple.



Amazon had a similar team in the devices org.


Honestly this seems no different than a software security team. Yes, you want your developers to know how to write secure software, but the means of doing that is verifying the code with another team.


Isn't it the same as a legal team, another point you touch upon?

I don't think we've eliminated the need for a specialized team dealing with legality, so it feels hard to expect companies to solve it for ethics.



We haven't formalized ethics to the point of it being a multiplayer puzzle game for adults.


Isn't that what religion in general, and becoming a Doctor of Theology in particular, is?

https://en.wikipedia.org/wiki/Doctor_of_Theology



Quite possibly yes, and I personally grew up in a cult of Bible lawyers so I can imagine it, but here we are talking corporate ethics (an oxymoron) and AI alignment, which are independent of religion.


I mean, personally I see most religious ethics as oxymoronic too, at least in the sense of general ethics that would apply across heterogenous populations. Companies and religions typically have a set of ethics optimized for their best interests.


Legal is a massive bottleneck in many large enterprises.

Unfortunately there’s so much shared legal context between different parts of an enterprise that it’s difficult for each internal organisation to have its own separate legal resources.

In an ideal world there’d be a lawyer embedded in every product team so that decisions could get made without going to massive committees.



I suppose it depends on the relative demands of legal vs AI ethics


Well, I guess we have the answer when it comes to Meta.


the people I’ve seen doing responsible AI say they have a hell of a time getting anyone to care about responsibility, ethics, and bias.

of course the worst case is when this responsibility is both outsourced (“oh it’s the rAI team’s job to worry about it”) and disempowered (e.g. any rAI team without the ability to unilaterally put the brakes on product decisions)

unfortunately, the idea that AI people effectively self-govern without accountability is magical thinking



Maybe a better case is outsourced and empowered. What if there was a third-party company that was independent, under non-disclosure, and expert in ethics and regulatory compliance? They could be like accounting auditors, but they would look at code and features. They would maintain confidentiality, but their audit result would be public, like a seal of good AI citizenship.


The idea that any for-profit company can self-govern without external accountability is also magical thinking

A "Responsible AI Team" at a for-profit was always marketing (sleight of hand) to manipulate users.

Just see OpenAI today: safety vs profit, who wins?



> Just see OpenAI today: safety vs profit, who wins?

Safety pretty clearly won the board fight. OpenAI started the year with 9 board members and ended it with 4, with 4 of the 5 who left being interested in commercialization. Half of the current board members are also on the board of GovAI, dedicated to AI safety.

Don't forget that many people would consider "responsible AI" to mean "no AI until X-risk is zero", and that any non-safety research at all is irresponsible. Particularly if any of it is made public.



Rumor already has it that the "safety" board members are all resigning to bring Altman and the profit team back. When the dust settles, does profit ever lose to safety?


Self-government can be a useful function in large companies, because what the company/C-suite wants and what an individual product team want may differ.

F.ex. a product team incentivized to hit a KPI, so it releases a product that creates a legal liability.

Leadership may not have supported that trade-off, but they were busy with 10,000 other strategic decisions and not technical.

Who then pushes back on the product team? Legal. Or what will probably become the new legal for AI, a responsible AI team.



Customers. Customers are the external accountability.


Yea, this works great on slow burn problems. "Oh, we've been selling you cancerous particles for the last 5 years, and in another 5 years your ass is totally going to fall off. Oh by the way we are totally broke after shoving all of our money in foreign accounts"


Iff the customers have the requisite knowledge of what "responsible AI" should look like within a given domain. Sometimes you may have customers whose analytical skills are so basic there's no way they're thinking about bias, which would push the onus back onto the creator of the AI product to complete any ethical evaluations themselves (or try and train customers?)


Almost every disaster in corporate history that ended the lives of customers was not prevented by customer external accountability

https://arstechnica.com/health/2023/11/ai-with-90-error-rate...

Really glad to see that customer external accountability kept these old folks getting the care they needed instead of dying (please read with extremely strong sarcasm)



Fully agree. Central functions of these types do not scale. Even with more mundane objectives, like operational excellence, organizations have learned that centralization leads to ivory tower nothing-burgers. Most of the resources should go to where the actual work gets done, as little as possible should be managed centrally (perhaps a few ops and thought leadership fluff folks...).


And decentralized functions tend to be wildly inconsistent across teams, with info sec being a particular disaster where I've seen that tried. Neither model is perfect.


Sure, but we are talking about research teams here, not about an ops or compliance team. Central research tends to be detached from the business units but does not provide any of the 'consistency' benefits. Central research makes sense if the objectives are outward-facing, not if one wants to have an effect on what happens in the software-building units. So I'd say that ideally/hopefully, the people of the RAI team will now be much closer to Meta's engineering reality.


It works for things you can automate. For example, at Microsoft they have a dependency bot that flags when you have Newtonsoft.Json installed at a version covered by a known advisory (linked below). But I don't know if things are as straightforward with machine learning. If the recommendations are blanket and there is a way to automate checks, it could work. The main thing is there should be trust between teams; this can't be an adversarial power play.

https://github.com/advisories/GHSA-5crp-9r3c-p9vr
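
As a concrete illustration of what an automatable check can look like, here is a minimal sketch (my own, not Microsoft's actual bot; the package name and "patched" version are placeholders you would normally pull from a real advisory feed such as the one linked above):

    # Minimal sketch of an automated dependency-policy check (hypothetical data).
    from packaging.version import Version

    # package -> first version considered patched; in practice this comes
    # from an advisory feed, not a hard-coded dict.
    ADVISORY_FLOOR = {"Newtonsoft.Json": Version("13.0.1")}

    def audit(installed: dict[str, str]) -> list[str]:
        """Return findings for packages installed below their patched version."""
        findings = []
        for name, version in installed.items():
            floor = ADVISORY_FLOOR.get(name)
            if floor is not None and Version(version) < floor:
                findings.append(f"{name} {version} is below patched version {floor}")
        return findings

    print(audit({"Newtonsoft.Json": "12.0.3"}))
    # -> ['Newtonsoft.Json 12.0.3 is below patched version 13.0.1']

Whether an equivalent check exists for ML work (are the bias evals up to date, did the model pass them) is exactly the open question.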



> Every team doing AI work should be responsible and should think about the ethical (and legal at a bare minimum baseline) dimension of what they are doing.

Sure. It is not that a “Responsible AI team” absolves other teams from thinking about that aspect of their job. It is an enabling function. They set out a framework for how to think about the problem. (Write documents, do their own research, disseminate new findings internally.) They also interface with outside organisations (for example, when a politician or a regulatory agency asks a question, they already have the answers 99% ready and written. They just copy-paste the right bits from already existing documents together.) They also facilitate internal discussions. For example, who are you going to ask for an opinion if there is a dispute between two approaches and both are arguing that their solution is more ethical?

I don’t have direct experience with a “responsible AI team” but I do have experience with two similar teams we have at my job. One is a cyber security team, and the other is a safety team. I’m just a regular software engineer working on safety critical applications.

With my team we were working on an over-the-air auto update feature. This is very clearly a feature where the grue can eat our face if we are not very careful, so we designed it very conservatively and then shared the designs with the cyber security team. They looked over it, asked for a few improvements here and there, and now I think we have a more solid system than we would have had without them.

The safety team helped us settle a dispute between two teams. We have a class of users whose job is to supervise a dangerous process while their finger hovers over a shutdown button. The dispute was over what information we should display to this kind of user on a screen. One team was arguing that we need to display more information so the supervisor knows what is going on; the other team was arguing that the role of the supervisor is to look at the physical process with their eyes, and if we display more info that is going to make them distracted and more likely to concentrate on the screen instead of the real-world happenings. In effect both teams argued that what the other one was asking for is not safe. So we got the safety team involved, worked through the implications with their help, and came to a better-reasoned approach.



I agree that it's strange, and I think it's sort of a quirk of how AI developed. I think some of the early, loud proponents of AI - especially in Silicon Valley circles - had sort of a weird (IMO) fascination with "existential risk" type questions. What if the AI "escapes" and takes over the world?

I personally don't find that a compelling concern. I grew up devoutly Christian and it has flavors of a "Pascal's Wager" to me.

But anyway, it was enough of a concern to those developing these latest AI's (e.g. it's core to Ilya's DNA at OpenAI), and - if true! - a significant enough risk that it warranted as much mindshare as it got. If AI is truly on the level of biohazards or nuclear weapons, then it makes sense to have a "safety" pillar as equal measure to its technical development.

However, as AI became more commercial and widespread and got away from these early founders, I think the "existential risk" became less of a concern, as more people chalked it up to silly sci-fi thinking. They, instead, became concerned with brand image, and the chatbot being polite and respectful and such.

So I think the "safety" pillar got sort of co-opted by the more mundane - but realistic - concerns. And due to the foundational quirks, safety is in the bones of how we talk about AI. So, currently we're in a state where teams get to enjoy the gravity of "existential risk" but actually work on "politeness and respect". I don't think it will shake out that way much longer.

For my money, Carmack has got the right idea. He wrote off immediately the existential risk concern (based on some napkin math about how much computation would be required, and latencies across datacenters vs GPUs and such), and is plowing ahead on the technical development without the headwinds of a "safety" or even "respect" thought. Sort of a Los Alamos approach - focus on developing the tech, and let the government or someone else (importantly: external!) figure out the policy side of things.



> At some point AI becomes important enough to a company (and mature enough as a field) that there is a specific part of legal/compliance in big companies that deals with the concrete elements of AI ethics and compliance and maybe trains everyone else, but everyone doing AI has to do responsible AI. It can't be a team.

I think both are needed. I agree that there needs to be a "Responsible AI" mindset in every team (or every individual, ideally), but there also needs to be a central team to set standards and keep an independent eye on other teams.

The same happens e.g. in Infosec, Corruption Prevention, etc: Everyone should be aware of best practices, but there also needs to be a central team in organizations of a certain size.



Do companies need an info sec team?


They do, but I would argue that app sec is the responsibility of the development teams. Infosec can and should have a role in helping devs to follow good app sec practices, but having a separate app sec team that doesn't have anything to do with app development seems unlikely to be the best model.


Yeah the developers and business people in trading firms should just do the risk assessment themselves, why have a risk department?


> At some point AI becomes [...] legal/compliance

AI safety and ethics is not "done". Just like these large companies have large teams working on algorithmic R&D, there is still work to be done on what AI safety and ethics means, what it looks like, and how it can be attached to other systems. It's not, and shouldn't be, about bullshit PR pronouncements.



"Everyone should think about it" usually means no one will.


It depends. If you embed a requirement into the culture and make it clear that people are absolutely required to think about it, at least some people will do so. And because the requirement was so clear up-front, those people have some level of immunity from pushback and even social pressure.


Perhaps, but other things that should be followed (such as compliance) are handled by other teams, even though every team should strive to be compliant. Maybe the difference is that one has actual legal ramifications, while the other doesn't yet? I suppose Meta could get sued, but that is true about everything.


Is it really that far fetched? It sounds like a self-imposed regulatory group, which some companies/industries operate proactively to avoid the ire of government agencies.

Yeah, product teams can/should care about being responsible, but there’s an obvious conflict of interest.

To me, this story means Facebook dgaf about being responsible (big surprise).



Would you just destroy the legal department in every company too since each person should be operating within the law anyway?


> Every team doing AI work should be responsible and should think about the ethical

So that's why everyone is so reluctant to work on deep-fake software? No, they did it knowing what problems it could cause, and yet published everything, and now we have fake revenge porn. And we cannot even trust TV broadcasts anymore.

So perhaps we do need some other people involved. Not employed by Meta, of course, because their only interest is their stock value.



This. It's just another infiltration akin to DEI into corporations.

Should all be completely disbanded.



Internal incentive structures need to be aligned with the risk incurred by the business and in some cases society.

I’m sure the rationalization is an appeal to the immature “move fast and break things” dogma.

My day job is about delivery of technology services to a distributed enterprise. 9 figure budget, a couple of thousand employees, countless contractors. If “everyone” is responsible, nobody is responsible.

My business doesn’t have the potential to impact elections or enable genocide like Facebook. But if an AI partner or service leaks sensitive data from the magic box, procurements could be compromised, inferences about events that are not public could be drawn, and in some cases human safety could be at elevated risk.

I’m working on an AI initiative now that will save me a lot of money. Time to market is important to my compensation. But the impact of a big failure, at the most selfish level, is the implosion of my career. So the task order isn’t signed until the due diligence is done.



In other news, police are not needed because everyone should just behave.


This is more analogous to a company having an internal "not doing crime" division. I do mention in my original post that having specialist skills within legal or compliance to handle the specific legal and ethical issues may make sense. But having one team be the "AI police" while everyone else just tries to build AI without having responsibility baked into their processes is likely to set up a constant tension, like companies often have with a "data privacy" team who fight a constant battle to get people to build privacy practices into their systems and workflows.


But there are no responsible X teams for many X. But AI gets one.

(Here X is a variable not Twitter)



There are plenty of ethics teams in many industries, I don’t think this is a great point to make.


Police are needed for society when there's no other way to enforce rules. But inside a company, you can just fire people when they misbehave. That's why you don't need police inside your company. You only need police at the base-layer of society, where autonomous citizens interact with no other recourse between them.


People do what they are incentivized to do.

Engineers are incentivized to increase profits for the company because impact is how they get promoted. They will often pursue this to the detriment of other people (see: prioritizing anger in algorithmic feeds).

Doing Bad Things with AI is an unbounded liability problem for a company, and it's not the sort of problem that Karen from HR can reason about. It is in the best interest of the company to have people who can 1) reason about the effects of AI and 2) are empowered to make changes that limit the company's liability.



The problem is that a company would only fire the cavalier AI researchers after the damage is done. Having an independent ethics department means that the model wouldn't make its way to production without at least being vetted by someone else. It's not perfect, but it's a ton better than self-policing.


The "you" that fires people that misbehave is what, HR?

It takes quite some knowledge and insight to tell whether someone in the AI team, or, better yet, the entire AI team, is up to no good.

It only makes sense for the bosses to delegate overseeing research as sensitive as that to someone with a clue. Too much sense for Facebook.



> It never made any organizational sense for me to have a "responsible AI team" in the first place. Every team doing AI work should be responsible and should think about the ethical (and legal at a bare minimum baseline) dimension of what they are doing.

That makes as much sense as claiming that infosec teams never make organizational sense because every development team should be responsible and should think about the security dimensions of what they are doing.

And guess why infosec teams are absolutely required in any moderately large org?



Step 1: Pick a thing any tech company needs: design, security, ethics, code quality, etc.

Step 2: Create a “team” responsible for implementing the thing in a vacuum from other developers.

Step 3: Observe the “team” become the nag: ethics nag, security nag, code quality nag.

Step 4: Conclude that developers need to be broadly empowered and expected to create holistic quality by growing as individuals and as members of organizations, because nag teams are a road to nowhere.



> Every team doing AI work should be responsible and should think about the ethical (and legal at a bare minimum baseline) dimension of what they are doing.

Aren't we all responsible for being ethical? There seems to be a rise in the opinion that ethics do not matter and all that matters is the law. If it's legal then it must be ethical!

Perhaps having an ethical AI team helps the other teams ignore ethics. We have a team for that!



An “innovation” team is often useful…usually it’s called research or labs or skunkworks or incubator. It’s still terrifically difficult for a large company to disrupt itself — and the analogy may hold for “responsibility”. But there is a coherent theory here.

In this case, there are “responsibility”-scoped technologies that can be built and applied across products: measuring distributional bias, debiasing, differential privacy, societal harms, red-teaming processes, among many others. These things can be tricky to spin up and centralising them can be viable (at least in theory).



Also, if you are on this team, you get promoted based on slowing down other work. Introduce a new review process, impact!


AI safety is just a rent seeking cult/circus, leeching on the work done by others. Good on Meta for cleaning shop.


AI safety (theatre). Doubles as a makeshift moat.

To me, the greatest apocalypse scenario is not some AGI global extinction event but a corporation with an extensive data hoard, replete with ample ML and GPU power, being able to monopolize a useful service that cannot be matched by the public... that is the true (and, IMO, likely) AI nightmare we're heading towards.



Exactly this. We don't need protection from AI, we need protection from corporations. And they sure as hell aren't going to police themselves via internal "safety teams", especially not if it starts to affect their bottom line.


To me the biggest threat is using AI to make decisions with no human recourse. Do you want ChatGPT to be the sole judge deciding if you can get a mortgage or what interest rate it should be? Do you want ChatGPT to auto-delete your YouTube video with zero recourse or appeal process?


Can we talk about the intelligence community now? On my papers it says they should be ready to pull the plug on Azure if GPT goes off script, but are they even aware that they are expected to act?


Early stage “technology ethics teams” are about optics and not reality.

In the early stages of a new technology the core ethics lies in the hands of very small teams or often individuals.

If those handling the core direction decide to unleash irresponsibly, it’s done. Significant harm can be done by one person dealing with weapons of mass destruction, chemical weapons, digital intelligence, etc.

It’s not wrong to have these teams, but the truth is that anyone working with the technology needs to be treated like they are on an ethics team, rather than building an “ethical group” that’s supposed to proxy the responsibility for doing it the “right way.”

Self-directed or self-aware AI also complicate this situation immeasurably, as having an ethics team presents a perfect target for a rogue AI or bad actor. You’re creating a “trusted group” with special authority for something/someone to corrupt. Not wise to create privileged attack surfaces when working with digital intelligences.



Completely absent is a single example of what this team positively contributed. Perhaps we should look at the track record of the past few years and see how effective Meta has been in upholding the truth; it doesn't look pretty.


When money's out and there's a fire going on (at OpenAI), it's the best moment to close departments that were solely for virtue signaling :/


I’m waiting for the collapse of “DEI” next…


If we ever had "responsible software" teams and they would actually have any power, companies like Meta, Google and Microsoft wouldn't even exist.

So yeah... the whole idea of "responsible AI" is just wishful thinking at best and deceptive hypocrisy at worst.



Considering how costly it is to train models, I'm sure control freaks and rent seekers are salivating to dig their teeth into this, but as technologies progress and opposing parts of the world get hold of this, all the responsible-and-regulated feel-good corpo crap will backfire.


I honestly believe the best way to make AI responsibly is to make it open source. That way no single entity has total control over it, and researchers can study the models to better understand how they can be used nefariously as well as in a good way; doing that allows us to build defenses to minimize the risks and reap the benefits. Meta is already doing that, but other companies and organizations should do that as well.


I'm not a doomer, but I honestly don't understand this argument. If releasing a model as open source helps researchers determine whether it's safe, what about when it's not deemed safe? Then it's already out there, on the hard drives of half of 4chan. It's much easier and cheaper to fine-tune a model, distil and quantize it, and put it on a killer drone than it is to train one from scratch.

On the other hand, I totally relate to the idea that it could be preferable for everyone to have access to advanced AI and not just large companies and nation states.



what purpose does an LLM serve on a killer drone, exactly?


Open source models in general. Meta has for instance released DINO which is a self supervised transformer model. LLMs are also going multi modal (see LLaVA for instance). The name "LLM" has stuck but they should really be called Large Transformer Models. LeCun is working on self supervised visual world models (I-JEPA) which if successful and released could form the basis for killer drones. It's still a lot of engineering work to fine tune and put a model like this on embedded hardware on a drone, but at some point it might be easy enough for small groups of determined people to pull it off.


For a drone, an LLM derived solution is far too slow, unreliable, heavy and not fit for purpose. Developments in areas like optical flow, better small CNNs for vision, adaptive control and sensor fusion are what's needed. When neural networks are used, they are small, fast, specialized and cheap to train.

A multimodal or segmentation algorithm is not the solution for bee-level path planning, obstacle avoidance or autonomous navigation. Getting LLMs to power a robot for household tasks with low latency to action and in an energy efficient manner is challenging enough, before talking about high-speed, highly maneuverable drones.



Tesla is running these models on 4-year-old hardware to control a car in real time (30 fps). You don't need a full 100B model to control a drone, and it doesn't have to be as good as a car to cause a lot of damage. Reportedly both Ukraine and Russia are putting together on the order of a thousand drones a day at this point, and Tesla already includes the compute to run this in every car they make. Hardware is also moving fast; how come people forget about Moore's law and software improvements? To me there's no question that this tech will be in tens of thousands of drones within a few years.


a multimodal llm is a general purpose device to churn sensor inputs into a sequence of close to optimal decisions. the 'language' part is there to reduce the friction of the interface with humans, it's not an inherent limitation of the llm. not too farfetched to imagine a scenario where you point to a guy in a crowd and tell a drone to go get him, and the drone figures out a close to optimal sequence of decisions to make it so.


I think GPT-4V could probably make high level decisions about what actions to take.

Not really practical at the moment of course since you can't put 8 A100s on a drone.



there are rumors that the latest gen drones in ukraine use crude embedded vision ai to increase terminal accuracy. launch and iterate, this will only get more lethal.


GNU/Linux is open source. Is it being used responsibly?

What is the "it" that no single entity has control over?

You have absolutely no control of what your next door neighbor is doing with open source.

Hey, if we want alcohol to be made responsibly, everyone should have their own still, made from freely redistributed blueprints. That way no single entity has control.



> Hey, if we want alcohol to be made responsibly, everyone should have their own still, made from freely redistributed blueprints.

Anyone who wants to can, in fact, find blueprints for making their own still. For example, https://moonshinestillplans.com/ contains plans for a variety of different types of stills and guidance on which type to build based on how you want to use it.

And in fact I think it's good that this site exists, because it's very easy to build a still that appears to work but actually leaves you with a high-methanol end product.



> it's very easy to build a still that appears to work but actually leaves you with a high-methanol end product.

Is it? I've always seen concern about methanol in moonshine but I presume it came from intentional contamination from evil bootleggers. It's difficult to get a wash containing enough methanol to meaningfully concentrate in the first place if you're making whiskey or rum. Maybe with fruit wine and hard cider there's a bit more.

The physics of distillation kind of have your back here too. The lower temperature fractions with acetone and methanol always come out first during distillation (the "heads") and every resource and distiller will tell you to learn the taste and smell, then throw them out. The taste and smell of heads are really distinctive. A slow distillation to more effectively concentrate methanol also makes it easier to separate out. But even if you don't separate the heads from the hearts, the methanol in any traditional wash is dilute enough that it'll only give you a headache.

I think it's extremely hard to build a still that appears to work but creates a high methanol end product.



This sounds like something I don't want to test the hard way.


There’s no reason bootleggers would attempt to deliberately kill customers; at most you can argue about potential carelessness. In contrast, there was indeed one party deliberately introducing methanol into the booze supply.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2972336/#:~:tex....



I think the question mark is if AI is more akin to the nuclear bomb of the Internet.

If you don't put barriers, how quickly will AI bots take over people in online discourse, interaction and publication?

This isn't just for the sake of keeping the Internet an interesting place free of bots and fraud and all that.

But I've also heard that it's about improving AI itself. If AI starts to pollute the dataset we train AI on, the entire Internet, you get this weird feedback loop where the models could almost get worse over time, as they will start to unknowingly train on things their older versions produced.



Alcohol is probably the most open-source food product of all time.


> GNU/Linux is open source. Is it being used responsibly?

Great example! Yes, linux being open source has been massively beneficial to society. And this is true despite the fact that some bad guys use computers as well.



Getting the results is nice but that's "shareware" not "free software" (or, for a more modern example, that is like companies submitting firmware binary blobs into mainline Linux).

Free software means you have to be able to build the final binary from source. Having 10 TB of text is no problem, but having a data center of GPUs is. Until the training cost comes down there is no way to make it free software.



If I publish a massive quantity of source code — to the point that it’s very expensive to compile — it’s still open source.

If the training data and model training code is available then it should be considered open, even if it’s hard to train.



If it was only feasible for a giant corporation to compile the code, I would consider it less than open source.


> the training data

This will never be fully open



Maybe not for some closed models. That doesn’t mean truly open models can’t exist.


I doubt you’d say that if one run of compiling the code would cost you $400M.


Free software means that you have the ability - both legal and practical - to customize the tool for your needs. For software, that means you have to be able to build the final binary from source (so you can adapt the source and rebuild), for ML models that means you need the code and the model weights, which does allow you to fine-tune that model and adapt it to different purposes even without spending the compute cost for a full re-train.
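
To make that concrete, here is a rough sketch of adapting released weights without a full re-train. It assumes the Hugging Face transformers and peft libraries; the checkpoint name is just an example of published (gated) open weights, and a real run still needs your own dataset and training loop:

    # Sketch: customize published open weights with small LoRA adapters
    # instead of re-training the whole model from scratch.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "meta-llama/Llama-2-7b-hf"   # example open-weights checkpoint (gated access)
    tokenizer = AutoTokenizer.from_pretrained(base)  # used later to prepare your data
    model = AutoModelForCausalLM.from_pretrained(base)

    # Freeze the base weights; train only small low-rank adapter matrices.
    lora = LoraConfig(r=8, lora_alpha=16,
                      target_modules=["q_proj", "v_proj"],
                      task_type="CAUSAL_LM")
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # typically well under 1% of the base model

    # From here, an ordinary fine-tuning loop on your own data yields the adapted model.

That adapter-on-top pattern is why access to the weights, not just the training data, is the practically important freedom for most users.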


Exactly. The biggest question is why you would trust the single authority controlling the AI to be responsible. If there are enough random variables the good and the bad sort of cancel each other out to reach a happy neutral. But if an authority goes rogue what are you gonna do?

Making it open is the only way AI fulfills a power to the people goal. Without open source and locally trainable models AI is just more power to the big-tech industry's authorities.





That's not necessarily true.

It's entirely conceivable that even if AGI (or something comparably significant in terms of how impactful it would be to changing society or nation states) was achievable in our lifetime, it might be that:

1) Achieving it requires a critical mass of research talent in one place that perhaps currently exists at fewer than 5 companies - anecdotally only Google, Meta, and OpenAI. And a comparable number of world governments (At least in the US the best researchers in this field are at these companies, not in academia or government. China may be different.)

This makes it sound like a "security by obscurity" situation, and on a long enough timeline it may be. Without World War 2, without the Manhattan Project, and without the looming Cold War, how long would it have taken for humanity to construct a nuclear bomb? An extra 10 years? 20? 50? Hard to know. Regardless, there is a possibility that for things like AI, with extra time comes the ability to better understand and build those defenses before they're needed.

2) It might also require an amount of computing capacity that only a dozen companies/governments have.

If you open source all the work, you remove the guard rails on its growth and on where people focus their investments. It also means that hostile nations like Iran or North Korea, which may not have the research talent but could acquire the raw compute, could utilize it for unknown goals.

Not to mention what nefarious parties on the internet would use it for. We only know about deep-fake porn and generated vocal audio of family members for extortion. Things can get much, much worse.



> there is a possibility that for things like AI, with extra time comes the ability to better understand and build those defenses before they're needed.

Or not, and damaging wrongheaded ideas will become a self-reinforcing (because safety! humanity is at stake!) orthodoxy, leaving us completely butt-naked before actual risks once somebody makes a sudden clandestine breakthrough.

https://bounded-regret.ghost.io/ai-pause-will-likely-backfir...

> We don’t need to speculate about what would happen to AI alignment research during a pause—we can look at the historical record. Before the launch of GPT-3 in 2020, the alignment community had nothing even remotely like a general intelligence to empirically study, and spent its time doing theoretical research, engaging in philosophical arguments on LessWrong, and occasionally performing toy experiments in reinforcement learning.

> The Machine Intelligence Research Institute (MIRI), which was at the forefront of theoretical AI safety research during this period, has since admitted that its efforts have utterly failed. Other agendas, such as “assistance games”, are still being actively pursued but have not been significantly integrated into modern deep learning systems— see Rohin Shah’s review here, as well as Alex Turner’s comments here. Finally, Nick Bostrom’s argument in Superintelligence, that value specification is the fundamental challenge to safety, seems dubious in light of LLM's ability to perform commonsense reasoning.[2]

> At best, these theory-first efforts did very little to improve our understanding of how to align powerful AI. And they may have been net negative, insofar as they propagated a variety of actively misleading ways of thinking both among alignment researchers and the broader public. Some examples include the now-debunked analogy from evolution, the false distinction between “inner” and “outer” alignment, and the idea that AIs will be rigid utility maximizing consequentialists (here, here, and here).

> During an AI pause, I expect alignment research would enter another “winter” in which progress stalls, and plausible-sounding-but-false speculations become entrenched as orthodoxy without empirical evidence to falsify them. While some good work would of course get done, it’s not clear that the field would be better off as a whole. And even if a pause would be net positive for alignment research, it would likely be net negative for humanity’s future all things considered, due to the pause’s various unintended consequences. We’ll look at that in detail in the final section of the essay.



If it really is A"I", shouldn't it figure out for itself and do it?


Is it just the model that needs to be open source?

I thought the big secret sauce is the sources of data that is used to train the models. Without this, the model itself is useless quite literally.



No, the model is useful without the dataset, but it's not functionally "open source", because while you can tune it if you have the training code, you can't replicate it or, more importantly, train it from scratch with a modified, but not completely new, dataset. (And, also, understanding the existing training data helps you understand how to structure data to train that particular model, whether it's with a new or modified dataset from scratch, or for fine-tuning.)

At least, that's my understanding.



For various industry-specific or specialized task models (e.g. recognizing dangerous events in self-driving car scenario) having appropriate data is often the big secret sauce, however, for the specific case of LLMs there are reasonable sets of sufficiently large data available to the public, and even the specific RLHF adaptations aren't a limiting secret sauce because there are techniques to extract them from the available commercial models.


Great, Russia and China get the ability to use it or adapt it for any reason they want without any oversight.


One could argue that open source won’t change much with regard to China and Russia.

Both countries have access to LLMs already. And if they didn’t, they would have built their own or gotten access through corporate espionage.

What open source does is it helps us better understand & control the tech these countries use. And it helps level up our own homegrown tech. Both of these are good advantages to have.



That last paragraph is an opinion you seem to have just formed as you typed it stated as a fact that doesn’t seem to hold up to even the lightest scrutiny.


Afaik China is already pretty developed in this area; they already have a bunch of open-source LLMs that beat ours or at least are at the same level. We can also argue that it'll have the same effect as banning chips, but again, China succeeded in building dense (small-nm) chips even with sanctions, just a bit slower. AI systems are the consequence of a Pandora's box that we opened a long time ago, around the time humans got the curiosity to improve things. At this point you can't stop the progress; the world is multipolar, and there'll always be players willing to go the extra mile, so the only solution is getting to the top faster, or at least as fast as others.


There is no obvious reason they couldn't just train one themselves, or merely steal existing weights given enough time.


That is precious time that can be used to work on alignment.


But alignment is always going to rely on cooperation of users though? What benefit does the delay offer other than the direct one of a delay?


Why is it going to rely on co-operation if users don't have the means to change the model enough to misalign it?


If we're talking about open-source LLMs, among the best embedding, multimodal, pure and coding LLMs are Chinese (attested and not just benchmarks).


What are you talking about? Use what? It's all in the open already anyway. And someone like China even has more data to build from.


They will get access to the good stuff anyway. The only question is whether you get access to it.


The responsibility of AI should lie in the hands of users, but right now no company is even close to giving AI users the power to shape their product in responsible ways. The legal system already covers these externalities, and all attempts at covering their asses have resulted in stupider and less useful systems.

They are literally leaking more and more users to the open source models because of it. So, in retrospect, maybe it would be better if they didn't disband it.



Alignment in a nutshell. Can programmers imbue values in AI so that the machine recognizes prompts that make it an accomplice to a crime? No, I agree with you that it’ll take supervision until we reach a watershed moment.

Right now, those values are simply what content is bad for business.



How convenient! Turns out they don’t give the slightest damn about “Responsible AI” in the first place. It’s nice to roll out news like this while everyone else is distracted.


Meta is probably the most ethical company in AI at the moment. Most importantly, their models are open source.


Maybe this news should challenge your priors, then?


That's assuming this division actually did something beneficial to begin with, and if they did, that they are the only ones responsible for "responsible AI" development at Meta. It is in all likelihood just a re-org being blown out of proportion.


You contradict yourself


You think open sourcing your models isn't ethical?


Not necessarily, no.

Open source models are already being used for all kinds of nefarious purposes. Any safety controls on a model are easily stripped off once its weights are public.

Usually I love open source software. Most of my career has been spent writing open source code. But this is powerful and dangerous technology. I don’t believe that nuclear weapons should be open source and available to all either.



You have a technically valid viewpoint, it's just utterly impractical if you carry it to its logical conclusion.

If something that can be used for good can also be used for nefarious purposes, you claim that some entity should exert a modicum of control over that thing to prevent it from being used for nefarious purposes.

Now think about all the things in peoples day to day life that can be used for good, but also can be used for nefarious purposes, and see if you would be ok with your argument being applied for those.



Personally, as someone sceptical of the likes of Google, Facebook and Microsoft (and the ethics demonstrated by multinational companies generally) I find the idea of all AI being controlled by a small cadre of Californian billionaires an extremely big ethical risk.


are they an ethical company, though?


Is one division responsible for the crimes of another division, especially in a large corporation?


Probably because it's a job anyone can do.


The only reason BigCo doesn't disband their legal team is because of laws.


Responsible AI should be team-oriented in the first place; each project has very different security objectives.


i have no problem with this

anyone who has a problem with this should have quantitatively MORE of a problem with the WHO removing "do no harm" from their guidelines. i would accept nothing less.



Google removed "Don't be evil", so we know they do evil. Facebook disbanded its responsible AI team, so we know they do AI irresponsibly. I love greedy evil corporations telling on themselves.


i don’t!! these are blinking red lights with no easy fix!


On the contrary, the fix is trivial: Don't give them any money.


That doesn't seem to be working

