It also aligns perfectly with Yann’s goals as an (academic) researcher, whose career is built on academic-community kudos far more than on, say, building a successful business.
Yeah, the way I see it, Meta is undermining OpenAI's business model because it can. I have serious doubts Meta would be doing what it does with OpenAI out of the picture.
> When Sam Altman is calling for AI regulation

Sure; that's almost certainly not being done in good faith. When numerous AI experts and luminaries who left their jobs in AI are advocating for AI regulation, that's much more likely to be done in good faith.

> What is this regulation aimed at making AI safer

> Furthermore, there doesn't seem to be any agreement on whether or how AI, at a state remotely similar to the level it is at today, is dangerous

You could also write that as "there's no agreement that AI is safe". But that aside: most of the arguments about AI safety are not about current AI technology. (There are some reasonable arguments about the use of AI for impersonation, such as that AI-generated content should be labeled as such, but those are much less critical and they aren't the critical part of AI safety.) The most critical arguments are about near-future expansions of AI capabilities. https://arxiv.org/abs/2309.01933

> how to mitigate that danger

Pause capabilities research until we have proven strategies for aligning AI with human safety.

> How can you even attempt to regulate in good faith without that?

We don't know that it's safe, many people are arguing that it's dangerous on an unprecedented scale, there are no good refutations of those arguments, and we don't know how to ensure its safety. That's not a situation in which "we shouldn't do anything about it" is a good idea. How can you even attempt to regulate biological weapons research without having a good way to mitigate it? By stopping biological weapons research.
It's two different things to have a product that sustains research and expenses vs. having a product that makes the company grow exponentially and makes investors richer.
> If Meta starts being all open and generous about their core assets

I think they are pretty open about it?

- https://www.meta.ai/ requires no log in (for now at least)
- PyTorch is open source
- Various open models like Llama, Detectron, ELF, etc.
- Various public datasets like FACET, MMCSG, etc.
- A lot of research papers describing their findings
I don't think it's really being "generous" with their competitors' business value. Meta has a track record of open sourcing the "infrastructure" bits behind their core products. They have released many, many things like React/React Native, GraphQL, Cassandra (database), the Open Compute Project (server/router designs), HHVM, and dozens of other projects long before their recent AI push. Have a look here; I spent five minutes scrolling and got 1/4 of the way through! https://opensource.fb.com/projects/

With Llama, they now have an army of people hacking on the Llama architecture, so even if they don't explicitly use any of the Llama-adjacent projects, there are tons and tons of optimizations and other techniques being discovered. Just making up numbers, but if they spend x billions on inference per year and the open source community comes up with a way to make inference even just a few percent more efficient, the cost of their open source efforts might be a drop in comparison.

For example, Zuck was on the Dwarkesh podcast recently and mentioned that open sourcing OCP (server/rack designs) has saved them billions because the industry standardized on their designs, driving down prices for them.
Open Compute, PyTorch, React, zstd, OpenBMC, buck, pyre, proxygen, thrift, watchman, RocksDB, folly, HHVM,...
Facebook had the most open and most widely used social platform for third-party apps, and it was so successful that it was blamed for an election, and they had to cut back usage sharply.
In the search market, everyone loves the paid search engine (Kagi) and hates the ad supported one. It would seem that for LLMs, it’s the opposite :)
I agree, except that in the case of OpenAI it should be an end, and yet they fail at it spectacularly; since Altman/Microsoft completed their takeover, there is basically no hope of it ever coming back.
The horse is condemned by its own mouth: https://openai.com/blog/openai-elon-musk

> As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science (even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes).
The only more generous interpretation I can see is that OpenAI actually did intend to be open, but only between December 11th 2015 and January 2nd 2016, at which point they changed their mind.
The paragraph immediately preceding the quotation is:

"The article is concerned with a hard takeoff scenario: if a hard takeoff occurs, and a safe AI is harder to build than an unsafe one, then by opensorucing everything, we make it easy for someone unscrupulous with access to overwhelming amount of hardware to build an unsafe AI, which will experience a hard takeoff."

The article in question appears to be: http://slatestarcodex.com/2015/12/17/should-ai-be-open/

Which opens with:

"""H.G. Wells’ 1914 sci-fi book The World Set Free did a pretty good job predicting nuclear weapons:"""

and, I hope I'm summarising usefully rather than cherry-picking because it's quite long, also says:

"""Once again: The decision to make AI findings open source is a tradeoff between risks and benefits. The risk is that in a world with hard takeoffs and difficult control problems, you get superhuman AIs that hurl everybody off cliffs. The benefit is that in a world with slow takeoffs and no control problems, nobody will be able to use their sole possession of the only existing AI to garner too much power. But the benefits just aren’t clear enough to justify that level of risk. I’m still not even sure exactly how the OpenAI founders visualize the future they’re trying to prevent. Are AIs fast and dangerous? Are they slow and easily-controlled? Does just one company have them? Several companies? All rich people? Are they a moderate advantage? A huge advantage? None of those possibilities seem dire enough to justify OpenAI’s tradeoff against safety."""

and

"""Elon Musk famously said that AIs are “potentially more dangerous than nukes”. He’s right – so AI probably shouldn’t be open source any more than nukes should."""

This is what OpenAI and Musk were discussing in the context of responding to "I've seen you […] doing a lot of interviews recently extolling the virtues of open sourcing AI, but I presume you realise that this is not some sort of panacea that will somehow magically solve the safety problem?"
"Luckily" the USA is capitalist enough that, if the top S&P500 companies are all releasing open (weight) models to the public, regulation isn't going to happen any time soon.
Works fine for me on PIA and Tor. Not logged in. Maybe they just blocked your server because someone used it for scraping?
> We’re rolling out Meta AI in English in more than a dozen countries outside of the US. Now, people will have access to Meta AI in Australia, Canada, Ghana, Jamaica, Malawi, New Zealand, Nigeria, Pakistan, Singapore, South Africa, Uganda, Zambia and Zimbabwe — and we’re just getting started.

https://about.fb.com/news/2024/04/meta-ai-assistant-built-wi...
That's why I said I don't think anyone else would have started training and releasing free models UNLESS ChatGPT had existed and had success, and only OpenAI managed to do that.
Because it's easy to forget that fact, and pretend that Meta is "better" than OpenAI when in reality they are late to the game and are trying to catch up to commoditize their product's complement.
Only if it can't do voice cloning. Normally I find the AI ethics people insufferable but I do think that good voice cloning tech will do much more harm than good.
> I’m surprised no one is outraged that Israel’s Military AI, named lavender is in active use identifying, targeting and murdering Palestinians In Real time.

'Lavender': The AI machine directing Israel's bombing in Gaza - https://news.ycombinator.com/item?id=39918245 - 20 days ago (1418 points, 1433 comments)

It's not that no one is outraged, it's that we like to keep the outrage limited to the threads that are about the outrage. I'd rather not see HN devolve into a place where every discussion inevitably pivots to the horrific world event du jour.
I must run in very different circles because in my social sphere even my Jewish friends are uncomfortable saying anything supporting Israel. Where is this rabidly pro-Israel mob?
This comment is just desperately grasping for straws at this point. Why is the simpler explanation (that they have always had a culture of being open about AI tools and research) so hard to grasp?
You can’t be that naive. “Done is better than perfect”. Please google “commoditize your complements”. Business is about winning in the arena of capitalism, not abstract metrics like code quality.
It probably also stems from his experience working at Bell Labs, and how his pioneering work very much required a lot of help from things available openly, as is still the case in academia.
The man has kept repeating that open source is the better option, and has been very vocal in his opposition to "regulate AI" (read: let a handful of US-government-aligned closed-source companies have a complete monopoly on AI under the guise of ethics). On this point he (and Meta) stands in pretty stark contrast to a lot of mainstream big-tech AI voices.
I myself recently moved some of my research tech stack to Meta's tooling, and unlike platforms that basically stop being supported as soon as the main contributor finishes their PhD, it has been great to work with (and they're usually open to feedback and fixes/PRs).