Practically speaking, is it possible for NVIDIA to "pull the rug" later, intentionally or otherwise, by subtly changing the behaviour of this out-of-doc instruction on new architectures?

Zuckerberg should stop claiming Meta is open-sourcing AI (they are even running TV ads) when they are only releasing the weights, not the code. Only DeepSeek is real OSS AI.

DeepSeek is definitely not real OSS. To be open source, you need to use a real open source license (like the ones OSI lists), and you need to share all pre- and post-training code, any code related to tuning, any evaluation code, everything related to safety/censorship/etc., and probably the full training data as well. Otherwise you can't reproduce their weights. Sharing weights is like sharing a compiled program.

As far as I know, the only truly open source model that is competitive is OLMo 2 from AI2: https://allenai.org/blog/olmo2 They even released an app recently, also open source, that does on-device inference: https://allenai.org/blog/olmoe-app They also have another model, Tülu 3, which outperforms DeepSeek V3.

Not really open source. For a truly open source model, check out OLMo 2 from AI2: https://allenai.org/blog/olmo2

They literally share everything you need to recreate their model, including the data itself. This is what they say at the link above:

> Because fully open science requires more than just open weights, we are excited to share a new round of OLMo updates – including weights, data, code, recipes, intermediate checkpoints, and instruction-tuned models – with the broader language modeling community!

> For extreme performance, we discover and use an out-of-doc PTX instruction: ld.global.nc.L1::no_allocate.L2::256B. This instruction will lead to an undefined behavior: accessing volatile GPU memory with non-coherent read-only PTX modifiers .nc. But the correctness is tested to be guaranteed with .L1::no_allocate on Hopper architectures, and performance will be much better.
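To make that concrete, here is a minimal sketch of how a load like this can be wrapped from CUDA C++ with inline PTX. The helper name, the 32-bit variant, and the comments are my own illustration, not DeepSeek's actual code; only the instruction string itself comes from their note.

```cuda
#include <cstdint>

// Illustrative only (not DeepSeek's actual helper): wrap the out-of-doc
// combination of .nc (non-coherent, read-only path) with .L1::no_allocate
// in an inline-PTX load. The documented .L2::256B qualifier hints a
// 256-byte prefetch into L2; .L1::no_allocate asks L1 not to keep a line
// for this load. asm volatile keeps the compiler from reordering or
// caching the load away.
__device__ __forceinline__ uint32_t ld_nc_no_allocate_u32(const uint32_t* ptr) {
    uint32_t value;
    asm volatile("ld.global.nc.L1::no_allocate.L2::256B.b32 %0, [%1];"
                 : "=r"(value)
                 : "l"(ptr)
                 : "memory");
    return value;
}
```

As for the rug-pull worry: since the .nc + .L1::no_allocate combination is, by the book, undefined behaviour, nothing formally prevents a later architecture or ptxas release from changing what it does, which is presumably why the note stresses that correctness was only tested on Hopper.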