Back in 2018, OpenAI published a charter, which includes a self-sacrifice clause:
> We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions. Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”
Interestingly, this is still hosted at https://openai.com/charter/, suggesting it remains official company policy.
At the same time, Sam Altman’s explicitly stated AGI timelines have been as follows:
| Date | Predicted AGI Year | Diff (years) | Quote / Claim | Source |
|---|---|---|---|---|
| May 22, 2023 | ~2033 | ~10 | “Within the next ten years, AI systems will exceed expert skill level in most domains” | OpenAI Blog — Governance of Superintelligence |
| Dec 2023 | ~2030 | ~6 | “By the time the end of this decade rolls around, the world will be in an unbelievably better place” | TIME |
| Nov 4, 2024 | ~2029 | ~5 | “I think in 5 years […] people are like, man, the AGI moment came and went” | 20VC Podcast |
| Nov 8, 2024 | 2025 | ~1 | “What are you excited about in 2025? - AGI” | Futurism |
| Jan 2025 | ~2029 | ~4 | “AGI will probably get developed during Trump’s term” | Bloomberg |
| Sep 25, 2025 | 2030 | ~4 | “By 2030, if we don’t have extraordinarily capable models that do things we can’t, I’d be very surprised” | TechSpot |
| Oct 28, 2025 | 2028 | ~2 | “Automated AI research intern by Sep 2026, full AI researcher by Mar 2028” | OfficeChai |
| Dec 18, 2025 | 2025 | 0 | “AGI kinda went whooshing by… okay fine, we built AGIs” | Windows Central |
| Feb 3, 2026 | 2025 | ~-1 | “We basically have built AGI” (later: “a spiritual statement, not a literal one”) | ALM Corp |
We can see that the predicted AGI timeline (let’s assume these predictions correspond to a “better-than-even chance”) has steadily contracted, and the median horizon across statements made since 2025 is roughly two years. Notably, the latest interviews claim that AGI has already been achieved, and that the race is now toward ASI.
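As a quick sanity check on that claim, here is a minimal sketch computing the median horizon from the “Diff (years)” column for the rows dated 2025 or later (the values are taken directly from the table above; the variable names are mine):

```python
import statistics

# "Diff (years)" values for statements made since 2025, from the table above:
# Jan 2025 (~4), Sep 2025 (~4), Oct 2025 (~2), Dec 2025 (0), Feb 2026 (~-1)
diffs_since_2025 = [4, 4, 2, 0, -1]

median_horizon = statistics.median(diffs_since_2025)
print(median_horizon)  # 2
```

A median of 2 years matches the eyeballed figure above.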
Finally, here’s a snapshot of the top 10 models in the current overall Arena ranking.
| Model | Overall | Expert | Hard Prompts | Coding | Math | Creative Writing | Instruction Following | Longer Query |
|---|---|---|---|---|---|---|---|---|
| claude-opus-4-6 | 1 | 1 | 2 | 2 | 3 | 4 | 2 | 2 |
| claude-opus-4-6-thinking | 2 | 2 | 1 | 1 | 2 | 1 | 1 | 1 |
| gemini-3.1-pro-preview | 3 | 3 | 3 | 3 | 1 | 5 | 3 | 3 |
| grok-4.20-beta1 | 4 | 14 | 4 | 5 | 20 | 2 | 8 | 12 |
| gemini-3-pro | 5 | 7 | 5 | 9 | 5 | 3 | 9 | 5 |
| gpt-5.4-high | 6 | 4 | 10 | 11 | 8 | 6 | 5 | 11 |
| gpt-5.2-chat-latest | 7 | 10 | 7 | 6 | 4 | 9 | 10 | 15 |
| gemini-3-flash | 8 | 9 | 9 | 18 | 7 | 8 | 13 | 13 |
| grok-4.1-thinking | 9 | 17 | 13 | 19 | 21 | 19 | 28 | 27 |
| claude-opus-4-5-202… | 10 | 6 | 6 | 4 | 13 | 7 | 4 | 4 |
Based on these rankings, the flagship GPT-5.4 model is clearly trailing the competition. Anthropic’s and Google’s models are, at minimum, safety-conscious, and arguably value-aligned (whatever that means; since the models are drop-in replacements for GPT, it should hold).
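The “trailing” claim can be made slightly more concrete by averaging each model’s per-category ranks from the table above (Expert, Hard Prompts, Coding, Math, Creative Writing, Instruction Following, Longer Query). This is only an illustrative summary statistic, not how Arena itself scores models:

```python
from statistics import mean

# Per-category ranks copied from the Arena table above.
ranks = {
    "claude-opus-4-6-thinking": [2, 1, 1, 2, 1, 1, 1],
    "claude-opus-4-6":          [1, 2, 2, 3, 4, 2, 2],
    "gemini-3.1-pro-preview":   [3, 3, 3, 1, 5, 3, 3],
    "gpt-5.4-high":             [4, 10, 11, 8, 6, 5, 11],
}

for model, rs in ranks.items():
    print(f"{model}: mean category rank {mean(rs):.1f}")
```

The Claude and Gemini flagships land in the low single digits, while GPT-5.4 averages a mean category rank near 8.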
It can be debated whether arena.ai is a suitable metric for AGI; a strong case can probably be made that it is not. However, that is beside the point: the spirit of the self-sacrifice clause is to avoid an arms race, and we are clearly in one.
Therefore, one can only conclude that we currently meet the stated example triggering condition of “a better-than-even chance of success in the next two years”. As per its charter, OpenAI should stop competing with the likes of Anthropic and Gemini and join forces with them, however that might look.
While this will never happen, I think it illustrates some points worth pondering:
- The impotence of naive idealism in the face of economic incentives.
- The discrepancy between marketing points and practical actions.
- The shifting goalposts of AGI definitions and timelines. Notably, it’s now common to talk about ASI instead, implying we may have already achieved AGI, almost without noticing.