We’re excited to release Chai-1, a new multi-modal foundation model for molecular structure prediction that performs at the state-of-the-art across a variety of tasks relevant to drug discovery. Chai-1 enables unified prediction of proteins, small molecules, DNA, RNA, covalent modifications, and more.
The model is available for free via a web interface, including for commercial applications such as drug discovery. We are also releasing the model weights and inference code as a software library for non-commercial use.
A frontier model for biomolecular interactions
We tested Chai-1 across a large number of benchmarks and found that the model achieves a 77% success rate on the PoseBusters benchmark (vs. 76% by AlphaFold3), as well as a Cα LDDT of 0.849 on the CASP15 protein monomer structure prediction set (vs. 0.801 by ESM3-98B).
Unlike many existing structure prediction tools, which require multiple sequence alignments (MSAs), Chai-1 can also be run in single-sequence mode without MSAs while preserving most of its performance. The model folds multimers more accurately (69.8%) than the MSA-based AlphaFold-Multimer model (67.7%), as measured by the DockQ acceptable prediction rate. Chai-1 is the first model able to predict multimer structures from single sequences alone (without MSA search) at AlphaFold-Multimer-level quality.
For more information, and a comprehensive analysis of the model, read our technical report.
A natively multi-modal foundation model
In addition to its frontier modeling capabilities directly from sequences, Chai-1 can be prompted with new data, such as restraints derived from the lab, which boost performance by double-digit percentage points. We explore a number of these capabilities in our technical report, including epitope conditioning: providing even a handful of contacts or pocket residues (potentially derived from lab experiments) doubles antibody-antigen structure prediction accuracy, making antibody engineering with AI more feasible.
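To make restraint prompting concrete, the sketch below shows what a small set of contact and pocket restraints for an antibody-antigen complex might look like. The field names, residue numbers, and distance cutoffs are invented for illustration and are not the Chai-1 input schema; the supported formats are described in the technical report and code repository.

```python
# Hypothetical illustration of epitope-conditioning inputs. These field
# names and values are invented for this example and are NOT the Chai-1
# schema; see the technical report and repository for the supported format.
restraints = [
    # Contact restraint: residue 52 of the antibody heavy chain (chain H)
    # is believed, e.g. from lab data, to touch residue 101 of the antigen
    # (chain A) within roughly 6 angstroms.
    {"type": "contact", "chain_a": "H", "res_a": 52,
     "chain_b": "A", "res_b": 101, "max_dist_angstrom": 6.0},
    # Pocket restraint: residue 33 of the antigen lies in the binding
    # pocket for the antibody chain, without naming the partner residue.
    {"type": "pocket", "chain_a": "H",
     "chain_b": "A", "res_b": 33, "max_dist_angstrom": 10.0},
]
```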
Releasing the model for all
We are releasing Chai-1 via a web interface for free, including for commercial applications such as drug discovery. We are also releasing the code for Chai-1 for non-commercial use as a software library. We believe that when we build in partnership with the research and industrial communities, the entire ecosystem benefits.
Try Chai-1 for yourself by visiting lab.chaidiscovery.com, or run it from our GitHub repository at github.com/chaidiscovery/chai-lab.
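For those using the open-source library, here is a minimal sketch of what an inference call might look like. It assumes the `run_inference` entry point and FASTA conventions described in the chai-lab README; the file names and example sequence are placeholders, and the exact signature and options should be confirmed against the repository.

```python
# Illustrative sketch only, based on the usage pattern shown in the
# chai-lab README (github.com/chaidiscovery/chai-lab). Function and
# parameter names may differ between releases; check the repository
# for the current API before running.
from pathlib import Path

from chai_lab.chai1 import run_inference

# Describe the complex to fold in a FASTA file. Chain types (protein,
# ligand, DNA, RNA) are declared in the record headers; the exact header
# syntax is documented in the repository and may change between versions.
fasta = Path("example.fasta")
fasta.write_text(
    ">protein|name=example-protein\n"
    "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ\n"
)

# Run prediction. Ranked structure files and confidence scores are
# written to the output directory; additional options (trunk recycles,
# diffusion timesteps, random seed, device) can be passed as keyword
# arguments where supported.
candidates = run_inference(
    fasta_file=fasta,
    output_dir=Path("./chai1_outputs"),
)
```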
What's next?
The team comes from pioneering research and applied AI companies such as OpenAI, Meta FAIR, Stripe, and Google X, and has played pivotal roles in advancing AI research for biology. Most of the team members have served as Heads of AI at leading drug discovery companies, and together we have helped advance over a dozen drug programs.
Chai-1 is the result of a few months of intense work, and yet we are only at the starting line. Our broader mission at Chai Discovery is to transform biology from science into engineering. To that end, we'll be building further AI foundation models that predict and reprogram interactions between biochemical molecules, the fundamental building blocks of life. We’ll have more to share on this soon.
We are grateful for the partnership of Dimension, Thrive Capital, OpenAI, Conviction, Neo, Lachy Groom, and Amplify Partners, as well as Anna and Greg Brockman, Blake Byers, Fred Ehrsam, Julia and Kevin Hartz, Will Gaybrick, David Frankel, R. Martin Chavez, and many others.