This release marks a significant step forward in leveraging large-scale real-world data to address the complexity and richness of the physical world. With SAM 3D, we're introducing two new models: SAM 3D Objects, which enables object and scene reconstruction, and SAM 3D Body, which focuses on human pose and shape estimation. Both models deliver robust, state-of-the-art performance, transforming static 2D images into detailed 3D reconstructions.
As part of this release, we're sharing SAM 3D model checkpoints and inference code. Soon, we'll also share our new SAM 3D Artist Objects (SA-3DAO) dataset for visually grounded 3D reconstruction from real-world images. This novel evaluation dataset features a diverse array of paired images and object meshes, offering a level of realism and challenge that surpasses existing 3D benchmarks.
To make these advancements widely accessible, we're introducing Segment Anything Playground, the simplest way for anyone to experiment with our state-of-the-art models for media modification. Anyone can upload their own images, select humans and objects, generate detailed 3D reconstructions, and explore the full range of features offered by our new models. The Playground also includes SAM 3, our latest foundation model, which advances understanding across both images and videos. More information about this release can be found in the SAM 3 blog post.
At Meta, we're already putting these advancements to work in our products. SAM 3D and SAM 3 power the new View in Room feature on Facebook Marketplace, helping people visualize the style and fit of home decor items in their spaces before purchasing. By broadening access to these models, we hope to inspire new possibilities for everyone, including creative projects, research, and interactive applications.