We built a mobile app that turns a single photo into a full 3D model using Meta's SAM 3D. In this video, we break down how SAM 3D reconstructs objects from images, how Gaussian splatting fits into the pipeline, and what works and what fails when objects are partially occluded. We also walk through the real app setup with Expo and a remote GPU backend, and share the full code so you can try it yourself.
🔗 Relevant Links
Project API: https://github.com/andrisgauracs/sam3...
Project Frontend: https://github.com/andrisgauracs/sam3...
Meta's SAM 3D: https://ai.meta.com/sam3d/
❤️ More about us
Radically better observability stack: https://betterstack.com/
Written tutorials: https://betterstack.com/community/
Example projects: https://github.com/BetterStackHQ
📱 Socials
Twitter: / betterstackhq
Instagram: / betterstackhq
TikTok: / betterstack
LinkedIn: / betterstack
📌 Chapters:
00:00 Intro
00:31 What Are Segment Anything Models
01:05 The Latest SAM 3D Models
02:09 How SAM 3D Reconstructs Objects
02:59 Project App Architecture and Tech Stack
04:38 Setting Up SAM Models on a Remote GPU
07:00 Setting Up The Expo App
07:33 Test Run
09:56 Testing How the Model Handles Occlusion
10:59 Tests With Real Photos
11:27 Final Takeaways