Turn your everyday devices into a local AI supercomputer. No cloud. No data leaks. No insane GPU bills.
In this video, we look at Exo, the open-source project that lets you run massive AI models (up to 671B parameters) across your own Macs, Linux machines, and even Raspberry Pis — all on your local network.
🔗 Relevant Links
Exo on GitHub - https://github.com/exo-explore/exo
Official Exo Site - https://exolabs.net/
Jeff Geerling's Article - https://www.jeffgeerling.com/blog/202...
❤️ More about us
Radically better observability stack: https://betterstack.com/
Written tutorials: https://betterstack.com/community/
Example projects: https://github.com/BetterStackHQ
📱 Socials
Twitter: / betterstackhq
Instagram: / betterstackhq
TikTok: / betterstack
LinkedIn: / betterstack
📌 Chapters:
0:00 – Run AI Models at Home (Exo Explained)
0:33 – What Is Exo? Distributed AI on Your Local Network
1:00 – How Exo Actually Works
1:55 – Major Exo Milestones (Llama, M4 Macs, Raspberry Pi)
2:05 – How to Set Up Exo
2:50 – RDMA over Thunderbolt 5 (Why Latency Drops 99%)
3:11 – Real Benchmarks: 235B–671B Models Running Locally
3:31 – Mixed Hardware Clusters (Macs + Linux + Raspberry Pi)
3:58 – Exo's Limitations: It's Not Perfect by Any Means