About Liquid AI
Spun out of MIT CSAIL, we build general-purpose AI systems that run efficiently across deployment targets, from data center accelerators to on-device hardware, delivering low latency, minimal memory usage, privacy, and reliability. We partner with enterprises across consumer electronics, automotive, life sciences, and financial services. We are scaling rapidly and need exceptional people to help us do it.
The Opportunity
Our Training Infrastructure team is building the distributed systems that power our next-generation Liquid Foundation Models. In this role, you'll design, implement, and optimize the infrastructure that enables large-scale training as we scale.
This is a high-ownership training systems role focused on runtime performance and reliability, not a general platform or SRE role. You'll work on a small team with fast feedback loops, building critical systems from the ground up rather than inheriting mature infrastructure.
San Francisco and Boston are preferred locations, but we are open to others.
What We're Looking For
We need someone who: