Embodied Foundation Model
A vision-language-action transformer trained on millions of hours of human and teleoperated demonstration. It generalizes across tasks without per-task fine-tuning.
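To make the idea concrete, here is a minimal, purely illustrative sketch of what a vision-language-action inference step looks like: one frozen policy maps (camera frame, instruction) to an action, with no per-task fine-tuning. Every name, type, and the 7-joint action shape below is a hypothetical placeholder, not the actual model interface.

```python
# Hypothetical sketch of a vision-language-action (VLA) inference step.
# The same frozen model handles any task described in language; nothing
# here is the real model -- all names and shapes are illustrative.

from dataclasses import dataclass
from typing import List


@dataclass
class Observation:
    rgb: bytes          # encoded camera frame
    instruction: str    # natural-language task description


@dataclass
class Action:
    joint_deltas: List[float]  # per-joint position deltas for one control tick


class VLAPolicy:
    """Stand-in for a frozen vision-language-action transformer."""

    def __call__(self, obs: Observation) -> Action:
        # A real model would tokenize the image and instruction, run the
        # transformer, and decode action tokens. Here: a fixed no-op.
        return Action(joint_deltas=[0.0] * 7)


policy = VLAPolicy()
obs = Observation(rgb=b"", instruction="put the mug in the sink")
action = policy(obs)
```

The point of the sketch is the interface: the task lives entirely in the instruction string, so new tasks need new prompts, not new weights.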
Technology
We design the model, the silicon, the actuators, and the software that ties them together. Vertical integration is how we close the loop between perception, reasoning, and action.
Capacitive tactile arrays with sub-millimeter spatial resolution on each fingertip feed a closed-loop controller that adjusts grip force in real time, so L-1 can hold an egg or torque a jar lid.
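The closed loop described above can be sketched as a simple proportional update: each tick, read the estimated normal force from the fingertip array and nudge the commanded grip force toward a setpoint. The gain, force limits, and perfect-tracking assumption below are all hypothetical, chosen only to show the shape of the loop.

```python
# Illustrative closed-loop grip controller (all constants hypothetical):
# each tick, compare measured fingertip force to a target and adjust the
# commanded grip force -- firm enough to hold, gentle enough not to crush.

def grip_step(measured_force: float, target_force: float,
              current_cmd: float, gain: float = 0.2,
              max_cmd: float = 5.0) -> float:
    """One proportional update of the commanded grip force (newtons)."""
    error = target_force - measured_force
    new_cmd = current_cmd + gain * error
    # Clamp so a sensing glitch can't command a crushing grip.
    return max(0.0, min(max_cmd, new_cmd))


# The command converges toward the setpoint over successive ticks.
cmd = 0.0
for _ in range(50):
    measured = cmd  # toy assumption: the gripper tracks the command exactly
    cmd = grip_step(measured, target_force=1.5, current_cmd=cmd)
print(round(cmd, 2))  # → 1.5
```

A real controller would also watch for shear (incipient slip) across the array and raise the setpoint when the object starts to slide; the proportional core stays the same.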
A unified planner reasons about base, torso, and arm motion together — eliminating the awkward stop-and-go behavior of stitched-together robotic systems.
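A toy example shows why planning the whole body together beats stitching stages. Suppose a 1.0 m reach can be covered by base translation plus arm extension, each with a quadratic effort cost (the weights below are made up). A unified planner splits the motion optimally in closed form; a staged "drive, stop, then reach" baseline does not.

```python
# Toy comparison of unified vs. staged planning (numbers hypothetical).
# Goal: cover a 1.0 m reach with base translation b plus arm extension a,
# minimizing quadratic effort w_base*b^2 + w_arm*a^2.

def unified_plan(target: float, w_base: float, w_arm: float):
    """Minimize w_base*b^2 + w_arm*a^2 subject to b + a = target.
    Closed form: b = target * w_arm / (w_base + w_arm)."""
    b = target * w_arm / (w_base + w_arm)
    a = target - b
    return b, a


def staged_plan(target: float):
    """Stop-and-go baseline: drive the base the whole way, then reach."""
    return target, 0.0


def cost(b: float, a: float, w_base: float = 4.0, w_arm: float = 1.0) -> float:
    return w_base * b * b + w_arm * a * a


b, a = unified_plan(1.0, w_base=4.0, w_arm=1.0)
print(cost(b, a), cost(*staged_plan(1.0)))  # → 0.8 4.0
```

With the base four times costlier to move than the arm, the unified split (0.2 m base, 0.8 m arm) costs 0.8 versus 4.0 for the staged plan; real planners solve the same kind of joint problem over many more degrees of freedom and over time.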
Our model runs on a fanless compute module inside L-1. Nothing about your home leaves the device unless you opt in.