Robotics machine-learning company Generalist has announced GEN-1, a new physical AI system that it says “crosses into production-level success rates” on “a broad range of physical skills” that used to require the dexterity and muscle memory of human hands. Generalist is also touting the new model’s ability to respond to disruptions by improvising new moves and “connect[ing] ideas from different places in order to solve new problems.”
GEN-1 builds on Generalist’s previous GEN-0 model, which the company touted in November as a proof of concept for the applicability of scaling laws to robotics training, showing how more pre-training data and compute time improve post-training performance. But while large language models have been trained effectively on the trillions of words collectively written on the Internet, robotic models have no similar, readily accessible source of quality data about how humans manipulate objects.
To help solve this problem, Generalist has relied on “data hands,” a set of wearable pincers that capture micro-movements and visual information as humans perform manual tasks. Generalist now claims it has collected over half a million hours and “petabytes of physical interaction data” to help train its physical model.