ProtoMotions3 is a GPU-accelerated simulation and learning framework for training physically simulated digital humans and humanoid robots. Our mission is to provide a fast prototyping platform for various simulated humanoid learning tasks and environments, for researchers and practitioners in animation, robotics, and reinforcement learning, bridging efforts across communities.
Modularity, extensibility, and scalability are at the core of ProtoMotions3. It is community-driven and permissively licensed under the Apache-2.0 license.
Also check out MimicKit, our sibling repository: a lightweight framework for motion imitation learning.
Train your fully physically simulated character to learn motion skills from the entire public AMASS human animation dataset (40+ hours) within 12 hours on 4 A100s.
Scale training to even larger datasets, with each GPU handling a subset of motions. For example, we have trained on 24 A100s, with 13K motions per GPU, using the BONES dataset in SOMA skeleton format. Check out Quick Start and SEED BVH Data Preparation to play around with the dataset and pre-trained models today.
Transfer (retarget) the entire AMASS dataset to your favorite robot with the built-in PyRoki-based optimizer, in one command.
Note: As of v3, we use PyRoki for retargeting. Earlier versions used Mink.
Train your robot to perform AMASS motor skills in 12 hours by changing just one command argument:
`--robot-name=smpl` → `--robot-name=h1_2`, plus preparing retargeted motions (see here)
One-click test (`--simulator=isaacgym` → `--simulator=newton` → `--simulator=mujoco`) of robot control policies on H1_2 or G1 in different physics engines (NVIDIA Newton, MuJoCo CPU). The policies shown below use only observations you could actually get from real hardware.
Train in simulation, deploy on real hardware. ProtoMotions trains one General Tracking Policy on the entire BONES-SEED dataset (~142K motions) that transfers zero-shot to the Unitree G1 humanoid robot.
Our deployment pipeline exports a single ONNX model (with observation computation baked in), so deployment frameworks only need to provide raw sensor signals: no need to rewrite observation functions or match training internals. We tested on the Unitree G1 via the brilliant RoboJuDo framework, adding just one policy file with no mandatory changes to the RoboJuDo core.
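For illustration, here is a minimal sketch of driving such an exported policy with `onnxruntime`. The model filename, input layout, and output semantics below are assumptions for readability; the exported model's metadata is the real contract.

```python
# Minimal sketch of running an exported policy with onnxruntime.
# NOTE: the filename, tensor layout, and output meaning are illustrative
# assumptions; inspect the exported model's inputs/outputs for the real contract.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("general_tracking_policy.onnx")  # assumed export name
input_name = session.get_inputs()[0].name


def step_policy(raw_sensors: np.ndarray) -> np.ndarray:
    """One control step: raw sensor readings in, actions out.
    Observation computation is baked into the exported graph."""
    outputs = session.run(None, {input_name: raw_sensors.astype(np.float32)})
    return outputs[0]  # actions for the robot's low-level controller
```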
Full Deployment Tutorial: from data preparation to real robot, fully reproducible.
Test your policy in IsaacSim 5.0+, which lets you load beautifully rendered Gaussian-splatting backgrounds (with Omniverse NuRec; the rendered scene is not yet physically interactable).
With Kimodo (NVIDIA's text-to-motion generation model), generate any motion from a text prompt and use ProtoMotions to train a physics-based policy that performs the motion, for both the SOMA animation character and the Unitree G1 robot. Policies trained this way can be deployed directly on real hardware.
See Kimodo Data Preparation for how to convert Kimodo outputs to ProtoMotions format.
Image Credit: NVIDIA Human Motion Modeling Research
Procedurally generate many scenes for scalable Synthetic Data Generation (SDG): start from a seed motion set and use RL to adapt the motions to the augmented scenes.
Train a generative policy (e.g., MaskedMimic) that can autonomously choose its "move" to finish the task.
Train your robot to hike across challenging terrain!
Have a new task? Build it from modular components; no monolithic env class needed. Here's how the steering task is composed:
| Layer | File | What it does |
|---|---|---|
| Control | `steering_control.py` | Manages task state (target direction, speed, facing). Periodically samples new heading targets. |
| Observation | `obs/steering.py` | Pure tensor kernel: transforms targets into the robot-local frame, producing a 5D feature vector. |
| Reward | `rewards/task.py` | `compute_heading_velocity_rew` blends direction-matching (0.7) and facing-matching (0.3) rewards. |
| Experiment | `steering/mlp.py` | Wires components together as `MdpComponent` instances via context paths. |
Each piece is a standalone function or class; the experiment config binds them into a complete task using `MdpComponent` and `FieldPath` descriptors.
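To make the pattern concrete, here is a rough sketch of what such a composition could look like in an experiment config. The import path and the `MdpComponent` / `FieldPath` keyword arguments are illustrative assumptions, not the exact ProtoMotions API; the steering experiment file above is the authoritative reference.

```python
# Illustrative sketch of composing a task from standalone pieces.
# The real MdpComponent / FieldPath signatures live in the ProtoMotions source;
# the import path and keyword arguments shown here are assumptions.
from protomotions.envs.components import MdpComponent, FieldPath  # hypothetical import path

steering_task = [
    MdpComponent(  # control: owns task state and periodically resamples heading targets
        func="steering_control.SteeringControl",
        outputs=[FieldPath("context.steering.target_direction")],
    ),
    MdpComponent(  # observation: pure tensor kernel -> 5D robot-local feature vector
        func="obs.steering.steering_obs",
        inputs=[FieldPath("context.steering.target_direction")],
    ),
    MdpComponent(  # reward: 0.7 * direction-matching + 0.3 * facing-matching
        func="rewards.task.compute_heading_velocity_rew",
        inputs=[FieldPath("context.steering.target_direction")],
    ),
]
```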
Want to try a new RL algorithm? Implement algorithms like ADD in ProtoMotions in about 50 lines of code, thanks to our modular design:
`protomotions/agents/mimic/agent_add.py`
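As a taste of what that looks like, here is a hedged sketch of a mimic-style agent subclass. The base-class name, hook method, and batch keys are assumptions; the file above holds the real ~50-line implementation.

```python
# Sketch of adding a new mimic-style agent by overriding only the pieces that change.
# Base class, hook name, and batch keys are illustrative assumptions.
import torch
from protomotions.agents.mimic.agent import Mimic  # assumed base class


class ADD(Mimic):
    """Mimic variant that replaces the hand-tuned tracking reward with a reward
    from an adversarial differential discriminator over reference/simulation gaps."""

    def calculate_extra_reward(self, batch: dict) -> torch.Tensor:
        # Assumed hook: score the per-feature difference between the reference
        # motion and the simulated state with a learned discriminator.
        diff = batch["ref_state"] - batch["sim_state"]
        logits = self.discriminator(diff)
        return -torch.log(torch.clamp(1.0 - torch.sigmoid(logits), min=1e-5)).squeeze(-1)
```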
Want to use your own simulator? Implement the APIs that interface between ProtoMotions and the different simulators:
`protomotions/simulator/base_simulator/`
Refer to this community-contributed example:
`protomotions/simulator/genesis/`
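A rough skeleton of such an integration is sketched below, assuming a `Simulator` base class with overridable lifecycle hooks; the exact class and method names are defined in `protomotions/simulator/base_simulator/`.

```python
# Skeleton for plugging in a new physics backend.
# Class and method names below are illustrative assumptions; the authoritative
# interface is in protomotions/simulator/base_simulator/.
from protomotions.simulator.base_simulator.simulator import Simulator  # assumed module path


class MySimulator(Simulator):
    def _load_environments(self):
        # Create the scene, robots, and objects in your engine of choice.
        ...

    def _physics_step(self):
        # Advance your engine by one control step.
        ...

    def _get_simulator_bodies_state(self):
        # Return rigid-body positions/rotations/velocities as batched tensors.
        ...

    def _apply_control(self, actions):
        # Map policy actions to joint targets or torques in your engine.
        ...
```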
Want to add your own robot? Follow these steps:
- Add your `.xml` MuJoCo spec file to `protomotions/data/robots/`
- Fill in config fields (see examples like `protomotions/robot_configs/g1.py`)
- Register in `protomotions/robot_configs/factory.py`
And you're good to go!
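For orientation, here is a minimal sketch of what the second and third steps might look like, assuming a dataclass-style config; the actual field names and registration mechanism are defined by the examples in `protomotions/robot_configs/`.

```python
# Illustrative robot config sketch; the real field names are in
# protomotions/robot_configs/g1.py and may differ from these assumptions.
from dataclasses import dataclass, field


@dataclass
class MyRobotConfig:
    asset_file: str = "my_robot.xml"  # the MuJoCo spec added under protomotions/data/robots/
    num_dofs: int = 23                # number of actuated joints on your robot
    default_joint_angles: dict = field(default_factory=dict)  # nominal standing pose
    # ...remaining fields follow the g1.py example


# Finally, register the config in protomotions/robot_configs/factory.py so that
# --robot-name=my_robot resolves to this class.
```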
Full Documentation
We welcome contributions! Please read our Contributing Guide before submitting pull requests.
ProtoMotions3 is released under the Apache-2.0 License.
If you use ProtoMotions3 in your research, please cite:
```bibtex
@misc{ProtoMotions,
  title = {ProtoMotions3: An Open-source Framework for Humanoid Simulation and Control},
  author = {Tessler*, Chen and Jiang*, Yifeng and Peng, Xue Bin and Coumans, Erwin and Shi, Yi and Zhang, Haotian and Rempe, Davis and Chechik†, Gal and Fidler†, Sanja},
  year = {2025},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/NVLabs/ProtoMotions/}},
}
```