ProtoMotions 3

A GPU-Accelerated Framework for Simulated Humanoids

Supported simulators: Newton, IsaacLab, IsaacGym, Genesis, MuJoCo


Overview

ProtoMotions3 is a GPU-accelerated simulation and learning framework for training physically simulated digital humans and humanoid robots. Our mission is to provide a fast prototyping platform for simulated humanoid learning tasks and environments, serving researchers and practitioners in animation, robotics, and reinforcement learning, and bridging efforts across these communities.

Modularity, extensibility, and scalability are at the core of ProtoMotions3. It is community-driven and permissively licensed under Apache-2.0.

Also check out MimicKit, our sibling repository: a lightweight framework for motion imitation learning.


What You Can Do with ProtoMotions3

πŸƒ Large-Scale Motion Learning

Train your fully physically simulated character to learn motion skills from the entire public AMASS human animation dataset (40+ hours) within 12 hours on 4 A100s.

[Videos: SMPL motion imitation demos]

📈 Scalable Multi-GPU Training

Scale training to even larger datasets, with each GPU handling its own subset of motions. For example, we have trained on 24 A100s, with 13K motions per GPU, using the BONES dataset in SOMA skeleton format. Check out Quick Start and SEED BVH Data Preparation to play with the dataset and pre-trained models today.

🔄 One-Command Retargeting

Transfer (retarget) the entire AMASS dataset to your favorite robot with the built-in PyRoki-based optimizer, in a single command.

Note: As of v3, we use PyRoki for retargeting. Earlier versions used Mink.

[Video: G1 retargeting]

🤖 Train Any Robot

Train your robot to perform AMASS motor skills in 12 hours by changing a single command argument (--robot-name=smpl → --robot-name=h1_2) and preparing retargeted motions (see here).

[Video: H1_2 AMASS training]

🔬 Sim2Sim Testing

One-click testing (--simulator=isaacgym → --simulator=newton → --simulator=mujoco) of robot control policies on H1_2 or G1 across different physics engines (NVIDIA Newton, MuJoCo CPU). The policies shown below use only observations that are available on real hardware.

[Video: H1_2/G1 sim2sim]

🤖 From Sim to Real

Train in simulation, deploy on real hardware. ProtoMotions trains one General Tracking Policy on the entire BONES-SEED dataset (~142K motions) that transfers zero-shot to the Unitree G1 humanoid robot.

[Videos: G1 deployment and real-robot footage]

Our deployment pipeline exports a single ONNX model with observation computation baked in, so deployment frameworks only need to provide raw sensor signals; there is no need to rewrite observation functions or match training internals. We tested on the Unitree G1 via the brilliant RoboJuDo framework, adding just one policy file with no mandatory changes to the RoboJuDo core.
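As a minimal sketch of what deployment-side inference can look like (the file name, input layout, and observation size below are assumptions, not the exported model's actual interface), the policy can be driven with onnxruntime directly from raw sensor signals:

```python
# A minimal sketch of deployment-side inference, assuming the exported
# file is named "policy.onnx" and exposes a single flat observation
# input; inspect sess.get_inputs()/get_outputs() for the real contract.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("policy.onnx")
input_name = sess.get_inputs()[0].name

# Hypothetical raw sensor vector; the size 123 is illustrative only.
raw_obs = np.zeros((1, 123), dtype=np.float32)

# Observation computation is baked into the graph, so raw signals go
# in and joint targets come out.
actions = sess.run(None, {input_name: raw_obs})[0]
```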

📖 Full Deployment Tutorial: from data preparation to the real robot, fully reproducible.

🎨 High-Fidelity Rendering

Test your policy in IsaacSim 5.0+, which can load beautifully rendered Gaussian-splatting backgrounds via Omniverse NuRec (the rendered scene is not yet physically interactable).

[Video: G1 in a NuRec-rendered scene]

🎬 Motion Authoring with Kimodo

With Kimodo (NVIDIA's text-to-motion generation model), generate any motion from a text prompt and use ProtoMotions to train a physics-based policy that performs it, for both the SOMA animation character and the Unitree G1 robot. Policies trained this way can be deployed directly on real hardware.

See Kimodo Data Preparation for how to convert Kimodo outputs to ProtoMotions format.

[Videos: vaulting; G1 robot walking]

Image Credit: NVIDIA Human Motion Modeling Research

πŸ—οΈ Procedural Scene Generation

Procedurally generate many scenes for scalable Synthetic Data Generation (SDG): start from a seed motion set, then use RL to adapt the motions to the augmented scenes.

[Video: augmented scenes and motions]

🎭 Generative Policies

Train a generative policy (e.g., MaskedMimic) that can autonomously choose its "move" to finish the task.

[Videos: MaskedMimic demos]

⛰️ Terrain Navigation

Train your robot to hike across challenging terrain!

[Video: SMPL terrain traversal]

🎯 Custom Environments

Have a new task? Build it from modular components; no monolithic environment class is needed. Here's how the steering task is composed:

| Layer | File | What it does |
| --- | --- | --- |
| Control | steering_control.py | Manages task state (target direction, speed, facing); periodically samples new heading targets. |
| Observation | obs/steering.py | Pure tensor kernel: transforms targets into the robot-local frame, producing a 5D feature vector. |
| Reward | rewards/task.py | compute_heading_velocity_rew blends direction-matching (0.7) and facing-matching (0.3) rewards. |
| Experiment | steering/mlp.py | Wires the components together as MdpComponent instances via context paths. |

Each piece is a standalone function or class; the experiment config binds them into a complete task using MdpComponent and FieldPath descriptors.
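As a hedged sketch of what the reward layer computes (this is not the repository's actual compute_heading_velocity_rew; the tensor layout and the exponential speed kernel are illustrative assumptions), the 0.7/0.3 blend could look like:

```python
# Illustrative sketch of a heading-velocity reward; shapes and scales
# are assumptions, not the framework's actual implementation.
import torch

def heading_velocity_reward(
    root_vel: torch.Tensor,      # (N, 2) planar root velocity
    facing_dir: torch.Tensor,    # (N, 2) unit facing direction
    target_dir: torch.Tensor,    # (N, 2) unit target direction
    target_speed: torch.Tensor,  # (N,) commanded speed
) -> torch.Tensor:
    # Direction matching: penalize speed error along the target direction.
    vel_along_target = (root_vel * target_dir).sum(dim=-1)
    dir_rew = torch.exp(-2.0 * (target_speed - vel_along_target).abs())

    # Facing matching: reward alignment of the facing axis with the target.
    facing_rew = (facing_dir * target_dir).sum(dim=-1).clamp(min=0.0)

    # Blend as described in the table: 0.7 direction + 0.3 facing.
    return 0.7 * dir_rew + 0.3 * facing_rew
```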

[Video: G1 steering]

🧪 New RL Algorithms

Want to try a new RL algorithm? Thanks to the modular design, algorithms like ADD can be implemented in ProtoMotions in ~50 lines of code:

📄 protomotions/agents/mimic/agent_add.py
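For a feel of what such an extension involves, here is a hypothetical skeleton (the base class, hook name, and reward shaping are placeholders, not the repository's actual interface; agent_add.py is the authoritative reference):

```python
# Hypothetical skeleton of a mimic-agent variant; everything here is
# a placeholder for illustration -- see agent_add.py for the real code.
import torch

class MimicAgent:
    """Stand-in for the framework's actual mimic agent base class."""

    def compute_rewards(self, batch: dict) -> torch.Tensor:
        raise NotImplementedError

class ADDLikeAgent(MimicAgent):
    """Swaps the tracking reward for a discriminator-style score."""

    def compute_rewards(self, batch: dict) -> torch.Tensor:
        # Score the per-step difference between simulated and reference
        # states (a crude stand-in for a learned discriminator).
        diff = batch["sim_state"] - batch["ref_state"]
        return torch.exp(-diff.pow(2).sum(dim=-1))
```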

🔧 Custom Simulators

Want to use your own simulator? Implement the APIs that interface between ProtoMotions and the different simulator backends:

📄 protomotions/simulator/base_simulator/

Refer to this community-contributed example:

📄 protomotions/simulator/genesis/
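As a rough sketch of what such an adapter looks like (the class and method names below are illustrative guesses; the real contract is defined in protomotions/simulator/base_simulator/):

```python
# Hypothetical shape of a simulator adapter; method names and
# signatures are guesses -- the authoritative interface lives in
# protomotions/simulator/base_simulator/.
import torch

class MyEngineSimulator:
    def __init__(self, num_envs: int, device: str = "cuda") -> None:
        self.num_envs = num_envs
        self.device = device
        # Build engine-specific scenes, robots, and state buffers here.

    def reset(self, env_ids: torch.Tensor) -> None:
        """Reset the selected environments to their initial states."""
        raise NotImplementedError

    def step(self, actions: torch.Tensor) -> None:
        """Apply joint targets and advance the physics one step."""
        raise NotImplementedError

    def get_root_state(self) -> torch.Tensor:
        """Return batched root positions, orientations, and velocities."""
        raise NotImplementedError
```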

🤖 Add Your Own Robot

Want to add your own robot? Follow these steps:

  1. Add your .xml MuJoCo spec file to protomotions/data/robots/
  2. Fill in config fields (see examples like protomotions/robot_configs/g1.py)
  3. Register in protomotions/robot_configs/factory.py

And you're good to go! A rough sketch of steps 2 and 3 is shown below.
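The field names and the registration dict in this sketch are guesses for illustration; mirror the real fields in protomotions/robot_configs/g1.py and the pattern in protomotions/robot_configs/factory.py.

```python
# Hypothetical sketch of steps 2 and 3; field names and the
# registration mechanism are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class MyRobotConfig:
    name: str = "my_robot"
    asset_file: str = "my_robot.xml"  # the MuJoCo spec added in step 1
    num_dofs: int = 23                # illustrative value
    default_joint_angles: dict = field(default_factory=dict)

# Step 3: map the config to its --robot-name key (mechanism is a guess).
ROBOT_CONFIGS = {"my_robot": MyRobotConfig}
```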


Documentation

📚 Full Documentation


Contributing

We welcome contributions! Please read our Contributing Guide before submitting pull requests.

License

ProtoMotions3 is released under the Apache-2.0 License.


Citation

If you use ProtoMotions3 in your research, please cite:

@misc{ProtoMotions,
  title = {ProtoMotions3: An Open-source Framework for Humanoid Simulation and Control},
  author = {Tessler*, Chen and Jiang*, Yifeng and Peng, Xue Bin and Coumans, Erwin and Shi, Yi and Zhang, Haotian and Rempe, Davis and Chechik†, Gal and Fidler†, Sanja},
  year = {2025},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/NVlabs/ProtoMotions/}},
}
