# Imitation Learning in Sim

This tutorial will explain how to train a neural network to control a robot in simulation with imitation learning.

**You'll learn:**

1. How to record a dataset in simulation with [gym-hil](https://github.com/huggingface/gym-hil) and visualize the dataset.
2. How to train a policy using your data.
3. How to evaluate your policy in simulation and visualize the results.

For the simulation environment we use the same [repo](https://github.com/huggingface/gym-hil) that is also used by the Human-In-the-Loop (HIL) reinforcement learning workflow.
This environment is based on [MuJoCo](https://mujoco.org) and allows you to record datasets in the LeRobotDataset format.
Teleoperation is easiest with a controller like the Logitech F710, but you can also use your keyboard if you are up for the challenge.

## Installation

First, install the `gym_hil` package within the LeRobot environment. Go to your LeRobot folder and run this command:

```bash
pip install -e ".[hilserl]"
```
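
To confirm the install worked, you can check that importing `gym_hil` registers its environments with Gymnasium. This is a minimal sketch; the `gym_hil/PandaPickCubeBase-v0` ID follows the namespace used in the gym-hil README:

```python
# Minimal install check: importing gym_hil registers its MuJoCo environments.
import gymnasium as gym
import gym_hil  # noqa: F401  # registration happens on import

env = gym.make("gym_hil/PandaPickCubeBase-v0")
print(env.observation_space)
print(env.action_space)
env.close()
```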
## Teleoperate and Record a Dataset

To use `gym_hil` with LeRobot, you need to use a configuration file. An example config file can be found [here](https://huggingface.co/datasets/aractingi/lerobot-example-config-files/blob/main/env_config_gym_hil_il.json).
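
You can download this example config directly with the `huggingface_hub` API. A short sketch, with the repo ID and filename taken from the link above:

```python
# Fetch the example environment config from the Hub.
from huggingface_hub import hf_hub_download

config_path = hf_hub_download(
    repo_id="aractingi/lerobot-example-config-files",
    filename="env_config_gym_hil_il.json",
    repo_type="dataset",  # the config lives in a dataset repo
)
print(config_path)  # cached local path you can pass to --config_path
```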
To teleoperate and collect a dataset, we need to modify this config file. Here's an example configuration for imitation learning data collection:

```json
{
    "env": {
        "type": "gym_manipulator",
        "name": "gym_hil",
        "task": "PandaPickCubeGamepad-v0",
        "fps": 10
    },
    "dataset": {
        "repo_id": "your_username/il_gym",
        "root": null,
        "task": "pick_cube",
        "num_episodes_to_record": 30,
        "replay_episode": null,
        "push_to_hub": true
    },
    "mode": "record",
    "device": "cuda"
}
```

Key configuration points:

- Set your `repo_id` in the `dataset` section: `"repo_id": "your_username/il_gym"`
- Set `"num_episodes_to_record": 30` to collect 30 demonstration episodes
- Ensure `mode` is set to `"record"`
- If you don't have an NVIDIA GPU, change `"device": "cuda"` to `"mps"` for macOS or `"cpu"`
- To use the keyboard instead of a gamepad, change `"task"` to `"PandaPickCubeKeyboard-v0"`
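
If you'd rather apply these edits programmatically, the minimal sketch below patches the downloaded config in place. The key names come from the example above; the local file name is an assumption:

```python
# Patch the recording config with your own values.
import json

config_file = "env_config_gym_hil_il.json"  # hypothetical local copy
with open(config_file) as f:
    cfg = json.load(f)

cfg["dataset"]["repo_id"] = "your_username/il_gym"
cfg["dataset"]["num_episodes_to_record"] = 30
cfg["mode"] = "record"
cfg["device"] = "cpu"  # or "mps" on macOS, "cuda" with an NVIDIA GPU
cfg["env"]["task"] = "PandaPickCubeKeyboard-v0"  # keyboard instead of gamepad

with open(config_file, "w") as f:
    json.dump(cfg, f, indent=4)
```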
Then we can run this command to start:

<hfoptions id="teleop_sim">
<hfoption id="Linux">

```bash
python -m lerobot.scripts.rl.gym_manipulator --config_path path/to/env_config_gym_hil_il.json
```

</hfoption>
<hfoption id="MacOS">

```bash
mjpython -m lerobot.scripts.rl.gym_manipulator --config_path path/to/env_config_gym_hil_il.json
```

</hfoption>
</hfoptions>
Once the environment is rendered, you can teleoperate the robot with the gamepad or keyboard; the controls for both are listed below.

Note that to teleoperate the robot you have to hold the "Human Take Over Pause Policy" button `RB` to enable control!

**Gamepad Controls**

<p align="center">
  <img
    src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/gamepad_guide.jpg?raw=true"
    alt="Figure shows the control mappings on a Logitech gamepad."
    title="Gamepad Control Mapping"
    width="100%"
  ></img>
</p>
<p align="center">
  <i>Gamepad button mapping for robot control and episode management</i>
</p>

**Keyboard controls**

For keyboard controls, use the `spacebar` to enable control and the following keys to move the robot:

```bash
Arrow keys: Move in X-Y plane
Shift and Shift_R: Move in Z axis
Right Ctrl and Left Ctrl: Open and close gripper
ESC: Exit
```
## Visualize a dataset

If you uploaded your dataset to the Hub, you can [visualize your dataset online](https://huggingface.co/spaces/lerobot/visualize_dataset) by copy-pasting your repo ID.
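
You can also inspect the dataset locally with the `LeRobotDataset` API. A quick sketch, assuming the import path of the current `src/lerobot` layout:

```python
# Load the recorded dataset and print a quick summary.
from lerobot.datasets.lerobot_dataset import LeRobotDataset

ds = LeRobotDataset("your_username/il_gym")
print(f"{ds.num_episodes} episodes, {ds.num_frames} frames")
print(ds[0].keys())  # observation, action and metadata keys of one frame
```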
<p align="center">
  <img
    src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/dataset_visualizer_sim.png"
    alt="Figure shows the dataset visualizer"
    title="Dataset visualization"
    width="100%"
  ></img>
</p>
<p align="center">
  <i>Dataset visualizer</i>
</p>
## Train a policy

To train a policy to control your robot, use the [`lerobot-train`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/scripts/train.py) script. A few arguments are required. Here is an example command:

```bash
lerobot-train \
  --dataset.repo_id=${HF_USER}/il_gym \
  --policy.type=act \
  --output_dir=outputs/train/il_sim_test \
  --job_name=il_sim_test \
  --policy.device=cuda \
  --wandb.enable=true
```

Let's explain the command:

1. We provided the dataset as argument with `--dataset.repo_id=${HF_USER}/il_gym`.
2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/policies/act/configuration_act.py). Importantly, this policy automatically adapts to the number of motor states, motor actions and cameras that have been saved in your dataset.
3. We provided `policy.device=cuda` since we are training on an NVIDIA GPU, but you could use `policy.device=mps` to train on Apple silicon.
4. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional, but if you use it, make sure you are logged in by running `wandb login`.

Training should take several hours: 100k steps (the default) takes about 1 hour on an NVIDIA A100. You will find checkpoints in `outputs/train/il_sim_test/checkpoints`.
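
As a quick check, you can reload a checkpoint from that folder before evaluating it in sim. A sketch, assuming `ACTPolicy.from_pretrained` follows the usual Hugging Face `from_pretrained` convention:

```python
# Reload the last checkpoint and count its parameters.
from lerobot.policies.act.modeling_act import ACTPolicy

ckpt = "outputs/train/il_sim_test/checkpoints/last/pretrained_model"
policy = ACTPolicy.from_pretrained(ckpt)
print(sum(p.numel() for p in policy.parameters()), "parameters")
```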
#### Train using Colab

If your local computer doesn't have a powerful GPU, you can use Google Colab to train your model by following the [ACT training notebook](./notebooks#training-act).

#### Upload policy checkpoints

Once training is done, upload the latest checkpoint with:

```bash
huggingface-cli upload ${HF_USER}/il_sim_test \
  outputs/train/il_sim_test/checkpoints/last/pretrained_model
```

You can also upload intermediate checkpoints with:

```bash
CKPT=010000
huggingface-cli upload ${HF_USER}/il_sim_test_${CKPT} \
  outputs/train/il_sim_test/checkpoints/${CKPT}/pretrained_model
```
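
To double-check that an upload succeeded, you can list the files in the model repo. A sketch using the `huggingface_hub` API; replace the repo ID with your own:

```python
# List the files that ended up in the uploaded model repo.
from huggingface_hub import list_repo_files

files = list_repo_files("your_username/il_sim_test")
print("\n".join(files))  # expect the policy config and weights files
```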
## Evaluate your policy in Sim

To evaluate your policy, you need a configuration file. An example can be found [here](https://huggingface.co/datasets/aractingi/lerobot-example-config-files/blob/main/eval_config_gym_hil.json).

Here's an example evaluation configuration:

```json
{
    "env": {
        "type": "gym_manipulator",
        "name": "gym_hil",
        "task": "PandaPickCubeGamepad-v0",
        "fps": 10
    },
    "dataset": {
        "repo_id": "your_username/il_sim_dataset",
        "root": null,
        "task": "pick_cube"
    },
    "pretrained_policy_name_or_path": "your_username/il_sim_model",
    "device": "cuda"
}
```

Make sure to replace:

- `repo_id` with the dataset you trained on (e.g., `your_username/il_sim_dataset`)
- `pretrained_policy_name_or_path` with your model ID (e.g., `your_username/il_sim_model`)
Then you can run this command to visualize your trained policy:

<hfoptions id="eval_policy">
<hfoption id="Linux">

```bash
python -m lerobot.scripts.rl.eval_policy --config_path=path/to/eval_config_gym_hil.json
```

</hfoption>
<hfoption id="MacOS">

```bash
mjpython -m lerobot.scripts.rl.eval_policy --config_path=path/to/eval_config_gym_hil.json
```

</hfoption>
</hfoptions>
> [!WARNING]
> While the main workflow of training ACT in simulation is straightforward, there is significant room for exploring how to set up the task, define the initial state of the environment, and determine the type of data required during collection to learn the most effective policy. If your trained policy doesn't perform well, investigate the quality of the dataset it was trained on using our visualizers, as well as the action values and various hyperparameters related to ACT and the simulation.
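
One concrete way to start that investigation is to look at the recorded action values themselves, e.g. their range and spread per dimension. A sketch; the subsampling stride is arbitrary and just keeps it fast on large datasets:

```python
# Inspect per-dimension action statistics to spot teleoperation outliers.
import torch
from lerobot.datasets.lerobot_dataset import LeRobotDataset

ds = LeRobotDataset("your_username/il_gym")
actions = torch.stack([ds[i]["action"] for i in range(0, len(ds), 50)])
print("min: ", actions.min(dim=0).values)
print("max: ", actions.max(dim=0).values)
print("std: ", actions.std(dim=0))
```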
Congrats 🎉, you have finished this tutorial. If you want to continue using LeRobot in simulation, follow the [Tutorial on reinforcement learning in sim with HIL-SERL](https://huggingface.co/docs/lerobot/hilserl_sim).

> [!TIP]
> If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb).