chore: remove usernames + use entrypoints in docs, comments & sample commands (#2988)

Steven Palma
2026-02-18 22:46:12 +01:00
committed by GitHub
parent bc38261321
commit 5f15232271
17 changed files with 97 additions and 97 deletions
+42 -42
@@ -28,9 +28,9 @@ We don't expect the same optimal settings for a dataset of images from a simulat
 For these reasons, we run this benchmark on four representative datasets:
 - `lerobot/pusht_image`: (96 x 96 pixels) simulation with simple geometric shapes, fixed camera.
-- `aliberts/aloha_mobile_shrimp_image`: (480 x 640 pixels) real-world indoor, moving camera.
-- `aliberts/paris_street`: (720 x 1280 pixels) real-world outdoor, moving camera.
-- `aliberts/kitchen`: (1080 x 1920 pixels) real-world indoor, fixed camera.
+- `lerobot/aloha_mobile_shrimp_image`: (480 x 640 pixels) real-world indoor, moving camera.
+- `lerobot/paris_street`: (720 x 1280 pixels) real-world outdoor, moving camera.
+- `lerobot/kitchen`: (1080 x 1920 pixels) real-world indoor, fixed camera.

 Note: The datasets used for this benchmark need to be image datasets, not video datasets.
@@ -179,7 +179,7 @@ python benchmark/video/run_video_benchmark.py \
     --output-dir outputs/video_benchmark \
     --repo-ids \
     lerobot/pusht_image \
-    aliberts/aloha_mobile_shrimp_image \
+    lerobot/aloha_mobile_shrimp_image \
     --vcodec libx264 libx265 \
     --pix-fmt yuv444p yuv420p \
     --g 2 20 None \
@@ -203,9 +203,9 @@ python benchmark/video/run_video_benchmark.py \
     --output-dir outputs/video_benchmark \
     --repo-ids \
     lerobot/pusht_image \
-    aliberts/aloha_mobile_shrimp_image \
-    aliberts/paris_street \
-    aliberts/kitchen \
+    lerobot/aloha_mobile_shrimp_image \
+    lerobot/paris_street \
+    lerobot/kitchen \
     --vcodec libx264 libx265 \
     --pix-fmt yuv444p yuv420p \
     --g 1 2 3 4 5 6 10 15 20 40 None \
@@ -221,9 +221,9 @@ python benchmark/video/run_video_benchmark.py \
     --output-dir outputs/video_benchmark \
     --repo-ids \
     lerobot/pusht_image \
-    aliberts/aloha_mobile_shrimp_image \
-    aliberts/paris_street \
-    aliberts/kitchen \
+    lerobot/aloha_mobile_shrimp_image \
+    lerobot/paris_street \
+    lerobot/kitchen \
     --vcodec libsvtav1 \
     --pix-fmt yuv420p \
     --g 1 2 3 4 5 6 10 15 20 40 None \
@@ -252,37 +252,37 @@ Since we're using av1 encoding, we're choosing the `pyav` decoder as `video_read
 These tables show the results for `g=2` and `crf=30`, using `timestamps-modes=6_frames` and `backend=pyav`

 | video_images_size_ratio | vcodec | pix_fmt | | | |
 | --- | --- | --- | --- | --- | --- |
 | | libx264 | | libx265 | | libsvtav1 |
 | repo_id | yuv420p | yuv444p | yuv420p | yuv444p | yuv420p |
 | lerobot/pusht_image | **16.97%** | 17.58% | 18.57% | 18.86% | 22.06% |
-| aliberts/aloha_mobile_shrimp_image | 2.14% | 2.11% | 1.38% | **1.37%** | 5.59% |
-| aliberts/paris_street | 2.12% | 2.13% | **1.54%** | **1.54%** | 4.43% |
-| aliberts/kitchen | 1.40% | 1.39% | **1.00%** | **1.00%** | 2.52% |
+| lerobot/aloha_mobile_shrimp_image | 2.14% | 2.11% | 1.38% | **1.37%** | 5.59% |
+| lerobot/paris_street | 2.12% | 2.13% | **1.54%** | **1.54%** | 4.43% |
+| lerobot/kitchen | 1.40% | 1.39% | **1.00%** | **1.00%** | 2.52% |

 | video_images_load_time_ratio | vcodec | pix_fmt | | | |
 | --- | --- | --- | --- | --- | --- |
 | | libx264 | | libx265 | | libsvtav1 |
 | repo_id | yuv420p | yuv444p | yuv420p | yuv444p | yuv420p |
 | lerobot/pusht_image | 6.45 | 5.19 | **1.90** | 2.12 | 2.47 |
-| aliberts/aloha_mobile_shrimp_image | 11.80 | 7.92 | 0.71 | 0.85 | **0.48** |
-| aliberts/paris_street | 2.21 | 2.05 | 0.36 | 0.49 | **0.30** |
-| aliberts/kitchen | 1.46 | 1.46 | 0.28 | 0.51 | **0.26** |
+| lerobot/aloha_mobile_shrimp_image | 11.80 | 7.92 | 0.71 | 0.85 | **0.48** |
+| lerobot/paris_street | 2.21 | 2.05 | 0.36 | 0.49 | **0.30** |
+| lerobot/kitchen | 1.46 | 1.46 | 0.28 | 0.51 | **0.26** |

 | | | vcodec | pix_fmt | | | |
 | --- | --- | --- | --- | --- | --- | --- |
 | | | libx264 | | libx265 | | libsvtav1 |
 | repo_id | metric | yuv420p | yuv444p | yuv420p | yuv444p | yuv420p |
 | lerobot/pusht_image | avg_mse | 2.90E-04 | **2.03E-04** | 3.13E-04 | 2.29E-04 | 2.19E-04 |
 | | avg_psnr | 35.44 | 37.07 | 35.49 | **37.30** | 37.20 |
 | | avg_ssim | 98.28% | **98.85%** | 98.31% | 98.84% | 98.72% |
-| aliberts/aloha_mobile_shrimp_image | avg_mse | 2.76E-04 | 2.59E-04 | 3.17E-04 | 3.06E-04 | **1.30E-04** |
+| lerobot/aloha_mobile_shrimp_image | avg_mse | 2.76E-04 | 2.59E-04 | 3.17E-04 | 3.06E-04 | **1.30E-04** |
 | | avg_psnr | 35.91 | 36.21 | 35.88 | 36.09 | **40.17** |
 | | avg_ssim | 95.19% | 95.18% | 95.00% | 95.05% | **97.73%** |
-| aliberts/paris_street | avg_mse | 6.89E-04 | 6.70E-04 | 4.03E-03 | 4.02E-03 | **3.09E-04** |
+| lerobot/paris_street | avg_mse | 6.89E-04 | 6.70E-04 | 4.03E-03 | 4.02E-03 | **3.09E-04** |
 | | avg_psnr | 33.48 | 33.68 | 32.05 | 32.15 | **35.40** |
 | | avg_ssim | 93.76% | 93.75% | 89.46% | 89.46% | **95.46%** |
-| aliberts/kitchen | avg_mse | 2.50E-04 | 2.24E-04 | 4.28E-04 | 4.18E-04 | **1.53E-04** |
+| lerobot/kitchen | avg_mse | 2.50E-04 | 2.24E-04 | 4.28E-04 | 4.18E-04 | **1.53E-04** |
 | | avg_psnr | 36.73 | 37.33 | 36.56 | 36.75 | **39.12** |
 | | avg_ssim | 95.47% | 95.58% | 95.52% | 95.53% | **96.82%** |
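As a sanity check on these quality tables, `avg_mse` and `avg_psnr` are tied together by PSNR = 10 · log10(MAX² / MSE), with MAX = 1 for frames normalized to [0, 1]. A minimal sketch, assuming that normalization; small deviations from the table are expected because the benchmark presumably averages per frame before aggregating:

```python
import math

def psnr_from_mse(mse: float, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB for images with values in [0, max_val]."""
    return 10 * math.log10(max_val**2 / mse)

# lerobot/pusht_image, libx264/yuv420p: avg_mse = 2.90E-04, table reports avg_psnr = 35.44
print(round(psnr_from_mse(2.90e-4), 2))  # ~35.38, close to the reported 35.44
```

The residual gap (35.38 vs 35.44) is consistent with averaging MSE and PSNR independently over frames rather than deriving one from the other.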
+1 -1
@@ -185,7 +185,7 @@ echo $HF_USER
 Use the standard recording command:

 ```bash
-python src/lerobot/scripts/lerobot_record.py \
+lerobot-record \
     --robot.type=earthrover_mini_plus \
     --teleop.type=keyboard_rover \
     --dataset.repo_id=your_username/dataset_name \
+5 -5
@@ -224,7 +224,7 @@ lerobot-record \
     --teleop.port=/dev/tty.usbmodem1201 \
     --teleop.id=right \
     --teleop.side=right \
-    --dataset.repo_id=nepyope/hand_record_test_with_video_data \
+    --dataset.repo_id=<USER>/hand_record_test_with_video_data \
     --dataset.single_task="Hand recording test with video data" \
     --dataset.num_episodes=1 \
     --dataset.episode_time_s=5 \
@@ -241,7 +241,7 @@ lerobot-replay \
     --robot.port=/dev/tty.usbmodem58760432281 \
     --robot.id=right \
     --robot.side=right \
-    --dataset.repo_id=nepyope/hand_record_test_with_camera \
+    --dataset.repo_id=<USER>/hand_record_test_with_camera \
     --dataset.episode=0
 ```
@@ -249,13 +249,13 @@ lerobot-replay \
 ```bash
 lerobot-train \
-    --dataset.repo_id=nepyope/hand_record_test_with_video_data \
+    --dataset.repo_id=<USER>/hand_record_test_with_video_data \
     --policy.type=act \
     --output_dir=outputs/train/hopejr_hand \
     --job_name=hopejr \
     --policy.device=mps \
     --wandb.enable=true \
-    --policy.repo_id=nepyope/hand_test_policy
+    --policy.repo_id=<USER>/hand_test_policy
 ```

 ### Evaluate
@@ -270,7 +270,7 @@ lerobot-record \
     --robot.side=right \
     --robot.cameras='{"main": {"type": "opencv", "index_or_path": 0, "width": 640, "height": 480, "fps": 30}}' \
     --display_data=false \
-    --dataset.repo_id=nepyope/eval_hopejr \
+    --dataset.repo_id=<USER>/eval_hopejr \
     --dataset.single_task="Evaluate hopejr hand policy" \
     --dataset.num_episodes=10 \
     --policy.path=outputs/train/hopejr_hand/checkpoints/last/pretrained_model
+1 -1
@@ -60,7 +60,7 @@ policy.type=pi0
 For training π₀, you can use the standard LeRobot training script with the appropriate configuration:

 ```bash
-python src/lerobot/scripts/lerobot_train.py \
+lerobot-train \
     --dataset.repo_id=your_dataset \
     --policy.type=pi0 \
     --output_dir=./outputs/pi0_training \
+1 -1
@@ -56,7 +56,7 @@ policy.type=pi05
 Here's a complete training command for finetuning the base π₀.₅ model on your own dataset:

 ```bash
-python src/lerobot/scripts/lerobot_train.py\
+lerobot-train \
     --dataset.repo_id=your_dataset \
     --policy.type=pi05 \
     --output_dir=./outputs/pi05_training \
+4 -4
@@ -269,7 +269,7 @@ This generates visualizations showing video frames with subtask boundaries overl
 Train with **no annotations** - uses linear progress from 0 to 1:

 ```bash
-python src/lerobot/scripts/lerobot_train.py \
+lerobot-train \
     --dataset.repo_id=your-username/your-dataset \
     --policy.type=sarm \
     --policy.annotation_mode=single_stage \
@@ -288,7 +288,7 @@ python src/lerobot/scripts/lerobot_train.py \
 Train with **dense annotations only** (sparse auto-generated):

 ```bash
-python src/lerobot/scripts/lerobot_train.py \
+lerobot-train \
     --dataset.repo_id=your-username/your-dataset \
     --policy.type=sarm \
     --policy.annotation_mode=dense_only \
@@ -307,7 +307,7 @@ python src/lerobot/scripts/lerobot_train.py \
 Train with **both sparse and dense annotations**:

 ```bash
-python src/lerobot/scripts/lerobot_train.py \
+lerobot-train \
     --dataset.repo_id=your-username/your-dataset \
     --policy.type=sarm \
     --policy.annotation_mode=dual \
@@ -468,7 +468,7 @@ This script:
 Once you have the progress file, train your policy with RA-BC weighting. The progress file is auto-detected from the dataset path (`sarm_progress.parquet`). Currently PI0, PI0.5 and SmolVLA are supported with RA-BC:

 ```bash
-python src/lerobot/scripts/lerobot_train.py \
+lerobot-train \
     --dataset.repo_id=your-username/your-dataset \
     --policy.type=pi0 \
     --use_rabc=true \
+2 -2
@@ -216,7 +216,7 @@ lerobot-teleoperate \
 ### Record Dataset in Simulation

 ```bash
-python -m lerobot.scripts.lerobot_record \
+lerobot-record \
     --robot.type=unitree_g1 \
     --robot.is_simulation=true \
     --robot.cameras='{"global_view": {"type": "zmq", "server_address": "localhost", "port": 5555, "camera_name": "head_camera", "width": 640, "height": 480, "fps": 30}}' \
@@ -266,7 +266,7 @@ lerobot-teleoperate \
 ### Record Dataset on Real Robot

 ```bash
-python -m lerobot.scripts.lerobot_record \
+lerobot-record \
     --robot.type=unitree_g1 \
     --robot.is_simulation=false \
     --robot.cameras='{"global_view": {"type": "zmq", "server_address": "172.18.129.215", "port": 5555, "camera_name": "head_camera", "width": 640, "height": 480, "fps": 30}}' \
+1 -1
@@ -45,7 +45,7 @@ policy.type=wall_x
 For training WallX, you can use the standard LeRobot training script with the appropriate configuration:

 ```bash
-python src/lerobot/scripts/lerobot_train.py \
+lerobot-train \
     --dataset.repo_id=your_dataset \
     --policy.type=wall_x \
     --output_dir=./outputs/wallx_training \
+1 -1
@@ -154,7 +154,7 @@ lerobot-train \
 ```bash
 lerobot-train \
-    --dataset.repo_id=pepijn223/bimanual-so100-handover-cube \
+    --dataset.repo_id=<USER>/bimanual-so100-handover-cube \
     --output_dir=./outputs/xvla_bimanual \
     --job_name=xvla_so101_training \
     --policy.path="lerobot/xvla-base" \
+1 -1
@@ -22,7 +22,7 @@ lerobot-replay \
     --robot.type=so100_follower \
     --robot.port=/dev/tty.usbmodem58760431541 \
     --robot.id=black \
-    --dataset.repo_id=aliberts/record-test \
+    --dataset.repo_id=<USER>/record-test \
     --dataset.episode=2
 ```
 """
+10 -10
@@ -27,8 +27,8 @@ measuring consistency and ground truth alignment.
 Usage:
     # Basic usage with smolvla policy
     uv run python examples/rtc/eval_dataset.py \
-        --policy.path=helper2424/smolvla_check_rtc_last3 \
-        --dataset.repo_id=helper2424/check_rtc \
+        --policy.path=<USER>/smolvla_check_rtc_last3 \
+        --dataset.repo_id=<USER>/check_rtc \
         --rtc.execution_horizon=8 \
         --device=mps \
         --rtc.max_guidance_weight=10.0 \
@@ -58,16 +58,16 @@ Usage:
         --device=cuda

     uv run python examples/rtc/eval_dataset.py \
-        --policy.path=lipsop/reuben_pi0 \
-        --dataset.repo_id=ReubenLim/so101_cube_in_cup \
+        --policy.path=<USER>/reuben_pi0 \
+        --dataset.repo_id=<USER>/so101_cube_in_cup \
         --rtc.execution_horizon=8 \
         --device=cuda

     # With torch.compile for faster inference (PyTorch 2.0+)
     # Note: CUDA graphs disabled by default due to in-place ops in denoising loop
     uv run python examples/rtc/eval_dataset.py \
-        --policy.path=helper2424/smolvla_check_rtc_last3 \
-        --dataset.repo_id=helper2424/check_rtc \
+        --policy.path=<USER>/smolvla_check_rtc_last3 \
+        --dataset.repo_id=<USER>/check_rtc \
         --rtc.execution_horizon=8 \
         --device=mps \
         --use_torch_compile=true \
@@ -75,8 +75,8 @@ Usage:
     # With torch.compile on CUDA (CUDA graphs disabled by default)
     uv run python examples/rtc/eval_dataset.py \
-        --policy.path=helper2424/smolvla_check_rtc_last3 \
-        --dataset.repo_id=helper2424/check_rtc \
+        --policy.path=<USER>/smolvla_check_rtc_last3 \
+        --dataset.repo_id=<USER>/check_rtc \
         --rtc.execution_horizon=8 \
         --device=cuda \
         --use_torch_compile=true \
@@ -84,8 +84,8 @@ Usage:
     # Enable CUDA graphs (advanced - may cause tensor aliasing errors)
     uv run python examples/rtc/eval_dataset.py \
-        --policy.path=helper2424/smolvla_check_rtc_last3 \
-        --dataset.repo_id=helper2424/check_rtc \
+        --policy.path=<USER>/smolvla_check_rtc_last3 \
+        --dataset.repo_id=<USER>/check_rtc \
         --use_torch_compile=true \
         --torch_compile_backend=inductor \
         --torch_compile_mode=max-autotune \
+3 -3
@@ -28,7 +28,7 @@ For simulation environments, see eval_with_simulation.py
 Usage:
     # Run RTC with Real robot with RTC
     uv run examples/rtc/eval_with_real_robot.py \
-        --policy.path=helper2424/smolvla_check_rtc_last3 \
+        --policy.path=<USER>/smolvla_check_rtc_last3 \
         --policy.device=mps \
         --rtc.enabled=true \
         --rtc.execution_horizon=20 \
@@ -41,7 +41,7 @@ Usage:
     # Run RTC with Real robot without RTC
     uv run examples/rtc/eval_with_real_robot.py \
-        --policy.path=helper2424/smolvla_check_rtc_last3 \
+        --policy.path=<USER>/smolvla_check_rtc_last3 \
         --policy.device=mps \
         --rtc.enabled=false \
         --robot.type=so100_follower \
@@ -53,7 +53,7 @@ Usage:
     # Run RTC with Real robot with pi0.5 policy
     uv run examples/rtc/eval_with_real_robot.py \
-        --policy.path=helper2424/pi05_check_rtc \
+        --policy.path=<USER>/pi05_check_rtc \
         --policy.device=mps \
         --rtc.enabled=true \
         --rtc.execution_horizon=20 \
@@ -529,7 +529,7 @@ if __name__ == "__main__":
         type=str,
         required=True,
         help="Repository identifier on Hugging Face: a community or a user name `/` the name of the dataset "
-        "(e.g. `lerobot/pusht`, `cadene/aloha_sim_insertion_human`).",
+        "(e.g. `lerobot/pusht`, `<USER>/aloha_sim_insertion_human`).",
     )
     parser.add_argument(
         "--branch",
@@ -27,18 +27,18 @@ Usage:
     # Full RA-BC computation with visualizations
     python src/lerobot/policies/sarm/compute_rabc_weights.py \\
         --dataset-repo-id lerobot/aloha_sim_insertion_human \\
-        --reward-model-path pepijn223/sarm_single_uni4
+        --reward-model-path <USER>/sarm_single_uni4

     # Faster computation with stride (compute every 5 frames, interpolate the rest)
     python src/lerobot/policies/sarm/compute_rabc_weights.py \\
         --dataset-repo-id lerobot/aloha_sim_insertion_human \\
-        --reward-model-path pepijn223/sarm_single_uni4 \\
+        --reward-model-path <USER>/sarm_single_uni4 \\
         --stride 5

     # Visualize predictions only (no RA-BC computation)
     python src/lerobot/policies/sarm/compute_rabc_weights.py \\
         --dataset-repo-id lerobot/aloha_sim_insertion_human \\
-        --reward-model-path pepijn223/sarm_single_uni4 \\
+        --reward-model-path <USER>/sarm_single_uni4 \\
         --visualize-only \\
         --num-visualizations 5
@@ -714,12 +714,12 @@ Examples:
     # Full RA-BC computation with visualizations
     python src/lerobot/policies/sarm/compute_rabc_weights.py \\
         --dataset-repo-id lerobot/aloha_sim_insertion_human \\
-        --reward-model-path pepijn223/sarm_single_uni4
+        --reward-model-path <USER>/sarm_single_uni4

     # Visualize predictions only (no RA-BC computation)
     python src/lerobot/policies/sarm/compute_rabc_weights.py \\
         --dataset-repo-id lerobot/aloha_sim_insertion_human \\
-        --reward-model-path pepijn223/sarm_single_uni4 \\
+        --reward-model-path <USER>/sarm_single_uni4 \\
         --visualize-only \\
         --num-visualizations 10
 """,
@@ -30,7 +30,7 @@ Example of finetuning the smolvla pretrained model (`smolvla_base`):
 ```bash
 lerobot-train \
     --policy.path=lerobot/smolvla_base \
-    --dataset.repo_id=danaaubakirova/svla_so100_task1_v3 \
+    --dataset.repo_id=<USER>/svla_so100_task1_v3 \
     --batch_size=64 \
     --steps=200000
 ```
@@ -40,7 +40,7 @@ and an action expert.
 ```bash
 lerobot-train \
     --policy.type=smolvla \
-    --dataset.repo_id=danaaubakirova/svla_so100_task1_v3 \
+    --dataset.repo_id=<USER>/svla_so100_task1_v3 \
     --batch_size=64 \
     --steps=200000
 ```
+16 -16
@@ -24,100 +24,100 @@ When new_repo_id is specified, creates a new dataset.
 Usage Examples:
     Delete episodes 0, 2, and 5 from a dataset:
-    python -m lerobot.scripts.lerobot_edit_dataset \
+    lerobot-edit-dataset \
         --repo_id lerobot/pusht \
         --operation.type delete_episodes \
         --operation.episode_indices "[0, 2, 5]"

     Delete episodes and save to a new dataset:
-    python -m lerobot.scripts.lerobot_edit_dataset \
+    lerobot-edit-dataset \
         --repo_id lerobot/pusht \
         --new_repo_id lerobot/pusht_filtered \
         --operation.type delete_episodes \
         --operation.episode_indices "[0, 2, 5]"

     Split dataset by fractions:
-    python -m lerobot.scripts.lerobot_edit_dataset \
+    lerobot-edit-dataset \
         --repo_id lerobot/pusht \
         --operation.type split \
         --operation.splits '{"train": 0.8, "val": 0.2}'

     Split dataset by episode indices:
-    python -m lerobot.scripts.lerobot_edit_dataset \
+    lerobot-edit-dataset \
         --repo_id lerobot/pusht \
         --operation.type split \
         --operation.splits '{"train": [0, 1, 2, 3], "val": [4, 5]}'

     Split into more than two splits:
-    python -m lerobot.scripts.lerobot_edit_dataset \
+    lerobot-edit-dataset \
         --repo_id lerobot/pusht \
         --operation.type split \
         --operation.splits '{"train": 0.6, "val": 0.2, "test": 0.2}'
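The fraction form of `--operation.splits` maps naturally onto episode index ranges. A minimal sketch of one plausible assignment scheme (a hypothetical helper for illustration, not the actual lerobot implementation, which may shuffle or round differently):

```python
def split_episodes(num_episodes: int, fractions: dict[str, float]) -> dict[str, list[int]]:
    """Assign contiguous episode index ranges to named splits by fraction."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-6, "fractions must sum to 1"
    splits, start = {}, 0
    for i, (name, frac) in enumerate(fractions.items()):
        # give the last split whatever remains, avoiding rounding gaps
        end = num_episodes if i == len(fractions) - 1 else start + round(num_episodes * frac)
        splits[name] = list(range(start, end))
        start = end
    return splits

print(split_episodes(10, {"train": 0.6, "val": 0.2, "test": 0.2}))
# {'train': [0, 1, 2, 3, 4, 5], 'val': [6, 7], 'test': [8, 9]}
```

Note how the last split absorbs rounding remainders, so every episode lands in exactly one split.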
     Merge multiple datasets:
-    python -m lerobot.scripts.lerobot_edit_dataset \
+    lerobot-edit-dataset \
         --repo_id lerobot/pusht_merged \
         --operation.type merge \
         --operation.repo_ids "['lerobot/pusht_train', 'lerobot/pusht_val']"

     Remove camera feature:
-    python -m lerobot.scripts.lerobot_edit_dataset \
+    lerobot-edit-dataset \
         --repo_id lerobot/pusht \
         --operation.type remove_feature \
         --operation.feature_names "['observation.images.top']"

     Modify tasks - set a single task for all episodes (WARNING: modifies in-place):
-    python -m lerobot.scripts.lerobot_edit_dataset \
+    lerobot-edit-dataset \
         --repo_id lerobot/pusht \
         --operation.type modify_tasks \
         --operation.new_task "Pick up the cube and place it"

     Modify tasks - set different tasks for specific episodes (WARNING: modifies in-place):
-    python -m lerobot.scripts.lerobot_edit_dataset \
+    lerobot-edit-dataset \
         --repo_id lerobot/pusht \
         --operation.type modify_tasks \
         --operation.episode_tasks '{"0": "Task A", "1": "Task B", "2": "Task A"}'

     Modify tasks - set default task with overrides for specific episodes (WARNING: modifies in-place):
-    python -m lerobot.scripts.lerobot_edit_dataset \
+    lerobot-edit-dataset \
         --repo_id lerobot/pusht \
         --operation.type modify_tasks \
         --operation.new_task "Default task" \
         --operation.episode_tasks '{"5": "Special task for episode 5"}'
     Convert image dataset to video format and save locally:
-    python -m lerobot.scripts.lerobot_edit_dataset \
+    lerobot-edit-dataset \
         --repo_id lerobot/pusht_image \
         --operation.type convert_image_to_video \
         --operation.output_dir /path/to/output/pusht_video

     Convert image dataset to video format and save with new repo_id:
-    python -m lerobot.scripts.lerobot_edit_dataset \
+    lerobot-edit-dataset \
         --repo_id lerobot/pusht_image \
         --new_repo_id lerobot/pusht_video \
         --operation.type convert_image_to_video

     Convert image dataset to video format and push to hub:
-    python -m lerobot.scripts.lerobot_edit_dataset \
+    lerobot-edit-dataset \
         --repo_id lerobot/pusht_image \
         --new_repo_id lerobot/pusht_video \
         --operation.type convert_image_to_video \
         --push_to_hub true

     Show dataset information:
-    python -m lerobot.scripts.lerobot_edit_dataset \
+    lerobot-edit-dataset \
         --repo_id lerobot/pusht_image \
         --operation.type info \
         --operation.show_features true

     Show dataset information without feature details:
-    python -m lerobot.scripts.lerobot_edit_dataset \
+    lerobot-edit-dataset \
         --repo_id lerobot/pusht_image \
         --operation.type info \
         --operation.show_features false

     Using JSON config file:
-    python -m lerobot.scripts.lerobot_edit_dataset \
+    lerobot-edit-dataset \
         --config_path path/to/edit_config.json
 """
+1 -1
@@ -22,7 +22,7 @@ lerobot-replay \
     --robot.type=so100_follower \
     --robot.port=/dev/tty.usbmodem58760431541 \
     --robot.id=black \
-    --dataset.repo_id=aliberts/record-test \
+    --dataset.repo_id=<USER>/record-test \
     --dataset.episode=0
 ```