Mirror of https://github.com/huggingface/lerobot.git
Synced 2026-05-11 14:49:43 +00:00
Commit a147fa4439
* feat(ci): add RoboCerebra benchmark eval job
  - Docker image with robosuite/libero deps for RoboCerebra eval
  - CI workflow: 1-episode eval with pepijn223/smolvla_robocerebra
  - Reuses libero env with rename_map + empty_cameras=3

* docs(robocerebra): add benchmark page and toctree entry
  Add a dedicated docs page for RoboCerebra that points at the canonical dataset lerobot/robocerebra_unified and shows how to run eval + fine-tune against it. Wire it into the Benchmarks section of the toctree so doc-builder picks it up.

* ci: point benchmark eval checkpoints at the lerobot/ org mirrors
  pepijn223/smolvla_* → lerobot/smolvla_* across every benchmark job in this branch (libero, metaworld, and the per-branch benchmark). The checkpoints were mirrored into the lerobot/ org and that's the canonical location going forward.

* fix(robocerebra): drop alias extra + simplify docker image
  Two problems rolled up:
  1. `uv sync --locked --extra test` was failing because pyproject.toml added a `robocerebra = ["lerobot[libero]"]` alias extra but uv.lock wasn't regenerated. Drop the alias — nothing in CI actually needs the extra name (the Dockerfile just installs what it needs directly), so this restores pyproject.toml and uv.lock to byte-exact origin/main.
  2. Rebase docker/Dockerfile.benchmark.robocerebra on huggingface/lerobot-gpu:latest (same pattern as libero/metaworld/…). The nightly image already ships lerobot[all], which includes [libero], so the RoboCerebra image is essentially identical to the LIBERO one: fetch libero-assets, write ~/.libero/config.yaml, overlay source. 92 → 43 lines.
  Also repoint the CI workflow comment that referenced the removed extra.

* ci: use dedicated lerobot/smolvla_robocerebra checkpoint for smoke eval
  Replace the generic pepijn223/smolvla_libero placeholder with the purpose-trained lerobot/smolvla_robocerebra model in the RoboCerebra CI smoke test.
* fix(ci): align RoboCerebra eval with training pipeline
  Fixes 5 mismatches that caused a 0% success rate:
  - env.type: robocerebra (unregistered) → libero
  - resolution: 360x360 (default) → 256x256 (matches dataset)
  - camera_name_mapping: map eye_in_hand → wrist_image (not image2)
  - empty_cameras: 3 → 1 (matches training)
  - add HF_USER_TOKEN guard on eval step

* fix(ci): set env.fps=20 and explicit obs_type for RoboCerebra eval
  Match the dataset's 20 FPS (LiberoEnv defaults to 30) and make obs_type=pixels_agent_pos explicit for safety against future changes.

* docs(robocerebra): align page with adding_benchmarks template
  Rework docs/source/robocerebra.mdx to follow the standard benchmark doc structure: intro + links + available tasks + installation + eval + recommended episodes + policy I/O + training + reproducing results.
  - Point everything at lerobot/smolvla_robocerebra (the released checkpoint), not the personal pepijn223 mirror.
  - Add the --env.fps=20 and --env.obs_type=pixels_agent_pos flags that CI actually uses, so copy-paste eval reproduces CI.
  - Split the "Training" block out of the recipe section into its own section with the feature table.
  - Add an explicit "Reproducing published results" section pointing at the CI smoke eval.

* fix: integrate PR #3314 review feedback
  - ci(robocerebra): drop redundant hf auth login step (auth is already performed inside the eval step's container).
  - ci(robocerebra): add Docker Hub login before the image build to pick up the authenticated rate limit.
  - docs(robocerebra): align eval snippet with the CI command (observation size, camera_name_mapping, use_async_envs, device, empty_cameras=1).

* fix(envs): preserve AsyncVectorEnv metadata/unwrapped in lazy eval envs
  Port of #3416 onto this branch.
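The commits above describe the aligned eval settings without showing the workflow itself. A minimal sketch of what such a CI step could look like, assuming a `lerobot-eval` entry point and these flag spellings; only the values (env.type=libero, fps=20, obs_type=pixels_agent_pos, the lerobot/smolvla_robocerebra checkpoint, the HF_USER_TOKEN guard, and the 1-episode smoke run) come from the commit messages, the rest is hypothetical:

```yaml
# Hypothetical smoke-eval step; step name and flag syntax are assumptions.
- name: RoboCerebra smoke eval
  # Guard the step so forks without the secret skip it (per the commit above)
  if: ${{ secrets.HF_USER_TOKEN != '' }}
  run: |
    lerobot-eval \
      --policy.path=lerobot/smolvla_robocerebra \
      --env.type=libero \
      --env.fps=20 \
      --env.obs_type=pixels_agent_pos \
      --eval.n_episodes=1
```

The point of the fix was that every one of these knobs (env type, resolution, camera mapping, empty cameras, fps) must match the training configuration, otherwise the eval silently reports 0% success.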
* ci: gate Docker Hub login on secret availability

* Update .github/workflows/benchmark_tests.yml
  Co-authored-by: Khalil Meftah <khalil.meftah@huggingface.co>
  Signed-off-by: Pepijn <138571049+pkooij@users.noreply.github.com>

* Update .github/workflows/benchmark_tests.yml
  Signed-off-by: Pepijn <138571049+pkooij@users.noreply.github.com>
  Co-authored-by: Khalil Meftah <khalil.meftah@huggingface.co>
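Gating the Docker Hub login on secret availability is a common GitHub Actions pattern so that forks (which have no access to repo secrets) don't fail the job. A minimal sketch, assuming `DOCKERHUB_USERNAME`/`DOCKERHUB_TOKEN` secret names, which are hypothetical — the actual names in benchmark_tests.yml are not shown here:

```yaml
# Hypothetical login step gated on the secret being present;
# the secrets context is available in step-level `if` conditions.
- name: Login to Docker Hub
  if: ${{ secrets.DOCKERHUB_TOKEN != '' }}
  uses: docker/login-action@v3
  with:
    username: ${{ secrets.DOCKERHUB_USERNAME }}
    password: ${{ secrets.DOCKERHUB_TOKEN }}
```

With the guard, fork PRs simply skip the login and build against the anonymous Docker Hub rate limit instead of erroring out.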
153 lines
3.7 KiB
YAML
- sections:
    - local: index
      title: LeRobot
    - local: installation
      title: Installation
  title: Get started
- sections:
    - local: il_robots
      title: Imitation Learning for Robots
    - local: bring_your_own_policies
      title: Bring Your Own Policies
    - local: integrate_hardware
      title: Bring Your Own Hardware
    - local: hilserl
      title: Train a Robot with RL
    - local: hilserl_sim
      title: Train RL in Simulation
    - local: multi_gpu_training
      title: Multi GPU training
    - local: hil_data_collection
      title: Human In the Loop Data Collection
    - local: peft_training
      title: Training with PEFT (e.g., LoRA)
    - local: rename_map
      title: Using Rename Map and Empty Cameras
  title: "Tutorials"
- sections:
    - local: lerobot-dataset-v3
      title: Using LeRobotDataset
    - local: porting_datasets_v3
      title: Porting Large Datasets
    - local: using_dataset_tools
      title: Using the Dataset Tools
    - local: dataset_subtask
      title: Using Subtasks in the Dataset
    - local: streaming_video_encoding
      title: Streaming Video Encoding
  title: "Datasets"
- sections:
    - local: act
      title: ACT
    - local: smolvla
      title: SmolVLA
    - local: pi0
      title: π₀ (Pi0)
    - local: pi0fast
      title: π₀-FAST (Pi0Fast)
    - local: pi05
      title: π₀.₅ (Pi05)
    - local: groot
      title: NVIDIA GR00T N1.5
    - local: xvla
      title: X-VLA
    - local: multi_task_dit
      title: Multitask DiT Policy
    - local: walloss
      title: WALL-OSS
  title: "Policies"
- sections:
    - local: sarm
      title: SARM
  title: "Reward Models"
- sections:
    - local: async
      title: Use Async Inference
    - local: rtc
      title: Real-Time Chunking (RTC)
  title: "Inference"
- sections:
    - local: envhub
      title: Environments from the Hub
    - local: envhub_leisaac
      title: Control & Train Robots in Sim (LeIsaac)
  title: "Simulation"
- sections:
    - local: adding_benchmarks
      title: Adding a New Benchmark
    - local: libero
      title: LIBERO
    - local: metaworld
      title: Meta-World
    - local: robotwin
      title: RoboTwin 2.0
    - local: robocasa
      title: RoboCasa365
    - local: robocerebra
      title: RoboCerebra
    - local: envhub_isaaclab_arena
      title: NVIDIA IsaacLab Arena Environments
  title: "Benchmarks"
- sections:
    - local: introduction_processors
      title: Introduction to Robot Processors
    - local: debug_processor_pipeline
      title: Debug your processor pipeline
    - local: implement_your_own_processor
      title: Implement your own processor
    - local: processors_robots_teleop
      title: Processors for Robots and Teleoperators
    - local: env_processor
      title: Environment Processors
    - local: action_representations
      title: Action Representations
  title: "Robot Processors"
- sections:
    - local: so101
      title: SO-101
    - local: so100
      title: SO-100
    - local: koch
      title: Koch v1.1
    - local: lekiwi
      title: LeKiwi
    - local: hope_jr
      title: Hope Jr
    - local: reachy2
      title: Reachy 2
    - local: unitree_g1
      title: Unitree G1
    - local: earthrover_mini_plus
      title: Earth Rover Mini
    - local: omx
      title: OMX
    - local: openarm
      title: OpenArm
  title: "Robots"
- sections:
    - local: phone_teleop
      title: Phone
  title: "Teleoperators"
- sections:
    - local: cameras
      title: Cameras
  title: "Sensors"
- sections:
    - local: torch_accelerators
      title: PyTorch accelerators
  title: "Supported Hardware"
- sections:
    - local: notebooks
      title: Notebooks
    - local: feetech
      title: Updating Feetech Firmware
    - local: damiao
      title: Damiao Motors and CAN Bus
  title: "Resources"
- sections:
    - local: contributing
      title: Contribute to LeRobot
    - local: backwardcomp
      title: Backward compatibility
  title: "About"