Compare commits

...

57 Commits

Author SHA1 Message Date
Pepijn Kooijmans 8770c011b0 feat(eval): add per-episode timing logs to eval worker
Logs avg env step time, avg inference call time, and totals per episode
to identify whether env or policy server is the bottleneck.

Made-with: Cursor
2026-03-25 07:30:49 +01:00
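The per-episode timing bookkeeping this commit describes can be sketched as follows; `EpisodeTimer` and its key names are illustrative stand-ins, not the eval worker's actual logging API.

```python
import time

# Hedged sketch: accumulate env-step and inference-call durations per episode,
# then report averages and totals to see which side is the bottleneck.
class EpisodeTimer:
    def __init__(self):
        self.env_step_times = []
        self.inference_times = []

    def timed(self, bucket, fn, *args, **kwargs):
        # Time one call and record the elapsed seconds in the given bucket.
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        bucket.append(time.perf_counter() - start)
        return result

    def summary(self):
        def avg(xs):
            return sum(xs) / len(xs) if xs else 0.0
        return {
            "avg_env_step_s": avg(self.env_step_times),
            "avg_inference_s": avg(self.inference_times),
            "total_env_s": sum(self.env_step_times),
            "total_inference_s": sum(self.inference_times),
        }

timer = EpisodeTimer()
timer.timed(timer.env_step_times, time.sleep, 0.01)   # stands in for env.step
timer.timed(timer.inference_times, time.sleep, 0.02)  # stands in for a server call
print(timer.summary())
```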
Pepijn Kooijmans ddcda8f1ca feat(eval): respect n_action_steps in inference server
The server now returns only n_action_steps actions from the predicted
chunk instead of the full chunk_size, enabling more frequent
re-planning when n_action_steps < chunk_size.

Made-with: Cursor
2026-03-25 06:21:02 +01:00
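The truncation this commit describes amounts to slicing the predicted chunk; a minimal sketch (function name illustrative, not the server's actual code):

```python
# Return only the first n_action_steps actions from a predicted chunk.
# Executing fewer actions before re-querying the server means the policy
# re-plans more often when n_action_steps < chunk_size.
def trim_chunk(action_chunk, n_action_steps):
    return action_chunk[:n_action_steps]

chunk = [[0.1 * i] for i in range(50)]   # chunk_size = 50
actions = trim_chunk(chunk, 10)          # n_action_steps = 10
print(len(actions))  # → 10
```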
Pepijn Kooijmans 4f8ebe41b3 fix(eval): create independent preprocessors per policy server
Each _InferenceServer now gets its own preprocessor/postprocessor
instances, preventing RuntimeError from HuggingFace tokenizer's
non-thread-safe Rust borrow checker when multiple servers run
concurrently.

Made-with: Cursor
2026-03-24 22:38:03 +01:00
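A sketch of the fix described above: each server constructs its own preprocessor rather than sharing one object across concurrently running servers. Class names are illustrative, not lerobot's actual types.

```python
class TokenizerPreprocessor:
    """Stands in for a wrapper around a non-thread-safe HF tokenizer."""
    def __call__(self, obs):
        return {"tokens": str(obs)}

class InferenceServer:
    def __init__(self, preprocessor):
        # Exclusively owned by this server — no cross-thread borrow conflicts.
        self.preprocessor = preprocessor

# One independent preprocessor per server, instead of one shared instance.
servers = [InferenceServer(TokenizerPreprocessor()) for _ in range(4)]
print(len({id(s.preprocessor) for s in servers}))  # → 4
```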
Pepijn Kooijmans 066976e078 feat(eval): add multiprocess runtime -- no Docker needed
New eval.runtime=multiprocess spawns local lerobot-eval-worker
subprocesses instead of Docker containers. Supports eval.policy_servers
for parallel inference. Works on SLURM clusters and anywhere Docker
is unavailable.

Usage: lerobot-eval --eval.runtime=multiprocess \
    --eval.instance_count=8 --eval.policy_servers=4 --eval.port=50051
Made-with: Cursor
2026-03-24 22:14:27 +01:00
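The multiprocess runtime boils down to launching local worker subprocesses instead of containers. A hedged sketch — the real runtime launches `lerobot-eval-worker` with its full CLI flags, which are simplified away here:

```python
import subprocess
import sys

# Spawn n local worker subprocesses and return their handles.
def spawn_workers(n, cmd_for):
    return [subprocess.Popen(cmd_for(i)) for i in range(n)]

# Use the current interpreter so the example runs anywhere; the actual
# command would be something like ["lerobot-eval-worker", "--port", ...].
procs = spawn_workers(2, lambda i: [sys.executable, "-c", f"print('worker {i}')"])
for p in procs:
    p.wait()
```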
Pepijn Kooijmans b3c2592ace feat(eval): multi-policy-server support for Docker eval
Add eval.policy_servers parameter (default 1) that spawns N independent
policy inference servers on consecutive ports. Containers are round-robin
assigned across servers, enabling parallel GPU inference for small models
like SmolVLA (~1.4GB each).

Usage: --eval.policy_servers=4 --eval.instance_count=20
  → 4 model copies on GPU, 20 containers distributed across them.
Made-with: Cursor
2026-03-24 20:28:58 +01:00
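The round-robin assignment described above can be sketched as a simple modulo over consecutive ports (function and default port are illustrative):

```python
# Assign each of instance_count containers to one of policy_servers inference
# servers listening on consecutive ports, round-robin.
def assign_ports(instance_count, policy_servers, base_port=50051):
    return [base_port + (i % policy_servers) for i in range(instance_count)]

ports = assign_ports(instance_count=20, policy_servers=4)
print(ports[:4])  # → [50051, 50052, 50053, 50054]
```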
Pepijn Kooijmans b97ea8999f fix(docker): create libero config.yaml for non-plus LIBERO builds
The generic libero benchmark case was falling through to the wildcard
install path, which doesn't pre-create ~/.libero/config.yaml. This
caused an interactive input() prompt that crashes in Docker (EOFError).

Made-with: Cursor
2026-03-24 07:11:18 +01:00
Pepijn Kooijmans 69aeda68f5 Docker EGL/GLVND support + asset download refactor + imagenet stats fix
- Add NVIDIA EGL/Vulkan vendor ICDs and graphics libs to both Dockerfiles
- Refactor LIBERO-plus asset download into a separate build step
- Fix KeyError in datasets/factory.py when stats dict is None or missing keys

Made-with: Cursor
2026-03-23 23:33:22 +01:00
Pepijn Kooijmans a9e355bd03 Lazy env creation + smart sharding to fix container OOM 2026-03-23 23:15:23 +01:00
Pepijn Kooijmans aae68e3448 fix(docker): use recursive glob for deeply nested asset zip structure
The LIBERO-plus assets.zip has a deeply nested path
(inspire/hdd/project/.../assets) that didn't match the shallow glob.
Use recursive glob to find assets/scenes regardless of nesting depth.

Made-with: Cursor
2026-03-23 18:57:44 +01:00
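The recursive-glob fix can be sketched with `Path.rglob`, which matches at any nesting depth (helper name illustrative):

```python
import tempfile
from pathlib import Path

# Find an `assets` directory regardless of how deeply the zip nested it.
def find_assets(root):
    return sorted(p for p in Path(root).rglob("assets") if p.is_dir())

with tempfile.TemporaryDirectory() as root:
    # Mimic the deeply nested layout mentioned in the commit message.
    deep = Path(root) / "inspire" / "hdd" / "project" / "assets"
    deep.mkdir(parents=True)
    found = find_assets(root)
    print([p.name for p in found])  # → ['assets']
```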
Pepijn Kooijmans 4b9f6c4aed fix(docker): download LIBERO-plus assets (~6 GB) at image build time
The benchmark containers were missing the scene/texture/object assets
required by LIBERO-plus. Download them from HuggingFace Hub during the
Docker build so containers are self-contained and ready to run.

Made-with: Cursor
2026-03-23 18:48:25 +01:00
Pepijn Kooijmans 6057638fc1 fix(docker): pre-create libero config.yaml to avoid interactive input() prompt
The upstream libero __init__.py calls input() when ~/.libero/config.yaml
is missing, which crashes in non-interactive Docker containers with
EOFError. Pre-create the config with default paths at build time using
importlib.util.find_spec to locate the module without triggering the
problematic import.

Made-with: Cursor
2026-03-23 18:44:14 +01:00
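The `find_spec` trick described above locates the installed package directory without importing it, which matters because the import itself would trigger the interactive `input()` prompt. A sketch — `json` is used only so the example runs anywhere; the real code targets the `libero` module:

```python
import importlib.util
from pathlib import Path

# Locate a package's install directory without executing its __init__.py.
def locate_package_dir(name):
    spec = importlib.util.find_spec(name)
    if spec is None or spec.origin is None:
        return None
    return Path(spec.origin).parent

pkg_dir = locate_package_dir("json")
print(pkg_dir is not None)  # → True
```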
Pepijn Kooijmans e52e7e644a fix(docker): add libero_plus install workaround to generic Dockerfile.benchmark
The generic Dockerfile.benchmark was using a plain `uv pip install ".[libero_plus]"`
which silently fails to make `libero` importable due to an upstream LIBERO-plus
packaging bug. Port the dedicated clone + .pth workaround from
Dockerfile.eval-libero-plus so `docker build --build-arg BENCHMARK=libero_plus`
produces working containers.

Also fix eval worker using nonexistent `parser.parse()` — use `draccus.parse()`.

Made-with: Cursor
2026-03-23 18:31:57 +01:00
Pepijn 8633608d26 fix(docker): pin numpy==2.2.5 in separate RUN for robocasa
robocasa/__init__.py hard-asserts numpy==2.2.5. When bundled with other
packages in one uv install command, uv silently skips the numpy pin
(same "already resolved" bug hit with libero_plus). Moving the pin to a
dedicated final RUN step guarantees it is applied last.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-22 22:04:42 -07:00
Pepijn 900e6b59c8 fix(docker): pin mujoco==3.3.1 for robocasa (hard assert on import)
robocasa/__init__.py asserts mujoco.__version__ == "3.3.1" and aborts
with an error if any other version is installed.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-22 21:53:37 -07:00
Pepijn f844fe683c fix(docker): use robosuite master branch for robocasa (per README)
robocasa README explicitly says to use the master branch of
ARISE-Initiative/robosuite (no robocasa-specific branch exists).
Also install robocasa with --no-deps to bypass its lerobot==0.3.3
pin, and declare its actual runtime deps explicitly.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-22 21:51:18 -07:00
Pepijn 4403675b31 fix(docker): install robocasa's robosuite fork (adds PandaOmron)
Standard robosuite 1.4.x from PyPI doesn't include PandaOmron and
other robocasa-specific robots. robocasa requires the fork at
ARISE-Initiative/robosuite@robocasa_v1.4.1. Install both from source
with --no-deps; shared deps (easydict, scikit-image, scipy) installed
explicitly first.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-22 21:40:48 -07:00
Pepijn d18be0c3f4 feat(docker): add metaworld to default benchmark build list
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-22 21:38:54 -07:00
Pepijn 866f8adf11 fix(docker): install robocasa from GitHub source (not on PyPI)
robocasa is not published to PyPI, so uv can't resolve it as a plain
package dep. Fix by installing its runtime deps explicitly and cloning
robocasa from GitHub with --no-deps (same pattern as libero_plus).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-22 21:38:19 -07:00
Pepijn 3d6310c03d fix(docker): also override numpy==1.26.4 for robomme image
mani-skill==3.0.0b21 requires numpy<2.0.0 in addition to gymnasium==0.29.1,
both conflicting with lerobot's base requirements.

numpy 1.26.4 is runtime-compatible with lerobot's usage (no numpy 2.x-only
APIs are used in the eval worker or env wrappers).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-22 21:34:50 -07:00
Pepijn c3b26382e7 fix(docker): override gymnasium==0.29.1 for robomme image
mani-skill==3.0.0b21 (robomme dep) pins gymnasium==0.29.1, conflicting
with lerobot's gymnasium>=1.1.1. Use uv --override to force 0.29.1.

Both 0.29.x and 1.x use the same 5-tuple step() API (introduced in
gymnasium 0.26), so the eval worker and RoboMMEGymEnv wrapper are
fully compatible with the downgraded version.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-22 21:28:34 -07:00
Pepijn e54e582a6f fix(docker): bypass uv extras chain bug in libero_plus Dockerfile
uv silently skips packages when resolving a nested extras chain
(lerobot[libero_plus] -> lerobot[libero] -> hf-libero -> robosuite).
POST-INSTALL grep confirmed robosuite absent after install despite uv
reporting 'Resolved 113 packages, Installed 1'.

Fix: install all libero_plus deps directly by name, bypassing the extras
chain entirely. Also add --plain flag to build script for verbose output.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-22 21:12:26 -07:00
Pepijn 418791ebba debug(docker): add pre/post-install diagnostics to libero_plus Dockerfile
Temporary diagnostic to identify why uv sees robosuite as already
installed in the base venv despite it not being a base lerobot dep.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-22 21:03:42 -07:00
Pepijn ee3354a885 fix(docker): fix libero_plus deps by replacing git dep with lerobot[libero]
The libero @ git+...@main dep had empty install_requires, causing uv to
skip robosuite (and other deps) during resolution — they appeared
"already resolved" from a stale git dep cache even though not installed.

Fix: use lerobot[libero] as the dep source (hf-libero properly declares
all deps including robosuite via robomimic). The LIBERO-plus Python
module is installed from the git clone with --no-deps, so hf-libero's
declared deps are used but LIBERO-plus's environments override via .pth.

Also remove egl_probe (broken original) duplicate alongside hf-egl-probe.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-22 20:57:14 -07:00
Pepijn 2cd06fe95b fix(docker): exclude .venv from Docker build context
Without this, the server's local .venv gets copied into the image by
the final COPY . . step in Dockerfile.eval-base, overwriting the
freshly-created uv venv. uv then sees those packages as already
installed and skips them — but they may be missing or built for the
wrong environment, causing ModuleNotFoundError at runtime.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-22 16:13:32 -07:00
Pepijn 7be84cb545 fix(docker): add CMAKE_POLICY_VERSION_MINIMUM=3.5 for cmake 4.x compat
cmake 4.x removed backward compat with cmake_minimum_required < 3.5,
breaking egl-probe compilation. Setting CMAKE_POLICY_VERSION_MINIMUM=3.5
in the base image ENV re-enables it so robomimic's egl-probe builds.

Also adds --no-cache-base flag to build script so the base can be
force-rebuilt when Dockerfile.eval-base changes, and pins hf-egl-probe
in libero extras as the upstream-fixed fork of egl-probe.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-22 15:45:36 -07:00
Pepijn c35af1ae6a fix(docker): use uv pip instead of pip in benchmark Dockerfiles
pip's backtracking resolver hits 'resolution-too-deep' on complex
dependency graphs (robomme → mani-skill, libero_plus → robosuite/bddl).
uv resolves the same graphs in seconds without backtracking issues.

Also removes the now-redundant PATH= prefix since uv and python are
already on PATH via the base image ENV.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-22 15:17:03 -07:00
Pepijn 6fc024704e fix(deps): add missing future dep for bddl in libero_plus extras
bddl imports future (from-future) at package init but doesn't declare it
as a dependency. This caused ModuleNotFoundError inside the benchmark
Docker container when verifying the libero_plus install.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-22 11:17:51 -07:00
Pepijn c3b7a18f01 feat(docker): add build_benchmark_images.sh to build and push all eval images
Builds lerobot-eval-base then each benchmark image (libero, libero_plus,
robomme, robocasa), runs the smoke tests, and optionally pushes to Docker Hub.

Usage:
  bash docker/build_benchmark_images.sh                         # local only
  bash docker/build_benchmark_images.sh --push --hub_org=<org>  # push to Hub

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-22 11:03:09 -07:00
Pepijn 7fc0cdf68a fix(eval): skip multi-instance orchestration when runtime=docker
_orchestrate_multi_instance_eval spawns extra lerobot-eval processes that each
call run_eval_in_docker again, creating N^2 containers. For docker runtime,
instance_count directly controls how many env-worker containers are spawned by
run_eval_in_docker — no process-level orchestration is needed.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-20 22:48:06 -07:00
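The container arithmetic this fix addresses can be sketched as follows (illustrative, not the orchestrator's code):

```python
# Why the double dispatch multiplied containers: N orchestrated eval processes
# each calling run_eval_in_docker spawn N containers apiece — N*N in total.
def containers_spawned(instance_count, orchestrate_first):
    if orchestrate_first:
        return instance_count * instance_count
    # docker runtime: run_eval_in_docker spawns instance_count directly.
    return instance_count

print(containers_spawned(8, True), containers_spawned(8, False))  # → 64 8
```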
Pepijn 23bf69ebab feat(eval): add hf_eval_results utility for HF leaderboard upload
Implements the missing lerobot.utils.hf_eval_results module imported by
lerobot_eval.py but never created. Provides:
- default_eval_date(): today's UTC date as ISO-8601
- build_eval_results_rows(): converts eval info dict → HF .eval_results rows
- upload_eval_results_yaml(): serialises rows and uploads to Hub model repo

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-20 22:41:21 -07:00
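The `default_eval_date()` helper listed above plausibly reduces to formatting today's UTC date; a sketch of the assumed behaviour, not the module's exact code:

```python
from datetime import datetime, timezone

# Today's date in UTC as an ISO-8601 string (YYYY-MM-DD).
def default_eval_date():
    return datetime.now(timezone.utc).date().isoformat()

print(default_eval_date())  # e.g. '2026-03-20'
```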
Pepijn 3d5d8fa88a feat(eval): implement docker runtime with HTTP policy inference server
Add docker_runtime.py (host-side) and lerobot_eval_worker.py (container-side)
for --eval.runtime=docker. Policy loads once on the host GPU; Docker containers
run env-only workers that call back via HTTP for action chunks, maximising GPU
utilisation across parallel benchmark tasks.

- _InferenceServer: HTTP server wrapping predict_action_chunk with a single lock
- run_eval_in_docker: spawns instance_count containers, collects + merges per-task
  JSON, writes eval_info.json compatible with _aggregate_eval_from_per_task
- lerobot-eval-worker CLI: make_env → shard tasks → run episodes → write JSON
- EvalDockerConfig: add port field (default 50051)
- pyproject.toml: add lerobot-eval-worker entry point

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-20 22:35:59 -07:00
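The "single lock" around `predict_action_chunk` can be sketched as follows; the real `_InferenceServer` sits behind HTTP, which is omitted here, and the names are illustrative:

```python
import threading

# Serialise concurrent action-chunk requests through one lock so a single
# policy instance is never called from two threads at once.
class InferenceServer:
    def __init__(self, policy_fn):
        self._policy_fn = policy_fn
        self._lock = threading.Lock()

    def predict(self, obs):
        with self._lock:
            return self._policy_fn(obs)

calls = []
server = InferenceServer(lambda obs: calls.append(obs) or [obs] * 3)
threads = [threading.Thread(target=server.predict, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(calls))  # → 8
```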
Pepijn e80c9e6270 chore(docker): remove docker-compose, use individual build/run commands
Replace docker-compose.benchmark.yml with per-image docker build/run
instructions. Each benchmark is built and tested independently via
--build-arg BENCHMARK=<name> on Dockerfile.benchmark.

Co-Authored-By: Claude <noreply@anthropic.com>
2026-03-20 22:21:28 -07:00
Pepijn 39cf11d5dc feat(docker): add per-benchmark evaluation containers
Add Dockerfile.benchmark (parameterized via ARG BENCHMARK), a
docker-compose.benchmark.yml with services for libero, libero_plus,
robomme, and robocasa, and a smoke_test_benchmark.sh that verifies
imports and CLI entry-points in each container.

Also add the missing `robocasa` optional dep group to pyproject.toml
(the docs already referenced `pip install ".[robocasa]"` but the group
was not defined).

Build a specific benchmark image:
  docker build --build-arg BENCHMARK=robomme \
    -f docker/Dockerfile.benchmark -t lerobot-benchmark-robomme .

Build all via compose:
  docker compose -f docker/docker-compose.benchmark.yml build

Smoke-test inside a container:
  docker compose -f docker/docker-compose.benchmark.yml run --rm robomme \
    bash docker/smoke_test_benchmark.sh

Co-Authored-By: Claude <noreply@anthropic.com>
2026-03-20 22:21:28 -07:00
Pepijn Kooijmans 285c500aef speed up benchmark eval scheduling and docker workflow 2026-03-21 06:09:01 +01:00
Pepijn Kooijmans f60d163588 refactor(leaderboard): use a single positional file arg for repo IDs
Replace --repo-ids and --repo-ids-file with a single positional
repo_ids_file argument. Lines starting with # are ignored.

Made-with: Cursor
2026-03-16 02:58:28 +01:00
Pepijn Kooijmans 4a04465bb8 feat: add lerobot-leaderboard to generate interactive eval comparison pages
New CLI tool that fetches eval results from multiple Hub model repos
and produces a self-contained HTML leaderboard with sortable columns,
per-suite breakdowns, best-in-column highlighting, and filtering.

Made-with: Cursor
2026-03-16 02:57:37 +01:00
Pepijn Kooijmans 464532ec37 feat(eval): include eval config (policy, env, eval settings) in Hub push
Saves eval_config.json locally and uploads it alongside results. The
model card now includes a collapsible "Eval configuration" section
showing the full config JSON used for the evaluation run.

Made-with: Cursor
2026-03-16 02:42:38 +01:00
Pepijn Kooijmans 89f9bd78ab feat(eval): add --push_to_hub to upload eval results, videos, and model card to Hub
Adds a push_to_hub flag to lerobot-eval that uploads eval_info.json,
rollout videos, and appends an evaluation results table to the model
card on Hugging Face. Also declares missing LIBERO-plus runtime deps
in pyproject.toml and adds an asset validation check for libero_plus.

Made-with: Cursor
2026-03-16 02:39:24 +01:00
pepijn c9cfc88602 feat: add benchmark orchestration, LIBERO-plus install parity, and eval hardening
- Add lerobot-benchmark CLI for multi-benchmark train/eval workflows
- Add benchmark_training.mdx documentation
- Add libero-plus pip extra alias with EGL probe deps matching standard libero
- Harden libero.py: wand mock, init-state fallback, renderer EGL→OSMesa fallback
- Add multimodal_analysis.py script and SLURM training template

Made-with: Cursor
2026-03-15 05:52:53 +00:00
pepijn 7bef12a461 feat(envs): add RoboMME memory-augmented manipulation benchmark
- RoboMMEEnv config with 16 tasks across 4 suites (Counting, Permanence,
  Reference, Imitation)
- Gymnasium wrapper around BenchmarkEnvBuilder (robomme.py)
- Environment factory wiring for env_type="robomme"
- robomme optional dependency in pyproject.toml

Made-with: Cursor
2026-03-13 04:44:32 +00:00
pepijn db5c26f07d feat(envs): add LIBERO-plus integration for evaluation benchmarks
Add LiberoPlusEnv config (subclass of LiberoEnv), register libero_plus
env type in factory, add import fallbacks for LIBERO-plus package
structure, and add libero_plus optional dependency group in pyproject.toml.

Made-with: Cursor
2026-03-12 04:31:09 +00:00
Pepijn 8904768db4 feat(envs): add RoboCasa composite-task benchmark integration
Integrates 5 selected RoboCasa kitchen tasks (3 short + 2 long) as a
LeRobot benchmark environment, following the same pattern as Libero.

Selected tasks:
  Short: PickPlaceCounterToCabinet, PrepareToast, CoffeeSetupMug
  Long:  PrepareCoffee, RestockPantry

Changes:
- envs/robocasa.py: RoboCasaEnv wrapper with flat 12D Box action space,
  3-camera pixel obs, and 16D proprioceptive state
- envs/configs.py: RoboCasaEnv config with features_map
- envs/factory.py: wire robocasa into make_env + make_env_pre_post_processors
- processor/env_processor.py: RoboCasaProcessorStep for obs key remapping
- tests/test_robocasa_env.py: full test suite (auto-skips if assets missing)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-09 17:08:32 +01:00
Steven Palma b0efa73520 chore(dependencies): Bump lerobot to 0.5.1 (#3118) 2026-03-09 12:43:32 +01:00
Steven Palma 00b662de02 chore(dependencies): Bump lerobot to 0.5.0 (#3117) 2026-03-09 11:34:52 +01:00
Steven Palma 5c51a74484 chore(deps): update requirements file (#3114) 2026-03-09 11:18:05 +01:00
Steven Palma db8547e35d test(cameras): skip flaky async_read test (#3106) 2026-03-08 14:02:33 +01:00
Steven Palma c17d949531 chore(readme): update citation with ICLR26 paper (#3107)
* peer reviewed citation 🎉

Signed-off-by: Francesco Capuano <74058581+fracapuano@users.noreply.github.com>

* add iclr year

Signed-off-by: Francesco Capuano <74058581+fracapuano@users.noreply.github.com>

* fix quentin's spelling name

Signed-off-by: Francesco Capuano <74058581+fracapuano@users.noreply.github.com>

* docs(readme): update citation

---------

Signed-off-by: Francesco Capuano <74058581+fracapuano@users.noreply.github.com>
Co-authored-by: Francesco Capuano <74058581+fracapuano@users.noreply.github.com>
2026-03-08 14:01:43 +01:00
Steven Palma 1e131f93f8 chore(docs): add uv installation instructions (#3105)
* chore(docs): add uv installation instructions

* fix(docs): format tabs

* chore(docs): small details

* chore(docs): last details uv installation instructions

* chore(docs): last detail

---

Co-authored-by: sahilmaniyar888 <156301258+sahilmaniyar888@users.noreply.github.com>
2026-03-08 13:00:06 +01:00
Ignat Georgiev 2fb5c7add0 feat(train): add cudnn_deterministic option for reproducible training (#3102)
Add a `cudnn_deterministic` flag to `TrainPipelineConfig` (default: False)
that sets `torch.backends.cudnn.deterministic = True` and disables benchmark
mode, eliminating CUDA floating-point non-determinism at the cost of ~10-20%
training speed. When False (default) the existing benchmark=True behaviour
is preserved.
2026-03-08 12:29:33 +01:00
Martino Russi 4f2ef024d8 feat(robots): Unitree G1 WBC implementation (#2876)
* move locomotion from examples to robot, move controller to teleoperator class

* modify teleoperate to send back actions to robot

* whole body controller

* add holosoma to locomotros

* various updates

* update joint zeroing etc

* ensure safefail with locomotion

* add unitree locomotion

* launch camera from g1 server

* publish at varying framerates

* fix async read in camera

* attempting to fix camera lag

* test camera speedup

* training

* inference works

* remove logging from pi0

* remove logging

* push local changes

* testing

* final changes

* revert control_utils

* revert utils

* revert

* revert g1

* revert again:

* revert utils

* push recents

* remove examples

* remove junk

* remove mjlog

* revergt edit_dataset

* Update lerobot_edit_dataset.py

Signed-off-by: Martino Russi <77496684+nepyope@users.noreply.github.com>

* undo teleop changes

* revert logging

* remove loggings

* remove loogs

* revert dataset tools

* Update dataset_tools.py

Signed-off-by: Martino Russi <77496684+nepyope@users.noreply.github.com>

* move gravity to utils

* revert changes

* remove matplotlib viewer (rerun works fine)

* factory revert

* send policy action directly

* recent changes

* implement flexible action space

* send empty command if arms are missing

* rename locomotion to controller

* add init

* implement feedback

* add feedback for teleoperator

* fix ruff

* fix ruff

* use read_latest

* fix zmq camera

* revert exo_serial

* simplify PR

* revert exo_changes

* revert camera_zmq

* Update camera_zmq.py

Signed-off-by: Martino Russi <77496684+nepyope@users.noreply.github.com>

* remove frame duplication from zmq server

* revert channerfactoryinitialize

* keep channelfactoryinitialize

* remove zeroing out logic

* fix typo

* refactor teleop class

* simplify teleop further

* import armindex at the top

* fix visualizer again

* revert ik helper

* push stuff

* simplify image_server

* update image_server

* asd

* add threading logic

* simplify ik helper stuff

* simplify holosoma

* fix names

* fix docs

* revert leg override

* clean connect

* fix controller

* fix ruff

* clean teleoperator

* set_from_wireless

* avoid double initializations

* refactor robot class

* fix pre-commit

* update docs

* update docs format

* add teleop instructions

* unitree_g1 specific exception in record/teleoperate

* add thumbnail to docs

* add thumbnail to doc

* refactor(unitree): multiple improvements (#3103)

* refactor(unitree): multiple improvements

* test(unitree): added tests + improved installation instructions

* refactor(robots): minor changes unitree robot kinematic

* chore(robots): rename g1 kinematics file

---------

Signed-off-by: Martino Russi <77496684+nepyope@users.noreply.github.com>
Signed-off-by: Steven Palma <imstevenpmwork@ieee.org>
Co-authored-by: Steven Palma <imstevenpmwork@ieee.org>
Co-authored-by: Steven Palma <steven.palma@huggingface.co>
2026-03-08 11:33:24 +01:00
Shun.Sasaki 6139b133ca fix(async_inference): restore robot module imports in robot_client.py (#3081) 2026-03-06 17:14:14 +01:00
Steven Palma 85de893fa7 fix(ci): skip HF log in (and tests) in forks and community PRs (#3097)
* fix(ci): skip HF log in (and tests) in forks and community PRs

* chore(test): remove comment about test meant to be only run locally

* fix(tests): no hf log in decorator for xvla

* fix(test): no decorator in yield
2026-03-06 16:33:43 +01:00
Steven Palma a4c66e530b chore(docs): remove pi installation note (#3095) 2026-03-06 15:52:54 +01:00
Steven Palma a225127527 chore(dependencies): sync intelrealsense + added notes (#3094) 2026-03-06 10:50:46 +01:00
Steven Palma e489ba24fc feat(dependencies): require Python 3.12+ as minimum version (#3023)
* feat(dependecies): upgrade to python3.12

* fix(test): processor regex message

* fix(test): processor regex message

* fix(dependecies): resolve all tags in python 3.12

* fix(dependecies): add more hints to faster resolve

* chore(dependecies): remove cli tag huggingface-hub dep

* refactor(policy): update eagle for python3.12

* chore(docs): update policy creation for python 3.12

* chore(test): skip failing tests in macos
2026-03-06 10:15:13 +01:00
Steven Palma d324ffe810 fix(ci): test only multi-gpu tests in multi-gpu runner (#3092) 2026-03-05 19:53:40 +01:00
Pepijn 1a24f770d3 Feat/slurm compute rabc script (#3041)
* Add SLURM SARM progress annotation script.

Provide a standalone two-stage compute/aggregate pipeline for RA-BC progress generation so large datasets can be processed in parallel and optionally uploaded to the Hub.

Made-with: Cursor

* fix pr comments

* remove comments
2026-03-05 18:27:58 +01:00
110 changed files with 8522 additions and 1357 deletions
+5
@@ -12,6 +12,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+# Python virtual environments — never copy into Docker images
+.venv
+venv
+env/
# Misc
.git
tmp
+2 -1
@@ -44,7 +44,7 @@ permissions:
# Sets up the environment variables
env:
UV_VERSION: "0.8.0"
-PYTHON_VERSION: "3.10"
+PYTHON_VERSION: "3.12"
# Ensures that only the latest commit for a PR or branch is built, canceling older runs.
concurrency:
@@ -91,6 +91,7 @@ jobs:
run: uv sync --extra "test"
- name: Login to Hugging Face
+if: env.HF_USER_TOKEN != ''
run: |
uv run hf auth login --token "$HF_USER_TOKEN" --add-to-git-credential
uv run hf auth whoami
+4 -2
@@ -37,7 +37,7 @@ permissions:
# Sets up the environment variables
env:
UV_VERSION: "0.8.0"
-PYTHON_VERSION: "3.10"
+PYTHON_VERSION: "3.12"
DOCKER_IMAGE_NAME: huggingface/lerobot-gpu
# Ensures that only the latest action is built, canceling older runs.
@@ -89,6 +89,7 @@ jobs:
run: uv sync --extra all # TODO(Steven): Make flash-attn optional
- name: Login to Hugging Face
+if: env.HF_USER_TOKEN != ''
run: |
uv run hf auth login --token "$HF_USER_TOKEN" --add-to-git-credential
uv run hf auth whoami
@@ -181,11 +182,12 @@ jobs:
working-directory: /lerobot
steps:
- name: Login to Hugging Face
+if: env.HF_USER_TOKEN != ''
run: |
hf auth login --token "$HF_USER_TOKEN" --add-to-git-credential
hf auth whoami
- name: Fix ptxas permissions
-run: chmod +x /lerobot/.venv/lib/python3.10/site-packages/triton/backends/nvidia/bin/ptxas
+run: chmod +x /lerobot/.venv/lib/python3.12/site-packages/triton/backends/nvidia/bin/ptxas
- name: Run pytest on GPU
run: pytest tests -vv --maxfail=10
- name: Run end-to-end tests
+5 -3
@@ -28,7 +28,7 @@ on:
# Sets up the environment variables
env:
UV_VERSION: "0.8.0"
-PYTHON_VERSION: "3.10"
+PYTHON_VERSION: "3.12"
DOCKER_IMAGE_NAME_CPU: huggingface/lerobot-cpu:latest
DOCKER_IMAGE_NAME_GPU: huggingface/lerobot-gpu:latest
@@ -132,6 +132,7 @@ jobs:
working-directory: /lerobot
steps:
- name: Login to Hugging Face
+if: env.HF_USER_TOKEN != ''
run: |
hf auth login --token "$HF_USER_TOKEN" --add-to-git-credential
hf auth whoami
@@ -164,6 +165,7 @@ jobs:
working-directory: /lerobot
steps:
- name: Login to Hugging Face
+if: env.HF_USER_TOKEN != ''
run: |
hf auth login --token "$HF_USER_TOKEN" --add-to-git-credential
hf auth whoami
@@ -197,6 +199,7 @@ jobs:
working-directory: /lerobot
steps:
- name: Login to Hugging Face
+if: env.HF_USER_TOKEN != ''
run: |
hf auth login --token "$HF_USER_TOKEN" --add-to-git-credential
hf auth whoami
@@ -206,5 +209,4 @@ jobs:
python -c "import torch; print(f'PyTorch CUDA available: {torch.cuda.is_available()}'); print(f'Number of GPUs: {torch.cuda.device_count()}')"
- name: Run multi-GPU training tests
-# TODO(Steven): Investigate why motors tests are failing in multi-GPU setup
-run: pytest tests -vv --maxfail=10 --ignore=tests/motors/
+run: pytest -vv tests/training/
+1 -1
@@ -50,7 +50,7 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v6
with:
-python-version: '3.10'
+python-version: '3.12'
- name: Run pre-commit hooks
uses: pre-commit/action@v3.0.1 # zizmor: ignore[unpinned-uses]
+2 -10
@@ -22,7 +22,7 @@ on:
# Sets up the environment variables
env:
UV_VERSION: "0.8.0"
-PYTHON_VERSION: "3.10"
+PYTHON_VERSION: "3.12"
jobs:
# This job builds the Python package and publishes it to PyPI
@@ -45,7 +45,7 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v6
with:
-python-version: '3.10'
+python-version: '3.12'
- name: Extract Version
id: extract_info
@@ -83,14 +83,6 @@ jobs:
exit 1
fi
-- name: Remove Tags with Git dependencies
-# TODO(Steven): Temporary patch to remove pi from PyPi 0.4.0 release due to its reliance on git dependencies.
-run: |
-echo "::info:: Checking for Git dependencies to remove from pyproject.toml..."
-grep -E '@ git\+https|lerobot\[pi\]' pyproject.toml | sed 's/^/::warning:: Removing line: /' || true
-sed -E -i '/@ git\+https|lerobot\[pi\]/d' pyproject.toml
-echo "::info:: Git dependencies removed. Proceeding with build."
- name: Install build dependencies
run: python -m pip install build
+3 -1
@@ -29,7 +29,7 @@ permissions:
# Sets up the environment variables
env:
UV_VERSION: "0.8.0"
-PYTHON_VERSION: "3.10"
+PYTHON_VERSION: "3.12"
DOCKER_IMAGE_NAME: huggingface/lerobot-gpu:unbound
# Ensures that only the latest action is built, canceling older runs.
@@ -81,6 +81,7 @@ jobs:
- name: Install lerobot with all extras
run: uv sync --extra all # TODO(Steven): Make flash-attn optional
- name: Login to Hugging Face
+if: env.HF_USER_TOKEN != ''
run: |
uv run hf auth login --token "$HF_USER_TOKEN" --add-to-git-credential
uv run hf auth whoami
@@ -154,6 +155,7 @@ jobs:
working-directory: /lerobot
steps:
- name: Login to Hugging Face
+if: env.HF_USER_TOKEN != ''
run: |
hf auth login --token "$HF_USER_TOKEN" --add-to-git-credential
hf auth whoami
+2 -2
@@ -13,7 +13,7 @@
# limitations under the License.
default_language_version:
-python: python3.10
+python: python3.12
exclude: "tests/artifacts/.*\\.safetensors$"
@@ -55,7 +55,7 @@ repos:
rev: v3.21.0
hooks:
- id: pyupgrade
-args: [--py310-plus]
+args: [--py312-plus]
##### Markdown Quality #####
- repo: https://github.com/rbubley/mirrors-prettier
+18 -1
@@ -135,7 +135,7 @@ Learn how to implement your own simulation environment or benchmark and distribu
## Citation
-If you use LeRobot in your research, please cite:
+If you use LeRobot in your project, please cite the GitHub repository to acknowledge the ongoing development and contributors:
```bibtex
@misc{cadene2024lerobot,
@@ -146,6 +146,23 @@ If you use LeRobot in your research, please cite:
}
```
+If you are referencing our research or the academic paper, please also cite our ICLR publication:
+<details>
+<summary><b>ICLR 2026 Paper</b></summary>
+```bibtex
+@inproceedings{cadenelerobot,
+title={LeRobot: An Open-Source Library for End-to-End Robot Learning},
+author={Cadene, Remi and Alibert, Simon and Capuano, Francesco and Aractingi, Michel and Zouitine, Adil and Kooijmans, Pepijn and Choghari, Jade and Russi, Martino and Pascal, Caroline and Palma, Steven and Shukor, Mustafa and Moss, Jess and Soare, Alexander and Aubakirova, Dana and Lhoest, Quentin and Gallou\'edec, Quentin and Wolf, Thomas},
+booktitle={The Fourteenth International Conference on Learning Representations},
+year={2026},
+url={https://arxiv.org/abs/2602.22818}
+}
+```
+</details>
## Contribute
We welcome contributions from everyone in the community! To get started, please read our [CONTRIBUTING.md](./CONTRIBUTING.md) guide. Whether you're adding a new feature, improving documentation, or fixing a bug, your help and feedback are invaluable. We're incredibly excited about the future of open-source robotics and can't wait to work with you on what's next—thank you for your support!
+200
@@ -0,0 +1,200 @@
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Benchmark evaluation container — one image per benchmark, built via BENCHMARK arg.
#
# Supported values for BENCHMARK:
# libero — LIBERO suite (spatial / object / goal / 10 / 90)
# libero_plus — LIBERO-plus extended benchmark (requires robosuite, bddl, robomimic)
# robomme — RoboMME memory-augmented manipulation benchmark
# robocasa — RoboCasa kitchen composite-task benchmark
#
# Build:
# docker build --build-arg BENCHMARK=libero -f docker/Dockerfile.benchmark \
# -t lerobot-benchmark-libero .
#
# Run (interactive):
# docker run --gpus all --rm -it lerobot-benchmark-libero
# Run eval:
# docker run --gpus all --rm lerobot-benchmark-libero lerobot-eval --help
ARG CUDA_VERSION=12.4.1
ARG OS_VERSION=22.04
FROM nvidia/cuda:${CUDA_VERSION}-base-ubuntu${OS_VERSION}
ARG PYTHON_VERSION=3.12
ARG BENCHMARK=libero
ENV DEBIAN_FRONTEND=noninteractive \
MUJOCO_GL=egl \
PYOPENGL_PLATFORM=egl \
EGL_PLATFORM=device \
NVIDIA_DRIVER_CAPABILITIES=all \
NVIDIA_VISIBLE_DEVICES=all \
PATH=/lerobot/.venv/bin:$PATH \
CMAKE_POLICY_VERSION_MINIMUM=3.5 \
CUDA_VISIBLE_DEVICES=0 \
DEVICE=cuda \
BENCHMARK=${BENCHMARK}
# ── Base system deps (shared across all benchmarks) ───────────────────────────
RUN apt-get update && apt-get install -y --no-install-recommends \
software-properties-common build-essential git curl \
libglib2.0-0 libgl1 libgl1-mesa-glx libgles2 \
libegl1 libegl1-mesa libegl1-mesa-dev \
libglew-dev libglfw3 libglfw3-dev libgl1-mesa-dri \
libglvnd-dev libosmesa6 libosmesa6-dev \
libvulkan1 mesa-vulkan-drivers \
libsm6 libxext6 libxrender-dev \
ffmpeg libusb-1.0-0-dev speech-dispatcher libgeos-dev portaudio19-dev \
cmake pkg-config ninja-build \
&& add-apt-repository -y ppa:deadsnakes/ppa \
&& apt-get update \
&& apt-get install -y --no-install-recommends \
python${PYTHON_VERSION} \
python${PYTHON_VERSION}-venv \
python${PYTHON_VERSION}-dev \
&& curl -LsSf https://astral.sh/uv/install.sh | sh \
&& mv /root/.local/bin/uv /usr/local/bin/uv \
&& useradd --create-home --shell /bin/bash user_lerobot \
&& usermod -aG sudo user_lerobot \
&& apt-get clean && rm -rf /var/lib/apt/lists/*
# ── NVIDIA EGL + Vulkan vendor ICDs (lets GLVND find the GPU driver) ──────────
RUN mkdir -p /usr/share/vulkan/icd.d /usr/share/glvnd/egl_vendor.d \
&& printf '{"file_format_version":"1.0.0","ICD":{"library_path":"libGLX_nvidia.so.0","api_version":"1.2.155"}}\n' \
> /usr/share/vulkan/icd.d/nvidia_icd.json \
&& printf '{"file_format_version":"1.0.0","ICD":{"library_path":"libEGL_nvidia.so.0"}}\n' \
> /usr/share/glvnd/egl_vendor.d/10_nvidia.json
# ── Benchmark-specific system deps ────────────────────────────────────────────
# libero_plus: the `wand` Python package requires ImageMagick headers.
RUN case "${BENCHMARK}" in \
libero_plus) \
apt-get update && apt-get install -y --no-install-recommends \
libmagickwand-dev \
&& apt-get clean && rm -rf /var/lib/apt/lists/* ;; \
esac
WORKDIR /lerobot
RUN chown -R user_lerobot:user_lerobot /lerobot
USER user_lerobot
ENV HOME=/home/user_lerobot \
HF_HOME=/home/user_lerobot/.cache/huggingface \
HF_LEROBOT_HOME=/home/user_lerobot/.cache/huggingface/lerobot \
TORCH_HOME=/home/user_lerobot/.cache/torch \
TRITON_CACHE_DIR=/home/user_lerobot/.cache/triton
RUN uv venv --seed --python python${PYTHON_VERSION}
# Copy only the dependency manifests first so Docker can cache this layer
# independently of source-code changes.
COPY --chown=user_lerobot:user_lerobot setup.py pyproject.toml README.md MANIFEST.in ./
COPY --chown=user_lerobot:user_lerobot src/ src/
ARG UNBOUND_DEPS=false
RUN if [ "$UNBOUND_DEPS" = "true" ]; then \
sed -i 's/,[[:space:]]*<[0-9\.]*//g' pyproject.toml; \
echo "Dependencies unbound:" && cat pyproject.toml; \
fi
# Install lerobot core + the selected benchmark extra.
# LIBERO-plus needs a dedicated install path because the upstream package is
# import-broken when installed via the extras chain alone.
RUN case "${BENCHMARK}" in \
libero_plus) \
PATH=/usr/bin:/bin:/lerobot/.venv/bin:$PATH /lerobot/.venv/bin/python -m pip install --no-cache-dir \
"hf-libero>=0.1.3,<0.2.0" \
"hf-egl-probe>=1.0.1" \
"transformers>=5.3.0,<6.0.0" \
"scipy>=1.14.0,<2.0.0" \
"bddl>=1.0.1,<2.0.0" \
"future" \
"easydict>=1.9" \
"wand" \
"scikit-image>=0.20.0" \
"gym>=0.25.0,<0.27.0" \
&& git clone --depth 1 https://github.com/sylvestf/LIBERO-plus.git /tmp/LIBERO-plus \
&& PATH=/usr/bin:/bin:/lerobot/.venv/bin:$PATH /lerobot/.venv/bin/python -m pip install --no-cache-dir --no-deps /tmp/LIBERO-plus \
&& /lerobot/.venv/bin/python -c "import pathlib, site; pathlib.Path(site.getsitepackages()[0], 'libero_plus_repo.pth').write_text('/tmp/LIBERO-plus\n')" \
&& /lerobot/.venv/bin/python -m pip install --no-cache-dir . \
&& /lerobot/.venv/bin/python -c "\
import os, yaml, importlib.util; \
root = os.path.dirname(importlib.util.find_spec('libero.libero').origin); \
d = dict(benchmark_root=root, bddl_files=os.path.join(root,'bddl_files'), \
init_states=os.path.join(root,'init_files'), datasets=os.path.join(root,'..','datasets'), \
assets=os.path.join(root,'assets')); \
cfg_dir = os.path.expanduser('~/.libero'); os.makedirs(cfg_dir, exist_ok=True); \
yaml.dump(d, open(os.path.join(cfg_dir,'config.yaml'),'w')); print('libero config created')" \
&& /lerobot/.venv/bin/python -c "from libero.libero import benchmark, get_libero_path; print('libero OK')" ;; \
libero) \
uv pip install --no-cache ".[libero]" \
&& /lerobot/.venv/bin/python -c "\
import os, yaml, importlib.util; \
root = os.path.dirname(importlib.util.find_spec('libero.libero').origin); \
d = dict(benchmark_root=root, bddl_files=os.path.join(root,'bddl_files'), \
init_states=os.path.join(root,'init_files'), datasets=os.path.join(root,'..','datasets'), \
assets=os.path.join(root,'assets')); \
cfg_dir = os.path.expanduser('~/.libero'); os.makedirs(cfg_dir, exist_ok=True); \
yaml.dump(d, open(os.path.join(cfg_dir,'config.yaml'),'w')); print('libero config created')" \
&& /lerobot/.venv/bin/python -c "from libero.libero import benchmark, get_libero_path; print('libero OK')" ;; \
*) \
uv pip install --no-cache ".[${BENCHMARK}]" ;; \
esac
# LIBERO-plus requires ~6 GB of scene/texture/object assets from HuggingFace.
# Download at build time so containers don't need network access at runtime.
USER root
COPY <<'FETCH_ASSETS' /tmp/fetch_assets.py
from huggingface_hub import hf_hub_download
hf_hub_download("Sylvest/LIBERO-plus", "assets.zip",
repo_type="dataset", local_dir="/tmp/libero-plus-assets")
FETCH_ASSETS
COPY <<'VERIFY_ASSETS' /tmp/verify_assets.py
from pathlib import Path
from libero.libero import get_libero_path
d = Path(get_libero_path("benchmark_root")) / "assets" / "scenes"
assert d.is_dir(), f"assets missing at {d}"
print("assets OK:", d)
VERIFY_ASSETS
RUN if [ "${BENCHMARK}" = "libero_plus" ]; then \
apt-get update && apt-get install -y --no-install-recommends unzip \
&& apt-get clean && rm -rf /var/lib/apt/lists/* \
&& /lerobot/.venv/bin/python /tmp/fetch_assets.py \
&& unzip -q /tmp/libero-plus-assets/assets.zip -d /tmp/libero-plus-unzipped \
&& ASSETS_DIR=$(/lerobot/.venv/bin/python -c "from libero.libero import get_libero_path; print(get_libero_path('benchmark_root'))") \
&& SRC=$(find /tmp/libero-plus-unzipped -type d -name assets | head -1) \
&& mv "$SRC" "$ASSETS_DIR/assets" \
&& chown -R user_lerobot:user_lerobot "$ASSETS_DIR/assets" \
&& rm -rf /tmp/libero-plus-assets /tmp/libero-plus-unzipped /tmp/fetch_assets.py \
&& /lerobot/.venv/bin/python /tmp/verify_assets.py \
&& rm /tmp/verify_assets.py; \
fi
USER user_lerobot
# Triton requires its ptxas binary to be executable (NVIDIA-specific).
RUN if [ -f /lerobot/.venv/lib/python${PYTHON_VERSION}/site-packages/triton/backends/nvidia/bin/ptxas ]; then \
chmod +x /lerobot/.venv/lib/python${PYTHON_VERSION}/site-packages/triton/backends/nvidia/bin/ptxas; \
fi
# Verify EGL probe is importable (runtime GPU check requires NVIDIA drivers at container start).
RUN /lerobot/.venv/bin/python -c "import egl_probe; print('egl_probe OK')" \
2>/dev/null || echo 'NOTE: egl_probe not installed (non-libero build), skipping'
# Copy full source (tests, examples, configs, etc.)
COPY --chown=user_lerobot:user_lerobot . .
CMD ["/bin/bash"]
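The backslash-escaped one-liner in the `libero` and `libero_plus` branches is hard to read inline. Expanded, it just builds the following mapping and dumps it to `~/.libero/config.yaml` (sketch with a stand-in `root`; in the image, `root` is derived from `importlib.util.find_spec('libero.libero')`):

```python
import os

# Stand-in for: os.path.dirname(importlib.util.find_spec("libero.libero").origin)
root = "/example/site-packages/libero/libero"

config = {
    "benchmark_root": root,
    "bddl_files": os.path.join(root, "bddl_files"),
    "init_states": os.path.join(root, "init_files"),
    "datasets": os.path.join(root, "..", "datasets"),
    "assets": os.path.join(root, "assets"),
}

# In the image this mapping is written to ~/.libero/config.yaml via yaml.dump().
print(config["bddl_files"])
```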
+78
@@ -0,0 +1,78 @@
# Copyright 2026 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
ARG CUDA_VERSION=12.4.1
ARG OS_VERSION=22.04
FROM nvidia/cuda:${CUDA_VERSION}-base-ubuntu${OS_VERSION}
ARG PYTHON_VERSION=3.12
ENV DEBIAN_FRONTEND=noninteractive \
MUJOCO_GL=egl \
PYOPENGL_PLATFORM=egl \
EGL_PLATFORM=device \
NVIDIA_DRIVER_CAPABILITIES=all \
NVIDIA_VISIBLE_DEVICES=all \
PATH=/lerobot/.venv/bin:$PATH \
# cmake 4.x removed backward compat with cmake_minimum_required < 3.5.
# This env var re-enables it so packages like egl-probe can compile.
CMAKE_POLICY_VERSION_MINIMUM=3.5
RUN apt-get update && apt-get install -y --no-install-recommends \
software-properties-common build-essential git curl \
libglib2.0-0 libgl1 libgl1-mesa-glx libgles2 \
libegl1 libegl1-mesa libegl1-mesa-dev \
libglew-dev libglfw3 libglvnd-dev \
libosmesa6 libosmesa6-dev \
libvulkan1 mesa-vulkan-drivers \
libsm6 libxext6 libxrender-dev \
ffmpeg libusb-1.0-0-dev speech-dispatcher libgeos-dev portaudio19-dev \
cmake pkg-config ninja-build \
&& add-apt-repository -y ppa:deadsnakes/ppa \
&& apt-get update \
&& apt-get install -y --no-install-recommends \
python${PYTHON_VERSION} \
python${PYTHON_VERSION}-venv \
python${PYTHON_VERSION}-dev \
&& curl -LsSf https://astral.sh/uv/install.sh | sh \
&& mv /root/.local/bin/uv /usr/local/bin/uv \
&& useradd --create-home --shell /bin/bash user_lerobot \
&& apt-get clean && rm -rf /var/lib/apt/lists/*
# NVIDIA EGL + Vulkan vendor ICDs (lets GLVND find the GPU driver)
RUN mkdir -p /usr/share/vulkan/icd.d /usr/share/glvnd/egl_vendor.d \
&& printf '{"file_format_version":"1.0.0","ICD":{"library_path":"libGLX_nvidia.so.0","api_version":"1.2.155"}}\n' \
> /usr/share/vulkan/icd.d/nvidia_icd.json \
&& printf '{"file_format_version":"1.0.0","ICD":{"library_path":"libEGL_nvidia.so.0"}}\n' \
> /usr/share/glvnd/egl_vendor.d/10_nvidia.json
WORKDIR /lerobot
RUN chown -R user_lerobot:user_lerobot /lerobot
USER user_lerobot
ENV HOME=/home/user_lerobot \
HF_HOME=/home/user_lerobot/.cache/huggingface \
HF_LEROBOT_HOME=/home/user_lerobot/.cache/huggingface/lerobot \
TORCH_HOME=/home/user_lerobot/.cache/torch \
TRITON_CACHE_DIR=/home/user_lerobot/.cache/triton
RUN uv venv --seed --python python${PYTHON_VERSION}
COPY --chown=user_lerobot:user_lerobot setup.py pyproject.toml README.md MANIFEST.in ./
COPY --chown=user_lerobot:user_lerobot src/ src/
RUN uv pip install --no-cache .
COPY --chown=user_lerobot:user_lerobot . .
CMD ["/bin/bash"]
+20
@@ -0,0 +1,20 @@
# Copyright 2026 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM lerobot-eval-base:latest
RUN uv pip install --no-cache ".[libero]" \
&& python -c "import libero"
CMD ["/bin/bash"]
+47
@@ -0,0 +1,47 @@
# Copyright 2026 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM lerobot-eval-base:latest
# Install libero_plus deps explicitly rather than via ".[libero_plus]" extras chain.
# uv has a bug where it considers packages "already resolved" when coming through
# a nested lerobot[libero] → lerobot[libero_plus] extras chain, silently skipping them.
RUN uv pip install --no-cache \
"hf-libero>=0.1.3,<0.2.0" \
"hf-egl-probe>=1.0.1" \
"transformers>=5.3.0,<6.0.0" \
"scipy>=1.14.0,<2.0.0" \
"bddl>=1.0.1,<2.0.0" \
"future" \
"easydict>=1.9" \
"wand" \
"scikit-image>=0.20.0" \
"gym>=0.25.0,<0.27.0"
# Clone LIBERO-plus; install with --no-deps (runtime deps declared above via hf-libero).
# Add .pth so the libero module can locate its data files at runtime.
RUN git clone --depth 1 https://github.com/sylvestf/LIBERO-plus.git /tmp/LIBERO-plus \
&& uv pip install --no-cache --no-deps /tmp/LIBERO-plus \
&& python -c "import pathlib, site; pathlib.Path(site.getsitepackages()[0], 'libero_plus_repo.pth').write_text('/tmp/LIBERO-plus\n')" \
&& python -c "\
import os, yaml, importlib.util; \
root = os.path.dirname(importlib.util.find_spec('libero.libero').origin); \
d = dict(benchmark_root=root, bddl_files=os.path.join(root,'bddl_files'), \
init_states=os.path.join(root,'init_files'), datasets=os.path.join(root,'..','datasets'), \
assets=os.path.join(root,'assets')); \
cfg_dir = os.path.expanduser('~/.libero'); os.makedirs(cfg_dir, exist_ok=True); \
yaml.dump(d, open(os.path.join(cfg_dir,'config.yaml'),'w')); print('libero config created')" \
&& python -c "from libero.libero import benchmark, get_libero_path; print('libero OK')"
CMD ["/bin/bash"]
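The `libero_plus_repo.pth` file written above works because Python's `site` module appends each line of a `.pth` file found in a site directory to `sys.path` at interpreter startup, which lets the cloned repo's data files remain reachable. A self-contained sketch of the mechanism (temp paths stand in for `/tmp/LIBERO-plus`):

```python
import pathlib
import site
import sys
import tempfile

# Directory standing in for the cloned /tmp/LIBERO-plus checkout.
repo_dir = pathlib.Path(tempfile.mkdtemp()) / "LIBERO-plus"
repo_dir.mkdir()

# Directory standing in for site-packages, holding the .pth file.
site_dir = tempfile.mkdtemp()
pathlib.Path(site_dir, "libero_plus_repo.pth").write_text(f"{repo_dir}\n")

# addsitedir() processes .pth files the same way site-packages is processed
# at startup: each existing path listed in the file is appended to sys.path.
site.addsitedir(site_dir)
print(str(repo_dir) in sys.path)
```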
+20
@@ -0,0 +1,20 @@
# Copyright 2026 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM lerobot-eval-base:latest
RUN uv pip install --no-cache ".[metaworld]" \
&& python -c "import metaworld"
CMD ["/bin/bash"]
+40
@@ -0,0 +1,40 @@
# Copyright 2026 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM lerobot-eval-base:latest
# robocasa README says to use master branch of ARISE-Initiative/robosuite.
# Install it with deps (robosuite from master has modern dep declarations).
RUN git clone --depth 1 https://github.com/ARISE-Initiative/robosuite.git /tmp/robosuite \
&& uv pip install --no-cache /tmp/robosuite
# Clone robocasa and install with --no-deps to skip its lerobot==0.3.3 pin.
# Install robocasa's actual runtime deps explicitly instead.
RUN git clone --depth 1 https://github.com/robocasa/robocasa.git /tmp/robocasa \
&& uv pip install --no-cache --no-deps /tmp/robocasa \
&& uv pip install --no-cache \
"scikit-image>=0.20.0" \
"numba>=0.61.0,<0.62.0" \
"mujoco==3.3.1" \
"h5py" \
"lxml" \
"tianshou==0.4.10" \
"easydict>=1.9"
# robocasa/__init__.py asserts numpy.__version__ in ["2.2.5"] — pin it last
# so no subsequent package can bump it away.
RUN uv pip install --no-cache "numpy==2.2.5" \
&& python -c "import robocasa"
CMD ["/bin/bash"]
+26
@@ -0,0 +1,26 @@
# Copyright 2026 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
FROM lerobot-eval-base:latest
# mani-skill==3.0.0b21 (robomme dep) pins gymnasium==0.29.1 and numpy<2.0.0,
# conflicting with lerobot's gymnasium>=1.1.1 and numpy>=2.0.0.
# Both overrides are safe at runtime:
# - gymnasium 0.29.x has the same 5-tuple step() API as 1.x (since gym 0.26)
# - numpy 1.26.4 is API-compatible with lerobot's actual usage (no 2.x-only APIs used)
RUN printf 'gymnasium==0.29.1\nnumpy==1.26.4\n' > /tmp/robomme_override.txt \
&& uv pip install --no-cache --override /tmp/robomme_override.txt ".[robomme]" \
&& python -c "import robomme"
CMD ["/bin/bash"]
+1 -1
@@ -24,7 +24,7 @@ ARG OS_VERSION=22.04
FROM nvidia/cuda:${CUDA_VERSION}-base-ubuntu${OS_VERSION}
# Define Python version argument
-ARG PYTHON_VERSION=3.10
+ARG PYTHON_VERSION=3.12
# Configure environment variables
ENV DEBIAN_FRONTEND=noninteractive \
+1 -1
@@ -19,7 +19,7 @@
# docker run -it --rm lerobot-user
# Configure the base image
-ARG PYTHON_VERSION=3.10
+ARG PYTHON_VERSION=3.12
FROM python:${PYTHON_VERSION}-slim
# Configure environment variables
+120
@@ -0,0 +1,120 @@
#!/usr/bin/env bash
# Build (and optionally push) all lerobot benchmark eval images.
#
# Usage:
# # Build locally only (for testing on this machine)
# bash docker/build_benchmark_images.sh
#
# # Build and push to Docker Hub under your org
# bash docker/build_benchmark_images.sh --push --hub_org=pepijn223
#
# # Force-rebuild base image (e.g. after Dockerfile.eval-base changes)
# bash docker/build_benchmark_images.sh --no-cache-base --push --hub_org=pepijn223
#
# # Build only specific benchmarks
# bash docker/build_benchmark_images.sh --benchmarks="libero_plus robomme"
#
# After building, run eval with:
# lerobot-eval --eval.runtime=docker --eval.docker.pull=false \
# --eval.docker.image=<hub_org>/lerobot-benchmark-<benchmark>:latest ...
# OR (if run locally with the default tag):
# lerobot-eval --eval.runtime=docker --eval.docker.pull=false \
# --env.type=<benchmark> ... # auto-resolves to lerobot-benchmark-<benchmark>
set -euo pipefail
PUSH=false
HUB_ORG=""
BENCHMARKS="libero libero_plus robomme robocasa metaworld"
NO_CACHE_BASE=false
PROGRESS="auto"
for arg in "$@"; do
case "$arg" in
--push) PUSH=true ;;
--hub_org=*) HUB_ORG="${arg#*=}" ;;
--benchmarks=*) BENCHMARKS="${arg#*=}" ;;
--no-cache-base) NO_CACHE_BASE=true ;;
--plain) PROGRESS="plain" ;;
*) echo "Unknown arg: $arg"; exit 1 ;;
esac
done
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "${SCRIPT_DIR}/.." && pwd)"
if [[ "$PUSH" == "true" && -z "$HUB_ORG" ]]; then
echo "ERROR: --push requires --hub_org=<your-dockerhub-org>"
exit 1
fi
ok() { echo "[OK] $*"; }
fail() { echo "[FAIL] $*"; exit 1; }
BASE_CACHE_FLAG=""
if [[ "$NO_CACHE_BASE" == "true" ]]; then
BASE_CACHE_FLAG="--no-cache"
fi
echo "=== Building lerobot-eval-base ==="
docker build \
${BASE_CACHE_FLAG} \
--progress="${PROGRESS}" \
-f "${SCRIPT_DIR}/Dockerfile.eval-base" \
-t lerobot-eval-base:latest \
"${REPO_ROOT}" || fail "lerobot-eval-base build failed"
ok "lerobot-eval-base"
for BENCHMARK in $BENCHMARKS; do
LOCAL_TAG="lerobot-benchmark-${BENCHMARK}:latest"
# Handle underscore → hyphen mapping for filename lookup
DOCKERFILE_HYPHEN="${SCRIPT_DIR}/Dockerfile.eval-${BENCHMARK//_/-}"
DOCKERFILE_UNDERSCORE="${SCRIPT_DIR}/Dockerfile.eval-${BENCHMARK}"
if [[ -f "$DOCKERFILE_HYPHEN" ]]; then
DOCKERFILE="$DOCKERFILE_HYPHEN"
elif [[ -f "$DOCKERFILE_UNDERSCORE" ]]; then
DOCKERFILE="$DOCKERFILE_UNDERSCORE"
else
fail "No Dockerfile found for benchmark '${BENCHMARK}' (tried ${DOCKERFILE_HYPHEN} and ${DOCKERFILE_UNDERSCORE})"
fi
echo ""
echo "=== Building ${LOCAL_TAG} from $(basename ${DOCKERFILE}) ==="
docker build \
--progress="${PROGRESS}" \
-f "${DOCKERFILE}" \
-t "${LOCAL_TAG}" \
"${REPO_ROOT}" || fail "${LOCAL_TAG} build failed"
ok "${LOCAL_TAG}"
if [[ "$PUSH" == "true" ]]; then
HUB_TAG="${HUB_ORG}/lerobot-benchmark-${BENCHMARK}:latest"
docker tag "${LOCAL_TAG}" "${HUB_TAG}"
docker push "${HUB_TAG}" || fail "push ${HUB_TAG} failed"
ok "Pushed ${HUB_TAG}"
fi
done
echo ""
echo "=== Smoke-testing images ==="
for BENCHMARK in $BENCHMARKS; do
LOCAL_TAG="lerobot-benchmark-${BENCHMARK}:latest"
echo " Smoke test: ${LOCAL_TAG}"
docker run --rm -e BENCHMARK="${BENCHMARK}" \
"${LOCAL_TAG}" bash docker/smoke_test_benchmark.sh \
&& ok "smoke test ${BENCHMARK}" \
|| echo "[WARN] smoke test failed for ${BENCHMARK} (may need GPU)"
done
echo ""
echo "All benchmark images built successfully."
if [[ "$PUSH" == "true" ]]; then
echo "Pushed to Docker Hub under: ${HUB_ORG}/"
echo ""
echo "To use Hub images in eval, pass:"
for BENCHMARK in $BENCHMARKS; do
echo " --eval.docker.image=${HUB_ORG}/lerobot-benchmark-${BENCHMARK}:latest"
done
fi
+115
@@ -0,0 +1,115 @@
#!/usr/bin/env bash
# Smoke-test a benchmark container: verifies imports and CLI entry-points.
#
# Build and run for a specific benchmark:
# docker build --build-arg BENCHMARK=libero -f docker/Dockerfile.benchmark -t lerobot-benchmark-libero .
# docker run --gpus all --rm -e BENCHMARK=libero lerobot-benchmark-libero bash docker/smoke_test_benchmark.sh
#
# Test all benchmarks individually:
# for b in libero libero_plus robomme robocasa; do
# docker build --build-arg BENCHMARK=$b -f docker/Dockerfile.benchmark -t lerobot-benchmark-$b .
# docker run --gpus all --rm -e BENCHMARK=$b lerobot-benchmark-$b bash docker/smoke_test_benchmark.sh
# done
set -euo pipefail
BENCHMARK="${BENCHMARK:-libero}"
PASS=0
FAIL=0
ok() { echo "[PASS] $*"; PASS=$((PASS + 1)); }
fail() { echo "[FAIL] $*"; FAIL=$((FAIL + 1)); }
python_import() {
local module="$1"
if python -c "import ${module}" 2>/dev/null; then
ok "import ${module}"
else
fail "import ${module}"
fi
}
cli_help() {
local cmd="$1"
if "${cmd}" --help > /dev/null 2>&1; then
ok "${cmd} --help"
else
fail "${cmd} --help"
fi
}
echo "=== Smoke test: benchmark=${BENCHMARK} ==="
# ── lerobot core ──────────────────────────────────────────────────────────────
python_import "lerobot"
python_import "lerobot.envs"
python_import "lerobot.configs.eval"
cli_help "lerobot-eval"
# ── Benchmark-specific env import ─────────────────────────────────────────────
case "${BENCHMARK}" in
libero)
python_import "lerobot.envs.libero"
python -c "
from lerobot.envs.configs import LiberoEnv
cfg = LiberoEnv(task='libero_spatial/KITCHEN_SCENE1_open_the_bottom_drawer_of_the_cabinet')
print(' LiberoEnv config OK:', cfg.type)
" && ok "LiberoEnv config instantiation" || fail "LiberoEnv config instantiation"
;;
libero_plus)
python_import "lerobot.envs.libero"
python -c "
from lerobot.envs.configs import LiberoPlusEnv
cfg = LiberoPlusEnv()
print(' LiberoPlusEnv config OK:', cfg.type)
" && ok "LiberoPlusEnv config instantiation" || fail "LiberoPlusEnv config instantiation"
# Verify the LIBERO-plus package itself is importable
python_import "libero"
python_import "robosuite"
;;
robomme)
python_import "lerobot.envs.robomme"
python -c "
from lerobot.envs.robomme import ROBOMME_TASKS, RoboMMEGymEnv
assert len(ROBOMME_TASKS) == 16, f'Expected 16 tasks, got {len(ROBOMME_TASKS)}'
print(' ROBOMME_TASKS OK:', ROBOMME_TASKS[:3], '...')
" && ok "RoboMME task list" || fail "RoboMME task list"
python -c "
from lerobot.envs.configs import RoboMMEEnv
cfg = RoboMMEEnv(task='PickXtimes')
print(' RoboMMEEnv config OK:', cfg.type)
" && ok "RoboMMEEnv config instantiation" || fail "RoboMMEEnv config instantiation"
python_import "robomme"
;;
robocasa)
python_import "lerobot.envs.robocasa"
python -c "
from lerobot.envs.robocasa import ACTION_DIM, STATE_DIM
assert ACTION_DIM == 12, f'Expected ACTION_DIM=12, got {ACTION_DIM}'
assert STATE_DIM == 16, f'Expected STATE_DIM=16, got {STATE_DIM}'
print(' ACTION_DIM:', ACTION_DIM, ' STATE_DIM:', STATE_DIM)
" && ok "RoboCasa constants" || fail "RoboCasa constants"
python -c "
from lerobot.envs.configs import RoboCasaEnv
cfg = RoboCasaEnv(task='PickPlaceCounterToCabinet')
print(' RoboCasaEnv config OK:', cfg.type)
" && ok "RoboCasaEnv config instantiation" || fail "RoboCasaEnv config instantiation"
python_import "robocasa"
python_import "robosuite"
;;
metaworld)
python_import "metaworld"
;;
*)
echo "Unknown BENCHMARK='${BENCHMARK}'. Valid values: libero, libero_plus, robomme, robocasa, metaworld"
exit 1
;;
esac
# ── Summary ───────────────────────────────────────────────────────────────────
echo ""
echo "=== Results: ${PASS} passed, ${FAIL} failed ==="
if [ "${FAIL}" -gt 0 ]; then
exit 1
fi
+2
@@ -19,6 +19,8 @@
title: Multi GPU training
- local: peft_training
title: Training with PEFT (e.g., LoRA)
- local: benchmark_training
title: Benchmark Training & Evaluation
title: "Tutorials"
- sections:
- local: lerobot-dataset-v3
+398
@@ -0,0 +1,398 @@
# Benchmark Training & Evaluation
This guide explains how to train and evaluate policies on the simulation benchmarks
integrated in LeRobot: **LIBERO**, **LIBERO-plus**, **MetaWorld**, **RoboCasa**, and **RoboMME**.
The workflow is:
1. Pick one or more benchmarks.
2. For each benchmark, train a policy on its combined dataset (multi-GPU).
3. Upload the trained policy to the Hugging Face Hub.
4. Evaluate the policy on every task suite within that benchmark.
## Prerequisites
Install the benchmark-specific dependencies for the environments you want to evaluate on:
```bash
# LIBERO (original)
pip install -e ".[libero]"
# LIBERO-plus
pip install -e ".[libero_plus]"
# MetaWorld
pip install -e ".[metaworld]"
# RoboCasa
pip install -e ".[robocasa]"
# RoboMME
pip install -e ".[robomme]"
```
`libero_plus` includes the same EGL probe dependencies as `libero` so headless
renderer setup is consistent between both installs.
If your environment has CMake build-isolation issues, use the same fallback as
standard LIBERO installs:
```bash
PATH=/usr/bin:/bin:$PATH pip install --no-build-isolation -e ".[libero_plus]"
```
For multi-GPU training you also need [Accelerate](https://huggingface.co/docs/accelerate):
```bash
pip install accelerate
```
## Docker-isolated evaluation (EnvHub)
LeRobot eval now supports running the full eval worker in a Docker container
while keeping policy loading compatible with local checkpoints and local code changes.
Use `lerobot-eval` with `--eval.runtime=docker`:
```bash
lerobot-eval \
--policy.path=outputs/train/my_policy/checkpoints/050000/pretrained_model \
--env.type=libero_plus \
--eval.runtime=docker \
--eval.docker.envhub_ref=envhub://lerobot/libero_plus@v1 \
--eval.n_episodes=10 \
--eval.batch_size=10
```
`eval.docker.envhub_ref` is optional. If omitted, LeRobot resolves a default
image from `env.type`. You can also override the image directly:
```bash
--eval.docker.image=docker://ghcr.io/huggingface/lerobot-eval-libero-plus:latest
```
By default (`eval.docker.use_local_code=true`), the local repository is mounted
in the container and added to `PYTHONPATH`, so edited policy/env code and local
checkpoints continue to work without rebuilding the image for each change.
Common Docker runtime options:
```bash
--eval.docker.pull=true \
--eval.docker.gpus=all \
--eval.docker.shm_size=8g \
--eval.docker.use_local_code=true
```
The benchmark runner supports the same Docker eval path (extra args are
forwarded to each generated `lerobot-eval` call):
```bash
lerobot-benchmark eval \
--benchmarks libero_plus,robocasa \
--hub-user $HF_USER \
--n-episodes 50 \
--eval.runtime=docker \
--eval.docker.pull=true
```
Build benchmark images locally:
```bash
make build-eval-images
```
## Fast single-machine eval tuning
`lerobot-eval` now has three orthogonal throughput knobs:
- `eval.batch_size`: number of sub-envs per task (inside one vector env).
- `env.max_parallel_tasks`: number of tasks scheduled concurrently.
- `eval.instance_count`: number of full eval instances (process-level sharding).
Use them in this order:
1. Increase `eval.batch_size` first for per-task throughput.
2. Then increase `env.max_parallel_tasks` to overlap tasks, while monitoring RAM/VRAM.
3. Optionally increase `eval.instance_count` for process-level parallelism (best with enough CPU/RAM and small models).
The eval logs print the active scheduler mode (`sequential`, `threaded`, or `batched_lazy`) so you can verify the effective concurrency path.
### Suggested starting points
| Benchmark | Conservative | Faster (single GPU) | Notes |
|---|---|---|---|
| `libero` / `libero_plus` | `eval.batch_size=1`, `env.max_parallel_tasks=4` | `eval.batch_size=1`, `env.max_parallel_tasks=16` | For large suite sweeps, increase `max_parallel_tasks` before `batch_size` to avoid MuJoCo memory spikes. |
| `metaworld` | `eval.batch_size=8`, `env.max_parallel_tasks=1` | `eval.batch_size=16`, `env.max_parallel_tasks=2` | Prefer larger per-task vectorization first. |
| `robocasa` | `eval.batch_size=4`, `env.max_parallel_tasks=1` | `eval.batch_size=8`, `env.max_parallel_tasks=2` | Rendering/memory can dominate at high image resolution. |
| `robomme` | `eval.batch_size=4`, `env.max_parallel_tasks=1` | `eval.batch_size=8`, `env.max_parallel_tasks=2` | Start small and scale gradually with task count. |
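When sizing RAM/VRAM, treat the three knobs as multiplicative: the number of simultaneously live sim environments is roughly `batch_size × max_parallel_tasks × instance_count`. A tiny helper (hypothetical, not part of LeRobot) captures the rule of thumb:

```python
def total_concurrent_envs(batch_size: int, max_parallel_tasks: int, instance_count: int = 1) -> int:
    """Rough upper bound on simultaneously live sim environments across all eval processes."""
    return batch_size * max_parallel_tasks * instance_count

# The "faster" libero row, scaled to two eval instances:
print(total_concurrent_envs(batch_size=1, max_parallel_tasks=16, instance_count=2))  # → 32
```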
### Local fast eval recipe
```bash
lerobot-eval \
--policy.path=$HF_USER/smolvla_libero_plus \
--env.type=libero_plus \
--eval.n_episodes=1 \
--eval.batch_size=1 \
--env.max_parallel_tasks=16 \
--eval.instance_count=2 \
--rename_map='{"observation.images.image":"observation.images.camera1","observation.images.image2":"observation.images.camera2"}' \
--output_dir=outputs/eval/smolvla_libero_plus \
--push_to_hub=true
```
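`--rename_map` takes a JSON object that remaps observation keys from the env's names to the ones the policy was trained with. Conceptually it behaves like this simplified sketch (not the actual LeRobot implementation):

```python
import json

# Same mapping as the --rename_map flag above.
rename_map = json.loads(
    '{"observation.images.image": "observation.images.camera1", '
    '"observation.images.image2": "observation.images.camera2"}'
)

def apply_rename(obs: dict, mapping: dict) -> dict:
    # Keys listed in the mapping are renamed; all other keys pass through unchanged.
    return {mapping.get(k, k): v for k, v in obs.items()}

obs = {"observation.images.image": "frame", "observation.state": [0.0, 1.0]}
print(sorted(apply_rename(obs, rename_map)))  # → ['observation.images.camera1', 'observation.state']
```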
### Docker fast eval recipe
```bash
lerobot-eval \
--policy.path=$HF_USER/smolvla_libero_plus \
--env.type=libero_plus \
--eval.runtime=docker \
--eval.docker.envhub_ref=envhub://lerobot/libero_plus@v1 \
--eval.docker.gpus=all \
--eval.docker.shm_size=16g \
--eval.n_episodes=1 \
--eval.batch_size=1 \
--env.max_parallel_tasks=16
```
## Quick start — single benchmark
Train SmolVLA on LIBERO-plus with 4 GPUs for 50 000 steps:
```bash
lerobot-benchmark train \
--benchmarks libero_plus \
--policy-path lerobot/smolvla_base \
--hub-user $HF_USER \
--num-gpus 4 \
--steps 50000 \
--batch-size 32 \
--wandb
```
This trains on the combined LIBERO-plus dataset and pushes the checkpoint to
`$HF_USER/smolvla_libero_plus` on the Hub.
Then evaluate on **all four** LIBERO suites (spatial, object, goal, 10):
```bash
lerobot-benchmark eval \
--benchmarks libero_plus \
--hub-user $HF_USER \
--n-episodes 50
```
This automatically runs a separate `lerobot-eval` for each suite.
## Full sweep — multiple benchmarks
Run training **and** evaluation across all benchmarks:
```bash
lerobot-benchmark all \
--benchmarks libero,libero_plus,metaworld,robocasa,robomme \
--policy-path lerobot/smolvla_base \
--hub-user $HF_USER \
--num-gpus 4 \
--steps 50000 \
--batch-size 32 \
--wandb \
--push-eval-to-hub
```
For each benchmark the runner:
1. Trains a policy on its dataset.
2. Evaluates on every eval task in the benchmark (e.g. 4 suites for LIBERO).
3. Pushes HF-native `.eval_results` rows (and optional artifacts) to the Hub.
<Tip>
Use `--dry-run` to print the exact `lerobot-train` / `lerobot-eval` commands without executing them, so you can inspect or modify them before running.
</Tip>
## Using the CLI directly (without the benchmark runner)
You can also compose the commands yourself. The benchmark runner is a thin wrapper; here is what it does under the hood.
### Training
```bash
accelerate launch \
  --multi_gpu \
  --num_processes=4 \
  $(which lerobot-train) \
  --policy.path=lerobot/smolvla_base \
  --dataset.repo_id=$HF_USER/libero_plus \
  --policy.repo_id=$HF_USER/smolvla_libero_plus \
  --env.type=libero_plus \
  --env.task=libero_spatial \
  --steps=50000 \
  --batch_size=32 \
  --eval_freq=10000 \
  --save_freq=10000 \
  --output_dir=outputs/train/smolvla_libero_plus \
  --job_name=smolvla_libero_plus \
  --policy.push_to_hub=true \
  --wandb.enable=true
```
### Evaluation (run once per suite)
```bash
for SUITE in libero_spatial libero_object libero_goal libero_10; do
  lerobot-eval \
    --policy.path=$HF_USER/smolvla_libero_plus \
    --env.type=libero_plus \
    --env.task=$SUITE \
    --eval.n_episodes=50 \
    --eval.batch_size=10 \
    --output_dir=outputs/eval/smolvla_libero_plus/$SUITE \
    --policy.device=cuda \
    --push_to_hub=true \
    --benchmark_dataset_id=lerobot/sim-benchmarks
done
```
## Available benchmarks
| Benchmark | Env type | Dataset | Eval tasks | Action dim |
|---|---|---|---|---|
| `libero` | `libero` | `{hub_user}/libero` | spatial, object, goal, 10 | 7 |
| `libero_plus` | `libero_plus` | `{hub_user}/libero_plus` | spatial, object, goal, 10 | 7 |
| `metaworld` | `metaworld` | `{hub_user}/metaworld` | push-v2 | 4 |
| `robocasa` | `robocasa` | `{hub_user}/robocasa` | PickPlaceCounterToCabinet | 12 |
| `robomme` | `robomme` | `{hub_user}/robomme` | PickXtimes | 8 |
Run `lerobot-benchmark list` to see the full registry with all eval tasks.
## Policy naming convention
The benchmark runner stores trained policies under:
```
{hub_user}/{policy_name}_{benchmark}
```
The default `--policy-name` is `smolvla`. So training on `libero_plus` as user `alice` produces `alice/smolvla_libero_plus`.
You can override this, e.g. `--policy-name pi05` if training π₀.₅ instead.
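The resulting repo id is plain string composition; a minimal sketch (the helper is illustrative, not the runner's actual code):

```python
def policy_repo(hub_user: str, benchmark: str, policy_name: str = "smolvla") -> str:
    # Mirrors the {hub_user}/{policy_name}_{benchmark} convention above.
    return f"{hub_user}/{policy_name}_{benchmark}"

print(policy_repo("alice", "libero_plus"))          # alice/smolvla_libero_plus
print(policy_repo("alice", "libero_plus", "pi05"))  # alice/pi05_libero_plus
```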
## Multi-GPU considerations
The effective batch size is `batch_size × num_gpus`. With `--batch-size=32` and
`--num-gpus=4`, you train with an effective batch of 128 per step. LeRobot does **not**
auto-scale the learning rate; see the [Multi-GPU Training guide](./multi_gpu_training) for
details on when and how to adjust it.
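The arithmetic, spelled out (a sketch; the linear LR scaling shown is one common optional heuristic, not something LeRobot applies for you):

```python
def effective_batch(batch_size: int, num_gpus: int) -> int:
    # Samples consumed per optimizer step across all data-parallel workers.
    return batch_size * num_gpus

def linear_scaled_lr(base_lr: float, num_gpus: int) -> float:
    # Optional heuristic: scale the learning rate with the number of workers.
    # LeRobot does NOT do this automatically; decide case by case.
    return base_lr * num_gpus

print(effective_batch(32, 4))  # 128
```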
## Custom benchmarks
To add a new benchmark, edit the `BENCHMARK_REGISTRY` in
`src/lerobot/scripts/lerobot_benchmark.py`:
```python
from lerobot.scripts.lerobot_benchmark import BenchmarkEntry, BENCHMARK_REGISTRY
BENCHMARK_REGISTRY["my_benchmark"] = BenchmarkEntry(
    dataset_repo_id="{hub_user}/my_dataset",
    env_type="my_env",
    env_task="MyDefaultTask",
    eval_tasks=["TaskA", "TaskB", "TaskC"],
)
```
Then use `--benchmarks my_benchmark` as usual. The runner will train once and
evaluate separately on TaskA, TaskB, and TaskC.
## Outputs
After training and evaluation, your outputs directory looks like:
```
outputs/
├── train/
│   ├── smolvla_libero/
│   │   ├── checkpoints/
│   │   └── ...
│   ├── smolvla_libero_plus/
│   ├── smolvla_robocasa/
│   └── smolvla_robomme/
└── eval/
    ├── smolvla_libero/
    │   ├── libero_spatial/
    │   │   ├── eval_info.json
    │   │   └── videos/
    │   ├── libero_object/
    │   ├── libero_goal/
    │   └── libero_10/
    ├── smolvla_libero_plus/
    │   ├── libero_spatial/
    │   ├── libero_object/
    │   ├── libero_goal/
    │   └── libero_10/
    ├── smolvla_robocasa/
    └── smolvla_robomme/
```
Each `eval_info.json` contains per-episode rewards, success rates, and aggregate metrics.
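To aggregate results programmatically you can load each `eval_info.json`. A minimal sketch (the `"per_episode"` and `"success"` field names are assumptions about the layout, not a documented schema; adapt them to the keys you actually see):

```python
import json
from pathlib import Path

def success_rate(eval_info_path: str) -> float:
    # Assumed layout: {"per_episode": [{"success": bool, ...}, ...]}
    info = json.loads(Path(eval_info_path).read_text())
    episodes = info["per_episode"]
    return 100.0 * sum(bool(ep["success"]) for ep in episodes) / len(episodes)
```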
## HF Eval Results + Leaderboard
LeRobot publishes benchmark scores using Hugging Face's native
`/.eval_results/*.yaml` format, which powers model-page eval cards and
benchmark leaderboards.
Add `--push-eval-to-hub` to push results after each eval run:
```bash
lerobot-benchmark eval \
  --benchmarks libero_plus,robocasa \
  --hub-user $HF_USER \
  --benchmark-dataset-id lerobot/sim-benchmarks \
  --push-eval-to-hub
```
This writes one or more files under `.eval_results/` in the model repo, for example:
```yaml
- dataset:
    id: lerobot/sim-benchmarks
    task_id: libero_plus/spatial
  value: 82.4
  notes: lerobot-eval
```
Notes:
- `--benchmark-dataset-id` points to your consolidated benchmark dataset repo.
- `task_id` values are derived from `env.type` and evaluated suite/task names.
- Eval artifacts (`eval_info.json`, `eval_config.json`, videos) are still uploaded
for provenance, but leaderboard ranking comes from `.eval_results`.
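The `task_id` strings can be reproduced with simple string composition; a guess at the convention implied above, not the runner's exact code:

```python
def make_task_id(env_type: str, suite: str) -> str:
    # e.g. suite "libero_spatial" under env "libero_plus" -> "libero_plus/spatial"
    short = suite.removeprefix("libero_")
    return f"{env_type}/{short}"

print(make_task_id("libero_plus", "libero_spatial"))  # libero_plus/spatial
```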
## Passing extra arguments
Any arguments after the recognized flags are forwarded to `lerobot-train` or
`lerobot-eval`.
Example (training): use PEFT/LoRA during training.
```bash
lerobot-benchmark train \
  --benchmarks libero_plus \
  --policy-path lerobot/smolvla_base \
  --hub-user $HF_USER \
  --num-gpus 4 \
  --steps 50000 \
  --peft.method_type=LORA --peft.r=16
```
Example (evaluation): forward Docker runtime flags to each `lerobot-eval` call.
```bash
lerobot-benchmark eval \
  --benchmarks libero_plus \
  --hub-user $HF_USER \
  --eval.runtime=docker \
  --eval.docker.envhub_ref=envhub://lerobot/libero_plus@v1
```
+4 -4
@@ -32,7 +32,7 @@ version = "0.1.0"
dependencies = [
# your policy-specific dependencies
]
-requires-python = ">= 3.11"
+requires-python = ">= 3.12"
[build-system]
build-backend = # your-build-backend
@@ -82,7 +82,7 @@ Create your policy implementation by inheriting from LeRobot's base `PreTrainedP
# modeling_my_custom_policy.py
import torch
import torch.nn as nn
-from typing import Dict, Any
+from typing import Any
from lerobot.policies.pretrained import PreTrainedPolicy
from .configuration_my_custom_policy import MyCustomPolicyConfig
@@ -91,7 +91,7 @@ class MyCustomPolicy(PreTrainedPolicy):
config_class = MyCustomPolicyConfig
name = "my_custom_policy"
-def __init__(self, config: MyCustomPolicyConfig, dataset_stats: Dict[str, Any] = None):
+def __init__(self, config: MyCustomPolicyConfig, dataset_stats: dict[str, Any] = None):
super().__init__(config, dataset_stats)
...
```
@@ -102,7 +102,7 @@ Create processor functions:
```python
# processor_my_custom_policy.py
-from typing import Dict, Any
+from typing import Any
import torch
+1 -1
@@ -13,7 +13,7 @@ The EarthRover Mini Plus is a fully open source mobile robot that connects throu
### Hardware
- EarthRover Mini robot
-- Computer with Python 3.10 or newer
+- Computer with Python 3.12 or newer
- Internet connection
### Setting Up the Frodobots SDK
+60 -13
@@ -1,8 +1,8 @@
# Installation
-This guide uses conda (via miniforge) to manage environments. If you prefer another environment manager (e.g. `uv`, `venv`), ensure you have Python >=3.10 and ffmpeg installed with the `libsvtav1` encoder, then skip ahead to [Install LeRobot](#step-3-install-lerobot-).
+This guide uses `conda` (via miniforge) to manage environments (recommended). If you prefer another environment manager (e.g. `uv`, `venv`), ensure you have Python >=3.12 and `ffmpeg` installed with the `libsvtav1` encoder, then skip ahead to [Environment Setup](#step-2-environment-setup).
-## Step 1: Install [`miniforge`](https://conda-forge.org/download/)
+## Step 1 (`conda` only): Install [`miniforge`](https://conda-forge.org/download/)
```bash
wget "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh"
@@ -11,22 +11,47 @@ bash Miniforge3-$(uname)-$(uname -m).sh
## Step 2: Environment Setup
-Create a virtual environment with Python 3.10, using conda:
+Create a virtual environment with Python 3.12:
<!-- prettier-ignore-start -->
<hfoptions id="create_venv">
<hfoption id="conda">
```bash
-conda create -y -n lerobot python=3.10
+conda create -y -n lerobot python=3.12
```
Then activate your conda environment, you have to do this each time you open a shell to use lerobot:
</hfoption>
<hfoption id="uv">
```bash
uv python install 3.12
uv venv --python 3.12
```
</hfoption>
</hfoptions>
<!-- prettier-ignore-end -->
Then activate your virtual environment; you have to do this each time you open a shell to use lerobot:
<!-- prettier-ignore-start -->
<hfoptions id="activate_venv">
<hfoption id="conda">
```bash
conda activate lerobot
```
</hfoption>
<hfoption id="uv">
```bash
# Linux/macOS
source .venv/bin/activate
# Windows PowerShell
.venv\Scripts\Activate.ps1
```
</hfoption>
</hfoptions>
<!-- prettier-ignore-end -->
When using `conda`, install `ffmpeg` in your environment:
```bash
conda install ffmpeg -c conda-forge
ffmpeg -version # ffmpeg 8.X is not yet supported!
```
> [!TIP]
@@ -47,6 +72,9 @@ conda install ffmpeg -c conda-forge
> conda install evdev -c conda-forge
> ```
> [!IMPORTANT]
> If you are using `uv`, you will have to install `ffmpeg` system-wide (outside of the virtual environment); this relies on `uv` and `torchcodec` being able to dynamically link to the system `ffmpeg`.
## Step 3: Install LeRobot 🤗
### From Source
@@ -60,23 +88,45 @@ cd lerobot
Then, install the library in editable mode. This is useful if you plan to contribute to the code.
<!-- prettier-ignore-start -->
<hfoptions id="install_lerobot_src">
<hfoption id="conda">
```bash
pip install -e .
```
</hfoption>
<hfoption id="uv">
```bash
uv pip install -e .
```
</hfoption>
</hfoptions>
<!-- prettier-ignore-end -->
### Installation from PyPI
**Core Library:**
Install the base package with:
<!-- prettier-ignore-start -->
<hfoptions id="install_lerobot_pypi">
<hfoption id="conda">
```bash
pip install lerobot
```
</hfoption>
<hfoption id="uv">
```bash
uv pip install lerobot
```
</hfoption>
</hfoptions>
<!-- prettier-ignore-end -->
_This installs only the default dependencies._
**Extra Features:**
-To install additional functionality, use one of the following:
+To install additional functionality, use one of the following (If you are using `uv`, replace `pip install` with `uv pip install` in the commands below.):
```bash
pip install 'lerobot[all]' # All available features
@@ -90,13 +140,10 @@ _Replace `[...]` with your desired features._
For a full list of optional dependencies, see:
https://pypi.org/project/lerobot/
> [!NOTE]
> For lerobot 0.4.0, if you want to install pi, you will have to do: `pip install "lerobot[pi]@git+https://github.com/huggingface/lerobot.git"`
### Troubleshooting
If you encounter build errors, you may need to install additional dependencies: `cmake`, `build-essential`, and `ffmpeg libs`.
-To install these for linux run:
+To install these for Linux run:
```bash
sudo apt-get install cmake build-essential python3-dev pkg-config libavformat-dev libavcodec-dev libavdevice-dev libavutil-dev libswscale-dev libswresample-dev libavfilter-dev
@@ -106,7 +153,7 @@ For other systems, see: [Compiling PyAV](https://pyav.org/docs/develop/overview/
## Optional dependencies
-LeRobot provides optional extras for specific functionalities. Multiple extras can be combined (e.g., `.[aloha,feetech]`). For all available extras, refer to `pyproject.toml`.
+LeRobot provides optional extras for specific functionalities. Multiple extras can be combined (e.g., `.[aloha,feetech]`). For all available extras, refer to `pyproject.toml`. If you are using `uv`, replace `pip install` with `uv pip install` in the commands below.
### Simulations
-5
@@ -34,11 +34,6 @@ As described by Physical Intelligence, while AI has achieved remarkable success
pip install -e ".[pi]"
```
> [!NOTE]
> For lerobot 0.4.0, if you want to install pi tag, you will have to do: `pip install "lerobot[pi]@git+https://github.com/huggingface/lerobot.git"`.
>
> This will be solved in the next patch release
## Training Data and Capabilities
π₀ is trained on the largest robot interaction dataset to date, combining three key data sources:
-5
@@ -36,11 +36,6 @@ This diverse training mixture creates a "curriculum" that enables generalization
pip install -e ".[pi]"
```
> [!NOTE]
> For lerobot 0.4.0, if you want to install pi tag, you will have to do: `pip install "lerobot[pi]@git+https://github.com/huggingface/lerobot.git"`.
>
> This will be solved in the next patch release
## Usage
To use π₀.₅ in your LeRobot configuration, specify the policy type as:
-5
@@ -43,11 +43,6 @@ This approach can transform **any existing VLM** into a VLA by training it to pr
pip install -e ".[pi]"
```
> [!NOTE]
> For lerobot 0.4.0, if you want to install the pi tag, you will have to do: `pip install "lerobot[pi]@git+https://github.com/huggingface/lerobot.git"`.
>
> This will be solved in the next patch release
## Training a Custom FAST Tokenizer
You have two options for the FAST tokenizer:
+140 -186
@@ -1,23 +1,49 @@
# Unitree G1
This guide covers the complete setup process for the Unitree G1 humanoid, from initial connection to running gr00t_wbc locomotion.
<img
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/unitree_thumbnail.jpg"
alt="Unitree G1 locomanipulation demo"
style={{ width: "100%" }}
/>
## About
We support both the 29 and 23 DoF G1 EDU versions. We introduce:
- **`unitree_g1` robot class** handling low-level read/write from/to the humanoid
- **ZMQ socket bridge** for remote communication and camera streaming, allowing remote policy deployment over WLAN, Ethernet, or directly on the robot
- **Locomotion policies** from NVIDIA gr00t and Amazon FAR Holosoma
- **Simulation mode** for testing policies in MuJoCo without the physical robot
The Unitree G1 humanoid is now supported in LeRobot! You can teleoperate, train locomanipulation policies, test in sim, and more. Both 29 and 23 DoF variants are supported.
---
-## Connection guide
+## Part 1: Getting Started
-### Step 1: Configure Ethernet Interface
+### Install LeRobot on Your Machine
Set a static IP on the same subnet as the robot:
```bash
conda create -y -n lerobot python=3.12
conda activate lerobot
git clone https://github.com/unitreerobotics/unitree_sdk2_python.git
cd unitree_sdk2_python && pip install -e .
git clone https://github.com/huggingface/lerobot.git
cd lerobot
pip install -e '.[unitree_g1]'
```
### Test the Installation (Simulation)
```bash
lerobot-teleoperate \
--robot.type=unitree_g1 \
--robot.is_simulation=true \
--teleop.type=unitree_g1 \
--teleop.id=wbc_unitree \
--robot.cameras='{"global_view": {"type": "zmq", "server_address": "localhost", "port": 5555, "camera_name": "head_camera", "width": 640, "height": 480, "fps": 30}}' \
--display_data=true
```
This will launch a [MuJoCo sim instance](https://huggingface.co/lerobot/unitree-g1-mujoco/tree/main) for the G1.
- Press `9` to release the robot
- Press `7` / `8` to increase / decrease waist height
### Connect to the Robot
The G1's Ethernet IP is fixed at `192.168.123.164`. Your machine must have a static IP on the same subnet: `192.168.123.x` where `x ≠ 164`.
```bash
# Replace 'enp131s0' with your ethernet interface name (check with `ip a`)
@@ -26,272 +52,200 @@ sudo ip addr add 192.168.123.200/24 dev enp131s0
sudo ip link set enp131s0 up
```
**Note**: The G1's Ethernet IP is fixed at `192.168.123.164`. Your computer must use `192.168.123.x` with x ≠ 164.
-### Step 2: SSH into the Robot
+### SSH into the Robot
```bash
ssh unitree@192.168.123.164
# Password: 123
```
You should now be connected to the G1's Orin.
### Install LeRobot on the G1
From the robot:
```bash
conda create -y -n lerobot python=3.12
conda activate lerobot
git clone https://github.com/unitreerobotics/unitree_sdk2_python.git
cd unitree_sdk2_python && pip install -e .
git clone https://github.com/huggingface/lerobot.git
cd lerobot
pip install -e '.[unitree_g1]'
```
> **Note:** The Unitree SDK requires CycloneDDS v0.10.2. See the [Unitree SDK docs](https://github.com/unitreerobotics/unitree_sdk2_python) for details.
---
## Part 2: Enable WiFi on the Robot
Wlan0 is disabled by default on the G1. To enable it:
### Step 1: Enable WiFi Hardware
Wi-Fi connectivity is blocked by default on the G1. To activate:
```bash
sudo rfkill unblock wifi
sudo rfkill unblock all
# Bring up wlan0
sudo ip link set wlan0 up
# Enable NetworkManager control of wlan0
sudo nmcli radio wifi on
sudo nmcli device set wlan0 managed yes
sudo systemctl restart NetworkManager
```
### Step 2: Enable Internet Forwarding
-**On your laptop:**
+**On your laptop** (share internet via Ethernet):
```bash
# Enable IP forwarding
sudo sysctl -w net.ipv4.ip_forward=1
-# Set up NAT (replace wlp132s0f0 with your WiFi interface)
+# Replace wlp132s0f0 with your WiFi interface name
sudo iptables -t nat -A POSTROUTING -o wlp132s0f0 -s 192.168.123.0/24 -j MASQUERADE
sudo iptables -A FORWARD -i wlp132s0f0 -o enp131s0 -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -i enp131s0 -o wlp132s0f0 -j ACCEPT
```
-**On the G1:**
+**On the G1** (set default route through your laptop):
```bash
# Add laptop as default gateway
sudo ip route del default 2>/dev/null || true
sudo ip route add default via 192.168.123.200 dev eth0
echo "nameserver 8.8.8.8" | sudo tee /etc/resolv.conf
-# Test connection
+# Verify
ping -c 3 8.8.8.8
```
-### Step 3: Connect to WiFi Network
+**Connect to a WiFi network:**
```bash
# List available networks
nmcli device wifi list
# Connect to your WiFi (example)
sudo nmcli connection add type wifi ifname wlan0 con-name "YourNetwork" ssid "YourNetwork"
sudo nmcli connection modify "YourNetwork" wifi-sec.key-mgmt wpa-psk
sudo nmcli connection modify "YourNetwork" wifi-sec.psk "YourPassword"
sudo nmcli connection modify "YourNetwork" connection.autoconnect yes
sudo nmcli connection up "YourNetwork"
# Check WiFi IP address
ip a show wlan0
```
### Step 4: SSH Over WiFi
-Once connected to WiFi, note the robot's IP address and disconnect the Ethernet cable. You can now SSH over WiFi:
+You can now SSH over WiFi:
```bash
-ssh unitree@<YOUR_ROBOT_IP>
+ssh unitree@<ROBOT_WIFI_IP>
# Password: 123
```
Replace `<YOUR_ROBOT_IP>` with your robot's actual WiFi IP address.
---
-## Part 3: Robot Server Setup
+## Part 3: Teleoperation & Locomotion
### Step 1: Install LeRobot on the Orin
SSH into the robot and install LeRobot:
```bash
ssh unitree@<YOUR_ROBOT_IP>
conda create -y -n lerobot python=3.10
conda activate lerobot
git clone https://github.com/huggingface/lerobot.git
cd lerobot
pip install -e '.[unitree_g1]'
git clone https://github.com/unitreerobotics/unitree_sdk2_python.git
cd unitree_sdk2_python && pip install -e .
```
**Note**: The Unitree SDK requires CycloneDDS v0.10.2 to be installed. See the [Unitree SDK documentation](https://github.com/unitreerobotics/unitree_sdk2_python) for details.
-### Step 2: Run the Robot Server
+### Run the Robot Server
On the robot:
```bash
-python src/lerobot/robots/unitree_g1/run_g1_server.py
+python src/lerobot/robots/unitree_g1/run_g1_server.py --camera
```
**Important**: Keep this terminal running. The server must be active for remote control.
### Run the Locomotion Policy
```bash
lerobot-teleoperate \
--robot.type=unitree_g1 \
--robot.is_simulation=false \
--robot.robot_ip=<ROBOT_IP> \
--teleop.type=unitree_g1 \
--teleop.id=wbc_unitree \
--robot.cameras='{"global_view": {"type": "zmq", "server_address": "<ROBOT_IP>", "port": 5555, "camera_name": "head_camera", "width": 640, "height": 480, "fps": 30}}' \
--display_data=true \
--robot.controller=HolosomaLocomotionController
```
We support both [HolosomaLocomotionController](https://github.com/amazon-far/holosoma) and [GrootLocomotionController](https://github.com/NVlabs/GR00T-WholeBodyControl).
---
-## Part 4: Controlling the robot
+## Part 4: Loco-Manipulation with the Homunculus Exoskeleton
-With the robot server running, you can now control the robot remotely. Let's launch a locomotion policy.
+We provide a loco-manipulation solution via the Homunculus Exoskeleton — an open-source 7 DoF exoskeleton for whole-body control. Assembly instructions [here](https://github.com/nepyope/hmc_exo).
### Step 1: Install LeRobot on your machine
```bash
conda create -y -n lerobot python=3.10
conda activate lerobot
git clone https://github.com/huggingface/lerobot.git
cd lerobot
pip install -e '.[unitree_g1]'
git clone https://github.com/unitreerobotics/unitree_sdk2_python.git
cd unitree_sdk2_python && pip install -e .
```
### Step 2: Update Robot IP in Config
Edit the config file to match your robot's WiFi IP:
```python
# In src/lerobot/robots/unitree_g1/config_unitree_g1.py
robot_ip: str = "<YOUR_ROBOT_IP>" # Replace with your robot's WiFi IP.
```
### Step 3: Run the Locomotion Policy
```bash
# Run GR00T locomotion controller
python examples/unitree_g1/gr00t_locomotion.py --repo-id "nepyope/GR00T-WholeBodyControl_g1"
# Run Holosoma locomotion controller
python examples/unitree_g1/holosoma_locomotion.py
```
Press `Ctrl+C` to stop the policy.
---
## Running in Simulation Mode (MuJoCo)
You can test policies before deploying on the physical robot using MuJoCo simulation. Set `is_simulation=True` in config or pass `--robot.is_simulation=true` via CLI.
-### Calibrate Exoskeleton Teleoperator
+### Calibrate
```bash
lerobot-calibrate \
--teleop.type=unitree_g1 \
--teleop.left_arm_config.port=/dev/ttyACM1 \
--teleop.right_arm_config.port=/dev/ttyACM0 \
--teleop.id=exo
```
### Teleoperate in Simulation
During calibration, move each joint through its entire range. After fitting, move the joint to a neutral position and press `n` to advance.
```bash
lerobot-teleoperate \
--robot.type=unitree_g1 \
--robot.is_simulation=true \
--teleop.type=unitree_g1 \
--teleop.left_arm_config.port=/dev/ttyACM1 \
--teleop.right_arm_config.port=/dev/ttyACM0 \
--teleop.id=exo \
--fps=100
```
-### Record Dataset in Simulation
+### Record a Dataset
```bash
lerobot-record \
--robot.type=unitree_g1 \
--robot.is_simulation=true \
--robot.cameras='{"global_view": {"type": "zmq", "server_address": "localhost", "port": 5555, "camera_name": "head_camera", "width": 640, "height": 480, "fps": 30}}' \
--teleop.type=unitree_g1 \
--teleop.left_arm_config.port=/dev/ttyACM1 \
--teleop.right_arm_config.port=/dev/ttyACM0 \
--teleop.id=exo \
--dataset.repo_id=your-username/dataset-name \
--dataset.single_task="Test" \
--dataset.num_episodes=2 \
--dataset.episode_time_s=5 \
--dataset.reset_time_s=5 \
--dataset.push_to_hub=true \
--dataset.streaming_encoding=true \
# --dataset.vcodec=auto \
--dataset.encoder_threads=2
--robot.type=unitree_g1 \
--robot.is_simulation=true \
--robot.cameras='{"global_view": {"type": "zmq", "server_address": "localhost", "port": 5555, "camera_name": "head_camera", "width": 640, "height": 480, "fps": 30}}' \
--teleop.type=unitree_g1 \
--teleop.left_arm_config.port=/dev/ttyACM1 \
--teleop.right_arm_config.port=/dev/ttyACM0 \
--teleop.id=exo \
--dataset.repo_id=your-username/dataset-name \
--dataset.single_task="Test" \
--dataset.num_episodes=2 \
--dataset.episode_time_s=5 \
--dataset.reset_time_s=5 \
--dataset.push_to_hub=true \
--dataset.streaming_encoding=true \
--dataset.encoder_threads=2
```
Example simulation dataset: [nepyope/teleop_test_sim](https://huggingface.co/datasets/nepyope/teleop_test_sim)
> **Note:** Omit `--teleop.left_arm_config.port` and `--teleop.right_arm_config.port` if you're only using the joystick.
Example dataset: [nepyope/unitree_box_move_blue_full](https://huggingface.co/datasets/nepyope/unitree_box_move_blue_full)
---
## Running on Real Robot
## Part 5: Training & Inference
Once the robot server is running on the G1 (see Part 3), you can teleoperate and record on the real robot.
### Start the Camera Server
On the robot, start the ZMQ image server:
### Train
```bash
python src/lerobot/cameras/zmq/image_server.py
python src/lerobot/scripts/lerobot_train.py \
--dataset.repo_id=your-username/dataset-name \
--policy.type=pi05 \
--output_dir=./outputs/pi05_training \
--job_name=pi05_training \
--policy.repo_id=your-username/your-repo-id \
--policy.pretrained_path=lerobot/pi05_base \
--policy.compile_model=true \
--policy.gradient_checkpointing=true \
--wandb.enable=true \
--policy.dtype=bfloat16 \
--policy.freeze_vision_encoder=false \
--policy.train_expert_only=false \
--steps=3000 \
--policy.device=cuda \
--batch_size=32
```
Keep this running in a separate terminal for camera streaming during recording.
### Inference with RTC
### Teleoperate Real Robot
Once trained, we recommend deploying policies using inference-time RTC:
```bash
lerobot-teleoperate \
--robot.type=unitree_g1 \
--robot.is_simulation=false \
--teleop.type=unitree_g1 \
--teleop.left_arm_config.port=/dev/ttyACM1 \
--teleop.right_arm_config.port=/dev/ttyACM0 \
--teleop.id=exo \
--fps=100
python examples/rtc/eval_with_real_robot.py \
--policy.path=your-username/your-repo-id \
--policy.device=cuda \
--robot.type=unitree_g1 \
--robot.is_simulation=false \
--robot.controller=HolosomaLocomotionController \
--robot.cameras='{"global_view": {"type": "zmq", "server_address": "<ROBOT_IP>", "port": 5555, "camera_name": "head_camera", "width": 640, "height": 480, "fps": 30}}' \
--task="task_description" \
--duration=1000 \
--fps=30 \
--rtc.enabled=true
```
### Record Dataset on Real Robot
```bash
lerobot-record \
--robot.type=unitree_g1 \
--robot.is_simulation=false \
--robot.cameras='{"global_view": {"type": "zmq", "server_address": "172.18.129.215", "port": 5555, "camera_name": "head_camera", "width": 640, "height": 480, "fps": 30}}' \
--teleop.type=unitree_g1 \
--teleop.left_arm_config.port=/dev/ttyACM1 \
--teleop.right_arm_config.port=/dev/ttyACM0 \
--teleop.id=exo \
--dataset.repo_id=your-username/dataset-name \
--dataset.single_task="Test" \
--dataset.num_episodes=2 \
--dataset.episode_time_s=5 \
--dataset.reset_time_s=5 \
--dataset.push_to_hub=true \
--dataset.streaming_encoding=true \
# --dataset.vcodec=auto \
--dataset.encoder_threads=2
```
**Note**: Update `server_address` to match your robot's camera server IP.
Example real robot dataset: [nepyope/teleop_test_real](https://huggingface.co/datasets/nepyope/teleop_test_real)
---
## Additional Resources
@@ -300,8 +254,8 @@ Example real robot dataset: [nepyope/teleop_test_real](https://huggingface.co/da
- [GR00T-WholeBodyControl](https://github.com/NVlabs/GR00T-WholeBodyControl)
- [Holosoma](https://github.com/amazon-far/holosoma)
- [LeRobot Documentation](https://github.com/huggingface/lerobot)
-- [Unitree_IL_Lerobot](https://github.com/unitreerobotics/unitree_IL_lerobot)
+- [Unitree IL LeRobot](https://github.com/unitreerobotics/unitree_IL_lerobot)
---
-_Last updated: December 2025_
+_Last updated: March 2026_
+490
@@ -0,0 +1,490 @@
#!/usr/bin/env python
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
SLURM-distributed SARM RA-BC annotation pipeline.
Computes SARM progress values for all frames in a dataset, distributed across
SLURM workers, then merges the shards into a single sarm_progress.parquet.
Two subcommands, each a separate SLURM submission:
    compute     N workers, each computes progress for a subset of episodes
    aggregate   1 worker, merges N shards into sarm_progress.parquet, pushes to hub

Usage:
    python slurm_compute_rabc.py compute \\
        --repo-id user/dataset --reward-model-path user/sarm_model \\
        --stride 10 --device cpu --workers 50 --partition cpu

    python slurm_compute_rabc.py aggregate \\
        --repo-id user/dataset --reward-model-path user/sarm_model \\
        --partition cpu --push-to-hub
"""
import argparse
from pathlib import Path
from datatrove.executor import LocalPipelineExecutor
from datatrove.executor.slurm import SlurmPipelineExecutor
from datatrove.pipeline.base import PipelineStep
class ComputeProgressShards(PipelineStep):
"""Each worker computes SARM progress for its assigned episodes."""
def __init__(
self, repo_id, reward_model_path, stride=1, head_mode="sparse", device="cpu", shard_dir="rabc_shards"
):
super().__init__()
if stride < 1:
raise ValueError(f"stride must be >= 1, got {stride}")
self.repo_id = repo_id
self.reward_model_path = reward_model_path
self.stride = stride
self.head_mode = head_mode
self.device = device
self.shard_dir = shard_dir
def run(self, data=None, rank: int = 0, world_size: int = 1):
import logging
from pathlib import Path
import numpy as np
import pyarrow as pa
import pyarrow.parquet as pq
import torch
from tqdm import tqdm
from lerobot.policies.sarm.compute_rabc_weights import (
generate_all_frame_indices,
interpolate_progress,
load_sarm_resources,
)
from lerobot.utils.utils import init_logging
init_logging()
dataset, reward_model, preprocess = load_sarm_resources(
self.repo_id,
self.reward_model_path,
self.device,
)
if hasattr(preprocess, "eval"):
preprocess.eval()
for step in preprocess.steps:
if hasattr(step, "eval"):
step.eval()
image_key = reward_model.config.image_key
state_key = reward_model.config.state_key
frame_gap = reward_model.config.frame_gap
center_idx = reward_model.config.n_obs_steps // 2
dual_mode = reward_model.config.uses_dual_heads
compute_sparse = self.head_mode in ("sparse", "both") or not dual_mode
compute_dense = self.head_mode in ("dense", "both") and dual_mode
my_episodes = list(range(dataset.num_episodes))[rank::world_size]
if not my_episodes:
            logging.info(f"Rank {rank}: no episodes assigned")
            return
        logging.info(f"Rank {rank}: {len(my_episodes)} / {dataset.num_episodes} episodes")
        all_rows = []
        for ep_idx in tqdm(my_episodes, desc=f"Rank {rank}"):
            ep = dataset.meta.episodes[ep_idx]
            ep_start, ep_end = ep["dataset_from_index"], ep["dataset_to_index"]
            task = dataset[ep_start].get("task", "perform the task")
            all_ep_indices = generate_all_frame_indices(ep_start, ep_end, frame_gap)
            if self.stride > 1:
                compute_indices = [i for i in all_ep_indices if (i - ep_start) % self.stride == 0]
                if (ep_end - 1) not in compute_indices:
                    compute_indices.append(ep_end - 1)
                compute_indices = sorted(set(compute_indices))
            else:
                compute_indices = all_ep_indices
            frame_results = {}
            for qi in tqdm(compute_indices, desc=f" Ep {ep_idx}", leave=False):
                try:
                    sample = dataset[qi]
                    batch = {
                        image_key: sample[image_key],
                        "task": task,
                        "index": qi,
                        "episode_index": ep_idx,
                    }
                    if state_key in sample:
                        batch[state_key] = sample[state_key]
                    with torch.no_grad():
                        processed = preprocess(batch)
                        vf = processed["video_features"].to(self.device)
                        tf = processed["text_features"].to(self.device)
                        sf = processed.get("state_features")
                        if sf is not None:
                            sf = sf.to(self.device)
                        lengths = processed.get("lengths")
                        sparse_val = dense_val = np.nan
                        if compute_sparse:
                            r = reward_model.calculate_rewards(
                                text_embeddings=tf,
                                video_embeddings=vf,
                                state_features=sf,
                                lengths=lengths,
                                return_all_frames=True,
                                head_mode="sparse",
                            )
                            sparse_val = float(r[0, center_idx] if r.ndim == 2 else r[center_idx])
                        if compute_dense:
                            r = reward_model.calculate_rewards(
                                text_embeddings=tf,
                                video_embeddings=vf,
                                state_features=sf,
                                lengths=lengths,
                                return_all_frames=True,
                                head_mode="dense",
                            )
                            dense_val = float(r[0, center_idx] if r.ndim == 2 else r[center_idx])
                    frame_results[qi] = (sparse_val, dense_val)
                except Exception as e:
                    logging.warning(f"Failed frame {qi}: {e}")
            if not frame_results:
                logging.warning(f"Episode {ep_idx}: all frames failed, skipping")
                continue
            # Interpolate to all frames in this episode
            computed_idx = np.array(sorted(frame_results.keys()))
            all_frame_arr = np.arange(ep_start, ep_end)
            sparse_vals = np.array([frame_results[i][0] for i in computed_idx]) if compute_sparse else None
            dense_vals = np.array([frame_results[i][1] for i in computed_idx]) if compute_dense else None
            if self.stride > 1 and len(computed_idx) > 1:
                if compute_sparse:
                    sparse_vals = interpolate_progress(computed_idx, sparse_vals, all_frame_arr)
                if compute_dense:
                    dense_vals = interpolate_progress(computed_idx, dense_vals, all_frame_arr)
                output_frames = all_frame_arr
            else:
                # Use only successfully computed frames to avoid indexing mismatch on failures
                output_frames = computed_idx
            for i, fi in enumerate(output_frames):
                row = {"index": int(fi), "episode_index": ep_idx, "frame_index": int(fi - ep_start)}
                if compute_sparse:
                    row["progress_sparse"] = float(sparse_vals[i])
                if compute_dense:
                    row["progress_dense"] = float(dense_vals[i])
                all_rows.append(row)
        if all_rows:
            import pandas as pd

            df = pd.DataFrame(all_rows).sort_values("index").reset_index(drop=True)
            table = pa.Table.from_pandas(df, preserve_index=False)
            table = table.replace_schema_metadata({b"reward_model_path": self.reward_model_path.encode()})
            shard_dir = Path(self.shard_dir)
            shard_dir.mkdir(parents=True, exist_ok=True)
            out = shard_dir / f"shard_{rank:05d}.parquet"
            pq.write_table(table, out)
            logging.info(f"Rank {rank}: saved {len(df)} rows to {out}")


class AggregateProgress(PipelineStep):
    """Merge all shard parquets into final sarm_progress.parquet."""

    def __init__(self, repo_id, reward_model_path, shard_dir="rabc_shards", push_to_hub=False):
        super().__init__()
        self.repo_id = repo_id
        self.reward_model_path = reward_model_path
        self.shard_dir = shard_dir
        self.push_to_hub = push_to_hub

    def run(self, data=None, rank: int = 0, world_size: int = 1):
        import datetime
        import logging
        import os
        from pathlib import Path

        import pandas as pd
        import pyarrow as pa
        import pyarrow.parquet as pq

        from lerobot.datasets.lerobot_dataset import LeRobotDataset
        from lerobot.utils.utils import init_logging

        init_logging()
        if rank != 0:
            return
        shard_dir = Path(self.shard_dir)
        shards = sorted(shard_dir.glob("shard_*.parquet"))
        if not shards:
            raise FileNotFoundError(f"No shards found in {shard_dir}")
        # Log shard modification time range to help detect stale files
        mtimes = [os.path.getmtime(s) for s in shards]
        oldest = datetime.datetime.fromtimestamp(min(mtimes)).isoformat(timespec="seconds")
        newest = datetime.datetime.fromtimestamp(max(mtimes)).isoformat(timespec="seconds")
        logging.info(f"Aggregating {len(shards)} shards (oldest: {oldest}, newest: {newest})")
        df = pd.concat([pd.read_parquet(s) for s in shards], ignore_index=True)
        df = df.sort_values("index").reset_index(drop=True)
        table = pa.Table.from_pandas(df, preserve_index=False)
        table = table.replace_schema_metadata({b"reward_model_path": self.reward_model_path.encode()})
        temp_ds = LeRobotDataset(self.repo_id, download_videos=False)
        out_path = Path(temp_ds.root) / "sarm_progress.parquet"
        out_path.parent.mkdir(parents=True, exist_ok=True)
        pq.write_table(table, out_path)
        logging.info(f"Saved {len(df)} rows to {out_path}")
        for col in ["progress_sparse", "progress_dense"]:
            if col in df.columns:
                v = df[col].dropna()
                logging.info(
                    f"{col}: mean={v.mean():.4f} std={v.std():.4f} min={v.min():.4f} max={v.max():.4f}"
                )
        if self.push_to_hub:
            from huggingface_hub import HfApi

            api = HfApi()
            hub_path = "sarm_progress.parquet"
            logging.info(f"Uploading to {self.repo_id}/{hub_path}")
            api.upload_file(
                path_or_fileobj=str(out_path),
                path_in_repo=hub_path,
                repo_id=self.repo_id,
                repo_type="dataset",
            )
            logging.info(f"Uploaded: https://huggingface.co/datasets/{self.repo_id}/blob/main/{hub_path}")


def make_compute_executor(
    repo_id,
    reward_model_path,
    stride,
    head_mode,
    device,
    shard_dir,
    logs_dir,
    job_name,
    slurm,
    workers,
    partition,
    cpus_per_task,
    mem_per_cpu,
):
    kwargs = {
        "pipeline": [
            ComputeProgressShards(repo_id, reward_model_path, stride, head_mode, device, str(shard_dir)),
        ],
        "logging_dir": str(logs_dir / job_name),
    }
    if slurm:
        kwargs.update(
            {
                "job_name": job_name,
                "tasks": workers,
                "workers": workers,
                "time": "24:00:00",
                "partition": partition,
                "cpus_per_task": cpus_per_task,
                "sbatch_args": {"mem-per-cpu": mem_per_cpu},
            }
        )
        return SlurmPipelineExecutor(**kwargs)
    kwargs.update({"tasks": workers, "workers": 1})
    return LocalPipelineExecutor(**kwargs)


def make_aggregate_executor(
    repo_id,
    reward_model_path,
    shard_dir,
    logs_dir,
    job_name,
    slurm,
    partition,
    cpus_per_task,
    mem_per_cpu,
    push_to_hub,
):
    kwargs = {
        "pipeline": [
            AggregateProgress(repo_id, reward_model_path, str(shard_dir), push_to_hub),
        ],
        "logging_dir": str(logs_dir / job_name),
    }
    if slurm:
        kwargs.update(
            {
                "job_name": job_name,
                "tasks": 1,
                "workers": 1,
                "time": "02:00:00",
                "partition": partition,
                "cpus_per_task": cpus_per_task,
                "sbatch_args": {"mem-per-cpu": mem_per_cpu},
            }
        )
        return SlurmPipelineExecutor(**kwargs)
    kwargs.update({"tasks": 1, "workers": 1})
    return LocalPipelineExecutor(**kwargs)


def _add_shared_args(p):
    p.add_argument(
        "--repo-id",
        type=str,
        required=True,
        help="Hugging Face repository identifier, e.g. 'user/dataset'.",
    )
    p.add_argument(
        "--shard-dir",
        type=Path,
        default=Path("rabc_shards"),
        help="Directory to read/write per-rank parquet shards.",
    )
    p.add_argument(
        "--logs-dir",
        type=Path,
        default=Path("logs"),
        help="Directory for datatrove logs.",
    )
    p.add_argument(
        "--job-name",
        type=str,
        default=None,
        help="SLURM job name (defaults to rabc_<subcommand>).",
    )
    p.add_argument(
        "--slurm",
        type=int,
        default=1,
        help="1 = submit via SLURM; 0 = run locally (useful for debugging).",
    )
    p.add_argument(
        "--partition",
        type=str,
        default=None,
        help="SLURM partition to submit to.",
    )
    p.add_argument(
        "--cpus-per-task",
        type=int,
        default=4,
        help="Number of CPUs per SLURM task.",
    )
    p.add_argument(
        "--mem-per-cpu",
        type=str,
        default="4G",
        help="Memory per CPU, e.g. '4G' or '1950M'.",
    )


def main():
    parser = argparse.ArgumentParser(
        description="SLURM-distributed SARM RA-BC annotation pipeline",
        formatter_class=argparse.RawDescriptionHelpFormatter,
    )
    sub = parser.add_subparsers(dest="command", required=True)

    # compute subcommand
    cp = sub.add_parser(
        "compute",
        help="Distribute progress computation across SLURM workers.",
    )
    _add_shared_args(cp)
    cp.add_argument(
        "--reward-model-path",
        type=str,
        required=True,
        help="Path or HF repo id of the SARM reward model.",
    )
    cp.add_argument(
        "--stride",
        type=int,
        default=1,
        help="Compute every Nth frame; intermediate frames are interpolated (must be >= 1).",
    )
    cp.add_argument(
        "--head-mode",
        type=str,
        default="sparse",
        choices=["sparse", "dense", "both"],
        help="Which reward head(s) to compute.",
    )
    cp.add_argument(
        "--device",
        type=str,
        default="cpu",
        help="Device for reward model inference, e.g. 'cpu' or 'cuda'.",
    )
    cp.add_argument(
        "--workers",
        type=int,
        default=50,
        help="Number of parallel SLURM tasks (one shard per worker).",
    )

    # aggregate subcommand
    ap = sub.add_parser(
        "aggregate",
        help="Merge per-rank shards into a single sarm_progress.parquet.",
    )
    _add_shared_args(ap)
    ap.add_argument(
        "--reward-model-path",
        type=str,
        required=True,
        help="Path or HF repo id of the SARM reward model (stored in parquet metadata).",
    )
    ap.add_argument(
        "--push-to-hub",
        action="store_true",
        help="Upload sarm_progress.parquet to the Hugging Face Hub after aggregation.",
    )

    args = parser.parse_args()
    job_name = args.job_name or f"rabc_{args.command}"
    kwargs = vars(args)
    kwargs["slurm"] = kwargs.pop("slurm") == 1
    kwargs["job_name"] = job_name
    command = kwargs.pop("command")
    executor = make_compute_executor(**kwargs) if command == "compute" else make_aggregate_executor(**kwargs)
    executor.run()


if __name__ == "__main__":
    main()
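The stride/interpolation split used in `ComputeProgressShards` above (score every Nth frame plus the episode's last frame, then fill the gaps linearly) can be sketched standalone. This is an illustrative approximation, not the script's actual code: `progress_with_stride` and `score_fn` are hypothetical names, and `np.interp` stands in for the `interpolate_progress` helper.

```python
import numpy as np


def progress_with_stride(ep_start, ep_end, stride, score_fn):
    """Score every `stride`-th frame (always including the last frame),
    then linearly interpolate progress to every frame in [ep_start, ep_end)."""
    compute_indices = [i for i in range(ep_start, ep_end) if (i - ep_start) % stride == 0]
    if (ep_end - 1) not in compute_indices:
        compute_indices.append(ep_end - 1)
    computed_idx = np.array(sorted(set(compute_indices)))
    vals = np.array([score_fn(i) for i in computed_idx])
    all_frames = np.arange(ep_start, ep_end)
    # np.interp plays the role of interpolate_progress here.
    return np.interp(all_frames, computed_idx, vals)


# Toy "reward model": linear progress over a 10-frame episode.
dense = progress_with_stride(100, 110, 4, lambda i: (i - 100) / 9)
print(dense.round(3))
```

With stride=4 only frames 100, 104, 108, and 109 are scored; the remaining six frames are interpolated, which is why larger strides trade annotation fidelity for roughly stride-fold fewer model calls.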
+2
@@ -78,6 +78,7 @@ from torch import Tensor
from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig # noqa: F401
from lerobot.cameras.realsense.configuration_realsense import RealSenseCameraConfig # noqa: F401
from lerobot.cameras.zmq.configuration_zmq import ZMQCameraConfig # noqa: F401
from lerobot.configs import parser
from lerobot.configs.policies import PreTrainedConfig
from lerobot.configs.types import RTCAttentionSchedule
@@ -97,6 +98,7 @@ from lerobot.robots import ( # noqa: F401
bi_so_follower,
koch_follower,
so_follower,
unitree_g1,
)
from lerobot.robots.utils import make_robot_from_config
from lerobot.utils.constants import OBS_IMAGES
+74 -24
@@ -25,11 +25,11 @@ discord = "https://discord.gg/s3KuuzsPFb"
[project]
name = "lerobot"
version = "0.4.5"
version = "0.5.1"
description = "🤗 LeRobot: State-of-the-art Machine Learning for Real-World Robotics in Pytorch"
dynamic = ["readme"]
license = { text = "Apache-2.0" }
requires-python = ">=3.10"
requires-python = ">=3.12"
authors = [
{ name = "Rémi Cadène", email = "re.cadene@gmail.com" },
{ name = "Simon Alibert", email = "alibert.sim@gmail.com" },
@@ -50,7 +50,8 @@ classifiers = [
"Intended Audience :: Education",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3.10",
"Programming Language :: Python :: 3.12",
"Programming Language :: Python :: 3.13",
"Topic :: Software Development :: Build Tools",
"Topic :: Scientific/Engineering :: Artificial Intelligence",
]
@@ -61,26 +62,28 @@ dependencies = [
# Hugging Face dependencies
"datasets>=4.0.0,<5.0.0",
"diffusers>=0.27.2,<0.36.0",
"huggingface-hub[cli]>=1.0.0,<2.0.0",
"huggingface-hub>=1.0.0,<2.0.0",
"accelerate>=1.10.0,<2.0.0",
# Core dependencies
"numpy>=2.0.0,<2.3.0", # NOTE: Explicitly listing numpy helps the resolver converge faster. Upper bound imposed by opencv-python-headless.
"setuptools>=71.0.0,<81.0.0",
"cmake>=3.29.0.1,<4.2.0",
"packaging>=24.2,<26.0",
"torch>=2.2.1,<2.11.0",
"torchcodec>=0.2.1,<0.11.0; sys_platform != 'win32' and (sys_platform != 'linux' or (platform_machine != 'aarch64' and platform_machine != 'arm64' and platform_machine != 'armv7l')) and (sys_platform != 'darwin' or platform_machine != 'x86_64')",
"torchvision>=0.21.0,<0.26.0",
"einops>=0.8.0,<0.9.0",
"opencv-python-headless>=4.9.0,<4.13.0",
"av>=15.0.0,<16.0.0",
"jsonlines>=4.0.0,<5.0.0",
"packaging>=24.2,<26.0",
"pynput>=1.7.7,<1.9.0",
"pynput>=1.7.8,<1.9.0",
"pyserial>=3.5,<4.0",
"wandb>=0.24.0,<0.25.0",
"torch>=2.2.1,<2.11.0", # TODO: Bump dependency
"torchcodec>=0.2.1,<0.11.0; sys_platform != 'win32' and (sys_platform != 'linux' or (platform_machine != 'aarch64' and platform_machine != 'arm64' and platform_machine != 'armv7l')) and (sys_platform != 'darwin' or platform_machine != 'x86_64')", # TODO: Bump dependency
"torchvision>=0.21.0,<0.26.0", # TODO: Bump dependency
"draccus==0.10.0", # TODO: Remove ==
"draccus==0.10.0", # TODO: Relax version constraint
"gymnasium>=1.1.1,<2.0.0",
"rerun-sdk>=0.24.0,<0.27.0",
@@ -95,13 +98,14 @@ dependencies = [
# Common
pygame-dep = ["pygame>=2.5.1,<2.7.0"]
placo-dep = ["placo>=0.9.6,<0.10.0"]
placo-dep = ["placo>=0.9.6,<0.9.17"]
transformers-dep = ["transformers>=5.3.0,<6.0.0"]
grpcio-dep = ["grpcio==1.73.1", "protobuf>=6.31.1,<6.32.0"]
can-dep = ["python-can>=4.2.0,<5.0.0"]
peft-dep = ["peft>=0.18.0,<1.0.0"]
scipy-dep = ["scipy>=1.14.0,<2.0.0"]
qwen-vl-utils-dep = ["qwen-vl-utils>=0.0.11,<0.1.0"]
matplotlib-dep = ["matplotlib>=3.10.3,<4.0.0", "contourpy>=1.3.0,<2.0.0"] # NOTE: Explicitly listing contourpy helps the resolver converge faster.
# Motors
feetech = ["feetech-servo-sdk>=1.0.0,<2.0.0"]
@@ -115,20 +119,22 @@ gamepad = ["lerobot[pygame-dep]", "hidapi>=0.14.0,<0.15.0"]
hopejr = ["lerobot[feetech]", "lerobot[pygame-dep]"]
lekiwi = ["lerobot[feetech]", "pyzmq>=26.2.1,<28.0.0"]
unitree_g1 = [
"unitree-sdk2==1.0.1",
"pyzmq>=26.2.1,<28.0.0",
"onnxruntime>=1.16.0,<2.0.0",
"pin>=3.0.0,<4.0.0",
"meshcat>=0.3.0,<0.4.0",
"matplotlib>=3.9.0,<4.0.0",
"lerobot[matplotlib-dep]",
"lerobot[pygame-dep]",
"casadi>=3.6.0,<4.0.0",
]
reachy2 = ["reachy2_sdk>=1.0.15,<1.1.0"]
kinematics = ["lerobot[placo-dep]"]
intelrealsense = [
"pyrealsense2>=2.55.1.6486,<2.57.0 ; sys_platform != 'darwin'",
"pyrealsense2-macosx>=2.54,<2.55.0 ; sys_platform == 'darwin'",
"pyrealsense2-macosx>=2.54,<2.57.0 ; sys_platform == 'darwin'",
]
phone = ["hebi-py>=2.8.0,<2.12.0", "teleop>=0.1.0,<0.2.0", "fastapi<1.0"]
phone = ["hebi-py>=2.8.0,<2.12.0", "teleop>=0.1.0,<0.2.0", "fastapi<1.0", "lerobot[scipy-dep]"]
# Policies
wallx = [
@@ -151,12 +157,12 @@ groot = [
"ninja>=1.11.1,<2.0.0",
"flash-attn>=2.5.9,<3.0.0 ; sys_platform != 'darwin'"
]
sarm = ["lerobot[transformers-dep]", "faker>=33.0.0,<35.0.0", "matplotlib>=3.10.3,<4.0.0", "lerobot[qwen-vl-utils-dep]"]
sarm = ["lerobot[transformers-dep]", "faker>=33.0.0,<35.0.0", "lerobot[matplotlib-dep]", "lerobot[qwen-vl-utils-dep]"]
xvla = ["lerobot[transformers-dep]"]
hilserl = ["lerobot[transformers-dep]", "gym-hil>=0.1.13,<0.2.0", "lerobot[grpcio-dep]", "lerobot[placo-dep]"]
# Features
async = ["lerobot[grpcio-dep]", "matplotlib>=3.10.3,<4.0.0"]
async = ["lerobot[grpcio-dep]", "lerobot[matplotlib-dep]"]
peft = ["lerobot[transformers-dep]", "lerobot[peft-dep]"]
# Development
@@ -165,13 +171,53 @@ test = ["pytest>=8.1.0,<9.0.0", "pytest-timeout>=2.4.0,<3.0.0", "pytest-cov>=5.0
video_benchmark = ["scikit-image>=0.23.2,<0.26.0", "pandas>=2.2.2,<2.4.0"]
# Simulation
aloha = ["gym-aloha>=0.1.2,<0.2.0"]
# NOTE: Explicitly listing scipy helps flatten the dependency tree.
aloha = ["gym-aloha>=0.1.2,<0.2.0", "lerobot[scipy-dep]"]
pusht = ["gym-pusht>=0.1.5,<0.2.0", "pymunk>=6.6.0,<7.0.0"] # TODO: Fix pymunk version in gym-pusht instead
libero = ["lerobot[transformers-dep]", "hf-libero>=0.1.3,<0.2.0"]
metaworld = ["metaworld==3.0.0"]
libero = [
"lerobot[transformers-dep]",
"hf-libero>=0.1.3,<0.2.0; sys_platform == 'linux'",
# hf-egl-probe is the fixed fork of egl-probe (robomimic transitive dep).
# egl-probe's CMakeLists.txt requires cmake_minimum_required < 3.5 which
# modern cmake rejects. Installing hf-egl-probe first satisfies the egl_probe
# import without source compilation.
"hf-egl-probe>=1.0.1; sys_platform == 'linux'",
"lerobot[scipy-dep]",
]
libero_plus = [
# Inherit all of libero's deps (hf-libero → robosuite/robomimic/egl-probe/scipy/transformers).
# LIBERO-plus extends LIBERO with extra task suites; its Python module is installed
# from the git clone in Dockerfile.eval-libero-plus (overrides hf-libero via .pth).
"lerobot[libero]",
# Additional runtime deps declared by LIBERO-plus but absent from its setup.py:
"bddl>=1.0.1,<2.0.0; sys_platform == 'linux'",
"future; sys_platform == 'linux'", # bddl transitive dep not declared in its metadata
"easydict>=1.9; sys_platform == 'linux'",
"wand; sys_platform == 'linux'",
"scikit-image>=0.20.0; sys_platform == 'linux'",
"gym>=0.25.0,<0.27.0; sys_platform == 'linux'",
]
libero-plus = ["lerobot[libero_plus]"]
robomme = [
"robomme @ git+https://github.com/RoboMME/robomme_benchmark.git@main ; sys_platform == 'linux'",
]
robocasa = [
# robocasa and its robosuite fork are not on PyPI; both installed from source
# in Dockerfile.eval-robocasa (requires ARISE-Initiative/robosuite@robocasa_v1.4.1
# for PandaOmron and other robocasa-specific robots).
"easydict>=1.9; sys_platform == 'linux'",
"scikit-image>=0.20.0; sys_platform == 'linux'",
"lerobot[scipy-dep]",
]
metaworld = ["metaworld==3.0.0", "lerobot[scipy-dep]"]
# All
all = [
# NOTE(resolver hint): scipy is pulled in transitively via lerobot[scipy-dep] through
# multiple extras (aloha, metaworld, pi, wallx, phone). Listing it explicitly
# helps pip's resolver converge by constraining scipy early, before it encounters
# the loose scipy requirements from transitive deps like dm-control and metaworld.
"scipy>=1.14.0,<2.0.0",
"lerobot[dynamixel]",
"lerobot[gamepad]",
"lerobot[hopejr]",
@@ -192,10 +238,11 @@ all = [
"lerobot[aloha]",
"lerobot[pusht]",
"lerobot[phone]",
"lerobot[libero]",
"lerobot[libero]; sys_platform == 'linux'",
"lerobot[metaworld]",
"lerobot[sarm]",
"lerobot[peft]",
# "lerobot[unitree_g1]", TODO: Unitree requires specific installation instructions for unitree_sdk2
]
[project.scripts]
@@ -207,6 +254,7 @@ lerobot-replay="lerobot.scripts.lerobot_replay:main"
lerobot-setup-motors="lerobot.scripts.lerobot_setup_motors:main"
lerobot-teleoperate="lerobot.scripts.lerobot_teleoperate:main"
lerobot-eval="lerobot.scripts.lerobot_eval:main"
lerobot-eval-worker="lerobot.scripts.lerobot_eval_worker:main"
lerobot-train="lerobot.scripts.lerobot_train:main"
lerobot-train-tokenizer="lerobot.scripts.lerobot_train_tokenizer:main"
lerobot-dataset-viz="lerobot.scripts.lerobot_dataset_viz:main"
@@ -214,7 +262,9 @@ lerobot-info="lerobot.scripts.lerobot_info:main"
lerobot-find-joint-limits="lerobot.scripts.lerobot_find_joint_limits:main"
lerobot-imgtransform-viz="lerobot.scripts.lerobot_imgtransform_viz:main"
lerobot-edit-dataset="lerobot.scripts.lerobot_edit_dataset:main"
lerobot-leaderboard="lerobot.scripts.lerobot_leaderboard:main"
lerobot-setup-can="lerobot.scripts.lerobot_setup_can:main"
lerobot-benchmark="lerobot.scripts.lerobot_benchmark:main"
# ---------------- Tool Configurations ----------------
[tool.setuptools.package-data]
@@ -224,7 +274,7 @@ lerobot = ["envs/*.json"]
where = ["src"]
[tool.ruff]
target-version = "py310"
target-version = "py312"
line-length = 110
exclude = ["tests/artifacts/**/*.safetensors", "*_pb2.py", "*_pb2_grpc.py"]
@@ -316,7 +366,7 @@ default.extend-ignore-identifiers-re = [
# Uncomment [tool.mypy] first, then uncomment individual module overrides as they get proper type annotations
[tool.mypy]
python_version = "3.10"
python_version = "3.12"
ignore_missing_imports = true
follow_imports = "skip"
# warn_return_any = true
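The pyproject diff above repeatedly replaces direct pins with self-referencing extras (`lerobot[scipy-dep]`, `lerobot[matplotlib-dep]`), so each version constraint is stated once and reused. A minimal sketch of the pattern, using a hypothetical package name:

```toml
[project]
name = "mypkg"
version = "0.1.0"

[project.optional-dependencies]
# Shared pin, defined once.
scipy-dep = ["scipy>=1.14.0,<2.0.0"]
# Feature extras reuse the pin by depending on the package's own extra,
# so bumping scipy means editing a single line.
aloha = ["gym-aloha>=0.1.2,<0.2.0", "mypkg[scipy-dep]"]
metaworld = ["metaworld==3.0.0", "mypkg[scipy-dep]"]
```

Recursive extras like this are supported by pip's resolver; the trade-off is that the "real" constraint is one indirection away when reading the file.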
+170 -271
@@ -1,76 +1,73 @@
#
# This file is autogenerated by pip-compile with Python 3.10
# This file is autogenerated by pip-compile with Python 3.12
# by the following command:
#
# pip-compile --output-file=requirements-macos.txt requirements.in
#
-e .[all]
# via -[all]
absl-py==2.3.1
absl-py==2.4.0
# via
# dm-control
# dm-env
# dm-tree
# labmaze
# mujoco
# tensorboard
accelerate==1.11.0
accelerate==1.13.0
# via
# lerobot
# peft
aiohappyeyeballs==2.6.1
# via aiohttp
aiohttp==3.13.1
aiohttp==3.13.3
# via fsspec
aiosignal==1.4.0
# via aiohttp
annotated-doc==0.0.4
# via
# fastapi
# typer
annotated-types==0.7.0
# via pydantic
antlr4-python3-runtime==4.9.3
# via
# hydra-core
# omegaconf
anyio==4.11.0
anyio==4.12.1
# via
# httpx
# starlette
# watchfiles
asttokens==3.0.0
asttokens==3.0.1
# via stack-data
async-timeout==5.0.1
# via aiohttp
attrs==25.4.0
# via
# aiohttp
# dm-tree
# jsonlines
# jsonschema
# referencing
# rerun-sdk
av==15.1.0
# via lerobot
bddl==1.0.1
# via libero
certifi==2025.10.5
# via
# lerobot
# qwen-vl-utils
certifi==2026.2.25
# via
# httpcore
# httpx
# requests
# sentry-sdk
cffi==2.0.0
# via pymunk
cfgv==3.4.0
cfgv==3.5.0
# via pre-commit
charset-normalizer==3.4.4
charset-normalizer==3.4.5
# via requests
click==8.3.0
click==8.3.1
# via
# typer
# uvicorn
# wandb
cloudpickle==3.1.1
# via
# gymnasium
# libero
cmake==4.1.0
cloudpickle==3.1.2
# via gymnasium
cmake==4.1.3
# via lerobot
cmeel==0.57.3
cmeel==0.59.0
# via
# cmeel-assimp
# cmeel-boost
@@ -108,15 +105,17 @@ cmeel-zlib==1.3.1
# via cmeel-assimp
coal-library==3.0.1
# via pin
contourpy==1.3.2
# via matplotlib
coverage[toml]==7.11.0
contourpy==1.3.3
# via
# lerobot
# matplotlib
coverage[toml]==7.13.4
# via pytest-cov
cycler==0.12.1
# via matplotlib
datasets==4.1.1
datasets==4.6.1
# via lerobot
debugpy==1.8.17
debugpy==1.8.20
# via lerobot
decorator==5.2.1
# via ipython
@@ -130,7 +129,7 @@ dill==0.4.0
# multiprocess
distlib==0.4.0
# via virtualenv
dm-control==1.0.34
dm-control==1.0.37
# via gym-aloha
dm-env==1.6
# via dm-control
@@ -138,69 +137,55 @@ dm-tree==0.1.9
# via
# dm-control
# dm-env
# lerobot
docopt==0.6.2
# via num2words
draccus==0.10.0
# via lerobot
dynamixel-sdk==3.8.4
# via lerobot
easydict==1.13
# via libero
egl-probe @ git+https://github.com/huggingface/egl_probe.git
# via
# libero
# robomimic
eigenpy==3.10.3
# via coal-library
einops==0.8.1
# via
# lerobot
# libero
einops==0.8.2
# via lerobot
eiquadprog==1.2.9
# via placo
etils[epath,epy]==1.13.0
etils[epath,epy]==1.14.0
# via mujoco
exceptiongroup==1.3.0
# via
# anyio
# ipython
# pytest
executing==2.2.1
# via stack-data
faker==34.0.2
# via lerobot
farama-notifications==0.0.4
# via gymnasium
fastapi==0.119.1
# via teleop
fastjsonschema==2.21.2
# via nbformat
fastapi==0.135.1
# via
# lerobot
# teleop
feetech-servo-sdk==1.0.0
# via lerobot
filelock==3.20.0
filelock==3.25.0
# via
# datasets
# diffusers
# huggingface-hub
# python-discovery
# torch
# transformers
# virtualenv
fonttools==4.60.1
fonttools==4.61.1
# via matplotlib
frozenlist==1.8.0
# via
# aiohttp
# aiosignal
fsspec[http]==2025.9.0
fsspec[http]==2026.2.0
# via
# datasets
# etils
# huggingface-hub
# torch
future==1.0.0
# via libero
gitdb==4.0.12
# via gitpython
gitpython==3.1.45
gitpython==3.1.46
# via wandb
glfw==2.10.0
# via
@@ -212,7 +197,6 @@ grpcio==1.73.1
# lerobot
# reachy2-sdk
# reachy2-sdk-api
# tensorboard
grpcio-tools==1.73.1
# via
# lerobot
@@ -223,71 +207,67 @@ gym-hil==0.1.13
# via lerobot
gym-pusht==0.1.6
# via lerobot
gymnasium==1.2.1
gymnasium==1.2.3
# via
# gym-aloha
# gym-hil
# gym-pusht
# lerobot
# libero
# metaworld
h11==0.16.0
# via uvicorn
h5py==3.15.1
# via robomimic
# via
# httpcore
# uvicorn
hebi-py==2.11.0
# via lerobot
hf-transfer==0.1.9
# via huggingface-hub
hf-xet==1.1.10
hf-xet==1.3.2
# via huggingface-hub
hidapi==0.14.0.post4
# via
# gym-hil
# lerobot
httpcore==1.0.9
# via httpx
httptools==0.7.1
# via uvicorn
huggingface-hub[cli,hf-transfer]==0.35.3
httpx==0.28.1
# via
# datasets
# huggingface-hub
huggingface-hub==1.6.0
# via
# accelerate
# datasets
# diffusers
# lerobot
# peft
# timm
# tokenizers
# transformers
hydra-core==1.3.2
# via libero
identify==2.6.15
identify==2.6.17
# via pre-commit
idna==3.11
# via
# anyio
# httpx
# requests
# yarl
imageio[ffmpeg]==2.37.0
imageio[ffmpeg]==2.37.2
# via
# gym-aloha
# gym-hil
# lerobot
# metaworld
# robomimic
# scikit-image
imageio-ffmpeg==0.6.0
# via
# imageio
# robomimic
importlib-metadata==8.7.0
# via imageio
importlib-metadata==8.7.1
# via diffusers
importlib-resources==6.5.2
# via etils
iniconfig==2.3.0
# via pytest
inquirerpy==0.3.4
# via huggingface-hub
ipython==8.37.0
ipython==9.11.0
# via meshcat
ipython-pygments-lexers==1.1.1
# via ipython
ischedule==1.2.7
# via placo
jedi==0.19.2
@@ -296,44 +276,24 @@ jinja2==3.1.6
# via torch
jsonlines==4.0.0
# via lerobot
jsonschema==4.25.1
# via nbformat
jsonschema-specifications==2025.9.1
# via jsonschema
jupyter-core==5.9.1
# via nbformat
jupytext==1.18.1
# via bddl
kiwisolver==1.4.9
# via matplotlib
labmaze==1.0.6
# via dm-control
lazy-loader==0.4
lazy-loader==0.5
# via scikit-image
libero @ git+https://github.com/huggingface/lerobot-libero.git@main
# via lerobot
llvmlite==0.45.1
# via numba
librt==0.8.1
# via mypy
lxml==6.0.2
# via dm-control
markdown==3.9
# via tensorboard
markdown-it-py==4.0.0
# via
# jupytext
# mdit-py-plugins
# via rich
markupsafe==3.0.3
# via
# jinja2
# werkzeug
matplotlib==3.10.7
# via
# lerobot
# libero
# via jinja2
matplotlib==3.10.8
# via lerobot
matplotlib-inline==0.2.1
# via ipython
mdit-py-plugins==0.5.0
# via jupytext
mdurl==0.1.2
# via markdown-it-py
mergedeep==1.3.4
@@ -346,41 +306,35 @@ mock-serial==0.0.1
# via lerobot
mpmath==1.3.0
# via sympy
mujoco==3.3.7
mujoco==3.5.0
# via
# dm-control
# gym-aloha
# gym-hil
# libero
# metaworld
# robosuite
multidict==6.7.0
multidict==6.7.1
# via
# aiohttp
# yarl
multiprocess==0.70.16
multiprocess==0.70.18
# via datasets
mypy==1.19.1
# via lerobot
mypy-extensions==1.1.0
# via typing-inspect
nbformat==5.10.4
# via jupytext
networkx==3.4.2
# via
# bddl
# mypy
# typing-inspect
networkx==3.6.1
# via
# scikit-image
# torch
ninja==1.13.0
# via lerobot
nodeenv==1.9.1
nodeenv==1.10.0
# via pre-commit
num2words==0.5.14
# via lerobot
numba==0.62.1
# via robosuite
numpy==2.2.6
# via
# accelerate
# bddl
# cmeel-boost
# contourpy
# datasets
@@ -389,16 +343,14 @@ numpy==2.2.6
# dm-env
# dm-tree
# gymnasium
# h5py
# hebi-py
# imageio
# labmaze
# libero
# lerobot
# matplotlib
# meshcat
# metaworld
# mujoco
# numba
# opencv-python
# opencv-python-headless
# pandas
@@ -406,26 +358,18 @@ numpy==2.2.6
# pyquaternion
# reachy2-sdk
# rerun-sdk
# robomimic
# robosuite
# scikit-image
# scipy
# shapely
# teleop
# tensorboard
# tensorboardx
# tifffile
# torchvision
# transformers
# transforms3d
omegaconf==2.3.0
# via hydra-core
opencv-python==4.12.0.88
opencv-python==4.13.0.92
# via
# gym-pusht
# libero
# reachy2-sdk
# robosuite
opencv-python-headless==4.12.0.88
# via lerobot
orderly-set==5.5.0
@@ -435,97 +379,87 @@ packaging==25.0
# accelerate
# datasets
# huggingface-hub
# hydra-core
# jupytext
# lazy-loader
# lerobot
# matplotlib
# peft
# pytest
# qwen-vl-utils
# reachy2-sdk
# scikit-image
# tensorboard
# tensorboardx
# transformers
# wandb
pandas==2.3.3
# via
# datasets
# lerobot
parso==0.8.5
parso==0.8.6
# via jedi
peft==0.17.1
pathspec==1.0.4
# via mypy
peft==0.18.1
# via lerobot
pexpect==4.9.0
# via ipython
pfzy==0.3.4
# via inquirerpy
pillow==12.0.0
pillow==12.1.1
# via
# diffusers
# imageio
# lerobot
# matplotlib
# meshcat
# qwen-vl-utils
# rerun-sdk
# robosuite
# scikit-image
# tensorboard
# torchvision
pin==3.4.0
# via placo
placo==0.9.14
placo==0.9.16
# via lerobot
platformdirs==4.5.0
platformdirs==4.9.4
# via
# jupyter-core
# python-discovery
# virtualenv
# wandb
pluggy==1.6.0
# via
# pytest
# pytest-cov
pre-commit==4.3.0
pre-commit==4.5.1
# via lerobot
prompt-toolkit==3.0.52
# via
# inquirerpy
# ipython
# via ipython
propcache==0.4.1
# via
# aiohttp
# yarl
protobuf==6.31.0
protobuf==6.31.1
# via
# dm-control
# grpcio-tools
# lerobot
# reachy2-sdk
# reachy2-sdk-api
# tensorboard
# tensorboardx
# wandb
psutil==7.1.1
psutil==7.2.2
# via
# accelerate
# imageio
# peft
# robomimic
ptyprocess==0.7.0
# via pexpect
pure-eval==0.2.3
# via stack-data
pyarrow==21.0.0
pyarrow==23.0.1
# via
# datasets
# rerun-sdk
pycparser==2.23
pycparser==3.0
# via cffi
pydantic==2.12.3
pydantic==2.12.5
# via
# fastapi
# wandb
pydantic-core==2.41.4
pydantic-core==2.41.5
# via pydantic
pygame==2.6.1
# via
@@ -535,33 +469,35 @@ pygame==2.6.1
pygments==2.19.2
# via
# ipython
# ipython-pygments-lexers
# pytest
# rich
pymunk==6.11.1
# via
# gym-pusht
# lerobot
pyngrok==7.4.1
pyngrok==7.5.1
# via meshcat
pynput==1.8.1
# via
# gym-hil
# lerobot
pyobjc-core==12.0
pyobjc-core==12.1
# via
# pyobjc-framework-applicationservices
# pyobjc-framework-cocoa
# pyobjc-framework-coretext
# pyobjc-framework-quartz
pyobjc-framework-applicationservices==12.0
pyobjc-framework-applicationservices==12.1
# via pynput
pyobjc-framework-cocoa==12.0
pyobjc-framework-cocoa==12.1
# via
# pyobjc-framework-applicationservices
# pyobjc-framework-coretext
# pyobjc-framework-quartz
pyobjc-framework-coretext==12.0
pyobjc-framework-coretext==12.1
# via pyobjc-framework-applicationservices
pyobjc-framework-quartz==12.0
pyobjc-framework-quartz==12.1
# via
# pynput
# pyobjc-framework-applicationservices
@@ -570,13 +506,13 @@ pyopengl==3.1.10
# via
# dm-control
# mujoco
pyparsing==3.2.5
pyparsing==3.3.2
# via
# dm-control
# matplotlib
pyquaternion==0.9.9
# via reachy2-sdk
pyrealsense2-macosx==2.54.2
pyrealsense2-macosx==2.56.5
# via lerobot
pyserial==3.5
# via
@@ -585,7 +521,6 @@ pyserial==3.5
# lerobot
pytest==8.4.2
# via
# bddl
# lerobot
# pytest-cov
# pytest-timeout
@@ -596,11 +531,14 @@ pytest-timeout==2.4.0
# via lerobot
python-dateutil==2.9.0.post0
# via
# faker
# matplotlib
# pandas
python-dotenv==1.1.1
python-discovery==1.1.1
# via virtualenv
python-dotenv==1.2.2
# via uvicorn
pytz==2025.2
pytz==2026.1.post1
# via pandas
pyyaml==6.0.3
# via
@@ -609,13 +547,10 @@ pyyaml==6.0.3
# draccus
# hebi-py
# huggingface-hub
# jupytext
# omegaconf
# peft
# pre-commit
# pyngrok
# pyyaml-include
# timm
# transformers
# uvicorn
# wandb
@@ -625,15 +560,13 @@ pyzmq==27.1.0
# via
# lerobot
# meshcat
reachy2-sdk==1.0.14
qwen-vl-utils==0.0.14
# via lerobot
reachy2-sdk==1.0.15
# via lerobot
reachy2-sdk-api==1.0.21
# via reachy2-sdk
referencing==0.37.0
# via
# jsonschema
# jsonschema-specifications
regex==2025.10.23
regex==2026.2.28
# via
# diffusers
# transformers
@@ -642,184 +575,150 @@ requests==2.32.5
# datasets
# diffusers
# dm-control
# huggingface-hub
# qwen-vl-utils
# teleop
# transformers
# wandb
rerun-sdk==0.26.1
rerun-sdk==0.26.2
# via lerobot
rhoban-cmeel-jsoncpp==1.9.4.9
# via placo
robomimic==0.2.0
# via libero
robosuite==1.4.0
# via libero
rpds-py==0.28.0
# via
# jsonschema
# referencing
safetensors==0.6.2
rich==14.3.3
# via typer
safetensors==0.7.0
# via
# accelerate
# diffusers
# lerobot
# peft
# timm
# transformers
scikit-image==0.25.2
# via
# gym-pusht
# lerobot
scipy==1.15.3
scipy==1.17.1
# via
# dm-control
# lerobot
# metaworld
# robosuite
# scikit-image
sentry-sdk==2.42.1
# torchdiffeq
sentry-sdk==2.54.0
# via wandb
shapely==2.1.2
# via gym-pusht
shellingham==1.5.4
# via typer
six==1.17.0
# via
# pynput
# python-dateutil
smmap==5.0.2
smmap==5.0.3
# via gitdb
sniffio==1.3.1
# via anyio
stack-data==0.6.3
# via ipython
starlette==0.48.0
starlette==0.52.1
# via fastapi
sympy==1.14.0
# via torch
teleop==0.1.2
teleop==0.1.4
# via lerobot
tensorboard==2.20.0
# via robomimic
tensorboard-data-server==0.7.2
# via tensorboard
tensorboardx==2.6.4
# via robomimic
termcolor==3.1.0
# via
# lerobot
# robomimic
thop==0.1.1.post2209072238
# via libero
tifffile==2025.5.10
termcolor==3.3.0
# via lerobot
tifffile==2026.3.3
# via scikit-image
timm==1.0.20
# via lerobot
tokenizers==0.22.1
tokenizers==0.22.2
# via transformers
toml==0.10.2
# via draccus
tomli==2.3.0
# via
# cmeel
# coverage
# jupytext
# pytest
torch==2.7.1
torch==2.10.0
# via
# accelerate
# lerobot
# peft
# robomimic
# thop
# timm
# torchdiffeq
# torchvision
torchcodec==0.5
torchcodec==0.10.0
# via lerobot
torchvision==0.22.1
# via
# lerobot
# robomimic
# timm
tornado==6.5.2
torchdiffeq==0.2.5
# via lerobot
torchvision==0.25.0
# via lerobot
tornado==6.5.4
# via meshcat
tqdm==4.67.1
tqdm==4.67.3
# via
# datasets
# dm-control
# huggingface-hub
# peft
# robomimic
# transformers
traitlets==5.14.3
# via
# ipython
# jupyter-core
# matplotlib-inline
# nbformat
transformers==4.57.1
transformers==5.3.0
# via
# lerobot
# libero
# peft
transforms3d==0.4.2
# via teleop
typer==0.24.1
# via
# huggingface-hub
# transformers
typing-extensions==4.15.0
# via
# aiosignal
# anyio
# etils
# exceptiongroup
# faker
# fastapi
# gymnasium
# huggingface-hub
# ipython
# multidict
# mypy
# pydantic
# pydantic-core
# referencing
# rerun-sdk
# starlette
# torch
# typing-inspect
# typing-inspection
# uvicorn
# virtualenv
# wandb
typing-inspect==0.9.0
# via draccus
typing-inspection==0.4.2
# via pydantic
tzdata==2025.2
# via
# fastapi
# pydantic
tzdata==2025.3
# via pandas
u-msgpack-python==2.8.0
# via meshcat
urllib3==2.5.0
urllib3==2.6.3
# via
# requests
# sentry-sdk
uvicorn[standard]==0.38.0
uvicorn[standard]==0.41.0
# via teleop
uvloop==0.22.1
# via uvicorn
virtualenv==20.35.3
virtualenv==21.1.0
# via pre-commit
wandb==0.21.4
# via
# lerobot
# libero
wandb==0.24.2
# via lerobot
watchfiles==1.1.1
# via uvicorn
wcwidth==0.2.14
wcwidth==0.6.0
# via prompt-toolkit
websocket-client==1.9.0
# via teleop
websockets==15.0.1
websockets==16.0
# via uvicorn
werkzeug==3.1.3
# via tensorboard
wrapt==2.0.0
wrapt==2.1.2
# via dm-tree
xxhash==3.6.0
# via datasets
yarl==1.22.0
yarl==1.23.0
# via aiohttp
zipp==3.23.0
# via
+209 -188
@@ -1,12 +1,12 @@
#
# This file is autogenerated by pip-compile with Python 3.10
# This file is autogenerated by pip-compile with Python 3.12
# by the following command:
#
# pip-compile --output-file=requirements-ubuntu.txt requirements.in
#
-e .[all]
# via -[all]
absl-py==2.3.1
absl-py==2.4.0
# via
# dm-control
# dm-env
@@ -14,30 +14,33 @@ absl-py==2.3.1
# labmaze
# mujoco
# tensorboard
accelerate==1.11.0
accelerate==1.13.0
# via
# lerobot
# peft
aiohappyeyeballs==2.6.1
# via aiohttp
aiohttp==3.13.1
aiohttp==3.13.3
# via fsspec
aiosignal==1.4.0
# via aiohttp
annotated-doc==0.0.4
# via
# fastapi
# typer
annotated-types==0.7.0
# via pydantic
antlr4-python3-runtime==4.9.3
# via
# hydra-core
# omegaconf
anyio==4.11.0
anyio==4.12.1
# via
# httpx
# starlette
# watchfiles
asttokens==3.0.0
asttokens==3.0.1
# via stack-data
async-timeout==5.0.1
# via aiohttp
attrs==25.4.0
# via
# aiohttp
@@ -47,30 +50,35 @@ attrs==25.4.0
# referencing
# rerun-sdk
av==15.1.0
# via lerobot
bddl==1.0.1
# via libero
certifi==2025.10.5
# via
# lerobot
# qwen-vl-utils
bddl==1.0.1
# via hf-libero
certifi==2026.2.25
# via
# httpcore
# httpx
# requests
# sentry-sdk
cffi==2.0.0
# via pymunk
cfgv==3.4.0
cfgv==3.5.0
# via pre-commit
charset-normalizer==3.4.4
charset-normalizer==3.4.5
# via requests
click==8.3.0
click==8.3.1
# via
# typer
# uvicorn
# wandb
cloudpickle==3.1.1
cloudpickle==3.1.2
# via
# gymnasium
# libero
cmake==4.1.0
# hf-libero
cmake==4.1.3
# via lerobot
cmeel==0.57.3
cmeel==0.59.0
# via
# cmeel-assimp
# cmeel-boost
@@ -108,20 +116,24 @@ cmeel-zlib==1.3.1
# via cmeel-assimp
coal-library==3.0.1
# via pin
contourpy==1.3.2
# via matplotlib
coverage[toml]==7.11.0
contourpy==1.3.3
# via
# lerobot
# matplotlib
coverage[toml]==7.13.4
# via pytest-cov
cuda-bindings==12.9.4
# via torch
cuda-pathfinder==1.4.1
# via cuda-bindings
cycler==0.12.1
# via matplotlib
datasets==4.1.1
datasets==4.6.1
# via lerobot
debugpy==1.8.17
debugpy==1.8.20
# via lerobot
decorator==5.2.1
# via ipython
decord==0.6.0
# via lerobot
deepdiff==8.6.1
# via lerobot
diffusers==0.35.2
@@ -132,7 +144,7 @@ dill==0.4.0
# multiprocess
distlib==0.4.0
# via virtualenv
dm-control==1.0.34
dm-control==1.0.37
# via gym-aloha
dm-env==1.6
# via dm-control
@@ -140,7 +152,6 @@ dm-tree==0.1.9
# via
# dm-control
# dm-env
# lerobot
docopt==0.6.2
# via num2words
draccus==0.10.0
@@ -148,66 +159,60 @@ draccus==0.10.0
dynamixel-sdk==3.8.4
# via lerobot
easydict==1.13
# via libero
egl-probe @ git+https://github.com/huggingface/egl_probe.git
# via
# libero
# robomimic
# via hf-libero
egl-probe==1.0.2
# via robomimic
eigenpy==3.10.3
# via coal-library
einops==0.8.1
einops==0.8.2
# via
# flash-attn
# hf-libero
# lerobot
# libero
eiquadprog==1.2.9
# via placo
etils[epath,epy]==1.13.0
etils[epath,epy]==1.14.0
# via mujoco
evdev==1.9.2
evdev==1.9.3
# via pynput
exceptiongroup==1.3.0
# via
# anyio
# ipython
# pytest
executing==2.2.1
# via stack-data
faker==34.0.2
# via lerobot
farama-notifications==0.0.4
# via gymnasium
fastapi==0.119.1
# via teleop
fastapi==0.135.1
# via
# lerobot
# teleop
fastjsonschema==2.21.2
# via nbformat
feetech-servo-sdk==1.0.0
# via lerobot
filelock==3.20.0
filelock==3.25.0
# via
# datasets
# diffusers
# huggingface-hub
# python-discovery
# torch
# transformers
# virtualenv
flash-attn==2.8.3
# via lerobot
fonttools==4.60.1
fonttools==4.61.1
# via matplotlib
frozenlist==1.8.0
# via
# aiohttp
# aiosignal
fsspec[http]==2025.9.0
fsspec[http]==2026.2.0
# via
# datasets
# etils
# huggingface-hub
# torch
future==1.0.0
# via libero
# via hf-libero
gitdb==4.0.12
# via gitpython
gitpython==3.1.45
gitpython==3.1.46
# via wandb
glfw==2.10.0
# via
@@ -230,50 +235,60 @@ gym-hil==0.1.13
# via lerobot
gym-pusht==0.1.6
# via lerobot
gymnasium==1.2.1
gymnasium==1.2.3
# via
# gym-aloha
# gym-hil
# gym-pusht
# hf-libero
# lerobot
# libero
# metaworld
h11==0.16.0
# via uvicorn
h5py==3.15.1
# via
# httpcore
# uvicorn
h5py==3.16.0
# via robomimic
hebi-py==2.11.0
# via lerobot
hf-transfer==0.1.9
# via huggingface-hub
hf-xet==1.1.10
hf-egl-probe==1.0.2
# via hf-libero
hf-libero==0.1.3
# via lerobot
hf-xet==1.3.2
# via huggingface-hub
hidapi==0.14.0.post4
# via
# gym-hil
# lerobot
httpcore==1.0.9
# via httpx
httptools==0.7.1
# via uvicorn
huggingface-hub[cli,hf-transfer]==0.35.3
httpx==0.28.1
# via
# datasets
# huggingface-hub
huggingface-hub==1.6.0
# via
# accelerate
# datasets
# diffusers
# lerobot
# peft
# timm
# tokenizers
# transformers
hydra-core==1.3.2
# via libero
identify==2.6.15
# via hf-libero
identify==2.6.17
# via pre-commit
idna==3.11
# via
# anyio
# httpx
# requests
# yarl
imageio[ffmpeg]==2.37.0
imageio[ffmpeg]==2.37.2
# via
# gym-aloha
# gym-hil
@@ -285,16 +300,14 @@ imageio-ffmpeg==0.6.0
# via
# imageio
# robomimic
importlib-metadata==8.7.0
importlib-metadata==8.7.1
# via diffusers
importlib-resources==6.5.2
# via etils
iniconfig==2.3.0
# via pytest
inquirerpy==0.3.4
# via huggingface-hub
ipython==8.37.0
ipython==9.11.0
# via meshcat
ipython-pygments-lexers==1.1.1
# via ipython
ischedule==1.2.7
# via placo
jedi==0.19.2
@@ -303,40 +316,41 @@ jinja2==3.1.6
# via torch
jsonlines==4.0.0
# via lerobot
jsonschema==4.25.1
jsonschema==4.26.0
# via nbformat
jsonschema-specifications==2025.9.1
# via jsonschema
jupyter-core==5.9.1
# via nbformat
jupytext==1.18.1
jupytext==1.19.1
# via bddl
kiwisolver==1.4.9
# via matplotlib
labmaze==1.0.6
# via dm-control
lazy-loader==0.4
lazy-loader==0.5
# via scikit-image
libero @ git+https://github.com/huggingface/lerobot-libero.git@main
# via lerobot
llvmlite==0.45.1
librt==0.8.1
# via mypy
llvmlite==0.46.0
# via numba
lxml==6.0.2
# via dm-control
markdown==3.9
markdown==3.10.2
# via tensorboard
markdown-it-py==4.0.0
# via
# jupytext
# mdit-py-plugins
# rich
markupsafe==3.0.3
# via
# jinja2
# werkzeug
matplotlib==3.10.7
matplotlib==3.10.8
# via
# hf-libero
# lerobot
# libero
matplotlib-inline==0.2.1
# via ipython
mdit-py-plugins==0.5.0
@@ -353,36 +367,38 @@ mock-serial==0.0.1
# via lerobot
mpmath==1.3.0
# via sympy
mujoco==3.3.7
mujoco==3.5.0
# via
# dm-control
# gym-aloha
# gym-hil
# libero
# hf-libero
# metaworld
# robosuite
multidict==6.7.0
multidict==6.7.1
# via
# aiohttp
# yarl
multiprocess==0.70.16
multiprocess==0.70.18
# via datasets
mypy==1.19.1
# via lerobot
mypy-extensions==1.1.0
# via typing-inspect
# via
# mypy
# typing-inspect
nbformat==5.10.4
# via jupytext
networkx==3.4.2
networkx==3.6.1
# via
# bddl
# scikit-image
# torch
ninja==1.13.0
# via lerobot
nodeenv==1.9.1
nodeenv==1.10.0
# via pre-commit
num2words==0.5.14
# via lerobot
numba==0.62.1
numba==0.64.0
# via robosuite
numpy==2.2.6
# via
@@ -391,7 +407,6 @@ numpy==2.2.6
# cmeel-boost
# contourpy
# datasets
# decord
# diffusers
# dm-control
# dm-env
@@ -399,9 +414,10 @@ numpy==2.2.6
# gymnasium
# h5py
# hebi-py
# hf-libero
# imageio
# labmaze
# libero
# lerobot
# matplotlib
# meshcat
# metaworld
@@ -426,49 +442,51 @@ numpy==2.2.6
# torchvision
# transformers
# transforms3d
nvidia-cublas-cu12==12.6.4.1
nvidia-cublas-cu12==12.8.4.1
# via
# nvidia-cudnn-cu12
# nvidia-cusolver-cu12
# torch
nvidia-cuda-cupti-cu12==12.6.80
nvidia-cuda-cupti-cu12==12.8.90
# via torch
nvidia-cuda-nvrtc-cu12==12.6.77
nvidia-cuda-nvrtc-cu12==12.8.93
# via torch
nvidia-cuda-runtime-cu12==12.6.77
nvidia-cuda-runtime-cu12==12.8.90
# via torch
nvidia-cudnn-cu12==9.5.1.17
nvidia-cudnn-cu12==9.10.2.21
# via torch
nvidia-cufft-cu12==11.3.0.4
nvidia-cufft-cu12==11.3.3.83
# via torch
nvidia-cufile-cu12==1.11.1.6
nvidia-cufile-cu12==1.13.1.3
# via torch
nvidia-curand-cu12==10.3.7.77
nvidia-curand-cu12==10.3.9.90
# via torch
nvidia-cusolver-cu12==11.7.1.2
nvidia-cusolver-cu12==11.7.3.90
# via torch
nvidia-cusparse-cu12==12.5.4.2
nvidia-cusparse-cu12==12.5.8.93
# via
# nvidia-cusolver-cu12
# torch
nvidia-cusparselt-cu12==0.6.3
nvidia-cusparselt-cu12==0.7.1
# via torch
nvidia-nccl-cu12==2.26.2
nvidia-nccl-cu12==2.27.5
# via torch
nvidia-nvjitlink-cu12==12.6.85
nvidia-nvjitlink-cu12==12.8.93
# via
# nvidia-cufft-cu12
# nvidia-cusolver-cu12
# nvidia-cusparse-cu12
# torch
nvidia-nvtx-cu12==12.6.77
nvidia-nvshmem-cu12==3.4.5
# via torch
nvidia-nvtx-cu12==12.8.90
# via torch
omegaconf==2.3.0
# via hydra-core
opencv-python==4.12.0.88
opencv-python==4.13.0.92
# via
# gym-pusht
# libero
# hf-libero
# reachy2-sdk
# robosuite
opencv-python-headless==4.12.0.88
@@ -487,6 +505,7 @@ packaging==25.0
# matplotlib
# peft
# pytest
# qwen-vl-utils
# reachy2-sdk
# scikit-image
# tensorboard
@@ -497,21 +516,21 @@ pandas==2.3.3
# via
# datasets
# lerobot
parso==0.8.5
parso==0.8.6
# via jedi
peft==0.17.1
pathspec==1.0.4
# via mypy
peft==0.18.1
# via lerobot
pexpect==4.9.0
# via ipython
pfzy==0.3.4
# via inquirerpy
pillow==12.0.0
pillow==12.1.1
# via
# diffusers
# imageio
# lerobot
# matplotlib
# meshcat
# qwen-vl-utils
# rerun-sdk
# robosuite
# scikit-image
@@ -519,28 +538,27 @@ pillow==12.0.0
# torchvision
pin==3.4.0
# via placo
placo==0.9.14
placo==0.9.16
# via lerobot
platformdirs==4.5.0
platformdirs==4.9.4
# via
# jupyter-core
# python-discovery
# virtualenv
# wandb
pluggy==1.6.0
# via
# pytest
# pytest-cov
pre-commit==4.3.0
pre-commit==4.5.1
# via lerobot
prompt-toolkit==3.0.52
# via
# inquirerpy
# ipython
# via ipython
propcache==0.4.1
# via
# aiohttp
# yarl
protobuf==6.31.0
protobuf==6.31.1
# via
# dm-control
# grpcio-tools
@@ -550,7 +568,7 @@ protobuf==6.31.0
# tensorboard
# tensorboardx
# wandb
psutil==7.1.1
psutil==7.2.2
# via
# accelerate
# imageio
@@ -560,17 +578,17 @@ ptyprocess==0.7.0
# via pexpect
pure-eval==0.2.3
# via stack-data
pyarrow==21.0.0
pyarrow==23.0.1
# via
# datasets
# rerun-sdk
pycparser==2.23
pycparser==3.0
# via cffi
pydantic==2.12.3
pydantic==2.12.5
# via
# fastapi
# wandb
pydantic-core==2.41.4
pydantic-core==2.41.5
# via pydantic
pygame==2.6.1
# via
@@ -580,12 +598,14 @@ pygame==2.6.1
pygments==2.19.2
# via
# ipython
# ipython-pygments-lexers
# pytest
# rich
pymunk==6.11.1
# via
# gym-pusht
# lerobot
pyngrok==7.4.1
pyngrok==7.5.1
# via meshcat
pynput==1.8.1
# via
@@ -595,7 +615,7 @@ pyopengl==3.1.10
# via
# dm-control
# mujoco
pyparsing==3.2.5
pyparsing==3.3.2
# via
# dm-control
# matplotlib
@@ -621,13 +641,16 @@ pytest-timeout==2.4.0
# via lerobot
python-dateutil==2.9.0.post0
# via
# faker
# matplotlib
# pandas
python-dotenv==1.1.1
python-discovery==1.1.1
# via virtualenv
python-dotenv==1.2.2
# via uvicorn
python-xlib==0.33
# via pynput
pytz==2025.2
pytz==2026.1.post1
# via pandas
pyyaml==6.0.3
# via
@@ -642,7 +665,6 @@ pyyaml==6.0.3
# pre-commit
# pyngrok
# pyyaml-include
# timm
# transformers
# uvicorn
# wandb
@@ -652,7 +674,9 @@ pyzmq==27.1.0
# via
# lerobot
# meshcat
reachy2-sdk==1.0.14
qwen-vl-utils==0.0.14
# via lerobot
reachy2-sdk==1.0.15
# via lerobot
reachy2-sdk-api==1.0.21
# via reachy2-sdk
@@ -660,7 +684,7 @@ referencing==0.37.0
# via
# jsonschema
# jsonschema-specifications
regex==2025.10.23
regex==2026.2.28
# via
# diffusers
# transformers
@@ -669,60 +693,62 @@ requests==2.32.5
# datasets
# diffusers
# dm-control
# huggingface-hub
# qwen-vl-utils
# teleop
# transformers
# wandb
rerun-sdk==0.26.1
rerun-sdk==0.26.2
# via lerobot
rhoban-cmeel-jsoncpp==1.9.4.9
# via placo
rich==14.3.3
# via typer
robomimic==0.2.0
# via libero
# via hf-libero
robosuite==1.4.0
# via libero
rpds-py==0.28.0
# via hf-libero
rpds-py==0.30.0
# via
# jsonschema
# referencing
safetensors==0.6.2
safetensors==0.7.0
# via
# accelerate
# diffusers
# lerobot
# peft
# timm
# transformers
scikit-image==0.25.2
# via
# gym-pusht
# lerobot
scipy==1.15.3
scipy==1.17.1
# via
# dm-control
# lerobot
# metaworld
# robosuite
# scikit-image
sentry-sdk==2.42.1
# torchdiffeq
sentry-sdk==2.54.0
# via wandb
shapely==2.1.2
# via gym-pusht
shellingham==1.5.4
# via typer
six==1.17.0
# via
# pynput
# python-dateutil
# python-xlib
smmap==5.0.2
smmap==5.0.3
# via gitdb
sniffio==1.3.1
# via anyio
stack-data==0.6.3
# via ipython
starlette==0.48.0
starlette==0.52.1
# via fastapi
sympy==1.14.0
# via torch
teleop==0.1.2
teleop==0.1.4
# via lerobot
tensorboard==2.20.0
# via robomimic
@@ -730,46 +756,38 @@ tensorboard-data-server==0.7.2
# via tensorboard
tensorboardx==2.6.4
# via robomimic
termcolor==3.1.0
termcolor==3.3.0
# via
# lerobot
# robomimic
thop==0.1.1.post2209072238
# via libero
tifffile==2025.5.10
# via hf-libero
tifffile==2026.3.3
# via scikit-image
timm==1.0.20
# via lerobot
tokenizers==0.22.1
tokenizers==0.22.2
# via transformers
toml==0.10.2
# via draccus
tomli==2.3.0
# via
# cmeel
# coverage
# jupytext
# pytest
torch==2.7.1
torch==2.10.0
# via
# accelerate
# flash-attn
# lerobot
# peft
# robomimic
# thop
# timm
# torchdiffeq
# torchvision
torchcodec==0.5
torchcodec==0.10.0
# via lerobot
torchvision==0.22.1
torchdiffeq==0.2.5
# via lerobot
torchvision==0.25.0
# via
# lerobot
# robomimic
# timm
tornado==6.5.2
tornado==6.5.4
# via meshcat
tqdm==4.67.1
tqdm==4.67.3
# via
# datasets
# dm-control
@@ -783,26 +801,29 @@ traitlets==5.14.3
# jupyter-core
# matplotlib-inline
# nbformat
transformers==4.57.1
transformers==5.3.0
# via
# hf-libero
# lerobot
# libero
# peft
transforms3d==0.4.2
# via teleop
triton==3.3.1
triton==3.6.0
# via torch
typer==0.24.1
# via
# huggingface-hub
# transformers
typing-extensions==4.15.0
# via
# aiosignal
# anyio
# etils
# exceptiongroup
# faker
# fastapi
# gymnasium
# huggingface-hub
# ipython
# multidict
# mypy
# pydantic
# pydantic-core
# referencing
@@ -811,46 +832,46 @@ typing-extensions==4.15.0
# torch
# typing-inspect
# typing-inspection
# uvicorn
# virtualenv
# wandb
typing-inspect==0.9.0
# via draccus
typing-inspection==0.4.2
# via pydantic
tzdata==2025.2
# via
# fastapi
# pydantic
tzdata==2025.3
# via pandas
u-msgpack-python==2.8.0
# via meshcat
urllib3==2.5.0
urllib3==2.6.3
# via
# requests
# sentry-sdk
uvicorn[standard]==0.38.0
uvicorn[standard]==0.41.0
# via teleop
uvloop==0.22.1
# via uvicorn
virtualenv==20.35.3
virtualenv==21.1.0
# via pre-commit
wandb==0.21.4
wandb==0.24.2
# via
# hf-libero
# lerobot
# libero
watchfiles==1.1.1
# via uvicorn
wcwidth==0.2.14
wcwidth==0.6.0
# via prompt-toolkit
websocket-client==1.9.0
# via teleop
websockets==15.0.1
websockets==16.0
# via uvicorn
werkzeug==3.1.3
werkzeug==3.1.6
# via tensorboard
wrapt==2.0.0
wrapt==2.1.2
# via dm-tree
xxhash==3.6.0
# via datasets
yarl==1.22.0
yarl==1.23.0
# via aiohttp
zipp==3.23.0
# via
@@ -1,9 +1,9 @@
# requirements.in
# requirements-macos.txt was generated on macOS and is platform-specific (macOS 26.0.1 25A362 arm64).
# Darwin MacBook-Pro.local 25.0.0 Darwin Kernel Version 25.0.0: Wed Sep 17 21:42:08 PDT 2025; root:xnu-12377.1.9~141/RELEASE_ARM64_T8132 arm64
# requirements-macos.txt was generated on macOS and is platform-specific (macOS 26.3.1 25D2128 arm64).
# Darwin MacBook-Pro.local 25.3.0 Darwin Kernel Version 25.3.0: Wed Jan 28 20:54:55 PST 2026; root:xnu-12377.91.3~2/RELEASE_ARM64_T8132 arm64
# requirements-ubuntu.txt was generated on Linux and is platform-specific (Ubuntu 24.04.3 LTS x86_64).
# Linux mlerobot-linux 6.14.0-33-generic #33~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Fri Sep 19 17:02:30 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
# requirements-ubuntu.txt was generated on Linux and is platform-specific (Ubuntu 24.04.4 LTS x86_64).
# Linux lerobot-linux 6.17.0-14-generic #14~24.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Jan 15 15:52:10 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
-e .[all]
@@ -0,0 +1,689 @@
#!/usr/bin/env python
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Chunk-level multi-modality analysis for comparing full/mixed vs curated datasets.
Treats each action chunk (sliding window of CHUNK_SIZE consecutive frames) as the
atomic unit, tagged by the SARM progress score at its start frame. For each
progress band, compares the full vs HQ dataset on:
1. Intra-band action variance
2. Progress delta per chunk
3. GMM + BIC optimal K (number of distinct strategies)
4. PCA embedding (visual cluster inspection)
Usage:
python chunk_multimodality_analysis.py \\
--full-dataset lerobot-data-collection/level12_rac_2_2026-02-08_1 \\
--hq-dataset lerobot-data-collection/level2_final_quality3 \\
--output-dir ./chunk_analysis
"""
from __future__ import annotations
import argparse
import logging
from collections import defaultdict
from pathlib import Path
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from huggingface_hub import hf_hub_download
from scipy.stats import gaussian_kde
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler
from lerobot.datasets.lerobot_dataset import LeRobotDataset
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
logger = logging.getLogger(__name__)
# ── Visual style ──────────────────────────────────────────────────────────
BG = "#0e1117"
CARD = "#1a1d27"
BORDER = "#2a2d3a"
SUB = "#8b8fa8"
TEXT = "#e8eaf0"
C_FULL = "#f7934f"
C_HQ = "#4dc98a"
def _style_ax(ax: plt.Axes) -> None:
ax.set_facecolor(CARD)
ax.tick_params(colors=SUB, labelsize=8)
for spine in ax.spines.values():
spine.set_color(BORDER)
def _save(fig: plt.Figure, path: Path) -> None:
fig.savefig(path, dpi=150, bbox_inches="tight", facecolor=BG)
plt.close(fig)
logger.info("Saved %s", path)
# ── Step 0: Load episodes ────────────────────────────────────────────────
def _load_sarm_progress(repo_id: str) -> pd.DataFrame | None:
"""Try to download sarm_progress.parquet from the Hub."""
try:
path = hf_hub_download(
repo_id=repo_id, filename="sarm_progress.parquet",
repo_type="dataset",
)
df = pd.read_parquet(path)
col = "progress_sparse" if "progress_sparse" in df.columns else "progress_dense"
if col not in df.columns:
logger.warning("sarm_progress.parquet has no progress columns — ignoring")
return None
logger.info("Loaded SARM progress (%s) for %s (%d rows)", col, repo_id, len(df))
return df.rename(columns={col: "progress"})[["episode_index", "frame_index", "progress"]]
except Exception as exc:
logger.warning("Could not load sarm_progress.parquet for %s: %s", repo_id, exc)
return None
def load_episodes(
repo_id: str,
n_joints: int = 16,
max_episodes: int | None = None,
) -> list[dict]:
dataset = LeRobotDataset(repo_id, download_videos=False)
raw = dataset.hf_dataset
sarm_df = _load_sarm_progress(repo_id)
# Build per-episode progress arrays from SARM parquet (indexed by frame_index)
sarm_by_ep: dict[int, dict[int, float]] = {}
if sarm_df is not None:
if max_episodes is not None:
sarm_df = sarm_df[sarm_df["episode_index"] < max_episodes]
for ep_id, grp in sarm_df.groupby("episode_index"):
sarm_by_ep[int(ep_id)] = dict(
zip(grp["frame_index"].astype(int), grp["progress"].astype(float))
)
episodes: dict[int, dict] = defaultdict(lambda: {"actions": [], "progress": []})
for row in raw:
ep = int(row["episode_index"])
if max_episodes is not None and ep >= max_episodes:
continue
action = np.array(row["action"], dtype=np.float32)[:n_joints]
episodes[ep]["actions"].append(action)
fi = int(row["frame_index"])
ep_prog = sarm_by_ep.get(ep, {})
episodes[ep]["progress"].append(ep_prog.get(fi, float("nan")))
has_sarm = len(sarm_by_ep) > 0
result = []
for ep_id, d in sorted(episodes.items()):
actions = np.stack(d["actions"])
T = len(actions)
if has_sarm:
prog = np.array(d["progress"], dtype=np.float32)
prog = np.clip(np.nan_to_num(prog, nan=0.0), 0.0, 1.0)
prog = np.maximum.accumulate(prog)
else:
prog = np.linspace(0.0, 1.0, T, dtype=np.float32)
result.append({"episode": ep_id, "actions": actions, "progress": prog})
src = "SARM" if has_sarm else "time-based"
logger.info("Progress source: %s", src)
return result
# ── Step 1: Filter short episodes ────────────────────────────────────────
def auto_length_threshold(
episodes_full: list[dict], episodes_hq: list[dict]
) -> int:
all_lengths = np.array(
[e["actions"].shape[0] for e in episodes_full + episodes_hq]
)
kde = gaussian_kde(all_lengths, bw_method=0.25)
xs = np.linspace(all_lengths.min(), np.percentile(all_lengths, 40), 300)
return int(xs[np.argmin(kde(xs))])
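The valley search above can be exercised on synthetic data: with a bimodal length distribution, the KDE minimum inside the lower 40% of the range lands between the short-episode and full-length modes. A standalone sketch (the lengths below are made up, not from any dataset):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
lengths = np.concatenate([
    rng.normal(60, 5, 50),     # short / aborted episodes
    rng.normal(300, 20, 200),  # full-length episodes
])
kde = gaussian_kde(lengths, bw_method=0.25)
# Search only up to the 40th percentile, mirroring auto_length_threshold.
xs = np.linspace(lengths.min(), np.percentile(lengths, 40), 300)
threshold = int(xs[np.argmin(kde(xs))])
print(threshold)  # lands in the low-density gap between the two modes
```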
def plot_length_distribution(
episodes_full: list[dict],
episodes_hq: list[dict],
threshold: int,
out_path: Path,
) -> None:
lens_full = np.array([e["actions"].shape[0] for e in episodes_full])
lens_hq = np.array([e["actions"].shape[0] for e in episodes_hq])
all_lens = np.concatenate([lens_full, lens_hq])
fig, ax = plt.subplots(figsize=(10, 5))
fig.patch.set_facecolor(BG)
_style_ax(ax)
bins = np.linspace(all_lens.min(), all_lens.max(), 50)
ax.hist(lens_full, bins=bins, alpha=0.5, color=C_FULL, label="Full/Mixed")
ax.hist(lens_hq, bins=bins, alpha=0.5, color=C_HQ, label="HQ")
xs = np.linspace(all_lens.min(), all_lens.max(), 300)
kde = gaussian_kde(all_lens, bw_method=0.25)
ax.plot(xs, kde(xs) * len(all_lens) * (bins[1] - bins[0]), color=TEXT, lw=1.5, label="KDE (combined)")
ax.axvline(threshold, color="#ff4b4b", ls="--", lw=1.5, label=f"Threshold = {threshold}")
ax.set_xlabel("Episode length (frames)", color=SUB)
ax.set_ylabel("Count", color=SUB)
ax.set_title("Episode Length Distribution", color=TEXT, fontsize=13)
ax.legend(facecolor=CARD, edgecolor=BORDER, labelcolor=TEXT, fontsize=8)
_save(fig, out_path)
def filter_episodes(episodes: list[dict], min_length: int) -> list[dict]:
kept = [e for e in episodes if e["actions"].shape[0] >= min_length]
logger.info("Kept %d / %d episodes (min_length=%d)", len(kept), len(episodes), min_length)
return kept
# ── Step 2: Extract chunks ───────────────────────────────────────────────
def extract_chunks(
episodes: list[dict],
chunk_size: int = 30,
chunk_stride: int = 15,
) -> list[dict]:
chunks = []
for ep in episodes:
actions = ep["actions"]
T = len(actions)
prog = ep["progress"]
for t in range(0, T - chunk_size, chunk_stride):
chunk = actions[t : t + chunk_size]
p_start = float(prog[t])
p_end = float(prog[min(t + chunk_size, T - 1)])
chunks.append({
"action_mean": chunk.mean(axis=0).astype(np.float32),
"action_flat": chunk.flatten().astype(np.float32),
"progress_start": p_start,
"progress_delta": p_end - p_start,
"episode": ep["episode"],
})
return chunks
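A quick standalone check of the loop bounds in `extract_chunks` (toy numbers): `range`'s exclusive upper bound means a final chunk whose start is exactly `T - chunk_size` is dropped, so a 100-frame episode yields 5 chunks here, not 6.

```python
# Window starts produced by the extract_chunks loop for one toy episode.
T, chunk_size, chunk_stride = 100, 30, 15
starts = list(range(0, T - chunk_size, chunk_stride))
print(starts)  # [0, 15, 30, 45, 60] -> 5 overlapping 30-frame chunks
```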
# ── Step 3: Adaptive progress bands ─────────────────────────────────────
def make_bands(n_bands: int = 5) -> list[tuple[float, float]]:
edges = np.linspace(0.0, 1.0, n_bands + 1)
return [(float(edges[i]), float(edges[i + 1])) for i in range(n_bands)]
def assign_bands(
chunks: list[dict], band_edges: list[tuple[float, float]]
) -> list[dict]:
n = len(band_edges)
for c in chunks:
p = c["progress_start"]
c["band"] = next(
(bi for bi, (lo, hi) in enumerate(band_edges) if p < hi),
n - 1,
)
return chunks
def split_by_band(chunks: list[dict], n_bands: int) -> dict[int, list[dict]]:
out: dict[int, list[dict]] = {b: [] for b in range(n_bands)}
for c in chunks:
out[c["band"]].append(c)
return out
# ── Step 4: Intra-band action variance ──────────────────────────────────
def band_variance_matrix(
bands: dict[int, list[dict]], n_bands: int, n_joints: int
) -> np.ndarray:
var_mat = np.full((n_bands, n_joints), np.nan)
for b, clist in bands.items():
if len(clist) < 3:
continue
means = np.stack([c["action_mean"] for c in clist])
var_mat[b] = np.var(means, axis=0)
return var_mat
def plot_variance_heatmap(
var_full: np.ndarray,
var_hq: np.ndarray,
band_edges: list[tuple[float, float]],
out_path: Path,
) -> None:
n_bands = var_full.shape[0]
vmin = 0.0
vmax = max(np.nanmax(var_full), np.nanmax(var_hq))
band_labels = [f"{lo:.0%}–{hi:.0%}" for lo, hi in band_edges]
joint_labels = [f"J{j}" for j in range(var_full.shape[1])]
fig, axes = plt.subplots(3, 1, figsize=(12, 10), gridspec_kw={"height_ratios": [3, 3, 2]})
fig.patch.set_facecolor(BG)
fig.suptitle("Intra-Band Action Variance", color=TEXT, fontsize=14, y=0.98)
for ax_idx, (mat, label) in enumerate([(var_full, "Full/Mixed"), (var_hq, "HQ")]):
ax = axes[ax_idx]
_style_ax(ax)
im = ax.imshow(mat, aspect="auto", cmap="YlOrRd", vmin=vmin, vmax=vmax)
ax.set_yticks(range(n_bands))
ax.set_yticklabels(band_labels, fontsize=7, color=SUB)
ax.set_xticks(range(var_full.shape[1]))
ax.set_xticklabels(joint_labels, fontsize=7, color=SUB)
ax.set_title(f"Panel {'A' if ax_idx == 0 else 'B'}: {label}", color=TEXT, fontsize=11)
fig.colorbar(im, ax=ax, fraction=0.02, pad=0.02)
with np.errstate(invalid="ignore"):
mean_full = np.nanmean(var_full, axis=1)
mean_hq = np.nanmean(var_hq, axis=1)
ratio = np.where(np.isnan(mean_full) | np.isnan(mean_hq), np.nan,
mean_full / (mean_hq + 1e-8))
ax_bar = axes[2]
_style_ax(ax_bar)
colors = [
"#ff4b4b" if r > 2.0 else "#ffaa33" if r > 1.2 else C_HQ
for r in ratio
]
ax_bar.bar(range(n_bands), ratio, color=colors, edgecolor=BORDER)
ax_bar.axhline(1.0, color=SUB, ls="--", lw=0.8)
ax_bar.set_xticks(range(n_bands))
ax_bar.set_xticklabels(band_labels, fontsize=7, color=SUB)
ax_bar.set_ylabel("Variance ratio\n(Full / HQ)", color=SUB, fontsize=9)
ax_bar.set_title("Panel C: Variance Ratio per Band", color=TEXT, fontsize=11)
fig.tight_layout(rect=[0, 0, 1, 0.96])
_save(fig, out_path)
# ── Step 5: Progress delta per band ──────────────────────────────────────
def plot_progress_delta(
bands_full: dict[int, list[dict]],
bands_hq: dict[int, list[dict]],
band_edges: list[tuple[float, float]],
out_path: Path,
) -> None:
n_bands = len(band_edges)
band_labels = [f"{lo:.0%}–{hi:.0%}" for lo, hi in band_edges]
x = np.arange(n_bands)
w = 0.35
means_full, stds_full = [], []
means_hq, stds_hq = [], []
all_deltas_full, all_deltas_hq = [], []
for b in range(n_bands):
df = np.array([c["progress_delta"] for c in bands_full.get(b, [])])
dh = np.array([c["progress_delta"] for c in bands_hq.get(b, [])])
means_full.append(np.mean(df) if len(df) > 0 else 0)
stds_full.append(np.std(df) if len(df) > 0 else 0)
means_hq.append(np.mean(dh) if len(dh) > 0 else 0)
stds_hq.append(np.std(dh) if len(dh) > 0 else 0)
all_deltas_full.extend(df.tolist())
all_deltas_hq.extend(dh.tolist())
fig, (ax_bar, ax_viol) = plt.subplots(1, 2, figsize=(14, 5), gridspec_kw={"width_ratios": [3, 1]})
fig.patch.set_facecolor(BG)
fig.suptitle("Progress Delta per Chunk", color=TEXT, fontsize=14)
_style_ax(ax_bar)
ax_bar.bar(x - w / 2, means_full, w, yerr=stds_full, color=C_FULL, edgecolor=BORDER,
capsize=3, label="Full/Mixed", error_kw={"ecolor": SUB})
ax_bar.bar(x + w / 2, means_hq, w, yerr=stds_hq, color=C_HQ, edgecolor=BORDER,
capsize=3, label="HQ", error_kw={"ecolor": SUB})
ax_bar.set_xticks(x)
ax_bar.set_xticklabels(band_labels, fontsize=7, color=SUB, rotation=30)
ax_bar.set_ylabel("Mean progress Δ", color=SUB)
ax_bar.legend(facecolor=CARD, edgecolor=BORDER, labelcolor=TEXT, fontsize=8)
_style_ax(ax_viol)
data_viol = [np.array(all_deltas_full), np.array(all_deltas_hq)]
if all(len(d) > 0 for d in data_viol):
parts = ax_viol.violinplot(data_viol, positions=[0, 1], showmeans=True, showmedians=True)
for pc, c in zip(parts["bodies"], [C_FULL, C_HQ]):
pc.set_facecolor(c)
pc.set_alpha(0.7)
for key in ("cmeans", "cmedians", "cbars", "cmins", "cmaxes"):
if key in parts:
parts[key].set_color(SUB)
ax_viol.set_xticks([0, 1])
ax_viol.set_xticklabels(["Full", "HQ"], color=SUB)
ax_viol.set_ylabel("Progress Δ", color=SUB)
ax_viol.set_title("Overall Distribution", color=TEXT, fontsize=10)
fig.tight_layout()
_save(fig, out_path)
# ── Step 6: GMM + BIC per band ──────────────────────────────────────────
def gmm_optimal_k(
band_chunks: list[dict],
pca_components: int = 15,
max_k: int = 12,
seed: int = 42,
) -> int | None:
if len(band_chunks) < 20:
return None
X = np.stack([c["action_flat"] for c in band_chunks])
X = StandardScaler().fit_transform(X)
n = min(pca_components, X.shape[1], X.shape[0] - 1)
X_r = PCA(n_components=n, random_state=seed).fit_transform(X)
bics = []
for k in range(1, min(max_k + 1, len(X_r) // 6)):
gmm = GaussianMixture(
n_components=k, covariance_type="full",
n_init=5, max_iter=300, random_state=seed,
)
gmm.fit(X_r)
bics.append((k, gmm.bic(X_r)))
if not bics:
return None
return min(bics, key=lambda x: x[1])[0]
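The BIC criterion used in `gmm_optimal_k` trades likelihood against parameter count; on well-separated synthetic clusters the minimum-BIC K recovers the true count. A minimal sketch under those assumptions (two made-up "strategy" clusters):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(0.0, 0.3, size=(100, 2)),  # strategy 1
    rng.normal(5.0, 0.3, size=(100, 2)),  # strategy 2
])
# BIC penalizes each extra component, so K stops growing once the
# likelihood gain no longer pays for the added parameters.
bics = [(k, GaussianMixture(n_components=k, random_state=0).fit(X).bic(X))
        for k in range(1, 6)]
best_k = min(bics, key=lambda kb: kb[1])[0]
print(best_k)  # two well-separated blobs -> BIC selects K=2
```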
def plot_gmm_bic(
bands_full: dict[int, list[dict]],
bands_hq: dict[int, list[dict]],
band_edges: list[tuple[float, float]],
seed: int,
out_path: Path,
) -> tuple[list[int | None], list[int | None]]:
n_bands = len(band_edges)
ks_full = [gmm_optimal_k(bands_full.get(b, []), seed=seed) for b in range(n_bands)]
ks_hq = [gmm_optimal_k(bands_hq.get(b, []), seed=seed) for b in range(n_bands)]
band_labels = [f"{lo:.0%}–{hi:.0%}" for lo, hi in band_edges]
fig, ax = plt.subplots(figsize=(10, 5))
fig.patch.set_facecolor(BG)
_style_ax(ax)
xs = np.arange(n_bands)
valid_full = [(i, k) for i, k in enumerate(ks_full) if k is not None]
valid_hq = [(i, k) for i, k in enumerate(ks_hq) if k is not None]
if valid_full:
xi, yi = zip(*valid_full)
ax.plot(xi, yi, "o-", color=C_FULL, label="Full/Mixed", lw=2, markersize=7)
if valid_hq:
xi, yi = zip(*valid_hq)
ax.plot(xi, yi, "o-", color=C_HQ, label="HQ", lw=2, markersize=7)
if valid_full and valid_hq:
all_x = sorted(set([i for i, _ in valid_full]) & set([i for i, _ in valid_hq]))
if len(all_x) >= 2:
kf_interp = {i: k for i, k in valid_full}
kh_interp = {i: k for i, k in valid_hq}
shared_x = [i for i in all_x if i in kf_interp and i in kh_interp]
yf = [kf_interp[i] for i in shared_x]
yh = [kh_interp[i] for i in shared_x]
ax.fill_between(shared_x, yf, yh, alpha=0.15, color=TEXT)
ax.set_xticks(xs)
ax.set_xticklabels(band_labels, fontsize=7, color=SUB, rotation=30)
ax.set_ylabel("Optimal K (GMM-BIC)", color=SUB)
ax.set_title("Number of Distinct Strategies per Band", color=TEXT, fontsize=13)
ax.legend(facecolor=CARD, edgecolor=BORDER, labelcolor=TEXT, fontsize=9)
ax.yaxis.set_major_locator(plt.MaxNLocator(integer=True))
fig.tight_layout()
_save(fig, out_path)
return ks_full, ks_hq
# ── Step 7: PCA scatter per band ────────────────────────────────────────
def plot_pca_scatter(
bands_full: dict[int, list[dict]],
bands_hq: dict[int, list[dict]],
band_edges: list[tuple[float, float]],
out_path: Path,
) -> None:
n_plot = min(4, len(band_edges))
fig, axes = plt.subplots(2, n_plot, figsize=(4 * n_plot, 7))
fig.patch.set_facecolor(BG)
fig.suptitle("PCA of Action Chunks per Band", color=TEXT, fontsize=14)
if n_plot == 1:
axes = axes.reshape(2, 1)
for col, b in enumerate(range(n_plot)):
cf = bands_full.get(b, [])
ch = bands_hq.get(b, [])
lo, hi = band_edges[b]
for row, (clist, color, label) in enumerate([
(cf, C_FULL, "Full/Mixed"), (ch, C_HQ, "HQ")
]):
ax = axes[row, col]
_style_ax(ax)
if row == 0:
ax.set_title(f"{lo:.0%}–{hi:.0%}", color=TEXT, fontsize=10)
if col == 0:
ax.set_ylabel(label, color=SUB, fontsize=9)
if len(cf) < 3 or len(ch) < 3:
ax.text(0.5, 0.5, "Too few\nchunks", transform=ax.transAxes,
ha="center", va="center", color=SUB, fontsize=9)
continue
X_full_b = np.stack([c["action_flat"] for c in cf])
X_hq_b = np.stack([c["action_flat"] for c in ch])
X_all = np.vstack([X_full_b, X_hq_b])
X_all = StandardScaler().fit_transform(X_all)
X_2d = PCA(n_components=2, random_state=42).fit_transform(X_all)
X_2d_full = X_2d[: len(cf)]
X_2d_hq = X_2d[len(cf) :]
pts = X_2d_full if row == 0 else X_2d_hq
ax.scatter(pts[:, 0], pts[:, 1], s=8, alpha=0.5, color=color, edgecolors="none")
fig.tight_layout(rect=[0, 0, 1, 0.95])
_save(fig, out_path)
# ── Plot 1: Chunk counts per band ───────────────────────────────────────
def plot_chunk_counts(
bands_full: dict[int, list[dict]],
bands_hq: dict[int, list[dict]],
band_edges: list[tuple[float, float]],
out_path: Path,
) -> None:
n_bands = len(band_edges)
band_labels = [f"{lo:.0%}–{hi:.0%}" for lo, hi in band_edges]
x = np.arange(n_bands)
w = 0.35
counts_full = [len(bands_full.get(b, [])) for b in range(n_bands)]
counts_hq = [len(bands_hq.get(b, [])) for b in range(n_bands)]
fig, ax = plt.subplots(figsize=(10, 5))
fig.patch.set_facecolor(BG)
_style_ax(ax)
ax.bar(x - w / 2, counts_full, w, color=C_FULL, edgecolor=BORDER, label="Full/Mixed")
ax.bar(x + w / 2, counts_hq, w, color=C_HQ, edgecolor=BORDER, label="HQ")
ax.set_xticks(x)
ax.set_xticklabels(band_labels, fontsize=7, color=SUB, rotation=30)
ax.set_ylabel("Chunk count", color=SUB)
ax.set_title("Chunk Counts per Progress Band", color=TEXT, fontsize=13)
ax.legend(facecolor=CARD, edgecolor=BORDER, labelcolor=TEXT, fontsize=8)
fig.tight_layout()
_save(fig, out_path)
# ── Summary figure ───────────────────────────────────────────────────────
def plot_summary(
var_full: np.ndarray,
var_hq: np.ndarray,
band_edges: list[tuple[float, float]],
ks_full: list[int | None],
ks_hq: list[int | None],
bands_full: dict[int, list[dict]],
bands_hq: dict[int, list[dict]],
out_path: Path,
) -> None:
with np.errstate(invalid="ignore"):
mean_full = np.nanmean(var_full, axis=1)
mean_hq = np.nanmean(var_hq, axis=1)
ratio = np.where(np.isnan(mean_full) | np.isnan(mean_hq), np.nan,
mean_full / (mean_hq + 1e-8))
valid_ratio = ratio[~np.isnan(ratio)]
mean_ratio = float(np.mean(valid_ratio)) if len(valid_ratio) > 0 else float("nan")
# Index the unfiltered per-band ratio so peak_idx stays aligned with band_edges.
peak_idx = int(np.nanargmax(ratio)) if len(valid_ratio) > 0 else 0
peak_ratio = float(ratio[peak_idx]) if len(valid_ratio) > 0 else float("nan")
lo, hi = band_edges[peak_idx]
peak_band = f"{lo:.0%}–{hi:.0%}"
valid_kf = [k for k in ks_full if k is not None]
valid_kh = [k for k in ks_hq if k is not None]
mean_k_full = np.mean(valid_kf) if valid_kf else float("nan")
mean_k_hq = np.mean(valid_kh) if valid_kh else float("nan")
n_bands = len(band_edges)
deltas_full = [c["progress_delta"] for b in range(n_bands) for c in bands_full.get(b, [])]
deltas_hq = [c["progress_delta"] for b in range(n_bands) for c in bands_hq.get(b, [])]
mean_delta_full = float(np.mean(deltas_full)) if deltas_full else float("nan")
mean_delta_hq = float(np.mean(deltas_hq)) if deltas_hq else float("nan")
rows = [
("Mean variance ratio (Full / HQ)", f"{mean_ratio:.2f}x"),
("Peak variance ratio", f"{peak_ratio:.2f}x at {peak_band}"),
("Mean GMM K — Full", f"{mean_k_full:.1f}"),
("Mean GMM K — HQ", f"{mean_k_hq:.1f}"),
("Mean progress Δ — Full", f"{mean_delta_full:.4f}"),
("Mean progress Δ — HQ", f"{mean_delta_hq:.4f}"),
]
fig, ax = plt.subplots(figsize=(8, 3))
fig.patch.set_facecolor(BG)
ax.set_facecolor(CARD)
ax.axis("off")
table = ax.table(
cellText=[[m, v] for m, v in rows],
colLabels=["Metric", "Value"],
loc="center",
cellLoc="left",
)
table.auto_set_font_size(False)
table.set_fontsize(10)
for key, cell in table.get_celld().items():
cell.set_edgecolor(BORDER)
cell.set_facecolor(CARD)
cell.set_text_props(color=TEXT)
if key[0] == 0:
cell.set_text_props(color=TEXT, fontweight="bold")
table.scale(1, 1.6)
ax.set_title("Summary Statistics", color=TEXT, fontsize=13, pad=15)
fig.tight_layout()
_save(fig, out_path)
for metric, value in rows:
logger.info(" %s: %s", metric, value)
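The NaN handling at the top of plot_summary can be illustrated in isolation: a band that is empty in either dataset propagates NaN through its per-band mean and is excluded before averaging the ratio (the values below are made up):

```python
import numpy as np

mean_full = np.array([2.0, np.nan, 6.0])   # per-band mean variance, Full/Mixed
mean_hq = np.array([1.0, 3.0, np.nan])     # per-band mean variance, HQ
ratio = np.where(np.isnan(mean_full) | np.isnan(mean_hq), np.nan,
                 mean_full / (mean_hq + 1e-8))
valid = ratio[~np.isnan(ratio)]            # only band 0 has both means defined
mean_ratio = float(np.mean(valid)) if len(valid) else float("nan")
```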
# ── Main ─────────────────────────────────────────────────────────────────
def main(args: argparse.Namespace) -> None:
out = Path(args.output_dir)
out.mkdir(parents=True, exist_ok=True)
logger.info("Loading FULL dataset: %s", args.full_dataset)
episodes_full = load_episodes(args.full_dataset, args.n_joints, args.max_episodes)
logger.info("Loading HQ dataset: %s", args.hq_dataset)
episodes_hq = load_episodes(args.hq_dataset, args.n_joints, args.max_episodes)
logger.info("Loaded %d full episodes, %d HQ episodes", len(episodes_full), len(episodes_hq))
# Step 1: length threshold + filter
if args.min_episode_length is not None:
threshold = args.min_episode_length
else:
threshold = auto_length_threshold(episodes_full, episodes_hq)
logger.info("Episode length threshold: %d", threshold)
plot_length_distribution(episodes_full, episodes_hq, threshold, out / "0_length_distribution.png")
episodes_full = filter_episodes(episodes_full, threshold)
episodes_hq = filter_episodes(episodes_hq, threshold)
# Step 2: extract chunks
chunks_full = extract_chunks(episodes_full, args.chunk_size, args.chunk_stride)
chunks_hq = extract_chunks(episodes_hq, args.chunk_size, args.chunk_stride)
logger.info("Extracted %d full chunks, %d HQ chunks", len(chunks_full), len(chunks_hq))
# Step 3: fixed equal-width bands over episode-relative progress
band_edges = make_bands(args.n_bands)
n_bands = len(band_edges)
logger.info("Progress bands (%d): %s", n_bands,
[f"{lo:.0%}–{hi:.0%}" for lo, hi in band_edges])
chunks_full = assign_bands(chunks_full, band_edges)
chunks_hq = assign_bands(chunks_hq, band_edges)
bands_full = split_by_band(chunks_full, n_bands)
bands_hq = split_by_band(chunks_hq, n_bands)
# Plot 1: chunk counts
plot_chunk_counts(bands_full, bands_hq, band_edges, out / "1_chunk_counts_per_band.png")
# Step 4: variance heatmap
var_full = band_variance_matrix(bands_full, n_bands, args.n_joints)
var_hq = band_variance_matrix(bands_hq, n_bands, args.n_joints)
plot_variance_heatmap(var_full, var_hq, band_edges, out / "2_variance_heatmap.png")
# Step 5: progress delta
plot_progress_delta(bands_full, bands_hq, band_edges, out / "3_progress_delta_per_band.png")
# Step 6: GMM BIC
ks_full, ks_hq = plot_gmm_bic(bands_full, bands_hq, band_edges, args.seed, out / "4_gmm_bic_per_band.png")
# Step 7: PCA scatter
plot_pca_scatter(bands_full, bands_hq, band_edges, out / "5_pca_per_band.png")
# Summary
plot_summary(var_full, var_hq, band_edges, ks_full, ks_hq,
bands_full, bands_hq, out / "6_summary.png")
logger.info("All figures saved to %s", out)
if __name__ == "__main__":
p = argparse.ArgumentParser(
description="Chunk-level multi-modality analysis: Full/Mixed vs HQ dataset.",
formatter_class=argparse.RawDescriptionHelpFormatter,
)
p.add_argument("--full-dataset", default="lerobot-data-collection/level12_rac_2_2026-02-08_1")
p.add_argument("--hq-dataset", default="lerobot-data-collection/level2_final_quality3_trim_0_hil_data")
p.add_argument("--output-dir", default="./chunk_analysis")
p.add_argument("--chunk-size", type=int, default=30)
p.add_argument("--chunk-stride", type=int, default=15)
p.add_argument("--n-bands", type=int, default=5, help="Number of equal-width progress bands")
p.add_argument("--max-episodes", type=int, default=500)
p.add_argument("--n-joints", type=int, default=16)
p.add_argument("--min-episode-length", type=int, default=None,
help="Override auto-detected length filter threshold")
p.add_argument("--seed", type=int, default=42)
args = p.parse_args()
main(args)
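make_bands and assign_bands are not shown in this hunk; a plausible minimal sketch of equal-width progress bands (function names and exact behavior are assumptions inferred from how main() uses them) is:

```python
import numpy as np

def make_bands(n_bands: int) -> list[tuple[float, float]]:
    """Equal-width bands over episode-relative progress in [0, 1]."""
    edges = np.linspace(0.0, 1.0, n_bands + 1)
    return [(float(lo), float(hi)) for lo, hi in zip(edges[:-1], edges[1:])]

def band_index(progress: float, n_bands: int) -> int:
    # progress in [0, 1]; clamp progress == 1.0 into the last band
    return min(int(progress * n_bands), n_bands - 1)

bands = make_bands(5)
```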
+29
@@ -0,0 +1,29 @@
#!/bin/bash
#SBATCH --job-name=smolvla_libero_plus
#SBATCH --partition=hopper-prod
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --gpus-per-node=4
#SBATCH --cpus-per-task=48
#SBATCH --mem=200G
#SBATCH --time=12:00:00
#SBATCH --output=logs/smolvla_libero_plus_%j.out
#SBATCH --error=logs/smolvla_libero_plus_%j.err
set -euo pipefail
eval "$(conda shell.bash hook 2>/dev/null)"
conda activate lerobot312
cd /admin/home/pepijn/lerobot_wt_robocasa
lerobot-benchmark train \
--benchmarks libero_plus \
--policy-path lerobot/smolvla_base \
--hub-user pepijn223 \
--num-gpus 4 \
--steps 30000 \
--batch-size 32 \
--eval-freq 0 \
--wandb \
--dataset.repo_id=pepijn223/libero_plus_lerobot
+7 -2
@@ -49,9 +49,14 @@ import torch
from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig # noqa: F401
from lerobot.cameras.realsense.configuration_realsense import RealSenseCameraConfig # noqa: F401
from lerobot.robots import (
RobotConfig, # noqa: F401
from lerobot.robots import ( # noqa: F401
Robot,
RobotConfig,
bi_so_follower,
koch_follower,
make_robot_from_config,
omx_follower,
so_follower,
)
from lerobot.transport import (
services_pb2, # type: ignore
+1 -1
@@ -181,7 +181,7 @@ class ZMQCamera(Camera):
try:
message = self.socket.recv_string()
except Exception as e:
# Check for ZMQ timeout (EAGAIN/Again) without requiring global zmq import
# zmq is lazy-imported in connect(), so check by name to avoid a top-level import
if type(e).__name__ == "Again":
raise TimeoutError(f"{self} timeout after {self.timeout_ms}ms") from e
raise
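Matching zmq.error.Again by class name (so zmq need not be imported at module level) can be demonstrated with a stand-in exception; Again below is a local stand-in, not the real zmq class:

```python
class Again(Exception):
    """Stand-in for zmq.error.Again, which the real code avoids importing eagerly."""

def classify(exc: Exception) -> str:
    # The real code raises TimeoutError from the original exception instead.
    return "timeout" if type(exc).__name__ == "Again" else "other"
```

Checking `type(e).__name__` trades a small loss of precision (any class named "Again" matches) for keeping zmq a lazy dependency of `connect()`.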
+72 -4
@@ -23,6 +23,7 @@ import base64
import contextlib
import json
import logging
import threading
import time
from collections import deque
@@ -42,10 +43,57 @@ def encode_image(image: np.ndarray, quality: int = 80) -> str:
return base64.b64encode(buffer).decode("utf-8")
class CameraCaptureThread:
"""Background thread that continuously captures and encodes frames from a camera."""
def __init__(self, camera: OpenCVCamera, name: str):
self.camera = camera
self.name = name
self.latest_encoded: str | None = None # Pre-encoded JPEG as base64
self.latest_timestamp: float = 0.0
self.frame_lock = threading.Lock()
self.running = False
self.thread: threading.Thread | None = None
def start(self):
"""Start the capture thread."""
self.running = True
self.thread = threading.Thread(target=self._capture_loop, daemon=True)
self.thread.start()
def stop(self):
"""Stop the capture thread."""
self.running = False
if self.thread:
self.thread.join(timeout=1.0)
def _capture_loop(self):
"""Continuously capture and encode frames at the camera's native rate."""
while self.running:
try:
frame = self.camera.read() # Blocks at camera's native rate
timestamp = time.time()
# Encode immediately in capture thread (this is the slow part)
encoded = encode_image(frame)
with self.frame_lock:
self.latest_encoded = encoded
self.latest_timestamp = timestamp
except Exception as e:
logger.warning(f"Camera {self.name} capture error: {e}")
time.sleep(0.01)
def get_latest(self) -> tuple[str | None, float]:
"""Get the latest encoded frame and its timestamp."""
with self.frame_lock:
return self.latest_encoded, self.latest_timestamp
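The lock-protected "latest frame" handoff in CameraCaptureThread reduces to a small producer/reader pattern; this sketch uses a placeholder string for the frame payload:

```python
import threading
import time

class LatestValue:
    """Holds the most recent value plus its timestamp, guarded by a lock."""

    def __init__(self):
        self._lock = threading.Lock()
        self._value = None
        self._ts = 0.0

    def set(self, value):
        with self._lock:
            self._value, self._ts = value, time.time()

    def get(self):
        with self._lock:
            return self._value, self._ts

latest = LatestValue()
producer = threading.Thread(target=lambda: latest.set("frame-0"), daemon=True)
producer.start()
producer.join()
value, ts = latest.get()
```

Readers never block on capture or encoding: they only take the lock long enough to copy two references, which is why the publish loop can run at its own rate.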
class ImageServer:
def __init__(self, config: dict, port: int = 5555):
# fps controls the publish loop rate (how often frames are sent over ZMQ), not the camera capture rate
self.fps = config.get("fps", 30)
self.cameras: dict[str, OpenCVCamera] = {}
self.capture_threads: dict[str, CameraCaptureThread] = {}
for name, cfg in config.get("cameras", {}).items():
shape = cfg.get("shape", [480, 640])
@@ -61,6 +109,10 @@ class ImageServer:
self.cameras[name] = camera
logger.info(f"Camera {name}: {shape[1]}x{shape[0]}")
# Create capture thread for this camera
capture_thread = CameraCaptureThread(camera, name)
self.capture_threads[name] = capture_thread
# ZMQ PUB socket
self.context = zmq.Context()
self.socket = self.context.socket(zmq.PUB)
@@ -73,6 +125,18 @@ class ImageServer:
def run(self):
frame_count = 0
frame_times = deque(maxlen=60)
last_published_ts: dict[str, float] = {}
# Start all capture threads
for capture_thread in self.capture_threads.values():
capture_thread.start()
# Wait for first frames to be captured and encoded
logger.info("Waiting for cameras to start capturing...")
for name, capture_thread in self.capture_threads.items():
while capture_thread.get_latest()[0] is None:
time.sleep(0.01)
logger.info(f"Camera {name} ready (capture + encode in background)")
try:
while True:
@@ -80,10 +144,12 @@ class ImageServer:
# Build message
message = {"timestamps": {}, "images": {}}
for name, cam in self.cameras.items():
frame = cam.read() # Returns RGB
message["timestamps"][name] = time.time()
message["images"][name] = encode_image(frame)
for name, capture_thread in self.capture_threads.items():
encoded, timestamp = capture_thread.get_latest()
if encoded is not None and timestamp > last_published_ts.get(name, 0.0):
message["timestamps"][name] = timestamp
message["images"][name] = encoded
last_published_ts[name] = timestamp
# Send as JSON string (suppress if buffer full)
with contextlib.suppress(zmq.Again):
@@ -102,6 +168,8 @@ class ImageServer:
except KeyboardInterrupt:
pass
finally:
for capture_thread in self.capture_threads.values():
capture_thread.stop()
for cam in self.cameras.values():
cam.disconnect()
self.socket.close()
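The last_published_ts gating in run() keeps a camera's frame from being re-sent when the publish loop outpaces capture. In isolation, with hypothetical timestamps:

```python
last_published_ts: dict[str, float] = {}

def should_publish(name: str, timestamp: float) -> bool:
    """Publish only frames strictly newer than the last one sent for this camera."""
    if timestamp > last_published_ts.get(name, 0.0):
        last_published_ts[name] = timestamp
        return True
    return False

# Second call sees the same timestamp, so the stale frame is skipped.
sent = [should_publish("front", 1.0), should_publish("front", 1.0), should_publish("front", 2.5)]
```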
+51 -2
@@ -49,15 +49,64 @@ class WandBConfig:
mode: str | None = None # Allowed values: 'online', 'offline', 'disabled'. Defaults to 'online'
@dataclass
class EvalDockerConfig:
# Docker image to use for evaluation (e.g., "ghcr.io/org/lerobot-eval-libero:latest").
# Takes precedence over eval.envhub_ref.
image: str | None = None
# Optional EnvHub reference to resolve an image, e.g. "envhub://lerobot/libero_plus@v1".
envhub_ref: str | None = None
# If true, mount the local repository and prefer local source code in the container.
use_local_code: bool = True
# Pull the image before running.
pull: bool = True
# Docker --gpus value. Set to None to disable GPU flags and run CPU-only.
gpus: str | None = "all"
# Docker --shm-size value (increase when using larger eval.batch_size values).
shm_size: str = "8g"
# Port on which the host HTTP policy inference server listens.
port: int = 50051
@dataclass
class EvalConfig:
n_episodes: int = 50
# `batch_size` specifies the number of environments to use in a gym.vector.VectorEnv.
# Number of sub-envs per task inside one VectorEnv. Increase to improve per-task
# inference throughput until GPU or simulator memory saturates.
batch_size: int = 50
# `use_async_envs` specifies whether to use asynchronous environments (multiprocessing).
# Use AsyncVectorEnv (multiprocessing). Prefer SyncVectorEnv unless your environment
# spends significant time in Python-side stepping and can benefit from process parallelism.
use_async_envs: bool = False
# Runtime where evaluation executes: "local", "docker", or "multiprocess".
# "multiprocess" spawns local worker processes + policy servers.
runtime: str = "local"
docker: EvalDockerConfig = field(default_factory=EvalDockerConfig)
# Number of parallel eval script instances to launch for one run.
# instance_count > 1 enables multi-instance task sharding.
instance_count: int = 1
# 0-indexed shard id for this process. Users usually leave this at 0.
# Additional shards are launched automatically by `lerobot-eval` when instance_count > 1.
instance_id: int = 0
# Number of policy inference servers to run in parallel (docker/multiprocess runtimes).
# Each server loads a copy of the model and listens on consecutive ports
# starting from eval.docker.port. Workers are round-robin assigned.
policy_servers: int = 1
# Base port for policy servers in multiprocess mode.
port: int = 50051
def __post_init__(self) -> None:
if self.runtime not in {"local", "docker", "multiprocess"}:
raise ValueError(
f"Unsupported eval.runtime '{self.runtime}'. Expected one of: local, docker, multiprocess."
)
if self.instance_count < 1:
raise ValueError("eval.instance_count must be >= 1.")
if self.instance_id < 0 or self.instance_id >= self.instance_count:
raise ValueError(
f"eval.instance_id must be in [0, {self.instance_count - 1}] (got {self.instance_id})."
)
if self.policy_servers < 1:
raise ValueError("eval.policy_servers must be >= 1.")
if self.batch_size > self.n_episodes:
raise ValueError(
"The eval batch size is greater than the number of eval episodes "
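The round-robin assignment implied by the policy_servers comment above (worker i talks to port base + i % policy_servers) is easy to check in isolation; the numbers are illustrative, matching the defaults:

```python
base_port, policy_servers, instance_count = 50051, 4, 20

# Worker i is assigned to server (i % policy_servers), listening on a consecutive port.
assignments = [base_port + (i % policy_servers) for i in range(instance_count)]
per_server = {p: assignments.count(p) for p in range(base_port, base_port + policy_servers)}
```

With 20 workers and 4 servers each server ends up with exactly 5 workers, which is the load balancing the docstring describes.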
+2
@@ -40,6 +40,8 @@ class EvalPipelineConfig:
rename_map: dict[str, str] = field(default_factory=dict)
# Explicit consent to execute remote code from the Hub (required for hub environments).
trust_remote_code: bool = False
# Push eval results (metrics JSON, rollout videos, model card update) to the model's Hub repo.
push_to_hub: bool = False
def __post_init__(self) -> None:
# HACK: We parse again the cli args here to get the pretrained path if there was one.
+3
@@ -50,6 +50,9 @@ class TrainPipelineConfig(HubMixin):
# `seed` is used for training (eg: model initialization, dataset shuffling)
# AND for the evaluation environments.
seed: int | None = 1000
# Set to True to use deterministic cuDNN algorithms for reproducibility.
# This disables cudnn.benchmark and may reduce training speed by ~10-20%.
cudnn_deterministic: bool = False
# Number of workers for the dataloader.
num_workers: int = 4
batch_size: int = 8
+4
@@ -126,7 +126,11 @@ def make_dataset(cfg: TrainPipelineConfig) -> LeRobotDataset | MultiLeRobotDatas
)
if cfg.dataset.use_imagenet_stats:
if dataset.meta.stats is None:
dataset.meta.stats = {}
for key in dataset.meta.camera_keys:
if key not in dataset.meta.stats:
dataset.meta.stats[key] = {}
for stats_type, stats in IMAGENET_STATS.items():
dataset.meta.stats[key][stats_type] = torch.tensor(stats, dtype=torch.float32)
+2 -4
@@ -21,7 +21,7 @@ from collections import deque
from collections.abc import Iterable, Iterator
from pathlib import Path
from pprint import pformat
from typing import Any, Generic, TypeVar
from typing import Any
import datasets
import numpy as np
@@ -78,8 +78,6 @@ DEFAULT_FEATURES = {
"task_index": {"dtype": "int64", "shape": (1,), "names": None},
}
T = TypeVar("T")
def get_parquet_file_size_in_mb(parquet_path: str | Path) -> float:
metadata = pq.read_metadata(parquet_path)
@@ -1234,7 +1232,7 @@ class LookAheadError(Exception):
pass
class Backtrackable(Generic[T]):
class Backtrackable[T]:
"""
Wrap any iterator/iterable so you can step back up to `history` items
and look ahead up to `lookahead` items.
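This hunk only changes Backtrackable's generic syntax (typing.Generic[T] → the PEP 695 `class Backtrackable[T]:` form, Python 3.12+); behavior is unchanged. A minimal untyped sketch of the wrapper's core idea, under the assumption that it buffers consumed items in a deque:

```python
from collections import deque

class MiniBacktrackable:
    """Toy version: step back up to `history` items; re-yields them on the next call."""

    def __init__(self, iterable, history=2):
        self._it = iter(iterable)
        self._history = deque(maxlen=history)
        self._replay = deque()

    def __next__(self):
        item = self._replay.popleft() if self._replay else next(self._it)
        self._history.append(item)
        return item

    def step_back(self):
        # Move the most recently yielded item back onto the replay queue.
        self._replay.appendleft(self._history.pop())

bt = MiniBacktrackable([1, 2, 3])
first, second = next(bt), next(bt)
bt.step_back()
again = next(bt)  # re-yields the item we stepped back over
```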
+103
@@ -45,6 +45,10 @@ class EnvConfig(draccus.ChoiceRegistry, abc.ABC):
fps: int = 30
features: dict[str, PolicyFeature] = field(default_factory=dict)
features_map: dict[str, str] = field(default_factory=dict)
# Upper bound on concurrent task evaluation in `lerobot-eval`.
# - For lazy wrappers (e.g. LIBERO/LIBERO-plus), values >1 can enable chunked
# task batching with one policy forward pass over multiple tasks.
# - For other envs, values >1 use a threaded task scheduler fallback.
max_parallel_tasks: int = 1
disable_env_checker: bool = True
@@ -346,6 +350,105 @@ class LiberoEnv(EnvConfig):
return kwargs
@EnvConfig.register_subclass("libero_plus")
@dataclass
class LiberoPlusEnv(LiberoEnv):
"""Alias config for LIBERO-plus benchmarks.
LIBERO-plus keeps the same Python package/module names as LIBERO, so this
config reuses the existing LIBERO env implementation while making intent explicit
in experiment configs (`env.type=libero_plus`).
"""
task: str = "libero_spatial,libero_object,libero_goal,libero_10"
@EnvConfig.register_subclass("robocasa")
@dataclass
class RoboCasaEnv(EnvConfig):
"""RoboCasa kitchen composite-task environments.
Wraps ``robocasa.wrappers.gym_wrapper.RoboCasaGymEnv`` with a flat 12-D Box
action space and a structured pixel + state observation dict.
Selected benchmark tasks (3 short + 2 long):
Short: PickPlaceCounterToCabinet, PrepareToast, CoffeeSetupMug
Long: PrepareCoffee, RestockPantry
"""
task: str = "PickPlaceCounterToCabinet"
tasks: list[str] | None = None # multi-task: list of task names (without robocasa/ prefix)
fps: int = 20
episode_length: int = 500
image_size: int = 128
split: str = "target" # "pretrain" or "target"
features: dict[str, PolicyFeature] = field(
default_factory=lambda: {
ACTION: PolicyFeature(type=FeatureType.ACTION, shape=(12,)),
}
)
features_map: dict[str, str] = field(
default_factory=lambda: {
ACTION: ACTION,
"agentview_left": f"{OBS_IMAGES}.agentview_left",
"agentview_right": f"{OBS_IMAGES}.agentview_right",
"eye_in_hand": f"{OBS_IMAGES}.eye_in_hand",
"robot_state": OBS_STATE,
}
)
def __post_init__(self):
for cam in ("agentview_left", "agentview_right", "eye_in_hand"):
self.features[cam] = PolicyFeature(
type=FeatureType.VISUAL, shape=(self.image_size, self.image_size, 3)
)
self.features["robot_state"] = PolicyFeature(type=FeatureType.STATE, shape=(16,))
@property
def gym_kwargs(self) -> dict:
return {"split": self.split}
@EnvConfig.register_subclass("robomme")
@dataclass
class RoboMMEEnv(EnvConfig):
"""RoboMME memory-augmented manipulation benchmark (ManiSkill/SAPIEN).
16 tasks across 4 suites: Counting, Permanence, Reference, Imitation.
Uses BenchmarkEnvBuilder from the robomme package.
"""
task: str = "PickXtimes"
fps: int = 10
episode_length: int = 300
action_space: str = "joint_angle"
dataset_split: str = "test"
task_ids: list[int] | None = None
features: dict[str, PolicyFeature] = field(
default_factory=lambda: {
ACTION: PolicyFeature(type=FeatureType.ACTION, shape=(8,)),
"front_rgb": PolicyFeature(type=FeatureType.VISUAL, shape=(256, 256, 3)),
"wrist_rgb": PolicyFeature(type=FeatureType.VISUAL, shape=(256, 256, 3)),
OBS_STATE: PolicyFeature(type=FeatureType.STATE, shape=(8,)),
}
)
features_map: dict[str, str] = field(
default_factory=lambda: {
ACTION: ACTION,
"front_rgb": f"{OBS_IMAGES}.front",
"wrist_rgb": f"{OBS_IMAGES}.wrist",
OBS_STATE: OBS_STATE,
}
)
@property
def gym_kwargs(self) -> dict:
return {
"action_space": self.action_space,
"dataset": self.dataset_split,
}
@EnvConfig.register_subclass("metaworld")
@dataclass
class MetaworldEnv(EnvConfig):
+442
@@ -0,0 +1,442 @@
#!/usr/bin/env python
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Docker runtime for lerobot-eval.
The policy stays on the host GPU; gym environments run inside Docker containers.
Each container runs `lerobot-eval-worker`, which calls back to a host HTTP inference
server for action chunks.
Architecture:
host (GPU):
1. Load policy + preprocessors from EvalPipelineConfig.
2. Start ``policy_servers`` HTTP inference servers on consecutive ports.
3. Spawn ``instance_count`` Docker containers, round-robin assigned to servers.
4. Wait; collect per-task JSON written to the mounted output volume.
5. Merge shards → aggregate → write eval_info.json.
container (CPU only):
1. make_env(cfg.env) → shard tasks by (instance_id, instance_count).
2. For each task: run n_episodes, POST obs to /predict_chunk, step env.
3. Write per-task JSON to /results/worker_{instance_id}.json.
"""
from __future__ import annotations
import json
import logging
import pickle # nosec B403 — internal serialisation only
import platform
import subprocess # nosec B404
import sys
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer
from pathlib import Path
from typing import TYPE_CHECKING, Any
import numpy as np
import torch
from lerobot.envs.factory import make_env_pre_post_processors
from lerobot.policies.factory import make_policy, make_pre_post_processors
from lerobot.utils.utils import get_safe_torch_device
if TYPE_CHECKING:
from lerobot.configs.eval import EvalPipelineConfig
logger = logging.getLogger(__name__)
# ---------------------------------------------------------------------------
# HTTP inference server (host side)
# ---------------------------------------------------------------------------
class _PolicyInferenceHandler(BaseHTTPRequestHandler):
"""POST /predict_chunk → pickled numpy action chunk."""
server: _InferenceServer
def do_POST(self) -> None:
if self.path != "/predict_chunk":
self.send_error(404)
return
length = int(self.headers["Content-Length"])
body = self.rfile.read(length)
payload: dict = pickle.loads(body) # nosec B301
obs_t: dict = payload["obs_t"]
with self.server._lock:
chunk_np = self.server._predict(obs_t)
resp = pickle.dumps(chunk_np) # nosec B301
self.send_response(200)
self.send_header("Content-Type", "application/octet-stream")
self.send_header("Content-Length", str(len(resp)))
self.end_headers()
self.wfile.write(resp)
def log_message(self, fmt: str, *args: Any) -> None: # noqa: ANN401
pass # suppress per-request logs
class _InferenceServer(HTTPServer):
"""Wraps the loaded policy behind a trivial HTTP interface."""
def __init__(
self,
addr: tuple[str, int],
policy: Any,
env_preprocessor: Any,
preprocessor: Any,
postprocessor: Any,
) -> None:
super().__init__(addr, _PolicyInferenceHandler)
self._policy = policy
self._env_preprocessor = env_preprocessor
self._preprocessor = preprocessor
self._postprocessor = postprocessor
self._lock = threading.Lock()
self._device = torch.device(str(policy.config.device))
def _predict(self, obs_t: dict) -> np.ndarray:
"""Apply full preprocessing pipeline and return (n_action_steps, A) numpy chunk."""
obs = self._env_preprocessor(obs_t)
obs = self._preprocessor(obs)
obs_gpu: dict = {k: v.to(self._device) if isinstance(v, torch.Tensor) else v for k, v in obs.items()}
with torch.no_grad():
chunk: torch.Tensor = self._policy.predict_action_chunk(obs_gpu) # (B, T, A)
n_action_steps = getattr(self._policy.config, "n_action_steps", chunk.shape[1])
batch, n_steps, action_dim = chunk.shape
chunk_2d = chunk.reshape(batch * n_steps, action_dim) # (B*T, A)
chunk_2d = self._postprocessor(chunk_2d) # (B*T, A)
return chunk_2d[:n_action_steps].cpu().numpy() # (n_action_steps, A)
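The (B, T, A) → (B·T, A) reshape followed by the `[:n_action_steps]` slice in _predict is only correct for a single-environment batch (B = 1), in which case the first n_action_steps rows are exactly the first steps of the chunk. In numpy, with illustrative sizes:

```python
import numpy as np

B, T, A, n_action_steps = 1, 50, 7, 10       # e.g. chunk_size=50, re-plan every 10 steps
chunk = np.arange(B * T * A, dtype=np.float32).reshape(B, T, A)
chunk_2d = chunk.reshape(B * T, A)           # (50, 7)
actions = chunk_2d[:n_action_steps]          # (10, 7): what the worker receives
```

With B > 1 the same slice would mix rows from the first environment only, so a batched server would need to slice along the time axis before flattening.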
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
def _get_host_ip() -> str:
"""Return the IP that containers can use to reach the host."""
if platform.system() in ("Darwin", "Windows"):
return "host.docker.internal"
return "172.17.0.1" # Linux Docker bridge default gateway
def _resolve_image(cfg: EvalPipelineConfig) -> str:
"""Return the Docker image name to use for the env containers."""
if cfg.eval.docker.image:
return cfg.eval.docker.image
return f"lerobot-benchmark-{cfg.env.type}"
def _env_argv() -> list[str]:
"""Extract --env.* args from sys.argv to forward verbatim to the worker."""
return [arg for arg in sys.argv[1:] if arg.startswith("--env.")]
def _spawn_container(
*,
image: str,
instance_id: int,
instance_count: int,
server_address: str,
n_episodes: int,
seed: int,
output_dir: Path,
docker_cfg: Any,
env_argv: list[str],
) -> subprocess.Popen:
output_dir.mkdir(parents=True, exist_ok=True)
container_results = "/results"
cmd: list[str] = ["docker", "run", "--rm"]
if docker_cfg.gpus:
cmd += [f"--gpus={docker_cfg.gpus}"]
cmd += [f"--shm-size={docker_cfg.shm_size}"]
cmd += ["-v", f"{output_dir.resolve()}:{container_results}"]
# Allow containers on Linux to resolve host.docker.internal.
cmd += ["--add-host=host.docker.internal:host-gateway"]
cmd.append(image)
cmd += [
"lerobot-eval-worker",
*env_argv,
f"--server_address={server_address}",
f"--n_episodes={n_episodes}",
f"--seed={seed}",
f"--instance_id={instance_id}",
f"--instance_count={instance_count}",
f"--output_path={container_results}/worker_{instance_id}.json",
]
logger.info(
"Spawning container %d/%d: %s",
instance_id + 1,
instance_count,
" ".join(cmd),
)
return subprocess.Popen(cmd) # nosec B603 B607
# ---------------------------------------------------------------------------
# Public entry point
# ---------------------------------------------------------------------------
def run_eval_in_docker(cfg: EvalPipelineConfig) -> None:
"""Run eval with env in Docker containers and policy on the host GPU.
Writes ``eval_info.json`` to ``cfg.output_dir``. Called by
``lerobot_eval._run_eval_worker`` when ``eval.runtime == "docker"``.
"""
# Import here to avoid circular import at module level.
from lerobot.scripts.lerobot_eval import _aggregate_eval_from_per_task
start_t = time.time()
output_dir = Path(cfg.output_dir)
output_dir.mkdir(parents=True, exist_ok=True)
docker_cfg = cfg.eval.docker
# Optionally pull the image before starting.
image = _resolve_image(cfg)
if docker_cfg.pull:
logger.info("Pulling Docker image: %s", image)
subprocess.run(["docker", "pull", image], check=True) # nosec B603 B607
# ── Load policy + all preprocessors on the host GPU ──────────────────
device = get_safe_torch_device(cfg.policy.device, log=True)
torch.backends.cudnn.benchmark = True
torch.backends.cuda.matmul.allow_tf32 = True
policy = make_policy(cfg=cfg.policy, env_cfg=cfg.env, rename_map=cfg.rename_map)
policy.eval()
preprocessor_overrides: dict = {
"device_processor": {"device": str(device)},
"rename_observations_processor": {"rename_map": cfg.rename_map},
}
preprocessor, postprocessor = make_pre_post_processors(
policy_cfg=cfg.policy,
pretrained_path=cfg.policy.pretrained_path,
preprocessor_overrides=preprocessor_overrides,
)
env_preprocessor, _env_postprocessor = make_env_pre_post_processors(
env_cfg=cfg.env,
policy_cfg=cfg.policy,
)
# ── Start HTTP inference server(s) ────────────────────────────────────
n_policy_servers = cfg.eval.policy_servers
base_port = docker_cfg.port
host_ip = _get_host_ip()
instance_count = cfg.eval.instance_count
env_argv = _env_argv()
servers: list[_InferenceServer] = []
for s_idx in range(n_policy_servers):
port = base_port + s_idx
if s_idx > 0:
policy = make_policy(cfg=cfg.policy, env_cfg=cfg.env, rename_map=cfg.rename_map)
policy.eval()
preprocessor, postprocessor = make_pre_post_processors(
policy_cfg=cfg.policy,
pretrained_path=cfg.policy.pretrained_path,
preprocessor_overrides=preprocessor_overrides,
)
env_preprocessor, _ = make_env_pre_post_processors(
env_cfg=cfg.env, policy_cfg=cfg.policy,
)
srv = _InferenceServer(
("0.0.0.0", port), # nosec B104
policy=policy,
env_preprocessor=env_preprocessor,
preprocessor=preprocessor,
postprocessor=postprocessor,
)
t = threading.Thread(target=srv.serve_forever, daemon=True)
t.start()
servers.append(srv)
logger.info("Policy inference server %d/%d running on port %d", s_idx + 1, n_policy_servers, port)
# ── Spawn containers (round-robin across policy servers) ──────────────
container_dirs: list[Path] = []
procs: list[subprocess.Popen] = []
try:
for i in range(instance_count):
assigned_port = base_port + (i % n_policy_servers)
server_address = f"{host_ip}:{assigned_port}"
shard_dir = output_dir / "shards" / str(i)
container_dirs.append(shard_dir)
proc = _spawn_container(
image=image,
instance_id=i,
instance_count=instance_count,
server_address=server_address,
n_episodes=cfg.eval.n_episodes,
seed=cfg.seed,
output_dir=shard_dir,
docker_cfg=docker_cfg,
env_argv=env_argv,
)
procs.append(proc)
failed: list[tuple[int, int]] = []
for i, proc in enumerate(procs):
rc = proc.wait()
if rc != 0:
failed.append((i, rc))
logger.error("Container %d/%d exited with code %d", i + 1, instance_count, rc)
if failed:
raise RuntimeError(f"Docker eval containers failed (instance_id, exit_code): {failed}")
finally:
for srv in servers:
srv.shutdown()
# ── Collect and merge per-task results ───────────────────────────────
per_task: list[dict] = []
for i, shard_dir in enumerate(container_dirs):
result_file = shard_dir / f"worker_{i}.json"
with open(result_file) as f:
shard_data: dict = json.load(f)
per_task.extend(shard_data.get("per_task", []))
per_task.sort(key=lambda x: (x["task_group"], x["task_id"]))
info = _aggregate_eval_from_per_task(per_task, total_eval_s=time.time() - start_t)
with open(output_dir / "eval_info.json", "w") as f:
json.dump(info, f, indent=2)
logger.info("Docker eval complete. Results: %s/eval_info.json", output_dir)
def run_eval_multiprocess(cfg: EvalPipelineConfig) -> None:
"""Run eval with multiple local worker processes and policy servers (no Docker).
Same architecture as Docker runtime but spawns `lerobot-eval-worker` as local
subprocesses. Works on SLURM clusters and anywhere Docker is unavailable.
"""
from lerobot.scripts.lerobot_eval import _aggregate_eval_from_per_task
start_t = time.time()
output_dir = Path(cfg.output_dir)
output_dir.mkdir(parents=True, exist_ok=True)
device = get_safe_torch_device(cfg.policy.device, log=True)
torch.backends.cudnn.benchmark = True
torch.backends.cuda.matmul.allow_tf32 = True
policy = make_policy(cfg=cfg.policy, env_cfg=cfg.env, rename_map=cfg.rename_map)
policy.eval()
preprocessor_overrides: dict = {
"device_processor": {"device": str(device)},
"rename_observations_processor": {"rename_map": cfg.rename_map},
}
preprocessor, postprocessor = make_pre_post_processors(
policy_cfg=cfg.policy,
pretrained_path=cfg.policy.pretrained_path,
preprocessor_overrides=preprocessor_overrides,
)
env_preprocessor, _env_postprocessor = make_env_pre_post_processors(
env_cfg=cfg.env, policy_cfg=cfg.policy,
)
n_policy_servers = cfg.eval.policy_servers
base_port = cfg.eval.port
instance_count = cfg.eval.instance_count
env_argv = _env_argv()
servers: list[_InferenceServer] = []
for s_idx in range(n_policy_servers):
port = base_port + s_idx
if s_idx > 0:
policy = make_policy(cfg=cfg.policy, env_cfg=cfg.env, rename_map=cfg.rename_map)
policy.eval()
preprocessor, postprocessor = make_pre_post_processors(
policy_cfg=cfg.policy,
pretrained_path=cfg.policy.pretrained_path,
preprocessor_overrides=preprocessor_overrides,
)
env_preprocessor, _ = make_env_pre_post_processors(
env_cfg=cfg.env, policy_cfg=cfg.policy,
)
srv = _InferenceServer(
("0.0.0.0", port), # nosec B104
policy=policy,
env_preprocessor=env_preprocessor,
preprocessor=preprocessor,
postprocessor=postprocessor,
)
t = threading.Thread(target=srv.serve_forever, daemon=True)
t.start()
servers.append(srv)
logger.info("Policy server %d/%d on port %d", s_idx + 1, n_policy_servers, port)
worker_dirs: list[Path] = []
procs: list[subprocess.Popen] = []
try:
for i in range(instance_count):
assigned_port = base_port + (i % n_policy_servers)
shard_dir = output_dir / "shards" / str(i)
shard_dir.mkdir(parents=True, exist_ok=True)
worker_dirs.append(shard_dir)
cmd = [
sys.executable, "-m", "lerobot.scripts.lerobot_eval_worker",
*env_argv,
f"--server_address=127.0.0.1:{assigned_port}",
f"--n_episodes={cfg.eval.n_episodes}",
f"--seed={cfg.seed}",
f"--instance_id={i}",
f"--instance_count={instance_count}",
f"--output_path={shard_dir / f'worker_{i}.json'}",
]
logger.info("Spawning worker %d/%d → port %d", i + 1, instance_count, assigned_port)
procs.append(subprocess.Popen(cmd)) # nosec B603
failed: list[tuple[int, int]] = []
for i, proc in enumerate(procs):
rc = proc.wait()
if rc != 0:
failed.append((i, rc))
logger.error("Worker %d/%d exited with code %d", i + 1, instance_count, rc)
if failed:
raise RuntimeError(f"Multiprocess eval workers failed (id, exit_code): {failed}")
finally:
for srv in servers:
srv.shutdown()
per_task: list[dict] = []
for i, shard_dir in enumerate(worker_dirs):
result_file = shard_dir / f"worker_{i}.json"
with open(result_file) as f:
shard_data: dict = json.load(f)
per_task.extend(shard_data.get("per_task", []))
per_task.sort(key=lambda x: (x["task_group"], x["task_id"]))
info = _aggregate_eval_from_per_task(per_task, total_eval_s=time.time() - start_t)
with open(output_dir / "eval_info.json", "w") as f:
json.dump(info, f, indent=2)
logger.info("Multiprocess eval complete. Results: %s/eval_info.json", output_dir)
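The worker-to-server assignment in the loop above is a plain round-robin over consecutive ports; a minimal standalone sketch (`assign_ports` is a hypothetical helper, not part of the codebase):

```python
def assign_ports(base_port: int, instance_count: int, n_policy_servers: int) -> list[int]:
    # Worker i talks to server (i % n_policy_servers), mirroring
    # `assigned_port = base_port + (i % n_policy_servers)` above.
    return [base_port + (i % n_policy_servers) for i in range(instance_count)]


# 8 workers spread across 4 servers starting at port 50051:
ports = assign_ports(50051, 8, 4)
```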
+59 -6
@@ -20,11 +20,21 @@ import gymnasium as gym
from gymnasium.envs.registration import registry as gym_registry
from lerobot.configs.policies import PreTrainedConfig
from lerobot.envs.configs import AlohaEnv, EnvConfig, HubEnvConfig, IsaaclabArenaEnv, LiberoEnv, PushtEnv
from lerobot.envs.configs import (
AlohaEnv,
EnvConfig,
HubEnvConfig,
IsaaclabArenaEnv,
LiberoEnv,
LiberoPlusEnv,
PushtEnv,
RoboCasaEnv,
RoboMMEEnv,
)
from lerobot.envs.utils import _call_make_env, _download_hub_file, _import_hub_module, _normalize_hub_result
from lerobot.policies.xvla.configuration_xvla import XVLAConfig
from lerobot.processor import ProcessorStep
from lerobot.processor.env_processor import IsaaclabArenaProcessorStep, LiberoProcessorStep
from lerobot.processor.env_processor import IsaaclabArenaProcessorStep, LiberoProcessorStep, RoboCasaProcessorStep
from lerobot.processor.pipeline import PolicyProcessorPipeline
@@ -35,6 +45,12 @@ def make_env_config(env_type: str, **kwargs) -> EnvConfig:
return PushtEnv(**kwargs)
elif env_type == "libero":
return LiberoEnv(**kwargs)
elif env_type == "libero_plus":
return LiberoPlusEnv(**kwargs)
elif env_type == "robocasa":
return RoboCasaEnv(**kwargs)
elif env_type == "robomme":
return RoboMMEEnv(**kwargs)
else:
raise ValueError(f"Env type '{env_type}' is not available.")
@@ -70,9 +86,13 @@ def make_env_pre_post_processors(
return make_xvla_libero_pre_post_processors()
# For LIBERO environments, add the LiberoProcessorStep to preprocessor
if isinstance(env_cfg, LiberoEnv) or "libero" in env_cfg.type:
if isinstance(env_cfg, (LiberoEnv, LiberoPlusEnv)) or "libero" in env_cfg.type:
preprocessor_steps.append(LiberoProcessorStep())
# For RoboCasa environments, add the RoboCasaProcessorStep to preprocessor
if isinstance(env_cfg, RoboCasaEnv) or "robocasa" in env_cfg.type:
preprocessor_steps.append(RoboCasaProcessorStep())
# For Isaaclab Arena environments, add the IsaaclabArenaProcessorStep
if isinstance(env_cfg, IsaaclabArenaEnv) or "isaaclab_arena" in env_cfg.type:
# Parse comma-separated keys (handle None for state-based policies)
@@ -105,7 +125,7 @@ def make_env(
use_async_envs: bool = False,
hub_cache_dir: str | None = None,
trust_remote_code: bool = False,
) -> dict[str, dict[int, gym.vector.VectorEnv]]:
) -> dict[str, dict[int, Any]]:
"""Makes a gym vector environment according to the config or Hub reference.
Args:
@@ -123,8 +143,9 @@ def make_env(
ModuleNotFoundError: If the requested env package is not installed
Returns:
dict[str, dict[int, gym.vector.VectorEnv]]:
A mapping from suite name to indexed vectorized environments.
dict[str, dict[int, Any]]:
A mapping from suite name to indexed environments. Values are either
materialized vector envs or lazy wrappers that materialize on first use.
- For multi-task benchmarks (e.g., LIBERO): one entry per suite, and one vec env per task_id.
- For single-task environments: a single suite entry (cfg.type) with task_id=0.
@@ -171,6 +192,11 @@ def make_env(
if cfg.task is None:
raise ValueError("LiberoEnv requires a task to be specified")
if cfg.type == "libero_plus":
from lerobot.envs.libero import _check_libero_plus_assets
_check_libero_plus_assets()
return create_libero_envs(
task=cfg.task,
n_envs=n_envs,
@@ -181,6 +207,33 @@ def make_env(
control_mode=cfg.control_mode,
episode_length=cfg.episode_length,
)
elif "robocasa" in cfg.type:
from lerobot.envs.robocasa import create_robocasa_envs
tasks = cfg.tasks if cfg.tasks else [cfg.task]
return create_robocasa_envs(
tasks=tasks,
n_envs=n_envs,
image_size=cfg.image_size,
split=cfg.split,
episode_length=cfg.episode_length,
gym_kwargs=cfg.gym_kwargs,
env_cls=env_cls,
)
elif "robomme" in cfg.type:
from lerobot.envs.robomme import create_robomme_envs
return create_robomme_envs(
task=cfg.task,
n_envs=n_envs,
action_space_type=cfg.action_space,
dataset=cfg.dataset_split,
episode_length=cfg.episode_length,
task_ids=cfg.task_ids,
env_cls=env_cls,
)
elif "metaworld" in cfg.type:
from lerobot.envs.metaworld import create_metaworld_envs
+58
@@ -0,0 +1,58 @@
#!/usr/bin/env python
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
from collections.abc import Callable, Sequence
from typing import Any
class LazyVectorEnv:
"""Defer vector-env construction until first usage.
This is useful for benchmarks with many tasks: we can register one env object
per task without eagerly allocating all simulator/rendering resources.
"""
def __init__(self, env_cls: Callable[[Sequence[Callable[[], Any]]], Any], factory_fns: list[Callable]):
self._env_cls = env_cls
self._factory_fns = factory_fns
self._env = None
@property
def env_cls(self) -> Callable[[Sequence[Callable[[], Any]]], Any]:
return self._env_cls
@property
def factory_fns(self) -> list[Callable]:
return self._factory_fns
@property
def num_factory_fns(self) -> int:
return len(self._factory_fns)
def materialize(self):
if self._env is None:
self._env = self._env_cls(self._factory_fns)
return self._env
def close(self):
if self._env is not None:
self._env.close()
self._env = None
def __getattr__(self, name):
return getattr(self.materialize(), name)
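Because `__getattr__` forwards to `materialize()`, the wrapper acts as a drop-in stand-in for the real vector env and defers construction until the first attribute access. A condensed, self-contained demo (`FakeVecEnv` is a stand-in, not a real env class):

```python
class LazyVectorEnv:
    # condensed copy of the class above, just enough for a runnable demo
    def __init__(self, env_cls, factory_fns):
        self._env_cls = env_cls
        self._factory_fns = factory_fns
        self._env = None

    def materialize(self):
        if self._env is None:
            self._env = self._env_cls(self._factory_fns)
        return self._env

    def __getattr__(self, name):
        return getattr(self.materialize(), name)


built = []


class FakeVecEnv:
    def __init__(self, fns):
        built.append(self)  # record when construction actually happens
        self.num_envs = len(fns)


lazy = LazyVectorEnv(FakeVecEnv, [lambda: None] * 4)
constructed_before = len(built)  # nothing built yet
n = lazy.num_envs                # first attribute access materializes
constructed_after = len(built)   # exactly one env constructed
```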
+310 -13
@@ -16,6 +16,7 @@
from __future__ import annotations
import os
import re
from collections import defaultdict
from collections.abc import Callable, Iterable, Mapping, Sequence
from functools import partial
@@ -26,11 +27,222 @@ import gymnasium as gym
import numpy as np
import torch
from gymnasium import spaces
try:
import libero as _libero_pkg # noqa: F401
except ImportError:
raise ImportError(
"Could not import libero. Install benchmark dependencies with one of:\n"
" pip install -e \".[libero]\"\n"
" pip install -e \".[libero_plus]\" (alias: \".[libero-plus]\")"
)
# LIBERO's env_wrapper unconditionally imports wand (ImageMagick Python binding)
# which requires the system-level libMagickWand library. The wand features are only
# used for visual noise perturbations and are not needed for standard evaluation.
# Pre-install a stub so the import succeeds even without ImageMagick.
import sys
import types
if "wand" not in sys.modules:
try:
import wand.api # noqa: F401
except (ImportError, OSError):
class _AttrSink:
"""Accepts any attribute get/set without error."""
def __getattr__(self, _name):
return self
def __setattr__(self, _name, _value):
pass
def __call__(self, *a, **kw):
pass
_wand = types.ModuleType("wand")
_wand_api = types.ModuleType("wand.api")
_wand_api.library = _AttrSink()
_wand_image = types.ModuleType("wand.image")
_wand_image.Image = type("Image", (), {})
_wand.api = _wand_api
_wand.image = _wand_image
sys.modules["wand"] = _wand
sys.modules["wand.api"] = _wand_api
sys.modules["wand.image"] = _wand_image
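The stub pattern above generalizes: pre-registering a `types.ModuleType` in `sys.modules` makes a later `import` succeed without the real package installed. A minimal sketch using a hypothetical package name `fakelib` (not a real dependency):

```python
import sys
import types

# Build and register the stub before anything tries to import it.
stub = types.ModuleType("fakelib")
stub.library = object()  # attribute that downstream code expects
sys.modules["fakelib"] = stub

import fakelib  # resolved from sys.modules; no installation needed

ok = fakelib.library is stub.library
```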
from libero.libero import benchmark, get_libero_path
from libero.libero.envs import OffScreenRenderEnv
from lerobot.envs.lazy_vec_env import LazyVectorEnv
from lerobot.processor import RobotObservation
_ASSET_DOWNLOAD_INSTRUCTIONS = """\
LIBERO-plus assets not found at: {assets_dir}
The LIBERO-plus benchmark requires ~6 GB of scene/texture/object assets that
are hosted separately on Hugging Face. To download and install them:
python -c "
from huggingface_hub import hf_hub_download
hf_hub_download('Sylvest/LIBERO-plus', 'assets.zip',
repo_type='dataset', local_dir='/tmp/libero-plus-assets')
"
unzip /tmp/libero-plus-assets/assets.zip -d /tmp/libero-plus-assets-unzipped
# The zip contains a deeply nested path; move the assets directory:
mv /tmp/libero-plus-assets-unzipped/inspire/*/assets {assets_dir}
rm -rf /tmp/libero-plus-assets /tmp/libero-plus-assets-unzipped
See https://huggingface.co/datasets/Sylvest/LIBERO-plus for details.
"""
def _check_libero_plus_assets() -> None:
"""Validate that LIBERO-plus scene assets are present."""
assets_dir = Path(get_libero_path("benchmark_root")) / "assets"
if not (assets_dir / "scenes").is_dir():
raise FileNotFoundError(_ASSET_DOWNLOAD_INSTRUCTIONS.format(assets_dir=assets_dir))
# ---- Perturbation support for LIBERO-Plus -----------------------------------
PERTURBATION_DIMENSIONS = (
"Camera Viewpoints",
"Robot Initial States",
"Language Instructions",
"Light Conditions",
"Background Textures",
"Sensor Noise",
"Objects Layout",
)
PERTURBATION_SHORT_KEYS = {
"Camera Viewpoints": "camera",
"Robot Initial States": "robot",
"Language Instructions": "language",
"Light Conditions": "light",
"Background Textures": "background",
"Sensor Noise": "noise",
"Objects Layout": "layout",
}
def load_task_classification() -> dict:
"""Load task_classification.json shipped with LIBERO-Plus."""
import json
benchmark_root = Path(get_libero_path("benchmark_root"))
candidates = [
benchmark_root / "benchmark" / "task_classification.json",
benchmark_root / "task_classification.json",
benchmark_root.parent / "benchmark" / "task_classification.json",
]
for p in candidates:
if p.exists():
with open(p) as f:
return json.load(f)
raise FileNotFoundError(
f"task_classification.json not found. Tried: {[str(c) for c in candidates]}"
)
def build_perturbation_index(suite_name: str) -> dict[int, str]:
"""Return {0-indexed task_id: perturbation_dimension} for *suite_name*."""
tc = load_task_classification()
suite_data = tc.get(suite_name, {})
index: dict[int, str] = {}
# LIBERO-Plus task_classification.json has appeared in two shapes:
# 1) dict[suite][task_id_str] -> meta
# 2) dict[suite] -> list[{id, category, ...}]
if isinstance(suite_data, list):
for item in suite_data:
if not isinstance(item, dict):
continue
raw_id = item.get("id")
if raw_id is None:
continue
try:
# list-form ids are 1-indexed in current LIBERO-Plus release.
tid = int(raw_id) - 1
except (TypeError, ValueError):
continue
if tid < 0:
continue
dim = item.get("perturbation_type") or item.get("category", "unknown")
index[tid] = dim
return index
if isinstance(suite_data, dict):
# Handle both 0-indexed and 1-indexed key conventions.
numeric_keys: list[int] = []
for k in suite_data:
try:
numeric_keys.append(int(k))
except (TypeError, ValueError):
continue
one_indexed = bool(numeric_keys) and 0 not in numeric_keys and min(numeric_keys) >= 1
for task_id_str, meta in suite_data.items():
try:
tid = int(task_id_str)
except (TypeError, ValueError):
continue
if one_indexed:
tid -= 1
if tid < 0:
continue
if isinstance(meta, dict):
dim = meta.get("perturbation_type") or meta.get("category", "unknown")
else:
dim = "unknown"
index[tid] = dim
return index
return index
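Both JSON shapes can be exercised on toy in-memory data; a standalone sketch of the same id normalization (hypothetical helper names, not the actual function):

```python
def index_from_list(items: list[dict]) -> dict[int, str]:
    # list-form ids are 1-indexed in the current LIBERO-Plus release
    return {
        int(it["id"]) - 1: it.get("perturbation_type") or it.get("category", "unknown")
        for it in items
        if isinstance(it, dict) and it.get("id") is not None
    }


def index_from_dict(mapping: dict[str, dict]) -> dict[int, str]:
    keys = [int(k) for k in mapping]
    one_indexed = bool(keys) and 0 not in keys and min(keys) >= 1
    return {
        int(k) - (1 if one_indexed else 0): meta.get("perturbation_type")
        or meta.get("category", "unknown")
        for k, meta in mapping.items()
    }


a = index_from_list([{"id": 1, "category": "Camera Viewpoints"}])
b = index_from_dict({"1": {"category": "Sensor Noise"}, "2": {"category": "Objects Layout"}})
```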
def aggregate_by_perturbation(
per_task: list[dict], suite_indices: dict[str, dict[int, str]]
) -> dict[str, dict]:
"""Aggregate per-task eval results by perturbation dimension.
Args:
per_task: list of {"task_group": str, "task_id": int, "metrics": {...}}
suite_indices: {suite_name: {task_id: dimension_name}} from build_perturbation_index
Returns:
{short_key: {"pc_success": float, "n_episodes": int}} for each perturbation dimension
"""
dim_successes: dict[str, list] = defaultdict(list)
for entry in per_task:
suite = entry["task_group"]
tid = entry["task_id"]
idx = suite_indices.get(suite, {})
dim = idx.get(tid, "unknown")
short = PERTURBATION_SHORT_KEYS.get(dim, dim.lower().replace(" ", "_"))
successes = entry["metrics"].get("successes", [])
dim_successes[short].extend(successes)
results: dict[str, dict] = {}
all_successes: list = []
for short_key in list(PERTURBATION_SHORT_KEYS.values()) + ["unknown"]:
if short_key not in dim_successes:
continue
s = dim_successes[short_key]
all_successes.extend(s)
results[short_key] = {
"pc_success": float(np.nanmean(s) * 100) if s else float("nan"),
"n_episodes": len(s),
}
if all_successes:
results["total"] = {
"pc_success": float(np.nanmean(all_successes) * 100),
"n_episodes": len(all_successes),
}
return results
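On toy data the aggregation reduces to bucketing successes by dimension and taking a percentage mean; a condensed sketch (`SHORT` is a trimmed copy of `PERTURBATION_SHORT_KEYS`, `aggregate` a hypothetical stand-in):

```python
from collections import defaultdict

import numpy as np

SHORT = {"Camera Viewpoints": "camera", "Sensor Noise": "noise"}


def aggregate(per_task, suite_indices):
    buckets: dict[str, list] = defaultdict(list)
    for entry in per_task:
        dim = suite_indices.get(entry["task_group"], {}).get(entry["task_id"], "unknown")
        buckets[SHORT.get(dim, "unknown")].extend(entry["metrics"]["successes"])
    return {
        k: {"pc_success": float(np.nanmean(v) * 100), "n_episodes": len(v)}
        for k, v in buckets.items()
    }


res = aggregate(
    [{"task_group": "suite", "task_id": 0, "metrics": {"successes": [1, 0, 1, 1]}}],
    {"suite": {0: "Camera Viewpoints"}},
)
```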
def _parse_camera_names(camera_name: str | Sequence[str]) -> list[str]:
"""Normalize camera_name into a non-empty list of strings."""
@@ -68,13 +280,35 @@ def _select_task_ids(total_tasks: int, task_ids: Iterable[int] | None) -> list[i
def get_task_init_states(task_suite: Any, i: int) -> np.ndarray:
init_states_path = (
Path(get_libero_path("init_states"))
/ task_suite.tasks[i].problem_folder
/ task_suite.tasks[i].init_states_file
init_states_dir = Path(get_libero_path("init_states")) / task_suite.tasks[i].problem_folder
init_states_file = task_suite.tasks[i].init_states_file
# 1. Direct match
direct = init_states_dir / init_states_file
if direct.exists():
return torch.load(direct, weights_only=False) # nosec B614
# 2. LIBERO-Plus perturbation filenames append suffixes like
# _view_0_0_100_0_0_initstate_1, _language_19, _noise_45, _table_1, _tb_9, _add_16
# to the base task name. Instead of regex-matching every variant, find the
# longest existing base file whose stem is a prefix of the perturbation stem.
stem, ext = os.path.splitext(init_states_file)
best_match: Path | None = None
best_len = 0
for candidate in init_states_dir.glob(f"*{ext}"):
cstem = candidate.stem
if stem == cstem or (stem.startswith(cstem) and stem[len(cstem)] == "_"):
if len(cstem) > best_len:
best_len = len(cstem)
best_match = candidate
if best_match is not None:
return torch.load(best_match, weights_only=False) # nosec B614
raise FileNotFoundError(
f"Could not find init states for task {i}. "
f"Tried '{init_states_file}' and prefix matching in '{init_states_dir}'."
)
init_states = torch.load(init_states_path, weights_only=False) # nosec B614
return init_states
def get_libero_dummy_action():
@@ -94,6 +328,29 @@ TASK_SUITE_MAX_STEPS: dict[str, int] = {
}
def _make_offscreen_env_with_renderer_fallback(env_args: dict[str, Any]) -> Any:
"""Create OffScreenRenderEnv and fallback to OSMesa if EGL is unavailable."""
try:
return OffScreenRenderEnv(**env_args)
except ImportError as exc:
msg = str(exc)
if "EGL" not in msg and "PLATFORM_DEVICE" not in msg:
raise
# Headless clusters often miss EGL PLATFORM_DEVICE support. Retry with
# software rendering to keep evaluation working.
os.environ["MUJOCO_GL"] = "osmesa"
os.environ["PYOPENGL_PLATFORM"] = "osmesa"
try:
return OffScreenRenderEnv(**env_args)
except Exception as fallback_exc:
raise ImportError(
"Failed to initialize robosuite offscreen renderer with both EGL and "
"OSMesa backends. Set up EGL-capable drivers or install OSMesa (e.g. "
"`conda install -c conda-forge mesalib`) and retry."
) from fallback_exc
class LiberoEnv(gym.Env):
metadata = {"render_modes": ["rgb_array"], "render_fps": 80}
@@ -147,6 +404,7 @@ class LiberoEnv(gym.Env):
# Load once and keep
self._init_states = get_task_init_states(task_suite, self.task_id) if self.init_states else None
self._reset_stride = n_envs # when performing a reset, append `_reset_stride` to `init_state_id`.
self._init_state_error_warned = False
self.init_state_id = self.episode_index # tie each sub-env to a fixed init state
@@ -238,7 +496,7 @@ class LiberoEnv(gym.Env):
"camera_heights": self.observation_height,
"camera_widths": self.observation_width,
}
env = OffScreenRenderEnv(**env_args)
env = _make_offscreen_env_with_renderer_fallback(env_args)
env.reset()
return env
@@ -298,8 +556,21 @@ class LiberoEnv(gym.Env):
self._env.seed(seed)
raw_obs = self._env.reset()
if self.init_states and self._init_states is not None:
raw_obs = self._env.set_init_state(self._init_states[self.init_state_id % len(self._init_states)])
self.init_state_id += self._reset_stride # Change init_state_id when reset
try:
raw_obs = self._env.set_init_state(self._init_states[self.init_state_id % len(self._init_states)])
self.init_state_id += self._reset_stride # Change init_state_id when reset
except Exception as exc:
# Some LIBERO-Plus perturbation tasks (notably object-layout variants)
# can have different simulator state dimensions than their base init files.
# Fall back to plain env.reset() instead of aborting the whole evaluation.
self.init_states = False
if not self._init_state_error_warned:
print(
"WARNING: Failed to apply init state for "
f"task_id={self.task_id} ({self.task}). "
f"Falling back to plain reset. Error: {exc}"
)
self._init_state_error_warned = True
# After reset, objects may be unstable (slightly floating, intersecting, etc.).
# Step the simulator with a no-op action for a few frames so everything settles.
@@ -325,7 +596,17 @@ class LiberoEnv(gym.Env):
f"Expected action to be 1-D (shape (action_dim,)), "
f"but got shape {action.shape} with ndim={action.ndim}"
)
raw_obs, reward, done, info = self._env.step(action)
try:
raw_obs, reward, done, info = self._env.step(action)
except ValueError as e:
if "terminated episode" not in str(e):
raise
# Robosuite's internal done flag is stale (e.g. from a previous
# termination that wasn't properly cleared by SyncVectorEnv).
# Signal termination so the caller resets us.
obs, reset_info = self.reset()
return obs, 0.0, True, False, {"is_success": False, **reset_info}
is_success = self._env.check_success()
terminated = done or is_success
@@ -345,7 +626,6 @@ class LiberoEnv(gym.Env):
"done": bool(done),
"is_success": bool(is_success),
}
self.reset()
truncated = False
return observation, reward, terminated, truncated, info
@@ -388,6 +668,9 @@ def _make_env_fns(
return fns
_LazyVecEnv = LazyVectorEnv
# ---- Main API ----------------------------------------------------------------
@@ -431,12 +714,23 @@ def create_libero_envs(
print(f"Restricting to task_ids={task_ids_filter}")
out: dict[str, dict[int, Any]] = defaultdict(dict)
total_tasks = 0
for suite_name in suite_names:
suite = _get_suite(suite_name)
total = len(suite.tasks)
selected = _select_task_ids(total, task_ids_filter)
if not selected:
raise ValueError(f"No tasks selected for suite '{suite_name}' (available: {total}).")
total_tasks += len(selected)
lazy = total_tasks > 1
if lazy:
print(f"Using lazy env creation for {total_tasks} tasks (envs created on demand)")
for suite_name in suite_names:
suite = _get_suite(suite_name)
total = len(suite.tasks)
selected = _select_task_ids(total, task_ids_filter)
for tid in selected:
fns = _make_env_fns(
@@ -450,8 +744,11 @@ def create_libero_envs(
gym_kwargs=gym_kwargs,
control_mode=control_mode,
)
out[suite_name][tid] = env_cls(fns)
print(f"Built vec env | suite={suite_name} | task_id={tid} | n_envs={n_envs}")
if lazy:
out[suite_name][tid] = LazyVectorEnv(env_cls, fns)
else:
out[suite_name][tid] = env_cls(fns)
print(f"Built vec env | suite={suite_name} | task_id={tid} | n_envs={n_envs}")
# return plain dicts for predictability
return {suite: dict(task_map) for suite, task_map in out.items()}
+11 -5
@@ -25,6 +25,7 @@ import metaworld.policies as policies
import numpy as np
from gymnasium import spaces
from lerobot.envs.lazy_vec_env import LazyVectorEnv
from lerobot.processor import RobotObservation
# ---- Load configuration data from the external JSON file ----
@@ -297,19 +298,24 @@ def create_metaworld_envs(
print(f"Creating Meta-World envs | task_groups={task_groups} | n_envs(per task)={n_envs}")
group_to_tasks = {group: DIFFICULTY_TO_TASKS.get(group, [group]) for group in task_groups}
total_tasks = sum(len(tasks) for tasks in group_to_tasks.values())
lazy = total_tasks > 50
if lazy:
print(f"Using lazy env creation for {total_tasks} tasks (envs created on demand)")
out: dict[str, dict[int, Any]] = defaultdict(dict)
for group in task_groups:
# if not in difficulty presets, treat it as a single custom task
tasks = DIFFICULTY_TO_TASKS.get(group, [group])
tasks = group_to_tasks[group]
for tid, task_name in enumerate(tasks):
print(f"Building vec env | group={group} | task_id={tid} | task={task_name}")
if not lazy:
print(f"Building vec env | group={group} | task_id={tid} | task={task_name}")
# build n_envs factories
fns = [(lambda tn=task_name: MetaworldEnv(task=tn, **gym_kwargs)) for _ in range(n_envs)]
out[group][tid] = env_cls(fns)
out[group][tid] = LazyVectorEnv(env_cls, fns) if lazy else env_cls(fns)
# return a plain dict for consistency
return {group: dict(task_map) for group, task_map in out.items()}
+279
@@ -0,0 +1,279 @@
#!/usr/bin/env python
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
from collections import defaultdict
from collections.abc import Callable, Sequence
from functools import partial
from typing import Any
import gymnasium as gym
import numpy as np
from gymnasium import spaces
from lerobot.envs.lazy_vec_env import LazyVectorEnv
# Action layout (flat 12D, normalized to [-1, 1]):
# [0:3] end_effector_position (delta x, y, z)
# [3:6] end_effector_rotation (delta roll, pitch, yaw)
# [6:7] gripper_close (open=-1, close=+1)
# [7:11] base_motion (x, y, theta, torso_height)
# [11:12] control_mode (arm=-1, base=+1)
ACTION_DIM = 12
ACTION_LOW = -1.0
ACTION_HIGH = 1.0
# Proprioceptive state layout (flat 16D):
# [0:2] gripper_qpos
# [2:5] base_position
# [5:9] base_rotation (quaternion)
# [9:12] end_effector_position_relative
# [12:16] end_effector_rotation_relative (quaternion)
STATE_DIM = 16
# Obs dict keys from RoboCasaGymEnv.get_observation()
_CAM_KEYS = (
"video.robot0_agentview_left",
"video.robot0_agentview_right",
"video.robot0_eye_in_hand",
)
_STATE_KEYS_ORDERED = (
"state.gripper_qpos", # (2,)
"state.base_position", # (3,)
"state.base_rotation", # (4,)
"state.end_effector_position_relative", # (3,)
"state.end_effector_rotation_relative", # (4,)
)
# Mapping from video.* key → short image name used in features_map
CAM_KEY_TO_NAME = {
"video.robot0_agentview_left": "agentview_left",
"video.robot0_agentview_right": "agentview_right",
"video.robot0_eye_in_hand": "eye_in_hand",
}
def _flat_to_action_dict(flat: np.ndarray) -> dict[str, np.ndarray]:
"""Convert a 12D flat action array to the Dict format expected by RoboCasaGymEnv."""
return {
"action.end_effector_position": flat[0:3],
"action.end_effector_rotation": flat[3:6],
"action.gripper_close": flat[6:7],
"action.base_motion": flat[7:11],
"action.control_mode": flat[11:12],
}
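The slicing is easy to sanity-check on a ramp input; a standalone copy of the conversion (same 12D layout as documented above):

```python
import numpy as np


def flat_to_action_dict(flat: np.ndarray) -> dict[str, np.ndarray]:
    # Same slicing as _flat_to_action_dict above.
    return {
        "action.end_effector_position": flat[0:3],
        "action.end_effector_rotation": flat[3:6],
        "action.gripper_close": flat[6:7],
        "action.base_motion": flat[7:11],
        "action.control_mode": flat[11:12],
    }


a = flat_to_action_dict(np.arange(12, dtype=np.float32))
```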
class RoboCasaEnv(gym.Env):
"""Thin wrapper around RoboCasaGymEnv that provides a flat Box action space
and a structured observation dict compatible with LeRobot policies.
Observations returned by step/reset:
{
"pixels": {
"agentview_left": (H, W, 3) uint8,
"agentview_right": (H, W, 3) uint8,
"eye_in_hand": (H, W, 3) uint8,
},
"robot_state": (16,) float32,
}
Actions: flat float32 ndarray of shape (12,), normalized to [-1, 1].
"""
metadata = {"render_modes": ["rgb_array"], "render_fps": 20}
def __init__(
self,
task: str,
split: str = "target",
image_size: int = 128,
render_mode: str = "rgb_array",
episode_length: int = 500,
**gym_kwargs: Any,
):
super().__init__()
# Lazy import — robocasa is optional
import robocasa.environments # noqa: F401 — registers all gym envs
self.task = task
self.render_mode = render_mode
self.image_size = image_size
self._max_episode_steps = episode_length
self._step_count = 0
self._env = gym.make(
f"robocasa/{task}",
split=split,
camera_widths=image_size,
camera_heights=image_size,
**gym_kwargs,
)
# Flat 12D Box action space
self.action_space = spaces.Box(
low=ACTION_LOW,
high=ACTION_HIGH,
shape=(ACTION_DIM,),
dtype=np.float32,
)
images = {
name: spaces.Box(low=0, high=255, shape=(image_size, image_size, 3), dtype=np.uint8)
for name in CAM_KEY_TO_NAME.values()
}
self.observation_space = spaces.Dict(
{
"pixels": spaces.Dict(images),
"robot_state": spaces.Box(
low=-np.inf, high=np.inf, shape=(STATE_DIM,), dtype=np.float32
),
}
)
def _format_obs(self, raw_obs: dict) -> dict:
pixels = {
CAM_KEY_TO_NAME[k]: raw_obs[k]
for k in _CAM_KEYS
if k in raw_obs
}
state_parts = [
np.asarray(raw_obs[k], dtype=np.float32)
for k in _STATE_KEYS_ORDERED
if k in raw_obs
]
robot_state = np.concatenate(state_parts) if state_parts else np.zeros(STATE_DIM, dtype=np.float32)
return {"pixels": pixels, "robot_state": robot_state}
def reset(self, seed: int | None = None, **kwargs) -> tuple[dict, dict]:
super().reset(seed=seed)
self._step_count = 0
raw_obs, info = self._env.reset(seed=seed)
info.setdefault("is_success", False)
info["task"] = self.task
return self._format_obs(raw_obs), info
def step(self, action: np.ndarray) -> tuple[dict, float, bool, bool, dict]:
if action.ndim != 1 or action.shape[0] != ACTION_DIM:
raise ValueError(
f"Expected 1-D action of shape ({ACTION_DIM},), got {action.shape}"
)
action_dict = _flat_to_action_dict(action)
raw_obs, reward, terminated, truncated, info = self._env.step(action_dict)
self._step_count += 1
is_success = bool(info.get("success", False))
terminated = terminated or is_success
if self._step_count >= self._max_episode_steps:
truncated = True
info.update({"task": self.task, "is_success": is_success})
obs = self._format_obs(raw_obs)
if terminated or truncated:
info["final_info"] = {"task": self.task, "is_success": is_success}
return obs, reward, terminated, truncated, info
def render(self) -> np.ndarray | None:
if self.render_mode == "rgb_array":
return self._env.render()
return None
def close(self) -> None:
self._env.close()
def _make_env_fns(
*,
task: str,
n_envs: int,
image_size: int,
split: str,
episode_length: int,
gym_kwargs: dict[str, Any],
) -> list[Callable[[], RoboCasaEnv]]:
"""Build n_envs factory callables for a single task."""
def _make(episode_index: int) -> RoboCasaEnv: # noqa: ARG001
return RoboCasaEnv(
task=task,
split=split,
image_size=image_size,
episode_length=episode_length,
**gym_kwargs,
)
return [partial(_make, i) for i in range(n_envs)]
def create_robocasa_envs(
tasks: str | Sequence[str],
n_envs: int,
image_size: int = 128,
split: str = "target",
episode_length: int = 500,
gym_kwargs: dict[str, Any] | None = None,
env_cls: Callable[[Sequence[Callable[[], Any]]], Any] | None = None,
) -> dict[str, dict[int, Any]]:
"""Create vectorized RoboCasa environments.
Args:
tasks: A single task name or list of task names (without "robocasa/" prefix).
E.g. "PickPlaceCounterToCabinet" or ["BoilPot", "PrepareCoffee"].
n_envs: Number of parallel envs per task.
image_size: Square image resolution for all cameras.
split: RoboCasa dataset split "pretrain" or "target".
episode_length: Max steps per episode before truncation.
gym_kwargs: Extra kwargs forwarded to each RoboCasaEnv.
env_cls: Callable to wrap list of factory fns (SyncVectorEnv or AsyncVectorEnv).
Returns:
dict[task_name][task_id=0] -> vec_env
"""
if env_cls is None or not callable(env_cls):
raise ValueError("env_cls must be a callable wrapping a list of env factory callables.")
if not isinstance(n_envs, int) or n_envs <= 0:
raise ValueError(f"n_envs must be a positive int; got {n_envs}.")
if isinstance(tasks, str):
task_list = [t.strip() for t in tasks.split(",") if t.strip()]
else:
task_list = [str(t).strip() for t in tasks if str(t).strip()]
if not task_list:
raise ValueError("`tasks` must contain at least one task name.")
gym_kwargs = dict(gym_kwargs or {})
out: dict[str, dict[int, Any]] = defaultdict(dict)
total_tasks = len(task_list)
lazy = total_tasks > 50
print(f"Creating RoboCasa envs | tasks={task_list} | n_envs(per task)={n_envs} | split={split}")
if lazy:
print(f"Using lazy env creation for {total_tasks} tasks (envs created on demand)")
for task in task_list:
fns = _make_env_fns(
task=task,
n_envs=n_envs,
image_size=image_size,
split=split,
episode_length=episode_length,
gym_kwargs=gym_kwargs,
)
out["robocasa"][len(out["robocasa"])] = LazyVectorEnv(env_cls, fns) if lazy else env_cls(fns)
if not lazy:
print(f" Built vec env | task={task} | n_envs={n_envs}")
return {suite: dict(task_map) for suite, task_map in out.items()}
+181
@@ -0,0 +1,181 @@
"""RoboMME environment wrapper for LeRobot evaluation.
Wraps the RoboMME ``BenchmarkEnvBuilder`` into a Gymnasium-compatible
``VectorEnv`` suitable for ``lerobot_eval``.
RoboMME tasks:
Counting: BinFill, PickXtimes, SwingXtimes, StopCube
Permanence: VideoUnmask, VideoUnmaskSwap, ButtonUnmask, ButtonUnmaskSwap
Reference: PickHighlight, VideoRepick, VideoPlaceButton, VideoPlaceOrder
Imitation: MoveCube, InsertPeg, PatternLock, RouteStick
Install: pip install robomme (or from source: https://github.com/RoboMME/robomme_benchmark)
"""
from __future__ import annotations
from collections.abc import Callable, Sequence
from functools import partial
from typing import Any
import gymnasium as gym
import numpy as np
from gymnasium import spaces
from lerobot.envs.lazy_vec_env import LazyVectorEnv
ROBOMME_TASKS = [
"BinFill", "PickXtimes", "SwingXtimes", "StopCube",
"VideoUnmask", "VideoUnmaskSwap", "ButtonUnmask", "ButtonUnmaskSwap",
"PickHighlight", "VideoRepick", "VideoPlaceButton", "VideoPlaceOrder",
"MoveCube", "InsertPeg", "PatternLock", "RouteStick",
]
class RoboMMEGymEnv(gym.Env):
"""Thin Gymnasium wrapper around a single RoboMME episode env."""
metadata = {"render_modes": ["rgb_array"]}
def __init__(
self,
task: str = "PickXtimes",
action_space_type: str = "joint_angle",
dataset: str = "test",
episode_idx: int = 0,
max_steps: int = 300,
):
super().__init__()
from robomme.env_record_wrapper import BenchmarkEnvBuilder
self._task = task
self._action_space_type = action_space_type
self._dataset = dataset
self._episode_idx = episode_idx
self._max_steps = max_steps
self._builder = BenchmarkEnvBuilder(
env_id=task,
dataset=dataset,
action_space=action_space_type,
gui_render=False,
max_steps=max_steps,
)
self._env = None
action_dim = 8 if action_space_type == "joint_angle" else 7
self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(action_dim,), dtype=np.float32)
self.observation_space = spaces.Dict({
"front_rgb": spaces.Box(0, 255, shape=(256, 256, 3), dtype=np.uint8),
"wrist_rgb": spaces.Box(0, 255, shape=(256, 256, 3), dtype=np.uint8),
"state": spaces.Box(-np.inf, np.inf, shape=(8,), dtype=np.float32),
})
def reset(self, *, seed=None, options=None):
super().reset(seed=seed)
self._env = self._builder.make_env_for_episode(
episode_idx=self._episode_idx, max_steps=self._max_steps,
)
obs, info = self._env.reset()
return self._convert_obs(obs), self._convert_info(info)
def step(self, action):
obs, reward, terminated, truncated, info = self._env.step(action)
terminated_bool = bool(terminated.item()) if hasattr(terminated, "item") else bool(terminated)
truncated_bool = bool(truncated.item()) if hasattr(truncated, "item") else bool(truncated)
status = info.get("status", "ongoing")
is_success = status == "success"
conv_info = self._convert_info(info)
conv_info["is_success"] = is_success
return self._convert_obs(obs), float(reward), terminated_bool, truncated_bool, conv_info
def _convert_obs(self, obs: dict) -> dict:
front_rgb = obs["front_rgb_list"][-1] if isinstance(obs["front_rgb_list"], list) else obs["front_rgb_list"]
wrist_rgb = obs["wrist_rgb_list"][-1] if isinstance(obs["wrist_rgb_list"], list) else obs["wrist_rgb_list"]
joint_state = obs["joint_state_list"][-1] if isinstance(obs["joint_state_list"], list) else obs["joint_state_list"]
gripper_state = obs["gripper_state_list"][-1] if isinstance(obs["gripper_state_list"], list) else obs["gripper_state_list"]
front_rgb = np.asarray(front_rgb, dtype=np.uint8)
wrist_rgb = np.asarray(wrist_rgb, dtype=np.uint8)
joint = np.asarray(joint_state, dtype=np.float32).flatten()[:7]
gripper = np.asarray(gripper_state, dtype=np.float32).flatten()[:1]
state = np.concatenate([joint, gripper])
return {
"front_rgb": front_rgb,
"wrist_rgb": wrist_rgb,
"state": state,
}
def _convert_info(self, info: dict) -> dict:
return {
"status": info.get("status", "ongoing"),
"task_goal": info.get("task_goal", ""),
}
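The `_convert_obs` logic above (keep only the newest frame from each `*_list` history key, then concatenate 7 joint values with 1 gripper value into an 8-dim state) can be sketched standalone. The key names mirror the RoboMME observation dict; the dummy arrays are illustrative, not real RoboMME output.

```python
import numpy as np

def convert_obs(obs: dict) -> dict:
    """Minimal re-implementation of RoboMMEGymEnv._convert_obs for illustration."""
    def latest(x):
        # RoboMME may return a per-key history list; keep only the newest entry.
        return x[-1] if isinstance(x, list) else x

    joint = np.asarray(latest(obs["joint_state_list"]), dtype=np.float32).flatten()[:7]
    gripper = np.asarray(latest(obs["gripper_state_list"]), dtype=np.float32).flatten()[:1]
    return {
        "front_rgb": np.asarray(latest(obs["front_rgb_list"]), dtype=np.uint8),
        "wrist_rgb": np.asarray(latest(obs["wrist_rgb_list"]), dtype=np.uint8),
        "state": np.concatenate([joint, gripper]),  # 7 joints + 1 gripper = (8,)
    }

dummy = {
    "front_rgb_list": [np.zeros((256, 256, 3)), np.ones((256, 256, 3))],  # last frame wins
    "wrist_rgb_list": np.zeros((256, 256, 3)),  # a plain array is also accepted
    "joint_state_list": [np.arange(7)],
    "gripper_state_list": [np.array([0.5])],
}
out = convert_obs(dummy)
```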
def _make_env_fns(
*,
task: str,
n_envs: int,
action_space_type: str,
dataset: str,
episode_length: int,
task_id: int,
) -> list[Callable[[], RoboMMEGymEnv]]:
"""Build n_envs factory callables for one RoboMME task id."""
def _make_one(episode_index: int) -> RoboMMEGymEnv:
return RoboMMEGymEnv(
task=task,
action_space_type=action_space_type,
dataset=dataset,
episode_idx=episode_index,
max_steps=episode_length,
)
return [partial(_make_one, task_id + i) for i in range(n_envs)]
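The `partial(_make_one, task_id + i)` pattern above binds a distinct episode index into each factory at list-build time, avoiding the classic late-binding bug of closures in a loop. A minimal sketch with a dummy env class (the real code constructs `RoboMMEGymEnv`):

```python
from functools import partial

class DummyEnv:
    """Stand-in for RoboMMEGymEnv; records which episode it was built for."""
    def __init__(self, episode_idx: int):
        self.episode_idx = episode_idx

def make_env_fns(task_id: int, n_envs: int):
    def _make_one(episode_index: int) -> DummyEnv:
        return DummyEnv(episode_index)
    # Each callable is bound to a distinct episode index: task_id, task_id+1, ...
    return [partial(_make_one, task_id + i) for i in range(n_envs)]

fns = make_env_fns(task_id=10, n_envs=3)
episodes = [fn().episode_idx for fn in fns]  # [10, 11, 12]
```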
def create_robomme_envs(
task: str,
n_envs: int = 1,
action_space_type: str = "joint_angle",
dataset: str = "test",
episode_length: int = 300,
task_ids: list[int] | None = None,
env_cls: Callable[[Sequence[Callable[[], Any]]], Any] | None = None,
) -> dict[str, dict[int, gym.vector.VectorEnv]]:
"""Create vectorized RoboMME environments for evaluation.
Returns {suite_name: {task_id: VectorEnv}} matching lerobot's expected format.
"""
if env_cls is None or not callable(env_cls):
raise ValueError("env_cls must be a callable that wraps a list of env factory callables.")
if not isinstance(n_envs, int) or n_envs <= 0:
raise ValueError(f"n_envs must be a positive int; got {n_envs}.")
if task_ids is None:
task_ids = [0]
suite_name = "robomme"
envs_by_task = {}
lazy = len(task_ids) > 50
if lazy:
print(f"Using lazy env creation for {len(task_ids)} tasks (envs created on demand)")
for task_id in task_ids:
fns = _make_env_fns(
task=task,
n_envs=n_envs,
action_space_type=action_space_type,
dataset=dataset,
episode_length=episode_length,
task_id=task_id,
)
envs_by_task[task_id] = LazyVectorEnv(env_cls, fns) if lazy else env_cls(fns)
return {suite_name: envs_by_task}
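The returned `{suite_name: {task_id: VectorEnv}}` shape and the lazy/eager switch can be illustrated without RoboMME installed: below, `env_cls=list` stands in for a real `VectorEnv` class and the lazy branch is merely tagged instead of building a `LazyVectorEnv`. A sketch, not the real implementation:

```python
def create_envs(task_ids, n_envs=2, env_cls=list, lazy_threshold=50):
    """Mimic create_robomme_envs' return structure with dummy factories."""
    lazy = len(task_ids) > lazy_threshold  # real code: defer env construction past 50 tasks
    envs_by_task = {}
    for task_id in task_ids:
        fns = [lambda tid=task_id, i=i: (tid, i) for i in range(n_envs)]
        # Real code wraps fns in LazyVectorEnv when lazy; here we just tag it.
        envs_by_task[task_id] = ("lazy", fns) if lazy else env_cls(f() for f in fns)
    return {"robomme": envs_by_task}

out = create_envs(task_ids=[0, 1, 2])
```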
+4 -4
@@ -29,7 +29,7 @@ from dataclasses import dataclass
from enum import Enum
from functools import cached_property
from pprint import pformat
from typing import Protocol, TypeAlias
from typing import Protocol
import serial
from deepdiff import DeepDiff
@@ -38,8 +38,8 @@ from tqdm import tqdm
from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
from lerobot.utils.utils import enter_pressed, move_cursor_up
NameOrID: TypeAlias = str | int
Value: TypeAlias = int | float
type NameOrID = str | int
type Value = int | float
logger = logging.getLogger(__name__)
@@ -1277,4 +1277,4 @@ class SerialMotorsBus(MotorsBusBase):
# Backward compatibility alias
MotorsBus: TypeAlias = SerialMotorsBus
MotorsBus = SerialMotorsBus
+1 -2
@@ -18,10 +18,9 @@ from __future__ import annotations
import importlib
import logging
from typing import Any, TypedDict
from typing import Any, TypedDict, Unpack
import torch
from typing_extensions import Unpack
from lerobot.configs.policies import PreTrainedConfig
from lerobot.configs.types import FeatureType
@@ -4,10 +4,9 @@
# Licensed under The MIT License [see LICENSE for details]
# --------------------------------------------------------
from __future__ import annotations
# copy from https://github.com/huggingface/transformers/blob/main/src/transformers/models/llava_onevision/image_processing_llava_onevision_fast.py
from typing import Optional
from transformers.image_processing_utils import (
BatchFeature,
get_patch_output_size,
@@ -165,11 +164,11 @@ class Eagle25VLImageProcessorFast(BaseImageProcessorFast):
def _resize_for_patching(
self,
image: "torch.Tensor",
image: torch.Tensor,
target_resolution: tuple,
interpolation: "F.InterpolationMode",
interpolation: F.InterpolationMode,
input_data_format: ChannelDimension,
) -> "torch.Tensor":
) -> torch.Tensor:
"""
Resizes an image to a target resolution while maintaining aspect ratio.
@@ -219,8 +218,8 @@ class Eagle25VLImageProcessorFast(BaseImageProcessorFast):
return best_ratio
def _pad_for_patching(
self, image: "torch.Tensor", target_resolution: tuple, input_data_format: ChannelDimension
) -> "torch.Tensor":
self, image: torch.Tensor, target_resolution: tuple, input_data_format: ChannelDimension
) -> torch.Tensor:
"""
Pad an image to a target resolution while maintaining aspect ratio.
"""
@@ -236,15 +235,15 @@ class Eagle25VLImageProcessorFast(BaseImageProcessorFast):
def _get_image_patches(
self,
image: "torch.Tensor",
image: torch.Tensor,
min_num: int,
max_num: int,
size: tuple,
tile_size: int,
use_thumbnail: bool,
interpolation: "F.InterpolationMode",
interpolation: F.InterpolationMode,
pad_during_tiling: bool,
) -> list["torch.Tensor"]:
) -> list[torch.Tensor]:
image_size = get_image_size(image, channel_dim=ChannelDimension.FIRST)
orig_height, orig_width = image_size
aspect_ratio = orig_width / orig_height
@@ -305,8 +304,8 @@ class Eagle25VLImageProcessorFast(BaseImageProcessorFast):
def _pad_for_batching(
self,
pixel_values: list["torch.Tensor"],
) -> list["torch.Tensor"]:
pixel_values: list[torch.Tensor],
) -> list[torch.Tensor]:
"""
Pads images on the `num_of_patches` dimension with zeros to form a batch of same number of patches.
@@ -327,14 +326,14 @@ class Eagle25VLImageProcessorFast(BaseImageProcessorFast):
def _preprocess(
self,
images: list["torch.Tensor"],
images: list[torch.Tensor],
do_resize: bool,
size: SizeDict,
max_dynamic_tiles: int,
min_dynamic_tiles: int,
use_thumbnail: bool,
pad_during_tiling: bool,
interpolation: Optional["F.InterpolationMode"],
interpolation: F.InterpolationMode | None,
do_center_crop: bool,
crop_size: SizeDict,
do_rescale: bool,
+1 -2
@@ -20,12 +20,11 @@ import logging
import math
from collections import deque
from pathlib import Path
from typing import TYPE_CHECKING, Literal, TypedDict
from typing import TYPE_CHECKING, Literal, TypedDict, Unpack
import torch
import torch.nn.functional as F # noqa: N812
from torch import Tensor, nn
from typing_extensions import Unpack
from lerobot.utils.import_utils import _transformers_available
+1 -2
@@ -20,12 +20,11 @@ import logging
import math
from collections import deque
from pathlib import Path
from typing import TYPE_CHECKING, Literal, TypedDict
from typing import TYPE_CHECKING, Literal, TypedDict, Unpack
import torch
import torch.nn.functional as F # noqa: N812
from torch import Tensor, nn
from typing_extensions import Unpack
from lerobot.utils.import_utils import _transformers_available
@@ -19,13 +19,12 @@ import logging
import math
from collections import deque
from pathlib import Path
from typing import TYPE_CHECKING, Literal, TypedDict
from typing import TYPE_CHECKING, Literal, TypedDict, Unpack
import numpy as np
import torch
import torch.nn.functional as F # noqa: N812
from torch import Tensor, nn
from typing_extensions import Unpack
from lerobot.utils.import_utils import _scipy_available, _transformers_available
+1 -2
@@ -19,7 +19,7 @@ import os
from importlib.resources import files
from pathlib import Path
from tempfile import TemporaryDirectory
from typing import TypedDict, TypeVar
from typing import TypedDict, TypeVar, Unpack
import packaging
import safetensors
@@ -28,7 +28,6 @@ from huggingface_hub.constants import SAFETENSORS_SINGLE_FILE
from huggingface_hub.errors import HfHubHTTPError
from safetensors.torch import load_model as load_model_as_safetensor, save_model as save_model_as_safetensor
from torch import Tensor, nn
from typing_extensions import Unpack
from lerobot.configs.policies import PreTrainedConfig
from lerobot.configs.train import TrainPipelineConfig
@@ -54,12 +54,11 @@ policy = SmolVLAPolicy.from_pretrained("lerobot/smolvla_base")
import math
from collections import deque
from typing import TypedDict
from typing import TypedDict, Unpack
import torch
import torch.nn.functional as F # noqa: N812
from torch import Tensor, nn
from typing_extensions import Unpack
from lerobot.policies.pretrained import PreTrainedPolicy
from lerobot.policies.rtc.modeling_rtc import RTCProcessor
+5 -5
@@ -17,7 +17,7 @@
from __future__ import annotations
from enum import Enum
from typing import Any, TypeAlias, TypedDict
from typing import Any, TypedDict
import numpy as np
import torch
@@ -36,10 +36,10 @@ class TransitionKey(str, Enum):
COMPLEMENTARY_DATA = "complementary_data"
PolicyAction: TypeAlias = torch.Tensor
RobotAction: TypeAlias = dict[str, Any]
EnvAction: TypeAlias = np.ndarray
RobotObservation: TypeAlias = dict[str, Any]
PolicyAction = torch.Tensor
RobotAction = dict[str, Any]
EnvAction = np.ndarray
RobotObservation = dict[str, Any]
EnvTransition = TypedDict(
+38
@@ -153,6 +153,44 @@ class LiberoProcessorStep(ObservationProcessorStep):
return result
@dataclass
@ProcessorStepRegistry.register(name="robocasa_processor")
class RoboCasaProcessorStep(ObservationProcessorStep):
"""
Processes RoboCasa observations into LeRobot format.
The RoboCasaEnv wrapper returns:
- ``pixels.<cam_name>``: (B, C, H, W) float32 images (already converted by vectorenv)
- ``observation.robot_state``: (B, 16) float32 proprioception
This step remaps them to:
- ``observation.images.<cam_name>`` (unchanged tensor)
- ``observation.state`` (robot_state renamed)
"""
def _process_observation(self, observation: dict) -> dict:
processed = {}
obs_prefix = OBS_PREFIX # "observation."
for key, value in observation.items():
if key.startswith(f"{OBS_IMAGES}."):
# Already in the right place; pass through
processed[key] = value
elif key == OBS_STATE or key == f"{obs_prefix}robot_state":
# Rename robot_state → observation.state
processed[OBS_STATE] = value.float() if hasattr(value, "float") else value
return processed
def transform_features(
self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
return features
def observation(self, observation: dict) -> dict:
return self._process_observation(observation)
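The remapping done by `RoboCasaProcessorStep` above is a pure key transformation: images pass through, `robot_state` is renamed to `observation.state`, and any other key is silently dropped. A self-contained sketch, assuming the lerobot constants hold their conventional string values:

```python
OBS_IMAGES = "observation.images"   # assumed values of the lerobot constants
OBS_STATE = "observation.state"
OBS_PREFIX = "observation."

def remap(observation: dict) -> dict:
    """Mirror RoboCasaProcessorStep._process_observation."""
    processed = {}
    for key, value in observation.items():
        if key.startswith(f"{OBS_IMAGES}."):
            processed[key] = value  # already in the right place
        elif key in (OBS_STATE, f"{OBS_PREFIX}robot_state"):
            # Rename robot_state -> observation.state; cast tensors to float if possible.
            processed[OBS_STATE] = value.float() if hasattr(value, "float") else value
    return processed

obs = {
    "observation.images.front": "img-tensor",
    "observation.robot_state": [0.0] * 16,
    "unrelated.key": 123,  # silently dropped, as in the real step
}
out = remap(obs)
```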
@dataclass
@ProcessorStepRegistry.register(name="isaaclab_arena_processor")
class IsaaclabArenaProcessorStep(ObservationProcessorStep):
+4 -4
@@ -39,7 +39,7 @@ from collections.abc import Callable, Iterable, Sequence
from copy import deepcopy
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any, Generic, TypeAlias, TypedDict, TypeVar, cast
from typing import Any, TypedDict, TypeVar, cast
import torch
from huggingface_hub import hf_hub_download
@@ -251,7 +251,7 @@ class ProcessorMigrationError(Exception):
@dataclass
class DataProcessorPipeline(HubMixin, Generic[TInput, TOutput]):
class DataProcessorPipeline[TInput, TOutput](HubMixin):
"""A sequential pipeline for processing data, integrated with the Hugging Face Hub.
This class chains together multiple `ProcessorStep` instances to form a complete
@@ -1432,8 +1432,8 @@ class DataProcessorPipeline(HubMixin, Generic[TInput, TOutput]):
# Type aliases for semantic clarity.
RobotProcessorPipeline: TypeAlias = DataProcessorPipeline[TInput, TOutput]
PolicyProcessorPipeline: TypeAlias = DataProcessorPipeline[TInput, TOutput]
RobotProcessorPipeline = DataProcessorPipeline[TInput, TOutput]
PolicyProcessorPipeline = DataProcessorPipeline[TInput, TOutput]
class ObservationProcessorStep(ProcessorStep, ABC):
@@ -15,7 +15,6 @@
# limitations under the License.
from dataclasses import dataclass, field
from typing import TypeAlias
from lerobot.cameras import CameraConfig
@@ -50,5 +49,5 @@ class SOFollowerRobotConfig(RobotConfig, SOFollowerConfig):
pass
SO100FollowerConfig: TypeAlias = SOFollowerRobotConfig
SO101FollowerConfig: TypeAlias = SOFollowerRobotConfig
SO100FollowerConfig = SOFollowerRobotConfig
SO101FollowerConfig = SOFollowerRobotConfig
@@ -17,7 +17,6 @@
import logging
import time
from functools import cached_property
from typing import TypeAlias
from lerobot.cameras.utils import make_cameras_from_configs
from lerobot.motors import Motor, MotorCalibration, MotorNormMode
@@ -230,5 +229,5 @@ class SOFollower(Robot):
logger.info(f"{self} disconnected.")
SO100Follower: TypeAlias = SOFollower
SO101Follower: TypeAlias = SOFollower
SO100Follower = SOFollower
SO101Follower = SOFollower
@@ -16,3 +16,5 @@
from .config_unitree_g1 import UnitreeG1Config
from .unitree_g1 import UnitreeG1
__all__ = ["UnitreeG1", "UnitreeG1Config"]
@@ -27,11 +27,10 @@ _GAINS: dict[str, dict[str, list[float]]] = {
}, # pitch, roll, yaw, knee, ankle_pitch, ankle_roll
"right_leg": {"kp": [150, 150, 150, 300, 40, 40], "kd": [2, 2, 2, 4, 2, 2]},
"waist": {"kp": [250, 250, 250], "kd": [5, 5, 5]}, # yaw, roll, pitch
"left_arm": {"kp": [80, 80, 80, 80], "kd": [3, 3, 3, 3]}, # shoulder_pitch/roll/yaw, elbow
"left_arm": {"kp": [50, 50, 80, 80], "kd": [3, 3, 3, 3]}, # shoulder_pitch/roll/yaw, elbow
"left_wrist": {"kp": [40, 40, 40], "kd": [1.5, 1.5, 1.5]}, # roll, pitch, yaw
"right_arm": {"kp": [80, 80, 80, 80], "kd": [3, 3, 3, 3]},
"right_arm": {"kp": [50, 50, 80, 80], "kd": [3, 3, 3, 3]},
"right_wrist": {"kp": [40, 40, 40], "kd": [1.5, 1.5, 1.5]},
"other": {"kp": [80, 80, 80, 80, 80, 80], "kd": [3, 3, 3, 3, 3, 3]},
}
@@ -68,3 +67,7 @@ class UnitreeG1Config(RobotConfig):
# Compensates for gravity on the unitree's arms using the arm ik solver
gravity_compensation: bool = False
# Lower-body controller class name, e.g. "GrootLocomotionController" or
# "HolosomaLocomotionController". None disables it.
controller: str | None = None
@@ -16,13 +16,11 @@
import logging
import os
import sys
from collections import deque
import numpy as np
logger = logging.getLogger(__name__)
parent2_dir = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
sys.path.append(parent2_dir)
class WeightedMovingFilter:
@@ -31,18 +29,14 @@ class WeightedMovingFilter:
self._weights = np.array(weights)
self._data_size = data_size
self._filtered_data = np.zeros(self._data_size)
self._data_queue = []
self._data_queue = deque(maxlen=self._window_size)
def _apply_filter(self):
if len(self._data_queue) < self._window_size:
return self._data_queue[-1]
data_array = np.array(self._data_queue)
temp_filtered_data = np.zeros(self._data_size)
for i in range(self._data_size):
temp_filtered_data[i] = np.convolve(data_array[:, i], self._weights, mode="valid")[-1]
return temp_filtered_data
return data_array[::-1].T @ self._weights  # reverse rows so the newest sample gets weights[0], matching np.convolve(..., "valid")[-1]
def add_data(self, new_data):
assert len(new_data) == self._data_size
@@ -52,9 +46,6 @@ class WeightedMovingFilter:
): # skip duplicate data
return
if len(self._data_queue) >= self._window_size:
self._data_queue.pop(0)
self._data_queue.append(new_data)
self._filtered_data = self._apply_filter()
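One subtlety in vectorizing the old per-channel `np.convolve` loop: `convolve` flips the kernel, so the last "valid" output weights the newest sample with `weights[0]`. A plain `data.T @ weights` (rows oldest to newest) would apply the weights in the opposite order and only agrees for palindromic weight vectors; the row-reversed matmul reproduces the loop exactly. A quick check:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(4, 2))       # window of 4 samples x 2 channels, rows oldest -> newest
w = np.array([0.4, 0.3, 0.2, 0.1])   # asymmetric weights, as in a typical smoothing filter

# Per-channel loop, as in the pre-refactor code:
loop = np.array([np.convolve(data[:, i], w, mode="valid")[-1] for i in range(2)])

# Vectorized equivalent: reverse rows so the NEWEST sample gets w[0].
vec = data[::-1].T @ w

assert np.allclose(loop, vec)
```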
@@ -71,8 +62,6 @@ class G1_29_ArmIK: # noqa: N801
from pinocchio import casadi as cpin
self._pin = pin
np.set_printoptions(precision=5, suppress=True, linewidth=200)
self.unit_test = unit_test
self.repo_path = snapshot_download("lerobot/unitree-g1-mujoco")
@@ -249,50 +238,35 @@ class G1_29_ArmIK: # noqa: N801
self.opti.set_value(self.param_tf_r, right_wrist)
self.opti.set_value(self.var_q_last, self.init_data) # for smooth
converged = True
try:
self.opti.solve()
sol_q = self.opti.value(self.var_q)
self.smooth_filter.add_data(sol_q)
sol_q = self.smooth_filter.filtered_data
if current_lr_arm_motor_dq is not None:
v = current_lr_arm_motor_dq * 0.0
else:
v = (sol_q - self.init_data) * 0.0
self.init_data = sol_q
sol_tauff = self._pin.rnea(
self.reduced_robot.model,
self.reduced_robot.data,
sol_q,
v,
np.zeros(self.reduced_robot.model.nv),
)
return sol_q, sol_tauff
except Exception as e:
logger.error(f"ERROR in convergence, plotting debug info.{e}")
converged = False
logger.error(f"IK convergence error: {e}")
sol_q = self.opti.debug.value(self.var_q)
self.smooth_filter.add_data(sol_q)
sol_q = self.smooth_filter.filtered_data
if current_lr_arm_motor_dq is not None:
v = current_lr_arm_motor_dq * 0.0
else:
v = (sol_q - self.init_data) * 0.0
self.init_data = sol_q
self.smooth_filter.add_data(sol_q)
sol_q = self.smooth_filter.filtered_data
self.init_data = sol_q
if not converged:
logger.error(
f"sol_q:{sol_q} \nmotorstate: \n{current_lr_arm_motor_q} \nleft_pose: \n{left_wrist} \nright_pose: \n{right_wrist}"
)
return current_lr_arm_motor_q, np.zeros(self.reduced_robot.model.nv)
sol_tauff = self._pin.rnea(
self.reduced_robot.model,
self.reduced_robot.data,
sol_q,
np.zeros(self.reduced_robot.model.nv),
np.zeros(self.reduced_robot.model.nv),
)
return sol_q, sol_tauff
def solve_tau(self, current_lr_arm_motor_q=None, current_lr_arm_motor_dq=None):
try:
q_g1 = np.array(current_lr_arm_motor_q, dtype=float)
+39 -2
@@ -14,12 +14,34 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import importlib
from enum import IntEnum
import numpy as np
# ruff: noqa: N801, N815
NUM_MOTORS = 29
REMOTE_AXES = ("remote.lx", "remote.ly", "remote.rx", "remote.ry")
REMOTE_BUTTONS = tuple(f"remote.button.{i}" for i in range(16))
REMOTE_KEYS = REMOTE_AXES + REMOTE_BUTTONS
def default_remote_input() -> dict[str, float]:
"""Return a zeroed-out remote input dict (axes + buttons)."""
return dict.fromkeys(REMOTE_KEYS, 0.0)
def get_gravity_orientation(quaternion: list[float] | np.ndarray) -> np.ndarray:
"""Get gravity orientation from quaternion [w, x, y, z]."""
qw, qx, qy, qz = quaternion
gravity_orientation = np.zeros(3, dtype=np.float32)
gravity_orientation[0] = 2 * (-qz * qx + qw * qy)
gravity_orientation[1] = -2 * (qz * qy + qw * qx)
gravity_orientation[2] = 1 - 2 * (qw * qw + qz * qz)
return gravity_orientation
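Two sanity checks on the quaternion-to-gravity formula above: with the identity quaternion (robot upright) gravity points along -z in the body frame, and after a 180° roll about x it points along +z. The helper below repeats the same arithmetic for illustration:

```python
import numpy as np

def gravity_from_quat(q):
    """Same arithmetic as get_gravity_orientation; q = [w, x, y, z]."""
    qw, qx, qy, qz = q
    return np.array([
        2 * (-qz * qx + qw * qy),
        -2 * (qz * qy + qw * qx),
        1 - 2 * (qw * qw + qz * qz),
    ], dtype=np.float32)

upright = gravity_from_quat([1.0, 0.0, 0.0, 0.0])  # identity rotation -> [0, 0, -1]
flipped = gravity_from_quat([0.0, 1.0, 0.0, 0.0])  # 180 deg roll about x -> [0, 0, 1]
```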
class G1_29_JointArmIndex(IntEnum):
# Left arm
@@ -29,7 +51,7 @@ class G1_29_JointArmIndex(IntEnum):
kLeftElbow = 18
kLeftWristRoll = 19
kLeftWristPitch = 20
kLeftWristyaw = 21
kLeftWristYaw = 21
# Right arm
kRightShoulderPitch = 22
@@ -41,6 +63,21 @@ class G1_29_JointArmIndex(IntEnum):
kRightWristYaw = 28
def make_locomotion_controller(name: str | None):
"""Instantiate a locomotion controller by class name. Returns None if name is None."""
if name is None:
return None
controllers = {
"GrootLocomotionController": "lerobot.robots.unitree_g1.gr00t_locomotion",
"HolosomaLocomotionController": "lerobot.robots.unitree_g1.holosoma_locomotion",
}
module_path = controllers.get(name)
if module_path is None:
raise ValueError(f"Unknown controller: {name!r}. Available: {list(controllers)}")
module = importlib.import_module(module_path)
return getattr(module, name)()
class G1_29_JointIndex(IntEnum):
# Left leg
kLeftHipPitch = 0
@@ -69,7 +106,7 @@ class G1_29_JointIndex(IntEnum):
kLeftElbow = 18
kLeftWristRoll = 19
kLeftWristPitch = 20
kLeftWristyaw = 21
kLeftWristYaw = 21
# Right arm
kRightShoulderPitch = 22
@@ -14,20 +14,20 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import logging
import time
from collections import deque
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download
from lerobot.robots.unitree_g1.config_unitree_g1 import UnitreeG1Config
from lerobot.robots.unitree_g1.g1_utils import G1_29_JointIndex
from lerobot.robots.unitree_g1.unitree_g1 import UnitreeG1
from lerobot.robots.unitree_g1.g1_utils import (
REMOTE_AXES,
REMOTE_BUTTONS,
G1_29_JointIndex,
get_gravity_orientation,
)
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
@@ -36,18 +36,13 @@ GROOT_DEFAULT_ANGLES[[0, 6]] = -0.1 # Hip pitch
GROOT_DEFAULT_ANGLES[[3, 9]] = 0.3 # Knee
GROOT_DEFAULT_ANGLES[[4, 10]] = -0.2 # Ankle pitch
MISSING_JOINTS = []
G1_MODEL = "g1_23" # Or "g1_29"
if G1_MODEL == "g1_23":
MISSING_JOINTS = [12, 14, 20, 21, 27, 28] # Waist yaw/pitch, wrist pitch/yaw
# Control parameters
ACTION_SCALE = 0.25
CONTROL_DT = 0.02 # 50Hz
ANG_VEL_SCALE: float = 0.25
DOF_POS_SCALE: float = 1.0
DOF_VEL_SCALE: float = 0.05
CMD_SCALE: list = [2.0, 2.0, 0.25]
CMD_SCALE: list[float] = [2.0, 2.0, 0.25]
DEFAULT_GROOT_REPO_ID = "nepyope/GR00T-WholeBodyControl_g1"
@@ -85,11 +80,11 @@ def load_groot_policies(
class GrootLocomotionController:
"""GR00T lower-body locomotion controller for the Unitree G1."""
def __init__(self, policy_balance, policy_walk, robot, config):
self.policy_balance = policy_balance
self.policy_walk = policy_walk
self.robot = robot
self.config = config
control_dt = CONTROL_DT # Expose for unitree_g1.py
def __init__(self):
# Load policies
self.policy_balance, self.policy_walk = load_groot_policies()
self.cmd = np.array([0.0, 0.0, 0.0], dtype=np.float32) # vx, vy, theta_dot
@@ -109,45 +104,60 @@ class GrootLocomotionController:
logger.info("GrootLocomotionController initialized")
def run_step(self):
# Get current observation
obs = self.robot.get_observation()
def reset(self) -> None:
"""Reset internal state for a new episode."""
self.cmd[:] = 0.0
self.groot_qj_all[:] = 0.0
self.groot_dqj_all[:] = 0.0
self.groot_action[:] = 0.0
self.groot_obs_single[:] = 0.0
self.groot_obs_stacked[:] = 0.0
self.groot_height_cmd = 0.74
self.groot_orientation_cmd[:] = 0.0
self.groot_obs_history.clear()
for _ in range(6):
self.groot_obs_history.append(np.zeros(86, dtype=np.float32))
if not obs:
return
def run_step(self, action: dict, lowstate) -> dict:
"""Run one step of the locomotion controller.
# Get command from remote controller
if obs["remote.buttons"][0]: # R1 - raise waist
Args:
action: Action dict containing remote.lx/ly/rx/ry and buttons
lowstate: Robot lowstate containing motor positions/velocities and IMU
Returns:
Action dict for lower body joints (0-14)
"""
if lowstate is None:
return {}
buttons = [int(action.get(k, 0)) for k in REMOTE_BUTTONS]
if buttons[0]: # R1 - raise waist
self.groot_height_cmd += 0.001
self.groot_height_cmd = np.clip(self.groot_height_cmd, 0.50, 1.00)
if obs["remote.buttons"][4]: # R2 - lower waist
if buttons[4]: # R2 - lower waist
self.groot_height_cmd -= 0.001
self.groot_height_cmd = np.clip(self.groot_height_cmd, 0.50, 1.00)
self.cmd[0] = obs["remote.ly"] # Forward/backward
self.cmd[1] = obs["remote.lx"] * -1 # Left/right
self.cmd[2] = obs["remote.rx"] * -1 # Rotation rate
lx, ly, rx, _ry = (action.get(k, 0.0) for k in REMOTE_AXES)
self.cmd[0] = ly # Forward/backward
self.cmd[1] = -lx # Left/right (negated)
self.cmd[2] = -rx # Rotation rate (negated)
# Get joint positions and velocities from flat dict
# Get joint positions and velocities from lowstate
for motor in G1_29_JointIndex:
name = motor.name
idx = motor.value
self.groot_qj_all[idx] = obs[f"{name}.q"]
self.groot_dqj_all[idx] = obs[f"{name}.dq"]
# Adapt observation for g1_23dof
for idx in MISSING_JOINTS:
self.groot_qj_all[idx] = 0.0
self.groot_dqj_all[idx] = 0.0
self.groot_qj_all[idx] = lowstate.motor_state[idx].q
self.groot_dqj_all[idx] = lowstate.motor_state[idx].dq
# Scale joint positions and velocities
qj_obs = self.groot_qj_all.copy()
dqj_obs = self.groot_dqj_all.copy()
# Express IMU data in gravity frame of reference
quat = [obs["imu.quat.w"], obs["imu.quat.x"], obs["imu.quat.y"], obs["imu.quat.z"]]
ang_vel = np.array([obs["imu.gyro.x"], obs["imu.gyro.y"], obs["imu.gyro.z"]], dtype=np.float32)
gravity_orientation = self.robot.get_gravity_orientation(quat)
quat = lowstate.imu_state.quaternion
ang_vel = np.array(lowstate.imu_state.gyroscope, dtype=np.float32)
gravity_orientation = get_gravity_orientation(quat)
# Scale joint positions and velocities before policy inference
qj_obs = (qj_obs - GROOT_DEFAULT_ANGLES) * DOF_POS_SCALE
@@ -186,73 +196,10 @@ class GrootLocomotionController:
# Transform action back to target joint positions
target_dof_pos_15 = GROOT_DEFAULT_ANGLES[:15] + self.groot_action * ACTION_SCALE
# Build action dict (only first 15 joints for GR00T)
# Build action dict
action_dict = {}
for i in range(15):
motor_name = G1_29_JointIndex(i).name
action_dict[f"{motor_name}.q"] = float(target_dof_pos_15[i])
# Zero out missing joints for g1_23dof
for joint_idx in MISSING_JOINTS:
motor_name = G1_29_JointIndex(joint_idx).name
action_dict[f"{motor_name}.q"] = 0.0
# Send action to robot
self.robot.send_action(action_dict)
def run(repo_id: str = DEFAULT_GROOT_REPO_ID) -> None:
"""Main function to run the GR00T locomotion controller.
Args:
repo_id: Hugging Face Hub repository ID for GR00T policies.
"""
# Load policies
policy_balance, policy_walk = load_groot_policies(repo_id=repo_id)
# Initialize robot
config = UnitreeG1Config()
robot = UnitreeG1(config)
robot.connect()
# Initialize gr00T locomotion controller
groot_controller = GrootLocomotionController(
policy_balance=policy_balance,
policy_walk=policy_walk,
robot=robot,
config=config,
)
try:
robot.reset(CONTROL_DT, GROOT_DEFAULT_ANGLES)
logger.info("Use joystick: LY=fwd/back, LX=left/right, RX=rotate, R1=raise waist, R2=lower waist")
logger.info("Press Ctrl+C to stop")
# Run step
while not robot._shutdown_event.is_set():
start_time = time.time()
groot_controller.run_step()
elapsed = time.time() - start_time
sleep_time = max(0, CONTROL_DT - elapsed)
time.sleep(sleep_time)
except KeyboardInterrupt:
logger.info("Stopping locomotion...")
finally:
if robot.is_connected:
robot.disconnect()
logger.info("Done!")
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="GR00T Locomotion Controller for Unitree G1")
parser.add_argument(
"--repo-id",
type=str,
default=DEFAULT_GROOT_REPO_ID,
help=f"Hugging Face Hub repo ID for GR00T policies (default: {DEFAULT_GROOT_REPO_ID})",
)
args = parser.parse_args()
run(repo_id=args.repo_id)
return action_dict
@@ -14,21 +14,21 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import json
import logging
import time
import numpy as np
import onnx
import onnxruntime as ort
from huggingface_hub import hf_hub_download
from lerobot.robots.unitree_g1.config_unitree_g1 import UnitreeG1Config
from lerobot.robots.unitree_g1.g1_utils import G1_29_JointIndex
from lerobot.robots.unitree_g1.unitree_g1 import UnitreeG1
from lerobot.robots.unitree_g1.g1_utils import (
REMOTE_AXES,
G1_29_JointArmIndex,
G1_29_JointIndex,
get_gravity_orientation,
)
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
DEFAULT_ANGLES = np.zeros(29, dtype=np.float32)
@@ -40,18 +40,13 @@ DEFAULT_ANGLES[16] = 0.2 # Left shoulder roll
DEFAULT_ANGLES[23] = -0.2 # Right shoulder roll
DEFAULT_ANGLES[[18, 25]] = 0.6 # Elbow
MISSING_JOINTS = []
G1_MODEL = "g1_23" # Or "g1_29"
if G1_MODEL == "g1_23":
MISSING_JOINTS = [12, 14, 20, 21, 27, 28] # Waist yaw/pitch, wrist pitch/yaw
# Control parameters
ACTION_SCALE = 0.25
CONTROL_DT = 0.02 # 50Hz
CONTROL_DT = 0.005 # 200Hz
ANG_VEL_SCALE = 0.25
DOF_POS_SCALE = 1.0
DOF_VEL_SCALE = 0.05
GAIT_PERIOD = 1.0
GAIT_PERIOD = 0.5
DEFAULT_HOLOSOMA_REPO_ID = "nepyope/holosoma_locomotion"
@@ -87,7 +82,7 @@ def load_policy(
logger.info(f"Policy loaded: {policy.get_inputs()[0].shape}{policy.get_outputs()[0].shape}")
# Extract KP/KD from ONNX metadata
model = onnx.load(policy_path)
model = onnx.load(policy_path, load_external_data=False)
metadata = {prop.key: prop.value for prop in model.metadata_props}
if "kp" not in metadata or "kd" not in metadata:
@@ -101,15 +96,13 @@ def load_policy(
class HolosomaLocomotionController:
"""Holosoma whole-body locomotion controller for Unitree G1."""
"""Holosoma lower-body locomotion controller for Unitree G1."""
def __init__(self, policy, robot, kp: np.ndarray, kd: np.ndarray):
self.policy = policy
self.robot = robot
control_dt = CONTROL_DT # Expose for unitree_g1.py
# Override robot's PD gains with policy gains
self.robot.kp = kp
self.robot.kd = kd
def __init__(self):
# Load policy and gains
self.policy, self.kp, self.kd = load_policy()
self.cmd = np.zeros(3, dtype=np.float32)
@@ -124,35 +117,55 @@ class HolosomaLocomotionController:
self.phase_dt = 2 * np.pi / ((1.0 / CONTROL_DT) * GAIT_PERIOD)
self.is_standing = True
def run_step(self):
# Get current observation
obs = self.robot.get_observation()
logger.info("HolosomaLocomotionController initialized")
if not obs:
return
def reset(self) -> None:
"""Reset internal state for a new episode."""
self.cmd[:] = 0.0
self.qj[:] = 0.0
self.dqj[:] = 0.0
self.obs[:] = 0.0
self.last_action[:] = 0.0
self.phase = np.array([[0.0, np.pi]], dtype=np.float32)
self.is_standing = True
# Get command from remote controller
ly = obs["remote.ly"] if abs(obs["remote.ly"]) > 0.1 else 0.0
lx = obs["remote.lx"] if abs(obs["remote.lx"]) > 0.1 else 0.0
rx = obs["remote.rx"] if abs(obs["remote.rx"]) > 0.1 else 0.0
def run_step(self, action: dict, lowstate) -> dict:
"""Run one step of the locomotion controller.
Args:
action: Action dict containing remote.lx/ly/rx/ry
lowstate: Robot lowstate containing motor positions/velocities and IMU
Returns:
Action dict for lower body joints (0-14)
"""
if lowstate is None:
return {}
lx, ly, rx, _ry = (action.get(k, 0.0) for k in REMOTE_AXES)
ly = ly if abs(ly) > 0.1 else 0.0
lx = lx if abs(lx) > 0.1 else 0.0
rx = rx if abs(rx) > 0.1 else 0.0
ly = np.clip(ly, -0.3, 0.3)
lx = np.clip(lx, -0.3, 0.3)
self.cmd[:] = [ly, -lx, -rx]
# Get joint positions and velocities
# Get joint positions and velocities from lowstate
for motor in G1_29_JointIndex:
name = motor.name
idx = motor.value
self.qj[idx] = obs[f"{name}.q"]
self.dqj[idx] = obs[f"{name}.dq"]
self.qj[idx] = lowstate.motor_state[idx].q
self.dqj[idx] = lowstate.motor_state[idx].dq
# Adapt observation for g1_23dof
for idx in MISSING_JOINTS:
self.qj[idx] = 0.0
self.dqj[idx] = 0.0
# Hide arm positions from policy (show DEFAULT_ANGLES instead)
# This prevents policy from reacting to teleop arm movements
for arm_joint in G1_29_JointArmIndex:
self.qj[arm_joint.value] = DEFAULT_ANGLES[arm_joint.value]
self.dqj[arm_joint.value] = 0.0
# Express IMU data in gravity frame of reference
quat = [obs["imu.quat.w"], obs["imu.quat.x"], obs["imu.quat.y"], obs["imu.quat.z"]]
ang_vel = np.array([obs["imu.gyro.x"], obs["imu.gyro.y"], obs["imu.gyro.z"]], dtype=np.float32)
gravity = self.robot.get_gravity_orientation(quat)
quat = lowstate.imu_state.quaternion
ang_vel = np.array(lowstate.imu_state.gyroscope, dtype=np.float32)
gravity = get_gravity_orientation(quat)
# Scale joint positions and velocities before policy inference
qj_obs = (self.qj - DEFAULT_ANGLES) * DOF_POS_SCALE
@@ -186,79 +199,16 @@ class HolosomaLocomotionController:
# Run policy inference
ort_in = {self.policy.get_inputs()[0].name: self.obs.reshape(1, -1).astype(np.float32)}
raw_action = self.policy.run(None, ort_in)[0].squeeze()
action = np.clip(raw_action, -100.0, 100.0)
self.last_action = action.copy()
policy_action = np.clip(raw_action, -100.0, 100.0)
self.last_action = policy_action.copy()
# Transform action back to target joint positions
target = DEFAULT_ANGLES + action * ACTION_SCALE
target = DEFAULT_ANGLES + policy_action * ACTION_SCALE
# Build action dict (first 15 joints only)
action_dict = {}
for motor in G1_29_JointIndex:
action_dict[f"{motor.name}.q"] = float(target[motor.value])
for i in range(15):
motor_name = G1_29_JointIndex(i).name
action_dict[f"{motor_name}.q"] = float(target[i])
# Zero out missing joints for g1_23dof
for joint_idx in MISSING_JOINTS:
motor_name = G1_29_JointIndex(joint_idx).name
action_dict[f"{motor_name}.q"] = 0.0
# Send action to robot
self.robot.send_action(action_dict)
def run(repo_id: str = DEFAULT_HOLOSOMA_REPO_ID, policy_type: str = "fastsac") -> None:
"""Main function to run the Holosoma locomotion controller.
Args:
repo_id: Hugging Face Hub repository ID for Holosoma policies.
policy_type: Policy type to use ('fastsac' or 'ppo').
"""
# Load policy and gains
policy, kp, kd = load_policy(repo_id=repo_id, policy_type=policy_type)
# Initialize robot
config = UnitreeG1Config()
robot = UnitreeG1(config)
robot.connect()
holosoma_controller = HolosomaLocomotionController(policy, robot, kp, kd)
try:
robot.reset(CONTROL_DT, DEFAULT_ANGLES)
logger.info("Use joystick: LY=fwd/back, LX=left/right, RX=rotate")
logger.info("Press Ctrl+C to stop")
# Run step
while not robot._shutdown_event.is_set():
start_time = time.time()
holosoma_controller.run_step()
elapsed = time.time() - start_time
sleep_time = max(0, CONTROL_DT - elapsed)
time.sleep(sleep_time)
except KeyboardInterrupt:
logger.info("Stopping locomotion...")
finally:
if robot.is_connected:
robot.disconnect()
logger.info("Done!")
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Holosoma Locomotion Controller for Unitree G1")
parser.add_argument(
"--repo-id",
type=str,
default=DEFAULT_HOLOSOMA_REPO_ID,
help=f"Hugging Face Hub repo ID for Holosoma policies (default: {DEFAULT_HOLOSOMA_REPO_ID})",
)
parser.add_argument(
"--policy",
type=str,
choices=["fastsac", "ppo"],
default="fastsac",
help="Policy type to use: 'fastsac' (default) or 'ppo'",
)
args = parser.parse_args()
run(repo_id=args.repo_id, policy_type=args.policy)
return action_dict
@@ -24,6 +24,7 @@ This server runs on the robot and forwards:
Uses JSON for secure serialization instead of pickle.
"""
import argparse
import base64
import contextlib
import json
@@ -38,6 +39,8 @@ from unitree_sdk2py.idl.default import unitree_hg_msg_dds__LowCmd_
from unitree_sdk2py.idl.unitree_hg.msg.dds_ import LowCmd_ as hg_LowCmd, LowState_ as hg_LowState
from unitree_sdk2py.utils.crc import CRC
from lerobot.cameras.zmq.image_server import ImageServer
# DDS topic names follow Unitree SDK naming conventions
# ruff: noqa: N816
kTopicLowCommand_Debug = "rt/lowcmd" # action to robot
@@ -150,6 +153,32 @@ def cmd_forward_loop(
def main() -> None:
"""Main entry point for the robot server bridge."""
parser = argparse.ArgumentParser(description="DDS-to-ZMQ bridge server for Unitree G1")
parser.add_argument("--camera", action="store_true", help="Also launch camera server")
parser.add_argument("--camera-device", type=int, default=4, help="Camera device ID (default: 4)")
parser.add_argument("--camera-fps", type=int, default=30, help="Camera FPS (default: 30)")
parser.add_argument("--camera-width", type=int, default=640, help="Camera width (default: 640)")
parser.add_argument("--camera-height", type=int, default=480, help="Camera height (default: 480)")
parser.add_argument("--camera-port", type=int, default=5555, help="Camera ZMQ port (default: 5555)")
args = parser.parse_args()
# Optionally start camera server in background thread
camera_thread = None
if args.camera:
camera_config = {
"fps": args.camera_fps,
"cameras": {
"head_camera": {
"device_id": args.camera_device,
"shape": [args.camera_height, args.camera_width],
}
},
}
camera_server = ImageServer(camera_config, port=args.camera_port)
camera_thread = threading.Thread(target=camera_server.run, daemon=True)
camera_thread.start()
print(f"Camera server started on port {args.camera_port} (device {args.camera_device})")
# initialize DDS
ChannelFactoryInitialize(0)
@@ -206,6 +235,8 @@ def main() -> None:
shutdown_event.set()
ctx.term() # terminates blocking zmq.recv() calls
t_state.join(timeout=2.0)
if camera_thread is not None:
camera_thread.join(timeout=2.0)
if __name__ == "__main__":
@@ -14,27 +14,67 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import annotations
import logging
import struct
import threading
import time
from dataclasses import dataclass, field
from functools import cached_property
from typing import Any
from typing import TYPE_CHECKING, Protocol, runtime_checkable
import numpy as np
from lerobot.cameras.utils import make_cameras_from_configs
from lerobot.envs.factory import make_env
from lerobot.processor import RobotAction, RobotObservation
from lerobot.robots.unitree_g1.g1_utils import G1_29_JointArmIndex, G1_29_JointIndex
from lerobot.robots.unitree_g1.robot_kinematic_processor import G1_29_ArmIK
from lerobot.robots.unitree_g1.g1_kinematics import G1_29_ArmIK
from lerobot.robots.unitree_g1.g1_utils import (
REMOTE_AXES,
REMOTE_KEYS,
G1_29_JointArmIndex,
G1_29_JointIndex,
default_remote_input,
make_locomotion_controller,
)
from lerobot.utils.import_utils import _unitree_sdk_available
from ..robot import Robot
from .config_unitree_g1 import UnitreeG1Config
if TYPE_CHECKING or _unitree_sdk_available:
from unitree_sdk2py.core.channel import (
ChannelFactoryInitialize as _SDKChannelFactoryInitialize,
ChannelPublisher as _SDKChannelPublisher,
ChannelSubscriber as _SDKChannelSubscriber,
)
from unitree_sdk2py.idl.default import unitree_hg_msg_dds__LowCmd_
from unitree_sdk2py.idl.unitree_hg.msg.dds_ import (
LowCmd_ as hg_LowCmd,
LowState_ as hg_LowState,
)
from unitree_sdk2py.utils.crc import CRC
else:
_SDKChannelFactoryInitialize = None
_SDKChannelPublisher = None
_SDKChannelSubscriber = None
unitree_hg_msg_dds__LowCmd_ = None
hg_LowCmd = None
hg_LowState = None
CRC = None
logger = logging.getLogger(__name__)
@runtime_checkable
class LocomotionController(Protocol):
control_dt: float
def run_step(self, action: dict, lowstate) -> dict: ...
def reset(self) -> None: ...
# DDS topic names follow Unitree SDK naming conventions
# ruff: noqa: N816
kTopicLowCommand_Debug = "rt/lowcmd"
@@ -63,7 +103,7 @@ class IMUState:
class G1_29_LowState: # noqa: N801
motor_state: list[MotorState] = field(default_factory=lambda: [MotorState() for _ in G1_29_JointIndex])
imu_state: IMUState = field(default_factory=IMUState)
wireless_remote: Any = None # Raw wireless remote data
wireless_remote: bytes | None = None # Raw wireless remote data
mode_machine: int = 0 # Robot mode
@@ -71,25 +111,6 @@ class UnitreeG1(Robot):
config_class = UnitreeG1Config
name = "unitree_g1"
# unitree remote controller
class RemoteController:
def __init__(self):
self.lx = 0
self.ly = 0
self.rx = 0
self.ry = 0
self.button = [0] * 16
def set(self, data):
# wireless_remote
keys = struct.unpack("H", data[2:4])[0]
for i in range(16):
self.button[i] = (keys & (1 << i)) >> i
self.lx = struct.unpack("f", data[4:8])[0]
self.rx = struct.unpack("f", data[8:12])[0]
self.ry = struct.unpack("f", data[12:16])[0]
self.ly = struct.unpack("f", data[20:24])[0]
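The byte layout unpacked by `RemoteController.set` above (16 button bits at bytes 2:4, then float32 axes at 4:8, 8:12, 12:16, and 20:24) can be exercised standalone with a synthetic buffer. This is a sketch using only the offsets visible in the parser; the example values are made up:

```python
import struct

# Synthetic 24-byte wireless_remote buffer matching the offsets parsed above.
buf = bytearray(24)
struct.pack_into("H", buf, 2, 0b101)   # buttons 0 and 2 pressed
struct.pack_into("f", buf, 4, 0.5)     # lx
struct.pack_into("f", buf, 8, -0.25)   # rx
struct.pack_into("f", buf, 12, 0.0)    # ry
struct.pack_into("f", buf, 20, 1.0)    # ly

keys = struct.unpack("H", buf[2:4])[0]
buttons = [(keys >> i) & 1 for i in range(16)]
lx = struct.unpack("f", buf[4:8])[0]
print(buttons[:4], lx)  # [1, 0, 1, 0] 0.5
```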
def __init__(self, config: UnitreeG1Config):
super().__init__(config)
@@ -103,11 +124,9 @@ class UnitreeG1(Robot):
# Import channel classes based on mode
if config.is_simulation:
from unitree_sdk2py.core.channel import (
ChannelFactoryInitialize,
ChannelPublisher,
ChannelSubscriber,
)
self._ChannelFactoryInitialize = _SDKChannelFactoryInitialize
self._ChannelPublisher = _SDKChannelPublisher
self._ChannelSubscriber = _SDKChannelSubscriber
else:
from lerobot.robots.unitree_g1.unitree_sdk2_socket import (
ChannelFactoryInitialize,
@@ -115,22 +134,30 @@ class UnitreeG1(Robot):
ChannelSubscriber,
)
# Store for use in connect()
self._ChannelFactoryInitialize = ChannelFactoryInitialize
self._ChannelPublisher = ChannelPublisher
self._ChannelSubscriber = ChannelSubscriber
self._ChannelFactoryInitialize = ChannelFactoryInitialize
self._ChannelPublisher = ChannelPublisher
self._ChannelSubscriber = ChannelSubscriber
# Initialize state variables
self.sim_env = None
self._env_wrapper = None
self._lowstate = None
self._lowstate_lock = threading.Lock()
self._shutdown_event = threading.Event()
self.subscribe_thread = None
self.remote_controller = self.RemoteController()
self.arm_ik = G1_29_ArmIK()
self.arm_ik = G1_29_ArmIK() if config.gravity_compensation else None
def _subscribe_motor_state(self): # polls robot state @ 250Hz
# Lower-body controller loaded dynamically
self.controller: LocomotionController | None = make_locomotion_controller(config.controller)
# Controller thread state
self._controller_thread = None
self._controller_action_lock = threading.Lock()
self.controller_input = default_remote_input()
self.controller_output = {}
def _subscribe_lowstate(self): # polls robot state @ 250Hz
while not self._shutdown_event.is_set():
start_time = time.time()
@@ -143,11 +170,11 @@ class UnitreeG1(Robot):
lowstate = G1_29_LowState()
# Capture motor states using G1_29_JointIndex
for id in G1_29_JointIndex:
lowstate.motor_state[id].q = msg.motor_state[id].q
lowstate.motor_state[id].dq = msg.motor_state[id].dq
lowstate.motor_state[id].tau_est = msg.motor_state[id].tau_est
lowstate.motor_state[id].temperature = msg.motor_state[id].temperature
for joint in G1_29_JointIndex:
lowstate.motor_state[joint].q = msg.motor_state[joint].q
lowstate.motor_state[joint].dq = msg.motor_state[joint].dq
lowstate.motor_state[joint].tau_est = msg.motor_state[joint].tau_est
lowstate.motor_state[joint].temperature = msg.motor_state[joint].temperature
# Capture IMU state
lowstate.imu_state.quaternion = list(msg.imu_state.quaternion)
@@ -162,31 +189,106 @@ class UnitreeG1(Robot):
# Capture mode_machine
lowstate.mode_machine = msg.mode_machine
self._lowstate = lowstate
with self._lowstate_lock:
self._lowstate = lowstate
current_time = time.time()
all_t_elapsed = current_time - start_time
sleep_time = max(0, (self.control_dt - all_t_elapsed)) # maintain constant control dt
time.sleep(sleep_time)
def publish_lowcmd(
self,
action: RobotAction,
kp: np.ndarray | list[float] | None = None,
kd: np.ndarray | list[float] | None = None,
tau: np.ndarray | list[float] | None = None,
) -> None: # writes robot command whenever requested
for motor in G1_29_JointIndex:
key = f"{motor.name}.q"
if key in action:
self.msg.motor_cmd[motor.value].q = action[key]
self.msg.motor_cmd[motor.value].qd = 0
self.msg.motor_cmd[motor.value].kp = (
kp[motor.value] if kp is not None else self.kp[motor.value]
)
self.msg.motor_cmd[motor.value].kd = (
kd[motor.value] if kd is not None else self.kd[motor.value]
)
self.msg.motor_cmd[motor.value].tau = tau[motor.value] if tau is not None else 0.0
self.msg.crc = self.crc.Crc(self.msg)
self.lowcmd_publisher.Write(self.msg)
@property
def _cameras_ft(self) -> dict[str, tuple]:
return {
cam: (self.config.cameras[cam].height, self.config.cameras[cam].width, 3) for cam in self.cameras
}
@cached_property
def observation_features(self) -> dict[str, type | tuple]:
return {**self._motors_ft, **self._cameras_ft}
@cached_property
def action_features(self) -> dict[str, type]:
return {f"{G1_29_JointIndex(motor).name}.q": float for motor in G1_29_JointIndex}
if self.controller is None:
return {f"{G1_29_JointIndex(motor).name}.q": float for motor in G1_29_JointIndex}
arm_features = {f"{G1_29_JointArmIndex(motor).name}.q": float for motor in G1_29_JointArmIndex}
remote_features = dict.fromkeys(REMOTE_AXES, float)
return {**arm_features, **remote_features}
def _controller_loop(self):
"""Background thread that runs controller at policy's control_dt."""
control_dt = self.controller.control_dt
logger.info(f"Controller loop starting with control_dt={control_dt} ({1.0 / control_dt:.1f}Hz)")
loop_count = 0
last_log_time = time.time()
while not self._shutdown_event.is_set():
start_time = time.time()
with self._lowstate_lock:
lowstate = self._lowstate
if lowstate is not None and self.controller is not None:
loop_count += 1
if time.time() - last_log_time >= 5.0: # Log every 5 seconds
actual_hz = loop_count / (time.time() - last_log_time)
logger.info(
f"Controller actual rate: {actual_hz:.1f}Hz (target: {1.0 / control_dt:.1f}Hz)"
)
loop_count = 0
last_log_time = time.time()
# Read controller input snapshot
with self._controller_action_lock:
controller_input = dict(self.controller_input)
# Run controller step
controller_action = self.controller.run_step(controller_input, lowstate)
# Write controller output snapshot
with self._controller_action_lock:
self.controller_output = dict(controller_action)
ctrl_kp = self.controller.kp if hasattr(self.controller, "kp") else None
ctrl_kd = self.controller.kd if hasattr(self.controller, "kd") else None
self.publish_lowcmd(controller_action, kp=ctrl_kp, kd=ctrl_kd)
elapsed = time.time() - start_time
sleep_time = max(0, control_dt - elapsed)
time.sleep(sleep_time)
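The `_controller_loop` above uses a sleep-compensated fixed-rate loop: measure the step's elapsed time and sleep only for the remainder of `control_dt`. A minimal standalone sketch of the same pattern (`run_at_rate` is a hypothetical helper, not part of the codebase):

```python
import time

def run_at_rate(step, control_dt, n_steps):
    """Sleep-compensated fixed-rate loop, the same pattern as _controller_loop."""
    for _ in range(n_steps):
        start = time.time()
        step()
        elapsed = time.time() - start
        # Sleep only for whatever is left of the control period.
        time.sleep(max(0.0, control_dt - elapsed))

t0 = time.time()
run_at_rate(lambda: None, control_dt=0.01, n_steps=5)
total = time.time() - t0
print(f"{total:.3f}s")  # roughly 0.05s for 5 steps at 100 Hz
```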
def calibrate(self) -> None:
# TODO: implement g1_29 calibration
pass
def configure(self) -> None:
pass
def connect(self, calibrate: bool = True) -> None: # connect to DDS
from unitree_sdk2py.idl.default import unitree_hg_msg_dds__LowCmd_
from unitree_sdk2py.idl.unitree_hg.msg.dds_ import (
LowCmd_ as hg_LowCmd,
LowState_ as hg_LowState,
)
from unitree_sdk2py.utils.crc import CRC
# Initialize DDS channel and simulation environment
if self.config.is_simulation:
self._ChannelFactoryInitialize(0, "lo")
@@ -194,7 +296,7 @@ class UnitreeG1(Robot):
# Extract the actual gym env from the dict structure
self.sim_env = self._env_wrapper["hub_env"][0].envs[0]
else:
self._ChannelFactoryInitialize(0)
self._ChannelFactoryInitialize(0, config=self.config)
# Initialize direct motor control interface
self.lowcmd_publisher = self._ChannelPublisher(kTopicLowCommand_Debug, hg_LowCmd)
@@ -203,7 +305,7 @@ class UnitreeG1(Robot):
self.lowstate_subscriber.Init()
# Start subscribe thread to read robot state
self.subscribe_thread = threading.Thread(target=self._subscribe_motor_state)
self.subscribe_thread = threading.Thread(target=self._subscribe_lowstate)
self.subscribe_thread.start()
# Connect cameras
@@ -220,25 +322,53 @@ class UnitreeG1(Robot):
# Wait for first state message to arrive
lowstate = None
deadline = time.time() + 10.0
while lowstate is None:
lowstate = self._lowstate
with self._lowstate_lock:
lowstate = self._lowstate
if lowstate is None:
if time.time() > deadline:
raise TimeoutError("Timed out waiting for robot state (10s)")
logger.warning("[UnitreeG1] Waiting for robot state...")
time.sleep(0.01)
logger.warning("[UnitreeG1] Connected to robot.")
logger.info("[UnitreeG1] Connected to robot.")
self.msg.mode_machine = lowstate.mode_machine
# Initialize all motors with unified kp/kd from config
self.kp = np.array(self.config.kp, dtype=np.float32)
self.kd = np.array(self.config.kd, dtype=np.float32)
for id in G1_29_JointIndex:
self.msg.motor_cmd[id].mode = 1
self.msg.motor_cmd[id].kp = self.kp[id.value]
self.msg.motor_cmd[id].kd = self.kd[id.value]
self.msg.motor_cmd[id].q = lowstate.motor_state[id.value].q
for joint in G1_29_JointIndex:
self.msg.motor_cmd[joint].mode = 1
self.msg.motor_cmd[joint].kp = self.kp[joint.value]
self.msg.motor_cmd[joint].kd = self.kd[joint.value]
self.msg.motor_cmd[joint].q = lowstate.motor_state[joint.value].q
# Start controller thread if enabled
if self.controller is not None:
self._controller_thread = threading.Thread(target=self._controller_loop, daemon=True)
self._controller_thread.start()
fps = int(1.0 / self.controller.control_dt)
logger.info(f"Controller thread started ({fps}Hz)")
def _send_zero_torque(self) -> None:
"""Send a zero-gain command to make joints passive before shutting down."""
try:
with self._lowstate_lock:
lowstate = self._lowstate
if lowstate is None:
return
action = {f"{motor.name}.q": lowstate.motor_state[motor.value].q for motor in G1_29_JointIndex}
zero_gains = np.zeros(29, dtype=np.float32)
self.publish_lowcmd(action, kp=zero_gains, kd=zero_gains, tau=zero_gains)
logger.info("Sent zero-torque command for safe shutdown")
except Exception as e:
logger.warning(f"Failed to send zero-torque on disconnect: {e}")
def disconnect(self):
# Put robot in passive mode before stopping threads
if not self.config.is_simulation:
self._send_zero_torque()
# Signal thread to stop and unblock any waits
self._shutdown_event.set()
@@ -248,6 +378,12 @@ class UnitreeG1(Robot):
if self.subscribe_thread.is_alive():
logger.warning("Subscribe thread did not stop cleanly")
# Wait for controller thread to finish
if self._controller_thread is not None:
self._controller_thread.join(timeout=2.0)
if self._controller_thread.is_alive():
logger.warning("Controller thread did not stop cleanly")
# Close simulation environment
if self.config.is_simulation and self.sim_env is not None:
try:
@@ -274,7 +410,8 @@ class UnitreeG1(Robot):
cam.disconnect()
def get_observation(self) -> RobotObservation:
lowstate = self._lowstate
with self._lowstate_lock:
lowstate = self._lowstate
if lowstate is None:
return {}
@@ -313,14 +450,9 @@ class UnitreeG1(Robot):
obs["imu.rpy.pitch"] = lowstate.imu_state.rpy[1]
obs["imu.rpy.yaw"] = lowstate.imu_state.rpy[2]
# Controller - parse wireless_remote and add to obs
if lowstate.wireless_remote and len(lowstate.wireless_remote) >= 24:
self.remote_controller.set(lowstate.wireless_remote)
obs["remote.buttons"] = self.remote_controller.button.copy()
obs["remote.lx"] = self.remote_controller.lx
obs["remote.ly"] = self.remote_controller.ly
obs["remote.rx"] = self.remote_controller.rx
obs["remote.ry"] = self.remote_controller.ry
# Wireless remote (raw bytes for teleoperator)
if lowstate.wireless_remote:
obs["wireless_remote"] = lowstate.wireless_remote
# Cameras - read images from ZMQ cameras
for cam_name, cam in self._cameras.items():
@@ -328,73 +460,63 @@ class UnitreeG1(Robot):
return obs
def send_action(self, action: RobotAction) -> RobotAction:
action_to_publish = action
if self.controller is not None:
# Controller thread owns legs/waist. Here we only update joystick inputs
# and publish arm targets from the teleoperator.
self._update_controller_action(action)
arm_prefixes = tuple(j.name for j in G1_29_JointArmIndex)
action_to_publish = {
key: value
for key, value in action.items()
if key.endswith(".q") and key.startswith(arm_prefixes)
}
tau = None
if self.config.gravity_compensation and self.arm_ik is not None:
tau = np.zeros(29, dtype=np.float32)
action_np = np.array(
[
action_to_publish.get(f"{joint.name}.q", self.msg.motor_cmd[joint.value].q)
for joint in G1_29_JointArmIndex
],
dtype=np.float32,
)
arm_tau = self.arm_ik.solve_tau(action_np)
arm_start_idx = G1_29_JointArmIndex.kLeftShoulderPitch.value
for joint in G1_29_JointArmIndex:
local_idx = joint.value - arm_start_idx
tau[joint.value] = arm_tau[local_idx]
self.publish_lowcmd(action_to_publish, tau=tau)
return action
def _update_controller_action(self, action: RobotAction) -> None:
"""Update controller input state from incoming teleop action."""
with self._controller_action_lock:
for key in REMOTE_KEYS:
if key in action:
self.controller_input[key] = action[key]
@property
def is_calibrated(self) -> bool:
return True
@property
def is_connected(self) -> bool:
return self._lowstate is not None
with self._lowstate_lock:
return self._lowstate is not None
@property
def _motors_ft(self) -> dict[str, type]:
"""Joint positions for all 29 joints."""
return {f"{G1_29_JointIndex(motor).name}.q": float for motor in G1_29_JointIndex}
@property
def cameras(self) -> dict:
return self._cameras
@property
def _cameras_ft(self) -> dict[str, tuple]:
return {
cam: (self.config.cameras[cam].height, self.config.cameras[cam].width, 3) for cam in self.cameras
}
@cached_property
def observation_features(self) -> dict[str, type | tuple]:
return {**self._motors_ft, **self._cameras_ft}
def send_action(self, action: RobotAction) -> RobotAction:
for motor in G1_29_JointIndex:
key = f"{motor.name}.q"
if key in action:
self.msg.motor_cmd[motor.value].q = action[key]
self.msg.motor_cmd[motor.value].qd = 0
self.msg.motor_cmd[motor.value].kp = self.kp[motor.value]
self.msg.motor_cmd[motor.value].kd = self.kd[motor.value]
self.msg.motor_cmd[motor.value].tau = 0
if self.config.gravity_compensation:
# Build action_np from motor commands (arm joints are indices 15-28, local indices 0-13)
action_np = np.zeros(14)
arm_start_idx = G1_29_JointArmIndex.kLeftShoulderPitch.value # 15
for joint in G1_29_JointArmIndex:
local_idx = joint.value - arm_start_idx
action_np[local_idx] = self.msg.motor_cmd[joint.value].q
tau = self.arm_ik.solve_tau(action_np)
# Apply tau back to motor commands
for joint in G1_29_JointArmIndex:
local_idx = joint.value - arm_start_idx
self.msg.motor_cmd[joint.value].tau = tau[local_idx]
self.msg.crc = self.crc.Crc(self.msg)
self.lowcmd_publisher.Write(self.msg)
return action
def get_gravity_orientation(self, quaternion):
"""Get gravity orientation from quaternion."""
qw = quaternion[0]
qx = quaternion[1]
qy = quaternion[2]
qz = quaternion[3]
gravity_orientation = np.zeros(3)
gravity_orientation[0] = 2 * (-qz * qx + qw * qy)
gravity_orientation[1] = -2 * (qz * qy + qw * qx)
gravity_orientation[2] = 1 - 2 * (qw * qw + qz * qz)
return gravity_orientation
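The formula above projects world gravity (the -z axis) into the body frame from a (w, x, y, z) quaternion. A quick standalone sanity check of the same arithmetic: a level pose yields gravity straight down, and a 90° roll about x moves it onto the body -y axis.

```python
import numpy as np

def get_gravity_orientation(quaternion):
    # Standalone copy of the formula above: world gravity (-z) expressed
    # in the body frame, given a (w, x, y, z) quaternion.
    qw, qx, qy, qz = quaternion
    g = np.zeros(3)
    g[0] = 2 * (-qz * qx + qw * qy)
    g[1] = -2 * (qz * qy + qw * qx)
    g[2] = 1 - 2 * (qw * qw + qz * qz)
    return g

g_level = get_gravity_orientation([1.0, 0.0, 0.0, 0.0])  # identity: level pose
print(g_level)  # [ 0.  0. -1.]
s = np.sqrt(0.5)
g_roll = get_gravity_orientation([s, s, 0.0, 0.0])  # 90 deg roll about x
print(g_roll)  # [ 0. -1.  0.]
```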
def reset(
self,
control_dt: float | None = None,
@@ -407,15 +529,9 @@ class UnitreeG1(Robot):
if self.config.is_simulation and self.sim_env is not None:
self.sim_env.reset()
for motor in G1_29_JointIndex:
self.msg.motor_cmd[motor.value].q = default_positions[motor.value]
self.msg.motor_cmd[motor.value].qd = 0
self.msg.motor_cmd[motor.value].kp = self.kp[motor.value]
self.msg.motor_cmd[motor.value].kd = self.kd[motor.value]
self.msg.motor_cmd[motor.value].tau = 0
self.msg.crc = self.crc.Crc(self.msg)
self.lowcmd_publisher.Write(self.msg)
self.publish_lowcmd(
{f"{motor.name}.q": float(default_positions[motor.value]) for motor in G1_29_JointIndex}
)
else:
total_time = 3.0
num_steps = int(total_time / control_dt)
@@ -446,4 +562,8 @@ class UnitreeG1(Robot):
sleep_time = max(0, control_dt - elapsed)
time.sleep(sleep_time)
# Reset controller internal state (gait phase, obs history, etc.)
if self.controller is not None and hasattr(self.controller, "reset"):
self.controller.reset()
logger.info("Reached default position")
@@ -22,6 +22,8 @@ import zmq
from lerobot.robots.unitree_g1.config_unitree_g1 import UnitreeG1Config
# Module-level ZMQ state mirrors the Unitree SDK's global ChannelFactory Singleton.
# Only one robot connection per process is supported.
_ctx: zmq.Context | None = None
_lowcmd_sock: zmq.Socket | None = None
_lowstate_sock: zmq.Socket | None = None
@@ -97,17 +99,22 @@ def lowcmd_to_dict(topic: str, msg: Any) -> dict[str, Any]:
}
def ChannelFactoryInitialize(*args: Any, **kwargs: Any) -> None: # noqa: N802
def ChannelFactoryInitialize(domain_id: int = 0, config: Any = None) -> None: # noqa: N802
"""
Initialize ZMQ sockets for robot communication.
This function mimics the Unitree SDK's ChannelFactoryInitialize but uses
ZMQ sockets to connect to the robot server bridge instead of DDS.
Args:
domain_id: Ignored (for API compatibility with Unitree SDK)
config: UnitreeG1Config instance with robot_ip
"""
global _ctx, _lowcmd_sock, _lowstate_sock
# read socket config
config = UnitreeG1Config()
if config is None:
config = UnitreeG1Config()
robot_ip = config.robot_ip
ctx = zmq.Context.instance()
@@ -0,0 +1,462 @@
#!/usr/bin/env python
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Benchmark runner: train and evaluate policies across simulation benchmarks.
Orchestrates per-benchmark training and evaluation using the existing
``lerobot-train`` and ``lerobot-eval`` CLI tools.
Typical usage::
# Train SmolVLA on LIBERO-plus (4 GPUs, 50k steps):
lerobot-benchmark train \\
--benchmarks libero_plus \\
--policy-path lerobot/smolvla_base \\
--hub-user $HF_USER \\
--num-gpus 4 --steps 50000
# Evaluate the trained policies:
lerobot-benchmark eval \\
--benchmarks libero_plus \\
--hub-user $HF_USER
# Full pipeline (train → upload → eval) for multiple benchmarks:
lerobot-benchmark all \\
--benchmarks libero_plus,robocasa,robomme \\
--policy-path lerobot/smolvla_base \\
--hub-user $HF_USER \\
--num-gpus 4 --steps 50000
"""
from __future__ import annotations
import argparse
import logging
import shutil
import subprocess
import sys
from dataclasses import dataclass, field
from pathlib import Path
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger(__name__)
@dataclass
class BenchmarkEntry:
"""Training + evaluation settings for a single benchmark.
When ``eval_tasks`` is set, evaluation runs once per task in the list
(e.g. libero_spatial, libero_object, ...). ``env_task`` is still used as
the task for mid-training evaluation during ``lerobot-train``.
"""
dataset_repo_id: str
env_type: str
env_task: str
eval_tasks: list[str] | None = None
train_overrides: dict[str, str] = field(default_factory=dict)
eval_overrides: dict[str, str] = field(default_factory=dict)
LIBERO_SUITES = ["libero_spatial", "libero_object", "libero_goal", "libero_10"]
# Each benchmark maps a human-readable name to its dataset and eval env.
# ``dataset_repo_id`` can contain ``{hub_user}`` which is interpolated at
# runtime from ``--hub-user``.
BENCHMARK_REGISTRY: dict[str, BenchmarkEntry] = {
"libero": BenchmarkEntry(
dataset_repo_id="{hub_user}/libero",
env_type="libero",
env_task="libero_spatial",
eval_tasks=LIBERO_SUITES,
),
"libero_plus": BenchmarkEntry(
dataset_repo_id="{hub_user}/libero_plus",
env_type="libero_plus",
env_task="libero_spatial",
eval_tasks=LIBERO_SUITES,
),
"metaworld": BenchmarkEntry(
dataset_repo_id="{hub_user}/metaworld",
env_type="metaworld",
env_task="metaworld-push-v2",
),
"robocasa": BenchmarkEntry(
dataset_repo_id="{hub_user}/robocasa",
env_type="robocasa",
env_task="PickPlaceCounterToCabinet",
),
"robomme": BenchmarkEntry(
dataset_repo_id="{hub_user}/robomme",
env_type="robomme",
env_task="PickXtimes",
),
}
def _policy_repo_id(hub_user: str, policy_name: str, benchmark: str) -> str:
return f"{hub_user}/{policy_name}_{benchmark}"
def _extra_keys(extra_args: list[str]) -> set[str]:
"""Extract ``--key`` prefixes from extra CLI args for override detection."""
keys: set[str] = set()
for arg in extra_args:
if arg.startswith("--") and "=" in arg:
keys.add(arg.split("=", 1)[0])
return keys
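The keys extracted here drive the default-merging loops in `_build_train_cmd` and `_build_eval_cmd`: a built-in default is dropped whenever the same `--key` appears in the pass-through args. A toy illustration of that merge (the flag values are hypothetical):

```python
# Defaults the builder would emit, and a user override passed through the CLI.
defaults = [("--steps", "50000"), ("--batch_size", "8")]
extra_args = ["--steps=1000"]  # hypothetical user override

# Same detection as _extra_keys: collect "--key" prefixes of key=value args.
overridden = {a.split("=", 1)[0] for a in extra_args if a.startswith("--") and "=" in a}

# Keep only defaults whose key was not overridden, then append the extras.
cmd = [f"{k}={v}" for k, v in defaults if k not in overridden]
cmd.extend(extra_args)
print(cmd)  # ['--batch_size=8', '--steps=1000']
```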
def _build_train_cmd(
benchmark: BenchmarkEntry,
*,
policy_path: str,
hub_user: str,
policy_name: str,
benchmark_name: str,
num_gpus: int,
steps: int,
batch_size: int,
eval_freq: int,
save_freq: int,
wandb: bool,
extra_args: list[str],
) -> list[str]:
"""Build the ``accelerate launch lerobot-train`` command list."""
lerobot_train = shutil.which("lerobot-train")
if lerobot_train is None:
raise RuntimeError("lerobot-train not found on PATH. Is lerobot installed?")
# Strip bare "--" separators that argparse may pass through
cleaned_extra = [a for a in extra_args if a != "--"]
overridden = _extra_keys(cleaned_extra)
repo_id = _policy_repo_id(hub_user, policy_name, benchmark_name)
dataset_id = benchmark.dataset_repo_id.format(hub_user=hub_user)
defaults: list[tuple[str, str]] = [
("--policy.path", policy_path),
("--dataset.repo_id", dataset_id),
("--policy.repo_id", repo_id),
("--env.type", benchmark.env_type),
("--env.task", benchmark.env_task),
("--steps", str(steps)),
("--batch_size", str(batch_size)),
("--eval_freq", str(eval_freq)),
("--save_freq", str(save_freq)),
("--output_dir", f"outputs/train/{policy_name}_{benchmark_name}"),
("--job_name", f"{policy_name}_{benchmark_name}"),
("--policy.push_to_hub", "true"),
]
if wandb:
defaults.append(("--wandb.enable", "true"))
for k, v in benchmark.train_overrides.items():
defaults.append((f"--{k}", v))
cmd: list[str] = [
"accelerate", "launch",
"--multi_gpu",
f"--num_processes={num_gpus}",
lerobot_train,
]
for key, val in defaults:
if key not in overridden:
cmd.append(f"{key}={val}")
cmd.extend(cleaned_extra)
return cmd
def _build_eval_cmd(
benchmark: BenchmarkEntry,
*,
hub_user: str,
policy_name: str,
benchmark_name: str,
eval_task: str | None = None,
n_episodes: int,
batch_size_eval: int,
extra_args: list[str],
) -> list[str]:
"""Build the ``lerobot-eval`` command list.
``eval_task`` overrides the benchmark's ``env_task`` so the same
benchmark can be evaluated on multiple suites (e.g. LIBERO).
"""
lerobot_eval = shutil.which("lerobot-eval")
if lerobot_eval is None:
raise RuntimeError("lerobot-eval not found on PATH. Is lerobot installed?")
task = eval_task or benchmark.env_task
repo_id = _policy_repo_id(hub_user, policy_name, benchmark_name)
out_dir = _eval_output_dir(policy_name, benchmark_name, eval_task=task)
cleaned_extra = [a for a in extra_args if a != "--"]
overridden = _extra_keys(cleaned_extra)
defaults: list[tuple[str, str]] = [
("--policy.path", repo_id),
("--env.type", benchmark.env_type),
("--env.task", task),
("--eval.n_episodes", str(n_episodes)),
("--eval.batch_size", str(batch_size_eval)),
("--output_dir", out_dir),
("--policy.device", "cuda"),
]
for k, v in benchmark.eval_overrides.items():
defaults.append((f"--{k}", v))
cmd: list[str] = [lerobot_eval]
for key, val in defaults:
if key not in overridden:
cmd.append(f"{key}={val}")
cmd.extend(cleaned_extra)
return cmd
def _eval_output_dir(policy_name: str, benchmark_name: str, eval_task: str | None = None) -> Path:
if eval_task:
return Path(f"outputs/eval/{policy_name}_{benchmark_name}/{eval_task}")
return Path(f"outputs/eval/{policy_name}_{benchmark_name}")
def _run(cmd: list[str], *, dry_run: bool) -> None:
log.info("Command: %s", " \\\n ".join(cmd))
if dry_run:
log.info("[dry-run] Skipping execution.")
return
result = subprocess.run(cmd, check=False)
if result.returncode != 0:
log.error("Command failed with exit code %d", result.returncode)
sys.exit(result.returncode)
def _push_eval_to_hub(
*,
hub_user: str,
policy_name: str,
benchmark_name: str,
eval_task: str | None = None,
dry_run: bool,
) -> None:
"""Upload eval results (metrics + videos) to the policy repo on the Hub."""
from huggingface_hub import HfApi
repo_id = _policy_repo_id(hub_user, policy_name, benchmark_name)
local_dir = _eval_output_dir(policy_name, benchmark_name, eval_task=eval_task)
hub_path = f"eval/{eval_task}" if eval_task else f"eval/{benchmark_name}"
if not local_dir.exists():
log.warning("Eval output dir %s does not exist, skipping hub upload.", local_dir)
return
log.info("Uploading eval results from %s to %s (path_in_repo=%s)", local_dir, repo_id, hub_path)
if dry_run:
log.info("[dry-run] Skipping upload.")
return
api = HfApi()
api.upload_folder(
folder_path=str(local_dir),
repo_id=repo_id,
path_in_repo=hub_path,
repo_type="model",
commit_message=f"Upload eval results for {eval_task or benchmark_name}",
)
def _resolve_benchmarks(names: str) -> list[tuple[str, BenchmarkEntry]]:
out = []
for name in names.split(","):
name = name.strip()
if name not in BENCHMARK_REGISTRY:
available = ", ".join(BENCHMARK_REGISTRY)
raise ValueError(f"Unknown benchmark '{name}'. Available: {available}")
out.append((name, BENCHMARK_REGISTRY[name]))
return out
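A minimal sketch of the comma-separated lookup performed by `_resolve_benchmarks`, with a hypothetical two-entry registry standing in for `BENCHMARK_REGISTRY`:

```python
REGISTRY = {"libero_plus": "cfg_a", "robocasa": "cfg_b"}  # hypothetical stand-in entries

def resolve_benchmarks(names: str):
    # Split on commas, strip whitespace, and fail fast on unknown names.
    out = []
    for name in (n.strip() for n in names.split(",")):
        if name not in REGISTRY:
            raise ValueError(f"Unknown benchmark '{name}'. Available: {', '.join(REGISTRY)}")
        out.append((name, REGISTRY[name]))
    return out
```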
def cmd_train(args: argparse.Namespace) -> None:
benchmarks = _resolve_benchmarks(args.benchmarks)
for bname, bentry in benchmarks:
log.info("=== Training on benchmark: %s ===", bname)
cmd = _build_train_cmd(
bentry,
policy_path=args.policy_path,
hub_user=args.hub_user,
policy_name=args.policy_name,
benchmark_name=bname,
num_gpus=args.num_gpus,
steps=args.steps,
batch_size=args.batch_size,
eval_freq=args.eval_freq,
save_freq=args.save_freq,
wandb=args.wandb,
extra_args=args.extra,
)
_run(cmd, dry_run=args.dry_run)
def _run_eval_for_benchmark(
bname: str,
bentry: BenchmarkEntry,
args: argparse.Namespace,
) -> None:
"""Run evaluation for a single benchmark, iterating over all its eval_tasks."""
tasks = bentry.eval_tasks or [bentry.env_task]
for task in tasks:
log.info("=== Evaluating %s / %s ===", bname, task)
cmd = _build_eval_cmd(
bentry,
hub_user=args.hub_user,
policy_name=args.policy_name,
benchmark_name=bname,
eval_task=task if bentry.eval_tasks else None,
n_episodes=args.n_episodes,
batch_size_eval=args.batch_size_eval,
extra_args=args.extra,
)
_run(cmd, dry_run=args.dry_run)
if args.push_eval_to_hub:
_push_eval_to_hub(
hub_user=args.hub_user,
policy_name=args.policy_name,
benchmark_name=bname,
eval_task=task if bentry.eval_tasks else None,
dry_run=args.dry_run,
)
def cmd_eval(args: argparse.Namespace) -> None:
benchmarks = _resolve_benchmarks(args.benchmarks)
for bname, bentry in benchmarks:
_run_eval_for_benchmark(bname, bentry, args)
def cmd_all(args: argparse.Namespace) -> None:
"""Train on each benchmark, then evaluate each."""
benchmarks = _resolve_benchmarks(args.benchmarks)
log.info("Phase 1: Training on %d benchmark(s)", len(benchmarks))
for bname, bentry in benchmarks:
log.info("=== Training on benchmark: %s ===", bname)
cmd = _build_train_cmd(
bentry,
policy_path=args.policy_path,
hub_user=args.hub_user,
policy_name=args.policy_name,
benchmark_name=bname,
num_gpus=args.num_gpus,
steps=args.steps,
batch_size=args.batch_size,
eval_freq=args.eval_freq,
save_freq=args.save_freq,
wandb=args.wandb,
extra_args=args.extra,
)
_run(cmd, dry_run=args.dry_run)
log.info("Phase 2: Evaluating %d benchmark(s)", len(benchmarks))
for bname, bentry in benchmarks:
_run_eval_for_benchmark(bname, bentry, args)
def _add_common_args(p: argparse.ArgumentParser) -> None:
p.add_argument(
"--benchmarks", required=True,
help="Comma-separated benchmark names (e.g. libero_plus,robocasa,robomme).",
)
p.add_argument("--hub-user", required=True, help="HuggingFace Hub username.")
p.add_argument(
"--policy-name", default="smolvla",
help="Short policy name used in repo IDs and output dirs (default: smolvla).",
)
p.add_argument("--dry-run", action="store_true", help="Print commands without executing.")
def _add_train_args(p: argparse.ArgumentParser) -> None:
p.add_argument("--policy-path", default="lerobot/smolvla_base", help="Pretrained policy path.")
p.add_argument("--num-gpus", type=int, default=4, help="Number of GPUs.")
p.add_argument("--steps", type=int, default=50_000, help="Total training steps.")
p.add_argument("--batch-size", type=int, default=32, help="Per-GPU batch size.")
p.add_argument("--eval-freq", type=int, default=10_000, help="Eval every N steps (0 to disable).")
p.add_argument("--save-freq", type=int, default=10_000, help="Save checkpoint every N steps.")
p.add_argument("--wandb", action="store_true", help="Enable Weights & Biases logging.")
def _add_eval_args(p: argparse.ArgumentParser) -> None:
p.add_argument("--n-episodes", type=int, default=50, help="Number of eval episodes.")
p.add_argument("--batch-size-eval", type=int, default=10, help="Eval batch size (parallel envs).")
p.add_argument(
"--push-eval-to-hub", action="store_true",
help="Upload eval results (metrics + videos) to the policy repo on the Hub.",
)
def build_parser() -> argparse.ArgumentParser:
parser = argparse.ArgumentParser(
prog="lerobot-benchmark",
description="Train and evaluate policies across simulation benchmarks.",
)
sub = parser.add_subparsers(dest="command", required=True)
# train
p_train = sub.add_parser("train", help="Train a policy on each selected benchmark.")
_add_common_args(p_train)
_add_train_args(p_train)
p_train.set_defaults(func=cmd_train)
# eval
p_eval = sub.add_parser("eval", help="Evaluate trained policies on each benchmark.")
_add_common_args(p_eval)
_add_eval_args(p_eval)
p_eval.set_defaults(func=cmd_eval)
# all (train + eval)
p_all = sub.add_parser("all", help="Train then evaluate on each benchmark.")
_add_common_args(p_all)
_add_train_args(p_all)
_add_eval_args(p_all)
p_all.set_defaults(func=cmd_all)
# list
p_list = sub.add_parser("list", help="List available benchmarks.")
p_list.set_defaults(func=lambda _args: _list_benchmarks())
return parser
def _list_benchmarks() -> None:
print("Available benchmarks:\n")
for name, entry in BENCHMARK_REGISTRY.items():
print(f" {name}")
print(f" dataset: {entry.dataset_repo_id}")
print(f" env: {entry.env_type}")
if entry.eval_tasks:
print(f" eval on: {', '.join(entry.eval_tasks)}")
else:
print(f" eval on: {entry.env_task}")
print()
def main() -> None:
parser = build_parser()
args, extra = parser.parse_known_args()
args.extra = extra
args.func(args)
if __name__ == "__main__":
main()
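The `parse_known_args` call in `main()` is what lets unrecognized flags flow through as `args.extra` overrides for the underlying `lerobot-train`/`lerobot-eval` commands; a minimal sketch (subcommand and flag names are illustrative):

```python
import argparse

parser = argparse.ArgumentParser(prog="lerobot-benchmark")
sub = parser.add_subparsers(dest="command", required=True)
p_eval = sub.add_parser("eval")
p_eval.add_argument("--benchmarks", required=True)
p_eval.add_argument("--hub-user", required=True)

# Known flags are parsed; anything else is collected verbatim so it can be
# forwarded as an override to the wrapped command.
args, extra = parser.parse_known_args(
    ["eval", "--benchmarks=libero_plus", "--hub-user=me", "--eval.n_episodes=10"]
)
```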
@@ -49,6 +49,9 @@ You can learn about the CLI options for this script in the `EvalPipelineConfig`
import concurrent.futures as cf
import json
import logging
import shutil
import subprocess
import sys
import threading
import time
from collections import defaultdict
@@ -71,7 +74,9 @@ from tqdm import trange
from lerobot.configs import parser
from lerobot.configs.eval import EvalPipelineConfig
from lerobot.envs.factory import make_env, make_env_pre_post_processors
from lerobot.envs.lazy_vec_env import LazyVectorEnv
from lerobot.envs.utils import (
add_envs_task,
check_env_attributes_and_types,
@@ -82,6 +87,11 @@ from lerobot.policies.factory import make_policy, make_pre_post_processors
from lerobot.policies.pretrained import PreTrainedPolicy
from lerobot.processor import PolicyAction, PolicyProcessorPipeline
from lerobot.utils.constants import ACTION, DONE, OBS_STR, REWARD
from lerobot.utils.hf_eval_results import (
build_eval_results_rows,
default_eval_date,
upload_eval_results_yaml,
)
from lerobot.utils.import_utils import register_third_party_plugins
from lerobot.utils.io_utils import write_video
from lerobot.utils.random_utils import set_seed
@@ -502,9 +512,178 @@ def _compile_episode_data(
return data_dict
def _serializable_config(obj: Any) -> Any:
"""Recursively convert a config dict so it is JSON-serializable."""
if isinstance(obj, dict):
return {k: _serializable_config(v) for k, v in obj.items()}
if isinstance(obj, (list, tuple)):
return [_serializable_config(v) for v in obj]
if isinstance(obj, Path):
return str(obj)
if isinstance(obj, (int, float, str, bool, type(None))):
return obj
return str(obj)
def push_eval_to_hub(
repo_id: str,
output_dir: Path,
info: dict,
env_type: str,
env_task: str | None,
benchmark_dataset_id: str,
source_url: str | None = None,
notes: str | None = None,
) -> str:
"""Upload eval artifacts and `.eval_results` rows to the Hub.
Args:
repo_id: HF model repo (e.g. "user/my_policy").
output_dir: Local directory containing eval_info.json and videos/.
info: The eval results dict (as returned by eval_policy_all).
env_type: Environment type string (e.g. "libero_plus", "pusht").
env_task: The env task string from eval config.
benchmark_dataset_id: HF dataset id of the consolidated benchmark dataset.
source_url: Optional source URL for `.eval_results` attribution.
notes: Optional setup notes to include in `.eval_results`.
Returns:
URL of the last Hub commit.
"""
from huggingface_hub import HfApi
api = HfApi()
api.create_repo(repo_id=repo_id, exist_ok=True)
commit_url = ""
# 1. Upload eval_info.json
eval_json_path = output_dir / "eval_info.json"
if eval_json_path.exists():
commit_url = api.upload_file(
path_or_fileobj=str(eval_json_path),
path_in_repo=f"eval/{env_type}/eval_info.json",
repo_id=repo_id,
commit_message=f"Upload eval results for {env_type}",
)
# 2. Upload eval_config.json (policy, env, and eval settings used)
eval_config_path = output_dir / "eval_config.json"
if eval_config_path.exists():
api.upload_file(
path_or_fileobj=str(eval_config_path),
path_in_repo=f"eval/{env_type}/eval_config.json",
repo_id=repo_id,
commit_message=f"Upload eval config for {env_type}",
)
# 3. Upload rollout videos
videos_dir = output_dir / "videos"
if videos_dir.is_dir():
api.upload_folder(
folder_path=str(videos_dir),
path_in_repo=f"eval/{env_type}/videos",
repo_id=repo_id,
commit_message=f"Upload eval rollout videos for {env_type}",
)
# 4. Upload HF-native `.eval_results` rows (canonical leaderboard surface).
rows = build_eval_results_rows(
info=info,
env_type=env_type,
env_task=env_task,
benchmark_dataset_id=benchmark_dataset_id,
source_url=source_url,
notes=notes,
eval_date=default_eval_date(),
)
commit_url = upload_eval_results_yaml(
api=api,
repo_id=repo_id,
rows=rows,
env_type=env_type,
env_task=env_task,
output_dir=output_dir,
)
logging.info(f"Eval results pushed to https://huggingface.co/{repo_id}")
return commit_url
@parser.wrap()
def eval_main(cfg: EvalPipelineConfig):
logging.info(pformat(asdict(cfg)))
# Multi-instance orchestration only applies to local runtime.
# For docker runtime, instance_count controls the number of env containers
# spawned directly by run_eval_in_docker — no extra lerobot-eval processes needed.
if cfg.eval.runtime == "local" and cfg.eval.instance_count > 1 and cfg.eval.instance_id == 0:
_orchestrate_multi_instance_eval(cfg)
else:
_run_eval_worker(cfg)
def _maybe_add_libero_plus_perturbation(info: dict, cfg: EvalPipelineConfig) -> None:
if cfg.env.type != "libero_plus":
return
try:
from lerobot.envs.libero import aggregate_by_perturbation, build_perturbation_index
suite_names = [s.strip() for s in cfg.env.task.split(",") if s.strip()]
suite_indices = {s: build_perturbation_index(s) for s in suite_names}
perturbation_results = aggregate_by_perturbation(info["per_task"], suite_indices)
info["perturbation_results"] = perturbation_results
print("\n=== Perturbation Results ===")
for dim, stats in perturbation_results.items():
print(f" {dim}: {stats['pc_success']:.1f}% ({stats['n_episodes']} episodes)")
except Exception as exc:
# Never fail a finished long-running eval on post-processing.
print(f"WARNING: Failed to compute LIBERO-Plus perturbation breakdown: {exc}")
print("Continuing with per-suite + overall metrics only.")
def _save_eval_outputs(cfg: EvalPipelineConfig, info: dict) -> None:
output_dir = Path(cfg.output_dir)
output_dir.mkdir(parents=True, exist_ok=True)
with open(output_dir / "eval_info.json", "w") as f:
json.dump(info, f, indent=2)
eval_cfg_dict = _serializable_config(asdict(cfg))
with open(output_dir / "eval_config.json", "w") as f:
json.dump(eval_cfg_dict, f, indent=2)
def _maybe_push_eval_outputs(cfg: EvalPipelineConfig, info: dict) -> None:
if not cfg.push_to_hub:
return
repo_id = str(cfg.policy.pretrained_path)
try:
push_eval_to_hub(
repo_id=repo_id,
output_dir=Path(cfg.output_dir),
info=info,
env_type=cfg.env.type,
env_task=cfg.env.task,
benchmark_dataset_id=cfg.benchmark_dataset_id,
source_url=cfg.eval_result_source_url,
notes=cfg.eval_result_notes,
)
except Exception as exc:
logging.warning("Failed to push eval artifacts/results to Hub: %s", exc)
def _run_eval_worker(cfg: EvalPipelineConfig) -> dict:
logging.info(pformat(asdict(cfg)))
if cfg.eval.runtime in ("docker", "multiprocess"):
from lerobot.envs.docker_runtime import run_eval_in_docker, run_eval_multiprocess
if cfg.eval.runtime == "docker":
run_eval_in_docker(cfg)
else:
run_eval_multiprocess(cfg)
output_dir = Path(cfg.output_dir)
with open(output_dir / "eval_info.json") as f:
return json.load(f)
# Check device is available
device = get_safe_torch_device(cfg.policy.device, log=True)
@@ -548,34 +727,123 @@ def eval_main(cfg: EvalPipelineConfig):
# Create environment-specific preprocessor and postprocessor (e.g., for LIBERO environments)
env_preprocessor, env_postprocessor = make_env_pre_post_processors(env_cfg=cfg.env, policy_cfg=cfg.policy)
with torch.no_grad(), torch.autocast(device_type=device.type) if cfg.policy.use_amp else nullcontext():
info = eval_policy_all(
envs=envs,
policy=policy,
env_preprocessor=env_preprocessor,
env_postprocessor=env_postprocessor,
preprocessor=preprocessor,
postprocessor=postprocessor,
n_episodes=cfg.eval.n_episodes,
max_episodes_rendered=10,
videos_dir=Path(cfg.output_dir) / "videos",
start_seed=cfg.seed,
max_parallel_tasks=cfg.env.max_parallel_tasks,
)
print("Overall Aggregated Metrics:")
print(info["overall"])
try:
with (
torch.no_grad(),
torch.autocast(device_type=device.type) if cfg.policy.use_amp else nullcontext(),
):
info = eval_policy_all(
envs=envs,
policy=policy,
env_preprocessor=env_preprocessor,
env_postprocessor=env_postprocessor,
preprocessor=preprocessor,
postprocessor=postprocessor,
n_episodes=cfg.eval.n_episodes,
max_episodes_rendered=10,
videos_dir=Path(cfg.output_dir) / "videos",
start_seed=cfg.seed,
max_parallel_tasks=cfg.env.max_parallel_tasks,
instance_count=cfg.eval.instance_count,
instance_id=cfg.eval.instance_id,
)
print("Overall Aggregated Metrics:")
print(info["overall"])
# Print per-suite stats
for task_group, task_group_info in info.items():
print(f"\nAggregated Metrics for {task_group}:")
print(task_group_info)
# Close all vec envs
close_envs(envs)
for key, val in info.get("per_group", {}).items():
print(f"\nAggregated Metrics for {key}:")
print(val)
# Save info
with open(Path(cfg.output_dir) / "eval_info.json", "w") as f:
json.dump(info, f, indent=2)
_maybe_add_libero_plus_perturbation(info, cfg)
finally:
close_envs(envs)
_save_eval_outputs(cfg, info)
_maybe_push_eval_outputs(cfg, info)
logging.info("End of eval")
return info
def _orchestrate_multi_instance_eval(cfg: EvalPipelineConfig) -> None:
start_t = time.time()
root_output_dir = Path(cfg.output_dir)
instances_root = root_output_dir / "instances"
instances_root.mkdir(parents=True, exist_ok=True)
n_instances = cfg.eval.instance_count
logging.info(f"Launching multi-instance eval with {n_instances} workers.")
# Spawn workers for shard 1..N-1, run shard 0 in-process.
child_procs: list[tuple[int, subprocess.Popen]] = []
argv = [
arg
for arg in sys.argv[1:]
if not arg.startswith("--eval.instance_id=")
and not arg.startswith("--output_dir=")
and not arg.startswith("--push_to_hub=")
]
for i in range(1, n_instances):
child_output_dir = instances_root / str(i)
cmd = [
sys.executable,
"-m",
"lerobot.scripts.lerobot_eval",
*argv,
f"--eval.instance_id={i}",
f"--output_dir={child_output_dir}",
"--push_to_hub=false",
]
logging.info("Starting eval worker %s/%s", i + 1, n_instances)
child_procs.append((i, subprocess.Popen(cmd)))
cfg0 = deepcopy(cfg)
cfg0.eval.instance_id = 0
cfg0.push_to_hub = False
cfg0.output_dir = instances_root / "0"
_run_eval_worker(cfg0)
failed = []
for idx, proc in child_procs:
rc = proc.wait()
if rc != 0:
failed.append((idx, rc))
if failed:
raise RuntimeError(f"Multi-instance eval failed for workers: {failed}")
partial_infos: list[dict] = []
for i in range(n_instances):
info_path = instances_root / str(i) / "eval_info.json"
with open(info_path) as f:
partial_infos.append(json.load(f))
merged_per_task = []
for info in partial_infos:
merged_per_task.extend(info.get("per_task", []))
merged_per_task.sort(key=lambda x: (x["task_group"], x["task_id"]))
# Merge videos from each shard into final output dir.
merged_videos_dir = root_output_dir / "videos"
for i in range(n_instances):
shard_dir = instances_root / str(i)
shard_videos = shard_dir / "videos"
if shard_videos.is_dir():
shutil.copytree(shard_videos, merged_videos_dir, dirs_exist_ok=True)
old_prefix = str(shard_videos)
new_prefix = str(merged_videos_dir)
for entry in merged_per_task:
paths = entry.get("metrics", {}).get("video_paths", [])
entry["metrics"]["video_paths"] = [
p.replace(old_prefix, new_prefix, 1) if p.startswith(old_prefix) else p for p in paths
]
merged_info = _aggregate_eval_from_per_task(merged_per_task, total_eval_s=time.time() - start_t)
_maybe_add_libero_plus_perturbation(merged_info, cfg)
print("Overall Aggregated Metrics:")
print(merged_info["overall"])
_save_eval_outputs(cfg, merged_info)
_maybe_push_eval_outputs(cfg, merged_info)
logging.info("End of eval")
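The orchestrator and its workers rely on the same round-robin sharding rule (`idx % instance_count == instance_id`); a minimal sketch showing that the shards are disjoint and together cover every task:

```python
def shard(tasks, instance_count, instance_id):
    # Worker i takes every instance_count-th task, starting at offset i.
    return [t for idx, t in enumerate(tasks) if idx % instance_count == instance_id]

tasks = list(range(10))
shards = [shard(tasks, 3, i) for i in range(3)]
# shards -> [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]
```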
@@ -590,6 +858,179 @@ class TaskMetrics(TypedDict):
ACC_KEYS = ("sum_rewards", "max_rewards", "successes", "video_paths")
def _aggregate_eval_from_per_task(per_task_infos: list[dict], total_eval_s: float) -> dict:
"""Aggregate eval metrics from per-task payloads."""
group_acc: dict[str, dict[str, list]] = defaultdict(lambda: {k: [] for k in ACC_KEYS})
overall: dict[str, list] = {k: [] for k in ACC_KEYS}
def _append(group: str, key: str, value: Any):
if value is None:
return
if isinstance(value, list):
group_acc[group][key].extend(value)
overall[key].extend(value)
else:
group_acc[group][key].append(value)
overall[key].append(value)
for entry in per_task_infos:
group = entry["task_group"]
metrics = entry["metrics"]
_append(group, "sum_rewards", metrics.get("sum_rewards"))
_append(group, "max_rewards", metrics.get("max_rewards"))
_append(group, "successes", metrics.get("successes"))
paths = metrics.get("video_paths", [])
if paths:
group_acc[group]["video_paths"].extend(paths)
overall["video_paths"].extend(paths)
def _agg_from_list(xs: list[float]) -> float:
if not xs:
return float("nan")
arr = np.array(xs, dtype=float)
return float(np.nanmean(arr))
groups_aggregated = {}
for group, acc in group_acc.items():
groups_aggregated[group] = {
"avg_sum_reward": _agg_from_list(acc["sum_rewards"]),
"avg_max_reward": _agg_from_list(acc["max_rewards"]),
"pc_success": _agg_from_list(acc["successes"]) * 100 if acc["successes"] else float("nan"),
"n_episodes": len(acc["sum_rewards"]),
"video_paths": list(acc["video_paths"]),
}
overall_agg = {
"avg_sum_reward": _agg_from_list(overall["sum_rewards"]),
"avg_max_reward": _agg_from_list(overall["max_rewards"]),
"pc_success": _agg_from_list(overall["successes"]) * 100 if overall["successes"] else float("nan"),
"n_episodes": len(overall["sum_rewards"]),
"eval_s": total_eval_s,
"eval_ep_s": total_eval_s / max(1, len(overall["sum_rewards"])),
"video_paths": list(overall["video_paths"]),
}
return {"per_task": per_task_infos, "per_group": groups_aggregated, "overall": overall_agg}
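A toy illustration of the aggregation semantics (plain means instead of the nan-safe `np.nanmean` used above; the per-task data is made up):

```python
def aggregate(per_task):
    # Flatten episode-level lists across tasks, then aggregate:
    # successes become a percentage, rewards a plain mean.
    successes, rewards = [], []
    for entry in per_task:
        successes.extend(entry["metrics"]["successes"])
        rewards.extend(entry["metrics"]["sum_rewards"])
    return {
        "pc_success": 100.0 * sum(successes) / len(successes),
        "avg_sum_reward": sum(rewards) / len(rewards),
        "n_episodes": len(rewards),
    }

per_task = [
    {"task_group": "suite_a", "task_id": 0,
     "metrics": {"successes": [True, False], "sum_rewards": [1.0, 0.0]}},
    {"task_group": "suite_a", "task_id": 1,
     "metrics": {"successes": [True], "sum_rewards": [2.0]}},
]
info = aggregate(per_task)
```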
def _eval_task_batch(
batch: list[tuple[str, int, LazyVectorEnv]],
policy,
env_preprocessor: PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
env_postprocessor: PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
preprocessor: PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
postprocessor: PolicyProcessorPipeline[PolicyAction, PolicyAction],
start_seed: int | None,
max_episodes_rendered: int = 0,
videos_dir: Path | None = None,
) -> list[tuple[str, int, TaskMetrics]]:
"""Evaluate N tasks in a single batched rollout for GPU efficiency.
Each task contributes one sub-env to a combined SyncVectorEnv so the policy
processes all N observations in one forward pass per step.
"""
all_fns: list[Callable] = []
task_slices: list[tuple[str, int, int, int]] = []
offset = 0
for task_group, task_id, lazy_env in batch:
fns = lazy_env.factory_fns
if not fns:
continue
start = offset
offset += len(fns)
all_fns.extend(fns)
task_slices.append((task_group, task_id, start, offset))
if not all_fns:
return []
env_cls = batch[0][2].env_cls
combined_env = env_cls(all_fns)
try:
seeds = None
if start_seed is not None:
seeds = list(range(start_seed, start_seed + combined_env.num_envs))
ep_frames: list[np.ndarray] = []
def render_frame(env: gym.vector.VectorEnv):
if max_episodes_rendered <= 0:
return
n = min(max_episodes_rendered, env.num_envs)
if isinstance(env, gym.vector.SyncVectorEnv):
ep_frames.append(np.stack([env.envs[i].render() for i in range(n)]))
elif isinstance(env, gym.vector.AsyncVectorEnv):
ep_frames.append(np.stack(env.call("render")[:n]))
rollout_data = rollout(
env=combined_env,
policy=policy,
env_preprocessor=env_preprocessor,
env_postprocessor=env_postprocessor,
preprocessor=preprocessor,
postprocessor=postprocessor,
seeds=seeds,
render_callback=render_frame if max_episodes_rendered > 0 else None,
)
n_steps = rollout_data["done"].shape[1]
done_indices = torch.argmax(rollout_data["done"].to(int), dim=1)
mask = (torch.arange(n_steps) <= einops.repeat(done_indices + 1, "b -> b s", s=n_steps)).int()
batch_sum_rewards = einops.reduce((rollout_data["reward"] * mask), "b n -> b", "sum")
batch_max_rewards = einops.reduce((rollout_data["reward"] * mask), "b n -> b", "max")
batch_successes = einops.reduce((rollout_data["success"] * mask), "b n -> b", "any")
video_paths_per_task: dict[tuple[str, int], list[str]] = defaultdict(list)
if max_episodes_rendered > 0 and ep_frames and videos_dir:
stacked = np.stack(ep_frames, axis=1) # (batch, time, h, w, c)
rendered = 0
threads = []
for tg, tid, start_i, end_i in task_slices:
if rendered >= max_episodes_rendered:
break
task_dir = videos_dir / f"{tg}_{tid}"
task_dir.mkdir(parents=True, exist_ok=True)
for env_idx in range(start_i, end_i):
if rendered >= max_episodes_rendered:
break
episode_index = env_idx - start_i
video_path = task_dir / f"eval_episode_{episode_index}.mp4"
video_paths_per_task[(tg, tid)].append(str(video_path))
di = done_indices[env_idx].item()
thread = threading.Thread(
target=write_video,
args=(
str(video_path),
stacked[env_idx, : di + 1],
combined_env.unwrapped.metadata["render_fps"],
),
)
thread.start()
threads.append(thread)
rendered += 1
for t in threads:
t.join()
results: list[tuple[str, int, TaskMetrics]] = []
for tg, tid, start_i, end_i in task_slices:
results.append(
(
tg,
tid,
TaskMetrics(
sum_rewards=batch_sum_rewards[start_i:end_i].tolist(),
max_rewards=batch_max_rewards[start_i:end_i].tolist(),
successes=batch_successes[start_i:end_i].tolist(),
video_paths=video_paths_per_task.get((tg, tid), []),
),
)
)
return results
finally:
combined_env.close()
def eval_one(
env: gym.vector.VectorEnv,
*,
@@ -634,7 +1075,7 @@ def eval_one(
def run_one(
task_group: str,
task_id: int,
env,
env: Any,
*,
policy,
env_preprocessor,
@@ -678,7 +1119,7 @@ def run_one(
def eval_policy_all(
envs: dict[str, dict[int, gym.vector.VectorEnv]],
envs: dict[str, dict[int, Any]],
policy,
env_preprocessor: PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
env_postprocessor: PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
@@ -691,6 +1132,8 @@ def eval_policy_all(
return_episode_data: bool = False,
start_seed: int | None = None,
max_parallel_tasks: int = 1,
instance_count: int = 1,
instance_id: int = 0,
) -> dict:
"""
Evaluate a nested `envs` dict: {task_group: {task_id: vec_env}}.
@@ -703,6 +1146,12 @@ def eval_policy_all(
# Flatten envs into list of (task_group, task_id, env)
tasks = [(tg, tid, vec) for tg, group in envs.items() for tid, vec in group.items()]
if instance_count > 1:
total_tasks = len(tasks)
tasks = [task for idx, task in enumerate(tasks) if idx % instance_count == instance_id]
logging.info(
f"Instance shard {instance_id + 1}/{instance_count}: {len(tasks)}/{total_tasks} tasks assigned."
)
# accumulators: track metrics at both per-group level and across all groups
group_acc: dict[str, dict[str, list]] = defaultdict(lambda: {k: [] for k in ACC_KEYS})
@@ -748,59 +1197,77 @@ def eval_policy_all(
start_seed=start_seed,
)
if max_parallel_tasks <= 1:
# sequential path (single accumulator path on the main thread)
# NOTE: keeping a single-threaded accumulator avoids concurrent list appends or locks
for task_group, task_id, env in tasks:
tg, tid, metrics = task_runner(task_group, task_id, env)
_accumulate_to(tg, metrics)
per_task_infos.append({"task_group": tg, "task_id": tid, "metrics": metrics})
else:
# threaded path: submit all tasks, consume completions on main thread and accumulate there
with cf.ThreadPoolExecutor(max_workers=max_parallel_tasks) as executor:
fut2meta = {}
for task_group, task_id, env in tasks:
fut = executor.submit(task_runner, task_group, task_id, env)
fut2meta[fut] = (task_group, task_id)
for fut in cf.as_completed(fut2meta):
tg, tid, metrics = fut.result()
all_lazy = all(isinstance(env, LazyVectorEnv) for _, _, env in tasks)
single_factory_per_task = all(
not isinstance(env, LazyVectorEnv) or env.num_factory_fns == 1 for _, _, env in tasks
)
can_batch = max_parallel_tasks > 1 and all_lazy and single_factory_per_task and n_episodes == 1
if can_batch:
# Multi-task batched path: combine N tasks into one SyncVectorEnv per chunk
# so the policy processes all N observations in a single forward pass per step.
chunk_size = max_parallel_tasks
logging.info(f"Task scheduler mode: batched_lazy (chunk_size={chunk_size})")
n_chunks = (len(tasks) + chunk_size - 1) // chunk_size
rendered_so_far = 0
for chunk_idx in range(n_chunks):
chunk = tasks[chunk_idx * chunk_size : (chunk_idx + 1) * chunk_size]
render_budget = max(0, max_episodes_rendered - rendered_so_far)
logging.info(
f"Batch {chunk_idx + 1}/{n_chunks}: evaluating {len(chunk)} tasks "
f"({chunk_idx * chunk_size + 1}{chunk_idx * chunk_size + len(chunk)}/{len(tasks)})"
)
batch_results = _eval_task_batch(
chunk,
policy=policy,
env_preprocessor=env_preprocessor,
env_postprocessor=env_postprocessor,
preprocessor=preprocessor,
postprocessor=postprocessor,
start_seed=start_seed,
max_episodes_rendered=render_budget,
videos_dir=videos_dir,
)
for tg, tid, metrics in batch_results:
_accumulate_to(tg, metrics)
per_task_infos.append({"task_group": tg, "task_id": tid, "metrics": metrics})
rendered_so_far += len(metrics.get("video_paths", []))
# compute aggregated metrics helper (robust to lists/scalars)
def _agg_from_list(xs):
if not xs:
return float("nan")
arr = np.array(xs, dtype=float)
return float(np.nanmean(arr))
if overall["successes"]:
sr = np.nanmean(overall["successes"]) * 100
logging.info(f" running success rate: {sr:.1f}%")
elif max_parallel_tasks <= 1:
logging.info("Task scheduler mode: sequential")
for task_group, task_id, env in tasks:
try:
tg, tid, metrics = task_runner(task_group, task_id, env)
_accumulate_to(tg, metrics)
per_task_infos.append({"task_group": tg, "task_id": tid, "metrics": metrics})
finally:
env.close()
else:
# Threaded fallback for cases where batched lazy mode cannot be used.
if all_lazy and n_episodes != 1:
logging.info("Task scheduler mode: threaded (lazy batching disabled because n_episodes != 1)")
elif all_lazy and not single_factory_per_task:
logging.info("Task scheduler mode: threaded (lazy batching disabled because eval.batch_size > 1)")
else:
logging.info(f"Task scheduler mode: threaded (max_workers={max_parallel_tasks})")
with cf.ThreadPoolExecutor(max_workers=max_parallel_tasks) as executor:
fut2meta: dict[cf.Future, tuple[str, int, Any]] = {}
for task_group, task_id, env in tasks:
fut = executor.submit(task_runner, task_group, task_id, env)
fut2meta[fut] = (task_group, task_id, env)
for fut in cf.as_completed(fut2meta):
tg, tid, env = fut2meta[fut]
try:
_, _, metrics = fut.result()
_accumulate_to(tg, metrics)
per_task_infos.append({"task_group": tg, "task_id": tid, "metrics": metrics})
finally:
env.close()
# compute per-group aggregates
groups_aggregated = {}
for group, acc in group_acc.items():
groups_aggregated[group] = {
"avg_sum_reward": _agg_from_list(acc["sum_rewards"]),
"avg_max_reward": _agg_from_list(acc["max_rewards"]),
"pc_success": _agg_from_list(acc["successes"]) * 100 if acc["successes"] else float("nan"),
"n_episodes": len(acc["sum_rewards"]),
"video_paths": list(acc["video_paths"]),
}
# overall aggregates
overall_agg = {
"avg_sum_reward": _agg_from_list(overall["sum_rewards"]),
"avg_max_reward": _agg_from_list(overall["max_rewards"]),
"pc_success": _agg_from_list(overall["successes"]) * 100 if overall["successes"] else float("nan"),
"n_episodes": len(overall["sum_rewards"]),
"eval_s": time.time() - start_t,
"eval_ep_s": (time.time() - start_t) / max(1, len(overall["sum_rewards"])),
"video_paths": list(overall["video_paths"]),
}
return {
"per_task": per_task_infos,
"per_group": groups_aggregated,
"overall": overall_agg,
}
return _aggregate_eval_from_per_task(per_task_infos, total_eval_s=time.time() - start_t)
def main():
@@ -0,0 +1,218 @@
#!/usr/bin/env python
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Docker eval worker — runs inside a benchmark container.
Runs gym episodes for a sharded subset of the configured env's tasks, calling
a remote HTTP policy inference server (running on the host GPU) for action chunks.
Usage (normally invoked by docker_runtime.run_eval_in_docker, not directly):
lerobot-eval-worker \\
--env.type=libero_plus \\
--server_address=host.docker.internal:50051 \\
--n_episodes=5 \\
--seed=1000 \\
--instance_id=0 \\
--instance_count=2 \\
--output_path=/results/worker_0.json
"""
from __future__ import annotations
import json
import logging
import pickle # nosec B403 — internal serialisation only
import time
import urllib.request
from dataclasses import dataclass, field
from pathlib import Path
import draccus
import numpy as np
from lerobot import envs # noqa: F401 — registers all env subclasses
from lerobot.envs.configs import EnvConfig
from lerobot.envs.factory import make_env
from lerobot.envs.utils import add_envs_task, preprocess_observation
from lerobot.utils.utils import init_logging
logger = logging.getLogger(__name__)
@dataclass
class EvalWorkerConfig:
env: EnvConfig
# Address of the policy inference HTTP server on the host.
server_address: str = "host.docker.internal:50051"
# Number of episodes to run per task.
n_episodes: int = 1
# Starting random seed; episode i of a task uses seed + i.
seed: int = 0
# 0-indexed shard id for this worker.
instance_id: int = 0
# Total number of shards (workers).
instance_count: int = 1
# Path (inside the container) to write the JSON per-task results.
output_path: Path = field(default_factory=lambda: Path("/results/worker.json"))
# Timeout in seconds for each HTTP request to the policy server.
server_timeout: float = 120.0
def _call_server(server_address: str, obs_t: dict, timeout: float) -> np.ndarray:
"""POST pickled obs to /predict_chunk, return numpy chunk (T, action_dim)."""
body = pickle.dumps({"obs_t": obs_t})
req = urllib.request.Request(
f"http://{server_address}/predict_chunk",
data=body,
method="POST",
headers={"Content-Type": "application/octet-stream"},
)
with urllib.request.urlopen(req, timeout=timeout) as resp: # nosec B310
return pickle.loads(resp.read()) # nosec B301
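The request/response contract of `_call_server` can be exercised end to end against a throwaway in-process server; plain lists stand in for the numpy action chunk, and as in the worker, pickle is appropriate only for trusted internal traffic:

```python
import pickle
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

CHUNK = [[0.1, 0.2], [0.3, 0.4]]  # stand-in for an (n_action_steps, action_dim) chunk

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Mirror the worker's contract: pickled request body in, pickled chunk out.
        _ = pickle.loads(self.rfile.read(int(self.headers["Content-Length"])))
        body = pickle.dumps(CHUNK)
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
addr = f"127.0.0.1:{server.server_address[1]}"

req = urllib.request.Request(
    f"http://{addr}/predict_chunk",
    data=pickle.dumps({"obs_t": {"dummy": 0}}),
    method="POST",
    headers={"Content-Type": "application/octet-stream"},
)
with urllib.request.urlopen(req, timeout=5) as resp:
    chunk = pickle.loads(resp.read())
server.shutdown()
```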
def run_worker(cfg: EvalWorkerConfig) -> dict:
    """Run cfg.n_episodes episodes per assigned task. Returns per-task results dict."""
    # Build envs: {task_group: {task_id: vec_env}}
    envs_dict = make_env(cfg.env, n_envs=1)
    # Flatten to list of (task_group, task_id, env)
    tasks = [
        (task_group, task_id, vec)
        for task_group, group in envs_dict.items()
        for task_id, vec in group.items()
    ]
    # Shard: this worker handles tasks where index % instance_count == instance_id
    if cfg.instance_count > 1:
        total = len(tasks)
        assigned = {i for i in range(total) if i % cfg.instance_count == cfg.instance_id}
        for i, (_, _, env) in enumerate(tasks):
            if i not in assigned:
                try:
                    env.close()
                except Exception:
                    pass
        tasks = [t for i, t in enumerate(tasks) if i in assigned]
        logger.info(
            "Shard %d/%d: %d/%d tasks assigned.",
            cfg.instance_id + 1,
            cfg.instance_count,
            len(tasks),
            total,
        )

    per_task: list[dict] = []
    for task_group, task_id, env in tasks:
        sum_rewards: list[float] = []
        max_rewards: list[float] = []
        successes: list[bool] = []
        for ep_idx in range(cfg.n_episodes):
            obs, _info = env.reset(seed=[cfg.seed + ep_idx])
            obs_t = preprocess_observation(obs)
            obs_t = add_envs_task(env, obs_t)
            action_buffer: list[np.ndarray] = []  # each element: (1, action_dim)
            ep_rewards: list[float] = []
            ep_success = False
            done = np.zeros(1, dtype=bool)
            ep_steps = 0
            ep_infer_time = 0.0
            ep_env_time = 0.0
            ep_infer_calls = 0
            while not np.all(done):
                if not action_buffer:
                    t0 = time.monotonic()
                    chunk_np = _call_server(cfg.server_address, obs_t, cfg.server_timeout)
                    ep_infer_time += time.monotonic() - t0
                    ep_infer_calls += 1
                    action_buffer = [chunk_np[i : i + 1] for i in range(chunk_np.shape[0])]
                action_np = action_buffer.pop(0)
                t0 = time.monotonic()
                obs, reward, terminated, truncated, info = env.step(action_np)
                ep_env_time += time.monotonic() - t0
                ep_steps += 1
                done = terminated | truncated | done
                ep_rewards.append(float(np.mean(reward)))
                if "final_info" in info:
                    final_info = info["final_info"]
                    if isinstance(final_info, dict) and "is_success" in final_info:
                        ep_success = bool(final_info["is_success"][0])
                if not np.all(done):
                    obs_t = preprocess_observation(obs)
                    obs_t = add_envs_task(env, obs_t)
            sum_rewards.append(float(np.sum(ep_rewards)))
            max_rewards.append(float(np.max(ep_rewards)) if ep_rewards else 0.0)
            successes.append(ep_success)
            avg_env_ms = (ep_env_time / ep_steps * 1000) if ep_steps else 0
            avg_infer_ms = (ep_infer_time / ep_infer_calls * 1000) if ep_infer_calls else 0
            logger.info(
                "Task %s[%d] ep %d/%d — success=%s | %d steps, %d infer calls | "
                "env %.0fms/step, infer %.0fms/call (env %.1fs, infer %.1fs total)",
                task_group,
                task_id,
                ep_idx + 1,
                cfg.n_episodes,
                ep_success,
                ep_steps,
                ep_infer_calls,
                avg_env_ms,
                avg_infer_ms,
                ep_env_time,
                ep_infer_time,
            )
        per_task.append(
            {
                "task_group": task_group,
                "task_id": task_id,
                "metrics": {
                    "sum_rewards": sum_rewards,
                    "max_rewards": max_rewards,
                    "successes": successes,
                    "video_paths": [],
                },
            }
        )
        env.close()
    return {"per_task": per_task}

def worker_main(cfg: EvalWorkerConfig) -> None:
    results = run_worker(cfg)
    output = Path(cfg.output_path)
    output.parent.mkdir(parents=True, exist_ok=True)
    output.write_text(json.dumps(results, indent=2))
    logger.info("Worker %d wrote results to %s", cfg.instance_id, output)

def main() -> None:
    init_logging()
    cfg = draccus.parse(config_class=EvalWorkerConfig)
    worker_main(cfg)


if __name__ == "__main__":
    main()
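The modulo sharding in `run_worker` gives task index `i` to the worker whose `instance_id` equals `i % instance_count`. A minimal standalone sketch of that assignment rule (task names are illustrative, not from the repo):

```python
# Sketch of the worker's modulo sharding: task i goes to worker i % instance_count.
def shard_tasks(tasks: list[str], instance_id: int, instance_count: int) -> list[str]:
    return [t for i, t in enumerate(tasks) if i % instance_count == instance_id]


tasks = [f"task_{i}" for i in range(10)]
shards = [shard_tasks(tasks, worker, 3) for worker in range(3)]
# Every task lands in exactly one shard, and shard sizes differ by at most one.
assert sorted(sum(shards, [])) == sorted(tasks)
assert max(map(len, shards)) - min(map(len, shards)) <= 1
```

This is why the worker can close the unassigned envs up front: the assignment depends only on the flattened task order, which every worker computes identically.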
+588
@@ -0,0 +1,588 @@
#!/usr/bin/env python
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Generate an interactive eval leaderboard from Hub model repos.

Reads eval results (as pushed by ``lerobot-eval --push_to_hub``) from one or
more Hugging Face model repos and produces a self-contained HTML page with a
sortable, filterable leaderboard table.

Usage::

    # models.txt contains one HF repo ID per line (lines starting with # are ignored)
    lerobot-leaderboard models.txt --output leaderboard.html
"""
from __future__ import annotations
import argparse
import json
import logging
import sys
from dataclasses import dataclass, field
from pathlib import Path
logging.basicConfig(level=logging.INFO, format="%(levelname)s: %(message)s")
logger = logging.getLogger(__name__)
@dataclass
class ModelEntry:
    repo_id: str
    policy_type: str = ""
    dataset: str = ""
    training_steps: str = ""
    batch_size: str = ""
    # env_type -> {group_name -> pc_success}
    eval_results: dict[str, dict[str, float]] = field(default_factory=dict)
    # env_type -> overall pc_success
    eval_overall: dict[str, float] = field(default_factory=dict)
    # env_type -> n_episodes
    eval_n_episodes: dict[str, int] = field(default_factory=dict)

def _try_download(repo_id: str, filename: str) -> dict | None:
    """Download a JSON file from a Hub repo, return parsed dict or None."""
    from huggingface_hub import hf_hub_download
    from huggingface_hub.utils import EntryNotFoundError, RepositoryNotFoundError

    try:
        path = hf_hub_download(repo_id, filename, repo_type="model")
        with open(path) as f:
            return json.load(f)
    except (EntryNotFoundError, RepositoryNotFoundError, OSError):
        return None

def _list_eval_dirs(repo_id: str) -> list[str]:
    """List env_type subdirectories under eval/ in a Hub repo."""
    from huggingface_hub import HfApi
    from huggingface_hub.utils import RepositoryNotFoundError

    api = HfApi()
    try:
        files = api.list_repo_files(repo_id, repo_type="model")
    except RepositoryNotFoundError:
        return []
    env_types = set()
    for f in files:
        if f.startswith("eval/") and f.count("/") >= 2:
            env_types.add(f.split("/")[1])
    return sorted(env_types)

def fetch_model_entry(repo_id: str) -> ModelEntry:
    """Fetch all available metadata and eval results for a single model."""
    entry = ModelEntry(repo_id=repo_id)

    # Policy config
    policy_cfg = _try_download(repo_id, "config.json")
    if policy_cfg:
        entry.policy_type = policy_cfg.get("type", "")

    # Training config
    train_cfg = _try_download(repo_id, "train_config.json")
    if train_cfg:
        ds = train_cfg.get("dataset", {})
        entry.dataset = ds.get("repo_id", "") if isinstance(ds, dict) else str(ds)
        entry.training_steps = str(train_cfg.get("steps", ""))
        entry.batch_size = str(train_cfg.get("batch_size", ""))

    # Eval results per env_type. A missing pc_success stays None (rather than NaN)
    # so the values serialize to valid JSON null for the JS table.
    for env_type in _list_eval_dirs(repo_id):
        eval_info = _try_download(repo_id, f"eval/{env_type}/eval_info.json")
        if not eval_info:
            continue
        per_group = eval_info.get("per_group", {})
        group_results = {}
        for group_name, stats in per_group.items():
            group_results[group_name] = stats.get("pc_success")
        entry.eval_results[env_type] = group_results
        overall = eval_info.get("overall", {})
        entry.eval_overall[env_type] = overall.get("pc_success")
        entry.eval_n_episodes[env_type] = overall.get("n_episodes", 0)
    return entry

def fetch_all(repo_ids: list[str]) -> list[ModelEntry]:
    entries = []
    for repo_id in repo_ids:
        logger.info(f"Fetching {repo_id}...")
        try:
            entries.append(fetch_model_entry(repo_id))
        except Exception as e:
            logger.warning(f"Failed to fetch {repo_id}: {e}")
    return entries

def collect_all_env_types(entries: list[ModelEntry]) -> list[str]:
    """Collect all unique env_types across all entries, sorted."""
    env_types: set[str] = set()
    for e in entries:
        env_types.update(e.eval_overall.keys())
    return sorted(env_types)

def collect_all_groups(entries: list[ModelEntry]) -> dict[str, list[str]]:
    """Collect all unique group names per env_type."""
    groups: dict[str, set[str]] = {}
    for e in entries:
        for env_type, group_results in e.eval_results.items():
            groups.setdefault(env_type, set()).update(group_results.keys())
    return {k: sorted(v) for k, v in groups.items()}

def build_html(entries: list[ModelEntry], title: str = "LeRobot Eval Leaderboard") -> str:
    env_types = collect_all_env_types(entries)
    all_groups = collect_all_groups(entries)

    # Build column structure: fixed cols + per env_type (overall + per-group sub-columns).
    # We build the data as JSON and let JS handle rendering.
    table_data = []
    for e in entries:
        row = {
            "repo_id": e.repo_id,
            "policy_type": e.policy_type,
            "dataset": e.dataset,
            "training_steps": e.training_steps,
            "batch_size": e.batch_size,
        }
        for env_type in env_types:
            overall = e.eval_overall.get(env_type)
            row[f"{env_type}__overall"] = round(overall, 1) if overall is not None else None
            n_ep = e.eval_n_episodes.get(env_type)
            row[f"{env_type}__n_episodes"] = n_ep if n_ep else None
            for group in all_groups.get(env_type, []):
                val = e.eval_results.get(env_type, {}).get(group)
                row[f"{env_type}__{group}"] = round(val, 1) if val is not None else None
        table_data.append(row)

    # Build column definitions for the JS table
    columns_json = json.dumps(_build_column_defs(env_types, all_groups))
    data_json = json.dumps(table_data)
    return _HTML_TEMPLATE.format(
        title=title,
        columns_json=columns_json,
        data_json=data_json,
    )

def _build_column_defs(env_types: list[str], all_groups: dict[str, list[str]]) -> list[dict]:
    cols = [
        {"key": "repo_id", "label": "Model", "group": "Model Info", "sortable": True, "type": "link"},
        {"key": "policy_type", "label": "Policy", "group": "Model Info", "sortable": True, "type": "text"},
        {"key": "dataset", "label": "Dataset", "group": "Model Info", "sortable": True, "type": "text"},
        {
            "key": "training_steps",
            "label": "Steps",
            "group": "Training",
            "sortable": True,
            "type": "number",
        },
        {
            "key": "batch_size",
            "label": "Batch",
            "group": "Training",
            "sortable": True,
            "type": "number",
        },
    ]
    for env_type in env_types:
        cols.append(
            {
                "key": f"{env_type}__overall",
                "label": "Overall %",
                "group": env_type,
                "sortable": True,
                "type": "pct",
            }
        )
        for group in all_groups.get(env_type, []):
            cols.append(
                {
                    "key": f"{env_type}__{group}",
                    "label": f"{group} %",
                    "group": env_type,
                    "sortable": True,
                    "type": "pct",
                }
            )
        cols.append(
            {
                "key": f"{env_type}__n_episodes",
                "label": "Episodes",
                "group": env_type,
                "sortable": True,
                "type": "number",
            }
        )
    return cols

_HTML_TEMPLATE = """\
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>{title}</title>
<style>
:root {{
--bg: #0d1117;
--surface: #161b22;
--border: #30363d;
--text: #e6edf3;
--text-muted: #8b949e;
--accent: #58a6ff;
--green: #3fb950;
--yellow: #d29922;
--red: #f85149;
--header-bg: #1c2128;
}}
* {{ box-sizing: border-box; margin: 0; padding: 0; }}
body {{
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Helvetica, Arial, sans-serif;
background: var(--bg);
color: var(--text);
padding: 24px;
line-height: 1.5;
}}
h1 {{
font-size: 1.75rem;
font-weight: 600;
margin-bottom: 8px;
}}
.subtitle {{
color: var(--text-muted);
font-size: 0.9rem;
margin-bottom: 20px;
}}
.controls {{
display: flex;
gap: 12px;
margin-bottom: 16px;
flex-wrap: wrap;
align-items: center;
}}
.controls input {{
background: var(--surface);
border: 1px solid var(--border);
color: var(--text);
padding: 8px 14px;
border-radius: 6px;
font-size: 0.875rem;
width: 280px;
outline: none;
}}
.controls input:focus {{ border-color: var(--accent); }}
.controls label {{
color: var(--text-muted);
font-size: 0.8rem;
display: flex;
align-items: center;
gap: 6px;
cursor: pointer;
}}
.controls input[type="checkbox"] {{
width: auto;
accent-color: var(--accent);
}}
.table-wrap {{
overflow-x: auto;
border: 1px solid var(--border);
border-radius: 8px;
}}
table {{
border-collapse: collapse;
width: 100%;
font-size: 0.85rem;
white-space: nowrap;
}}
thead th {{
background: var(--header-bg);
color: var(--text-muted);
font-weight: 600;
text-transform: uppercase;
font-size: 0.7rem;
letter-spacing: 0.05em;
padding: 10px 14px;
border-bottom: 1px solid var(--border);
cursor: pointer;
user-select: none;
position: sticky;
top: 0;
z-index: 2;
}}
thead th:hover {{ color: var(--text); }}
thead th .arrow {{ margin-left: 4px; opacity: 0.4; }}
thead th.sorted .arrow {{ opacity: 1; color: var(--accent); }}
thead tr.group-header th {{
text-align: center;
font-size: 0.75rem;
letter-spacing: 0.08em;
border-right: 1px solid var(--border);
cursor: default;
}}
tbody tr {{ border-bottom: 1px solid var(--border); }}
tbody tr:hover {{ background: rgba(88,166,255,0.06); }}
td {{
padding: 10px 14px;
vertical-align: middle;
}}
td.model-cell a {{
color: var(--accent);
text-decoration: none;
font-weight: 500;
}}
td.model-cell a:hover {{ text-decoration: underline; }}
td.pct {{
font-weight: 600;
font-variant-numeric: tabular-nums;
text-align: right;
}}
td.number {{
text-align: right;
font-variant-numeric: tabular-nums;
}}
.pct-high {{ color: var(--green); }}
.pct-mid {{ color: var(--yellow); }}
.pct-low {{ color: var(--red); }}
.pct-na {{ color: var(--text-muted); font-weight: 400; }}
.best-in-col {{ background: rgba(63,185,80,0.12); }}
footer {{
margin-top: 20px;
color: var(--text-muted);
font-size: 0.75rem;
}}
footer a {{ color: var(--accent); text-decoration: none; }}
</style>
</head>
<body>
<h1>&#129302; {title}</h1>
<p class="subtitle">Click any column header to sort. Filter by typing below.</p>
<div class="controls">
<input type="text" id="filter" placeholder="Filter models..." />
<label><input type="checkbox" id="toggleGroups" checked /> Show sub-suite columns</label>
</div>
<div class="table-wrap">
<table id="leaderboard">
<thead></thead>
<tbody></tbody>
</table>
</div>
<footer>
Generated by <a href="https://github.com/huggingface/lerobot">LeRobot</a> &middot;
Eval results from <a href="https://huggingface.co">Hugging Face Hub</a>
</footer>
<script>
const COLUMNS = {columns_json};
const DATA = {data_json};
let sortKey = null, sortAsc = true, showGroups = true;
function pctClass(v) {{
if (v == null) return 'pct-na';
if (v >= 70) return 'pct-high';
if (v >= 40) return 'pct-mid';
return 'pct-low';
}}
function visibleCols() {{
if (showGroups) return COLUMNS;
return COLUMNS.filter(c => !c.key.includes('__') || c.key.endsWith('__overall') || c.key.endsWith('__n_episodes'));
}}
function bestPerCol(rows) {{
const best = {{}};
for (const c of visibleCols()) {{
if (c.type !== 'pct') continue;
let max = -Infinity;
for (const r of rows) {{
const v = r[c.key];
if (v != null && v > max) max = v;
}}
best[c.key] = max === -Infinity ? null : max;
}}
return best;
}}
function render() {{
const filter = document.getElementById('filter').value.toLowerCase();
let rows = DATA.filter(r => r.repo_id.toLowerCase().includes(filter)
|| (r.policy_type||'').toLowerCase().includes(filter)
|| (r.dataset||'').toLowerCase().includes(filter));
if (sortKey) {{
rows.sort((a, b) => {{
let va = a[sortKey], vb = b[sortKey];
if (va == null && vb == null) return 0;
if (va == null) return 1;
if (vb == null) return -1;
if (typeof va === 'string') va = va.toLowerCase();
if (typeof vb === 'string') vb = vb.toLowerCase();
return sortAsc ? (va < vb ? -1 : va > vb ? 1 : 0) : (va > vb ? -1 : va < vb ? 1 : 0);
}});
}}
const cols = visibleCols();
const best = bestPerCol(rows);
// Group header row
const groups = [];
let lastGroup = null;
for (const c of cols) {{
if (c.group !== lastGroup) {{
groups.push({{ label: c.group, span: 1 }});
lastGroup = c.group;
}} else {{
groups[groups.length - 1].span++;
}}
}}
const thead = document.querySelector('#leaderboard thead');
thead.innerHTML = '';
// Group header
const gtr = document.createElement('tr');
gtr.className = 'group-header';
for (const g of groups) {{
const th = document.createElement('th');
th.colSpan = g.span;
th.textContent = g.label;
gtr.appendChild(th);
}}
thead.appendChild(gtr);
// Column headers
const htr = document.createElement('tr');
for (const c of cols) {{
const th = document.createElement('th');
th.innerHTML = c.label + ' <span class="arrow">' + (sortKey === c.key ? (sortAsc ? '&#9650;' : '&#9660;') : '&#8693;') + '</span>';
if (sortKey === c.key) th.classList.add('sorted');
th.addEventListener('click', () => {{
if (sortKey === c.key) {{ sortAsc = !sortAsc; }}
else {{ sortKey = c.key; sortAsc = c.type === 'pct' ? false : true; }}
render();
}});
htr.appendChild(th);
}}
thead.appendChild(htr);
// Body
const tbody = document.querySelector('#leaderboard tbody');
tbody.innerHTML = '';
for (const r of rows) {{
const tr = document.createElement('tr');
for (const c of cols) {{
const td = document.createElement('td');
const v = r[c.key];
if (c.type === 'link') {{
td.className = 'model-cell';
const a = document.createElement('a');
a.href = 'https://huggingface.co/' + v;
a.target = '_blank';
a.textContent = v;
td.appendChild(a);
}} else if (c.type === 'pct') {{
td.className = 'pct ' + pctClass(v);
td.textContent = v != null ? v.toFixed(1) : '';
if (v != null && best[c.key] != null && v === best[c.key] && rows.length > 1) {{
td.classList.add('best-in-col');
}}
}} else if (c.type === 'number') {{
td.className = 'number';
td.textContent = v != null ? v : '';
}} else {{
td.textContent = v || '';
}}
tr.appendChild(td);
}}
tbody.appendChild(tr);
}}
}}
document.getElementById('filter').addEventListener('input', render);
document.getElementById('toggleGroups').addEventListener('change', (e) => {{
showGroups = e.target.checked;
render();
}});
render();
</script>
</body>
</html>
"""
def parse_args(argv: list[str] | None = None) -> argparse.Namespace:
    p = argparse.ArgumentParser(
        description="Generate an interactive eval leaderboard from Hub model repos.",
    )
    p.add_argument(
        "repo_ids_file",
        type=str,
        help="Path to a text file with one repo ID per line.",
    )
    p.add_argument(
        "--output",
        type=str,
        default="leaderboard.html",
        help="Output HTML file path (default: leaderboard.html).",
    )
    p.add_argument(
        "--title",
        type=str,
        default="LeRobot Eval Leaderboard",
        help="Title shown in the leaderboard page.",
    )
    return p.parse_args(argv)

def main(argv: list[str] | None = None):
    args = parse_args(argv)
    path = Path(args.repo_ids_file)
    if not path.exists():
        logger.error(f"File not found: {path}")
        sys.exit(1)
    # One repo ID per line; blank lines and lines starting with "#" (even if
    # indented) are ignored, matching the module docstring.
    repo_ids = [
        stripped
        for line in path.read_text().splitlines()
        if (stripped := line.strip()) and not stripped.startswith("#")
    ]
    if not repo_ids:
        logger.error(f"No repo IDs found in {path}")
        sys.exit(1)
    entries = fetch_all(repo_ids)
    if not entries:
        logger.error("No valid entries found.")
        sys.exit(1)
    html = build_html(entries, title=args.title)
    out = Path(args.output)
    out.write_text(html)
    logger.info(f"Leaderboard written to {out.resolve()}")


if __name__ == "__main__":
    main()
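`build_html` flattens each model's nested eval results into one flat row whose keys follow the `f"{env_type}__{group}"` scheme, so the JS table can sort on plain string keys. A small sketch of that flattening (env and group names are illustrative):

```python
# Sketch of the flat row-key scheme used by build_html: nested
# {env_type: {group: pc_success}} results become "env__group" columns.
def flatten_results(eval_results: dict[str, dict[str, float]]) -> dict[str, float]:
    return {
        f"{env_type}__{group}": round(val, 1)
        for env_type, groups in eval_results.items()
        for group, val in groups.items()
    }


row = flatten_results({"libero": {"spatial": 87.34, "object": 90.0}})
# row == {"libero__spatial": 87.3, "libero__object": 90.0}
```

The double underscore acts as a namespace separator, which is also what the template's `visibleCols()` filter keys off when hiding sub-suite columns.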
+2 -4
@@ -369,6 +369,8 @@ def record_loop(
             act_processed_policy: RobotAction = make_robot_action(action_values, dataset.features)
         elif policy is None and isinstance(teleop, Teleoperator):
+            if robot.name == "unitree_g1":
+                teleop.send_feedback(obs)
             act = teleop.get_action()
             # Applies a pipeline to the raw teleop action, default is IdentityProcessor
@@ -556,10 +558,6 @@ def record(cfg: RecordConfig) -> LeRobotDataset:
         ):
             log_say("Reset the environment", cfg.play_sounds)
-            # reset g1 robot
-            if robot.name == "unitree_g1":
-                robot.reset()
             record_loop(
                 robot=robot,
                 events=events,
+4 -1
@@ -60,6 +60,7 @@ import rerun as rr
 from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig  # noqa: F401
 from lerobot.cameras.realsense.configuration_realsense import RealSenseCameraConfig  # noqa: F401
+from lerobot.cameras.zmq.configuration_zmq import ZMQCameraConfig  # noqa: F401
 from lerobot.configs import parser
 from lerobot.processor import (
     RobotAction,
@@ -153,7 +154,6 @@ def teleop_loop(
     display_len = max(len(key) for key in robot.action_features)

-    start = time.perf_counter()
     while True:
         loop_start = time.perf_counter()
@@ -163,6 +163,9 @@ def teleop_loop(
         # given that it is the identity processor as default
         obs = robot.get_observation()
+        if robot.name == "unitree_g1":
+            teleop.send_feedback(obs)
+
         # Get teleop action
         raw_action = teleop.get_action()
+5 -1
@@ -209,7 +209,11 @@ def train(cfg: TrainPipelineConfig, accelerator: Accelerator | None = None):
     # Use accelerator's device
     device = accelerator.device

-    torch.backends.cudnn.benchmark = True
+    if cfg.cudnn_deterministic:
+        torch.backends.cudnn.deterministic = True
+        torch.backends.cudnn.benchmark = False
+    else:
+        torch.backends.cudnn.benchmark = True
     torch.backends.cuda.matmul.allow_tf32 = True

     # Dataset loading synchronization: main process downloads first to avoid race conditions
@@ -15,7 +15,6 @@
# limitations under the License.
 from dataclasses import dataclass
-from typing import TypeAlias

 from ..config import TeleoperatorConfig
@@ -38,5 +37,5 @@ class SOLeaderTeleopConfig(TeleoperatorConfig, SOLeaderConfig):
     pass


-SO100LeaderConfig: TypeAlias = SOLeaderTeleopConfig
-SO101LeaderConfig: TypeAlias = SOLeaderTeleopConfig
+SO100LeaderConfig = SOLeaderTeleopConfig
+SO101LeaderConfig = SOLeaderTeleopConfig
@@ -16,7 +16,6 @@
 import logging
 import time
-from typing import TypeAlias

 from lerobot.motors import Motor, MotorCalibration, MotorNormMode
 from lerobot.motors.feetech import (
@@ -156,5 +155,5 @@ class SOLeader(Teleoperator):
         logger.info(f"{self} disconnected.")


-SO100Leader: TypeAlias = SOLeader
-SO101Leader: TypeAlias = SOLeader
+SO100Leader = SOLeader
+SO101Leader = SOLeader
@@ -19,3 +19,13 @@ from .exo_calib import ExoskeletonCalibration, ExoskeletonJointCalibration
 from .exo_ik import ExoskeletonIKHelper
 from .exo_serial import ExoskeletonArm
 from .unitree_g1 import UnitreeG1Teleoperator
+
+__all__ = [
+    "ExoskeletonArmPortConfig",
+    "ExoskeletonCalibration",
+    "ExoskeletonIKHelper",
+    "ExoskeletonJointCalibration",
+    "ExoskeletonArm",
+    "UnitreeG1Teleoperator",
+    "UnitreeG1TeleoperatorConfig",
+]
@@ -35,6 +35,9 @@ import serial
 logger = logging.getLogger(__name__)

+ADC_MAX = 2**12 - 1
+ADC_HALF = ADC_MAX / 2
+
 # exoskeleton joint names -> ADC channel pairs. TODO: add wrist pitch and wrist yaw
 JOINTS = {
     "shoulder_pitch": (0, 1),
@@ -59,7 +62,7 @@ class ExoskeletonCalibration:
     version: int = 2
     side: str = ""
-    adc_max: int = 2**12 - 1
+    adc_max: int = ADC_MAX
     joints: list[ExoskeletonJointCalibration] = field(default_factory=list)

     def to_dict(self) -> dict:
@@ -92,7 +95,7 @@ class ExoskeletonCalibration:
         return cls(
             version=data.get("version", 2),
             side=data.get("side", ""),
-            adc_max=data.get("adc_max", 2**12 - 1),
+            adc_max=data.get("adc_max", ADC_MAX),
             joints=joints,
         )
@@ -112,11 +115,8 @@ class CalibParams:
 def normalize_angle(angle: float) -> float:
-    while angle > np.pi:
-        angle -= 2 * np.pi
-    while angle < -np.pi:
-        angle += 2 * np.pi
-    return angle
+    """Normalize angle to [-pi, pi]."""
+    return float(np.arctan2(np.sin(angle), np.cos(angle)))
def joint_z_and_angle(raw16: list[int], j: ExoskeletonJointCalibration) -> tuple[np.ndarray, float]:
@@ -125,7 +125,7 @@ def joint_z_and_angle(raw16: list[int], j: ExoskeletonJointCalibration) -> tuple
     """
     pair = JOINTS[j.name]
     s, c = raw16[pair[0]], raw16[pair[1]]  # get sin and cos
-    p = np.array([float(c) - (2**12 - 1) / 2, float(s) - (2**12 - 1) / 2])  # center the raw values
+    p = np.array([float(c) - ADC_HALF, float(s) - ADC_HALF])  # center the raw values
     z = np.asarray(j.T) @ (
         p - np.asarray(j.center_fit)
     )  # center the ellipse and invert the transformation matrix to get unit circle coords
@@ -167,7 +167,7 @@ def run_exo_calibration(
     def read_joint_point(raw16: list[int], pair: tuple[int, int]):
         s, c = raw16[pair[0]], raw16[pair[1]]
-        return float(c) - (2**12 - 1) / 2, float(s) - (2**12 - 1) / 2, float(s), float(c)
+        return float(c) - ADC_HALF, float(s) - ADC_HALF, float(s), float(c)

     def select_fit_subset(xs, ys):
         """Select and filter points for ellipse fitting. Trims outliers by radius and downsamples."""
@@ -317,7 +317,7 @@ def run_exo_calibration(
     calib = ExoskeletonCalibration(
         version=2,
         side=side,
-        adc_max=2**12 - 1,
+        adc_max=ADC_MAX,
         joints=[
             ExoskeletonJointCalibration(
                 name=j["name"],
@@ -367,8 +367,8 @@ def run_exo_calibration(
             state["win_s"].append(s_raw)
             state["win_c"].append(c_raw)
             if len(state["win_s"]) >= max(3, params.median_window):
-                state["ys"].append(running_median(state["win_s"]) - (2**12 - 1) / 2)
-                state["xs"].append(running_median(state["win_c"]) - (2**12 - 1) / 2)
+                state["ys"].append(running_median(state["win_s"]) - ADC_HALF)
+                state["xs"].append(running_median(state["win_c"]) - ADC_HALF)
         else:
             jdata = joints_out[-1]
             z = np.array(jdata["T"]) @ (np.array([x_raw, y_raw]) - np.array(jdata["center_fit"]))
@@ -25,8 +25,8 @@ from dataclasses import dataclass
 import numpy as np

-from lerobot.robots.unitree_g1.g1_kinematics import G1_29_ArmIK
 from lerobot.robots.unitree_g1.g1_utils import G1_29_JointArmIndex
+from lerobot.robots.unitree_g1.robot_kinematic_processor import G1_29_ArmIK

 from .exo_calib import JOINTS
@@ -32,25 +32,29 @@ def parse_raw16(line: bytes) -> list[int] | None:
         if len(parts) < 16:
             return None
         return [int(x) for x in parts[:16]]
-    except Exception:
+    except (ValueError, IndexError):
         return None


 def read_raw_from_serial(ser) -> list[int] | None:
     """Read latest sample from serial; if buffer is backed up, keep only the newest."""
-    last = None
-    while ser.in_waiting > 0:
-        b = ser.readline()
-        if not b:
-            break
-        raw16 = parse_raw16(b)
-        if raw16 is not None:
-            last = raw16
-    if last is None:
-        b = ser.readline()
-        if b:
-            last = parse_raw16(b)
-    return last
+    try:
+        last = None
+        while ser.in_waiting > 0:
+            b = ser.readline()
+            if not b:
+                break
+            raw16 = parse_raw16(b)
+            if raw16 is not None:
+                last = raw16
+        if last is None:
+            b = ser.readline()
+            if b:
+                last = parse_raw16(b)
+        return last
+    except serial.SerialException as e:
+        logger.warning(f"Serial read error: {e}")
+        return None
@dataclass
@@ -115,5 +119,6 @@ class ExoskeletonArm:
         return {} if raw is None else exo_raw_to_angles(raw, self.calibration)

     def calibrate(self) -> None:
-        ser = self._ser
-        self.calibration = run_exo_calibration(ser, self.side, self.calibration_fpath)
+        if not self.is_connected:
+            raise RuntimeError("Cannot calibrate: exoskeleton not connected")
+        self.calibration = run_exo_calibration(self._ser, self.side, self.calibration_fpath)
@@ -17,9 +17,22 @@
 import logging
 import time
 from functools import cached_property
+from typing import TYPE_CHECKING, Any

-from lerobot.robots.unitree_g1.g1_utils import G1_29_JointIndex
+from lerobot.robots.unitree_g1.g1_utils import REMOTE_AXES, G1_29_JointArmIndex
 from lerobot.utils.constants import HF_LEROBOT_CALIBRATION, TELEOPERATORS
+from lerobot.utils.import_utils import _unitree_sdk_available
+
+if TYPE_CHECKING or _unitree_sdk_available:
+    from unitree_sdk2py.utils.joystick import Joystick
+else:
+
+    class Joystick:
+        def __init__(self):
+            raise ImportError(
+                "unitree_sdk2py is required for RemoteController. Install with: pip install unitree_sdk2py"
+            )
+

 from ..teleoperator import Teleoperator
 from .config_unitree_g1 import UnitreeG1TeleoperatorConfig
@@ -29,6 +42,120 @@ from .exo_serial import ExoskeletonArm
 logger = logging.getLogger(__name__)


+class RemoteController:
+    """Unitree remote controller data parser for joystick and button state."""
+
+    # ADC parameters for exoskeleton joystick (12-bit ADC)
+    ADC_MAX = 4095
+    ADC_HALF = ADC_MAX / 2
+    JOYSTICK_X_IDX = 11  # X axis in raw ADC array
+    JOYSTICK_BTN_IDX = 12  # Button in raw ADC array
+    JOYSTICK_Y_IDX = 13  # Y axis in raw ADC array
+
+    # Map SDK named buttons to positional indices matching the wireless_remote
+    # byte layout (little-endian uint16 from bytes 2-3).
+    _BUTTON_MAP: list[str] = [
+        "RB",
+        "LB",
+        "start",
+        "back",
+        "RT",
+        "LT",
+        "",
+        "",
+        "A",
+        "B",
+        "X",
+        "Y",
+        "up",
+        "right",
+        "down",
+        "left",
+    ]
+
+    def __init__(self):
+        self.lx = 0.0
+        self.ly = 0.0
+        self.rx = 0.0
+        self.ry = 0.0
+        self.button = [0] * 16
+        self.remote_action = dict.fromkeys(REMOTE_AXES, 0.0)
+        # SDK joystick parser for wireless remote bytes
+        self._joystick = Joystick()
+        # Disable axis smoothing and deadzone to preserve raw values
+        for axis in (self._joystick.lx, self._joystick.ly, self._joystick.rx, self._joystick.ry):
+            axis.smooth = 1.0
+            axis.deadzone = 0.0
+        # Joystick center calibration (read at connect time)
+        self.left_center_x = self.ADC_HALF
+        self.left_center_y = self.ADC_HALF
+        self.right_center_x = self.ADC_HALF
+        self.right_center_y = self.ADC_HALF
+        # Whether to use exo joystick (detected at connect time)
+        self.use_left_exo_joystick = False
+        self.use_right_exo_joystick = False
+
+    def _sync_remote_action(self) -> None:
+        self.remote_action.update(zip(REMOTE_AXES, (self.lx, self.ly, self.rx, self.ry), strict=True))
+
+    def calibrate_center(self, raw16: list[int] | None, side: str) -> None:
+        if raw16 is None or len(raw16) < 16:
+            logger.info(f"{side.capitalize()} exo joystick: no data available")
+            return
+        btn_val = raw16[self.JOYSTICK_BTN_IDX]
+        logger.info(f"{side.capitalize()} exo joystick button ADC: {btn_val} (threshold: {self.ADC_HALF})")
+        if btn_val <= self.ADC_HALF:
+            logger.info(f"{side.capitalize()} exo joystick not detected (button below threshold)")
+            return
+        x = raw16[self.JOYSTICK_X_IDX]
+        y = raw16[self.JOYSTICK_Y_IDX]
+        if side == "left":
+            self.use_left_exo_joystick = True
+            self.left_center_x, self.left_center_y = x, y
+        else:
+            self.use_right_exo_joystick = True
+            self.right_center_x, self.right_center_y = x, y
+        logger.info(f"{side.capitalize()} exo joystick enabled, center: x={x}, y={y}")
+
+    def set_from_exo(self, raw16: list[int] | None, side: str) -> None:
+        if raw16 is None or len(raw16) < 16:
+            return
+        if side == "left":
+            if not self.use_left_exo_joystick:
+                return
+            self.lx = (raw16[self.JOYSTICK_X_IDX] - self.left_center_x) / self.ADC_HALF
+            self.ly = (raw16[self.JOYSTICK_Y_IDX] - self.left_center_y) / self.ADC_HALF
+            self.button[4] = 1 if raw16[self.JOYSTICK_BTN_IDX] < self.ADC_HALF else 0
+            return
+        if not self.use_right_exo_joystick:
+            return
+        self.rx = (raw16[self.JOYSTICK_X_IDX] - self.right_center_x) / self.ADC_HALF
+        self.ry = (raw16[self.JOYSTICK_Y_IDX] - self.right_center_y) / self.ADC_HALF
+        self.button[0] = 1 if raw16[self.JOYSTICK_BTN_IDX] < self.ADC_HALF else 0
+
+    def set_from_wireless(self, wireless_remote: bytes) -> None:
+        """Parse Unitree wireless remote raw bytes into joystick + button state."""
+        if len(wireless_remote) < 24:
+            return
+        self._joystick.extract(wireless_remote)
+        self.lx = self._joystick.lx.data
+        self.ly = self._joystick.ly.data
+        self.rx = self._joystick.rx.data
+        self.ry = self._joystick.ry.data
+        for i, name in enumerate(self._BUTTON_MAP):
+            if name:
+                self.button[i] = getattr(self._joystick, name).data
+

 class UnitreeG1Teleoperator(Teleoperator):
     """
     Bimanual exoskeleton arms teleoperator for Unitree G1 arms.
@@ -43,6 +170,13 @@ class UnitreeG1Teleoperator(Teleoperator):
     def __init__(self, config: UnitreeG1TeleoperatorConfig):
         super().__init__(config)
         self.config = config
+        left_exo_enabled = bool(config.left_arm_config.port.strip())
+        right_exo_enabled = bool(config.right_arm_config.port.strip())
+        if left_exo_enabled != right_exo_enabled:
+            raise ValueError(
+                "Invalid exo config: set both left/right exo ports, or leave both empty for remote-only mode."
+            )
+        self._arm_control_enabled = left_exo_enabled and right_exo_enabled

         # Setup calibration directory
         self.calibration_dir = (
@@ -70,24 +204,37 @@ class UnitreeG1Teleoperator(Teleoperator):
         )
         self.ik_helper: ExoskeletonIKHelper | None = None
+        self.remote_controller = RemoteController()

     @cached_property
     def action_features(self) -> dict[str, type]:
-        return {f"{name}.q": float for name in self._g1_joint_names}
+        remote_features = dict.fromkeys(self.remote_controller.remote_action, float)
+        if not self._arm_control_enabled:
+            return remote_features
+        joint_features = {f"{name}.q": float for name in self._g1_arm_joint_names}
+        return {**joint_features, **remote_features}

     @cached_property
     def feedback_features(self) -> dict[str, type]:
-        return {}
+        return {"wireless_remote": bytes}

     @property
     def is_connected(self) -> bool:
+        if not self._arm_control_enabled:
+            return True
         return self.left_arm.is_connected and self.right_arm.is_connected

     @property
     def is_calibrated(self) -> bool:
+        if not self._arm_control_enabled:
+            return True
         return self.left_arm.is_calibrated and self.right_arm.is_calibrated

     def connect(self, calibrate: bool = True) -> None:
+        if not self._arm_control_enabled:
+            logger.warning("Exo ports not fully configured; teleop will send joystick only (no arm actions)")
+            return
         self.left_arm.connect(calibrate)
         self.right_arm.connect(calibrate)
@@ -95,6 +242,13 @@ class UnitreeG1Teleoperator(Teleoperator):
         self.ik_helper = ExoskeletonIKHelper(frozen_joints=frozen_joints)
         logger.info("IK helper initialized")
+
+        time.sleep(0.1)  # Give serial time to populate buffer
+        left_raw = self.left_arm.read_raw()
+        right_raw = self.right_arm.read_raw()
+        self.remote_controller.calibrate_center(left_raw, "left")
+        self.remote_controller.calibrate_center(right_raw, "right")

     def calibrate(self) -> None:
         if not self.left_arm.is_calibrated:
             logger.info("Starting calibration for left arm...")
@@ -115,12 +269,33 @@ class UnitreeG1Teleoperator(Teleoperator):
         pass

     def get_action(self) -> dict[str, float]:
-        left_angles = self.left_arm.get_angles()
-        right_angles = self.right_arm.get_angles()
-        return self.ik_helper.compute_g1_joints_from_exo(left_angles, right_angles)
+        joint_action = {}
+        left_raw = None
+        right_raw = None
+        if self._arm_control_enabled:
+            left_raw = self.left_arm.read_raw()
+            right_raw = self.right_arm.read_raw()
+            left_angles = self.left_arm.get_angles()
+            right_angles = self.right_arm.get_angles()
+            joint_action = self.ik_helper.compute_g1_joints_from_exo(left_angles, right_angles)
+
+        # Wireless remote has priority when non-zero; otherwise, use exo joystick.
+        rc = self.remote_controller
+        wireless_active = (
+            abs(rc.lx) > 1e-3 or abs(rc.ly) > 1e-3 or abs(rc.rx) > 1e-3 or abs(rc.ry) > 1e-3
+        ) or any(rc.button)
+        if self._arm_control_enabled and not wireless_active:
+            rc.set_from_exo(left_raw, "left")
+            rc.set_from_exo(right_raw, "right")
+        rc._sync_remote_action()
+        return {**joint_action, **rc.remote_action}

-    def send_feedback(self, feedback: dict[str, float]) -> None:
-        raise NotImplementedError("Exoskeleton arms do not support feedback")
+    def send_feedback(self, feedback: dict[str, Any]) -> None:
+        wireless_remote = feedback.get("wireless_remote")
+        if wireless_remote is not None:
+            self.remote_controller.set_from_wireless(wireless_remote)

     def disconnect(self) -> None:
         self.left_arm.disconnect()
@@ -153,5 +328,5 @@ class UnitreeG1Teleoperator(Teleoperator):
         print("\n\nVisualization stopped.")

     @cached_property
-    def _g1_joint_names(self) -> list[str]:
-        return [joint.name for joint in G1_29_JointIndex]
+    def _g1_arm_joint_names(self) -> list[str]:
+        return [joint.name for joint in G1_29_JointArmIndex]
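For orientation, the input-priority rule in `get_action` above can be sketched as a standalone function: the wireless remote wins whenever any stick leaves a small deadzone or a button is held, otherwise the exoskeleton joystick drives locomotion. The names below are illustrative, not the `RemoteController` API.

```python
# Deadzone-based input arbitration (illustrative sketch, assumed semantics).
DEADZONE = 1e-3  # matches the 1e-3 threshold used in get_action


def pick_input_source(sticks: list[float], buttons: list[int]) -> str:
    """Return which device should drive locomotion this tick."""
    wireless_active = any(abs(v) > DEADZONE for v in sticks) or any(buttons)
    return "wireless" if wireless_active else "exo_joystick"


print(pick_input_source([0.0, 0.0, 0.0, 0.0], [0, 0]))  # exo_joystick
print(pick_input_source([0.5, 0.0, 0.0, 0.0], [0, 0]))  # wireless
```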
+161
@@ -0,0 +1,161 @@
#!/usr/bin/env python
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Helpers for building and uploading HF-native `.eval_results` YAML rows.

The `.eval_results` format is consumed by the HuggingFace leaderboard surface.
See https://huggingface.co/docs/evaluate/en/evaluation-on-hub for the spec.
"""

from __future__ import annotations

import math
from datetime import date
from pathlib import Path
from typing import Any

import yaml


def default_eval_date() -> str:
    """Return today's UTC date as an ISO-8601 string (YYYY-MM-DD)."""
    return date.today().isoformat()


def build_eval_results_rows(
    *,
    info: dict,
    env_type: str,
    env_task: str | None,
    benchmark_dataset_id: str,
    eval_date: str,
    source_url: str | None = None,
    notes: str | None = None,
) -> list[dict[str, Any]]:
    """Build a list of `.eval_results` rows from an eval info dict.

    Each row represents one (task_group, metric) combination. When no
    per-group breakdown is available a single overall row is emitted.

    Args:
        info: The dict returned by ``_aggregate_eval_from_per_task`` / ``eval_policy_all``.
            Expected keys: ``overall``, optionally ``per_group``.
        env_type: Environment type string (e.g. ``"libero_plus"``).
        env_task: The env task string from eval config (may be ``None``).
        benchmark_dataset_id: HF dataset repo-id of the consolidated benchmark dataset.
        eval_date: ISO-8601 date string (use ``default_eval_date()``).
        source_url: Optional URL to the evaluation run / report.
        notes: Optional free-text notes about the evaluation setup.

    Returns:
        A list of row dicts ready for serialisation with ``upload_eval_results_yaml``.
    """
    rows: list[dict[str, Any]] = []
    task_name = env_task or env_type

    def _safe(v: float) -> float:
        return 0.0 if (v is None or (isinstance(v, float) and math.isnan(v))) else float(v)

    def _make_row(config_name: str, pc_success: float, n_episodes: int) -> dict[str, Any]:
        row: dict[str, Any] = {
            "task": {
                "type": "robotics",
                "name": task_name,
            },
            "dataset": {
                "name": benchmark_dataset_id,
                "type": benchmark_dataset_id,
                "config": config_name,
                "split": "test",
            },
            "metrics": [
                {
                    "type": "success_rate",
                    "value": _safe(pc_success),
                    "name": "Success Rate (%)",
                },
                {
                    "type": "n_episodes",
                    "value": n_episodes,
                    "name": "Number of Episodes",
                },
            ],
            "evaluated_at": eval_date,
        }
        if source_url:
            row["source_url"] = source_url
        if notes:
            row["notes"] = notes
        return row

    per_group: dict = info.get("per_group", {})
    if per_group:
        for group_name, group_metrics in per_group.items():
            rows.append(
                _make_row(
                    config_name=group_name,
                    pc_success=group_metrics.get("pc_success", float("nan")),
                    n_episodes=group_metrics.get("n_episodes", 0),
                )
            )
    else:
        overall = info.get("overall", {})
        rows.append(
            _make_row(
                config_name=env_type,
                pc_success=overall.get("pc_success", float("nan")),
                n_episodes=overall.get("n_episodes", 0),
            )
        )
    return rows


def upload_eval_results_yaml(
    *,
    api: Any,
    repo_id: str,
    rows: list[dict[str, Any]],
    env_type: str,
    env_task: str | None,
    output_dir: Path,
) -> str:
    """Serialise ``rows`` to YAML and upload to the Hub model repo.

    The file is written locally to ``output_dir/eval_results.yaml`` and
    then uploaded to ``eval/{env_type}/eval_results.yaml`` in ``repo_id``.

    Args:
        api: An instantiated ``huggingface_hub.HfApi`` object.
        repo_id: HF model repo (e.g. ``"user/my_policy"``).
        rows: Rows produced by ``build_eval_results_rows``.
        env_type: Environment type string (used for the Hub path prefix).
        env_task: The env task string (unused, kept for API symmetry).
        output_dir: Local directory to write the YAML before uploading.

    Returns:
        URL of the Hub commit containing the uploaded file.
    """
    yaml_path = Path(output_dir) / "eval_results.yaml"
    yaml_path.write_text(yaml.dump({"eval_results": rows}, sort_keys=False, allow_unicode=True))
    commit_url = api.upload_file(
        path_or_fileobj=str(yaml_path),
        path_in_repo=f"eval/{env_type}/eval_results.yaml",
        repo_id=repo_id,
        commit_message=f"Upload eval results YAML for {env_type}",
    )
    return str(commit_url)
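For orientation, here is a sketch of the input dict `build_eval_results_rows` consumes and the row shape it emits when a `per_group` breakdown is present. All concrete values (task groups, success rates, the dataset repo-id) are invented for the example; the row layout mirrors `_make_row` above.

```python
# Hypothetical info dict, as produced by the aggregation step.
info = {
    "overall": {"pc_success": 72.5, "n_episodes": 200},
    "per_group": {
        "libero_spatial": {"pc_success": 80.0, "n_episodes": 100},
        "libero_object": {"pc_success": 65.0, "n_episodes": 100},
    },
}

# One row per task group, mirroring _make_row's structure.
rows = [
    {
        "task": {"type": "robotics", "name": "libero_plus"},
        "dataset": {
            "name": "user/libero-benchmark",  # benchmark_dataset_id (made up)
            "type": "user/libero-benchmark",
            "config": group_name,
            "split": "test",
        },
        "metrics": [
            {"type": "success_rate", "value": m["pc_success"], "name": "Success Rate (%)"},
            {"type": "n_episodes", "value": m["n_episodes"], "name": "Number of Episodes"},
        ],
        "evaluated_at": "2026-03-25",
    }
    for group_name, m in info["per_group"].items()
]

print(len(rows))  # 2 -- one row per task group
print(rows[0]["dataset"]["config"])  # libero_spatial
```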
+2
@@ -74,6 +74,8 @@ _peft_available = is_package_available("peft")
 _scipy_available = is_package_available("scipy")
 _reachy2_sdk_available = is_package_available("reachy2_sdk")
 _can_available = is_package_available("python-can", "can")
+_unitree_sdk_available = is_package_available("unitree-sdk2", "unitree_sdk2py")
+_pygame_available = is_package_available("pygame")


 def make_device_from_device_class(config: ChoiceRegistry) -> Any:
+1 -3
@@ -16,12 +16,10 @@
 import json
 import warnings
 from pathlib import Path
-from typing import TypeVar

 import imageio

 JsonLike = str | int | float | bool | None | list["JsonLike"] | dict[str, "JsonLike"] | tuple["JsonLike", ...]
-T = TypeVar("T", bound=JsonLike)


 def write_video(video_path, stacked_frames, fps):
@@ -33,7 +31,7 @@ def write_video(video_path, stacked_frames, fps):
     imageio.mimsave(video_path, stacked_frames, fps=fps)


-def deserialize_json_into_object(fpath: Path, obj: T) -> T:
+def deserialize_json_into_object[T: JsonLike](fpath: Path, obj: T) -> T:
    """
    Loads the JSON data from `fpath` and recursively fills `obj` with the
    corresponding values (strictly matching structure and types).
@@ -231,3 +231,39 @@ def test_ready_to_send_observation_with_varying_threshold(robot_client, g_thresh
         robot_client.action_queue.put(act)

     assert robot_client._ready_to_send_observation() is expected
+
+
+# -----------------------------------------------------------------------------
+# Regression test: robot type registry populated by robot_client imports
+# -----------------------------------------------------------------------------
+def test_robot_client_registers_builtin_robot_types():
+    """Importing robot_client must populate RobotConfig's ChoiceRegistry.
+
+    This is a regression test for a bug introduced in #2425, where removing
+    robot module imports from robot_client.py caused RobotConfig's registry to
+    be empty, breaking CLI argument parsing with:
+
+        error: argument --robot.type: invalid choice: 'so101_follower' (choose from )
+
+    Robot types are registered via @RobotConfig.register_subclass() decorators
+    at import time, so all supported modules must be explicitly imported.
+    """
+    import lerobot.async_inference.robot_client  # noqa: F401
+    from lerobot.robots.config import RobotConfig
+
+    known_choices = RobotConfig.get_known_choices()
+    expected_robot_types = [
+        "so100_follower",
+        "so101_follower",
+        "koch_follower",
+        "omx_follower",
+        "bi_so_follower",
+    ]
+    for robot_type in expected_robot_types:
+        assert robot_type in known_choices, (
+            f"Robot type '{robot_type}' is not registered in RobotConfig's ChoiceRegistry. "
+            f"Ensure the corresponding module is imported in robot_client.py. "
+            f"Known choices: {sorted(known_choices)}"
+        )
+1
@@ -170,6 +170,7 @@ def test_async_read(index_or_path):
     assert isinstance(img, np.ndarray)


+@pytest.mark.skip("Skipping test: async_read 0 timeout behavior may be flaky/non-deterministic.")
 def test_async_read_timeout():
     config = OpenCVCameraConfig(index_or_path=DEFAULT_PNG_FILE_PATH, warmup_s=0)
+62
@@ -0,0 +1,62 @@
#!/usr/bin/env python
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from lerobot.envs.lazy_vec_env import LazyVectorEnv


class _DummyVectorEnv:
    def __init__(self):
        self.marker = "ok"
        self.closed = False

    def close(self):
        self.closed = True


def test_lazy_vec_env_materializes_only_on_access():
    created = []

    def _make_env(fns):
        created.append(len(fns))
        return _DummyVectorEnv()

    lazy = LazyVectorEnv(_make_env, [lambda: None, lambda: None])
    assert created == []
    assert lazy.num_factory_fns == 2
    assert lazy.marker == "ok"
    assert created == [2]
    # Second access should re-use the same materialized env.
    assert lazy.marker == "ok"
    assert created == [2]


def test_lazy_vec_env_can_rematerialize_after_close():
    created = []

    def _make_env(fns):
        created.append(len(fns))
        return _DummyVectorEnv()

    lazy = LazyVectorEnv(_make_env, [lambda: None])
    lazy.materialize()
    assert created == [1]
    lazy.close()
    lazy.materialize()
    assert created == [1, 1]
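The behavior these tests exercise — attribute access triggers construction, `close` allows re-materialization — can be implemented with `__getattr__` forwarding. The sketch below is an assumed implementation for illustration, not LeRobot's actual `LazyVectorEnv`:

```python
from typing import Any, Callable


class LazyProxy:
    """Defer building a vector env until an attribute is first accessed."""

    def __init__(self, make_env: Callable, factory_fns: list):
        self._make_env = make_env
        self._factory_fns = factory_fns
        self._env = None

    @property
    def num_factory_fns(self) -> int:
        return len(self._factory_fns)  # available without materializing

    def materialize(self):
        if self._env is None:
            self._env = self._make_env(self._factory_fns)
        return self._env

    def close(self):
        if self._env is not None:
            self._env.close()
            self._env = None  # allows a later re-materialization

    def __getattr__(self, name: str) -> Any:
        # Only called for attributes NOT found on the proxy itself,
        # so plain attribute access falls through and materializes the env.
        return getattr(self.materialize(), name)


class _Dummy:
    marker = "ok"

    def close(self):
        pass


created = []


def _make(fns):
    created.append(len(fns))
    return _Dummy()


lazy = LazyProxy(_make, [lambda: None, lambda: None])
assert created == []         # nothing built yet
assert lazy.marker == "ok"   # first access materializes
assert created == [2]
```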
@@ -40,7 +40,7 @@ from lerobot.utils.constants import (
     OBS_LANGUAGE_TOKENS,
     OBS_STATE,
 )  # noqa: E402
-from tests.utils import require_cuda  # noqa: E402
+from tests.utils import require_cuda, require_hf_token  # noqa: E402

 # Constants
 DUMMY_ACTION_DIM = 7
@@ -65,6 +65,7 @@ EXPECTED_ACTIONS_FIRST_5 = torch.tensor([0.0000, 0.3536, 0.0707, 0.0000, 0.0000]
 @require_cuda
+@require_hf_token
 def set_seed_all(seed: int):
     """Set random seed for all RNG sources to ensure reproducibility."""
     random.seed(seed)
@@ -82,6 +83,7 @@ def set_seed_all(seed: int):
 @require_cuda
+@require_hf_token
 def instantiate_lerobot_pi0_fast(
     from_pretrained: bool = False,
     model_path: str = MODEL_PATH_LEROBOT,
@@ -125,6 +127,7 @@ def instantiate_lerobot_pi0_fast(
 @require_cuda
+@require_hf_token
 def create_dummy_data(device=DEVICE):
     """Create dummy data for testing both implementations."""
     batch_size = 1
@@ -157,6 +160,7 @@ def create_dummy_data(device=DEVICE):
 # Pytest fixtures
 @pytest.fixture(scope="module")
 @require_cuda
+@require_hf_token
 def pi0_fast_components():
     """Fixture to instantiate and provide all PI0Fast components for tests."""
     print(f"\nTesting with DEVICE='{DEVICE}'")
@@ -168,6 +172,7 @@ def pi0_fast_components():
 @pytest.fixture(scope="module")
 @require_cuda
+@require_hf_token
 def policy(pi0_fast_components):
     """Fixture to provide the PI0Fast policy for tests."""
     return pi0_fast_components[0]
@@ -175,12 +180,14 @@ def policy(pi0_fast_components):
 @pytest.fixture(scope="module")
 @require_cuda
+@require_hf_token
 def preprocessor(pi0_fast_components):
     """Fixture to provide the PI0Fast preprocessor for tests."""
     return pi0_fast_components[1]

 @require_cuda
+@require_hf_token
 def test_pi0_fast_preprocessor_alignment(policy, preprocessor):
     """Test that LeRobot PI0Fast preprocessor produces expected outputs."""
     print("\n" + "=" * 80)
@@ -228,6 +235,7 @@ def test_pi0_fast_preprocessor_alignment(policy, preprocessor):
 @require_cuda
+@require_hf_token
 def test_pi0_fast_action_generation(policy, preprocessor):
     """Test PI0Fast LeRobot implementation generates expected actions."""
     print("\n" + "=" * 80)
@@ -306,6 +314,7 @@ def test_pi0_fast_action_generation(policy, preprocessor):
 @require_cuda
+@require_hf_token
 def test_pi0_fast_inference_reproducibility(policy, preprocessor):
     """Test that PI0Fast inference is reproducible with the same seed."""
     print("\n" + "=" * 80)
@@ -347,6 +356,7 @@ def test_pi0_fast_inference_reproducibility(policy, preprocessor):
 @require_cuda
+@require_hf_token
 def test_pi0_fast_forward_pass_logits(policy, preprocessor):
     """Test PI0Fast forward pass and compare logits against expected values."""
     print("\n" + "=" * 80)
@@ -396,6 +406,7 @@ def test_pi0_fast_forward_pass_logits(policy, preprocessor):
 @require_cuda
+@require_hf_token
 def test_pi0_fast_action_token_sampling(policy, preprocessor):
     """Test PI0Fast action token sampling (autoregressive decoding)."""
     print("\n" + "=" * 80)
@@ -452,6 +463,7 @@ def test_pi0_fast_action_token_sampling(policy, preprocessor):
 @require_cuda
+@require_hf_token
 def test_pi0_fast_detokenization(policy, preprocessor):
     """Test PI0Fast action detokenization (FAST decoding)."""
     print("\n" + "=" * 80)
+7 -2
@@ -14,10 +14,13 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
-"""Test script to verify PI0 policy integration with LeRobot, only meant to be run locally!"""
+"""Test script to verify PI0 policy integration with LeRobot"""

 import pytest
 import torch

+pytest.importorskip("transformers")
+
 from lerobot.policies.factory import make_policy_config  # noqa: E402
 from lerobot.policies.pi0 import (  # noqa: E402
     PI0Config,
@@ -25,10 +28,11 @@ from lerobot.policies.pi0 import (  # noqa: E402
     make_pi0_pre_post_processors,  # noqa: E402
 )
 from lerobot.utils.random_utils import set_seed
-from tests.utils import require_cuda  # noqa: E402
+from tests.utils import require_cuda, require_hf_token  # noqa: E402


 @require_cuda
+@require_hf_token
 def test_policy_instantiation():
     # Create config
     set_seed(42)
@@ -105,6 +109,7 @@ def test_policy_instantiation():

 @require_cuda
+@require_hf_token
 def test_config_creation():
     """Test policy config creation through factory."""
     try:
+7 -2
@@ -14,10 +14,13 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
-"""Test script to verify PI0.5 (pi05) support in PI0 policy, only meant to be run locally!"""
+"""Test script to verify PI0.5 (pi05) support in PI0 policy"""

 import pytest
 import torch

+pytest.importorskip("transformers")
+
 from lerobot.policies.factory import make_policy_config  # noqa: E402
 from lerobot.policies.pi05 import (  # noqa: E402
     PI05Config,
@@ -25,10 +28,11 @@ from lerobot.policies.pi05 import (  # noqa: E402
     make_pi05_pre_post_processors,  # noqa: E402
 )
 from lerobot.utils.random_utils import set_seed
-from tests.utils import require_cuda  # noqa: E402
+from tests.utils import require_cuda, require_hf_token  # noqa: E402


 @require_cuda
+@require_hf_token
 def test_policy_instantiation():
     # Create config
     set_seed(42)
@@ -141,6 +145,7 @@ def test_policy_instantiation():

 @require_cuda
+@require_hf_token
 def test_config_creation():
     """Test policy config creation through factory."""
     try:
@@ -14,7 +14,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
-"""Test script to verify PI0OpenPI policy integration with LeRobot vs the original implementation, only meant to be run locally!"""
+"""Test script to verify PI0OpenPI policy integration with LeRobot vs the original implementation"""

 import os
 from copy import deepcopy
@@ -14,7 +14,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
-"""Test script to verify PI0 policy integration with LeRobot vs the original implementation, only meant to be run locally!"""
+"""Test script to verify PI0 policy integration with LeRobot vs the original implementation"""

 import os
 from copy import deepcopy
