* chore(deps): allow torch 2.11/2.12 and fix autocast deprecation
- Bump torch to >=2.7,<2.13 (was <2.11), torchvision to <0.28 (was <0.26),
and torchcodec to <0.13 (was <0.11) to allow installs against the latest
stable torch 2.11 and the upcoming 2.12 line.
- Replace removed torch.get_autocast_gpu_dtype() with torch.get_autocast_dtype("cuda")
in Florence2 and Qwen2.5-VL-MoE FlashAttention paths (the former is removed in 2.11+).
- Refresh uv.lock for the new resolution (torch 2.11.0+cu130, torchvision 0.26.0+cu130,
torchcodec 0.11.1, full CUDA 13 stack).
Verified locally with `uv sync --locked` from a clean .venv and the lerobot
test suite (pytest -n 8 --dist=loadfile --timeout=300). Failure set is
identical to the pre-bump baseline: 18 pre-existing failures
(test_sac_policy*, test_pi0_rtc*, test_pi05_rtc*, test_replay_buffer*),
0 new, 0 fixed.
AI assistance: this change was authored with Claude Code per AI_POLICY.md.
* fix(policies): use device-agnostic autocast dtype lookup
Pass query_states.device.type to torch.get_autocast_dtype() instead of
hardcoding 'cuda', so the cast matches the active autocast context when
running under CPU/MPS/XPU autocast.
---------
Co-authored-by: Steven Palma <imstevenpmwork@ieee.org>
* feat(policies): add EO-1 model
* chore(eo1): adjust policy_eo1_README.md to avoid duplication with eo1.mdx
* chore(eo1): remove policy_eo1_README.md, link eo1.mdx in policy folder
---------
Co-authored-by: Pepijn <138571049+pkooij@users.noreply.github.com>
feat(sim): add VLABench benchmark integration
Add VLABench as a new simulation benchmark in LeRobot, following the existing LIBERO and MetaWorld patterns.
This PR wires VLABench end-to-end across environment integration, Docker setup, CI smoke evaluation, and documentation. It also fixes a number of upstream packaging and runtime issues required to make VLABench usable and reproducible in CI.
What’s included
Benchmark integration
- Add VLABench as a new simulation benchmark.
- Expose supported VLABench tasks through the LeRobot env interface.
- Follow the established LIBERO / MetaWorld factory patterns.
- Preserve lazy async-env metadata so env.unwrapped.metadata["render_fps"] continues to work.
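The lazy-metadata behavior above can be sketched as a deferred wrapper that carries metadata eagerly; class and attribute names here are illustrative, not the actual LeRobot implementation:

```python
class LazyAsyncEnv:
    """Illustrative sketch (not the actual LeRobot class): defer
    AsyncVectorEnv construction while keeping metadata queryable."""

    def __init__(self, env_fns, metadata):
        self._env_fns = env_fns
        self.metadata = dict(metadata)  # e.g. {"render_fps": 30}
        self._env = None

    @property
    def unwrapped(self):
        # callers read env.unwrapped.metadata["render_fps"] before
        # any worker process exists
        return self

    def _materialize(self):
        if self._env is None:
            import gymnasium as gym  # deferred: only needed at step time
            self._env = gym.vector.AsyncVectorEnv(self._env_fns)
        return self._env
```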
CI smoke evaluation
- Add a VLABench smoke-eval job using lerobot/smolvla_vlabench.
- Use the correct rename_map for the 3-camera dataset layout.
- Expand smoke coverage from 1 to 10 primitive tasks.
- Extract task descriptions after eval so metrics artifacts include per-task labels.
- Skip Docker Hub login when secrets are unavailable (e.g. fork PRs).
Docker / install fixes
- Install VLABench from GitHub rather than PyPI.
- Use uv pip, not pip, in the base image.
- Fail loudly on install errors instead of masking them.
- Clone VLABench into the non-root user’s home directory.
- Use shallow editable installs for VLABench and rrt-algorithms to work around missing __init__.py issues.
- Pin upstream clones to exact commit SHAs for reproducibility.
- Add undeclared runtime dependencies required by VLABench (open3d, colorlog, scikit-learn, openai).
- Unpin open3d so Python 3.12 wheels resolve.
Assets
- Support downloading VLABench assets from a Hugging Face Hub mirror via VLABENCH_ASSETS_REPO.
- Keep Google Drive download support as fallback.
- Install huggingface_hub[hf_xet] so Xet-backed assets download correctly.
- Validate required mesh/XML asset subtrees at build time.
- Patch VLABench constants to tolerate missing asset directories at import time.
Runtime / env correctness
- Import VLABench robots and tasks explicitly so decorator-based registry population happens.
- Resize and normalize camera observations so they always match the declared (H, W, 3) uint8 observation space.
- Reinstall LeRobot editably inside the image so the new env code is actually used.
- Coerce agent_pos / ee_state to the expected shape.
- Pad actions when needed to match data.ctrl.
- Replace zero-padding fallback with proper dm_control IK for 7D end-effector actions.
- Refetch dm_control physics on each step instead of caching weakrefs.
- Retry unstable resets with reseeding and handle PhysicsError gracefully at step time.
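The explicit-import requirement above comes from the usual decorator-registry pattern: nothing lands in the registry until the module defining the decorated classes is actually imported. A generic sketch (not VLABench's actual code; the task name is made up):

```python
TASK_REGISTRY: dict[str, type] = {}

def register_task(name: str):
    # populates the registry as a side effect of class definition,
    # which only happens once the defining module is imported
    def deco(cls):
        TASK_REGISTRY[name] = cls
        return cls
    return deco

@register_task("select_fruit")  # hypothetical task name
class SelectFruitTask:
    pass
```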
Dataset / policy alignment
- Align VLABench observations and actions with Hugging Face dataset conventions used by lerobot/vlabench_unified:
  - convert EE position between world frame and robot-base frame at the env boundary,
  - expose / consume Euler XYZ instead of raw quaternion layout,
  - align gripper semantics with dataset convention (1 = open, 0 = closed).

This fixes policy/env mismatches that previously caused incorrect IK targets and unstable behavior at evaluation time.
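The world/base frame conversion above is standard rigid-body math; a sketch assuming the base pose is given as a position plus rotation matrix (VLABench's actual representation may differ):

```python
import numpy as np

def ee_world_to_base(p_world, base_pos, base_rot):
    # p_base = R_base^T @ (p_world - t_base)
    return np.asarray(base_rot).T @ (np.asarray(p_world) - np.asarray(base_pos))

def ee_base_to_world(p_base, base_pos, base_rot):
    # inverse mapping: p_world = R_base @ p_base + t_base
    return np.asarray(base_rot) @ np.asarray(p_base) + np.asarray(base_pos)
```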
Docs
- Add a full docs/source/vlabench.mdx page aligned with the standard benchmark template.
- Document task selection forms (single task, comma list, suite shortcut).
- Document installation, evaluation, training, and result reproduction.
- Point examples at lerobot/smolvla_vlabench.
- Add a benchmark banner image.
- Remove outdated / misleading references to upstream evaluation tracks.
- Document the manual install flow instead of the broken vlabench extra.
Packaging cleanup
- Remove the unresolvable vlabench extra from pyproject.toml.
- Remove the no-op VLABench processor step.
- Remove the obsolete env unit test that only covered the dropped gripper remap helper.
- Apply formatting / logging / style cleanup from review feedback.
Why this is needed
VLABench is not currently consumable as a normal Python dependency and requires several upstream workarounds:
- no PyPI release,
- missing package declarations,
- undeclared runtime deps,
- SSH-only submodule references,
- asset downloads outside the normal package install flow,
- registry population that depends on import side effects,
- env outputs that do not always match declared observation shapes,
- task resets that can diverge under some random layouts.
This PR makes the benchmark usable in LeRobot despite those constraints, and ensures CI runs are reproducible and informative.
* feat(envs): add RoboMME benchmark integration
- RoboMME env wrapper with image/wrist_image/state observations
- Docker image with Vulkan, SAPIEN, mani-skill deps
- CI workflow: 1-episode smoke eval with pepijn223/smolvla_robomme
- preprocess_observation: handle image/wrist_image/state keys
- pyproject.toml: robomme extra
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor(docker): rebase RoboMME image on huggingface/lerobot-gpu
Mirror the libero/metaworld pattern: start from the nightly GPU image
(which already has apt deps, uv, venv, and lerobot[all] preinstalled)
and only layer on what RoboMME uniquely needs — the Vulkan libs
ManiSkill/SAPIEN requires, plus the robomme extra with the
gymnasium/numpy overrides.
Drops 48 lines of duplicated base setup (CUDA FROM, python install,
user creation, venv init, base apt deps) that the nightly image already
provides. Net: 102 → 54 lines.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* docs(robomme): drop prototype-branch note and move dataset to lerobot/robomme
- Remove the "Related work" block referencing the prototype branch
feat/robomme-integration; the PR stands on its own.
- Point all dataset references at lerobot/robomme (docs, env module
docstring, RoboMMEEnvConfig docstring) — this is the canonical HF
location once the dataset is mirrored.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(robomme): make docs build + fast tests green
1. Docs: add robomme to _toctree.yml under Benchmarks so doc-builder's
TOC integrity check stops rejecting the new page.
2. Fast tests: robomme's mani-skill transitively pins numpy<2 which is
unsatisfiable against the project's numpy>=2 base pin, so `uv sync`
couldn't resolve a universal lockfile.
Drop robomme as a pyproject extra entirely — it truly cannot coexist
with the rest of the dep tree. The Dockerfile installs robomme
directly from its git URL via `uv pip install --override`, which was
already the runtime path. pyproject, docs, env docstrings, and the
CI job comment all now point to the docker-only install.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* test(robomme): realign unit tests with current env API
The tests were written against an earlier env layout and never updated when
the wrapper was refactored, so CI's fast-test job was failing with:
- KeyError: 'front_rgb' / 'wrist_rgb' — these were renamed to the
lerobot-canonical 'image' / 'wrist_image' keys (matching the dataset
columns and preprocess_observation's built-in fallbacks).
- AssertionError: 'robomme' not in result — create_robomme_envs now
returns {task_name: {task_id: env}}, not {'robomme': {...}}, so
comma-separated task lists work.
- ModuleNotFoundError: lerobot.envs.lazy_vec_env — LazyVectorEnv was
removed; create_robomme_envs is straightforward synchronous now.
Rewrite the 7 failing cases against the current API, drop the three
LazyVectorEnv tests, and add a multi-task test so the new comma-separated
task parsing is covered. Stub install/teardown is moved into helpers
(`_install_robomme_stub` / `_uninstall_robomme_stub`) so individual tests
stop repeating six boilerplate lines.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* ci: point benchmark eval checkpoints at the lerobot/ org mirrors
pepijn223/smolvla_* → lerobot/smolvla_* across every benchmark job in
this branch (libero, metaworld, and the per-branch benchmark). The
checkpoints were mirrored into the lerobot/ org and that's the canonical
location going forward.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix: integrate PR #3311 review feedback
- envs: rename obs keys to pixels/image, pixels/wrist_image, agent_pos
- envs: add __post_init__ for dynamic action_dim in RoboMMEEnv config
- envs: remove special-case obs conversion in utils.py (no longer needed)
- ci: add Docker Hub login, HF_USER_TOKEN guard, --env.task_ids=[0]
- scripts: extract_task_descriptions supports multiple task_ids
- docs: title to # RoboMME, add image, restructure eval section
- tests: update all key assertions to match new obs naming
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(docs): use correct RoboMME teaser image URL
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* ci(robomme): smoke-eval 10 tasks instead of 5
Broader coverage on the RoboMME benchmark CI job: bump the smoke eval
from 5 tasks to 10 (one episode each), all drawn from ROBOMME_TASKS.
Tasks now run: PickXtimes, BinFill, StopCube, MoveCube, InsertPeg,
SwingXtimes, VideoUnmask, ButtonUnmask, PickHighlight, PatternLock.
Updated the parse_eval_metrics.py `--task` label from the single
`PickXtimes` stub to the full comma list so the metrics artifact
reflects what was actually run. `parse_eval_metrics.py` already reads
`overall` for multi-task runs, so no parser change is needed.
Made-with: Cursor
* fix(robomme): nest `pixels` as a dict so preprocess_observation picks it up
`_convert_obs` was returning flat keys (`pixels/image`,
`pixels/wrist_image`). `preprocess_observation()` in envs/utils.py
keys off the top-level `"pixels"` entry and, not finding it,
silently dropped every image from the batch. The policy then saw
zero image features and raised
ValueError: All image features are missing from the batch.
Match the LIBERO layout: return
`{"pixels": {"image": ..., "wrist_image": ...}, "agent_pos": ...}`
and declare the same shape in `observation_space`.
Made-with: Cursor
* fix(robomme): align docs and tests with nested pixels obs layout
Addresses PR #3311 review feedback:
- Docs: correct observation keys to `pixels/image` / `pixels/wrist_image`
(mapped to `observation.images.image` / `observation.images.wrist_image`)
and drop the now-obsolete column-rename snippet.
- Tests: assert `result["pixels"]["image"]` instead of flat `pixels/image`,
matching the nested layout required by `preprocess_observation()`.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(envs): preserve AsyncVectorEnv metadata/unwrapped in lazy eval envs
Port of #3416 onto this branch.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* ci: gate Docker Hub login on secret availability
Fork PRs cannot access `secrets.DOCKERHUB_LEROBOT_{USERNAME,PASSWORD}`,
which made every benchmark job fail at the login step. Gate the login
on the env-var expansion of the username so the step is skipped (not
failed) when secrets are absent.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix(robomme): address review feedback
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(envs): add RoboCasa365 benchmark integration
Add RoboCasa365 (arXiv:2603.04356) as a new simulation benchmark with
365 everyday kitchen manipulation tasks across 2,500 diverse environments.
New files:
- src/lerobot/envs/robocasa.py: gym.Env wrapper with deferred env creation,
flat 12D action / 16D state vectors, 3-camera support
- docs/source/robocasa.mdx: user-facing documentation
- docker/Dockerfile.benchmark.robocasa: CI benchmark image
Modified files:
- src/lerobot/envs/configs.py: RoboCasaEnv config (--env.type=robocasa)
- pyproject.toml: robocasa optional dependency group
- docs/source/_toctree.yml: sidebar entry
- .github/workflows/benchmark_tests.yml: integration test job
Refs: https://arxiv.org/abs/2603.04356, https://robocasa.ai
Related: huggingface/lerobot#321
* fix(docker): use uv pip to install robocasa in benchmark image
The huggingface/lerobot-gpu base image uses `uv` with a venv at
/lerobot/.venv — `pip` is not on PATH, so `pip install` fails with
"pip: not found". Switch to `uv pip install` which installs into the
existing venv.
Also drop the @v1.0.0 tag pin from the robocasa git URL since the
upstream repo may not have that tag; use default branch instead.
* fix(robocasa): editable install + switch to lerobot/smolvla_robocasa
- pip install from git omits data files like box_links_assets.json
(not declared in package_data). Clone and install editable so the
source tree is used at runtime.
- Download only tex + fixtures_lw asset types (smoke test doesn't need
objaverse/aigen objects). Pipe 'y' to auto-accept download prompt.
- Switch CI policy from pepijn223/smolvla_robocasa to lerobot/smolvla_robocasa.
* fix(docker): re-install lerobot editably after COPY
The nightly huggingface/lerobot-gpu image predates the RoboCasaEnv
registration — so `lerobot-eval --env.type=robocasa` fails at argparse
with "invalid choice" even after COPY . . overlays the new source.
Force an editable reinstall so the venv picks up the current configs.py.
* fix(ci): add rename_map for robocasa eval (image* -> camera*)
Policy lerobot/smolvla_robocasa expects observation.images.camera1/2/3,
but RoboCasaEnv produces observation.images.image/image2/image3.
* fix(robocasa): override RoboCasaGymEnv default split (test -> all)
RoboCasaGymEnv defaults split="test", but create_env only accepts
{None, "all", "pretrain", "target"}, so the out-of-the-box default
crashes with ValueError. Always pass "all" when split is None.
* fix(docker): also download objs_lw (lightwheel objects) for robocasa
Kitchen tasks (e.g. CloseFridge) reference lightwheel object meshes
like Stool022/model.xml. fixtures_lw alone isn't enough — we also
need objs_lw. Still skipping objaverse/aigen to keep image size down.
Made-with: Cursor
* feat(robocasa): raw camera names + benchmark-group task shortcuts
Align the LeRobot env with RoboCasa's native conventions so policies
trained on the upstream datasets don't need a --rename_map at eval
time, and expose the standard task groups as first-class --env.task
values.
- Preserve raw RoboCasa camera names (e.g. robot0_agentview_left)
as observation.images.<name> end-to-end. Drops camera_name_mapping
and DEFAULT_CAMERA_NAME_MAPPING; features/features_map are now
built dynamically from the parsed camera list.
- Accept benchmark-group names as --env.task: atomic_seen,
composite_seen, composite_unseen, pretrain50/100/200/300. Expanded
lazily via robocasa.utils.dataset_registry and auto-sets the
split ("target" | "pretrain").
- Update CI smoke-eval rename_map to map raw cam names to the
camera1/2/3 keys expected by lerobot/smolvla_robocasa.
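Building features dynamically from the parsed camera list might look like the following; the helper name and default resolution are assumptions, not the actual code:

```python
def build_image_features(camera_names: list[str], h: int = 256, w: int = 256) -> dict:
    # features keyed by observation.images.<raw camera name>, so policies
    # trained on upstream datasets need no --rename_map at eval time
    return {f"observation.images.{name}": (h, w, 3) for name in camera_names}
```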
* docs(robocasa): single-task smolvla train+eval recipe on pepijn223/robocasa_CloseFridge
- Rewrite observation section to use raw RoboCasa camera keys
(observation.images.robot0_agentview_{left,right},
observation.images.robot0_eye_in_hand).
- Add a "Training on a single task" section with a full smolvla
training command on pepijn223/robocasa_CloseFridge, plus matching
single-task eval command.
- Document benchmark-group task shortcuts (atomic_seen, composite_seen,
composite_unseen, pretrain50/100/200/300) as valid --env.task values.
* fix(robocasa): restrict obj_registries to lightwheel by default
CloseFridge (and most kitchen tasks) crashed at reset with
`ValueError: Probabilities contain NaN` coming out of
`sample_kitchen_object_helper`. RoboCasa's upstream default
`obj_registries=("objaverse", "lightwheel")` normalizes per-registry
candidate counts as probabilities; when a sampled category has zero
mjcf paths in every configured registry (because the objaverse asset
pack isn't on disk — ~30GB, skipped by our Docker build), the 0/0
divide yields NaNs and `rng.choice` raises.
- Add `obj_registries: list[str] = ["lightwheel"]` to `RoboCasaEnv`
config; thread it through `create_robocasa_envs`, `_make_env_fns`,
and the gym.Env wrapper to the underlying `RoboCasaGymEnv` (which
forwards to `create_env` → `robosuite.make` → kitchen env).
- Default matches what `download_kitchen_assets --type objs_lw`
actually ships, so the env works out of the box without a 30GB
objaverse download.
- Document the override (`--env.obj_registries='[objaverse,lightwheel]'`)
for users who have downloaded the full asset set.
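The 0/0 failure mode reduces to normalizing an all-zero count vector; a numpy sketch of just that step:

```python
import numpy as np

# per-registry candidate counts when the sampled category has zero
# mjcf paths in every configured registry (objaverse pack absent)
counts = np.array([0.0, 0.0])
with np.errstate(invalid="ignore"):
    probs = counts / counts.sum()  # 0/0 -> [nan, nan]

# rng.choice(..., p=probs) then raises ValueError on the NaN probabilities
```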
* fix(docker): also download tex_generative for robocasa benchmark
RoboCasa's lightwheel kitchen fixtures embed references to
`generative_textures/wall/tex*.png` directly in their MuJoCo XML, so
`MjModel.from_xml_string` errors out at reset time with
"No such file or directory" even when the env is constructed with
`generative_textures=None`. The generative textures live under a
separate asset registry key (`tex_generative`) in
`download_kitchen_assets`, distinct from the base `tex` pack we were
already fetching.
- Add `tex_generative` to the download list so the fixture XMLs
resolve.
- Document the remaining omissions (objaverse/aigen, ~30GB) and how
the runtime side pairs this with obj_registries=["lightwheel"] to
avoid sampling from categories whose assets aren't on disk.
* ci(robocasa): smoke-eval 10 atomic tasks instead of 1
Broader coverage in the benchmark CI job: evaluate SmolVLA on ten
fixture-centric atomic RoboCasa tasks (one episode each) instead of
just CloseFridge. The tasks are all drawn from TARGET_TASKS.atomic_seen
and selected to avoid object-manipulation categories that would require
the objaverse/aigen asset packs (we only ship objs_lw in the Docker
image, paired with obj_registries=["lightwheel"] on the runtime side).
Tasks: CloseFridge, OpenCabinet, OpenDrawer, TurnOnMicrowave,
TurnOffStove, CloseToasterOvenDoor, SlideDishwasherRack,
TurnOnSinkFaucet, NavigateKitchen, TurnOnElectricKettle.
`scripts/ci/parse_eval_metrics.py` already handles multi-task output
via the `overall` key, so no parser changes needed. Bumped the metrics
artifact's task label to `atomic_smoke_10` to reflect the grouping.
* fix(pyproject): drop unresolvable robocasa extra
robocasa's upstream setup.py hardcodes `lerobot==0.3.3` in
install_requires. Exposing it as the `lerobot[robocasa]` extra made
uv's dep resolver cycle: `lerobot[robocasa]` -> robocasa -> lerobot
(a different version) -> unsolvable. This broke every `uv sync` — even
invocations with an unrelated extra like `--extra test` — because uv
validates the whole lockfile graph.
- Remove the `robocasa` extra from pyproject.toml. Installation
instructions in docs/source/robocasa.mdx now walk users through the
manual `git clone` + `pip install --no-deps` flow, which matches
what the Docker image already does and sidesteps the cyclic dep
entirely.
- Dockerfile: `uv pip install -e ~/robocasa --no-deps` so the
shadowed lerobot==0.3.3 never lands in the image; install
robocasa's actual runtime deps (numpy, numba, scipy, mujoco,
tianshou, etc.) explicitly.
* docs(robocasa): align page with adding_benchmarks template
Rework docs/source/robocasa.mdx to follow the standard benchmark doc
structure: intro + links + available tasks (with family breakdown and
first-class benchmark-group shortcuts) + installation + eval +
recommended episodes + policy I/O + training + reproducing results.
- Fix the paper link (was pointing at a non-existent arxiv ID).
- Surface lerobot/smolvla_robocasa and pepijn223/robocasa_CloseFridge
in the top-of-page links so they're findable without reading the
training section.
- Add an explicit "Object registries" subsection explaining the
`--env.obj_registries=[objaverse,lightwheel]` override path.
- Add an explicit "Reproducing published results" section pointing
at the CI smoke eval.
* fix: integrate PR #3375 review feedback
- envs(robocasa): hoist the duplicated `_parse_camera_names` helper
out of `libero.py` and `robocasa.py` into `envs/utils.py` as the
public `parse_camera_names`; call sites updated.
- envs(robocasa): give each factory a distinct `episode_index`
(`0..n_envs-1`) and derive a per-worker seed series in `reset()`
so n_envs workers don't all roll the same scene under a shared
outer seed.
- envs(robocasa): drop the unused `**kwargs` on `_make_env`; declare
`visualization_height` / `visualization_width` on both the wrapper
and the `RoboCasaEnv` config + propagate via `gym_kwargs`.
- envs(robocasa): emit `info["final_info"]` on termination (matching
MetaWorld) so downstream vector-env auto-reset keeps the terminal
task/success flags.
- docs(robocasa): add `--rename_map` (robot0_agentview_left/
eye_in_hand/agentview_right → camera1/2/3) plus CI-parity flags to
all three eval snippets.
- docker(robocasa): pin robocasa + robosuite git SHAs and the pip
dep versions (pygame, Pillow, opencv-python, pyyaml, pynput, tqdm,
termcolor, imageio, h5py, lxml, hidapi, gymnasium) for
reproducible benchmark images.
- ci(robocasa): update the workflow comment — there is no
`lerobot[robocasa]` extra; robocasa/robosuite are installed
manually because upstream's `lerobot==0.3.3` pin shadows ours.
* docs(robocasa): add benchmark banner image
* fix(envs): preserve AsyncVectorEnv metadata/unwrapped in lazy eval envs
Port of #3416 onto this branch. Also threads the cached metadata
through the RoboCasa factory so async eval on `--env.type=robocasa`
keeps the same improvement.
* fix: integrate PR #3375 review feedback (round 2)
- envs(robocasa): when the caller passes `seed=None` to `reset()`,
fall back to `self.episode_index` for the inner env seed so each
worker still samples a distinct trajectory instead of all workers
inheriting the same global RNG state.
- envs(robocasa): replace the two module-level `print()` calls in
`create_robocasa_envs` with `logger.info(...)` via a module-level
`logger = logging.getLogger(__name__)`.
- ci(robocasa): run `scripts/ci/extract_task_descriptions.py` after
the eval so `metrics.json` carries per-task natural-language
labels, matching LIBERO / MetaWorld / VLABench jobs. Added a
`_robocasa_descriptions()` extractor that splits CamelCase task
names into word-level labels keyed by `<task>_0`.
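A plausible sketch of such a CamelCase splitter (the real `_robocasa_descriptions` helper may differ):

```python
import re

def describe_task(task_name: str) -> str:
    # split CamelCase task names into word-level labels,
    # e.g. "TurnOnMicrowave" -> "turn on microwave"
    words = re.findall(r"[A-Z][a-z]*|\d+", task_name)
    return " ".join(w.lower() for w in words)
```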
* feat(ffmpeg): updating ffmpeg version to 8.X
* Revert "feat(ffmpeg): updating ffmpeg version to 8.X"
This reverts commit bb0f03185c.
* chore(pyproject): updating pyproject to fit the minimally required version of torchcodec
* chore(docs): updating doc with specific instructions for ffmpeg/torchcodec installation
* fix(typo): reverting ceiling bound on pytorch to 2.11.0
* chore(format): removing empty line
* chore(typo): fixing typo
* chore(docs): adding warning in case of torchcodec/ffmpeg version mismatch
* chore(docs): applying comments
* chore(docs): adding uv commands for evdev on WSL
* fix(typo): fixing typo
* fix(typo): fixing typos again
* chore(ruff): format
* fix(evdev install): splitting evdev install instructions between conda and uv
* chore(ruff): format
---------
Co-authored-by: Steven Palma <imstevenpmwork@ieee.org>
* Add multitask diffusion transformer policy
* expand the observation encoder to support different size encoders for vision and text
* add RoPE attention module as this is shown to help training dynamics and generation quality for DiTs
* update readme and citations for multitask dit policy
* remove dino vision encoder and simplify text and vision encoders by removing inheritance structure
* adjust factory comment
* update docstring for multitask dit policy processor file
* simplify config for multitask dit by merging and flattening everything, then adding comments to denote where some parameters are only used for specific objectives
* add references to the modeling file comments
* merge all modules files into the main modeling file
* add torch.no_grad decorators
* split up select action return statement
* remove redundant asserts
* add tutorial to training with multi_task_dit
* fix bugs when testing on hardware
* remove environment state conditioning
* update typo in test instruction comment
* add processor tests to multitask dit tests
* move policy to top of file
* use constants for indexing into batches and remove env state references
* remove the base classes since we don't need to be able to extend
* fix nit formatting in generate actions fcn
* reformat and clean up tutorial for multitask dit policy
* add more descriptions and depth to multitask dit tutorial
* note origins of each training objective
* rename config param for multiple vision encoders
* refactor code to perform task tokenization in the processor instead of in the modeling code for multitask dit
* add multitask dit to toc for docs
* add conditional transformers import to match all other policies that use transformers lib
* add test handling for multitask dit when transformers isn't available
* skip tests without transformers
* remove cropping of images smaller than the crop size
* add kwargs arg to multitask dit constructor
* add wallx dep conflict management for multitask dit policy
* use hyphens for cleanliness in pyproject.toml
* add conflict management to pyproject toml for pi conflict for mtdp as well
* update tests script to remove an unnecessary uv sync call that resolved dependencies not needed for the run; this drastically reduces CI run time
* revert fast tests edits
* update docs and readme files, fixing some typos and adding multitask dit to readme
* chore(dependencies): upgrade transformers + huggingface-hub + peft + scipy
* chore(dependencies): bump pi0 family to transformers v5
* chore(dependencies): bump wall x to transformers v5
* chore(dependencies): bump gr00t to transformers v5
* chore(style): fix pre-commit
* fix(policy): xvla forced_bos_token missing
* test(rl): skip ci tests for resnet10
* Fix: full pi models support for transformer v5 (#2967)
* fix(pi): remove loss truncation
* fix(pi): remove state padding before tokenization
* fix(pi): fix image padding value
* fix from_pretrain
* add transformer v5 changes
* remove reference
* more fixes
* make it work
* add support for rest of pi family
* add pifast work
* more changes
* more changes
* more cleanup
* fix torch params
* dtype fix
* torch compile
* embed mismatch fix
* revert groot
* more nit fixes
* remove unused classes
* more fixes
* revert
* nit
* torch dtype warning fix
* put back dynamic renaming
* add tie embedding
---------
Co-authored-by: Yufei Sun <skieyfly@gmail.com>
* chore: fix XVLA in transformers v5 (#3006)
* test(policies): enable wall x CI testing
* style(test): pre-commit check
* style(test): pre-commit
---------
Signed-off-by: Bryson Jones <63133702+brysonjones@users.noreply.github.com>
Co-authored-by: Pepijn <138571049+pkooij@users.noreply.github.com>
Co-authored-by: Steven Palma <imstevenpmwork@ieee.org>
Co-authored-by: Jade Choghari <chogharijade@gmail.com>
Co-authored-by: Yufei Sun <skieyfly@gmail.com>
Co-authored-by: Steven Palma <steven.palma@huggingface.co>
1. Include metaworld_config.json in package distributions by adding it to
both MANIFEST.in (for sdist) and pyproject.toml package-data (for wheels).
Without this, pip-installed lerobot raises FileNotFoundError when
importing the metaworld environment.
2. Fix crash in sanity_check_dataset_name where the error message accesses
policy_cfg.type when policy_cfg is None, raising AttributeError instead
of the intended ValueError.
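The AttributeError fix reduces to formatting the policy name defensively before building the message; the function body below is an illustrative shape, not LeRobot's actual check:

```python
def sanity_check_dataset_name(repo_id: str, policy_cfg) -> None:
    # illustrative shape only: resolve the policy name defensively so a
    # None policy_cfg produces the intended ValueError, not AttributeError
    policy_name = policy_cfg.type if policy_cfg is not None else "None"
    if _name_mismatch(repo_id, policy_name):
        raise ValueError(f"dataset {repo_id!r} does not match policy {policy_name}")

def _name_mismatch(repo_id: str, policy_name: str) -> bool:
    # hypothetical check: dataset name should mention the policy type
    return policy_name != "None" and policy_name not in repo_id
```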
Fixes #2958
* feat(motors): add initial implementation of robstride
Co-authored-by: Virgile <virgilebatto@gmail.com>
* chore(motors): solve some linter
* remove kp/kd attribute
* code uniformisation between damiao and robstride
* remove normalization warning
* remove non valid baudrates and small docstring update
* remove all useless files. Only keeping robstride.py and table.py
* typing for mypy
* reduce NameOrId usage
* align signature with damiao
* put the same helper than in the damiao implementation
* bug correction: expect a response after each bus.send
---------
Co-authored-by: Virgile <virgilebatto@gmail.com>
* fix: ensure motors module passes MyPy type checks
This commit fixes 62 mypy type errors in the motors module by:
- Updating Protocol classes (PortHandler, PacketHandler, GroupSyncRead,
GroupSyncWrite) to use class-level attribute declarations instead of
__init__ body declarations
- Adding missing `broadcastPing` method to PacketHandler Protocol
- Fixing return type annotations (e.g., `_get_motor_model` returns str, not int)
- Fixing parameter types to use `Sequence` for covariant list parameters
- Fixing `Mapping` for covariant dict value types in `_normalize`
- Updating method signatures to be consistent across parent and child classes
(disable_torque, enable_torque, _get_half_turn_homings)
- Adding explicit `int()` casts for MotorCalibration arguments
- Adding explicit `return None` for functions returning Optional types
- Adding type annotations for variables like `data_list: dict[int, int]`
- Using `# type: ignore[method-assign]` for intentional monkeypatch
- Fixing variable references (using `self.groups` instead of `groups`)
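The Protocol change follows typing's documented pattern: members declared at class level (not assigned in `__init__`) are what mypy sees as the structural type. A generic sketch with made-up names:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class PortHandlerLike(Protocol):
    # class-level declarations: mypy treats these as part of the
    # structural type; __init__-body assignments would not be seen
    is_open: bool
    baudrate: int

    def openPort(self) -> bool: ...

class SerialPort:
    """Duck-typed implementation; satisfies PortHandlerLike structurally."""

    def __init__(self) -> None:
        self.is_open = False
        self.baudrate = 1_000_000

    def openPort(self) -> bool:
        self.is_open = True
        return True
```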
Fixes #1723
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* chore(style): pre-commit after main merge
* chore(linter): solve comments
* chore(linter): apply pre-commit fixes to damiao
* chore(linter): more fixes to damiao
---------
Co-authored-by: yurekami <yurekami@users.noreply.github.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
* fix(motors): cleanup imports + fix signatures
* feat(motors): add damiao canbus + multiple fixes
* fix(motors): address comments -> last_state + different gains + sleep
* refactor(motors): reduce duplicated code + addressed some comments in the PR
* chore(motors): better timeouts
* tests(motors): damiao test and imports
* chore(deps): fix space
* feat(robot): add openarm leader
Co-authored-by: Pepijn <pepijn@huggingface.co>
* feat(robot): add openarm follower
Co-authored-by: Pepijn <pepijn@huggingface.co>
* refactor(robot): remove mechanical compensations and double arm assumption + rename
* chore(robots): remove left arm references
* refactor(teleop): multiple improvements to leader
* refactor(teleop): multiple improvements to leader
* feat(robots): add open arm to util CLI
* chore(robot): add alias openarm
* Apply suggestions from code review
Co-authored-by: Pepijn <138571049+pkooij@users.noreply.github.com>
Signed-off-by: Steven Palma <imstevenpmwork@ieee.org>
* chore(motors): remove normalization tables damiao
* fix(motors): imports and signatures
* feat(motors): add motor_type_str + recv_id to motor class and _get_motor_recv_id raises if no motor_obj.recv_id
* chore(motors): remove normalize from base motor class and damaio
* tests(motors): remove bad tests (to be replaced)
* chore(motors): updated import check
* fix(robots): open arm mirrored config for joint limits
* chore(motors): update position_kd gain values
* chore(robots): set to 0 if openarm is calibrated at connect time
* chore(robots): remove macos in open arm as can doesn't support it
* chore(robots): update for motor_type_str in Motor class
* chore(robots): no default value for can port in open arms
* use constant for kp and kd range and check responses in mit_control_batch()
* Add docs on setting up canbus and use damiao motor bus, also add lerobot_setup_can.py and log if there is no response from a write command
* precommit format
* suppress bandit as these are intentional cli commands
* fix setup-can
* add test
* skip test in ci
* nit precommit
* update doc example
* don't import can for tests
* remove comment
* Add openarms docs
* format
* update purchase link
* can to none if not available
* add canfd option in bus
* make handshake logic similar to lerobot-can
* type hint
* type check
* add temp teleop test
* remove script
* mock class
* ignore linter
---------
Signed-off-by: Steven Palma <imstevenpmwork@ieee.org>
Co-authored-by: Pepijn <pepijn@huggingface.co>
Co-authored-by: Pepijn <138571049+pkooij@users.noreply.github.com>
* fix(motors): cleanup imports + fix signatures
* feat(motors): add damiao canbus + multiple fixes
* fix(motors): address comments -> last_state + different gains + sleep
* refactor(motors): reduce duplicated code + addressed some comments in the PR
* chore(motors): better timeouts
* tests(motors): damiao test and imports
* chore(deps): fix space
* Apply suggestions from code review
Co-authored-by: Pepijn <138571049+pkooij@users.noreply.github.com>
Signed-off-by: Steven Palma <imstevenpmwork@ieee.org>
* chore(motors): remove normalization tables damiao
* fix(motors): imports and signatures
* feat(motors): add motor_type_str + recv_id to motor class and _get_motor_recv_id raises if no motor_obj.recv_id
* chore(motors): remove normalize from base motor class and damiao
* tests(motors): remove bad tests (to be replaced)
* chore(motors): updated import check
* use constant for kp and kd range and check responses in mit_control_batch()
* Add docs on setting up canbus and use damiao motor bus, also add lerobot_setup_can.py and log if there is no response from a write command
* precommit format
* suppress bandit as these are intentional cli commands
* fix setup-can
* add test
* skip test in ci
* nit precommit
* update doc example
* don't import can for tests
---------
Signed-off-by: Steven Palma <imstevenpmwork@ieee.org>
Co-authored-by: Pepijn <138571049+pkooij@users.noreply.github.com>
Co-authored-by: Pepijn <pepijn@huggingface.co>
* Add basic support for PEFT adapter methods
This change adds support for training policies with far fewer parameters
by applying adapter methods such as LoRA to specific parts of the policies,
which in turn allows higher learning rates / batch sizes.
To make this as accessible as possible I thought it useful to provide
defaults for `target_modules` and `modules_to_save`. Currently only SmolVLA
has such defaults, but once we agree this change is useful I will set
out to generate more. While the user can override these settings, they
are expected to only change the peft_method, rank, and init_type
parameters.
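The parameter-count argument above can be illustrated with a back-of-the-envelope sketch (pure Python, no PEFT dependency; the layer dimensions and rank are made up):

```python
def full_linear_params(d_in: int, d_out: int) -> int:
    # A full fine-tune of a d_out x d_in linear layer trains every weight.
    return d_in * d_out


def lora_params(d_in: int, d_out: int, rank: int) -> int:
    # LoRA freezes W and trains a low-rank update B @ A, with
    # A: rank x d_in and B: d_out x rank.
    return rank * d_in + d_out * rank


d_in = d_out = 1024
rank = 8
print(full_linear_params(d_in, d_out))  # 1048576
print(lora_params(d_in, d_out, rank))   # 16384
```

At rank 8 the adapted layer trains roughly 1.6% of the full parameter count, which is what makes higher learning rates / batch sizes plausible.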
* Implement loading of PEFT adapters
Loading a PEFT adapter is currently done by initializing a policy with the default config
and then applying the adapter to the resulting model. This has the obvious drawback
that any configuration done during training is not applied to the adapted model.
Currently the `use_peft` attribute of `PreTrainedConfig` is only set during loading,
to signal to the following code that it has to deal with a PEFT adapter. However,
we could imagine a scenario where this is already set at training time and stored
alongside the adapter.
* Store policy config alongside PEFT checkpoint
Before this change the PEFT-wrapped policy did not save the policy's config
alongside the adapter config / weights, which prevented us from changing the
policy config. Now the policy config is saved in both full training and PEFT
training.
This also makes loading the PEFT policy adapter much easier.
* Add default config for ACT
* Support targets like `all-linear`
* Formatting
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fix failing tests
* Remove PEFT compatibility changes in config
We'll wait for the PEFT release that fixes this for good.
* Remove `use_peft` parameter from training script
Instead we make the PEFT config optional which has the same effect.
* Log adapter config to WandB
* Better documentation for CLI arguments
* Don't unload & merge the PEFT model
This can make things hard when using quantized layers (a user may expect quantized
base layers with unquantized adapters, for example; merging defaults to upcasting
the layers, leading to higher memory use).
* Correct way of identifying when to save config
* Add CLI end-to-end tests
Currently there doesn't seem to be any way to test the CLI commands.
Since this change mostly happens in those, I thought it best to add
a way to test these commands end-to-end.
More integrated commands like `lerobot-record` need patching, but
standalone commands like training seem to work fine.
* Update default targets
Removed ACT since it doesn't make sense to fine-tune ACT without having it pretrained beforehand.
SmolVLA and Pi0/0.5 are much more sensible targets.
* Clean up loading code
- Centralized instantiation of the PEFT wrapper in `make_policy` for inference
(e.g. in `lerobot-record`)
- Training a PEFT policy also sets `cfg.use_peft` so that all inference code loading
the policy can rely on that attribute to identify if PEFT loading is needed
- Modified RTC example to also include PEFT policies. Mostly because this is an example
I'm currently exploring.
* Make sure push_to_hub works
Since PEFT only wraps `push_to_hub` and not `push_model_to_hub`, the reference
to `self` in `policy.push_model_to_hub` is the unwrapped policy which, of course,
doesn't know anything about PEFT.
To make the upload process aware of PEFT, we pass the unwrapped policy down to
`push_model_to_hub` as a kwarg. This is not ideal but I think it is the best way
for now.
* formatting
* Warn when encountering from-scratch-training
* Revamp pretrained model loading
There were quite a few factors that convinced me that the status quo
could load pretrained models from the PEFT adapter config, but
in fact that didn't work.
This commit fixes the following things:
- policies wrapped in PEFT now have a `name_or_path` attribute
containing the name or path of the pretrained model we're fine-tuning
- we further assume that SmolVLA without `pretrained_path` and
`load_vlm_weights==False` must be a user-side error
- we assume that using PEFT on from-scratch policies must be
a user-side error
* Make it possible to unset policy features
This is necessary to train pre-trained policies on new datasets so that the
features are inferred from the new dataset and not from the pretrained
policy.
* Use correct loading for PEFT in RTC example
* Make it possible to use PeftModels in eval
* Add test checking that PEFT actually reduces params
* Adapt state/action projections instead of full-finetuning
There doesn't seem to be a benefit to fully fine-tune these layers
over just adapting them, so we do that instead.
* Disallow PEFT training on non-pretrained policies
At first I thought it would make sense to have this feature
in case you want to fine-tune a pre-trained section but in the
end it makes more trouble than it's worth.
It's still possible to allow this in the future when a concrete
need arises.
* Add basic documentation
* Formatting
* Add peft as extra dependency, mark tests
Fast tests currently fail because of the missing dependency.
* Fix pre-commit issues
* Add wallx <> peft conflict for uv
* Exclude peft from pi install for now
---------
Co-authored-by: nemo <git@ningu.net>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Pepijn <138571049+pkooij@users.noreply.github.com>
* pi fixes for dependencies
* add wallx sarm conflict
* also add conflicts for pi
* fix(ci): use --extra all instead of --all-extras + --no-extra
---------
Co-authored-by: Steven Palma <steven.palma@huggingface.co>
* support wallx
* fix bugs in flow
* incorporate wallx model into lerobot
* update the policy methods
* reduce to least config and params & pass lerobot basic test
* fixed dtype bugs
* add wallx dependencies
* update
* remove flash-attn requirement && fix bug in inference and fast mode
* fix bug for inference
* add some small modifications
* fix pre-commit errors
* remove lerobot[wallx]
* fix ci
* fix precommit issues
* fix: exclude wallx extra properly in CI workflows
* fix: add uv conflicts for wallx transformers version
* fix: peft test import
* pre-commit
* only export WallXConfig from wall_x package to avoid peft import in CI
* remove torch dep
* precommit
* add import
---------
Co-authored-by: vincentchen <chenlufang@x2robot.com>
Co-authored-by: Geoffrey19 <sympathischmann35@gmail.com>
Co-authored-by: Pepijn <138571049+pkooij@users.noreply.github.com>
Co-authored-by: Pepijn <pepijn@huggingface.co>
* fix(optim): enable and resolve mypy type errors
Resolves #1729
build(deps): add mypy as dependency and update pre-commit hook
* change build's type annotation
* add initial modeling
* make rewind pretrained policy
* add annotation
* small fix
* add sarm
* subtasks
* fix spawn
* fix rewind discrepancies
* Add script to generate embedding for dataset (#2138)
* Add generate and validate script
* fix precommit
* Improve generate embeddings function by using dataset tools (#2206)
---------
Co-authored-by: Michel Aractingi <michel.aractingi@huggingface.co>
* cleanup
* change order train log
* print batch size
* update sarm processor
* add reward output
* change expected features
* add image validation
* change validation
* get state input from dataset stats
* raise if no state key is found
* pass stats
* cleanup and refactor
* add episode index to complementary data
* add subtask init and detection
* revert lerobot_train changes
* pass dataset metadata to policy
* change loading subtasks
* add small logging
* fix progress conversion and adding initial frame
* use large offset for initial frame (ugly)
* Remove rewind, use clip tokenizer
* add tests, implement formula 1,2 correctly and cleanup
* use task from dataset, cleanup visualizer
* simplify
* simplify and cleanup code and move compute_temporal_proportions to utils
* fix normalization in visualization
* Fix visualization and change prompt
* fix formatting
* add visualize subtask annotations
* use qwen thinking
* try different prompt
* format
* update prompt
* higher temp, long output
* different settings
* use instruct
* show full resp
* split message
* Temp: increase tolerance dataset
* Fix RA-BC (#2572)
* Add next observation loading for RA-BC progress deltas
* Compute weights based on temporal progress deltas instead of static rewards
* Add hard-masking for negative progress deltas in weight computation
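A minimal sketch of the weighting rule these bullets describe — weight each sample by its temporal progress delta and hard-mask negative deltas — assuming a simple per-sample formulation (the real RA-BC computation lives in the policy code and may differ):

```python
def rabc_weights(progress: list[float], next_progress: list[float]) -> list[float]:
    # Weight = temporal progress delta between consecutive observations;
    # hard-mask (zero out) samples whose progress goes backwards.
    return [max(n - p, 0.0) for p, n in zip(progress, next_progress)]


# Sample 2 regresses (0.5 -> 0.25), so its weight is masked to zero.
print(rabc_weights([0.25, 0.5, 0.75], [0.5, 0.25, 1.0]))  # [0.25, 0.0, 0.25]
```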
* Feat/add dual head (#2582)
* Add dual dense sparse head and annotation
* Add docs
* add dual to procesor
* cleanup
* change sampling in visualize and cleanup
* remove validation
* remove compile
* Feat/test uniform (#2587)
* test uniform
* add different string for misaligned
* Fix rewind and add tests
* uncomment text implementation
* run precommit
* Add head mode for ra-bc
* fix visualization of single task
* add
* return per sample loss
* Fix RA_BC (#2602)
* update rabc implementation
* compute rabc beforehand
* fix import
* add only progress calculation
* use precomputed progress
* multi gpu processing
* import
* fix dataset meta data extraction
* add logging
* logging
* log
* progress per episode
* split differently
* move clip to gpu
* pre decode frames for an episode
* fix cuda initialization
* fix import
* multi processing
* rename
* fix import
* fix
* fix rabc
* use last known progress if oob
* use last known progress if oob
* add misalignment loss with random embeddings
* discard previous changes
* add selection of models to docs for ra_bc
* add transformers dep
* extend tolerance
* initial commit with new codebase
* add tests
* fix
* remove temporal sampler
* drop last frame for sampler
* use original ref
* some fixes
* fix visualization
* remove smoothing and fix order subtasks
* add stride rabc computation
* add push to hub
* add explanation
* add kappa explanation
* better rabc logging
* feedback pr
* remove dataset tolerance
* revert dataset tool
* revert dataset changes
* add credit
* run precommit
* change path for generate ra_bc
* fix type
* include sarm in all in pyproject
* fix precommit
* lazy import matplotlib
* lazy import qwen
* remove rich console
* skip if transformers is not installed?
* run only when we have faker
* place transformer lazy loading
* Don't test if low transformer version
* fix
* increase transformer
* increase as 4.57.0 is yanked
* remove pi from all
* go back
---------
Co-authored-by: Michel Aractingi <michel.aractingi@huggingface.co>
Co-authored-by: s1lent4gnt <kmeftah.khalil@gmail.com>
* upload
* feat(omx): simplify motor initialization and remove default calibration files
* feat(omx): read motor positions without normalization for improved accuracy
* update calibration method for return factory value
Signed-off-by: Junha Cha <ckwnsgk1@gachon.ac.kr>
* change the drive mode
* refactor: clean up code by removing unnecessary blank lines in omx_follower and omx_leader modules
* feat(omx): update calibration method to set drive modes for motors
* feat(pyproject): add 'ROBOTIS' to extend-ignore-identifiers-re list
* feat(omx): enhance calibration method to write default drive modes to motors
* Update src/lerobot/robots/omx_follower/__init__.py
Add information about the robot
Co-authored-by: Steven Palma <imstevenpmwork@ieee.org>
Signed-off-by: Woojin Wie <dnldnwls1123@gmail.com>
---------
Signed-off-by: Junha Cha <ckwnsgk1@gachon.ac.kr>
Signed-off-by: Woojin Wie <dnldnwls1123@gmail.com>
Co-authored-by: Junha02 <chajunha2023@naver.com>
Co-authored-by: Junha Cha <ckwnsgk1@gachon.ac.kr>
Co-authored-by: Steven Palma <imstevenpmwork@ieee.org>