# Adding a New Benchmark

This guide walks you through adding a new simulation benchmark to LeRobot. Follow the steps in order and use the existing benchmarks as templates.

A benchmark in LeRobot is a set of [Gymnasium](https://gymnasium.farama.org/) environments that wrap a third-party simulator (like LIBERO or Meta-World) behind a standard `gym.Env` interface. The `lerobot-eval` CLI then runs evaluation uniformly across all benchmarks.

## Existing benchmarks at a glance

Before diving in, here is what is already integrated:

| Benchmark      | Env file            | Config class       | Tasks               | Action dim   | Processor                    |
| -------------- | ------------------- | ------------------ | ------------------- | ------------ | ---------------------------- |
| LIBERO         | `envs/libero.py`    | `LiberoEnv`        | 130 across 5 suites | 7            | `LiberoProcessorStep`        |
| Meta-World     | `envs/metaworld.py` | `MetaworldEnv`     | 50 (MT50)           | 4            | None                         |
| IsaacLab Arena | Hub-hosted          | `IsaaclabArenaEnv` | Configurable        | Configurable | `IsaaclabArenaProcessorStep` |

Use `src/lerobot/envs/libero.py` and `src/lerobot/envs/metaworld.py` as reference implementations.

## How it all fits together

### Data flow

During evaluation, data moves through four stages:

```
1. gym.Env ──→ raw observations (numpy dicts)

2. Preprocessing ──→ standard LeRobot keys + task description
   (preprocess_observation in envs/utils.py, env.call("task_description"))

3. Processors ──→ env-specific then policy-specific transforms
   (env_preprocessor, policy_preprocessor)

4. Policy ──→ select_action() ──→ action tensor
   then reverse: policy_postprocessor → env_postprocessor → numpy action → env.step()
```

Most benchmarks only need to care about stage 1 (producing observations in the right format) and optionally stage 3 (if env-specific transforms are needed).

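To make the flow concrete, here is a simplified sketch of one control step through the four stages. This is illustrative pseudocode, not the actual `rollout()` implementation: the processor arguments are placeholders for the pipelines returned by `get_env_processors()` and the policy's own pre/post-processors, and the `"task"` observation key is an assumption.

```python
import torch

from lerobot.envs.utils import preprocess_observation  # stage 2 helper (envs/utils.py)


def one_control_step(env, policy, env_pre, policy_pre, policy_post, env_post, raw_obs):
    """One pass through the four stages for a vectorized env (simplified)."""
    obs = preprocess_observation(raw_obs)            # stage 2: standard LeRobot keys
    obs["task"] = env.call("task_description")       # language instruction (key name assumed)
    obs = policy_pre(env_pre(obs))                   # stage 3: env- then policy-specific
    with torch.no_grad():
        action = policy.select_action(obs)           # stage 4: policy inference
    action = env_post(policy_post(action))           # reverse transforms
    return env.step(action.cpu().numpy())            # back to numpy for the simulator
```
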
### Environment structure

`make_env()` returns a nested dict of vectorized environments:

```python
dict[str, dict[int, gym.vector.VectorEnv]]
#    ^suite    ^task_id
```

A single-task env (e.g. PushT) looks like `{"pusht": {0: vec_env}}`.
A multi-task benchmark (e.g. LIBERO) looks like `{"libero_spatial": {0: vec0, 1: vec1, ...}, ...}`.

### How evaluation runs

All benchmarks are evaluated the same way by `lerobot-eval`:

1. `make_env()` builds the nested `{suite: {task_id: VectorEnv}}` dict.
2. `eval_policy_all()` iterates over every suite and task.
3. For each task, it runs `n_episodes` rollouts via `rollout()`.
4. Results are aggregated hierarchically: episode, task, suite, overall.
5. Metrics include `pc_success` (success rate), `avg_sum_reward`, and `avg_max_reward`.

The critical piece: your env must return `info["is_success"]` on every `step()` call. This is how the eval loop knows whether a task was completed.

## What your environment must provide

LeRobot does not enforce a strict observation schema. Instead it relies on a set of conventions that all benchmarks follow.

### Env attributes

Your `gym.Env` must set these attributes:

| Attribute            | Type  | Why                                                  |
| -------------------- | ----- | ---------------------------------------------------- |
| `_max_episode_steps` | `int` | `rollout()` uses this to cap episode length          |
| `task_description`   | `str` | Passed to VLA policies as a language instruction     |
| `task`               | `str` | Fallback identifier if `task_description` is not set |

### Success reporting

Your `step()` and `reset()` must include `"is_success"` in the `info` dict:

```python
info = {"is_success": True}  # or False
return observation, reward, terminated, truncated, info
```

### Observations

The simplest approach is to map your simulator's outputs to the standard keys that `preprocess_observation()` already understands. Do this inside your `gym.Env` (e.g. in a `_format_raw_obs()` helper):

| Your env should output    | LeRobot maps it to         | What it is                            |
| ------------------------- | -------------------------- | ------------------------------------- |
| `"pixels"` (single array) | `observation.image`        | Single camera image, HWC uint8        |
| `"pixels"` (dict)         | `observation.images.<cam>` | Multiple cameras, each HWC uint8      |
| `"agent_pos"`             | `observation.state`        | Proprioceptive state vector           |
| `"environment_state"`     | `observation.env_state`    | Full environment state (e.g. PushT)   |
| `"robot_state"`           | `observation.robot_state`  | Nested robot state dict (e.g. LIBERO) |

If your simulator uses different key names, you have two options:

1. **Recommended:** Rename them to the standard keys inside your `gym.Env` wrapper (as sketched below).
2. **Alternative:** Write an env processor to transform observations after `preprocess_observation()` runs (see step 3 below).

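For the recommended option, a renaming helper might look like the following. The simulator keys (`"front_cam_rgb"`, `"wrist_cam_rgb"`, `"proprio"`) are hypothetical; substitute whatever your simulator actually returns:

```python
import numpy as np


def _format_raw_obs(raw_obs: dict) -> dict:
    """Map hypothetical simulator keys to the standard LeRobot keys above."""
    return {
        "pixels": {  # dict of cameras → observation.images.<cam>
            "image": raw_obs["front_cam_rgb"],        # HWC uint8
            "wrist_image": raw_obs["wrist_cam_rgb"],  # HWC uint8
        },
        "agent_pos": raw_obs["proprio"].astype(np.float32),  # → observation.state
    }
```
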
### Actions

Actions are continuous numpy arrays in a `gym.spaces.Box`. The dimensionality depends on your benchmark (7 for LIBERO, 4 for Meta-World, etc.). Policies adapt to different action dimensions through their `input_features` / `output_features` config.

### Feature declaration

Each `EnvConfig` subclass declares two dicts that tell the policy what to expect:

- `features` — maps feature names to `PolicyFeature(type, shape)` (e.g. action dim, image shape).
- `features_map` — maps raw observation keys to LeRobot convention keys (e.g. `"agent_pos"` to `"observation.state"`); see the illustration below.

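As a concrete illustration, a single-camera benchmark with a 7-dim action space might declare the following. The import path and the state/image shapes are assumptions for this sketch; check the existing configs for the real values:

```python
from lerobot.configs.types import FeatureType, PolicyFeature  # path used by existing configs

features = {
    "action": PolicyFeature(type=FeatureType.ACTION, shape=(7,)),    # e.g. 7-DoF like LIBERO
    "agent_pos": PolicyFeature(type=FeatureType.STATE, shape=(8,)),  # hypothetical state dim
    "pixels": PolicyFeature(type=FeatureType.VISUAL, shape=(256, 256, 3)),
}
features_map = {
    "action": "action",                # the real configs use the ACTION / OBS_STATE /
    "agent_pos": "observation.state",  # OBS_IMAGE constants for these literals
    "pixels": "observation.image",     # (see step 2 below)
}
```
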
## Step by step

<Tip>
At minimum, you need two files: a **gym.Env wrapper** and an **EnvConfig subclass** with a `create_envs()` override. Everything else is optional or documentation. No changes to `factory.py` are needed.
</Tip>

### Checklist

| File                                      | Required | Why                                                          |
| ----------------------------------------- | -------- | ------------------------------------------------------------ |
| `src/lerobot/envs/<benchmark>.py`          | Yes      | Wraps the simulator as a standard gym.Env                     |
| `src/lerobot/envs/configs.py`              | Yes      | Registers your benchmark and its `create_envs()` for the CLI  |
| `src/lerobot/processor/env_processor.py`   | Optional | Custom observation/action transforms                          |
| `src/lerobot/envs/utils.py`                | Optional | Only if you need new raw observation keys                     |
| `pyproject.toml`                           | Yes      | Declares benchmark-specific dependencies                      |
| `docs/source/<benchmark>.mdx`              | Yes      | User-facing documentation page                                |
| `docs/source/_toctree.yml`                 | Yes      | Adds your page to the docs sidebar                            |

### 1. The gym.Env wrapper (`src/lerobot/envs/<benchmark>.py`)

Create a `gym.Env` subclass that wraps the third-party simulator:

```python
class MyBenchmarkEnv(gym.Env):
    metadata = {"render_modes": ["rgb_array"], "render_fps": <fps>}

    def __init__(self, task_suite, task_id, ...):
        super().__init__()
        self.task = <task_name_string>
        self.task_description = <natural_language_instruction>
        self._max_episode_steps = <max_steps>
        self.observation_space = spaces.Dict({...})
        self.action_space = spaces.Box(low=..., high=..., shape=(...,), dtype=np.float32)

    def reset(self, seed=None, **kwargs):
        ...  # return (observation, info) — info must contain {"is_success": False}

    def step(self, action: np.ndarray):
        ...  # return (obs, reward, terminated, truncated, info) — info must contain {"is_success": <bool>}

    def render(self):
        ...  # return RGB image as numpy array

    def close(self):
        ...
```

**GPU-based simulators (e.g. MuJoCo with EGL rendering):** If your simulator allocates GPU/EGL contexts during `__init__`, defer that allocation to an `_ensure_env()` helper called on first `reset()`/`step()`. This avoids inheriting stale GPU handles when `AsyncVectorEnv` spawns worker processes. See `LiberoEnv._ensure_env()` for the pattern.

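A minimal sketch of the lazy-init pattern, assuming a hypothetical `SomeSimulatorEnv` that allocates an EGL context in its constructor:

```python
class MyBenchmarkEnv(gym.Env):
    def __init__(self, task_suite, task_id):
        super().__init__()
        self._env = None  # no GPU/EGL allocation here — AsyncVectorEnv forks after __init__

    def _ensure_env(self):
        # First called inside the worker process, so each worker creates
        # its own fresh GPU context after the fork.
        if self._env is None:
            self._env = SomeSimulatorEnv(...)  # hypothetical GPU-backed simulator

    def reset(self, seed=None, **kwargs):
        self._ensure_env()
        ...

    def step(self, action):
        self._ensure_env()
        ...
```
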
Also provide a factory function that returns the nested dict structure:

```python
def create_mybenchmark_envs(
    task: str,
    n_envs: int,
    gym_kwargs: dict | None = None,
    env_cls: type | None = None,
) -> dict[str, dict[int, Any]]:
    """Create {suite_name: {task_id: VectorEnv}} for MyBenchmark."""
    ...
```

See `create_libero_envs()` (multi-suite, multi-task) and `create_metaworld_envs()` (difficulty-grouped tasks) for reference.

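As a sketch of what the body could do, here is a hypothetical implementation that builds one vectorized env per task. The two-element `task_ids` list is made up, and real factories derive the suite and task list from the benchmark's own metadata:

```python
from typing import Any

import gymnasium as gym


def create_mybenchmark_envs(
    task: str,
    n_envs: int,
    gym_kwargs: dict | None = None,
    env_cls: type | None = None,
) -> dict[str, dict[int, Any]]:
    """Create {suite_name: {task_id: VectorEnv}} (simplified sketch)."""
    gym_kwargs = gym_kwargs or {}
    vec_cls = env_cls or gym.vector.SyncVectorEnv
    task_ids = [0, 1]  # hypothetical — derive from your suite definition
    return {
        task: {
            tid: vec_cls(
                [
                    lambda tid=tid: MyBenchmarkEnv(task_suite=task, task_id=tid, **gym_kwargs)
                    for _ in range(n_envs)
                ]
            )
            for tid in task_ids
        }
    }
```
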
### 2. The config (`src/lerobot/envs/configs.py`)

Register a config dataclass so users can select your benchmark with `--env.type=<name>`. Each config owns its environment creation and processor logic via two methods:

- **`create_envs(n_envs, use_async_envs)`** — Returns `{suite: {task_id: VectorEnv}}`. The base class default uses `gym.make()` for single-task envs. Multi-task benchmarks override this.
- **`get_env_processors()`** — Returns `(preprocessor, postprocessor)`. The base class default returns identity (no-op) pipelines. Override if your benchmark needs observation/action transforms.

```python
@EnvConfig.register_subclass("<benchmark_name>")
@dataclass
class MyBenchmarkEnvConfig(EnvConfig):
    task: str = "<default_task>"
    fps: int = <fps>
    obs_type: str = "pixels_agent_pos"

    features: dict[str, PolicyFeature] = field(default_factory=lambda: {
        ACTION: PolicyFeature(type=FeatureType.ACTION, shape=(<action_dim>,)),
    })
    features_map: dict[str, str] = field(default_factory=lambda: {
        ACTION: ACTION,
        "agent_pos": OBS_STATE,
        "pixels": OBS_IMAGE,
    })

    def __post_init__(self):
        ...  # populate features based on obs_type

    @property
    def gym_kwargs(self) -> dict:
        return {"obs_type": self.obs_type, "render_mode": self.render_mode}

    def create_envs(self, n_envs: int, use_async_envs: bool = False):
        """Override for multi-task benchmarks or custom env creation."""
        from lerobot.envs.<benchmark> import create_<benchmark>_envs

        return create_<benchmark>_envs(task=self.task, n_envs=n_envs, ...)

    def get_env_processors(self):
        """Override if your benchmark needs observation/action transforms."""
        from lerobot.processor.pipeline import PolicyProcessorPipeline
        from lerobot.processor.env_processor import MyBenchmarkProcessorStep

        return (
            PolicyProcessorPipeline(steps=[MyBenchmarkProcessorStep()]),
            PolicyProcessorPipeline(steps=[]),
        )
```

Key points:

- The `register_subclass` name is what users pass on the CLI (`--env.type=<name>`).
- `features` tells the policy what the environment produces.
- `features_map` maps raw observation keys to LeRobot convention keys.
- **No changes to `factory.py` needed** — the factory delegates to `cfg.create_envs()` and `cfg.get_env_processors()` automatically.

### 3. Env processor (optional — `src/lerobot/processor/env_processor.py`)

Only needed if your benchmark requires observation transforms beyond what `preprocess_observation()` handles (e.g. image flipping, coordinate conversion). Define the processor step here and return it from `get_env_processors()` in your config (see step 2):

```python
@dataclass
@ProcessorStepRegistry.register(name="<benchmark>_processor")
class MyBenchmarkProcessorStep(ObservationProcessorStep):
    def _process_observation(self, observation):
        processed = observation.copy()
        # your transforms here
        return processed

    def transform_features(self, features):
        return features  # update if shapes change

    def observation(self, observation):
        return self._process_observation(observation)
```

See `LiberoProcessorStep` for a full example (image rotation, quaternion-to-axis-angle conversion).

### 4. Dependencies (`pyproject.toml`)

Add a new optional-dependency group:

```toml
mybenchmark = ["my-benchmark-pkg==1.2.3", "lerobot[scipy-dep]"]
```

Pinning rules:

- **Always pin** benchmark packages to exact versions for reproducibility (e.g. `metaworld==3.0.0`).
- **Add platform markers** when needed (e.g. `; sys_platform == 'linux'`); a combined example follows this list.
- **Pin fragile transitive deps** if known (e.g. `gymnasium==1.1.0` for Meta-World).
- **Document constraints** in your benchmark doc page.

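Putting those rules together, a fuller dependency group could look like this (all package names here are illustrative, not real requirements of any benchmark):

```toml
[project.optional-dependencies]
mybenchmark = [
    "my-benchmark-pkg==1.2.3",                      # hypothetical simulator, pinned exactly
    "gymnasium==1.1.0",                             # example: pin a fragile transitive dep
    "egl-probe==1.0.2 ; sys_platform == 'linux'",   # example: platform marker
]
```
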
Users install with:

```bash
pip install -e ".[mybenchmark]"
```

### 5. Documentation (`docs/source/<benchmark>.mdx`)

Write a user-facing page following the template in the "Writing a benchmark doc page" section below. See `docs/source/libero.mdx` and `docs/source/metaworld.mdx` for full examples.

### 6. Table of contents (`docs/source/_toctree.yml`)

Add your benchmark to the "Benchmarks" section:

```yaml
- sections:
    - local: libero
      title: LIBERO
    - local: metaworld
      title: Meta-World
    - local: envhub_isaaclab_arena
      title: NVIDIA IsaacLab Arena Environments
    - local: <your_benchmark>
      title: <Your Benchmark Name>
  title: "Benchmarks"
```

## Verifying your integration

After completing the steps above, confirm that everything works:

1. **Install** — `pip install -e ".[mybenchmark]"` and verify the dependency group installs cleanly.
2. **Smoke test env creation** — call `make_env()` with your config in Python (see the sketch after this list), check that the returned dict has the expected `{suite: {task_id: VectorEnv}}` shape, and that `reset()` returns observations with the right keys.
3. **Run a full eval** — `lerobot-eval --env.type=<name> --env.task=<task> --eval.n_episodes=1 --policy.path=<any_compatible_policy>` to exercise the full pipeline end-to-end. (`batch_size` defaults to auto-tuning based on CPU cores; pass `--eval.batch_size=1` to force a single environment.)
4. **Check success detection** — verify that `info["is_success"]` flips to `True` when the task is actually completed. This is what the eval loop uses to compute success rates.

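For step 2, a minimal smoke-test script might look like the following. The `make_env` import path and call signature are inferred from this guide and may need adjusting to your checkout; `MyBenchmarkEnvConfig` is the config you registered in step 2:

```python
from lerobot.envs.factory import make_env  # import path assumed — see src/lerobot/envs
from lerobot.envs.configs import MyBenchmarkEnvConfig  # hypothetical config from step 2

cfg = MyBenchmarkEnvConfig(task="<default_task>")
envs = make_env(cfg, n_envs=2)  # sync by default; pass use_async_envs=True to test async

for suite, tasks in envs.items():
    for task_id, vec_env in tasks.items():
        obs, info = vec_env.reset(seed=0)
        print(suite, task_id, sorted(obs))  # expect the standard observation keys
        vec_env.close()
```
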
## Writing a benchmark doc page

Each benchmark `.mdx` page should include:

- **Title and description** — 1-2 paragraphs on what the benchmark tests and why it matters.
- **Links** — paper, GitHub repo, project website (if available).
- **Overview image or GIF.**
- **Available tasks** — table of task suites with counts and brief descriptions.
- **Installation** — `pip install -e ".[<benchmark>]"` plus any extra steps (env vars, system packages).
- **Evaluation** — recommended `lerobot-eval` command with `n_episodes` for reproducible results. `batch_size` defaults to auto; only specify it if needed. Include single-task and multi-task examples if applicable.
- **Policy inputs and outputs** — observation keys with shapes, action space description.
- **Recommended evaluation episodes** — how many episodes per task is standard.
- **Training** — example `lerobot-train` command.
- **Reproducing published results** — link to pretrained model, eval command, results table (if available).

See `docs/source/libero.mdx` and `docs/source/metaworld.mdx` for complete examples.