- Remove broken Triton issue link from Dockerfile.benchmark.libero
- Add a module-level _safe_int helper to guard n_episodes against NaN
- Move _safe_float to module level alongside _safe_int (both sketched below)
- Add # zizmor: ignore[unpinned-uses] to all upload-artifact@v4 steps
- Add if: env.HF_USER_TOKEN != '' to Libero smoke eval for fork PRs
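A minimal sketch of what the two helpers might look like; the commit only says they are module-level, stdlib-only guards, so the names come from the message and the fallback values are assumptions:

    import math

    def _safe_float(value, default=0.0):
        # Coerce to float, falling back when the value is missing, non-numeric, or NaN.
        try:
            result = float(value)
        except (TypeError, ValueError):
            return default
        return default if math.isnan(result) else result

    def _safe_int(value, default=0):
        # Same guard for integer fields such as n_episodes.
        return int(_safe_float(value, default))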
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
These use_async_envs default changes belong to the async-vector-env
PR (#3274), not this CI PR. Restore to match origin/main.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
uv sync --locked validates the entire lockfile across all extras.
Since robomme depends on mani-skill which pins numpy<2.0, and the
base project requires numpy>=2.0, the full lockfile is unsatisfiable.
Switch to uv pip install -e ".[libero,smolvla]" which only resolves
the requested extras for the current Python version and platform,
avoiding the cross-extra numpy conflict entirely.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Benchmark PRs (robomme, libero-plus, robocerebra, robotwin) target
feat/benchmark-ci, not main. Without this, the workflow never runs
on those PRs.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Security:
- Remove "Login to Hugging Face" step — it was a no-op (ephemeral
--rm container) that exposed the HF token via CLI argument in
docker inspect / /proc/*/cmdline. The eval step already
re-authenticates via env var.
Functional:
- Remove feat/benchmark-ci from push trigger branches (won't exist
post-merge).
Dockerfiles:
- Pin uv to 0.8.0 (was unpinned, fetching whatever latest ships).
- Add comment explaining the chmod +x ptxas workaround (Triton
packaging bug — ships ptxas without execute bit).
Scripts:
- parse_eval_metrics.py: add note that it runs on bare host and must
stay stdlib-only.
- parse_eval_metrics.py: add NaN guard for avg_sum_reward and eval_s
(was only guarding pc_success).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* fix(one shot load): add metadata loading when reading from a dataset after writing
* refactor(one shot load): move metadata reload to ensure_readable() on LeRobotDatasetMetadata
Move the metadata reload from DatasetReader.load_and_activate() to a new
public ensure_readable() method on LeRobotDatasetMetadata, called from
LeRobotDataset._ensure_reader(). This places lifecycle management in the
right layer: metadata owns its readiness check, the dataset orchestrates
the write-to-read transition, and the reader stays clean.
Also adds a regression test using delta_timestamps to exercise the
meta.episodes access path in the create -> write -> finalize -> read flow.
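A rough sketch of the described wiring, with illustrative method bodies and attribute names (only ensure_readable(), _ensure_reader(), and load_and_activate() are named by the commit):

    class LeRobotDatasetMetadata:
        def ensure_readable(self):
            # Metadata owns its readiness check: reload what was written
            # during recording (episodes, tasks) before any read access.
            if self.episodes is None:
                self.episodes = self._load_episodes()  # illustrative helper

    class LeRobotDataset:
        def _ensure_reader(self):
            # The dataset orchestrates the write-to-read transition; the
            # reader itself stays free of metadata lifecycle concerns.
            self.meta.ensure_readable()
            self._reader.load_and_activate()
            return self._reader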
Co-authored-by: Steven Palma <imstevenpmwork@users.noreply.github.com>
---------
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Steven Palma <imstevenpmwork@users.noreply.github.com>
The huggingface org restricts GHCR package creation via GITHUB_TOKEN,
causing 403 on cache export. Remove all registry caching and GHCR
login. The Dockerfile layer split (deps vs source) still helps when
the runner has a warm Docker daemon.
Also fix the metaworld job which had a stale conditional Docker Hub
login and was missing the GHCR login entirely.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Docker Hub CI token can't push to new repos. GHCR works out of the
box — GITHUB_TOKEN has automatic packages:write for the repo owner.
- Add GHCR login step (github.actor + GITHUB_TOKEN)
- Switch cache refs to ghcr.io/huggingface/lerobot/cache-benchmark
- Add packages:write at job level (not workflow, per zizmor)
- Keep Docker Hub login for pulling nvidia/cuda base image
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
GHA cache is capped at 10GB per repo — a single CUDA + PyTorch +
benchmark image is ~8GB so the cache evicts before it's reused.
Switch to type=registry which pushes cache layers to Docker Hub
(huggingface/lerobot-benchmark-cache:{libero,metaworld}). No size
limit, layers persist until explicitly deleted, and shared across
all runners and branches.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Drop the conditional guard — other workflows (docker_publish,
full_tests) call docker/login-action unconditionally.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Step-level 'if' cannot reference 'secrets' directly. Expose the
secret via an env var and check that instead.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Anonymous pulls from Docker Hub are rate-limited to 100/6h, which
fails when multiple benchmark jobs pull nvidia/cuda in parallel.
Add docker/login-action step (conditional on DOCKERHUB_USERNAME var)
to authenticate and get 200 pulls/6h.
Setup: add DOCKERHUB_USERNAME as a repository variable and
DOCKERHUB_TOKEN as a repository secret in GitHub Settings.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The dep-install layer (uv sync) now only depends on pyproject.toml,
uv.lock, and a minimal package stub — not the full src/ tree. Source
code changes only rebuild the final COPY layer (seconds, not minutes).
Also switch from type=local cache (lost on ephemeral runners) to
type=gha (persisted in GitHub Actions cache, shared across all runs).
Before: every src/ change → full uv sync rebuild (~8-10 min)
After: src/-only change → cached dep layer, ~30s source copy
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
AsyncVectorEnv spawns new subprocesses that do not inherit the
in-process gym registration created by the test. Pass
use_async_envs=False since this test validates dispatch logic,
not async parallelism.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The task descriptions were never populated in metrics.json because
extract_task_descriptions.py was never invoked. The script exists and
parse_eval_metrics.py already looks for its output — the call was
simply missing from the workflow.
Appends the extraction step to the existing bash -c block (runs inside
the container where libero/metaworld is installed) so task_descriptions.json
is written to the eval-artifacts dir before docker cp copies it out.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add scripts/ci/extract_task_descriptions.py: runs inside the benchmark
Docker container (LIBERO/MetaWorld installed) after lerobot-eval and
writes task_descriptions.json mapping task keys to NL instructions.
LIBERO: uses libero.libero.benchmark to get suite.get_task(i).language (see the sketch below).
MetaWorld: formats the task name as a human-readable label.
- Call extraction at the end of each eval bash -c block (|| true so a failure is never fatal).
- parse_eval_metrics.py reads task_descriptions.json and includes it in
metrics.json so the health dashboard Space can label videos by task.
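The LIBERO branch referenced above, roughly; get_task(i).language comes from the commit, while the suite name, the n_tasks attribute, and the output path are assumptions:

    import json
    from libero.libero import benchmark

    suite = benchmark.get_benchmark_dict()["libero_spatial"]()
    task_descriptions = {
        # Map each task key to its natural-language instruction.
        suite.get_task(i).name: suite.get_task(i).language
        for i in range(suite.n_tasks)
    }
    with open("/tmp/eval-artifacts/task_descriptions.json", "w") as f:
        json.dump(task_descriptions, f, indent=2)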
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Resolves conflict in lerobot_eval.py by taking explicit
(AttributeError, NotImplementedError) catches from main (#3274).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* docs(benchmarks): add benchmark integration guide and standardize benchmark docs
Add a comprehensive guide for adding new benchmarks to LeRobot, and
refactor the existing LIBERO and Meta-World docs to follow the new
standardized template.
Made-with: Cursor
* refactor(envs): move dispatch logic from factory into EnvConfig subclasses
Replace hardcoded if/elif chains in factory.py with create_envs() and
get_env_processors() methods on EnvConfig. New benchmarks now only need
to register a config subclass — no factory.py edits required.
Net -23 lines: factory.py shrinks from ~200 to ~70 lines of logic.
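A hedged sketch of the dispatch pattern; create_envs() and get_env_processors() are the methods named above, everything else (fields, helper names) is illustrative:

    import abc
    from dataclasses import dataclass

    @dataclass
    class EnvConfig(abc.ABC):
        @abc.abstractmethod
        def create_envs(self, n_envs: int, use_async_envs: bool = False):
            """Build the vectorized envs for this benchmark."""

        def get_env_processors(self):
            # Default: no extra observation/action processors.
            return None, None

    @dataclass
    class LiberoEnvSketch(EnvConfig):
        task_suite: str = "libero_spatial"

        def create_envs(self, n_envs, use_async_envs=False):
            return make_libero_vec_envs(self, n_envs, use_async_envs)  # hypothetical helper

    # factory.py collapses to a single delegation, with no if/elif on env type:
    def make_env(cfg: EnvConfig, n_envs: int = 1, use_async_envs: bool = False):
        return cfg.create_envs(n_envs, use_async_envs=use_async_envs)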
Made-with: Cursor
* docs(benchmarks): clean up adding-benchmarks guide for clarity
Rewrite for simpler language, better structure, and easier navigation.
Move quick-reference table to the top, fold eval explanation into
architecture section, condense the doc template to a bulleted outline.
Made-with: Cursor
* fix link
* fix task count
* fix: enable SmolVLA eval on LIBERO with custom camera mappings
- Thread camera_name_mapping from LiberoEnv config through to gym envs
- Sync features_map with camera_name_mapping in LiberoEnv.__post_init__
- Fix render() to use first available camera instead of hardcoded "image"
- Handle non-dict final_info in rollout by falling back to info["is_success"]
- Add use_peft legacy field to SmolVLAConfig for checkpoint compat
- Add defaults to GR00TN15Config init=False fields for transformers 5.3
Made-with: Cursor
* fix: use direct AutoresetMode import for gymnasium compat
Made-with: Cursor
* fix: handle gymnasium < 1.0 without AutoresetMode
Made-with: Cursor
* refactor: revert policy changes, keep env-only camera mapping fixes
- Revert GR00T N1.5 default_factory/default changes (transformers compat)
- Revert SmolVLA use_peft legacy field
- Apply ruff formatting fixes
- camera_name_mapping stays entirely in env/eval layer (no policy changes)
Made-with: Cursor
* Update docs/source/env_processor.mdx
Co-authored-by: Khalil Meftah <khalil.meftah@huggingface.co>
Signed-off-by: Pepijn <138571049+pkooij@users.noreply.github.com>
* feat(envs): lazy env init + AsyncVectorEnv as default for n_envs > 1
LiberoEnv and MetaworldEnv previously allocated GPU resources (EGL context,
OpenGL framebuffer) in __init__, before AsyncVectorEnv's fork(). Worker
processes inherited stale GPU handles, causing EGL_BAD_CONTEXT crashes on
first render.
Fix: defer OffScreenRenderEnv / MT1 construction to _ensure_env(), called on
first reset() or step() inside the worker subprocess. Each worker creates its
own clean context after fork().
Also fixes lerobot_eval.py:170 (add_envs_task TODO): replace with
env.call("task") which works with both SyncVectorEnv and AsyncVectorEnv.
AsyncVectorEnv is now the default for n_envs > 1; n_envs=1 auto-downgrades to
SyncVectorEnv, where async adds overhead without any benefit.
Expected speedup: ~15-20x for LIBERO Spatial with batch_size=50.
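A simplified sketch of the lazy-init pattern; the real classes wrap OffScreenRenderEnv / MT1, and the builder helper here is hypothetical:

    import gymnasium as gym

    class LazyRenderEnv(gym.Env):
        """Defers GPU/EGL allocation until the first call inside the worker."""

        def __init__(self, task):
            self.task = task
            self._env = None  # nothing GPU-related is created in the parent process

        def _ensure_env(self):
            if self._env is None:
                # Runs on first reset()/step(), i.e. after AsyncVectorEnv has
                # forked, so each worker builds its own clean EGL context.
                self._env = build_offscreen_render_env(self.task)  # hypothetical builder
            return self._env

        def reset(self, *, seed=None, options=None):
            return self._ensure_env().reset(seed=seed, options=options)

        def step(self, action):
            return self._ensure_env().step(action)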
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix: close envs between tasks to prevent worker process accumulation
eval_policy_all never closed environments after each task completed,
causing AsyncVectorEnv worker processes to accumulate (N_tasks × n_envs).
This led to OOM, BrokenPipeError and EOFError on multi-task benchmarks.
Also fixes:
- AsyncVectorEnv compat in envs/utils.py (use get_attr/call instead of .envs)
- Tuple task handling in tokenizer_processor and lerobot_eval
- _LazyAsyncVectorEnv for deferred worker spawning in LIBERO
Made-with: Cursor
* fix(eval): use task_description instead of task for language conditioning
env.call("task") returns the LIBERO task name with underscores
(e.g. "pick_up_the_black_bowl_...") instead of the natural language
description ("pick up the black bowl ..."). The VLM tokenizes these
completely differently, causing 0.0 reward across all episodes.
Made-with: Cursor
* docs: update adding_benchmarks for async env changes
- Replace add_envs_task reference with env.call("task_description")
- Update use_async_envs default to True
- Add note about lazy GPU init for AsyncVectorEnv compatibility
Made-with: Cursor
* feat(eval): batch_size=auto + faster env loading
- batch_size=0 (default) auto-tunes based on CPU cores, capped by
n_episodes and 64 (see the sketch below). This removes the need for users
to guess the right value. The old batch_size > n_episodes error is
replaced by silently clamping to n_episodes.
- _LazyAsyncVectorEnv accepts pre-computed spaces so only one temp env
is created per suite (not per task). For libero_spatial (10 tasks)
this avoids 9 redundant LiberoEnv instantiations during env setup.
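The clamping logic from the first bullet, as a rough sketch; the 64 cap and the clamp to n_episodes come from the message, while the os.cpu_count() heuristic and its fallback are assumptions:

    import os

    def resolve_batch_size(batch_size: int, n_episodes: int) -> int:
        # batch_size=0 means "auto": derive from CPU cores, capped at 64.
        if batch_size == 0:
            batch_size = min(os.cpu_count() or 1, 64)
        # Never run more parallel envs than episodes; clamp instead of erroring.
        return min(batch_size, n_episodes)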
Made-with: Cursor
* docs: add evaluation guide and update benchmarks doc
- New docs/source/evaluation.mdx covering lerobot-eval usage, batch_size
auto-tuning, AsyncVectorEnv performance, tuning tips, output format,
multi-task evaluation, and programmatic usage.
- Add evaluation page to _toctree.yml under Benchmarks section.
- Update adding_benchmarks.mdx to reference batch_size auto default and
link to the evaluation guide.
Made-with: Cursor
* docs(evaluation): remove benchmark table, rename section header
Made-with: Cursor
* perf(eval): shared memory, observation passthrough, task prefetch
- AsyncVectorEnv now uses shared_memory=True for zero-copy observation transfer
- LiberoEnvConfig.gym_kwargs passes observation_height/width to the env
- eval_policy_all prefetches next task's workers while current task runs
Made-with: Cursor
* style: ruff format
Made-with: Cursor
* chore: revert env_processor.mdx changes (not part of this PR)
Made-with: Cursor
* ci(benchmarks): add isolated integration tests for libero and metaworld
Each benchmark gets its own Docker image (lerobot[libero] / lerobot[metaworld]
only) so incompatible dep trees cannot collide. A 1-episode smoke eval runs
per benchmark on GPU runners.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* ci(benchmarks): pin action hashes and use uv sync --locked
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* ci(benchmarks): trigger only on envs/ or lerobot_eval.py changes
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix(ci): set LIBERO_DATA_FOLDER to bypass interactive stdin prompt
libero/__init__.py calls input() to ask about a custom dataset path,
which raises EOFError when stdin is closed inside Docker. Setting
LIBERO_DATA_FOLDER skips the prompt entirely.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* docs(benchmarks): add CI smoke test step to adding_benchmarks guide
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix(ci): pre-create libero config in Dockerfile to bypass stdin prompt
libero/__init__.py calls input() when ~/.libero/config.yaml is missing.
We write the config at image build time (without importing libero) so
the prompt never fires at runtime. Also trigger CI on pyproject.toml changes.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix(ci): use shell to create libero config instead of multiline python -c
The multiline RUN python -c "..." was being parsed as Dockerfile
instructions. Use printf to write ~/.libero/config.yaml directly.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix(ci): point libero config to bundled package init_files
The config was pointing to /tmp/libero_init which doesn't exist.
Use importlib.util.find_spec to locate the hf-libero package directory
and write paths to the actual bundled bddl_files/init_files/assets.
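Roughly how the build-time Python resolves the bundled paths; the module name passed to find_spec and the config keys are assumptions, only bddl_files/init_files/assets are named above:

    import importlib.util
    from pathlib import Path

    spec = importlib.util.find_spec("libero.libero")  # assumed module name
    pkg_dir = Path(spec.submodule_search_locations[0])

    libero_config = {
        "bddl_files": str(pkg_dir / "bddl_files"),
        "init_states": str(pkg_dir / "init_files"),
        "assets": str(pkg_dir / "assets"),
    }
    # Serialized to ~/.libero/config.yaml at image build time so libero's
    # interactive input() prompt never fires when the container runs.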
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix(ci): add smolvla extra to benchmark Dockerfiles
num2words (required by SmolVLM processor) is declared in lerobot[smolvla],
not lerobot[libero/metaworld]. Install both extras together.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix(eval): render_frame covers _LazyAsyncVectorEnv
isinstance(env, AsyncVectorEnv) silently skipped _LazyAsyncVectorEnv,
causing video rendering to produce no frames on the default async path.
Switch to hasattr(env, "call") so any async-compatible env (including
_LazyAsyncVectorEnv) hits the call("render") branch.
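The duck-typed dispatch in one place; render_frame here is a stand-in for the helper the commit touches, and the sync fallback branch is an assumption:

    def render_frame(env):
        # Any vector env exposing call() (AsyncVectorEnv, _LazyAsyncVectorEnv)
        # takes the async path; plain sync envs fall back to direct render().
        if hasattr(env, "call"):
            return env.call("render")
        return env.render()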
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* refactor(envs): remove unused _get_sub_env_attr helper
_get_sub_env_attr was defined but never called anywhere in the codebase.
_sub_env_has_attr (its sibling) is kept — it is actively used in utils.py.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* chore: apply prettier formatting to docs
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* docs(env_processor): remove deprecated add_envs_task from pipeline example
add_envs_task is replaced by env.call("task_description") in this PR.
Remove it from the pipeline walkthrough and renumber the steps (8→7).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* refactor(envs): remove __del__ from _LazyAsyncVectorEnv
__del__ is unreliable as a cleanup mechanism. close() is already called
explicitly in the eval loop's finally block, so the finalizer is redundant.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix(eval): prefetch next task's workers after close to avoid GPU memory overlap
Previously, next task's AsyncVectorEnv workers were spawned while the
current task was still running, causing both tasks' GPU contexts to coexist.
Moving the prefetch start into the finally block (after env.close()) ensures
workers for task N+1 only spin up once task N has released GPU memory.
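A schematic of the new ordering; eval_policy_all is the real loop, the helper names here are illustrative:

    def eval_policy_all_sketch(tasks, make_task_envs, evaluate):
        # Prefetch for task i+1 starts only after task i's envs are closed,
        # so two tasks' GPU contexts never coexist.
        next_envs = make_task_envs(tasks[0])
        for i, task in enumerate(tasks):
            envs, next_envs = next_envs, None
            try:
                evaluate(envs, task)
            finally:
                envs.close()  # release task i's workers and GPU memory first
                if i + 1 < len(tasks):
                    next_envs = make_task_envs(tasks[i + 1])  # then prefetch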
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* refactor(envs): move _LazyAsyncVectorEnv to utils and apply to metaworld
_LazyAsyncVectorEnv lived in libero.py but metaworld had the same OOM
problem: all tasks' AsyncVectorEnv workers were spawned eagerly, wasting
GPU memory for tasks not yet running.
Move the class to envs/utils.py so both environments share it, then apply
the same is_async + lazy wrapping pattern in create_metaworld_envs.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* chore: remove out-of-scope benchmark/CI/docs files from PR
Benchmark CI workflow, Dockerfiles, benchmark docs, evaluation smoke-test
doc, and dispatch tests belong in a separate PR. Scope this PR to the
async env init changes only.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* chore: restore adding_benchmarks + test_dispatch, drop env_processor changes
- Restore docs/source/adding_benchmarks.mdx (belongs in this PR)
- Restore tests/envs/test_dispatch.py (belongs in this PR)
- Revert docs/source/env_processor.mdx to main (out of scope for this PR)
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* docs(adding_benchmarks): remove CI smoke test step (coming in separate PR)
Step 7 (Dockerfile + benchmark_tests.yml CI job) and its table rows are
out of scope for this PR. The CI infrastructure will be added on top in a
follow-up PR.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* refactor(envs): remove unused add_envs_task
Replaced by env.call("task_description") in lerobot_eval.py. No callers
remain in the codebase.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* style: fix prettier formatting in env_processor.mdx
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix(eval): catch AttributeError and NotImplementedError explicitly for task description
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix(envs): use forkserver context and close envs in test to prevent deadlock
AsyncVectorEnv with default fork context leaks worker processes between
test_policy parametrized cases; subsequent env creation deadlocks because
new forked workers inherit stale pipe FDs from previous test's leaked workers.
- configs.py: pass context="forkserver" to AsyncVectorEnv (matches _LazyAsyncVectorEnv)
- test_policies.py: call close_envs(envs) at end of test_policy to clean up workers
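Roughly what the two changes look like; the env factory below is a stand-in, while context="forkserver" is the actual gymnasium AsyncVectorEnv argument:

    import gymnasium as gym
    from gymnasium.vector import AsyncVectorEnv

    env_fns = [lambda: gym.make("CartPole-v1") for _ in range(2)]  # stand-in envs

    # configs.py: forkserver keeps workers from inheriting stale pipe FDs
    # or GPU handles from the parent process.
    envs = AsyncVectorEnv(env_fns, context="forkserver")

    # test_policies.py: always tear workers down when the test ends
    # (the repo wraps this in a close_envs() helper).
    try:
        envs.reset()  # stand-in for the policy rollout the test runs
    finally:
        envs.close()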
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* fix(envs): default use_async_envs=False in create_envs and make_env
Tests that call make_env(n_envs=2) without passing use_async_envs were
getting AsyncVectorEnv, whose forked workers can't resolve gym namespaces
registered at runtime. Default to False (sync) so existing tests pass.
lerobot_eval.py explicitly passes cfg.eval.use_async_envs, so the CLI
async behaviour (controlled by EvalConfig.use_async_envs) is unchanged.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
---------
Signed-off-by: Pepijn <138571049+pkooij@users.noreply.github.com>
Co-authored-by: Khalil Meftah <khalil.meftah@huggingface.co>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Runs accelerate launch --num_processes=1 lerobot-train with:
- steps=1, batch_size=1, dataset.episodes=[0] (episode 0 only)
- eval_freq=1 so the training loop triggers eval after step 1
- eval.n_episodes=1, eval.use_async_envs=false
Tests the full train→eval-within-training pipeline in the existing
libero-benchmark-libero:ci image (no extra Docker build cost).
Uploads eval video from /tmp/train-smoke/eval/ as libero-train-smoke-video.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds scripts/ci/parse_eval_metrics.py and wires it into both Libero and
MetaWorld jobs so the dashboard can read pc_success, avg_sum_reward and
eval_s from the metrics artifact instead of relying on GitHub step timing.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
user_lerobot cannot create /artifacts at the container root.
Use /tmp/eval-artifacts (always writable) then docker cp it out.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Bind mounts on these runners don't surface container-written files on
the host path (likely DinD/socket-mount setup). Switch to named
containers + docker cp, which copies directly through the daemon and
lands files in the runner's accessible filesystem.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Runs on the 1st of every month at 02:00 UTC in addition to the
existing push/PR and manual dispatch triggers.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Files created by user_lerobot inside the eval container inherit a
restrictive umask, making them unreadable by the runner after the
container exits. Add a post-eval 'docker run --user root' chmod step
so upload-artifact can find the video files.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Running chmod on the host doesn't propagate into Docker due to UID/SELinux
mismatch. Instead, spin up the image as root to mkdir+chmod from inside
the container before the eval run mounts the same path.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>