- Remove broken Triton issue link from Dockerfile.benchmark.libero
- Add module-level _safe_int helper to guard n_episodes against NaN
- Move _safe_float to module level alongside _safe_int
- Add # zizmor: ignore[unpinned-uses] to all upload-artifact@v4 steps
- Add if: env.HF_USER_TOKEN != '' to the Libero smoke eval so it is
  skipped on fork PRs, where the secret is empty (sketched below)
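A sketch of how the two changes land in a job (step names, artifact
name, and paths are illustrative; HF_USER_TOKEN is assumed to be mapped
from the secret at job level):

    env:
      HF_USER_TOKEN: ${{ secrets.HF_USER_TOKEN }}
    steps:
      - name: Run Libero smoke eval
        if: env.HF_USER_TOKEN != ''  # empty on fork PRs, so the step is skipped
        run: |
          # docker run invocation of the smoke eval, elided
      - name: Upload smoke eval video
        uses: actions/upload-artifact@v4 # zizmor: ignore[unpinned-uses]
        with:
          name: libero-smoke-video
          path: eval-artifacts/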
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Benchmark PRs (robomme, libero-plus, robocerebra, robotwin) target
feat/benchmark-ci, not main. Without feat/benchmark-ci in the
pull_request branches filter, the workflow never runs on those PRs.
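The trigger addition, roughly (other events and branches elided):

    on:
      pull_request:
        branches:
          - main
          - feat/benchmark-ci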
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Security:
- Remove "Login to Hugging Face" step — it was a no-op (ephemeral
--rm container) that exposed the HF token via CLI argument in
docker inspect / /proc/*/cmdline. The eval step already
re-authenticates via env var.
Functional:
- Remove feat/benchmark-ci from push trigger branches (won't exist
post-merge).
Dockerfiles:
- Pin uv to 0.8.0 (it was unpinned, pulling whatever the latest release
  happened to be).
- Add a comment explaining the chmod +x ptxas workaround (a Triton
  packaging bug: the wheel ships ptxas without the execute bit).
Scripts:
- parse_eval_metrics.py: add a note that it runs on the bare host and
  must stay stdlib-only.
- parse_eval_metrics.py: add NaN guard for avg_sum_reward and eval_s
(was only guarding pc_success).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The huggingface org restricts GHCR package creation via GITHUB_TOKEN,
causing 403 on cache export. Remove all registry caching and GHCR
login. The Dockerfile layer split (deps vs source) still helps when
the runner has a warm Docker daemon.
Also fix the metaworld job, which had a stale conditional Docker Hub
login and was missing the GHCR login entirely.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The Docker Hub CI token can't push to new repositories. GHCR works out
of the box: GITHUB_TOKEN has automatic packages:write for the repo owner.
- Add GHCR login step (github.actor + GITHUB_TOKEN)
- Switch cache refs to ghcr.io/huggingface/lerobot/cache-benchmark
- Add packages:write at job level (not workflow, per zizmor)
- Keep Docker Hub login for pulling nvidia/cuda base image
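Sketch of the GHCR wiring (action versions are illustrative; mode=max,
which also exports intermediate layers, is an assumption):

    permissions:
      packages: write  # job level, per zizmor
    steps:
      - name: Login to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build
        uses: docker/build-push-action@v6
        with:
          cache-from: type=registry,ref=ghcr.io/huggingface/lerobot/cache-benchmark
          cache-to: type=registry,ref=ghcr.io/huggingface/lerobot/cache-benchmark,mode=max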
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
GHA cache is capped at 10 GB per repo; a single CUDA + PyTorch +
benchmark image is ~8 GB, so cache entries are evicted before they're
reused. Switch to type=registry, which pushes cache layers to Docker Hub
(huggingface/lerobot-benchmark-cache:{libero,metaworld}). There is no
size limit, layers persist until explicitly deleted, and they are shared
across all runners and branches.
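In the build step this looked roughly like (tag per benchmark; mode=max
is an assumption):

    - name: Build
      uses: docker/build-push-action@v6
      with:
        cache-from: type=registry,ref=huggingface/lerobot-benchmark-cache:libero
        cache-to: type=registry,ref=huggingface/lerobot-benchmark-cache:libero,mode=max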
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Drop the conditional guard — other workflows (docker_publish,
full_tests) call docker/login-action unconditionally.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Step-level 'if' cannot reference 'secrets' directly. Expose the
secret via an env var and check that instead.
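The workaround pattern, sketched with a hypothetical SOME_SECRET:

    env:
      SOME_SECRET: ${{ secrets.SOME_SECRET }}  # job-level mirror of the secret
    steps:
      - name: Guarded step
        # if: secrets.SOME_SECRET != '' would be rejected; check the env mirror instead
        if: env.SOME_SECRET != ''
        run: |
          # work that needs the secret, elided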
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Anonymous pulls from Docker Hub are rate-limited to 100/6h, which
fails when multiple benchmark jobs pull nvidia/cuda in parallel.
Add docker/login-action step (conditional on DOCKERHUB_USERNAME var)
to authenticate and get 200 pulls/6h.
Setup: add DOCKERHUB_USERNAME as a repository variable and
DOCKERHUB_TOKEN as a repository secret in GitHub Settings.
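The step, roughly (action version illustrative):

    - name: Login to Docker Hub
      if: vars.DOCKERHUB_USERNAME != ''  # skip when the variable isn't configured
      uses: docker/login-action@v3
      with:
        username: ${{ vars.DOCKERHUB_USERNAME }}
        password: ${{ secrets.DOCKERHUB_TOKEN }}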
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The dep-install layer (uv sync) now only depends on pyproject.toml,
uv.lock, and a minimal package stub — not the full src/ tree. Source
code changes only rebuild the final COPY layer (seconds, not minutes).
Also switch from type=local cache (lost on ephemeral runners) to
type=gha (persisted in GitHub Actions cache, shared across all runs).
Before: every src/ change → full uv sync rebuild (~8-10 min)
After: src/-only change → cached dep layer, ~30s source copy
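Only the workflow side is shown here; the deps-vs-source split itself
lives in the Dockerfiles:

    - name: Build
      uses: docker/build-push-action@v6
      with:
        cache-from: type=gha
        cache-to: type=gha,mode=max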
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The task descriptions were never populated in metrics.json because
extract_task_descriptions.py was never invoked. The script exists and
parse_eval_metrics.py already looks for its output — the call was
simply missing from the workflow.
Appends the extraction step to the existing bash -c block (runs inside
the container where libero/metaworld is installed) so task_descriptions.json
is written to the eval-artifacts dir before docker cp copies it out.
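Roughly (flags and the extraction script's invocation are illustrative;
only task_descriptions.json's destination matters):

    docker run --name libero-eval ... libero-benchmark-libero:ci bash -c "
      lerobot-eval ...
      python extract_task_descriptions.py > /tmp/eval-artifacts/task_descriptions.json
    "
    docker cp libero-eval:/tmp/eval-artifacts ./eval-artifacts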
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Resolves conflict in lerobot_eval.py by taking explicit
(AttributeError, NotImplementedError) catches from main (#3274).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Runs accelerate launch --num_processes=1 lerobot-train with:
- steps=1, batch_size=1, dataset.episodes=[0] (episode 0 only)
- eval_freq=1 so the training loop triggers eval after step 1
- eval.n_episodes=1, eval.use_async_envs=false
Tests the full train→eval-within-training pipeline in the existing
libero-benchmark-libero:ci image (no extra Docker build cost).
Uploads eval video from /tmp/train-smoke/eval/ as libero-train-smoke-video.
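As a workflow step, approximately (policy/dataset arguments elided;
resolving lerobot-train via $(which ...) is an assumption, since
accelerate launch expects a script path):

    - name: Libero train+eval smoke test
      run: |
        docker run --name libero-train-smoke ... libero-benchmark-libero:ci \
          accelerate launch --num_processes=1 "$(which lerobot-train)" \
            --steps=1 --batch_size=1 --dataset.episodes='[0]' \
            --eval_freq=1 --eval.n_episodes=1 --eval.use_async_envs=false \
            --output_dir=/tmp/train-smoke
        docker cp libero-train-smoke:/tmp/train-smoke/eval ./eval-videos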
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Adds scripts/ci/parse_eval_metrics.py and wires it into both Libero and
MetaWorld jobs so the dashboard can read pc_success, avg_sum_reward, and
eval_s from the metrics artifact instead of relying on GitHub step timing.
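The wiring, roughly (the argument and redirection are illustrative; the
script is stdlib-only because it runs on the bare runner, outside any
container):

    - name: Parse eval metrics
      run: python scripts/ci/parse_eval_metrics.py eval-artifacts/ > metrics.json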
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
user_lerobot cannot create /artifacts at the container root.
Use /tmp/eval-artifacts (always writable) then docker cp it out.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Bind mounts on these runners don't surface container-written files on
the host path (likely a DinD/socket-mount setup). Switch to named
containers + docker cp, which copies directly through the daemon and
lands files in the runner's accessible filesystem.
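The resulting shape (flags elided):

    docker run --name libero-eval ... libero-benchmark-libero:ci ...
    docker cp libero-eval:/tmp/eval-artifacts ./eval-artifacts  # copies via the daemon
    docker rm libero-eval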
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Runs on the 1st of every month at 02:00 UTC in addition to the
existing push/PR and manual dispatch triggers.
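The added trigger:

    on:
      schedule:
        - cron: "0 2 1 * *"  # 02:00 UTC on the 1st of every month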
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Files created by user_lerobot inside the eval container are written
under a restrictive umask, making them unreadable by the runner after the
container exits. Add a post-eval 'docker run --user root' chmod step
so upload-artifact can find the video files.
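Sketch of the step (image tag from the Libero job; the mount path is
illustrative):

    - name: Make eval outputs readable
      run: |
        docker run --rm --user root -v /tmp/eval-artifacts:/out \
          libero-benchmark-libero:ci chmod -R a+rX /out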
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Running chmod on the host doesn't propagate into the container due to a
UID/SELinux mismatch. Instead, spin up the image as root to mkdir+chmod from inside
the container before the eval run mounts the same path.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Benchmark CI workflow, Dockerfiles, benchmark docs, evaluation smoke-test
doc, and dispatch tests belong in a separate PR. Scope this PR to the
async env init changes only.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
libero/__init__.py calls input() when ~/.libero/config.yaml is missing.
We write the config at image build time (without importing libero) so
the prompt never fires at runtime. Also trigger CI on pyproject.toml changes.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
libero/__init__.py calls input() to ask about a custom dataset path,
which raises EOFError when stdin is closed inside Docker. Setting
LIBERO_DATA_FOLDER skips the prompt entirely.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Each benchmark gets its own Docker image (lerobot[libero] / lerobot[metaworld]
only) so incompatible dep trees cannot collide. A 1-episode smoke eval runs
per benchmark on GPU runners.
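One way to wire the pair, assuming a job matrix (the real workflow may
use separate jobs; the tags are illustrative, and the Dockerfile name
follows Dockerfile.benchmark.libero above):

    strategy:
      matrix:
        benchmark: [libero, metaworld]
    steps:
      - name: Build benchmark image
        uses: docker/build-push-action@v6
        with:
          file: docker/Dockerfile.benchmark.${{ matrix.benchmark }}
          tags: lerobot-benchmark-${{ matrix.benchmark }}:ci
          load: true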
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* feat(ci): add uv.lock
* feat(ci): use uv.lock in CI PR testing
* chore(ci): rename nightly to docker publish and test
* feat(ci): automated update of uv.lock + remove unbound check + docker images now use uv.lock
* fix(ci): add --force-with-lease + set -e to catch silent errors
* fix(ci): skip HF login (and tests) in forks and community PRs
* chore(test): remove comment about test meant to be only run locally
* fix(tests): no HF login in decorator for xvla
* fix(test): no decorator in yield
* fix(ci): prevent runner group error on fork pushes
Add a repository check to the unbound_deps_tests workflow to ensure the
aws-general-8-plus runner group is only used on the main repository,
preventing 'Required runner group not found' errors on forks.
* fix(ci): use gating job to prevent runner allocation on forks
The previous approach failed because GitHub evaluates runs-on before if conditions.
Now a check-repo job runs on ubuntu-latest first, and all jobs with
special runners depend on it and check its output before being scheduled
(see the sketch after this list).
* fix(ci): add gating job to full_tests to prevent runner allocation on forks
Apply the same gating pattern used in unbound_deps_tests to full_tests.yml
to prevent GitHub from trying to allocate custom runners when workflows
run on forks. The check-repo job runs first on ubuntu-latest and all jobs
with custom runners depend on it and check its output.
* fix(ci): add repository check to unbound_deps_tests workflow
Add an if: github.repository == 'huggingface/lerobot' check to the build-and-push-docker job to prevent runner group access errors on forks, matching the pattern used in nightly.yml
* fix(ci): add repository check to full_tests workflow
Add an if: github.repository == 'huggingface/lerobot' check to the build-and-push-docker and gpu-tests jobs to prevent runner group access errors on forks
* refactor(ci): remove redundant check from gpu-tests job
gpu-tests depends on build-and-push-docker via needs, so it will automatically skip when the parent job is skipped
* refactor(ci): remove unnecessary fork check from full-tests job
full-tests runs on ubuntu-latest, which is available to all forks, so there is no need to restrict it
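The gating pattern referenced above, sketched (the no-op step exists
because a job must define at least one step; the output name is
illustrative):

    jobs:
      check-repo:
        runs-on: ubuntu-latest
        outputs:
          is-main: ${{ github.repository == 'huggingface/lerobot' }}
        steps:
          - run: 'true'
      gpu-tests:
        needs: check-repo
        if: needs.check-repo.outputs.is-main == 'true'
        runs-on:
          group: aws-general-8-plus
        steps:
          - run: |
              # actual tests elided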
---------
Co-authored-by: Steven Palma <imstevenpmwork@ieee.org>