* feat(ci): add RoboCerebra benchmark eval job
- Docker image with robosuite/libero deps for RoboCerebra eval
- CI workflow: 1-episode eval with pepijn223/smolvla_robocerebra
- Reuses the libero env with a rename_map and empty_cameras=3 (see the sketch below)
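For orientation, the job's smoke eval boils down to roughly this command. A sketch only: the `lerobot-eval` entry point and the exact `--env.*`/`--eval.*` flag spellings are assumptions, and the rename_map override is elided since its value isn't given here.

```bash
# Sketch of the 1-episode smoke eval; flag spellings are assumptions and the
# rename_map override is omitted because its value isn't given in this log.
lerobot-eval \
  --policy.path=pepijn223/smolvla_robocerebra \
  --env.type=libero \
  --env.empty_cameras=3 \
  --eval.n_episodes=1
```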
* docs(robocerebra): add benchmark page and toctree entry
Add a dedicated docs page for RoboCerebra that points at the canonical
dataset lerobot/robocerebra_unified and shows how to run evaluation and fine-tuning
against it. Wire it into the Benchmarks section of the toctree so
doc-builder picks it up.
* ci: point benchmark eval checkpoints at the lerobot/ org mirrors
pepijn223/smolvla_* → lerobot/smolvla_* across every benchmark job in
this branch (libero, metaworld, and the per-branch benchmark). The
checkpoints were mirrored into the lerobot/ org and that's the canonical
location going forward.
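One way to sanity-check that no stale references remain on the branch (plain git; the pathspec is illustrative):

```bash
# Should print nothing once every benchmark job points at the lerobot/ mirrors.
git grep -n 'pepijn223/smolvla_' -- .github/workflows/
```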
* fix(robocerebra): drop alias extra + simplify docker image
Two problems rolled up:
1. `uv sync --locked --extra test` was failing because pyproject.toml added
a `robocerebra = ["lerobot[libero]"]` alias extra but uv.lock wasn't
regenerated. Drop the alias: nothing in CI actually needs the extra name
(the Dockerfile just installs what it needs directly), so this restores
pyproject.toml and uv.lock to byte-exact copies of origin/main.
2. Rebase docker/Dockerfile.benchmark.robocerebra on
huggingface/lerobot-gpu:latest (same pattern as libero/metaworld/…).
The nightly image already ships lerobot[all], which includes [libero],
so the RoboCerebra image is essentially identical to the LIBERO one:
fetch libero-assets, write ~/.libero/config.yaml, overlay source.
92 → 43 lines.
Also repoint the CI workflow comment that referenced the removed extra.
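For reference, the failure and the two candidate remedies as shell (the first command is the one CI runs; `uv lock` is uv's standard lock regeneration; the git diff is just a sanity check):

```bash
# Failed while pyproject.toml declared the robocerebra alias extra without a
# matching uv.lock:
uv sync --locked --extra test

# Remedy A (not taken): regenerate the lock to cover the new extra.
uv lock

# Remedy B (taken): drop the alias extra and confirm both files are
# byte-identical to origin/main again.
git diff origin/main -- pyproject.toml uv.lock
```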
* ci: use dedicated lerobot/smolvla_robocerebra checkpoint for smoke eval
Replace the generic pepijn223/smolvla_libero placeholder with the
purpose-trained lerobot/smolvla_robocerebra model in the RoboCerebra
CI smoke test.
* fix(ci): align RoboCerebra eval with training pipeline
Fixes five mismatches that caused a 0% success rate:
- env.type: robocerebra (unregistered) → libero
- resolution: 360x360 (default) → 256x256 (matches the dataset)
- camera_name_mapping: map eye_in_hand → wrist_image (not image2)
- empty_cameras: 3 → 1 (matches training)
- add an HF_USER_TOKEN guard on the eval step (sketched below)
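The guard amounts to skipping the eval when the secret is absent. A shell sketch of the idea; the workflow would more likely express this as a step-level condition, and only the HF_USER_TOKEN name is taken from this change:

```bash
# Forked PRs don't receive repository secrets, so bail out gracefully rather
# than failing the eval step.
if [ -z "${HF_USER_TOKEN:-}" ]; then
  echo "HF_USER_TOKEN not set; skipping RoboCerebra smoke eval."
  exit 0
fi
```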
* fix(ci): set env.fps=20 and explicit obs_type for RoboCerebra eval
Match the dataset's 20 FPS (LiberoEnv defaults to 30) and make
obs_type=pixels_agent_pos explicit to guard against future default changes
(see the combined sketch below).
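Putting both fix commits together, the aligned invocation looks roughly like this. Flag names are taken from this log where given; the `--env.` prefixes, the dict syntax, and the direction of the camera mapping are assumptions, and the 256x256 resolution flags are omitted because their spelling isn't given here.

```bash
# Aligned smoke eval; see the caveats above on which spellings are assumed.
lerobot-eval \
  --policy.path=lerobot/smolvla_robocerebra \
  --env.type=libero \
  --env.fps=20 \
  --env.obs_type=pixels_agent_pos \
  --env.empty_cameras=1 \
  --env.camera_name_mapping='{"eye_in_hand": "wrist_image"}' \
  --eval.n_episodes=1
```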
* docs(robocerebra): align page with adding_benchmarks template
Rework docs/source/robocerebra.mdx to follow the standard benchmark
doc structure: intro + links + available tasks + installation + eval
+ recommended episodes + policy I/O + training + reproducing results.
- Point everything at lerobot/smolvla_robocerebra (the released
checkpoint), not the personal pepijn223 mirror.
- Add the --env.fps=20 and --env.obs_type=pixels_agent_pos flags
that CI actually uses, so a copy-pasted eval command reproduces the CI run.
- Split the "Training" block out of the recipe section into its own
section with the feature table.
- Add an explicit "Reproducing published results" section pointing
at the CI smoke eval.
* fix: integrate PR #3314 review feedback
- ci(robocerebra): drop redundant hf auth login step (auth is
already performed inside the eval step's container).
- ci(robocerebra): add Docker Hub login before the image build
to get the higher authenticated pull-rate limit (sketched below).
- docs(robocerebra): align eval snippet with the CI command
(observation size, camera_name_mapping, use_async_envs, device,
empty_cameras=1).
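The added Docker Hub login is the standard docker CLI flow. A sketch with hypothetical secret names; the workflow would feed these from GitHub secrets, possibly via docker/login-action instead:

```bash
# Authenticate before the image build to get the higher authenticated
# pull-rate limit; DOCKERHUB_USERNAME/DOCKERHUB_TOKEN are assumed names.
echo "$DOCKERHUB_TOKEN" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin
```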
* fix(envs): preserve AsyncVectorEnv metadata/unwrapped in lazy eval envs
Port of #3416 onto this branch.
* ci: gate Docker Hub login on secret availability
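In shell terms the gate is just a presence check around that login (same hypothetical secret names as above; in the workflow itself this would be a step-level condition, since forked PRs don't receive secrets):

```bash
# Log in only when credentials exist; otherwise build with anonymous limits.
if [ -n "${DOCKERHUB_TOKEN:-}" ]; then
  echo "$DOCKERHUB_TOKEN" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin
else
  echo "No Docker Hub credentials available; using anonymous pull limits."
fi
```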
* Update .github/workflows/benchmark_tests.yml
Co-authored-by: Khalil Meftah <khalil.meftah@huggingface.co>
Signed-off-by: Pepijn <138571049+pkooij@users.noreply.github.com>
* Update .github/workflows/benchmark_tests.yml
Signed-off-by: Pepijn <138571049+pkooij@users.noreply.github.com>
Co-authored-by: Khalil Meftah <khalil.meftah@huggingface.co>