mirror of
https://github.com/huggingface/lerobot.git
synced 2026-05-11 14:49:43 +00:00
e699e52388
* feat(envs): add RoboCasa365 benchmark integration

  Add RoboCasa365 (arXiv:2603.04356) as a new simulation benchmark with
  365 everyday kitchen manipulation tasks across 2,500 diverse environments.

  New files:
  - src/lerobot/envs/robocasa.py: gym.Env wrapper with deferred env creation,
    flat 12D action / 16D state vectors, 3-camera support
  - docs/source/robocasa.mdx: user-facing documentation
  - docker/Dockerfile.benchmark.robocasa: CI benchmark image

  Modified files:
  - src/lerobot/envs/configs.py: RoboCasaEnv config (--env.type=robocasa)
  - pyproject.toml: robocasa optional dependency group
  - docs/source/_toctree.yml: sidebar entry
  - .github/workflows/benchmark_tests.yml: integration test job

  Refs: https://arxiv.org/abs/2603.04356, https://robocasa.ai
  Related: huggingface/lerobot#321

* fix(docker): use uv pip to install robocasa in benchmark image

  The huggingface/lerobot-gpu base image uses `uv` with a venv at
  /lerobot/.venv — `pip` is not on PATH, so `pip install` fails with
  "pip: not found". Switch to `uv pip install`, which installs into the
  existing venv.

  Also drop the @v1.0.0 tag pin from the robocasa git URL since the
  upstream repo may not have that tag; use the default branch instead.

* fix(robocasa): editable install + switch to lerobot/smolvla_robocasa

  - pip install from git omits data files like box_links_assets.json
    (not declared in package_data). Clone and install editable so the
    source tree is used at runtime.
  - Download only tex + fixtures_lw asset types (the smoke test doesn't
    need objaverse/aigen objects). Pipe 'y' to auto-accept the download
    prompt.
  - Switch the CI policy from pepijn223/smolvla_robocasa to
    lerobot/smolvla_robocasa.

* fix(docker): re-install lerobot editably after COPY

  The nightly huggingface/lerobot-gpu image predates the RoboCasaEnv
  registration — so `lerobot-eval --env.type=robocasa` fails at argparse
  with "invalid choice" even after `COPY . .` overlays the new source.
  Force an editable reinstall so the venv picks up the current configs.py.
* fix(ci): add rename_map for robocasa eval (image* -> camera*)

  Policy lerobot/smolvla_robocasa expects observation.images.camera1/2/3,
  but RoboCasaEnv produces observation.images.image/image2/image3.

* fix(robocasa): override RoboCasaGymEnv default split (test -> all)

  RoboCasaGymEnv defaults split="test", but create_env only accepts
  {None, "all", "pretrain", "target"}, so the out-of-the-box default
  crashes with ValueError. Always pass "all" when split is None.

* fix(docker): also download objs_lw (lightwheel objects) for robocasa

  Kitchen tasks (e.g. CloseFridge) reference lightwheel object meshes like
  Stool022/model.xml. fixtures_lw alone isn't enough — we also need
  objs_lw. Still skipping objaverse/aigen to keep image size down.

  Made-with: Cursor

* feat(robocasa): raw camera names + benchmark-group task shortcuts

  Align the LeRobot env with RoboCasa's native conventions so policies
  trained on the upstream datasets don't need a --rename_map at eval time,
  and expose the standard task groups as first-class --env.task values.

  - Preserve raw RoboCasa camera names (e.g. robot0_agentview_left) as
    observation.images.<name> end-to-end. Drops camera_name_mapping and
    DEFAULT_CAMERA_NAME_MAPPING; features/features_map are now built
    dynamically from the parsed camera list.
  - Accept benchmark-group names as --env.task: atomic_seen,
    composite_seen, composite_unseen, pretrain50/100/200/300. Expanded
    lazily via robocasa.utils.dataset_registry; auto-sets the split
    ("target" | "pretrain").
  - Update CI smoke-eval rename_map to map raw cam names to the
    camera1/2/3 keys expected by lerobot/smolvla_robocasa.

* docs(robocasa): single-task smolvla train+eval recipe on pepijn223/robocasa_CloseFridge

  - Rewrite observation section to use raw RoboCasa camera keys
    (observation.images.robot0_agentview_{left,right},
    observation.images.robot0_eye_in_hand).
  - Add a "Training on a single task" section with a full smolvla training
    command on pepijn223/robocasa_CloseFridge, plus a matching single-task
    eval command.
  - Document benchmark-group task shortcuts (atomic_seen, composite_seen,
    composite_unseen, pretrain50/100/200/300) as valid --env.task values.

* fix(robocasa): restrict obj_registries to lightwheel by default

  CloseFridge (and most kitchen tasks) crashed at reset with
  `ValueError: Probabilities contain NaN` coming out of
  `sample_kitchen_object_helper`. RoboCasa's upstream default
  `obj_registries=("objaverse", "lightwheel")` normalizes per-registry
  candidate counts as probabilities; when a sampled category has zero
  mjcf paths in every configured registry (because the objaverse asset
  pack isn't on disk — ~30GB, skipped by our Docker build), the 0/0
  divide yields NaNs and `rng.choice` raises.

  - Add `obj_registries: list[str] = ["lightwheel"]` to `RoboCasaEnv`
    config; thread it through `create_robocasa_envs`, `_make_env_fns`,
    and the gym.Env wrapper to the underlying `RoboCasaGymEnv` (which
    forwards to `create_env` → `robosuite.make` → kitchen env).
  - Default matches what `download_kitchen_assets --type objs_lw`
    actually ships, so the env works out of the box without a 30GB
    objaverse download.
  - Document the override (`--env.obj_registries='[objaverse,lightwheel]'`)
    for users who have downloaded the full asset set.

* fix(docker): also download tex_generative for robocasa benchmark

  RoboCasa's lightwheel kitchen fixtures embed references to
  `generative_textures/wall/tex*.png` directly in their MuJoCo XML, so
  `MjModel.from_xml_string` errors out at reset time with "No such file
  or directory" even when the env is constructed with
  `generative_textures=None`. The generative textures live under a
  separate asset registry key (`tex_generative`) in
  `download_kitchen_assets`, distinct from the base `tex` pack we were
  already fetching.

  - Add `tex_generative` to the download list so the fixture XMLs resolve.
  - Document the remaining omissions (objaverse/aigen, ~30GB) and how the
    runtime side pairs this with obj_registries=["lightwheel"] to avoid
    sampling from categories whose assets aren't on disk.

* ci(robocasa): smoke-eval 10 atomic tasks instead of 1

  Broader coverage in the benchmark CI job: evaluate SmolVLA on ten
  fixture-centric atomic RoboCasa tasks (one episode each) instead of
  just CloseFridge. The tasks are all drawn from TARGET_TASKS.atomic_seen
  and selected to avoid object-manipulation categories that would require
  the objaverse/aigen asset packs (we only ship objs_lw in the Docker
  image, paired with obj_registries=["lightwheel"] on the runtime side).

  Tasks: CloseFridge, OpenCabinet, OpenDrawer, TurnOnMicrowave,
  TurnOffStove, CloseToasterOvenDoor, SlideDishwasherRack,
  TurnOnSinkFaucet, NavigateKitchen, TurnOnElectricKettle.

  `scripts/ci/parse_eval_metrics.py` already handles multi-task output
  via the `overall` key, so no parser changes are needed. Bumped the
  metrics artifact's task label to `atomic_smoke_10` to reflect the
  grouping.

* fix(pyproject): drop unresolvable robocasa extra

  robocasa's upstream setup.py hardcodes `lerobot==0.3.3` in
  install_requires. Exposing it as the `lerobot[robocasa]` extra made
  uv's dep resolver cycle: `lerobot[robocasa]` -> robocasa -> lerobot (a
  different version) -> unsolvable. This broke every `uv sync` — even
  invocations with an unrelated extra like `--extra test` — because uv
  validates the whole lockfile graph.

  - Remove the `robocasa` extra from pyproject.toml. Installation
    instructions in docs/source/robocasa.mdx now walk users through the
    manual `git clone` + `pip install --no-deps` flow, which matches what
    the Docker image already does and sidesteps the cyclic dep entirely.
  - Dockerfile: `uv pip install -e ~/robocasa --no-deps` so the shadowed
    lerobot==0.3.3 never lands in the image; install robocasa's actual
    runtime deps (numpy, numba, scipy, mujoco, tianshou, etc.) explicitly.
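The `Probabilities contain NaN` failure described in the obj_registries fix above can be reproduced in miniature. This is a hypothetical simplification (the function name and count values are illustrative, not RoboCasa's code): normalizing per-registry candidate counts into sampling probabilities divides by the total, and a category with zero assets in every configured registry produces a 0/0, i.e. NaN, which then breaks `rng.choice`:

```python
import math

def registry_probs(counts: dict[str, int]) -> dict[str, float]:
    """Normalize per-registry candidate counts into sampling probabilities.

    Mirrors what a vectorized counts/total does when total == 0:
    every entry becomes NaN instead of raising.
    """
    total = sum(counts.values())
    return {k: (v / total) if total else float("nan") for k, v in counts.items()}

# objaverse pack not on disk, and the category has no lightwheel assets:
probs = registry_probs({"objaverse": 0, "lightwheel": 0})
assert all(math.isnan(p) for p in probs.values())  # rng.choice would raise here

# The fix: only configure registries whose assets actually ship.
probs = registry_probs({"lightwheel": 4})
assert probs == {"lightwheel": 1.0}
```

Restricting `obj_registries` to `["lightwheel"]` keeps the zero-count registry out of the denominator entirely, which is why the default works without the 30GB objaverse download.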
* docs(robocasa): align page with adding_benchmarks template

  Rework docs/source/robocasa.mdx to follow the standard benchmark doc
  structure: intro + links + available tasks (with family breakdown and
  first-class benchmark-group shortcuts) + installation + eval +
  recommended episodes + policy I/O + training + reproducing results.

  - Fix the paper link (was pointing at a non-existent arxiv ID).
  - Surface lerobot/smolvla_robocasa and pepijn223/robocasa_CloseFridge
    in the top-of-page links so they're findable without reading the
    training section.
  - Add an explicit "Object registries" subsection explaining the
    `--env.obj_registries=[objaverse,lightwheel]` override path.
  - Add an explicit "Reproducing published results" section pointing at
    the CI smoke eval.

* fix: integrate PR #3375 review feedback

  - envs(robocasa): hoist the duplicated `_parse_camera_names` helper out
    of `libero.py` and `robocasa.py` into `envs/utils.py` as the public
    `parse_camera_names`; call sites updated.
  - envs(robocasa): give each factory a distinct `episode_index`
    (`0..n_envs-1`) and derive a per-worker seed series in `reset()` so
    n_envs workers don't all roll the same scene under a shared outer
    seed.
  - envs(robocasa): drop the unused `**kwargs` on `_make_env`; declare
    `visualization_height` / `visualization_width` on both the wrapper
    and the `RoboCasaEnv` config + propagate via `gym_kwargs`.
  - envs(robocasa): emit `info["final_info"]` on termination (matching
    MetaWorld) so downstream vector-env auto-reset keeps the terminal
    task/success flags.
  - docs(robocasa): add `--rename_map` (robot0_agentview_left/
    eye_in_hand/agentview_right → camera1/2/3) plus CI-parity flags to
    all three eval snippets.
  - docker(robocasa): pin robocasa + robosuite git SHAs and the pip dep
    versions (pygame, Pillow, opencv-python, pyyaml, pynput, tqdm,
    termcolor, imageio, h5py, lxml, hidapi, gymnasium) for reproducible
    benchmark images.
  - ci(robocasa): update the workflow comment — there is no
    `lerobot[robocasa]` extra; robocasa/robosuite are installed manually
    because upstream's `lerobot==0.3.3` pin shadows ours.

* docs(robocasa): add benchmark banner image

* fix(envs): preserve AsyncVectorEnv metadata/unwrapped in lazy eval envs

  Port of #3416 onto this branch. Also threads the cached metadata
  through the RoboCasa factory so async eval on `--env.type=robocasa`
  keeps the same improvement.

* fix: integrate PR #3375 review feedback (round 2)

  - envs(robocasa): when the caller passes `seed=None` to `reset()`, fall
    back to `self.episode_index` for the inner env seed so each worker
    still samples a distinct trajectory instead of all workers inheriting
    the same global RNG state.
  - envs(robocasa): replace the two module-level `print()` calls in
    `create_robocasa_envs` with `logger.info(...)` via a module-level
    `logger = logging.getLogger(__name__)`.
  - ci(robocasa): run `scripts/ci/extract_task_descriptions.py` after the
    eval so `metrics.json` carries per-task natural-language labels,
    matching LIBERO / MetaWorld / VLABench jobs. Added a
    `_robocasa_descriptions()` extractor that splits CamelCase task names
    into word-level labels keyed by `<task>_0`.
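The CamelCase splitting described for `_robocasa_descriptions()` can be sketched with a short regex. This is a hypothetical reconstruction from the commit message, not the actual extractor in `scripts/ci/extract_task_descriptions.py`:

```python
import re

def describe(task: str) -> dict[str, str]:
    """Split a CamelCase task name into a word-level label keyed by <task>_0.

    Each capitalized run ("Turn", "On", "Sink", "Faucet") becomes one
    lowercase word in the description.
    """
    words = re.findall(r"[A-Z][a-z0-9]*", task)
    return {f"{task}_0": " ".join(w.lower() for w in words)}

print(describe("TurnOnSinkFaucet"))  # -> {'TurnOnSinkFaucet_0': 'turn on sink faucet'}
```

Keying by `<task>_0` matches the single-episode smoke eval, where each task contributes exactly one rollout.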
419 lines
17 KiB
YAML
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Integration tests: build an isolated Docker image per benchmark and run a
# 1-episode smoke eval. Each benchmark gets its own image so incompatible
# dependency trees (e.g. hf-libero vs metaworld==3.0.0) can never collide.
#
# To add a new benchmark:
#   1. Add docker/Dockerfile.benchmark.<name> (install only lerobot[<name>])
#   2. Copy one of the jobs below and adjust the image name and eval command.
name: Benchmark Integration Tests

on:
  # Run manually from the Actions tab
  workflow_dispatch:

  # Run every Monday at 02:00 UTC.
  schedule:
    - cron: "0 2 * * 1"

  push:
    branches:
      - main
    paths:
      - "src/lerobot/envs/**"
      - "src/lerobot/scripts/lerobot_eval.py"
      - "docker/Dockerfile.benchmark.*"
      - ".github/workflows/benchmark_tests.yml"
      - "pyproject.toml"

  pull_request:
    branches:
      - main
    paths:
      - "src/lerobot/envs/**"
      - "src/lerobot/scripts/lerobot_eval.py"
      - "docker/Dockerfile.benchmark.*"
      - ".github/workflows/benchmark_tests.yml"
      - "pyproject.toml"

permissions:
  contents: read

env:
  UV_VERSION: "0.8.0"
  PYTHON_VERSION: "3.12"

# Cancel in-flight runs for the same branch/PR.
concurrency:
  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

jobs:
  # ── LIBERO ────────────────────────────────────────────────────────────────
  # Isolated image: lerobot[libero] only (hf-libero, dm-control, mujoco chain)
  libero-integration-test:
    name: Libero — build image + 1-episode eval
    runs-on:
      group: aws-g6-4xlarge-plus
    env:
      HF_USER_TOKEN: ${{ secrets.LEROBOT_HF_USER }}

    steps:
      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          persist-credentials: false
          lfs: true

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3 # zizmor: ignore[unpinned-uses]
        with:
          cache-binary: false

      - name: Login to Docker Hub
        if: ${{ env.DOCKERHUB_USERNAME != '' }}
        uses: docker/login-action@v3 # zizmor: ignore[unpinned-uses]
        with:
          username: ${{ secrets.DOCKERHUB_LEROBOT_USERNAME }}
          password: ${{ secrets.DOCKERHUB_LEROBOT_PASSWORD }}
        env:
          DOCKERHUB_USERNAME: ${{ secrets.DOCKERHUB_LEROBOT_USERNAME }}

      # Build the benchmark-specific image. The Dockerfile separates dep-install
      # from source-copy, so code-only changes skip the slow uv-sync layer
      # when the runner has a warm Docker daemon cache.
      - name: Build Libero benchmark image
        uses: docker/build-push-action@v6 # zizmor: ignore[unpinned-uses]
        with:
          context: .
          file: docker/Dockerfile.benchmark.libero
          push: false
          load: true
          tags: lerobot-benchmark-libero:ci

      - name: Run Libero smoke eval (1 episode)
        if: env.HF_USER_TOKEN != ''
        run: |
          # Named container (no --rm) so we can docker cp artifacts out.
          # Output to /tmp inside the container — /artifacts doesn't exist
          # and user_lerobot cannot create root-level dirs.
          docker run --name libero-eval --gpus all \
            --shm-size=4g \
            -e HF_HOME=/tmp/hf \
            -e HF_USER_TOKEN="${HF_USER_TOKEN}" \
            -e HF_HUB_DOWNLOAD_TIMEOUT=300 \
            lerobot-benchmark-libero:ci \
            bash -c "
              hf auth login --token \"\$HF_USER_TOKEN\" --add-to-git-credential 2>/dev/null || true
              lerobot-eval \
                --policy.path=pepijn223/smolvla_libero \
                --env.type=libero \
                --env.task=libero_spatial \
                --eval.batch_size=1 \
                --eval.n_episodes=1 \
                --eval.use_async_envs=false \
                --policy.device=cuda \
                '--env.camera_name_mapping={\"agentview_image\": \"camera1\", \"robot0_eye_in_hand_image\": \"camera2\"}' \
                --policy.empty_cameras=1 \
                --output_dir=/tmp/eval-artifacts
              python scripts/ci/extract_task_descriptions.py \
                --env libero --task libero_spatial \
                --output /tmp/eval-artifacts/task_descriptions.json
            "

      - name: Copy Libero artifacts from container
        if: always()
        run: |
          mkdir -p /tmp/libero-artifacts
          docker cp libero-eval:/tmp/eval-artifacts/. /tmp/libero-artifacts/ 2>/dev/null || true
          docker rm -f libero-eval || true

      - name: Parse Libero eval metrics
        if: always()
        run: |
          python3 scripts/ci/parse_eval_metrics.py \
            --artifacts-dir /tmp/libero-artifacts \
            --env libero \
            --task libero_spatial \
            --policy pepijn223/smolvla_libero

      - name: Upload Libero rollout video
        if: always()
        uses: actions/upload-artifact@v4 # zizmor: ignore[unpinned-uses]
        with:
          name: libero-rollout-video
          path: /tmp/libero-artifacts/videos/
          if-no-files-found: warn

      - name: Upload Libero eval metrics
        if: always()
        uses: actions/upload-artifact@v4 # zizmor: ignore[unpinned-uses]
        with:
          name: libero-metrics
          path: /tmp/libero-artifacts/metrics.json
          if-no-files-found: warn

      # ── LIBERO TRAIN+EVAL SMOKE ──────────────────────────────────────────────
      # Train SmolVLA for 1 step (batch_size=1, dataset episode 0 only), then
      # immediately run eval inside the training loop (eval_freq=1, 1 episode).
      # Tests the full train→eval-within-training pipeline end-to-end.
      - name: Run Libero train+eval smoke (1 step, eval_freq=1)
        if: env.HF_USER_TOKEN != ''
        run: |
          docker run --name libero-train-smoke --gpus all \
            --shm-size=4g \
            -e HF_HOME=/tmp/hf \
            -e HF_USER_TOKEN="${HF_USER_TOKEN}" \
            -e HF_HUB_DOWNLOAD_TIMEOUT=300 \
            lerobot-benchmark-libero:ci \
            bash -c "
              hf auth login --token \"\$HF_USER_TOKEN\" --add-to-git-credential 2>/dev/null || true
              accelerate launch --num_processes=1 \$(which lerobot-train) \
                --policy.path=lerobot/smolvla_base \
                --policy.load_vlm_weights=true \
                --policy.scheduler_decay_steps=25000 \
                --policy.freeze_vision_encoder=false \
                --policy.train_expert_only=false \
                --dataset.repo_id=lerobot/libero \
                --dataset.episodes=[0] \
                --dataset.use_imagenet_stats=false \
                --env.type=libero \
                --env.task=libero_spatial \
                '--env.camera_name_mapping={\"agentview_image\": \"camera1\", \"robot0_eye_in_hand_image\": \"camera2\"}' \
                --policy.empty_cameras=1 \
                --output_dir=/tmp/train-smoke \
                --steps=1 \
                --batch_size=1 \
                --eval_freq=1 \
                --eval.n_episodes=1 \
                --eval.batch_size=1 \
                --eval.use_async_envs=false \
                --save_freq=1 \
                --policy.push_to_hub=false \
                '--rename_map={\"observation.images.image\": \"observation.images.camera1\", \"observation.images.image2\": \"observation.images.camera2\"}'
            "

      - name: Copy Libero train-smoke artifacts from container
        if: always()
        run: |
          mkdir -p /tmp/libero-train-smoke-artifacts
          docker cp libero-train-smoke:/tmp/train-smoke/. /tmp/libero-train-smoke-artifacts/ 2>/dev/null || true
          docker rm -f libero-train-smoke || true

      - name: Upload Libero train-smoke eval video
        if: always()
        uses: actions/upload-artifact@v4 # zizmor: ignore[unpinned-uses]
        with:
          name: libero-train-smoke-video
          path: /tmp/libero-train-smoke-artifacts/eval/
          if-no-files-found: warn

  # ── METAWORLD ─────────────────────────────────────────────────────────────
  # Isolated image: lerobot[metaworld] only (metaworld==3.0.0, mujoco>=3 chain)
  metaworld-integration-test:
    name: MetaWorld — build image + 1-episode eval
    runs-on:
      group: aws-g6-4xlarge-plus
    env:
      HF_USER_TOKEN: ${{ secrets.LEROBOT_HF_USER }}

    steps:
      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          persist-credentials: false
          lfs: true

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3 # zizmor: ignore[unpinned-uses]
        with:
          cache-binary: false

      - name: Login to Docker Hub
        if: ${{ env.DOCKERHUB_USERNAME != '' }}
        uses: docker/login-action@v3 # zizmor: ignore[unpinned-uses]
        with:
          username: ${{ secrets.DOCKERHUB_LEROBOT_USERNAME }}
          password: ${{ secrets.DOCKERHUB_LEROBOT_PASSWORD }}
        env:
          DOCKERHUB_USERNAME: ${{ secrets.DOCKERHUB_LEROBOT_USERNAME }}

      - name: Build MetaWorld benchmark image
        uses: docker/build-push-action@v6 # zizmor: ignore[unpinned-uses]
        with:
          context: .
          file: docker/Dockerfile.benchmark.metaworld
          push: false
          load: true
          tags: lerobot-benchmark-metaworld:ci

      - name: Run MetaWorld smoke eval (1 episode)
        if: env.HF_USER_TOKEN != ''
        run: |
          docker run --name metaworld-eval --gpus all \
            --shm-size=4g \
            -e HF_HOME=/tmp/hf \
            -e HF_USER_TOKEN="${HF_USER_TOKEN}" \
            -e HF_HUB_DOWNLOAD_TIMEOUT=300 \
            lerobot-benchmark-metaworld:ci \
            bash -c "
              hf auth login --token \"\$HF_USER_TOKEN\" --add-to-git-credential 2>/dev/null || true
              lerobot-eval \
                --policy.path=pepijn223/smolvla_metaworld \
                --env.type=metaworld \
                --env.task=metaworld-push-v3 \
                --eval.batch_size=1 \
                --eval.n_episodes=1 \
                --eval.use_async_envs=false \
                --policy.device=cuda \
                '--rename_map={\"observation.image\": \"observation.images.camera1\"}' \
                --policy.empty_cameras=2 \
                --output_dir=/tmp/eval-artifacts
              python scripts/ci/extract_task_descriptions.py \
                --env metaworld --task metaworld-push-v3 \
                --output /tmp/eval-artifacts/task_descriptions.json
            "

      - name: Copy MetaWorld artifacts from container
        if: always()
        run: |
          mkdir -p /tmp/metaworld-artifacts
          docker cp metaworld-eval:/tmp/eval-artifacts/. /tmp/metaworld-artifacts/ 2>/dev/null || true
          docker rm -f metaworld-eval || true

      - name: Parse MetaWorld eval metrics
        if: always()
        run: |
          python3 scripts/ci/parse_eval_metrics.py \
            --artifacts-dir /tmp/metaworld-artifacts \
            --env metaworld \
            --task metaworld-push-v3 \
            --policy pepijn223/smolvla_metaworld

      - name: Upload MetaWorld rollout video
        if: always()
        uses: actions/upload-artifact@v4 # zizmor: ignore[unpinned-uses]
        with:
          name: metaworld-rollout-video
          path: /tmp/metaworld-artifacts/videos/
          if-no-files-found: warn

      - name: Upload MetaWorld eval metrics
        if: always()
        uses: actions/upload-artifact@v4 # zizmor: ignore[unpinned-uses]
        with:
          name: metaworld-metrics
          path: /tmp/metaworld-artifacts/metrics.json
          if-no-files-found: warn

  # ── ROBOCASA365 ──────────────────────────────────────────────────────────
  # Isolated image: robocasa + robosuite installed manually as editable
  # clones (no `lerobot[robocasa]` extra — robocasa's setup.py pins
  # `lerobot==0.3.3`, which would shadow this repo's lerobot).
  robocasa-integration-test:
    name: RoboCasa365 — build image + 1-episode eval
    runs-on:
      group: aws-g6-4xlarge-plus
    env:
      HF_USER_TOKEN: ${{ secrets.LEROBOT_HF_USER }}

    steps:
      - uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6.0.2
        with:
          persist-credentials: false
          lfs: true

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3 # zizmor: ignore[unpinned-uses]
        with:
          cache-binary: false

      - name: Login to Docker Hub
        if: ${{ env.DOCKERHUB_USERNAME != '' }}
        uses: docker/login-action@v3 # zizmor: ignore[unpinned-uses]
        with:
          username: ${{ secrets.DOCKERHUB_LEROBOT_USERNAME }}
          password: ${{ secrets.DOCKERHUB_LEROBOT_PASSWORD }}
        env:
          DOCKERHUB_USERNAME: ${{ secrets.DOCKERHUB_LEROBOT_USERNAME }}

      - name: Build RoboCasa365 benchmark image
        uses: docker/build-push-action@v6 # zizmor: ignore[unpinned-uses]
        with:
          context: .
          file: docker/Dockerfile.benchmark.robocasa
          push: false
          load: true
          tags: lerobot-benchmark-robocasa:ci

      - name: Run RoboCasa365 smoke eval (10 atomic tasks, 1 episode each)
        if: env.HF_USER_TOKEN != ''
        run: |
          docker run --name robocasa-eval --gpus all \
            --shm-size=4g \
            -e HF_HOME=/tmp/hf \
            -e HF_USER_TOKEN="${HF_USER_TOKEN}" \
            -e HF_HUB_DOWNLOAD_TIMEOUT=300 \
            -e MUJOCO_GL=egl \
            lerobot-benchmark-robocasa:ci \
            bash -c "
              hf auth login --token \"\$HF_USER_TOKEN\" --add-to-git-credential 2>/dev/null || true
              lerobot-eval \
                --policy.path=lerobot/smolvla_robocasa \
                --env.type=robocasa \
                --env.task=CloseFridge,OpenCabinet,OpenDrawer,TurnOnMicrowave,TurnOffStove,CloseToasterOvenDoor,SlideDishwasherRack,TurnOnSinkFaucet,NavigateKitchen,TurnOnElectricKettle \
                --eval.batch_size=1 \
                --eval.n_episodes=1 \
                --eval.use_async_envs=false \
                --policy.device=cuda \
                '--rename_map={\"observation.images.robot0_agentview_left\": \"observation.images.camera1\", \"observation.images.robot0_eye_in_hand\": \"observation.images.camera2\", \"observation.images.robot0_agentview_right\": \"observation.images.camera3\"}' \
                --output_dir=/tmp/eval-artifacts
              python scripts/ci/extract_task_descriptions.py \
                --env robocasa \
                --task CloseFridge,OpenCabinet,OpenDrawer,TurnOnMicrowave,TurnOffStove,CloseToasterOvenDoor,SlideDishwasherRack,TurnOnSinkFaucet,NavigateKitchen,TurnOnElectricKettle \
                --output /tmp/eval-artifacts/task_descriptions.json
            "

      - name: Copy RoboCasa365 artifacts from container
        if: always()
        run: |
          mkdir -p /tmp/robocasa-artifacts
          docker cp robocasa-eval:/tmp/eval-artifacts/. /tmp/robocasa-artifacts/ 2>/dev/null || true
          docker rm -f robocasa-eval || true

      - name: Parse RoboCasa365 eval metrics
        if: always()
        run: |
          python3 scripts/ci/parse_eval_metrics.py \
            --artifacts-dir /tmp/robocasa-artifacts \
            --env robocasa \
            --task atomic_smoke_10 \
            --policy lerobot/smolvla_robocasa

      - name: Upload RoboCasa365 rollout video
        if: always()
        uses: actions/upload-artifact@v4 # zizmor: ignore[unpinned-uses]
        with:
          name: robocasa-rollout-video
          path: /tmp/robocasa-artifacts/videos/
          if-no-files-found: warn

      - name: Upload RoboCasa365 eval metrics
        if: always()
        uses: actions/upload-artifact@v4 # zizmor: ignore[unpinned-uses]
        with:
          name: robocasa-metrics
          path: /tmp/robocasa-artifacts/metrics.json
          if-no-files-found: warn