[pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci
LIBERO includes **five task suites**: LIBERO-Spatial, LIBERO-Object, LIBERO-Goal, LIBERO-90, and LIBERO-10. Together, these suites cover **130 tasks**, ranging from simple object manipulations to complex multi-step scenarios. LIBERO is meant to grow over time, and to serve as a shared benchmark where the community can test and improve lifelong learning algorithms.
![An overview of the LIBERO benchmark](https://libero-project.github.io/assets/img/libero/fig1.png)
_Figure 1: An overview of the LIBERO benchmark._
## Evaluating with LIBERO
At **LeRobot**, we ported [LIBERO](https://github.com/Lifelong-Robot-Learning/LIBERO) so you can evaluate your policies on it directly within the LeRobot framework.
LIBERO is now part of our **multi-eval supported simulation**, allowing you to benchmark your policies either on a **single suite of tasks** or across **multiple suites at once** with just a single flag.
To install LIBERO, first follow the [LeRobot Installation Guide](https://huggingface.co/docs/lerobot/installation).
Once LeRobot is installed, there are two options:
1. **Install via pip** (recommended):
```bash
pip install "lerobot[libero,smolvla]"
```
2. **Install from source**:
```bash
git clone https://github.com/huggingface/lerobot.git
cd lerobot
pip install -e ".[libero,smolvla]"  # same extras as the pip option above
```
When using LIBERO through LeRobot, policies interact with the environment via **standardized observation keys**:
- `observation.images.image2` wrist camera view (`robot0_eye_in_hand_image`).
⚠️ **Note:** LeRobot enforces the `.images.*` prefix for any visual features. Make sure your dataset metadata keys match this convention when evaluating.
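As a quick sanity check before training, you can assert this convention over a sample observation. The `obs` dict below is a hand-built stand-in, not the exact structure returned by the environment:

```python
# Minimal sketch: check that every visual feature key follows the
# `observation.images.*` convention that LeRobot enforces.
# `obs` is a hand-built placeholder for one observation dict.
obs = {
    "observation.state": [0.0] * 8,
    "observation.images.image": "agentview_image frame",
    "observation.images.image2": "robot0_eye_in_hand_image frame",
}

visual_keys = [k for k in obs if "image" in k.rsplit(".", 1)[-1]]
bad = [k for k in visual_keys if not k.startswith("observation.images.")]
assert not bad, f"keys violating the .images.* convention: {bad}"
```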
## Input Features and Metadata Alignment
To train or evaluate a policy, you use `make_policy`, which builds a feature-naming dictionary for the observations the policy expects.
This mapping can come from either of the following (see the sketch after this list):
- Dataset metadata
- The evaluation environment
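Here is a minimal sketch of the dataset-metadata path, assuming the recent lerobot import layout and the `make_policy(cfg, ds_meta=...)` signature (both vary between releases, so double-check against your install):

```python
# Sketch: build a policy whose input features come from dataset metadata,
# then inspect which observation keys it will expect at evaluation time.
from lerobot.configs.policies import PreTrainedConfig
from lerobot.datasets.lerobot_dataset import LeRobotDatasetMetadata
from lerobot.policies.factory import make_policy

ds_meta = LeRobotDatasetMetadata("HuggingFaceVLA/libero")
policy_cfg = PreTrainedConfig.from_pretrained("jadechoghari/smolvla-libero")
policy = make_policy(cfg=policy_cfg, ds_meta=ds_meta)

print(policy.config.input_features)
```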
### Common Issues
A common problem is when the keys in the dataset, environment, and policy config do not match. For example:
- `wrist_image` vs `observation.images.image2`
- `observation.image2` (as in SmolVLA) vs the `.images.*` prefix convention
Such mismatches surface as `KeyError`s, typically because `make_policy` assumes key names that the dataset or environment does not actually provide, with little error handling to catch the problem earlier.
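A cheap way to catch such mismatches up front is a plain set comparison between the keys the policy expects and the keys the dataset provides; the key sets below are illustrative:

```python
# Illustrative key sets; in practice, take `expected` from
# policy.config.input_features and `provided` from your dataset features.
expected = {"observation.state", "observation.images.image", "observation.images.image2"}
provided = {"observation.state", "observation.images.image", "wrist_image"}

print("missing from dataset:", expected - provided)   # {'observation.images.image2'}
print("unexpected in dataset:", provided - expected)  # {'wrist_image'}
```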
---
### How to Check Expected Features
To check what your policy expects as `input_features`:

- Open your policy config (`config.json`), e.g. [example here](https://huggingface.co/jadechoghari/smolvla-libero/blob/main/config.json).
- Or add a breakpoint in `train.py` and inspect:

```python
print(policy.config.input_features)
```
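For reference, the printed mapping has roughly the following shape (types and shapes here are illustrative, not the exact values for this checkpoint):

```python
# Illustrative structure of `input_features` (example shapes, not exact):
from lerobot.configs.types import FeatureType, PolicyFeature

input_features = {
    "observation.state": PolicyFeature(type=FeatureType.STATE, shape=(8,)),
    "observation.images.image": PolicyFeature(type=FeatureType.VISUAL, shape=(3, 256, 256)),
    "observation.images.image2": PolicyFeature(type=FeatureType.VISUAL, shape=(3, 256, 256)),
}
```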
## Fixing KeyErrors (Preprocessing Example)
If your dataset columns do not follow the expected naming, you can rename them in place before training. The snippet below is a minimal sketch; the file path and key mapping are illustrative, so adapt them to your dataset layout:

```python
import shutil

import pyarrow.parquet as pq

# Illustrative episode file; LeRobot datasets store episodes as parquet files.
src = "data/chunk-000/episode_000000.parquet"
shutil.copy(src, src + ".bak")  # keep a backup before rewriting in place

table = pq.read_table(src)
rename = {"wrist_image": "observation.images.image2"}  # old key -> expected key
table = table.rename_columns([rename.get(c, c) for c in table.column_names])
pq.write_table(table, src)
```
The environment expects:
- `observation.images.image2` → wrist camera (`robot0_eye_in_hand_image`)
⚠️ Cleaning the dataset upfront is **cleaner and more efficient** than remapping keys inside the code. We plan to provide a script to easily preprocess such data.
To avoid potential mismatches and `KeyError`s, we provide a **preprocessed LIBERO dataset** that is fully compatible with the current LeRobot codebase and requires no additional manipulations.
- 🔗 [Preprocessed LIBERO dataset (Hugging Face LeRobot org)](https://huggingface.co/datasets/HuggingFaceVLA/libero)
- 🔗 [Original LIBERO dataset (physical-intelligence)](https://huggingface.co/datasets/physical-intelligence/libero)
The preprocessed dataset follows LeRobot naming conventions (e.g., `.images.*` prefix for visual features) and aligns with policy configs out-of-the-box.
The original dataset is acknowledged here as the primary source.
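Loading the preprocessed dataset is then a one-liner; a quick sketch (the import path may differ slightly across lerobot releases):

```python
# Sketch: load the preprocessed dataset and confirm key naming up front.
from lerobot.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("HuggingFaceVLA/libero")
print(dataset.meta.features.keys())  # expect `observation.images.*` visual keys
```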
---
```bash
python src/lerobot/scripts/train.py \
--eval.batch_size=1 \
--eval.n_episodes=1 \
--eval_freq=1000 \
```
---
Running evaluation on Google Colab may produce warnings like:

```
UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
```
This happens because Colab's rendering contexts are **not thread-safe**, and `ThreadPoolExecutor(max_workers=num_workers)` can trigger segfaults or leaked semaphore warnings.
**Colab Note:**
Parallel evaluation is not supported in Colab. To avoid these issues, run sequentially or disable multitask evaluation.
Run sequentially:
```bash
--env.max_parallel_tasks=1
```
Or disable multitask evaluation:
```bash
--env.multitask_eval=False
```
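Putting it together, a Colab-safe run might look like the following; the script path and the `--policy.path`/`--env.type` values are assumptions based on the rest of this post, so adapt them to your setup:

```bash
# Hypothetical Colab-safe invocation: evaluate tasks sequentially.
python src/lerobot/scripts/eval.py \
    --policy.path=jadechoghari/smolvla-libero \
    --env.type=libero \
    --env.max_parallel_tasks=1
```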