mirror of
https://github.com/huggingface/lerobot.git
synced 2026-05-11 22:59:50 +00:00
Compare commits: main...feat/add-pi05 (32 commits)
@@ -27,6 +27,8 @@
      title: Porting Large Datasets
    - local: using_dataset_tools
      title: Using the Dataset Tools
    - local: annotation_tools
      title: Using the Annotation Tools
    - local: dataset_subtask
      title: Using Subtasks in the Dataset
  title: "Datasets"
@@ -0,0 +1,425 @@
# Dataset Annotation Tools

This guide explains how to use the automatic annotation tools to add skill labels and synthetic dialogue to your LeRobot datasets.

## Overview

The annotation pipeline consists of two main components:

1. **Subtask Annotation** (`subtask_annotate.py`): Automatically segments robot demonstrations into atomic skills using Vision-Language Models (VLMs)
2. **High-Level Annotation** (`high_level_annotate.py`): Generates synthetic user prompts and robot utterances for hierarchical policy training

These tools enable you to transform raw robot demonstration data into richly annotated datasets suitable for training hierarchical policies.

## Installation Requirements

Before using the annotation tools, ensure you have the required dependencies:

```bash
pip install transformers qwen-vl-utils opencv-python rich pandas pyarrow
```
You'll also need FFmpeg for video processing:

```bash
# Ubuntu/Debian
sudo apt-get install ffmpeg

# macOS
brew install ffmpeg
```
## Part 1: Subtask Annotation

### What It Does

The subtask annotator segments each episode into short atomic manipulation skills (1-3 seconds each). For example, a "pick and place" episode might be segmented into:

- "reach towards object" (0.0s - 1.2s)
- "grasp object" (1.2s - 2.1s)
- "lift object" (2.1s - 3.5s)
- "move to target" (3.5s - 5.0s)
- "release object" (5.0s - 6.2s)
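Conceptually, the per-frame `subtask_index` this produces is just an interval lookup over segments like the ones above. A minimal illustrative sketch (a hypothetical helper, not the annotator's actual code):

```python
def skill_at(skills, t):
    """Return the skill whose [start, end) window contains timestamp t (seconds)."""
    for skill in skills:
        if skill["start"] <= t < skill["end"]:
            return skill["name"]
    return None  # t falls outside every annotated segment

skills = [
    {"name": "reach towards object", "start": 0.0, "end": 1.2},
    {"name": "grasp object", "start": 1.2, "end": 2.1},
    {"name": "lift object", "start": 2.1, "end": 3.5},
]
print(skill_at(skills, 1.5))  # grasp object
```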
### Usage

#### Basic Example

```bash
python src/lerobot/policies/pi05_full/annotate/subtask_annotate.py \
    --repo-id your-username/your-dataset \
    --video-key observation.images.base \
    --output-dir /path/to/output
```

#### With Local Dataset

```bash
python src/lerobot/policies/pi05_full/annotate/subtask_annotate.py \
    --data-dir /path/to/local/dataset \
    --video-key observation.images.base \
    --output-dir /path/to/output
```
#### Advanced Options

```bash
python src/lerobot/policies/pi05_full/annotate/subtask_annotate.py \
    --repo-id your-username/your-dataset \
    --video-key observation.images.base \
    --model Qwen/Qwen2-VL-7B-Instruct \
    --batch-size 16 \
    --output-dir /path/to/output \
    --push-to-hub
```
### Parameters

| Parameter | Description | Default |
|-----------|-------------|---------|
| `--repo-id` | HuggingFace Hub dataset ID | Required (or use `--data-dir`) |
| `--data-dir` | Path to local dataset | Required (or use `--repo-id`) |
| `--video-key` | Video observation key | Required |
| `--model` | VLM model to use | `Qwen/Qwen2-VL-7B-Instruct` |
| `--device` | Device to run the model on | `cuda` |
| `--dtype` | Model dtype | `bfloat16` |
| `--batch-size` | Episodes per batch | `8` |
| `--episodes` | Specific episodes to annotate | All episodes |
| `--output-dir` | Output directory | Auto-generated |
| `--push-to-hub` | Push to HuggingFace Hub | `False` |
### Supported Models

- **Qwen2-VL**: `Qwen/Qwen2-VL-2B-Instruct`, `Qwen/Qwen2-VL-7B-Instruct`, `Qwen/Qwen2-VL-72B-Instruct`
- **Qwen3-VL**: `Qwen/Qwen3-VL-30B-A3B-Instruct`
### Output Files

The subtask annotation creates the following files in your dataset:

1. **`meta/subtasks.parquet`**: DataFrame with unique subtask names

   ```python
   # Structure:
   # Index: subtask name (string)
   # Column: subtask_index (int64)
   ```

2. **`meta/skills.json`**: Raw skill annotations with timestamps

   ```json
   {
     "coarse_description": "Pick and place the object",
     "skill_to_subtask_index": {
       "reach towards object": 0,
       "grasp object": 1,
       ...
     },
     "episodes": {
       "0": {
         "episode_index": 0,
         "description": "Pick and place the object",
         "skills": [
           {"name": "reach towards object", "start": 0.0, "end": 1.2},
           {"name": "grasp object", "start": 1.2, "end": 2.1},
           ...
         ]
       }
     }
   }
   ```

3. **`subtask_index` feature**: Added to each frame in the dataset
   - Type: `int64`
   - Shape: `(1,)`
   - Maps each frame to its corresponding subtask
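The `skill_to_subtask_index` mapping in `skills.json` can be understood as indexing skill names in order of first appearance across episodes. An illustrative sketch of that convention (not the annotator's actual code; the indexing order is an assumption):

```python
def build_skill_to_subtask_index(episodes):
    """Assign each unique skill name an index in order of first appearance."""
    mapping = {}
    for ep in episodes.values():
        for skill in ep["skills"]:
            # setdefault only inserts when the name is new, so indices are 0, 1, 2, ...
            mapping.setdefault(skill["name"], len(mapping))
    return mapping

episodes = {
    "0": {"skills": [{"name": "reach towards object"}, {"name": "grasp object"}]},
    "1": {"skills": [{"name": "grasp object"}, {"name": "lift object"}]},
}
print(build_skill_to_subtask_index(episodes))
# {'reach towards object': 0, 'grasp object': 1, 'lift object': 2}
```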
### Accessing Subtask Annotations

```python
from lerobot.datasets.lerobot_dataset import LeRobotDataset

# Load annotated dataset
dataset = LeRobotDataset(repo_id="your/dataset_with_subtasks")

# Get a frame
frame = dataset[100]

# Get the subtask for this frame
subtask_idx = frame["subtask_index"].item()
subtask_name = dataset.meta.subtasks.iloc[subtask_idx].name

print(f"Frame 100 is performing: {subtask_name}")

# Load all subtasks
subtasks_df = dataset.meta.subtasks
print(subtasks_df)
```
## Part 2: High-Level Annotation

### What It Does

The high-level annotator generates synthetic dialogue for hierarchical policy training. For each skill, it creates:

- **User Prompt** (`ℓ_t`): A natural language request from the user
- **Robot Utterance** (`u_t`): A natural language response from the robot

This enables training policies that can understand and respond to human instructions in natural dialogue.

### Prerequisites

**Important**: You must run subtask annotation first! High-level annotation requires the `skills.json` file generated by subtask annotation.
### Usage

#### Image Mode (Default)

Samples frames at regular intervals and passes images to the VLM:

```bash
python src/lerobot/policies/pi05_full/annotate/high_level_annotate.py \
    --repo-id your/dataset_with_subtasks \
    --model Qwen/Qwen2-VL-7B-Instruct \
    --image-key observation.images.base \
    --output-dir /path/to/output
```

#### Video Mode

Passes entire episode videos to the VLM for better temporal understanding:

```bash
python src/lerobot/policies/pi05_full/annotate/high_level_annotate.py \
    --repo-id your/dataset_with_subtasks \
    --model Qwen/Qwen2-VL-7B-Instruct \
    --video-mode \
    --video-key observation.images.base \
    --video-batch-size 4 \
    --output-dir /path/to/output
```
### Parameters

| Parameter | Description | Default |
|-----------|-------------|---------|
| `--repo-id` | HuggingFace Hub dataset ID | Required (or use `--data-dir`) |
| `--data-dir` | Path to local dataset | Required (or use `--repo-id`) |
| `--model` | VLM model to use | `Qwen/Qwen2-VL-7B-Instruct` |
| `--image-key` | Image observation key (image mode) | First camera key |
| `--video-mode` | Use video instead of images | `False` |
| `--video-key` | Video observation key (video mode) | Auto-detected |
| `--video-batch-size` | Episodes per batch (video mode) | `1` |
| `--sample-interval` | Sampling interval in seconds | `1.0` |
| `--temperature` | Sampling temperature | `0.7` |
| `--output-dir` | Output directory | Auto-generated |
| `--push-to-hub` | Push to HuggingFace Hub | `False` |
### Output Files

The high-level annotation creates:

1. **`meta/tasks_high_level.parquet`**: DataFrame with high-level tasks

   ```python
   # Structure:
   # Index: task string (concatenated user_prompt | robot_utterance)
   # Columns:
   #   - task_index: int64
   #   - user_prompt: string
   #   - robot_utterance: string
   #   - skill: string (associated subtask)
   #   - scenario_type: string
   #   - response_type: string
   ```

2. **`meta/syn_annotations.jsonl`**: Debug annotations (JSONL format)

   ```json
   {"episode_id": 0, "timestamp": 1.5, "skill_current": "grasp object", "user_prompt": "Can you pick that up?", "robot_utterance": "Sure, I'll grasp it now", ...}
   ```

3. **`task_index_high_level` feature**: Added to each frame
   - Type: `int64`
   - Shape: `(1,)`
   - Maps each frame to its high-level task
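The parquet index string concatenates the prompt and utterance with a `|` separator; a one-line sketch of that convention (the exact separator and whitespace are assumptions):

```python
def make_task_key(user_prompt, robot_utterance):
    # Hypothetical reconstruction of the "user_prompt | robot_utterance" index string.
    return f"{user_prompt} | {robot_utterance}"

print(make_task_key("Can you pick that up?", "Sure, I'll grasp it now"))
```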
### Dialogue Types Generated

The system generates diverse interaction types:

**Scenario Types:**
- `specific_object`: "Pick up the red block"
- `negative_task`: "Don't touch the blue one"
- `situated_correction`: "Actually, move to the other box instead"
- `implicit_request`: "I need something red for the tower"
- `constraint_based`: "Make sure to handle it gently"

**Response Types:**
- `confirmation`: "OK, I'll pick it up"
- `clarification`: "Just to confirm, you want me to pick up the red block?"
- `acknowledgment`: "Got it, picking up the red block"
- `constraint_acknowledgment`: "Sure, I'll pick it up gently"
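When auditing the generated dialogue, it helps to check how evenly these types are distributed. A quick sketch over records shaped like the `syn_annotations.jsonl` entries (field names follow that example):

```python
from collections import Counter

annotations = [
    {"scenario_type": "specific_object", "response_type": "confirmation"},
    {"scenario_type": "negative_task", "response_type": "clarification"},
    {"scenario_type": "specific_object", "response_type": "acknowledgment"},
]

# Tally how often each scenario and response type appears
scenario_counts = Counter(a["scenario_type"] for a in annotations)
response_counts = Counter(a["response_type"] for a in annotations)
print(scenario_counts)  # Counter({'specific_object': 2, 'negative_task': 1})
```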
### Accessing High-Level Annotations

```python
import pandas as pd
from torch.utils.data import DataLoader

from lerobot.datasets.lerobot_dataset import LeRobotDataset

# Load annotated dataset
dataset = LeRobotDataset(repo_id="your/dataset_with_high_level_tasks")

# Get a frame
frame = dataset[100]

# Get the high-level task
task_idx = frame["task_index_high_level"].item()

# Load tasks metadata
tasks_df = pd.read_parquet(dataset.root / "meta" / "tasks_high_level.parquet")
task_row = tasks_df[tasks_df["task_index"] == task_idx].iloc[0]

print(f"User: {task_row['user_prompt']}")
print(f"Robot: {task_row['robot_utterance']}")
print(f"Skill: {task_row['skill']}")

# Use in a DataLoader
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)
batch = next(iter(dataloader))

print(f"Task indices: {batch['task_index_high_level']}")
print(f"User prompts: {batch['user_prompt'][0]}")
print(f"Robot utterances: {batch['robot_utterance'][0]}")
```
## Complete Pipeline Example

Here's how to run both annotation stages:

```bash
#!/bin/bash

REPO_ID="your-username/your-dataset"
MODEL="Qwen/Qwen2-VL-7B-Instruct"
OUTPUT_DIR="/path/to/output"

# Step 1: Subtask Annotation
python src/lerobot/policies/pi05_full/annotate/subtask_annotate.py \
    --repo-id "$REPO_ID" \
    --video-key observation.images.base \
    --model "$MODEL" \
    --batch-size 8 \
    --output-dir "${OUTPUT_DIR}/subtasks"

# Step 2: High-Level Annotation (Image Mode)
python src/lerobot/policies/pi05_full/annotate/high_level_annotate.py \
    --data-dir "${OUTPUT_DIR}/subtasks" \
    --model "$MODEL" \
    --image-key observation.images.base \
    --sample-interval 1.0 \
    --output-dir "${OUTPUT_DIR}/final"

# Or Step 2: High-Level Annotation (Video Mode - Recommended)
python src/lerobot/policies/pi05_full/annotate/high_level_annotate.py \
    --data-dir "${OUTPUT_DIR}/subtasks" \
    --model "$MODEL" \
    --video-mode \
    --video-key observation.images.base \
    --video-batch-size 4 \
    --output-dir "${OUTPUT_DIR}/final"
```
## Performance Tips

### For Faster Processing

1. **Increase batch size**: Use `--batch-size 16` or higher (subtask annotation)
2. **Increase video batch size**: Use `--video-batch-size 8` (high-level annotation in video mode)
3. **Larger sampling interval**: Use `--sample-interval 5.0` for testing (samples every 5 seconds instead of every 1 second)
4. **Use smaller models**: `Qwen/Qwen2-VL-2B-Instruct` is faster than `Qwen/Qwen2-VL-7B-Instruct`
5. **Process specific episodes**: Use `--episodes 0 1 2 3` to annotate only a subset

### For Better Quality

1. **Use larger models**: `Qwen/Qwen3-VL-30B-A3B-Instruct` or `Qwen/Qwen2-VL-72B-Instruct`
2. **Use video mode**: Provides better temporal context
3. **Smaller sampling intervals**: Use `--sample-interval 0.5` for dense annotations
4. **Adjust temperature**: Use `--temperature 0.9` for more diverse dialogue
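To see why the sampling interval dominates runtime in image mode, note that each sampled timestamp costs one VLM call. A back-of-the-envelope sketch (assumes sampling starts at t=0):

```python
def vlm_calls_per_episode(duration_s, sample_interval_s):
    # One dialogue annotation (one VLM call) per sampled timestamp, starting at t=0.
    return int(duration_s / sample_interval_s) + 1

# For a 60-second episode:
print(vlm_calls_per_episode(60, 1.0))  # 61 calls at the default interval
print(vlm_calls_per_episode(60, 5.0))  # 13 calls, roughly 5x fewer
```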
## Memory Requirements

| Model | GPU Memory | Recommended Batch Size |
|-------|------------|------------------------|
| Qwen2-VL-2B | ~8 GB | 16-32 |
| Qwen2-VL-7B | ~16 GB | 8-16 |
| Qwen2-VL-72B | ~80 GB | 1-2 |
| Qwen3-VL-30B | ~40 GB | 4-8 |
## Troubleshooting

### "FFmpeg not found"

```bash
# Install FFmpeg
sudo apt-get install ffmpeg  # Ubuntu/Debian
brew install ffmpeg          # macOS
```

### "CUDA out of memory"

- Reduce batch size: `--batch-size 1` or `--video-batch-size 1`
- Use a smaller model: `Qwen/Qwen2-VL-2B-Instruct`
- Use CPU: `--device cpu` (much slower)

### "No skills.json found"

Run subtask annotation first, before high-level annotation.

### "Video key not found"

List available keys:

```python
from lerobot.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset(repo_id="your/dataset")
print("Video keys:", dataset.meta.video_keys)
print("Camera keys:", dataset.meta.camera_keys)
```
## Dataset Structure After Annotation

```
your_dataset_with_high_level_tasks/
├── meta/
│   ├── info.json                              # Original metadata
│   ├── tasks.parquet                          # Original tasks (preserved)
│   ├── subtasks.parquet                       # NEW: Subtask names and indices
│   ├── skills.json                            # NEW: Raw skill annotations with timestamps
│   ├── tasks_high_level.parquet               # NEW: High-level tasks with dialogue
│   └── syn_annotations.jsonl                  # NEW: Debug annotations
├── data/
│   └── chunk-000/
│       ├── observation.images.base.mp4
│       ├── action.safetensors
│       ├── subtask_index.safetensors          # NEW: Subtask per frame
│       └── task_index_high_level.safetensors  # NEW: High-level task per frame
└── videos/
    └── ...
```
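A small helper can verify that both annotation stages actually produced the new metadata files listed above (an illustrative sketch; paths follow the layout shown):

```python
from pathlib import Path

EXPECTED_META_FILES = [
    "meta/subtasks.parquet",
    "meta/skills.json",
    "meta/tasks_high_level.parquet",
    "meta/syn_annotations.jsonl",
]

def missing_annotation_files(root):
    """Return the expected annotation files that are absent under the dataset root."""
    root = Path(root)
    return [f for f in EXPECTED_META_FILES if not (root / f).is_file()]

# A directory with no annotations reports all four files missing.
print(missing_annotation_files("/tmp/empty_dataset"))
```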
## Citation

If you use these annotation tools in your research, please cite:

```bibtex
@misc{lerobot2024,
  title={LeRobot: State-of-the-art Machine Learning for Real-World Robotics},
  author={LeRobot Contributors},
  year={2024},
  url={https://github.com/huggingface/lerobot}
}
```
## Next Steps

After annotation, you can:

1. Train hierarchical policies using the subtask and high-level annotations
2. Use the synthetic dialogue for instruction-following policy training
3. Analyze skill distributions and dialogue patterns
4. Share your annotated dataset on the HuggingFace Hub with `--push-to-hub`

For training examples, see the [training documentation](../training/).
@@ -0,0 +1,50 @@
#!/bin/bash

# Example script to run synthetic data generation with Qwen VLM
# This generates user prompts and robot utterances for hierarchical policy training

# Configuration
REPO_ID="lerobot/libero_10"
MODEL="Qwen/Qwen3-VL-30B-A3B-Instruct"
# or: MODEL="Qwen/Qwen2-VL-7B-Instruct"

OUTPUT_DIR="/fsx/jade_choghari/outputs/libero-10-annotate-high"

BATCH_SIZE=16
TEMPERATURE=0.9
SAMPLE_INTERVAL=5.0  # generate dialogue every 5 seconds (all episodes processed)

# Run subtask annotation
# python /admin/home/jade_choghari/lerobot/src/lerobot/policies/pi05_full/annotate/subtask_annotate.py \
#     --repo-id "$REPO_ID" \
#     --video-key observation.images.image \
#     --output-dir "$OUTPUT_DIR" \
#     --skip-existing \
#     --output-repo-id "jadechoghari/libero10-annotate" \
#     --batch-size "$BATCH_SIZE"

# Run synthetic data generation (all episodes processed)
# python examples/dataset/annotate_pgen.py \
#     --repo-id "$REPO_ID" \
#     --model "$MODEL" \
#     --output-dir "$OUTPUT_DIR" \
#     --temperature "$TEMPERATURE" \
#     --batch-size "$BATCH_SIZE" \
#     --sample-interval "$SAMPLE_INTERVAL" \
#     --image-key observation.images.base \
#     --num-image-views-per-sample 1

# For faster testing, increase the sample interval:
#   --sample-interval 5.0  # samples every 5 seconds (much faster)

# To push to the Hub after generation, add the --push-to-hub flag.

# Efficient batch processing: several episodes at once
python src/lerobot/data_processing/annotations/high_level_annotate.py \
    --data-dir "/fsx/jade_choghari/outputs/libero-10-annotate" \
    --output-dir "$OUTPUT_DIR" \
    --video-mode \
    --video-key observation.images.image \
    --video-batch-size "$BATCH_SIZE" \
    --sample-interval 5.0
File diff suppressed because it is too large
@@ -0,0 +1,52 @@
import torch

from lerobot.datasets.lerobot_dataset import LeRobotDataset
from lerobot.policies.factory import make_pre_post_processors
from lerobot.configs.policies import PreTrainedConfig

# Local copy: /fsx/jade_choghari/data/libero_10_subtasks_kw_converted
dataset = LeRobotDataset(repo_id="lerobot/libero_10_image_subtask")

dataloader = torch.utils.data.DataLoader(
    dataset,
    num_workers=0,
    batch_size=2,
    shuffle=True,
)

cfg = PreTrainedConfig.from_pretrained(
    pretrained_name_or_path="/fsx/jade_choghari/models/pi05-base",
)
cfg.dtype = "bfloat16"

pre_processor, post_processor = make_pre_post_processors(
    policy_cfg=cfg,
    pretrained_path="/fsx/jade_choghari/models/pi05-base",
)
batch = next(iter(dataloader))
breakpoint()
batch1 = pre_processor(batch)
breakpoint()
print(batch.keys())
# print(batch['task_index_high_level'].shape)
# print(batch['task_index_high_level'])
# print(batch['user_prompt'][0])
# print(batch['robot_utterance'][0])
# print(batch['task'][0])

valid_episode_list = []
# NOTE: dataset[i] indexes individual frames, not episodes.
for episode_idx in range(len(dataset.meta.episodes)):
    subtask_index = dataset[episode_idx]["subtask_index"]
    valid_episode_list.append(episode_idx)

print(len(valid_episode_list))

# Read this parquet: /fsx/jade_choghari/outputs/pgen_annotations1/meta/tasks.parquet
# import pandas as pd
# tasks_df = pd.read_parquet('/fsx/jade_choghari/outputs/pgen_annotations1/meta/tasks.parquet')
# print(tasks_df.columns)
# breakpoint()
@@ -0,0 +1,74 @@
#!/bin/bash

# Example script to run synthetic data generation with Qwen VLM
# This generates user prompts and robot utterances for hierarchical policy training

# Configuration
REPO_ID="jadechoghari/piper-demo-20260205_103303"
# MODEL="Qwen/Qwen3-VL-30B-A3B-Thinking"
MODEL="Qwen/Qwen3.5-27B"
# or: MODEL="Qwen/Qwen2-VL-7B-Instruct"

OUTPUT_DIR="/fsx/jade_choghari/outputs/collect-data-pgen_new"

BATCH_SIZE=2
TEMPERATURE=0.9
SAMPLE_INTERVAL=5.0  # generate dialogue every 5 seconds (all episodes processed)

# Run subtask annotation.
# To use closed-vocabulary labels, pass: --subtask-labels "label1" "label2" ...
python /home/lerobot/src/lerobot/data_processing/annotations/subtask_annotate.py \
    --repo-id "$REPO_ID" \
    --video-key observation.images.top \
    --output-dir "$OUTPUT_DIR" \
    --output-repo-id "jadechoghari/piper-demo-annotated1" \
    --push-to-hub \
    --no-timer-overlay \
    --model "$MODEL" \
    --subtask-labels "pick_up_yellow_nut_bar" "pick_up_cake" "pick_up_biscuit_pack" "pick_up_soda_can" \
    --batch-size 2

# Run subtask annotation (image-window: frames as images for better accuracy)
# python /admin/home/jade_choghari/lerobot/src/lerobot/data_processing/annotations/subtask_annotate_image.py \
#     --repo-id "$REPO_ID" \
#     --camera-key observation.images.wrist \
#     --output-dir "$OUTPUT_DIR" \
#     --output-repo-id "jadechoghari/piper-demo-annotated1-image" \
#     --push-to-hub \
#     --model "$MODEL" \
#     --window-size 184 \
#     --max-frames-per-window 16 \
#     --subtask-labels "pick_up_yellow_nut_bar" "pick_up_cake" "pick_up_biscuit_pack" "pick_up_soda_can" \
#     --batch-size 2

# Run synthetic data generation (all episodes processed)
# python examples/dataset/annotate_pgen.py \
#     --repo-id "$REPO_ID" \
#     --model "$MODEL" \
#     --output-dir "$OUTPUT_DIR" \
#     --temperature "$TEMPERATURE" \
#     --batch-size "$BATCH_SIZE" \
#     --sample-interval "$SAMPLE_INTERVAL" \
#     --image-key observation.images.base \
#     --num-image-views-per-sample 1

# For faster testing, increase the sample interval:
#   --sample-interval 5.0  # samples every 5 seconds (much faster)

# To push to the Hub after generation, add the --push-to-hub flag.

# Efficient batch processing: several episodes at once
# python examples/dataset/annotate_pgen.py \
#     --repo-id "$REPO_ID" \
#     --model "$MODEL" \
#     --output-dir "$OUTPUT_DIR" \
#     --video-mode \
#     --video-key observation.images.up \
#     --video-batch-size "$BATCH_SIZE" \
#     --sample-interval 1.0
File diff suppressed because it is too large
@@ -0,0 +1,561 @@
#!/usr/bin/env python

# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Image-window subtask annotation for LeRobot datasets using Qwen VLMs.

This script assigns a subtask to each window of consecutive frames by sending
those frames as images to the VLM (instead of a video) for better accuracy.
Supports Qwen2-VL and Qwen3-VL (same models as subtask_annotate.py).

Pipeline:
1. Load a LeRobot dataset (local or Hub).
2. For each episode, slide a window over frame indices.
3. For each window, load the corresponding images (from image_key or decoded video_key).
4. Send the window of images to the VLM with the same skill prompt; get one subtask name.
5. Assign that subtask to all frames in the window.
6. Write subtasks.parquet and add subtask_index via add_features (same as subtask_annotate).

Usage:
    python -m lerobot.data_processing.annotations.subtask_annotate_image \\
        --data-dir /path/to/dataset --camera-key observation.images.base \\
        --window-size 8 --stride 8 --output-dir ./output
"""

from __future__ import annotations

import argparse
import random
import textwrap
from pathlib import Path

import numpy as np
import PIL.Image
import torch
from rich.console import Console

from lerobot.datasets.lerobot_dataset import LeRobotDataset

# Reuse data structures and save/load from the video-based annotator
from lerobot.data_processing.annotations.subtask_annotate import (
    EpisodeSkills,
    Skill,
    load_skill_annotations,
    save_skill_annotations,
)


def create_window_skill_prompt(
    coarse_goal: str | None = None,
    subtask_labels: list[str] | None = None,
) -> str:
    """Prompt for labeling a single window of frames with one atomic skill.

    If subtask_labels are provided, the model must choose exactly one from that list.
    """
    goal_context = f'The overall goal is: "{coarse_goal}".\n\n' if coarse_goal else ""
    if subtask_labels:
        labels_list = ", ".join(f'"{label}"' for label in subtask_labels)
        label_instruction = (
            f"You must choose exactly ONE skill from this list: [{labels_list}]. "
            "Do not create new labels. Reply with only that label.\n\n"
        )
    else:
        label_instruction = ""
    return textwrap.dedent(f"""\
        # Role
        You are a Robotics Vision System that labels short clips from robot manipulation demonstrations.

        # Task
        {goal_context}{label_instruction}The following images are consecutive frames from a single short clip of a robot demonstration.
        What single atomic manipulation skill is being performed in this clip?

        # Requirements
        - Reply with ONLY one short skill name (e.g. "pick up object", "move arm left", "release gripper").
        - No explanation, no timestamps, no JSON. Just the skill name.
        """).strip()


def _run_image_segmenter(
    self,
    images: list[PIL.Image.Image],
    coarse_goal: str | None,
    subtask_labels: list[str] | None = None,
) -> str:
    """Shared inference for Qwen2-VL and Qwen3-VL image window labeling."""
    prompt = create_window_skill_prompt(coarse_goal, subtask_labels)
    content = [{"type": "image", "image": img} for img in images]
    content.append(
        {"type": "text", "text": "What single atomic skill is shown in these frames? Reply with only the skill name."}
    )

    messages = [
        {"role": "system", "content": [{"type": "text", "text": prompt}]},
        {"role": "user", "content": content},
    ]
    text = self.processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    image_inputs, video_inputs = self.process_vision_info(messages)
    inputs = self.processor(
        text=[text],
        images=image_inputs,
        videos=video_inputs,
        padding=True,
        return_tensors="pt",
    ).to(self.device)

    with torch.no_grad():
        generated_ids = self.model.generate(**inputs, max_new_tokens=128, do_sample=False)

    response = self.processor.batch_decode(
        [out[len(inp) :] for inp, out in zip(inputs.input_ids, generated_ids)],
        skip_special_tokens=True,
    )[0].strip()
    skill_name = response.split("\n")[0].strip().strip('."')
    return skill_name if skill_name else "unknown"


def _run_image_segmenter_batch(
    self,
    batch_images: list[list[PIL.Image.Image]],
    coarse_goal: str | None,
    subtask_labels: list[str] | None = None,
) -> list[str]:
    """Run the VLM on multiple windows at once; returns one skill name per window."""
    if not batch_images:
        return []
    prompt = create_window_skill_prompt(coarse_goal, subtask_labels)
    all_texts = []
    all_image_inputs = []
    all_video_inputs = []
    for images in batch_images:
        content = [{"type": "image", "image": img} for img in images]
        content.append(
            {"type": "text", "text": "What single atomic skill is shown in these frames? Reply with only the skill name."}
        )
        messages = [
            {"role": "system", "content": [{"type": "text", "text": prompt}]},
            {"role": "user", "content": content},
        ]
        text = self.processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
        image_inputs, video_inputs = self.process_vision_info(messages)
        all_texts.append(text)
        if image_inputs is not None:
            all_image_inputs.extend(image_inputs if isinstance(image_inputs, list) else [image_inputs])
        if video_inputs is not None:
            all_video_inputs.extend(video_inputs if isinstance(video_inputs, list) else [video_inputs])
    inputs = self.processor(
        text=all_texts,
        images=all_image_inputs if all_image_inputs else None,
        videos=all_video_inputs if all_video_inputs else None,
        padding=True,
        return_tensors="pt",
    ).to(self.device)
    with torch.no_grad():
        generated_ids = self.model.generate(**inputs, max_new_tokens=128, do_sample=False)
    responses = self.processor.batch_decode(
        [out[len(inp) :] for inp, out in zip(inputs.input_ids, generated_ids)],
        skip_special_tokens=True,
    )
    return [(r.split("\n")[0].strip().strip('."') or "unknown") for r in responses]


class Qwen2VLImageSegmenter:
    """Uses Qwen2-VL to assign one skill name to a window of images (same model as subtask_annotate)."""

    def __init__(self, model_name: str, device: str = "cuda", torch_dtype: torch.dtype = torch.bfloat16):
        from qwen_vl_utils import process_vision_info
        from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

        self.console = Console()
        self.device = device
        self.process_vision_info = process_vision_info
        self.console.print(f"[cyan]Loading Qwen2-VL for image-window labeling: {model_name}...[/cyan]")
        self.model = Qwen2VLForConditionalGeneration.from_pretrained(
            model_name, torch_dtype=torch_dtype, device_map=device, trust_remote_code=True
        )
        self.processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
        self.console.print(f"[green]✓ Model loaded on {device}[/green]")

    def segment_skill_from_images(
        self,
        images: list[PIL.Image.Image],
        coarse_goal: str | None = None,
        subtask_labels: list[str] | None = None,
    ) -> str:
        """Return a single skill name for the given window of images."""
        return _run_image_segmenter(self, images, coarse_goal, subtask_labels)

    def segment_skill_from_images_batch(
        self,
        batch_images: list[list[PIL.Image.Image]],
        coarse_goal: str | None = None,
        subtask_labels: list[str] | None = None,
    ) -> list[str]:
        """Return one skill name per window; processes multiple windows in one forward pass."""
        return _run_image_segmenter_batch(self, batch_images, coarse_goal, subtask_labels)


class Qwen3VLImageSegmenter:
    """Uses Qwen3-VL (MoE) to assign one skill name to a window of images."""

    def __init__(self, model_name: str, device: str = "cuda", torch_dtype: torch.dtype = torch.bfloat16):
        from qwen_vl_utils import process_vision_info
        from transformers import AutoProcessor, Qwen3VLMoeForConditionalGeneration

        self.console = Console()
        self.device = device
        self.process_vision_info = process_vision_info
        self.console.print(f"[cyan]Loading Qwen3-VL for image-window labeling: {model_name}...[/cyan]")
        self.model = Qwen3VLMoeForConditionalGeneration.from_pretrained(
            model_name, torch_dtype=torch_dtype, device_map=device, trust_remote_code=True
        )
        self.processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
        self.console.print(f"[green]✓ Model loaded on {device}[/green]")

    def segment_skill_from_images(
self,
|
||||
images: list[PIL.Image.Image],
|
||||
coarse_goal: str | None = None,
|
||||
subtask_labels: list[str] | None = None,
|
||||
) -> str:
|
||||
"""Return a single skill name for the given window of images."""
|
||||
return _run_image_segmenter(self, images, coarse_goal, subtask_labels)
|
||||
|
||||
def segment_skill_from_images_batch(
|
||||
self,
|
||||
batch_images: list[list[PIL.Image.Image]],
|
||||
coarse_goal: str | None = None,
|
||||
subtask_labels: list[str] | None = None,
|
||||
) -> list[str]:
|
||||
"""Return one skill name per window; processes multiple windows in one forward pass."""
|
||||
return _run_image_segmenter_batch(self, batch_images, coarse_goal, subtask_labels)
|
||||
|
||||
|
||||
def get_image_segmenter(
|
||||
model_name: str,
|
||||
device: str = "cuda",
|
||||
torch_dtype: torch.dtype = torch.bfloat16,
|
||||
):
|
||||
"""Return the appropriate image-window segmenter for the model (Qwen2-VL or Qwen3-VL)."""
|
||||
model_lower = model_name.lower()
|
||||
if "qwen3" in model_lower:
|
||||
return Qwen3VLImageSegmenter(model_name, device, torch_dtype)
|
||||
return Qwen2VLImageSegmenter(model_name, device, torch_dtype)
|
||||
|
||||
|
||||
def frame_to_pil(frame_value) -> PIL.Image.Image:
|
||||
"""Convert a single frame from dataset (tensor or PIL or path) to PIL.Image."""
|
||||
if isinstance(frame_value, PIL.Image.Image):
|
||||
return frame_value
|
||||
if isinstance(frame_value, (str, Path)):
|
||||
return PIL.Image.open(frame_value).convert("RGB")
|
||||
if hasattr(frame_value, "numpy"):
|
||||
arr = frame_value.numpy()
|
||||
else:
|
||||
arr = np.asarray(frame_value)
|
||||
if arr.ndim == 3 and arr.shape[0] in (1, 3, 4):
|
||||
arr = np.transpose(arr, (1, 2, 0))
|
||||
if arr.dtype == np.float32 or arr.dtype == np.float64:
|
||||
arr = (np.clip(arr, 0, 1) * 255).astype(np.uint8)
|
||||
elif arr.dtype != np.uint8:
|
||||
arr = np.clip(arr, 0, 255).astype(np.uint8)
|
||||
if arr.shape[-1] == 1:
|
||||
arr = np.repeat(arr, 3, axis=-1)
|
||||
return PIL.Image.fromarray(arr)
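The array handling in `frame_to_pil` can be exercised on its own. This sketch (NumPy only, no PIL; `normalize_frame_array` is a name introduced here for illustration) mirrors the layout and dtype normalization steps:

```python
import numpy as np

def normalize_frame_array(arr: np.ndarray) -> np.ndarray:
    # Mirrors frame_to_pil's array handling: CHW -> HWC, floats in [0, 1]
    # -> uint8 in [0, 255], and single-channel frames expanded to 3 channels.
    if arr.ndim == 3 and arr.shape[0] in (1, 3, 4):
        arr = np.transpose(arr, (1, 2, 0))
    if arr.dtype == np.float32 or arr.dtype == np.float64:
        arr = (np.clip(arr, 0, 1) * 255).astype(np.uint8)
    elif arr.dtype != np.uint8:
        arr = np.clip(arr, 0, 255).astype(np.uint8)
    if arr.shape[-1] == 1:
        arr = np.repeat(arr, 3, axis=-1)
    return arr

chw = np.random.rand(3, 32, 32).astype(np.float32)  # torch-style float frame
out = normalize_frame_array(chw)
print(out.shape, out.dtype)  # (32, 32, 3) uint8
```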


def _sample_window_indices(window_length: int, max_frames: int) -> list[int]:
    """Return indices into a window of length window_length, at most max_frames, in order.

    If window_length <= max_frames, returns range(window_length).
    Otherwise returns a sorted random sample of max_frames indices (temporal order preserved).
    """
    if max_frames <= 0 or window_length <= max_frames:
        return list(range(window_length))
    return sorted(random.sample(range(window_length), max_frames))


class SkillAnnotatorImage:
    """Annotates episodes by sliding a window over frames and labeling each window with the VLM."""

    def __init__(
        self,
        segmenter: Qwen2VLImageSegmenter | Qwen3VLImageSegmenter,
        window_size: int = 8,
        stride: int | None = None,
        batch_size: int = 1,
        max_frames_per_window: int | None = None,
        console: Console | None = None,
    ):
        self.segmenter = segmenter
        self.window_size = window_size
        self.stride = stride if stride is not None else window_size
        self.batch_size = max(1, batch_size)
        self.max_frames_per_window = max_frames_per_window
        self.console = console or Console()

    def annotate_dataset(
        self,
        dataset: LeRobotDataset,
        camera_key: str,
        episodes: list[int] | None = None,
        skip_existing: bool = False,
        subtask_labels: list[str] | None = None,
    ) -> dict[int, EpisodeSkills]:
        """Annotate episodes using image windows. camera_key can be an image_key or video_key."""
        episode_indices = episodes or list(range(dataset.meta.total_episodes))
        coarse_goal = self._get_coarse_goal(dataset)
        annotations: dict[int, EpisodeSkills] = {}

        if skip_existing:
            existing = load_skill_annotations(dataset.root)
            if existing and existing.get("episodes"):
                existing_eps = {int(k) for k in existing["episodes"] if existing["episodes"][k].get("skills")}
                episode_indices = [i for i in episode_indices if i not in existing_eps]

        for ep_idx in episode_indices:
            try:
                skills = self._annotate_episode(dataset, ep_idx, camera_key, coarse_goal, subtask_labels)
                if skills:
                    annotations[ep_idx] = EpisodeSkills(
                        episode_index=ep_idx,
                        description=coarse_goal,
                        skills=skills,
                    )
                    self.console.print(f"[green]✓ Episode {ep_idx}: {len(skills)} window skills[/green]")
                else:
                    self.console.print(f"[yellow]⚠ Episode {ep_idx}: no skills[/yellow]")
            except Exception as e:
                self.console.print(f"[red]Episode {ep_idx} failed: {e}[/red]")

        return annotations

    def _get_coarse_goal(self, dataset: LeRobotDataset) -> str:
        if dataset.meta.tasks is not None and len(dataset.meta.tasks) > 0:
            return str(dataset.meta.tasks.index[0])
        return "Perform the demonstrated manipulation task."

    def _annotate_episode(
        self,
        dataset: LeRobotDataset,
        episode_index: int,
        camera_key: str,
        coarse_goal: str,
        subtask_labels: list[str] | None = None,
    ) -> list[Skill]:
        ep = dataset.meta.episodes[episode_index]
        ep_from = int(ep["dataset_from_index"])
        ep_to = int(ep["dataset_to_index"])
        length = ep_to - ep_from
        fps = dataset.meta.fps
        if length == 0:
            return []

        # Collect full windows: (images, t_start, t_end) using frame timestamps.
        # If max_frames_per_window is set and the window is larger, sample that many frames (order preserved).
        window_specs: list[tuple[list[PIL.Image.Image], float, float]] = []
        start = 0
        while start + self.window_size <= length:
            offsets = _sample_window_indices(
                self.window_size,
                self.max_frames_per_window or self.window_size,
            )
            frame_indices = [ep_from + start + i for i in offsets]
            images = []
            t_start = float(dataset[frame_indices[0]]["timestamp"].item())
            for idx in frame_indices:
                item = dataset[idx]
                images.append(frame_to_pil(item[camera_key]))
            t_end = t_start + self.window_size / fps
            window_specs.append((images, t_start, t_end))
            start += self.stride

        # Last partial window
        if start < length:
            partial_len = ep_to - (ep_from + start)
            offsets = _sample_window_indices(
                partial_len,
                self.max_frames_per_window or partial_len,
            )
            frame_indices = [ep_from + start + i for i in offsets]
            images = []
            t_start = float(dataset[frame_indices[0]]["timestamp"].item())
            for idx in frame_indices:
                item = dataset[idx]
                images.append(frame_to_pil(item[camera_key]))
            t_end = float(dataset[frame_indices[-1]]["timestamp"].item()) + 1.0 / fps
            window_specs.append((images, t_start, t_end))

        # Run in batches
        skills: list[Skill] = []
        for i in range(0, len(window_specs), self.batch_size):
            chunk = window_specs[i : i + self.batch_size]
            batch_images = [spec[0] for spec in chunk]
            if len(batch_images) > 1:
                skill_names = self.segmenter.segment_skill_from_images_batch(
                    batch_images, coarse_goal, subtask_labels
                )
            else:
                skill_names = [
                    self.segmenter.segment_skill_from_images(
                        batch_images[0], coarse_goal, subtask_labels
                    )
                ]
            for (_, t_start, t_end), name in zip(chunk, skill_names, strict=True):
                skills.append(Skill(name=name, start=t_start, end=t_end))

        return skills
def main():
    parser = argparse.ArgumentParser(
        description="Image-window subtask annotation using Qwen VLM (frames as images for better accuracy)",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog=textwrap.dedent("""\
            Examples:
              python -m lerobot.data_processing.annotations.subtask_annotate_image \\
                --data-dir /path/to/dataset --camera-key observation.images.base \\
                --window-size 8 --output-dir ./output

              python -m lerobot.data_processing.annotations.subtask_annotate_image \\
                --repo-id user/dataset --camera-key observation.images.base \\
                --window-size 6 --stride 3 --model Qwen/Qwen2-VL-7B-Instruct

              # Use Qwen3-VL (MoE)
              python -m lerobot.data_processing.annotations.subtask_annotate_image \\
                --data-dir /path/to/dataset --camera-key observation.images.base \\
                --model Qwen/Qwen3-VL-30B-A3B-Instruct
        """),
    )
    data_group = parser.add_mutually_exclusive_group(required=True)
    data_group.add_argument("--data-dir", type=str, help="Path to local LeRobot dataset")
    data_group.add_argument("--repo-id", type=str, help="HuggingFace Hub dataset repository ID")

    parser.add_argument(
        "--camera-key",
        type=str,
        required=True,
        help="Image or video observation key (e.g. observation.images.base)",
    )
    parser.add_argument(
        "--model",
        type=str,
        default="Qwen/Qwen2-VL-7B-Instruct",
        help="VLM model: Qwen2-VL or Qwen3-VL (default: Qwen/Qwen2-VL-7B-Instruct)",
    )
    parser.add_argument("--device", type=str, default="cuda")
    parser.add_argument(
        "--window-size",
        type=int,
        default=8,
        help="Number of frames per window (default: 8)",
    )
    parser.add_argument(
        "--stride",
        type=int,
        default=None,
        help="Stride for sliding window (default: window_size = non-overlapping)",
    )
    parser.add_argument(
        "--batch-size",
        type=int,
        default=1,
        help="Number of windows to process in one VLM call (default: 1; increase for speed)",
    )
    parser.add_argument(
        "--max-frames-per-window",
        type=int,
        default=None,
        metavar="N",
        help="If window has more than N frames, randomly sample N frames (order kept) to avoid OOM (e.g. 16)",
    )
    parser.add_argument("--episodes", type=int, nargs="+", help="Episode indices to annotate (default: all)")
    parser.add_argument("--skip-existing", action="store_true", help="Skip episodes that already have annotations")
    parser.add_argument(
        "--subtask-labels",
        type=str,
        nargs="*",
        default=None,
        help="Closed vocabulary: model must choose only from these labels",
    )
    parser.add_argument("--output-dir", type=str, help="Output directory for dataset with subtask_index")
    parser.add_argument("--output-repo-id", type=str, help="Output repo id (default: <repo_id>_with_subtasks)")
    parser.add_argument("--push-to-hub", action="store_true")

    args = parser.parse_args()
    console = Console()

    # Load dataset
    console.print("[cyan]Loading dataset...[/cyan]")
    if args.data_dir:
        dataset = LeRobotDataset(repo_id="local/dataset", root=args.data_dir, download_videos=False)
    else:
        dataset = LeRobotDataset(repo_id=args.repo_id, download_videos=True)
    camera_keys = dataset.meta.camera_keys
    if args.camera_key not in camera_keys:
        console.print(f"[red]Error: camera key '{args.camera_key}' not in {camera_keys}[/red]")
        return
    console.print(f"[green]✓ Loaded dataset, {dataset.meta.total_episodes} episodes[/green]")

    # Same Qwen VLM as subtask_annotate (Qwen2-VL or Qwen3-VL), image windows instead of video
    segmenter = get_image_segmenter(args.model, args.device, torch.bfloat16)

    annotator = SkillAnnotatorImage(
        segmenter=segmenter,
        window_size=args.window_size,
        stride=args.stride,
        batch_size=args.batch_size,
        max_frames_per_window=args.max_frames_per_window,
        console=console,
    )
    annotations = annotator.annotate_dataset(
        dataset=dataset,
        camera_key=args.camera_key,
        episodes=args.episodes,
        skip_existing=args.skip_existing,
        subtask_labels=args.subtask_labels,
    )

    if not annotations:
        console.print("[yellow]No annotations to save.[/yellow]")
        return

    output_dir = Path(args.output_dir) if args.output_dir else None
    output_repo_id = args.output_repo_id
    new_dataset = save_skill_annotations(dataset, annotations, output_dir, output_repo_id)

    total_skills = sum(len(a.skills) for a in annotations.values())
    console.print(f"[bold green]✓ Done.[/bold green] Episodes: {len(annotations)}, total window skills: {total_skills}")
    console.print(f"  Dataset with subtask_index: {new_dataset.root}")

    if args.push_to_hub and not args.data_dir:
        console.print("[cyan]Pushing to Hub...[/cyan]")
        try:
            new_dataset.push_to_hub(push_videos=False)
            console.print("[green]✓ Pushed.[/green]")
        except Exception as e:
            console.print(f"[red]Push failed: {e}[/red]")


if __name__ == "__main__":
    main()
@@ -59,6 +59,7 @@ from lerobot.datasets.utils import (
    load_stats,
    load_subtasks,
    load_tasks,
    load_tasks_high_level,
    update_chunk_file_indices,
    validate_episode_buffer,
    validate_frame,
@@ -163,6 +164,7 @@ class LeRobotDatasetMetadata:
        self.info = load_info(self.root)
        check_version_compatibility(self.repo_id, self._version, CODEBASE_VERSION)
        self.tasks = load_tasks(self.root)
        self.tasks_high_level = load_tasks_high_level(self.root)
        self.subtasks = load_subtasks(self.root)
        self.episodes = load_episodes(self.root)
        self.stats = load_stats(self.root)
@@ -520,6 +522,7 @@ class LeRobotDatasetMetadata:
        _validate_feature_names(features)

        obj.tasks = None
        obj.tasks_high_level = None
        obj.subtasks = None
        obj.episodes = None
        obj.stats = None
@@ -1067,7 +1070,17 @@ class LeRobotDataset(torch.utils.data.Dataset):
        if len(self.meta.video_keys) > 0:
            current_ts = item["timestamp"].item()
            query_timestamps = self._get_query_timestamps(current_ts, query_indices)
            try:
                video_frames = self._query_videos(query_timestamps, ep_idx)
            except Exception as e:
                print("\n" + "=" * 120)
                print("[VIDEO DECODE FAILURE]")
                print(f"error={e}")
                print(f"item={item}")
                print(f"query_indices={query_indices}")
                print(f"query_timestamps={query_timestamps}")
                print(f"ep_idx={ep_idx}")
                print("=" * 120 + "\n")
                raise
            item = {**video_frames, **item}

        if self.image_transforms is not None:
@@ -1078,6 +1091,14 @@ class LeRobotDataset(torch.utils.data.Dataset):
        # Add task as a string
        task_idx = item["task_index"].item()
        item["task"] = self.meta.tasks.iloc[task_idx].name

        # Optionally add the high-level task strings
        if "task_index_high_level" in self.features:
            high_level_task_idx = item["task_index_high_level"].item()
            item["robot_utterance"] = self.meta.tasks_high_level.iloc[high_level_task_idx]["robot_utterance"]
            item["user_prompt"] = self.meta.tasks_high_level.iloc[high_level_task_idx]["user_prompt"]

        # Add subtask information if available
        if "subtask_index" in self.features and self.meta.subtasks is not None:
@@ -62,6 +62,8 @@ CHUNK_FILE_PATTERN = "chunk-{chunk_index:03d}/file-{file_index:03d}"
DEFAULT_TASKS_PATH = "meta/tasks.parquet"
DEFAULT_EPISODES_PATH = EPISODES_DIR + "/" + CHUNK_FILE_PATTERN + ".parquet"
DEFAULT_TASKS_HIGH_LEVEL_PATH = "meta/tasks_high_level.parquet"
DEFAULT_SUBTASKS_PATH = "meta/subtasks.parquet"
DEFAULT_DATA_PATH = DATA_DIR + "/" + CHUNK_FILE_PATTERN + ".parquet"
DEFAULT_VIDEO_PATH = VIDEO_DIR + "/{video_key}/" + CHUNK_FILE_PATTERN + ".mp4"
DEFAULT_IMAGE_PATH = "images/{image_key}/episode-{episode_index:06d}/frame-{frame_index:06d}.png"
@@ -353,6 +355,20 @@ def load_tasks(local_dir: Path) -> pandas.DataFrame:
    tasks = pd.read_parquet(local_dir / DEFAULT_TASKS_PATH)
    return tasks


def load_tasks_high_level(local_dir: Path) -> pandas.DataFrame | None:
    """Load high-level tasks from tasks_high_level.parquet if it exists."""
    tasks_high_level_path = local_dir / DEFAULT_TASKS_HIGH_LEVEL_PATH
    if tasks_high_level_path.exists():
        return pd.read_parquet(tasks_high_level_path)
    return None


def load_subtasks(local_dir: Path) -> pandas.DataFrame | None:
    """Load subtasks from subtasks.parquet if it exists."""
    subtasks_path = local_dir / DEFAULT_SUBTASKS_PATH
    if subtasks_path.exists():
        return pd.read_parquet(subtasks_path)
    return None

@@ -34,6 +34,7 @@ from lerobot.policies.diffusion.configuration_diffusion import DiffusionConfig
from lerobot.policies.groot.configuration_groot import GrootConfig
from lerobot.policies.pi0.configuration_pi0 import PI0Config
from lerobot.policies.pi05.configuration_pi05 import PI05Config
from lerobot.policies.pi05_full.configuration_pi05 import PI05FullConfig
from lerobot.policies.pretrained import PreTrainedPolicy
from lerobot.policies.sac.configuration_sac import SACConfig
from lerobot.policies.sac.reward_model.configuration_classifier import RewardClassifierConfig
@@ -390,6 +391,13 @@ def make_pre_post_processors(
            config=policy_cfg,
            dataset_stats=kwargs.get("dataset_stats"),
        )
    elif isinstance(policy_cfg, PI05FullConfig):
        from lerobot.policies.pi05_full.processor_pi05 import make_pi05_full_pre_post_processors

        processors = make_pi05_full_pre_post_processors(
            config=policy_cfg,
            dataset_stats=kwargs.get("dataset_stats"),
        )
    else:
        try:
@@ -0,0 +1,49 @@
# π₀.₅ (pi05)

This repository contains the Hugging Face port of **π₀.₅**, adapted from [OpenPI](https://github.com/Physical-Intelligence/openpi) by Physical Intelligence.
It is designed as a **Vision-Language-Action model with open-world generalization**.

---

## Model Overview

| Feature              | π₀                                                     | π₀.₅                                      |
| -------------------- | ------------------------------------------------------ | ----------------------------------------- |
| Time Conditioning    | Concatenates time with actions via `action_time_mlp_*` | Uses `time_mlp_*` for AdaRMS conditioning |
| AdaRMS               | Not used                                               | Used in action expert                     |
| Tokenizer Length     | 48 tokens                                              | 200 tokens                                |
| Discrete State Input | False (uses `state_proj` layer)                        | True                                      |
| Parameter Count      | Higher (includes state embedding)                      | Lower (no state embedding)                |

---
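The AdaRMS row in the table refers to adaptive RMS normalization in the action expert. As a rough standalone sketch (NumPy only; `rmsnorm` and `ada_rmsnorm` are illustrative names and shapes, not the actual pi05 implementation), a conditioning vector such as the time embedding produces a per-feature scale and shift applied after RMS normalization:

```python
import numpy as np

def rmsnorm(x, eps=1e-6):
    # Plain RMSNorm: rescale features by their root-mean-square.
    return x / np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)

def ada_rmsnorm(x, cond, eps=1e-6):
    # AdaRMS-style conditioning (illustrative): the conditioning vector is
    # split into a scale and a shift that modulate the normalized activations.
    scale, shift = np.split(cond, 2, axis=-1)
    return rmsnorm(x, eps) * (1.0 + scale) + shift

x = np.random.randn(2, 8)
cond = np.zeros((2, 16))  # zero conditioning reduces to plain RMSNorm
print(np.allclose(ada_rmsnorm(x, cond), rmsnorm(x)))  # True
```

---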
## Citation

If you use this work, please cite both **OpenPI** and the π₀.₅ paper:

```bibtex
@misc{openpi2024,
  author       = {Physical Intelligence Lab},
  title        = {OpenPI: PyTorch Implementation of π0 and π0.5 Policies},
  year         = {2024},
  publisher    = {GitHub},
  howpublished = {\url{https://github.com/Physical-Intelligence/openpi}},
  license      = {Apache-2.0}
}

@misc{intelligence2025pi05visionlanguageactionmodelopenworld,
  title         = {π₀.₅: a Vision-Language-Action Model with Open-World Generalization},
  author        = {Physical Intelligence and Kevin Black and Noah Brown and James Darpinian and Karan Dhabalia and Danny Driess and Adnan Esmail and Michael Equi and Chelsea Finn and Niccolo Fusai and Manuel Y. Galliker and Dibya Ghosh and Lachy Groom and Karol Hausman and Brian Ichter and Szymon Jakubczak and Tim Jones and Liyiming Ke and Devin LeBlanc and Sergey Levine and Adrian Li-Bell and Mohith Mothukuri and Suraj Nair and Karl Pertsch and Allen Z. Ren and Lucy Xiaoyang Shi and Laura Smith and Jost Tobias Springenberg and Kyle Stachowicz and James Tanner and Quan Vuong and Homer Walke and Anna Walling and Haohuan Wang and Lili Yu and Ury Zhilinsky},
  year          = {2025},
  eprint        = {2504.16054},
  archivePrefix = {arXiv},
  primaryClass  = {cs.LG},
  url           = {https://arxiv.org/abs/2504.16054},
}
```

---

## License

This port follows the **Apache 2.0 License**, consistent with the original [OpenPI repository](https://github.com/Physical-Intelligence/openpi).
@@ -0,0 +1,21 @@
#!/usr/bin/env python

# Copyright 2025 Physical Intelligence and The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from .configuration_pi05 import PI05FullConfig
from .modeling_pi05 import PI05FullPolicy
from .processor_pi05 import make_pi05_full_pre_post_processors

__all__ = ["PI05FullConfig", "PI05FullPolicy", "make_pi05_full_pre_post_processors"]
@@ -0,0 +1,50 @@
#!/bin/bash

# Example script to run synthetic data generation with Qwen VLM
# This generates user prompts and robot utterances for hierarchical policy training

# Configuration
REPO_ID="lerobot/libero_10"
MODEL="Qwen/Qwen3-VL-30B-A3B-Instruct"
# or: MODEL="Qwen/Qwen2-VL-7B-Instruct"

OUTPUT_DIR="/fsx/jade_choghari/outputs/libero-10-annotate-high"

BATCH_SIZE=16
TEMPERATURE=0.9
SAMPLE_INTERVAL=5.0  # generate dialogue every 5 seconds (all episodes processed)

# Run subtask annotation
# python /admin/home/jade_choghari/lerobot/src/lerobot/policies/pi05_full/annotate/subtask_annotate.py \
#     --repo-id "$REPO_ID" \
#     --video-key observation.images.image \
#     --output-dir "$OUTPUT_DIR" \
#     --skip-existing \
#     --output-repo-id "jadechoghari/libero10-annotate" \
#     --batch-size "$BATCH_SIZE"

# Run synthetic data generation (all episodes processed)
# python examples/dataset/annotate_pgen.py \
#     --repo-id "$REPO_ID" \
#     --model "$MODEL" \
#     --output-dir "$OUTPUT_DIR" \
#     --temperature "$TEMPERATURE" \
#     --batch-size "$BATCH_SIZE" \
#     --sample-interval "$SAMPLE_INTERVAL" \
#     --image-key observation.images.base \
#     --num-image-views-per-sample 1

# For faster testing, increase the sample interval:
#     --sample-interval 5.0  # samples every 5 seconds (much faster)

# To push to the Hub after generation, add the --push-to-hub flag.

# Run high-level annotation in video mode with batched windows:
python /admin/home/jade_choghari/lerobot/src/lerobot/policies/pi05_full/annotate/high_level_annotate.py \
    --data-dir "/fsx/jade_choghari/outputs/libero-10-annotate" \
    --output-dir "$OUTPUT_DIR" \
    --video-mode \
    --video-key observation.images.image \
    --video-batch-size "$BATCH_SIZE" \
    --sample-interval 5.0
File diff suppressed because it is too large
@@ -0,0 +1,52 @@
import torch

from lerobot.datasets.lerobot_dataset import LeRobotDataset
from lerobot.policies.factory import make_pre_post_processors
from lerobot.configs.policies import PreTrainedConfig

# /fsx/jade_choghari/data/libero_10_subtasks_kw_converted
dataset = LeRobotDataset(repo_id="lerobot/libero_10_image_subtask")

dataloader = torch.utils.data.DataLoader(
    dataset,
    num_workers=0,
    batch_size=2,
    shuffle=True,
)

cfg = PreTrainedConfig.from_pretrained(
    pretrained_name_or_path="/fsx/jade_choghari/models/pi05-base",
)
cfg.dtype = "bfloat16"

pre_processor, post_processor = make_pre_post_processors(
    policy_cfg=cfg,
    pretrained_path="/fsx/jade_choghari/models/pi05-base",
)
batch = next(iter(dataloader))
breakpoint()
batch1 = pre_processor(batch)
breakpoint()
print(batch.keys())
# print(batch['task_index_high_level'].shape)
# print(batch['task_index_high_level'])
# print(batch['user_prompt'][0])
# print(batch['robot_utterance'][0])
# print(batch['task'][0])

valid_episode_list = []
for episode_idx in range(len(dataset.meta.episodes)):
    subtask_index = dataset[episode_idx]["subtask_index"]  # touch subtask_index to verify it decodes
    valid_episode_list.append(episode_idx)

print(len(valid_episode_list))

# Read this parquet: /fsx/jade_choghari/outputs/pgen_annotations1/meta/tasks.parquet
# import pandas as pd
# tasks_df = pd.read_parquet('/fsx/jade_choghari/outputs/pgen_annotations1/meta/tasks.parquet')

# # print all columns
# print(tasks_df.columns)
# breakpoint()
@@ -0,0 +1,49 @@
#!/bin/bash

# Example script to run synthetic data generation with Qwen VLM
# This generates user prompts and robot utterances for hierarchical policy training

# Configuration
REPO_ID="jadechoghari/collect-data"
MODEL="Qwen/Qwen3-VL-30B-A3B-Instruct"
# or: MODEL="Qwen/Qwen2-VL-7B-Instruct"

OUTPUT_DIR="/fsx/jade_choghari/outputs/collect-data-pgen_new"

BATCH_SIZE=32
TEMPERATURE=0.9
SAMPLE_INTERVAL=5.0  # generate dialogue every 5 seconds (all episodes processed)

# Run subtask annotation
python /admin/home/jade_choghari/lerobot/src/lerobot/policies/pi05_full/annotate/subtask_annotate.py \
    --repo-id "$REPO_ID" \
    --video-key observation.images.base \
    --output-dir "$OUTPUT_DIR" \
    --output-repo-id "jadechoghari/collect-data-with-subtasks"

# Run synthetic data generation (all episodes processed)
# python examples/dataset/annotate_pgen.py \
#     --repo-id "$REPO_ID" \
#     --model "$MODEL" \
#     --output-dir "$OUTPUT_DIR" \
#     --temperature "$TEMPERATURE" \
#     --batch-size "$BATCH_SIZE" \
#     --sample-interval "$SAMPLE_INTERVAL" \
#     --image-key observation.images.base \
#     --num-image-views-per-sample 1

# For faster testing, increase the sample interval:
#     --sample-interval 5.0  # samples every 5 seconds (much faster)

# To push to the Hub after generation, add the --push-to-hub flag.

# Batched video-mode processing:
# python examples/dataset/annotate_pgen.py \
#     --repo-id "$REPO_ID" \
#     --model "$MODEL" \
#     --output-dir "$OUTPUT_DIR" \
#     --video-mode \
#     --video-key observation.images.up \
#     --video-batch-size "$BATCH_SIZE" \
#     --sample-interval 1.0
File diff suppressed because it is too large
@@ -0,0 +1,183 @@
#!/usr/bin/env python

# Copyright 2025 Physical Intelligence and The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from dataclasses import dataclass, field

from lerobot.configs.policies import PreTrainedConfig
from lerobot.configs.types import FeatureType, NormalizationMode, PolicyFeature
from lerobot.optim.optimizers import AdamWConfig
from lerobot.optim.schedulers import CosineDecayWithWarmupSchedulerConfig
from lerobot.policies.rtc.configuration_rtc import RTCConfig
from lerobot.utils.constants import ACTION, OBS_IMAGES, OBS_STATE

DEFAULT_IMAGE_SIZE = 224


@PreTrainedConfig.register_subclass("pi05_full")
@dataclass
class PI05FullConfig(PreTrainedConfig):
    paligemma_variant: str = "gemma_2b"
    action_expert_variant: str = "gemma_300m"
    dtype: str = "float32"  # Options: "bfloat16", "float32"

    n_obs_steps: int = 1
    chunk_size: int = 50  # Number of action steps to predict ("action_horizon" in openpi)
    n_action_steps: int = 50  # Number of action steps to execute

    # Shorter state and action vectors will be padded to these dimensions
    max_state_dim: int = 32
    max_action_dim: int = 32

    # Flow matching parameters: see openpi `PI0Pytorch`
    num_inference_steps: int = 10
    time_sampling_beta_alpha: float = 1.5
    time_sampling_beta_beta: float = 1.0
    time_sampling_scale: float = 0.999
    time_sampling_offset: float = 0.001
    min_period: float = 4e-3
    max_period: float = 4.0

    # Real-Time Chunking (RTC) configuration
    rtc_config: RTCConfig | None = None

    image_resolution: tuple[int, int] = (
        DEFAULT_IMAGE_SIZE,
        DEFAULT_IMAGE_SIZE,
    )  # see openpi `preprocessing_pytorch.py`

    # Add empty images. Used to add empty cameras when no image features are present.
    empty_cameras: int = 0

    normalization_mapping: dict[str, NormalizationMode] = field(
        default_factory=lambda: {
            "VISUAL": NormalizationMode.IDENTITY,
            "STATE": NormalizationMode.MEAN_STD,  # Pi0.5 uses quantiles for state
            "ACTION": NormalizationMode.MEAN_STD,  # Pi0.5 uses quantiles for action
        }
    )

    action_tokenizer_name: str = "physical-intelligence/fast"
    text_tokenizer_name: str = "google/paligemma-3b-pt-224"
    max_action_tokens: int = 256
    fast_skip_tokens: int = 128

    # Subtask decoding settings
    max_decoding_steps: int = 200
    temperature: float = 0.0
    subtask_regeneration_interval: float = 1.0  # Regenerate subtask tokens every N seconds (0 = every call)

    # Training settings
    gradient_checkpointing: bool = False  # Enable gradient checkpointing for memory optimization
    compile_model: bool = False  # Whether to use torch.compile for model optimization
    compile_mode: str = "max-autotune"  # Torch compile mode
    device: str | None = None  # Device to use for the model (None = auto-detect)

    # Finetuning settings
    freeze_vision_encoder: bool = False  # Freeze only the vision encoder
    train_expert_only: bool = False  # Freeze entire VLM, train only action expert and projections
    knowledge_insulation: bool = True  # Enable knowledge insulation in attention (blocks gradients from action to VLM K/V)

    # Loss weights (used when knowledge_insulation is enabled)
    loss_weight_flow: float = 1.0  # Weight for flow matching MSE loss (continuous actions)
    loss_weight_action_ce: float = 1.0  # Weight for FAST action token cross-entropy loss
    loss_weight_subtask_ce: float = 1.0  # Weight for subtask token cross-entropy loss

    # Optimizer settings: see openpi `AdamW`
    optimizer_lr: float = 2.5e-5  # see openpi `CosineDecaySchedule: peak_lr`
    optimizer_betas: tuple[float, float] = (0.9, 0.95)
    optimizer_eps: float = 1e-8
    optimizer_weight_decay: float = 0.01
    optimizer_grad_clip_norm: float = 1.0

    # Scheduler settings: see openpi `CosineDecaySchedule`
    # Note: These will auto-scale if --steps < scheduler_decay_steps
    # For example, --steps=3000 will scale warmup to 100 and decay to 3000
    scheduler_warmup_steps: int = 1_000
    scheduler_decay_steps: int = 30_000
    scheduler_decay_lr: float = 2.5e-6

    tokenizer_max_length: int = 48  # see openpi `__post_init__`

    def __post_init__(self):
        super().__post_init__()

        # Validate configuration
        if self.n_action_steps > self.chunk_size:
            raise ValueError(
                f"n_action_steps ({self.n_action_steps}) cannot be greater than chunk_size ({self.chunk_size})"
            )

        if self.paligemma_variant not in ["gemma_300m", "gemma_2b"]:
            raise ValueError(f"Invalid paligemma_variant: {self.paligemma_variant}")

        if self.action_expert_variant not in ["gemma_300m", "gemma_2b"]:
            raise ValueError(f"Invalid action_expert_variant: {self.action_expert_variant}")

        if self.dtype not in ["bfloat16", "float32"]:
            raise ValueError(f"Invalid dtype: {self.dtype}")

    def validate_features(self) -> None:
        """Validate and set up input/output features."""
        for i in range(self.empty_cameras):
            key = OBS_IMAGES + f".empty_camera_{i}"
            empty_camera = PolicyFeature(
                type=FeatureType.VISUAL,
                shape=(3, *self.image_resolution),  # Use configured image resolution
            )
            self.input_features[key] = empty_camera

        if OBS_STATE not in self.input_features:
            state_feature = PolicyFeature(
                type=FeatureType.STATE,
                shape=(self.max_state_dim,),  # Padded to max_state_dim
            )
            self.input_features[OBS_STATE] = state_feature

        if ACTION not in self.output_features:
            action_feature = PolicyFeature(
                type=FeatureType.ACTION,
                shape=(self.max_action_dim,),  # Padded to max_action_dim
            )
            self.output_features[ACTION] = action_feature

    def get_optimizer_preset(self) -> AdamWConfig:
        return AdamWConfig(
            lr=self.optimizer_lr,
            betas=self.optimizer_betas,
            eps=self.optimizer_eps,
            weight_decay=self.optimizer_weight_decay,
            grad_clip_norm=self.optimizer_grad_clip_norm,
        )

    def get_scheduler_preset(self):
        return CosineDecayWithWarmupSchedulerConfig(
            peak_lr=self.optimizer_lr,
            decay_lr=self.scheduler_decay_lr,
            num_warmup_steps=self.scheduler_warmup_steps,
            num_decay_steps=self.scheduler_decay_steps,
        )

    @property
    def observation_delta_indices(self) -> None:
        return None

    @property
    def action_delta_indices(self) -> list:
        return list(range(self.chunk_size))

    @property
    def reward_delta_indices(self) -> None:
        return None
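The config's `__post_init__` checks above enforce that a policy never executes more action steps than it predicts per chunk. As a minimal sketch of that validation pattern (the `MiniPI05Config` class here is a hypothetical stand-in, not part of the repo), dataclasses run `__post_init__` automatically after field assignment, so invalid combinations fail at construction time:

```python
from dataclasses import dataclass


@dataclass
class MiniPI05Config:
    """Hypothetical stand-in reproducing only the __post_init__ checks of PI05FullConfig."""

    chunk_size: int = 50
    n_action_steps: int = 50
    dtype: str = "float32"

    def __post_init__(self):
        # A policy cannot execute more action steps than it predicts per chunk.
        if self.n_action_steps > self.chunk_size:
            raise ValueError(
                f"n_action_steps ({self.n_action_steps}) cannot be greater than chunk_size ({self.chunk_size})"
            )
        if self.dtype not in ["bfloat16", "float32"]:
            raise ValueError(f"Invalid dtype: {self.dtype}")


ok = MiniPI05Config(chunk_size=50, n_action_steps=25)  # valid: executes half the predicted chunk
try:
    MiniPI05Config(chunk_size=50, n_action_steps=60)  # invalid: more steps than the chunk holds
except ValueError as err:
    print(err)
```

Failing fast in `__post_init__` keeps bad hyperparameter combinations from surfacing later as shape errors deep inside training.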
@@ -0,0 +1,92 @@
import torch
from huggingface_hub import HfApi

import lerobot
from lerobot.configs.policies import PreTrainedConfig
from lerobot.datasets.lerobot_dataset import LeRobotDataset, LeRobotDatasetMetadata
from lerobot.policies.factory import make_policy, make_policy_config, make_pre_post_processors
from lerobot.policies.pi05.configuration_pi05 import PI05Config

cfg = PreTrainedConfig.from_pretrained(
    pretrained_name_or_path="/fsx/jade_choghari/models/pi05-base",
)
cfg.dtype = "bfloat16"

pre_processor, post_processor = make_pre_post_processors(
    policy_cfg=cfg,
    pretrained_path="/fsx/jade_choghari/models/pi05-base",
)

# 50 action offsets at 30 fps: k / 30 seconds for k = 0..49
delta_timestamps = {'action': [0.0, 0.03333333333333333, 0.06666666666666667, 0.1, 0.13333333333333333, 0.16666666666666666, 0.2, 0.23333333333333334, 0.26666666666666666, 0.3, 0.3333333333333333, 0.36666666666666664, 0.4, 0.43333333333333335, 0.4666666666666667, 0.5, 0.5333333333333333, 0.5666666666666667, 0.6, 0.6333333333333333, 0.6666666666666666, 0.7, 0.7333333333333333, 0.7666666666666667, 0.8, 0.8333333333333334, 0.8666666666666667, 0.9, 0.9333333333333333, 0.9666666666666667, 1.0, 1.0333333333333334, 1.0666666666666667, 1.1, 1.1333333333333333, 1.1666666666666667, 1.2, 1.2333333333333334, 1.2666666666666666, 1.3, 1.3333333333333333, 1.3666666666666667, 1.4, 1.4333333333333333, 1.4666666666666666, 1.5, 1.5333333333333334, 1.5666666666666667, 1.6, 1.6333333333333333]}

dataset = LeRobotDataset(
    repo_id="local", root="/fsx/jade_choghari/outputs/pgen_annotations1", delta_timestamps=delta_timestamps
)

rename_map = {
    "observation.images.side": "observation.images.base_0_rgb",
    "observation.images.up": "observation.images.left_wrist_0_rgb",
}
policy = make_policy(
    cfg=cfg,
    ds_meta=dataset.meta,
    rename_map=rename_map,
)

dataloader = torch.utils.data.DataLoader(
    dataset,
    num_workers=0,
    batch_size=4,
    shuffle=True,
)

batch = next(iter(dataloader))
breakpoint()
batch = pre_processor(batch)
policy.train()
# run inference
# action = policy.select_action(batch)
loss, loss_dict = policy.forward(batch)
breakpoint()

# import requests
# from PIL import Image
# from transformers import AutoProcessor
#
# model = policy.model.paligemma_with_expert.paligemma
# model = model.to(device="cuda", dtype=torch.bfloat16)
# model.eval()
# prompt = "Describe this image."
# url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
# image = Image.open(requests.get(url, stream=True).raw)
# processor = AutoProcessor.from_pretrained(
#     "google/paligemma-3b-pt-224",
# )
# inputs = processor(image, prompt, return_tensors="pt").to(model.device)
# print("generating...")
# output = model.generate(
#     **inputs,
#     max_new_tokens=50,
#     use_cache=True,  # default dynamic cache
# )
# print(processor.decode(output[0], skip_special_tokens=True))


# # Other model
# from transformers import PaliGemmaForConditionalGeneration
# model = PaliGemmaForConditionalGeneration.from_pretrained(
#     "google/paligemma2-3b-pt-224",
#     torch_dtype=torch.bfloat16,
#     device_map="auto",
# )
# model.eval()
# print("generating...")
# output = model.generate(
#     **inputs,
#     max_new_tokens=100,
#     use_cache=True,  # default dynamic cache
# )
# print("Model 2 output:")
# print(processor.decode(output[0], skip_special_tokens=True))
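The long `delta_timestamps` literal in the script above is just an arithmetic sequence: one time offset per future action step at the dataset frame rate. A sketch, assuming 30 fps and the 50-step chunk size used by this config:

```python
# Each offset is k / fps seconds ahead of the current frame, one per action step
# in the chunk (fps = 30 and chunk_size = 50 are assumed from the script above).
fps = 30
chunk_size = 50
delta_timestamps = {"action": [k / fps for k in range(chunk_size)]}

print(delta_timestamps["action"][:3])  # first few offsets
print(delta_timestamps["action"][-1])  # last offset, 49/30 seconds
```

Generating the list keeps the offsets consistent if `fps` or `chunk_size` ever changes, instead of maintaining a 50-element literal by hand.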
File diff suppressed because it is too large
@@ -0,0 +1,194 @@
#!/usr/bin/env python

# Copyright 2025 Physical Intelligence and The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from copy import deepcopy
from dataclasses import dataclass
from typing import Any

import numpy as np
import torch

from lerobot.configs.types import PipelineFeatureType, PolicyFeature
from lerobot.policies.pi05_full.configuration_pi05 import PI05FullConfig
from lerobot.policies.pi05_full.modeling_pi05 import pad_vector
from lerobot.processor import (
    ActionTokenizerProcessorStep,
    AddBatchDimensionProcessorStep,
    DeviceProcessorStep,
    NormalizerProcessorStep,
    PolicyAction,
    PolicyProcessorPipeline,
    ProcessorStep,
    ProcessorStepRegistry,
    RenameObservationsProcessorStep,
    TokenizerProcessorStep,
    UnnormalizerProcessorStep,
)
from lerobot.processor.converters import policy_action_to_transition, transition_to_policy_action
from lerobot.processor.core import EnvTransition, TransitionKey
from lerobot.utils.constants import (
    OBS_STATE,
    POLICY_POSTPROCESSOR_DEFAULT_NAME,
    POLICY_PREPROCESSOR_DEFAULT_NAME,
)


@ProcessorStepRegistry.register(name="pi05_full_prepare_state_tokenizer_processor_step")
@dataclass
class Pi05FullPrepareStateTokenizerProcessorStep(ProcessorStep):
    """Processor step that prepares the state and builds the language prompts for tokenization."""

    max_state_dim: int = 32
    task_key: str = "task"
    subtask_key: str = "subtask"

    def __call__(self, transition: EnvTransition) -> EnvTransition:
        transition = transition.copy()

        state = transition.get(TransitionKey.OBSERVATION, {}).get(OBS_STATE)
        if state is None:
            raise ValueError("State is required for PI05")
        user_prompts = transition.get(TransitionKey.COMPLEMENTARY_DATA, {}).get(self.task_key)
        if user_prompts is None:
            raise ValueError("No user prompts found in complementary data")
        commands = transition.get(TransitionKey.COMPLEMENTARY_DATA, {}).get(self.subtask_key)

        # TODO: check if this copy is necessary
        state = deepcopy(state)

        # Prepare state (pad to max_state_dim)
        state = pad_vector(state, self.max_state_dim)

        # State should already be normalized to [-1, 1] by the NormalizerProcessorStep that runs
        # before this step. Discretize into 256 bins (see openpi `PaligemmaTokenizer.tokenize()`).
        state_np = state.cpu().numpy()
        discretized_states = np.digitize(state_np, bins=np.linspace(-1, 1, 256 + 1)[:-1]) - 1

        full_prompts = []
        for i, user_prompt in enumerate(user_prompts):
            cleaned_text = user_prompt.strip().replace("_", " ").replace("\n", " ")
            cleaned_text = cleaned_text.lower()  # all lowercase  # NOTE: added by (jadechoghari)
            state_str = " ".join(map(str, discretized_states[i]))
            full_prompt = f"Task: {cleaned_text}, State: {state_str};\n"
            full_prompts.append(full_prompt)

        transition[TransitionKey.COMPLEMENTARY_DATA][self.task_key] = full_prompts

        # Process subtask commands (optional)
        if commands is not None:
            full_commands = []
            for command in commands:
                cleaned_text = command.strip().replace("_", " ").replace("\n", " ")
                cleaned_text = cleaned_text.lower()  # all lowercase  # NOTE: added by (jadechoghari)
                full_command = f"Subtask: {cleaned_text};\n"
                full_commands.append(full_command)

            transition[TransitionKey.COMPLEMENTARY_DATA][self.subtask_key] = full_commands

        # Note: action tokens are produced later by the ActionTokenizerProcessorStep.
        return transition

    def transform_features(
        self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
    ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
        """This step does not alter the feature definitions."""
        return features


def make_pi05_full_pre_post_processors(
    config: PI05FullConfig,
    dataset_stats: dict[str, dict[str, torch.Tensor]] | None = None,
) -> tuple[
    PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
    PolicyProcessorPipeline[PolicyAction, PolicyAction],
]:
    """
    Constructs pre-processor and post-processor pipelines for the PI05 policy.

    The pre-processing pipeline prepares input data for the model by:
    1. Renaming features to match pretrained configurations.
    2. Adding a batch dimension.
    3. Normalizing input and output features based on dataset statistics.
    4. Preparing the state and building the task/subtask prompts.
    5. Tokenizing the text prompts using the PaliGemma tokenizer.
    6. Tokenizing actions with the FAST action tokenizer.
    7. Moving all data to the specified device.

    The post-processing pipeline handles the model's output by:
    1. Unnormalizing the output features to their original scale.
    2. Moving data to the CPU.

    Args:
        config: The configuration object for the PI05 policy.
        dataset_stats: A dictionary of statistics for normalization.

    Returns:
        A tuple containing the configured pre-processor and post-processor pipelines.
    """
    input_steps: list[ProcessorStep] = [
        RenameObservationsProcessorStep(rename_map={}),  # To mimic the same processor as the pretrained one
        AddBatchDimensionProcessorStep(),
        # NOTE: NormalizerProcessorStep MUST come before Pi05FullPrepareStateTokenizerProcessorStep
        # because the tokenizer step expects normalized state in the [-1, 1] range for discretization.
        NormalizerProcessorStep(
            features={**config.input_features, **config.output_features},
            norm_map=config.normalization_mapping,
            stats=dataset_stats,
        ),
        Pi05FullPrepareStateTokenizerProcessorStep(max_state_dim=config.max_state_dim),
        TokenizerProcessorStep(
            tokenizer_name=config.text_tokenizer_name,
            max_length=config.tokenizer_max_length,
            padding_side="right",
            padding="max_length",
        ),
        ActionTokenizerProcessorStep(
            action_tokenizer_name=config.action_tokenizer_name,
            max_action_tokens=config.max_action_tokens,
            fast_skip_tokens=config.fast_skip_tokens,
            paligemma_tokenizer_name=config.text_tokenizer_name,
        ),
        DeviceProcessorStep(device=config.device),
    ]

    output_steps: list[ProcessorStep] = [
        UnnormalizerProcessorStep(
            features=config.output_features, norm_map=config.normalization_mapping, stats=dataset_stats
        ),
        DeviceProcessorStep(device="cpu"),
    ]

    return (
        PolicyProcessorPipeline[dict[str, Any], dict[str, Any]](
            steps=input_steps,
            name=POLICY_PREPROCESSOR_DEFAULT_NAME,
        ),
        PolicyProcessorPipeline[PolicyAction, PolicyAction](
            steps=output_steps,
            name=POLICY_POSTPROCESSOR_DEFAULT_NAME,
            to_transition=policy_action_to_transition,
            to_output=transition_to_policy_action,
        ),
    )
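The state discretization in `Pi05FullPrepareStateTokenizerProcessorStep` above maps each normalized state value in [-1, 1] to one of 256 integer bin ids. A self-contained sketch of just that step:

```python
import numpy as np

# 256 quantization bins over [-1, 1]: edges at -1 + 2k/256, dropping the final edge,
# mirroring the np.digitize call in the processor step above.
bins = np.linspace(-1, 1, 256 + 1)[:-1]

state = np.array([-1.0, -0.5, 0.0, 0.5, 0.999])  # already normalized to [-1, 1]
tokens = np.digitize(state, bins=bins) - 1       # integer bin ids in [0, 255]

print(tokens)  # → [  0  64 128 192 255]
```

Dropping the last edge before calling `np.digitize` (and subtracting 1) keeps every in-range value, including values at or near 1.0, inside the valid id range [0, 255]; these ids are then rendered as text in the `State: ...` prompt.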
@@ -171,9 +171,11 @@ def _extract_complementary_data(batch: dict[str, Any]) -> dict[str, Any]:
-    subtask_key = {"subtask": batch["subtask"]} if "subtask" in batch else {}
     index_key = {"index": batch["index"]} if "index" in batch else {}
     task_index_key = {"task_index": batch["task_index"]} if "task_index" in batch else {}
+    user_prompt_key = {"user_prompt": batch["user_prompt"]} if "user_prompt" in batch else {}
+    subtask_key = {"subtask": batch["subtask"]} if "subtask" in batch else {}
     episode_index_key = {"episode_index": batch["episode_index"]} if "episode_index" in batch else {}
 
-    return {**pad_keys, **task_key, **subtask_key, **index_key, **task_index_key, **episode_index_key}
+    return {**pad_keys, **task_key, **index_key, **task_index_key, **episode_index_key, **user_prompt_key, **subtask_key}
 
 
 def create_transition(
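The `{...} if key in batch else {}` idiom used by `_extract_complementary_data` lets optional batch keys contribute to the merged dict only when present. A minimal standalone sketch (the `extract_optional_keys` helper is hypothetical, not part of the repo):

```python
# Optional keys merge to {} when absent, so the result contains only what the batch provides.
def extract_optional_keys(batch: dict) -> dict:
    user_prompt_key = {"user_prompt": batch["user_prompt"]} if "user_prompt" in batch else {}
    subtask_key = {"subtask": batch["subtask"]} if "subtask" in batch else {}
    return {**user_prompt_key, **subtask_key}


print(extract_optional_keys({"user_prompt": "tidy the table"}))  # only user_prompt survives
print(extract_optional_keys({}))                                 # empty dict: nothing to merge
```

Merging empty dicts is a no-op, so downstream code can always unpack the result without checking which annotation fields a given dataset actually has.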
@@ -37,6 +37,9 @@ from lerobot.utils.constants import (
     OBS_LANGUAGE_SUBTASK_ATTENTION_MASK,
     OBS_LANGUAGE_SUBTASK_TOKENS,
     OBS_LANGUAGE_TOKENS,
+    OBS_LANGUAGE_USER_PROMPT,
+    OBS_LANGUAGE_USER_PROMPT_ATTENTION_MASK,
+    OBS_LANGUAGE_USER_PROMPT_TOKENS,
 )
 from lerobot.utils.import_utils import _transformers_available
@@ -141,6 +144,32 @@ class TokenizerProcessorStep(ObservationProcessorStep):
 
         return None
 
+    def get_user_prompt(self, transition: EnvTransition) -> list[str] | None:
+        """
+        Extracts the user_prompt from the transition's complementary data.
+
+        Args:
+            transition: The environment transition.
+
+        Returns:
+            A list of user_prompt strings, or None if the user_prompt key is not found or the value is None.
+        """
+        complementary_data = transition.get(TransitionKey.COMPLEMENTARY_DATA)
+        if complementary_data is None:
+            return None
+
+        user_prompt = complementary_data.get("user_prompt")
+        if user_prompt is None:
+            return None
+
+        # Standardize to a list of strings for the tokenizer
+        if isinstance(user_prompt, str):
+            return [user_prompt]
+        elif isinstance(user_prompt, list) and all(isinstance(t, str) for t in user_prompt):
+            return user_prompt
+
+        return None
+
     def get_subtask(self, transition: EnvTransition) -> list[str] | None:
         """
         Extracts the subtask from the transition's complementary data.
@@ -169,16 +198,16 @@ class TokenizerProcessorStep(ObservationProcessorStep):
 
     def observation(self, observation: RobotObservation) -> RobotObservation:
         """
-        Tokenizes the task description and adds it to the observation dictionary.
+        Tokenizes the task description and user_prompt (if available) and adds them to the observation dictionary.
 
-        This method retrieves the task, tokenizes it, moves the resulting tensors to the
+        This method retrieves the task and user_prompt, tokenizes them, moves the resulting tensors to the
         same device as other data in the transition, and updates the observation.
 
         Args:
             observation: The original observation dictionary.
 
         Returns:
-            The updated observation dictionary including token IDs and an attention mask.
+            The updated observation dictionary including token IDs and attention masks.
         """
         task = self.get_task(self.transition)
         if task is None:
@@ -204,11 +233,45 @@ class TokenizerProcessorStep(ObservationProcessorStep):
         new_observation[OBS_LANGUAGE_TOKENS] = tokenized_prompt["input_ids"]
         new_observation[OBS_LANGUAGE_ATTENTION_MASK] = tokenized_prompt["attention_mask"].to(dtype=torch.bool)
 
+        # Tokenize user_prompt if available
+        user_prompt = self.get_user_prompt(self.transition)
+        if user_prompt is not None:
+            tokenized_user_prompt = self._tokenize_text(user_prompt)
+
+            # Move new tokenized tensors to the detected device
+            if target_device is not None:
+                tokenized_user_prompt = {
+                    k: v.to(target_device) if isinstance(v, torch.Tensor) else v
+                    for k, v in tokenized_user_prompt.items()
+                }
+
+            # Add tokenized user_prompt to the observation
+            new_observation[OBS_LANGUAGE_USER_PROMPT_TOKENS] = tokenized_user_prompt["input_ids"]
+            new_observation[OBS_LANGUAGE_USER_PROMPT_ATTENTION_MASK] = tokenized_user_prompt["attention_mask"].to(dtype=torch.bool)
+
         # Tokenize subtask if available
         subtask = self.get_subtask(self.transition)
         if subtask is not None:
             tokenized_subtask = self._tokenize_text(subtask)
 
+            # Add EOS token at the end of each subtask sequence (before padding)
+            eos_token_id = self.input_tokenizer.eos_token_id
+            input_ids = tokenized_subtask["input_ids"]
+            attention_mask = tokenized_subtask["attention_mask"]
+            for i in range(input_ids.size(0)):
+                # Find the length of actual tokens (sum of attention mask)
+                seq_len = attention_mask[i].sum().item()
+
+                max_len = input_ids.size(1)
+                if seq_len >= max_len:
+                    raise ValueError(
+                        f"No room to append EOS: seq_len={seq_len} equals max_length={max_len}. "
+                        "Increase max_length or tokenize with padding=False, then pad after adding EOS."
+                    )
+                # Add EOS token at the end
+                input_ids[i, seq_len] = eos_token_id
+                attention_mask[i, seq_len] = 1
+
             # Move new tokenized tensors to the detected device
             if target_device is not None:
                 tokenized_subtask = {
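The EOS-append loop above writes the EOS id into the first padding slot of each right-padded row, where `seq_len` (the attention-mask sum) marks the end of the real tokens. A schematic using plain lists in place of tensors (`EOS_ID` and `append_eos` are illustrative names, not repo API):

```python
EOS_ID = 1  # hypothetical EOS token id

def append_eos(input_ids: list[list[int]], attention_mask: list[list[int]]) -> None:
    """Append EOS in place to each right-padded row, mirroring the tensor loop above."""
    for row_ids, row_mask in zip(input_ids, attention_mask):
        seq_len = sum(row_mask)   # number of real (non-padding) tokens
        max_len = len(row_ids)
        if seq_len >= max_len:    # a full row has no padding slot left for EOS
            raise ValueError(f"No room to append EOS: seq_len={seq_len} equals max_length={max_len}.")
        row_ids[seq_len] = EOS_ID  # overwrite the first padding position
        row_mask[seq_len] = 1      # EOS is now a real token


ids = [[5, 6, 7, 0, 0], [8, 9, 0, 0, 0]]
mask = [[1, 1, 1, 0, 0], [1, 1, 0, 0, 0]]
append_eos(ids, mask)
print(ids)  # → [[5, 6, 7, 1, 0], [8, 9, 1, 0, 0]]
```

Appending after padding (rather than before) is why the full-row case must be rejected: with `padding="max_length"` a sequence that already fills `max_length` leaves no slot for EOS.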
@@ -320,6 +383,28 @@ class TokenizerProcessorStep(ObservationProcessorStep):
             type=FeatureType.LANGUAGE, shape=(self.max_length,)
         )
 
+        # Add features for user_prompt tokens and attention mask if they don't already exist
+        if OBS_LANGUAGE_USER_PROMPT_TOKENS not in features[PipelineFeatureType.OBSERVATION]:
+            features[PipelineFeatureType.OBSERVATION][OBS_LANGUAGE_USER_PROMPT_TOKENS] = PolicyFeature(
+                type=FeatureType.LANGUAGE, shape=(self.max_length,)
+            )
+
+        if OBS_LANGUAGE_USER_PROMPT_ATTENTION_MASK not in features[PipelineFeatureType.OBSERVATION]:
+            features[PipelineFeatureType.OBSERVATION][OBS_LANGUAGE_USER_PROMPT_ATTENTION_MASK] = PolicyFeature(
+                type=FeatureType.LANGUAGE, shape=(self.max_length,)
+            )
+
+        # Add features for subtask tokens and attention mask if they don't already exist
+        if OBS_LANGUAGE_SUBTASK_TOKENS not in features[PipelineFeatureType.OBSERVATION]:
+            features[PipelineFeatureType.OBSERVATION][OBS_LANGUAGE_SUBTASK_TOKENS] = PolicyFeature(
+                type=FeatureType.LANGUAGE, shape=(self.max_length,)
+            )
+
+        if OBS_LANGUAGE_SUBTASK_ATTENTION_MASK not in features[PipelineFeatureType.OBSERVATION]:
+            features[PipelineFeatureType.OBSERVATION][OBS_LANGUAGE_SUBTASK_ATTENTION_MASK] = PolicyFeature(
+                type=FeatureType.LANGUAGE, shape=(self.max_length,)
+            )
+
         return features
@@ -573,4 +658,4 @@ class ActionTokenizerProcessorStep(ActionProcessorStep):
         Returns:
             The updated dictionary of policy features.
         """
-        return features
\ No newline at end of file
+        return features
@@ -338,11 +338,21 @@ def train(cfg: TrainPipelineConfig, accelerator: Accelerator | None = None):
 
     # create dataloader for offline training
     if hasattr(cfg.policy, "drop_n_last_frames"):
+        # Keep only episodes that carry subtask annotations (subtask_index != -1);
+        # the resulting valid_episode_list is passed to the sampler as episode_indices_to_use.
+        valid_episode_list = []
+        for episode_idx in range(len(dataset.meta.episodes)):
+            subtask_index = dataset[episode_idx]["subtask_index"]
+            if subtask_index != -1:
+                valid_episode_list.append(episode_idx)
+
+        episode_indices_to_use = valid_episode_list
+
         shuffle = False
         sampler = EpisodeAwareSampler(
             dataset.meta.episodes["dataset_from_index"],
             dataset.meta.episodes["dataset_to_index"],
-            episode_indices_to_use=dataset.episodes,
+            episode_indices_to_use=episode_indices_to_use,
             drop_n_last_frames=cfg.policy.drop_n_last_frames,
             shuffle=True,
         )
@@ -26,6 +26,9 @@ OBS_IMAGES = OBS_IMAGE + "s"
 OBS_LANGUAGE = OBS_STR + ".language"
 OBS_LANGUAGE_TOKENS = OBS_LANGUAGE + ".tokens"
 OBS_LANGUAGE_ATTENTION_MASK = OBS_LANGUAGE + ".attention_mask"
+OBS_LANGUAGE_USER_PROMPT = OBS_STR + ".user_prompt"
+OBS_LANGUAGE_USER_PROMPT_TOKENS = OBS_LANGUAGE_USER_PROMPT + ".tokens"
+OBS_LANGUAGE_USER_PROMPT_ATTENTION_MASK = OBS_LANGUAGE_USER_PROMPT + ".attention_mask"
 OBS_LANGUAGE_SUBTASK = OBS_STR + ".subtask"
 OBS_LANGUAGE_SUBTASK_TOKENS = OBS_LANGUAGE_SUBTASK + ".tokens"
 OBS_LANGUAGE_SUBTASK_ATTENTION_MASK = OBS_LANGUAGE_SUBTASK + ".attention_mask"
@@ -89,3 +92,101 @@ LIBERO_KEY_JOINTS_POS = "robot_state/joints/pos"
 LIBERO_KEY_JOINTS_VEL = "robot_state/joints/vel"
 LIBERO_KEY_PIXELS_AGENTVIEW = "pixels/agentview_image"
 LIBERO_KEY_PIXELS_EYE_IN_HAND = "pixels/robot0_eye_in_hand_image"
+
+# Skill segmentation prompt template for VLM-based subtask annotation
+# Placeholders: {goal_context}, {subtask_labels_section}
+# When subtask_labels are provided, use format_subtask_labels_section() to fill {subtask_labels_section}.
+
+
+def format_subtask_labels_section(subtask_labels: list[str]) -> str:
+    """Format a list of subtask labels for insertion into SKILL_SEGMENTATION_PROMPT_TEMPLATE.
+
+    The model will be instructed to choose only from these labels.
+    """
+    if not subtask_labels:
+        return ""
+    return "\n".join(f'    "{label}",' for label in subtask_labels).rstrip(",")
+
+
+SKILL_SEGMENTATION_PROMPT_TEMPLATE = """# Role
+You are a Robotics Vision System specializing in temporal action segmentation for robot manipulation demonstrations.
+
+# Video duration (critical)
+The total video length is **{video_duration_seconds} seconds** ({video_duration_mm_ss}). All "start" and "end" values in your JSON must be numeric seconds in the range [0.0, {video_duration_seconds}]. The last skill's "end" must be exactly **{video_duration_seconds}**. Do not stop earlier.
+
+# Task
+{goal_context}Segment this robot demonstration video into short atomic manipulation skills. Each skill should:
+- Last approximately 1-3 seconds (or longer if the action takes longer)
+- Describe a clear, single action (e.g., "pick up object", "move arm left", "release gripper")
+- Have precise start and end timestamps in seconds (float)
+
+# Requirements
+1. **Atomic Actions**: Each skill should be a single, indivisible action
+2. **Complete Coverage**: Skills must cover the entire video from 0.0 to {video_duration_seconds} seconds with no gaps
+3. **Boundary Consistency**: The end of one skill equals the start of the next
+4. **Natural Language**: Use clear, descriptive names for each skill
+5. **Timestamps**: Use seconds as floats (e.g. 12.5) for all timestamps; the last "end" must be {video_duration_seconds}. If the video has a visible timer in the corner showing elapsed time in seconds, use it to report accurate start and end times for each skill.
+
+# Subtask Label Set (Closed Vocabulary)
+You MUST strictly identify the video segments using ONLY the following labels. Do not create new labels or modify existing ones:
+
+[
+{subtask_labels_section}
+]
+
+The video shows one successful execution of all subtasks in a logical order.
+
+# Ground-Truth Semantics (Very Important)
+Use **visual state changes** to define when a subtask starts and ends. Do NOT assume equal durations for the subtasks.
+
+- A subtask **starts** at the first frame where the robot's motion clearly initiates that subtask.
+- A subtask **ends** at the first frame where that specific action is visually completed and the manipulated object reaches a temporary, stable configuration.
+
+If there are short pauses or micro-motions that don't clearly correspond to a new subtask, they belong to the **current** subtask.
+
+# Hard Constraints & Logic
+1. **Continuous Coverage (No Gaps):**
+   - The entire video from 0.0 to {video_duration_seconds} seconds must be covered by subtasks.
+   - There can be no gaps between subtasks.
+   - If there is any idle or ambiguous time between clear actions, extend the *preceding* subtask to cover it.
+
+2. **Boundary Consistency:**
+   - The `"end"` timestamp of one subtask must be exactly equal to the `"start"` timestamp of the next subtask.
+   - Boundaries must coincide with a real visual state transition, not just a convenient time split.
+
+3. **Chronological Order, One Occurrence Each:**
+   - This is a single successful demonstration.
+   - Each subtask from the vocabulary appears **exactly once**, in the correct logical order.
+   - **Durations may be very different** between subtasks. Never assume they are similar lengths. Base all boundaries only on the video.
+
+4. **Reject Uniform Segmentation (Important):**
+   - Do NOT simply divide the video into equal or nearly equal time chunks.
+   - If your boundaries would result in subtasks with similar durations (e.g. all around 5 seconds), treat this as evidence that your segmentation is wrong and refine the boundaries.
+   - Only use nearly equal durations if the video truly shows each subtask taking the same amount of time (this is very rare).
+
+5. **Timestamps (critical):**
+   - Use numeric seconds (float) in the JSON, e.g. 0.0, 5.2, 12.8.
+   - The first subtask always starts at 0.0.
+   - The last subtask must end at exactly {video_duration_seconds} (the full video length).
+   - **Time is displayed inside the video**: a visible timer in one corner shows the elapsed time in seconds (from 0.0 to the end). Use this on-screen timer to set accurate start and end times for each skill.
+
+# Output Format
+Output ONLY valid JSON with this exact structure. The last skill's "end" MUST be exactly {video_duration_seconds}. Use the timestamps you read from the visible timer in the video:
+
+```json
+{{
+  "skills": [
+    {{"name": "first skill", "start": 0.0, "end": 5.0}},
+    {{"name": "second skill", "start": 5.0, "end": 12.0}},
+    {{"name": "last skill", "start": 12.0, "end": {video_duration_seconds}}}
+  ]
+}}
+```
+
+The first skill must start at 0.0 and the last skill must end at **{video_duration_seconds}** (the total video duration in seconds).
+
+# Strict Structural Rule
+This video contains exactly 4 subtasks.
+You MUST output exactly 4 segments.
+Each segment must use a unique label from the vocabulary.
+No label may be repeated.
+"""
|
||||
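To see what `format_subtask_labels_section` produces inside the template's `[...]` block, here is a standalone copy of the helper with an example vocabulary (the indentation width is an assumption; extraction collapsed the original whitespace):

```python
# Standalone copy of format_subtask_labels_section from the constants diff above.
def format_subtask_labels_section(subtask_labels: list[str]) -> str:
    """Render labels as quoted, comma-separated lines; rstrip drops only the final comma."""
    if not subtask_labels:
        return ""
    return "\n".join(f'    "{label}",' for label in subtask_labels).rstrip(",")


section = format_subtask_labels_section(["pick up the cube", "place the cube in the bin"])
print(f"[\n{section}\n]")
```

Because `rstrip(",")` only trims the end of the whole joined string, interior lines keep their trailing commas while the last one loses it, yielding a valid JSON-style array body for the closed-vocabulary section of the prompt.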