Compare commits


1 Commit

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Michel Aractingi | 74b7cd246e | add check for cfg.policy in force_cpu line | 2026-01-19 13:54:44 +01:00 |
77 changed files with 841 additions and 5781 deletions
+1 -12
@@ -18,11 +18,6 @@ name: Documentation
on:
# Allows running this workflow manually from the Actions tab
workflow_dispatch:
inputs:
version:
description: 'Version tag (e.g. v0.1.2) - Leave empty for standard main build'
required: false
type: string
# Triggers the workflow on push events to main for the docs folder
push:
@@ -59,13 +54,7 @@ jobs:
with:
commit_sha: ${{ github.sha }}
package: lerobot
additional_args: >-
--not_python_module
${{
(github.event_name == 'release' && format('--version {0}', github.event.release.tag_name)) ||
(inputs.version != '' && format('--version {0}', inputs.version)) ||
''
}}
additional_args: --not_python_module ${{ github.event_name == 'release' && format('--version {0}', github.event.release.tag_name) || '' }}
secrets:
token: ${{ secrets.HUGGINGFACE_PUSH }}
hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}
+2 -2
@@ -20,8 +20,8 @@ on:
workflow_dispatch:
# Run on the 1st and 15th of every month at 02:00 UTC
# schedule:
# - cron: '0 2 1,15 * *'
schedule:
- cron: '0 2 1,15 * *'
permissions:
contents: read
+1 -1
@@ -14,7 +14,7 @@ You can contribute in many ways:
- **Documentation:** Improve examples, guides, and docstrings.
- **Feedback:** Submit tickets related to bugs or desired new features.
If you are unsure where to start, join our [Discord Channel](https://discord.gg/q8Dzzpym3f).
If you are unsure where to start, join our [Discord Channel](https://discord.gg/JkrYNdmw).
## Development Setup
-1
@@ -128,7 +128,6 @@ Learn how to implement your own simulation environment or benchmark and distribu
## Resources
- **[Documentation](https://huggingface.co/docs/lerobot/index):** The complete guide to tutorials & API.
- **[Chinese Tutorials: LeRobot+SO-ARM101中文教程-同济子豪兄](https://zihao-ai.feishu.cn/wiki/space/7589642043471924447)** Detailed docs covering assembly, teleoperation, dataset creation, training, and deployment. Verified by Seeed Studio and 5 global hackathon players.
- **[Discord](https://discord.gg/q8Dzzpym3f):** Join the `LeRobot` server to discuss with the community.
- **[X](https://x.com/LeRobotHF):** Follow us on X to stay up-to-date with the latest developments.
- **[Robot Learning Tutorial](https://huggingface.co/spaces/lerobot/robot-learning-tutorial):** A free, hands-on course to learn robot learning using LeRobot.
-48
@@ -1,48 +0,0 @@
# Security Policy
## Project Status & Philosophy
`lerobot` has so far been primarily a research and prototyping tool, which is why deployment security hasn't been a strong focus until now. As `lerobot` continues to be adopted and deployed in production, we are paying much closer attention to these kinds of issues.
Fortunately, being an open-source project, the community can also help by reporting and fixing vulnerabilities. We appreciate your efforts to responsibly disclose your findings and will make every effort to acknowledge your contributions.
## Reporting a Vulnerability
To report a security issue, please use the GitHub Security Advisory ["Report a Vulnerability"](https://github.com/huggingface/lerobot/security/advisories/new) tab.
The `lerobot` team will send a response indicating the next steps in handling your report. After the initial reply to your report, the security team will keep you informed of the progress towards a fix and full announcement, and may ask for additional information or guidance.
#### Hugging Face Security Team
Since this project is part of the Hugging Face ecosystem, feel free to submit vulnerability reports directly to: **[security@huggingface.co](mailto:security@huggingface.co)**. Someone from the HF security team will review the report and recommend next steps.
#### Open Source Disclosures
If reporting a vulnerability specific to the open-source codebase (and not the underlying Hub infrastructure), you may also use [Huntr](https://huntr.com), a vulnerability disclosure program for open source software.
## Supported Versions
Currently, we treat `lerobot` as a rolling release. We prioritize security updates for the latest available version (`main` branch).
| Version | Supported |
| -------- | --------- |
| Latest | ✅ |
| < Latest | ❌ |
## Secure Usage Guidelines
`lerobot` is tightly coupled to the Hugging Face Hub for sharing data and pretrained policies. When downloading artifacts uploaded by others, you expose yourself to risks. Please read below for recommendations to keep your runtime and robot environment safe.
### Remote Artifacts (Weights & Policies)
Models and policies uploaded to the Hugging Face Hub come in different formats. We heavily recommend uploading and downloading models in the [`safetensors`](https://github.com/huggingface/safetensors) format.
`safetensors` was developed specifically to prevent arbitrary code execution on your system, which is critical when running software on physical hardware/robots.
To avoid loading models from unsafe formats (e.g., `pickle`), you should ensure you are prioritizing `safetensors` files.
### Remote Code
Some models or environments on the Hub may require `trust_remote_code=True` to run custom architecture code.
Please **always** verify the content of the modeling files when using this argument. We recommend setting a specific `revision` (commit hash) when loading remote code to ensure you protect yourself from unverified updates to the repository.
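To make the two recommendations above concrete, here is a minimal, hypothetical sketch of loading a Hub model with `safetensors` enforced and remote code pinned to a reviewed commit (the repo id and hash are placeholders, not from this repository):

```python
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "some-user/custom-policy",  # placeholder repo id
    use_safetensors=True,       # refuse pickle-based weight files
    trust_remote_code=True,     # only after reviewing the modeling files
    revision="abc123def",       # pin a reviewed commit hash
)
```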
-1
@@ -195,7 +195,6 @@ client_cfg = RobotClientConfig(
robot=robot_cfg,
server_address="localhost:8080",
policy_device="mps",
client_device="cpu",
policy_type="smolvla",
pretrained_name_or_path="<user>/smolvla_async",
chunk_size_threshold=0.5,
+6 -13
@@ -95,26 +95,26 @@ Convert an image-based dataset to video format, creating a new LeRobotDataset wh
# Local-only: Save to a custom output directory (no hub push)
lerobot-edit-dataset \
--repo_id lerobot/pusht_image \
--operation.type convert_image_to_video \
--operation.type convert_to_video \
--operation.output_dir /path/to/output/pusht_video
# Save with new repo_id (local storage)
lerobot-edit-dataset \
--repo_id lerobot/pusht_image \
--new_repo_id lerobot/pusht_video \
--operation.type convert_image_to_video
--operation.type convert_to_video
# Convert and push to Hugging Face Hub
lerobot-edit-dataset \
--repo_id lerobot/pusht_image \
--new_repo_id lerobot/pusht_video \
--operation.type convert_image_to_video \
--operation.type convert_to_video \
--push_to_hub true
# Convert with custom video codec and quality settings
lerobot-edit-dataset \
--repo_id lerobot/pusht_image \
--operation.type convert_image_to_video \
--operation.type convert_to_video \
--operation.output_dir outputs/pusht_video \
--operation.vcodec libsvtav1 \
--operation.pix_fmt yuv420p \
@@ -124,23 +124,16 @@ lerobot-edit-dataset \
# Convert only specific episodes
lerobot-edit-dataset \
--repo_id lerobot/pusht_image \
--operation.type convert_image_to_video \
--operation.type convert_to_video \
--operation.output_dir outputs/pusht_video \
--operation.episode_indices "[0, 1, 2, 5, 10]"
# Convert with multiple workers for parallel processing
lerobot-edit-dataset \
--repo_id lerobot/pusht_image \
--operation.type convert_image_to_video \
--operation.type convert_to_video \
--operation.output_dir outputs/pusht_video \
--operation.num_workers 8
# For memory-constrained systems, users can now specify limits:
lerobot-edit-dataset \
--repo_id lerobot/pusht_image \
--operation.type convert_to_video \
--operation.max_episodes_per_batch 50 \
--operation.max_frames_per_batch 10000
```
**Parameters:**
@@ -30,7 +30,6 @@ def main():
robot=robot_cfg,
server_address=server_address,
policy_device="mps",
client_device="cpu",
policy_type="act",
pretrained_name_or_path="<user>/robot_learning_tutorial_act",
chunk_size_threshold=0.5, # g
+1 -1
@@ -25,7 +25,7 @@ discord = "https://discord.gg/s3KuuzsPFb"
[project]
name = "lerobot"
version = "0.4.4"
version = "0.4.3"
description = "🤗 LeRobot: State-of-the-art Machine Learning for Real-World Robotics in Pytorch"
dynamic = ["readme"]
license = { text = "Apache-2.0" }
-10
@@ -126,12 +126,6 @@ class RobotClientConfig:
# Device configuration
policy_device: str = field(default="cpu", metadata={"help": "Device for policy inference"})
client_device: str = field(
default="cpu",
metadata={
"help": "Device to move actions to after receiving from server (e.g., for downstream planners)"
},
)
# Control behavior configuration
chunk_size_threshold: float = field(default=0.5, metadata={"help": "Threshold for chunk size control"})
@@ -167,9 +161,6 @@ class RobotClientConfig:
if not self.policy_device:
raise ValueError("policy_device cannot be empty")
if not self.client_device:
raise ValueError("client_device cannot be empty")
if self.chunk_size_threshold < 0 or self.chunk_size_threshold > 1:
raise ValueError(f"chunk_size_threshold must be between 0 and 1, got {self.chunk_size_threshold}")
@@ -193,7 +184,6 @@ class RobotClientConfig:
"policy_type": self.policy_type,
"pretrained_name_or_path": self.pretrained_name_or_path,
"policy_device": self.policy_device,
"client_device": self.client_device,
"chunk_size_threshold": self.chunk_size_threshold,
"fps": self.fps,
"actions_per_chunk": self.actions_per_chunk,
+1 -1
@@ -23,7 +23,7 @@ DEFAULT_INFERENCE_LATENCY = 1 / DEFAULT_FPS
DEFAULT_OBS_QUEUE_TIMEOUT = 2
# All action chunking policies
SUPPORTED_POLICIES = ["act", "smolvla", "diffusion", "tdmpc", "vqbet", "pi0", "pi05", "groot"]
SUPPORTED_POLICIES = ["act", "smolvla", "diffusion", "tdmpc", "vqbet", "pi0", "pi05"]
# TODO: Add all other robots
SUPPORTED_ROBOTS = ["so100_follower", "so101_follower", "bi_so_follower", "omx_follower"]
+2 -3
@@ -18,7 +18,6 @@ import os
import time
from dataclasses import dataclass, field
from pathlib import Path
from typing import Any
import torch
@@ -40,8 +39,8 @@ from lerobot.utils.utils import init_logging
Action = torch.Tensor
# observation as received from the robot (can be numpy arrays, floats, etc.)
RawObservation = dict[str, Any]
# observation as received from the robot
RawObservation = dict[str, torch.Tensor]
# observation as those recorded in LeRobot dataset (keys are different)
LeRobotObservation = dict[str, torch.Tensor]
@@ -381,8 +381,6 @@ class PolicyServer(services_pb2_grpc.AsyncInferenceServicer):
action_tensor = torch.stack(processed_actions, dim=1).squeeze(0)
self.logger.debug(f"Postprocessed action shape: {action_tensor.shape}")
action_tensor = action_tensor.detach().cpu()
"""5. Convert to TimedAction list"""
action_chunk = self._time_action_chunk(
observation_t.get_timestamp(), list(action_tensor), observation_t.get_timestep()
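The body of `_time_action_chunk` is outside this hunk; purely for orientation, a hypothetical sketch of what pairing a chunk of actions with timestamps could look like, given the call shape above (the 1/fps spacing is an assumption):

```python
# Hypothetical sketch only; not the implementation from this commit.
def time_action_chunk(t0: float, actions: list, timestep0: int, fps: int = 30):
    """Pair each action with a timestamp/timestep, spaced 1/fps apart from t0."""
    return [(t0 + i / fps, timestep0 + i, action) for i, action in enumerate(actions)]
```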
+2 -18
@@ -25,7 +25,6 @@ python src/lerobot/async_inference/robot_client.py \
--policy_type=act \
--pretrained_name_or_path=user/model \
--policy_device=mps \
--client_device=cpu \
--actions_per_chunk=50 \
--chunk_size_threshold=0.5 \
--aggregate_fn_name=weighted_average \
@@ -41,7 +40,6 @@ from collections.abc import Callable
from dataclasses import asdict
from pprint import pformat
from queue import Queue
from typing import Any
import draccus
import grpc
@@ -49,6 +47,7 @@ import torch
from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig # noqa: F401
from lerobot.cameras.realsense.configuration_realsense import RealSenseCameraConfig # noqa: F401
from lerobot.processor import RobotAction
from lerobot.robots import ( # noqa: F401
Robot,
RobotConfig,
@@ -286,21 +285,6 @@ class RobotClient:
timed_actions = pickle.loads(actions_chunk.data) # nosec
deserialize_time = time.perf_counter() - deserialize_start
# Log device type of received actions
if len(timed_actions) > 0:
received_device = timed_actions[0].get_action().device.type
self.logger.debug(f"Received actions on device: {received_device}")
# Move actions to client_device (e.g., for downstream planners that need GPU)
client_device = self.config.client_device
if client_device != "cpu":
for timed_action in timed_actions:
if timed_action.get_action().device.type != client_device:
timed_action.action = timed_action.get_action().to(client_device)
self.logger.debug(f"Converted actions to device: {client_device}")
else:
self.logger.debug(f"Actions kept on device: {client_device}")
self.action_chunk_size = max(self.action_chunk_size, len(timed_actions))
# Calculate network latency if we have matching observations
@@ -367,7 +351,7 @@ class RobotClient:
action = {key: action_tensor[i].item() for i, key in enumerate(self.robot.action_features)}
return action
def control_loop_action(self, verbose: bool = False) -> dict[str, Any]:
def control_loop_action(self, verbose: bool = False) -> RobotAction:
"""Reading and performing actions in local queue"""
# Lock only for queue operations
-10
@@ -105,16 +105,6 @@ class PreTrainedConfig(draccus.ChoiceRegistry, HubMixin, abc.ABC): # type: igno
def observation_delta_indices(self) -> list | None: # type: ignore[type-arg] #TODO: No implementation
raise NotImplementedError
@property
def image_observation_delta_indices(self) -> list | None: # type: ignore[type-arg]
"""Return indices for delta image observations only.
Unlike observation_delta_indices which applies to ALL observations,
this only applies to image observations (keys starting with observation.images).
Default returns None. Override in subclass to enable.
"""
return None
@property
@abc.abstractmethod
def action_delta_indices(self) -> list | None: # type: ignore[type-arg] #TODO: No implementation
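A hedged sketch of how a concrete policy config typically overrides these hooks (the subclass, values, and import path are illustrative, not from this diff):

```python
from lerobot.configs.policies import PreTrainedConfig  # assumed import path


class MyPolicyConfig(PreTrainedConfig):
    @property
    def observation_delta_indices(self) -> list | None:
        return [-1, 0]  # one step of history plus the current frame

    @property
    def action_delta_indices(self) -> list | None:
        return list(range(50))  # a 50-step action chunk
```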
+5 -24
@@ -19,7 +19,6 @@ import logging
import shutil
from pathlib import Path
import datasets
import pandas as pd
import tqdm
@@ -33,7 +32,6 @@ from lerobot.datasets.utils import (
DEFAULT_VIDEO_FILE_SIZE_IN_MB,
DEFAULT_VIDEO_PATH,
get_file_size_in_mb,
get_hf_features_from_features,
get_parquet_file_size_in_mb,
to_parquet_with_hf_images,
update_chunk_file_indices,
@@ -404,21 +402,12 @@ def aggregate_data(src_meta, dst_meta, data_idx, data_files_size_in_mb, chunk_si
}
unique_chunk_file_ids = sorted(unique_chunk_file_ids)
contains_images = len(dst_meta.image_keys) > 0
# retrieve features schema for proper image typing in parquet
hf_features = get_hf_features_from_features(dst_meta.features) if contains_images else None
for src_chunk_idx, src_file_idx in unique_chunk_file_ids:
src_path = src_meta.root / DEFAULT_DATA_PATH.format(
chunk_index=src_chunk_idx, file_index=src_file_idx
)
if contains_images:
# Use HuggingFace datasets to read source data to preserve image format
src_ds = datasets.Dataset.from_parquet(str(src_path))
df = src_ds.to_pandas()
else:
df = pd.read_parquet(src_path)
df = pd.read_parquet(src_path)
df = update_data_df(df, src_meta, dst_meta)
data_idx = append_or_create_parquet_file(
@@ -428,9 +417,8 @@ def aggregate_data(src_meta, dst_meta, data_idx, data_files_size_in_mb, chunk_si
data_files_size_in_mb,
chunk_size,
DEFAULT_DATA_PATH,
contains_images=contains_images,
contains_images=len(dst_meta.image_keys) > 0,
aggr_root=dst_meta.root,
hf_features=hf_features,
)
return data_idx
@@ -500,7 +488,6 @@ def append_or_create_parquet_file(
default_path: str,
contains_images: bool = False,
aggr_root: Path = None,
hf_features: datasets.Features | None = None,
):
"""Appends data to an existing parquet file or creates a new one based on size constraints.
@@ -516,7 +503,6 @@ def append_or_create_parquet_file(
default_path: Format string for generating file paths.
contains_images: Whether the data contains images requiring special handling.
aggr_root: Root path for the aggregated dataset.
hf_features: Optional HuggingFace Features schema for proper image typing.
Returns:
dict: Updated index dictionary with current chunk and file indices.
@@ -526,7 +512,7 @@ def append_or_create_parquet_file(
if not dst_path.exists():
dst_path.parent.mkdir(parents=True, exist_ok=True)
if contains_images:
to_parquet_with_hf_images(df, dst_path, features=hf_features)
to_parquet_with_hf_images(df, dst_path)
else:
df.to_parquet(dst_path)
return idx
@@ -541,17 +527,12 @@ def append_or_create_parquet_file(
final_df = df
target_path = new_path
else:
if contains_images:
# Use HuggingFace datasets to read existing data to preserve image format
existing_ds = datasets.Dataset.from_parquet(str(dst_path))
existing_df = existing_ds.to_pandas()
else:
existing_df = pd.read_parquet(dst_path)
existing_df = pd.read_parquet(dst_path)
final_df = pd.concat([existing_df, df], ignore_index=True)
target_path = dst_path
if contains_images:
to_parquet_with_hf_images(final_df, target_path, features=hf_features)
to_parquet_with_hf_images(final_df, target_path)
else:
final_df.to_parquet(target_path)
+1 -561
@@ -26,7 +26,6 @@ This module provides utilities for:
import logging
import shutil
from collections.abc import Callable
from concurrent.futures import ThreadPoolExecutor, as_completed
from pathlib import Path
import datasets
@@ -52,8 +51,7 @@ from lerobot.datasets.utils import (
write_stats,
write_tasks,
)
from lerobot.datasets.video_utils import encode_video_frames, get_video_info
from lerobot.utils.constants import HF_LEROBOT_HOME, OBS_IMAGE
from lerobot.utils.constants import HF_LEROBOT_HOME
def _load_episode_with_stats(src_dataset: LeRobotDataset, episode_idx: int) -> dict:
@@ -1085,561 +1083,3 @@ def _copy_episodes_metadata_and_stats(
else:
if src_dataset.meta.stats:
write_stats(src_dataset.meta.stats, dst_meta.root)
def _save_episode_images_for_video(
dataset: LeRobotDataset,
imgs_dir: Path,
img_key: str,
episode_index: int,
num_workers: int = 4,
) -> None:
"""Save images from a specific episode and camera to disk for video encoding.
Args:
dataset: The LeRobot dataset to extract images from
imgs_dir: Directory to save images to
img_key: The image key (camera) to extract
episode_index: Index of the episode to save
num_workers: Number of threads for parallel image saving
"""
# Create directory
imgs_dir.mkdir(parents=True, exist_ok=True)
# Get dataset without torch format for PIL image access
hf_dataset = dataset.hf_dataset.with_format(None)
# Select only this camera's images
imgs_dataset = hf_dataset.select_columns(img_key)
# Get episode start and end indices
from_idx = dataset.meta.episodes["dataset_from_index"][episode_index]
to_idx = dataset.meta.episodes["dataset_to_index"][episode_index]
# Get all items for this episode
episode_dataset = imgs_dataset.select(range(from_idx, to_idx))
# Define function to save a single image
def save_single_image(i_item_tuple):
i, item = i_item_tuple
img = item[img_key]
# Use frame-XXXXXX.png format to match encode_video_frames expectations
img.save(str(imgs_dir / f"frame-{i:06d}.png"), quality=100)
return i
# Save images with proper naming convention for encode_video_frames (frame-XXXXXX.png)
items = list(enumerate(episode_dataset))
with ThreadPoolExecutor(max_workers=num_workers) as executor:
futures = [executor.submit(save_single_image, item) for item in items]
for future in as_completed(futures):
future.result() # This will raise any exceptions that occurred
def _save_batch_episodes_images(
dataset: LeRobotDataset,
imgs_dir: Path,
img_key: str,
episode_indices: list[int],
num_workers: int = 4,
) -> list[float]:
"""Save images from multiple episodes to disk for batch video encoding.
Args:
dataset: The LeRobot dataset to extract images from
imgs_dir: Directory to save images to
img_key: The image key (camera) to extract
episode_indices: List of episode indices to save
num_workers: Number of threads for parallel image saving
Returns:
List of episode durations in seconds
"""
imgs_dir.mkdir(parents=True, exist_ok=True)
hf_dataset = dataset.hf_dataset.with_format(None)
imgs_dataset = hf_dataset.select_columns(img_key)
# Define function to save a single image with global frame index
# Defined once outside the loop to avoid repeated closure creation
def save_single_image(i_item_tuple, base_frame_idx, img_key_param):
i, item = i_item_tuple
img = item[img_key_param]
# Use global frame index for naming
img.save(str(imgs_dir / f"frame-{base_frame_idx + i:06d}.png"), quality=100)
return i
episode_durations = []
frame_idx = 0
for ep_idx in episode_indices:
# Get episode range
from_idx = dataset.meta.episodes["dataset_from_index"][ep_idx]
to_idx = dataset.meta.episodes["dataset_to_index"][ep_idx]
episode_length = to_idx - from_idx
episode_durations.append(episode_length / dataset.fps)
# Get episode images
episode_dataset = imgs_dataset.select(range(from_idx, to_idx))
# Save images
items = list(enumerate(episode_dataset))
with ThreadPoolExecutor(max_workers=num_workers) as executor:
futures = [executor.submit(save_single_image, item, frame_idx, img_key) for item in items]
for future in as_completed(futures):
future.result()
frame_idx += episode_length
return episode_durations
def _iter_episode_batches(
episode_indices: list[int],
episode_lengths: dict[int, int],
size_per_frame_mb: float,
video_file_size_limit: float,
max_episodes: int | None,
max_frames: int | None,
):
"""Generator that yields batches of episode indices for video encoding.
Groups episodes into batches that respect size and memory constraints:
- Stays under video file size limit
- Respects maximum episodes per batch (if specified)
- Respects maximum frames per batch (if specified)
Args:
episode_indices: List of episode indices to batch
episode_lengths: Dictionary mapping episode index to episode length
size_per_frame_mb: Estimated size per frame in MB
video_file_size_limit: Maximum video file size in MB
max_episodes: Maximum number of episodes per batch (None = no limit)
max_frames: Maximum number of frames per batch (None = no limit)
Yields:
List of episode indices for each batch
"""
batch_episodes = []
estimated_size = 0.0
total_frames = 0
for ep_idx in episode_indices:
ep_length = episode_lengths[ep_idx]
ep_estimated_size = ep_length * size_per_frame_mb
# we check if adding this episode would exceed any constraint
would_exceed_size = estimated_size > 0 and estimated_size + ep_estimated_size >= video_file_size_limit
would_exceed_episodes = max_episodes is not None and len(batch_episodes) >= max_episodes
would_exceed_frames = max_frames is not None and total_frames + ep_length > max_frames
if batch_episodes and (would_exceed_size or would_exceed_episodes or would_exceed_frames):
# yield current batch before adding this episode
yield batch_episodes
# start a new batch with current episode
batch_episodes = [ep_idx]
estimated_size = ep_estimated_size
total_frames = ep_length
else:
# add to current batch
batch_episodes.append(ep_idx)
estimated_size += ep_estimated_size
total_frames += ep_length
# yield final batch if not empty
if batch_episodes:
yield batch_episodes
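A small worked example of the batching rules documented above, using hypothetical sizes (each 100-frame episode is ~10 MB, so a 15 MB limit forces one episode per batch); this exercises the helper as it existed before this removal:

```python
batches = list(
    _iter_episode_batches(
        episode_indices=[0, 1, 2],
        episode_lengths={0: 100, 1: 100, 2: 100},
        size_per_frame_mb=0.1,
        video_file_size_limit=15.0,
        max_episodes=None,
        max_frames=None,
    )
)
assert batches == [[0], [1], [2]]
```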
def _estimate_frame_size_via_calibration(
dataset: LeRobotDataset,
img_key: str,
episode_indices: list[int],
temp_dir: Path,
fps: int,
vcodec: str,
pix_fmt: str,
g: int,
crf: int,
fast_decode: int,
num_calibration_frames: int = 30,
) -> float:
"""Estimate MB per frame by encoding a small calibration sample.
Encodes a representative sample of frames using the exact codec parameters
to measure actual compression ratio, which is more accurate than heuristics.
Args:
dataset: Source dataset with images.
img_key: Image key to calibrate (e.g., "observation.images.top").
episode_indices: List of episode indices being processed.
temp_dir: Temporary directory for calibration files.
fps: Frames per second for video encoding.
vcodec: Video codec (libsvtav1, h264, hevc).
pix_fmt: Pixel format (yuv420p, etc.).
g: GOP size (group of pictures).
crf: Constant Rate Factor (quality).
fast_decode: Fast decode tuning parameter.
num_calibration_frames: Number of frames to use for calibration (default: 30).
Returns:
Estimated size in MB per frame based on actual encoding.
"""
calibration_dir = temp_dir / "calibration" / img_key
calibration_dir.mkdir(parents=True, exist_ok=True)
try:
# Select a representative episode (prefer middle episode if available)
calibration_ep_idx = episode_indices[len(episode_indices) // 2]
# Get episode range
from_idx = dataset.meta.episodes["dataset_from_index"][calibration_ep_idx]
to_idx = dataset.meta.episodes["dataset_to_index"][calibration_ep_idx]
episode_length = to_idx - from_idx
# Use up to num_calibration_frames from this episode
num_frames = min(num_calibration_frames, episode_length)
# Get frames from dataset
hf_dataset = dataset.hf_dataset.with_format(None)
sample_indices = range(from_idx, from_idx + num_frames)
# Save calibration frames
for i, idx in enumerate(sample_indices):
img = hf_dataset[idx][img_key]
img.save(str(calibration_dir / f"frame-{i:06d}.png"), quality=100)
# Encode calibration video
calibration_video_path = calibration_dir / "calibration.mp4"
encode_video_frames(
imgs_dir=calibration_dir,
video_path=calibration_video_path,
fps=fps,
vcodec=vcodec,
pix_fmt=pix_fmt,
g=g,
crf=crf,
fast_decode=fast_decode,
overwrite=True,
)
# Measure actual compressed size
video_size_bytes = calibration_video_path.stat().st_size
video_size_mb = video_size_bytes / BYTES_PER_MIB
size_per_frame_mb = video_size_mb / num_frames
logging.info(
f" Calibration: {num_frames} frames -> {video_size_mb:.2f} MB "
f"= {size_per_frame_mb:.4f} MB/frame for {img_key}"
)
return size_per_frame_mb
finally:
# Clean up calibration files
if calibration_dir.exists():
shutil.rmtree(calibration_dir)
def _copy_data_without_images(
src_dataset: LeRobotDataset,
dst_meta: LeRobotDatasetMetadata,
episode_indices: list[int],
img_keys: list[str],
) -> None:
"""Copy data files without image columns.
Args:
src_dataset: Source dataset
dst_meta: Destination metadata
episode_indices: Episodes to include
img_keys: Image keys to remove
"""
from lerobot.datasets.utils import DATA_DIR
data_dir = src_dataset.root / DATA_DIR
parquet_files = sorted(data_dir.glob("*/*.parquet"))
if not parquet_files:
raise ValueError(f"No parquet files found in {data_dir}")
episode_set = set(episode_indices)
for src_path in tqdm(parquet_files, desc="Processing data files"):
df = pd.read_parquet(src_path).reset_index(drop=True)
# Filter to only include selected episodes
df = df[df["episode_index"].isin(episode_set)].copy()
if len(df) == 0:
continue
# Remove image columns
columns_to_drop = [col for col in img_keys if col in df.columns]
if columns_to_drop:
df = df.drop(columns=columns_to_drop)
# Get chunk and file indices from path
relative_path = src_path.relative_to(src_dataset.root)
chunk_dir = relative_path.parts[1]
file_name = relative_path.parts[2]
chunk_idx = int(chunk_dir.split("-")[1])
file_idx = int(file_name.split("-")[1].split(".")[0])
# Write to destination without pandas index
dst_path = dst_meta.root / f"data/chunk-{chunk_idx:03d}/file-{file_idx:03d}.parquet"
dst_path.parent.mkdir(parents=True, exist_ok=True)
df.to_parquet(dst_path, index=False)
# Video conversion constants
BYTES_PER_KIB = 1024
BYTES_PER_MIB = BYTES_PER_KIB * BYTES_PER_KIB
def convert_image_to_video_dataset(
dataset: LeRobotDataset,
output_dir: Path,
repo_id: str | None = None,
vcodec: str = "libsvtav1",
pix_fmt: str = "yuv420p",
g: int = 2,
crf: int = 30,
fast_decode: int = 0,
episode_indices: list[int] | None = None,
num_workers: int = 4,
max_episodes_per_batch: int | None = None,
max_frames_per_batch: int | None = None,
) -> LeRobotDataset:
"""Convert image-to-video dataset.
Creates a new LeRobotDataset with images encoded as videos, following the proper
LeRobot dataset structure with videos stored in chunked MP4 files.
Args:
dataset: The source LeRobot dataset with images
output_dir: Directory to save the new video dataset
repo_id: Repository ID for the new dataset (default: original_id + "_video")
vcodec: Video codec (default: libsvtav1)
pix_fmt: Pixel format (default: yuv420p)
g: Group of pictures size (default: 2)
crf: Constant rate factor (default: 30)
fast_decode: Fast decode tuning (default: 0)
episode_indices: List of episode indices to convert (None = all episodes)
num_workers: Number of threads for parallel processing (default: 4)
max_episodes_per_batch: Maximum episodes per video batch to avoid memory issues (None = no limit)
max_frames_per_batch: Maximum frames per video batch to avoid memory issues (None = no limit)
Returns:
New LeRobotDataset with images encoded as videos
"""
# Check that it's an image dataset
if len(dataset.meta.video_keys) > 0:
raise ValueError(
f"This operation is for image datasets only. Video dataset provided: {dataset.repo_id}"
)
# Get all image keys
hf_dataset = dataset.hf_dataset.with_format(None)
img_keys = [key for key in hf_dataset.features if key.startswith(OBS_IMAGE)]
if len(img_keys) == 0:
raise ValueError(f"No image keys found in dataset {dataset.repo_id}")
# Determine which episodes to process
if episode_indices is None:
episode_indices = list(range(dataset.meta.total_episodes))
if repo_id is None:
repo_id = f"{dataset.repo_id}_video"
logging.info(
f"Converting {len(episode_indices)} episodes with {len(img_keys)} cameras from {dataset.repo_id}"
)
logging.info(f"Video codec: {vcodec}, pixel format: {pix_fmt}, GOP: {g}, CRF: {crf}")
# Create new features dict, converting image features to video features
new_features = {}
for key, value in dataset.meta.features.items():
if key not in img_keys:
new_features[key] = value
else:
# Convert image key to video format
new_features[key] = value.copy()
new_features[key]["dtype"] = "video" # Change dtype from "image" to "video"
# Video info will be updated after episodes are encoded
# Create new metadata for video dataset
new_meta = LeRobotDatasetMetadata.create(
repo_id=repo_id,
fps=dataset.meta.fps,
features=new_features,
robot_type=dataset.meta.robot_type,
root=output_dir,
use_videos=True,
chunks_size=dataset.meta.chunks_size,
data_files_size_in_mb=dataset.meta.data_files_size_in_mb,
video_files_size_in_mb=dataset.meta.video_files_size_in_mb,
)
# Create temporary directory for image extraction
temp_dir = output_dir / "temp_images"
temp_dir.mkdir(parents=True, exist_ok=True)
# Process all episodes and batch encode videos
# Use dictionary for O(1) episode metadata lookups instead of O(n) linear search
all_episode_metadata = {}
fps = int(dataset.fps)
try:
# Build episode metadata entries first
logging.info("Building episode metadata...")
cumulative_frame_idx = 0
for ep_idx in episode_indices:
src_episode = dataset.meta.episodes[ep_idx]
ep_length = src_episode["length"]
ep_meta = {
"episode_index": ep_idx,
"length": ep_length,
"dataset_from_index": cumulative_frame_idx,
"dataset_to_index": cumulative_frame_idx + ep_length,
}
if "data/chunk_index" in src_episode:
ep_meta["data/chunk_index"] = src_episode["data/chunk_index"]
ep_meta["data/file_index"] = src_episode["data/file_index"]
all_episode_metadata[ep_idx] = ep_meta
cumulative_frame_idx += ep_length
# Process each camera and batch encode multiple episodes together
video_file_size_limit = new_meta.video_files_size_in_mb
# Pre-compute episode lengths for batching
episode_lengths = {ep_idx: dataset.meta.episodes["length"][ep_idx] for ep_idx in episode_indices}
for img_key in tqdm(img_keys, desc="Processing cameras"):
# Estimate size per frame by encoding a small calibration sample
# This provides accurate compression ratio for the specific codec parameters
size_per_frame_mb = _estimate_frame_size_via_calibration(
dataset=dataset,
img_key=img_key,
episode_indices=episode_indices,
temp_dir=temp_dir,
fps=fps,
vcodec=vcodec,
pix_fmt=pix_fmt,
g=g,
crf=crf,
fast_decode=fast_decode,
)
logging.info(f"Processing camera: {img_key}")
chunk_idx, file_idx = 0, 0
cumulative_timestamp = 0.0
# Process episodes in batches to stay under size limit
for batch_episodes in _iter_episode_batches(
episode_indices=episode_indices,
episode_lengths=episode_lengths,
size_per_frame_mb=size_per_frame_mb,
video_file_size_limit=video_file_size_limit,
max_episodes=max_episodes_per_batch,
max_frames=max_frames_per_batch,
):
total_frames_in_batch = sum(episode_lengths[idx] for idx in batch_episodes)
logging.info(
f" Encoding batch of {len(batch_episodes)} episodes "
f"({batch_episodes[0]}-{batch_episodes[-1]}) = {total_frames_in_batch} frames"
)
# Save images for all episodes in this batch
imgs_dir = temp_dir / f"batch_{chunk_idx}_{file_idx}" / img_key
episode_durations = _save_batch_episodes_images(
dataset=dataset,
imgs_dir=imgs_dir,
img_key=img_key,
episode_indices=batch_episodes,
num_workers=num_workers,
)
# Encode all batched episodes into single video
video_path = new_meta.root / new_meta.video_path.format(
video_key=img_key, chunk_index=chunk_idx, file_index=file_idx
)
video_path.parent.mkdir(parents=True, exist_ok=True)
encode_video_frames(
imgs_dir=imgs_dir,
video_path=video_path,
fps=fps,
vcodec=vcodec,
pix_fmt=pix_fmt,
g=g,
crf=crf,
fast_decode=fast_decode,
overwrite=True,
)
# Clean up temporary images
shutil.rmtree(imgs_dir)
# Update metadata for each episode in the batch
for ep_idx, duration in zip(batch_episodes, episode_durations, strict=True):
from_timestamp = cumulative_timestamp
to_timestamp = cumulative_timestamp + duration
cumulative_timestamp = to_timestamp
# Find episode metadata entry and add video metadata (O(1) dictionary lookup)
ep_meta = all_episode_metadata[ep_idx]
ep_meta[f"videos/{img_key}/chunk_index"] = chunk_idx
ep_meta[f"videos/{img_key}/file_index"] = file_idx
ep_meta[f"videos/{img_key}/from_timestamp"] = from_timestamp
ep_meta[f"videos/{img_key}/to_timestamp"] = to_timestamp
# Move to next video file for next batch
chunk_idx, file_idx = update_chunk_file_indices(chunk_idx, file_idx, new_meta.chunks_size)
cumulative_timestamp = 0.0
# Copy and transform data files (removing image columns)
_copy_data_without_images(dataset, new_meta, episode_indices, img_keys)
# Save episode metadata
episodes_df = pd.DataFrame(list(all_episode_metadata.values()))
episodes_path = new_meta.root / "meta" / "episodes" / "chunk-000" / "file-000.parquet"
episodes_path.parent.mkdir(parents=True, exist_ok=True)
episodes_df.to_parquet(episodes_path, index=False)
# Update metadata info
new_meta.info["total_episodes"] = len(episode_indices)
new_meta.info["total_frames"] = sum(ep["length"] for ep in all_episode_metadata.values())
new_meta.info["total_tasks"] = dataset.meta.total_tasks
new_meta.info["splits"] = {"train": f"0:{len(episode_indices)}"}
# Update video info for all image keys (now videos)
# We need to manually set video info since update_video_info() checks video_keys first
for img_key in img_keys:
if not new_meta.features[img_key].get("info", None):
video_path = new_meta.root / new_meta.video_path.format(
video_key=img_key, chunk_index=0, file_index=0
)
new_meta.info["features"][img_key]["info"] = get_video_info(video_path)
write_info(new_meta.info, new_meta.root)
# Copy stats and tasks
if dataset.meta.stats is not None:
# Remove image stats
new_stats = {k: v for k, v in dataset.meta.stats.items() if k not in img_keys}
write_stats(new_stats, new_meta.root)
if dataset.meta.tasks is not None:
write_tasks(dataset.meta.tasks, new_meta.root)
finally:
# Clean up temporary directory
if temp_dir.exists():
shutil.rmtree(temp_dir)
logging.info(f"Completed converting {dataset.repo_id} to video format")
logging.info(f"New dataset saved to: {output_dir}")
# Return new dataset
return LeRobotDataset(repo_id=repo_id, root=output_dir)
+2 -7
@@ -27,7 +27,7 @@ from lerobot.datasets.lerobot_dataset import (
)
from lerobot.datasets.streaming_dataset import StreamingLeRobotDataset
from lerobot.datasets.transforms import ImageTransforms
from lerobot.utils.constants import ACTION, OBS_IMAGES, OBS_PREFIX, REWARD
from lerobot.utils.constants import ACTION, OBS_PREFIX, REWARD
IMAGENET_STATS = {
"mean": [[[0.485]], [[0.456]], [[0.406]]], # (c,1,1)
@@ -59,12 +59,7 @@ def resolve_delta_timestamps(
delta_timestamps[key] = [i / ds_meta.fps for i in cfg.reward_delta_indices]
if key == ACTION and cfg.action_delta_indices is not None:
delta_timestamps[key] = [i / ds_meta.fps for i in cfg.action_delta_indices]
# Check for image-specific delta indices first (e.g., for video encoding)
if key.startswith(OBS_IMAGES) and cfg.image_observation_delta_indices is not None:
delta_timestamps[key] = [i / ds_meta.fps for i in cfg.image_observation_delta_indices]
# Fall back to generic observation delta indices for all observations
elif key.startswith(OBS_PREFIX) and cfg.observation_delta_indices is not None:
if key.startswith(OBS_PREFIX) and cfg.observation_delta_indices is not None:
delta_timestamps[key] = [i / ds_meta.fps for i in cfg.observation_delta_indices]
if len(delta_timestamps) == 0:
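As a concrete instance of the index-to-timestamp mapping above (values are hypothetical): with `fps=30` and `action_delta_indices=[0, 1, 2]`, the loop produces:

```python
fps = 30
action_delta_indices = [0, 1, 2]
delta_timestamps = {"action": [i / fps for i in action_delta_indices]}
assert delta_timestamps == {"action": [0.0, 1 / 30, 2 / 30]}
```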
+5 -20
@@ -935,30 +935,17 @@ class LeRobotDataset(torch.utils.data.Dataset):
else:
return get_hf_features_from_features(self.features)
def _get_query_indices(
self, abs_idx: int, ep_idx: int
) -> tuple[dict[str, list[int]], dict[str, torch.Tensor]]:
"""Compute query indices for delta timestamps.
Args:
abs_idx: The absolute index in the full dataset (not the relative index in filtered episodes).
ep_idx: The episode index.
Returns:
A tuple of (query_indices, padding) where:
- query_indices: Dict mapping keys to lists of absolute indices to query
- padding: Dict mapping "{key}_is_pad" to boolean tensors indicating padded positions
"""
def _get_query_indices(self, idx: int, ep_idx: int) -> tuple[dict[str, list[int | bool]]]:
ep = self.meta.episodes[ep_idx]
ep_start = ep["dataset_from_index"]
ep_end = ep["dataset_to_index"]
query_indices = {
key: [max(ep_start, min(ep_end - 1, abs_idx + delta)) for delta in delta_idx]
key: [max(ep_start, min(ep_end - 1, idx + delta)) for delta in delta_idx]
for key, delta_idx in self.delta_indices.items()
}
padding = { # Pad values outside of current episode range
f"{key}_is_pad": torch.BoolTensor(
[(abs_idx + delta < ep_start) | (abs_idx + delta >= ep_end) for delta in delta_idx]
[(idx + delta < ep_start) | (idx + delta >= ep_end) for delta in delta_idx]
)
for key, delta_idx in self.delta_indices.items()
}
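A worked example of the clamping and padding logic above, with hypothetical numbers:

```python
# Episode spans dataset rows [100, 200); current row idx = 101; deltas = [-2, -1, 0].
ep_start, ep_end, idx = 100, 200, 101
deltas = [-2, -1, 0]
query = [max(ep_start, min(ep_end - 1, idx + d)) for d in deltas]
is_pad = [(idx + d < ep_start) or (idx + d >= ep_end) for d in deltas]
assert query == [100, 100, 101]  # idx - 2 falls before the episode and clamps to its start
assert is_pad == [True, False, False]  # the clamped frame is marked as padding
```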
@@ -1050,12 +1037,10 @@ class LeRobotDataset(torch.utils.data.Dataset):
self._ensure_hf_dataset_loaded()
item = self.hf_dataset[idx]
ep_idx = item["episode_index"].item()
# Use the absolute index from the dataset for delta timestamp calculations
abs_idx = item["index"].item()
query_indices = None
if self.delta_indices is not None:
query_indices, padding = self._get_query_indices(abs_idx, ep_idx)
query_indices, padding = self._get_query_indices(idx, ep_idx)
query_result = self._query_hf_dataset(query_indices)
item = {**item, **padding}
for key, val in query_result.items():
@@ -1513,7 +1498,7 @@ class LeRobotDataset(torch.utils.data.Dataset):
episode_index = self.episode_buffer["episode_index"]
if isinstance(episode_index, np.ndarray):
episode_index = episode_index.item() if episode_index.size == 1 else episode_index[0]
for cam_key in self.meta.image_keys:
for cam_key in self.meta.camera_keys:
img_dir = self._get_image_file_dir(episode_index, cam_key)
if img_dir.is_dir():
shutil.rmtree(img_dir)
+2 -11
@@ -1172,21 +1172,12 @@ def validate_episode_buffer(episode_buffer: dict, total_episodes: int, features:
)
def to_parquet_with_hf_images(
df: pandas.DataFrame, path: Path, features: datasets.Features | None = None
) -> None:
def to_parquet_with_hf_images(df: pandas.DataFrame, path: Path) -> None:
"""This function correctly writes to parquet a panda DataFrame that contains images encoded by HF dataset.
This way, it can be loaded by HF dataset and correctly formatted images are returned.
Args:
df: DataFrame to write to parquet.
path: Path to write the parquet file.
features: Optional HuggingFace Features schema. If provided, ensures image columns
are properly typed as Image() in the parquet schema.
"""
# TODO(qlhoest): replace this weird syntax by `df.to_parquet(path)` only
ds = datasets.Dataset.from_dict(df.to_dict(orient="list"), features=features)
ds.to_parquet(path)
datasets.Dataset.from_dict(df.to_dict(orient="list")).to_parquet(path)
def item_to_torch(item: dict) -> dict:
+2 -2
@@ -293,9 +293,9 @@ class LiberoEnv(gym.Env):
def reset(self, seed=None, **kwargs):
super().reset(seed=seed)
self._env.seed(seed)
raw_obs = self._env.reset()
if self.init_states and self._init_states is not None:
raw_obs = self._env.set_init_state(self._init_states[self._init_state_id])
self._env.set_init_state(self._init_states[self._init_state_id])
raw_obs = self._env.reset()
# After reset, objects may be unstable (slightly floating, intersecting, etc.).
# Step the simulator with a no-op action for a few frames so everything settles.
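A hedged sketch of the settling loop that comment describes (the frame count and no-op construction are assumptions, not code from this commit):

```python
import numpy as np


def settle(env, num_frames: int = 10):
    """Step with a zero action so objects come to rest after reset."""
    noop = np.zeros(env.action_space.shape, dtype=np.float32)
    obs = None
    for _ in range(num_frames):
        obs, *_ = env.step(noop)
    return obs
```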
-1
@@ -205,7 +205,6 @@ MODEL_BAUDRATE_TABLE = {
# Sign-Magnitude encoding bits
STS_SMS_SERIES_ENCODINGS_TABLE = {
"Present_Load": 10,
"Homing_Offset": 11,
"Goal_Position": 15,
"Goal_Velocity": 15,
+25 -7
@@ -32,7 +32,7 @@ import serial
from deepdiff import DeepDiff
from tqdm import tqdm
from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
from lerobot.utils.errors import DeviceAlreadyConnectedError, DeviceNotConnectedError
from lerobot.utils.utils import enter_pressed, move_cursor_up
NameOrID: TypeAlias = str | int
@@ -411,7 +411,6 @@ class MotorsBus(abc.ABC):
"""bool: `True` if the underlying serial port is open."""
return self.port_handler.is_open
@check_if_already_connected
def connect(self, handshake: bool = True) -> None:
"""Open the serial port and initialise communication.
@@ -423,6 +422,10 @@ class MotorsBus(abc.ABC):
DeviceAlreadyConnectedError: The port is already open.
ConnectionError: The underlying SDK failed to open the port or the handshake did not succeed.
"""
if self.is_connected:
raise DeviceAlreadyConnectedError(
f"{self.__class__.__name__}('{self.port}') is already connected. Do not call `{self.__class__.__name__}.connect()` twice."
)
self._connect(handshake)
self.set_timeout()
@@ -444,7 +447,6 @@ class MotorsBus(abc.ABC):
def _handshake(self) -> None:
pass
@check_if_not_connected
def disconnect(self, disable_torque: bool = True) -> None:
"""Close the serial port (optionally disabling torque first).
@@ -453,6 +455,10 @@ class MotorsBus(abc.ABC):
closing the port. This can prevent damaging motors if they are left applying resisting torque
after disconnect.
"""
if not self.is_connected:
raise DeviceNotConnectedError(
f"{self.__class__.__name__}('{self.port}') is not connected. Try running `{self.__class__.__name__}.connect()` first."
)
if disable_torque:
self.port_handler.clearPort()
@@ -901,7 +907,6 @@ class MotorsBus(abc.ABC):
"""
pass
@check_if_not_connected
def read(
self,
data_name: str,
@@ -922,6 +927,10 @@ class MotorsBus(abc.ABC):
Returns:
Value: Raw or normalised value depending on *normalize*.
"""
if not self.is_connected:
raise DeviceNotConnectedError(
f"{self.__class__.__name__}('{self.port}') is not connected. You need to run `{self.__class__.__name__}.connect()`."
)
id_ = self.motors[motor].id
model = self.motors[motor].model
@@ -972,7 +981,6 @@ class MotorsBus(abc.ABC):
return value, comm, error
@check_if_not_connected
def write(
self, data_name: str, motor: str, value: Value, *, normalize: bool = True, num_retry: int = 0
) -> None:
@@ -991,6 +999,10 @@ class MotorsBus(abc.ABC):
normalize (bool, optional): Enable or disable normalisation. Defaults to `True`.
num_retry (int, optional): Retry attempts. Defaults to `0`.
"""
if not self.is_connected:
raise DeviceNotConnectedError(
f"{self.__class__.__name__}('{self.port}') is not connected. You need to run `{self.__class__.__name__}.connect()`."
)
id_ = self.motors[motor].id
model = self.motors[motor].model
@@ -1032,7 +1044,6 @@ class MotorsBus(abc.ABC):
return comm, error
@check_if_not_connected
def sync_read(
self,
data_name: str,
@@ -1052,6 +1063,10 @@ class MotorsBus(abc.ABC):
Returns:
dict[str, Value]: Mapping *motor name* → *value*.
"""
if not self.is_connected:
raise DeviceNotConnectedError(
f"{self.__class__.__name__}('{self.port}') is not connected. You need to run `{self.__class__.__name__}.connect()`."
)
self._assert_protocol_is_compatible("sync_read")
@@ -1124,7 +1139,6 @@ class MotorsBus(abc.ABC):
# for id_ in motor_ids:
# value = self.sync_reader.getData(id_, address, length)
@check_if_not_connected
def sync_write(
self,
data_name: str,
@@ -1146,6 +1160,10 @@ class MotorsBus(abc.ABC):
normalize (bool, optional): If `True` (default) convert values from the user range to raw units.
num_retry (int, optional): Retry attempts. Defaults to `0`.
"""
if not self.is_connected:
raise DeviceNotConnectedError(
f"{self.__class__.__name__}('{self.port}') is not connected. You need to run `{self.__class__.__name__}.connect()`."
)
ids_values = self._get_ids_values_dict(values)
models = [self._id_to_model(id_) for id_ in ids_values]
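These hunks replace the `@check_if_already_connected` / `@check_if_not_connected` decorators with explicit `is_connected` guards. A hedged usage sketch of the guarded lifecycle (the concrete subclass, import path, and motor table are assumptions):

```python
from lerobot.motors.feetech import FeetechMotorsBus  # assumed import path

motors = {...}  # motor-name -> Motor(...) mapping, defined for your robot
bus = FeetechMotorsBus(port="/dev/ttyUSB0", motors=motors)

bus.connect()  # raises DeviceAlreadyConnectedError if the port is already open
positions = bus.sync_read("Present_Position")  # raises DeviceNotConnectedError when not connected
bus.write("Goal_Position", "shoulder", 2048)  # "shoulder" is a hypothetical motor name
bus.disconnect(disable_torque=True)  # optionally relax motors before closing the port
```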
+2 -17
@@ -35,7 +35,6 @@ from lerobot.policies.groot.configuration_groot import GrootConfig
from lerobot.policies.pi0.configuration_pi0 import PI0Config
from lerobot.policies.pi05.configuration_pi05 import PI05Config
from lerobot.policies.pretrained import PreTrainedPolicy
from lerobot.policies.videovla.configuration_pi05 import PI05VideoConfig
from lerobot.policies.sac.configuration_sac import SACConfig
from lerobot.policies.sac.reward_model.configuration_classifier import RewardClassifierConfig
from lerobot.policies.sarm.configuration_sarm import SARMConfig
@@ -68,7 +67,7 @@ def get_policy_class(name: str) -> type[PreTrainedPolicy]:
Args:
name: The name of the policy. Supported names are "tdmpc", "diffusion", "act",
"vqbet", "pi0", "pi05", "pi05_video", "sac", "reward_classifier", "smolvla", "wall_x".
"vqbet", "pi0", "pi05", "sac", "reward_classifier", "smolvla", "wall_x".
Returns:
The policy class corresponding to the given name.
@@ -104,10 +103,6 @@ def get_policy_class(name: str) -> type[PreTrainedPolicy]:
from lerobot.policies.pi05.modeling_pi05 import PI05Policy
return PI05Policy
elif name == "pi05_video":
from lerobot.policies.videovla.modeling_pi05 import PI05VideoPolicy
return PI05VideoPolicy
elif name == "sac":
from lerobot.policies.sac.modeling_sac import SACPolicy
@@ -152,7 +147,7 @@ def make_policy_config(policy_type: str, **kwargs) -> PreTrainedConfig:
Args:
policy_type: The type of the policy. Supported types include "tdmpc",
"diffusion", "act", "vqbet", "pi0", "pi05", "pi05_video", "sac", "smolvla",
"diffusion", "act", "vqbet", "pi0", "pi05", "sac", "smolvla",
"reward_classifier", "wall_x".
**kwargs: Keyword arguments to be passed to the configuration class constructor.
@@ -174,8 +169,6 @@ def make_policy_config(policy_type: str, **kwargs) -> PreTrainedConfig:
return PI0Config(**kwargs)
elif policy_type == "pi05":
return PI05Config(**kwargs)
elif policy_type == "pi05_video":
return PI05VideoConfig(**kwargs)
elif policy_type == "sac":
return SACConfig(**kwargs)
elif policy_type == "smolvla":
@@ -340,14 +333,6 @@ def make_pre_post_processors(
dataset_stats=kwargs.get("dataset_stats"),
)
elif isinstance(policy_cfg, PI05VideoConfig):
from lerobot.policies.videovla.processor_pi05 import make_pi05_video_pre_post_processors
processors = make_pi05_video_pre_post_processors(
config=policy_cfg,
dataset_stats=kwargs.get("dataset_stats"),
)
elif isinstance(policy_cfg, SACConfig):
from lerobot.policies.sac.processor_sac import make_sac_pre_post_processors
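A hedged sketch of the two factory entry points shown above (the module path is assumed from context):

```python
from lerobot.policies.factory import get_policy_class, make_policy_config  # assumed path

cfg = make_policy_config("act")  # kwargs are forwarded to the matching config class
policy_cls = get_policy_class("act")  # lazily imports and returns the policy class
```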
+1 -130
@@ -32,22 +32,16 @@ Notes:
from LeRobot, see `GrootPolicy.finetune_with_groot_runner` below.
"""
import builtins
import os
from collections import deque
from pathlib import Path
from typing import TypeVar
import torch
from torch import Tensor
from lerobot.configs.types import FeatureType, PolicyFeature
from lerobot.policies.groot.configuration_groot import GrootConfig
from lerobot.policies.groot.groot_n1 import GR00TN15
from lerobot.policies.pretrained import PreTrainedPolicy
from lerobot.utils.constants import ACTION, OBS_IMAGES
T = TypeVar("T", bound="GrootPolicy")
from lerobot.utils.constants import ACTION
class GrootPolicy(PreTrainedPolicy):
@@ -96,129 +90,6 @@ class GrootPolicy(PreTrainedPolicy):
"""Reset policy state when environment resets."""
self._action_queue = deque([], maxlen=self.config.n_action_steps)
@classmethod
def from_pretrained(
cls: builtins.type[T],
pretrained_name_or_path: str | Path,
*,
config: GrootConfig | None = None,
force_download: bool = False,
resume_download: bool | None = None,
proxies: dict | None = None,
token: str | bool | None = None,
cache_dir: str | Path | None = None,
local_files_only: bool = False,
revision: str | None = None,
strict: bool = True,
**kwargs,
) -> T:
"""Load Groot policy from pretrained model.
Handles two cases:
1. Base GR00T models (e.g., 'nvidia/GR00T-N1.5-3B') - loads the raw model
2. Fine-tuned LeRobot checkpoints - loads config and weights from safetensors
Args:
pretrained_name_or_path: Path to the GR00T model or fine-tuned checkpoint
config: Optional GrootConfig. If None, loads from checkpoint or creates default
force_download: Force download even if cached
resume_download: Resume interrupted download
proxies: Proxy settings
token: HuggingFace authentication token
cache_dir: Cache directory path
local_files_only: Only use local files
revision: Specific model revision
strict: Strict state dict loading
**kwargs: Additional arguments (passed to config)
Returns:
Initialized GrootPolicy instance with loaded model
"""
from huggingface_hub import hf_hub_download
from huggingface_hub.constants import SAFETENSORS_SINGLE_FILE
from huggingface_hub.errors import HfHubHTTPError
print(
"The Groot policy is a wrapper around Nvidia's GR00T N1.5 model.\n"
f"Loading pretrained model from: {pretrained_name_or_path}"
)
model_id = str(pretrained_name_or_path)
is_finetuned_checkpoint = False
# Check if this is a fine-tuned LeRobot checkpoint (has model.safetensors)
try:
if os.path.isdir(model_id):
is_finetuned_checkpoint = os.path.exists(os.path.join(model_id, SAFETENSORS_SINGLE_FILE))
else:
# Try to download the safetensors file to check if it exists
try:
hf_hub_download(
repo_id=model_id,
filename=SAFETENSORS_SINGLE_FILE,
revision=revision,
cache_dir=cache_dir,
force_download=False, # Just check, don't force download
proxies=proxies,
token=token,
local_files_only=local_files_only,
)
is_finetuned_checkpoint = True
except HfHubHTTPError:
is_finetuned_checkpoint = False
except Exception:
is_finetuned_checkpoint = False
if is_finetuned_checkpoint:
# This is a fine-tuned LeRobot checkpoint - use parent class loading
print("Detected fine-tuned LeRobot checkpoint, loading with state dict...")
return super().from_pretrained(
pretrained_name_or_path=pretrained_name_or_path,
config=config,
force_download=force_download,
resume_download=resume_download,
proxies=proxies,
token=token,
cache_dir=cache_dir,
local_files_only=local_files_only,
revision=revision,
strict=strict,
**kwargs,
)
# This is a base GR00T model - load it fresh
print("Detected base GR00T model, loading from HuggingFace...")
if config is None:
# Create default config with the pretrained path
config = GrootConfig(base_model_path=str(pretrained_name_or_path))
# Add minimal visual feature required for validation
# validate_features() will automatically add state and action features
# These are placeholders - actual robot features come from the preprocessor
if not config.input_features:
config.input_features = {
f"{OBS_IMAGES}.camera": PolicyFeature(
type=FeatureType.VISUAL,
shape=(3, 224, 224), # Default image size from config
),
}
else:
# Override the base_model_path with the provided path
config.base_model_path = str(pretrained_name_or_path)
# Pass through any additional config overrides from kwargs
for key, value in kwargs.items():
if hasattr(config, key):
setattr(config, key, value)
# Create a fresh policy instance - this will automatically load the GR00T model
# in __init__ via _create_groot_model()
policy = cls(config)
policy.eval()
return policy
def get_optim_params(self) -> dict:
return self.parameters()
-11
@@ -1297,14 +1297,3 @@ class PI0Policy(PreTrainedPolicy):
loss = losses.mean()
loss_dict["loss"] = loss.item()
return loss, loss_dict
def _get_default_peft_targets(self) -> dict[str, any]:
"""Return default PEFT target modules for PI0 fine-tuning."""
common_projections = (
"state_proj|action_in_proj|action_out_proj|action_time_mlp_in|action_time_mlp_out"
)
target_modules = rf"(.*\.gemma_expert\..*\.self_attn\.(q|v)_proj|model\.({common_projections}))"
return {
"target_modules": target_modules,
"modules_to_save": [],
}
+8 -19
@@ -460,8 +460,8 @@ class PaliGemmaWithExpertModel(
inputs_embeds=inputs_embeds[1],
attention_mask=attention_mask,
position_ids=position_ids,
use_cache=False,
past_key_values=None, #jadechoghari
past_key_values=past_key_values,
use_cache=use_cache,
adarms_cond=adarms_cond[1] if adarms_cond is not None else None,
)
suffix_output = suffix_output.last_hidden_state
@@ -575,13 +575,13 @@ class PI05Pytorch(nn.Module): # see openpi `PI0Pytorch`
msg = """An incorrect transformer version is used, please create an issue on https://github.com/huggingface/lerobot/issues"""
# try:
# from transformers.models.siglip import check
try:
from transformers.models.siglip import check
# if not check.check_whether_transformers_replace_is_installed_correctly():
# raise ValueError(msg)
# except ImportError:
# raise ValueError(msg) from None
if not check.check_whether_transformers_replace_is_installed_correctly():
raise ValueError(msg)
except ImportError:
raise ValueError(msg) from None
def gradient_checkpointing_enable(self):
"""Enable gradient checkpointing for memory optimization."""
@@ -1270,14 +1270,3 @@ class PI05Policy(PreTrainedPolicy):
loss = losses.mean()
loss_dict["loss"] = loss.item()
return loss, loss_dict
def _get_default_peft_targets(self) -> dict[str, any]:
"""Return default PEFT target modules for PI0.5 fine-tuning."""
common_projections = (
"state_proj|action_in_proj|action_out_proj|action_time_mlp_in|action_time_mlp_out"
)
target_modules = rf"(.*\.gemma_expert\..*\.self_attn\.(q|v)_proj|model\.({common_projections}))"
return {
"target_modules": target_modules,
"modules_to_save": [],
}
-164
@@ -13,7 +13,6 @@
# limitations under the License.
import abc
import builtins
import dataclasses
import logging
import os
from importlib.resources import files
@@ -266,166 +265,3 @@ class PreTrainedPolicy(nn.Module, HubMixin, abc.ABC):
card = ModelCard.from_template(card_data, template_str=template_card)
card.validate()
return card
def wrap_with_peft(
self,
peft_config=None,
peft_cli_overrides: dict | None = None,
) -> "PreTrainedPolicy":
"""
Wrap this policy with PEFT adapters for parameter-efficient fine-tuning.
This method is the single entry point for PEFT integration. Subclasses should
override `_get_default_peft_targets()` to provide default target modules, and
`_validate_peft_config()` for policy-specific validation.
Args:
peft_config: Optional PEFT adapter configuration (e.g., LoraConfig).
If provided, used directly (with CLI overrides applied).
peft_cli_overrides: Optional dict of CLI overrides (method_type, target_modules, r, etc.)
These are merged with policy defaults to build the final config.
"""
from peft import get_peft_model
# If user provided a complete config, use it directly (with overrides)
if peft_config is not None:
final_config = peft_config
if peft_cli_overrides:
final_config = self._apply_peft_cli_overrides(final_config, peft_cli_overrides)
else:
# Build config from defaults + CLI overrides
final_config = self._build_peft_config(peft_cli_overrides or {})
# Validate the configuration
self._validate_peft_config(final_config)
# Freeze base parameters, only adapter params will be trained
for p in self.parameters():
p.requires_grad_(False)
# Store pretrained path for PEFT's base_model_name_or_path
if self.config.pretrained_path:
self.name_or_path = str(self.config.pretrained_path)
# Wrap with PEFT
peft_model = get_peft_model(self, final_config)
# Mark config as using PEFT for proper loading later
peft_model.config.use_peft = True
logging.info(f"Wrapped {self.name} with PEFT ({type(final_config).__name__})")
return peft_model
def _get_default_peft_targets(self) -> dict[str, any] | None:
"""
Return default PEFT target modules for this policy.
Override this in subclasses to provide policy-specific defaults. These defaults
are PEFT-method agnostic - they only specify which modules to target.
"""
return None
def _validate_peft_config(self, peft_config) -> None:
"""
Validate the PEFT configuration for this policy.
Override this in subclasses to add policy-specific validation or warnings.
The default implementation checks that a pretrained_path exists.
Args:
peft_config: The PEFT configuration to validate.
Raises:
ValueError: If the configuration is invalid.
"""
if not self.config.pretrained_path:
raise ValueError(
"Training from scratch using PEFT is unlikely to yield good results. "
"Supply a `policy.pretrained_path` to fine-tune an existing model."
)
def _preprocess_peft_cli_overrides(self, cli_overrides: dict, peft_method_type) -> dict:
"""
Preprocess CLI overrides: rename keys and handle method-specific init_type.
Args:
cli_overrides: Dict of CLI options (will be copied, not mutated).
peft_method_type: The PeftType enum value for the PEFT method.
Returns:
Preprocessed dict with renamed keys and init_type mapped to method-specific key.
"""
from peft import PeftType
cli_overrides = cli_overrides.copy()
# Handle the full_training_modules -> modules_to_save rename
if "full_training_modules" in cli_overrides:
cli_overrides["modules_to_save"] = cli_overrides.pop("full_training_modules")
# Remove method_type as it's handled separately
cli_overrides.pop("method_type", None)
# Handle init_type specially based on PEFT method
init_type = cli_overrides.pop("init_type", None)
if init_type is not None:
if peft_method_type == PeftType.LORA:
cli_overrides["init_lora_weights"] = init_type
elif peft_method_type == PeftType.MISS:
cli_overrides["init_weights"] = init_type
else:
raise ValueError(f"Init type '{init_type}' unknown for PEFT method {peft_method_type}.")
return cli_overrides
def _build_peft_config(self, cli_overrides: dict):
"""Build a PEFT config from policy defaults and CLI overrides."""
from peft import PEFT_TYPE_TO_CONFIG_MAPPING, PeftType
# Determine PEFT method type (default to LORA)
method_type_str = cli_overrides.get("method_type") or "lora"
peft_method_type = PeftType[method_type_str.upper()]
peft_config_cls = PEFT_TYPE_TO_CONFIG_MAPPING[peft_method_type]
# Preprocess CLI overrides
cli_overrides = self._preprocess_peft_cli_overrides(cli_overrides, peft_method_type)
# Start with policy defaults, apply CLI overrides
config_dict = dict(self._get_default_peft_targets() or {})
for key, value in cli_overrides.items():
if value is not None:
config_dict[key] = value
# Ensure we have target_modules
if not config_dict.get("target_modules"):
raise ValueError(
f"Policy '{self.name}' does not define default target_modules. "
"Please pass --peft.target_modules explicitly."
)
return peft_config_cls(**config_dict)
def _apply_peft_cli_overrides(self, peft_config, cli_overrides: dict):
"""Apply CLI overrides to an existing PEFT config."""
from peft import PEFT_TYPE_TO_CONFIG_MAPPING, PeftType
# Get method type from existing config or CLI override
method_type_str = cli_overrides.get("method_type")
if method_type_str:
peft_method_type = PeftType[method_type_str.upper()]
peft_config_cls = PEFT_TYPE_TO_CONFIG_MAPPING[peft_method_type]
else:
peft_method_type = PeftType(peft_config.peft_type)
peft_config_cls = type(peft_config)
# Preprocess CLI overrides
cli_overrides = self._preprocess_peft_cli_overrides(cli_overrides, peft_method_type)
# Start with existing config, apply CLI overrides
config_dict = {k: v for k, v in dataclasses.asdict(peft_config).items() if not k.startswith("_")}
for key, value in cli_overrides.items():
if value is not None:
config_dict[key] = value
return peft_config_cls(**config_dict)
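# Illustrative usage sketch of the flow above; the checkpoint id and override
# values are hypothetical, and `print_trainable_parameters` is the standard
# helper PEFT exposes on wrapped models.
policy = SmolVLAPolicy.from_pretrained("user/smolvla-checkpoint")  # hypothetical repo id
peft_policy = policy.wrap_with_peft(
    peft_cli_overrides={"method_type": "lora", "r": 16, "lora_alpha": 32}
)
peft_policy.print_trainable_parameters()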
@@ -480,28 +480,6 @@ class SmolVLAPolicy(PreTrainedPolicy):
actions = pad_vector(batch[ACTION], self.config.max_action_dim)
return actions
def _get_default_peft_targets(self) -> dict[str, Any]:
"""Return default PEFT target modules for SmolVLA fine-tuning."""
common_projections = (
"state_proj|action_in_proj|action_out_proj|action_time_mlp_in|action_time_mlp_out"
)
target_modules = rf"(model\.vlm_with_expert\.lm_expert\..*\.(q|v)_proj|model\.({common_projections}))"
return {
"target_modules": target_modules,
"modules_to_save": [],
}
def _validate_peft_config(self, peft_config) -> None:
"""Validate PEFT configuration for SmolVLA."""
super()._validate_peft_config(peft_config)
if not self.config.load_vlm_weights:
import logging
logging.warning(
"Training SmolVLA from scratch using PEFT. This is unlikely to yield good results. "
"Set `load_vlm_weights=True` to fine-tune the existing policy."
)
def pad_tensor(tensor, max_len, pad_value=0):
"""
-49
View File
@@ -1,49 +0,0 @@
# π₀.₅ (pi05)
This repository contains the Hugging Face port of **π₀.₅**, adapted from [OpenPI](https://github.com/Physical-Intelligence/openpi) by Physical Intelligence.
It is designed as a **Vision-Language-Action model with open-world generalization**.
---
## Model Overview
| Feature | π₀ | π₀.₅ |
| -------------------- | ------------------------------------------------------ | ----------------------------------------- |
| Time Conditioning | Concatenates time with actions via `action_time_mlp_*` | Uses `time_mlp_*` for AdaRMS conditioning |
| AdaRMS | Not used | Used in action expert |
| Tokenizer Length | 48 tokens | 200 tokens |
| Discrete State Input | False (Uses `state_proj` layer) | True |
| Parameter Count | Higher (includes state embedding) | Lower (no state embedding) |
---
## Citation
If you use this work, please cite both **OpenPI** and the π₀.₅ paper:
```bibtex
@misc{openpi2024,
author = {Physical Intelligence Lab},
title = {OpenPI: PyTorch Implementation of π0 and π0.5 Policies},
year = {2024},
publisher = {GitHub},
howpublished = {\url{https://github.com/Physical-Intelligence/openpi}},
license = {Apache-2.0}
}
@misc{intelligence2025pi05visionlanguageactionmodelopenworld,
title = {π₀.₅: a Vision-Language-Action Model with Open-World Generalization},
author = {Physical Intelligence and Kevin Black and Noah Brown and James Darpinian and Karan Dhabalia and Danny Driess and Adnan Esmail and Michael Equi and Chelsea Finn and Niccolo Fusai and Manuel Y. Galliker and Dibya Ghosh and Lachy Groom and Karol Hausman and Brian Ichter and Szymon Jakubczak and Tim Jones and Liyiming Ke and Devin LeBlanc and Sergey Levine and Adrian Li-Bell and Mohith Mothukuri and Suraj Nair and Karl Pertsch and Allen Z. Ren and Lucy Xiaoyang Shi and Laura Smith and Jost Tobias Springenberg and Kyle Stachowicz and James Tanner and Quan Vuong and Homer Walke and Anna Walling and Haohuan Wang and Lili Yu and Ury Zhilinsky},
year = {2025},
eprint = {2504.16054},
archivePrefix= {arXiv},
primaryClass = {cs.LG},
url = {https://arxiv.org/abs/2504.16054},
}
```
---
## License
This port follows the **Apache 2.0 License**, consistent with the original [OpenPI repository](https://github.com/Physical-Intelligence/openpi).
-31
View File
@@ -1,31 +0,0 @@
#!/usr/bin/env python
# Copyright 2025 Physical Intelligence and The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Lazy imports to avoid conflicts with lerobot.policies.pi05.PI05Config
# when only importing subpackages like videoprism
def __getattr__(name):
if name == "PI05VideoConfig":
from .configuration_pi05 import PI05VideoConfig
return PI05VideoConfig
elif name == "PI05VideoPolicy":
from .modeling_pi05 import PI05VideoPolicy
return PI05VideoPolicy
elif name == "make_pi05_video_pre_post_processors":
from .processor_pi05 import make_pi05_video_pre_post_processors
return make_pi05_video_pre_post_processors
raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
__all__ = ["PI05VideoConfig", "PI05VideoPolicy", "make_pi05_video_pre_post_processors"]
@@ -1,212 +0,0 @@
#!/usr/bin/env python
# Copyright 2025 Physical Intelligence and The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass, field
from lerobot.configs.policies import PreTrainedConfig
from lerobot.configs.types import FeatureType, NormalizationMode, PolicyFeature
from lerobot.optim.optimizers import AdamWConfig
from lerobot.optim.schedulers import CosineDecayWithWarmupSchedulerConfig
from lerobot.policies.rtc.configuration_rtc import RTCConfig
from lerobot.utils.constants import ACTION, OBS_IMAGES, OBS_STATE
DEFAULT_IMAGE_SIZE = 224
@PreTrainedConfig.register_subclass("pi05_video")
@dataclass
class PI05VideoConfig(PreTrainedConfig):
paligemma_variant: str = "gemma_2b"
action_expert_variant: str = "gemma_300m"
dtype: str = "float32" # Options: "bfloat16", "float32"
n_obs_steps: int = 1
chunk_size: int = 50 # Number of action steps to predict, in openpi called "action_horizon"
n_action_steps: int = 50 # Number of action steps to execute
# Video encoder settings (VideoPrism)
use_video_encoder: bool = False # Enable video encoding with VideoPrism
video_num_frames: int = 16 # Number of frames for video encoding (VideoPrism default is 16)
videoprism_model_name: str = "MHRDYN7/videoprism-base-f16r288" # VideoPrism model to use
videoprism_image_size: int = 288 # VideoPrism expects 288x288 images
freeze_video_encoder: bool = True # Whether to freeze the video encoder weights
video_padding_mode: str = "repeat" # How to pad frames at episode start: "repeat" or "zero"
# Which camera to use for video encoding (None = first camera, or specify key like "observation.images.top")
video_encoder_camera_key: str | None = None
# Perceiver Resampler settings to reduce video tokens (4096 -> video_num_latents)
video_num_latents: int = 256 # Number of latent tokens for video resampler
video_resampler_num_heads: int = 8 # Number of attention heads in resampler
# Shorter state and action vectors will be padded to these dimensions
max_state_dim: int = 32
max_action_dim: int = 32
# Flow matching parameters: see openpi `PI0Pytorch`
num_inference_steps: int = 10
time_sampling_beta_alpha: float = 1.5
time_sampling_beta_beta: float = 1.0
time_sampling_scale: float = 0.999
time_sampling_offset: float = 0.001
min_period: float = 4e-3
max_period: float = 4.0
# Real-Time Chunking (RTC) configuration
rtc_config: RTCConfig | None = None
image_resolution: tuple[int, int] = (
DEFAULT_IMAGE_SIZE,
DEFAULT_IMAGE_SIZE,
) # see openpi `preprocessing_pytorch.py`
# Add empty images. Used to add empty cameras when no image features are present.
empty_cameras: int = 0
tokenizer_max_length: int = 200 # see openpi `__post_init__`
normalization_mapping: dict[str, NormalizationMode] = field(
default_factory=lambda: {
"VISUAL": NormalizationMode.IDENTITY,
"STATE": NormalizationMode.QUANTILES, # Pi0.5 uses quantiles for state
"ACTION": NormalizationMode.QUANTILES, # Pi0.5 uses quantiles for action
}
)
# Training settings
gradient_checkpointing: bool = False # Enable gradient checkpointing for memory optimization
compile_model: bool = False # Whether to use torch.compile for model optimization
compile_mode: str = "max-autotune" # Torch compile mode
device: str | None = None # Device to use for the model (None = auto-detect)
# Finetuning settings
freeze_vision_encoder: bool = False # Freeze only the vision encoder
train_expert_only: bool = False # Freeze entire VLM, train only action expert and projections
# Optimizer settings: see openpi `AdamW`
optimizer_lr: float = 2.5e-5 # see openpi `CosineDecaySchedule: peak_lr`
optimizer_betas: tuple[float, float] = (0.9, 0.95)
optimizer_eps: float = 1e-8
optimizer_weight_decay: float = 0.01
optimizer_grad_clip_norm: float = 1.0
# Scheduler settings: see openpi `CosineDecaySchedule`
# Note: These will auto-scale if --steps < scheduler_decay_steps
# For example, --steps=3000 will scale warmup to 100 and decay to 3000
scheduler_warmup_steps: int = 1_000
scheduler_decay_steps: int = 30_000
scheduler_decay_lr: float = 2.5e-6
def __post_init__(self):
super().__post_init__()
# Validate configuration
if self.n_action_steps > self.chunk_size:
raise ValueError(
f"n_action_steps ({self.n_action_steps}) cannot be greater than chunk_size ({self.chunk_size})"
)
if self.paligemma_variant not in ["gemma_300m", "gemma_2b"]:
raise ValueError(f"Invalid paligemma_variant: {self.paligemma_variant}")
if self.action_expert_variant not in ["gemma_300m", "gemma_2b"]:
raise ValueError(f"Invalid action_expert_variant: {self.action_expert_variant}")
if self.dtype not in ["bfloat16", "float32"]:
raise ValueError(f"Invalid dtype: {self.dtype}")
# Validate video encoder settings
if self.use_video_encoder:
if self.video_num_frames < 1:
raise ValueError(f"video_num_frames must be >= 1, got {self.video_num_frames}")
if self.videoprism_image_size < 1:
raise ValueError(f"videoprism_image_size must be >= 1, got {self.videoprism_image_size}")
if self.video_padding_mode not in ["repeat", "zero"]:
raise ValueError(
f"video_padding_mode must be 'repeat' or 'zero', got {self.video_padding_mode}"
)
def validate_features(self) -> None:
"""Validate and set up input/output features."""
for i in range(self.empty_cameras):
key = OBS_IMAGES + f".empty_camera_{i}"
empty_camera = PolicyFeature(
type=FeatureType.VISUAL,
shape=(3, *self.image_resolution), # Use configured image resolution
)
self.input_features[key] = empty_camera
if OBS_STATE not in self.input_features:
state_feature = PolicyFeature(
type=FeatureType.STATE,
shape=(self.max_state_dim,), # Padded to max_state_dim
)
self.input_features[OBS_STATE] = state_feature
if ACTION not in self.output_features:
action_feature = PolicyFeature(
type=FeatureType.ACTION,
shape=(self.max_action_dim,), # Padded to max_action_dim
)
self.output_features[ACTION] = action_feature
def get_optimizer_preset(self) -> AdamWConfig:
return AdamWConfig(
lr=self.optimizer_lr,
betas=self.optimizer_betas,
eps=self.optimizer_eps,
weight_decay=self.optimizer_weight_decay,
grad_clip_norm=self.optimizer_grad_clip_norm,
)
def get_scheduler_preset(self):
return CosineDecayWithWarmupSchedulerConfig(
peak_lr=self.optimizer_lr,
decay_lr=self.scheduler_decay_lr,
num_warmup_steps=self.scheduler_warmup_steps,
num_decay_steps=self.scheduler_decay_steps,
)
@property
def observation_delta_indices(self) -> list[int] | None:
"""Return indices for delta observations.
For PI05, we don't use generic observation_delta_indices because it would
apply to both images AND state. Instead, we use image_observation_delta_indices
which only applies to image observations.
"""
return None
@property
def image_observation_delta_indices(self) -> list[int] | None:
"""Return indices for delta image observations only.
When video encoding is enabled, returns indices for the past frames
needed by VideoPrism (e.g., -15, -14, ..., -1, 0 for 16 frames).
This only applies to image observations, not state.
"""
if self.use_video_encoder:
# Return indices for past frames: [-15, -14, ..., -1, 0] for 16 frames
return list(range(-(self.video_num_frames - 1), 1))
return None
@property
def action_delta_indices(self) -> list:
return list(range(self.chunk_size))
@property
def reward_delta_indices(self) -> None:
return None
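# Illustrative construction sketch (values mirror the defaults above):
config = PI05VideoConfig(use_video_encoder=True, video_num_frames=16)
assert config.image_observation_delta_indices == list(range(-15, 1))  # 16 frames: [-15, ..., 0]
assert config.action_delta_indices == list(range(50))                 # chunk_size defaults to 50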
File diff suppressed because it is too large
@@ -1,171 +0,0 @@
#!/usr/bin/env python
# Copyright 2025 Physical Intelligence and The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from copy import deepcopy
from dataclasses import dataclass
from typing import Any
import numpy as np
import torch
from lerobot.configs.types import PipelineFeatureType, PolicyFeature
from lerobot.policies.videovla.configuration_pi05 import PI05VideoConfig
from lerobot.policies.pi05.modeling_pi05 import pad_vector
from lerobot.processor import (
AddBatchDimensionProcessorStep,
DeviceProcessorStep,
NormalizerProcessorStep,
PolicyAction,
PolicyProcessorPipeline,
ProcessorStep,
ProcessorStepRegistry,
RenameObservationsProcessorStep,
TokenizerProcessorStep,
UnnormalizerProcessorStep,
)
from lerobot.processor.converters import policy_action_to_transition, transition_to_policy_action
from lerobot.processor.core import EnvTransition, TransitionKey
from lerobot.utils.constants import (
OBS_STATE,
POLICY_POSTPROCESSOR_DEFAULT_NAME,
POLICY_PREPROCESSOR_DEFAULT_NAME,
)
@ProcessorStepRegistry.register(name="pi05_prepare_state_tokenizer_processor_step")
@dataclass
class Pi05PrepareStateTokenizerProcessorStep(ProcessorStep):
"""
Processor step that pads and discretizes the (already normalized) state and builds the language prompt consumed by the tokenizer step.
"""
max_state_dim: int = 32
task_key: str = "task"
def __call__(self, transition: EnvTransition) -> EnvTransition:
transition = transition.copy()
state = transition.get(TransitionKey.OBSERVATION, {}).get(OBS_STATE)
if state is None:
raise ValueError("State is required for PI05")
tasks = transition.get(TransitionKey.COMPLEMENTARY_DATA, {}).get(self.task_key)
if tasks is None:
raise ValueError("No task found in complementary data")
# TODO: check if this is necessary
state = deepcopy(state)
# Prepare state (pad to max_state_dim)
state = pad_vector(state, self.max_state_dim)
# State should already be normalized to [-1, 1] by the NormalizerProcessorStep that runs before this step
# Discretize into 256 bins (see openpi `PaligemmaTokenizer.tokenize()`)
state_np = state.cpu().numpy()
discretized_states = np.digitize(state_np, bins=np.linspace(-1, 1, 256 + 1)[:-1]) - 1
full_prompts = []
for i, task in enumerate(tasks):
cleaned_text = task.strip().replace("_", " ").replace("\n", " ")
state_str = " ".join(map(str, discretized_states[i]))
full_prompt = f"Task: {cleaned_text}, State: {state_str};\nAction: "
full_prompts.append(full_prompt)
transition[TransitionKey.COMPLEMENTARY_DATA][self.task_key] = full_prompts
return transition
def transform_features(
self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
"""
This step does not alter the feature definitions.
"""
return features
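# Illustrative, standalone view of the discretization above: a state already
# normalized to [-1, 1] maps to one of 256 integer bins, which are then
# spelled out in the text prompt (task string hypothetical).
import numpy as np

_state = np.array([-1.0, 0.0, 1.0])
_bins = np.linspace(-1, 1, 256 + 1)[:-1]
_tokens = np.digitize(_state, bins=_bins) - 1
print(_tokens)  # -> [  0 128 255]
print(f"Task: pick up the cube, State: {' '.join(map(str, _tokens))};\nAction: ")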
def make_pi05_video_pre_post_processors(
config: PI05VideoConfig,
dataset_stats: dict[str, dict[str, torch.Tensor]] | None = None,
) -> tuple[
PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
PolicyProcessorPipeline[PolicyAction, PolicyAction],
]:
"""
Constructs pre-processor and post-processor pipelines for the PI05Video policy.
The pre-processing pipeline prepares input data for the model by:
1. Renaming features to match pretrained configurations.
2. Adding a batch dimension.
3. Normalizing input and output features based on dataset statistics.
4. Building the discretized-state prompt expected by the tokenizer.
5. Tokenizing the text prompt using the PaliGemma tokenizer.
6. Moving all data to the specified device.
The post-processing pipeline handles the model's output by:
1. Moving data to the CPU.
2. Unnormalizing the output features to their original scale.
Args:
config: The configuration object for the PI05Video policy.
dataset_stats: A dictionary of statistics for normalization.
Returns:
A tuple containing the configured pre-processor and post-processor pipelines.
"""
# Input (pre-processing) pipeline
input_steps: list[ProcessorStep] = [
RenameObservationsProcessorStep(rename_map={}),  # mirror the processor used by the pretrained checkpoint
AddBatchDimensionProcessorStep(),
# NOTE: NormalizerProcessorStep MUST come before Pi05PrepareStateTokenizerProcessorStep
# because the tokenizer step expects normalized state in [-1, 1] range for discretization
NormalizerProcessorStep(
features={**config.input_features, **config.output_features},
norm_map=config.normalization_mapping,
stats=dataset_stats,
),
Pi05PrepareStateTokenizerProcessorStep(max_state_dim=config.max_state_dim),
TokenizerProcessorStep(
tokenizer_name="google/paligemma-3b-pt-224",
max_length=config.tokenizer_max_length,
padding_side="right",
padding="max_length",
),
DeviceProcessorStep(device=config.device),
]
output_steps: list[ProcessorStep] = [
UnnormalizerProcessorStep(
features=config.output_features, norm_map=config.normalization_mapping, stats=dataset_stats
),
DeviceProcessorStep(device="cpu"),
]
return (
PolicyProcessorPipeline[dict[str, Any], dict[str, Any]](
steps=input_steps,
name=POLICY_PREPROCESSOR_DEFAULT_NAME,
),
PolicyProcessorPipeline[PolicyAction, PolicyAction](
steps=output_steps,
name=POLICY_POSTPROCESSOR_DEFAULT_NAME,
to_transition=policy_action_to_transition,
to_output=transition_to_policy_action,
),
)
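# Illustrative wiring sketch (variable names hypothetical):
preprocessor, postprocessor = make_pi05_video_pre_post_processors(config, dataset_stats=stats)
batch = preprocessor(raw_batch)              # normalize, build prompt, tokenize, move to device
action = policy.predict_action_chunk(batch)  # model inference
action = postprocessor(action)               # unnormalize, move back to CPU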
@@ -1,214 +0,0 @@
#!/usr/bin/env python
"""
Test script for PI05 with video encoder (VideoPrism).
This script creates a dummy example to test the model with video encoding enabled.
"""
import torch
from lerobot.configs.types import FeatureType, PolicyFeature
from lerobot.policies.videovla.configuration_pi05 import PI05VideoConfig
from lerobot.policies.videovla.modeling_pi05 import PI05VideoPolicy
from lerobot.utils.constants import ACTION, OBS_IMAGES, OBS_STATE
def create_dummy_batch(
batch_size: int = 2,
num_frames: int = 16,
image_size: int = 224,
num_cameras: int = 2,
state_dim: int = 14,
action_dim: int = 14,
chunk_size: int = 50,
seq_len: int = 10,
device: str = "cuda",
) -> dict[str, torch.Tensor]:
"""Create a dummy batch for testing."""
batch = {}
# Create image observations with temporal dimension [B, T, C, H, W]
for i in range(num_cameras):
key = f"{OBS_IMAGES}.camera_{i}"
# Images in [0, 1] range
batch[key] = torch.rand(batch_size, num_frames, 3, image_size, image_size, device=device)
# Create state observation [B, state_dim]
batch[OBS_STATE] = torch.rand(batch_size, state_dim, device=device)
# Create language tokens and attention mask [B, seq_len]
batch["observation.language.tokens"] = torch.randint(0, 1000, (batch_size, seq_len), device=device)
batch["observation.language.attention_mask"] = torch.ones(batch_size, seq_len, dtype=torch.bool, device=device)
# Create action targets [B, chunk_size, action_dim]
batch[ACTION] = torch.rand(batch_size, chunk_size, action_dim, device=device)
return batch
def test_video_encoder():
"""Test the PI05 model with video encoding enabled."""
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using device: {device}")
# Configuration
batch_size = 2
num_frames = 16
image_size = 224
num_cameras = 2
state_dim = 14
action_dim = 14
chunk_size = 50
# Create config with video encoder enabled
print("Creating PI05VideoConfig with video encoder...")
config = PI05VideoConfig(
use_video_encoder=True,
video_num_frames=num_frames,
videoprism_model_name="MHRDYN7/videoprism-base-f16r288",
videoprism_image_size=288,
freeze_video_encoder=True,
video_padding_mode="repeat",
video_encoder_camera_key=f"{OBS_IMAGES}.camera_0", # Use first camera for video
chunk_size=chunk_size,
max_action_dim=32,
max_state_dim=32,
dtype="float32", # Use float32 for testing
device=device,
)
# Set up input/output features
for i in range(num_cameras):
key = f"{OBS_IMAGES}.camera_{i}"
config.input_features[key] = PolicyFeature(
type=FeatureType.VISUAL,
shape=(3, image_size, image_size),
)
config.input_features[OBS_STATE] = PolicyFeature(
type=FeatureType.STATE,
shape=(state_dim,),
)
config.output_features[ACTION] = PolicyFeature(
type=FeatureType.ACTION,
shape=(action_dim,),
)
print(f"use_video_encoder: {config.use_video_encoder}")
print(f"video_num_frames: {config.video_num_frames}")
print(f"video_padding_mode: {config.video_padding_mode}")
print(f"video_encoder_camera_key: {config.video_encoder_camera_key}")
print(f"image_observation_delta_indices: {config.image_observation_delta_indices}")
# Create model
model = PI05VideoPolicy(config)
model.to(device)
# Create dummy batch
batch = create_dummy_batch(
batch_size=batch_size,
num_frames=num_frames,
image_size=image_size,
num_cameras=num_cameras,
state_dim=state_dim,
action_dim=action_dim,
chunk_size=chunk_size,
device=device,
)
print(f"Batch keys: {list(batch.keys())}" )
for key, value in batch.items():
print(f"{key}: {value.shape}")
# Test forward pass
model.train()
try:
loss, loss_dict = model.forward(batch)
print(f"Forward pass successful!")
print(f"Loss: {loss.item():.4f}")
print(f"Loss dict: {loss_dict}")
except Exception as e:
print(f"Forward pass failed: {e}")
raise
# Test inference
model.eval()
with torch.no_grad():
try:
actions = model.predict_action_chunk(batch)
print(f"Test pass, inference pass!")
print(f"Predicted actions shape: {actions.shape}")
except Exception as e:
print(f"Inference failed: {e}")
raise
print("All tests passed!")
def test_frame_padding():
"""Test frame padding at episode start."""
device = "cuda" if torch.cuda.is_available() else "cpu"
# Create config
config = PI05VideoConfig(
use_video_encoder=True,
video_num_frames=16,
videoprism_model_name="MHRDYN7/videoprism-base-f16r288",
freeze_video_encoder=True,
video_padding_mode="repeat",
chunk_size=50,
dtype="float32",
device=device,
)
# Set up minimal features
config.input_features[f"{OBS_IMAGES}.camera_0"] = PolicyFeature(
type=FeatureType.VISUAL,
shape=(3, 224, 224),
)
config.output_features[ACTION] = PolicyFeature(
type=FeatureType.ACTION,
shape=(14,),
)
# Create model
model = PI05VideoPolicy(config)
model.to(device)
# Test with fewer frames than expected (simulating episode start)
batch = {
f"{OBS_IMAGES}.camera_0": torch.rand(2, 5, 3, 224, 224, device=device),
"observation.language.tokens": torch.randint(0, 1000, (2, 10), device=device),
"observation.language.attention_mask": torch.ones(2, 10, dtype=torch.bool, device=device),
ACTION: torch.rand(2, 50, 14, device=device),
}
video_frames = model._preprocess_video(batch)
if video_frames is not None:
print(f"Input frames: 5")
print(f"Output video_frames shape: {video_frames.shape}")
print(f"Expected: [2, 16, 3, 224, 224]")
assert video_frames.shape == (2, 16, 3, 224, 224), f"Unexpected shape: {video_frames.shape}"
print("Frame padding test PASSED!")
else:
print("video_frames is None (unexpected)")
# Test with single frame
batch[f"{OBS_IMAGES}.camera_0"] = torch.rand(2, 3, 224, 224, device=device) # [B, C, H, W]
video_frames = model._preprocess_video(batch)
if video_frames is not None:
print(f"Input: single frame [B, C, H, W]")
print(f"Output video_frames shape: {video_frames.shape}")
print(f"Expected: [2, 16, 3, 224, 224]")
assert video_frames.shape == (2, 16, 3, 224, 224), f"Unexpected shape: {video_frames.shape}"
print("Single frame expansion test PASSED!")
else:
print("video_frames is None (unexpected)")
print("All tests passed!")
if __name__ == "__main__":
# Run tests
test_frame_padding()
test_video_encoder()
@@ -1,37 +0,0 @@
# Copyright 2025 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .configuration_videoprism import VideoPrismConfig, VideoPrismTextConfig, VideoPrismVisionConfig
from .modeling_videoprism import (
VideoPrismClipModel,
VideoPrismForVideoClassification,
VideoPrismPreTrainedModel,
VideoPrismTextModel,
VideoPrismVideoModel,
VideoPrismVisionModel,
)
from .video_processing_videoprism import VideoPrismVideoProcessor
__all__ = [
"VideoPrismConfig",
"VideoPrismTextConfig",
"VideoPrismVisionConfig",
"VideoPrismClipModel",
"VideoPrismForVideoClassification",
"VideoPrismPreTrainedModel",
"VideoPrismTextModel",
"VideoPrismVideoModel",
"VideoPrismVisionModel",
"VideoPrismVideoProcessor",
]
@@ -1,269 +0,0 @@
# 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
# This file was automatically generated from src/transformers/models/videoprism/modular_videoprism.py.
# Do NOT edit this file manually as any edits will be overwritten by the generation of
# the file from the modular. If any change should be done, please apply the change to the
# modular_videoprism.py file directly. One of our CI enforces this.
# 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
from transformers import PretrainedConfig
from transformers.utils import logging
logger = logging.get_logger(__name__)
class VideoPrismVisionConfig(PretrainedConfig):
r"""
This is the configuration class to store the configuration of a [`VideoPrismVisionModel`]. It is used to instantiate a
VideoPrism vision encoder according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the VideoPrism
[google/videoprism](https://huggingface.co/google/videoprism) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
image_size (`int`, *optional*, defaults to 288):
The size of the input image.
num_frames (`int`, *optional*, defaults to 16):
The number of frames in the input video.
tubelet_size (`List[int]`, *optional*, defaults to `[1, 18, 18]`):
The size of the tubelet patch.
num_channels (`int`, *optional*, defaults to 3):
The number of input channels.
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
num_spatial_layers (`int`, *optional*, defaults to 12):
Number of spatial transformer blocks.
num_temporal_layers (`int`, *optional*, defaults to 4):
Number of temporal transformer blocks.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (`str` or `function`, *optional*, defaults to `"gelu_python"`):
The non-linear activation function (function or string).
hidden_dropout_prob (`float`, *optional*, defaults to 0.0):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (`float`, *optional*, defaults to 1e-06):
The epsilon used by the layer normalization layers.
qkv_bias (`bool`, *optional*, defaults to `True`):
Whether to add a bias to the qkv projections in attention layers.
attn_logit_softcapping (`float`, *optional*, defaults to 50.0):
Softcapping constant for attention logits.
num_auxiliary_layers (`int`, *optional*, defaults to 2):
Number of auxiliary layers. This is used in the VideoPrismVideoModel that is a part of VideoPrismClipModel.
apply_l2_norm (`bool`, *optional*, defaults to `True`):
Whether to apply L2 normalization to the output. This is used in the VideoPrismVideoModel that is a part of VideoPrismClipModel.
Example:
```python
>>> from transformers import VideoPrismVisionConfig, VideoPrismVisionModel
>>> # Initializing a VideoPrismVisionConfig with default values
>>> configuration = VideoPrismVisionConfig()
>>> # Initializing a VideoPrismVisionModel with the configuration
>>> model = VideoPrismVisionModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```"""
model_type = "videoprism_vision_model"
base_config_key = "vision_config"
def __init__(
self,
image_size=288,
num_frames=16,
tubelet_size=[1, 18, 18],
num_channels=3,
hidden_size=768,
num_spatial_layers=12,
num_temporal_layers=4,
num_attention_heads=12,
intermediate_size=3072,
hidden_act="gelu_python",
hidden_dropout_prob=0.0,
attention_probs_dropout_prob=0.0,
initializer_range=0.02,
layer_norm_eps=1e-06,
qkv_bias=True,
attn_logit_softcapping=50.0,
num_auxiliary_layers=2,
apply_l2_norm=True,
**kwargs,
):
super().__init__(**kwargs)
self.hidden_size = hidden_size
self.num_attention_heads = num_attention_heads
self.intermediate_size = intermediate_size
self.hidden_act = hidden_act
self.hidden_dropout_prob = hidden_dropout_prob
self.attention_probs_dropout_prob = attention_probs_dropout_prob
self.initializer_range = initializer_range
self.layer_norm_eps = layer_norm_eps
self.image_size = image_size
self.num_frames = num_frames
self.tubelet_size = tubelet_size
self.num_channels = num_channels
self.qkv_bias = qkv_bias
self.num_spatial_layers = num_spatial_layers
self.num_temporal_layers = num_temporal_layers
self.attn_logit_softcapping = attn_logit_softcapping
self.num_auxiliary_layers = num_auxiliary_layers
self.apply_l2_norm = apply_l2_norm
class VideoPrismTextConfig(PretrainedConfig):
r"""
This is the configuration class to store the configuration of a [`VideoPrismTextModel`]. It is used to instantiate a
VideoPrism text encoder according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the VideoPrism
[google/videoprism](https://huggingface.co/google/videoprism) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
hidden_size (`int`, *optional*, defaults to 768):
Dimensionality of the encoder layers and the pooler layer.
intermediate_size (`int`, *optional*, defaults to 3072):
Dimensionality of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder.
num_attention_heads (`int`, *optional*, defaults to 12):
Number of attention heads for each attention layer in the Transformer encoder.
num_text_layers (`int`, *optional*, defaults to 12):
Number of hidden layers in the text Transformer encoder.
vocab_size (`int`, *optional*, defaults to 32000):
Vocabulary size of the text model. Defines the number of different tokens that can be represented by the
`input_ids` passed when calling [`VideoPrismTextModel`].
apply_l2_norm (`bool`, *optional*, defaults to `True`):
Whether to apply L2 normalization to the output text embeddings.
hidden_act (`str` or `function`, *optional*, defaults to `"relu"`):
The non-linear activation function (function or string) in the encoder and pooler.
attention_probs_dropout_prob (`float`, *optional*, defaults to 0.0):
The dropout ratio for the attention probabilities.
qkv_bias (`bool`, *optional*, defaults to `True`):
Whether to add a bias to the query, key, and value projections in the attention layers.
hidden_dropout_prob (`float`, *optional*, defaults to 0.0):
The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
layer_norm_eps (`float`, *optional*, defaults to 1e-06):
The epsilon used by the layer normalization layers.
initializer_range (`float`, *optional*, defaults to 0.02):
The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
attn_logit_softcapping (`float`, *optional*, defaults to 50.0):
Softcapping constant for attention logits.
Example:
```python
>>> from transformers import VideoPrismTextConfig, VideoPrismTextModel
>>> # Initializing a VideoPrismTextConfig with default values
>>> configuration = VideoPrismTextConfig()
>>> # Initializing a VideoPrismTextModel (with random weights) from the configuration
>>> model = VideoPrismTextModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```"""
model_type = "videoprism_text_model"
base_config_key = "text_config"
def __init__(
self,
hidden_size=768,
intermediate_size=3072,
num_attention_heads=12,
num_text_layers=12,
vocab_size=32000,
apply_l2_norm=True,
hidden_act="relu",
attention_probs_dropout_prob=0.0,
qkv_bias=True,
hidden_dropout_prob=0.0,
layer_norm_eps=1e-06,
initializer_range=0.02,
attn_logit_softcapping=50.0,
**kwargs,
):
super().__init__(**kwargs)
self.hidden_size = hidden_size
self.intermediate_size = intermediate_size
self.num_attention_heads = num_attention_heads
self.num_text_layers = num_text_layers
self.vocab_size = vocab_size
self.apply_l2_norm = apply_l2_norm
self.hidden_act = hidden_act
self.attention_probs_dropout_prob = attention_probs_dropout_prob
self.qkv_bias = qkv_bias
self.hidden_dropout_prob = hidden_dropout_prob
self.layer_norm_eps = layer_norm_eps
self.initializer_range = initializer_range
self.attn_logit_softcapping = attn_logit_softcapping
class VideoPrismConfig(PretrainedConfig):
r"""
This is the configuration class to store the configuration of a [`VideoPrismModel`]. It is used to instantiate a
VideoPrism model according to the specified arguments, defining the model architecture. Instantiating a
configuration with the defaults will yield a similar configuration to that of the VideoPrism
[google/videoprism](https://huggingface.co/google/videoprism) architecture.
Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
documentation from [`PretrainedConfig`] for more information.
Args:
text_config (`VideoPrismTextConfig`, *optional*):
Configuration for the text model.
vision_config (`VideoPrismVisionConfig`, *optional*):
Configuration for the vision model.
kwargs (*optional*):
Dictionary of keyword arguments.
Example:
```python
>>> from transformers import VideoPrismConfig, VideoPrismModel
>>> # Initializing a VideoPrismConfig with default values
>>> configuration = VideoPrismConfig()
>>> # Initializing a VideoPrismClipModel with the configuration
>>> model = VideoPrismClipModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
```"""
model_type = "videoprism"
sub_configs = {"text_config": VideoPrismTextConfig, "vision_config": VideoPrismVisionConfig}
def __init__(self, text_config=None, vision_config=None, **kwargs):
if text_config is None:
text_config = VideoPrismTextConfig()
logger.info("`text_config` is `None`. Initializing the `VideoPrismTextConfig` with default values.")
elif isinstance(text_config, dict):
text_config = VideoPrismTextConfig(**text_config)
if vision_config is None:
vision_config = VideoPrismVisionConfig()
logger.info("`vision_config` is `None`. initializing the `VideoPrismVisionConfig` with default values.")
elif isinstance(vision_config, dict):
vision_config = VideoPrismVisionConfig(**vision_config)
self.text_config = text_config
self.vision_config = vision_config
super().__init__(**kwargs)
__all__ = ["VideoPrismVisionConfig", "VideoPrismTextConfig", "VideoPrismConfig"]
@@ -1,245 +0,0 @@
# Copyright 2025 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
from collections import defaultdict
from contextlib import contextmanager
import torch
# Record all the torch primitives in advance, so that we can use them without them being modified when we patch torch
# in context managers
TORCH_INIT_FUNCTIONS = {
"uniform_": torch.nn.init.uniform_,
"normal_": torch.nn.init.normal_,
"constant_": torch.nn.init.constant_,
"ones_": torch.nn.init.ones_,
"zeros_": torch.nn.init.zeros_,
"eye_": torch.nn.init.eye_,
"dirac_": torch.nn.init.dirac_,
"xavier_uniform_": torch.nn.init.xavier_uniform_,
"xavier_normal_": torch.nn.init.xavier_normal_,
"kaiming_uniform_": torch.nn.init.kaiming_uniform_,
"kaiming_normal_": torch.nn.init.kaiming_normal_,
"trunc_normal_": torch.nn.init.trunc_normal_,
"orthogonal_": torch.nn.init.orthogonal_,
"sparse_": torch.nn.init.sparse_,
}
def uniform_(
tensor: torch.Tensor, a: float = 0.0, b: float = 1.0, generator: torch.Generator | None = None
) -> torch.Tensor:
if not getattr(tensor, "_is_hf_initialized", False):
return TORCH_INIT_FUNCTIONS["uniform_"](tensor, a=a, b=b, generator=generator)
return tensor
def normal_(
tensor: torch.Tensor, mean: float = 0.0, std: float = 1.0, generator: torch.Generator | None = None
) -> torch.Tensor:
if not getattr(tensor, "_is_hf_initialized", False):
return TORCH_INIT_FUNCTIONS["normal_"](tensor, mean=mean, std=std, generator=generator)
return tensor
def constant_(tensor: torch.Tensor, val: float) -> torch.Tensor:
if not getattr(tensor, "_is_hf_initialized", False):
return TORCH_INIT_FUNCTIONS["constant_"](tensor, val=val)
return tensor
def ones_(tensor: torch.Tensor) -> torch.Tensor:
if not getattr(tensor, "_is_hf_initialized", False):
return TORCH_INIT_FUNCTIONS["ones_"](tensor)
return tensor
def zeros_(tensor: torch.Tensor) -> torch.Tensor:
if not getattr(tensor, "_is_hf_initialized", False):
return TORCH_INIT_FUNCTIONS["zeros_"](tensor)
return tensor
def eye_(tensor: torch.Tensor) -> torch.Tensor:
if not getattr(tensor, "_is_hf_initialized", False):
return TORCH_INIT_FUNCTIONS["eye_"](tensor)
return tensor
def dirac_(tensor: torch.Tensor, groups: int = 1) -> torch.Tensor:
if not getattr(tensor, "_is_hf_initialized", False):
return TORCH_INIT_FUNCTIONS["dirac_"](tensor, groups=groups)
return tensor
def xavier_uniform_(tensor: torch.Tensor, gain: float = 1.0, generator: torch.Generator | None = None) -> torch.Tensor:
if not getattr(tensor, "_is_hf_initialized", False):
return TORCH_INIT_FUNCTIONS["xavier_uniform_"](tensor, gain=gain, generator=generator)
return tensor
def xavier_normal_(tensor: torch.Tensor, gain: float = 1.0, generator: torch.Generator | None = None) -> torch.Tensor:
if not getattr(tensor, "_is_hf_initialized", False):
return TORCH_INIT_FUNCTIONS["xavier_normal_"](tensor, gain=gain, generator=generator)
return tensor
def kaiming_uniform_(
tensor: torch.Tensor,
a: float = 0,
mode: str = "fan_in",
nonlinearity: str = "leaky_relu",
generator: torch.Generator | None = None,
) -> torch.Tensor:
if not getattr(tensor, "_is_hf_initialized", False):
return TORCH_INIT_FUNCTIONS["kaiming_uniform_"](
tensor, a=a, mode=mode, nonlinearity=nonlinearity, generator=generator
)
return tensor
def kaiming_normal_(
tensor: torch.Tensor,
a: float = 0,
mode: str = "fan_in",
nonlinearity: str = "leaky_relu",
generator: torch.Generator | None = None,
) -> torch.Tensor:
if not getattr(tensor, "_is_hf_initialized", False):
return TORCH_INIT_FUNCTIONS["kaiming_normal_"](
tensor, a=a, mode=mode, nonlinearity=nonlinearity, generator=generator
)
return tensor
def trunc_normal_(
tensor: torch.Tensor,
mean: float = 0.0,
std: float = 1.0,
a: float = -2.0,
b: float = 2.0,
generator: torch.Generator | None = None,
) -> torch.Tensor:
if not getattr(tensor, "_is_hf_initialized", False):
return TORCH_INIT_FUNCTIONS["trunc_normal_"](tensor, mean=mean, std=std, a=a, b=b, generator=generator)
return tensor
def orthogonal_(
tensor: torch.Tensor,
gain: float = 1,
generator: torch.Generator | None = None,
) -> torch.Tensor:
if not getattr(tensor, "_is_hf_initialized", False):
return TORCH_INIT_FUNCTIONS["orthogonal_"](tensor, gain=gain, generator=generator)
return tensor
def sparse_(
tensor: torch.Tensor, sparsity: float, std: float = 0.01, generator: torch.Generator | None = None
) -> torch.Tensor:
if not getattr(tensor, "_is_hf_initialized", False):
return TORCH_INIT_FUNCTIONS["sparse_"](tensor, sparsity=sparsity, std=std, generator=generator)
return tensor
def copy_(tensor: torch.Tensor, other: torch.Tensor) -> torch.Tensor:
if not getattr(tensor, "_is_hf_initialized", False):
with torch.no_grad():
return tensor.copy_(other)
return tensor
# Here, we need to check several modules imported, and hot patch all of them, as sometimes torch does
# something like `from torch.nn.init import xavier_uniform_` in their internals (e.g in torch.nn.modules.activations,
where MultiHeadAttention lives), so the function name is bound at import time and just doing
# `setattr(torch.nn.init, name, globals()[name])` is thus not enough
# The following list should be enough for all torch versions we work with
TORCH_MODULES_TO_PATCH = (
"torch.nn.init",
"torch.nn.modules.activation",
"torch.nn.modules.transformer",
"torch.nn.modules.linear",
"torch.nn.modules.loss",
"torch.nn.modules.batchnorm",
"torch.nn.modules.conv",
"torch.nn.modules.normalization",
"torch.nn.modules.rnn",
"torch.nn.modules.sparse",
)
@contextmanager
def guard_torch_init_functions():
"""
Guard the `torch.nn.init` primitive functions to behave exactly like the functions in this file, i.e. be
protected against the `_is_hf_initialized` flag to avoid re-init if the param was already loaded.
Usually, all models use the init functions from `transformers`, which are already guarded, but just to make extra sure
and for remote code, we also use this context manager.
"""
originals = defaultdict(dict)
try:
# Replace all torch funcs by the ones in this file
for module_name in TORCH_MODULES_TO_PATCH:
if module_name in sys.modules:
module = sys.modules[module_name]
for func_name in TORCH_INIT_FUNCTIONS.keys():
if hasattr(module, func_name):
originals[module][func_name] = getattr(module, func_name)
setattr(module, func_name, globals()[func_name])
yield
finally:
# Set back the original functions on all modules
for module, functions in originals.items():
for func_name, func in functions.items():
setattr(module, func_name, func)
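# Illustrative sketch of the guard: a tensor flagged as already initialized is
# left untouched by torch's in-place init functions inside the context.
_layer = torch.nn.Linear(4, 4)
_layer.weight._is_hf_initialized = True
with guard_torch_init_functions():
    torch.nn.init.zeros_(_layer.weight)  # no-op thanks to the guarded zeros_
assert not torch.all(_layer.weight == 0)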
@contextmanager
def no_init_weights():
"""
Disable weight initialization both at the torch-level, and at the transformers-level (`init_weights`).
This is used to speed up initializing an empty model with deepspeed: in that case we do not initialize the model on the
meta device, but we still do not need to run expensive weight initializations, since the params are loaded afterwards.
"""
from .modeling_utils import PreTrainedModel
def empty_func(*args, **kwargs):
pass
originals = defaultdict(dict)
try:
# Replace all torch funcs by empty ones
for module_name in TORCH_MODULES_TO_PATCH:
if module_name in sys.modules:
module = sys.modules[module_name]
for func_name in TORCH_INIT_FUNCTIONS.keys():
if hasattr(module, func_name):
originals[module][func_name] = getattr(module, func_name)
setattr(module, func_name, empty_func)
# Also patch our own `init_weights`
original_init_weights = PreTrainedModel.init_weights
PreTrainedModel.init_weights = empty_func
yield
finally:
# Set back the original torch functions on all modules
for module, functions in originals.items():
for func_name, func in functions.items():
setattr(module, func_name, func)
# Set back `init_weights`
PreTrainedModel.init_weights = original_init_weights
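# Illustrative usage sketch; the config and loading step are hypothetical but
# reflect the intended pattern: construct without init, then overwrite params.
with no_init_weights():
    model = VideoPrismVisionModel(vision_config)  # hypothetical config instance
model.load_state_dict(checkpoint_state_dict)      # hypothetical state dict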
@@ -1,994 +0,0 @@
# 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
# This file was automatically generated from src/transformers/models/videoprism/modular_videoprism.py.
# Do NOT edit this file manually as any edits will be overwritten by the generation of
# the file from the modular. If any change should be done, please apply the change to the
# modular_videoprism.py file directly. One of our CI enforces this.
# 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
import math
from collections.abc import Callable
from dataclasses import dataclass
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.init import _calculate_fan_in_and_fan_out
from . import initialization as init
from transformers.activations import ACT2FN
from transformers.masking_utils import create_causal_mask
from transformers.modeling_layers import GradientCheckpointingLayer
from transformers.modeling_outputs import BaseModelOutput, ImageClassifierOutput
from transformers.modeling_utils import ALL_ATTENTION_FUNCTIONS, PreTrainedModel
from transformers.utils import ModelOutput
from .configuration_videoprism import VideoPrismConfig, VideoPrismTextConfig, VideoPrismVisionConfig
def torch_int(x):
"""
Casts an input to a torch int64 tensor if we are in a tracing context, otherwise to a Python int.
"""
# torch is a hard dependency of this module, so no availability guard is needed
return x.to(torch.int64) if torch.jit.is_tracing() and isinstance(x, torch.Tensor) else int(x)
@dataclass
class BaseModelOutputWithSpatialAndTemporalStates(ModelOutput):
"""
Base class for model outputs that include spatial and temporal states.
Args:
last_hidden_state (Optional[torch.FloatTensor]):
The last hidden state of the model, typically of shape
(batch_size, num_patches * num_frames, hidden_size).
temporal_hidden_state (Optional[torch.FloatTensor]):
The last hidden_state of the temporal encoder, typically of shape
(batch_size * num_patches, num_frames, hidden_size).
spatial_hidden_state (Optional[torch.FloatTensor]):
The last hidden_state of the spatial encoder, typically of shape
(batch_size * num_frames, num_patches, hidden_size).
"""
last_hidden_state: torch.FloatTensor | None = None
temporal_hidden_state: torch.FloatTensor | None = None
spatial_hidden_state: torch.FloatTensor | None = None
@dataclass
class VideoPrismClipOutput(ModelOutput):
"""
Base class for VideoPrismClip model outputs.
"""
logits_per_video: torch.FloatTensor | None = None
logits_per_text: torch.FloatTensor | None = None
video_embeds: torch.FloatTensor | None = None
text_embeds: torch.FloatTensor | None = None
@dataclass
class VideoPrismVideoOutput(ModelOutput):
"""
Base class for VideoPrismVideo model outputs.
"""
video_last_hidden_state: torch.FloatTensor | None = None
auxiliary_output: torch.FloatTensor | None = None
attention_pooling_output: torch.FloatTensor | None = None
class VideoPrismTubeletEmbeddings(nn.Module):
"""
Construct VideoPrism Tubelet embeddings.
This module turns a batch of videos of shape (batch_size, num_frames, num_channels, height, width) into a tensor of
shape (batch_size, seq_len, hidden_size) to be consumed by a Transformer encoder.
The seq_len (the number of patches) equals (number of frames // tubelet_size[0]) * (height // tubelet_size[1]) *
(width // tubelet_size[2]).
"""
def __init__(self, config: VideoPrismVisionConfig):
super().__init__()
self.config = config
self.num_frames = config.num_frames
self.image_size = (
config.image_size
if isinstance(self.config.image_size, tuple)
else (self.config.image_size, self.config.image_size)
)
self.patch_size = config.tubelet_size
self.embed_dim = config.hidden_size
self.projection = nn.Conv3d(
config.num_channels, config.hidden_size, kernel_size=config.tubelet_size, stride=config.tubelet_size
)
self.pos_emb_shape = [self.image_size[0] // self.patch_size[1], self.image_size[1] // self.patch_size[2]]
self.num_patches = self.pos_emb_shape[0] * self.pos_emb_shape[1]
def forward(self, pixel_values_videos: torch.Tensor, interpolate_pos_encoding: bool = False) -> torch.Tensor:
batch_size, num_frames, num_channels, height, width = pixel_values_videos.shape
if not interpolate_pos_encoding and (height != self.image_size[0] or width != self.image_size[1]):
raise ValueError(
f"Image size ({height}*{width}) doesn't match model ({self.image_size[0]}*{self.image_size[1]}). Set interpolate_pos_encoding=True to automatically resize the model position embeddings."
)
# permute to (batch_size, num_channels, num_frames, height, width)
pixel_values_videos = pixel_values_videos.permute(0, 2, 1, 3, 4)
hidden_states = self.projection(pixel_values_videos)
# flatten the spatial part and permute to (B, T, num_patches, dim)
hidden_states = hidden_states.flatten(3).permute(0, 2, 3, 1)
# combine batch and time dimension
batch_size, num_frames, num_patches, hidden_size = hidden_states.shape
hidden_states = hidden_states.reshape(batch_size * num_frames, num_patches, hidden_size)
return hidden_states
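# Shape walk-through under the defaults (image_size=288, tubelet_size=[1, 18, 18],
# num_frames=16): Conv3d maps (B, 3, 16, 288, 288) -> (B, hidden, 16, 16, 16),
# i.e. (288 // 18) ** 2 == 256 patches per frame, and the final reshape yields
# (B * 16, 256, hidden).
assert (288 // 18) ** 2 == 256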
class VideoPrismSpatialEmbeddings(nn.Module):
"""
VideoPrism Spatial Embeddings.
Creates embeddings from a video using VideoPrismSpatialTubeletEmbeddings and adds positional embeddings.
"""
def __init__(self, config: VideoPrismVisionConfig):
super().__init__()
self.config = config
self.patch_embeddings = VideoPrismTubeletEmbeddings(config)
self.position_embeddings = nn.Parameter(torch.zeros(1, self.patch_embeddings.num_patches, config.hidden_size))
self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.patch_size = config.tubelet_size[1:]
self.tubelet_size = config.tubelet_size
# Adapted from transformers.models.vit.modeling_vit.ViTEmbeddings.interpolate_pos_encoding
def interpolate_pos_encoding(self, embeddings: torch.Tensor, height: int, width: int) -> torch.Tensor:
"""
This method allows to interpolate the pre-trained position encodings, to be able to use the model on higher resolution
images. This method is also adapted to support torch.jit tracing.
Adapted from:
- https://github.com/facebookresearch/dino/blob/de9ee3df6cf39fac952ab558447af1fa1365362a/vision_transformer.py#L174-L194, and
- https://github.com/facebookresearch/dinov2/blob/e1277af2ba9496fbadf7aec6eba56e8d882d1e35/dinov2/models/vision_transformer.py#L179-L211
"""
num_patches = embeddings.shape[1]
num_positions = self.position_embeddings.shape[1]
# always interpolate when tracing to ensure the exported model works for dynamic input shapes
if not torch.jit.is_tracing() and num_patches == num_positions and height == width:
return self.position_embeddings
dim = embeddings.shape[-1]
num_row_patches = height // self.patch_size[0]
num_col_patches = width // self.patch_size[1]
sqrt_num_positions = torch_int(num_positions**0.5)
patch_pos_embed = self.position_embeddings.reshape(1, sqrt_num_positions, sqrt_num_positions, dim)
patch_pos_embed = patch_pos_embed.permute(0, 3, 1, 2)
patch_pos_embed = nn.functional.interpolate(
patch_pos_embed,
size=(num_row_patches, num_col_patches),
mode="bilinear",
antialias=True,
)
patch_pos_embed = patch_pos_embed.permute(0, 2, 3, 1).view(1, -1, dim)
return patch_pos_embed
def forward(
self, pixel_values_videos: torch.Tensor, interpolate_pos_encoding: bool | None = False
) -> torch.Tensor:
b, t, c, h, w = pixel_values_videos.shape
assert h == w, "Input image height and width must be the same"
embeddings = self.patch_embeddings(pixel_values_videos, interpolate_pos_encoding)
# add positional encoding to each token
if interpolate_pos_encoding:
embeddings = embeddings + self.interpolate_pos_encoding(embeddings, h, w)
else:
embeddings = embeddings + self.position_embeddings
embeddings = self.dropout(embeddings)
return embeddings
class VideoPrismTemporalEmbeddings(nn.Module):
"""
VideoPrism Temporal Embeddings.
Receives embeddings from spatial encoder, reshapes the hidden state to
(batch_size * num_patches, num_frames, hidden_size) and adds positional embeddings.
"""
def __init__(self, config: VideoPrismVisionConfig):
super().__init__()
self.config = config
self.position_embeddings = nn.Parameter(torch.zeros(1, self.config.num_frames, config.hidden_size))
self.dropout = nn.Dropout(config.hidden_dropout_prob)
# Adapted from transformers.models.vit.modeling_vit.ViTEmbeddings.interpolate_pos_encoding
def interpolate_pos_encoding(self, embeddings: torch.Tensor) -> torch.Tensor:
"""
This method allows to interpolate the pre-trained position encodings, to be able to use the model on higher resolution
images. This method is also adapted to support torch.jit tracing.
Adapted from:
- https://github.com/facebookresearch/dino/blob/de9ee3df6cf39fac952ab558447af1fa1365362a/vision_transformer.py#L174-L194, and
- https://github.com/facebookresearch/dinov2/blob/e1277af2ba9496fbadf7aec6eba56e8d882d1e35/dinov2/models/vision_transformer.py#L179-L211
"""
target_emb_length = embeddings.shape[1]
source_emb_length = self.position_embeddings.shape[1]
# always interpolate when tracing to ensure the exported model works for dynamic input shapes
if not torch.jit.is_tracing() and target_emb_length == source_emb_length:
return self.position_embeddings
source_emb = self.position_embeddings
dim = embeddings.shape[-1]
source_emb = source_emb.unsqueeze(1)
source_emb = nn.functional.interpolate(
source_emb,
size=(target_emb_length, dim),
mode="bilinear",
antialias=True,
)
return source_emb.squeeze(1)
def forward(
self,
pixel_values_videos: torch.Tensor,
input_shape: torch.Size,
interpolate_pos_encoding: bool | None = False,
) -> torch.Tensor:
if input_shape is not None:
b, t, c, h, w = input_shape
_, features, dim = pixel_values_videos.shape
hidden_states = pixel_values_videos.view(b, t, features, dim)
hidden_states = hidden_states.permute(0, 2, 1, 3)
embeddings = hidden_states.reshape(b * features, t, dim)
# add positional encoding to each token
if interpolate_pos_encoding:
embeddings = embeddings + self.interpolate_pos_encoding(embeddings)
else:
embeddings = embeddings + self.position_embeddings
embeddings = self.dropout(embeddings)
return embeddings
def eager_attention_forward(
module: nn.Module,
query: torch.Tensor,
key: torch.Tensor,
value: torch.Tensor,
attention_mask: torch.Tensor | None,
scaling: float,
dropout: float = 0.0,
softcap: float | None = None,
**kwargs,
):
# Take the dot product between "query" and "key" to get the raw attention scores.
attn_weights = torch.matmul(query, key.transpose(-1, -2)) * scaling
if softcap is not None:
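# logit soft-capping: softcap * tanh(logits / softcap) bounds the scores to (-softcap, softcap)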
attn_weights = attn_weights / softcap
attn_weights = torch.tanh(attn_weights)
attn_weights = attn_weights * softcap
if attention_mask is not None:
attn_weights = attn_weights + attention_mask.expand(*attn_weights.shape)
# Normalize the attention scores to probabilities.
attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query.dtype)
attn_weights = nn.functional.dropout(attn_weights, p=dropout, training=module.training)
attn_output = torch.matmul(attn_weights, value)
attn_output = attn_output.transpose(1, 2).contiguous()
return attn_output, attn_weights
class VideoPrismSelfAttention(nn.Module):
def __init__(self, config: VideoPrismVisionConfig | VideoPrismTextConfig):
super().__init__()
if config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"):
raise ValueError(
f"The hidden size {config.hidden_size} is not a multiple of the number of attention "
f"heads {config.num_attention_heads}."
)
self.config = config
self.num_attention_heads = config.num_attention_heads
self.attention_head_size = int(config.hidden_size / config.num_attention_heads)
self.all_head_size = self.num_attention_heads * self.attention_head_size
self.dropout_prob = config.attention_probs_dropout_prob
self.scale = self.attention_head_size**-0.5
self.query = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias)
self.key = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias)
self.value = nn.Linear(config.hidden_size, self.all_head_size, bias=config.qkv_bias)
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: torch.Tensor | None,
**kwargs,
) -> tuple[torch.Tensor, torch.Tensor]:
batch_size = hidden_states.shape[0]
new_shape = batch_size, -1, self.num_attention_heads, self.attention_head_size
query = self.query(hidden_states).view(*new_shape).transpose(1, 2)
key = self.key(hidden_states).view(*new_shape).transpose(1, 2)
value = self.value(hidden_states).view(*new_shape).transpose(1, 2)
attention_interface: Callable = eager_attention_forward
if self.config._attn_implementation != "eager":
attention_interface = ALL_ATTENTION_FUNCTIONS[self.config._attn_implementation]
context_layer, attention_probs = attention_interface(
self,
query,
key,
value,
attention_mask,
scaling=self.scale,
dropout=0.0 if not self.training else self.dropout_prob,
softcap=self.config.attn_logit_softcapping,
**kwargs,
)
new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,)
context_layer = context_layer.reshape(new_context_layer_shape)
return (context_layer, attention_probs)
class VideoPrismSelfOutput(nn.Module):
"""
The residual connection is defined in VideoPrismLayer instead of here (as is the case with other models), due to the
layernorm applied before each block.
"""
def __init__(self, config: VideoPrismConfig):
super().__init__()
self.dense = nn.Linear(config.hidden_size, config.hidden_size)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
def forward(self, hidden_states: torch.Tensor, input_tensor: torch.Tensor) -> torch.Tensor:
hidden_states = self.dense(hidden_states)
hidden_states = self.dropout(hidden_states)
return hidden_states
class VideoPrismAttention(nn.Module):
def __init__(self, config: VideoPrismConfig):
super().__init__()
self.attention = VideoPrismSelfAttention(config)
self.output = VideoPrismSelfOutput(config)
def forward(
self, hidden_states: torch.Tensor, attention_mask: torch.Tensor, **kwargs
) -> torch.Tensor:
self_attn_output, _ = self.attention(hidden_states, attention_mask, **kwargs)
output = self.output(self_attn_output, hidden_states)
return output
class VideoPrismLayerNorm(nn.LayerNorm):
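# the stored weight acts as an offset from 1: the effective scale is 1 + weight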
def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
return F.layer_norm(hidden_states, self.normalized_shape, self.weight + 1, self.bias, self.eps)
class VideoPrismIntermediate(nn.Module):
def __init__(self, config: VideoPrismConfig):
super().__init__()
self.dense = nn.Linear(config.hidden_size, config.intermediate_size)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
if isinstance(config.hidden_act, str):
self.intermediate_act_fn = ACT2FN[config.hidden_act]
else:
self.intermediate_act_fn = config.hidden_act
def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
hidden_states = self.dense(hidden_states)
hidden_states = self.intermediate_act_fn(hidden_states)
hidden_states = self.dropout(hidden_states)
return hidden_states
class VideoPrismOutput(nn.Module):
def __init__(self, config: VideoPrismConfig):
super().__init__()
self.dense = nn.Linear(config.intermediate_size, config.hidden_size)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
def forward(self, hidden_states: torch.Tensor, input_tensor: torch.Tensor) -> torch.Tensor:
hidden_states = self.dense(hidden_states)
hidden_states = self.dropout(hidden_states)
hidden_states = hidden_states + input_tensor
return hidden_states
class VideoPrismLayer(GradientCheckpointingLayer):
"""This corresponds to the EncoderBlock class in the scenic/videoprism implementation."""
def __init__(self, config: VideoPrismVisionConfig | VideoPrismTextConfig):
super().__init__()
self.config = config
self.attention = VideoPrismAttention(config)
self.intermediate = VideoPrismIntermediate(config)
self.output = VideoPrismOutput(config)
self.layernorm_before = VideoPrismLayerNorm(self.config.hidden_size, eps=self.config.layer_norm_eps)
self.layernorm_after = VideoPrismLayerNorm(self.config.hidden_size, eps=self.config.layer_norm_eps)
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: torch.Tensor | None = None,
**kwargs,
) -> torch.Tensor:
hidden_states_norm = self.layernorm_before(hidden_states)
attention_output = self.attention(hidden_states_norm, attention_mask, **kwargs)
# first residual connection
hidden_states = attention_output + hidden_states
# in VideoPrism, layernorm is also applied after self-attention
layer_output = self.layernorm_after(hidden_states)
layer_output = self.intermediate(layer_output)
# second residual connection is done here
layer_output = self.output(layer_output, hidden_states)
return layer_output
class VideoPrismSpatialEncoder(nn.Module):
def __init__(self, config: VideoPrismVisionConfig):
super().__init__()
self.config = config
self.layer = nn.ModuleList([VideoPrismLayer(config) for _ in range(config.num_spatial_layers)])
self.gradient_checkpointing = False
def forward(self, hidden_states: torch.Tensor) -> BaseModelOutput:
for i, layer_module in enumerate(self.layer):
hidden_states = layer_module(hidden_states)
return BaseModelOutput(last_hidden_state=hidden_states)
class VideoPrismTemporalEncoder(nn.Module):
def __init__(self, config: VideoPrismVisionConfig):
super().__init__()
self.config = config
self.layer = nn.ModuleList([VideoPrismLayer(config) for _ in range(config.num_temporal_layers)])
self.gradient_checkpointing = False
def forward(self, hidden_states: torch.Tensor) -> BaseModelOutput:
for i, layer_module in enumerate(self.layer):
hidden_states = layer_module(hidden_states)
return BaseModelOutput(last_hidden_state=hidden_states)
class VideoPrismAuxiliaryEncoder(nn.Module):
def __init__(self, config: VideoPrismVisionConfig):
super().__init__()
self.config = config
self.layer = nn.ModuleList([VideoPrismLayer(self.config) for _ in range(config.num_auxiliary_layers)])
self.gradient_checkpointing = False
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: torch.Tensor | None = None,
**kwargs,
) -> BaseModelOutput:
for i, layer_module in enumerate(self.layer):
hidden_states = layer_module(hidden_states, attention_mask, **kwargs)
return BaseModelOutput(last_hidden_state=hidden_states)
class VideoPrismTextEncoder(nn.Module):
def __init__(self, config: VideoPrismTextConfig):
super().__init__()
self.config = config
self.layer = nn.ModuleList([VideoPrismLayer(config) for _ in range(config.num_text_layers)])
self.gradient_checkpointing = False
def forward(
self,
hidden_states: torch.Tensor,
attention_mask: torch.Tensor | None = None,
**kwargs,
) -> BaseModelOutput:
for i, layer_module in enumerate(self.layer):
hidden_states = layer_module(hidden_states, attention_mask, **kwargs)
return BaseModelOutput(last_hidden_state=hidden_states)
def variance_scaling_(tensor, mode="fan_in", distribution="normal"):
fan_in, fan_out = _calculate_fan_in_and_fan_out(tensor)
if mode == "fan_in":
denom = fan_in
elif mode == "fan_out":
denom = fan_out
elif mode == "fan_avg":
denom = (fan_in + fan_out) / 2
else:
raise ValueError(f"invalid mode {mode}")
variance = 1.0 / denom
if distribution == "truncated_normal":
init.trunc_normal_(tensor, std=math.sqrt(variance) / 0.87962566103423978)
elif distribution == "normal":
init.normal_(tensor, std=math.sqrt(variance))
elif distribution == "uniform":
bound = math.sqrt(3 * variance)
init.uniform_(tensor, -bound, bound)
else:
raise ValueError(f"invalid distribution {distribution}")
def lecun_normal_(tensor):
variance_scaling_(tensor, mode="fan_in", distribution="truncated_normal")
class VideoPrismPreTrainedModel(PreTrainedModel):
config_class = VideoPrismConfig
config: VideoPrismConfig
base_model_prefix = "videoprism"
main_input_name = "pixel_values_videos"
input_modalities = ("video", "text")
supports_gradient_checkpointing = True
_no_split_modules = [
"VideoPrismSpatialEmbeddings",
"VideoPrismTemporalEmbeddings",
"VideoPrismSpatialEncoder",
"VideoPrismTemporalEncoder",
"VideoPrismAuxiliaryEncoder",
"VideoPrismTextEncoder",
"VideoPrismMultiheadAttentionPoolingHead",
]
_supports_sdpa = True
_supports_flash_attn = True
_supports_attention_backend = True
_supports_flex_attention = True
def _init_weights(self, module):
if isinstance(module, (nn.Linear, nn.Conv3d)):
lecun_normal_(module.weight)
if module.bias is not None:
init.zeros_(module.bias)
elif isinstance(module, VideoPrismLayerNorm):
# forward uses weight + 1, so the offset weight starts at zero for an effective scale of 1
init.zeros_(module.weight)
init.zeros_(module.bias)
elif isinstance(module, nn.LayerNorm):
init.zeros_(module.bias)
init.ones_(module.weight)
class VideoPrismVisionModel(VideoPrismPreTrainedModel):
config_class = VideoPrismVisionConfig
config: VideoPrismVisionConfig
def __init__(self, config: VideoPrismVisionConfig):
super().__init__(config)
self.config = config
self.layernorm1 = VideoPrismLayerNorm(self.config.hidden_size, eps=self.config.layer_norm_eps)
self.layernorm2 = VideoPrismLayerNorm(self.config.hidden_size, eps=self.config.layer_norm_eps)
self.spatial_embeddings = VideoPrismSpatialEmbeddings(self.config)
self.temporal_embeddings = VideoPrismTemporalEmbeddings(self.config)
self.spatial_encoder = VideoPrismSpatialEncoder(self.config)
self.temporal_encoder = VideoPrismTemporalEncoder(self.config)
self.post_init()
def get_input_embeddings(self):
return self.spatial_embeddings.patch_embeddings
def forward(
self,
pixel_values_videos: torch.FloatTensor | None = None,
interpolate_pos_encoding: bool | None = False,
**kwargs,
) -> BaseModelOutputWithSpatialAndTemporalStates:
r"""
Args:
pixel_values_videos (`torch.FloatTensor`):
Pixel values of the video frames of shape (batch_size, num_frames, num_channels, height, width).
interpolate_pos_encoding (`bool`, *optional*, defaults to `False`):
Whether to interpolate positional encodings to match input size.
Example:
```python
>>> from transformers import VideoPrismVideoProcessor, VideoPrismVisionModel
>>> import torch
>>> processor = VideoPrismVideoProcessor.from_pretrained("google/videoprism")
>>> model = VideoPrismVisionModel.from_pretrained("google/videoprism")
>>> video = "sample_video.mp4"
>>> inputs = processor(videos=video, return_tensors="pt")
>>> with torch.no_grad():
... outputs = model(**inputs)
... features = outputs.last_hidden_state
```
"""
if pixel_values_videos is None:
raise ValueError("You have to specify pixel_values_videos")
input_shape = pixel_values_videos.shape
spatial_embeds = self.spatial_embeddings(pixel_values_videos, interpolate_pos_encoding)
spatial_encoder_outputs: BaseModelOutput = self.spatial_encoder(hidden_states=spatial_embeds, **kwargs)
# shape of spatial_sequence_output is (B * num_frames, num_patches, dim)
spatial_sequence_output = spatial_encoder_outputs.last_hidden_state
features = self.layernorm1(spatial_sequence_output)
temporal_embeds = self.temporal_embeddings(features, input_shape, interpolate_pos_encoding)
temporal_encoder_outputs: BaseModelOutput = self.temporal_encoder(hidden_states=temporal_embeds, **kwargs)
# shape of temporal_sequence_output is (B * num_patches, num_frames, dim)
temporal_sequence_output = temporal_encoder_outputs.last_hidden_state
features = self.layernorm2(temporal_sequence_output)
_, num_frames, dim = features.shape
features = features.view(input_shape[0], -1, num_frames, dim).permute(0, 2, 1, 3).contiguous()
_, num_frames, num_patches, dim = features.shape
features = features.view(input_shape[0], num_frames * num_patches, -1)
return BaseModelOutputWithSpatialAndTemporalStates(
last_hidden_state=features,
temporal_hidden_state=temporal_sequence_output,
spatial_hidden_state=spatial_sequence_output,
)
class VideoPrismMultiheadAttentionPoolingHead(nn.Module):
def __init__(self, config: VideoPrismVisionConfig):
super().__init__()
self.config = config
self.num_attention_heads = self.config.num_attention_heads
self.attention_head_size = int(self.config.intermediate_size / self.config.num_attention_heads)
self.all_head_size = self.num_attention_heads * self.attention_head_size
self.dropout_prob = self.config.attention_probs_dropout_prob
# PerDimScale
self.dim = int(self.config.intermediate_size / self.config.num_attention_heads)
self.per_dim_scale = nn.Parameter(torch.zeros(self.dim))
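# r_softplus_0 = 1 / softplus(0) = 1 / ln(2); with per_dim_scale initialized to zeros,
# the effective scale starts at 1 / sqrt(dim), i.e. standard attention scaling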
r_softplus_0 = 1.442695041
scale = torch.tensor(r_softplus_0 / (self.dim**0.5))
softplus = nn.functional.softplus(self.per_dim_scale)
scale = scale * softplus
self.register_buffer("scale", scale)
self.pooling_attention_query = nn.Parameter(torch.zeros(1, 1, self.config.hidden_size))
self.query = nn.Linear(self.config.hidden_size, self.config.intermediate_size, bias=self.config.qkv_bias)
self.key = nn.Linear(self.config.hidden_size, self.config.intermediate_size, bias=self.config.qkv_bias)
self.value = nn.Linear(self.config.hidden_size, self.config.intermediate_size, bias=self.config.qkv_bias)
self.projection = nn.Linear(self.config.intermediate_size, self.config.hidden_size, bias=self.config.qkv_bias)
self.layernorm = VideoPrismLayerNorm(self.config.hidden_size, eps=self.config.layer_norm_eps)
def forward(
self,
hidden_states: torch.FloatTensor,
attention_mask: torch.LongTensor | None = None,
**kwargs,
) -> tuple[torch.FloatTensor, torch.FloatTensor]:
batch_size, seq_length, hidden_size = hidden_states.shape
query = self.pooling_attention_query.expand(batch_size, -1, -1)
query_layer = (
self.query(query).view(batch_size, -1, self.num_attention_heads, self.attention_head_size).transpose(1, 2)
)
query_layer = query_layer * self.scale.expand(*query_layer.shape)
key_layer = (
self.key(hidden_states)
.view(batch_size, -1, self.num_attention_heads, self.attention_head_size)
.transpose(1, 2)
)
value_layer = (
self.value(hidden_states)
.view(batch_size, -1, self.num_attention_heads, self.attention_head_size)
.transpose(1, 2)
)
attention_interface: Callable = eager_attention_forward
if self.config._attn_implementation != "eager":
attention_interface = ALL_ATTENTION_FUNCTIONS[self.config._attn_implementation]
context_layer, attention_probs = attention_interface(
self,
query_layer,
key_layer,
value_layer,
attention_mask,
scaling=1.0,
dropout=0.0 if not self.training else self.dropout_prob,
softcap=None,
**kwargs,
)
new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,)
context_layer = context_layer.reshape(new_context_layer_shape)
outputs = self.projection(context_layer)
outputs = self.layernorm(outputs)
return (outputs, attention_probs)
def l2norm(x: torch.FloatTensor, dim: int = -1, eps: float = 1e-6):
"""This function is intended to align with the l2norm implementation in the FLA library."""
inv_norm = torch.rsqrt((x * x).sum(dim=dim, keepdim=True) + eps)
return x * inv_norm
class VideoPrismTextModel(VideoPrismPreTrainedModel):
config_class = VideoPrismTextConfig
config: VideoPrismTextConfig
def __init__(self, config: VideoPrismTextConfig):
super().__init__(config)
self.config = config
self.text_encoder = VideoPrismTextEncoder(self.config)
self.token_embeddings = nn.Embedding(config.vocab_size, config.hidden_size)
self.cls_emb = nn.Parameter(torch.zeros(1, 1, config.hidden_size))
self.layernorm = VideoPrismLayerNorm(config.hidden_size, eps=config.layer_norm_eps)
self.normalize = config.apply_l2_norm
self.post_init()
def create_sinusoidal_positions(self, num_pos: int, dim: int) -> torch.Tensor:
inv_freq = 1.0 / (10000 ** (torch.arange(0, dim, 2, dtype=torch.int64) / (dim - 2)))
sinusoid_inp = torch.einsum("i , j -> i j", torch.arange(num_pos, dtype=torch.int64).float(), inv_freq).float()
return torch.cat((torch.sin(sinusoid_inp), torch.cos(sinusoid_inp)), dim=1)
def forward(
self,
input_ids: torch.Tensor,
attention_mask: torch.Tensor | None = None,
**kwargs,
) -> BaseModelOutput:
r"""
Args:
input_ids (`torch.Tensor`):
Input token IDs.
attention_mask (`torch.Tensor`, *optional*):
Attention mask to avoid performing attention on padding token indices.
"""
batch_size, seq_length = input_ids.shape
hidden_states = self.token_embeddings(input_ids)
hidden_states = hidden_states * (self.config.hidden_size**0.5)
if attention_mask is not None:
# extend the mask to cover the appended [CLS] token, matching the mask's device and dtype
cls_padding = torch.ones(batch_size, 1, dtype=attention_mask.dtype, device=attention_mask.device)
attention_mask = torch.cat((attention_mask, cls_padding), dim=1)
attention_mask = create_causal_mask(
config=self.config,
input_embeds=hidden_states,
attention_mask=attention_mask,
cache_position=torch.arange(hidden_states.shape[1] + 1, device=hidden_states.device),
past_key_values=None,
)
position_embeddings = self.create_sinusoidal_positions(seq_length, self.config.hidden_size)
features = hidden_states + position_embeddings.to(device=hidden_states.device, dtype=hidden_states.dtype)
cls_emb = self.cls_emb * (self.config.hidden_size**0.5)
cls_emb = cls_emb.expand(features.shape[0], -1, -1)
features = torch.cat((features, cls_emb), dim=1)
text_encoder_output = self.text_encoder(features, attention_mask)
features = text_encoder_output.last_hidden_state
features = self.layernorm(features)
text_embeddings = features[:, -1]
if self.normalize:
text_embeddings = l2norm(text_embeddings, dim=-1)
return BaseModelOutput(
last_hidden_state=text_embeddings,
)
class VideoPrismVideoModel(VideoPrismPreTrainedModel):
config_class = VideoPrismVisionConfig
config: VideoPrismVisionConfig
def __init__(self, config: VideoPrismVisionConfig):
super().__init__(config)
self.config = config
self.backbone = VideoPrismVisionModel(self.config)
self.auxiliary_encoder = VideoPrismAuxiliaryEncoder(self.config)
self.contrastive_vision_pooler = VideoPrismMultiheadAttentionPoolingHead(self.config)
self.normalize = self.config.apply_l2_norm
self.post_init()
def get_input_embeddings(self):
return self.backbone.spatial_embeddings.patch_embeddings
def forward(
self,
pixel_values_videos: torch.FloatTensor,
interpolate_pos_encoding: bool | None = False,
**kwargs,
) -> VideoPrismVideoOutput:
r"""
Args:
pixel_values_videos (`torch.FloatTensor`):
Pixel values of the video frames.
interpolate_pos_encoding (`bool`, *optional*, defaults to `False`):
Whether to interpolate positional encodings to match input size.
"""
backbone_outputs = self.backbone(
pixel_values_videos=pixel_values_videos, interpolate_pos_encoding=interpolate_pos_encoding, **kwargs
)
video_features = backbone_outputs.last_hidden_state
auxiliary_output = self.auxiliary_encoder(video_features)
auxiliary_output_features = auxiliary_output.last_hidden_state
contrastive_vision_pooler_output = self.contrastive_vision_pooler(auxiliary_output_features, **kwargs)
video_embeddings = contrastive_vision_pooler_output[0]
if self.normalize:
video_embeddings = l2norm(video_embeddings, dim=-1)
return VideoPrismVideoOutput(
video_last_hidden_state=video_embeddings,
auxiliary_output=auxiliary_output,
attention_pooling_output=contrastive_vision_pooler_output,
)
class VideoPrismClipModel(VideoPrismPreTrainedModel):
config_class = VideoPrismConfig
def __init__(self, config: VideoPrismConfig):
super().__init__(config)
self.config = config
self.vision_config = config.vision_config
self.text_config = config.text_config
self.video_model = VideoPrismVideoModel(self.vision_config)
self.text_model = VideoPrismTextModel(self.text_config)
self.post_init()
def forward(
self,
pixel_values_videos: torch.FloatTensor,
input_ids: torch.Tensor,
attention_mask: torch.Tensor | None = None,
interpolate_pos_encoding: bool | None = False,
temperature: float | None = None,
**kwargs,
) -> VideoPrismClipOutput:
r"""
Args:
pixel_values_videos (`torch.FloatTensor`):
Pixel values of the video frames.
input_ids (`torch.Tensor`):
Input token IDs for text.
attention_mask (`torch.Tensor`, *optional*):
Attention mask for text inputs.
interpolate_pos_encoding (`bool`, *optional*, defaults to `False`):
Whether to interpolate positional encodings.
temperature (`float`, *optional*):
Temperature parameter for scaling similarity scores.
Example:
```python
>>> from transformers import VideoPrismProcessor, VideoPrismClipModel
>>> import torch
>>> processor = VideoPrismProcessor.from_pretrained("google/videoprism")
>>> model = VideoPrismClipModel.from_pretrained("google/videoprism")
>>> video = "sample_video.mp4"
>>> texts = ["a dog", "a cat"]
>>> inputs = processor(videos=video, texts=texts, return_tensors="pt", padding=True)
>>> with torch.no_grad():
... outputs = model(**inputs)
... logits_per_video = outputs.logits_per_video
```
"""
video_model_outputs = self.video_model(
pixel_values_videos=pixel_values_videos, interpolate_pos_encoding=interpolate_pos_encoding, **kwargs
)
text_model_outputs = self.text_model(input_ids=input_ids, attention_mask=attention_mask, **kwargs)
video_embeddings = video_model_outputs.video_last_hidden_state
text_embeddings = text_model_outputs.last_hidden_state
emb_dim = video_embeddings.shape[-1]
if emb_dim != text_embeddings.shape[-1]:
raise ValueError(
f"Video and text embedding dimensions must match, got {emb_dim} and {text_embeddings.shape[-1]}"
)
video_embeds = video_embeddings.reshape(-1, emb_dim)
text_embeds = text_embeddings.reshape(-1, emb_dim)
similarity_matrix = torch.matmul(video_embeds, text_embeds.T)
if temperature is not None:
similarity_matrix /= temperature
logits_per_video = torch.exp(similarity_matrix)
logits_per_text = logits_per_video.T
logits_per_video = logits_per_video / torch.sum(logits_per_video, dim=0, keepdim=True)
logits_per_text = logits_per_text / torch.sum(logits_per_text, dim=0, keepdim=True)
return VideoPrismClipOutput(
logits_per_video=logits_per_video,
logits_per_text=logits_per_text,
video_embeds=video_embeds,
text_embeds=text_embeds,
)
class VideoPrismForVideoClassification(VideoPrismPreTrainedModel):
config_class = VideoPrismVisionConfig
config: VideoPrismVisionConfig
def __init__(self, config: VideoPrismVisionConfig):
super().__init__(config)
self.config = config
self.encoder = VideoPrismVisionModel(self.config)
self.contrastive_vision_pooler = VideoPrismMultiheadAttentionPoolingHead(self.config)
self.classifier = nn.Linear(self.config.hidden_size, self.config.num_labels)
self.post_init()
def get_input_embeddings(self):
return self.encoder.spatial_embeddings.patch_embeddings
def forward(
self,
pixel_values_videos: torch.FloatTensor,
labels: torch.LongTensor | None = None,
interpolate_pos_encoding: bool | None = False,
**kwargs,
) -> ImageClassifierOutput:
r"""
Args:
pixel_values_videos (`torch.FloatTensor`):
Pixel values of the video frames.
labels (`torch.LongTensor`, *optional*):
Video classification labels.
interpolate_pos_encoding (`bool`, *optional*, defaults to `False`):
Whether to interpolate positional encodings.
Example:
```python
>>> from transformers import VideoPrismVideoProcessor, VideoPrismForVideoClassification
>>> import torch
>>> processor = VideoPrismVideoProcessor.from_pretrained("google/videoprism")
>>> model = VideoPrismForVideoClassification.from_pretrained("google/videoprism", num_labels=1000)
>>> video = "sample_video.mp4"
>>> inputs = processor(videos=video, return_tensors="pt")
>>> with torch.no_grad():
... outputs = model(**inputs)
... logits = outputs.logits
```
"""
encoder_outputs = self.encoder(
pixel_values_videos=pixel_values_videos, interpolate_pos_encoding=interpolate_pos_encoding, **kwargs
)
sequence_output = encoder_outputs.last_hidden_state
# the pooling head returns a (pooled_output, attention_probs) tuple
pooled_output = self.contrastive_vision_pooler(sequence_output, **kwargs)[0]
logits = self.classifier(pooled_output)
loss = None
if labels is not None:
loss = self.loss_function(labels, logits, self.config, **kwargs)
return ImageClassifierOutput(
loss=loss,
logits=logits,
hidden_states=encoder_outputs.last_hidden_state,
)
__all__ = [
"VideoPrismVisionModel",
"VideoPrismPreTrainedModel",
"VideoPrismVideoModel",
"VideoPrismTextModel",
"VideoPrismClipModel",
"VideoPrismForVideoClassification",
]
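The permute/view sequence at the end of VideoPrismVisionModel.forward is easy to misread, so here is a minimal standalone sketch of the factorized spatial-temporal reshaping (the shapes are illustrative assumptions, e.g. 16 frames of 16x16 patches at hidden size 768):

import torch

B, T, P, D = 2, 16, 256, 768  # batch, frames, patches per frame, hidden dim
spatial_out = torch.randn(B * T, P, D)  # spatial encoder output: one token sequence per frame
# regroup so that each patch position becomes a temporal sequence of length T
temporal_in = spatial_out.view(B, T, P, D).permute(0, 2, 1, 3).reshape(B * P, T, D)
temporal_out = temporal_in  # stand-in for the temporal encoder
# merge back into one token sequence per video: (B, T * P, D)
merged = temporal_out.view(B, P, T, D).permute(0, 2, 1, 3).contiguous().view(B, T * P, D)
print(merged.shape)  # torch.Size([2, 4096, 768])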
@@ -1,50 +0,0 @@
import torch
import numpy as np
from torchcodec.decoders import VideoDecoder
from lerobot.policies.videovla.videoprism import VideoPrismVideoProcessor
from lerobot.policies.videovla.videoprism import VideoPrismVisionModel
processor = VideoPrismVideoProcessor.from_pretrained(
"MHRDYN7/videoprism-base-f16r288"
)
model = VideoPrismVisionModel.from_pretrained(
"MHRDYN7/videoprism-base-f16r288",
torch_dtype=torch.float16,
device_map="auto",
attn_implementation="sdpa",
)
video_url = "https://huggingface.co/datasets/nateraw/kinetics-mini/resolve/main/val/archery/-Qz25rXdMjE_000014_000024.mp4"
vr = VideoDecoder(video_url)
frame_idx = np.arange(0, 64)
video = vr.get_frames_at(indices=frame_idx).data # T x C x H x W
video = processor(video, return_tensors="pt")
video = {k: v.to(model.device, model.dtype) for k, v in video.items()}
outputs = model(**video)
encoder_outputs = outputs.last_hidden_state
print(encoder_outputs.shape)
import time
# warmup
for _ in range(10):
_ = model(**video)
times = []
for _ in range(50):
torch.cuda.synchronize()
t0 = time.perf_counter()
_ = model(**video)
torch.cuda.synchronize()
t1 = time.perf_counter()
times.append(t1 - t0)
print(f"Mean: {1000*sum(times)/len(times):.2f} ms")
print(f"Min : {1000*min(times):.2f} ms")
print(f"Max : {1000*max(times):.2f} ms")
@@ -1,44 +0,0 @@
# 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
# This file was automatically generated from src/transformers/models/videoprism/modular_videoprism.py.
# Do NOT edit this file manually as any edits will be overwritten by the generation of
# the file from the modular. If any change should be done, please apply the change to the
# modular_videoprism.py file directly. One of our CI enforces this.
# 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
from transformers.image_utils import OPENAI_CLIP_MEAN, OPENAI_CLIP_STD, PILImageResampling
from transformers.video_processing_utils import BaseVideoProcessor
class VideoPrismVideoProcessor(BaseVideoProcessor):
r"""
Constructs a VideoPrism video processor.
This processor inherits from [`BaseVideoProcessor`] and sets default parameters for VideoPrism models.
Video frames are resized to 288x288 using bicubic resampling without normalization.
Args:
size (`Dict[str, int]`, *optional*, defaults to `{"height": 288, "width": 288}`):
The size to resize the video frames to.
resample (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BICUBIC`):
The resampling filter to use when resizing images.
do_normalize (`bool`, *optional*, defaults to `False`):
Whether to normalize the video frames.
"""
resample = PILImageResampling.BICUBIC
image_mean = OPENAI_CLIP_MEAN
image_std = OPENAI_CLIP_STD
size = {"height": 288, "width": 288}
rescale_factor = 1 / 255
default_to_square = False
crop_size = None
do_resize = True
do_center_crop = None
do_rescale = True
do_normalize = False
do_convert_rgb = True
do_sample_frames = False # Set to False for backward compatibility; recommended to set `True` in new models
__all__ = ["VideoPrismVideoProcessor"]
@@ -24,8 +24,7 @@ import numpy as np
import requests
from lerobot.processor import RobotAction, RobotObservation
from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
from lerobot.utils.errors import DeviceNotConnectedError
from lerobot.utils.errors import DeviceAlreadyConnectedError, DeviceNotConnectedError
from ..robot import Robot
from .config_earthrover_mini_plus import EarthRoverMiniPlusConfig
@@ -100,7 +99,6 @@ class EarthRoverMiniPlus(Robot):
"""Check if robot is connected to SDK."""
return self._is_connected
@check_if_already_connected
def connect(self, calibrate: bool = True) -> None:
"""Connect to robot via Frodobots SDK.
@@ -111,6 +109,8 @@ class EarthRoverMiniPlus(Robot):
DeviceAlreadyConnectedError: If robot is already connected
DeviceNotConnectedError: If cannot connect to SDK server
"""
if self._is_connected:
raise DeviceAlreadyConnectedError(f"{self.name} is already connected")
# Verify SDK is running and accessible
try:
@@ -197,7 +197,6 @@ class EarthRoverMiniPlus(Robot):
ACTION_ANGULAR_VEL: float,
}
@check_if_not_connected
def get_observation(self) -> RobotObservation:
"""Get current robot observation from SDK.
@@ -224,6 +223,8 @@ class EarthRoverMiniPlus(Robot):
Robot telemetry is retrieved from /data endpoint.
All SDK values are normalized to appropriate ranges for dataset recording.
"""
if not self._is_connected:
raise DeviceNotConnectedError(f"{self.name} is not connected")
observation = {}
@@ -254,7 +255,6 @@ class EarthRoverMiniPlus(Robot):
return observation
@check_if_not_connected
def send_action(self, action: RobotAction) -> RobotAction:
"""Send action to robot via SDK.
@@ -272,6 +272,8 @@ class EarthRoverMiniPlus(Robot):
Actions are sent to SDK via POST /control endpoint.
SDK expects commands in range [-1, 1].
"""
if not self._is_connected:
raise DeviceNotConnectedError(f"{self.name} is not connected")
# Extract action values and convert to float
linear = float(action.get(ACTION_LINEAR_VEL, 0.0))
@@ -289,7 +291,6 @@ class EarthRoverMiniPlus(Robot):
ACTION_ANGULAR_VEL: angular,
}
@check_if_not_connected
def disconnect(self) -> None:
"""Disconnect from robot.
@@ -298,6 +299,8 @@ class EarthRoverMiniPlus(Robot):
Raises:
DeviceNotConnectedError: If robot is not connected
"""
if not self._is_connected:
raise DeviceNotConnectedError(f"{self.name} is not connected")
# Stop the robot before disconnecting
try:
+12 -5
View File
@@ -25,7 +25,7 @@ from lerobot.motors.feetech import (
FeetechMotorsBus,
)
from lerobot.processor import RobotAction, RobotObservation
from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
from lerobot.utils.errors import DeviceAlreadyConnectedError, DeviceNotConnectedError
from ..robot import Robot
from ..utils import ensure_safe_goal_position
@@ -82,12 +82,13 @@ class HopeJrArm(Robot):
def is_connected(self) -> bool:
return self.bus.is_connected and all(cam.is_connected for cam in self.cameras.values())
@check_if_already_connected
def connect(self, calibrate: bool = True) -> None:
"""
We assume that at connection time, arm is in a rest position,
and torque can be safely disabled to run calibration.
"""
if self.is_connected:
raise DeviceAlreadyConnectedError(f"{self} already connected")
self.bus.connect(handshake=False)
if not self.is_calibrated and calibrate:
@@ -127,8 +128,10 @@ class HopeJrArm(Robot):
self.bus.setup_motor(motor)
print(f"'{motor}' motor id set to {self.bus.motors[motor].id}")
@check_if_not_connected
def get_observation(self) -> RobotObservation:
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
# Read arm position
start = time.perf_counter()
obs_dict = self.bus.sync_read("Present_Position", self.other_motors)
@@ -146,8 +149,10 @@ class HopeJrArm(Robot):
return obs_dict
@check_if_not_connected
def send_action(self, action: RobotAction) -> RobotAction:
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
goal_pos = {key.removesuffix(".pos"): val for key, val in action.items() if key.endswith(".pos")}
# Cap goal position when too far away from present position.
@@ -160,8 +165,10 @@ class HopeJrArm(Robot):
self.bus.sync_write("Goal_Position", goal_pos)
return {f"{motor}.pos": val for motor, val in goal_pos.items()}
@check_if_not_connected
def disconnect(self):
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
self.bus.disconnect(self.config.disable_torque_on_disconnect)
for cam in self.cameras.values():
cam.disconnect()
+13 -5
View File
@@ -25,7 +25,7 @@ from lerobot.motors.feetech import (
FeetechMotorsBus,
)
from lerobot.processor import RobotAction, RobotObservation
from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
from lerobot.utils.errors import DeviceAlreadyConnectedError, DeviceNotConnectedError
from ..robot import Robot
from .config_hope_jr import HopeJrHandConfig
@@ -118,8 +118,10 @@ class HopeJrHand(Robot):
def is_connected(self) -> bool:
return self.bus.is_connected and all(cam.is_connected for cam in self.cameras.values())
@check_if_already_connected
def connect(self, calibrate: bool = True) -> None:
if self.is_connected:
raise DeviceAlreadyConnectedError(f"{self} already connected")
self.bus.connect()
if not self.is_calibrated and calibrate:
self.calibrate()
@@ -157,8 +159,10 @@ class HopeJrHand(Robot):
self.bus.setup_motor(motor)
print(f"'{motor}' motor id set to {self.bus.motors[motor].id}")
@check_if_not_connected
def get_observation(self) -> RobotObservation:
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
obs_dict = {}
# Read hand position
@@ -177,14 +181,18 @@ class HopeJrHand(Robot):
return obs_dict
@check_if_not_connected
def send_action(self, action: RobotAction) -> RobotAction:
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
goal_pos = {key.removesuffix(".pos"): val for key, val in action.items() if key.endswith(".pos")}
self.bus.sync_write("Goal_Position", goal_pos)
return action
@check_if_not_connected
def disconnect(self):
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
self.bus.disconnect(self.config.disable_torque_on_disconnect)
for cam in self.cameras.values():
cam.disconnect()
@@ -25,7 +25,7 @@ from lerobot.motors.dynamixel import (
OperatingMode,
)
from lerobot.processor import RobotAction, RobotObservation
from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
from lerobot.utils.errors import DeviceAlreadyConnectedError, DeviceNotConnectedError
from ..robot import Robot
from ..utils import ensure_safe_goal_position
@@ -84,12 +84,13 @@ class KochFollower(Robot):
def is_connected(self) -> bool:
return self.bus.is_connected and all(cam.is_connected for cam in self.cameras.values())
@check_if_already_connected
def connect(self, calibrate: bool = True) -> None:
"""
We assume that at connection time, arm is in a rest position,
and torque can be safely disabled to run calibration.
"""
if self.is_connected:
raise DeviceAlreadyConnectedError(f"{self} already connected")
self.bus.connect()
if not self.is_calibrated and calibrate:
@@ -181,8 +182,10 @@ class KochFollower(Robot):
self.bus.setup_motor(motor)
print(f"'{motor}' motor id set to {self.bus.motors[motor].id}")
@check_if_not_connected
def get_observation(self) -> RobotObservation:
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
# Read arm position
start = time.perf_counter()
obs_dict = self.bus.sync_read("Present_Position")
@@ -199,7 +202,6 @@ class KochFollower(Robot):
return obs_dict
@check_if_not_connected
def send_action(self, action: RobotAction) -> RobotAction:
"""Command arm to move to a target joint configuration.
@@ -213,6 +215,8 @@ class KochFollower(Robot):
Returns:
RobotAction: The action sent to the motors, potentially clipped.
"""
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
goal_pos = {key.removesuffix(".pos"): val for key, val in action.items() if key.endswith(".pos")}
@@ -227,8 +231,10 @@ class KochFollower(Robot):
self.bus.sync_write("Goal_Position", goal_pos)
return {f"{motor}.pos": val for motor, val in goal_pos.items()}
@check_if_not_connected
def disconnect(self):
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
self.bus.disconnect(self.config.disable_torque_on_disconnect)
for cam in self.cameras.values():
cam.disconnect()
+12 -5
View File
@@ -29,7 +29,7 @@ from lerobot.motors.feetech import (
OperatingMode,
)
from lerobot.processor import RobotAction, RobotObservation
from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
from lerobot.utils.errors import DeviceAlreadyConnectedError, DeviceNotConnectedError
from ..robot import Robot
from ..utils import ensure_safe_goal_position
@@ -109,8 +109,10 @@ class LeKiwi(Robot):
def is_connected(self) -> bool:
return self.bus.is_connected and all(cam.is_connected for cam in self.cameras.values())
@check_if_already_connected
def connect(self, calibrate: bool = True) -> None:
if self.is_connected:
raise DeviceAlreadyConnectedError(f"{self} already connected")
self.bus.connect()
if not self.is_calibrated and calibrate:
logger.info(
@@ -337,8 +339,10 @@ class LeKiwi(Robot):
"theta.vel": theta,
} # m/s and deg/s
@check_if_not_connected
def get_observation(self) -> RobotObservation:
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
# Read actuators position for arm and vel for base
start = time.perf_counter()
arm_pos = self.bus.sync_read("Present_Position", self.arm_motors)
@@ -366,7 +370,6 @@ class LeKiwi(Robot):
return obs_dict
@check_if_not_connected
def send_action(self, action: RobotAction) -> RobotAction:
"""Command lekiwi to move to a target joint configuration.
@@ -380,6 +383,8 @@ class LeKiwi(Robot):
Returns:
RobotAction: the action sent to the motors, potentially clipped.
"""
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
arm_goal_pos = {k: v for k, v in action.items() if k.endswith(".pos")}
base_goal_vel = {k: v for k, v in action.items() if k.endswith(".vel")}
@@ -407,8 +412,10 @@ class LeKiwi(Robot):
self.bus.sync_write("Goal_Velocity", dict.fromkeys(self.base_motors, 0), num_retry=5)
logger.info("Base motors stopped")
@check_if_not_connected
def disconnect(self):
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
self.stop_base()
self.bus.disconnect(self.config.disable_torque_on_disconnect)
for cam in self.cameras.values():
+16 -6
View File
@@ -24,8 +24,7 @@ import numpy as np
from lerobot.processor import RobotAction, RobotObservation
from lerobot.utils.constants import ACTION, OBS_STATE
from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
from lerobot.utils.errors import DeviceNotConnectedError
from lerobot.utils.errors import DeviceAlreadyConnectedError, DeviceNotConnectedError
from ..robot import Robot
from .config_lekiwi import LeKiwiClientConfig
@@ -113,10 +112,14 @@ class LeKiwiClient(Robot):
def is_calibrated(self) -> bool:
pass
@check_if_already_connected
def connect(self) -> None:
"""Establishes ZMQ sockets with the remote mobile robot"""
if self._is_connected:
raise DeviceAlreadyConnectedError(
"LeKiwi Daemon is already connected. Do not run `robot.connect()` twice."
)
zmq = self._zmq
self.zmq_context = zmq.Context()
self.zmq_cmd_socket = self.zmq_context.socket(zmq.PUSH)
@@ -249,13 +252,14 @@ class LeKiwiClient(Robot):
return new_frames, new_state
@check_if_not_connected
def get_observation(self) -> RobotObservation:
"""
Capture observations from the remote robot: current follower arm positions,
present wheel speeds (converted to body-frame velocities: x, y, theta),
and a camera frame. Receives over ZMQ, translate to body-frame vel
"""
if not self._is_connected:
raise DeviceNotConnectedError("LeKiwiClient is not connected. You need to run `robot.connect()`.")
frames, obs_dict = self._get_data()
@@ -303,7 +307,6 @@ class LeKiwiClient(Robot):
def configure(self):
pass
@check_if_not_connected
def send_action(self, action: RobotAction) -> RobotAction:
"""Command lekiwi to move to a target joint configuration. Translates to motor space + sends over ZMQ
@@ -315,6 +318,10 @@ class LeKiwiClient(Robot):
Returns:
np.ndarray: the action sent to the motors, potentially clipped.
"""
if not self._is_connected:
raise DeviceNotConnectedError(
"ManipulatorRobot is not connected. You need to run `robot.connect()`."
)
self.zmq_cmd_socket.send_string(json.dumps(action)) # action is in motor space
@@ -325,10 +332,13 @@ class LeKiwiClient(Robot):
action_sent[ACTION] = actions
return action_sent
@check_if_not_connected
def disconnect(self):
"""Cleans ZMQ comms"""
if not self._is_connected:
raise DeviceNotConnectedError(
"LeKiwi is not connected. You need to run `robot.connect()` before disconnecting."
)
self.zmq_observation_socket.close()
self.zmq_cmd_socket.close()
self.zmq_context.term()
@@ -26,7 +26,7 @@ from lerobot.motors.dynamixel import (
OperatingMode,
)
from lerobot.processor import RobotAction, RobotObservation
from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
from lerobot.utils.errors import DeviceAlreadyConnectedError, DeviceNotConnectedError
from ..robot import Robot
from ..utils import ensure_safe_goal_position
@@ -84,7 +84,6 @@ class OmxFollower(Robot):
def is_connected(self) -> bool:
return self.bus.is_connected and all(cam.is_connected for cam in self.cameras.values())
@check_if_already_connected
def connect(self, calibrate: bool = True) -> None:
"""
For OMX robots that come pre-calibrated:
@@ -92,6 +91,8 @@ class OmxFollower(Robot):
- This allows using pre-calibrated robots without manual calibration
- If no calibration file exists, use factory default values (homing_offset=0, range_min=0, range_max=4095)
"""
if self.is_connected:
raise DeviceAlreadyConnectedError(f"{self} already connected")
self.bus.connect()
if not self.is_calibrated and calibrate:
@@ -164,8 +165,10 @@ class OmxFollower(Robot):
self.bus.setup_motor(motor)
print(f"'{motor}' motor id set to {self.bus.motors[motor].id}")
@check_if_not_connected
def get_observation(self) -> RobotObservation:
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
# Read arm position
start = time.perf_counter()
obs_dict = self.bus.sync_read("Present_Position")
@@ -182,7 +185,6 @@ class OmxFollower(Robot):
return obs_dict
@check_if_not_connected
def send_action(self, action: RobotAction) -> RobotAction:
"""Command arm to move to a target joint configuration.
@@ -196,6 +198,8 @@ class OmxFollower(Robot):
Returns:
RobotAction: The action sent to the motors, potentially clipped.
"""
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
goal_pos = {key.removesuffix(".pos"): val for key, val in action.items() if key.endswith(".pos")}
@@ -210,8 +214,10 @@ class OmxFollower(Robot):
self.bus.sync_write("Goal_Position", goal_pos)
return {f"{motor}.pos": val for motor, val in goal_pos.items()}
@check_if_not_connected
def disconnect(self):
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
self.bus.disconnect(self.config.disable_torque_on_disconnect)
for cam in self.cameras.values():
cam.disconnect()
-26
View File
@@ -58,32 +58,6 @@ class Robot(abc.ABC):
def __str__(self) -> str:
return f"{self.id} {self.__class__.__name__}"
def __enter__(self):
"""
Context manager entry.
Automatically connects to the camera.
"""
self.connect()
return self
def __exit__(self, exc_type, exc_value, traceback) -> None:
"""
Context manager exit.
Automatically disconnects, ensuring resources are released even on error.
"""
self.disconnect()
def __del__(self) -> None:
"""
Destructor safety net.
Attempts to disconnect if the object is garbage collected without cleanup.
"""
try:
if self.is_connected:
self.disconnect()
except Exception: # nosec B110
pass
# TODO(aliberts): create a proper Feature class for this that links with datasets
@property
@abc.abstractmethod
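With the __enter__/__exit__/__del__ hooks removed from the Robot base class, callers own the cleanup. A minimal usage pattern (illustrative; SOFollower is one of the robot classes touched in this diff, and the config construction is omitted):

robot = SOFollower(config)
robot.connect()
try:
obs = robot.get_observation()
finally:
robot.disconnect()  # released even if get_observation() raises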
+11 -5
View File
@@ -26,7 +26,7 @@ from lerobot.motors.feetech import (
OperatingMode,
)
from lerobot.processor import RobotAction, RobotObservation
from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
from lerobot.utils.errors import DeviceAlreadyConnectedError, DeviceNotConnectedError
from ..robot import Robot
from ..utils import ensure_safe_goal_position
@@ -85,12 +85,13 @@ class SOFollower(Robot):
def is_connected(self) -> bool:
return self.bus.is_connected and all(cam.is_connected for cam in self.cameras.values())
@check_if_already_connected
def connect(self, calibrate: bool = True) -> None:
"""
We assume that at connection time, arm is in a rest position,
and torque can be safely disabled to run calibration.
"""
if self.is_connected:
raise DeviceAlreadyConnectedError(f"{self} already connected")
self.bus.connect()
if not self.is_calibrated and calibrate:
@@ -175,8 +176,10 @@ class SOFollower(Robot):
self.bus.setup_motor(motor)
print(f"'{motor}' motor id set to {self.bus.motors[motor].id}")
@check_if_not_connected
def get_observation(self) -> RobotObservation:
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
# Read arm position
start = time.perf_counter()
obs_dict = self.bus.sync_read("Present_Position")
@@ -193,7 +196,6 @@ class SOFollower(Robot):
return obs_dict
@check_if_not_connected
def send_action(self, action: RobotAction) -> RobotAction:
"""Command arm to move to a target joint configuration.
@@ -207,6 +209,8 @@ class SOFollower(Robot):
Returns:
RobotAction: the action sent to the motors, potentially clipped.
"""
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
goal_pos = {key.removesuffix(".pos"): val for key, val in action.items() if key.endswith(".pos")}
@@ -221,8 +225,10 @@ class SOFollower(Robot):
self.bus.sync_write("Goal_Position", goal_pos)
return {f"{motor}.pos": val for motor, val in goal_pos.items()}
@check_if_not_connected
def disconnect(self):
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
self.bus.disconnect(self.config.disable_torque_on_disconnect)
for cam in self.cameras.values():
cam.disconnect()
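For context on what these diffs replace: the removed guard decorators presumably wrapped each method with a connection check along these lines (a sketch of the assumed lerobot.utils.decorators behavior; the actual implementation may differ):

import functools

from lerobot.utils.errors import DeviceNotConnectedError

def check_if_not_connected(func):
@functools.wraps(func)
def wrapper(self, *args, **kwargs):
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
return func(self, *args, **kwargs)
return wrapper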
+379 -32
View File
@@ -66,23 +66,23 @@ Remove camera feature:
--operation.type remove_feature \
--operation.feature_names "['observation.images.top']"
Convert image dataset to video format and save locally:
Convert image dataset to video format (saves locally):
python -m lerobot.scripts.lerobot_edit_dataset \
--repo_id lerobot/pusht_image \
--operation.type convert_image_to_video \
--operation.type convert_to_video \
--operation.output_dir /path/to/output/pusht_video
Convert image dataset to video format and save with new repo_id:
Convert image dataset and save with new repo_id:
python -m lerobot.scripts.lerobot_edit_dataset \
--repo_id lerobot/pusht_image \
--new_repo_id lerobot/pusht_video \
--operation.type convert_image_to_video
--operation.type convert_to_video
Convert image dataset to video format and push to hub:
Convert and push to hub:
python -m lerobot.scripts.lerobot_edit_dataset \
--repo_id lerobot/pusht_image \
--new_repo_id lerobot/pusht_video \
--operation.type convert_image_to_video \
--operation.type convert_to_video \
--push_to_hub true
Using JSON config file:
@@ -92,19 +92,24 @@ Using JSON config file:
import logging
import shutil
from concurrent.futures import ThreadPoolExecutor, as_completed
from dataclasses import dataclass
from pathlib import Path
import pandas as pd
from tqdm import tqdm
from lerobot.configs import parser
from lerobot.datasets.dataset_tools import (
convert_image_to_video_dataset,
delete_episodes,
merge_datasets,
remove_feature,
split_dataset,
)
from lerobot.datasets.lerobot_dataset import LeRobotDataset
from lerobot.utils.constants import HF_LEROBOT_HOME
from lerobot.datasets.lerobot_dataset import LeRobotDataset, LeRobotDatasetMetadata
from lerobot.datasets.utils import write_info, write_stats, write_tasks
from lerobot.datasets.video_utils import encode_video_frames, get_video_info
from lerobot.utils.constants import HF_LEROBOT_HOME, OBS_IMAGE
from lerobot.utils.utils import init_logging
@@ -133,8 +138,8 @@ class RemoveFeatureConfig:
@dataclass
class ConvertImageToVideoConfig:
type: str = "convert_image_to_video"
class ConvertToVideoConfig:
type: str = "convert_to_video"
output_dir: str | None = None
vcodec: str = "libsvtav1"
pix_fmt: str = "yuv420p"
@@ -143,16 +148,12 @@ class ConvertImageToVideoConfig:
fast_decode: int = 0
episode_indices: list[int] | None = None
num_workers: int = 4
max_episodes_per_batch: int | None = None
max_frames_per_batch: int | None = None
@dataclass
class EditDatasetConfig:
repo_id: str
operation: (
DeleteEpisodesConfig | SplitConfig | MergeConfig | RemoveFeatureConfig | ConvertImageToVideoConfig
)
operation: DeleteEpisodesConfig | SplitConfig | MergeConfig | RemoveFeatureConfig | ConvertToVideoConfig
root: str | None = None
new_repo_id: str | None = None
push_to_hub: bool = False
@@ -296,7 +297,362 @@ def handle_remove_feature(cfg: EditDatasetConfig) -> None:
LeRobotDataset(output_repo_id, root=output_dir).push_to_hub()
def handle_convert_image_to_video(cfg: EditDatasetConfig) -> None:
def save_episode_images_for_video(
dataset: LeRobotDataset,
imgs_dir: Path,
img_key: str,
episode_index: int,
num_workers: int = 4,
) -> None:
"""Save images from a specific episode and camera to disk for video encoding.
Args:
dataset: The LeRobot dataset to extract images from
imgs_dir: Directory to save images to
img_key: The image key (camera) to extract
episode_index: Index of the episode to save
num_workers: Number of threads for parallel image saving
"""
# Create directory
imgs_dir.mkdir(parents=True, exist_ok=True)
# Get dataset without torch format for PIL image access
hf_dataset = dataset.hf_dataset.with_format(None)
# Select only this camera's images
imgs_dataset = hf_dataset.select_columns(img_key)
# Get episode start and end indices
from_idx = dataset.meta.episodes["dataset_from_index"][episode_index]
to_idx = dataset.meta.episodes["dataset_to_index"][episode_index]
# Get all items for this episode
episode_dataset = imgs_dataset.select(range(from_idx, to_idx))
# Define function to save a single image
def save_single_image(i_item_tuple):
i, item = i_item_tuple
img = item[img_key]
# Use frame-XXXXXX.png format to match encode_video_frames expectations
img.save(str(imgs_dir / f"frame-{i:06d}.png"), quality=100)
return i
# Save images with proper naming convention for encode_video_frames (frame-XXXXXX.png)
items = list(enumerate(episode_dataset))
with ThreadPoolExecutor(max_workers=num_workers) as executor:
futures = [executor.submit(save_single_image, item) for item in items]
for future in as_completed(futures):
future.result() # This will raise any exceptions that occurred
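# After this returns, imgs_dir contains frame-000000.png ... frame-NNNNNN.png for the
# episode, which is exactly the naming scheme encode_video_frames() consumes below.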
def encode_episode_videos(
dataset: LeRobotDataset,
new_meta: LeRobotDatasetMetadata,
episode_index: int,
vcodec: str,
pix_fmt: str,
g: int,
crf: int,
fast_decode: int,
temp_dir: Path,
num_image_workers: int = 4,
) -> dict[str, dict]:
"""Encode videos for a single episode and return video metadata.
Args:
dataset: Source dataset with images
new_meta: Metadata object for the new video dataset
episode_index: Episode index to process
vcodec: Video codec
pix_fmt: Pixel format
g: Group of pictures size
crf: Constant rate factor
fast_decode: Fast decode tuning
temp_dir: Temporary directory for images
num_image_workers: Number of workers for saving images
Returns:
Dictionary mapping video keys to their metadata (chunk_index, file_index, timestamps)
"""
hf_dataset = dataset.hf_dataset.with_format(None)
img_keys = [key for key in hf_dataset.features if key.startswith(OBS_IMAGE)]
video_metadata = {}
fps = int(dataset.fps) # Convert to int for PyAV compatibility
episode_length = dataset.meta.episodes["length"][episode_index]
episode_duration = episode_length / dataset.fps # Use original fps for duration calculation
for img_key in img_keys:
# Save images temporarily
imgs_dir = temp_dir / f"episode_{episode_index:06d}" / img_key
save_episode_images_for_video(dataset, imgs_dir, img_key, episode_index, num_image_workers)
# Determine chunk and file indices
# For simplicity, we'll put each episode in its own file
chunk_idx = episode_index // new_meta.chunks_size
file_idx = episode_index % new_meta.chunks_size
# Create video path in the new dataset structure
video_path = new_meta.root / new_meta.video_path.format(
video_key=img_key, chunk_index=chunk_idx, file_index=file_idx
)
video_path.parent.mkdir(parents=True, exist_ok=True)
# Encode video
encode_video_frames(
imgs_dir=imgs_dir,
video_path=video_path,
fps=fps,
vcodec=vcodec,
pix_fmt=pix_fmt,
g=g,
crf=crf,
fast_decode=fast_decode,
overwrite=True,
)
# Clean up temporary images
shutil.rmtree(imgs_dir)
# Store video metadata
video_metadata[img_key] = {
f"videos/{img_key}/chunk_index": chunk_idx,
f"videos/{img_key}/file_index": file_idx,
f"videos/{img_key}/from_timestamp": 0.0,
f"videos/{img_key}/to_timestamp": episode_duration,
}
return video_metadata
def convert_dataset_to_videos(
dataset: LeRobotDataset,
output_dir: Path,
repo_id: str | None = None,
vcodec: str = "libsvtav1",
pix_fmt: str = "yuv420p",
g: int = 2,
crf: int = 30,
fast_decode: int = 0,
episode_indices: list[int] | None = None,
num_workers: int = 4,
) -> LeRobotDataset:
"""Convert image-based dataset to video-based dataset.
Creates a new LeRobotDataset with videos instead of images, following the proper
LeRobot dataset structure with videos stored in chunked MP4 files.
Args:
dataset: The source LeRobot dataset with images
output_dir: Directory to save the new video dataset
repo_id: Repository ID for the new dataset (default: original_id + "_video")
vcodec: Video codec (default: libsvtav1)
pix_fmt: Pixel format (default: yuv420p)
g: Group of pictures size (default: 2)
crf: Constant rate factor (default: 30)
fast_decode: Fast decode tuning (default: 0)
episode_indices: List of episode indices to convert (None = all episodes)
num_workers: Number of threads for parallel processing (default: 4)
Returns:
New LeRobotDataset with videos
"""
# Check that it's an image dataset
if len(dataset.meta.video_keys) > 0:
raise ValueError(
f"This operation is for image datasets only. Video dataset provided: {dataset.repo_id}"
)
# Get all image keys
hf_dataset = dataset.hf_dataset.with_format(None)
img_keys = [key for key in hf_dataset.features if key.startswith(OBS_IMAGE)]
if len(img_keys) == 0:
raise ValueError(f"No image keys found in dataset {dataset.repo_id}")
# Determine which episodes to process
if episode_indices is None:
episode_indices = list(range(dataset.meta.total_episodes))
if repo_id is None:
repo_id = f"{dataset.repo_id}_video"
logging.info(
f"Converting {len(episode_indices)} episodes with {len(img_keys)} cameras from {dataset.repo_id}"
)
logging.info(f"Video codec: {vcodec}, pixel format: {pix_fmt}, GOP: {g}, CRF: {crf}")
# Create new features dict, converting image features to video features
new_features = {}
for key, value in dataset.meta.features.items():
if key not in img_keys:
new_features[key] = value
else:
# Convert image key to video format
new_features[key] = value.copy()
new_features[key]["dtype"] = "video" # Change dtype from "image" to "video"
# Video info will be updated after episodes are encoded
# Create new metadata for video dataset
new_meta = LeRobotDatasetMetadata.create(
repo_id=repo_id,
fps=dataset.meta.fps,
features=new_features,
robot_type=dataset.meta.robot_type,
root=output_dir,
use_videos=True,
chunks_size=dataset.meta.chunks_size,
data_files_size_in_mb=dataset.meta.data_files_size_in_mb,
video_files_size_in_mb=dataset.meta.video_files_size_in_mb,
)
# Create temporary directory for image extraction
temp_dir = output_dir / "temp_images"
temp_dir.mkdir(parents=True, exist_ok=True)
# Process each episode
all_episode_metadata = []
frame_offset = 0
try:
for ep_idx in tqdm(episode_indices, desc="Converting episodes to videos"):
# Get episode metadata from source
src_episode = dataset.meta.episodes[ep_idx]
# Encode videos for this episode
video_metadata = encode_episode_videos(
dataset=dataset,
new_meta=new_meta,
episode_index=ep_idx,
vcodec=vcodec,
pix_fmt=pix_fmt,
g=g,
crf=crf,
fast_decode=fast_decode,
temp_dir=temp_dir,
num_image_workers=num_workers,
)
# Build episode metadata
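# Note: the dataset_from/to_index arithmetic below assumes every episode has the same length as src_episode.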
episode_meta = {
"episode_index": ep_idx,
"length": src_episode["length"],
"dataset_from_index": ep_idx * src_episode["length"],
"dataset_to_index": (ep_idx + 1) * src_episode["length"],
}
# Add video metadata
for img_key in img_keys:
episode_meta.update(video_metadata[img_key])
# Add data chunk/file info (using same structure as source)
if "data/chunk_index" in src_episode:
episode_meta["data/chunk_index"] = src_episode["data/chunk_index"]
episode_meta["data/file_index"] = src_episode["data/file_index"]
all_episode_metadata.append(episode_meta)
# Copy and transform data files (removing image columns)
_copy_data_without_images(dataset, new_meta, episode_indices, img_keys)
# Save episode metadata
episodes_df = pd.DataFrame(all_episode_metadata)
episodes_path = new_meta.root / "meta" / "episodes" / "chunk-000" / "file-000.parquet"
episodes_path.parent.mkdir(parents=True, exist_ok=True)
episodes_df.to_parquet(episodes_path, index=False)
# Update metadata info
new_meta.info["total_episodes"] = len(episode_indices)
new_meta.info["total_frames"] = sum(ep["length"] for ep in all_episode_metadata)
new_meta.info["total_tasks"] = dataset.meta.total_tasks
new_meta.info["splits"] = {"train": f"0:{len(episode_indices)}"}
# Update video info for all image keys (now videos)
# We need to manually set video info since update_video_info() checks video_keys first
for img_key in img_keys:
if not new_meta.features[img_key].get("info", None):
video_path = new_meta.root / new_meta.video_path.format(
video_key=img_key, chunk_index=0, file_index=0
)
new_meta.info["features"][img_key]["info"] = get_video_info(video_path)
from lerobot.datasets.utils import write_info
write_info(new_meta.info, new_meta.root)
# Copy stats and tasks
if dataset.meta.stats is not None:
# Remove image stats
new_stats = {k: v for k, v in dataset.meta.stats.items() if k not in img_keys}
write_stats(new_stats, new_meta.root)
if dataset.meta.tasks is not None:
write_tasks(dataset.meta.tasks, new_meta.root)
finally:
# Clean up temporary directory
if temp_dir.exists():
shutil.rmtree(temp_dir)
logging.info(f"✓ Completed converting {dataset.repo_id} to video format")
logging.info(f"New dataset saved to: {output_dir}")
# Return new dataset
return LeRobotDataset(repo_id=repo_id, root=output_dir)
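For reference, a minimal usage sketch of this entry point, mirroring the pusht_image test later in this diff; the output path is illustrative only.
from pathlib import Path
from lerobot.datasets.lerobot_dataset import LeRobotDataset
# Load a small image-based dataset, then convert two episodes to video format.
source = LeRobotDataset("lerobot/pusht_image", episodes=[0, 1])
video_ds = convert_dataset_to_videos(
    dataset=source,
    output_dir=Path("/tmp/pusht_video"),  # illustrative location
    repo_id="lerobot/pusht_video",
    episode_indices=[0, 1],  # convert a subset for speed
)
print(video_ds.meta.video_keys)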
def _copy_data_without_images(
src_dataset: LeRobotDataset,
dst_meta: LeRobotDatasetMetadata,
episode_indices: list[int],
img_keys: list[str],
) -> None:
"""Copy data files without image columns.
Args:
src_dataset: Source dataset
dst_meta: Destination metadata
episode_indices: Episodes to include
img_keys: Image keys to remove
"""
from lerobot.datasets.utils import DATA_DIR
data_dir = src_dataset.root / DATA_DIR
parquet_files = sorted(data_dir.glob("*/*.parquet"))
if not parquet_files:
raise ValueError(f"No parquet files found in {data_dir}")
episode_set = set(episode_indices)
for src_path in tqdm(parquet_files, desc="Processing data files"):
df = pd.read_parquet(src_path).reset_index(drop=True)
# Filter to only include selected episodes
df = df[df["episode_index"].isin(episode_set)].copy()
if len(df) == 0:
continue
# Remove image columns
columns_to_drop = [col for col in img_keys if col in df.columns]
if columns_to_drop:
df = df.drop(columns=columns_to_drop)
# Get chunk and file indices from path
relative_path = src_path.relative_to(src_dataset.root)
chunk_dir = relative_path.parts[1]
file_name = relative_path.parts[2]
chunk_idx = int(chunk_dir.split("-")[1])
file_idx = int(file_name.split("-")[1].split(".")[0])
# Write to destination without pandas index
dst_path = dst_meta.root / f"data/chunk-{chunk_idx:03d}/file-{file_idx:03d}.parquet"
dst_path.parent.mkdir(parents=True, exist_ok=True)
df.to_parquet(dst_path, index=False)
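To make the chunk/file parsing above concrete, a tiny standalone check; the path is a hypothetical example, and parts[-2]/parts[-1] pick out the same components as relative_path.parts[1]/parts[2] here.
from pathlib import Path
src_path = Path("data/chunk-002/file-015.parquet")  # hypothetical example path
chunk_idx = int(src_path.parts[-2].split("-")[1])
file_idx = int(src_path.parts[-1].split("-")[1].split(".")[0])
assert (chunk_idx, file_idx) == (2, 15)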
def handle_convert_to_video(cfg: EditDatasetConfig) -> None:
# Note: Parser may create any config type with the right fields, so we access fields directly
# instead of checking isinstance()
dataset = LeRobotDataset(cfg.repo_id, root=cfg.root)
@@ -308,12 +664,8 @@ def handle_convert_image_to_video(cfg: EditDatasetConfig) -> None:
if cfg.new_repo_id:
# Use new_repo_id for both local storage and hub push
output_repo_id = cfg.new_repo_id
# Place new dataset as a sibling to the original dataset
# Get the parent of the actual dataset root (not cfg.root which might be the lerobot cache dir)
# Extract just the dataset name (after last slash) for the local directory
local_dir_name = cfg.new_repo_id.split("/")[-1]
output_dir = dataset.root.parent / local_dir_name
logging.info(f"Saving to new dataset: {cfg.new_repo_id} at {output_dir}")
output_dir = Path(cfg.root) / cfg.new_repo_id if cfg.root else HF_LEROBOT_HOME / cfg.new_repo_id
logging.info(f"Saving to new dataset: {cfg.new_repo_id}")
elif output_dir_config:
# Use custom output directory for local-only storage
output_dir = Path(output_dir_config)
@@ -323,15 +675,12 @@ def handle_convert_image_to_video(cfg: EditDatasetConfig) -> None:
else:
# Auto-generate name: append "_video" to original repo_id
output_repo_id = f"{cfg.repo_id}_video"
# Place new dataset as a sibling to the original dataset
# Extract just the dataset name (after last slash) for the local directory
local_dir_name = output_repo_id.split("/")[-1]
output_dir = dataset.root.parent / local_dir_name
output_dir = Path(cfg.root) / output_repo_id if cfg.root else HF_LEROBOT_HOME / output_repo_id
logging.info(f"Saving to auto-generated location: {output_dir}")
logging.info(f"Converting dataset {cfg.repo_id} to video format")
new_dataset = convert_image_to_video_dataset(
new_dataset = convert_dataset_to_videos(
dataset=dataset,
output_dir=output_dir,
repo_id=output_repo_id,
@@ -342,8 +691,6 @@ def handle_convert_image_to_video(cfg: EditDatasetConfig) -> None:
fast_decode=getattr(cfg.operation, "fast_decode", 0),
episode_indices=getattr(cfg.operation, "episode_indices", None),
num_workers=getattr(cfg.operation, "num_workers", 4),
max_episodes_per_batch=getattr(cfg.operation, "max_episodes_per_batch", None),
max_frames_per_batch=getattr(cfg.operation, "max_frames_per_batch", None),
)
logging.info("Video dataset created successfully!")
@@ -371,8 +718,8 @@ def edit_dataset(cfg: EditDatasetConfig) -> None:
handle_merge(cfg)
elif operation_type == "remove_feature":
handle_remove_feature(cfg)
elif operation_type == "convert_image_to_video":
handle_convert_image_to_video(cfg)
elif operation_type == "convert_to_video":
handle_convert_to_video(cfg)
else:
raise ValueError(
f"Unknown operation type: {operation_type}\n"
+92 -6
View File
@@ -148,6 +148,92 @@ def update_policy(
return train_metrics, output_dict
def get_default_peft_configuration(policy_type):
"""Build a basic PEFT configuration for the given policy type assuming that we train a policy from a checkpoint."""
common_projections = "state_proj|action_in_proj|action_out_proj|action_time_mlp_in|action_time_mlp_out"
if policy_type == "smolvla":
return {
"target_modules": rf"(model\.vlm_with_expert\.lm_expert\..*\.(q|v)_proj|model\.({common_projections}))",
"modules_to_save": [],
}
elif policy_type in ("pi0", "pi05"):
return {
"target_modules": rf"(.*\.gemma_expert\..*\.self_attn.(q|v)_proj|model\.({common_projections}))",
"modules_to_save": [],
}
return {"modules_to_save": None}
def wrap_policy_in_peft_model(cfg, policy):
from peft import PEFT_TYPE_TO_CONFIG_MAPPING, PeftType, get_peft_model
# Disable all gradients because we'll only train the parameters selected by the PEFT method.
# Layers that should receive gradients anyway need to be listed in `modules_to_save`.
for p in policy.parameters():
p.requires_grad_(False)
if not cfg.policy.pretrained_path:
raise ValueError(
"Training from scratch using PEFT. This is unlikely to yield good results. "
"Supply a `policy.path` to fine-tune an existing model."
)
if cfg.policy.type == "smolvla" and not cfg.policy.load_vlm_weights:
logging.warning(
"Training SmolVLA from scratch using PEFT. This is unlikely to yield good results. Set "
"`load_vlm_weights=True` to fine-tune the existing policy."
)
peft_config_policy = get_default_peft_configuration(cfg.policy.type)
peft_config_cli = dataclasses.asdict(cfg.peft) if cfg.peft else {}
peft_config_cli["modules_to_save"] = peft_config_cli["full_training_modules"] # compatibility with PEFT
peft_method_type = PeftType[peft_config_cli["method_type"].upper()]
peft_config_cls = PEFT_TYPE_TO_CONFIG_MAPPING[peft_method_type]
# Handle specific CLI overrides
for key in ["target_modules", "modules_to_save", "r"]:
if peft_config_cli[key] is not None:
peft_config_policy[key] = peft_config_cli[key]
if "target_modules" not in peft_config_policy:
raise ValueError(
f"There is no default `target_modules` value for policy {cfg.policy.type}. Please pass it manually."
)
# The init method depends on the PEFT method in use; your specific PEFT
# method may not be handled here, in which case an error is raised.
if peft_config_cli["init_type"] is not None:
if peft_method_type == "LORA":
peft_config_policy["init_lora_weights"] = peft_config_cli["init_type"]
elif peft_method_type == "MISS":
peft_config_policy["init_weights"] = peft_config_cli["init_type"]
else:
raise ValueError(
f"Init type {peft_config_cli['init_type']} unknown for PEFT method {peft_method_type}."
)
# PEFT uses this attribute to set adapter_config.base_model_name_or_path, which
# `make_policy` relies on to load the correct base model, since when loading via
# PEFT we only receive the path to the adapter, not to the base model.
if policy.config.pretrained_path:
policy.name_or_path = str(policy.config.pretrained_path)
# Finally wrap the policy in a PEFT model
policy = get_peft_model(
policy,
peft_config_cls(**peft_config_policy),
)
# Make sure that the config is tagged as using PEFT so that the loading code can take the
# appropriate steps to use the adapter weights and the PEFT config instead of the full model weights.
policy.config.use_peft = True
return policy
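A rough usage sketch, assuming `cfg` is a TrainPipelineConfig with cfg.peft populated (e.g. method_type="lora") and `policy` was loaded from cfg.policy.pretrained_path:
# Wrap the frozen policy so only the PEFT-selected parameters train.
policy = wrap_policy_in_peft_model(cfg, policy)
policy.print_trainable_parameters()  # PEFT helper: trainable vs. total parameter counts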
@parser.wrap()
def train(cfg: TrainPipelineConfig, accelerator: Accelerator | None = None):
"""
@@ -177,7 +263,8 @@ def train(cfg: TrainPipelineConfig, accelerator: Accelerator | None = None):
ddp_kwargs = DistributedDataParallelKwargs(find_unused_parameters=True)
# Accelerate auto-detects the device based on the available hardware and ignores the policy.device setting.
# Force the device to be CPU when policy.device is set to CPU.
force_cpu = cfg.policy.device == "cpu"
# Note (maractin): cfg.policy may be None before validate() fully loads from pretrained_path
force_cpu = cfg.policy is not None and cfg.policy.device == "cpu"
accelerator = Accelerator(
step_scheduler_with_optimizer=False,
kwargs_handlers=[ddp_kwargs],
@@ -225,8 +312,9 @@ def train(cfg: TrainPipelineConfig, accelerator: Accelerator | None = None):
# On real-world data, no need to create an environment as evaluations are done outside train.py,
# using the eval.py instead, with gym_dora environment and dora-rs.
eval_env = None
if cfg.eval_freq > 0 and cfg.env is not None and is_main_process:
logging.info("Creating env")
if cfg.eval_freq > 0 and cfg.env is not None:
if is_main_process:
logging.info("Creating env")
eval_env = make_env(cfg.env, n_envs=cfg.eval.batch_size, use_async_envs=cfg.eval.use_async_envs)
if is_main_process:
@@ -239,9 +327,7 @@ def train(cfg: TrainPipelineConfig, accelerator: Accelerator | None = None):
if cfg.peft is not None:
logging.info("Using PEFT! Wrapping model.")
# Convert CLI peft config to dict for overrides
peft_cli_overrides = dataclasses.asdict(cfg.peft)
policy = policy.wrap_with_peft(peft_cli_overrides=peft_cli_overrides)
policy = wrap_policy_in_peft_model(cfg, policy)
# Wait for all processes to finish policy creation before continuing
accelerator.wait_for_everyone()
@@ -18,7 +18,7 @@ import logging
from functools import cached_property
from lerobot.teleoperators.so_leader import SOLeaderTeleopConfig
from lerobot.utils.decorators import check_if_not_connected
from lerobot.utils.errors import DeviceNotConnectedError
from ..so_leader import SOLeader
from ..teleoperator import Teleoperator
@@ -92,8 +92,10 @@ class BiSOLeader(Teleoperator):
self.left_arm.setup_motors()
self.right_arm.setup_motors()
@check_if_not_connected
def get_action(self) -> dict[str, float]:
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
action_dict = {}
# Add "left_" prefix
@@ -21,7 +21,7 @@ from typing import Any
import numpy as np
from lerobot.processor import RobotAction
from lerobot.utils.decorators import check_if_not_connected
from lerobot.utils.errors import DeviceNotConnectedError
from ..teleoperator import Teleoperator
from ..utils import TeleopEvents
@@ -86,8 +86,10 @@ class GamepadTeleop(Teleoperator):
self.gamepad = Gamepad()
self.gamepad.start()
@check_if_not_connected
def get_action(self) -> RobotAction:
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
# Update the controller to get fresh inputs
self.gamepad.update()
@@ -22,7 +22,7 @@ from pprint import pformat
import serial
from lerobot.motors.motors_bus import MotorCalibration, MotorNormMode
from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
from lerobot.utils.errors import DeviceAlreadyConnectedError, DeviceNotConnectedError
from lerobot.utils.utils import enter_pressed, move_cursor_up
from ..teleoperator import Teleoperator
@@ -93,8 +93,10 @@ class HomunculusArm(Teleoperator):
with self.serial_lock:
return self.serial.is_open and self.thread.is_alive()
@check_if_already_connected
def connect(self, calibrate: bool = True) -> None:
if self.is_connected:
raise DeviceAlreadyConnectedError(f"{self} already connected")
if not self.serial.is_open:
self.serial.open()
self.thread.start()
@@ -297,16 +299,20 @@ class HomunculusArm(Teleoperator):
except Exception as e:
logger.debug(f"Error reading frame in background thread for {self}: {e}")
@check_if_not_connected
def get_action(self) -> dict[str, float]:
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
joint_positions = self._read()
return {f"{joint}.pos": pos for joint, pos in joint_positions.items()}
def send_feedback(self, feedback: dict[str, float]) -> None:
raise NotImplementedError
@check_if_not_connected
def disconnect(self) -> None:
if not self.is_connected:
DeviceNotConnectedError(f"{self} is not connected.")
self.stop_event.set()
self.thread.join(timeout=1)
self.serial.close()
@@ -24,7 +24,7 @@ import serial
from lerobot.motors import MotorCalibration
from lerobot.motors.motors_bus import MotorNormMode
from lerobot.teleoperators.homunculus.joints_translation import homunculus_glove_to_hope_jr_hand
from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
from lerobot.utils.errors import DeviceAlreadyConnectedError, DeviceNotConnectedError
from lerobot.utils.utils import enter_pressed, move_cursor_up
from ..teleoperator import Teleoperator
@@ -119,8 +119,10 @@ class HomunculusGlove(Teleoperator):
with self.serial_lock:
return self.serial.is_open and self.thread.is_alive()
@check_if_already_connected
def connect(self, calibrate: bool = True) -> None:
if self.is_connected:
raise DeviceAlreadyConnectedError(f"{self} already connected")
if not self.serial.is_open:
self.serial.open()
self.thread.start()
@@ -323,8 +325,10 @@ class HomunculusGlove(Teleoperator):
except Exception as e:
logger.debug(f"Error reading frame in background thread for {self}: {e}")
@check_if_not_connected
def get_action(self) -> dict[str, float]:
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
joint_positions = self._read()
return homunculus_glove_to_hope_jr_hand(
{f"{joint}.pos": pos for joint, pos in joint_positions.items()}
@@ -333,8 +337,10 @@ class HomunculusGlove(Teleoperator):
def send_feedback(self, feedback: dict[str, float]) -> None:
raise NotImplementedError
@check_if_not_connected
def disconnect(self) -> None:
if not self.is_connected:
DeviceNotConnectedError(f"{self} is not connected.")
self.stop_event.set()
self.thread.join(timeout=1)
self.serial.close()
@@ -22,7 +22,7 @@ from queue import Queue
from typing import Any
from lerobot.processor import RobotAction
from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
from lerobot.utils.errors import DeviceAlreadyConnectedError, DeviceNotConnectedError
from ..teleoperator import Teleoperator
from ..utils import TeleopEvents
@@ -86,8 +86,12 @@ class KeyboardTeleop(Teleoperator):
def is_calibrated(self) -> bool:
pass
@check_if_already_connected
def connect(self) -> None:
if self.is_connected:
raise DeviceAlreadyConnectedError(
"Keyboard is already connected. Do not run `robot.connect()` twice."
)
if PYNPUT_AVAILABLE:
logging.info("pynput is available - enabling local keyboard listener.")
self.listener = keyboard.Listener(
@@ -121,10 +125,14 @@ class KeyboardTeleop(Teleoperator):
def configure(self):
pass
@check_if_not_connected
def get_action(self) -> RobotAction:
before_read_t = time.perf_counter()
if not self.is_connected:
raise DeviceNotConnectedError(
"KeyboardTeleop is not connected. You need to run `connect()` before `get_action()`."
)
self._drain_pressed_keys()
# Generate action based on current key states
@@ -136,8 +144,11 @@ class KeyboardTeleop(Teleoperator):
def send_feedback(self, feedback: dict[str, Any]) -> None:
pass
@check_if_not_connected
def disconnect(self) -> None:
if not self.is_connected:
raise DeviceNotConnectedError(
"KeyboardTeleop is not connected. You need to run `robot.connect()` before `disconnect()`."
)
if self.listener is not None:
self.listener.stop()
@@ -171,8 +182,12 @@ class KeyboardEndEffectorTeleop(KeyboardTeleop):
"names": {"delta_x": 0, "delta_y": 1, "delta_z": 2},
}
@check_if_not_connected
def get_action(self) -> RobotAction:
if not self.is_connected:
raise DeviceNotConnectedError(
"KeyboardTeleop is not connected. You need to run `connect()` before `get_action()`."
)
self._drain_pressed_keys()
delta_x = 0.0
delta_y = 0.0
@@ -360,7 +375,6 @@ class KeyboardRoverTeleop(KeyboardTeleop):
# Only remove key if it's being released
self.current_pressed.pop(key_char, None)
@check_if_not_connected
def get_action(self) -> RobotAction:
"""
Get the current action based on pressed keys.
@@ -370,6 +384,11 @@ class KeyboardRoverTeleop(KeyboardTeleop):
"""
before_read_t = time.perf_counter()
if not self.is_connected:
raise DeviceNotConnectedError(
"KeyboardRoverTeleop is not connected. You need to run `connect()` before `get_action()`."
)
self._drain_pressed_keys()
linear_velocity = 0.0
@@ -23,7 +23,7 @@ from lerobot.motors.dynamixel import (
DynamixelMotorsBus,
OperatingMode,
)
from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
from lerobot.utils.errors import DeviceAlreadyConnectedError, DeviceNotConnectedError
from ..teleoperator import Teleoperator
from .config_koch_leader import KochLeaderConfig
@@ -69,8 +69,10 @@ class KochLeader(Teleoperator):
def is_connected(self) -> bool:
return self.bus.is_connected
@check_if_already_connected
def connect(self, calibrate: bool = True) -> None:
if self.is_connected:
raise DeviceAlreadyConnectedError(f"{self} already connected")
self.bus.connect()
if not self.is_calibrated and calibrate:
logger.info(
@@ -159,8 +161,10 @@ class KochLeader(Teleoperator):
self.bus.setup_motor(motor)
print(f"'{motor}' motor id set to {self.bus.motors[motor].id}")
@check_if_not_connected
def get_action(self) -> dict[str, float]:
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
start = time.perf_counter()
action = self.bus.sync_read("Present_Position")
action = {f"{motor}.pos": val for motor, val in action.items()}
@@ -172,7 +176,9 @@ class KochLeader(Teleoperator):
# TODO(rcadene, aliberts): Implement force feedback
raise NotImplementedError
@check_if_not_connected
def disconnect(self) -> None:
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
self.bus.disconnect()
logger.info(f"{self} disconnected.")
@@ -23,7 +23,7 @@ from lerobot.motors.dynamixel import (
DynamixelMotorsBus,
OperatingMode,
)
from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
from lerobot.utils.errors import DeviceAlreadyConnectedError, DeviceNotConnectedError
from ..teleoperator import Teleoperator
from .config_omx_leader import OmxLeaderConfig
@@ -68,8 +68,10 @@ class OmxLeader(Teleoperator):
def is_connected(self) -> bool:
return self.bus.is_connected
@check_if_already_connected
def connect(self, calibrate: bool = True) -> None:
if self.is_connected:
raise DeviceAlreadyConnectedError(f"{self} already connected")
self.bus.connect()
if not self.is_calibrated and calibrate:
logger.info(
@@ -140,8 +142,10 @@ class OmxLeader(Teleoperator):
self.bus.setup_motor(motor)
print(f"'{motor}' motor id set to {self.bus.motors[motor].id}")
@check_if_not_connected
def get_action(self) -> dict[str, float]:
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
start = time.perf_counter()
action = self.bus.sync_read("Present_Position")
action = {f"{motor}.pos": val for motor, val in action.items()}
@@ -153,7 +157,9 @@ class OmxLeader(Teleoperator):
# TODO(rcadene, aliberts): Implement force feedback
raise NotImplementedError
@check_if_not_connected
def disconnect(self) -> None:
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
self.bus.disconnect()
logger.info(f"{self} disconnected.")
@@ -28,7 +28,7 @@ from teleop import Teleop
from lerobot.teleoperators.phone.config_phone import PhoneConfig, PhoneOS
from lerobot.teleoperators.teleoperator import Teleoperator
from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
from lerobot.utils.errors import DeviceAlreadyConnectedError, DeviceNotConnectedError
from lerobot.utils.rotation import Rotation
logger = logging.getLogger(__name__)
@@ -81,8 +81,10 @@ class IOSPhone(BasePhone, Teleoperator):
def is_connected(self) -> bool:
return self._group is not None
@check_if_already_connected
def connect(self) -> None:
if self.is_connected:
raise DeviceAlreadyConnectedError(f"{self} already connected")
logger.info("Connecting to IPhone, make sure to open the HEBI Mobile I/O app.")
lookup = hebi.Lookup()
time.sleep(2.0)
@@ -162,8 +164,10 @@ class IOSPhone(BasePhone, Teleoperator):
pos = ar_pos - rot.apply(self.config.camera_offset)
return True, pos, rot, pose
@check_if_not_connected
def get_action(self) -> dict:
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
has_pose, raw_position, raw_rotation, fb_pose = self._read_current_pose()
if not has_pose or not self.is_calibrated:
return {}
@@ -203,8 +207,10 @@ class IOSPhone(BasePhone, Teleoperator):
"phone.enabled": self._enabled,
}
@check_if_not_connected
def disconnect(self) -> None:
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
self._group = None
@@ -224,8 +230,10 @@ class AndroidPhone(BasePhone, Teleoperator):
def is_connected(self) -> bool:
return self._teleop is not None
@check_if_already_connected
def connect(self) -> None:
if self.is_connected:
raise DeviceAlreadyConnectedError(f"{self} already connected")
logger.info("Starting teleop stream for Android...")
self._teleop = Teleop()
self._teleop.subscribe(self._android_callback)
@@ -313,8 +321,10 @@ class AndroidPhone(BasePhone, Teleoperator):
self._latest_pose = pose
self._latest_message = message
@check_if_not_connected
def get_action(self) -> dict:
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
ok, raw_pos, raw_rot, pose = self._read_current_pose()
if not ok or not self.is_calibrated:
return {}
@@ -346,8 +356,10 @@ class AndroidPhone(BasePhone, Teleoperator):
"phone.enabled": self._enabled,
}
@check_if_not_connected
def disconnect(self) -> None:
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
self._teleop = None
if self._teleop_thread and self._teleop_thread.is_alive():
self._teleop_thread.join(timeout=1.0)
@@ -26,8 +26,7 @@ if TYPE_CHECKING or _reachy2_sdk_available:
else:
ReachySDK = None
from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
from lerobot.utils.errors import DeviceNotConnectedError
from lerobot.utils.errors import DeviceAlreadyConnectedError, DeviceNotConnectedError
from ..teleoperator import Teleoperator
from .config_reachy2_teleoperator import Reachy2TeleoperatorConfig
@@ -127,8 +126,10 @@ class Reachy2Teleoperator(Teleoperator):
def is_connected(self) -> bool:
return self.reachy.is_connected() if self.reachy is not None else False
@check_if_already_connected
def connect(self, calibrate: bool = True) -> None:
if self.is_connected:
raise DeviceAlreadyConnectedError(f"{self} already connected")
self.reachy = ReachySDK(self.config.ip_address)
if not self.is_connected:
@@ -145,10 +146,12 @@ class Reachy2Teleoperator(Teleoperator):
def configure(self) -> None:
pass
@check_if_not_connected
def get_action(self) -> dict[str, float]:
start = time.perf_counter()
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
joint_action: dict[str, float] = {}
vel_action: dict[str, float] = {}
@@ -23,7 +23,7 @@ from lerobot.motors.feetech import (
FeetechMotorsBus,
OperatingMode,
)
from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
from lerobot.utils.errors import DeviceAlreadyConnectedError, DeviceNotConnectedError
from ..teleoperator import Teleoperator
from .config_so_leader import SOLeaderTeleopConfig
@@ -66,8 +66,10 @@ class SOLeader(Teleoperator):
def is_connected(self) -> bool:
return self.bus.is_connected
@check_if_already_connected
def connect(self, calibrate: bool = True) -> None:
if self.is_connected:
raise DeviceAlreadyConnectedError(f"{self} already connected")
self.bus.connect()
if not self.is_calibrated and calibrate:
logger.info(
@@ -137,8 +139,10 @@ class SOLeader(Teleoperator):
self.bus.setup_motor(motor)
print(f"'{motor}' motor id set to {self.bus.motors[motor].id}")
@check_if_not_connected
def get_action(self) -> dict[str, float]:
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
start = time.perf_counter()
action = self.bus.sync_read("Present_Position")
action = {f"{motor}.pos": val for motor, val in action.items()}
@@ -150,8 +154,10 @@ class SOLeader(Teleoperator):
# TODO: Implement force feedback
raise NotImplementedError
@check_if_not_connected
def disconnect(self) -> None:
if not self.is_connected:
DeviceNotConnectedError(f"{self} is not connected.")
self.bus.disconnect()
logger.info(f"{self} disconnected.")
-26
View File
@@ -58,32 +58,6 @@ class Teleoperator(abc.ABC):
def __str__(self) -> str:
return f"{self.id} {self.__class__.__name__}"
def __enter__(self):
"""
Context manager entry.
Automatically connects to the teleoperator.
"""
self.connect()
return self
def __exit__(self, exc_type, exc_value, traceback) -> None:
"""
Context manager exit.
Automatically disconnects, ensuring resources are released even on error.
"""
self.disconnect()
def __del__(self) -> None:
"""
Destructor safety net.
Attempts to disconnect if the object is garbage collected without cleanup.
"""
try:
if self.is_connected:
self.disconnect()
except Exception: # nosec B110
pass
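With the context-manager and destructor helpers removed, callers manage the connection lifetime explicitly; a minimal sketch of that pattern, where SOLeader and `config` (a placeholder SOLeaderTeleopConfig) stand in for any teleoperator:
teleop = SOLeader(config)  # `config` is a placeholder SOLeaderTeleopConfig
teleop.connect()
try:
    action = teleop.get_action()
finally:
    teleop.disconnect()  # release the bus even if get_action() raises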
@property
@abc.abstractmethod
def action_features(self) -> dict:
-41
View File
@@ -1,41 +0,0 @@
#!/usr/bin/env python
# Copyright 2026 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from functools import wraps
from lerobot.utils.errors import DeviceAlreadyConnectedError, DeviceNotConnectedError
def check_if_not_connected(func):
@wraps(func)
def wrapper(self, *args, **kwargs):
if not self.is_connected:
raise DeviceNotConnectedError(
f"{self.__class__.__name__} is not connected. Run `.connect()` first."
)
return func(self, *args, **kwargs)
return wrapper
def check_if_already_connected(func):
@wraps(func)
def wrapper(self, *args, **kwargs):
if self.is_connected:
raise DeviceAlreadyConnectedError(f"{self.__class__.__name__} is already connected.")
return func(self, *args, **kwargs)
return wrapper
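For context, the pattern these deleted decorators enabled, next to the explicit inline guard this diff reinstates throughout the teleoperators and robots (a paraphrase of the changes above, not new API):
# Before: guard applied via decorator.
@check_if_not_connected
def get_action(self) -> dict[str, float]:
    ...
# After: inline guard, as now written in each class.
def get_action(self) -> dict[str, float]:
    if not self.is_connected:
        raise DeviceNotConnectedError(f"{self} is not connected.")
    ...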
+8 -16
View File
@@ -21,23 +21,12 @@ from typing import Any
from draccus.choice_types import ChoiceRegistry
def is_package_available(
pkg_name: str, import_name: str | None = None, return_version: bool = False
) -> tuple[bool, str] | bool:
"""
def is_package_available(pkg_name: str, return_version: bool = False) -> tuple[bool, str] | bool:
"""Copied from https://github.com/huggingface/transformers/blob/main/src/transformers/utils/import_utils.py
Check if the package spec exists and grab its version to avoid importing a local directory.
Args:
pkg_name: The name of the package as installed via pip (e.g. "python-can").
import_name: The actual name used to import the package (e.g. "can").
Defaults to pkg_name if not provided.
return_version: Whether to return the version string.
**Note:** this doesn't work for all packages.
"""
if import_name is None:
import_name = pkg_name
# Check if the module spec exists using the import name
package_exists = importlib.util.find_spec(import_name) is not None
package_exists = importlib.util.find_spec(pkg_name) is not None
package_version = "N/A"
if package_exists:
try:
@@ -48,7 +37,7 @@ def is_package_available(
# Fallback method: Only for "torch" and versions containing "dev"
if pkg_name == "torch":
try:
package = importlib.import_module(import_name)
package = importlib.import_module(pkg_name)
temp_version = getattr(package, "__version__", "N/A")
# Check if the version contains "dev"
if "dev" in temp_version:
@@ -59,6 +48,9 @@ def is_package_available(
except ImportError:
# If the package can't be imported, it's not available
package_exists = False
elif pkg_name == "grpc":
package = importlib.import_module(pkg_name)
package_version = getattr(package, "__version__", "N/A")
else:
# For packages other than "torch", don't attempt the fallback and set as not available
package_exists = False
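A quick sketch of the simplified helper's behavior after this change (whether a package is found naturally depends on the environment):
print(is_package_available("grpc"))  # True if the grpc module spec is found
ok, version = is_package_available("torch", return_version=True)
print(ok, version)  # e.g. True "2.4.0" if installed, or False "N/A" otherwise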
+2 -8
View File
@@ -144,18 +144,12 @@ def test_async_inference_e2e(monkeypatch):
client = RobotClient(client_config)
assert client.start(), "Client failed initial handshake with the server"
# Track action chunks received and verify device type
action_chunks_received = {"count": 0, "actions_on_cpu": True}
# Track action chunks received without modifying RobotClient
action_chunks_received = {"count": 0}
original_aggregate = client._aggregate_action_queues
def counting_aggregate(*args, **kwargs):
action_chunks_received["count"] += 1
# Check that all received actions are on CPU
if args:
for timed_action in args[0]: # args[0] is the list of TimedAction
action_tensor = timed_action.get_action()
if action_tensor.device.type != "cpu":
action_chunks_received["actions_on_cpu"] = False
return original_aggregate(*args, **kwargs)
monkeypatch.setattr(client, "_aggregate_action_queues", counting_aggregate)
+1 -1
View File
@@ -62,7 +62,7 @@ class MockPolicy:
@pytest.fixture
@require_package("grpcio", "grpc")
@require_package("grpc")
def policy_server():
"""Fresh `PolicyServer` instance with a stubbed-out policy model."""
# Import only when the test actually runs (after decorator check)
-145
View File
@@ -16,7 +16,6 @@
from unittest.mock import patch
import datasets
import torch
from lerobot.datasets.aggregate import aggregate_datasets
@@ -381,147 +380,3 @@ def test_video_timestamps_regression(tmp_path, lerobot_dataset_factory):
for key in aggr_ds.meta.video_keys:
assert key in item, f"Video key {key} missing from item {i}"
assert item[key].shape[0] == 3, f"Expected 3 channels for video key {key}"
def assert_image_schema_preserved(aggr_ds):
"""Test that HuggingFace Image feature schema is preserved in aggregated parquet files.
This verifies the fix for a bug where image columns were written with a generic
struct schema {'bytes': Value('binary'), 'path': Value('string')} instead of
the proper Image() feature type, causing HuggingFace Hub viewer to display
raw dict objects instead of image thumbnails.
"""
image_keys = aggr_ds.meta.image_keys
if not image_keys:
return
# Check that parquet files have proper Image schema
data_dir = aggr_ds.root / "data"
parquet_files = list(data_dir.rglob("*.parquet"))
assert len(parquet_files) > 0, "No parquet files found in aggregated dataset"
for parquet_file in parquet_files:
# Load with HuggingFace datasets to check schema
ds = datasets.Dataset.from_parquet(str(parquet_file))
for image_key in image_keys:
feature = ds.features.get(image_key)
assert feature is not None, f"Image key '{image_key}' not found in parquet schema"
assert isinstance(feature, datasets.Image), (
f"Image key '{image_key}' should have Image() feature type, "
f"but got {type(feature).__name__}: {feature}. "
"This indicates image schema was not preserved during aggregation."
)
def assert_image_frames_integrity(aggr_ds, ds_0, ds_1):
"""Test that image frames are correctly preserved after aggregation."""
image_keys = aggr_ds.meta.image_keys
if not image_keys:
return
def images_equal(img1, img2):
return torch.allclose(img1, img2)
# Test the section corresponding to the first dataset (ds_0)
for i in range(len(ds_0)):
assert aggr_ds[i]["index"] == i, (
f"Frame index at position {i} should be {i}, but got {aggr_ds[i]['index']}"
)
for key in image_keys:
assert images_equal(aggr_ds[i][key], ds_0[i][key]), (
f"Image frames at position {i} should be equal between aggregated and ds_0"
)
# Test the section corresponding to the second dataset (ds_1)
for i in range(len(ds_0), len(ds_0) + len(ds_1)):
assert aggr_ds[i]["index"] == i, (
f"Frame index at position {i} should be {i}, but got {aggr_ds[i]['index']}"
)
for key in image_keys:
assert images_equal(aggr_ds[i][key], ds_1[i - len(ds_0)][key]), (
f"Image frames at position {i} should be equal between aggregated and ds_1"
)
def test_aggregate_image_datasets(tmp_path, lerobot_dataset_factory):
"""Test aggregation of image-based datasets preserves HuggingFace Image schema.
This test specifically verifies that:
1. Image-based datasets can be aggregated correctly
2. The HuggingFace Image() feature type is preserved in parquet files
3. Image data integrity is maintained across aggregation
4. Images can be properly decoded after aggregation
This catches the bug where to_parquet_with_hf_images() was not passing
the features schema, causing image columns to be written as generic
struct types instead of Image() types.
"""
ds_0_num_frames = 50
ds_1_num_frames = 75
ds_0_num_episodes = 2
ds_1_num_episodes = 3
# Create two image-based datasets (use_videos=False)
ds_0 = lerobot_dataset_factory(
root=tmp_path / "image_0",
repo_id=f"{DUMMY_REPO_ID}_image_0",
total_episodes=ds_0_num_episodes,
total_frames=ds_0_num_frames,
use_videos=False, # Image-based dataset
)
ds_1 = lerobot_dataset_factory(
root=tmp_path / "image_1",
repo_id=f"{DUMMY_REPO_ID}_image_1",
total_episodes=ds_1_num_episodes,
total_frames=ds_1_num_frames,
use_videos=False, # Image-based dataset
)
# Verify source datasets have image keys
assert len(ds_0.meta.image_keys) > 0, "ds_0 should have image keys"
assert len(ds_1.meta.image_keys) > 0, "ds_1 should have image keys"
# Aggregate the datasets
aggregate_datasets(
repo_ids=[ds_0.repo_id, ds_1.repo_id],
roots=[ds_0.root, ds_1.root],
aggr_repo_id=f"{DUMMY_REPO_ID}_image_aggr",
aggr_root=tmp_path / "image_aggr",
)
# Load the aggregated dataset
with (
patch("lerobot.datasets.lerobot_dataset.get_safe_version") as mock_get_safe_version,
patch("lerobot.datasets.lerobot_dataset.snapshot_download") as mock_snapshot_download,
):
mock_get_safe_version.return_value = "v3.0"
mock_snapshot_download.return_value = str(tmp_path / "image_aggr")
aggr_ds = LeRobotDataset(f"{DUMMY_REPO_ID}_image_aggr", root=tmp_path / "image_aggr")
# Verify aggregated dataset has image keys
assert len(aggr_ds.meta.image_keys) > 0, "Aggregated dataset should have image keys"
assert aggr_ds.meta.image_keys == ds_0.meta.image_keys, "Image keys should match source datasets"
# Run standard aggregation assertions
expected_total_episodes = ds_0_num_episodes + ds_1_num_episodes
expected_total_frames = ds_0_num_frames + ds_1_num_frames
assert_episode_and_frame_counts(aggr_ds, expected_total_episodes, expected_total_frames)
assert_dataset_content_integrity(aggr_ds, ds_0, ds_1)
assert_metadata_consistency(aggr_ds, ds_0, ds_1)
assert_episode_indices_updated_correctly(aggr_ds, ds_0, ds_1)
# Image-specific assertions
assert_image_schema_preserved(aggr_ds)
assert_image_frames_integrity(aggr_ds, ds_0, ds_1)
# Verify images can be accessed and have correct shape
sample_item = aggr_ds[0]
for image_key in aggr_ds.meta.image_keys:
img = sample_item[image_key]
assert isinstance(img, torch.Tensor), f"Image {image_key} should be a tensor"
assert img.dim() == 3, f"Image {image_key} should have 3 dimensions (C, H, W)"
assert img.shape[0] == 3, f"Image {image_key} should have 3 channels"
assert_dataset_iteration_works(aggr_ds)
+5 -5
View File
@@ -29,7 +29,7 @@ from lerobot.datasets.dataset_tools import (
remove_feature,
split_dataset,
)
from lerobot.scripts.lerobot_edit_dataset import convert_image_to_video_dataset
from lerobot.scripts.lerobot_edit_dataset import convert_dataset_to_videos
@pytest.fixture
@@ -1050,7 +1050,7 @@ def test_modify_features_preserves_file_structure(sample_dataset, tmp_path):
assert "reward" in modified_dataset.meta.features
def test_convert_image_to_video_dataset(tmp_path):
def test_convert_dataset_to_videos(tmp_path):
"""Test converting lerobot/pusht_image dataset to video format."""
from lerobot.datasets.lerobot_dataset import LeRobotDataset
@@ -1071,7 +1071,7 @@ def test_convert_image_to_video_dataset(tmp_path):
assert "observation.image" in source_dataset.meta.features
# Convert to video dataset (only first 2 episodes for speed)
video_dataset = convert_image_to_video_dataset(
video_dataset = convert_dataset_to_videos(
dataset=source_dataset,
output_dir=output_dir,
repo_id="lerobot/pusht_video",
@@ -1113,7 +1113,7 @@ def test_convert_image_to_video_dataset(tmp_path):
shutil.rmtree(output_dir)
def test_convert_image_to_video_dataset_subset_episodes(tmp_path):
def test_convert_dataset_to_videos_subset_episodes(tmp_path):
"""Test converting only specific episodes from lerobot/pusht_image to video format."""
from lerobot.datasets.lerobot_dataset import LeRobotDataset
@@ -1132,7 +1132,7 @@ def test_convert_image_to_video_dataset_subset_episodes(tmp_path):
# Convert only episode 0 to video (subset of loaded episodes)
episode_indices = [0]
video_dataset = convert_image_to_video_dataset(
video_dataset = convert_dataset_to_videos(
dataset=source_dataset,
output_dir=output_dir,
repo_id="lerobot/pusht_video_subset",
-258
View File
@@ -352,65 +352,6 @@ def test_image_array_to_pil_image_wrong_range_float_0_255():
image_array_to_pil_image(image)
def test_tmp_image_deletion(tmp_path, empty_lerobot_dataset_factory):
"""Verify temporary image directories are removed for image features after saving episode."""
# Image feature: images should be deleted after saving episode
image_key = "image"
features_image = {
image_key: {"dtype": "image", "shape": DUMMY_CHW, "names": ["channels", "height", "width"]}
}
ds_img = empty_lerobot_dataset_factory(root=tmp_path / "img", features=features_image)
ds_img.add_frame({"image": np.random.rand(*DUMMY_CHW), "task": "Dummy task"})
ds_img.save_episode()
img_dir = ds_img._get_image_file_dir(0, image_key)
assert not img_dir.exists(), "Temporary image directory should be removed for image features"
def test_tmp_video_deletion(tmp_path, empty_lerobot_dataset_factory):
"""Verify temporary image directories are removed for video encoding when `batch_encoding_size == 1`."""
# Video feature: when batch_encoding_size == 1 temporary images should be deleted
vid_key = "video"
features_video = {
vid_key: {"dtype": "video", "shape": DUMMY_CHW, "names": ["channels", "height", "width"]}
}
ds_vid = empty_lerobot_dataset_factory(root=tmp_path / "vid", features=features_video)
ds_vid.batch_encoding_size = 1
ds_vid.add_frame({vid_key: np.random.rand(*DUMMY_CHW), "task": "Dummy task"})
ds_vid.save_episode()
vid_img_dir = ds_vid._get_image_file_dir(0, vid_key)
assert not vid_img_dir.exists(), (
"Temporary image directory should be removed when batch_encoding_size == 1"
)
def test_tmp_mixed_deletion(tmp_path, empty_lerobot_dataset_factory):
"""Verify temporary image directories are removed appropriately when both image and video features are present."""
image_key = "image"
vid_key = "video"
features_mixed = {
image_key: {"dtype": "image", "shape": DUMMY_CHW, "names": ["channels", "height", "width"]},
vid_key: {"dtype": "video", "shape": DUMMY_HWC, "names": ["height", "width", "channels"]},
}
ds_mixed = empty_lerobot_dataset_factory(
root=tmp_path / "mixed", features=features_mixed, batch_encoding_size=2
)
ds_mixed.add_frame(
{
"image": np.random.rand(*DUMMY_CHW),
"video": np.random.rand(*DUMMY_HWC),
"task": "Dummy task",
}
)
ds_mixed.save_episode()
img_dir = ds_mixed._get_image_file_dir(0, image_key)
vid_img_dir = ds_mixed._get_image_file_dir(0, vid_key)
assert not img_dir.exists(), "Temporary image directory should be removed for image features"
assert vid_img_dir.exists(), (
"Temporary image directory should not be removed for video features when batch_encoding_size == 2"
)
# TODO(aliberts):
# - [ ] test various attributes & state from init and create
# - [ ] test init with episodes and check num_frames
@@ -1451,202 +1392,3 @@ def test_valid_video_codecs_constant():
assert "hevc" in VALID_VIDEO_CODECS
assert "libsvtav1" in VALID_VIDEO_CODECS
assert len(VALID_VIDEO_CODECS) == 3
def test_delta_timestamps_with_episodes_filter(tmp_path, empty_lerobot_dataset_factory):
"""Regression test for bug where delta_timestamps incorrectly marked all frames as padded when using episodes filter.
The bug occurred because _get_query_indices was using the relative index (idx) in the filtered dataset
instead of the absolute index when comparing against episode boundaries (ep_start, ep_end).
"""
features = {
"observation.state": {"dtype": "float32", "shape": (2,), "names": ["x", "y"]},
"action": {"dtype": "float32", "shape": (2,), "names": ["vx", "vy"]},
}
dataset = empty_lerobot_dataset_factory(root=tmp_path / "test", features=features, use_videos=False)
# Create 3 episodes with 10 frames each
frames_per_episode = 10
for ep_idx in range(3):
for frame_idx in range(frames_per_episode):
dataset.add_frame(
{
"observation.state": torch.tensor([ep_idx, frame_idx], dtype=torch.float32),
"action": torch.randn(2),
"task": f"task_{ep_idx}",
}
)
dataset.save_episode()
dataset.finalize()
# Load only episode 1 (middle episode) with delta_timestamps
delta_ts = {"observation.state": [0.0]} # Just the current frame
filtered_dataset = LeRobotDataset(
dataset.repo_id,
root=dataset.root,
episodes=[1],
delta_timestamps=delta_ts,
)
# Verify the filtered dataset has the correct length
assert len(filtered_dataset) == frames_per_episode
# Check that no frames are marked as padded (since delta=0 should always be valid)
for idx in range(len(filtered_dataset)):
frame = filtered_dataset[idx]
assert frame["observation.state_is_pad"].item() is False, f"Frame {idx} incorrectly marked as padded"
# Verify we're getting data from episode 1
assert frame["episode_index"].item() == 1
def test_delta_timestamps_padding_at_episode_boundaries(tmp_path, empty_lerobot_dataset_factory):
"""Test that delta_timestamps correctly marks padding at episode boundaries when using episodes filter."""
features = {
"observation.state": {"dtype": "float32", "shape": (2,), "names": ["x", "y"]},
"action": {"dtype": "float32", "shape": (2,), "names": ["vx", "vy"]},
}
dataset = empty_lerobot_dataset_factory(
root=tmp_path / "test", features=features, use_videos=False, fps=10
)
# Create 3 episodes with 5 frames each
frames_per_episode = 5
for ep_idx in range(3):
for frame_idx in range(frames_per_episode):
dataset.add_frame(
{
"observation.state": torch.tensor([ep_idx, frame_idx], dtype=torch.float32),
"action": torch.randn(2),
"task": f"task_{ep_idx}",
}
)
dataset.save_episode()
dataset.finalize()
# Load only episode 1 with delta_timestamps that go beyond episode boundaries
# fps=10, so 0.1s = 1 frame offset
delta_ts = {"observation.state": [-0.2, -0.1, 0.0, 0.1, 0.2]} # -2, -1, 0, +1, +2 frames
filtered_dataset = LeRobotDataset(
dataset.repo_id,
root=dataset.root,
episodes=[1],
delta_timestamps=delta_ts,
tolerance_s=0.04, # Slightly less than half a frame at 10fps
)
assert len(filtered_dataset) == frames_per_episode
# Check padding at the start of the episode (first frame)
first_frame = filtered_dataset[0]
is_pad = first_frame["observation.state_is_pad"].tolist()
# At frame 0 of episode 1: delta -2 and -1 should be padded, 0, +1, +2 should not
assert is_pad == [True, True, False, False, False], f"First frame padding incorrect: {is_pad}"
# Check middle frame (no padding expected)
mid_frame = filtered_dataset[2]
is_pad = mid_frame["observation.state_is_pad"].tolist()
assert is_pad == [False, False, False, False, False], f"Middle frame padding incorrect: {is_pad}"
# Check padding at the end of the episode (last frame)
last_frame = filtered_dataset[4]
is_pad = last_frame["observation.state_is_pad"].tolist()
# At frame 4 of episode 1: delta -2, -1, 0 should not be padded, +1, +2 should be
assert is_pad == [False, False, False, True, True], f"Last frame padding incorrect: {is_pad}"
def test_delta_timestamps_multiple_episodes_filter(tmp_path, empty_lerobot_dataset_factory):
"""Test delta_timestamps with multiple non-consecutive episodes selected."""
features = {
"observation.state": {"dtype": "float32", "shape": (2,), "names": ["x", "y"]},
}
dataset = empty_lerobot_dataset_factory(
root=tmp_path / "test", features=features, use_videos=False, fps=10
)
# Create 5 episodes with 5 frames each
frames_per_episode = 5
for ep_idx in range(5):
for frame_idx in range(frames_per_episode):
dataset.add_frame(
{
"observation.state": torch.tensor([ep_idx, frame_idx], dtype=torch.float32),
"task": f"task_{ep_idx}",
}
)
dataset.save_episode()
dataset.finalize()
# Load episodes 1 and 3 (non-consecutive)
delta_ts = {"observation.state": [0.0]}
filtered_dataset = LeRobotDataset(
dataset.repo_id,
root=dataset.root,
episodes=[1, 3],
delta_timestamps=delta_ts,
)
assert len(filtered_dataset) == 2 * frames_per_episode
# All frames should have valid (non-padded) data for delta=0
for idx in range(len(filtered_dataset)):
frame = filtered_dataset[idx]
assert frame["observation.state_is_pad"].item() is False
# Verify we're getting the correct episodes
episode_indices = [filtered_dataset[i]["episode_index"].item() for i in range(len(filtered_dataset))]
expected_episodes = [1] * frames_per_episode + [3] * frames_per_episode
assert episode_indices == expected_episodes
def test_delta_timestamps_query_returns_correct_values(tmp_path, empty_lerobot_dataset_factory):
"""Test that delta_timestamps returns the correct observation values, not just correct padding."""
features = {
"observation.state": {"dtype": "float32", "shape": (1,), "names": ["x"]},
}
dataset = empty_lerobot_dataset_factory(
root=tmp_path / "test", features=features, use_videos=False, fps=10
)
# Create 2 episodes with known values
# Episode 0: frames with values 0, 1, 2, 3, 4
# Episode 1: frames with values 10, 11, 12, 13, 14
frames_per_episode = 5
for ep_idx in range(2):
for frame_idx in range(frames_per_episode):
value = ep_idx * 10 + frame_idx
dataset.add_frame(
{
"observation.state": torch.tensor([value], dtype=torch.float32),
"task": f"task_{ep_idx}",
}
)
dataset.save_episode()
dataset.finalize()
# Load episode 1 with delta that looks at previous frame
delta_ts = {"observation.state": [-0.1, 0.0]} # Previous frame and current frame
filtered_dataset = LeRobotDataset(
dataset.repo_id,
root=dataset.root,
episodes=[1],
delta_timestamps=delta_ts,
tolerance_s=0.04,
)
# Check frame 2 of episode 1 (which has absolute index 7, value 12)
frame = filtered_dataset[2]
state_values = frame["observation.state"].tolist()
# Should get [11, 12] - the previous and current values within episode 1
assert state_values == [11.0, 12.0], f"Expected [11.0, 12.0], got {state_values}"
# Check first frame - previous frame should be clamped to episode start (padded)
first_frame = filtered_dataset[0]
state_values = first_frame["observation.state"].tolist()
is_pad = first_frame["observation.state_is_pad"].tolist()
# Previous frame is outside episode, so it's clamped to first frame and marked as padded
assert state_values == [10.0, 10.0], f"Expected [10.0, 10.0], got {state_values}"
assert is_pad == [True, False], f"Expected [True, False], got {is_pad}"
+16 -6
View File
@@ -22,7 +22,7 @@ from lerobot.cameras import CameraConfig, make_cameras_from_configs
from lerobot.motors.motors_bus import Motor, MotorNormMode
from lerobot.processor import RobotAction, RobotObservation
from lerobot.robots import Robot, RobotConfig
from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
from lerobot.utils.errors import DeviceAlreadyConnectedError, DeviceNotConnectedError
from tests.mocks.mock_motors_bus import MockMotorsBus
@@ -98,8 +98,10 @@ class MockRobot(Robot):
def is_connected(self) -> bool:
return self._is_connected
@check_if_already_connected
def connect(self, calibrate: bool = True) -> None:
if self.is_connected:
raise DeviceAlreadyConnectedError(f"{self} already connected")
self._is_connected = True
if calibrate:
self.calibrate()
@@ -108,15 +110,19 @@ class MockRobot(Robot):
def is_calibrated(self) -> bool:
return self._is_calibrated
@check_if_not_connected
def calibrate(self) -> None:
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
self._is_calibrated = True
def configure(self) -> None:
pass
@check_if_not_connected
def get_observation(self) -> RobotObservation:
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
if self.config.random_values:
return {f"{motor}.pos": random.uniform(-100, 100) for motor in self.motors}
else:
@@ -124,10 +130,14 @@ class MockRobot(Robot):
f"{motor}.pos": val for motor, val in zip(self.motors, self.config.static_values, strict=True)
}
@check_if_not_connected
def send_action(self, action: RobotAction) -> RobotAction:
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
return action
@check_if_not_connected
def disconnect(self) -> None:
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
self._is_connected = False
+16 -7
View File
@@ -21,7 +21,7 @@ from typing import Any
from lerobot.processor import RobotAction
from lerobot.teleoperators import Teleoperator, TeleoperatorConfig
from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
from lerobot.utils.errors import DeviceAlreadyConnectedError, DeviceNotConnectedError
@TeleoperatorConfig.register_subclass("mock_teleop")
@@ -68,8 +68,10 @@ class MockTeleop(Teleoperator):
def is_connected(self) -> bool:
return self._is_connected
@check_if_already_connected
def connect(self, calibrate: bool = True) -> None:
if self.is_connected:
raise DeviceAlreadyConnectedError(f"{self} already connected")
self._is_connected = True
if calibrate:
self.calibrate()
@@ -78,15 +80,19 @@ class MockTeleop(Teleoperator):
def is_calibrated(self) -> bool:
return self._is_calibrated
@check_if_not_connected
def calibrate(self) -> None:
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
self._is_calibrated = True
def configure(self) -> None:
pass
@check_if_not_connected
def get_action(self) -> RobotAction:
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
if self.config.random_values:
return {f"{motor}.pos": random.uniform(-100, 100) for motor in self.motors}
else:
@@ -94,9 +100,12 @@ class MockTeleop(Teleoperator):
f"{motor}.pos": val for motor, val in zip(self.motors, self.config.static_values, strict=True)
}
@check_if_not_connected
def send_feedback(self, feedback: dict[str, Any]) -> None: ...
def send_feedback(self, feedback: dict[str, Any]) -> None:
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
@check_if_not_connected
def disconnect(self) -> None:
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
self._is_connected = False
+5 -5
View File
@@ -64,7 +64,7 @@ def close_service_stub(channel, server):
server.stop(None)
@require_package("grpcio", "grpc")
@require_package("grpc")
def test_establish_learner_connection_success():
from lerobot.rl.actor import establish_learner_connection
@@ -81,7 +81,7 @@ def test_establish_learner_connection_success():
close_service_stub(channel, server)
@require_package("grpcio", "grpc")
@require_package("grpc")
def test_establish_learner_connection_failure():
from lerobot.rl.actor import establish_learner_connection
@@ -100,7 +100,7 @@ def test_establish_learner_connection_failure():
close_service_stub(channel, server)
@require_package("grpcio", "grpc")
@require_package("grpc")
def test_push_transitions_to_transport_queue():
from lerobot.rl.actor import push_transitions_to_transport_queue
from lerobot.transport.utils import bytes_to_transitions
@@ -135,7 +135,7 @@ def test_push_transitions_to_transport_queue():
assert_transitions_equal(deserialized_transition, transitions[i])
@require_package("grpcio", "grpc")
@require_package("grpc")
@pytest.mark.timeout(3) # force cross-platform watchdog
def test_transitions_stream():
from lerobot.rl.actor import transitions_stream
@@ -167,7 +167,7 @@ def test_transitions_stream():
assert streamed_data[2].data == b"transition_data_3"
@require_package("grpcio", "grpc")
@require_package("grpc")
@pytest.mark.timeout(3) # force cross-platform watchdog
def test_interactions_stream():
from lerobot.rl.actor import interactions_stream
+3 -3
View File
@@ -88,7 +88,7 @@ def cfg():
return cfg
@require_package("grpcio", "grpc")
@require_package("grpc")
@pytest.mark.timeout(10) # force cross-platform watchdog
def test_end_to_end_transitions_flow(cfg):
from lerobot.rl.actor import (
@@ -150,7 +150,7 @@ def test_end_to_end_transitions_flow(cfg):
assert_transitions_equal(transition, input_transitions[i])
@require_package("grpcio", "grpc")
@require_package("grpc")
@pytest.mark.timeout(10)
def test_end_to_end_interactions_flow(cfg):
from lerobot.rl.actor import (
@@ -223,7 +223,7 @@ def test_end_to_end_interactions_flow(cfg):
assert received == expected
@require_package("grpcio", "grpc")
@require_package("grpc")
@pytest.mark.parametrize("data_size", ["small", "large"])
@pytest.mark.timeout(10)
def test_end_to_end_parameters_flow(cfg, data_size):
+8 -8
View File
@@ -39,7 +39,7 @@ def learner_service_stub():
    close_learner_service_stub(channel, server)

@require_package("grpcio", "grpc")
@require_package("grpc")
def create_learner_service_stub(
    shutdown_event: Event,
    parameters_queue: Queue,

@@ -75,7 +75,7 @@ def create_learner_service_stub(
    return services_pb2_grpc.LearnerServiceStub(channel), channel, server

@require_package("grpcio", "grpc")
@require_package("grpc")
def close_learner_service_stub(channel, server):
    channel.close()
    server.stop(None)

@@ -91,7 +91,7 @@ def test_ready_method(learner_service_stub):
    assert response == services_pb2.Empty()

@require_package("grpcio", "grpc")
@require_package("grpc")
@pytest.mark.timeout(3) # force cross-platform watchdog
def test_send_interactions():
    from lerobot.transport import services_pb2

@@ -135,7 +135,7 @@ def test_send_interactions():
    assert interactions == [b"123", b"4", b"5", b"678"]

@require_package("grpcio", "grpc")
@require_package("grpc")
@pytest.mark.timeout(3) # force cross-platform watchdog
def test_send_transitions():
    from lerobot.transport import services_pb2

@@ -181,7 +181,7 @@ def test_send_transitions():
    assert transitions == [b"transition_1transition_2transition_3", b"batch_1batch_2"]

@require_package("grpcio", "grpc")
@require_package("grpc")
@pytest.mark.timeout(3) # force cross-platform watchdog
def test_send_transitions_empty_stream():
    from lerobot.transport import services_pb2

@@ -209,7 +209,7 @@ def test_send_transitions_empty_stream():
    assert transitions_queue.empty()

@require_package("grpcio", "grpc")
@require_package("grpc")
@pytest.mark.timeout(10) # force cross-platform watchdog
def test_stream_parameters():
    import time

@@ -267,7 +267,7 @@ def test_stream_parameters():
    assert time_diff == pytest.approx(seconds_between_pushes, abs=0.1)

@require_package("grpcio", "grpc")
@require_package("grpc")
@pytest.mark.timeout(3) # force cross-platform watchdog
def test_stream_parameters_with_shutdown():
    from lerobot.transport import services_pb2

@@ -319,7 +319,7 @@ def test_stream_parameters_with_shutdown():
    assert received_params == [b"param_batch_1", b"stop"]

@require_package("grpcio", "grpc")
@require_package("grpc")
@pytest.mark.timeout(3) # force cross-platform watchdog
def test_stream_parameters_waits_and_retries_on_empty_queue():
    import threading
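
The `create_learner_service_stub` and `close_learner_service_stub` helpers above follow the standard in-process gRPC test pattern: start a server on an ephemeral port, register the servicer, connect a stub over an insecure channel, and tear both down afterwards. The sketch below is a generic version of that pattern and assumes nothing lerobot-specific; `create_stub`, `close_stub`, and their parameters are illustrative names, not the helpers' real signatures.

```python
# Generic in-process gRPC test harness; illustrative only, lerobot's
# create_learner_service_stub/close_learner_service_stub take different arguments.
from concurrent import futures

import grpc


def create_stub(servicer, add_servicer_to_server, stub_cls):
    # Bind to port 0 so the OS picks a free ephemeral port.
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
    add_servicer_to_server(servicer, server)
    port = server.add_insecure_port("localhost:0")
    server.start()
    channel = grpc.insecure_channel(f"localhost:{port}")
    return stub_cls(channel), channel, server


def close_stub(channel, server):
    channel.close()
    server.stop(None)  # grace=None stops the server immediately, as in the helper above
```

With grpc-codegen bindings for a `LearnerService`, the registration function would be the generated `add_LearnerServiceServicer_to_server` and the stub class the `services_pb2_grpc.LearnerServiceStub` seen in the diff.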
+31 -31
@@ -26,7 +26,7 @@ from lerobot.utils.transition import Transition
from tests.utils import require_cuda, require_package

@require_package("grpcio", "grpc")
@require_package("grpc")
def test_bytes_buffer_size_empty_buffer():
    from lerobot.transport.utils import bytes_buffer_size

@@ -37,7 +37,7 @@ def test_bytes_buffer_size_empty_buffer():
    assert buffer.tell() == 0

@require_package("grpcio", "grpc")
@require_package("grpc")
def test_bytes_buffer_size_small_buffer():
    from lerobot.transport.utils import bytes_buffer_size

@@ -47,7 +47,7 @@ def test_bytes_buffer_size_small_buffer():
    assert buffer.tell() == 0

@require_package("grpcio", "grpc")
@require_package("grpc")
def test_bytes_buffer_size_large_buffer():
    from lerobot.transport.utils import CHUNK_SIZE, bytes_buffer_size

@@ -58,7 +58,7 @@ def test_bytes_buffer_size_large_buffer():
    assert buffer.tell() == 0

@require_package("grpcio", "grpc")
@require_package("grpc")
def test_send_bytes_in_chunks_empty_data():
    from lerobot.transport.utils import send_bytes_in_chunks, services_pb2

@@ -68,7 +68,7 @@ def test_send_bytes_in_chunks_empty_data():
    assert len(chunks) == 0

@require_package("grpcio", "grpc")
@require_package("grpc")
def test_single_chunk_small_data():
    from lerobot.transport.utils import send_bytes_in_chunks, services_pb2

@@ -82,7 +82,7 @@ def test_single_chunk_small_data():
    assert chunks[0].transfer_state == services_pb2.TransferState.TRANSFER_END

@require_package("grpcio", "grpc")
@require_package("grpc")
def test_not_silent_mode():
    from lerobot.transport.utils import send_bytes_in_chunks, services_pb2

@@ -94,7 +94,7 @@ def test_not_silent_mode():
    assert chunks[0].data == b"Some data"

@require_package("grpcio", "grpc")
@require_package("grpc")
def test_send_bytes_in_chunks_large_data():
    from lerobot.transport.utils import CHUNK_SIZE, send_bytes_in_chunks, services_pb2

@@ -111,7 +111,7 @@ def test_send_bytes_in_chunks_large_data():
    assert chunks[2].transfer_state == services_pb2.TransferState.TRANSFER_END

@require_package("grpcio", "grpc")
@require_package("grpc")
def test_send_bytes_in_chunks_large_data_with_exact_chunk_size():
    from lerobot.transport.utils import CHUNK_SIZE, send_bytes_in_chunks, services_pb2

@@ -124,7 +124,7 @@ def test_send_bytes_in_chunks_large_data_with_exact_chunk_size():
    assert chunks[0].transfer_state == services_pb2.TransferState.TRANSFER_END

@require_package("grpcio", "grpc")
@require_package("grpc")
def test_receive_bytes_in_chunks_empty_data():
    from lerobot.transport.utils import receive_bytes_in_chunks

@@ -138,7 +138,7 @@ def test_receive_bytes_in_chunks_empty_data():
    assert queue.empty()

@require_package("grpcio", "grpc")
@require_package("grpc")
def test_receive_bytes_in_chunks_single_chunk():
    from lerobot.transport.utils import receive_bytes_in_chunks, services_pb2

@@ -157,7 +157,7 @@ def test_receive_bytes_in_chunks_single_chunk():
    assert queue.empty()

@require_package("grpcio", "grpc")
@require_package("grpc")
def test_receive_bytes_in_chunks_single_not_end_chunk():
    from lerobot.transport.utils import receive_bytes_in_chunks, services_pb2

@@ -175,7 +175,7 @@ def test_receive_bytes_in_chunks_single_not_end_chunk():
    assert queue.empty()

@require_package("grpcio", "grpc")
@require_package("grpc")
def test_receive_bytes_in_chunks_multiple_chunks():
    from lerobot.transport.utils import receive_bytes_in_chunks, services_pb2

@@ -199,7 +199,7 @@ def test_receive_bytes_in_chunks_multiple_chunks():
    assert queue.empty()

@require_package("grpcio", "grpc")
@require_package("grpc")
def test_receive_bytes_in_chunks_multiple_messages():
    from lerobot.transport.utils import receive_bytes_in_chunks, services_pb2

@@ -235,7 +235,7 @@ def test_receive_bytes_in_chunks_multiple_messages():
    assert queue.empty()

@require_package("grpcio", "grpc")
@require_package("grpc")
def test_receive_bytes_in_chunks_shutdown_during_receive():
    from lerobot.transport.utils import receive_bytes_in_chunks, services_pb2

@@ -259,7 +259,7 @@ def test_receive_bytes_in_chunks_shutdown_during_receive():
    assert queue.empty()

@require_package("grpcio", "grpc")
@require_package("grpc")
def test_receive_bytes_in_chunks_only_begin_chunk():
    from lerobot.transport.utils import receive_bytes_in_chunks, services_pb2

@@ -279,7 +279,7 @@ def test_receive_bytes_in_chunks_only_begin_chunk():
    assert queue.empty()

@require_package("grpcio", "grpc")
@require_package("grpc")
def test_receive_bytes_in_chunks_missing_begin():
    from lerobot.transport.utils import receive_bytes_in_chunks, services_pb2

@@ -303,7 +303,7 @@
# Tests for state_to_bytes and bytes_to_state_dict

@require_package("grpcio", "grpc")
@require_package("grpc")
def test_state_to_bytes_empty_dict():
    from lerobot.transport.utils import bytes_to_state_dict, state_to_bytes

@@ -314,7 +314,7 @@ def test_state_to_bytes_empty_dict():
    assert reconstructed == state_dict

@require_package("grpcio", "grpc")
@require_package("grpc")
def test_bytes_to_state_dict_empty_data():
    from lerobot.transport.utils import bytes_to_state_dict

@@ -323,7 +323,7 @@
        bytes_to_state_dict(b"")

@require_package("grpcio", "grpc")
@require_package("grpc")
def test_state_to_bytes_simple_dict():
    from lerobot.transport.utils import bytes_to_state_dict, state_to_bytes

@@ -347,7 +347,7 @@ def test_state_to_bytes_simple_dict():
        assert torch.allclose(state_dict[key], reconstructed[key])

@require_package("grpcio", "grpc")
@require_package("grpc")
def test_state_to_bytes_various_dtypes():
    from lerobot.transport.utils import bytes_to_state_dict, state_to_bytes

@@ -372,7 +372,7 @@ def test_state_to_bytes_various_dtypes():
        assert torch.allclose(state_dict[key], reconstructed[key])

@require_package("grpcio", "grpc")
@require_package("grpc")
def test_bytes_to_state_dict_invalid_data():
    from lerobot.transport.utils import bytes_to_state_dict

@@ -382,7 +382,7 @@
@require_cuda
@require_package("grpcio", "grpc")
@require_package("grpc")
def test_state_to_bytes_various_dtypes_cuda():
    from lerobot.transport.utils import bytes_to_state_dict, state_to_bytes

@@ -407,7 +407,7 @@ def test_state_to_bytes_various_dtypes_cuda():
        assert torch.allclose(state_dict[key], reconstructed[key])

@require_package("grpcio", "grpc")
@require_package("grpc")
def test_python_object_to_bytes_none():
    from lerobot.transport.utils import bytes_to_python_object, python_object_to_bytes

@@ -439,7 +439,7 @@ def test_python_object_to_bytes_none():
        (1, 2, 3),
    ],
)
@require_package("grpcio", "grpc")
@require_package("grpc")
def test_python_object_to_bytes_simple_types(obj):
    from lerobot.transport.utils import bytes_to_python_object, python_object_to_bytes

@@ -450,7 +450,7 @@ def test_python_object_to_bytes_simple_types(obj):
    assert type(reconstructed) is type(obj)

@require_package("grpcio", "grpc")
@require_package("grpc")
def test_python_object_to_bytes_with_tensors():
    from lerobot.transport.utils import bytes_to_python_object, python_object_to_bytes

@@ -475,7 +475,7 @@ def test_python_object_to_bytes_with_tensors():
    assert torch.equal(obj["nested"]["tensor2"], reconstructed["nested"]["tensor2"])

@require_package("grpcio", "grpc")
@require_package("grpc")
def test_transitions_to_bytes_empty_list():
    from lerobot.transport.utils import bytes_to_transitions, transitions_to_bytes

@@ -487,7 +487,7 @@ def test_transitions_to_bytes_empty_list():
    assert isinstance(reconstructed, list)

@require_package("grpcio", "grpc")
@require_package("grpc")
def test_transitions_to_bytes_single_transition():
    from lerobot.transport.utils import bytes_to_transitions, transitions_to_bytes

@@ -509,7 +509,7 @@ def test_transitions_to_bytes_single_transition():
    assert_transitions_equal(transitions[0], reconstructed[0])

@require_package("grpcio", "grpc")
@require_package("grpc")
def assert_transitions_equal(t1: Transition, t2: Transition):
    """Helper to assert two transitions are equal."""
    assert_observation_equal(t1["state"], t2["state"])

@@ -519,7 +519,7 @@ def assert_transitions_equal(t1: Transition, t2: Transition):
    assert_observation_equal(t1["next_state"], t2["next_state"])

@require_package("grpcio", "grpc")
@require_package("grpc")
def assert_observation_equal(o1: dict, o2: dict):
    """Helper to assert two observations are equal."""
    assert set(o1.keys()) == set(o2.keys())

@@ -527,7 +527,7 @@ def assert_observation_equal(o1: dict, o2: dict):
        assert torch.allclose(o1[key], o2[key])

@require_package("grpcio", "grpc")
@require_package("grpc")
def test_transitions_to_bytes_multiple_transitions():
    from lerobot.transport.utils import bytes_to_transitions, transitions_to_bytes

@@ -551,7 +551,7 @@ def test_transitions_to_bytes_multiple_transitions():
        assert_transitions_equal(original, reconstructed_item)

@require_package("grpcio", "grpc")
@require_package("grpc")
def test_receive_bytes_in_chunks_unknown_state():
    from lerobot.transport.utils import receive_bytes_in_chunks
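
Most of this file exercises the chunked-transfer helpers: `send_bytes_in_chunks` splits a payload into `CHUNK_SIZE` pieces tagged with a `TransferState`, with a payload that fits in one chunk tagged `TRANSFER_END` directly (as `test_single_chunk_small_data` and the exact-chunk-size test assert), and `receive_bytes_in_chunks` reassembles complete messages on the receiving side. The following self-contained sketch shows that protocol with plain dataclasses standing in for the `services_pb2` messages; the function names and the `CHUNK_SIZE` value here are illustrative, not lerobot's exact API.

```python
from dataclasses import dataclass
from enum import Enum, auto

CHUNK_SIZE = 2 * 1024 * 1024  # assumed size; the tests import the real constant


class TransferState(Enum):
    TRANSFER_BEGIN = auto()
    TRANSFER_MIDDLE = auto()
    TRANSFER_END = auto()


@dataclass
class Chunk:  # stand-in for the services_pb2 message with .data / .transfer_state
    data: bytes
    transfer_state: TransferState


def send_in_chunks(payload: bytes):
    """Yield nothing for empty input; tag a single-piece payload TRANSFER_END directly."""
    if not payload:
        return
    pieces = [payload[i : i + CHUNK_SIZE] for i in range(0, len(payload), CHUNK_SIZE)]
    for i, piece in enumerate(pieces):
        if i == len(pieces) - 1:
            state = TransferState.TRANSFER_END
        elif i == 0:
            state = TransferState.TRANSFER_BEGIN
        else:
            state = TransferState.TRANSFER_MIDDLE
        yield Chunk(piece, state)


def receive_in_chunks(chunks):
    """Reassemble messages; a message is complete only when TRANSFER_END arrives."""
    buffer = bytearray()
    for chunk in chunks:
        if chunk.transfer_state == TransferState.TRANSFER_BEGIN:
            buffer = bytearray(chunk.data)  # a new message starts; drop any partial data
        else:
            buffer.extend(chunk.data)
        if chunk.transfer_state == TransferState.TRANSFER_END:
            yield bytes(buffer)
            buffer = bytearray()


if __name__ == "__main__":
    payload = b"x" * (2 * CHUNK_SIZE + 1)  # forces a BEGIN / MIDDLE / END sequence
    assert list(receive_in_chunks(send_in_chunks(payload))) == [payload]
```

This also explains the `test_send_transitions` assertion above, where chunks `transition_1`, `transition_2`, `transition_3` arrive as the single reassembled message `b"transition_1transition_2transition_3"`.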
+2 -2
@@ -167,7 +167,7 @@ def require_package_arg(func):
    return wrapper

def require_package(package_name, import_name=None):
def require_package(package_name):
    """
    Decorator that skips the test if the specified package is not installed.
    """

@@ -175,7 +175,7 @@ def require_package(package_name, import_name=None):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            if not is_package_available(pkg_name=package_name, import_name=import_name):
            if not is_package_available(package_name):
                pytest.skip(f"{package_name} not installed")
            return func(*args, **kwargs)
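
The hunks above are the source of every call-site change in this diff: `require_package` loses its separate `import_name` parameter, so its single remaining argument must be the importable module name (`grpc`) rather than the PyPI distribution name (`grpcio`). Below is a minimal sketch of the resulting decorator; the diff does not show the body of `is_package_available`, so the `importlib.util.find_spec` check is an assumption about one common way to implement it.

```python
import importlib.util
from functools import wraps

import pytest


def is_package_available(package_name: str) -> bool:
    # Assumed implementation: find_spec resolves *import* names ("grpc"),
    # not PyPI distribution names ("grpcio").
    return importlib.util.find_spec(package_name) is not None


def require_package(package_name):
    """Decorator that skips the test if the specified package is not installed."""

    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            if not is_package_available(package_name):
                pytest.skip(f"{package_name} not installed")
            return func(*args, **kwargs)

        return wrapper

    return decorator
```

Under that assumption, `find_spec("grpcio")` would fail even with the grpcio wheel installed, because the module it ships is imported as `grpc`; that is why the old two-argument form existed, and why the simplified one-argument form is called as `require_package("grpc")` throughout these tests.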