feature(pipeline): port tokenizer pipeline for VLA (#1645)

* feat(tokenizer): Introduce TokenizerProcessor for text tokenization

- Added TokenizerProcessor class to handle tokenization of task strings using Hugging Face's AutoTokenizer.
- Supports both string and list inputs, with customizable parameters for task key, output key, and tokenization settings.
- Implemented comprehensive unit tests to validate functionality, including handling of various input scenarios and integration with RobotProcessor.
- Updated types.py to include LANGUAGE feature type and modified __init__.py to register the new processor.
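
  A rough sketch of the shape of this step, assuming the constructor arguments named in the bullets above (the exact signature lives in the processor module; `tokenizer_name` and the defaults here are illustrative):

    from transformers import AutoTokenizer

    class TokenizerProcessor:
        def __init__(self, tokenizer_name: str, task_key: str = "task", max_length: int = 48):
            self.tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)
            self.task_key = task_key
            self.max_length = max_length

        def tokenize(self, tasks: str | list[str]) -> dict:
            # Accept a single string or a list of strings.
            if isinstance(tasks, str):
                tasks = [tasks]
            return self.tokenizer(
                tasks,
                padding="max_length",
                max_length=self.max_length,
                return_tensors="pt",
            )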

* feat(language): Enhance language processing in TokenizerProcessor

- Added OBS_LANGUAGE constant to define the observation language key.
- Updated TokenizerProcessor to store tokenized task data in the observation dictionary, ensuring compatibility with the new language feature.
- Introduced Pi0NewLineProcessor to append newlines to tasks for proper tokenization.
- Modified tests to validate the integration of language tokens and attention masks in the observation structure.
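
  The newline step itself is tiny; a minimal sketch (the real processor operates on the task strings carried in the transition's complementary data rather than a bare list):

    class Pi0NewLineProcessor:
        def __call__(self, tasks: list[str]) -> list[str]:
            # Pi0 tokenization expects each task prompt to end with a newline.
            return [f"{task}\n" for task in tasks]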

* feat(tokenizer): Add padding configuration to TokenizerProcessor

- Introduced `padding_side` parameter to the TokenizerProcessor for customizable padding direction.
- Updated the `make_pi0_processor` function to include the new padding configuration.
- Enhanced unit tests to validate the functionality of the `padding_side` parameter in various scenarios.
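
  A plausible wiring is to forward the parameter to the Hugging Face tokenizer's own padding_side attribute; a usage sketch (checkpoint name illustrative):

    tokenizer = AutoTokenizer.from_pretrained("google/paligemma-3b-pt-224")
    tokenizer.padding_side = "left"  # pad tokens are prepended instead of appended
    batch = tokenizer(["pick up the cube\n"], padding="max_length", max_length=48, return_tensors="pt")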

* feat(processor): Add state management methods to Pi0NewLineProcessor

* feat(normalization): Track normalization and unnormalization info in complementary data

- Updated NormalizerProcessor and UnnormalizerProcessor to accept additional parameters for tracking normalization modes.
- Enhanced the __call__ methods to store normalization and unnormalization information in the complementary data of transitions.
- Added unit tests to verify the correct tracking of normalization info, including scenarios with missing stats and selective normalization keys.
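
  A sketch of what the tracking could look like inside NormalizerProcessor.__call__ (the "normalization_info" field and the norm_map/stats attribute names are illustrative, not confirmed):

    comp_data = dict(transition.get(TransitionKey.COMPLEMENTARY_DATA) or {})
    # Record which normalization mode was applied to each key that has stats.
    comp_data["normalization_info"] = {
        key: mode for key, mode in self.norm_map.items() if key in self.stats
    }
    transition[TransitionKey.COMPLEMENTARY_DATA] = comp_data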

* feat(factory): Add preprocessor and postprocessor overrides to ProcessorConfigKwargs

- Updated ProcessorConfigKwargs to include optional overrides for preprocessor and postprocessor configurations.
- Enhanced the make_processor function to utilize the new overrides, allowing for more flexible processor initialization.
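
  In shape, this amounts to two more optional keys; a sketch assuming ProcessorConfigKwargs is a TypedDict (existing keys elided):

    from typing import Any, TypedDict

    class ProcessorConfigKwargs(TypedDict, total=False):
        preprocessor_overrides: dict[str, Any]
        postprocessor_overrides: dict[str, Any]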

* feat(processors): Integrate RenameProcessor into various processor configurations

- Added RenameProcessor to the input steps of multiple processor functions, including make_act_processor, make_diffusion_processor, make_pi0_processor, make_sac_processor, make_tdmpc_processor, make_vqbet_processor, and make_smolvla_processor.
- Consolidated normalization features from input and output into a single NormalizerProcessor for improved efficiency.
- Updated the input steps to ensure compatibility with the new RenameProcessor integration.
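
  A minimal sketch of the rename step (the mapping is illustrative):

    class RenameProcessor:
        def __init__(self, rename_map: dict[str, str]):
            self.rename_map = rename_map

        def __call__(self, observation: dict) -> dict:
            # Keys not present in the map pass through unchanged.
            return {self.rename_map.get(key, key): value for key, value in observation.items()}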

* feat(smolvla): Refactor language processing and introduce new line processor (#1658)

- Removed the prepare_language method and directly accessed language tokens and masks from the batch using the OBS_LANGUAGE constant.
- Added SmolVLANewLineProcessor to ensure tasks end with a newline, enhancing tokenization compatibility.
- Updated the make_smolvla_processor function to include the new line processor and tokenizer processor for improved input handling.
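
  Unlike the unconditional append sketched for Pi0 above, "ensure tasks end with a newline" suggests an idempotent check; a sketch:

    class SmolVLANewLineProcessor:
        def __call__(self, tasks: list[str]) -> list[str]:
            # Append "\n" only when it is missing, so repeated passes are safe.
            return [task if task.endswith("\n") else f"{task}\n" for task in tasks]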

* feat(policies): add device processor (#1659)

* feat(processors): Integrate DeviceProcessor into multiple processor configurations

- Added DeviceProcessor to the input and output steps of various processor functions, including make_act_processor, make_diffusion_processor, make_pi0_processor, make_pi0fast_processor, make_sac_processor, make_tdmpc_processor, make_vqbet_processor, and make_smolvla_processor.
- Enhanced the DeviceProcessor class with state management methods and ensured compatibility with existing processor pipelines.
- Introduced unit tests for DeviceProcessor to validate functionality across different scenarios, including CPU and CUDA operations.
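
  At its core, DeviceProcessor is a "move tensors, leave everything else" pass; a flattened sketch (the real processor walks the nested transition structure):

    import torch

    class DeviceProcessor:
        def __init__(self, device: str = "cpu"):
            self.device = torch.device(device)

        def __call__(self, batch: dict) -> dict:
            # Tensors move to the target device; non-tensor values pass through.
            return {
                key: value.to(self.device) if isinstance(value, torch.Tensor) else value
                for key, value in batch.items()
            }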

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* refactor(pipeline): Remove to() method for device management

- Eliminated the to() method from RobotProcessor, which was responsible for moving tensor states to specified devices.
- Removed associated unit tests that validated the functionality of the to() method across various scenarios.
- Streamlined the pipeline code; device placement is now handled by the dedicated DeviceProcessor step instead.

* feat(processor): Enhance DeviceProcessor with float dtype conversion

- Added support for optional float dtype conversion in DeviceProcessor, allowing tensors to be converted to specified floating-point types while preserving non-float types.
- Implemented validation for float dtype input and updated the processor's configuration methods to include float dtype.
- Refactored tensor processing logic to streamline device movement and dtype conversion.
- Introduced comprehensive unit tests to validate the new float dtype functionality across various scenarios.
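
  The dtype rule fits in one helper; a sketch (function name illustrative):

    def _move_and_cast(t: torch.Tensor, device: torch.device, float_dtype: torch.dtype | None) -> torch.Tensor:
        t = t.to(device)
        # Cast only floating-point tensors; integer fields such as "index" keep their dtype.
        if float_dtype is not None and t.is_floating_point():
            t = t.to(float_dtype)
        return t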

* feat(policies): Add new line processors and update module exports

* feat(processor): Enhance batch and device processors to handle index and task_index fields

- Added logic to ToBatchProcessor for unsqueezing 0D tensors for index and task_index fields, ensuring they are processed as 1D tensors.
- Updated DeviceProcessor to process index and task_index fields in complementary data, preserving their tensor types and ensuring non-tensor fields remain unchanged.
- Enhanced unit tests to validate the correct handling of index and task_index fields across various scenarios, including device compatibility and dtype preservation.
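
  The gist of the unsqueeze logic, which the tests below exercise directly (variable names illustrative):

    for key in ("index", "task_index"):
        value = comp_data.get(key)
        if isinstance(value, torch.Tensor) and value.dim() == 0:
            comp_data[key] = value.unsqueeze(0)  # 0D scalar -> shape (1,)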
Authored by Adil Zouitine on 2025-08-05 10:53:08 +02:00; committed by Steven Palma.
Parent a1734cf575, commit 5326ffe77e.
26 changed files with 2776 additions and 232 deletions.
@@ -899,3 +899,231 @@ def test_task_preserves_other_keys():
assert processed_comp_data["motor_id"] == "motor_456"
assert processed_comp_data["config"] == {"speed": "slow", "precision": "high"}
assert processed_comp_data["metrics"] == [1.0, 2.0, 3.0]
# Index and task_index specific tests
def test_index_scalar_to_1d():
"""Test that 0D index tensor gets unsqueezed to 1D."""
processor = ToBatchProcessor()
# Create 0D index tensor (scalar)
index_0d = torch.tensor(42, dtype=torch.int64)
complementary_data = {"index": index_0d}
transition = create_transition(complementary_data=complementary_data)
result = processor(transition)
processed_comp_data = result[TransitionKey.COMPLEMENTARY_DATA]
assert processed_comp_data["index"].shape == (1,)
assert processed_comp_data["index"].dtype == torch.int64
assert processed_comp_data["index"][0] == 42
def test_task_index_scalar_to_1d():
"""Test that 0D task_index tensor gets unsqueezed to 1D."""
processor = ToBatchProcessor()
# Create 0D task_index tensor (scalar)
task_index_0d = torch.tensor(7, dtype=torch.int64)
complementary_data = {"task_index": task_index_0d}
transition = create_transition(complementary_data=complementary_data)
result = processor(transition)
processed_comp_data = result[TransitionKey.COMPLEMENTARY_DATA]
assert processed_comp_data["task_index"].shape == (1,)
assert processed_comp_data["task_index"].dtype == torch.int64
assert processed_comp_data["task_index"][0] == 7
def test_index_and_task_index_together():
"""Test processing both index and task_index together."""
processor = ToBatchProcessor()
# Create 0D tensors for both
index_0d = torch.tensor(100, dtype=torch.int64)
task_index_0d = torch.tensor(3, dtype=torch.int64)
complementary_data = {
"index": index_0d,
"task_index": task_index_0d,
"task": "pick_object",
}
transition = create_transition(complementary_data=complementary_data)
result = processor(transition)
processed_comp_data = result[TransitionKey.COMPLEMENTARY_DATA]
# Check index
assert processed_comp_data["index"].shape == (1,)
assert processed_comp_data["index"][0] == 100
# Check task_index
assert processed_comp_data["task_index"].shape == (1,)
assert processed_comp_data["task_index"][0] == 3
# Check task is also processed
assert processed_comp_data["task"] == ["pick_object"]
def test_index_already_batched():
"""Test that already batched index tensors remain unchanged."""
processor = ToBatchProcessor()
# Create already batched tensors
index_1d = torch.tensor([42], dtype=torch.int64)
index_2d = torch.tensor([[42, 43]], dtype=torch.int64)
# Test 1D (already batched)
complementary_data = {"index": index_1d}
transition = create_transition(complementary_data=complementary_data)
result = processor(transition)
assert torch.equal(result[TransitionKey.COMPLEMENTARY_DATA]["index"], index_1d)
# Test 2D
complementary_data = {"index": index_2d}
transition = create_transition(complementary_data=complementary_data)
result = processor(transition)
assert torch.equal(result[TransitionKey.COMPLEMENTARY_DATA]["index"], index_2d)
def test_task_index_already_batched():
"""Test that already batched task_index tensors remain unchanged."""
processor = ToBatchProcessor()
# Create already batched tensors
task_index_1d = torch.tensor([7], dtype=torch.int64)
task_index_2d = torch.tensor([[7, 8]], dtype=torch.int64)
# Test 1D (already batched)
complementary_data = {"task_index": task_index_1d}
transition = create_transition(complementary_data=complementary_data)
result = processor(transition)
assert torch.equal(result[TransitionKey.COMPLEMENTARY_DATA]["task_index"], task_index_1d)
# Test 2D
complementary_data = {"task_index": task_index_2d}
transition = create_transition(complementary_data=complementary_data)
result = processor(transition)
assert torch.equal(result[TransitionKey.COMPLEMENTARY_DATA]["task_index"], task_index_2d)
def test_index_non_tensor_unchanged():
"""Test that non-tensor index values remain unchanged."""
processor = ToBatchProcessor()
complementary_data = {
"index": 42, # Plain int, not tensor
"task_index": [1, 2, 3], # List, not tensor
}
transition = create_transition(complementary_data=complementary_data)
result = processor(transition)
processed_comp_data = result[TransitionKey.COMPLEMENTARY_DATA]
assert processed_comp_data["index"] == 42
assert processed_comp_data["task_index"] == [1, 2, 3]
def test_index_dtype_preservation():
"""Test that index and task_index dtype is preserved during processing."""
processor = ToBatchProcessor()
# Test different dtypes
dtypes = [torch.int32, torch.int64, torch.long]
for dtype in dtypes:
index_0d = torch.tensor(42, dtype=dtype)
task_index_0d = torch.tensor(7, dtype=dtype)
complementary_data = {
"index": index_0d,
"task_index": task_index_0d,
}
transition = create_transition(complementary_data=complementary_data)
result = processor(transition)
processed_comp_data = result[TransitionKey.COMPLEMENTARY_DATA]
assert processed_comp_data["index"].dtype == dtype
assert processed_comp_data["task_index"].dtype == dtype
def test_index_with_full_transition():
"""Test index/task_index processing with full transition data."""
processor = ToBatchProcessor()
# Create full transition with all components
observation = {
OBS_STATE: torch.randn(7),
OBS_IMAGE: torch.randn(64, 64, 3),
}
action = torch.randn(4)
complementary_data = {
"task": "navigate_to_goal",
"index": torch.tensor(1000, dtype=torch.int64),
"task_index": torch.tensor(5, dtype=torch.int64),
"episode_id": 123,
}
transition = create_transition(
observation=observation,
action=action,
reward=0.5,
done=False,
complementary_data=complementary_data,
)
result = processor(transition)
# Check all components are processed correctly
assert result[TransitionKey.OBSERVATION][OBS_STATE].shape == (1, 7)
assert result[TransitionKey.OBSERVATION][OBS_IMAGE].shape == (1, 64, 64, 3)
assert result[TransitionKey.ACTION].shape == (1, 4)
processed_comp_data = result[TransitionKey.COMPLEMENTARY_DATA]
assert processed_comp_data["task"] == ["navigate_to_goal"]
assert processed_comp_data["index"].shape == (1,)
assert processed_comp_data["index"][0] == 1000
assert processed_comp_data["task_index"].shape == (1,)
assert processed_comp_data["task_index"][0] == 5
assert processed_comp_data["episode_id"] == 123 # Non-tensor field unchanged
@pytest.mark.skipif(not torch.cuda.is_available(), reason="CUDA not available")
def test_index_device_compatibility():
"""Test processor works with index/task_index tensors on different devices."""
processor = ToBatchProcessor()
# Create tensors on GPU
index_0d = torch.tensor(42, dtype=torch.int64, device="cuda")
task_index_0d = torch.tensor(7, dtype=torch.int64, device="cuda")
complementary_data = {
"index": index_0d,
"task_index": task_index_0d,
}
transition = create_transition(complementary_data=complementary_data)
result = processor(transition)
processed_comp_data = result[TransitionKey.COMPLEMENTARY_DATA]
# Check shapes and that tensors stayed on GPU
assert processed_comp_data["index"].shape == (1,)
assert processed_comp_data["task_index"].shape == (1,)
assert processed_comp_data["index"].device.type == "cuda"
assert processed_comp_data["task_index"].device.type == "cuda"
def test_empty_index_tensor():
"""Test handling of empty index tensors."""
processor = ToBatchProcessor()
# Empty 0D tensor doesn't make sense, but test empty 1D
index_empty = torch.tensor([], dtype=torch.int64)
complementary_data = {"index": index_empty}
transition = create_transition(complementary_data=complementary_data)
result = processor(transition)
# Should remain unchanged (already 1D)
assert result[TransitionKey.COMPLEMENTARY_DATA]["index"].shape == (0,)