feature(pipeline): port tokenizer pipeline for VLA (#1645)

* feat(tokenizer): Introduce TokenizerProcessor for text tokenization

- Added TokenizerProcessor class to handle tokenization of task strings using Hugging Face's AutoTokenizer.
- Supports both string and list inputs, with customizable parameters for task key, output key, and tokenization settings.
- Implemented comprehensive unit tests to validate functionality, including handling of various input scenarios and integration with RobotProcessor.
- Updated types.py to include LANGUAGE feature type and modified __init__.py to register the new processor.
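The processor described above can be sketched in a few lines. The real class wraps Hugging Face's AutoTokenizer; to keep this example self-contained, a toy whitespace tokenizer with the same call shape is injected instead, and all parameter names (`task_key`, `output_key`, `max_length`) are assumptions based on the description.

```python
# Sketch of a TokenizerProcessor-style step; parameter names are assumptions.
# The real class wraps Hugging Face's AutoTokenizer, but any callable with the
# same interface can be injected, which keeps this example self-contained.
class TokenizerProcessor:
    def __init__(self, tokenizer, task_key="task", output_key="tokens", max_length=48):
        self.tokenizer = tokenizer
        self.task_key = task_key
        self.output_key = output_key
        self.max_length = max_length

    def __call__(self, transition):
        task = transition.get(self.task_key)
        if task is None:
            return transition
        # Accept a single string or a list of strings.
        tasks = [task] if isinstance(task, str) else list(task)
        out = dict(transition)
        out[self.output_key] = self.tokenizer(tasks, max_length=self.max_length)
        return out


def toy_tokenizer(texts, max_length):
    # Whitespace "tokenizer" standing in for AutoTokenizer in this sketch.
    ids = [[(hash(w) % 999) + 1 for w in t.split()][:max_length] for t in texts]
    padded = [row + [0] * (max_length - len(row)) for row in ids]
    mask = [[1 if tok != 0 else 0 for tok in row] for row in padded]
    return {"input_ids": padded, "attention_mask": mask}
```

For example, `TokenizerProcessor(toy_tokenizer)({"task": "pick up the cube"})` produces 48-wide `input_ids` with a matching attention mask, and a list of tasks produces one row per task.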

* feat(language): Enhance language processing in TokenizerProcessor

- Added OBS_LANGUAGE constant to define the observation language key.
- Updated TokenizerProcessor to store tokenized task data in the observation dictionary, ensuring compatibility with the new language feature.
- Introduced Pi0NewLineProcessor to append newlines to tasks for proper tokenization.
- Modified tests to validate the integration of language tokens and attention masks in the observation structure.
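A minimal sketch of the newline-appending step described above (the class name comes from the commit; the body and key name are assumptions):

```python
class Pi0NewLineProcessor:
    """Append a trailing newline to task strings so they tokenize as intended."""

    def __init__(self, task_key="task"):  # key name is an assumption
        self.task_key = task_key

    def __call__(self, transition):
        task = transition.get(self.task_key)
        if task is None:
            return transition
        out = dict(transition)
        if isinstance(task, str):
            out[self.task_key] = task if task.endswith("\n") else task + "\n"
        else:
            out[self.task_key] = [t if t.endswith("\n") else t + "\n" for t in task]
        return out
```

Note the guard against double-appending: a task that already ends with a newline passes through unchanged, so the step is idempotent.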

* feat(tokenizer): Add padding configuration to TokenizerProcessor

- Introduced `padding_side` parameter to the TokenizerProcessor for customizable padding direction.
- Updated the `make_pi0_processor` function to include the new padding configuration.
- Enhanced unit tests to validate the functionality of the `padding_side` parameter in various scenarios.
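The effect of a `padding_side` switch can be illustrated with a tiny helper (a sketch of the concept, not the actual tokenizer internals):

```python
def pad_ids(ids, max_length, pad_id=0, padding_side="right"):
    """Pad a token-id list to max_length on the chosen side."""
    if padding_side not in ("left", "right"):
        raise ValueError(f"padding_side must be 'left' or 'right', got {padding_side!r}")
    pad = [pad_id] * (max_length - len(ids))
    return ids + pad if padding_side == "right" else pad + ids
```

Left padding is commonly needed for decoder-style models so that the real tokens sit at the end of the sequence; right padding is the usual default for encoders.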

* feat(processor): Add state management methods to Pi0NewLineProcessor

* feat(normalization): Track normalization and unnormalization info in complementary data

- Updated NormalizerProcessor and UnnormalizerProcessor to accept additional parameters for tracking normalization modes.
- Enhanced the __call__ methods to store normalization and unnormalization information in the complementary data of transitions.
- Added unit tests to verify the correct tracking of normalization info, including scenarios with missing stats and selective normalization keys.
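The bookkeeping described above reduces to a small mapping: for each key that actually had stats applied, record which normalization mode was used. This sketch mirrors the `normalized_keys` entries the new tests assert on; the helper name and the plain-dict transition are assumptions for illustration.

```python
from enum import Enum


class NormalizationMode(Enum):
    MEAN_STD = "MEAN_STD"
    MIN_MAX = "MIN_MAX"
    IDENTITY = "IDENTITY"


def record_normalized_keys(transition, stats, modes):
    """Store {key: mode name} for every key that actually had stats to apply."""
    comp = dict(transition.get("complementary_data") or {})
    comp["normalized_keys"] = {k: m.name for k, m in modes.items() if k in stats}
    out = dict(transition)
    out["complementary_data"] = comp
    return out
```

Keys without stats are simply omitted, which is the behavior the missing-stats test below checks for.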

* feat(factory): Add preprocessor and postprocessor overrides to ProcessorConfigKwargs

- Updated ProcessorConfigKwargs to include optional overrides for preprocessor and postprocessor configurations.
- Enhanced the make_processor function to utilize the new overrides, allowing for more flexible processor initialization.
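As a sketch, the override fields might look like the following `TypedDict`; the field names are assumptions based on the description above.

```python
from typing import Any, TypedDict


class ProcessorConfigKwargs(TypedDict, total=False):
    # Optional per-step overrides merged into the default processor configs;
    # total=False makes both fields optional.
    preprocessor_overrides: dict[str, Any]
    postprocessor_overrides: dict[str, Any]
```

A caller could then pass, e.g., `{"preprocessor_overrides": {"device_processor": {"device": "cuda"}}}` to tweak a single step without rebuilding the whole pipeline.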

* feat(processors): Integrate RenameProcessor into various processor configurations

- Added RenameProcessor to the input steps of multiple processor functions, including make_act_processor, make_diffusion_processor, make_pi0_processor, make_sac_processor, make_tdmpc_processor, make_vqbet_processor, and make_smolvla_processor.
- Consolidated normalization features from input and output into a single NormalizerProcessor for improved efficiency.
- Updated the input steps to ensure compatibility with the new RenameProcessor integration.
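At its core a rename step is just a key mapping over the observation dict; a minimal sketch (the mapping contents are illustrative):

```python
class RenameProcessor:
    """Rename observation keys per a fixed mapping; unmapped keys pass through."""

    def __init__(self, rename_map):
        self.rename_map = rename_map

    def __call__(self, observation):
        return {self.rename_map.get(k, k): v for k, v in observation.items()}
```

For example, `RenameProcessor({"pixels": "observation.image"})` adapts an environment's raw keys to the canonical names the normalizer expects.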

* feat(smolvla): Refactor language processing and introduce new line processor (#1658)

- Removed the prepare_language method and directly accessed language tokens and masks from the batch using the OBS_LANGUAGE constant.
- Added SmolVLANewLineProcessor to ensure tasks end with a newline, enhancing tokenization compatibility.
- Updated the make_smolvla_processor function to include the new line processor and tokenizer processor for improved input handling.

* feature(policies): add device processor (#1659)

* feat(processors): Integrate DeviceProcessor into multiple processor configurations

- Added DeviceProcessor to the input and output steps of various processor functions, including make_act_processor, make_diffusion_processor, make_pi0_processor, make_pi0fast_processor, make_sac_processor, make_tdmpc_processor, make_vqbet_processor, and make_smolvla_processor.
- Enhanced the DeviceProcessor class with state management methods and ensured compatibility with existing processor pipelines.
- Introduced unit tests for DeviceProcessor to validate functionality across different scenarios, including CPU and CUDA operations.
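At its simplest the step moves every tensor in the transition to a target device and leaves everything else untouched; a sketch with the state-management methods omitted:

```python
import torch


class DeviceProcessor:
    """Move all tensors in a transition dict to the configured device."""

    def __init__(self, device="cpu"):
        self.device = torch.device(device)

    def __call__(self, transition):
        return {
            k: v.to(self.device) if isinstance(v, torch.Tensor) else v
            for k, v in transition.items()
        }
```

Non-tensor values (task strings, metadata) pass through unchanged, which is what lets the same step sit in both input and output pipelines.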

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* refactor(pipeline): Remove to() method for device management

- Eliminated the to() method from RobotProcessor, which was responsible for moving tensor states to specified devices.
- Removed associated unit tests that validated the functionality of the to() method across various scenarios.
- Streamlined the pipeline code by focusing on other device management strategies.

* feat(processor): Enhance DeviceProcessor with float dtype conversion

- Added support for optional float dtype conversion in DeviceProcessor, allowing tensors to be converted to specified floating-point types while preserving non-float types.
- Implemented validation for float dtype input and updated the processor's configuration methods to include float dtype.
- Refactored tensor processing logic to streamline device movement and dtype conversion.
- Introduced comprehensive unit tests to validate the new float dtype functionality across various scenarios.
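The dtype rule described above, cast floating tensors while preserving everything else, can be sketched as a standalone helper (the function name is an assumption):

```python
import torch


def maybe_cast_float(tensor, float_dtype):
    """Cast floating-point tensors to float_dtype; leave int/bool tensors as-is."""
    if float_dtype is not None and not float_dtype.is_floating_point:
        raise ValueError(f"Expected a floating dtype, got {float_dtype}")
    if float_dtype is not None and tensor.is_floating_point():
        return tensor.to(float_dtype)
    return tensor
```

Preserving integer dtypes matters because fields like episode or frame indices must stay exact; only activations and observations should be downcast.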

* feat(policies): Add new line processors and update module exports

* feat(processor): Enhance batch and device processors to handle index and task_index fields

- Added logic to ToBatchProcessor for unsqueezing 0D tensors for index and task_index fields, ensuring they are processed as 1D tensors.
- Updated DeviceProcessor to process index and task_index fields in complementary data, preserving their tensor types and ensuring non-tensor fields remain unchanged.
- Enhanced unit tests to validate the correct handling of index and task_index fields across various scenarios, including device compatibility and dtype preservation.
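The 0-D handling can be sketched in a few lines (the field names `index` and `task_index` come from the commit; the helper itself is illustrative):

```python
import torch


def ensure_1d(value):
    """Unsqueeze 0-D tensors (e.g. `index`, `task_index`) into 1-D batch form."""
    if isinstance(value, torch.Tensor) and value.dim() == 0:
        return value.unsqueeze(0)
    return value
```

Already-batched tensors and non-tensor values pass through unchanged, so the transform is safe to apply unconditionally to those fields.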
This commit is contained in:
Adil Zouitine
2025-08-05 10:53:08 +02:00
committed by Steven Palma
parent a1734cf575
commit 5326ffe77e
26 changed files with 2776 additions and 232 deletions
@@ -1260,6 +1260,273 @@ def test_hotswap_stats_with_different_data_types():
torch.testing.assert_close(tensor_stats["observation.image"]["max"], torch.tensor(1.0))
def test_normalization_info_tracking():
"""Test that normalization info is tracked in complementary_data."""
features = {
"observation.image": PolicyFeature(FeatureType.VISUAL, (3, 96, 96)),
"observation.state": PolicyFeature(FeatureType.STATE, (2,)),
"action": PolicyFeature(FeatureType.ACTION, (2,)),
}
norm_map = {
FeatureType.VISUAL: NormalizationMode.MEAN_STD,
FeatureType.STATE: NormalizationMode.MIN_MAX,
FeatureType.ACTION: NormalizationMode.IDENTITY,
}
stats = {
"observation.image": {
"mean": np.array([0.5, 0.5, 0.5]),
"std": np.array([0.2, 0.2, 0.2]),
},
"observation.state": {
"min": np.array([0.0, -1.0]),
"max": np.array([1.0, 1.0]),
},
"action": {
"mean": np.array([0.0, 0.0]),
"std": np.array([1.0, 1.0]),
},
}
normalizer = NormalizerProcessor(features=features, norm_map=norm_map, stats=stats)
observation = {
"observation.image": torch.tensor([0.7, 0.5, 0.3]),
"observation.state": torch.tensor([0.5, 0.0]),
}
action = torch.tensor([1.0, -0.5])
transition = create_transition(observation=observation, action=action)
# Process the transition
normalized_transition = normalizer(transition)
# Check that normalization info is added
comp_data = normalized_transition.get(TransitionKey.COMPLEMENTARY_DATA)
assert comp_data is not None
assert "normalized_keys" in comp_data
norm_info = comp_data["normalized_keys"]
assert norm_info["observation.image"] == "MEAN_STD"
assert norm_info["observation.state"] == "MIN_MAX"
assert norm_info["action"] == "IDENTITY"
def test_unnormalization_info_tracking():
"""Test that unnormalization info is tracked in complementary_data."""
features = {
"observation.image": PolicyFeature(FeatureType.VISUAL, (3,)),
"action": PolicyFeature(FeatureType.ACTION, (2,)),
}
norm_map = {
FeatureType.VISUAL: NormalizationMode.MEAN_STD,
FeatureType.ACTION: NormalizationMode.MIN_MAX,
}
stats = {
"observation.image": {
"mean": np.array([0.5, 0.5, 0.5]),
"std": np.array([0.2, 0.2, 0.2]),
},
"action": {
"min": np.array([-1.0, -1.0]),
"max": np.array([1.0, 1.0]),
},
}
unnormalizer = UnnormalizerProcessor(features=features, norm_map=norm_map, stats=stats)
observation = {"observation.image": torch.tensor([0.7, 0.5, 0.3])}
action = torch.tensor([0.0, -0.5])
transition = create_transition(observation=observation, action=action)
# Process the transition
unnormalized_transition = unnormalizer(transition)
# Check that unnormalization info is added
comp_data = unnormalized_transition.get(TransitionKey.COMPLEMENTARY_DATA)
assert comp_data is not None
assert "unnormalized_keys" in comp_data
unnorm_info = comp_data["unnormalized_keys"]
assert unnorm_info["observation.image"] == "MEAN_STD"
assert unnorm_info["action"] == "MIN_MAX"
def test_normalization_info_with_missing_stats():
"""Test normalization info when stats are missing for some keys."""
features = {
"observation.image": PolicyFeature(FeatureType.VISUAL, (3,)),
"observation.state": PolicyFeature(FeatureType.STATE, (2,)),
}
norm_map = {
FeatureType.VISUAL: NormalizationMode.MEAN_STD,
FeatureType.STATE: NormalizationMode.MIN_MAX,
}
# Only provide stats for image, not state
stats = {
"observation.image": {
"mean": np.array([0.5, 0.5, 0.5]),
"std": np.array([0.2, 0.2, 0.2]),
},
}
normalizer = NormalizerProcessor(features=features, norm_map=norm_map, stats=stats)
observation = {
"observation.image": torch.tensor([0.7, 0.5, 0.3]),
"observation.state": torch.tensor([0.5, 0.0]),
}
transition = create_transition(observation=observation)
# Process the transition
normalized_transition = normalizer(transition)
# Check that only keys with stats are in normalization info
comp_data = normalized_transition.get(TransitionKey.COMPLEMENTARY_DATA)
assert comp_data is not None
assert "normalized_keys" in comp_data
norm_info = comp_data["normalized_keys"]
assert norm_info["observation.image"] == "MEAN_STD"
# State should not be in the normalization info since it has no stats
assert "observation.state" not in norm_info
def test_normalization_info_with_selective_keys():
"""Test normalization info with selective normalization."""
features = {
"observation.image": PolicyFeature(FeatureType.VISUAL, (3,)),
"observation.state": PolicyFeature(FeatureType.STATE, (2,)),
}
norm_map = {
FeatureType.VISUAL: NormalizationMode.MEAN_STD,
FeatureType.STATE: NormalizationMode.MIN_MAX,
}
stats = {
"observation.image": {
"mean": np.array([0.5, 0.5, 0.5]),
"std": np.array([0.2, 0.2, 0.2]),
},
"observation.state": {
"min": np.array([0.0, -1.0]),
"max": np.array([1.0, 1.0]),
},
}
# Only normalize image
normalizer = NormalizerProcessor(
features=features, norm_map=norm_map, stats=stats, normalize_keys={"observation.image"}
)
observation = {
"observation.image": torch.tensor([0.7, 0.5, 0.3]),
"observation.state": torch.tensor([0.5, 0.0]),
}
transition = create_transition(observation=observation)
# Process the transition
normalized_transition = normalizer(transition)
# Check that only selected keys are in normalization info
comp_data = normalized_transition.get(TransitionKey.COMPLEMENTARY_DATA)
assert comp_data is not None
assert "normalized_keys" in comp_data
norm_info = comp_data["normalized_keys"]
assert norm_info["observation.image"] == "MEAN_STD"
# State should not be in the normalization info since it wasn't in normalize_keys
assert "observation.state" not in norm_info
def test_normalization_info_preserved_in_pipeline():
"""Test that normalization info is preserved when using RobotProcessor pipeline."""
features = {
"observation.image": PolicyFeature(FeatureType.VISUAL, (3,)),
"action": PolicyFeature(FeatureType.ACTION, (2,)),
}
norm_map = {
FeatureType.VISUAL: NormalizationMode.MEAN_STD,
FeatureType.ACTION: NormalizationMode.MIN_MAX,
}
stats = {
"observation.image": {
"mean": np.array([0.5, 0.5, 0.5]),
"std": np.array([0.2, 0.2, 0.2]),
},
"action": {
"min": np.array([-1.0, -1.0]),
"max": np.array([1.0, 1.0]),
},
}
normalizer = NormalizerProcessor(features=features, norm_map=norm_map, stats=stats)
unnormalizer = UnnormalizerProcessor(features=features, norm_map=norm_map, stats=stats)
# Create pipeline
pipeline = RobotProcessor([normalizer, unnormalizer])
observation = {"observation.image": torch.tensor([0.7, 0.5, 0.3])}
action = torch.tensor([0.5, -0.5])
transition = create_transition(observation=observation, action=action)
# Process through pipeline
result = pipeline(transition)
# Check that both normalization and unnormalization info are present
comp_data = result.get(TransitionKey.COMPLEMENTARY_DATA)
assert comp_data is not None
assert "normalized_keys" in comp_data
assert "unnormalized_keys" in comp_data
# Check normalization info
norm_info = comp_data["normalized_keys"]
assert norm_info["observation.image"] == "MEAN_STD"
assert norm_info["action"] == "MIN_MAX"
# Check unnormalization info
unnorm_info = comp_data["unnormalized_keys"]
assert unnorm_info["observation.image"] == "MEAN_STD"
assert unnorm_info["action"] == "MIN_MAX"
def test_normalization_info_empty_transition():
"""Test that no normalization info is added for empty transitions."""
features = {
"observation.image": PolicyFeature(FeatureType.VISUAL, (3,)),
"action": PolicyFeature(FeatureType.ACTION, (2,)),
}
norm_map = {
FeatureType.VISUAL: NormalizationMode.MEAN_STD,
FeatureType.ACTION: NormalizationMode.MIN_MAX,
}
stats = {
"observation.image": {"mean": [0.5], "std": [0.2]},
"action": {"min": [-1.0], "max": [1.0]},
}
normalizer = NormalizerProcessor(features=features, norm_map=norm_map, stats=stats)
# Empty transition
transition = create_transition()
# Process the transition
normalized_transition = normalizer(transition)
# Check that no normalization info is added
comp_data = normalized_transition.get(TransitionKey.COMPLEMENTARY_DATA)
assert comp_data is None or "normalized_keys" not in comp_data
def test_hotswap_stats_functional_test():
"""Test that hotswapped processor actually works functionally."""
# Create test data