Mirror of https://github.com/huggingface/lerobot.git, synced 2026-05-16 00:59:46 +00:00
e5ade5565d
* Add normalization processor and related components - Introduced `NormalizationProcessor` to handle both observation normalization and action unnormalization. - Added `ObservationNormalizer` and `ActionUnnormalizer` classes for specific normalization tasks. - Updated `__init__.py` to include the new `NormalizationProcessor` in the module exports. - Enhanced `ObservationProcessor` with registration in the `ProcessorStepRegistry` for better modularity. - Created `RenameProcessor` for renaming keys in observations, improving flexibility in data processing. * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Enhance processing architecture with new components - Added `RenameProcessor` to facilitate key renaming in observations, improving data handling flexibility. - Updated `__init__.py` to include `RenameProcessor` in module exports. - Refactored `NormalizationProcessor` and `ObservationNormalizer` to use `rsplit` for better key handling. - Introduced comprehensive tests for `NormalizationProcessor` and `RenameProcessor` to ensure functionality and robustness. * chore (docs): add docstring for processor * fix (test): test factory * fix(test): policies * Update tests/processor/test_observation_processor.py Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> Signed-off-by: Adil Zouitine <adilzouitinegm@gmail.com> * chore(test): add suggestion made by copilot regarding numpy test * fix(test): import issue * Refactor normalization components and update tests - Renamed `ObservationNormalizer` to `NormalizerProcessor` and `ActionUnnormalizer` to `UnnormalizerProcessor` for clarity. - Consolidated normalization logic for both observations and actions into `NormalizerProcessor` and `UnnormalizerProcessor`. - Updated tests to reflect the new class names and ensure proper functionality of normalization and unnormalization processes. - Enhanced handling of missing statistics in normalization processes. * chore (docstrin):Improve docstring for NormalizerProcessor * feat (device processor): Implement device processor * chore (batch handling): Enhance processing components with batch conversion utilities * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * fix(test): linting issue * chore (output format): improves output format * chore (type): add typing for multiprocess envs * feat (overrides): Implement support for loading processors with parameter overrides - Added the ability to provide non-serializable objects when loading processors from saved configurations using the `overrides` parameter. - Enhanced error handling for invalid override keys and instantiation errors. - Updated documentation and examples to illustrate the usage of overrides for both registered and unregistered steps. - Added comprehensive tests to validate the new functionality and ensure backward compatibility. 
* chore(normalization): addressing comments from copilot * chore(learner): nit comment from copilot * feat(pipeline): Enhance step_through method to support both tuple and dict inputs * refactor(pipeline): Simplify observation and padding data handling in batch transitions * Apply suggestions from code review Co-authored-by: Simon Alibert <75076266+aliberts@users.noreply.github.com> Signed-off-by: Adil Zouitine <adilzouitinegm@gmail.com> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * refactor(pipeline): Introduce ComplementaryDataProcessor for handling complementary data in transitions * fix(ci): temporary fix on dataset deps version * feat(processors): Introduce processors for various policy types - Added `make_processor` function to create processor instances for different policy types, including `tdmpc`, `diffusion`, `act`, `vqbet`, `pi0`, `pi0fast`, `sac`, and `reward_classifier`. - Implemented corresponding processor files for each policy type, encapsulating normalization and unnormalization steps. - Updated existing policies to remove direct normalization dependencies, enhancing modularity and clarity. - Enhanced test coverage to validate the integration of new processors with existing policy configurations. * refactor(learner): Remove normalization from cached image features retrieval - Simplified the retrieval of observation features by removing the normalization step from the `get_cached_image_features` method calls. - This change enhances clarity and aligns with the recent updates to policy processors. * refactor(policies): Remove unnormalization step from action predictions - Eliminated the unnormalization of actions in both `TDMPCPolicy` and `VQBeTPolicy` classes to streamline action prediction. - This change improves code clarity and aligns with recent updates to policy processors. * feat(train): Integrate preprocessor into training pipeline * refactor(train): Update preprocessor initialization to include dataset statistics * refactor(policies): Enhance processor creation and add NaN detection hook * refactor(train): Update memory pinning logic for mps compatibility * feat: initial commit phone teleop * ugly delta control * use quaternion * Refactor observation preprocessing to use a modular pipeline system - Introduced `RobotPipeline` and `ObservationProcessor` for handling observation transformations. - Updated `preprocess_observation` to maintain backward compatibility while leveraging the new pipeline. - Added tests for the new processing components and ensured they match the original functionality. - Removed hardcoded logic in favor of a more flexible, composable architecture. * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Refactor observation processing and improve modularity - Updated `ObservationProcessor` to enhance the modular design for processing observations. - Cleaned up imports and improved code readability by removing unnecessary lines and comments. - Ensured backward compatibility while integrating new processing components. - Added tests to validate the functionality of the updated processing architecture. * Remove redundant tests for None observation and serialization methods in `test_observation_processor.py` to streamline the test suite and improve maintainability. * Refactor processing architecture to use RobotProcessor - Replaced instances of RobotPipeline with RobotProcessor across the codebase for improved modularity and clarity. 
- Introduced ProcessorStepRegistry for better management of processing steps. - Updated relevant documentation and tests to reflect the new processing structure. - Enhanced the save/load functionality to support the new processor design. - Added a model card template for RobotProcessor to facilitate sharing and documentation. * Add RobotProcessor tutorial to documentation - Introduced a new tutorial on using RobotProcessor for preprocessing robot data. - Added a section in the table of contents for easy navigation to the new tutorial. - The tutorial covers key concepts, real-world scenarios, and practical examples for effective use of the RobotProcessor pipeline. * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Add normalization processor and related components - Introduced `NormalizationProcessor` to handle both observation normalization and action unnormalization. - Added `ObservationNormalizer` and `ActionUnnormalizer` classes for specific normalization tasks. - Updated `__init__.py` to include the new `NormalizationProcessor` in the module exports. - Enhanced `ObservationProcessor` with registration in the `ProcessorStepRegistry` for better modularity. - Created `RenameProcessor` for renaming keys in observations, improving flexibility in data processing. * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * Enhance processing architecture with new components - Added `RenameProcessor` to facilitate key renaming in observations, improving data handling flexibility. - Updated `__init__.py` to include `RenameProcessor` in module exports. - Refactored `NormalizationProcessor` and `ObservationNormalizer` to use `rsplit` for better key handling. - Introduced comprehensive tests for `NormalizationProcessor` and `RenameProcessor` to ensure functionality and robustness. * chore (docs): add docstring for processor * fix (test): test factory * fix(test): policies * Update tests/processor/test_observation_processor.py Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> Signed-off-by: Adil Zouitine <adilzouitinegm@gmail.com> * chore(test): add suggestion made by copilot regarding numpy test * fix(test): import issue * Refactor normalization components and update tests - Renamed `ObservationNormalizer` to `NormalizerProcessor` and `ActionUnnormalizer` to `UnnormalizerProcessor` for clarity. - Consolidated normalization logic for both observations and actions into `NormalizerProcessor` and `UnnormalizerProcessor`. - Updated tests to reflect the new class names and ensure proper functionality of normalization and unnormalization processes. - Enhanced handling of missing statistics in normalization processes. * chore (docstrin):Improve docstring for NormalizerProcessor * feat (device processor): Implement device processor * chore (batch handling): Enhance processing components with batch conversion utilities * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * fix(test): linting issue * chore (output format): improves output format * chore (type): add typing for multiprocess envs * feat (overrides): Implement support for loading processors with parameter overrides - Added the ability to provide non-serializable objects when loading processors from saved configurations using the `overrides` parameter. - Enhanced error handling for invalid override keys and instantiation errors. 
- Updated documentation and examples to illustrate the usage of overrides for both registered and unregistered steps. - Added comprehensive tests to validate the new functionality and ensure backward compatibility. * chore(normalization): addressing comments from copilot * chore(learner): nit comment from copilot * feat(pipeline): Enhance step_through method to support both tuple and dict inputs * refactor(pipeline): Simplify observation and padding data handling in batch transitions * Apply suggestions from code review Co-authored-by: Simon Alibert <75076266+aliberts@users.noreply.github.com> Signed-off-by: Adil Zouitine <adilzouitinegm@gmail.com> * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * refactor(pipeline): Introduce ComplementaryDataProcessor for handling complementary data in transitions * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * refactor(pipeline): Transition from tuple to dictionary format for EnvTransition - Updated the EnvTransition structure to use a dictionary format instead of a tuple, enhancing readability and maintainability. - Replaced instances of TransitionIndex with TransitionKey for accessing transition components. - Adjusted related processing functions and tests to accommodate the new dictionary format, ensuring consistent handling of transitions across the codebase. * refactor(observation_processor): Improve observation processing by using constants and simplifying pixel handling - Introduced constants for observation keys to enhance readability. - Streamlined the handling of the "pixels" key by copying observations first and processing images more clearly. - Updated the environment state and agent position assignments to use the new constants, improving maintainability. * feat(pipeline): Add hook unregistration functionality and enhance documentation - Implemented methods to unregister before, after, and reset hooks in the RobotProcessor class, allowing for more flexible hook management. - Enhanced documentation to clarify hook execution semantics and the implications of modifying transitions within hooks. - Added comprehensive tests to verify the correct behavior of hook registration and unregistration, including error handling for non-existent hooks. * refactor(pipeline): Clarify hook behavior and improve documentation - Updated the RobotProcessor class to ensure hooks are strictly for observation and do not modify transitions, enhancing clarity and maintainability. - Refactored hook registration methods to reflect the new behavior, ensuring they accept only functions that do not return modified transitions. - Enhanced documentation to clearly outline the purpose of hooks and their execution semantics. - Added tests to verify that hooks are not executed during the step_through method while ensuring they function correctly during the __call__ method. * feat(pipeline): Add __repr__ method to RobotProcessor for improved readability - Implemented a __repr__ method in the RobotProcessor class to provide a clear string representation of the processor, including step names and optional parameters like name and seed. - Added comprehensive tests to validate the __repr__ output for various scenarios, including empty processors, single and multiple steps, custom names, and seed values. - Ensured that the representation handles long lists of steps with truncation for better readability. 
* chore(pipeline): Move _CFG_NAME along other class member * refactor(pipeline): Utilize get_safe_torch_device for device assignment - Replaced direct torch.device instantiation with get_safe_torch_device to ensure safe device handling. - This change enhances code readability and maintains consistency in device management across the RobotProcessor class. * refactor(pipeline): Enhance state filename generation and profiling method - Updated state filename generation to use the registry name when available, improving clarity in saved files. - Modified the profile_steps method to include a warmup_runs parameter, allowing for more controlled performance profiling. - Ensured consistent conditions during profiling by deep copying transitions for each run, enhancing accuracy in timing results. * chore(doc): address pip install commant lerobot that not exist yet * feat(pipeline): Enhance configuration filename handling and state file naming - Introduced support for custom configuration filenames in the `save_pretrained` method, allowing users to specify a filename instead of the default. - Improved state file naming to include step indices, preventing conflicts when multiple processors of the same type are saved. - Added automatic detection for configuration files when loading from a directory, with error handling for multiple files. - Updated tests to validate new features, including custom filenames and automatic config detection. * refactor(pipeline): Improve state file naming conventions for clarity and uniqueness - Enhanced state file naming to include the processor's sanitized name, ensuring uniqueness when multiple processors are saved in the same directory. - Updated tests to reflect changes in state file naming, verifying that filenames now include the processor name and step indices to prevent conflicts. - Added a new test to validate state file naming when using multiple processors, ensuring distinct filenames for each processor's state files. * docs(pipeline): Add clarification for repo name sanitization process * feat(processors): Introduce processors for various policy types - Added `make_processor` function to create processor instances for different policy types, including `tdmpc`, `diffusion`, `act`, `vqbet`, `pi0`, `pi0fast`, `sac`, and `reward_classifier`. - Implemented corresponding processor files for each policy type, encapsulating normalization and unnormalization steps. - Updated existing policies to remove direct normalization dependencies, enhancing modularity and clarity. - Enhanced test coverage to validate the integration of new processors with existing policy configurations. * refactor(learner): Remove normalization from cached image features retrieval - Simplified the retrieval of observation features by removing the normalization step from the `get_cached_image_features` method calls. - This change enhances clarity and aligns with the recent updates to policy processors. * refactor(policies): Remove unnormalization step from action predictions - Eliminated the unnormalization of actions in both `TDMPCPolicy` and `VQBeTPolicy` classes to streamline action prediction. - This change improves code clarity and aligns with recent updates to policy processors. 
* feat(train): Integrate preprocessor into training pipeline * refactor(train): Update preprocessor initialization to include dataset statistics * refactor(policies): Enhance processor creation and add NaN detection hook * feat(record): Integrate RobotProcessor into recording loop and update policy handling - Added support for RobotProcessor in the record_loop function to enhance data processing capabilities. - Updated the logic to reset both policy and processor when provided, ensuring proper state management. - Modified action prediction to utilize the processor, improving the overall functionality of the recording process. - Adjusted the save_checkpoint function to include preprocessor state saving, enhancing checkpointing capabilities. * feat(migration): Add script for migrating policy models with normalization layers * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * feat(migrate): Enhance migration script to create preprocessor and postprocessor for policy models - Updated the migration script to generate both a preprocessor and a postprocessor, improving the handling of normalization for training and inference. - Added functionality to convert features to PolicyFeature objects, ensuring compatibility with the new processor architecture. - Refined the extraction and removal of normalization statistics and layers, streamlining the migration process. - Improved error handling for missing mandatory configuration fields during model instantiation. * feat(migrate): Add model card generation and saving to migration script - Implemented functionality to generate and save a model card for the migrated model, including metadata such as dataset repository ID, license, and tags. - Enhanced the script to push the model card to the hub if requested, improving model documentation and accessibility. - Refactored the saving process to ensure the model card is saved locally and uploaded correctly when pushing to the hub. * feat(processor): Introduce ToBatchProcessor for handling observation batching - Added ToBatchProcessor to ensure observations have proper batch dimensions for model processing. - Implemented functionality to add batch dimensions to state and image observations as needed. - Created comprehensive unit tests to validate the processor's behavior with various tensor dimensions and types. - Ensured compatibility with existing transition keys and maintained the integrity of non-observation data. * feat(processors): Add ToBatchProcessor to multiple policy processors - Integrated ToBatchProcessor into various policy processors to handle observation batching. - Updated make functions for act, diffusion, pi0, pi0fast, sac, smolvla, tdmpc, and vqbet processors to include the new batching functionality. - Ensured consistency across all processor implementations for improved data handling. * refactor(factory): Remove unused imports and NaN detection hook from processor creation * feat(batch_processor): Enhance ToBatchProcessor to handle action batching - Updated ToBatchProcessor to add batch dimensions to actions in addition to observations. - Implemented separate methods for processing observations and actions, improving code readability. - Added comprehensive unit tests to validate action batching functionality across various tensor dimensions and types. 
* [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * feat(factory): Enhance make_processor to support preprocessor and postprocessor configuration - Introduced ProcessorConfigKwargs TypedDict for better type safety in processor configuration. - Updated make_processor to accept preprocessor and postprocessor configuration filenames, improving flexibility in processor instantiation. - Refactored the loading of pretrained processors to utilize the new configuration options. * refactor(factory): Clean up imports in factory.py - Removed unused import of IdentityProcessor to streamline the code. * feat(migrate): Extend load_model_from_hub to include train configuration - Updated load_model_from_hub to return the train configuration alongside the model state_dict and config. - Modified main function to handle the additional train configuration when loading models from both the hub and local paths. - Adjusted dataset_repo_id extraction to utilize the train configuration for improved accuracy. * refactor(record): Rename processor parameters and update processing logic - Renamed `processor` to `preprocessor` and added `postprocessor` parameter for clarity. - Updated the `record_loop` and `predict_action` functions to utilize the new preprocessor and postprocessor, enhancing the processing flow. - Ensured compatibility with existing functionality while improving code readability. * feat(batch_processor): Add task field processing to ToBatchProcessor - Enhanced ToBatchProcessor to wrap string tasks in a list, adding batch dimensions for compatibility with model inference. - Implemented a new method for processing complementary data, ensuring that task values are correctly handled as either strings or lists of strings. - Added comprehensive unit tests to validate task processing, including edge cases and in-place mutation of complementary data. * feat(normalization): Implement IDENTITY mode for normalization and unnormalization - Enhanced NormalizerProcessor and UnnormalizerProcessor to support IDENTITY mode, allowing features to bypass normalization when specified. - Updated processing logic to check normalization modes and handle missing statistics gracefully. - Added comprehensive unit tests to validate IDENTITY mode functionality for both observations and actions, ensuring correct behavior across various scenarios. - Improved error handling for unsupported normalization modes. * fix(rebase): remove residual normalization layer: * refactor(diffusion): remove normalization layer from input processing * Add debug + calib * cleanup * Add pipeline * fix int * Add record example * nit * Add feature contract to pipelinestep and pipeline * Add tests * Add processor tests * PR feedback * encorperate pr feedback * type in doc * oops * cleaned up steps and integrated pipeline with feature_contract * refactor steps and robot to pipeline * cleanup pipeline * cleanup code further * make it run * feat(processors): Introduce processors for various policy types - Added `make_processor` function to create processor instances for different policy types, including `tdmpc`, `diffusion`, `act`, `vqbet`, `pi0`, `pi0fast`, `sac`, and `reward_classifier`. - Implemented corresponding processor files for each policy type, encapsulating normalization and unnormalization steps. - Updated existing policies to remove direct normalization dependencies, enhancing modularity and clarity. 
- Enhanced test coverage to validate the integration of new processors with existing policy configurations. * refactor(learner): Remove normalization from cached image features retrieval - Simplified the retrieval of observation features by removing the normalization step from the `get_cached_image_features` method calls. - This change enhances clarity and aligns with the recent updates to policy processors. * refactor(policies): Remove unnormalization step from action predictions - Eliminated the unnormalization of actions in both `TDMPCPolicy` and `VQBeTPolicy` classes to streamline action prediction. - This change improves code clarity and aligns with recent updates to policy processors. * feat(train): Integrate preprocessor into training pipeline * refactor(train): Update preprocessor initialization to include dataset statistics * refactor(policies): Enhance processor creation and add NaN detection hook * feat(record): Integrate RobotProcessor into recording loop and update policy handling - Added support for RobotProcessor in the record_loop function to enhance data processing capabilities. - Updated the logic to reset both policy and processor when provided, ensuring proper state management. - Modified action prediction to utilize the processor, improving the overall functionality of the recording process. - Adjusted the save_checkpoint function to include preprocessor state saving, enhancing checkpointing capabilities. * feat(migration): Add script for migrating policy models with normalization layers * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * feat(migrate): Enhance migration script to create preprocessor and postprocessor for policy models - Updated the migration script to generate both a preprocessor and a postprocessor, improving the handling of normalization for training and inference. - Added functionality to convert features to PolicyFeature objects, ensuring compatibility with the new processor architecture. - Refined the extraction and removal of normalization statistics and layers, streamlining the migration process. - Improved error handling for missing mandatory configuration fields during model instantiation. * feat(migrate): Add model card generation and saving to migration script - Implemented functionality to generate and save a model card for the migrated model, including metadata such as dataset repository ID, license, and tags. - Enhanced the script to push the model card to the hub if requested, improving model documentation and accessibility. - Refactored the saving process to ensure the model card is saved locally and uploaded correctly when pushing to the hub. * feat(processor): Introduce ToBatchProcessor for handling observation batching - Added ToBatchProcessor to ensure observations have proper batch dimensions for model processing. - Implemented functionality to add batch dimensions to state and image observations as needed. - Created comprehensive unit tests to validate the processor's behavior with various tensor dimensions and types. - Ensured compatibility with existing transition keys and maintained the integrity of non-observation data. * feat(processors): Add ToBatchProcessor to multiple policy processors - Integrated ToBatchProcessor into various policy processors to handle observation batching. 
- Updated make functions for act, diffusion, pi0, pi0fast, sac, smolvla, tdmpc, and vqbet processors to include the new batching functionality. - Ensured consistency across all processor implementations for improved data handling. * refactor(factory): Remove unused imports and NaN detection hook from processor creation * feat(batch_processor): Enhance ToBatchProcessor to handle action batching - Updated ToBatchProcessor to add batch dimensions to actions in addition to observations. - Implemented separate methods for processing observations and actions, improving code readability. - Added comprehensive unit tests to validate action batching functionality across various tensor dimensions and types. * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * feat(factory): Enhance make_processor to support preprocessor and postprocessor configuration - Introduced ProcessorConfigKwargs TypedDict for better type safety in processor configuration. - Updated make_processor to accept preprocessor and postprocessor configuration filenames, improving flexibility in processor instantiation. - Refactored the loading of pretrained processors to utilize the new configuration options. * refactor(factory): Clean up imports in factory.py - Removed unused import of IdentityProcessor to streamline the code. * feat(migrate): Extend load_model_from_hub to include train configuration - Updated load_model_from_hub to return the train configuration alongside the model state_dict and config. - Modified main function to handle the additional train configuration when loading models from both the hub and local paths. - Adjusted dataset_repo_id extraction to utilize the train configuration for improved accuracy. * refactor(record): Rename processor parameters and update processing logic - Renamed `processor` to `preprocessor` and added `postprocessor` parameter for clarity. - Updated the `record_loop` and `predict_action` functions to utilize the new preprocessor and postprocessor, enhancing the processing flow. - Ensured compatibility with existing functionality while improving code readability. * feat(batch_processor): Add task field processing to ToBatchProcessor - Enhanced ToBatchProcessor to wrap string tasks in a list, adding batch dimensions for compatibility with model inference. - Implemented a new method for processing complementary data, ensuring that task values are correctly handled as either strings or lists of strings. - Added comprehensive unit tests to validate task processing, including edge cases and in-place mutation of complementary data. * feat(normalization): Implement IDENTITY mode for normalization and unnormalization - Enhanced NormalizerProcessor and UnnormalizerProcessor to support IDENTITY mode, allowing features to bypass normalization when specified. - Updated processing logic to check normalization modes and handle missing statistics gracefully. - Added comprehensive unit tests to validate IDENTITY mode functionality for both observations and actions, ensuring correct behavior across various scenarios. - Improved error handling for unsupported normalization modes. * fix(rebase): remove residual normalization layer: * refactor(diffusion): remove normalization layer from input processing * refactor(normalization): Remove unused state dict transformation methods and streamline imports - Eliminated the _transform_state_dict_keys and _load_as_safetensor methods from PI0Policy, simplifying the model loading process. 
- Cleaned up imports in modeling_pi0.py by removing log_model_loading_keys and init_logging. - Updated TDMPCPolicy and VQBeTPolicy to handle action removal from batches during offline evaluation. - Introduced hotswap_stats function in normalize_processor.py to update normalization statistics dynamically, with corresponding tests to ensure functionality. * refactor(normalization): Clean up imports in normalize_processor.py * feat(batch_processor): Add feature_contract method to ToBatchProcessor - Introduced feature_contract method that returns features without modification, maintaining the no-op behavior of the processor. - This addition enhances the flexibility of the ToBatchProcessor for future feature processing needs. * fix(dependencies): Update transformers dependency constraint to allow only versions up to 4.52.0 * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * feat(tokenizer): Introduce TokenizerProcessor for text tokenization - Added TokenizerProcessor class to handle tokenization of task strings using Hugging Face's AutoTokenizer. - Supports both string and list inputs, with customizable parameters for task key, output key, and tokenization settings. - Implemented comprehensive unit tests to validate functionality, including handling of various input scenarios and integration with RobotProcessor. - Updated types.py to include LANGUAGE feature type and modified __init__.py to register the new processor. * feat(language): Enhance language processing in TokenizerProcessor - Added OBS_LANGUAGE constant to define the observation language key. - Updated TokenizerProcessor to store tokenized task data in the observation dictionary, ensuring compatibility with the new language feature. - Introduced Pi0NewLineProcessor to append newlines to tasks for proper tokenization. - Modified tests to validate the integration of language tokens and attention masks in the observation structure. * feat(tokenizer): Add padding configuration to TokenizerProcessor - Introduced `padding_side` parameter to the TokenizerProcessor for customizable padding direction. - Updated the `make_pi0_processor` function to include the new padding configuration. - Enhanced unit tests to validate the functionality of the `padding_side` parameter in various scenarios. * feat(processor): Add state management methods to Pi0NewLineProcessor * feat(normalization): Track normalization and unnormalization info in complementary data - Updated NormalizerProcessor and UnnormalizerProcessor to accept additional parameters for tracking normalization modes. - Enhanced the __call__ methods to store normalization and unnormalization information in the complementary data of transitions. - Added unit tests to verify the correct tracking of normalization info, including scenarios with missing stats and selective normalization keys. * feat(factory): Add preprocessor and postprocessor overrides to ProcessorConfigKwargs - Updated ProcessorConfigKwargs to include optional overrides for preprocessor and postprocessor configurations. - Enhanced the make_processor function to utilize the new overrides, allowing for more flexible processor initialization. * feat(processors): Integrate RenameProcessor into various processor configurations - Added RenameProcessor to the input steps of multiple processor functions, including make_act_processor, make_diffusion_processor, make_pi0_processor, make_sac_processor, make_tdmpc_processor, make_vqbet_processor, and make_smolvla_processor. 
- Consolidated normalization features from input and output into a single NormalizerProcessor for improved efficiency. - Updated the input steps to ensure compatibility with the new RenameProcessor integration. * Do some todos and cleanup * change feature_contract to dataset_features * use one method for conversion pipeline output to add_frame dict and use base processors where possible * Add back in and use record_loop * update todo * rename to_dataset_frame * feat(smolvla): Refactor language processing and introduce new line processor (#1658) - Removed the prepare_language method and directly accessed language tokens and masks from the batch using the OBS_LANGUAGE constant. - Added SmolVLANewLineProcessor to ensure tasks end with a newline, enhancing tokenization compatibility. - Updated the make_smolvla_processor function to include the new line processor and tokenizer processor for improved input handling. * feat(processors): Integrate DeviceProcessor into multiple processor configurations - Added DeviceProcessor to the input and output steps of various processor functions, including make_act_processor, make_diffusion_processor, make_pi0_processor, make_pi0fast_processor, make_sac_processor, make_tdmpc_processor, make_vqbet_processor, and make_smolvla_processor. - Enhanced the DeviceProcessor class with state management methods and ensured compatibility with existing processor pipelines. - Introduced unit tests for DeviceProcessor to validate functionality across different scenarios, including CPU and CUDA operations. * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * fix * fix reference frame * refactor(pipeline): Remove to() method for device management - Eliminated the to() method from RobotProcessor, which was responsible for moving tensor states to specified devices. - Removed associated unit tests that validated the functionality of the to() method across various scenarios. - Streamlined the pipeline code by focusing on other device management strategies. * feat(processor): Enhance DeviceProcessor with float dtype conversion - Added support for optional float dtype conversion in DeviceProcessor, allowing tensors to be converted to specified floating-point types while preserving non-float types. - Implemented validation for float dtype input and updated the processor's configuration methods to include float dtype. - Refactored tensor processing logic to streamline device movement and dtype conversion. - Introduced comprehensive unit tests to validate the new float dtype functionality across various scenarios. * update data visualization * update teleop example * fix record bugs * Add replay * Not code * feature(pipeline): port tokenizer pipeline for VLA (#1645) * feat(tokenizer): Introduce TokenizerProcessor for text tokenization - Added TokenizerProcessor class to handle tokenization of task strings using Hugging Face's AutoTokenizer. - Supports both string and list inputs, with customizable parameters for task key, output key, and tokenization settings. - Implemented comprehensive unit tests to validate functionality, including handling of various input scenarios and integration with RobotProcessor. - Updated types.py to include LANGUAGE feature type and modified __init__.py to register the new processor. * feat(language): Enhance language processing in TokenizerProcessor - Added OBS_LANGUAGE constant to define the observation language key. 
- Updated TokenizerProcessor to store tokenized task data in the observation dictionary, ensuring compatibility with the new language feature. - Introduced Pi0NewLineProcessor to append newlines to tasks for proper tokenization. - Modified tests to validate the integration of language tokens and attention masks in the observation structure. * feat(tokenizer): Add padding configuration to TokenizerProcessor - Introduced `padding_side` parameter to the TokenizerProcessor for customizable padding direction. - Updated the `make_pi0_processor` function to include the new padding configuration. - Enhanced unit tests to validate the functionality of the `padding_side` parameter in various scenarios. * feat(processor): Add state management methods to Pi0NewLineProcessor * feat(normalization): Track normalization and unnormalization info in complementary data - Updated NormalizerProcessor and UnnormalizerProcessor to accept additional parameters for tracking normalization modes. - Enhanced the __call__ methods to store normalization and unnormalization information in the complementary data of transitions. - Added unit tests to verify the correct tracking of normalization info, including scenarios with missing stats and selective normalization keys. * feat(factory): Add preprocessor and postprocessor overrides to ProcessorConfigKwargs - Updated ProcessorConfigKwargs to include optional overrides for preprocessor and postprocessor configurations. - Enhanced the make_processor function to utilize the new overrides, allowing for more flexible processor initialization. * feat(processors): Integrate RenameProcessor into various processor configurations - Added RenameProcessor to the input steps of multiple processor functions, including make_act_processor, make_diffusion_processor, make_pi0_processor, make_sac_processor, make_tdmpc_processor, make_vqbet_processor, and make_smolvla_processor. - Consolidated normalization features from input and output into a single NormalizerProcessor for improved efficiency. - Updated the input steps to ensure compatibility with the new RenameProcessor integration. * feat(smolvla): Refactor language processing and introduce new line processor (#1658) - Removed the prepare_language method and directly accessed language tokens and masks from the batch using the OBS_LANGUAGE constant. - Added SmolVLANewLineProcessor to ensure tasks end with a newline, enhancing tokenization compatibility. - Updated the make_smolvla_processor function to include the new line processor and tokenizer processor for improved input handling. * feture(policies): add device processor (#1659) * feat(processors): Integrate DeviceProcessor into multiple processor configurations - Added DeviceProcessor to the input and output steps of various processor functions, including make_act_processor, make_diffusion_processor, make_pi0_processor, make_pi0fast_processor, make_sac_processor, make_tdmpc_processor, make_vqbet_processor, and make_smolvla_processor. - Enhanced the DeviceProcessor class with state management methods and ensured compatibility with existing processor pipelines. - Introduced unit tests for DeviceProcessor to validate functionality across different scenarios, including CPU and CUDA operations. * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * refactor(pipeline): Remove to() method for device management - Eliminated the to() method from RobotProcessor, which was responsible for moving tensor states to specified devices. 
- Removed associated unit tests that validated the functionality of the to() method across various scenarios. - Streamlined the pipeline code by focusing on other device management strategies. * feat(processor): Enhance DeviceProcessor with float dtype conversion - Added support for optional float dtype conversion in DeviceProcessor, allowing tensors to be converted to specified floating-point types while preserving non-float types. - Implemented validation for float dtype input and updated the processor's configuration methods to include float dtype. - Refactored tensor processing logic to streamline device movement and dtype conversion. - Introduced comprehensive unit tests to validate the new float dtype functionality across various scenarios. * feat(policies): Add new line processors and update module exports * feat(processor): Enhance batch and device processors to handle index and task_index fields - Added logic to ToBatchProcessor for unsqueezing 0D tensors for index and task_index fields, ensuring they are processed as 1D tensors. - Updated DeviceProcessor to process index and task_index fields in complementary data, preserving their tensor types and ensuring non-tensor fields remain unchanged. - Enhanced unit tests to validate the correct handling of index and task_index fields across various scenarios, including device compatibility and dtype preservation. * Add eval script * fix `q_curr` in InverseKinematicsEEToJoints to the IK solution * feat(processors): Introduce processors for various policy types - Added `make_processor` function to create processor instances for different policy types, including `tdmpc`, `diffusion`, `act`, `vqbet`, `pi0`, `pi0fast`, `sac`, and `reward_classifier`. - Implemented corresponding processor files for each policy type, encapsulating normalization and unnormalization steps. - Updated existing policies to remove direct normalization dependencies, enhancing modularity and clarity. - Enhanced test coverage to validate the integration of new processors with existing policy configurations. * refactor(learner): Remove normalization from cached image features retrieval - Simplified the retrieval of observation features by removing the normalization step from the `get_cached_image_features` method calls. - This change enhances clarity and aligns with the recent updates to policy processors. * refactor(policies): Remove unnormalization step from action predictions - Eliminated the unnormalization of actions in both `TDMPCPolicy` and `VQBeTPolicy` classes to streamline action prediction. - This change improves code clarity and aligns with recent updates to policy processors. * feat(train): Integrate preprocessor into training pipeline * refactor(train): Update preprocessor initialization to include dataset statistics * refactor(policies): Enhance processor creation and add NaN detection hook * feat(record): Integrate RobotProcessor into recording loop and update policy handling - Added support for RobotProcessor in the record_loop function to enhance data processing capabilities. - Updated the logic to reset both policy and processor when provided, ensuring proper state management. - Modified action prediction to utilize the processor, improving the overall functionality of the recording process. - Adjusted the save_checkpoint function to include preprocessor state saving, enhancing checkpointing capabilities. 
* feat(migration): Add script for migrating policy models with normalization layers * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * feat(migrate): Enhance migration script to create preprocessor and postprocessor for policy models - Updated the migration script to generate both a preprocessor and a postprocessor, improving the handling of normalization for training and inference. - Added functionality to convert features to PolicyFeature objects, ensuring compatibility with the new processor architecture. - Refined the extraction and removal of normalization statistics and layers, streamlining the migration process. - Improved error handling for missing mandatory configuration fields during model instantiation. * feat(migrate): Add model card generation and saving to migration script - Implemented functionality to generate and save a model card for the migrated model, including metadata such as dataset repository ID, license, and tags. - Enhanced the script to push the model card to the hub if requested, improving model documentation and accessibility. - Refactored the saving process to ensure the model card is saved locally and uploaded correctly when pushing to the hub. * feat(processor): Introduce ToBatchProcessor for handling observation batching - Added ToBatchProcessor to ensure observations have proper batch dimensions for model processing. - Implemented functionality to add batch dimensions to state and image observations as needed. - Created comprehensive unit tests to validate the processor's behavior with various tensor dimensions and types. - Ensured compatibility with existing transition keys and maintained the integrity of non-observation data. * feat(processors): Add ToBatchProcessor to multiple policy processors - Integrated ToBatchProcessor into various policy processors to handle observation batching. - Updated make functions for act, diffusion, pi0, pi0fast, sac, smolvla, tdmpc, and vqbet processors to include the new batching functionality. - Ensured consistency across all processor implementations for improved data handling. * refactor(factory): Remove unused imports and NaN detection hook from processor creation * feat(batch_processor): Enhance ToBatchProcessor to handle action batching - Updated ToBatchProcessor to add batch dimensions to actions in addition to observations. - Implemented separate methods for processing observations and actions, improving code readability. - Added comprehensive unit tests to validate action batching functionality across various tensor dimensions and types. * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * feat(factory): Enhance make_processor to support preprocessor and postprocessor configuration - Introduced ProcessorConfigKwargs TypedDict for better type safety in processor configuration. - Updated make_processor to accept preprocessor and postprocessor configuration filenames, improving flexibility in processor instantiation. - Refactored the loading of pretrained processors to utilize the new configuration options. * refactor(factory): Clean up imports in factory.py - Removed unused import of IdentityProcessor to streamline the code. * feat(migrate): Extend load_model_from_hub to include train configuration - Updated load_model_from_hub to return the train configuration alongside the model state_dict and config. 
- Modified main function to handle the additional train configuration when loading models from both the hub and local paths. - Adjusted dataset_repo_id extraction to utilize the train configuration for improved accuracy. * refactor(record): Rename processor parameters and update processing logic - Renamed `processor` to `preprocessor` and added `postprocessor` parameter for clarity. - Updated the `record_loop` and `predict_action` functions to utilize the new preprocessor and postprocessor, enhancing the processing flow. - Ensured compatibility with existing functionality while improving code readability. * feat(batch_processor): Add task field processing to ToBatchProcessor - Enhanced ToBatchProcessor to wrap string tasks in a list, adding batch dimensions for compatibility with model inference. - Implemented a new method for processing complementary data, ensuring that task values are correctly handled as either strings or lists of strings. - Added comprehensive unit tests to validate task processing, including edge cases and in-place mutation of complementary data. * feat(normalization): Implement IDENTITY mode for normalization and unnormalization - Enhanced NormalizerProcessor and UnnormalizerProcessor to support IDENTITY mode, allowing features to bypass normalization when specified. - Updated processing logic to check normalization modes and handle missing statistics gracefully. - Added comprehensive unit tests to validate IDENTITY mode functionality for both observations and actions, ensuring correct behavior across various scenarios. - Improved error handling for unsupported normalization modes. * fix(rebase): remove residual normalization layer: * refactor(diffusion): remove normalization layer from input processing * refactor(normalization): Remove unused state dict transformation methods and streamline imports - Eliminated the _transform_state_dict_keys and _load_as_safetensor methods from PI0Policy, simplifying the model loading process. - Cleaned up imports in modeling_pi0.py by removing log_model_loading_keys and init_logging. - Updated TDMPCPolicy and VQBeTPolicy to handle action removal from batches during offline evaluation. - Introduced hotswap_stats function in normalize_processor.py to update normalization statistics dynamically, with corresponding tests to ensure functionality. * refactor(normalization): Clean up imports in normalize_processor.py * feat(batch_processor): Add feature_contract method to ToBatchProcessor - Introduced feature_contract method that returns features without modification, maintaining the no-op behavior of the processor. - This addition enhances the flexibility of the ToBatchProcessor for future feature processing needs. * fix(dependencies): Update transformers dependency constraint to allow only versions up to 4.52.0 * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * feature(pipeline): port tokenizer pipeline for VLA (#1645) * feat(tokenizer): Introduce TokenizerProcessor for text tokenization - Added TokenizerProcessor class to handle tokenization of task strings using Hugging Face's AutoTokenizer. - Supports both string and list inputs, with customizable parameters for task key, output key, and tokenization settings. - Implemented comprehensive unit tests to validate functionality, including handling of various input scenarios and integration with RobotProcessor. - Updated types.py to include LANGUAGE feature type and modified __init__.py to register the new processor. 
* feat(language): Enhance language processing in TokenizerProcessor - Added OBS_LANGUAGE constant to define the observation language key. - Updated TokenizerProcessor to store tokenized task data in the observation dictionary, ensuring compatibility with the new language feature. - Introduced Pi0NewLineProcessor to append newlines to tasks for proper tokenization. - Modified tests to validate the integration of language tokens and attention masks in the observation structure. * feat(tokenizer): Add padding configuration to TokenizerProcessor - Introduced `padding_side` parameter to the TokenizerProcessor for customizable padding direction. - Updated the `make_pi0_processor` function to include the new padding configuration. - Enhanced unit tests to validate the functionality of the `padding_side` parameter in various scenarios. * feat(processor): Add state management methods to Pi0NewLineProcessor * feat(normalization): Track normalization and unnormalization info in complementary data - Updated NormalizerProcessor and UnnormalizerProcessor to accept additional parameters for tracking normalization modes. - Enhanced the __call__ methods to store normalization and unnormalization information in the complementary data of transitions. - Added unit tests to verify the correct tracking of normalization info, including scenarios with missing stats and selective normalization keys. * feat(factory): Add preprocessor and postprocessor overrides to ProcessorConfigKwargs - Updated ProcessorConfigKwargs to include optional overrides for preprocessor and postprocessor configurations. - Enhanced the make_processor function to utilize the new overrides, allowing for more flexible processor initialization. * feat(processors): Integrate RenameProcessor into various processor configurations - Added RenameProcessor to the input steps of multiple processor functions, including make_act_processor, make_diffusion_processor, make_pi0_processor, make_sac_processor, make_tdmpc_processor, make_vqbet_processor, and make_smolvla_processor. - Consolidated normalization features from input and output into a single NormalizerProcessor for improved efficiency. - Updated the input steps to ensure compatibility with the new RenameProcessor integration. * feat(smolvla): Refactor language processing and introduce new line processor (#1658) - Removed the prepare_language method and directly accessed language tokens and masks from the batch using the OBS_LANGUAGE constant. - Added SmolVLANewLineProcessor to ensure tasks end with a newline, enhancing tokenization compatibility. - Updated the make_smolvla_processor function to include the new line processor and tokenizer processor for improved input handling. * feture(policies): add device processor (#1659) * feat(processors): Integrate DeviceProcessor into multiple processor configurations - Added DeviceProcessor to the input and output steps of various processor functions, including make_act_processor, make_diffusion_processor, make_pi0_processor, make_pi0fast_processor, make_sac_processor, make_tdmpc_processor, make_vqbet_processor, and make_smolvla_processor. - Enhanced the DeviceProcessor class with state management methods and ensured compatibility with existing processor pipelines. - Introduced unit tests for DeviceProcessor to validate functionality across different scenarios, including CPU and CUDA operations. 
* [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * refactor(pipeline): Remove to() method for device management - Eliminated the to() method from RobotProcessor, which was responsible for moving tensor states to specified devices. - Removed associated unit tests that validated the functionality of the to() method across various scenarios. - Streamlined the pipeline code by focusing on other device management strategies. * feat(processor): Enhance DeviceProcessor with float dtype conversion - Added support for optional float dtype conversion in DeviceProcessor, allowing tensors to be converted to specified floating-point types while preserving non-float types. - Implemented validation for float dtype input and updated the processor's configuration methods to include float dtype. - Refactored tensor processing logic to streamline device movement and dtype conversion. - Introduced comprehensive unit tests to validate the new float dtype functionality across various scenarios. * feat(policies): Add new line processors and update module exports * feat(processor): Enhance batch and device processors to handle index and task_index fields - Added logic to ToBatchProcessor for unsqueezing 0D tensors for index and task_index fields, ensuring they are processed as 1D tensors. - Updated DeviceProcessor to process index and task_index fields in complementary data, preserving their tensor types and ensuring non-tensor fields remain unchanged. - Enhanced unit tests to validate the correct handling of index and task_index fields across various scenarios, including device compatibility and dtype preservation. * refactor(processors): Standardize processor naming conventions - Updated processor names across various files to use a consistent "robot_preprocessor" and "robot_postprocessor" format. - Modified the make_processor functions in factory, act, diffusion, pi0, pi0fast, sac, smolvla, tdmpc, and vqbet to reflect the new naming scheme. - Enhanced the pipeline configuration to align with the updated processor names, improving clarity and maintainability. * refactor(factory): Update processor configuration and type hints - Changed return type of get_policy_class to type[PreTrainedPolicy] for improved type safety. - Enhanced make_processor function to utilize dataset_stats in processor creation for better flexibility. - Updated ProcessorConfigKwargs to include dataset_stats, allowing for more comprehensive processor configurations. - Streamlined processor initialization by removing unnecessary kwargs and ensuring clarity in processor type handling. * Fix eval and android gripper * add some tests * refactor(factory, pi0fast): Update processor function names and parameters - Renamed make_pi0_processor to make_pi0fast_processor for clarity and consistency. - Updated parameter names in the factory's make_processor function to use pretrained_model_name_or_path instead of source, enhancing readability and alignment with naming conventions. 
* fix(train.py): push postprocessor with preprocessor - Add preprocessor policy overrides for device and rename_map - Add rename_map to DatasetRecordConfig (record.py) * Clean up PR * fix more git diff PR issues * add path as type in save_pretrained * small nit * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci * rename test file * fix: make dataset_features/feature_contract optional * fix tests * Incorporate PR feedback * clean up record.py * add ascii art, fix normal record * remove merge issues * fix merge * remove features * Add PR feedback * fix last 4 tests * remove features check * rename to transform_features * add transform_features * fix lekiwi eval and update eval api example --------- Signed-off-by: Adil Zouitine <adilzouitinegm@gmail.com> Signed-off-by: Pepijn <138571049+pkooij@users.noreply.github.com> Co-authored-by: Adil Zouitine <adilzouitinegm@gmail.com> Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com> Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> Co-authored-by: Simon Alibert <75076266+aliberts@users.noreply.github.com> Co-authored-by: Michel Aractingi <michel.aractingi@huggingface.co>
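For context, here is a minimal sketch of the TokenizerProcessor / RobotProcessor flow that the commits above describe and that the test file below exercises. It is not part of the diff: the tokenizer name is a placeholder, and the snippet assumes the `transformers` package is installed so that `AutoTokenizer` can be resolved. The tokenizer step reads the "task" string from the transition's complementary data and writes token tensors into the observation.

```python
from lerobot.constants import OBS_LANGUAGE
from lerobot.processor.pipeline import RobotProcessor, TransitionKey
from lerobot.processor.tokenizer_processor import TokenizerProcessor

# Tokenize the "task" string carried in complementary data and store the result
# under the observation keys f"{OBS_LANGUAGE}.tokens" / f"{OBS_LANGUAGE}.attention_mask".
# "bert-base-uncased" is only a placeholder tokenizer name for this sketch.
preprocessor = RobotProcessor(
    [TokenizerProcessor(tokenizer_name="bert-base-uncased", max_length=32)]
)

transition = {
    TransitionKey.OBSERVATION: {},
    TransitionKey.ACTION: None,
    TransitionKey.REWARD: None,
    TransitionKey.DONE: None,
    TransitionKey.TRUNCATED: None,
    TransitionKey.INFO: None,
    TransitionKey.COMPLEMENTARY_DATA: {"task": "pick up the red cube"},
}

result = preprocessor(transition)
tokens = result[TransitionKey.OBSERVATION][f"{OBS_LANGUAGE}.tokens"]          # shape: (32,)
mask = result[TransitionKey.OBSERVATION][f"{OBS_LANGUAGE}.attention_mask"]    # shape: (32,)
```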
728 lines
26 KiB
Python
"""
|
|
Tests for the TokenizerProcessor class.
|
|
"""
|
|
|
|
import tempfile
|
|
from unittest.mock import patch
|
|
|
|
import pytest
|
|
import torch
|
|
|
|
from lerobot.configs.types import FeatureType, PolicyFeature
|
|
from lerobot.constants import OBS_LANGUAGE
|
|
from lerobot.processor.pipeline import RobotProcessor, TransitionKey
|
|
from lerobot.processor.tokenizer_processor import TokenizerProcessor
|
|
from tests.utils import require_package
|
|
|
|
|
|
def create_transition(
|
|
observation=None, action=None, reward=None, done=None, truncated=None, info=None, complementary_data=None
|
|
):
|
|
"""Helper function to create test transitions."""
|
|
return {
|
|
TransitionKey.OBSERVATION: observation,
|
|
TransitionKey.ACTION: action,
|
|
TransitionKey.REWARD: reward,
|
|
TransitionKey.DONE: done,
|
|
TransitionKey.TRUNCATED: truncated,
|
|
TransitionKey.INFO: info,
|
|
TransitionKey.COMPLEMENTARY_DATA: complementary_data,
|
|
}
|
|
|
|
|
|
class MockTokenizer:
    """Mock tokenizer for testing that mimics transformers tokenizer interface."""

    def __init__(self, vocab_size: int = 1000):
        self.vocab_size = vocab_size

    def __call__(
        self,
        text: str | list[str],
        max_length: int = 512,
        truncation: bool = True,
        padding: str = "max_length",
        padding_side: str = "right",
        return_tensors: str = "pt",
        **kwargs,
    ) -> dict[str, torch.Tensor]:
        """Mock tokenization that returns deterministic tokens based on text."""
        if isinstance(text, str):
            texts = [text]
        else:
            texts = text

        batch_size = len(texts)

        # Create mock input_ids and attention_mask
        input_ids = torch.zeros(batch_size, max_length, dtype=torch.long)
        attention_mask = torch.zeros(batch_size, max_length, dtype=torch.long)

        for i, txt in enumerate(texts):
            # Simple mock: use hash of text to generate deterministic tokens
            text_hash = hash(txt) % self.vocab_size
            seq_len = min(len(txt.split()), max_length)

            # Fill input_ids with simple pattern based on text
            for j in range(seq_len):
                input_ids[i, j] = (text_hash + j) % self.vocab_size

            # Set attention mask for non-padded positions
            attention_mask[i, :seq_len] = 1

        result = {
            "input_ids": input_ids,
            "attention_mask": attention_mask,
        }

        # Return single sequence for single input to match transformers behavior
        if len(texts) == 1:
            result = {k: v.squeeze(0) for k, v in result.items()}

        return result


@pytest.fixture
def mock_tokenizer():
    """Provide a mock tokenizer for testing."""
    return MockTokenizer(vocab_size=100)


@require_package("transformers")
@patch("lerobot.processor.tokenizer_processor.AutoTokenizer")
def test_basic_tokenization(mock_auto_tokenizer):
    """Test basic string tokenization functionality."""
    # Mock AutoTokenizer.from_pretrained to return our mock tokenizer
    mock_tokenizer = MockTokenizer(vocab_size=100)
    mock_auto_tokenizer.from_pretrained.return_value = mock_tokenizer

    processor = TokenizerProcessor(tokenizer_name="test-tokenizer", max_length=10)

    transition = create_transition(complementary_data={"task": "pick up the red cube"})

    result = processor(transition)

    # Check that original task is preserved
    assert result[TransitionKey.COMPLEMENTARY_DATA]["task"] == "pick up the red cube"

    # Check that tokens were added to observation
    observation = result[TransitionKey.OBSERVATION]
    assert f"{OBS_LANGUAGE}.tokens" in observation
    assert f"{OBS_LANGUAGE}.attention_mask" in observation

    # Check token structure
    tokens = observation[f"{OBS_LANGUAGE}.tokens"]
    attention_mask = observation[f"{OBS_LANGUAGE}.attention_mask"]
    assert isinstance(tokens, torch.Tensor)
    assert isinstance(attention_mask, torch.Tensor)
    assert tokens.shape == (10,)
    assert attention_mask.shape == (10,)


@require_package("transformers")
def test_basic_tokenization_with_tokenizer_object():
    """Test basic string tokenization functionality using tokenizer object directly."""
    mock_tokenizer = MockTokenizer(vocab_size=100)

    processor = TokenizerProcessor(tokenizer=mock_tokenizer, max_length=10)

    transition = create_transition(complementary_data={"task": "pick up the red cube"})

    result = processor(transition)

    # Check that original task is preserved
    assert result[TransitionKey.COMPLEMENTARY_DATA]["task"] == "pick up the red cube"

    # Check that tokens were added to observation
    observation = result[TransitionKey.OBSERVATION]
    assert f"{OBS_LANGUAGE}.tokens" in observation
    assert f"{OBS_LANGUAGE}.attention_mask" in observation

    # Check token structure
    tokens = observation[f"{OBS_LANGUAGE}.tokens"]
    attention_mask = observation[f"{OBS_LANGUAGE}.attention_mask"]
    assert isinstance(tokens, torch.Tensor)
    assert isinstance(attention_mask, torch.Tensor)
    assert tokens.shape == (10,)
    assert attention_mask.shape == (10,)


@require_package("transformers")
@patch("lerobot.processor.tokenizer_processor.AutoTokenizer")
def test_list_of_strings_tokenization(mock_auto_tokenizer):
    """Test tokenization of a list of strings."""
    mock_tokenizer = MockTokenizer(vocab_size=100)
    mock_auto_tokenizer.from_pretrained.return_value = mock_tokenizer

    processor = TokenizerProcessor(tokenizer_name="test-tokenizer", max_length=8)

    transition = create_transition(complementary_data={"task": ["pick up cube", "place on table"]})

    result = processor(transition)

    # Check that original task is preserved
    assert result[TransitionKey.COMPLEMENTARY_DATA]["task"] == ["pick up cube", "place on table"]

    # Check that tokens were added to observation
    observation = result[TransitionKey.OBSERVATION]
    tokens = observation[f"{OBS_LANGUAGE}.tokens"]
    attention_mask = observation[f"{OBS_LANGUAGE}.attention_mask"]
    assert tokens.shape == (2, 8)  # batch_size=2, seq_len=8
    assert attention_mask.shape == (2, 8)


@require_package("transformers")
@patch("lerobot.processor.tokenizer_processor.AutoTokenizer")
def test_custom_keys(mock_auto_tokenizer):
    """Test using custom task_key."""
    mock_tokenizer = MockTokenizer(vocab_size=100)
    mock_auto_tokenizer.from_pretrained.return_value = mock_tokenizer

    processor = TokenizerProcessor(tokenizer_name="test-tokenizer", task_key="instruction", max_length=5)

    transition = create_transition(complementary_data={"instruction": "move forward"})

    result = processor(transition)

    # Check that tokens are stored in observation regardless of task_key
    observation = result[TransitionKey.OBSERVATION]
    assert f"{OBS_LANGUAGE}.tokens" in observation
    assert f"{OBS_LANGUAGE}.attention_mask" in observation

    tokens = observation[f"{OBS_LANGUAGE}.tokens"]
    assert tokens.shape == (5,)


@require_package("transformers")
@patch("lerobot.processor.tokenizer_processor.AutoTokenizer")
def test_none_complementary_data(mock_auto_tokenizer):
    """Test handling of None complementary_data."""
    mock_tokenizer = MockTokenizer(vocab_size=100)
    mock_auto_tokenizer.from_pretrained.return_value = mock_tokenizer

    processor = TokenizerProcessor(tokenizer_name="test-tokenizer")

    transition = create_transition(complementary_data=None)

    result = processor(transition)
    assert result == transition  # Should return unchanged


@require_package("transformers")
@patch("lerobot.processor.tokenizer_processor.AutoTokenizer")
def test_missing_task_key(mock_auto_tokenizer):
    """Test handling when task key is missing."""
    mock_tokenizer = MockTokenizer(vocab_size=100)
    mock_auto_tokenizer.from_pretrained.return_value = mock_tokenizer

    processor = TokenizerProcessor(tokenizer_name="test-tokenizer")

    transition = create_transition(complementary_data={"other_field": "some value"})

    result = processor(transition)
    assert result == transition  # Should return unchanged


@require_package("transformers")
@patch("lerobot.processor.tokenizer_processor.AutoTokenizer")
def test_none_task_value(mock_auto_tokenizer):
    """Test handling when task value is None."""
    mock_tokenizer = MockTokenizer(vocab_size=100)
    mock_auto_tokenizer.from_pretrained.return_value = mock_tokenizer

    processor = TokenizerProcessor(tokenizer_name="test-tokenizer")

    transition = create_transition(complementary_data={"task": None})

    result = processor(transition)
    assert result == transition  # Should return unchanged


@require_package("transformers")
@patch("lerobot.processor.tokenizer_processor.AutoTokenizer")
def test_unsupported_task_type(mock_auto_tokenizer):
    """Test handling of unsupported task types."""
    mock_tokenizer = MockTokenizer(vocab_size=100)
    mock_auto_tokenizer.from_pretrained.return_value = mock_tokenizer

    processor = TokenizerProcessor(tokenizer_name="test-tokenizer")

    # Test with integer task
    transition = create_transition(complementary_data={"task": 123})

    result = processor(transition)
    assert result == transition  # Should return unchanged

    # Test with mixed list
    transition = create_transition(complementary_data={"task": ["text", 123, "more text"]})

    result = processor(transition)
    assert result == transition  # Should return unchanged


@require_package("transformers")
def test_no_tokenizer_error():
    """Test that ValueError is raised when neither tokenizer nor tokenizer_name is provided."""
    with pytest.raises(ValueError, match="Either 'tokenizer' or 'tokenizer_name' must be provided"):
        TokenizerProcessor()


@require_package("transformers")
def test_invalid_tokenizer_name_error():
    """Test that error is raised when invalid tokenizer_name is provided."""
    with patch("lerobot.processor.tokenizer_processor.AutoTokenizer") as mock_auto_tokenizer:
        # Mock import error
        mock_auto_tokenizer.from_pretrained.side_effect = Exception("Model not found")

        with pytest.raises(Exception, match="Model not found"):
            TokenizerProcessor(tokenizer_name="invalid-tokenizer")


@require_package("transformers")
@patch("lerobot.processor.tokenizer_processor.AutoTokenizer")
def test_get_config_with_tokenizer_name(mock_auto_tokenizer):
    """Test configuration serialization when using tokenizer_name."""
    mock_tokenizer = MockTokenizer(vocab_size=100)
    mock_auto_tokenizer.from_pretrained.return_value = mock_tokenizer

    processor = TokenizerProcessor(
        tokenizer_name="test-tokenizer",
        max_length=256,
        task_key="instruction",
        padding="longest",
        truncation=False,
    )

    config = processor.get_config()

    expected = {
        "tokenizer_name": "test-tokenizer",
        "max_length": 256,
        "task_key": "instruction",
        "padding_side": "right",
        "padding": "longest",
        "truncation": False,
    }

    assert config == expected


@require_package("transformers")
def test_get_config_with_tokenizer_object():
    """Test configuration serialization when using tokenizer object."""
    mock_tokenizer = MockTokenizer(vocab_size=100)

    processor = TokenizerProcessor(
        tokenizer=mock_tokenizer,
        max_length=256,
        task_key="instruction",
        padding="longest",
        truncation=False,
    )

    config = processor.get_config()

    # tokenizer_name should not be in config when tokenizer object is used
    expected = {
        "max_length": 256,
        "task_key": "instruction",
        "padding_side": "right",
        "padding": "longest",
        "truncation": False,
    }

    assert config == expected
    assert "tokenizer_name" not in config


@require_package("transformers")
@patch("lerobot.processor.tokenizer_processor.AutoTokenizer")
def test_state_dict_methods(mock_auto_tokenizer):
    """Test state_dict and load_state_dict methods."""
    mock_tokenizer = MockTokenizer(vocab_size=100)
    mock_auto_tokenizer.from_pretrained.return_value = mock_tokenizer

    processor = TokenizerProcessor(tokenizer_name="test-tokenizer")

    # Should return empty dict
    state = processor.state_dict()
    assert state == {}

    # load_state_dict should not raise error
    processor.load_state_dict({})


@require_package("transformers")
@patch("lerobot.processor.tokenizer_processor.AutoTokenizer")
def test_reset_method(mock_auto_tokenizer):
    """Test reset method."""
    mock_tokenizer = MockTokenizer(vocab_size=100)
    mock_auto_tokenizer.from_pretrained.return_value = mock_tokenizer

    processor = TokenizerProcessor(tokenizer_name="test-tokenizer")

    # Should not raise error
    processor.reset()


@require_package("transformers")
@patch("lerobot.processor.tokenizer_processor.AutoTokenizer")
def test_integration_with_robot_processor(mock_auto_tokenizer):
    """Test integration with RobotProcessor."""
    mock_tokenizer = MockTokenizer(vocab_size=100)
    mock_auto_tokenizer.from_pretrained.return_value = mock_tokenizer

    tokenizer_processor = TokenizerProcessor(tokenizer_name="test-tokenizer", max_length=6)
    robot_processor = RobotProcessor([tokenizer_processor])

    transition = create_transition(
        observation={"state": torch.tensor([1.0, 2.0])},
        action=torch.tensor([0.1, 0.2]),
        complementary_data={"task": "test task"},
    )

    result = robot_processor(transition)

    # Check that observation exists and tokenization was applied
    assert TransitionKey.OBSERVATION in result
    observation = result[TransitionKey.OBSERVATION]
    assert f"{OBS_LANGUAGE}.tokens" in observation
    assert f"{OBS_LANGUAGE}.attention_mask" in observation
    tokens = observation[f"{OBS_LANGUAGE}.tokens"]
    attention_mask = observation[f"{OBS_LANGUAGE}.attention_mask"]
    assert tokens.shape == (6,)
    assert attention_mask.shape == (6,)

    # Check that other data is preserved
    assert torch.equal(
        result[TransitionKey.OBSERVATION]["state"], transition[TransitionKey.OBSERVATION]["state"]
    )
    assert torch.equal(result[TransitionKey.ACTION], transition[TransitionKey.ACTION])


@require_package("transformers")
@patch("lerobot.processor.tokenizer_processor.AutoTokenizer")
def test_save_and_load_pretrained_with_tokenizer_name(mock_auto_tokenizer):
    """Test saving and loading processor with tokenizer_name."""
    mock_tokenizer = MockTokenizer(vocab_size=100)
    mock_auto_tokenizer.from_pretrained.return_value = mock_tokenizer

    original_processor = TokenizerProcessor(
        tokenizer_name="test-tokenizer", max_length=32, task_key="instruction"
    )

    robot_processor = RobotProcessor([original_processor])

    with tempfile.TemporaryDirectory() as temp_dir:
        # Save processor
        robot_processor.save_pretrained(temp_dir)

        # Load processor - tokenizer will be recreated from saved config
        loaded_processor = RobotProcessor.from_pretrained(temp_dir)

        # Test that loaded processor works
        transition = create_transition(complementary_data={"instruction": "test instruction"})

        result = loaded_processor(transition)
        assert TransitionKey.OBSERVATION in result
        assert f"{OBS_LANGUAGE}.tokens" in result[TransitionKey.OBSERVATION]
        assert f"{OBS_LANGUAGE}.attention_mask" in result[TransitionKey.OBSERVATION]


@require_package("transformers")
def test_save_and_load_pretrained_with_tokenizer_object():
    """Test saving and loading processor with tokenizer object using overrides."""
    mock_tokenizer = MockTokenizer(vocab_size=100)

    original_processor = TokenizerProcessor(tokenizer=mock_tokenizer, max_length=32, task_key="instruction")

    robot_processor = RobotProcessor([original_processor])

    with tempfile.TemporaryDirectory() as temp_dir:
        # Save processor
        robot_processor.save_pretrained(temp_dir)

        # Load processor with tokenizer override (since tokenizer object wasn't saved)
        loaded_processor = RobotProcessor.from_pretrained(
            temp_dir, overrides={"tokenizer_processor": {"tokenizer": mock_tokenizer}}
        )

        # Test that loaded processor works
        transition = create_transition(complementary_data={"instruction": "test instruction"})

        result = loaded_processor(transition)
        assert TransitionKey.OBSERVATION in result
        assert f"{OBS_LANGUAGE}.tokens" in result[TransitionKey.OBSERVATION]
        assert f"{OBS_LANGUAGE}.attention_mask" in result[TransitionKey.OBSERVATION]


@require_package("transformers")
def test_registry_functionality():
    """Test that the processor is properly registered."""
    from lerobot.processor.pipeline import ProcessorStepRegistry

    # Check that the processor is registered
    assert "tokenizer_processor" in ProcessorStepRegistry.list()

    # Check that we can retrieve it
    retrieved_class = ProcessorStepRegistry.get("tokenizer_processor")
    assert retrieved_class is TokenizerProcessor


@require_package("transformers")
def test_features_basic():
    """Test basic feature contract functionality."""
    mock_tokenizer = MockTokenizer(vocab_size=100)
    processor = TokenizerProcessor(tokenizer=mock_tokenizer, max_length=128)

    input_features = {
        "observation.state": PolicyFeature(type=FeatureType.STATE, shape=(10,)),
        "action": PolicyFeature(type=FeatureType.ACTION, shape=(5,)),
    }

    output_features = processor.transform_features(input_features)

    # Check that original features are preserved
    assert "observation.state" in output_features
    assert "action" in output_features

    # Check that tokenized features are added
    assert f"{OBS_LANGUAGE}.tokens" in output_features
    assert f"{OBS_LANGUAGE}.attention_mask" in output_features

    # Check feature properties
    tokens_feature = output_features[f"{OBS_LANGUAGE}.tokens"]
    attention_mask_feature = output_features[f"{OBS_LANGUAGE}.attention_mask"]

    assert tokens_feature.type == FeatureType.LANGUAGE
    assert tokens_feature.shape == (128,)
    assert attention_mask_feature.type == FeatureType.LANGUAGE
    assert attention_mask_feature.shape == (128,)


@require_package("transformers")
def test_features_with_custom_max_length():
    """Test feature contract with custom max_length."""
    mock_tokenizer = MockTokenizer(vocab_size=100)
    processor = TokenizerProcessor(tokenizer=mock_tokenizer, max_length=64)

    input_features = {}
    output_features = processor.transform_features(input_features)

    # Check that features use correct max_length
    assert f"{OBS_LANGUAGE}.tokens" in output_features
    assert f"{OBS_LANGUAGE}.attention_mask" in output_features

    tokens_feature = output_features[f"{OBS_LANGUAGE}.tokens"]
    attention_mask_feature = output_features[f"{OBS_LANGUAGE}.attention_mask"]

    assert tokens_feature.shape == (64,)
    assert attention_mask_feature.shape == (64,)


@require_package("transformers")
def test_features_existing_features():
    """Test feature contract when tokenized features already exist."""
    mock_tokenizer = MockTokenizer(vocab_size=100)
    processor = TokenizerProcessor(tokenizer=mock_tokenizer, max_length=256)

    input_features = {
        f"{OBS_LANGUAGE}.tokens": PolicyFeature(type=FeatureType.LANGUAGE, shape=(100,)),
        f"{OBS_LANGUAGE}.attention_mask": PolicyFeature(type=FeatureType.LANGUAGE, shape=(100,)),
    }

    output_features = processor.transform_features(input_features)

    # Should not overwrite existing features
    assert output_features[f"{OBS_LANGUAGE}.tokens"].shape == (100,)  # Original shape preserved
    assert output_features[f"{OBS_LANGUAGE}.attention_mask"].shape == (100,)


@require_package("transformers")
@patch("lerobot.processor.tokenizer_processor.AutoTokenizer")
def test_tokenization_parameters(mock_auto_tokenizer):
    """Test that tokenization parameters are correctly passed to tokenizer."""

    # Create a custom mock that tracks calls
    class TrackingMockTokenizer:
        def __init__(self):
            self.last_call_args = None
            self.last_call_kwargs = None

        def __call__(self, *args, **kwargs):
            self.last_call_args = args
            self.last_call_kwargs = kwargs
            # Return minimal valid output
            return {
                "input_ids": torch.zeros(16, dtype=torch.long),
                "attention_mask": torch.ones(16, dtype=torch.long),
            }

    tracking_tokenizer = TrackingMockTokenizer()
    mock_auto_tokenizer.from_pretrained.return_value = tracking_tokenizer

    processor = TokenizerProcessor(
        tokenizer_name="test-tokenizer",
        max_length=16,
        padding="longest",
        truncation=False,
        padding_side="left",
    )

    transition = create_transition(complementary_data={"task": "test task"})

    processor(transition)

    # Check that parameters were passed correctly (task is converted to list)
    assert tracking_tokenizer.last_call_args == (["test task"],)
    assert tracking_tokenizer.last_call_kwargs["max_length"] == 16
    assert tracking_tokenizer.last_call_kwargs["padding"] == "longest"
    assert tracking_tokenizer.last_call_kwargs["padding_side"] == "left"
    assert tracking_tokenizer.last_call_kwargs["truncation"] is False
    assert tracking_tokenizer.last_call_kwargs["return_tensors"] == "pt"


@require_package("transformers")
@patch("lerobot.processor.tokenizer_processor.AutoTokenizer")
def test_preserves_other_complementary_data(mock_auto_tokenizer):
    """Test that other complementary data fields are preserved."""
    mock_tokenizer = MockTokenizer(vocab_size=100)
    mock_auto_tokenizer.from_pretrained.return_value = mock_tokenizer

    processor = TokenizerProcessor(tokenizer_name="test-tokenizer")

    transition = create_transition(
        complementary_data={
            "task": "test task",
            "episode_id": 123,
            "timestamp": 456.789,
            "other_field": {"nested": "data"},
        }
    )

    result = processor(transition)
    comp_data = result[TransitionKey.COMPLEMENTARY_DATA]

    # Check that all original fields are preserved
    assert comp_data["task"] == "test task"
    assert comp_data["episode_id"] == 123
    assert comp_data["timestamp"] == 456.789
    assert comp_data["other_field"] == {"nested": "data"}

    # Check that tokens were added to observation
    observation = result[TransitionKey.OBSERVATION]
    assert f"{OBS_LANGUAGE}.tokens" in observation
    assert f"{OBS_LANGUAGE}.attention_mask" in observation


@require_package("transformers")
@patch("lerobot.processor.tokenizer_processor.AutoTokenizer")
def test_deterministic_tokenization(mock_auto_tokenizer):
    """Test that tokenization is deterministic for the same input."""
    mock_tokenizer = MockTokenizer(vocab_size=100)
    mock_auto_tokenizer.from_pretrained.return_value = mock_tokenizer

    processor = TokenizerProcessor(tokenizer_name="test-tokenizer", max_length=10)

    transition = create_transition(complementary_data={"task": "consistent test"})

    result1 = processor(transition)
    result2 = processor(transition)

    tokens1 = result1[TransitionKey.OBSERVATION][f"{OBS_LANGUAGE}.tokens"]
    attention_mask1 = result1[TransitionKey.OBSERVATION][f"{OBS_LANGUAGE}.attention_mask"]
    tokens2 = result2[TransitionKey.OBSERVATION][f"{OBS_LANGUAGE}.tokens"]
    attention_mask2 = result2[TransitionKey.OBSERVATION][f"{OBS_LANGUAGE}.attention_mask"]

    # Results should be identical
    assert torch.equal(tokens1, tokens2)
    assert torch.equal(attention_mask1, attention_mask2)


@require_package("transformers")
@patch("lerobot.processor.tokenizer_processor.AutoTokenizer")
def test_empty_string_task(mock_auto_tokenizer):
    """Test handling of empty string task."""
    mock_tokenizer = MockTokenizer(vocab_size=100)
    mock_auto_tokenizer.from_pretrained.return_value = mock_tokenizer

    processor = TokenizerProcessor(tokenizer_name="test-tokenizer", max_length=8)

    transition = create_transition(complementary_data={"task": ""})

    result = processor(transition)

    # Should still tokenize (mock tokenizer handles empty strings)
    observation = result[TransitionKey.OBSERVATION]
    assert f"{OBS_LANGUAGE}.tokens" in observation
    tokens = observation[f"{OBS_LANGUAGE}.tokens"]
    assert tokens.shape == (8,)


@require_package("transformers")
@patch("lerobot.processor.tokenizer_processor.AutoTokenizer")
def test_very_long_task(mock_auto_tokenizer):
    """Test handling of very long task strings."""
    mock_tokenizer = MockTokenizer(vocab_size=100)
    mock_auto_tokenizer.from_pretrained.return_value = mock_tokenizer

    processor = TokenizerProcessor(tokenizer_name="test-tokenizer", max_length=5, truncation=True)

    long_task = " ".join(["word"] * 100)  # Very long task
    transition = create_transition(complementary_data={"task": long_task})

    result = processor(transition)

    # Should be truncated to max_length
    observation = result[TransitionKey.OBSERVATION]
    tokens = observation[f"{OBS_LANGUAGE}.tokens"]
    attention_mask = observation[f"{OBS_LANGUAGE}.attention_mask"]
    assert tokens.shape == (5,)
    assert attention_mask.shape == (5,)


@require_package("transformers")
@patch("lerobot.processor.tokenizer_processor.AutoTokenizer")
def test_custom_padding_side(mock_auto_tokenizer):
    """Test using custom padding_side parameter."""

    # Create a mock tokenizer that tracks padding_side calls
    class PaddingSideTrackingTokenizer:
        def __init__(self):
            self.padding_side_calls = []

        def __call__(
            self,
            text,
            max_length=512,
            truncation=True,
            padding="max_length",
            padding_side="right",
            return_tensors="pt",
            **kwargs,
        ):
            self.padding_side_calls.append(padding_side)
            # Return minimal valid output
            return {
                "input_ids": torch.zeros(max_length, dtype=torch.long),
                "attention_mask": torch.ones(max_length, dtype=torch.long),
            }

    tracking_tokenizer = PaddingSideTrackingTokenizer()
    mock_auto_tokenizer.from_pretrained.return_value = tracking_tokenizer

    # Test left padding
    processor_left = TokenizerProcessor(tokenizer_name="test-tokenizer", max_length=10, padding_side="left")

    transition = create_transition(complementary_data={"task": "test task"})
    processor_left(transition)

    assert tracking_tokenizer.padding_side_calls[-1] == "left"

    # Test right padding (default)
    processor_right = TokenizerProcessor(tokenizer_name="test-tokenizer", max_length=10, padding_side="right")

    processor_right(transition)

    assert tracking_tokenizer.padding_side_calls[-1] == "right"