Mirror of https://github.com/huggingface/lerobot.git, synced 2026-05-15 16:49:55 +00:00
e670ac5daf
* Add basic support for PEFT adapter methods. This change adds support for training policies with far fewer parameters by applying adapter methods such as LoRA to specific parts of the policies, therefore possibly allowing higher learning rates / batch sizes. To make this as accessible as possible I thought it useful to provide defaults for `target_modules` and `modules_to_save`. Currently only SmolVLA has such defaults, but once we agree that this change is useful I will set out to generate more such defaults. While the user can override these settings, they are expected to only change the peft_method, rank and init_type parameters.
* Implement loading of PEFT adapters. Loading a PEFT adapter is currently done by initializing a policy with a default config and then applying the adapter on the resulting model. This has the obvious drawback that any configuration done during training is not applied to the adapted model. Currently the `use_peft` attribute of `PreTrainedConfig` is only set during loading, to signal to the following code that it has to deal with a PEFT adapter. However, we could imagine a scenario where this is already set at training time and stored alongside the adapter.
* Store policy config alongside PEFT checkpoint. Before this change the PEFT-wrapped policy did not save the policy's config alongside the adapter config / weights, which prevented us from changing the policy config. Now the policy config is saved both in full training and PEFT training. This change makes loading the PEFT policy adapter much easier as well.
* Add default config for ACT
* Support targets like `all-linear`
* Formatting
* [pre-commit.ci] auto fixes from pre-commit.com hooks; for more information, see https://pre-commit.ci
* Fix failing tests
* Remove PEFT compatibility changes in config. We'll wait for the PEFT release that fixes this for good.
* Remove `use_peft` parameter from training script. Instead we make the PEFT config optional, which has the same effect.
* Log adapter config to WandB
* Better documentation for CLI arguments
* Don't unload & merge the PEFT model. This can make things hard when using quantized layers (the user may expect quantized base layers with unquantized adapters, for example; merging defaults to upcasting the layers, leading to higher memory use).
* Correct way of identifying when to save config
* Add CLI end-to-end tests. Currently there doesn't seem to be any way to test the CLI commands. Since this change mostly happens in those, I thought it best to add a way to test these commands end-to-end. More integrated commands like `lerobot-record` need patching, but standalone commands like training seem to work fine.
* Update default targets. Removed ACT since it doesn't make sense to fine-tune ACT without having it pretrained beforehand. SmolVLA and Pi0/0.5 are much more sensible targets.
* Clean up loading code:
  - Centralized instantiation of the PEFT wrapper in `make_policy` for inference (e.g. in `lerobot-record`).
  - Training a PEFT policy also sets `cfg.use_peft` so that all inference code loading the policy can rely on that attribute to identify whether PEFT loading is needed.
  - Modified the RTC example to also include PEFT policies, mostly because this is an example I'm currently exploring.
* Make sure push_to_hub works. Since PEFT only wraps `push_to_hub` and not `push_model_to_hub`, the reference to `self` in `policy.push_model_to_hub` is the unwrapped policy which, of course, doesn't know anything about PEFT. To make the upload process aware of PEFT, we pass the unwrapped policy down to `push_model_to_hub` as a kwarg. This is not ideal, but I think it is the best way for now.
* formatting
* Warn when encountering from-scratch training
* Revamp pretrained model loading. There were quite a few factors that convinced me that the status quo was able to load pretrained models from the PEFT adapter config, but in fact that didn't work. This commit fixes the following things:
  - policies wrapped in PEFT will now have a `name_or_path` attribute containing the name or path of the pretrained model we're fine-tuning
  - we further assume that SmolVLA without `pretrained_path` and `load_vlm_weights==False` must be a user-side error
  - we assume that using PEFT on from-scratch policies must be a user-side error
* Make it possible to unset policy features. This is necessary to train pre-trained policies on new datasets, so that the features are inferred from the new dataset and not from the pretrained policy.
* Use correct loading for PEFT in RTC example
* Make it possible to use PeftModels in eval
* Add test checking that PEFT actually reduces params
* Adapt state/action projections instead of full fine-tuning. There doesn't seem to be a benefit in fully fine-tuning these layers over just adapting them, so we do that instead.
* Disallow PEFT training on non-pretrained policies. At first I thought it would make sense to have this feature in case you want to fine-tune a pre-trained section, but in the end it makes more trouble than it's worth. It's still possible to allow this in the future when a concrete need arises.
* Add basic documentation
* Formatting
* Add peft as extra dependency, mark tests. Fast tests currently fail because of the missing dependency.
* Fix pre-commit issues
* Add walx <> peft conflict for uv
* Exclude peft from pi install for now

---------

Co-authored-by: nemo <git@ningu.net>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Pepijn <138571049+pkooij@users.noreply.github.com>
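For context, the mechanism behind this kind of adapter wrapping is roughly the following. This is a minimal illustrative sketch using the `peft` library directly, not the code added by this change; `TinyPolicy` and its module names ("backbone", "state_proj") are made-up stand-ins for a real policy and its layers.

import torch.nn as nn
from peft import LoraConfig, get_peft_model


class TinyPolicy(nn.Module):
    """Made-up stand-in for a policy; module names are placeholders."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(256, 256)
        self.state_proj = nn.Linear(32, 256)

    def forward(self, obs, state):
        return self.backbone(obs) + self.state_proj(state)


lora_config = LoraConfig(
    r=16,                            # adapter rank (cf. PeftConfig.r below)
    target_modules=["backbone"],     # module name suffixes, a regex, or "all-linear"
    modules_to_save=["state_proj"],  # fully fine-tuned layers (cf. PeftConfig.full_training_modules below)
)
peft_policy = get_peft_model(TinyPolicy(), lora_config)
peft_policy.print_trainable_parameters()  # only a small fraction of the parameters remains trainable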
98 lines
4.5 KiB
Python
#!/usr/bin/env python

# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass, field

from lerobot.datasets.transforms import ImageTransformsConfig
from lerobot.datasets.video_utils import get_safe_default_codec


@dataclass
class DatasetConfig:
    # You may provide a list of datasets here. `train.py` creates them all and concatenates them. Note: only data
    # keys common between the datasets are kept. Each dataset gets an additional transform that inserts the
    # "dataset_index" into the returned item. The index mapping is made according to the order in which the
    # datasets are provided.
    repo_id: str
    # Root directory where the dataset will be stored (e.g. 'dataset/path').
    root: str | None = None
    # Indices of the episodes to load. Defaults to None, which loads all episodes.
    episodes: list[int] | None = None
    image_transforms: ImageTransformsConfig = field(default_factory=ImageTransformsConfig)
    # Dataset revision (branch, tag or commit hash) to pull from the Hub.
    revision: str | None = None
    # Normalize visual inputs with ImageNet statistics instead of statistics computed on the dataset.
    use_imagenet_stats: bool = True
    # Backend used to decode dataset videos; the default is chosen by `get_safe_default_codec()`.
    video_backend: str = field(default_factory=get_safe_default_codec)
    # Stream episodes from the Hub instead of downloading the full dataset first.
    streaming: bool = False
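    # Illustrative example (not a config field): DatasetConfig(repo_id="lerobot/pusht", episodes=[0, 1, 2])
    # would restrict loading to the first three episodes of the `lerobot/pusht` dataset.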


@dataclass
class WandBConfig:
    enable: bool = False
    # Set to true to disable saving an artifact despite training.save_checkpoint=True
    disable_artifact: bool = False
    project: str = "lerobot"
    entity: str | None = None
    notes: str | None = None
    run_id: str | None = None
    mode: str | None = None  # Allowed values: 'online', 'offline', 'disabled'. Defaults to 'online'.


@dataclass
class EvalConfig:
    n_episodes: int = 50
    # `batch_size` specifies the number of environments to use in a gym.vector.VectorEnv.
    batch_size: int = 50
    # `use_async_envs` specifies whether to use asynchronous environments (multiprocessing).
    use_async_envs: bool = False

    def __post_init__(self) -> None:
        if self.batch_size > self.n_episodes:
            raise ValueError(
                "The eval batch size is greater than the number of eval episodes "
                f"({self.batch_size} > {self.n_episodes}). As a result, {self.batch_size} "
                f"eval environments will be instantiated, but only {self.n_episodes} will be used. "
                "This might significantly slow down evaluation. To fix this, you should update your command "
                f"to increase the number of episodes to match the batch size (e.g. `eval.n_episodes={self.batch_size}`), "
                f"or lower the batch size (e.g. `eval.batch_size={self.n_episodes}`)."
            )


@dataclass
class PeftConfig:
    # PEFT offers many fine-tuning methods, layer adapters being the most common and currently also the most
    # effective methods, so we'll focus on those in this high-level config interface.

    # Either a string (module name suffix or 'all-linear'), a list of module name suffixes, or a regular expression
    # describing module names to target with the configured PEFT method. Some policies have a default value for this
    # so that you don't *have* to choose which layers to adapt, but choosing them yourself might still be worthwhile
    # depending on your case.
    target_modules: list[str] | str | None = None

    # Names/suffixes of modules to fully fine-tune and store alongside adapter weights. Useful for layers that are
    # not part of a pre-trained model (e.g., action state projections). Depending on the policy this defaults to layers
    # that are newly created in pre-trained policies. If you're fine-tuning an already trained policy you might want
    # to set this to `[]`. Corresponds to PEFT's `modules_to_save`.
    full_training_modules: list[str] | None = None

    # The PEFT (adapter) method to apply to the policy. Needs to be a valid PEFT type.
    method_type: str = "LORA"

    # Adapter initialization method. Look at the specific PEFT adapter documentation for defaults.
    init_type: str | None = None

    # We expect that all PEFT adapters do some form of rank decomposition, so this parameter specifies the rank
    # used for the adapter. In general, a higher rank means more trainable parameters and a result closer to full
    # fine-tuning.
    r: int = 16
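# Illustrative mapping (not part of this file): the fields above roughly correspond to arguments of a PEFT
# adapter config. Assuming the LoRA method, a PeftConfig(method_type="LORA", target_modules="all-linear",
# full_training_modules=["state_proj"], r=16) would translate into something like
#
#     from peft import LoraConfig
#     LoraConfig(r=16, target_modules="all-linear", modules_to_save=["state_proj"])
#
# where "state_proj" is a placeholder module name, `full_training_modules` maps to PEFT's `modules_to_save`
# (as noted above), and `init_type`, when set, selects the adapter's weight-initialization scheme
# (e.g. LoraConfig's `init_lora_weights`).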