mirror of
https://github.com/huggingface/lerobot.git
synced 2026-05-16 09:09:48 +00:00
e670ac5daf
* Add basic support for PEFT adapter methods

  This change adds support for training policies with far fewer trainable parameters by applying adapter methods such as LoRA to specific parts of the policies, which in turn allows higher learning rates / batch sizes. To make this as accessible as possible, I thought it useful to provide defaults for `target_modules` and `modules_to_save`. Currently only SmolVLA has such defaults, but once we agree that this change is useful I will set out to generate more. While the user can override these settings, they are expected to only change the peft_method, rank and init_type parameters.

* Implement loading of PEFT adapters

  Loading a PEFT adapter is currently done by initializing a policy with the default config and then applying the adapter to the resulting model. This has the obvious drawback that any configuration done during training is not applied in the adapted model. Currently the `use_peft` attribute of `PreTrainedConfig` is only set during loading, to signal to the following code that it has to deal with a PEFT adapter. However, we could imagine a scenario where this is already set at training time and stored alongside the adapter.

* Store policy config alongside PEFT checkpoint

  Before this change, the PEFT-wrapped policy did not save the policy's config alongside the adapter config / weights, which prevented us from changing the policy config. Now the policy config is saved in both full training and PEFT training. This change makes loading the PEFT policy adapter much easier as well.

* Add default config for ACT
* Support targets like `all-linear`
* Formatting
* [pre-commit.ci] auto fixes from pre-commit.com hooks

  for more information, see https://pre-commit.ci

* Fix failing tests
* Remove PEFT compatibility changes in config

  We'll wait for the PEFT release that fixes this for good.

* Remove `use_peft` parameter from training script

  Instead we make the PEFT config optional, which has the same effect.

* Log adapter config to WandB
* Better documentation for CLI arguments
* Don't unload & merge the PEFT model

  This can make things hard when using quantized layers (the user may expect quantized base layers with unquantized adapters, for example; merging defaults to upcasting the layers, leading to higher memory use).

* Correct way of identifying when to save config
* Add CLI end-to-end tests

  Currently there doesn't seem to be any way to test the CLI commands. Since this change mostly happens in those, I thought it best to add a way to test these commands end-to-end. More integrated commands like `lerobot-record` need patching, but standalone commands like training seem to work fine.

* Update default targets

  Removed ACT since it doesn't make sense to fine-tune ACT without having pretrained it beforehand. SmolVLA and Pi0/0.5 are much more sensible targets.

* Clean up loading code

  - Centralized instantiation of the PEFT wrapper in `make_policy` for inference (e.g. in `lerobot-record`)
  - Training a PEFT policy also sets `cfg.use_peft` so that all inference code loading the policy can rely on that attribute to identify whether PEFT loading is needed
  - Modified the RTC example to also include PEFT policies, mostly because this is an example I'm currently exploring

* Make sure push_to_hub works

  Since PEFT only wraps `push_to_hub` and not `push_model_to_hub`, the reference to `self` in `policy.push_model_to_hub` is the unwrapped policy, which, of course, doesn't know anything about PEFT. To make the upload process aware of PEFT, we pass the unwrapped policy down to `push_model_to_hub` as a kwarg. This is not ideal, but I think it is the best way for now.

* formatting
* Warn when encountering from-scratch training
* Revamp pretrained model loading

  There were quite a few factors that convinced me that the status quo was able to load pretrained models from the PEFT adapter config, but in fact that didn't work.

  This commit fixes the following things:

  - policies wrapped in PEFT will now have a `name_or_path` attribute containing the name or path of the pretrained model we're fine-tuning
  - we further assume that SmolVLA without `pretrained_path` and with `load_vlm_weights == False` must be a user-side error
  - we assume that using PEFT on from-scratch policies must be a user-side error

* Make it possible to unset policy features

  This is necessary to train pre-trained policies on new datasets, so that the features are inferred from the new dataset and not from the pretrained policy.

* Use correct loading for PEFT in RTC example
* Make it possible to use PeftModels in eval
* Add test checking that PEFT actually reduces params
* Adapt state/action projections instead of full fine-tuning

  There doesn't seem to be a benefit to fully fine-tuning these layers over just adapting them, so we do that instead.

* Disallow PEFT training on non-pretrained policies

  At first I thought it would make sense to have this feature in case you want to fine-tune a pre-trained section, but in the end it makes more trouble than it's worth. It's still possible to allow this in the future when a concrete need arises.

* Add basic documentation
* Formatting
* Add peft as extra dependency, mark tests

  Fast tests currently fail because of the missing dependency.

* Fix pre-commit issues
* Add walx <> peft conflict for uv
* Exclude peft from pi install for now

---------

Co-authored-by: nemo <git@ningu.net>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Pepijn <138571049+pkooij@users.noreply.github.com>
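For context, a minimal NumPy sketch (not lerobot code; the dimensions and rank are illustrative assumptions) of the LoRA idea this change builds on: instead of updating a full `d_out x d_in` weight matrix, a low-rank update `B @ A` with rank `r << min(d_out, d_in)` is trained, which is why PEFT training involves far fewer trainable parameters.

```python
import numpy as np

# Illustrative sizes only; real policies have many such layers.
d_in, d_out, r = 512, 512, 8
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable LoRA factor
B = np.zeros((d_out, r))               # trainable LoRA factor (initialized to zero)

x = rng.normal(size=(d_in,))
y = W @ x + B @ (A @ x)                # adapted forward pass: W x + B A x

full_params = W.size                   # parameters a full fine-tune would update
lora_params = A.size + B.size          # parameters LoRA actually trains
print(lora_params, full_params)        # 8192 trainable vs. 262144 frozen
```

Because `B` starts at zero, the adapted model initially behaves exactly like the pretrained one; training then only moves the low-rank factors, which is also what makes higher learning rates viable.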
230 lines
7.5 KiB
Python
import importlib
import os
from unittest.mock import MagicMock, patch

import pytest
from safetensors.torch import load_file

from .utils import require_package


def run_command(cmd, module, args):
    module = importlib.import_module(f"lerobot.scripts.{module}")
    with patch("sys.argv", [cmd] + args):
        module.main()


def lerobot_train(args):
    return run_command(cmd="lerobot-train", module="lerobot_train", args=args)


def lerobot_record(args):
    return run_command(cmd="lerobot-record", module="lerobot_record", args=args)


def resolve_model_id_for_peft_training(policy_type):
    """PEFT training needs a pretrained model; this resolves the pretrained model id for a given policy type."""
    if policy_type == "smolvla":
        return "lerobot/smolvla_base"

    raise ValueError(f"No pretrained model known for {policy_type}. PEFT training will not work.")


@pytest.mark.parametrize("policy_type", ["smolvla"])
@require_package("peft")
def test_peft_training_push_to_hub_works(policy_type, tmp_path):
    """Ensure that push_to_hub stores only the PEFT adapter, not the full model weights."""
    output_dir = tmp_path / f"output_{policy_type}"
    upload_folder_contents = set()

    model_id = resolve_model_id_for_peft_training(policy_type)

    def mock_upload_folder(*args, **kwargs):
        folder_path = kwargs["folder_path"]
        # We include more than is actually uploaded since we ignore the
        # {allow,ignore}_patterns arguments of upload_folder().
        upload_folder_contents.update(os.listdir(folder_path))
        return MagicMock()

    with (
        patch("huggingface_hub.HfApi.create_repo"),
        patch("huggingface_hub.HfApi.upload_folder", mock_upload_folder),
    ):
        lerobot_train(
            [
                f"--policy.path={model_id}",
                "--policy.push_to_hub=true",
                "--policy.repo_id=foo/bar",
                "--policy.input_features=null",
                "--policy.output_features=null",
                "--peft.method=LORA",
                "--dataset.repo_id=lerobot/pusht",
                "--dataset.episodes=[0, 1]",
                "--steps=1",
                f"--output_dir={output_dir}",
            ]
        )

    assert "adapter_model.safetensors" in upload_folder_contents
    assert "config.json" in upload_folder_contents
    assert "adapter_config.json" in upload_folder_contents


@pytest.mark.parametrize("policy_type", ["smolvla"])
@require_package("peft")
def test_peft_training_works(policy_type, tmp_path):
    """Check that the standard case of fine-tuning a (partially) pre-trained policy with PEFT works."""
    output_dir = tmp_path / f"output_{policy_type}"
    model_id = resolve_model_id_for_peft_training(policy_type)

    lerobot_train(
        [
            f"--policy.path={model_id}",
            "--policy.push_to_hub=false",
            "--policy.input_features=null",
            "--policy.output_features=null",
            "--peft.method=LORA",
            "--dataset.repo_id=lerobot/pusht",
            "--dataset.episodes=[0, 1]",
            "--steps=1",
            f"--output_dir={output_dir}",
        ]
    )

    policy_dir = output_dir / "checkpoints" / "last" / "pretrained_model"

    for file in ["adapter_config.json", "adapter_model.safetensors", "config.json"]:
        assert (policy_dir / file).exists()

    # This is the default case where we fine-tune a pre-trained policy on new data.
    # We assume that we target policy-specific modules but fully fine-tune the action
    # and state projections, so these must be part of the trained state dict.
    state_dict = load_file(policy_dir / "adapter_model.safetensors")

    adapted_keys = [
        "state_proj",
        "action_in_proj",
        "action_out_proj",
        "action_time_mlp_in",
        "action_time_mlp_out",
    ]

    found_keys = [
        module_key
        for module_key in adapted_keys
        for state_dict_key in state_dict
        if f".{module_key}." in state_dict_key
    ]

    assert set(found_keys) == set(adapted_keys)


@pytest.mark.parametrize("policy_type", ["smolvla"])
@require_package("peft")
def test_peft_training_params_are_fewer(policy_type, tmp_path):
    """Check that PEFT training leaves fewer trainable parameters than full fine-tuning."""
    output_dir = tmp_path / f"output_{policy_type}"
    model_id = resolve_model_id_for_peft_training(policy_type)

    def dummy_update_policy(
        train_metrics, policy, batch, optimizer, grad_clip_norm: float, accelerator, **kwargs
    ):
        params_total = sum(p.numel() for p in policy.parameters())
        params_trainable = sum(p.numel() for p in policy.parameters() if p.requires_grad)

        assert params_total > params_trainable

        return train_metrics, {}

    with patch("lerobot.scripts.lerobot_train.update_policy", dummy_update_policy):
        lerobot_train(
            [
                f"--policy.path={model_id}",
                "--policy.push_to_hub=false",
                "--policy.input_features=null",
                "--policy.output_features=null",
                "--peft.method=LORA",
                "--dataset.repo_id=lerobot/pusht",
                "--dataset.episodes=[0, 1]",
                "--steps=1",
                f"--output_dir={output_dir}",
            ]
        )


class DummyRobot:
    name = "dummy"
    cameras = []
    action_features = {"foo": 1.0, "bar": 2.0}
    observation_features = {"obs1": 1.0, "obs2": 2.0}
    is_connected = True

    def connect(self, *args):
        pass

    def disconnect(self):
        pass


def dummy_make_robot_from_config(*args, **kwargs):
    return DummyRobot()


@pytest.mark.parametrize("policy_type", ["smolvla"])
@require_package("peft")
def test_peft_record_loads_policy(policy_type, tmp_path):
    """Train a policy with PEFT and attempt to load it with `lerobot-record`."""
    from peft import PeftModel

    output_dir = tmp_path / f"output_{policy_type}"
    model_id = resolve_model_id_for_peft_training(policy_type)

    lerobot_train(
        [
            f"--policy.path={model_id}",
            "--policy.push_to_hub=false",
            "--policy.input_features=null",
            "--policy.output_features=null",
            "--peft.method=LORA",
            "--dataset.repo_id=lerobot/pusht",
            "--dataset.episodes=[0, 1]",
            "--steps=1",
            f"--output_dir={output_dir}",
        ]
    )

    policy_dir = output_dir / "checkpoints" / "last" / "pretrained_model"
    dataset_dir = tmp_path / "eval_pusht"
    single_task = "move the table"
    loaded_policy = None

    def dummy_record_loop(*args, **kwargs):
        nonlocal loaded_policy

        if "dataset" not in kwargs:
            return

        dataset = kwargs["dataset"]
        dataset.add_frame({"task": single_task})
        loaded_policy = kwargs["policy"]

    with (
        patch("lerobot.robots.make_robot_from_config", dummy_make_robot_from_config),
        # Disable the record loop since we're only interested in successful loading of the policy.
        patch("lerobot.scripts.lerobot_record.record_loop", dummy_record_loop),
        # Disable speech output.
        patch("lerobot.utils.utils.say"),
    ):
        lerobot_record(
            [
                f"--policy.path={policy_dir}",
                "--robot.type=so101_follower",
                "--robot.port=/dev/null",
                "--dataset.repo_id=lerobot/eval_pusht",
                f'--dataset.single_task="{single_task}"',
                f"--dataset.root={dataset_dir}",
                "--dataset.push_to_hub=false",
            ]
        )

    assert isinstance(loaded_policy, PeftModel)