9db9c35cb4
* fix(config): add lora_alpha to PeftConfig

  PeftConfig was missing the lora_alpha field, causing the PEFT library to
  default to alpha=8 regardless of the LoRA rank, which dampens the
  adaptation signal for high-rank adapters (e.g., r=128). This adds
  lora_alpha: int | None = None to PeftConfig, allowing users to specify
  --peft.lora_alpha <value> on the CLI.

  Closes #3551

* fix(docs): add lora_alpha to peft training example + clarify scaling formula

  - Add --peft.lora_alpha=64 to the docs/source/peft_training.mdx example to
    prevent new users from hitting the alpha=8 default dampening bug
  - Clarify the lora_alpha comment in default.py with scaling = lora_alpha / r

* docs: mention both --peft.r and --peft.lora_alpha in LoRA description

---------

Co-authored-by: Cheng Yin <yin@users.noreply.github.com>
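For context, a minimal sketch of the fix, not lerobot's actual code: a
PeftConfig-style dataclass with the new lora_alpha field forwarded to
peft.LoraConfig. Only r and lora_alpha mirror the fields named in this
commit; the helper name and everything else are illustrative. LoRA scales
the adapter update by lora_alpha / r, so the library's alpha=8 default
yields a scaling of 8 / 128 = 0.0625 at r=128, while an explicitly matched
alpha restores a scaling of 1.0.

```python
from dataclasses import dataclass

from peft import LoraConfig


@dataclass
class PeftConfig:
    r: int = 16
    # None falls through to the PEFT library default (alpha=8). At r=128
    # that gives scaling = 8 / 128 = 0.0625, the dampening this fix avoids.
    lora_alpha: int | None = None


def to_lora_config(cfg: PeftConfig) -> LoraConfig:
    """Forward only explicitly-set fields so PEFT's defaults apply otherwise."""
    kwargs: dict = {"r": cfg.r}
    if cfg.lora_alpha is not None:
        kwargs["lora_alpha"] = cfg.lora_alpha
    return LoraConfig(**kwargs)


# scaling = lora_alpha / r, so matching alpha to the rank keeps it at 1.0.
cfg = PeftConfig(r=64, lora_alpha=64)
print(to_lora_config(cfg).lora_alpha / cfg.r)  # 1.0
```

This is why the docs example now passes --peft.lora_alpha=64 explicitly:
without it, new users would silently inherit PEFT's alpha=8 default
regardless of the rank they chose.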