Cheng Yin 9db9c35cb4 fix(config): add lora_alpha to PeftConfig (#3573)
* fix(config): add lora_alpha to PeftConfig

PeftConfig was missing the lora_alpha field, causing the PEFT library
to default to alpha=8 regardless of the LoRA rank, which dampens the
adaptation signal for high-rank adapters (e.g., r=128).

This adds lora_alpha: int | None = None to PeftConfig, allowing users
to specify --peft.lora_alpha <value> on the CLI.

Closes #3551
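For clarity, a minimal sketch of the shape of the change. It assumes a dataclass-style PeftConfig that is forwarded to peft.LoraConfig; the helper name, the other fields, and their defaults are illustrative, and only the new `lora_alpha: int | None = None` field is what this PR adds:

```python
from dataclasses import dataclass

from peft import LoraConfig


@dataclass
class PeftConfig:
    # Existing, illustrative fields (not the actual lerobot definition).
    r: int = 8                 # LoRA rank
    lora_dropout: float = 0.0
    # New field: None means "let PEFT use its own default", which is alpha=8
    # regardless of r -- the dampening behaviour described above.
    lora_alpha: int | None = None


def to_peft_lora_config(cfg: PeftConfig) -> LoraConfig:
    """Hypothetical adapter: only forward lora_alpha when the user set it."""
    kwargs = {"r": cfg.r, "lora_dropout": cfg.lora_dropout}
    if cfg.lora_alpha is not None:
        kwargs["lora_alpha"] = cfg.lora_alpha  # otherwise PEFT falls back to 8
    return LoraConfig(**kwargs)
```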

* fix(docs): add lora_alpha to peft training example + clarify scaling formula

- Add --peft.lora_alpha=64 to docs/source/peft_training.mdx example to
  prevent new users from hitting the alpha=8 default dampening bug
- Clarify lora_alpha comment in default.py with scaling = lora_alpha / r
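
For concreteness, here is the scaling arithmetic that comment refers to, using the values quoted in this PR (r=128 as the high-rank case, lora_alpha=64 as added to the docs example); pairing them in one call is purely illustrative:

```python
def lora_scaling(lora_alpha: int, r: int) -> float:
    # LoRA scales the low-rank update by lora_alpha / r before adding it to
    # the frozen layer's output.
    return lora_alpha / r


# Implicit PEFT default (alpha=8) with a high-rank adapter:
print(lora_scaling(8, 128))   # 0.0625 -> adaptation signal strongly dampened
# With lora_alpha set explicitly, e.g. the value now shown in the docs:
print(lora_scaling(64, 128))  # 0.5
```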

* docs: mention both --peft.r and --peft.lora_alpha in LoRA description

---------

Co-authored-by: Cheng Yin <yin@users.noreply.github.com>
2026-05-13 11:09:19 +02:00