scale lr decay if we reduce steps

commit a66b50d372 (parent 9950bfd66f)
Author: Pepijn
Date:   2025-10-14 15:59:46 +02:00
5 changed files with 37 additions and 9 deletions
@@ -102,7 +102,6 @@ accelerate launch --num_processes=2 $(which lerobot-train) \
Since the effective batch size increases with multiple GPUs (effective `bs` = `batch_size` × `num_gpus`), you may want to reduce the number of training steps proportionally; if you do, scale the learning-rate decay schedule by the same factor:
#TODO(pepijn): verify this (bs scaling)
```bash
# Example: 2 GPUs with effective batch size 2x larger
# Original: batch_size=8, steps=100000
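# 2 GPUs:   batch_size=8 (per GPU) -> effective bs=16, so halve the steps
#
# Sketch under assumptions: the --batch_size/--steps flags below mirror the
# batch_size/steps settings named above and are not confirmed by this diff.
# If you reduce steps, remember to scale the lr decay horizon to match.
accelerate launch --num_processes=2 $(which lerobot-train) \
  --batch_size=8 \
  --steps=50000
```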