note origins of each training objective

Bryson Jones
2025-12-11 09:55:41 -08:00
parent 8e3a1e8945
commit 1f74982469
+5 -1
@@ -187,7 +187,11 @@ The vision encoder uses a separate learning rate multiplier, where 1/10th is sug
#### 1. Flow Matching with Beta Sampling
Consider switching to flow matching with a beta sampling distribution for potentially improved performance.
The original diffusion implementation here is based on the work described in [TRI's LBM paper](https://arxiv.org/abs/2507.05331).
Additionally, we have implemented a flow-matching objective, which is described at a high level in this [Boston Dynamics blog post](https://bostondynamics.com/blog/large-behavior-models-atlas-find-new-footing/).
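As a rough illustration of what a flow-matching objective with beta-sampled timesteps can look like, here is a minimal sketch; the function, tensor names, and beta parameters (`flow_matching_loss`, `model`, `actions`, `obs_emb`, `alpha`, `beta`) are hypothetical placeholders, not the actual implementation in this repo:
```python
import torch
import torch.nn.functional as F

def flow_matching_loss(model, actions, obs_emb, alpha=1.5, beta=1.0):
    """Sketch of a flow-matching loss with beta-distributed timesteps.

    `model`, `actions`, and `obs_emb` are placeholders; the real policy
    may use different names, shapes, and conditioning.
    """
    batch = actions.shape[0]
    # Sample timesteps from a Beta distribution instead of Uniform(0, 1);
    # alpha/beta control how strongly the schedule favors low vs. high t.
    t = torch.distributions.Beta(alpha, beta).sample((batch,)).to(actions.device)
    t_exp = t.view(batch, *([1] * (actions.dim() - 1)))

    noise = torch.randn_like(actions)
    # Linear interpolation between noise (t=0) and data (t=1).
    x_t = (1 - t_exp) * noise + t_exp * actions
    # For this linear path, the target velocity field is (data - noise).
    target_velocity = actions - noise

    pred_velocity = model(x_t, t, obs_emb)
    return F.mse_loss(pred_velocity, target_velocity)
```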
Consider testing the flow-matching objective and evaluating performance differences for your task:
```bash
--policy.objective=flow_matching \