docs: add evaluation guide and update benchmarks doc

- New docs/source/evaluation.mdx covering lerobot-eval usage, batch_size
  auto-tuning, AsyncVectorEnv performance, tuning tips, output format,
  multi-task evaluation, and programmatic usage.
- Add evaluation page to _toctree.yml under Benchmarks section.
- Update adding_benchmarks.mdx to reference batch_size auto default and
  link to the evaluation guide.
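The guide's CLI entry point might look like the following sketch. Flag names here are assumptions modeled on LeRobot's dotted-key CLI style and are not taken from this commit; the new docs/source/evaluation.mdx is authoritative.

```shell
# Hypothetical lerobot-eval invocation (flag names are assumptions, not
# confirmed by this commit). Per the commit message, batch_size defaults
# to an auto-tuned value when not set explicitly.
lerobot-eval \
    --policy.path=lerobot/diffusion_pusht \
    --env.type=pusht \
    --eval.n_episodes=10 \
    --eval.batch_size=10
```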

Made-with: Cursor
commit 2c32c04cca
parent 5ec6119542
Author: Pepijn Kooijmans
Committed-by: Pepijn
Date: 2026-04-07 13:57:24 +02:00

3 changed files with 174 additions and 2 deletions
@@ -73,6 +73,8 @@
       title: Control & Train Robots in Sim (LeIsaac)
   title: "Simulation"
 - sections:
+  - local: evaluation
+    title: Evaluation (lerobot-eval)
   - local: adding_benchmarks
     title: Adding a New Benchmark
   - local: libero