mirror of https://github.com/huggingface/lerobot.git (synced 2026-05-17 09:39:47 +00:00)
feat(eval): implement docker runtime with HTTP policy inference server

Add docker_runtime.py (host-side) and lerobot_eval_worker.py (container-side) for --eval.runtime=docker. The policy loads once on the host GPU; Docker containers run env-only workers that call back via HTTP for action chunks, maximising GPU utilisation across parallel benchmark tasks.

- _InferenceServer: HTTP server wrapping predict_action_chunk with a single lock
- run_eval_in_docker: spawns instance_count containers, collects + merges per-task JSON, writes eval_info.json compatible with _aggregate_eval_from_per_task
- lerobot-eval-worker CLI: make_env → shard tasks → run episodes → write JSON
- EvalDockerConfig: add port field (default 50051)
- pyproject.toml: add lerobot-eval-worker entry point

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
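The single-lock inference server described above can be sketched roughly as follows. This is a minimal illustration, not the actual _InferenceServer from this commit: the class and handler names here are hypothetical, and the stand-in policy callable replaces the real predict_action_chunk. The key idea from the commit message is preserved: many container workers POST observations concurrently, and a lock serialises access to the one policy instance on the host GPU.

```python
# Hypothetical sketch of a single-lock HTTP inference server.
# Names (InferenceHandler, serve) are illustrative; the stand-in
# policy lambda replaces the real predict_action_chunk.
import json
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer


class InferenceHandler(BaseHTTPRequestHandler):
    # Shared across all requests: the policy callable and the lock guarding it.
    policy = staticmethod(lambda obs: [0.0] * 7)  # stand-in for predict_action_chunk
    lock = threading.Lock()

    def do_POST(self):
        # Read the JSON-encoded observation sent by a container worker.
        length = int(self.headers["Content-Length"])
        obs = json.loads(self.rfile.read(length))
        # Serialise inference: one request at a time touches the GPU policy.
        with self.lock:
            chunk = self.policy(obs)
        body = json.dumps({"actions": chunk}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass


def serve(port: int = 50051) -> ThreadingHTTPServer:
    """Start the server on a background thread and return it (port 0 = ephemeral)."""
    server = ThreadingHTTPServer(("0.0.0.0", port), InferenceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

ThreadingHTTPServer accepts connections concurrently, so slow workers don't block each other on I/O; only the inference call itself is serialised, which matches the "single lock" design noted in the commit message.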
@@ -247,6 +247,7 @@ lerobot-replay="lerobot.scripts.lerobot_replay:main"
 lerobot-setup-motors="lerobot.scripts.lerobot_setup_motors:main"
 lerobot-teleoperate="lerobot.scripts.lerobot_teleoperate:main"
 lerobot-eval="lerobot.scripts.lerobot_eval:main"
+lerobot-eval-worker="lerobot.scripts.lerobot_eval_worker:main"
 lerobot-train="lerobot.scripts.lerobot_train:main"
 lerobot-train-tokenizer="lerobot.scripts.lerobot_train_tokenizer:main"
 lerobot-dataset-viz="lerobot.scripts.lerobot_dataset_viz:main"