Mirror of https://github.com/huggingface/lerobot.git (synced 2026-05-11).
This file provides guidance to AI agents when working with code in this repository.
User-facing help → `AGENT_GUIDE.md` (SO-101 setup, recording, picking a policy, training duration, eval — with copy-pasteable commands).
## Project Overview
LeRobot is a PyTorch-based library for real-world robotics, providing datasets, pretrained policies, and tools for training, evaluation, data collection, and robot control. It integrates with Hugging Face Hub for model/dataset sharing.
## Tech Stack
Python 3.12+ · PyTorch · Hugging Face (datasets, Hub, accelerate) · draccus (config/CLI) · Gymnasium (envs) · uv (package management)
## Development Setup

```shell
uv sync --locked                            # Base dependencies
uv sync --locked --extra test --extra dev   # Test + dev tools
uv sync --locked --extra all                # Everything
git lfs install && git lfs pull             # Test artifacts
```
## Key Commands

```shell
uv run pytest tests -svv --maxfail=10   # All tests
DEVICE=cuda make test-end-to-end        # All E2E tests
pre-commit run --all-files              # Lint + format (ruff, typos, bandit, etc.)
```
## Architecture (`src/lerobot/`)

- `scripts/` — CLI entry points (`lerobot-train`, `lerobot-eval`, `lerobot-record`, etc.), mapped in `pyproject.toml [project.scripts]`.
- `configs/` — Dataclass configs parsed by draccus. `train.py` has `TrainPipelineConfig` (top-level); `policies.py` has the `PreTrainedConfig` base. Polymorphism via `draccus.ChoiceRegistry` with `@register_subclass("name")` decorators.
- `policies/` — Each policy in its own subdir. All inherit `PreTrainedPolicy` (`nn.Module` + `HubMixin`) from `pretrained.py`. Factory with lazy imports in `factory.py`.
- `processor/` — Data transformation pipeline. `ProcessorStep` base with registry. `DataProcessorPipeline`/`PolicyProcessorPipeline` chain steps.
- `datasets/` — `LeRobotDataset` (episode-aware sampling + video decoding) and `LeRobotDatasetMetadata`.
- `envs/` — `EnvConfig` base in `configs.py`, factory in `factory.py`. Each env subclass defines `gym_kwargs` and `create_envs()`.
- `robots/`, `motors/`, `cameras/`, `teleoperators/` — Hardware abstraction layers.
- `types.py` and `configs/types.py` — Core type aliases and feature type definitions.
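The registry-based polymorphism described above can be sketched in plain Python. This is an illustrative stand-in, not LeRobot's actual class hierarchy: the class names, field names, and `create` helper are hypothetical, and the real implementation delegates to `draccus.ChoiceRegistry` rather than a hand-rolled dict.

```python
from dataclasses import dataclass
from typing import ClassVar, Dict, Type


@dataclass
class PolicyConfig:
    """Hypothetical base config; stands in for draccus.ChoiceRegistry."""

    # Class-level registry mapping a string name to a config subclass,
    # so a CLI choice like `--policy.type=act` can resolve to a class.
    _registry: ClassVar[Dict[str, Type["PolicyConfig"]]] = {}

    @classmethod
    def register_subclass(cls, name: str):
        def wrap(subcls):
            cls._registry[name] = subcls
            return subcls
        return wrap

    @classmethod
    def create(cls, name: str, **kwargs) -> "PolicyConfig":
        # Look up the registered subclass and instantiate it.
        return cls._registry[name](**kwargs)


@PolicyConfig.register_subclass("act")
@dataclass
class ACTConfig(PolicyConfig):
    chunk_size: int = 100


cfg = PolicyConfig.create("act", chunk_size=50)
```

The point of the pattern is that the base class never imports its subclasses; each subclass registers itself when its module loads, which is what makes the lazy-import factories in `factory.py` possible.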
Repository Structure (outside src/)
tests/— Pytest suite organized by module. Fixtures intests/fixtures/, mocks intests/mocks/. Hardware tests use skip decorators fromtests/utils.py. E2E tests viaMakefilewrite totests/outputs/..github/workflows/— CI:quality.yml(pre-commit),fast_tests.yml(base deps, every PR),full_tests.yml(all extras + E2E + GPU, post-approval),latest_deps_tests.yml(daily lockfile upgrade),security.yml(TruffleHog),release.yml(PyPI publish on tags).docs/source/— HF documentation (.mdxfiles). Per-policy READMEs, hardware guides, tutorials. Built separately viadocs-requirements.txtand CI workflows.examples/— End-user tutorials and scripts organized by use case (dataset creation, training, hardware setup).docker/— Dockerfiles for user (Dockerfile.user) and CI (Dockerfile.internal).benchmarks/— Performance benchmarking scripts.- Root files:
pyproject.toml(single source of truth for deps, build, tool config),Makefile(E2E test targets),uv.lock,CONTRIBUTING.md&README.md(general information).
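The hardware skip-decorator convention can be sketched as follows. This is purely illustrative: the function name, marker name, and environment variable below are hypothetical, not the actual contents of `tests/utils.py`.

```python
import os

import pytest


def hardware_available(env_var: str = "EXAMPLE_ROBOT_ATTACHED") -> bool:
    # Hypothetical opt-in convention: set the env var to "1" on machines
    # that actually have the robot plugged in.
    return os.environ.get(env_var, "") == "1"


# Reusable marker: tests decorated with it are skipped on CI runners
# and developer machines without the hardware.
require_robot = pytest.mark.skipif(
    not hardware_available(), reason="robot hardware not attached"
)


@require_robot
def test_motor_handshake():
    ...  # would talk to the real motor bus
```

Centralizing the condition in one helper keeps the skip logic consistent across the suite and lets CI run everything else without special-casing hardware tests.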
## Notes

- Mypy is gradual: strict only for `lerobot.envs`, `lerobot.configs`, `lerobot.optim`, `lerobot.model`, `lerobot.cameras`, `lerobot.motors`, `lerobot.transport`. Add type annotations when modifying these modules.
- Optional dependencies: many policies, envs, and robots are behind extras (e.g., `lerobot[aloha]`). New imports for optional packages must be guarded or lazy. See `pyproject.toml [project.optional-dependencies]`.
- Video decoding: datasets can store observations as video files. `LeRobotDataset` handles frame extraction, but tests need ffmpeg installed.
- Prefer `uv run` for executing Python commands (not raw `python` or `pip`).
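The guarded/lazy import rule for optional extras can be sketched like this. The module and function names are illustrative (not a statement of LeRobot's actual imports); only the two patterns matter.

```python
import importlib
import importlib.util


def is_package_available(name: str) -> bool:
    # find_spec returns None (without importing) when the package is absent.
    return importlib.util.find_spec(name) is not None


# Pattern 1: guard at module level, so importing this file never fails.
if is_package_available("gym_aloha"):  # hypothetical optional extra
    ...  # safe to import and register the env here

# Pattern 2: import lazily inside the factory, so `import lerobot`-style
# top-level imports work even when the extra is not installed.
def make_aloha_env():
    if not is_package_available("gym_aloha"):
        raise ImportError("Missing extra, try: pip install 'lerobot[aloha]'")
    gym_aloha = importlib.import_module("gym_aloha")
    return gym_aloha
```

Either pattern keeps the base install importable; the error surfaces only when the user actually requests the optional feature, with a message pointing at the right extra.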