This file provides guidance to AI agents when working with code in this repository.

User-facing help → AGENT_GUIDE.md (SO-101 setup, recording, picking a policy, training duration, eval — with copy-pasteable commands).

Project Overview

LeRobot is a PyTorch-based library for real-world robotics, providing datasets, pretrained policies, and tools for training, evaluation, data collection, and robot control. It integrates with Hugging Face Hub for model/dataset sharing.

Tech Stack

Python 3.12+ · PyTorch · Hugging Face (datasets, Hub, accelerate) · draccus (config/CLI) · Gymnasium (envs) · uv (package management)

Development Setup

uv sync --locked                            # Base dependencies
uv sync --locked --extra test --extra dev   # Test + dev tools
uv sync --locked --extra all                # Everything
git lfs install && git lfs pull             # Test artifacts

Key Commands

uv run pytest tests -svv --maxfail=10        # All tests
DEVICE=cuda make test-end-to-end             # All E2E tests
pre-commit run --all-files                   # Lint + format (ruff, typos, bandit, etc.)

Architecture (src/lerobot/)

  • scripts/ — CLI entry points (lerobot-train, lerobot-eval, lerobot-record, etc.), mapped in pyproject.toml [project.scripts].
  • configs/ — Dataclass configs parsed by draccus. train.py has TrainPipelineConfig (top-level). policies.py has PreTrainedConfig base. Polymorphism via draccus.ChoiceRegistry with @register_subclass("name") decorators.
  • policies/ — Each policy in its own subdir. All inherit PreTrainedPolicy (nn.Module + HubMixin) from pretrained.py. Factory with lazy imports in factory.py.
  • processor/ — Data transformation pipeline. ProcessorStep base with registry. DataProcessorPipeline / PolicyProcessorPipeline chain steps.
  • datasets/ — LeRobotDataset (episode-aware sampling + video decoding) and LeRobotDatasetMetadata.
  • envs/ — EnvConfig base in configs.py, factory in factory.py. Each env subclass defines gym_kwargs and create_envs().
  • robots/, motors/, cameras/, teleoperators/ — Hardware abstraction layers.
  • types.py and configs/types.py — Core type aliases and feature type definitions.
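
The config polymorphism described for configs/ can be sketched with a minimal hand-rolled choice registry. This is a stdlib-only stand-in for the draccus.ChoiceRegistry pattern; the class names (ChoiceRegistrySketch, PreTrainedConfigSketch, ACTConfigSketch) and fields are illustrative, not LeRobot's actual API:

```python
from dataclasses import dataclass


class ChoiceRegistrySketch:
    """Stand-in for draccus.ChoiceRegistry: subclasses register under a
    string name so a config class can be chosen from a CLI flag."""

    _registry: dict = {}

    @classmethod
    def register_subclass(cls, name):
        def decorator(subcls):
            cls._registry[name] = subcls
            return subcls
        return decorator

    @classmethod
    def get_choice(cls, name):
        return cls._registry[name]


@dataclass
class PreTrainedConfigSketch(ChoiceRegistrySketch):
    """Illustrative base config (not LeRobot's real PreTrainedConfig)."""
    device: str = "cpu"


@PreTrainedConfigSketch.register_subclass("act")
@dataclass
class ACTConfigSketch(PreTrainedConfigSketch):
    chunk_size: int = 100


# Selecting a config subclass by name, as draccus does from e.g. --policy.type:
cfg_cls = PreTrainedConfigSketch.get_choice("act")
cfg = cfg_cls(device="cuda")
print(type(cfg).__name__, cfg.chunk_size)  # ACTConfigSketch 100
```

The real mechanism lives in draccus, which also wires the chosen subclass into CLI parsing; this sketch only shows the registration/lookup shape.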

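The processor/ pattern — small registered steps chained into a pipeline — can be sketched as follows. All names here (StepSketch, PipelineSketch, the scale/offset steps) are illustrative, not LeRobot's real ProcessorStep or DataProcessorPipeline API:

```python
# Registry of step classes, keyed by name, as described for processor/.
STEP_REGISTRY: dict = {}


def register_step(name):
    def decorator(cls):
        STEP_REGISTRY[name] = cls
        return cls
    return decorator


class StepSketch:
    """One transformation over a batch (a dict of features)."""
    def __call__(self, batch: dict) -> dict:
        raise NotImplementedError


@register_step("scale")
class ScaleStep(StepSketch):
    def __init__(self, factor: float):
        self.factor = factor

    def __call__(self, batch):
        return {k: v * self.factor for k, v in batch.items()}


@register_step("offset")
class OffsetStep(StepSketch):
    def __init__(self, delta: float):
        self.delta = delta

    def __call__(self, batch):
        return {k: v + self.delta for k, v in batch.items()}


class PipelineSketch:
    """Applies its steps in order, like the pipelines in processor/."""
    def __init__(self, steps):
        self.steps = steps

    def __call__(self, batch):
        for step in self.steps:
            batch = step(batch)
        return batch


pipe = PipelineSketch([STEP_REGISTRY["scale"](2.0), STEP_REGISTRY["offset"](1.0)])
print(pipe({"action": 3.0}))  # -> {'action': 7.0}
```
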
Repository Structure (outside src/)

  • tests/ — Pytest suite organized by module. Fixtures in tests/fixtures/, mocks in tests/mocks/. Hardware tests use skip decorators from tests/utils.py. E2E tests via Makefile write to tests/outputs/.
  • .github/workflows/ — CI: quality.yml (pre-commit), fast_tests.yml (base deps, every PR), full_tests.yml (all extras + E2E + GPU, post-approval), latest_deps_tests.yml (daily lockfile upgrade), security.yml (TruffleHog), release.yml (PyPI publish on tags).
  • docs/source/ — HF documentation (.mdx files). Per-policy READMEs, hardware guides, tutorials. Built separately via docs-requirements.txt and CI workflows.
  • examples/ — End-user tutorials and scripts organized by use case (dataset creation, training, hardware setup).
  • docker/ — Dockerfiles for user (Dockerfile.user) and CI (Dockerfile.internal).
  • benchmarks/ — Performance benchmarking scripts.
  • Root files: pyproject.toml (single source of truth for deps, build, tool config), Makefile (E2E test targets), uv.lock, CONTRIBUTING.md & README.md (general information).

Notes

  • Mypy is gradual: strict only for lerobot.envs, lerobot.configs, lerobot.optim, lerobot.model, lerobot.cameras, lerobot.motors, lerobot.transport. Add type annotations when modifying these modules.
  • Optional dependencies: many policies, envs, and robots are behind extras (e.g., lerobot[aloha]). New imports for optional packages must be guarded or lazy. See pyproject.toml [project.optional-dependencies].
  • Video decoding: datasets can store observations as video files. LeRobotDataset handles frame extraction, but tests need ffmpeg installed.
  • Prefer uv run for executing Python commands, rather than raw python or pip.
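
The guarded-import rule for optional dependencies can be sketched with stdlib importlib. The helper name is hypothetical; gym_aloha is used only as an example of an optional package behind an extra:

```python
import importlib.util


def is_package_available(name: str) -> bool:
    """Check for an optional dependency without importing it."""
    return importlib.util.find_spec(name) is not None


# Guarded import: only pull in the optional package when it is installed,
# so a base install (without the extra) can still import this module.
if is_package_available("gym_aloha"):
    import gym_aloha  # noqa: F401
else:
    gym_aloha = None


def require_aloha():
    """Fail with an actionable message only when the feature is used."""
    if gym_aloha is None:
        raise ImportError("Missing optional dependency; install the extra, e.g. lerobot[aloha]")
    return gym_aloha
```

An alternative is a lazy import inside the function that needs the package; either way, importing the module itself must not require the extra.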