Mirror of https://github.com/huggingface/lerobot.git, synced 2026-05-16 00:59:46 +00:00, commit beb22afd81
* **#2 — dedupe `_PLACEHOLDER_RE`.** The same regex was compiled in
`recipe.py` and `language_render.py`. Promote to module-level
`PLACEHOLDER_RE` in `recipe.py` (its primary owner — declares
template syntax) and import from `language_render.py`.
* **#3 — centralize language column names.** `io_utils.py` had
hardcoded `{"language_persistent", "language_events"}` literals at
two sites. Replace with `LANGUAGE_COLUMNS` import so a future column
rename can't silently desync.
* **#4 — defensive collate preserved-keys.** `lerobot_collate_fn`
silently filtered language fields from samples that didn't have
them, which would hand downstream consumers a preserved list
shorter than the tensor batch. Now: if any sample carries a key,
every sample in the batch must carry it; otherwise raise a
`ValueError` so the upstream rendering bug surfaces at the boundary.
* **#5 — `_scalar` rejects non-singleton lists.** Previously a zero-
or multi-element list fell through and triggered confusing
`float([])` errors downstream. Now raises `ValueError` with the
actual length.
* **#6 — refactor `_extract_complementary_data`.** Replace 11 lines
of `key = {... if ... else {}}` plus an 11-line splat dict with a
single `_COMPLEMENTARY_KEYS` tuple iterated once.
* **#7 — document `EXTENDED_STYLES`.** Was an empty `set()` with no
comment. Add a docstring explaining it's an intentional extension
point: downstream modules append project-local styles before
`column_for_style` is called.
* **#9 — `tools.mdx` notes the runtime layer is future work.** The
page referenced `src/lerobot/tools/`, `registry.py`, and
`get_tools(meta)` — none exist in this PR. Added a callout at the
start of "How to add your own tool" plus a note on the
implementations paragraph.
* **#10 — tests for YAML round-trip, malformed rows, blend
validation.** `test_recipe.py` grew from 1 case to 12 covering:
blend-or-messages exclusivity, target-turn requirement, blend
emptiness, weight presence/positivity, nested-blend rejection,
`from_dict` with nested blends, `from_yaml` / `load_recipe`
agreement, top-level non-mapping rejection. Added a malformed-row
test for `_normalize_rows` that asserts non-dict entries raise
`TypeError`.
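
The invariant in #4 reads naturally as a guard at the batch boundary. A minimal standalone sketch (names here are illustrative, not the actual `lerobot_collate_fn` internals):

```python
def check_preserved_keys(samples: list[dict], keys: tuple[str, ...]) -> None:
    """If any sample in the batch carries a language key, every sample must
    carry it; otherwise the preserved list handed to downstream consumers
    would silently come out shorter than the tensor batch."""
    for key in keys:
        present = [key in sample for sample in samples]
        if any(present) and not all(present):
            raise ValueError(
                f"{present.count(False)}/{len(samples)} samples are missing "
                f"preserved key {key!r}; fix the upstream rendering bug."
            )
```
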
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
211 lines
6.0 KiB
Plaintext
# Tools

LeRobot v3.1 supports **tool calls** in policies — assistant messages can
emit structured invocations like `say(text="OK, starting now")` that the
runtime dispatches to a real implementation (TTS, controller, logger, …).

This page covers:

1. Where the tool catalog lives.
2. How the annotation pipeline produces tool-call atoms.
3. How to add your own tool.

## Where tools are declared

Two layers.

**The catalog** — a list of OpenAI-style function schemas — lives at
`meta/info.json["tools"]` on each dataset. Example:

```json
{
  "features": { "...": "..." },
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "say",
        "description": "Speak a short utterance to the user via the TTS executor.",
        "parameters": {
          "type": "object",
          "properties": {
            "text": {
              "type": "string",
              "description": "The verbatim text to speak."
            }
          },
          "required": ["text"]
        }
      }
    }
  ]
}
```

Read it via the dataset metadata accessor:

```python
from lerobot.datasets.dataset_metadata import LeRobotDatasetMetadata

meta = LeRobotDatasetMetadata(repo_id="pepijn/super_poulain_final_annotations")
tools = meta.tools  # list[dict] — OpenAI tool schemas
```

If the dataset's `info.json` doesn't declare any tools, `meta.tools`
returns `DEFAULT_TOOLS` from `lerobot.datasets.language` — currently a
single-entry list with the canonical `say` schema. So unannotated
datasets and chat-template consumers keep working without any
configuration:

```python
prompt_str = tokenizer.apply_chat_template(
    sample["messages"],
    tools=meta.tools,  # works either way
    add_generation_prompt=False,
    tokenize=False,
)
```
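
The fallback amounts to a one-line resolution. A standalone sketch, with `DEFAULT_TOOLS` abbreviated (the real constant in `lerobot.datasets.language` holds the full `say` schema shown earlier, and whether an explicitly empty `"tools": []` also falls back is an assumption here):

```python
# Abbreviated stand-in for lerobot.datasets.language.DEFAULT_TOOLS.
DEFAULT_TOOLS = [{"type": "function", "function": {"name": "say"}}]


def resolve_tools(info: dict) -> list[dict]:
    """Return the dataset's declared tool catalog, or the canonical fallback."""
    return info.get("tools") or DEFAULT_TOOLS
```
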

**The implementations** — runnable Python — will live under
`src/lerobot/tools/`, one file per tool. The runtime dispatcher and
the canonical `say` implementation (wrapping Kyutai's pocket-tts) land
in a follow-up PR; this PR ships only the catalog storage and
fallback constant.

## Per-row tool _invocations_

The catalog above describes _what can be called_. The actual _call_ — the
function name plus the argument values — is stored per-row, on the
assistant atoms in `language_events`:

```json
{
  "role": "assistant",
  "content": null,
  "style": null,
  "timestamp": 12.4,
  "camera": null,
  "tool_calls": [
    { "type": "function",
      "function": { "name": "say", "arguments": { "text": "On it." } } }
  ]
}
```

Recipes splice these into rendered messages via `tool_calls_from`:

```yaml
user_interjection_response:
  bindings:
    speech: "emitted_at(t, role=assistant, tool_name=say)"
  messages:
    - { role: user, content: "${task}", stream: high_level }
    - {
        role: assistant,
        content: "${current_plan}",
        stream: high_level,
        target: true,
        tool_calls_from: speech,
      }
```

The model's training target is one assistant turn that carries both the
plan text _and_ the `say` tool call. At inference, the runtime parses
the generated text back into structured `tool_calls` and dispatches to
the matching implementation.
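
This page doesn't pin down the on-the-wire format of a generated call, so the parsing step can only be sketched. A hedged version, assuming the chat template serializes each call as a JSON object between `<tool_call>…</tool_call>` markers (a common convention, not confirmed here):

```python
import json
import re

# Assumed wire format: {"name": ..., "arguments": {...}} between markers.
TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)


def parse_tool_calls(generated: str) -> list[dict]:
    """Recover structured tool_calls from a generated assistant turn."""
    return [
        {"type": "function", "function": json.loads(match.group(1))}
        for match in TOOL_CALL_RE.finditer(generated)
    ]
```
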

## How to add your own tool

> **Note:** Steps 2 and 3 below describe the runtime layer
> (`src/lerobot/tools/`, the `Tool` protocol, `TOOL_REGISTRY`,
> `get_tools(meta)`), which lands in a follow-up PR. Today (this PR
> only), Step 1 is enough to make the tool visible to the chat
> template via `meta.tools` so the model can learn to _generate_ the
> call. Executing the call at inference is what the follow-up PR
> wires up.

Three steps. Concrete example: a `record_observation` tool the policy
can call to capture an extra observation outside the regular control
loop.

### Step 1 — declare the schema

Add an entry under `meta/info.json["tools"]`. Either edit the file
directly on disk _before_ running the annotation pipeline (it'll be
preserved) or hand it to `lerobot-annotate` via a config flag.

```json
{
  "tools": [
    { "type": "function", "function": { "name": "say", "...": "..." } },
    {
      "type": "function",
      "function": {
        "name": "record_observation",
        "description": "Capture a high-resolution still image for the user.",
        "parameters": {
          "type": "object",
          "properties": {
            "label": {
              "type": "string",
              "description": "Short label for the saved image."
            }
          },
          "required": ["label"]
        }
      }
    }
  ]
}
```

The schema follows OpenAI's function-calling convention exactly, so the
chat template can render it natively.

### Step 2 — implement the call

Create `src/lerobot/tools/record_observation.py`:

```python
from typing import Any

from .base import Tool  # structural protocol: `name`, `schema`, `call()`

RECORD_OBSERVATION_SCHEMA: dict[str, Any] = {"...": "..."}  # mirrors the JSON above


class RecordObservationTool(Tool):
    name = "record_observation"
    schema = RECORD_OBSERVATION_SCHEMA

    def __init__(self, output_dir: str = "."):
        self.output_dir = output_dir

    def call(self, arguments: dict) -> str:
        label = arguments["label"]
        # ... save the latest camera frame to <output_dir>/<label>.png ...
        return f"saved {label}.png"
```

One file per tool keeps dependencies isolated — `record_observation`
might pull `pillow`, while `say` pulls `pocket-tts`. Users installing
only the tools they need avoid heavy transitive deps.

### Step 3 — register it

Add to `src/lerobot/tools/registry.py`:

```python
from .record_observation import RecordObservationTool

TOOL_REGISTRY["record_observation"] = RecordObservationTool
```

That's it. At runtime `get_tools(meta)` looks up each schema in
`meta.tools`, instantiates the matching registered class, and returns
a name → instance dict the dispatcher can route into.
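
A hedged sketch of that lookup. The real `get_tools` lands in the follow-up PR, so the signature and the skip-on-unregistered behavior here are assumptions:

```python
# Assumed shapes: TOOL_REGISTRY maps tool name -> implementation class;
# meta.tools is the list of OpenAI-style schemas read from info.json.
TOOL_REGISTRY: dict[str, type] = {}


def get_tools(meta) -> dict[str, object]:
    """Instantiate a registered implementation for each declared schema."""
    tools = {}
    for schema in meta.tools:
        name = schema["function"]["name"]
        if name not in TOOL_REGISTRY:
            continue  # catalog-only tool: generated by the model, never executed
        tools[name] = TOOL_REGISTRY[name]()
    return tools
```
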

If you want to use a tool _without_ writing an implementation (e.g. for
training-time chat-template formatting only), step 1 alone is enough —
the model still learns to _generate_ the call. Steps 2 and 3 are only
needed to actually _execute_ it at inference.