diff --git a/docs/source/tools.mdx b/docs/source/tools.mdx
index 04d5da6b9..d88881184 100644
--- a/docs/source/tools.mdx
+++ b/docs/source/tools.mdx
@@ -68,9 +68,9 @@ prompt_str = tokenizer.apply_chat_template(
 
 **The implementations** — runnable Python — will live under
 `src/lerobot/tools/`, one file per tool. The runtime dispatcher and
-the canonical `say` implementation (wrapping Kyutai's pocket-tts) land
-in a follow-up PR; this PR ships only the catalog storage and
-fallback constant.
+the canonical `say` implementation (wrapping Kyutai's pocket-tts) are
+not part of the catalog layer described here; today this layer ships
+only the schema storage and the `DEFAULT_TOOLS` fallback constant.
 
 ## Per-row tool _invocations_
 
@@ -118,11 +118,11 @@ the matching implementation.
 
 > **Note:** Steps 2 and 3 below describe the runtime layer
 > (`src/lerobot/tools/`, the `Tool` protocol, `TOOL_REGISTRY`,
-> `get_tools(meta)`) which lands in a follow-up PR. Today (this PR
-> only), Step 1 is enough to make the tool visible to the chat
-> template via `meta.tools` so the model can learn to _generate_ the
-> call. Executing the call at inference is what the follow-up PR
-> wires up.
+> `get_tools(meta)`) which is not part of the catalog layer shipped
+> today — those modules don't yet exist in the tree. Step 1 alone is
+> enough to make the tool visible to the chat template via
+> `meta.tools` so the model can learn to _generate_ the call;
+> executing the call at inference requires the runtime layer.
 
 Three steps. Concrete example: a `record_observation` tool the policy
 can call to capture an extra observation outside the regular control
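
For readers skimming the diff, here is a minimal sketch of the catalog layer the rewritten text describes: a `DEFAULT_TOOLS` fallback plus the dataset's `meta.tools` list handed to the chat template. Only the names `DEFAULT_TOOLS`, `meta.tools`, and the `apply_chat_template` call appear in the docs above; the JSON-schema shape of the `say` entry, the `resolve_tools` helper, and the model checkpoint are illustrative assumptions, not the actual lerobot code.

```python
from transformers import AutoTokenizer

# Hypothetical fallback catalog: only the name DEFAULT_TOOLS comes from the
# docs diff above; the JSON-schema shape of the `say` entry is an assumption.
DEFAULT_TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "say",
            "description": "Speak a short utterance aloud (text-to-speech).",
            "parameters": {
                "type": "object",
                "properties": {"text": {"type": "string"}},
                "required": ["text"],
            },
        },
    }
]


def resolve_tools(meta) -> list[dict]:
    """Prefer the per-dataset catalog stored on the metadata, else the fallback."""
    return getattr(meta, "tools", None) or DEFAULT_TOOLS


# The chat template renders the catalog into the prompt so the policy can learn
# to *generate* tool calls; the checkpoint name is only an example.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
prompt_str = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Pick up the cube and say what you see."}],
    tools=resolve_tools(meta=None),  # no metadata here, so DEFAULT_TOOLS is used
    tokenize=False,
    add_generation_prompt=True,
)
```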
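
Likewise, a hedged sketch of the runtime layer the note says does not yet exist in the tree (`Tool` protocol, `TOOL_REGISTRY`, `get_tools(meta)`): the three names come from the docs above, but every signature and the stand-in `say` implementation below are assumptions about how the pieces could fit together, not the eventual lerobot API.

```python
from typing import Any, Protocol


class Tool(Protocol):
    """Assumed shape of a runnable tool: a name plus a callable body."""

    name: str

    def __call__(self, **kwargs: Any) -> Any: ...


class SayTool:
    """Stand-in for the canonical `say` implementation (pocket-tts wrapper)."""

    name = "say"

    def __call__(self, text: str) -> None:
        # A real implementation would synthesize and play audio.
        print(f"[tts] {text}")


# Maps tool names declared in meta.tools to their implementations.
TOOL_REGISTRY: dict[str, Tool] = {"say": SayTool()}


def get_tools(meta) -> dict[str, Tool]:
    """Return implementations for every tool declared in the dataset metadata."""
    declared = getattr(meta, "tools", None) or []
    names = {entry["function"]["name"] for entry in declared}
    return {name: TOOL_REGISTRY[name] for name in names if name in TOOL_REGISTRY}
```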