From d79dd6d31f53ce8ef146b58359eef9df21faa76f Mon Sep 17 00:00:00 2001
From: Vladislav Sovrasov
Date: Fri, 5 Dec 2025 13:32:58 +0100
Subject: [PATCH] Add a documentation page with a brief intro to hw backends
 (#2385)

---
 docs/source/_toctree.yml           |  4 +++
 docs/source/torch_accelerators.mdx | 42 ++++++++++++++++++++++++++++++
 2 files changed, 46 insertions(+)
 create mode 100644 docs/source/torch_accelerators.mdx

diff --git a/docs/source/_toctree.yml b/docs/source/_toctree.yml
index f9f792113..aae7372fa 100644
--- a/docs/source/_toctree.yml
+++ b/docs/source/_toctree.yml
@@ -92,6 +92,10 @@
   - local: phone_teleop
     title: Phone
   title: "Teleoperators"
+- sections:
+  - local: torch_accelerators
+    title: PyTorch accelerators
+  title: "Supported Hardware"
 - sections:
   - local: notebooks
     title: Notebooks
diff --git a/docs/source/torch_accelerators.mdx b/docs/source/torch_accelerators.mdx
new file mode 100644
index 000000000..6dfbfbad5
--- /dev/null
+++ b/docs/source/torch_accelerators.mdx
@@ -0,0 +1,42 @@
+# PyTorch accelerators
+
+LeRobot supports multiple hardware acceleration options for both training and inference.
+
+These options include:
+
+- **CPU**: the CPU executes all computations; no dedicated accelerator is used
+- **CUDA**: acceleration with NVIDIA GPUs (and AMD GPUs via ROCm builds)
+- **MPS**: acceleration with Apple Silicon GPUs
+- **XPU**: acceleration with Intel integrated and discrete GPUs
+
+## Getting Started
+
+To use a particular accelerator, a suitable build of PyTorch must be installed.
+
+For the CPU, CUDA, and MPS backends, follow the instructions on the [PyTorch installation page](https://pytorch.org/get-started/locally).
+For the XPU backend, follow the instructions in the [PyTorch documentation](https://docs.pytorch.org/docs/stable/notes/get_start_xpu.html).
+
+### Verifying the installation
+
+After installation, accelerator availability can be verified by running:
+
+```python
+import torch
+print(torch.cuda.is_available())  # replace `cuda` with `mps` or `xpu` for other backends
+```
+
+## How to run training or evaluation
+
+To select the desired accelerator, use the `--policy.device` flag when running `lerobot-train` or `lerobot-eval`. For example, to use MPS on Apple Silicon, run:
+
+```bash
+lerobot-train \
+  --policy.device=mps ...
+```
+
+```bash
+lerobot-eval \
+  --policy.device=mps ...
+```
+
+However, in most cases the presence of an accelerator is detected automatically, and the `--policy.device` parameter can be omitted from CLI commands.