Commit Graph

205 Commits

Author SHA1 Message Date
Pepijn 86e7302e10 Merge branch 'feat/mirror' into openarms_wallx_rebased_3 2026-02-24 11:53:01 +01:00
Pepijn 3ec7c25e7d speedup stats and encoding 2026-02-06 11:26:27 +01:00
Pepijn 2d8ac028f9 remove async stuff 2026-02-03 11:01:32 +01:00
Pepijn c028ae3a44 Async encoding 2026-02-03 08:50:34 +01:00
Pepijn 2598dbc31a Merge branch 'feat/training_time_rtc' into openarms_wallx_rebased_3 2026-01-29 11:17:15 +01:00
Steven Palma 3409ef0dc2 refactor(cameras): cameras API extension (#2808)
* feat(cameras): add new read_latest() method

* fix(cameras): fix threading bug + clear state

* refactor(cameras): multiple improvements

* feat(camera): add context manager to camera base class

* chore(camera): slight modifications to opencv

* test(cameras): update opencv tests according to the changes

* refactor(cameras): reflect design changes to realsense + deal with depth

* test(cameras): fix realsense tests according to the new changes

* refactor(cameras): update reachymini and zmq accordingly

* chore: wrap resource sensitive examples into a try/finally

* test(cameras): add test for new read_latest

* test(cameras): fix problem with image artifact in opencv tests

* test(cameras): fix test_read_latest_high_frequency expectations

* Apply suggestions from code review 1

Co-authored-by: Caroline Pascal <caroline8.pascal@gmail.com>
Signed-off-by: Steven Palma <imstevenpmwork@ieee.org>

* chore(cameras): address feedback

* feat(cameras): add max_age_ms check in read_latest

* test(cameras): fix read_latest tests

* chore(redundancies): removing redundancies in Reachy 2 camera class

* fix(warmup): replacing the arbitrary time.sleep with an actual warmup in the RealSense camera class

* chore(format): formatting latest changes

* chore(warning): adding a "to be implemented" warning for read_latest() in Camera base class

* chore(warning): making read_latest() warning message shorter and clearer

---------

Signed-off-by: Steven Palma <imstevenpmwork@ieee.org>
Co-authored-by: Caroline Pascal <caroline8.pascal@gmail.com>
2026-01-29 11:07:47 +01:00
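
The read_latest() flow described in #2808 can be illustrated with a short sketch. This is a hypothetical usage example only: the import path, constructor arguments, and return types are assumptions, not the confirmed LeRobot interface; only read_latest(), max_age_ms, and the context-manager support are named in the PR itself.

```python
# Hypothetical usage of the read_latest() method added in #2808.
# Import path and constructor arguments are assumptions for illustration only.
from lerobot.cameras.opencv import OpenCVCamera

# The PR adds a context manager to the camera base class, so connect/disconnect
# are handled automatically by the `with` block.
with OpenCVCamera(camera_index=0) as cam:
    # read_latest() returns the most recent frame grabbed in the background;
    # max_age_ms (added later in the PR) rejects frames older than the threshold
    # instead of blocking on a fresh capture.
    frame = cam.read_latest(max_age_ms=50)
    print(frame.shape)
```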
sato_shinji 9919b16b36 fix: ensure action tensors are moved to client_device in async training (#2792)
* feat(async_inference): server always sends CPU tensors, client handles device conversion

* fix: fix the type annotation of RawObservation in src/lerobot/async_inference/helpers.py

* update the import of robot_client

---------

Co-authored-by: Sato shinji <wwwsatoshinji@gmail.com>
Co-authored-by: Steven Palma <imstevenpmwork@ieee.org>
Co-authored-by: KB <kevin-brian.n-diaye@epita.fr>
2026-01-20 15:17:38 +01:00
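
The convention described in #2792 (the server always sends CPU tensors, the client handles device conversion) amounts to a client-side device move. A minimal sketch, assuming a dict-of-tensors payload; the function and variable names are illustrative, not the actual async_inference API:

```python
import torch

def to_client_device(chunk: dict[str, torch.Tensor], client_device: str) -> dict[str, torch.Tensor]:
    # The server always serializes CPU tensors; the client alone decides where
    # they live. non_blocking=True only matters for pinned CPU -> CUDA copies
    # and is a no-op otherwise.
    return {k: v.to(client_device, non_blocking=True) for k, v in chunk.items()}

# Stand-in for a deserialized action chunk received from the policy server.
received = {"action": torch.zeros(50, 6)}
actions = to_client_device(received, client_device="cuda" if torch.cuda.is_available() else "cpu")
```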
Pepijn bc68651815 add command 2026-01-16 16:43:45 +01:00
Pepijn d1f50babaa fix rac data collection with rtc by disabling compile 2026-01-15 17:06:58 +01:00
Martino Russi 6b8d4c75a6 Feat/g1 improvements record sim (#2765)
This PR extends the integration of the Unitree G1 with the LeRobot codebase. By converting the robot state to a flat dict we can now record and replay episodes (the example groot/holosoma scripts need to be adjusted as well). We also improve the simulation integration by calling .step in _subscribe_motor_state instead of running it in a separate thread, and we add a ZMQ camera to LeRobot that streams base64-encoded images over JSON.
2026-01-12 17:31:39 +01:00
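
The "ZMQ camera streaming base64 images over JSON" idea can be sketched with pyzmq and OpenCV. The socket pattern, port, and JSON field names below are assumptions made for illustration, not the actual message schema used in the PR:

```python
import base64
import json

import cv2
import numpy as np
import zmq

# Publisher side: encode a frame as JPEG, base64-encode it, and ship it as JSON.
ctx = zmq.Context()
sock = ctx.socket(zmq.PUB)
sock.bind("tcp://*:5555")  # port chosen arbitrarily for the example

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a captured camera image
ok, jpeg = cv2.imencode(".jpg", frame)
message = {"timestamp": 0.0, "image": base64.b64encode(jpeg.tobytes()).decode("ascii")}
sock.send_string(json.dumps(message))

# A subscriber would json.loads() the string and base64-decode "image"
# back into bytes before cv2.imdecode().
```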
Steven Palma d791a431fe feat(robots): consolidates bi SO setups (#2780)
* feat(robots): consolidates bi SO setups

* fix(robots): solve circular dependency

* fix(robots): teleop & record working

* feat(robots): only one SO

* fix(utils): rename bi so

* fix(scripts): bi so import

* fix(rl): remove imports
2026-01-12 16:01:22 +01:00
Pepijn 3316301693 debug rtc 2026-01-09 16:58:57 +01:00
Pepijn feedababd2 debug 2026-01-09 16:54:11 +01:00
Pepijn 480ee3299f log 2026-01-09 16:50:44 +01:00
Pepijn 2d1fb0f508 refactor 2026-01-09 16:41:59 +01:00
Pepijn b1a55b0666 by default dont use rtc 2026-01-09 16:26:54 +01:00
Pepijn 24af996f82 add logging 2026-01-09 16:10:32 +01:00
Pepijn 8d7eec79c8 f 2026-01-09 16:06:02 +01:00
Pepijn ccced0c9fc f 2026-01-09 15:58:37 +01:00
Pepijn 4166eeb7da have only rtc thread read obs and expose it 2026-01-09 15:48:49 +01:00
Pepijn 1f93a74d8c fix queue 2026-01-09 14:00:06 +01:00
Pepijn b16e2f25f7 remove move to zero due to potential race condition 2026-01-09 13:56:16 +01:00
Pepijn 9cc841c674 wait for first actions 2026-01-09 13:45:06 +01:00
Pepijn 63c28ea395 add cmd arg 2026-01-09 13:38:33 +01:00
Pepijn 98c33a4748 Add RaC with RTC 2026-01-09 13:26:25 +01:00
Pepijn 7d6f113072 fix at 2x actual freq 2026-01-09 13:03:29 +01:00
Pepijn 7ac05c838d add interpolation option 2026-01-09 12:56:43 +01:00
Steven Palma ccfd609ece feat(robots): consolidate SO arms implementation (#2763)
* feat(robots): consolidate SO arms implementation

* chore(robots): delete unnecessary init modules
2026-01-08 13:04:30 +01:00
Martino Russi 7e9d05a799 add holosoma locomotion (#2669)
Add holosoma locomotion from Amazon-FAR
Add reset method to unitree_g1
Format actions as dict
Update docs
2026-01-07 16:05:31 +01:00
Steven Palma e2957d7783 fix: precise_sleep is never called with negative value (#2757) 2026-01-06 20:09:43 +01:00
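The fix in #2757 boils down to clamping the remaining wait time before calling the sleep helper. A minimal sketch of that guard, with time.sleep standing in for the actual precise_sleep utility:

```python
import time

def wait_until(next_tick: float) -> None:
    # Compute the remaining time in the control-loop period and only sleep when
    # it is positive, so the sleep primitive is never called with a negative value.
    remaining = next_tick - time.perf_counter()
    if remaining > 0:
        time.sleep(remaining)  # stand-in for lerobot's precise_sleep
```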
githubnemo e670ac5daf Add basic PEFT support to train script + record module (#1411)
* Add basic support for PEFT adapter methods

This change adds support for training policies with far fewer parameters
by applying adapter methods such as LoRA to specific parts of the policies,
which in turn allows for possibly higher learning rates / batch sizes.

To make this as accessible as possible I thought it useful to provide
defaults for `target_modules` and `modules_to_save`. Currently only SmolVLA
has such defaults but when we agree that this change is useful I will set
out to generate more such defaults. While the user can override these
settings, they are expected to only change the peft_method, rank and init_type
parameters.

* Implement loading of PEFT adapters

Loading a PEFT adapter is currently done by initializing a policy with default config
and then applying the adapter to the resulting model. This has the obvious drawback
that any configurations done during training are not applied in the adapted model.

Currently the `use_peft` attribute of `PreTrainedConfig` is only set during loading
to signal to the following code that it has to deal with a PEFT adapter. However
we could imagine a scenario where this is already set at training time and stored
alongside the adapter.

* Store policy config alongside PEFT checkpoint

Before this change the PEFT-wrapped policy did not save the policy's config
alongside the adapter config / weights which prevented us from changing the
policy config. Now the policy config is saved both in full training and PEFT
training.

This change makes loading the PEFT policy adapter much easier as well.

* Add default config for ACT

* Support targets like `all-linear`

* Formatting

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix failing tests

* Remove PEFT compatibility changes in config

We'll wait for the PEFT release that fixes this for good.

* Remove `use_peft` parameter from training script

Instead we make the PEFT config optional which has the same effect.

* Log adapter config to WandB

* Better documentation for CLI arguments

* Don't unload & merge the PEFT model

This can make things hard when using quantized layers (the user expects quantized base layers with
unquantized adapters, for example, but merging defaults to upcasting the layers, leading to higher
memory usage).

* Correct way of identifying when to save config

* Add CLI end-to-end tests

Currently there doesn't seem to be any way to test the CLI commands.
Since this change mostly happens in those I thought it best to add
a way to test these commands end-to-end.

More integrated commands like `lerobot-record` need patching but
standalone commands like training seem to work fine.

* Update default targets

Removed ACT since it doesn't make sense to fine-tune ACT without having it pretrained beforehand.
SmolVLA and Pi0/0.5 are much more sensible targets.

* Clean up loading code

- Centralized instantiation of the PEFT wrapper in `make_policy` for inference
  (e.g. in `lerobot-record`)
- Training a PEFT policy also sets `cfg.use_peft` so that all inference code loading
  the policy can rely on that attribute to identify if PEFT loading is needed
- Modified RTC example to also include PEFT policies. Mostly because this is an example
  I'm currently exploring.

* Make sure push_to_hub works

Since PEFT only wraps `push_to_hub` and not `push_model_to_hub`, the reference
to `self` in `policy.push_model_to_hub` is the unwrapped policy which, of course,
doesn't know anything about PEFT.

To make the upload process aware of PEFT, we pass the unwrapped policy down to
`push_model_to_hub` as a kwarg. This is not ideal but I think it is the best way
for now.

* formatting

* Warn when encountering from-scratch-training

* Revamp pretrained model loading

There were quite a few factors that convinced me that the status quo
was able to load pretrained models from the PEFT adapter config, but
in fact it didn't work.

This commit fixes the following things:
- policies wrapped in PEFT will now have a `name_or_path` attribute
  containing the name or path of the pretrained model we're fine-tuning
- we further assume that SmolVLA without `pretrained_path` and
  `load_vlm_weights==False` must be a user-side error
- we assume that using PEFT on from-scratch policies must be
  a user-side error

* Make it possible to unset policy features

This is necessary to train pre-trained policies on new datasets so that the
features are inferred from the new dataset and not from the pretrained
policy.

* Use correct loading for PEFT in RTC example

* Make it possible to use PeftModels in eval

* Add test checking that PEFT actually reduces params

* Adapt state/action projections instead of full-finetuning

There doesn't seem to be a benefit to fully fine-tuning these layers
over just adapting them, so we do that instead.

* Disallow PEFT training on non-pretrained policies

At first I thought it would make sense to have this feature
in case you want to fine-tune a pre-trained section but in the
end it makes more trouble than it's worth.

It's still possible to allow this in the future when a concrete
need arises.

* Add basic documentation

* Formatting

* Add peft as extra dependency, mark tests

Fast tests currently fail because of the missing dependency.

* Fix pre-commit issues

* Add walx <> peft conflict for uv

* Exclude peft from pi install for now

---------

Co-authored-by: nemo <git@ningu.net>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Pepijn <138571049+pkooij@users.noreply.github.com>
2026-01-05 08:51:26 +01:00
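
The adapter approach integrated by #1411 follows the standard PEFT pattern: wrap the policy with a LoraConfig that names the modules to adapt (target_modules) and the modules to keep fully trainable (modules_to_save). A minimal sketch on a toy module; the PR ships per-policy defaults (e.g. for SmolVLA) that are not reproduced here, and TinyPolicy and its layer names are placeholders:

```python
import torch.nn as nn
from peft import LoraConfig, get_peft_model


class TinyPolicy(nn.Module):
    # Toy stand-in for a real policy; names are placeholders.
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(64, 64)
        self.action_head = nn.Linear(64, 8)

    def forward(self, x):
        return self.action_head(self.backbone(x))


lora_cfg = LoraConfig(
    r=16,                             # adapter rank
    target_modules=["backbone"],      # layers that receive LoRA adapters
    modules_to_save=["action_head"],  # layers kept fully trainable (e.g. projections)
)
peft_policy = get_peft_model(TinyPolicy(), lora_cfg)
peft_policy.print_trainable_parameters()  # reports far fewer trainable parameters
```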
Pepijn c85f1692d6 in place 2026-01-03 22:12:22 +01:00
Pepijn 9fd329713a modift in place 2026-01-03 22:11:11 +01:00
Pepijn 97d068e5a2 rename to fold 2026-01-03 21:59:11 +01:00
Pepijn e5bea36387 add unify task 2026-01-03 21:52:19 +01:00
Pepijn cf1d8c3d5b stop policy when we dont teleop yet 2026-01-02 13:12:22 +01:00
Pepijn 464b65cfb0 wait for start button before teleop 2026-01-02 13:05:00 +01:00
Pepijn c76bc4cdea Move robot to zero before begin episode 2026-01-02 10:52:48 +01:00
Pepijn 20f0381f81 wait for takeover press 2026-01-02 10:18:59 +01:00
Pepijn a447c652cb change pedal flow 2026-01-02 09:53:40 +01:00
Pepijn 8277dbf0dc add foot pedal support 2026-01-02 09:36:36 +01:00
Pepijn eb0918249d keep teleop active in reset 2026-01-02 09:21:15 +01:00
Pepijn 03c6ee5f9a fix grippers 2026-01-01 16:40:53 +01:00
Pepijn dfd229ae4f fix direction and encoding 2026-01-01 16:37:11 +01:00
Pepijn aba42c805f some changes to smooth 2025-12-31 15:16:23 +01:00
Pepijn 0514616c87 dont move teleop when not pause pressed 2025-12-31 12:33:40 +01:00
Pepijn f15872293d Only move teleop after space press 2025-12-31 12:24:43 +01:00
Pepijn a97255e3d1 use robot_action 2025-12-30 12:04:30 +01:00
Pepijn 1716d599c1 only use position in dataset 2025-12-30 12:01:26 +01:00
Pepijn c07ab7e1fa policy path can be none 2025-12-30 11:14:21 +01:00