This PR extends the integration of the Unitree G1 with the LeRobot codebase. By converting the robot state to a flat dict we can now record and replay episodes (the example groot/holosoma scripts need to be adjusted as well). We also improve the simulation integration by calling `.step` from `_subscribe_motor_state` instead of running it in a separate thread, and we add a ZMQ camera to LeRobot that streams base64-encoded images over JSON.
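A minimal sketch of the base64-over-JSON framing used by such a ZMQ camera. The field names (`camera`, `timestamp`, `image`) are hypothetical, and the actual LeRobot message layout may differ; the point is only that JPEG bytes are base64-encoded so they can travel inside a JSON payload:

```python
import base64
import json


def encode_frame(frame_bytes: bytes, camera: str, ts: float) -> str:
    """Pack JPEG-encoded frame bytes into the JSON message sent over the ZMQ socket.

    Field names are illustrative, not the actual LeRobot wire format.
    """
    return json.dumps({
        "camera": camera,
        "timestamp": ts,
        # Raw bytes are not valid JSON, so the image is base64-encoded.
        "image": base64.b64encode(frame_bytes).decode("ascii"),
    })


def decode_frame(message: str) -> tuple[str, float, bytes]:
    """Unpack a JSON message back into camera name, timestamp, and JPEG bytes."""
    payload = json.loads(message)
    return payload["camera"], payload["timestamp"], base64.b64decode(payload["image"])
```

On the sending side the string returned by `encode_frame` would be published on a ZMQ socket (e.g. PUB/SUB); the subscriber runs `decode_frame` and hands the JPEG bytes to its image decoder.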
* feat(robots): consolidates bi SO setups
* fix(robots): solve circular dependency
* fix(robots): teleop & record working
* feat(robots): only one SO
* fix(utils): rename bi so
* fix(scripts): bi so import
* fix(rl): remove imports
* Add basic support for PEFT adapter methods
This change adds support for training policies with far fewer trainable parameters
by applying adapter methods such as LoRA to specific parts of the policies,
which in turn allows higher learning rates / batch sizes.
To make this as accessible as possible, I thought it useful to provide
defaults for `target_modules` and `modules_to_save`. Currently only SmolVLA
has such defaults, but once we agree that this change is useful I will set
out to generate more. While the user can override these
settings, they are expected to change only the `peft_method`, `rank`, and
`init_type` parameters.
* Implement loading of PEFT adapters
Loading a PEFT adapter is currently done by initializing a policy with the default config
and then applying the adapter to the resulting model. This has the obvious drawback
that any configuration done during training is not applied in the adapted model.
Currently the `use_peft` attribute of `PreTrainedConfig` is only set during loading
to signal to the following code that it has to deal with a PEFT adapter. However,
we could imagine a scenario where this is already set at training time and stored
alongside the adapter.
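The loading flow described above can be sketched with stand-in types. `PreTrainedConfig`, `make_policy`, and the PEFT wrapper are mocked here as plain dicts; the real code goes through the LeRobot and `peft` APIs:

```python
from dataclasses import dataclass


@dataclass
class PreTrainedConfig:
    """Stand-in for the real policy config; `use_peft` is set while loading
    to signal that the checkpoint holds a PEFT adapter, not full weights."""
    use_peft: bool = False


def make_policy(cfg: PreTrainedConfig) -> dict:
    # Stand-in for building the policy with its default config.
    return {"kind": "base"}


def load_policy(cfg: PreTrainedConfig) -> dict:
    policy = make_policy(cfg)
    if cfg.use_peft:
        # Stand-in for PeftModel.from_pretrained(policy, path): the adapter is
        # applied on top of the freshly initialized base model, so config
        # changes made during training are NOT reflected here -- the drawback
        # noted above.
        policy = {"kind": "peft", "base": policy}
    return policy
```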
* Store policy config alongside PEFT checkpoint
Before this change the PEFT-wrapped policy did not save the policy's config
alongside the adapter config / weights which prevented us from changing the
policy config. Now the policy config is saved both in full training and PEFT
training.
This change makes loading the PEFT policy adapter much easier as well.
* Add default config for ACT
* Support targets like `all-linear`
* Formatting
* [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
* Fix failing tests
* Remove PEFT compatibility changes in config
We'll wait for the PEFT release that fixes this for good.
* Remove `use_peft` parameter from training script
Instead we make the PEFT config optional, which has the same effect.
* Log adapter config to WandB
* Better documentation for CLI arguments
* Don't unload & merge the PEFT model
This can make things hard when using quantized layers (the user may expect quantized
base layers with unquantized adapters, for example; merging defaults to upcasting the
layers, leading to higher memory use).
* Correct way of identifying when to save config
* Add CLI end-to-end tests
Currently there doesn't seem to be any way to test the CLI commands.
Since this change mostly happens in those, I thought it best to add
a way to test these commands end-to-end.
More integrated commands like `lerobot-record` need patching, but
standalone commands like training seem to work fine.
* Update default targets
Removed ACT since it doesn't make sense to fine-tune ACT without pretraining it first.
SmolVLA and Pi0/0.5 are much more sensible targets.
* Clean up loading code
- Centralized instantiation of the PEFT wrapper in `make_policy` for inference
(e.g. in `lerobot-record`)
- Training a PEFT policy also sets `cfg.use_peft` so that all inference code loading
the policy can rely on that attribute to identify if PEFT loading is needed
- Modified RTC example to also include PEFT policies. Mostly because this is an example
I'm currently exploring.
* Make sure push_to_hub works
Since PEFT only wraps `push_to_hub` and not `push_model_to_hub`, the reference
to `self` in `policy.push_model_to_hub` is the unwrapped policy which, of course,
doesn't know anything about PEFT.
To make the upload process aware of PEFT, we pass the unwrapped policy down to
`push_model_to_hub` as a kwarg. This is not ideal but I think it is the best way
for now.
* formatting
* Warn when encountering from-scratch-training
* Revamp pretrained model loading
Quite a few factors had convinced me that the status quo
could load pretrained models from the PEFT adapter config,
but in fact that didn't work.
This commit fixes the following things:
- policies wrapped in PEFT will now have a `name_or_path` attribute
containing the name or path of the pretrained model we're fine-tuning
- we further assume that SmolVLA without `pretrained_path` and
  `load_vlm_weights==False` must be a user-side error
- we assume that using PEFT on from-scratch policies must be
  a user-side error
* Make it possible to unset policy features
This is necessary to train pre-trained policies on new datasets so that the
features are inferred from the new dataset and not from the pretrained
policy.
* Use correct loading for PEFT in RTC example
* Make it possible to use PeftModels in eval
* Add test checking that PEFT actually reduces params
* Adapt state/action projections instead of full-finetuning
There doesn't seem to be a benefit to fully fine-tuning these layers
over just adapting them, so we do the latter instead.
* Disallow PEFT training on non-pretrained policies
At first I thought it would make sense to have this feature
in case you want to fine-tune a pre-trained section, but in the
end it causes more trouble than it's worth.
It's still possible to allow this in the future when a concrete
need arises.
* Add basic documentation
* Formatting
* Add peft as extra dependency, mark tests
Fast tests currently fail because of the missing dependency.
* Fix pre-commit issues
* Add wallx <> peft conflict for uv
* Exclude peft from pi install for now
---------
Co-authored-by: nemo <git@ningu.net>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Pepijn <138571049+pkooij@users.noreply.github.com>
* wording
* added how-to guide to build your own envhub repos
* include LW edits
* wording
* chat fixes
* additional
* wording
* wording
* wording
* pre commit fixes
* pi fixes for dependencies
* add wallx sarm conflict
* also add conflicts for pi
* fix(ci): use --extra all instead of --all-extras + --no-extra
---------
Co-authored-by: Steven Palma <steven.palma@huggingface.co>
* support wallx
* fix bugs in flow
* incorporate wallx model into lerobot
* update the policy methods
* reduce to minimal config and params & pass lerobot basic test
* fixed dtype bugs
* add wallx dependencies
* update
* remove flash-attn requirement && fix bug in inference and fast mode
* fix bug for inference
* add some small modifications
* fix pre-commit errors
* remove lerobot[wallx]
* fix ci
* fix precommit issues
* fix: exclude wallx extra properly in CI workflows
* fix: add uv conflicts for wallx transformers version
* fix: peft test import
* pre-commit
* only export WallXConfig from wall_x package to avoid peft import in CI
* remove torch dep
* precommit
* add import
---------
Co-authored-by: vincentchen <chenlufang@x2robot.com>
Co-authored-by: Geoffrey19 <sympathischmann35@gmail.com>
Co-authored-by: Pepijn <138571049+pkooij@users.noreply.github.com>
Co-authored-by: Pepijn <pepijn@huggingface.co>
* fix(optim): enable and resolve mypy type errors
Resolves #1729
build(deps): add mypy as dependency and update pre-commit hook
* change build's type annotation
* add initial modeling
* make rewind pretrained policy
* add annotation
* small fix
* add sarm
* subtasks
* fix spawn
* fix rewind discrepancies
* Add script to generate embedding for dataset (#2138)
* Add generate and validate script
* fix precommit
* Improve generate embeddings function by using dataset tools (#2206)
---------
Co-authored-by: Michel Aractingi <michel.aractingi@huggingface.co>
* cleanup
* change order train log
* print batch size
* update sarm processor
* add reward output
* change expected features
* add image validation
* change validation
* get state input from dataset stats
* raise if no state key is found
* pass stats
* cleanup and refactor
* add episode index to complementary data
* add subtask init and detection
* revert lerobot_train changes
* pass dataset metadata to policy
* change loading of subtasks
* add small logging
* fix progress conversion and adding initial frame
* use large offset for initial frame (ugly)
* Remove rewind, use clip tokenizer
* add tests, implement formula 1,2 correctly and cleanup
* use task from dataset, cleanup visualizer
* simplify
* simplify and cleanup code and move compute_temporal_proportions to utils
* fix normalization in visualization
* Fix visualization and change prompt
* fix formatting
* add visualize subtask annotations
* use qwen thinking
* try different prompt
* format
* update prompt
* higher temp, long output
* different settings
* use instruct
* show full resp
* split message
* Temp: increase tolerance dataset
* Fix RA-BC (#2572)
* Add next observation loading for RA-BC progress deltas
* Compute weights based on temporal progress deltas instead of static rewards
* Add hard-masking for negative progress deltas in weight computation
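A minimal sketch of the weighting the two commits above describe. The exact formula and the role of a `kappa` scaling factor are assumptions, not the committed RA-BC implementation:

```python
def rabc_weights(
    progress: list[float],
    next_progress: list[float],
    kappa: float = 1.0,
) -> list[float]:
    """Per-sample imitation weights from temporal progress deltas.

    Transitions whose progress delta is negative are hard-masked to zero so
    that regressing behavior contributes nothing to the loss; positive deltas
    are scaled by a hypothetical `kappa` factor.
    """
    weights = []
    for p, p_next in zip(progress, next_progress):
        delta = p_next - p  # temporal progress delta, not a static reward
        weights.append(kappa * delta if delta > 0 else 0.0)
    return weights
```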
* Feat/add dual head (#2582)
* Add dual dense sparse head and annotation
* Add docs
* add dual to procesor
* cleanup
* change sampling in visualize and cleanup
* remove validation
* remove compile
* Feat/test uniform (#2587)
* test uniform
* add different string for misaligned
* Fix rewind and add tests
* uncomment text implementation
* run precommit
* Add head mode for ra-bc
* fix visualization of single task
* add
* return per sample loss
* Fix RA_BC (#2602)
* update rabc implementation
* compute rabc beforehand
* fix import
* add only progress calculation
* use precomputed progress
* multi gpu processing
* import
* fix dataset meta data extraction
* add logging
* logging
* log
* progress per episode
* split differently
* move clip to gpu
* pre decode frames for an episode
* fix cuda initialization
* fix import
* multi processing
* rename
* fix import
* fix
* fix rabc
* use last known progress if oob
* use last known progress if oob
* add misalignment loss with random embeddings
* discard previous changes
* add selection of models to docs for ra_bc
* add transformers dep
* extend tolerance
* initial commit with new codebase
* add tests
* fix
* remove temporal sampler
* drop last frame for sampler
* use original ref
* some fixes
* fix visualization
* remove smoothing and fix order subtasks
* add stride rabc computation
* add push to hub
* add explanation
* add kappa explanation
* better rabc logging
* feedback pr
* remove dataset tolerance
* revert dataset tool
* revert dataset changes
* add credit
* run precommit
* change path for generate ra_bc
* fix type
* include sarm in all in pyproject
* fix precommit
* lazy import matplotlib
* lazy import qwen
* remove rich console
* skip if transformers is not installed
* run only when we have faker
* place transformer lazy loading
* Dont test if low transformer version
* fix
* increase transformer
* increase as 4.57.0 is yanked
* remove pi from all
* go back
---------
Co-authored-by: Michel Aractingi <michel.aractingi@huggingface.co>
Co-authored-by: s1lent4gnt <kmeftah.khalil@gmail.com>