* fix(deps): better versioning control for torchcodec
* refactor(video_utils): replace torchvision with pyav
* chore: add torchcodec version to lerobot-info
* chore(benchmarks): delete video benchmark
---------
Co-authored-by: Maximellerbach <maxime.ellerbach@huggingface.co>
* refactor: RL stack refactoring — RLAlgorithm, RLTrainer, DataMixer, and SAC restructuring
* chore: clarify torch.compile disabled note in SACAlgorithm
* fix(teleop): keyboard EE teleop not registering special keys and losing intervention state
Fixes #2345
Co-authored-by: jpizarrom <jpizarrom@gmail.com>
* fix: remove leftover normalization calls from reward classifier predict_reward
Fixes #2355
* fix: add thread synchronization to ReplayBuffer to prevent race condition between add() and sample()
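The race in question is the classic reader/writer overlap on shared storage between a producer thread calling add() and a consumer thread calling sample(). A minimal sketch of the guarded pattern (class and method names hypothetical, not the actual ReplayBuffer API):

```python
import random
import threading


class LockedReplayBuffer:
    """Toy replay buffer guarding add() and sample() with one lock.

    Without the lock, a sample() running concurrently with add() could
    observe the backing list mid-resize and read inconsistent state.
    """

    def __init__(self) -> None:
        self._storage: list = []
        self._lock = threading.Lock()

    def add(self, transition) -> None:
        with self._lock:
            self._storage.append(transition)

    def sample(self, batch_size: int) -> list:
        with self._lock:
            k = min(batch_size, len(self._storage))
            return random.sample(self._storage, k)
```

A single coarse lock is the simplest correct choice here; finer-grained schemes only pay off if profiling shows the lock is contended.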
* refactor: update SACAlgorithm to pass action_dim to _init_critics and fix encoder reference
* perf: remove redundant CPU→GPU→CPU transition move in learner
* fix: add kwargs to reward classifier `__init__()`
* fix: include IS_INTERVENTION in complementary_info sent to learner for offline replay buffer
* fix: add try/finally to control_loop to ensure image writer cleanup on exit
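The cleanup guarantee here is the standard try/finally idiom: the writer is stopped whether the loop finishes or raises. A hedged sketch with stand-in names (`ImageWriter` plays the role of the real async image writer, not its actual API):

```python
class ImageWriter:
    """Stand-in for an async image writer that must be stopped on exit."""

    def __init__(self) -> None:
        self.stopped = False

    def stop(self) -> None:
        self.stopped = True


def control_loop(writer: ImageWriter, steps: int) -> None:
    # try/finally ensures the writer is stopped even when a step raises
    # (e.g. a teleop disconnect), so no writer threads are leaked.
    try:
        for step in range(steps):
            if step == 2:
                raise RuntimeError("teleop disconnected")
    finally:
        writer.stop()
```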
* fix: use string key for IS_INTERVENTION in complementary_info to avoid torch.load serialization error
* fix: skip tests that require grpc if not available
* fix(tests): ensure tensor stats comparison accounts for reshaping in normalization tests
* fix(tests): skip tests that require grpc if not available
* refactor(rl): expose public API in rl/__init__ and use relative imports in sub-packages
* fix(config): update vision encoder model name to lerobot/resnet10
* fix(sac): clarify torch.compile status
* refactor(rl): update shutdown_event type hints from 'any' to 'Any' for consistency and clarity
* refactor(sac): simplify optimizer return structure
* perf(rl): use async iterators in OnlineOfflineMixer.get_iterator
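One way to picture the change: instead of blocking on each source in turn, an async generator interleaves batches from several async iterators. A toy round-robin sketch with hypothetical names (the real `OnlineOfflineMixer.get_iterator` mixes online replay-buffer and offline dataset batches, and its policy is more involved than strict alternation):

```python
import asyncio


async def round_robin(sources):
    """Interleave items from several async iterators until all are exhausted.

    Illustrative only: a stand-in for the idea behind an async get_iterator,
    not the actual DataMixer interface.
    """
    iterators = [source.__aiter__() for source in sources]
    while iterators:
        for it in list(iterators):
            try:
                yield await it.__anext__()
            except StopAsyncIteration:
                iterators.remove(it)  # source exhausted, drop it
```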
* refactor(sac): decouple algorithm hyperparameters from policy config
* update loss names in tests
* fix docstring
* remove unused type alias
* fix test for flat dict structure
* refactor(policies): rename policies/sac → policies/gaussian_actor
* refactor(rl/sac): consolidate hyperparameter ownership and clean up discrete critic
* perf(observation_processor): add CUDA support for image processing
* fix(rl): correctly wire HIL-SERL gripper penalty through processor pipeline
(cherry picked from commit 9c2af818ff)
* fix(rl): add time limit processor to environment pipeline
(cherry picked from commit cd105f65cb)
* fix(rl): clarify discrete gripper action mapping in GripperVelocityToJoint for SO100
(cherry picked from commit 494f469a2b)
* fix(rl): update neutral gripper action
(cherry picked from commit 9c9064e5be)
* fix(rl): merge environment and action-processor info in transition processing
(cherry picked from commit 30e1886b64)
* fix(rl): mirror gym_manipulator in actor
(cherry picked from commit d2a046dfc5)
* fix(rl): postprocess action in actor
(cherry picked from commit c2556439e5)
* fix(rl): improve action processing for discrete and continuous actions
(cherry picked from commit f887ab3f6a)
* fix(rl): enhance intervention handling in actor and learner
(cherry picked from commit ef8bfffbd7)
* Revert "perf(observation_processor): add CUDA support for image processing"
This reverts commit 38b88c414c.
* refactor(rl): make algorithm a nested config so all SAC hyperparameters are JSON-addressable
* refactor(rl): add make_algorithm_config function for RLAlgorithmConfig instantiation
* refactor(rl): add type property to RLAlgorithmConfig for better clarity
* refactor(rl): make RLAlgorithmConfig an abstract base class for better extensibility
* refactor(tests): remove grpc import checks from test files for cleaner code
* fix(tests): gate RL tests on the `datasets` extra
* refactor: simplify docstrings for clarity and conciseness across multiple files
* fix(rl): update gripper position key and handle action absence during reset
* fix(rl): record pre-step observation so (obs, action, next.reward) align in gym_manipulator dataset
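The alignment fix amounts to recording the observation before stepping the environment, so each stored row pairs an observation with the action chosen from it and the reward that action earned. A hedged sketch with toy env/policy interfaces (not the gym_manipulator API):

```python
def rollout(env, policy, num_steps):
    """Collect frames where observation, action, and next.reward align.

    The observation saved in each frame is the PRE-step observation the
    policy acted on, not the post-step one.
    """
    frames = []
    obs = env.reset()
    for _ in range(num_steps):
        action = policy(obs)
        next_obs, reward, done = env.step(action)
        frames.append({"observation": obs, "action": action, "next.reward": reward})
        obs = next_obs
        if done:
            obs = env.reset()
    return frames
```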
* refactor: clean up import statements
* chore: address reviewer comments
* chore: improve visual stats reshaping logic and update docstring for clarity
* refactor: enforce mandatory config_class and name attributes in RLAlgorithm
* refactor: implement NotImplementedError for abstract methods in RLAlgorithm and DataMixer
* refactor: replace build_algorithm with make_algorithm for SACAlgorithmConfig and update related tests
* refactor: add require_package calls for grpcio and gym-hil in relevant modules
* refactor(rl): move grpcio guards to runtime entry points
* feat(rl): consolidate HIL-SERL checkpoint into HF-style components
Make `RLAlgorithmConfig` and `RLAlgorithm` `HubMixin`s, add abstract
`state_dict()` / `load_state_dict()` for critic ensemble, target nets
and `log_alpha`, and persist them as a sibling `algorithm/` component
next to `pretrained_model/`. Replace the pickled `training_state.pt`
with an enriched `training_step.json` carrying `step` and
`interaction_step`, so resume restores actor + critics + target nets +
temperature + optimizers + RNG + counters from HF-standard files.
* refactor(rl): move actor weight-sync wire format from policy to algorithm
* refactor(rl): update type hints for learner and actor functions
* refactor(rl): hoist grpcio guard to module top in actor/learner
* chore(rl): manage import pattern in actor (#3564)
* chore(rl): manage import pattern in actor
* chore(rl): optional grpc imports in learner; quote grpc ServicerContext types
---------
Co-authored-by: Khalil Meftah <khalil.meftah@huggingface.co>
* update uv.lock
* chore(doc): update doc
---------
Co-authored-by: jpizarrom <jpizarrom@gmail.com>
Co-authored-by: Steven Palma <imstevenpmwork@ieee.org>
* chore(deps): ceiling + cuda
* ci: bump cuda version docker image
* ci: add cpu wheel to release workflow
* chore(deps): update uv.lock
* docs: update installation with cuda note
* docs(omx): adding some examples and scripts
* cleaning up and reviewing the CLI args
* adding __init__.py to example folder, adjusting the examples
* adding reference to pretrained act policy
* moving `.send_action` before `dataset.add_frame` for consistency
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
Signed-off-by: Maxime Ellerbach <maxime@ellerbach.net>
* adjusting docstring
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
Signed-off-by: Maxime Ellerbach <maxime@ellerbach.net>
* addressing hardcoded dataset fps
* removed `__init__.py` as it worked without it
---------
Signed-off-by: Maxime Ellerbach <maxime@ellerbach.net>
If VideoDecoder() raises during initialization, the fsspec file handle
was leaked since it was opened via __enter__() but never closed on the
exception path. Now explicitly closes the handle before re-raising.
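The fix follows the close-on-exception idiom: acquire the handle, and if the next constructor raises, close it before propagating. A hedged sketch with stand-in objects (`decoder_factory` plays the role of torchcodec's VideoDecoder, `fs` an fsspec filesystem):

```python
def open_decoder(fs, path, decoder_factory):
    """Open a file via an fsspec-like filesystem and build a decoder on it.

    If decoder construction raises, the file handle is closed before the
    exception is re-raised, so the error path no longer leaks it.
    """
    handle = fs.open(path)
    try:
        return decoder_factory(handle)
    except Exception:
        handle.close()  # release the handle on the error path
        raise
```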
* chore(deps): allow torch 2.11/2.12 and fix autocast deprecation
- Bump torch to >=2.7,<2.13 (was <2.11), torchvision to <0.28 (was <0.26),
and torchcodec to <0.13 (was <0.11) to allow installs against the latest
stable torch 2.11 and the upcoming 2.12 line.
- Replace removed torch.get_autocast_gpu_dtype() with torch.get_autocast_dtype("cuda")
in Florence2 and Qwen2.5-VL-MoE FlashAttention paths (the former is removed in 2.11+).
- Refresh uv.lock for the new resolution (torch 2.11.0+cu130, torchvision 0.26.0+cu130,
torchcodec 0.11.1, full CUDA 13 stack).
Verified locally with `uv sync --locked` from a clean .venv and the lerobot
test suite (pytest -n 8 --dist=loadfile --timeout=300). Failure set is
identical to the pre-bump baseline: 18 pre-existing failures
(test_sac_policy*, test_pi0_rtc*, test_pi05_rtc*, test_replay_buffer*),
0 new, 0 fixed.
AI assistance: this change was authored with Claude Code per AI_POLICY.md.
* fix(policies): use device-agnostic autocast dtype lookup
Pass query_states.device.type to torch.get_autocast_dtype() instead of
hardcoding 'cuda', so the cast matches the active autocast context when
running under CPU/MPS/XPU autocast.
---------
Co-authored-by: Steven Palma <imstevenpmwork@ieee.org>