Compare commits


27 Commits

Author SHA1 Message Date
Martino Russi b320530482 add amazon policies 2025-12-02 16:23:52 +01:00
Martino Russi 4bdd2475b0 complete work 2025-11-27 10:22:11 +01:00
Martino Russi 5d9266b024 simplify implementation 2025-11-26 15:37:04 +01:00
Martino Russi c7834c3db8 simplified robot class 2025-11-26 15:11:45 +01:00
Martino Russi c65866ddd8 sync updates 2025-11-26 13:11:01 +01:00
Martino Russi bebf9b8480 mujoco sim viewer integration 2025-11-21 21:41:24 +01:00
Martino Russi 3be342a00d remove leftovers from unitree_il_lerobot 2025-11-21 14:49:03 +01:00
Martino Russi e6c16a60b1 load mujoco sim from hub 2025-11-21 14:42:55 +01:00
Martino Russi 786f4df529 test 2025-11-21 14:38:22 +01:00
Martino Russi 58739f4b7a signed test 2025-11-21 14:13:52 +01:00
Martino Russi b6e606c28d trace test 2025-11-21 14:13:39 +01:00
Martino Russi c477c54e3c move test items to .dev 2025-11-21 14:13:05 +01:00
Martino Russi f30e15d411 move test scripts inside .dev 2025-11-21 14:13:05 +01:00
Martino Russi 8f06c02c17 add robot models xml etc
remove isaaclab, add test scripts

added simulator to the hub

remove mujoco env
2025-11-21 14:13:05 +01:00
Martino Russi 9a052566a3 sync recent changes 2025-11-21 14:13:05 +01:00
Martino Russi e5cae6be64 add weighted moving filter, assets 2025-11-21 14:13:04 +01:00
Martino Russi 56b66b9542 add example config for custom teleop class 2025-11-21 14:13:04 +01:00
Martino Russi 30c3bbef7b add custom teleoperator, integrate unitree g1 except for eval 2025-11-21 14:13:04 +01:00
Martino Russi c4bf27772c finish unitree camera integration 2025-11-21 14:13:04 +01:00
Martino Russi 3422f2cb01 find cameras via zmq 2025-11-21 09:55:01 +01:00
Martino Russi c365bcd0a5 begin unitree_g1 integration 2025-11-21 09:55:01 +01:00
resolver101757 2cbd6649f2 bug causes error uploading to huggingface, unicode issue on windows. (#450) 2025-11-21 09:55:00 +01:00
Alexander Soare c749fba0f5 empty commit 2025-11-21 09:55:00 +01:00
Michel Aractingi 0f551df8f4 add absolute_to_reative_idx for remapping indicies when a subset of data is loaded (#2490) 2025-11-20 14:05:31 +01:00
Jade Choghari 6e86a69dcd feat(envs): add envs pre-post processor (#2474)
* more changes

* working changes

* more changes

* more fixes

* fix style

* more

* clean

* put axis-1

* more fixes

* more styling fixes:

* iterate on review:

* more changes

* add env processor

* style

* more changes

* add docs

* fix imports

* fix test, add to train

* Update src/lerobot/envs/factory.py

Co-authored-by: Michel Aractingi <michel.aractingi@huggingface.co>
Signed-off-by: Jade Choghari <chogharijade@gmail.com>

* iterate on review

---------

Signed-off-by: Jade Choghari <chogharijade@gmail.com>
Co-authored-by: jade.choghari@huggingface.co <“chogharijade@gmail.com”>
Co-authored-by: Michel Aractingi <michel.aractingi@huggingface.co>
2025-11-19 18:36:14 +01:00
Eugene Mironov 8a915c6b6f [RTC] Real Time Chunking for Pi0, Smolvla, Pi0.5 (#1698)
* Add Real-Time Chunking (RTC) support for flow matching models

Implement Real-Time Chunking (RTC) for action chunking policies using flow
matching denoising. RTC enables smooth action transitions between consecutive
chunks by using prefix guidance during denoising.

Key features:
- RTCProcessor class with denoise_step method for RTC guidance
- Tracker system for debug tracking using time-based dictionary storage
- RTCDebugVisualizer with comprehensive visualization utilities
- Integration with SmolVLA policy for flow matching models
- Support for multiple prefix attention schedules (ZEROS, ONES, LINEAR, EXP)
- Configurable execution horizon and max guidance weight
- Example scripts for dataset evaluation and real-time control

Technical details:
- Uses autograd-based gradient computation for RTC corrections
- Time-based tracking eliminates duplicate step issues
- Proxy methods in RTCProcessor for cleaner API
- Full integration with LeRobot's policy and dataset systems

Files added/modified:
- src/lerobot/configs/types.py: Add RTCAttentionSchedule enum
- src/lerobot/policies/rtc/: Core RTC implementation
  - configuration_rtc.py: RTC configuration
  - modeling_rtc.py: RTCProcessor with denoise_step
  - debug_handler.py: Tracker for debug information
  - debug_visualizer.py: Visualization utilities
- src/lerobot/policies/smolvla/modeling_smolvla.py: RTC integration
- examples/rtc/: Example scripts and evaluation tools

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Alexander Soare <alexander.soare159@gmail.com>
Co-Authored-By: Claude <noreply@anthropic.com>

* Fix rtc_config attribute access in SmolVLA

Use getattr() to safely check for rtc_config attribute existence
instead of direct attribute access. This fixes AttributeError when
loading policies without rtc_config in their config.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Alexander Soare <alexander.soare159@gmail.com>
Co-Authored-By: Claude <noreply@anthropic.com>

* fixup! Fix rtc_config attribute access in SmolVLA

* Add RTCConfig field to SmolVLAConfig

Add rtc_config as an optional field in SmolVLAConfig to properly
support Real-Time Chunking configuration. This replaces the previous
getattr() workarounds with direct attribute access, making the code
cleaner and more maintainable.

Changes:
- Import RTCConfig in configuration_smolvla.py
- Add rtc_config: RTCConfig | None = None field
- Revert getattr() calls to direct attribute access in modeling_smolvla.py

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Alexander Soare <alexander.soare159@gmail.com>
Co-Authored-By: Claude <noreply@anthropic.com>

* Refactor RTC enabled checks to use _rtc_enabled helper

Add _rtc_enabled() helper method in VLAFlowMatching class to simplify
and clean up RTC enabled checks throughout the code. This reduces
code duplication and improves readability.

Changes:
- Add _rtc_enabled() method in VLAFlowMatching
- Replace verbose rtc_config checks with _rtc_enabled() calls
- Maintain exact same functionality with cleaner code

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Alexander Soare <alexander.soare159@gmail.com>
Co-Authored-By: Claude <noreply@anthropic.com>

* Rename track_debug method to track

Simplify the method name from track_debug to just track for better
readability and consistency. The method already has clear documentation
about its debug tracking purpose.

Changes:
- Rename RTCProcessor.track_debug() to track()
- Update all call sites in modeling_smolvla.py and modeling_rtc.py

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Alexander Soare <alexander.soare159@gmail.com>
Co-Authored-By: Claude <noreply@anthropic.com>

* Use output_dir for saving all evaluation images

Update eval_dataset.py to save all comparison images to the
configured output_dir instead of the current directory. This provides
better organization and allows users to specify where outputs should be
saved.

Changes:
- Add os import at top level
- Create output_dir at start of run_evaluation()
- Save all comparison images to output_dir
- Remove duplicate os imports
- Update init_rtc_processor() docstring to be more concise

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Alexander Soare <alexander.soare159@gmail.com>
Co-Authored-By: Claude <noreply@anthropic.com>

* fixup! Use output_dir for saving all evaluation images

* Fix logging buffering and enable tracking when RTC config provided

- Add force=True to logging.basicConfig to override existing configuration
- Enable line buffering for stdout/stderr for real-time log output
- Modify init_rtc_processor to create processor when rtc_config exists
  even if RTC is disabled, allowing tracking of denoising data

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Alexander Soare <alexander.soare159@gmail.com>

* Refactor SmolVLA plotting to use tracker data instead of local variables

Remove local tracking variables (correction, x1_t, error) from the
denoising loop and instead retrieve plotting data from the RTC tracker
after each denoise step. This makes the code cleaner and uses the
tracker as the single source of truth for debug/visualization data.

Changes:
- Remove initialization of correction, x1_t, error before denoising loop
- After each Euler step, retrieve most recent debug step from tracker
- Extract correction, x1_t, err from debug step for plotting
- Update tracking condition to use is_debug_enabled() method

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Alexander Soare <alexander.soare159@gmail.com>

* Move plotting logic from modeling_smolvla to eval_dataset script

Refactor to improve separation of concerns:

modeling_smolvla.py changes:
- Remove all plotting logic from sample_actions method
- Remove viz_xt_axs, viz_vt_axs, viz_x1t_axs parameters
- Remove matplotlib and RTCDebugVisualizer imports
- Remove viz_fig, viz_axs, denoise_step_counter instance variables
- Simplify denoising loop to only track data in rtc_processor

eval_dataset.py changes:
- Add _plot_denoising_steps_from_tracker helper method
- Retrieve debug steps from tracker after inference
- Plot x_t, v_t, x1_t, correction, and error from tracker data
- Enable debug tracking (cfg.rtc.debug = True) for visualization
- Remove viz axes parameters from predict_action_chunk calls

modeling_rtc.py changes:
- Remove v_t from track() call (handled by user change)

Benefits:
- Cleaner modeling code focused on inference
- Evaluation script owns all visualization logic
- Better separation of concerns
- Tracker is single source of truth for debug data

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Alexander Soare <alexander.soare159@gmail.com>

* Refactor plotting loging

* fixup! Refactor plotting loging

* Improve visualization: separate correction plot and fix axis scaling

Changes:
- Create separate figure for correction data instead of overlaying on v_t
- Add _rescale_axes helper method to properly scale all axes
- Add 10% margin to y-axis for better visualization
- Fix v_t chart vertical compression issue

Benefits:
- Clearer v_t plot without correction overlay
- Better axis scaling with proper margins
- Separate correction figure for focused analysis
- Improved readability of all denoising visualizations

Output files:
- denoising_xt_comparison.png (x_t trajectories)
- denoising_vt_comparison.png (v_t velocity - now cleaner)
- denoising_correction_comparison.png (NEW - separate corrections)
- denoising_x1t_comparison.png (x1_t state with error)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-Authored-By: Alexander Soare <alexander.soare159@gmail.com>

* fixup! Improve visualization: separate correction plot and fix axis scaling

* fixup! fixup! Improve visualization: separate correction plot and fix axis scaling

* fixup! fixup! fixup! Improve visualization: separate correction plot and fix axis scaling

* Fix traacking

* Right kwargs for the policy

* Add tests for tracker

* Fix tests

* Drop not required methods

* Add torch compilation for eval_dataset

* delete policies

* Add matplotliv to dev

* fixup! Add matplotliv to dev

* Experiemnt with late detach

* Debug

* Fix compilation

* Add RTC to PI0

* Pi0

* Pi0 eval dataset

* fixup! Pi0 eval dataset

* Turn off compilation for pi0/pi05

* fixup! Turn off compilation for pi0/pi05

* fixup! fixup! Turn off compilation for pi0/pi05

* fixup! fixup! fixup! Turn off compilation for pi0/pi05

* fixup! fixup! fixup! fixup! Turn off compilation for pi0/pi05

* fixup! fixup! fixup! fixup! fixup! Turn off compilation for pi0/pi05

* Add workable flow

* Small fixes

* Add more tests

* Add validatio at the end

* Update README

* Silent validation

* Fix tests

* Add tests for modeling_rtc

* Add tests for flow matching models with RTC

* fixup! Add tests for flow matching models with RTC

* fixup! fixup! Add tests for flow matching models with RTC

* Add one more test

* fixup! Add one more test

* Fix test to use _rtc_enabled() instead of is_rtc_enabled()

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fixup! Fix test to use _rtc_enabled() instead of is_rtc_enabled()

* fixup! fixup! Fix test to use _rtc_enabled() instead of is_rtc_enabled()

* Add RTC initialization tests without config for PI0.5 and SmolVLA

Add test_pi05_rtc_initialization_without_rtc_config and
test_smolvla_rtc_initialization_without_rtc_config to verify that
policies can initialize without RTC config and that _rtc_enabled()
returns False in this case.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Fix PI0.5 init_rtc_processor to use getattr instead of direct model access

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Fix SmolVLA init_rtc_processor to use getattr instead of direct model access

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* Fix PI0.5 RTC tests to use quantile stats (q01, q99) for normalization

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fixup! Fix PI0.5 RTC tests to use quantile stats (q01, q99) for normalization

* Fixup eval with real robot

* fixup! Fixup eval with real robot

* fixup! fixup! Fixup eval with real robot

* Extract simulator logic from eval_with real robot and add proper headers to files

* Update images

* Fix tests

* fixup! Fix tests

* add docs for rtc

* enhance doc and add images

* Fix instal instructions

---------
Co-authored-by: Ben Zhang <benzhangniu@gmail.com>
Co-authored-by: Alexander Soare <alexander.soare159@gmail.com>
Co-authored-by: Michel Aractingi <michel.aractingi@huggingface.co>
2025-11-19 11:19:48 +01:00
Michel Aractingi b464d9f8bc Fix episode filtering bug when requesting a subset of the episodes in a dataset (#2456)
* filter episodes in load_nested_dataset

* nit

* remove test filtering

* move import to module level

* added missing episode indices to the EpisodeAwareSampler in lerobot_train.py;
2025-11-18 17:26:41 +01:00
136 changed files with 14484 additions and 61 deletions
-4
@@ -173,7 +173,3 @@ outputs/
 # Dev folders
 .cache/*
-*.stl
-*.urdf
-*.xml
-*.part
+2
@@ -63,6 +63,8 @@
title: Implement your own processor
- local: processors_robots_teleop
title: Processors for Robots and Teleoperators
- local: env_processor
title: Environment Processors
title: "Robot Processors"
- sections:
- local: so101
+418
@@ -0,0 +1,418 @@
# Environment Processors
Environment processors are a critical layer in LeRobot's data processing architecture that handles **environment-specific** transformations, separate from policy-specific processing. This separation of concerns enables cleaner code, better modularity, and easier experimentation with different environments and policies.
## Why Environment Processors?
When working with different robot environments (LIBERO, MetaWorld, Aloha, etc.), each environment often has unique data formats, coordinate systems, and conventions that need standardization **before** policy processing. Without environment processors, these transformations would be:
1. **Hardcoded in environment code** - Making it difficult to experiment with different state representations
2. **Duplicated across policies** - Each policy would need to handle environment-specific quirks
3. **Mixed with policy logic** - Violating separation of concerns and making debugging harder
Environment processors solve this by providing a **dedicated processing layer** between raw environment observations and policy inputs.
## The Processing Pipeline
Here's how data flows through the complete processing pipeline during evaluation:
```python
# In lerobot_eval.py rollout() function:

# 1. Raw environment observation (numpy arrays, various formats)
raw_observation = env.step(action)

# 2. Convert numpy to torch, normalize images [0,1]
observation = preprocess_observation(raw_observation)

# 3. Add task metadata (for multi-task environments)
observation = add_envs_task(env, observation)

# 4. ENVIRONMENT-SPECIFIC preprocessing (NEW!)
# - Flatten robot states
# - Rotate images to match dataset conventions
# - Handle environment-specific coordinate systems
observation = env_preprocessor(observation)

# 5. POLICY-SPECIFIC preprocessing
# - Normalize with dataset statistics
# - Add batch dimensions
# - Move to GPU
# - Tokenize language instructions
observation = preprocessor(observation)

# 6. Policy inference
action = policy.select_action(observation)

# 7. POLICY-SPECIFIC postprocessing
# - Unnormalize actions
# - Remove batch dimensions
action = postprocessor(action)

# 8. ENVIRONMENT-SPECIFIC postprocessing (NEW!)
# - Convert action formats if needed
# - Apply environment-specific constraints
action_transition = {"action": action}
action_transition = env_postprocessor(action_transition)
action = action_transition["action"]

# 9. Execute in environment
env.step(action)
```
## The Benefits
### 1. **Separation of Concerns**
Environment processors handle transformations specific to the **environment's data format**, while policy processors handle transformations specific to the **model's requirements**.
```python
# ❌ Before: Mixed concerns
class LiberoVLAPolicy:
    def preprocess(self, obs):
        # Environment-specific: Flatten robot state (shouldn't be in policy!)
        state = self._flatten_robot_state(obs["robot_state"])
        # Policy-specific: Normalize with dataset stats
        state = self.normalizer(state)
        return state

# ✅ After: Clear separation
# Environment processor: Handles LIBERO's nested robot state
env_preprocessor = LiberoProcessorStep()  # Flattens robot_state

# Policy processor: Handles model requirements
policy_preprocessor = NormalizerProcessorStep(stats=dataset_stats)
```
### 2. **Flexibility and Reusability**
The same policy can work with different environment processors, and the same environment processor can work with different policies:
```python
# Use SmolVLA policy with LIBERO environment
libero_preprocessor, libero_postprocessor = make_env_pre_post_processors(libero_cfg)
smolvla_preprocessor, smolvla_postprocessor = make_pre_post_processors(smolvla_cfg)
# Or use ACT policy with the same LIBERO environment
libero_preprocessor, libero_postprocessor = make_env_pre_post_processors(libero_cfg)
act_preprocessor, act_postprocessor = make_pre_post_processors(act_cfg)
```
### 3. **Easier Experimentation**
Want to try different state representations for LIBERO? Just create a new processor:
```python
# Original: 8D state (pos + quat→axisangle + gripper)
@ProcessorStepRegistry.register("libero_processor")
class LiberoProcessorStep(ObservationProcessorStep):
    def _process_observation(self, obs):
        eef_pos = robot_state["eef"]["pos"]       # 3D
        eef_axisangle = quat2axisangle(quat)      # 3D
        gripper = robot_state["gripper"]["qpos"]  # 2D
        state = torch.cat([eef_pos, eef_axisangle, gripper], dim=-1)  # 8D
        return state

# Experiment: Add velocity for better control
@ProcessorStepRegistry.register("libero_velocity_processor")
class LiberoVelocityProcessorStep(ObservationProcessorStep):
    def _process_observation(self, obs):
        # Include velocities for 14D state
        eef_pos = robot_state["eef"]["pos"]           # 3D
        eef_axisangle = quat2axisangle(quat)          # 3D
        eef_vel = robot_state["eef"]["vel"]           # 3D (NEW)
        gripper_pos = robot_state["gripper"]["qpos"]  # 2D
        gripper_vel = robot_state["gripper"]["qvel"]  # 3D (NEW)
        state = torch.cat([eef_pos, eef_axisangle, eef_vel,
                           gripper_pos, gripper_vel], dim=-1)  # 14D
        return state
```
### 4. **Cleaner Environment Code**
Environments expose **all available data** without needing to know what downstream models will use:
```python
# LIBERO environment exposes full robot state
observation = {
    "pixels": {"image": img, "image2": img2},
    "robot_state": {
        "eef": {"pos": ..., "quat": ..., "vel": ..., "mat": ..., "axisangle": ...},
        "gripper": {"qpos": ..., "qvel": ...},
        "joints": {"pos": ..., "vel": ...},
    },
}

# Environment processor decides what to use
# Policy processor handles model-specific transformations
```
## Using Environment Processors
### Factory Function
The `make_env_pre_post_processors` function follows the same pattern as `make_pre_post_processors` for policies:
```python
from lerobot.envs.factory import make_env_pre_post_processors
from lerobot.envs.configs import LiberoEnv, PushtEnv
# For LIBERO: Returns LiberoProcessorStep in preprocessor
libero_cfg = LiberoEnv(task="libero_spatial", camera_name=["agentview"])
env_preprocessor, env_postprocessor = make_env_pre_post_processors(libero_cfg)
# For other environments: Returns identity processors (no-op)
pusht_cfg = PushtEnv()
env_preprocessor, env_postprocessor = make_env_pre_post_processors(pusht_cfg)
```
### Implementation in `envs/factory.py`
```python
def make_env_pre_post_processors(
    env_cfg: EnvConfig,
) -> tuple[
    PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
    PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
]:
    """
    Create preprocessor and postprocessor pipelines for environment observations.

    Args:
        env_cfg: The configuration of the environment.

    Returns:
        A tuple containing:
        - preprocessor: Pipeline that processes environment observations
        - postprocessor: Pipeline that processes environment outputs
    """
    # For LIBERO environments, add the LiberoProcessorStep to preprocessor
    if isinstance(env_cfg, LiberoEnv) or "libero" in env_cfg.type:
        preprocessor = PolicyProcessorPipeline(steps=[LiberoProcessorStep()])
    else:
        # For all other environments, return an identity preprocessor
        preprocessor = PolicyProcessorPipeline(steps=[])

    # Postprocessor is currently identity for all environments
    # Future: Could add environment-specific action transformations
    postprocessor = PolicyProcessorPipeline(steps=[])

    return preprocessor, postprocessor
```
### Integration in Evaluation
In `lerobot_eval.py`, the environment processors are created once and used throughout:
```python
def eval_main(cfg: EvalPipelineConfig):
    # Create environment
    envs = make_env(cfg.env, n_envs=cfg.eval.batch_size)

    # Create policy
    policy = make_policy(cfg=cfg.policy, env_cfg=cfg.env)

    # Create policy processors
    preprocessor, postprocessor = make_pre_post_processors(
        policy_cfg=cfg.policy,
        pretrained_path=cfg.policy.pretrained_path,
    )

    # Create environment processors (NEW!)
    env_preprocessor, env_postprocessor = make_env_pre_post_processors(env_cfg=cfg.env)

    # Run evaluation with both processor types
    eval_policy_all(
        envs=envs,
        policy=policy,
        env_preprocessor=env_preprocessor,    # Environment-specific
        env_postprocessor=env_postprocessor,  # Environment-specific
        preprocessor=preprocessor,            # Policy-specific
        postprocessor=postprocessor,          # Policy-specific
        n_episodes=cfg.eval.n_episodes,
    )
```
## Example: LIBERO Environment Processor
The `LiberoProcessorStep` demonstrates a real-world environment processor:
```python
from lerobot.processor.pipeline import ObservationProcessorStep

@dataclass
@ProcessorStepRegistry.register(name="libero_processor")
class LiberoProcessorStep(ObservationProcessorStep):
    """
    Processes LIBERO observations into the LeRobot format.

    **State Processing:**
    - Extracts end-effector position (3D)
    - Converts quaternion to axis-angle representation (3D)
    - Extracts gripper joint positions (2D)
    - Concatenates into 8D state vector

    **Image Processing:**
    - Rotates images 180° to match HuggingFaceVLA/libero convention
    """

    def _process_observation(self, observation):
        processed_obs = observation.copy()

        # Process images: Flip 180° for camera convention
        for key in list(processed_obs.keys()):
            if key.startswith("observation.images."):
                img = processed_obs[key]
                img = torch.flip(img, dims=[2, 3])  # Flip H and W
                processed_obs[key] = img

        # Process robot_state: Flatten to 8D vector
        if "observation.robot_state" in processed_obs:
            robot_state = processed_obs.pop("observation.robot_state")
            eef_pos = robot_state["eef"]["pos"]            # (B, 3)
            eef_quat = robot_state["eef"]["quat"]          # (B, 4)
            gripper_qpos = robot_state["gripper"]["qpos"]  # (B, 2)

            # Convert quaternion to axis-angle
            eef_axisangle = self._quat2axisangle(eef_quat)  # (B, 3)

            # Concatenate into single state vector
            state = torch.cat((eef_pos, eef_axisangle, gripper_qpos), dim=-1)
            state = state.float()
            processed_obs["observation.state"] = state

        return processed_obs
```
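The `_quat2axisangle` helper referenced above is not shown in this diff. As a minimal sketch of what such a conversion could look like, assuming `(x, y, z, w)` quaternion ordering (the actual helper may differ):

```python
import torch

def quat2axisangle(quat: torch.Tensor) -> torch.Tensor:
    """Convert (B, 4) quaternions in (x, y, z, w) order to (B, 3) axis-angle.

    Illustrative sketch only, not the helper from the diff.
    """
    xyz, w = quat[..., :3], quat[..., 3:4].clamp(-1.0, 1.0)
    angle = 2.0 * torch.acos(w)  # rotation magnitude
    sin_half = torch.sqrt((1.0 - w * w).clamp(min=1e-12))
    axis = xyz / sin_half        # unit rotation axis
    # Near-zero rotations: the axis is ill-defined, return the zero vector
    small = sin_half < 1e-5
    return torch.where(small, torch.zeros_like(xyz), axis * angle)

# Identity quaternion maps to zero rotation
print(quat2axisangle(torch.tensor([[0.0, 0.0, 0.0, 1.0]])))  # tensor([[0., 0., 0.]])
```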
### Why These Transformations?
1. **Image Rotation**: The HuggingFaceVLA/libero dataset has images rotated 180° from the raw LIBERO simulator. The processor handles this convention mismatch so policies trained on the dataset work seamlessly.
2. **State Flattening**: The raw LIBERO environment exposes nested dictionaries with all available state information (position, quaternion, velocity, matrix representation, etc.). The processor:
- Selects the relevant components (pos, quat, gripper)
- Converts quaternion to axis-angle (more suitable for learning)
- Flattens to a single 8D vector that policies expect
3. **Flexibility**: The environment still exposes **all** raw data. If you want to try different state representations (e.g., including velocities, using matrix representation instead of axis-angle), you can create a new processor without modifying the environment code.
## Adding Environment Processors for New Environments
To add environment processors for a new environment:
### 1. Create the Processor Step
```python
# In src/lerobot/processor/env_processor.py
@dataclass
@ProcessorStepRegistry.register(name="myenv_processor")
class MyEnvProcessorStep(ObservationProcessorStep):
    """Process observations from MyEnv."""

    def _process_observation(self, observation):
        processed = observation.copy()

        # Your environment-specific transformations
        if "myenv.specific.state" in processed:
            state = processed.pop("myenv.specific.state")
            # Transform to standard format
            processed["observation.state"] = self._transform_state(state)

        return processed
```
### 2. Update the Factory
```python
# In src/lerobot/envs/factory.py
def make_env_pre_post_processors(env_cfg: EnvConfig):
    if isinstance(env_cfg, LiberoEnv) or "libero" in env_cfg.type:
        preprocessor = PolicyProcessorPipeline(steps=[LiberoProcessorStep()])
    elif isinstance(env_cfg, MyEnvConfig) or "myenv" in env_cfg.type:
        preprocessor = PolicyProcessorPipeline(steps=[MyEnvProcessorStep()])
    else:
        preprocessor = PolicyProcessorPipeline(steps=[])
    postprocessor = PolicyProcessorPipeline(steps=[])
    return preprocessor, postprocessor
```
### 3. Use in Evaluation
No changes needed! The evaluation script automatically uses the appropriate processor:
```bash
lerobot-eval \
    --policy.path=lerobot/my_policy \
    --env.type=myenv \
    --eval.n_episodes=10  # env.type=myenv automatically selects MyEnvProcessorStep
```
## Future: Environment Postprocessors
Currently, postprocessors are identity (no-op) for all environments. Future use cases include:
### Action Space Transformations
```python
@dataclass
class MyEnvActionPostprocessor(ProcessorStep):
    """Convert policy actions to environment-specific format."""

    def __call__(self, transition: EnvTransition) -> EnvTransition:
        action = transition["action"]

        # Example: Convert from Cartesian to joint space
        if self.action_space == "joint":
            action = self.ik_solver(action)

        # Example: Apply environment-specific safety limits
        action = torch.clamp(action, self.min_action, self.max_action)

        transition["action"] = action
        return transition
```
### Coordinate System Conversions
```python
@dataclass
class CoordinateTransformPostprocessor(ProcessorStep):
    """Transform actions between coordinate systems."""

    def __call__(self, transition: EnvTransition) -> EnvTransition:
        action = transition["action"]
        # Example: Policy outputs in world frame, env expects base frame
        action = self.world_to_base_transform(action)
        transition["action"] = action
        return transition
```
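The `world_to_base_transform` above is left abstract. As a sketch, a rigid-frame change for positions could look like the following, where the base pose (rotation `R` and offset `t`) is hypothetical:

```python
import numpy as np

# Hypothetical base pose in the world frame: yawed 90° and offset 1 m in x.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])  # world-from-base rotation
t = np.array([1.0, 0.0, 0.0])  # base origin expressed in world coordinates

def world_to_base_transform(p_world: np.ndarray) -> np.ndarray:
    # Inverse rigid transform: p_base = R^T @ (p_world - t)
    return R.T @ (p_world - t)

print(world_to_base_transform(np.array([1.0, 2.0, 0.0])))  # ≈ [2, 0, 0]
```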
## Best Practices
1. **Keep environment processors simple**: They should only handle environment-specific data format issues, not complex learning-related transformations.
2. **Use policy processors for model requirements**: Normalization, batching, device placement, and tokenization belong in policy processors.
3. **Expose all data from environments**: Let processors decide what to use rather than hardcoding choices in the environment.
4. **Document conventions**: Clearly document any coordinate system conventions, camera orientations, or data formats that your processor handles.
5. **Test independently**: Environment processors should be testable without loading full policies or environments.
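For point 5, a processor step can be exercised with a synthetic observation dict and no environment or policy at all. A minimal sketch (the key names mirror the LIBERO example above, but this step is a stand-alone stand-in, not the registered class):

```python
import torch

class FlattenStateStep:
    """Stand-in processor step: flattens a nested state dict to one vector."""

    def __call__(self, observation: dict) -> dict:
        processed = dict(observation)
        if "observation.robot_state" in processed:
            rs = processed.pop("observation.robot_state")
            processed["observation.state"] = torch.cat(
                (rs["eef"]["pos"], rs["gripper"]["qpos"]), dim=-1
            ).float()
        return processed

# Synthetic observation: no simulator required
obs = {
    "observation.robot_state": {
        "eef": {"pos": torch.zeros(1, 3)},
        "gripper": {"qpos": torch.ones(1, 2)},
    }
}
out = FlattenStateStep()(obs)
print(out["observation.state"].shape)  # torch.Size([1, 5])
```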
## Summary
Environment processors provide a **clean separation** between environment-specific data transformations and policy-specific model requirements. This architecture:
- ✅ Enables easy experimentation with different state representations
- ✅ Allows policies to work seamlessly across different environments
- ✅ Keeps environment code focused on simulation/hardware interface
- ✅ Makes processor pipelines more maintainable and debuggable
- ✅ Follows the single responsibility principle
The key insight: **Environments define data formats, processors standardize them, policies consume standardized data.** Each layer has a clear, focused responsibility.
+1 -1
@@ -142,7 +142,7 @@ def _check_matplotlib_available():
     raise ImportError(
         "matplotlib is required for RTC debug visualizations. "
         "Please install it by running:\n"
-        " uv pip install -e '.[matplotlib-dep]'"
+        " uv pip install matplotlib"
     )
+107
@@ -0,0 +1,107 @@
import json
import time
import math
from pathlib import Path
# ---- key → (section, name, id)
MAP = {
# LEFT
"kLeftShoulderPitch.pos": ("left", "shoulder_pitch", 0),
"kLeftShoulderYaw.pos": ("left", "shoulder_yaw", 1),
"kLeftShoulderRoll.pos": ("left", "shoulder_roll", 2),
"kLeftElbow.pos": ("left", "elbow_flex", 3),
"kLeftWristRoll.pos": ("left", "wrist_roll", 4),
"kLeftWristYaw.pos": ("left", "wrist_yaw", 5),
"kLeftWristyaw.pos": ("left", "wrist_yaw", 5), # tolerate casing variant
"kLeftWristPitch.pos": ("left", "wrist_pitch", 6),
# RIGHT
"kRightShoulderPitch.pos": ("right", "shoulder_pitch", 0),
"kRightShoulderYaw.pos": ("right", "shoulder_yaw", 1),
"kRightShoulderRoll.pos": ("right", "shoulder_roll", 2),
"kRightElbow.pos": ("right", "elbow_flex", 3),
"kRightWristRoll.pos": ("right", "wrist_roll", 4),
"kRightWristYaw.pos": ("right", "wrist_yaw", 5),
"kRightWristPitch.pos": ("right", "wrist_pitch", 6),
}
# Output
CALIB_PATH = Path("calibration.json")
ROUND_TO_INT = False # set True if you want int ranges
# Init tracker: tracker["left"]["shoulder_pitch"] = {...}
tracker = {"left": {}, "right": {}}
for sec, name, idx in MAP.values():
if name not in tracker[sec]:
tracker[sec][name] = {
"id": idx,
"drive_mode": 0,
"homing_offset": 0,
"range_min": math.inf,
"range_max": -math.inf,
}
def _to_float(x):
# unwrap numpy / torch scalars if present
if hasattr(x, "item"):
try:
x = x.item()
except Exception:
pass
return float(x)
def update_tracker(obs: dict):
for k, v in obs.items():
if k not in MAP:
continue
sec, name, _ = MAP[k]
try:
x = _to_float(v)
except Exception:
continue
t = tracker[sec][name]
if x < t["range_min"]:
t["range_min"] = x
if x > t["range_max"]:
t["range_max"] = x
def dump_calibration(path: Path):
out = {"left": {}, "right": {}}
for sec in ("left", "right"):
for name, d in tracker[sec].items():
mn, mx = d["range_min"], d["range_max"]
if ROUND_TO_INT:
mn = None if mn is math.inf else int(round(mn))
mx = None if mx is -math.inf else int(round(mx))
else:
mn = None if mn is math.inf else mn
mx = None if mx is -math.inf else mx
out[sec][name] = {
"id": d["id"],
"drive_mode": d["drive_mode"],
"homing_offset": d["homing_offset"],
"range_min": mn,
"range_max": mx,
}
path.write_text(json.dumps(out, indent=4))
print(f"Saved calibration to {path.resolve()}")
import time
from lerobot.robots.unitree_g1.unitree_g1 import UnitreeG1
from lerobot.robots.unitree_g1.config_unitree_g1 import UnitreeG1Config
config = UnitreeG1Config(
motion_mode=False,
simulation_mode=False
)
robot = UnitreeG1(config)
try:
while True:
observation = robot.get_observation()
update_tracker(observation)
robot.send_action(observation) # mirror, if desired
time.sleep(0.01)
except KeyboardInterrupt:
dump_calibration(CALIB_PATH)
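The tracker above boils down to a running min/max per joint, seeded with ±inf sentinels. A stdlib-only sketch of that pattern (function and joint names are illustrative, not part of the codebase):

```python
import math

def track_ranges(samples):
    """Running min/max per joint, seeded with +inf/-inf like the tracker above."""
    ranges = {}
    for name, value in samples:
        lo, hi = ranges.get(name, (math.inf, -math.inf))
        ranges[name] = (min(lo, float(value)), max(hi, float(value)))
    # math.isinf() is the reliable "never updated" test; identity checks
    # against -math.inf are not dependable across code objects.
    return {
        name: (None if math.isinf(lo) else lo, None if math.isinf(hi) else hi)
        for name, (lo, hi) in ranges.items()
    }

samples = [("elbow_flex", -0.4), ("elbow_flex", 1.2), ("wrist_roll", 0.1)]
ranges = track_ranges(samples)
assert ranges["elbow_flex"] == (-0.4, 1.2)
```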
BIN
View File
Binary image file (1.5 MiB) not shown.
+5
View File
@@ -43,6 +43,11 @@ def make_cameras_from_configs(camera_configs: dict[str, CameraConfig]) -> dict[s
cameras[key] = Reachy2Camera(cfg)
elif cfg.type == "zmq":
from .zmq import ZMQCamera
cameras[key] = ZMQCamera(cfg)
else:
try:
cameras[key] = cast(Camera, make_device_from_device_class(cfg))
+16
View File
@@ -0,0 +1,16 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .camera_zmq import ZMQCamera
from .configuration_zmq import ZMQCameraConfig
+623
View File
@@ -0,0 +1,623 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Provides the ZMQCamera class for capturing frames from remote cameras via ZeroMQ.
"""
import json
import logging
import os
import threading
import time
from pathlib import Path
from threading import Event, Lock, Thread
from typing import Any
import base64
import cv2
import numpy as np
import zmq
from numpy.typing import NDArray
import msgpack
import msgpack_numpy as m
from lerobot.utils.errors import DeviceAlreadyConnectedError, DeviceNotConnectedError
from ..camera import Camera
from ..configs import ColorMode
from .configuration_zmq import ZMQCameraConfig
logger = logging.getLogger(__name__)
class ZMQCamera(Camera):
"""
Manages camera interactions using ZeroMQ for remote frame streaming.
This class provides a high-level interface to connect to remote cameras
that stream JPEG-encoded images over ZeroMQ PUB/SUB sockets. It supports
both synchronous and asynchronous frame reading.
The camera server must be running and publishing JPEG images on the specified
address and port. Use the provided utility script to find available ZMQ cameras:
```bash
lerobot-find-cameras zmq
```
Example:
```python
from lerobot.cameras.zmq import ZMQCamera
from lerobot.cameras.zmq.configuration_zmq import ZMQCameraConfig, ColorMode
# Basic usage
config = ZMQCameraConfig(
server_address="192.168.123.164",
port=5554,
camera_name="remote_cam"
)
camera = ZMQCamera(config)
camera.connect()
# Read 1 frame synchronously
color_image = camera.read()
print(color_image.shape)
# Read 1 frame asynchronously
async_image = camera.async_read()
# When done, properly disconnect the camera
camera.disconnect()
```
"""
def __init__(self, config: ZMQCameraConfig):
"""
Initializes the ZMQCamera instance.
Args:
config: The configuration settings for the ZMQ camera.
"""
super().__init__(config)
self.config = config
self.server_address = config.server_address
self.port = config.port
self.camera_name = config.camera_name
self.color_mode = config.color_mode
self.timeout_ms = config.timeout_ms
self.context: zmq.Context | None = None
self.socket: zmq.Socket | None = None
self._connected = False
self.thread: Thread | None = None
self.stop_event: Event | None = None
self.frame_lock: Lock = Lock()
self.latest_frame: NDArray[Any] | None = None
self.new_frame_event: Event = Event()
# Format type detected during connection (msgpack, json, or raw_jpeg)
self._format_type: str | None = None
def __str__(self) -> str:
return f"{self.__class__.__name__}({self.camera_name}@{self.server_address}:{self.port})"
@property
def is_connected(self) -> bool:
"""Checks if the camera is currently connected."""
return self._connected and self.context is not None and self.socket is not None
def connect(self, warmup: bool = True) -> None:
"""
Connects to the ZMQ camera server and configures settings.
Args:
warmup: If True (default), captures a warmup frame before returning.
Raises:
DeviceAlreadyConnectedError: If the camera is already connected.
RuntimeError: If connection to the ZMQ server fails.
"""
if self.is_connected:
raise DeviceAlreadyConnectedError(f"{self} is already connected.")
logger.info(f"Connecting to {self}...")
try:
self.context = zmq.Context()
self.socket = self.context.socket(zmq.SUB)
self.socket.connect(f"tcp://{self.server_address}:{self.port}")
self.socket.setsockopt_string(zmq.SUBSCRIBE, "")
# Set receive timeout
self.socket.setsockopt(zmq.RCVTIMEO, self.timeout_ms)
self._connected = True
# Try to receive one frame to validate connection and detect format
try:
# Try each format until one works
test_frame = None
for format_type in ["msgpack", "json", "raw_jpeg"]:
try:
test_frame = self.read(format=format_type)
self._format_type = format_type
logger.info(f"{self} detected format: {format_type}")
break
except Exception as e:
logger.debug(f"{self} format '{format_type}' failed: {e}")
continue
if test_frame is None:
raise RuntimeError("Failed to decode frame with any supported format (msgpack, json, raw_jpeg)")
# Auto-detect resolution if not specified
if self.width is None or self.height is None:
h, w = test_frame.shape[:2]
self.height = h
self.width = w
logger.info(f"{self} auto-detected resolution: {w}x{h}")
logger.info(f"{self} connected successfully.")
if warmup:
logger.debug(f"Warming up {self}...")
time.sleep(0.1) # Brief warmup period
except Exception as e:
self._connected = False
if self.socket:
self.socket.close()
if self.context:
self.context.term()
self.socket = None
self.context = None
raise RuntimeError(f"Failed to receive initial frame from {self}: {e}")
except Exception as e:
self._connected = False
if self.socket:
self.socket.close()
if self.context:
self.context.term()
self.socket = None
self.context = None
raise RuntimeError(f"Failed to connect to {self}: {e}")
@staticmethod
def find_cameras(
subnet: str | None = None,
ports: list[int] | None = None,
timeout_ms: int = 200,
) -> list[dict[str, Any]]:
"""
Scans the local network for ZMQ cameras (fast parallel scan).
Uses threading to scan multiple hosts simultaneously. Without parallelization,
scanning 254 hosts would take 6+ minutes. With threads, takes ~10-15 seconds.
Args:
subnet: Network subnet to scan (e.g., "192.168.1.0/24"). If None, auto-detects.
ports: List of ports to scan. Defaults to [5554, 5555, 5556].
timeout_ms: Connection timeout per host in milliseconds. Default: 200ms.
Returns:
List of dicts containing camera info (address, port, format, resolution).
Example:
>>> cameras = ZMQCamera.find_cameras()
>>> # Or specify: cameras = ZMQCamera.find_cameras(subnet="10.0.0.0/24", ports=[5554])
"""
import socket
import ipaddress
from concurrent.futures import ThreadPoolExecutor, as_completed
if ports is None:
ports = [5554, 5555, 5556]
# Auto-detect local subnet
if subnet is None:
try:
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("8.8.8.8", 80))
local_ip = s.getsockname()[0]
s.close()
subnet = ".".join(local_ip.split(".")[:-1]) + ".0/24"
logger.info(f"Auto-detected subnet: {subnet}")
except Exception as e:
logger.error(f"Failed to auto-detect subnet: {e}")
return []
# Parse subnet
try:
network = ipaddress.ip_network(subnet, strict=False)
hosts = list(network.hosts())
# Always include localhost (for MuJoCo sim, local servers)
hosts.insert(0, ipaddress.IPv4Address("127.0.0.1"))
except Exception as e:
logger.error(f"Invalid subnet '{subnet}': {e}")
return []
total = len(hosts) * len(ports)
logger.info(f"Scanning {len(hosts)} hosts × {len(ports)} ports = {total} targets (this takes ~10-15s)...")
def test_target(host_ip: str, port: int) -> dict | None:
"""Test one host:port for ZMQ camera."""
ctx = zmq.Context()
sock = ctx.socket(zmq.SUB)
sock.connect(f"tcp://{host_ip}:{port}")
sock.setsockopt_string(zmq.SUBSCRIBE, "")
sock.setsockopt(zmq.RCVTIMEO, timeout_ms)
# Wait for subscription to establish (ZMQ "slow joiner" problem)
time.sleep(0.1)
# Try receiving a few times
msg = None
for _ in range(3):
try:
msg = sock.recv()
break
except zmq.Again:
time.sleep(0.05)
if msg is None:
sock.close()
ctx.term()
return None
# Try formats: msgpack → json → raw_jpeg
frame = fmt = None
# Msgpack
try:
d = msgpack.unpackb(msg, object_hook=m.decode)
if isinstance(d, dict) and "images" in d and len(d["images"]) > 0:
img = next(iter(d["images"].values()))
if isinstance(img, str):
frame = cv2.imdecode(np.frombuffer(base64.b64decode(img), np.uint8), cv2.IMREAD_COLOR)
elif isinstance(img, np.ndarray):
frame = img
if frame is not None:
fmt = "msgpack"
except Exception:
pass
# JSON
if frame is None:
try:
d = json.loads(msg.decode('utf-8'))
if isinstance(d, dict):
for v in d.values():
if isinstance(v, str) and len(v) > 100:
try:
frame = cv2.imdecode(np.frombuffer(base64.b64decode(v), np.uint8), cv2.IMREAD_COLOR)
if frame is not None:
fmt = "json"
break
except Exception:
pass
except Exception:
pass
# Raw JPEG
if frame is None:
try:
frame = cv2.imdecode(np.frombuffer(msg, np.uint8), cv2.IMREAD_COLOR)
if frame is not None:
fmt = "raw_jpeg"
except Exception:
pass
sock.close()
ctx.term()
if frame is not None:
h, w = frame.shape[:2]
return {
"name": f"ZMQ @ {host_ip}:{port}",
"type": "ZMQ",
"id": f"{host_ip}:{port}",
"server_address": host_ip,
"port": port,
"camera_name": f"cam_{host_ip.replace('.', '_')}_{port}",
"format": fmt,
"default_stream_profile": {"width": w, "height": h, "format": fmt.upper()},
}
return None
# Parallel scan with thread pool
found = []
with ThreadPoolExecutor(max_workers=100) as ex:
futures = [ex.submit(test_target, str(h), p) for h in hosts for p in ports]
for i, fut in enumerate(as_completed(futures), 1):
if i % 100 == 0:
logger.info(f" Progress: {i}/{total} ({100*i//total}%)")
res = fut.result()
if res:
found.append(res)
logger.info(f"{res['server_address']}:{res['port']} ({res['format']})")
logger.info(f"Scan complete! Found {len(found)} camera(s).")
return found
def read(self, color_mode: ColorMode | None = None, format: str | None = None) -> NDArray[Any]:
"""
Reads a single frame synchronously from the ZMQ camera.
Supports three message formats:
1. "msgpack": Msgpack with base64 JPEGs: {"timestamps": {...}, "images": {camera_name: "b64"}}
(used by MuJoCo sim)
2. "json": JSON with base64 JPEGs: {"state": 0.0, "camera_name": "b64jpeg"}
(used by LeKiwi-style servers)
3. "raw_jpeg": Raw JPEG bytes (used by Unitree G1 head camera)
Args:
color_mode: Target color mode (RGB or BGR). If None, uses self.color_mode.
format: Message format to use. If None, uses auto-detected format from connect().
One of: "msgpack", "json", "raw_jpeg"
Returns:
np.ndarray: Decoded frame in shape (height, width, 3)
Raises:
DeviceNotConnectedError: If camera is not connected
TimeoutError: If no frame received within timeout_ms
RuntimeError: If frame decoding fails
"""
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
if self.socket is None:
raise DeviceNotConnectedError(f"{self} socket is not initialized")
# Use detected format if not specified
if format is None:
format = self._format_type
if format is None:
raise RuntimeError(f"{self} format not specified and not auto-detected during connect()")
start_time = time.perf_counter()
try:
message = self.socket.recv()
except zmq.Again:
raise TimeoutError(f"{self} timeout waiting for frame after {self.timeout_ms}ms")
except Exception as e:
raise RuntimeError(f"{self} read failed: {e}")
frame = None
# Decode based on format
if format == "msgpack":
data = msgpack.unpackb(message, object_hook=m.decode)
if not isinstance(data, dict) or "images" not in data:
raise RuntimeError(f"{self} invalid msgpack format: expected dict with 'images' key")
images_dict = data["images"]
# Prefer named camera if present
if self.camera_name in images_dict:
img_data = images_dict[self.camera_name]
elif len(images_dict) > 0:
# Fallback: first available camera
img_data = next(iter(images_dict.values()))
else:
raise RuntimeError(f"{self} no images found in msgpack message")
# Decode the image data
if isinstance(img_data, str):
color_bytes = base64.b64decode(img_data)
np_img = np.frombuffer(color_bytes, dtype=np.uint8)
frame = cv2.imdecode(np_img, cv2.IMREAD_COLOR)
elif isinstance(img_data, np.ndarray):
frame = img_data
else:
raise RuntimeError(f"{self} unknown image payload type: {type(img_data)}")
elif format == "json":
data = json.loads(message.decode('utf-8'))
if not isinstance(data, dict) or self.camera_name not in data:
raise RuntimeError(f"{self} invalid JSON format: expected dict with '{self.camera_name}' key")
img_b64 = data[self.camera_name]
if not isinstance(img_b64, str):
raise RuntimeError(f"{self} expected base64 string in JSON, got {type(img_b64)}")
color_bytes = base64.b64decode(img_b64)
np_img = np.frombuffer(color_bytes, dtype=np.uint8)
frame = cv2.imdecode(np_img, cv2.IMREAD_COLOR)
elif format == "raw_jpeg":
np_img = np.frombuffer(message, dtype=np.uint8)
frame = cv2.imdecode(np_img, cv2.IMREAD_COLOR)
else:
raise ValueError(f"{self} unsupported format: {format}. Use 'msgpack', 'json', or 'raw_jpeg'")
if frame is None or not isinstance(frame, np.ndarray):
raise RuntimeError(f"{self} failed to decode image using format '{format}'")
processed_frame = self._postprocess_image(frame, color_mode)
read_duration_ms = (time.perf_counter() - start_time) * 1e3
logger.debug(f"{self} read took: {read_duration_ms:.1f}ms")
return processed_frame
def _postprocess_image(self, image: NDArray[Any], color_mode: ColorMode | None = None) -> NDArray[Any]:
"""
Applies color conversion to a raw frame.
Args:
image: The raw image frame (BGR format from cv2.imdecode).
color_mode: The target color mode (RGB or BGR). If None, uses self.color_mode.
Returns:
np.ndarray: The processed image frame.
Raises:
ValueError: If the requested color_mode is invalid.
RuntimeError: If the frame dimensions don't match expectations.
"""
requested_color_mode = self.color_mode if color_mode is None else color_mode
if requested_color_mode not in (ColorMode.RGB, ColorMode.BGR):
raise ValueError(
f"Invalid color mode '{requested_color_mode}'. Expected {ColorMode.RGB} or {ColorMode.BGR}."
)
h, w, c = image.shape
# Validate dimensions if they were specified
if self.height is not None and self.width is not None:
if h != self.height or w != self.width:
logger.warning(
f"{self} frame dimensions ({w}x{h}) don't match configured ({self.width}x{self.height}). "
"This might be expected if the server sends different resolutions."
)
if c != 3:
raise RuntimeError(f"{self} frame channels={c} do not match expected 3 channels (RGB/BGR).")
processed_image = image
if requested_color_mode == ColorMode.RGB:
processed_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
return processed_image
def _read_loop(self) -> None:
"""
Internal loop run by the background thread for asynchronous reading.
On each iteration:
1. Reads a frame from ZMQ
2. Stores result in latest_frame (thread-safe)
3. Sets new_frame_event to notify listeners
Stops on DeviceNotConnectedError, logs other errors and continues.
"""
if self.stop_event is None:
raise RuntimeError(f"{self}: stop_event is not initialized before starting read loop.")
while not self.stop_event.is_set():
try:
frame = self.read()
with self.frame_lock:
self.latest_frame = frame
self.new_frame_event.set()
except DeviceNotConnectedError:
break
except TimeoutError:
# Timeout is expected occasionally, just continue
logger.debug(f"{self} read timeout in background thread")
except Exception as e:
logger.warning(f"Error reading frame in background thread for {self}: {e}")
def _start_read_thread(self) -> None:
"""Starts or restarts the background read thread if it's not running."""
# Signal the old loop to stop *before* joining it; joining first would
# wait on a thread that has not yet been told to exit.
if self.stop_event is not None:
self.stop_event.set()
if self.thread is not None and self.thread.is_alive():
self.thread.join(timeout=0.1)
self.stop_event = Event()
self.thread = Thread(target=self._read_loop, args=(), name=f"{self}_read_loop")
self.thread.daemon = True
self.thread.start()
def _stop_read_thread(self) -> None:
"""Signals the background read thread to stop and waits for it to join."""
if self.stop_event is not None:
self.stop_event.set()
if self.thread is not None and self.thread.is_alive():
self.thread.join(timeout=2.0)
self.thread = None
self.stop_event = None
def async_read(self, timeout_ms: float = 10000) -> NDArray[Any]:
"""
Reads the latest available frame asynchronously.
This method retrieves the most recent frame captured by the background
read thread. It does not block waiting for ZMQ directly, but may wait
up to timeout_ms for the background thread to provide a frame.
Args:
timeout_ms: Maximum time in milliseconds to wait for a frame
to become available. Defaults to 10000 ms.
Returns:
np.ndarray: The latest captured frame as a NumPy array in the format
(height, width, channels), processed according to configuration.
Raises:
DeviceNotConnectedError: If the camera is not connected.
TimeoutError: If no frame becomes available within the specified timeout.
RuntimeError: If an unexpected error occurs.
"""
if not self.is_connected:
raise DeviceNotConnectedError(f"{self} is not connected.")
if self.thread is None or not self.thread.is_alive():
self._start_read_thread()
if not self.new_frame_event.wait(timeout=timeout_ms / 1000.0):
thread_alive = self.thread is not None and self.thread.is_alive()
raise TimeoutError(
f"Timed out waiting for frame from {self} after {timeout_ms} ms. "
f"Read thread alive: {thread_alive}."
)
with self.frame_lock:
frame = self.latest_frame
self.new_frame_event.clear()
if frame is None:
raise RuntimeError(f"Internal error: Event set but no frame available for {self}.")
return frame
def disconnect(self) -> None:
"""
Disconnects from the ZMQ camera and cleans up resources.
Stops the background read thread (if running) and closes the ZMQ socket.
Raises:
DeviceNotConnectedError: If the camera is already disconnected.
"""
if not self.is_connected and self.thread is None:
raise DeviceNotConnectedError(f"{self} not connected.")
if self.thread is not None:
self._stop_read_thread()
if self.socket is not None:
self.socket.close()
self.socket = None
if self.context is not None:
self.context.term()
self.context = None
self._connected = False
logger.info(f"{self} disconnected.")
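The "json" wire format that `read()` handles, a dict mapping the camera name to a base64-encoded JPEG string, can be sketched end-to-end with the stdlib alone. The helper names and the stand-in JPEG bytes below are illustrative, not part of the codebase:

```python
import base64
import json

def encode_frame_json(camera_name: str, jpeg_bytes: bytes) -> bytes:
    """Build a LeKiwi-style JSON payload: {"<camera_name>": "<base64 jpeg>"}."""
    payload = {camera_name: base64.b64encode(jpeg_bytes).decode("ascii")}
    return json.dumps(payload).encode("utf-8")

def decode_frame_json(message: bytes, camera_name: str) -> bytes:
    """Mirror of the 'json' branch in ZMQCamera.read(), minus the cv2 decode step."""
    data = json.loads(message.decode("utf-8"))
    return base64.b64decode(data[camera_name])

# Stand-in for real JPEG bytes (starts with the JPEG SOI marker)
fake_jpeg = b"\xff\xd8\xff\xe0" + b"\x00" * 16
msg = encode_frame_json("remote_cam", fake_jpeg)
assert decode_frame_json(msg, "remote_cam") == fake_jpeg
```

On the real camera, `cv2.imdecode` would then turn the recovered JPEG bytes into an ndarray, as in the `read()` method above.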
@@ -0,0 +1,78 @@
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass
from ..configs import CameraConfig, ColorMode
__all__ = ["ZMQCameraConfig", "ColorMode"]
@CameraConfig.register_subclass("zmq")
@dataclass
class ZMQCameraConfig(CameraConfig):
"""Configuration class for ZMQ-based remote camera streams.
This class provides configuration options for cameras accessed through ZeroMQ (ZMQ),
supporting remote camera streams over the network. The server must be running and
streaming JPEG-encoded images over a ZMQ PUB socket.
Example configurations:
```python
# Basic configuration
ZMQCameraConfig(
server_address="192.168.123.164",
port=5554,
camera_name="remote_cam_1"
)
# With custom resolution
ZMQCameraConfig(
server_address="10.0.0.100",
port=5555,
camera_name="lab_cam",
width=1280,
height=480,
fps=30
)
```
Attributes:
server_address: IP address or hostname of the ZMQ image server.
port: Port number where the ZMQ server is publishing images.
camera_name: Identifier name for this camera (for logging/debugging).
color_mode: Color mode for image output (RGB or BGR). Defaults to RGB.
timeout_ms: Timeout in milliseconds for receiving frames. Defaults to 5000 ms.
"""
server_address: str
port: int = 5554
camera_name: str = "zmq_camera"
color_mode: ColorMode = ColorMode.RGB
timeout_ms: int = 5000
def __post_init__(self) -> None:
if self.color_mode not in (ColorMode.RGB, ColorMode.BGR):
raise ValueError(
f"`color_mode` is expected to be {ColorMode.RGB.value} or {ColorMode.BGR.value}, but {self.color_mode} is provided."
)
if self.timeout_ms <= 0:
raise ValueError(f"`timeout_ms` must be positive, but {self.timeout_ms} is provided.")
if not self.server_address:
raise ValueError("`server_address` cannot be empty.")
if self.port <= 0 or self.port > 65535:
raise ValueError(f"`port` must be between 1 and 65535, but {self.port} is provided.")
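The `__post_init__` validation pattern used by this config generalizes to any dataclass: validate eagerly so a bad config fails at construction rather than at connect time. A stdlib-only sketch (class and field names are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class EndpointConfig:
    address: str
    port: int = 5554

    def __post_init__(self) -> None:
        # Fail fast at construction time, mirroring ZMQCameraConfig's checks
        if not self.address:
            raise ValueError("`address` cannot be empty.")
        if not (1 <= self.port <= 65535):
            raise ValueError(f"`port` must be between 1 and 65535, but {self.port} is provided.")

ok = EndpointConfig("192.168.1.10")
assert ok.port == 5554
try:
    EndpointConfig("host", port=70000)
except ValueError:
    pass  # invalid port rejected at construction
```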
+24 -6
View File
@@ -712,6 +712,15 @@ class LeRobotDataset(torch.utils.data.Dataset):
self.download(download_videos)
self.hf_dataset = self.load_hf_dataset()
# Create mapping from absolute indices to relative indices when only a subset of the episodes are loaded
# Build a mapping: absolute_index -> relative_index_in_filtered_dataset
self._absolute_to_relative_idx = None
if self.episodes is not None:
self._absolute_to_relative_idx = {
abs_idx.item() if isinstance(abs_idx, torch.Tensor) else abs_idx: rel_idx
for rel_idx, abs_idx in enumerate(self.hf_dataset["index"])
}
# Setup delta_indices
if self.delta_timestamps is not None:
check_delta_timestamps(self.delta_timestamps, self.fps, self.tolerance_s)
@@ -830,7 +839,7 @@ class LeRobotDataset(torch.utils.data.Dataset):
def load_hf_dataset(self) -> datasets.Dataset:
"""hf_dataset contains all the observations, states, actions, rewards, etc."""
features = get_hf_features_from_features(self.features)
- hf_dataset = load_nested_dataset(self.root / "data", features=features)
+ hf_dataset = load_nested_dataset(self.root / "data", features=features, episodes=self.episodes)
hf_dataset.set_transform(hf_transform_to_torch)
return hf_dataset
@@ -847,10 +856,8 @@ class LeRobotDataset(torch.utils.data.Dataset):
# Determine requested episodes
if self.episodes is None:
# Requesting all episodes - check if we have all episodes from metadata
requested_episodes = set(range(self.meta.total_episodes))
else:
# Requesting specific episodes
requested_episodes = set(self.episodes)
# Check if all requested episodes are available in cached data
@@ -932,7 +939,11 @@ class LeRobotDataset(torch.utils.data.Dataset):
query_timestamps = {}
for key in self.meta.video_keys:
if query_indices is not None and key in query_indices:
- timestamps = self.hf_dataset[query_indices[key]]["timestamp"]
+ if self._absolute_to_relative_idx is not None:
+ relative_indices = [self._absolute_to_relative_idx[idx] for idx in query_indices[key]]
+ timestamps = self.hf_dataset[relative_indices]["timestamp"]
+ else:
+ timestamps = self.hf_dataset[query_indices[key]]["timestamp"]
query_timestamps[key] = torch.stack(timestamps).tolist()
else:
query_timestamps[key] = [current_ts]
@@ -955,10 +966,16 @@ class LeRobotDataset(torch.utils.data.Dataset):
for key, q_idx in query_indices.items():
if key in self.meta.video_keys:
continue
+ # Map absolute indices to relative indices if needed
+ relative_indices = (
+ q_idx
+ if self._absolute_to_relative_idx is None
+ else [self._absolute_to_relative_idx[idx] for idx in q_idx]
+ )
try:
- result[key] = torch.stack(self.hf_dataset[key][q_idx])
+ result[key] = torch.stack(self.hf_dataset[key][relative_indices])
except (KeyError, TypeError, IndexError):
- result[key] = torch.stack(self.hf_dataset[q_idx][key])
+ result[key] = torch.stack(self.hf_dataset[relative_indices][key])
return result
def _query_videos(self, query_timestamps: dict[str, list[float]], ep_idx: int) -> dict[str, torch.Tensor]:
@@ -1498,6 +1515,7 @@ class LeRobotDataset(torch.utils.data.Dataset):
obj.image_transforms = None
obj.delta_timestamps = None
obj.delta_indices = None
obj._absolute_to_relative_idx = None
obj.video_backend = video_backend if video_backend is not None else get_safe_default_codec()
obj.writer = None
obj.latest_episode = None
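The absolute-to-relative remapping added above is a plain dict built from the filtered rows' original `index` values; query indices computed in absolute terms are then translated before indexing the subset. A stdlib sketch of that idea (function name illustrative):

```python
def build_abs_to_rel(kept_absolute_indices):
    """Map each absolute frame index to its position in the filtered dataset."""
    return {abs_idx: rel_idx for rel_idx, abs_idx in enumerate(kept_absolute_indices)}

# Suppose only the episodes covering absolute frames 100..104 were loaded
mapping = build_abs_to_rel([100, 101, 102, 103, 104])

# Query indices from delta_indices arrive in absolute terms
query = [101, 103]
relative = [mapping[i] for i in query]
assert relative == [1, 3]
```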
+18 -4
View File
@@ -28,6 +28,7 @@ import numpy as np
import packaging.version
import pandas
import pandas as pd
import pyarrow.dataset as pa_ds
import pyarrow.parquet as pq
import torch
from datasets import Dataset
@@ -103,7 +104,9 @@ def update_chunk_file_indices(chunk_idx: int, file_idx: int, chunks_size: int) -
return chunk_idx, file_idx
- def load_nested_dataset(pq_dir: Path, features: datasets.Features | None = None) -> Dataset:
+ def load_nested_dataset(
+ pq_dir: Path, features: datasets.Features | None = None, episodes: list[int] | None = None
+ ) -> Dataset:
"""Find parquet files in provided directory {pq_dir}/chunk-xxx/file-xxx.parquet
Convert parquet files to pyarrow memory mapped in a cache folder for efficient RAM usage
Concatenate all pyarrow references to return HF Dataset format
@@ -111,15 +114,26 @@ def load_nested_dataset(pq_dir: Path, features: datasets.Features | None = None)
Args:
pq_dir: Directory containing parquet files
features: Optional features schema to ensure consistent loading of complex types like images
episodes: Optional list of episode indices to filter. Uses PyArrow predicate pushdown for efficiency.
"""
paths = sorted(pq_dir.glob("*/*.parquet"))
if len(paths) == 0:
raise FileNotFoundError(f"Provided directory does not contain any parquet file: {pq_dir}")
# TODO(rcadene): set num_proc to accelerate conversion to pyarrow
- with SuppressProgressBars():
- datasets = Dataset.from_parquet([str(path) for path in paths], features=features)
- return datasets
+ # No episode filter: Dataset.from_parquet memory-maps the files for low RAM usage
+ if episodes is None:
+ return Dataset.from_parquet([str(path) for path in paths], features=features)
+ # Episode filter: PyArrow predicate pushdown reads only matching rows into memory
+ arrow_dataset = pa_ds.dataset(paths, format="parquet")
+ filter_expr = pa_ds.field("episode_index").isin(episodes)
+ table = arrow_dataset.to_table(filter=filter_expr)
+ if features is not None:
+ table = table.cast(features.arrow_schema)
+ return Dataset(table)
def get_parquet_num_frames(parquet_path: str | Path) -> int:
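Semantically, the episode filter keeps exactly the rows whose `episode_index` is in the requested set; a plain-Python sketch of what the PyArrow `isin` expression computes over a row list (names illustrative):

```python
def filter_episodes(rows, episodes):
    """Row-level equivalent of pa_ds.field('episode_index').isin(episodes)."""
    wanted = set(episodes)
    return [row for row in rows if row["episode_index"] in wanted]

# Three episodes of two frames each
rows = [{"episode_index": e, "frame": f} for e in range(3) for f in range(2)]
kept = filter_episodes(rows, episodes=[0, 2])
assert [r["episode_index"] for r in kept] == [0, 0, 2, 2]
```

The real implementation pushes this predicate down into the parquet reader, so non-matching row groups are skipped rather than loaded and discarded.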
+57 -9
View File
@@ -21,7 +21,22 @@ import draccus
from lerobot.configs.types import FeatureType, PolicyFeature
from lerobot.robots import RobotConfig
from lerobot.teleoperators.config import TeleoperatorConfig
- from lerobot.utils.constants import ACTION, OBS_ENV_STATE, OBS_IMAGE, OBS_IMAGES, OBS_STATE
+ from lerobot.utils.constants import (
+ ACTION,
+ LIBERO_KEY_EEF_MAT,
+ LIBERO_KEY_EEF_POS,
+ LIBERO_KEY_EEF_QUAT,
+ LIBERO_KEY_GRIPPER_QPOS,
+ LIBERO_KEY_GRIPPER_QVEL,
+ LIBERO_KEY_JOINTS_POS,
+ LIBERO_KEY_JOINTS_VEL,
+ LIBERO_KEY_PIXELS_AGENTVIEW,
+ LIBERO_KEY_PIXELS_EYE_IN_HAND,
+ OBS_ENV_STATE,
+ OBS_IMAGE,
+ OBS_IMAGES,
+ OBS_STATE,
+ )
@dataclass
@@ -246,28 +261,61 @@ class LiberoEnv(EnvConfig):
features_map: dict[str, str] = field(
default_factory=lambda: {
ACTION: ACTION,
- "agent_pos": OBS_STATE,
- "pixels/agentview_image": f"{OBS_IMAGES}.image",
- "pixels/robot0_eye_in_hand_image": f"{OBS_IMAGES}.image2",
+ LIBERO_KEY_EEF_POS: f"{OBS_STATE}.eef_pos",
+ LIBERO_KEY_EEF_QUAT: f"{OBS_STATE}.eef_quat",
+ LIBERO_KEY_EEF_MAT: f"{OBS_STATE}.eef_mat",
+ LIBERO_KEY_GRIPPER_QPOS: f"{OBS_STATE}.gripper_qpos",
+ LIBERO_KEY_GRIPPER_QVEL: f"{OBS_STATE}.gripper_qvel",
+ LIBERO_KEY_JOINTS_POS: f"{OBS_STATE}.joint_pos",
+ LIBERO_KEY_JOINTS_VEL: f"{OBS_STATE}.joint_vel",
+ LIBERO_KEY_PIXELS_AGENTVIEW: f"{OBS_IMAGES}.image",
+ LIBERO_KEY_PIXELS_EYE_IN_HAND: f"{OBS_IMAGES}.image2",
}
)
def __post_init__(self):
if self.obs_type == "pixels":
- self.features["pixels/agentview_image"] = PolicyFeature(
+ self.features[LIBERO_KEY_PIXELS_AGENTVIEW] = PolicyFeature(
type=FeatureType.VISUAL, shape=(self.observation_height, self.observation_width, 3)
)
- self.features["pixels/robot0_eye_in_hand_image"] = PolicyFeature(
+ self.features[LIBERO_KEY_PIXELS_EYE_IN_HAND] = PolicyFeature(
type=FeatureType.VISUAL, shape=(self.observation_height, self.observation_width, 3)
)
elif self.obs_type == "pixels_agent_pos":
self.features["agent_pos"] = PolicyFeature(type=FeatureType.STATE, shape=(8,))
- self.features["pixels/agentview_image"] = PolicyFeature(
+ self.features[LIBERO_KEY_PIXELS_AGENTVIEW] = PolicyFeature(
type=FeatureType.VISUAL, shape=(self.observation_height, self.observation_width, 3)
)
- self.features["pixels/robot0_eye_in_hand_image"] = PolicyFeature(
+ self.features[LIBERO_KEY_PIXELS_EYE_IN_HAND] = PolicyFeature(
type=FeatureType.VISUAL, shape=(self.observation_height, self.observation_width, 3)
)
self.features[LIBERO_KEY_EEF_POS] = PolicyFeature(
type=FeatureType.STATE,
shape=(3,),
)
self.features[LIBERO_KEY_EEF_QUAT] = PolicyFeature(
type=FeatureType.STATE,
shape=(4,),
)
self.features[LIBERO_KEY_EEF_MAT] = PolicyFeature(
type=FeatureType.STATE,
shape=(3, 3),
)
self.features[LIBERO_KEY_GRIPPER_QPOS] = PolicyFeature(
type=FeatureType.STATE,
shape=(2,),
)
self.features[LIBERO_KEY_GRIPPER_QVEL] = PolicyFeature(
type=FeatureType.STATE,
shape=(2,),
)
self.features[LIBERO_KEY_JOINTS_POS] = PolicyFeature(
type=FeatureType.STATE,
shape=(7,),
)
self.features[LIBERO_KEY_JOINTS_VEL] = PolicyFeature(
type=FeatureType.STATE,
shape=(7,),
)
else:
raise ValueError(f"Unsupported obs_type: {self.obs_type}")
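`features_map` is applied as a plain key rename from raw env observation keys to policy-facing keys. A stdlib sketch of that application (the helper name and example keys are illustrative):

```python
def remap_observation(obs: dict, features_map: dict) -> dict:
    """Rename raw env keys to policy-facing keys, dropping unmapped ones."""
    return {features_map[k]: v for k, v in obs.items() if k in features_map}

features_map = {
    "robot_state/eef/pos": "observation.state.eef_pos",
    "pixels/agentview_image": "observation.images.image",
}
obs = {
    "robot_state/eef/pos": [0.1, 0.2, 0.3],
    "pixels/agentview_image": "frame",
    "unused_key": 0,  # not in the map, so dropped
}
mapped = remap_observation(obs, features_map)
assert "observation.state.eef_pos" in mapped
assert "unused_key" not in mapped
```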
+39 -1
View File
@@ -14,12 +14,16 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import importlib
from typing import Any
import gymnasium as gym
from gymnasium.envs.registration import registry as gym_registry
from lerobot.envs.configs import AlohaEnv, EnvConfig, LiberoEnv, PushtEnv
from lerobot.envs.utils import _call_make_env, _download_hub_file, _import_hub_module, _normalize_hub_result
from lerobot.processor import ProcessorStep
from lerobot.processor.env_processor import LiberoProcessorStep
from lerobot.processor.pipeline import PolicyProcessorPipeline
def make_env_config(env_type: str, **kwargs) -> EnvConfig:
@@ -33,6 +37,41 @@ def make_env_config(env_type: str, **kwargs) -> EnvConfig:
raise ValueError(f"Policy type '{env_type}' is not available.")
def make_env_pre_post_processors(
env_cfg: EnvConfig,
) -> tuple[
PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
]:
"""
Create preprocessor and postprocessor pipelines for environment observations.
This function creates processor pipelines that transform raw environment
observations and actions. By default, it returns identity processors that do nothing.
For specific environments like LIBERO, it adds environment-specific processing steps.
Args:
env_cfg: The configuration of the environment.
Returns:
A tuple containing:
- preprocessor: Pipeline that processes environment observations
- postprocessor: Pipeline that processes environment outputs (currently identity)
"""
# Preprocessor and Postprocessor steps are Identity for most environments
preprocessor_steps: list[ProcessorStep] = []
postprocessor_steps: list[ProcessorStep] = []
# For LIBERO environments, add the LiberoProcessorStep to preprocessor
if isinstance(env_cfg, LiberoEnv) or "libero" in env_cfg.type:
preprocessor_steps.append(LiberoProcessorStep())
preprocessor = PolicyProcessorPipeline(steps=preprocessor_steps)
postprocessor = PolicyProcessorPipeline(steps=postprocessor_steps)
return preprocessor, postprocessor
def make_env(
cfg: EnvConfig | str,
n_envs: int = 1,
@@ -72,7 +111,6 @@ def make_env(
# import and surface clear import errors
module = _import_hub_module(local_file, repo_id)
# call the hub-provided make_env
raw_result = _call_make_env(module, n_envs=n_envs, use_async_envs=use_async_envs)
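The identity-versus-environment-specific pipeline selection in `make_env_pre_post_processors` can be sketched standalone. `Pipeline` and `flatten_robot_state` below are hypothetical minimal stand-ins for lerobot's `PolicyProcessorPipeline` and `LiberoProcessorStep`, not the real classes:

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Pipeline:
    """Minimal stand-in for PolicyProcessorPipeline: apply steps in order."""

    steps: list[Callable[[dict], dict]] = field(default_factory=list)

    def __call__(self, obs: dict) -> dict:
        for step in self.steps:
            obs = step(obs)
        return obs  # identity when steps is empty


def flatten_robot_state(obs: dict) -> dict:
    # Example env-specific step: hoist a nested field to a flat key.
    out = dict(obs)
    out["state"] = out.pop("robot_state")["eef"]["pos"]
    return out


def make_pre_post(env_type: str) -> tuple[Pipeline, Pipeline]:
    pre_steps: list[Callable[[dict], dict]] = []
    if "libero" in env_type:  # mirror the LiberoEnv check above
        pre_steps.append(flatten_robot_state)
    return Pipeline(pre_steps), Pipeline([])
```

Most environments get two empty (identity) pipelines; only LIBERO-like ones get a preprocessing step.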
+69 -21
@@ -28,7 +28,6 @@ import torch
from gymnasium import spaces
from libero.libero import benchmark, get_libero_path
from libero.libero.envs import OffScreenRenderEnv
from robosuite.utils.transform_utils import quat2axisangle
def _parse_camera_names(camera_name: str | Sequence[str]) -> list[str]:
@@ -175,11 +174,36 @@ class LiberoEnv(gym.Env):
self.observation_space = spaces.Dict(
{
"pixels": spaces.Dict(images),
"agent_pos": spaces.Box(
low=AGENT_POS_LOW,
high=AGENT_POS_HIGH,
shape=(OBS_STATE_DIM,),
dtype=np.float64,
"robot_state": spaces.Dict(
{
"eef": spaces.Dict(
{
"pos": spaces.Box(low=-np.inf, high=np.inf, shape=(3,), dtype=np.float64),
"quat": spaces.Box(
low=-np.inf, high=np.inf, shape=(4,), dtype=np.float64
),
"mat": spaces.Box(
low=-np.inf, high=np.inf, shape=(3, 3), dtype=np.float64
),
}
),
"gripper": spaces.Dict(
{
"qpos": spaces.Box(
low=-np.inf, high=np.inf, shape=(2,), dtype=np.float64
),
"qvel": spaces.Box(
low=-np.inf, high=np.inf, shape=(2,), dtype=np.float64
),
}
),
"joints": spaces.Dict(
{
"pos": spaces.Box(low=-np.inf, high=np.inf, shape=(7,), dtype=np.float64),
"vel": spaces.Box(low=-np.inf, high=np.inf, shape=(7,), dtype=np.float64),
}
),
}
),
}
)
@@ -191,6 +215,7 @@ class LiberoEnv(gym.Env):
def render(self):
raw_obs = self._env.env._get_observations()
image = self._format_raw_obs(raw_obs)["pixels"]["image"]
image = image[::-1, ::-1] # flip both H and W for visualization
return image
def _make_envs_task(self, task_suite: Any, task_id: int = 0):
@@ -212,23 +237,48 @@ class LiberoEnv(gym.Env):
images = {}
for camera_name in self.camera_name:
image = raw_obs[camera_name]
image = image[::-1, ::-1] # rotate 180 degrees
images[self.camera_name_mapping[camera_name]] = image
state = np.concatenate(
(
raw_obs["robot0_eef_pos"],
quat2axisangle(raw_obs["robot0_eef_quat"]),
raw_obs["robot0_gripper_qpos"],
)
)
agent_pos = state
eef_pos = raw_obs.get("robot0_eef_pos")
eef_quat = raw_obs.get("robot0_eef_quat")
# rotation matrix from controller
eef_mat = self._env.robots[0].controller.ee_ori_mat if eef_pos is not None else None
gripper_qpos = raw_obs.get("robot0_gripper_qpos")
gripper_qvel = raw_obs.get("robot0_gripper_qvel")
joint_pos = raw_obs.get("robot0_joint_pos")
joint_vel = raw_obs.get("robot0_joint_vel")
obs = {
"pixels": images,
"robot_state": {
"eef": {
"pos": eef_pos, # (3,)
"quat": eef_quat, # (4,)
"mat": eef_mat, # (3, 3)
},
"gripper": {
"qpos": gripper_qpos, # (2,)
"qvel": gripper_qvel, # (2,)
},
"joints": {
"pos": joint_pos, # (7,)
"vel": joint_vel, # (7,)
},
},
}
if self.obs_type == "pixels":
return {"pixels": images.copy()}
if self.obs_type == "pixels_agent_pos":
return {
"pixels": images.copy(),
"agent_pos": agent_pos,
}
# Validate required fields are present
if eef_pos is None or eef_quat is None or gripper_qpos is None:
raise ValueError(
f"Missing required robot state fields in raw observation. "
f"Got eef_pos={eef_pos is not None}, eef_quat={eef_quat is not None}, "
f"gripper_qpos={gripper_qpos is not None}"
)
return obs
raise NotImplementedError(
f"The observation type '{self.obs_type}' is not supported in LiberoEnv. "
"Please switch to an image-based obs_type (e.g. 'pixels', 'pixels_agent_pos')."
@@ -355,12 +405,10 @@ def create_libero_envs(
print(f"Restricting to task_ids={task_ids_filter}")
out: dict[str, dict[int, Any]] = defaultdict(dict)
for suite_name in suite_names:
suite = _get_suite(suite_name)
total = len(suite.tasks)
selected = _select_task_ids(total, task_ids_filter)
if not selected:
raise ValueError(f"No tasks selected for suite '{suite_name}' (available: {total}).")
+36 -7
@@ -29,10 +29,22 @@ from torch import Tensor
from lerobot.configs.types import FeatureType, PolicyFeature
from lerobot.envs.configs import EnvConfig
from lerobot.utils.constants import OBS_ENV_STATE, OBS_IMAGE, OBS_IMAGES, OBS_STATE
from lerobot.utils.constants import OBS_ENV_STATE, OBS_IMAGE, OBS_IMAGES, OBS_STATE, OBS_STR
from lerobot.utils.utils import get_channel_first_image_shape
def _convert_nested_dict(d):
result = {}
for k, v in d.items():
if isinstance(v, dict):
result[k] = _convert_nested_dict(v)
elif isinstance(v, np.ndarray):
result[k] = torch.from_numpy(v)
else:
result[k] = v
return result
def preprocess_observation(observations: dict[str, np.ndarray]) -> dict[str, Tensor]:
# TODO(aliberts, rcadene): refactor this to use features from the environment (no hardcoding)
"""Convert environment observation to LeRobot format observation.
@@ -78,12 +90,14 @@ def preprocess_observation(observations: dict[str, np.ndarray]) -> dict[str, Ten
return_observations[OBS_ENV_STATE] = env_state
# TODO(rcadene): enable pixels only baseline with `obs_type="pixels"` in environment by removing
agent_pos = torch.from_numpy(observations["agent_pos"]).float()
if agent_pos.dim() == 1:
agent_pos = agent_pos.unsqueeze(0)
return_observations[OBS_STATE] = agent_pos
if "agent_pos" in observations:
agent_pos = torch.from_numpy(observations["agent_pos"]).float()
if agent_pos.dim() == 1:
agent_pos = agent_pos.unsqueeze(0)
return_observations[OBS_STATE] = agent_pos
if "robot_state" in observations:
return_observations[f"{OBS_STR}.robot_state"] = _convert_nested_dict(observations["robot_state"])
return return_observations
@@ -207,7 +221,22 @@ def _load_module_from_path(path: str, module_name: str | None = None):
if spec is None:
raise ImportError(f"Could not load module spec for {module_name} from {path}")
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module) # type: ignore
# Add the module's directory to sys.path so it can import local modules
import sys
module_dir = os.path.dirname(os.path.abspath(path))
sys_path_modified = False
if module_dir not in sys.path:
sys.path.insert(0, module_dir)
sys_path_modified = True
try:
spec.loader.exec_module(module) # type: ignore
finally:
# Clean up sys.path after import
if sys_path_modified:
sys.path.remove(module_dir)
return module
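The temporary `sys.path` insertion above can also be factored into a context manager so the cleanup is reusable; this is a standalone sketch of the same pattern, not the repo's code:

```python
import importlib.util
import os
import sys
from contextlib import contextmanager


@contextmanager
def _prepended_sys_path(path: str):
    # Temporarily put `path` first on sys.path so the module can
    # import files that live next to it; always undo on exit.
    added = path not in sys.path
    if added:
        sys.path.insert(0, path)
    try:
        yield
    finally:
        if added:
            sys.path.remove(path)


def load_module_from_path(path: str, module_name: str = "hub_module"):
    spec = importlib.util.spec_from_file_location(module_name, path)
    if spec is None or spec.loader is None:
        raise ImportError(f"Could not load module spec from {path}")
    module = importlib.util.module_from_spec(spec)
    with _prepended_sys_path(os.path.dirname(os.path.abspath(path))):
        spec.loader.exec_module(module)
    return module
```

Either way, the key point is that `sys.path` is restored even when `exec_module` raises.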
+154
@@ -0,0 +1,154 @@
#!/usr/bin/env python
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass
import torch
from lerobot.configs.types import PipelineFeatureType, PolicyFeature
from lerobot.utils.constants import OBS_IMAGES, OBS_STATE
from .pipeline import ObservationProcessorStep, ProcessorStepRegistry
@dataclass
@ProcessorStepRegistry.register(name="libero_processor")
class LiberoProcessorStep(ObservationProcessorStep):
"""
Processes LIBERO observations into the LeRobot format.
This step handles the specific observation structure from LIBERO environments,
which includes nested robot_state dictionaries and image observations.
**State Processing:**
- Processes the `robot_state` dictionary which contains nested end-effector,
gripper, and joint information.
- Extracts and concatenates:
- End-effector position (3D)
- End-effector quaternion converted to axis-angle (3D)
- Gripper joint positions (2D)
- Maps the concatenated state to `"observation.state"`.
**Image Processing:**
- Rotates images by 180 degrees by flipping both height and width dimensions.
- This accounts for the HuggingFaceVLA/libero camera orientation convention.
"""
def _process_observation(self, observation):
"""
Processes both image and robot_state observations from LIBERO.
"""
processed_obs = observation.copy()
for key in list(processed_obs.keys()):
if key.startswith(f"{OBS_IMAGES}."):
img = processed_obs[key]
# Flip both H and W
img = torch.flip(img, dims=[2, 3])
processed_obs[key] = img
# Process robot_state into a flat state vector
if "observation.robot_state" in processed_obs:
robot_state = processed_obs.pop("observation.robot_state")
# Extract components
eef_pos = robot_state["eef"]["pos"] # (B, 3,)
eef_quat = robot_state["eef"]["quat"] # (B, 4,)
gripper_qpos = robot_state["gripper"]["qpos"] # (B, 2,)
# Convert quaternion to axis-angle
eef_axisangle = self._quat2axisangle(eef_quat) # (B, 3)
# Concatenate into a single state vector
state = torch.cat((eef_pos, eef_axisangle, gripper_qpos), dim=-1)
# ensure float32
state = state.float()
if state.dim() == 1:
state = state.unsqueeze(0)
processed_obs[OBS_STATE] = state
return processed_obs
def transform_features(
self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
"""
Transforms feature keys from the LIBERO format to the LeRobot standard.
"""
new_features: dict[PipelineFeatureType, dict[str, PolicyFeature]] = {}
# copy over non-STATE features
for ft, feats in features.items():
if ft != PipelineFeatureType.STATE:
new_features[ft] = feats.copy()
# rebuild STATE features
state_feats = {}
# add our new flattened state
state_feats["observation.state"] = PolicyFeature(
key="observation.state",
shape=(8,), # [eef_pos(3), axis_angle(3), gripper(2)]
dtype="float32",
description=("Concatenated end-effector position (3), axis-angle (3), and gripper qpos (2)."),
)
new_features[PipelineFeatureType.STATE] = state_feats
return new_features
def observation(self, observation):
return self._process_observation(observation)
def _quat2axisangle(self, quat: torch.Tensor) -> torch.Tensor:
"""
Convert batched quaternions to axis-angle format.
Only accepts torch tensors of shape (B, 4).
Args:
quat (Tensor): (B, 4) tensor of quaternions in (x, y, z, w) format
Returns:
Tensor: (B, 3) axis-angle vectors
Raises:
TypeError: if input is not a torch tensor
ValueError: if shape is not (B, 4)
"""
if not isinstance(quat, torch.Tensor):
raise TypeError(f"_quat2axisangle expected a torch.Tensor, got {type(quat)}")
if quat.ndim != 2 or quat.shape[1] != 4:
raise ValueError(f"_quat2axisangle expected shape (B, 4), got {tuple(quat.shape)}")
quat = quat.to(dtype=torch.float32)
device = quat.device
batch_size = quat.shape[0]
w = quat[:, 3].clamp(-1.0, 1.0)
den = torch.sqrt(torch.clamp(1.0 - w * w, min=0.0))
result = torch.zeros((batch_size, 3), device=device)
mask = den > 1e-10
if mask.any():
angle = 2.0 * torch.acos(w[mask]) # (M,)
axis = quat[mask, :3] / den[mask].unsqueeze(1)
result[mask] = axis * angle.unsqueeze(1)
return result
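The batched conversion can be sanity-checked against a NumPy port using the same (x, y, z, w) convention and the same small-denominator guard:

```python
import numpy as np


def quat2axisangle(quat: np.ndarray) -> np.ndarray:
    # quat: (B, 4) in (x, y, z, w) order -> (B, 3) axis-angle vectors.
    quat = np.asarray(quat, dtype=np.float64)
    w = np.clip(quat[:, 3], -1.0, 1.0)
    den = np.sqrt(np.clip(1.0 - w * w, 0.0, None))
    result = np.zeros((quat.shape[0], 3))
    mask = den > 1e-10  # near-identity rotations map to the zero vector
    angle = 2.0 * np.arccos(w[mask])
    axis = quat[mask, :3] / den[mask][:, None]
    result[mask] = axis * angle[:, None]
    return result
```

For a 90° rotation about x, `(sin 45°, 0, 0, cos 45°)` maps to `(π/2, 0, 0)`, and the identity quaternion maps to zeros.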
+18
@@ -0,0 +1,18 @@
#!/usr/bin/env python
# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .config_unitree_g1 import UnitreeG1Config
from .unitree_g1 import UnitreeG1
@@ -0,0 +1,108 @@
{
"kLeftShoulderPitch.pos": {
"id": 0,
"drive_mode": 0,
"homing_offset": 0,
"range_min": -3,
"range_max": 1
},
"kLeftShoulderYaw.pos": {
"id": 1,
"drive_mode": 0,
"homing_offset": 0,
"range_min": -2.6,
"range_max": 2.6
},
"kLeftShoulderRoll.pos": {
"id": 2,
"drive_mode": 0,
"homing_offset": 0,
"range_min": -0.1,
"range_max": 2.2
},
"kLeftElbow.pos": {
"id": 3,
"drive_mode": 0,
"homing_offset": 0,
"range_min": -1,
"range_max": 1
},
"kLeftWristRoll.pos": {
"id": 4,
"drive_mode": 0,
"homing_offset": 0,
"range_min": -1.9,
"range_max": 1.9
},
"kLeftWristYaw.pos": {
"id": 5,
"drive_mode": 0,
"homing_offset": 0,
"range_min": 0.0,
"range_max": 0.0
},
"kLeftWristPitch.pos": {
"id": 6,
"drive_mode": 0,
"homing_offset": 0,
"range_min": 0.0,
"range_max": 0.0
},
"kRightShoulderPitch.pos": {
"id": 0,
"drive_mode": 0,
"homing_offset": 0,
"range_min": -3.0,
"range_max": 1
},
"kRightShoulderYaw.pos": {
"id": 1,
"drive_mode": 0,
"homing_offset": 0,
"range_min": -2.6,
"range_max": 2.6
},
"kRightShoulderRoll.pos": {
"id": 2,
"drive_mode": 0,
"homing_offset": 0,
"range_min": -2.2,
"range_max": 0.5
},
"kRightElbow.pos": {
"id": 3,
"drive_mode": 0,
"homing_offset": 0,
"range_min": -1,
"range_max": 1
},
"kRightWristRoll.pos": {
"id": 4,
"drive_mode": 0,
"homing_offset": 0,
"range_min": -1.9,
"range_max": 1.9
},
"kRightWristYaw.pos": {
"id": 5,
"drive_mode": 0,
"homing_offset": 0,
"range_min": 0.0,
"range_max": 0.0
},
"kRightWristPitch.pos": {
"id": 6,
"drive_mode": 0,
"homing_offset": 0,
"range_min": 0.0,
"range_max": 0.0
}
}
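A consumer of this calibration file would typically validate commanded positions against each joint's `range_min`/`range_max`; `check_limits` below is a hypothetical helper for illustration, not part of the repo:

```python
def check_limits(calib: dict, command: dict) -> list[str]:
    """Return the names of joints whose commanded position is out of range."""
    violations = []
    for joint, pos in command.items():
        entry = calib[joint]
        if not (entry["range_min"] <= pos <= entry["range_max"]):
            violations.append(joint)
    return violations
```

For example, with `kLeftElbow.pos` limited to [-1, 1] as above, a command of 0.5 passes and 1.5 is flagged.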
@@ -0,0 +1,2 @@
*.gv
*.pdf
@@ -0,0 +1,33 @@
# Unitree G1 Description (URDF & MJCF)
## Overview
This package includes a universal humanoid robot description (URDF & MJCF) for the [Unitree G1](https://www.unitree.com/g1/), developed by [Unitree Robotics](https://www.unitree.com/).
MJCF/URDF for the G1 robot:
| MJCF/URDF file name | `mode_machine` | Hip roll reduction ratio | Update status | dof#leg | dof#waist | dof#arm | dof#hand |
| ----------------------------- | :------------: | :----------------------: | ------------- | :-----: | :-------: | :-----: | :------: |
| `g1_23dof` | 1 | 14.5 | Beta | 6*2 | 1 | 5*2 | 0 |
| `g1_29dof` | 2 | 14.5 | Beta | 6*2 | 3 | 7*2 | 0 |
| `g1_29dof_with_hand` | 2 | 14.5 | Beta | 6*2 | 3 | 7*2 | 7*2 |
| `g1_29dof_lock_waist` | 3 | 14.5 | Beta | 6*2 | 1 | 7*2 | 0 |
| `g1_23dof_rev_1_0` | 4 | 22.5 | Up-to-date | 6*2 | 1 | 5*2 | 0 |
| `g1_29dof_rev_1_0` | 5 | 22.5 | Up-to-date | 6*2 | 3 | 7*2 | 0 |
| `g1_29dof_with_hand_rev_1_0` | 5 | 22.5 | Up-to-date | 6*2 | 3 | 7*2 | 7*2 |
| `g1_29dof_lock_waist_rev_1_0` | 6 | 22.5 | Up-to-date | 6*2 | 1 | 7*2 | 0 |
| `g1_dual_arm` | 9 | null | Up-to-date | 0 | 0 | 7*2 | 0 |
## Visualization with [MuJoCo](https://github.com/google-deepmind/mujoco)
1. Open MuJoCo Viewer
```bash
pip install mujoco
python -m mujoco.viewer
```
2. Drag and drop the MJCF/URDF model file (`g1_XXX.xml`/`g1_XXX.urdf`) to the MuJoCo Viewer.
## Note for teleoperation
`g1_body29_hand14` is modified from [g1_29dof_with_hand_rev_1_0](https://github.com/unitreerobotics/unitree_ros/blob/master/robots/g1_description/g1_29dof_with_hand_rev_1_0.urdf).
@@ -0,0 +1,903 @@
<robot name="g1_23dof">
<mujoco>
<compiler meshdir="meshes" discardvisual="false"/>
</mujoco>
<!-- [CAUTION] uncomment when convert to mujoco -->
<!-- <link name="world"></link>
<joint name="floating_base_joint" type="floating">
<parent link="world"/>
<child link="pelvis"/>
</joint> -->
<link name="pelvis">
<inertial>
<origin xyz="0 0 -0.07605" rpy="0 0 0"/>
<mass value="3.813"/>
<inertia ixx="0.010549" ixy="0" ixz="2.1E-06" iyy="0.0093089" iyz="0" izz="0.0079184"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/pelvis.STL"/>
</geometry>
<material name="dark">
<color rgba="0.2 0.2 0.2 1"/>
</material>
</visual>
</link>
<link name="pelvis_contour_link">
<inertial>
<origin xyz="0 0 0" rpy="0 0 0"/>
<mass value="0.001"/>
<inertia ixx="1e-7" ixy="0" ixz="0" iyy="1e-7" iyz="0" izz="1e-7"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/pelvis_contour_link.STL"/>
</geometry>
<material name="white">
<color rgba="0.7 0.7 0.7 1"/>
</material>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/pelvis_contour_link.STL"/>
</geometry>
</collision>
</link>
<joint name="pelvis_contour_joint" type="fixed">
<parent link="pelvis"/>
<child link="pelvis_contour_link"/>
</joint>
<!-- Legs -->
<link name="left_hip_pitch_link">
<inertial>
<origin xyz="0.002741 0.047791 -0.02606" rpy="0 0 0"/>
<mass value="1.35"/>
<inertia ixx="0.001811" ixy="3.68E-05" ixz="-3.44E-05" iyy="0.0014193" iyz="0.000171" izz="0.0012812"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/left_hip_pitch_link.STL"/>
</geometry>
<material name="dark">
<color rgba="0.2 0.2 0.2 1"/>
</material>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/left_hip_pitch_link.STL"/>
</geometry>
</collision>
</link>
<joint name="left_hip_pitch_joint" type="revolute">
<origin xyz="0 0.064452 -0.1027" rpy="0 0 0"/>
<parent link="pelvis"/>
<child link="left_hip_pitch_link"/>
<axis xyz="0 1 0"/>
<limit lower="-2.5307" upper="2.8798" effort="88" velocity="32"/>
</joint>
<link name="left_hip_roll_link">
<inertial>
<origin xyz="0.029812 -0.001045 -0.087934" rpy="0 0 0"/>
<mass value="1.52"/>
<inertia ixx="0.0023773" ixy="-3.8E-06" ixz="-0.0003908" iyy="0.0024123" iyz="1.84E-05" izz="0.0016595"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/left_hip_roll_link.STL"/>
</geometry>
<material name="white">
<color rgba="0.7 0.7 0.7 1"/>
</material>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/left_hip_roll_link.STL"/>
</geometry>
</collision>
</link>
<joint name="left_hip_roll_joint" type="revolute">
<origin xyz="0 0.052 -0.030465" rpy="0 -0.1749 0"/>
<parent link="left_hip_pitch_link"/>
<child link="left_hip_roll_link"/>
<axis xyz="1 0 0"/>
<limit lower="-0.5236" upper="2.9671" effort="88" velocity="32"/>
</joint>
<link name="left_hip_yaw_link">
<inertial>
<origin xyz="-0.057709 -0.010981 -0.15078" rpy="0 0 0"/>
<mass value="1.702"/>
<inertia ixx="0.0057774" ixy="-0.0005411" ixz="-0.0023948" iyy="0.0076124" iyz="-0.0007072" izz="0.003149"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/left_hip_yaw_link.STL"/>
</geometry>
<material name="white">
<color rgba="0.7 0.7 0.7 1"/>
</material>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/left_hip_yaw_link.STL"/>
</geometry>
</collision>
</link>
<joint name="left_hip_yaw_joint" type="revolute">
<origin xyz="0.025001 0 -0.12412" rpy="0 0 0"/>
<parent link="left_hip_roll_link"/>
<child link="left_hip_yaw_link"/>
<axis xyz="0 0 1"/>
<limit lower="-2.7576" upper="2.7576" effort="88" velocity="32"/>
</joint>
<link name="left_knee_link">
<inertial>
<origin xyz="0.005457 0.003964 -0.12074" rpy="0 0 0"/>
<mass value="1.932"/>
<inertia ixx="0.011329" ixy="4.82E-05" ixz="-4.49E-05" iyy="0.011277" iyz="-0.0007146" izz="0.0015168"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/left_knee_link.STL"/>
</geometry>
<material name="white">
<color rgba="0.7 0.7 0.7 1"/>
</material>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/left_knee_link.STL"/>
</geometry>
</collision>
</link>
<joint name="left_knee_joint" type="revolute">
<origin xyz="-0.078273 0.0021489 -0.17734" rpy="0 0.1749 0"/>
<parent link="left_hip_yaw_link"/>
<child link="left_knee_link"/>
<axis xyz="0 1 0"/>
<limit lower="-0.087267" upper="2.8798" effort="139" velocity="20"/>
</joint>
<link name="left_ankle_pitch_link">
<inertial>
<origin xyz="-0.007269 0 0.011137" rpy="0 0 0"/>
<mass value="0.074"/>
<inertia ixx="8.4E-06" ixy="0" ixz="-2.9E-06" iyy="1.89E-05" iyz="0" izz="1.26E-05"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/left_ankle_pitch_link.STL"/>
</geometry>
<material name="white">
<color rgba="0.7 0.7 0.7 1"/>
</material>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/left_ankle_pitch_link.STL"/>
</geometry>
</collision>
</link>
<joint name="left_ankle_pitch_joint" type="revolute">
<origin xyz="0 -9.4445E-05 -0.30001" rpy="0 0 0"/>
<parent link="left_knee_link"/>
<child link="left_ankle_pitch_link"/>
<axis xyz="0 1 0"/>
<limit lower="-0.87267" upper="0.5236" effort="50" velocity="37"/>
</joint>
<link name="left_ankle_roll_link">
<inertial>
<origin xyz="0.026505 0 -0.016425" rpy="0 0 0"/>
<mass value="0.608"/>
<inertia ixx="0.0002231" ixy="2E-07" ixz="8.91E-05" iyy="0.0016161" iyz="-1E-07" izz="0.0016667"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/left_ankle_roll_link.STL"/>
</geometry>
<material name="dark">
<color rgba="0.2 0.2 0.2 1"/>
</material>
</visual>
<collision>
<origin xyz="-0.05 0.025 -0.03" rpy="0 0 0"/>
<geometry>
<sphere radius="0.005"/>
</geometry>
</collision>
<collision>
<origin xyz="-0.05 -0.025 -0.03" rpy="0 0 0"/>
<geometry>
<sphere radius="0.005"/>
</geometry>
</collision>
<collision>
<origin xyz="0.12 0.03 -0.03" rpy="0 0 0"/>
<geometry>
<sphere radius="0.005"/>
</geometry>
</collision>
<collision>
<origin xyz="0.12 -0.03 -0.03" rpy="0 0 0"/>
<geometry>
<sphere radius="0.005"/>
</geometry>
</collision>
</link>
<joint name="left_ankle_roll_joint" type="revolute">
<origin xyz="0 0 -0.017558" rpy="0 0 0"/>
<parent link="left_ankle_pitch_link"/>
<child link="left_ankle_roll_link"/>
<axis xyz="1 0 0"/>
<limit lower="-0.2618" upper="0.2618" effort="50" velocity="37"/>
</joint>
<link name="right_hip_pitch_link">
<inertial>
<origin xyz="0.002741 -0.047791 -0.02606" rpy="0 0 0"/>
<mass value="1.35"/>
<inertia ixx="0.001811" ixy="-3.68E-05" ixz="-3.44E-05" iyy="0.0014193" iyz="-0.000171" izz="0.0012812"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/right_hip_pitch_link.STL"/>
</geometry>
<material name="dark">
<color rgba="0.2 0.2 0.2 1"/>
</material>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/right_hip_pitch_link.STL"/>
</geometry>
</collision>
</link>
<joint name="right_hip_pitch_joint" type="revolute">
<origin xyz="0 -0.064452 -0.1027" rpy="0 0 0"/>
<parent link="pelvis"/>
<child link="right_hip_pitch_link"/>
<axis xyz="0 1 0"/>
<limit lower="-2.5307" upper="2.8798" effort="88" velocity="32"/>
</joint>
<link name="right_hip_roll_link">
<inertial>
<origin xyz="0.029812 0.001045 -0.087934" rpy="0 0 0"/>
<mass value="1.52"/>
<inertia ixx="0.0023773" ixy="3.8E-06" ixz="-0.0003908" iyy="0.0024123" iyz="-1.84E-05" izz="0.0016595"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/right_hip_roll_link.STL"/>
</geometry>
<material name="white">
<color rgba="0.7 0.7 0.7 1"/>
</material>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/right_hip_roll_link.STL"/>
</geometry>
</collision>
</link>
<joint name="right_hip_roll_joint" type="revolute">
<origin xyz="0 -0.052 -0.030465" rpy="0 -0.1749 0"/>
<parent link="right_hip_pitch_link"/>
<child link="right_hip_roll_link"/>
<axis xyz="1 0 0"/>
<limit lower="-2.9671" upper="0.5236" effort="88" velocity="32"/>
</joint>
<link name="right_hip_yaw_link">
<inertial>
<origin xyz="-0.057709 0.010981 -0.15078" rpy="0 0 0"/>
<mass value="1.702"/>
<inertia ixx="0.0057774" ixy="0.0005411" ixz="-0.0023948" iyy="0.0076124" iyz="0.0007072" izz="0.003149"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/right_hip_yaw_link.STL"/>
</geometry>
<material name="white">
<color rgba="0.7 0.7 0.7 1"/>
</material>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/right_hip_yaw_link.STL"/>
</geometry>
</collision>
</link>
<joint name="right_hip_yaw_joint" type="revolute">
<origin xyz="0.025001 0 -0.12412" rpy="0 0 0"/>
<parent link="right_hip_roll_link"/>
<child link="right_hip_yaw_link"/>
<axis xyz="0 0 1"/>
<limit lower="-2.7576" upper="2.7576" effort="88" velocity="32"/>
</joint>
<link name="right_knee_link">
<inertial>
<origin xyz="0.005457 -0.003964 -0.12074" rpy="0 0 0"/>
<mass value="1.932"/>
<inertia ixx="0.011329" ixy="-4.82E-05" ixz="4.49E-05" iyy="0.011277" iyz="0.0007146" izz="0.0015168"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/right_knee_link.STL"/>
</geometry>
<material name="white">
<color rgba="0.7 0.7 0.7 1"/>
</material>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/right_knee_link.STL"/>
</geometry>
</collision>
</link>
<joint name="right_knee_joint" type="revolute">
<origin xyz="-0.078273 -0.0021489 -0.17734" rpy="0 0.1749 0"/>
<parent link="right_hip_yaw_link"/>
<child link="right_knee_link"/>
<axis xyz="0 1 0"/>
<limit lower="-0.087267" upper="2.8798" effort="139" velocity="20"/>
</joint>
<link name="right_ankle_pitch_link">
<inertial>
<origin xyz="-0.007269 0 0.011137" rpy="0 0 0"/>
<mass value="0.074"/>
<inertia ixx="8.4E-06" ixy="0" ixz="-2.9E-06" iyy="1.89E-05" iyz="0" izz="1.26E-05"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/right_ankle_pitch_link.STL"/>
</geometry>
<material name="white">
<color rgba="0.7 0.7 0.7 1"/>
</material>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/right_ankle_pitch_link.STL"/>
</geometry>
</collision>
</link>
<joint name="right_ankle_pitch_joint" type="revolute">
<origin xyz="0 9.4445E-05 -0.30001" rpy="0 0 0"/>
<parent link="right_knee_link"/>
<child link="right_ankle_pitch_link"/>
<axis xyz="0 1 0"/>
<limit lower="-0.87267" upper="0.5236" effort="50" velocity="37"/>
</joint>
<link name="right_ankle_roll_link">
<inertial>
<origin xyz="0.026505 0 -0.016425" rpy="0 0 0"/>
<mass value="0.608"/>
<inertia ixx="0.0002231" ixy="-2E-07" ixz="8.91E-05" iyy="0.0016161" iyz="1E-07" izz="0.0016667"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/right_ankle_roll_link.STL"/>
</geometry>
<material name="dark">
<color rgba="0.2 0.2 0.2 1"/>
</material>
</visual>
<collision>
<origin xyz="-0.05 0.025 -0.03" rpy="0 0 0"/>
<geometry>
<sphere radius="0.005"/>
</geometry>
</collision>
<collision>
<origin xyz="-0.05 -0.025 -0.03" rpy="0 0 0"/>
<geometry>
<sphere radius="0.005"/>
</geometry>
</collision>
<collision>
<origin xyz="0.12 0.03 -0.03" rpy="0 0 0"/>
<geometry>
<sphere radius="0.005"/>
</geometry>
</collision>
<collision>
<origin xyz="0.12 -0.03 -0.03" rpy="0 0 0"/>
<geometry>
<sphere radius="0.005"/>
</geometry>
</collision>
</link>
<joint name="right_ankle_roll_joint" type="revolute">
<origin xyz="0 0 -0.017558" rpy="0 0 0"/>
<parent link="right_ankle_pitch_link"/>
<child link="right_ankle_roll_link"/>
<axis xyz="1 0 0"/>
<limit lower="-0.2618" upper="0.2618" effort="50" velocity="37"/>
</joint>
<!-- Torso -->
<link name="waist_yaw_fixed_link">
<inertial>
<origin xyz="0.003964 0 0.018769" rpy="0 0 0"/>
<mass value="0.244"/>
<inertia ixx="9.9587E-05" ixy="-1.833E-06" ixz="-1.2617E-05" iyy="0.00012411" iyz="-1.18E-07" izz="0.00015586"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/waist_yaw_link.STL"/>
</geometry>
<material name="white">
<color rgba="0.7 0.7 0.7 1"/>
</material>
</visual>
</link>
<joint name="waist_yaw_fixed_joint" type="fixed">
<origin xyz="0.0039635 0 -0.054" rpy="0 0 0"/>
<parent link="torso_link"/>
<child link="waist_yaw_fixed_link"/>
</joint>
<joint name="waist_yaw_joint" type="revolute">
<origin xyz="-0.0039635 0 0.054" rpy="0 0 0"/>
<parent link="pelvis"/>
<child link="torso_link"/>
<axis xyz="0 0 1"/>
<limit lower="-2.618" upper="2.618" effort="88" velocity="32"/>
</joint>
<link name="torso_link">
<inertial>
<origin xyz="0.002601 0.000257 0.153719" rpy="0 0 0"/>
<mass value="8.562"/>
<inertia ixx="0.065674966" ixy="-8.597E-05" ixz="-0.001737252" iyy="0.053535188" iyz="8.6899E-05" izz="0.030808125"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/torso_link.STL"/>
</geometry>
<material name="white">
<color rgba="0.7 0.7 0.7 1"/>
</material>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/torso_link.STL"/>
</geometry>
</collision>
</link>
<!-- LOGO -->
<joint name="logo_joint" type="fixed">
<origin xyz="0.0039635 0 -0.054" rpy="0 0 0"/>
<parent link="torso_link"/>
<child link="logo_link"/>
</joint>
<link name="logo_link">
<inertial>
<origin xyz="0 0 0" rpy="0 0 0"/>
<mass value="0.001"/>
<inertia ixx="1e-7" ixy="0" ixz="0" iyy="1e-7" iyz="0" izz="1e-7"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/logo_link.STL"/>
</geometry>
<material name="dark">
<color rgba="0.2 0.2 0.2 1"/>
</material>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/logo_link.STL"/>
</geometry>
</collision>
</link>
<!-- Head -->
<link name="head_link">
<inertial>
<origin xyz="0.005267 0.000299 0.449869" rpy="0 0 0"/>
<mass value="1.036"/>
<inertia ixx="0.004085051" ixy="-2.543E-06" ixz="-6.9455E-05" iyy="0.004185212" iyz="-3.726E-06" izz="0.001807911"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/head_link.STL"/>
</geometry>
<material name="dark">
<color rgba="0.2 0.2 0.2 1"/>
</material>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/head_link.STL"/>
</geometry>
</collision>
</link>
<joint name="head_joint" type="fixed">
<origin xyz="0.0039635 0 -0.054" rpy="0 0 0"/>
<parent link="torso_link"/>
<child link="head_link"/>
</joint>
<!-- Waist Support -->
<link name="waist_support_link">
<inertial>
<origin xyz="0 0 0" rpy="0 0 0"/>
<mass value="0.001"/>
<inertia ixx="1e-7" ixy="0" ixz="0" iyy="1e-7" iyz="0" izz="1e-7"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/waist_support_link.STL"/>
</geometry>
<material name="white">
<color rgba="0.7 0.7 0.7 1"/>
</material>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/waist_support_link.STL"/>
</geometry>
</collision>
</link>
<joint name="waist_support_joint" type="fixed">
<origin xyz="0.0039635 0 -0.054" rpy="0 0 0"/>
<parent link="torso_link"/>
<child link="waist_support_link"/>
</joint>
<!-- IMU -->
<link name="imu_in_torso"></link>
<joint name="imu_in_torso_joint" type="fixed">
<origin xyz="-0.03959 -0.00224 0.13792" rpy="0 0 0"/>
<parent link="torso_link"/>
<child link="imu_in_torso"/>
</joint>
<link name="imu_in_pelvis"></link>
<joint name="imu_in_pelvis_joint" type="fixed">
<origin xyz="0.04525 0 -0.08339" rpy="0 0 0"/>
<parent link="pelvis"/>
<child link="imu_in_pelvis"/>
</joint>
<!-- d435 -->
<link name="d435_link"></link>
<joint name="d435_joint" type="fixed">
<origin xyz="0.0576235 0.01753 0.41987" rpy="0 0.8307767239493009 0"/>
<parent link="torso_link"/>
<child link="d435_link"/>
</joint>
<!-- mid360 -->
<link name="mid360_link"></link>
<joint name="mid360_joint" type="fixed">
<origin xyz="0.0002835 0.00003 0.40618" rpy="0 0.04014257279586953 0"/>
<parent link="torso_link"/>
<child link="mid360_link"/>
</joint>
<!-- Arm -->
<link name="left_shoulder_pitch_link">
<inertial>
<origin xyz="0 0.035892 -0.011628" rpy="0 0 0"/>
<mass value="0.718"/>
<inertia ixx="0.0004291" ixy="-9.2E-06" ixz="6.4E-06" iyy="0.000453" iyz="2.26E-05" izz="0.000423"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/left_shoulder_pitch_link.STL"/>
</geometry>
<material name="white">
<color rgba="0.7 0.7 0.7 1"/>
</material>
</visual>
<collision>
<origin xyz="0 0.04 -0.01" rpy="0 1.5707963267948966 0"/>
<geometry>
<cylinder radius="0.03" length="0.05"/>
</geometry>
</collision>
</link>
<joint name="left_shoulder_pitch_joint" type="revolute">
<origin xyz="0.0039563 0.10022 0.23778" rpy="0.27931 5.4949E-05 -0.00019159"/>
<parent link="torso_link"/>
<child link="left_shoulder_pitch_link"/>
<axis xyz="0 1 0"/>
<limit lower="-3.0892" upper="2.6704" effort="25" velocity="37"/>
</joint>
<link name="left_shoulder_roll_link">
<inertial>
<origin xyz="-0.000227 0.00727 -0.063243" rpy="0 0 0"/>
<mass value="0.643"/>
<inertia ixx="0.0006177" ixy="-1E-06" ixz="8.7E-06" iyy="0.0006912" iyz="-5.3E-06" izz="0.0003894"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/left_shoulder_roll_link.STL"/>
</geometry>
<material name="white">
<color rgba="0.7 0.7 0.7 1"/>
</material>
</visual>
<collision>
<origin xyz="-0.004 0.006 -0.053" rpy="0 0 0"/>
<geometry>
<cylinder radius="0.03" length="0.03"/>
</geometry>
</collision>
</link>
<joint name="left_shoulder_roll_joint" type="revolute">
<origin xyz="0 0.038 -0.013831" rpy="-0.27925 0 0"/>
<parent link="left_shoulder_pitch_link"/>
<child link="left_shoulder_roll_link"/>
<axis xyz="1 0 0"/>
<limit lower="-1.5882" upper="2.2515" effort="25" velocity="37"/>
</joint>
<link name="left_shoulder_yaw_link">
<inertial>
<origin xyz="0.010773 -0.002949 -0.072009" rpy="0 0 0"/>
<mass value="0.734"/>
<inertia ixx="0.0009988" ixy="7.9E-06" ixz="0.0001412" iyy="0.0010605" iyz="-2.86E-05" izz="0.0004354"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/left_shoulder_yaw_link.STL"/>
</geometry>
<material name="white">
<color rgba="0.7 0.7 0.7 1"/>
</material>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/left_shoulder_yaw_link.STL"/>
</geometry>
</collision>
</link>
<joint name="left_shoulder_yaw_joint" type="revolute">
<origin xyz="0 0.00624 -0.1032" rpy="0 0 0"/>
<parent link="left_shoulder_roll_link"/>
<child link="left_shoulder_yaw_link"/>
<axis xyz="0 0 1"/>
<limit lower="-2.618" upper="2.618" effort="25" velocity="37"/>
</joint>
<link name="left_elbow_link">
<inertial>
<origin xyz="0.064956 0.004454 -0.010062" rpy="0 0 0"/>
<mass value="0.6"/>
<inertia ixx="0.0002891" ixy="6.53E-05" ixz="1.72E-05" iyy="0.0004152" iyz="-5.6E-06" izz="0.0004197"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/left_elbow_link.STL"/>
</geometry>
<material name="white">
<color rgba="0.7 0.7 0.7 1"/>
</material>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/left_elbow_link.STL"/>
</geometry>
</collision>
</link>
<joint name="left_elbow_joint" type="revolute">
<origin xyz="0.015783 0 -0.080518" rpy="0 0 0"/>
<parent link="left_shoulder_yaw_link"/>
<child link="left_elbow_link"/>
<axis xyz="0 1 0"/>
<limit lower="-1.0472" upper="2.0944" effort="25" velocity="37"/>
</joint>
<joint name="left_wrist_roll_joint" type="revolute">
<origin xyz="0.100 0.00188791 -0.010" rpy="0 0 0"/>
<axis xyz="1 0 0"/>
<parent link="left_elbow_link"/>
<child link="left_wrist_roll_rubber_hand"/>
<limit effort="25" velocity="37" lower="-1.972222054" upper="1.972222054"/>
</joint>
<link name="left_wrist_roll_rubber_hand">
<inertial>
<origin xyz="0.10794656650 0.00163511945 0.00202244863" rpy="0 0 0"/>
<mass value="0.35692864"/>
<inertia ixx="0.00019613494735" ixy="-0.00000419816908" ixz="-0.00003950860580" iyy="0.00200280358206" iyz="0.00000249774203" izz="0.00194181412808"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/left_wrist_roll_rubber_hand.STL"/>
</geometry>
<material name="white">
<color rgba="0.7 0.7 0.7 1"/>
</material>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/left_wrist_roll_rubber_hand.STL"/>
</geometry>
</collision>
</link>
<link name="right_shoulder_pitch_link">
<inertial>
<origin xyz="0 -0.035892 -0.011628" rpy="0 0 0"/>
<mass value="0.718"/>
<inertia ixx="0.0004291" ixy="9.2E-06" ixz="6.4E-06" iyy="0.000453" iyz="-2.26E-05" izz="0.000423"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/right_shoulder_pitch_link.STL"/>
</geometry>
<material name="white">
<color rgba="0.7 0.7 0.7 1"/>
</material>
</visual>
<collision>
<origin xyz="0 -0.04 -0.01" rpy="0 1.5707963267948966 0"/>
<geometry>
<cylinder radius="0.03" length="0.05"/>
</geometry>
</collision>
</link>
<joint name="right_shoulder_pitch_joint" type="revolute">
<origin xyz="0.0039563 -0.10021 0.23778" rpy="-0.27931 5.4949E-05 0.00019159"/>
<parent link="torso_link"/>
<child link="right_shoulder_pitch_link"/>
<axis xyz="0 1 0"/>
<limit lower="-3.0892" upper="2.6704" effort="25" velocity="37"/>
</joint>
<link name="right_shoulder_roll_link">
<inertial>
<origin xyz="-0.000227 -0.00727 -0.063243" rpy="0 0 0"/>
<mass value="0.643"/>
<inertia ixx="0.0006177" ixy="1E-06" ixz="8.7E-06" iyy="0.0006912" iyz="5.3E-06" izz="0.0003894"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/right_shoulder_roll_link.STL"/>
</geometry>
<material name="white">
<color rgba="0.7 0.7 0.7 1"/>
</material>
</visual>
<collision>
<origin xyz="-0.004 -0.006 -0.053" rpy="0 0 0"/>
<geometry>
<cylinder radius="0.03" length="0.03"/>
</geometry>
</collision>
</link>
<joint name="right_shoulder_roll_joint" type="revolute">
<origin xyz="0 -0.038 -0.013831" rpy="0.27925 0 0"/>
<parent link="right_shoulder_pitch_link"/>
<child link="right_shoulder_roll_link"/>
<axis xyz="1 0 0"/>
<limit lower="-2.2515" upper="1.5882" effort="25" velocity="37"/>
</joint>
<link name="right_shoulder_yaw_link">
<inertial>
<origin xyz="0.010773 0.002949 -0.072009" rpy="0 0 0"/>
<mass value="0.734"/>
<inertia ixx="0.0009988" ixy="-7.9E-06" ixz="0.0001412" iyy="0.0010605" iyz="2.86E-05" izz="0.0004354"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/right_shoulder_yaw_link.STL"/>
</geometry>
<material name="white">
<color rgba="0.7 0.7 0.7 1"/>
</material>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/right_shoulder_yaw_link.STL"/>
</geometry>
</collision>
</link>
<joint name="right_shoulder_yaw_joint" type="revolute">
<origin xyz="0 -0.00624 -0.1032" rpy="0 0 0"/>
<parent link="right_shoulder_roll_link"/>
<child link="right_shoulder_yaw_link"/>
<axis xyz="0 0 1"/>
<limit lower="-2.618" upper="2.618" effort="25" velocity="37"/>
</joint>
<link name="right_elbow_link">
<inertial>
<origin xyz="0.064956 -0.004454 -0.010062" rpy="0 0 0"/>
<mass value="0.6"/>
<inertia ixx="0.0002891" ixy="-6.53E-05" ixz="1.72E-05" iyy="0.0004152" iyz="5.6E-06" izz="0.0004197"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/right_elbow_link.STL"/>
</geometry>
<material name="white">
<color rgba="0.7 0.7 0.7 1"/>
</material>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/right_elbow_link.STL"/>
</geometry>
</collision>
</link>
<joint name="right_elbow_joint" type="revolute">
<origin xyz="0.015783 0 -0.080518" rpy="0 0 0"/>
<parent link="right_shoulder_yaw_link"/>
<child link="right_elbow_link"/>
<axis xyz="0 1 0"/>
<limit lower="-1.0472" upper="2.0944" effort="25" velocity="37"/>
</joint>
<joint name="right_wrist_roll_joint" type="revolute">
<origin xyz="0.100 -0.00188791 -0.010" rpy="0 0 0"/>
<axis xyz="1 0 0"/>
<parent link="right_elbow_link"/>
<child link="right_wrist_roll_rubber_hand"/>
<limit effort="25" velocity="37" lower="-1.972222054" upper="1.972222054"/>
</joint>
<link name="right_wrist_roll_rubber_hand">
<inertial>
<origin xyz="0.10794656650 -0.00163511945 0.00202244863" rpy="0 0 0"/>
<mass value="0.35692864"/>
<inertia ixx="0.00019613494735" ixy="0.00000419816908" ixz="-0.00003950860580" iyy="0.00200280358206" iyz="-0.00000249774203" izz="0.00194181412808"/>
</inertial>
<visual>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/right_wrist_roll_rubber_hand.STL"/>
</geometry>
<material name="white">
<color rgba="0.7 0.7 0.7 1"/>
</material>
</visual>
<collision>
<origin xyz="0 0 0" rpy="0 0 0"/>
<geometry>
<mesh filename="meshes/right_wrist_roll_rubber_hand.STL"/>
</geometry>
</collision>
</link>
</robot>
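The URDF above gives every revolute joint an explicit `<limit>` element (radians for `lower`/`upper`, N·m for `effort`) while fixed joints such as the sensor mounts carry none. A minimal sketch of pulling those limits out with Python's stdlib XML parser — the inline snippet below is a hypothetical excerpt mirroring the file's structure, not the full model; parsing the real file would use `ET.parse("g1.urdf")` instead:

```python
import xml.etree.ElementTree as ET

# Hypothetical inline excerpt with the same joint/limit shape as the URDF above.
URDF_SNIPPET = """
<robot name="g1">
  <joint name="left_elbow_joint" type="revolute">
    <axis xyz="0 1 0"/>
    <limit lower="-1.0472" upper="2.0944" effort="25" velocity="37"/>
  </joint>
  <joint name="left_wrist_roll_joint" type="revolute">
    <axis xyz="1 0 0"/>
    <limit effort="25" velocity="37" lower="-1.972222054" upper="1.972222054"/>
  </joint>
  <joint name="waist_support_joint" type="fixed"/>
</robot>
"""

def joint_limits(urdf_xml: str) -> dict[str, tuple[float, float]]:
    """Map each revolute joint name to its (lower, upper) limit in radians."""
    root = ET.fromstring(urdf_xml)
    limits = {}
    for joint in root.iter("joint"):
        if joint.get("type") != "revolute":
            continue  # fixed joints (IMU/camera mounts, waist support) have no limits
        lim = joint.find("limit")
        limits[joint.get("name")] = (float(lim.get("lower")), float(lim.get("upper")))
    return limits

print(joint_limits(URDF_SNIPPET))
# → {'left_elbow_joint': (-1.0472, 2.0944), 'left_wrist_roll_joint': (-1.972222054, 1.972222054)}
```

The same ranges reappear in the MJCF file below as `range="lower upper"` on each `<joint>` (e.g. `left_elbow_joint` has `range="-1.0472 2.0944"`), so a script like this can cross-check that the two model files stay in sync.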
@@ -0,0 +1,408 @@
<mujoco model="g1">
<compiler angle="radian" meshdir="meshes"/>
<asset>
<mesh name="pelvis" file="pelvis.STL"/>
<mesh name="pelvis_contour_link" file="pelvis_contour_link.STL"/>
<mesh name="left_hip_pitch_link" file="left_hip_pitch_link.STL"/>
<mesh name="left_hip_roll_link" file="left_hip_roll_link.STL"/>
<mesh name="left_hip_yaw_link" file="left_hip_yaw_link.STL"/>
<mesh name="left_knee_link" file="left_knee_link.STL"/>
<mesh name="left_ankle_pitch_link" file="left_ankle_pitch_link.STL"/>
<mesh name="left_ankle_roll_link" file="left_ankle_roll_link.STL"/>
<mesh name="right_hip_pitch_link" file="right_hip_pitch_link.STL"/>
<mesh name="right_hip_roll_link" file="right_hip_roll_link.STL"/>
<mesh name="right_hip_yaw_link" file="right_hip_yaw_link.STL"/>
<mesh name="right_knee_link" file="right_knee_link.STL"/>
<mesh name="right_ankle_pitch_link" file="right_ankle_pitch_link.STL"/>
<mesh name="right_ankle_roll_link" file="right_ankle_roll_link.STL"/>
<mesh name="waist_yaw_link" file="waist_yaw_link_rev_1_0.STL"/>
<mesh name="waist_roll_link" file="waist_roll_link_rev_1_0.STL"/>
<mesh name="torso_link" file="torso_link_rev_1_0.STL"/>
<mesh name="logo_link" file="logo_link.STL"/>
<mesh name="head_link" file="head_link.STL"/>
<mesh name="left_shoulder_pitch_link" file="left_shoulder_pitch_link.STL"/>
<mesh name="left_shoulder_roll_link" file="left_shoulder_roll_link.STL"/>
<mesh name="left_shoulder_yaw_link" file="left_shoulder_yaw_link.STL"/>
<mesh name="left_elbow_link" file="left_elbow_link.STL"/>
<mesh name="left_wrist_roll_link" file="left_wrist_roll_link.STL"/>
<mesh name="left_wrist_pitch_link" file="left_wrist_pitch_link.STL"/>
<mesh name="left_wrist_yaw_link" file="left_wrist_yaw_link.STL"/>
<mesh name="left_hand_palm_link" file="left_hand_palm_link.STL"/>
<mesh name="left_hand_thumb_0_link" file="left_hand_thumb_0_link.STL"/>
<mesh name="left_hand_thumb_1_link" file="left_hand_thumb_1_link.STL"/>
<mesh name="left_hand_thumb_2_link" file="left_hand_thumb_2_link.STL"/>
<mesh name="left_hand_middle_0_link" file="left_hand_middle_0_link.STL"/>
<mesh name="left_hand_middle_1_link" file="left_hand_middle_1_link.STL"/>
<mesh name="left_hand_index_0_link" file="left_hand_index_0_link.STL"/>
<mesh name="left_hand_index_1_link" file="left_hand_index_1_link.STL"/>
<mesh name="right_shoulder_pitch_link" file="right_shoulder_pitch_link.STL"/>
<mesh name="right_shoulder_roll_link" file="right_shoulder_roll_link.STL"/>
<mesh name="right_shoulder_yaw_link" file="right_shoulder_yaw_link.STL"/>
<mesh name="right_elbow_link" file="right_elbow_link.STL"/>
<mesh name="right_wrist_roll_link" file="right_wrist_roll_link.STL"/>
<mesh name="right_wrist_pitch_link" file="right_wrist_pitch_link.STL"/>
<mesh name="right_wrist_yaw_link" file="right_wrist_yaw_link.STL"/>
<mesh name="right_hand_palm_link" file="right_hand_palm_link.STL"/>
<mesh name="right_hand_thumb_0_link" file="right_hand_thumb_0_link.STL"/>
<mesh name="right_hand_thumb_1_link" file="right_hand_thumb_1_link.STL"/>
<mesh name="right_hand_thumb_2_link" file="right_hand_thumb_2_link.STL"/>
<mesh name="right_hand_middle_0_link" file="right_hand_middle_0_link.STL"/>
<mesh name="right_hand_middle_1_link" file="right_hand_middle_1_link.STL"/>
<mesh name="right_hand_index_0_link" file="right_hand_index_0_link.STL"/>
<mesh name="right_hand_index_1_link" file="right_hand_index_1_link.STL"/>
</asset>
<worldbody>
<body name="pelvis" pos="0 0 0.793">
<inertial pos="0 0 -0.07605" quat="1 0 -0.000399148 0" mass="3.813" diaginertia="0.010549 0.0093089 0.0079184"/>
<joint name="floating_base_joint" type="free" limited="false" actuatorfrclimited="false"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.2 0.2 0.2 1" mesh="pelvis"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="pelvis_contour_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="pelvis_contour_link"/>
<site name="imu_in_pelvis" size="0.01" pos="0.04525 0 -0.08339"/>
<body name="left_hip_pitch_link" pos="0 0.064452 -0.1027">
<inertial pos="0.002741 0.047791 -0.02606" quat="0.954862 0.293964 0.0302556 0.030122" mass="1.35" diaginertia="0.00181517 0.00153422 0.00116212"/>
<joint name="left_hip_pitch_joint" pos="0 0 0" axis="0 1 0" range="-2.5307 2.8798" actuatorfrcrange="-88 88"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.2 0.2 0.2 1" mesh="left_hip_pitch_link"/>
<geom type="mesh" rgba="0.2 0.2 0.2 1" mesh="left_hip_pitch_link"/>
<body name="left_hip_roll_link" pos="0 0.052 -0.030465" quat="0.996179 0 -0.0873386 0">
<inertial pos="0.029812 -0.001045 -0.087934" quat="0.977808 -1.97119e-05 0.205576 -0.0403793" mass="1.52" diaginertia="0.00254986 0.00241169 0.00148755"/>
<joint name="left_hip_roll_joint" pos="0 0 0" axis="1 0 0" range="-0.5236 2.9671" actuatorfrcrange="-139 139"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_hip_roll_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_hip_roll_link"/>
<body name="left_hip_yaw_link" pos="0.025001 0 -0.12412">
<inertial pos="-0.057709 -0.010981 -0.15078" quat="0.600598 0.15832 0.223482 0.751181" mass="1.702" diaginertia="0.00776166 0.00717575 0.00160139"/>
<joint name="left_hip_yaw_joint" pos="0 0 0" axis="0 0 1" range="-2.7576 2.7576" actuatorfrcrange="-88 88"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_hip_yaw_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_hip_yaw_link"/>
<body name="left_knee_link" pos="-0.078273 0.0021489 -0.17734" quat="0.996179 0 0.0873386 0">
<inertial pos="0.005457 0.003964 -0.12074" quat="0.923418 -0.0327699 0.0158246 0.382067" mass="1.932" diaginertia="0.0113804 0.0112778 0.00146458"/>
<joint name="left_knee_joint" pos="0 0 0" axis="0 1 0" range="-0.087267 2.8798" actuatorfrcrange="-139 139"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_knee_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_knee_link"/>
<body name="left_ankle_pitch_link" pos="0 -9.4445e-05 -0.30001">
<inertial pos="-0.007269 0 0.011137" quat="0.603053 0.369225 0.369225 0.603053" mass="0.074" diaginertia="1.89e-05 1.40805e-05 6.9195e-06"/>
<joint name="left_ankle_pitch_joint" pos="0 0 0" axis="0 1 0" range="-0.87267 0.5236" actuatorfrcrange="-50 50"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_ankle_pitch_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_ankle_pitch_link"/>
<body name="left_ankle_roll_link" pos="0 0 -0.017558">
<inertial pos="0.026505 0 -0.016425" quat="-0.000481092 0.728482 -0.000618967 0.685065" mass="0.608" diaginertia="0.00167218 0.0016161 0.000217621"/>
<joint name="left_ankle_roll_joint" pos="0 0 0" axis="1 0 0" range="-0.2618 0.2618" actuatorfrcrange="-50 50"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.2 0.2 0.2 1" mesh="left_ankle_roll_link"/>
<geom size="0.005" pos="-0.05 0.025 -0.03" rgba="0.2 0.2 0.2 1"/>
<geom size="0.005" pos="-0.05 -0.025 -0.03" rgba="0.2 0.2 0.2 1"/>
<geom size="0.005" pos="0.12 0.03 -0.03" rgba="0.2 0.2 0.2 1"/>
<geom size="0.005" pos="0.12 -0.03 -0.03" rgba="0.2 0.2 0.2 1"/>
</body>
</body>
</body>
</body>
</body>
</body>
<body name="right_hip_pitch_link" pos="0 -0.064452 -0.1027">
<inertial pos="0.002741 -0.047791 -0.02606" quat="0.954862 -0.293964 0.0302556 -0.030122" mass="1.35" diaginertia="0.00181517 0.00153422 0.00116212"/>
<joint name="right_hip_pitch_joint" pos="0 0 0" axis="0 1 0" range="-2.5307 2.8798" actuatorfrcrange="-88 88"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.2 0.2 0.2 1" mesh="right_hip_pitch_link"/>
<geom type="mesh" rgba="0.2 0.2 0.2 1" mesh="right_hip_pitch_link"/>
<body name="right_hip_roll_link" pos="0 -0.052 -0.030465" quat="0.996179 0 -0.0873386 0">
<inertial pos="0.029812 0.001045 -0.087934" quat="0.977808 1.97119e-05 0.205576 0.0403793" mass="1.52" diaginertia="0.00254986 0.00241169 0.00148755"/>
<joint name="right_hip_roll_joint" pos="0 0 0" axis="1 0 0" range="-2.9671 0.5236" actuatorfrcrange="-139 139"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_hip_roll_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_hip_roll_link"/>
<body name="right_hip_yaw_link" pos="0.025001 0 -0.12412">
<inertial pos="-0.057709 0.010981 -0.15078" quat="0.751181 0.223482 0.15832 0.600598" mass="1.702" diaginertia="0.00776166 0.00717575 0.00160139"/>
<joint name="right_hip_yaw_joint" pos="0 0 0" axis="0 0 1" range="-2.7576 2.7576" actuatorfrcrange="-88 88"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_hip_yaw_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_hip_yaw_link"/>
<body name="right_knee_link" pos="-0.078273 -0.0021489 -0.17734" quat="0.996179 0 0.0873386 0">
<inertial pos="0.005457 -0.003964 -0.12074" quat="0.923439 0.0345276 0.0116333 -0.382012" mass="1.932" diaginertia="0.011374 0.0112843 0.00146452"/>
<joint name="right_knee_joint" pos="0 0 0" axis="0 1 0" range="-0.087267 2.8798" actuatorfrcrange="-139 139"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_knee_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_knee_link"/>
<body name="right_ankle_pitch_link" pos="0 9.4445e-05 -0.30001">
<inertial pos="-0.007269 0 0.011137" quat="0.603053 0.369225 0.369225 0.603053" mass="0.074" diaginertia="1.89e-05 1.40805e-05 6.9195e-06"/>
<joint name="right_ankle_pitch_joint" pos="0 0 0" axis="0 1 0" range="-0.87267 0.5236" actuatorfrcrange="-50 50"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_ankle_pitch_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_ankle_pitch_link"/>
<body name="right_ankle_roll_link" pos="0 0 -0.017558">
<inertial pos="0.026505 0 -0.016425" quat="0.000481092 0.728482 0.000618967 0.685065" mass="0.608" diaginertia="0.00167218 0.0016161 0.000217621"/>
<joint name="right_ankle_roll_joint" pos="0 0 0" axis="1 0 0" range="-0.2618 0.2618" actuatorfrcrange="-50 50"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.2 0.2 0.2 1" mesh="right_ankle_roll_link"/>
<geom size="0.005" pos="-0.05 0.025 -0.03" rgba="0.2 0.2 0.2 1"/>
<geom size="0.005" pos="-0.05 -0.025 -0.03" rgba="0.2 0.2 0.2 1"/>
<geom size="0.005" pos="0.12 0.03 -0.03" rgba="0.2 0.2 0.2 1"/>
<geom size="0.005" pos="0.12 -0.03 -0.03" rgba="0.2 0.2 0.2 1"/>
</body>
</body>
</body>
</body>
</body>
</body>
<body name="waist_yaw_link">
<inertial pos="0.003494 0.000233 0.018034" quat="0.289697 0.591001 -0.337795 0.672821" mass="0.214" diaginertia="0.000163531 0.000107714 0.000102205"/>
<joint name="waist_yaw_joint" pos="0 0 0" axis="0 0 1" range="-2.618 2.618" actuatorfrcrange="-88 88"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="waist_yaw_link"/>
<body name="waist_roll_link" pos="-0.0039635 0 0.044">
<inertial pos="0 2.3e-05 0" quat="0.5 0.5 -0.5 0.5" mass="0.086" diaginertia="8.245e-06 7.079e-06 6.339e-06"/>
<joint name="waist_roll_joint" pos="0 0 0" axis="1 0 0" range="-0.52 0.52" actuatorfrcrange="-50 50"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="waist_roll_link"/>
<body name="torso_link">
<inertial pos="0.00203158 0.000339683 0.184568" quat="0.999803 -6.03319e-05 0.0198256 0.00131986" mass="7.818" diaginertia="0.121847 0.109825 0.0273735"/>
<joint name="waist_pitch_joint" pos="0 0 0" axis="0 1 0" range="-0.52 0.52" actuatorfrcrange="-50 50"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="torso_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="torso_link"/>
<geom pos="0.0039635 0 -0.044" quat="1 0 0 0" type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.2 0.2 0.2 1" mesh="logo_link"/>
<geom pos="0.0039635 0 -0.044" quat="1 0 0 0" type="mesh" rgba="0.2 0.2 0.2 1" mesh="logo_link"/>
<geom pos="0.0039635 0 -0.044" type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.2 0.2 0.2 1" mesh="head_link"/>
<geom pos="0.0039635 0 -0.044" type="mesh" rgba="0.2 0.2 0.2 1" mesh="head_link"/>
<site name="imu_in_torso" size="0.01" pos="-0.03959 -0.00224 0.14792"/>
<body name="left_shoulder_pitch_link" pos="0.0039563 0.10022 0.24778" quat="0.990264 0.139201 1.38722e-05 -9.86868e-05">
<inertial pos="0 0.035892 -0.011628" quat="0.654152 0.0130458 -0.326267 0.68225" mass="0.718" diaginertia="0.000465864 0.000432842 0.000406394"/>
<joint name="left_shoulder_pitch_joint" pos="0 0 0" axis="0 1 0" range="-3.0892 2.6704" actuatorfrcrange="-25 25"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_shoulder_pitch_link"/>
<geom size="0.03 0.025" pos="0 0.04 -0.01" quat="0.707107 0 0.707107 0" type="cylinder" rgba="0.7 0.7 0.7 1"/>
<body name="left_shoulder_roll_link" pos="0 0.038 -0.013831" quat="0.990268 -0.139172 0 0">
<inertial pos="-0.000227 0.00727 -0.063243" quat="0.701256 -0.0196223 -0.00710317 0.712604" mass="0.643" diaginertia="0.000691311 0.000618011 0.000388977"/>
<joint name="left_shoulder_roll_joint" pos="0 0 0" axis="1 0 0" range="-1.5882 2.2515" actuatorfrcrange="-25 25"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_shoulder_roll_link"/>
<geom size="0.03 0.015" pos="-0.004 0.006 -0.053" type="cylinder" rgba="0.7 0.7 0.7 1"/>
<body name="left_shoulder_yaw_link" pos="0 0.00624 -0.1032">
<inertial pos="0.010773 -0.002949 -0.072009" quat="0.716879 -0.0964829 -0.0679942 0.687134" mass="0.734" diaginertia="0.00106187 0.00103217 0.000400661"/>
<joint name="left_shoulder_yaw_joint" pos="0 0 0" axis="0 0 1" range="-2.618 2.618" actuatorfrcrange="-25 25"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_shoulder_yaw_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_shoulder_yaw_link"/>
<body name="left_elbow_link" pos="0.015783 0 -0.080518">
<inertial pos="0.064956 0.004454 -0.010062" quat="0.541765 0.636132 0.388821 0.388129" mass="0.6" diaginertia="0.000443035 0.000421612 0.000259353"/>
<joint name="left_elbow_joint" pos="0 0 0" axis="0 1 0" range="-1.0472 2.0944" actuatorfrcrange="-25 25"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_elbow_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_elbow_link"/>
<body name="left_wrist_roll_link" pos="0.1 0.00188791 -0.01">
<inertial pos="0.0171394 0.000537591 4.8864e-07" quat="0.575338 0.411667 -0.574906 0.411094" mass="0.085445" diaginertia="5.48211e-05 4.96646e-05 3.57798e-05"/>
<joint name="left_wrist_roll_joint" pos="0 0 0" axis="1 0 0" range="-1.97222 1.97222" actuatorfrcrange="-25 25"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_wrist_roll_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_wrist_roll_link"/>
<body name="left_wrist_pitch_link" pos="0.038 0 0">
<inertial pos="0.0229999 -0.00111685 -0.00111658" quat="0.249998 0.661363 0.293036 0.643608" mass="0.48405" diaginertia="0.000430353 0.000429873 0.000164648"/>
<joint name="left_wrist_pitch_joint" pos="0 0 0" axis="0 1 0" range="-1.61443 1.61443" actuatorfrcrange="-5 5"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_wrist_pitch_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_wrist_pitch_link"/>
<body name="left_wrist_yaw_link" pos="0.046 0 0">
<inertial pos="0.0885506 0.00212216 -0.000374562" quat="0.487149 0.493844 0.513241 0.505358" mass="0.457415" diaginertia="0.00105989 0.000895419 0.000323842"/>
<joint name="left_wrist_yaw_joint" pos="0 0 0" axis="0 0 1" range="-1.61443 1.61443" actuatorfrcrange="-5 5"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_wrist_yaw_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_wrist_yaw_link"/>
<geom pos="0.0415 0.003 0" quat="1 0 0 0" type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_hand_palm_link"/>
<geom pos="0.0415 0.003 0" quat="1 0 0 0" type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_hand_palm_link"/>
<body name="left_hand_thumb_0_link" pos="0.067 0.003 0">
<inertial pos="-0.000884246 -0.00863407 0.000944293" quat="0.462991 0.643965 -0.460173 0.398986" mass="0.0862366" diaginertia="1.6546e-05 1.60058e-05 1.43741e-05"/>
<joint name="left_hand_thumb_0_joint" pos="0 0 0" axis="0 1 0" range="-1.0472 1.0472" actuatorfrcrange="-2.45 2.45"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_hand_thumb_0_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_hand_thumb_0_link"/>
<body name="left_hand_thumb_1_link" pos="-0.0025 -0.0193 0">
<inertial pos="-0.000827888 -0.0354744 -0.0003809" quat="0.685598 0.705471 -0.15207 0.0956069" mass="0.0588507" diaginertia="1.28514e-05 1.22902e-05 5.9666e-06"/>
<joint name="left_hand_thumb_1_joint" pos="0 0 0" axis="0 0 1" range="-0.724312 1.0472" actuatorfrcrange="-1.4 1.4"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_hand_thumb_1_link"/>
<geom size="0.01 0.015 0.01" pos="-0.001 -0.032 0" type="box" rgba="0.7 0.7 0.7 1"/>
<body name="left_hand_thumb_2_link" pos="0 -0.0458 0">
<inertial pos="-0.00171735 -0.0262819 0.000107789" quat="0.703174 0.710977 -0.00017564 -0.00766553" mass="0.0203063" diaginertia="4.61314e-06 3.86645e-06 1.53495e-06"/>
<joint name="left_hand_thumb_2_joint" pos="0 0 0" axis="0 0 1" range="0 1.74533" actuatorfrcrange="-1.4 1.4"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_hand_thumb_2_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_hand_thumb_2_link"/>
</body>
</body>
</body>
<body name="left_hand_middle_0_link" pos="0.1192 0.0046 -0.0285">
<inertial pos="0.0354744 0.000827888 0.0003809" quat="0.391313 0.552395 0.417187 0.606373" mass="0.0588507" diaginertia="1.28514e-05 1.22902e-05 5.9666e-06"/>
<joint name="left_hand_middle_0_joint" pos="0 0 0" axis="0 0 1" range="-1.5708 0" actuatorfrcrange="-1.4 1.4"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_hand_middle_0_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_hand_middle_0_link"/>
<body name="left_hand_middle_1_link" pos="0.0458 0 0">
<inertial pos="0.0262819 0.00171735 -0.000107789" quat="0.502612 0.491799 0.502639 0.502861" mass="0.0203063" diaginertia="4.61314e-06 3.86645e-06 1.53495e-06"/>
<joint name="left_hand_middle_1_joint" pos="0 0 0" axis="0 0 1" range="-1.74533 0" actuatorfrcrange="-1.4 1.4"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_hand_middle_1_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_hand_middle_1_link"/>
</body>
</body>
<body name="left_hand_index_0_link" pos="0.1192 0.0046 0.0285">
<inertial pos="0.0354744 0.000827888 0.0003809" quat="0.391313 0.552395 0.417187 0.606373" mass="0.0588507" diaginertia="1.28514e-05 1.22902e-05 5.9666e-06"/>
<joint name="left_hand_index_0_joint" pos="0 0 0" axis="0 0 1" range="-1.5708 0" actuatorfrcrange="-1.4 1.4"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_hand_index_0_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_hand_index_0_link"/>
<body name="left_hand_index_1_link" pos="0.0458 0 0">
<inertial pos="0.0262819 0.00171735 -0.000107789" quat="0.502612 0.491799 0.502639 0.502861" mass="0.0203063" diaginertia="4.61314e-06 3.86645e-06 1.53495e-06"/>
<joint name="left_hand_index_1_joint" pos="0 0 0" axis="0 0 1" range="-1.74533 0" actuatorfrcrange="-1.4 1.4"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="left_hand_index_1_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="left_hand_index_1_link"/>
</body>
</body>
</body>
</body>
</body>
</body>
</body>
</body>
</body>
<body name="right_shoulder_pitch_link" pos="0.0039563 -0.10021 0.24778" quat="0.990264 -0.139201 1.38722e-05 9.86868e-05">
<inertial pos="0 -0.035892 -0.011628" quat="0.68225 -0.326267 0.0130458 0.654152" mass="0.718" diaginertia="0.000465864 0.000432842 0.000406394"/>
<joint name="right_shoulder_pitch_joint" pos="0 0 0" axis="0 1 0" range="-3.0892 2.6704" actuatorfrcrange="-25 25"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_shoulder_pitch_link"/>
<geom size="0.03 0.025" pos="0 -0.04 -0.01" quat="0.707107 0 0.707107 0" type="cylinder" rgba="0.7 0.7 0.7 1"/>
<body name="right_shoulder_roll_link" pos="0 -0.038 -0.013831" quat="0.990268 0.139172 0 0">
<inertial pos="-0.000227 -0.00727 -0.063243" quat="0.712604 -0.00710317 -0.0196223 0.701256" mass="0.643" diaginertia="0.000691311 0.000618011 0.000388977"/>
<joint name="right_shoulder_roll_joint" pos="0 0 0" axis="1 0 0" range="-2.2515 1.5882" actuatorfrcrange="-25 25"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_shoulder_roll_link"/>
<geom size="0.03 0.015" pos="-0.004 -0.006 -0.053" type="cylinder" rgba="0.7 0.7 0.7 1"/>
<body name="right_shoulder_yaw_link" pos="0 -0.00624 -0.1032">
<inertial pos="0.010773 0.002949 -0.072009" quat="0.687134 -0.0679942 -0.0964829 0.716879" mass="0.734" diaginertia="0.00106187 0.00103217 0.000400661"/>
<joint name="right_shoulder_yaw_joint" pos="0 0 0" axis="0 0 1" range="-2.618 2.618" actuatorfrcrange="-25 25"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_shoulder_yaw_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_shoulder_yaw_link"/>
<body name="right_elbow_link" pos="0.015783 0 -0.080518">
<inertial pos="0.064956 -0.004454 -0.010062" quat="0.388129 0.388821 0.636132 0.541765" mass="0.6" diaginertia="0.000443035 0.000421612 0.000259353"/>
<joint name="right_elbow_joint" pos="0 0 0" axis="0 1 0" range="-1.0472 2.0944" actuatorfrcrange="-25 25"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_elbow_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_elbow_link"/>
<body name="right_wrist_roll_link" pos="0.1 -0.00188791 -0.01">
<inertial pos="0.0171394 -0.000537591 4.8864e-07" quat="0.411667 0.575338 -0.411094 0.574906" mass="0.085445" diaginertia="5.48211e-05 4.96646e-05 3.57798e-05"/>
<joint name="right_wrist_roll_joint" pos="0 0 0" axis="1 0 0" range="-1.97222 1.97222" actuatorfrcrange="-25 25"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_wrist_roll_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_wrist_roll_link"/>
<body name="right_wrist_pitch_link" pos="0.038 0 0">
<inertial pos="0.0229999 0.00111685 -0.00111658" quat="0.643608 0.293036 0.661363 0.249998" mass="0.48405" diaginertia="0.000430353 0.000429873 0.000164648"/>
<joint name="right_wrist_pitch_joint" pos="0 0 0" axis="0 1 0" range="-1.61443 1.61443" actuatorfrcrange="-5 5"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_wrist_pitch_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_wrist_pitch_link"/>
<body name="right_wrist_yaw_link" pos="0.046 0 0">
<inertial pos="0.0885506 -0.00212216 -0.000374562" quat="0.505358 0.513241 0.493844 0.487149" mass="0.457415" diaginertia="0.00105989 0.000895419 0.000323842"/>
<joint name="right_wrist_yaw_joint" pos="0 0 0" axis="0 0 1" range="-1.61443 1.61443" actuatorfrcrange="-5 5"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_wrist_yaw_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_wrist_yaw_link"/>
<geom pos="0.0415 -0.003 0" quat="1 0 0 0" type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_hand_palm_link"/>
<geom pos="0.0415 -0.003 0" quat="1 0 0 0" type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_hand_palm_link"/>
<body name="right_hand_thumb_0_link" pos="0.067 -0.003 0">
<inertial pos="-0.000884246 0.00863407 0.000944293" quat="0.643965 0.462991 -0.398986 0.460173" mass="0.0862366" diaginertia="1.6546e-05 1.60058e-05 1.43741e-05"/>
<joint name="right_hand_thumb_0_joint" pos="0 0 0" axis="0 1 0" range="-1.0472 1.0472" actuatorfrcrange="-2.45 2.45"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_hand_thumb_0_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_hand_thumb_0_link"/>
<body name="right_hand_thumb_1_link" pos="-0.0025 0.0193 0">
<inertial pos="-0.000827888 0.0354744 -0.0003809" quat="0.705471 0.685598 -0.0956069 0.15207" mass="0.0588507" diaginertia="1.28514e-05 1.22902e-05 5.9666e-06"/>
<joint name="right_hand_thumb_1_joint" pos="0 0 0" axis="0 0 1" range="-1.0472 0.724312" actuatorfrcrange="-1.4 1.4"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_hand_thumb_1_link"/>
<geom size="0.01 0.015 0.01" pos="-0.001 0.032 0" type="box" rgba="0.7 0.7 0.7 1"/>
<body name="right_hand_thumb_2_link" pos="0 0.0458 0">
<inertial pos="-0.00171735 0.0262819 0.000107789" quat="0.710977 0.703174 0.00766553 0.00017564" mass="0.0203063" diaginertia="4.61314e-06 3.86645e-06 1.53495e-06"/>
<joint name="right_hand_thumb_2_joint" pos="0 0 0" axis="0 0 1" range="-1.74533 0" actuatorfrcrange="-1.4 1.4"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_hand_thumb_2_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_hand_thumb_2_link"/>
</body>
</body>
</body>
<body name="right_hand_middle_0_link" pos="0.1192 -0.0046 -0.0285">
<inertial pos="0.0354744 -0.000827888 0.0003809" quat="0.606373 0.417187 0.552395 0.391313" mass="0.0588507" diaginertia="1.28514e-05 1.22902e-05 5.9666e-06"/>
<joint name="right_hand_middle_0_joint" pos="0 0 0" axis="0 0 1" range="0 1.5708" actuatorfrcrange="-1.4 1.4"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_hand_middle_0_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_hand_middle_0_link"/>
<body name="right_hand_middle_1_link" pos="0.0458 0 0">
<inertial pos="0.0262819 -0.00171735 -0.000107789" quat="0.502861 0.502639 0.491799 0.502612" mass="0.0203063" diaginertia="4.61314e-06 3.86645e-06 1.53495e-06"/>
<joint name="right_hand_middle_1_joint" pos="0 0 0" axis="0 0 1" range="0 1.74533" actuatorfrcrange="-1.4 1.4"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_hand_middle_1_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_hand_middle_1_link"/>
</body>
</body>
<body name="right_hand_index_0_link" pos="0.1192 -0.0046 0.0285">
<inertial pos="0.0354744 -0.000827888 0.0003809" quat="0.606373 0.417187 0.552395 0.391313" mass="0.0588507" diaginertia="1.28514e-05 1.22902e-05 5.9666e-06"/>
<joint name="right_hand_index_0_joint" pos="0 0 0" axis="0 0 1" range="0 1.5708" actuatorfrcrange="-1.4 1.4"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_hand_index_0_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_hand_index_0_link"/>
<body name="right_hand_index_1_link" pos="0.0458 0 0">
<inertial pos="0.0262819 -0.00171735 -0.000107789" quat="0.502861 0.502639 0.491799 0.502612" mass="0.0203063" diaginertia="4.61314e-06 3.86645e-06 1.53495e-06"/>
<joint name="right_hand_index_1_joint" pos="0 0 0" axis="0 0 1" range="0 1.74533" actuatorfrcrange="-1.4 1.4"/>
<geom type="mesh" contype="0" conaffinity="0" group="1" density="0" rgba="0.7 0.7 0.7 1" mesh="right_hand_index_1_link"/>
<geom type="mesh" rgba="0.7 0.7 0.7 1" mesh="right_hand_index_1_link"/>
</body>
</body>
</body>
</body>
</body>
</body>
</body>
</body>
</body>
</body>
</body>
</body>
</body>
</worldbody>
<actuator>
<motor name="left_hip_pitch_joint" joint="left_hip_pitch_joint"/>
<motor name="left_hip_roll_joint" joint="left_hip_roll_joint"/>
<motor name="left_hip_yaw_joint" joint="left_hip_yaw_joint"/>
<motor name="left_knee_joint" joint="left_knee_joint"/>
<motor name="left_ankle_pitch_joint" joint="left_ankle_pitch_joint"/>
<motor name="left_ankle_roll_joint" joint="left_ankle_roll_joint"/>
<motor name="right_hip_pitch_joint" joint="right_hip_pitch_joint"/>
<motor name="right_hip_roll_joint" joint="right_hip_roll_joint"/>
<motor name="right_hip_yaw_joint" joint="right_hip_yaw_joint"/>
<motor name="right_knee_joint" joint="right_knee_joint"/>
<motor name="right_ankle_pitch_joint" joint="right_ankle_pitch_joint"/>
<motor name="right_ankle_roll_joint" joint="right_ankle_roll_joint"/>
<motor name="waist_yaw_joint" joint="waist_yaw_joint"/>
<motor name="waist_roll_joint" joint="waist_roll_joint"/>
<motor name="waist_pitch_joint" joint="waist_pitch_joint"/>
<motor name="left_shoulder_pitch_joint" joint="left_shoulder_pitch_joint"/>
<motor name="left_shoulder_roll_joint" joint="left_shoulder_roll_joint"/>
<motor name="left_shoulder_yaw_joint" joint="left_shoulder_yaw_joint"/>
<motor name="left_elbow_joint" joint="left_elbow_joint"/>
<motor name="left_wrist_roll_joint" joint="left_wrist_roll_joint"/>
<motor name="left_wrist_pitch_joint" joint="left_wrist_pitch_joint"/>
<motor name="left_wrist_yaw_joint" joint="left_wrist_yaw_joint"/>
<motor name="left_hand_thumb_0_joint" joint="left_hand_thumb_0_joint"/>
<motor name="left_hand_thumb_1_joint" joint="left_hand_thumb_1_joint"/>
<motor name="left_hand_thumb_2_joint" joint="left_hand_thumb_2_joint"/>
<motor name="left_hand_middle_0_joint" joint="left_hand_middle_0_joint"/>
<motor name="left_hand_middle_1_joint" joint="left_hand_middle_1_joint"/>
<motor name="left_hand_index_0_joint" joint="left_hand_index_0_joint"/>
<motor name="left_hand_index_1_joint" joint="left_hand_index_1_joint"/>
<motor name="right_shoulder_pitch_joint" joint="right_shoulder_pitch_joint"/>
<motor name="right_shoulder_roll_joint" joint="right_shoulder_roll_joint"/>
<motor name="right_shoulder_yaw_joint" joint="right_shoulder_yaw_joint"/>
<motor name="right_elbow_joint" joint="right_elbow_joint"/>
<motor name="right_wrist_roll_joint" joint="right_wrist_roll_joint"/>
<motor name="right_wrist_pitch_joint" joint="right_wrist_pitch_joint"/>
<motor name="right_wrist_yaw_joint" joint="right_wrist_yaw_joint"/>
<motor name="right_hand_thumb_0_joint" joint="right_hand_thumb_0_joint"/>
<motor name="right_hand_thumb_1_joint" joint="right_hand_thumb_1_joint"/>
<motor name="right_hand_thumb_2_joint" joint="right_hand_thumb_2_joint"/>
<motor name="right_hand_index_0_joint" joint="right_hand_index_0_joint"/>
<motor name="right_hand_index_1_joint" joint="right_hand_index_1_joint"/>
<motor name="right_hand_middle_0_joint" joint="right_hand_middle_0_joint"/>
<motor name="right_hand_middle_1_joint" joint="right_hand_middle_1_joint"/>
</actuator>
<sensor>
<gyro name="imu-torso-angular-velocity" site="imu_in_torso" noise="5e-4" cutoff="34.9"/>
<accelerometer name="imu-torso-linear-acceleration" site="imu_in_torso" noise="1e-2" cutoff="157"/>
<gyro name="imu-pelvis-angular-velocity" site="imu_in_pelvis" noise="5e-4" cutoff="34.9"/>
<accelerometer name="imu-pelvis-linear-acceleration" site="imu_in_pelvis" noise="1e-2" cutoff="157"/>
</sensor>
<!-- setup scene -->
<statistic center="1.0 0.7 1.0" extent="0.8"/>
<visual>
<headlight diffuse="0.6 0.6 0.6" ambient="0.1 0.1 0.1" specular="0.9 0.9 0.9"/>
<rgba haze="0.15 0.25 0.35 1"/>
<global azimuth="-140" elevation="-20"/>
</visual>
<asset>
<texture type="skybox" builtin="flat" rgb1="0 0 0" rgb2="0 0 0" width="512" height="3072"/>
<texture type="2d" name="groundplane" builtin="checker" mark="edge" rgb1="0.2 0.3 0.4" rgb2="0.1 0.2 0.3" markrgb="0.8 0.8 0.8" width="300" height="300"/>
<material name="groundplane" texture="groundplane" texuniform="true" texrepeat="5 5" reflectance="0.2"/>
</asset>
<worldbody>
<light pos="1 0 3.5" dir="0 0 -1" directional="true"/>
<geom name="floor" size="0 0 0.05" type="plane" material="groundplane"/>
</worldbody>
</mujoco>
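The `<actuator>` block above declares one `<motor>` per controllable joint (43 in total: legs, waist, arms, wrists, and fingers). Loading the full model would normally be done with `mujoco.MjModel.from_xml_path(...)`, which requires the referenced mesh assets to be present on disk. As a self-contained sketch using only the standard library, the snippet below shows how code might enumerate motor names from an MJCF excerpt; the three-motor `mjcf_excerpt` string is a shortened, hypothetical stand-in for the full actuator list.

```python
# Minimal sketch (stdlib only): enumerate <motor> actuators from an MJCF
# excerpt shaped like the <actuator> block above. The excerpt is a
# hypothetical, shortened stand-in for the full 43-motor list.
import xml.etree.ElementTree as ET

mjcf_excerpt = """
<mujoco>
  <actuator>
    <motor name="left_hip_pitch_joint" joint="left_hip_pitch_joint"/>
    <motor name="left_hip_roll_joint" joint="left_hip_roll_joint"/>
    <motor name="waist_yaw_joint" joint="waist_yaw_joint"/>
  </actuator>
</mujoco>
"""

root = ET.fromstring(mjcf_excerpt)
# Each <motor> drives the joint named in its "joint" attribute; here we
# collect the actuator names in declaration order.
motor_names = [m.get("name") for m in root.findall("./actuator/motor")]
print(motor_names)
# → ['left_hip_pitch_joint', 'left_hip_roll_joint', 'waist_yaw_joint']
```

In the real model, the order of `<motor>` elements fixes the layout of MuJoCo's control vector (`data.ctrl`), so enumerating them this way is one approach to building a joint-name-to-index map.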
