mirror of https://github.com/huggingface/lerobot.git
synced 2026-05-11 14:49:43 +00:00

Compare commits (8 commits)

| SHA1 |
| --- |
| 8455efc474 |
| e627d6442e |
| b08a62af89 |
| d028978552 |
| 58bd11caf3 |
| 0fc855df13 |
| dfe16e8b84 |
| 5ac3e568f1 |
@@ -173,7 +173,5 @@ outputs/

# Dev folders
.cache/*
*.stl
*.urdf
*.xml
*.part

@@ -21,6 +21,8 @@

      title: Training with PEFT (e.g., LoRA)
    - local: rename_map
      title: Using Rename Map and Empty Cameras
    - local: umi_pi0_relative_ee
      title: UMI Data with pi0 Relative EE Actions
  title: "Tutorials"
- sections:
  - local: lerobot-dataset-v3

@@ -202,11 +202,22 @@ Here is how the different processors compose. Each arrow is a processor step, an

                 └─────────────────────────────────────────┘

                 ┌─────────────────────────────────────────┐
Representation   │ Absolute ←────→ Relative                │
State Derivation │ Action column ────→ State + Action      │
                 │ DeriveStateFromActionStep (pre only)    │
                 │ (UMI-style: state from action chunk)    │
                 └─────────────────────────────────────────┘

                 ┌─────────────────────────────────────────┐
Action Repr.     │ Absolute ←────→ Relative                │
                 │ RelativeActionsProcessorStep (pre)      │
                 │ AbsoluteActionsProcessorStep (post)     │
                 └─────────────────────────────────────────┘

                 ┌─────────────────────────────────────────┐
State Repr.      │ Absolute ────→ Relative                 │
                 │ RelativeStateProcessorStep (pre only)   │
                 └─────────────────────────────────────────┘

                 ┌─────────────────────────────────────────┐
Normalization    │ Raw ←────→ Normalized                   │
                 │ NormalizerProcessorStep (pre)           │

@@ -216,6 +227,10 @@ Here is how the different processors compose. Each arrow is a processor step, an

A typical training preprocessor might chain: `raw absolute joint actions → relative → normalize`. A typical inference postprocessor: `unnormalize → absolute → (optionally IK to joints)`.

With UMI-style relative proprioception (`use_relative_state=True`), the preprocessor also converts `observation.state` to offsets from the current timestep via `RelativeStateProcessorStep` before normalization. This is a preprocessing-only step (state is an input, not an output).

With `derive_state_from_action=True`, the preprocessor first runs `DeriveStateFromActionStep` to extract a 2-step state from the extended action chunk. This enables full UMI-style training without a separate `observation.state` column. See the [UMI pi0 guide](umi_pi0_relative_ee) for details.

## References

- [Universal Manipulation Interface (UMI)](https://arxiv.org/abs/2402.10329) - Chi et al., 2024. Defines the relative trajectory action representation and compares it with absolute and delta actions.

@@ -0,0 +1,227 @@

# UMI Data with pi0 Relative EE Actions

This guide explains how to train a pi0 policy with UMI-style relative end-effector (EE) actions and deploy it on a real OpenArm robot.

**What we will do:**

1. Prepare the dataset (EE pose + gripper in the action column).
2. Recompute statistics for relative actions.
3. Train pi0 with `derive_state_from_action=true`.
4. Evaluate the trained policy on a real robot.

## Background

[UMI (Universal Manipulation Interface)](https://umi-gripper.github.io) collects manipulation data with hand-held grippers, recovering 6-DoF EE poses via SLAM. The key insight from UMI (Chi et al., 2024) is that the action space must include **both EE trajectory and gripper width**, and actions should be expressed as **relative trajectories** (offsets from the current pose).

### Dataset layout

The dataset should have this structure:

| Feature                   | Shape     | Content                                                  |
| ------------------------- | --------- | -------------------------------------------------------- |
| `observation.images.cam0` | `[3,H,W]` | Wrist camera image                                       |
| `action`                  | `[8]`     | `[x, y, z, ax, ay, az, proximal, distal]` (EE + gripper) |

No separate `observation.pose` or `observation.joints` columns are needed — the model derives its proprioceptive state directly from the action column (`derive_state_from_action=true`).

### Why relative actions?

With relative actions, each action in a chunk is an **offset from the current state** rather than an absolute target:

```
relative_action[i] = absolute_action[t + i] − state[t]
```

UMI's ablations show this is critical: on the cup arrangement task, absolute actions achieve only 25% success versus 100% for relative trajectories. Compared to delta actions (each step relative to the previous one), relative trajectories avoid error accumulation. See the [Action Representations](action_representations) guide for details.
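The formula above and its inverse can be sketched in a few lines of plain NumPy (an illustration only, not the LeRobot processor API; array values are made up):

```python
import numpy as np

def to_relative(action_chunk: np.ndarray, state: np.ndarray) -> np.ndarray:
    """Relative trajectory: every step is an offset from the *current* state.

    Unlike delta actions (step[i] - step[i-1]), errors do not accumulate:
    each step depends only on the single reference state[t].
    """
    return action_chunk - state  # broadcasts state over the chunk dimension

def to_absolute(relative_chunk: np.ndarray, state: np.ndarray) -> np.ndarray:
    """Inverse conversion, applied at inference time."""
    return relative_chunk + state

chunk = np.array([[1.0, 2.0], [1.5, 2.5], [2.0, 3.0]])  # 3 steps, 2 dims
state = np.array([1.0, 2.0])                            # current state
rel = to_relative(chunk, state)
assert np.allclose(to_absolute(rel, state), chunk)      # round-trips exactly
```

Note that the same reference `state[t]` is reused for every step of the chunk, which is what makes the representation robust to per-step prediction noise.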

### `derive_state_from_action`

When `derive_state_from_action=true`, pi0 derives `observation.state` from the action column during training — no separate state column is needed. Under the hood:

- `action_delta_indices` extends to `[-1, 0, 1, ..., chunk_size-1]` (one extra leading timestep).
- `DeriveStateFromActionStep` extracts `[action[t-1], action[t]]` as a 2-step state and strips the extra timestep from the action chunk.
- `RelativeActionsProcessorStep` converts actions to offsets from `state[t]`.
- `RelativeStateProcessorStep` converts the 2-step state to relative proprioception (velocity + zeros) and flattens it.

This implies `use_relative_state=true` and `state_obs_steps=2`.

During **inference**, `DeriveStateFromActionStep` is a no-op — state comes from the robot via forward kinematics. `RelativeStateProcessorStep` buffers the previous state and applies the same conversion automatically.
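The slicing the bullets above describe can be sketched in NumPy (shapes follow the guide; this is an illustration of the tensor bookkeeping, not the actual processor code):

```python
import numpy as np

B, chunk_size, dim = 2, 30, 8
# Extended chunk loaded with action_delta_indices = [-1, 0, ..., chunk_size-1]
extended = np.random.randn(B, chunk_size + 1, dim)

# 2-step state: [action[t-1], action[t]]
state = extended[:, :2, :]                               # [B, 2, 8]
# Strip the extra leading timestep from the action chunk
action = extended[:, 1:, :]                              # [B, 30, 8]
# Relative actions: offsets from the current pose action[t]
rel_action = action - state[:, -1:, :]                   # first step is all zeros
# Relative 2-step state, flattened for the model
rel_state = (state - state[:, -1:, :]).reshape(B, -1)    # [B, 16]
```

The flattened state is half velocity-like offsets (`state[t-1] - state[t]`) and half zeros, matching the "velocity + zeros" description above.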

## Step 1: Recompute Stats

After preparing the dataset with the EE pose in the action column, recompute statistics with `derive_state_from_action=true`. This computes relative action and state stats so the normalizer sees offset distributions:

```bash
lerobot-edit-dataset \
  --repo-id=glannuzel/grabette-dataset \
  --operation=recompute_stats \
  --operation.relative_action=true \
  --operation.relative_exclude_joints='["proximal", "distal"]' \
  --operation.derive_state_from_action=true \
  --operation.chunk_size=30 \
  --push_to_hub=true
```

| Flag                            | Purpose                                                                             |
| ------------------------------- | ----------------------------------------------------------------------------------- |
| `relative_action=true`          | Compute stats on `action − state` (relative actions)                                 |
| `relative_exclude_joints`       | Keep gripper dims absolute (they don't benefit from relative encoding)               |
| `derive_state_from_action=true` | Derive state from the action column (implies `relative_state`, `state_obs_steps=2`) |
| `chunk_size=30`                 | Must match the training chunk size                                                  |
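To make the effect of `relative_exclude_joints` concrete, here is a small NumPy sketch of the per-dimension masking (illustrative only; the mask-building helper and values are mine, the real logic lives in LeRobot's stats utilities):

```python
import numpy as np

names = ["x", "y", "z", "ax", "ay", "az", "proximal", "distal"]
exclude = {"proximal", "distal"}
# 1.0 where the dimension becomes relative, 0.0 where it stays absolute
mask = np.array([0.0 if n in exclude else 1.0 for n in names])

actions = np.array([[0.30, 0.10, 0.20, 0.0, 0.0, 0.0, 0.8, 0.6]])
state = np.array([0.25, 0.05, 0.20, 0.0, 0.0, 0.0, 0.7, 0.5])

# EE dims are encoded as offsets; gripper dims keep their raw (absolute) values
encoded = actions - mask * state
```

Stats ("mean", "std", quantiles, ...) are then computed over `encoded`, so the normalizer sees offset distributions for the EE pose but raw distributions for the gripper.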

## Step 2: Train

```bash
#!/bin/bash
set -euo pipefail

export LD_LIBRARY_PATH=$CONDA_PREFIX/lib:${LD_LIBRARY_PATH:-}

DATASET="glannuzel/grabette-dataset"
NUM_PROCESSES=8

echo "=== Training pi0 on $DATASET (UMI relative EE, ${NUM_PROCESSES} GPUs) ==="
accelerate launch --multi_gpu --num_processes=$NUM_PROCESSES \
  -m lerobot.scripts.lerobot_train \
  --dataset.repo_id="$DATASET" \
  --dataset.video_backend=pyav \
  --policy.type=pi0 \
  --policy.pretrained_path=lerobot/pi0_base \
  --policy.repo_id=pepijn/grabette-umi-pi0 \
  --policy.chunk_size=30 \
  --policy.n_action_steps=30 \
  --policy.derive_state_from_action=true \
  --policy.use_relative_actions=true \
  --policy.relative_exclude_joints='["proximal", "distal"]' \
  --batch_size=32 \
  --steps=5000 \
  --policy.scheduler_decay_steps=5000 \
  --policy.dtype=bfloat16 \
  --policy.compile_model=false \
  --policy.gradient_checkpointing=true \
  --policy.device=cuda \
  --output_dir=/fsx/pepijn/outputs/grabette-umi \
  --job_name=grabette-umi-v2 \
  --wandb.enable=true \
  --wandb.disable_artifact=true \
  --wandb.project=grabette-umi \
  --log_freq=100 \
  --save_freq=5000
```

Key flags:

| Flag                            | Purpose                                                                |
| ------------------------------- | ---------------------------------------------------------------------- |
| `derive_state_from_action=true` | Derive proprioception from the action column (full UMI mode)           |
| `use_relative_actions=true`     | Actions are offsets from the current state                             |
| `relative_exclude_joints`       | `["proximal", "distal"]` — gripper stays absolute, EE pose is relative |
| `chunk_size=30`                 | Action horizon: 30 steps (~0.65 s at 46 FPS)                           |
| `n_action_steps=30`             | Execute the full chunk before replanning                               |

Note: `derive_state_from_action=true` automatically implies `use_relative_state=true` and `state_obs_steps=2`. No `rename_map` is needed since there are no separate observation columns to rename.

## Step 3: Evaluate

The evaluation script in `examples/umi_pi0_relative_ee/evaluate.py` runs inference on a real OpenArm robot:

```bash
python examples/umi_pi0_relative_ee/evaluate.py
```

Edit `HF_MODEL_ID`, the camera index, and the robot configuration at the top of the file.

### How inference works

The training dataset has no `observation.state` column (it was derived from actions), so at inference the evaluate script provides `observation.state` from the robot via forward kinematics:

1. **Robot → FK** — Arm joint positions → EE pose `[x,y,z,ax,ay,az]`, gripper → `[proximal, distal]`. Combined into `observation.state` (8D).
2. **Preprocessor** (loaded from checkpoint) — `DeriveStateFromActionStep` is a no-op. `RelativeStateProcessorStep` buffers the previous state, stacks `[prev, current]`, subtracts the current state → velocity info. `RelativeActionsProcessorStep` caches the state. `NormalizerProcessorStep` normalizes.
3. **pi0 inference** — Predicts a normalized relative action chunk (30 steps).
4. **Postprocessor** — `UnnormalizerProcessorStep` unnormalizes, `AbsoluteActionsProcessorStep` adds the cached state → absolute EE targets.
5. **IK → Robot** — Absolute `[x,y,z,ax,ay,az]` → arm joint targets with full 6-DoF IK (orientation weight = 1.0). `[proximal, distal]` → direct gripper position commands.
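Step 4 of the pipeline can be sketched numerically (plain NumPy, not the LeRobot processor API; the stats and cached-state values below are made-up placeholders):

```python
import numpy as np

# Assumed relative-action stats and FK state at time t (placeholder values)
mean, std = np.zeros(8), np.ones(8) * 0.05
cached_state = np.array([0.3, 0.0, 0.2, 0.0, 0.0, 0.0, 0.7, 0.5])

normalized_chunk = np.zeros((30, 8))            # stand-in for the model output
unnormalized = normalized_chunk * std + mean    # UnnormalizerProcessorStep
mask = np.array([1.0] * 6 + [0.0] * 2)          # gripper dims were kept absolute
absolute = unnormalized + mask * cached_state   # AbsoluteActionsProcessorStep
```

With a zero relative prediction, the absolute EE target is simply the cached current pose, i.e. "stay where you are", while the gripper dims pass through as direct absolute commands.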

### Latency compensation

Set `LATENCY_SKIP_STEPS` to skip the first few predicted action steps, compensating for system latency:

```python
LATENCY_SKIP_STEPS = 7  # ceil(total_latency_ms / (1000 / FPS))
```

At 46 FPS (~22 ms/step) with ~150 ms total latency: `ceil(150/22) ≈ 7`. Start with 0 for a safe first test.
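The arithmetic above, written out (the 150 ms figure is the guide's example; measure your own end-to-end latency):

```python
import math

FPS = 46
step_ms = 1000 / FPS                 # ~21.7 ms per action step
total_latency_ms = 150               # camera + inference + execution (example)
latency_skip_steps = math.ceil(total_latency_ms / step_ms)
print(latency_skip_steps)  # 7
```

Skipping too many steps makes the robot "lead" its target; too few makes it lag, so tune from 0 upward against measured latency.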

## Replay Viewer

Visualize any dataset episode in a browser-based 3D viewer before running on hardware. The viewer shows the EE trajectory overlaid on the OpenArm URDF model.

### Quick start

```bash
python examples/umi_pi0_relative_ee/replay.py
```

### Options

| Flag        | Default                      | Description                          |
| ----------- | ---------------------------- | ------------------------------------ |
| `--repo-id` | `glannuzel/grabette-dataset` | HuggingFace dataset repo to load     |
| `--episode` | `0`                          | Episode index to replay              |
| `--port`    | `8765`                       | HTTP server port                     |
| `--force`   | off                          | Re-extract trajectory even if cached |

### Viewer controls

The panel in the top-left corner shows live EE coordinates and gripper state. Transport controls:

- **Play / Pause** — toggle automatic playback.
- **Step buttons** (◀ ▶) — advance or rewind one frame.
- **Reset** (⟳) — jump to frame 0.
- **Scrubber** — drag to seek.
- **Speed selector** — 0.25× to 4× playback speed.

### Color legend

| Color              | Meaning                                       |
| ------------------ | --------------------------------------------- |
| Red sphere         | Current EE position                           |
| Yellow trail       | Past trajectory                               |
| Dark trail         | Future trajectory                             |
| Orange ring + axes | URDF `ee_target` frame (zero-joint reference) |

## How the Pieces Fit Together

```
Training (derive_state_from_action=true):
  DataLoader loads action: [B, 31, 8]   (chunk_size=30 + 1 leading)
  → DeriveStateFromActionStep
      state  = action[:, :2, :]  → [B, 2, 8]
      action = action[:, 1:, :]  → [B, 30, 8]
  → RelativeActionsProcessorStep (action -= state[:, -1, :])
  → RelativeStateProcessorStep   (state offsets from current, flatten → [B, 16])
  → NormalizerProcessorStep → pi0 model

Inference:
  arm joints → FK → observation.state [8D: x,y,z,ax,ay,az,prox,dist]
      ↓
  DeriveStateFromActionStep (no-op)
      ↓
  RelativeActionsProcessorStep (caches state)
      ↓
  RelativeStateProcessorStep (buffers prev, stacks, subtracts, flattens)
      ↓
  NormalizerProcessorStep → pi0 model → relative action chunk [30, 8]
      ↓
  UnnormalizerProcessorStep
      ↓
  AbsoluteActionsProcessorStep (+ cached state → absolute EE)
      ↓
  IK → joint targets → robot
```

## References

- [UMI: Universal Manipulation Interface](https://umi-gripper.github.io) — Chi et al., 2024. Defines relative trajectory actions.
- [Action Representations](action_representations) — LeRobot guide comparing absolute, relative, and delta actions.
- [pi0 documentation](pi0) — Full pi0 configuration including `use_relative_actions`.
- [`examples/so100_to_so100_EE/`](https://github.com/huggingface/lerobot/tree/main/examples/so100_to_so100_EE) — EE-space evaluation example this builds on.

@@ -0,0 +1,297 @@

#!/usr/bin/env python

# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Inference script for a pi0 model trained with UMI-style relative EE actions
on an OpenArm robot (single right arm, one wrist camera).

Training dataset layout:
    observation.images.cam0    [3, 720, 960]
    action                     [x, y, z, ax, ay, az, proximal, distal] (shape 8)

The model uses ``derive_state_from_action=true``, so observation.state is
derived from the action column during training. At inference the state must
be provided by the robot — this script uses FK to compute the current EE
pose and gripper position, which it exposes as ``observation.state``.

Pipeline:
    1. Read arm joints from robot → FK → observation.state [x,y,z,ax,ay,az,prox,dist]
    2. Read camera image → observation.images.cam0
    3. pi0 preprocessor (loaded from checkpoint):
       - DeriveStateFromActionStep: no-op at inference (state from robot)
       - RelativeActionsProcessorStep: caches current state
       - RelativeStateProcessorStep: buffers prev state, stacks [prev, cur],
         subtracts current → velocity info, flattens
       - NormalizerProcessorStep: normalizes
    4. pi0 predicts relative action chunk (30 steps)
    5. pi0 postprocessor: unnormalize, add cached state → absolute EE
    6. IK: absolute EE [x,y,z,ax,ay,az] → arm joint targets
    7. Gripper [proximal, distal] → gripper motor targets
    8. Send to robot

Usage:
    python evaluate.py
"""

from __future__ import annotations

import numpy as np
from scipy.spatial.transform import Rotation

from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
from lerobot.datasets.lerobot_dataset import LeRobotDataset
from lerobot.model.kinematics import RobotKinematics
from lerobot.policies.factory import make_pre_post_processors
from lerobot.policies.pi0.modeling_pi0 import PI0Policy
from lerobot.processor import RelativeStateProcessorStep
from lerobot.robots.openarm_follower import OpenArmFollower, OpenArmFollowerConfig
from lerobot.scripts.lerobot_record import record_loop
from lerobot.types import RobotAction, RobotObservation
from lerobot.utils.control_utils import init_keyboard_listener
from lerobot.utils.utils import log_say
from lerobot.utils.visualization_utils import init_rerun

# ---------------------------------------------------------------------------
# Configuration — adapt these to your setup
# ---------------------------------------------------------------------------

FPS = 46
EPISODE_TIME_SEC = 60
TASK_DESCRIPTION = "red cube"

HF_MODEL_ID = "pepijn223/grabette-umi-pi0"

# Latency compensation: skip this many predicted action steps to account for
# camera + inference + execution latency. Formula: ceil(total_ms / (1000/FPS)).
# At 46 FPS (~22ms/step) with ~150ms total latency: ceil(150/22) ≈ 7.
# Start with 0 for a safe first test, then increase to match measured latency.
LATENCY_SKIP_STEPS = 0

URDF_PATH = "src/lerobot/robots/openarm_follower/urdf/openarm_bimanual_pybullet.urdf"
URDF_EE_FRAME = "openarm_right_ee_target"

IK_POSITION_WEIGHT = 1.0
IK_ORIENTATION_WEIGHT = 1.0

# ---------------------------------------------------------------------------
# Dataset features for inference
#
# The training dataset has only observation.images.cam0 and action.
# observation.state is derived from action during training
# (derive_state_from_action=true) but must be supplied by the robot at
# inference. We define it here so build_dataset_frame can map FK output
# to the right feature.
# ---------------------------------------------------------------------------

DATASET_FEATURES: dict = {
    "observation.state": {
        "dtype": "float32",
        "shape": [8],
        "names": ["x", "y", "z", "ax", "ay", "az", "proximal", "distal"],
    },
    "observation.images.cam0": {
        "dtype": "video",
        "shape": [3, 720, 960],
        "names": ["channels", "height", "width"],
        "info": {
            "video.height": 720,
            "video.width": 960,
            "video.codec": "h264",
            "video.pix_fmt": "yuv420p",
            "video.is_depth_map": False,
            "video.fps": FPS,
            "video.channels": 3,
            "has_audio": False,
        },
    },
    "action": {
        "dtype": "float32",
        "shape": [8],
        "names": ["x", "y", "z", "ax", "ay", "az", "proximal", "distal"],
    },
    "timestamp": {"dtype": "float32", "shape": [1], "names": None},
    "frame_index": {"dtype": "int64", "shape": [1], "names": None},
    "episode_index": {"dtype": "int64", "shape": [1], "names": None},
    "index": {"dtype": "int64", "shape": [1], "names": None},
    "task_index": {"dtype": "int64", "shape": [1], "names": None},
}


# ---------------------------------------------------------------------------
# FK / IK callables
# ---------------------------------------------------------------------------


class JointsToEE:
    """FK: raw robot observation → flat dict matching observation.state names.

    Arm joint positions → EE pose [x,y,z,ax,ay,az] via forward kinematics.
    Gripper motor positions → [proximal, distal].
    Camera images pass through unchanged.
    """

    def __init__(self, kinematics: RobotKinematics, arm_motor_names: list[str]):
        self.kin = kinematics
        self.arm = arm_motor_names

    def __call__(self, obs: RobotObservation) -> RobotObservation:
        q = np.array([float(obs[f"{m}.pos"]) for m in self.arm])
        t = self.kin.forward_kinematics(q)
        rot = Rotation.from_matrix(t[:3, :3]).as_rotvec()

        out: dict = {
            "x": float(t[0, 3]),
            "y": float(t[1, 3]),
            "z": float(t[2, 3]),
            "ax": float(rot[0]),
            "ay": float(rot[1]),
            "az": float(rot[2]),
            "proximal": float(obs["proximal.pos"]),
            "distal": float(obs["distal.pos"]),
        }
        for k, v in obs.items():
            if not k.endswith((".pos", ".vel", ".torque")):
                out[k] = v
        return out


class EEToJoints:
    """IK: policy action dict → motor position dict for the robot.

    Reads [x,y,z,ax,ay,az] from the action, runs IK for arm joint targets.
    Passes [proximal, distal] as direct gripper position commands.
    """

    def __init__(
        self,
        kinematics: RobotKinematics,
        arm_motor_names: list[str],
        position_weight: float = 1.0,
        orientation_weight: float = 1.0,
    ):
        self.kin = kinematics
        self.arm = arm_motor_names
        self.pw = position_weight
        self.ow = orientation_weight
        self.q_curr: np.ndarray | None = None

    def __call__(self, args: tuple[RobotAction, RobotObservation]) -> RobotAction:
        action, obs = args

        q_raw = np.array([float(obs[f"{m}.pos"]) for m in self.arm])
        if self.q_curr is None:
            self.q_curr = q_raw

        t_des = np.eye(4)
        t_des[:3, :3] = Rotation.from_rotvec([action["ax"], action["ay"], action["az"]]).as_matrix()
        t_des[:3, 3] = [action["x"], action["y"], action["z"]]

        q_target = self.kin.inverse_kinematics(
            self.q_curr, t_des, position_weight=self.pw, orientation_weight=self.ow
        )
        self.q_curr = q_target

        out: dict = {f"{m}.pos": float(q_target[i]) for i, m in enumerate(self.arm)}
        out["proximal.pos"] = float(action["proximal"])
        out["distal.pos"] = float(action["distal"])
        return out


# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------


def main():
    camera_config = {
        "cam0": OpenCVCameraConfig(index_or_path=0, width=960, height=720, fps=FPS),
    }
    robot_config = OpenArmFollowerConfig(
        port="can0",
        id="right_openarm",
        side="right",
        cameras=camera_config,
        max_relative_target=8.0,
        gripper_port="/dev/ttyUSB0",
    )
    robot = OpenArmFollower(robot_config)

    policy = PI0Policy.from_pretrained(HF_MODEL_ID)
    policy.config.latency_skip_steps = LATENCY_SKIP_STEPS

    arm_motor_names = list(robot.bus.motors.keys())

    kinematics = RobotKinematics(
        urdf_path=URDF_PATH,
        target_frame_name=URDF_EE_FRAME,
        joint_names=arm_motor_names,
    )

    fk = JointsToEE(kinematics, arm_motor_names)
    ik = EEToJoints(kinematics, arm_motor_names, IK_POSITION_WEIGHT, IK_ORIENTATION_WEIGHT)

    dataset = LeRobotDataset.create(
        repo_id="tmp/openarm_eval_scratch",
        fps=FPS,
        features=DATASET_FEATURES,
        robot_type=robot.name,
        use_videos=True,
        image_writer_threads=4,
    )

    preprocessor, postprocessor = make_pre_post_processors(
        policy_cfg=policy.config,
        pretrained_path=HF_MODEL_ID,
        dataset_stats=dataset.meta.stats,
        preprocessor_overrides={"device_processor": {"device": str(policy.config.device)}},
    )

    relative_state_steps = [s for s in preprocessor.steps if isinstance(s, RelativeStateProcessorStep)]

    robot.connect()

    listener, events = init_keyboard_listener()
    init_rerun(session_name="openarm_umi_pi0_relative_ee_evaluate")

    try:
        if not robot.is_connected:
            raise ValueError("Robot is not connected!")

        log_say("Starting policy execution")
        for step in relative_state_steps:
            step.reset()

        record_loop(
            robot=robot,
            events=events,
            fps=FPS,
            policy=policy,
            preprocessor=preprocessor,
            postprocessor=postprocessor,
            dataset=dataset,
            control_time_s=EPISODE_TIME_SEC,
            single_task=TASK_DESCRIPTION,
            display_data=True,
            robot_action_processor=ik,
            robot_observation_processor=fk,
        )
    finally:
        robot.disconnect()
        listener.stop()


if __name__ == "__main__":
    main()

@@ -0,0 +1,113 @@

#!/usr/bin/env python

# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Replay a dataset episode in EE frame using a browser-based URDF viewer.

Extracts ``observation.pose`` from the dataset, saves a trajectory JSON file,
then launches a local HTTP server and opens the replay viewer. The trajectory
is re-centered so frame 0 starts at the OpenArm ``openarm_right_ee_target``
EE tip (zero-joint pose).

Usage:
    python replay.py
    python replay.py --episode 3 --repo-id myuser/mydata
"""

from __future__ import annotations

import argparse
import http.server
import json
import os
import threading
import webbrowser
from pathlib import Path

VIEWER_DIR = Path(__file__).resolve().parents[2] / "src/lerobot/robots/openarm_follower/urdf"
TRAJECTORY_FILENAME = "trajectory_ep0.json"


def extract_trajectory(repo_id: str, episode: int, output_path: Path) -> dict:
    from lerobot.datasets.lerobot_dataset import LeRobotDataset

    dataset = LeRobotDataset(repo_id, episodes=[episode])
    poses = dataset.select_columns("observation.pose")
    actions = dataset.select_columns("action")

    frames = []
    for i in range(dataset.num_frames):
        p = poses[i]["observation.pose"]
        a = actions[i]["action"]
        frames.append(
            {
                "x": float(p[0]),
                "y": float(p[1]),
                "z": float(p[2]),
                "ax": float(p[3]),
                "ay": float(p[4]),
                "az": float(p[5]),
                "proximal": float(a[0]),
                "distal": float(a[1]),
            }
        )
    payload = {"fps": dataset.fps, "num_frames": dataset.num_frames, "frames": frames}
    with open(output_path, "w") as f:
        json.dump(payload, f)
    print(f"Extracted {dataset.num_frames} frames at {dataset.fps} FPS → {output_path}")
    return payload


# ---------------------------------------------------------------------------
# Viewer mode
# ---------------------------------------------------------------------------


def serve_and_open(directory: Path, port: int = 8765):
    os.chdir(directory)
    handler = http.server.SimpleHTTPRequestHandler
    httpd = http.server.HTTPServer(("", port), handler)
    url = f"http://localhost:{port}/replay_viewer.html"
    print(f"Serving at {url}")
    threading.Thread(target=lambda: webbrowser.open(url), daemon=True).start()
    try:
        httpd.serve_forever()
    except KeyboardInterrupt:
        print("\nServer stopped.")
        httpd.server_close()


def run_viewer(args):
    trajectory_path = VIEWER_DIR / TRAJECTORY_FILENAME
    if not trajectory_path.exists() or args.force:
        extract_trajectory(args.repo_id, args.episode, trajectory_path)
    else:
        print(f"Using cached trajectory at {trajectory_path} (pass --force to re-extract)")
    serve_and_open(VIEWER_DIR, args.port)


def main():
    parser = argparse.ArgumentParser(description="Replay a dataset episode in EE frame (URDF viewer)")
    parser.add_argument("--repo-id", default="glannuzel/grabette-dataset")
    parser.add_argument("--episode", type=int, default=0)
    parser.add_argument("--port", type=int, default=8765)
    parser.add_argument("--force", action="store_true", help="Re-extract trajectory even if cached")
    args = parser.parse_args()
    run_viewer(args)


if __name__ == "__main__":
    main()

+2 -1
@@ -306,7 +306,8 @@ default.extend-ignore-identifiers-re = [

    "thw",
    "inpt",
    "ROBOTIS",
-   "OT_VALUE"
+   "OT_VALUE",
+   "metalness",
]

# TODO: Uncomment when ready to use

@@ -115,6 +115,17 @@ class PreTrainedConfig(draccus.ChoiceRegistry, HubMixin, abc.ABC):  # type: igno

    def reward_delta_indices(self) -> list | None:  # type: ignore[type-arg]  # TODO: No implementation
        raise NotImplementedError

    @property
    def state_delta_indices(self) -> list | None:  # type: ignore[type-arg]
        """Delta indices specifically for observation.state.

        When not None, overrides ``observation_delta_indices`` for the
        ``observation.state`` key only. Useful for loading state history
        (e.g. ``[-1, 0]`` for UMI-style relative proprioception) without
        also loading multiple image timesteps.
        """
        return None

    @abc.abstractmethod
    def get_optimizer_preset(self) -> OptimizerConfig:
        raise NotImplementedError

@@ -767,3 +767,94 @@ def compute_relative_action_stats(
     )

     return stats
+
+
+def compute_relative_state_stats(
+    hf_dataset,
+    features: dict,
+    state_obs_steps: int = 2,
+    exclude_joints: list[str] | None = None,
+    source_key: str = OBS_STATE,
+) -> dict[str, np.ndarray]:
+    """Compute normalization statistics for observation.state after relative conversion.
+
+    For UMI-style relative proprioception with ``state_obs_steps`` timesteps,
+    each state observation becomes a stack of offsets from the current timestep:
+    ``state[t-k] - state[t]`` for k in ``range(state_obs_steps-1, -1, -1)``.
+
+    The stats are computed over the flattened ``[state_obs_steps * state_dim]``
+    vector that the model actually sees after ``prepare_state`` flattening.
+
+    Args:
+        hf_dataset: The HuggingFace dataset with the source column and
+            "episode_index" columns.
+        features: Dataset feature metadata.
+        state_obs_steps: Number of observation timesteps (must be >= 2).
+        exclude_joints: State dimension names to keep absolute.
+        source_key: Column to read data from. Defaults to "observation.state".
+            When ``derive_state_from_action=True``, pass ``ACTION`` to read
+            from the action column instead.
+
+    Returns:
+        Statistics dict with keys "mean", "std", "min", "max", "q01", …, "q99".
+    """
+    from lerobot.processor.relative_action_processor import RelativeStateProcessorStep
+
+    if exclude_joints is None:
+        exclude_joints = []
+
+    state_dim = features[source_key]["shape"][0]
+    state_names = features.get(source_key, {}).get("names")
+    mask_step = RelativeStateProcessorStep(
+        enabled=True,
+        exclude_joints=exclude_joints,
+        state_names=state_names,
+    )
+    relative_mask = np.array(mask_step._build_mask(state_dim), dtype=np.float32)
+
+    logging.info(f"Loading data from '{source_key}' for relative state stats...")
+    all_states = np.array(hf_dataset[source_key], dtype=np.float32)
+    episode_indices = np.array(hf_dataset["episode_index"])
+
+    # Build all valid windows of length state_obs_steps within each episode
+    n = len(all_states)
+    if n < state_obs_steps:
+        raise ValueError(f"Dataset has {n} frames but state_obs_steps={state_obs_steps}")
+
+    max_start = n - state_obs_steps
+    starts = np.arange(max_start + 1)
+    valid = episode_indices[starts] == episode_indices[starts + state_obs_steps - 1]
+    valid_starts = starts[valid]
+
+    if len(valid_starts) == 0:
+        raise RuntimeError("No valid state windows found within single episodes")
+
+    offsets = np.arange(state_obs_steps)
+    mask_dim = len(relative_mask)
+
+    running_stats = RunningQuantileStats()
+
+    batch_size = 50_000
+    for i in range(0, len(valid_starts), batch_size):
+        batch_starts = valid_starts[i : i + batch_size]
+        frame_idx = batch_starts[:, None] + offsets[None, :]  # [N, state_obs_steps]
+        windows = all_states[frame_idx].copy()  # [N, state_obs_steps, state_dim]
+
+        # Subtract current (last) timestep from all timesteps for masked dims
+        current = windows[:, -1:, :]  # [N, 1, state_dim]
+        windows[:, :, :mask_dim] -= current[:, :, :mask_dim] * relative_mask[None, None, :]
+
+        # Flatten to [N, state_obs_steps * state_dim] (same as prepare_state)
+        flattened = windows.reshape(len(batch_starts), -1)
+        running_stats.update(flattened)
+
+    stats = running_stats.get_statistics()
+
+    excluded_dims = int(mask_dim - relative_mask.sum())
+    logging.info(
+        f"Relative state stats ({len(valid_starts)} windows, obs_steps={state_obs_steps}): "
+        f"relative_dims={int(relative_mask.sum())}/{mask_dim} (excluded={excluded_dims}), "
+        f"mean={np.abs(stats['mean']).mean():.4f}, std={stats['std'].mean():.4f}"
+    )
+
+    return stats
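The windowing and relative conversion above can be sketched standalone with plain numpy on illustrative values (a single-episode toy "dataset", not the real stats machinery):

```python
import numpy as np

# Toy single-episode state track: 5 frames, 3 dims, consecutive rows differ by 3.
states = np.arange(15, dtype=np.float32).reshape(5, 3)
state_obs_steps = 2

# All windows of length state_obs_steps: shape [N, 2, 3]
starts = np.arange(len(states) - state_obs_steps + 1)
windows = states[starts[:, None] + np.arange(state_obs_steps)[None, :]]

# Subtract the current (last) timestep from every timestep in each window.
current = windows[:, -1:, :].copy()
windows -= current

# Flatten to [N, state_obs_steps * state_dim], the vector the model sees.
flattened = windows.reshape(len(starts), -1)
```

After the subtraction the current timestep is all zeros and earlier timesteps hold the offsets, which is exactly what the quantile stats are accumulated over.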
@@ -41,6 +41,7 @@ from lerobot.datasets.compute_stats import (
     aggregate_stats,
     compute_episode_stats,
     compute_relative_action_stats,
+    compute_relative_state_stats,
 )
 from lerobot.datasets.dataset_metadata import LeRobotDatasetMetadata
 from lerobot.datasets.io_utils import (
@@ -1544,6 +1545,10 @@ def recompute_stats(
     relative_exclude_joints: list[str] | None = None,
     chunk_size: int = 50,
     num_workers: int = 0,
+    relative_state: bool = False,
+    relative_exclude_state_joints: list[str] | None = None,
+    state_obs_steps: int = 2,
+    derive_state_from_action: bool = False,
 ) -> LeRobotDataset:
     """Recompute stats.json from scratch by iterating all episodes.

@@ -1561,10 +1566,22 @@ def recompute_stats(
             ``policy.chunk_size``. Only used when ``relative_action=True``.
         num_workers: Number of parallel threads for relative action stats computation.
             Values ≤1 mean single-threaded. Only used when ``relative_action=True``.
+        relative_state: If True, compute observation.state stats in relative space
+            (multi-timestep offsets from current). This matches the normalization
+            the model sees during training with ``use_relative_state=True``.
+        relative_exclude_state_joints: State dim names to exclude from relative conversion.
+        state_obs_steps: Number of observation timesteps for relative state stats.
+            Should match ``policy.state_obs_steps``. Only used when ``relative_state=True``.
+        derive_state_from_action: If True, compute relative state stats from the
+            action column instead of observation.state. Implies ``relative_state=True``
+            and ``state_obs_steps=2``.

     Returns:
         The same dataset with updated stats.
     """
+    if derive_state_from_action:
+        relative_state = True
+        state_obs_steps = 2
     features = dataset.meta.features
     meta_keys = {"index", "episode_index", "task_index", "frame_index", "timestamp"}
     numeric_features = {
@@ -1596,6 +1613,20 @@ def recompute_stats(
         )
         features_to_compute.pop(ACTION, None)

+    # When relative_state is enabled, compute state stats over the flattened
+    # multi-timestep relative representation (matching what the model sees).
+    relative_state_stats = None
+    if relative_state and (OBS_STATE in features or derive_state_from_action):
+        source_key = ACTION if derive_state_from_action else OBS_STATE
+        relative_state_stats = compute_relative_state_stats(
+            hf_dataset=dataset.hf_dataset,
+            features=features,
+            state_obs_steps=state_obs_steps,
+            exclude_joints=relative_exclude_state_joints,
+            source_key=source_key,
+        )
+        features_to_compute.pop(OBS_STATE, None)
+
     logging.info(f"Recomputing stats for features: {list(features_to_compute.keys())}")

     data_dir = dataset.root / DATA_DIR
@@ -1632,6 +1663,9 @@ def recompute_stats(
     if relative_action_stats is not None:
         new_stats[ACTION] = relative_action_stats

+    if relative_state_stats is not None:
+        new_stats[OBS_STATE] = relative_state_stats
+
     # Merge: keep existing stats for features we didn't recompute
     if dataset.meta.stats:
         for key, value in dataset.meta.stats.items():
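The merge semantics at the end of `recompute_stats` reduce to a simple rule: freshly recomputed entries win, and existing entries survive only for features that were not recomputed. A minimal sketch with made-up stats dicts:

```python
# Illustrative stand-ins for dataset.meta.stats (existing) and new_stats (recomputed).
existing = {
    "action": {"mean": 0.0},
    "observation.state": {"mean": 1.0},
    "observation.images.top": {"mean": 2.0},
}
new_stats = {"action": {"mean": 9.0}, "observation.state": {"mean": 8.0}}

# Keep existing stats only for features we didn't recompute.
for key, value in existing.items():
    if key not in new_stats:
        new_stats[key] = value
```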
@@ -25,7 +25,7 @@ from lerobot.datasets.lerobot_dataset import LeRobotDataset
 from lerobot.datasets.multi_dataset import MultiLeRobotDataset
 from lerobot.datasets.streaming_dataset import StreamingLeRobotDataset
 from lerobot.datasets.transforms import ImageTransforms
-from lerobot.utils.constants import ACTION, OBS_PREFIX, REWARD
+from lerobot.utils.constants import ACTION, OBS_PREFIX, OBS_STATE, REWARD

 IMAGENET_STATS = {
     "mean": [[[0.485]], [[0.456]], [[0.406]]],  # (c,1,1)
@@ -52,12 +52,15 @@ def resolve_delta_timestamps(
     returns `None` if the resulting dict is empty.
     """
     delta_timestamps = {}
+    state_delta = getattr(cfg, "state_delta_indices", None)
     for key in ds_meta.features:
         if key == REWARD and cfg.reward_delta_indices is not None:
             delta_timestamps[key] = [i / ds_meta.fps for i in cfg.reward_delta_indices]
         if key == ACTION and cfg.action_delta_indices is not None:
             delta_timestamps[key] = [i / ds_meta.fps for i in cfg.action_delta_indices]
-        if key.startswith(OBS_PREFIX) and cfg.observation_delta_indices is not None:
+        if key == OBS_STATE and state_delta is not None:
+            delta_timestamps[key] = [i / ds_meta.fps for i in state_delta]
+        elif key.startswith(OBS_PREFIX) and cfg.observation_delta_indices is not None:
             delta_timestamps[key] = [i / ds_meta.fps for i in cfg.observation_delta_indices]

     if len(delta_timestamps) == 0:
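The index-to-timestamp conversion above is just a division by fps. A quick sketch with illustrative values (fps and indices are made up, not from a real dataset):

```python
# Each frame offset becomes an offset in seconds: index / fps.
fps = 10
state_delta_indices = [-1, 0]          # UMI-style 2-step state history
action_delta_indices = list(range(4))  # a 4-step action chunk

delta_timestamps = {
    "observation.state": [i / fps for i in state_delta_indices],
    "action": [i / fps for i in action_delta_indices],
}
```

So `observation.state` gets its own history window while image keys still follow `observation_delta_indices`, which is the point of the `state_delta` override.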
@@ -57,6 +57,28 @@ class PI0Config(PreTrainedConfig):
     # Populated at runtime from dataset metadata by make_policy.
     action_feature_names: list[str] | None = None

+    # Relative state (UMI-style relative proprioception): converts multi-timestep
+    # observation.state to offsets from the current timestep, providing velocity info.
+    # Requires state_obs_steps >= 2. The flattened multi-timestep state is padded to
+    # max_state_dim, so ensure state_obs_steps * state_dim <= max_state_dim.
+    use_relative_state: bool = False
+    state_obs_steps: int = 1
+    relative_exclude_state_joints: list[str] = field(default_factory=list)
+    # Populated at runtime from dataset metadata by make_policy.
+    state_feature_names: list[str] | None = None
+
+    # Derive observation.state from the action column (UMI-style).
+    # When True, action_delta_indices loads one extra leading timestep [-1, 0, ..., chunk_size-1],
+    # DeriveStateFromActionStep extracts [action[t-1], action[t]] as a 2-step state,
+    # and strips the extra timestep from the action chunk.
+    # Implies use_relative_state=True and state_obs_steps=2.
+    derive_state_from_action: bool = False
+
+    # Latency compensation: skip this many steps from the start of each predicted
+    # action chunk during inference. E.g. at 10Hz with ~200ms total latency,
+    # latency_skip_steps=2 compensates for the delay.
+    latency_skip_steps: int = 0
+
     # Real-Time Chunking (RTC) configuration
     rtc_config: RTCConfig | None = None

@@ -106,6 +128,10 @@ class PI0Config(PreTrainedConfig):
     def __post_init__(self):
         super().__post_init__()

+        if self.derive_state_from_action:
+            self.use_relative_state = True
+            self.state_obs_steps = 2
+
         # Validate configuration
         if self.n_action_steps > self.chunk_size:
             raise ValueError(
@@ -121,6 +147,13 @@ class PI0Config(PreTrainedConfig):
         if self.dtype not in ["bfloat16", "float32"]:
             raise ValueError(f"Invalid dtype: {self.dtype}")

+        if self.use_relative_state and self.state_obs_steps < 2:
+            raise ValueError(
+                "use_relative_state requires state_obs_steps >= 2 "
+                f"(got {self.state_obs_steps}). Set state_obs_steps=2 for "
+                "UMI-style relative proprioception."
+            )
+
     def validate_features(self) -> None:
         """Validate and set up input/output features."""
         for i in range(self.empty_cameras):
@@ -166,8 +199,16 @@ class PI0Config(PreTrainedConfig):
     def observation_delta_indices(self) -> None:
         return None

+    @property
+    def state_delta_indices(self) -> list[int] | None:
+        if self.state_obs_steps >= 2:
+            return list(range(-(self.state_obs_steps - 1), 1))
+        return None
+
     @property
     def action_delta_indices(self) -> list:
+        if self.derive_state_from_action:
+            return [-1] + list(range(self.chunk_size))
         return list(range(self.chunk_size))

     @property
@@ -1230,8 +1230,11 @@ class PI0Policy(PreTrainedPolicy):
         return images, img_masks

     def prepare_state(self, batch):
-        """Pad state"""
-        state = pad_vector(batch[OBS_STATE], self.config.max_state_dim)
+        """Flatten multi-timestep state and pad to max_state_dim."""
+        state = batch[OBS_STATE]
+        if state.ndim == 3:
+            state = state.flatten(start_dim=1)
+        state = pad_vector(state, self.config.max_state_dim)
         return state

     def prepare_action(self, batch):
@@ -1250,7 +1253,8 @@ class PI0Policy(PreTrainedPolicy):

         # Action queue logic for n_action_steps > 1
         if len(self._action_queue) == 0:
-            actions = self.predict_action_chunk(batch)[:, : self.config.n_action_steps]
+            skip = self.config.latency_skip_steps
+            actions = self.predict_action_chunk(batch)[:, skip : skip + self.config.n_action_steps]
             # Transpose to get shape (n_action_steps, batch_size, action_dim)
             self._action_queue.extend(actions.transpose(0, 1))
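Both shape manipulations above can be sketched with numpy stand-ins (shapes and values are illustrative; the real code operates on torch tensors with `pad_vector`):

```python
import numpy as np

# prepare_state: a [B, n_obs, D] state is flattened to [B, n_obs*D]
# and zero-padded up to max_state_dim.
max_state_dim = 8
state = np.ones((2, 2, 3), dtype=np.float32)           # [B=2, n_obs=2, D=3]
flat = state.reshape(state.shape[0], -1)               # [2, 6]
padded = np.pad(flat, ((0, 0), (0, max_state_dim - flat.shape[1])))  # [2, 8]

# Latency compensation: skip the first latency_skip_steps of the chunk.
chunk = np.arange(10).reshape(1, 10)                   # [B=1, chunk_size=10]
latency_skip_steps, n_action_steps = 2, 5
actions = chunk[:, latency_skip_steps : latency_skip_steps + n_action_steps]
```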
@@ -24,6 +24,7 @@ from lerobot.processor import (
     AbsoluteActionsProcessorStep,
     AddBatchDimensionProcessorStep,
     ComplementaryDataProcessorStep,
+    DeriveStateFromActionStep,
     DeviceProcessorStep,
     NormalizerProcessorStep,
     PolicyAction,
@@ -31,6 +32,7 @@ from lerobot.processor import (
     ProcessorStep,
     ProcessorStepRegistry,
     RelativeActionsProcessorStep,
+    RelativeStateProcessorStep,
     RenameObservationsProcessorStep,
     TokenizerProcessorStep,
     UnnormalizerProcessorStep,
@@ -128,13 +130,25 @@ def make_pi0_pre_post_processors(
         A tuple containing the configured pre-processor and post-processor pipelines.
     """

+    derive_state_step = DeriveStateFromActionStep(
+        enabled=getattr(config, "derive_state_from_action", False),
+    )
+
     relative_step = RelativeActionsProcessorStep(
         enabled=config.use_relative_actions,
         exclude_joints=getattr(config, "relative_exclude_joints", []),
         action_names=getattr(config, "action_feature_names", None),
     )

-    # OpenPI order: raw → relative → normalize → model → unnormalize → absolute
+    relative_state_step = RelativeStateProcessorStep(
+        enabled=getattr(config, "use_relative_state", False),
+        exclude_joints=getattr(config, "relative_exclude_state_joints", []),
+        state_names=getattr(config, "state_feature_names", None),
+    )
+
+    # Order: DeriveStateFromAction extracts state from the extended action chunk,
+    # then relative_action uses current state[t] for subtraction,
+    # then relative_state converts the multi-timestep state to offsets.
    input_steps: list[ProcessorStep] = [
         RenameObservationsProcessorStep(rename_map={}),  # To mimic the same processor as pretrained one
         AddBatchDimensionProcessorStep(),
@@ -146,7 +160,9 @@ def make_pi0_pre_post_processors(
             padding="max_length",
         ),
         DeviceProcessorStep(device=config.device),
+        derive_state_step,
         relative_step,
+        relative_state_step,
         NormalizerProcessorStep(
             features={**config.input_features, **config.output_features},
             norm_map=config.normalization_mapping,
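Why the step order matters can be seen with toy stand-ins: each step is a function over a transition dict, applied in sequence. The functions below are illustrative simplifications, not the real processor steps:

```python
# Toy 1-D "transition": derive a 2-step state from the action chunk,
# then make the actions relative to the current state (the last state entry).
def derive_state(t: dict) -> dict:
    return {**t, "state": [t["action"][0], t["action"][1]]}


def relative_actions(t: dict) -> dict:
    cur = t["state"][-1]
    return {**t, "action": [a - cur for a in t["action"]]}


transition = {"action": [10, 12, 15]}
out = relative_actions(derive_state(transition))
```

Swapping the two calls would fail, since `relative_actions` needs the state that `derive_state` produces; that dependency is what the comment in the diff pins down.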
@@ -136,7 +136,7 @@ def make_pi0_fast_pre_post_processors(
     # Pi0Fast order: relative → normalize → tokenize → model → unnormalize → absolute
     # This matches pi0/pi0.5: RelativeActionsProcessorStep runs first on raw absolute actions,
     # caching the raw state. NormalizerProcessorStep then normalizes the raw relative actions,
-    # so the normalizer (and action tokenizer) sees delta values — relative stats are required.
+    # so the normalizer (and action tokenizer) sees delta values, relative stats are required.
     # NOTE: RelativeActionsProcessorStep only modifies the action in the transition; it reads
     # state from the observation but does not change it. NormalizerProcessorStep still runs
     # before Pi0FastPrepareStateAndLanguageTokenizerProcessorStep, so the state tokenizer
@@ -77,9 +77,12 @@ from .policy_robot_bridge import (
 )
 from .relative_action_processor import (
     AbsoluteActionsProcessorStep,
+    DeriveStateFromActionStep,
     RelativeActionsProcessorStep,
+    RelativeStateProcessorStep,
     to_absolute_actions,
     to_relative_actions,
+    to_relative_state,
 )
 from .rename_processor import RenameObservationsProcessorStep
 from .tokenizer_processor import ActionTokenizerProcessorStep, TokenizerProcessorStep
@@ -107,7 +110,9 @@ __all__ = [
     "make_default_robot_action_processor",
     "make_default_robot_observation_processor",
     "AbsoluteActionsProcessorStep",
+    "DeriveStateFromActionStep",
     "RelativeActionsProcessorStep",
+    "RelativeStateProcessorStep",
     "MapDeltaActionToRobotActionStep",
     "MapTensorToDeltaActionDictStep",
     "NormalizerProcessorStep",
@@ -139,6 +144,7 @@ __all__ = [
     "TruncatedProcessorStep",
     "to_absolute_actions",
     "to_relative_actions",
+    "to_relative_state",
     "UnnormalizerProcessorStep",
     "VanillaObservationProcessorStep",
 ]
@@ -30,10 +30,13 @@ from .pipeline import ProcessorStep, ProcessorStepRegistry
 __all__ = [
     "MapDeltaActionToRobotActionStep",
     "MapTensorToDeltaActionDictStep",
+    "DeriveStateFromActionStep",
     "RelativeActionsProcessorStep",
     "AbsoluteActionsProcessorStep",
+    "RelativeStateProcessorStep",
     "to_relative_actions",
     "to_absolute_actions",
+    "to_relative_state",
 ]


@@ -81,6 +84,41 @@ def to_absolute_actions(actions: Tensor, state: Tensor, mask: Sequence[bool]) ->
     return actions


+@ProcessorStepRegistry.register("derive_state_from_action_processor")
+@dataclass
+class DeriveStateFromActionStep(ProcessorStep):
+    """Derives 2-step observation.state from the action chunk (UMI-style).
+
+    Expects action with one extra leading timestep: [B, chunk_size+1, D]
+    from action_delta_indices = [-1, 0, 1, ..., chunk_size-1].
+    Extracts [action[t-1], action[t]] as state and strips the extra timestep.
+    No-op during inference (state comes from robot).
+    """
+
+    enabled: bool = False
+
+    def __call__(self, transition: EnvTransition) -> EnvTransition:
+        if not self.enabled:
+            return transition
+        action = transition.get(TransitionKey.ACTION)
+        if action is None or action.ndim < 3:
+            return transition
+        new_transition = transition.copy()
+        new_obs = dict(new_transition.get(TransitionKey.OBSERVATION, {}))
+        new_obs[OBS_STATE] = action[..., :2, :]
+        new_transition[TransitionKey.ACTION] = action[..., 1:, :]
+        new_transition[TransitionKey.OBSERVATION] = new_obs
+        return new_transition
+
+    def get_config(self) -> dict[str, Any]:
+        return {"enabled": self.enabled}
+
+    def transform_features(
+        self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
+    ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
+        return features
+
+
 @ProcessorStepRegistry.register("delta_actions_processor")
 @dataclass
 class RelativeActionsProcessorStep(ProcessorStep):
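The slicing done by `DeriveStateFromActionStep` is worth seeing on a concrete shape. A numpy sketch with an illustrative `[B=1, chunk_size+1=4, D=2]` action tensor:

```python
import numpy as np

# Extended action chunk: rows are action[t-1], action[t], action[t+1], action[t+2].
action = np.arange(8, dtype=np.float32).reshape(1, 4, 2)

state = action[..., :2, :]   # [action[t-1], action[t]] -> shape [1, 2, 2]
action = action[..., 1:, :]  # strip the extra leading timestep -> shape [1, 3, 2]
```

Note the current action `action[t]` ends up in both tensors: as the last state timestep and as the first remaining action.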
@@ -124,7 +162,14 @@ class RelativeActionsProcessorStep(ProcessorStep):

     def __call__(self, transition: EnvTransition) -> EnvTransition:
         observation = transition.get(TransitionKey.OBSERVATION, {})
-        state = observation.get(OBS_STATE) if observation else None
+        raw_state = observation.get(OBS_STATE) if observation else None
+
+        # When state_delta_indices loads multi-timestep state [B, n_obs, D],
+        # use only the current (last) timestep for relative action conversion.
+        if raw_state is not None:
+            state = raw_state[..., -1, :] if raw_state.ndim >= 3 else raw_state
+        else:
+            state = None

         # Always cache state for the paired AbsoluteActionsProcessorStep
         if state is not None:
@@ -155,6 +200,120 @@ class RelativeActionsProcessorStep(ProcessorStep):
         return features


+def to_relative_state(state: Tensor, mask: Sequence[bool]) -> Tensor:
+    """Convert multi-timestep absolute state to relative (offset from current timestep).
+
+    Each timestep becomes: ``state[..., t, :] - state[..., -1, :]`` for masked dims.
+    The last (current) timestep becomes zeros for masked dims.
+
+    Args:
+        state: (..., n_obs, state_dim) — last timestep is the reference (current).
+        mask: Which dims to convert. Can be shorter than state_dim.
+    """
+    mask_t = torch.tensor(mask, dtype=state.dtype, device=state.device)
+    dims = mask_t.shape[0]
+    current = state[..., -1:, :]  # (..., 1, state_dim)
+    state = state.clone()
+    state[..., :dims] -= current[..., :dims] * mask_t
+    return state
+
+
+@ProcessorStepRegistry.register("relative_state_processor")
+@dataclass
+class RelativeStateProcessorStep(ProcessorStep):
+    """Converts observation.state to relative (offset from current timestep).
+
+    UMI-style relative proprioception: each state timestep is expressed as
+    an offset from the current EE pose, providing velocity information.
+
+    During training (multi-timestep input from ``state_delta_indices``):
+    ``state[..., t, :] -= state[..., -1, :]`` — subtract current from all.
+
+    During inference (single timestep): buffers the previous state and stacks
+    ``[previous, current]`` before applying the relative conversion, producing
+    the same ``[n_obs, D]`` shape the model expects.
+
+    Attributes:
+        enabled: Whether to apply the relative conversion.
+        exclude_joints: Joint/dim names to keep absolute.
+        state_names: State dimension names from dataset metadata.
+    """
+
+    enabled: bool = False
+    exclude_joints: list[str] = field(default_factory=list)
+    state_names: list[str] | None = None
+    _previous_state: torch.Tensor | None = field(default=None, init=False, repr=False)
+
+    def _build_mask(self, state_dim: int) -> list[bool]:
+        if not self.exclude_joints or self.state_names is None:
+            return [True] * state_dim
+
+        exclude_tokens = [str(name).lower() for name in self.exclude_joints if name]
+        if not exclude_tokens:
+            return [True] * state_dim
+
+        mask = []
+        for name in self.state_names[:state_dim]:
+            state_name = str(name).lower()
+            is_excluded = any(token == state_name or token in state_name for token in exclude_tokens)
+            mask.append(not is_excluded)
+
+        if len(mask) < state_dim:
+            mask.extend([True] * (state_dim - len(mask)))
+
+        return mask
+
+    def __call__(self, transition: EnvTransition) -> EnvTransition:
+        if not self.enabled:
+            return transition
+
+        observation = transition.get(TransitionKey.OBSERVATION, {})
+        state = observation.get(OBS_STATE) if observation else None
+
+        if state is None:
+            return transition
+
+        new_transition = transition.copy()
+        new_obs = dict(new_transition.get(TransitionKey.OBSERVATION, {}))
+        mask = self._build_mask(state.shape[-1])
+
+        if state.ndim >= 3:
+            # [B, n_obs, D] — multi-timestep (training with state_delta_indices)
+            relative = to_relative_state(state, mask)
+            new_obs[OBS_STATE] = relative.flatten(start_dim=-2)  # [B, n_obs*D]
+        elif state.ndim == 2:
+            # [B, D] — single timestep (inference): buffer previous and stack
+            current = state
+            if self._previous_state is None:
+                self._previous_state = current.clone()
+            prev = self._previous_state
+            if prev.device != current.device or prev.dtype != current.dtype:
+                prev = prev.to(device=current.device, dtype=current.dtype)
+            stacked = torch.stack([prev, current], dim=-2)  # [B, 2, D]
+            relative = to_relative_state(stacked, mask)
+            new_obs[OBS_STATE] = relative.flatten(start_dim=-2)  # [B, 2*D]
+            self._previous_state = current.clone()
+
+        new_transition[TransitionKey.OBSERVATION] = new_obs
+        return new_transition
+
+    def reset(self) -> None:
+        """Reset the state buffer. Call at episode boundaries during inference."""
+        self._previous_state = None
+
+    def get_config(self) -> dict[str, Any]:
+        return {
+            "enabled": self.enabled,
+            "exclude_joints": self.exclude_joints,
+            "state_names": self.state_names,
+        }
+
+    def transform_features(
+        self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
+    ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
+        return features
+
+
 @ProcessorStepRegistry.register("absolute_actions_processor")
 @dataclass
 class AbsoluteActionsProcessorStep(ProcessorStep):
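The masked subtraction in `to_relative_state` can be re-sketched in numpy on illustrative values, including an excluded dimension (e.g. a gripper that stays absolute):

```python
import numpy as np

# mask: dims 0-1 are made relative, dim 2 stays absolute.
mask = np.array([1.0, 1.0, 0.0])
state = np.array([[[1.0, 2.0, 0.5],    # previous timestep
                   [4.0, 6.0, 0.7]]])  # current timestep; shape [B=1, 2, 3]

# Subtract the current (last) timestep from every timestep, masked dims only.
current = state[:, -1:, :]
relative = state - current * mask  # broadcasts over the timestep axis
```

The current timestep becomes zeros in the relative dims while the excluded dim keeps its absolute value, matching the behaviour the docstring describes.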
@@ -62,6 +62,8 @@ class BiOpenArmFollower(Robot):
             can_bitrate=config.left_arm_config.can_bitrate,
             can_data_bitrate=config.left_arm_config.can_data_bitrate,
             motor_config=config.left_arm_config.motor_config,
+            gripper_port=config.left_arm_config.gripper_port,
+            gripper_motor_ids=config.left_arm_config.gripper_motor_ids,
             position_kd=config.left_arm_config.position_kd,
             position_kp=config.left_arm_config.position_kp,
             joint_limits=config.left_arm_config.joint_limits,
@@ -80,6 +82,8 @@ class BiOpenArmFollower(Robot):
             can_bitrate=config.right_arm_config.can_bitrate,
             can_data_bitrate=config.right_arm_config.can_data_bitrate,
             motor_config=config.right_arm_config.motor_config,
+            gripper_port=config.right_arm_config.gripper_port,
+            gripper_motor_ids=config.right_arm_config.gripper_motor_ids,
             position_kd=config.right_arm_config.position_kd,
             position_kp=config.right_arm_config.position_kp,
             joint_limits=config.right_arm_config.joint_limits,
@@ -28,7 +28,8 @@ LEFT_DEFAULT_JOINTS_LIMITS: dict[str, tuple[float, float]] = {
     "joint_5": (-85.0, 85.0),
     "joint_6": (-40.0, 40.0),
     "joint_7": (-80.0, 80.0),
-    "gripper": (-65.0, 0.0),
+    "proximal": (0.0, 100.0),
+    "distal": (0.0, 100.0),
 }

 RIGHT_DEFAULT_JOINTS_LIMITS: dict[str, tuple[float, float]] = {
@@ -39,7 +40,8 @@ RIGHT_DEFAULT_JOINTS_LIMITS: dict[str, tuple[float, float]] = {
     "joint_5": (-85.0, 85.0),
     "joint_6": (-40.0, 40.0),
     "joint_7": (-80.0, 80.0),
-    "gripper": (-65.0, 0.0),
+    "proximal": (0.0, 100.0),
+    "distal": (0.0, 100.0),
 }

@@ -73,13 +75,8 @@ class OpenArmFollowerConfigBase:
     # Camera configurations
     cameras: dict[str, CameraConfig] = field(default_factory=dict)

-    # Motor configuration for OpenArms (7 DOF per arm)
+    # Arm motor configuration (7 DOF, Damiao on CAN bus)
     # Maps motor names to (send_can_id, recv_can_id, motor_type)
-    # Based on: https://docs.openarm.dev/software/setup/configure-test
-    # OpenArms uses 4 types of motors:
-    # - DM8009 (DM-J8009P-2EC) for shoulders (high torque)
-    # - DM4340P and DM4340 for shoulder rotation and elbow
-    # - DM4310 (DM-J4310-2EC V1.1) for wrist and gripper
     motor_config: dict[str, tuple[int, int, str]] = field(
         default_factory=lambda: {
             "joint_1": (0x01, 0x11, "dm8009"),  # J1 - Shoulder pan (DM8009)
@@ -89,19 +86,18 @@ class OpenArmFollowerConfigBase:
             "joint_5": (0x05, 0x15, "dm4310"),  # J5 - Wrist roll (DM4310)
             "joint_6": (0x06, 0x16, "dm4310"),  # J6 - Wrist pitch (DM4310)
             "joint_7": (0x07, 0x17, "dm4310"),  # J7 - Wrist rotation (DM4310)
-            "gripper": (0x08, 0x18, "dm4310"),  # J8 - Gripper (DM4310)
         }
     )

-    # MIT control parameters for position control (used in send_action)
-    # List of 8 values: [joint_1, joint_2, joint_3, joint_4, joint_5, joint_6, joint_7, gripper]
-    position_kp: list[float] = field(
-        default_factory=lambda: [240.0, 240.0, 240.0, 240.0, 24.0, 31.0, 25.0, 25.0]
-    )
-    position_kd: list[float] = field(default_factory=lambda: [5.0, 5.0, 3.0, 5.0, 0.3, 0.3, 0.3, 0.3])
+    # UMI-style gripper (Feetech STS3215 on serial bus)
+    gripper_port: str = "/dev/ttyUSB0"
+    gripper_motor_ids: dict[str, int] = field(default_factory=lambda: {"proximal": 1, "distal": 2})

-    # Values for joint limits. Can be overridden via CLI (for custom values) or by setting config.side to either 'left' or 'right'.
-    # If config.side is left set to None and no CLI values are passed, the default joint limit values are small for safety.
+    # MIT control parameters for the 7 arm joints
+    position_kp: list[float] = field(default_factory=lambda: [240.0, 240.0, 240.0, 240.0, 24.0, 31.0, 25.0])
+    position_kd: list[float] = field(default_factory=lambda: [5.0, 5.0, 3.0, 5.0, 0.3, 0.3, 0.3])
+
+    # Joint limits. Can be overridden via CLI or by setting config.side to 'left' or 'right'.
     joint_limits: dict[str, tuple[float, float]] = field(
         default_factory=lambda: {
             "joint_1": (-5.0, 5.0),
@@ -111,7 +107,8 @@ class OpenArmFollowerConfigBase:
             "joint_5": (-5.0, 5.0),
             "joint_6": (-5.0, 5.0),
             "joint_7": (-5.0, 5.0),
-            "gripper": (-5.0, 0.0),
+            "proximal": (0.0, 100.0),
+            "distal": (0.0, 100.0),
         }
     )
@@ -22,6 +22,7 @@ from typing import Any
 from lerobot.cameras.utils import make_cameras_from_configs
 from lerobot.motors import Motor, MotorCalibration, MotorNormMode
 from lerobot.motors.damiao import DamiaoMotorsBus
+from lerobot.motors.feetech import FeetechMotorsBus, OperatingMode
 from lerobot.types import RobotAction, RobotObservation
 from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected

@@ -38,8 +39,7 @@ logger = logging.getLogger(__name__)

 class OpenArmFollower(Robot):
     """
-    OpenArms Follower Robot which uses CAN bus communication to control 7 DOF arm with a gripper.
-    The arm uses Damiao motors in MIT control mode.
+    OpenArms Follower Robot: 7 DOF Damiao arm (CAN) + UMI-style Feetech gripper (serial).
     """

     config_class = OpenArmFollowerConfig
@@ -49,19 +49,17 @@ class OpenArmFollower(Robot):
         super().__init__(config)
         self.config = config

-        # Arm motors
-        motors: dict[str, Motor] = {}
+        # Arm motors (Damiao on CAN bus)
+        arm_motors: dict[str, Motor] = {}
         for motor_name, (send_id, recv_id, motor_type_str) in config.motor_config.items():
-            motor = Motor(
-                send_id, motor_type_str, MotorNormMode.DEGREES
-            )  # Always use degrees for Damiao motors
+            motor = Motor(send_id, motor_type_str, MotorNormMode.DEGREES)
             motor.recv_id = recv_id
             motor.motor_type_str = motor_type_str
-            motors[motor_name] = motor
+            arm_motors[motor_name] = motor

         self.bus = DamiaoMotorsBus(
             port=self.config.port,
-            motors=motors,
+            motors=arm_motors,
             calibration=self.calibration,
             can_interface=self.config.can_interface,
             use_can_fd=self.config.use_can_fd,
@@ -69,6 +67,17 @@ class OpenArmFollower(Robot):
             data_bitrate=self.config.can_data_bitrate if self.config.use_can_fd else None,
         )

+        # Gripper motors (Feetech STS3215 on serial bus)
+        gripper_motors: dict[str, Motor] = {
+            name: Motor(motor_id, "sts3215", MotorNormMode.RANGE_0_100)
+            for name, motor_id in config.gripper_motor_ids.items()
+        }
+        self.gripper_bus = FeetechMotorsBus(
+            port=config.gripper_port,
+            motors=gripper_motors,
+            calibration=self.calibration,
+        )
+
         if config.side is not None:
             if config.side == "left":
                 config.joint_limits = LEFT_DEFAULT_JOINTS_LIMITS
@@ -84,7 +93,6 @@ class OpenArmFollower(Robot):
             )
             logger.info(f"Values used for joint limits: {config.joint_limits}.")

-        # Initialize cameras
         self.cameras = make_cameras_from_configs(config.cameras)

     @property
@@ -93,8 +101,10 @@ class OpenArmFollower(Robot):
         features: dict[str, type] = {}
         for motor in self.bus.motors:
             features[f"{motor}.pos"] = float
-            features[f"{motor}.vel"] = float  # Add this
-            features[f"{motor}.torque"] = float  # Add this
+            features[f"{motor}.vel"] = float
+            features[f"{motor}.torque"] = float
+        for motor in self.gripper_bus.motors:
+            features[f"{motor}.pos"] = float
         return features

     @property
@@ -116,8 +126,11 @@ class OpenArmFollower(Robot):

     @property
     def is_connected(self) -> bool:
         """Check if robot is connected."""
-        return self.bus.is_connected and all(cam.is_connected for cam in self.cameras.values())
+        return (
+            self.bus.is_connected
+            and self.gripper_bus.is_connected
+            and all(cam.is_connected for cam in self.cameras.values())
+        )

     @check_if_already_connected
     def connect(self, calibrate: bool = True) -> None:
@@ -127,12 +140,12 @@ class OpenArmFollower(Robot):
         We assume that at connection time, the arms are in a safe rest position,
         and torque can be safely disabled to run calibration if needed.
         """
-
-        # Connect to CAN bus
+        logger.info(f"Connecting arm on {self.config.port}...")
         self.bus.connect()

-        # Run calibration if needed
+        logger.info(f"Connecting gripper on {self.config.gripper_port}...")
+        self.gripper_bus.connect()
+
         if not self.is_calibrated and calibrate:
             logger.info(
                 "Mismatch between calibration values in the motor and the calibration file or no calibration file found"
@@ -144,7 +157,7 @@ class OpenArmFollower(Robot):

         self.configure()

-        if self.is_calibrated:
+        if self.bus.is_calibrated:
             self.bus.set_zero_position()

         self.bus.enable_torque()
@@ -153,47 +166,39 @@ class OpenArmFollower(Robot):

     @property
     def is_calibrated(self) -> bool:
         """Check if robot is calibrated."""
|
||||
return self.bus.is_calibrated
|
||||
return self.bus.is_calibrated and self.gripper_bus.is_calibrated
|
||||
|
||||
def calibrate(self) -> None:
|
||||
"""
|
||||
Run calibration procedure for OpenArms robot.
|
||||
Run calibration for both the Damiao arm and Feetech gripper.
|
||||
|
||||
The calibration procedure:
|
||||
1. Disable torque
|
||||
2. Ask user to position arms in hanging position with grippers closed
|
||||
3. Set this as zero position
|
||||
4. Record range of motion for each joint
|
||||
5. Save calibration
|
||||
Arm calibration: set zero position with arm hanging, ±90° default range.
|
||||
Gripper calibration: SO100-style half-turn homing + range recording.
|
||||
"""
|
||||
if self.calibration:
|
||||
# Calibration file exists, ask user whether to use it or run new calibration
|
||||
user_input = input(
|
||||
f"Press ENTER to use provided calibration file associated with the id {self.id}, or type 'c' and press ENTER to run calibration: "
|
||||
)
|
||||
if user_input.strip().lower() != "c":
|
||||
logger.info(f"Writing calibration file associated with the id {self.id} to the motors")
|
||||
self.bus.write_calibration(self.calibration)
|
||||
self.gripper_bus.write_calibration(self.calibration)
|
||||
return
|
||||
|
||||
logger.info(f"\nRunning calibration for {self}")
|
||||
self.bus.disable_torque()
|
||||
|
||||
# Step 1: Set zero position
|
||||
# --- Arm calibration (Damiao) ---
|
||||
self.bus.disable_torque()
|
||||
input(
|
||||
"\nCalibration: Set Zero Position)\n"
|
||||
"\nCalibration: Set Zero Position\n"
|
||||
"Position the arm in the following configuration:\n"
|
||||
" - Arm hanging straight down\n"
|
||||
" - Gripper closed\n"
|
||||
"Press ENTER when ready..."
|
||||
)
|
||||
|
||||
# Set current position as zero for all motors
|
||||
self.bus.set_zero_position()
|
||||
logger.info("Arm zero position set.")
|
||||
|
||||
logger.info("Setting range: -90° to +90° for safety by default for all joints")
|
||||
for motor_name, motor in self.bus.motors.items():
|
||||
self.calibration[motor_name] = MotorCalibration(
|
||||
id=motor.id,
|
||||
@@ -202,17 +207,52 @@ class OpenArmFollower(Robot):
|
||||
range_min=-90,
|
||||
range_max=90,
|
||||
)
|
||||
|
||||
self.bus.write_calibration(self.calibration)
|
||||
|
||||
# --- Gripper calibration (Feetech) ---
|
||||
self.gripper_bus.disable_torque()
|
||||
for motor in self.gripper_bus.motors:
|
||||
self.gripper_bus.write("Operating_Mode", motor, OperatingMode.POSITION.value)
|
||||
|
||||
input("Move gripper to the middle of its range of motion and press ENTER....")
|
||||
homing_offsets = self.gripper_bus.set_half_turn_homings()
|
||||
|
||||
gripper_motor_names = list(self.gripper_bus.motors.keys())
|
||||
print(
|
||||
f"Move gripper joints ({', '.join(gripper_motor_names)}) through their "
|
||||
"entire ranges of motion.\nRecording positions. Press ENTER to stop..."
|
||||
)
|
||||
range_mins, range_maxes = self.gripper_bus.record_ranges_of_motion(gripper_motor_names)
|
||||
|
||||
for motor_name, m in self.gripper_bus.motors.items():
|
||||
self.calibration[motor_name] = MotorCalibration(
|
||||
id=m.id,
|
||||
drive_mode=0,
|
||||
homing_offset=homing_offsets[motor_name],
|
||||
range_min=range_mins[motor_name],
|
||||
range_max=range_maxes[motor_name],
|
||||
)
|
||||
self.gripper_bus.write_calibration(self.calibration)
|
||||
|
||||
self._save_calibration()
|
||||
print(f"Calibration saved to {self.calibration_fpath}")
|
||||
|
||||
def configure(self) -> None:
|
||||
"""Configure motors with appropriate settings."""
|
||||
# TODO(Steven, Pepijn): Slightly different from what it is happening in the leader
|
||||
"""Configure both arm (Damiao) and gripper (Feetech) motors."""
|
||||
with self.bus.torque_disabled():
|
||||
self.bus.configure_motors()
|
||||
|
||||
with self.gripper_bus.torque_disabled():
|
||||
self.gripper_bus.configure_motors()
|
||||
for motor in self.gripper_bus.motors:
|
||||
self.gripper_bus.write("Operating_Mode", motor, OperatingMode.POSITION.value)
|
||||
self.gripper_bus.write("P_Coefficient", motor, 16)
|
||||
self.gripper_bus.write("I_Coefficient", motor, 0)
|
||||
self.gripper_bus.write("D_Coefficient", motor, 32)
|
||||
self.gripper_bus.write("Max_Torque_Limit", motor, 500)
|
||||
self.gripper_bus.write("Protection_Current", motor, 250)
|
||||
self.gripper_bus.write("Overload_Torque", motor, 25)
|
||||
|
||||
def setup_motors(self) -> None:
|
||||
raise NotImplementedError(
|
||||
"Motor ID configuration is typically done via manufacturer tools for CAN motors."
|
||||
@@ -220,25 +260,23 @@ class OpenArmFollower(Robot):
|
||||
|
||||
@check_if_not_connected
|
||||
def get_observation(self) -> RobotObservation:
|
||||
"""
|
||||
Get current observation from robot including position, velocity, and torque.
|
||||
|
||||
Reads all motor states (pos/vel/torque) in one CAN refresh cycle
|
||||
instead of 3 separate reads.
|
||||
"""
|
||||
"""Read all motor states from arm (CAN) and gripper (serial), plus cameras."""
|
||||
start = time.perf_counter()
|
||||
|
||||
obs_dict: dict[str, Any] = {}
|
||||
|
||||
# Arm motors (Damiao) — pos/vel/torque in one CAN refresh cycle
|
||||
states = self.bus.sync_read_all_states()
|
||||
|
||||
for motor in self.bus.motors:
|
||||
state = states.get(motor, {})
|
||||
obs_dict[f"{motor}.pos"] = state.get("position", 0.0)
|
||||
obs_dict[f"{motor}.vel"] = state.get("velocity", 0.0)
|
||||
obs_dict[f"{motor}.torque"] = state.get("torque", 0.0)
|
||||
|
||||
# Capture images from cameras
|
||||
# Gripper motors (Feetech) — position only
|
||||
gripper_positions = self.gripper_bus.sync_read("Present_Position")
|
||||
for motor, val in gripper_positions.items():
|
||||
obs_dict[f"{motor}.pos"] = val
|
||||
|
||||
for cam_key, cam in self.cameras.items():
|
||||
start = time.perf_counter()
|
||||
obs_dict[cam_key] = cam.read_latest()
|
||||
@@ -258,86 +296,76 @@ class OpenArmFollower(Robot):
|
||||
custom_kd: dict[str, float] | None = None,
|
||||
) -> RobotAction:
|
||||
"""
|
||||
Send action command to robot.
|
||||
|
||||
The action magnitude may be clipped based on safety limits.
|
||||
Send action command to robot. Arm joints go to Damiao CAN bus,
|
||||
gripper joints go to Feetech serial bus.
|
||||
|
||||
Args:
|
||||
action: Dictionary with motor positions (e.g., "joint_1.pos", "joint_2.pos")
|
||||
custom_kp: Optional custom kp gains per motor (e.g., {"joint_1": 120.0, "joint_2": 150.0})
|
||||
custom_kd: Optional custom kd gains per motor (e.g., {"joint_1": 1.5, "joint_2": 2.0})
|
||||
action: Dictionary with motor positions (e.g., "joint_1.pos", "proximal.pos")
|
||||
custom_kp: Optional custom kp gains per arm motor
|
||||
custom_kd: Optional custom kd gains per arm motor
|
||||
|
||||
Returns:
|
||||
The action actually sent (potentially clipped)
|
||||
"""
|
||||
|
||||
goal_pos = {key.removesuffix(".pos"): val for key, val in action.items() if key.endswith(".pos")}
|
||||
|
||||
# Apply joint limit clipping to arm
|
||||
# Apply joint limit clipping
|
||||
for motor_name, position in goal_pos.items():
|
||||
if motor_name in self.config.joint_limits:
|
||||
min_limit, max_limit = self.config.joint_limits[motor_name]
|
||||
clipped_position = max(min_limit, min(max_limit, position))
|
||||
if clipped_position != position:
|
||||
logger.debug(f"Clipped {motor_name} from {position:.2f}° to {clipped_position:.2f}°")
|
||||
logger.debug(f"Clipped {motor_name} from {position:.2f} to {clipped_position:.2f}")
|
||||
goal_pos[motor_name] = clipped_position
|
||||
|
||||
# Cap goal position when too far away from present position.
|
||||
# /!\ Slower fps expected due to reading from the follower.
|
||||
if self.config.max_relative_target is not None:
|
||||
# Split into arm and gripper actions
|
||||
arm_motors = set(self.bus.motors.keys())
|
||||
gripper_motors = set(self.gripper_bus.motors.keys())
|
||||
arm_goal = {k: v for k, v in goal_pos.items() if k in arm_motors}
|
||||
gripper_goal = {k: v for k, v in goal_pos.items() if k in gripper_motors}
|
||||
|
||||
# Cap arm goal position when too far away from present position
|
||||
if self.config.max_relative_target is not None and arm_goal:
|
||||
present_pos = self.bus.sync_read("Present_Position")
|
||||
goal_present_pos = {key: (g_pos, present_pos[key]) for key, g_pos in goal_pos.items()}
|
||||
goal_pos = ensure_safe_goal_position(goal_present_pos, self.config.max_relative_target)
|
||||
goal_present_pos = {key: (g_pos, present_pos[key]) for key, g_pos in arm_goal.items()}
|
||||
arm_goal = ensure_safe_goal_position(goal_present_pos, self.config.max_relative_target)
|
||||
|
||||
# TODO(Steven, Pepijn): Refactor writing
|
||||
# Motor name to index mapping for gains
|
||||
motor_index = {
|
||||
"joint_1": 0,
|
||||
"joint_2": 1,
|
||||
"joint_3": 2,
|
||||
"joint_4": 3,
|
||||
"joint_5": 4,
|
||||
"joint_6": 5,
|
||||
"joint_7": 6,
|
||||
"gripper": 7,
|
||||
}
|
||||
# Arm: batch MIT control (Damiao)
|
||||
if arm_goal:
|
||||
arm_motor_names = list(self.bus.motors.keys())
|
||||
commands = {}
|
||||
for motor_name, position_degrees in arm_goal.items():
|
||||
idx = arm_motor_names.index(motor_name) if motor_name in arm_motor_names else 0
|
||||
if custom_kp is not None and motor_name in custom_kp:
|
||||
kp = custom_kp[motor_name]
|
||||
else:
|
||||
kp = (
|
||||
self.config.position_kp[idx]
|
||||
if isinstance(self.config.position_kp, list)
|
||||
else self.config.position_kp
|
||||
)
|
||||
if custom_kd is not None and motor_name in custom_kd:
|
||||
kd = custom_kd[motor_name]
|
||||
else:
|
||||
kd = (
|
||||
self.config.position_kd[idx]
|
||||
if isinstance(self.config.position_kd, list)
|
||||
else self.config.position_kd
|
||||
)
|
||||
commands[motor_name] = (kp, kd, position_degrees, 0.0, 0.0)
|
||||
self.bus._mit_control_batch(commands)
|
||||
|
||||
# Use batch MIT control for arm (sends all commands, then collects responses)
|
||||
commands = {}
|
||||
for motor_name, position_degrees in goal_pos.items():
|
||||
idx = motor_index.get(motor_name, 0)
|
||||
# Use custom gains if provided, otherwise use config defaults
|
||||
if custom_kp is not None and motor_name in custom_kp:
|
||||
kp = custom_kp[motor_name]
|
||||
else:
|
||||
kp = (
|
||||
self.config.position_kp[idx]
|
||||
if isinstance(self.config.position_kp, list)
|
||||
else self.config.position_kp
|
||||
)
|
||||
if custom_kd is not None and motor_name in custom_kd:
|
||||
kd = custom_kd[motor_name]
|
||||
else:
|
||||
kd = (
|
||||
self.config.position_kd[idx]
|
||||
if isinstance(self.config.position_kd, list)
|
||||
else self.config.position_kd
|
||||
)
|
||||
commands[motor_name] = (kp, kd, position_degrees, 0.0, 0.0)
|
||||
|
||||
self.bus._mit_control_batch(commands)
|
||||
# Gripper: position control (Feetech)
|
||||
if gripper_goal:
|
||||
self.gripper_bus.sync_write("Goal_Position", gripper_goal)
|
||||
|
||||
goal_pos.update(arm_goal)
|
||||
return {f"{motor}.pos": val for motor, val in goal_pos.items()}
|
||||
|
||||
@check_if_not_connected
|
||||
def disconnect(self):
|
||||
"""Disconnect from robot."""
|
||||
|
||||
# Disconnect CAN bus
|
||||
self.bus.disconnect(self.config.disable_torque_on_disconnect)
|
||||
|
||||
# Disconnect cameras
|
||||
self.gripper_bus.disconnect(self.config.disable_torque_on_disconnect)
|
||||
for cam in self.cameras.values():
|
||||
cam.disconnect()
|
||||
|
||||
logger.info(f"{self} disconnected.")
|
||||
|
||||
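The joint-limit clipping in `send_action` above reduces to a small pure function. The sketch below is illustrative only (the name `clip_to_limits` is not part of the lerobot API), using the ±90° default arm range that `calibrate` writes:

```python
def clip_to_limits(goal: dict[str, float], limits: dict[str, tuple[float, float]]) -> dict[str, float]:
    """Clamp each joint goal to its (min, max) limit; joints without
    a configured limit pass through unchanged."""
    clipped = {}
    for name, pos in goal.items():
        if name in limits:
            lo, hi = limits[name]
            pos = max(lo, min(hi, pos))  # same max/min composition as send_action
        clipped[name] = pos
    return clipped

# A goal beyond the ±90° default arm range is pulled back to the boundary;
# the gripper has no entry in `limits`, so it is left untouched.
safe = clip_to_limits({"joint_1": 120.0, "gripper": 50.0}, {"joint_1": (-90.0, 90.0)})
```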
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:baf52578e1d9e6225f3818cae82b6074a0b948d3cef8e9a3e6dfafca78507590
size 40284
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:066113d13d5cc85098609003bc7ebb73c570015350877f5ed7162ef1b6601852
size 17784
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:382ab32e4ae0880e8a1512e7a6ca6ce1f478a6c125db4efa977429ffb1d6b02a
size 13384
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:00c908cefab152c00416a570a48bf9aafed1549085f19ff2d882dc3f355d9f59
size 156984
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b54883b8c7c96268a68a5879f95998a53ad0b0c4fe74325fad63a6caef669c73
size 1139984
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:678a2802906eff7b45a836d2f34a2d8e51def50b6599376968f888e05c72739e
size 751484
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:95529bec23733476dfdbbb266c7db0d25a473a568de73c8337a82440fe4a9ac3
size 30084
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:434f207f21f75f5f0bd604e390b8e5bc7b62b619265222846770e06b3f9b5cfb
size 23884
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c77bdc9419947088e1dfc452e29c6092cc7b02b239ff4f2f5be3d77e393af185
size 4148234
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2a182c7fd4d25226aff6d9e2dd9d1008a85fce7df8e16525722a5c5053f8b055
size 5741534
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:de1b190a56c16dea14546fb8e22c86eccabc2ca5e054819630c1932592381745
size 4543534
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8d47d75d0c47a65021708ba63cb73162bf7c3b1d14e9cfc70dfa47336034ac76
size 4978834
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8ca8149f2ce8b1b102270ec0b1a4b75c3e6f98c09b084430a450adce808607e1
size 4944684
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:30ea7abfdd3661b315f897bb82a5f34fd966357b358e890e155e928e931ea975
size 6322984
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e49f91279f109baecb9ff54f5041eeb4514f757ba6daa65c3ea01fb1991967e4
size 4818434
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a91593b67d2dec16d1dfb6f1305df3ddcd214cdef02e97d5aba30bc633e775b2
size 5114784
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:6c18bbf7e86b03e3faf802e61e8eb438b38dcbcf146d97cffe6e808c65e9a72a
size 293284
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7ab0afa9a26bebbf5c6149894c41d96622caa649555ae7c5f0f2548f86148d91
size 7955034
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8e96e1314618cf434908f70df78f68dd2b049c03538964e8d41fc99abe41564d
size 13284
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8e5d373ebbd3fd001b506058644062ad71a68f1ced5ca5d5ed0f6de20137956b
size 18284
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:25c115c7c55a422f30ad11581dc21576dc8fc4e09e659890772d86fe82ec04d7
size 432484
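Each of the new mesh files above is checked in as a Git LFS pointer rather than the binary itself: a three-line text file with `version`, `oid`, and `size` fields. A minimal sketch of reading one (the helper name `parse_lfs_pointer` is illustrative):

```python
def parse_lfs_pointer(text: str) -> dict[str, str]:
    """Split a Git LFS pointer file into its key/value fields
    (each line is "<key> <value>", separated by a single space)."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:baf52578e1d9e6225f3818cae82b6074a0b948d3cef8e9a3e6dfafca78507590\n"
    "size 40284\n"
)
fields = parse_lfs_pointer(pointer)  # fields["size"] == "40284"
```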
@@ -0,0 +1,630 @@
|
||||
<?xml version='1.0' encoding='utf-8'?>
|
||||
<robot name="openarm">
|
||||
<link name="world" />
|
||||
<joint name="openarm_body_world_joint" type="fixed">
|
||||
<parent link="world" />
|
||||
<child link="openarm_body_link0" />
|
||||
<origin rpy="0 0 0" xyz="0 0 0" />
|
||||
</joint>
|
||||
<link name="openarm_body_link0">
|
||||
<visual name="openarm_body_link0_visual">
|
||||
<origin rpy="0.0 0.0 0.0" xyz="0.0 0.0 0.0" />
|
||||
<geometry>
|
||||
<mesh filename="./meshes/body/v10/visual/body_link0.stl" scale="0.001 0.001 0.001" />
|
||||
</geometry>
|
||||
</visual>
|
||||
<collision name="openarm_body_link0_collision">
|
||||
<origin rpy="0.0 0.0 0.0" xyz="0.0 0.0 0.0" />
|
||||
<geometry>
|
||||
<mesh filename="./meshes/body/v10/collision/body_link0_symp.stl" scale="0.001 0.001 0.001" />
|
||||
</geometry>
|
||||
</collision>
|
||||
<inertial>
|
||||
<origin rpy="0.0 0.0 0.0" xyz="0.0 0.0 0.0" />
|
||||
<mass value="13.89" />
|
||||
<inertia ixx="1.653" ixy="0.0" ixz="0.0" iyy="1.653" iyz="0.0" izz="0.051" />
|
||||
</inertial>
|
||||
</link>
|
||||
<joint name="openarm_left_openarm_body_link0_joint" type="fixed">
|
||||
<parent link="openarm_body_link0" />
|
||||
<child link="openarm_left_link0" />
|
||||
<origin rpy="-1.5708 0 0" xyz="0.0 0.031 0.698" />
|
||||
</joint>
|
||||
<link name="openarm_left_link0">
|
||||
<visual name="openarm_left_link0_visual">
|
||||
<origin rpy="0.0 0.0 0.0" xyz="0.0 0.0 0.0" />
|
||||
<geometry>
|
||||
<mesh filename="./meshes/arm/v10/visual/link0.stl" scale="0.001 -0.001 0.001" />
|
||||
</geometry>
|
||||
</visual>
|
||||
<collision name="openarm_left_link0_collision">
|
||||
<origin rpy="0.0 0.0 0.0" xyz="0.0 0.0 0.0" />
|
||||
<geometry>
|
||||
<mesh filename="./meshes/arm/v10/collision/link0_symp.stl" scale="0.001 -0.001 0.001" />
|
||||
</geometry>
|
||||
</collision>
|
||||
<inertial>
|
||||
<origin rpy="0.0 0.0 0.0" xyz="-0.0009483362816297526 -0.0001580207020448382 0.03076860287587199" />
|
||||
<mass value="1.1432284943239561" />
|
||||
<inertia ixx="0.001128" ixy="-4e-06" ixz="-3.3e-05" iyy="0.000962" iyz="-7e-06" izz="0.00147" />
|
||||
</inertial>
|
||||
</link>
|
||||
<link name="openarm_left_link1">
|
||||
<visual name="openarm_left_link1_visual">
|
||||
<origin rpy="0.0 0.0 0.0" xyz="-0.0 0.0 -0.0625" />
|
||||
<geometry>
|
||||
<mesh filename="./meshes/arm/v10/visual/link1.stl" scale="0.001 -0.001 0.001" />
|
||||
</geometry>
|
||||
</visual>
|
||||
<collision name="openarm_left_link1_collision">
|
||||
<origin rpy="0.0 0.0 0.0" xyz="-0.0 0.0 -0.0625" />
|
||||
<geometry>
|
||||
<mesh filename="./meshes/arm/v10/collision/link1_symp.stl" scale="0.001 -0.001 0.001" />
|
||||
</geometry>
|
||||
</collision>
|
||||
<inertial>
|
||||
<origin rpy="0.0 0.0 0.0" xyz="0.0011467657911800769 -3.319987657026362e-05 0.05395284380736254" />
|
||||
<mass value="1.1416684646202298" />
|
||||
<inertia ixx="0.001567" ixy="-1e-06" ixz="-2.9e-05" iyy="0.001273" iyz="1e-06" izz="0.001016" />
|
||||
</inertial>
|
||||
</link>
|
||||
<joint name="openarm_left_joint1" type="revolute">
|
||||
<origin rpy="0 0 0" xyz="0.0 0.0 0.0625" />
|
||||
<parent link="openarm_left_link0" />
|
||||
<child link="openarm_left_link1" />
|
||||
<axis xyz="0 0 1" />
|
||||
<limit effort="40" lower="-3.490659" upper="1.3962629999999998" velocity="16.754666" />
|
||||
</joint>
|
||||
<link name="openarm_left_link2">
|
||||
<visual name="openarm_left_link2_visual">
|
||||
<origin rpy="0.0 0.0 0.0" xyz="0.0301 0.0 -0.1225" />
|
||||
<geometry>
|
||||
<mesh filename="./meshes/arm/v10/visual/link2.stl" scale="0.001 -0.001 0.001" />
|
||||
</geometry>
|
||||
</visual>
|
||||
<collision name="openarm_left_link2_collision">
|
||||
<origin rpy="0.0 0.0 0.0" xyz="0.0301 0.0 -0.1225" />
|
||||
<geometry>
|
||||
<mesh filename="./meshes/arm/v10/collision/link2_symp.stl" scale="0.001 -0.001 0.001" />
|
||||
</geometry>
|
||||
</collision>
|
||||
<inertial>
|
||||
<origin rpy="0.0 0.0 0.0" xyz="0.00839629182351943 2.0145102027597523e-08 0.03256649300522363" />
|
||||
<mass value="0.2775092746011571" />
|
||||
<inertia ixx="0.000359" ixy="1e-06" ixz="-0.000109" iyy="0.000376" iyz="1e-06" izz="0.000232" />
|
||||
</inertial>
|
||||
</link>
|
||||
<joint name="openarm_left_joint2" type="revolute">
|
||||
<origin rpy="-1.57079632679 0 0" xyz="-0.0301 0.0 0.06" />
|
||||
<parent link="openarm_left_link1" />
|
||||
<child link="openarm_left_link2" />
|
||||
<axis xyz="-1 0 0" />
|
||||
<limit effort="40" lower="-3.3161253267948965" upper="0.17453267320510335" velocity="16.754666" />
|
||||
</joint>
|
||||
<link name="openarm_left_link3">
|
||||
<visual name="openarm_left_link3_visual">
|
||||
<origin rpy="0.0 0.0 0.0" xyz="-0.0 -0.0 -0.18875" />
|
||||
<geometry>
|
||||
<mesh filename="./meshes/arm/v10/visual/link3.stl" scale="0.001 -0.001 0.001" />
|
||||
</geometry>
|
||||
</visual>
|
||||
<collision name="openarm_left_link3_collision">
|
||||
<origin rpy="0.0 0.0 0.0" xyz="-0.0 -0.0 -0.18875" />
|
||||
<geometry>
|
||||
<mesh filename="./meshes/arm/v10/collision/link3_symp.stl" scale="0.001 -0.001 0.001" />
|
||||
</geometry>
|
||||
</collision>
|
||||
<inertial>
|
||||
<origin rpy="0.0 0.0 0.0" xyz="-0.002104752099628911 -0.0005549085042607548 0.09047470545721961" />
|
||||
<mass value="1.073863338202347" />
|
||||
<inertia ixx="0.004372" ixy="1e-06" ixz="1.1e-05" iyy="0.004319" iyz="-3.6e-05" izz="0.000661" />
|
||||
</inertial>
|
||||
</link>
|
||||
<joint name="openarm_left_joint3" type="revolute">
|
||||
<origin rpy="0 0 0" xyz="0.0301 0.0 0.06625" />
|
||||
<parent link="openarm_left_link2" />
|
||||
<child link="openarm_left_link3" />
|
||||
<axis xyz="0 0 1" />
|
||||
<limit effort="27" lower="-1.570796" upper="1.570796" velocity="5.445426" />
|
||||
</joint>
|
||||
<link name="openarm_left_link4">
|
||||
<visual name="openarm_left_link4_visual">
|
||||
<origin rpy="0.0 0.0 0.0" xyz="0.0 -0.0315 -0.3425" />
|
||||
<geometry>
|
||||
<mesh filename="./meshes/arm/v10/visual/link4.stl" scale="0.001 0.001 0.001" />
|
||||
</geometry>
|
||||
</visual>
|
||||
<collision name="openarm_left_link4_collision">
|
||||
<origin rpy="0.0 0.0 0.0" xyz="0.0 -0.0315 -0.3425" />
|
||||
<geometry>
|
||||
<mesh filename="./meshes/arm/v10/collision/link4_symp.stl" scale="0.001 0.001 0.001" />
|
||||
</geometry>
|
||||
</collision>
|
||||
<inertial>
|
||||
<origin rpy="0.0 0.0 0.0" xyz="-0.0029006831074562967 -0.03030575826634669 0.06339637422196209" />
|
||||
<mass value="0.6348534566833373" />
|
||||
<inertia ixx="0.000623" ixy="-1e-06" ixz="-1.9e-05" iyy="0.000511" iyz="3.8e-05" izz="0.000334" />
|
||||
</inertial>
|
||||
</link>
|
||||
<joint name="openarm_left_joint4" type="revolute">
|
||||
<origin rpy="0 0 0" xyz="-0.0 0.0315 0.15375" />
|
||||
<parent link="openarm_left_link3" />
|
||||
<child link="openarm_left_link4" />
|
||||
<axis xyz="0 1 0" />
|
||||
<limit effort="27" lower="0.0" upper="2.443461" velocity="5.445426" />
|
||||
</joint>
|
||||
<link name="openarm_left_link5">
|
||||
<visual name="openarm_left_link5_visual">
|
||||
<origin rpy="0.0 0.0 0.0" xyz="-0.0 -0.0 -0.438" />
|
||||
<geometry>
|
||||
<mesh filename="./meshes/arm/v10/visual/link5.stl" scale="0.001 -0.001 0.001" />
|
||||
</geometry>
|
||||
</visual>
|
||||
<collision name="openarm_left_link5_collision">
|
||||
<origin rpy="0.0 0.0 0.0" xyz="-0.0 -0.0 -0.438" />
|
||||
<geometry>
|
||||
<mesh filename="./meshes/arm/v10/collision/link5_symp.stl" scale="0.001 -0.001 0.001" />
|
||||
</geometry>
|
||||
</collision>
|
||||
<inertial>
|
||||
<origin rpy="0.0 0.0 0.0" xyz="-0.003049665024221911 -0.0008866902457326625 0.043079803024980934" />
|
||||
<mass value="0.6156588026168502" />
|
||||
<inertia ixx="0.000423" ixy="-8e-06" ixz="6e-06" iyy="0.000445" iyz="-6e-06" izz="0.000324" />
|
||||
</inertial>
|
||||
</link>
|
||||
<joint name="openarm_left_joint5" type="revolute">
|
||||
<origin rpy="0 0 0" xyz="0.0 -0.0315 0.0955" />
|
||||
<parent link="openarm_left_link4" />
|
||||
<child link="openarm_left_link5" />
|
||||
<axis xyz="0 0 1" />
|
||||
<limit effort="7" lower="-1.570796" upper="1.570796" velocity="20.943946" />
|
||||
</joint>
|
||||
<link name="openarm_left_link6">
|
||||
<visual name="openarm_left_link6_visual">
|
||||
<origin rpy="0.0 0.0 0.0" xyz="-0.0375 -0.0 -0.5585" />
|
||||
<geometry>
|
||||
<mesh filename="./meshes/arm/v10/visual/link6.stl" scale="0.001 -0.001 0.001" />
|
||||
</geometry>
|
||||
</visual>
|
||||
<collision name="openarm_left_link6_collision">
|
||||
<origin rpy="0.0 0.0 0.0" xyz="-0.0375 -0.0 -0.5585" />
|
||||
<geometry>
|
||||
<mesh filename="./meshes/arm/v10/collision/link6_symp.stl" scale="0.001 -0.001 0.001" />
|
||||
</geometry>
|
||||
</collision>
|
||||
<inertial>
|
||||
<origin rpy="0.0 0.0 0.0" xyz="-0.037136587005447405 -0.00033230528343419053 -9.498374522309838e-05" />
|
||||
<mass value="0.475202773187987" />
|
||||
<inertia ixx="0.000143" ixy="1e-06" ixz="1e-06" iyy="0.000157" iyz="1e-06" izz="0.000159" />
|
||||
</inertial>
|
||||
</link>
|
||||
<joint name="openarm_left_joint6" type="revolute">
|
||||
<origin rpy="0 0 0" xyz="0.0375 0.0 0.1205" />
|
||||
<parent link="openarm_left_link5" />
|
||||
<child link="openarm_left_link6" />
|
||||
<axis xyz="1 0 0" />
|
||||
<limit effort="7" lower="-0.785398" upper="0.785398" velocity="20.943946" />
|
||||
</joint>
|
||||
<link name="openarm_left_link7">
|
||||
<visual name="openarm_left_link7_visual">
|
||||
<origin rpy="0.0 0.0 0.0" xyz="0.0 -0.0 -0.5585" />
|
||||
<geometry>
|
||||
<mesh filename="./meshes/arm/v10/visual/link7.stl" scale="0.001 -0.001 0.001" />
|
||||
</geometry>
|
||||
</visual>
|
||||
<collision name="openarm_left_link7_collision">
|
||||
<origin rpy="0.0 0.0 0.0" xyz="0.0 -0.0 -0.5585" />
|
||||
<geometry>
|
||||
<mesh filename="./meshes/arm/v10/collision/link7_symp.stl" scale="0.001 -0.001 0.001" />
|
||||
</geometry>
|
||||
</collision>
|
||||
<inertial>
|
||||
<origin rpy="0.0 0.0 0.0" xyz="6.875510271106056e-05 -0.01266175250761268 0.06951945409987448" />
|
||||
<mass value="0.4659771327380578" />
|
||||
<inertia ixx="0.000639" ixy="1e-06" ixz="1e-06" iyy="0.000497" iyz="8.9e-05" izz="0.000342" />
|
||||
</inertial>
|
||||
</link>
|
||||
<joint name="openarm_left_joint7" type="revolute">
|
||||
<origin rpy="0 0 0" xyz="-0.0375 0.0 0.0" />
|
||||
<parent link="openarm_left_link6" />
|
||||
<child link="openarm_left_link7" />
|
||||
<axis xyz="0 -1 0" />
|
||||
<limit effort="7" lower="-1.570796" upper="1.570796" velocity="20.943946" />
|
||||
</joint>
|
||||
<joint name="openarm_right_openarm_body_link0_joint" type="fixed">
|
||||
<parent link="openarm_body_link0" />
|
||||
<child link="openarm_right_link0" />
|
||||
<origin rpy="1.5708 0 0" xyz="0.0 -0.031 0.698" />
|
||||
</joint>
|
||||
<link name="openarm_right_link0">
|
||||
<visual name="openarm_right_link0_visual">
|
||||
<origin rpy="0.0 0.0 0.0" xyz="0.0 0.0 0.0" />
|
||||
<geometry>
|
||||
<mesh filename="./meshes/arm/v10/visual/link0.stl" scale="0.001 0.001 0.001" />
|
||||
</geometry>
|
||||
</visual>
|
||||
<collision name="openarm_right_link0_collision">
|
||||
<origin rpy="0.0 0.0 0.0" xyz="0.0 0.0 0.0" />
|
||||
<geometry>
|
||||
<mesh filename="./meshes/arm/v10/collision/link0_symp.stl" scale="0.001 0.001 0.001" />
|
||||
</geometry>
|
||||
</collision>
|
||||
<inertial>
|
||||
<origin rpy="0.0 0.0 0.0" xyz="-0.0009483362816297526 0.0001580207020448382 0.03076860287587199" />
|
||||
<mass value="1.1432284943239561" />
|
||||
<inertia ixx="0.001128" ixy="-4e-06" ixz="-3.3e-05" iyy="0.000962" iyz="-7e-06" izz="0.00147" />
|
||||
</inertial>
|
||||
</link>
|
||||
<link name="openarm_right_link1">
|
||||
<visual name="openarm_right_link1_visual">
|
||||
<origin rpy="0.0 0.0 0.0" xyz="-0.0 0.0 -0.0625" />
|
||||
<geometry>
|
||||
<mesh filename="./meshes/arm/v10/visual/link1.stl" scale="0.001 0.001 0.001" />
|
||||
</geometry>
|
||||
</visual>
|
||||
<collision name="openarm_right_link1_collision">
|
||||
<origin rpy="0.0 0.0 0.0" xyz="-0.0 0.0 -0.0625" />
|
||||
<geometry>
|
||||
<mesh filename="./meshes/arm/v10/collision/link1_symp.stl" scale="0.001 0.001 0.001" />
|
||||
</geometry>
|
||||
</collision>
|
||||
<inertial>
|
||||
<origin rpy="0.0 0.0 0.0" xyz="0.0011467657911800769 3.319987657026362e-05 0.05395284380736254" />
|
||||
<mass value="1.1416684646202298" />
|
||||
<inertia ixx="0.001567" ixy="-1e-06" ixz="-2.9e-05" iyy="0.001273" iyz="1e-06" izz="0.001016" />
|
||||
</inertial>
|
||||
</link>
|
||||
<joint name="openarm_right_joint1" type="revolute">
|
||||
<origin rpy="0 0 0" xyz="0.0 0.0 0.0625" />
|
||||
<parent link="openarm_right_link0" />
|
||||
<child link="openarm_right_link1" />
|
||||
<axis xyz="0 0 1" />
|
||||
<limit effort="40" lower="-1.396263" upper="3.490659" velocity="16.754666" />
|
||||
</joint>
|
||||
<link name="openarm_right_link2">
|
||||
<visual name="openarm_right_link2_visual">
|
||||
<origin rpy="0.0 0.0 0.0" xyz="0.0301 0.0 -0.1225" />
|
||||
<geometry>
|
||||
<mesh filename="./meshes/arm/v10/visual/link2.stl" scale="0.001 0.001 0.001" />
|
||||
</geometry>
|
||||
</visual>
|
||||
<collision name="openarm_right_link2_collision">
|
||||
<origin rpy="0.0 0.0 0.0" xyz="0.0301 0.0 -0.1225" />
|
||||
<geometry>
|
||||
<mesh filename="./meshes/arm/v10/collision/link2_symp.stl" scale="0.001 0.001 0.001" />
|
||||
</geometry>
|
||||
</collision>
|
||||
<inertial>
|
||||
<origin rpy="0.0 0.0 0.0" xyz="0.00839629182351943 -2.0145102027597523e-08 0.03256649300522363" />
|
||||
<mass value="0.2775092746011571" />
|
||||
<inertia ixx="0.000359" ixy="1e-06" ixz="-0.000109" iyy="0.000376" iyz="1e-06" izz="0.000232" />
|
||||
</inertial>
|
||||
</link>
|
||||
<joint name="openarm_right_joint2" type="revolute">
|
||||
<origin rpy="1.57079632679 0 0" xyz="-0.0301 0.0 0.06" />
|
||||
<parent link="openarm_right_link1" />
|
||||
<child link="openarm_right_link2" />
|
||||
<axis xyz="-1 0 0" />
|
||||
<limit effort="40" lower="-0.17453267320510335" upper="3.3161253267948965" velocity="16.754666" />
|
||||
</joint>
|
||||
<link name="openarm_right_link3">
<visual name="openarm_right_link3_visual">
<origin rpy="0.0 0.0 0.0" xyz="-0.0 -0.0 -0.18875" />
<geometry>
<mesh filename="./meshes/arm/v10/visual/link3.stl" scale="0.001 0.001 0.001" />
</geometry>
</visual>
<collision name="openarm_right_link3_collision">
<origin rpy="0.0 0.0 0.0" xyz="-0.0 -0.0 -0.18875" />
<geometry>
<mesh filename="./meshes/arm/v10/collision/link3_symp.stl" scale="0.001 0.001 0.001" />
</geometry>
</collision>
<inertial>
<origin rpy="0.0 0.0 0.0" xyz="-0.002104752099628911 0.0005549085042607548 0.09047470545721961" />
<mass value="1.073863338202347" />
<inertia ixx="0.004372" ixy="1e-06" ixz="1.1e-05" iyy="0.004319" iyz="-3.6e-05" izz="0.000661" />
</inertial>
</link>
<joint name="openarm_right_joint3" type="revolute">
<origin rpy="0 0 0" xyz="0.0301 0.0 0.06625" />
<parent link="openarm_right_link2" />
<child link="openarm_right_link3" />
<axis xyz="0 0 1" />
<limit effort="27" lower="-1.570796" upper="1.570796" velocity="5.445426" />
</joint>
<link name="openarm_right_link4">
<visual name="openarm_right_link4_visual">
<origin rpy="0.0 0.0 0.0" xyz="0.0 -0.0315 -0.3425" />
<geometry>
<mesh filename="./meshes/arm/v10/visual/link4.stl" scale="0.001 0.001 0.001" />
</geometry>
</visual>
<collision name="openarm_right_link4_collision">
<origin rpy="0.0 0.0 0.0" xyz="0.0 -0.0315 -0.3425" />
<geometry>
<mesh filename="./meshes/arm/v10/collision/link4_symp.stl" scale="0.001 0.001 0.001" />
</geometry>
</collision>
<inertial>
<origin rpy="0.0 0.0 0.0" xyz="-0.0029006831074562967 -0.03030575826634669 0.06339637422196209" />
<mass value="0.6348534566833373" />
<inertia ixx="0.000623" ixy="-1e-06" ixz="-1.9e-05" iyy="0.000511" iyz="3.8e-05" izz="0.000334" />
</inertial>
</link>
<joint name="openarm_right_joint4" type="revolute">
<origin rpy="0 0 0" xyz="-0.0 0.0315 0.15375" />
<parent link="openarm_right_link3" />
<child link="openarm_right_link4" />
<axis xyz="0 1 0" />
<limit effort="27" lower="0.0" upper="2.443461" velocity="5.445426" />
</joint>
<link name="openarm_right_link5">
<visual name="openarm_right_link5_visual">
<origin rpy="0.0 0.0 0.0" xyz="-0.0 -0.0 -0.438" />
<geometry>
<mesh filename="./meshes/arm/v10/visual/link5.stl" scale="0.001 0.001 0.001" />
</geometry>
</visual>
<collision name="openarm_right_link5_collision">
<origin rpy="0.0 0.0 0.0" xyz="-0.0 -0.0 -0.438" />
<geometry>
<mesh filename="./meshes/arm/v10/collision/link5_symp.stl" scale="0.001 0.001 0.001" />
</geometry>
</collision>
<inertial>
<origin rpy="0.0 0.0 0.0" xyz="-0.003049665024221911 0.0008866902457326625 0.043079803024980934" />
<mass value="0.6156588026168502" />
<inertia ixx="0.000423" ixy="-8e-06" ixz="6e-06" iyy="0.000445" iyz="-6e-06" izz="0.000324" />
</inertial>
</link>
<joint name="openarm_right_joint5" type="revolute">
<origin rpy="0 0 0" xyz="0.0 -0.0315 0.0955" />
<parent link="openarm_right_link4" />
<child link="openarm_right_link5" />
<axis xyz="0 0 1" />
<limit effort="7" lower="-1.570796" upper="1.570796" velocity="20.943946" />
</joint>
<link name="openarm_right_link6">
<visual name="openarm_right_link6_visual">
<origin rpy="0.0 0.0 0.0" xyz="-0.0375 -0.0 -0.5585" />
<geometry>
<mesh filename="./meshes/arm/v10/visual/link6.stl" scale="0.001 0.001 0.001" />
</geometry>
</visual>
<collision name="openarm_right_link6_collision">
<origin rpy="0.0 0.0 0.0" xyz="-0.0375 -0.0 -0.5585" />
<geometry>
<mesh filename="./meshes/arm/v10/collision/link6_symp.stl" scale="0.001 0.001 0.001" />
</geometry>
</collision>
<inertial>
<origin rpy="0.0 0.0 0.0" xyz="-0.037136587005447405 0.00033230528343419053 -9.498374522309838e-05" />
<mass value="0.475202773187987" />
<inertia ixx="0.000143" ixy="1e-06" ixz="1e-06" iyy="0.000157" iyz="1e-06" izz="0.000159" />
</inertial>
</link>
<joint name="openarm_right_joint6" type="revolute">
<origin rpy="0 0 0" xyz="0.0375 0.0 0.1205" />
<parent link="openarm_right_link5" />
<child link="openarm_right_link6" />
<axis xyz="1 0 0" />
<limit effort="7" lower="-0.785398" upper="0.785398" velocity="20.943946" />
</joint>
<link name="openarm_right_link7">
<visual name="openarm_right_link7_visual">
<origin rpy="0.0 0.0 0.0" xyz="0.0 -0.0 -0.5585" />
<geometry>
<mesh filename="./meshes/arm/v10/visual/link7.stl" scale="0.001 0.001 0.001" />
</geometry>
</visual>
<collision name="openarm_right_link7_collision">
<origin rpy="0.0 0.0 0.0" xyz="0.0 -0.0 -0.5585" />
<geometry>
<mesh filename="./meshes/arm/v10/collision/link7_symp.stl" scale="0.001 0.001 0.001" />
</geometry>
</collision>
<inertial>
<origin rpy="0.0 0.0 0.0" xyz="6.875510271106056e-05 0.01266175250761268 0.06951945409987448" />
<mass value="0.4659771327380578" />
<inertia ixx="0.000639" ixy="1e-06" ixz="1e-06" iyy="0.000497" iyz="8.9e-05" izz="0.000342" />
</inertial>
</link>
<joint name="openarm_right_joint7" type="revolute">
<origin rpy="0 0 0" xyz="-0.0375 0.0 0.0" />
<parent link="openarm_right_link6" />
<child link="openarm_right_link7" />
<axis xyz="0 1 0" />
<limit effort="7" lower="-1.570796" upper="1.570796" velocity="20.943946" />
</joint>
<link name="openarm_left_hand">
<visual name="openarm_left_hand_visual">
<origin rpy="0.0 0.0 0.0" xyz="0.0 0.0 -0.6585" />
<geometry>
<mesh filename="./meshes/ee/openarm_hand/visual/hand.dae" scale="0.001 0.001 0.001" />
</geometry>
</visual>
<collision name="openarm_left_hand_collision">
<origin rpy="0.0 0.0 0.0" xyz="0.0 0.0 -0.6585" />
<geometry>
<mesh filename="./meshes/ee/openarm_hand/collision/hand.stl" scale="0.001 0.001 0.001" />
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="0.0 0.002 0.03" />
<mass value="0.35" />
<inertia ixx="0.0002473" ixy="1e-06" ixz="1e-06" iyy="1.763e-05" iyz="1e-06" izz="0.0002521" />
</inertial>
</link>
<joint name="left_openarm_hand_joint" type="fixed">
<parent link="openarm_left_link7" />
<child link="openarm_left_hand" />
<origin rpy="0 0 0" xyz="0 -0.0 0.1001" />
</joint>
<link name="openarm_left_hand_tcp">
<inertial>
<origin xyz="0 0 0" rpy="0 0 0" />
<mass value="0.001" />
<inertia ixx="0.000001" ixy="0.0" ixz="0.0" iyy="0.000001" iyz="0.0" izz="0.000001" />
</inertial>
</link>
<joint name="openarm_left_hand_tcp_joint" type="fixed">
<origin rpy="0 0 0" xyz="0 -0.0 0.08" />
<parent link="openarm_left_hand" />
<child link="openarm_left_hand_tcp" />
</joint>
<link name="openarm_left_left_finger">
<visual name="openarm_left_left_finger_visual">
<origin rpy="0.0 0.0 0.0" xyz="0.0 -0.05 -0.673001" />
<geometry>
<mesh filename="./meshes/ee/openarm_hand/visual/finger.stl" scale="0.001 0.001 0.001" />
</geometry>
</visual>
<collision name="openarm_left_left_finger_collision">
<origin rpy="0.0 0.0 0.0" xyz="0.0 -0.05 -0.673001" />
<geometry>
<mesh filename="./meshes/ee/openarm_hand/collision/finger.stl" scale="0.001 0.001 0.001" />
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="0.0064528 0.01702 0.0219685" />
<mass value="0.03602545343277134" />
<inertia ixx="2.3749999999999997e-06" ixy="1e-06" ixz="1e-06" iyy="2.3749999999999997e-06" iyz="1e-06" izz="7.5e-07" />
</inertial>
</link>
<link name="openarm_left_right_finger">
<visual name="openarm_left_right_finger_visual">
<origin rpy="0.0 0.0 0.0" xyz="0.0 0.05 -0.673001" />
<geometry>
<mesh filename="./meshes/ee/openarm_hand/visual/finger.stl" scale="0.001 -0.001 0.001" />
</geometry>
</visual>
<collision name="openarm_left_right_finger_collision">
<origin rpy="0.0 0.0 0.0" xyz="0.0 0.05 -0.673001" />
<geometry>
<mesh filename="./meshes/ee/openarm_hand/collision/finger.stl" scale="0.001 -0.001 0.001" />
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="0.0064528 -0.01702 0.0219685" />
<mass value="0.03602545343277134" />
<inertia ixx="2.3749999999999997e-06" ixy="1e-06" ixz="1e-06" iyy="2.3749999999999997e-06" iyz="1e-06" izz="7.5e-07" />
</inertial>
</link>
<joint name="openarm_left_finger_joint1" type="prismatic">
<parent link="openarm_left_hand" />
<child link="openarm_left_right_finger" />
<origin rpy="0 0 0" xyz="0 -0.006 0.015" />
<axis xyz="0 -1 0" />
<limit effort="333" lower="0.0" upper="0.044" velocity="10.0" />
</joint>
<joint name="openarm_left_finger_joint2" type="prismatic">
<parent link="openarm_left_hand" />
<child link="openarm_left_left_finger" />
<origin rpy="0 0 0" xyz="0 0.006 0.015" />
<axis xyz="0 1 0" />
<limit effort="333" lower="0.0" upper="0.044" velocity="10.0" />
<mimic joint="openarm_left_finger_joint1" />
</joint>
<link name="openarm_right_hand">
<visual name="openarm_right_hand_visual">
<origin rpy="0.0 0.0 0.0" xyz="0.0 0.0 -0.6585" />
<geometry>
<mesh filename="./meshes/ee/openarm_hand/visual/hand.dae" scale="0.001 0.001 0.001" />
</geometry>
</visual>
<collision name="openarm_right_hand_collision">
<origin rpy="0.0 0.0 0.0" xyz="0.0 0.0 -0.6585" />
<geometry>
<mesh filename="./meshes/ee/openarm_hand/collision/hand.stl" scale="0.001 0.001 0.001" />
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="0.0 0.002 0.03" />
<mass value="0.35" />
<inertia ixx="0.0002473" ixy="1e-06" ixz="1e-06" iyy="1.763e-05" iyz="1e-06" izz="0.0002521" />
</inertial>
</link>
<link name="openarm_right_ee_target">
<inertial>
<origin xyz="0 0 0" rpy="0 0 0" />
<mass value="0.001" />
<inertia ixx="0.000001" ixy="0.0" ixz="0.0" iyy="0.000001" iyz="0.0" izz="0.000001" />
</inertial>
</link>
<joint name="openarm_right_ee_target_joint" type="fixed">
<parent link="openarm_right_link7" />
<child link="openarm_right_ee_target" />
<origin rpy="0 0 0" xyz="0 0.0 0.07" />
</joint>
<joint name="right_openarm_hand_joint" type="fixed">
<parent link="openarm_right_link7" />
<child link="openarm_right_hand" />
<origin rpy="0 0 0" xyz="0 -0.0 0.1001" />
</joint>
<link name="openarm_right_hand_tcp">
<inertial>
<origin xyz="0 0 0" rpy="0 0 0" />
<mass value="0.001" />
<inertia ixx="0.000001" ixy="0.0" ixz="0.0" iyy="0.000001" iyz="0.0" izz="0.000001" />
</inertial>
</link>
<joint name="openarm_right_hand_tcp_joint" type="fixed">
<origin rpy="0 0 0" xyz="0 -0.0 0.08" />
<parent link="openarm_right_hand" />
<child link="openarm_right_hand_tcp" />
</joint>
<link name="openarm_right_left_finger">
<visual name="openarm_right_left_finger_visual">
<origin rpy="0.0 0.0 0.0" xyz="0.0 -0.05 -0.673001" />
<geometry>
<mesh filename="./meshes/ee/openarm_hand/visual/finger.stl" scale="0.001 0.001 0.001" />
</geometry>
</visual>
<collision name="openarm_right_left_finger_collision">
<origin rpy="0.0 0.0 0.0" xyz="0.0 -0.05 -0.673001" />
<geometry>
<mesh filename="./meshes/ee/openarm_hand/collision/finger.stl" scale="0.001 0.001 0.001" />
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="0.0064528 0.01702 0.0219685" />
<mass value="0.03602545343277134" />
<inertia ixx="2.3749999999999997e-06" ixy="1e-06" ixz="1e-06" iyy="2.3749999999999997e-06" iyz="1e-06" izz="7.5e-07" />
</inertial>
</link>
<link name="openarm_right_right_finger">
<visual name="openarm_right_right_finger_visual">
<origin rpy="0.0 0.0 0.0" xyz="0.0 0.05 -0.673001" />
<geometry>
<mesh filename="./meshes/ee/openarm_hand/visual/finger.stl" scale="0.001 -0.001 0.001" />
</geometry>
</visual>
<collision name="openarm_right_right_finger_collision">
<origin rpy="0.0 0.0 0.0" xyz="0.0 0.05 -0.673001" />
<geometry>
<mesh filename="./meshes/ee/openarm_hand/collision/finger.stl" scale="0.001 -0.001 0.001" />
</geometry>
</collision>
<inertial>
<origin rpy="0 0 0" xyz="0.0064528 -0.01702 0.0219685" />
<mass value="0.03602545343277134" />
<inertia ixx="2.3749999999999997e-06" ixy="1e-06" ixz="1e-06" iyy="2.3749999999999997e-06" iyz="1e-06" izz="7.5e-07" />
</inertial>
</link>
<joint name="openarm_right_finger_joint1" type="prismatic">
<parent link="openarm_right_hand" />
<child link="openarm_right_right_finger" />
<origin rpy="0 0 0" xyz="0 -0.006 0.015" />
<axis xyz="0 -1 0" />
<limit effort="333" lower="0.0" upper="0.044" velocity="10.0" />
</joint>
<joint name="openarm_right_finger_joint2" type="prismatic">
<parent link="openarm_right_hand" />
<child link="openarm_right_left_finger" />
<origin rpy="0 0 0" xyz="0 0.006 0.015" />
<axis xyz="0 1 0" />
<limit effort="333" lower="0.0" upper="0.044" velocity="10.0" />
<mimic joint="openarm_right_finger_joint1" />
</joint>
</robot>
@@ -0,0 +1,408 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<title>Dataset Replay — EE Frame Viewer</title>
<style>
* { margin: 0; padding: 0; box-sizing: border-box; }
body { background: #0d1117; overflow: hidden; font-family: 'JetBrains Mono', monospace; color: #c9d1d9; }
canvas { display: block; }

#panel {
position: absolute; top: 14px; left: 14px;
background: rgba(13,17,23,0.92); border: 1px solid #30363d;
border-radius: 10px; padding: 16px 20px; z-index: 10;
width: 340px; backdrop-filter: blur(8px);
}
#panel h2 { font-size: 14px; color: #58a6ff; margin-bottom: 10px; letter-spacing: 0.5px; }

.row { display: flex; align-items: center; gap: 8px; margin: 6px 0; font-size: 12px; }
.row label { width: 70px; color: #8b949e; flex-shrink: 0; }
.row .val { color: #f0f6fc; font-variant-numeric: tabular-nums; }

#transport {
margin-top: 12px; display: flex; align-items: center; gap: 8px;
}
#transport button {
background: #21262d; color: #c9d1d9; border: 1px solid #30363d;
padding: 6px 14px; border-radius: 6px; cursor: pointer;
font-family: inherit; font-size: 12px; transition: background 0.15s;
}
#transport button:hover { background: #30363d; }
#transport button.active { background: #1f6feb; border-color: #1f6feb; color: #fff; }

#scrubber {
width: 100%; margin-top: 8px;
-webkit-appearance: none; appearance: none;
height: 6px; border-radius: 3px; background: #21262d; outline: none;
}
#scrubber::-webkit-slider-thumb {
-webkit-appearance: none; width: 14px; height: 14px;
border-radius: 50%; background: #58a6ff; cursor: pointer;
}

#speed-ctrl { margin-top: 6px; }
#speed-ctrl select {
background: #21262d; color: #c9d1d9; border: 1px solid #30363d;
padding: 4px 8px; border-radius: 4px; font-family: inherit; font-size: 11px;
}

#frame-counter {
font-size: 11px; color: #8b949e; margin-top: 6px;
font-variant-numeric: tabular-nums;
}

.legend { display: flex; align-items: center; gap: 6px; margin: 3px 0; font-size: 11px; }
.dot { width: 10px; height: 10px; border-radius: 50%; display: inline-block; }
</style>
<link href="https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@400;600&display=swap" rel="stylesheet">
</head>
<body>

<div id="panel">
<h2>DATASET REPLAY — EE FRAME</h2>
<div style="font-size:11px;color:#8b949e;margin-bottom:8px;">glannuzel/grabette-dataset · episode 0</div>

<div class="legend"><span class="dot" style="background:#ff6b6b"></span> EE target (dataset)</div>
<div class="legend"><span class="dot" style="background:#ffd43b"></span> Trajectory (past)</div>
<div class="legend"><span class="dot" style="background:#30363d"></span> Trajectory (future)</div>

<div class="row"><label>x</label><span class="val" id="v-x">—</span></div>
<div class="row"><label>y</label><span class="val" id="v-y">—</span></div>
<div class="row"><label>z</label><span class="val" id="v-z">—</span></div>
<div class="row"><label>ax</label><span class="val" id="v-ax">—</span></div>
<div class="row"><label>ay</label><span class="val" id="v-ay">—</span></div>
<div class="row"><label>az</label><span class="val" id="v-az">—</span></div>
<div class="row"><label>gripper</label><span class="val" id="v-grip">—</span></div>

<div id="transport">
<button id="btn-play" onclick="togglePlay()">▶ Play</button>
<button onclick="stepFrame(-1)">◀</button>
<button onclick="stepFrame(1)">▶</button>
<button onclick="resetPlay()">⟳</button>
</div>
<input type="range" id="scrubber" min="0" max="1" value="0" step="1" />
<div id="speed-ctrl">
<label style="font-size:11px;color:#8b949e;">Speed:</label>
<select id="speed-select" onchange="setSpeed(this.value)">
<option value="0.25">0.25×</option>
<option value="0.5">0.5×</option>
<option value="1" selected>1×</option>
<option value="2">2×</option>
<option value="4">4×</option>
</select>
</div>
<div id="frame-counter">Frame 0 / 0 · 0.00s</div>
</div>

<script type="importmap">
{
"imports": {
"three": "https://cdn.jsdelivr.net/npm/three@0.169.0/build/three.module.js",
"three/examples/jsm/": "https://cdn.jsdelivr.net/npm/three@0.169.0/examples/jsm/"
}
}
</script>

<script type="module">
import * as THREE from 'three';
import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls.js';
import { STLLoader } from 'three/examples/jsm/loaders/STLLoader.js';

let trajectory = null;
let currentFrame = 0;
let playing = false;
let speed = 1.0;
let lastTime = 0;
let accumulator = 0;

// Anchor: EE tip world position at zero-joint pose (in Y-up Three.js space)
const eeAnchor = new THREE.Vector3();
// Z-up → Y-up rotation (same as robotGroup): -90° around X
const zUpToYUp = new THREE.Quaternion().setFromAxisAngle(new THREE.Vector3(1, 0, 0), -Math.PI / 2);

const scene = new THREE.Scene();
scene.background = new THREE.Color(0x0d1117);

const camera = new THREE.PerspectiveCamera(50, window.innerWidth / window.innerHeight, 0.01, 100);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.setPixelRatio(window.devicePixelRatio);
renderer.shadowMap.enabled = true;
document.body.appendChild(renderer.domElement);

const controls = new OrbitControls(camera, renderer.domElement);
controls.enableDamping = true;
controls.dampingFactor = 0.08;

scene.add(new THREE.AmbientLight(0xffffff, 0.8));
const dirLight = new THREE.DirectionalLight(0xffffff, 1.4);
dirLight.position.set(2, 4, 3);
scene.add(dirLight);
scene.add(new THREE.DirectionalLight(0x8899cc, 0.6).translateX(-2).translateY(1).translateZ(-3));
scene.add(new THREE.DirectionalLight(0xffffff, 0.5).translateY(-1).translateZ(2));

const grid = new THREE.GridHelper(2, 20, 0x21262d, 0x161b22);
scene.add(grid);
scene.add(new THREE.AxesHelper(0.15));

// EE marker
const eeMarker = new THREE.Mesh(
new THREE.SphereGeometry(0.012, 20, 20),
new THREE.MeshStandardMaterial({ color: 0xff6b6b, emissive: 0xff6b6b, emissiveIntensity: 0.7 })
);
scene.add(eeMarker);
eeMarker.add(new THREE.AxesHelper(0.06));

// Trajectory lines
const MAX_POINTS = 2000;
const pastGeo = new THREE.BufferGeometry();
pastGeo.setAttribute('position', new THREE.Float32BufferAttribute(new Float32Array(MAX_POINTS * 3), 3));
const pastLine = new THREE.Line(pastGeo, new THREE.LineBasicMaterial({ color: 0xffd43b, linewidth: 2 }));
scene.add(pastLine);

const futureGeo = new THREE.BufferGeometry();
futureGeo.setAttribute('position', new THREE.Float32BufferAttribute(new Float32Array(MAX_POINTS * 3), 3));
const futureLine = new THREE.Line(futureGeo, new THREE.LineBasicMaterial({ color: 0x30363d, linewidth: 1 }));
scene.add(futureLine);

// URDF
const stlLoader = new STLLoader();
const robotGroup = new THREE.Group();
// URDF is Z-up; Three.js is Y-up → rotate -90° around X
robotGroup.rotation.x = -Math.PI / 2;
scene.add(robotGroup);
let urdfLinks = {};

function rotvecToQuat(ax, ay, az) {
const angle = Math.sqrt(ax * ax + ay * ay + az * az);
if (angle < 1e-8) return new THREE.Quaternion();
return new THREE.Quaternion().setFromAxisAngle(
new THREE.Vector3(ax / angle, ay / angle, az / angle), angle
);
}

async function loadURDF() {
const resp = await fetch('./openarm_bimanual_pybullet.urdf');
const text = await resp.text();
const xml = new DOMParser().parseFromString(text, 'text/xml');

const links = {};

for (const linkEl of xml.querySelectorAll('link')) {
const name = linkEl.getAttribute('name');
const group = new THREE.Group();
group.name = name;

const visual = linkEl.querySelector('visual');
if (visual) {
const meshEl = visual.querySelector('mesh');
const originEl = visual.querySelector('origin');
if (meshEl) {
const filename = meshEl.getAttribute('filename');
const scaleStr = meshEl.getAttribute('scale');
const sc = scaleStr ? scaleStr.split(' ').map(Number) : [1, 1, 1];
let xyz = [0, 0, 0];
if (originEl && originEl.getAttribute('xyz'))
xyz = originEl.getAttribute('xyz').split(' ').map(Number);
if (filename.endsWith('.stl')) {
try {
const geo = await new Promise((res, rej) =>
stlLoader.load(filename, res, undefined, rej));
const mesh = new THREE.Mesh(geo, new THREE.MeshStandardMaterial({
color: 0x8899bb, metalness: 0.3, roughness: 0.5,
}));
mesh.scale.set(sc[0], sc[1], sc[2]);
mesh.position.set(xyz[0], xyz[1], xyz[2]);
group.add(mesh);
} catch (e) { /* skip missing mesh */ }
}
}
}
links[name] = group;
}

const rootLinks = new Set(Object.keys(links));

for (const jointEl of xml.querySelectorAll('joint')) {
const parentName = jointEl.querySelector('parent').getAttribute('link');
const childName = jointEl.querySelector('child').getAttribute('link');
rootLinks.delete(childName);

const originEl = jointEl.querySelector('origin');
let xyz = [0, 0, 0], rpy = [0, 0, 0];
if (originEl) {
if (originEl.getAttribute('xyz')) xyz = originEl.getAttribute('xyz').split(' ').map(Number);
if (originEl.getAttribute('rpy')) rpy = originEl.getAttribute('rpy').split(' ').map(Number);
}

const parent = links[parentName];
const child = links[childName];
if (!parent || !child) continue;

child.position.set(xyz[0], xyz[1], xyz[2]);
if (rpy[0] || rpy[1] || rpy[2])
child.rotation.set(rpy[0], rpy[1], rpy[2], 'XYZ');
parent.add(child);
}

for (const n of rootLinks)
if (links[n]) robotGroup.add(links[n]);

// EE target marker on the URDF
const eeTargetLink = links['openarm_right_ee_target'];
if (eeTargetLink) {
eeTargetLink.add(new THREE.Mesh(
new THREE.TorusGeometry(0.02, 0.002, 8, 32),
new THREE.MeshStandardMaterial({ color: 0xffaa00, emissive: 0xffaa00, emissiveIntensity: 0.5 })
));
eeTargetLink.add(new THREE.AxesHelper(0.05));
}

urdfLinks = links;
}

async function loadTrajectory() {
const resp = await fetch('./trajectory_ep0.json');
trajectory = await resp.json();
document.getElementById('scrubber').max = trajectory.num_frames - 1;
document.getElementById('scrubber').value = 0;
}

function computeOffset() {
if (!trajectory || !urdfLinks['openarm_right_ee_target']) return;

robotGroup.updateMatrixWorld(true);
const eeLink = urdfLinks['openarm_right_ee_target'];
eeLink.getWorldPosition(eeAnchor);

controls.target.copy(eeAnchor);
camera.position.set(eeAnchor.x + 0.8, eeAnchor.y + 0.3, eeAnchor.z + 0.0);
controls.update();

updateFrame(0);
}

function mapFramePos(f) {
const f0 = trajectory.frames[0];
const delta = new THREE.Vector3(f.x - f0.x, f.y - f0.y, f.z - f0.z);
delta.applyQuaternion(zUpToYUp);
return delta.add(eeAnchor);
}

function updateFrame(idx) {
if (!trajectory) return;
currentFrame = Math.max(0, Math.min(idx, trajectory.num_frames - 1));

const f = trajectory.frames[currentFrame];
const pos = mapFramePos(f);

eeMarker.position.copy(pos);
// Orientation: rotate the dataset axis-angle into Y-up space
const q = rotvecToQuat(f.ax, f.ay, f.az);
eeMarker.quaternion.copy(zUpToYUp).multiply(q);

// Past trajectory
const pastArr = pastGeo.attributes.position.array;
let pi = 0;
for (let i = 0; i <= currentFrame && i < MAX_POINTS; i++) {
const p = mapFramePos(trajectory.frames[i]);
pastArr[pi++] = p.x; pastArr[pi++] = p.y; pastArr[pi++] = p.z;
}
pastGeo.setDrawRange(0, Math.min(currentFrame + 1, MAX_POINTS));
pastGeo.attributes.position.needsUpdate = true;

// Future trajectory
const futArr = futureGeo.attributes.position.array;
let fi = 0;
for (let i = currentFrame; i < trajectory.num_frames && (i - currentFrame) < MAX_POINTS; i++) {
const p = mapFramePos(trajectory.frames[i]);
futArr[fi++] = p.x; futArr[fi++] = p.y; futArr[fi++] = p.z;
}
futureGeo.setDrawRange(0, Math.min(trajectory.num_frames - currentFrame, MAX_POINTS));
futureGeo.attributes.position.needsUpdate = true;

// UI
document.getElementById('v-x').textContent = pos.x.toFixed(4);
document.getElementById('v-y').textContent = pos.y.toFixed(4);
document.getElementById('v-z').textContent = pos.z.toFixed(4);
document.getElementById('v-ax').textContent = f.ax.toFixed(4);
document.getElementById('v-ay').textContent = f.ay.toFixed(4);
document.getElementById('v-az').textContent = f.az.toFixed(4);
document.getElementById('v-grip').textContent =
`p=${f.proximal.toFixed(2)} d=${f.distal.toFixed(2)}`;

document.getElementById('scrubber').value = currentFrame;
const timeS = (currentFrame / trajectory.fps).toFixed(2);
document.getElementById('frame-counter').textContent =
`Frame ${currentFrame} / ${trajectory.num_frames - 1} · ${timeS}s`;
}

// Playback controls
window.togglePlay = function() {
playing = !playing;
const btn = document.getElementById('btn-play');
btn.textContent = playing ? '⏸ Pause' : '▶ Play';
btn.classList.toggle('active', playing);
if (playing) { lastTime = performance.now(); accumulator = 0; }
};

window.stepFrame = function(delta) {
playing = false;
document.getElementById('btn-play').textContent = '▶ Play';
document.getElementById('btn-play').classList.remove('active');
updateFrame(currentFrame + delta);
};

window.resetPlay = function() {
playing = false;
document.getElementById('btn-play').textContent = '▶ Play';
document.getElementById('btn-play').classList.remove('active');
updateFrame(0);
};

window.setSpeed = function(v) { speed = parseFloat(v); };

document.getElementById('scrubber').addEventListener('input', (e) => {
updateFrame(parseInt(e.target.value));
});

window.addEventListener('resize', () => {
camera.aspect = window.innerWidth / window.innerHeight;
camera.updateProjectionMatrix();
renderer.setSize(window.innerWidth, window.innerHeight);
});

function animate(now) {
requestAnimationFrame(animate);
controls.update();

if (playing && trajectory) {
const dt = (now - lastTime) / 1000;
lastTime = now;
accumulator += dt * speed;
const frameDuration = 1.0 / trajectory.fps;
while (accumulator >= frameDuration) {
accumulator -= frameDuration;
if (currentFrame < trajectory.num_frames - 1) {
updateFrame(currentFrame + 1);
} else {
playing = false;
document.getElementById('btn-play').textContent = '▶ Play';
document.getElementById('btn-play').classList.remove('active');
break;
}
}
}

renderer.render(scene, camera);
}
requestAnimationFrame(animate);

Promise.all([loadURDF(), loadTrajectory()])
.then(() => computeOffset())
.catch(err => console.error(err));
</script>
</body>
</html>
@@ -0,0 +1,311 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<title>OpenArm URDF Viewer</title>
<style>
  * { margin: 0; padding: 0; box-sizing: border-box; }
  body { background: #1a1a2e; overflow: hidden; font-family: 'IBM Plex Mono', monospace; }
  canvas { display: block; }
  #info {
    position: absolute; top: 16px; left: 16px;
    color: #e0e0e0; font-size: 13px; line-height: 1.6;
    background: rgba(0,0,0,0.7); padding: 14px 18px; border-radius: 8px;
    border: 1px solid #333; max-width: 340px; z-index: 10;
  }
  #info h2 { font-size: 15px; color: #fff; margin-bottom: 8px; }
  .legend { display: flex; align-items: center; gap: 8px; margin: 4px 0; }
  .dot { width: 12px; height: 12px; border-radius: 50%; display: inline-block; flex-shrink: 0; }
  .dot-red { background: #ff4444; }
  .dot-green { background: #44ff44; }
  .dot-blue { background: #4488ff; }
  #frame-select { margin-top: 10px; }
  #frame-select button {
    background: #333; color: #e0e0e0; border: 1px solid #555;
    padding: 6px 10px; margin: 2px; border-radius: 4px; cursor: pointer;
    font-family: inherit; font-size: 12px;
  }
  #frame-select button:hover { background: #555; }
  #frame-select button.active { background: #4488ff; color: #fff; border-color: #4488ff; }
  #status { margin-top: 8px; font-size: 11px; color: #888; }
</style>
<link href="https://fonts.googleapis.com/css2?family=IBM+Plex+Mono:wght@400;600&display=swap" rel="stylesheet">
</head>
<body>

<div id="info">
  <h2>OpenArm Right Arm — EE Frame Options</h2>
  <div class="legend"><span class="dot dot-red"></span> openarm_right_link7 (wrist output)</div>
  <div class="legend"><span class="dot" style="background:#ffaa00"></span> openarm_right_ee_target (+5cm)</div>
  <div class="legend"><span class="dot dot-green"></span> openarm_right_hand (+10cm)</div>
  <div class="legend"><span class="dot dot-blue"></span> openarm_right_hand_tcp (+18cm)</div>
  <div id="frame-select">
    <button onclick="focusFrame('link7')" class="active">link7</button>
    <button onclick="focusFrame('ee_target')">ee_target</button>
    <button onclick="focusFrame('hand')">hand</button>
    <button onclick="focusFrame('tcp')">hand_tcp</button>
  </div>
  <div id="status">Loading URDF...</div>
  <p style="margin-top:8px;font-size:11px;color:#888;">Drag to orbit · Scroll to zoom · Right-drag to pan</p>
</div>

<script type="importmap">
{
  "imports": {
    "three": "https://cdn.jsdelivr.net/npm/three@0.169.0/build/three.module.js",
    "three/examples/jsm/": "https://cdn.jsdelivr.net/npm/three@0.169.0/examples/jsm/"
  }
}
</script>

<script type="module">
import * as THREE from 'three';
import { OrbitControls } from 'three/examples/jsm/controls/OrbitControls.js';
import { STLLoader } from 'three/examples/jsm/loaders/STLLoader.js';

const statusEl = document.getElementById('status');

const scene = new THREE.Scene();
scene.background = new THREE.Color(0x1a1a2e);

const camera = new THREE.PerspectiveCamera(50, window.innerWidth / window.innerHeight, 0.01, 100);
camera.position.set(0.8, 1.0, 1.8);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.setPixelRatio(window.devicePixelRatio);
renderer.shadowMap.enabled = true;
document.body.appendChild(renderer.domElement);

const controls = new OrbitControls(camera, renderer.domElement);
controls.target.set(0, 0, 0.9);
controls.enableDamping = true;
controls.dampingFactor = 0.08;
controls.update();

// Lighting
scene.add(new THREE.AmbientLight(0xffffff, 0.6));
const dirLight = new THREE.DirectionalLight(0xffffff, 1.2);
dirLight.position.set(3, 5, 4);
scene.add(dirLight);
scene.add(new THREE.DirectionalLight(0x8888ff, 0.4).translateX(-2).translateY(1).translateZ(-3));

// Ground grid
scene.add(new THREE.GridHelper(4, 40, 0x333355, 0x222244));
scene.add(new THREE.AxesHelper(0.3));

// Parse URDF manually — build the kinematic tree and load STL meshes
const stlLoader = new STLLoader();
const robotGroup = new THREE.Group();
scene.add(robotGroup);

async function loadURDF() {
  const resp = await fetch('./openarm_bimanual_pybullet.urdf');
  const text = await resp.text();
  const parser = new DOMParser();
  const xml = parser.parseFromString(text, 'text/xml');

  // Parse links and joints
  const links = {};
  const joints = [];

  for (const linkEl of xml.querySelectorAll('link')) {
    const name = linkEl.getAttribute('name');
    const group = new THREE.Group();
    group.name = name;

    // Try to load visual mesh
    const visual = linkEl.querySelector('visual');
    if (visual) {
      const meshEl = visual.querySelector('mesh');
      const originEl = visual.querySelector('origin');
      if (meshEl) {
        const filename = meshEl.getAttribute('filename');
        const scaleStr = meshEl.getAttribute('scale');
        const scale = scaleStr ? scaleStr.split(' ').map(Number) : [1, 1, 1];

        let xyz = [0, 0, 0];
        if (originEl && originEl.getAttribute('xyz')) {
          xyz = originEl.getAttribute('xyz').split(' ').map(Number);
        }

        if (filename.endsWith('.stl')) {
          try {
            const geo = await new Promise((resolve, reject) => {
              stlLoader.load(filename, resolve, undefined, reject);
            });
            const mat = new THREE.MeshStandardMaterial({
              color: 0x6688aa,
              metalness: 0.3,
              roughness: 0.6,
              transparent: true,
              opacity: 0.7,
            });
            const mesh = new THREE.Mesh(geo, mat);
            mesh.scale.set(scale[0], scale[1], scale[2]);
            mesh.position.set(xyz[0], xyz[1], xyz[2]);
            group.add(mesh);
          } catch (e) {
            // Mesh file not found, skip
          }
        }
      }
    }

    links[name] = group;
  }

  // Parse joints and build hierarchy
  for (const jointEl of xml.querySelectorAll('joint')) {
    const name = jointEl.getAttribute('name');
    const type = jointEl.getAttribute('type');
    const parentName = jointEl.querySelector('parent').getAttribute('link');
    const childName = jointEl.querySelector('child').getAttribute('link');
    const originEl = jointEl.querySelector('origin');

    let xyz = [0, 0, 0];
    let rpy = [0, 0, 0];
    if (originEl) {
      if (originEl.getAttribute('xyz')) xyz = originEl.getAttribute('xyz').split(' ').map(Number);
      if (originEl.getAttribute('rpy')) rpy = originEl.getAttribute('rpy').split(' ').map(Number);
    }

    joints.push({ name, type, parentName, childName, xyz, rpy });
  }

  // Build tree
  const rootLinks = new Set(Object.keys(links));
  for (const j of joints) {
    rootLinks.delete(j.childName);
  }

  for (const j of joints) {
    const parent = links[j.parentName];
    const child = links[j.childName];
    if (parent && child) {
      child.position.set(j.xyz[0], j.xyz[1], j.xyz[2]);
      if (j.rpy[0] || j.rpy[1] || j.rpy[2]) {
        child.rotation.set(j.rpy[0], j.rpy[1], j.rpy[2], 'XYZ');
      }
      parent.add(child);
    }
  }

  // Add root links to scene
  for (const name of rootLinks) {
    if (links[name]) robotGroup.add(links[name]);
  }

  // Place markers at the four EE frame candidates
  const targets = {
    link7: 'openarm_right_link7',
    ee_target: 'openarm_right_ee_target',
    hand: 'openarm_right_hand',
    tcp: 'openarm_right_hand_tcp',
  };
  const colors = { link7: 0xff4444, ee_target: 0xffaa00, hand: 0x44ff44, tcp: 0x4488ff };
  const labels = { link7: 'link7', ee_target: 'ee_target', hand: 'hand', tcp: 'hand_tcp' };

  for (const [key, linkName] of Object.entries(targets)) {
    const link = links[linkName];
    if (!link) continue;

    // Marker sphere
    const sphere = new THREE.Mesh(
      new THREE.SphereGeometry(0.018, 16, 16),
      new THREE.MeshStandardMaterial({ color: colors[key], emissive: colors[key], emissiveIntensity: 0.6 })
    );
    link.add(sphere);

    // Ring around sphere for visibility
    const ring = new THREE.Mesh(
      new THREE.TorusGeometry(0.03, 0.003, 8, 32),
      new THREE.MeshStandardMaterial({ color: colors[key], emissive: colors[key], emissiveIntensity: 0.4 })
    );
    link.add(ring);

    // Axes helper
    link.add(new THREE.AxesHelper(0.08));

    // Sprite label
    const canvas = document.createElement('canvas');
    canvas.width = 512; canvas.height = 80;
    const ctx = canvas.getContext('2d');
    ctx.font = 'bold 36px IBM Plex Mono, monospace';
    ctx.fillStyle = '#' + colors[key].toString(16).padStart(6, '0');
    ctx.fillText(labels[key], 4, 50);
    const tex = new THREE.CanvasTexture(canvas);
    const sprite = new THREE.Sprite(new THREE.SpriteMaterial({ map: tex, depthTest: false }));
    sprite.scale.set(0.3, 0.05, 1);
    sprite.position.set(0.06, 0.0, 0.03);
    link.add(sprite);
  }

  // Dashed lines between markers (in world space)
  robotGroup.updateMatrixWorld(true);
  const positions = {};
  for (const [key, linkName] of Object.entries(targets)) {
    const link = links[linkName];
    if (link) {
      const wp = new THREE.Vector3();
      link.getWorldPosition(wp);
      positions[key] = wp;
    }
  }

  function addDashedLine(from, to) {
    const geo = new THREE.BufferGeometry().setFromPoints([from, to]);
    const mat = new THREE.LineDashedMaterial({ color: 0xaaaaaa, dashSize: 0.012, gapSize: 0.008 });
    const line = new THREE.Line(geo, mat);
    line.computeLineDistances();
    scene.add(line);
  }
  if (positions.link7 && positions.hand) addDashedLine(positions.link7, positions.hand);
  if (positions.hand && positions.tcp) addDashedLine(positions.hand, positions.tcp);

  // Store for focus buttons
  window._framePositions = positions;
  window._links = links;
  window._targets = targets;

  // Focus on the hand area
  if (positions.hand) {
    controls.target.copy(positions.hand);
    camera.position.set(positions.hand.x + 0.5, positions.hand.y + 0.4, positions.hand.z + 0.5);
    controls.update();
  }

  const meshCount = robotGroup.children.length;
  statusEl.textContent = `Loaded. Right arm chain visible with ${Object.keys(links).length} links.`;
}

window.focusFrame = function(key) {
  const pos = window._framePositions?.[key];
  if (!pos) return;
  controls.target.copy(pos);
  camera.position.set(pos.x + 0.35, pos.y + 0.25, pos.z + 0.35);
  controls.update();
  document.querySelectorAll('#frame-select button').forEach(b => b.classList.remove('active'));
  event.target.classList.add('active');
};

window.addEventListener('resize', () => {
  camera.aspect = window.innerWidth / window.innerHeight;
  camera.updateProjectionMatrix();
  renderer.setSize(window.innerWidth, window.innerHeight);
});

function animate() {
  requestAnimationFrame(animate);
  controls.update();
  renderer.render(scene, camera);
}
animate();

loadURDF().catch(err => {
  statusEl.textContent = `Error: ${err.message}`;
  console.error(err);
});
</script>
</body>
</html>
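As an aside, the fixed-frame chain the viewer draws (link7 → ee_target → hand → hand_tcp) can be checked numerically by composing URDF `<origin xyz rpy>` entries as homogeneous transforms, the same composition the viewer performs with `position.set` and `rotation.set`. A minimal sketch, assuming the offsets are pure z-translations matching the legend; the 5 cm / 5 cm / 8 cm values here are illustrative, not read from the actual URDF:

```python
import numpy as np

def rpy_to_matrix(r: float, p: float, y: float) -> np.ndarray:
    """URDF rpy uses fixed-axis XYZ rotations: R = Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    cr, sr = np.cos(r), np.sin(r)
    cp, sp = np.cos(p), np.sin(p)
    cy, sy = np.cos(y), np.sin(y)
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return rz @ ry @ rx

def joint_transform(xyz, rpy) -> np.ndarray:
    """4x4 homogeneous transform for one URDF <origin xyz rpy>."""
    t = np.eye(4)
    t[:3, :3] = rpy_to_matrix(*rpy)
    t[:3, 3] = xyz
    return t

# Illustrative offsets only: pure z-translations adding up to the legend's 18 cm.
chain = [
    joint_transform([0, 0, 0.05], [0, 0, 0]),  # link7 -> ee_target (+5 cm)
    joint_transform([0, 0, 0.05], [0, 0, 0]),  # ee_target -> hand  (+10 cm total)
    joint_transform([0, 0, 0.08], [0, 0, 0]),  # hand -> hand_tcp   (+18 cm total)
]
T = np.eye(4)
for tj in chain:
    T = T @ tj
# T[:3, 3] is now [0, 0, 0.18]: hand_tcp sits 18 cm from link7 along z.
```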
@@ -255,19 +255,16 @@ class InverseKinematicsEEToJoints(RobotActionProcessorStep):
    """
    Computes desired joint positions from a target end-effector pose using inverse kinematics (IK).

    This step translates a Cartesian command (position and orientation of the end-effector) into
    the corresponding joint-space commands for each motor.

    Attributes:
        kinematics: The robot's kinematic model for inverse kinematics.
        motor_names: A list of motor names for which to compute joint positions.
        q_curr: Internal state storing the last joint positions, used as an initial guess for the IK solver.
        motor_names: Arm joint names for IK computation.
        gripper_names: Gripper joint name(s). ee.gripper_pos is written to all of them.
        initial_guess_current_joints: If True, use the robot's current joint state as the IK guess.
            If False, use the solution from the previous step.
    """

    kinematics: RobotKinematics
    motor_names: list[str]
    gripper_names: list[str] = field(default_factory=lambda: ["gripper"])
    q_curr: np.ndarray | None = field(default=None, init=False, repr=False)
    initial_guess_current_joints: bool = True

@@ -278,63 +275,73 @@ class InverseKinematicsEEToJoints(RobotActionProcessorStep):
        wx = action.pop("ee.wx")
        wy = action.pop("ee.wy")
        wz = action.pop("ee.wz")
        gripper_pos = action.pop("ee.gripper_pos")

        if None in (x, y, z, wx, wy, wz, gripper_pos):
            raise ValueError(
                "Missing required end-effector pose components: ee.x, ee.y, ee.z, ee.wx, ee.wy, ee.wz, ee.gripper_pos must all be present in action"
            )
        ee_keys = [x, y, z, wx, wy, wz]

        if self.gripper_names:
            gripper_pos = action.pop("ee.gripper_pos")
            ee_keys.append(gripper_pos)
        if None in ee_keys:
            raise ValueError("Missing required end-effector pose components in action")

        observation = self.transition.get(TransitionKey.OBSERVATION).copy()
        if observation is None:
            raise ValueError("Joints observation is require for computing robot kinematics")
            raise ValueError("Joints observation is required for computing robot kinematics")

        q_raw = np.array(
            [float(v) for k, v in observation.items() if isinstance(k, str) and k.endswith(".pos")],
            [
                float(v)
                for k, v in observation.items()
                if isinstance(k, str) and k.endswith(".pos") and k.removesuffix(".pos") in self.motor_names
            ],
            dtype=float,
        )
        if q_raw is None:
            raise ValueError("Joints observation is require for computing robot kinematics")

        if self.initial_guess_current_joints:  # Use current joints as initial guess
        if self.initial_guess_current_joints:
            self.q_curr = q_raw
        else:  # Use previous ik solution as initial guess
        else:
            if self.q_curr is None:
                self.q_curr = q_raw

        # Build desired 4x4 transform from pos + rotvec (twist)
        t_des = np.eye(4, dtype=float)
        t_des[:3, :3] = Rotation.from_rotvec([wx, wy, wz]).as_matrix()
        t_des[:3, 3] = [x, y, z]

        # Compute inverse kinematics
        q_target = self.kinematics.inverse_kinematics(self.q_curr, t_des)
        self.q_curr = q_target

        # TODO: This is sensitive to order of motor_names = q_target mapping
        for i, name in enumerate(self.motor_names):
            if name != "gripper":
                action[f"{name}.pos"] = float(q_target[i])
            else:
                action["gripper.pos"] = float(gripper_pos)
            action[f"{name}.pos"] = float(q_target[i])

        if self.gripper_names:
            for gname in self.gripper_names:
                action[f"{gname}.pos"] = float(gripper_pos)

        # When gripper_names is empty, gripper keys (e.g. proximal.pos, distal.pos)
        # are already in the action dict as absolute positions — left untouched.
        return action

    def transform_features(
        self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
    ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
        for feat in ["x", "y", "z", "wx", "wy", "wz", "gripper_pos"]:
        ee_feats = ["x", "y", "z", "wx", "wy", "wz"]
        if self.gripper_names:
            ee_feats.append("gripper_pos")
        for feat in ee_feats:
            features[PipelineFeatureType.ACTION].pop(f"ee.{feat}", None)

        for name in self.motor_names:
            features[PipelineFeatureType.ACTION][f"{name}.pos"] = PolicyFeature(
                type=FeatureType.ACTION, shape=(1,)
            )
        for name in self.gripper_names:
            features[PipelineFeatureType.ACTION][f"{name}.pos"] = PolicyFeature(
                type=FeatureType.ACTION, shape=(1,)
            )

        return features

    def reset(self):
        """Resets the initial guess for the IK solver."""
        self.q_curr = None

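The IK step above builds its target pose with `Rotation.from_rotvec(...).as_matrix()`. The same pos + rotation-vector to 4x4 transform conversion can be sketched with plain NumPy via Rodrigues' formula; this is a stand-in for the scipy call (helper names here are mine, not the repo's), useful for sanity-checking `t_des`:

```python
import numpy as np

def rotvec_to_matrix(rv: np.ndarray) -> np.ndarray:
    """Rodrigues' formula: rotation matrix from an axis-angle (rotation) vector."""
    theta = np.linalg.norm(rv)
    if theta < 1e-12:
        return np.eye(3)
    k = rv / theta
    kx = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * kx + (1.0 - np.cos(theta)) * (kx @ kx)

def pose_to_transform(x, y, z, wx, wy, wz) -> np.ndarray:
    """Same shape as t_des in the diff: rotation from (wx, wy, wz), translation (x, y, z)."""
    t = np.eye(4)
    t[:3, :3] = rotvec_to_matrix(np.array([wx, wy, wz], dtype=float))
    t[:3, 3] = [x, y, z]
    return t

# A 90-degree rotation about z, 30 cm forward and 20 cm up.
t = pose_to_transform(0.3, 0.0, 0.2, 0.0, 0.0, np.pi / 2)
```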
@@ -402,24 +409,39 @@ class GripperVelocityToJoint(RobotActionProcessorStep):


def compute_forward_kinematics_joints_to_ee(
    joints: dict[str, Any], kinematics: RobotKinematics, motor_names: list[str]
    joints: dict[str, Any],
    kinematics: RobotKinematics,
    motor_names: list[str],
    gripper_names: list[str] | None = None,
) -> dict[str, Any]:
    if gripper_names is None:
        gripper_names = ["gripper"]

    motor_joint_values = [joints[f"{n}.pos"] for n in motor_names]

    q = np.array(motor_joint_values, dtype=float)
    t = kinematics.forward_kinematics(q)
    pos = t[:3, 3]
    tw = Rotation.from_matrix(t[:3, :3]).as_rotvec()
    gripper_pos = joints["gripper.pos"]

    for n in motor_names:
        joints.pop(f"{n}.pos")

    joints["ee.x"] = float(pos[0])
    joints["ee.y"] = float(pos[1])
    joints["ee.z"] = float(pos[2])
    joints["ee.wx"] = float(tw[0])
    joints["ee.wy"] = float(tw[1])
    joints["ee.wz"] = float(tw[2])
    joints["ee.gripper_pos"] = float(gripper_pos)

    # When gripper_names is non-empty, fold them into ee.gripper_pos (e.g. SO100).
    # When empty, gripper joints pass through as-is (absolute position control).
    if gripper_names:
        gripper_pos = joints[f"{gripper_names[0]}.pos"]
        for n in gripper_names:
            joints.pop(f"{n}.pos", None)
        joints["ee.gripper_pos"] = float(gripper_pos)

    return joints

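The gripper-folding branch added above can be read in isolation: the first gripper joint's value becomes `ee.gripper_pos`, all gripper keys are dropped, and an empty list leaves the dict untouched. A minimal standalone sketch (hypothetical helper, not part of the diff):

```python
def fold_gripper(joints: dict, gripper_names: list[str]) -> dict:
    """Fold gripper joint(s) into a single ee.gripper_pos key, as in the diff above.
    An empty gripper_names list leaves gripper keys untouched (absolute position control)."""
    joints = dict(joints)  # work on a copy for illustration
    if gripper_names:
        gripper_pos = joints[f"{gripper_names[0]}.pos"]
        for n in gripper_names:
            joints.pop(f"{n}.pos", None)
        joints["ee.gripper_pos"] = float(gripper_pos)
    return joints

fold_gripper({"gripper.pos": 0.4}, ["gripper"])             # {'ee.gripper_pos': 0.4}
fold_gripper({"proximal.pos": 0.1, "distal.pos": 0.2}, [])  # unchanged
```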
@@ -429,27 +451,33 @@ class ForwardKinematicsJointsToEEObservation(ObservationProcessorStep):
    """
    Computes the end-effector pose from joint positions using forward kinematics (FK).

    This step is typically used to add the robot's Cartesian pose to the observation space,
    which can be useful for visualization or as an input to a policy.

    Attributes:
        kinematics: The robot's kinematic model.
        motor_names: Arm joint names used for FK computation.
        gripper_names: Gripper joint name(s) to fold into ee.gripper_pos.
            Empty list means gripper joints pass through as absolute positions.
    """

    kinematics: RobotKinematics
    motor_names: list[str]
    gripper_names: list[str] = field(default_factory=lambda: ["gripper"])

    def observation(self, observation: RobotObservation) -> RobotObservation:
        return compute_forward_kinematics_joints_to_ee(observation, self.kinematics, self.motor_names)
        return compute_forward_kinematics_joints_to_ee(
            observation, self.kinematics, self.motor_names, self.gripper_names
        )

    def transform_features(
        self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
    ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
        # We only use the ee pose in the dataset, so we don't need the joint positions
        for n in self.motor_names:
            features[PipelineFeatureType.OBSERVATION].pop(f"{n}.pos", None)
        # We specify the dataset features of this step that we want to be stored in the dataset
        for k in ["x", "y", "z", "wx", "wy", "wz", "gripper_pos"]:
        ee_keys = ["x", "y", "z", "wx", "wy", "wz"]
        if self.gripper_names:
            for n in self.gripper_names:
                features[PipelineFeatureType.OBSERVATION].pop(f"{n}.pos", None)
            ee_keys.append("gripper_pos")
        for k in ee_keys:
            features[PipelineFeatureType.OBSERVATION][f"ee.{k}"] = PolicyFeature(
                type=FeatureType.STATE, shape=(1,)
            )
@@ -462,27 +490,33 @@ class ForwardKinematicsJointsToEEAction(RobotActionProcessorStep):
    """
    Computes the end-effector pose from joint positions using forward kinematics (FK).

    This step is typically used to add the robot's Cartesian pose to the observation space,
    which can be useful for visualization or as an input to a policy.

    Attributes:
        kinematics: The robot's kinematic model.
        motor_names: Arm joint names used for FK computation.
        gripper_names: Gripper joint name(s) to fold into ee.gripper_pos.
            Empty list means gripper joints pass through as absolute positions.
    """

    kinematics: RobotKinematics
    motor_names: list[str]
    gripper_names: list[str] = field(default_factory=lambda: ["gripper"])

    def action(self, action: RobotAction) -> RobotAction:
        return compute_forward_kinematics_joints_to_ee(action, self.kinematics, self.motor_names)
        return compute_forward_kinematics_joints_to_ee(
            action, self.kinematics, self.motor_names, self.gripper_names
        )

    def transform_features(
        self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
    ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
        # We only use the ee pose in the dataset, so we don't need the joint positions
        for n in self.motor_names:
            features[PipelineFeatureType.ACTION].pop(f"{n}.pos", None)
        # We specify the dataset features of this step that we want to be stored in the dataset
        for k in ["x", "y", "z", "wx", "wy", "wz", "gripper_pos"]:
        ee_keys = ["x", "y", "z", "wx", "wy", "wz"]
        if self.gripper_names:
            for n in self.gripper_names:
                features[PipelineFeatureType.ACTION].pop(f"{n}.pos", None)
            ee_keys.append("gripper_pos")
        for k in ee_keys:
            features[PipelineFeatureType.ACTION][f"ee.{k}"] = PolicyFeature(
                type=FeatureType.STATE, shape=(1,)
            )
@@ -494,13 +528,14 @@ class ForwardKinematicsJointsToEEAction(RobotActionProcessorStep):
class ForwardKinematicsJointsToEE(ProcessorStep):
    kinematics: RobotKinematics
    motor_names: list[str]
    gripper_names: list[str] = field(default_factory=lambda: ["gripper"])

    def __post_init__(self):
        self.joints_to_ee_action_processor = ForwardKinematicsJointsToEEAction(
            kinematics=self.kinematics, motor_names=self.motor_names
            kinematics=self.kinematics, motor_names=self.motor_names, gripper_names=self.gripper_names
        )
        self.joints_to_ee_observation_processor = ForwardKinematicsJointsToEEObservation(
            kinematics=self.kinematics, motor_names=self.motor_names
            kinematics=self.kinematics, motor_names=self.motor_names, gripper_names=self.gripper_names
        )

    def __call__(self, transition: EnvTransition) -> EnvTransition:
@@ -524,13 +559,13 @@ class ForwardKinematicsJointsToEE(ProcessorStep):
@dataclass
class InverseKinematicsRLStep(ProcessorStep):
    """
    Computes desired joint positions from a target end-effector pose using inverse kinematics (IK).

    This is modified from the InverseKinematicsEEToJoints step to be used in the RL pipeline.
    IK step for the RL pipeline. Same logic as InverseKinematicsEEToJoints but
    operates on EnvTransition directly and stores the IK solution.
    """

    kinematics: RobotKinematics
    motor_names: list[str]
    gripper_names: list[str] = field(default_factory=lambda: ["gripper"])
    q_curr: np.ndarray | None = field(default=None, init=False, repr=False)
    initial_guess_current_joints: bool = True

@@ -538,7 +573,7 @@ class InverseKinematicsRLStep(ProcessorStep):
        new_transition = dict(transition)
        action = new_transition.get(TransitionKey.ACTION)
        if action is None:
            raise ValueError("Action is required for InverseKinematicsEEToJoints")
            raise ValueError("Action is required for InverseKinematicsRLStep")
        action = dict(action)

        x = action.pop("ee.x")
@@ -547,45 +582,46 @@ class InverseKinematicsRLStep(ProcessorStep):
        wx = action.pop("ee.wx")
        wy = action.pop("ee.wy")
        wz = action.pop("ee.wz")
        gripper_pos = action.pop("ee.gripper_pos")

        if None in (x, y, z, wx, wy, wz, gripper_pos):
            raise ValueError(
                "Missing required end-effector pose components: ee.x, ee.y, ee.z, ee.wx, ee.wy, ee.wz, ee.gripper_pos must all be present in action"
            )
        ee_keys = [x, y, z, wx, wy, wz]
        if self.gripper_names:
            gripper_pos = action.pop("ee.gripper_pos")
            ee_keys.append(gripper_pos)
        if None in ee_keys:
            raise ValueError("Missing required end-effector pose components in action")

        observation = new_transition.get(TransitionKey.OBSERVATION).copy()
        if observation is None:
            raise ValueError("Joints observation is require for computing robot kinematics")
            raise ValueError("Joints observation is required for computing robot kinematics")

        q_raw = np.array(
            [float(v) for k, v in observation.items() if isinstance(k, str) and k.endswith(".pos")],
            [
                float(v)
                for k, v in observation.items()
                if isinstance(k, str) and k.endswith(".pos") and k.removesuffix(".pos") in self.motor_names
            ],
            dtype=float,
        )
        if q_raw is None:
            raise ValueError("Joints observation is require for computing robot kinematics")

        if self.initial_guess_current_joints:  # Use current joints as initial guess
        if self.initial_guess_current_joints:
            self.q_curr = q_raw
        else:  # Use previous ik solution as initial guess
        else:
            if self.q_curr is None:
                self.q_curr = q_raw

        # Build desired 4x4 transform from pos + rotvec (twist)
        t_des = np.eye(4, dtype=float)
        t_des[:3, :3] = Rotation.from_rotvec([wx, wy, wz]).as_matrix()
        t_des[:3, 3] = [x, y, z]

        # Compute inverse kinematics
        q_target = self.kinematics.inverse_kinematics(self.q_curr, t_des)
        self.q_curr = q_target

        # TODO: This is sensitive to order of motor_names = q_target mapping
        for i, name in enumerate(self.motor_names):
            if name != "gripper":
                action[f"{name}.pos"] = float(q_target[i])
            else:
                action["gripper.pos"] = float(gripper_pos)
            action[f"{name}.pos"] = float(q_target[i])

        if self.gripper_names:
            for gname in self.gripper_names:
                action[f"{gname}.pos"] = float(gripper_pos)

        new_transition[TransitionKey.ACTION] = action
        complementary_data = new_transition.get(TransitionKey.COMPLEMENTARY_DATA, {})
@@ -596,16 +632,22 @@ class InverseKinematicsRLStep(ProcessorStep):
    def transform_features(
        self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
    ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
        for feat in ["x", "y", "z", "wx", "wy", "wz", "gripper_pos"]:
        ee_feats = ["x", "y", "z", "wx", "wy", "wz"]
        if self.gripper_names:
            ee_feats.append("gripper_pos")
        for feat in ee_feats:
            features[PipelineFeatureType.ACTION].pop(f"ee.{feat}", None)

        for name in self.motor_names:
            features[PipelineFeatureType.ACTION][f"{name}.pos"] = PolicyFeature(
                type=FeatureType.ACTION, shape=(1,)
            )
        for name in self.gripper_names:
            features[PipelineFeatureType.ACTION][f"{name}.pos"] = PolicyFeature(
                type=FeatureType.ACTION, shape=(1,)
            )

        return features

    def reset(self):
        """Resets the initial guess for the IK solver."""
        self.q_curr = None

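The new `q_raw` comprehension keeps only arm-joint readings from the observation dict, ignoring grippers and any non-`.pos` keys. A standalone sketch of that filter (hypothetical helper name; note it inherits the ordering caveat flagged in the TODO above, since the result follows dict insertion order rather than `motor_names` order):

```python
def extract_arm_joints(observation: dict, motor_names: list[str]) -> list[float]:
    """Keep only arm-joint readings (drop grippers, task strings, etc.),
    mirroring the filtered q_raw comprehension in the diff."""
    return [
        float(v)
        for k, v in observation.items()
        if isinstance(k, str) and k.endswith(".pos") and k.removesuffix(".pos") in motor_names
    ]

obs = {"shoulder.pos": 1.0, "elbow.pos": 2.0, "gripper.pos": 0.5}
q_raw = extract_arm_joints(obs, ["shoulder", "elbow"])  # [1.0, 2.0]
```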
@@ -254,6 +254,10 @@ class RecomputeStatsConfig(OperationConfig):
    relative_exclude_joints: list[str] | None = None
    chunk_size: int = 50
    num_workers: int = 0
    relative_state: bool = False
    relative_exclude_state_joints: list[str] | None = None
    state_obs_steps: int = 2
    derive_state_from_action: bool = False


@OperationConfig.register_subclass("info")
@@ -563,6 +567,14 @@ def handle_recompute_stats(cfg: EditDatasetConfig) -> None:
            f"Relative action stats enabled (chunk_size={cfg.operation.chunk_size}, "
            f"exclude_joints={cfg.operation.relative_exclude_joints})"
        )
    if cfg.operation.relative_state:
        logging.info(
            f"Relative state stats enabled (state_obs_steps={cfg.operation.state_obs_steps}, "
            f"exclude_state_joints={cfg.operation.relative_exclude_state_joints})"
        )

    if cfg.operation.derive_state_from_action:
        logging.info("Derive state from action enabled (implies relative_state=True, state_obs_steps=2)")

    recompute_stats(
        dataset,
@@ -571,6 +583,10 @@ def handle_recompute_stats(cfg: EditDatasetConfig) -> None:
        relative_exclude_joints=cfg.operation.relative_exclude_joints,
        chunk_size=cfg.operation.chunk_size,
        num_workers=cfg.operation.num_workers,
        relative_state=cfg.operation.relative_state,
        relative_exclude_state_joints=cfg.operation.relative_exclude_state_joints,
        state_obs_steps=cfg.operation.state_obs_steps,
        derive_state_from_action=cfg.operation.derive_state_from_action,
    )

    logging.info(f"Stats written to {dataset.root}")