RoboInter-Data: Intermediate Representation Annotations for Robot Manipulation
Rich, dense, per-frame intermediate representation annotations for robot manipulation, built on top of DROID and RH20T. Developed as part of the RoboInter project. You can try our Online demo.
The annotations cover ~230k episodes and include subtasks, primitive skills, segmentation masks, gripper/object bounding boxes, placement proposals, affordance boxes, grasp poses, traces, contact points, and more. Every annotation type carries a per-episode quality rating (Primary / Secondary).
Dataset Structure
```
RoboInter-Data/
│
├── Annotation_with_action_lerobotv21/   # [Main] LeRobot v2.1 format (actions + annotations + videos)
│   ├── lerobot_droid_anno/              # DROID: 152,986 episodes
│   └── lerobot_rh20t_anno/              # RH20T: 82,894 episodes
│
├── Annotation_pure/                     # Annotation-only LMDB (no actions/videos)
│   └── annotations/                     # 35 GB, all 235,920 episodes
│
├── Annotation_raw/                      # Original unprocessed annotations
│   ├── droid_annotation.pkl             # Raw DROID annotations (~20 GB)
│   ├── rh20t_annotation.pkl             # Raw RH20T annotations (~11 GB)
│   └── segmentation_npz.zip.*           # Segmentation masks (~50 GB, split archives)
│
├── Annotation_demo_app/                 # Small demo subset for online visualization
│   ├── demo_data/                       # LMDB annotations for 20 sampled videos
│   └── videos/                          # 20 MP4 videos
│
├── Annotation_demo_larger/              # Larger demo subset for local visualization
│   ├── demo_annotations/                # LMDB annotations for 120 videos
│   └── videos/                          # 120 MP4 videos
│
├── All_Keys_of_Primary.json                  # Episode names where all annotations are Primary quality
├── RoboInter_Data_Qsheet.json                 # Per-episode quality ratings for each annotation type
├── RoboInter_Data_Qsheet_value_stats.json     # Distribution statistics of quality ratings
├── RoboInter_Data_RawPath_Qmapping.json       # Mapping: original data source path -> episode splits & quality
├── range_nop.json                             # Non-idle frame ranges for all 235,920 episodes
├── range_nop_droid_all.json                   # Non-idle frame ranges (DROID only)
├── range_nop_rh20t_all.json                   # Non-idle frame ranges (RH20T only)
├── val_video.json                             # Validation set: 7,246 episode names
└── VideoID_2_SegmentationNPZ.json             # Episode video ID -> segmentation NPZ file path mapping
```
1. Annotation_with_action_lerobotv21 (Recommended)
The primary data format. Contains actions + observations + annotations in LeRobot v2.1 format (parquet + MP4 videos), ready for policy training.
Directory Layout
```
lerobot_droid_anno/   (or lerobot_rh20t_anno/)
├── meta/
│   ├── info.json            # Dataset metadata (fps=10, features, etc.)
│   ├── episodes.jsonl       # Episode information
│   └── tasks.jsonl          # Task/instruction mapping
├── data/
│   └── chunk-{NNN}/         # Parquet files (1,000 episodes per chunk)
│       └── episode_{NNNNNN}.parquet
└── videos/
    └── chunk-{NNN}/
        ├── observation.images.primary/
        │   └── episode_{NNNNNN}.mp4
        └── observation.images.wrist/
            └── episode_{NNNNNN}.mp4
```
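The `meta/` files follow the standard LeRobot v2.1 layout, so they can be inspected directly before loading any episode data; a short sketch (paths relative to `lerobot_droid_anno/`):

```python
import json

# Dataset-level metadata: fps, feature schema, etc.
with open("lerobot_droid_anno/meta/info.json") as f:
    info = json.load(f)
print(info["fps"], list(info["features"].keys()))

# episodes.jsonl holds one JSON object per line, one line per episode.
with open("lerobot_droid_anno/meta/episodes.jsonl") as f:
    first_episode = json.loads(f.readline())
print(first_episode)
```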
Data Fields
| Category | Field | Shape / Type | Description |
|---|---|---|---|
| Core | `action` | `(7,)` float64 | Delta EEF: `[dx, dy, dz, drx, dry, drz, gripper]` |
| | `state` | `(7,)` float64 | EEF state: `[x, y, z, rx, ry, rz, gripper]` |
| | `observation.images.primary` | `(180, 320, 3)` video | Primary camera RGB |
| | `observation.images.wrist` | `(180, 320, 3)` video | Wrist camera RGB |
| Annotation | `annotation.instruction_add` | string | Structured task language instruction |
| | `annotation.substask` | string | Current subtask description |
| | `annotation.primitive_skill` | string | Primitive skill label (pick, place, push, ...) |
| | `annotation.object_box` | JSON `[[x1,y1],[x2,y2]]` | Manipulated object bounding box |
| | `annotation.gripper_box` | JSON `[[x1,y1],[x2,y2]]` | Gripper bounding box |
| | `annotation.trace` | JSON `[[x,y], ...]` | Future 10-step gripper trajectory |
| | `annotation.contact_frame` | JSON int | Frame index when gripper contacts object |
| | `annotation.contact_points` | JSON `[x, y]` | Contact point pixel coordinates |
| | `annotation.affordance_box` | JSON `[[x1,y1],[x2,y2]]` | Gripper box at contact frame |
| | `annotation.state_affordance` | JSON `[x,y,z,rx,ry,rz]` | 6D EEF state at contact frame |
| | `annotation.placement_proposal` | JSON `[[x1,y1],[x2,y2]]` | Target placement bounding box |
| | `annotation.time_clip` | JSON `[[s,e], ...]` | Subtask temporal segments |
| Quality | `Q_annotation.*` | string | Quality rating: `"Primary"` / `"Secondary"` / `""` |
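The `annotation.*` box and trace fields arrive as JSON strings (see the batch comments in the Quick Start below). A minimal decoding sketch, using a hypothetical helper and field names from the table above:

```python
import json
import numpy as np

def decode_frame_annotations(sample: dict) -> dict:
    """Parse the JSON-encoded annotation fields of one sample into arrays."""
    out = {}
    for key in ("annotation.trace", "annotation.object_box",
                "annotation.gripper_box", "annotation.placement_proposal"):
        raw = sample.get(key)
        # Fields may be empty when an annotation is missing for this frame.
        out[key] = np.asarray(json.loads(raw)) if raw else None
    return out

# Example: a trace decodes to an (N, 2) array of future pixel waypoints.
decoded = decode_frame_annotations({"annotation.trace": "[[10, 20], [12, 24]]"})
print(decoded["annotation.trace"].shape)  # (2, 2)
```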
Quick Start
The dataloader lives in our RoboInter codebase.
```python
from lerobot_dataloader import create_dataloader

# Single dataset
dataloader = create_dataloader(
    "path/to/Annotation_with_action_lerobotv21/lerobot_droid_anno",
    batch_size=32,
    action_horizon=16,
)

for batch in dataloader:
    images = batch["observation.images.primary"]   # (B, H, W, 3)
    actions = batch["action"]                      # (B, 16, 7)
    trace = batch["annotation.trace"]              # JSON strings
    skill = batch["annotation.primitive_skill"]    # List[str]
    break

# Multiple datasets (DROID + RH20T)
dataloader = create_dataloader(
    [
        "path/to/lerobot_droid_anno",
        "path/to/lerobot_rh20t_anno",
    ],
    batch_size=32,
    action_horizon=16,
)
```
Filtering by Quality & Frame Range
```python
from lerobot_dataloader import create_dataloader, QAnnotationFilter

dataloader = create_dataloader(
    "path/to/lerobot_droid_anno",
    batch_size=32,
    range_nop_path="path/to/range_nop.json",  # Remove idle frames
    q_filters=[
        QAnnotationFilter("Q_annotation.trace", ["Primary"]),
        QAnnotationFilter("Q_annotation.gripper_box", ["Primary", "Secondary"]),
    ],
)
```
For full dataloader documentation and transforms, see: RoboInterData/lerobot_dataloader.
Format Conversion Scripts
The LeRobot v2.1 data was converted using the scripts in RoboInterData/convert_to_lerobot (listed under Related Resources below).
2. Annotation_pure (Annotation-Only LMDB)
Contains only the intermediate representation annotations (no action data, no videos), stored as a single LMDB database. Useful for lightweight access to annotations or as input for the LeRobot conversion pipeline. The format conversion scripts and a corresponding lightweight dataloader are provided in lmdb_tool. High-resolution videos can be downloaded by following the DROID hr_video_reader and the RH20T API.
Data Format
Each LMDB key is an episode name (e.g., "3072_exterior_image_1_left"). The value is a dict mapping frame indices to per-frame annotation dicts:
```python
{
    0: {                                                   # frame_id
        "time_clip": [[0, 132], [132, 197], [198, 224]],   # subtask segments
        "instruction_add": "pick up the red cup",          # language instruction
        "substask": "reach for the cup",                   # current subtask
        "primitive_skill": "reach",                        # skill label
        "segmentation": None,                              # (stored separately in Annotation_raw)
        "object_box": [[45, 30], [120, 95]],               # manipulated object bbox
        "placement_proposal": [[150, 80], [220, 140]],     # target placement bbox
        "trace": [[x, y], ...],                            # next 10 gripper waypoints
        "gripper_box": [[60, 50], [100, 80]],              # gripper bbox
        "contact_frame": 101,                              # contact event frame (-1 if past contact)
        "state_affordance": [0.1, 0.2, 0.3, 0.4, 0.5, 0.6],# 6D EEF state at contact
        "affordance_box": [[62, 48], [98, 82]],            # gripper bbox at contact frame
        "contact_points": [[75, 65], [85, 65]],            # contact pixel coordinates
        ...
    },
    1: { ... },
    ...
}
```
Reading LMDB
```python
import lmdb
import pickle

lmdb_path = "Annotation_pure/annotations"
env = lmdb.open(lmdb_path, readonly=True, lock=False, readahead=False)

with env.begin() as txn:
    # List all episode keys
    cursor = txn.cursor()
    for key, value in cursor:
        episode_name = key.decode("utf-8")
        episode_data = pickle.loads(value)

        # Access frame 0
        frame_0 = episode_data[0]
        print(f"{episode_name}: {frame_0['instruction_add']}")
        print(f"  object_box: {frame_0['object_box']}")
        print(f"  trace: {frame_0['trace'][:3]}...")  # first 3 waypoints
        break

env.close()
```
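For random access to a known episode, `txn.get` avoids scanning with a cursor; a small sketch using the example key from above:

```python
import lmdb
import pickle

env = lmdb.open("Annotation_pure/annotations", readonly=True, lock=False)
with env.begin() as txn:
    # Direct lookup by episode name (keys are UTF-8 encoded strings).
    raw = txn.get("3072_exterior_image_1_left".encode("utf-8"))
    if raw is not None:
        episode_data = pickle.loads(raw)
        print(len(episode_data), "annotated frames")
env.close()
```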
CLI Inspection Tool
```bash
cd RoboInter/RoboInterData/lmdb_tool

# Basic info
python read_lmdb.py --lmdb_path Annotation_pure/annotations --action info

# View a specific episode
python read_lmdb.py --lmdb_path Annotation_pure/annotations --action item --key "3072_exterior_image_1_left"

# Field coverage statistics
python read_lmdb.py --lmdb_path Annotation_pure/annotations --action stats --key "3072_exterior_image_1_left"

# Multi-episode summary
python read_lmdb.py --lmdb_path Annotation_pure/annotations --action summary --limit 100
```
3. Annotation_raw (Original Annotations)
The original, unprocessed annotation files before conversion to LMDB format. These files are large and slow to load.
| File | Size | Description |
|---|---|---|
| `droid_annotation.pkl` | ~20 GB | Raw DROID intermediate representation annotations |
| `rh20t_annotation.pkl` | ~11 GB | Raw RH20T intermediate representation annotations |
| `segmentation_npz.zip.*` | ~50 GB | Object segmentation masks (split archives) |
Reading Raw PKL
Reassemble and extract the split segmentation archives:

```bash
cd /RoboInter-Data/Annotation_raw
cat segmentation_npz.zip.* > segmentation_npz.zip
unzip segmentation_npz.zip
```

Load the raw annotation pickles:

```python
import pickle

with open("Annotation_raw/droid_annotation.pkl", "rb") as f:
    droid_data = pickle.load(f)  # Warning: ~20 GB, takes several minutes

# droid_data[episode_key] contains raw intermediate representation data,
# including: all_language, all_gripper_box, all_grounding_box, all_contact_point, all_traj, etc.
```
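Continuing from the snippet above, one can peek at a single entry to see which raw fields are actually present; this assumes `droid_data` is a dict keyed by episode name, as the comment suggests:

```python
# Peek at one episode to list its raw annotation fields.
episode_key = next(iter(droid_data))
episode = droid_data[episode_key]
print(episode_key)
print(sorted(episode.keys()))  # e.g. all_language, all_gripper_box, all_traj, ...
```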
To convert raw PKL to the LMDB format used in Annotation_pure, see the conversion script in the RoboInter repository.
4. Demo Subsets (Annotation_demo_app & Annotation_demo_larger)
Pre-packaged subsets for quick visualization using the RoboInterData-Demo Gradio app. Both subsets share the same LMDB annotation format + MP4 video structure.
| Subset | Videos | Size | Use Case |
|---|---|---|---|
| `Annotation_demo_app` | 20 | ~929 MB | HuggingFace Spaces online demo |
| `Annotation_demo_larger` | 120 | ~12 GB | Local visualization with more examples |
Running the Visualizer
```bash
git clone https://github.com/InternRobotics/RoboInter.git
cd RoboInter/RoboInterData-Demo

# Option A: Use the small demo subset (for Spaces)
ln -s /path/to/Annotation_demo_app/demo_data ./demo_data
ln -s /path/to/Annotation_demo_app/videos ./videos

# Option B: Use the larger demo subset (for local)
ln -s /path/to/Annotation_demo_larger/demo_annotations ./demo_data
ln -s /path/to/Annotation_demo_larger/videos ./videos

pip install -r requirements.txt
python app.py
# Open http://localhost:7860
```
The visualizer supports all annotation types: object segmentation masks, gripper/object/affordance bounding boxes, trajectory traces, contact points, grasp poses, and language annotations (instructions, subtasks, primitive skills).
5. Metadata JSON Files
Quality & Filtering
| File | Description |
|---|---|
| `All_Keys_of_Primary.json` | List of 65,515 episode names where all annotation types are rated Primary quality. |
| `RoboInter_Data_Qsheet.json` | Per-episode quality ratings for every annotation type. Each entry contains `Q_instruction_add`, `Q_substask`, `Q_trace`, etc., with values `"Primary"`, `"Secondary"`, or `null`. |
| `RoboInter_Data_Qsheet_value_stats.json` | Distribution of quality ratings across all episodes. |
| `RoboInter_Data_RawPath_Qmapping.json` | Mapping from original data source paths to episode splits and their quality ratings. |
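These files compose naturally for episode selection outside the dataloader. A minimal sketch, assuming `RoboInter_Data_Qsheet.json` maps episode names to their per-type `Q_*` ratings (structure inferred from the description above):

```python
import json

with open("RoboInter_Data_Qsheet.json") as f:
    qsheet = json.load(f)

# Keep episodes whose trace annotation is rated Primary.
primary_trace = [ep for ep, q in qsheet.items() if q.get("Q_trace") == "Primary"]
print(len(primary_trace), "episodes with Primary-quality traces")
```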
Frame Ranges (Idle Frame Removal)
| File | Description |
|---|---|
| `range_nop.json` | Non-idle frame ranges for all 235,920 episodes (DROID + RH20T). |
| `range_nop_droid_all.json` | Non-idle frame ranges for DROID episodes only. |
| `range_nop_rh20t_all.json` | Non-idle frame ranges for RH20T episodes only. |

Format: `{ "episode_name": [start_frame, end_frame, valid_length] }`
```python
import json

with open("range_nop.json") as f:
    range_nop = json.load(f)

# Example: "3072_exterior_image_1_left": [12, 217, 206]
# Means: valid action frames are 12~217 (inclusive), i.e. 206 valid frames
# (frames 0~11 and 218+ are idle/stationary)
```
Other
| File | Description |
|---|---|
| `val_video.json` | List of 7,246 episode names reserved for the validation set. |
| `VideoID_2_SegmentationNPZ.json` | Mapping from episode video ID to the corresponding segmentation NPZ file path in `Annotation_raw/segmentation_npz`; `null` if no segmentation is available. |
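Segmentation masks can then be located and loaded per episode. A sketch assuming the split archives have been reassembled as shown in Section 3; the array names inside each NPZ are not documented here, so we just list them:

```python
import json
import numpy as np

with open("VideoID_2_SegmentationNPZ.json") as f:
    seg_map = json.load(f)

npz_path = seg_map.get("3072_exterior_image_1_left")
if npz_path:  # null when no segmentation exists for this episode
    masks = np.load(npz_path)
    print(masks.files)  # names of the arrays stored in this NPZ
```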
Related Resources
| Resource | Link |
|---|---|
| Project | RoboInter |
| VQA Dataset | RoboInter-VQA |
| VLM Checkpoints | RoboInter-VLM |
| LMDB Tool | RoboInterData/lmdb_tool |
| High-Resolution Video Reader | RoboInterData/hr_video_reader |
| LeRobot DataLoader | RoboInterData/lerobot_dataloader |
| LeRobot Conversion | RoboInterData/convert_to_lerobot |
| Demo Visualizer | RoboInterData-Demo |
| Online Demo | HuggingFace Space |
| Raw DROID Dataset | droid-dataset.github.io |
| Raw RH20T Dataset | rh20t.github.io |
License
Please refer to the original dataset licenses for RoboInter, DROID, and RH20T.