License: Academic research use only; commercial use is not permitted.

Dataset Card for MOVE Real-World Manipulation Dataset

MOVE: Motion-Based Variability Enhancement for Spatial Generalization in Robotic Manipulation

Jointly Released by:

🎓 Tsinghua University

🤖 Beijing Academy of Artificial Intelligence (BAAI)

This Hugging Face Dataset Card describes the Real-World Robotic Manipulation Dataset collected using the MOVE (Motion-Based Variability Enhancement) paradigm, as presented in the paper "MOVE: A Simple Motion-Based Data Collection Paradigm for Spatial Generalization in Robotic Manipulation."

The core of the MOVE paradigm is to inject dynamic variation into the environment (moving both objects and the camera) during expert demonstrations. This captures a richer variety of spatial configurations within a single trajectory and significantly improves policy performance, particularly spatial generalization to unseen locations and data efficiency.

💾 Dataset Structure

This dataset focuses on the Real-World Pick-and-Place task. All data was collected using the Piper robotic arm teleoperated via a Pika device under dynamic environment configurations.

Data Fields

Each trajectory contains a sequence of timesteps, recording the essential observations and state information required for robotic policy learning.

| Field Key | Data Type | Description |
|---|---|---|
| `timestep_id` | int | Sequential timestep index within the trajectory. |
| `camera/color/Camera` | PIL.Image / ndarray | RGB image observation. Because of the MOVE paradigm, the images capture dynamically changing object positions, target placements, and camera viewpoints. |
| `arm/jointStatePosition/joint_single` | array[float] | Robot joint state/action. The joint positions of the Piper robotic arm, representing the robot's state or executed action at this timestep. |
| `arm/jointStatePosition/master` | array[float] | Master device state. The joint/position states of the Pika teleoperation master device, recording the human operator's intent. |

Note: This dataset strictly includes only the three key data streams listed above. It does not include explicit 3D coordinates, world-frame camera poses, or other calculated metadata.
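
For concreteness, a single timestep record looks roughly like the sketch below. The keys follow the table above; the lengths and values of the joint-state arrays are purely illustrative and not taken from the dataset.

```python
# A minimal sketch of one timestep record (illustrative values only).
example_timestep = {
    "timestep_id": 42,                                    # index within the trajectory
    "camera/color/Camera": "<PIL.Image: RGB observation>",
    "arm/jointStatePosition/joint_single": [0.12, -0.45, 0.78, 0.01, -0.33, 0.56],  # Piper joint positions
    "arm/jointStatePosition/master": [0.10, -0.44, 0.80, 0.02, -0.30, 0.55],        # Pika master device state
}
```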

Data Splits

The real-world dataset is split based on the total number of environment interaction steps (timesteps), allowing for efficiency evaluation:

| Split Name | Task | Total Timesteps | Description |
|---|---|---|---|
| `real_world_35k` | Real-world pick-and-place (e.g., orange to tray) | 35,000 | A challenging, low-data scenario for testing spatial generalization capability. |
| `real_world_75k` | Real-world pick-and-place | 75,000 | Used for performance scaling and efficiency comparison against static baselines. |
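
Assuming the two subsets are exposed as Hugging Face dataset configurations with the names above (and that the placeholder repository ID used in the Usage section below is replaced with the actual one), their sizes can be compared with a short sketch:

```python
from datasets import load_dataset

# Sketch: load each subset and report its total number of timesteps.
# "your-huggingface-org/MOVE" is a placeholder repository ID.
for config in ["real_world_35k", "real_world_75k"]:
    ds = load_dataset("your-huggingface-org/MOVE", config, split="train")
    print(f"{config}: {len(ds)} timesteps")
```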

🚀 Key Advantages

  • High Spatial Generalization: Policies trained on this dynamically augmented data demonstrate superior success rates when tested on spatially randomized, unseen configurations.
  • Superior Data Efficiency: MOVE datasets enable policies to achieve competitive performance with a significantly lower total number of timesteps compared to datasets collected using the traditional static approach.

🎯 Usage

This dataset is an ideal resource for training robust real-world robotic manipulation policies, particularly visuomotor policies that must generalize across spatial configurations.

Loading the Data

```python
from datasets import load_dataset

# Load the 75k-timestep real-world subset
# ("your-huggingface-org/MOVE" is a placeholder; use the actual repository ID)
dataset = load_dataset("your-huggingface-org/MOVE", "real_world_75k")

# Access the image and state data of the first timestep
first_example = dataset["train"][0]
image = first_example["camera/color/Camera"]
robot_state = first_example["arm/jointStatePosition/joint_single"]
```
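
For policy training it is often convenient to regroup the flat stream of timesteps into per-trajectory sequences. The sketch below assumes records are stored in collection order and that `timestep_id` resets to 0 at the start of each trajectory; the card does not document an explicit trajectory identifier, so treat this grouping rule as an assumption.

```python
import numpy as np

# Sketch: recover trajectories from the flat timestep stream
# (assumes timestep_id resets to 0 at each new trajectory).
trajectories, current = [], []
for record in dataset["train"]:
    if record["timestep_id"] == 0 and current:
        trajectories.append(current)
        current = []
    current.append(record)
if current:
    trajectories.append(current)

# Stack the joint states of the first trajectory into a (T, num_joints) array.
joint_states = np.stack([
    np.asarray(step["arm/jointStatePosition/joint_single"], dtype=np.float32)
    for step in trajectories[0]
])
print(len(trajectories), joint_states.shape)
```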

Citation

If you find this work helpful, please consider citing our paper:

@misc{wang2025movesimplemotionbaseddata,
      title={MOVE: A Simple Motion-Based Data Collection Paradigm for Spatial Generalization in Robotic Manipulation}, 
      author={Huanqian Wang and Chi Bene Chen and Yang Yue and Danhua Tao and Tong Guo and Shaoxuan Xie and Denghang Huang and Shiji Song and Guocai Yao and Gao Huang},
      year={2025},
      eprint={2512.04813},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2512.04813}, 
}