# ForceFlow Dataset

**ForceFlow: Learning to Feel and Act via Contact-Driven Flow Matching**

[Project Page] | [Code]
Contact-rich manipulation remains one of the hardest problems in robot learning: vision alone cannot capture the high-frequency contact dynamics that determine whether a plug seats correctly, a stamp triggers cleanly, or a wipe exerts consistent pressure. This dataset was collected to support ForceFlow, a force-aware reactive framework built on flow matching that addresses this gap.
ForceFlow fuses temporal force/torque history with visual observations through an asymmetric multimodal design: force history acts as a global regulation signal, preventing it from being overshadowed by high-dimensional image features, while a hybrid action space jointly predicts end-effector motion and the expected next-step contact force. To handle spatial generalization, ForceFlow introduces a Vision-to-Force (V2F) handover: a VLM first localizes the target in the scene, then control passes to the force-aware policy for precise local contact interaction.
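As a rough illustration of the hybrid action space described above, the policy's per-step output can be thought of as a 6-DOF motion delta concatenated with an expected 6-axis F/T target. The function name and 12-D layout here are assumptions for clarity, not the repo's actual API:

```python
import numpy as np

def make_hybrid_action(delta_pose: np.ndarray, expected_force: np.ndarray) -> np.ndarray:
    """Concatenate a 6-DOF end-effector delta with an expected next-step
    6-axis F/T target into one 12-D hybrid action vector (illustrative only)."""
    assert delta_pose.shape == (6,) and expected_force.shape == (6,)
    return np.concatenate([delta_pose, expected_force])

a = make_hybrid_action(np.zeros(6), np.ones(6))
print(a.shape)  # (12,)
```

In this dataset the two halves are stored separately (`data/action` and the force fields), so a training pipeline would assemble such a vector itself.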
This dataset contains 7 real-robot teleoperated demonstration tasks spanning two categories of contact-rich manipulation, collected on a UFACTORY xArm6 equipped with a 6-axis wrist F/T sensor and dual Intel RealSense cameras.
## Tasks
**Short-horizon contact**: tasks requiring precise force application at a specific moment.
| Task | Episodes | Total Steps | Key Challenge |
|---|---|---|---|
| `stamp` | 100 | 45,867 | Visual ambiguity in paper thickness; force-triggered stamping |
| `plug` | 100 | 50,107 | Coarse visual alignment with force-guided insertion |
| `press_button` | 50 | 23,396 | Varying spring constants and trigger depths |
| `insert` | 50 | 25,032 | Sub-millimeter tolerance and geometric jamming |
**Continuous contact**: tasks requiring sustained force regulation throughout execution.
| Task | Episodes | Total Steps | Key Challenge |
|---|---|---|---|
| `clean_whiteboard` | 100 | 56,810 | Stable normal force tracking on a planar surface |
| `clean_vase` | 50 | 85,478 | Adaptive force regulation on a curved, non-linear surface |
| `peel` | 50 | 38,564 | Consistent peel force on adhesive tape |
## Data Format
Each task is provided in two formats:
- `<task>.zarr/`: Zarr v2 directory store, ready for direct training use
- `<task>.zip`: Zipped archive of the same zarr store
- `<task>_normalizer.json`: Pre-computed normalizer statistics (mean/std) for all fields
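The exact JSON layout of the normalizer files is not documented here. As a minimal sketch, assuming each file maps field names to `mean`/`std` arrays, per-field z-normalization could look like:

```python
import json
import numpy as np

def normalize(x: np.ndarray, stats: dict) -> np.ndarray:
    """z-normalize one field; assumes stats = {"mean": [...], "std": [...]}."""
    mean = np.asarray(stats["mean"], dtype=np.float32)
    std = np.asarray(stats["std"], dtype=np.float32)
    return (x - mean) / np.maximum(std, 1e-8)  # guard against zero std

# Hypothetical usage with a task's normalizer file:
# with open("plug_normalizer.json") as f:
#     stats = json.load(f)
# norm_force = normalize(forces, stats["force"])
```

Check the ForceFlow repo's dataloader for the authoritative key names and normalization scheme.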
### Zarr Structure
```
<task>.zarr/
├── data/
│   ├── action          (N, 6) float32          # end-effector delta pose (6-DOF)
│   ├── pos             (N, 6) float32          # end-effector absolute pose
│   ├── force           (N, 6) float32          # raw F/T sensor readings
│   ├── delta_force     (N, 6) float32          # force delta (not in peel)
│   ├── gripper_action  (N, 1) float32          # gripper command (0=open, 1=close)
│   ├── gripper_state   (N, 1) float32          # gripper current state
│   ├── rgb_arm         (N, 3, 240, 320) uint8  # wrist camera (JPEG-compressed)
│   └── rgb_fix         (N, 3, 240, 320) uint8  # fixed camera (JPEG-compressed)
└── meta/
    └── episode_ends    (E,) uint32             # cumulative step index at each episode end
```
> **Note:** The `peel` task does not contain the `delta_force` field.
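If a pipeline expects `delta_force` for every task, a first difference of consecutive raw F/T readings within each episode is one plausible substitute for `peel`. This is an assumption about how `delta_force` might be defined, not the dataset authors' definition:

```python
import numpy as np

def first_difference_force(force: np.ndarray) -> np.ndarray:
    """Per-step force change within a single episode (assumed definition):
    delta[t] = force[t] - force[t-1], with delta[0] = 0. Shape (T, 6) -> (T, 6)."""
    delta = np.zeros_like(force)
    delta[1:] = force[1:] - force[:-1]
    return delta
```

Compute this per episode (using `episode_ends` to slice) so the difference never crosses an episode boundary.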
RGB arrays are stored with a custom JPEG codec. To read them, install `image_codecs` from the ForceFlow repo and register the codec before opening the zarr store.
## Usage

### Prerequisites
```bash
git clone --recurse-submodules https://github.com/JokerESC/ForceFlow.git
cd ForceFlow
pip install -r requirements.txt
pip install -e CleanDiffuser/
```
### Load a dataset
```python
import sys
sys.path.insert(0, 'path/to/ForceFlow/CleanDiffuser')

import numcodecs
import image_codecs
numcodecs.register_codec(image_codecs.jpeg)  # register the custom JPEG codec first

import numpy as np
import zarr

z = zarr.open('plug.zarr', 'r')
episode_ends = z['meta/episode_ends'][:]  # shape (100,)
actions = z['data/action'][:]             # shape (50107, 6)
forces = z['data/force'][:]               # shape (50107, 6)
rgb_arm = z['data/rgb_arm'][:]            # shape (50107, 3, 240, 320)

# Reconstruct per-episode slices
starts = np.concatenate([[0], episode_ends[:-1]])
for ep_idx, (s, e) in enumerate(zip(starts, episode_ends)):
    ep_actions = actions[s:e]  # (T, 6)
    ep_forces = forces[s:e]    # (T, 6)
```
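Since ForceFlow conditions on a temporal force/torque history and predicts an action sequence, training samples are typically fixed-length windows cut from these episode slices. A minimal sketch, where the window lengths and padding scheme are assumptions rather than the repo's actual config:

```python
import numpy as np

def sample_window(forces, actions, ep_start, ep_end, t, hist=16, horizon=8):
    """Slice a force-history window ending at step t and the action chunk that
    follows it, clamped to the episode boundaries (illustrative only)."""
    lo = max(ep_start, t - hist + 1)
    force_hist = forces[lo:t + 1]                  # up to (hist, 6)
    if len(force_hist) < hist:                     # left-pad by repeating the first step
        pad = np.repeat(force_hist[:1], hist - len(force_hist), axis=0)
        force_hist = np.concatenate([pad, force_hist])
    action_chunk = actions[t:min(t + horizon, ep_end)]  # up to (horizon, 6)
    return force_hist, action_chunk
```

See the ForceFlow config files for the actual observation and prediction horizons used in training.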
### Training with ForceFlow

```bash
# Edit configs/xarm.yaml to point to the downloaded data
python -m pipeline.train --config configs/xarm.yaml
```
## Hardware
| Component | Details |
|---|---|
| Robot arm | UFACTORY xArm6 |
| F/T sensor | 6-axis wrist force/torque sensor |
| Wrist camera | Intel RealSense D435 |
| Fixed camera | Intel RealSense L515 |
| Teleoperation | 3Dconnexion SpaceMouse |
## License

MIT; see LICENSE.
## Citation
If you use this dataset, please cite:
```bibtex
@misc{forceflow2025,
  title  = {ForceFlow: Learning to Feel and Act via Contact-Driven Flow Matching},
  author = {JokerESC},
  year   = {2025},
  url    = {https://github.com/JokerESC/ForceFlow}
}
```