VLA-Arena Dataset (L1 - Medium Variant)
About VLA-Arena
VLA-Arena is an open-source benchmark for the systematic evaluation of Vision-Language-Action (VLA) models. It provides a complete, unified toolchain covering scene modeling, demonstration collection, model training, and evaluation. Featuring 150+ tasks across 11 specialized suites, VLA-Arena assesses models across hierarchical difficulty levels (L0-L2), yielding comprehensive metrics for safety, generalization, and efficiency.
Key Evaluation Domains
VLA-Arena focuses on four critical dimensions to ensure robotic agents can operate effectively in the real world:
- Safety: Evaluate the ability to operate reliably in the physical world while avoiding static/dynamic obstacles and hazards.
- Distractor: Assess performance stability when facing environmental unpredictability and visual clutter.
- Extrapolation: Test the ability to generalize learned knowledge to novel situations, unseen objects, and new workflows.
- Long Horizon: Challenge agents to combine long sequences of actions to achieve complex, multi-step goals.
Highlights
- End-to-End Toolchain: From scene construction to final evaluation metrics.
- Systematic Difficulty Scaling: Tasks range from basic object manipulation (L0) to complex, constraint-heavy scenarios (L2).
- Flexible Customization: Powered by CBDDL (Constrained Behavior Domain Definition Language) for easy task definition.
Resources
- Project Homepage: VLA-Arena Website
- GitHub Repository: PKU-Alignment/VLA-Arena
- Documentation: Read the Docs
Dataset Description
This dataset is the Level 1 (L1) - Medium (M) variant of the VLA-Arena benchmark data. It contains a balanced set of human demonstrations suitable for standard training scenarios.
- Tasks Covered: 55 distinct tasks at Difficulty Level 1.
- Total Trajectories: 1,650 (30 trajectories per task).
- Task Suites: Covers Safety, Distractor, Extrapolation, and Long Horizon domains.
Format and Compatibility
This dataset is provided strictly in the HDF5 format.
The data structure includes standardized features for:
- Observation: High-resolution RGB images (256x256) and robot state vectors.
- Action: 7-DoF continuous control signals (End-effector pose + Gripper).
- Language: Natural language task instructions.
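As a concrete illustration of this layout, the sketch below writes and then reads a tiny synthetic demo file with `h5py`. The group and attribute names used here (`observations/rgb`, `observations/state`, `actions`, `language_instruction`) are assumptions for illustration only; inspect a real file from the dataset to see the actual keys.

```python
import h5py
import numpy as np

def write_demo(path):
    """Create a tiny synthetic demo file mimicking the assumed layout."""
    with h5py.File(path, "w") as f:
        # 3 timesteps of 256x256 RGB observations and 8-D state vectors
        f.create_dataset("observations/rgb",
                         data=np.zeros((3, 256, 256, 3), dtype=np.uint8))
        f.create_dataset("observations/state",
                         data=np.zeros((3, 8), dtype=np.float32))
        # 7-DoF continuous actions (end-effector pose + gripper)
        f.create_dataset("actions", data=np.zeros((3, 7), dtype=np.float32))
        f.attrs["language_instruction"] = "close all of the drawers of the cabinet"

def summarize_demo(path):
    """Walk the file and report each dataset's shape plus the instruction."""
    with h5py.File(path, "r") as f:
        shapes = {}
        f.visititems(lambda name, obj: shapes.update({name: obj.shape})
                     if isinstance(obj, h5py.Dataset) else None)
        return shapes, f.attrs.get("language_instruction")

write_demo("demo.hdf5")
shapes, instruction = summarize_demo("demo.hdf5")
print(shapes)       # dataset name -> shape
print(instruction)  # natural-language task string
```

Replaying or training on a real demo file would follow the same pattern: open with `h5py.File(path, "r")` and index the observation, action, and language entries per timestep.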
Dataset Construction and Preprocessing
To ensure high data quality and fair comparison, the dataset underwent several rigorous construction and quality control steps:
1. High-Resolution Regeneration
The demonstrations were re-rendered at a resolution of 256 x 256: simple upscaling of the original 128 x 128 benchmark images produced poor visual fidelity, so we re-executed the recorded action trajectories in the simulator to capture sharper observations suitable for modern VLA backbones.
2. Camera Selection and Rotation
- Viewpoint: Only the static third-person camera images are utilized. Wrist camera images were discarded to ensure fair comparison across baselines.
- Rotation: All third-person camera images were rotated by 180 degrees at both train and test time to correct for the visual inversion observed in the simulation environment.
3. Success Filtering
All demonstrations were replayed in the simulation environments. Any trajectory that failed to meet the task's success criteria during replay was filtered out.
4. Action Filtering (Iterative Optimization)
Standard data cleaning often filters out all no-operation (no-op) actions. However, we found that removing every no-op significantly decreased the trajectory success rate upon playback in the VLA-Arena setup. To address this, we adopted an iterative optimization strategy:
- Instead of removing all no-ops, we sequentially attempted to preserve N no-operation actions (N = 4, 8, 12, 16), specifically around critical state transition points (e.g., gripper closure and opening).
- Only trajectories that remained successful during validation playback were retained.
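The 180-degree rotation from step 2 amounts to flipping both spatial axes of each image, applied identically at train and test time. A minimal NumPy sketch (the function name `rotate_180` is ours, not part of the benchmark code):

```python
import numpy as np

def rotate_180(frames: np.ndarray) -> np.ndarray:
    """Rotate a (T, H, W, C) batch or a single (H, W, C) frame by 180 degrees.

    Flipping both spatial axes is equivalent to a 180-degree rotation.
    """
    spatial = (1, 2) if frames.ndim == 4 else (0, 1)
    return np.flip(frames, axis=spatial)

frame = np.arange(12).reshape(2, 2, 3)
print(rotate_180(frame)[0, 0])  # the pixel that was at the bottom-right
```

Applying `rotate_180` twice recovers the original frame, which makes the transform easy to sanity-check in a preprocessing pipeline.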
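The iterative no-op strategy from step 4 can be sketched as follows. This is an illustrative reconstruction, not the benchmark's actual code: the `is_noop` threshold, the window interpretation of N around gripper transitions, and the `replay_succeeds` callback (which in practice would wrap a simulator rollout and success check) are all assumptions.

```python
import numpy as np

def is_noop(action, eps=1e-4):
    """Treat an action as a no-op when the arm barely moves
    (all dims except the last, gripper, are near zero)."""
    return np.abs(action[:-1]).max() < eps

def filter_noops(actions, keep_n):
    """Drop no-op actions, but preserve those within `keep_n` steps of a
    gripper transition (sign change in the last action dimension)."""
    gripper = np.sign(actions[:, -1])
    transitions = {i for i in range(1, len(actions))
                   if gripper[i] != gripper[i - 1]}
    keep = [i for i, a in enumerate(actions)
            if not is_noop(a)
            or any(abs(i - t) <= keep_n for t in transitions)]
    return actions[keep]

def iterative_filter(actions, replay_succeeds, budgets=(4, 8, 12, 16)):
    """Try increasingly permissive no-op budgets; return the first filtered
    trajectory that still succeeds on replay, else the original."""
    for n in budgets:
        candidate = filter_noops(actions, n)
        if replay_succeeds(candidate):
            return candidate
    return actions
```

The design mirrors the description above: aggressive filtering is attempted first, and validation playback acts as the gatekeeper that decides which filtered trajectory is retained.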
Evaluation & Usage
This dataset is designed to be used within the VLA-Arena benchmark ecosystem. It allows for the training of models that are subsequently tested across 11 specialized suites with difficulty levels ranging from L0 (Basic) to L2 (Advanced).
For detailed evaluation instructions, metrics, and scripts, please refer to the VLA-Arena repository.