ManiSkill2 Data

ManiSkill2 is a unified benchmark for learning generalizable robotic manipulation skills, powered by SAPIEN. It features 20 out-of-the-box task families with 2000+ diverse object models and 4M+ demonstration frames. Moreover, it supports fast visual-input learning: a CNN-based policy can collect samples at about 2000 FPS with 1 GPU and 16 processes on a workstation. The benchmark can be used to study a wide range of algorithms: 2D & 3D vision-based reinforcement learning, imitation learning, sense-plan-act, and more. This is the Hugging Face datasets page for all data related to ManiSkill2, including assets, robot demonstrations, and pretrained models.

For detailed information about ManiSkill2, head over to our GitHub repository, website, ICLR 2023 paper, or documentation.

Note that you currently must use the mani_skill2 package to download the data, as shown below; loading through Hugging Face datasets does not work as intended just yet.

Assets

Some environments require you to download additional assets, which are stored here.

You can download all the assets by running

python -m mani_skill2.utils.download_asset all

or download task-specific assets by running

python -m mani_skill2.utils.download_asset ${ENV_ID}
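
For example, if AssemblingKits-v0 (one of the rigid-body tasks in this dataset) is among the tasks that need extra assets, you could fetch only its assets by substituting its environment ID; the full list of tasks that require additional assets is given in the ManiSkill2 documentation:

python -m mani_skill2.utils.download_asset AssemblingKits-v0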

Demonstration Data

The robot demonstrations consist of 4 million+ frames across 20+ robot manipulation tasks.

We provide a command line tool (mani_skill2.utils.download_demo) to download demonstrations from here.

# Download all the demonstration datasets
python -m mani_skill2.utils.download_demo all
# Download the demonstration dataset for a specific task
python -m mani_skill2.utils.download_demo ${ENV_ID}
# Download the demonstration datasets for all rigid-body tasks to "./demos"
python -m mani_skill2.utils.download_demo rigid_body -o ./demos
# Download the demonstration datasets for all soft-body tasks
python -m mani_skill2.utils.download_demo soft_body

To learn how to use the demonstrations and what environments are available, go to the demonstrations documentation page: https://haosulab.github.io/ManiSkill2/concepts/demonstrations.html
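
As a quick sanity check after downloading, the sketch below opens one task's demonstration files. It is a minimal example, assuming each task folder contains a trajectory.json metadata file (with env_info and episodes fields) alongside a trajectory.h5 file storing one group per episode; the exact paths and key names may differ, so treat the demonstrations documentation above as authoritative.

import json
import h5py  # pip install h5py

# Assumed per-task folder layout after `download_demo rigid_body -o ./demos`.
task_dir = "demos/v0/rigid_body/AssemblingKits-v0"

# The JSON metadata describes the environment and lists the recorded episodes.
with open(f"{task_dir}/trajectory.json") as f:
    meta = json.load(f)
print(meta["env_info"])        # environment configuration
print(len(meta["episodes"]))   # number of recorded episodes

# The HDF5 file stores the actual trajectory data, one group per episode.
with h5py.File(f"{task_dir}/trajectory.h5", "r") as data:
    print(list(data.keys())[:5])  # first few episode groups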

License

All rigid-body environments in ManiSkill2 are licensed under fully permissive licenses (e.g., Apache-2.0).

However, the soft-body environments follow Warp's license; currently, they are licensed under the NVIDIA Source Code License for Warp.

The assets are licensed under CC BY-NC 4.0.

Citation

If you use ManiSkill2 or its assets, models, and demonstrations, please cite using the following BibTeX entry:

@inproceedings{gu2023maniskill2,
  title={ManiSkill2: A Unified Benchmark for Generalizable Manipulation Skills},
  author={Gu, Jiayuan and Xiang, Fanbo and Li, Xuanlin and Ling, Zhan and Liu, Xiqiang and Mu, Tongzhou and Tang, Yihe and Tao, Stone and Wei, Xinyue and Yao, Yunchao and Yuan, Xiaodi and Xie, Pengwei and Huang, Zhiao and Chen, Rui and Su, Hao},
  booktitle={International Conference on Learning Representations},
  year={2023}
}