Tasks: Image Segmentation
Sub-tasks: instance-segmentation
Size Categories: 1M<n<10M
License: cc-by-4.0
LVIS
Dataset Summary
This dataset is an implementation of the LVIS dataset for the Model Database datasets library. Please visit the original website for more information.
Loading
This code returns a DatasetDict containing the train, validation and test splits.
from datasets import load_dataset
dataset = load_dataset("winvoker/lvis")
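If you only need a single split, load_dataset also accepts a split argument (a standard datasets feature, shown here as a brief sketch):

from datasets import load_dataset

validation = load_dataset("winvoker/lvis", split="validation")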
The objects field is a dictionary containing annotation information such as bounding boxes and classes. The returned DatasetDict has the following structure:
DatasetDict({
    train: Dataset({
        features: ['id', 'image', 'height', 'width', 'objects'],
        num_rows: 100170
    })
    validation: Dataset({
        features: ['id', 'image', 'height', 'width', 'objects'],
        num_rows: 4809
    })
    test: Dataset({
        features: ['id', 'image', 'height', 'width', 'objects'],
        num_rows: 19822
    })
})
Access Splits
train = dataset["train"]
validation = dataset["validation"]
test = dataset["test"]
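Each split is a regular datasets.Dataset, so the usual attributes and indexing are available (a brief sketch based on the structure above):

print(train.num_rows)   # 100170
print(train.features)   # 'id', 'image', 'height', 'width', 'objects'
sample = train[0]       # a single example as a Python dict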
An example row is as follows.
{
    'id': 0,
    'image': '000000437561.jpg',
    'height': 480,
    'width': 640,
    'objects': {
        'bboxes': [[392, 271, 14, 3]],
        'classes': [117],
        'segmentation': [[376, 272, 375, 270, 372, 269, 371, 269, 373, 269, 373]]
    }
}
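The bounding boxes follow the LVIS/COCO [x, y, width, height] convention, and each segmentation entry is a flat polygon of alternating x and y coordinates. A minimal sketch of unpacking them (assuming this convention; verify against the original LVIS annotations):

sample = train[0]

# One [x, y, w, h] box and one class id per annotated object.
for bbox, class_id in zip(sample["objects"]["bboxes"], sample["objects"]["classes"]):
    x, y, w, h = bbox
    print(f"class {class_id}: from ({x}, {y}) to ({x + w}, {y + h})")

# Group the first polygon's flat coordinate list into (x, y) points.
polygon = sample["objects"]["segmentation"][0]
points = list(zip(polygon[0::2], polygon[1::2]))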