Kubric Movi-F Dataset for LBM (version 2)
Preprocessed Kubric Movi-F data, largely following the CoTracker preprocessing, with additional depth maps and camera parameters.
Each scene contains the following structure when extracted:
0000/
├── depths
│   ├── 000.png
│   ├── ...
│   └── 023.png
├── frames
│   ├── 000.png
│   ├── ...
│   └── 023.png
└── 0000.npy
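For illustration, a minimal sketch of loading the frames and depth maps of one extracted scene with NumPy and Pillow; the paths follow the layout above, everything else is an assumption and not part of the dataset's official tooling:

```python
from pathlib import Path

import numpy as np
from PIL import Image

scene_dir = Path("0000")  # one extracted scene, laid out as shown above

# 24 RGB frames -> (24, H, W, 3), uint8
frames = np.stack([
    np.array(Image.open(p).convert("RGB"))
    for p in sorted((scene_dir / "frames").glob("*.png"))
])

# 24 depth maps stored as 16-bit PNGs -> (24, H, W); cast to uint16 in case
# Pillow decodes the 16-bit grayscale PNGs to a wider integer type
depths = np.stack([
    np.array(Image.open(p)).astype(np.uint16)
    for p in sorted((scene_dir / "depths").glob("*.png"))
])

print(frames.shape, frames.dtype)  # (24, H, W, 3) uint8
print(depths.shape, depths.dtype)  # (24, H, W) uint16
```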
Data Format Details:
- frames: RGB images in .png format
- depths: depth maps in .png format, stored as uint16
- annotation: per-scene annotations in .npy format with the following fields (see the loading sketch after this list):
- 'depth_range': float32, [depth_min, depth_max], in metric units
- 'coords': float32, (2048, 24, 2), 2D trajectories
- 'queries': float32, (2048, 2), 2D query points
- 'reproj_depth': float32, (2048, 24), depth of 2D trajectories
- 'visibility': bool, (2048, 24), visibility of 2D trajectories, True: visible, False: occluded/out of view
- 'traj_3d': float32, (2048, 24, 3), 3D trajectories in world coordinates
- 'intrinsics': float32, (24, 3, 3), camera intrinsics in Kubric format
- 'extrinsics': float32, (24, 4, 4), camera extrinsics in Kubric format
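A sketch of reading the per-scene annotation file: the quoted keys above suggest the .npy file stores a pickled Python dict, so allow_pickle=True is assumed here, and the metric-depth helper assumes the uint16 depth PNGs are linearly quantized over depth_range (neither detail is stated above):

```python
import numpy as np

# Assumption: 0000.npy holds a pickled dict with the keys listed above.
ann = np.load("0000/0000.npy", allow_pickle=True).item()

for key, value in ann.items():
    shape = value.shape if hasattr(value, "shape") else value
    print(key, shape)

# Hypothetical helper: recover metric depth from a uint16 depth PNG,
# assuming linear quantization over [depth_min, depth_max].
def depth_png_to_metric(depth_uint16: np.ndarray, depth_range) -> np.ndarray:
    depth_min, depth_max = depth_range
    return depth_min + depth_uint16.astype(np.float32) / 65535.0 * (depth_max - depth_min)
```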
Please note:
- Compared with the data used to train the LBM model, version 2 adds depth and 3D annotations.