# TerraMesh

A planetary‑scale, multimodal, analysis‑ready dataset for Earth observation foundation models: **TerraMesh** merges data from **Sentinel‑1 SAR, Sentinel‑2 optical, Copernicus DEM, NDVI, and land‑cover** sources into more than **9 million co‑registered patches** ready for large‑scale representation learning.

You can find more information about the data sampling and preprocessing in our paper: [TerraMesh: A Planetary Mosaic of Multimodal Earth Observation Data](https://huggingface.co/papers/2504.11172).

The archive ships two top‑level splits `train/` and `val/`, each holding one folder per modality:

```text
TerraMesh
├── train
│   ├── DEM
│   ├── LULC
│   ├── NDVI
│   ├── S1GRD
│   ├── S1RTC
│   ├── S2L1C
│   ├── S2L2A
│   └── S2RGB
├── val
│   ├── DEM
│   └── ...
└── terramesh.py
```

Each folder includes up to 889 shard files, each containing up to 10240 samples. Samples from MajorTom-Core are stored in shards following the pattern `majortom_{split}_{id}.tar`, while shards with SSL4EO-S12 samples start with `ssl4eos12_`.
Samples are stored as zipped Zarr files, which can be loaded with `zarr` (version <= 2.18) or `xarray.open_zarr()`. Each sample location includes seven modalities that share the same shard and sample name. Note that each sample includes only one Sentinel-1 version (S1GRD or S1RTC) because of different processing versions in the source datasets.

Each Zarr file includes aligned metadata, as shown by this S1GRD example from sample `ssl4eos12_val_0080385.zarr.zip`:

```
<xarray.Dataset> Size: 283kB
...
```
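
For a quick look at a single downloaded sample outside the data loader, you can open the zipped Zarr store directly. This is a minimal sketch assuming the pinned package versions below and a locally available sample file:

```python
import xarray as xr

# Open one sample; with zarr <= 2.18, a path ending in ".zip" is read
# through a ZipStore, so the zipped store can be opened in place.
ds = xr.open_zarr("ssl4eos12_val_0080385.zarr.zip")
print(ds)  # should print a summary like the listing above
```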

Install the required packages with:

```shell
pip install huggingface_hub webdataset torch numpy albumentations fsspec braceexpand zarr==2.18.0 numcodecs==0.15.1
```

Important! The dataset was created using `zarr==2.18.0` and `numcodecs==0.15.1`. Zarr 3.0 has backwards-compatibility issues, and Zarr 2.18 is incompatible with NumCodecs >= 0.16, so keep both packages pinned.
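
A quick way to confirm that the pinned versions are active in your environment (purely a convenience check):

```python
import numcodecs
import zarr

# The dataset was written with zarr 2.18.0 and numcodecs 0.15.1.
assert zarr.__version__.startswith("2.18"), zarr.__version__
assert numcodecs.__version__.startswith("0.15"), numcodecs.__version__
```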

### Download

You can download the dataset with the Hugging Face CLI tool. Please note that the full dataset requires 17 TB of storage.

```shell
hf download ibm-esa-geospatial/TerraMesh --repo-type dataset --local-dir data/TerraMesh
```

If you want to download only a subset of the data, you can specify it with `--include`.

```shell
# Only download val data
hf download ibm-esa-geospatial/TerraMesh --repo-type dataset --include "val/*" --local-dir data/TerraMesh

# Only download a single modality (e.g., S2L2A)
hf download ibm-esa-geospatial/TerraMesh --repo-type dataset --include "*/S2L2A/*" --local-dir data/TerraMesh
```
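
The same download, including the `--include`-style filtering, also works from Python via `huggingface_hub`. A sketch: `snapshot_download` with `allow_patterns` is the standard API, and the local directory is just an example:

```python
from huggingface_hub import snapshot_download

# Download only the S2L2A modality of both splits.
snapshot_download(
    repo_id="ibm-esa-geospatial/TerraMesh",
    repo_type="dataset",
    allow_patterns=["*/S2L2A/*"],
    local_dir="data/TerraMesh",
)
```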

### Data loader

We provide the data loading code in `terramesh.py`, which is downloaded together with the dataset. You can also fetch the file directly:

```shell
wget https://huggingface.co/datasets/ibm-esa-geospatial/TerraMesh/resolve/main/terramesh.py
```

You can use the `build_terramesh_dataset` function to initialize a dataset, which uses the WebDataset package to load samples from the shard files. You can stream the data from Hugging Face using the URLs or download the full dataset and pass a local path (e.g., `data/TerraMesh/`).

```python
from terramesh import build_terramesh_dataset
from torch.utils.data import DataLoader

dataset = build_terramesh_dataset(
    ...
)

# Set batch size to None because batching is handled by WebDataset.
dataloader = DataLoader(dataset, batch_size=None, num_workers=4, persistent_workers=True, prefetch_factor=1)

# Iterate over the dataloader
for batch in dataloader:
    ...
```
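
Inside the loop, each batch behaves like a dictionary keyed by modality and metadata fields. A sketch; which keys appear depends on the modalities you requested (see the key list further below):

```python
for batch in dataloader:
    # Image modalities arrive as stacked tensors, e.g. (batch, channels, height, width).
    print(batch["__key__"][:2])  # sample identifiers
    print(batch["S2L2A"].shape)  # only present if S2L2A was requested
    break
```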

We provide some additional code for wrapping `albumentations` transform functions.
We recommend albumentations because parameters are shared between all image modalities (e.g., the same random crop).
However, it requires some code wrapping to bring the data into the expected shape.

```python
import albumentations as A
...
```
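
For instance, a shared train-time transform could be composed like this. A sketch: the crop and flip choices are illustrative, and the import location of the `MultimodalTransforms` wrapper used in the examples below is an assumption:

```python
import albumentations as A

from terramesh import MultimodalTransforms  # assumed location of the wrapper

modalities = ["S2L2A", "S1RTC"]  # example subset

train_transform = MultimodalTransforms(
    transforms=A.Compose(
        [A.RandomCrop(height=224, width=224), A.HorizontalFlip(p=0.5)],
        # One set of sampled parameters is applied to every image modality.
        additional_targets={m: "image" for m in modalities + ["cloud_mask"]},
    ),
    non_image_modalities=["__key__", "__url__", "center_lon", "center_lat"]
    + ["time_" + m for m in modalities],
)
```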

The resulting batch keys include: `["__key__", "__url__", "S2L2A", "S1RTC", ...]`.
Therefore, you need to update the `transform` if you use one:

```python
val_transform = MultimodalTransforms(
    transforms=A.Compose(
        [...],
        additional_targets={m: "image" for m in modalities + ["cloud_mask"]}
    ),
    non_image_modalities=["__key__", "__url__", "center_lon", "center_lat"] + ["time_" + m for m in modalities]
)
```

For a single-modality dataset, "time" does not have a suffix, and the following changes to the `transform` are required:

```python
val_transform = MultimodalTransforms(
    transforms=A.Compose(
        [...],
        additional_targets={"cloud_mask": "image"}
    ),
    non_image_modalities=["__key__", "__url__", "center_lon", "center_lat", "time"]
)
```

Note that the center points are not updated when random cropping is used.
The cloud mask provides the classes land (0), water (1), snow (2), thin cloud (3), thick cloud (4), cloud shadow (5), and no data (6).
DEM does not return a time value, while LULC uses the S2 timestamp because the augmentation uses the S2 cloud and ice mask. Time values are returned as integers but can be converted back to datetime with:

```python
batch["time_S2L2A"].numpy().astype("datetime64[ns]")
```
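
If you need the cloud-mask classes programmatically, they can be kept as a small lookup table (just a convenience mapping of the classes listed above):

```python
# Cloud-mask class indices as documented above.
CLOUD_MASK_CLASSES = {
    0: "land",
    1: "water",
    2: "snow",
    3: "thin cloud",
    4: "thick cloud",
    5: "cloud shadow",
    6: "no data",
}
```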

The satellite data (S2L1C, S2L2A, S1GRD, S1RTC) is sourced from the SSL4EO‑S12 and MajorTom-Core datasets.

The LULC data is provided by [ESRI, Impact Observatory, and Microsoft](https://planetarycomputer.microsoft.com/dataset/io-lulc-annual-v02) (CC-BY-4.0).

The cloud masks used for augmenting the LULC maps are provided as metadata and are produced using the [SEnSeIv2](https://github.com/aliFrancis/SEnSeIv2/tree/main) model.

The DEM data is produced using [Copernicus WorldDEM-30](https://dataspace.copernicus.eu/explore-data/data-collections/copernicus-contributing-missions/collections-description/COP-DEM) © DLR e.V. 2010-2014 and © Airbus Defence and Space GmbH 2014-2018, provided under COPERNICUS by the European Union and ESA; all rights reserved.