# Dataset-Viewer

## Docs

- [DuckDB](https://huggingface.co/docs/dataset-viewer/duckdb.md)
- [Overview](https://huggingface.co/docs/dataset-viewer/parquet_process.md)
- [Download slices of rows](https://huggingface.co/docs/dataset-viewer/rows.md)
- [List splits and subsets](https://huggingface.co/docs/dataset-viewer/splits.md)
- [Quickstart](https://huggingface.co/docs/dataset-viewer/quick_start.md)
- [mlcroissant](https://huggingface.co/docs/dataset-viewer/mlcroissant.md)
- [Search text in a dataset](https://huggingface.co/docs/dataset-viewer/search.md)
- [Check dataset validity](https://huggingface.co/docs/dataset-viewer/valid.md)
- [List Parquet files](https://huggingface.co/docs/dataset-viewer/parquet.md)
- [PostgreSQL](https://huggingface.co/docs/dataset-viewer/postgresql.md)
- [Server infrastructure](https://huggingface.co/docs/dataset-viewer/server.md)
- [Filter rows in a dataset](https://huggingface.co/docs/dataset-viewer/filter.md)
- [Analyze a dataset on the Hub](https://huggingface.co/docs/dataset-viewer/analyze_data.md)
- [ClickHouse](https://huggingface.co/docs/dataset-viewer/clickhouse.md)
- [PySpark](https://huggingface.co/docs/dataset-viewer/pyspark.md)
- [Get the number of rows and the size in bytes](https://huggingface.co/docs/dataset-viewer/size.md)
- [Preview a dataset](https://huggingface.co/docs/dataset-viewer/first_rows.md)
- [Get dataset information](https://huggingface.co/docs/dataset-viewer/info.md)
- [Polars](https://huggingface.co/docs/dataset-viewer/polars.md)
- [Data types](https://huggingface.co/docs/dataset-viewer/data_types.md)
- [Splits and subsets](https://huggingface.co/docs/dataset-viewer/configs_and_splits.md)
- [Pandas](https://huggingface.co/docs/dataset-viewer/pandas.md)
- [Explore statistics over split data](https://huggingface.co/docs/dataset-viewer/statistics.md)
- [cuDF](https://huggingface.co/docs/dataset-viewer/cudf.md)
- [🤗 Dataset viewer](https://huggingface.co/docs/dataset-viewer/index.md)
- [Get Croissant metadata](https://huggingface.co/docs/dataset-viewer/croissant.md)

### DuckDB
https://huggingface.co/docs/dataset-viewer/duckdb.md

# DuckDB

[DuckDB](https://duckdb.org/docs/) is an in-process database that reads and queries Parquet files very quickly. Begin by creating a connection to DuckDB, and then install and load the [`httpfs`](https://duckdb.org/docs/extensions/httpfs.html) extension to read and write remote files:

<inferencesnippet>
<python>
```py
import duckdb

url = "https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet"

con = duckdb.connect()
con.execute("INSTALL httpfs;")
con.execute("LOAD httpfs;")
```
</python>
<js>
```js
var duckdb = require('duckdb');
var db = new duckdb.Database(':memory:');
var con = db.connect();
con.exec('INSTALL httpfs');
con.exec('LOAD httpfs');

const url = "https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet"
```
</js>
</inferencesnippet>

Now you can write and execute your SQL query on the Parquet file:

<inferencesnippet>
<python>
```py
con.sql(f"SELECT sign, count(*), AVG(LENGTH(text)) AS avg_blog_length FROM '{url}' GROUP BY sign ORDER BY avg_blog_length DESC LIMIT(5)")
┌───────────┬──────────────┬────────────────────┐
│   sign    │ count_star() │  avg_blog_length   │
│  varchar  │    int64     │       double       │
├───────────┼──────────────┼────────────────────┤
│ Cancer    │        38956 │ 1206.5212034089743 │
│ Leo       │        35487 │ 1180.0673767858652 │
│ Aquarius  │        32723 │ 1152.1136815084192 │
│ Virgo     │        36189 │ 1117.1982094006466 │
│ Capricorn │        31825 │  1102.397360565593 │
└───────────┴──────────────┴────────────────────┘
```
</python>
<js>
```js
con.all(`SELECT sign, count(*), AVG(LENGTH(text)) AS avg_blog_length FROM '${url}' GROUP BY sign ORDER BY avg_blog_length DESC LIMIT(5)`, function(err, res) {
  if (err) {
    throw err;
  }
  console.log(res)
});
```
</js>
</inferencesnippet>

To query multiple files - for example, if the dataset is sharded:

<inferencesnippet>
<python>
```py
urls = ["https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet", "https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0001.parquet"]

con.sql(f"SELECT sign, count(*), AVG(LENGTH(text)) AS avg_blog_length FROM read_parquet({urls}) GROUP BY sign ORDER BY avg_blog_length DESC LIMIT(5)")
┌──────────┬──────────────┬────────────────────┐
│   sign   │ count_star() │  avg_blog_length   │
│ varchar  │    int64     │       double       │
├──────────┼──────────────┼────────────────────┤
│ Aquarius │        49687 │  1191.417211745527 │
│ Leo      │        53811 │ 1183.8782219248853 │
│ Cancer   │        65048 │ 1158.9691612347804 │
│ Gemini   │        51985 │ 1156.0693084543618 │
│ Virgo    │        60399 │ 1140.9584430205798 │
└──────────┴──────────────┴────────────────────┘
```
</python>
<js>
```js
const urls = ["https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet", "https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0001.parquet"];

con.all(`SELECT sign, count(*), AVG(LENGTH(text)) AS avg_blog_length FROM read_parquet(${JSON.stringify(urls)}) GROUP BY sign ORDER BY avg_blog_length DESC LIMIT(5)`, function(err, res) {
  if (err) {
    throw err;
  }
  console.log(res)
});
```
</js>
</inferencesnippet>

[DuckDB-Wasm](https://duckdb.org/docs/api/wasm), a package powered by [WebAssembly](https://webassembly.org/), is also available for running DuckDB in any browser. This could be useful, for instance, if you want to create a web app to query Parquet files from the browser!


<EditOnGithub source="https://github.com/huggingface/dataset-viewer/blob/main/docs/source/duckdb.md" />

### Overview
https://huggingface.co/docs/dataset-viewer/parquet_process.md

# Overview

The dataset viewer automatically converts and publishes public datasets smaller than 5GB on the Hub as Parquet files. If the dataset is already in Parquet format, it is published as is. [Parquet](https://parquet.apache.org/docs/) files are column-based, and they shine when you're working with big data.

For private datasets, the feature is provided if the repository is owned by a [PRO user](https://huggingface.co/pricing) or an [Enterprise Hub organization](https://huggingface.co/enterprise).

There are several different libraries you can use to work with the published Parquet files:

- [ClickHouse](https://clickhouse.com/docs/en/intro), a column-oriented database management system for online analytical processing
- [cuDF](https://docs.rapids.ai/api/cudf/stable/), a Python GPU DataFrame library
- [DuckDB](https://duckdb.org/docs/), a high-performance SQL database for analytical queries
- [Pandas](https://pandas.pydata.org/docs/index.html), a data analysis tool for working with data structures
- [Polars](https://pola-rs.github.io/polars-book/user-guide/), a Rust-based DataFrame library
- [PostgreSQL via pgai](https://github.com/timescale/pgai/blob/main/docs/load_dataset_from_huggingface.md), a powerful, open source object-relational database system
- [mlcroissant](https://github.com/mlcommons/croissant/tree/main/python/mlcroissant), a library for loading datasets from Croissant metadata
- [pyspark](https://spark.apache.org/docs/latest/api/python), the Python API for Apache Spark
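
As a minimal illustration, here is a sketch that reads one published Parquet file with pandas (one of the libraries above). The URL is the train shard of the `cornell-movie-review-data/rotten_tomatoes` export shown later in these docs; reading remote URLs with pandas also assumes `fsspec` is installed.

```python
import pandas as pd

# One shard of the Parquet export of a dataset (as returned by the /parquet endpoint)
url = "https://huggingface.co/datasets/cornell-movie-review-data/rotten_tomatoes/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet"

df = pd.read_parquet(url)
print(df.head())
```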


<EditOnGithub source="https://github.com/huggingface/dataset-viewer/blob/main/docs/source/parquet_process.md" />

### Download slices of rows
https://huggingface.co/docs/dataset-viewer/rows.md

# Download slices of rows

The dataset viewer provides a `/rows` endpoint for visualizing any slice of rows of a dataset. This lets you walk through and inspect the data contained in a dataset.

<div class="flex justify-center">
    <img style="margin-bottom: 0;" class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets-server/oasst1_light.png"/>
    <img style="margin-bottom: 0;" class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets-server/oasst1_dark.png"/>
</div>

<Tip warning={true}>
  Currently, only {" "}
  <a href="./parquet">datasets with parquet exports</a>
  are supported so the dataset viewer can extract any slice of rows without downloading the
  whole dataset.
</Tip>

This guide shows you how to use the dataset viewer's `/rows` endpoint to download slices of a dataset.
Feel free to also try it out with [Postman](https://www.postman.com/huggingface/workspace/hugging-face-apis/request/23242779-32d6a8be-b800-446a-8cee-f6b5ca1710df),
[RapidAPI](https://rapidapi.com/hugging-face-hugging-face-default/api/hugging-face-datasets-api),
or [ReDoc](https://redocly.github.io/redoc/?url=https://datasets-server.huggingface.co/openapi.json#operation/listFirstRows).

The `/rows` endpoint accepts five query parameters:

- `dataset`: the dataset name, for example `nyu-mll/glue` or `mozilla-foundation/common_voice_10_0`
- `config`: the subset name, for example `cola`
- `split`: the split name, for example `train`
- `offset`: the offset of the slice, for example `150`
- `length`: the length of the slice, for example `10` (maximum: `100`)

<inferencesnippet>
<python>
```python
import requests
headers = {"Authorization": f"Bearer {API_TOKEN}"}
API_URL = "https://datasets-server.huggingface.co/rows?dataset=ibm/duorc&config=SelfRC&split=train&offset=150&length=10"
def query():
    response = requests.get(API_URL, headers=headers)
    return response.json()
data = query()
```
</python>
<js>
```js
import fetch from "node-fetch";
async function query(data) {
    const response = await fetch(
        "https://datasets-server.huggingface.co/rows?dataset=ibm/duorc&config=SelfRC&split=train&offset=150&length=10",
        {
            headers: { Authorization: `Bearer ${API_TOKEN}` },
            method: "GET"
        }
    );
    const result = await response.json();
    return result;
}
query().then((response) => {
    console.log(JSON.stringify(response));
});
```
</js>
<curl>
```curl
curl "https://datasets-server.huggingface.co/rows?dataset=ibm/duorc&config=SelfRC&split=train&offset=150&length=10" \
        -X GET \
        -H "Authorization: Bearer ${API_TOKEN}"
```
</curl>
</inferencesnippet>

The endpoint response is a JSON containing two keys:

- The [`features`](https://huggingface.co/docs/datasets/about_dataset_features) of a dataset, including the column's name and data type.
- The slice of `rows` of a dataset and the content contained in each column of a specific row.

For example, here are the `features` and the slice of `rows` of the `ibm/duorc`/`SelfRC` train split from 150 to 151:

```json
// https://datasets-server.huggingface.co/rows?dataset=ibm/duorc&config=SelfRC&split=train&offset=150&length=2
{
  "features": [
    {
      "feature_idx": 0,
      "name": "plot_id",
      "type": { "dtype": "string", "_type": "Value" }
    },
    {
      "feature_idx": 1,
      "name": "plot",
      "type": { "dtype": "string", "_type": "Value" }
    },
    {
      "feature_idx": 2,
      "name": "title",
      "type": { "dtype": "string", "_type": "Value" }
    },
    {
      "feature_idx": 3,
      "name": "question_id",
      "type": { "dtype": "string", "_type": "Value" }
    },
    {
      "feature_idx": 4,
      "name": "question",
      "type": { "dtype": "string", "_type": "Value" }
    },
    {
      "feature_idx": 5,
      "name": "answers",
      "type": {
        "feature": { "dtype": "string", "_type": "Value" },
        "_type": "List"
      }
    },
    {
      "feature_idx": 6,
      "name": "no_answer",
      "type": { "dtype": "bool", "_type": "Value" }
    }
  ],
  "rows": [
    {
      "row_idx": 150,
      "row": {
        "plot_id": "/m/03wj_q",
        "plot": "The film is centered on Mortal Kombat, a fighting tournament between the representatives of the realms of Earth and Outworld conceived by the Elder Gods amid looming invasion of the Earth by Outworld. If the realm of Outworld wins Mortal Kombat ten consecutive times, its Emperor Shao Kahn will be able to invade and conquer the Earth realm.\nShaolin monk Liu Kang and his comrades, movie star Johnny Cage and military officer Sonya Blade were handpicked by Raiden, the god of thunder and defender of the Earth realm, to overcome their powerful adversaries in order to prevent Outworld from winning their tenth straight Mortal Kombat tournament. Each of the three has his or her own reason for competing: Liu seeks revenge against the tournament host Shang Tsung for killing his brother Chan; Sonya seeks revenge on an Australian crime lord Kano; and Cage, having been branded as a fake by the media, seeks to prove otherwise.\nAt Shang Tsung's island, Liu is attracted to Princess Kitana, Shao Kahn's adopted daughter. Aware that Kitana is a dangerous adversary because she is the rightful heir to Outworld and that she will attempt to ally herself with the Earth warriors, Tsung orders the creature Reptile to spy on her. Liu defeats his first opponent and Sonya gets her revenge on Kano by snapping his neck. Cage encounters and barely beats Scorpion. Liu engages in a brief duel with Kitana, who secretly offers him cryptic advice for his next battle. Liu's next opponent is Sub-Zero, whose defense seems untouched because of his freezing abilities, until Liu recalls Kitana's advice and uses it to kill Sub-Zero.\nPrince Goro enters the tournament and mercilessly crushes every opponent he faces. One of Cage's peers, Art Lean, is defeated by Goro as well and has his soul taken by Shang Tsung. Sonya worries that they may not win against Goro, but Raiden disagrees. He reveals their own fears and egos are preventing them from winning the tournament.\nDespite Sonya's warning, Cage comes to Tsung to request a fight with Goro. The sorcerer accepts on the condition that he be allowed to challenge any opponent of his choosing, anytime and anywhere he chooses. Raiden tries to intervene, but the conditions are agreed upon before he can do so. After Shang Tsung leaves, Raiden confronts Cage for what he has done in challenging Goro, but is impressed when Cage shows his awareness of the gravity of the tournament. Cage faces Goro and uses guile and the element of surprise to defeat the defending champion. Now desperate, Tsung takes Sonya hostage and takes her to Outworld, intending to fight her as his opponent. Knowing that his powers are ineffective there and that Sonya cannot defeat Tsung by herself, Raiden sends Liu and Cage into Outworld in order to rescue Sonya and challenge Tsung. In Outworld, Liu is attacked by Reptile, but eventually gains the upper hand and defeats him. Afterward, Kitana meets up with Cage and Liu, revealing to the pair the origins of both herself and Outworld. Kitana allies with them and helps them to infiltrate Tsung's castle.\nInside the castle tower, Shang Tsung challenges Sonya to fight him, claiming that her refusal to accept will result in the Earth realm forfeiting Mortal Kombat (this is, in fact, a lie on Shang's part). All seems lost for Earth realm until Kitana, Liu, and Cage appear. Kitana berates Tsung for his treachery to the Emperor as Sonya is set free. Tsung challenges Cage, but is counter-challenged by Liu. 
During the lengthy battle, Liu faces not only Tsung, but the souls that Tsung had forcibly taken in past tournaments. In a last-ditch attempt to take advantage, Tsung morphs into Chan. Seeing through the charade, Liu renews his determination and ultimately fires an energy bolt at the sorcerer, knocking him down and impaling him on a row of spikes. Tsung's death releases all of the captive souls, including Chan's. Before ascending to the afterlife, Chan tells Liu that he will remain with him in spirit until they are once again reunited, after Liu dies.\nThe warriors return to Earth realm, where a victory celebration is taking place at the Shaolin temple. The jubilation abruptly stops, however, when Shao Kahn's giant figure suddenly appears in the skies. When the Emperor declares that he has come for everyone's souls, the warriors take up fighting stances.",
        "title": "Mortal Kombat",
        "question_id": "40c1866a-b214-11ba-be57-8979d2cefa90",
        "question": "Where is Sonya taken to?",
        "answers": ["Outworld"],
        "no_answer": false
      },
      "truncated_cells": []
    },
    {
      "row_idx": 151,
      "row": {
        "plot_id": "/m/03wj_q",
        "plot": "The film is centered on Mortal Kombat, a fighting tournament between the representatives of the realms of Earth and Outworld conceived by the Elder Gods amid looming invasion of the Earth by Outworld. If the realm of Outworld wins Mortal Kombat ten consecutive times, its Emperor Shao Kahn will be able to invade and conquer the Earth realm.\nShaolin monk Liu Kang and his comrades, movie star Johnny Cage and military officer Sonya Blade were handpicked by Raiden, the god of thunder and defender of the Earth realm, to overcome their powerful adversaries in order to prevent Outworld from winning their tenth straight Mortal Kombat tournament. Each of the three has his or her own reason for competing: Liu seeks revenge against the tournament host Shang Tsung for killing his brother Chan; Sonya seeks revenge on an Australian crime lord Kano; and Cage, having been branded as a fake by the media, seeks to prove otherwise.\nAt Shang Tsung's island, Liu is attracted to Princess Kitana, Shao Kahn's adopted daughter. Aware that Kitana is a dangerous adversary because she is the rightful heir to Outworld and that she will attempt to ally herself with the Earth warriors, Tsung orders the creature Reptile to spy on her. Liu defeats his first opponent and Sonya gets her revenge on Kano by snapping his neck. Cage encounters and barely beats Scorpion. Liu engages in a brief duel with Kitana, who secretly offers him cryptic advice for his next battle. Liu's next opponent is Sub-Zero, whose defense seems untouched because of his freezing abilities, until Liu recalls Kitana's advice and uses it to kill Sub-Zero.\nPrince Goro enters the tournament and mercilessly crushes every opponent he faces. One of Cage's peers, Art Lean, is defeated by Goro as well and has his soul taken by Shang Tsung. Sonya worries that they may not win against Goro, but Raiden disagrees. He reveals their own fears and egos are preventing them from winning the tournament.\nDespite Sonya's warning, Cage comes to Tsung to request a fight with Goro. The sorcerer accepts on the condition that he be allowed to challenge any opponent of his choosing, anytime and anywhere he chooses. Raiden tries to intervene, but the conditions are agreed upon before he can do so. After Shang Tsung leaves, Raiden confronts Cage for what he has done in challenging Goro, but is impressed when Cage shows his awareness of the gravity of the tournament. Cage faces Goro and uses guile and the element of surprise to defeat the defending champion. Now desperate, Tsung takes Sonya hostage and takes her to Outworld, intending to fight her as his opponent. Knowing that his powers are ineffective there and that Sonya cannot defeat Tsung by herself, Raiden sends Liu and Cage into Outworld in order to rescue Sonya and challenge Tsung. In Outworld, Liu is attacked by Reptile, but eventually gains the upper hand and defeats him. Afterward, Kitana meets up with Cage and Liu, revealing to the pair the origins of both herself and Outworld. Kitana allies with them and helps them to infiltrate Tsung's castle.\nInside the castle tower, Shang Tsung challenges Sonya to fight him, claiming that her refusal to accept will result in the Earth realm forfeiting Mortal Kombat (this is, in fact, a lie on Shang's part). All seems lost for Earth realm until Kitana, Liu, and Cage appear. Kitana berates Tsung for his treachery to the Emperor as Sonya is set free. Tsung challenges Cage, but is counter-challenged by Liu. 
During the lengthy battle, Liu faces not only Tsung, but the souls that Tsung had forcibly taken in past tournaments. In a last-ditch attempt to take advantage, Tsung morphs into Chan. Seeing through the charade, Liu renews his determination and ultimately fires an energy bolt at the sorcerer, knocking him down and impaling him on a row of spikes. Tsung's death releases all of the captive souls, including Chan's. Before ascending to the afterlife, Chan tells Liu that he will remain with him in spirit until they are once again reunited, after Liu dies.\nThe warriors return to Earth realm, where a victory celebration is taking place at the Shaolin temple. The jubilation abruptly stops, however, when Shao Kahn's giant figure suddenly appears in the skies. When the Emperor declares that he has come for everyone's souls, the warriors take up fighting stances.",
        "title": "Mortal Kombat",
        "question_id": "f1fdefcf-1191-b5f9-4cae-4ce4d0a59da7",
        "question": "Who took Goro's soul?",
        "answers": ["Shang Tsung."],
        "no_answer": false
      },
      "truncated_cells": []
    }
  ],
  "num_rows_total":60721,
  "num_rows_per_page":100,
  "partial":false
}
```
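
If you want to work with the slice locally, a minimal sketch (assuming the `data` variable returned by the Python snippet above, and pandas as an extra dependency) turns the response into a DataFrame:

```python
import pandas as pd

# `data` is the JSON returned by the /rows endpoint
column_names = [feature["name"] for feature in data["features"]]
records = [row["row"] for row in data["rows"]]

df = pd.DataFrame(records, columns=column_names)
print(df[["title", "question", "answers"]])
```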

## Image and audio samples

Images and audio samples are represented by a URL that points to the file.

### Images

Images are represented as a JSON object with three fields:

- `src`: URL to the image file. It's a [signed URL](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-urls.html) that expires after a certain time.
- `height`: height (in pixels) of the image
- `width`: width (in pixels) of the image

Here is an example of an image, from the first row of the `uoft-cs/cifar100` dataset:

```json
// https://datasets-server.huggingface.co/rows?dataset=uoft-cs/cifar100&config=cifar100&split=train&offset=0&length=1
{
  "features": [
    { "feature_idx": 0, "name": "img", "type": { "_type": "Image" } },
    ...
  ],
  "rows": [
    {
      "row_idx": 0,
      "row": {
        "img": {
          "src": "https://datasets-server.huggingface.co/cached-assets/uoft-cs/cifar100/--/aadb3af77e9048adbea6b47c21a81e47dd092ae5/--/cifar100/train/0/img/image.jpg?Expires=1710283469&Signature=A1v0cG07nuaBxYbuPR5EUZpJ9Se072SBDr4935gEsOESHGVyeqvd3qmvdsy1fuqbHk0dnx~p6MLtQ-hg3aCBOJ8eIJ5ItIoyYT4riJRuPQC0VFUb~b1maEwU8LRoXXuvrSysSz2QhBbC~ofv6cQudm~~bgGxXWAslDs180KnmPDsMU55ySsKyKQYNEkQKyuYvrGIJbFeg4lEps0f5CEwUstAwRAwlk~mzRpzUDBq7nJ~DcujTlllLv36nJX~too8mMnFn6dCn2nfGOFYwUiyYM73Czv-laLhVaIVUzcuJum90No~KNGzfYeFZpPqktA7MjCzRLf1gz5kA7wBqnY-8Q__&Key-Pair-Id=K3EI6M078Z3AC3",
          "height": 32,
          "width": 32
        },
        "fine_label": 19,
        "coarse_label": 11
      },
      "truncated_cells": []
    }
  ],
  "num_rows_total":50000,
  "num_rows_per_page":100,
  "partial":false
}
```

If the result has `partial: true`, it means that the slices couldn't be run on the full dataset because it's too big.
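
Since `src` is a regular (signed) URL, you can download the image with any HTTP client. A minimal sketch, assuming `requests` and `Pillow` are installed and `data` is the JSON above:

```python
import io

import requests
from PIL import Image

# `data` is the JSON returned by /rows for uoft-cs/cifar100 (see above)
src = data["rows"][0]["row"]["img"]["src"]

response = requests.get(src)
response.raise_for_status()
image = Image.open(io.BytesIO(response.content))
print(image.size)  # (32, 32)
```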


### Caching

The dataset viewer caches the images and audio samples temporarily.
Internally, the cached assets of certain datasets are emptied from time to time, based on usage.

If an asset is no longer available, you may have to call `/rows` again to get fresh URLs.
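
In practice this means refreshing the signed URLs when a download fails. A possible sketch (the status codes checked here are an assumption; adapt them to what you observe):

```python
import requests

ROWS_URL = "https://datasets-server.huggingface.co/rows?dataset=uoft-cs/cifar100&config=cifar100&split=train&offset=0&length=1"

def fetch_image_bytes() -> bytes:
    """Download the first image, refreshing the signed URL if it has expired."""
    src = requests.get(ROWS_URL).json()["rows"][0]["row"]["img"]["src"]
    response = requests.get(src)
    if response.status_code in (403, 404):
        # The cached asset may have been evicted: call /rows again for a fresh URL
        src = requests.get(ROWS_URL).json()["rows"][0]["row"]["img"]["src"]
        response = requests.get(src)
    response.raise_for_status()
    return response.content
```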


## Truncated responses

Unlike `/first-rows`, there is currently no truncation in `/rows`.
The `truncated_cells` field is still there but is always empty.


<EditOnGithub source="https://github.com/huggingface/dataset-viewer/blob/main/docs/source/rows.md" />

### List splits and subsets
https://huggingface.co/docs/dataset-viewer/splits.md

# List splits and subsets

Datasets typically have splits and may also have subsets. A _split_ is a subset of the dataset, like `train` and `test`, used during different stages of training and evaluating a model. A _subset_ (also called _configuration_) is a sub-dataset contained within a larger dataset. Subsets are especially common in multilingual speech datasets, where there may be a different subset for each language. If you're interested in learning more about splits and subsets, check out the [conceptual guide on "Splits and subsets"](./configs_and_splits)!

![split-configs-server](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/split-configs-server.gif)

This guide shows you how to use the dataset viewer's `/splits` endpoint to retrieve a dataset's splits and subsets programmatically. Feel free to also try it out with [Postman](https://www.postman.com/huggingface/workspace/hugging-face-apis/request/23242779-f0cde3b9-c2ee-4062-aaca-65c4cfdd96f8), [RapidAPI](https://rapidapi.com/hugging-face-hugging-face-default/api/hugging-face-datasets-api), or [ReDoc](https://redocly.github.io/redoc/?url=https://datasets-server.huggingface.co/openapi.json#operation/listSplits)

The `/splits` endpoint accepts the dataset name as its query parameter:

<inferencesnippet>
<python>
```python
import requests
headers = {"Authorization": f"Bearer {API_TOKEN}"}
API_URL = "https://datasets-server.huggingface.co/splits?dataset=ibm/duorc"
def query():
    response = requests.get(API_URL, headers=headers)
    return response.json()
data = query()
```
</python>
<js>
```js
import fetch from "node-fetch";
async function query(data) {
    const response = await fetch(
        "https://datasets-server.huggingface.co/splits?dataset=ibm/duorc",
        {
            headers: { Authorization: `Bearer ${API_TOKEN}` },
            method: "GET"
        }
    );
    const result = await response.json();
    return result;
}
query().then((response) => {
    console.log(JSON.stringify(response));
});
```
</js>
<curl>
```curl
curl https://datasets-server.huggingface.co/splits?dataset=ibm/duorc \
        -X GET \
        -H "Authorization: Bearer ${API_TOKEN}"
```
</curl>
</inferencesnippet>

The endpoint response is a JSON containing a list of the dataset's splits and subsets. For example, the [ibm/duorc](https://huggingface.co/datasets/ibm/duorc) dataset has six splits and two subsets:

```json
{
  "splits": [
    { "dataset": "ibm/duorc", "config": "ParaphraseRC", "split": "train" },
    { "dataset": "ibm/duorc", "config": "ParaphraseRC", "split": "validation" },
    { "dataset": "ibm/duorc", "config": "ParaphraseRC", "split": "test" },
    { "dataset": "ibm/duorc", "config": "SelfRC", "split": "train" },
    { "dataset": "ibm/duorc", "config": "SelfRC", "split": "validation" },
    { "dataset": "ibm/duorc", "config": "SelfRC", "split": "test" }
  ],
  "pending": [],
  "failed": []
}
```
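
A small sketch to group the response by subset, assuming the `data` variable from the Python snippet above:

```python
# `data` is the JSON returned by the /splits endpoint
subsets = sorted({split["config"] for split in data["splits"]})
print(subsets)  # ['ParaphraseRC', 'SelfRC']

splits_per_subset = {
    subset: [s["split"] for s in data["splits"] if s["config"] == subset]
    for subset in subsets
}
print(splits_per_subset)
```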


<EditOnGithub source="https://github.com/huggingface/dataset-viewer/blob/main/docs/source/splits.md" />

### Quickstart
https://huggingface.co/docs/dataset-viewer/quick_start.md

# Quickstart


In this quickstart, you'll learn how to use the dataset viewer's REST API to:

- Check whether a dataset on the Hub is functional.
- Return the subsets and splits of a dataset.
- Preview the first 100 rows of a dataset.
- Download slices of rows of a dataset.
- Search a word in a dataset.
- Filter rows based on a query string.
- Access the dataset as parquet files.
- Get the dataset size (in number of rows or bytes).
- Get statistics about the dataset.

## API endpoints

Each feature is served through an endpoint summarized in the table below:

| Endpoint                    | Method | Description                                             | Query parameters                                                                                                                                                                                                                                  |
|-----------------------------|--------|---------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [/is-valid](./valid)        | GET    | Check whether a specific dataset is valid.              | `dataset`: name of the dataset                                                                                                                                                                                                                    |
| [/splits](./splits)         | GET    | Get the list of subsets and splits of a dataset. | `dataset`: name of the dataset                                                                                                                                                                                                                    |
| [/first-rows](./first_rows) | GET    | Get the first rows of a dataset split.                  | - `dataset`: name of the dataset<br>- `config`: name of the config<br>- `split`: name of the split                                                                                                                                                |
| [/rows](./rows)             | GET    | Get a slice of rows of a dataset split.                 | - `dataset`: name of the dataset<br>- `config`: name of the config<br>- `split`: name of the split<br>- `offset`: offset of the slice<br>- `length`: length of the slice (maximum 100)                                                            |
| [/search](./search)         | GET    | Search text in a dataset split.                         | - `dataset`: name of the dataset<br>- `config`: name of the config<br>- `split`: name of the split<br>- `query`: text to search for<br>                                                                                                           |
| [/filter](./filter)         | GET    | Filter rows in a dataset split.                         | - `dataset`: name of the dataset<br>- `config`: name of the config<br>- `split`: name of the split<br>- `where`: filter query<br>- `orderby`: order-by clause<br>- `offset`: offset of the slice<br>- `length`: length of the slice (maximum 100) |
| [/parquet](./parquet)       | GET    | Get the list of parquet files of a dataset.             | `dataset`: name of the dataset                                                                                                                                                                                                                    |
| [/size](./size)             | GET    | Get the size of a dataset.                              | `dataset`: name of the dataset                                                                                                                                                                                                                    |
| [/statistics](./statistics) | GET    | Get statistics about a dataset split.                   | - `dataset`: name of the dataset<br>- `config`: name of the config<br>- `split`: name of the split                                                                                                                                                |
| [/croissant](./croissant)   | GET    | Get Croissant metadata about a dataset.                 | - `dataset`: name of the dataset                                                                                                                                                                                                                  |

There is no installation or setup required to use the dataset viewer API.

<Tip>
  Sign up for a <a href="https://huggingface.co/join">Hugging Face account</a>{" "}
  if you don't already have one! While you can use the dataset viewer API without a
  Hugging Face account, you won't be able to access{" "}
  <a href="https://huggingface.co/docs/hub/datasets-gated">gated datasets</a>{" "}
  like{" "}
  <a href="https://huggingface.co/datasets/mozilla-foundation/common_voice_10_0">
    CommonVoice
  </a>{" "}
  and <a href="https://huggingface.co/datasets/ILSVRC/imagenet-1k">ImageNet</a> without
  providing a <a href="https://huggingface.co/settings/tokens">user token</a>{" "}
  which you can find in your user settings.
</Tip>

Feel free to try out the API in [Postman](https://www.postman.com/huggingface/workspace/hugging-face-apis/documentation/23242779-d068584e-96d1-4d92-a703-7cb12cbd8053), [ReDoc](https://redocly.github.io/redoc/?url=https://datasets-server.huggingface.co/openapi.json) or [RapidAPI](https://rapidapi.com/hugging-face-hugging-face-default/api/hugging-face-datasets-api/). This quickstart will show you how to query the endpoints programmatically.

The base URL of the REST API is:

```
https://datasets-server.huggingface.co
```
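
All endpoints live under this base URL, so a small helper like the one below (a sketch, not part of the API) can be reused throughout this quickstart:

```python
import requests

BASE_URL = "https://datasets-server.huggingface.co"

def query(endpoint, headers=None, **params):
    """Call a dataset viewer endpoint, e.g. query("splits", dataset="ibm/duorc")."""
    response = requests.get(f"{BASE_URL}/{endpoint}", params=params, headers=headers)
    response.raise_for_status()
    return response.json()

# Example: list the splits of the ibm/duorc dataset
print(query("splits", dataset="ibm/duorc"))
```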

## Private and gated datasets

For [private](https://huggingface.co/docs/hub/repositories-settings#private-repositories) and [gated](https://huggingface.co/docs/hub/datasets-gated) datasets, you'll need to provide your user token in `headers` of your query. Otherwise, you'll get an error message to retry with authentication.

<Tip>

The dataset viewer supports private datasets owned by a [PRO user](https://huggingface.co/pricing) or an [Enterprise Hub organization](https://huggingface.co/enterprise).

</Tip>

<inferencesnippet>
<python>
```python
import requests
headers = {"Authorization": f"Bearer {API_TOKEN}"}
API_URL = "https://datasets-server.huggingface.co/is-valid?dataset=allenai/WildChat-nontoxic"
def query():
    response = requests.get(API_URL, headers=headers)
    return response.json()
data = query()
```
</python>
<js>
```js
import fetch from "node-fetch";
async function query(data) {
    const response = await fetch(
        "https://datasets-server.huggingface.co/is-valid?dataset=allenai/WildChat-nontoxic",
        {
            headers: { Authorization: `Bearer ${API_TOKEN}` },
            method: "GET",
        }
    );
    const result = await response.json();
    return result;
}
query().then((response) => {
    console.log(JSON.stringify(response));
});
```
</js>
<curl>
```curl
curl https://datasets-server.huggingface.co/is-valid?dataset=allenai/WildChat-nontoxic \
        -X GET \
        -H "Authorization: Bearer ${API_TOKEN}"
```
</curl>
</inferencesnippet>

You'll see the following error if you're trying to access a gated dataset without providing your user token:

```py
print(data)
{'error': 'The dataset does not exist, or is not accessible without authentication (private or gated). Please check the spelling of the dataset name or retry with authentication.'}
```

## Check dataset validity

To check whether a specific dataset is valid, for example, [Rotten Tomatoes](https://huggingface.co/datasets/cornell-movie-review-data/rotten_tomatoes), use the `/is-valid` endpoint:

<inferencesnippet>
<python>
```python
import requests
API_URL = "https://datasets-server.huggingface.co/is-valid?dataset=cornell-movie-review-data/rotten_tomatoes"
def query():
    response = requests.get(API_URL)
    return response.json()
data = query()
```
</python>
<js>
```js
import fetch from "node-fetch";
async function query(data) {
    const response = await fetch(
        "https://datasets-server.huggingface.co/is-valid?dataset=cornell-movie-review-data/rotten_tomatoes",
        {
            method: "GET"
        }
    );
    const result = await response.json();
    return result;
}
query().then((response) => {
    console.log(JSON.stringify(response));
});
```
</js>
<curl>
```curl
curl https://datasets-server.huggingface.co/is-valid?dataset=cornell-movie-review-data/rotten_tomatoes \
        -X GET
```
</curl>
</inferencesnippet>

This returns whether the dataset provides a preview (see `/first-rows`), the viewer (see `/rows`), search (see `/search`), filtering (see `/filter`), and statistics (see `/statistics`):

```json
{ "preview": true, "viewer": true, "search": true, "filter": true, "statistics": true }
```
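
You can use these flags to decide which endpoints are worth calling. For example, a small sketch that only queries `/rows` when the viewer is available:

```python
import requests

dataset = "cornell-movie-review-data/rotten_tomatoes"
validity = requests.get(
    "https://datasets-server.huggingface.co/is-valid", params={"dataset": dataset}
).json()

if validity.get("viewer"):
    rows = requests.get(
        "https://datasets-server.huggingface.co/rows",
        params={"dataset": dataset, "config": "default", "split": "train", "offset": 0, "length": 10},
    ).json()
    print(len(rows["rows"]))
```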

## List configurations and splits

The `/splits` endpoint returns a JSON list of the splits in a dataset:

<inferencesnippet>
<python>
```python
import requests
API_URL = "https://datasets-server.huggingface.co/splits?dataset=cornell-movie-review-data/rotten_tomatoes"
def query():
    response = requests.get(API_URL)
    return response.json()
data = query()
```
</python>
<js>
```js
import fetch from "node-fetch";
async function query(data) {
    const response = await fetch(
        "https://datasets-server.huggingface.co/splits?dataset=cornell-movie-review-data/rotten_tomatoes",
        {
            method: "GET"
        }
    );
    const result = await response.json();
    return result;
}
query().then((response) => {
    console.log(JSON.stringify(response));
});
```
</js>
<curl>
```curl
curl https://datasets-server.huggingface.co/splits?dataset=cornell-movie-review-data/rotten_tomatoes \
        -X GET
```
</curl>
</inferencesnippet>

This returns the available subsets and splits in the dataset:

```json
{
  "splits": [
    { "dataset": "cornell-movie-review-data/rotten_tomatoes", "config": "default", "split": "train" },
    {
      "dataset": "cornell-movie-review-data/rotten_tomatoes",
      "config": "default",
      "split": "validation"
    },
    { "dataset": "cornell-movie-review-data/rotten_tomatoes", "config": "default", "split": "test" }
  ],
  "pending": [],
  "failed": []
}
```

## Preview a dataset

The `/first-rows` endpoint returns a JSON list of the first 100 rows of a dataset. It also returns the types of the data features (the column data types). You should specify the dataset name, the subset name (which you can find from the `/splits` endpoint), and the split name of the dataset you'd like to preview:

<inferencesnippet>
<python>
```python
import requests
API_URL = "https://datasets-server.huggingface.co/first-rows?dataset=cornell-movie-review-data/rotten_tomatoes&config=default&split=train"
def query():
    response = requests.get(API_URL)
    return response.json()
data = query()
```
</python>
<js>
```js
import fetch from "node-fetch";
async function query(data) {
    const response = await fetch(
        "https://datasets-server.huggingface.co/first-rows?dataset=cornell-movie-review-data/rotten_tomatoes&config=default&split=train",
        {
            method: "GET"
        }
    );
    const result = await response.json();
    return result;
}
query().then((response) => {
    console.log(JSON.stringify(response));
});
```
</js>
<curl>
```curl
curl "https://datasets-server.huggingface.co/first-rows?dataset=cornell-movie-review-data/rotten_tomatoes&config=default&split=train" \
        -X GET
```
</curl>
</inferencesnippet>

This returns the first 100 rows of the dataset:

```json
{
  "dataset": "cornell-movie-review-data/rotten_tomatoes",
  "config": "default",
  "split": "train",
  "features": [
    {
      "feature_idx": 0,
      "name": "text",
      "type": { "dtype": "string", "_type": "Value" }
    },
    {
      "feature_idx": 1,
      "name": "label",
      "type": { "names": ["neg", "pos"], "_type": "ClassLabel" }
    }
  ],
  "rows": [
    {
      "row_idx": 0,
      "row": {
        "text": "the rock is destined to be the 21st century's new \" conan \" and that he's going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .",
        "label": 1
      },
      "truncated_cells": []
    },
    {
      "row_idx": 1,
      "row": {
        "text": "the gorgeously elaborate continuation of \" the lord of the rings \" trilogy is so huge that a column of words cannot adequately describe co-writer/director peter jackson's expanded vision of j . r . r . tolkien's middle-earth .",
        "label": 1
      },
      "truncated_cells": []
    },
    ...,
    ...
  ]
}
```

## Download slices of a dataset

The `/rows` endpoint returns a JSON list of a slice of rows of a dataset at any given location (offset).
It also returns the types of the data features (the column data types).
You should specify the dataset name, the subset name (which you can find from the `/splits` endpoint), the split name, and the offset and length of the slice you'd like to download:

<inferencesnippet>
<python>
```python
import requests
API_URL = "https://datasets-server.huggingface.co/rows?dataset=cornell-movie-review-data/rotten_tomatoes&config=default&split=train&offset=150&length=10"
def query():
    response = requests.get(API_URL)
    return response.json()
data = query()
```
</python>
<js>
```js
import fetch from "node-fetch";
async function query(data) {
    const response = await fetch(
        "https://datasets-server.huggingface.co/rows?dataset=cornell-movie-review-data/rotten_tomatoes&config=default&split=train&offset=150&length=10",
        {
            method: "GET"
        }
    );
    const result = await response.json();
    return result;
}
query().then((response) => {
    console.log(JSON.stringify(response));
});
```
</js>
<curl>
```curl
curl "https://datasets-server.huggingface.co/rows?dataset=cornell-movie-review-data/rotten_tomatoes&config=default&split=train&offset=150&length=10" \
        -X GET
```
</curl>
</inferencesnippet>

You can download slices of at most 100 rows at a time.

The response looks like:

```json
{
  "features": [
    {
      "feature_idx": 0,
      "name": "text",
      "type": { "dtype": "string", "_type": "Value" }
    },
    {
      "feature_idx": 1,
      "name": "label",
      "type": { "names": ["neg", "pos"], "_type": "ClassLabel" }
    }
  ],
  "rows": [
    {
      "row_idx": 150,
      "row": {
        "text": "enormously likable , partly because it is aware of its own grasp of the absurd .",
        "label": 1
      },
      "truncated_cells": []
    },
    {
      "row_idx": 151,
      "row": {
        "text": "here's a british flick gleefully unconcerned with plausibility , yet just as determined to entertain you .",
        "label": 1
      },
      "truncated_cells": []
    },
    ...,
    ...
  ],
  "num_rows_total": 8530,
  "num_rows_per_page": 100,
  "partial": false
}
```
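
Because each call returns at most 100 rows, downloading a larger range means paginating over `offset`. A minimal sketch (the 500-row limit and the page size are arbitrary choices for the example):

```python
import requests

API_URL = "https://datasets-server.huggingface.co/rows"
params = {
    "dataset": "cornell-movie-review-data/rotten_tomatoes",
    "config": "default",
    "split": "train",
    "length": 100,  # maximum page size
}

rows = []
offset = 0
while offset < 500:  # fetch the first 500 rows as an example
    page = requests.get(API_URL, params={**params, "offset": offset}).json()
    rows.extend(r["row"] for r in page["rows"])
    offset += params["length"]
    if offset >= page["num_rows_total"]:
        break

print(len(rows))
```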

## Search text in a dataset

The `/search` endpoint returns a JSON list of a slice of rows of a dataset that match a text query. The text is searched in the columns of type `string`, even if the values are nested in a dictionary.
It also returns the types of the data features (the column data types). The response format is the same as for the `/rows` endpoint.
You should specify the dataset name, the subset name (which you can find from the `/splits` endpoint), the split name, and the search query you'd like to find in the text columns:

<inferencesnippet>
<python>
```python
import requests
API_URL = "https://datasets-server.huggingface.co/search?dataset=cornell-movie-review-data/rotten_tomatoes&config=default&split=train&query=cat"
def query():
    response = requests.get(API_URL)
    return response.json()
data = query()
```
</python>
<js>
```js
import fetch from "node-fetch";
async function query(data) {
    const response = await fetch(
        "https://datasets-server.huggingface.co/search?dataset=cornell-movie-review-data/rotten_tomatoes&config=default&split=train&query=cat",
        {
            method: "GET"
        }
    );
    const result = await response.json();
    return result;
}
query().then((response) => {
    console.log(JSON.stringify(response));
});
```
</js>
<curl>
```curl
curl "https://datasets-server.huggingface.co/search?dataset=cornell-movie-review-data/rotten_tomatoes&config=default&split=train&query=cat" \
        -X GET
```
</curl>
</inferencesnippet>

You can get slices of 100 rows maximum at a time, and you can ask for other slices using the `offset` and `length` parameters, as for the `/rows` endpoint.

The response looks like:

```json
{
  "features": [
    {
      "feature_idx": 0,
      "name": "text",
      "type": { "dtype": "string", "_type": "Value" }
    },
    {
      "feature_idx": 1,
      "name": "label",
      "type": { "dtype": "int64", "_type": "Value" }
    }
  ],
  "rows": [
    {
      "row_idx": 9,
      "row": {
        "text": "take care of my cat offers a refreshingly different slice of asian cinema .",
        "label": 1
      },
      "truncated_cells": []
    },
    {
      "row_idx": 472,
      "row": {
        "text": "[ \" take care of my cat \" ] is an honestly nice little film that takes us on an examination of young adult life in urban south korea through the hearts and minds of the five principals .",
        "label": 1
      },
      "truncated_cells": []
    },
    ...,
    ...
  ],
  "num_rows_total": 12,
  "num_rows_per_page": 100,
  "partial": false
}
```

## Access Parquet files

The dataset viewer converts every dataset on the Hub to the [Parquet](https://parquet.apache.org/) format. The `/parquet` endpoint returns a JSON list of the Parquet URLs for a dataset:

<inferencesnippet>
<python>
```python
import requests
API_URL = "https://datasets-server.huggingface.co/parquet?dataset=cornell-movie-review-data/rotten_tomatoes"
def query():
    response = requests.get(API_URL)
    return response.json()
data = query()
```
</python>
<js>
```js
import fetch from "node-fetch";
async function query(data) {
    const response = await fetch(
        "https://datasets-server.huggingface.co/parquet?dataset=cornell-movie-review-data/rotten_tomatoes",
        {
            method: "GET"
        }
    );
    const result = await response.json();
    return result;
}
query().then((response) => {
    console.log(JSON.stringify(response));
});
```
</js>
<curl>
```curl
curl https://datasets-server.huggingface.co/parquet?dataset=cornell-movie-review-data/rotten_tomatoes \
        -X GET
```
</curl>
</inferencesnippet>

This returns a URL to the Parquet file for each split:

```json
{
  "parquet_files": [
    {
      "dataset": "cornell-movie-review-data/rotten_tomatoes",
      "config": "default",
      "split": "test",
      "url": "https://huggingface.co/datasets/cornell-movie-review-data/rotten_tomatoes/resolve/refs%2Fconvert%2Fparquet/default/test/0000.parquet",
      "filename": "0000.parquet",
      "size": 92206
    },
    {
      "dataset": "cornell-movie-review-data/rotten_tomatoes",
      "config": "default",
      "split": "train",
      "url": "https://huggingface.co/datasets/cornell-movie-review-data/rotten_tomatoes/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet",
      "filename": "0000.parquet",
      "size": 698845
    },
    {
      "dataset": "cornell-movie-review-data/rotten_tomatoes",
      "config": "default",
      "split": "validation",
      "url": "https://huggingface.co/datasets/cornell-movie-review-data/rotten_tomatoes/resolve/refs%2Fconvert%2Fparquet/default/validation/0000.parquet",
      "filename": "0000.parquet",
      "size": 90001
    }
  ],
  "pending": [],
  "failed": [],
  "partial": false
}
```
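
The returned URLs are plain files that any Parquet reader can load. For example, a sketch with pandas, assuming `data` is the JSON above and `fsspec` is installed for remote reads:

```python
import pandas as pd

# Keep only the train split files and concatenate them into one DataFrame
train_urls = [
    f["url"]
    for f in data["parquet_files"]
    if f["config"] == "default" and f["split"] == "train"
]
df = pd.concat(pd.read_parquet(url) for url in train_urls)
print(df.shape)
```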

## Get the size of the dataset

The `/size` endpoint returns a JSON with the size (number of rows and size in bytes) of the dataset, as well as the size of every subset and split:

<inferencesnippet>
<python>
```python
import requests
API_URL = "https://datasets-server.huggingface.co/size?dataset=cornell-movie-review-data/rotten_tomatoes"
def query():
    response = requests.get(API_URL)
    return response.json()
data = query()
```
</python>
<js>
```js
import fetch from "node-fetch";
async function query(data) {
    const response = await fetch(
        "https://datasets-server.huggingface.co/size?dataset=cornell-movie-review-data/rotten_tomatoes",
        {
            method: "GET"
        }
    );
    const result = await response.json();
    return result;
}
query().then((response) => {
    console.log(JSON.stringify(response));
});
```
</js>
<curl>
```curl
curl https://datasets-server.huggingface.co/size?dataset=cornell-movie-review-data/rotten_tomatoes \
        -X GET
```
</curl>
</inferencesnippet>

This returns the size of the dataset, as well as the size of every subset and split:

```json
{
  "size": {
    "dataset": {
      "dataset": "cornell-movie-review-data/rotten_tomatoes",
      "num_bytes_original_files": 487770,
      "num_bytes_parquet_files": 881052,
      "num_bytes_memory": 1345449,
      "num_rows": 10662
    },
    "configs": [
      {
        "dataset": "cornell-movie-review-data/rotten_tomatoes",
        "config": "default",
        "num_bytes_original_files": 487770,
        "num_bytes_parquet_files": 881052,
        "num_bytes_memory": 1345449,
        "num_rows": 10662,
        "num_columns": 2
      }
    ],
    "splits": [
      {
        "dataset": "cornell-movie-review-data/rotten_tomatoes",
        "config": "default",
        "split": "train",
        "num_bytes_parquet_files": 698845,
        "num_bytes_memory": 1074806,
        "num_rows": 8530,
        "num_columns": 2
      },
      {
        "dataset": "cornell-movie-review-data/rotten_tomatoes",
        "config": "default",
        "split": "validation",
        "num_bytes_parquet_files": 90001,
        "num_bytes_memory": 134675,
        "num_rows": 1066,
        "num_columns": 2
      },
      {
        "dataset": "cornell-movie-review-data/rotten_tomatoes",
        "config": "default",
        "split": "test",
        "num_bytes_parquet_files": 92206,
        "num_bytes_memory": 135968,
        "num_rows": 1066,
        "num_columns": 2
      }
    ]
  },
  "pending": [],
  "failed": [],
  "partial": false
}
```
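
A small sketch to summarize the response, assuming `data` is the JSON above:

```python
# `data` is the JSON returned by the /size endpoint
dataset_size = data["size"]["dataset"]
print(f"{dataset_size['num_rows']} rows, "
      f"{dataset_size['num_bytes_parquet_files'] / 1024:.0f} KiB of Parquet files")

for split in data["size"]["splits"]:
    print(f"- {split['config']}/{split['split']}: {split['num_rows']} rows")
```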


<EditOnGithub source="https://github.com/huggingface/dataset-viewer/blob/main/docs/source/quick_start.md" />

### mlcroissant
https://huggingface.co/docs/dataset-viewer/mlcroissant.md

# mlcroissant

[mlcroissant](https://github.com/mlcommons/croissant/tree/main/python/mlcroissant) is a library to load datasets from Croissant metadata.

<Tip>

💡 Learn more about how to get the metadata from the dataset viewer API in the [Get Croissant metadata](croissant) guide.

</Tip>

Let's start by parsing the Croissant metadata for the [`tasksource/blog_authorship_corpus`](https://huggingface.co/datasets/tasksource/blog_authorship_corpus) dataset. Be sure to first install `mlcroissant[parquet]` and `GitPython` to be able to load Parquet files over the git+https protocol.

```py
from mlcroissant import Dataset
ds = Dataset(jsonld="https://huggingface.co/api/datasets/tasksource/blog_authorship_corpus/croissant")
```

To read from the first subset (called RecordSet in Croissant's vocabulary), use the [`records`](https://github.com/mlcommons/croissant/blob/cd64e12c733cf8bf48f2f85c951c1c67b1c94f5a/python/mlcroissant/mlcroissant/_src/datasets.py#L86) function, which returns an iterator of dicts.

```py
records = ds.records("default")
```

Finally, use Pandas to compute your query on the first 100 rows:

```py
import itertools

import pandas as pd

df = (
    pd.DataFrame(list(itertools.islice(records, 100)))
    .groupby("default/sign")["default/text"]
    .apply(lambda x: x.str.len().mean())
    .sort_values(ascending=False)
    .head(5)
)
print(df)
default/sign
b'Leo'          6463.500000
b'Capricorn'    2374.500000
b'Aquarius'     2303.757143
b'Gemini'       1420.333333
b'Aries'         918.666667
Name: default/text, dtype: float64
```


<EditOnGithub source="https://github.com/huggingface/dataset-viewer/blob/main/docs/source/mlcroissant.md" />

### Search text in a dataset
https://huggingface.co/docs/dataset-viewer/search.md

# Search text in a dataset

The dataset viewer provides a `/search` endpoint for searching words in a dataset.

<Tip warning={true}>
  Currently, only <a href="./parquet">datasets with Parquet exports</a>
  are supported so the dataset viewer can index the contents and run the search without
  downloading the whole dataset.
</Tip>

This guide shows you how to use the dataset viewer's `/search` endpoint to search for a query string.
Feel free to also try it out with [ReDoc](https://redocly.github.io/redoc/?url=https://datasets-server.huggingface.co/openapi.json#operation/searchRows).

The text is searched in the columns of type `string`, even if the values are nested in a dictionary.

<Tip>

We use [DuckDB](https://duckdb.org/docs/) for [full text search](https://duckdb.org/docs/extensions/full_text_search.html) with the `BM25` (Best Match 25) algorithm. `BM25` is a ranking algorithm for information retrieval and search engines that determines a document’s relevance to a given query and ranks documents based on their relevance scores.
[`Porter` stemmer](https://tartarus.org/martin/PorterStemmer/) (which assumes English text) is used to reduce words to their root or base form, known as the stem. This process, called stemming, involves removing suffixes and prefixes from words to identify their core meaning. The purpose of a stemmer is to improve search accuracy and efficiency by ensuring that different forms of a word are recognized as the same term.

</Tip>

The `/search` endpoint accepts six query parameters:

- `dataset`: the dataset name, for example `nyu-mll/glue` or `mozilla-foundation/common_voice_10_0`
- `config`: the subset name, for example `cola`
- `split`: the split name, for example `train`
- `query`: the text to search
- `offset`: the offset of the slice, for example `150`
- `length`: the length of the slice, for example `10` (maximum: `100`)

For example, let's search for the text `"dog"` in the `train` split of the `SelfRC` subset of the `ibm/duorc` dataset, restricting the results to the slice 150-151:

<inferencesnippet>
<python>
```python
import requests
headers = {"Authorization": f"Bearer {API_TOKEN}"}
API_URL = "https://datasets-server.huggingface.co/search?dataset=ibm/duorc&config=SelfRC&split=train&query=dog&offset=150&length=2"
def query():
    response = requests.get(API_URL, headers=headers)
    return response.json()
data = query()
```
</python>
<js>
```js
import fetch from "node-fetch";
async function query(data) {
    const response = await fetch(
        "https://datasets-server.huggingface.co/search?dataset=ibm/duorc&config=SelfRC&split=train&query=dog&offset=150&length=2",
        {
            headers: { Authorization: `Bearer ${API_TOKEN}` },
            method: "GET"
        }
    );
    const result = await response.json();
    return result;
}
query().then((response) => {
    console.log(JSON.stringify(response));
});
```
</js>
<curl>
```curl
curl "https://datasets-server.huggingface.co/search?dataset=ibm/duorc&config=SelfRC&split=train&query=dog&offset=150&length=2" \
        -X GET \
        -H "Authorization: Bearer ${API_TOKEN}"
```
</curl>
</inferencesnippet>

The endpoint response is a JSON containing two keys (same format as [`/rows`](./rows)):

- The [`features`](https://huggingface.co/docs/datasets/about_dataset_features) of a dataset, including the column's name and data type.
- The slice of `rows` of a dataset and the content contained in each column of a specific row.

The rows are ordered by the row index, and the text strings matching the query are not highlighted.

For example, here are the `features` and the slice 150-151 of matching `rows` of the `ibm/duorc`/`SelfRC` train split for the query `dog`:

```json
{
  "features": [
    {
      "feature_idx": 0,
      "name": "plot_id",
      "type": { "dtype": "string", "_type": "Value" }
    },
    {
      "feature_idx": 1,
      "name": "plot",
      "type": { "dtype": "string", "_type": "Value" }
    },
    {
      "feature_idx": 2,
      "name": "title",
      "type": { "dtype": "string", "_type": "Value" }
    },
    {
      "feature_idx": 3,
      "name": "question_id",
      "type": { "dtype": "string", "_type": "Value" }
    },
    {
      "feature_idx": 4,
      "name": "question",
      "type": { "dtype": "string", "_type": "Value" }
    },
    {
      "feature_idx": 5,
      "name": "answers",
      "type": {
        "feature": { "dtype": "string", "_type": "Value" },
        "_type": "List"
      }
    },
    {
      "feature_idx": 6,
      "name": "no_answer",
      "type": { "dtype": "bool", "_type": "Value" }
    }
  ],
  "rows": [
    {
      "row_idx": 1561,
      "row": {
        "plot_id": "/m/014bjk",
        "plot": "The film begins with clips that track a telephone call between London and Geneva, where a university student and part-time model, Valentine Dussault (IrÃ¨ne Jacob), is talking to her emotionally infantile and possessive boyfriend. During her work as a model she poses for a chewing-gum campaign and during the photo shoot the photographer asks her to look very sad. While walking back home, Auguste, a neighbour of Valentine's, drops a set of books, notices that a particular chapter of the Criminal Code opened at random, and concentrates on that passage. As she drives back to her apartment, Valentine is distracted while adjusting the radio and accidentally hits a dog. She tracks down the owner, a reclusive retired judge, Joseph Kern (Jean-Louis Trintignant). He seems unconcerned by the accident or the injuries sustained by Rita, his dog. Valentine takes Rita to a veterinarian, where she learns that Rita is pregnant. Valentine takes the dog home. Later, money is delivered to her apartment from an unnamed sender.\nWhilst Valentine is walking Rita the next day the dog runs away and Valentine eventually finds her back at Kern's house. She asks and he confirms that the money sent to her came from him, for the vet bill. He then tells Valentine she can have the dog. A short time later Valentine finds Kern eavesdropping on his neighbours' private telephone conversations. The judge challenges Valentine to go tell the neighbours and initially she goes to do so. She visits the neighbours' house, which appears, on the surface, to contain a contented nuclear family, causing her to change her mind about exposing their secrets. She returns to Kern's house and Kern tells her that it would make no difference if she denounced him for his spying because the people's lives he listens to would eventually turn into hell anyway. She leaves saying that she feels nothing but pity for him.\nWhilst visiting Kern, Valentine hears a phone conversation between her (unbeknownst to her) neighbour, Auguste, and his girlfriend, Karin (Frederique Feder). They discuss if they should go bowling. Valentine covers her ears but from the very little she hears she concludes that they love each other. Kern disagrees. That evening Valentine is alone at home and hopes that her boyfriend will call, but it is the photographer who calls, saying that her billboard was set up that evening and asks her to join them bowling to celebrate. Later, Auguste takes his exam and passes it and becomes a judge. Karin asks if he was asked any questions regarding the article that was open when he dropped his books. Auguste says yes. Karin gives him a fancy fountain pen as a gift and he wonders what the first judgment he signs with it will be. That evening, Kern writes a series of letters to his neighbours and the court confessing his activities, and the community files a class action. Later, at the law courts, he sees Karin make the acquaintance of and begin to flirt with another man. Earlier, Auguste had missed a call from Karin and tried to call her back but got no answer.\nValentine reads the news about a retired judge who spied on his neighbours and rushes to Kern's house to tell him that she did not report on him. He confesses that he turned himself in, just to see what she would do. He asks her in and shows her that Rita has had seven puppies. He tells her that in their last conversation when she spoke about pity he later realized that she really meant disgust. 
He ponders about the reasons why people obey laws and concludes that often it is more on selfish grounds and from fear than about obeying the law or being decent. It is his birthday and he offers her pear brandy for a toast. During their conversation he reminisces about a sailor he acquitted a long time ago, only later realizing he had made a mistake, and that the man was guilty. However, the man later married, had children and grandchildren and lives peacefully and happy. Valentine says that he did what he had to do, but Kern wonders how many other people that he acquitted or condemned might have seen a different life had he decided otherwise. Valentine tells Kern about her intended trip to England for a modeling job and to visit her boyfriend. Kern suggests that she take the ferry.\nAuguste has been unable to reach Karin since graduation so he goes to her place and sees her having sex with another man. Distraught, he leaves. Later, Auguste sees Karin and her new boyfriend in a restaurant. He gets her attention by tapping on the restaurant window with the pen she gave him. But when she rushes outside, he hides from her. In a temper, he ties his dog by a quayside and abandons him.\nKarin runs a service providing personalised weather information to travelers by telephone. Kern calls and enquires about the weather in the English Channel for the time when Valentine will be traveling to England. Karin states that she expects the weather to be perfect and reveals that she is about to take a trip there (with her new boyfriend who owns a yacht).\nThe day before Valentine leaves, she invites Kern to a fashion show where she is modeling. After the show they speak about the dream Kern had about her, where he saw her at the age of 50 and happy with an unidentified man. The conversation then turns to Kern and the reasons why he disliked Karin. Kern reveals that before becoming a judge, he was in love with a woman very much like Karin, who betrayed him for another man. While preparing for his exam, he once went to the same theatre where the fashion show took place and he accidentally dropped one of his books. When he picked it up, Kern studied the chapter where the book accidentally opened, which turned out to be the crucial question at his examination. After his girlfriend left him, he followed her across the English Channel but never saw her again, because she died in an accident. Later, he was assigned to judge a case where the defendant was the same man who took his girlfriend from him. Despite this connection, Kern did not recuse himself from the case and found the man guilty. He tells Valentine the judgment was entirely legal but also that he subsequently requested early retirement.\nValentine boards the ferry to England. Auguste is also on the ferry, clutching the dog he had temporarily abandoned. Although living in the same neighborhood and nearly crossing paths many times, the two have still never met. Suddenly a storm rises and sinks both the ferry and the boat with Karin and her boyfriend. Only seven survivors are pulled from the ferry: the main characters from the first two films of the trilogy, Julie and Olivier from Blue, Karol and Dominique from White, and Valentine and Auguste, who meet for the first time, as well as an English bartender named Stephen Killian. 
As in the previous films, the film's final sequence shows a character crying - in this case, the judge - but the final image replicates the iconic chewing-gum poster of Valentine, but this time with real emotion showing on her face.",
        "title": "Three Colors: Red",
        "question_id": "7c583513-0b7f-ddb3-be43-64befc7e90cc",
        "question": "Where is Valentine going on her trip?",
        "answers": ["England."],
        "no_answer": false
      },
      "truncated_cells": []
    },
    {
      "row_idx": 1562,
      "row": {
        "plot_id": "/m/014bjk",
        "plot": "The film begins with clips that track a telephone call between London and Geneva, where a university student and part-time model, Valentine Dussault (IrÃ¨ne Jacob), is talking to her emotionally infantile and possessive boyfriend. During her work as a model she poses for a chewing-gum campaign and during the photo shoot the photographer asks her to look very sad. While walking back home, Auguste, a neighbour of Valentine's, drops a set of books, notices that a particular chapter of the Criminal Code opened at random, and concentrates on that passage. As she drives back to her apartment, Valentine is distracted while adjusting the radio and accidentally hits a dog. She tracks down the owner, a reclusive retired judge, Joseph Kern (Jean-Louis Trintignant). He seems unconcerned by the accident or the injuries sustained by Rita, his dog. Valentine takes Rita to a veterinarian, where she learns that Rita is pregnant. Valentine takes the dog home. Later, money is delivered to her apartment from an unnamed sender.\nWhilst Valentine is walking Rita the next day the dog runs away and Valentine eventually finds her back at Kern's house. She asks and he confirms that the money sent to her came from him, for the vet bill. He then tells Valentine she can have the dog. A short time later Valentine finds Kern eavesdropping on his neighbours' private telephone conversations. The judge challenges Valentine to go tell the neighbours and initially she goes to do so. She visits the neighbours' house, which appears, on the surface, to contain a contented nuclear family, causing her to change her mind about exposing their secrets. She returns to Kern's house and Kern tells her that it would make no difference if she denounced him for his spying because the people's lives he listens to would eventually turn into hell anyway. She leaves saying that she feels nothing but pity for him.\nWhilst visiting Kern, Valentine hears a phone conversation between her (unbeknownst to her) neighbour, Auguste, and his girlfriend, Karin (Frederique Feder). They discuss if they should go bowling. Valentine covers her ears but from the very little she hears she concludes that they love each other. Kern disagrees. That evening Valentine is alone at home and hopes that her boyfriend will call, but it is the photographer who calls, saying that her billboard was set up that evening and asks her to join them bowling to celebrate. Later, Auguste takes his exam and passes it and becomes a judge. Karin asks if he was asked any questions regarding the article that was open when he dropped his books. Auguste says yes. Karin gives him a fancy fountain pen as a gift and he wonders what the first judgment he signs with it will be. That evening, Kern writes a series of letters to his neighbours and the court confessing his activities, and the community files a class action. Later, at the law courts, he sees Karin make the acquaintance of and begin to flirt with another man. Earlier, Auguste had missed a call from Karin and tried to call her back but got no answer.\nValentine reads the news about a retired judge who spied on his neighbours and rushes to Kern's house to tell him that she did not report on him. He confesses that he turned himself in, just to see what she would do. He asks her in and shows her that Rita has had seven puppies. He tells her that in their last conversation when she spoke about pity he later realized that she really meant disgust. 
He ponders about the reasons why people obey laws and concludes that often it is more on selfish grounds and from fear than about obeying the law or being decent. It is his birthday and he offers her pear brandy for a toast. During their conversation he reminisces about a sailor he acquitted a long time ago, only later realizing he had made a mistake, and that the man was guilty. However, the man later married, had children and grandchildren and lives peacefully and happy. Valentine says that he did what he had to do, but Kern wonders how many other people that he acquitted or condemned might have seen a different life had he decided otherwise. Valentine tells Kern about her intended trip to England for a modeling job and to visit her boyfriend. Kern suggests that she take the ferry.\nAuguste has been unable to reach Karin since graduation so he goes to her place and sees her having sex with another man. Distraught, he leaves. Later, Auguste sees Karin and her new boyfriend in a restaurant. He gets her attention by tapping on the restaurant window with the pen she gave him. But when she rushes outside, he hides from her. In a temper, he ties his dog by a quayside and abandons him.\nKarin runs a service providing personalised weather information to travelers by telephone. Kern calls and enquires about the weather in the English Channel for the time when Valentine will be traveling to England. Karin states that she expects the weather to be perfect and reveals that she is about to take a trip there (with her new boyfriend who owns a yacht).\nThe day before Valentine leaves, she invites Kern to a fashion show where she is modeling. After the show they speak about the dream Kern had about her, where he saw her at the age of 50 and happy with an unidentified man. The conversation then turns to Kern and the reasons why he disliked Karin. Kern reveals that before becoming a judge, he was in love with a woman very much like Karin, who betrayed him for another man. While preparing for his exam, he once went to the same theatre where the fashion show took place and he accidentally dropped one of his books. When he picked it up, Kern studied the chapter where the book accidentally opened, which turned out to be the crucial question at his examination. After his girlfriend left him, he followed her across the English Channel but never saw her again, because she died in an accident. Later, he was assigned to judge a case where the defendant was the same man who took his girlfriend from him. Despite this connection, Kern did not recuse himself from the case and found the man guilty. He tells Valentine the judgment was entirely legal but also that he subsequently requested early retirement.\nValentine boards the ferry to England. Auguste is also on the ferry, clutching the dog he had temporarily abandoned. Although living in the same neighborhood and nearly crossing paths many times, the two have still never met. Suddenly a storm rises and sinks both the ferry and the boat with Karin and her boyfriend. Only seven survivors are pulled from the ferry: the main characters from the first two films of the trilogy, Julie and Olivier from Blue, Karol and Dominique from White, and Valentine and Auguste, who meet for the first time, as well as an English bartender named Stephen Killian. 
As in the previous films, the film's final sequence shows a character crying - in this case, the judge - but the final image replicates the iconic chewing-gum poster of Valentine, but this time with real emotion showing on her face.",
        "title": "Three Colors: Red",
        "question_id": "80becb22-908d-84bc-3a5f-00b620d551bc",
        "question": "What was the profession of the dog's owner?",
        "answers": ["Retired Judge"],
        "no_answer": false
      },
      "truncated_cells": []
    }
  ],
  "num_rows_total": 5247,
  "num_rows_per_page": 100,
  "partial": false
}
```

If the result has `partial: true`, it means that the search couldn't be run on the full dataset because it's too big.

Indeed, the indexing for `/search` can be partial if the dataset is bigger than 5GB: in that case, only the first 5GB are indexed.


## Truncated responses

Unlike `/first-rows`, there is currently no truncation in `/search`.
The `truncated_cells` field is still there but is always empty.


<EditOnGithub source="https://github.com/huggingface/dataset-viewer/blob/main/docs/source/search.md" />

### Check dataset validity
https://huggingface.co/docs/dataset-viewer/valid.md

# Check dataset validity

Before you download a dataset from the Hub, it is helpful to know if a specific dataset you're interested in is available. The dataset viewer provides the `/is-valid` endpoint to check if a specific dataset works without any errors.

The API endpoint will return an error for datasets that cannot be loaded with the [🤗 Datasets](https://github.com/huggingface/datasets) library, for example, because the data hasn't been uploaded or the format is not supported.

<Tip warning={true}>
  The largest datasets are partially supported by the dataset viewer. If they are{" "}
  <a href="https://huggingface.co/docs/datasets/stream">streamable</a>, the dataset
  viewer can extract the first 100 rows without downloading the whole dataset.
  This is especially useful for previewing large datasets where downloading the
  whole dataset may take hours! See the <code>preview</code> field in the
  response of <code>/is-valid</code> to check if a dataset is partially
  supported.
</Tip>

This guide shows you how to check dataset validity programmatically, but feel free to try it out with [Postman](https://www.postman.com/huggingface/workspace/hugging-face-apis/request/23242779-17b761d0-b2b8-4638-a4f7-73be9049c324), [RapidAPI](https://rapidapi.com/hugging-face-hugging-face-default/api/hugging-face-datasets-api), or [ReDoc](https://redocly.github.io/redoc/?url=https://datasets-server.huggingface.co/openapi.json#operation/isValidDataset).

## Check if a dataset is valid

`/is-valid` checks whether a specific dataset loads without any error. This endpoint's query parameter requires you to specify the name of the dataset:

<inferencesnippet>
<python>
```python
import requests
headers = {"Authorization": f"Bearer {API_TOKEN}"}
API_URL = "https://datasets-server.huggingface.co/is-valid?dataset=cornell-movie-review-data/rotten_tomatoes"
def query():
    response = requests.get(API_URL, headers=headers)
    return response.json()
data = query()
```
</python>
<js>
```js
import fetch from "node-fetch";
async function query(data) {
    const response = await fetch(
        "https://datasets-server.huggingface.co/is-valid?dataset=cornell-movie-review-data/rotten_tomatoes",
        {
            headers: { Authorization: `Bearer ${API_TOKEN}` },
            method: "GET"
        }
    );
    const result = await response.json();
    return result;
}
query().then((response) => {
    console.log(JSON.stringify(response));
});
```
</js>
<curl>
```curl
curl https://datasets-server.huggingface.co/is-valid?dataset=cornell-movie-review-data/rotten_tomatoes \
        -X GET \
        -H "Authorization: Bearer ${API_TOKEN}"
```
</curl>
</inferencesnippet>

The response looks like this if a dataset is valid:

```json
{
  "viewer": true,
  "preview": true,
  "search": true,
  "filter": true,
  "statistics": true,
}
```

The response looks like this if a dataset is valid but `/search` is not available for it:

```json
{
  "viewer": true,
  "preview": true,
  "search": false,
  "filter": true,
  "statistics": true,
}
```

The response looks like this if a dataset is valid but `/filter` is not available for it:

```json
{
  "viewer": true,
  "preview": true,
  "search": true,
  "filter": false,
  "statistics": true,
}
```

Similarly, if the statistics are not available:

```json
{
  "viewer": true,
  "preview": true,
  "search": true,
  "filter": true,
  "statistics": false,
}
```

If only the first rows of a dataset are available, then the response looks like:

```json
{
  "viewer": false,
  "preview": true,
  "search": true,
  "filter": true,
  "statistics": true,
}
```

Finally, if the dataset is not valid at all, then the response is:

```json
{
  "viewer": false,
  "preview": false,
  "search": false,
  "filter": false,
  "statistics": false,
}
```
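
If you want to act on this response programmatically, here is a minimal sketch (the dataset name and the `API_TOKEN` placeholder are just examples) that queries `/is-valid` and reports which features are available:

```py
import requests

API_TOKEN = "hf_xxx"  # placeholder: your Hugging Face User Access Token
headers = {"Authorization": f"Bearer {API_TOKEN}"}

def check_validity(dataset: str) -> dict:
    # Ask the /is-valid endpoint whether the dataset works without errors
    response = requests.get(
        "https://datasets-server.huggingface.co/is-valid",
        params={"dataset": dataset},
        headers=headers,
    )
    response.raise_for_status()
    return response.json()

report = check_validity("cornell-movie-review-data/rotten_tomatoes")
for feature in ("viewer", "preview", "search", "filter", "statistics"):
    print(f"{feature}: {'available' if report.get(feature) else 'not available'}")
```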

Some cases where a dataset is not valid are:

- the dataset viewer is disabled
- the dataset is gated but the access is not granted: no token is passed or the passed token is not authorized
- the dataset is private but the owner is not a PRO user or an Enterprise Hub org
- the dataset contains no data or the data format is not supported

<Tip>
  Remember if a dataset is <a href="./quick_start#private-and-gated-datasets">gated</a>,
  you'll need to provide your user token to submit a successful query!
</Tip>


<EditOnGithub source="https://github.com/huggingface/dataset-viewer/blob/main/docs/source/valid.md" />

### List Parquet files
https://huggingface.co/docs/dataset-viewer/parquet.md

# List Parquet files

Datasets can be published in any format (CSV, JSONL, directories of images, etc.) to the Hub, and they are easily accessed with the 🤗 [Datasets](https://huggingface.co/docs/datasets/) library. For a more performant experience (especially when it comes to large datasets), the dataset viewer automatically converts every dataset to the [Parquet](https://parquet.apache.org/) format.

## What is Parquet?

Parquet is a columnar storage format optimized for querying and processing large datasets. Parquet is a popular choice for big data processing and analytics and is widely used for data processing and machine learning.

In Parquet, data is divided into chunks called "row groups", and within each row group, it is stored in columns rather than rows. Each row group column is compressed separately with the compression algorithm best suited to its data, and stores metadata and statistics (min/max value, number of NULL values) about the data it contains.

This structure allows for efficient data reading and querying:
- only the necessary columns are read from disk (projection pushdown); no need to read the entire file. This reduces the memory requirement for working with Parquet data. 
- entire row groups are skipped if the statistics stored in their metadata do not match the data of interest (automatic filtering)
- the data is compressed, which reduces the amount of data that needs to be stored and transferred.

A Parquet file contains a single table. If a dataset has multiple tables (e.g. multiple splits or subsets), each table is stored in a separate Parquet file.
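
As an illustration of projection pushdown, here is a minimal sketch with pandas (reusing one of the `ibm/duorc` Parquet URLs listed later in this guide) that loads only two columns instead of the whole table:

```py
import pandas as pd

# One of the converted Parquet files listed by the /parquet endpoint below
url = "https://huggingface.co/datasets/ibm/duorc/resolve/refs%2Fconvert%2Fparquet/SelfRC/test/0000.parquet"

# Only the "question" and "answers" columns end up in the DataFrame;
# the large "plot" column is left out
df = pd.read_parquet(url, columns=["question", "answers"])
print(df.head())
```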

## Conversion to Parquet

The Parquet files are published to the Hub on a specific `refs/convert/parquet` branch (like this `fancyzhx/amazon_polarity` [branch](https://huggingface.co/datasets/fancyzhx/amazon_polarity/tree/refs%2Fconvert%2Fparquet) for example) that parallels the `main` branch.
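
For example, a minimal sketch with the `huggingface_hub` library (assuming it is installed) to list the files published on that branch:

```py
from huggingface_hub import HfApi

api = HfApi()
# List the files published on the refs/convert/parquet branch
files = api.list_repo_files(
    "fancyzhx/amazon_polarity",
    repo_type="dataset",
    revision="refs/convert/parquet",
)
print(files)
```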

<Tip>

In order for the dataset viewer to generate a Parquet version of a dataset, the dataset must be _public_, or owned by a [PRO user](https://huggingface.co/pricing) or an [Enterprise Hub organization](https://huggingface.co/enterprise).

</Tip>

## Using the dataset viewer API

This guide shows you how to use the dataset viewer's `/parquet` endpoint to retrieve a list of a dataset's files converted to Parquet. Feel free to also try it out with [Postman](https://www.postman.com/huggingface/workspace/hugging-face-apis/request/23242779-f0cde3b9-c2ee-4062-aaca-65c4cfdd96f8), [RapidAPI](https://rapidapi.com/hugging-face-hugging-face-default/api/hugging-face-datasets-api), or [ReDoc](https://redocly.github.io/redoc/?url=https://datasets-server.huggingface.co/openapi.json#operation/listSplits).

The `/parquet` endpoint accepts the dataset name as its query parameter:

<inferencesnippet>
<python>
```python
import requests
headers = {"Authorization": f"Bearer {API_TOKEN}"}
API_URL = "https://datasets-server.huggingface.co/parquet?dataset=ibm/duorc"
def query():
    response = requests.get(API_URL, headers=headers)
    return response.json()
data = query()
```
</python>
<js>
```js
import fetch from "node-fetch";
async function query(data) {
    const response = await fetch(
        "https://datasets-server.huggingface.co/parquet?dataset=ibm/duorc",
        {
            headers: { Authorization: `Bearer ${API_TOKEN}` },
            method: "GET"
        }
    );
    const result = await response.json();
    return result;
}
query().then((response) => {
    console.log(JSON.stringify(response));
});
```
</js>
<curl>
```curl
curl https://datasets-server.huggingface.co/parquet?dataset=ibm/duorc \
        -X GET \
        -H "Authorization: Bearer ${API_TOKEN}"
```
</curl>
</inferencesnippet>

The endpoint response is a JSON containing a list of the dataset's files in the Parquet format. For example, the [`ibm/duorc`](https://huggingface.co/datasets/ibm/duorc) dataset has six Parquet files, which correspond to the `test`, `train` and `validation` splits of its two subsets, `ParaphraseRC` and `SelfRC` (see the [List splits and subsets](./splits) guide for more details about splits and subsets).

The endpoint also gives the filename and size of each file:

```json
{
   "parquet_files":[
      {
         "dataset":"ibm/duorc",
         "config":"ParaphraseRC",
         "split":"test",
         "url":"https://huggingface.co/datasets/ibm/duorc/resolve/refs%2Fconvert%2Fparquet/ParaphraseRC/test/0000.parquet",
         "filename":"0000.parquet",
         "size":6136591
      },
      {
         "dataset":"ibm/duorc",
         "config":"ParaphraseRC",
         "split":"train",
         "url":"https://huggingface.co/datasets/ibm/duorc/resolve/refs%2Fconvert%2Fparquet/ParaphraseRC/train/0000.parquet",
         "filename":"0000.parquet",
         "size":26005668
      },
      {
         "dataset":"ibm/duorc",
         "config":"ParaphraseRC",
         "split":"validation",
         "url":"https://huggingface.co/datasets/ibm/duorc/resolve/refs%2Fconvert%2Fparquet/ParaphraseRC/validation/0000.parquet",
         "filename":"0000.parquet",
         "size":5566868
      },
      {
         "dataset":"ibm/duorc",
         "config":"SelfRC",
         "split":"test",
         "url":"https://huggingface.co/datasets/ibm/duorc/resolve/refs%2Fconvert%2Fparquet/SelfRC/test/0000.parquet",
         "filename":"0000.parquet",
         "size":3035736
      },
      {
         "dataset":"ibm/duorc",
         "config":"SelfRC",
         "split":"train",
         "url":"https://huggingface.co/datasets/ibm/duorc/resolve/refs%2Fconvert%2Fparquet/SelfRC/train/0000.parquet",
         "filename":"0000.parquet",
         "size":14851720
      },
      {
         "dataset":"ibm/duorc",
         "config":"SelfRC",
         "split":"validation",
         "url":"https://huggingface.co/datasets/ibm/duorc/resolve/refs%2Fconvert%2Fparquet/SelfRC/validation/0000.parquet",
         "filename":"0000.parquet",
         "size":3114390
      }
   ],
   "pending":[
      
   ],
   "failed":[
      
   ],
   "partial":false
}
```

## Sharded Parquet files

Big datasets are partitioned into Parquet files (shards) of about 500MB each. The filename contains the name of the dataset, the split, the shard index, and the total number of shards (`dataset-name-train-0000-of-0004.parquet`). For a given split, the elements in the list are sorted by their shard index, in ascending order. For example, the `train` split of the [`fancyzhx/amazon_polarity`](https://datasets-server.huggingface.co/parquet?dataset=fancyzhx/amazon_polarity) dataset is partitioned into 4 shards:

```json
{
   "parquet_files":[
      {
         "dataset":"fancyzhx/amazon_polarity",
         "config":"amazon_polarity",
         "split":"test",
         "url":"https://huggingface.co/datasets/fancyzhx/amazon_polarity/resolve/refs%2Fconvert%2Fparquet/amazon_polarity/test/0000.parquet",
         "filename":"0000.parquet",
         "size":117422360
      },
      {
         "dataset":"fancyzhx/amazon_polarity",
         "config":"amazon_polarity",
         "split":"train",
         "url":"https://huggingface.co/datasets/fancyzhx/amazon_polarity/resolve/refs%2Fconvert%2Fparquet/amazon_polarity/train/0000.parquet",
         "filename":"0000.parquet",
         "size":259761770
      },
      {
         "dataset":"fancyzhx/amazon_polarity",
         "config":"amazon_polarity",
         "split":"train",
         "url":"https://huggingface.co/datasets/fancyzhx/amazon_polarity/resolve/refs%2Fconvert%2Fparquet/amazon_polarity/train/0001.parquet",
         "filename":"0001.parquet",
         "size":258363554
      },
      {
         "dataset":"fancyzhx/amazon_polarity",
         "config":"amazon_polarity",
         "split":"train",
         "url":"https://huggingface.co/datasets/fancyzhx/amazon_polarity/resolve/refs%2Fconvert%2Fparquet/amazon_polarity/train/0002.parquet",
         "filename":"0002.parquet",
         "size":255471883
      },
      {
         "dataset":"fancyzhx/amazon_polarity",
         "config":"amazon_polarity",
         "split":"train",
         "url":"https://huggingface.co/datasets/fancyzhx/amazon_polarity/resolve/refs%2Fconvert%2Fparquet/amazon_polarity/train/0003.parquet",
         "filename":"0003.parquet",
         "size":254410930
      }
   ],
   "pending":[
      
   ],
   "failed":[
      
   ],
   "partial":false
}
```

To read and query the Parquet files, take a look at the [Query datasets from the dataset viewer API](parquet_process) guide.
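
For instance, here is a minimal sketch (illustrative only; loading around 1GB of Parquet takes a while) that collects the `train` shard URLs from the response above and concatenates them with pandas:

```py
import pandas as pd
import requests

r = requests.get("https://datasets-server.huggingface.co/parquet?dataset=fancyzhx/amazon_polarity")
parquet_files = r.json()["parquet_files"]

# Keep the train shards; the endpoint returns them in ascending shard-index order
train_urls = [f["url"] for f in parquet_files if f["split"] == "train"]

df = pd.concat(pd.read_parquet(url) for url in train_urls)
print(df.shape)
```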

## Partially converted datasets

The Parquet version can be partial in two cases:
- if the dataset is already in Parquet format but it contains row groups bigger than the recommended size (100-300MB uncompressed). This size is better for memory usage since Parquet is streamed row group per row group in most data libraries.
- if the dataset is not already in Parquet format or if it is bigger than 5GB.

In that case the Parquet files are generated up to 5GB and placed in a split directory prefixed with "partial", e.g. "partial-train" instead of "train".

You can check the row group size directly on Hugging Face using the Parquet metadata sidebar, for example [here](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu/tree/main/data/CC-MAIN-2013-20?show_file_info=data%2FCC-MAIN-2013-20%2Ftrain-00000-of-00014.parquet):

![clic-parquet-metadata-sidebar](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets-server/clic-parquet-metadata-sidebar.png)

![parquet-metadata-sidebar-total-byte-size](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets-server/parquet-metadata-sidebar-total-byte-size.png)

## Parquet-native datasets

When the dataset is already in Parquet format, the data is not converted and the files in `refs/convert/parquet` are links to the original files. There is one exception to this rule, to keep the dataset viewer API fast: if the [row group](https://parquet.apache.org/docs/concepts/) size of the original Parquet files is too big, new Parquet files are generated.

## Using the Hugging Face Hub API

For convenience, you can directly use the Hugging Face Hub `/api/parquet` endpoint which returns the list of Parquet URLs:

<inferencesnippet>
<python>
```python
import requests
headers = {"Authorization": f"Bearer {API_TOKEN}"}
API_URL = "https://huggingface.co/api/datasets/ibm/duorc/parquet"
def query():
    response = requests.get(API_URL, headers=headers)
    return response.json()
urls = query()
```
</python>
<js>
```js
import fetch from "node-fetch";
async function query(data) {
    const response = await fetch(
        "https://huggingface.co/api/datasets/ibm/duorc/parquet",
        {
            headers: { Authorization: `Bearer ${API_TOKEN}` },
            method: "GET"
        }
    );
    const urls = await response.json();
    return urls;
}
query().then((response) => {
    console.log(JSON.stringify(response));
});
```
</js>
<curl>
```curl
curl https://huggingface.co/api/datasets/ibm/duorc/parquet \
        -X GET \
        -H "Authorization: Bearer ${API_TOKEN}"
```
</curl>
</inferencesnippet>

The endpoint response is a JSON containing the URLs of the dataset's Parquet files for each split and subset. For example, the [`ibm/duorc`](https://huggingface.co/datasets/ibm/duorc) dataset has one Parquet file for the train split of the `ParaphraseRC` subset (see the [List splits and subsets](./splits) guide for more details about splits and subsets).

```json
{
   "ParaphraseRC":{
      "test":[
         "https://huggingface.co/api/datasets/ibm/duorc/parquet/ParaphraseRC/test/0.parquet"
      ],
      "train":[
         "https://huggingface.co/api/datasets/ibm/duorc/parquet/ParaphraseRC/train/0.parquet"
      ],
      "validation":[
         "https://huggingface.co/api/datasets/ibm/duorc/parquet/ParaphraseRC/validation/0.parquet"
      ]
   },
   "SelfRC":{
      "test":[
         "https://huggingface.co/api/datasets/ibm/duorc/parquet/SelfRC/test/0.parquet"
      ],
      "train":[
         "https://huggingface.co/api/datasets/ibm/duorc/parquet/SelfRC/train/0.parquet"
      ],
      "validation":[
         "https://huggingface.co/api/datasets/ibm/duorc/parquet/SelfRC/validation/0.parquet"
      ]
   }
}
```

Optionally you can specify which subset name to return, as well as which split:

<inferencesnippet>
<python>
```python
import requests
headers = {"Authorization": f"Bearer {API_TOKEN}"}
API_URL = "https://huggingface.co/api/datasets/ibm/duorc/parquet/ParaphraseRC/train"
def query():
    response = requests.get(API_URL, headers=headers)
    return response.json()
urls = query()
```
</python>
<js>
```js
import fetch from "node-fetch";
async function query(data) {
    const response = await fetch(
        "https://huggingface.co/api/datasets/ibm/duorc/parquet/ParaphraseRC/train",
        {
            headers: { Authorization: `Bearer ${API_TOKEN}` },
            method: "GET"
        }
    );
    const urls = await response.json();
    return urls;
}
query().then((response) => {
    console.log(JSON.stringify(response));
});
```
</js>
<curl>
```curl
curl https://huggingface.co/api/datasets/ibm/duorc/parquet/ParaphraseRC/train \
        -X GET \
        -H "Authorization: Bearer ${API_TOKEN}"
```
</curl>
</inferencesnippet>

```json
[
  "https://huggingface.co/api/datasets/ibm/duorc/parquet/ParaphraseRC/train/0.parquet"
]
```

Each Parquet file can also be accessed using its shard index: `https://huggingface.co/api/datasets/ibm/duorc/parquet/ParaphraseRC/train/0.parquet` redirects to `https://huggingface.co/datasets/ibm/duorc/resolve/refs%2Fconvert%2Fparquet/ParaphraseRC/train/0000.parquet` for example.
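
Since these URLs redirect to the files on `refs/convert/parquet`, a minimal sketch like the following (shard index `0`, as in the example above) can read a specific shard directly:

```py
import io

import pandas as pd
import requests

# The Hub API URL redirects to the actual Parquet file on refs/convert/parquet
shard_url = "https://huggingface.co/api/datasets/ibm/duorc/parquet/ParaphraseRC/train/0.parquet"
content = requests.get(shard_url).content
df = pd.read_parquet(io.BytesIO(content))
print(df.shape)
```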


<EditOnGithub source="https://github.com/huggingface/dataset-viewer/blob/main/docs/source/parquet.md" />

### PostgreSQL
https://huggingface.co/docs/dataset-viewer/postgresql.md

# PostgreSQL

[PostgreSQL](https://www.postgresql.org/docs/) is a powerful, open source object-relational database system. It has been the most [popular](https://survey.stackoverflow.co/2024/technology#most-popular-technologies-database) database among application developers for several years running. [pgai](https://github.com/timescale/pgai) is a PostgreSQL extension that allows you to easily ingest Hugging Face datasets into your PostgreSQL database.


## Run PostgreSQL with pgai installed

You can easily run a Docker container that includes PostgreSQL with pgai.

```bash
docker run -d --name pgai -p 5432:5432 \
-v pg-data:/home/postgres/pgdata/data \
-e POSTGRES_PASSWORD=password timescale/timescaledb-ha:pg17
```

Then run the following command to install pgai into the database.

```bash
docker exec -it pgai psql -c "CREATE EXTENSION ai CASCADE;"
```

You can then connect to the database using the `psql` command line tool in the container.

```bash
docker exec -it pgai psql
```

or connect with your favorite PostgreSQL client using the following connection string: `postgresql://postgres:password@localhost:5432/postgres`

Alternatively, you can install pgai into an existing PostgreSQL database by following the instructions in the [GitHub repo](https://github.com/timescale/pgai).

## Create a table from a dataset

To load a dataset into PostgreSQL, you can use the `ai.load_dataset` function. This function will create a PostgreSQL table, and load the dataset from the Hugging Face Hub
in a streaming fashion.

```sql
select ai.load_dataset('rajpurkar/squad', table_name => 'squad');
```

You can now query the table using standard SQL.

```sql
select * from squad limit 10;
```

Full documentation for the `ai.load_dataset` function can be found [here](https://github.com/timescale/pgai/blob/main/docs/load_dataset_from_huggingface.md).

## Import only a subset of the dataset

You can also import a subset of the dataset by specifying the `max_batches` parameter.
This is useful if the dataset is large and you want to experiment with a smaller subset.

```sql
SELECT ai.load_dataset('rajpurkar/squad', table_name => 'squad', batch_size => 100, max_batches => 1);
```

## Load a dataset into an existing table

You can also load a dataset into an existing table.
This is useful if you want more control over the data schema or want to predefine indexes and constraints on the data.

```sql
select ai.load_dataset('rajpurkar/squad', table_name => 'squad', if_table_exists => 'append');
```


<EditOnGithub source="https://github.com/huggingface/dataset-viewer/blob/main/docs/source/postgresql.md" />

### Server infrastructure
https://huggingface.co/docs/dataset-viewer/server.md

# Server infrastructure

The [dataset viewer](https://github.com/huggingface/dataset-viewer) has two main components that work together to return queries about a dataset instantly:

- a user-facing web API for exploring and returning information about a dataset
- a server that runs the queries ahead of time and caches them in a database

While most of the documentation is focused on the web API, the server is crucial because it performs all the time-consuming preprocessing and stores the results so the web API can retrieve and serve them to the user. This saves a user time because instead of generating the response every time it gets requested, the dataset viewer can return the preprocessed results instantly from the cache.

There are three elements that keep the server running: the job queue, workers, and the cache.

## Job queue

The job queue is a list of jobs, stored in a Mongo database, that the workers should complete. The jobs are practically identical to the endpoints the user uses; the difference is that the server runs the jobs ahead of time, and the user gets the results when they use the endpoint.

There are three jobs:

- `/splits` corresponds to the `/splits` endpoint. It refreshes a dataset and then returns that dataset's splits and subsets. For every split in the dataset, it'll create a new job.
- `/first-rows` corresponds to the `/first-rows` endpoint. It gets the first 100 rows and columns of a dataset split.
- `/parquet` corresponds to the `/parquet` endpoint. It downloads the whole dataset, converts it to [parquet](https://parquet.apache.org/) and publishes the parquet files to the Hub.

You might've noticed the `/rows` and `/search` endpoints don't have a job in the queue. The responses from these endpoints are generated on demand.

## Workers

Workers are responsible for executing the jobs in the queue. They complete the actual preprocessing requests, such as getting a list of splits and subsets. The workers are configured with environment variables, such as the minimum or maximum number of rows returned by a worker, or the maximum number of jobs to start per dataset user or organization.

Take a look at the [workers configuration](https://github.com/huggingface/dataset-viewer/tree/main/services/worker#configuration) for a complete list of the environment variables if you're interested in learning more.

## Cache

Once the workers complete a job, the results are stored - or _cached_ - in a Mongo database. When a user makes a request with an endpoint like `/first-rows`, the dataset viewer retrieves the preprocessed response from the cache, and serves it to the user. This eliminates the time a user would've waited if the server hadn't already completed the job and stored the response.

As a result, users can get their requested information about a dataset (even large ones) nearly instantaneously!


<EditOnGithub source="https://github.com/huggingface/dataset-viewer/blob/main/docs/source/server.md" />

### Filter rows in a dataset
https://huggingface.co/docs/dataset-viewer/filter.md

# Filter rows in a dataset

The dataset viewer provides a `/filter` endpoint for filtering rows in a dataset.

<Tip warning={true}>
  Currently, only <a href="./parquet">datasets with Parquet exports</a>
  are supported, so the dataset viewer can index the contents and run the filter query without
  downloading the whole dataset.
</Tip>

This guide shows you how to use the dataset viewer's `/filter` endpoint to filter rows based on a query string.
Feel free to also try it out with [ReDoc](https://redocly.github.io/redoc/?url=https://datasets-server.huggingface.co/openapi.json#operation/filterRows).

The `/filter` endpoint accepts the following query parameters:
- `dataset`: the dataset name, for example `nyu-mll/glue` or `mozilla-foundation/common_voice_10_0`
- `config`: the subset name, for example `cola`
- `split`: the split name, for example `train`
- `where`: the filter condition
- `orderby`: the order-by clause
- `offset`: the offset of the slice, for example `150`
- `length`: the length of the slice, for example `10` (maximum: `100`)

The `where` parameter must be expressed as a comparison predicate, which can be:
- a simple predicate composed of a column name in double quotes, a comparison operator, and a value
  - the comparison operators are: `=`, `<>`, `>`, `>=`, `<`, `<=`
- a composite predicate composed of two or more simple predicates (optionally grouped with parentheses to indicate the order of evaluation), combined with logical operators
  - the logical operators are: `AND`, `OR`, `NOT`

For example, the following `where` parameter value
```
where="age">30 AND ("name"='Simone' OR "children"=0)
```
will filter the data to select only those rows where the float "age" column is larger than 30 and either the string "name" column is equal to 'Simone' or the integer "children" column is equal to 0.

<Tip>
  Note that, following SQL syntax, in comparison predicates,
  column names should be enclosed in double quotes (<code>"name"</code>),
  and string values must be enclosed in single quotes (<code>'Simone'</code>).
  Additionally, if the string value contains a single quote, it must be escaped with another single quote,
  for example: <code>'O''Hara'</code>.
</Tip>

The `orderby` parameter must contain the column name (in double quotes) whose values will be sorted (in ascending order by default).
To sort the rows in descending order, use the DESC keyword, like `orderby="age" DESC`.
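
If you build the request in Python, a minimal sketch like the following (the `where` and `orderby` values are only illustrative) lets `requests` URL-encode the clauses for you:

```py
import requests

params = {
    "dataset": "ibm/duorc",
    "config": "SelfRC",
    "split": "train",
    # column names in double quotes, string values in single quotes
    "where": '"no_answer"=true',
    "orderby": '"question_id" DESC',
    "length": 2,
}
response = requests.get("https://datasets-server.huggingface.co/filter", params=params)
data = response.json()
```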

For example, let's filter the rows with `no_answer=true` in the `train` split of the `SelfRC` subset of the `ibm/duorc` dataset, restricting the results to the slice 150-151:

<inferencesnippet>
<python>
```python
import requests
headers = {"Authorization": f"Bearer {API_TOKEN}"}
API_URL = "https://datasets-server.huggingface.co/filter?dataset=ibm/duorc&config=SelfRC&split=train&where="no_answer"=true&offset=150&length=2"
def query():
    response = requests.get(API_URL, headers=headers)
    return response.json()
data = query()
```
</python>
<js>
```js
import fetch from "node-fetch";
async function query(data) {
    const response = await fetch(
        "https://datasets-server.huggingface.co/filter?dataset=ibm/duorc&config=SelfRC&split=train&where="no_answer"=true&offset=150&length=2",
        {
            headers: { Authorization: `Bearer ${API_TOKEN}` },
            method: "GET"
        }
    );
    const result = await response.json();
    return result;
}
query().then((response) => {
    console.log(JSON.stringify(response));
});
```
</js>
<curl>
```curl
curl 'https://datasets-server.huggingface.co/filter?dataset=ibm/duorc&config=SelfRC&split=train&where="no_answer"=true&offset=150&length=2' \
        -X GET \
        -H "Authorization: Bearer ${API_TOKEN}"
```
</curl>
</inferencesnippet>

The endpoint response is a JSON containing two keys (same format as [`/rows`](./rows)):

- The [`features`](https://huggingface.co/docs/datasets/about_dataset_features) of a dataset, including the column's name and data type.
- The slice of `rows` of a dataset and the content contained in each column of a specific row.

The rows are ordered by the row index.

For example, here are the `features` and the slice 150-151 of matching `rows` of the `ibm/duorc`/`SelfRC` train split for the `where` condition `no_answer=true`:

```json
{
   "features":[
      {
         "feature_idx":0,
         "name":"plot_id",
         "type":{
            "dtype":"string",
            "_type":"Value"
         }
      },
      {
         "feature_idx":1,
         "name":"plot",
         "type":{
            "dtype":"string",
            "_type":"Value"
         }
      },
      {
         "feature_idx":2,
         "name":"title",
         "type":{
            "dtype":"string",
            "_type":"Value"
         }
      },
      {
         "feature_idx":3,
         "name":"question_id",
         "type":{
            "dtype":"string",
            "_type":"Value"
         }
      },
      {
         "feature_idx":4,
         "name":"question",
         "type":{
            "dtype":"string",
            "_type":"Value"
         }
      },
      {
         "feature_idx":5,
         "name":"answers",
         "type":{
            "feature":{
               "dtype":"string",
               "_type":"Value"
            },
            "_type":"List"
         }
      },
      {
         "feature_idx":6,
         "name":"no_answer",
         "type":{
            "dtype":"bool",
            "_type":"Value"
         }
      }
   ],
   "rows":[
      {
         "row_idx":12825,
         "row":{
            "plot_id":"/m/06qxsf",
            "plot":"Prologue\nA creepy-looking coroner introduces three different horror tales involving his current work on cadavers in \"body bags\".\n\"The Gas Station\"[edit]\nAnne is a young college student who arrives for her first job working the night shift at an all-night filling station near Haddonfield, Illinois (a reference to the setting of Carpenter's two Halloween films). The attending worker, Bill, tells her that a serial killer has broken out of a mental hospital, and cautions her not to leave the booth at the station without the keys because the door locks automatically. After Bill leaves, Anne is alone and the tension mounts as she deals with various late-night customers seeking to buy gas for a quick fill-up, purchase cigarettes or just use the restroom key, unsure whether any of them might be the escaped maniac. Eventually, when Anne suspects that the escaped killer is lurking around the gas station, she tries to call the police, only to find that the phone line is dead. Soon after that, she finds an elaborately grotesque drawing in the Restroom and then the dead body of a transient sitting in a pickup truck on the lift in one of the garage bays. She makes a phone call for help which results in her realization that \"Bill\", the attending worker she met earlier, is in fact the escaped killer, who has killed the real Bill and is killing numerous passers-by. She finds the real Bill's dead body in one of the lockers. Serial Killer \"Bill\" then reappears and attempts to kill Anne with a machete, breaking into the locked booth by smashing out the glass with a sledgehammer and then chasing her around the deserted garage. Just as he is about to kill her, a customer returns, having forgotten his credit card, and he wrestles the killer, giving Anne time to crush him under the vehicle lift.\n\"Hair\"[edit]\nRichard Coberts is a middle-aged businessman who is very self-conscious about his thinning hair. This obsession has caused a rift between him and his long-suffering girlfriend Megan. Richard answers a television ad about a \"miracle\" hair transplant operation, pays a visit to the office, and meets the shady Dr. Lock, who, for a very large fee, agrees to give Richard a surgical procedure to make his hair grow back. The next day, Richard wakes up and removes the bandage around his head, and is overjoyed to find that he has a full head of hair. But soon he becomes increasingly sick and fatigued, and finds his hair continuing to grow and, additionally, growing out of parts of his body, where hair does not normally grow. Trying to cut some of the hair off, he finds that it \"bleeds\", and, examining some of the hairs under a magnifying glass, sees that they are alive and resemble tiny serpents. He goes back to Dr. Lock for an explanation, but finds himself a prisoner as Dr. Lock explains that he and his entire staff are aliens from another planet, seeking out narcissistic human beings and planting seeds of \"hair\" to take over their bodies for consumption as part of their plan to spread their essence to Earth.\n\"Eye\"[edit]\nBrent Matthews is a baseball player whose life and career take a turn for the worse when he gets into a serious car accident in which his right eye is gouged out. Unwilling to admit that his career is over, he jumps at the chance to undergo an experimental surgical procedure to replace his eye with one from a recently deceased person. 
But soon after the surgery he begins to see things out of his new eye that others cannot see, and begins having nightmares of killing women and having sex with them. Brent seeks out the doctor who operated on him, and the doctor tells him that the donor of his new eye was a recently executed serial killer and necrophile who killed several young women, and then had sex with their dead bodies. Brent becomes convinced that the spirit of the dead killer is taking over his body so that he can resume killing women. He flees back to his house and tells his skeptical wife, Cathy, about what is happening. Just then the spirit of the killer emerges and attempts to kill Cathy as well. Cathy fights back, subduing him long enough for Brent to re-emerge. Realizing that it is only a matter of time before the killer emerges again, Brent cuts out his donated eye, severing his link with the killer, but then bleeds to death.\nEpilogue The coroner is finishing telling his last tale when he hears a noise from outside the morgue. He crawls back inside a body bag, revealing that he himself is a living cadaver, as two other morgue workers begin to go to work on his \"John Doe\" corpse.",
            "title":"John Carpenter presents Body Bags",
            "question_id":"cf58489f-12ba-ace6-67a7-010d957b4ff4",
            "question":"What happens soon after the surgery?",
            "answers":[
               
            ],
            "no_answer":true
         },
         "truncated_cells":[
            
         ]
      },
      {
         "row_idx":12836,
         "row":{
            "plot_id":"/m/04z_3pm",
            "plot":"In 1976, eight-year-old Mary Daisy Dinkle (Bethany Whitmore) lives a lonely life in Mount Waverley, Australia. At school, she is teased by her classmates because of an unfortunate birthmark on her forehead; while at home, her distant father, Noel, and alcoholic, kleptomaniac mother, Vera, provide little support. Her only comforts are her pet rooster, Ethel; her favourite food, sweetened condensed milk; and a Smurfs-like cartoon show called The Noblets. One day, while at the post office with her mother, Mary spots a New York City telephone book and, becoming curious about Americans, decides to write to one. She randomly chooses Max Jerry Horowitz's name from the phone book and writes him a letter telling him about herself, sending it off in the hope that he will become her pen friend.\nMax Jerry Horowitz (Philip Seymour Hoffman) is a morbidly obese 44-year-old ex-Jewish atheist who has trouble forming close bonds with other people, due to various mental and social problems. Though Mary's letter initially gives him an anxiety attack, he decides to write back to her, and the two quickly become friends (partly due to their shared love of chocolate and The Noblets). Due to Vera's disapproval of Max, Mary tells him to send his letters to her agoraphobic neighbour, Len Hislop, whose mail she collects regularly. When Mary later asks Max about love, he suffers a severe anxiety attack and is institutionalized for eight months. After his release, he is hesitant to write to Mary again for some time. On his 48th birthday, he wins the New York lottery, using his winnings to buy a lifetime supply of chocolate and an entire collection of Noblet figurines. He gives the rest of his money to his elderly neighbour Ivy, who uses most of it to pamper herself before dying in an accident with a malfunctioning jet pack. Meanwhile, Mary becomes despondent, thinking Max has abandoned her.\nOn the advice of his therapist, Max finally writes back to Mary and explains he has been diagnosed with Asperger syndrome. Mary is thrilled to hear from him again, and the two continue their correspondence for the next several years. When Noel retires from his job at a tea bag factory, he takes up metal detecting, but is soon swept away (and presumably killed) by a big tidal bore while on a beach. Mary (Toni Colette) goes to university and has her birthmark surgically removed, and develops a crush on her Greek Australian neighbour, Damien Popodopoulos (Eric Bana). Drunk and guilt-ridden over her husband's death, Vera accidentally kills herself after she drinks embalming fluid (which she mistook for cooking sherry). Mary and Damien grow closer following Vera's death and are later married.\nInspired by her friendship with Max, Mary studies psychology at university, writing her doctoral dissertation on Asperger syndrome with Max as her test subject. She plans to have her dissertation published as a book; but when Max receives a copy from her, he is infuriated that she has taken advantage of his condition, which he sees as an integral part of his personality and not a disability that needs to be cured. He breaks off communication with Mary (by removing the letter \"M\" from his typewriter), who, heartbroken, has the entire run of her book pulped, effectively ending her budding career. She sinks into depression and begins drinking cooking sherry, as her mother had done. While searching through a cabinet, she finds a can of condensed milk, and sends it to Max as an apology. 
She checks the post daily for a response and one day finds a note from Damien, informing her that he has left her for his own pen friend, Desmond, a sheep farmer in New Zealand.\nMeanwhile, after an incident in which he nearly chokes a homeless man (Ian \"Molly\" Meldrum) in anger, after throwing a used cigarette, Max realizes Mary is an imperfect human being, like himself, and sends her a package containing his Noblet figurine collection as a sign of forgiveness. Mary, however, has sunken into despair after Damien's departure, and fails to find the package on her doorstep for several days. Finding some Valium that had belonged to her mother, and unaware that she is pregnant with Damien's child, Mary decides to commit suicide. As she takes the Valium and is on the verge of hanging herself, Len knocks on her door, having conquered his agoraphobia to alert her of Max's package. Inside, she finds the Noblet figurines and a letter from Max, in which he tells her of his realization that they are not perfect and expresses his forgiveness. He also states how much their friendship means to him, and that he hopes their paths will cross one day.\nOne year later, Mary travels to New York with her infant child to finally visit Max. Entering his apartment, Mary discovers Max on his couch, gazing upward with a smile on his face, having died earlier that morning. Looking around the apartment, Mary is awestruck to find all the letters she had sent to Max over the years, laminated and taped to the ceiling. Realizing Max had been gazing at the letters when he died, and seeing how much he had valued their friendship, Mary cries tears of joy and joins him on the couch.",
            "title":"Mary and Max",
            "question_id":"1dc019ad-80cf-1d49-5a69-368f90fae2f8",
            "question":"Why was Mary Daisy Dinkle teased in school?",
            "answers":[
               
            ],
            "no_answer":true
         },
         "truncated_cells":[
            
         ]
      }
   ],
   "num_rows_total":627,
   "num_rows_per_page":100,
   "partial":false
}
```

If the result has `partial: true`, it means that the filtering couldn't be run on the full dataset because it's too big.

Indeed, the indexing for `/filter` can be partial if the dataset is bigger than 5GB: in that case, only the first 5GB are indexed.


<EditOnGithub source="https://github.com/huggingface/dataset-viewer/blob/main/docs/source/filter.md" />

### Analyze a dataset on the Hub
https://huggingface.co/docs/dataset-viewer/analyze_data.md

# Analyze a dataset on the Hub


In the Quickstart, you were introduced to various endpoints for interacting with datasets on the Hub. One of the most useful ones is the `/parquet` endpoint, which allows you to get a dataset stored on the Hub and analyze it. This is a great way to explore the dataset and get a better understanding of its contents.

To demonstrate, this guide will show you an end-to-end example of how to retrieve a dataset from the Hub and do some basic data analysis with the Pandas library.

## Get a dataset

The [Hub](https://huggingface.co/datasets) is home to more than 200,000 datasets across a wide variety of tasks, sizes, and languages. For this example, you'll use the [`codeparrot/codecomplex`](https://huggingface.co/datasets/codeparrot/codecomplex) dataset, but feel free to explore and find another dataset that interests you! The dataset contains Java code from programming competitions, and the time complexity of the code is labeled by a group of algorithm experts. 

Let's say you're interested in the average length of the submitted code as it relates to the time complexity. Here's how you can get started. 

Use the `/parquet` endpoint to get the URL of the dataset's Parquet file (the dataset viewer converts every dataset to Parquet automatically):

<inferencesnippet>
<python>
```python
import requests
API_URL = "https://datasets-server.huggingface.co/parquet?dataset=codeparrot/codecomplex"
def query():
    response = requests.get(API_URL)
    return response.json()
data = query()
```
</python>
<js>
```js
import fetch from "node-fetch";
async function query(data) {
    const response = await fetch(
        "https://datasets-server.huggingface.co/parquet?dataset=codeparrot/codecomplex",
        {
            method: "GET"
        }
    );
    const result = await response.json();
    return result;
}
query().then((response) => {
    console.log(JSON.stringify(response));
});
```
</js>
<curl>
```curl
curl https://datasets-server.huggingface.co/parquet?dataset=codeparrot/codecomplex \
        -X GET
```
</curl>
</inferencesnippet>

```json
{"parquet_files": 
    [
        {"dataset": "codeparrot/codecomplex", "config": "default", "split": "train", "url": "https://huggingface.co/datasets/codeparrot/codecomplex/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet", "filename": "0000.parquet", "size": 4115908}
    ], 
 "pending": [], "failed": [], "partial": false
}
```

## Read dataset with Pandas

With the URL, you can read the Parquet file into a Pandas DataFrame:

```py
import pandas as pd

url = "https://huggingface.co/datasets/codeparrot/codecomplex/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet"
df = pd.read_parquet(url)
df.head(5)
```

|                                               src | complexity |                         problem |       from |
|--------------------------------------------------:|-----------:|--------------------------------:|-----------:|
| import java.io.*;\nimport java.math.BigInteger... |  quadratic |     1179_B. Tolik and His Uncle | CODEFORCES |
| import java.util.Scanner;\n \npublic class pil... |     linear |                 1197_B. Pillars | CODEFORCES |
| import java.io.BufferedReader;\nimport java.io... |     linear | 1059_C. Sequence Transformation | CODEFORCES |
| import java.util.*;\n\nimport java.io.*;\npubl... |     linear |                  1011_A. Stages | CODEFORCES |
| import java.io.OutputStream;\nimport java.io.I... |     linear |    1190_C. Tokitsukaze and Duel | CODEFORCES |

## Calculate mean code length by time complexity

Pandas is a powerful library for data analysis; group the dataset by time complexity, apply a function to calculate the average length of the code snippet, and plot the results:

```py
df.groupby('complexity')['src'].apply(lambda x: x.str.len().mean()).sort_values(ascending=False).plot.barh(color="orange")
```

<div class="flex justify-center">
    <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets-server/codecomplex.png"/>
</div>

<EditOnGithub source="https://github.com/huggingface/dataset-viewer/blob/main/docs/source/analyze_data.md" />

### ClickHouse
https://huggingface.co/docs/dataset-viewer/clickhouse.md

# ClickHouse

[ClickHouse](https://clickhouse.com/docs/en/intro) is a fast and efficient column-oriented database for analytical workloads, making it easy to analyze Hub-hosted datasets with SQL. To get started quickly, use [`clickhouse-local`](https://clickhouse.com/docs/en/operations/utilities/clickhouse-local) to run SQL queries from the command line and avoid the need to fully install ClickHouse.

<Tip>

Check this [blog](https://clickhouse.com/blog/query-analyze-hugging-face-datasets-with-clickhouse) for more details about how to analyze datasets on the Hub with ClickHouse.

</Tip>

To start, download and install `clickhouse-local`:

```bash
curl https://clickhouse.com/ | sh
```

For this example, you'll analyze the [maharshipandya/spotify-tracks-dataset](https://huggingface.co/datasets/maharshipandya/spotify-tracks-dataset) which contains information about Spotify tracks. Datasets on the Hub are stored as Parquet files and you can access it with the [`/parquet`](parquet) endpoint:

```py
import requests

r = requests.get("https://datasets-server.huggingface.co/parquet?dataset=maharshipandya/spotify-tracks-dataset")
j = r.json()
url = [f['url'] for f in j['parquet_files']]
url
['https://huggingface.co/datasets/maharshipandya/spotify-tracks-dataset/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet']
```

## Aggregate functions

Now you can begin to analyze the dataset. Use the `-q` argument to specify the query to execute, and the [`url`](https://clickhouse.com/docs/en/sql-reference/table-functions/url) function to create a table from the data in the Parquet file.

You should set `enable_url_encoding` to 0 to ensure the escape characters in the URL are preserved as intended, and `max_http_get_redirects` to 1 so the request follows the redirect to the Parquet file's path.

Let's start by identifying the most popular artists:

```bash
./clickhouse local -q "
    SELECT count() AS c, artists 
    FROM url('https://huggingface.co/datasets/maharshipandya/spotify-tracks-dataset/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet') 
    GROUP BY artists 
    ORDER BY c 
    DESC LIMIT 5
    SETTINGS enable_url_encoding=0, max_http_get_redirects=1"

┌───c─┬─artists─────────┐
│ 279 │ The Beatles     │
│ 271 │ George Jones    │
│ 236 │ Stevie Wonder   │
│ 224 │ Linkin Park     │
│ 222 │ Ella Fitzgerald │
└─────┴─────────────────┘
```

ClickHouse also provides functions for visualizing your queries. For example, you can use the [`bar`](https://clickhouse.com/docs/en/sql-reference/functions/other-functions#bar) function to create a bar chart of the danceability of songs:

```bash
./clickhouse local -q "
    SELECT
        round(danceability, 1) AS danceability,
        bar(count(), 0, max(count()) OVER ()) AS dist
    FROM url('https://huggingface.co/datasets/maharshipandya/spotify-tracks-dataset/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet')
    GROUP BY danceability
    ORDER BY danceability ASC
    SETTINGS enable_url_encoding=0, max_http_get_redirects=1"

┌─danceability─┬─dist─────────────────────────────────────────────────────────────────────────────┐
│            0 │ ▍                                                                                │
│          0.1 │ ████▎                                                                            │
│          0.2 │ █████████████▍                                                                   │
│          0.3 │ ████████████████████████                                                         │
│          0.4 │ ████████████████████████████████████████████▋                                    │
│          0.5 │ ██████████████████████████████████████████████████████████████████████▊         │
│          0.6 │ ████████████████████████████████████████████████████████████████████████████████ │
│          0.7 │ ██████████████████████████████████████████████████████████████████████           │
│          0.8 │ ██████████████████████████████████████████                                       │
│          0.9 │ ██████████▋                                                                      │
│            1 │ ▌                                                                                │
└──────────────┴──────────────────────────────────────────────────────────────────────────────────┘
```

To get a deeper understanding about a dataset, ClickHouse provides statistical analysis functions for determining how your data is correlated, calculating statistical hypothesis tests, and more. Take a look at ClickHouse's [List of Aggregate Functions](https://clickhouse.com/docs/en/sql-reference/aggregate-functions/reference) for a complete list of available aggregate functions.

## User-defined functions (UDFs)

A user-defined function (UDF) allows you to reuse custom logic. Hub datasets are often sharded into more than one Parquet file, so it can be easier and more efficient to create a UDF that lists and queries all the Parquet files of a dataset from just its name.

For this example, you'll need to run `clickhouse-local` in console mode so the UDF persists between queries:

```bash
./clickhouse local
```

Remember to set `enable_url_encoding` to 0 and `max_http_get_redirects` to 1 to redirect to the path of the Parquet files:

```bash
SET max_http_get_redirects = 1, enable_url_encoding = 0
```

Let's create a function to return a list of Parquet files from the [`tasksource/blog_authorship_corpus`](https://huggingface.co/datasets/tasksource/blog_authorship_corpus) dataset:

```bash
CREATE OR REPLACE FUNCTION hugging_paths AS dataset -> (
    SELECT arrayMap(x -> (x.1), JSONExtract(json, 'parquet_files', 'Array(Tuple(url String))'))
    FROM url('https://datasets-server.huggingface.co/parquet?dataset=' || dataset, 'JSONAsString')
);

SELECT hugging_paths('tasksource/blog_authorship_corpus') AS paths

['https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet','https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0001.parquet']
```

You can make this even easier by creating another function that calls `hugging_paths` and outputs all the files based on the dataset name:

```bash
CREATE OR REPLACE FUNCTION hf AS dataset -> (
    WITH hugging_paths(dataset) as urls
    SELECT multiIf(length(urls) = 0, '', length(urls) = 1, urls[1], 'https://huggingface.co/datasets/{' || arrayStringConcat(arrayMap(x -> replaceRegexpOne(replaceOne(x, 'https://huggingface.co/datasets/', ''), '\\.parquet$', ''), urls), ',') || '}.parquet')
);

SELECT hf('tasksource/blog_authorship_corpus') AS pattern

https://huggingface.co/datasets/{tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0000,tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0001}.parquet 
```

Now use the `hf` function to query any dataset by passing the dataset name:

```bash
SELECT sign, count(*), AVG(LENGTH(text)) AS avg_blog_length 
FROM url(hf('tasksource/blog_authorship_corpus'))
GROUP BY sign 
ORDER BY avg_blog_length 
DESC LIMIT(5) 

┌───────────┬────────┬────────────────────┐
│  sign     │ count  │ avg_blog_length    │
├───────────┼────────┼────────────────────┤
│ Aquarius  │ 49687  │ 1193.9523819107615 │
│ Leo       │ 53811  │ 1186.0665291483153 │
│ Cancer    │ 65048  │ 1160.8010392325666 │
│ Gemini    │ 51985  │ 1158.4132922958545 │
│ Virgo     │ 60399  │ 1142.9977648636566 │
└───────────┴────────┴────────────────────┘
```


<EditOnGithub source="https://github.com/huggingface/dataset-viewer/blob/main/docs/source/clickhouse.md" />

### PySpark
https://huggingface.co/docs/dataset-viewer/pyspark.md

# PySpark

[pyspark](https://spark.apache.org/docs/latest/api/python) is the Python interface for Apache Spark, enabling large-scale data processing and real-time analytics in a distributed environment using Python.

<Tip>

For a detailed guide on how to analyze datasets on the Hub with PySpark, check out this [blog](https://huggingface.co/blog/asoria/pyspark-hugging-face-datasets).

</Tip>

To start working with Parquet files in PySpark, you'll first need to add the file(s) to a Spark context. Below is an example of how to read a single Parquet file:

```py
from pyspark import SparkFiles
from pyspark.sql import SparkSession

# Initialize a Spark session
spark = SparkSession.builder.appName("WineReviews").getOrCreate()

# Add the Parquet file to the Spark context
spark.sparkContext.addFile("https://huggingface.co/api/datasets/james-burton/wine_reviews/parquet/default/train/0.parquet")

# Read the Parquet file into a DataFrame
df = spark.read.parquet(SparkFiles.get("0.parquet"))

```
If your dataset is sharded into multiple Parquet files, you'll need to add each file to the Spark context individually. Here's how to do it:

```py
import requests

# Fetch the URLs of the Parquet files for the train split
r = requests.get('https://huggingface.co/api/datasets/james-burton/wine_reviews/parquet')
train_parquet_files = r.json()['default']['train']

# Add each Parquet file to the Spark context
for url in train_parquet_files:
  spark.sparkContext.addFile(url)

# Read all Parquet files into a single DataFrame
df = spark.read.parquet(SparkFiles.getRootDirectory() + "/*.parquet")

```

Once you've loaded the data into a PySpark DataFrame, you can perform various operations to explore and analyze it:

```py
print(f"Shape of the dataset: {df.count()}, {len(df.columns)}")

# Display first 10 rows
df.show(n=10)

# Get a statistical summary of the data
df.describe().show()

# Print the schema of the DataFrame
df.printSchema()

```
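
From here, the usual PySpark SQL functions apply. Below is a minimal sketch of a grouped aggregation; it assumes the wine reviews dataset exposes `country` and `points` columns, so adjust the column names to the actual schema:

```py
from pyspark.sql import functions as F

# Average review score per country (assumes "country" and "points" columns exist)
(
    df.groupBy("country")
      .agg(F.avg("points").alias("avg_points"))
      .orderBy(F.desc("avg_points"))
      .show(n=10)
)
```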

<EditOnGithub source="https://github.com/huggingface/dataset-viewer/blob/main/docs/source/pyspark.md" />

### Get the number of rows and the size in bytes
https://huggingface.co/docs/dataset-viewer/size.md

# Get the number of rows and the size in bytes

This guide shows you how to use the dataset viewer's `/size` endpoint to retrieve a dataset's size programmatically. Feel free to also try it out with [ReDoc](https://redocly.github.io/redoc/?url=https://datasets-server.huggingface.co/openapi.json#operation/getSize).

The `/size` endpoint accepts the dataset name as its query parameter:

<inferencesnippet>
<python>
```python
import requests
headers = {"Authorization": f"Bearer {API_TOKEN}"}
API_URL = "https://datasets-server.huggingface.co/size?dataset=ibm/duorc"
def query():
    response = requests.get(API_URL, headers=headers)
    return response.json()
data = query()
```
</python>
<js>
```js
import fetch from "node-fetch";
async function query(data) {
    const response = await fetch(
        "https://datasets-server.huggingface.co/size?dataset=ibm/duorc",
        {
            headers: { Authorization: `Bearer ${API_TOKEN}` },
            method: "GET"
        }
    );
    const result = await response.json();
    return result;
}
query().then((response) => {
    console.log(JSON.stringify(response));
});
```
</js>
<curl>
```curl
curl https://datasets-server.huggingface.co/size?dataset=ibm/duorc \
        -X GET \
        -H "Authorization: Bearer ${API_TOKEN}"
```
</curl>
</inferencesnippet>

The endpoint response is a JSON containing the size of the dataset, as well as each of its subsets and splits. It provides the number of rows, the number of columns (where applicable) and the size in bytes for the different forms of the data: original files, size in memory (RAM) and auto-converted Parquet files. For example, the [ibm/duorc](https://huggingface.co/datasets/ibm/duorc) dataset has 187,213 rows across all its subsets and splits, for a total of about 59MB of Parquet files.

```json
{
   "size":{
      "dataset":{
         "dataset":"ibm/duorc",
         "num_bytes_original_files":58710973,
         "num_bytes_parquet_files":58710973,
         "num_bytes_memory":1060742354,
         "num_rows":187213
      },
      "configs":[
         {
            "dataset":"ibm/duorc",
            "config":"ParaphraseRC",
            "num_bytes_original_files":37709127,
            "num_bytes_parquet_files":37709127,
            "num_bytes_memory":704394283,
            "num_rows":100972,
            "num_columns":7
         },
         {
            "dataset":"ibm/duorc",
            "config":"SelfRC",
            "num_bytes_original_files":21001846,
            "num_bytes_parquet_files":21001846,
            "num_bytes_memory":356348071,
            "num_rows":86241,
            "num_columns":7
         }
      ],
      "splits":[
         {
            "dataset":"ibm/duorc",
            "config":"ParaphraseRC",
            "split":"train",
            "num_bytes_parquet_files":26005668,
            "num_bytes_memory":494389683,
            "num_rows":69524,
            "num_columns":7
         },
         {
            "dataset":"ibm/duorc",
            "config":"ParaphraseRC",
            "split":"validation",
            "num_bytes_parquet_files":5566868,
            "num_bytes_memory":106733319,
            "num_rows":15591,
            "num_columns":7
         },
         {
            "dataset":"ibm/duorc",
            "config":"ParaphraseRC",
            "split":"test",
            "num_bytes_parquet_files":6136591,
            "num_bytes_memory":103271281,
            "num_rows":15857,
            "num_columns":7
         },
         {
            "dataset":"ibm/duorc",
            "config":"SelfRC",
            "split":"train",
            "num_bytes_parquet_files":14851720,
            "num_bytes_memory":248966361,
            "num_rows":60721,
            "num_columns":7
         },
         {
            "dataset":"ibm/duorc",
            "config":"SelfRC",
            "split":"validation",
            "num_bytes_parquet_files":3114390,
            "num_bytes_memory":56359392,
            "num_rows":12961,
            "num_columns":7
         },
         {
            "dataset":"ibm/duorc",
            "config":"SelfRC",
            "split":"test",
            "num_bytes_parquet_files":3035736,
            "num_bytes_memory":51022318,
            "num_rows":12559,
            "num_columns":7
         }
      ]
   },
   "pending":[
      
   ],
   "failed":[
      
   ],
   "partial":false
}
```

If the size has `partial: true`, it means that the actual size of the dataset couldn't be determined because it's too big.

In that case, the number of rows and bytes can be lower than the actual numbers.
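
To work with these numbers programmatically, you can walk the response. A minimal sketch, reusing the `data` dictionary from the snippet above:

```py
# Print the number of rows and the Parquet size (in MB) of each subset
for config in data["size"]["configs"]:
    size_mb = config["num_bytes_parquet_files"] / 1_000_000
    print(f"{config['config']}: {config['num_rows']} rows, {size_mb:.1f}MB of Parquet files")
```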


<EditOnGithub source="https://github.com/huggingface/dataset-viewer/blob/main/docs/source/size.md" />

### Preview a dataset
https://huggingface.co/docs/dataset-viewer/first_rows.md

# Preview a dataset

The dataset viewer provides a `/first-rows` endpoint for visualizing the first 100 rows of a dataset. This'll give you a good idea of the data types and example data contained in a dataset.

This guide shows you how to use the dataset viewer's `/first-rows` endpoint to preview a dataset. Feel free to also try it out with [Postman](https://www.postman.com/huggingface/workspace/hugging-face-apis/request/23242779-32d6a8be-b800-446a-8cee-f6b5ca1710df), [RapidAPI](https://rapidapi.com/hugging-face-hugging-face-default/api/hugging-face-datasets-api), or [ReDoc](https://redocly.github.io/redoc/?url=https://datasets-server.huggingface.co/openapi.json#operation/listFirstRows).

The `/first-rows` endpoint accepts three query parameters:

- `dataset`: the dataset name, for example `nyu-mll/glue` or `mozilla-foundation/common_voice_10_0`
- `config`: the subset name, for example `cola`
- `split`: the split name, for example `train`

<inferencesnippet>
<python>
```python
import requests
headers = {"Authorization": f"Bearer {API_TOKEN}"}
API_URL = "https://datasets-server.huggingface.co/first-rows?dataset=ibm/duorc&config=SelfRC&split=train"
def query():
    response = requests.get(API_URL, headers=headers)
    return response.json()
data = query()
```
</python>
<js>
```js
import fetch from "node-fetch";
async function query(data) {
    const response = await fetch(
        "https://datasets-server.huggingface.co/first-rows?dataset=ibm/duorc&config=SelfRC&split=train",
        {
            headers: { Authorization: `Bearer ${API_TOKEN}` },
            method: "GET"
        }
    );
    const result = await response.json();
    return result;
}
query().then((response) => {
    console.log(JSON.stringify(response));
});
```
</js>
<curl>
```curl
curl "https://datasets-server.huggingface.co/first-rows?dataset=ibm/duorc&config=SelfRC&split=train" \
        -X GET \
        -H "Authorization: Bearer ${API_TOKEN}"
```
</curl>
</inferencesnippet>

The endpoint response is a JSON containing two keys:

- The [`features`](https://huggingface.co/docs/datasets/about_dataset_features) of a dataset, including the column's name and data type.
- The first 100 `rows` of a dataset and the content contained in each column of a specific row.

For example, here are the `features` and the first 100 `rows` of the `ibm/duorc`/`SelfRC` train split:

```json
{
  "dataset": "ibm/duorc",
  "config": "SelfRC",
  "split": "train",
  "features": [
    {
      "feature_idx": 0,
      "name": "plot_id",
      "type": { "dtype": "string", "_type": "Value" }
    },
    {
      "feature_idx": 1,
      "name": "plot",
      "type": { "dtype": "string", "_type": "Value" }
    },
    {
      "feature_idx": 2,
      "name": "title",
      "type": { "dtype": "string", "_type": "Value" }
    },
    {
      "feature_idx": 3,
      "name": "question_id",
      "type": { "dtype": "string", "_type": "Value" }
    },
    {
      "feature_idx": 4,
      "name": "question",
      "type": { "dtype": "string", "_type": "Value" }
    },
    {
      "feature_idx": 5,
      "name": "answers",
      "type": {
        "feature": { "dtype": "string", "_type": "Value" },
        "_type": "List"
      }
    },
    {
      "feature_idx": 6,
      "name": "no_answer",
      "type": { "dtype": "bool", "_type": "Value" }
    }
  ],
  "rows": [
    {
      "row_idx": 0,
      "row": {
        "plot_id": "/m/03vyhn",
        "plot": "200 years in the future, Mars has been colonized by a high-tech company.\nMelanie Ballard (Natasha Henstridge) arrives by train to a Mars mining camp which has cut all communication links with the company headquarters. She's not alone, as she is with a group of fellow police officers. They find the mining camp deserted except for a person in the prison, Desolation Williams (Ice Cube), who seems to laugh about them because they are all going to die. They were supposed to take Desolation to headquarters, but decide to explore first to find out what happened.They find a man inside an encapsulated mining car, who tells them not to open it. However, they do and he tries to kill them. One of the cops witnesses strange men with deep scarred and heavily tattooed faces killing the remaining survivors. The cops realise they need to leave the place fast.Desolation explains that the miners opened a kind of Martian construction in the soil which unleashed red dust. Those who breathed that dust became violent psychopaths who started to build weapons and kill the uninfected. They changed genetically, becoming distorted but much stronger.The cops and Desolation leave the prison with difficulty, and devise a plan to kill all the genetically modified ex-miners on the way out. However, the plan goes awry, and only Melanie and Desolation reach headquarters alive. Melanie realises that her bosses won't ever believe her. However, the red dust eventually arrives to headquarters, and Melanie and Desolation need to fight once again.",
        "title": "Ghosts of Mars",
        "question_id": "b440de7d-9c3f-841c-eaec-a14bdff950d1",
        "question": "How did the police arrive at the Mars mining camp?",
        "answers": ["They arrived by train."],
        "no_answer": false
      },
      "truncated_cells": []
    },
    {
      "row_idx": 1,
      "row": {
        "plot_id": "/m/03vyhn",
        "plot": "200 years in the future, Mars has been colonized by a high-tech company.\nMelanie Ballard (Natasha Henstridge) arrives by train to a Mars mining camp which has cut all communication links with the company headquarters. She's not alone, as she is with a group of fellow police officers. They find the mining camp deserted except for a person in the prison, Desolation Williams (Ice Cube), who seems to laugh about them because they are all going to die. They were supposed to take Desolation to headquarters, but decide to explore first to find out what happened.They find a man inside an encapsulated mining car, who tells them not to open it. However, they do and he tries to kill them. One of the cops witnesses strange men with deep scarred and heavily tattooed faces killing the remaining survivors. The cops realise they need to leave the place fast.Desolation explains that the miners opened a kind of Martian construction in the soil which unleashed red dust. Those who breathed that dust became violent psychopaths who started to build weapons and kill the uninfected. They changed genetically, becoming distorted but much stronger.The cops and Desolation leave the prison with difficulty, and devise a plan to kill all the genetically modified ex-miners on the way out. However, the plan goes awry, and only Melanie and Desolation reach headquarters alive. Melanie realises that her bosses won't ever believe her. However, the red dust eventually arrives to headquarters, and Melanie and Desolation need to fight once again.",
        "title": "Ghosts of Mars",
        "question_id": "a9f95c0d-121f-3ca9-1595-d497dc8bc56c",
        "question": "Who has colonized Mars 200 years in the future?",
        "answers": [
          "A high-tech company has colonized Mars 200 years in the future."
        ],
        "no_answer": false
      },
      "truncated_cells": []
    }
    ...
  ],
  "truncated": false
}
```
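
Since each entry in `rows` nests the actual content under a `row` key, a common next step is to load the preview into a DataFrame. A minimal sketch, reusing the `data` dictionary from the snippet above:

```py
import pandas as pd

# Build a DataFrame from the previewed rows
df = pd.DataFrame([r["row"] for r in data["rows"]])
print(df[["title", "question", "no_answer"]].head())
```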

## Truncated responses

For some datasets, the response size from `/first-rows` may exceed 1MB, in which case the response is truncated until the size is under 1MB. This means you may not get 100 rows in your response because the rows are truncated, in which case the `truncated` field would be `true`.

In some cases, if even the first few rows generate a response that exceeds 1MB, some of the columns are truncated and converted to a string. You'll see these listed in the `truncated_cells` field.

For example, the [`GEM/SciDuet`](https://datasets-server.huggingface.co/first-rows?dataset=GEM/SciDuet&config=default&split=train) dataset only returns 10 rows, and the `paper_abstract`, `paper_content`, `paper_headers`, `slide_content_text` and `target` columns are truncated:

```json
  ...
  "rows": [
      {
         "row_idx":8,
         "row":{
            "gem_id":"GEM-SciDuet-train-1#paper-954#slide-8",
            "paper_id":"954",
            "paper_title":"Incremental Syntactic Language Models for Phrase-based Translation",
            "paper_abstract":"\"This paper describes a novel technique for incorporating syntactic knowledge into phrasebased machi",
            "paper_content":"{\"paper_content_id\":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29",
            "paper_headers":"{\"paper_header_number\":[\"1\",\"2\",\"3\",\"3.1\",\"3.3\",\"4\",\"4.1\",\"6\",\"7\"],\"paper_header_content\":[\"Introduc",
            "slide_id":"GEM-SciDuet-train-1#paper-954#slide-8",
            "slide_title":"Does an Incremental Syntactic LM Help Translation",
            "slide_content_text":"\"but will it make my BLEU score go up?\\nMotivation Syntactic LM Decoder Integration Questions?\\nMose",
            "target":"\"but will it make my BLEU score go up?\\nMotivation Syntactic LM Decoder Integration Questions?\\nMose",
            "references":[]
         },
         "truncated_cells":[
            "paper_abstract",
            "paper_content",
            "paper_headers",
            "slide_content_text",
            "target"
         ]
      },
      {
         "row_idx":9,
         "row":{
            "gem_id":"GEM-SciDuet-train-1#paper-954#slide-9",
            "paper_id":"954",
            "paper_title":"Incremental Syntactic Language Models for Phrase-based Translation",
            "paper_abstract":"\"This paper describes a novel technique for incorporating syntactic knowledge into phrasebased machi",
            "paper_content":"{\"paper_content_id\":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29",
            "paper_headers":"{\"paper_header_number\":[\"1\",\"2\",\"3\",\"3.1\",\"3.3\",\"4\",\"4.1\",\"6\",\"7\"],\"paper_header_content\":[\"Introduc",
            "slide_id":"GEM-SciDuet-train-1#paper-954#slide-9",
            "slide_title":"Perplexity Results",
            "slide_content_text":"\"Language models trained on WSJ Treebank corpus\\nMotivation Syntactic LM Decoder Integration Questio",
            "target":"\"Language models trained on WSJ Treebank corpus\\nMotivation Syntactic LM Decoder Integration Questio",
            "references":[
               
            ]
         },
         "truncated_cells":[
            "paper_abstract",
            "paper_content",
            "paper_headers",
            "slide_content_text",
            "target"
         ]
      },
  ...
  ],
  "truncated": true
```
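
If you consume a truncated response programmatically, you may want to drop the truncated cells rather than parse their partial string values. A minimal sketch; `keep_untruncated` is a hypothetical helper, not part of any library:

```py
def keep_untruncated(data):
    """Return the previewed rows without their truncated cells."""
    return [
        {k: v for k, v in item["row"].items() if k not in item["truncated_cells"]}
        for item in data["rows"]
    ]
```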


<EditOnGithub source="https://github.com/huggingface/dataset-viewer/blob/main/docs/source/first_rows.md" />

### Get dataset information
https://huggingface.co/docs/dataset-viewer/info.md

# Get dataset information

The dataset viewer provides an `/info` endpoint for exploring general information about a dataset, including fields such as description, citation, homepage, license and features.

The `/info` endpoint accepts two query parameters:

- `dataset`: the dataset name
- `config`: the subset name

<inferencesnippet>
<python>
```python
import requests
headers = {"Authorization": f"Bearer {API_TOKEN}"}
API_URL = "https://datasets-server.huggingface.co/info?dataset=ibm/duorc&config=SelfRC"
def query():
    response = requests.get(API_URL, headers=headers)
    return response.json()
data = query()
```
</python>
<js>
```js
import fetch from "node-fetch";
async function query(data) {
    const response = await fetch(
        "https://datasets-server.huggingface.co/info?dataset=ibm/duorc&config=SelfRC",
        {
            headers: { Authorization: `Bearer ${API_TOKEN}` },
            method: "GET"
        }
    );
    const result = await response.json();
    return result;
}
query().then((response) => {
    console.log(JSON.stringify(response));
});
```
</js>
<curl>
```curl
curl "https://datasets-server.huggingface.co/info?dataset=ibm/duorc&config=SelfRC" \
        -X GET \
        -H "Authorization: Bearer ${API_TOKEN}"
```
</curl>
</inferencesnippet>

The endpoint response is a JSON with the `dataset_info` key. Its structure and content correspond to the [DatasetInfo](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.DatasetInfo) object of the `datasets` library.

```json
{
  "dataset_info": {
    "description": "",
    "citation": "",
    "homepage": "",
    "license": "",
    "features": {
      "plot_id": { "dtype": "string", "_type": "Value" },
      "plot": { "dtype": "string", "_type": "Value" },
      "title": { "dtype": "string", "_type": "Value" },
      "question_id": { "dtype": "string", "_type": "Value" },
      "question": { "dtype": "string", "_type": "Value" },
      "answers": {
        "feature": { "dtype": "string", "_type": "Value" },
        "_type": "List"
      },
      "no_answer": { "dtype": "bool", "_type": "Value" }
    },
    "builder_name": "parquet",
    "dataset_name": "duorc",
    "config_name": "SelfRC",
    "version": { "version_str": "0.0.0", "major": 0, "minor": 0, "patch": 0 },
    "splits": {
      "train": {
        "name": "train",
        "num_bytes": 248966361,
        "num_examples": 60721,
        "dataset_name": null
      },
      "validation": {
        "name": "validation",
        "num_bytes": 56359392,
        "num_examples": 12961,
        "dataset_name": null
      },
      "test": {
        "name": "test",
        "num_bytes": 51022318,
        "num_examples": 12559,
        "dataset_name": null
      }
    },
    "download_size": 21001846,
    "dataset_size": 356348071
  },
  "partial": false
}
```
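
For example, you can read the split sizes straight out of the response. A minimal sketch, reusing the `data` dictionary from the snippet above:

```py
info = data["dataset_info"]

# Print the number of examples and size in bytes of each split
for split_name, split in info["splits"].items():
    print(f"{split_name}: {split['num_examples']} examples, {split['num_bytes']} bytes")
```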


<EditOnGithub source="https://github.com/huggingface/dataset-viewer/blob/main/docs/source/info.md" />

### Polars
https://huggingface.co/docs/dataset-viewer/polars.md

# Polars 

[Polars](https://pola-rs.github.io/polars-book/user-guide/) is a fast DataFrame library written in Rust with Arrow as its foundation.

<Tip>

💡 Learn more about how to get the dataset URLs in the [List Parquet files](parquet) guide.

</Tip>

Let's start by grabbing the URLs to the `train` split of the [`tasksource/blog_authorship_corpus`](https://huggingface.co/datasets/tasksource/blog_authorship_corpus) dataset from the dataset viewer API:

```py
import requests 

r = requests.get("https://datasets-server.huggingface.co/parquet?dataset=tasksource/blog_authorship_corpus")
j = r.json()
urls = [f['url'] for f in j['parquet_files'] if f['split'] == 'train']
urls
['https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet', 'https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0001.parquet']
```

To read from a single Parquet file, use the [`read_parquet`](https://pola-rs.github.io/polars/py-polars/html/reference/api/polars.read_parquet.html) function to read it into a DataFrame and then execute your query:

```py
import polars as pl

df = (
    pl.read_parquet("https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet")
    .group_by("sign")
    .agg(
        [
            pl.count(),
            pl.col("text").str.len_chars().mean().alias("avg_blog_length")
        ]
    )
    .sort("avg_blog_length", descending=True)
    .limit(5)
)
print(df)
shape: (5, 3)
┌───────────┬───────┬─────────────────┐
│ sign      ┆ count ┆ avg_blog_length │
│ ---       ┆ ---   ┆ ---             │
│ str       ┆ u32   ┆ f64             │
╞═══════════╪═══════╪═════════════════╡
│ Cancer    ┆ 38956 ┆ 1206.521203     │
│ Leo       ┆ 35487 ┆ 1180.067377     │
│ Aquarius  ┆ 32723 ┆ 1152.113682     │
│ Virgo     ┆ 36189 ┆ 1117.198209     │
│ Capricorn ┆ 31825 ┆ 1102.397361     │
└───────────┴───────┴─────────────────┘
```

To read multiple Parquet files - for example, if the dataset is sharded - you'll need to use the [`concat`](https://pola-rs.github.io/polars/py-polars/html/reference/api/polars.concat.html) function to concatenate the files into a single DataFrame: 

```py
import polars as pl

df = (
    pl.concat([pl.read_parquet(url) for url in urls])
    .group_by("sign")
    .agg(
        [
            pl.count(),
            pl.col("text").str.len_chars().mean().alias("avg_blog_length")
        ]
    )
    .sort("avg_blog_length", descending=True)
    .limit(5)
)
print(df)
shape: (5, 3)
┌──────────┬───────┬─────────────────┐
│ sign     ┆ count ┆ avg_blog_length │
│ ---      ┆ ---   ┆ ---             │
│ str      ┆ u32   ┆ f64             │
╞══════════╪═══════╪═════════════════╡
│ Aquarius ┆ 49687 ┆ 1191.417212     │
│ Leo      ┆ 53811 ┆ 1183.878222     │
│ Cancer   ┆ 65048 ┆ 1158.969161     │
│ Gemini   ┆ 51985 ┆ 1156.069308     │
│ Virgo    ┆ 60399 ┆ 1140.958443     │
└──────────┴───────┴─────────────────┘
```

## Lazy API

Polars offers a [lazy API](https://pola-rs.github.io/polars-book/user-guide/lazy/using/) that is more performant and memory-efficient for large Parquet files. The LazyFrame API keeps track of what you want to do, and it'll only execute the entire query when you're ready. This way, the lazy API doesn't load everything into RAM beforehand, and it allows you to work with datasets larger than your available RAM.

To lazily read a Parquet file, use the [`scan_parquet`](https://pola-rs.github.io/polars/py-polars/html/reference/api/polars.scan_parquet.html) function instead. Then, execute the entire query with the [`collect`](https://pola-rs.github.io/polars/py-polars/html/reference/lazyframe/api/polars.LazyFrame.collect.html) function:

```py
import polars as pl

q = (
    pl.scan_parquet("https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet")
    .group_by("sign")
    .agg(
        [
            pl.count(),
            pl.col("text").str.len_chars().mean().alias("avg_blog_length")
        ]
    )
    .sort("avg_blog_length", descending=True)
    .limit(5)
)
df = q.collect()
```
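
The lazy API also works with sharded datasets: scan each file lazily and concatenate the resulting LazyFrames before collecting. A sketch, reusing the `urls` list from earlier:

```py
import polars as pl

# Lazily scan every shard, then run the same aggregation in one collect
q = (
    pl.concat([pl.scan_parquet(url) for url in urls])
    .group_by("sign")
    .agg(
        [
            pl.count(),
            pl.col("text").str.len_chars().mean().alias("avg_blog_length")
        ]
    )
    .sort("avg_blog_length", descending=True)
    .limit(5)
)
df = q.collect()
```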


<EditOnGithub source="https://github.com/huggingface/dataset-viewer/blob/main/docs/source/polars.md" />

### Data types
https://huggingface.co/docs/dataset-viewer/data_types.md

# Data types

Datasets supported by the dataset viewer have a tabular format, meaning a data point is represented in a row and its features are contained in columns. Using the `/first-rows` endpoint allows you to preview the first 100 rows of a dataset and information about each feature. Within the `features` key, you'll notice it returns a `_type` field. This value describes the data type of the column, and it is also known as a dataset's [`Features`](https://huggingface.co/docs/datasets/about_dataset_features). 

There are several different data `Features` for representing different data formats such as [`Audio`](https://huggingface.co/docs/datasets/v2.5.2/en/package_reference/main_classes#datasets.Audio) and [`Image`](https://huggingface.co/docs/datasets/v2.5.2/en/package_reference/main_classes#datasets.Image) for speech and image data respectively. Knowing a dataset feature gives you a better understanding of the data type you're working with, and how you can preprocess it.

For example, the `/first-rows` endpoint for the [Rotten Tomatoes](https://huggingface.co/datasets/cornell-movie-review-data/rotten_tomatoes) dataset returns the following:

```json
{"dataset": "cornell-movie-review-data/rotten_tomatoes",
 "config": "default",
 "split": "train",
 "features": [{"feature_idx": 0,
   "name": "text",
   "type": {"dtype": "string", 
   "id": null,
   "_type": "Value"}},
  {"feature_idx": 1,
   "name": "label",
   "type": {"num_classes": 2,
    "names": ["neg", "pos"],
    "id": null,
    "_type": "ClassLabel"}}],
  ...
 }
```

This dataset has two columns, `text` and `label`:

- The `text` column has a type of `Value`. The [`Value`](https://huggingface.co/docs/datasets/v2.5.2/en/package_reference/main_classes#datasets.Value) type is extremely versatile and represents scalar values such as strings, integers, dates, and even timestamp values.

- The `label` column has a type of `ClassLabel`. The [`ClassLabel`](https://huggingface.co/docs/datasets/v2.5.2/en/package_reference/main_classes#datasets.ClassLabel) type represents the number of classes in a dataset and their label names. Naturally, this means you'll frequently see `ClassLabel` used in classification datasets.

For a complete list of available data types, take a look at the [`Features`](https://huggingface.co/docs/datasets/v2.5.2/en/package_reference/main_classes#datasets.Features) documentation.
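
If you consume this response programmatically, a small sketch like the following builds a column-to-type lookup from the `features` list (here for the Rotten Tomatoes example above):

```py
import requests

# Fetch the /first-rows response for the Rotten Tomatoes dataset
r = requests.get(
    "https://datasets-server.huggingface.co/first-rows"
    "?dataset=cornell-movie-review-data/rotten_tomatoes&config=default&split=train"
)
features = r.json()["features"]

# Map each column name to its feature type, e.g. {"text": "Value", "label": "ClassLabel"}
feature_types = {f["name"]: f["type"]["_type"] for f in features}
print(feature_types)
```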


<EditOnGithub source="https://github.com/huggingface/dataset-viewer/blob/main/docs/source/data_types.md" />

### Splits and subsets
https://huggingface.co/docs/dataset-viewer/configs_and_splits.md

# Splits and subsets

Machine learning datasets are commonly organized in *splits* and they may also have *subsets* (also called *configurations*). These internal structures provide the scaffolding for building out a dataset, and determine how a dataset should be split and organized. Understanding a dataset's structure can help you create your own dataset, and know which subset of data to use during model training and evaluation.

![split-configs-server](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/split-configs-server.gif)

## Splits

Every processed and cleaned dataset contains *splits*, specific parts of the data reserved for specific needs. The most common splits are:

* `train`: data used to train a model; this data is exposed to the model
* `validation`: data reserved for evaluation and improving model hyperparameters; this data is hidden from the model
* `test`: data reserved for evaluation only; this data is completely hidden from the model and ourselves

The `validation` and `test` sets are especially important to ensure a model is actually learning instead of *overfitting*, or just memorizing the data.

## Subsets

A *subset* (also called *configuration*) is a higher-level internal structure than a split, and a subset contains splits. You can think of a subset as a sub-dataset contained within a larger dataset. It is a useful structure for adding additional layers of organization to a dataset. For example, if you take a look at the [Multilingual LibriSpeech (MLS)](https://huggingface.co/datasets/facebook/multilingual_librispeech) dataset, you'll notice there are eight different languages. While you can create a dataset containing all eight languages, it's probably neater to create a dataset with each language as a subset. This way, users can instantly load a dataset with their language of interest instead of preprocessing the dataset to filter for a specific language.

Subsets are flexible, and can be used to organize a dataset along whatever objective you'd like. For example, the [SceneParse150](https://huggingface.co/datasets/scene_parse_150) dataset uses subsets to organize the dataset by task. One subset is dedicated to segmenting the whole image, while the other subset is for instance segmentation.


<EditOnGithub source="https://github.com/huggingface/dataset-viewer/blob/main/docs/source/configs_and_splits.md" />

### Pandas
https://huggingface.co/docs/dataset-viewer/pandas.md

# Pandas

[Pandas](https://pandas.pydata.org/docs/index.html) is a popular DataFrame library for data analysis.

To read from a single Parquet file, use the [`read_parquet`](https://pandas.pydata.org/docs/reference/api/pandas.read_parquet.html) function to read it into a DataFrame:

```py
import pandas as pd

df = (
    pd.read_parquet("https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet")
    .groupby('sign')['text']
    .apply(lambda x: x.str.len().mean())
    .sort_values(ascending=False)
    .head(5)
)
```

To read multiple Parquet files - for example, if the dataset is sharded - you'll need to use the [`concat`](https://pandas.pydata.org/docs/reference/api/pandas.concat.html) function to concatenate the files into a single DataFrame:

```py
urls = ["https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet", "https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0001.parquet"]

df = (
      pd.concat([pd.read_parquet(url) for url in urls])
      .groupby('sign')['text']
      .apply(lambda x: x.str.len().mean())
      .sort_values(ascending=False)
      .head(5)
)
```


<EditOnGithub source="https://github.com/huggingface/dataset-viewer/blob/main/docs/source/pandas.md" />

### Explore statistics over split data
https://huggingface.co/docs/dataset-viewer/statistics.md

# Explore statistics over split data

The dataset viewer provides a `/statistics` endpoint for fetching some basic statistics precomputed for a requested dataset. This gives you a quick insight into how the data is distributed.

<Tip warning={true}>
  Currently, statistics are computed only for <a href="./parquet">datasets with Parquet exports</a>.
</Tip>

The `/statistics` endpoint requires three query parameters:

- `dataset`: the dataset name, for example `nyu-mll/glue`
- `config`: the subset name, for example `cola`
- `split`: the split name, for example `train`

Let's get some stats for `nyu-mll/glue` dataset, `cola` subset, `train` split:

<inferencesnippet>
<python>
```python
import requests
headers = {"Authorization": f"Bearer {API_TOKEN}"}
API_URL = "https://datasets-server.huggingface.co/statistics?dataset=nyu-mll/glue&config=cola&split=train"
def query():
    response = requests.get(API_URL, headers=headers)
    return response.json()
data = query()
```
</python>
<js>
```js
import fetch from "node-fetch";
async function query(data) {
    const response = await fetch(
        "https://datasets-server.huggingface.co/statistics?dataset=nyu-mll/glue&config=cola&split=train",
        {
            headers: { Authorization: `Bearer ${API_TOKEN}` },
            method: "GET"
        }
    );
    const result = await response.json();
    return result;
}
query().then((response) => {
    console.log(JSON.stringify(response));
});
```
</js>
<curl>
```curl
curl "https://datasets-server.huggingface.co/statistics?dataset=nyu-mll/glue&config=cola&split=train" \
        -X GET \
        -H "Authorization: Bearer ${API_TOKEN}"
```
</curl>
</inferencesnippet>

The response JSON contains three keys:
* `num_examples` - number of samples in a split, or number of samples in the first chunk of data if the dataset is larger than 5GB (see the `partial` field below).
* `statistics` - a list of dictionaries of statistics for each column; each dictionary has three keys: `column_name`, `column_type`, and `column_statistics`. The content of `column_statistics` depends on the column type, see [Response structure by data type](./statistics#response-structure-by-data-type) for more details.
* `partial` - `true` if statistics are computed on the first 5 GB of data, not on the full split, `false` otherwise.

```json
{
  "num_examples": 8551,
  "statistics": [
    {
      "column_name": "idx",
      "column_type": "int",
      "column_statistics": {
        "nan_count": 0,
        "nan_proportion": 0,
        "min": 0,
        "max": 8550,
        "mean": 4275,
        "median": 4275,
        "std": 2468.60541,
        "histogram": {
          "hist": [
            856,
            856,
            856,
            856,
            856,
            856,
            856,
            856,
            856,
            847
          ],
          "bin_edges": [
            0,
            856,
            1712,
            2568,
            3424,
            4280,
            5136,
            5992,
            6848,
            7704,
            8550
          ]
        }
      }
    },
    {
      "column_name": "label",
      "column_type": "class_label",
      "column_statistics": {
        "nan_count": 0,
        "nan_proportion": 0,
        "no_label_count": 0,
        "no_label_proportion": 0,
        "n_unique": 2,
        "frequencies": {
          "unacceptable": 2528,
          "acceptable": 6023
        }
      }
    },
    {
      "column_name": "sentence",
      "column_type": "string_text",
      "column_statistics": {
        "nan_count": 0,
        "nan_proportion": 0,
        "min": 6,
        "max": 231,
        "mean": 40.70074,
        "median": 37,
        "std": 19.14431,
        "histogram": {
          "hist": [
            2260,
            4512,
            1262,
            380,
            102,
            26,
            6,
            1,
            1,
            1
          ],
          "bin_edges": [
            6,
            29,
            52,
            75,
            98,
            121,
            144,
            167,
            190,
            213,
            231
          ]
        }
      }
    }
  ],
  "partial": false
}
```
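
To look up the statistics of a particular column, you can index the list by `column_name`. A minimal sketch, reusing the `data` dictionary from the snippet above:

```py
# Index the statistics by column name
stats = {s["column_name"]: s["column_statistics"] for s in data["statistics"]}

print(stats["sentence"]["mean"])      # mean text length of the "sentence" column
print(stats["label"]["frequencies"])  # label counts of the "label" column
```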

## Response structure by data type

Currently, statistics are supported for strings, float and integer numbers, booleans, lists, datetimes, audio and image data, and the special [`datasets.ClassLabel`](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.ClassLabel) feature type of the [`datasets`](https://huggingface.co/docs/datasets/) library.

`column_type` in response can be one of the following values:

* `class_label` - for [`datasets.ClassLabel`](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.ClassLabel) feature which represents categorical data
* `float` - for float data types
* `int` - for integer data types
* `bool` - for boolean data type
* `string_label` - for string data types being treated as categories (see below)
* `string_text` - for string data types if they do not represent categories (see below)
* `list` - for lists of any other data types (including lists)
* `audio` - for audio data
* `image` - for image data
* `datetime` - for datetime data 

### `class_label`

This type represents categorical data encoded as [`ClassLabel`](https://huggingface.co/docs/datasets/package_reference/main_classes#datasets.ClassLabel) feature. The following measures are computed:

* number and proportion of `null` values
* number and proportion of values with no label
* number of unique values (excluding `null` and `no label`)
* value counts for each label (excluding `null` and `no label`)

<details><summary>Example </summary>
<p>

```json
{
  "column_name": "label",
  "column_type": "class_label",
  "column_statistics": {
    "nan_count": 0,
    "nan_proportion": 0,
    "no_label_count": 0,
    "no_label_proportion": 0,
    "n_unique": 2,
    "frequencies": {
      "unacceptable": 2528,
      "acceptable": 6023
    }
  }
}
```

</p>
</details>

### float

The following measures are returned for float data types:

* minimum, maximum, mean, median, and standard deviation values
* number and proportion of `null` and `NaN` values (`NaN` values are treated as `null`)
* histogram with 10 bins

<details><summary>Example </summary>
<p>

```json
{
  "column_name": "clarity",
  "column_type": "float",
  "column_statistics": {
    "nan_count": 0,
    "nan_proportion": 0,
    "min": 0,
    "max": 2,
    "mean": 1.67206,
    "median": 1.8,
    "std": 0.38714,
    "histogram": {
      "hist": [
        17,
        12,
        48,
        52,
        135,
        188,
        814,
        15,
        1628,
        2048
      ],
      "bin_edges": [
        0,
        0.2,
        0.4,
        0.6,
        0.8,
        1,
        1.2,
        1.4,
        1.6,
        1.8,
        2
      ]
    }
  }
}
```

</p>
</details>

### int

The following measures are returned for integer data types:

* minimum, maximum, mean, median, and standard deviation values
* number and proportion of `null` values
* histogram with less than or equal to 10 bins

<details><summary>Example </summary>
<p>

```json
{
    "column_name": "direction",
    "column_type": "int",
    "column_statistics": {
        "nan_count": 0,
        "nan_proportion": 0.0,
        "min": 0,
        "max": 1,
        "mean": 0.49925,
        "median": 0.0,
        "std": 0.5,
        "histogram": {
            "hist": [
                50075,
                49925
            ],
            "bin_edges": [
                0,
                1,
                1
            ]
        }
    }
}
```

</p>
</details>

### bool

The following measures are returned for bool data type:

* number and proportion of `null` values
* value counts for `'True'` and `'False'` values

<details><summary>Example </summary>
<p>

```json
{
  "column_name": "penalty",
  "column_type": "bool",
  "column_statistics":
    {
        "nan_count": 3,
        "nan_proportion": 0.15,
        "frequencies": {
            "False": 7,
            "True": 10
        }
    }
}
```

</p>
</details>


### string_label

If the proportion of unique values in a string column within the requested split is lower than or equal to 0.2 and the number of unique values is lower than 1000, or if the number of unique values is lower than or equal to 10 (regardless of the proportion), the column is treated as a category. The following measures are returned:

* number and proportion of `null` values
* number of unique values (excluding `null`)
* value counts for each label (excluding `null`)

<details><summary>Example </summary>
<p>

```json
{
  "column_name": "answerKey",
  "column_type": "string_label",
  "column_statistics": {
    "nan_count": 0,
    "nan_proportion": 0,
    "n_unique": 4,
    "frequencies": {
      "D": 1221,
      "C": 1146,
      "A": 1378,
      "B": 1212
    }
  }
}

```

</p>
</details>

### string_text

If a string column does not satisfy the conditions to be treated as a `string_label`, it is considered a column of free text, and the response contains statistics over text lengths, measured in number of characters. The following measures are computed:

* minimum, maximum, mean, median, and standard deviation of text lengths
* number and proportion of `null` values
* histogram of text lengths with 10 bins

<details><summary>Example </summary>
<p>

```json
{
  "column_name": "sentence",
  "column_type": "string_text",
  "column_statistics": {
    "nan_count": 0,
    "nan_proportion": 0,
    "min": 6,
    "max": 231,
    "mean": 40.70074,
    "median": 37,
    "std": 19.14431,
    "histogram": {
      "hist": [
        2260,
        4512,
        1262,
        380,
        102,
        26,
        6,
        1,
        1,
        1
      ],
      "bin_edges": [
        6,
        29,
        52,
        75,
        98,
        121,
        144,
        167,
        190,
        213,
        231
      ]
    }
  }
}
```

</p>
</details>

### list

For lists, the distribution of their lengths is computed. The following measures are returned:

* minimum, maximum, mean, median, and standard deviation of list lengths
* number and proportion of `null` values
* histogram of list lengths with up to 10 bins

<details><summary>Example </summary>
<p>

```json
{
    "column_name": "chat_history",
    "column_type": "list",
    "column_statistics": {
        "nan_count": 0,
        "nan_proportion": 0.0,
        "min": 1,
        "max": 3,
        "mean": 1.01741,
        "median": 1.0,
        "std": 0.13146,
        "histogram": {
            "hist": [
                11177,
                196,
                1
            ],
            "bin_edges": [
                1,
                2,
                3,
                3
            ]
        }
    }
}
```

</p>
</details>

Note that dictionaries of lists are not supported.


### audio

For audio data, the distribution of audio file durations is computed. The following measures are returned:

* minimum, maximum, mean, median, and standard deviation of audio file durations
* number and proportion of `null` values
* histogram of audio file durations with 10 bins


<details><summary>Example </summary>
<p>

```json
{
    "column_name": "audio",
    "column_type": "audio",
    "column_statistics": {
        "nan_count": 0,
        "nan_proportion": 0,
        "min": 1.02,
        "max": 15,
        "mean": 13.93042,
        "median": 14.77,
        "std": 2.63734,
        "histogram": {
            "hist": [
                32,
                25,
                18,
                24,
                22,
                17,
                18,
                19,
                55,
                1770
            ],
            "bin_edges": [
                1.02,
                2.418,
                3.816,
                5.214,
                6.612,
                8.01,
                9.408,
                10.806,
                12.204,
                13.602,
                15
            ]
        }
    }
}
```

</p>
</details>


### image

For image data, the distribution of image widths is computed. The following measures are returned:

* minimum, maximum, mean, median, and standard deviation of image file widths
* number and proportion of `null` values
* histogram of image widths with 10 bins

<details><summary>Example </summary>
<p>

```json
{
    "column_name": "image",
    "column_type": "image",
    "column_statistics": {
        "nan_count": 0,
        "nan_proportion": 0.0,
        "min": 256,
        "max": 873,
        "mean": 327.99339,
        "median": 341.0,
        "std": 60.07286,
        "histogram": {
            "hist": [
                1734,
                1637,
                1326,
                121,
                10,
                3,
                1,
                3,
                1,
                2
            ],
            "bin_edges": [
                256,
                318,
                380,
                442,
                504,
                566,
                628,
                690,
                752,
                814,
                873
            ]
        }
    }
}
```

</p>
</details>

### datetime

For datetime data, the distribution of values is computed. The following measures are returned:

* minimum, maximum, mean, median, and standard deviation of datetimes represented as strings with precision up to seconds
* number and proportion of `null` values
* histogram of datetimes with 10 bins

<details><summary>Example </summary>
<p>

```json
{
    "column_name": "date",
    "column_type": "datetime",
    "column_statistics": {
        "nan_count": 0,
        "nan_proportion": 0.0,
        "min": "2013-05-18 04:54:11",
        "max": "2013-06-20 10:01:41",
        "mean": "2013-05-27 18:03:39",
        "median": "2013-05-23 11:55:50",
        "std": "11 days, 4:57:32.322450",
        "histogram": {
            "hist": [
                318776,
                393036,
                173904,
                0,
                0,
                0,
                0,
                0,
                0,
                206284
            ],
            "bin_edges": [
                "2013-05-18 04:54:11",
                "2013-05-21 12:36:57",
                "2013-05-24 20:19:43",
                "2013-05-28 04:02:29",
                "2013-05-31 11:45:15",
                "2013-06-03 19:28:01",
                "2013-06-07 03:10:47",
                "2013-06-10 10:53:33",
                "2013-06-13 18:36:19",
                "2013-06-17 02:19:05",
                "2013-06-20 10:01:41"
            ]
        }
    }
}
```

</p>
</details>



<EditOnGithub source="https://github.com/huggingface/dataset-viewer/blob/main/docs/source/statistics.md" />

### cuDF
https://huggingface.co/docs/dataset-viewer/cudf.md

# cuDF

[cuDF](https://docs.rapids.ai/api/cudf/stable/) is a Python GPU DataFrame library.

To read from a single Parquet file, use the [`read_parquet`](https://docs.rapids.ai/api/cudf/stable/user_guide/api_docs/api/cudf.read_parquet/) function to read it into a DataFrame:

```py
import cudf

df = (
    cudf.read_parquet("https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/0000.parquet")
    .groupby('sign')['text']
    .apply(lambda x: x.str.len().mean())
    .sort_values(ascending=False)
    .head(5)
)
```

To read multiple Parquet files - for example, if the dataset is sharded - you'll need to use [`dask-cudf`](https://docs.rapids.ai/api/dask-cudf/stable/):

```py
import dask
import dask.dataframe as dd

dask.config.set({"dataframe.backend": "cudf"})

df = (
    dd.read_parquet("https://huggingface.co/datasets/tasksource/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/default/train/*.parquet")
)
```


<EditOnGithub source="https://github.com/huggingface/dataset-viewer/blob/main/docs/source/cudf.md" />

### 🤗 Dataset viewer
https://huggingface.co/docs/dataset-viewer/index.md

# 🤗 Dataset viewer

The dataset page includes a table with the dataset's contents, arranged by pages of 100 rows. You can navigate between pages using the buttons at the bottom of the table, filter, search, look at basic statistics, and more.

<div class="flex justify-center">
  <img
    style="margin-bottom: 0;"
    class="block dark:hidden"
    src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets-server/openbookqa_light.png"
  />
  <img
    style="margin-bottom: 0;"
    class="hidden dark:block"
    src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets-server/openbookqa_dark.png"
  />
</div>

<p style="text-align: center; font-style: italic; margin-top: 0;">
  Dataset viewer of the
  <a href="https://huggingface.co/datasets/openbookqa" rel="nofollow">
    OpenBookQA dataset
  </a>
</p>

## Contents of the documentation

These documentation pages are focused on the **dataset viewer's backend** (code in https://github.com/huggingface/dataset-viewer), which provides the table with pre-computed data through an API for all the datasets on the Hub. You can explore the sections if you want to consume the API for your application or to understand how we preprocess the datasets.

Otherwise, if you want to learn about creating datasets from the Hub's web-based interface, [**configuring the dataset viewer**](https://huggingface.co/docs/hub/datasets-data-files-configuration) for data, [images](https://huggingface.co/docs/hub/datasets-image), or audio, or fixing errors, you might prefer reading the [Datasets Hub documentation pages](https://huggingface.co/docs/hub/datasets). Also take a look at the [example datasets](https://huggingface.co/datasets-examples) collections: [splits configuration](https://huggingface.co/collections/datasets-examples/file-names-and-splits-655e28af4471bd95709eb135), [subsets configuration](https://huggingface.co/collections/datasets-examples/manual-configuration-655e293cea26da0acab95b87), [CSV data files](https://huggingface.co/collections/datasets-examples/format-csv-and-tsv-655f681cb9673a4249cccb3d), and [image datasets](https://huggingface.co/collections/datasets-examples/image-dataset-6568e7cf28639db76eb92d65).

## Dataset viewer's backend

The dataset viewer's backend provides an API for visualizing and exploring all types of datasets - computer vision, speech, text, and tabular - stored on the Hugging Face [Hub](https://huggingface.co/datasets).

The main feature of the dataset viewer's backend is to auto-convert all the [Hub datasets](https://huggingface.co/datasets) to [Parquet](https://parquet.apache.org/). Read more in the [Parquet section](./parquet).

As datasets grow in size and richness of data types, preprocessing them (in terms of storage and compute) can be costly and time-consuming.
To help users access these datasets, the dataset viewer runs a server behind the scenes that generates the API responses ahead of time and stores them in a database, so they are returned instantly when you query the API.

Let the dataset viewer take care of the heavy lifting so you can use a simple **REST API** on any of the **100,000+ datasets on Hugging Face** to:

- List the **dataset splits, column names and data types**
- Get the **dataset size** (in number of rows or bytes)
- Download and view **rows at any index** in the dataset
- **Search** a word in the dataset
- **Filter** rows based on a query string
- Get insightful **statistics** about the data
- Access the dataset as **parquet files** to use in your favorite **processing or analytics framework**
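
As a quick illustration, here is a minimal sketch of one such call, querying the `/size` endpoint (covered in its own guide) for the `ibm/duorc` dataset:

```py
import requests

# Minimal sketch: ask the dataset viewer API for the size of a dataset
response = requests.get(
    "https://datasets-server.huggingface.co/size",
    params={"dataset": "ibm/duorc"},
)
size = response.json()
print(size)
```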

Join the growing community on the [forum](https://discuss.huggingface.co/) or [Discord](https://discord.com/invite/JfAtkvEtRb) today, and give the [dataset viewer repository](https://github.com/huggingface/dataset-viewer) a ⭐️ if you're interested in the latest updates!


<EditOnGithub source="https://github.com/huggingface/dataset-viewer/blob/main/docs/source/index.md" />

### Get Croissant metadata
https://huggingface.co/docs/dataset-viewer/croissant.md

# Get Croissant metadata

The dataset viewer automatically generates the metadata in [Croissant](https://github.com/mlcommons/croissant) format (JSON-LD) for every dataset on the Hugging Face Hub. It lists the dataset's name, description, URL, and the distribution of the dataset as Parquet files, including the columns' metadata. The Croissant metadata is available for all the datasets that can be [converted to Parquet format](./parquet#conversion-to-parquet).

## What is Croissant?

Croissant is a metadata format built on top of [schema.org](https://schema.org/) for describing machine learning datasets, so that they are easier to index, search, and load programmatically.

## Get the metadata

This guide shows you how to use the Hugging Face [`/croissant` endpoint](https://huggingface.co/docs/hub/api#get-apidatasetsrepoidcroissant) to retrieve the Croissant metadata associated with a dataset.

The `/croissant` endpoint takes the dataset name in the URL; for example, for the `ibm/duorc` dataset:

<inferencesnippet>
<python>
```python
import requests
headers = {"Authorization": f"Bearer {API_TOKEN}"}
API_URL = "https://huggingface.co/api/datasets/ibm/duorc/croissant"
def query():
    response = requests.get(API_URL, headers=headers)
    return response.json()
data = query()
```
</python>
<js>
```js
import fetch from "node-fetch";
async function query() {
    const response = await fetch(
        "https://huggingface.co/api/datasets/ibm/duorc/croissant",
        {
            headers: { Authorization: `Bearer ${API_TOKEN}` },
            method: "GET"
        }
    );
    const result = await response.json();
    return result;
}
query().then((response) => {
    console.log(JSON.stringify(response));
});
```
</js>
<curl>
```curl
curl https://huggingface.co/api/datasets/ibm/duorc/croissant \
        -X GET \
        -H "Authorization: Bearer ${API_TOKEN}"
```
</curl>
</inferencesnippet>

Under the hood, it uses the `https://datasets-server.huggingface.co/croissant-crumbs` endpoint and enriches its response with metadata from the Hub.
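
If you only need the raw metadata computed by the dataset viewer (without the Hub enrichment), you can call that endpoint directly. A minimal sketch, assuming it accepts the same `dataset` query parameter as the other dataset viewer endpoints:

```py
import requests

# Direct call to the lower-level croissant-crumbs endpoint; the `dataset`
# query parameter is assumed to match the other dataset viewer endpoints
response = requests.get(
    "https://datasets-server.huggingface.co/croissant-crumbs",
    params={"dataset": "ibm/duorc"},
)
croissant_crumbs = response.json()
```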

The endpoint response is a [JSON-LD](https://json-ld.org/) document containing the metadata in the Croissant format. For example, the [`ibm/duorc`](https://huggingface.co/datasets/ibm/duorc) dataset has two subsets, `ParaphraseRC` and `SelfRC` (see the [List splits and subsets](./splits) guide for more details about splits and subsets). The metadata links to their Parquet files and describes the type of each of the six columns: `plot_id`, `plot`, `title`, `question_id`, `question`, and `no_answer`:

```json
{
  "@context": {
    "@language": "en",
    "@vocab": "https://schema.org/",
    "citeAs": "cr:citeAs",
    "column": "cr:column",
    "conformsTo": "dct:conformsTo",
    "cr": "http://mlcommons.org/croissant/",
    "data": {
      "@id": "cr:data",
      "@type": "@json"
    },
    "dataBiases": "cr:dataBiases",
    "dataCollection": "cr:dataCollection",
    "dataType": {
      "@id": "cr:dataType",
      "@type": "@vocab"
    },
    "dct": "http://purl.org/dc/terms/",
    "extract": "cr:extract",
    "field": "cr:field",
    "fileProperty": "cr:fileProperty",
    "fileObject": "cr:fileObject",
    "fileSet": "cr:fileSet",
    "format": "cr:format",
    "includes": "cr:includes",
    "isLiveDataset": "cr:isLiveDataset",
    "jsonPath": "cr:jsonPath",
    "key": "cr:key",
    "md5": "cr:md5",
    "parentField": "cr:parentField",
    "path": "cr:path",
    "personalSensitiveInformation": "cr:personalSensitiveInformation",
    "recordSet": "cr:recordSet",
    "references": "cr:references",
    "regex": "cr:regex",
    "repeated": "cr:repeated",
    "replace": "cr:replace",
    "sc": "https://schema.org/",
    "separator": "cr:separator",
    "source": "cr:source",
    "subField": "cr:subField",
    "transform": "cr:transform"
  },
  "@type": "sc:Dataset",
  "distribution": [
    {
      "@type": "cr:FileObject",
      "@id": "repo",
      "name": "repo",
      "description": "The Hugging Face git repository.",
      "contentUrl": "https://huggingface.co/datasets/ibm/duorc/tree/refs%2Fconvert%2Fparquet",
      "encodingFormat": "git+https",
      "sha256": "https://github.com/mlcommons/croissant/issues/80"
    },
    {
      "@type": "cr:FileSet",
      "@id": "parquet-files-for-config-ParaphraseRC",
      "name": "parquet-files-for-config-ParaphraseRC",
      "description": "The underlying Parquet files as converted by Hugging Face (see: https://huggingface.co/docs/dataset-viewer/parquet).",
      "containedIn": {
        "@id": "repo"
      },
      "encodingFormat": "application/x-parquet",
      "includes": "ParaphraseRC/*/*.parquet"
    },
    {
      "@type": "cr:FileSet",
      "@id": "parquet-files-for-config-SelfRC",
      "name": "parquet-files-for-config-SelfRC",
      "description": "The underlying Parquet files as converted by Hugging Face (see: https://huggingface.co/docs/dataset-viewer/parquet).",
      "containedIn": {
        "@id": "repo"
      },
      "encodingFormat": "application/x-parquet",
      "includes": "SelfRC/*/*.parquet"
    }
  ],
  "recordSet": [
    {
      "@type": "cr:RecordSet",
      "@id": "ParaphraseRC",
      "name": "ParaphraseRC",
      "description": "ibm/duorc - 'ParaphraseRC' subset\n\nAdditional information:\n- 3 splits: train, validation, test\n- 1 skipped column: answers",
      "field": [
        {
          "@type": "cr:Field",
          "@id": "ParaphraseRC/plot_id",
          "name": "ParaphraseRC/plot_id",
          "description": "Column 'plot_id' from the Hugging Face parquet file.",
          "dataType": "sc:Text",
          "source": {
            "fileSet": {
              "@id": "parquet-files-for-config-ParaphraseRC"
            },
            "extract": {
              "column": "plot_id"
            }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "ParaphraseRC/plot",
          "name": "ParaphraseRC/plot",
          "description": "Column 'plot' from the Hugging Face parquet file.",
          "dataType": "sc:Text",
          "source": {
            "fileSet": {
              "@id": "parquet-files-for-config-ParaphraseRC"
            },
            "extract": {
              "column": "plot"
            }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "ParaphraseRC/title",
          "name": "ParaphraseRC/title",
          "description": "Column 'title' from the Hugging Face parquet file.",
          "dataType": "sc:Text",
          "source": {
            "fileSet": {
              "@id": "parquet-files-for-config-ParaphraseRC"
            },
            "extract": {
              "column": "title"
            }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "ParaphraseRC/question_id",
          "name": "ParaphraseRC/question_id",
          "description": "Column 'question_id' from the Hugging Face parquet file.",
          "dataType": "sc:Text",
          "source": {
            "fileSet": {
              "@id": "parquet-files-for-config-ParaphraseRC"
            },
            "extract": {
              "column": "question_id"
            }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "ParaphraseRC/question",
          "name": "ParaphraseRC/question",
          "description": "Column 'question' from the Hugging Face parquet file.",
          "dataType": "sc:Text",
          "source": {
            "fileSet": {
              "@id": "parquet-files-for-config-ParaphraseRC"
            },
            "extract": {
              "column": "question"
            }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "ParaphraseRC/no_answer",
          "name": "ParaphraseRC/no_answer",
          "description": "Column 'no_answer' from the Hugging Face parquet file.",
          "dataType": "sc:Boolean",
          "source": {
            "fileSet": {
              "@id": "parquet-files-for-config-ParaphraseRC"
            },
            "extract": {
              "column": "no_answer"
            }
          }
        }
      ]
    },
    {
      "@type": "cr:RecordSet",
      "@id": "SelfRC",
      "name": "SelfRC",
      "description": "ibm/duorc - 'SelfRC' subset\n\nAdditional information:\n- 3 splits: train, validation, test\n- 1 skipped column: answers",
      "field": [
        {
          "@type": "cr:Field",
          "@id": "SelfRC/plot_id",
          "name": "SelfRC/plot_id",
          "description": "Column 'plot_id' from the Hugging Face parquet file.",
          "dataType": "sc:Text",
          "source": {
            "fileSet": {
              "@id": "parquet-files-for-config-SelfRC"
            },
            "extract": {
              "column": "plot_id"
            }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "SelfRC/plot",
          "name": "SelfRC/plot",
          "description": "Column 'plot' from the Hugging Face parquet file.",
          "dataType": "sc:Text",
          "source": {
            "fileSet": {
              "@id": "parquet-files-for-config-SelfRC"
            },
            "extract": {
              "column": "plot"
            }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "SelfRC/title",
          "name": "SelfRC/title",
          "description": "Column 'title' from the Hugging Face parquet file.",
          "dataType": "sc:Text",
          "source": {
            "fileSet": {
              "@id": "parquet-files-for-config-SelfRC"
            },
            "extract": {
              "column": "title"
            }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "SelfRC/question_id",
          "name": "SelfRC/question_id",
          "description": "Column 'question_id' from the Hugging Face parquet file.",
          "dataType": "sc:Text",
          "source": {
            "fileSet": {
              "@id": "parquet-files-for-config-SelfRC"
            },
            "extract": {
              "column": "question_id"
            }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "SelfRC/question",
          "name": "SelfRC/question",
          "description": "Column 'question' from the Hugging Face parquet file.",
          "dataType": "sc:Text",
          "source": {
            "fileSet": {
              "@id": "parquet-files-for-config-SelfRC"
            },
            "extract": {
              "column": "question"
            }
          }
        },
        {
          "@type": "cr:Field",
          "@id": "SelfRC/no_answer",
          "name": "SelfRC/no_answer",
          "description": "Column 'no_answer' from the Hugging Face parquet file.",
          "dataType": "sc:Boolean",
          "source": {
            "fileSet": {
              "@id": "parquet-files-for-config-SelfRC"
            },
            "extract": {
              "column": "no_answer"
            }
          }
        }
      ]
    }
  ],
  "name": "duorc",
  "description": "\n\t\n\t\t\n\t\n\t\n\t\tDataset Card for duorc\n\t\n\n\n\t\n\t\t\n\t\n\t\n\t\tDataset Summary\n\t\n\nThe DuoRC dataset is an English language dataset of questions and answers gathered from crowdsourced AMT workers on Wikipedia and IMDb movie plots. The workers were given freedom to pick answer from the plots or synthesize their own answers. It contains two sub-datasets - SelfRC and ParaphraseRC. SelfRC dataset is built on Wikipedia movie plots solely. ParaphraseRC has questions written from Wikipedia movie plots and the… See the full description on the dataset page: https://huggingface.co/datasets/ibm/duorc.",
  "alternateName": [
    "ibm/duorc",
    "DuoRC"
  ],
  "creator": {
    "@type": "Organization",
    "name": "IBM",
    "url": "https://huggingface.co/ibm"
  },
  "keywords": [
    "question-answering",
    "text2text-generation",
    "abstractive-qa",
    "extractive-qa",
    "crowdsourced",
    "crowdsourced",
    "monolingual",
    "100K<n<1M",
    "10K<n<100K",
    "original",
    "English",
    "mit",
    "Croissant",
    "arxiv:1804.07927",
    "🇺🇸 Region: US"
  ],
  "license": "https://choosealicense.com/licenses/mit/",
  "sameAs": "https://duorc.github.io/",
  "url": "https://huggingface.co/datasets/ibm/duorc"
}
```

## Load the dataset

To load the dataset, you can use the [mlcroissant](./mlcroissant) library. It provides a simple way to load datasets from Croissant metadata.
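
As a minimal sketch (assuming the `mlcroissant` package's `Dataset` class and `records()` method), loading a few records from the `SelfRC` record set of `ibm/duorc` might look like this:

```py
import itertools

from mlcroissant import Dataset

# Load the Croissant metadata published by the Hub for ibm/duorc
ds = Dataset(jsonld="https://huggingface.co/api/datasets/ibm/duorc/croissant")

# Iterate over the first few records of the "SelfRC" record set
for record in itertools.islice(ds.records("SelfRC"), 3):
    print(record)
```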


<EditOnGithub source="https://github.com/huggingface/dataset-viewer/blob/main/docs/source/croissant.md" />
