Polars

Polars is a fast DataFrame library written in Rust with Arrow as its foundation.

💡 Learn more about how to get the dataset URLs in the List Parquet files guide.

Let's start by grabbing the URLs to the train split of the blog_authorship_corpus dataset from Datasets Server:

import requests

r = requests.get("https://datasets-server.huggingface.co/parquet?dataset=blog_authorship_corpus")
j = r.json()
urls = [f['url'] for f in j['parquet_files'] if f['split'] == 'train']
urls
['https://huggingface.co/datasets/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/blog_authorship_corpus/train/0000.parquet',
 'https://huggingface.co/datasets/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/blog_authorship_corpus/train/0001.parquet']
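
If you plan to query other splits as well, a small variation on the same response keeps every split's URLs at hand. This sketch only assumes the parquet_files, split, and url fields shown above:

import requests
from collections import defaultdict

r = requests.get("https://datasets-server.huggingface.co/parquet?dataset=blog_authorship_corpus")

# Group the Parquet URLs by split so any split, not just train, is easy to reach
urls_by_split = defaultdict(list)
for f in r.json()["parquet_files"]:
    urls_by_split[f["split"]].append(f["url"])

urls = urls_by_split["train"]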

To read from a single Parquet file, use the read_parquet function to read it into a DataFrame and then execute your query:

import polars as pl

df = (
    pl.read_parquet("https://huggingface.co/datasets/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/blog_authorship_corpus/train/0000.parquet")
    .groupby("horoscope")
    .agg(
        [
            pl.count(),
            pl.col("text").str.n_chars().mean().alias("avg_blog_length")
        ]
    )
    .sort("avg_blog_length", descending=True)
    .limit(5)
)
print(df)
shape: (5, 3)
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ horoscope ┆ count ┆ avg_blog_length β”‚
β”‚ ---       ┆ ---   ┆ ---             β”‚
β”‚ str       ┆ u32   ┆ f64             β”‚
β•žβ•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════β•ͺ═════════════════║
β”‚ Aquarius  ┆ 34062 ┆ 1129.218836     β”‚
β”‚ Cancer    ┆ 41509 ┆ 1098.366812     β”‚
β”‚ Capricorn ┆ 33961 ┆ 1073.2002       β”‚
β”‚ Libra     ┆ 40302 ┆ 1072.071833     β”‚
β”‚ Leo       ┆ 40587 ┆ 1064.053687     β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
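
Because Parquet is a columnar format, you can often speed up remote reads by fetching only the columns your query touches. A minimal sketch using read_parquet's columns argument (the two column names come from the query above):

import polars as pl

# Fetch only the two columns the query needs instead of the whole file
df = pl.read_parquet(
    "https://huggingface.co/datasets/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/blog_authorship_corpus/train/0000.parquet",
    columns=["horoscope", "text"],
)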

To read multiple Parquet files (for example, if the dataset is sharded), you'll need to use the concat function to concatenate them into a single DataFrame:

import polars as pl

df = (
    pl.concat([pl.read_parquet(url) for url in urls])
    .groupby("horoscope")
    .agg(
        [
            pl.count(),
            pl.col("text").str.n_chars().mean().alias("avg_blog_length")
        ]
    )
    .sort("avg_blog_length", descending=True)
    .limit(5)
)
print(df)
shape: (5, 3)
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ horoscope   ┆ count ┆ avg_blog_length β”‚
β”‚ ---         ┆ ---   ┆ ---             β”‚
β”‚ str         ┆ u32   ┆ f64             β”‚
β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═══════β•ͺ═════════════════║
β”‚ Aquarius    ┆ 49568 ┆ 1125.830677     β”‚
β”‚ Cancer      ┆ 63512 ┆ 1097.956087     β”‚
β”‚ Libra       ┆ 60304 ┆ 1060.611054     β”‚
β”‚ Capricorn   ┆ 49402 ┆ 1059.555261     β”‚
β”‚ Sagittarius ┆ 50431 ┆ 1057.458984     β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
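
Note that the counts are higher than in the single-file query because the aggregation now runs over both shards of the train split.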

Lazy API

Polars offers a lazy API that is more performant and memory-efficient for large Parquet files. A LazyFrame records the operations you want to perform and only executes the full query when you ask for the result. Because the lazy API doesn't load everything into RAM up front, it lets you work with datasets larger than your available memory.

To lazily read a Parquet file, use the scan_parquet function instead. Then, execute the entire query with the collect function:

import polars as pl

q = (
    pl.scan_parquet("https://huggingface.co/datasets/blog_authorship_corpus/resolve/refs%2Fconvert%2Fparquet/blog_authorship_corpus/train/0000.parquet")
    .groupby("horoscope")
    .agg(
        [
            pl.count(),
            pl.col("text").str.n_chars().mean().alias("avg_blog_length")
        ]
    )
    .sort("avg_blog_length", descending=True)
    .limit(5)
)
df = q.collect()
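
The lazy API also combines naturally with sharded datasets: scan each shard lazily, concatenate the scans, and nothing is downloaded or computed until collect is called. A sketch reusing the urls list from earlier; pl.concat on LazyFrames stays lazy, so the whole pipeline remains deferred:

import polars as pl

# Build one lazy query over every shard of the train split
q = (
    pl.concat([pl.scan_parquet(url) for url in urls])
    .groupby("horoscope")
    .agg(
        [
            pl.count(),
            pl.col("text").str.n_chars().mean().alias("avg_blog_length")
        ]
    )
    .sort("avg_blog_length", descending=True)
    .limit(5)
)
df = q.collect()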