Ray Datasets: Distributed Data Preprocessing¶

Ray Datasets are the standard way to load and exchange data in Ray libraries and applications. They provide basic distributed data transformations such as maps (map_batches), global and grouped aggregations (GroupedDataset), and shuffling operations (random_shuffle, sort, repartition), and are compatible with a variety of file formats, data sources, and distributed frameworks.
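For a feel of these core operations, here is a minimal sketch (the column names and values are illustrative):

    import pandas as pd
    import ray

    # Build a small Dataset from an in-memory pandas DataFrame.
    ds = ray.data.from_pandas(
        pd.DataFrame({"group": [1, 1, 2, 2], "value": [1.0, 2.0, 3.0, 4.0]})
    )

    # Map over batches; batch_format="pandas" hands the function one DataFrame per batch.
    ds = ds.map_batches(
        lambda df: df.assign(value=df["value"] + 1), batch_format="pandas"
    )

    # Grouped aggregation via GroupedDataset.
    means = ds.groupby("group").mean("value")

    # Shuffling operations.
    shuffled = ds.random_shuffle()
    sorted_ds = ds.sort("value")
    repartitioned = ds.repartition(2)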

Here’s an overview of Ray Datasets’ integrations with other processing frameworks and file formats, the operations it supports, and a glimpse at the Ray Datasets API.

Check our compatibility matrix to see if your favorite format is already supported.

[Figure: overview of Ray Datasets (dataset.svg)]

Data Loading and Preprocessing for ML Training¶

Ray Datasets is designed to load and preprocess data for distributed ML training pipelines. Compared to other loading solutions, Datasets are more flexible (e.g., they can express higher-quality per-epoch global shuffles) and provide higher overall performance.
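As a rough sketch of what a per-epoch global shuffle looks like (the epoch count and batch size are illustrative):

    import pandas as pd
    import ray

    ds = ray.data.from_pandas(pd.DataFrame({"x": range(10_000)}))

    # Re-shuffle the full dataset before each epoch, then stream batches to the trainer.
    for epoch in range(2):
        for batch in ds.random_shuffle().iter_batches(batch_size=1024):
            pass  # feed `batch` to the training loop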

Ray Datasets is not intended as a replacement for more general data processing systems. Learn more about how Ray Datasets works with other ETL systems.

Datasets for Parallel Compute¶

Datasets also simplifies general-purpose parallel GPU and CPU compute in Ray, for instance for GPU batch inference. It provides a higher-level API over Ray tasks and actors for such embarrassingly parallel compute, internally handling operations like batching, pipelining, and memory management.
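A minimal batch-inference sketch using map_batches; the predict function below is a hypothetical stand-in for a real model:

    import pandas as pd
    import ray

    ds = ray.data.from_pandas(pd.DataFrame({"feature": range(100)}))

    # Hypothetical stand-in for a real model's predict function.
    def predict(batch: pd.DataFrame) -> pd.DataFrame:
        return pd.DataFrame({"prediction": batch["feature"] * 0.5})

    # Batches are scored in parallel across the cluster.
    predictions = ds.map_batches(predict, batch_format="pandas", batch_size=32)
    predictions.show(3)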

[Figure: Datasets for parallel compute (dataset-compute-1.png)]

As part of the Ray ecosystem, Ray Datasets can leverage the full functionality of Ray’s distributed scheduler, e.g., using actors for optimizing setup time and GPU scheduling.
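For example, passing a callable class to map_batches runs it on a pool of actors, so expensive setup happens once per actor rather than once per batch. This is a sketch only; ActorPoolStrategy's exact arguments vary slightly across Ray versions:

    import pandas as pd
    import ray

    class Predictor:
        def __init__(self):
            # Expensive setup (e.g., loading model weights) runs once per actor,
            # not once per batch.
            self.weight = 0.5  # stand-in for a real model

        def __call__(self, batch: pd.DataFrame) -> pd.DataFrame:
            return pd.DataFrame({"prediction": batch["feature"] * self.weight})

    ds = ray.data.from_pandas(pd.DataFrame({"feature": range(100)}))

    # An autoscaling pool of 2-8 actors; adding num_gpus=1 would schedule each
    # actor on a GPU.
    predictions = ds.map_batches(
        Predictor,
        batch_format="pandas",
        compute=ray.data.ActorPoolStrategy(min_size=2, max_size=8),
    )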

Where to Go from Here?¶

As a new user of Ray Datasets, you may want to start with our Getting Started guide. If you’ve run your first examples already, you might want to dive into Ray Datasets’ key concepts or our User Guide instead. Advanced users can refer directly to the Ray Datasets API reference for their projects.

Getting Started

Start with our quick start tutorials for working with Datasets and Dataset Pipelines. These concrete examples will give you an idea of how to use Ray Datasets.

Key Concepts

Understand the key concepts behind Ray Datasets. Learn what Datasets and Dataset Pipelines are and how they get executed in Ray Datasets.
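As a tiny preview of that second concept, window() turns a Dataset into a DatasetPipeline whose stages execute incrementally, window by window, instead of materializing everything at once (the window size here is illustrative):

    import pandas as pd
    import ray

    ds = ray.data.from_pandas(pd.DataFrame({"x": range(10_000)}))

    # window() converts the Dataset into a DatasetPipeline.
    pipe = ds.window(blocks_per_window=2).map_batches(
        lambda df: df, batch_format="pandas"
    )

    for batch in pipe.iter_batches(batch_size=1024):
        pass  # consume each batch as it streams through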

Examples

Find both simple and scaling-out examples of using Ray Datasets for data processing and ML ingest.

Ray Datasets FAQ

Find answers to commonly asked questions in our detailed FAQ.

API

Get more in-depth information about the Ray Datasets API.

Other Data Processing Solutions

For running ETL pipelines, check out Spark-on-Ray. For scaling up your data science workloads, check out Dask-on-Ray, Modin, and Mars-on-Ray.

Datasource Compatibility¶

Ray Datasets supports reading and writing many file formats. The following compatibility matrices will help you understand which formats are currently available.

If none of these meet your needs, please reach out on Discourse or open a feature request on the Ray GitHub repo, and check out our guide for implementing a custom Datasets datasource if you’re interested in rolling your own integration!

Supported Input Formats¶

Input compatibility matrix¶

Input Type                  Read API                       Status
--------------------------  -----------------------------  ------
CSV File Format             ray.data.read_csv()            ✅
JSON File Format            ray.data.read_json()           ✅
Parquet File Format         ray.data.read_parquet()        ✅
Numpy File Format           ray.data.read_numpy()          ✅
Text Files                  ray.data.read_text()           ✅
Binary Files                ray.data.read_binary_files()   ✅
TFRecord Files              ray.data.read_tfrecords()      🚧
Python Objects              ray.data.from_items()          ✅
Spark Dataframe             ray.data.from_spark()          ✅
Dask Dataframe              ray.data.from_dask()           ✅
Modin Dataframe             ray.data.from_modin()          ✅
MARS Dataframe              ray.data.from_mars()           ✅
Pandas Dataframe Objects    ray.data.from_pandas()         ✅
NumPy ndarray Objects       ray.data.from_numpy()          ✅
Arrow Table Objects         ray.data.from_arrow()          ✅
🤗 (Hugging Face) Dataset   ray.data.from_huggingface()    ✅
Custom Datasource           ray.data.read_datasource()     ✅
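A small usage sketch of a few of these read APIs (all paths are hypothetical):

    import ray

    # Hypothetical paths; the read APIs accept local files, directories, and
    # cloud-storage URIs (e.g., s3://), as single paths or lists of paths.
    csv_ds = ray.data.read_csv("/tmp/data.csv")
    parquet_ds = ray.data.read_parquet("s3://my-bucket/my-dir")

    # In-memory objects work too.
    items_ds = ray.data.from_items([{"x": i} for i in range(100)])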

Supported Output Formats¶

Output compatibility matrix¶

Output Type                 Dataset API                               Status
--------------------------  ----------------------------------------  ------
CSV File Format             ds.write_csv()                            ✅
JSON File Format            ds.write_json()                           ✅
Parquet File Format         ds.write_parquet()                        ✅
Numpy File Format           ds.write_numpy()                          ✅
Spark Dataframe             ds.to_spark()                             ✅
Dask Dataframe              ds.to_dask()                              ✅
Modin Dataframe             ds.to_modin()                             ✅
MARS Dataframe              ds.to_mars()                              ✅
Arrow Table Objects         ds.to_arrow_refs()                        ✅
Arrow Table Iterator        ds.iter_batches(batch_format="pyarrow")   ✅
Single Pandas Dataframe     ds.to_pandas()                            ✅
Pandas Dataframe Objects    ds.to_pandas_refs()                       ✅
NumPy ndarray Objects       ds.to_numpy_refs()                        ✅
Pandas Dataframe Iterator   ds.iter_batches(batch_format="pandas")    ✅
PyTorch Tensor Iterator     ds.iter_torch_batches()                   ✅
TensorFlow Tensor Iterator  ds.iter_tf_batches()                      ✅
Random Access Dataset       ds.to_random_access_dataset()             ✅
Custom Datasource           ds.write_datasource()                     ✅
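And a small sketch of a few output paths (the output directory is hypothetical):

    import pandas as pd
    import ray

    ds = ray.data.from_pandas(pd.DataFrame({"x": range(1000)}))

    # Hypothetical output directory; one file is written per block.
    ds.write_parquet("/tmp/ray-output")

    # Collect into a single driver-side pandas DataFrame (small data only).
    df = ds.to_pandas()

    # Or stream batches without materializing the whole dataset at once.
    for batch in ds.iter_batches(batch_format="pandas", batch_size=256):
        pass  # consume each DataFrame batch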

Contribute¶

Contributions to Ray Datasets are welcome! There are many potential improvements, including:

  • Supporting more data sources and transforms.

  • Integration with more ecosystem libraries.

  • Adding features such as join().

  • Performance optimizations.