Ray Datasets: Distributed Data Preprocessing#

Ray Datasets are the standard way to load and exchange data in Ray libraries and applications. They provide basic distributed data transformations such as maps (map_batches), global and grouped aggregations (GroupedDataset), and shuffling operations (random_shuffle, sort, repartition), and are compatible with a variety of file formats, data sources, and distributed frameworks.
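For example, here is a minimal sketch of these core operations, assuming Ray is installed and auto-initialized; the column names and pandas batch format are illustrative:

```python
import ray

# Create a Dataset from in-memory Python items; each dict becomes a record.
ds = ray.data.from_items([{"value": i, "group": i % 3} for i in range(1000)])

# Map: transform records in batches (here, adding a derived column).
def add_double(batch):
    batch["double"] = batch["value"] * 2
    return batch

ds = ds.map_batches(add_double, batch_format="pandas")

# Grouped aggregation: count records per group.
counts = ds.groupby("group").count()

# Shuffling operations: global shuffle, sort, and repartition.
ds = ds.random_shuffle()
ds = ds.sort("value")
ds = ds.repartition(8)
```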

Here’s an overview of the integrations with other processing frameworks, file formats, and supported operations, as well as a glimpse at the Ray Datasets API.

Check the Input/Output reference to see if your favorite format is already supported.

(Figure: overview of Ray Datasets integrations and operations.)

Data Loading and Preprocessing for ML Training#

Use Ray Datasets to load and preprocess data for distributed ML training pipelines. Compared to other loading solutions, Datasets are more flexible (for example, they can express higher-quality per-epoch global shuffles) and provide higher overall performance.
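A minimal sketch of this pattern, reading from storage, preprocessing, and iterating over globally shuffled batches each epoch; the S3 path, column names, and training loop are hypothetical:

```python
import ray

# Hypothetical Parquet data sitting in object storage (path is illustrative).
ds = ray.data.read_parquet("s3://my-bucket/training-data/")

# Last-mile preprocessing, e.g. a simple feature normalization.
def normalize(batch):
    batch["feature"] = (batch["feature"] - batch["feature"].mean()) / batch["feature"].std()
    return batch

ds = ds.map_batches(normalize, batch_format="pandas")

# Per-epoch global shuffle, then iterate over batches in the training loop.
for epoch in range(2):
    for batch in ds.random_shuffle().iter_batches(batch_size=1024):
        ...  # train_step(batch) would go here
```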

Use Datasets as a last-mile bridge from storage or ETL pipeline outputs to distributed applications and libraries in Ray. Don’t use them as a replacement for more general data processing systems.

(Figure: loading data from storage or ETL outputs into distributed ML training.)

To learn more about the features Datasets supports, read the Datasets User Guide.

Datasets for Parallel Compute#

Datasets also simplify general-purpose parallel GPU and CPU compute in Ray; for instance, for GPU batch inference. They provide a higher-level API on top of Ray tasks and actors for such embarrassingly parallel compute, internally handling operations like batching, pipelining, and memory management.
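For batch inference, a common pattern is to pass a callable class to map_batches so that each actor loads its model once and reuses it across batches. The sketch below assumes that pattern; the model stand-in is hypothetical, and the exact compute/actor-pool arguments may vary across Ray versions:

```python
import ray

class BatchPredictor:
    """Callable class: the model is loaded once per actor and reused per batch."""

    def __init__(self):
        # Stand-in for loading a real model onto the GPU or CPU.
        self.model = lambda values: values * 2

    def __call__(self, batch):
        batch["prediction"] = self.model(batch["value"])
        return batch

ds = ray.data.from_items([{"value": i} for i in range(10_000)])

# Run inference on an autoscaling pool of 2-8 actors; pass num_gpus=1 to
# schedule each actor on a GPU (arguments here are illustrative).
predictions = ds.map_batches(
    BatchPredictor,
    compute=ray.data.ActorPoolStrategy(2, 8),
    batch_format="pandas",
    batch_size=256,
)
```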

(Figure: parallel batch compute with Ray Datasets.)

As part of the Ray ecosystem, Ray Datasets can leverage the full functionality of Ray’s distributed scheduler, e.g., using actors to optimize setup time and GPU scheduling.

Where to Go from Here?#

As a new user of Ray Datasets, you may want to start with our Getting Started guide. If you’ve run your first examples already, you might want to dive into Ray Datasets’ key concepts or our User Guide instead. Advanced users can refer directly to the Ray Datasets API reference for their projects.

Getting Started

Start with our quick start tutorials for working with Datasets. These concrete examples will give you an idea of how to use Ray Datasets.

Key Concepts

Understand the key concepts behind Ray Datasets. Learn what Datasets are and how they are executed in Ray Datasets.

Examples

Find both simple and scaling-out examples of using Ray Datasets for data processing and ML ingest.

Ray Datasets FAQ

Find answers to commonly asked questions in our detailed FAQ.

API

Get more in-depth information about the Ray Datasets API.

Other Data Processing Solutions

For running ETL pipelines, check out Spark-on-Ray. For scaling up your data science workloads, check out Dask-on-Ray, Modin, and Mars-on-Ray.

Datasource Compatibility#

Ray Datasets supports reading and writing many file formats. To view supported formats, read the Input/Output reference.
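For example, a short sketch of reading a few common formats and writing the results back out; the paths are illustrative:

```python
import ray

# Read from several common formats (paths are illustrative).
csv_ds = ray.data.read_csv("s3://my-bucket/data.csv")
json_ds = ray.data.read_json("s3://my-bucket/logs/")
parquet_ds = ray.data.read_parquet("s3://my-bucket/tables/")

# Write a Dataset back out in another format.
parquet_ds.write_csv("/tmp/output-csv")
csv_ds.write_parquet("/tmp/output-parquet")
```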

If your use case isn’t supported, reach out on Discourse or open a feature request on the Ray GitHub repo, and check out our guide for implementing a custom Datasets datasource if you’re interested in rolling your own integration!

Learn More#

Contribute#

Contributions to Ray Datasets are welcome! There are many potential improvements, including:

  • Supporting more data sources and transforms.

  • Integrating with more ecosystem libraries.

  • Adding performance optimizations.