The Ray Ecosystem

This page lists, in alphabetical order, libraries that integrate with Ray for distributed execution. It's easy to add your own integration to this list: simply open a pull request with a few lines of text. See the dropdown below for more information.

Adding Your Integration

To add an integration, add an entry to this file using the same grid-item-card directive that the other examples use.

Apache Airflow (GitHub: astronomer/astro-provider-ray)

Apache Airflow® is an open-source platform that enables users to programmatically author, schedule, and monitor workflows using directed acyclic graphs (DAGs). With the Ray provider, users can seamlessly orchestrate Ray jobs within Airflow DAGs.
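
As a rough illustration, the sketch below triggers a Ray workload from an Airflow task using Ray's generic job submission API rather than the provider's own operators; the cluster address and entrypoint are placeholder assumptions.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from ray.job_submission import JobSubmissionClient

def submit_ray_job():
    # Assumed address of an already-running Ray cluster's job server.
    client = JobSubmissionClient("http://127.0.0.1:8265")
    job_id = client.submit_job(entrypoint="python -c 'import ray; ray.init(); print(42)'")
    print(f"Submitted Ray job {job_id}")

with DAG(dag_id="ray_job_example", start_date=datetime(2024, 1, 1), schedule=None) as dag:
    PythonOperator(task_id="submit_ray_job", python_callable=submit_ray_job)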

BuildFlow (GitHub: launchflow/buildflow)

BuildFlow is a backend framework that allows you to build and manage complex cloud infrastructure using pure python. With BuildFlow’s decorator pattern you can turn any function into a component of your backend system.

Classy Vision (GitHub: facebookresearch/ClassyVision)

Classy Vision is a new end-to-end, PyTorch-based framework for large-scale training of state-of-the-art image and video classification models. The library features a modular, flexible design that allows anyone to train machine learning models on top of PyTorch using very simple abstractions.

Dask (GitHub: dask/dask)

Dask provides advanced parallelism for analytics, enabling performance at scale for the tools you love. Dask uses existing Python APIs and data structures to make it easy to switch from NumPy, Pandas, and Scikit-learn to their Dask-powered equivalents.
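
A minimal sketch of the Dask-on-Ray scheduler, assuming a local Ray instance:

import pandas as pd
import dask.dataframe as dd
import ray
from ray.util.dask import enable_dask_on_ray

ray.init()            # start (or connect to) a Ray cluster
enable_dask_on_ray()  # route Dask task graphs through the Ray scheduler

ddf = dd.from_pandas(pd.DataFrame({"x": range(100_000)}), npartitions=8)
print(ddf.x.mean().compute())  # the computation executes on Ray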

Flambé (GitHub: asappresearch/flambe)

Flambé is a machine learning experimentation framework built to accelerate the entire research life cycle. Flambé’s main objective is to provide a unified interface for prototyping models, running experiments containing complex pipelines, monitoring those experiments in real-time, reporting results, and deploying a final model for inference.

Flowdapt (GitHub: emergentmethods/flowdapt)

Flowdapt is a platform designed to help developers configure, debug, schedule, trigger, deploy and serve adaptive and reactive Artificial Intelligence workflows at large-scale.

Flyte (GitHub: flyteorg/flyte)

Flyte is a Kubernetes-native workflow automation platform for complex, mission-critical data and ML processes at scale. It has been battle-tested at Lyft, Spotify, Freenome, and others and is truly open-source.

Horovod (GitHub: horovod/horovod)

Horovod is a distributed deep learning training framework for TensorFlow, Keras, PyTorch, and Apache MXNet. The goal of Horovod is to make distributed deep learning fast and easy to use.
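
A minimal sketch of Horovod on Ray, assuming the RayExecutor API from the Horovod documentation; the training function here is a trivial stand-in for a real training loop.

import ray
import horovod.torch as hvd
from horovod.ray import RayExecutor

def train_fn():
    hvd.init()       # a real job would build a model and wrap its optimizer here
    return hvd.rank()

ray.init()
settings = RayExecutor.create_settings(timeout_s=30)
executor = RayExecutor(settings, num_workers=2, use_gpu=False)
executor.start()
print(executor.run(train_fn))  # runs train_fn on every Horovod worker
executor.shutdown()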

Hugging Face Transformers (GitHub: huggingface/transformers)

State-of-the-art Natural Language Processing for PyTorch and TensorFlow 2.0. It integrates with Ray for distributed hyperparameter tuning of transformer models.
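
A hedged sketch of hyperparameter search with Ray Tune through the Trainer API; model_init returns a fresh model per trial, and train_dataset / eval_dataset are assumed to be prepared elsewhere.

from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

def model_init():
    # A fresh model is created for each trial, as hyperparameter_search requires.
    return AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

trainer = Trainer(
    model_init=model_init,
    args=TrainingArguments(output_dir="hp_search"),
    train_dataset=train_dataset,  # assumed: a tokenized training dataset
    eval_dataset=eval_dataset,    # assumed: a tokenized evaluation dataset
)
best_run = trainer.hyperparameter_search(
    direction="maximize",
    backend="ray",  # Ray Tune drives the search
    n_trials=10,
)
print(best_run.hyperparameters)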

Analytics Zoo (GitHub: intel-analytics/analytics-zoo)

Analytics Zoo seamlessly scales TensorFlow, Keras and PyTorch to distributed big data (using Spark, Flink & Ray).

John Snow Labs NLU (GitHub: JohnSnowLabs/nlu)

The power of 350+ pre-trained NLP models, 100+ Word Embeddings, 50+ Sentence Embeddings, and 50+ Classifiers in 46 languages with 1 line of Python code.
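
For example, a single line loads a pre-trained sentiment pipeline and applies it to raw text (the model is downloaded on first use):

import nlu

# One line: resolve, download, and run a pre-trained sentiment pipeline.
print(nlu.load("sentiment").predict("I love this library!"))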

Ludwig (GitHub: ludwig-ai/ludwig)

Ludwig is a toolbox that allows users to train and test deep learning models without the need to write code. With Ludwig, you can train a deep learning model on Ray in zero lines of code, automatically leveraging Dask on Ray for data preprocessing, Horovod on Ray for distributed training, and Ray Tune for hyperparameter optimization.
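
A rough sketch of Ludwig's Python API with a Ray backend declared in the configuration; the tiny inline dataset and the backend section are illustrative assumptions, not a tuned setup.

import pandas as pd
from ludwig.api import LudwigModel

config = {
    "input_features": [{"name": "text", "type": "text"}],
    "output_features": [{"name": "label", "type": "category"}],
    "backend": {"type": "ray"},  # assumed: ask Ludwig to use its Ray backend
}
df = pd.DataFrame({"text": ["great movie", "terrible movie"], "label": ["pos", "neg"]})

model = LudwigModel(config)
model.train(dataset=df)  # preprocessing, training, and tuning are driven by the config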

Mars (GitHub: mars-project/mars)

Mars is a tensor-based unified framework for large-scale data computation which scales NumPy, Pandas, and Scikit-learn. Mars can scale in to a single machine and scale out to a cluster of thousands of machines.

Modin (GitHub: modin-project/modin)

Scale your pandas workflows by changing one line of code. Modin transparently distributes the data and computation so that all you need to do is continue using the pandas API as you were before installing Modin.
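
A minimal sketch; with Ray installed, Modin uses it as the execution engine and the rest of the script is ordinary pandas code.

import ray
import modin.pandas as pd  # the "one line" change: swap the pandas import

ray.init()  # Modin defaults to Ray as its engine when Ray is available

df = pd.DataFrame({"a": range(1_000_000), "b": range(1_000_000)})
print(df.groupby("a").sum().head())  # same pandas API, distributed by Modin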

Prefect (GitHub: PrefectHQ/prefect-ray)

Prefect is an open source workflow orchestration platform that lets you define, track, and schedule workflows in Python. This integration makes it easy to run a Prefect workflow on a Ray cluster in a distributed way.
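
A minimal sketch using the RayTaskRunner from the prefect-ray collection; submitted tasks execute on Ray workers.

from prefect import flow, task
from prefect_ray.task_runners import RayTaskRunner

@task
def square(x: int) -> int:
    return x * x

@flow(task_runner=RayTaskRunner())  # run this flow's tasks on a Ray cluster
def squares_flow(n: int = 10):
    return [square.submit(i) for i in range(n)]

if __name__ == "__main__":
    squares_flow()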

PyCaret (GitHub: pycaret/pycaret)

PyCaret is an open source low-code machine learning library in Python that aims to reduce the hypothesis to insights cycle time in a ML experiment. It enables data scientists to perform end-to-end experiments quickly and efficiently.
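
A short sketch of a typical low-code session on one of PyCaret's bundled datasets (downloaded on first use); the dataset and target name follow PyCaret's own tutorials.

from pycaret.datasets import get_data
from pycaret.classification import setup, compare_models

data = get_data("juice")                             # sample dataset shipped with PyCaret
setup(data=data, target="Purchase", session_id=123)  # one call to prepare the experiment
best_model = compare_models()                        # train and rank a suite of classifiers
print(best_model)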

RayDP (GitHub: Intel-bigdata/oap-raydp)

RayDP (“Spark on Ray”) enables you to easily use Spark inside a Ray program. You can use Spark to read the input data, process the data using SQL, Spark DataFrame, or Pandas (via Koalas) API, extract and transform features using Spark MLLib, and use RayDP Estimator API for distributed training on the preprocessed dataset.
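
A minimal sketch, assuming the init_spark entry point from the RayDP README; the resource sizes are placeholders.

import ray
import raydp

ray.init()
spark = raydp.init_spark(
    app_name="raydp_example",
    num_executors=2,        # Spark executors run as Ray actors
    executor_cores=2,
    executor_memory="2GB",
)
print(spark.range(0, 1000).count())  # ordinary PySpark from here on
raydp.stop_spark()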

Scikit-learn (GitHub: scikit-learn/scikit-learn)

Scikit-learn is a free software machine learning library for the Python programming language. It features various classification, regression and clustering algorithms including support vector machines, random forests, gradient boosting, k-means and DBSCAN, and is designed to interoperate with the Python numerical and scientific libraries NumPy and SciPy.
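
Ray can be registered as a joblib backend so that scikit-learn's parallel work, such as a hyperparameter search, runs on a Ray cluster; a minimal sketch:

import joblib
import ray
from ray.util.joblib import register_ray
from sklearn.datasets import load_digits
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

ray.init()
register_ray()  # expose "ray" as a joblib backend

X, y = load_digits(return_X_y=True)
search = RandomizedSearchCV(
    SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", "auto"]}, n_iter=4, n_jobs=-1
)

with joblib.parallel_backend("ray"):  # the search's parallel fits run on Ray
    search.fit(X, y)
print(search.best_params_)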

Seldon Alibi (GitHub: SeldonIO/alibi)

Alibi is an open source Python library aimed at machine learning model inspection and interpretation. The focus of the library is to provide high-quality implementations of black-box, white-box, local and global explanation methods for classification and regression models.

Sematic (GitHub: sematic-ai/sematic)

Sematic is an open-source ML pipelining tool written in Python. It enables users to write end-to-end pipelines that can seamlessly transition between your laptop and the cloud, with rich visualizations, traceability, reproducibility, and usability as first-class citizens. This integration enables dynamic allocation of Ray clusters within Sematic pipelines.

spaCy (GitHub: explosion/spacy-ray)

spaCy is a library for advanced Natural Language Processing in Python and Cython. It’s built on the very latest research, and was designed from day one to be used in real products.

XGBoost (GitHub: ray-project/xgboost_ray)

XGBoost is a popular gradient boosting library for classification and regression. It is one of the most popular tools in data science and the workhorse of many top-performing Kaggle kernels.
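
A minimal sketch with the xgboost_ray package, which mirrors the core xgboost.train API while sharding data and training across Ray actors.

from sklearn.datasets import load_breast_cancer
from xgboost_ray import RayDMatrix, RayParams, train

X, y = load_breast_cancer(return_X_y=True)
train_set = RayDMatrix(X, y)  # distributed counterpart of xgboost.DMatrix

booster = train(
    {"objective": "binary:logistic", "eval_metric": "logloss"},
    train_set,
    num_boost_round=10,
    ray_params=RayParams(num_actors=2, cpus_per_actor=1),
)
booster.save_model("model.xgb")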

LightGBM (GitHub: ray-project/lightgbm_ray)

LightGBM is a high-performance gradient boosting library for classification and regression. It is designed to be distributed and efficient.

Volcano (GitHub: volcano-sh/volcano)

Volcano is a system for running high-performance workloads on Kubernetes. It features the powerful batch scheduling capabilities required by ML and other data-intensive workloads.