Welcome to the Ray documentation

What can you do with Ray?

Scale machine learning workloads with Ray AIR

Ray AI Runtime (AIR) is an open-source toolkit for building ML applications. It provides libraries for distributed data processing, model training, tuning, reinforcement learning, model serving, and more.

Build distributed applications with Ray Core

Ray Core provides a simple and flexible API for building and running your distributed applications. You can often parallelize single-machine code with little to no code changes.
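
For example, here is a minimal sketch of turning an ordinary Python function into a distributed task with Ray Core (the function and inputs are made up for illustration):

    import ray

    ray.init()  # start Ray locally, or connect to an existing cluster

    @ray.remote
    def square(x):
        # an ordinary Python function, now runnable as a distributed task
        return x * x

    # each .remote() call returns an object reference immediately and runs in parallel
    refs = [square.remote(i) for i in range(4)]

    # block until the results are ready and fetch them
    print(ray.get(refs))  # [0, 1, 4, 9]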

Deploy large-scale workloads with Ray Clusters

With a Ray cluster you can deploy your workloads on AWS, GCP, Azure, or on premises. You can also use Ray cluster managers to run Ray on your existing Kubernetes, YARN, or Slurm clusters.
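
Once a cluster is up, the same application code connects to it instead of starting Ray locally. A minimal sketch (the exact deployment workflow depends on your environment):

    import ray

    # On a laptop, start a local Ray instance:
    ray.init()

    # On a machine inside a running Ray cluster (for example, one launched with the
    # Ray cluster launcher or a Kubernetes operator), connect to the existing
    # cluster instead:
    # ray.init(address="auto")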

What is Ray?

Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a toolkit of libraries (Ray AIR) for simplifying ML compute:

[Figure: overview of the Ray framework and its libraries]

Learn more about Ray AIR and its libraries (a small code sketch follows this list):

  • Datasets: Distributed Data Preprocessing

  • Train: Distributed Training

  • Tune: Scalable Hyperparameter Tuning

  • Serve: Scalable and Programmable Serving

  • RLlib: Scalable Reinforcement Learning
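
As a small illustration of these libraries, the sketch below runs a toy hyperparameter sweep with Ray Tune; the objective function and search space are made up for this example, and API details vary slightly across Ray versions:

    from ray import tune
    from ray.air import session

    def objective(config):
        # toy objective: report a single score derived from the sampled hyperparameter
        session.report({"score": (config["x"] - 3) ** 2})

    tuner = tune.Tuner(
        objective,
        param_space={"x": tune.grid_search([1, 2, 3, 4, 5])},
    )
    results = tuner.fit()
    print(results.get_best_result(metric="score", mode="min").config)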

Or learn more about Ray Core and its key abstractions (see the sketch after this list):

  • Tasks: Stateless functions executed in the cluster.

  • Actors: Stateful worker processes created in the cluster.

  • Objects: Immutable values accessible across the cluster.
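
A minimal sketch of the actor and object abstractions, using a made-up Counter actor:

    import ray

    ray.init()

    @ray.remote
    class Counter:
        """A stateful worker process: its state persists between method calls."""
        def __init__(self):
            self.value = 0

        def increment(self):
            self.value += 1
            return self.value

    counter = Counter.remote()        # create the actor somewhere in the cluster
    ref = counter.increment.remote()  # method calls return object references (futures)
    print(ray.get(ref))               # 1

    # ray.put stores an immutable object in the distributed object store
    obj_ref = ray.put({"greeting": "hello"})
    print(ray.get(obj_ref))           # {'greeting': 'hello'}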

Ray runs on any machine, cluster, cloud provider, and Kubernetes, and features a growing ecosystem of community integrations.

Why Ray?

Today’s ML workloads are increasingly compute-intensive. As convenient as they are, single-node development environments such as your laptop cannot scale to meet these demands.

Ray is a unified way to scale Python and AI applications from a laptop to a cluster.

With Ray, you can seamlessly scale the same code from a laptop to a cluster. Ray is designed to be general-purpose, meaning that it can performantly run any kind of workload. If your application is written in Python, you can scale it with Ray, no other infrastructure required.

How to get involved?

Ray is not only a framework for distributed applications but also an active community of developers, researchers, and folks who love machine learning. Here’s a list of tips for getting involved with the Ray community:

Get involved and become part of the Ray community

💬 Join our community: Discuss all things Ray with us in our community Slack channel or use our discussion board to ask questions and get answers.

💡 Open an issue: Help us improve Ray by submitting feature requests, reporting bugs, or asking for help and getting support via GitHub issues.

👩‍💻 Create a pull request: Found a typo in the documentation? Want to add a new feature? Submit a pull request to help us improve Ray.

🐦 Follow us on Twitter: Stay up to date with the latest news and updates on Ray.

⭐ Star and follow us on GitHub: Support Ray by following its development on GitHub and give us a boost by starring the project.

🤝🏿 Join our Meetup Group: Join one of our community events to learn more about Ray and get a chance to meet the team behind Ray.

🙌 Discuss on Stack Overflow: Use the [ray] tag on Stack Overflow to ask and answer questions about Ray usage.

If you’re interested in contributing to Ray, check out our contributing guide for this release or see the latest version of our contributing guide to read about the contribution process and see what you can work on.

What documentation resource is right for you?

Getting Started

If you’re new to Ray, check out the getting started guide. You will learn how to install Ray, how to run an example with the Ray Core API, and how to use each of Ray’s ML libraries. You will also learn where to go from there.

User Guides

Our user guides provide you with in-depth information about how to use Ray’s libraries and tooling. You will learn about the key concepts and features of Ray and how to use them in practice.

API reference

Our API reference guide provides you with a detailed description of the different Ray APIs. It assumes familiarity with the key concepts and gives you information about functions, classes, and methods.

Developer guides

Do you need more information on how to debug or profile Ray? Do you want to learn more about Ray’s internals? Maybe you spotted a typo in the documentation, or you want to fix a bug or contribute a new feature? Our developer guides will help you get started.