Welcome to the Ray documentation

What can you do with Ray?
Ray AI Runtime (AIR) is an open-source toolkit for building ML applications. It provides libraries for distributed data processing, model training, tuning, reinforcement learning, model serving, and more.
Ray Core provides a simple and flexible API for building and running your distributed applications. You can often parallelize single-machine code with few or no code changes.
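As a minimal sketch (assuming Ray is installed locally, e.g. via pip install ray), an ordinary Python function becomes a distributed task by adding the @ray.remote decorator; the function name and workload here are made up for illustration:

```python
import ray

ray.init()  # start Ray locally; on a cluster this attaches to the running runtime

@ray.remote
def slow_square(x):
    # stand-in for an expensive single-machine function
    return x * x

# .remote() schedules the calls in parallel and returns object references;
# ray.get() blocks until the results are available.
refs = [slow_square.remote(i) for i in range(4)]
print(ray.get(refs))  # [0, 1, 4, 9]
```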
With a Ray cluster, you can deploy your workloads on AWS, GCP, Azure, or on-premise hardware. You can also use Ray cluster managers to run Ray on your existing Kubernetes, YARN, or Slurm clusters.
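For illustration only (the address below is a placeholder for your own head node), an application typically connects to a running cluster before submitting work:

```python
import ray

# Connect to an existing cluster through its Ray Client endpoint
# (port 10001 is the default). If the script itself runs on a cluster node,
# ray.init(address="auto") attaches to the local cluster instead.
ray.init(address="ray://<head-node-host>:10001")
```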
What is Ray?
Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a toolkit of libraries (Ray AIR) for simplifying ML compute.
Learn more about Ray AIR and its libraries (a minimal Tune example follows this list):
Datasets: Distributed Data Preprocessing
Train: Distributed Training
Tune: Scalable Hyperparameter Tuning
Serve: Scalable and Programmable Serving
RLlib: Scalable Reinforcement Learning
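To give a concrete taste of these libraries, here is a minimal Ray Tune sketch; the objective function and the "lr" search space are invented for illustration, and it assumes a Ray 2.x installation with Tune available:

```python
from ray import tune

def objective(config):
    # toy objective: pretend lower scores are better
    score = (config["lr"] - 0.01) ** 2
    return {"score": score}  # a returned dict is reported as the trial's final result

tuner = tune.Tuner(
    objective,
    param_space={"lr": tune.grid_search([0.001, 0.01, 0.1])},
)
results = tuner.fit()
print(results.get_best_result(metric="score", mode="min").config)
```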
Or learn more about Ray Core and its key abstractions (a short example follows this list):
Tasks: Stateless functions executed in the cluster.
Actors: Stateful worker processes created in the cluster.
Objects: Immutable values accessible across the cluster.
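A minimal sketch tying the three abstractions together (the Counter class and function names are illustrative, not part of the Ray API):

```python
import ray

ray.init()

# Task: a stateless function executed somewhere in the cluster.
@ray.remote
def add(a, b):
    return a + b

# Actor: a stateful worker process created in the cluster.
@ray.remote
class Counter:
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1
        return self.value

# Objects: immutable values referenced by ObjectRefs. ray.put() stores one
# explicitly, and every .remote() call returns one implicitly.
x_ref = ray.put(40)
sum_ref = add.remote(x_ref, 2)   # object refs can be passed to tasks directly
print(ray.get(sum_ref))          # 42

counter = Counter.remote()
print(ray.get(counter.increment.remote()))  # 1
```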
Ray runs on any machine, cluster, cloud provider, and Kubernetes, and features a growing ecosystem of community integrations.
Why Ray?
Today’s ML workloads are increasingly compute-intensive. As convenient as they are, single-node development environments such as your laptop cannot scale to meet these demands.
Ray is a unified way to scale Python and AI applications from a laptop to a cluster.
With Ray, you can seamlessly scale the same code from a laptop to a cluster. Ray is designed to be general-purpose, meaning that it can performantly run any kind of workload. If your application is written in Python, you can scale it with Ray, no other infrastructure required.
How to get involved?
Ray is not only a framework for distributed applications but also an active community of developers, researchers, and folks who love machine learning. Here’s a list of tips for getting involved with the Ray community:
Get involved and become part of the Ray community
💬 Join our community: Discuss all things Ray with us in our community Slack channel or use our discussion board to ask questions and get answers.
💡 Open an issue: Help us improve Ray by submitting feature requests and bug reports, or simply ask for help and get support via GitHub issues.
👩‍💻 Create a pull request: Found a typo in the documentation? Want to add a new feature? Submit a pull request to help us improve Ray.
📰 Subscribe to the Ray newsletter: Get the latest news from Ray in a monthly email: Ray updates, community highlights, events, useful tutorials, and more!
🐦 Follow us on Twitter: Stay up to date with the latest news and updates on Ray.
⭐ Star and follow us on GitHub: Support Ray by following its development on GitHub and give us a boost by starring the project.
🤝🏿 Join our Meetup Group: Join one of our community events to learn more about Ray and get a chance to meet the team behind Ray.
🙌 Discuss on Stack Overflow: Use the [ray] tag on Stack Overflow to ask and answer questions about Ray usage.
If you’re interested in contributing to Ray, check out the contributing guide for this release, or the latest version of the guide, to read about the contribution process and see what you can work on.