Ray on Kubernetes
This section covers how to run your distributed Ray programs on a Kubernetes cluster.
Using the KubeRay Operator is the recommended way to do so. The operator provides a Kubernetes-native way to manage Ray clusters. Each Ray cluster consists of a head node pod and a collection of worker node pods. Optional autoscaling support allows the KubeRay Operator to size your Ray clusters according to the requirements of your Ray workload, adding and removing Ray pods as needed. KubeRay supports heterogeneous compute nodes (including GPUs), as well as running multiple Ray clusters with different Ray versions in the same Kubernetes cluster.
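The head-pod/worker-pod layout described above maps onto a RayCluster custom resource. The following is a minimal sketch, not a production configuration: the metadata name, worker group name, image tag, and replica counts are illustrative placeholders, and the exact fields and `apiVersion` depend on your KubeRay operator version (consult the RayCluster API reference).

```yaml
# Minimal RayCluster sketch (illustrative values; adjust the image,
# resources, and replica counts for your workload).
apiVersion: ray.io/v1
kind: RayCluster
metadata:
  name: example-raycluster        # hypothetical name
spec:
  headGroupSpec:
    rayStartParams:
      dashboard-host: "0.0.0.0"   # expose the dashboard inside the pod
    template:
      spec:
        containers:
          - name: ray-head
            image: rayproject/ray:2.9.0   # pick the Ray version your apps use
  workerGroupSpecs:
    - groupName: small-workers    # hypothetical group name
      replicas: 2
      minReplicas: 1
      maxReplicas: 5              # bounds used if autoscaling is enabled
      rayStartParams: {}
      template:
        spec:
          containers:
            - name: ray-worker
              image: rayproject/ray:2.9.0  # should match the head's Ray version
```

Keeping the head and worker images on the same Ray version avoids version-mismatch errors when workers join the cluster.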
Concretely, you will learn how to:
Set up and configure Ray on a Kubernetes cluster
Deploy and monitor Ray applications
Integrate Ray applications with Kubernetes networking
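As a sketch of the first of these steps, assuming a cluster reachable via kubectl and the Helm chart published by the KubeRay project (the release name and the `raycluster.yaml` manifest are placeholders of your choosing):

```
# Add the KubeRay Helm repository and install the operator.
helm repo add kuberay https://ray-project.github.io/kuberay-helm/
helm repo update
helm install kuberay-operator kuberay/kuberay-operator

# Apply your RayCluster custom resource, then watch the head
# and worker pods come up.
kubectl apply -f raycluster.yaml
kubectl get pods --watch
```

These commands assume cluster-admin access sufficient to install the operator's CRDs; the later sections linked below cover configuration and monitoring in detail.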
The Ray docs present all the information you need to start running Ray workloads on Kubernetes.
Learn how to start a Ray cluster and deploy Ray applications on Kubernetes.
Try example Ray workloads on Kubernetes.
Learn best practices for configuring Ray clusters on Kubernetes.
Find API references on RayCluster configuration.
Visit the KubeRay GitHub repo to track progress, report bugs, propose new features, or contribute to the project.
Check out the KubeRay docs for further technical information, developer guides, and discussion of new and upcoming features.
The KubeRay operator replaces the older Ray operator previously hosted in the Ray repository. See the KubeRay README for migration notes.
If you have used the legacy Ray operator in the past, make sure to de-register that operator’s CRD before using KubeRay:
kubectl delete crd rayclusters.cluster.ray.io