Ray on Kubernetes
This section covers how to run your distributed Ray programs on a Kubernetes cluster.
Using the KubeRay Operator is the recommended way to do so. The operator provides a Kubernetes-native way to manage Ray clusters: each Ray cluster consists of a head node pod and a collection of worker node pods. Optional autoscaling support allows the KubeRay Operator to size your Ray clusters to the requirements of your workload, adding and removing Ray pods as needed. KubeRay supports heterogeneous compute nodes (including GPUs), as well as running multiple Ray clusters with different Ray versions in the same Kubernetes cluster.
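To make the head/worker/autoscaling structure concrete, here is a minimal sketch of a RayCluster custom resource as the KubeRay Operator consumes it. The field names follow the RayCluster CRD; the cluster name, group name, image tag, and resource sizes are illustrative assumptions, not prescribed values.

```yaml
# Sketch of a RayCluster manifest, assuming the KubeRay Operator is installed.
apiVersion: ray.io/v1
kind: RayCluster
metadata:
  name: example-raycluster        # illustrative name
spec:
  enableInTreeAutoscaling: true   # let the operator add/remove worker pods
  headGroupSpec:                  # exactly one head node pod
    rayStartParams:
      dashboard-host: "0.0.0.0"
    template:
      spec:
        containers:
          - name: ray-head
            image: rayproject/ray:2.9.0   # pick a tag matching your Ray version
            resources:
              limits:
                cpu: "2"
                memory: 4Gi
  workerGroupSpecs:               # one or more groups of worker node pods
    - groupName: cpu-workers      # illustrative group name
      replicas: 2
      minReplicas: 0              # autoscaler bounds for this group
      maxReplicas: 5
      rayStartParams: {}
      template:
        spec:
          containers:
            - name: ray-worker
              image: rayproject/ray:2.9.0
              resources:
                limits:
                  cpu: "1"
                  memory: 2Gi
```

Heterogeneous clusters are expressed by adding further entries to `workerGroupSpecs`, e.g. a second group whose pod template requests GPUs.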
Concretely, you will learn how to:
- Set up and configure Ray on a Kubernetes cluster
- Deploy and monitor Ray applications
- Integrate Ray applications with Kubernetes networking
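As a preview of the first two steps, the typical flow is to install the KubeRay Operator and then create a RayCluster resource. The commands below are a sketch that assumes a working `kubectl`/`helm` context pointed at your cluster and a RayCluster manifest saved as `raycluster.yaml` (an assumed filename); the chart version pin is illustrative.

```yaml
# Shown as shell commands; they require a live Kubernetes cluster to run.

# 1. Install the KubeRay Operator from its Helm chart.
#    helm repo add kuberay https://ray-project.github.io/kuberay-helm/
#    helm repo update
#    helm install kuberay-operator kuberay/kuberay-operator --version 1.1.0

# 2. Create the Ray cluster from a RayCluster manifest, then watch the
#    head and worker pods come up.
#    kubectl apply -f raycluster.yaml
#    kubectl get pods --watch
```

Once the head pod is running, Ray jobs can be submitted to the cluster; the guides linked below cover deployment and monitoring in detail.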
The Ray docs present all the information you need to start running Ray workloads on Kubernetes.
- Learn how to start a Ray cluster and deploy Ray applications on Kubernetes.
- Try example Ray workloads on Kubernetes.
- Learn best practices for configuring Ray clusters on Kubernetes.
- Find API references on RayCluster configuration.