Getting Started¶

In this guide, we show you how to manage and interact with Ray clusters on Kubernetes.

You can download this guide as an executable Jupyter notebook by clicking the download button on the top right of the page.

Preparation¶

Install the latest Ray release¶

This step is needed to interact with remote Ray clusters using Ray Job Submission and Ray Client.

! pip install -U "ray[default]"

See Installing Ray for more details.

Install kubectl¶

We will use kubectl to interact with Kubernetes. Find installation instructions at the Kubernetes documentation.

Access a Kubernetes cluster¶

We will need access to a Kubernetes cluster. There are two options:

  1. Configure access to a remote Kubernetes cluster OR

  2. Run the examples locally by installing kind. Start your kind cluster by running the following command:

! kind create cluster

To run the example in this guide, make sure your Kubernetes cluster (or local kind cluster) can accommodate additional resource requests of 3 CPU and 2Gi memory.

Deploying the KubeRay operator¶

Deploy the KubeRay operator by cloning the KubeRay repo at the release-0.3 branch and applying the relevant configuration files.

! git clone https://github.com/ray-project/kuberay -b release-0.3

# This creates the KubeRay operator and all of the resources it needs.
! kubectl create -k kuberay/ray-operator/config/default

# Note that we must use "kubectl create" in the above command. "kubectl apply" will not work due to https://github.com/ray-project/kuberay/issues/271

Confirm that the operator is running in the namespace ray-system.

! kubectl -n ray-system get pod --selector=app.kubernetes.io/component=kuberay-operator

# NAME                                READY   STATUS    RESTARTS   AGE
# kuberay-operator-557c6c8bcd-t9zkz   1/1     Running   0          XXs

Namespace-scoped operator¶

Note that the above command deploys the operator at Kubernetes cluster scope; the operator will manage resources in all Kubernetes namespaces. If your use case requires running the operator in a single namespace, refer to the instructions in the KubeRay docs.

Deploying a Ray Cluster¶

Once the KubeRay operator is running, we are ready to deploy a Ray cluster. To do so, we create a RayCluster Custom Resource (CR).

In the rest of this guide, we will deploy resources into the default namespace. To use a non-default namespace, specify the namespace in your kubectl commands:

kubectl -n <your-namespace> ...

# Deploy the Ray Cluster CR:
! kubectl apply -f kuberay/ray-operator/config/samples/ray-cluster.autoscaler.yaml

# This Ray cluster is named `raycluster-autoscaler` because it has optional Ray Autoscaler support enabled.

Once the RayCluster CR has been created, you can view it by running

! kubectl get raycluster

# NAME                    AGE
# raycluster-autoscaler   XXs

The KubeRay operator will detect the RayCluster object and start your Ray cluster by creating head and worker pods. To view the Ray cluster's pods, run the following command:

# View the pods in the Ray cluster named "raycluster-autoscaler"
! kubectl get pods --selector=ray.io/cluster=raycluster-autoscaler

# NAME                                             READY   STATUS    RESTARTS   AGE
# raycluster-autoscaler-head-xxxxx                 2/2     Running   0          XXs
# raycluster-autoscaler-worker-small-group-yyyyy   1/1     Running   0          XXs

We see a Ray head pod with two containers: the Ray container and the autoscaler sidecar. We also see a Ray worker pod with its single Ray container.

Wait for the pods to reach Running state. This may take a few minutes – most of this time is spent downloading the Ray images. In a separate shell, you may wish to observe the pods’ status in real-time with the following command:

# If you're on macOS, first `brew install watch`.
# Run in a separate shell:
! watch -n 1 kubectl get pod

Interacting with a Ray Cluster¶

Now, let’s interact with the Ray cluster we’ve deployed.

Accessing the cluster with kubectl exec¶

The most straightforward way to experiment with your Ray cluster is to exec directly into the head pod. First, identify your Ray cluster’s head pod:

! kubectl get pods --selector=ray.io/cluster=raycluster-autoscaler --selector=ray.io/node-type=head -o custom-columns=POD:metadata.name --no-headers
    
# raycluster-autoscaler-head-xxxxx

Now, we can run a Ray program on the head pod. The Ray program in the next cell asks the autoscaler to scale the cluster to a total of 3 CPUs. The head and worker in our example cluster each have a capacity of 1 CPU, so the request should trigger upscaling of an additional worker pod.

Note that in real-life scenarios, you will want to use larger Ray pods. In fact, it is advantageous to size each Ray pod to take up an entire Kubernetes node. See the configuration guide for more details.

# Substitute your output from the last cell in place of "raycluster-autoscaler-head-xxxxx"

! kubectl exec raycluster-autoscaler-head-xxxxx -it -c ray-head -- python -c "import ray; ray.init(); ray.autoscaler.sdk.request_resources(num_cpus=3)"
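
For readability, here is the same request written out as a short, commented Python script. This is only a sketch restating the one-liner above; it assumes you run it on the head pod (for example via kubectl exec into the ray-head container), where a Ray instance is already running.

# Sketch: the same autoscaler request as the one-liner above, run on the head pod.
import ray
from ray.autoscaler.sdk import request_resources

# Connect to the Ray instance already running on the head pod.
ray.init()

# Ask the autoscaler to scale the cluster to a total of 3 CPUs. With 1 CPU on
# the head pod and 1 CPU on the existing worker pod, this should trigger the
# creation of one additional worker pod.
request_resources(num_cpus=3)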

Autoscaling¶

The last command should have triggered Ray pod upscaling. To confirm the new worker pod is up, let’s query the RayCluster’s pods again:

! kubectl get pod --selector=ray.io/cluster=raycluster-autoscaler

# NAME                                             READY   STATUS    RESTARTS   AGE
# raycluster-autoscaler-head-xxxxx                 2/2     Running   0          XXs
# raycluster-autoscaler-worker-small-group-yyyyy   1/1     Running   0          XXs
# raycluster-autoscaler-worker-small-group-zzzzz   1/1     Running   0          XXs 

To get a summary of your cluster’s status, run ray status on your cluster’s Ray head node.

# Substitute your head pod's name in place of "raycluster-autoscaler-head-xxxxx"
! kubectl exec raycluster-autoscaler-head-xxxxx -it -c ray-head -- ray status

# ======== Autoscaler status: 2022-07-21 xxxxxxxxxx ========
# ....

Alternatively, to examine the full autoscaling logs, fetch the stdout of the Ray head pod’s autoscaler sidecar:

# This command gets the last 20 lines of autoscaler logs.

# Substitute your head pod's name in place of "raycluster-autoscaler-head-xxxxx"
! kubectl logs raycluster-autoscaler-head-xxxxx -c autoscaler | tail -n 20

# ======== Autoscaler status: 2022-07-21 xxxxxxxxxx ========
# ...

The Ray head service¶

The KubeRay operator configures a Kubernetes service targeting the Ray head pod. This service allows us to interact with Ray clusters without directly executing commands in the Ray container. To identify the Ray head service for our example cluster, run

! kubectl get service raycluster-autoscaler-head-svc

# NAME                             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                       AGE
# raycluster-autoscaler-head-svc   ClusterIP   10.96.114.20   <none>        6379/TCP,8265/TCP,10001/TCP   XXs
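
As a brief sketch of how this service can be used without port-forwarding: a pod running inside the same Kubernetes cluster, with a matching Ray installation, could connect to the Ray head via the service's DNS name using Ray Client (covered in more detail later in this guide, including its version-matching requirements). The default namespace in the address below is an assumption matching where we deployed the RayCluster in this guide.

# Sketch only: connect via Ray Client from a pod inside the same Kubernetes cluster.
# The address assumes the RayCluster was deployed in the "default" namespace.
import ray

ray.init("ray://raycluster-autoscaler-head-svc.default.svc.cluster.local:10001")
print(ray.cluster_resources())
ray.shutdown()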

Ray Job submission¶

Ray provides a Job Submission API which can be used to submit Ray workloads to a remote Ray cluster. The Ray Job Submission server listens on the Ray head’s Dashboard port, 8265 by default. Let’s access the dashboard port via port-forwarding.

Note: The following port-forwarding command is blocking. If you are following along from a Jupyter notebook, the command must be executed in a separate shell outside of the notebook.

# Execute this in a separate shell.
! kubectl port-forward service/raycluster-autoscaler-head-svc 8265:8265

Note: We use port-forwarding in this guide as a simple way to experiment with a Ray cluster’s services. For production use-cases, you would typically either

  • Access the service from within the Kubernetes cluster or

  • Use an ingress controller to expose the service outside the cluster.

See the networking notes for details.

Now that we have access to the Dashboard port, we can submit jobs to the Ray Cluster:

# The following job's logs will show the Ray cluster's total resource capacity, including 3 CPUs.

! ray job submit --address http://localhost:8265 -- python -c "import ray; ray.init(); print(ray.cluster_resources())"
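
If you prefer to submit jobs from Python rather than the CLI, the following sketch uses Ray's JobSubmissionClient against the same port-forwarded address. It assumes the port-forward to localhost:8265 from above is still running.

# Sketch: submit the same job via the Python Job Submission SDK.
from ray.job_submission import JobSubmissionClient

client = JobSubmissionClient("http://localhost:8265")
job_id = client.submit_job(
    entrypoint='python -c "import ray; ray.init(); print(ray.cluster_resources())"'
)
print("Submitted job:", job_id)

# Check the job's status; once it has succeeded, its logs will include the
# cluster's total resource capacity.
print(client.get_job_status(job_id))
print(client.get_job_logs(job_id))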

Viewing the Ray Dashboard¶

Assuming the port-forwarding process described above is still running, you may view the Ray Dashboard by visiting localhost:8265 in your browser.

The dashboard port will not be used in the rest of this guide. You may stop the port-forwarding process if you wish.

Accessing the cluster using Ray Client¶

Ray Client allows you to interact programmatically with a remote Ray cluster using the core Ray APIs. To try out Ray Client, first make sure your local Ray version and Python minor version match the versions used in your Ray cluster. The Ray cluster in our example is running Ray 2.0.0 and Python 3.7, so that's what we'll need locally. If you have a different local Python version and would like to avoid changing it, you can modify the images specified in the YAML file ray-cluster.autoscaler.yaml. For example, use rayproject/ray:2.0.0-py38 for Python 3.8.

After confirming the Ray and Python versions match up, the next step is to port-forward the Ray Client server port (10001 by default). If you are following along in a Jupyter notebook, execute the following command in a separate shell.

# Execute this in a separate shell.
! kubectl port-forward service/raycluster-autoscaler-head-svc 10001:10001

Now that we have port-forwarding set up, we can connect to the Ray Client from a local Python shell as follows:

import ray
import platform

ray.init("ray://localhost:10001")

# The network name of the local machine.
local_host_name = platform.node()

# This is a Ray task.
# The task returns the name of the Ray pod that executes it.
@ray.remote
def get_host_name():
    return platform.node()

# The task will be scheduled on the head node.
# Thus, this variable will hold the head pod's name.
remote_host_name = ray.get(get_host_name.remote())

print("The local host name is {}".format(local_host_name))
print("The Ray head pod's name is {}".format(remote_host_name))

# Disconnect from Ray.
ray.shutdown()
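
As an optional follow-up sketch, the same Ray Client connection can be used to confirm the cluster's total resource capacity, mirroring the job we submitted earlier. It assumes the port-forward to localhost:10001 is still active.

import ray

ray.init("ray://localhost:10001")

# Should report a total of 3 CPUs if the extra worker from the earlier
# upscaling request is still running.
print(ray.cluster_resources())

ray.shutdown()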

Cleanup¶

Deleting a Ray Cluster¶

To delete the Ray Cluster we deployed in this example, you can run either of the following commands.

# Delete by reference to the RayCluster custom resource
! kubectl delete raycluster raycluster-autoscaler

OR

# Delete by reference to the yaml file we used to define the RayCluster CR 
! kubectl delete -f kuberay/ray-operator/config/samples/ray-cluster.autoscaler.yaml

Confirm that the Ray Cluster’s pods are gone by running

! kubectl get pods

Note that it may take several seconds for the Ray pods to be fully terminated.

Deleting the KubeRay operator¶

In typical operation, the KubeRay operator should be left as a long-running process that manages many Ray clusters. If you would like to delete the operator and associated resources, run

! kubectl delete -k kuberay/ray-operator/config/default

Deleting a local kind cluster¶

Finally, if you’d like to delete your local kind cluster, run

! kind delete cluster