Getting Started¶

In this guide, we show you how to manage and interact with Ray clusters on Kubernetes.

You can download this guide as an executable Jupyter notebook by clicking the download button on the top right of the page.

Preparation¶

Install the latest Ray release¶

This step is needed to interact with remote Ray clusters using Ray Job Submission.

! pip install -U "ray[default]"

See Installing Ray for more details.

Install kubectl¶

We will use kubectl to interact with Kubernetes. Find installation instructions at the Kubernetes documentation.

Access a Kubernetes cluster¶

We will need access to a Kubernetes cluster. There are two options:

  1. Configure access to a remote Kubernetes cluster OR

  2. Run the examples locally by installing kind. Start your kind cluster by running the following command:

! kind create cluster

To run the example in this guide, make sure your Kubernetes cluster (or local kind cluster) can accommodate additional resource requests of 3 CPU and 2Gi memory.
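
If you are unsure whether your cluster has enough headroom, one way to check (a sketch using standard kubectl output formatting) is to list each node's allocatable CPU and memory:

# Optional: list each node's allocatable CPU and memory.
! kubectl get nodes -o custom-columns=NODE:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory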

Deploying the KubeRay operator¶

Deploy the KubeRay Operator by applying the relevant configuration files from the KubeRay GitHub repo.

# This creates the KubeRay operator and all of the resources it needs.
! kubectl create -k "github.com/ray-project/kuberay/ray-operator/config/default?ref=v0.3.0&timeout=90s"

# Note that we must use "kubectl create" in the above command. "kubectl apply" will not work due to https://github.com/ray-project/kuberay/issues/271

# You may alternatively clone the KubeRay GitHub repo and deploy the operator's configuration from your local file system.

Confirm that the operator is running in the namespace ray-system.

! kubectl -n ray-system get pod --selector=app.kubernetes.io/component=kuberay-operator

# NAME                                READY   STATUS    RESTARTS   AGE
# kuberay-operator-557c6c8bcd-t9zkz   1/1     Running   0          XXs
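
If you are scripting these steps, you can optionally block until the operator pod reports Ready instead of polling, using the same label selector (a sketch; adjust the timeout as needed):

# Optional: block until the operator pod is Ready (times out after 120s).
! kubectl -n ray-system wait pod --selector=app.kubernetes.io/component=kuberay-operator --for=condition=Ready --timeout=120s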

Namespace-scoped operator¶

Note that the above command deploys the operator at Kubernetes cluster scope; the operator will manage resources in all Kubernetes namespaces. If your use case requires running the operator at single-namespace scope, refer to the instructions in the KubeRay docs.

Deploying a Ray Cluster¶

Once the KubeRay operator is running, we are ready to deploy a Ray cluster. To do so, we create a RayCluster Custom Resource (CR).

In the rest of this guide, we will deploy resources into the default namespace. To use a non-default namespace, specify the namespace in your kubectl commands:

kubectl -n <your-namespace> ...

# Deploy a sample Ray Cluster CR from the KubeRay repo:
! kubectl apply -f https://raw.githubusercontent.com/ray-project/kuberay/release-0.3/ray-operator/config/samples/ray-cluster.autoscaler.yaml

# This Ray cluster is named `raycluster-autoscaler` because it has optional Ray Autoscaler support enabled.

Once the RayCluster CR has been created, you can view it by running

! kubectl get raycluster

# NAME                    AGE
# raycluster-autoscaler   XXs
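
To inspect the full configuration of the deployed cluster (head group, worker groups, and autoscaler options), you can describe the custom resource or print it as YAML:

# Inspect the RayCluster configuration.
! kubectl describe raycluster raycluster-autoscaler

# Or print the full custom resource as YAML:
! kubectl get raycluster raycluster-autoscaler -o yaml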

The KubeRay operator will detect the RayCluster object and start your Ray cluster by creating head and worker pods. To view the Ray cluster's pods, run the following command:

# View the pods in the Ray cluster named "raycluster-autoscaler"
! kubectl get pods --selector=ray.io/cluster=raycluster-autoscaler

# NAME                                             READY   STATUS    RESTARTS   AGE
# raycluster-autoscaler-head-xxxxx                 2/2     Running   0          XXs
# raycluster-autoscaler-worker-small-group-yyyyy   1/1     Running   0          XXs

We see a Ray head pod with two containers: the Ray container and the autoscaler sidecar. We also have a Ray worker pod with its single Ray container.
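
If you are curious what the autoscaler is deciding, you can tail the sidecar's logs. This sketch assumes the sidecar container is named "autoscaler", as it is in this sample; check `kubectl describe pod` if yours differs.

# Substitute your head pod's name. The sidecar container is assumed to be named "autoscaler".
! kubectl logs raycluster-autoscaler-head-xxxxx -c autoscaler --tail=20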

Wait for the pods to reach the Running state. This may take a few minutes; most of this time is spent downloading the Ray images. In a separate shell, you may wish to observe the pods' status in real time with the following command:

# If you're on macOS, first `brew install watch`.
# Run in a separate shell:
! watch -n 1 kubectl get pod
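
As with the operator pod earlier, a non-interactive alternative is to block until the Ray pods report Ready with kubectl wait (a sketch; increase the timeout if image downloads are slow):

# Non-interactive alternative: block until all pods of the Ray cluster are Ready.
! kubectl wait pod --selector=ray.io/cluster=raycluster-autoscaler --for=condition=Ready --timeout=300s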

Note that in production scenarios, you will want to use larger Ray pods. In fact, it is advantageous to size each Ray pod to take up an entire Kubernetes node. See the configuration guide for more details.

Running Applications on a Ray Cluster¶

Now, let’s interact with the Ray cluster we’ve deployed.

Accessing the cluster with kubectl exec¶

The most straightforward way to experiment with your Ray cluster is to exec directly into the head pod. First, identify your Ray cluster’s head pod:

! kubectl get pods --selector=ray.io/cluster=raycluster-autoscaler --selector=ray.io/node-type=head -o custom-columns=POD:metadata.name --no-headers
    
# raycluster-autoscaler-head-xxxxx
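
If you are following along in a shell session rather than the notebook, you may find it convenient to capture the head pod's name in a variable and reuse it in the commands below (a sketch; HEAD_POD is just an illustrative name):

# Run these in a shell session (shell variables don't persist across notebook cells).
HEAD_POD=$(kubectl get pods --selector=ray.io/cluster=raycluster-autoscaler --selector=ray.io/node-type=head -o custom-columns=POD:metadata.name --no-headers)
echo $HEAD_POD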

Now, we can run a Ray program on the head pod. The Ray program in the next cell simply connects to the Ray Cluster, then exits.

# Substitute your output from the last cell in place of "raycluster-autoscaler-head-xxxxx"

! kubectl exec raycluster-autoscaler-head-xxxxx -it -c ray-head -- python -c "import ray; ray.init()"
# 2022-08-10 11:23:17,093 INFO worker.py:1312 -- Connecting to existing Ray cluster at address: <IP address>:6379...
# 2022-08-10 11:23:17,097 INFO worker.py:1490 -- Connected to Ray cluster.
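
Another handy ad-hoc check is `ray status`, which prints the cluster's current resource usage and, since the autoscaler is enabled here, the autoscaler's status:

# Substitute your head pod's name as before.
! kubectl exec raycluster-autoscaler-head-xxxxx -c ray-head -- ray status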

While this can be useful for ad-hoc execution on the Ray Cluster, the recommended way to execute an application on a Ray Cluster is to use Ray Jobs.

Ray Job submission¶

To set up your Ray Cluster for Ray Job submission, we just need to make sure that the Ray Jobs port is accessible to the client. Ray listens for Job requests through the head pod's Dashboard server.

First, we need to find the location of the Ray head node. The KubeRay operator configures a Kubernetes service targeting the Ray head pod. This service allows us to interact with Ray clusters without directly executing commands in the Ray container. To identify the Ray head service for our example cluster, run:

! kubectl get service raycluster-autoscaler-head-svc

# NAME                             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                       AGE
# raycluster-autoscaler-head-svc   ClusterIP   10.96.114.20   <none>        6379/TCP,8265/TCP,10001/TCP   XXs

Now that we have the name of the service, we can use port-forwarding to access the Ray Dashboard port (8265 by default).

Note: The following port-forwarding command is blocking. If you are following along from a Jupyter notebook, the command must be executed in a separate shell outside of the notebook.

# Execute this in a separate shell.
! kubectl port-forward service/raycluster-autoscaler-head-svc 8265:8265

Note: We use port-forwarding in this guide as a simple way to experiment with a Ray cluster's services. For production use cases, you would typically either

  • Access the service from within the Kubernetes cluster or

  • Use an ingress controller to expose the service outside the cluster.

See the networking notes for details.
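
Before submitting a job, you can optionally check that the forwarded port is reachable. The /api/version endpoint below is served by the Ray Job submission server on the head node; if it is not available in your Ray version, simply open http://localhost:8265 in a browser instead.

# With the port-forward running, check that the Dashboard port responds.
! curl -s http://localhost:8265/api/version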

Now that we have access to the Dashboard port, we can submit jobs to the Ray Cluster:

# The following job's logs will show the Ray cluster's total resource capacity, including 3 CPUs.

! ray job submit --address http://localhost:8265 -- python -c "import ray; ray.init(); print(ray.cluster_resources())"
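
`ray job submit` prints a submission ID (of the form `raysubmit_...`). You can use it to query the job's status and retrieve its logs; the ID below is a placeholder, so substitute the one from your own submission.

# Substitute the submission ID printed by `ray job submit` above.
! ray job status --address http://localhost:8265 raysubmit_XXXXXXXXXXXXXXXX
! ray job logs --address http://localhost:8265 raysubmit_XXXXXXXXXXXXXXXX

# List all jobs that have been submitted to the cluster:
! ray job list --address http://localhost:8265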

For a more detailed guide on using Ray Jobs to run applications on a Ray Cluster, check out the quickstart guide.

Cleanup¶

Deleting a Ray Cluster¶

To delete the Ray Cluster we deployed in this example, run the following command.

# Delete by reference to the RayCluster custom resource
! kubectl delete raycluster raycluster-autoscaler

Confirm that the Ray Cluster’s pods are gone by running

! kubectl get pods

Note that it may take several seconds for the Ray pods to be fully terminated.
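
You can also confirm that the RayCluster custom resource itself has been removed:

# The RayCluster custom resource should no longer be listed.
! kubectl get raycluster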

Deleting the KubeRay operator¶

In typical operation, the KubeRay operator should be left as a long-running process that manages many Ray clusters. If you would like to delete the operator and associated resources, run

! kubectl delete -k "github.com/ray-project/kuberay/ray-operator/config/default?ref=v0.3.0&timeout=90s"

Deleting a local kind cluster¶

Finally, if you’d like to delete your local kind cluster, run

! kind delete cluster