Using GPUs#

This document provides tips on GPU usage with KubeRay.

To use GPUs on Kubernetes, you need to both configure your Kubernetes setup and add additional values to your Ray cluster configuration.

To learn about GPU usage on different clouds, see instructions for GKE, for EKS, and for AKS.

Quickstart: Serve a GPU-based StableDiffusion model#

You can find several GPU workload examples in the examples section of the docs. The StableDiffusion example is a good place to start.

Dependencies for GPU-based machine learning#

The Ray Docker Hub hosts CUDA-based container images packaged with Ray and certain machine learning libraries. For example, the image rayproject/ray-ml:2.6.3-gpu is ideal for running GPU-based ML workloads with Ray 2.6.3. The Ray ML images are packaged with dependencies (such as TensorFlow and PyTorch) needed for the Ray libraries used in these docs. To add custom dependencies, use one or both of the following methods: build a custom image on top of one of the official Ray images, or install packages at runtime with Ray runtime environments.
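For instance, here is a minimal sketch of the runtime-environment approach; the pinned package below is purely illustrative:

import ray

# Install extra pip packages at runtime via a Ray runtime environment.
# The package and version here are illustrative placeholders.
ray.init(runtime_env={"pip": ["transformers==4.31.0"]})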

Configuring Ray pods for GPU usage#

Using NVIDIA GPUs requires specifying nvidia.com/gpu resource limits and requests in the container fields of your RayCluster’s headGroupSpec and/or workerGroupSpecs.

Here is a config snippet for a RayCluster workerGroup of up to 5 GPU workers.

groupName: gpu-group
replicas: 0
minReplicas: 0
maxReplicas: 5
...
template:
    spec:
     ...
     containers:
      - name: ray-node
        image: rayproject/ray-ml:2.6.3-gpu
        ...
        resources:
         requests:
          nvidia.com/gpu: 1 # Optional, included just for documentation.
          cpu: 3
          memory: 50Gi
         limits:
          nvidia.com/gpu: 1 # Required to use GPU.
          cpu: 3
          memory: 50Gi

Each of the Ray pods in the group can be scheduled on an AWS p2.xlarge instance (1 GPU, 4 vCPUs, 61 GiB RAM).

Tip

GPU instances are expensive, so consider setting up autoscaling for your GPU Ray workers, as demonstrated by the minReplicas:0 and maxReplicas:5 settings above. To enable autoscaling, also set enableInTreeAutoscaling:true in your RayCluster’s spec. Finally, make sure the group or pool of GPU Kubernetes nodes is configured to autoscale. Refer to your cloud provider’s documentation for details on autoscaling node pools.
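For reference, here is a sketch of where these fields live in the RayCluster spec (other fields elided):

spec:
  # Deploy the Ray autoscaler so worker Pods scale with demand.
  enableInTreeAutoscaling: true
  workerGroupSpecs:
  - groupName: gpu-group
    replicas: 0
    minReplicas: 0
    maxReplicas: 5
    ...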

GPU multi-tenancy#

If a Pod doesn’t include nvidia.com/gpu in its resource configurations, users typically expect the Pod to be unaware of any GPU devices, even if it’s scheduled on a GPU node. However, when nvidia.com/gpu isn’t specified, the default value for NVIDIA_VISIBLE_DEVICES becomes all, giving the Pod awareness of all GPU devices on the node. This behavior isn’t unique to KubeRay, but is a known issue for NVIDIA. A workaround is to set the NVIDIA_VISIBLE_DEVICES environment variable to void in the Pods which don’t require GPU devices.
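For example, here is a sketch of that workaround on a CPU-only worker group; the group name and image are illustrative:

workerGroupSpecs:
- groupName: cpu-group
  template:
    spec:
      containers:
      - name: ray-node
        image: rayproject/ray:2.6.3
        env:
        # Hide all GPU devices from this Pod, even if it lands on a GPU node.
        - name: NVIDIA_VISIBLE_DEVICES
          value: void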

GPUs and Ray#

This section discusses GPU usage for Ray applications running on Kubernetes. For general guidance on GPU usage with Ray, see also Accelerator Support.

The KubeRay operator advertises container GPU resource limits to the Ray scheduler and the Ray autoscaler. In particular, the Ray container’s ray start entrypoint will be automatically configured with the appropriate --num-gpus option.

GPU workload scheduling#

After a Ray pod with GPU access is deployed, it can execute tasks and actors annotated with GPU requests. For example, the decorator @ray.remote(num_gpus=1) annotates a task or actor that requires 1 GPU.
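As a minimal sketch, the following task runs only once a GPU is available to the Ray cluster (the torch check assumes an image that bundles PyTorch, as the Ray ML images do):

import ray

ray.init()

# Ray schedules this task only on a node with 1 free GPU.
@ray.remote(num_gpus=1)
def check_gpu():
    import torch
    return torch.cuda.is_available()

print(ray.get(check_gpu.remote()))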

GPU autoscaling#

The Ray autoscaler is aware of each Ray worker group’s GPU capacity. Say we have a RayCluster configured as in the config snippet above:

  • There is a worker group of Ray pods with 1 unit of GPU capacity each.

  • The Ray cluster does not currently have any workers from that group.

  • maxReplicas for the group is at least 2.

Then the following Ray program will trigger upscaling of 2 GPU workers.

import ray

ray.init()

@ray.remote(num_gpus=1)
class GPUActor:
    def say_hello(self):
        print("I live in a pod with GPU access.")

# Request actor placement.
gpu_actors = [GPUActor.remote() for _ in range(2)]
# The following command will block until two Ray pods with GPU access are scaled
# up and the actors are placed.
ray.get([actor.say_hello.remote() for actor in gpu_actors])

After the program exits, the actors will be garbage collected. The GPU worker pods will be scaled down after the idle timeout (60 seconds by default). If the GPU worker pods were running on an autoscaling pool of Kubernetes nodes, the Kubernetes nodes will be scaled down as well.
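The idle timeout can be tuned through the RayCluster spec’s autoscalerOptions; a sketch, assuming the KubeRay autoscalerOptions field and showing the default value:

spec:
  enableInTreeAutoscaling: true
  autoscalerOptions:
    # How long a worker Pod may sit idle before the autoscaler removes it.
    idleTimeoutSeconds: 60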

Requesting GPUs#

You can also make a direct request to the autoscaler to scale up GPU resources.

import ray

ray.init()
ray.autoscaler.sdk.request_resources(bundles=[{"GPU": 1}] * 2)

After the nodes are scaled up, they will persist until the request is explicitly overridden. The following program will remove the resource request.

import ray

ray.init()
ray.autoscaler.sdk.request_resources(bundles=[])

The GPU workers can then scale down.

Overriding Ray GPU capacity (advanced)#

For specialized use cases, it is possible to override the Ray pod GPU capacities advertised to Ray. To do so, set a value for the num-gpus key of the head or worker group’s rayStartParams. For example,

rayStartParams:
    # Note that all rayStartParam values must be supplied as strings.
    num-gpus: "2"

The Ray scheduler and autoscaler will then account for 2 units of GPU capacity for each Ray pod in the group, even if the container limits do not indicate the presence of a GPU.
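You can confirm what the cluster advertises from any Ray driver; a quick sketch:

import ray

ray.init()

# With num-gpus: "2" and one such pod, this reports "GPU": 2.0 even if the
# container has no physical GPU attached.
print(ray.cluster_resources())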

GPU pod scheduling (advanced)#

GPU taints and tolerations#

Note

Managed Kubernetes services typically take care of GPU-related taints and tolerations for you. If you are using a managed Kubernetes service, you might not need to worry about this section.

The NVIDIA GPU plugin for Kubernetes applies taints to GPU nodes; these taints prevent non-GPU pods from being scheduled on GPU nodes. Managed Kubernetes services like GKE, EKS, and AKS automatically apply matching tolerations to pods requesting GPU resources. Tolerations are applied by means of Kubernetes’s ExtendedResourceToleration admission controller. If this admission controller isn’t enabled for your Kubernetes cluster, you may need to manually add a GPU toleration to each of your GPU pod configurations. For example,

apiVersion: v1
kind: Pod
metadata:
 generateName: example-cluster-ray-worker
spec:
 ...
 tolerations:
 - effect: NoSchedule
   key: nvidia.com/gpu
   operator: Exists
 ...
 containers:
 - name: ray-node
   image: rayproject/ray:nightly-gpu
   ...

Node selectors and node labels#

To ensure Ray pods are bound to Kubernetes nodes satisfying specific conditions (such as the presence of GPU hardware), you may wish to use the nodeSelector field of your workerGroup’s pod template spec. See the Kubernetes docs for more about Pod-to-Node assignment.
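For example, here is a sketch of a nodeSelector on a worker group’s Pod template; the label key and value below are GKE-style placeholders, so substitute whatever labels your GPU nodes actually carry:

workerGroupSpecs:
- groupName: gpu-group
  template:
    spec:
      # Only bind these Ray worker Pods to nodes carrying this label.
      nodeSelector:
        cloud.google.com/gke-accelerator: nvidia-tesla-t4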

Further reference and discussion#

Read about Kubernetes device plugins here, about Kubernetes GPU plugins here, and about NVIDIA’s GPU plugin for Kubernetes here.