Warning

This page is under construction!

Key Concepts

This page introduces the following key concepts concerning Ray clusters:

Ray cluster

A Ray cluster consists of a head node and any number of worker nodes.

Figure: A Ray cluster with two worker nodes. Each node runs Ray helper processes to facilitate distributed scheduling and memory management. The head node runs additional control processes, which are highlighted.

The number of worker nodes in a cluster may change with application demand, according to your Ray cluster configuration. This is known as autoscaling. The head node runs the autoscaler.

Note

Ray nodes are implemented as pods when running on Kubernetes.

Users can submit jobs for execution on the Ray cluster, or use the cluster interactively by connecting to the head node and running ray.init. See Clients and Jobs below for more information.
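As a minimal sketch, assuming the code runs on a node that is already part of a running cluster (such as the head node), attaching a driver looks like this:

    import ray

    # "auto" attaches this driver to the existing Ray cluster on this node
    # instead of starting a new single-node Ray instance.
    ray.init(address="auto")

    # Aggregate resources across the head node and all worker nodes.
    print(ray.cluster_resources())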

Worker nodes

Worker nodes execute a Ray application by running its tasks and actors and storing Ray objects. Each worker node runs helper processes that implement distributed scheduling and memory management.
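The sketch below illustrates the three building blocks worker nodes execute and store: a task, an actor, and the objects their results produce. It assumes a driver already connected to the cluster via ray.init:

    import ray

    @ray.remote
    def square(x):
        # A task: a stateless function scheduled onto any node with a free CPU.
        return x * x

    @ray.remote
    class Counter:
        # An actor: a stateful worker process pinned to the node it starts on.
        def __init__(self):
            self.total = 0

        def add(self, value):
            self.total += value
            return self.total

    # Task results are Ray objects stored in the distributed object store.
    refs = [square.remote(i) for i in range(4)]
    print(ray.get(refs))  # [0, 1, 4, 9]

    counter = Counter.remote()
    print(ray.get(counter.add.remote(10)))  # 10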

Head node

Every Ray cluster has one node designated as the head node. The head node is identical to the other worker nodes, except that it also runs singleton processes responsible for cluster management, such as the autoscaler, and the driver processes that run Ray jobs. Unless configured otherwise, Ray may schedule tasks and actors on the head node just like on any other worker node.

Autoscaler

The autoscaler is a process that runs on the head node (or as a sidecar container in the head pod if using Kubernetes). It is responsible for provisioning or deprovisioning worker nodes to meet the needs of the Ray workload. In particular, if the resource demands of the Ray workload exceed the current capacity of the cluster, the autoscaler will attempt to add more nodes. Conversely, if a node is idle for long enough, the autoscaler will remove it from the cluster.
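The sketch below illustrates how workload demand drives scaling: each task declares a CPU requirement, and queuing more tasks than the current nodes can satisfy creates the pending demand the autoscaler reacts to. The task name and resource numbers are only placeholders:

    import ray

    ray.init(address="auto")

    # Each invocation reserves 4 CPUs. If many invocations are queued at once,
    # the unmet demand prompts the autoscaler to provision more worker nodes
    # (up to the configured maximum).
    @ray.remote(num_cpus=4)
    def train_shard(shard_id):
        return shard_id

    # Once these tasks finish and nodes sit idle past the idle timeout,
    # the autoscaler deprovisions the extra nodes.
    results = ray.get([train_shard.remote(i) for i in range(32)])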

To learn more about the autoscaler and how to configure it, refer to the autoscaler user guides.

Clients and Jobs

Ray provides two methods for running workloads on a Ray Cluster: the Ray Client and Ray Job Submission.

  • The Ray Client enables interactive development by connecting a local Python script or shell to the cluster. Developers can scale out their local programs to the cloud as if they were running on their laptop. The Ray Client is used by passing the head node's address to ray.init.

  • Ray Job Submission enables users to submit locally developed and tested applications to a remote Ray Cluster. It simplifies the experience of packaging, deploying, and managing a Ray application. A short sketch of both approaches follows this list.
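The sketch below contrasts the two approaches. The host name, script name, and working directory are placeholders; 10001 and 8265 are the default ports for the Ray Client server and the Ray dashboard (which serves the Jobs API), respectively:

    import ray
    from ray.job_submission import JobSubmissionClient

    # Ray Client: interactive connection from a local script or shell.
    ray.init("ray://<head-node-host>:10001")
    print(ray.cluster_resources())
    ray.shutdown()

    # Ray Job Submission: package and submit a developed and tested application.
    client = JobSubmissionClient("http://<head-node-host>:8265")
    job_id = client.submit_job(
        entrypoint="python my_script.py",          # placeholder entrypoint
        runtime_env={"working_dir": "./my_app"},   # placeholder local directory
    )
    print(client.get_job_status(job_id))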

To learn how to run workloads on a Ray Cluster, refer to the Ray Client and Ray Job Submission user guides.