Core APIs

Deploying a Backend

Backends define the implementation of your business logic or models that will handle incoming requests. In order to support seamless scalability, backends can have many replicas, which are individual processes running in the Ray cluster that handle requests. To define a backend, you must first define the “handler,” i.e. the business logic you’d like to respond with. The handler should take a Starlette Request object as input and return any JSON-serializable object as output. For a more customizable response type, the handler may return a Starlette Response object.

A backend is defined using create_backend, and the implementation can be defined as either a function or a class. Use a function when your response is stateless and a class when you might need to maintain some state (like a model). When using a class, you can specify arguments to be passed to the constructor in create_backend, shown below.

A backend consists of a number of replicas, which are individual copies of the function or class that are started in separate Ray Workers (processes).

from ray import serve

serve.start()


def handle_request(starlette_request):
    return "hello world"


class RequestHandler:
    # Take the message to return as an argument to the constructor.
    def __init__(self, msg):
        self.msg = msg

    def __call__(self, starlette_request):
        return self.msg


serve.create_backend("simple_backend", handle_request)
# Pass in the message that the backend will return as an argument.
# If we call this backend, it will respond with "hello, world!".
serve.create_backend("simple_backend_class", RequestHandler, "hello, world!")

We can also list all available backends and delete them to reclaim resources. Note that a backend cannot be deleted while it is in use by an endpoint, because traffic to that endpoint could then no longer be handled.

>>> serve.list_backends()
{
    'simple_backend': {'num_replicas': 1},
    'simple_backend_class': {'num_replicas': 1},
}
>>> serve.delete_backend("simple_backend")
>>> serve.list_backends()
{
    'simple_backend_class': {'num_replicas': 1},
}

Exposing a Backend

While backends define the implementation of your request handling logic, endpoints allow you to expose them via HTTP. Endpoints are “logical” and can have one or multiple backends that serve requests to them. To create an endpoint, we simply need to specify a name for the endpoint, the name of a backend to handle requests to the endpoint, and the route and methods where it will be accessible. By default, an endpoint is serviced only by the backend provided to create_endpoint, but in some cases you may want to specify multiple backends for an endpoint, e.g., for A/B testing or incremental rollout. For information on how to do this, please see Splitting Traffic.

serve.create_endpoint("simple_endpoint", backend="simple_backend", route="/simple", methods=["GET"])

After creating the endpoint, it is now exposed by the HTTP server and handles requests using the specified backend. We can query the endpoint to verify that it’s working.

import requests
print(requests.get("http://127.0.0.1:8000/simple").text)

We can also query the endpoint using the ServeHandle interface.

import ray

handle = serve.get_handle("simple_endpoint")
print(ray.get(handle.remote()))

To view all of the endpoints that have been created, use serve.list_endpoints.

>>> serve.list_endpoints()
{'simple_endpoint': {'route': '/simple', 'methods': ['GET'], 'traffic': {}}}

You can also delete an endpoint using serve.delete_endpoint. Endpoints and backends are independent, so deleting an endpoint will not delete its backends. However, an endpoint must be deleted in order to delete the backends that serve its traffic.

serve.delete_endpoint("simple_endpoint")
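
With the endpoint removed, its backends are no longer in use and can be deleted as well; for example, to clean up the remaining backend from earlier:

# No endpoint depends on this backend, so it can now be deleted.
serve.delete_backend("simple_backend_class")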

Configuring a Backend

There are a number of things you’ll likely want to do with your serving application, including scaling out, splitting traffic, and configuring the maximum number of in-flight requests for a backend. All of these options are encapsulated in a BackendConfig object for each backend.

The BackendConfig for a running backend can be updated using serve.update_backend_config.
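
For example, here is a sketch of capping the number of in-flight requests per replica via the max_concurrent_queries field of BackendConfig (the backend name limited_backend is illustrative):

from ray.serve import BackendConfig

# Allow at most 8 requests to be processed concurrently by each replica.
config = BackendConfig(max_concurrent_queries=8)
serve.create_backend("limited_backend", handle_request, config=config)

# Later, raise the limit on the running backend without recreating it.
serve.update_backend_config("limited_backend", BackendConfig(max_concurrent_queries=16))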

Scaling Out

To scale out a backend to many processes, simply configure the number of replicas.

config = {"num_replicas": 10}
serve.create_backend("my_scaled_endpoint_backend", handle_request, config=config)

# scale it back down...
config = {"num_replicas": 2}
serve.update_backend_config("my_scaled_endpoint_backend", config)

This will scale up or down the number of replicas that can accept requests.

Resource Management (CPUs, GPUs)

To assign hardware resources per replica, you can pass resource requirements to ray_actor_options. By default, each replica requires one CPU. To learn about the options you can pass in, take a look at the Resources with Actors guide.

For example, to create a backend where each replica uses a single GPU, you can do the following:

config = {"num_gpus": 1}
serve.create_backend("my_gpu_backend", handle_request, ray_actor_options=config)

Fractional Resources

The resources specified in ray_actor_options can also be fractional. This allows you to flexibly share resources between replicas. For example, if you have two models and each doesn’t fully saturate a GPU, you might want to have them share a GPU by allocating 0.5 GPUs each. The same could be done to multiplex over CPUs.

half_gpu_config = {"num_gpus": 0.5}
serve.create_backend("my_gpu_backend_1", handle_request, ray_actor_options=half_gpu_config)
serve.create_backend("my_gpu_backend_2", handle_request, ray_actor_options=half_gpu_config)

Configuring Parallelism with OMP_NUM_THREADS

Deep learning frameworks like PyTorch and TensorFlow often use multithreading when performing inference. The number of CPUs they use is controlled by the OMP_NUM_THREADS environment variable. To avoid contention, Ray sets OMP_NUM_THREADS=1 by default because Ray workers and actors use a single CPU by default. If you do want to enable this parallelism in your Serve backend, just set OMP_NUM_THREADS to the desired value either when starting Ray or in your function/class definition:

OMP_NUM_THREADS=12 ray start --head
OMP_NUM_THREADS=12 ray start --address=$HEAD_NODE_ADDRESS

import os

class MyBackend:
    def __init__(self, parallelism):
        # Environment variable values must be strings.
        os.environ["OMP_NUM_THREADS"] = str(parallelism)
        # Download model weights, initialize model, etc.

serve.create_backend("parallel_backend", MyBackend, 12)

Note

Some other libraries may not respect OMP_NUM_THREADS and have their own way to configure parallelism. For example, if you’re using OpenCV, you’ll need to manually set the number of threads using cv2.setNumThreads(num_threads) (set to 0 to disable multi-threading). You can check the configuration using cv2.getNumThreads() and cv2.getNumberOfCPUs().
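
Here is a minimal sketch of configuring OpenCV threading from within a backend (assuming opencv-python is installed; the backend name opencv_backend is illustrative):

import cv2

class OpenCVBackend:
    def __init__(self, num_threads):
        # OpenCV does not respect OMP_NUM_THREADS, so set its thread
        # count explicitly. Passing 0 disables OpenCV multi-threading.
        cv2.setNumThreads(num_threads)

    def __call__(self, request):
        return {"opencv_threads": cv2.getNumThreads()}

serve.create_backend("opencv_backend", OpenCVBackend, 4)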

User Configuration (Experimental)

Suppose you want to update a parameter in your model without creating a whole new backend. You can do this by writing a reconfigure method for the class underlying your backend. At runtime, you can then pass in your new parameters by setting the user_config field of BackendConfig.

The following simple example will make the usage clear:

import requests
import random

import ray
from ray import serve
from ray.serve import BackendConfig

ray.init()
serve.start()


class Threshold:
    def __init__(self):
        # self.model won't be changed by reconfigure.
        self.model = random.Random()  # Imagine this is some heavyweight model.

    def reconfigure(self, config):
        # This will be called when the class is created and when
        # the user_config field of BackendConfig is updated.
        self.threshold = config["threshold"]

    def __call__(self, request):
        return self.model.random() > self.threshold


backend_config = BackendConfig(user_config={"threshold": 0.01})
serve.create_backend("threshold", Threshold, config=backend_config)
serve.create_endpoint("threshold", backend="threshold", route="/threshold")
print(requests.get("http://127.0.0.1:8000/threshold").text)  # true, probably

backend_config = BackendConfig(user_config={"threshold": 0.99})
serve.update_backend_config("threshold", backend_config)
print(requests.get("http://127.0.0.1:8000/threshold").text)  # false, probably

The reconfigure method is called when the class is created if user_config is set. In particular, it’s also called when new replicas are created in the future, for example if you scale up your backend later. The reconfigure method is also called each time user_config is updated via serve.update_backend_config.

Dependency Management

Ray Serve supports serving backends with different (possibly conflicting) Python dependencies. For example, you can simultaneously serve one backend that uses legacy TensorFlow 1 and another backend that uses TensorFlow 2.

Currently this is supported using conda via Ray’s built-in runtime_env option for actors. As with all other actor options, pass these in via ray_actor_options in your call to serve.create_backend. You must have a conda environment set up for each set of dependencies you want to isolate. If using a multi-node cluster, the desired conda environment must be present on all nodes. See Conda Environments for Tasks and Actors for details.

Here’s an example script. For it to work, first create a conda environment named ray-tf1 with Ray Serve and TensorFlow 1 installed, and another named ray-tf2 with Ray Serve and TensorFlow 2. The Ray and Python versions must be the same in both environments.

import requests
from ray import serve
import tensorflow as tf

serve.start()


def tf_version(request):
    return ("Tensorflow " + tf.__version__)


serve.create_backend(
    "tf1", tf_version, ray_actor_options={"runtime_env": {
        "conda": "ray-tf1"
    }})
serve.create_endpoint("tf1", backend="tf1", route="/tf1")
serve.create_backend(
    "tf2", tf_version, ray_actor_options={"runtime_env": {
        "conda": "ray-tf2"
    }})
serve.create_endpoint("tf2", backend="tf2", route="/tf2")

print(requests.get("http://127.0.0.1:8000/tf1").text)  # Tensorflow 1.15.0
print(requests.get("http://127.0.0.1:8000/tf2").text)  # Tensorflow 2.3.0

Note

If a conda environment is not specified, your backend will be started in the same conda environment as the client (the process calling serve.create_backend) by default. (When using Ray Client, your backend will be started in the conda environment that the Serve controller is running in, which by default is the conda environment the remote Ray cluster was started in.)

The dependencies required in the backend may be different from the dependencies installed in the driver program (the one running the Serve API calls). In this case, you can pass the backend in as an import path, which will be imported in the workers’ Python environment but not in the driver. Example:

import requests

from ray import serve

serve.start()

# Pass the backend in as a string import path; constructor args follow as usual.
import_path = "ray.serve.utils.MockImportedBackend"
serve.create_backend("imported", import_path, "input_arg")
serve.create_endpoint("imported", backend="imported", route="/imported")

print(requests.get("http://127.0.0.1:8000/imported").text)