This section should help you:
understand Ray Serve’s performance characteristics
find ways to debug and tune your Serve application’s performance
This section offers some tips and tricks to improve your Ray Serve application’s performance. Check out the architecture page for helpful context, including an overview of the HTTP proxy actor and deployment replica actors.
We are continuously benchmarking Ray Serve. The metrics we care about are latency, throughput, and scalability. We can confidently say:
Ray Serve’s latency overhead is single digit milliseconds, around 1-2 milliseconds on average.
For throughput, Serve achieves about 3-4k queries per second on a single machine (8 cores) using 1 HTTP proxy actor and 8 replicas performing no-op requests.
It is horizontally scalable so you can add more machines to increase the overall throughput. Ray Serve is built on top of Ray, so its scalability is bounded by Ray’s scalability. Please see Ray’s scalability envelope to learn more about the maximum number of nodes and other limitations.
We run long-running benchmarks nightly:
Runs a 10-minute wrk trial on a single no-op deployment with 1000 replicas.
Head node: AWS EC2 m5.8xlarge. 32 worker nodes: AWS EC2 m5.8xlarge.
Runs a 10-minute wrk trial on 10 deployments with 100 replicas each. Each deployment recursively sends queries to up to 5 other deployments.
Head node: AWS EC2 m5.8xlarge. 32 worker nodes: AWS EC2 m5.8xlarge.
Runs a 10-node ensemble, constructed with a call graph, that performs basic arithmetic at each node. The ensemble pattern routes the input to 10 different nodes, and their outputs are combined to produce the final output. Simulates 4 clients making 20 requests each.
Head node: AWS EC2 m5.8xlarge. 0 worker nodes.
The performance numbers above come from a recent run of the nightly benchmarks.
Check out our benchmark workloads’ source code directly to get a better sense of what they test. You can see which cluster templates each benchmark uses here (under the cluster_compute key), and you can see what type of nodes each template spins up here.
You can check out our microbenchmark instructions to benchmark Ray Serve on your hardware.
Serve offers a request batching feature that can improve your service throughput without sacrificing latency. This is possible because ML models can utilize efficient vectorized computation to process a batch of request at a time. Batching is also necessary when your model is expensive to use and you want to maximize the utilization of hardware.
Machine Learning (ML) frameworks such as Tensorflow, PyTorch, and Scikit-Learn support evaluating multiple samples at the same time. Ray Serve allows you to take advantage of this feature via dynamic request batching. When a request arrives, Serve puts the request in a queue. This queue buffers the requests to form a batch. The deployment picks up the batch and evaluates it. After the evaluation, the resulting batch will be split up, and each response is returned individually.
You can enable batching by using the ray.serve.batch decorator. Let’s take a look at a simple example by modifying the Model class to accept a batch.
from ray import serve
import ray


@serve.deployment
class Model:
    def __call__(self, single_sample: int) -> int:
        return single_sample * 2


handle = serve.run(Model.bind())
assert ray.get(handle.remote(1)) == 2
The batch decorator expects you to make the following changes to your method signature:
The method is declared as an async method because the decorator batches requests on the asyncio event loop.
The method accepts a list of its original input types as input. For example, arg1: int, arg2: str should be changed to arg1: List[int], arg2: List[str].
The method returns a list. The return list must be the same length as the input list so the decorator can split the output evenly and return a corresponding response back to each respective request.
from typing import List

import numpy as np

from ray import serve
import ray


@serve.deployment
class Model:
    @serve.batch(max_batch_size=8, batch_wait_timeout_s=0.1)
    async def __call__(self, multiple_samples: List[int]) -> List[int]:
        # Use numpy's vectorized computation to efficiently process a batch.
        return np.array(multiple_samples) * 2


handle = serve.run(Model.bind())
assert ray.get([handle.remote(i) for i in range(8)]) == [i * 2 for i in range(8)]
You can supply two optional parameters to the decorator.
batch_wait_timeout_s controls how long Serve should wait for a batch once the first request arrives.
max_batch_size controls the size of the batch. Once the first request arrives, the batching decorator will wait for a full batch (up to max_batch_size) until batch_wait_timeout_s is reached. If the timeout is reached, the batch is sent to the model regardless of its size.
max_batch_size should ideally be a power of 2 (2, 4, 8, 16, …) because CPUs and GPUs are both optimized for data of these shapes. Large batch sizes incur a high memory cost as well as a latency penalty for the first few requests.
batch_wait_timeout_s should be set considering the end-to-end latency SLO (Service Level Objective). For example, if your latency target is 150ms, and the model takes 100ms to evaluate the batch, batch_wait_timeout_s should be set to a value much lower than 150ms - 100ms = 50ms.
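For instance, here is a minimal sketch with hypothetical numbers (a 150ms latency target and roughly 100ms per batch leave about 50ms of slack, so the wait timeout is kept well below that):

from typing import List

from ray import serve


@serve.deployment
class TunedModel:
    # Hypothetical tuning: the wait timeout (10ms) is kept well below the
    # ~50ms of slack between the latency target and the batch evaluation time.
    @serve.batch(max_batch_size=16, batch_wait_timeout_s=0.01)
    async def __call__(self, multiple_samples: List[int]) -> List[int]:
        return [s * 2 for s in multiple_samples]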
When using batching in a Serve deployment graph, the relationship between an upstream node and a downstream node can also affect performance. Consider a chain of two models where the first model sets max_batch_size=8 and the second model sets max_batch_size=6. In this scenario, when the first model finishes a full batch of 8, the second model finishes one batch of 6 and then waits to fill its next batch, which is initially only partially filled with 8 - 6 = 2 requests, incurring latency costs. The batch sizes of downstream models should ideally be multiples or divisors of the upstream models' batch sizes to ensure the batches play well together.
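As an illustration, here is a minimal sketch of two hypothetical deployments whose batch sizes are compatible; the wiring that chains them together is omitted. The downstream size (4) divides the upstream size (8), so each full upstream batch splits evenly into downstream batches:

from typing import List

from ray import serve


@serve.deployment
class UpstreamModel:
    @serve.batch(max_batch_size=8, batch_wait_timeout_s=0.1)
    async def __call__(self, inputs: List[int]) -> List[int]:
        return [i * 2 for i in inputs]


@serve.deployment
class DownstreamModel:
    # 4 divides 8: each full upstream batch of 8 becomes two full downstream
    # batches of 4, instead of leaving a partially filled batch waiting on the
    # timeout.
    @serve.batch(max_batch_size=4, batch_wait_timeout_s=0.1)
    async def __call__(self, inputs: List[int]) -> List[int]:
        return [i + 1 for i in inputs]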
The performance issue you’re most likely to encounter is high latency and/or low throughput for requests.
Once you set up monitoring with Ray and Ray Serve, these issues may appear as:
serve_num_router_requests staying constant while your load increases
serve_deployment_processing_latency_ms spiking up as queries queue up in the background
There are a handful of ways to address these issues:
Make sure you are using the right hardware and resources (see the sketch after this list):
Are you reserving GPUs for your deployment replicas using ray_actor_options?
Are you reserving one or more cores for your deployment replicas using ray_actor_options?
Are you setting OMP_NUM_THREADS to increase the performance of your deep learning framework?
Consider using async methods in your callable. See the section below.
Consider batching your requests. See the batching section above.
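For the hardware and resource checks above, here is a minimal sketch (the resource values and the class name are illustrative assumptions, not recommendations) of reserving GPUs and CPU cores through ray_actor_options in the deployment decorator:

from ray import serve


@serve.deployment(
    # Illustrative values: reserve one GPU and two CPU cores for each replica.
    ray_actor_options={"num_gpus": 1, "num_cpus": 2},
)
class HardwareAwareModel:
    def __call__(self, sample: int) -> int:
        return sample * 2

OMP_NUM_THREADS is typically set as an environment variable before the deep learning framework initializes, for example in the environment of the process that runs the replicas.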
Are you using async def in your callable? If you are using asyncio and hitting the same queuing issue mentioned above, you might want to increase max_concurrent_queries. Serve sets a low number (100) by default so the client gets proper backpressure. You can increase the value in the deployment decorator, as sketched below.
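For example, a minimal sketch (the value 1000 and the class name are illustrative assumptions):

from ray import serve


# Raise max_concurrent_queries above the default of 100 so more requests can be
# in flight per replica before the client sees backpressure.
@serve.deployment(max_concurrent_queries=1000)
class AsyncModel:
    async def __call__(self, sample: int) -> int:
        return sample * 2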