Ray Serve: Scalable and Programmable Serving#

Tip

Get in touch with us if you’re using or considering using Ray Serve.


Ray Serve is a scalable model serving library for building online inference APIs. Serve is framework-agnostic, so you can use a single toolkit to serve everything from deep learning models built with frameworks like PyTorch, TensorFlow, and Keras, to Scikit-Learn models, to arbitrary Python business logic. It also has several features and performance optimizations for serving large language models, such as response streaming, dynamic request batching, and multi-node/multi-GPU serving.

Ray Serve is particularly well suited for model composition and many-model serving, enabling you to build a complex inference service consisting of multiple ML models and business logic, all in Python code.

Ray Serve is built on top of Ray, so it easily scales to many machines and offers flexible scheduling support such as fractional GPUs so you can share resources and serve many machine learning models at low cost.
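
For example, here is a minimal sketch of fractional GPU allocation (assuming a GPU is available in your cluster): two lightweight deployments each reserve half a GPU, so both fit on one physical device.

from ray import serve


# Each replica reserves half a GPU, so both deployments share one device.
@serve.deployment(ray_actor_options={"num_gpus": 0.5})
class SmallModelA:
    def __call__(self, request) -> str:
        return "model A"


@serve.deployment(ray_actor_options={"num_gpus": 0.5})
class SmallModelB:
    def __call__(self, request) -> str:
        return "model B"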

Quickstart#

Install Ray Serve and its dependencies:

pip install "ray[serve]"

Define a simple “hello world” application, run it locally, and query it over HTTP.

import requests
from starlette.requests import Request
from typing import Dict

from ray import serve


# 1: Define a Ray Serve application.
@serve.deployment
class MyModelDeployment:
    def __init__(self, msg: str):
        # Initialize model state: could be very large neural net weights.
        self._msg = msg

    def __call__(self, request: Request) -> Dict:
        return {"result": self._msg}


app = MyModelDeployment.bind(msg="Hello world!")

# 2: Deploy the application locally.
serve.run(app, route_prefix="/")

# 3: Query the application and print the result.
print(requests.get("http://localhost:8000/").json())
# {'result': 'Hello world!'}

More examples#

Use Serve’s model composition API to combine multiple deployments into a single application.

import requests
import starlette
from typing import Dict
from ray import serve
from ray.serve.handle import DeploymentHandle


# 1. Define the models in our composition graph and an ingress that calls them.
@serve.deployment
class Adder:
    def __init__(self, increment: int):
        self.increment = increment

    def add(self, inp: int):
        return self.increment + inp


@serve.deployment
class Combiner:
    def average(self, *inputs) -> float:
        return sum(inputs) / len(inputs)


@serve.deployment
class Ingress:
    def __init__(
        self,
        adder1: DeploymentHandle,
        adder2: DeploymentHandle,
        combiner: DeploymentHandle,
    ):
        self._adder1 = adder1
        self._adder2 = adder2
        self._combiner = combiner

    async def __call__(self, request: starlette.requests.Request) -> Dict[str, float]:
        input_json = await request.json()
        final_result = await self._combiner.average.remote(
            self._adder1.add.remote(input_json["val"]),
            self._adder2.add.remote(input_json["val"]),
        )
        return {"result": final_result}


# 2. Build the application consisting of the models and ingress.
app = Ingress.bind(Adder.bind(increment=1), Adder.bind(increment=2), Combiner.bind())
serve.run(app)

# 3: Query the application and print the result.
print(requests.post("http://localhost:8000/", json={"val": 100.0}).json())
# {"result": 101.5}

Use Serve’s FastAPI integration to elegantly handle HTTP parsing and validation.

import requests
from fastapi import FastAPI
from ray import serve

# 1: Define a FastAPI app and wrap it in a deployment with a route handler.
app = FastAPI()


@serve.deployment
@serve.ingress(app)
class FastAPIDeployment:
    # FastAPI will automatically parse the HTTP request for us.
    @app.get("/hello")
    def say_hello(self, name: str) -> str:
        return f"Hello {name}!"


# 2: Deploy the deployment.
serve.run(FastAPIDeployment.bind(), route_prefix="/")

# 3: Query the deployment and print the result.
print(requests.get("http://localhost:8000/hello", params={"name": "Theodore"}).json())
# "Hello Theodore!"

To run this example, install Hugging Face Transformers:

pip install transformers

Serve a pre-trained Hugging Face Transformers model using Ray Serve. The model we’ll use is a sentiment analysis model: it takes a text string as input and returns whether the sentiment is “POSITIVE” or “NEGATIVE.”

import requests
from starlette.requests import Request
from typing import Dict

from transformers import pipeline

from ray import serve


# 1: Wrap the pretrained sentiment analysis model in a Serve deployment.
@serve.deployment
class SentimentAnalysisDeployment:
    def __init__(self):
        self._model = pipeline("sentiment-analysis")

    def __call__(self, request: Request) -> Dict:
        return self._model(request.query_params["text"])[0]


# 2: Deploy the deployment.
serve.run(SentimentAnalysisDeployment.bind(), route_prefix="/")

# 3: Query the deployment and print the result.
print(
    requests.get(
        "http://localhost:8000/", params={"text": "Ray Serve is great!"}
    ).json()
)
# {'label': 'POSITIVE', 'score': 0.9998476505279541}

Why choose Serve?#

Build end-to-end ML-powered applications

Many solutions for ML serving focus on “tensor-in, tensor-out” serving: that is, they wrap ML models behind a predefined, structured endpoint. However, machine learning isn’t useful in isolation. It’s often important to combine machine learning with business logic and traditional web serving logic such as database queries.

Ray Serve is unique in that it allows you to build and deploy an end-to-end distributed serving application in a single framework. You can combine multiple ML models, business logic, and expressive HTTP handling using Serve’s FastAPI integration (see FastAPI HTTP Deployments) to build your entire application as one Python program.

Combine multiple models using a programmable API

Often solving a problem requires more than just a single machine learning model. For instance, image processing applications typically require a multi-stage pipeline consisting of steps like preprocessing, segmentation, and filtering to achieve their end goal. In many cases each model may use a different architecture or framework and require different resources (like CPUs vs GPUs).

Many other solutions support defining a static graph in YAML or some other configuration language. This can be limiting and hard to work with. Ray Serve, on the other hand, supports multi-model composition using a programmable API where calls to different models look just like function calls. The models can use different resources and run across different machines in the cluster, but you can write it like a regular program. See Deploy Compositions of Models for more details.

Flexibly scale up and allocate resources

Machine learning models are compute-intensive and therefore can be very expensive to operate. A key requirement for any ML serving system is being able to dynamically scale up and down and allocate the right resources for each model to handle the request load while saving cost.

Serve offers a number of built-in primitives to help make your ML serving application efficient. It supports dynamically scaling the resources for a model up and down by adjusting the number of replicas, batching requests to take advantage of efficient vectorized operations (especially important on GPUs), and a flexible resource allocation model that enables you to serve many models on limited hardware resources.
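
As a rough sketch of how these primitives fit together (the parameter values are illustrative, and the exact autoscaling knobs vary by Ray version), a single deployment can combine autoscaling_config with @serve.batch so replicas scale with load and work is vectorized across requests:

from typing import List
from starlette.requests import Request
from ray import serve


@serve.deployment(
    # Scale between 1 and 4 replicas based on request load.
    autoscaling_config={"min_replicas": 1, "max_replicas": 4},
)
class BatchedModel:
    @serve.batch(max_batch_size=8, batch_wait_timeout_s=0.1)
    async def handle_batch(self, inputs: List[float]) -> List[float]:
        # A real model would run one vectorized forward pass here.
        return [x * 2 for x in inputs]

    async def __call__(self, request: Request) -> float:
        return await self.handle_batch(float((await request.json())["val"]))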

Avoid framework or vendor lock-in

Machine learning moves fast, with new libraries and model architectures being released all the time. It’s important to avoid locking yourself into a solution that is tied to a specific framework. This is particularly important in serving, where making changes to your infrastructure can be time-consuming, expensive, and risky. Additionally, many hosted solutions are limited to a single cloud provider, which can be a problem in today’s multi-cloud world.

Ray Serve is not tied to any specific machine learning library or framework, but rather provides a general-purpose scalable serving layer. Because it’s built on top of Ray, you can run it anywhere Ray can: on your laptop, on Kubernetes, on any major cloud provider, or even on-premises.

How can Serve help me as a…#

Data scientist

Serve makes it easy to go from a laptop to a cluster. You can test your models (and your entire deployment graph) on your local machine before deploying them to production on a cluster. You don’t need to know heavyweight Kubernetes concepts or cloud configurations to use Serve.
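
For instance, here is a minimal sketch of local testing (assuming a recent Ray version where serve.run returns a handle to the ingress deployment): you call the deployment directly in Python, with no HTTP involved.

from ray import serve


@serve.deployment
class Doubler:
    def double(self, value: float) -> float:
        return value * 2


# serve.run starts the app locally and returns a DeploymentHandle.
handle = serve.run(Doubler.bind())
result = handle.double.remote(2.0).result()  # blocks until the call finishes
print(result)  # 4.0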

ML engineer

Serve helps you scale out your deployments and run them reliably and efficiently to save costs. With Serve’s first-class model composition API, you can combine models with business logic and build end-to-end user-facing applications. Additionally, Serve runs natively on Kubernetes with minimal operational overhead.

ML platform engineer

Serve specializes in scalable and reliable ML model serving. As such, it can be an important plug-and-play component of your ML platform stack. Serve supports arbitrary Python code and therefore integrates well with the MLOps ecosystem. You can use it with model optimizers (ONNX, TVM), model monitoring systems (Seldon Alibi, Arize), model registries (MLFlow, Weights and Biases), machine learning frameworks (XGBoost, Scikit-learn), data app UIs (Gradio, Streamlit), and Web API frameworks (FastAPI, gRPC).

LLM developer

Serve enables you to rapidly prototype, develop, and deploy scalable LLM applications to production. Many large language model (LLM) applications combine prompt preprocessing, vector database lookups, LLM API calls, and response validation. Because Serve supports arbitrary Python code, you can write all these steps as a single Python module, enabling rapid development and easy testing. You can then quickly deploy your Ray Serve LLM application to production, and each application step can independently autoscale to efficiently accommodate user traffic without wasting resources. To improve the performance of your LLM applications, Ray Serve has features for batching and can integrate with any model optimization technique. Ray Serve also supports streaming responses, a key feature for chatbot-like applications.
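
For example, here is a minimal sketch of a streaming endpoint (the token generator below is a stand-in for incremental LLM output): the deployment returns a Starlette StreamingResponse so clients receive tokens as they are produced.

from starlette.requests import Request
from starlette.responses import StreamingResponse

from ray import serve


@serve.deployment
class Chatbot:
    async def __call__(self, request: Request) -> StreamingResponse:
        prompt = request.query_params.get("prompt", "")

        def token_stream():
            # Stand-in for tokens produced incrementally by an LLM.
            for token in ["You", " said:", " ", prompt]:
                yield token

        return StreamingResponse(token_stream(), media_type="text/plain")


serve.run(Chatbot.bind(), route_prefix="/")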

How does Serve compare to …#

TFServing, TorchServe, ONNXRuntime

Ray Serve is framework-agnostic, so you can use it alongside any other Python framework or library. We believe data scientists should not be bound to a particular machine learning framework. They should be empowered to use the best tool available for the job.

Compared to these framework-specific solutions, Ray Serve doesn’t perform any model-specific optimizations to make your ML model run faster. However, you can still optimize the models yourself and run them in Ray Serve. For example, you can run a model compiled by PyTorch JIT or ONNXRuntime.
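
For example, here is a sketch of serving a model compiled ahead of time with TorchScript (the model path is hypothetical, and PyTorch must be installed):

import torch
from starlette.requests import Request

from ray import serve


@serve.deployment
class CompiledModelDeployment:
    def __init__(self):
        # Hypothetical TorchScript artifact produced ahead of time with
        # torch.jit.trace or torch.jit.script.
        self._model = torch.jit.load("model.pt")

    async def __call__(self, request: Request):
        inputs = torch.tensor(await request.json())
        with torch.no_grad():
            return self._model(inputs).tolist()


serve.run(CompiledModelDeployment.bind(), route_prefix="/")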

AWS SageMaker, Azure ML, Google Vertex AI

As an open-source project, Ray Serve brings the scalability and reliability of these hosted offerings to your own infrastructure. You can use the Ray cluster launcher to deploy Ray Serve to all major public clouds and Kubernetes, as well as to bare-metal, on-premises machines.

Ray Serve is not a full-fledged ML Platform. Compared to these other offerings, Ray Serve lacks the functionality for managing the lifecycle of your models, visualizing their performance, etc. Ray Serve primarily focuses on model serving and providing the primitives for you to build your own ML platform on top.

Seldon, KServe, Cortex

You can develop Ray Serve on your laptop, deploy it on a dev box, and scale it out to multiple machines or a Kubernetes cluster, all with minimal or no changes to code. It’s a lot easier to get started with when you don’t need to provision and manage a K8s cluster. When it’s time to deploy, you can use our Kubernetes Operator to transparently deploy your Ray Serve application to K8s.

BentoML, Comet.ml, MLflow

Many of these tools are focused on serving and scaling models independently. In contrast, Ray Serve is framework-agnostic and focuses on model composition. As such, Ray Serve works with any model packaging and registry format. Ray Serve also provides key features for building production-ready machine learning applications, including best-in-class autoscaling and natural integration with business logic.

We truly believe Serve is unique as it gives you end-to-end control over your ML application while delivering scalability and high performance. To achieve Serve’s feature offerings with other tools, you would need to glue together multiple frameworks like TensorFlow Serving and SageMaker, or even roll your own micro-batching component to improve throughput.

Learn More#

Check out Getting Started and Key Concepts, or head over to the Examples to get started building your Ray Serve applications.

Getting Started

Start with our quick start tutorials for deploying a single model locally and converting an existing model into a Ray Serve deployment.

Key Concepts

Understand the key concepts behind Ray Serve. Learn about Deployments, how to query them, and using DeploymentHandles to compose multiple models and business logic together.

Examples

Follow the tutorials to learn how to integrate Ray Serve with TensorFlow and Scikit-Learn.

API Reference

Get more in-depth information about the Ray Serve API.

For more, see the following blog posts about Ray Serve: