Monitoring Ray Serve#

This section helps you debug and monitor your Serve applications by:

  • viewing the Ray dashboard

  • using Ray logging and Loki

  • inspecting built-in Ray Serve metrics

  • exporting metrics to the Arize platform

Ray dashboard#

You can use the Ray dashboard to get a high-level overview of your Ray cluster and Ray Serve application’s states. This includes details such as:

  • the number of deployment replicas currently running

  • logs for your Serve controller, deployment replicas, and HTTP proxies

  • the Ray nodes (i.e. machines) running in your Ray cluster.

You can access the Ray dashboard at port 8265 at your cluster’s URI. For example, if you’re running Ray Serve locally, you can access the dashboard by going to http://localhost:8265 in your browser.
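
For instance, when running locally you can also grab the dashboard address programmatically. Here's a small sketch, assuming a recent Ray version where the context returned by ray.init() exposes dashboard_url:

import ray

# Start (or connect to) a local Ray instance and print the dashboard address.
context = ray.init()
print(context.dashboard_url)  # e.g. "127.0.0.1:8265"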

You can view important information about your application here. For example, you can inspect your deployment replicas by navigating to the Ray dashboard’s “Actors” tab while your Serve application is running:

In this example, there’s a single-node cluster running a deployment named Translator. This example Serve application uses four Ray actors:

  • 1 Serve controller

  • 1 HTTP proxy

  • 2 Translator deployment replicas

This page includes additional useful information like each actor’s process ID (PID) and a link to each actor’s logs, which includes their logging and print statements. You can also see whether any particular actor is alive or dead to help you debug potential cluster failures. For example, the image indicates that the Serve controller is currently dead and likely undergoing recovery.


To learn more about the Serve controller actor, the HTTP proxy actor(s), the deployment replicas, and how they all work together, check out the Serve Architecture documentation.

For a detailed overview of the Ray dashboard, see the dashboard documentation.

Ray logging#

To understand system-level behavior and to surface application-level details during runtime, you can leverage Ray logging.

Ray Serve uses Python’s standard logging module with a logger named "ray.serve". By default, logs are emitted from actors both to stderr and on disk on each node at /tmp/ray/session_latest/logs/serve/. This includes both system-level logs from the Serve controller and HTTP proxy as well as access logs and custom user logs produced from within deployment replicas.
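
For instance, you can quickly list the Serve log files present on a node with a few lines of Python (a sketch using the default log location mentioned above):

from pathlib import Path

# Default location of Serve logs on each node.
serve_log_dir = Path("/tmp/ray/session_latest/logs/serve")

# List the controller, proxy, and replica log files on this node.
for log_file in sorted(serve_log_dir.glob("*")):
    print(log_file)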

In development, logs are streamed to the driver Ray program (the Python script that calls serve.run() or the serve run CLI command), so it’s convenient to keep the driver running while debugging.

For example, let’s run a basic Serve application and view the logs that it emits.

First, let’s create a simple deployment that logs a custom log message when it’s queried:

# File name: monitoring.py

from ray import serve
import logging
from starlette.requests import Request

logger = logging.getLogger("ray.serve")

@serve.deployment
class SayHello:
    async def __call__(self, request: Request) -> str:
        # Log a custom message each time the deployment is queried.
        logger.info("Hello world!")
        return "hi"

say_hello = SayHello.bind()

Run this deployment using the serve run CLI command:

$ serve run monitoring:say_hello

2022-08-10 22:58:55,963	INFO -- Deploying from import path: "monitoring:say_hello".
2022-08-10 22:58:57,886	INFO -- Started a local Ray instance. View the dashboard at
(ServeController pid=63881) INFO 2022-08-10 22:58:59,365 controller 63881 - Starting HTTP proxy with name 'SERVE_CONTROLLER_ACTOR:SERVE_PROXY_ACTOR-1252fc7fbbb16ca6a80c45cbb5fe4ef182030b95aa60b62604151168' on node '1252fc7fbbb16ca6a80c45cbb5fe4ef182030b95aa60b62604151168' listening on ''
The new client HTTP config differs from the existing one in the following fields: ['location']. The new HTTP config is ignored.
(ServeController pid=63881) INFO 2022-08-10 22:58:59,999 controller 63881 - Adding 1 replicas to deployment 'SayHello'.
(HTTPProxyActor pid=63883) INFO:     Started server process [63883]
2022-08-10 22:59:00,979	SUCC -- Deployed successfully.

serve run prints a few log messages immediately. Note that a few of these messages start with identifiers such as

(ServeController pid=63881)

These messages are logs from Ray Serve actors. They describe which actor (Serve controller, HTTP proxy, or deployment replica) created the log and what its process ID is (which is useful when distinguishing between different deployment replicas or HTTP proxies). The rest of these log messages are the actual log statements generated by the actor.

While serve run is running, we can query the deployment in a separate terminal window:

curl -X GET http://localhost:8000/

This causes the HTTP proxy and deployment replica to print log statements to the terminal running serve run:

(HTTPProxyActor pid=63883) INFO 2022-08-10 23:10:08,005 http_proxy - GET / 200 2.4ms
(ServeReplica:SayHello pid=63885) INFO 2022-08-10 23:10:08,004 SayHello SayHello#JYbzqP - Hello world!
(ServeReplica:SayHello pid=63885) INFO 2022-08-10 23:10:08,004 SayHello SayHello#JYbzqP - HANDLE __call__ OK 0.2ms

A copy of these logs is stored at /tmp/ray/session_latest/logs/serve/. You can parse these stored logs with a logging stack such as ELK or Loki to search them by deployment or replica.

Serve supports log rotation for these logs through the environment variables RAY_ROTATION_MAX_BYTES and RAY_ROTATION_BACKUP_COUNT.
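
For example, one way to set these variables for a local run is to set them before Ray starts. The sketch below does it from Python before calling ray.init(), assuming the locally started Ray processes inherit the driver's environment (an assumption of this sketch):

import os

# Rotate each Serve log file at roughly 100 MB and keep one rotated backup.
# These must be set before the Ray instance that writes the logs is started.
os.environ["RAY_ROTATION_MAX_BYTES"] = str(100 * 1024 * 1024)
os.environ["RAY_ROTATION_BACKUP_COUNT"] = "1"

import ray

ray.init()  # Ray processes started from this driver inherit the variables above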

To silence the replica-level logs or otherwise configure logging, configure the "ray.serve" logger inside the deployment constructor:

import logging

from ray import serve

logger = logging.getLogger("ray.serve")

@serve.deployment
class Silenced:
    def __init__(self):
        # Raise the log level for this replica so only errors are emitted.
        logger.setLevel(logging.ERROR)

This controls which logs are written to STDOUT or files on disk. In addition to the standard Python logger, Serve supports custom logging, which lets you control what messages are written to STDOUT/STDERR, files on disk, or both.
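
For example, here’s a minimal sketch of custom logging that attaches an extra file handler to the "ray.serve" logger inside a deployment’s constructor (the log file path and deployment name are just illustrations):

import logging

from ray import serve

logger = logging.getLogger("ray.serve")

@serve.deployment
class CustomLogging:
    def __init__(self):
        # Write this replica's logs to an additional file of our choosing.
        # The path is only an example; pick one that exists on your nodes.
        handler = logging.FileHandler("/tmp/my_deployment.log")
        handler.setFormatter(
            logging.Formatter("%(asctime)s %(levelname)s %(message)s")
        )
        logger.addHandler(handler)

    def __call__(self, request) -> str:
        logger.info("Handling a request.")
        return "done"

app = CustomLogging.bind()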

For a detailed overview of logging in Ray, see Ray Logging.

Filtering logs with Loki#

You can explore and filter your logs using Loki. Setup and configuration are straightforward on Kubernetes, but as a tutorial, let’s set up Loki manually.

For this walkthrough, you need both Loki and Promtail, which are both supported by Grafana Labs. Follow the installation instructions at Grafana’s website to get executables for Loki and Promtail. For convenience, save the Loki and Promtail executables in the same directory, and then navigate to this directory in your terminal.

Now let’s get your logs into Loki using Promtail.

Save the following file as promtail-local-config.yaml:

server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://localhost:3100/loki/api/v1/push

scrape_configs:
  - job_name: ray
    static_configs:
      - labels:
          job: ray
          __path__: /tmp/ray/session_latest/logs/serve/*.*

The relevant part for Ray Serve is the static_configs field, where we have indicated the location of our log files with __path__. The expression *.* will match all files, but it won’t match directories since they cause an error with Promtail.

We’ll run Loki locally. Grab the default config file for Loki with the following command in your terminal (it downloads loki-local-config.yaml from the Loki GitHub repository):

wget https://raw.githubusercontent.com/grafana/loki/main/cmd/loki/loki-local-config.yaml

Now start Loki:

./loki-darwin-amd64 -config.file=loki-local-config.yaml

Here you may need to replace ./loki-darwin-amd64 with the path to your Loki executable file, which may have a different name depending on your operating system.

Start Promtail and pass in the path to the config file we saved earlier:

./promtail-darwin-amd64 -config.file=promtail-local-config.yaml

Once again, you may need to replace ./promtail-darwin-amd64 with your Promtail executable.

Run the following Python script to deploy a basic Serve deployment with a Serve deployment logger and to make some requests:

from ray import serve

import logging
import requests

logger = logging.getLogger("ray.serve")

@serve.deployment
class Counter:
    def __init__(self):
        self.count = 0

    def __call__(self, request):
        self.count += 1
        # Log the running count so it shows up in the Serve logs (and Loki).
        logger.info(f"count: {self.count}")
        return {"count": self.count}

counter = Counter.bind()
serve.run(counter)

for i in range(10):
    requests.get("http://localhost:8000/")

Now install and run Grafana and navigate to http://localhost:3000, where you can log in with default credentials:

  • Username: admin

  • Password: admin

On the welcome page, click “Add your first data source” and click “Loki” to add Loki as a data source.

Now click “Explore” in the left-side panel. You are ready to run some queries!

To filter all these Ray logs for the ones relevant to our deployment, use the following LogQL query:

{job="ray"} |= "Counter"

You should see something similar to the following:

You can use Loki to filter your Ray Serve logs and gather insights more quickly.

Built-in Ray Serve metrics#

You can leverage built-in Ray Serve metrics to get a closer look at your application’s performance.

Ray Serve exposes important system metrics like the number of successful and failed requests through the Ray metrics monitoring infrastructure. By default, the metrics are exposed in Prometheus format on each node.


Different metrics are collected when deployments are called through a Python ServeHandle versus through HTTP. Each metric in the list below is marked to show which call path it applies to.

The following metrics are exposed by Ray Serve:




serve_deployment_request_counter [**]

  • deployment

  • replica

The number of queries that have been processed in this replica.

serve_deployment_error_counter [**]

  • deployment

  • replica

The number of exceptions that have occurred in the deployment.

serve_deployment_replica_starts [**]

  • deployment

  • replica

The number of times this replica has been restarted due to failure.

serve_deployment_replica_healthy

  • deployment

  • replica

Whether this deployment replica is healthy. 1 means healthy, 0 unhealthy.

serve_deployment_processing_latency_ms [**]

  • deployment

  • replica

The latency for queries to be processed.

serve_replica_processing_queries [**]

  • deployment

  • replica

The current number of queries being processed.

serve_num_http_requests [*]

  • route

  • method

The number of HTTP requests processed.

serve_num_http_error_requests [*]

  • route

  • error_code

  • method

The number of non-200 HTTP responses.

serve_num_router_requests [*]

  • deployment

The number of requests processed by the router.

serve_handle_request_counter [**]

  • handle

  • deployment

The number of requests processed by this ServeHandle.

serve_deployment_queued_queries [*]

  • deployment

  • endpoint

The number of queries for this deployment waiting to be assigned to a replica.

serve_num_deployment_http_error_requests [*]

  • deployment

  • error_code

  • method

The number of non-200 HTTP responses returned by each deployment.

[*] - only available when using HTTP calls
[**] - only available when using Python ServeHandle calls
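
To make the distinction concrete, here’s a minimal sketch (the Echo deployment is just an illustration) that exercises both call paths against the same deployment:

import ray
import requests

from ray import serve

@serve.deployment
class Echo:
    def __call__(self, *args) -> str:
        return "hello"

# serve.run deploys the application and returns a handle to it.
handle = serve.run(Echo.bind())

# HTTP call path: counted by the [*] metrics, e.g. serve_num_http_requests.
requests.get("http://localhost:8000/")

# ServeHandle call path: counted by the [**] metrics,
# e.g. serve_handle_request_counter.
ray.get(handle.remote())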

To see this in action, first run the following command to start Ray and set up the metrics export port:

ray start --head --metrics-export-port=8080

Then run the following script:

from ray import serve

import time
import requests

@serve.deployment
def sleeper(*args):
    # Sleep for one second so each request takes about 1s to process.
    time.sleep(1)

s = sleeper.bind()
serve.run(s)

while True:
    requests.get("http://localhost:8000/")

The requests will loop until canceled with ctrl-c.

While this script is running, go to localhost:8080 in your web browser. In the output there, you can search for serve_ to locate the metrics above. The metrics are updated once every ten seconds, so you need to refresh the page to see new values.

For example, after running the script for some time and refreshing localhost:8080 you should find metrics similar to the following:

ray_serve_deployment_processing_latency_ms_count{..., replica="sleeper#jtzqhX"} 48.0
ray_serve_deployment_processing_latency_ms_sum{..., replica="sleeper#jtzqhX"} 48160.6719493866

which indicates that the average processing latency is just over one second, as expected.
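
For a quick programmatic check, you can also fetch the same endpoint and filter for Serve metrics (a small sketch assuming the metrics port of 8080 used above):

import requests

# Fetch the Prometheus-format metrics exposed by the local Ray node.
response = requests.get("http://localhost:8080")

# Print only the Serve-related metric lines.
for line in response.text.splitlines():
    if "serve_" in line:
        print(line)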

You can even define a custom metric for your deployment and tag it with deployment or replica metadata. Here’s an example:

from ray import serve
from ray.util import metrics

import time
import requests

@serve.deployment
class MyDeployment:
    def __init__(self):
        self.num_requests = 0
        # Define a custom counter and tag it with this deployment's name.
        self.my_counter = metrics.Counter(
            "my_counter",
            description=("The number of odd-numbered requests to this deployment."),
            tag_keys=("deployment",),
        )
        self.my_counter.set_default_tags({"deployment": "MyDeployment"})

    def __call__(self, *args):
        self.num_requests += 1
        if self.num_requests % 2 == 1:
            # Increment the custom counter on every odd-numbered request.
            self.my_counter.inc()

my_deployment = MyDeployment.bind()
serve.run(my_deployment)

while True:
    requests.get("http://localhost:8000/")
    time.sleep(1)

The emitted logs include:

# HELP ray_my_counter The number of odd-numbered requests to this deployment.
# TYPE ray_my_counter gauge
ray_my_counter{..., deployment="MyDeployment"} 5.0

See the Ray Metrics documentation for more details, including instructions for scraping these metrics using Prometheus.

Exporting metrics into Arize#

Besides exposing metrics to Prometheus, Ray Serve can also export metrics to other observability platforms.

Arize is a machine learning observability platform that can help you monitor real-time model performance, root-cause model failures or performance degradation using explainability and slice analysis, and surface drift, data quality, and data consistency issues.

To integrate with Arize, you can directly add Arize client code into your Serve deployment code. (Example code)
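
As a rough structural sketch only (the report_to_arize helper below is a hypothetical stand-in for the real Arize SDK calls shown in the linked example code), the pattern is to report each prediction from inside the deployment that serves it:

from ray import serve
from starlette.requests import Request

def report_to_arize(features: dict, prediction: float) -> None:
    # Hypothetical placeholder: a real integration would call the Arize
    # client here instead of printing.
    print(f"reporting to Arize: {features} -> {prediction}")

@serve.deployment
class Model:
    async def __call__(self, request: Request) -> dict:
        features = await request.json()
        prediction = sum(features.values())  # toy "model" for illustration
        # Log the prediction to Arize alongside returning it to the caller.
        report_to_arize(features, prediction)
        return {"prediction": prediction}

app = Model.bind()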