Deploy a medium-sized LLM#


This tutorial shows you how to deploy and serve a medium-sized language model in production with Ray Serve LLM. A medium-sized LLM typically runs on a single node with 4-8 GPUs and offers a balance between performance and efficiency. This tutorial deploys Llama-3.1-70B, a medium-sized LLM with 70B parameters. These models provide stronger accuracy and reasoning than small models while remaining more affordable and resource-friendly than very large ones, which makes them a solid choice for production workloads that need good quality at lower cost.

For smaller models, see Deploy a small-sized LLM. For larger models, see Deploy a large-sized LLM.


Configure Ray Serve LLM#

You can deploy a medium-sized LLM on a single node with multiple GPUs. To leverage all available GPUs, set tensor_parallel_size to the number of GPUs on the node, which distributes the model’s weights evenly across them.

Ray Serve LLM provides multiple Python APIs for defining your application. Use build_openai_app to build a full application from your LLMConfig object.

Set your Hugging Face token in the config file to access gated models like Llama-3.1.

# serve_llama_3_1_70b.py
from ray.serve.llm import LLMConfig, build_openai_app
import os

llm_config = LLMConfig(
    model_loading_config=dict(
        model_id="my-llama-3.1-70b",
        # Or unsloth/Meta-Llama-3.1-70B-Instruct for an ungated model
        model_source="meta-llama/Llama-3.1-70B-Instruct",
    ),
    accelerator_type="L40S", # Or "A100-40G"
    deployment_config=dict(
        autoscaling_config=dict(
            min_replicas=1,
            max_replicas=4,
        )
    ),
    ### If your model is not gated, you can skip `HF_TOKEN`
    # Share your Hugging Face token with the vLLM engine so it can access the gated Llama 3.1 model.
    # Type `export HF_TOKEN=<YOUR-HUGGINGFACE-TOKEN>` in a terminal
    runtime_env=dict(env_vars={"HF_TOKEN": os.environ.get("HF_TOKEN")}),
    engine_kwargs=dict(
        max_model_len=32768,
        # Split weights among 8 GPUs in the node
        tensor_parallel_size=8,
    ),
)

app = build_openai_app({"llm_configs": [llm_config]})

Note: Before moving to a production setup, migrate to using a Serve config file to make your deployment version-controlled, reproducible, and easier to maintain for CI/CD pipelines. See Serving LLMs - Quickstart Examples: Production Guide for an example.
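
As a sketch of that workflow, you can generate a Serve config file from the module in this tutorial with the Serve CLI and then deploy it to a running cluster. The file name below is a placeholder; adjust it for your setup.

# Generate a Serve config file from the app defined in serve_llama_3_1_70b.py
serve build serve_llama_3_1_70b:app -o serve_config.yaml

# Deploy the generated config to the running Ray cluster
serve deploy serve_config.yaml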


Deploy locally#

Prerequisites

  • Access to GPU compute.

  • (Optional) A Hugging Face token if using gated models like Meta’s Llama. Set it with export HF_TOKEN=<YOUR-HUGGINGFACE-TOKEN>.

Note: You can usually request access on the model’s Hugging Face page; approval times depend on the organization. For example, approval for Meta’s Llama models can take anywhere from a few hours to several weeks.

Dependencies:

pip install "ray[serve,llm]"

Launch#

Follow the instructions at Configure Ray Serve LLM to define your app in a Python module serve_llama_3_1_70b.py.

In a terminal, run:

export HF_TOKEN=<YOUR-HUGGINGFACE-TOKEN>
serve run serve_llama_3_1_70b:app --non-blocking

Deployment typically takes a few minutes as the cluster is provisioned, the vLLM server starts, and the model is downloaded.
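
While the application starts, you can check its state from the same terminal with the Serve CLI (available from the ray[serve,llm] install):

# Confirm the application reports RUNNING before sending traffic
serve status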


Send requests#

Your endpoint is available locally at http://localhost:8000, and you can use a placeholder authentication token for the OpenAI client, for example "FAKE_KEY".

Example curl:

curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Authorization: Bearer FAKE_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "model": "my-llama-3.1-70b", "messages": [{"role": "user", "content": "What is 2 + 2?"}] }'

Example Python:

# client.py
from urllib.parse import urljoin
from openai import OpenAI

API_KEY = "FAKE_KEY"
BASE_URL = "http://localhost:8000"

client = OpenAI(base_url=urljoin(BASE_URL, "v1"), api_key=API_KEY)

response = client.chat.completions.create(
    model="my-llama-3.1-70b",
    messages=[{"role": "user", "content": "Tell me a joke"}],
    stream=True
)

for chunk in response:
    content = chunk.choices[0].delta.content
    if content:
        print(content, end="", flush=True)

Shutdown#

Shut down your LLM service:

serve shutdown -y

Deploy to production with Anyscale services#

For production deployment, use Anyscale services to deploy the Ray Serve app to a dedicated cluster without modifying the code. Anyscale ensures scalability, fault tolerance, and load balancing, keeping the service resilient against node failures, high traffic, and rolling updates.


Launch the service#

Anyscale provides out-of-the-box images (anyscale/ray-llm), which come pre-loaded with Ray Serve LLM, vLLM, and all required GPU/runtime dependencies. This makes it easy to get started without building a custom image.

Create your Anyscale service configuration in a new service.yaml file:

# service.yaml
name: deploy-llama-3-70b
image_uri: anyscale/ray-llm:2.49.0-py311-cu128 # Anyscale Ray Serve LLM image. Use `containerfile: ./Dockerfile` to use a custom Dockerfile.
compute_config:
  auto_select_worker_config: true 
working_dir: .
cloud: # Optional: name of the Anyscale cloud to deploy to. Omit to use your default cloud.
applications:
  # Point to your app in your Python module
  - import_path: serve_llama_3_1_70b:app

Deploy your service. Make sure you forward your Hugging Face token to the command.

anyscale service deploy -f service.yaml --env HF_TOKEN=<YOUR-HUGGINGFACE-TOKEN>

Custom Dockerfile
You can customize the container by building your own Dockerfile. In your Anyscale Service config, reference the Dockerfile with containerfile (instead of image_uri):

# service.yaml
# Replace:
# image_uri: anyscale/ray-llm:2.49.0-py311-cu128

# with:
containerfile: ./Dockerfile
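
A minimal sketch of such a Dockerfile, assuming you start from the same Anyscale Ray Serve LLM base image and only layer on an extra Python dependency (the package here is a placeholder):

# Dockerfile
FROM anyscale/ray-llm:2.49.0-py311-cu128

# Placeholder: add any extra dependencies your app needs
RUN pip install --no-cache-dir httpx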

See the Anyscale base images for details on what each image includes.


Send requests#

The anyscale service deploy command output shows both the endpoint and authentication token:

(anyscale +3.9s) curl -H "Authorization: Bearer <YOUR-TOKEN>" <YOUR-ENDPOINT>

You can also retrieve both from the service page in the Anyscale console. Click the Query button at the top. See Send requests for example requests, but make sure to use the correct endpoint and authentication token.
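
For example, the Python client from the local test works once you substitute the service values. The endpoint and token below are placeholders that you copy from the deploy output or the service page:

# client_anyscale.py
from urllib.parse import urljoin
from openai import OpenAI

API_KEY = "<YOUR-TOKEN>"      # Service token from the `anyscale service deploy` output
BASE_URL = "<YOUR-ENDPOINT>"  # Service endpoint from the same output

client = OpenAI(base_url=urljoin(BASE_URL, "v1"), api_key=API_KEY)

response = client.chat.completions.create(
    model="my-llama-3.1-70b",
    messages=[{"role": "user", "content": "What is 2 + 2?"}],
)
print(response.choices[0].message.content)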


Access the Serve LLM dashboard#

See Monitor your deployment for instructions on enabling LLM-specific logging. To open the Ray Serve LLM dashboard from an Anyscale service:

  1. In the Anyscale console, go to your Service or Workspace.

  2. Navigate to the Metrics tab.

  3. Click View in Grafana and click Serve LLM Dashboard.


Shutdown#

Shut down your Anyscale service:

anyscale service terminate -n deploy-llama-3-70b

Monitor your deployment#

Ray Serve LLM provides comprehensive monitoring through the Serve LLM Dashboard. This dashboard visualizes key metrics including:

  • Time to First Token (TTFT): Latency before the first token is generated.

  • Time Per Output Token (TPOT): Average latency per generated token.

  • Token throughput: Total tokens generated per second.

  • GPU cache utilization: Percentage of KV cache memory in use.

  • Request latency: End-to-end request duration.

To enable engine-level metrics, set log_engine_metrics: true in your LLM configuration. This is enabled by default starting with Ray 2.51.0.

The following example shows how to enable monitoring:

from ray.serve.llm import LLMConfig

llm_config = LLMConfig(
    # ... other config ...
    log_engine_metrics=True,  # Enable detailed engine metrics
)

Access the dashboard#

To view metrics in an Anyscale Service or Workspace:

  1. Navigate to your Service or Workspace page.

  2. Open the Metrics tab.

  3. Expand View in Grafana and select Serve LLM Dashboard.

For a detailed explanation of each metric and how to interpret them for your workload, see Understand LLM latency and throughput metrics.

For comprehensive monitoring strategies and best practices, see the Observability and monitoring guide.


Improve concurrency#

Ray Serve LLM uses vLLM as its backend engine, which logs the maximum concurrency it can support based on your configuration.

Example log for 8xL40S:

INFO 08-19 20:57:37 [kv_cache_utils.py:837] Maximum concurrency for 32,768 tokens per request: 17.79x

The following are a few ways to improve concurrency depending on your model and hardware:

Reduce max_model_len
Lowering max_model_len reduces the memory needed for KV cache.

Example: Running Llama-3.1-70B on 8xL40S (see the sketch after this list):

  • max_model_len = 32,768 → concurrency ≈ 18

  • max_model_len = 16,384 → concurrency ≈ 36
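
A minimal sketch of the 16,384-token variant, assuming the same model and hardware as the tutorial config:

# serve_llama_3_1_70b_short_ctx.py -- a sketch, not a drop-in replacement
from ray.serve.llm import LLMConfig, build_openai_app

llm_config = LLMConfig(
    model_loading_config=dict(
        model_id="my-llama-3.1-70b",
        model_source="meta-llama/Llama-3.1-70B-Instruct",
    ),
    accelerator_type="L40S",
    engine_kwargs=dict(
        # Halving the context window from 32768 frees KV-cache memory,
        # roughly doubling concurrency on 8xL40S.
        max_model_len=16384,
        tensor_parallel_size=8,
    ),
)

app = build_openai_app({"llm_configs": [llm_config]})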

Use quantized models
Quantizing your model (for example, to FP8) reduces the model’s memory footprint, freeing up memory for more KV cache and enabling more concurrent requests.
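
One way to try this, shown here only as a sketch because the supported options depend on your vLLM version and GPU, is to pass a quantization setting through engine_kwargs or to point model_source at an already-quantized checkpoint:

# Sketch: request FP8 quantization from vLLM. Alternatively, set model_source
# to a pre-quantized FP8 checkpoint and drop the `quantization` kwarg.
from ray.serve.llm import LLMConfig

llm_config = LLMConfig(
    model_loading_config=dict(
        model_id="my-llama-3.1-70b",
        model_source="meta-llama/Llama-3.1-70B-Instruct",
    ),
    accelerator_type="L40S",
    engine_kwargs=dict(
        max_model_len=32768,
        tensor_parallel_size=8,
        quantization="fp8",  # Assumption: your vLLM version and GPU support FP8
    ),
)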

Use pipeline parallelism
If a single node isn’t enough to handle your workload, consider distributing the model’s layers across multiple nodes with pipeline_parallel_size > 1.
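
A minimal sketch, assuming two nodes with 8 GPUs each (16 GPUs total); Ray handles placement across the nodes:

# Sketch: split the model's layers across 2 nodes while keeping
# tensor parallelism across the 8 GPUs inside each node.
from ray.serve.llm import LLMConfig

llm_config = LLMConfig(
    model_loading_config=dict(
        model_id="my-llama-3.1-70b",
        model_source="meta-llama/Llama-3.1-70B-Instruct",
    ),
    accelerator_type="L40S",
    engine_kwargs=dict(
        max_model_len=32768,
        tensor_parallel_size=8,   # Shard weights within each node
        pipeline_parallel_size=2, # Distribute layers across 2 nodes
    ),
)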

Upgrade to GPUs with more memory
GPUs with more memory, such as A100-80G or H100, provide significantly more room for KV cache and allow for higher concurrency out of the box.

Scale with more replicas
In addition to tuning per-replica concurrency, you can scale horizontally by increasing the number of replicas in your config.
Raising the replica count increases the total number of concurrent requests your service can handle, especially under sustained or bursty traffic.

deployment_config:
  autoscaling_config:
    min_replicas: 1
    max_replicas: 4

For more details on tuning strategies, hardware guidance, and serving configurations, see Choose a GPU for LLM serving and Tune parameters for LLMs on Anyscale services.


Troubleshooting#

If you encounter issues when deploying your LLM, such as out-of-memory errors, authentication problems, or slow performance, consult the Troubleshooting Guide for solutions to common problems.


Summary#

In this tutorial, you deployed a medium-sized LLM with Ray Serve LLM, from development to production. You learned how to configure and deploy your service, send requests, monitor performance metrics, and optimize concurrency.

To learn more, take the LLM Serving Foundations course or explore LLM batch inference for offline workloads. For smaller models, see Deploy a small-sized LLM; for larger models, see Deploy a large-sized LLM.