Visualizing a Deployment Graph with Gradio#

You can visualize the deployment graph you built with Gradio. This integration allows you to interactively run your deployment graph through the Gradio UI and see the intermediate outputs of each node in real time as they finish evaluation.

To access this feature, you need to install Gradio.

Note

Gradio requires Python 3.7 or later, so make sure your environment is running a compatible Python version before installing.

pip install gradio

Optionally, you can also install pydot and graphviz, which lets this tool render a complementary graphical illustration of the graph's nodes and edges.

# MacOS
pip install -U pydot && brew install graphviz

# Windows
pip install -U pydot && winget install graphviz

# Ubuntu
pip install -U pydot && sudo apt-get install -y graphviz

Finally, for the quickstart example, install the transformers package to pull models through Hugging Face's Pipelines.

pip install transformers

Quickstart Example#

Let’s build and visualize a deployment graph that

  1. Downloads an image

  2. Classifies the image

  3. Translates the results to German.

This will be the graphical structure of our deployment graph:

[Figure: deployment graph structure]

Open up a new file named demo.py. First, let’s take care of imports:

import requests
from transformers import pipeline
from io import BytesIO
from PIL import Image, ImageFile
from typing import Dict

from ray import serve
from ray.dag.input_node import InputNode
from ray.serve.drivers import DAGDriver

Defining Nodes#

The downloader function takes an image’s URL, downloads it, and returns the image in the form of an ImageFile.

@serve.deployment
def downloader(image_url: str) -> ImageFile.ImageFile:
    image_bytes = requests.get(image_url).content
    image = Image.open(BytesIO(image_bytes)).convert("RGB")
    return image

The ImageClassifier class, upon initialization, loads the google/vit-base-patch16-224 image classification model using the Transformers pipeline. Its classify method takes in an ImageFile, runs the model on it, and outputs the classification labels and scores.

@serve.deployment
class ImageClassifier:
    def __init__(self):
        self.model = pipeline(
            "image-classification", model="google/vit-base-patch16-224"
        )

    def classify(self, image: ImageFile.ImageFile) -> Dict[str, float]:
        results = self.model(image)
        return {pred["label"]: pred["score"] for pred in results}
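The dict comprehension in classify assumes the pipeline returns a list of dicts with "label" and "score" keys, which is the output format of the Transformers image-classification pipeline. As a rough standalone sketch ("cat.jpg" is a hypothetical local file; actual labels and scores depend on the image):

from transformers import pipeline
from PIL import Image

classifier = pipeline("image-classification", model="google/vit-base-patch16-224")
results = classifier(Image.open("cat.jpg"))  # "cat.jpg" is a hypothetical local file
# results is a list of dicts like [{"label": "...", "score": 0.93}, ...]
print({pred["label"]: pred["score"] for pred in results})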

The Translator class, upon initialization, loads the t5-small translation model that translates from English to German. Its translate method takes in a map from strings to floats, and translates each of its string keys to German.

@serve.deployment
class Translator:
    def __init__(self):
        self.model = pipeline("translation_en_to_de", model="t5-small")

    def translate(self, classes: Dict[str, float]) -> Dict[str, float]:
        results = {}
        for label, score in classes.items():
            translated_label = self.model(label)[0]["translation_text"]
            results[translated_label] = score

        return results
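The translate method relies on the pipeline returning a list of dicts with a "translation_text" key, which is the format the Transformers translation pipeline produces. As a quick standalone sanity check (the printed translation is illustrative):

from transformers import pipeline

translator = pipeline("translation_en_to_de", model="t5-small")
print(translator("tabby cat"))
# Output is a list like [{"translation_text": "..."}]; the exact text depends on the model.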

Building the Graph#

Finally, we can build our graph by defining dependencies between nodes.

with InputNode(input_type=str) as user_input:
    # Instantiate the two class-based deployments.
    classifier = ImageClassifier.bind()
    translator = Translator.bind()

    # Chain the nodes: download -> classify -> translate.
    downloaded_image = downloader.bind(user_input)
    classes = classifier.classify.bind(downloaded_image)
    translated_classes = translator.translate.bind(classes)

    # The DAGDriver routes requests into the graph and is required for visualization.
    serve_entrypoint = DAGDriver.bind(translated_classes)

Deploy and Execute#

Let’s deploy and run the deployment graph! Deploy the graph with serve run and turn on the visualization with the --gradio flag:

serve run demo:serve_entrypoint --gradio

If you go to http://localhost:7860, you can now access the Gradio visualization! Type in a link to an image, click “Run”, and you can see all of the intermediate outputs of your graph, including the final output!

[Figure: Gradio visualization of the deployment graph]

Setting Up the Visualization#

Now let’s see how to set up this visualization tool.

Requirement: Driver#

The DAGDriver is required for the visualization. If the DAGDriver is not already part of your deployment graph, you can include it with:

new_root_node = DAGDriver.bind(old_root_node)

Ensure Output Data is Properly Displayed#

Since the Gradio UI is built at deploy time, the Gradio component used to display each node's intermediate output is also determined statically from the deployed graph. It is important that the correct Gradio component is used for each graph node.

To ensure this, specify the return type annotation of each function or method in the deployment graph.

Note

If no return type annotation is specified for a node, then the Gradio component for that node will default to a Gradio Textbox.
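For instance, a node like the following sketch (summarize is a hypothetical deployment, not part of the quickstart) has no return type annotation, so its output would be rendered in a Textbox:

from ray import serve


@serve.deployment
def summarize(scores):
    # No return type annotation, so this node's output defaults to a Textbox.
    return max(scores, key=scores.get)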

The following table lists the supported data types and the Gradio component each is displayed on.

| Data Type | Gradio component |
| --- | --- |
| int, float | Numeric field |
| str | Textbox |
| bool | Checkbox |
| pd.DataFrame | DataFrame |
| list, dict, np.ndarray | JSON field |
| PIL.Image, torch.Tensor | Image |

For instance, the output of the following function node will be displayed through a Gradio Checkbox.

@serve.deployment
def is_valid(begin, end) -> bool:
    return begin <= end

[Figure: Gradio Checkbox example]

Providing Input#

Similarly, the Gradio component used for each graph input should also be correct. For instance, a deployment graph for image classification could either take an image URL from which it downloads the image, or take the image directly as input. In the first case, the Gradio UI should allow users to input the URL through a textbox, but in the second case, the Gradio UI should allow users to upload the image through an Image component.

The data type of each user input can be specified by passing in input_type to InputNode(). The following two sections will describe the two supported ways to provide input through input_type.

The following table lists the supported input data types and the Gradio component used to collect each one.

| Data Type | Gradio component |
| --- | --- |
| int, float | Numeric field |
| str | Textbox |
| bool | Checkbox |
| pd.DataFrame | DataFrame |
| PIL.Image, torch.Tensor | Image |

Single Input#

If there is a single input to the deployment graph, it can be provided directly through InputNode. The following is an example code snippet.

with InputNode(input_type=ImageFile) as user_input:
    f_node = f.bind(user_input)

Note

Notice there is a single input, which is stored in user_input (an instance of InputNode). The data type of this single input must be one of the supported input data types.

When initializing InputNode(), the data type can be specified by passing a type to the input_type parameter. Here, the type is specified to be ImageFile, so the Gradio visualization will take in user input through an Image component.

[Figure: single input example]
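To make the snippet above concrete, here is a minimal runnable sketch in which f is a hypothetical deployment (it is not defined in the original snippet) that reports the uploaded image's dimensions:

from PIL import ImageFile

from ray import serve
from ray.dag.input_node import InputNode
from ray.serve.drivers import DAGDriver


@serve.deployment
def f(image: ImageFile.ImageFile) -> str:
    # Hypothetical node: report the uploaded image's dimensions.
    width, height = image.size
    return f"{width}x{height}"


with InputNode(input_type=ImageFile) as user_input:
    f_node = f.bind(user_input)
    serve_entrypoint = DAGDriver.bind(f_node)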

Multiple Inputs#

If there are multiple inputs to the deployment graph, they can be provided by indexing into InputNode. The following is an example code snippet.

with InputNode(input_type={0: int, 1: str, "id": str}) as user_input:
    f_node = f.bind(user_input[0])
    g_node = g.bind(user_input[1], user_input["id"])

Note

Notice there are multiple inputs: user_input[0], user_input[1], and user_input["id"]. They are accessed by indexing into user_input. The data types for each of these inputs must be one of the supported input data types.

When initializing InputNode(), these data types can be specified by passing a dictionary that maps each key (an integer or a string) to its type to the parameter input_type. Here, the input types are specified to be int, str, and str, so the Gradio visualization will take in the three inputs through one Numeric Field and two Textboxes.
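As in the single-input case, f and g in the snippet above are placeholders. A minimal sketch of how they might be defined (hypothetical deployments, purely for illustration):

from ray import serve
from ray.dag.input_node import InputNode


@serve.deployment
def f(x: int) -> int:
    # Hypothetical node: double the numeric input.
    return 2 * x


@serve.deployment
def g(label: str, id_str: str) -> str:
    # Hypothetical node: join the two string inputs.
    return f"{label}:{id_str}"


with InputNode(input_type={0: int, 1: str, "id": str}) as user_input:
    f_node = f.bind(user_input[0])
    g_node = g.bind(user_input[1], user_input["id"])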

[Figure: multiple input example]