Tabular data training and serving with Keras and Ray AIR

This notebook is adapted from a Keras tutorial. It uses the Chicago Taxi dataset and a DNN Keras model to predict whether a trip will generate a big tip.

In this example, we showcase how to achieve the same tasks as the Keras tutorial using Ray AIR, covering every step from data ingestion to pushing a model to serving:

  1. Read a CSV into a Ray Dataset.

  2. Process the dataset by chaining Ray AIR preprocessors.

  3. Train the model using the TensorflowTrainer from AIR.

  4. Serve the model using Ray Serve and the above preprocessors.

Uncomment and run the following lines to install all the necessary dependencies:

# ! pip install "tensorflow>=2.8.0" "ray[tune, data, serve]>=1.12.1"
# ! pip install fastapi

Set up Ray

We will use ray.init() to initialize a local cluster. By default, this cluster will be composed of only the machine you are running this notebook on. If you wish to attach to an existing Ray cluster, you can do so through ray.init(address="auto").

from pprint import pprint
import ray

ray.shutdown()
ray.init()
2022-07-20 18:45:28,814	INFO services.py:1483 -- View the Ray dashboard at http://127.0.0.1:8266

Ray

Python version: 3.7.10
Ray version: 2.0.0
Dashboard: http://127.0.0.1:8266

We can check the resources our cluster is composed of. If you are running this notebook on your local machine or Google Colab, you should see the number of CPU cores and GPUs available on that machine.

pprint(ray.cluster_resources())
{'CPU': 16.0,
 'memory': 30436675994.0,
 'node:127.0.0.1': 1.0,
 'object_store_memory': 2147483648.0}

Getting the data

Let’s start by defining a helper function to fetch the data to work with. Some columns are dropped for simplicity.

import pandas as pd

INPUT = "input"
LABEL = "is_big_tip"

def get_data() -> pd.DataFrame:
    """Fetch the taxi fare data to work on."""
    _data = pd.read_csv(
        "https://raw.githubusercontent.com/tensorflow/tfx/master/"
        "tfx/examples/chicago_taxi_pipeline/data/simple/data.csv"
    )
    _data[LABEL] = _data["tips"] / _data["fare"] > 0.2
    # We drop some columns here for the sake of simplicity.
    return _data.drop(
        [
            "tips",
            "fare",
            "dropoff_latitude",
            "dropoff_longitude",
            "pickup_latitude",
            "pickup_longitude",
            "pickup_census_tract",
        ],
        axis=1,
    )
data = get_data()

Now let’s take a look at the data. Notice that some values are missing. This is exactly where preprocessing comes into the picture. We will come back to this in the preprocessing section below.

data.head(5)
|   | pickup_community_area | trip_start_month | trip_start_hour | trip_start_day | trip_start_timestamp | trip_miles | dropoff_census_tract | payment_type | company | trip_seconds | dropoff_community_area | is_big_tip |
| 0 | NaN | 5 | 19 | 6 | 1400269500 | 0.0 | NaN | Credit Card | Chicago Elite Cab Corp. (Chicago Carriag | 0.0 | NaN | False |
| 1 | NaN | 3 | 19 | 5 | 1362683700 | 0.0 | NaN | Unknown | Chicago Elite Cab Corp. | 300.0 | NaN | False |
| 2 | 60.0 | 10 | 2 | 3 | 1380593700 | 12.6 | NaN | Cash | Taxi Affiliation Services | 1380.0 | NaN | False |
| 3 | 10.0 | 10 | 1 | 2 | 1382319000 | 0.0 | NaN | Cash | Taxi Affiliation Services | 180.0 | NaN | False |
| 4 | 14.0 | 5 | 7 | 5 | 1369897200 | 0.0 | NaN | Cash | Dispatch Taxi Affiliation | 1080.0 | NaN | False |
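A quick way to quantify this missingness is `DataFrame.isna().sum()`. The sketch below runs it on a tiny synthetic frame whose values are purely illustrative; on the real data you would simply call `data.isna().sum()`.

```python
import numpy as np
import pandas as pd

# Hypothetical miniature frame mimicking the taxi data's missing values.
sample = pd.DataFrame({
    "pickup_community_area": [np.nan, np.nan, 60.0, 10.0],
    "payment_type": ["Credit Card", "Unknown", "Cash", "Cash"],
})

# Count missing values per column.
missing = sample.isna().sum()
print(missing["pickup_community_area"])  # 2
```

Columns with nonzero counts are the ones the imputers below need to handle.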

We continue by splitting the data into training and test sets. For the test data, we separate out the features to run serving on, as well as the labels to compare the serving results against.

import numpy as np
from sklearn.model_selection import train_test_split
from typing import Tuple


def split_data(data: pd.DataFrame) -> Tuple[ray.data.Dataset, pd.DataFrame, np.ndarray]:
    """Split the data in a stratified way.

    Returns:
        A tuple containing train dataset, test data and test label.
    """
    # There is a native offering in Ray Dataset for split as well.
    # However, supporting stratification is a TODO there. So use
    # scikit-learn equivalent here.
    train_data, test_data = train_test_split(
        data, stratify=data[[LABEL]], random_state=1113
    )
    _train_ds = ray.data.from_pandas(train_data)
    _test_label = test_data[LABEL].values
    _test_df = test_data.drop([LABEL], axis=1)
    return _train_ds, _test_df, _test_label

train_ds, test_df, test_label = split_data(data)
print(f"There are {train_ds.count()} samples for training and {test_df.shape[0]} samples for testing.")
There are 11251 samples for training and 3751 samples for testing.
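The effect of `stratify` can be seen on a toy frame: both splits keep approximately the original label ratio. This is an illustrative sketch, not part of the tutorial's pipeline; the frame and its 25% positive rate are made up.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy frame with a 25% positive-label rate.
df = pd.DataFrame({"x": range(100), "label": [True] * 25 + [False] * 75})

# Stratify on the label column, as split_data() does above.
train, test = train_test_split(df, stratify=df[["label"]], random_state=1113)

# Both splits retain roughly the 25% positive rate.
print(train["label"].mean(), test["label"].mean())
```

Without `stratify`, a small or imbalanced dataset can end up with noticeably different label ratios across the two splits.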

Preprocessing

Let’s focus on preprocessing first. Usually, input data needs to go through some preprocessing before being fed into the model. It is a good idea to package preprocessing logic into a modularized component so that the same logic can be applied to training data as well as to data for online serving or offline batch prediction.

In AIR, this component is a Preprocessor. It is constructed in a way that allows easy composition.

Now let’s construct a chained preprocessor composed of simple preprocessors, including

  1. Imputer for filling missing features;

  2. OneHotEncoder for encoding categorical features;

  3. BatchMapper, where an arbitrary user-defined function can be applied to batches of records; and so on. Take a look at Preprocessor. The output of the preprocessing step goes into the model for training.

from ray.data.preprocessors import (
    BatchMapper,
    Chain,
    OneHotEncoder,
    SimpleImputer,
)

def get_preprocessor():
    """Construct a chain of preprocessors."""
    imputer1 = SimpleImputer(
        ["dropoff_census_tract"], strategy="most_frequent"
    )
    imputer2 = SimpleImputer(
        ["pickup_community_area", "dropoff_community_area"],
        strategy="most_frequent",
    )
    imputer3 = SimpleImputer(["payment_type"], strategy="most_frequent")
    imputer4 = SimpleImputer(
        ["company"], strategy="most_frequent")
    imputer5 = SimpleImputer(
        ["trip_start_timestamp", "trip_miles", "trip_seconds"], strategy="mean"
    )

    ohe = OneHotEncoder(
        columns=[
            "trip_start_hour",
            "trip_start_day",
            "trip_start_month",
            "dropoff_census_tract",
            "pickup_community_area",
            "dropoff_community_area",
            "payment_type",
            "company",
        ],
        max_categories={
            "dropoff_census_tract": 25,
            "pickup_community_area": 20,
            "dropoff_community_area": 20,
            "payment_type": 2,
            "company": 7,
        },
    )

    def batch_mapper_fn(df):
        df["trip_start_year"] = pd.to_datetime(df["trip_start_timestamp"], unit="s").dt.year
        df = df.drop(["trip_start_timestamp"], axis=1)
        return df

    def concat_for_tensor(dataframe):
        from ray.data.extensions import TensorArray
        result = {}
        feature_cols = [col for col in dataframe.columns if col != LABEL]
        result[INPUT] = TensorArray(dataframe[feature_cols].to_numpy(dtype=np.float32))
        if LABEL in dataframe.columns:
            result[LABEL] = dataframe[LABEL]
        return pd.DataFrame(result)

    chained_pp = Chain(
        imputer1,
        imputer2,
        imputer3,
        imputer4,
        imputer5,
        ohe,
        BatchMapper(batch_mapper_fn),
        BatchMapper(concat_for_tensor)
    )
    return chained_pp
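To build intuition for the `max_categories` argument, here is a pandas-only sketch of the general idea: keep only the top-k most frequent categories and bucket the rest before one-hot encoding. This illustrates the concept, not the actual OneHotEncoder implementation; the column values and `k` below are made up.

```python
import pandas as pd

# A hypothetical high-cardinality column.
s = pd.Series(["Cash", "Cash", "Cash", "Credit Card", "Credit Card", "Mobile", "Prcard"])
k = 2

# Keep the k most frequent categories; map everything else to "other".
top_k = s.value_counts().nlargest(k).index
limited = s.where(s.isin(top_k), other="other")

# One-hot encode the reduced set of categories.
encoded = pd.get_dummies(limited)
print(list(encoded.columns))  # one column per kept category, plus "other"
```

Capping the category count keeps the one-hot output dimension (and hence `INPUT_SIZE` below) manageable for columns like `company` with many rare values.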

Now let’s define some constants for clarity.

# Note that `INPUT_SIZE` here corresponds to the output dimension
# of the previously defined preprocessing steps.
# It is used to specify the input shape of the Keras model, as well as
# when converting training data from `ray.data.Dataset` to `tf.Tensor`.
INPUT_SIZE = 120
# The training batch size. Based on `NUM_WORKERS`, each worker
# will get its own share of this batch size. For example, if
# `NUM_WORKERS = 2`, each worker will work on 4 samples per batch.
BATCH_SIZE = 8
# Number of epochs. Adjust it based on how long you want the run to take.
EPOCH = 1
# Number of training workers.
# Adjust this accordingly based on the resources you have!
NUM_WORKERS = 2

Training

Let’s start by defining a simple Keras model for the classification task.

import tensorflow as tf

def build_model():
    model = tf.keras.models.Sequential()
    model.add(tf.keras.Input(shape=(INPUT_SIZE,)))
    model.add(tf.keras.layers.Dense(50, activation="relu"))
    model.add(tf.keras.layers.Dense(1, activation="sigmoid"))
    return model
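As a sanity check, the parameter count of this model can be computed by hand; each Dense layer has `inputs × units` weights plus `units` biases. The arithmetic below assumes `INPUT_SIZE = 120` as defined earlier:

```python
INPUT_SIZE = 120

# Dense(50): 120 inputs x 50 units, plus 50 biases.
hidden_params = INPUT_SIZE * 50 + 50
# Dense(1): 50 inputs x 1 unit, plus 1 bias.
output_params = 50 * 1 + 1

total_params = hidden_params + output_params
print(total_params)  # 6101; this should match model.count_params()
```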

Now let’s define the training loop. This code will be run on each training worker in a distributed fashion. See more details here.

from ray.air import session, Checkpoint
from ray.train.tensorflow import prepare_dataset_shard

def train_loop_per_worker():
    dataset_shard = session.get_dataset_shard("train")

    strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
    with strategy.scope():
        model = build_model()
        model.compile(
            loss="binary_crossentropy",
            optimizer="adam",
            metrics=["accuracy"],
        )

    def to_tf_dataset(dataset, batch_size):
        def to_tensor_iterator():
            for batch in dataset.iter_tf_batches(
                batch_size=batch_size, dtypes=tf.float32, drop_last=True,
            ):
                yield batch[INPUT], batch[LABEL]

        output_signature = (
            tf.TensorSpec(shape=(BATCH_SIZE, INPUT_SIZE), dtype=tf.float32),
            tf.TensorSpec(shape=(BATCH_SIZE,), dtype=tf.int64),
        )
        tf_dataset = tf.data.Dataset.from_generator(
            to_tensor_iterator, output_signature=output_signature
        )
        return prepare_dataset_shard(tf_dataset)

    for epoch in range(EPOCH):            
        # This will make sure that the training workers will get their own
        # share of batch to work on.
        # See `ray.train.tensorflow.prepare_dataset_shard` for more information.
        tf_dataset = to_tf_dataset(
            dataset=dataset_shard,
            batch_size=BATCH_SIZE,
        )

        model.fit(tf_dataset, verbose=0)
        # This saves checkpoint in a way that can be used by Ray Serve coherently.
        session.report(
            {},
            checkpoint=Checkpoint.from_dict(
                dict(epoch=epoch, model=model.get_weights())
            ),
        )

Now let’s define a trainer that takes in the training loop, the training dataset as well the preprocessor that we just defined.

And run it!

Notice that you can tune how long you want the run to be by changing EPOCH.

from ray.train.tensorflow import TensorflowTrainer
from ray.air.config import ScalingConfig

trainer = TensorflowTrainer(
    train_loop_per_worker=train_loop_per_worker,
    scaling_config=ScalingConfig(num_workers=NUM_WORKERS),
    datasets={"train": train_ds},
    preprocessor=get_preprocessor(),
)
result = trainer.fit()

Moving on to Serve

We will use Ray Serve to serve the trained model. A core concept of Ray Serve is a Deployment. It allows you to define and update the business logic or models that will handle incoming requests, as well as how this logic is exposed over HTTP or in Python.

In the case of serving a model, ray.serve.air_integrations.Predictor and ray.serve.air_integrations.PredictorDeployment wrap a ray.air.checkpoint.Checkpoint into a Ray Serve deployment that can readily serve HTTP requests. Note that the Checkpoint captures both the model and the preprocessing steps in a way compatible with Ray Serve, ensuring that the ML workload can transition seamlessly between training and serving.

This removes the boilerplate code and minimizes the effort to bring your model to production!

Generally speaking, an HTTP request can carry a payload in JSON or another data format. Upon receiving this payload, Ray Serve needs an “adapter” to convert the request payload into a shape or form that can be accepted as input to the preprocessing steps. In this case, we send a JSON request and convert it to a pandas DataFrame through dataframe_adapter, defined below.

from fastapi import Request

async def dataframe_adapter(request: Request):
    """Serve HTTP Adapter that reads JSON and converts to pandas DataFrame."""
    content = await request.json()
    return pd.DataFrame.from_dict(content)
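To see what this adapter produces, note that `DataFrame.to_dict()` emits a `{column: {index: value}}` mapping, and `pd.DataFrame.from_dict` reverses it. A minimal round trip on a made-up row (the column names and values here are illustrative):

```python
import pandas as pd

# One row, serialized the same way the client code in this notebook sends it.
row = pd.DataFrame({"trip_miles": [12.6], "trip_seconds": [1380.0]})
payload = row.iloc[[0]].to_dict()  # {"trip_miles": {0: 12.6}, "trip_seconds": {0: 1380.0}}

# This is what dataframe_adapter does with the parsed JSON body.
restored = pd.DataFrame.from_dict(payload)
print(restored.equals(row))  # True
```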

Now let’s wrap everything in a serve endpoint that exposes a URL to which requests can be sent.

from ray import serve
from ray.air.checkpoint import Checkpoint
from ray.train.tensorflow import TensorflowPredictor
from ray.serve import PredictorDeployment


def serve_model(checkpoint: Checkpoint, model_definition, adapter, name="Model") -> str:
    """Expose a serve endpoint.

    Returns:
        serve URL.
    """
    serve.start(detached=True)
    deployment = PredictorDeployment.options(name=name)
    deployment.deploy(
        TensorflowPredictor,
        checkpoint,
        # This is due to a current limitation on Serve that's
        # being addressed.
        # TODO(xwjiang): Change to True.
        batching_params=dict(max_batch_size=2, batch_wait_timeout_s=5),
        model_definition=model_definition,
        http_adapter=adapter,
    )
    return deployment.url
import ray
# Generally speaking, training and serving are done in totally different Ray clusters.
# To simulate that, let's shut down the old Ray cluster in preparation for serving.
ray.shutdown()

endpoint_uri = serve_model(result.checkpoint, build_model, dataframe_adapter)
2022-07-20 18:46:11,759	INFO services.py:1483 -- View the Ray dashboard at http://127.0.0.1:8266
(ServeController pid=21308) INFO 2022-07-20 18:46:15,348 controller 21308 checkpoint_path.py:17 - Using RayInternalKVStore for controller checkpoint and recovery.
(ServeController pid=21308) INFO 2022-07-20 18:46:15,350 controller 21308 http_state.py:126 - Starting HTTP proxy with name 'SERVE_CONTROLLER_ACTOR:SERVE_PROXY_ACTOR-58fb3ee046cdce5c602369291de78f60c65dcbd7c5c5a8af57ec3a26' on node '58fb3ee046cdce5c602369291de78f60c65dcbd7c5c5a8af57ec3a26' listening on '127.0.0.1:8000'
(HTTPProxyActor pid=21311) INFO:     Started server process [21311]
/Users/jiaodong/anaconda3/envs/ray3.7/lib/python3.7/site-packages/ipykernel_launcher.py:23: UserWarning: From /var/folders/1s/wy6f3ytn3q726p5hl8fw8d780000gn/T/ipykernel_21006/609683685.py:23: deploy (from ray.serve.deployment) is deprecated and will be removed in a future version Please see https://docs.ray.io/en/latest/serve/index.html
(ServeController pid=21308) INFO 2022-07-20 18:46:17,658 controller 21308 deployment_state.py:1281 - Adding 1 replicas to deployment 'Model'.
(ServeReplica:Model pid=21314) 2022-07-20 18:46:23,199	WARNING compression.py:18 -- lz4 not available, disabling sample compression. This will significantly impact RLlib performance. To install lz4, run `pip install lz4`.

Let’s write a helper function to send requests to this endpoint and compare the results with labels.

import requests
import pandas as pd
import numpy as np

NUM_SERVE_REQUESTS = 10

def send_requests(df: pd.DataFrame, label: np.array):
    for i in range(NUM_SERVE_REQUESTS):
        one_row = df.iloc[[i]].to_dict()
        serve_result = requests.post(endpoint_uri, json=one_row).json()
        print(
            f"request{i} prediction: {serve_result[0]['predictions']} "
            f"- label: {str(label[i])}"
        )
send_requests(test_df, test_label)
request0 prediction: 0.004963837098330259 - label: True
request1 prediction: 6.652726733591408e-05 - label: False
request2 prediction: 0.00018405025184620172 - label: False
request3 prediction: 0.00016512417641934007 - label: False
request4 prediction: 0.00015515758423134685 - label: False
request5 prediction: 5.948602483840659e-05 - label: False
request6 prediction: 9.51739348238334e-05 - label: False
request7 prediction: 3.4787988170137396e-06 - label: False
request8 prediction: 0.00010751552326837555 - label: False
request9 prediction: 0.060329731553792953 - label: True