Deploying Predictors with Serve#
Ray Serve is the recommended tool for deploying models trained with AIR. After training a model with Ray Train, you can serve it using Ray Serve. In this guide, we will cover how to use Ray AIR’s PredictorDeployment, Predictor, and Checkpoint abstractions to quickly deploy a model for online inference.
But before that, let’s review the key concepts:
Checkpoint represents a trained model stored in memory, in a file, or at a remote URI.
Predictors understand how to perform model inference given a checkpoint and the model definition. Ray AIR comes with a predictor for each supported framework.
Deployment is a Ray Serve construct that represents an HTTP endpoint along with a scalable pool of models.
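To make the hand-off concrete, here is a minimal sketch (a toy dictionary checkpoint and a dummy predictor, not a real model) of how the first two abstractions fit together; the Deployment piece is covered in the rest of this guide:
from ray.air.checkpoint import Checkpoint
from ray.train.predictor import Predictor

# Checkpoint: a trained "model" stored here as an in-memory dictionary.
toy_checkpoint = Checkpoint.from_dict({"weight": 2})

class ToyPredictor(Predictor):
    """Dummy predictor that multiplies its input by a stored weight."""

    def __init__(self, weight: int):
        self.weight = weight

    @classmethod
    def from_checkpoint(cls, ckpt: Checkpoint):
        # Predictor: rebuilds the model from a checkpoint and runs inference.
        return cls(ckpt.to_dict()["weight"])

    def predict(self, data):
        return [x * self.weight for x in data]

ToyPredictor.from_checkpoint(toy_checkpoint).predict([1, 2, 3])  # [2, 4, 6]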
The core concept for model deployment is the PredictorDeployment. The PredictorDeployment takes a predictor class and a checkpoint and transforms them into a live HTTP endpoint.
We’ll start with a simple quick-start demo showing how you can use the PredictorDeployment to deploy your model for online inference.
Let’s first make sure Ray AIR is installed. For the quick-start, we’ll also use Ray AIR to train and serve an XGBoost model.
!pip install "ray[air]" xgboost scikit-learn
You can find the preprocessor and trainer in the key concepts walk-through.
import ray
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from ray.train.xgboost import XGBoostTrainer
from ray.air.config import ScalingConfig
from ray.data.preprocessors import StandardScaler
data_raw = load_breast_cancer()
dataset_df = pd.DataFrame(data_raw["data"], columns=data_raw["feature_names"])
dataset_df["target"] = data_raw["target"]
train_df, test_df = train_test_split(dataset_df, test_size=0.3)
train_dataset = ray.data.from_pandas(train_df)
valid_dataset = ray.data.from_pandas(test_df)
test_dataset = ray.data.from_pandas(test_df.drop("target", axis=1))
# Define preprocessor
columns_to_scale = ["mean radius", "mean texture"]
preprocessor = StandardScaler(columns=columns_to_scale)
# Define trainer
trainer = XGBoostTrainer(
    scaling_config=ScalingConfig(num_workers=1),
    label_column="target",
    params={
        "tree_method": "approx",
        "objective": "binary:logistic",
        "eval_metric": ["logloss", "error"],
        "max_depth": 2,
    },
    datasets={"train": train_dataset, "valid": valid_dataset},
    preprocessor=preprocessor,
    num_boost_round=5,
)
result = trainer.fit()
2022-06-02 19:31:31,356 INFO services.py:1483 -- View the Ray dashboard at http://127.0.0.1:8265
Current time: 2022-06-02 19:31:48 (running for 00:00:13.38)
Memory usage on this node: 37.9/64.0 GiB
Using FIFO scheduling algorithm.
Resources requested: 0/16 CPUs, 0/0 GPUs, 0.0/25.71 GiB heap, 0.0/2.0 GiB objects
Result logdir: /Users/simonmo/ray_results/XGBoostTrainer_2022-06-02_19-31-34
Number of trials: 1/1 (1 TERMINATED)
Trial name | status | loc | iter | total time (s) | train-logloss | train-error | valid-logloss |
---|---|---|---|---|---|---|---|
XGBoostTrainer_4930d_00000 | TERMINATED | 127.0.0.1:60303 | 5 | 8.72108 | 0.190254 | 0.035176 | 0.20535 |
(GBDTTrainable pid=60303) UserWarning: `num_actors` in `ray_params` is smaller than 2 (1). XGBoost will NOT be distributed!
(GBDTTrainable pid=60303) 2022-06-02 19:31:42,283 INFO main.py:980 -- [RayXGBoost] Created 1 new actors (1 total actors). Waiting until actors are ready for training.
(GBDTTrainable pid=60303) 2022-06-02 19:31:46,324 INFO main.py:1025 -- [RayXGBoost] Starting XGBoost training.
(_RemoteRayXGBoostActor pid=60578) [19:31:46] task [xgboost.ray]:140298197243216 got new rank 0
Result for XGBoostTrainer_4930d_00000:
date: 2022-06-02_19-31-47
done: false
experiment_id: 171c25bee8e7490f933cc082daf7e6e0
hostname: Simons-MacBook-Pro.local
iterations_since_restore: 1
node_ip: 127.0.0.1
pid: 60303
should_checkpoint: true
time_since_restore: 8.666727781295776
time_this_iter_s: 8.666727781295776
time_total_s: 8.666727781295776
timestamp: 1654223507
timesteps_since_restore: 0
train-error: 0.047739
train-logloss: 0.483805
training_iteration: 1
trial_id: 4930d_00000
valid-error: 0.05848
valid-logloss: 0.488357
warmup_time: 0.0035247802734375
(GBDTTrainable pid=60303) 2022-06-02 19:31:47,421 INFO main.py:1519 -- [RayXGBoost] Finished XGBoost training on training data with total N=398 in 5.16 seconds (1.09 pure XGBoost training time).
Result for XGBoostTrainer_4930d_00000:
date: 2022-06-02_19-31-47
done: true
experiment_id: 171c25bee8e7490f933cc082daf7e6e0
experiment_tag: '0'
hostname: Simons-MacBook-Pro.local
iterations_since_restore: 5
node_ip: 127.0.0.1
pid: 60303
should_checkpoint: true
time_since_restore: 8.72108268737793
time_this_iter_s: 0.011542558670043945
time_total_s: 8.72108268737793
timestamp: 1654223507
timesteps_since_restore: 0
train-error: 0.035176
train-logloss: 0.190254
training_iteration: 5
trial_id: 4930d_00000
valid-error: 0.046784
valid-logloss: 0.20535
warmup_time: 0.0035247802734375
2022-06-02 19:31:48,266 INFO tune.py:753 -- Total run time: 13.77 seconds (13.38 seconds for the tuning loop).
The following block serves a Ray AIR model from a checkpoint, using the built-in XGBoostPredictor.
from ray.train.xgboost import XGBoostPredictor
from ray import serve
from ray.serve import PredictorDeployment
from ray.serve.http_adapters import pandas_read_json
serve.run(
    PredictorDeployment.options(name="XGBoostService").bind(
        XGBoostPredictor, result.checkpoint, http_adapter=pandas_read_json
    )
)
(ServeController pid=60981) INFO 2022-06-02 19:31:52,825 controller 60981 checkpoint_path.py:17 - Using RayInternalKVStore for controller checkpoint and recovery.
(ServeController pid=60981) INFO 2022-06-02 19:31:52,828 controller 60981 http_state.py:115 - Starting HTTP proxy with name 'SERVE_CONTROLLER_ACTOR:SERVE_PROXY_ACTOR-node:127.0.0.1-0' on node 'node:127.0.0.1-0' listening on '127.0.0.1:8000'
(HTTPProxyActor pid=60984) INFO: Started server process [60984]
(ServeController pid=60981) INFO 2022-06-02 19:31:55,191 controller 60981 deployment_state.py:1221 - Adding 1 replicas to deployment 'XGBoostService'.
Let’s send a request through HTTP.
import requests
sample_input = test_dataset.take(1)
sample_input = dict(sample_input[0])
output = requests.post("http://localhost:8000/", json=[sample_input]).json()
print(output)
[{'predictions': 0.1142289936542511}]
(HTTPProxyActor pid=60984) INFO 2022-06-02 19:32:00,604 http_proxy 127.0.0.1 http_proxy.py:320 - POST /XGBoostService 307 5.4ms
(XGBoostService pid=60988) INFO 2022-06-02 19:32:00,603 XGBoostService XGBoostService#LOYoUm replica.py:484 - HANDLE __call__ OK 0.3ms
(HTTPProxyActor pid=60984) INFO 2022-06-02 19:32:00,658 http_proxy 127.0.0.1 http_proxy.py:320 - POST /XGBoostService 200 49.8ms
(XGBoostService pid=60988) INFO 2022-06-02 19:32:00,656 XGBoostService XGBoostService#LOYoUm replica.py:484 - HANDLE __call__ OK 46.8ms
It works! As you can see, you can use the PredictorDeployment to deploy checkpoints trained in Ray AIR as live endpoints. You can find more end-to-end examples for your specific framework on the examples page.
This tutorial aims to provide an in-depth understanding of PredictorDeployment. In particular, it’ll demonstrate:
How to serve a predictor accepting array input.
How to serve a predictor accepting dataframe input.
How to serve a predictor accepting custom input that can be transformed to array or dataframe.
How to configure micro-batching to enhance performance.
1. Predictor accepting NumPy array#
We’ll use a simple predictor implementation that adds an increment to an input array.
import numpy as np
from ray.train.predictor import Predictor
from ray.air.checkpoint import Checkpoint
class AdderPredictor(Predictor):
    """Dummy predictor that increments input by a static value."""

    def __init__(self, increment: int):
        self.increment = increment

    @classmethod
    def from_checkpoint(cls, ckpt: Checkpoint):
        """Create predictor from checkpoint.

        Args:
            ckpt: The AIR checkpoint representing a single dictionary. The dictionary
                should have key `increment` and an integer value.
        """
        return cls(ckpt.to_dict()["increment"])

    def predict(self, inp: np.ndarray) -> np.ndarray:
        return inp + self.increment
Let’s first test it locally.
local_checkpoint = Checkpoint.from_dict({"increment": 2})
local_predictor = AdderPredictor.from_checkpoint(local_checkpoint)
assert local_predictor.predict(np.array([40])) == np.array([42])
It worked! Now let’s serve it behind HTTP. In Ray Serve, the core unit of an HTTP service is called a Deployment. It turns a Python class into a queryable HTTP endpoint. For Ray AIR, Serve provides a PredictorDeployment to simplify this transformation. You don’t need to implement any Python classes; you just pass in your predictor and checkpoint instead.
The deployment takes several arguments. It requires two arguments to start:
predictor_cls (Type[Predictor] | str): The predictor Python class. Typically you can use a built-in integration from Ray AIR like the TorchPredictor. Alternatively, you can specify the class path to import a predictor, like "ray.air.integrations.torch.TorchPredictor".
checkpoint (Checkpoint | str): A checkpoint instance, or a URI to load the checkpoint from.
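For illustration, both arguments also accept string forms. A hedged sketch (the checkpoint URI below is a hypothetical placeholder, not an artifact produced by this tutorial):
from ray import serve
from ray.serve import PredictorDeployment

serve.run(
    PredictorDeployment.options(name="FromStrings").bind(
        # Class path string instead of the class object.
        predictor_cls="ray.train.xgboost.XGBoostPredictor",
        # URI string instead of a Checkpoint instance (hypothetical location).
        checkpoint="s3://my-bucket/checkpoints/xgboost/",
    )
)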
The following cell showcases how to create a deployment with our AdderPredictor. To learn more about Ray Serve, check out its documentation.
from ray import serve
from ray.serve import PredictorDeployment
# Deploy the model behind HTTP endpoint
serve.run(
    PredictorDeployment.options(name="Adder").bind(
        predictor_cls=AdderPredictor,
        checkpoint=local_checkpoint,
    )
)
(ServeController pid=60981) INFO 2022-06-02 19:32:07,559 controller 60981 deployment_state.py:1221 - Adding 1 replicas to deployment 'Adder'.
After the model has been deployed, let’s send an HTTP request.
import requests
resp = requests.post("http://localhost:8000/", json={"array": [40]})
resp.raise_for_status()
resp.json()
[42.0]
(HTTPProxyActor pid=60984) INFO 2022-06-02 19:32:18,864 http_proxy 127.0.0.1 http_proxy.py:320 - POST /Adder 200 18.0ms
(Adder pid=60999) INFO 2022-06-02 19:32:18,863 Adder Adder#aqYgDS replica.py:484 - HANDLE __call__ OK 13.1ms
Nice! We sent [40] as our input and got [42] back as our output in JSON format.
You can also specify multi-dimensional arrays in the JSON payload, as well as “dtype” and “shape” fields that control how the array is parsed. For more information about the array input schema, see Ndarray.
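For example, a payload with explicit “shape” and “dtype” fields could look like the following sketch against the Adder deployment above (field names follow the Ndarray schema):
resp = requests.post(
    "http://localhost:8000/",
    json={
        "array": [1, 2, 3, 4],  # flat list of values
        "shape": [2, 2],        # reshape the flat list into a 2x2 array
        "dtype": "float32",     # interpret the values as float32
    },
)
resp.raise_for_status()
print(resp.json())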
That’s it for arrays! Let’s take a look at tabular input.
2. Predictor accepting Pandas DataFrame#
Let’s now take a look at a predictor accepting dataframe inputs. We’ll perform some simple column-wise transformations on the input data.
import pandas as pd

class DataFramePredictor(Predictor):
    """Dummy predictor that first multiplies the input, then increments it."""

    def __init__(self, increment: int):
        self.increment = increment

    @classmethod
    def from_checkpoint(cls, ckpt: Checkpoint):
        return cls(ckpt.to_dict()["increment"])

    def predict(self, inp: pd.DataFrame) -> pd.DataFrame:
        inp["prediction"] = inp["base"] * inp["multiplier"] + self.increment
        return inp
local_df_predictor = DataFramePredictor.from_checkpoint(local_checkpoint)
Just like the AdderPredictor, we’ll use the same PredictorDeployment approach to make it queryable over HTTP.
Note that we added http_adapter=pandas_read_json as a keyword argument. This tells Serve how to convert incoming JSON requests into a DataFrame. The pandas_read_json adapter accepts:
Pandas-parsable JSON in the HTTP body
Optional keyword arguments to the pandas.read_json function via HTTP URL parameters.
To learn more, see HTTP Adapters.
Note
You might wonder why the previous array predictor doesn’t need to specify any HTTP adapter. This is because Ray Serve defaults to a built-in adapter called json_to_ndarray (ray.serve.http_adapters.json_to_ndarray)!
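In other words, the earlier Adder deployment is roughly equivalent to passing that adapter explicitly, as sketched below:
from ray.serve.http_adapters import json_to_ndarray

serve.run(
    PredictorDeployment.options(name="Adder").bind(
        predictor_cls=AdderPredictor,
        checkpoint=local_checkpoint,
        http_adapter=json_to_ndarray,  # same behavior as leaving http_adapter unset
    )
)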
from ray.serve.http_adapters import pandas_read_json
serve.run(
    PredictorDeployment.options(name="DataFramePredictor").bind(
        predictor_cls=DataFramePredictor,
        checkpoint=local_checkpoint,
        http_adapter=pandas_read_json,
    )
)
(ServeController pid=60981) INFO 2022-06-02 19:32:24,396 controller 60981 deployment_state.py:1221 - Adding 1 replicas to deployment 'DataFramePredictor'.
Let’s send a request to our endpoint.
resp = requests.post(
    "http://localhost:8000/",
    json=[{"base": 1, "multiplier": 2}, {"base": 3, "multiplier": 4}],
    params={"orient": "records"},
)
resp.raise_for_status()
resp.text
'[{"base":1,"multiplier":2,"prediction":4},{"base":3,"multiplier":4,"prediction":14}]'
(HTTPProxyActor pid=60984) INFO 2022-06-02 19:32:28,751 http_proxy 127.0.0.1 http_proxy.py:320 - POST /DataFramePredictor 200 21.0ms
(DataFramePredictor pid=61006) INFO 2022-06-02 19:32:28,750 DataFramePredictor DataFramePredictor#IJcHCI replica.py:484 - HANDLE __call__ OK 17.2ms
Great! You can see that the input JSON has been converted to a dataframe, so our predictor can work with pure dataframes instead of raw HTTP requests.
But what if we need to customize how the HTTP request is parsed? You can do that as well.
3. Accepting custom inputs via http_adapter#
The http_adapter field accepts any callable function that is type-annotated. You can also bring in additional types that are accepted by FastAPI’s dependency injection framework. For more detail, see HTTP Adapters. In the following example, instead of using the pandas adapter Serve provides, we’ll implement our own request adapter that works with just HTTP query parameters instead of JSON.
def our_own_http_adapter(base: int, multiplier: int):
    return pd.DataFrame([{"base": base, "multiplier": multiplier}])
Let’s deploy it.
from ray.serve.http_adapters import pandas_read_json
serve.run(
    PredictorDeployment.options(name="DataFramePredictor").bind(
        predictor_cls=DataFramePredictor,
        checkpoint=local_checkpoint,
        http_adapter=our_own_http_adapter,
    )
)
(ServeController pid=60981) INFO 2022-06-02 19:33:31,010 controller 60981 deployment_state.py:1180 - Stopping 1 replicas of deployment 'DataFramePredictor' with outdated versions.
(ServeController pid=60981) INFO 2022-06-02 19:33:33,165 controller 60981 deployment_state.py:1221 - Adding 1 replicas to deployment 'DataFramePredictor'.
Let’s now send a request. Note that the new deployment accepts our specified input via HTTP query parameters. The equivalent curl request would be curl -X POST "http://localhost:8000/DataFramePredictor/?base=10&multiplier=4".
resp = requests.post(
    "http://localhost:8000/",
    params={"base": 10, "multiplier": 4},
)
resp.raise_for_status()
resp.text
'[{"base":10,"multiplier":4,"prediction":42}]'
(HTTPProxyActor pid=60984) INFO 2022-06-02 19:33:36,070 http_proxy 127.0.0.1 http_proxy.py:320 - POST /DataFramePredictor 200 21.6ms
(DataFramePredictor pid=61037) INFO 2022-06-02 19:33:36,069 DataFramePredictor DataFramePredictor#QzQiec replica.py:484 - HANDLE __call__ OK 17.5ms
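As an aside, an http_adapter callable can also declare FastAPI-style dependencies such as the raw starlette request and parse the body itself. A minimal sketch (assuming the body is a JSON list of records; this adapter is not used in the rest of the tutorial):
from starlette.requests import Request
import pandas as pd

async def request_to_dataframe(request: Request) -> pd.DataFrame:
    # Read the raw JSON body and turn a list of records into a DataFrame.
    records = await request.json()
    return pd.DataFrame(records)

# It plugs in the same way as any other adapter, e.g.:
# PredictorDeployment.options(name="DataFramePredictor").bind(
#     predictor_cls=DataFramePredictor,
#     checkpoint=local_checkpoint,
#     http_adapter=request_to_dataframe,
# )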
4. PredictorDeployment performs microbatching to improve performance#
Machine learning models commonly take a batch of inputs for prediction, and common ML frameworks use vectorized instructions to make inference on a batch of requests almost as fast as on a single request.
In Serve’s PredictorDeployment, incoming requests are automatically batched. When multiple clients send requests at the same time, Serve combines them into a single batch (array or dataframe) and then calls predict on the entire batch. Let’s take a look at a predictor that returns each row’s content, the batch size, and the batch group.
import time

class BatchSizePredictor(Predictor):
    @classmethod
    def from_checkpoint(cls, _: Checkpoint):
        return cls()

    def predict(self, inp: np.ndarray):
        time.sleep(0.5)  # simulate model inference.
        return [(i, len(inp), inp) for i in inp]
serve.run(
    PredictorDeployment.options(name="BatchSizePredictor").bind(
        predictor_cls=BatchSizePredictor,
        checkpoint=local_checkpoint,
    )
)
(ServeController pid=60981) INFO 2022-06-02 19:33:39,305 controller 60981 deployment_state.py:1221 - Adding 1 replicas to deployment 'BatchSizePredictor'.
Let’s use a threadpool executor to send ten requests at the same time to simulate multiple clients.
from concurrent.futures import ThreadPoolExecutor, wait

with ThreadPoolExecutor() as pool:
    futs = [
        pool.submit(
            requests.post,
            "http://localhost:8000/",
            json={"array": [i]},
        )
        for i in range(10)
    ]
    wait(futs)
    for fut in futs:
        i, batch_size, batch_group = fut.result().json()
        print(f"Request id: {i} is part of batch group: {batch_group}, with batch size {batch_size}")
(HTTPProxyActor pid=60984) INFO 2022-06-02 19:33:43,141 http_proxy 127.0.0.1 http_proxy.py:320 - POST /BatchSizePredictor 200 525.9ms
(BatchSizePredictor pid=61041) INFO 2022-06-02 19:33:43,139 BatchSizePredictor BatchSizePredictor#QQPBXh replica.py:484 - HANDLE __call__ OK 519.1ms
(HTTPProxyActor pid=60984) INFO 2022-06-02 19:33:43,647 http_proxy 127.0.0.1 http_proxy.py:320 - POST /BatchSizePredictor 200 1030.2ms
(BatchSizePredictor pid=61041) INFO 2022-06-02 19:33:43,645 BatchSizePredictor BatchSizePredictor#QQPBXh replica.py:484 - HANDLE __call__ OK 1013.6ms
(BatchSizePredictor pid=61041) INFO 2022-06-02 19:33:44,155 BatchSizePredictor BatchSizePredictor#QQPBXh replica.py:484 - HANDLE __call__ OK 1015.0ms
(BatchSizePredictor pid=61041) INFO 2022-06-02 19:33:44,155 BatchSizePredictor BatchSizePredictor#QQPBXh replica.py:484 - HANDLE __call__ OK 511.8ms
(BatchSizePredictor pid=61041) INFO 2022-06-02 19:33:44,155 BatchSizePredictor BatchSizePredictor#QQPBXh replica.py:484 - HANDLE __call__ OK 511.4ms
(BatchSizePredictor pid=61041) INFO 2022-06-02 19:33:44,155 BatchSizePredictor BatchSizePredictor#QQPBXh replica.py:484 - HANDLE __call__ OK 511.0ms
(HTTPProxyActor pid=60984) INFO 2022-06-02 19:33:44,661 http_proxy 127.0.0.1 http_proxy.py:320 - POST /BatchSizePredictor 200 2043.3ms
(HTTPProxyActor pid=60984) INFO 2022-06-02 19:33:44,662 http_proxy 127.0.0.1 http_proxy.py:320 - POST /BatchSizePredictor 200 2042.9ms
(HTTPProxyActor pid=60984) INFO 2022-06-02 19:33:44,662 http_proxy 127.0.0.1 http_proxy.py:320 - POST /BatchSizePredictor 200 2039.5ms
(HTTPProxyActor pid=60984) INFO 2022-06-02 19:33:44,662 http_proxy 127.0.0.1 http_proxy.py:320 - POST /BatchSizePredictor 200 2038.1ms
(HTTPProxyActor pid=60984) INFO 2022-06-02 19:33:44,663 http_proxy 127.0.0.1 http_proxy.py:320 - POST /BatchSizePredictor 200 2038.9ms
(HTTPProxyActor pid=60984) INFO 2022-06-02 19:33:44,663 http_proxy 127.0.0.1 http_proxy.py:320 - POST /BatchSizePredictor 200 2036.8ms
(HTTPProxyActor pid=60984) INFO 2022-06-02 19:33:44,664 http_proxy 127.0.0.1 http_proxy.py:320 - POST /BatchSizePredictor 200 2036.5ms
(BatchSizePredictor pid=61041) INFO 2022-06-02 19:33:44,661 BatchSizePredictor BatchSizePredictor#QQPBXh replica.py:484 - HANDLE __call__ OK 1016.0ms
(BatchSizePredictor pid=61041) INFO 2022-06-02 19:33:44,661 BatchSizePredictor BatchSizePredictor#QQPBXh replica.py:484 - HANDLE __call__ OK 1015.6ms
(BatchSizePredictor pid=61041) INFO 2022-06-02 19:33:44,662 BatchSizePredictor BatchSizePredictor#QQPBXh replica.py:484 - HANDLE __call__ OK 1015.5ms
Request id: [0.0] is part of batch group: [[3.0], [0.0], [4.0], [7.0]], with batch size 4
Request id: [1.0] is part of batch group: [[1.0]], with batch size 1
Request id: [2.0] is part of batch group: [[2.0]], with batch size 1
Request id: [3.0] is part of batch group: [[3.0], [0.0], [4.0], [7.0]], with batch size 4
Request id: [4.0] is part of batch group: [[3.0], [0.0], [4.0], [7.0]], with batch size 4
Request id: [5.0] is part of batch group: [[6.0], [5.0], [9.0]], with batch size 3
Request id: [6.0] is part of batch group: [[6.0], [5.0], [9.0]], with batch size 3
Request id: [7.0] is part of batch group: [[3.0], [0.0], [4.0], [7.0]], with batch size 4
Request id: [8.0] is part of batch group: [[8.0]], with batch size 1
Request id: [9.0] is part of batch group: [[6.0], [5.0], [9.0]], with batch size 3
(HTTPProxyActor pid=60984) INFO 2022-06-02 19:33:45,167 http_proxy 127.0.0.1 http_proxy.py:320 - POST /BatchSizePredictor 200 2539.1ms
(BatchSizePredictor pid=61041) INFO 2022-06-02 19:33:45,165 BatchSizePredictor BatchSizePredictor#QQPBXh replica.py:484 - HANDLE __call__ OK 1516.7ms
As you can see, some of the requests are part of a bigger group that’s run together.
You can also configure the exact details of the batching parameters:
max_batch_size (int): the maximum batch size that will be executed in one call to predict.
batch_wait_timeout_s (float): the maximum duration to wait for max_batch_size elements before running the predict call.
Let’s set a max_batch_size of 10 to group our requests into the same batch.
serve.run(
    PredictorDeployment.options(name="BatchSizePredictor").bind(
        predictor_cls=BatchSizePredictor,
        checkpoint=local_checkpoint,
        batching_params={"max_batch_size": 10, "batch_wait_timeout_s": 5},
    )
)
(ServeController pid=60981) INFO 2022-06-02 19:33:47,081 controller 60981 deployment_state.py:1180 - Stopping 1 replicas of deployment 'BatchSizePredictor' with outdated versions.
(ServeController pid=60981) INFO 2022-06-02 19:33:49,234 controller 60981 deployment_state.py:1221 - Adding 1 replicas to deployment 'BatchSizePredictor'.
Let’s call them again! You should see all ten requests executed as part of the same group.
from concurrent.futures import ThreadPoolExecutor, wait

with ThreadPoolExecutor() as pool:
    futs = [
        pool.submit(
            requests.post,
            "http://localhost:8000/",
            json={"array": [i]},
        )
        for i in range(10)
    ]
    wait(futs)
    for fut in futs:
        i, batch_size, batch_group = fut.result().json()
        print(f"Request id: {i} is part of batch group: {batch_group}, with batch size {batch_size}")
Request id: [0.0] is part of batch group: [[0.0], [5.0], [1.0], [2.0], [3.0], [4.0], [7.0], [6.0], [8.0], [9.0]], with batch size 10
Request id: [1.0] is part of batch group: [[0.0], [5.0], [1.0], [2.0], [3.0], [4.0], [7.0], [6.0], [8.0], [9.0]], with batch size 10
Request id: [2.0] is part of batch group: [[0.0], [5.0], [1.0], [2.0], [3.0], [4.0], [7.0], [6.0], [8.0], [9.0]], with batch size 10
Request id: [3.0] is part of batch group: [[0.0], [5.0], [1.0], [2.0], [3.0], [4.0], [7.0], [6.0], [8.0], [9.0]], with batch size 10
Request id: [4.0] is part of batch group: [[0.0], [5.0], [1.0], [2.0], [3.0], [4.0], [7.0], [6.0], [8.0], [9.0]], with batch size 10
Request id: [5.0] is part of batch group: [[0.0], [5.0], [1.0], [2.0], [3.0], [4.0], [7.0], [6.0], [8.0], [9.0]], with batch size 10
Request id: [6.0] is part of batch group: [[0.0], [5.0], [1.0], [2.0], [3.0], [4.0], [7.0], [6.0], [8.0], [9.0]], with batch size 10
Request id: [7.0] is part of batch group: [[0.0], [5.0], [1.0], [2.0], [3.0], [4.0], [7.0], [6.0], [8.0], [9.0]], with batch size 10
Request id: [8.0] is part of batch group: [[0.0], [5.0], [1.0], [2.0], [3.0], [4.0], [7.0], [6.0], [8.0], [9.0]], with batch size 10
Request id: [9.0] is part of batch group: [[0.0], [5.0], [1.0], [2.0], [3.0], [4.0], [7.0], [6.0], [8.0], [9.0]], with batch size 10
(HTTPProxyActor pid=60984) INFO 2022-06-02 19:33:52,751 http_proxy 127.0.0.1 http_proxy.py:320 - POST /BatchSizePredictor 200 538.8ms
(HTTPProxyActor pid=60984) INFO 2022-06-02 19:33:52,752 http_proxy 127.0.0.1 http_proxy.py:320 - POST /BatchSizePredictor 200 526.8ms
(HTTPProxyActor pid=60984) INFO 2022-06-02 19:33:52,753 http_proxy 127.0.0.1 http_proxy.py:320 - POST /BatchSizePredictor 200 535.1ms
(HTTPProxyActor pid=60984) INFO 2022-06-02 19:33:52,753 http_proxy 127.0.0.1 http_proxy.py:320 - POST /BatchSizePredictor 200 528.0ms
(HTTPProxyActor pid=60984) INFO 2022-06-02 19:33:52,754 http_proxy 127.0.0.1 http_proxy.py:320 - POST /BatchSizePredictor 200 533.4ms
(HTTPProxyActor pid=60984) INFO 2022-06-02 19:33:52,754 http_proxy 127.0.0.1 http_proxy.py:320 - POST /BatchSizePredictor 200 528.0ms
(HTTPProxyActor pid=60984) INFO 2022-06-02 19:33:52,754 http_proxy 127.0.0.1 http_proxy.py:320 - POST /BatchSizePredictor 200 526.3ms
(HTTPProxyActor pid=60984) INFO 2022-06-02 19:33:52,754 http_proxy 127.0.0.1 http_proxy.py:320 - POST /BatchSizePredictor 200 525.0ms
(HTTPProxyActor pid=60984) INFO 2022-06-02 19:33:52,755 http_proxy 127.0.0.1 http_proxy.py:320 - POST /BatchSizePredictor 200 524.5ms
(HTTPProxyActor pid=60984) INFO 2022-06-02 19:33:52,755 http_proxy 127.0.0.1 http_proxy.py:320 - POST /BatchSizePredictor 200 524.0ms
(BatchSizePredictor pid=61046) INFO 2022-06-02 19:33:52,746 BatchSizePredictor BatchSizePredictor#mlVwXr replica.py:484 - HANDLE __call__ OK 530.1ms
(BatchSizePredictor pid=61046) INFO 2022-06-02 19:33:52,746 BatchSizePredictor BatchSizePredictor#mlVwXr replica.py:484 - HANDLE __call__ OK 514.7ms
(BatchSizePredictor pid=61046) INFO 2022-06-02 19:33:52,747 BatchSizePredictor BatchSizePredictor#mlVwXr replica.py:484 - HANDLE __call__ OK 514.4ms
(BatchSizePredictor pid=61046) INFO 2022-06-02 19:33:52,747 BatchSizePredictor BatchSizePredictor#mlVwXr replica.py:484 - HANDLE __call__ OK 513.6ms
(BatchSizePredictor pid=61046) INFO 2022-06-02 19:33:52,747 BatchSizePredictor BatchSizePredictor#mlVwXr replica.py:484 - HANDLE __call__ OK 513.4ms
(BatchSizePredictor pid=61046) INFO 2022-06-02 19:33:52,748 BatchSizePredictor BatchSizePredictor#mlVwXr replica.py:484 - HANDLE __call__ OK 511.6ms
(BatchSizePredictor pid=61046) INFO 2022-06-02 19:33:52,748 BatchSizePredictor BatchSizePredictor#mlVwXr replica.py:484 - HANDLE __call__ OK 510.6ms
(BatchSizePredictor pid=61046) INFO 2022-06-02 19:33:52,748 BatchSizePredictor BatchSizePredictor#mlVwXr replica.py:484 - HANDLE __call__ OK 510.4ms
(BatchSizePredictor pid=61046) INFO 2022-06-02 19:33:52,749 BatchSizePredictor BatchSizePredictor#mlVwXr replica.py:484 - HANDLE __call__ OK 510.3ms
(BatchSizePredictor pid=61046) INFO 2022-06-02 19:33:52,749 BatchSizePredictor BatchSizePredictor#mlVwXr replica.py:484 - HANDLE __call__ OK 509.9ms
The batching behavior is well-defined:
When batching arrays, they are all concatenated into a new array with an added batch dimension.
When batching dataframes, they are all concatenated row-wise.
You can also turn off this behavior by setting batching_params=False.
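For instance, a sketch of turning batching off for the predictor above:
serve.run(
    PredictorDeployment.options(name="BatchSizePredictor").bind(
        predictor_cls=BatchSizePredictor,
        checkpoint=local_checkpoint,
        batching_params=False,  # each request reaches predict() on its own
    )
)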