ray.train.tensorflow.TensorflowPredictor#

class ray.train.tensorflow.TensorflowPredictor(*, model: Optional[keras.engine.training.Model] = None, preprocessor: Optional[Preprocessor] = None, use_gpu: bool = False)[source]#

Bases: ray.train._internal.dl_predictor.DLPredictor

A predictor for TensorFlow models.

Parameters
  • model – A TensorFlow Keras model to use for predictions.

  • preprocessor – A preprocessor used to transform data batches prior to prediction.

  • use_gpu – If set, the model will be moved to the GPU on instantiation, and prediction will run on the GPU.

PublicAPI (beta): This API is in beta and may change before becoming stable.
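
Example

A minimal construction sketch, assuming an in-memory Keras model (the build_model helper below is illustrative and not part of this API):

import numpy as np
import tensorflow as tf

from ray.train.tensorflow import TensorflowPredictor

def build_model() -> tf.keras.Model:
    # A trivial single-input, single-output model used only for illustration.
    return tf.keras.Sequential(
        [
            tf.keras.layers.InputLayer(input_shape=(1,)),
            tf.keras.layers.Dense(1),
        ]
    )

# Construct the predictor directly from the model. Pass use_gpu=True to move
# the model to a GPU and run prediction there instead.
predictor = TensorflowPredictor(model=build_model(), use_gpu=False)
predictions = predictor.predict(np.asarray([[1.0], [2.0], [3.0]]))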

classmethod from_checkpoint(checkpoint: ray.air.checkpoint.Checkpoint, model_definition: Optional[Union[Callable[[], keras.engine.training.Model], Type[keras.engine.training.Model]]] = None, use_gpu: Optional[bool] = False) ray.train.tensorflow.tensorflow_predictor.TensorflowPredictor[source]#

Instantiate the predictor from a Checkpoint.

The checkpoint is expected to be the result of a TensorflowTrainer run.

Parameters
  • checkpoint – The checkpoint to load the model and preprocessor from. It is expected to come from a TensorflowTrainer run.

  • model_definition – A callable that returns a TensorFlow Keras model to use. Model weights will be loaded from the checkpoint. This is only needed if the checkpoint was created from TensorflowCheckpoint.from_model.

  • use_gpu – Whether GPU should be used during prediction.
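
Example

A minimal sketch of restoring a predictor from a checkpoint created with TensorflowCheckpoint.from_model (the build_model helper below is illustrative and not part of this API):

import tensorflow as tf

from ray.train.tensorflow import TensorflowCheckpoint, TensorflowPredictor

def build_model() -> tf.keras.Model:
    return tf.keras.Sequential(
        [
            tf.keras.layers.InputLayer(input_shape=(1,)),
            tf.keras.layers.Dense(1),
        ]
    )

# Checkpoints created with TensorflowCheckpoint.from_model store the model
# weights, so the model definition must be supplied when restoring.
checkpoint = TensorflowCheckpoint.from_model(build_model())
predictor = TensorflowPredictor.from_checkpoint(
    checkpoint, model_definition=build_model
)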

call_model(inputs: Union[tensorflow.python.framework.ops.Tensor, Dict[str, tensorflow.python.framework.ops.Tensor]]) Union[tensorflow.python.framework.ops.Tensor, Dict[str, tensorflow.python.framework.ops.Tensor]][source]#

Runs inference on a single batch of tensor data.

This method is called by TensorflowPredictor.predict after converting the original data batch to TensorFlow tensors.

Override this method to add custom logic for processing the model input or output.

Example

# List outputs are not supported by default TensorflowPredictor.
import numpy as np
import tensorflow as tf

from ray.train.tensorflow import TensorflowPredictor

def build_model() -> tf.keras.Model:
    input_layer = tf.keras.layers.Input(shape=(1,))
    model = tf.keras.models.Model(
        inputs=input_layer, outputs=[input_layer, input_layer]
    )
    return model

# Use a custom predictor to format model output as a dict.
class CustomPredictor(TensorflowPredictor):
    def call_model(self, inputs):
        model_output = super().call_model(inputs)
        return {
            str(i): model_output[i] for i in range(len(model_output))
        }

data_batch = np.asarray([[1.0], [2.0], [3.0]])
predictor = CustomPredictor(model=build_model())
predictions = predictor.predict(data_batch)

Parameters
  • inputs – A batch of data to predict on, represented as either a single TensorFlow tensor or, for multi-input models, a dictionary of tensors.

Returns

The model outputs, either as a single tensor or a dictionary of tensors.

DeveloperAPI: This API may change across minor Ray releases.

predict(data: Union[numpy.ndarray, pandas.DataFrame, Dict[str, numpy.ndarray]], dtype: Optional[Union[tensorflow.python.framework.dtypes.DType, Dict[str, tensorflow.python.framework.dtypes.DType]]] = None) Union[numpy.ndarray, pandas.DataFrame, Dict[str, numpy.ndarray]][source]#

Run inference on data batch.

If the provided data is a single array or a dataframe/table with a single column, it will be converted into a single TensorFlow tensor before being passed to the model.

If the provided data is a multi-column table or a dict of numpy arrays, it will be converted into a dict of tensors before being passed to the model. This is useful for multi-modal inputs (for example, a model that accepts both image and text data).

Parameters
  • data – A batch of input data. Either a pandas DataFrame, a numpy array, or a dict of numpy arrays.

  • dtype – The dtypes to use for the tensors. Either a single dtype for all tensors or a mapping from column name to dtype.

Examples

>>> import numpy as np
>>> import tensorflow as tf
>>> from ray.train.tensorflow import TensorflowPredictor
>>>
>>> def build_model():
...     return tf.keras.Sequential(
...         [
...             tf.keras.layers.InputLayer(input_shape=()),
...             tf.keras.layers.Flatten(),
...             tf.keras.layers.Dense(1),
...         ]
...     )
>>>
>>> model = build_model()
>>> model.set_weights([np.array([[2.0]]), np.array([0.0])])
>>> predictor = TensorflowPredictor(model=model)
>>>
>>> data = np.asarray([1, 2, 3])
>>> predictions = predictor.predict(data) 
>>> import pandas as pd
>>> import tensorflow as tf
>>> from ray.train.tensorflow import TensorflowPredictor
>>>
>>> def build_model():
...     input1 = tf.keras.layers.Input(shape=(1,), name="A")
...     input2 = tf.keras.layers.Input(shape=(1,), name="B")
...     merged = tf.keras.layers.Concatenate(axis=1)([input1, input2])
...     output = tf.keras.layers.Dense(2, input_dim=2)(merged)
...     return tf.keras.models.Model(
...         inputs=[input1, input2], outputs=output)
>>>
>>> predictor = TensorflowPredictor(model=build_model())
>>>
>>> # Pandas dataframe.
>>> data = pd.DataFrame([[1, 2], [3, 4]], columns=["A", "B"])
>>>
>>> predictions = predictor.predict(data) 
Returns

Prediction result. The return type will be the same as the input type.

Return type

DataBatchType