Training a Torch Classifier

This tutorial demonstrates how to train an image classifier using the Ray AI Runtime (AIR).

You should be familiar with PyTorch before starting the tutorial. If you need a refresher, read PyTorch’s training a classifier tutorial.

Before you begin

  • Install the Ray AI Runtime. You’ll need Ray 1.13 or later to run this example.

!pip install 'ray[air]'
  • Install requests, torch, and torchvision

!pip install requests torch torchvision
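
Ray starts a local instance automatically the first time you use it, so no extra setup is needed for this tutorial. If you want to run against an existing Ray cluster instead, you can connect to it explicitly up front (a minimal sketch, not part of the original tutorial; the address depends on your cluster setup):

import ray

# Optional: connect to an already-running Ray cluster instead of starting a
# local one. "auto" assumes a cluster is reachable from this machine.
ray.init(address="auto")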

Load and normalize CIFAR-10

We’ll train our classifier on a popular image dataset called CIFAR-10.

First, let’s load CIFAR-10 into a Ray Dataset.

import ray
from ray.data.datasource import SimpleTorchDatasource
import torchvision
import torchvision.transforms as transforms

transform = transforms.Compose(
    [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]
)

def train_dataset_factory():
    return torchvision.datasets.CIFAR10(root="./data", download=True, train=True, transform=transform)

def test_dataset_factory():
    return torchvision.datasets.CIFAR10(root="./data", download=True, train=False, transform=transform)

train_dataset: ray.data.Dataset = ray.data.read_datasource(
    SimpleTorchDatasource(), dataset_factory=train_dataset_factory
)
test_dataset: ray.data.Dataset = ray.data.read_datasource(
    SimpleTorchDatasource(), dataset_factory=test_dataset_factory
)
2022-07-11 14:46:56,693	WARNING read_api.py:264 -- The number of blocks in this dataset (1) limits its parallelism to 1 concurrent tasks. This is much less than the number of available CPU slots in the cluster. Use `.repartition(n)` to increase the number of dataset blocks.
(_prepare_read pid=54936) 2022-07-11 14:46:56,687	WARNING torch_datasource.py:56 -- `SimpleTorchDatasource` doesn't support parallel reads. The `parallelism` argument will be ignored.
(_execute_read_task pid=54936) Files already downloaded and verified
Map_Batches:   0%|          | 0/1 [01:45<?, ?it/s]
2022-07-11 14:47:12,009	WARNING read_api.py:264 -- The number of blocks in this dataset (1) limits its parallelism to 1 concurrent tasks. This is much less than the number of available CPU slots in the cluster. Use `.repartition(n)` to increase the number of dataset blocks.
(_prepare_read pid=54936) 2022-07-11 14:47:12,001	WARNING torch_datasource.py:56 -- `SimpleTorchDatasource` doesn't support parallel reads. The `parallelism` argument will be ignored.
(_execute_read_task pid=54936) Files already downloaded and verified
train_dataset
Dataset(num_blocks=1, num_rows=50000, schema={image: object, label: int64})

Note that SimpleTorchDatasource loads all data into memory, so you shouldn’t use it with larger datasets.
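
If you see warnings like the ones above about limited parallelism, you can split the dataset into more blocks so that later map_batches calls can run in parallel (a minimal sketch, not part of the original tutorial; the block count of 8 is an arbitrary choice):

train_dataset = train_dataset.repartition(8)
test_dataset = test_dataset.repartition(8)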

Next, let’s represent our data using pandas dataframes instead of tuples. This lets us call methods like Dataset.iter_torch_batches later in the tutorial.

from typing import List, Tuple
import pandas as pd
from ray.data.extensions import TensorArray
import torch


def convert_batch_to_pandas(batch: List[Tuple[torch.Tensor, int]]) -> pd.DataFrame:
    images = TensorArray([image.numpy() for image, _ in batch])
    labels = [label for _, label in batch]

    df = pd.DataFrame({"image": images, "label": labels})

    return df


train_dataset = train_dataset.map_batches(convert_batch_to_pandas)
test_dataset = test_dataset.map_batches(convert_batch_to_pandas)
Read->Map_Batches:   0%|          | 0/1 [00:00<?, ?it/s]
(_map_block_nosplit pid=54936) Files already downloaded and verified
Read->Map_Batches: 100%|██████████| 1/1 [00:10<00:00, 10.44s/it]
Read->Map_Batches:   0%|          | 0/1 [00:00<?, ?it/s]
(_map_block_nosplit pid=54936) Files already downloaded and verified
Read->Map_Batches: 100%|██████████| 1/1 [00:02<00:00,  2.56s/it]
train_dataset
Dataset(num_blocks=1, num_rows=50000, schema={image: TensorDtype, label: int64})
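
As a quick, optional check (not part of the original tutorial), you can pull a single small batch to confirm that the new schema works with Dataset.iter_torch_batches:

# Expect an image tensor of shape (4, 3, 32, 32) and a label tensor of shape (4,).
sample_batch = next(iter(train_dataset.iter_torch_batches(batch_size=4)))
print(sample_batch["image"].shape, sample_batch["label"].shape)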

Train a convolutional neural network

Now that we’ve created our datasets, let’s define the training logic.

import torch
import torch.nn as nn
import torch.nn.functional as F


class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = torch.flatten(x, 1)  # flatten all dimensions except batch
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
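
This is the same small convolutional network used in PyTorch’s tutorial: two convolution-and-pooling stages followed by three fully connected layers, ending in 10 outputs, one per CIFAR-10 class. As an optional sanity check (not part of the original tutorial), you can push a dummy batch through the untrained network to confirm the output shape:

net = Net()
dummy_images = torch.randn(4, 3, 32, 32)  # a fake batch of four CIFAR-10-sized images
print(net(dummy_images).shape)  # torch.Size([4, 10]) -- one score per class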

We define our training logic in a function called train_loop_per_worker.

train_loop_per_worker contains regular PyTorch code with a few notable exceptions:

  • We wrap the model with train.torch.prepare_model so it’s ready for distributed training.
  • We fetch our shard of the training data with session.get_dataset_shard and iterate over it with iter_torch_batches.
  • We report metrics and save model state with session.report and a Checkpoint.

from ray import train
from ray.air import session, Checkpoint
import torch.optim as optim


def train_loop_per_worker(config):
    model = train.torch.prepare_model(Net())

    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

    train_dataset_shard = session.get_dataset_shard("train").iter_torch_batches(
        batch_size=config["batch_size"],
    )

    for epoch in range(2):
        running_loss = 0.0
        for i, data in enumerate(train_dataset_shard):
            # get the inputs and labels
            inputs, labels = data["image"], data["label"]

            # zero the parameter gradients
            optimizer.zero_grad()

            # forward + backward + optimize
            outputs = model(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()

            # print statistics
            running_loss += loss.item()
            if i % 2000 == 1999:  # print every 2000 mini-batches
                print(f"[{epoch + 1}, {i + 1:5d}] loss: {running_loss / 2000:.3f}")
                running_loss = 0.0

        session.report(
            dict(running_loss=running_loss),
            checkpoint=Checkpoint.from_dict(dict(model=model.module.state_dict())),
        )

Finally, we can train our model. This should take a few minutes to run.

from ray.train.torch import TorchTrainer
from ray.air.config import ScalingConfig

trainer = TorchTrainer(
    train_loop_per_worker=train_loop_per_worker,
    train_loop_config={"batch_size": 2},
    datasets={"train": train_dataset},
    scaling_config=ScalingConfig(num_workers=2)
)
result = trainer.fit()
latest_checkpoint = result.checkpoint
== Status ==
Current time: 2022-07-11 15:02:20 (running for 00:00:02.72)
Memory usage on this node: 30.2/64.0 GiB
Using FIFO scheduling algorithm.
Resources requested: 3.0/16 CPUs, 0/0 GPUs, 0.0/33.52 GiB heap, 0.0/2.0 GiB objects
Result logdir: /Users/jiaodong/ray_results/TorchTrainer_2022-07-11_15-02-17
Number of trials: 1/1 (1 RUNNING)
Trial name                 status    loc
TorchTrainer_2134d_00000   RUNNING   127.0.0.1:61819


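trainer.fit() blocks until training finishes and returns a Result object. A couple of things you may want to look at afterwards (a small sketch, not part of the original tutorial):

print(result.metrics)     # includes the running_loss we reported from the training loop
print(result.checkpoint)  # the checkpoint saved with session.report; we reuse it below
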
To scale your training script, create a Ray Cluster and increase the number of workers. If your cluster contains GPUs, add use_gpu=True to your ScalingConfig.

scaling_config=ScalingConfig(num_workers=8, use_gpu=True)

Test the network on the test data

Let’s see how our model performs.

To classify images in the test dataset, we’ll need to create a Predictor.

Predictors load models from checkpoints and efficiently perform inference. In contrast to TorchPredictor, which performs inference on a single batch, BatchPredictor performs inference on an entire dataset. Because we want to classify all of the images in the test dataset, we’ll use a BatchPredictor.

from ray.train.torch import TorchPredictor
from ray.train.batch_predictor import BatchPredictor

predict_dataset = test_dataset.drop_columns(cols=["label"])
batch_predictor = BatchPredictor.from_checkpoint(
    checkpoint=latest_checkpoint,
    predictor_cls=TorchPredictor,
    model=Net(),
)

outputs: ray.data.Dataset = batch_predictor.predict(
    data=test_dataset, dtype=torch.float, feature_columns=["image"], keep_columns=["label"]
)
Map_Batches: 100%|██████████| 1/1 [00:00<00:00,  9.24it/s]
Map Progress (1 actors 1 pending):   0%|          | 0/1 [00:01<?, ?it/s](BlockWorker pid=57241) /Users/jiaodong/anaconda3/envs/ray/lib/python3.6/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at  ../c10/core/TensorImpl.h:1156.)
(BlockWorker pid=57241)   return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
Map Progress (1 actors 1 pending): 100%|██████████| 1/1 [00:02<00:00,  2.89s/it]

Our model outputs a list of energies for each class. To classify an image, we choose the class that has the highest energy.

import numpy as np

def convert_logits_to_classes(df):
    best_class = df["predictions"].map(lambda x: x.argmax())
    df["prediction"] = best_class
    return df

predictions = outputs.map_batches(
    convert_logits_to_classes, batch_format="pandas"
)

predictions.show(1)
Map_Batches: 100%|██████████| 1/1 [00:00<00:00, 27.99it/s]
{'prediction': 3}

Now that we’ve classified all of the images, let’s figure out which ones were classified correctly. The predictions dataset contains the predicted labels, and because we passed keep_columns=["label"] when predicting, it also contains the true labels. To determine whether an image was classified correctly, we check whether its predicted label matches its actual label.

def calculate_prediction_scores(df):
    df["correct"] = df["prediction"] == df["label"]
    return df[["prediction", "label", "correct"]]

scores = predictions.map_batches(calculate_prediction_scores)

scores.show(1)
Map_Batches: 100%|██████████| 1/1 [00:00<00:00, 13.60it/s]
{'prediction': 3, 'label': 3, 'correct': True}

To compute our test accuracy, we’ll count how many images the model classified correctly and divide that number by the total number of test images.

scores.sum(on="correct") / scores.count()
Shuffle Map: 100%|██████████| 1/1 [00:00<00:00, 13.84it/s]
Shuffle Reduce: 100%|██████████| 1/1 [00:00<00:00, 21.57it/s]
0.5564
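
The overall accuracy hides how the model does on individual classes. For a per-class breakdown, one option (a sketch, not part of the original tutorial; it assumes the 10,000-row scores dataset comfortably fits in memory) is to pull the scores into pandas:

scores_df = scores.to_pandas()
print(scores_df.groupby("label")["correct"].mean())  # per-class accuracy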

Deploy the network and make a prediction

Our model seems to perform decently, so let’s deploy it to an endpoint. This lets us make predictions over HTTP.

from ray import serve
from ray.serve import PredictorDeployment
from ray.serve.http_adapters import NdArray

def json_to_numpy(payload: NdArray) -> np.ndarray:
    """Accepts an NdArray JSON from an HTTP body and converts it to a NumPy array."""
    # Have to explicitly convert to float since np.array reads as a double.
    arr = np.array(payload.array, dtype=np.float32)
    return arr

serve.start(detached=True)
deployment = PredictorDeployment.options(name="my-deployment")
deployment.deploy(
    TorchPredictor,
    latest_checkpoint,
    batching_params=False,
    model=Net(),
    http_adapter=json_to_numpy,
)
(ServeController pid=57464) INFO 2022-07-11 14:51:04,982 controller 57464 checkpoint_path.py:17 - Using RayInternalKVStore for controller checkpoint and recovery.
(ServeController pid=57464) INFO 2022-07-11 14:51:04,985 controller 57464 http_state.py:118 - Starting HTTP proxy with name 'SERVE_CONTROLLER_ACTOR:SERVE_PROXY_ACTOR-node:127.0.0.1-0' on node 'node:127.0.0.1-0' listening on '127.0.0.1:8000'
(HTTPProxyActor pid=57494) INFO:     Started server process [57494]
(ServeController pid=57464) INFO 2022-07-11 14:51:06,352 controller 57464 deployment_state.py:1281 - Adding 1 replicas to deployment 'my-deployment'.

Let’s classify a test image.

batch = test_dataset.take(1)
array = np.expand_dims(np.array(batch[0]["image"]), axis=0)
array.shape
(1, 3, 32, 32)

You can perform inference against a deployed model by posting a dictionary with an "array" key. To learn more about the default input schema, read the NdArray documentation.

import requests

payload = {"array": array.tolist()}
response = requests.post(deployment.url, json=payload)
response.json()
[[-1.159639835357666,
  -1.4475929737091064,
  -0.06824108958244324,
  1.7863765954971313,
  0.19239971041679382,
  0.8146302700042725,
  0.6199826598167419,
  -0.4597688317298889,
  0.7662580013275146,
  -1.104752779006958]]
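
The response contains the same raw per-class energies we saw during batch prediction. To turn them into a class label, take the index of the highest energy, mirroring convert_logits_to_classes above:

logits = np.array(response.json())
predicted_class = int(logits.argmax(axis=1)[0])
print(predicted_class)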