How to use Tune with PyTorch

In this walkthrough, we will show you how to integrate Tune into your PyTorch training workflow. We will follow this tutorial from the PyTorch documentation for training a CIFAR10 image classifier.


Hyperparameter tuning can make the difference between an average model and a highly accurate one. Often simple things like choosing a different learning rate or changing a network layer size can have a dramatic impact on your model performance. Fortunately, Tune makes exploring these optimal parameter combinations easy - and works nicely together with PyTorch.

As you will see, we only need to make some slight modifications. In particular, we need to

  1. wrap data loading and training in functions,

  2. make some network parameters configurable,

  3. add checkpointing (optional),

  4. and define the search space for the model tuning.

Optionally, you can seamlessly leverage DistributedDataParallel training for each individual PyTorch model within Tune.


To run this example, you will need to install the following:

$ pip install ray torch torchvision

Setup / Imports

Let’s start with the imports:

from functools import partial
import numpy as np
import os
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from filelock import FileLock
from import random_split
import torchvision
import torchvision.transforms as transforms
import ray
from ray import tune
from ray.tune.schedulers import ASHAScheduler

Most of the imports are needed for building the PyTorch model. Only the last three imports are for Ray Tune.

Data loaders

We wrap the data loaders in their own function and pass a global data directory. This way we can share a data directory between different trials.

def load_data(data_dir="./data"):
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
    ])

    # We add FileLock here because multiple workers will want to
    # download data, and this may cause overwrites since
    # DataLoader is not threadsafe.
    with FileLock(os.path.expanduser("~/.data.lock")):
        trainset = torchvision.datasets.CIFAR10(
            root=data_dir, train=True, download=True, transform=transform)

        testset = torchvision.datasets.CIFAR10(
            root=data_dir, train=False, download=True, transform=transform)

    return trainset, testset

Configurable neural network

We can only tune those parameters that are configurable. In this example, we can specify the layer sizes of the fully connected layers:

class Net(nn.Module):
    def __init__(self, l1=120, l2=84):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, l1)
        self.fc2 = nn.Linear(l1, l2)
        self.fc3 = nn.Linear(l2, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

The train function

Now it gets interesting, because we introduce some changes to the example from the PyTorch documentation.

We wrap the training script in a function train_cifar(config, checkpoint_dir=None). As you can guess, the config parameter will receive the hyperparameters we would like to train with. The checkpoint_dir parameter is used to restore checkpoints and gets filled automatically by Ray Tune. Saving of checkpoints will be covered below.

net = Net(config["l1"], config["l2"])
optimizer = optim.SGD(net.parameters(), lr=config["lr"], momentum=0.9)

if checkpoint_dir:
    checkpoint = os.path.join(checkpoint_dir, "checkpoint")
    model_state, optimizer_state = torch.load(checkpoint)
    net.load_state_dict(model_state)
    optimizer.load_state_dict(optimizer_state)

We also split the training data into a training and validation subset. We thus train on 80% of the data and calculate the validation loss on the remaining 20%. The batch sizes with which we iterate through the training and test sets are configurable as well.
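
Concretely, the 80/20 split and the configurable batch sizes look like this (the same code appears in the full training function below):

test_abs = int(len(trainset) * 0.8)
train_subset, val_subset = random_split(
    trainset, [test_abs, len(trainset) - test_abs])

trainloader =
    train_subset,
    batch_size=int(config["batch_size"]),
    shuffle=True,
    num_workers=8)
valloader =
    val_subset,
    batch_size=int(config["batch_size"]),
    shuffle=True,
    num_workers=8)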

Adding (multi) GPU support with DataParallel

Image classification benefits greatly from GPUs. Luckily, we can continue to use PyTorch’s abstractions in Ray Tune. Thus, we can wrap our model in nn.DataParallel to support data parallel training on multiple GPUs:

device = "cpu"
if torch.cuda.is_available():
    device = "cuda:0"
    if torch.cuda.device_count() > 1:
        net = nn.DataParallel(net)

By using a device variable we make sure that training also works when we have no GPUs available. PyTorch requires us to send our data to the GPU memory explicitly, like this:

for i, data in enumerate(trainloader, 0):
    inputs, labels = data
    inputs, labels =,

The code now supports training on CPUs, on a single GPU, and on multiple GPUs. Notably, Ray also supports fractional GPUs, so we can share GPUs among trials as long as the model still fits in GPU memory. We’ll come back to that later.

Communicating with Ray Tune

The most interesting part is the communication with Tune:

with tune.checkpoint_dir(epoch) as checkpoint_dir:
    path = os.path.join(checkpoint_dir, "checkpoint")((net.state_dict(), optimizer.state_dict()), path)

loss=(val_loss / val_steps), accuracy=correct / total)

Here we first save a checkpoint and then report some metrics back to Tune. Specifically, we send the validation loss and accuracy back to Tune. Tune can then use these metrics to decide which hyperparameter configuration led to the best results. These metrics can also be used to stop badly performing trials early in order to avoid wasting resources on those trials.

The checkpoint saving is optional. However, it is necessary if we want to use advanced schedulers like Population Based Training. In that case, the created checkpoint directory will be passed as the checkpoint_dir parameter to the training function. After training, we can also restore the checkpointed models and validate them on a test set.

Full training function

The full code example looks like this:

def train_cifar(config, checkpoint_dir=None):
    net = Net(config["l1"], config["l2"])

    device = "cpu"
    if torch.cuda.is_available():
        device = "cuda:0"
        if torch.cuda.device_count() > 1:
            net = nn.DataParallel(net)

    criterion = nn.CrossEntropyLoss()
    optimizer = optim.SGD(net.parameters(), lr=config["lr"], momentum=0.9)

    # The `checkpoint_dir` parameter gets passed by Ray Tune when a checkpoint
    # should be restored.
    if checkpoint_dir:
        checkpoint = os.path.join(checkpoint_dir, "checkpoint")
        model_state, optimizer_state = torch.load(checkpoint)
        net.load_state_dict(model_state)
        optimizer.load_state_dict(optimizer_state)

    data_dir = os.path.abspath("./data")
    trainset, testset = load_data(data_dir)

    test_abs = int(len(trainset) * 0.8)
    train_subset, val_subset = random_split(
        trainset, [test_abs, len(trainset) - test_abs])

    trainloader =
        train_subset,
        batch_size=int(config["batch_size"]),
        shuffle=True,
        num_workers=8)
    valloader =
        val_subset,
        batch_size=int(config["batch_size"]),
        shuffle=True,
        num_workers=8)

    for epoch in range(10):  # loop over the dataset multiple times
        running_loss = 0.0
        epoch_steps = 0
        for i, data in enumerate(trainloader, 0):
            # get the inputs; data is a list of [inputs, labels]
            inputs, labels = data
            inputs, labels =,

            # zero the parameter gradients
            optimizer.zero_grad()

            # forward + backward + optimize
            outputs = net(inputs)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()

            # print statistics
            running_loss += loss.item()
            epoch_steps += 1
            if i % 2000 == 1999:  # print every 2000 mini-batches
                print("[%d, %5d] loss: %.3f" % (epoch + 1, i + 1,
                                                running_loss / epoch_steps))
                running_loss = 0.0

        # Validation loss
        val_loss = 0.0
        val_steps = 0
        total = 0
        correct = 0
        for i, data in enumerate(valloader, 0):
            with torch.no_grad():
                inputs, labels = data
                inputs, labels =,

                outputs = net(inputs)
                _, predicted = torch.max(, 1)
                total += labels.size(0)
                correct += (predicted == labels).sum().item()

                loss = criterion(outputs, labels)
                val_loss += loss.cpu().numpy()
                val_steps += 1

        # Here we save a checkpoint. It is automatically registered with
        # Ray Tune and will potentially be passed as the `checkpoint_dir`
        # parameter in future iterations.
        with tune.checkpoint_dir(step=epoch) as checkpoint_dir:
            path = os.path.join(checkpoint_dir, "checkpoint")
  ((net.state_dict(), optimizer.state_dict()), path)

, accuracy=correct / total)
    print("Finished Training")

As you can see, most of the code is adapted directly from the example.

Test set accuracy

Commonly the performance of a machine learning model is tested on a hold-out test set with data that has not been used for training the model. We also wrap this in a function:

def test_best_model(best_trial):
    best_trained_model = Net(best_trial.config["l1"], best_trial.config["l2"])
    device = "cuda:0" if torch.cuda.is_available() else "cpu"

    checkpoint_path = os.path.join(best_trial.checkpoint.value, "checkpoint")

    model_state, optimizer_state = torch.load(checkpoint_path)

    trainset, testset = load_data()

    testloader =
        testset, batch_size=4, shuffle=False, num_workers=2)

    correct = 0
    total = 0
    with torch.no_grad():
        for data in testloader:
            images, labels = data
            images, labels =,
            outputs = best_trained_model(images)
            _, predicted = torch.max(, 1)
            total += labels.size(0)
            correct += (predicted == labels).sum().item()

    print("Best trial test set accuracy: {}".format(correct / total))

As you can see, the function determines the device on its own, so we can run the test set validation on a GPU when one is available.

Configuring the search space

Lastly, we need to define Tune’s search space. Here is an example:

config = {
    "l1": tune.sample_from(lambda _: 2**np.random.randint(2, 9)),
    "l2": tune.sample_from(lambda _: 2**np.random.randint(2, 9)),
    "lr": tune.loguniform(1e-4, 1e-1),
    "batch_size": tune.choice([2, 4, 8, 16]),
    "data_dir": data_dir
}

The tune.sample_from() function makes it possible to define your own sampling methods for hyperparameters. In this example, the l1 and l2 parameters should be powers of 2 between 4 and 256, so either 4, 8, 16, 32, 64, 128, or 256. The lr (learning rate) is sampled log-uniformly between 0.0001 and 0.1. Lastly, the batch size is a choice between 2, 4, 8, and 16.
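
If you prefer explicit values over sampling an exponent, the same space for l1 and l2 can also be written with tune.choice. This equivalent formulation is a sketch, not part of the original example:

# Equivalent search space: enumerate the powers of two directly
# instead of sampling an exponent with tune.sample_from().
config = {
    "l1": tune.choice([4, 8, 16, 32, 64, 128, 256]),
    "l2": tune.choice([4, 8, 16, 32, 64, 128, 256]),
    "lr": tune.loguniform(1e-4, 1e-1),
    "batch_size": tune.choice([2, 4, 8, 16]),
}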

In each trial, Tune will now randomly sample a combination of parameters from these search spaces. It will then train a number of models in parallel and find the best performing one among these. We also use the ASHAScheduler, which terminates badly performing trials early.

We wrap the train_cifar function with functools.partial to set the constant data_dir parameter. We can also tell Ray Tune what resources should be available for each trial:

gpus_per_trial = 2
# ...
result =
    partial(train_cifar, data_dir=data_dir),
    resources_per_trial={"cpu": 8, "gpu": gpus_per_trial},
    # ...
)

You can specify the number of CPUs, which are then available, for example, to increase the num_workers of the PyTorch DataLoader instances. The selected number of GPUs is made visible to PyTorch in each trial. Trials do not have access to GPUs that haven’t been requested for them, so you don’t have to worry about two trials using the same set of resources.

Here we can also specify fractional GPUs, so something like gpus_per_trial=0.5 is completely valid. The trials will then share GPUs among each other. You just have to make sure that the models still fit in the GPU memory.
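
For instance, a run where two trials share each GPU could be configured like this (a sketch; the CPU count is illustrative):

# Two trials share each physical GPU. This assumes two copies of
# the model fit into GPU memory at the same time.
result =
    partial(train_cifar, data_dir=data_dir),
    resources_per_trial={"cpu": 4, "gpu": 0.5},
    # ...
)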

After training the models, we will find the best performing one and load the trained network from the checkpoint file. We then obtain the test set accuracy and report everything by printing.

The full main function looks like this:

def main(num_samples=10, max_num_epochs=10, gpus_per_trial=2):
    config = {
        "l1": tune.sample_from(lambda _: 2 ** np.random.randint(2, 9)),
        "l2": tune.sample_from(lambda _: 2 ** np.random.randint(2, 9)),
        "lr": tune.loguniform(1e-4, 1e-1),
        "batch_size": tune.choice([2, 4, 8, 16])
    }
    scheduler = ASHAScheduler(
        max_t=max_num_epochs,
        grace_period=1,
        reduction_factor=2)
    result =
        train_cifar,
        resources_per_trial={"cpu": 2, "gpu": gpus_per_trial},
        config=config,
        metric="loss",
        mode="min",
        num_samples=num_samples,
        scheduler=scheduler)

    best_trial = result.get_best_trial("loss", "min", "last")
    print("Best trial config: {}".format(best_trial.config))
    print("Best trial final validation loss: {}".format(
        best_trial.last_result["loss"]))
    print("Best trial final validation accuracy: {}".format(
        best_trial.last_result["accuracy"]))

    if ray.util.client.ray.is_connected():
        # If using Ray Client, we want to make sure checkpoint access
        # happens on the server. So we wrap `test_best_model` in a Ray task.
        # We have to make sure it gets executed on the same node that
        # ```` is called on.
        from ray.tune.utils.util import force_on_current_node
        remote_fn = force_on_current_node(ray.remote(test_best_model))
        ray.get(remote_fn.remote(best_trial))
    else:
        test_best_model(best_trial)

If you run the code, an example output could look like this:

  Number of trials: 10 (10 TERMINATED)
  | Trial name              | status     | loc   |   l1 |   l2 |          lr |   batch_size |    loss |   accuracy |   training_iteration |
  | train_cifar_87d1f_00000 | TERMINATED |       |   64 |    4 | 0.00011629  |            2 | 1.87273 |     0.244  |                    2 |
  | train_cifar_87d1f_00001 | TERMINATED |       |   32 |   64 | 0.000339763 |            8 | 1.23603 |     0.567  |                    8 |
  | train_cifar_87d1f_00002 | TERMINATED |       |    8 |   16 | 0.00276249  |           16 | 1.1815  |     0.5836 |                   10 |
  | train_cifar_87d1f_00003 | TERMINATED |       |    4 |   64 | 0.000648721 |            4 | 1.31131 |     0.5224 |                    8 |
  | train_cifar_87d1f_00004 | TERMINATED |       |   32 |   16 | 0.000340753 |            8 | 1.26454 |     0.5444 |                    8 |
  | train_cifar_87d1f_00005 | TERMINATED |       |    8 |    4 | 0.000699775 |            8 | 1.99594 |     0.1983 |                    2 |
  | train_cifar_87d1f_00006 | TERMINATED |       |  256 |    8 | 0.0839654   |           16 | 2.3119  |     0.0993 |                    1 |
  | train_cifar_87d1f_00007 | TERMINATED |       |   16 |  128 | 0.0758154   |           16 | 2.33575 |     0.1327 |                    1 |
  | train_cifar_87d1f_00008 | TERMINATED |       |   16 |    8 | 0.0763312   |           16 | 2.31129 |     0.1042 |                    4 |
  | train_cifar_87d1f_00009 | TERMINATED |       |  128 |   16 | 0.000124903 |            4 | 2.26917 |     0.1945 |                    1 |

  Best trial config: {'l1': 8, 'l2': 16, 'lr': 0.0027624906698231976, 'batch_size': 16, 'data_dir': '...'}
  Best trial final validation loss: 1.1815014744281769
  Best trial final validation accuracy: 0.5836
  Best trial test set accuracy: 0.5806

As you can see, most trials have been stopped early in order to avoid wasting resources. The best performing trial achieved a validation accuracy of about 58%, which could be confirmed on the test set.

So that’s it! You can now tune the parameters of your PyTorch models.
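
To launch the tuning run, a minimal entry point could look like this (the argument values are illustrative; set gpus_per_trial=0 on a CPU-only machine):

if __name__ == "__main__":
    # Adjust the number of trials, epochs, and GPUs per trial as needed.
    main(num_samples=10, max_num_epochs=10, gpus_per_trial=0)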

Advanced: Distributed training with DistributedDataParallel

Some models require multiple nodes to train in a short amount of time. Ray Tune allows you to easily do distributed data parallel training in addition to distributed hyperparameter tuning.

You can wrap your model in torch.nn.parallel.DistributedDataParallel to support distributed data parallel training:

from ray.util.sgd.torch import is_distributed_trainable
from torch.nn.parallel import DistributedDataParallel

def train_cifar(config, checkpoint_dir=None, data_dir=None):
    net = Net(config["l1"], config["l2"])

    device = "cpu"

    #### Using distributed data parallel training
    if is_distributed_trainable():
        net = DistributedDataParallel(net)

    if torch.cuda.is_available():
        device = "cuda"

If using checkpointing, be sure to use a special checkpoint context manager, distributed_checkpoint_dir, which avoids redundant checkpointing across multiple processes:

from ray.util.sgd.torch import distributed_checkpoint_dir

#### Using distributed data parallel training
# Inside `def train_cifar(...)`,
# replace tune.checkpoint_dir() with the following
# Avoids redundant checkpointing on different processes.
with distributed_checkpoint_dir(step=epoch) as checkpoint_dir:
    path = os.path.join(checkpoint_dir, "checkpoint")((net.state_dict(), optimizer.state_dict()), path)

Finally, we need to tell Ray Tune to start multiple distributed processes at once by using ray.tune.integration.torch.DistributedTrainableCreator (docs). This is essentially equivalent to running torch.distributed.launch for each hyperparameter trial:

# You'll probably want to be running on a distributed Ray cluster.
# ray.init(address="auto")

from ray.tune.integration.torch import DistributedTrainableCreator

distributed_train_cifar = DistributedTrainableCreator(
    partial(train_cifar, data_dir=data_dir),
    num_workers=2)  # number of parallel workers to use
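
The returned trainable is then passed to like any other trainable. A minimal sketch, reusing the config defined earlier; note that worker resources are set on the creator rather than via resources_per_trial:

# Run the distributed trainable with Tune as usual. Resources are
# handled by DistributedTrainableCreator, so no resources_per_trial.
result =
    distributed_train_cifar,
    config=config,
    num_samples=10)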

See an end-to-end example here.

If you consider switching to PyTorch Lightning to get rid of some of your boilerplate training code, please know that we also have a walkthrough on how to use Tune with PyTorch Lightning models.