Configuring Scale and GPUs#

Increasing the scale of a Ray Train training run is simple and can be done in a few lines of code. The main interface for this is the ScalingConfig, which configures the number of workers and the resources they should use.

In this guide, a worker refers to a Ray Train distributed training worker, which is a Ray Actor that runs your training function.

Increasing the number of workers#

The main interface to control parallelism in your training code is the number of workers. This can be done by setting the num_workers attribute of the ScalingConfig:

from ray.train import ScalingConfig

scaling_config = ScalingConfig(
    num_workers=8
)
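
The ScalingConfig is then passed to a trainer, which launches that many training workers. A minimal sketch, assuming a train_func training function is defined elsewhere (shown here with TorchTrainer, but other framework trainers accept the same scaling_config argument):

from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer


def train_func(config):
    # Hypothetical placeholder for your training loop.
    ...


trainer = TorchTrainer(
    train_func,
    scaling_config=ScalingConfig(num_workers=8),
)
trainer.fit()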

Using GPUs#

To use GPUs, pass use_gpu=True to the ScalingConfig. This will request one GPU per training worker. In the example below, training will run on 8 GPUs (8 workers, each using one GPU).

from ray.train import ScalingConfig

scaling_config = ScalingConfig(
    num_workers=8,
    use_gpu=True
)

Using GPUs in the training function#

When use_gpu=True is set, Ray Train automatically sets up environment variables (such as CUDA_VISIBLE_DEVICES) in your training function so that the GPUs can be detected and used.
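
For example, a minimal sketch of inspecting that variable inside the training function (assuming use_gpu=True and a CUDA-capable node; the exact value depends on which GPUs Ray assigns to the worker):

import os


def train_func(config):
    # Ray Train restricts CUDA_VISIBLE_DEVICES so that each worker only
    # sees the GPUs it has been assigned.
    print("Visible GPUs:", os.environ.get("CUDA_VISIBLE_DEVICES"))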

You can get the associated devices with ray.train.torch.get_device().

import torch
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer, get_device


def train_func(config):
    assert torch.cuda.is_available()

    device = get_device()
    assert device == torch.device("cuda:0")


trainer = TorchTrainer(
    train_func,
    scaling_config=ScalingConfig(
        num_workers=1,
        use_gpu=True
    )
)
trainer.fit()

Setting the resources per worker#

If you want to allocate more than one CPU or GPU per training worker, or if you defined custom cluster resources, set the resources_per_worker attribute:

from ray.train import ScalingConfig

scaling_config = ScalingConfig(
    num_workers=8,
    resources_per_worker={
        "CPU": 4,
        "GPU": 2,
    },
    use_gpu=True,
)

Note

If you specify GPUs in resources_per_worker, you also need to set use_gpu=True.

You can also instruct Ray Train to use fractional GPUs. In that case, multiple workers will be assigned the same CUDA device.

from ray.train import ScalingConfig

scaling_config = ScalingConfig(
    num_workers=8,
    resources_per_worker={
        "CPU": 4,
        "GPU": 0.5,
    },
    use_gpu=True,
)
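
Since two workers now share each physical GPU, you may want to bound the memory each worker can allocate. A hedged sketch using PyTorch's per-process memory cap (the 0.5 fraction mirrors the 0.5 GPU request above; adjust it to your workload):

import torch
from ray.train.torch import get_device


def train_func(config):
    device = get_device()
    # Two workers share this CUDA device, so cap each worker to roughly
    # half of the GPU's memory (assumption: tune this for your model).
    torch.cuda.set_per_process_memory_fraction(0.5, device)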

Setting the communication backend (PyTorch)#

Note

This is an advanced setting. In most cases, you don't have to change it.

You can set the PyTorch distributed communication backend (e.g. GLOO or NCCL) by passing a TorchConfig to the TorchTrainer.

See the PyTorch API reference for valid options.

from ray.train import ScalingConfig
from ray.train.torch import TorchConfig, TorchTrainer

trainer = TorchTrainer(
    train_func,
    scaling_config=ScalingConfig(
        num_workers=num_training_workers,
        use_gpu=True,
    ),
    torch_config=TorchConfig(backend="gloo"),
)
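
If you want to verify which backend is in use, here is a small sketch (Ray Train initializes the torch.distributed process group before your training function runs, so the backend can be queried directly):

import torch.distributed as dist


def train_func(config):
    # With TorchConfig(backend="gloo"), the initialized process group
    # reports "gloo" here.
    assert dist.get_backend() == "gloo"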

Trainer resources#

So far we’ve configured resources for each training worker. Technically, each training worker is a Ray Actor. Ray Train also schedules an actor for the Trainer object when you call Trainer.fit().

This object often only manages lightweight communication between the training workers. You can still specify its resources, which can be useful if you implemented your own Trainer that does heavier processing.

from ray.train import ScalingConfig

scaling_config = ScalingConfig(
    num_workers=8,
    trainer_resources={
        "CPU": 4,
        "GPU": 1,
    }
)

By default, the trainer uses 1 CPU. If you have a cluster with 8 CPUs and want to start 4 training workers with 2 CPUs each, this will not work, as the total number of required CPUs will be 9 (4 * 2 + 1). In that case, you can specify the trainer resources to use 0 CPUs:

from ray.train import ScalingConfig

scaling_config = ScalingConfig(
    num_workers=4,
    resources_per_worker={
        "CPU": 2,
    },
    trainer_resources={
        "CPU": 0,
    }
)