ray.train.torch.get_device

ray.train.torch.get_device() -> torch.device

Gets the correct torch device configured for this process.

Returns the torch device for the current worker. If more than one GPU is requested per worker, returns the device with the minimal device index.

Note

If you requested multiple GPUs per worker and want the full list of torch devices, use get_devices().

Assumes that CUDA_VISIBLE_DEVICES is set and contains a superset of the GPU IDs returned by ray.get_gpu_ids().

Examples

Example: Launched 2 workers on the current node, each with 1 GPU

os.environ["CUDA_VISIBLE_DEVICES"] = "2,3"
ray.get_gpu_ids() == [2]
torch.cuda.is_available() == True
get_device() == torch.device("cuda:0")

Example: Launched 4 workers on the current node, each with 1 GPU

os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3"
ray.get_gpu_ids() == [2]
torch.cuda.is_available() == True
get_device() == torch.device("cuda:2")

Example: Launched 2 workers on the current node, each with 2 GPUs

os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3"
ray.get_gpu_ids() == [2,3]
torch.cuda.is_available() == True
get_device() == torch.device("cuda:2")
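The device index in the examples above is the position of the worker's (smallest) assigned GPU ID within CUDA_VISIBLE_DEVICES. A minimal sketch of that mapping, using a hypothetical helper that is not part of Ray's API:

```python
def device_index(visible_devices: str, gpu_ids: list) -> int:
    """Illustrative only: position of the worker's smallest assigned
    GPU ID within the CUDA_VISIBLE_DEVICES list."""
    visible = [int(d) for d in visible_devices.split(",")]
    return visible.index(min(gpu_ids))

# The three examples above:
print(device_index("2,3", [2]))        # -> 0, i.e. cuda:0
print(device_index("0,1,2,3", [2]))    # -> 2, i.e. cuda:2
print(device_index("0,1,2,3", [2, 3])) # -> 2, i.e. cuda:2
```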

You can move a model to the correct device with:

model.to(ray.train.torch.get_device())

instead of manually checking whether CUDA is available:

model.to("cuda" if torch.cuda.is_available() else "cpu")