ray.train.torch.xla.TorchXLAConfig

class ray.train.torch.xla.TorchXLAConfig(backend: str | None = None, init_method: str = 'env', timeout_s: int = 1800, neuron_parallel_compile: bool = False)[source]

Bases: TorchConfig

Configuration for torch XLA setup. See https://pytorch.org/xla/release/1.13/index.html for more information. Currently, only the "neuron_cores" accelerator (AwsNeuronXLABackend) is supported with the XRT runtime.

PublicAPI (alpha): This API is in alpha and may change before becoming stable.
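A minimal usage sketch, assuming an AWS Neuron (Trainium) cluster: the config is passed to a TorchTrainer through its torch_config argument, and each worker requests the "neuron_cores" resource. The training-loop body, worker count, and resource amounts below are illustrative assumptions, not part of this API reference.

    from ray.train import ScalingConfig
    from ray.train.torch import TorchTrainer
    from ray.train.torch.xla import TorchXLAConfig


    def train_func():
        # Per-worker training loop; torch_xla-based model and optimizer
        # setup (e.g. acquiring the XLA device) would go here.
        ...


    trainer = TorchTrainer(
        train_loop_per_worker=train_func,
        # Defaults shown explicitly; set neuron_parallel_compile=True to
        # pre-populate the Neuron compilation cache (assumed behavior).
        torch_config=TorchXLAConfig(neuron_parallel_compile=False),
        scaling_config=ScalingConfig(
            num_workers=2,  # assumed worker count
            resources_per_worker={"neuron_cores": 1},  # assumed resource key/amount
        ),
    )
    result = trainer.fit()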

Methods

Attributes

backend

backend_cls

init_method

neuron_parallel_compile

timeout_s

train_func_context