Ray Train FAQ¶
How fast is Ray Train compared to PyTorch, TensorFlow, etc.?¶
At its core, training speed should be the same - while Ray Train launches distributed training workers via Ray Actors, communication during training (e.g. gradient synchronization) is handled by the backend training framework itself.
For example, when running Ray Train with the TorchTrainer, distributed training communication is done with Torch's DistributedDataParallel.
How do I set resources?¶
By default, each worker will reserve 1 CPU resource, and an additional 1 GPU resource if use_gpu=True is set.
To override these resource requests or request additional custom resources, you can set resources_per_worker in the ScalingConfig passed to the Trainer.
How can I use Matplotlib with Ray Train?¶
If you try to create a Matplotlib plot in the training function, you may encounter an error:
UserWarning: Starting a Matplotlib GUI outside of the main thread will likely fail.
To handle this, consider the following approaches:
If there is no dependency on any code in your training function, simply move the Matplotlib logic out and execute it before or after Trainer.fit().
If you are plotting metrics, you can pass the metrics via
train.report() and create a custom callback to plot the results.
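A simple variant of the second approach is to plot the reported metrics after training completes, using a non-interactive backend so no GUI thread is needed (the `plot_metrics` helper and file name are hypothetical; the loss values stand in for metrics collected via train.report()):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend: safe outside the main thread / without a display
import matplotlib.pyplot as plt

def plot_metrics(losses, path="loss_curve.png"):
    """Plot a loss curve from metrics gathered via train.report()."""
    fig, ax = plt.subplots()
    ax.plot(losses, marker="o")
    ax.set_xlabel("iteration")
    ax.set_ylabel("loss")
    fig.savefig(path)
    plt.close(fig)
    return path

# e.g. after trainer.fit(), pull the reported values out of the result
# and plot them in the driver process, not in the training workers.
out = plot_metrics([0.9, 0.5, 0.3, 0.2])
```

Because the plotting happens in the driver process after fit() returns, it never runs inside a worker thread and the Matplotlib warning does not occur.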