Note

Ray 2.10.0 introduces the alpha stage of RLlib’s “new API stack”. The Ray Team plans to transition algorithms, example scripts, and documentation to the new code base, thereby incrementally replacing the “old API stack” (e.g., ModelV2, Policy, RolloutWorker) throughout the subsequent minor releases leading up to Ray 3.0.

Note, however, that so far only PPO (single- and multi-agent) and SAC (single-agent only) support the “new API stack”, and they continue to run with the old APIs by default. You can continue to use the existing custom (old stack) classes.

See here for more details on how to use the new API stack.

Fault Tolerance And Elastic Training#

RLlib handles common failure modes, such as machine failures, spot instance preemption, network outages, or Ray cluster failures.

There are three main areas for RLlib fault tolerance support:

  • Worker recovery

  • Environment fault tolerance

  • Experiment level fault tolerance with Ray Tune

Worker Recovery#

RLlib supports self-recovering and elastic EnvRunnerGroups for both training and evaluation EnvRunner workers. This provides fault tolerance at the worker level.

This means that if you have n EnvRunner workers sitting on different machines and a machine is preempted, RLlib can continue training and evaluation with minimal interruption.

The two properties that RLlib supports here are self-recovery and elasticity:

  • Elasticity: RLlib continues training even when an EnvRunner is removed. For example, if an RLlib trial uses spot instances, nodes may be removed from the cluster, potentially resulting in a subset of workers not getting scheduled. In this case, RLlib continues with whatever healthy EnvRunner instances are left, at a reduced speed.

  • Self-Recovery: When possible, RLlib will attempt to restore any EnvRunner that was previously removed. During restoration, RLlib syncs the latest state over to the restored EnvRunner before new episodes can be sampled.

Turn on worker fault tolerance by setting the config recreate_failed_env_runners to True.
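
A minimal sketch of what this could look like, assuming PPO as the algorithm and that recreate_failed_env_runners lives in the AlgorithmConfig.fault_tolerance() config group (group and method names may differ slightly across RLlib versions):

```python
from ray.rllib.algorithms.ppo import PPOConfig

# Sketch: run several remote EnvRunner workers and let RLlib recreate any
# worker that fails, for example because its node is preempted.
config = (
    PPOConfig()
    .environment("CartPole-v1")
    .env_runners(num_env_runners=4)
    .fault_tolerance(recreate_failed_env_runners=True)
)
algo = config.build()
for _ in range(10):
    algo.train()  # training continues even if individual EnvRunners fail
```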

RLlib achieves this by utilizing a state-aware and fault-tolerant actor manager. Under the hood, RLlib relies on Ray Core actor fault tolerance to automatically recover failed worker actors.

Env Fault Tolerance#

In addition to worker fault tolerance, RLlib offers fault tolerance at the environment level as well.

Rollout or evaluation workers often run multiple environments in parallel to take advantage of, for example, the parallel computing power that GPUs offer. You can control this with the num_envs_per_env_runner config. It would then be wasteful to reconstruct the entire worker because of errors from a single environment.

In that case, RLlib offers the capability to restart individual environments without bubbling the errors up to higher-level components. You can do that by turning on the restart_failed_sub_environments config.
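
For example, a sketch under the same naming assumptions as the previous example (where exactly restart_failed_sub_environments lives may vary between RLlib versions):

```python
from ray.rllib.algorithms.ppo import PPOConfig

# Sketch: run 4 sub-environments per EnvRunner and restart a failing
# sub-environment in place instead of reconstructing the whole worker.
config = (
    PPOConfig()
    .environment("CartPole-v1")
    .env_runners(num_envs_per_env_runner=4)
    .fault_tolerance(restart_failed_sub_environments=True)
)
```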

Note

Environment restarts are blocking.

A rollout worker will wait until the environment comes back and finishes initialization. So for on-policy algorithms, it may be better to recover at the worker level to make sure training progresses with an elastic worker set while the environments are being reconstructed. More specifically, use the configs num_envs_per_env_runner=1, restart_failed_sub_environments=False, and recreate_failed_env_runners=True.
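
Expressed as a config sketch, with the same caveats about exact method names as in the earlier examples:

```python
from ray.rllib.algorithms.ppo import PPOConfig

# Sketch: for an on-policy algorithm, prefer elastic worker-level recovery
# over blocking per-environment restarts.
config = (
    PPOConfig()
    .environment("CartPole-v1")
    .env_runners(num_envs_per_env_runner=1)
    .fault_tolerance(
        restart_failed_sub_environments=False,
        recreate_failed_env_runners=True,
    )
)
```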

Fault Tolerance and Recovery Provided by Ray Tune#

Ray Tune provides fault tolerance and recovery at the experiment trial level.

When using Ray Tune with RLlib, you can enable periodic checkpointing, which saves the state of the experiment to a user-specified persistent storage location. If a trial fails, Ray Tune will automatically restart it from the latest checkpointed state.
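
As a rough sketch of such a setup (the storage path, checkpoint frequency, and retry count are placeholders, and the exact import paths for RunConfig, CheckpointConfig, and FailureConfig may vary by Ray version):

```python
from ray import train, tune

# Sketch: run PPO as a Tune experiment with periodic checkpoints written to
# persistent storage and automatic restarts of failed trials.
tuner = tune.Tuner(
    "PPO",
    param_space={"env": "CartPole-v1"},
    run_config=train.RunConfig(
        storage_path="s3://my-bucket/rllib-results",  # placeholder storage location
        checkpoint_config=train.CheckpointConfig(checkpoint_frequency=10),
        failure_config=train.FailureConfig(max_failures=3),  # retry failed trials
    ),
)
results = tuner.fit()
```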

Other Miscellaneous Considerations#

By default, RLlib runs health checks during initial worker construction. The whole job will error out if a completely healthy worker fleet cannot be established at the start of a training run. If an environment is flaky by nature, you may want to turn off this feature by setting the config validate_env_runners_after_construction to False.

Lastly, in the extreme case where no healthy workers are left for training, RLlib waits a certain number of iterations for some of the workers to recover before failing the entire training job. You can configure the number of iterations it waits with the config num_consecutive_env_runner_failures_tolerance.
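
A sketch combining these two settings (which config-group method each parameter belongs to, for example env_runners() versus fault_tolerance(), may differ across RLlib versions, and the environment name is a placeholder):

```python
from ray.rllib.algorithms.ppo import PPOConfig

# Sketch: skip the initial EnvRunner health check for a flaky environment and
# bound how long RLlib tolerates unhealthy workers before failing the job.
config = (
    PPOConfig()
    .environment("MyFlakyEnv-v0")  # placeholder environment name
    .env_runners(validate_env_runners_after_construction=False)
    .fault_tolerance(num_consecutive_env_runner_failures_tolerance=100)
)
```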