Note
Ray 2.10.0 introduces the alpha stage of RLlib’s “new API stack”. The team is currently transitioning algorithms, example scripts, and documentation to the new code base throughout the subsequent minor releases leading up to Ray 3.0.
See here for more details on how to activate and use the new API stack.
Algorithms#
The Algorithm
class is the highest-level API in RLlib and is responsible for the WHEN and WHAT of an RL algorithm: when to sample new experiences, when to perform a neural network update, and so on. The HOW is delegated to components such as RolloutWorker.
Algorithm is the main entry point for interacting with RLlib's algorithms.
It lets you train and evaluate policies, save an experiment's progress, and restore from
a previously saved experiment when continuing an RL run.
Algorithm
is a subclass
of Trainable
and thus fully supports distributed hyperparameter tuning for RL.
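Because of this, an Algorithm (or its registered name) can be handed directly to Ray Tune. A minimal sketch, assuming Ray and the Gymnasium CartPole-v1 environment are available locally; the environment name, hyperparameter values, and stop criterion are purely illustrative:

```python
from ray import air, tune
from ray.rllib.algorithms.ppo import PPOConfig

# Build an algorithm config and hand it to Tune as the search space.
config = (
    PPOConfig()
    .environment("CartPole-v1")
    .training(lr=tune.grid_search([1e-4, 3e-4]))
)

tuner = tune.Tuner(
    "PPO",  # Registered Algorithm name; Tune builds the Trainable for you.
    param_space=config,
    run_config=air.RunConfig(stop={"training_iteration": 5}),
)
results = tuner.fit()
```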
Algorithm Configuration API#
The AlgorithmConfig
class is the primary way of configuring and building an Algorithm.
You don't use AlgorithmConfig
directly in practice, but rather one of its algorithm-specific
subclasses such as PPOConfig
, each of which comes
with its own additional arguments to its respective .training()
method.
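For example, a typical workflow constructs an algorithm-specific config and chains a few setter calls. A minimal sketch; the environment name and all hyperparameter values are illustrative only:

```python
from ray.rllib.algorithms.ppo import PPOConfig

# Each setter returns the config itself, so calls can be chained.
config = (
    PPOConfig()
    .environment("CartPole-v1")
    # PPO-specific arguments live in PPOConfig.training().
    .training(lr=3e-4, train_batch_size=4000, gamma=0.99)
)
```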
Constructor#
An RLlib AlgorithmConfig builds an RLlib Algorithm from a given configuration.
Public methods#
- Creates a deep copy of this config and (un)freezes if necessary.
- Validates all values in this config.
- Freezes this config object, such that no attributes can be set anymore.
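As a hedged illustration of how these utilities are typically used, continuing the PPOConfig sketch from above (the lr value is illustrative):

```python
# Validate the config explicitly (build() also validates internally).
config.validate()

# Derive an unfrozen deep copy to tweak for a variant experiment.
variant = config.copy(copy_frozen=False)
variant.training(lr=1e-4)

# Freeze the variant so its attributes can no longer be changed.
variant.freeze()
```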
Builder methods#
- Builds an Algorithm from this AlgorithmConfig (or a copy thereof).
- Builds and returns a new LearnerGroup object based on the settings in this config.
- Builds and returns a new Learner object based on the settings in this config.
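Building an Algorithm from a finished config is a one-liner. A sketch, continuing with the config object from the earlier PPOConfig example:

```python
# Turn the finished config into a ready-to-train Algorithm instance.
algo = config.build()
print(type(algo).__name__)  # e.g. "PPO"
```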
Configuration methods#
- Sets the callbacks configuration.
- Sets the config's debugging settings.
- Sets the config's RL-environment settings.
- Sets the config's evaluation settings.
- Sets the config's experimental settings.
- Sets the config's fault-tolerance settings.
- Sets the config's DL framework settings.
- Sets the config's multi-agent settings.
- Sets the config's offline data settings.
- Sets the config's Python environment settings.
- Sets the config's reporting settings.
- Specifies resources allocated for an Algorithm and its ray actors/workers.
- Sets the config's RLModule settings.
- Sets the training-related configuration.
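All of these setters return the config object itself, so they can be chained freely. A sketch combining several of them; the environment name and all values are illustrative, not recommendations:

```python
from ray.rllib.algorithms.ppo import PPOConfig

config = (
    PPOConfig()
    .environment("CartPole-v1")
    .framework("torch")
    .training(lr=1e-4)
    .evaluation(evaluation_interval=1, evaluation_duration=10)
    .debugging(seed=42, log_level="INFO")
    .resources(num_gpus=0)
)
```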
Getter methods#
- Returns the Learner class to use for this algorithm.
- Returns the RLModule spec to use for this algorithm.
- Creates a full AlgorithmConfig object from self.evaluation_config.
- Returns the MultiRLModuleSpec based on the given env/spaces.
- Compiles the complete multi-agent config (dict) from the information in self.multiagent.
- Automatically infers a proper rollout_fragment_length setting if "auto".
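These getters are mostly called internally while the Algorithm is being built, but they can also be used directly on a config. A minimal sketch against the config from the previous example; the return values depend on the algorithm and its settings:

```python
# Resolve the effective rollout fragment length (handles the "auto" setting).
fragment_len = config.get_rollout_fragment_length()

# Default RLModule spec this algorithm would use on the new API stack.
module_spec = config.get_default_rl_module_spec()
```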
Miscellaneous methods#
Detects mismatches between train_batch_size and rollout_fragment_length.
Building Custom Algorithm Classes#
Warning
As of Ray >= 1.9, it is no longer recommended to use the build_trainer()
utility function to create custom Algorithm subclasses.
Instead, follow the simple guidelines here and subclass
Algorithm
directly.
To create a custom Algorithm, subclass the
Algorithm
class
and override one or more of its methods, in particular those listed in the Algorithm API section below.
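For example, here is a minimal sketch of a custom subclass that overrides training_step(). Subclassing PPO (rather than Algorithm itself) and the class name MyPPO are purely illustrative and not part of RLlib:

```python
from ray.rllib.algorithms.ppo import PPO, PPOConfig


class MyPPO(PPO):
    def training_step(self):
        # Run PPO's default sampling + update logic first ...
        results = super().training_step()
        # ... then hook in custom per-iteration logic, e.g. extra metrics.
        return results


# The custom class can be instantiated with a regular config object.
algo = MyPPO(config=PPOConfig().environment("CartPole-v1"))
result = algo.train()
```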
Algorithm API#
Constructor#
- An RLlib algorithm responsible for optimizing one or more Policies.
- Subclasses should override this for custom initialization.
Inference and Evaluation#
- Computes actions for the specified policy on the local worker.
- Computes a single action for the specified policy on the local worker.
- Evaluates the current policy under the evaluation_config settings.
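A hedged usage sketch, assuming algo is a built Algorithm trained on the Gymnasium CartPole-v1 environment and that evaluation settings (e.g. evaluation_interval) were configured:

```python
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset()

# Single-observation inference on the local worker.
action = algo.compute_single_action(obs, explore=False)
obs, reward, terminated, truncated, info = env.step(action)

# Run the configured evaluation loop and inspect its results dict.
eval_results = algo.evaluate()
```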
Saving and Restoring#
- Creates a new algorithm instance from a given checkpoint.
- Recovers an Algorithm from a state object.
- Returns a dict mapping Module/Policy IDs to weights.
- Sets RLModule/Policy weights by Module/Policy ID.
- Exports the model based on export_formats.
- Exports a Policy checkpoint to a local directory and returns an AIR Checkpoint.
- Exports the policy model with the given policy_id to a local directory.
- Restores training state from a given model checkpoint.
- Tries to bring back unhealthy EnvRunners and, if successful, syncs them with the local one.
- Saves the current model state to a checkpoint.
- Exports a checkpoint to a local directory.
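A minimal save-and-restore sketch, assuming algo is a built Algorithm. The checkpoint path is illustrative, and the exact return type of save() has varied across Ray versions:

```python
from ray.rllib.algorithms.algorithm import Algorithm

# Write the algorithm's full state to a local checkpoint directory.
algo.save("/tmp/my_rllib_checkpoint")

# Later (possibly in a different process), rebuild it from that checkpoint.
restored_algo = Algorithm.from_checkpoint("/tmp/my_rllib_checkpoint")
```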
Training#
- Runs one logical iteration of training.
- Default single-iteration logic of an algorithm.
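In user code you typically only call train() in a loop and leave training_step() to the algorithm (or override it in a subclass as shown earlier). A sketch, assuming algo is a built Algorithm:

```python
for i in range(5):
    # One logical training iteration; returns a results dict whose keys
    # differ between the old and new API stacks.
    result = algo.train()
    print(f"finished iteration {i}")

algo.stop()  # Release workers and other resources when done.
```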
Multi Agent#
- Adds a new policy to this Algorithm.
- Removes a policy from this Algorithm.
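A hedged sketch of adding and removing a policy at runtime, assuming algo is a built multi-agent Algorithm on the old API stack and that a policy with ID "main" already exists; both policy IDs are illustrative:

```python
main_policy = algo.get_policy("main")

# Add a new policy that reuses the existing policy's class and spaces.
algo.add_policy(
    policy_id="opponent_v2",
    policy_cls=type(main_policy),
    observation_space=main_policy.observation_space,
    action_space=main_policy.action_space,
)

# Drop it again once it is no longer needed.
algo.remove_policy("opponent_v2")
```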