ray.rllib.algorithms.algorithm_config.AlgorithmConfig
class ray.rllib.algorithms.algorithm_config.AlgorithmConfig(algo_class: type | None = None)
Bases: _Config
An RLlib AlgorithmConfig builds an RLlib Algorithm from a given configuration.
from ray.rllib.algorithms.ppo import PPOConfig
from ray.rllib.algorithms.callbacks import MemoryTrackingCallbacks

# Construct a generic config object, specifying values within different
# sub-categories, e.g. "training".
config = (
    PPOConfig()
    .training(gamma=0.9, lr=0.01)
    .environment(env="CartPole-v1")
    .env_runners(num_env_runners=0)
    .callbacks(MemoryTrackingCallbacks)
)
# A config object can be used to construct the respective Algorithm.
rllib_algo = config.build()
from ray.rllib.algorithms.ppo import PPOConfig
from ray import tune

# In combination with a tune.grid_search:
config = PPOConfig()
config.training(lr=tune.grid_search([0.01, 0.001]))
# Use the `to_dict()` method to get the legacy plain python config dict
# for usage with `tune.Tuner().fit()`.
tune.Tuner("PPO", param_space=config.to_dict())
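Once built, the resulting Algorithm can also be trained and shut down directly; a minimal sketch, assuming a local Ray setup and the CartPole-v1 environment used above:

from ray.rllib.algorithms.ppo import PPOConfig

config = (
    PPOConfig()
    .environment(env="CartPole-v1")
    .training(gamma=0.9, lr=0.01)
    .env_runners(num_env_runners=0)
)
# Construct the PPO Algorithm from this config ...
algo = config.build()
# ... run a single training iteration (returns a results dict) ...
result = algo.train()
# ... and release the algorithm's actors and resources.
algo.stop()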
Methods
Initializes an AlgorithmConfig instance.
Sets the config's API stack settings.
Builds an Algorithm from this AlgorithmConfig (or a copy thereof).
Builds and returns a new Learner object based on settings in self.
Builds and returns a new LearnerGroup object based on settings in self.
Sets the callbacks configuration.
Sets the config's checkpointing settings.
Creates a deep copy of this config and (un)freezes if necessary.
Sets the config's debugging settings.
Sets the rollout worker configuration.
Sets the config's RL-environment settings.
Sets the config's evaluation settings.
Sets the config's experimental settings.
Sets the config's fault tolerance settings.
Sets the config's DL framework settings.
Freezes this config object, such that no attributes can be set anymore.
Creates an AlgorithmConfig from a legacy python config dict.
Returns an instance constructed from the state.
Shim method to help pretend we are a dict.
Returns an AlgorithmConfig object, specific to the given module ID.
Returns the Learner class to use for this algorithm.
Returns the RLModule spec to use for this algorithm.
Creates a full AlgorithmConfig object from self.evaluation_config.
Compiles complete multi-agent config (dict) from the information in self.
.Returns the MultiRLModuleSpec based on the given env/spaces.
Returns the RLModuleSpec based on the given env/spaces.
Automatically infers a proper rollout_fragment_length setting if "auto".
Returns a dict state that can be pickled.
Returns the TorchCompileConfig to use on workers.
Returns whether this config specifies a multi-agent setup.
Shim method to help pretend we are a dict.
Shim method to help pretend we are a dict.
Sets LearnerGroup and Learner worker related configurations.
Sets the config's multi-agent settings.
Sets the config's offline data settings.
Generates and validates a set of config key/value pairs (passed via kwargs).
Shim method to help pretend we are a dict.
Sets the config's python environment settings.
Sets the config's reporting settings.
Specifies resources allocated for an Algorithm and its ray actors/workers.
Sets the config's RLModule settings.
Returns a mapping from str to JSON'able values representing this config.
Converts all settings into a legacy config dict for backward compatibility.
Sets the training related configuration.
Modifies this AlgorithmConfig via the provided python config dict.
Validates all values in this config.
Detects mismatches for train_batch_size vs rollout_fragment_length.
Shim method to help pretend we are a dict.
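As a rough sketch of how several of the methods listed above combine (the environment and the concrete argument values are illustrative assumptions, not defaults):

from ray.rllib.algorithms.algorithm_config import AlgorithmConfig
from ray.rllib.algorithms.ppo import PPOConfig

config = (
    PPOConfig()
    .environment(env="CartPole-v1")
    # `overrides()` produces a partial-override dict that is applied on top of
    # the main config for the evaluation workers only.
    .evaluation(
        evaluation_interval=1,
        evaluation_duration=10,
        evaluation_config=AlgorithmConfig.overrides(explore=False),
    )
)

# Round-trip through the legacy plain python dict representation.
config_dict = config.to_dict()
restored = PPOConfig.from_dict(config_dict)

# Dict-style shim access, copying, and freezing.
print(config.get("gamma"))  # read a single setting by key
frozen = config.copy()      # deep copy of this config ...
frozen.freeze()             # ... that can no longer be modified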
Attributes
True if the specified env is an Atari env.
Returns the Learner sub-class to be used by this Algorithm.
Defines the model configuration used.
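A brief sketch of reading these attributes on a configured object (the printed values depend on the algorithm, framework, and environment chosen):

from ray.rllib.algorithms.ppo import PPOConfig

config = PPOConfig().environment(env="CartPole-v1")

print(config.is_atari)       # False for CartPole-v1
print(config.learner_class)  # the Learner sub-class PPO uses for the configured framework
print(config.model_config)   # the model configuration the RLModule is built with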