ray.rllib.core.learner.learner.Learner.register_optimizer

Learner.register_optimizer(*, module_id: str = '__all_modules__', optimizer_name: str = 'default_optimizer', optimizer: torch.optim.Optimizer | tf.keras.optimizers.Optimizer, params: Sequence[torch.Tensor | tf.Variable], lr_or_lr_schedule: float | List[List[int | float]] | None = None) → None

Registers an optimizer with a ModuleID, name, param list and lr-scheduler.

Use this method in your custom implementations of either self.configure_optimizers() or self.configure_optimizers_for_module() (you should only override one of these!). If you register a learning rate schedule together with an optimizer, RLlib will automatically keep this optimizer’s learning rate updated throughout the training process. Alternatively, you can construct your optimizers directly with a fixed learning rate and manage learning rate scheduling or updating yourself.

Parameters:
  • module_id – The module_id under which to register the optimizer. If not provided, will assume ALL_MODULES.

  • optimizer_name – The name (str) of the optimizer. If not provided, will assume DEFAULT_OPTIMIZER.

  • optimizer – The already instantiated optimizer object to register.

  • params – A list of parameters (framework-specific variables) that will be trained/updated by the optimizer.

  • lr_or_lr_schedule – An optional fixed learning rate or learning rate schedule setup. If provided, RLlib will automatically keep the optimizer’s learning rate updated.
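
A minimal sketch of how a call to this method might look inside a custom TorchLearner subclass. The optimizer name "my_adam", the choice of Adam, and the example schedule values are purely illustrative, and the exact configure_optimizers_for_module() signature can vary slightly between RLlib versions:

```python
import torch

from ray.rllib.core.learner.torch.torch_learner import TorchLearner


class MyTorchLearner(TorchLearner):
    # Note: the `config` argument is an assumption based on recent RLlib
    # versions; check your version's signature before overriding.
    def configure_optimizers_for_module(self, module_id, config=None):
        # Collect the trainable parameters of this module.
        module = self.module[module_id]
        params = self.get_parameters(module)

        # Instantiate the optimizer yourself, then hand it to RLlib.
        optimizer = torch.optim.Adam(params)

        # Register the optimizer under this module_id. Passing
        # `lr_or_lr_schedule` (here: anneal from 1e-3 at timestep 0 to
        # 1e-5 at timestep 1M) lets RLlib keep the learning rate
        # updated throughout training.
        self.register_optimizer(
            module_id=module_id,
            optimizer_name="my_adam",
            optimizer=optimizer,
            params=params,
            lr_or_lr_schedule=[[0, 1e-3], [1_000_000, 1e-5]],
        )
```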