ray.rllib.core.learner.learner.Learner.add_module#

Learner.add_module(*, module_id: str, module_spec: SingleAgentRLModuleSpec, config_overrides: Dict | None = None, new_should_module_be_updated: Sequence[str] | Callable[[str, MultiAgentBatch | None], bool] | None = None) → MultiAgentRLModuleSpec[source]#

Adds a module to the underlying MultiAgentRLModule.

Changes this Learner’s config in order to make this architectural change permanent with respect to checkpointing.

Parameters:
  • module_id – The ModuleID of the module to be added.

  • module_spec – The ModuleSpec of the module to be added.

  • config_overrides – The AlgorithmConfig overrides that should apply to the new Module, if any.

  • new_should_module_be_updated – An optional sequence of ModuleIDs, or a callable taking a ModuleID and an (optional) MultiAgentBatch and returning whether that module should be updated (trained). If None, the existing setup is kept in place. RLModules whose IDs are not in the sequence (or for which the callable returns False) will not be updated.

Returns:

The new MultiAgentRLModuleSpec (after the change has been performed).
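A minimal usage sketch follows. It assumes an already-built Learner instance named learner (for example, obtained via an Algorithm’s LearnerGroup), a Gymnasium environment env, a hypothetical custom RLModule class MyRLModule, and that the existing module is registered under the default ID "default_policy"; these names are illustrative, not prescribed by this API.

```python
import gymnasium as gym

from ray.rllib.core.rl_module.rl_module import SingleAgentRLModuleSpec

# Hypothetical custom RLModule class; replace with your own implementation.
from my_project.modules import MyRLModule

env = gym.make("CartPole-v1")

# Spec describing the new module to be added.
spec = SingleAgentRLModuleSpec(
    module_class=MyRLModule,
    observation_space=env.observation_space,
    action_space=env.action_space,
)

# `learner` is assumed to be an already-built Learner instance.
# Add the new module under the ID "new_policy" and restrict training to
# the existing default module plus the newly added one.
new_marl_spec = learner.add_module(
    module_id="new_policy",
    module_spec=spec,
    new_should_module_be_updated=["default_policy", "new_policy"],
)

# The returned MultiAgentRLModuleSpec reflects the updated set of modules.
print(new_marl_spec.module_specs.keys())
```

Passing an explicit new_should_module_be_updated list (rather than None) is only needed when the set of trainable modules should change as part of adding the new module.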