ray.rllib.algorithms.algorithm_config.AlgorithmConfig.learners#

AlgorithmConfig.learners(*, num_learners: int | None = <ray.rllib.utils.from_config._NotProvided object>, num_cpus_per_learner: int | float | None = <ray.rllib.utils.from_config._NotProvided object>, num_gpus_per_learner: int | float | None = <ray.rllib.utils.from_config._NotProvided object>, local_gpu_idx: int | None = <ray.rllib.utils.from_config._NotProvided object>)[source]#

Sets LearnerGroup and Learner worker-related configurations.

Parameters:
  • num_learners – Number of Learner workers used for updating the RLModule. A value of 0 means training takes place on a local Learner in the main process, either on its CPUs or on one GPU (determined by num_gpus_per_learner). For multi-GPU training, set num_learners to a value > 1 and set num_gpus_per_learner accordingly. For example, with 4 GPUs total and a model that fits on one GPU, use num_learners=4 and num_gpus_per_learner=1; with 4 GPUs total and a model that requires 2 GPUs, use num_learners=2 and num_gpus_per_learner=2.

  • num_cpus_per_learner – Number of CPUs allocated per Learner worker. Only needs to be set if a custom processing pipeline inside each Learner requires multiple CPU cores. Ignored if num_learners=0.

  • num_gpus_per_learner – Number of GPUs allocated per Learner worker. If num_learners=0, any value greater than 0 runs the training on a single GPU in the main process, while a value of 0 runs the training on the main process CPUs. If num_gpus_per_learner is greater than 0, you shouldn't change num_cpus_per_learner from its default value of 1.

  • local_gpu_idx – If num_gpus_per_learner > 0 and num_learners < 2, RLlib uses this GPU index for training. This is an index into the available CUDA devices. For example, if os.environ["CUDA_VISIBLE_DEVICES"] = "1" and local_gpu_idx=0, RLlib uses the GPU with ID=1 on the node.
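The GPU-budget arithmetic in the num_learners description can be sketched as a small helper. Note that split_gpu_budget is a hypothetical illustration, not part of the RLlib API: given a total GPU budget and the number of GPUs one model copy needs, it derives matching num_learners and num_gpus_per_learner values.

```python
def split_gpu_budget(total_gpus: int, gpus_per_model: int) -> tuple[int, int]:
    """Derive (num_learners, num_gpus_per_learner) from a total GPU budget.

    Hypothetical helper illustrating the examples in the num_learners
    parameter description; it is not part of RLlib.
    """
    if total_gpus % gpus_per_model != 0:
        raise ValueError("total_gpus must be divisible by gpus_per_model")
    # Each Learner worker gets exactly as many GPUs as one model copy needs.
    num_gpus_per_learner = gpus_per_model
    # The remaining parallelism becomes the number of Learner workers.
    num_learners = total_gpus // gpus_per_model
    return num_learners, num_gpus_per_learner

# 4 GPUs total, model fits on 1 GPU -> 4 Learner workers with 1 GPU each.
print(split_gpu_budget(4, 1))  # (4, 1)
# 4 GPUs total, model requires 2 GPUs -> 2 Learner workers with 2 GPUs each.
print(split_gpu_budget(4, 2))  # (2, 2)
```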

Returns:

This updated AlgorithmConfig object.
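Because the method returns the updated AlgorithmConfig, it can be chained with the other config setters. A minimal sketch, assuming PPO and the 4-GPUs-total / 2-GPUs-per-model scenario from the num_learners description (the CartPole-v1 environment is chosen only for illustration):

```python
from ray.rllib.algorithms.ppo import PPOConfig

# 4 GPUs total, model requires 2 GPUs: 2 Learner workers with 2 GPUs each.
config = (
    PPOConfig()
    .environment("CartPole-v1")
    .learners(
        num_learners=2,
        num_gpus_per_learner=2,
    )
)
```

Since .learners() returns the config object itself, it composes with any other AlgorithmConfig setter in a single fluent expression.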