ray.rllib.algorithms.algorithm_config.AlgorithmConfig.resources

AlgorithmConfig.resources(*, num_cpus_for_main_process: int | None = <ray.rllib.utils.from_config._NotProvided object>, num_gpus: int | float | None = <ray.rllib.utils.from_config._NotProvided object>, _fake_gpus: bool | None = <ray.rllib.utils.from_config._NotProvided object>, placement_strategy: str | None = <ray.rllib.utils.from_config._NotProvided object>, num_cpus_per_worker=-1, num_gpus_per_worker=-1, custom_resources_per_worker=-1, num_learner_workers=-1, num_cpus_per_learner_worker=-1, num_gpus_per_learner_worker=-1, local_gpu_idx=-1, num_cpus_for_local_worker=-1) → AlgorithmConfig

Specifies resources allocated for an Algorithm and its Ray actors/workers.
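
A minimal usage sketch, assuming PPO and the CartPole-v1 environment purely for illustration (the concrete values are assumptions; any AlgorithmConfig subclass works the same way):

from ray.rllib.algorithms.ppo import PPOConfig

# Give the main Algorithm process 1 CPU (only relevant when running through
# Tune) and request no GPUs for the algorithm process.
config = (
    PPOConfig()
    .environment("CartPole-v1")
    .resources(
        num_cpus_for_main_process=1,
        num_gpus=0,
    )
)
algo = config.build()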

Parameters:
  • num_cpus_for_main_process – Number of CPUs to allocate for the main algorithm process that runs Algorithm.training_step(). Note: This is only relevant when running RLlib through Tune. Otherwise, Algorithm.training_step() runs in the main program (driver).

  • num_gpus – Number of GPUs to allocate to the algorithm process. Note that not all algorithms can take advantage of GPUs. Support for multi-GPU is currently only available for tf-[PPO/IMPALA/DQN/PG]. This can be fractional (e.g., 0.3 GPUs).

  • _fake_gpus – Set to True for debugging (multi-)GPU functionality on a CPU machine. In this case, GPU towers are simulated by graphs located on CPUs. Use num_gpus to test for different numbers of fake GPUs (see the debugging sketch after this parameter list).

  • placement_strategy – The strategy for the placement group factory returned by Algorithm.default_resource_request(). A PlacementGroup defines which devices (resources) should always be co-located on the same node. For example, an Algorithm with 2 EnvRunners and 1 Learner (with 1 GPU) requests a placement group with the bundles: [{"cpu": 1}, {"gpu": 1, "cpu": 1}, {"cpu": 1}, {"cpu": 1}], where the first bundle is for the local (main Algorithm) process, the second one is for the 1 Learner worker, and the last 2 bundles are for the two EnvRunners. These bundles can then be "placed" on the same or on different nodes depending on the value of placement_strategy:

    - "PACK": Packs bundles into as few nodes as possible.
    - "SPREAD": Places bundles across distinct nodes as evenly as possible.
    - "STRICT_PACK": Packs bundles into one node. The group is not allowed to span multiple nodes.
    - "STRICT_SPREAD": Packs bundles across distinct nodes.

    See the placement sketch after this parameter list.
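
A hedged debugging sketch for the GPU settings above, assuming a CPU-only machine (PPO and the value 2 are illustrative; _fake_gpus only simulates GPU towers on CPUs):

from ray.rllib.algorithms.ppo import PPOConfig

# Pretend 2 GPUs exist so multi-GPU code paths can be exercised on a
# machine without any physical GPUs.
config = PPOConfig().resources(num_gpus=2, _fake_gpus=True)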
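A placement sketch, with "STRICT_PACK" chosen only as an example (any of the four strategy strings above is valid):

from ray.rllib.algorithms.ppo import PPOConfig

# Keep all bundles (main process, Learner, EnvRunners) together on one node.
config = (
    PPOConfig()
    .environment("CartPole-v1")
    .resources(placement_strategy="STRICT_PACK")
)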

Returns:

This updated AlgorithmConfig object.