AlgorithmConfig.resources(*, num_gpus: int | float | None = <ray.rllib.utils.from_config._NotProvided object>, _fake_gpus: bool | None = <ray.rllib.utils.from_config._NotProvided object>, num_cpus_per_worker: int | float | None = <ray.rllib.utils.from_config._NotProvided object>, num_gpus_per_worker: int | float | None = <ray.rllib.utils.from_config._NotProvided object>, num_cpus_for_local_worker: int | None = <ray.rllib.utils.from_config._NotProvided object>, num_learner_workers: int | None = <ray.rllib.utils.from_config._NotProvided object>, num_cpus_per_learner_worker: int | float | None = <ray.rllib.utils.from_config._NotProvided object>, num_gpus_per_learner_worker: int | float | None = <ray.rllib.utils.from_config._NotProvided object>, local_gpu_idx: int | None = <ray.rllib.utils.from_config._NotProvided object>, custom_resources_per_worker: dict | None = <ray.rllib.utils.from_config._NotProvided object>, placement_strategy: str | None = <ray.rllib.utils.from_config._NotProvided object>) -> AlgorithmConfig

Specifies resources allocated for an Algorithm and its ray actors/workers.

  • num_gpus – Number of GPUs to allocate to the algorithm process. Note that not all algorithms can take advantage of GPUs. Support for multi-GPU is currently only available for tf-[PPO/IMPALA/DQN/PG]. This can be fractional (e.g., 0.3 GPUs).

  • _fake_gpus – Set this to True for debugging (multi-)GPU functionality on a CPU machine. GPU towers will be simulated by graphs located on CPUs in this case. Use num_gpus to test for different numbers of fake GPUs.

  • num_cpus_per_worker – Number of CPUs to allocate per worker.

  • num_gpus_per_worker – Number of GPUs to allocate per worker. This can be fractional. This is usually needed only if your env itself requires a GPU (i.e., it is a GPU-intensive video game), or model inference is unusually expensive.

  • num_learner_workers – Number of Learner workers used for training. A value of 0 means training takes place on a local Learner worker on the head node, using either CPUs or one GPU (determined by num_gpus_per_learner_worker). For multi-GPU training, set the number of workers to greater than 1 and set num_gpus_per_learner_worker accordingly (e.g., for 4 GPUs total and a model that needs 2 GPUs: num_learner_workers = 2 and num_gpus_per_learner_worker = 2).

  • num_cpus_per_learner_worker – Number of CPUs allocated per Learner worker. Only necessary if a custom processing pipeline inside each Learner requires multiple CPU cores. Ignored if num_learner_workers = 0.

  • num_gpus_per_learner_worker – Number of GPUs allocated per Learner worker. If num_learner_workers = 0, any value greater than 0 runs the training on a single GPU on the head node, while a value of 0 runs the training on the head node's CPU cores. If num_gpus_per_learner_worker is set to > 0, num_cpus_per_learner_worker should be left at its default value of 1.

  • num_cpus_for_local_worker – Number of CPUs to allocate for the algorithm. Note: this only takes effect when running in Tune. Otherwise, the algorithm runs in the main program (driver).

  • local_gpu_idx – If num_gpus_per_learner_worker > 0 and num_learner_workers < 2, then this GPU index will be used for training. This is an index into the available CUDA devices. For example, if os.environ["CUDA_VISIBLE_DEVICES"] = "1", then a local_gpu_idx of 0 will use the GPU with ID=1 on the node.

  • custom_resources_per_worker – Any custom Ray resources to allocate per worker.

  • placement_strategy – The strategy for the placement group factory returned by Algorithm.default_resource_request(). A PlacementGroup defines which devices (resources) should always be co-located on the same node. For example, an Algorithm with 2 rollout workers, running with num_gpus=1, requests a placement group with the bundles [{"gpu": 1, "cpu": 1}, {"cpu": 1}, {"cpu": 1}], where the first bundle is for the driver and the other two bundles are for the two workers. These bundles can then be "placed" on the same or different nodes depending on the value of placement_strategy:
    - "PACK": Packs bundles into as few nodes as possible.
    - "SPREAD": Places bundles across distinct nodes as evenly as possible.
    - "STRICT_PACK": Packs bundles into one node. The group is not allowed to span multiple nodes.
    - "STRICT_SPREAD": Places bundles on distinct nodes.


Returns: This updated AlgorithmConfig object.