ray.rllib.evaluation.worker_set.WorkerSet.__init__

WorkerSet.__init__(*, env_creator: Callable[[EnvContext], Any | gymnasium.Env | None] | None = None, validate_env: Callable[[Any | gymnasium.Env], None] | None = None, default_policy_class: Type[Policy] | None = None, config: AlgorithmConfig | None = None, num_workers: int = 0, local_worker: bool = True, logdir: str | None = None, _setup: bool = True)

Initializes a WorkerSet instance.

Parameters:
  • env_creator – Function that, given an EnvContext (the env config), returns an environment instance.

  • validate_env – Optional callable to validate the generated environment (only called on the worker with index 0). This callable should raise an exception if the environment is invalid.

  • default_policy_class – An optional default Policy class to use inside the (multi-agent) policies dict. If any PolicySpec in that dict has no class defined, this default_policy_class is used. If None, such PolicySpecs will use the Algorithm’s default Policy class.

  • config – Optional AlgorithmConfig (or config dict).

  • num_workers – Number of remote rollout workers to create.

  • local_worker – Whether to also create a local (non-@ray.remote) worker in the returned set (default: True). If num_workers is 0, a local worker is always created.

  • logdir – Optional logging directory for workers.

  • _setup – Whether to actually set up workers. This is only for testing.
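
For illustration, a minimal sketch of constructing a small worker set. The PPOConfig and PPOTorchPolicy import paths reflect one recent RLlib layout, and "CartPole-v1" is just an example environment; both may need adjusting to your installed Ray version:

```python
import gymnasium as gym

from ray.rllib.algorithms.ppo import PPOConfig
from ray.rllib.algorithms.ppo.ppo_torch_policy import PPOTorchPolicy
from ray.rllib.evaluation.worker_set import WorkerSet


def validate_env(env):
    # Raise if the env is unusable; this is only called on worker index 0.
    if env.observation_space is None:
        raise ValueError("Env has no observation space.")


workers = WorkerSet(
    env_creator=lambda ctx: gym.make("CartPole-v1"),  # ctx is an EnvContext
    validate_env=validate_env,
    default_policy_class=PPOTorchPolicy,  # used by PolicySpecs lacking a class
    config=PPOConfig(),
    num_workers=2,      # two @ray.remote rollout workers ...
    local_worker=True,  # ... plus one local (in-process) worker
)
```

Once constructed, workers.local_worker() returns the local RolloutWorker, workers.foreach_worker(...) applies a function across all workers, and workers.stop() terminates the remote actors.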