ray.rllib.evaluation.worker_set.WorkerSet.__init__

WorkerSet.__init__(*, env_creator: Optional[Callable[[EnvContext], Optional[Any]]] = None, validate_env: Optional[Callable[[Any], None]] = None, default_policy_class: Optional[Type[ray.rllib.policy.policy.Policy]] = None, config: Optional[AlgorithmConfig] = None, num_workers: int = 0, local_worker: bool = True, logdir: Optional[str] = None, _setup: bool = True)

Initializes a WorkerSet instance.

Parameters
  • env_creator – Function that returns an environment, given an env config (EnvContext).

  • validate_env – Optional callable to validate the generated environment (only called on the worker with index 0). This callable should raise an exception if the environment is invalid.

  • default_policy_class – An optional default Policy class to use inside the (multi-agent) policies dict. Any PolicySpec in that dict that does not define its own class uses this default_policy_class. If None, such PolicySpecs fall back to the Algorithm’s default Policy class.

  • config – Optional AlgorithmConfig (or config dict).

  • num_workers – Number of remote rollout workers to create.

  • local_worker – Whether to also create a local (non @ray.remote) worker in the returned set (default: True). If num_workers is 0, a local worker is always created.

  • logdir – Optional logging directory for workers.

  • _setup – Whether to actually set up the workers. Only used for testing.
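
Example: a minimal sketch of constructing a WorkerSet by hand (normally an Algorithm builds one internally). It assumes a Ray 2.x release that still exposes WorkerSet and uses gymnasium; the CartPole-v1 environment, PPOConfig, and PPOTorchPolicy are illustrative choices, not requirements of this API:

    import gymnasium as gym
    import ray
    from ray.rllib.algorithms.ppo import PPOConfig
    from ray.rllib.algorithms.ppo.ppo_torch_policy import PPOTorchPolicy
    from ray.rllib.evaluation.worker_set import WorkerSet

    ray.init()

    config = PPOConfig().environment("CartPole-v1").framework("torch")

    workers = WorkerSet(
        # env_creator: returns an environment, given the env config (EnvContext).
        env_creator=lambda ctx: gym.make("CartPole-v1"),
        # Used for any PolicySpec that does not define its own class.
        default_policy_class=PPOTorchPolicy,
        config=config,
        num_workers=2,      # two remote rollout workers ...
        local_worker=True,  # ... plus one local (non @ray.remote) worker
    )

    # Sample a batch of experiences on the local worker.
    batch = workers.local_worker().sample()
    print(batch.count)

    workers.stop()
    ray.shutdown()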