ray.rllib.evaluation.rollout_worker.RolloutWorker.__init__

RolloutWorker.__init__(*, env_creator: Callable[[EnvContext], Any | gymnasium.Env | None], validate_env: Callable[[Any | gymnasium.Env, EnvContext], None] | None = None, config: AlgorithmConfig | None = None, worker_index: int = 0, num_workers: int | None = None, recreated_worker: bool = False, log_dir: str | None = None, spaces: Dict[str, Tuple[gymnasium.spaces.Space, gymnasium.spaces.Space]] | None = None, default_policy_class: Type[Policy] | None = None, dataset_shards: List[Dataset] | None = None, tf_session_creator=-1)

Initializes a RolloutWorker instance.

Parameters:
  • env_creator – Function that returns a gym.Env given an EnvContext-wrapped configuration (see the env_creator sketch after this parameter list).

  • validate_env – Optional callable to validate the generated environment (only run on the worker with worker_index=0).

  • worker_index – For remote workers, this should be set to a unique, non-zero value. This index is passed to created envs through EnvContext so that envs can be configured per worker.

  • recreated_worker – Whether this worker is a recreated one. Workers are recreated by an Algorithm (via WorkerSet) when recreate_failed_workers=True and one of the original workers (or an already recreated one) has failed. Recreated workers differ from original workers only in the value of this flag (self.recreated_worker).

  • log_dir – Directory where logs can be placed.

  • spaces – An optional space dict mapping policy IDs to (obs_space, action_space)-tuples. This is used in case no Env is created on this RolloutWorker.
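
The env_creator receives an EnvContext, which acts like the env_config dict but also exposes worker_index, num_workers, and vector_index. A minimal sketch of such a function, assuming gymnasium's CartPole environment and a purely illustrative per-worker seeding scheme:

    import gymnasium as gym

    from ray.rllib.env.env_context import EnvContext


    def env_creator(ctx: EnvContext) -> gym.Env:
        # ctx behaves like the env_config dict but also carries
        # worker_index, num_workers, and vector_index set by RLlib.
        env = gym.make("CartPole-v1")  # assumed example environment
        # Illustrative per-worker configuration: seed each worker differently
        # based on its index (worker_index=0 is the local worker).
        env.reset(seed=1000 + ctx.worker_index)
        return env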
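
RolloutWorkers are normally created for you by an Algorithm (via WorkerSet), but a hedged sketch of constructing a local worker directly might look as follows; the PPO config, PPOTorchPolicy, and CartPole env are assumptions chosen for illustration, not requirements of the API:

    import gymnasium as gym

    from ray.rllib.algorithms.ppo import PPOConfig
    from ray.rllib.algorithms.ppo.ppo_torch_policy import PPOTorchPolicy
    from ray.rllib.evaluation.rollout_worker import RolloutWorker

    worker = RolloutWorker(
        # All arguments are keyword-only.
        env_creator=lambda ctx: gym.make("CartPole-v1"),  # assumed env
        config=PPOConfig().framework("torch"),  # assumed config/policy pairing
        default_policy_class=PPOTorchPolicy,
        worker_index=0,  # 0 marks the local worker; remote workers get unique, non-zero indices.
    )
    print(worker.sample())  # Collect a batch of experiences from the env.

Because no spaces dict is passed here, the observation and action spaces are inferred from the created env; spaces is only needed when the worker does not create an Env itself.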