ray.rllib.env.env_runner_group.EnvRunnerGroup.__init__#
- EnvRunnerGroup.__init__(*, env_creator: Callable[[EnvContext], Any | gymnasium.Env | None] | None = None, validate_env: Callable[[Any | gymnasium.Env], None] | None = None, default_policy_class: Type[Policy] | None = None, config: AlgorithmConfig | None = None, num_env_runners: int = 0, local_env_runner: bool = True, logdir: str | None = None, _setup: bool = True, tune_trial_id: str | None = None, num_workers=-1, local_worker=-1)[source]#
Initializes an EnvRunnerGroup instance.
- Parameters:
env_creator – Function that returns an env given an env config.
validate_env – Optional callable to validate the generated environment (only on worker=0). This callable should raise an exception if the environment is invalid.
default_policy_class – An optional default Policy class to use inside the (multi-agent) policies dict. In case the PolicySpecs in there have no class defined, use this default_policy_class. If None, PolicySpecs will be using the Algorithm’s default Policy class.
config – Optional AlgorithmConfig (or config dict).
num_env_runners – Number of remote EnvRunners to create.
local_env_runner – Whether to also create a local (non-@ray.remote) EnvRunner in the returned set (default: True). If num_env_runners is 0, always create a local EnvRunner.
logdir – Optional logging directory for workers.
_setup – Whether to actually set up workers. This is only for testing.
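For illustration, here is a minimal sketch of constructing an EnvRunnerGroup directly; normally an Algorithm builds this internally from its config. The environment name ("CartPole-v1") and the check_env validator are assumptions made for this example, not part of the API:

    import gymnasium as gym

    from ray.rllib.algorithms.ppo import PPOConfig
    from ray.rllib.env.env_runner_group import EnvRunnerGroup


    def check_env(env):
        # Example validator (an assumption for this sketch): called only on
        # worker index 0; raise to signal an invalid environment.
        assert isinstance(env.action_space, gym.spaces.Discrete)


    config = (
        PPOConfig()
        .environment("CartPole-v1")
        .env_runners(num_env_runners=2)
    )

    group = EnvRunnerGroup(
        env_creator=lambda ctx: gym.make("CartPole-v1"),  # ctx is an EnvContext
        validate_env=check_env,
        config=config,
        num_env_runners=2,      # two remote EnvRunners
        local_env_runner=True,  # plus one local (non-@ray.remote) EnvRunner
    )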