EnvRunner API#

rllib.env.env_runner.EnvRunner#

Construction and setup#

EnvRunner

Base class for distributed RL-style data collection from an environment.

EnvRunner.make_env

Creates the RL environment for this EnvRunner and assigns it to self.env.

EnvRunner.make_module

Creates the RLModule for this EnvRunner and assigns it to self.module.

EnvRunner.get_spaces

Returns a dict mapping ModuleIDs to 2-tuples of observation- and action space.

EnvRunner.assert_healthy

Checks that self.__init__() has been completed properly.

Sampling#

EnvRunner.sample

Returns experiences (of any form) sampled from this EnvRunner.

EnvRunner.get_metrics

Returns metrics (in any form) for the completed episodes collected so far.

Cleanup#

EnvRunner.stop

Releases all resources used by this EnvRunner.
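The snippet below is a minimal sketch of a custom EnvRunner subclass implementing this API. The class name RandomCartPoleRunner, the use of gymnasium's CartPole-v1, and the random-action sampling are illustrative assumptions, not RLlib's built-in implementation.

```python
import gymnasium as gym

from ray.rllib.env.env_runner import EnvRunner


class RandomCartPoleRunner(EnvRunner):
    """Hypothetical EnvRunner sampling random actions from CartPole."""

    def __init__(self, *, config, **kwargs):
        super().__init__(config=config, **kwargs)
        self.env = None
        self.make_env()

    def make_env(self):
        # Create the RL environment and assign it to `self.env`.
        self.env = gym.make("CartPole-v1")

    def sample(self, **kwargs):
        # `sample()` may return experiences of any form; here, a plain
        # list of (obs, action, reward) tuples for one episode.
        experiences = []
        obs, _ = self.env.reset()
        terminated = truncated = False
        while not (terminated or truncated):
            action = self.env.action_space.sample()
            next_obs, reward, terminated, truncated, _ = self.env.step(action)
            experiences.append((obs, action, reward))
            obs = next_obs
        return experiences

    def get_metrics(self):
        # Metrics may also take any form; none are tracked in this sketch.
        return {}

    def stop(self):
        # Release all resources used by this EnvRunner.
        self.env.close()
```

Because the API leaves the exact return formats of sample() and get_metrics() to the subclass, the tuple list and empty dict above are just one possible choice.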

rllib.env.env_errors.StepFailedRecreateEnvError#

class ray.rllib.env.env_errors.StepFailedRecreateEnvError#

An exception that signals that the environment step failed and the environment needs to be reset.

This exception may be raised by the environment's step method. The EnvRunner then catches it and resets the environment. This can be useful if your environment is unstable and regularly crashes in a particular way, for example because it connects to an external simulator that you have little control over. You can detect such crashes in your step method and raise this error so that RLlib doesn't log the failure. Use this with caution, as it may lead to infinite loops of resetting the environment.

PublicAPI (alpha): This API is in alpha and may change before becoming stable.
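For illustration, here is a sketch of how an unstable environment might raise this error from its step method. The FlakySimulatorEnv class, its observation and action spaces, and the connect_to_simulator helper are hypothetical stand-ins for an external simulator you have little control over.

```python
import gymnasium as gym
import numpy as np

from ray.rllib.env.env_errors import StepFailedRecreateEnvError


class FlakySimulatorEnv(gym.Env):
    """Hypothetical env wrapping an unstable external simulator."""

    def __init__(self, config=None):
        self.observation_space = gym.spaces.Box(-1.0, 1.0, (4,), np.float32)
        self.action_space = gym.spaces.Discrete(2)
        # `connect_to_simulator` is a hypothetical helper, not defined here.
        self._sim = connect_to_simulator()

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        return self.observation_space.sample(), {}

    def step(self, action):
        try:
            obs, reward, done = self._sim.step(action)
        except ConnectionError:
            # Detect the known crash and signal the EnvRunner to reset the
            # environment, rather than logging it as an error.
            raise StepFailedRecreateEnvError("External simulator crashed.")
        return obs, reward, done, False, {}
```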

Single-agent and multi-agent EnvRunners#

By default, RLlib uses two built-in subclasses of EnvRunner, one for single-agent and one for multi-agent setups. RLlib determines which one to use based on your config.

Check your config.is_multi_agent property to find out which of these setups you have configured, and see the docs on setting up RLlib multi-agent for more details.
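As a quick check, the following sketch prints which setup a config produces; PPO and CartPole-v1 are chosen purely for illustration.

```python
from ray.rllib.algorithms.ppo import PPOConfig

config = PPOConfig().environment("CartPole-v1")
# No `.multi_agent()` settings, so this is a single-agent setup and RLlib
# uses the single-agent EnvRunner subclass.
print(config.is_multi_agent)  # -> False
```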