ray.rllib.env.env_runner.EnvRunner

class ray.rllib.env.env_runner.EnvRunner(*, config: AlgorithmConfig, **kwargs)

Bases: FaultAwareApply

Base class for distributed RL-style data collection from an environment.

The EnvRunner API’s core functionalities can be summarized as:

- Gets configured by passing an AlgorithmConfig object to the constructor. Normally, subclasses of EnvRunner then construct their own (possibly vectorized) environment copies and RLModules/Policies, and use the latter to step through the environment in order to collect training data.
- Clients of EnvRunner can use the sample() method to collect training data from the environment(s).
- EnvRunner offers parallelism by creating n remote Ray Actors based on this class. Use the ray.remote([resources])(EnvRunner) call to create the corresponding Ray remote class, then instantiate n Actors using the Ray [ctor].remote(...) syntax (see the sketch after this list).
- EnvRunner clients can get information about the server/node on which the individual Actors are running.
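The snippet below is a minimal sketch of that remote-actor pattern. It assumes Ray's new API stack and uses SingleAgentEnvRunner as one concrete EnvRunner subclass; the subclass choice, the PPOConfig setup, and the two-worker count are illustrative assumptions, not requirements of the API.

```python
# Minimal sketch: run n EnvRunner actors in parallel (assumes Ray's new API
# stack; SingleAgentEnvRunner is one concrete subclass and may vary by version).
import ray
from ray.rllib.algorithms.ppo import PPOConfig
from ray.rllib.env.single_agent_env_runner import SingleAgentEnvRunner

ray.init()

# The EnvRunner reads its env and sampling settings from this AlgorithmConfig.
config = PPOConfig().environment("CartPole-v1")

# Wrap the EnvRunner subclass into a Ray remote class with per-actor resources ...
RemoteEnvRunner = ray.remote(num_cpus=1)(SingleAgentEnvRunner)

# ... then instantiate n Actors via the usual [ctor].remote(...) syntax.
workers = [RemoteEnvRunner.remote(config=config) for _ in range(2)]

# Collect experiences from all actors in parallel.
episodes = ray.get([w.sample.remote() for w in workers])
```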

PublicAPI (alpha): This API is in alpha and may change before becoming stable.

Methods

__init__

Initializes an EnvRunner instance.

apply

Calls the given function with this Actor instance.

assert_healthy

Checks that self.__init__() has been completed properly.

get_spaces

Returns a dict mapping ModuleIDs to 2-tuples of (observation space, action space).

make_env

Creates the RL environment for this EnvRunner and assigns it to self.env.

ping

Pings the actor.

sample

Returns experiences (of any form) sampled from this EnvRunner (see the lifecycle sketch after this table).

stop

Releases all resources used by this EnvRunner.
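For completeness, here is a hedged sketch of a local (non-actor) EnvRunner lifecycle that ties the methods above together. It again assumes the SingleAgentEnvRunner subclass and a simple single-agent config; the exact return type of sample() depends on the subclass.

```python
# Local-lifecycle sketch (SingleAgentEnvRunner is an assumed concrete subclass).
from ray.rllib.algorithms.ppo import PPOConfig
from ray.rllib.env.single_agent_env_runner import SingleAgentEnvRunner

config = PPOConfig().environment("CartPole-v1")
runner = SingleAgentEnvRunner(config=config)

# Verify that __init__() completed properly before using the runner.
runner.assert_healthy()

# Inspect the per-module (observation space, action space) pairs.
for module_id, (obs_space, act_space) in runner.get_spaces().items():
    print(module_id, obs_space, act_space)

# Collect a batch of experiences; the concrete form depends on the subclass.
episodes = runner.sample()

# apply() calls the given function with this runner instance as its argument.
result = runner.apply(lambda r: r.ping())

# Release the environment and any other resources.
runner.stop()
```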