class ray.rllib.env.env_runner.EnvRunner(*, config: AlgorithmConfig, **kwargs)[source]#

Bases: FaultAwareApply

Base class for distributed RL-style data collection from an environment.

The EnvRunner API's core functionalities can be summarized as:

- Gets configured by passing an AlgorithmConfig object to the constructor. Subclasses of EnvRunner normally construct their own (possibly vectorized) environment copies and RLModules/Policies, and use the latter to step through the environments in order to collect training data.
- Clients of EnvRunner can use the sample() method to collect training data from the environment(s).
- EnvRunner offers parallelism by creating n remote Ray Actors based on this class. Use the ray.remote([resources])(EnvRunner) method to create the corresponding Ray remote class, then instantiate n Actors via Ray's [ctor].remote(...) syntax.
- EnvRunner clients can get information about the server/node on which the individual Actors are running.
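As an illustration of the configure-and-sample contract above, here is a minimal, dependency-free sketch. ToyConfig and ToyEnvRunner are hypothetical stand-ins, not RLlib classes; a real EnvRunner subclass receives an AlgorithmConfig and builds its own environment and RLModule copies.

```python
# Toy illustration of the EnvRunner contract: configured via a config
# object at construction time, then stepped via sample().
# ToyConfig / ToyEnvRunner are hypothetical, not part of RLlib.

class ToyConfig:
    def __init__(self, num_env_steps_per_sample=5):
        self.num_env_steps_per_sample = num_env_steps_per_sample


class ToyEnvRunner:
    def __init__(self, *, config, **kwargs):
        # Gets configured via the config object passed to the constructor.
        self.config = config
        self._t = 0  # toy "environment" state: a plain step counter

    def sample(self):
        # Clients call sample() to collect experiences (of any form);
        # here, a list of dicts standing in for a training batch.
        batch = []
        for _ in range(self.config.num_env_steps_per_sample):
            self._t += 1
            batch.append({"obs": self._t, "reward": 1.0})
        return batch


runner = ToyEnvRunner(config=ToyConfig(num_env_steps_per_sample=3))
batch = runner.sample()
print(len(batch))  # 3
```

In the real API, the same pattern is parallelized by wrapping the class with ray.remote(...) and instantiating n remote Actors, each of which samples independently.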



__init__(): Initializes an EnvRunner instance.

apply(): Calls the given function with this Actor instance.

assert_healthy(): Checks that self.__init__() has been completed properly.

get_state(): Returns this EnvRunner's (possibly serialized) current state as a dict.

ping(): Pings the actor.

sample(): Returns experiences (of any form) sampled from this EnvRunner.

set_state(): Restores this EnvRunner's state from the given state dict.

stop(): Releases all resources used by this EnvRunner.
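The get_state()/set_state() pair forms a checkpointing round trip: whatever dict one runner returns from get_state() can restore an equivalent runner via set_state(). A minimal, dependency-free sketch of that contract, using a hypothetical ToyEnvRunner rather than a real RLlib subclass:

```python
# Toy sketch of the get_state()/set_state() round trip.
# ToyEnvRunner is a hypothetical stand-in; real EnvRunner subclasses
# may return a richer (possibly serialized) state dict.

class ToyEnvRunner:
    def __init__(self):
        self._t = 0  # toy internal state: a step counter

    def get_state(self):
        # Return the current state as a plain dict (checkpointable).
        return {"t": self._t}

    def set_state(self, state):
        # Restore from a state dict previously produced by get_state().
        self._t = state["t"]


a = ToyEnvRunner()
a._t = 42
state = a.get_state()

b = ToyEnvRunner()
b.set_state(state)
print(b._t)  # 42
```

This is the same pattern RLlib uses to restore a failed remote EnvRunner Actor from a healthy one's state.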