ray.rllib.env.env_runner.EnvRunner
class ray.rllib.env.env_runner.EnvRunner(*, config: AlgorithmConfig, **kwargs)
Bases: FaultAwareApply
Base class for distributed RL-style data collection from an environment.
The EnvRunner API's core functionalities can be summarized as:

- Gets configured by passing an AlgorithmConfig object to the constructor. Normally, subclasses of EnvRunner then construct their own (possibly vectorized) environment copies and RLModules/Policies and use the latter to step through the environment in order to collect training data.
- Clients of EnvRunner can use the sample() method to collect training data from the environment(s).
- EnvRunner offers parallelism by creating n remote Ray Actors based on this class: use ray.remote([resources])(EnvRunner) to create the corresponding Ray remote class, then instantiate n Actors via the Ray [ctor].remote(...) syntax (see the sketch after this list).
- EnvRunner clients can get information about the server/node on which the individual Actors are running.

PublicAPI (alpha): This API is in alpha and may change before becoming stable.
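For concreteness, here is a minimal, hedged sketch of that remote-actor pattern. It assumes RLlib's SingleAgentEnvRunner as the concrete EnvRunner subclass, a PPOConfig with the CartPole-v1 environment, two actors, and one CPU per actor; none of these choices come from the text above, and exact config options may vary across Ray versions.

```python
# Sketch only: SingleAgentEnvRunner, PPOConfig, CartPole-v1, the actor count,
# and the per-actor resources are illustrative assumptions, not requirements.
import ray
from ray.rllib.algorithms.ppo import PPOConfig
from ray.rllib.env.single_agent_env_runner import SingleAgentEnvRunner

ray.init()

# The EnvRunner gets configured via an AlgorithmConfig passed to its constructor.
config = PPOConfig().environment("CartPole-v1")

# Create the Ray remote class from the EnvRunner subclass, then n remote Actors.
RemoteEnvRunner = ray.remote(num_cpus=1)(SingleAgentEnvRunner)
env_runners = [RemoteEnvRunner.remote(config=config) for _ in range(2)]

# Collect experiences from all Actors in parallel via sample().
episode_lists = ray.get([er.sample.remote() for er in env_runners])

ray.shutdown()
```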
Methods
__init__(): Initializes an EnvRunner instance.
apply(): Calls the given function with this Actor instance.
assert_healthy(): Checks that self.__init__() has been completed properly.
get_spaces(): Returns a dict mapping ModuleIDs to 2-tuples of obs- and action space.
make_env(): Creates the RL environment for this EnvRunner and assigns it to self.env.
ping(): Ping the actor.
sample(): Returns experiences (of any form) sampled from this EnvRunner.
stop(): Releases all resources used by this EnvRunner.
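For orientation, a brief sketch of how these methods fit together on a locally instantiated EnvRunner follows. It assumes RLlib's SingleAgentEnvRunner and a CartPole-v1 PPO config as placeholders; the num_timesteps value is arbitrary and method signatures may differ between Ray versions.

```python
# Illustrative lifecycle sketch; SingleAgentEnvRunner, PPOConfig, and
# CartPole-v1 are assumed placeholders, not mandated by this API reference.
from ray.rllib.algorithms.ppo import PPOConfig
from ray.rllib.env.single_agent_env_runner import SingleAgentEnvRunner

config = PPOConfig().environment("CartPole-v1")

runner = SingleAgentEnvRunner(config=config)  # __init__: builds env copies and an RLModule
runner.assert_healthy()                       # raises if construction did not complete properly
spaces = runner.get_spaces()                  # maps ModuleIDs to (obs space, action space) tuples
episodes = runner.sample(num_timesteps=64)    # collect experiences from the environment(s)
runner.stop()                                 # release all resources used by this EnvRunner
```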