Ray 2.10.0 introduces the alpha stage of RLlib’s “new API stack”. The Ray Team plans to transition algorithms, example scripts, and documentation to the new code base, thereby incrementally replacing the “old API stack” (e.g., ModelV2, Policy, RolloutWorker) throughout subsequent minor releases leading up to Ray 3.0.

Note, however, that so far only PPO (single- and multi-agent) and SAC (single-agent only) support the “new API stack”; all other algorithms continue to run by default with the old APIs. You can continue to use your existing custom (old-stack) classes.

See here for more details on how to use the new API stack.


This doc is related to RLlib’s new API stack and therefore experimental.

Learner (Alpha)#

Learner allows you to abstract the training logic of RLModules. It supports both gradient-based and non-gradient-based updates (e.g., polyak averaging). The API enables you to distribute the Learner using data-distributed parallel (DDP) training. The Learner achieves the following:

  1. Facilitates gradient-based updates on an RLModule.

  2. Provides abstractions for non-gradient-based updates such as polyak averaging.

  3. Reports training statistics.

  4. Checkpoints the module and optimizer states for durable training.

The Learner class supports data-distributed-parallel-style training using the LearnerGroup API. Under this paradigm, the LearnerGroup maintains multiple copies of the same Learner with identical parameters and hyperparameters. Each of these Learner instances computes the loss and gradients on a shard of a sample batch and then accumulates the gradients across the Learner instances. Learn more about data-distributed parallel learning in this article.
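The per-shard compute-then-accumulate step can be sketched in plain Python. This is an illustration only, with gradients represented as plain lists of floats; real Learners hold framework tensors and accumulate across Ray actors, not Python lists.

```python
# Toy sketch of DDP-style gradient accumulation: each "learner" computed
# gradients on its own batch shard; the group averages them element-wise,
# so every Learner copy applies the identical update.
def accumulate_gradients(per_learner_grads):
    n = len(per_learner_grads)
    return [sum(g) / n for g in zip(*per_learner_grads)]

# Three learners, each holding gradients for two parameters.
avg = accumulate_gradients([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
# avg == [3.0, 4.0]
```

Because all copies start from identical parameters and apply the same averaged gradient, they stay in sync without ever exchanging weights.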

LearnerGroup also allows for asynchronous training and (distributed) checkpointing for durability during training.

Enabling Learner API in RLlib experiments#

Adjust the amount of resources for training using the num_gpus_per_learner, num_cpus_per_learner, and num_learners arguments in the AlgorithmConfig.

from ray.rllib.algorithms.ppo.ppo import PPOConfig

config = (
    PPOConfig()
    .api_stack(enable_rl_module_and_learner=True)
    .learners(
        num_learners=0,  # Set this to greater than 1 to allow for DDP-style updates.
        num_gpus_per_learner=0,  # Set this to 1 to enable GPU training.
    )
)


This feature is in alpha. To migrate an algorithm to it, enable the feature via AlgorithmConfig.api_stack(enable_rl_module_and_learner=True).

The following algorithms support Learner out of the box. Implement an algorithm with a custom Learner to leverage this API for other algorithms.


Supported frameworks: each of these algorithms supports both PyTorch and TensorFlow.

Basic usage#

Use the LearnerGroup utility to interact with multiple learners.


If you enable the RLModule and Learner APIs via the AlgorithmConfig, calling build() constructs a LearnerGroup for you. If you’re using these APIs standalone, you can construct the LearnerGroup as follows.

import gymnasium as gym

from ray.rllib.algorithms.ppo.ppo import PPOConfig

env = gym.make("CartPole-v1")

# Create an AlgorithmConfig object from which we can build the
# LearnerGroup.
config = (
    PPOConfig()
    # Number of Learner workers (Ray actors).
    # Use 0 for no actors, only create a local Learner.
    # Use >=1 to create n DDP-style Learner workers (Ray actors).
    .learners(num_learners=0)
    # Specify the Learner's hyperparameters.
    .training(lr=0.0003)
)

# Construct a new LearnerGroup using our config object.
learner_group = config.build_learner_group(env=env)
import gymnasium as gym

from ray.rllib.algorithms.ppo.ppo import PPOConfig

env = gym.make("CartPole-v1")

# Create an AlgorithmConfig object from which we can build the
# Learner.
config = (
    PPOConfig()
    # Specify the Learner's hyperparameters.
    .training(lr=0.0003)
)

# Construct a new Learner using our config object.
learner = config.build_learner(env=env)


TIMESTEPS = {"num_env_steps_sampled_lifetime": 250}

# This is a blocking update.
results = learner_group.update_from_batch(batch=DUMMY_BATCH, timesteps=TIMESTEPS)

# This is a non-blocking update. The results are returned in a future
# call to `update_from_batch(..., async_update=True)`
_ = learner_group.update_from_batch(batch=DUMMY_BATCH, async_update=True, timesteps=TIMESTEPS)

# Artificially wait for async request to be done to get the results
# in the next call to
# `LearnerGroup.update_from_batch(..., async_update=True)`.
results = learner_group.update_from_batch(
    batch=DUMMY_BATCH, async_update=True, timesteps=TIMESTEPS
)
# `results` is an already reduced dict, which is the result of
# reducing over the individual async `update_from_batch(..., async_update=True)`
# calls.
assert isinstance(results, dict), results

When updating a LearnerGroup, you can perform blocking or async updates on batches of data. Async updates are necessary for implementing async algorithms such as APPO or IMPALA.
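The blocking-versus-async distinction can be sketched with plain Python futures. This is a conceptual stand-in only: LearnerGroup runs updates on Ray actors, not threads, and the update_from_batch function below is a hypothetical toy, not RLlib’s method.

```python
from concurrent.futures import ThreadPoolExecutor

def update_from_batch(batch):
    # Toy stand-in for a Learner's gradient update; returns a results dict.
    return {"mean_value": sum(batch) / len(batch)}

executor = ThreadPoolExecutor(max_workers=2)

# Blocking style: compute the result before moving on.
blocking_result = update_from_batch([1.0, 2.0, 3.0])

# Async style: submit the update and keep doing other work (e.g., sampling);
# collect the finished result later. result() blocks only if the update is
# still running.
future = executor.submit(update_from_batch, [1.0, 2.0, 3.0])
async_result = future.result()
```

The async style is what lets algorithms like APPO and IMPALA keep sampling while a previous batch is still being learned on.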

# This is a blocking update (given a training batch).
result = learner.update_from_batch(batch=DUMMY_BATCH, timesteps=TIMESTEPS)

When updating a Learner you can only perform blocking updates on batches of data. You can perform non-gradient based updates before or after the gradient-based ones by overriding before_gradient_based_update() and after_gradient_based_update().
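As a concrete illustration of such a non-gradient-based step, polyak averaging blends a set of target weights toward the freshly updated online weights. The sketch below is plain Python with hypothetical weight dicts and tau value; an actual override of after_gradient_based_update() would operate on the module’s framework tensors instead.

```python
TAU = 0.5  # Interpolation coefficient (hypothetical value; typically much smaller).

def polyak_update(target_weights, online_weights, tau=TAU):
    # Per parameter: target <- tau * online + (1 - tau) * target.
    # No gradients involved -- this is a pure weight blend, which is why it
    # belongs in a before/after hook rather than in the loss.
    return {
        name: tau * online_weights[name] + (1.0 - tau) * target_weights[name]
        for name in target_weights
    }

new_target = polyak_update({"w": 0.0}, {"w": 1.0})
# new_target == {"w": 0.5}
```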

Getting and setting state#

# Get the LearnerGroup's RLModule weights and optimizer states.
state = learner_group.get_state()

# Only get the RLModule weights.
weights = learner_group.get_weights()

Set or get the state dict of all Learners through the LearnerGroup via set_state() or get_state(). This state includes both the neural network weights and the optimizer states on each Learner. To set or get only the weights of the RLModules of all Learners (without optimizer states), use set_weights() or get_weights().

# Get the Learner's RLModule weights and optimizer states.
state = learner.get_state()

# Only get the RLModule weights (as numpy arrays).
module_state = learner.get_module_state()

You can set and get the full state of a Learner using set_state() and get_state(). To set or get only the RLModule weights (without optimizer states), use the set_module_state() and get_module_state() APIs.



Checkpoint the state of all Learners in the LearnerGroup via save_state() and load_state(). This state includes the neural network weights and any optimizer states. Note that since the states of all Learner instances are identical, only the state of the first Learner needs to be saved.


Checkpoint the state of a Learner via save_state() and load_state(). This state includes the neural network weights and any optimizer states.


Learner has many APIs for flexible implementation; however, the core ones that you need to implement are:

configure_optimizers_for_module(): set up any optimizers for an RLModule.

compute_loss_for_module(): calculate the loss for a gradient-based update to a module.

before_gradient_based_update(): do any non-gradient-based updates to an RLModule before(!) the gradient-based ones, e.g., add noise to your network.

Starter Example#

A Learner that implements behavior cloning could look like the following:

from typing import Dict

import torch

from ray.rllib.algorithms.algorithm_config import AlgorithmConfig
from ray.rllib.core.learner.torch.torch_learner import TorchLearner
from ray.rllib.policy.sample_batch import SampleBatch
from ray.rllib.utils.nested_dict import NestedDict
from ray.rllib.utils.typing import ModuleID, TensorType


class BCTorchLearner(TorchLearner):

    def compute_loss_for_module(
        self,
        *,
        module_id: ModuleID,
        config: AlgorithmConfig = None,
        batch: NestedDict,
        fwd_out: Dict[str, TensorType],
    ) -> TensorType:

        # Standard behavior cloning loss: the negative mean log-likelihood
        # of the batch's actions under the module's action distribution.
        action_dist_inputs = fwd_out[SampleBatch.ACTION_DIST_INPUTS]
        action_dist_class = self._module[module_id].get_train_action_dist_cls()
        action_dist = action_dist_class.from_logits(action_dist_inputs)
        loss = -torch.mean(action_dist.logp(batch[SampleBatch.ACTIONS]))

        return loss
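To see what this loss computes numerically, here is a plain-Python check of the same quantity, -mean(log pi(a|s)), on made-up action probabilities; no torch or RLlib is involved, and the probability values are purely illustrative.

```python
import math

# Probabilities the policy assigned to the actions actually taken (made up).
action_probs = [0.9, 0.8, 0.5]

# Behavior cloning loss: negative mean log-likelihood of those actions.
# Confident actions (p near 1) contribute little; unlikely ones dominate.
loss = -sum(math.log(p) for p in action_probs) / len(action_probs)
# loss ≈ 0.3406
```

Minimizing this pushes the policy to assign higher probability to the demonstrated actions, which is exactly what the torch version above does with a whole batch of logits.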