This page is an index of examples for the various use cases and features of RLlib.
If any example is broken, or if you’d like to add an example to this page, feel free to raise an issue on our GitHub repository.
Tutorials and Blog Posts
- Attention Nets and More with RLlib’s Trajectory View API:
This blog describes RLlib’s new “trajectory view API” and how it enables implementations of GTrXL (attention net) architectures.
- Reinforcement Learning with RLlib in the Unity Game Engine:
A how-to on connecting RLlib with the Unity3D game engine for running visual- and physics-based RL experiments.
- Lessons from Implementing 12 Deep RL Algorithms in TF and PyTorch:
Discussion of how we ported 12 of RLlib’s algorithms from TensorFlow to PyTorch and what we learned along the way.
- Scaling Multi-Agent Reinforcement Learning:
This blog post is a brief tutorial on multi-agent RL and its design in RLlib.
- Functional RL with Keras and TensorFlow Eager:
Exploration of a functional paradigm for implementing reinforcement learning (RL) algorithms.
Environments and Adapters
- Registering a custom env and model:
Example of defining and registering a gym env and model for use with RLlib.
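For orientation, the env side of this pattern can be sketched without any RLlib imports. The `CorridorEnv` class and its config keys below are hypothetical; a real RLlib env would subclass `gym.Env`, define `observation_space`/`action_space`, and be registered via `ray.tune.registry.register_env`.

```python
# Minimal sketch of the "register a custom env" pattern.
# A real RLlib env would subclass gym.Env and define spaces; this
# stand-in only shows the reset/step interface RLlib expects.

class CorridorEnv:
    """Walk right along a corridor of `length` cells; reward 1.0 at the end."""

    def __init__(self, config):
        self.length = config.get("length", 5)
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):
        if action == 1:  # move right
            self.pos += 1
        done = self.pos >= self.length
        reward = 1.0 if done else -0.1
        return self.pos, reward, done, {}


# With RLlib installed, registration would look roughly like:
#   from ray.tune.registry import register_env
#   register_env("corridor", lambda cfg: CorridorEnv(cfg))
# after which a Trainer config can simply use "env": "corridor".
env = CorridorEnv({"length": 3})
obs = env.reset()
obs, rew, done, _ = env.step(1)
```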
- Local Unity3D multi-agent environment example:
Example of how to set up an RLlib Trainer against a locally running Unity3D editor instance to learn any Unity3D game (including support for multi-agent setups). Use this example to try things out and watch the game and the learning progress live in the editor. Given a compiled game, this example can also run in a distributed fashion with num_workers > 0. For a more heavy-weight, distributed, cloud-based example, see the Unity3D client/server example below.
- Rendering and recording of an environment:
Example showing how to switch on rendering and recording of an env.
- Coin Game Example:
Coin Game Env Example (provided by the “Center on Long Term Risk”).
- DMLab Watermaze example:
Example for how to use a DMLab environment (Watermaze).
- RecSim environment example (for recommender systems) using the SlateQ algorithm:
Script showing how to train a SlateQTrainer on a RecSim environment.
- SUMO (Simulation of Urban MObility) environment example:
Example demonstrating how to use the SUMO simulator in connection with RLlib.
- VizDoom example script using RLlib’s auto-attention wrapper:
Script showing how to run PPO with an attention net against a VizDoom gym environment.
- Subprocess environment:
Example of how to ensure subprocesses spawned by envs are killed when RLlib exits.
Custom- and Complex Models
- Attention Net (GTrXL) learning the “repeat-after-me” environment:
Example showing how to use the auto-attention wrapper for your default- and custom models in RLlib.
- LSTM model learning the “repeat-after-me” environment:
Example showing how to use the auto-LSTM wrapper for your default- and custom models in RLlib.
- Custom Keras model:
Example of using a custom Keras model.
- Custom Keras/PyTorch RNN model:
Example of using a custom Keras- or PyTorch RNN model.
- Registering a custom model with supervised loss:
Example of defining and registering a custom model with a supervised loss.
- Batch normalization:
Example of adding batch norm layers to a custom model.
- Eager execution:
Example of how to leverage TensorFlow eager to simplify debugging and design of custom models and policies.
- Custom “Fast” Model:
Example of a “fast” Model learning only one parameter for tf and torch.
- Custom model API example:
Shows how to define a custom Model API in RLlib, such that it can be used inside certain algorithms.
- Trajectory View API utilizing model:
An example on how a model can use the trajectory view API to specify its own input.
- MobileNetV2 wrapping example model:
Implementations of tf.keras.applications.mobilenet_v2.MobileNetV2 and torch.hub (mobilenet_v2)-wrapping example models.
- Differentiable Neural Computer:
Example of DeepMind’s Differentiable Neural Computer for partially-observable environments.
Training Workflows
- Custom training workflows:
Example of how to use Tune’s support for custom training functions to implement custom training workflows.
- Custom logger:
How to set up a custom Logger object in RLlib.
- Custom metrics:
Example of how to output custom training metrics to TensorBoard.
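In RLlib this is done with a callbacks class whose `on_episode_*` hooks write into `episode.custom_metrics`; the sketch below uses a stand-in `Episode` object so the pattern is runnable without RLlib installed.

```python
# Sketch of RLlib's custom-metrics pattern: a callbacks object fills
# `episode.custom_metrics`, and RLlib averages those values into the
# results dict shown in TensorBoard. `Episode` is a stand-in for
# RLlib's episode object, just to make the sketch runnable.

class Episode:
    def __init__(self):
        self.user_data = {"pole_angles": []}
        self.custom_metrics = {}

class MyCallbacks:  # with RLlib, this would subclass DefaultCallbacks
    def on_episode_step(self, episode, pole_angle):
        # With RLlib, the angle would come from episode.last_observation_for().
        episode.user_data["pole_angles"].append(abs(pole_angle))

    def on_episode_end(self, episode):
        angles = episode.user_data["pole_angles"]
        episode.custom_metrics["mean_pole_angle"] = sum(angles) / len(angles)

cb, ep = MyCallbacks(), Episode()
for angle in (0.1, -0.3, 0.2):
    cb.on_episode_step(ep, angle)
cb.on_episode_end(ep)
# ep.custom_metrics now holds a "mean_pole_angle" entry.
```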
- Custom Policy class (TensorFlow):
How to set up a custom TFPolicy.
- Custom Policy class (PyTorch):
How to set up a custom TorchPolicy.
- Using rollout workers directly for control over the whole training workflow:
Example of how to use RLlib’s lower-level building blocks to implement a fully customized training workflow.
- Custom execution plan function handling two different Policies (DQN and PPO) at the same time:
Example of how to use the execution plan of a Trainer to train two different policies in parallel (also using the multi-agent API).
- Custom tune experiment:
How to run a custom Ray Tune experiment with RLlib with custom training- and evaluation phases.
- Custom evaluation function:
Example of how to write a custom evaluation function that is called instead of the default behavior, which runs the evaluation worker set for n episodes.
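The contract is roughly: a function receiving the Trainer and its evaluation worker set, returning a metrics dict. The stub classes below stand in for RLlib objects; only the shape of the hook is shown.

```python
# Sketch of the custom-evaluation-function contract: RLlib calls
# custom_eval_function(trainer, eval_workers) and expects a metrics
# dict back. The stub workers below stand in for RLlib's WorkerSet.

class StubWorker:
    def __init__(self, rewards):
        self._rewards = rewards
    def sample(self):  # RLlib workers return sample batches here
        return self._rewards

class StubWorkerSet:
    def __init__(self, workers):
        self._workers = workers
    def remote_workers(self):
        return self._workers

def custom_eval_function(trainer, eval_workers):
    """Run one evaluation round and aggregate episode rewards."""
    episode_rewards = []
    for worker in eval_workers.remote_workers():
        episode_rewards.extend(worker.sample())
    return {
        "evaluation": {
            "episode_reward_mean": sum(episode_rewards) / len(episode_rewards),
            "episodes_this_iter": len(episode_rewards),
        }
    }

workers = StubWorkerSet([StubWorker([1.0, 2.0]), StubWorker([3.0])])
metrics = custom_eval_function(trainer=None, eval_workers=workers)
```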
- Parallel evaluation and training:
Example showing how the evaluation workers and the “normal” rollout workers can run (to some extent) in parallel to speed up training.
Serving and Offline
- Offline RL with CQL:
Example showing how to run an offline RL training job using a historic-data JSON file.
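Schematically, such a job swaps the live env for a data source via the Trainer config’s `"input"` key. A hedged config-fragment sketch (path and values hypothetical, CQL-specific hyperparameters omitted):

```python
# Hedged config sketch for an offline RL run: point RLlib's "input"
# at previously logged experience (JSON) instead of a live env.
config = {
    "env": None,                   # no live env; data comes from disk
    "input": "/tmp/cartpole-out",  # hypothetical path to logged episodes
    "input_evaluation": [],        # skip off-policy estimation in this sketch
    "framework": "torch",
}
```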
- Serving RLlib models with Ray Serve:
Example of using Ray Serve to serve RLlib models with an HTTP and JSON interface. This is the recommended way to expose RLlib for online serving use cases.
- Another example of using RLlib with Ray Serve:
This script offers a simple workflow: 1) training a policy with RLlib, 2) creating a new policy, 3) restoring its weights from the trained one and serving the new policy via Ray Serve.
- Unity3D client/server:
Example of how to set up n distributed Unity3D (compiled) games in the cloud that function as data-collecting clients against a central RLlib policy server learning how to play the game. The n distributed clients could themselves be servers for external/human players and allow for control being fully in the hands of the Unity entities instead of RLlib. Note: Uses Unity’s MLAgents SDK (>=1.0) and supports all provided MLAgents example games and multi-agent setups.
- CartPole client/server:
Example of online serving of predictions for a simple CartPole policy.
- Saving experiences:
Example of how to externally generate experience batches in RLlib-compatible format.
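RLlib’s offline JSON format stores batches of transitions one per line; as a simplified illustration (RLlib itself provides `SampleBatchBuilder`/`JsonWriter` for this, and the field layout below is a rough approximation, not the exact on-disk schema), the idea can be sketched with plain `json`:

```python
import json
import os
import tempfile

# Simplified illustration of "saving experiences": write one batch of
# transitions per line as JSON. RLlib ships SampleBatchBuilder/JsonWriter
# for the real format; this stand-in only shows the idea.

def write_batches(path, batches):
    with open(path, "w") as f:
        for batch in batches:
            f.write(json.dumps(batch) + "\n")

batch = {
    "obs": [[0.0, 0.1], [0.2, 0.3]],
    "actions": [0, 1],
    "rewards": [1.0, 1.0],
    "dones": [False, True],
}
out = os.path.join(tempfile.gettempdir(), "demo-experiences.json")
write_batches(out, [batch])
```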
- Finding a checkpoint using custom criteria:
Example of how to find a checkpoint after a tune.run using custom-defined criteria.
Multi-Agent and Hierarchical
- Simple independent multi-agent setup vs a PettingZoo env:
Set up RLlib to run any algorithm in (independent) multi-agent mode against a multi-agent environment.
- More complex (shared-parameter) multi-agent setup vs a PettingZoo env:
Set up RLlib to run any algorithm in (shared-parameter) multi-agent mode against a multi-agent environment.
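The main difference between independent and shared-parameter setups is the `policy_mapping_fn` in the multiagent part of the config. A sketch with illustrative agent ids (the empty `"policies"` entry is a placeholder; RLlib expects full policy specs there):

```python
# Sketch of the multiagent config piece that distinguishes the two
# setups: independent learning maps every agent to its own policy,
# shared-parameter learning maps all agents to one policy.

def independent_mapping(agent_id):
    # one policy per agent -> no parameter sharing
    return f"policy_{agent_id}"

def shared_mapping(agent_id):
    # every agent trains the same weights
    return "shared_policy"

multiagent_config = {
    "multiagent": {
        # With RLlib, "policies" maps policy ids to policy specs
        # (class, obs/action spaces, config); None keeps this runnable.
        "policies": {"shared_policy": None},
        "policy_mapping_fn": shared_mapping,
    }
}
```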
- Rock-paper-scissors example:
Example of different heuristic and learned policies competing against each other in rock-paper-scissors.
- PPO with centralized critic on two-step game:
Example of customizing PPO to leverage a centralized value function.
- Centralized critic in the env:
A simpler method of implementing a centralized critic by augmenting agent observations with global information.
- Hand-coded policy:
Example of running a custom hand-coded policy alongside trainable policies.
- Weight sharing between policies:
Example of how to define weight-sharing layers between two different policies.
- Multiple trainers:
Example of alternating training between two DQN and PPO trainers.
- Hierarchical training:
Example of hierarchical training using the multi-agent API.
- Iterated Prisoner’s Dilemma environment example:
Example of an iterated prisoner’s dilemma environment solved by RLlib.
- Example showing how to setup fractional GPUs:
Example of how to set up fractional GPUs for learning (driver) and environment rollouts (remote workers).
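Schematically, the relevant config keys are `num_gpus` (learner/driver share) and `num_gpus_per_worker` (share per rollout worker); the values below are illustrative and simply chosen to sum to one physical GPU:

```python
# Config sketch for fractional GPUs: the learner (driver) gets half a
# GPU and each of the two rollout workers a quarter, so everything
# fits on a single physical GPU. Values are illustrative.
config = {
    "num_workers": 2,
    "num_gpus": 0.5,             # driver / learner share
    "num_gpus_per_worker": 0.25, # each rollout worker's share
}
```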
Special Action- and Observation Spaces
- Nested action spaces:
Learning in arbitrarily nested action spaces.
- Custom observation filters:
How to filter raw observations coming from the environment for further processing by the Agent’s model(s).
- Using the “Repeated” space of RLlib for variable lengths observations:
How to use RLlib’s Repeated space to handle variable length observations.
- Autoregressive action distribution example:
Learning with auto-regressive action dependencies (e.g. 2 action components; distribution for 2nd component depends on the 1st component’s actually sampled value).
Community Examples
- Arena AI:
A General Evaluation Platform and Building Toolkit for Single/Multi-Agent Intelligence with RLlib-generated baselines.
- The Emergence of Adversarial Communication in Multi-Agent Reinforcement Learning:
Using Graph Neural Networks and RLlib to train multiple cooperative and adversarial agents to solve the “cover the area”-problem, thereby learning how to best communicate (or - in the adversarial case - how to disturb communication).
- Flatland:
A dense traffic-simulating environment with RLlib-generated baselines.
- Neural MMO:
A multiagent AI research environment inspired by Massively Multiplayer Online (MMO) role playing games – self-contained worlds featuring thousands of agents per persistent macrocosm, diverse skilling systems, local and global economies, complex emergent social structures, and ad-hoc high-stakes single and team based conflict.
- NeuroCuts:
Example of building packet classification trees using RLlib / multi-agent in a bandit-like setting.
- NeuroVectorizer:
Example of learning optimal LLVM vectorization compiler pragmas for loops in C and C++ code using RLlib.
- Roboschool / SageMaker:
Example of training robotic control policies in SageMaker with RLlib.
- StarCraft2:
Example of training in StarCraft2 maps with RLlib / multi-agent.
- Traffic Flow:
Example of optimizing mixed-autonomy traffic simulations with RLlib / multi-agent.