RLlib: Industry-Grade, Scalable Reinforcement Learning#
Note
Ray 2.10.0 introduces the alpha stage of RLlib’s “new API stack”. The team is currently transitioning algorithms, example scripts, and documentation to the new code base throughout the subsequent minor releases leading up to Ray 3.0.
See here for more details on how to activate and use the new API stack.
RLlib is an open source library for reinforcement learning (RL), offering support for production-level, highly scalable, and fault-tolerant RL workloads, while maintaining simple and unified APIs for a large variety of industry applications.
Whether training policies in a multi-agent setup, from historical offline data, or using externally connected simulators, RLlib offers simple solutions for each of these autonomous decision-making needs and enables you to start running your experiments within hours.
RLlib is used in production by industry leaders in many different verticals, such as gaming, robotics, finance, climate and industrial control, manufacturing and logistics, automobile, and boat design.
RLlib in 60 seconds#
It only takes a few steps to get your first RLlib workload up and running on your laptop. Install RLlib and PyTorch, as shown below:
pip install "ray[rllib]" torch
Note
For installation on computers running Apple Silicon (such as M1), follow instructions here.
Note
To be able to run the Atari or MuJoCo examples, you also need to run:
pip install "gymnasium[atari,accept-rom-license,mujoco]"
That's all! You can now start coding against RLlib. Here is an example for running the PPO algorithm on the Taxi domain.
You first create a config for the algorithm, which defines the RL environment (Taxi) and any other needed settings and parameters.
Next, build the algorithm and train it for a total of 5 iterations.
One training iteration includes parallel (distributed) sample collection by the EnvRunner actors, followed by loss calculation on the collected data and a model update step.
At the end of your script, the trained Algorithm is evaluated:
from ray.rllib.algorithms.ppo import PPOConfig
from ray.rllib.connectors.env_to_module import FlattenObservations

# 1. Configure the algorithm.
config = (
    PPOConfig()
    .environment("Taxi-v3")
    .env_runners(
        num_env_runners=2,
        # Observations are discrete (ints) -> We need to flatten (one-hot) them.
        env_to_module_connector=lambda env: FlattenObservations(),
    )
    .evaluation(evaluation_num_env_runners=1)
)

# 2. Build the algorithm ..
algo = config.build()

# 3. .. train it ..
for _ in range(5):
    print(algo.train())

# 4. .. and evaluate it.
algo.evaluate()
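Each algo.train() call returns a nested result dict containing sampling and learning metrics. As a minimal sketch of how to read it, assuming the new API stack's metrics layout (the exact key names can vary between Ray versions):

# Hedged example: key names assume the new API stack's metrics layout.
result = algo.train()
print(result["env_runners"]["episode_return_mean"])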
You can use any Farama Foundation Gymnasium registered environment with the env argument.
In config.env_runners() you can specify, amongst many other things, the number of parallel EnvRunner actors that collect samples from the environment.
You can also tweak the NN architecture by tweaking RLlib's DefaultModelConfig, as well as set up a separate config for the evaluation EnvRunner actors through the config.evaluation() method.
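For example, here is a minimal sketch of both customizations. The import path for DefaultModelConfig and the model_config argument assume a recent new API stack release; check the API reference for your installed Ray version:

from ray.rllib.algorithms.ppo import PPOConfig
# Assumed import path on the new API stack; it may differ across Ray versions.
from ray.rllib.core.rl_module.default_model_config import DefaultModelConfig

config = (
    PPOConfig()
    .environment("CartPole-v1")
    # Use two hidden layers of 64 units each for the default NN.
    .rl_module(model_config=DefaultModelConfig(fcnet_hiddens=[64, 64]))
    # Run one dedicated evaluation EnvRunner and evaluate every training iteration.
    .evaluation(evaluation_num_env_runners=1, evaluation_interval=1)
)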
See here if you want to learn more about the RLlib training APIs. Also, see here for a simple example of how to write an action inference loop after training.
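As a rough sketch of such an inference loop, reusing the trained algo from the example above: this assumes the PPO default module returns action-distribution logits under the "action_dist_inputs" key and acts greedily on them; both details are version-dependent, so treat this as illustrative rather than canonical:

import gymnasium as gym
import numpy as np
import torch

env = gym.make("Taxi-v3")
# Get the trained RLModule from the Algorithm (new API stack).
rl_module = algo.get_module()

obs, info = env.reset()
terminated = truncated = False
total_reward = 0.0
while not (terminated or truncated):
    # Taxi observations are ints; one-hot them here, mirroring the
    # FlattenObservations connector used during training.
    one_hot = np.zeros(env.observation_space.n, dtype=np.float32)
    one_hot[obs] = 1.0
    # Compute action logits and pick the greedy action.
    out = rl_module.forward_inference({"obs": torch.from_numpy(one_hot).unsqueeze(0)})
    action = int(torch.argmax(out["action_dist_inputs"][0]))
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
print(f"Total reward: {total_reward}")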
If you want to get a quick preview of which algorithms and environments RLlib supports, click the dropdowns below:
Why choose RLlib?#
Learn More#
RLlib Environments
Get started with environments supported by RLlib, such as Farama Foundation's Gymnasium, PettingZoo, and many custom formats for vectorized and multi-agent environments.
RLlib Key Concepts
Learn more about the core concepts of RLlib, such as environments, algorithms, and policies.
RLlib Algorithms
See the many available RL algorithms of RLlib for model-free and model-based RL, on-policy and off-policy training, multi-agent RL, and more.
Customizing RLlib#
RLlib provides powerful, yet easy to use APIs for customizing all aspects of your experimental and production training workflows. For example, you may code your own environments in Python using the Farama Foundation's gymnasium or DeepMind's OpenSpiel, provide custom PyTorch models, write your own optimizer setups and loss definitions, or define custom exploratory behavior.
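As an illustration of the first point, here is a minimal sketch of a custom gymnasium environment plugged into a config. The environment itself (a trivial one-step guessing game) is made up for this example; RLlib instantiates an env class with an env-config argument:

import gymnasium as gym
import numpy as np
from ray.rllib.algorithms.ppo import PPOConfig

class GuessTheTarget(gym.Env):
    """Hypothetical toy env: guess a fixed target number in [0, 4]."""

    def __init__(self, config=None):
        self.observation_space = gym.spaces.Box(0.0, 1.0, (1,), np.float32)
        self.action_space = gym.spaces.Discrete(5)
        self._target = 3

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        return np.zeros(1, dtype=np.float32), {}

    def step(self, action):
        reward = 1.0 if action == self._target else -0.1
        # One-step episodes: terminate immediately after each guess.
        return np.zeros(1, dtype=np.float32), reward, True, False, {}

# Pass the env class directly to the config.
config = PPOConfig().environment(GuessTheTarget)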