Note
From Ray 2.6.0 onwards, RLlib is adopting a new stack for training and model customization, gradually replacing the ModelV2 API and some convoluted parts of the Policy API with the RLModule API. See the RLlib documentation for details.
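To illustrate the direction of this migration, below is a minimal, version-dependent sketch of opting a PPO experiment into the RLModule-based stack. The underscore-prefixed flags are assumptions based on the Ray 2.6 era of the API and were renamed in later releases, so consult the documentation for your installed version.

```python
# Minimal sketch (assumes Ray ~2.6): opt a PPO run into the new RLModule stack.
# The two experimental flags below changed names in later Ray releases;
# treat them as placeholders for your version's switches.
from ray.rllib.algorithms.ppo import PPOConfig

config = (
    PPOConfig()
    .environment("CartPole-v1")
    .rl_module(_enable_rl_module_api=True)  # use RLModule instead of ModelV2
    .training(_enable_learner_api=True)     # use the new Learner-based training stack
)

algo = config.build()
print(algo.train()["episode_reward_mean"])
```

With the flags disabled (the default in Ray 2.6), the same config builds an algorithm on the legacy ModelV2/Policy stack, which is why existing code keeps working during the transition.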
Advanced Features of the RLlib Python API
Working With Models, Preprocessors and Action Distributions
Checkpointing your Algorithms and Policies, and Exporting your Models
How To Customize Your Policies?
How To Use Sample Collections and Trajectory Views?
Working With Offline Data
Working with ReplayBuffers
How To Contribute To RLlib?
How To Work With the RLlib CLI?
How To Use the RLlib Catalogs