Ray with Cluster Managers

Note

If you’re using AWS, Azure, or GCP, you can use the Ray Cluster Launcher to simplify the cluster setup process.
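
For example, the launcher is driven by a small YAML file and the ray up / ray attach / ray down commands. The snippet below is only an illustrative sketch for AWS; the cluster name, worker count, region, and SSH user are placeholder assumptions, and the node-type configurations are omitted. See Launching Cloud Clusters with Ray for complete configurations.

    # cluster.yaml -- minimal illustrative config; field values are assumptions.
    #
    # Start the cluster:      ray up cluster.yaml
    # SSH into the head node: ray attach cluster.yaml
    # Tear it down:           ray down cluster.yaml

    cluster_name: demo
    max_workers: 2

    provider:
        type: aws
        region: us-west-2

    auth:
        ssh_user: ubuntu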

  • Deploying on Kubernetes
    • Creating a Ray Namespace
    • Starting a Ray Cluster
    • Running Ray Programs
    • Cleaning Up
    • Using GPUs
    • Questions or Issues?
  • Deploying on YARN
    • Skein Configuration
    • Packaging Dependencies
    • Ray Setup in YARN
    • Running a Job
    • Cleaning Up
    • Questions or Issues?
  • Deploying on Slurm
    • Walkthrough using Ray with SLURM
    • Python-interface SLURM scripts
    • Examples and templates
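
Whichever of the guides above you follow, the cluster manager is only responsible for starting the Ray nodes; the application code connects to the running cluster in the same way. A minimal sketch, assuming the script runs on a node that is already part of the cluster (for example, the head node):

    import ray

    # Connect to the existing Ray cluster. address="auto" picks up the head
    # node started by the Kubernetes operator, the YARN/Skein services, or the
    # Slurm batch script, as long as this process runs on one of those nodes.
    ray.init(address="auto")

    @ray.remote
    def square(x):
        return x * x

    # Tasks are scheduled across whatever worker nodes the cluster manager provided.
    print(ray.get([square.remote(i) for i in range(4)]))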