Examples

If any example is broken, or if you’d like to add an example to this page, feel free to raise an issue on our GitHub repository.

Tip

Check out the Tune tutorials page for guides on how to use Tune with your preferred machine learning library.

General Examples

  • tune_basic_example: Simple example for doing a basic random and grid search (a minimal sketch of this pattern follows the list).

  • async_hyperband_example: Example of using a simple tuning function with AsyncHyperBandScheduler.

  • hyperband_function_example: Example of using a Trainable function with HyperBandScheduler. Also uses the AsyncHyperBandScheduler.

  • pbt_function: Example of using the function API with a PopulationBasedTraining scheduler.

  • pb2_example: Example of using the Population-based Bandits (PB2) scheduler.

  • logging_example: Example of custom loggers and custom trial directory naming.
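
The examples above all follow the same basic pattern. Here is a minimal sketch of that pattern using the function API with a mixed random/grid search space and the AsyncHyperBandScheduler; the objective function, metric name, and search space are illustrative stand-ins, not code from any of the linked examples.

    from ray import tune
    from ray.tune.schedulers import AsyncHyperBandScheduler

    def objective(config):
        # Toy training loop (a stand-in): report an improving score each iteration.
        for step in range(100):
            score = config["width"] * 0.1 + config["height"] * step * 0.01
            tune.report(mean_score=score)  # the metric the scheduler acts on

    analysis = tune.run(
        objective,
        config={
            "width": tune.uniform(0, 1),            # random search
            "height": tune.grid_search([1, 2, 3]),  # grid search
        },
        num_samples=4,  # each grid point is sampled 4 times (12 trials total)
        scheduler=AsyncHyperBandScheduler(metric="mean_score", mode="max"),
    )
    print("Best config:", analysis.get_best_config(metric="mean_score", mode="max"))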

Trainable Class Examples

Though it is preferable to use the Function API, Tune also supports a Class-based API for training. A short sketch of the Class-based API follows the examples below.

  • hyperband_example: Example of using a Trainable class with HyperBandScheduler. Also uses the AsyncHyperBandScheduler.

  • pbt_example: Example of using a Trainable class with PopulationBasedTraining scheduler.
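
For reference, here is a minimal sketch of the Class-based API; the Trainable subclass and its "training" logic are illustrative stand-ins.

    from ray import tune
    from ray.tune.schedulers import HyperBandScheduler

    class MyTrainable(tune.Trainable):
        def setup(self, config):
            self.lr = config["lr"]
            self.score = 0.0

        def step(self):
            # Called once per training iteration; returns the metrics to report.
            self.score += self.lr  # stand-in for real training progress
            return {"mean_score": self.score}

    tune.run(
        MyTrainable,
        config={"lr": tune.loguniform(1e-4, 1e-1)},
        num_samples=8,
        stop={"training_iteration": 20},
        scheduler=HyperBandScheduler(metric="mean_score", mode="max"),
    )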

Search Algorithm Examples

SigOpt (Contributed)

tune-sklearn examples

See ray-project/tune-sklearn for a comprehensive list of examples leveraging Tune’s sklearn interface.
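
As a taste of that interface, here is a minimal sketch, assuming the tune-sklearn package is installed; TuneSearchCV is designed as a drop-in replacement for scikit-learn’s RandomizedSearchCV, and the estimator and search space below are illustrative.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import SGDClassifier
    from tune_sklearn import TuneSearchCV

    X, y = make_classification(n_samples=1000, n_features=20)

    search = TuneSearchCV(
        SGDClassifier(),
        param_distributions={"alpha": (1e-4, 1e-1)},  # (low, high) range to sample
        n_trials=10,
        early_stopping=True,  # let a Tune scheduler stop bad trials early
    )
    search.fit(X, y)
    print(search.best_params_)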

Framework-specific Examples

PyTorch

  • mnist_pytorch: Converts the PyTorch MNIST example to use Tune with the function-based API (a minimal sketch of this pattern follows the list). Also shows how to easily convert something relying on argparse to use Tune.

  • ddp_mnist_torch: An example showing how to use DistributedDataParallel with Ray Tune. This enables both distributed training and distributed hyperparameter tuning.

  • cifar10_pytorch: Uses PyTorch to tune a simple model on CIFAR10.

  • pbt_convnet_function_example: Example training a ConvNet with checkpointing in function API.
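
The function-API PyTorch examples all share this shape: build the model and optimizer from the sampled config, then report a metric each epoch. The model and data in this sketch are stand-ins, not the MNIST or CIFAR10 code itself.

    import torch
    import torch.nn as nn
    from ray import tune

    def train_model(config):
        # Build the model and optimizer from the sampled config.
        model = nn.Linear(10, 1)
        optimizer = torch.optim.SGD(model.parameters(), lr=config["lr"])
        loss_fn = nn.MSELoss()
        x, y = torch.randn(64, 10), torch.randn(64, 1)  # stand-in data
        for epoch in range(10):
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
            tune.report(loss=loss.item())  # drives schedulers and stopping

    tune.run(train_model, config={"lr": tune.loguniform(1e-4, 1e-1)}, num_samples=4)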

PyTorch Lightning

Wandb, MLflow

TensorFlow/Keras

MXNet

  • mxnet_example: Simple example for using MXNet with Tune.

  • tune_cifar10_gluon: MXNet Gluon example using Tune with the function-based API on the CIFAR-10 dataset.

Horovod

XGBoost, LightGBM

  • XGBoost tutorial: A guide to tuning XGBoost parameters with Tune.

  • xgboost_example: Trains a basic XGBoost model with Tune using the function-based API and an XGBoost callback (a simplified sketch follows the list).

  • xgboost_dynamic_resources_example: Trains a basic XGBoost model with Tune using the class-based API and a ResourceChangingScheduler, ensuring all resources are used at all times.

  • lightgbm_example: Trains a basic LightGBM model with Tune using the function-based API and a LightGBM callback.
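
Here is a minimal sketch of tuning XGBoost with the function-based API. Unlike xgboost_example, which reports metrics through an XGBoost callback, this sketch reports once after training to stay short; the dataset and search space are illustrative.

    import sklearn.datasets
    import sklearn.model_selection
    import xgboost as xgb
    from ray import tune

    def train_xgboost(config):
        data, labels = sklearn.datasets.load_breast_cancer(return_X_y=True)
        train_x, test_x, train_y, test_y = sklearn.model_selection.train_test_split(
            data, labels, test_size=0.25)
        results = {}
        xgb.train(
            {"objective": "binary:logistic", "eval_metric": "error", **config},
            xgb.DMatrix(train_x, label=train_y),
            evals=[(xgb.DMatrix(test_x, label=test_y), "eval")],
            evals_result=results,
        )
        # Report the final validation error back to Tune.
        tune.report(error=results["eval"]["error"][-1])

    tune.run(
        train_xgboost,
        config={"max_depth": tune.randint(1, 9), "eta": tune.loguniform(1e-4, 1e-1)},
        num_samples=8,
    )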

RLlib

  • pbt_ppo_example: Example of optimizing a distributed RLlib algorithm (PPO) with the PopulationBasedTraining scheduler (a minimal sketch follows the list).

  • pb2_ppo_example: Example of optimizing a distributed RLlib algorithm (PPO) with the PB2 scheduler. Uses a small population size of 4, so it can train on a laptop.
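
In the spirit of these examples, here is a minimal sketch of running PBT over an RLlib algorithm; the environment, population size, and mutated hyperparameters are illustrative, not taken from the linked examples.

    import random

    from ray import tune
    from ray.tune.schedulers import PopulationBasedTraining

    pbt = PopulationBasedTraining(
        time_attr="training_iteration",
        metric="episode_reward_mean",  # standard RLlib reward metric
        mode="max",
        perturbation_interval=5,
        hyperparam_mutations={
            # Hyperparameters resampled/perturbed when a trial is exploited.
            "lr": lambda: random.uniform(1e-5, 1e-3),
            "train_batch_size": [1000, 2000, 4000],
        },
    )

    tune.run(
        "PPO",  # RLlib algorithms can be referenced by name
        config={"env": "CartPole-v0", "num_workers": 1, "lr": 1e-4},
        num_samples=4,  # population size
        scheduler=pbt,
        stop={"training_iteration": 50},
    )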

🤗 Huggingface Transformers

Contributed Examples

  • pbt_tune_cifar10_with_keras: A contributed example of tuning a Keras model on CIFAR10 with the PopulationBasedTraining scheduler.

  • genetic_example: Optimizing the Michalewicz function using the contributed GeneticSearch algorithm with AsyncHyperBandScheduler.

Open Source Projects using Tune

Here are some of the popular open source repositories and research projects that leverage Tune. Feel free to submit a pull request to add a project (or to request the removal of one!).

  • Softlearning: Softlearning is a reinforcement learning framework for training maximum entropy policies in continuous domains. Includes the official implementation of the Soft Actor-Critic algorithm.

  • Flambe: An ML framework to accelerate research and its path to production. See flambe.ai.

  • Population Based Augmentation: Population Based Augmentation (PBA) is an algorithm that quickly and efficiently learns data augmentation functions for neural network training. PBA matches state-of-the-art results on CIFAR with one thousand times less compute.

  • Fast AutoAugment by Kakao: Fast AutoAugment (Accepted at NeurIPS 2019) learns augmentation policies using a more efficient search strategy based on density matching.

  • Allentune: Hyperparameter Search for AllenNLP from AllenAI.

  • machinable: A modular configuration system for machine learning research. See machinable.org.

  • NeuroCard: NeuroCard (Accepted at VLDB 2021) is a neural cardinality estimator for multi-table join queries. It uses state-of-the-art deep density models to learn correlations across relational database tables.