Examples¶
If any example is broken, or if you’d like to add an example to this page, feel free to raise an issue on our GitHub repository.
Tip
Check out the Tune tutorials page for guides on how to use Tune with your preferred machine learning library.
General Examples¶
tune_basic_example: Simple example for doing a basic random and grid search (see the sketch after this list).
async_hyperband_example: Example of using a simple tuning function with AsyncHyperBandScheduler.
hyperband_function_example: Example of using a Trainable function with HyperBandScheduler. Also uses the AsyncHyperBandScheduler.
pbt_function: Example of using the function API with a PopulationBasedTraining scheduler.
pb2_example: Example of using the Population-based Bandits (PB2) scheduler.
logging_example: Example of custom loggers and custom trial directory naming.
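For orientation, here is a minimal sketch of what a basic random and grid search looks like. This assumes the Ray 1.x tune.run API; the objective function and its "width"/"height" parameters are hypothetical.

```python
# Minimal random + grid search sketch, assuming the Ray 1.x API.
# The objective and its "width"/"height" parameters are hypothetical.
from ray import tune


def objective(config):
    # Toy objective: report a score derived from the sampled parameters.
    score = config["width"] ** 2 + config["height"]
    tune.report(score=score)


analysis = tune.run(
    objective,
    config={
        "width": tune.uniform(0, 1),            # random search over a range
        "height": tune.grid_search([1, 2, 3]),  # grid search over fixed values
    },
    num_samples=5,  # each sample is crossed with all grid values
)
print("Best config:", analysis.get_best_config(metric="score", mode="max"))
```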
Trainable Class Examples¶
Though it is preferable to use the Function API, Tune also supports a class-based API for training (see the sketch after this list).
hyperband_example: Example of using a Trainable class with HyperBandScheduler. Also uses the AsyncHyperBandScheduler.
pbt_example: Example of using a Trainable class with PopulationBasedTraining scheduler.
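For comparison with the function API, here is a minimal sketch of a class-based trainable, assuming the Ray 1.x tune.Trainable interface; the trainable and its config are hypothetical.

```python
# Minimal class-based API sketch, assuming Ray 1.x; the trainable
# and its "lr" parameter are hypothetical.
from ray import tune


class MyTrainable(tune.Trainable):
    def setup(self, config):
        # Runs once per trial to initialize state from the config.
        self.lr = config["lr"]
        self.score = 0.0

    def step(self):
        # Runs once per training iteration; returns metrics for Tune.
        self.score += self.lr
        return {"score": self.score}


tune.run(
    MyTrainable,
    config={"lr": tune.grid_search([0.01, 0.1])},
    stop={"training_iteration": 5},  # end each trial after 5 iterations
)
```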
Search Algorithm Examples¶
ax_example: Example script showing usage of AxSearch [Ax website]
dragonfly_example: Example script showing usage of DragonflySearch [Dragonfly website]
skopt_example: Example script showing usage of SkOptSearch [Scikit-Optimize website]
hyperopt_example: Example script showing usage of HyperOptSearch [HyperOpt website] (see the sketch at the end of this section)
bayesopt_example: Example script showing usage of BayesOptSearch [BayesianOptimization website]
bohb_example: Example script showing usage of TuneBOHB [BOHB website]
nevergrad_example: Example script showing usage of NevergradSearch [Nevergrad website]
optuna_example: Example script showing usage of OptunaSearch [Optuna website]
zoopt_example: Example script showing usage of ZOOptSearch [ZOOpt website]
sigopt_example: Example script showing usage of SigOptSearch [SigOpt website]
SigOpt (Contributed)¶
sigopt_multi_objective_example: Example using SigOpt’s multi-objective functionality.
sigopt_prior_beliefs_example: Example using SigOpt’s support for prior beliefs.
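Each of the search algorithms above plugs into tune.run through the search_alg argument. Below is a minimal sketch using HyperOptSearch, assuming the Ray 1.x ray.tune.suggest module and an installed hyperopt package; the quadratic objective is hypothetical.

```python
# Minimal search-algorithm sketch, assuming Ray 1.x and hyperopt.
# The quadratic objective is hypothetical.
from ray import tune
from ray.tune.suggest.hyperopt import HyperOptSearch


def objective(config):
    tune.report(loss=(config["x"] - 2) ** 2)


tune.run(
    objective,
    config={"x": tune.uniform(-10, 10)},
    # The search algorithm proposes new configs based on past results.
    search_alg=HyperOptSearch(metric="loss", mode="min"),
    num_samples=20,
)
```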
tune-sklearn examples¶
See the ray-project/tune-sklearn examples for a comprehensive list of examples leveraging Tune’s sklearn interface.
Framework-specific Examples¶
PyTorch¶
mnist_pytorch: Converts the PyTorch MNIST example to use Tune with the function-based API. Also shows how to easily convert something relying on argparse to use Tune.
ddp_mnist_torch: An example showing how to use DistributedDataParallel with Ray Tune. This enables both distributed training and distributed hyperparameter tuning.
cifar10_pytorch: Uses PyTorch to tune a simple model on CIFAR10.
pbt_convnet_function_example: Example of training a ConvNet with checkpointing using the function API.
PyTorch Lightning¶
mnist_ptl_mini: A minimal example of using PyTorch Lightning to train an MNIST model. This example utilizes the Ray Tune-provided PyTorch Lightning callbacks. See also this tutorial for a full walkthrough.
mnist_pytorch_lightning: A comprehensive example using PyTorch Lightning to train an MNIST model. This example showcases how to use various search optimization techniques and utilizes the Ray Tune-provided PyTorch Lightning callbacks.
A walkthrough tutorial for using Ray Tune with PyTorch Lightning.
Wandb, MLflow¶
wandb_example: Example for using Weights & Biases with Ray Tune.
mlflow_example: Example for using MLflow with Ray Tune.
mlflow_ptl_example: Example for using MLflow and PyTorch Lightning with Ray Tune.
TensorFlow/Keras¶
tune_mnist_keras: Converts the Keras MNIST example to use Tune with the function-based API and a Keras callback (see the sketch after this list). Also shows how to easily convert something relying on argparse to use Tune.
pbt_memnn_example: Example of training a Memory NN on bAbI with Keras using PBT.
tf_mnist_example: Converts the Advanced TF2.0 MNIST example to use Tune with the Trainable API. Uses tf.function. Original code from TensorFlow: https://www.tensorflow.org/tutorials/quickstart/advanced
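To illustrate the callback pattern these Keras examples rely on, here is a rough sketch that reports a named Keras metric back to Tune at the end of each epoch. It assumes the Ray 1.x ray.tune.integration.keras module; the model architecture and search space are illustrative.

```python
# Keras callback sketch, assuming the Ray 1.x integration module.
# The model architecture and search space are illustrative.
from tensorflow import keras
from ray import tune
from ray.tune.integration.keras import TuneReportCallback


def train_mnist(config):
    (x_train, y_train), _ = keras.datasets.mnist.load_data()
    x_train = x_train / 255.0
    model = keras.Sequential([
        keras.layers.Flatten(input_shape=(28, 28)),
        keras.layers.Dense(config["hidden"], activation="relu"),
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(
        optimizer=keras.optimizers.SGD(learning_rate=config["lr"]),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"])
    model.fit(
        x_train, y_train, epochs=2, verbose=0,
        # Report the Keras "accuracy" metric to Tune as "mean_accuracy".
        callbacks=[TuneReportCallback({"mean_accuracy": "accuracy"})])


tune.run(
    train_mnist,
    config={
        "hidden": tune.choice([32, 64]),
        "lr": tune.loguniform(1e-4, 1e-1),
    },
    metric="mean_accuracy",
    mode="max",
    num_samples=4,
)
```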
MXNet¶
mxnet_example: Simple example for using MXNet with Tune.
tune_cifar10_gluon: MXNet Gluon example using Tune with the function-based API on the CIFAR-10 dataset.
Horovod¶
horovod_simple: Leverages the Horovod-Tune integration to launch a distributed training + tuning job.
XGBoost, LightGBM¶
XGBoost tutorial: A guide to tuning XGBoost parameters with Tune.
xgboost_example: Trains a basic XGBoost model with Tune using the function-based API and an XGBoost callback (see the sketch after this list).
lightgbm_example: Trains a basic LightGBM model with Tune using the function-based API and a LightGBM callback.
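The XGBoost callback works the same way as the Keras one: it forwards the metrics xgboost computes on the evaluation set to Tune after every boosting round. A rough sketch, assuming the Ray 1.x ray.tune.integration.xgboost module and scikit-learn's breast-cancer dataset; the search space is illustrative.

```python
# XGBoost callback sketch, assuming the Ray 1.x integration module
# and scikit-learn; the search space is illustrative.
import sklearn.datasets
import sklearn.model_selection
import xgboost as xgb

from ray import tune
from ray.tune.integration.xgboost import TuneReportCallback


def train_breast_cancer(config):
    data, labels = sklearn.datasets.load_breast_cancer(return_X_y=True)
    train_x, test_x, train_y, test_y = sklearn.model_selection.train_test_split(
        data, labels, test_size=0.25)
    xgb.train(
        config,
        xgb.DMatrix(train_x, label=train_y),
        evals=[(xgb.DMatrix(test_x, label=test_y), "eval")],
        # Report xgboost's "eval-error" metric to Tune as "mean_error".
        callbacks=[TuneReportCallback({"mean_error": "eval-error"})])


tune.run(
    train_breast_cancer,
    config={
        "objective": "binary:logistic",
        "eval_metric": ["error"],
        "max_depth": tune.randint(1, 9),
        "eta": tune.loguniform(1e-4, 1e-1),
    },
    num_samples=10,
    metric="mean_error",
    mode="min",
)
```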
RLlib¶
pbt_ppo_example: Example of optimizing a distributed RLlib algorithm (PPO) with the PopulationBasedTraining scheduler (see the sketch after this list).
pb2_ppo_example: Example of optimizing a distributed RLlib algorithm (PPO) with the PB2 scheduler. Uses a small population size of 4, so it can be trained on a laptop.
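As a sketch of the PBT + RLlib pattern, assuming the Ray 1.x tune.run("PPO", ...) entry point; the mutation ranges and population size here are illustrative, not the examples' actual settings.

```python
# PBT + RLlib sketch, assuming Ray 1.x; mutation ranges and the
# population size are illustrative.
import random

from ray import tune
from ray.tune.schedulers import PopulationBasedTraining

pbt = PopulationBasedTraining(
    time_attr="time_total_s",
    metric="episode_reward_mean",
    mode="max",
    perturbation_interval=120,  # seconds between exploit/explore steps
    hyperparam_mutations={
        # Resample or perturb these between exploitation steps.
        "lr": lambda: random.uniform(1e-5, 1e-3),
        "clip_param": lambda: random.uniform(0.1, 0.4),
    },
)

tune.run(
    "PPO",  # RLlib algorithms are registered trainables, addressable by name
    config={
        "env": "CartPole-v0",
        "num_workers": 1,
        "lr": 1e-4,
        "clip_param": 0.2,
    },
    scheduler=pbt,
    num_samples=4,  # population of 4 trials
)
```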
🤗 Hugging Face Transformers¶
pbt_transformers_example: Fine-tunes a Hugging Face transformer with Tune’s Population Based Training.
Contributed Examples¶
pbt_tune_cifar10_with_keras: A contributed example of tuning a Keras model on CIFAR10 with the PopulationBasedTraining scheduler.
genetic_example: Optimizing the Michalewicz function using the contributed GeneticSearch algorithm with AsyncHyperBandScheduler.
Open Source Projects using Tune¶
Here are some of the popular open source repositories and research projects that leverage Tune. Feel free to submit a pull request to add a project (or to request the removal of a listed one!).
Softlearning: Softlearning is a reinforcement learning framework for training maximum entropy policies in continuous domains. Includes the official implementation of the Soft Actor-Critic algorithm.
Flambe: An ML framework to accelerate research and its path to production. See flambe.ai.
Population Based Augmentation: Population Based Augmentation (PBA) is an algorithm that quickly and efficiently learns data augmentation functions for neural network training. PBA matches state-of-the-art results on CIFAR with one thousand times less compute.
Fast AutoAugment by Kakao: Fast AutoAugment (Accepted at NeurIPS 2019) learns augmentation policies using a more efficient search strategy based on density matching.
Allentune: Hyperparameter Search for AllenNLP from AllenAI.
machinable: A modular configuration system for machine learning research. See machinable.org.
NeuroCard: NeuroCard (Accepted at VLDB 2021) is a neural cardinality estimator for multi-table join queries. It uses state-of-the-art deep density models to learn correlations across relational database tables.