.. _tune-60-seconds:

========================
Key Concepts of Ray Tune
========================

.. TODO: should we introduce checkpoints as well?
.. TODO: should we at least mention "Stopper" classes here?

Let's quickly walk through the key concepts you need to know to use Tune.
If you want to see practical tutorials right away, visit our :ref:`user guides `.
In essence, Tune has six crucial components that you need to understand.

First, you define the hyperparameters you want to tune in a `search space` and pass them into a `trainable`
that specifies the objective you want to tune.
Then you select a `search algorithm` to effectively optimize your parameters and optionally use a
`scheduler` to stop searches early and speed up your experiments.
Together with other configuration, your `trainable`, search algorithm, and scheduler are passed into ``Tuner``,
which runs your experiments and creates `trials`.
The `Tuner` returns a `ResultGrid` to inspect your experiment results.
The following figure shows an overview of these components, which we cover in detail in the next sections.

.. image:: images/tune_flow.png

.. _tune_60_seconds_trainables:

Ray Tune Trainables
-------------------

In short, a :ref:`Trainable ` is an object that you can pass into a Tune run.
Ray Tune has two ways of defining a `trainable`, namely the :ref:`Function API ` and the :ref:`Class API `.
Both are valid ways of defining a `trainable`, but the Function API is generally recommended and is used
throughout the rest of this guide.

Let's say we want to optimize a simple objective function like ``a * (x ** 2) + b``, in which ``a`` and ``b`` are the
hyperparameters we want to tune to `minimize` the objective.
Since the objective also has a variable ``x``, we need to test for different values of ``x``.
Given concrete choices for ``a``, ``b``, and ``x``, we can evaluate the objective function and get a `score` to minimize.

.. tab-set::

    .. tab-item:: Function API

        With the :ref:`function-based API ` you create a function (here called ``trainable``) that takes in a
        dictionary of hyperparameters.
        This function computes a ``score`` in a "training loop" and `reports` this score back to Tune:

        .. literalinclude:: doc_code/key_concepts.py
            :language: python
            :start-after: __function_api_start__
            :end-before: __function_api_end__

        Note that we use ``session.report(...)`` to report the intermediate ``score`` in the training loop, which can be
        useful in many machine learning tasks.
        If you just want to report the final ``score`` outside of this loop, you can simply return the score at the
        end of the ``trainable`` function with ``return {"score": score}``.
        You can also use ``yield {"score": score}`` instead of ``session.report()``.

    .. tab-item:: Class API

        Here's an example of specifying the objective function using the :ref:`class-based API `:

        .. literalinclude:: doc_code/key_concepts.py
            :language: python
            :start-after: __class_api_start__
            :end-before: __class_api_end__

        .. tip:: ``session.report`` can't be used within a ``Trainable`` class.

Learn more about the details of :ref:`Trainables here ` and :doc:`have a look at our examples `.
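For orientation, here is a minimal sketch of what such a function-based trainable might look like when written
inline (the canonical version lives in ``doc_code/key_concepts.py``; the loop length of ``20`` is an arbitrary
illustrative choice):

.. code-block:: python

    from ray.air import session


    def trainable(config: dict):
        """Illustrative sketch: evaluate a * (x ** 2) + b for several values of x."""
        for x in range(20):
            score = config["a"] * (x ** 2) + config["b"]
            # Report the intermediate score back to Tune on every iteration.
            session.report({"score": score})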
Next, let's have a closer look at what the ``config`` dictionary is that you pass into your trainables.

.. _tune-key-concepts-search-spaces:

Tune Search Spaces
------------------

To optimize your *hyperparameters*, you have to define a *search space*.
A search space defines valid values for your hyperparameters and can specify how these values are sampled
(e.g. from a uniform distribution or a normal distribution).

Tune offers various functions to define search spaces and sampling methods.
:ref:`You can find the documentation of these search space definitions here `.

Here's an example covering all search space functions. Again,
:ref:`here is the full explanation of all these functions `.

.. literalinclude:: doc_code/key_concepts.py
    :language: python
    :start-after: __config_start__
    :end-before: __config_end__

.. _tune_60_seconds_trials:

Tune Trials
-----------

You use :ref:`Tuner.fit ` to execute and manage hyperparameter tuning and generate your `trials`.
At a minimum, your ``Tuner`` call takes in a trainable as its first argument and a ``param_space`` dictionary
to define the search space.

The ``Tuner.fit()`` function also provides many features such as :ref:`logging `,
:ref:`checkpointing `, and :ref:`early stopping `.
In our example of minimizing ``a * (x ** 2) + b``, a simple Tune run with a basic search space
for ``a`` and ``b`` looks like this:

.. literalinclude:: doc_code/key_concepts.py
    :language: python
    :start-after: __run_tunable_start__
    :end-before: __run_tunable_end__

``Tuner.fit`` will generate a couple of hyperparameter configurations from its arguments,
wrapping them into :ref:`Trial objects `.

Trials contain a lot of information.
For instance, you can get the hyperparameter configuration (``trial.config``), the trial ID (``trial.trial_id``),
the trial's resource specification (``resources_per_trial`` or ``trial.placement_group_factory``), and many other values.

By default ``Tuner.fit`` will execute until all trials stop or error.
Here's an example output of a trial run:

.. TODO: how to make sure this doesn't get outdated?

.. code-block:: bash

    == Status ==
    Memory usage on this node: 11.4/16.0 GiB
    Using FIFO scheduling algorithm.
    Resources requested: 1/12 CPUs, 0/0 GPUs, 0.0/3.17 GiB heap, 0.0/1.07 GiB objects
    Result logdir: /Users/foo/ray_results/myexp
    Number of trials: 1 (1 RUNNING)
    +----------------------+----------+---------------------+-----------+--------+--------+----------------+-------+
    | Trial name           | status   | loc                 |         a |      b |  score | total time (s) |  iter |
    |----------------------+----------+---------------------+-----------+--------+--------+----------------+-------|
    | Trainable_a826033a   | RUNNING  | 10.234.98.164:31115 |  0.303706 | 0.0761 | 0.1289 |        7.54952 |    15 |
    +----------------------+----------+---------------------+-----------+--------+--------+----------------+-------+

You can also easily run just 10 trials by specifying the number of samples (``num_samples``).
Tune automatically :ref:`determines how many trials will run in parallel `.
Note that instead of the number of samples, you can also specify a time budget in seconds through ``time_budget_s``,
if you set ``num_samples=-1``.

.. literalinclude:: doc_code/key_concepts.py
    :language: python
    :start-after: __run_tunable_samples_start__
    :end-before: __run_tunable_samples_end__

Finally, you can use more interesting search spaces to optimize your hyperparameters
via Tune's :ref:`search space API `, like using random samples or grid search.
Here's an example of uniformly sampling from ``[0, 1]`` for ``a`` and ``b``:

.. literalinclude:: doc_code/key_concepts.py
    :language: python
    :start-after: __search_space_start__
    :end-before: __search_space_end__

To learn more about the various ways of configuring your Tune runs,
check out the :ref:`Tuner API reference `.
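Putting the pieces of this section together, a minimal inline sketch of such a run could look like the following
(assuming the ``trainable`` function sketched earlier; the canonical examples live in ``doc_code/key_concepts.py``):

.. code-block:: python

    from ray import tune

    # Sample "a" and "b" uniformly from [0, 1] and launch 10 trials.
    tuner = tune.Tuner(
        trainable,
        param_space={"a": tune.uniform(0, 1), "b": tune.uniform(0, 1)},
        tune_config=tune.TuneConfig(num_samples=10),
    )
    results = tuner.fit()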
.. _search-alg-ref:

Tune Search Algorithms
----------------------

To optimize the hyperparameters of your training process, you use
a :ref:`Search Algorithm ` which suggests hyperparameter configurations.
If you don't specify a search algorithm, Tune will use random search by default, which can provide you
with a good starting point for your hyperparameter optimization.

For instance, to use Tune with simple Bayesian optimization through the ``bayesian-optimization`` package
(make sure to first run ``pip install bayesian-optimization``), we can define an ``algo`` using ``BayesOptSearch``.
Simply pass in a ``search_alg`` argument to ``tune.TuneConfig``, which is taken in by ``Tuner``:

.. literalinclude:: doc_code/key_concepts.py
    :language: python
    :start-after: __bayes_start__
    :end-before: __bayes_end__

Tune has Search Algorithms that integrate with many popular **optimization** libraries,
such as :ref:`HyperOpt ` or :ref:`Optuna `.
Tune automatically converts the provided search space into the search spaces
the search algorithms and underlying libraries expect.
See the :ref:`Search Algorithm API documentation ` for more details.

Here's an overview of all available search algorithms in Tune:

.. list-table::
   :widths: 5 5 2 10
   :header-rows: 1

   * - SearchAlgorithm
     - Summary
     - Website
     - Code Example
   * - :ref:`Random search/grid search `
     - Random search/grid search
     -
     - :doc:`/tune/examples/includes/tune_basic_example`
   * - :ref:`AxSearch `
     - Bayesian/Bandit Optimization
     - [`Ax `__]
     - :doc:`/tune/examples/includes/ax_example`
   * - :ref:`HyperOptSearch `
     - Tree-Parzen Estimators
     - [`HyperOpt `__]
     - :doc:`/tune/examples/hyperopt_example`
   * - :ref:`BayesOptSearch `
     - Bayesian Optimization
     - [`BayesianOptimization `__]
     - :doc:`/tune/examples/includes/bayesopt_example`
   * - :ref:`TuneBOHB `
     - Bayesian Opt/HyperBand
     - [`BOHB `__]
     - :doc:`/tune/examples/includes/bohb_example`
   * - :ref:`NevergradSearch `
     - Gradient-free Optimization
     - [`Nevergrad `__]
     - :doc:`/tune/examples/includes/nevergrad_example`
   * - :ref:`OptunaSearch `
     - Optuna search algorithms
     - [`Optuna `__]
     - :doc:`/tune/examples/optuna_example`

.. note:: Unlike :ref:`Tune's Trial Schedulers `, Tune Search Algorithms cannot affect or stop training processes.
    However, you can use them together with Trial Schedulers to stop the evaluation of bad trials early.

If you want to implement your own search algorithm, the interface is easy to implement;
you can :ref:`read the instructions here `.

Tune also provides helpful utilities to use with Search Algorithms:

* :ref:`repeater`: Support for running each *sampled hyperparameter* with multiple random seeds.
* :ref:`limiter`: Limits the amount of concurrent trials when running optimization (see the sketch below).
* :ref:`shim`: Allows creation of the search algorithm object given a string.
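For example, a minimal sketch of limiting concurrency with :ref:`limiter` might look like this
(assuming Ray 2.x module paths and the ``bayesian-optimization`` package from the example above):

.. code-block:: python

    from ray import tune
    from ray.tune.search import ConcurrencyLimiter
    from ray.tune.search.bayesopt import BayesOptSearch

    # Wrap the searcher so that at most two trials run at the same time.
    algo = ConcurrencyLimiter(BayesOptSearch(), max_concurrent=2)

    tuner = tune.Tuner(
        trainable,
        tune_config=tune.TuneConfig(
            metric="score",
            mode="min",
            search_alg=algo,
            num_samples=20,
        ),
        param_space={"a": tune.uniform(0, 1), "b": tune.uniform(0, 1)},
    )
    results = tuner.fit()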
Note that in the Bayesian optimization example above we tell Tune to ``stop`` after ``20`` training iterations.
This way of stopping trials with explicit rules is useful, but in many cases we can do even better with `schedulers`.

.. _schedulers-ref:

Tune Schedulers
---------------

To make your training process more efficient, you can use a :ref:`Trial Scheduler `.
For instance, in our ``trainable`` example minimizing a function in a training loop, we used ``session.report()``
to report `incremental` results for a hyperparameter configuration selected by a search algorithm.
Based on these reported results, a Tune scheduler can decide whether to stop the trial early or not.

If you don't specify a scheduler, Tune will use a first-in-first-out (FIFO) scheduler by default, which simply
passes through the trials selected by your search algorithm in the order they were picked
and does not perform any early stopping.

In short, schedulers can stop, pause, or tweak the hyperparameters of running trials,
potentially making your hyperparameter tuning process much faster.
Unlike search algorithms, :ref:`Trial Schedulers ` do not select which hyperparameter configurations to evaluate.

Here's a quick example of using the ``HyperBand`` scheduler to tune an experiment.
All schedulers take in a ``metric``, which is the value reported by your trainable.
The ``metric`` is then maximized or minimized according to the ``mode`` you provide.
To use a scheduler, just pass in a ``scheduler`` argument to ``tune.TuneConfig``, which is taken in by ``Tuner``:

.. literalinclude:: doc_code/key_concepts.py
    :language: python
    :start-after: __hyperband_start__
    :end-before: __hyperband_end__

Tune includes distributed implementations of early stopping algorithms such as
`Median Stopping Rule `__, `HyperBand `__, and `ASHA `__.
Tune also includes a distributed implementation of `Population Based Training (PBT) `__
and `Population Based Bandits (PB2) `__.

.. tip:: The easiest scheduler to start with is the ``ASHAScheduler``, which aggressively terminates low-performing trials.

When using schedulers, you may face compatibility issues, as shown in the compatibility matrix below.
Certain schedulers cannot be used with search algorithms,
and certain schedulers require that you implement :ref:`checkpointing `.

Schedulers can dynamically change trial resource requirements during tuning.
This is implemented in :ref:`ResourceChangingScheduler`, which can wrap around any other scheduler.

.. list-table:: Scheduler Compatibility Matrix
   :header-rows: 1

   * - Scheduler
     - Needs Checkpointing?
     - SearchAlg Compatible?
     - Example
   * - :ref:`ASHA `
     - No
     - Yes
     - :doc:`Link `
   * - :ref:`Median Stopping Rule `
     - No
     - Yes
     - :ref:`Link `
   * - :ref:`HyperBand `
     - Yes
     - Yes
     - :doc:`Link `
   * - :ref:`BOHB `
     - Yes
     - Only TuneBOHB
     - :doc:`Link `
   * - :ref:`Population Based Training `
     - Yes
     - Not Compatible
     - :doc:`Link `
   * - :ref:`Population Based Bandits `
     - Yes
     - Not Compatible
     - :doc:`Basic Example `, :doc:`PPO example `

Learn more about trial schedulers in :ref:`the scheduler API documentation `.

.. _tune-concepts-analysis:

Tune ResultGrid
---------------

``Tuner.fit()`` returns a :ref:`ResultGrid ` object which has methods you can use for analyzing your training.
The following example shows you how to access various metrics from a ``ResultGrid`` object,
such as the best available trial or the best hyperparameter configuration for that trial:

.. literalinclude:: doc_code/key_concepts.py
    :language: python
    :start-after: __analysis_start__
    :end-before: __analysis_end__

This object can also retrieve all training runs as dataframes,
allowing you to do ad-hoc data analysis over your results.

.. literalinclude:: doc_code/key_concepts.py
    :language: python
    :start-after: __results_start__
    :end-before: __results_end__
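If you prefer an inline sketch, accessing a ``ResultGrid`` could look roughly like this
(assuming the run reported a ``score`` metric as in the examples above):

.. code-block:: python

    # `results` is the ResultGrid returned by `tuner.fit()` above.
    best_result = results.get_best_result(metric="score", mode="min")

    print(best_result.config)   # hyperparameter configuration of the best trial
    print(best_result.metrics)  # last reported metrics of the best trial

    # Retrieve all trial results as a pandas DataFrame for ad-hoc analysis.
    df = results.get_dataframe()
    print(df.head())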
See the :ref:`result analysis user guide ` for more usage examples.

What's Next?
-------------

Now that you have a working understanding of Tune, check out:

* :ref:`tune-guides`: Tutorials for using Tune with your preferred machine learning library.
* :doc:`/tune/examples/index`: End-to-end examples and templates for using Tune with your preferred machine learning library.
* :doc:`/tune/getting-started`: A simple tutorial that walks you through the process of setting up a Tune experiment.

Further Questions or Issues?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. include:: /_includes/_help.rst