ray.tune.schedulers.PopulationBasedTrainingReplay
class ray.tune.schedulers.PopulationBasedTrainingReplay(policy_file: str)
Bases: FIFOScheduler
Replays a Population Based Training run.
Population Based Training does not return a single hyperparameter configuration, but rather a schedule of configurations. For instance, PBT might discover that a larger learning rate leads to good results in the first training iterations, but that a smaller learning rate is preferable later.
This scheduler enables replaying the parameter schedule from a finished PBT run. It requires that Population Based Training was run with log_config=True, which is the default setting. The scheduler accepts and trains only a single trial: it starts with the initial config of the existing trial and updates the config according to the logged schedule.
- Parameters:
policy_file – The PBT policy file. Usually this is stored in ~/ray_results/experiment_name/pbt_policy_xxx.txt, where xxx is the trial ID.
Example:
# Replaying a result from ray.tune.examples.pbt_convnet_example
from ray import train, tune
from ray.tune.examples.pbt_convnet_example import PytorchTrainable
from ray.tune.schedulers import PopulationBasedTrainingReplay

replay = PopulationBasedTrainingReplay(
    "~/ray_results/pbt_test/pbt_policy_XXXXX_00001.txt")
tuner = tune.Tuner(
    PytorchTrainable,
    run_config=train.RunConfig(
        stop={"training_iteration": 100}
    ),
    tune_config=tune.TuneConfig(
        scheduler=replay,
    ),
)
tuner.fit()
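For context, the policy file consumed above is produced by an earlier PBT run. The following is a minimal sketch of such a run; the search space, metric name, perturbation interval, and population size are illustrative assumptions, not part of this API:

# Sketch of an original PBT run that logs a replayable policy file.
# The search space, metric, and intervals below are assumptions.
from ray import train, tune
from ray.tune.examples.pbt_convnet_example import PytorchTrainable
from ray.tune.schedulers import PopulationBasedTraining

pbt = PopulationBasedTraining(
    time_attr="training_iteration",
    perturbation_interval=5,  # assumed; tune for your workload
    hyperparam_mutations={
        "lr": tune.uniform(0.0001, 0.1),  # illustrative search space
    },
    log_config=True,  # default; required so the schedule can be replayed
)
tuner = tune.Tuner(
    PytorchTrainable,
    run_config=train.RunConfig(
        name="pbt_test",
        stop={"training_iteration": 100},
    ),
    tune_config=tune.TuneConfig(
        metric="mean_accuracy",  # assumed metric reported by the trainable
        mode="max",
        scheduler=pbt,
        num_samples=4,  # population size (assumed)
    ),
)
tuner.fit()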
Methods
restore(checkpoint_path) – Restore trial scheduler from checkpoint (see the sketch after this list).
save(checkpoint_path) – Save trial scheduler to a checkpoint.
set_search_properties(metric, mode, **spec) – Pass search properties to scheduler.
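A minimal sketch of the save/restore round trip, assuming these methods are inherited unchanged from the base scheduler and using a hypothetical checkpoint path:

# Hypothetical path; save() persists scheduler state, restore() reloads it.
replay.save("/tmp/pbt_replay_scheduler.pkl")
replay.restore("/tmp/pbt_replay_scheduler.pkl")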
Attributes
CONTINUE – Status for continuing trial execution.
PAUSE – Status for pausing trial execution.
STOP – Status for stopping trial execution.