ray.rllib.callbacks.callbacks.RLlibCallback.on_offline_eval_runners_recreated

RLlibCallback.on_offline_eval_runners_recreated(*, algorithm: Algorithm, offline_eval_runner_group: OfflineEvaluationRunnerGroup, offline_eval_runner_indices: List[int], **kwargs) → None

Callback run after one or more OfflineEvaluationRunner actors have been recreated.

You can access and change the OfflineEvaluationRunners in question through the following code snippet inside your custom override of this method:

from ray.rllib.callbacks.callbacks import RLlibCallback


class MyCallbacks(RLlibCallback):
    def on_offline_eval_runners_recreated(
        self,
        *,
        algorithm,
        offline_eval_runner_group,
        offline_eval_runner_indices,
        **kwargs,
    ):
        # Define what you would like to do on each recreated
        # OfflineEvaluationRunner:
        def func(offline_eval_runner):
            # Here, we just set some arbitrary property to 1.
            offline_eval_runner._custom_property_for_evaluation = 1

        # Use the `foreach_runner` method of the runner group and
        # only loop through those runner indices that have been restarted.
        # Note that we pass `local_runner=False` as long as there are
        # remote runners.
        offline_eval_runner_group.foreach_runner(
            func,
            remote_runner_ids=offline_eval_runner_indices,
            local_runner=False,
        )
Parameters:
  • algorithm – Reference to the Algorithm instance.

  • offline_eval_runner_group – The OfflineEvaluationRunnerGroup object in which the recreated runners reside. You can use a runner_group.foreach_runner(remote_runner_ids=..., local_runner=False) method call to execute custom code on the recreated (remote) runners.

  • offline_eval_runner_indices – The list of indices of the (remote) runners that have been recreated.
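
A minimal sketch of how such a callback class could be attached to an algorithm config so that RLlib invokes it when offline evaluation runners are recreated. The choice of BCConfig, the environment, and the data path are illustrative assumptions only; your offline-evaluation settings depend on your setup:

from ray.rllib.algorithms.bc import BCConfig

# Attach the custom callback class (MyCallbacks from above) to an
# offline-RL algorithm config. The input_ path is a placeholder.
config = (
    BCConfig()
    .environment("CartPole-v1")
    .offline_data(input_="/path/to/offline/data")
    .callbacks(MyCallbacks)
)
algo = config.build()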