ray.rllib.algorithms.algorithm.Algorithm.from_checkpoint#

static Algorithm.from_checkpoint(checkpoint: str | Checkpoint, policy_ids: Container[str] | None = None, policy_mapping_fn: Callable[[Any, int | str], str] | None = None, policies_to_train: Container[str] | Callable[[str, SampleBatch | MultiAgentBatch | None], bool] | None = None) → Algorithm[source]#

Creates a new algorithm instance from a given checkpoint.

Note: This method must remain backward compatible from Ray 2.0.0 on.

Parameters:
  • checkpoint – The path (str) to the checkpoint directory to use, or an AIR Checkpoint instance to restore from.

  • policy_ids – Optional list of PolicyIDs to recover. This allows users to restore an Algorithm with only a subset of the originally present Policies.

  • policy_mapping_fn – An optional (updated) policy mapping function to use from here on.

  • policies_to_train – An optional list of policy IDs to be trained, or a callable taking a PolicyID and a SampleBatchType and returning a bool (trainable or not). If None, the existing setup is kept in place. Policies whose IDs are not in the list (or for which the callable returns False) will not be updated.

Returns:

The instantiated Algorithm.
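
A minimal usage sketch (the checkpoint path and the policy ID `"pol_1"` are hypothetical placeholders; a real call needs a directory previously produced by `Algorithm.save()`):

```python
from ray.rllib.algorithms.algorithm import Algorithm

# Restore the full algorithm from a checkpoint directory
# (hypothetical path -- substitute the result of algo.save()).
algo = Algorithm.from_checkpoint("/tmp/my_checkpoint")

# Restore only a subset of the original policies, with an updated
# mapping function so every agent now uses that remaining policy.
algo = Algorithm.from_checkpoint(
    "/tmp/my_checkpoint",
    policy_ids=["pol_1"],
    policy_mapping_fn=lambda agent_id, episode: "pol_1",
)
```

Restoring a subset via `policy_ids` is useful when a checkpoint contains many policies (e.g. from league-based training) but only a few are needed for evaluation or further training.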