ray.tune.syncer.SyncConfig

class ray.tune.syncer.SyncConfig(upload_dir: Optional[str] = None, syncer: Optional[Union[str, ray.tune.syncer.Syncer]] = 'auto', sync_period: int = 300, sync_timeout: int = 1800, sync_artifacts: bool = True, sync_on_checkpoint: bool = True)

Bases: object

Configuration object for Tune syncing.

See Appendix: Types of Tune Experiment Data for an overview of what data is synchronized.

If an upload_dir is specified, both experiment and trial checkpoints will be stored on remote (cloud) storage. Synchronization then only happens via uploading/downloading from this remote storage – no syncing will happen between nodes.

There are a few scenarios where syncing takes place:

  1. The Tune driver (on the head node) syncing the experiment directory to the cloud (which includes experiment state such as searcher state, the list of trials and their statuses, and trial metadata)

  2. Workers directly syncing trial checkpoints to the cloud

  3. Workers syncing their trial directories to the head node (this is the default option when no cloud storage is used)

  4. Workers syncing artifacts (which include all files saved in the trial directory except for checkpoints) directly to the cloud.

See How to Configure Storage Options for a Distributed Tune Experiment? for more details and examples.
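
As a rough illustration of scenarios (1) and (2), the following sketch passes a SyncConfig with an upload_dir to a Tune run. The bucket URI is a placeholder, and the Tuner/RunConfig wiring shown here is an assumption for illustration rather than part of this class's API:

    from ray import air, tune
    from ray.air import session
    from ray.tune.syncer import SyncConfig

    def trainable(config):
        # Minimal placeholder trainable that reports one metric.
        session.report({"score": config["x"] ** 2})

    # Sync experiment state and trial checkpoints to cloud storage instead of
    # syncing between nodes. "s3://my-bucket/tune-results" is a placeholder URI.
    sync_config = SyncConfig(
        upload_dir="s3://my-bucket/tune-results",
        sync_period=300,     # wait at least 5 minutes between periodic syncs
        sync_timeout=1800,   # treat a sync as hung after 30 minutes so it can be retried
    )

    tuner = tune.Tuner(
        trainable,
        param_space={"x": tune.uniform(0, 1)},
        run_config=air.RunConfig(name="sync_example", sync_config=sync_config),
    )
    tuner.fit()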

Parameters
  • upload_dir – Optional URI to sync training results and checkpoints to (e.g. s3://bucket, gs://bucket or hdfs://path). Specifying this will enable cloud-based checkpointing.

  • syncer – If upload_dir is specified, then this config accepts a custom syncer subclassing Syncer, which will be used to synchronize checkpoints to/from cloud storage (see the sketch after this parameter list). If no upload_dir is specified, this config can be set to None, which disables the default worker-to-head-node syncing. Defaults to "auto" (auto detect), which assigns a default syncer that uses pyarrow to handle cloud storage syncing when upload_dir is provided.

  • sync_period – Minimum time in seconds to wait between two sync operations. A smaller sync_period will have more up-to-date data at the sync location but introduces more syncing overhead. Defaults to 5 minutes. Note: This applies to (1) and (3). Trial checkpoints are uploaded to the cloud synchronously on every checkpoint.

  • sync_timeout – Maximum time in seconds to wait for a sync process to finish running. This is used to catch hanging sync operations so that experiment execution can continue and the syncs can be retried. Defaults to 30 minutes. Note: Currently, this timeout only affects cloud syncing: (1) and (2).

  • sync_artifacts – Whether or not to sync artifacts that are saved to the trial directory (accessed via session.get_trial_dir()) to the cloud. Artifact syncing happens at the same frequency as trial checkpoint syncing. Note: This is scenario (4).

  • sync_on_checkpoint – If True, a sync from a worker’s remote trial directory to the head node will be forced on every trial checkpoint, regardless of the sync_period. Defaults to True. Note: This is ignored if upload_dir is specified, since this only applies to worker-to-head-node syncing (3).
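
As a rough sketch of the custom syncer option described above, the following subclasses Syncer to copy trial data into another local directory. It assumes the Syncer interface expects sync_up, sync_down, and delete implementations returning True on success; the class name and target path are placeholders:

    import shutil
    from typing import List, Optional

    from ray.tune.syncer import Syncer, SyncConfig

    class LocalCopySyncer(Syncer):
        # Hypothetical syncer that "uploads" by copying into another directory.
        # The exclude patterns are ignored in this sketch.

        def sync_up(self, local_dir: str, remote_dir: str,
                    exclude: Optional[List] = None) -> bool:
            shutil.copytree(local_dir, remote_dir, dirs_exist_ok=True)
            return True

        def sync_down(self, remote_dir: str, local_dir: str,
                      exclude: Optional[List] = None) -> bool:
            shutil.copytree(remote_dir, local_dir, dirs_exist_ok=True)
            return True

        def delete(self, remote_dir: str) -> bool:
            shutil.rmtree(remote_dir, ignore_errors=True)
            return True

    # Used together with an upload_dir; the path below is a placeholder.
    sync_config = SyncConfig(
        upload_dir="/mnt/shared/tune-results",
        syncer=LocalCopySyncer(),
    )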

PublicAPI: This API is stable across Ray releases.

validate_upload_dir() -> bool

Checks if upload_dir is supported by syncer.

Returns True if upload_dir is valid, otherwise raises ValueError.

Parameters

upload_dir – Path to validate.
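
For instance, a minimal sketch (the bucket URI is a placeholder):

    from ray.tune.syncer import SyncConfig

    config = SyncConfig(upload_dir="s3://my-bucket/tune-results")
    config.validate_upload_dir()  # True if supported, raises ValueError otherwise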