Fine-tune dolly-v2-7b with Ray Train, PyTorch Lightning and FSDP#

In this example, we demonstrate how to use Ray Train to fine-tune a dolly-v2-7b model. dolly-v2-7b is a 7 billion parameter causal language model created by Databricks, derived from EleutherAI’s Pythia-6.9b, and fine-tuned on a ~15K record instruction corpus.

We load the pre-trained model from the HuggingFace model hub into a LightningModule and launch an FSDP fine-tuning job across 16 T4 GPUs with the help of Ray TorchTrainer. You can fine-tune other similar large language models in the same way.

Before starting this example, we highly recommend reading Ray Train Key Concepts and Ray Data Quickstart.

Set up a Ray cluster#

In this example, we are using a Ray cluster with a g4dn.8xlarge head node and 15 g4dn.4xlarge worker nodes. Each instance has one Tesla T4 GPU (16 GiB memory).

We define a runtime_env to install the necessary Python libraries on each node. You can skip this step if you have already installed all the required packages in your workers’ base image. We tested this example with lightning==2.0.2 and transformers==4.29.2.

import ray

ray.init(
    runtime_env={
        "pip": [
            "datasets",
            "evaluate",
            "transformers>=4.26.0",
            "torch>=1.12.0",
            "lightning>=2.0",
        ]
    }
)
MODEL_NAME = "databricks/dolly-v2-7b"

Prepare your data#

We use the tiny_shakespeare dataset for fine-tuning. It contains 40,000 lines of Shakespeare from a variety of his plays and was featured in Andrej Karpathy’s blog post ‘The Unreasonable Effectiveness of Recurrent Neural Networks’.

Dataset samples:

BAPTISTA:
I know him well: you are welcome for his sake.

GREMIO:
Saving your tale, Petruchio, I pray,
Let us, that are poor petitioners, speak too:
Baccare! you are marvellous forward.

PETRUCHIO:
O, pardon me, Signior Gremio; I would fain be doing.

Here, we adopt pre-processing logic similar to another demo: GPT-J-6B Fine-Tuning with Ray Train and DeepSpeed.

import ray
import pandas as pd
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForCausalLM

def split_text(batch: pd.DataFrame) -> pd.DataFrame:
    # Flatten the batch into a single string, then split it into sentences,
    # dropping empty lines and speaker names (lines ending with ":").
    text = list(batch["text"])
    flat_text = "".join(text)
    split_text = [
        x.strip()
        for x in flat_text.split("\n")
        if x.strip() and not x.strip().endswith(":")
    ]
    return pd.DataFrame(split_text, columns=["text"])


def tokenize(batch: pd.DataFrame) -> dict:
    # Tokenize each sentence with left padding to a fixed length of 256 tokens.
    # For causal language modeling, the labels are a copy of the input IDs.
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, padding_side="left")
    tokenizer.pad_token = tokenizer.eos_token
    ret = tokenizer(
        list(batch["text"]),
        truncation=True,
        max_length=256,
        padding="max_length",
        return_tensors="np",
    )
    ret["labels"] = ret["input_ids"].copy()
    return dict(ret)

hf_dataset = load_dataset("tiny_shakespeare")
train_ds = ray.data.from_huggingface(hf_dataset["train"])

We first split the original paragraphs into multiple sentences, then tokenize them. Here are some samples:

# First split the dataset into multiple sentences.
train_ds = train_ds.map_batches(split_text, batch_format="pandas")
train_ds.take(10)
2023-08-30 11:03:12,182	INFO dataset.py:2380 -- Tip: Use `take_batch()` instead of `take() / show()` to return records in pandas or numpy batch format.
2023-08-30 11:03:12,185	INFO streaming_executor.py:93 -- Executing DAG InputDataBuffer[Input] -> TaskPoolMapOperator[MapBatches(split_text)] -> LimitOperator[limit=10]
2023-08-30 11:03:12,186	INFO streaming_executor.py:94 -- Execution config: ExecutionOptions(resource_limits=ExecutionResources(cpu=None, gpu=None, object_store_memory=None), locality_with_output=False, preserve_order=False, actor_locality_enabled=True, verbose_progress=False)
2023-08-30 11:03:12,187	INFO streaming_executor.py:96 -- Tip: For detailed progress reporting, run `ray.data.DataContext.get_current().execution_options.verbose_progress = True`
[{'text': 'Before we proceed any further, hear me speak.'},
 {'text': 'Speak, speak.'},
 {'text': 'You are all resolved rather to die than to famish?'},
 {'text': 'Resolved. resolved.'},
 {'text': 'First, you know Caius Marcius is chief enemy to the people.'},
 {'text': "We know't, we know't."},
 {'text': "Let us kill him, and we'll have corn at our own price."},
 {'text': "Is't a verdict?"},
 {'text': "No more talking on't; let it be done: away, away!"},
 {'text': 'One word, good citizens.'}]
# Then tokenize the dataset.
train_ds = train_ds.map_batches(tokenize, batch_format="pandas")
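
To sanity-check the preprocessing before launching training, you can peek at a small tokenized batch. A quick sketch (the expected shapes follow from max_length=256):

# Fetch a couple of tokenized rows as NumPy arrays and inspect their shapes.
sample = train_ds.take_batch(2)
print(sample["input_ids"].shape)       # expected: (2, 256)
print(sample["attention_mask"].shape)  # expected: (2, 256)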

Define your lightning model#

In this example, we fine-tune the dolly-v2-7b model. It is an instruction-following large language model trained on the Databricks machine learning platform and licensed for commercial use. We load the model weights from the HuggingFace Model Hub and wrap them in a pl.LightningModule.

Note

Make sure you pass the FSDP-wrapped model parameters self.trainer.model.parameters() to the optimizer, instead of self.model.parameters().

import torch
import lightning.pytorch as pl

class DollyV2Model(pl.LightningModule):
    def __init__(self, lr=2e-5, eps=1e-8):
        super().__init__()
        self.save_hyperparameters()
        self.lr = lr
        self.eps = eps
        self.model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

    def forward(self, batch):
        outputs = self.model(
            batch["input_ids"], 
            attention_mask=batch["attention_mask"], 
            labels=batch["labels"]
        )
        return outputs.loss

    def training_step(self, batch, batch_idx):
        loss = self.forward(batch)
        self.log("train_loss", loss, prog_bar=True, on_step=True)
        return loss

    def configure_optimizers(self):
        if self.global_rank == 0:
            print(self.trainer.model)
        return torch.optim.AdamW(self.trainer.model.parameters(), lr=self.lr, eps=self.eps)

Configure your FSDP strategy#

Because dolly-v2-7b is a relatively large model, it cannot fit on a single commodity GPU. In this example, we use the FSDP strategy to shard the model parameters across multiple workers, which avoids GPU out-of-memory errors and allows a larger global batch size.
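
As a rough back-of-the-envelope estimate (our own, not from the model card): dolly-v2-7b has about 6.9 billion parameters, so the weights alone occupy roughly 6.9B × 2 bytes ≈ 13.8 GB in 16-bit precision, and gradients plus AdamW optimizer states add several times that, far beyond the 16 GiB of a single T4. With FULL_SHARD across 16 workers, each GPU holds only about 1/16 of these tensors, plus activations and whichever layer is currently gathered.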

Image source: Fully Sharded Data Parallel: faster AI training with fewer GPUs

Note

FSDP is a type of data parallelism that shards model parameters, optimizer states, and gradients across DDP ranks. It was inspired by Xu et al. as well as ZeRO Stage 3 from DeepSpeed. You can refer to the PyTorch and FairScale FSDP blog posts for more information.

To start training with Lightning’s FSDPStrategy, you only need to create a RayFSDPStrategy with the same initialization arguments.

import functools
import lightning.pytorch as pl 

from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
from torch.distributed.fsdp import ShardingStrategy, BackwardPrefetch
from transformers.models.gpt_neox.modeling_gpt_neox import GPTNeoXLayer

from ray.train.lightning import RayFSDPStrategy


# Define the model sharding policy:
# Wrap every GPTNeoXLayer as its own FSDP instance
auto_wrap_policy = functools.partial(
    transformer_auto_wrap_policy,
    transformer_layer_cls = {GPTNeoXLayer}
)

fsdp_strategy = RayFSDPStrategy(
    sharding_strategy=ShardingStrategy.FULL_SHARD,
    backward_prefetch=BackwardPrefetch.BACKWARD_PRE,
    forward_prefetch=True,
    auto_wrap_policy=auto_wrap_policy,
    limit_all_gathers=True,
    activation_checkpointing=[GPTNeoXLayer],
)

Tip

Some tips for FSDP configuration:

  • sharding_strategy:

    • ShardingStrategy.NO_SHARD: Parameters, gradients, and optimizer states are not sharded. Similar to DDP.

    • ShardingStrategy.SHARD_GRAD_OP: Gradients and optimizer states are sharded during computation, and additionally, parameters are sharded outside computation. Similar to ZeRO stage-2.

    • ShardingStrategy.FULL_SHARD: Parameters, gradients, and optimizer states are sharded. It has the lowest GPU memory usage of the three options. Similar to ZeRO stage-3. (A sketch of an alternative sharding configuration follows these tips.)

  • auto_wrap_policy:

    • Model layers are usually wrapped with FSDP in a nested, layered fashion, so that during the forward or backward pass only the parameters belonging to a single FSDP instance need to be gathered onto a device at any one time.

    • Use transformer_auto_wrap_policy to automatically wrap each Transformer Block into a single FSDP instance.

  • backward_prefetch and forward_prefetch:

    • Overlap the upcoming all-gather with the current forward/backward computation. This can improve throughput but may slightly increase peak memory usage.
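
For smaller models that fit on a single GPU once gradients and optimizer states are sharded, a lighter configuration is possible. Here is a minimal sketch (the lighter_fsdp_strategy name and the 1M-parameter threshold are illustrative assumptions) that uses SHARD_GRAD_OP with a size-based wrap policy instead of wrapping by Transformer block:

import functools

from torch.distributed.fsdp import ShardingStrategy
from torch.distributed.fsdp.wrap import size_based_auto_wrap_policy

from ray.train.lightning import RayFSDPStrategy

# Wrap any submodule with more than ~1M parameters into its own FSDP instance.
size_based_policy = functools.partial(
    size_based_auto_wrap_policy, min_num_params=1_000_000
)

# SHARD_GRAD_OP keeps the full parameters on every GPU during computation and
# only shards gradients and optimizer states (similar to ZeRO stage-2).
lighter_fsdp_strategy = RayFSDPStrategy(
    sharding_strategy=ShardingStrategy.SHARD_GRAD_OP,
    auto_wrap_policy=size_based_policy,
    limit_all_gathers=True,
)

For a model as large as dolly-v2-7b on 16 GiB T4 GPUs, the FULL_SHARD configuration above remains the right choice; SHARD_GRAD_OP is only viable when the unsharded parameters fit on each GPU.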

Fine-tune with Ray TorchTrainer#

Ray TorchTrainer allows you to scale your PyTorch Lightning training workload over multiple nodes. See Configuring Scale and GPUs for more details.

num_workers = 16
batch_size_per_worker = 10

Additionally, remember to define a Lightning callback that saves and reports checkpoints. Ray Train offers a simple implementation, RayTrainReportCallback(), which persists your checkpoint and metrics in remote storage at the end of each training epoch.

Note that you can also implement your own report callback with custom logic, such as saving custom checkpoint files or reporting at a different frequency.
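
For instance, here is a minimal sketch of such a callback (the CustomReportCallback class and its report_interval argument are illustrative assumptions, not part of Ray Train). It reports metrics every few steps and attaches a full checkpoint once per epoch:

import os
import tempfile

import lightning.pytorch as pl

import ray.train
from ray.train import Checkpoint


class CustomReportCallback(pl.Callback):
    def __init__(self, report_interval: int = 50):
        self.report_interval = report_interval

    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
        # Report metrics only (no checkpoint) at a custom frequency.
        if (batch_idx + 1) % self.report_interval == 0:
            metrics = {k: v.item() for k, v in trainer.callback_metrics.items()}
            ray.train.report(metrics)

    def on_train_epoch_end(self, trainer, pl_module):
        metrics = {k: v.item() for k, v in trainer.callback_metrics.items()}
        with tempfile.TemporaryDirectory() as tmpdir:
            # `save_checkpoint` is a collective call: every rank participates,
            # but only rank 0 attaches the saved files to its report.
            trainer.save_checkpoint(os.path.join(tmpdir, "custom.ckpt"))
            checkpoint = (
                Checkpoint.from_directory(tmpdir) if trainer.global_rank == 0 else None
            )
            ray.train.report(metrics, checkpoint=checkpoint)

The rest of this example sticks with the built-in RayTrainReportCallback.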

from ray.train import Checkpoint
from ray.train.lightning import RayLightningEnvironment, RayTrainReportCallback, prepare_trainer

# Training function for each worker
def train_func(config):
    lr = config["lr"]
    eps = config["eps"]
    strategy = config["strategy"]
    batch_size_per_worker = config["batch_size_per_worker"]

    # Model
    model = DollyV2Model(lr=lr, eps=eps)

    # Ray Data Ingestion
    train_ds = ray.train.get_dataset_shard("train")
    train_dataloader = train_ds.iter_torch_batches(batch_size=batch_size_per_worker)

    # Lightning Trainer
    trainer = pl.Trainer(
        max_epochs=1, 
        devices="auto",
        accelerator="auto", 
        precision="16-mixed",
        strategy=strategy,
        plugins=[RayLightningEnvironment()],
        callbacks=[RayTrainReportCallback()],
        enable_checkpointing=False,
    )

    trainer = prepare_trainer(trainer)

    trainer.fit(model, train_dataloaders=train_dataloader)

Note

Since this example runs with multiple nodes, we need to persist checkpoints and other outputs to some external storage for access after training has completed. You should set up cloud storage or NFS, then replace storage_path with your own cloud bucket URI or NFS path.

See the storage guide for more details.

storage_path="s3://your-bucket-here"  # TODO: Set up cloud storage
# storage_path="/mnt/path/to/nfs"     # TODO: Alternatively, set up NFS
from ray.train.torch import TorchTrainer
from ray.train import RunConfig, ScalingConfig, CheckpointConfig

# Keep only the latest Ray Train checkpoint and persist it to the storage path
run_config = RunConfig(
    name="finetune_dolly-v2-7b",
    storage_path=storage_path,
    checkpoint_config=CheckpointConfig(num_to_keep=1),
)

# Scale the FSDP training workload across 16 GPUs
# You can change this config based on your compute resources.
scaling_config = ScalingConfig(
    num_workers=num_workers, use_gpu=True, trainer_resources={"memory": 100 * 1024 ** 3}
)

# Configuration to pass into train_func
train_config = {
    "lr": 2e-5,
    "eps": 1e-8,
    "strategy": fsdp_strategy,
    "batch_size_per_worker": 10
}

# Define a TorchTrainer and launch your training workload
ray_trainer = TorchTrainer(
    train_func,
    train_loop_config=train_config,
    run_config=run_config,
    scaling_config=scaling_config,
    datasets={"train": train_ds},
)
result = ray_trainer.fit()

result

Tune Status

Current time: 2023-08-30 11:51:22
Running for: 00:47:57.19
Memory: 39.3/124.3 GiB

System Info

Using FIFO scheduling algorithm.
Logical resource usage: 193.0/272 CPUs, 16.0/16 GPUs (0.0/16.0 accelerator_type:None)

Trial Status

Trial name                 status      loc                 iter   total time (s)   train_loss   epoch   step
TorchTrainer_839b5_00000   TERMINATED  10.0.23.226:66074      1          2868.15     0.176025       0    135
(TrainTrainable pid=66074) StorageContext on SESSION (rank=None):
(TrainTrainable pid=66074) StorageContext<
(TrainTrainable pid=66074)   storage_path=/mnt/cluster_storage
(TrainTrainable pid=66074)   storage_local_path=/home/ray/ray_results
(TrainTrainable pid=66074)   storage_filesystem=<pyarrow._fs.LocalFileSystem object at 0x7f8b0a2cd7f0>
(TrainTrainable pid=66074)   storage_fs_path=/mnt/cluster_storage
(TrainTrainable pid=66074)   experiment_dir_name=finetune_dolly-v2-7b
(TrainTrainable pid=66074)   trial_dir_name=TorchTrainer_839b5_00000_0_2023-08-30_11-03-25
(TrainTrainable pid=66074)   current_checkpoint_index=0
(TrainTrainable pid=66074) >
(TorchTrainer pid=66074) Starting distributed worker processes: ['66181 (10.0.23.226)', '14250 (10.0.40.16)', '13932 (10.0.2.17)', '13832 (10.0.41.56)', '14288 (10.0.53.250)', '13909 (10.0.41.152)', '13803 (10.0.14.94)', '47214 (10.0.44.99)', '13836 (10.0.58.27)', '13838 (10.0.58.206)', '13755 (10.0.62.244)', '13828 (10.0.9.99)', '13771 (10.0.43.35)', '13726 (10.0.59.245)', '13829 (10.0.58.178)', '13861 (10.0.46.116)']
(RayTrainWorker pid=66181) Setting up process group for: env:// [rank=0, world_size=16]
(RayTrainWorker pid=66181) StorageContext on SESSION (rank=0):
(RayTrainWorker pid=14250, ip=10.0.40.16) StorageContext< [repeated 2x across cluster] (Ray deduplicates logs by default. Set RAY_DEDUP_LOGS=0 to disable log deduplication, or see https://docs.ray.io/en/master/ray-observability/ray-logging.html#log-deduplication for more options.)
(RayTrainWorker pid=14250, ip=10.0.40.16)   storage_path=/mnt/cluster_storage [repeated 2x across cluster]
(RayTrainWorker pid=14250, ip=10.0.40.16)   storage_local_path=/home/ray/ray_results [repeated 2x across cluster]
(RayTrainWorker pid=14250, ip=10.0.40.16)   storage_filesystem=<pyarrow._fs.LocalFileSystem object at 0x7ff145c40b30> [repeated 2x across cluster]
(RayTrainWorker pid=14250, ip=10.0.40.16)   storage_fs_path=/mnt/cluster_storage [repeated 2x across cluster]
(RayTrainWorker pid=14250, ip=10.0.40.16)   current_checkpoint_index=0 [repeated 6x across cluster]
(RayTrainWorker pid=14250, ip=10.0.40.16) > [repeated 2x across cluster]
(SplitCoordinator pid=66292) Auto configuring locality_with_output=['5ed99d043a52f67deb150f34202c09b77bd37409502ebf6e581b0544', '8efe8198d7c04d45714ae757f298c316117405f3a8b25b87a71e0d9e', 'e3754d1e1017e68dd919b35d35ea62ed7b005ad96452f371721fc9fa', '8bd0f431ab3733c4b423c1d50db06460e3c210de47355b3b4d215c31', '73a8b9377fe9531a84eaa7b30c966fbb11bc36aff070d55c8f7acd1a', 'ef922c93f3b2fc93ebe5a521426d24fb8aae7e13c65f9fbd106aea2a', '5249cff3eab41121f840c17a79e6a3cd0af0f059def707a39e055fcf', '042b668e5553a589a4f6693c45deee0abe57a1d754812172af425acb', '9ed138bfe1f9c7dca484ee08d8311806389adb3af7a76566a6f4dfaa', '7e2fcb5dfe4ab1b572d87257f9e13bbc22b33ba968b1e67a79505589', '39b1ef4da8493a22e321a1ea9dd13387f50d9a6e2d2fbad58ad5fe9c', '9484193409a5346c0838a4a19a0a08eec122477682ea1cb0ad3e305a', '0158084645ec305bdd2ab11a6f35c44ad206405ca810e65f24b09398', 'fe5b11633900d1c437b2e3ee4ea44c18cf68f3dece546537d2090c63', '573645f42162f531a66d20776a95ba05102fae8e4b8090d48b94b233', '47e317ad5d0eb94cabb78871541160763283629d0d3f3b77b69521ae']
(RayTrainWorker pid=66181) Using 16bit Automatic Mixed Precision (AMP)
(RayTrainWorker pid=66181) GPU available: True (cuda), used: True
(RayTrainWorker pid=66181) TPU available: False, using: 0 TPU cores
(RayTrainWorker pid=66181) IPU available: False, using: 0 IPUs
(RayTrainWorker pid=66181) HPU available: False, using: 0 HPUs
(RayTrainWorker pid=13726, ip=10.0.59.245) StorageContext on SESSION (rank=13): [repeated 15x across cluster]
(RayTrainWorker pid=13726, ip=10.0.59.245) StorageContext< [repeated 14x across cluster]
(RayTrainWorker pid=13726, ip=10.0.59.245)   storage_path=/mnt/cluster_storage [repeated 14x across cluster]
(RayTrainWorker pid=13726, ip=10.0.59.245)   storage_local_path=/home/ray/ray_results [repeated 14x across cluster]
(RayTrainWorker pid=13726, ip=10.0.59.245)   storage_filesystem=<pyarrow._fs.LocalFileSystem object at 0x7fc9841d7a70> [repeated 14x across cluster]
(RayTrainWorker pid=13726, ip=10.0.59.245)   storage_fs_path=/mnt/cluster_storage [repeated 14x across cluster]
(RayTrainWorker pid=13726, ip=10.0.59.245)   current_checkpoint_index=0 [repeated 42x across cluster]
(RayTrainWorker pid=13726, ip=10.0.59.245) > [repeated 14x across cluster]
(RayTrainWorker pid=66181) Missing logger folder: /home/ray/ray_results/finetune_dolly-v2-7b/TorchTrainer_839b5_00000_0_2023-08-30_11-03-25/lightning_logs
(RayTrainWorker pid=13832, ip=10.0.41.56) Using 16bit Automatic Mixed Precision (AMP)
(RayTrainWorker pid=13836, ip=10.0.58.27) Using 16bit Automatic Mixed Precision (AMP)
(RayTrainWorker pid=13832, ip=10.0.41.56) Missing logger folder: /home/ray/ray_results/finetune_dolly-v2-7b/TorchTrainer_839b5_00000_0_2023-08-30_11-03-25/lightning_logs
(RayTrainWorker pid=13726, ip=10.0.59.245) Using 16bit Automatic Mixed Precision (AMP) [repeated 13x across cluster]
(RayTrainWorker pid=47214, ip=10.0.44.99) Missing logger folder: /home/ray/ray_results/finetune_dolly-v2-7b/TorchTrainer_839b5_00000_0_2023-08-30_11-03-25/lightning_logs [repeated 13x across cluster]
(RayTrainWorker pid=13909, ip=10.0.41.152) LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
(RayTrainWorker pid=66181) FullyShardedDataParallel(
(RayTrainWorker pid=66181)   (_fsdp_wrapped_module): _LightningModuleWrapperBase(
(RayTrainWorker pid=66181)     (_forward_module): DollyV2Model(
(RayTrainWorker pid=66181)       (model): GPTNeoXForCausalLM(
(RayTrainWorker pid=66181)         (gpt_neox): GPTNeoXModel(
(RayTrainWorker pid=66181)           (embed_in): Embedding(50280, 4096)
(RayTrainWorker pid=66181)           (layers): ModuleList(
(RayTrainWorker pid=66181)             (0-31): 32 x FullyShardedDataParallel(
(RayTrainWorker pid=66181)               (_fsdp_wrapped_module): CheckpointWrapper(
(RayTrainWorker pid=66181)                 (_checkpoint_wrapped_module): GPTNeoXLayer(
(RayTrainWorker pid=66181)                   (input_layernorm): LayerNorm((4096,), eps=1e-05, elementwise_affine=True)
(RayTrainWorker pid=66181)                   (post_attention_layernorm): LayerNorm((4096,), eps=1e-05, elementwise_affine=True)
(RayTrainWorker pid=66181)                   (attention): GPTNeoXAttention(
(RayTrainWorker pid=66181)                     (rotary_emb): RotaryEmbedding()
(RayTrainWorker pid=66181)                     (query_key_value): Linear(in_features=4096, out_features=12288, bias=True)
(RayTrainWorker pid=66181)                     (dense): Linear(in_features=4096, out_features=4096, bias=True)
(RayTrainWorker pid=66181)                   )
(RayTrainWorker pid=66181)                   (mlp): GPTNeoXMLP(
(RayTrainWorker pid=66181)                     (dense_h_to_4h): Linear(in_features=4096, out_features=16384, bias=True)
(RayTrainWorker pid=66181)                     (dense_4h_to_h): Linear(in_features=16384, out_features=4096, bias=True)
(RayTrainWorker pid=66181)                     (act): GELUActivation()
(RayTrainWorker pid=66181)                   )
(RayTrainWorker pid=66181)                 )
(RayTrainWorker pid=66181)               )
(RayTrainWorker pid=66181)             )
(RayTrainWorker pid=66181)           )
(RayTrainWorker pid=66181)           (final_layer_norm): LayerNorm((4096,), eps=1e-05, elementwise_affine=True)
(RayTrainWorker pid=66181)         )
(RayTrainWorker pid=66181)         (embed_out): Linear(in_features=4096, out_features=50280, bias=False)
(RayTrainWorker pid=66181)       )
(RayTrainWorker pid=66181)     )
(RayTrainWorker pid=66181)   )
(RayTrainWorker pid=66181) )
Epoch 0:   0%|          | 0/134 [00:00<?, ?it/s]
(RayTrainWorker pid=66181) 
(RayTrainWorker pid=66181)   | Name  | Type               | Params
(RayTrainWorker pid=66181) ---------------------------------------------
(RayTrainWorker pid=66181) 0 | model | GPTNeoXForCausalLM | 402 M 
(RayTrainWorker pid=66181) ---------------------------------------------
(RayTrainWorker pid=66181) 402 M     Trainable params
(RayTrainWorker pid=66181) 0         Non-trainable params
(RayTrainWorker pid=66181) 402 M     Total params
(RayTrainWorker pid=66181) 1,611.039 Total estimated model params size (MB)
(RayTrainWorker pid=13726, ip=10.0.59.245) Missing logger folder: /home/ray/ray_results/finetune_dolly-v2-7b/TorchTrainer_839b5_00000_0_2023-08-30_11-03-25/lightning_logs
(RayTrainWorker pid=13803, ip=10.0.14.94) LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0] [repeated 15x across cluster]
(SplitCoordinator pid=66292) Executing DAG InputDataBuffer[Input] -> TaskPoolMapOperator[MapBatches(split_text)->MapBatches(tokenize)] -> OutputSplitter[split(16, equal=True)]
(SplitCoordinator pid=66292) Execution config: ExecutionOptions(resource_limits=ExecutionResources(cpu=None, gpu=None, object_store_memory=2000000000.0), locality_with_output=['5ed99d043a52f67deb150f34202c09b77bd37409502ebf6e581b0544', '8efe8198d7c04d45714ae757f298c316117405f3a8b25b87a71e0d9e', 'e3754d1e1017e68dd919b35d35ea62ed7b005ad96452f371721fc9fa', '8bd0f431ab3733c4b423c1d50db06460e3c210de47355b3b4d215c31', '73a8b9377fe9531a84eaa7b30c966fbb11bc36aff070d55c8f7acd1a', 'ef922c93f3b2fc93ebe5a521426d24fb8aae7e13c65f9fbd106aea2a', '5249cff3eab41121f840c17a79e6a3cd0af0f059def707a39e055fcf', '042b668e5553a589a4f6693c45deee0abe57a1d754812172af425acb', '9ed138bfe1f9c7dca484ee08d8311806389adb3af7a76566a6f4dfaa', '7e2fcb5dfe4ab1b572d87257f9e13bbc22b33ba968b1e67a79505589', '39b1ef4da8493a22e321a1ea9dd13387f50d9a6e2d2fbad58ad5fe9c', '9484193409a5346c0838a4a19a0a08eec122477682ea1cb0ad3e305a', '0158084645ec305bdd2ab11a6f35c44ad206405ca810e65f24b09398', 'fe5b11633900d1c437b2e3ee4ea44c18cf68f3dece546537d2090c63', '573645f42162f531a66d20776a95ba05102fae8e4b8090d48b94b233', '47e317ad5d0eb94cabb78871541160763283629d0d3f3b77b69521ae'], preserve_order=False, actor_locality_enabled=True, verbose_progress=False)
(SplitCoordinator pid=66292) Tip: For detailed progress reporting, run `ray.data.DataContext.get_current().execution_options.verbose_progress = True`
Epoch 0:   1%|          | 1/134 [00:25<57:20, 25.87s/it, v_num=0, train_loss=12.90]
Epoch 0:   2%|▏         | 3/134 [01:00<43:55, 20.12s/it, v_num=0, train_loss=12.50]
Epoch 0:   4%|▍         | 6/134 [01:56<41:29, 19.45s/it, v_num=0, train_loss=12.50]
Epoch 0:   7%|▋         | 10/134 [03:06<38:35, 18.67s/it, v_num=0, train_loss=0.587]
Epoch 0:  15%|█▍        | 20/134 [06:03<34:31, 18.17s/it, v_num=0, train_loss=0.457]
Epoch 0:  30%|██▉       | 40/134 [12:03<28:19, 18.08s/it, v_num=0, train_loss=0.344]
Epoch 0:  45%|████▍     | 60/134 [17:57<22:08, 17.96s/it, v_num=0, train_loss=0.220]
Epoch 0:  60%|█████▉    | 80/134 [23:52<16:06, 17.90s/it, v_num=0, train_loss=0.267]
Epoch 0:  75%|███████▍  | 100/134 [29:46<10:07, 17.87s/it, v_num=0, train_loss=0.205]
Epoch 0:  90%|████████▉ | 120/134 [35:42<04:09, 17.86s/it, v_num=0, train_loss=0.251]
Epoch 0: 100%|██████████| 134/134 [39:52<00:00, 17.85s/it, v_num=0, train_loss=0.174]
Epoch 0: : 135it [40:10, 17.85s/it, v_num=0, train_loss=0.176]
(RayTrainWorker pid=13861, ip=10.0.46.116) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/mnt/cluster_storage/finetune_dolly-v2-7b/TorchTrainer_839b5_00000_0_2023-08-30_11-03-25/checkpoint_000000)
(autoscaler +44m9s) [workspace snapshot] New snapshot created successfully (size: 477.63 KB).
(RayTrainWorker pid=66181) Checkpoint successfully created at: Checkpoint(filesystem=local, path=/mnt/cluster_storage/finetune_dolly-v2-7b/TorchTrainer_839b5_00000_0_2023-08-30_11-03-25/checkpoint_000000) [repeated 15x across cluster]
Epoch 0: : 135it [46:03, 20.47s/it, v_num=0, train_loss=0.176]
(RayTrainWorker pid=66181) `Trainer.fit` stopped: `max_epochs=1` reached.
(RayTrainWorker pid=66181) RayFSDPStrategy: tearing down strategy...
2023-08-30 11:51:22,248	WARNING experiment_state.py:371 -- Experiment checkpoint syncing has been triggered multiple times in the last 30.0 seconds. A sync will be triggered whenever a trial has checkpointed more than `num_to_keep` times since last sync or if 300 seconds have passed since last sync. If you have set `num_to_keep` in your `CheckpointConfig`, consider increasing the checkpoint frequency or keeping more checkpoints. You can supress this warning by changing the `TUNE_WARN_EXCESSIVE_EXPERIMENT_CHECKPOINT_SYNC_THRESHOLD_S` environment variable.
2023-08-30 11:51:22,253	INFO tune.py:1142 -- Total run time: 2877.24 seconds (2877.14 seconds for the tuning loop).
Result(
  metrics={'train_loss': 0.176025390625, 'epoch': 0, 'step': 135},
  path='/mnt/cluster_storage/finetune_dolly-v2-7b/TorchTrainer_839b5_00000_0_2023-08-30_11-03-25',
  filesystem='local',
  checkpoint=Checkpoint(filesystem=local, path=/mnt/cluster_storage/finetune_dolly-v2-7b/TorchTrainer_839b5_00000_0_2023-08-30_11-03-25/checkpoint_000000)
)

We finished training in 2877s. The price for an on-demand g4dn.4xlarge instance is $1.204/hour, while a g4dn.8xlarge instance costs $2.176/hour. The total cost would be ($1.204 * 15 + $2.176) * 2877 / 3600 = $16.17.

Text generation with HuggingFace Pipeline#

We can use the HuggingFace Pipeline to generate predictions from our fine-tuned model. Let’s input some prompts and see if our tuned Dolly can speak like Shakespeare:

import os
from transformers import pipeline

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, padding_side="right")

ckpt_path = os.path.join(result.checkpoint.path, "checkpoint.ckpt")

dolly = DollyV2Model.load_from_checkpoint(ckpt_path, map_location=torch.device("cpu"))

nlp_pipeline = pipeline(
    task="text-generation", 
    model=dolly.model, 
    tokenizer=tokenizer, 
    device_map="auto"
)
for prompt in ["This is", "I am", "Once more"]:
    print(nlp_pipeline(prompt, max_new_tokens=20, do_sample=True, pad_token_id=tokenizer.eos_token_id))
[{'generated_text': "This is the day that our hearts live to love. Now come, go in; I'll sit here:"}]
[{'generated_text': 'I am very sorry, not a jot. What would you have? your pardon? my good lord?'}]
[{'generated_text': 'Once more, look up, look up, my sovereign; look up this night!'}]

References: