ray.data.Dataset.write_csv#
- Dataset.write_csv(path: str, *, filesystem: pyarrow.fs.FileSystem | None = None, try_create_dir: bool = True, arrow_open_stream_args: Dict[str, Any] | None = None, filename_provider: FilenameProvider | None = None, arrow_csv_args_fn: Callable[[], Dict[str, Any]] | None = None, num_rows_per_file: int | None = None, ray_remote_args: Dict[str, Any] = None, concurrency: int | None = None, **arrow_csv_args) -> None [source]#
Writes the Dataset to CSV files.

The number of files is determined by the number of blocks in the dataset. To control the number of blocks, call repartition(). This method is only supported for datasets with records that are convertible to pyarrow tables.

By default, the format of the output files is {uuid}_{block_idx}.csv, where uuid is a unique id for the dataset. To modify this behavior, implement a custom FilenameProvider and pass it in as the filename_provider argument (see the custom-filename example below).

Note

This operation will trigger execution of the lazy transformations performed on this dataset.
Examples
Write the dataset as CSV files to a local directory.
>>> import ray
>>> ds = ray.data.range(100)
>>> ds.write_csv("local:///tmp/data")
Write the dataset as CSV files to S3.
>>> import ray
>>> ds = ray.data.range(100)
>>> ds.write_csv("s3://bucket/folder/")
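Write the dataset with custom filenames. This is a minimal sketch, not the exact Ray API: it assumes FilenameProvider is importable from ray.data.datasource and exposes a get_filename_for_block(block, task_index, block_index) hook, which may differ between Ray versions.

>>> import ray
>>> from ray.data.datasource import FilenameProvider
>>> class CsvFilenameProvider(FilenameProvider):
...     # Assumed hook name and signature; check the FilenameProvider API of your Ray version.
...     def get_filename_for_block(self, block, task_index, block_index):
...         return f"my_data_{task_index:06}_{block_index:06}.csv"
>>> ds = ray.data.range(100)
>>> ds.write_csv("local:///tmp/data", filename_provider=CsvFilenameProvider())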
Time complexity: O(dataset size / parallelism)
- Parameters:
  - path – The path to the destination root directory where the CSV files are written.
  - filesystem – The pyarrow filesystem implementation to write to. These filesystems are specified in the pyarrow docs. Specify this if you need to provide specific configurations to the filesystem. By default, the filesystem is automatically selected based on the scheme of the paths. For example, if the path begins with s3://, the S3FileSystem is used.
  - try_create_dir – If True, attempts to create all directories in the destination path. Does nothing if all directories already exist. Defaults to True.
  - arrow_open_stream_args – kwargs passed to pyarrow.fs.FileSystem.open_output_stream, which is used when opening the file to write to.
  - filename_provider – A FilenameProvider implementation. Use this parameter to customize what your filenames look like.
  - arrow_csv_args_fn – Callable that returns a dictionary of write arguments that are provided to pyarrow.csv.write_csv when writing each block to a file. Overrides any duplicate keys from arrow_csv_args. Use this argument instead of arrow_csv_args if any of your write arguments cannot be pickled, or if you’d like to lazily resolve the write arguments for each dataset block (see the sketch after this list).
  - num_rows_per_file – [Experimental] The target number of rows to write to each file. If None, Ray Data writes a system-chosen number of rows to each file. The specified value is a hint, not a strict limit; Ray Data might write more or fewer rows to each file. In particular, if the number of rows per block is larger than the specified value, Ray Data writes the number of rows per block to each file.
  - ray_remote_args – kwargs passed to remote() in the write tasks.
  - concurrency – The maximum number of Ray tasks to run concurrently. Set this to control the number of tasks that run concurrently. This doesn’t change the total number of tasks run. By default, concurrency is dynamically decided based on the available resources.
  - arrow_csv_args – Options to pass to pyarrow.csv.write_csv when writing each block to a file (see the sketch after this list).
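The sketch below shows both ways of passing write options to the underlying pyarrow.csv.write_csv call: eagerly through arrow_csv_args and lazily through arrow_csv_args_fn. It assumes a pyarrow version whose csv.WriteOptions accepts a delimiter option; adjust the options to your pyarrow release.

>>> import ray
>>> from pyarrow import csv
>>> ds = ray.data.range(100)
>>> # Passed through **arrow_csv_args and applied to every block.
>>> ds.write_csv("local:///tmp/data", write_options=csv.WriteOptions(delimiter="|"))
>>> # arrow_csv_args_fn resolves the write arguments lazily for each block,
>>> # which is useful when the options cannot be pickled.
>>> ds.write_csv(
...     "local:///tmp/data",
...     arrow_csv_args_fn=lambda: {"write_options": csv.WriteOptions(delimiter="|")},
... )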