ray.data.Dataset.write_csv
- Dataset.write_csv(path: str, *, filesystem: Optional[pyarrow.fs.FileSystem] = None, try_create_dir: bool = True, arrow_open_stream_args: Optional[Dict[str, Any]] = None, block_path_provider: ray.data.datasource.file_based_datasource.BlockWritePathProvider = <ray.data.datasource.file_based_datasource.DefaultBlockWritePathProvider object>, arrow_csv_args_fn: Callable[[], Dict[str, Any]] = <function Dataset.<lambda>>, ray_remote_args: Dict[str, Any] = None, **arrow_csv_args) -> None
Write the dataset to CSV files.
This is only supported for datasets convertible to Arrow records. To control the number of output files, use .repartition().
Unless a custom block path provider is given, the output files are named {uuid}_{block_idx}.csv, where uuid is a unique id for the dataset.
Note
This operation will trigger execution of the lazy transformations performed on this dataset.
Examples
>>> import ray
>>> ds = ray.data.range(100)
>>> ds.write_csv("s3://bucket/path")
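A minimal sketch of controlling the number of output files, assuming a writable local directory (the path /tmp/csv_out below is hypothetical). One CSV file is written per block, so repartitioning to 4 blocks produces 4 files:

>>> import ray
>>> ds = ray.data.range(100)
>>> ds = ds.repartition(4)  # 4 blocks -> 4 output files
>>> ds.write_csv("/tmp/csv_out")  # files are named {uuid}_{block_idx}.csv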
Time complexity: O(dataset size / parallelism)
- Parameters
path – The path to the destination root directory, where the CSV files are written.
filesystem – The filesystem implementation to write to.
try_create_dir – Try to create all directories in destination path if True. Does nothing if all directories already exist.
arrow_open_stream_args – kwargs passed to pyarrow.fs.FileSystem.open_output_stream.
block_path_provider – BlockWritePathProvider implementation to write each dataset block to a custom output path.
arrow_csv_args_fn – Callable that returns a dictionary of write arguments to use when writing each block to a file. Overrides any duplicate keys from arrow_csv_args. This should be used instead of arrow_csv_args if any of your write arguments cannot be pickled, or if you’d like to lazily resolve the write arguments for each dataset block (see the sketch after this parameter list).
ray_remote_args – Kwargs passed to ray.remote in the write tasks.
arrow_csv_args – Other CSV write options to pass to pyarrow.
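A minimal sketch of arrow_csv_args_fn, assuming the underlying pyarrow CSV writer accepts a write_options keyword taking a pyarrow.csv.WriteOptions object (check the pyarrow documentation for the options supported by your pyarrow version); the path /tmp/csv_no_header is hypothetical:

>>> import pyarrow.csv as pacsv
>>> import ray
>>> ds = ray.data.range(100)
>>> # The callable is resolved lazily for each block, so it can return values
>>> # that would be awkward or impossible to pickle with the write task.
>>> ds.write_csv(
...     "/tmp/csv_no_header",
...     arrow_csv_args_fn=lambda: {"write_options": pacsv.WriteOptions(include_header=False)},
... )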