ray.data.Dataset.write_parquet

Dataset.write_parquet(path: str, *, filesystem: pyarrow.fs.FileSystem | None = None, try_create_dir: bool = True, arrow_open_stream_args: ~typing.Dict[str, ~typing.Any] | None = None, filename_provider: ~ray.data.datasource.filename_provider.FilenameProvider | None = None, block_path_provider: ~ray.data.datasource.block_path_provider.BlockWritePathProvider | None = None, arrow_parquet_args_fn: ~typing.Callable[[], ~typing.Dict[str, ~typing.Any]] = <function Dataset.<lambda>>, num_rows_per_file: int | None = None, ray_remote_args: ~typing.Dict[str, ~typing.Any] = None, concurrency: int | None = None, **arrow_parquet_args) None

Writes the Dataset to parquet files under the provided path.

The number of files is determined by the number of blocks in the dataset. To control the number of blocks, call repartition(), as shown in the Examples below.

If pyarrow can’t represent your data, this method errors.

By default, the format of the output files is {uuid}_{block_idx}.parquet, where uuid is a unique id for the dataset. To modify this naming, implement a custom FilenameProvider and pass it in as the filename_provider argument.
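For instance, a minimal FilenameProvider sketch (this assumes the get_filename_for_block(block, task_index, block_index) hook from Ray 2.9-era releases; the hook signature may differ in other Ray versions, and the class name is illustrative):

>>> import ray
>>> from ray.data.datasource import FilenameProvider
>>> class PrefixedFilenames(FilenameProvider):
...     def get_filename_for_block(self, block, task_index, block_index):
...         # Name each file after the write task and block that produced it.
...         return f"data_{task_index}_{block_index}.parquet"
>>> ds = ray.data.range(100)
>>> ds.write_parquet("local:///tmp/data/", filename_provider=PrefixedFilenames())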

Note

This operation will trigger execution of the lazy transformations performed on this dataset.

Examples

>>> import ray
>>> ds = ray.data.range(100)
>>> ds.write_parquet("local:///tmp/data/")
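
To control how many files are written, repartition before writing; each block becomes one file:

>>> ds.repartition(2).write_parquet("local:///tmp/data/")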

Time complexity: O(dataset size / parallelism)

Parameters:
  • path – The path to the destination root directory, where parquet files are written to.

  • filesystem – The pyarrow filesystem implementation to write to. The available filesystems are described in the pyarrow docs. Specify this parameter if you need to provide specific configurations to the filesystem. By default, the filesystem is automatically selected based on the scheme of the path. For example, if the path begins with s3://, the S3FileSystem is used. See the filesystem sketch after this list.

  • try_create_dir – If True, attempts to create all directories in the destination path. Does nothing if all directories already exist. Defaults to True.

  • arrow_open_stream_args – kwargs passed to pyarrow.fs.FileSystem.open_output_stream, which is used when opening the file to write to.

  • filename_provider – A FilenameProvider implementation. Use this parameter to customize what your filenames look like.

  • arrow_parquet_args_fn – Callable that returns a dictionary of write arguments that are provided to pyarrow.parquet.write_table() when writing each block to a file. Overrides any duplicate keys from arrow_parquet_args. Use this argument instead of arrow_parquet_args if any of your write arguments can't be pickled, or if you want to lazily resolve the write arguments for each dataset block. See the compression sketch after this list.

  • num_rows_per_file – The target number of rows to write to each file. If None, Ray Data writes a system-chosen number of rows to each file.

  • ray_remote_args – Kwargs passed to remote() in the write tasks.

  • concurrency – The maximum number of Ray tasks to run concurrently. Use this to cap how many write tasks run at once; it doesn't change the total number of tasks that run. By default, concurrency is decided dynamically based on the available resources.

  • arrow_parquet_args – Options to pass to pyarrow.parquet.write_table(), which is used to write out each block to a file.
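
As referenced in the filesystem parameter above, a minimal sketch that passes an explicitly configured filesystem (the bucket name and region are placeholders):

>>> import pyarrow.fs
>>> import ray
>>> ds = ray.data.range(100)
>>> fs = pyarrow.fs.S3FileSystem(region="us-west-2")
>>> ds.write_parquet("s3://my-bucket/data/", filesystem=fs)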
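
As referenced in the arrow_parquet_args_fn parameter, a sketch that resolves the write arguments lazily per block; compression is a standard pyarrow.parquet.write_table() option, and passing compression="zstd" directly as a keyword argument (arrow_parquet_args) is equivalent when the value can be pickled:

>>> import ray
>>> ds = ray.data.range(100)
>>> ds.write_parquet(
...     "local:///tmp/data/",
...     arrow_parquet_args_fn=lambda: {"compression": "zstd"},
... )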