ray.data.read_csv

ray.data.read_csv(paths: str | List[str], *, filesystem: pyarrow.fs.FileSystem | None = None, parallelism: int = -1, ray_remote_args: Dict[str, Any] = None, arrow_open_stream_args: Dict[str, Any] | None = None, meta_provider: BaseFileMetadataProvider | None = None, partition_filter: PathPartitionFilter | None = None, partitioning: Partitioning = Partitioning(style='hive', base_dir='', field_names=None, field_types={}, filesystem=None), include_paths: bool = False, ignore_missing_paths: bool = False, shuffle: Literal['files'] | FileShuffleConfig | None = None, file_extensions: List[str] | None = None, concurrency: int | None = None, override_num_blocks: int | None = None, **arrow_csv_args) → Dataset

Creates a Dataset from CSV files.

Examples

Read a file in remote storage.

>>> import ray
>>> ds = ray.data.read_csv("s3://anonymous@ray-example-data/iris.csv")
>>> ds.schema()
Column             Type
------             ----
sepal length (cm)  double
sepal width (cm)   double
petal length (cm)  double
petal width (cm)   double
target             int64

Read multiple local files.

>>> ray.data.read_csv( 
...    ["local:///path/to/file1", "local:///path/to/file2"])

Read a directory from remote storage.

>>> ds = ray.data.read_csv("s3://anonymous@ray-example-data/iris-csv/")

Read files that use a different delimiter. For more information about ParseOptions, see https://arrow.apache.org/docs/python/generated/pyarrow.csv.ParseOptions.html

>>> from pyarrow import csv
>>> parse_options = csv.ParseOptions(delimiter="\t")
>>> ds = ray.data.read_csv(
...     "s3://anonymous@ray-example-data/iris.tsv",
...     parse_options=parse_options)
>>> ds.schema()
Column        Type
------        ----
sepal.length  double
sepal.width   double
petal.length  double
petal.width   double
variety       string

Convert a date column with a custom format from a CSV file. For more information about ConvertOptions, see https://arrow.apache.org/docs/python/generated/pyarrow.csv.ConvertOptions.html

>>> from pyarrow import csv
>>> convert_options = csv.ConvertOptions(
...     timestamp_parsers=["%m/%d/%Y"])
>>> ds = ray.data.read_csv(
...     "s3://anonymous@ray-example-data/dow_jones.csv",
...     convert_options=convert_options)

By default, read_csv() parses Hive-style partitions from file paths. If your data adheres to a different partitioning scheme, set the partitioning parameter.

>>> ds = ray.data.read_csv("s3://anonymous@ray-example-data/year=2022/month=09/sales.csv")
>>> ds.take(1)
[{'order_number': 10107, 'quantity': 30, 'year': '2022', 'month': '09'}]
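
If your paths encode partition values by position rather than by key=value pairs (for example, a hypothetical layout like s3://bucket/2022/09/sales.csv), a minimal sketch using a "dir"-style Partitioning might look like this; the bucket path and field names are assumptions for illustration, not part of the example dataset:

>>> from ray.data.datasource.partitioning import Partitioning
>>> partitioning = Partitioning("dir", field_names=["year", "month"])
>>> ds = ray.data.read_csv(
...     "s3://bucket/2022/09/sales.csv",  # hypothetical path
...     partitioning=partitioning)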

By default, read_csv() reads all files from file paths. If you want to filter files by file extensions, set the file_extensions parameter.

Read only *.csv files from a directory.

>>> ray.data.read_csv("s3://anonymous@ray-example-data/different-extensions/",
...     file_extensions=["csv"])
Dataset(num_rows=?, schema={a: int64, b: int64})

Parameters:
  • paths – A single file or directory, or a list of file or directory paths. A list of paths can contain both files and directories.

  • filesystem – The PyArrow filesystem implementation to read from. These filesystems are specified in the pyarrow docs. Specify this parameter if you need to provide specific configurations to the filesystem. By default, the filesystem is automatically selected based on the scheme of the paths. For example, if the path begins with s3://, the S3FileSystem is used. (See the sketch after this parameter list.)

  • parallelism – This argument is deprecated. Use override_num_blocks instead.

  • ray_remote_args – kwargs passed to ray.remote() in the read tasks.

  • arrow_open_stream_args – kwargs passed to pyarrow.fs.FileSystem.open_input_file when opening input files to read.

  • meta_provider – [Deprecated] A file metadata provider. Custom metadata providers may be able to resolve file metadata more quickly and/or accurately. In most cases, you do not need to set this. If None, this function uses a system-chosen implementation.

  • partition_filter – A PathPartitionFilter. Use with a custom callback to read only selected partitions of a dataset. By default, no files are filtered.

  • partitioning – A Partitioning object that describes how paths are organized. By default, this function parses Hive-style partitions.

  • include_paths – If True, include the path to each file. File paths are stored in the 'path' column.

  • ignore_missing_paths – If True, ignores any file paths in paths that are not found. Defaults to False.

  • shuffle – If set to "files", randomly shuffles the order of the input files before reading. If set to a FileShuffleConfig, you can also provide a seed for reproducible shuffling (see the sketch after this parameter list). Defaults to None, which doesn't shuffle.

  • arrow_csv_args – CSV read options to pass to pyarrow.csv.open_csv when opening CSV files.

  • file_extensions – A list of file extensions to filter files by.

  • concurrency – The maximum number of Ray tasks to run concurrently. This doesn't change the total number of tasks run or the total number of output blocks. By default, concurrency is dynamically decided based on the available resources.

  • override_num_blocks – Override the number of output blocks from all read tasks. By default, the number of output blocks is dynamically decided based on input data size and available resources. You shouldn’t manually set this value in most cases.
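
The following sketches show the filesystem and shuffle parameters in use. The bucket names, region, and seed are hypothetical placeholders, not paths that exist in the example data.

Pass an explicitly configured PyArrow filesystem, for example to pin an S3 region:

>>> import pyarrow.fs
>>> fs = pyarrow.fs.S3FileSystem(region="us-west-2")  # hypothetical region
>>> ds = ray.data.read_csv(
...     "s3://my-bucket/data.csv",  # hypothetical path
...     filesystem=fs)

Shuffle the input file order reproducibly with a seeded FileShuffleConfig:

>>> from ray.data import FileShuffleConfig
>>> shuffle_config = FileShuffleConfig(seed=42)  # hypothetical seed
>>> ds = ray.data.read_csv(
...     "s3://my-bucket/csv-dir/",  # hypothetical path
...     shuffle=shuffle_config)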

Returns:

Dataset producing records read from the specified paths.