ray.data.read_json#

ray.data.read_json(paths: str | List[str], *, filesystem: pyarrow.fs.FileSystem | None = None, parallelism: int = -1, ray_remote_args: Dict[str, Any] = None, arrow_open_stream_args: Dict[str, Any] | None = None, meta_provider: BaseFileMetadataProvider | None = None, partition_filter: PathPartitionFilter | None = None, partitioning: Partitioning = Partitioning(style='hive', base_dir='', field_names=None, field_types={}, filesystem=None), include_paths: bool = False, ignore_missing_paths: bool = False, shuffle: Literal['files'] | None = None, file_extensions: List[str] | None = ['json', 'jsonl'], concurrency: int | None = None, override_num_blocks: int | None = None, **arrow_json_args) → Dataset[source]#

Creates a Dataset from JSON and JSONL files.

For JSON files, the whole file is read as one row. For JSONL files, each line of the file is read as a separate row.
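For instance, a minimal sketch with two hypothetical local files, one JSON and one JSONL (paths and field values are illustrative):

>>> import json
>>> import ray
>>> with open("/tmp/example.json", "w") as f:
...     json.dump({"order_number": 10107, "quantity": 30}, f)
>>> with open("/tmp/example.jsonl", "w") as f:
...     _ = f.write('{"input": "a"}\n{"input": "b"}\n{"input": "c"}\n')
>>> ray.data.read_json("local:///tmp/example.json").count()  # whole file -> one row
1
>>> ray.data.read_json("local:///tmp/example.jsonl").count()  # one row per line
3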

Examples

Read a JSON file in remote storage.

>>> import ray
>>> ds = ray.data.read_json("s3://anonymous@ray-example-data/log.json")
>>> ds.schema()
Column     Type
------     ----
timestamp  timestamp[...]
size       int64

Read a JSONL file in remote storage.

>>> ds = ray.data.read_json("s3://anonymous@ray-example-data/train.jsonl")
>>> ds.schema()
Column  Type
------  ----
input   string

Read multiple local files.

>>> ray.data.read_json( 
...    ["local:///path/to/file1", "local:///path/to/file2"])

Read multiple directories.

>>> ray.data.read_json( 
...     ["s3://bucket/path1", "s3://bucket/path2"])

By default, read_json() parses Hive-style partitions from file paths. If your data adheres to a different partitioning scheme, set the partitioning parameter.

>>> ds = ray.data.read_json("s3://anonymous@ray-example-data/year=2022/month=09/sales.json")
>>> ds.take(1)
[{'order_number': 10107, 'quantity': 30, 'year': '2022', 'month': '09'}]
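If partition values appear only as plain directory names rather than key=value pairs, pass a dir-style Partitioning. A minimal sketch, assuming a hypothetical s3://bucket/sales/2022/09/ layout:

>>> from ray.data.datasource.partitioning import Partitioning
>>> ds = ray.data.read_json(
...     "s3://bucket/sales/2022/09/sales.json",
...     partitioning=Partitioning(
...         "dir",
...         field_names=["year", "month"],
...         base_dir="s3://bucket/sales",
...     ),
... )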
Parameters:
  • paths – A single file or directory, or a list of file or directory paths. A list of paths can contain both files and directories.

  • filesystem – The PyArrow filesystem implementation to read from. These filesystems are specified in the PyArrow docs. Specify this parameter if you need to provide specific configurations to the filesystem. By default, the filesystem is automatically selected based on the scheme of the paths. For example, if the path begins with s3://, the S3FileSystem is used.

  • parallelism – This argument is deprecated. Use the override_num_blocks argument instead.

  • ray_remote_args – kwargs passed to remote() in the read tasks.

  • arrow_open_stream_args – kwargs passed to pyarrow.fs.FileSystem.open_input_file when opening input files to read.

  • meta_provider – A file metadata provider. Custom metadata providers may be able to resolve file metadata more quickly and/or accurately. In most cases, you do not need to set this. If None, this function uses a system-chosen implementation.

  • partition_filter – A PathPartitionFilter. Use with a custom callback to read only selected partitions of a dataset. By default, this filters out any file paths whose file extension does not match “.json” or “.jsonl”.

  • partitioning – A Partitioning object that describes how paths are organized. By default, this function parses Hive-style partitions.

  • include_paths – If True, include the path to each file. File paths are stored in the 'path' column.

  • ignore_missing_paths – If True, ignores any file paths in paths that are not found. Defaults to False.

  • shuffle – If set to “files”, randomly shuffles the order of input files before reading. Defaults to None (no shuffle).

  • arrow_json_args – JSON read options to pass to pyarrow.json.read_json; see the sketch after this parameter list.

  • file_extensions – A list of file extensions to filter files by.

  • concurrency – The maximum number of Ray tasks to run concurrently. Set this to control the number of tasks that run concurrently. This doesn’t change the total number of tasks run or the total number of output blocks. By default, concurrency is dynamically decided based on the available resources.

  • override_num_blocks – Override the number of output blocks from all read tasks. By default, the number of output blocks is dynamically decided based on input data size and available resources. You shouldn’t manually set this value in most cases.
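As a sketch of the arrow_json_args pass-through described above (the bucket path is hypothetical), PyArrow’s JSON block size can be raised through read_options, which can help when a single JSON object straddles the default block boundary:

>>> import pyarrow.json as pajson
>>> ds = ray.data.read_json(
...     "s3://bucket/path/large.json",
...     read_options=pajson.ReadOptions(block_size=10 << 20),  # 10 MiB blocks
... )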

Returns:

Dataset producing records read from the specified paths.