ray.data.read_json

ray.data.read_json(paths: Union[str, List[str]], *, filesystem: Optional[pyarrow.fs.FileSystem] = None, parallelism: int = -1, ray_remote_args: Dict[str, Any] = None, arrow_open_stream_args: Optional[Dict[str, Any]] = None, meta_provider: Optional[ray.data.datasource.file_meta_provider.BaseFileMetadataProvider] = None, partition_filter: Optional[ray.data.datasource.partitioning.PathPartitionFilter] = FileExtensionFilter(extensions=['.json', '.jsonl'], allow_if_no_extensions=False), partitioning: ray.data.datasource.partitioning.Partitioning = Partitioning(style='hive', base_dir='', field_names=None, filesystem=None), ignore_missing_paths: bool = False, shuffle: Optional[Literal['files']] = None, **arrow_json_args) → ray.data.dataset.Dataset

Creates a Dataset from JSON and JSONL files.

For JSON files, the whole file is read as one row. For JSONL files, each line of the file is read as a separate row.

Examples

Read a JSON file in remote storage.

>>> import ray
>>> ds = ray.data.read_json("s3://anonymous@ray-example-data/log.json")
>>> ds.schema()
Column     Type
------     ----
timestamp  timestamp[s]
size       int64

Read a JSONL file in remote storage.

>>> ds = ray.data.read_json("s3://anonymous@ray-example-data/train.jsonl")
>>> ds.schema()
Column  Type
------  ----
input   string

Read multiple local files.

>>> ray.data.read_json( 
...     ["local:///path/to/file1", "local:///path/to/file2"])

Read multiple directories.

>>> ray.data.read_json( 
...     ["s3://bucket/path1", "s3://bucket/path2"])

By default, read_json() parses Hive-style partitions from file paths. If your data adheres to a different partitioning scheme, set the partitioning parameter.

>>> ds = ray.data.read_json("s3://anonymous@ray-example-data/year=2022/month=09/sales.json")
>>> ds.take(1)
[{'order_number': 10107, 'quantity': 30, 'year': '2022', 'month': '09'}]
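
If your paths use plain directory names instead of key=value pairs, describe them with a dir-style Partitioning instead. The snippet below is a minimal sketch, not a runnable example: the bucket layout s3://bucket/sales/2022/09/sales.json and the field names are hypothetical.

>>> # Hypothetical dir-style layout: s3://bucket/sales/<year>/<month>/sales.json
>>> from ray.data.datasource.partitioning import Partitioning
>>> ray.data.read_json(
...     "s3://bucket/sales/2022/09/sales.json",
...     partitioning=Partitioning(
...         "dir", field_names=["year", "month"], base_dir="s3://bucket/sales"))
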
Parameters
  • paths – A single file or directory, or a list of file or directory paths. A list of paths can contain both files and directories.

  • filesystem – The PyArrow filesystem implementation to read from. These filesystems are specified in the PyArrow docs. Specify this parameter if you need to provide specific configurations to the filesystem. By default, the filesystem is automatically selected based on the scheme of the paths. For example, if the path begins with s3://, the S3FileSystem is used.

  • parallelism – The amount of parallelism to use for the dataset. Defaults to -1, which automatically determines the optimal parallelism for your configuration. You should not need to manually set this value in most cases. For details on how the parallelism is automatically determined and guidance on how to tune it, see Tuning read parallelism. Parallelism is upper bounded by the total number of records in all the JSON files.

  • ray_remote_args – kwargs passed to remote() in the read tasks.

  • arrow_open_stream_args – kwargs passed to pyarrow.fs.FileSystem.open_input_file when opening input files to read.

  • meta_provider – A file metadata provider. Custom metadata providers may be able to resolve file metadata more quickly and/or accurately. In most cases, you do not need to set this. If None, this function uses a system-chosen implementation.

  • partition_filter – A PathPartitionFilter. Use with a custom callback to read only selected partitions of a dataset. By default, this filters out any file paths whose file extension does not match “.json” or “.jsonl”.

  • partitioning – A Partitioning object that describes how paths are organized. By default, this function parses Hive-style partitions.

  • ignore_missing_paths – If True, ignores any file paths in paths that are not found. Defaults to False.

  • shuffle – If set to “files”, randomly shuffles the input file order before the read. Defaults to None (no shuffling).

  • arrow_json_args – JSON read options to pass to pyarrow.json.read_json.
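
As a concrete illustration of the filesystem and arrow_json_args parameters above, the following sketch passes a preconfigured S3 filesystem and PyArrow JSON read options; the bucket, region, and block size are placeholder assumptions, not values from this reference.

>>> # Hypothetical bucket and region; block_size is an optional tuning knob.
>>> import pyarrow.fs
>>> from pyarrow import json
>>> ray.data.read_json(
...     "s3://bucket/path",
...     filesystem=pyarrow.fs.S3FileSystem(region="us-west-2"),
...     read_options=json.ReadOptions(block_size=8 << 20))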

Returns

Dataset producing records read from the specified paths.