ray.data.read_delta#
- ray.data.read_delta(path: str | List[str], *, filesystem: pyarrow.fs.FileSystem | None = None, columns: List[str] | None = None, parallelism: int = -1, ray_remote_args: Dict[str, Any] | None = None, meta_provider: ParquetMetadataProvider | None = None, partition_filter: PathPartitionFilter | None = None, partitioning: Partitioning | None = Partitioning(style='hive', base_dir='', field_names=None, field_types={}, filesystem=None), shuffle: Literal['files'] | None = None, include_paths: bool = False, concurrency: int | None = None, override_num_blocks: int | None = None, **arrow_parquet_args)[source]#
Creates a Dataset from Delta Lake files.

Examples

>>> import ray
>>> ds = ray.data.read_delta("s3://bucket@path/to/delta-table/")
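
A second, minimal sketch of a local read with column pruning. The path and column names here are hypothetical, and the directory is assumed to already contain a Delta Lake table:

>>> ds = ray.data.read_delta(
...     "/tmp/delta-table/",
...     columns=["id", "value"],
... )
>>> ds.show(5)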
- Parameters:
path – A single file path for a Delta Lake table. Multiple tables are not yet supported.
filesystem – The PyArrow filesystem implementation to read from. These filesystems are specified in the pyarrow docs. Specify this parameter if you need to provide specific configurations to the filesystem. By default, the filesystem is automatically selected based on the scheme of the paths. For example, if the path begins with s3://, the S3FileSystem is used. If None, this function uses a system-chosen implementation. See the sketch after this parameter list for passing a configured filesystem.
columns – A list of column names to read. Only the specified columns are read during the file scan.
parallelism – This argument is deprecated. Use override_num_blocks instead.
ray_remote_args – kwargs passed to remote() in the read tasks.
meta_provider – A file metadata provider. Custom metadata providers may be able to resolve file metadata more quickly and/or accurately. In most cases, you do not need to set this parameter.
partition_filter – A PathPartitionFilter. Use with a custom callback to read only selected partitions of a dataset.
partitioning – A Partitioning object that describes how paths are organized. Defaults to HIVE partitioning.
shuffle – If set to "files", randomly shuffles the order of input files before the read. Defaults to None (no shuffling).
include_paths – If True, include the path to each file. File paths are stored in the 'path' column.
concurrency – The maximum number of Ray tasks to run concurrently. Set this to control the number of tasks that run concurrently. This doesn't change the total number of tasks run or the total number of output blocks. By default, concurrency is dynamically decided based on the available resources.
override_num_blocks – Override the number of output blocks from all read tasks. By default, the number of output blocks is dynamically decided based on input data size and available resources. You shouldn't manually set this value in most cases.
**arrow_parquet_args – Other Parquet read options to pass to PyArrow. For the full set of arguments, see the PyArrow API.
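
For reads that need explicit S3 configuration, a configured pyarrow.fs.S3FileSystem can be passed through the filesystem parameter. A minimal sketch, assuming a hypothetical bucket, region, and column name:

>>> import pyarrow.fs
>>> fs = pyarrow.fs.S3FileSystem(region="us-west-2")  # hypothetical region
>>> ds = ray.data.read_delta(
...     "s3://my-bucket/path/to/delta-table/",  # hypothetical bucket and prefix
...     filesystem=fs,
...     columns=["id"],
...     shuffle="files",
... )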
- Returns:
Dataset producing records read from the Parquet files of the specified Delta Lake table.
PublicAPI (alpha): This API is in alpha and may change before becoming stable.