ray.data.read_hudi
- ray.data.read_hudi(table_uri: str, *, storage_options: Dict[str, str] | None = None, ray_remote_args: Dict[str, Any] | None = None, concurrency: int | None = None, override_num_blocks: int | None = None) → Dataset
Create a Dataset from an Apache Hudi table.

Examples

>>> import ray
>>> ds = ray.data.read_hudi(
...     table_uri="/hudi/trips",
... )
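For tables on remote object storage such as S3, connection details can be passed through storage_options. A minimal sketch, assuming object-store style option names such as aws_region and aws_access_key_id; the bucket path and values are placeholders, so confirm the exact keys your storage backend expects:

>>> ds = ray.data.read_hudi(
...     table_uri="s3://my-bucket/hudi/trips",   # hypothetical bucket path
...     storage_options={
...         "aws_region": "us-west-2",            # assumed option names; confirm
...         "aws_access_key_id": "YOUR_KEY_ID",   # them for your storage backend
...         "aws_secret_access_key": "YOUR_SECRET",
...     },
... )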
- Parameters:
table_uri – The URI of the Hudi table to read from. Local file paths, S3, and GCS are supported.
storage_options – Extra options that apply to a particular storage connection, used to pass connection parameters such as credentials or a custom endpoint.
ray_remote_args – kwargs passed to remote() in the read tasks.
concurrency – The maximum number of Ray tasks to run concurrently. This doesn’t change the total number of tasks run or the total number of output blocks. By default, concurrency is decided dynamically based on the available resources. A usage sketch follows this parameter list.
override_num_blocks – Override the number of output blocks from all read tasks. By default, the number of output blocks is dynamically decided based on input data size and available resources. You shouldn’t manually set this value in most cases.
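As referenced above, a rough sketch of the concurrency and override_num_blocks parameters, reusing the table path from the example; the specific values are arbitrary:

>>> import ray
>>> ds = ray.data.read_hudi(
...     table_uri="/hudi/trips",
...     concurrency=4,           # run at most 4 read tasks at a time
...     override_num_blocks=8,   # request 8 output blocks (rarely needed)
... )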
- Returns:
A Dataset producing records read from the Hudi table.
PublicAPI (alpha): This API is in alpha and may change before becoming stable.