ray.data.read_snowflake#

ray.data.read_snowflake(sql: str, connection_parameters: Dict[str, Any], *, shard_keys: list[str] | None = None, ray_remote_args: Dict[str, Any] = None, concurrency: int | None = None, override_num_blocks: int | None = None) -> Dataset[source]#

Read data from Snowflake by executing a SQL query.

Example

import ray

connection_parameters = dict(
    user=...,
    account="ABCDEFG-ABC12345",
    password=...,
    database="SNOWFLAKE_SAMPLE_DATA",
    schema="TPCDS_SF100TCL"
)
ds = ray.data.read_snowflake("SELECT * FROM CUSTOMERS", connection_parameters)

Parameters:
  • sql – The SQL query to execute.

  • connection_parameters – Keyword arguments to pass to snowflake.connector.connect. To view supported parameters, read https://docs.snowflake.com/developer-guide/python-connector/python-connector-api#functions.

  • shard_keys – The column names to shard the query by. When provided, the read is split into parallel tasks that each fetch a disjoint subset of rows.

  • ray_remote_args – Keyword arguments passed to ray.remote() in the read tasks.

  • concurrency – The maximum number of Ray tasks to run concurrently. Set this to control the number of tasks running at the same time. This doesn’t change the total number of tasks run or the total number of output blocks. By default, concurrency is decided dynamically based on the available resources.

  • override_num_blocks – Override the number of output blocks from all read tasks. This is used for sharding when shard_keys is provided. By default, the number of output blocks is dynamically decided based on input data size and available resources. You shouldn’t manually set this value in most cases.
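To build intuition for how shard_keys and override_num_blocks interact, the sketch below shows one common way a reader can split a single SQL query into disjoint per-shard queries by hashing the shard keys. This is a hypothetical illustration only; the helper shard_queries and the MOD/ABS/HASH predicate are assumptions for the sketch, not Ray's actual implementation.

```python
# Hypothetical sketch: split one SQL query into per-shard queries by
# bucketing rows on a hash of the shard keys. Each shard's query selects
# a disjoint hash bucket, so the shards together cover every row once.

def shard_queries(sql: str, shard_keys: list[str], num_shards: int) -> list[str]:
    """Wrap `sql` so each of `num_shards` queries reads one hash bucket."""
    key_expr = ", ".join(shard_keys)
    return [
        f"SELECT * FROM ({sql}) "
        f"WHERE MOD(ABS(HASH({key_expr})), {num_shards}) = {shard}"
        for shard in range(num_shards)
    ]

# Four disjoint queries over the same source query.
queries = shard_queries("SELECT * FROM CUSTOMER", ["C_CUSTOMER_SK"], 4)
for q in queries:
    print(q)
```

Under this scheme, override_num_blocks would determine num_shards, which is why the two parameters are related when shard_keys is provided.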

Returns:

A Dataset containing the rows returned by the query.

PublicAPI (alpha): This API is in alpha and may change before becoming stable.