ray.data.from_spark(df: pyspark.sql.DataFrame, *, parallelism: int | None = None, override_num_blocks: int | None = None) → MaterializedDataset

Create a Dataset from a Spark DataFrame.

Parameters:

  • df – A Spark DataFrame, which must be created by RayDP (Spark-on-Ray).

  • parallelism – This argument is deprecated. Use override_num_blocks instead.

  • override_num_blocks – Override the number of output blocks from all read tasks. By default, the number of output blocks is dynamically decided based on input data size and available resources. You shouldn’t manually set this value in most cases.


Returns:

A MaterializedDataset holding rows read from the DataFrame.