ray.data.Dataset
class ray.data.Dataset(plan: ray.data._internal.plan.ExecutionPlan, logical_plan: Optional[ray.data._internal.logical.interfaces.logical_plan.LogicalPlan] = None)
A Dataset is a distributed data collection for data loading and processing.

Datasets are distributed pipelines that produce ObjectRef[Block] outputs, where each block holds data in Arrow format and represents a shard of the overall data collection. The block also determines the unit of parallelism. For more details, see Ray Data Internals.

Datasets can be created in multiple ways: from synthetic data via the range_*() APIs, from in-memory data via the from_*() APIs (these create a subclass of Dataset called MaterializedDataset), or from external storage systems such as local disk, S3, or HDFS via the read_*() APIs. The (potentially processed) Dataset can be saved back to an external storage system via the write_*() APIs.

Examples
import ray

# Create dataset from synthetic data.
ds = ray.data.range(1000)
# Create dataset from in-memory data.
ds = ray.data.from_items(
    [{"col1": i, "col2": i * 2} for i in range(1000)]
)
# Create dataset from external storage system.
ds = ray.data.read_parquet("s3://bucket/path")
# Save dataset back to external storage system.
ds.write_csv("s3://bucket/output")
Dataset has two kinds of operations: transformations, which take in a Dataset and output a new Dataset (e.g. map_batches()), and consumption, which produces values (not a data stream) as output (e.g. iter_batches()).

Dataset transformations are lazy; their execution is triggered by downstream consumption.
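For example, here is a minimal sketch of this laziness (the doubling function is illustrative): the map_batches() call only records the transformation in the plan, and nothing runs until a consuming call such as take().

import ray

ds = ray.data.range(1000)

# Lazy: this only records the transformation; no data is processed yet.
doubled = ds.map_batches(lambda batch: {"id": batch["id"] * 2})

# Consumption triggers execution of the recorded transformations.
print(doubled.take(3))  # -> rows with ids 0, 2, 4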
Dataset supports parallel processing at scale: transformations such as map_batches(), aggregations such as min()/max()/mean(), grouping via groupby(), and shuffling operations such as sort(), random_shuffle(), and repartition().

Examples
>>> import ray
>>> ds = ray.data.range(1000)
>>> # Transform batches (Dict[str, np.ndarray]) with map_batches().
>>> ds.map_batches(lambda batch: {"id": batch["id"] * 2})
MapBatches(<lambda>)
+- Dataset(num_blocks=..., num_rows=1000, schema={id: int64})
>>> # Compute the maximum.
>>> ds.max("id")
999
>>> # Shuffle this dataset randomly.
>>> ds.random_shuffle()
RandomShuffle
+- Dataset(num_blocks=..., num_rows=1000, schema={id: int64})
>>> # Sort it back in order.
>>> ds.sort("id")
Sort
+- Dataset(num_blocks=..., num_rows=1000, schema={id: int64})
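A further sketch showing groupby() and repartition() from the list above; the dataset and the column names "group" and "value" are made up for this example.

import ray

ds = ray.data.from_items([{"group": i % 3, "value": i} for i in range(9)])

# Group rows by the "group" column and sum "value" within each group.
per_group = ds.groupby("group").sum("value")
print(per_group.take_all())

# Redistribute the rows into two blocks; materialize() forces execution.
print(ds.repartition(2).materialize().num_blocks())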
Both unexecuted and materialized Datasets can be passed between Ray tasks and actors without incurring a copy. Dataset supports conversion to/from several more feature-rich dataframe libraries (e.g., Spark, Dask, Modin, Mars) and is also compatible with distributed TensorFlow / PyTorch.
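As a hedged illustration of these interchange paths (assumes pandas and PyTorch are installed; the batch size is arbitrary):

import ray

ds = ray.data.range(100)

# Pull the full dataset into a single pandas DataFrame on the driver.
df = ds.to_pandas()
print(df.shape)  # (100, 1)

# Stream the same data as Torch tensor batches, e.g. for model training.
for batch in ds.iter_torch_batches(batch_size=32):
    print(type(batch["id"]), batch["id"].shape)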
Methods
Construct a Dataset (internal API).
Add the given column to the dataset.
Aggregate values using one or more functions.
Returns the columns of this Dataset.
Count the number of records in the dataset.
Deserialize the provided lineage-serialized Dataset.
Drop one or more columns from the dataset.
Filter out rows that don't satisfy the given predicate.
Apply the given function to each row and then flatten results.
Get a list of references to the underlying blocks of this dataset.
Group rows of a Dataset according to a column.
Whether this dataset's lineage is able to be serialized for storage and later deserialized, possibly on a different cluster.
Return the list of input files for the dataset.
Return an iterable over batches of data.
Return an iterable over the rows in this dataset.
Return an iterable over batches of data represented as TensorFlow tensors.
Return an iterable over batches of data represented as Torch tensors.
Return a DataIterator over this dataset.
Truncate the dataset to the first limit rows.
Apply the given function to each row of this dataset.
Apply the given function to batches of data.
Execute and materialize this dataset into object store memory.
Return the maximum of one or more columns.
Compute the mean of one or more columns.
Return the minimum of one or more columns.
Return the number of blocks of this dataset.
Returns a new Dataset containing a random fraction of the rows.
Randomly shuffle the rows of this Dataset.
Convert this into a DatasetPipeline by looping over this dataset.
Return the schema of the dataset.
Select one or more columns from the dataset.
Serialize this dataset's lineage, not the actual data or the existing data futures, to bytes that can be stored and later deserialized, possibly on a different cluster.
Print up to the given number of rows from the Dataset.
Return the in-memory size of the dataset.
Sort the dataset by the specified key column or key function.
Materialize and split the dataset into n disjoint pieces.
Materialize and split the dataset at the given indices (like np.split).
Materialize and split the dataset using proportions.
Returns a string containing execution timing information.
Compute the standard deviation of one or more columns.
Returns n DataIterators that can be used to read disjoint subsets of the dataset in parallel.
Compute the sum of one or more columns.
Return up to limit rows from the Dataset.
Return all of the rows in this Dataset.
Return up to batch_size rows from the Dataset in a batch.
Convert this Dataset into a distributed set of PyArrow tables.
Convert this Dataset into a Dask DataFrame.
Convert this Dataset into a Mars DataFrame.
Convert this Dataset into a Modin DataFrame.
Converts this Dataset into a distributed set of NumPy ndarrays or dictionary of NumPy ndarrays.
Convert this Dataset to a single pandas DataFrame.
Converts this Dataset into a distributed set of Pandas dataframes.
Convert this dataset into a distributed RandomAccessDataset (EXPERIMENTAL).
Convert this Dataset into a Spark DataFrame.
Return a TensorFlow Dataset over this Dataset.
Return a Torch IterableDataset over this Dataset.
Materialize and split the dataset into train and test subsets.
Concatenate Datasets across rows.
List the unique elements in a given column.
Convert this into a DatasetPipeline by windowing over data blocks.
Write the dataset to a BigQuery dataset table.
Writes the Dataset to CSV files.
Writes the dataset to a custom Datasource.
Writes the Dataset to images.
Writes the Dataset to JSON and JSONL files.
Writes the Dataset to a MongoDB database.
Writes a column of the Dataset to .npy files.
Writes the Dataset to parquet files under the provided path.
Write to a database that provides a Python DB API2-compliant connector.
Write the Dataset to TFRecord files.
Writes the dataset to WebDataset files.
Materialize and zip the columns of this dataset with the columns of another.
Attributes
Return the DataContext used to create this Dataset.