Ray 2.3.0
  • Overview
  • ML Workloads with Ray
  • Getting Started Guide
  • Installation
  • Use Cases
  • Ecosystem
  • Ray Core
    • Key Concepts
    • User Guides
      • Tasks
        • Nested Remote Functions
        • Generators
      • Actors
        • Named Actors
        • Terminating Actors
        • AsyncIO / Concurrency for Actors
        • Limiting Concurrency Per-Method with Concurrency Groups
        • Utility Classes
        • Out-of-band Communication
        • Actor Task Execution Order
        • Actor Design Patterns
      • Objects
        • Serialization
        • Object Spilling
      • Environment Dependencies
      • Scheduling
        • Resources
        • GPU Support
        • Placement Groups
        • Memory Management
        • Out-Of-Memory Prevention
      • Fault Tolerance
        • Task Fault Tolerance
        • Actor Fault Tolerance
        • Object Fault Tolerance
      • Design Patterns & Anti-patterns
        • Pattern: Using nested tasks to achieve nested parallelism
        • Pattern: Using generators to reduce heap memory usage
        • Pattern: Using ray.wait to limit the number of pending tasks
        • Pattern: Using resources to limit the number of concurrently running tasks
        • Pattern: Using an actor to synchronize other tasks and actors
        • Pattern: Using a supervisor actor to manage a tree of actors
        • Pattern: Using pipelining to increase throughput
        • Anti-pattern: Returning ray.put() ObjectRefs from a task harms performance and fault tolerance
        • Anti-pattern: Calling ray.get in a loop harms parallelism
        • Anti-pattern: Calling ray.get unnecessarily harms performance
        • Anti-pattern: Processing results in submission order using ray.get increases runtime
        • Anti-pattern: Fetching too many objects at once with ray.get causes failure
        • Anti-pattern: Over-parallelizing with too fine-grained tasks harms speedup
        • Anti-pattern: Redefining the same remote function or class harms performance
        • Anti-pattern: Passing the same large argument by value repeatedly harms performance
        • Anti-pattern: Closure capturing large objects harms performance
        • Anti-pattern: Using global variables to share state between tasks and actors
      • Advanced Topics
        • Tips for first-time users
        • Starting Ray
        • Using Namespaces
        • Cross-Language Programming
        • Working with Jupyter Notebooks & JupyterLab
        • Lazy Computation Graphs with the Ray DAG API
        • Miscellaneous Topics
    • Examples
      • Monte Carlo Estimation of π
      • Asynchronous Advantage Actor Critic (A3C)
      • Fault-Tolerant Fairseq Training
      • Simple Parallel Model Selection
      • Parameter Server
      • Learning to Play Pong
      • Using Ray for Highly Parallelizable Tasks
      • Batch Prediction
      • Batch Training with Ray Core
      • Simple AutoML for time series with Ray Core
      • Speed up your web crawler by parallelizing it with Ray
    • Ray Core API
      • Core API
        • ray.init
        • ray.shutdown
        • ray.is_initialized
        • ray.remote
        • ray.remote_function.RemoteFunction.options
        • ray.cancel
        • ray.remote
        • ray.actor.ActorClass.options
        • ray.method
        • ray.get_actor
        • ray.kill
        • ray.get
        • ray.wait
        • ray.put
        • ray.runtime_context.get_runtime_context
        • ray.runtime_context.RuntimeContext
        • ray.get_gpu_ids
        • ray.cross_language.java_function
        • ray.cross_language.java_actor_class
      • Scheduling API
        • ray.util.scheduling_strategies.PlacementGroupSchedulingStrategy
        • ray.util.scheduling_strategies.NodeAffinitySchedulingStrategy
        • ray.util.placement_group.placement_group
        • ray.util.placement_group.PlacementGroup
        • ray.util.placement_group.placement_group_table
        • ray.util.placement_group.remove_placement_group
        • ray.util.placement_group.get_current_placement_group
      • Runtime Env API
        • ray.runtime_env.RuntimeEnvConfig
        • ray.runtime_env.RuntimeEnv
      • Utility
        • ray.util.ActorPool
        • ray.util.queue.Queue
        • ray.nodes
        • ray.cluster_resources
        • ray.available_resources
        • ray.util.metrics.Counter
        • ray.util.metrics.Gauge
        • ray.util.metrics.Histogram
        • ray.util.pdb.set_trace
        • ray.util.inspect_serializability
        • ray.timeline
      • Exceptions
        • ray.exceptions.RayError
        • ray.exceptions.RayTaskError
        • ray.exceptions.RayActorError
        • ray.exceptions.TaskCancelledError
        • ray.exceptions.TaskUnschedulableError
        • ray.exceptions.ActorUnschedulableError
        • ray.exceptions.AsyncioActorExit
        • ray.exceptions.LocalRayletDiedError
        • ray.exceptions.WorkerCrashedError
        • ray.exceptions.TaskPlacementGroupRemoved
        • ray.exceptions.ActorPlacementGroupRemoved
        • ray.exceptions.ObjectStoreFullError
        • ray.exceptions.OutOfDiskError
        • ray.exceptions.ObjectLostError
        • ray.exceptions.ObjectFetchTimedOutError
        • ray.exceptions.GetTimeoutError
        • ray.exceptions.OwnerDiedError
        • ray.exceptions.PlasmaObjectNotAvailable
        • ray.exceptions.ObjectReconstructionFailedError
        • ray.exceptions.ObjectReconstructionFailedMaxAttemptsExceededError
        • ray.exceptions.ObjectReconstructionFailedLineageEvictedError
        • ray.exceptions.RuntimeEnvSetupError
        • ray.exceptions.CrossLanguageError
        • ray.exceptions.RaySystemError
      • Ray Core CLI
      • Ray State CLI
      • State API
        • ray.experimental.state.api.summarize_actors
        • ray.experimental.state.api.summarize_objects
        • ray.experimental.state.api.summarize_tasks
        • ray.experimental.state.api.list_actors
        • ray.experimental.state.api.list_placement_groups
        • ray.experimental.state.api.list_nodes
        • ray.experimental.state.api.list_jobs
        • ray.experimental.state.api.list_workers
        • ray.experimental.state.api.list_tasks
        • ray.experimental.state.api.list_objects
        • ray.experimental.state.api.list_runtime_envs
        • ray.experimental.state.api.get_actor
        • ray.experimental.state.api.get_placement_group
        • ray.experimental.state.api.get_node
        • ray.experimental.state.api.get_worker
        • ray.experimental.state.api.get_task
        • ray.experimental.state.api.get_objects
        • ray.experimental.state.api.list_logs
        • ray.experimental.state.api.get_log
        • ray.experimental.state.common.ActorState
        • ray.experimental.state.common.TaskState
        • ray.experimental.state.common.NodeState
        • ray.experimental.state.common.PlacementGroupState
        • ray.experimental.state.common.WorkerState
        • ray.experimental.state.common.ObjectState
        • ray.experimental.state.common.RuntimeEnvState
        • ray.experimental.state.common.JobState
        • ray.experimental.state.common.StateSummary
        • ray.experimental.state.common.TaskSummaries
        • ray.experimental.state.common.TaskSummaryPerFuncOrClassName
        • ray.experimental.state.common.ActorSummaries
        • ray.experimental.state.common.ActorSummaryPerClass
        • ray.experimental.state.common.ObjectSummaries
        • ray.experimental.state.common.ObjectSummaryPerKey
        • ray.experimental.state.exception.RayStateApiException
  • Ray Clusters
    • Key Concepts
    • Deploying on Kubernetes
      • Getting Started
      • User Guides
        • Managed Kubernetes services
        • RayCluster Configuration
        • KubeRay Autoscaling
        • Logging
        • Using GPUs
        • Experimental Features
        • (Advanced) Deploying a static Ray cluster without KubeRay
      • Examples
        • Ray AIR XGBoostTrainer on Kubernetes
        • ML training with GPUs on Kubernetes
      • API Reference
    • Deploying on VMs
      • Getting Started
      • User Guides
        • Launching Ray Clusters on AWS, GCP, Azure, On-Prem
        • Best practices for deploying large clusters
        • Configuring Autoscaling
        • Community Supported Cluster Managers
      • Examples
        • Ray AIR XGBoostTrainer on VMs
      • API References
        • Cluster Launcher Commands
        • Cluster YAML Configuration Options
    • Applications Guide
      • Ray Jobs Overview
        • Quickstart Using the Ray Jobs CLI
        • Python SDK Overview
        • Python SDK API Reference
        • Ray Jobs CLI API Reference
        • Ray Jobs REST API
        • Ray Client: Interactive Development
      • Cluster Monitoring
      • Programmatic Cluster Scaling
    • FAQ
    • Ray Cluster Management API
      • Cluster Management CLI
      • Python SDK API Reference
        • ray.job_submission.JobSubmissionClient
        • ray.job_submission.JobSubmissionClient.submit_job
        • ray.job_submission.JobSubmissionClient.stop_job
        • ray.job_submission.JobSubmissionClient.get_job_status
        • ray.job_submission.JobSubmissionClient.get_job_info
        • ray.job_submission.JobSubmissionClient.list_jobs
        • ray.job_submission.JobSubmissionClient.get_job_logs
        • ray.job_submission.JobSubmissionClient.tail_job_logs
        • ray.job_submission.JobStatus
        • ray.job_submission.JobInfo
        • ray.job_submission.JobDetails
        • ray.job_submission.JobType
        • ray.job_submission.DriverInfo
      • Ray Jobs CLI API Reference
      • Programmatic Cluster Scaling
  • Ray AI Runtime (AIR)
    • Key Concepts
    • User Guides
      • Using Preprocessors
      • Using Trainers
      • Configuring Training Datasets
      • Configuring Hyperparameter Tuning
      • Using Predictors for Inference
      • Deploying Predictors with Serve
      • How to Deploy AIR
    • Examples
      • Training a Torch Image Classifier
      • Convert existing PyTorch code to Ray AIR
      • Convert existing Tensorflow/Keras code to Ray AIR
      • Tabular data training and serving with Keras and Ray AIR
      • Fine-tune a 🤗 Transformers model
      • Training a model with Sklearn
      • Training a model with distributed XGBoost
      • Hyperparameter tuning with XGBoostTrainer
      • Training a model with distributed LightGBM
      • Incremental Learning with Ray AIR
      • Serving reinforcement learning policy models
      • Online reinforcement learning with Ray AIR
      • Offline reinforcement learning with Ray AIR
      • Logging results and uploading models to Comet ML
      • Logging results and uploading models to Weights & Biases
      • Integrate Ray AIR with Feast feature store
      • Simple AutoML for time series with Ray AIR
      • Batch training & tuning on Ray Tune
      • Batch (parallel) Demand Forecasting using Prophet, ARIMA, and Ray Tune
    • Ray AIR API
      • Preprocessor (Ray Data + Ray Train)
        • ray.data.preprocessor.Preprocessor
        • ray.data.preprocessor.Preprocessor.fit
        • ray.data.preprocessor.Preprocessor.fit_transform
        • ray.data.preprocessor.Preprocessor.transform
        • ray.data.preprocessor.Preprocessor.transform_batch
        • ray.data.preprocessor.Preprocessor.transform_stats
        • ray.data.preprocessors.BatchMapper
        • ray.data.preprocessors.Chain
        • ray.data.preprocessors.Concatenator
        • ray.data.preprocessors.SimpleImputer
        • ray.data.preprocessors.Categorizer
        • ray.data.preprocessors.LabelEncoder
        • ray.data.preprocessors.MultiHotEncoder
        • ray.data.preprocessors.OneHotEncoder
        • ray.data.preprocessors.OrdinalEncoder
        • ray.data.preprocessors.MaxAbsScaler
        • ray.data.preprocessors.MinMaxScaler
        • ray.data.preprocessors.Normalizer
        • ray.data.preprocessors.PowerTransformer
        • ray.data.preprocessors.RobustScaler
        • ray.data.preprocessors.StandardScaler
        • ray.data.preprocessors.CustomKBinsDiscretizer
        • ray.data.preprocessors.UniformKBinsDiscretizer
        • ray.data.preprocessors.TorchVisionPreprocessor
        • ray.data.preprocessors.CountVectorizer
        • ray.data.preprocessors.FeatureHasher
        • ray.data.preprocessors.HashingVectorizer
        • ray.data.preprocessors.Tokenizer
      • Dataset Ingest (Ray Data + Ray Train)
        • ray.air.util.check_ingest.make_local_dataset_iterator
        • ray.air.util.check_ingest.DummyTrainer
      • Trainers (Ray Train)
        • ray.train.trainer.BaseTrainer
        • ray.train.data_parallel_trainer.DataParallelTrainer
        • ray.train.gbdt_trainer.GBDTTrainer
        • ray.train.trainer.BaseTrainer.fit
        • ray.train.trainer.BaseTrainer.setup
        • ray.train.trainer.BaseTrainer.preprocess_datasets
        • ray.train.trainer.BaseTrainer.training_loop
        • ray.train.trainer.BaseTrainer.as_trainable
        • ray.train.backend.Backend
        • ray.train.backend.BackendConfig
        • ray.train.torch.TorchTrainer
        • ray.train.torch.TorchConfig
        • ray.train.torch.TorchCheckpoint
        • ray.train.torch.prepare_model
        • ray.train.torch.prepare_optimizer
        • ray.train.torch.prepare_data_loader
        • ray.train.torch.get_device
        • ray.train.torch.accelerate
        • ray.train.torch.backward
        • ray.train.torch.enable_reproducibility
        • ray.train.tensorflow.TensorflowTrainer
        • ray.train.tensorflow.TensorflowConfig
        • ray.train.tensorflow.TensorflowCheckpoint
        • ray.train.tensorflow.prepare_dataset_shard
        • ray.train.horovod.HorovodTrainer
        • ray.train.horovod.HorovodConfig
        • ray.train.xgboost.XGBoostTrainer
        • ray.train.xgboost.XGBoostCheckpoint
        • ray.train.lightgbm.LightGBMTrainer
        • ray.train.lightgbm.LightGBMCheckpoint
        • ray.train.huggingface.HuggingFaceTrainer
        • ray.train.huggingface.HuggingFaceCheckpoint
        • ray.train.sklearn.SklearnTrainer
        • ray.train.sklearn.SklearnCheckpoint
        • ray.train.mosaic.MosaicTrainer
        • ray.train.rl.RLTrainer
        • ray.train.rl.RLCheckpoint
      • Tuner (Ray Tune)
        • ray.tune.Tuner
        • ray.tune.Tuner.fit
        • ray.tune.Tuner.get_results
        • ray.tune.TuneConfig
        • ray.tune.Tuner.restore
        • ray.tune.run_experiments
        • ray.tune.Experiment
      • Results (Ray Train + Ray Tune)
        • ray.tune.ResultGrid
        • ray.tune.ResultGrid.get_best_result
        • ray.tune.ResultGrid.get_dataframe
        • ray.air.Result
        • ray.tune.ExperimentAnalysis
      • AIR Session (Ray Train + Ray Tune)
        • ray.air.session.report
        • ray.air.session.get_checkpoint
        • ray.air.session.get_dataset_shard
        • ray.air.session.get_experiment_name
        • ray.air.session.get_trial_name
        • ray.air.session.get_trial_id
        • ray.air.session.get_trial_resources
        • ray.air.session.get_trial_dir
        • ray.air.session.get_world_size
        • ray.air.session.get_world_rank
        • ray.air.session.get_local_world_size
        • ray.air.session.get_local_rank
        • ray.air.session.get_node_rank
      • AIR Configurations (Ray Train + Ray Tune)
        • ray.air.RunConfig
        • ray.air.ScalingConfig
        • ray.air.DatasetConfig
        • ray.air.CheckpointConfig
        • ray.air.FailureConfig
      • AIR Checkpoint (All Libraries)
        • ray.air.checkpoint.Checkpoint
        • ray.air.checkpoint.Checkpoint.from_dict
        • ray.air.checkpoint.Checkpoint.from_bytes
        • ray.air.checkpoint.Checkpoint.from_directory
        • ray.air.checkpoint.Checkpoint.from_uri
        • ray.air.checkpoint.Checkpoint.from_checkpoint
        • ray.air.checkpoint.Checkpoint.uri
        • ray.air.checkpoint.Checkpoint.get_internal_representation
        • ray.air.checkpoint.Checkpoint.get_preprocessor
        • ray.air.checkpoint.Checkpoint.set_preprocessor
        • ray.air.checkpoint.Checkpoint.to_dict
        • ray.air.checkpoint.Checkpoint.to_bytes
        • ray.air.checkpoint.Checkpoint.to_directory
        • ray.air.checkpoint.Checkpoint.as_directory
        • ray.air.checkpoint.Checkpoint.to_uri
      • Predictors (Ray Data + Ray Train)
        • ray.train.predictor.Predictor
        • ray.train.predictor.Predictor.from_checkpoint
        • ray.train.predictor.Predictor.from_pandas_udf
        • ray.train.predictor.Predictor.get_preprocessor
        • ray.train.predictor.Predictor.set_preprocessor
        • ray.train.predictor.Predictor.predict
        • ray.train.predictor.Predictor.preferred_batch_format
        • ray.train.predictor.DataBatchType
        • ray.train.batch_predictor.BatchPredictor
        • ray.train.batch_predictor.BatchPredictor.predict
        • ray.train.batch_predictor.BatchPredictor.predict_pipelined
        • ray.train.xgboost.XGBoostPredictor
        • ray.train.lightgbm.LightGBMPredictor
        • ray.train.tensorflow.TensorflowPredictor
        • ray.train.torch.TorchPredictor
        • ray.train.huggingface.HuggingFacePredictor
        • ray.train.sklearn.SklearnPredictor
        • ray.train.rl.RLPredictor
      • Model Serving in AIR (Ray Serve)
        • ray.serve.air_integrations.PredictorWrapper
      • External Library Integrations
        • ray.air.integrations.comet.CometLoggerCallback
        • ray.air.integrations.mlflow.MLflowLoggerCallback
        • ray.air.integrations.mlflow.setup_mlflow
        • ray.air.integrations.wandb.WandbLoggerCallback
        • ray.air.integrations.wandb.setup_wandb
        • ray.air.integrations.keras.ReportCheckpointCallback
        • ray.tune.integration.mxnet.TuneReportCallback
        • ray.tune.integration.mxnet.TuneCheckpointCallback
        • ray.tune.integration.pytorch_lightning.TuneReportCallback
        • ray.tune.integration.pytorch_lightning.TuneReportCheckpointCallback
        • ray.tune.integration.xgboost.TuneReportCallback
        • ray.tune.integration.xgboost.TuneReportCheckpointCallback
        • ray.tune.integration.lightgbm.TuneReportCallback
        • ray.tune.integration.lightgbm.TuneReportCheckpointCallback
    • Benchmarks
  • Ray Data
    • Getting Started
    • Key Concepts
    • User Guides
      • Creating Datasets
      • Transforming Datasets
      • Consuming Datasets
      • ML Preprocessing
      • ML Tensor Support
      • Custom Datasources
      • Pipelining Compute
      • Scheduling, Execution, and Memory Management
      • Performance Tips and Tuning
    • Examples
      • Processing the NYC taxi dataset
      • Batch Training with Ray Datasets
      • Large-scale ML Ingest
      • Scaling OCR with Ray Datasets
      • Advanced Pipeline Examples
      • Random Data Access (Experimental)
    • FAQ
    • Ray Datasets API
      • Input/Output
        • ray.data.range
        • ray.data.range_table
        • ray.data.range_tensor
        • ray.data.from_items
        • ray.data.read_parquet
        • ray.data.read_parquet_bulk
        • ray.data.Dataset.write_parquet
        • ray.data.read_csv
        • ray.data.Dataset.write_csv
        • ray.data.read_json
        • ray.data.Dataset.write_json
        • ray.data.read_text
        • ray.data.read_images
        • ray.data.read_binary_files
        • ray.data.read_tfrecords
        • ray.data.Dataset.write_tfrecords
        • ray.data.from_pandas
        • ray.data.from_pandas_refs
        • ray.data.Dataset.to_pandas
        • ray.data.Dataset.to_pandas_refs
        • ray.data.read_numpy
        • ray.data.from_numpy
        • ray.data.from_numpy_refs
        • ray.data.Dataset.write_numpy
        • ray.data.Dataset.to_numpy_refs
        • ray.data.from_arrow
        • ray.data.from_arrow_refs
        • ray.data.Dataset.to_arrow_refs
        • ray.data.read_mongo
        • ray.data.Dataset.write_mongo
        • ray.data.from_dask
        • ray.data.Dataset.to_dask
        • ray.data.from_spark
        • ray.data.Dataset.to_spark
        • ray.data.from_modin
        • ray.data.Dataset.to_modin
        • ray.data.from_mars
        • ray.data.Dataset.to_mars
        • ray.data.from_torch
        • ray.data.from_huggingface
        • ray.data.from_tf
        • ray.data.read_datasource
        • ray.data.Dataset.write_datasource
        • ray.data.Datasource
        • ray.data.ReadTask
        • ray.data.datasource.Reader
        • ray.data.datasource.BinaryDatasource
        • ray.data.datasource.CSVDatasource
        • ray.data.datasource.FileBasedDatasource
        • ray.data.datasource.ImageDatasource
        • ray.data.datasource.JSONDatasource
        • ray.data.datasource.NumpyDatasource
        • ray.data.datasource.ParquetDatasource
        • ray.data.datasource.RangeDatasource
        • ray.data.datasource.TFRecordDatasource
        • ray.data.datasource.MongoDatasource
        • ray.data.datasource.Partitioning
        • ray.data.datasource.PartitionStyle
        • ray.data.datasource.PathPartitionEncoder
        • ray.data.datasource.PathPartitionParser
        • ray.data.datasource.PathPartitionFilter
        • ray.data.datasource.FileMetadataProvider
        • ray.data.datasource.BaseFileMetadataProvider
        • ray.data.datasource.ParquetMetadataProvider
        • ray.data.datasource.DefaultFileMetadataProvider
        • ray.data.datasource.DefaultParquetMetadataProvider
        • ray.data.datasource.FastFileMetadataProvider
      • Dataset API
        • ray.data.Dataset
        • ray.data.Dataset.map
        • ray.data.Dataset.map_batches
        • ray.data.Dataset.flat_map
        • ray.data.Dataset.filter
        • ray.data.Dataset.add_column
        • ray.data.Dataset.drop_columns
        • ray.data.Dataset.select_columns
        • ray.data.Dataset.random_sample
        • ray.data.Dataset.limit
        • ray.data.Dataset.sort
        • ray.data.Dataset.random_shuffle
        • ray.data.Dataset.randomize_block_order
        • ray.data.Dataset.repartition
        • ray.data.Dataset.split
        • ray.data.Dataset.split_at_indices
        • ray.data.Dataset.split_proportionately
        • ray.data.Dataset.train_test_split
        • ray.data.Dataset.union
        • ray.data.Dataset.zip
        • ray.data.Dataset.groupby
        • ray.data.Dataset.aggregate
        • ray.data.Dataset.sum
        • ray.data.Dataset.min
        • ray.data.Dataset.max
        • ray.data.Dataset.mean
        • ray.data.Dataset.std
        • ray.data.Dataset.repeat
        • ray.data.Dataset.window
        • ray.data.Dataset.show
        • ray.data.Dataset.take
        • ray.data.Dataset.take_all
        • ray.data.Dataset.iterator
        • ray.data.Dataset.iter_rows
        • ray.data.Dataset.iter_batches
        • ray.data.Dataset.iter_torch_batches
        • ray.data.Dataset.iter_tf_batches
        • ray.data.Dataset.write_parquet
        • ray.data.Dataset.write_json
        • ray.data.Dataset.write_csv
        • ray.data.Dataset.write_numpy
        • ray.data.Dataset.write_tfrecords
        • ray.data.Dataset.write_mongo
        • ray.data.Dataset.write_datasource
        • ray.data.Dataset.to_torch
        • ray.data.Dataset.to_tf
        • ray.data.Dataset.to_dask
        • ray.data.Dataset.to_mars
        • ray.data.Dataset.to_modin
        • ray.data.Dataset.to_spark
        • ray.data.Dataset.to_pandas
        • ray.data.Dataset.to_pandas_refs
        • ray.data.Dataset.to_numpy_refs
        • ray.data.Dataset.to_arrow_refs
        • ray.data.Dataset.to_random_access_dataset
        • ray.data.Dataset.count
        • ray.data.Dataset.schema
        • ray.data.Dataset.default_batch_format
        • ray.data.Dataset.num_blocks
        • ray.data.Dataset.size_bytes
        • ray.data.Dataset.input_files
        • ray.data.Dataset.stats
        • ray.data.Dataset.get_internal_block_refs
        • ray.data.Dataset.fully_executed
        • ray.data.Dataset.is_fully_executed
        • ray.data.Dataset.lazy
        • ray.data.Dataset.has_serializable_lineage
        • ray.data.Dataset.serialize_lineage
        • ray.data.Dataset.deserialize_lineage
      • DatasetIterator API
        • ray.data.DatasetIterator.iter_batches
        • ray.data.DatasetIterator.iter_torch_batches
        • ray.data.DatasetIterator.to_tf
        • ray.data.DatasetIterator.stats
      • DatasetPipeline API
        • ray.data.DatasetPipeline
        • ray.data.DatasetPipeline.map
        • ray.data.DatasetPipeline.map_batches
        • ray.data.DatasetPipeline.flat_map
        • ray.data.DatasetPipeline.foreach_window
        • ray.data.DatasetPipeline.filter
        • ray.data.DatasetPipeline.add_column
        • ray.data.DatasetPipeline.drop_columns
        • ray.data.DatasetPipeline.select_columns
        • ray.data.DatasetPipeline.sort_each_window
        • ray.data.DatasetPipeline.random_shuffle_each_window
        • ray.data.DatasetPipeline.randomize_block_order_each_window
        • ray.data.DatasetPipeline.repartition_each_window
        • ray.data.DatasetPipeline.split
        • ray.data.DatasetPipeline.split_at_indices
        • ray.data.DatasetPipeline.repeat
        • ray.data.DatasetPipeline.rewindow
        • ray.data.DatasetPipeline.from_iterable
        • ray.data.DatasetPipeline.show
        • ray.data.DatasetPipeline.show_windows
        • ray.data.DatasetPipeline.take
        • ray.data.DatasetPipeline.take_all
        • ray.data.DatasetPipeline.iterator
        • ray.data.DatasetPipeline.iter_rows
        • ray.data.DatasetPipeline.iter_batches
        • ray.data.DatasetPipeline.iter_torch_batches
        • ray.data.DatasetPipeline.iter_tf_batches
        • ray.data.DatasetPipeline.write_json
        • ray.data.DatasetPipeline.write_csv
        • ray.data.DatasetPipeline.write_parquet
        • ray.data.DatasetPipeline.write_datasource
        • ray.data.DatasetPipeline.to_tf
        • ray.data.DatasetPipeline.to_torch
        • ray.data.DatasetPipeline.schema
        • ray.data.DatasetPipeline.count
        • ray.data.DatasetPipeline.stats
        • ray.data.DatasetPipeline.sum
      • GroupedDataset API
        • ray.data.grouped_dataset.GroupedDataset
        • ray.data.grouped_dataset.GroupedDataset.count
        • ray.data.grouped_dataset.GroupedDataset.sum
        • ray.data.grouped_dataset.GroupedDataset.min
        • ray.data.grouped_dataset.GroupedDataset.max
        • ray.data.grouped_dataset.GroupedDataset.mean
        • ray.data.grouped_dataset.GroupedDataset.std
        • ray.data.grouped_dataset.GroupedDataset.aggregate
        • ray.data.grouped_dataset.GroupedDataset.map_groups
        • ray.data.aggregate.AggregateFn
        • ray.data.aggregate.Count
        • ray.data.aggregate.Sum
        • ray.data.aggregate.Max
        • ray.data.aggregate.Mean
        • ray.data.aggregate.Std
        • ray.data.aggregate.AbsMax
      • DatasetContext API
        • ray.data.context.DatasetContext
        • ray.data.context.DatasetContext.get_current
      • Data Representations
        • ray.data.block.Block
        • ray.data.block.BlockExecStats
        • ray.data.block.BlockMetadata
        • ray.data.block.BlockAccessor
        • ray.data.block.DataBatch
        • ray.data.row.TableRow
        • ray.data.extensions.tensor_extension.TensorDtype
        • ray.data.extensions.tensor_extension.TensorArray
        • ray.data.extensions.tensor_extension.ArrowTensorType
        • ray.data.extensions.tensor_extension.ArrowTensorArray
        • ray.data.extensions.tensor_extension.ArrowVariableShapedTensorType
        • ray.data.extensions.tensor_extension.ArrowVariableShapedTensorArray
      • (Experimental) RandomAccessDataset API
        • ray.data.random_access_dataset.RandomAccessDataset
        • ray.data.random_access_dataset.RandomAccessDataset.get_async
        • ray.data.random_access_dataset.RandomAccessDataset.multiget
        • ray.data.random_access_dataset.RandomAccessDataset.stats
      • Utility
        • ray.data.set_progress_bars
      • API Guide for Users from Other Data Libraries
    • Integrations
      • Using Dask on Ray
      • Using Spark on Ray (RayDP)
      • Using Mars on Ray
      • Using Pandas on Ray (Modin)
  • Ray Train
    • Getting Started
    • Key Concepts
    • User Guides
      • Configuring Ray Train
      • Deep Learning Guide
      • XGBoost/LightGBM Guide
      • Ray Train Architecture
    • Examples
      • PyTorch Fashion MNIST Example
      • HF Transformers Example
      • TensorFlow MNIST Example
      • Horovod Example
      • MLflow Callback Example
      • Tune & TensorFlow Example
      • Tune & PyTorch Example
      • Torch Data Prefetching Benchmark
    • Ray Train FAQ
    • Ray Train API
      • ray.train.trainer.BaseTrainer
        • ray.train.trainer.BaseTrainer.as_trainable
        • ray.train.trainer.BaseTrainer.fit
        • ray.train.trainer.BaseTrainer.preprocess_datasets
        • ray.train.trainer.BaseTrainer.setup
        • ray.train.trainer.BaseTrainer.training_loop
      • ray.train.data_parallel_trainer.DataParallelTrainer
        • ray.train.data_parallel_trainer.DataParallelTrainer.as_trainable
        • ray.train.data_parallel_trainer.DataParallelTrainer.fit
        • ray.train.data_parallel_trainer.DataParallelTrainer.get_dataset_config
        • ray.train.data_parallel_trainer.DataParallelTrainer.setup
      • ray.train.gbdt_trainer.GBDTTrainer
        • ray.train.gbdt_trainer.GBDTTrainer.as_trainable
        • ray.train.gbdt_trainer.GBDTTrainer.fit
        • ray.train.gbdt_trainer.GBDTTrainer.setup
      • ray.train.trainer.BaseTrainer.fit
      • ray.train.trainer.BaseTrainer.setup
      • ray.train.trainer.BaseTrainer.preprocess_datasets
      • ray.train.trainer.BaseTrainer.training_loop
      • ray.train.trainer.BaseTrainer.as_trainable
      • ray.train.backend.Backend
      • ray.train.backend.BackendConfig
      • ray.train.torch.TorchTrainer
      • ray.train.torch.TorchConfig
      • ray.train.torch.TorchCheckpoint
      • ray.train.torch.prepare_model
      • ray.train.torch.prepare_optimizer
      • ray.train.torch.prepare_data_loader
      • ray.train.torch.get_device
      • ray.train.torch.accelerate
      • ray.train.torch.backward
      • ray.train.torch.enable_reproducibility
      • ray.train.tensorflow.TensorflowTrainer
      • ray.train.tensorflow.TensorflowConfig
      • ray.train.tensorflow.TensorflowCheckpoint
      • ray.train.tensorflow.prepare_dataset_shard
      • ray.train.horovod.HorovodTrainer
      • ray.train.horovod.HorovodConfig
      • ray.train.xgboost.XGBoostTrainer
      • ray.train.xgboost.XGBoostCheckpoint
      • ray.train.lightgbm.LightGBMTrainer
      • ray.train.lightgbm.LightGBMCheckpoint
      • ray.train.huggingface.HuggingFaceTrainer
      • ray.train.huggingface.HuggingFaceCheckpoint
      • ray.train.sklearn.SklearnTrainer
      • ray.train.sklearn.SklearnCheckpoint
      • ray.train.mosaic.MosaicTrainer
      • ray.train.rl.RLTrainer
      • ray.train.rl.RLCheckpoint
  • Ray Tune
    • Getting Started
    • Key Concepts
    • User Guides
      • Running Basic Experiments
      • Logging Tune Runs
      • Setting Trial Resources
      • Using Search Spaces
      • How to Stop and Resume
      • How to Configure Storage Options for a Distributed Tune Experiment
      • Using Callbacks and Metrics
      • Getting Data in and out of Tune
      • Analyzing Tune Experiment Results
      • A Guide to Population Based Training with Tune
        • Visualizing and Understanding PBT
      • Deploying Tune in the Cloud
      • Tune Architecture
      • Scalability Benchmarks
    • Ray Tune Examples
      • Examples using Ray Tune with ML Frameworks
        • Scikit-Learn Example
        • Keras Example
        • PyTorch Example
        • PyTorch Lightning Example
        • MXNet Example
        • Ray Serve Example
        • Ray RLlib Example
        • XGBoost Example
        • LightGBM Example
        • Horovod Example
        • Huggingface Example
      • Tune Experiment Tracking Examples
        • Comet Example
        • Weights & Biases Example
        • MLflow Example
      • Tune Hyperparameter Optimization Framework Examples
        • Ax Example
        • Dragonfly Example
        • Skopt Example
        • HyperOpt Example
        • Bayesopt Example
        • FLAML Example
        • BOHB Example
        • Nevergrad Example
        • Optuna Example
        • ZOOpt Example
        • SigOpt Example
        • HEBO Example
      • Other Examples
      • Exercises
    • Ray Tune FAQ
    • Ray Tune API
      • Tune Execution (tune.Tuner)
        • ray.tune.Tuner
        • ray.tune.Tuner.fit
        • ray.tune.Tuner.get_results
        • ray.tune.TuneConfig
        • ray.tune.Tuner.restore
        • ray.tune.run_experiments
        • ray.tune.Experiment
      • Tune Experiment Results (tune.ResultGrid)
        • ray.tune.ResultGrid
        • ray.tune.ResultGrid.get_best_result
        • ray.tune.ResultGrid.get_dataframe
        • ray.air.Result
        • ray.tune.ExperimentAnalysis
      • Training in Tune (tune.Trainable, session.report)
        • ray.tune.Trainable
        • ray.tune.Trainable.setup
        • ray.tune.Trainable.save_checkpoint
        • ray.tune.Trainable.load_checkpoint
        • ray.tune.Trainable.step
        • ray.tune.Trainable.reset_config
        • ray.tune.Trainable.cleanup
        • ray.tune.Trainable.default_resource_request
        • ray.tune.with_parameters
        • ray.tune.with_resources
        • ray.tune.execution.placement_groups.PlacementGroupFactory
        • ray.tune.utils.wait_for_gpu
        • ray.tune.utils.diagnose_serialization
        • ray.tune.utils.validate_save_restore
      • Tune Search Space API
        • ray.tune.uniform
        • ray.tune.quniform
        • ray.tune.loguniform
        • ray.tune.qloguniform
        • ray.tune.randn
        • ray.tune.qrandn
        • ray.tune.randint
        • ray.tune.qrandint
        • ray.tune.lograndint
        • ray.tune.qlograndint
        • ray.tune.choice
        • ray.tune.grid_search
        • ray.tune.sample_from
      • Tune Search Algorithms (tune.search)
        • ray.tune.search.basic_variant.BasicVariantGenerator
        • ray.tune.search.ax.AxSearch
        • ray.tune.search.bayesopt.BayesOptSearch
        • ray.tune.search.bohb.TuneBOHB
        • ray.tune.search.flaml.BlendSearch
        • ray.tune.search.flaml.CFO
        • ray.tune.search.dragonfly.DragonflySearch
        • ray.tune.search.hebo.HEBOSearch
        • ray.tune.search.hyperopt.HyperOptSearch
        • ray.tune.search.nevergrad.NevergradSearch
        • ray.tune.search.optuna.OptunaSearch
        • ray.tune.search.sigopt.SigOptSearch
        • ray.tune.search.skopt.SkOptSearch
        • ray.tune.search.zoopt.ZOOptSearch
        • ray.tune.search.Repeater
        • ray.tune.search.ConcurrencyLimiter
        • ray.tune.search.Searcher
        • ray.tune.search.Searcher.suggest
        • ray.tune.search.Searcher.save
        • ray.tune.search.Searcher.restore
        • ray.tune.search.Searcher.on_trial_result
        • ray.tune.search.Searcher.on_trial_complete
        • ray.tune.search.create_searcher
      • Tune Trial Schedulers (tune.schedulers)
        • ray.tune.schedulers.AsyncHyperBandScheduler
        • ray.tune.schedulers.ASHAScheduler
        • ray.tune.schedulers.HyperBandScheduler
        • ray.tune.schedulers.MedianStoppingRule
        • ray.tune.schedulers.PopulationBasedTraining
        • ray.tune.schedulers.PopulationBasedTrainingReplay
        • ray.tune.schedulers.pb2.PB2
        • ray.tune.schedulers.HyperBandForBOHB
        • ray.tune.schedulers.ResourceChangingScheduler
        • ray.tune.schedulers.resource_changing_scheduler.DistributeResources
        • ray.tune.schedulers.resource_changing_scheduler.DistributeResourcesToTopJob
        • ray.tune.schedulers.FIFOScheduler
        • ray.tune.schedulers.TrialScheduler
        • ray.tune.schedulers.TrialScheduler.choose_trial_to_run
        • ray.tune.schedulers.TrialScheduler.on_trial_result
        • ray.tune.schedulers.TrialScheduler.on_trial_complete
        • ray.tune.schedulers.create_scheduler
      • Tune Stopping Mechanisms (tune.stopper)
        • ray.tune.stopper.Stopper
        • ray.tune.stopper.Stopper.__call__
        • ray.tune.stopper.Stopper.stop_all
        • ray.tune.stopper.MaximumIterationStopper
        • ray.tune.stopper.ExperimentPlateauStopper
        • ray.tune.stopper.TrialPlateauStopper
        • ray.tune.stopper.TimeoutStopper
        • ray.tune.stopper.CombinedStopper
      • Tune Console Output (Reporters)
        • ray.tune.ProgressReporter
        • ray.tune.ProgressReporter.report
        • ray.tune.ProgressReporter.should_report
        • ray.tune.CLIReporter
        • ray.tune.JupyterNotebookReporter
      • Syncing in Tune (tune.SyncConfig, tune.Syncer)
        • ray.tune.syncer.SyncConfig
        • ray.tune.syncer.Syncer
        • ray.tune.syncer.Syncer.sync_up
        • ray.tune.syncer.Syncer.sync_down
        • ray.tune.syncer.Syncer.delete
        • ray.tune.syncer.Syncer.wait
        • ray.tune.syncer.Syncer.wait_or_retry
        • ray.tune.syncer.SyncerCallback
        • ray.tune.syncer._DefaultSyncer
        • ray.tune.syncer._BackgroundSyncer
      • Tune Loggers (tune.logger)
        • ray.tune.logger.JsonLoggerCallback
        • ray.tune.logger.CSVLoggerCallback
        • ray.tune.logger.TBXLoggerCallback
        • ray.air.integrations.mlflow.MLflowLoggerCallback
        • ray.air.integrations.wandb.WandbLoggerCallback
        • ray.tune.logger.LoggerCallback
        • ray.tune.logger.LoggerCallback.log_trial_start
        • ray.tune.logger.LoggerCallback.log_trial_restore
        • ray.tune.logger.LoggerCallback.log_trial_save
        • ray.tune.logger.LoggerCallback.log_trial_result
        • ray.tune.logger.LoggerCallback.log_trial_end
      • Tune Callbacks (tune.Callback)
        • ray.tune.Callback
        • ray.tune.Callback.setup
        • ray.tune.Callback.on_checkpoint
        • ray.tune.Callback.on_experiment_end
        • ray.tune.Callback.on_step_begin
        • ray.tune.Callback.on_step_end
        • ray.tune.Callback.on_trial_complete
        • ray.tune.Callback.on_trial_error
        • ray.tune.Callback.on_trial_restore
        • ray.tune.Callback.on_trial_result
        • ray.tune.Callback.on_trial_save
        • ray.tune.Callback.on_trial_start
        • ray.tune.Callback.get_state
        • ray.tune.Callback.set_state
      • Environment variables used by Ray Tune
      • Tune Scikit-Learn API (tune.sklearn)
      • External library integrations for Ray Tune
        • ray.air.integrations.comet.CometLoggerCallback
        • ray.air.integrations.mlflow.MLflowLoggerCallback
        • ray.air.integrations.mlflow.setup_mlflow
        • ray.air.integrations.wandb.WandbLoggerCallback
        • ray.air.integrations.wandb.setup_wandb
        • ray.air.integrations.keras.ReportCheckpointCallback
        • ray.tune.integration.mxnet.TuneReportCallback
        • ray.tune.integration.mxnet.TuneCheckpointCallback
        • ray.tune.integration.pytorch_lightning.TuneReportCallback
        • ray.tune.integration.pytorch_lightning.TuneReportCheckpointCallback
        • ray.tune.integration.xgboost.TuneReportCallback
        • ray.tune.integration.xgboost.TuneReportCheckpointCallback
        • ray.tune.integration.lightgbm.TuneReportCallback
        • ray.tune.integration.lightgbm.TuneReportCheckpointCallback
      • Tune Internals
      • Tune Client API
      • Tune CLI (Experimental)
  • Ray Serve
    • Getting Started
    • Key Concepts
    • User Guides
      • HTTP Handling
      • Scaling and Resource Allocation
      • Model Composition
      • Development Workflow
      • Production Guide
        • Serve Config Files (serve build)
        • Deploying on VMs
        • Deploying on Kubernetes
        • Monitoring Ray Serve
        • Adding End-to-End Fault Tolerance
      • Performance Tuning
      • Handling Dependencies
      • Experimental Java API
      • 1.x to 2.x API Migration Guide
      • Experimental Direct Ingress
    • Architecture
    • Examples
      • Serving ML Models (Tensorflow, PyTorch, Scikit-Learn, others)
      • Batching Tutorial
      • Serving RLlib Models
      • Scaling your Gradio app with Ray Serve
      • Visualizing a Deployment Graph with Gradio
      • Java Tutorial
      • Deployment Graph Patterns
        • Pattern: Linear Pipeline
        • Pattern: Branching Input
        • Pattern: Conditional
    • Ray Serve API
      • Ray Serve Python API
        • ray.serve.run
        • ray.serve.start
        • ray.serve.shutdown
        • ray.serve.delete
        • ray.serve.handle.RayServeHandle
        • ray.serve.handle.RayServeHandle.remote
        • ray.serve.handle.RayServeHandle.options
        • ray.serve.batch
        • ray.serve.api.build
      • Serve REST API
      • Serve CLI
  • Ray RLlib
    • Getting Started with RLlib
    • Key Concepts
    • Environments
    • Algorithms
    • User Guides
      • Advanced Python APIs
      • Models, Preprocessors, and Action Distributions
      • Saving and Loading your RL Algorithms and Policies
      • How To Customize Policies
      • Sample Collections and Trajectory Views
      • Replay Buffers
      • Working With Offline Data
      • Connectors (Alpha)
      • Fault Tolerance And Elastic Training
      • How To Contribute to RLlib
      • Working with the RLlib CLI
    • Examples
    • Ray RLlib API
      • Algorithms
      • Environments
        • BaseEnv API
        • MultiAgentEnv API
        • VectorEnv API
        • ExternalEnv API
      • Policies
        • Base Policy class (ray.rllib.policy.policy.Policy)
        • TensorFlow-Specific Sub-Classes
        • Torch-Specific Policy: TorchPolicy
        • Building Custom Policy Classes
      • Model APIs
      • Evaluation and Environment Rollout
        • RolloutWorker
        • Sample Batches
        • WorkerSet
        • Environment Samplers
        • PolicyMap (ray.rllib.policy.policy_map.PolicyMap)
      • Offline RL
      • Parallel Requests Utilities
      • Training Operations Utilities
      • ReplayBuffer API
      • RLlib Utilities
        • Exploration API
        • Schedules API
        • RLlib Annotations/Decorators
        • Deep Learning Framework (tf vs torch) Utilities
        • TensorFlow Utility Functions
        • PyTorch Utility Functions
        • Numpy Utility Functions
        • Deprecation Tools/Utils
      • External Application API
  • More Libraries
    • Distributed Scikit-learn / Joblib
    • Distributed multiprocessing.Pool
    • Ray Collective Communication Lib
    • Using Ray with Pytorch Lightning
    • Ray Workflows (Alpha)
      • Key Concepts
      • Getting Started
      • Workflow Management
      • Workflow Metadata
      • Events
      • API Comparisons
      • Advanced Topics
      • Ray Workflows API
        • Workflow Execution API
        • Workflow Management API
  • Monitoring and Debugging
    • Overview
    • Ray Dashboard
    • Monitoring Ray States
    • Ray Debugger
    • Logging
    • Metrics
    • Profiling
    • Tracing
    • Troubleshooting Failures
    • Troubleshooting Hangs
    • Troubleshooting Performance
    • Ray Gotchas
    • Getting Help
    • Debugging (internal)
    • Profiling (internal)
  • References
    • Ray AIR API
      • Preprocessor (Ray Data + Ray Train)
        • ray.data.preprocessor.Preprocessor
        • ray.data.preprocessor.Preprocessor.fit
        • ray.data.preprocessor.Preprocessor.fit_transform
        • ray.data.preprocessor.Preprocessor.transform
        • ray.data.preprocessor.Preprocessor.transform_batch
        • ray.data.preprocessor.Preprocessor.transform_stats
        • ray.data.preprocessors.BatchMapper
        • ray.data.preprocessors.Chain
        • ray.data.preprocessors.Concatenator
        • ray.data.preprocessors.SimpleImputer
        • ray.data.preprocessors.Categorizer
        • ray.data.preprocessors.LabelEncoder
        • ray.data.preprocessors.MultiHotEncoder
        • ray.data.preprocessors.OneHotEncoder
        • ray.data.preprocessors.OrdinalEncoder
        • ray.data.preprocessors.MaxAbsScaler
        • ray.data.preprocessors.MinMaxScaler
        • ray.data.preprocessors.Normalizer
        • ray.data.preprocessors.PowerTransformer
        • ray.data.preprocessors.RobustScaler
        • ray.data.preprocessors.StandardScaler
        • ray.data.preprocessors.CustomKBinsDiscretizer
        • ray.data.preprocessors.UniformKBinsDiscretizer
        • ray.data.preprocessors.TorchVisionPreprocessor
        • ray.data.preprocessors.CountVectorizer
        • ray.data.preprocessors.FeatureHasher
        • ray.data.preprocessors.HashingVectorizer
        • ray.data.preprocessors.Tokenizer
      • Dataset Ingest (Ray Data + Ray Train)
        • ray.air.util.check_ingest.make_local_dataset_iterator
        • ray.air.util.check_ingest.DummyTrainer
      • Trainers (Ray Train)
        • ray.train.trainer.BaseTrainer
        • ray.train.data_parallel_trainer.DataParallelTrainer
        • ray.train.gbdt_trainer.GBDTTrainer
        • ray.train.trainer.BaseTrainer.fit
        • ray.train.trainer.BaseTrainer.setup
        • ray.train.trainer.BaseTrainer.preprocess_datasets
        • ray.train.trainer.BaseTrainer.training_loop
        • ray.train.trainer.BaseTrainer.as_trainable
        • ray.train.backend.Backend
        • ray.train.backend.BackendConfig
        • ray.train.torch.TorchTrainer
        • ray.train.torch.TorchConfig
        • ray.train.torch.TorchCheckpoint
        • ray.train.torch.prepare_model
        • ray.train.torch.prepare_optimizer
        • ray.train.torch.prepare_data_loader
        • ray.train.torch.get_device
        • ray.train.torch.accelerate
        • ray.train.torch.backward
        • ray.train.torch.enable_reproducibility
        • ray.train.tensorflow.TensorflowTrainer
        • ray.train.tensorflow.TensorflowConfig
        • ray.train.tensorflow.TensorflowCheckpoint
        • ray.train.tensorflow.prepare_dataset_shard
        • ray.train.horovod.HorovodTrainer
        • ray.train.horovod.HorovodConfig
        • ray.train.xgboost.XGBoostTrainer
        • ray.train.xgboost.XGBoostCheckpoint
        • ray.train.lightgbm.LightGBMTrainer
        • ray.train.lightgbm.LightGBMCheckpoint
        • ray.train.huggingface.HuggingFaceTrainer
        • ray.train.huggingface.HuggingFaceCheckpoint
        • ray.train.sklearn.SklearnTrainer
        • ray.train.sklearn.SklearnCheckpoint
        • ray.train.mosaic.MosaicTrainer
        • ray.train.rl.RLTrainer
        • ray.train.rl.RLCheckpoint
      • Tuner (Ray Tune)
        • ray.tune.Tuner
        • ray.tune.Tuner.fit
        • ray.tune.Tuner.get_results
        • ray.tune.TuneConfig
        • ray.tune.Tuner.restore
        • ray.tune.run_experiments
        • ray.tune.Experiment
      • Results (Ray Train + Ray Tune)
        • ray.tune.ResultGrid
        • ray.tune.ResultGrid.get_best_result
        • ray.tune.ResultGrid.get_dataframe
        • ray.air.Result
        • ray.tune.ExperimentAnalysis
      • AIR Session (Ray Train + Ray Tune)
        • ray.air.session.report
        • ray.air.session.get_checkpoint
        • ray.air.session.get_dataset_shard
        • ray.air.session.get_experiment_name
        • ray.air.session.get_trial_name
        • ray.air.session.get_trial_id
        • ray.air.session.get_trial_resources
        • ray.air.session.get_trial_dir
        • ray.air.session.get_world_size
        • ray.air.session.get_world_rank
        • ray.air.session.get_local_world_size
        • ray.air.session.get_local_rank
        • ray.air.session.get_node_rank
      • AIR Configurations (Ray Train + Ray Tune)
        • ray.air.RunConfig
        • ray.air.ScalingConfig
        • ray.air.DatasetConfig
        • ray.air.CheckpointConfig
        • ray.air.FailureConfig
      • AIR Checkpoint (All Libraries)
        • ray.air.checkpoint.Checkpoint
        • ray.air.checkpoint.Checkpoint.from_dict
        • ray.air.checkpoint.Checkpoint.from_bytes
        • ray.air.checkpoint.Checkpoint.from_directory
        • ray.air.checkpoint.Checkpoint.from_uri
        • ray.air.checkpoint.Checkpoint.from_checkpoint
        • ray.air.checkpoint.Checkpoint.uri
        • ray.air.checkpoint.Checkpoint.get_internal_representation
        • ray.air.checkpoint.Checkpoint.get_preprocessor
        • ray.air.checkpoint.Checkpoint.set_preprocessor
        • ray.air.checkpoint.Checkpoint.to_dict
        • ray.air.checkpoint.Checkpoint.to_bytes
        • ray.air.checkpoint.Checkpoint.to_directory
        • ray.air.checkpoint.Checkpoint.as_directory
        • ray.air.checkpoint.Checkpoint.to_uri
      • Predictors (Ray Data + Ray Train)
        • ray.train.predictor.Predictor
        • ray.train.predictor.Predictor.from_checkpoint
        • ray.train.predictor.Predictor.from_pandas_udf
        • ray.train.predictor.Predictor.get_preprocessor
        • ray.train.predictor.Predictor.set_preprocessor
        • ray.train.predictor.Predictor.predict
        • ray.train.predictor.Predictor.preferred_batch_format
        • ray.train.predictor.DataBatchType
        • ray.train.batch_predictor.BatchPredictor
        • ray.train.batch_predictor.BatchPredictor.predict
        • ray.train.batch_predictor.BatchPredictor.predict_pipelined
        • ray.train.xgboost.XGBoostPredictor
        • ray.train.lightgbm.LightGBMPredictor
        • ray.train.tensorflow.TensorflowPredictor
        • ray.train.torch.TorchPredictor
        • ray.train.huggingface.HuggingFacePredictor
        • ray.train.sklearn.SklearnPredictor
        • ray.train.rl.RLPredictor
      • Model Serving in AIR (Ray Serve)
        • ray.serve.air_integrations.PredictorWrapper
      • External Library Integrations
        • ray.air.integrations.comet.CometLoggerCallback
        • ray.air.integrations.mlflow.MLflowLoggerCallback
        • ray.air.integrations.mlflow.setup_mlflow
        • ray.air.integrations.wandb.WandbLoggerCallback
        • ray.air.integrations.wandb.setup_wandb
        • ray.air.integrations.keras.ReportCheckpointCallback
        • ray.tune.integration.mxnet.TuneReportCallback
        • ray.tune.integration.mxnet.TuneCheckpointCallback
        • ray.tune.integration.pytorch_lightning.TuneReportCallback
        • ray.tune.integration.pytorch_lightning.TuneReportCheckpointCallback
        • ray.tune.integration.xgboost.TuneReportCallback
        • ray.tune.integration.xgboost.TuneReportCheckpointCallback
        • ray.tune.integration.lightgbm.TuneReportCallback
        • ray.tune.integration.lightgbm.TuneReportCheckpointCallback
    • Ray Datasets API
      • Input/Output
        • ray.data.range
        • ray.data.range_table
        • ray.data.range_tensor
        • ray.data.from_items
        • ray.data.read_parquet
        • ray.data.read_parquet_bulk
        • ray.data.Dataset.write_parquet
        • ray.data.read_csv
        • ray.data.Dataset.write_csv
        • ray.data.read_json
        • ray.data.Dataset.write_json
        • ray.data.read_text
        • ray.data.read_images
        • ray.data.read_binary_files
        • ray.data.read_tfrecords
        • ray.data.Dataset.write_tfrecords
        • ray.data.from_pandas
        • ray.data.from_pandas_refs
        • ray.data.Dataset.to_pandas
        • ray.data.Dataset.to_pandas_refs
        • ray.data.read_numpy
        • ray.data.from_numpy
        • ray.data.from_numpy_refs
        • ray.data.Dataset.write_numpy
        • ray.data.Dataset.to_numpy_refs
        • ray.data.from_arrow
        • ray.data.from_arrow_refs
        • ray.data.Dataset.to_arrow_refs
        • ray.data.read_mongo
        • ray.data.Dataset.write_mongo
        • ray.data.from_dask
        • ray.data.Dataset.to_dask
        • ray.data.from_spark
        • ray.data.Dataset.to_spark
        • ray.data.from_modin
        • ray.data.Dataset.to_modin
        • ray.data.from_mars
        • ray.data.Dataset.to_mars
        • ray.data.from_torch
        • ray.data.from_huggingface
        • ray.data.from_tf
        • ray.data.read_datasource
        • ray.data.Dataset.write_datasource
        • ray.data.Datasource
        • ray.data.ReadTask
        • ray.data.datasource.Reader
        • ray.data.datasource.BinaryDatasource
        • ray.data.datasource.CSVDatasource
        • ray.data.datasource.FileBasedDatasource
        • ray.data.datasource.ImageDatasource
        • ray.data.datasource.JSONDatasource
        • ray.data.datasource.NumpyDatasource
        • ray.data.datasource.ParquetDatasource
        • ray.data.datasource.RangeDatasource
        • ray.data.datasource.TFRecordDatasource
        • ray.data.datasource.MongoDatasource
        • ray.data.datasource.Partitioning
        • ray.data.datasource.PartitionStyle
        • ray.data.datasource.PathPartitionEncoder
        • ray.data.datasource.PathPartitionParser
        • ray.data.datasource.PathPartitionFilter
        • ray.data.datasource.FileMetadataProvider
        • ray.data.datasource.BaseFileMetadataProvider
        • ray.data.datasource.ParquetMetadataProvider
        • ray.data.datasource.DefaultFileMetadataProvider
        • ray.data.datasource.DefaultParquetMetadataProvider
        • ray.data.datasource.FastFileMetadataProvider
      • Dataset API
        • ray.data.Dataset
        • ray.data.Dataset.map
        • ray.data.Dataset.map_batches
        • ray.data.Dataset.flat_map
        • ray.data.Dataset.filter
        • ray.data.Dataset.add_column
        • ray.data.Dataset.drop_columns
        • ray.data.Dataset.select_columns
        • ray.data.Dataset.random_sample
        • ray.data.Dataset.limit
        • ray.data.Dataset.sort
        • ray.data.Dataset.random_shuffle
        • ray.data.Dataset.randomize_block_order
        • ray.data.Dataset.repartition
        • ray.data.Dataset.split
        • ray.data.Dataset.split_at_indices
        • ray.data.Dataset.split_proportionately
        • ray.data.Dataset.train_test_split
        • ray.data.Dataset.union
        • ray.data.Dataset.zip
        • ray.data.Dataset.groupby
        • ray.data.Dataset.aggregate
        • ray.data.Dataset.sum
        • ray.data.Dataset.min
        • ray.data.Dataset.max
        • ray.data.Dataset.mean
        • ray.data.Dataset.std
        • ray.data.Dataset.repeat
        • ray.data.Dataset.window
        • ray.data.Dataset.show
        • ray.data.Dataset.take
        • ray.data.Dataset.take_all
        • ray.data.Dataset.iterator
        • ray.data.Dataset.iter_rows
        • ray.data.Dataset.iter_batches
        • ray.data.Dataset.iter_torch_batches
        • ray.data.Dataset.iter_tf_batches
        • ray.data.Dataset.write_parquet
        • ray.data.Dataset.write_json
        • ray.data.Dataset.write_csv
        • ray.data.Dataset.write_numpy
        • ray.data.Dataset.write_tfrecords
        • ray.data.Dataset.write_mongo
        • ray.data.Dataset.write_datasource
        • ray.data.Dataset.to_torch
        • ray.data.Dataset.to_tf
        • ray.data.Dataset.to_dask
        • ray.data.Dataset.to_mars
        • ray.data.Dataset.to_modin
        • ray.data.Dataset.to_spark
        • ray.data.Dataset.to_pandas
        • ray.data.Dataset.to_pandas_refs
        • ray.data.Dataset.to_numpy_refs
        • ray.data.Dataset.to_arrow_refs
        • ray.data.Dataset.to_random_access_dataset
        • ray.data.Dataset.count
        • ray.data.Dataset.schema
        • ray.data.Dataset.default_batch_format
        • ray.data.Dataset.num_blocks
        • ray.data.Dataset.size_bytes
        • ray.data.Dataset.input_files
        • ray.data.Dataset.stats
        • ray.data.Dataset.get_internal_block_refs
        • ray.data.Dataset.fully_executed
        • ray.data.Dataset.is_fully_executed
        • ray.data.Dataset.lazy
        • ray.data.Dataset.has_serializable_lineage
        • ray.data.Dataset.serialize_lineage
        • ray.data.Dataset.deserialize_lineage
      • DatasetIterator API
        • ray.data.DatasetIterator.iter_batches
        • ray.data.DatasetIterator.iter_torch_batches
        • ray.data.DatasetIterator.to_tf
        • ray.data.DatasetIterator.stats
      • DatasetPipeline API
        • ray.data.DatasetPipeline
        • ray.data.DatasetPipeline.map
        • ray.data.DatasetPipeline.map_batches
        • ray.data.DatasetPipeline.flat_map
        • ray.data.DatasetPipeline.foreach_window
        • ray.data.DatasetPipeline.filter
        • ray.data.DatasetPipeline.add_column
        • ray.data.DatasetPipeline.drop_columns
        • ray.data.DatasetPipeline.select_columns
        • ray.data.DatasetPipeline.sort_each_window
        • ray.data.DatasetPipeline.random_shuffle_each_window
        • ray.data.DatasetPipeline.randomize_block_order_each_window
        • ray.data.DatasetPipeline.repartition_each_window
        • ray.data.DatasetPipeline.split
        • ray.data.DatasetPipeline.split_at_indices
        • ray.data.DatasetPipeline.repeat
        • ray.data.DatasetPipeline.rewindow
        • ray.data.DatasetPipeline.from_iterable
        • ray.data.DatasetPipeline.show
        • ray.data.DatasetPipeline.show_windows
        • ray.data.DatasetPipeline.take
        • ray.data.DatasetPipeline.take_all
        • ray.data.DatasetPipeline.iterator
        • ray.data.DatasetPipeline.iter_rows
        • ray.data.DatasetPipeline.iter_batches
        • ray.data.DatasetPipeline.iter_torch_batches
        • ray.data.DatasetPipeline.iter_tf_batches
        • ray.data.DatasetPipeline.write_json
        • ray.data.DatasetPipeline.write_csv
        • ray.data.DatasetPipeline.write_parquet
        • ray.data.DatasetPipeline.write_datasource
        • ray.data.DatasetPipeline.to_tf
        • ray.data.DatasetPipeline.to_torch
        • ray.data.DatasetPipeline.schema
        • ray.data.DatasetPipeline.count
        • ray.data.DatasetPipeline.stats
        • ray.data.DatasetPipeline.sum
      • GroupedDataset API
        • ray.data.grouped_dataset.GroupedDataset
        • ray.data.grouped_dataset.GroupedDataset.count
        • ray.data.grouped_dataset.GroupedDataset.sum
        • ray.data.grouped_dataset.GroupedDataset.min
        • ray.data.grouped_dataset.GroupedDataset.max
        • ray.data.grouped_dataset.GroupedDataset.mean
        • ray.data.grouped_dataset.GroupedDataset.std
        • ray.data.grouped_dataset.GroupedDataset.aggregate
        • ray.data.grouped_dataset.GroupedDataset.map_groups
        • ray.data.aggregate.AggregateFn
        • ray.data.aggregate.Count
        • ray.data.aggregate.Sum
        • ray.data.aggregate.Max
        • ray.data.aggregate.Mean
        • ray.data.aggregate.Std
        • ray.data.aggregate.AbsMax
      • DatasetContext API
        • ray.data.context.DatasetContext
        • ray.data.context.DatasetContext.get_current
      • Data Representations
        • ray.data.block.Block
        • ray.data.block.BlockExecStats
        • ray.data.block.BlockMetadata
        • ray.data.block.BlockAccessor
        • ray.data.block.DataBatch
        • ray.data.row.TableRow
        • ray.data.extensions.tensor_extension.TensorDtype
        • ray.data.extensions.tensor_extension.TensorArray
        • ray.data.extensions.tensor_extension.ArrowTensorType
        • ray.data.extensions.tensor_extension.ArrowTensorArray
        • ray.data.extensions.tensor_extension.ArrowVariableShapedTensorType
        • ray.data.extensions.tensor_extension.ArrowVariableShapedTensorArray
      • (Experimental) RandomAccessDataset API
        • ray.data.random_access_dataset.RandomAccessDataset
        • ray.data.random_access_dataset.RandomAccessDataset.get_async
        • ray.data.random_access_dataset.RandomAccessDataset.multiget
        • ray.data.random_access_dataset.RandomAccessDataset.stats
      • Utility
        • ray.data.set_progress_bars
      • API Guide for Users from Other Data Libraries
    • Ray Train API
      • ray.train.trainer.BaseTrainer
        • ray.train.trainer.BaseTrainer.as_trainable
        • ray.train.trainer.BaseTrainer.fit
        • ray.train.trainer.BaseTrainer.preprocess_datasets
        • ray.train.trainer.BaseTrainer.setup
        • ray.train.trainer.BaseTrainer.training_loop
      • ray.train.data_parallel_trainer.DataParallelTrainer
        • ray.train.data_parallel_trainer.DataParallelTrainer.as_trainable
        • ray.train.data_parallel_trainer.DataParallelTrainer.fit
        • ray.train.data_parallel_trainer.DataParallelTrainer.get_dataset_config
        • ray.train.data_parallel_trainer.DataParallelTrainer.setup
      • ray.train.gbdt_trainer.GBDTTrainer
        • ray.train.gbdt_trainer.GBDTTrainer.as_trainable
        • ray.train.gbdt_trainer.GBDTTrainer.fit
        • ray.train.gbdt_trainer.GBDTTrainer.setup
      • ray.train.backend.Backend
      • ray.train.backend.BackendConfig
      • ray.train.torch.TorchTrainer
      • ray.train.torch.TorchConfig
      • ray.train.torch.TorchCheckpoint
      • ray.train.torch.prepare_model
      • ray.train.torch.prepare_optimizer
      • ray.train.torch.prepare_data_loader
      • ray.train.torch.get_device
      • ray.train.torch.accelerate
      • ray.train.torch.backward
      • ray.train.torch.enable_reproducibility
      • ray.train.tensorflow.TensorflowTrainer
      • ray.train.tensorflow.TensorflowConfig
      • ray.train.tensorflow.TensorflowCheckpoint
      • ray.train.tensorflow.prepare_dataset_shard
      • ray.train.horovod.HorovodTrainer
      • ray.train.horovod.HorovodConfig
      • ray.train.xgboost.XGBoostTrainer
      • ray.train.xgboost.XGBoostCheckpoint
      • ray.train.lightgbm.LightGBMTrainer
Contents
  • Saving and Restoring Tune Search Algorithms
  • Random search and grid search (tune.search.basic_variant.BasicVariantGenerator)
  • Ax (tune.search.ax.AxSearch)
  • Bayesian Optimization (tune.search.bayesopt.BayesOptSearch)
  • BOHB (tune.search.bohb.TuneBOHB)
  • BlendSearch (tune.search.flaml.BlendSearch)
  • CFO (tune.search.flaml.CFO)
  • Dragonfly (tune.search.dragonfly.DragonflySearch)
  • HEBO (tune.search.hebo.HEBOSearch)
  • HyperOpt (tune.search.hyperopt.HyperOptSearch)
  • Nevergrad (tune.search.nevergrad.NevergradSearch)
  • Optuna (tune.search.optuna.OptunaSearch)
  • SigOpt (tune.search.sigopt.SigOptSearch)
  • Scikit-Optimize (tune.search.skopt.SkOptSearch)
  • ZOOpt (tune.search.zoopt.ZOOptSearch)
  • Repeated Evaluations (tune.search.Repeater)
  • ConcurrencyLimiter (tune.search.ConcurrencyLimiter)
  • Custom Search Algorithms (tune.search.Searcher)
  • Shim Instantiation (tune.create_searcher)

Tune Search Algorithms (tune.search)#

Tune’s Search Algorithms are wrappers around open-source optimization libraries for efficient hyperparameter selection. Each library has its own way of defining the search space; refer to that library's documentation for details. In most cases, Tune automatically converts search spaces passed to Tuner into the library's format.

You can utilize these search algorithms as follows:

from ray import tune
from ray.air import session
from ray.tune.search.optuna import OptunaSearch

def train_fn(config):
    # This objective function is just for demonstration purposes
    session.report({"loss": config["param"]})

tuner = tune.Tuner(
    train_fn,
    tune_config=tune.TuneConfig(
        search_alg=OptunaSearch(),
        num_samples=100,
        metric="loss",
        mode="min",
    ),
    param_space={"param": tune.uniform(0, 1)},
)
results = tuner.fit()

Saving and Restoring Tune Search Algorithms#

Certain search algorithms have save/restore implemented, allowing reuse of searchers that are fitted on the results of multiple tuning runs.

from ray.tune.search.hyperopt import HyperOptSearch

search_alg = HyperOptSearch()

tuner_1 = tune.Tuner(
    train_fn,
    tune_config=tune.TuneConfig(search_alg=search_alg)
)
results_1 = tuner_1.fit()

search_alg.save("./my-checkpoint.pkl")

# Restore the saved state onto another search algorithm,
# in a new tuning script

search_alg2 = HyperOptSearch()
search_alg2.restore("./my-checkpoint.pkl")

tuner_2 = tune.Tuner(
    train_fn,
    tune_config=tune.TuneConfig(search_alg=search_alg2)
)
results_2 = tuner_2.fit()

Tune automatically saves searcher state inside the current experiment folder during tuning. See Result logdir: ... in the output logs for this location.

Note that if two Tune runs share the same experiment folder, the previous searcher state checkpoint will be overwritten. You can avoid this by setting air.RunConfig(name=...) to a unique identifier:

import os

from ray import air, tune
from ray.tune.search.hyperopt import HyperOptSearch

search_alg = HyperOptSearch()
tuner_1 = tune.Tuner(
    train_fn,
    tune_config=tune.TuneConfig(
        num_samples=5,
        search_alg=search_alg,
    ),
    run_config=air.RunConfig(
        name="my-experiment-1",
        local_dir="~/my_results",
    )
)
results = tuner_1.fit()

search_alg2 = HyperOptSearch()
search_alg2.restore_from_dir(
    os.path.join("~/my_results", "my-experiment-1")
)

Random search and grid search (tune.search.basic_variant.BasicVariantGenerator)#

The default and most basic way to do hyperparameter search is via random and grid search. Ray Tune does this through the BasicVariantGenerator class that generates trial variants given a search space definition.

The BasicVariantGenerator is used by default if no search algorithm is passed to Tuner.

basic_variant.BasicVariantGenerator([...])

Uses Tune's variant generation for resolving variables.

Ax (tune.search.ax.AxSearch)#

ax.AxSearch([space, metric, ...])

Uses Ax to optimize hyperparameters.

Bayesian Optimization (tune.search.bayesopt.BayesOptSearch)#

bayesopt.BayesOptSearch([space, metric, ...])

Uses fmfn/BayesianOptimization to optimize hyperparameters.

BOHB (tune.search.bohb.TuneBOHB)#

BOHB (Bayesian Optimization HyperBand) is an algorithm that both terminates bad trials and also uses Bayesian Optimization to improve the hyperparameter search. It is available from the HpBandSter library.

Importantly, BOHB is intended to be paired with a specific scheduler class: HyperBandForBOHB.

In order to use this search algorithm, you will need to install HpBandSter and ConfigSpace:

$ pip install hpbandster ConfigSpace

See the BOHB paper for more details.

bohb.TuneBOHB([space, bohb_config, metric, ...])

BOHB suggestion component.

BlendSearch (tune.search.flaml.BlendSearch)#

BlendSearch is an economical hyperparameter optimization algorithm that combines local search with global search. It is backed by the FLAML library. It allows users to specify a low-cost initial point as input if one exists.

In order to use this search algorithm, you will need to install flaml:

$ pip install 'flaml[blendsearch]'

See the BlendSearch paper and documentation in FLAML BlendSearch documentation for more details.

flaml.BlendSearch

alias of flaml.BlendSearch (requires flaml to be installed)

CFO (tune.search.flaml.CFO)#

CFO (Cost-Frugal hyperparameter Optimization) is a hyperparameter search algorithm based on randomized local search. It is backed by the FLAML library. It allows users to specify a low-cost initial point as input if one exists.

In order to use this search algorithm, you will need to install flaml:

$ pip install flaml

See the CFO paper and documentation in FLAML CFO documentation for more details.

flaml.CFO

alias of flaml.CFO (requires flaml to be installed)

Dragonfly (tune.search.dragonfly.DragonflySearch)#

dragonfly.DragonflySearch([optimizer, ...])

Uses Dragonfly to optimize hyperparameters.

HEBO (tune.search.hebo.HEBOSearch)#

hebo.HEBOSearch([space, metric, mode, ...])

Uses HEBO (Heteroscedastic Evolutionary Bayesian Optimization) to optimize hyperparameters.

HyperOpt (tune.search.hyperopt.HyperOptSearch)#

hyperopt.HyperOptSearch([space, metric, ...])

A wrapper around HyperOpt to provide trial suggestions.

Nevergrad (tune.search.nevergrad.NevergradSearch)#

nevergrad.NevergradSearch([optimizer, ...])

Uses Nevergrad to optimize hyperparameters.

Optuna (tune.search.optuna.OptunaSearch)#

optuna.OptunaSearch([space, metric, ...])

A wrapper around Optuna to provide trial suggestions.

SigOpt (tune.search.sigopt.SigOptSearch)#

You will need to use the SigOpt experiment and space specification to specify your search space.

sigopt.SigOptSearch([space, name, ...])

A wrapper around SigOpt to provide trial suggestions.

Scikit-Optimize (tune.search.skopt.SkOptSearch)#

skopt.SkOptSearch([optimizer, space, ...])

Uses Scikit-Optimize (skopt) to optimize hyperparameters.

ZOOpt (tune.search.zoopt.ZOOptSearch)#

zoopt.ZOOptSearch([algo, budget, dim_dict, ...])

A wrapper around ZOOpt to provide trial suggestions.

Repeated Evaluations (tune.search.Repeater)#

Use ray.tune.search.Repeater to average over multiple evaluations of the same hyperparameter configuration. This is useful when the evaluated training procedure has high variance (e.g., in reinforcement learning).

Repeater takes in a repeat parameter and a search_alg. The search_alg suggests new configurations to try, and the Repeater runs repeat trials of each configuration. It then averages the search_alg.metric over the final results of the repeated trials.

Warning

Avoid using Repeater together with a TrialScheduler: early termination can negatively affect the averaged metric.

Repeater(searcher[, repeat, set_index])

A wrapper algorithm for repeating trials of same parameters.

ConcurrencyLimiter (tune.search.ConcurrencyLimiter)#

Use ray.tune.search.ConcurrencyLimiter to limit the amount of concurrency when using a search algorithm. This is useful when a given optimization algorithm does not parallelize very well (like a naive Bayesian Optimization).

ConcurrencyLimiter(searcher, max_concurrent)

A wrapper algorithm for limiting the number of concurrent trials.

Custom Search Algorithms (tune.search.Searcher)#

If you are interested in implementing or contributing a new search algorithm, implement the following interface:

Searcher([metric, mode])

Abstract class for wrapping suggesting algorithms.

Searcher.suggest(trial_id)

Queries the algorithm to retrieve the next set of parameters.

Searcher.save(checkpoint_path)

Save state to path for this search algorithm.

Searcher.restore(checkpoint_path)

Restore state for this search algorithm.

Searcher.on_trial_result(trial_id, result)

Optional notification for result during training.

Searcher.on_trial_complete(trial_id[, ...])

Notification for the completion of trial.

If contributing, make sure to add test cases and register your searcher in the shim function described below.

Shim Instantiation (tune.create_searcher)#

There is also a shim function that constructs the search algorithm based on the provided string. This can be useful if the search algorithm you want to use changes often (e.g., specifying the search algorithm via a CLI option or config file).

create_searcher(search_alg, **kwargs)

Instantiate a search algorithm based on the given string.

By The Ray Team
© Copyright 2023, The Ray Team.