ray.data.ActorPoolStrategy#

class ray.data.ActorPoolStrategy(*, size: int | None = None, min_size: int | None = None, max_size: int | None = None, initial_size: int | None = None, max_tasks_in_flight_per_actor: int | None = None, enable_true_multi_threading: bool = False)[source]#

Bases: ComputeStrategy

Specify the actor-based compute strategy for a Dataset transform.

ActorPoolStrategy specifies that an autoscaling pool of actors should be used for a given Dataset transform. This is useful for stateful setup of callable classes.

For a fixed-sized pool of size n, use ActorPoolStrategy(size=n).

To autoscale from m to n actors, use ActorPoolStrategy(min_size=m, max_size=n).

To autoscale from m to n actors, with an initial size of initial, use ActorPoolStrategy(min_size=m, max_size=n, initial_size=initial).
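The main benefit of an actor pool is that the expensive setup in a callable class's `__init__` runs once per actor and is then reused across batches. The sketch below illustrates that idea with plain Python threads standing in for Ray actors; `BatchPredictor`, `run_batch`, and the thread pool are illustrative stand-ins, not Ray APIs.

```python
from concurrent.futures import ThreadPoolExecutor
import threading

class BatchPredictor:
    """Callable class: expensive setup runs once per worker, not once per batch."""
    setup_count = 0
    _setup_lock = threading.Lock()

    def __init__(self):
        # Expensive one-time setup (e.g. loading model weights).
        with BatchPredictor._setup_lock:
            BatchPredictor.setup_count += 1

    def __call__(self, batch):
        # Cheap per-batch work that reuses the loaded state.
        return [x * 2 for x in batch]

# Fixed pool of 2 stateful workers, analogous to ActorPoolStrategy(size=2).
pool_size = 2
local = threading.local()

def run_batch(batch):
    # Each worker thread builds its own predictor at most once, then reuses it.
    if not hasattr(local, "predictor"):
        local.predictor = BatchPredictor()
    return local.predictor(batch)

batches = [[1, 2], [3, 4], [5, 6], [7, 8]]
with ThreadPoolExecutor(max_workers=pool_size) as ex:
    results = list(ex.map(run_batch, batches))
```

Even though four batches are processed, setup runs at most `pool_size` times; with real Ray actors the same amortization happens across the batches each actor receives.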

To improve pipelining (overlapping task dependency prefetching with computation and hiding actor startup delays), set max_tasks_in_flight_per_actor to 2 or greater; to reduce the delay caused by tasks queueing on the worker actors, set max_tasks_in_flight_per_actor to 1.
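The trade-off can be sketched with a bounded per-worker buffer: a deeper buffer lets the producer stay ahead of the worker (prefetching overlaps with computation), at the cost of tasks waiting in that worker's queue. This is an illustrative model using stdlib threads and queues, not Ray internals; the buffer depth plays the role of max_tasks_in_flight_per_actor.

```python
import queue
import threading

def process(x):
    # Stand-in for the actor UDF's per-task work.
    return x + 1

def worker(in_q, results):
    # Consumes prefetched tasks from its bounded buffer until it sees the sentinel.
    while True:
        item = in_q.get()
        if item is None:
            break
        results.append(process(item))

max_in_flight = 2  # plays the role of max_tasks_in_flight_per_actor=2
in_q = queue.Queue(maxsize=max_in_flight)  # at most 2 tasks buffered ahead of the worker
results = []
t = threading.Thread(target=worker, args=(in_q, results))
t.start()
for x in range(10):
    in_q.put(x)  # blocks once the worker already has max_in_flight tasks queued
in_q.put(None)   # sentinel: no more tasks
t.join()
```

With `maxsize=1` the producer can stay at most one task ahead, minimizing queueing delay; with a larger `maxsize` the worker rarely sits idle waiting for its next input.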

The enable_true_multi_threading argument primarily exists to prevent GPU OOM issues with multi-threaded actors. The life cycle of an actor task involves 3 main steps:

  1. Batching Inputs

  2. Running actor UDF

  3. Batching Outputs

The enable_true_multi_threading flag affects step 2. If set to True, then the UDF can be run concurrently. By default, it is set to False, so at most 1 actor UDF is running at a time per actor. The max_concurrency flag on ray.remote affects steps 1 and 3. Below is a matrix summary:

  • [enable_true_multi_threading=False or True, max_concurrency=1] = 1 actor task running per actor, so at most 1 of steps 1, 2, or 3 is running at any point in time.

  • [enable_true_multi_threading=False, max_concurrency>1] = multiple tasks running per actor (respecting the GIL), but the UDF runs 1 at a time. This is useful for workloads that mix CPU and GPU work, where you want a large batch size but also want to hide the overhead of batching the inputs. In this case, CPU batching is done concurrently, while GPU inference is done 1 at a time. Concretely, steps 1 and 3 can run on multiple threads, while step 2 runs serially.

  • [enable_true_multi_threading=True, max_concurrency>1] = multiple tasks running per actor. Unlike the previous case, the UDF runs concurrently (respecting the GIL). There are no restrictions on steps 1, 2, or 3.
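The middle row of the matrix above (batching concurrent, UDF serialized) can be illustrated by guarding only step 2 with a lock, while steps 1 and 3 run freely on multiple threads. This is a conceptual sketch with stdlib threads standing in for concurrent actor tasks; the function names are illustrative, not Ray APIs.

```python
import threading

udf_lock = threading.Lock()   # serializes step 2, as when enable_true_multi_threading=False
counter_lock = threading.Lock()
active_udfs = 0
max_active_udfs = 0
all_results = []

def batch_inputs(items):
    # Step 1: input batching; free to run on many threads at once.
    return list(items)

def udf(batch):
    # Step 2: held under udf_lock, so at most one UDF runs at a time.
    global active_udfs, max_active_udfs
    with udf_lock:
        with counter_lock:
            active_udfs += 1
            max_active_udfs = max(max_active_udfs, active_udfs)
        out = [x * x for x in batch]
        with counter_lock:
            active_udfs -= 1
    return out

def batch_outputs(batch):
    # Step 3: output batching; also free to run concurrently.
    return batch

def actor_task(items):
    all_results.append(batch_outputs(udf(batch_inputs(items))))

threads = [threading.Thread(target=actor_task, args=([i, i + 1],)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Removing `udf_lock` corresponds to `enable_true_multi_threading=True`: step 2 would then also run on multiple threads, subject only to the GIL.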

NOTE: enable_true_multi_threading does not apply to async actors.

Methods

__init__

Construct ActorPoolStrategy for a Dataset transform.