Using Actors

An actor is essentially a stateful worker (or a service). When a new actor is instantiated, a new worker is created, and methods of the actor are scheduled on that specific worker and can access and mutate the state of that worker.

Java demo code in this documentation can be found here.

Creating an actor

You can convert a standard Python class into a Ray actor class as follows:

@ray.remote
class Counter(object):
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1
        return self.value

    def get_counter(self):
        return self.value

counter_actor = Counter.remote()

Note that the above is equivalent to the following:

class Counter(object):
    def __init__(self):
        self.value = 0

    def increment(self):
        self.value += 1
        return self.value

    def get_counter(self):
        return self.value

Counter = ray.remote(Counter)

counter_actor = Counter.remote()

When the above actor is instantiated, the following events happen:

  1. A node in the cluster is chosen and a worker process is created on that node for the purpose of running methods called on the actor.

  2. A Counter object is created on that worker and the Counter constructor is run.

Actor Methods

Methods of the actor can be called remotely.

counter_actor = Counter.remote()

assert ray.get(counter_actor.increment.remote()) == 1

@ray.remote
class Foo(object):

    # Any method of the actor can return multiple object refs.
    @ray.method(num_returns=2)
    def bar(self):
        return 1, 2

f = Foo.remote()

obj_ref1, obj_ref2 = f.bar.remote()
assert ray.get(obj_ref1) == 1
assert ray.get(obj_ref2) == 2

Resources with Actors

You can specify that an actor requires CPUs or GPUs in the decorator. While Ray has built-in support for CPUs and GPUs, Ray can also handle custom resources.

When using GPUs, Ray will automatically set the environment variable CUDA_VISIBLE_DEVICES for the actor when it is instantiated. The actor will have access to a list of the IDs of the GPUs that it is allowed to use via ray.get_gpu_ids(). This is a list of strings, like [], or ['1'], or ['2', '5', '6']. Under some circumstances, the IDs of GPUs could be given as UUID strings instead of indices (see the CUDA programming guide).

@ray.remote(num_cpus=2, num_gpus=1)
class GPUActor(object):
    pass

When a GPUActor instance is created, it will be placed on a node that has at least 1 GPU, and the GPU will be reserved for the actor for the duration of the actor’s lifetime (even if the actor is not executing tasks). The GPU resources will be released when the actor terminates.

If you want to use custom resources, make sure your cluster is configured to have these resources (see configuration instructions):


  • If you specify resource requirements in an actor class’s remote decorator, then the actor will acquire those resources for its entire lifetime (if you do not specify CPU resources, the default is 1), even if it is not executing any methods. The actor will not acquire any additional resources when executing methods.

  • If you do not specify any resource requirements in the actor class’s remote decorator, then by default, the actor will not acquire any resources for its lifetime, but every time it executes a method, it will need to acquire 1 CPU resource.

@ray.remote(resources={'Resource2': 1})
class CustomResourceActor(object):
    pass

If you need to instantiate many copies of the same actor with varying resource requirements, you can do so as follows.

@ray.remote
class Counter(object):
    pass

a1 = Counter.options(num_cpus=1, resources={"Custom1": 1}).remote()
a2 = Counter.options(num_cpus=2, resources={"Custom2": 1}).remote()
a3 = Counter.options(num_cpus=3, resources={"Custom3": 1}).remote()

Note that to create these actors successfully, Ray will need to be started with sufficient CPU resources and the relevant custom resources.

Terminating Actors

Automatic termination

Actor processes will be terminated automatically when the initial actor handle goes out of scope in Python. If we create an actor with actor_handle = Counter.remote(), then when actor_handle goes out of scope and is destructed, the actor process will be terminated. Note that this only applies to the original actor handle created for the actor and not to subsequent actor handles created by passing the actor handle to other tasks.

Manual termination within the actor

If necessary, you can manually terminate an actor from within one of the actor methods. This will kill the actor process and release the resources associated with the actor.

This approach should generally not be necessary as actors are automatically garbage collected. The ObjectRef resulting from the task can be waited on to wait for the actor to exit (calling ray.get() on it will raise a RayActorError).

Note that this method of termination will wait until any previously submitted tasks finish executing and then exit the process gracefully with sys.exit.

Manual termination via an actor handle

You can terminate an actor forcefully using ray.kill.


This will call the exit syscall from within the actor, causing it to exit immediately and any pending tasks to fail.

This will not go through the normal Python sys.exit teardown logic, so any exit handlers installed in the actor using atexit will not be called.

Passing Around Actor Handles

Actor handles can be passed into other tasks. We can define remote functions (or actor methods) that use actor handles.

import time

@ray.remote
def f(counter):
    for _ in range(1000):
        time.sleep(0.1)
        counter.increment.remote()
If we instantiate an actor, we can pass the handle around to various tasks.

counter = Counter.remote()

# Start some tasks that use the actor.
[f.remote(counter) for _ in range(3)]

# Print the counter value.
for _ in range(10):
    time.sleep(1)
    print(ray.get(counter.get_counter.remote()))

Named Actors

An actor can be given a globally unique name. This allows you to retrieve the actor from any job in the Ray cluster. This can be useful if you cannot directly pass the actor handle to the task that needs it, or if you are trying to access an actor launched by another driver. Note that the actor will still be garbage-collected if no handles to it exist. See Actor Lifetimes for more details.

# Create an actor with a name
counter = Counter.options(name="some_name").remote()


# Retrieve the actor later somewhere
counter = ray.get_actor("some_name")

Actor Lifetimes

Separately, actor lifetimes can be decoupled from the job, allowing an actor to persist even after the driver process of the job exits.

counter = Counter.options(name="CounterActor", lifetime="detached").remote()

The CounterActor will be kept alive even after the driver running the above script exits. Therefore, it is possible to run the following script in a different driver:

counter = ray.get_actor("CounterActor")

Note that the lifetime option is decoupled from the name. If we only specified the name without specifying lifetime="detached", then the CounterActor can only be retrieved as long as the original driver is still running.

Actor Pool

The ray.util module contains a utility class, ActorPool. This class is similar to multiprocessing.Pool and lets you schedule Ray tasks over a fixed pool of actors.

from ray.util import ActorPool

@ray.remote
class Actor(object):
    def double(self, n):
        return n * 2

a1, a2 = Actor.remote(), Actor.remote()
pool = ActorPool([a1, a2])
print(list(pool.map(lambda a, v: a.double.remote(v), [1, 2, 3, 4])))
# [2, 4, 6, 8]

See the package reference for more information.

FAQ: Actors, Workers and Resources

What’s the difference between a worker and an actor?

Each “Ray worker” is a Python process.

Workers are treated differently for tasks and actors. Any “Ray worker” is either (1) used to execute multiple Ray tasks, or (2) started as a dedicated Ray actor.

  • Tasks: When Ray starts on a machine, a number of Ray workers will be started automatically (1 per CPU by default). They will be used to execute tasks (like a process pool). If you execute 8 tasks with num_cpus=2, and the total number of CPUs is 16 (ray.cluster_resources()["CPU"] == 16), you will end up with 8 of your 16 workers idling.

  • Actors: A Ray actor is also a “Ray worker”, but it is instantiated at runtime (upon actor_cls.remote()). All of its methods will run in the same process, using the resources designated when the actor was defined. Note that unlike tasks, the Python processes that run Ray actors are not reused; they are terminated when the actor is deleted.

To maximally utilize your resources, you want to maximize the time that your workers are working. You also want to allocate enough cluster resources so that all of your needed actors and any other tasks you define can run. This also implies that tasks are scheduled more flexibly: if you don’t need the stateful part of an actor, you are mostly better off using tasks.

Concurrency within an actor

Within a single actor process, it is possible to execute concurrent threads.

Ray offers two types of concurrency within an actor:

  • async execution, for actors whose methods are defined with async def

  • threading, enabled by setting max_concurrency on the actor

See the AsyncIO / Concurrency for Actors documentation for more details.