ray.method

ray.method(__method: Callable[[Any], _Ret]) → _RemoteMethodNoArgs[_Ret]
ray.method(__method: Callable[[Any, _T0], _Ret]) → _RemoteMethod0[_Ret, _T0]
ray.method(__method: Callable[[Any, _T0, _T1], _Ret]) → _RemoteMethod1[_Ret, _T0, _T1]
ray.method(__method: Callable[[Any, _T0, _T1, _T2], _Ret]) → _RemoteMethod2[_Ret, _T0, _T1, _T2]
ray.method(__method: Callable[[Any, _T0, _T1, _T2, _T3], _Ret]) → _RemoteMethod3[_Ret, _T0, _T1, _T2, _T3]
ray.method(__method: Callable[[Any, _T0, _T1, _T2, _T3, _T4], _Ret]) → _RemoteMethod4[_Ret, _T0, _T1, _T2, _T3, _T4]
ray.method(__method: Callable[[Any, _T0, _T1, _T2, _T3, _T4, _T5], _Ret]) → _RemoteMethod5[_Ret, _T0, _T1, _T2, _T3, _T4, _T5]
ray.method(__method: Callable[[Any, _T0, _T1, _T2, _T3, _T4, _T5, _T6], _Ret]) → _RemoteMethod6[_Ret, _T0, _T1, _T2, _T3, _T4, _T5, _T6]
ray.method(__method: Callable[[Any, _T0, _T1, _T2, _T3, _T4, _T5, _T6, _T7], _Ret]) → _RemoteMethod7[_Ret, _T0, _T1, _T2, _T3, _T4, _T5, _T6, _T7]
ray.method(__method: Callable[[Any, _T0, _T1, _T2, _T3, _T4, _T5, _T6, _T7, _T8], _Ret]) → _RemoteMethod8[_Ret, _T0, _T1, _T2, _T3, _T4, _T5, _T6, _T7, _T8]
ray.method(__method: Callable[[Any, _T0, _T1, _T2, _T3, _T4, _T5, _T6, _T7, _T8, _T9], _Ret]) → _RemoteMethod9[_Ret, _T0, _T1, _T2, _T3, _T4, _T5, _T6, _T7, _T8, _T9]
ray.method(*, num_returns: int | Literal['streaming'] | None = None, concurrency_group: str | None = None, max_task_retries: int | None = None, retry_exceptions: bool | list | tuple | None = None, _generator_backpressure_num_objects: int | None = None, enable_task_events: bool | None = None, tensor_transport: TensorTransportEnum | None = None) → Callable[[Callable[Concatenate[Any, _P], _Ret]], Any]

Annotate an actor method.

import ray

@ray.remote
class Foo:
    @ray.method(num_returns=2)
    def bar(self):
        return 1, 2

f = Foo.remote()

_, _ = f.bar.remote()

Parameters:
  • num_returns – The number of object refs that should be returned by invocations of this actor method. The default value is 1 for a normal actor task and “streaming” for an actor generator task (a function that yields objects instead of returning them).

  • max_task_retries – How many times to retry an actor task if it fails due to a runtime error, e.g., the actor has died. The default value is 0. If set to -1, the system retries the failed task until it succeeds or the actor reaches its max_restarts limit. If set to n > 0, the system retries the failed task up to n times, after which the task raises a RayActorError upon ray.get. Note that Python exceptions trigger retries only if retry_exceptions is set for the method; in that case, once max_task_retries is exhausted, the task rethrows the exception. You can override this number with the method’s max_task_retries option in the @ray.method decorator or in .options().

  • retry_exceptions – Whether to retry all Python exceptions (a boolean), or a list of allowlisted exception types to retry. The default value is False (tasks are retried only upon system failures, and only if max_task_retries is set).

  • concurrency_group – The name of the concurrency group to use for the actor method. By default, the actor is single-threaded and runs all actor tasks on the same thread. See Defining Concurrency Groups.

  • tensor_transport – [Alpha] The tensor transport protocol to use for the actor method. The valid values are “OBJECT_STORE” (default), “NCCL”, “GLOO”, or “NIXL” (case-insensitive). If a non-object store transport is specified, Ray will store a reference instead of a copy of any torch.Tensors found inside values returned by this task, and the tensors will be sent directly to other tasks using the specified transport. NCCL and GLOO transports require first creating a collective with the involved actors using ray.experimental.collective.create_collective_group(). See Ray Direct Transport (RDT) for more details.