ray.data.llm.vLLMEngineProcessorConfig#

class ray.data.llm.vLLMEngineProcessorConfig(*, batch_size: int = 64, accelerator_type: str | None = None, concurrency: int = 1, model: str, engine_kwargs: ~typing.Dict[str, ~typing.Any] = <factory>, task_type: ~ray.llm._internal.batch.stages.vllm_engine_stage.vLLMTaskType = vLLMTaskType.GENERATE, runtime_env: ~typing.Dict[str, ~typing.Any] | None = None, max_pending_requests: int | None = None, max_concurrent_batches: int = 4, apply_chat_template: bool = True, chat_template: str | None = None, tokenize: bool = True, detokenize: bool = True, has_image: bool = False)[source]#

The configuration for the vLLM engine processor.

Parameters:
  • model – The model to use for the vLLM engine.

  • batch_size – The batch size to send to the vLLM engine. Larger batch sizes are more likely to saturate the compute resources and can achieve higher throughput, while smaller batch sizes are more fault-tolerant and can reduce bubbles in the data pipeline. Tune the batch size to balance throughput and fault tolerance for your use case.

  • engine_kwargs – The kwargs to pass to the vLLM engine. The default engine kwargs are pipeline_parallel_size: 1, tensor_parallel_size: 1, max_num_seqs: 128, and distributed_executor_backend: "mp".

  • task_type – The task type to use. Defaults to 'generate' if not specified.

  • runtime_env – The runtime environment to use for the vLLM engine. See the Ray runtime environments documentation for more details.

  • max_pending_requests – The maximum number of pending requests. If not specified, the vLLM engine's default value is used.

  • max_concurrent_batches – The maximum number of concurrent batches in the engine. Processing batches concurrently overlaps their execution and hides the tail latency of each batch. The default may not be optimal when the batch size or the per-batch processing latency is very small, but it should be sufficient for batch sizes >= 64.

  • apply_chat_template – Whether to apply the chat template to the input messages.

  • chat_template – The chat template to use. This is usually not needed if the model checkpoint already contains the chat template.

  • tokenize – Whether to tokenize the input before passing it to the vLLM engine. If disabled, vLLM tokenizes the prompt inside the engine.

  • detokenize – Whether to detokenize the output.

  • has_image – Whether the input messages contain images. See the multimodal configuration sketch under Examples below.

  • accelerator_type – The accelerator type used by the LLM stage in a processor. Defaults to None, meaning that only the CPU will be used.

  • concurrency – The number of workers for data parallelism. Defaults to 1.

Examples

import ray
from ray.data.llm import vLLMEngineProcessorConfig, build_llm_processor

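# Configure a single vLLM engine replica (concurrency=1) that receives batches of 64 rows.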
config = vLLMEngineProcessorConfig(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",
    engine_kwargs=dict(
        enable_prefix_caching=True,
        enable_chunked_prefill=True,
        max_num_batched_tokens=4096,
    ),
    concurrency=1,
    batch_size=64,
)
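# Build the processor: preprocess turns each row into chat messages and
# sampling parameters, and postprocess extracts the generated text.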
processor = build_llm_processor(
    config,
    preprocess=lambda row: dict(
        messages=[
            {"role": "system", "content": "You are a calculator"},
            {"role": "user", "content": f"{row['id']} ** 3 = ?"},
        ],
        sampling_params=dict(
            temperature=0.3,
            max_tokens=20,
            detokenize=False,
        ),
    ),
    postprocess=lambda row: dict(
        resp=row["generated_text"],
    ),
)

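# Run the processor over a small dataset of integer ids and print each result.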
ds = ray.data.range(300)
ds = processor(ds)
for row in ds.take_all():
    print(row)
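
The same configuration fields compose for other workloads. The following is a minimal sketch, not taken from the Ray documentation, of a hypothetical multimodal setup: the model name, accelerator label, and environment variable are illustrative assumptions, while the keyword arguments themselves come from the class signature above.

config = vLLMEngineProcessorConfig(
    model="Qwen/Qwen2-VL-2B-Instruct",  # assumed vision-language model, for illustration only
    has_image=True,                      # input messages are expected to carry images
    accelerator_type="L4",               # assumed accelerator type label
    runtime_env={"env_vars": {"HF_TOKEN": "<your-huggingface-token>"}},
    engine_kwargs=dict(max_model_len=4096),
    concurrency=1,
    batch_size=32,
)

Pass this config to build_llm_processor exactly as in the example above; the preprocess step would also need to supply image content in the messages.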

PublicAPI (alpha): This API is in alpha and may change before becoming stable.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'validate_assignment': True}#

Configuration for the model; should be a dictionary conforming to pydantic's ConfigDict.