ray.rllib.evaluation.sampler.SyncSampler

class ray.rllib.evaluation.sampler.SyncSampler(*, worker: RolloutWorker, env: BaseEnv, clip_rewards: bool | float, rollout_fragment_length: int, count_steps_by: str = 'env_steps', callbacks: DefaultCallbacks, multiple_episodes_in_batch: bool = False, normalize_actions: bool = True, clip_actions: bool = False, observation_fn: ObservationFunction | None = None, sample_collector_class: Type[SampleCollector] | None = None, render: bool = False, policies=None, policy_mapping_fn=None, preprocessors=None, obs_filters=None, tf_sess=None, horizon=-1, soft_horizon=-1, no_done_at_end=-1)

Bases: SamplerInput

Synchronous SamplerInput that collects experiences from its environment when get_data() is called; all env stepping happens in the calling thread.
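In normal RLlib usage the sampler is created and owned by a RolloutWorker rather than instantiated by hand. The sketch below is illustrative only: it assumes pre-existing worker, env, and callbacks objects and shows how the required keyword arguments from the signature above map onto a construction call, followed by pulling one experience batch via get_data().

```python
# Minimal sketch, assuming `worker` (RolloutWorker), `env` (BaseEnv), and
# `callbacks` (DefaultCallbacks) already exist; normally RolloutWorker
# builds this sampler internally.
from ray.rllib.evaluation.sampler import SyncSampler

sampler = SyncSampler(
    worker=worker,                # rollout worker that owns this sampler
    env=env,                      # env to step through
    clip_rewards=False,           # keep raw rewards (bool or clip value)
    rollout_fragment_length=200,  # env steps to collect per batch
    callbacks=callbacks,          # user/algorithm callbacks
)

# Sampling is synchronous: this call steps `env` in the current thread until
# a full fragment of rollout_fragment_length steps has been collected, then
# returns it as a batch.
batch = sampler.get_data()
```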

Methods

__init__: Initializes a SyncSampler instance.

tf_input_ops: Returns TensorFlow queue ops for reading inputs from this reader.
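The queue ops returned by tf_input_ops are mainly useful for feeding sampled experiences into TensorFlow graph code, for example a custom auxiliary loss. A hedged sketch, continuing from the construction example above and assuming the sampled batches contain the standard SampleBatch columns "obs" and "actions" (the exact keys depend on the environment and policy):

```python
# Hedged sketch: tf_input_ops is expected to return a dict mapping
# SampleBatch column names to TensorFlow tensors backed by a queue that is
# filled from this sampler. The "obs"/"actions" keys are assumptions about
# which columns the sampled batches actually contain.
input_ops = sampler.tf_input_ops()

obs_tensor = input_ops["obs"]         # observations read from the queue
action_tensor = input_ops["actions"]  # actions read from the queue
# These tensors can then be consumed inside a custom TF loss or model graph.
```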