ray.rllib.evaluation.sampler.SyncSampler
class ray.rllib.evaluation.sampler.SyncSampler(*, worker: RolloutWorker, env: ray.rllib.env.base_env.BaseEnv, clip_rewards: Union[bool, float], rollout_fragment_length: int, count_steps_by: str = 'env_steps', callbacks: DefaultCallbacks, multiple_episodes_in_batch: bool = False, normalize_actions: bool = True, clip_actions: bool = False, observation_fn: Optional[ObservationFunction] = None, sample_collector_class: Optional[Type[ray.rllib.evaluation.collectors.sample_collector.SampleCollector]] = None, render: bool = False, policies=None, policy_mapping_fn=None, preprocessors=None, obs_filters=None, tf_sess=None, horizon=-1, soft_horizon=-1, no_done_at_end=-1)
Bases: ray.rllib.evaluation.sampler.SamplerInput
Sync SamplerInput that collects experiences when get_data() is called.
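A SyncSampler is normally created by a RolloutWorker rather than by hand, so a fully standalone snippet is not practical here. The sketch below only illustrates the documented keyword arguments: worker (a RolloutWorker) and base_env (a BaseEnv) are assumed to already exist, and the DefaultCallbacks import path may differ between Ray versions.

from ray.rllib.evaluation.sampler import SyncSampler
from ray.rllib.algorithms.callbacks import DefaultCallbacks  # import path assumed; varies by Ray version

# `worker` (a RolloutWorker) and `base_env` (a BaseEnv) are assumed to exist;
# in practice, RolloutWorker builds its own SyncSampler internally.
sampler = SyncSampler(
    worker=worker,
    env=base_env,
    clip_rewards=False,               # keep raw rewards
    rollout_fragment_length=200,      # target env steps per returned batch
    callbacks=DefaultCallbacks(),
)

# Each call returns the next batch of collected experiences.
batch = sampler.get_data()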
Methods

__init__(*, worker, env, clip_rewards, ...)
    Initializes a SyncSampler instance.

tf_input_ops([queue_size])
    Returns TensorFlow queue ops for reading inputs from this reader.
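tf_input_ops() is mainly useful for pulling sampled data into a TF policy's graph, for example to add an auxiliary term to a custom model loss. A rough sketch of that pattern, assuming an existing sampler in scope and a hypothetical imitation_loss helper:

from ray.rllib.models.modelv2 import ModelV2

class MyModel(ModelV2):
    def custom_loss(self, policy_loss, loss_inputs):
        # Queue ops: one TF tensor per SampleBatch column, fed by a background
        # thread that repeatedly pulls data from the reader. `sampler` is
        # assumed to be an existing SamplerInput (e.g. the SyncSampler above).
        input_ops = sampler.tf_input_ops(queue_size=1)
        # `imitation_loss` is a hypothetical helper that scores this model's
        # outputs for the sampled observations against the sampled actions.
        il_loss = imitation_loss(self, input_ops["obs"], input_ops["actions"])
        return policy_loss + il_loss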