ray.rllib.algorithms.algorithm_config.AlgorithmConfig.debugging#
- AlgorithmConfig.debugging(*, logger_creator: ~typing.Callable[[], ~ray.tune.logger.logger.Logger] | None = <ray.rllib.utils.from_config._NotProvided object>, logger_config: dict | None = <ray.rllib.utils.from_config._NotProvided object>, log_level: str | None = <ray.rllib.utils.from_config._NotProvided object>, log_sys_usage: bool | None = <ray.rllib.utils.from_config._NotProvided object>, fake_sampler: bool | None = <ray.rllib.utils.from_config._NotProvided object>, seed: int | None = <ray.rllib.utils.from_config._NotProvided object>, _run_training_always_in_thread: bool | None = <ray.rllib.utils.from_config._NotProvided object>, _evaluation_parallel_to_training_wo_thread: bool | None = <ray.rllib.utils.from_config._NotProvided object>) AlgorithmConfig [source]#
Sets the config’s debugging settings.
- Parameters:
logger_creator – Callable that creates a ray.tune.Logger object. If unspecified, a default logger is created.
logger_config – Define logger-specific configuration to be used inside the Logger. Default value None allows overwriting with nested dicts.
log_level – Set the ray.rllib.* log level for the agent process and its workers. Should be one of DEBUG, INFO, WARN, or ERROR. The DEBUG level also periodically prints out summaries of relevant internal dataflow (this is also printed out once at startup at the INFO level).
log_sys_usage – Log system resource metrics to results. This requires psutil to be installed for sys stats, and gputil for GPU metrics.
fake_sampler – Use a fake (infinite speed) sampler. For testing only.
seed – This argument, in conjunction with worker_index, sets the random seed of each worker, so that identically configured trials have identical results. This makes experiments reproducible.
_run_training_always_in_thread – Always runs the n training_step() calls per iteration in a separate thread (just as we would do with evaluation_parallel_to_training=True, but even without evaluation going on and even without evaluation workers being created in the Algorithm).
_evaluation_parallel_to_training_wo_thread – Only relevant if evaluation_parallel_to_training is True. Then, in order to achieve parallelism, RLlib doesn’t use a thread pool (as it usually does in this situation).
- Returns:
This updated AlgorithmConfig object.
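A minimal usage sketch of these debugging options (the choice of PPO and the CartPole-v1 environment is illustrative, not prescribed by this method):

```python
from ray.rllib.algorithms.ppo import PPOConfig

# Configure debugging settings on an AlgorithmConfig before building the Algorithm.
config = (
    PPOConfig()
    .environment("CartPole-v1")
    .debugging(
        log_level="INFO",    # one of DEBUG, INFO, WARN, ERROR
        log_sys_usage=True,  # requires psutil (and gputil for GPU metrics)
        seed=42,             # together with worker_index, makes identically
                             # configured trials reproducible
    )
)

# Build the Algorithm from the config (API may differ slightly across RLlib versions).
algo = config.build()
```

Because `debugging()` returns the updated AlgorithmConfig, it chains with the other config methods as shown above.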