ray.rllib.algorithms.algorithm_config.AlgorithmConfig.debugging

AlgorithmConfig.debugging(*, logger_creator: ~typing.Callable[[], ~ray.tune.logger.logger.Logger] | None = <ray.rllib.utils.from_config._NotProvided object>, logger_config: dict | None = <ray.rllib.utils.from_config._NotProvided object>, log_level: str | None = <ray.rllib.utils.from_config._NotProvided object>, log_sys_usage: bool | None = <ray.rllib.utils.from_config._NotProvided object>, fake_sampler: bool | None = <ray.rllib.utils.from_config._NotProvided object>, seed: int | None = <ray.rllib.utils.from_config._NotProvided object>, _run_training_always_in_thread: bool | None = <ray.rllib.utils.from_config._NotProvided object>, _evaluation_parallel_to_training_wo_thread: bool | None = <ray.rllib.utils.from_config._NotProvided object>) → AlgorithmConfig

Sets the config’s debugging settings.

Parameters:
  • logger_creator – Callable that creates a ray.tune.Logger object. If unspecified, a default logger is created.

  • logger_config – Defines a logger-specific configuration to be used inside the Logger. The default value of None allows overwriting with nested dicts.

  • log_level – Set the ray.rllib.* log level for the agent process and its workers. Should be one of DEBUG, INFO, WARN, or ERROR. The DEBUG level will also periodically print out summaries of relevant internal dataflow (this is also printed out once at startup at the INFO level). When using the rllib train command, you can also use the -v and -vv flags as shorthand for INFO and DEBUG.

  • log_sys_usage – Log system resource metrics to results. This requires psutil to be installed for sys stats, and gputil for GPU metrics.

  • fake_sampler – Use a fake (infinite-speed) sampler. For testing only.

  • seed – This argument, in conjunction with worker_index, sets the random seed of each worker, so that identically configured trials will have identical results. This makes experiments reproducible.

  • _run_training_always_in_thread – Always runs the n training_step() calls per iteration in a separate thread (just as RLlib would with evaluation_parallel_to_training=True, but even without any evaluation going on and even without evaluation workers being created in the Algorithm).

  • _evaluation_parallel_to_training_wo_thread – Only relevant if evaluation_parallel_to_training is True. Then, in order to achieve parallelism, RLlib will not use a thread pool (as it usually does in this situation).

Returns:

This updated AlgorithmConfig object.
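
Example:

The following is a minimal usage sketch. PPOConfig, the environment name, and the chosen values are illustrative assumptions; only the debugging() keyword arguments themselves are part of the API documented above.

    from ray.rllib.algorithms.ppo import PPOConfig

    # Illustrative sketch: PPOConfig and the CartPole environment are
    # assumptions; debugging() accepts the keyword arguments listed above.
    config = (
        PPOConfig()
        .environment("CartPole-v1")
        .debugging(
            log_level="INFO",    # one of DEBUG, INFO, WARN, or ERROR
            log_sys_usage=True,  # needs psutil (and gputil for GPU metrics)
            seed=42,             # fix per-worker RNG seeds for reproducibility
        )
    )
    algo = config.build()

Because debugging() returns the updated AlgorithmConfig object, it can be chained with the other config methods, as shown above.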