ray.rllib.utils.metrics.metrics_logger.MetricsLogger.log_dict#
- MetricsLogger.log_dict(value_dict, *, key: str | Tuple[str, ...] | None = None, reduce: str | None = 'mean', window: int | float | None = None, ema_coeff: float | None = None, percentiles: List[int] | bool = False, clear_on_reduce: bool = False, with_throughput: bool = False, throughput_ema_coeff: float | None = None, reduce_per_index_on_aggregate: bool = False) -> None
Logs all leafs of a possibly nested dict of values to this logger.

To aggregate logs from upstream components, use `aggregate`. This is a convenience method that is equivalent to:

```python
tree.map_structure_with_path(
    lambda path, value: logger.log_value(path, value, ...),
    value_dict,
)
```

Traverses through all leafs of `value_dict` and, if a path cannot be found in this logger yet, adds the `Stats` found at the leaf under that new key. If a path already exists, merges the found leaf (`Stats`) with the ones already logged before. This way, `value_dict` does NOT have to have the same structure as what has already been logged to `self`, but can be used to log values under new keys or nested key paths.

```python
logger = MetricsLogger()

# Log n dicts with keys "a" and (some) "b". By default, all logged values
# under a key are averaged once `reduce()` is called.
logger.log_dict(
    {
        "a": 0.1,
        "b": -0.1,
    },
    window=10,
)
# No need to repeat the `window` arg if the key already exists.
logger.log_dict({
    "b": -0.2,
})
logger.log_dict({
    "a": 0.2,
    "c": {"d": 5.0},  # can also introduce an entirely new (nested) key
})

# Peek at the current (reduced) values under "a", "b", and ("c", "d").
check(logger.peek("a"), 0.15)
check(logger.peek("b"), -0.15)
check(logger.peek(("c", "d")), 5.0)

# Reduce all stats.
results = logger.reduce()
check(results, {
    "a": 0.15,
    "b": -0.15,
    "c": {"d": 5.0},
})
```
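The merge behavior above can be illustrated with a small, self-contained sketch. Note that this is plain Python with no RLlib dependency: the `log_dict` function and `store` dict here are hypothetical stand-ins for the real `Stats`-based implementation, and the final mean is only one of the possible reductions.

```python
def log_dict(store, value_dict, prefix=()):
    """Append each leaf of a (possibly nested) dict under its key path."""
    for key, value in value_dict.items():
        path = prefix + (key,)
        if isinstance(value, dict):
            # Recurse into nested dicts, extending the key path.
            log_dict(store, value, path)
        else:
            # New paths are created on the fly; existing paths are merged
            # into (here: appended to), never overwritten.
            store.setdefault(path, []).append(value)

store = {}
log_dict(store, {"a": 0.1, "b": -0.1})
log_dict(store, {"b": -0.2})
log_dict(store, {"a": 0.2, "c": {"d": 5.0}})

# A "mean" reduction over all values logged so far, per key path.
reduced = {path: sum(vals) / len(vals) for path, vals in store.items()}
```

As in the example above, `reduced` ends up holding roughly 0.15 for `("a",)`, -0.15 for `("b",)`, and 5.0 for `("c", "d")`.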
- Parameters:
  - value_dict – The (possibly nested) dict with individual values as leafs to be logged to this logger.
  - key – An additional key (or tuple of keys) to prepend to all the keys (or tuples of keys, in case of nesting) found inside `value_dict`. Useful to log the entire contents of `value_dict` in a more organized fashion under one new key, for example logging the results returned by an EnvRunner under one dedicated key.
  - reduce – The reduction method to apply once `self.reduce()` is called. If None, collects all logged values under `key` in a list (and also returns that list upon calling `self.reduce()`).
  - window – An optional window size to reduce over. If not None, the reduction operation is applied only to the most recent `window` items, and, after reduction, the internal values list under `key` is shortened to hold at most `window` items (the most recent ones). Must be None if `ema_coeff` is provided. If None (and `ema_coeff` is None), the reduction must not be "mean".
  - ema_coeff – An optional EMA coefficient to use if `reduce` is "mean" and no `window` is provided. Note that if both `window` and `ema_coeff` are provided, an error is thrown. Also, if `ema_coeff` is provided, `reduce` must be "mean". The reduction formula for EMA is: EMA(t1) = (1.0 - ema_coeff) * EMA(t0) + ema_coeff * new_value.
  - percentiles – If `reduce` is None, the percentiles of the values list given by `percentiles` can be computed. Defaults to [0, 0.5, 0.75, 0.9, 0.95, 0.99, 1] if set to True. When using percentiles, a `window` must be provided, and it should be chosen carefully: RLlib computes exact percentiles, and the computational complexity is O(m*n*log(n/m)), where n is the window size and m is the number of parallel metrics loggers involved (for example, m EnvRunners).
  - clear_on_reduce – If True, all values under `key` are emptied after `self.reduce()` is called. Setting this to True is useful for cases in which the internal values list would otherwise grow indefinitely, for example if `reduce` is None and no `window` is provided.
  - with_throughput – Whether to track a throughput estimate together with this metric. This is only supported for `reduce=sum` and `clear_on_reduce=False` metrics (so-called "lifetime counts"). The `Stats` object under the logged key then keeps track of the time passed between two consecutive calls to `reduce()` and updates its throughput estimate. The current throughput estimate of a key can be obtained through `<MetricsLogger>.peek(key, throughput=True)`.
  - throughput_ema_coeff – The EMA coefficient to use for throughput tracking. Only used if `with_throughput=True`. Defaults to 0.05 if `with_throughput` is True.
  - reduce_per_index_on_aggregate – If True, when merging `Stats` objects, incoming values are reduced per index, such that the new value at index `n` is the reduced value of all incoming values at index `n`. If False, when reducing `n` `Stats`, the first `n` merged values are the reduced value of all incoming values at index 0, the next `n` merged values are the reduced values of all incoming values at index 1, etc.
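The EMA reduction formula from the `ema_coeff` description above can be written out directly. This is a plain-Python sketch: `ema_update` is a hypothetical helper for illustration, not part of the RLlib API.

```python
def ema_update(ema_prev: float, new_value: float, ema_coeff: float = 0.01) -> float:
    # EMA(t1) = (1.0 - ema_coeff) * EMA(t0) + ema_coeff * new_value
    return (1.0 - ema_coeff) * ema_prev + ema_coeff * new_value

# A small ema_coeff weights history heavily, so the estimate only moves
# slowly toward new values; here ema_coeff=0.5 for a visible effect.
ema = 0.0
for value in [1.0, 1.0, 1.0]:
    ema = ema_update(ema, value, ema_coeff=0.5)
```

After the three updates, `ema` has moved from 0.0 most of the way toward 1.0, which is why no `window` is needed: old values decay away on their own.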
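The `reduce_per_index_on_aggregate=True` case can likewise be sketched for a "mean" reduction. Again plain Python: `merge_per_index` is a hypothetical illustration of the per-index semantics, not the actual `Stats` merge code.

```python
def merge_per_index(value_lists, reduce_fn=lambda vals: sum(vals) / len(vals)):
    # Index n of the result is the reduction of all incoming values at index n.
    return [reduce_fn(vals) for vals in zip(*value_lists)]

# Two upstream loggers (for example, two EnvRunners), three values each:
merged = merge_per_index([[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]])
```

Here `merged` pairs up index 0 with index 0, index 1 with index 1, and so on, yielding one reduced value per index rather than one block of values per incoming `Stats`.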