ray.rllib.env.multi_agent_episode.MultiAgentEpisode.set_rewards
- MultiAgentEpisode.set_rewards(*, new_data: Dict[Hashable, Any], at_indices: int | List[int] | slice | None = None, neg_index_as_lookback: bool = False) -> None
Overwrites all or some of this Episode’s rewards with the provided data.
This is a helper method that batches calls to SingleAgentEpisode.set_rewards across all agents in this episode. For more detail, see SingleAgentEpisode.set_rewards.
- Parameters:
  - new_data – A dict mapping agent IDs to new reward data. Each value in the dict is the new reward data to overwrite the existing data with. This may be a list of individual reward(s) if this episode has not been numpy'ized yet. If this episode has already been numpy'ized, this should be a np.ndarray whose length exactly matches the to-be-overwritten slice or segment (provided by at_indices). See the sketch after this list for a usage example.
  - at_indices – A single int is interpreted as one index to overwrite with new_data (which is expected to hold a single reward per agent). A list of ints is interpreted as a list of indices, all of which to overwrite with new_data (whose per-agent values are expected to be of size len(at_indices)). A slice object is interpreted as a range of indices to be overwritten with new_data (whose per-agent values are expected to be of the same size as the provided slice). Negative indices are by default interpreted as "before the end", unless the neg_index_as_lookback=True option is used, in which case negative indices are interpreted as "before ts=0", meaning going back into the lookback buffer.
  - neg_index_as_lookback – If True, negative values in at_indices are interpreted as "before ts=0", meaning going back into the lookback buffer. For example, an agent's episode with rewards = [4, 5, 6, 7, 8, 9], where [4, 5, 6] is the lookback buffer range (the ts=0 item is 7), will handle a call to set_rewards(new_data=individual_reward, at_indices=-1, neg_index_as_lookback=True) by overwriting the value 6 in the rewards buffer with the provided individual_reward.
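Below is a minimal usage sketch combining the three parameters. The agent IDs ("agent_0", "agent_1"), the reward values, and the pre-collected `episode` object are illustrative assumptions for this sketch; only the keyword arguments follow the signature above.

```python
from ray.rllib.env.multi_agent_episode import MultiAgentEpisode


def patch_rewards(episode: MultiAgentEpisode) -> None:
    """Illustrative reward overwrites on an existing episode.

    Assumes (for this sketch only) that `episode` is not yet numpy'ized,
    contains agents "agent_0" and "agent_1", and holds at least five
    logged rewards per agent plus a non-empty lookback buffer.
    """
    # Single int index: one reward per agent.
    episode.set_rewards(
        new_data={"agent_0": 1.0, "agent_1": 0.5},
        at_indices=4,
    )
    # List of ints: each agent's value must have len(at_indices) entries.
    episode.set_rewards(
        new_data={"agent_0": [0.0, 0.0], "agent_1": [1.0, 1.0]},
        at_indices=[0, 1],
    )
    # Slice: the new data must match the slice's length. On an already
    # numpy'ized episode, pass np.ndarrays instead of lists.
    episode.set_rewards(
        new_data={"agent_0": [0.1, 0.2, 0.3]},
        at_indices=slice(0, 3),
    )
    # Negative index with neg_index_as_lookback=True: overwrite the reward
    # just before ts=0, i.e. the newest lookback-buffer entry.
    episode.set_rewards(
        new_data={"agent_0": -1.0},
        at_indices=-1,
        neg_index_as_lookback=True,
    )
```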
- Raises:
  - IndexError – If the provided at_indices do not match the size of new_data.
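Continuing the sketch above (same assumed `episode` object), a call whose per-agent data length disagrees with at_indices would trip this size check:

```python
# Two indices requested, but three rewards supplied for "agent_0":
# per the size check documented above, this raises IndexError.
try:
    episode.set_rewards(
        new_data={"agent_0": [1.0, 2.0, 3.0]},
        at_indices=[0, 1],
    )
except IndexError as err:
    print(f"Size mismatch rejected: {err}")
```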