ray.rllib.env.multi_agent_episode.MultiAgentEpisode.get_observations
- MultiAgentEpisode.get_observations(indices: int | slice | List[int] | None = None, agent_ids: Collection[Any] | Any | None = None, *, env_steps: bool = True, neg_index_as_lookback: bool = False, fill: Any | None = None, one_hot_discrete: bool = False, return_list: bool = False) → Dict[Any, Any] | List[Dict[Any, Any]]
Returns agents’ observations or batched ranges thereof from this episode.
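The following is a minimal sketch of typical call patterns on an already populated episode (how the episode gets filled, e.g. by an EnvRunner, is outside the scope of this page; the agent ID "agent_0" is purely illustrative and not part of the API):

```python
from ray.rllib.env.multi_agent_episode import MultiAgentEpisode

def peek_observations(episode: MultiAgentEpisode):
    # All observations from ts=0 to the end, keyed by AgentID.
    all_obs = episode.get_observations()

    # Only the most recent env step: the returned dict contains exactly
    # those agents that received an observation at that step.
    last_obs = episode.get_observations(-1)

    # A batch of the last two observations of one specific agent
    # ("agent_0" is an illustrative AgentID).
    last_two = episode.get_observations([-2, -1], agent_ids=["agent_0"])

    return all_obs, last_obs, last_two
```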
- Parameters:
  - indices – A single int is interpreted as an index, from which to return the individual observation stored at that index. A list of ints is interpreted as a list of indices from which to gather individual observations in a batch of size len(indices). A slice object is interpreted as a range of observations to be returned. Negative indices are by default interpreted as "before the end" unless the neg_index_as_lookback=True option is used, in which case negative indices are interpreted as "before ts=0", meaning going back into the lookback buffer. If None, returns all observations (from ts=0 to the end).
  - agent_ids – An optional collection of AgentIDs or a single AgentID to get observations for. If None, returns observations for all agents in this episode.
  - env_steps – Whether indices should be interpreted as environment time steps (True) or as per-agent time steps (False).
  - neg_index_as_lookback – If True, negative values in indices are interpreted as "before ts=0", meaning going back into the lookback buffer. For example, an episode with agent A's observations [4, 5, 6, 7, 8, 9], where [4, 5, 6] is the lookback buffer range (the ts=0 item is 7), responds to get_observations(-1, agent_ids=[A], neg_index_as_lookback=True) with {A: 6} and to get_observations(slice(-2, 1), agent_ids=[A], neg_index_as_lookback=True) with {A: [5, 6, 7]}. See the call sketch after this list.
  - fill – An optional value to use for filling up the returned results at the boundaries. This filling only happens if the requested index range's start/stop boundaries exceed the episode's boundaries (including the lookback buffer on the left side). This comes in handy if users don't want to worry about reaching such boundaries and want to zero-pad. For example, an episode with agent A's observations [10, 11, 12, 13, 14] and a lookback buffer of size 2 (meaning observations 10 and 11 are part of the lookback buffer) responds to get_observations(slice(-7, -2), agent_ids=[A], fill=0.0) with {A: [0.0, 0.0, 10, 11, 12]}. See the call sketch after this list.
  - one_hot_discrete – If True, returns one-hot vectors (instead of int values) for those sub-components of a (possibly complex) observation space that are Discrete or MultiDiscrete. Note that if fill=0 and the requested indices are out of the range of our data, the returned one-hot vectors will actually be zero-hot (all slots zero).
  - return_list – Whether to return a list of multi-agent dicts (instead of a single multi-agent dict of lists/structs). False by default. This option can only be used when env_steps is True, because such a list can only be interpreted as one env step per list item (it would not work with agent steps).
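The neg_index_as_lookback and fill examples above translate into calls like the following sketch; it assumes two already populated episodes whose agent "A" holds exactly the observation sequences quoted in the respective parameter descriptions:

```python
from ray.rllib.env.multi_agent_episode import MultiAgentEpisode

def lookback_example(episode: MultiAgentEpisode):
    # Assumes agent "A" has observations [4, 5, 6, 7, 8, 9], where
    # [4, 5, 6] form the lookback buffer (the ts=0 item is 7).
    single = episode.get_observations(
        -1, agent_ids=["A"], neg_index_as_lookback=True
    )  # -> {"A": 6}
    ranged = episode.get_observations(
        slice(-2, 1), agent_ids=["A"], neg_index_as_lookback=True
    )  # -> {"A": [5, 6, 7]}
    return single, ranged

def fill_example(episode: MultiAgentEpisode):
    # Assumes agent "A" has observations [10, 11, 12, 13, 14] with a
    # lookback buffer of size 2 (observations 10 and 11). Requesting a
    # range that starts before the buffer is left-padded with `fill`.
    padded = episode.get_observations(
        slice(-7, -2), agent_ids=["A"], fill=0.0
    )  # -> {"A": [0.0, 0.0, 10, 11, 12]}
    return padded
```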
- Returns:
  A dictionary mapping agent IDs to observations (at the given indices). If env_steps is True, only agents that have stepped (were ready) at the given env step indices are returned (i.e. not all agent IDs are necessarily in the keys). If return_list is True, returns a list of MultiAgentDicts (mapping agent IDs to observations) instead.
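A short sketch of how return_list changes the shape of the result; the indices [-2, -1] are just an example range and env_steps stays at its default of True:

```python
from ray.rllib.env.multi_agent_episode import MultiAgentEpisode

def compare_return_formats(episode: MultiAgentEpisode):
    # Default: a single multi-agent dict; each present agent maps to its
    # batched observations over the requested env steps. Agents that did
    # not step at any of these indices are simply absent from the keys.
    batched = episode.get_observations([-2, -1])

    # return_list=True: one multi-agent dict per requested env step,
    # each holding only the agents that stepped at that step. Only valid
    # with env_steps=True (the default).
    per_step = episode.get_observations([-2, -1], return_list=True)

    return batched, per_step
```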