MultiAgentEnv API
rllib.env.multi_agent_env.MultiAgentEnv
- ray.rllib.env.multi_agent_env.MultiAgentEnv
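MultiAgentEnv is the base class for environments that host multiple, independently acting agents. As a rough, hedged illustration of the interface (exact signatures vary across RLlib and gym versions), the sketch below shows a hypothetical two-agent subclass in which reset() returns a per-agent observation dict and step() consumes and produces dicts keyed by agent ID, with the special "__all__" done key signalling episode termination. TwoAgentMatchEnv and its internals are illustrative, not part of RLlib.

# A minimal sketch (assumptions, not RLlib source): a custom two-agent
# environment subclassing MultiAgentEnv. Exact signatures vary across
# RLlib/gym versions; this follows the classic (pre-gym-0.26) contract.
import gym
from ray.rllib.env.multi_agent_env import MultiAgentEnv


class TwoAgentMatchEnv(MultiAgentEnv):
    """Hypothetical env: both agents pick 0 or 1 and are rewarded
    when their picks match. The episode ends after one step."""

    def __init__(self, config=None):
        super().__init__()
        self._ids = [0, 1]
        self.observation_space = gym.spaces.Discrete(2)
        self.action_space = gym.spaces.Discrete(2)

    def reset(self):
        # One observation per agent ID.
        return {agent_id: 0 for agent_id in self._ids}

    def step(self, action_dict):
        # action_dict maps agent ID -> action for each acting agent.
        match = int(action_dict[0] == action_dict[1])
        obs = {agent_id: match for agent_id in self._ids}
        rewards = {agent_id: float(match) for agent_id in self._ids}
        dones = {agent_id: True for agent_id in self._ids}
        dones["__all__"] = True  # signals that the whole episode is done
        infos = {agent_id: {} for agent_id in self._ids}
        return obs, rewards, dones, infos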
Convert gym.Env into MultiAgentEnv
- ray.rllib.env.multi_agent_env.make_multi_agent(env_name_or_creator: Union[str, Callable[[EnvContext], Optional[Any]]]) -> Type[MultiAgentEnv]
Convenience wrapper for converting any single-agent env into a MultiAgentEnv.
Allows you to convert a simple (single-agent) gym.Env class into a MultiAgentEnv class. This function simply stacks n instances of the given gym.Env class into one unified MultiAgentEnv class and returns this class, thus pretending the agents act together in the same environment, whereas - under the hood - they live separately from each other in n parallel single-agent envs. Agent IDs in the resulting MultiAgentEnv are int numbers starting from 0 (first agent).
- Parameters
env_name_or_creator – String specifier or env_maker function taking an EnvContext object as only arg and returning a gym.Env.
- Returns
New MultiAgentEnv class to be used as env. The constructor takes a config dict with a num_agents key (default=1). The rest of the config dict will be passed on to the underlying single-agent env's constructor.
Examples
>>> from ray.rllib.env.multi_agent_env import make_multi_agent
>>> # By gym string:
>>> ma_cartpole_cls = make_multi_agent("CartPole-v1")
>>> # Create a 2 agent multi-agent cartpole.
>>> ma_cartpole = ma_cartpole_cls({"num_agents": 2})
>>> obs = ma_cartpole.reset()
>>> print(obs)
{0: [...], 1: [...]}
>>> # By env-maker callable:
>>> from ray.rllib.examples.env.stateless_cartpole import StatelessCartPole
>>> ma_stateless_cartpole_cls = make_multi_agent(
...     lambda config: StatelessCartPole(config))
>>> # Create a 3 agent multi-agent stateless cartpole.
>>> ma_stateless_cartpole = ma_stateless_cartpole_cls(
...     {"num_agents": 3})
>>> obs = ma_stateless_cartpole.reset()
>>> print(obs)
{0: [...], 1: [...], 2: [...]}
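The examples above only call reset(). The following hedged sketch (not taken from the original docstring) shows one way to step the converted env with a per-agent action dict, assuming the older gym-style return signature of (obs, rewards, dones, infos) and the special "__all__" done key; exact return shapes may differ in newer RLlib/gym versions.

>>> # Sketch: stepping the converted 2-agent CartPole env.
>>> ma_cartpole_cls = make_multi_agent("CartPole-v1")
>>> ma_cartpole = ma_cartpole_cls({"num_agents": 2})
>>> obs = ma_cartpole.reset()
>>> # One action per agent ID (0 = push cart to the left in CartPole).
>>> actions = {agent_id: 0 for agent_id in obs}
>>> obs, rewards, dones, infos = ma_cartpole.step(actions)
>>> print(rewards)
{0: 1.0, 1: 1.0}
>>> # "__all__" becomes True once all underlying single-agent envs are done.
>>> print(dones["__all__"])
False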