Base Policy class (ray.rllib.policy.policy.Policy)¶
- class ray.rllib.policy.policy.Policy(observation_space: gym.spaces.Space, action_space: gym.spaces.Space, config: dict)[source]¶
Policy base class: Calculates actions, losses, and holds NN models.
Policy is the abstract superclass for all DL-framework specific sub-classes (e.g. TFPolicy or TorchPolicy). It exposes APIs to
Compute actions from observation (and possibly other) inputs.
Manage the Policy’s NN model(s), like exporting and loading their weights.
Postprocess a given trajectory from the environment or other input via the postprocess_trajectory method.
Compute losses from a train batch.
Perform updates from a train batch on the NN-models (this normally includes loss calculations) either a) in one monolithic step (train_on_batch) or b) via batch pre-loading, then n steps of actual loss computations and updates (load_batch_into_buffer + learn_on_loaded_batch).
Note: It is not recommended to sub-class Policy directly, but rather use one of the following two convenience methods: rllib.policy.policy_template::build_policy_class (PyTorch) or rllib.policy.tf_policy_template::build_tf_policy_class (TF).
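Example
A hedged sketch of the recommended PyTorch pattern; the exact keyword arguments of build_policy_class (assumed here to be name, framework, and loss_fn) may differ between RLlib versions:
>>> from ray.rllib.policy.policy_template import build_policy_class
>>> def my_loss(policy, model, dist_class, train_batch):
...     ...  # compute and return a loss tensor from train_batch
>>> MyTorchPolicy = build_policy_class(
...     name="MyTorchPolicy", framework="torch", loss_fn=my_loss)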
- __init__(observation_space: gym.spaces.Space, action_space: gym.spaces.Space, config: dict)[source]¶
Initializes a Policy instance.
- Parameters
observation_space – Observation space of the policy.
action_space – Action space of the policy.
config – A complete Trainer/Policy config dict. For the default config keys and values, see rllib/trainer/trainer.py.
- init_view_requirements()[source]¶
Initializes the maximal view requirements dict for learn_on_batch() and compute_actions() calls. Specific policies can override this method to provide a custom list of view requirements.
- compute_single_action(obs: Optional[Union[Any, dict, tuple]] = None, state: Optional[List[Any]] = None, *, prev_action: Optional[Union[Any, dict, tuple]] = None, prev_reward: Optional[Union[Any, dict, tuple]] = None, info: dict = None, input_dict: Optional[ray.rllib.policy.sample_batch.SampleBatch] = None, episode: Optional[Episode] = None, explore: Optional[bool] = None, timestep: Optional[int] = None, **kwargs) Tuple[Union[Any, dict, tuple], List[Any], Dict[str, Any]] [source]¶
Computes and returns a single (B=1) action value.
Takes an input dict (usually a SampleBatch) as its main data input. This allows the method to be used when a more complex input pattern (view requirements) is needed, for example when the Model requires the last n observations, the last m actions/rewards, or a combination of any of these. Alternatively, when no complex inputs are required, single obs values (and possibly single state values, prev-action/reward values, etc.) can be passed in directly.
- Parameters
obs – Single observation.
state – List of RNN state inputs, if any.
prev_action – Previous action value, if any.
prev_reward – Previous reward, if any.
info – Info object, if any.
input_dict – A SampleBatch or input dict containing the single (unbatched) Tensors to compute actions. If given, it’ll be used instead of obs, state, prev_action|reward, and info.
episode – This provides access to all of the internal episode state, which may be useful for model-based or multi-agent algorithms.
explore – Whether to pick an exploitation or exploration action (default: None -> use self.config[“explore”]).
timestep – The current (sampling) time step.
- Keyword Arguments
kwargs – Forward compatibility placeholder.
- Returns
Tuple consisting of the action, the list of RNN state outputs (if any), and a dictionary of extra features (if any).
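Example
A hedged sketch of a typical call; policy and obs are placeholders for an already built Policy and a single (unbatched) observation:
>>> policy, obs = ...
>>> action, state_out, extra_info = policy.compute_single_action(obs=obs, explore=False)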
- compute_actions_from_input_dict(input_dict: Union[ray.rllib.policy.sample_batch.SampleBatch, Dict[str, Union[Any, dict, tuple]]], explore: bool = None, timestep: Optional[int] = None, episodes: Optional[List[Episode]] = None, **kwargs) Tuple[Any, List[Any], Dict[str, Any]] [source]¶
Computes actions from collected samples (across multiple agents).
Takes an input dict (usually a SampleBatch) as its main data input. This allows the method to be used when a more complex input pattern (view requirements) is needed, for example when the Model requires the last n observations, the last m actions/rewards, or a combination of any of these.
- Parameters
input_dict – A SampleBatch or input dict containing the Tensors to compute actions. input_dict already abides by the Policy’s as well as the Model’s view requirements and can thus be passed to the Model as-is.
explore – Whether to pick an exploitation or exploration action (default: None -> use self.config[“explore”]).
timestep – The current (sampling) time step.
episodes – This provides access to all of the internal episodes’ state, which may be useful for model-based or multi-agent algorithms.
- Keyword Arguments
kwargs – Forward compatibility placeholder.
- Returns
  - actions: Batch of output actions, with shape like [BATCH_SIZE, ACTION_SHAPE].
  - state_outs: List of RNN state output batches, if any, each with shape [BATCH_SIZE, STATE_SIZE].
  - info: Dictionary of extra feature batches, if any, with shape like {“f1”: [BATCH_SIZE, …], “f2”: [BATCH_SIZE, …]}.
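Example
A hedged sketch; policy and obs_batch are placeholders, and it is assumed that a SampleBatch can be built directly from a dict of batched columns:
>>> from ray.rllib.policy.sample_batch import SampleBatch
>>> policy, obs_batch = ...
>>> input_dict = SampleBatch({SampleBatch.OBS: obs_batch})
>>> actions, state_outs, info = policy.compute_actions_from_input_dict(input_dict)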
- abstract compute_actions(obs_batch: Union[List[Union[Any, dict, tuple]], Any, dict, tuple], state_batches: Optional[List[Any]] = None, prev_action_batch: Union[List[Union[Any, dict, tuple]], Any, dict, tuple] = None, prev_reward_batch: Union[List[Union[Any, dict, tuple]], Any, dict, tuple] = None, info_batch: Optional[Dict[str, list]] = None, episodes: Optional[List[Episode]] = None, explore: Optional[bool] = None, timestep: Optional[int] = None, **kwargs) Tuple[Any, List[Any], Dict[str, Any]] [source]¶
Computes actions for the current policy.
- Parameters
obs_batch – Batch of observations.
state_batches – List of RNN state input batches, if any.
prev_action_batch – Batch of previous action values.
prev_reward_batch – Batch of previous rewards.
info_batch – Batch of info objects.
episodes – List of Episode objects, one for each obs in obs_batch. This provides access to all of the internal episode state, which may be useful for model-based or multi-agent algorithms.
explore – Whether to pick an exploitation or exploration action. Set to None (default) for using the value of self.config[“explore”].
timestep – The current (sampling) time step.
- Keyword Arguments
kwargs – Forward compatibility placeholder
- Returns
  - actions (TensorType): Batch of output actions, with shape like [BATCH_SIZE, ACTION_SHAPE].
  - state_outs (List[TensorType]): List of RNN state output batches, if any, each with shape [BATCH_SIZE, STATE_SIZE].
  - info (List[dict]): Dictionary of extra feature batches, if any, with shape like {“f1”: [BATCH_SIZE, …], “f2”: [BATCH_SIZE, …]}.
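Example
A hedged sketch; policy and obs_batch are placeholders for a framework-specific Policy subclass instance and a batch of observations:
>>> policy, obs_batch = ...
>>> actions, state_outs, info = policy.compute_actions(obs_batch, explore=True)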
- compute_log_likelihoods(actions: Union[List[Any], Any], obs_batch: Union[List[Any], Any], state_batches: Optional[List[Any]] = None, prev_action_batch: Optional[Union[List[Any], Any]] = None, prev_reward_batch: Optional[Union[List[Any], Any]] = None, actions_normalized: bool = True) Any [source]¶
Computes the log-prob/likelihood for a given action and observation.
The log-likelihood is calculated using this Policy’s action distribution class (self.dist_class).
- Parameters
actions – Batch of actions, for which to retrieve the log-probs/likelihoods (given all other inputs: obs, states, ..).
obs_batch – Batch of observations.
state_batches – List of RNN state input batches, if any.
prev_action_batch – Batch of previous action values.
prev_reward_batch – Batch of previous rewards.
actions_normalized – Whether the given actions are already normalized (between -1.0 and 1.0). If not and normalize_actions=True, the actions need to be normalized first, before the log likelihoods are calculated.
- Returns
  Batch of log probs/likelihoods, with shape [BATCH_SIZE].
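Example
A hedged sketch; the placeholders stand for a built Policy, a batch of actions, and the matching batch of observations:
>>> policy, actions, obs_batch = ...
>>> logps = policy.compute_log_likelihoods(actions=actions, obs_batch=obs_batch)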
- postprocess_trajectory(sample_batch: ray.rllib.policy.sample_batch.SampleBatch, other_agent_batches: Optional[Dict[Any, Tuple[Policy, ray.rllib.policy.sample_batch.SampleBatch]]] = None, episode: Optional[Episode] = None) ray.rllib.policy.sample_batch.SampleBatch [source]¶
Implements algorithm-specific trajectory postprocessing.
This will be called on each trajectory fragment computed during policy evaluation. Each fragment is guaranteed to be only from one episode. The given fragment may or may not contain the end of this episode, depending on the batch_mode setting (truncate_episodes or complete_episodes), rollout_fragment_length, and other settings.
- Parameters
sample_batch – Batch of experiences for the policy, which will contain at most one episode trajectory.
other_agent_batches – In a multi-agent env, this contains a mapping of agent ids to (policy, agent_batch) tuples containing the policy and experiences of the other agents.
episode – An optional multi-agent episode object to provide access to all of the internal episode state, which may be useful for model-based or multi-agent algorithms.
- Returns
The postprocessed sample batch.
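Example
A hedged sketch of an override that keeps the default behavior and only marks where algorithm-specific columns would be added:
>>> from ray.rllib.policy.policy import Policy
>>> class MyPolicy(Policy):  # other abstract methods omitted
...     def postprocess_trajectory(
...             self, sample_batch, other_agent_batches=None, episode=None):
...         # e.g. add algorithm-specific columns (advantages, targets, ...) here
...         return super().postprocess_trajectory(
...             sample_batch, other_agent_batches, episode)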
- loss(model: ray.rllib.models.modelv2.ModelV2, dist_class: ray.rllib.models.action_dist.ActionDistribution, train_batch: ray.rllib.policy.sample_batch.SampleBatch) Union[Any, List[Any]] [source]¶
Loss function for this Policy.
Override this method in order to implement custom loss computations.
- Parameters
model – The model to calculate the loss(es).
dist_class – The action distribution class to sample actions from the model’s outputs.
train_batch – The input batch on which to calculate the loss.
- Returns
Either a single loss tensor or a list of loss tensors.
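Example
A hedged, Torch-flavored sketch of a simple negative-log-likelihood style loss; it assumes a Torch ModelV2 that can be called on the train batch and a Torch action distribution class:
>>> from ray.rllib.policy.policy import Policy
>>> from ray.rllib.policy.sample_batch import SampleBatch
>>> class MyPolicy(Policy):  # other abstract methods omitted
...     def loss(self, model, dist_class, train_batch):
...         # Forward pass, build the action distribution, and return a loss tensor.
...         logits, _ = model(train_batch)
...         action_dist = dist_class(logits, model)
...         return -action_dist.logp(train_batch[SampleBatch.ACTIONS]).mean()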
- learn_on_batch(samples: ray.rllib.policy.sample_batch.SampleBatch) Dict[str, Any] [source]¶
Performs one learning update, given samples.
Either this method or the combination of compute_gradients and apply_gradients must be implemented by subclasses.
- Parameters
samples – The SampleBatch object to learn from.
- Returns
Dictionary of extra metadata from compute_gradients().
Examples
>>> policy, sample_batch = ...
>>> policy.learn_on_batch(sample_batch)
- learn_on_batch_from_replay_buffer(replay_actor: ray.actor.ActorHandle, policy_id: str) Dict[str, Any] [source]¶
Samples a batch from the given replay actor and performs an update.
- Parameters
replay_actor – The replay buffer actor to sample from.
policy_id – The ID of this policy.
- Returns
Dictionary of extra metadata from compute_gradients().
- load_batch_into_buffer(batch: ray.rllib.policy.sample_batch.SampleBatch, buffer_index: int = 0) int [source]¶
Bulk-loads the given SampleBatch into the devices’ memories.
The data is split equally across all the Policy’s devices. If the data is not evenly divisible by the batch size, excess data should be discarded.
- Parameters
batch – The SampleBatch to load.
buffer_index – The index of the buffer (a MultiGPUTowerStack) to use on the devices. The number of buffers on each device depends on the value of the num_multi_gpu_tower_stacks config key.
- Returns
The number of tuples loaded per device.
- get_num_samples_loaded_into_buffer(buffer_index: int = 0) int [source]¶
Returns the number of currently loaded samples in the given buffer.
- Parameters
buffer_index – The index of the buffer (a MultiGPUTowerStack) to use on the devices. The number of buffers on each device depends on the value of the num_multi_gpu_tower_stacks config key.
- Returns
The number of tuples loaded per device.
- learn_on_loaded_batch(offset: int = 0, buffer_index: int = 0)[source]¶
Runs a single step of SGD on already loaded data in a buffer.
Runs an SGD step over a slice of the pre-loaded batch, offset by the offset argument (useful for performing n minibatch SGD updates repeatedly on the same, already pre-loaded data).
Updates the model weights based on the averaged per-device gradients.
- Parameters
offset – Offset into the preloaded data. Used for pre-loading a train-batch once to a device, then iterating over (subsampling through) this batch n times doing minibatch SGD.
buffer_index – The index of the buffer (a MultiGPUTowerStack) to take the already pre-loaded data from. The number of buffers on each device depends on the value of the num_multi_gpu_tower_stacks config key.
- Returns
The outputs of extra_ops evaluated over the batch.
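Example
A hedged sketch of the pre-loading flow described above (one bulk load, then several minibatch SGD steps); policy, train_batch, and the minibatch size are placeholders:
>>> policy, train_batch = ...
>>> num_loaded = policy.load_batch_into_buffer(train_batch, buffer_index=0)
>>> minibatch_size = 128  # illustrative value
>>> for offset in range(0, num_loaded, minibatch_size):
...     results = policy.learn_on_loaded_batch(offset=offset, buffer_index=0)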
- compute_gradients(postprocessed_batch: ray.rllib.policy.sample_batch.SampleBatch) Tuple[Union[List[Tuple[Any, Any]], List[Any]], Dict[str, Any]] [source]¶
Computes gradients given a batch of experiences.
Either this method (in combination with apply_gradients()) or learn_on_batch() must be implemented by subclasses.
- Parameters
postprocessed_batch – The SampleBatch object to use for calculating gradients.
- Returns
  - grads: List of gradient output values.
  - grad_info: Extra policy-specific info values.
- apply_gradients(gradients: Union[List[Tuple[Any, Any]], List[Any]]) None [source]¶
Applies the (previously) computed gradients.
Either this method (in combination with compute_gradients()) or learn_on_batch() must be implemented by subclasses.
- Parameters
gradients – The already calculated gradients to apply to this Policy.
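Example
A hedged sketch of the compute_gradients()/apply_gradients() combination; policy and postprocessed_batch are placeholders:
>>> policy, postprocessed_batch = ...
>>> grads, grad_info = policy.compute_gradients(postprocessed_batch)
>>> policy.apply_gradients(grads)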
- get_weights() dict [source]¶
Returns model weights.
Note: The return value of this method will reside under the “weights” key in the return value of Policy.get_state(). Model weights are only one part of a Policy’s state. Other state information includes: optimizer variables, exploration state, and global state vars such as the sampling timestep.
- Returns
Serializable copy or view of model weights.
- set_weights(weights: dict) None [source]¶
Sets this Policy’s model’s weights.
Note: Model weights are only one part of a Policy’s state. Other state information includes: optimizer variables, exploration state, and global state vars such as the sampling timestep.
- Parameters
weights – Serializable copy or view of model weights.
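Example
A hedged sketch of syncing weights between two policies that share the same model architecture:
>>> local_policy, trained_policy = ...
>>> local_policy.set_weights(trained_policy.get_weights())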
- get_exploration_state() Dict[str, Any] [source]¶
Returns the state of this Policy’s exploration component.
- Returns
Serializable information on the self.exploration object.
- is_recurrent() bool [source]¶
Whether this Policy holds a recurrent Model.
- Returns
True if this Policy has an RNN-based Model.
- num_state_tensors() int [source]¶
The number of internal states needed by the RNN-Model of the Policy.
- Returns
The number of RNN internal states kept by this Policy’s Model.
- Return type
int
- get_initial_state() List[Any] [source]¶
Returns initial RNN state for the current policy.
- Returns
Initial RNN state for the current policy.
- Return type
List[TensorType]
- get_state() Dict[str, Union[Any, dict, tuple]] [source]¶
Returns the entire current state of this Policy.
Note: Not to be confused with an RNN model’s internal state. State includes the Model(s)’ weights, optimizer weights, the exploration component’s state, as well as global variables, such as sampling timesteps.
- Returns
Serialized local state.
- set_state(state: Dict[str, Union[Any, dict, tuple]]) None [source]¶
Restores the entire current state of this Policy from state.
- Parameters
state – The new state to set this policy to. Can be obtained by calling self.get_state().
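Example
A hedged sketch of restoring one policy from another’s full state:
>>> old_policy, new_policy = ...
>>> state = old_policy.get_state()
>>> new_policy.set_state(state)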
- apply(func: Callable[[ray.rllib.policy.policy.Policy, Optional[Any], Optional[Any]], ray.rllib.utils.typing.T], *args, **kwargs) ray.rllib.utils.typing.T [source]¶
Calls the given function with this Policy instance.
Useful when the Policy has been converted into an ActorHandle and the user needs to execute some functionality (e.g. add a property) on the underlying policy object.
- Parameters
func – The function to call, with this Policy as first argument, followed by args, and kwargs.
args – Optional additional args to pass to the function call.
kwargs – Optional additional kwargs to pass to the function call.
- Returns
The return value of the function call.
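Example
A hedged sketch; the lambda is an arbitrary illustrative function that receives the Policy instance as its first argument:
>>> policy = ...
>>> num_weight_entries = policy.apply(lambda p: len(p.get_weights()))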
- on_global_var_update(global_vars: Dict[str, Any]) None [source]¶
Called on an update to global vars.
- Parameters
global_vars – Global variables by str key, broadcast from the driver.
- export_checkpoint(export_dir: str) None [source]¶
Exports Policy checkpoint to local directory.
- Parameters
export_dir – Local writable directory.
- export_model(export_dir: str, onnx: Optional[int] = None) None [source]¶
Exports the Policy’s Model to local directory for serving.
Note: The file format will depend on the deep learning framework used. See the child classes of Policy and their export_model implementations for more details.
- Parameters
export_dir – Local writable directory.
onnx – If given, will export model in ONNX format. The value of this parameter sets the ONNX OpSet version to use.
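Example
A hedged sketch; the export directory and the ONNX OpSet version are illustrative values:
>>> policy = ...  # a TFPolicy or TorchPolicy instance
>>> policy.export_model("/tmp/my_policy_model", onnx=11)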
- import_model_from_h5(import_file: str) None [source]¶
Imports Policy from local file.
- Parameters
import_file (str) – Local readable file.
- get_session() Optional[tf.compat.v1.Session] [source]¶
Returns tf.Session object to use for computing actions or None.
Note: This method only applies to TFPolicy sub-classes. All other sub-classes should expect a None to be returned from this method.
- Returns
  The tf Session to use for computing actions and losses with this policy, or None.