Numpy Utility Functions#

ray.rllib.utils.numpy.aligned_array(size: int, dtype, align: int = 64) → numpy.ndarray[source]#

Returns an array of a given size that is aligned to the given byte boundary (64 bytes by default).

The returned array can be efficiently copied into GPU memory by TensorFlow.

Parameters
  • size – The size (total number of items) of the array. For example, array([[0.0, 1.0], [2.0, 3.0]]) would have size=4.

  • dtype – The numpy dtype of the array.

  • align – The alignment to use.

Returns

A np.ndarray with the given specifications.
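
Examples

A minimal usage sketch (illustrative): the returned array's base address is a multiple of the requested alignment.

>>> import numpy as np
>>> from ray.rllib.utils.numpy import aligned_array
>>> arr = aligned_array(4, np.float32)
>>> # Check the default 64-byte alignment via the array's base address.
>>> arr.ctypes.data % 64
0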

ray.rllib.utils.numpy.concat_aligned(items: List[numpy.ndarray], time_major: Optional[bool] = None) → numpy.ndarray[source]#

Concatenate arrays, ensuring the output is 64-byte aligned.

We only align float arrays; other arrays are concatenated as normal.

This should be used instead of np.concatenate() to improve performance when the output array is likely to be fed into TensorFlow.

Parameters
  • items – The list of items to concatenate and align.

  • time_major – Whether the data in items is time-major, in which case concatenation happens along axis=1 instead of axis=0.

Returns

The concatenated and aligned array.
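
Examples

A quick sketch of the default (non-time-major) case, which concatenates along axis=0; the float32 inputs are copied into an aligned output buffer.

>>> import numpy as np
>>> from ray.rllib.utils.numpy import concat_aligned
>>> a = np.array([0.0, 1.0], dtype=np.float32)
>>> b = np.array([2.0], dtype=np.float32)
>>> out = concat_aligned([a, b])
>>> # out contains [0.0, 1.0, 2.0] in a 64-byte-aligned buffer.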

ray.rllib.utils.numpy.convert_to_numpy(x: Union[numpy.array, tensorflow.python.framework.ops.Tensor, torch.Tensor, dict, tuple], reduce_type: bool = True, reduce_floats=-1)[source]#

Converts values in the input struct to non-Tensor numpy or Python types.

Parameters
  • x – Any (possibly nested) struct whose values will be converted and returned as a new struct, with all torch/tf tensors mapped to numpy types.

  • reduce_type – Whether to automatically reduce all float64 and int64 data into float32 and int32 data, respectively.

Returns

A new struct with the same structure as x, but with all values converted to numpy arrays (on CPU).
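
Examples

A short sketch, assuming torch is installed; the nested struct keeps its shape while its tensor leaves become numpy arrays.

>>> import torch
>>> from ray.rllib.utils.numpy import convert_to_numpy
>>> out = convert_to_numpy({"a": torch.tensor([1.0, 2.0]), "b": (torch.tensor(3),)})
>>> type(out["a"]).__name__
'ndarray'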

ray.rllib.utils.numpy.fc(x: numpy.ndarray, weights: numpy.ndarray, biases: Optional[numpy.ndarray] = None, framework: Optional[str] = None) → numpy.ndarray[source]#

Calculates FC (dense) layer outputs given weights/biases and input.

Parameters
  • x – The input to the dense layer.

  • weights – The weights matrix.

  • biases – The biases vector. All 0s if None.

  • framework – An optional framework hint (e.g. to figure out whether to transpose torch weight matrices).

Returns

The dense layer’s output.
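
Examples

A minimal sketch, assuming the framework-less path computes a plain x @ weights + biases.

>>> import numpy as np
>>> from ray.rllib.utils.numpy import fc
>>> x = np.array([[1.0, 2.0]])               # one sample, two input features
>>> w = np.array([[1.0, 0.0], [0.0, 1.0]])   # identity weight matrix (2 in, 2 out)
>>> b = np.array([0.5, -0.5])
>>> out = fc(x, w, b)
>>> # out == [[1.5, 1.5]] (i.e. x @ w + b)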

ray.rllib.utils.numpy.flatten_inputs_to_1d_tensor(inputs: Union[numpy.array, tensorflow.python.framework.ops.Tensor, torch.Tensor, dict, tuple], spaces_struct: Optional[Union[gymnasium.spaces.Space, dict, tuple]] = None, time_axis: bool = False) → Union[numpy.array, tensorflow.python.framework.ops.Tensor, torch.Tensor][source]#

Flattens arbitrary input structs according to the given spaces struct.

Returns a single 1D tensor resulting from the different input components’ values.

Thereby:

  • Boxes (any shape) get flattened to (B, [T]?, -1). Note that image boxes are not treated differently from other types of Boxes and get flattened as well.

  • Discrete (int) values are one-hot’d, e.g. a batch of [1, 0, 3] (B=3 with Discrete(4) space) results in [[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1]].

  • MultiDiscrete values are multi-one-hot’d, e.g. a batch of [[0, 2], [1, 4]] (B=2 with MultiDiscrete([2, 5]) space) results in [[1, 0, 0, 0, 1, 0, 0], [0, 1, 0, 0, 0, 0, 1]].

Parameters
  • inputs – The inputs to be flattened.

  • spaces_struct – The structure of the spaces behind the inputs.

  • time_axis – Whether all inputs have a time axis (after the batch axis). If True, keeps the batch axis (0th) and the time axis (1st) as-is and flattens everything from the 2nd axis up.

Returns

A single 1D tensor resulting from concatenating all flattened/one-hot’d input components. Depending on the time_axis flag, the shape is (B, n) or (B, T, n).

Examples

>>> # B=2
>>> from ray.rllib.utils.numpy import flatten_inputs_to_1d_tensor
>>> from gymnasium.spaces import Discrete, Box
>>> out = flatten_inputs_to_1d_tensor( 
...     {"a": [1, 0], "b": [[[0.0], [0.1]], [1.0], [1.1]]},
...     spaces_struct=dict(a=Discrete(2), b=Box(shape=(2, 1)))
... ) 
>>> print(out) 
[[0.0, 1.0,  0.0, 0.1], [1.0, 0.0,  1.0, 1.1]]  # B=2 n=4
>>> # B=2; T=2
>>> out = flatten_inputs_to_1d_tensor( 
...     ([[1, 0], [0, 1]],
...      [[[0.0, 0.1], [1.0, 1.1]], [[2.0, 2.1], [3.0, 3.1]]]),
...     spaces_struct=tuple([Discrete(2), Box(shape=(2, ))]),
...     time_axis=True
... ) 
>>> print(out) 
[[[0.0, 1.0, 0.0, 0.1], [1.0, 0.0, 1.0, 1.1]],
 [[1.0, 0.0, 2.0, 2.1], [0.0, 1.0, 3.0, 3.1]]]  # B=2 T=2 n=4

ray.rllib.utils.numpy.make_action_immutable(obj)[source]#

Flags an action as immutable to notify users when they try to change it.

Can also be used with any tree-like structure containing dictionaries, numpy arrays, or objects that are already immutable. Note, however, that tree.map_structure() does not in general visit the shallow container holding all the others, so immutability will only hold for the objects contained within it. Use tree.traverse(fun, action, top_down=False) to also include the containing object.

Parameters

obj – The object to be made immutable.

Returns

The immutable object.

Examples

>>> import tree
>>> import numpy as np
>>> from ray.rllib.utils.numpy import make_action_immutable
>>> arr = np.arange(1,10)
>>> d = dict(a = 1, b = (arr, arr))
>>> tree.traverse(make_action_immutable, d, top_down=False) 
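
After the traversal, the numpy arrays inside d have been flagged read-only in place, so subsequent writes raise an error (illustrative):

>>> d["b"][0][0] = 42
Traceback (most recent call last):
    ...
ValueError: assignment destination is read-only
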
ray.rllib.utils.numpy.huber_loss(x: numpy.ndarray, delta: float = 1.0) → numpy.ndarray[source]#

Computes the Huber loss: 0.5 * x**2 where abs(x) < delta, and delta * (abs(x) - 0.5 * delta) otherwise. Reference: https://en.wikipedia.org/wiki/Huber_loss.
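
Examples

A quick sketch of the piecewise behavior, with values computed from the formula above.

>>> import numpy as np
>>> from ray.rllib.utils.numpy import huber_loss
>>> out = huber_loss(np.array([0.5, 2.0]))
>>> # out == [0.125, 1.5]: quadratic branch for |0.5| < delta=1.0, linear branch for |2.0|.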

ray.rllib.utils.numpy.l2_loss(x: numpy.ndarray) → numpy.ndarray[source]#

Computes half the squared L2 norm of a tensor (i.e. no sqrt is applied): sum(x**2) / 2.

Parameters

x – The input tensor.

Returns

The L2 loss of x according to the above formula.
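
Examples

A one-line check of the formula: (1**2 + 2**2) / 2 = 2.5.

>>> import numpy as np
>>> from ray.rllib.utils.numpy import l2_loss
>>> float(l2_loss(np.array([1.0, 2.0])))
2.5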

ray.rllib.utils.numpy.lstm(x, weights: numpy.ndarray, biases: Optional[numpy.ndarray] = None, initial_internal_states: Optional[numpy.ndarray] = None, time_major: bool = False, forget_bias: float = 1.0)[source]#

Calculates LSTM layer output given weights/biases, states, and input.

Parameters
  • x – The inputs to the LSTM layer including time-rank (0th if time-major, else 1st) and the batch-rank (1st if time-major, else 0th).

  • weights – The weights matrix.

  • biases – The biases vector. All 0s if None.

  • initial_internal_states – The initial internal states to pass into the layer. All 0s if None.

  • time_major – Whether to use time-major or not. Default: False.

  • forget_bias – Gets added to first sigmoid (forget gate) output. Default: 1.0.

Returns

A tuple consisting of 1) the LSTM layer’s output and 2) a tuple of the last internal states: (c-state, h-state).
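
Examples

A shape-only sketch. The weight packing here is an assumption (Keras-style: input rows stacked above recurrent rows, with the four gate blocks along the columns); check the source for the exact convention before relying on it.

>>> import numpy as np
>>> from ray.rllib.utils.numpy import lstm
>>> B, T, num_inputs, units = 1, 2, 3, 4
>>> # Assumed packing: (num_inputs + units) rows x (4 * units) columns.
>>> w = np.zeros((num_inputs + units, 4 * units), dtype=np.float32)
>>> x_in = np.zeros((B, T, num_inputs), dtype=np.float32)  # batch-major (time_major=False)
>>> out, (c, h) = lstm(x_in, w)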

ray.rllib.utils.numpy.one_hot(x: Union[numpy.array, tensorflow.python.framework.ops.Tensor, torch.Tensor, int], depth: int = 0, on_value: float = 1.0, off_value: float = 0.0) → numpy.ndarray[source]#

One-hot utility function for numpy.

Thanks to qianyizhang: https://gist.github.com/qianyizhang/07ee1c15cad08afb03f5de69349efc30.

Parameters
  • x – The input to be one-hot encoded.

  • depth – The size of the one-hot encoding (the number of classes; becomes the size of the last rank).

  • on_value – The value to use for on. Default: 1.0.

  • off_value – The value to use for off. Default: 0.0.

Returns

The one-hot encoded equivalent of the input array.
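
Examples

A small sketch mirroring the flatten_inputs_to_1d_tensor docs above: the batch [1, 0, 3] with depth=4 one-hot encodes along a new last rank.

>>> import numpy as np
>>> from ray.rllib.utils.numpy import one_hot
>>> out = one_hot(np.array([1, 0, 3]), depth=4)
>>> # Expected rows: [0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1] (as on/off floats).
>>> out.shape
(3, 4)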

ray.rllib.utils.numpy.relu(x: numpy.ndarray, alpha: float = 0.0) → numpy.ndarray[source]#

Implementation of the leaky ReLU function.

y = x * alpha if x < 0 else x

Parameters
  • x – The input values.

  • alpha – A scaling (“leak”) factor to use for negative x.

Returns

The leaky ReLU output for x.
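
Examples

A quick illustration of the formula above with a small leak factor.

>>> import numpy as np
>>> from ray.rllib.utils.numpy import relu
>>> out = relu(np.array([-2.0, 3.0]), alpha=0.1)
>>> # out == [-0.2, 3.0]: negative inputs are scaled by alpha, positive inputs pass through.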

ray.rllib.utils.numpy.sigmoid(x: numpy.ndarray, derivative: bool = False) → numpy.ndarray[source]#

Returns the sigmoid function applied to x. Alternatively, can return the derivative of the sigmoid function.

Parameters
  • x – The input to the sigmoid function.

  • derivative – Whether to return the derivative or not. Default: False.

Returns

The sigmoid function (or its derivative) applied to x.
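
Examples

A minimal check of the forward pass (sigmoid(0) is exactly 0.5).

>>> import numpy as np
>>> from ray.rllib.utils.numpy import sigmoid
>>> float(sigmoid(np.array(0.0)))
0.5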

ray.rllib.utils.numpy.softmax(x: Union[numpy.ndarray, list], axis: int = -1, epsilon: Optional[float] = None) → numpy.ndarray[source]#

Returns the softmax values for x.

The exact formula used is: S(x_i) = exp(x_i) / SUM_j(exp(x_j)), where j runs over all elements of x along the given axis.

Parameters
  • x – The input to the softmax function.

  • axis – The axis along which to softmax.

  • epsilon – Optional epsilon as a minimum value. If None, use SMALL_NUMBER.

Returns

The softmax over x.
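
Examples

A quick sketch: equal inputs produce a (near-)uniform distribution.

>>> import numpy as np
>>> from ray.rllib.utils.numpy import softmax
>>> out = softmax(np.array([1.0, 1.0, 1.0]))
>>> # out ~= [1/3, 1/3, 1/3]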