ding.envs.common.common_function

sqrt_one_hot(v, max_val)

Overview

Take the square root of the input value v and transform it into one-hot.

Arguments:
- v (torch.Tensor): the value to be processed with sqrt and one-hot
- max_val (int): the estimated max value of v, used to calculate the one-hot bit number; v is clamped to (0, max_val)

Returns:
- ret (torch.Tensor): the value processed after sqrt and one-hot
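As a worked example, a minimal NumPy re-creation of the mapping (the real function operates on torch tensors and uses ding's one_hot; sqrt_one_hot_sketch is a hypothetical name for illustration):

```python
import math
import numpy as np

def sqrt_one_hot_sketch(v, max_val):
    # one bit per possible floor(sqrt) value: 0 .. floor(sqrt(max_val))
    num = int(math.sqrt(max_val)) + 1
    idx = np.floor(np.sqrt(np.clip(v, 0, max_val))).astype(int)
    out = np.zeros((len(v), num))
    out[np.arange(len(v)), idx] = 1.0
    return out

# max_val=25 -> 6 bits; 9 -> index 3, 30 is clamped to 25 -> index 5
print(sqrt_one_hot_sketch(np.array([9, 30]), 25))
```

The square root compresses large value ranges (e.g. resource counts) into a small, roughly log-scaled number of one-hot bits.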

div_one_hot(v, max_val, ratio)

Overview

Divide the input value v by ratio and transform it into one-hot.

Arguments:
- v (torch.Tensor): the value to be processed with divide and one-hot
- max_val (int): the estimated max value of v, used to calculate the one-hot bit number; v is clamped to (0, max_val)
- ratio (int): the divisor applied to v

Returns:
- ret (torch.Tensor): the value processed after divide and one-hot
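A NumPy sketch of the same bucketing (hypothetical helper name; the real function works on torch tensors):

```python
import numpy as np

def div_one_hot_sketch(v, max_val, ratio):
    # bucket values into max_val/ratio + 1 bins, then one-hot the bin index
    num = int(max_val / ratio) + 1
    idx = (np.clip(v, 0, max_val) // ratio).astype(int)
    out = np.zeros((len(v), num))
    out[np.arange(len(v)), idx] = 1.0
    return out

# max_val=100, ratio=10 -> 11 bits; 37 -> bin 3, 250 clamps to 100 -> bin 10
print(div_one_hot_sketch(np.array([37, 250]), 100, 10))
```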

div_func(inputs, other, unsqueeze_dim=1)

Overview

Divide inputs by other and unsqueeze if needed.

Arguments:
- inputs (torch.Tensor): the value to be unsqueezed and divided
- other (float): the divisor applied to inputs
- unsqueeze_dim (int): the dim on which to unsqueeze

Returns:
- ret (torch.Tensor): the value processed after unsqueeze and divide

clip_one_hot(v, num)

Overview

Clamp the input v in (0, num-1) and make one-hot mapping.

Arguments:
- v (torch.Tensor): the value to be processed with clamp and one-hot
- num (int): number of one-hot bits

Returns:
- ret (torch.Tensor): the value processed after clamp and one-hot
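A NumPy sketch of the clamp-then-one-hot behavior (hypothetical helper name; the real function works on torch tensors):

```python
import numpy as np

def clip_one_hot_sketch(v, num):
    # out-of-range values collapse onto the boundary bits 0 and num-1
    idx = np.clip(v, 0, num - 1)
    out = np.zeros((len(v), num))
    out[np.arange(len(v)), idx] = 1.0
    return out

# num=4: -2 clamps to 0, 2 stays, 9 clamps to 3
print(clip_one_hot_sketch(np.array([-2, 2, 9]), 4))
```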

reorder_one_hot(v, dictionary, num, transform=None)

Overview

Reorder each value in input v according to the reorder dict dictionary, then make a one-hot mapping.

Arguments:
- v (torch.LongTensor): the original value to be processed with reorder and one-hot
- dictionary (Dict[int, int]): a reorder lookup dict, mapping each original value to a new reordered index starting from 0
- num (int): number of one-hot bits
- transform (np.ndarray): an optional array that first transforms the original action to a general action

Returns:
- ret (torch.Tensor): one-hot data indicating the reordered index
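A pure-Python/NumPy sketch of the reorder-then-one-hot idea (hypothetical name; the real function works on torch tensors and returns a torch one-hot):

```python
import numpy as np

def reorder_one_hot_sketch(v, dictionary, num, transform=None):
    # optionally map raw values through `transform`, then through the reorder dict
    out = np.zeros((len(v), num))
    for i, raw in enumerate(v):
        val = raw if transform is None else transform[raw]
        out[i, dictionary[val]] = 1.0
    return out

# e.g. sparse raw action ids 101 and 305 compacted to indices 0 and 2
lookup = {101: 0, 204: 1, 305: 2}
print(reorder_one_hot_sketch([101, 305], lookup, 3))
```

This is how a sparse set of raw action ids can be packed into a dense one-hot layout for a network head.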

reorder_one_hot_array(v, array, num, transform=None)

Overview

Reorder each value in input v according to the reorder lookup array, then make a one-hot mapping. The only difference from reorder_one_hot is that the lookup data structure is an np.ndarray instead of a dict.

Arguments:
- v (torch.LongTensor): the value to be processed with reorder and one-hot
- array (np.ndarray): a reorder lookup array, mapping each original value to a new reordered index starting from 0
- num (int): number of one-hot bits
- transform (np.ndarray): an optional array that first transforms the original action to a general action

Returns:
- ret (torch.Tensor): one-hot data indicating the reordered index

reorder_boolean_vector(v, dictionary, num, transform=None)

Overview

Reorder each value in input v to a new index according to the reorder dict dictionary, then set the corresponding position in the returned tensor to 1.

Arguments:
- v (torch.LongTensor): the value to be processed with reorder
- dictionary (Dict[int, int]): a reorder lookup dict, mapping each original value to a new reordered index starting from 0
- num (int): total number of items, which should equal max index + 1
- transform (np.ndarray): an optional array that first transforms the original action to a general action

Returns:
- ret (torch.Tensor): boolean data containing only 0 and 1, indicating whether each original value exists in input v
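Unlike the one-hot variants, this produces a multi-hot membership mask. A pure-Python sketch (hypothetical name; the real function returns a torch tensor and wraps lookup failures in a KeyError):

```python
def reorder_boolean_vector_sketch(v, dictionary, num):
    # multi-hot mask: one slot per reordered index, set for every value in v
    ret = [0.0] * num
    for item in v:
        ret[dictionary[item]] = 1.0  # KeyError here means an unmapped raw value
    return ret

# values 101 and 305 mark reordered slots 0 and 2 -> availability-style mask
lookup = {101: 0, 204: 1, 305: 2}
print(reorder_boolean_vector_sketch([101, 305], lookup, 3))
```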

get_to_and(num_bits) (cached via lru_cache)

Overview

Get an np.ndarray with num_bits elements, each equal to 2^n (n decreasing from num_bits-1 to 0). Used by batch_binary_encode for bit-wise and.

Arguments:
- num_bits (int): length of the generated array

Returns:
- to_and (np.ndarray): an array with num_bits elements, each equal to 2^n (n decreasing from num_bits-1 to 0)

batch_binary_encode(x, bit_num)

Overview

Binary-encode x in big-endian order into a float tensor.

Arguments:
- x (torch.Tensor): the value to be binary encoded
- bit_num (int): number of bits, which should satisfy 2^bit_num > max(x)

Example:
>>> batch_binary_encode(torch.tensor([131, 71]), 10)
tensor([[0., 0., 1., 0., 0., 0., 0., 0., 1., 1.],
        [0., 0., 0., 1., 0., 0., 0., 1., 1., 1.]])

Returns:
- ret (torch.Tensor): the binary encoded tensor, containing only 0 and 1
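The encoding is a broadcast bit-wise AND against the powers of two from get_to_and. A NumPy sketch reproducing the example above (hypothetical name; the real function takes and returns torch tensors):

```python
import numpy as np

def batch_binary_encode_sketch(x, bit_num):
    # powers of two, most significant bit first (big endian)
    to_and = 2 ** np.arange(bit_num - 1, -1, -1)
    x = np.asarray(x).reshape(-1, 1)
    # broadcast AND against each power of two, then map nonzero bits -> 1.0
    return (x & to_and).astype(bool).astype(float)

# 131 = 128 + 2 + 1, 71 = 64 + 4 + 2 + 1
print(batch_binary_encode_sketch([131, 71], 10))
```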

compute_denominator(x)

Overview

Compute the denominator used in get_postion_vector. The result is inverted (1 divided by it) at the last step, so it can be used as a multiplier.

Arguments:
- x (torch.Tensor): input tensor, generated from torch.arange(0, d_model)

Returns:
- ret (torch.Tensor): denominator result tensor

get_postion_vector(x)

Overview

Get the position embedding used in Transformer. The even and odd alpha multipliers are stored in POSITION_ARRAY.

Arguments:
- x (list): original position indices, whose length should be 32

Returns:
- v (torch.Tensor): a 64-dim position embedding tensor
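compute_denominator and get_postion_vector together implement a standard sinusoidal position embedding. A NumPy sketch of the same computation (hypothetical name; d_model = 64 as in the source, so x holds 32 indices):

```python
import numpy as np

def position_vector_sketch(x, d_model=64):
    # denominator 1 / 10000^((i // 2 * 2) / d_model): even/odd dims pair up
    i = np.arange(d_model, dtype=float)
    denom = 1.0 / np.power(10000.0, (np.floor(i / 2) * 2) / d_model)
    x = np.asarray(x, dtype=float)  # 32 position indices
    v = np.zeros(d_model)
    v[0::2] = np.sin(x * denom[0::2])  # even dims
    v[1::2] = np.cos(x * denom[1::2])  # odd dims
    return v

# all-zero indices -> sin terms are 0, cos terms are 1
print(position_vector_sketch([0] * 32))
```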

affine_transform(data, action_clip=True, alpha=None, beta=None, min_val=None, max_val=None)

Overview

Apply an affine transform, alpha * data + beta, to data in range [-1, 1].

Arguments:
- data (Any): the input data
- action_clip (bool): whether to clip the input to [-1, 1] first
- alpha (float): affine transform weight
- beta (float): affine transform bias
- min_val (float): min value; if min_val and max_val are given, alpha and beta are derived so that input data is scaled to [min_val, max_val]
- max_val (float): max value

Returns:
- transformed_data (Any): the affine transformed data
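A sketch of the common min_val/max_val usage, e.g. mapping a tanh-squashed policy output onto an environment's action range (hypothetical helper name; the real function also accepts explicit alpha/beta):

```python
import numpy as np

def affine_transform_sketch(data, min_val, max_val):
    # clip a tanh-style action to [-1, 1], then rescale to [min_val, max_val]
    data = np.clip(data, -1, 1)
    alpha = (max_val - min_val) / 2
    beta = (max_val + min_val) / 2
    return data * alpha + beta

# [-1, 0, 1.5] -> clip -> [-1, 0, 1] -> scale to [0, 10]
print(affine_transform_sketch(np.array([-1.0, 0.0, 1.5]), 0.0, 10.0))
```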

save_frames_as_gif(frames, path)

Overview

Save frames as a GIF to the specified path.

Arguments:
- frames (List): list of frames
- path (str): the path to save the GIF to

Full Source Code

../ding/envs/common/common_function.py

import math
from functools import partial, lru_cache
from typing import Optional, Dict, Any

import numpy as np
import torch

from ding.compatibility import torch_ge_180
from ding.torch_utils import one_hot

num_first_one_hot = partial(one_hot, num_first=True)


def sqrt_one_hot(v: torch.Tensor, max_val: int) -> torch.Tensor:
    """
    Overview:
        Sqrt the input value ``v`` and transform it into one-hot.
    Arguments:
        - v (:obj:`torch.Tensor`): the value to be processed with `sqrt` and `one-hot`
        - max_val (:obj:`int`): the input ``v``'s estimated max value, used to calculate one-hot bit number. \
            ``v`` would be clamped by (0, max_val).
    Returns:
        - ret (:obj:`torch.Tensor`): the value processed after `sqrt` and `one-hot`
    """
    num = int(math.sqrt(max_val)) + 1
    v = v.float()
    v = torch.floor(torch.sqrt(torch.clamp(v, 0, max_val))).long()
    return one_hot(v, num)


def div_one_hot(v: torch.Tensor, max_val: int, ratio: int) -> torch.Tensor:
    """
    Overview:
        Divide the input value ``v`` by ``ratio`` and transform it into one-hot.
    Arguments:
        - v (:obj:`torch.Tensor`): the value to be processed with `divide` and `one-hot`
        - max_val (:obj:`int`): the input ``v``'s estimated max value, used to calculate one-hot bit number. \
            ``v`` would be clamped by (0, ``max_val``).
        - ratio (:obj:`int`): input ``v`` would be divided by ``ratio``
    Returns:
        - ret (:obj:`torch.Tensor`): the value processed after `divide` and `one-hot`
    """
    num = int(max_val / ratio) + 1
    v = v.float()
    v = torch.floor(torch.clamp(v, 0, max_val) / ratio).long()
    return one_hot(v, num)


def div_func(inputs: torch.Tensor, other: float, unsqueeze_dim: int = 1):
    """
    Overview:
        Divide ``inputs`` by ``other`` and unsqueeze if needed.
    Arguments:
        - inputs (:obj:`torch.Tensor`): the value to be unsqueezed and divided
        - other (:obj:`float`): input would be divided by ``other``
        - unsqueeze_dim (:obj:`int`): the dim to implement unsqueeze
    Returns:
        - ret (:obj:`torch.Tensor`): the value processed after `unsqueeze` and `divide`
    """
    inputs = inputs.float()
    if unsqueeze_dim is not None:
        inputs = inputs.unsqueeze(unsqueeze_dim)
    return torch.div(inputs, other)


def clip_one_hot(v: torch.Tensor, num: int) -> torch.Tensor:
    """
    Overview:
        Clamp the input ``v`` in (0, num-1) and make one-hot mapping.
    Arguments:
        - v (:obj:`torch.Tensor`): the value to be processed with `clamp` and `one-hot`
        - num (:obj:`int`): number of one-hot bits
    Returns:
        - ret (:obj:`torch.Tensor`): the value processed after `clamp` and `one-hot`
    """
    v = v.clamp(0, num - 1)
    return one_hot(v, num)


def reorder_one_hot(
        v: torch.LongTensor,
        dictionary: Dict[int, int],
        num: int,
        transform: Optional[np.ndarray] = None
) -> torch.Tensor:
    """
    Overview:
        Reorder each value in input ``v`` according to reorder dict ``dictionary``, then make one-hot mapping
    Arguments:
        - v (:obj:`torch.LongTensor`): the original value to be processed with `reorder` and `one-hot`
        - dictionary (:obj:`Dict[int, int]`): a reorder lookup dict, \
            map original value to new reordered index starting from 0
        - num (:obj:`int`): number of one-hot bits
        - transform (:obj:`int`): an array to firstly transform the original action to general action
    Returns:
        - ret (:obj:`torch.Tensor`): one-hot data indicating reordered index
    """
    assert (len(v.shape) == 1)
    assert (isinstance(v, torch.Tensor))
    new_v = torch.zeros_like(v)
    for idx in range(v.shape[0]):
        if transform is None:
            val = v[idx].item()
        else:
            val = transform[v[idx].item()]
        new_v[idx] = dictionary[val]
    return one_hot(new_v, num)


def reorder_one_hot_array(
        v: torch.LongTensor, array: np.ndarray, num: int, transform: Optional[np.ndarray] = None
) -> torch.Tensor:
    """
    Overview:
        Reorder each value in input ``v`` according to reorder dict ``dictionary``, then make one-hot mapping.
        The difference between this function and ``reorder_one_hot`` is
        whether the type of reorder lookup data structure is `np.ndarray` or `dict`.
    Arguments:
        - v (:obj:`torch.LongTensor`): the value to be processed with `reorder` and `one-hot`
        - array (:obj:`np.ndarray`): a reorder lookup array, map original value to new reordered index starting from 0
        - num (:obj:`int`): number of one-hot bits
        - transform (:obj:`np.ndarray`): an array to firstly transform the original action to general action
    Returns:
        - ret (:obj:`torch.Tensor`): one-hot data indicating reordered index
    """
    v = v.numpy()
    if transform is None:
        val = array[v]
    else:
        val = array[transform[v]]
    return one_hot(torch.LongTensor(val), num)


def reorder_boolean_vector(
        v: torch.LongTensor,
        dictionary: Dict[int, int],
        num: int,
        transform: Optional[np.ndarray] = None
) -> torch.Tensor:
    """
    Overview:
        Reorder each value in input ``v`` to new index according to reorder dict ``dictionary``,
        then set corresponding position in return tensor to 1.
    Arguments:
        - v (:obj:`torch.LongTensor`): the value to be processed with `reorder`
        - dictionary (:obj:`Dict[int, int]`): a reorder lookup dict, \
            map original value to new reordered index starting from 0
        - num (:obj:`int`): total number of items, should equals to max index + 1
        - transform (:obj:`np.ndarray`): an array to firstly transform the original action to general action
    Returns:
        - ret (:obj:`torch.Tensor`): boolean data containing only 0 and 1, \
            indicating whether corresponding original value exists in input ``v``
    """
    ret = torch.zeros(num)
    for item in v:
        try:
            if transform is None:
                val = item.item()
            else:
                val = transform[item.item()]
            idx = dictionary[val]
        except KeyError as e:
            # print(dictionary)
            raise KeyError('{}_{}_'.format(num, e))
        ret[idx] = 1
    return ret


@lru_cache(maxsize=32)
def get_to_and(num_bits: int) -> np.ndarray:
    """
    Overview:
        Get an np.ndarray with ``num_bits`` elements, each equals to :math:`2^n` (n decreases from num_bits-1 to 0).
        Used by ``batch_binary_encode`` to make bit-wise `and`.
    Arguments:
        - num_bits (:obj:`int`): length of the generating array
    Returns:
        - to_and (:obj:`np.ndarray`): an array with ``num_bits`` elements, \
            each equals to :math:`2^n` (n decreases from num_bits-1 to 0)
    """
    return 2 ** np.arange(num_bits - 1, -1, -1).reshape([1, num_bits])


def batch_binary_encode(x: torch.Tensor, bit_num: int) -> torch.Tensor:
    """
    Overview:
        Big endian binary encode ``x`` to float tensor
    Arguments:
        - x (:obj:`torch.Tensor`): the value to be unsqueezed and divided
        - bit_num (:obj:`int`): number of bits, should satisfy :math:`2^{bit num} > max(x)`
    Example:
        >>> batch_binary_encode(torch.tensor([131,71]), 10)
        tensor([[0., 0., 1., 0., 0., 0., 0., 0., 1., 1.],
                [0., 0., 0., 1., 0., 0., 0., 1., 1., 1.]])
    Returns:
        - ret (:obj:`torch.Tensor`): the binary encoded tensor, containing only `0` and `1`
    """
    x = x.numpy()
    xshape = list(x.shape)
    x = x.reshape([-1, 1])
    to_and = get_to_and(bit_num)
    return torch.FloatTensor((x & to_and).astype(bool).astype(float).reshape(xshape + [bit_num]))


def compute_denominator(x: torch.Tensor) -> torch.Tensor:
    """
    Overview:
        Compute the denominator used in ``get_postion_vector``. \
        Divide 1 at the last step, so you can use it as an multiplier.
    Arguments:
        - x (:obj:`torch.Tensor`): Input tensor, which is generated from torch.arange(0, d_model).
    Returns:
        - ret (:obj:`torch.Tensor`): Denominator result tensor.
    """
    if torch_ge_180():
        x = torch.div(x, 2, rounding_mode='trunc') * 2
    else:
        x = torch.div(x, 2) * 2
    x = torch.div(x, 64.)
    x = torch.pow(10000., x)
    x = torch.div(1., x)
    return x


def get_postion_vector(x: list) -> torch.Tensor:
    """
    Overview:
        Get position embedding used in `Transformer`, even and odd :math:`\alpha` are stored in ``POSITION_ARRAY``
    Arguments:
        - x (:obj:`list`): original position index, whose length should be 32
    Returns:
        - v (:obj:`torch.Tensor`): position embedding tensor in 64 dims
    """
    # TODO use lru_cache to optimize it
    POSITION_ARRAY = compute_denominator(torch.arange(0, 64, dtype=torch.float))  # d_model = 64
    v = torch.zeros(64, dtype=torch.float)
    x = torch.FloatTensor(x)
    v[0::2] = torch.sin(x * POSITION_ARRAY[0::2])  # even
    v[1::2] = torch.cos(x * POSITION_ARRAY[1::2])  # odd
    return v


def affine_transform(
        data: Any,
        action_clip: Optional[bool] = True,
        alpha: Optional[float] = None,
        beta: Optional[float] = None,
        min_val: Optional[float] = None,
        max_val: Optional[float] = None
) -> Any:
    """
    Overview:
        do affine transform for data in range [-1, 1], :math:`\alpha \times data + \beta`
    Arguments:
        - data (:obj:`Any`): the input data
        - action_clip (:obj:`bool`): whether to do action clip operation ([-1, 1])
        - alpha (:obj:`float`): affine transform weight
        - beta (:obj:`float`): affine transform bias
        - min_val (:obj:`float`): min value, if `min_val` and `max_val` are indicated, scale input data \
            to [min_val, max_val]
        - max_val (:obj:`float`): max value
    Returns:
        - transformed_data (:obj:`Any`): affine transformed data
    """
    if action_clip:
        data = np.clip(data, -1, 1)
    if min_val is not None:
        assert max_val is not None
        alpha = (max_val - min_val) / 2
        beta = (max_val + min_val) / 2
    assert alpha is not None
    beta = beta if beta is not None else 0.
    return data * alpha + beta


def save_frames_as_gif(frames: list, path: str) -> None:
    """
    Overview:
        save frames as gif to a specified path.
    Arguments:
        - frames (:obj:`List`): list of frames
        - path (:obj:`str`): the path to save gif
    """
    try:
        import imageio
    except ImportError:
        from ditk import logging
        import sys
        logging.warning("Please install imageio first.")
        sys.exit(1)
    imageio.mimsave(path, frames, fps=20)