
ding.model.template.qac


ContinuousQAC

Bases: Module

Overview

The neural network and computation graph of algorithms related to Q-value Actor-Critic (QAC), such as DDPG/TD3/SAC. This model supports continuous and hybrid action spaces. The ContinuousQAC is composed of four parts: actor_encoder, critic_encoder, actor_head and critic_head. Encoders extract features from various observations; heads predict the corresponding Q-value or action logit. In high-dimensional observation spaces such as 2D images, a shared encoder is often used for both actor_encoder and critic_encoder; in low-dimensional observation spaces such as 1D vectors, separate encoders are more common.
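The four-part composition described above can be sketched for the 1D-vector case as follows. This is a minimal illustration with plain torch, not the DI-engine implementation; all sizes and module choices here are made-up assumptions:

```python
import torch
import torch.nn as nn

obs_shape, action_shape, hidden = 8, 2, 64

# For 1D vector observations the encoders are identity mappings: the heads
# already start with an nn.Linear, so no extra encoder layer is needed.
actor_encoder = nn.Identity()
critic_encoder = nn.Identity()

# Actor head: observation feature -> bounded continuous action (regression style).
actor_head = nn.Sequential(
    nn.Linear(obs_shape, hidden), nn.ReLU(),
    nn.Linear(hidden, action_shape), nn.Tanh(),
)
# Critic head: concatenated (observation, action) -> scalar Q-value.
critic_head = nn.Sequential(
    nn.Linear(obs_shape + action_shape, hidden), nn.ReLU(),
    nn.Linear(hidden, 1),
)

obs = torch.randn(4, obs_shape)
action = actor_head(actor_encoder(obs))                           # (4, 2)
q = critic_head(torch.cat([critic_encoder(obs), action], dim=1))  # (4, 1)
```

For image observations, both `actor_encoder` and `critic_encoder` would instead be convolutional networks, optionally the same module when the encoder is shared.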

Interfaces: __init__, forward, compute_actor, compute_critic

__init__(obs_shape, action_shape, action_space, twin_critic=False, actor_head_hidden_size=64, actor_head_layer_num=1, critic_head_hidden_size=64, critic_head_layer_num=1, activation=nn.ReLU(), norm_type=None, encoder_hidden_size_list=None, share_encoder=False)

Overview

Initialize the ContinuousQAC model according to the input arguments.

Arguments:

- obs_shape (:obj:Union[int, SequenceType]): Observation's shape, such as 128, (156, ).
- action_shape (:obj:Union[int, SequenceType, EasyDict]): Action's shape, such as 4, (3, ), EasyDict({'action_type_shape': 3, 'action_args_shape': 4}).
- action_space (:obj:str): The type of action space, one of [regression, reparameterization, hybrid]; regression is used for DDPG/TD3, reparameterization for SAC, and hybrid for PADDPG.
- twin_critic (:obj:bool): Whether to use twin critics, one of the tricks introduced in TD3.
- actor_head_hidden_size (:obj:Optional[int]): The hidden_size to pass to the actor head.
- actor_head_layer_num (:obj:int): The number of layers used in the actor network to compute the action.
- critic_head_hidden_size (:obj:Optional[int]): The hidden_size to pass to the critic head.
- critic_head_layer_num (:obj:int): The number of layers used in the critic network to compute the Q-value.
- activation (:obj:Optional[nn.Module]): The activation function to use in the MLP after each FC layer; if None, defaults to nn.ReLU().
- norm_type (:obj:Optional[str]): The type of normalization to apply after each network layer (FC, Conv); see ding.torch_utils.network for more details.
- encoder_hidden_size_list (:obj:SequenceType): Collection of hidden_size values to pass to the Encoder; the last element must match head_hidden_size. This argument is only used with image observations.
- share_encoder (:obj:Optional[bool]): Whether to share the encoder between actor and critic.
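The twin_critic option mentioned above enables the clipped double-Q trick from TD3: two independent critic heads score the same input, and the minimum of the two Q-values is used to reduce overestimation. A minimal sketch, not DI-engine's code, with made-up sizes:

```python
import torch
import torch.nn as nn

def make_head():
    # One small critic head: feature vector -> scalar Q-value.
    return nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))

# Two independent critic heads, as when twin_critic=True.
critic_head = nn.ModuleList([make_head() for _ in range(2)])

x = torch.randn(4, 10)                        # concat of obs and action features
q1, q2 = (m(x).squeeze(-1) for m in critic_head)
q_pessimistic = torch.min(q1, q2)             # clipped double-Q estimate
```

Each head sees the same input but has its own parameters; taking the elementwise minimum yields a pessimistic Q-estimate for the TD target.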

forward(inputs, mode)

Overview

QAC forward computation graph: takes an observation tensor as input and predicts a Q-value or action logit. Different mode values forward through different network modules, producing different outputs while saving computation.

Arguments:

- inputs (:obj:Union[torch.Tensor, Dict[str, torch.Tensor]]): The input data for the forward computation graph; for compute_actor it is the observation tensor, for compute_critic it is a dict containing the obs and action tensors.
- mode (:obj:str): The forward mode; all modes are defined at the beginning of this class.

Returns:

- output (:obj:Dict[str, torch.Tensor]): The output dict of the QAC forward computation graph, whose key-value pairs vary across forward modes.

Examples (Actor):

>>> # Regression mode
>>> model = ContinuousQAC(64, 6, 'regression')
>>> obs = torch.randn(4, 64)
>>> actor_outputs = model(obs, 'compute_actor')
>>> assert actor_outputs['action'].shape == torch.Size([4, 6])
>>> # Reparameterization mode
>>> model = ContinuousQAC(64, 6, 'reparameterization')
>>> obs = torch.randn(4, 64)
>>> actor_outputs = model(obs, 'compute_actor')
>>> assert actor_outputs['logit'][0].shape == torch.Size([4, 6])  # mu
>>> assert actor_outputs['logit'][1].shape == torch.Size([4, 6])  # sigma

Examples (Critic):

>>> inputs = {'obs': torch.randn(4, 8), 'action': torch.randn(4, 1)}
>>> model = ContinuousQAC(obs_shape=(8, ), action_shape=1, action_space='regression')
>>> assert model(inputs, mode='compute_critic')['q_value'].shape == (4, )  # q value
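The mode argument selects which sub-network runs, so only the needed computation is performed. The dispatch pattern can be sketched as below; class and method bodies here are illustrative placeholders, not the real model:

```python
class ModeDispatch:
    # The allowed forward modes, declared at the beginning of the class.
    mode = ['compute_actor', 'compute_critic']

    def forward(self, inputs, mode):
        # Validate the requested mode, then dispatch to the bound method
        # of the same name, so only that sub-graph is executed.
        assert mode in self.mode, "not support forward mode: {}/{}".format(mode, self.mode)
        return getattr(self, mode)(inputs)

    def compute_actor(self, obs):
        return {'action': obs}             # placeholder body

    def compute_critic(self, inputs):
        return {'q_value': inputs['obs']}  # placeholder body

m = ModeDispatch()
actor_out = m.forward(3, 'compute_actor')
critic_out = m.forward({'obs': 7}, 'compute_critic')
```

An unknown mode string fails fast at the assertion instead of silently running the wrong branch.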

compute_actor(obs)

Overview

QAC forward computation graph for the actor part: takes an observation tensor as input and predicts an action or action logit.

Arguments:

- obs (:obj:torch.Tensor): The input observation tensor data.

Returns:

- outputs (:obj:Dict[str, Union[torch.Tensor, Dict[str, torch.Tensor]]]): Actor output dict, varying with action_space: regression, reparameterization, hybrid.

ReturnsKeys (regression):

- action (:obj:torch.Tensor): Continuous action with the same size as action_shape, usually in DDPG/TD3.

ReturnsKeys (reparameterization):

- logit (:obj:Dict[str, torch.Tensor]): The predicted reparameterization action logit, usually in SAC. It is a list containing two tensors: mu and sigma. The former is the mean of the Gaussian distribution, the latter its standard deviation.

ReturnsKeys (hybrid):

- logit (:obj:torch.Tensor): The predicted discrete action type logit; it has the same dimension as action_type_shape, i.e., all possible discrete action types.
- action_args (:obj:torch.Tensor): Continuous action arguments with the same size as action_args_shape.

Shapes:

- obs (:obj:torch.Tensor): :math:(B, N0), where B is batch size and N0 corresponds to obs_shape.
- action (:obj:torch.Tensor): :math:(B, N1), where B is batch size and N1 corresponds to action_shape.
- logit.mu (:obj:torch.Tensor): :math:(B, N1), where B is batch size and N1 corresponds to action_shape.
- logit.sigma (:obj:torch.Tensor): :math:(B, N1), where B is batch size.
- logit (:obj:torch.Tensor): :math:(B, N2), where B is batch size and N2 corresponds to action_shape.action_type_shape.
- action_args (:obj:torch.Tensor): :math:(B, N3), where B is batch size and N3 corresponds to action_shape.action_args_shape.
Examples:

>>> # Regression mode
>>> model = ContinuousQAC(64, 6, 'regression')
>>> obs = torch.randn(4, 64)
>>> actor_outputs = model(obs, 'compute_actor')
>>> assert actor_outputs['action'].shape == torch.Size([4, 6])
>>> # Reparameterization mode
>>> model = ContinuousQAC(64, 6, 'reparameterization')
>>> obs = torch.randn(4, 64)
>>> actor_outputs = model(obs, 'compute_actor')
>>> assert actor_outputs['logit'][0].shape == torch.Size([4, 6])  # mu
>>> assert actor_outputs['logit'][1].shape == torch.Size([4, 6])  # sigma
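In reparameterization mode the (mu, sigma) pair is typically consumed by the policy (e.g. in SAC) by sampling with the reparameterization trick and squashing with tanh. A hedged sketch with made-up shapes, not DI-engine's policy code:

```python
import torch

# Pretend these came from compute_actor in reparameterization mode.
mu = torch.zeros(4, 6)
sigma = torch.ones(4, 6) * 0.5

dist = torch.distributions.Normal(mu, sigma)
x = dist.rsample()        # differentiable sample: mu + sigma * eps
action = torch.tanh(x)    # squash into (-1, 1)
# Tanh change-of-variables correction on the log-probability.
log_prob = dist.log_prob(x) - torch.log(1 - action.pow(2) + 1e-6)
```

`rsample()` keeps the sample differentiable with respect to mu and sigma, which is what allows the actor loss to backpropagate through the sampled action.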

compute_critic(inputs)

Overview

QAC forward computation graph for the critic part: takes observation and action tensors as input and predicts the Q-value.

Arguments:

- inputs (:obj:Dict[str, torch.Tensor]): The dict of input data, including obs and action tensors; in the hybrid action_space it also contains logit and action_args tensors.

ArgumentsKeys:

- obs (:obj:torch.Tensor): Observation tensor data; currently supports a batch of 1-dim vector data.
- action (:obj:Union[torch.Tensor, Dict]): Continuous action with the same size as action_shape.
- logit (:obj:torch.Tensor): Discrete action logit, only in the hybrid action_space.
- action_args (:obj:torch.Tensor): Continuous action arguments, only in the hybrid action_space.

Returns:

- outputs (:obj:Dict[str, torch.Tensor]): The output dict of QAC's forward computation graph for the critic, including q_value.

ReturnKeys:

- q_value (:obj:torch.Tensor): Q-value tensor with the same size as the batch size.

Shapes:

- obs (:obj:torch.Tensor): :math:(B, N1), where B is batch size and N1 is obs_shape.
- logit (:obj:torch.Tensor): :math:(B, N2), where B is batch size and N2 corresponds to action_shape.action_type_shape.
- action_args (:obj:torch.Tensor): :math:(B, N3), where B is batch size and N3 corresponds to action_shape.action_args_shape.
- action (:obj:torch.Tensor): :math:(B, N4), where B is batch size and N4 is action_shape.
- q_value (:obj:torch.Tensor): :math:(B, ), where B is batch size.

Examples:

>>> inputs = {'obs': torch.randn(4, 8), 'action': torch.randn(4, 1)}
>>> model = ContinuousQAC(obs_shape=(8, ), action_shape=1, action_space='regression')
>>> assert model(inputs, mode='compute_critic')['q_value'].shape == (4, )  # q value
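For the hybrid action_space, the critic input described above is assembled by turning the discrete action-type logit into a probability vector with softmax and concatenating it with the observation and the continuous action_args. An illustrative sketch with assumed shapes, not the library code itself:

```python
import torch

B, N1, N2, N3 = 4, 8, 3, 2        # batch, obs dim, action types, action args
obs = torch.randn(B, N1)
logit = torch.randn(B, N2)        # discrete action-type logit
action_args = torch.randn(B, N3)  # continuous action arguments

# Critic input: obs ++ softmax(logit) ++ action_args along the feature dim.
x = torch.cat([obs, torch.softmax(logit, dim=-1), action_args], dim=1)
```

The resulting feature vector has size N1 + N2 + N3, which is why the hybrid branch sizes the critic's first linear layer accordingly.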

DiscreteQAC

Bases: Module

Overview

The neural network and computation graph of algorithms related to discrete-action Q-value Actor-Critic (QAC), such as DiscreteSAC. This model supports only discrete action spaces. The DiscreteQAC is composed of four parts: actor_encoder, critic_encoder, actor_head and critic_head. Encoders extract features from various observations; heads predict the corresponding Q-value or action logit. In high-dimensional observation spaces such as 2D images, a shared encoder is often used for both actor_encoder and critic_encoder; in low-dimensional observation spaces such as 1D vectors, separate encoders are more common.

Interfaces: __init__, forward, compute_actor, compute_critic

__init__(obs_shape, action_shape, twin_critic=False, actor_head_hidden_size=64, actor_head_layer_num=1, critic_head_hidden_size=64, critic_head_layer_num=1, activation=nn.ReLU(), norm_type=None, encoder_hidden_size_list=None, share_encoder=False)

Overview

Initialize the DiscreteQAC model according to the input arguments.

Arguments:

- obs_shape (:obj:Union[int, SequenceType]): Observation's shape, such as 128, (156, ).
- action_shape (:obj:Union[int, SequenceType, EasyDict]): Action's shape, such as 4, (3, ).
- twin_critic (:obj:bool): Whether to use twin critics.
- actor_head_hidden_size (:obj:Optional[int]): The hidden_size to pass to the actor head.
- actor_head_layer_num (:obj:int): The number of layers used in the actor network to compute the action.
- critic_head_hidden_size (:obj:Optional[int]): The hidden_size to pass to the critic head.
- critic_head_layer_num (:obj:int): The number of layers used in the critic network to compute the Q-value.
- activation (:obj:Optional[nn.Module]): The activation function to use in the MLP after each FC layer; if None, defaults to nn.ReLU().
- norm_type (:obj:Optional[str]): The type of normalization to apply after each network layer (FC, Conv); see ding.torch_utils.network for more details.
- encoder_hidden_size_list (:obj:SequenceType): Collection of hidden_size values to pass to the Encoder; the last element must match head_hidden_size. This argument is only used with image observations.
- share_encoder (:obj:Optional[bool]): Whether to share the encoder between actor and critic.

forward(inputs, mode)

Overview

QAC forward computation graph: takes an observation tensor as input and predicts a Q-value or action logit. Different mode values forward through different network modules, producing different outputs while saving computation.

Arguments:

- inputs (:obj:torch.Tensor): The input observation tensor data.
- mode (:obj:str): The forward mode; all modes are defined at the beginning of this class.

Returns:

- output (:obj:Dict[str, torch.Tensor]): The output dict of the QAC forward computation graph, whose key-value pairs vary across forward modes.

Examples (Actor):

>>> model = DiscreteQAC(64, 6)
>>> obs = torch.randn(4, 64)
>>> actor_outputs = model(obs, 'compute_actor')
>>> assert actor_outputs['logit'].shape == torch.Size([4, 6])

Examples (Critic):

>>> model = DiscreteQAC(64, 6, twin_critic=False)
>>> obs = torch.randn(4, 64)
>>> actor_outputs = model(obs, 'compute_critic')
>>> assert actor_outputs['q_value'].shape == torch.Size([4, 6])

compute_actor(inputs)

Overview

QAC forward computation graph for the actor part: takes an observation tensor as input and predicts an action or action logit.

Arguments:

- inputs (:obj:torch.Tensor): The input observation tensor data.

Returns:

- outputs (:obj:Dict[str, torch.Tensor]): The output dict of the QAC forward computation graph for the actor, including the discrete action logit.

ReturnsKeys:

- logit (:obj:torch.Tensor): The predicted discrete action type logit; it has the same dimension as action_shape, i.e., all the possible discrete action choices.

Shapes:

- inputs (:obj:torch.Tensor): :math:(B, N0), where B is batch size and N0 corresponds to obs_shape.
- logit (:obj:torch.Tensor): :math:(B, N2), where B is batch size and N2 corresponds to action_shape.

Examples:

>>> model = DiscreteQAC(64, 6)
>>> obs = torch.randn(4, 64)
>>> actor_outputs = model(obs, 'compute_actor')
>>> assert actor_outputs['logit'].shape == torch.Size([4, 6])

compute_critic(inputs)

Overview

QAC forward computation graph for the critic part: takes an observation as input and predicts a Q-value for each possible discrete action choice.

Arguments:

- inputs (:obj:torch.Tensor): The input observation tensor data.

Returns:

- outputs (:obj:Dict[str, torch.Tensor]): The output dict of the QAC forward computation graph for the critic, including a q_value for each possible discrete action choice.

ReturnKeys:

- q_value (:obj:torch.Tensor): The predicted Q-value for each possible discrete action choice; it has the same dimension as action_shape and is used to calculate the loss.

Shapes:

- obs (:obj:torch.Tensor): :math:(B, N1), where B is batch size and N1 is obs_shape.
- q_value (:obj:torch.Tensor): :math:(B, N2), where B is batch size and N2 is action_shape.

Examples:

>>> model = DiscreteQAC(64, 6, twin_critic=False)
>>> obs = torch.randn(4, 64)
>>> actor_outputs = model(obs, 'compute_critic')
>>> assert actor_outputs['q_value'].shape == torch.Size([4, 6])
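Because the discrete critic returns one Q-value per action, downstream code typically consumes it together with the actor's logit. A hedged sketch of two common uses (greedy action selection and the policy-weighted state value used in discrete SAC), with made-up tensors standing in for real model outputs:

```python
import torch

q_value = torch.randn(4, 6)             # critic output: (B, action_shape)
logit = torch.randn(4, 6)               # actor output: (B, action_shape)
pi = torch.softmax(logit, dim=-1)       # policy probabilities per action

greedy_action = q_value.argmax(dim=-1)  # (B,) index of best action per sample
v = (pi * q_value).sum(dim=-1)          # (B,) policy-weighted state value
```

Having Q-values for every action at once is what makes these per-action reductions possible without re-running the critic.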

Full Source Code

../ding/model/template/qac.py

from typing import Union, Dict, Optional
from easydict import EasyDict
import numpy as np
import torch
import torch.nn as nn

from ding.utils import SequenceType, squeeze, MODEL_REGISTRY
from ..common import RegressionHead, ReparameterizationHead, DiscreteHead, MultiHead, \
    FCEncoder, ConvEncoder


@MODEL_REGISTRY.register('continuous_qac')
class ContinuousQAC(nn.Module):
    """
    Overview:
        The neural network and computation graph of algorithms related to Q-value Actor-Critic (QAC), such as \
        DDPG/TD3/SAC. This model supports continuous and hybrid action spaces. The ContinuousQAC is composed of \
        four parts: ``actor_encoder``, ``critic_encoder``, ``actor_head`` and ``critic_head``. Encoders extract \
        features from various observations; heads predict the corresponding Q-value or action logit. \
        In high-dimensional observation spaces such as 2D images, a shared encoder is often used for both \
        ``actor_encoder`` and ``critic_encoder``. In low-dimensional observation spaces such as 1D vectors, \
        separate encoders are more common.
    Interfaces:
        ``__init__``, ``forward``, ``compute_actor``, ``compute_critic``
    """
    mode = ['compute_actor', 'compute_critic']

    def __init__(
            self,
            obs_shape: Union[int, SequenceType],
            action_shape: Union[int, SequenceType, EasyDict],
            action_space: str,
            twin_critic: bool = False,
            actor_head_hidden_size: int = 64,
            actor_head_layer_num: int = 1,
            critic_head_hidden_size: int = 64,
            critic_head_layer_num: int = 1,
            activation: Optional[nn.Module] = nn.ReLU(),
            norm_type: Optional[str] = None,
            encoder_hidden_size_list: Optional[SequenceType] = None,
            share_encoder: Optional[bool] = False,
    ) -> None:
        """
        Overview:
            Initialize the ContinuousQAC model according to the input arguments.
        Arguments:
            - obs_shape (:obj:`Union[int, SequenceType]`): Observation's shape, such as 128, (156, ).
            - action_shape (:obj:`Union[int, SequenceType, EasyDict]`): Action's shape, such as 4, (3, ), \
                EasyDict({'action_type_shape': 3, 'action_args_shape': 4}).
            - action_space (:obj:`str`): The type of action space, including [``regression``, ``reparameterization``, \
                ``hybrid``]; ``regression`` is used for DDPG/TD3, ``reparameterization`` is used for SAC and \
                ``hybrid`` for PADDPG.
            - twin_critic (:obj:`bool`): Whether to use twin critics, one of the tricks introduced in TD3.
            - actor_head_hidden_size (:obj:`Optional[int]`): The ``hidden_size`` to pass to the actor head.
            - actor_head_layer_num (:obj:`int`): The number of layers used in the actor network to compute the action.
            - critic_head_hidden_size (:obj:`Optional[int]`): The ``hidden_size`` to pass to the critic head.
            - critic_head_layer_num (:obj:`int`): The number of layers used in the critic network to compute the Q-value.
            - activation (:obj:`Optional[nn.Module]`): The activation function to use in the ``MLP`` \
                after each FC layer; if ``None``, defaults to ``nn.ReLU()``.
            - norm_type (:obj:`Optional[str]`): The type of normalization to apply after each network layer (FC, Conv), \
                see ``ding.torch_utils.network`` for more details.
            - encoder_hidden_size_list (:obj:`SequenceType`): Collection of ``hidden_size`` values to pass to \
                ``Encoder``; the last element must match ``head_hidden_size``. This argument is only used with \
                image observations.
            - share_encoder (:obj:`Optional[bool]`): Whether to share the encoder between actor and critic.
        """
        super(ContinuousQAC, self).__init__()
        obs_shape: int = squeeze(obs_shape)
        action_shape = squeeze(action_shape)
        self.action_shape = action_shape
        self.action_space = action_space
        assert self.action_space in ['regression', 'reparameterization', 'hybrid'], self.action_space

        # encoder
        self.share_encoder = share_encoder
        if np.isscalar(obs_shape) or len(obs_shape) == 1:
            assert not self.share_encoder, "Vector observation doesn't need share encoder."
            assert encoder_hidden_size_list is None, "Vector obs encoder only uses one layer nn.Linear"
            # Because there is already a layer nn.Linear in the head, so we use nn.Identity here to keep
            # compatible with the image observation and avoid adding an extra layer nn.Linear.
            self.actor_encoder = nn.Identity()
            self.critic_encoder = nn.Identity()
            encoder_output_size = obs_shape
        elif len(obs_shape) == 3:

            def setup_conv_encoder():
                kernel_size = [3 for _ in range(len(encoder_hidden_size_list))]
                stride = [2] + [1 for _ in range(len(encoder_hidden_size_list) - 1)]
                return ConvEncoder(
                    obs_shape,
                    encoder_hidden_size_list,
                    activation=activation,
                    norm_type=norm_type,
                    kernel_size=kernel_size,
                    stride=stride
                )

            if self.share_encoder:
                encoder = setup_conv_encoder()
                self.actor_encoder = self.critic_encoder = encoder
            else:
                self.actor_encoder = setup_conv_encoder()
                self.critic_encoder = setup_conv_encoder()
            encoder_output_size = self.actor_encoder.output_size
        else:
            raise RuntimeError("not support observation shape: {}".format(obs_shape))
        # head
        if self.action_space == 'regression':  # DDPG, TD3
            self.actor_head = nn.Sequential(
                nn.Linear(encoder_output_size, actor_head_hidden_size), activation,
                RegressionHead(
                    actor_head_hidden_size,
                    action_shape,
                    actor_head_layer_num,
                    final_tanh=True,
                    activation=activation,
                    norm_type=norm_type
                )
            )
        elif self.action_space == 'reparameterization':  # SAC
            self.actor_head = nn.Sequential(
                nn.Linear(encoder_output_size, actor_head_hidden_size), activation,
                ReparameterizationHead(
                    actor_head_hidden_size,
                    action_shape,
                    actor_head_layer_num,
                    sigma_type='conditioned',
                    activation=activation,
                    norm_type=norm_type
                )
            )
        elif self.action_space == 'hybrid':  # PADDPG
            # hybrid action space: action_type(discrete) + action_args(continuous),
            # such as {'action_type_shape': torch.LongTensor([0]), 'action_args_shape': torch.FloatTensor([0.1, -0.27])}
            action_shape.action_args_shape = squeeze(action_shape.action_args_shape)
            action_shape.action_type_shape = squeeze(action_shape.action_type_shape)
            actor_action_args = nn.Sequential(
                nn.Linear(encoder_output_size, actor_head_hidden_size), activation,
                RegressionHead(
                    actor_head_hidden_size,
                    action_shape.action_args_shape,
                    actor_head_layer_num,
                    final_tanh=True,
                    activation=activation,
                    norm_type=norm_type
                )
            )
            actor_action_type = nn.Sequential(
                nn.Linear(encoder_output_size, actor_head_hidden_size), activation,
                DiscreteHead(
                    actor_head_hidden_size,
                    action_shape.action_type_shape,
                    actor_head_layer_num,
                    activation=activation,
                    norm_type=norm_type,
                )
            )
            self.actor_head = nn.ModuleList([actor_action_type, actor_action_args])

        self.twin_critic = twin_critic
        if self.action_space == 'hybrid':
            critic_input_size = encoder_output_size + action_shape.action_type_shape + action_shape.action_args_shape
        else:
            critic_input_size = encoder_output_size + action_shape
        if self.twin_critic:
            self.critic_head = nn.ModuleList()
            for _ in range(2):
                self.critic_head.append(
                    nn.Sequential(
                        nn.Linear(critic_input_size, critic_head_hidden_size), activation,
                        RegressionHead(
                            critic_head_hidden_size,
                            1,
                            critic_head_layer_num,
                            final_tanh=False,
                            activation=activation,
                            norm_type=norm_type
                        )
                    )
                )
        else:
            self.critic_head = nn.Sequential(
                nn.Linear(critic_input_size, critic_head_hidden_size), activation,
                RegressionHead(
                    critic_head_hidden_size,
                    1,
                    critic_head_layer_num,
                    final_tanh=False,
                    activation=activation,
                    norm_type=norm_type
                )
            )

        # Convenient for calling some apis (e.g. self.critic.parameters()),
        # but may cause misunderstanding when `print(self)`
        self.actor = nn.ModuleList([self.actor_encoder, self.actor_head])
        self.critic = nn.ModuleList([self.critic_encoder, self.critic_head])

    def forward(self, inputs: Union[torch.Tensor, Dict[str, torch.Tensor]], mode: str) -> Dict[str, torch.Tensor]:
        """
        Overview:
            QAC forward computation graph: takes an observation tensor as input and predicts a Q-value or action \
            logit. Different ``mode`` values forward through different network modules, producing different \
            outputs while saving computation.
        Arguments:
            - inputs (:obj:`Union[torch.Tensor, Dict[str, torch.Tensor]]`): The input data for the forward \
                computation graph; for ``compute_actor``, it is the observation tensor, for ``compute_critic``, \
                it is a dict containing the obs and action tensors.
            - mode (:obj:`str`): The forward mode; all modes are defined at the beginning of this class.
        Returns:
            - output (:obj:`Dict[str, torch.Tensor]`): The output dict of the QAC forward computation graph, whose \
                key-value pairs vary across forward modes.
        Examples (Actor):
            >>> # Regression mode
            >>> model = ContinuousQAC(64, 6, 'regression')
            >>> obs = torch.randn(4, 64)
            >>> actor_outputs = model(obs, 'compute_actor')
            >>> assert actor_outputs['action'].shape == torch.Size([4, 6])
            >>> # Reparameterization mode
            >>> model = ContinuousQAC(64, 6, 'reparameterization')
            >>> obs = torch.randn(4, 64)
            >>> actor_outputs = model(obs, 'compute_actor')
            >>> assert actor_outputs['logit'][0].shape == torch.Size([4, 6])  # mu
            >>> assert actor_outputs['logit'][1].shape == torch.Size([4, 6])  # sigma

        Examples (Critic):
            >>> inputs = {'obs': torch.randn(4, 8), 'action': torch.randn(4, 1)}
            >>> model = ContinuousQAC(obs_shape=(8, ), action_shape=1, action_space='regression')
            >>> assert model(inputs, mode='compute_critic')['q_value'].shape == (4, )  # q value
        """
        assert mode in self.mode, "not support forward mode: {}/{}".format(mode, self.mode)
        return getattr(self, mode)(inputs)

    def compute_actor(self, obs: torch.Tensor) -> Dict[str, Union[torch.Tensor, Dict[str, torch.Tensor]]]:
        """
        Overview:
            QAC forward computation graph for the actor part: takes an observation tensor as input and predicts \
            an action or action logit.
        Arguments:
            - obs (:obj:`torch.Tensor`): The input observation tensor data.
        Returns:
            - outputs (:obj:`Dict[str, Union[torch.Tensor, Dict[str, torch.Tensor]]]`): Actor output dict, varying \
                with action_space: ``regression``, ``reparameterization``, ``hybrid``.
        ReturnsKeys (regression):
            - action (:obj:`torch.Tensor`): Continuous action with the same size as ``action_shape``, usually in \
                DDPG/TD3.
        ReturnsKeys (reparameterization):
            - logit (:obj:`Dict[str, torch.Tensor]`): The predicted reparameterization action logit, usually in \
                SAC. It is a list containing two tensors: ``mu`` and ``sigma``. The former is the mean of the \
                Gaussian distribution, the latter its standard deviation.
        ReturnsKeys (hybrid):
            - logit (:obj:`torch.Tensor`): The predicted discrete action type logit; it has the same dimension \
                as ``action_type_shape``, i.e., all possible discrete action types.
            - action_args (:obj:`torch.Tensor`): Continuous action arguments with the same size as \
                ``action_args_shape``.
        Shapes:
            - obs (:obj:`torch.Tensor`): :math:`(B, N0)`, where B is batch size and N0 corresponds to ``obs_shape``.
            - action (:obj:`torch.Tensor`): :math:`(B, N1)`, where B is batch size and N1 corresponds to \
                ``action_shape``.
            - logit.mu (:obj:`torch.Tensor`): :math:`(B, N1)`, where B is batch size and N1 corresponds to \
                ``action_shape``.
            - logit.sigma (:obj:`torch.Tensor`): :math:`(B, N1)`, where B is batch size.
            - logit (:obj:`torch.Tensor`): :math:`(B, N2)`, where B is batch size and N2 corresponds to \
                ``action_shape.action_type_shape``.
            - action_args (:obj:`torch.Tensor`): :math:`(B, N3)`, where B is batch size and N3 corresponds to \
                ``action_shape.action_args_shape``.
        Examples:
            >>> # Regression mode
            >>> model = ContinuousQAC(64, 6, 'regression')
            >>> obs = torch.randn(4, 64)
            >>> actor_outputs = model(obs, 'compute_actor')
            >>> assert actor_outputs['action'].shape == torch.Size([4, 6])
            >>> # Reparameterization mode
            >>> model = ContinuousQAC(64, 6, 'reparameterization')
            >>> obs = torch.randn(4, 64)
            >>> actor_outputs = model(obs, 'compute_actor')
            >>> assert actor_outputs['logit'][0].shape == torch.Size([4, 6])  # mu
            >>> assert actor_outputs['logit'][1].shape == torch.Size([4, 6])  # sigma
        """
        obs = self.actor_encoder(obs)
        if self.action_space == 'regression':
            x = self.actor_head(obs)
            return {'action': x['pred']}
        elif self.action_space == 'reparameterization':
            x = self.actor_head(obs)
            return {'logit': [x['mu'], x['sigma']]}
        elif self.action_space == 'hybrid':
            logit = self.actor_head[0](obs)
            action_args = self.actor_head[1](obs)
            return {'logit': logit['logit'], 'action_args': action_args['pred']}

    def compute_critic(self, inputs: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:
        """
        Overview:
            QAC forward computation graph for the critic part: takes observation and action tensors as input and \
            predicts the Q-value.
        Arguments:
            - inputs (:obj:`Dict[str, torch.Tensor]`): The dict of input data, including ``obs`` and ``action`` \
                tensors; in the hybrid action_space it also contains ``logit`` and ``action_args`` tensors.
        ArgumentsKeys:
            - obs (:obj:`torch.Tensor`): Observation tensor data; currently supports a batch of 1-dim vector data.
            - action (:obj:`Union[torch.Tensor, Dict]`): Continuous action with the same size as ``action_shape``.
            - logit (:obj:`torch.Tensor`): Discrete action logit, only in the hybrid action_space.
            - action_args (:obj:`torch.Tensor`): Continuous action arguments, only in the hybrid action_space.
        Returns:
            - outputs (:obj:`Dict[str, torch.Tensor]`): The output dict of QAC's forward computation graph for the \
                critic, including ``q_value``.
        ReturnKeys:
            - q_value (:obj:`torch.Tensor`): Q-value tensor with the same size as the batch size.
        Shapes:
            - obs (:obj:`torch.Tensor`): :math:`(B, N1)`, where B is batch size and N1 is ``obs_shape``.
            - logit (:obj:`torch.Tensor`): :math:`(B, N2)`, where B is batch size and N2 corresponds to \
                ``action_shape.action_type_shape``.
            - action_args (:obj:`torch.Tensor`): :math:`(B, N3)`, where B is batch size and N3 corresponds to \
                ``action_shape.action_args_shape``.
            - action (:obj:`torch.Tensor`): :math:`(B, N4)`, where B is batch size and N4 is ``action_shape``.
            - q_value (:obj:`torch.Tensor`): :math:`(B, )`, where B is batch size.

        Examples:
            >>> inputs = {'obs': torch.randn(4, 8), 'action': torch.randn(4, 1)}
            >>> model = ContinuousQAC(obs_shape=(8, ), action_shape=1, action_space='regression')
            >>> assert model(inputs, mode='compute_critic')['q_value'].shape == (4, )  # q value
        """

        obs, action = inputs['obs'], inputs['action']
        obs = self.critic_encoder(obs)
        assert len(obs.shape) == 2
        if self.action_space == 'hybrid':
            action_type_logit = inputs['logit']
            action_type_logit = torch.softmax(action_type_logit, dim=-1)
            action_args = action['action_args']
            if len(action_args.shape) == 1:
                action_args = action_args.unsqueeze(1)
            x = torch.cat([obs, action_type_logit, action_args], dim=1)
        else:
            if len(action.shape) == 1:  # (B, ) -> (B, 1)
                action = action.unsqueeze(1)
            x = torch.cat([obs, action], dim=1)
        if self.twin_critic:
            x = [m(x)['pred'] for m in self.critic_head]
        else:
            x = self.critic_head(x)['pred']
        return {'q_value': x}


@MODEL_REGISTRY.register('discrete_qac')
class DiscreteQAC(nn.Module):
    """
    Overview:
        The neural network and computation graph of algorithms related to discrete-action Q-value Actor-Critic \
        (QAC), such as DiscreteSAC. This model supports only discrete action spaces. The DiscreteQAC is composed \
        of four parts: ``actor_encoder``, ``critic_encoder``, ``actor_head`` and ``critic_head``. Encoders extract \
        features from various observations; heads predict the corresponding Q-value or action logit. \
        In high-dimensional observation spaces such as 2D images, a shared encoder is often used for both \
        ``actor_encoder`` and ``critic_encoder``. In low-dimensional observation spaces such as 1D vectors, \
        separate encoders are more common.
    Interfaces:
        ``__init__``, ``forward``, ``compute_actor``, ``compute_critic``
    """
    mode = ['compute_actor', 'compute_critic']

    def __init__(
            self,
            obs_shape: Union[int, SequenceType],
            action_shape: Union[int, SequenceType],
            twin_critic: bool = False,
            actor_head_hidden_size: int = 64,
            actor_head_layer_num: int = 1,
            critic_head_hidden_size: int = 64,
            critic_head_layer_num: int = 1,
            activation: Optional[nn.Module] = nn.ReLU(),
            norm_type: Optional[str] = None,
            encoder_hidden_size_list: SequenceType = None,
            share_encoder: Optional[bool] = False,
    ) -> None:
        """
        Overview:
            Initialize the DiscreteQAC model according to the input arguments.
        Arguments:
            - obs_shape (:obj:`Union[int, SequenceType]`): Observation's shape, such as 128, (156, ).
            - action_shape (:obj:`Union[int, SequenceType, EasyDict]`): Action's shape, such as 4, (3, ).
            - twin_critic (:obj:`bool`): Whether to use twin critics.
            - actor_head_hidden_size (:obj:`Optional[int]`): The ``hidden_size`` to pass to the actor head.
            - actor_head_layer_num (:obj:`int`): The number of layers used in the actor network to compute the action.
            - critic_head_hidden_size (:obj:`Optional[int]`): The ``hidden_size`` to pass to the critic head.
            - critic_head_layer_num (:obj:`int`): The number of layers used in the critic network to compute the \
                Q-value.
            - activation (:obj:`Optional[nn.Module]`): The activation function to use in the ``MLP`` \
                after each FC layer; if ``None``, defaults to ``nn.ReLU()``.
            - norm_type (:obj:`Optional[str]`): The type of normalization to apply after each network layer \
                (FC, Conv), see ``ding.torch_utils.network`` for more details.
            - encoder_hidden_size_list (:obj:`SequenceType`): Collection of ``hidden_size`` values to pass to \
                ``Encoder``; the last element must match ``head_hidden_size``. This argument is only used with \
                image observations.
            - share_encoder (:obj:`Optional[bool]`): Whether to share the encoder between actor and critic.
        """
        super(DiscreteQAC, self).__init__()
        obs_shape: int = squeeze(obs_shape)
        action_shape: int = squeeze(action_shape)
        # encoder
        self.share_encoder = share_encoder
        if np.isscalar(obs_shape) or len(obs_shape) == 1:
            assert not self.share_encoder, "Vector observation doesn't need share encoder."
            assert encoder_hidden_size_list is None, "Vector obs encoder only uses one layer nn.Linear"
            # Because there is already a layer nn.Linear in the head, so we use nn.Identity here to keep
            # compatible with the image observation and avoid adding an extra layer nn.Linear.
            self.actor_encoder = nn.Identity()
            self.critic_encoder = nn.Identity()
            encoder_output_size = obs_shape
        elif len(obs_shape) == 3:

            def setup_conv_encoder():
                kernel_size = [3 for _ in range(len(encoder_hidden_size_list))]
                stride = [2] + [1 for _ in range(len(encoder_hidden_size_list) - 1)]
                return ConvEncoder(
                    obs_shape,
                    encoder_hidden_size_list,
                    activation=activation,
                    norm_type=norm_type,
                    kernel_size=kernel_size,
                    stride=stride
                )

            if self.share_encoder:
                encoder = setup_conv_encoder()
                self.actor_encoder = self.critic_encoder = encoder
            else:
                self.actor_encoder = setup_conv_encoder()
                self.critic_encoder = setup_conv_encoder()
            encoder_output_size = self.actor_encoder.output_size
        else:
            raise RuntimeError("not support observation shape: {}".format(obs_shape))

        # head
        self.actor_head = nn.Sequential(
            nn.Linear(encoder_output_size, actor_head_hidden_size), activation,
            DiscreteHead(
                actor_head_hidden_size, action_shape, actor_head_layer_num, activation=activation, norm_type=norm_type
            )
        )

        self.twin_critic = twin_critic
        if self.twin_critic:
            self.critic_head = nn.ModuleList()
            for _ in range(2):
                self.critic_head.append(
                    nn.Sequential(
                        nn.Linear(encoder_output_size, critic_head_hidden_size), activation,
                        DiscreteHead(
                            critic_head_hidden_size,
                            action_shape,
                            critic_head_layer_num,
                            activation=activation,
                            norm_type=norm_type
                        )
                    )
                )
        else:
            self.critic_head = nn.Sequential(
                nn.Linear(encoder_output_size, critic_head_hidden_size), activation,
                DiscreteHead(
                    critic_head_hidden_size,
                    action_shape,
                    critic_head_layer_num,
                    activation=activation,
                    norm_type=norm_type
                )
            )
        # Convenient for calling some apis (e.g. self.critic.parameters()),
        # but may cause misunderstanding when `print(self)`
        self.actor = nn.ModuleList([self.actor_encoder, self.actor_head])
        self.critic = nn.ModuleList([self.critic_encoder, self.critic_head])

    def forward(self, inputs: torch.Tensor, mode: str) -> Dict[str, torch.Tensor]:
        """
        Overview:
            QAC forward computation graph: takes an observation tensor as input and predicts a Q-value or action \
            logit. Different ``mode`` values forward through different network modules, producing different \
            outputs while saving computation.
        Arguments:
            - inputs (:obj:`torch.Tensor`): The input observation tensor data.
            - mode (:obj:`str`): The forward mode; all modes are defined at the beginning of this class.
        Returns:
            - output (:obj:`Dict[str, torch.Tensor]`): The output dict of the QAC forward computation graph, whose \
                key-value pairs vary across forward modes.
        Examples (Actor):
            >>> model = DiscreteQAC(64, 6)
            >>> obs = torch.randn(4, 64)
            >>> actor_outputs = model(obs, 'compute_actor')
            >>> assert actor_outputs['logit'].shape == torch.Size([4, 6])

        Examples (Critic):
            >>> model = DiscreteQAC(64, 6, twin_critic=False)
            >>> obs = torch.randn(4, 64)
            >>> actor_outputs = model(obs, 'compute_critic')
            >>> assert actor_outputs['q_value'].shape == torch.Size([4, 6])
        """
        assert mode in self.mode, "not support forward mode: {}/{}".format(mode, self.mode)
        return getattr(self, mode)(inputs)

    def compute_actor(self, inputs: torch.Tensor) -> Dict[str, torch.Tensor]:
        """
        Overview:
            QAC forward computation graph for the actor part: takes an observation tensor as input and predicts \
            an action or action logit.
        Arguments:
            - inputs (:obj:`torch.Tensor`): The input observation tensor data.
        Returns:
            - outputs (:obj:`Dict[str, torch.Tensor]`): The output dict of the QAC forward computation graph for \
                the actor, including the discrete action ``logit``.
        ReturnsKeys:
            - logit (:obj:`torch.Tensor`): The predicted discrete action type logit; it has the same dimension \
                as ``action_shape``, i.e., all the possible discrete action choices.
        Shapes:
            - inputs (:obj:`torch.Tensor`): :math:`(B, N0)`, where B is batch size and N0 corresponds to \
                ``obs_shape``.
            - logit (:obj:`torch.Tensor`): :math:`(B, N2)`, where B is batch size and N2 corresponds to \
                ``action_shape``.
504 Examples: 505 >>> model = DiscreteQAC(64, 6) 506 >>> obs = torch.randn(4, 64) 507 >>> actor_outputs = model(obs,'compute_actor') 508 >>> assert actor_outputs['logit'].shape == torch.Size([4, 6]) 509 """ 510 x = self.actor_encoder(inputs) 511 x = self.actor_head(x) 512 return {'logit': x['logit']} 513 514 def compute_critic(self, inputs: torch.Tensor) -> Dict[str, torch.Tensor]: 515 """ 516 Overview: 517 QAC forward computation graph for critic part, input observation to predict Q-value for each possible \ 518 discrete action choices. 519 Arguments: 520 - inputs (:obj:`torch.Tensor`): The input observation tensor data. 521 Returns: 522 - outputs (:obj:`Dict[str, torch.Tensor]`): The output dict of QAC forward computation graph for critic, \ 523 including ``q_value`` for each possible discrete action choices. 524 ReturnKeys: 525 - q_value (:obj:`torch.Tensor`): The predicted Q-value for each possible discrete action choices, it will \ 526 be the same dimension as ``action_shape`` and used to calculate the loss. 527 Shapes: 528 - obs (:obj:`torch.Tensor`): :math:`(B, N1)`, where B is batch size and N1 is ``obs_shape``. 529 - q_value (:obj:`torch.Tensor`): :math:`(B, N2)`, where B is batch size and N2 is ``action_shape``. 530 Examples: 531 >>> model = DiscreteQAC(64, 6, twin_critic=False) 532 >>> obs = torch.randn(4, 64) 533 >>> actor_outputs = model(obs,'compute_critic') 534 >>> assert actor_outputs['q_value'].shape == torch.Size([4, 6]) 535 """ 536 inputs = self.critic_encoder(inputs) 537 if self.twin_critic: 538 x = [m(inputs)['logit'] for m in self.critic_head] 539 else: 540 x = self.critic_head(inputs)['logit'] 541 return {'q_value': x}
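The `forward(inputs, mode)` entry point dispatches to `compute_actor` or `compute_critic` via `getattr`, so only the requested sub-graph executes. A minimal, framework-free sketch of this dispatch pattern (the `ModeDispatch` class and its toy computations are illustrative, not DI-engine APIs):

```python
class ModeDispatch:
    # Whitelist of callable forward modes, mirroring the class-level `mode` check above.
    mode = ['compute_actor', 'compute_critic']

    def forward(self, inputs, mode):
        assert mode in self.mode, "not support forward mode: {}/{}".format(mode, self.mode)
        # Dispatch by attribute lookup: only the requested sub-graph runs.
        return getattr(self, mode)(inputs)

    def compute_actor(self, inputs):
        # Toy stand-in for the actor network.
        return {'logit': [x * 2 for x in inputs]}

    def compute_critic(self, inputs):
        # Toy stand-in for the critic network.
        return {'q_value': [x + 1 for x in inputs]}


model = ModeDispatch()
print(model.forward([1, 2], 'compute_actor'))   # {'logit': [2, 4]}
print(model.forward([1, 2], 'compute_critic'))  # {'q_value': [2, 3]}
```

An unknown mode fails fast at the assertion instead of silently running the wrong sub-network, which is why the real model keeps the whitelist check before the `getattr` call.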
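With `twin_critic=True`, `compute_critic` returns a list of two Q-value tensors from two independently parameterized heads; TD3-style algorithms then take the elementwise minimum of the two estimates when forming targets, which curbs Q-value overestimation. A self-contained PyTorch sketch of that pattern (the `TwinDiscreteCritic` class here is an illustration with plain linear heads, not DI-engine's `DiscreteHead`):

```python
import torch
import torch.nn as nn


class TwinDiscreteCritic(nn.Module):
    """Two independent Q-heads over the same observation; their min forms the target."""

    def __init__(self, obs_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        # Two heads with separate parameters, analogous to the twin_critic ModuleList above.
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, action_dim))
            for _ in range(2)
        )

    def forward(self, obs: torch.Tensor):
        # Each element has shape (B, action_dim): one Q-value per discrete action.
        return [head(obs) for head in self.heads]


obs = torch.randn(4, 8)
q1, q2 = TwinDiscreteCritic(8, 6)(obs)
target_q = torch.min(q1, q2)  # elementwise min curbs overestimation bias
assert target_q.shape == (4, 6)
```

The heads must not share parameters, otherwise their estimation errors are perfectly correlated and the minimum no longer provides a pessimistic target.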