
ding.model.template.acer

ACER

Bases: Module

Overview

The model of the ACER (Actor-Critic with Experience Replay) algorithm, introduced in "Sample Efficient Actor-Critic with Experience Replay" (https://arxiv.org/abs/1611.01224).

Interfaces: __init__, forward, compute_actor, compute_critic

__init__(obs_shape, action_shape, encoder_hidden_size_list=[128, 128, 64], actor_head_hidden_size=64, actor_head_layer_num=1, critic_head_hidden_size=64, critic_head_layer_num=1, activation=nn.ReLU(), norm_type=None)

Overview

Initialize the ACER model according to the given arguments.

Arguments:
- obs_shape (Union[int, SequenceType]): Observation's space.
- action_shape (Union[int, SequenceType]): Action's space.
- encoder_hidden_size_list (SequenceType): Collection of hidden_size for each layer of the encoder.
- actor_head_hidden_size (Optional[int]): The hidden_size to pass to the actor's Head.
- actor_head_layer_num (int): The number of layers used in the actor network to compute its output.
- critic_head_hidden_size (Optional[int]): The hidden_size to pass to the critic's Head.
- critic_head_layer_num (int): The number of layers used in the critic network to compute the Q-value output.
- activation (Optional[nn.Module]): The type of activation function to use in the MLP after each layer_fn; if None, defaults to nn.ReLU().
- norm_type (Optional[str]): The type of normalization to use; see ding.torch_utils.fc_block for more details.
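As the full source below shows, the constructor picks an encoder class from obs_shape: a scalar or 1-dimensional shape selects FCEncoder (vector observations), a 3-dimensional shape selects ConvEncoder (image observations), and anything else raises. A minimal torch-free sketch of that selection rule (returning encoder names only, as a stand-in for the real classes):

```python
from typing import Sequence, Union

def choose_encoder(obs_shape: Union[int, Sequence[int]]) -> str:
    """Mirror ACER.__init__'s encoder choice: FC for vector obs, Conv for image obs."""
    if isinstance(obs_shape, int) or len(obs_shape) == 1:
        return "FCEncoder"
    elif len(obs_shape) == 3:
        return "ConvEncoder"
    else:
        # Same failure mode as the real constructor: no pre-defined encoder fits.
        raise RuntimeError("not support obs_shape for pre-defined encoder: {}".format(obs_shape))

print(choose_encoder(4))            # vector observation -> FCEncoder
print(choose_encoder((3, 84, 84)))  # image observation -> ConvEncoder
```

The same rule applies twice in the real constructor, since the actor and critic each get their own encoder instance.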

forward(inputs, mode)

Overview

Use observation to predict output, dispatching to compute_actor or compute_critic according to mode.

Arguments:
- inputs (Union[torch.Tensor, Dict]): The input observation tensor.
- mode (str): Name of the forward mode; one of compute_actor and compute_critic.

Returns:
- outputs (Dict): Outputs of the network forward pass.

Shapes (Actor):
- obs (torch.Tensor): (B, N1), where B is batch size and N1 is obs_shape
- logit (torch.FloatTensor): (B, N2), where B is batch size and N2 is action_shape

Shapes (Critic):
- inputs (torch.Tensor): (B, N1), where B is batch size and N1 is obs_shape
- q_value (torch.FloatTensor): (B, N2), where B is batch size and N2 is action_shape
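forward simply validates mode against the class-level mode list and routes the call to the method of the same name via getattr. A torch-free sketch of that dispatch pattern (the Model class and its trivial method bodies are hypothetical stand-ins for ACER):

```python
class Model:
    # Allowed forward modes, as in ACER's class attribute.
    mode = ['compute_actor', 'compute_critic']

    def forward(self, inputs, mode):
        # Same routing as ACER.forward: validate mode, then call the method of that name.
        assert mode in self.mode, "not support forward mode: {}/{}".format(mode, self.mode)
        return getattr(self, mode)(inputs)

    def compute_actor(self, inputs):
        return {'logit': inputs}     # real ACER runs actor encoder + DiscreteHead here

    def compute_critic(self, inputs):
        return {'q_value': inputs}   # real ACER runs critic encoder + RegressionHead here

m = Model()
print(m.forward([0.1, 0.2], 'compute_actor'))  # -> {'logit': [0.1, 0.2]}
```

Any unknown mode fails the assertion, which is why calling code passes mode explicitly rather than calling compute_actor or compute_critic directly.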

compute_actor(inputs)

Overview

Use the actor encoder and head to predict the logit output from the observation (compute_actor mode).

Arguments:
- inputs (torch.Tensor): The observation tensor of shape (B, N=obs_shape).

Returns:
- outputs (Dict): Outputs of the forward pass through the actor's encoder and head.

ReturnKeys:
- logit (torch.FloatTensor): (B, N1), where B is batch size and N1 is action_shape

Shapes:
- inputs (torch.Tensor): (B, N0), where B is batch size and N0 is obs_shape
- logit (torch.FloatTensor): (B, N1), where B is batch size and N1 is action_shape

Examples:
>>> model = ACER(64, 64)
>>> inputs = torch.randn(4, 64)
>>> actor_outputs = model(inputs, 'compute_actor')
>>> assert actor_outputs['logit'].shape == torch.Size([4, 64])

compute_critic(inputs)

Overview

Use the critic encoder and head to predict the Q-value output from the observation (compute_critic mode).

Arguments:
- inputs (torch.Tensor): The observation tensor.

Returns:
- outputs (Dict): Q-value output.

ReturnKeys:
- q_value (torch.Tensor): Q-value tensor of shape (B, N2).

Shapes:
- inputs (torch.Tensor): (B, N1), where B is batch size and N1 is obs_shape
- q_value (torch.FloatTensor): (B, N2), where B is batch size and N2 is action_shape

Examples:
>>> inputs = torch.randn(4, N)
>>> model = ACER(obs_shape=(N, ), action_shape=5)
>>> model(inputs, mode='compute_critic')['q_value']
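Note the key mapping on the critic path: the RegressionHead returns its prediction under the key 'pred', and compute_critic re-labels it as 'q_value' before returning. A torch-free sketch of that output contract (the regression_head stand-in is hypothetical; the real head is a trainable nn.Module):

```python
def regression_head(embedding):
    # Hypothetical stand-in for ding's RegressionHead: outputs {'pred': ...}.
    return {'pred': [sum(embedding)] * 3}  # pretend action_shape = 3

def compute_critic(obs):
    x = regression_head(obs)           # encoder step omitted for brevity
    return {'q_value': x['pred']}      # mirrors acer.py: return {"q_value": x['pred']}

out = compute_critic([1.0, 2.0])
print(out)  # -> {'q_value': [3.0, 3.0, 3.0]}
```

Callers therefore always read the critic output via the 'q_value' key, never 'pred'.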

Full Source Code

../ding/model/template/acer.py

from typing import Union, Dict, Optional
import torch
import torch.nn as nn

from ding.utils import SequenceType, squeeze, MODEL_REGISTRY
from ..common import ReparameterizationHead, RegressionHead, DiscreteHead, MultiHead, \
    FCEncoder, ConvEncoder


@MODEL_REGISTRY.register('acer')
class ACER(nn.Module):
    """
    Overview:
        The model of the ACER (Actor-Critic with Experience Replay) algorithm,
        introduced in "Sample Efficient Actor-Critic with Experience Replay".
        https://arxiv.org/abs/1611.01224
    Interfaces:
        ``__init__``, ``forward``, ``compute_actor``, ``compute_critic``
    """
    mode = ['compute_actor', 'compute_critic']

    def __init__(
            self,
            obs_shape: Union[int, SequenceType],
            action_shape: Union[int, SequenceType],
            encoder_hidden_size_list: SequenceType = [128, 128, 64],
            actor_head_hidden_size: int = 64,
            actor_head_layer_num: int = 1,
            critic_head_hidden_size: int = 64,
            critic_head_layer_num: int = 1,
            activation: Optional[nn.Module] = nn.ReLU(),
            norm_type: Optional[str] = None,
    ) -> None:
        """
        Overview:
            Initialize the ACER model according to the given arguments.
        Arguments:
            - obs_shape (:obj:`Union[int, SequenceType]`): Observation's space.
            - action_shape (:obj:`Union[int, SequenceType]`): Action's space.
            - encoder_hidden_size_list (:obj:`SequenceType`):
                Collection of ``hidden_size`` for each layer of the encoder.
            - actor_head_hidden_size (:obj:`Optional[int]`): The ``hidden_size`` to pass to actor-nn's ``Head``.
            - actor_head_layer_num (:obj:`int`):
                The number of layers used in the actor's network to compute its output.
            - critic_head_hidden_size (:obj:`Optional[int]`): The ``hidden_size`` to pass to critic-nn's ``Head``.
            - critic_head_layer_num (:obj:`int`):
                The number of layers used in the critic's network to compute the Q value output.
            - activation (:obj:`Optional[nn.Module]`):
                The type of activation function to use in ``MLP`` after each ``layer_fn``;
                if ``None``, defaults to ``nn.ReLU()``.
            - norm_type (:obj:`Optional[str]`):
                The type of normalization to use, see ``ding.torch_utils.fc_block`` for more details.
        """
        super(ACER, self).__init__()
        obs_shape: int = squeeze(obs_shape)
        action_shape: int = squeeze(action_shape)
        if isinstance(obs_shape, int) or len(obs_shape) == 1:
            encoder_cls = FCEncoder
        elif len(obs_shape) == 3:
            encoder_cls = ConvEncoder
        else:
            raise RuntimeError(
                "not support obs_shape for pre-defined encoder: {}, please customize your own DQN".format(obs_shape)
            )

        self.actor_encoder = encoder_cls(
            obs_shape, encoder_hidden_size_list, activation=activation, norm_type=norm_type
        )
        self.critic_encoder = encoder_cls(
            obs_shape, encoder_hidden_size_list, activation=activation, norm_type=norm_type
        )

        self.critic_head = RegressionHead(
            critic_head_hidden_size, action_shape, critic_head_layer_num, activation=activation, norm_type=norm_type
        )
        self.actor_head = DiscreteHead(
            actor_head_hidden_size, action_shape, actor_head_layer_num, activation=activation, norm_type=norm_type
        )
        self.actor = [self.actor_encoder, self.actor_head]
        self.critic = [self.critic_encoder, self.critic_head]
        self.actor = nn.ModuleList(self.actor)
        self.critic = nn.ModuleList(self.critic)

    def forward(self, inputs: Union[torch.Tensor, Dict], mode: str) -> Dict:
        """
        Overview:
            Use observation to predict output, dispatching to ``compute_actor``
            or ``compute_critic`` according to ``mode``.
        Arguments:
            - inputs (:obj:`Union[torch.Tensor, Dict]`): The input observation tensor.
            - mode (:obj:`str`): Name of the forward mode.
        Returns:
            - outputs (:obj:`Dict`): Outputs of network forward.
        Shapes (Actor):
            - obs (:obj:`torch.Tensor`): :math:`(B, N1)`, where B is batch size and N1 is ``obs_shape``
            - logit (:obj:`torch.FloatTensor`): :math:`(B, N2)`, where B is batch size and N2 is ``action_shape``
        Shapes (Critic):
            - inputs (:obj:`torch.Tensor`): :math:`(B, N1)`, where B is batch size and N1 is ``obs_shape``
            - q_value (:obj:`torch.FloatTensor`): :math:`(B, N2)`, where B is batch size and N2 is ``action_shape``
        """
        assert mode in self.mode, "not support forward mode: {}/{}".format(mode, self.mode)
        return getattr(self, mode)(inputs)

    def compute_actor(self, inputs: torch.Tensor) -> Dict:
        """
        Overview:
            Use the actor encoder and head to predict the logit output from the observation.
        Arguments:
            - inputs (:obj:`torch.Tensor`): The observation tensor of shape ``(B, N=obs_shape)``.
        Returns:
            - outputs (:obj:`Dict`): Outputs of the forward pass through encoder and head.
        ReturnKeys:
            - logit (:obj:`torch.FloatTensor`): :math:`(B, N1)`, where B is batch size and N1 is ``action_shape``
        Shapes:
            - inputs (:obj:`torch.Tensor`): :math:`(B, N0)`, where B is batch size and N0 is ``obs_shape``
            - logit (:obj:`torch.FloatTensor`): :math:`(B, N1)`, where B is batch size and N1 is ``action_shape``
        Examples:
            >>> model = ACER(64, 64)
            >>> inputs = torch.randn(4, 64)
            >>> actor_outputs = model(inputs, 'compute_actor')
            >>> assert actor_outputs['logit'].shape == torch.Size([4, 64])
        """
        x = self.actor_encoder(inputs)
        x = self.actor_head(x)

        return x

    def compute_critic(self, inputs: torch.Tensor) -> Dict:
        """
        Overview:
            Use the critic encoder and head to predict the Q-value output from the observation.
        Arguments:
            - inputs (:obj:`torch.Tensor`): The observation tensor.
        Returns:
            - outputs (:obj:`Dict`): Q-value output.
        ReturnKeys:
            - q_value (:obj:`torch.Tensor`): Q value tensor of shape :math:`(B, N2)`.
        Shapes:
            - inputs (:obj:`torch.Tensor`): :math:`(B, N1)`, where B is batch size and N1 is ``obs_shape``
            - q_value (:obj:`torch.FloatTensor`): :math:`(B, N2)`, where B is batch size and N2 is ``action_shape``.
        Examples:
            >>> inputs = torch.randn(4, N)
            >>> model = ACER(obs_shape=(N, ), action_shape=5)
            >>> model(inputs, mode='compute_critic')['q_value']
        """

        obs = inputs
        x = self.critic_encoder(obs)
        x = self.critic_head(x)
        return {"q_value": x['pred']}