from .module import Module
from .. import functional as F
from ..._jit_internal import weak_module, weak_script_method


class _DropoutNd(Module):
    __constants__ = ['p', 'inplace']

    def __init__(self, p=0.5, inplace=False):
        super(_DropoutNd, self).__init__()
        if p < 0 or p > 1:
            raise ValueError("dropout probability has to be between 0 and 1, "
                             "but got {}".format(p))
        self.p = p
        self.inplace = inplace

    def extra_repr(self):
        inplace_str = ', inplace' if self.inplace else ''
        return 'p={}{}'.format(self.p, inplace_str)
24 r"""During training, randomly zeroes some of the elements of the input 25 tensor with probability :attr:`p` using samples from a Bernoulli 26 distribution. Each channel will be zeroed out independently on every forward 29 This has proven to be an effective technique for regularization and 30 preventing the co-adaptation of neurons as described in the paper 31 `Improving neural networks by preventing co-adaptation of feature 34 Furthermore, the outputs are scaled by a factor of :math:`\frac{1}{1-p}` during 35 training. This means that during evaluation the module simply computes an 39 p: probability of an element to be zeroed. Default: 0.5 40 inplace: If set to ``True``, will do this operation in-place. Default: ``False`` 43 - Input: :math:`(*)`. Input can be of any shape 44 - Output: :math:`(*)`. Output is of the same shape as input 48 >>> m = nn.Dropout(p=0.2) 49 >>> input = torch.randn(20, 16) 52 .. _Improving neural networks by preventing co-adaptation of feature 53 detectors: https://arxiv.org/abs/1207.0580 57 def forward(self, input):
58 return F.dropout(input, self.
p, self.training, self.
inplace)
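

# A minimal sketch illustrating the docstring above: in training mode,
# surviving elements are scaled by 1/(1-p) so the expected value of each
# element is unchanged, and in eval mode the module is an identity. The
# helper name ``_demo_dropout_scaling`` is illustrative only.
def _demo_dropout_scaling():
    import torch
    import torch.nn as nn
    torch.manual_seed(0)
    m = nn.Dropout(p=0.5)
    x = torch.ones(10)
    y = m(x)                      # each element is 0 or 1/(1 - 0.5) = 2.0
    print(y)
    m.eval()
    print(torch.equal(m(x), x))   # True: identity during evaluation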
63 r"""Randomly zero out entire channels (a channel is a 2D feature map, 64 e.g., the :math:`j`-th channel of the :math:`i`-th sample in the 65 batched input is a 2D tensor :math:`\text{input}[i, j]`). 66 Each channel will be zeroed out independently on every forward call with 67 probability :attr:`p` using samples from a Bernoulli distribution. 69 Usually the input comes from :class:`nn.Conv2d` modules. 71 As described in the paper 72 `Efficient Object Localization Using Convolutional Networks`_ , 73 if adjacent pixels within feature maps are strongly correlated 74 (as is normally the case in early convolution layers) then i.i.d. dropout 75 will not regularize the activations and will otherwise just result 76 in an effective learning rate decrease. 78 In this case, :func:`nn.Dropout2d` will help promote independence between 79 feature maps and should be used instead. 82 p (float, optional): probability of an element to be zero-ed. 83 inplace (bool, optional): If set to ``True``, will do this operation 87 - Input: :math:`(N, C, H, W)` 88 - Output: :math:`(N, C, H, W)` (same shape as input) 92 >>> m = nn.Dropout2d(p=0.2) 93 >>> input = torch.randn(20, 16, 32, 32) 96 .. _Efficient Object Localization Using Convolutional Networks: 97 http://arxiv.org/abs/1411.4280 101 def forward(self, input):
102 return F.dropout2d(input, self.
p, self.training, self.
inplace)
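

# A minimal sketch of the channel-wise behaviour described above: every
# (H, W) feature map is either kept whole (and rescaled by 1/(1-p)) or
# zeroed whole, unlike element-wise nn.Dropout. The helper name
# ``_demo_dropout2d_channels`` is illustrative only.
def _demo_dropout2d_channels():
    import torch
    import torch.nn as nn
    torch.manual_seed(0)
    m = nn.Dropout2d(p=0.5)
    x = torch.ones(1, 8, 4, 4)                        # (N, C, H, W)
    y = m(x).view(1, 8, -1)
    print(y[0, :, 0])                                 # per channel: 0.0 or 2.0
    print((y == y[:, :, :1]).all())                   # True: each channel is
                                                      # constant within itself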
107 r"""Randomly zero out entire channels (a channel is a 3D feature map, 108 e.g., the :math:`j`-th channel of the :math:`i`-th sample in the 109 batched input is a 3D tensor :math:`\text{input}[i, j]`). 110 Each channel will be zeroed out independently on every forward call with 111 probability :attr:`p` using samples from a Bernoulli distribution. 113 Usually the input comes from :class:`nn.Conv3d` modules. 115 As described in the paper 116 `Efficient Object Localization Using Convolutional Networks`_ , 117 if adjacent pixels within feature maps are strongly correlated 118 (as is normally the case in early convolution layers) then i.i.d. dropout 119 will not regularize the activations and will otherwise just result 120 in an effective learning rate decrease. 122 In this case, :func:`nn.Dropout3d` will help promote independence between 123 feature maps and should be used instead. 126 p (float, optional): probability of an element to be zeroed. 127 inplace (bool, optional): If set to ``True``, will do this operation 131 - Input: :math:`(N, C, D, H, W)` 132 - Output: :math:`(N, C, D, H, W)` (same shape as input) 136 >>> m = nn.Dropout3d(p=0.2) 137 >>> input = torch.randn(20, 16, 4, 32, 32) 138 >>> output = m(input) 140 .. _Efficient Object Localization Using Convolutional Networks: 141 http://arxiv.org/abs/1411.4280 145 def forward(self, input):
146 return F.dropout3d(input, self.
p, self.training, self.
inplace)
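

# A minimal sketch: the same channel-wise behaviour as the 2D case, but each
# dropped unit is now an entire (D, H, W) volume. The helper name
# ``_demo_dropout3d_channels`` is illustrative only.
def _demo_dropout3d_channels():
    import torch
    import torch.nn as nn
    torch.manual_seed(0)
    m = nn.Dropout3d(p=0.5)
    x = torch.ones(1, 8, 2, 4, 4)         # (N, C, D, H, W)
    y = m(x).view(1, 8, -1)
    print((y == y[:, :, :1]).all())       # True: each 3D channel is constant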
151 r"""Applies Alpha Dropout over the input. 153 Alpha Dropout is a type of Dropout that maintains the self-normalizing 155 For an input with zero mean and unit standard deviation, the output of 156 Alpha Dropout maintains the original mean and standard deviation of the 158 Alpha Dropout goes hand-in-hand with SELU activation function, which ensures 159 that the outputs have zero mean and unit standard deviation. 161 During training, it randomly masks some of the elements of the input 162 tensor with probability *p* using samples from a bernoulli distribution. 163 The elements to masked are randomized on every forward call, and scaled 164 and shifted to maintain zero mean and unit standard deviation. 166 During evaluation the module simply computes an identity function. 168 More details can be found in the paper `Self-Normalizing Neural Networks`_ . 171 p (float): probability of an element to be dropped. Default: 0.5 172 inplace (bool, optional): If set to ``True``, will do this operation 176 - Input: :math:`(*)`. Input can be of any shape 177 - Output: :math:`(*)`. Output is of the same shape as input 181 >>> m = nn.AlphaDropout(p=0.2) 182 >>> input = torch.randn(20, 16) 183 >>> output = m(input) 185 .. _Self-Normalizing Neural Networks: https://arxiv.org/abs/1706.02515 189 def forward(self, input):
190 return F.alpha_dropout(input, self.
p, self.training)
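

# A minimal sketch checking the property described above: for roughly
# zero-mean, unit-variance input, AlphaDropout keeps the output mean near 0
# and the output std near 1; masked elements are shifted to a negative
# saturation value rather than zeroed. The helper name
# ``_demo_alpha_dropout_stats`` is illustrative only.
def _demo_alpha_dropout_stats():
    import torch
    import torch.nn as nn
    torch.manual_seed(0)
    m = nn.AlphaDropout(p=0.2)
    x = torch.randn(100000)                  # ~ zero mean, unit std
    y = m(x)
    print(y.mean().item(), y.std().item())   # close to 0 and 1 respectively
    print((y == 0).any().item())             # almost surely False: unlike
                                             # nn.Dropout, nothing is zeroed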


@weak_module
class FeatureAlphaDropout(_DropoutNd):
    @weak_script_method
    def forward(self, input):
        return F.feature_alpha_dropout(input, self.p, self.training)
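

# A minimal sketch: FeatureAlphaDropout applies the alpha-dropout transform
# to entire channels at once, so each feature map is either passed through
# (rescaled and shifted) or replaced by a constant as a whole. The helper
# name ``_demo_feature_alpha_dropout`` is illustrative only.
def _demo_feature_alpha_dropout():
    import torch
    import torch.nn as nn
    torch.manual_seed(0)
    m = nn.FeatureAlphaDropout(p=0.5)
    x = torch.ones(1, 8, 4, 4)            # (N, C, H, W), constant input
    y = m(x).view(1, 8, -1)
    print((y == y[:, :, :1]).all())       # True: each channel is constant
    print(y[0, :, 0])                     # kept channels share one value,
                                          # dropped channels another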