from .module import Module
from .. import functional as F
from ..._jit_internal import weak_module, weak_script_method


class _DropoutNd(Module):
    __constants__ = ['p', 'inplace']

    def __init__(self, p=0.5, inplace=False):
        super(_DropoutNd, self).__init__()
        if p < 0 or p > 1:
            raise ValueError("dropout probability has to be between 0 and 1, "
                             "but got {}".format(p))
        self.p = p
        self.inplace = inplace

    def extra_repr(self):
        inplace_str = ', inplace' if self.inplace else ''
        return 'p={}{}'.format(self.p, inplace_str)
@weak_module
class Dropout(_DropoutNd):
    r"""During training, randomly zeroes some of the elements of the input
    tensor with probability :attr:`p` using samples from a Bernoulli
    distribution. Each channel will be zeroed out independently on every forward
    call.

    This has proven to be an effective technique for regularization and
    preventing the co-adaptation of neurons as described in the paper
    `Improving neural networks by preventing co-adaptation of feature
    detectors`_ .

    Furthermore, the outputs are scaled by a factor of :math:`\frac{1}{1-p}` during
    training. This means that during evaluation the module simply computes an
    identity function.

    Args:
        p: probability of an element to be zeroed. Default: 0.5
        inplace: If set to ``True``, will do this operation in-place. Default: ``False``

    Shape:
        - Input: :math:`(*)`. Input can be of any shape
        - Output: :math:`(*)`. Output is of the same shape as input

    Examples::

        >>> m = nn.Dropout(p=0.2)
        >>> input = torch.randn(20, 16)
        >>> output = m(input)

    .. _Improving neural networks by preventing co-adaptation of feature
        detectors: https://arxiv.org/abs/1207.0580
    """

    @weak_script_method
    def forward(self, input):
        return F.dropout(input, self.p, self.training, self.inplace)
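The inverted-dropout behaviour this docstring describes (zero each element with probability `p`, scale survivors by `1/(1-p)`, act as the identity in eval mode) can be sketched in plain Python; the function name `inverted_dropout` is illustrative and is not part of this module.

```python
import random


def inverted_dropout(values, p=0.5, training=True):
    """Illustrative sketch: zero each element with probability p and
    scale the survivors by 1/(1-p); identity when not training."""
    if not training or p == 0.0:
        return list(values)        # evaluation mode: identity function
    if p >= 1.0:
        return [0.0 for _ in values]  # everything is dropped
    scale = 1.0 / (1.0 - p)        # keeps the expected value unchanged
    return [0.0 if random.random() < p else v * scale for v in values]
```

Every surviving element is exactly `v / (1 - p)`, so the expectation of each output element equals its input value.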
@weak_module
class Dropout2d(_DropoutNd):
    r"""Randomly zero out entire channels (a channel is a 2D feature map,
    e.g., the :math:`j`-th channel of the :math:`i`-th sample in the
    batched input is a 2D tensor :math:`\text{input}[i, j]`).
    Each channel will be zeroed out independently on every forward call with
    probability :attr:`p` using samples from a Bernoulli distribution.

    Usually the input comes from :class:`nn.Conv2d` modules.

    As described in the paper
    `Efficient Object Localization Using Convolutional Networks`_ ,
    if adjacent pixels within feature maps are strongly correlated
    (as is normally the case in early convolution layers) then i.i.d. dropout
    will not regularize the activations and will otherwise just result
    in an effective learning rate decrease.

    In this case, :func:`nn.Dropout2d` will help promote independence between
    feature maps and should be used instead.

    Args:
        p (float, optional): probability of an element to be zeroed.
        inplace (bool, optional): If set to ``True``, will do this operation
            in-place

    Shape:
        - Input: :math:`(N, C, H, W)`
        - Output: :math:`(N, C, H, W)` (same shape as input)

    Examples::

        >>> m = nn.Dropout2d(p=0.2)
        >>> input = torch.randn(20, 16, 32, 32)
        >>> output = m(input)

    .. _Efficient Object Localization Using Convolutional Networks:
        http://arxiv.org/abs/1411.4280
    """

    @weak_script_method
    def forward(self, input):
        return F.dropout2d(input, self.p, self.training, self.inplace)
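The channel-wise variant above draws one Bernoulli sample per feature map rather than per element. A minimal pure-Python sketch over `[N][C][H][W]` nested lists (the name `channel_dropout2d` is illustrative, not part of this module):

```python
import random


def channel_dropout2d(batch, p=0.5, training=True):
    """Illustrative sketch: drop entire channels (2D feature maps) with
    probability p, scaling kept channels by 1/(1-p)."""
    if not training or p == 0.0:
        return batch               # evaluation mode: identity function
    scale = 1.0 / (1.0 - p)
    out = []
    for sample in batch:           # batch is [N][C][H][W] nested lists
        new_sample = []
        for channel in sample:
            if random.random() < p:
                # one coin flip zeroes the whole feature map
                new_sample.append([[0.0] * len(row) for row in channel])
            else:
                new_sample.append([[v * scale for v in row] for row in channel])
        out.append(new_sample)
    return out
```

Because correlated neighbouring activations are dropped together, the regularization is not defeated by spatial redundancy the way element-wise dropout is.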
@weak_module
class Dropout3d(_DropoutNd):
    r"""Randomly zero out entire channels (a channel is a 3D feature map,
    e.g., the :math:`j`-th channel of the :math:`i`-th sample in the
    batched input is a 3D tensor :math:`\text{input}[i, j]`).
    Each channel will be zeroed out independently on every forward call with
    probability :attr:`p` using samples from a Bernoulli distribution.

    Usually the input comes from :class:`nn.Conv3d` modules.

    As described in the paper
    `Efficient Object Localization Using Convolutional Networks`_ ,
    if adjacent pixels within feature maps are strongly correlated
    (as is normally the case in early convolution layers) then i.i.d. dropout
    will not regularize the activations and will otherwise just result
    in an effective learning rate decrease.

    In this case, :func:`nn.Dropout3d` will help promote independence between
    feature maps and should be used instead.

    Args:
        p (float, optional): probability of an element to be zeroed.
        inplace (bool, optional): If set to ``True``, will do this operation
            in-place

    Shape:
        - Input: :math:`(N, C, D, H, W)`
        - Output: :math:`(N, C, D, H, W)` (same shape as input)

    Examples::

        >>> m = nn.Dropout3d(p=0.2)
        >>> input = torch.randn(20, 16, 4, 32, 32)
        >>> output = m(input)

    .. _Efficient Object Localization Using Convolutional Networks:
        http://arxiv.org/abs/1411.4280
    """

    @weak_script_method
    def forward(self, input):
        return F.dropout3d(input, self.p, self.training, self.inplace)
@weak_module
class AlphaDropout(_DropoutNd):
    r"""Applies Alpha Dropout over the input.

    Alpha Dropout is a type of Dropout that maintains the self-normalizing
    property.
    For an input with zero mean and unit standard deviation, the output of
    Alpha Dropout maintains the original mean and standard deviation of the
    input.
    Alpha Dropout goes hand-in-hand with the SELU activation function, which
    ensures that the outputs have zero mean and unit standard deviation.

    During training, it randomly masks some of the elements of the input
    tensor with probability *p* using samples from a Bernoulli distribution.
    The elements to be masked are randomized on every forward call, and scaled
    and shifted to maintain zero mean and unit standard deviation.

    During evaluation the module simply computes an identity function.

    More details can be found in the paper `Self-Normalizing Neural Networks`_ .

    Args:
        p (float): probability of an element to be dropped. Default: 0.5
        inplace (bool, optional): If set to ``True``, will do this operation
            in-place

    Shape:
        - Input: :math:`(*)`. Input can be of any shape
        - Output: :math:`(*)`. Output is of the same shape as input

    Examples::

        >>> m = nn.AlphaDropout(p=0.2)
        >>> input = torch.randn(20, 16)
        >>> output = m(input)

    .. _Self-Normalizing Neural Networks: https://arxiv.org/abs/1706.02515
    """

    @weak_script_method
    def forward(self, input):
        return F.alpha_dropout(input, self.p, self.training)
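The "scaled and shifted" step can be made concrete: dropped units are set to the SELU saturation value :math:`\alpha' = -\lambda\alpha`, and an affine transform :math:`a x + b` restores zero mean and unit variance. A sketch under that formulation (the constants come from the Self-Normalizing Neural Networks paper; `alpha_dropout_sketch` is an illustrative name, not this module's API):

```python
import random

# Fixed-point constants of the SELU activation (Klambauer et al., 2017)
ALPHA = 1.6732632423543772
SCALE = 1.0507009873554805


def alpha_dropout_sketch(values, p=0.5, training=True):
    """Illustrative sketch: replace dropped units with the SELU saturation
    value alpha' = -SCALE * ALPHA, then apply the affine map a*x + b that
    restores zero mean and unit variance for standard-normal inputs."""
    if not training or p == 0.0:
        return list(values)        # evaluation mode: identity function
    alpha_prime = -SCALE * ALPHA
    keep = 1.0 - p
    # Chosen so that Var[a*(x*m + alpha'*(1-m)) + b] = 1 and the mean is 0
    a = (keep + alpha_prime ** 2 * keep * p) ** -0.5
    b = -a * alpha_prime * p
    return [a * (v if random.random() >= p else alpha_prime) + b
            for v in values]
```

Each output element is therefore either `a*v + b` (kept) or the constant `a*alpha_prime + b` (dropped), rather than zero as in ordinary dropout.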
@weak_module
class FeatureAlphaDropout(_DropoutNd):

    @weak_script_method
    def forward(self, input):
        return F.feature_alpha_dropout(input, self.p, self.training)