Caffe2 - Python API
A deep learning, cross platform ML framework
caffe2.python.core.IR Class Reference
Inheritance diagram for caffe2.python.core.IR: (diagram not shown)

Public Member Functions

def __init__ (self, operators)
 
def SanityCheck (self, operators)
 
def Play (self, op)
 
def CheckGradientOperatorInput (self, grad_op_input, g_output, fwd_op_idx, locally_generated_blobs)
 
def AppendSparseGenerators (self, sparse_generators)
 
def BuildGradientGenerators (self, fwd_op_idx, gradient_ops, g_output, g_input)
 
def DoGradientAccumulation (self, fwd_op_idx)
 
def GetBackwardPass (self, ys)
 

Public Attributes

 ssa
 
 input_usages
 
 frontier
 
 gradient_frontier
 
 gradient_generators
 
 out_version_history
 
 in_version_history
 

Detailed Description

A simple IR class to keep track of all intermediate representations used
in the gradient computation.

Definition at line 426 of file core.py.

Member Function Documentation

def caffe2.python.core.IR.BuildGradientGenerators (   self,
  fwd_op_idx,
  gradient_ops,
  g_output,
  g_input 
)
Updates gradient_generators and gradient_frontier

Definition at line 588 of file core.py.

def caffe2.python.core.IR.CheckGradientOperatorInput (   self,
  grad_op_input,
  g_output,
  fwd_op_idx,
  locally_generated_blobs 
)
Checks if the gradient operators can be correctly carried out.

Definition at line 494 of file core.py.

def caffe2.python.core.IR.DoGradientAccumulation (   self,
  fwd_op_idx 
)
For each input name in the forward op, check if we will need to
add gradient accumulation. If so, do gradient accumulation and return
the list of gradient operators.

The criteria for doing gradient accumulation are:
(1) the specific input version has been used by multiple operators.
(2) the current fwd_op_idx is the first to use that input version, i.e.
    in the backward pass it is the last that can generate the gradient
    for it.
(3) the operators that used the input have gradient operators that
    together generated more than one gradient.

When accumulation is needed, our current solution is to rename all the
created gradients with internal intermediate names, and then add a
Sum() operator that adds up all the gradients. This may use more memory
due to intermediate storage, but is usually the fastest approach, as one
single sum covers all the intermediate gradients.

Definition at line 842 of file core.py.
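The rename-and-Sum strategy described above can be sketched in plain Python. This is an illustrative sketch, not Caffe2's actual implementation; the `_autosplit_` naming scheme and the dict-based op representation are assumptions for illustration.

```python
def accumulate_gradients(grad_name, generated_grads):
    """Sketch: rename each intermediate gradient to an internal name
    and emit a single Sum op that writes the final gradient blob."""
    if len(generated_grads) <= 1:
        # Zero or one gradient: no accumulation needed.
        return generated_grads, []
    # Rename every generated gradient to an internal intermediate name.
    # The "_autosplit_" suffix is an assumption, not Caffe2's exact scheme.
    renamed = [
        "_%s_autosplit_%d" % (grad_name, i)
        for i in range(len(generated_grads))
    ]
    # One Sum() adds all intermediates into the final gradient blob.
    sum_op = {"type": "Sum", "inputs": renamed, "output": grad_name}
    return renamed, [sum_op]
```

For example, three gradients generated for `w_grad` are renamed to `_w_grad_autosplit_0` through `_w_grad_autosplit_2` and summed by a single op, trading intermediate storage for a single accumulation pass.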

def caffe2.python.core.IR.GetBackwardPass (   self,
  ys 
)
Gets the backward pass that computes the derivatives of given blobs.

Inputs:
  ys: a list or a dictionary specifying what blobs we want to compute
      derivatives of. If the input is a list, we will automatically
      generate their gradients with all-one values; if the input is a
      dictionary, for any dictionary entries that are not None, we will
      take the corresponding blobs as their gradients; for all those
      that are None, we will auto-fill them with 1.

Definition at line 952 of file core.py.
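The list-vs-dictionary rule for ys can be sketched as a small normalization helper. This is an assumed helper for illustration only (Caffe2 internally fills missing gradients with a ConstantFill of 1); `AUTO_ONE` is a placeholder marker, not a Caffe2 name.

```python
AUTO_ONE = "<all-ones>"  # placeholder meaning "auto-fill this gradient with 1"

def normalize_ys(ys):
    """Sketch of the ys rule above: return a {blob: gradient} dict,
    marking every auto-filled gradient with AUTO_ONE."""
    if isinstance(ys, list):
        # A list means: generate an all-one gradient for every blob.
        return {y: AUTO_ONE for y in ys}
    # A dict: keep non-None entries as given gradients, auto-fill the rest.
    return {y: (g if g is not None else AUTO_ONE) for y, g in ys.items()}
```

So `["loss"]` and `{"loss": None}` both request an all-one gradient for `loss`, while `{"loss": "loss_grad"}` uses the blob `loss_grad` as the starting gradient.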

def caffe2.python.core.IR.Play (   self,
  op 
)
"Adds an op to the current IR, and update the internal states to
reflect the blobs and versions after the execution of the op.

Definition at line 472 of file core.py.
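The versioning update Play performs can be sketched as follows. The data layout here (a `frontier` dict of blob versions and an `ssa` list of per-op version records) is an assumption for illustration and not Caffe2's exact internal representation.

```python
def play(op, frontier, ssa):
    """Sketch: record the versions an op reads, then bump the version
    of every blob the op writes."""
    # Inputs are read at their current versions (unseen blobs start at 0).
    in_versions = {b: frontier.get(b, 0) for b in op["inputs"]}
    # Each output write bumps that blob's version.
    out_versions = {}
    for b in op["outputs"]:
        frontier[b] = frontier.get(b, 0) + 1
        out_versions[b] = frontier[b]
    ssa.append((in_versions, out_versions))
```

This is what lets an in-place op (one that reads and writes the same blob) be distinguished in the SSA record: it reads version n and writes version n+1 of the same name.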


The documentation for this class was generated from the following file: core.py