Caffe2 - Python API
A deep learning, cross platform ML framework
caffe2.python.model_helper.ModelHelper Class Reference
Inheritance diagram for caffe2.python.model_helper.ModelHelper:
caffe2.python.cnn.CNNModelHelper caffe2.python.layer_model_helper.LayerModelHelper caffe2.python.models.seq2seq.seq2seq_model_helper.Seq2SeqModelHelper

Public Member Functions

def __init__ (self, name=None, init_params=True, allow_not_known_ops=True, skip_sparse_optim=False, param_model=None, arg_scope=None)
def arg_scope (self)
def get_name (self)
def create_param (self, param_name, shape, initializer, tags=None)
def get_param_info (self, param)
def add_param_DEPRECATED (self, param, key=None, shape=None, length=None)
def param_info (self, grad_type=None, id=None)
def AddParameter (self, param, tags=None)
def GetParams (self, namescope=None, top_scope=False)
def Proto (self)
def InitProto (self)
def RunAllOnGPU (self, args, kwargs)
def CreateDB (self, blob_out, db, db_type, kwargs)
def AddGradientOperators (self, args, kwargs)
def get_param_to_grad (self, params)
def GetOptimizationParamInfo (self, params=None)
def Validate (self)
def GetComputedParams (self, namescope=None)
def GetAllParams (self, namescope=None)
def TensorProtosDBInput (self, unused_blob_in, blob_out, batch_size, db, db_type, kwargs)
def GetDevices (self)
def __getattr__ (self, op_type)
def __dir__ (self)

Public Attributes


Detailed Description

A helper model so we can manage models more easily. It contains the net
definition and parameter storage. You can add an Operator yourself, e.g.

    model = model_helper.ModelHelper(name="train_net")
    # init your weight and bias as w and b
    w = model.param_init_net.XavierFill(...)
    b = model.param_init_net.ConstantFill(...)
    fc1 = model.FC([input, w, b], output, **kwargs)

or you can use helper functions in brew module without manually
defining parameter initializations and operators.

    model = model_helper.ModelHelper(name="train_net")
    fc1 = brew.fc(model, input, output, dim_in, dim_out, **kwargs)

Definition at line 74 of file

Member Function Documentation

def caffe2.python.model_helper.ModelHelper.__getattr__ (self, op_type)
Catch-all for all other operators, mostly those without params.

Definition at line 444 of file
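The catch-all idea can be shown without caffe2 itself. Below is a minimal, self-contained sketch (class name `OperatorCatchAll` is hypothetical, not a caffe2 API) of how an unknown attribute lookup can be turned into an operator-recording call:

```python
class OperatorCatchAll:
    """Hypothetical sketch of the catch-all pattern, not caffe2 itself:
    attribute lookups that fail normal resolution become operator calls."""

    def __init__(self):
        self.ops = []  # recorded (op_type, inputs, kwargs) tuples

    def __getattr__(self, op_type):
        # __getattr__ only fires for names NOT found by normal lookup,
        # so regular attributes like self.ops are unaffected.
        if op_type.startswith('__'):
            raise AttributeError(op_type)

        def add_op(*inputs, **kwargs):
            self.ops.append((op_type, inputs, kwargs))
            return '%s_out' % op_type  # stand-in for an output blob name

        return add_op


net = OperatorCatchAll()
out = net.Relu('fc1')   # no Relu method exists; caught by __getattr__
print(out)              # Relu_out
print(net.ops[0][0])    # Relu
```

This is why `model.FC(...)` in the class-level example works even though `ModelHelper` defines no `FC` method.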

def caffe2.python.model_helper.ModelHelper.create_param (self, param_name, shape, initializer, tags=None)
Creates a parameter with the given name and initializer.

If param_name is an instance of BlobReference, then that blob will be used
to store the parameter (no sharing logic will affect its location).

If param_name is a string, then the final blob will be created in the
CurrentNameScope, with all parameter sharing logic applied, i.e. as
'resolved_name_scope/param_name'.

Parameter sharing logic may override CurrentNameScope according to the
rules specified through ParameterSharing contexts. All ParameterSharing
contexts are applied recursively until no further overrides remain; on
each step the best match is applied first.

The following examples should clarify the way ParameterSharing logic works.

As an example if this function is called with parameter 'w':
a. Call from some scope 'global_scope' with no Parameter sharing:
b. Call from scope 'scope_b', with override {'scope_b': 'scope_a'}:
c. Call from scope 'scope_a', with override {'scope_a': ''}:
d. Call from scope 'scope_b/shared', with overrides
  {'scope_b/shared': 'scope_b', 'scope_b': 'scope_a'}:
e. Call from scope 'scope_b/unshared', with overrides
  {'scope_b/shared': 'scope_b', 'scope_b': 'scope_a'}:
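The recursive, best-match-first resolution described above can be sketched in plain Python. The helper `resolve_scope` below is hypothetical (it is not the caffe2 implementation), written only to illustrate how the overrides in the examples would compose:

```python
def resolve_scope(scope, overrides):
    """Hypothetical sketch of recursive override resolution, not the
    actual caffe2 code. On each pass the best (longest) matching prefix
    is rewritten, until no override applies."""
    while True:
        for key in sorted(overrides, key=len, reverse=True):
            if scope == key or scope.startswith(key + '/'):
                scope = overrides[key] + scope[len(key):]
                scope = scope.lstrip('/')  # an empty target drops the prefix
                break
        else:
            return scope


shared = {'scope_b/shared': 'scope_b', 'scope_b': 'scope_a'}
print(resolve_scope('scope_b/shared', shared))    # scope_a
print(resolve_scope('scope_b/unshared', shared))  # scope_a/unshared
print(resolve_scope('scope_a', {'scope_a': ''}))  # '' (empty scope)
```

The parameter name is then formed by joining the resolved scope with 'w', so an empty resolved scope yields just 'w'.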

Definition at line 160 of file

def caffe2.python.model_helper.ModelHelper.get_param_to_grad (self, params)
Given a list of parameters, returns a dict mapping each parameter
to its corresponding gradient.

Definition at line 353 of file
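In essence this is a lookup against the gradient map produced during gradient generation, restricted to the requested parameters. A hedged, pure-Python sketch (the function name and the sample blob names are illustrative, not caffe2 APIs):

```python
def param_to_grad_sketch(params, grad_map):
    """Hypothetical stand-in for get_param_to_grad: filter a gradient
    map (param name -> gradient blob name, as produced by gradient
    generation) down to the requested parameters."""
    return {p: grad_map[p] for p in params if p in grad_map}


# Illustrative blob names, not real caffe2 output.
grad_map = {'fc1_w': 'fc1_w_grad', 'fc1_b': 'fc1_b_grad'}
result = param_to_grad_sketch(['fc1_w'], grad_map)
print(result)  # {'fc1_w': 'fc1_w_grad'}
```

Parameters without a gradient (e.g. computed params) are simply omitted from the result.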

def caffe2.python.model_helper.ModelHelper.GetComputedParams (self, namescope=None)
Returns the computed params in the current namescope. 'Computed params'
are parameters that are not optimized via gradient descent but are
computed directly from the data, such as the running mean and variance
of Spatial Batch Normalization.

Definition at line 410 of file

def caffe2.python.model_helper.ModelHelper.GetOptimizationParamInfo (self, params=None)
Returns a map for param => grad.
If params is not specified, all parameters will be considered.

Definition at line 369 of file

def caffe2.python.model_helper.ModelHelper.GetParams (self, namescope=None, top_scope=False)
Returns the params in the current namescope.

Definition at line 297 of file

def caffe2.python.model_helper.ModelHelper.TensorProtosDBInput (self, unused_blob_in, blob_out, batch_size, db, db_type, kwargs)

Definition at line 430 of file

The documentation for this class was generated from the following file: