Caffe2 - Python API
A deep learning, cross-platform ML framework
caffe2.python.optimizer.YellowFinOptimizer Class Reference
Inheritance diagram for caffe2.python.optimizer.YellowFinOptimizer: inherits from caffe2.python.optimizer.Optimizer

Public Member Functions

def __init__ (self, alpha=0.1, mu=0.0, beta=0.999, curv_win_width=20, zero_debias=True, epsilon=0.1**6, policy='fixed', sparse_dedup_aggregator=None, **kwargs)
 
def scale_learning_rate (self, scale)
 
- Public Member Functions inherited from caffe2.python.optimizer.Optimizer
def __init__ (self)
 
def __call__ (self, net, param_init_net, param, grad=None)
 
def get_cpu_blob_name (self, base_str, node_name='')
 
def get_gpu_blob_name (self, base_str, gpu_id, node_name)
 
def make_unique_blob_name (self, base_str)
 
def build_lr (self, net, param_init_net, base_learning_rate, learning_rate_blob=None, policy="fixed", iter_val=0, **kwargs)
 
def add_lr_multiplier (self, lr_multiplier)
 
def get_auxiliary_parameters (self)
 
def scale_learning_rate (self, *args, **kwargs)
 
def create_lars_inputs (self, param_init_net, weight_decay, trust, lr_max)
 

Public Attributes

 alpha
 
 mu
 
 beta
 
 curv_win_width
 
 zero_debias
 
 epsilon
 
 policy
 
 sparse_dedup_aggregator
 
 init_kwargs
 

Additional Inherited Members

- Static Public Member Functions inherited from caffe2.python.optimizer.Optimizer
def dedup (net, sparse_dedup_aggregator, grad)
 

Detailed Description

YellowFin: An automatic tuner for momentum SGD

See https://arxiv.org/abs/1706.03471 for more details. This implementation
maintains a separate learning rate and momentum for each parameter.
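The sketch below shows one way to attach this optimizer to a ModelHelper network. It is a minimal example, not the canonical usage: it assumes the build_yellowfin convenience wrapper defined in the same optimizer.py module and the usual brew/workspace training workflow, and the blob names, shapes, and hyperparameter values are illustrative only.

    from caffe2.python import brew, model_helper, optimizer, workspace
    import numpy as np

    # Toy one-layer regression model trained with YellowFin.
    model = model_helper.ModelHelper(name="yellowfin_example")
    pred = brew.fc(model, "data", "pred", dim_in=4, dim_out=1)
    dist = model.net.SquaredL2Distance([pred, "label"], "dist")
    loss = model.net.AveragedLoss(dist, "loss")
    model.AddGradientOperators([loss])

    # Attach YellowFin to every trainable parameter of the model.
    # alpha/mu set only the initial learning rate and momentum; the tuner
    # then adapts them per parameter during training.
    optimizer.build_yellowfin(model, base_learning_rate=0.1, mu=0.0)

    # Illustrative data; shapes match the FC layer above.
    workspace.FeedBlob("data", np.random.rand(8, 4).astype(np.float32))
    workspace.FeedBlob("label", np.random.rand(8, 1).astype(np.float32))
    workspace.RunNetOnce(model.param_init_net)
    workspace.CreateNet(model.net)
    workspace.RunNet(model.net.Proto().name, num_iter=20)

The remaining constructor arguments listed above (beta, curv_win_width, zero_debias, epsilon, policy, sparse_dedup_aggregator) should be accepted as keyword arguments by the wrapper, which forwards them to this constructor.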

Definition at line 1012 of file optimizer.py.


The documentation for this class was generated from the following file: caffe2/python/optimizer.py