[TOC]
opt: A module containing optimization routines.
Base class for interfaces with external optimization algorithms.

Subclass this and implement `_minimize` in order to wrap a new optimization algorithm. `ExternalOptimizerInterface` should not be instantiated directly; instead use e.g. `ScipyOptimizerInterface`.
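For orientation, here is a rough sketch of what such a subclass might look like, wrapping a naive gradient-descent loop. The parameter list of the `_minimize` hook shown below is an assumption made for illustration only; check the `ExternalOptimizerInterface` source of your TensorFlow version for the exact signature.

```python
import numpy as np
import tensorflow as tf

class NaiveGradientDescentInterface(tf.contrib.opt.ExternalOptimizerInterface):
  """Illustrative sketch only; not part of tf.contrib.opt."""

  # NOTE: the exact hook signature is an assumption; consult the base class
  # source for the version you are using.
  def _minimize(self, initial_val, loss_grad_func, equality_funcs,
                equality_grad_funcs, inequality_funcs, inequality_grad_funcs,
                step_callback, optimizer_kwargs):
    # `initial_val` is a flat numpy vector packing all optimization variables;
    # `loss_grad_func(x)` evaluates the loss and its gradient at `x` by
    # running the underlying Session. Constraints are ignored in this sketch.
    x = np.array(initial_val)
    learning_rate = optimizer_kwargs.get('learning_rate', 0.1)
    for _ in range(optimizer_kwargs.get('max_steps', 100)):
      _, grad = loss_grad_func(x)
      x -= learning_rate * np.asarray(grad)
      step_callback(x)
    return x  # final packed values, written back into the variables
```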

`tf.contrib.opt.ExternalOptimizerInterface.__init__(loss, var_list=None, equalities=None, inequalities=None, **optimizer_kwargs)` {#ExternalOptimizerInterface.init}
Initialize a new interface instance.

Args:

*  `loss`: A scalar `Tensor` to be minimized.
*  `var_list`: Optional list of `Variable` objects to update to minimize `loss`. Defaults to the list of variables collected in the graph under the key `GraphKeys.TRAINABLE_VARIABLES`.
*  `equalities`: Optional list of equality constraint scalar `Tensor`s to be held equal to zero.
*  `inequalities`: Optional list of inequality constraint scalar `Tensor`s to be kept nonnegative.
*  `**optimizer_kwargs`: Other subclass-specific keyword arguments.

`tf.contrib.opt.ExternalOptimizerInterface.minimize(session=None, feed_dict=None, fetches=None, step_callback=None, loss_callback=None)` {#ExternalOptimizerInterface.minimize}

Minimize a scalar `Tensor`.

Variables subject to optimization are updated in-place at the end of optimization.

Note that this method does not just return a minimization `Op`, unlike `Optimizer.minimize()`; instead it actually performs minimization by executing commands to control a `Session`.

Args:

*  `session`: A `Session` instance.
*  `feed_dict`: A feed dict to be passed to calls to `session.run`.
*  `fetches`: A list of `Tensor`s to fetch and supply to `loss_callback` as positional arguments.
*  `step_callback`: A function to be called at each optimization step; arguments are the current values of all optimization variables flattened into a single vector.
*  `loss_callback`: A function to be called every time the loss and gradients are computed, with evaluated fetches supplied as positional arguments.
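For example, the `fetches` and `loss_callback` arguments can be combined to log the loss while the external algorithm runs. A minimal sketch using `ScipyOptimizerInterface` (since this base class is not instantiated directly); the variable and option values are illustrative:

```python
import tensorflow as tf

vector = tf.Variable([7., 7.], name='vector')
loss = tf.reduce_sum(tf.square(vector))
optimizer = tf.contrib.opt.ScipyOptimizerInterface(loss, options={'maxiter': 100})

def report_loss(loss_value):
  # Receives the evaluated `fetches` as positional arguments.
  print('loss: %f' % loss_value)

with tf.Session() as session:
  session.run(tf.global_variables_initializer())
  optimizer.minimize(session, fetches=[loss], loss_callback=report_loss)
```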
Optimizer wrapper that maintains a moving average of parameters.

`tf.contrib.opt.MovingAverageOptimizer.__init__(opt, average_decay=0.9999, num_updates=None, sequential_update=True)` {#MovingAverageOptimizer.init}
Construct a new MovingAverageOptimizer.

Args:

*  `opt`: A `tf.Optimizer` that will be used to compute and apply gradients.
*  `average_decay`: Float. Decay to use to maintain the moving averages of trained variables. See `tf.train.ExponentialMovingAverage` for details.
*  `num_updates`: Optional count of the number of updates applied to variables. See `tf.train.ExponentialMovingAverage` for details.
*  `sequential_update`: Bool. If False, will compute the moving average at the same time as the model is updated, potentially doing benign data races. If True, will update the moving average after gradient updates.
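A minimal construction sketch; the wrapped optimizer and decay value are illustrative:

```python
import tensorflow as tf

# Wrap a plain SGD optimizer so that exponential moving averages of the
# trained variables are maintained alongside the raw values.
base_opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)
opt = tf.contrib.opt.MovingAverageOptimizer(base_opt, average_decay=0.999)
# `opt` is then used like any other optimizer, e.g. train_op = opt.minimize(loss).
```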

`tf.contrib.opt.MovingAverageOptimizer.apply_gradients(grads_and_vars, global_step=None, name=None)` {#MovingAverageOptimizer.apply_gradients}

`tf.contrib.opt.MovingAverageOptimizer.compute_gradients(loss, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None)` {#MovingAverageOptimizer.compute_gradients}

Compute gradients of `loss` for the variables in `var_list`.

This is the first part of `minimize()`. It returns a list of (gradient, variable) pairs where "gradient" is the gradient for "variable". Note that "gradient" can be a `Tensor`, an `IndexedSlices`, or `None` if there is no gradient for the given variable.

Args:

*  `loss`: A `Tensor` containing the value to minimize.
*  `var_list`: Optional list of `tf.Variable` to update to minimize `loss`. Defaults to the list of variables collected in the graph under the key `GraphKeys.TRAINABLE_VARIABLES`.
*  `gate_gradients`: How to gate the computation of gradients. Can be `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
*  `aggregation_method`: Specifies the method used to combine gradient terms. Valid values are defined in the class `AggregationMethod`.
*  `colocate_gradients_with_ops`: If True, try colocating gradients with the corresponding op.
*  `grad_loss`: Optional. A `Tensor` holding the gradient computed for `loss`.

Returns:

A list of (gradient, variable) pairs. Variable is always present, but gradient can be `None`.

Raises:

*  `TypeError`: If `var_list` contains anything else than `Variable` objects.
*  `ValueError`: If some arguments are invalid.
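To post-process gradients before they are applied, call the two halves of `minimize()` separately. A sketch, assuming a scalar `loss` tensor already exists; the clipping step is just an illustrative transformation:

```python
import tensorflow as tf

opt = tf.contrib.opt.MovingAverageOptimizer(
    tf.train.GradientDescentOptimizer(learning_rate=0.1))

# First half of minimize(): build the (gradient, variable) pairs.
grads_and_vars = opt.compute_gradients(loss)

# Transform the gradients, skipping variables that received None.
# (This sketch assumes dense Tensor gradients rather than IndexedSlices.)
processed = [(tf.clip_by_norm(g, 5.0) if g is not None else g, v)
             for g, v in grads_and_vars]

# Second half of minimize(): apply the processed gradients.
train_op = opt.apply_gradients(processed)
```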

`tf.contrib.opt.MovingAverageOptimizer.get_slot(var, name)` {#MovingAverageOptimizer.get_slot}

Return a slot named `name` created for `var` by the Optimizer.

Some `Optimizer` subclasses use additional variables. For example `Momentum` and `Adagrad` use variables to accumulate updates. This method gives access to these `Variable` objects if for some reason you need them.

Use `get_slot_names()` to get the list of slot names created by the `Optimizer`.

Args:

*  `var`: A variable passed to `minimize()` or `apply_gradients()`.
*  `name`: A string.

Returns:

The `Variable` for the slot if it was created, `None` otherwise.

`tf.contrib.opt.MovingAverageOptimizer.get_slot_names()` {#MovingAverageOptimizer.get_slot_names}

Return a list of the names of slots created by the `Optimizer`.

See `get_slot()`.

Returns:

A list of strings.
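As an illustration, with a momentum-based inner optimizer the accumulator slots can be inspected once the training op has been built. The slot name 'momentum' follows the `MomentumOptimizer` convention, and whether the wrapper forwards slot lookups to the inner optimizer can depend on the TensorFlow version, so treat this as a sketch:

```python
import tensorflow as tf

opt = tf.contrib.opt.MovingAverageOptimizer(
    tf.train.MomentumOptimizer(learning_rate=0.1, momentum=0.9))
train_op = opt.minimize(loss)  # assumes a scalar `loss` tensor exists

print(opt.get_slot_names())    # slot names created by the optimizer
for var in tf.trainable_variables():
  accumulator = opt.get_slot(var, 'momentum')
  # `accumulator` is the slot Variable, or None if no such slot was created.
```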

`tf.contrib.opt.MovingAverageOptimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None)` {#MovingAverageOptimizer.minimize}

Add operations to minimize `loss` by updating `var_list`.

This method simply combines calls to `compute_gradients()` and `apply_gradients()`. If you want to process the gradients before applying them, call `compute_gradients()` and `apply_gradients()` explicitly instead of using this function.

Args:

*  `loss`: A `Tensor` containing the value to minimize.
*  `global_step`: Optional `Variable` to increment by one after the variables have been updated.
*  `var_list`: Optional list of `Variable` objects to update to minimize `loss`. Defaults to the list of variables collected in the graph under the key `GraphKeys.TRAINABLE_VARIABLES`.
*  `gate_gradients`: How to gate the computation of gradients. Can be `GATE_NONE`, `GATE_OP`, or `GATE_GRAPH`.
*  `aggregation_method`: Specifies the method used to combine gradient terms. Valid values are defined in the class `AggregationMethod`.
*  `colocate_gradients_with_ops`: If True, try colocating gradients with the corresponding op.
*  `name`: Optional name for the returned operation.
*  `grad_loss`: Optional. A `Tensor` holding the gradient computed for `loss`.

Returns:

An Operation that updates the variables in `var_list`. If `global_step` was not `None`, that operation also increments `global_step`.

Raises:

*  `ValueError`: If some of the variables are not `Variable` objects.
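A short usage sketch, assuming a scalar `loss` tensor already exists in the graph:

```python
import tensorflow as tf

global_step = tf.Variable(0, trainable=False, name='global_step')
opt = tf.contrib.opt.MovingAverageOptimizer(
    tf.train.GradientDescentOptimizer(learning_rate=0.1))

# A single op that computes and applies gradients, maintains the moving
# averages, and increments `global_step`.
train_op = opt.minimize(loss, global_step=global_step)
```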

`tf.contrib.opt.MovingAverageOptimizer.swapping_saver(var_list=None, name='swapping_saver', **kwargs)` {#MovingAverageOptimizer.swapping_saver}

Create a saver swapping moving averages and variables.

You should use this saver during training. It will save the moving averages of the trained parameters under the original parameter names. For evaluation or inference you should use a regular saver and it will automatically use the moving averages for the trained variables.

You must call this function after all variables have been created and after you have called `Optimizer.minimize()`.

Args:

*  `var_list`: List of variables to save, as per `Saver()`. If set to None, will save all the variables that have been created before this call.
*  `name`: The name of the saver.
*  `**kwargs`: Keyword arguments of `Saver()`.

Returns:

A `tf.Saver` object.

Raises:

*  `RuntimeError`: If apply_gradients or minimize has not been called before.
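A training-time sketch; the checkpoint path, step count, and session setup are illustrative, and a scalar `loss` tensor is assumed to exist:

```python
import tensorflow as tf

opt = tf.contrib.opt.MovingAverageOptimizer(
    tf.train.GradientDescentOptimizer(learning_rate=0.1))
train_op = opt.minimize(loss)

# Must be created after minimize()/apply_gradients(), once the moving
# averages exist.
saver = opt.swapping_saver()

with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  for _ in range(1000):
    sess.run(train_op)
  # The checkpoint stores the moving averages under the original variable
  # names, so a plain saver can restore them directly for evaluation.
  saver.save(sess, '/tmp/model.ckpt')  # illustrative path
```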

Wrapper allowing `scipy.optimize.minimize` to operate a `tf.Session`.
Example:
```python
vector = tf.Variable([7., 7.], name='vector')

# Make vector norm as small as possible.
loss = tf.reduce_sum(tf.square(vector))

optimizer = ScipyOptimizerInterface(loss, options={'maxiter': 100})

with tf.Session() as session:
  session.run(tf.global_variables_initializer())
  optimizer.minimize(session)
  # The value of vector should now be [0., 0.].
```
Example with constraints:
```python
vector = tf.Variable([7., 7.], name='vector')

# Make vector norm as small as possible.
loss = tf.reduce_sum(tf.square(vector))

# Ensure the vector's y component is = 1.
equalities = [vector[1] - 1.]

# Ensure the vector's x component is >= 1.
inequalities = [vector[0] - 1.]

# Our default SciPy optimization algorithm, L-BFGS-B, does not support
# general constraints. Thus we use SLSQP instead.
optimizer = ScipyOptimizerInterface(
    loss, equalities=equalities, inequalities=inequalities, method='SLSQP')

with tf.Session() as session:
  session.run(tf.global_variables_initializer())
  optimizer.minimize(session)
  # The value of vector should now be [1., 1.].
```

`tf.contrib.opt.ScipyOptimizerInterface.__init__(loss, var_list=None, equalities=None, inequalities=None, **optimizer_kwargs)` {#ScipyOptimizerInterface.init}
Initialize a new interface instance.

Args:

*  `loss`: A scalar `Tensor` to be minimized.
*  `var_list`: Optional list of `Variable` objects to update to minimize `loss`. Defaults to the list of variables collected in the graph under the key `GraphKeys.TRAINABLE_VARIABLES`.
*  `equalities`: Optional list of equality constraint scalar `Tensor`s to be held equal to zero.
*  `inequalities`: Optional list of inequality constraint scalar `Tensor`s to be kept nonnegative.
*  `**optimizer_kwargs`: Other subclass-specific keyword arguments.

`tf.contrib.opt.ScipyOptimizerInterface.minimize(session=None, feed_dict=None, fetches=None, step_callback=None, loss_callback=None)` {#ScipyOptimizerInterface.minimize}

Minimize a scalar `Tensor`.

Variables subject to optimization are updated in-place at the end of optimization.

Note that this method does not just return a minimization `Op`, unlike `Optimizer.minimize()`; instead it actually performs minimization by executing commands to control a `Session`.

Args:

*  `session`: A `Session` instance.
*  `feed_dict`: A feed dict to be passed to calls to `session.run`.
*  `fetches`: A list of `Tensor`s to fetch and supply to `loss_callback` as positional arguments.
*  `step_callback`: A function to be called at each optimization step; arguments are the current values of all optimization variables flattened into a single vector.
*  `loss_callback`: A function to be called every time the loss and gradients are computed, with evaluated fetches supplied as positional arguments.
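When the loss depends on placeholders, `feed_dict` is passed to each internal `session.run` call, and `step_callback` can be used to monitor the packed variable vector. A sketch; the placeholder, target values, and callback are illustrative:

```python
import tensorflow as tf

target = tf.placeholder(tf.float32, shape=[2])
vector = tf.Variable([7., 7.], name='vector')
loss = tf.reduce_sum(tf.square(vector - target))
optimizer = tf.contrib.opt.ScipyOptimizerInterface(loss)

def report_step(packed_vector):
  # Called after each step with all optimization variables flattened
  # into a single vector.
  print(packed_vector)

with tf.Session() as session:
  session.run(tf.global_variables_initializer())
  optimizer.minimize(session,
                     feed_dict={target: [1., 2.]},
                     step_callback=report_step)
```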

Wrapper optimizer that clips the norm of specified variables after update.

This optimizer delegates all aspects of gradient calculation and application to an underlying optimizer. After applying gradients, this optimizer then clips the variable to have a maximum L2 norm along specified dimensions. NB: this is quite different from clipping the norm of the gradients.

Multiple instances of `VariableClippingOptimizer` may be chained to specify different max norms for different subsets of variables.

This is more efficient at serving-time than using normalization during embedding lookup, at the expense of more expensive training and fewer guarantees about the norms.

`tf.contrib.opt.VariableClippingOptimizer.__init__(opt, vars_to_clip_dims, max_norm, use_locking=False, colocate_clip_ops_with_vars=False, name='VariableClipping')` {#VariableClippingOptimizer.init}
Construct a new clip-norm optimizer.

Args:

*  `opt`: The actual optimizer that will be used to compute and apply the gradients. Must be one of the Optimizer classes.
*  `vars_to_clip_dims`: A dict with keys as Variables and values as lists of dimensions along which to compute the L2-norm. See `tf.clip_by_norm` for more details.
*  `max_norm`: The L2-norm to clip to, for all variables specified.
*  `use_locking`: If `True` use locks for clip update operations.
*  `colocate_clip_ops_with_vars`: If `True`, try colocating the clip norm ops with the corresponding variable.
*  `name`: Optional name prefix for the operations created when applying gradients. Defaults to "VariableClipping".
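A construction sketch for the common embedding case; the variable shape, learning rate, and norm bound are illustrative, and a scalar `loss` involving `embeddings` is assumed to exist:

```python
import tensorflow as tf

# Embedding matrix whose rows should each keep an L2 norm of at most 1.0.
embeddings = tf.get_variable('embeddings', shape=[1000, 64])

base_opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)
opt = tf.contrib.opt.VariableClippingOptimizer(
    base_opt,
    vars_to_clip_dims={embeddings: [1]},  # row norms computed along dimension 1
    max_norm=1.0)

# Gradients are computed and applied by `base_opt`; afterwards each row of
# `embeddings` is clipped back to the maximum norm.
train_op = opt.minimize(loss)
```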