[TOC]
Framework utilities.
Validate and return float type based on `tensors` and `dtype`.

For ops such as matrix multiplication, inputs and weights must be of the same float type. This function validates that all `tensors` are the same type, validates that the type is `dtype` (if supplied), and returns the type. The type must be `dtypes.float32` or `dtypes.float64`. If neither `tensors` nor `dtype` is supplied, defaults to `dtypes.float32`.

Args:

- `tensors`: Tensors of input values. Can include `None` elements, which will be ignored.
- `dtype`: Expected type.

Returns:

Validated type.

Raises:

- `ValueError`: if neither `tensors` nor `dtype` is supplied, or result is not float.
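As a quick illustration, a minimal sketch (assuming this is `tf.contrib.framework.assert_same_float_dtype` from the TF 1.x contrib API; the function name does not appear above):

```python
import tensorflow as tf  # assumes a TF 1.x installation with tf.contrib

x = tf.constant([1.0, 2.0], dtype=tf.float32)
w = tf.constant([[3.0], [4.0]], dtype=tf.float32)

# Returns tf.float32; would raise ValueError if x and w disagreed,
# or if the shared type were not float32/float64.
dtype = tf.contrib.framework.assert_same_float_dtype([x, w])
print(dtype)  # <dtype: 'float32'>
```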
Assert `tensor` is 0-D, of type `tf.int32` or `tf.int64`.

Args:

- `tensor`: `Tensor` to test.
- `name`: Name of the op and of the new `Tensor` if one is created.

Returns:

`tensor`, for chaining.

Raises:

- `ValueError`: if `tensor` is not 0-D, of type `tf.int32` or `tf.int64`.
### `tf.convert_to_tensor_or_sparse_tensor(value, dtype=None, name=None)` {#convert_to_tensor_or_sparse_tensor}

Converts `value` to a `SparseTensor` or `Tensor`.

Args:

- `value`: A `SparseTensor`, `SparseTensorValue`, or an object whose type has a registered `Tensor` conversion function.
- `dtype`: Optional element type for the returned tensor. If missing, the type is inferred from the type of `value`.
- `name`: Optional name to use if a new `Tensor` is created.

Returns:

A `SparseTensor` or `Tensor` based on `value`.

Raises:

- `RuntimeError`: If result type is incompatible with `dtype`.
Returns the appropriate graph to use for the given inputs.

- If `graph` is provided, we validate that all inputs in `op_input_list` are from the same graph.
- Otherwise, we attempt to select a graph from the first Operation- or Tensor-valued input in `op_input_list`, and validate that all other such inputs are in the same graph.
- If the graph was not specified and it could not be inferred from `op_input_list`, we attempt to use the default graph.

Args:

- `op_input_list`: A list of inputs to an operation, which may include `Tensor`, `Operation`, and other objects that may be converted to a graph element.
- `graph`: (Optional) The explicit graph to use.

Returns:

The appropriate graph to use for the given inputs.

Raises:

- `TypeError`: If `op_input_list` is not a list or tuple, or if `graph` is not a `Graph`.
- `ValueError`: If a graph is explicitly passed and not all inputs are from it, or if the inputs are from multiple graphs, or we could not find a graph and there was no default graph.
Returns `True` if `x` is non-decreasing.

Elements of `x` are compared in row-major order. The tensor `[x[0],...]` is non-decreasing if for every adjacent pair we have `x[i] <= x[i+1]`. If `x` has less than two elements, it is trivially non-decreasing.

See also: `is_strictly_increasing`

Args:

- `x`: Numeric `Tensor`.
- `name`: A name for this operation (optional). Defaults to "is_non_decreasing".

Returns:

Boolean `Tensor`, equal to `True` iff `x` is non-decreasing.

Raises:

- `TypeError`: if `x` is not a numeric tensor.
Returns `True` if `x` is strictly increasing.

Elements of `x` are compared in row-major order. The tensor `[x[0],...]` is strictly increasing if for every adjacent pair we have `x[i] < x[i+1]`. If `x` has less than two elements, it is trivially strictly increasing.

See also: `is_non_decreasing`

Args:

- `x`: Numeric `Tensor`.
- `name`: A name for this operation (optional). Defaults to "is_strictly_increasing".

Returns:

Boolean `Tensor`, equal to `True` iff `x` is strictly increasing.

Raises:

- `TypeError`: if `x` is not a numeric tensor.
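A quick sketch of both predicates (assuming the TF 1.x names `tf.contrib.framework.is_non_decreasing` and `is_strictly_increasing`):

```python
import tensorflow as tf  # TF 1.x-era API

x = tf.constant([1, 2, 2, 3])
with tf.Session() as sess:
    print(sess.run(tf.contrib.framework.is_non_decreasing(x)))       # True
    print(sess.run(tf.contrib.framework.is_strictly_increasing(x)))  # False: 2 == 2
```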
Check for tensor types.

Check whether an object is a tensor. Equivalent to `isinstance(x, (tf.Tensor, tf.SparseTensor, tf.Variable))`.

Args:

- `x`: A Python object to check.

Returns:

`True` if `x` is a tensor, `False` if not.
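A minimal sketch (assuming the function is exposed as `tf.contrib.framework.is_tensor`):

```python
import tensorflow as tf  # TF 1.x-era API

print(tf.contrib.framework.is_tensor(tf.constant(1)))    # True
print(tf.contrib.framework.is_tensor(tf.Variable(1.0)))  # True
print(tf.contrib.framework.is_tensor([1, 2, 3]))         # False: plain Python list
```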
Reduce tensors to a scalar sum.

This reduces each tensor in `tensors` to a scalar via `tf.reduce_sum`, then adds them via `tf.add_n`.

Args:

- `tensors`: List of tensors, all of the same numeric type.
- `name`: Tensor name, and scope for all other ops.

Returns:

Total loss tensor, or None if no losses have been configured.

Raises:

- `ValueError`: if `losses` is missing or empty.
Asserts tensor has expected shape.

If `tensor` shape and `expected_shape` are fully defined, assert they match. Otherwise, add an assert op that will validate the shape when `tensor` is evaluated, and set the shape on `tensor`.

Args:

- `expected_shape`: Expected shape to assert, as a 1D array of ints, or tensor of same.
- `tensor`: Tensor whose shape we're validating.

Returns:

`tensor`, perhaps with a dependent assert operation.

Raises:

- `ValueError`: if tensor has an invalid shape.
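A minimal sketch (assuming the function is exposed as `tf.contrib.framework.with_shape`):

```python
import tensorflow as tf  # TF 1.x-era API

x = tf.placeholder(tf.float32)  # static shape unknown at graph-build time
# Sets the static shape to [2, 3] and adds a runtime check that fires
# if x is ever fed with a different shape.
x = tf.contrib.framework.with_shape([2, 3], x)
print(x.get_shape())  # (2, 3)
```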
Assert tensors are the same shape, from the same graph.

Args:

- `expected_tensor`: Tensor with expected shape.
- `tensor`: Tensor of actual values.

Returns:

Tuple of (actual_tensor, label_tensor), possibly with assert ops added.
Decorator for marking functions or methods deprecated.

This decorator logs a deprecation warning whenever the decorated function is called. It has the following format:

  `<function>` (from `<module>`) is deprecated and will be removed after `<date>`. Instructions for updating: `<instructions>`

`<function>` will include the class name if it is a method.

It also edits the docstring of the function: ' (deprecated)' is appended to the first line of the docstring and a deprecation notice is prepended to the rest of the docstring.

Args:

- `date`: String. The date the function is scheduled to be removed. Must be ISO 8601 (YYYY-MM-DD).
- `instructions`: String. Instructions on how to update code using the deprecated function.

Returns:

Decorated function or method.

Raises:

- `ValueError`: If date is not in ISO 8601 format, or instructions are empty.
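A usage sketch (assuming the decorator is exposed as `tf.contrib.framework.deprecated`; `tf.new_op` is a hypothetical replacement named only for the message):

```python
import tensorflow as tf  # TF 1.x-era API

@tf.contrib.framework.deprecated('2016-12-31', 'Use tf.new_op instead.')
def my_op(x):
    """An op that doubles its input."""
    return x * 2

# Calling my_op(...) now logs roughly:
#   my_op (from __main__) is deprecated and will be removed after 2016-12-31.
#   Instructions for updating: Use tf.new_op instead.
```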
### `tf.contrib.framework.deprecated_args(date, instructions, *deprecated_arg_names_or_tuples)` {#deprecated_args}

Decorator for marking specific function arguments as deprecated.

This decorator logs a deprecation warning whenever the decorated function is called with the deprecated argument. It has the following format:

  Calling `<function>` (from `<module>`) with `<arg>` is deprecated and will be removed after `<date>`. Instructions for updating: `<instructions>`

`<function>` will include the class name if it is a method.

It also edits the docstring of the function: ' (deprecated arguments)' is appended to the first line of the docstring and a deprecation notice is prepended to the rest of the docstring.

Args:

- `date`: String. The date the function is scheduled to be removed. Must be ISO 8601 (YYYY-MM-DD).
- `instructions`: String. Instructions on how to update code using the deprecated function.
- `*deprecated_arg_names_or_tuples`: String, or 2-tuple (String, [ok_vals]). The string is the deprecated argument name. Optionally, an ok-value may be provided. If the user-provided argument equals this value, the warning is suppressed.

Returns:

Decorated function or method.

Raises:

- `ValueError`: If date is not in ISO 8601 format, instructions are empty, the deprecated arguments are not present in the function signature, or the second element of a deprecated_tuple is not a list.
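A usage sketch of the signature documented above (`old_arg` is a hypothetical argument name):

```python
import tensorflow as tf  # TF 1.x-era API

@tf.contrib.framework.deprecated_args('2016-12-31', 'Stop passing old_arg.',
                                      'old_arg')
def my_op(x, old_arg=None):
    return x

my_op(1)             # no warning
my_op(1, old_arg=5)  # logs a deprecation warning mentioning old_arg
```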
### `tf.contrib.framework.deprecated_arg_values(date, instructions, **deprecated_kwargs)` {#deprecated_arg_values}

Decorator for marking specific function argument values as deprecated.

This decorator logs a deprecation warning whenever the decorated function is called with the deprecated argument values. It has the following format:

  Calling `<function>` (from `<module>`) with `<arg>`=`<value>` is deprecated and will be removed after `<date>`. Instructions for updating: `<instructions>`

`<function>` will include the class name if it is a method.

It also edits the docstring of the function: ' (deprecated arguments)' is appended to the first line of the docstring and a deprecation notice is prepended to the rest of the docstring.

Args:

- `date`: String. The date the function is scheduled to be removed. Must be ISO 8601 (YYYY-MM-DD).
- `instructions`: String. Instructions on how to update code using the deprecated function.
- `**deprecated_kwargs`: The deprecated argument values.

Returns:

Decorated function or method.

Raises:

- `ValueError`: If date is not in ISO 8601 format, or instructions are empty.
Stores the default arguments for the given set of list_ops.

For usage, please see examples at top of the file.

Args:

- `list_ops_or_scope`: List or tuple of operations to set argument scope for, or a dictionary containing the current scope. When `list_ops_or_scope` is a dict, `kwargs` must be empty. When `list_ops_or_scope` is a list or tuple, then every op in it needs to be decorated with `@add_arg_scope` to work.
- `**kwargs`: keyword=value that will define the defaults for each op in list_ops. All the ops need to accept the given set of arguments.

Returns:

the current_scope, which is a dictionary of {op: {arg: value}}

Raises:

- `TypeError`: if list_ops is not a list or a tuple.
- `ValueError`: if any op in list_ops has not been decorated with @add_arg_scope.
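Since the usage examples referenced above are not reproduced here, a minimal sketch of the pattern (TF 1.x import path; `scale` is a hypothetical op defined only for illustration):

```python
from tensorflow.contrib.framework import add_arg_scope, arg_scope  # TF 1.x

@add_arg_scope
def scale(x, factor=1.0):  # hypothetical op
    return [v * factor for v in x]

# Inside the scope, every call to scale defaults to factor=10.0.
with arg_scope([scale], factor=10.0):
    print(scale([1, 2]))              # [10.0, 20.0] -- scoped default applies
    print(scale([1, 2], factor=2.0))  # [2.0, 4.0]   -- explicit value wins
```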
Decorates a function with args so it can be used within an arg_scope.

Args:

- `func`: function to decorate.

Returns:

A tuple with the decorated function func_with_args().

Checks whether a func has been decorated with @add_arg_scope or not.

Args:

- `func`: function to check.

Returns:

a boolean.

Returns the list of kwargs that arg_scope can set for a func.

Args:

- `func`: function which has been decorated with @add_arg_scope.

Returns:

a list of kwargs names.
Adds a variable to the `GraphKeys.MODEL_VARIABLES` collection.

Args:

- `var`: a variable.
Asserts `global_step_tensor` is a scalar int `Variable` or `Tensor`.

Args:

- `global_step_tensor`: `Tensor` to test.
### `tf.contrib.framework.assert_or_get_global_step(graph=None, global_step_tensor=None)` {#assert_or_get_global_step}

Verifies that a global step tensor is valid or gets one if None is given.

If `global_step_tensor` is not None, check that it is a valid global step tensor (using `assert_global_step`). Otherwise find a global step tensor using `get_global_step` and return it.

Args:

- `graph`: The graph to find the global step tensor for.
- `global_step_tensor`: The tensor to check for suitability as a global step. If None is given (the default), find a global step tensor.

Returns:

A tensor suitable as a global step, or `None` if none was provided and none was found.
Creates an operation to assign specific variables from a checkpoint.

Args:

- `model_path`: The full path to the model checkpoint. To get the latest checkpoint use `model_path = tf.train.latest_checkpoint(checkpoint_dir)`
- `var_list`: A list of `Variable` objects or a dictionary mapping names in the checkpoint to the corresponding variables to initialize. If empty or None, it would return no_op(), None.

Returns:

the restore_op and the feed_dict that need to be run to restore var_list.

Raises:

- `ValueError`: If the checkpoint specified at `model_path` is missing one of the variables in `var_list`.
### `tf.contrib.framework.assign_from_checkpoint_fn(model_path, var_list, ignore_missing_vars=False, reshape_variables=False)` {#assign_from_checkpoint_fn}

Returns a function that assigns specific variables from a checkpoint.

Args:

- `model_path`: The full path to the model checkpoint. To get the latest checkpoint use `model_path = tf.train.latest_checkpoint(checkpoint_dir)`
- `var_list`: A list of `Variable` objects or a dictionary mapping names in the checkpoint to the corresponding variables to initialize. If empty or None, it would return no_op(), None.
- `ignore_missing_vars`: Boolean, if True it would ignore variables missing in the checkpoint with a warning instead of failing.
- `reshape_variables`: Boolean, if True it would automatically reshape variables which are of different shape than the ones stored in the checkpoint but which have the same number of elements.

Returns:

A function that takes a single argument, a `tf.Session`, that applies the assignment operation.

Raises:

- `ValueError`: If the checkpoint specified at `model_path` is missing one of the variables in `var_list`.
Creates an assignment operation from a given mapping.

This function provides a mechanism for performing assignment of variables to values in a way that does not fill the graph with large assignment values.

Args:

- `var_names_to_values`: A map from variable names to values.

Returns:

- `assign_op`: An `Operation` that assigns each of the given variables to the requested values.
- `feed_dict`: The feed dictionary to use when evaluating `assign_op`.

Raises:

- `ValueError`: if any of the given variable names were not found.
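A minimal sketch (assuming the function is exposed as `tf.contrib.framework.assign_from_values`; the variable name is chosen for illustration):

```python
import numpy as np
import tensorflow as tf  # TF 1.x-era API

weights = tf.get_variable('weights', shape=[2, 2])
assign_op, feed_dict = tf.contrib.framework.assign_from_values(
    {'weights': np.ones([2, 2], dtype=np.float32)})

with tf.Session() as sess:
    # Values travel through feed placeholders instead of being
    # embedded in the graph as large constants.
    sess.run(assign_op, feed_dict)
```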
Returns a function that assigns specific variables from the given values.

This function provides a mechanism for performing assignment of variables to values in a way that does not fill the graph with large assignment values.

Args:

- `var_names_to_values`: A map from variable names to values.

Returns:

A function that takes a single argument, a `tf.Session`, that applies the assignment operation.

Raises:

- `ValueError`: if any of the given variable names were not found.
Create global step tensor in graph.

Args:

- `graph`: The graph in which to create the global step. If missing, use default graph.

Returns:

Global step tensor.

Raises:

- `ValueError`: if global step key is already defined.
Get the global step tensor.

The global step tensor must be an integer variable. We first try to find it in the collection `GLOBAL_STEP`, or by name `global_step:0`.

Args:

- `graph`: The graph to find the global step in. If missing, use default graph.

Returns:

The global step variable, or `None` if none was found.

Raises:

- `TypeError`: If the global step tensor has a non-integer type, or if it is not a `Variable`.
Returns and creates (if necessary) the global step variable.

Args:

- `graph`: The graph in which to create the global step. If missing, use default graph.

Returns:

the tensor representing the global step variable.
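A short sketch (assuming the TF 1.x name `tf.contrib.framework.get_or_create_global_step`):

```python
import tensorflow as tf  # TF 1.x-era API

global_step = tf.contrib.framework.get_or_create_global_step()
loss = tf.get_variable('loss', initializer=1.0)
# Passing global_step makes the optimizer increment it on every update.
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(
    loss, global_step=global_step)
```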
Gets the list of local variables, filtered by scope and/or suffix.

Args:

- `scope`: an optional scope for filtering the variables to return.
- `suffix`: an optional suffix for filtering the variables to return.

Returns:

a list of variables in collection with scope and suffix.
Gets the list of model variables, filtered by scope and/or suffix.

Args:

- `scope`: an optional scope for filtering the variables to return.
- `suffix`: an optional suffix for filtering the variables to return.

Returns:

a list of variables in collection with scope and suffix.
Gets the variable uniquely identified by that var_op_name.

Args:

- `var_op_name`: the full name of the variable op, including the scope.

Returns:

a tensorflow variable.

Raises:

- `ValueError`: if no variable uniquely identified by the name exists.
Gets the list of variables that were given that name.

Args:

- `given_name`: name given to the variable without any scope.
- `scope`: an optional scope for filtering the variables to return.

Returns:

a copied list of variables with the given name and scope.
Gets the list of variables that end with the given suffix.

Args:

- `suffix`: suffix for filtering the variables to return.
- `scope`: an optional scope for filtering the variables to return.

Returns:

a copied list of variables with the given suffix and scope.
### `tf.contrib.framework.get_variables_to_restore(include=None, exclude=None)` {#get_variables_to_restore}

Gets the list of the variables to restore.

Args:

- `include`: an optional list/tuple of scope strings for filtering which variables from the VARIABLES collection to include. None would include all the variables.
- `exclude`: an optional list/tuple of scope strings for filtering which variables from the VARIABLES collection to exclude. If None, it would not exclude any.

Returns:

a list of variables to restore.

Raises:

- `TypeError`: include or exclude is provided but is not a list or a tuple.
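A filtering sketch (scope names are hypothetical):

```python
import tensorflow as tf  # TF 1.x-era API

with tf.variable_scope('features'):
    w = tf.get_variable('w', shape=[3])
with tf.variable_scope('logits'):
    b = tf.get_variable('b', shape=[3])

# Everything except the 'logits' scope, handed to a Saver for restoring.
variables_to_restore = tf.contrib.framework.get_variables_to_restore(
    exclude=['logits'])
saver = tf.train.Saver(variables_to_restore)  # restores only 'features/w'
```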
### `tf.contrib.framework.get_variables(scope=None, suffix=None, collection='variables')` {#get_variables}

Gets the list of variables, filtered by scope and/or suffix.

Args:

- `scope`: an optional scope for filtering the variables to return. Can be a variable scope or a string.
- `suffix`: an optional suffix for filtering the variables to return.
- `collection`: in which collection to search. Defaults to `GraphKeys.GLOBAL_VARIABLES`.

Returns:

a list of variables in collection with scope and suffix.
### `tf.contrib.framework.local_variable(initial_value, validate_shape=True, name=None)` {#local_variable}

Create variable and add it to `GraphKeys.LOCAL_VARIABLES` collection.

Args:

- `initial_value`: See `variables.Variable.__init__`.
- `validate_shape`: See `variables.Variable.__init__`.
- `name`: See `variables.Variable.__init__`.

Returns:

New variable.
Gets an existing model variable with these parameters or creates a new one.

Args:

- `name`: the name of the new or existing variable.
- `shape`: shape of the new or existing variable.
- `dtype`: type of the new or existing variable (defaults to `DT_FLOAT`).
- `initializer`: initializer for the variable if one is created.
- `regularizer`: a (Tensor -> Tensor or None) function; the result of applying it on a newly created variable will be added to the collection GraphKeys.REGULARIZATION_LOSSES and can be used for regularization.
- `trainable`: If `True` also add the variable to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
- `collections`: A list of collection names to which the Variable will be added. Note that the variable is always also added to the `GraphKeys.GLOBAL_VARIABLES` and `GraphKeys.MODEL_VARIABLES` collections.
- `caching_device`: Optional device string or function describing where the Variable should be cached for reading. Defaults to the Variable's device.
- `device`: Optional device to place the variable. It can be a string or a function that is called to get the device for the variable.
- `partitioner`: Optional callable that accepts a fully defined `TensorShape` and dtype of the `Variable` to be created, and returns a list of partitions for each axis (currently only one axis can be partitioned).
- `custom_getter`: Callable that allows overwriting the internal get_variable method and has to have the same signature.

Returns:

The created or existing variable.
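A creation sketch (assuming this is `tf.contrib.framework.model_variable` from the TF 1.x contrib API):

```python
import tensorflow as tf  # TF 1.x-era API

weights = tf.contrib.framework.model_variable(
    'weights',
    shape=[10, 10],
    initializer=tf.zeros_initializer(),
    regularizer=tf.contrib.layers.l2_regularizer(0.05))

# The variable is tracked as a model variable in addition to GLOBAL_VARIABLES.
assert weights in tf.contrib.framework.get_model_variables()
```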
Gets an existing variable with these parameters or creates a new one.

Args:

- `name`: the name of the new or existing variable.
- `shape`: shape of the new or existing variable.
- `dtype`: type of the new or existing variable (defaults to `DT_FLOAT`).
- `initializer`: initializer for the variable if one is created.
- `regularizer`: a (Tensor -> Tensor or None) function; the result of applying it on a newly created variable will be added to the collection GraphKeys.REGULARIZATION_LOSSES and can be used for regularization.
- `trainable`: If `True` also add the variable to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`).
- `collections`: A list of collection names to which the Variable will be added. If None it would default to `tf.GraphKeys.GLOBAL_VARIABLES`.
- `caching_device`: Optional device string or function describing where the Variable should be cached for reading. Defaults to the Variable's device.
- `device`: Optional device to place the variable. It can be a string or a function that is called to get the device for the variable.
- `partitioner`: Optional callable that accepts a fully defined `TensorShape` and dtype of the `Variable` to be created, and returns a list of partitions for each axis (currently only one axis can be partitioned).
- `custom_getter`: Callable that allows overwriting the internal get_variable method and has to have the same signature.

Returns:

The created or existing variable.
Device chooser for variables.

When using a parameter server it will assign them in a round-robin fashion. When not using a parameter server it allows GPU or CPU placement.

### `tf.contrib.framework.VariableDeviceChooser.__init__(num_tasks=0, job_name='ps', device_type='CPU', device_index=0)` {#VariableDeviceChooser.__init__}

Initialize VariableDeviceChooser.

To use with 2 parameter servers: `VariableDeviceChooser(2)`.

To use without parameter servers: `VariableDeviceChooser()`, or `VariableDeviceChooser(device_type='GPU')` for GPU placement.

Args:

- `num_tasks`: number of tasks.
- `job_name`: String, a name for the parameter server job.
- `device_type`: Optional device type string (e.g. "CPU" or "GPU").
- `device_index`: int. Optional device index. If left unspecified, device represents 'any' device_index.
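A placement sketch, passing the chooser as the `device` callable of the `variable` helper documented above (the import path and placement behavior are assumptions of this sketch):

```python
import tensorflow as tf  # TF 1.x-era API
from tensorflow.contrib.framework import VariableDeviceChooser, variable

# Round-robin variable placement across 2 parameter server tasks.
chooser = VariableDeviceChooser(num_tasks=2)
a = variable('a', shape=[10], device=chooser)
b = variable('b', shape=[10], device=chooser)  # placed on the next ps task
```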
### `tf.contrib.framework.zero_initializer(ref, use_locking=True, name='zero_initializer')` {#zero_initializer}

Initialize 'ref' with all zeros. The ref tensor should be uninitialized; if it is already initialized, you will get a ValueError. This op is intended to save memory during initialization.

Args:

- `ref`: ref of the tensor that needs to be zero initialized.
- `name`: optional name for this operation.

Returns:

ref that is initialized.

Raises:

- `ValueError`: If ref tensor is initialized.
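A minimal usage sketch (the variable name and shape are illustrative):

```python
import tensorflow as tf  # TF 1.x-era API

var = tf.get_variable('big_var', shape=[1000, 1000], dtype=tf.float32)
# Zero-fills the variable in place rather than materializing a large
# zeros constant in the graph, saving memory at initialization time.
init_op = tf.contrib.framework.zero_initializer(var)

with tf.Session() as sess:
    sess.run(init_op)
```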
Returns CheckpointReader for latest checkpoint.

Args:

- `filepattern`: Directory with checkpoints file or path to checkpoint.

Returns:

`CheckpointReader` object.

Raises:

- `ValueError`: if checkpoint_dir doesn't have 'checkpoint' file or checkpoints.
Returns list of all variables in the latest checkpoint.

Args:

- `checkpoint_dir`: Directory with checkpoints file or path to checkpoint.

Returns:

List of tuples `(name, shape)`.
Returns a Tensor with the contents of the given variable in the checkpoint.

Args:

- `checkpoint_dir`: Directory with checkpoints file or path to checkpoint.
- `name`: Name of the tensor to return.

Returns:

`Tensor` object.
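An inspection sketch (assuming the TF 1.x names `tf.contrib.framework.list_variables` and `load_variable`; the path is hypothetical):

```python
import tensorflow as tf  # TF 1.x-era API

ckpt_dir = '/tmp/my_model'  # hypothetical checkpoint directory
for name, shape in tf.contrib.framework.list_variables(ckpt_dir):
    print(name, shape)

# Pull one saved variable's value as a numpy array.
weights = tf.contrib.framework.load_variable(ckpt_dir, 'weights')
```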
Using an assignment map, initializes current variables with loaded tensors.

Note: This overrides default initialization ops of specified variables and redefines dtype.

The assignment map supports the following syntax:

- `'checkpoint_scope_name/': 'scope_name/'` - will load all variables in current `scope_name` from `checkpoint_scope_name` with matching variable names.
- `'checkpoint_scope_name/some_other_variable': 'scope_name/variable_name'` - will initialize the `scope_name/variable_name` variable from `checkpoint_scope_name/some_other_variable`.
- `'scope_variable_name': variable` - will initialize the given `tf.Variable` object with a variable from the checkpoint.
- `'scope_variable_name': list(variable)` - will initialize a list of partitioned variables with a variable from the checkpoint.
- `'/': 'scope_name/'` - will load all variables in current `scope_name` from the checkpoint's root (e.g. no scope).

Supports loading into partitioned variables, which are represented as `'/part_<part #>'`.

Example:

```python
# Create variables.
with tf.variable_scope('test'):
  m = tf.get_variable('my_var', shape=[10])
with tf.variable_scope('test2'):
  var2 = tf.get_variable('my_var', shape=[10])
var3 = tf.get_variable(name='my1', shape=[100, 100],
                       partitioner=lambda shape, dtype: [5, 1])
...
# Specify which variables to initialize from checkpoint.
init_from_checkpoint(checkpoint_dir, {
    'some_var': 'test/my_var',
    'some_scope/': 'test2/'})
...
# Or use `Variable` objects to identify what to initialize.
init_from_checkpoint(checkpoint_dir, {
    'some_scope/var2': var2,
})
# Initialize partitioned variables.
init_from_checkpoint(checkpoint_dir, {
    'some_var_from_ckpt': 'part_var',
})
# Or specify the list of `Variable` objects.
init_from_checkpoint(checkpoint_dir, {
    'some_var_from_ckpt': var3._get_variable_list(),
})
...
# Initialize variables as usual.
session.run(tf.global_variables_initializer())
```

Args:

- `checkpoint_dir`: Directory with checkpoints file or path to checkpoint.
- `assignment_map`: Dict, where keys are names of the variables in the checkpoint and values are current variables or names of current variables (in default graph).

Raises:

- `tf.errors.OpError`: If missing checkpoints or tensors in checkpoints.
- `ValueError`: If missing variables in current graph.