Article from: https://www.cnblogs.com/guoyaohua/p/9059605.html

This article summarizes commonly used TensorFlow functions, collected and organized from online sources.

  TensorFlow converts graph definitions into distributed execution operations in order to make full use of the available computing resources, such as CPUs or GPUs. In general you do not need to specify a CPU or GPU explicitly; TensorFlow detects them automatically, and if a GPU is detected it will make the best use of the first GPU found to perform operations. Parallel computing can speed up expensive algorithms, and TensorFlow also handles complex operations efficiently. Most kernels have device-specific implementations, for example for GPUs.

  Here are some important operations / kernels:

Operation group / Operations
Maths Add, Sub, Mul, Div, Exp, Log, Greater, Less, Equal
Array Concat, Slice, Split, Constant, Rank, Shape, Shuffle
Matrix MatMul, MatrixInverse, MatrixDeterminant
Neural Network SoftMax, Sigmoid, ReLU, Convolution2D, MaxPool
Checkpointing Save, Restore
Queues and synchronizations Enqueue, Dequeue, MutexAcquire, MutexRelease
Flow control Merge, Switch, Enter, Leave, NextIteration

One. TensorFlow arithmetic operations

Operation / Description
tf.add(x, y, name=None) Addition
tf.sub(x, y, name=None) Subtraction
tf.mul(x, y, name=None) Multiplication
tf.div(x, y, name=None) Division
tf.mod(x, y, name=None) Modulo (remainder)
tf.abs(x, name=None) Absolute value
tf.neg(x, name=None)

Negation (y = -x)

tf.sign(x, name=None)

Returns the sign of x:

y = sign(x) = -1 if x < 0; 0 if x == 0; 1 if x > 0.

tf.inv(x, name=None) Reciprocal (y = 1/x)
tf.square(x, name=None) Square calculation(y = x * x = x^2)
tf.round(x, name=None)

Rounds to the closest integer

# ‘a’ is [0.9, 2.5, 2.3, -4.4]
tf.round(a) ==> [ 1.0, 3.0, 2.0, -4.0 ]

tf.sqrt(x, name=None) Square root (y = √x = x^(1/2))
tf.pow(x, y, name=None)

Power, computed element-wise

# tensor ‘x’ is [[2, 2], [3, 3]]
# tensor ‘y’ is [[8, 16], [2, 3]]
tf.pow(x, y) ==> [[256, 65536], [9, 27]]

tf.exp(x, name=None) Computes e raised to the power x (e^x)
tf.log(x, name=None) Computes the natural logarithm ln(x)
tf.maximum(x, y, name=None) Returns the element-wise maximum (x > y ? x : y)
tf.minimum(x, y, name=None) Returns the element-wise minimum (x < y ? x : y)
tf.cos(x, name=None) Trigonometric function cosine
tf.sin(x, name=None) Trigonometric function sine
tf.tan(x, name=None) Trigonometric function tangent
tf.atan(x, name=None) Trigonometric function arctangent
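
A minimal runnable sketch of these arithmetic ops (assuming a TensorFlow 1.x-style graph-and-session API consistent with the signatures above):

import tensorflow as tf

x = tf.constant([4.0, 9.0])
y = tf.constant([2.0, 3.0])

s = tf.add(x, y)    # element-wise sum   -> [6.0, 12.0]
p = tf.pow(x, y)    # element-wise power -> [16.0, 729.0]
r = tf.sqrt(x)      # element-wise root  -> [2.0, 3.0]

with tf.Session() as sess:
    print(sess.run([s, p, r]))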

Two. Tensor operations (Tensor Transformations)

2.1 Data type conversion (Casting)

Operation / Description
tf.string_to_number
(string_tensor, out_type=None, name=None)
String to numeric
tf.to_double(x, name='ToDouble') Convert to 64-bit floating point (float64)
tf.to_float(x, name='ToFloat') Convert to 32-bit floating point (float32)
tf.to_int32(x, name='ToInt32') Convert to 32-bit integer (int32)
tf.to_int64(x, name='ToInt64') Convert to 64-bit integer (int64)
tf.cast(x, dtype, name=None)

Converts x or x.values to the given dtype

# tensor a is [1.8, 2.2], dtype=tf.float
tf.cast(a, tf.int32) ==> [1, 2] # dtype=tf.int32
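
A minimal sketch of dtype conversion (TensorFlow 1.x-era API assumed):

import tensorflow as tf

a = tf.constant([1.8, 2.2], dtype=tf.float32)
b = tf.cast(a, tf.int32)                      # [1, 2], dtype=tf.int32
c = tf.string_to_number(tf.constant("3.14"))  # string -> float32 scalar

with tf.Session() as sess:
    print(sess.run([b, c]))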

2.2 Shape operations (Shapes and Shaping)

Operation / Description
tf.shape(input, name=None)

Return the shape of the data

# ‘t’ is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
shape(t) ==> [2, 2, 3]

tf.size(input, name=None)

Returns the number of elements in the data

# ‘t’ is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
size(t) ==> 12

tf.rank(input, name=None)

Returns the rank (number of dimensions) of the tensor

Note: this rank is different from matrix rank.
The rank of a tensor is the number of indices required to uniquely select any of its elements.
It is also known as "order", "degree" or "ndims".
#’t’ is [[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]]
# shape of tensor ‘t’ is [2, 2, 3]
rank(t) ==> 3

tf.reshape(tensor, shape, name=None)

Change the shape of tensor

# tensor ‘t’ is [1, 2, 3, 4, 5, 6, 7, 8, 9]
# tensor ‘t’ has shape [9]
reshape(t, [3, 3]) ==> 
[[1, 2, 3],
[4, 5, 6],
[7, 8, 9]]
# If one element of shape is -1, that dimension is inferred so that the total size is preserved.
# Here a different tensor 't' of shape [3, 2, 3] is reshaped; -1 is inferred to be 9:
reshape(t, [2, -1]) ==> 
[[1, 1, 1, 2, 2, 2, 3, 3, 3],
[4, 4, 4, 5, 5, 5, 6, 6, 6]]

tf.expand_dims(input, dim, name=None)

Inserts a dimension of size 1 into the tensor's shape

# This operation requires: -1 - input.dims() <= dim <= input.dims()
# ‘t’ is a tensor of shape [2]
shape(expand_dims(t, 0)) ==> [1, 2]
shape(expand_dims(t, 1)) ==> [2, 1]
shape(expand_dims(t, -1)) ==> [2, 1]
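
A minimal sketch of shape inspection and reshaping (TensorFlow 1.x-era API assumed):

import tensorflow as tf

t = tf.constant([[[1, 1, 1], [2, 2, 2]],
                 [[3, 3, 3], [4, 4, 4]]])

with tf.Session() as sess:
    print(sess.run(tf.shape(t)))                     # [2 2 3]
    print(sess.run(tf.size(t)))                      # 12
    print(sess.run(tf.rank(t)))                      # 3
    print(sess.run(tf.reshape(t, [3, -1])))          # reshaped to [3, 4]
    print(sess.run(tf.shape(tf.expand_dims(t, 0))))  # [1 2 2 3]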

2.3 Slicing and merging (Slicing and Joining)

Operation / Description
tf.slice(input_, begin, size, name=None)

Slices the tensor, extracting part of the contents of input.

inputs: can be a list, array, or tensor

     begin: an n-dimensional list; begin[i] is the starting offset (counted from 0) along dimension i of inputs, i.e. extraction starts at position begin[i] of dimension i.

     size: an n-dimensional list; size[i] is the number of elements to extract along dimension i.

     The following relationships hold:

         (1) i in [0, n)

         (2) the rank of inputs = len(begin) = len(size)

         (3) begin[i] >= 0   (the starting offset in each dimension is non-negative)

         (4) begin[i] + size[i] <= tf.shape(inputs)[i]

#’input’ is 
#[[[1, 1, 1], [2, 2, 2]],[[3, 3, 3], [4, 4, 4]],[[5, 5, 5], [6, 6, 6]]]
  tf.slice(input, [1, 0, 0], [1, 1, 3]) ==> [[[3, 3, 3]]]
  tf.slice(input, [1, 0, 0], [1, 2, 3]) ==> 
[[[3, 3, 3],
[4, 4, 4]]]
tf.slice(input, [1, 0, 0], [2, 1, 3]) ==> 
[[[3, 3, 3]],
[[5, 5, 5]]]

tf.split(split_dim, num_split, value, name=’split’)

Splits the tensor into num_split tensors along a given dimension.

# ‘value’ is a tensor with shape [5, 30]
# Split ‘value’ into 3 tensors along dimension 1
split0, split1, split2 = tf.split(1, 3, value)
tf.shape(split0) ==> [5, 10]

tf.concat(concat_dim, values, name=’concat’)

Concatenates tensors along one dimension (the overall number of dimensions is unchanged)

t1 = [[1, 2, 3], [4, 5, 6]]
t2 = [[7, 8, 9], [10, 11, 12]]
tf.concat(0, [t1, t2]) ==> [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
tf.concat(1, [t1, t2]) ==> [[1, 2, 3, 7, 8, 9], [4, 5, 6, 10, 11, 12]]

To concatenate the tensors along a new axis, you can use:

tf.concat(axis, [tf.expand_dims(t, axis) for t in tensors])
which is equivalent to tf.pack(tensors, axis=axis)

tf.pack(values, axis=0, name=’pack’)

Packs a list of rank-R tensors into a single rank-(R+1) tensor (the number of dimensions increases by one).

# ‘x’ is [1, 4], ‘y’ is [2, 5], ‘z’ is [3, 6]
pack([x, y, z]) => [[1, 4], [2, 5], [3, 6]] 
# Pack along dimension 1
pack([x, y, z], axis=1) => [[1, 2, 3], [4, 5, 6]]
tf.pack([x, y, z]) is equivalent to np.asarray([x, y, z]).

tf.reverse(tensor, dims, name=None)

Reverses the sequence along specified dimensions.
dims is a list of bools whose length equals rank(tensor).

# tensor ‘t’ is 
# [[[[ 0, 1, 2, 3],
# [ 4, 5, 6, 7],
# [ 8, 9, 10, 11]],
# [[12, 13, 14, 15],
# [16, 17, 18, 19],
# [20, 21, 22, 23]]]]
# tensor ‘t’ shape is [1, 2, 3, 4]
# ‘dims’ is [False, False, False, True]
reverse(t, dims) ==>
[[[[ 3, 2, 1, 0],
[ 7, 6, 5, 4],
[ 11, 10, 9, 8]],
[[15, 14, 13, 12],
[19, 18, 17, 16],
[23, 22, 21, 20]]]]

tf.transpose(a, perm=None, name=’transpose’)

Permutes the dimensions of the tensor (axis transposition)
according to the list perm.
If perm is not given, it defaults to (n-1 ... 0)

# ‘x’ is [[1 2 3],[4 5 6]]
tf.transpose(x) ==> [[1 4], [2 5],[3 6]]
# Equivalently
tf.transpose(x, perm=[1, 0]) ==> [[1 4],[2 5], [3 6]]

tf.gather(params, indices, validate_indices=None, name=None)

Gathers the slices of params indicated by the index tensor indices


tf.one_hot
(indices, depth, on_value=None, off_value=None, 
axis=None, dtype=None, name=None)

One-hot encoding

indices = [0, 2, -1, 1]
depth = 3
on_value = 5.0 
off_value = 0.0 
axis = -1 
#Then output is [4 x 3]: 
output = 
[5.0 0.0 0.0] // one_hot(0) 
[0.0 0.0 5.0] // one_hot(2) 
[0.0 0.0 0.0] // one_hot(-1) 
[0.0 5.0 0.0] // one_hot(1)
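
A minimal sketch of slicing and one-hot encoding (TensorFlow 1.x-era API assumed; note that the tf.split/tf.concat argument order and the tf.pack name above correspond to a pre-1.0 API and changed in later releases):

import tensorflow as tf

inp = tf.constant([[[1, 1, 1], [2, 2, 2]],
                   [[3, 3, 3], [4, 4, 4]],
                   [[5, 5, 5], [6, 6, 6]]])

sliced = tf.slice(inp, begin=[1, 0, 0], size=[1, 2, 3])  # [[[3, 3, 3], [4, 4, 4]]]
onehot = tf.one_hot(indices=[0, 2, 1], depth=3)          # 3 x 3 one-hot matrix

with tf.Session() as sess:
    print(sess.run(sliced))
    print(sess.run(onehot))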

Three. Matrix operations

Operation / Description
tf.diag(diagonal, name=None)

Returns a diagonal tensor with the given diagonal values

# ‘diagonal’ is [1, 2, 3, 4]
tf.diag(diagonal) ==> 
[[1, 0, 0, 0]
[0, 2, 0, 0]
[0, 0, 3, 0]
[0, 0, 0, 4]]

tf.diag_part(input, name=None) The inverse of the above: returns the diagonal part of the input
tf.trace(x, name=None) Computes the trace of a 2-D tensor, i.e. the sum of its diagonal values
tf.transpose(a, perm=None, name=’transpose’) Changing the dimensional order of tensor (axis transformation)
tf.matmul(a, b, transpose_a=False, 
transpose_b=False, a_is_sparse=False, 
b_is_sparse=False, name=None)
Multiplication of matrices (which can handle batch data)
tf.matrix_determinant(input, name=None) Returns the determinant of a square matrix
tf.matrix_inverse(input, adjoint=None, name=None) Computes the inverse of a square matrix; if adjoint is True, the inverse of the conjugate transpose of the input is computed instead
tf.cholesky(input, name=None) Cholesky decomposition of the input matrix,
i.e. factoring a symmetric positive-definite matrix A into the product of a lower-triangular matrix L and its transpose: A = LL^T
tf.matrix_solve(matrix, rhs, adjoint=None, name=None) Solves systems of linear equations;
matrix is a square matrix of shape [M, M], rhs has shape [M, K], and the output has shape [M, K]
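
A minimal sketch of the matrix operations (TensorFlow 1.x-era API assumed):

import tensorflow as tf

a = tf.constant([[3.0, 1.0], [1.0, 2.0]])
b = tf.constant([[9.0], [8.0]])

prod = tf.matmul(a, b)            # matrix product, shape [2, 1]
det  = tf.matrix_determinant(a)   # 3*2 - 1*1 = 5
inv  = tf.matrix_inverse(a)       # inverse of a
x    = tf.matrix_solve(a, b)      # solves a x = b

with tf.Session() as sess:
    print(sess.run([prod, det, inv, x]))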

Four. Complex number operations

Operation / Description
tf.complex(real, imag, name=None)

Converts two real tensors into complex form

# tensor ‘real’ is [2.25, 3.25]
# tensor ‘imag’ is [4.75, 5.75]
tf.complex(real, imag) ==> [[2.25 + 4.75j], [3.25 + 5.75j]]

tf.complex_abs(x, name=None)

The absolute value (magnitude) of the complex number

# tensor ‘x’ is [[-2.25 + 4.75j], [-3.25 + 5.75j]]
tf.complex_abs(x) ==> [5.25594902, 6.60492229]

tf.conj(input, name=None) Conjugate complex number
tf.imag(input, name=None)
tf.real(input, name=None)
Extract the imaginary part and the real part of a complex number, respectively
tf.fft(input, name=None) Computes the one-dimensional discrete Fourier transform; the input must be of type complex64
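
A minimal sketch of the complex-number ops (an older TensorFlow API is assumed here; in later releases tf.complex_abs was folded into tf.abs):

import tensorflow as tf

real = tf.constant([2.25, 3.25])
imag = tf.constant([4.75, 5.75])

z = tf.complex(real, imag)   # complex64 tensor
m = tf.complex_abs(z)        # magnitude of each complex element
c = tf.conj(z)               # complex conjugate
f = tf.fft(z)                # 1-D discrete Fourier transform (complex64 input)

with tf.Session() as sess:
    print(sess.run([z, m, c, f]))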

Five. Reduction calculation (Reduction)

Operation / Description

tf.reduce_sum(input_tensor, 
reduction_indices=None, 
keep_dims=False, name=None)

Computes the sum of the elements of the input tensor, or sums along the axes specified by reduction_indices.

# ‘x’ is [[1, 1, 1]
# [1, 1, 1]]
tf.reduce_sum(x) ==> 6
tf.reduce_sum(x, 0) ==> [2, 2, 2]
tf.reduce_sum(x, 1) ==> [3, 3]
tf.reduce_sum(x, 1, keep_dims=True) ==> [[3], [3]]
tf.reduce_sum(x, [0, 1]) ==> 6

tf.reduce_prod(input_tensor, 
reduction_indices=None, 
keep_dims=False, name=None)
Calculate the product of the input tensor element, or calculate the product according to the axis specified by reduction_indices.
tf.reduce_min(input_tensor, 
reduction_indices=None, 
keep_dims=False, name=None)
Finding the minimum value in tensor
tf.reduce_max(input_tensor, 
reduction_indices=None, 
keep_dims=False, name=None)
The maximum value in tensor
tf.reduce_mean(input_tensor, 
reduction_indices=None, 
keep_dims=False, name=None)
Mean value in tensor
tf.reduce_all(input_tensor, 
reduction_indices=None, 
keep_dims=False, name=None)

Computes the logical AND over the elements of the tensor

# ‘x’ is 
# [[True, True]
# [False, False]]
tf.reduce_all(x) ==> False
tf.reduce_all(x, 0) ==> [False, False]
tf.reduce_all(x, 1) ==> [True, False]

tf.reduce_any(input_tensor, 
reduction_indices=None, 
keep_dims=False, name=None)
Computes the logical OR over the elements of the tensor
tf.accumulate_n(inputs, shape=None, 
tensor_dtype=None, name=None)

Computes the element-wise sum of a list of tensors

# tensor ‘a’ is [[1, 2], [3, 4]]
# tensor ‘b’ is [[5, 0], [0, 6]]
tf.accumulate_n([a, b, a]) ==> [[7, 4], [6, 14]]

tf.cumsum(x, axis=0, exclusive=False, 
reverse=False, name=None)

Cumulative sum along an axis

tf.cumsum([a, b, c]) ==> [a, a + b, a + b + c]
tf.cumsum([a, b, c], exclusive=True) ==> [0, a, a + b]
tf.cumsum([a, b, c], reverse=True) ==> [a + b + c, b + c, c]
tf.cumsum([a, b, c], exclusive=True, reverse=True) ==> [b + c, c, 0]
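
A minimal sketch of the reduction ops (TensorFlow 1.x-era API assumed):

import tensorflow as tf

x = tf.constant([[1.0, 1.0, 1.0],
                 [1.0, 1.0, 1.0]])

total    = tf.reduce_sum(x)         # 6.0
col_sum  = tf.reduce_sum(x, 0)      # [2. 2. 2.]
row_mean = tf.reduce_mean(x, 1)     # [1. 1.]
running  = tf.cumsum(tf.constant([1.0, 2.0, 3.0]))  # [1. 3. 6.]

with tf.Session() as sess:
    print(sess.run([total, col_sum, row_mean, running]))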

Six. Segmentation (Segmentation)

Operation / Description
tf.segment_sum(data, segment_ids, name=None)

Computes the sum within each segment along the first dimension of the tensor

  • Function parameters

    data: A Tensor.

    segment_ids: A Tensor of type int32 or int64;

    a 1-D tensor whose size equals the size of the first dimension of data; its values should be sorted and may repeat.

    name: The name of the operation (optional).

  • Function return value

    tf.segment_sum returns a Tensor of the same type as data.

 It has the same shape as data, except that dimension 0 has size k (the number of segments).

tf.segment_prod(data, segment_ids, name=None) Computes the product within each segment defined by segment_ids
tf.segment_min(data, segment_ids, name=None) Computes the minimum within each segment defined by segment_ids
tf.segment_max(data, segment_ids, name=None) Computes the maximum within each segment defined by segment_ids
tf.segment_mean(data, segment_ids, name=None) Computes the mean within each segment defined by segment_ids
tf.unsorted_segment_sum(data, segment_ids,
num_segments, name=None)

Similar to tf.segment_sum,
except that the IDs in segment_ids need not be sorted.

tf.sparse_segment_sum(data, indices, 
segment_ids, name=None)

Computes the sum along sparse segments of the input: rows of data selected by indices are grouped by segment_ids and summed

c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]])
# Select two rows, one segment.
tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 0])) 
==> [[0 0 0 0]]
Here rows 0 and 1 of the original data are selected by indices
and then grouped and summed according to segment_ids
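
A minimal sketch of a segmented reduction (TensorFlow 1.x-era API assumed):

import tensorflow as tf

data = tf.constant([[1, 2, 3, 4],
                    [-1, -2, -3, -4],
                    [5, 6, 7, 8]])
# Rows 0 and 1 form segment 0, row 2 forms segment 1; ids must be sorted.
seg_ids = tf.constant([0, 0, 1])

seg_sum = tf.segment_sum(data, seg_ids)   # [[0 0 0 0], [5 6 7 8]]

with tf.Session() as sess:
    print(sess.run(seg_sum))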

Seven. Sequence comparison and index extraction (Sequence Comparison and Indexing)

Operation / Description
tf.argmin(input, dimension, name=None) Returns the index of the minimum value along the given dimension
tf.argmax(input, dimension, name=None) Returns the index of the maximum value along the given dimension
tf.listdiff(x, y, name=None) Returns the values (and their indices) in x that are not in y
tf.where(input, name=None)

Returns the coordinates at which a bool tensor is True

# ‘input’ tensor is 
#[[True, False]
#[True, False]]
# ‘input’ has two ‘True’ entries, so two coordinates are output.
# ‘input’ has rank 2, so each coordinate has two components.
where(input) ==>
[[0, 0],
[1, 0]]

tf.unique(x, name=None)

Returns a tuple (y, idx): y is the list of unique values in x,
and idx gives, for each element of x, the index of the corresponding element in y

# tensor ‘x’ is [1, 1, 2, 4, 4, 4, 7, 8, 8]
y, idx = unique(x)
y ==> [1, 2, 4, 7, 8]
idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]

tf.invert_permutation(x, name=None)

Computes the inverse of a permutation: the output y satisfies

y[x[i]] = i for i in [0, 1, ..., len(x) - 1]

# tensor x is [3, 4, 0, 2, 1]
invert_permutation(x) ==> [2, 4, 3, 0, 1]
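
A minimal sketch of the comparison and indexing ops (TensorFlow 1.x-era API assumed):

import tensorflow as tf

x = tf.constant([1, 1, 2, 4, 4, 4, 7, 8, 8])
m = tf.constant([[True, False],
                 [True, False]])

top    = tf.argmax(tf.constant([3.0, 9.0, 1.0]), 0)  # index 1
coords = tf.where(m)                                 # [[0 0], [1 0]]
y, idx = tf.unique(x)                                # y = [1 2 4 7 8]

with tf.Session() as sess:
    print(sess.run([top, coords, y, idx]))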

Eight. Neural network (Neural Network)

  • Activation function (Activation Functions)

Operation / Description
tf.nn.relu(features, name=None) Rectified linear unit: max(features, 0)
tf.nn.relu6(features, name=None) ReLU capped at 6: min(max(features, 0), 6)
tf.nn.elu(features, name=None) ELU (Exponential Linear Unit): exp(features) - 1 if features < 0, otherwise features
tf.nn.softplus(features, name=None) Computes softplus: log(exp(features) + 1)
tf.nn.dropout(x, keep_prob, 
noise_shape=None, seed=None, name=None)
Computes dropout; keep_prob is the probability of keeping each element,
noise_shape is the shape of the random keep/drop mask
tf.nn.bias_add(value, bias, data_format=None, name=None) Add a bias to value
A special case of tf.add in which bias is restricted to 1-D;
bias is added to value via broadcasting,
and the result has the same type as value
tf.sigmoid(x, name=None) y = 1 / (1 + exp(-x))
tf.tanh(x, name=None) Hyperbolic tangent activation function
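
A minimal sketch of the activation-related ops (TensorFlow 1.x-era API assumed):

import tensorflow as tf

x = tf.constant([[-1.0, 2.0, -3.0, 4.0]])

act  = tf.nn.relu(x)                            # [[0. 2. 0. 4.]]
bias = tf.nn.bias_add(act, tf.constant([0.5, 0.5, 0.5, 0.5]))
drop = tf.nn.dropout(bias, keep_prob=0.5)       # zeroes ~half the elements, scales the rest
sig  = tf.sigmoid(x)                            # 1 / (1 + exp(-x))

with tf.Session() as sess:
    print(sess.run([act, bias, drop, sig]))
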
  • Convolution function (Convolution)

Operation / Description
tf.nn.conv2d(input, filter, strides, padding, 
use_cudnn_on_gpu=None, data_format=None, name=None)
Calculation of 2D convolution with a given 4D input and filter
The input shape is [batch, height, width, in_channels]
tf.nn.conv3d(input, filter, strides, padding, name=None) Calculation of 3D convolution with a given 5D input and filter
Input shape is [batch, in_depth, in_height, in_width, in_channels]
  • Pooling function (Pooling)

Operation / Description
tf.nn.avg_pool(value, ksize, strides, padding, 
data_format='NHWC', name=None)
Average pooling
tf.nn.max_pool(value, ksize, strides, padding, 
data_format='NHWC', name=None)
Max pooling
tf.nn.max_pool_with_argmax(input, ksize, strides,
padding, Targmax=None, name=None)

Max pooling that returns a 2-tuple (output, argmax):

the pooled maxima together with their corresponding flattened indices

tf.nn.avg_pool3d(input, ksize, strides, 
padding, name=None)
3D average pooling
tf.nn.max_pool3d(input, ksize, strides, 
padding, name=None)
3D max pooling
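
A minimal sketch of a convolution followed by pooling (TensorFlow 1.x-era API assumed):

import tensorflow as tf

# NHWC input: batch=1, height=8, width=8, channels=1
img = tf.random_normal([1, 8, 8, 1])
# Filter: 3x3 kernel, 1 input channel, 4 output channels
w = tf.random_normal([3, 3, 1, 4])

conv = tf.nn.conv2d(img, w, strides=[1, 1, 1, 1], padding='SAME')   # [1, 8, 8, 4]
pool = tf.nn.max_pool(conv, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],
                      padding='SAME')                               # [1, 4, 4, 4]

with tf.Session() as sess:
    print(sess.run(tf.shape(pool)))
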
  • Data standardization (Normalization)

Operation / Description
tf.nn.l2_normalize(x, dim, epsilon=1e-12, name=None) L2 normalization along dimension dim
output = x / sqrt(max(sum(x**2), epsilon))
tf.nn.sufficient_statistics(x, axes, shift=None, 
keep_dims=False, name=None)
Computes sufficient statistics related to the mean and variance:
returns a 4-tuple of (count of elements, sum of elements, sum of squared elements, shift value)
tf.nn.normalize_moments(counts, mean_ss, variance_ss, shift, name=None) Computes the mean and variance from sufficient statistics
tf.nn.moments(x, axes, shift=None, 
name=None, keep_dims=False)
Direct calculation of mean and variance
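
A minimal sketch of the normalization ops (TensorFlow 1.x-era API assumed):

import tensorflow as tf

x = tf.constant([[3.0, 4.0], [6.0, 8.0]])

unit = tf.nn.l2_normalize(x, dim=1)        # each row scaled to unit L2 norm
mean, var = tf.nn.moments(x, axes=[0])     # per-column mean and variance

with tf.Session() as sess:
    print(sess.run([unit, mean, var]))
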
  • Loss function (Losses)

Operation / Description
tf.nn.l2_loss(t, name=None) output = sum(t ** 2) / 2
  • Classification function (Classification)

Operation / Description
tf.nn.sigmoid_cross_entropy_with_logits
(logits, targets, name=None)
Computes the sigmoid cross-entropy between logits and targets
tf.nn.softmax(logits, name=None) Computing softmax
softmax[i, j] = exp(logits[i, j]) / sum_j(exp(logits[i, j]))
tf.nn.log_softmax(logits, name=None) logsoftmax[i, j] = logits[i, j] - log(sum(exp(logits[i])))
tf.nn.softmax_cross_entropy_with_logits
(logits, labels, name=None)
Computes the softmax cross-entropy between logits and labels;
logits and labels must have the same shape and data type
tf.nn.sparse_softmax_cross_entropy_with_logits
(logits, labels, name=None)
Computes the softmax cross-entropy between logits and labels, where labels are given as class indices rather than one-hot vectors
tf.nn.weighted_cross_entropy_with_logits
(logits, targets, pos_weight, name=None)
Similar to sigmoid_cross_entropy_with_logits(),
but positive targets are weighted by pos_weight
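
A minimal sketch of softmax and cross-entropy (TensorFlow 1.x-era API assumed; later 1.x releases require the keyword arguments logits= and labels= shown here):

import tensorflow as tf

logits = tf.constant([[2.0, 1.0, 0.1]])
labels = tf.constant([[1.0, 0.0, 0.0]])   # one-hot target

probs = tf.nn.softmax(logits)
loss  = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels)

with tf.Session() as sess:
    print(sess.run([probs, loss]))
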
  • Symbol embedding (Embeddings)

Operation / Description
tf.nn.embedding_lookup
(params, ids, partition_strategy=’mod’, 
name=None, validate_indices=True)

Looks up the rows of the embedding tensor(s) params indicated by the indices ids.
If len(params) > 1, the ids are distributed across the partitions according to partition_strategy.

1. If partition_strategy is “mod”,
an id is assigned to partition p = id % len(params).
For example, 13 ids divided over 5 partitions gives:
[[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]

2. If partition_strategy is “div”, the assignment is:
[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]

tf.nn.embedding_lookup_sparse(params, 
sp_ids, sp_weights, partition_strategy=’mod’, 
name=None, combiner=’mean’)

Looks up embeddings for the given sparse ids and weights.

1. sp_ids is an N x M sparse tensor,
where N is the batch size, M is arbitrary, and the data type is int64.

2. sp_weights is a sparse tensor of weights with the same shape as sp_ids,
of floating-point type; if None, all weights are taken to be 1.
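
A minimal sketch of an embedding lookup (TensorFlow 1.x-era API assumed; the table and ids below are illustrative):

import tensorflow as tf

# A toy embedding table: 10 ids, 4-dimensional vectors
params = tf.random_normal([10, 4])
ids = tf.constant([1, 7, 3])

vecs = tf.nn.embedding_lookup(params, ids)   # shape [3, 4]

with tf.Session() as sess:
    print(sess.run(tf.shape(vecs)))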

  • Recurrent neural network (Recurrent Neural Networks)

Operation / Description
tf.nn.rnn(cell, inputs, initial_state=None, dtype=None, 
sequence_length=None, scope=None)
Builds a recurrent neural network from the given RNNCell instance cell
tf.nn.dynamic_rnn(cell, inputs, sequence_length=None, 
initial_state=None, dtype=None, parallel_iterations=None, 
swap_memory=False, time_major=False, scope=None)
Dynamic RNN built from the given RNNCell instance cell.
Unlike the plain RNN, the network is unrolled dynamically according to the input.
Returns (outputs, state)
tf.nn.state_saving_rnn(cell, inputs, state_saver, state_name, 
sequence_length=None, scope=None)
An RNN that accepts a state saver, allowing state to be stored between runs
tf.nn.bidirectional_rnn(cell_fw, cell_bw, inputs, 
initial_state_fw=None, initial_state_bw=None, dtype=None,
sequence_length=None, scope=None)
Bidirectional RNN; returns a 3-tuple
(outputs, output_state_fw, output_state_bw)
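
A minimal sketch of tf.nn.dynamic_rnn (TensorFlow 1.x-era API assumed; the cell class tf.nn.rnn_cell.BasicLSTMCell is an assumption, as its module location varies across versions):

import tensorflow as tf

# batch=2, time steps=5, feature size=3
inputs = tf.random_normal([2, 5, 3])
cell = tf.nn.rnn_cell.BasicLSTMCell(num_units=8)

outputs, state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(tf.shape(outputs)))   # [2 5 8]
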
  • Evaluation network (Evaluation)

Operation / Description
tf.nn.top_k(input, k=1, sorted=True, name=None) Returns the k largest values and their corresponding indices
tf.nn.in_top_k(predictions, targets, k, name=None) Returns whether, for each target index,
the corresponding prediction is among the top k predictions;
the result is a bool tensor with the same batch length as predictions
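
A minimal sketch of the evaluation ops (TensorFlow 1.x-era API assumed):

import tensorflow as tf

preds = tf.constant([[0.1, 0.6, 0.3],
                     [0.8, 0.1, 0.1]])
targets = tf.constant([1, 2])

values, indices = tf.nn.top_k(preds, k=2)   # top-2 scores and their indices per row
hits = tf.nn.in_top_k(preds, targets, k=1)  # [True, False]

with tf.Session() as sess:
    print(sess.run([values, indices, hits]))
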
  • Supervised candidate sampling network (Candidate Sampling)

  For models with a very large number of classes or labels, a full softmax over all classes consumes a lot of time and memory; candidate sampling uses only a small subset of the classes and labels as supervision in order to speed up training.

Operation / Description
Sampled Loss Functions  
tf.nn.nce_loss(weights, biases, inputs, labels, num_sampled,
num_classes, num_true=1, sampled_values=None,
remove_accidental_hits=False, partition_strategy=’mod’,
name=’nce_loss’)
Returns the noise-contrastive estimation (NCE) training loss
tf.nn.sampled_softmax_loss(weights, biases, inputs, labels, 
num_sampled, num_classes, num_true=1, sampled_values=None,
remove_accidental_hits=True, partition_strategy=’mod’, 
name=’sampled_softmax_loss’)
Returns the sampled softmax training loss.
See Jean et al., 2014, Section 3
Candidate Samplers  
tf.nn.uniform_candidate_sampler(true_classes, num_true, 
num_sampled, unique, range_max, seed=None, name=None)
Samples a candidate set from a uniform distribution.
Returns a 3-tuple:
1. sampled_candidates, the candidate set
2. the expected counts of true_classes (floating point)
3. the expected counts of sampled_candidates (floating point)
tf.nn.log_uniform_candidate_sampler(true_classes, num_true,
num_sampled, unique, range_max, seed=None, name=None)
Samples a candidate set from a log-uniform distribution; returns a 3-tuple as above
tf.nn.learned_unigram_candidate_sampler
(true_classes, num_true, num_sampled, unique, 
range_max, seed=None, name=None)
Samples based on a distribution learned during training;
returns a 3-tuple as above
tf.nn.fixed_unigram_candidate_sampler(true_classes, num_true,
num_sampled, unique, range_max, vocab_file='', 
distortion=1.0, num_reserved_ids=0, num_shards=1, 
shard=0, unigrams=(), seed=None, name=None)
Samples based on a user-provided base distribution

Nine. Saving and restoring variables

Operation / Description
Class tf.train.Saver (Saving and Restoring Variables)  
tf.train.Saver.__init__(var_list=None, reshape=False, 
sharded=False, max_to_keep=5, 
keep_checkpoint_every_n_hours=10000.0, 
name=None, restore_sequentially=False,
saver_def=None, builder=None)
Creates a Saver;
var_list specifies the variables to be saved and restored
tf.train.Saver.save(sess, save_path, global_step=None, 
latest_filename=None, meta_graph_suffix=’meta’,
write_meta_graph=True)
Saves variables
tf.train.Saver.restore(sess, save_path) Restores variables
tf.train.Saver.last_checkpoints Lists the most recent checkpoint filenames that have not been deleted
tf.train.Saver.set_last_checkpoints(last_checkpoints) Sets the list of checkpoint filenames
tf.train.Saver.set_last_checkpoints_with_time(last_checkpoints_with_time) Sets the list of checkpoint filenames together with timestamps
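
A minimal sketch of saving and restoring variables (TensorFlow 1.x-era API assumed; the path "/tmp/model.ckpt" is only illustrative):

import tensorflow as tf

v = tf.Variable([1.0, 2.0], name="v")
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, "/tmp/model.ckpt")      # save all variables to a checkpoint

with tf.Session() as sess:
    saver.restore(sess, "/tmp/model.ckpt")   # restore without re-initializing
    print(sess.run(v))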
