tf.contrib.layers.embed_sequence
Maps a sequence of symbols to a sequence of embeddings.
```python
tf.contrib.layers.embed_sequence(
    ids, vocab_size=None, embed_dim=None, unique=False, initializer=None,
    regularizer=None, trainable=True, scope=None, reuse=None
)
```
A typical use case is reusing embeddings between an encoder and a decoder (see the example below).
| Args | |
|---|---|
| `ids` | A `[batch_size, doc_length]` `Tensor` of type `int32` or `int64` with symbol ids. |
| `vocab_size` | Integer number of symbols in the vocabulary. |
| `embed_dim` | Integer number of dimensions for the embedding matrix. |
| `unique` | If `True`, first compute the unique set of indices, then look up each embedding once, repeating them in the output as needed. |
| `initializer` | An initializer for the embeddings; if `None`, the default initializer for the current variable scope is used. |
| `regularizer` | Optional regularizer for the embeddings. |
| `trainable` | If `True`, also add the variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`). |
| `scope` | Optional string specifying the variable scope for the op; required if `reuse=True`. |
| `reuse` | If `True`, the variables inside the op will be reused. |
| Returns |
|---|
| A `Tensor` of shape `[batch_size, doc_length, embed_dim]` with the embedded sequences. |
| Raises | |
|---|---|
| `ValueError` | If `embed_dim` or `vocab_size` is not specified when `reuse` is `None` or `False`. |
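A minimal sketch of the encoder/decoder use case, assuming TensorFlow 1.x with `tf.contrib` available; the tensor values, shapes, and the `'embed'` scope name are illustrative. The first call creates the embedding matrix; the second reuses it, which is why `vocab_size` and `embed_dim` may be omitted there without raising `ValueError`:

```python
import tensorflow as tf

# A batch of 2 documents, each 3 tokens long, with ids into a vocabulary of 10 symbols.
ids = tf.constant([[0, 3, 9], [2, 5, 5]], dtype=tf.int64)

# Encoder side: creates the embedding matrix under the 'embed' variable scope.
# Output shape is [2, 3, 4], i.e. [batch_size, doc_length, embed_dim].
encoder_inputs = tf.contrib.layers.embed_sequence(
    ids, vocab_size=10, embed_dim=4, scope='embed')

# Decoder side: reuses the same embedding matrix by naming the same scope with
# reuse=True; vocab_size and embed_dim can be omitted since the variable already exists.
decoder_inputs = tf.contrib.layers.embed_sequence(
    ids, scope='embed', reuse=True)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    enc, dec = sess.run([encoder_inputs, decoder_inputs])
    print(enc.shape)  # (2, 3, 4)
```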
© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/contrib/layers/embed_sequence