tensorflow::Tensor
#include <tensor.h>
Represents an n-dimensional array of values.
Summary
Constructors and Destructors

| Constructor / Destructor | Description |
| --- | --- |
| Tensor() | Creates a 1-dimensional, 0-element float tensor. |
| Tensor(DataType type, const TensorShape & shape) | |
| Tensor(Allocator *a, DataType type, const TensorShape & shape) | Creates a tensor with the input type and shape, using the allocator a to allocate the underlying buffer. |
| Tensor(Allocator *a, DataType type, const TensorShape & shape, const AllocationAttributes & allocation_attr) | Creates a tensor with the input type and shape, using the allocator a and the specified "allocation_attr" to allocate the underlying buffer. |
| Tensor(DataType type, const TensorShape & shape, TensorBuffer *buf) | Creates a tensor with the input datatype, shape and buf. |
| Tensor(DataType type) | Creates an empty Tensor of the given data type. |
| Tensor(float scalar_value) | |
| Tensor(double scalar_value) | |
| Tensor(int32 scalar_value) | |
| Tensor(uint32 scalar_value) | |
| Tensor(uint16 scalar_value) | |
| Tensor(uint8 scalar_value) | |
| Tensor(int16 scalar_value) | |
| Tensor(int8 scalar_value) | |
| Tensor(tstring scalar_value) | |
| Tensor(complex64 scalar_value) | |
| Tensor(complex128 scalar_value) | |
| Tensor(int64 scalar_value) | |
| Tensor(uint64 scalar_value) | |
| Tensor(bool scalar_value) | |
| Tensor(qint8 scalar_value) | |
| Tensor(quint8 scalar_value) | |
| Tensor(qint16 scalar_value) | |
| Tensor(quint16 scalar_value) | |
| Tensor(qint32 scalar_value) | |
| Tensor(bfloat16 scalar_value) | |
| Tensor(Eigen::half scalar_value) | |
| Tensor(ResourceHandle scalar_value) | |
| Tensor(const char *scalar_value) | |
| Tensor(const Tensor & other) | Copy constructor. |
| Tensor(Tensor && other) | Move constructor. |
| Tensor(T *t) | |
| ~Tensor() | |
Public functions

| Function | Returns | Description |
| --- | --- | --- |
| AllocatedBytes() const | size_t | |
| AsProtoField(TensorProto *proto) const | void | Fills in proto with *this tensor's content. |
| AsProtoTensorContent(TensorProto *proto) const | void | |
| BitcastFrom(const Tensor & other, DataType dtype, const TensorShape & shape) | Status | Copy the other tensor into this tensor, reshape it and reinterpret the buffer's datatype. |
| CopyFrom(const Tensor & other, const TensorShape & shape) TF_MUST_USE_RESULT | bool | Copy the other tensor into this tensor and reshape it. |
| DebugString(int num_values) const | std::string | A human-readable summary of the tensor suitable for debugging. |
| DebugString() const | std::string | |
| DeviceSafeDebugString() const | std::string | |
| FillDescription(TensorDescription *description) const | void | Fill in the TensorDescription proto with metadata about the tensor that is useful for monitoring and debugging. |
| FromProto(const TensorProto & other) TF_MUST_USE_RESULT | bool | Parse other and construct the tensor. |
| FromProto(Allocator *a, const TensorProto & other) TF_MUST_USE_RESULT | bool | |
| IsAligned() const | bool | Returns true iff this tensor is aligned. |
| IsInitialized() const | bool | If necessary, has this Tensor been initialized? |
| IsSameSize(const Tensor & b) const | bool | |
| NumElements() const | int64 | Convenience accessor for the tensor shape. |
| RefCountIsOne() const | bool | |
| SharesBufferWith(const Tensor & b) const | bool | |
| Slice(int64 dim0_start, int64 dim0_limit) const | Tensor | Slice this tensor along the 1st dimension. |
| SubSlice(int64 index) const | Tensor | Select a subslice from this tensor along the 1st dimension. |
| SummarizeValue(int64 max_entries, bool print_v2) const | std::string | Render the first max_entries values in *this into a string. |
| TotalBytes() const | size_t | Returns the estimated memory usage of this tensor. |
| UnsafeCopyFromInternal(const Tensor & other, DataType dtype, const TensorShape & shape) | void | Like BitcastFrom, but CHECK fails if any preconditions are not met. |
| bit_casted_shaped(gtl::ArraySlice< int64 > new_sizes) | TTypes< T, NDIMS >::Tensor | Return the tensor data as an Eigen::Tensor with the new shape specified in new_sizes and cast to a new dtype T. |
| bit_casted_shaped(gtl::ArraySlice< int64 > new_sizes) const | TTypes< T, NDIMS >::ConstTensor | Return the tensor data as an Eigen::Tensor with the new shape specified in new_sizes and cast to a new dtype T. |
| bit_casted_tensor() | TTypes< T, NDIMS >::Tensor | Return the tensor data as an Eigen::Tensor with the same size but a bitwise cast to the specified dtype T. |
| bit_casted_tensor() const | TTypes< T, NDIMS >::ConstTensor | Return the tensor data as an Eigen::Tensor with the same size but a bitwise cast to the specified dtype T. |
| data() const | void * | |
| dim_size(int d) const | int64 | Convenience accessor for the tensor shape. |
| dims() const | int | Convenience accessor for the tensor shape. |
| dtype() const | DataType | Returns the data type. |
| flat() | TTypes< T >::Flat | Return the tensor data as an Eigen::Tensor of the data type and a specified shape. |
| flat() const | TTypes< T >::ConstFlat | |
| flat_inner_dims() | TTypes< T, NDIMS >::Tensor | Returns the data as an Eigen::Tensor with NDIMS dimensions, collapsing all Tensor dimensions but the last NDIMS-1 into the first dimension of the result. |
| flat_inner_dims() const | TTypes< T, NDIMS >::ConstTensor | |
| flat_inner_outer_dims(int64 begin) | TTypes< T, NDIMS >::Tensor | |
| flat_inner_outer_dims(int64 begin) const | TTypes< T, NDIMS >::ConstTensor | |
| flat_outer_dims() | TTypes< T, NDIMS >::Tensor | Returns the data as an Eigen::Tensor with NDIMS dimensions, collapsing all Tensor dimensions but the first NDIMS-1 into the last dimension of the result. |
| flat_outer_dims() const | TTypes< T, NDIMS >::ConstTensor | |
| matrix() | TTypes< T >::Matrix | |
| matrix() const | TTypes< T >::ConstMatrix | |
| operator=(const Tensor & other) | Tensor & | Assign operator. This tensor shares other's underlying storage. |
| operator=(Tensor && other) | Tensor & | Move operator. See move constructor for details. |
| reinterpret_last_dimension() | TTypes< T, NDIMS >::Tensor | Return the tensor data as an Eigen::Tensor with the last dimension elements converted into single elements of a larger type. |
| reinterpret_last_dimension() const | TTypes< T, NDIMS >::ConstTensor | Return the tensor data as an Eigen::Tensor with the last dimension elements converted into single elements of a larger type. |
| scalar() | TTypes< T >::Scalar | |
| scalar() const | TTypes< T >::ConstScalar | |
| shape() const | const TensorShape & | Returns the shape of the tensor. |
| shaped(gtl::ArraySlice< int64 > new_sizes) | TTypes< T, NDIMS >::Tensor | |
| shaped(gtl::ArraySlice< int64 > new_sizes) const | TTypes< T, NDIMS >::ConstTensor | |
| tensor() | TTypes< T, NDIMS >::Tensor | |
| tensor() const | TTypes< T, NDIMS >::ConstTensor | |
| tensor_data() const | StringPiece | Returns a StringPiece mapping the current tensor's buffer. |
| unaligned_flat() | TTypes< T >::UnalignedFlat | |
| unaligned_flat() const | TTypes< T >::UnalignedConstFlat | |
| unaligned_shaped(gtl::ArraySlice< int64 > new_sizes) | TTypes< T, NDIMS >::UnalignedTensor | |
| unaligned_shaped(gtl::ArraySlice< int64 > new_sizes) const | TTypes< T, NDIMS >::UnalignedConstTensor | |
| vec() | TTypes< T >::Vec | Return the tensor data as an Eigen::Tensor with the type and sizes of this Tensor. |
| vec() const | TTypes< T >::ConstVec | Const versions of all the methods above. |
Public functions
AllocatedBytes
size_t AllocatedBytes() const
AsProtoField
void AsProtoField( TensorProto *proto ) const
Fills in proto with *this tensor's content.
AsProtoField() fills in the repeated field for proto.dtype(), while AsProtoTensorContent() encodes the content in proto.tensor_content() in a compact form.
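A minimal sketch contrasting the two encodings; it assumes the standard proto header tensorflow/core/framework/tensor.pb.h and illustrative values:

```cpp
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/framework/tensor.pb.h"

using tensorflow::DT_FLOAT;
using tensorflow::Tensor;
using tensorflow::TensorProto;
using tensorflow::TensorShape;

void SerializeBothWays() {
  Tensor t(DT_FLOAT, TensorShape({2, 2}));
  t.flat<float>().setZero();

  TensorProto by_field;
  t.AsProtoField(&by_field);            // values land in the repeated float_val field

  TensorProto by_content;
  t.AsProtoTensorContent(&by_content);  // values packed as raw bytes in tensor_content
}
```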
AsProtoTensorContent
void AsProtoTensorContent( TensorProto *proto ) const
BitcastFrom
Status BitcastFrom( const Tensor & other, DataType dtype, const TensorShape & shape )
Copy the other tensor into this tensor, reshape it and reinterpret the buffer's datatype.
If Status::OK() is returned, the two tensors now share the same underlying storage.
This call requires that the other tensor and the given type and shape are "compatible" (i.e. they occupy the same number of bytes). Specifically:
shape.num_elements() * DataTypeSize(type) must equal other.num_elements() * DataTypeSize(other.dtype())
In addition, this function requires:
- DataTypeSize(other.dtype()) != 0
- DataTypeSize(type) != 0
If any of the requirements are not met, errors::InvalidArgument is returned.
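A minimal sketch (types and shapes are illustrative; both sides occupy 6 * 4 bytes):

```cpp
Tensor floats(DT_FLOAT, TensorShape({2, 3}));
floats.flat<float>().setConstant(1.0f);

Tensor ints;
Status s = ints.BitcastFrom(floats, DT_INT32, TensorShape({2, 3}));
if (s.ok()) {
  // ints shares floats' buffer; 1.0f reads back as its bit pattern 0x3F800000.
  int32 bits = ints.flat<int32>()(0);
}
```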
CopyFrom
bool CopyFrom( const Tensor & other, const TensorShape & shape ) TF_MUST_USE_RESULT
Copy the other tensor into this tensor and reshape it.
This tensor shares other's underlying storage. Returns true iff other.shape() has the same number of elements as the given shape.
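For example, a short sketch of reshaping without copying (shapes are illustrative):

```cpp
Tensor src(DT_FLOAT, TensorShape({2, 3}));
Tensor reshaped;
bool ok = reshaped.CopyFrom(src, TensorShape({3, 2}));  // true: both shapes hold 6 elements
```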
DebugString
std::string DebugString( int num_values ) const
A human-readable summary of the tensor suitable for debugging.
DebugString
std::string DebugString() const
DeviceSafeDebugString
std::string DeviceSafeDebugString() const
FillDescription
void FillDescription( TensorDescription *description ) const
Fill in the TensorDescription proto with metadata about the tensor that is useful for monitoring and debugging.
FromProto
bool FromProto( const TensorProto & other ) TF_MUST_USE_RESULT
Parse other and construct the tensor.
Returns true iff the parsing succeeds. If the parsing fails, the state of *this is unchanged.
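A round-trip sketch using the serialization methods above (values are illustrative):

```cpp
Tensor original(DT_FLOAT, TensorShape({3}));
original.vec<float>().setConstant(2.0f);

TensorProto proto;
original.AsProtoTensorContent(&proto);

Tensor restored;
if (restored.FromProto(proto)) {
  // restored now has the same dtype, shape, and values as original.
}
```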
FromProto
bool FromProto( Allocator *a, const TensorProto & other ) TF_MUST_USE_RESULT
IsAligned
bool IsAligned() const
Returns true iff this tensor is aligned.
IsInitialized
bool IsInitialized() const
If necessary, has this Tensor been initialized?
Zero-element Tensors are always considered initialized, even if they have never been assigned to and do not have any memory allocated.
IsSameSize
bool IsSameSize( const Tensor & b ) const
NumElements
int64 NumElements() const
Convenience accessor for the tensor shape.
RefCountIsOne
bool RefCountIsOne() const
SharesBufferWith
bool SharesBufferWith( const Tensor & b ) const
Slice
Tensor Slice( int64 dim0_start, int64 dim0_limit ) const
Slice this tensor along the 1st dimension.
I.e., the returned tensor satisfies returned[i, ...] == this[dim0_start + i, ...]. The returned tensor shares the underlying tensor buffer with this tensor.
NOTE: The returned tensor may not satisfy the same alignment requirement as this tensor depending on the shape. The caller must check the returned tensor's alignment before calling certain methods that have an alignment requirement (e.g., flat(), tensor()).
NOTE: When fed with an N-dimensional tensor, this method returns a tensor also with N dimensions. If you want to select a sub tensor, see SubSlice.
REQUIRES: dims() >= 1
REQUIRES: 0 <= dim0_start <= dim0_limit <= dim_size(0)
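A small sketch of slicing along dimension 0 (shapes are illustrative):

```cpp
Tensor t(DT_FLOAT, TensorShape({4, 5}));
Tensor rows = t.Slice(1, 3);        // shape {2, 5}; still 2-D, shares t's buffer
if (rows.IsAligned()) {             // check alignment before using aligned accessors
  auto m = rows.matrix<float>();
}
```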
SubSlice
Tensor SubSlice( int64 index ) const
Select a subslice from this tensor along the 1st dimension.
When fed with an N-dimensional tensor, this method returns a tensor with N-1 dimensions, where the returned tensor is a subslice of the input tensor along the first dimension. The N-1 dimensions of the returned tensor are the last N-1 dimensions of the input tensor.
NOTE: The returned tensor may not satisfy the same alignment requirement as this tensor depending on the shape. The caller must check the returned tensor's alignment before calling certain methods that have an alignment requirement (e.g., flat(), tensor()).
REQUIRES: dims() >= 1
REQUIRES: 0 <= index < dim_size(0)
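A companion sketch to the Slice example above (shapes are illustrative):

```cpp
Tensor t(DT_FLOAT, TensorShape({4, 5}));
Tensor row = t.SubSlice(2);         // shape {5}; one dimension fewer, shares t's buffer
if (row.IsAligned()) {
  auto v = row.vec<float>();
}
```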
SummarizeValue
std::string SummarizeValue( int64 max_entries, bool print_v2 ) const
Render the first max_entries values in *this into a string.
Tensor
Tensor()
Creates a 1-dimensional, 0-element float tensor.
The returned Tensor is not a scalar (shape {}), but is instead an empty one-dimensional Tensor (shape {0}, NumElements() == 0). Since it has no elements, it does not need to be assigned a value and is initialized by default (IsInitialized() is true). If this is undesirable, consider creating a one-element scalar which does require initialization:
Tensor(DT_FLOAT, TensorShape({}))
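A short sketch of the distinction (shapes shown in the comments):

```cpp
Tensor empty;                              // shape {0}: zero elements, IsInitialized() is true
Tensor scalar(DT_FLOAT, TensorShape({}));  // shape {}: one element, must be written before use
scalar.scalar<float>()() = 3.14f;
```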
Tensor
Tensor( DataType type, const TensorShape & shape )
Creates a Tensor of the given type and shape.
If LogMemory::IsEnabled() the allocation is logged as coming from an unknown kernel and step. Calling the Tensor constructor directly from within an Op is deprecated: use the OpKernelConstruction/OpKernelContext allocate_* methods to allocate a new tensor, which record the kernel and step.
The underlying buffer is allocated using a CPUAllocator.
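A minimal allocation-and-fill sketch (values are illustrative):

```cpp
Tensor t(DT_FLOAT, TensorShape({2, 3}));   // 2 x 3 float buffer from the default CPUAllocator
auto m = t.matrix<float>();
m(0, 0) = 1.0f;                            // elements are uninitialized until written
```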
Tensor
Tensor( Allocator *a, DataType type, const TensorShape & shape )
Creates a tensor with the input type and shape, using the allocator a to allocate the underlying buffer.
If LogMemory::IsEnabled() the allocation is logged as coming from an unknown kernel and step. Calling the Tensor constructor directly from within an Op is deprecated: use the OpKernelConstruction/OpKernelContext allocate_* methods to allocate a new tensor, which record the kernel and step.
a must outlive the lifetime of this Tensor.
Tensor
Tensor( Allocator *a, DataType type, const TensorShape & shape, const AllocationAttributes & allocation_attr )
Creates a tensor with the input type and shape, using the allocator a and the specified "allocation_attr" to allocate the underlying buffer.
If the kernel and step are known, allocation_attr.allocation_will_be_logged should be set to true and LogMemory::RecordTensorAllocation should be called after the tensor is constructed. Calling the Tensor constructor directly from within an Op is deprecated: use the OpKernelConstruction/OpKernelContext allocate_* methods to allocate a new tensor, which record the kernel and step.
a must outlive the lifetime of this Tensor.
Tensor
Tensor( DataType type, const TensorShape & shape, TensorBuffer *buf )
Creates a tensor with the input datatype, shape and buf.
Acquires a ref on buf that belongs to this Tensor.
Tensor
Tensor( DataType type )
Creates an empty Tensor of the given data type.
Like Tensor(), returns a 1-dimensional, 0-element Tensor with IsInitialized() returning True. See the Tensor() documentation for details.
Tensor
Tensor( float scalar_value )
Tensor
Tensor( double scalar_value )
Tensor
Tensor( int32 scalar_value )
Tensor
Tensor( uint32 scalar_value )
Tensor
Tensor( uint16 scalar_value )
Tensor
Tensor( uint8 scalar_value )
Tensor
Tensor( int16 scalar_value )
Tensor
Tensor( int8 scalar_value )
Tensor
Tensor( tstring scalar_value )
Tensor
Tensor( complex64 scalar_value )
Tensor
Tensor( complex128 scalar_value )
Tensor
Tensor( int64 scalar_value )
Tensor
Tensor( uint64 scalar_value )
Tensor
Tensor( bool scalar_value )
Tensor
Tensor( qint8 scalar_value )
Tensor
Tensor( quint8 scalar_value )
Tensor
Tensor( qint16 scalar_value )
Tensor
Tensor( quint16 scalar_value )
Tensor
Tensor( qint32 scalar_value )
Tensor
Tensor( bfloat16 scalar_value )
Tensor
Tensor( Eigen::half scalar_value )
Tensor
Tensor( ResourceHandle scalar_value )
Tensor
Tensor( const char *scalar_value )
Tensor
Tensor( const Tensor & other )
Copy constructor.
Tensor
Tensor( Tensor && other )
Move constructor.
After this call, other is safely destructible and can be assigned to, but other calls on it (e.g. shape manipulation) are not valid.
Tensor
Tensor( T *t )=delete
TotalBytes
size_t TotalBytes() const
Returns the estimated memory usage of this tensor.
UnsafeCopyFromInternal
void UnsafeCopyFromInternal( const Tensor & other, DataType dtype, const TensorShape & shape )
Like BitcastFrom, but CHECK fails if any preconditions are not met.
Deprecated. Use BitcastFrom instead and check the returned Status.
bit_casted_shaped
TTypes< T, NDIMS >::Tensor bit_casted_shaped( gtl::ArraySlice< int64 > new_sizes )
Return the tensor data as an Eigen::Tensor with the new shape specified in new_sizes and cast to a new dtype T.
Using a bitcast is useful for move and copy operations. The allowed bitcast is the only difference from shaped().
bit_casted_shaped
TTypes< T, NDIMS >::ConstTensor bit_casted_shaped( gtl::ArraySlice< int64 > new_sizes ) const
Return the tensor data as an Eigen::Tensor with the new shape specified in new_sizes and cast to a new dtype T.
Using a bitcast is useful for move and copy operations. The allowed bitcast is the only difference from shaped().
bit_casted_tensor
TTypes< T, NDIMS >::Tensor bit_casted_tensor()
Return the tensor data as an Eigen::Tensor with the same size but a bitwise cast to the specified dtype T.
Using a bitcast is useful for move and copy operations. NOTE: this is the same as tensor() except a bitcast is allowed.
bit_casted_tensor
TTypes< T, NDIMS >::ConstTensor bit_casted_tensor() const
Return the tensor data as an Eigen::Tensor with the same size but a bitwise cast to the specified dtype T.
Using a bitcast is useful for move and copy operations. NOTE: this is the same as tensor() except a bitcast is allowed.
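A sketch of both bitcast accessors on a small float tensor (the total byte count is preserved):

```cpp
Tensor t(DT_FLOAT, TensorShape({2, 2}));
auto bits  = t.bit_casted_tensor<uint32, 2>();    // same 2 x 2 shape, elements reinterpreted bitwise
auto flat4 = t.bit_casted_shaped<uint32, 1>({4}); // same 16 bytes viewed as a length-4 uint32 vector
```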
data
void * data() const
dim_size
int64 dim_size( int d ) const
Convenience accessor for the tensor shape.
dims
int dims() const
Convenience accessor for the tensor shape.
For all shape accessors, see comments for relevant methods of TensorShape in tensor_shape.h.
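A quick sketch of the convenience shape accessors (the shape is illustrative):

```cpp
Tensor t(DT_FLOAT, TensorShape({2, 3, 4}));
int   nd = t.dims();          // 3
int64 d1 = t.dim_size(1);     // 3
int64 n  = t.NumElements();   // 24
const TensorShape& s = t.shape();
```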
dtype
DataType dtype() const
Returns the data type.
flat
TTypes< T >::Flat flat()
Return the tensor data as an Eigen::Tensor of the data type and a specified shape.
These methods allow you to access the data with the dimensions and sizes of your choice. You do not need to know the number of dimensions of the Tensor to call them. However, they CHECK that the type matches and that the dimensions requested create an Eigen::Tensor with the same number of elements as the tensor.
Example:
typedef float T;
Tensor my_ten(...built with Shape{planes: 4, rows: 3, cols: 5}...);
// 1D Eigen::Tensor, size 60:
auto flat = my_ten.flat<T>();
// 2D Eigen::Tensor 12 x 5:
auto inner = my_ten.flat_inner_dims<T>();
// 2D Eigen::Tensor 4 x 15:
auto outer = my_ten.shaped<T, 2>({4, 15});
// CHECK fails, bad num elements:
auto outer = my_ten.shaped<T, 2>({4, 8});
// 3D Eigen::Tensor 6 x 5 x 2:
auto weird = my_ten.shaped<T, 3>({6, 5, 2});
// CHECK fails, type mismatch:
auto bad = my_ten.flat<int32>();
flat
TTypes< T >::ConstFlat flat() const
flat_inner_dims
TTypes< T, NDIMS >::Tensor flat_inner_dims()
flat_inner_dims
TTypes< T, NDIMS >::ConstTensor flat_inner_dims() const
flat_inner_outer_dims
TTypes< T, NDIMS >::Tensor flat_inner_outer_dims( int64 begin )
Returns the data as an Eigen::Tensor with NDIMS dimensions, collapsing the first 'begin' Tensor dimensions into the first dimension of the result and the last dims() - 'begin' - NDIMS Tensor dimensions into the last dimension of the result.
If 'begin' < 0 then the |'begin'| leading dimensions of size 1 will be added. If 'begin' + NDIMS > dims() then 'begin' + NDIMS - dims() trailing dimensions of size 1 will be added.
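A hedged sketch of how the collapsing plays out for a 4-D tensor (the result shapes in the comments follow from the rules above):

```cpp
Tensor t(DT_FLOAT, TensorShape({2, 3, 4, 5}));
auto a = t.flat_inner_outer_dims<float, 3>(0);  // 2 x 3 x 20: the trailing dimension is collapsed
auto b = t.flat_inner_outer_dims<float, 3>(1);  // 6 x 4 x 5: the leading dimensions are collapsed
```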
flat_inner_outer_dims
TTypes< T, NDIMS >::ConstTensor flat_inner_outer_dims( int64 begin ) const
flat_outer_dims
TTypes< T, NDIMS >::Tensor flat_outer_dims()
flat_outer_dims
TTypes< T, NDIMS >::ConstTensor flat_outer_dims() const
matrix
TTypes< T >::Matrix matrix()
matrix
TTypes< T >::ConstMatrix matrix() const
operator=
Tensor & operator=( const Tensor & other )
Assign operator. This tensor shares other's underlying storage.
operator=
Tensor & operator=( Tensor && other )
Move operator. See move constructor for details.
reinterpret_last_dimension
TTypes< T, NDIMS >::Tensor reinterpret_last_dimension()
Return the tensor data as an Eigen::Tensor with the last dimension elements converted into single elements of a larger type.
For example, this is useful for kernels that can treat NCHW_VECT_C int8 tensors as NCHW int32 tensors. The sizeof(T) should equal the size of the original element type * num elements in the original last dimension. NDIMS should be 1 less than the original number of dimensions.
reinterpret_last_dimension
TTypes< T, NDIMS >::ConstTensor reinterpret_last_dimension() const
Return the tensor data as an Eigen::Tensor with the last dimension elements converted into single elements of a larger type.
For example, this is useful for kernels that can treat NCHW_VECT_C int8 tensors as NCHW int32 tensors. The sizeof(T) should equal the size of the original element type * num elements in the original last dimension. NDIMS should be 1 less than the original number of dimensions.
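A hedged sketch of the NCHW_VECT_C case mentioned above (all dimensions are illustrative):

```cpp
// int8 tensor laid out as {N, C/4, H, W, 4}; each group of 4 int8 values becomes one int32.
Tensor t(DT_INT8, TensorShape({1, 2, 3, 3, 4}));
auto as_int32 = t.reinterpret_last_dimension<int32, 4>();  // viewed as {1, 2, 3, 3} of int32
```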
scalar
TTypes< T >::Scalar scalar()
scalar
TTypes< T >::ConstScalar scalar() const
shape
const TensorShape & shape() const
Returns the shape of the tensor.
shaped
TTypes< T, NDIMS >::Tensor shaped( gtl::ArraySlice< int64 > new_sizes )
shaped
TTypes< T, NDIMS >::ConstTensor shaped( gtl::ArraySlice< int64 > new_sizes ) const
tensor
TTypes< T, NDIMS >::Tensor tensor()
tensor
TTypes< T, NDIMS >::ConstTensor tensor() const
tensor_data
StringPiece tensor_data() const
Returns a StringPiece mapping the current tensor's buffer.
The returned StringPiece may point to a memory location on devices that the CPU cannot address directly.
NOTE: The underlying tensor buffer is refcounted, so the lifetime of the contents mapped by the StringPiece matches the lifetime of the buffer; callers should arrange to make sure the buffer does not get destroyed while the StringPiece is still used.
REQUIRES: DataTypeCanUseMemcpy(dtype()).
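A minimal sketch of raw byte access on a CPU tensor:

```cpp
Tensor t(DT_FLOAT, TensorShape({4}));
StringPiece bytes = t.tensor_data();   // bytes.size() == 4 * sizeof(float)
```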
unaligned_flat
TTypes< T >::UnalignedFlat unaligned_flat()
unaligned_flat
TTypes< T >::UnalignedConstFlat unaligned_flat() const
unaligned_shaped
TTypes< T, NDIMS >::UnalignedTensor unaligned_shaped( gtl::ArraySlice< int64 > new_sizes )
unaligned_shaped
TTypes< T, NDIMS >::UnalignedConstTensor unaligned_shaped( gtl::ArraySlice< int64 > new_sizes ) const
vec
TTypes< T >::Vec vec()
Return the tensor data as an Eigen::Tensor
with the type and sizes of this Tensor
.
Use these methods when you know the data type and the number of dimensions of the Tensor and you want an Eigen::Tensor
automatically sized to the Tensor
sizes. The implementation check fails if either type or sizes mismatch.
Example:
typedef float T; Tensor my_mat(...built with Shape{rows: 3, cols: 5}...); auto mat = my_mat.matrix(); // 2D Eigen::Tensor, 3 x 5. auto mat = my_mat.tensor(); // 2D Eigen::Tensor, 3 x 5. auto vec = my_mat.vec(); // CHECK fails as my_mat is 2D. auto vec = my_mat.tensor(); // CHECK fails as my_mat is 2D. auto mat = my_mat.matrix();// CHECK fails as type mismatch.
vec
TTypes< T >::ConstVec vec() const
Const versions of all the methods above.
~Tensor
~Tensor()
© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 4.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r2.4/api_docs/cc/class/tensorflow/tensor