ST Edge AI Core

TensorFlow Lite toolbox support

ST Edge AI Core Technology 2.2.0


Overview

This document lists the layers (or operators) that can be imported and converted. The supported operators make it possible to address a large range of classical topologies targeting a mobile or IoT resource-constrained runtime environment: SqueezeNet, MobileNet V1 or V2, Inception, SSD MobileNet V1, and so on.

The purpose of this document is to list the operators and their associated constraints or limitations; please refer to the original documentation for details on a given layer.

TensorFlow Lite is the format used to deploy a neural network model on mobile platforms. ST Edge AI Core imports and converts .tflite files, which are based on the FlatBuffers technology. The official ‘schema.fbs’ definition (tag v2.18.0) is used to import the models. A number of operators from the supported operator list are handled, including the quantized models and/or operators generated by the quantization-aware training and/or post-training quantization processes.
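
As an illustration of the import flow, a minimal conversion sketch (not part of the ST tooling; the model, shapes, and file names are placeholders) that produces a .tflite flatbuffer which can then be passed to ST Edge AI Core:

    import tensorflow as tf

    # Toy Keras model; any supported topology can be used instead.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(32, 32, 3)),
        tf.keras.layers.Conv2D(8, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    # Convert to a .tflite flatbuffer (float32 weights and activations).
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_model = converter.convert()

    with open("model_fp32.tflite", "wb") as f:
        f.write(tflite_model)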

This file was automatically generated.

Summary table

The following table lists the operators that can be imported, provided the constraints or limitations are met (a short sketch for listing the operators present in a .tflite file follows the table).

  • supported optional fused activation (or non-linearity): gelu, linear, relu, quantized_relu, relu_n1_to_1, relu_0_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign, abs, acos, acosh, asin, asinh, atan, atanh, ceil, clip, cos, cosh, erf, flexerf, exp, floor, identity, log, logistic, neg, logical_not, prelu, probit, reciprocal, relu_generic, relu_thresholded, round, sign, sin, sinh, softmax_zero, sqrt, swish, tan
  • supported optional fused integer activation (or non-linearity): prelu, relu, clip, lut, swish, identity, relu6
  • mixed data operations (i.e. hybrid operators) are not natively supported; activations and weights should be quantized
  • if an operator is not supported in integer, the floating-point version is used; converters are automatically added by the code generator
operator data types constraints/limitations
ABS float32 common
ADD float32, int8, uint8 common
ARG_MAX float32, int32 common
ARG_MIN float32, int32 common
ATAN2 float32 common
AVERAGE_POOL_2D float32, int8, uint8 common, specific
BATCH_MATMUL float32, int8, uint8 common, specific
BATCH_TO_SPACE_ND float32, int8, uint8 common
BROADCAST_ARGS float32, int8, uint8, int32 common
BROADCAST_TO float32, int8, uint8 common, specific
CAST bool, int8, uint8, float32 common, specific
CEIL float32 common
CONCATENATION float32, int8, uint8 common, specific
CONV_2D float32, int8, uint8 common, specific
COS float32 common
DEPTH_TO_SPACE float32, int8, uint8 common, specific
DEPTHWISE_CONV_2D float32, int8, uint8 common, specific
DEQUANTIZE float32, int8, uint8 common
DIV float32, int8, uint8 common
ELU float32, int8, uint8 common
EQUAL float32, bool common
EXP float32 common
EXPAND_DIMS float32, int8, uint8 common
FILL float32 common
FlexErf float32 common
FLOOR float32 common
FLOOR_DIV float32 common
FLOOR_MOD float32 common
FULLY_CONNECTED float32, int8, uint8 common, specific
GATHER float32, int8, uint8 common, specific
GATHER_ND float32, int8, uint8 common, specific
GELU float32 common
GREATER float32, bool common
GREATER_EQUAL float32, bool common
HARD_SWISH float32 common
L2_NORMALIZATION float32 common
LEAKY_RELU float32, int8, uint8 common
LESS float32, bool common
LESS_EQUAL float32, bool common
LOCAL_RESPONSE_NORMALIZATION float32 common
LOG float32 common
LOG_SOFTMAX float32 common, specific
LOGICAL_AND bool common
LOGICAL_NOT float32 common
LOGICAL_OR bool common
LOGISTIC float32 common
MAX_POOL_2D float32, int8, uint8 common, specific
MAXIMUM float32, int8, uint8 common
MEAN float32 common
MINIMUM float32, int8, uint8 common
MIRROR_PAD float32 common, specific
MUL float32, int8, uint8 common
NEG float32 common
NOT_EQUAL float32 common
PACK float32, int8, uint8 common, specific
PAD float32 common, specific
PADV2 float32 common, specific
POW float32 common
PRELU float32, int8, uint8 common
QUANTIZE float32, int8, uint8 common
REDUCE_ANY float32 common
REDUCE_MAX float32 common
REDUCE_MIN float32 common
REDUCE_PROD float32 common
RELU float32, int8, uint8 common
RELU6 float32, int8, uint8 common
RELU_0_TO_1 float32, int8, uint8 common
RELU_N1_TO_1 float32, int8, uint8 common
RESHAPE float32, int8, uint8 common
RESIZE_BILINEAR float32 common
RESIZE_NEAREST_NEIGHBOR float32 common
REVERSE_V2 float32, int8, uint8 common, specific
ROUND float32 common
RSQRT float32 common
SCATTER_ND float32, int8, uint8 common, specific
SELECT float32, int8, uint8, int16, uint16, int32, uint32, bool common
SELECT_V2 float32, int8, uint8, int16, uint16, int32, uint32, bool common
SHAPE float32, int8, uint8, int32 common
SIGN float32 common
SIN float32 common
SLICE float32, int8, uint8 common
SOFTMAX float32 common, specific
SPACE_TO_BATCH_ND float32, int8, uint8 common
SPACE_TO_DEPTH float32, int8, uint8 common
SPLIT float32, int8, uint8 common, specific
SPLIT_V float32, int8, uint8 common, specific
SQRT float32 common
SQUARE float32 common
SQUARED_DIFFERENCE float32 common
SQUEEZE float32, int8, uint8 common
STRIDED_SLICE float32, int8, uint8 common
SUB float32, int8, uint8 common
SUM float32 common
TANH float32 common
TILE float32, int8, uint8 common, specific
TOPK_V2 float32, int8, uint8 common
TRANSPOSE float32, int8, uint8 common, specific
TRANSPOSE_CONV float32, int8, uint8 common, specific
UNIDIRECTIONAL_SEQUENCE_LSTM float32, int8 common, specific
UNPACK float32, int8, uint8 common
ZEROS_LIKE float32 common
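
To check a given model against this table, the operators actually present in a .tflite file can be listed, for example with the TensorFlow model analyzer (a hedged sketch; the API is available in recent TensorFlow releases and ‘model.tflite’ is a placeholder path):

    import tensorflow as tf

    # Prints the graph structure of the flatbuffer, including the builtin
    # operator used by each node, so it can be compared with the table above.
    tf.lite.experimental.Analyzer.analyze(model_path="model.tflite")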

Common constraints

  • input and output tensors must not be dynamic
    • a variable-length batch dimension (i.e. (None,)) is considered equal to 1
    • tensors must not have more than 6 dimensions
    • each dimension must be in the range [0, 65536[
    • the batch dimension is partially supported in the axis/axes parameters
  • the data type of the weights/activations tensors must be:
    • float32, int8, uint8
    • only int32 is considered for the bias tensor
    • for some operators, bool and binary types are also supported
  • a 1D operator is mapped onto the respective 2D operator by adding a singleton dimension to the input: (12, 3) -> (12, 1, 3); a short shape-checking sketch follows this list
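
As a quick check of these shape constraints, a minimal sketch (placeholder model path and shapes) using the TFLite interpreter to inspect the static input shape and to pin a variable batch dimension to 1:

    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="model.tflite")
    inp = interpreter.get_input_details()[0]
    # 'shape' is the static shape; a -1 in 'shape_signature' marks a dynamic axis.
    print(inp["shape"], inp["shape_signature"])

    # Pin a (None, 12, 3) input to batch = 1, matching the constraint above.
    interpreter.resize_tensor_input(inp["index"], [1, 12, 3])
    interpreter.allocate_tensors()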

Operators

ABS

Applies an activation function to the input tensor

  • category: activation layer
  • input data types: float32
  • output data types: float32

ADD

Performs element-wise operation

  • category: eltwise operator
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

ARG_MAX

Computes the indices of the max elements of the input tensor along the provided axis.

  • category: generic layer
  • input data types: float32
  • output data types: int32

ARG_MIN

Computes the indices of the min elements of the input tensor along the provided axis.

  • category: generic layer
  • input data types: float32
  • output data types: int32

ATAN2

Performs element-wise operation

  • category: eltwise operator
  • input data types: float32
  • output data types: float32

AVERAGE_POOL_2D

Downsamples the input

  • category: pooling layer
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

Specific constraints/recommendations:

  • arbitrary strides, provided that they are smaller than the input size
  • arbitrary pool sizes, provided that they are smaller than the input size

BATCH_MATMUL

Fully Connected operation

  • category: core layer
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8
  • fused activations (if present): gelu, linear, relu, quantized_relu, relu_n1_to_1, relu_0_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign
  • integer schemes: weights / activations
    • Signed Symmetric / Signed Asymmetric (SSSA)
    • Signed Symmetric per channel (or per-axis) / Signed Asymmetric (SSSA_CH)
    • Signed Symmetric / Unsigned Asymmetric (SSUA)
    • Signed Symmetric per channel (or per-axis) / Unsigned Asymmetric (SSUA_CH)

Specific constraints/recommendations:

  • Only up to 3D matrix multiplication is supported

BATCH_TO_SPACE_ND

Reshapes the batch dimension of a tensor

  • category: Reshaping operation
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

BROADCAST_ARGS

Returns a tensor containing the shape resulting from broadcasting the two input shapes

  • category: Reshaping operation
  • input data types: float32, int8, uint8
  • output data types: int32

BROADCAST_TO

Constructs a tensor by tiling the input tensor

  • category: reshaping layer
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

Specific constraints/recommendations:

  • tiling on batch-dimension is not supported

CAST

Casts the elements of the input tensor to the specified output data type

  • category: conversion layer
  • input data types: bool, int8, uint8, float32
  • output data types: bool, int8, uint8, float32

Specific constraints/recommendations:

  • a saturate attribute different from its default value is not supported

CEIL

Applies an activation function to the input tensor

  • category: activation layer
  • input data types: float32
  • output data types: float32

CONCATENATION

Performs concatenation of a list of inputs

  • category: merge operator
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8
  • fused activations (if present): gelu, linear, relu, quantized_relu, relu_n1_to_1, relu_0_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign

Specific constraints/recommendations:

  • concatenating on the batch dimension is not supported

CONV_2D

Performs convolution operation

  • category: convolutional layer
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8
  • fused activations (if present): gelu, linear, relu, quantized_relu, relu_n1_to_1, relu_0_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign
  • integer schemes: weights / activations
    • Signed Symmetric / Signed Asymmetric (SSSA)
    • Signed Symmetric per channel (or per-axis) / Signed Asymmetric (SSSA_CH)
    • Signed Symmetric / Unsigned Asymmetric (SSUA)
    • Signed Symmetric per channel (or per-axis) / Unsigned Asymmetric (SSUA_CH)

Specific constraints/recommendations:

  • arbitrary strides, provided that they are smaller than the input size
  • arbitrary filter kernel sizes, provided that they are smaller than the input size
  • dilation factors different from 1 are not supported for int8 models (a post-training quantization sketch follows this entry)
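
The integer schemes listed above (for example SSSA_CH, i.e. per-channel signed symmetric weights with signed asymmetric activations) are typically obtained with post-training integer quantization. A hedged sketch, assuming a placeholder Keras model and random calibration data:

    import numpy as np
    import tensorflow as tf

    # Placeholder model containing a CONV_2D layer.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(32, 32, 3)),
        tf.keras.layers.Conv2D(8, 3, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10),
    ])

    def representative_dataset():
        # Calibration samples with the model's input shape; real data should
        # be used in practice.
        for _ in range(100):
            yield [np.random.rand(1, 32, 32, 3).astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8

    with open("model_int8.tflite", "wb") as f:
        f.write(converter.convert())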

COS

Applies an activation function to the input tensor

  • category: activation layer
  • input data types: float32
  • output data types: float32

DEPTH_TO_SPACE

Permutes the dimensions of the input according to a given pattern

  • category: reshaping layer
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

Specific constraints/recommendations:

  • transposing the batch dimension is not supported

DEPTHWISE_CONV_2D

Performs convolution operation

  • category: convolutional layer
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8
  • fused activations (if present): gelu, linear, relu, quantized_relu, relu_n1_to_1, relu_0_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign
  • integer schemes: weights / activations
    • Signed Symmetric / Signed Asymmetric (SSSA)
    • Signed Symmetric per channel (or per-axis) / Signed Asymmetric (SSSA_CH)
    • Signed Symmetric / Unsigned Asymmetric (SSUA)
    • Signed Symmetric per channel (or per-axis) / Unsigned Asymmetric (SSUA_CH)

Specific constraints/recommendations:

  • arbitrary strides, provided that they are smaller than the input size
  • arbitrary filter kernel sizes, provided that they are smaller than the input size
  • dilation factors different from 1 are not supported for int8 models

DEQUANTIZE

Computes element-wise data conversion from low precision to full precision, based on the scale/zero-point parameters (a short numeric sketch follows this entry)

  • category: conversion layer
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8
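
A minimal numeric sketch of the affine mapping used by QUANTIZE and DEQUANTIZE (the per-tensor scale and zero-point values below are illustrative):

    import numpy as np

    scale, zero_point = 0.05, -3  # illustrative per-tensor parameters

    def quantize(x):
        # float32 -> int8: q = round(x / scale) + zero_point, clamped to the int8 range
        return np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)

    def dequantize(q):
        # int8 -> float32: x ~= scale * (q - zero_point)
        return scale * (q.astype(np.float32) - zero_point)

    x = np.array([-1.0, 0.0, 0.42, 2.5], dtype=np.float32)
    print(quantize(x))               # [-23  -3   5  47]
    print(dequantize(quantize(x)))   # approximately the original values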

DIV

Performs element-wise operation

  • category: eltwise operator
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

ELU

Applies an activation function to the input tensor

  • category: activation layer
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

EQUAL

Performs logical element-wise operation

  • category: eltwise operator
  • input data types: float32
  • output data types: bool

EXP

Applies an activation function to the input tensor

  • category: activation layer
  • input data types: float32
  • output data types: float32

EXPAND_DIMS

Reshapes a tensor

  • category: Reshaping operation
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

FILL

Generates a tensor with given value and shape

  • category: constant layer
  • input data types: float32
  • output data types: float32

FlexErf

Applies an activation function to the input tensor

  • category: activation layer
  • input data types: float32
  • output data types: float32

FLOOR

Applies an activation function to the input tensor

  • category: activation layer
  • input data types: float32
  • output data types: float32

FLOOR_DIV

Performs element-wise operation

  • category: eltwise operator
  • input data types: float32
  • output data types: float32

FLOOR_MOD

Performs element-wise operation

  • category: eltwise operator
  • input data types: float32
  • output data types: float32

FULLY_CONNECTED

Fully Connected operation

  • category: core layer
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8
  • fused activations (if present): gelu, linear, relu, quantized_relu, relu_n1_to_1, relu_0_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign
  • integer schemes: weights / activations
    • Signed Symmetric / Signed Asymmetric (SSSA)
    • Signed Symmetric per channel (or per-axis) / Signed Asymmetric (SSSA_CH)
    • Signed Symmetric / Unsigned Asymmetric (SSUA)
    • Signed Symmetric per channel (or per-axis) / Unsigned Asymmetric (SSUA_CH)

Specific constraints/recommendations:

  • Only up to 3D matrix multiplication is supported

GATHER

Gathers values along a specified axis

  • category: activation function
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

Specific constraints/recommendations:

  • Gather is not supported with indices dimensions > 2 (batch is not considered), with axis > 3, or with axis = 0; the batch_dims attribute is not handled

GATHER_ND

Gathers slices from input tensor into an output tensor with shape specified by indices

  • category: activation function
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

Specific constraints/recommendations:

  • the batch_dims attribute is not handled; indices with 4 or more dimensions (batch not included) are not implemented; the case of a last indices dimension > 4 with a 2D input tensor is not handled

GELU

Applies an activation function to the input tensor

  • category: activation layer
  • input data types: float32
  • output data types: float32

GREATER

Performs logical element-wise operation

  • category: eltwise operator
  • input data types: float32
  • output data types: bool

GREATER_EQUAL

Performs logical element-wise operation

  • category: eltwise operator
  • input data types: float32
  • output data types: bool

HARD_SWISH

Applies an activation function to the input tensor

  • category: activation layer
  • input data types: float32
  • output data types: float32

L2_NORMALIZATION

Applies Lp-normalization along the provided axis

  • category: normalization function
  • input data types: float32
  • output data types: float32

LEAKY_RELU

Applies an activation function to the input tensor

  • category: activation layer
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

LESS

Performs logical element-wise operation

  • category: eltwise operator
  • input data types: float32
  • output data types: bool

LESS_EQUAL

Performs logical element-wise operation

  • category: eltwise operator
  • input data types: float32
  • output data types: bool

LOCAL_RESPONSE_NORMALIZATION

Applies Local Response Normalization over local input regions

  • category: normalization function
  • input data types: float32
  • output data types: float32

LOG

Applies an activation function to the input tensor

  • category: activation layer
  • input data types: float32
  • output data types: float32

LOG_SOFTMAX

Applies an activation function to the input tensor

  • category: activation layer
  • input data types: float32
  • output data types: float32

Specific constraints/recommendations:

  • It is supported only for 1D tensor and only on the channel dimension

LOGICAL_AND

Performs boolean element-wise operation

  • category: eltwise operator
  • input data types: bool
  • output data types: bool

LOGICAL_NOT

Applies an activation function to the input tensor

  • category: activation layer
  • input data types: float32
  • output data types: float32

LOGICAL_OR

Performs boolean element-wise operation

  • category: eltwise operator
  • input data types: bool
  • output data types: bool

LOGISTIC

Applies an activation function to the input tensor

  • category: activation layer
  • input data types: float32
  • output data types: float32

MAX_POOL_2D

Downsamples the input

  • category: pooling layer
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

Specific constraints/recommendations:

  • arbitrary strides, provided that they are smaller than the input size
  • arbitrary pool sizes, provided that they are smaller than the input size

MAXIMUM

Computes the maximum (element-wise) of a list of inputs

  • category: eltwise operator
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

MEAN

Computes the Mean of the input tensor’s elements along the provided axes

  • category: reduction operation
  • input data types: float32
  • output data types: float32

MINIMUM

Computes the minimum (element-wise) of a list of inputs

  • category: eltwise operator
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

MIRROR_PAD

Pads an input tensor

  • category: Reshaping layer
  • input data types: float32
  • output data types: float32

Specific constraints/recommendations:

  • padding ‘edge’ is not supported

MUL

Performs element-wise operation

  • category: eltwise operator
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

NEG

Applies an activation function to the input tensor

  • category: activation layer
  • input data types: float32
  • output data types: float32

NOT_EQUAL

Performs element-wise operation

  • category: eltwise operator
  • input data types: float32
  • output data types: float32

PACK

Packs a list of tensors into a tensor along a specified axis

  • category: merge operator
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

Specific constraints/recommendations:

  • related TF operator: tf.stack

PAD

Pads an input tensor

  • category: Reshaping layer
  • input data types: float32
  • output data types: float32

Specific constraints/recommendations:

  • padding ‘edge’ is not supported

PADV2

Pads an input tensor

  • category: Reshaping layer
  • input data types: float32
  • output data types: float32

Specific constraints/recommendations:

  • padding ‘edge’ is not supported

POW

Performs element-wise operation

  • category: eltwise operator
  • input data types: float32
  • output data types: float32

PRELU

Applies an activation function to the input tensor

  • category: activation layer
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

QUANTIZE

Computes element-wise data conversion from full precision to low precision, based on the scale/zero-point parameters

  • category: conversion layer
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

REDUCE_ANY

Computes the logical ‘or’ of elements across dimensions of a tensor

  • category: reduction operation
  • input data types: float32
  • output data types: float32

REDUCE_MAX

Computes the Max of the input tensor’s elements along the provided axes

  • category: reduction operation
  • input data types: float32
  • output data types: float32

REDUCE_MIN

Computes the Min of the input tensor’s elements along the provided axes

  • category: reduction operation
  • input data types: float32
  • output data types: float32

REDUCE_PROD

Computes the Product of the input tensor’s elements along the provided axes

  • category: reduction operation
  • input data types: float32
  • output data types: float32

RELU

Applies an activation function to the input tensor

  • category: activation layer
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

RELU6

Applies an activation function to the input tensor

  • category: activation layer
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

RELU_0_TO_1

Applies an activation function to the input tensor

  • category: activation layer
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

RELU_N1_TO_1

Applies an activation function to the input tensor

  • category: activation layer
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

RESHAPE

Reshapes a tensor

  • category: Reshaping operation
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

RESIZE_BILINEAR

Resizes the input tensor using bilinear interpolation

  • category: resizing operation
  • input data types: float32
  • output data types: float32

RESIZE_NEAREST_NEIGHBOR

Resizes the input tensor using nearest-neighbor interpolation

  • category: resizing operation
  • input data types: float32
  • output data types: float32

REVERSE_V2

Reverses specific dimensions of a tensor

  • category: Reshape layer
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

Specific constraints/recommendations:

  • reversing the batch dimension is not supported
  • reversing more than one axis is not supported

ROUND

Applies an activation function to the input tensor

  • category: activation layer
  • input data types: float32
  • output data types: float32

RSQRT

Applies an activation function to the input tensor

  • category: activation layer
  • input data types: float32
  • output data types: float32

SCATTER_ND

Scatters updates into a tensor whose shape equals the shape attribute, according to indices (TFLite). The output of the ScatterND layer is produced by creating a copy of the input data and then updating its values to the values specified by updates at the index positions specified by indices (ONNX)

  • category: activation function
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

Specific constraints/recommendations:

  • indices with 5 or more dimensions (batch not included) are not implemented; data with 5 or more dimensions (batch not included) are not implemented

SELECT

Where layer

  • category: generic layer
  • input data types: float32, int8, uint8, int16, uint16, int32, uint32, bool
  • output data types: float32, int8, uint8, int16, uint16, int32, uint32, bool

SELECT_V2

Where layer

  • category: generic layer
  • input data types: float32, int8, uint8, int16, uint16, int32, uint32, bool
  • output data types: float32, int8, uint8, int16, uint16, int32, uint32, bool

SHAPE

Returns a tensor containing the shape of the input tensor

  • category: Reshaping operation
  • input data types: float32, int8, uint8
  • output data types: int32

SIGN

Applies an activation function to the input tensor

  • category: activation layer
  • input data types: float32
  • output data types: float32

SIN

Applies an activation function to the input tensor

  • category: activation layer
  • input data types: float32
  • output data types: float32

SLICE

Crops the input

  • category: reshaping layer
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

SOFTMAX

Applies an activation function to the input tensor

  • category: activation layer
  • input data types: float32
  • output data types: float32

Specific constraints/recommendations:

  • It is supported only for 1D tensor and only on the channel dimension

SPACE_TO_BATCH_ND

Divides spatial dimensions

  • category: Reshaping operation
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

SPACE_TO_DEPTH

Rearranges blocks of spatial data into depth

  • category: reshaping layer
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

SPLIT

Splits a tensor into a list of sub tensors

  • category: split operator
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

Specific constraints/recommendations:

  • Only supported if the number of splits is equal to the size of the splitting dimension

SPLIT_V

Splits a tensor into a list of sub tensors

  • category: split operator
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

Specific constraints/recommendations:

  • Only supported if the number of splits is equal to the size of the splitting dimension

SQRT

Applies an activation function to the input tensor

  • category: activation layer
  • input data types: float32
  • output data types: float32

SQUARE

Applies an activation function to the input tensor

  • category: activation layer
  • input data types: float32
  • output data types: float32

SQUARED_DIFFERENCE

Computes (x - y) * (x - y) element-wise

  • category: eltwise operator
  • input data types: float32
  • output data types: float32

SQUEEZE

Reshapes a tensor

  • category: Reshaping operation
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

STRIDED_SLICE

Crops the input

  • category: reshaping layer
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

SUB

Performs element-wise operation

  • category: eltwise operator
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

SUM

Computes the Sum of the input tensor’s elements along the provided axes

  • category: reduction operation
  • input data types: float32
  • output data types: float32

TANH

Applies an activation function to the input tensor

  • category: activation layer
  • input data types: float32
  • output data types: float32

TILE

Constructs a tensor by tiling the input tensor

  • category: reshaping layer
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

Specific constraints/recommendations:

  • tiling on batch-dimension is not supported

TOPK_V2

Retrieves the top-K largest or smallest elements along a specified axis

  • category: topK operator
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

TRANSPOSE

Permutes the dimensions of the input according to a given pattern

  • category: reshaping layer
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

Specific constraints/recommendations:

  • transposing the batch dimension is not supported

TRANSPOSE_CONV

Transposed convolutional layer

  • category: convolutional layer
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

Specific constraints/recommendations:

  • arbitrary strides, provided that they are smaller than the input size
  • arbitrary filter kernel sizes, provided that they are smaller than the input size

UNIDIRECTIONAL_SEQUENCE_LSTM

Applies a multi-layer long short-term memory (LSTM) RNN to an input sequence of shape (batch=1, timesteps, features); a matching Keras conversion sketch follows this entry

  • category: recurrent layer
  • input data types: float32, int8
  • output data types: float32, int8

Specific constraints/recommendations:

  • stateless mode support only
  • fused activation: sigmoid
  • fused recurrent activation: sigmoid
  • return_state not supported
  • time_major not supported
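
A hedged Keras sketch matching the constraints above (stateless, batch size 1, no returned state, default time-major layout; the shapes and units are illustrative), which is expected to lower to a fused UNIDIRECTIONAL_SEQUENCE_LSTM operator when converted:

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(16, 8), batch_size=1),  # (batch=1, timesteps, features)
        tf.keras.layers.LSTM(32, stateful=False, return_sequences=True),
        tf.keras.layers.Dense(4),
    ])

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    with open("lstm.tflite", "wb") as f:
        f.write(converter.convert())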

UNPACK

Unpacks num tensors from the input value along the specified axis

  • category: split operator
  • input data types: float32, int8, uint8
  • output data types: float32, int8, uint8

ZEROS_LIKE

Generates a tensor filled with zeros, with the same shape as the input

  • category: zeros like
  • input data types: float32
  • output data types: float32