Keras toolbox support
ST Edge AI Core Technology 2.2.0
Overview
This document lists the layers (or operators) that can be imported and converted. The supported operators cover a large range of classical topologies targeting mobile or IoT resource-constrained runtime environments: SqueezeNet, MobileNet V1 or V2, Inception, SSD MobileNet v1, and so on.
The purpose of this document is to list the operators and their associated constraints or limitations; for details on a given layer, please refer to the original Keras documentation.
Keras 3 (keras.io) is supported through the TensorFlow backend with channels-last dimension ordering. Keras 2.0 up to version 2.15 is supported through tf_keras, while networks defined in Keras 1.x are not officially supported.
This file was automatically generated.
- ST Edge AI Core version: 2.2
- 110 operators found
- 28 custom operators found
Summary table
The following table contains the list of operators that can be imported, provided the constraints or limitations are met. The 28 custom operators are listed in the next table.
- supported optional fused activation (or non-linearity): gelu, linear, relu, quantized_relu, relu_n1_to_1, relu_0_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign, abs, acos, acosh, asin, asinh, atan, atanh, ceil, clip, cos, cosh, erf, flexerf, exp, floor, identity, log, logistic, neg, logical_not, prelu, probit, reciprocal, relu_generic, relu_thresholded, round, sign, sin, sinh, softmax_zero, sqrt, swish, tan
- supported optional fused integer activation (or non-linearity): prelu, relu, clip, lut, swish, identity, relu6
- integer operations for Keras models are only supported through the QKeras and Larq extensions (see the sketch below)
- if an operator is not supported in integer, the floating-point version is used; converters are automatically added by the code generator
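For reference, a minimal Larq sketch (layer sizes and quantizer choices are illustrative, following the public Larq API) showing how a binarized layer is expressed; the surrounding float layers are handled by the automatically inserted converters:

```python
import tensorflow as tf
import larq

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    larq.layers.QuantDense(
        64,
        input_quantizer="ste_sign",      # binary activations
        kernel_quantizer="ste_sign",     # binary weights
        kernel_constraint="weight_clip",
    ),
    tf.keras.layers.Dense(10, activation="softmax"),  # stays in float32
])
```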
Custom operators
The following table contains the list of the custom Keras 2 operators that can be imported.
Operator | Data types | Constraints/Limitations |
---|---|---|
Abs | float32 | specific |
Acos | float32 | specific |
Acosh | float32 | specific |
Asin | float32 | specific |
Asinh | float32 | specific |
Atan | float32 | specific |
Atanh | float32 | specific |
Ceil | float32 | specific |
Clip | float32 | specific |
Cos | float32 | specific |
Exp | float32 | specific |
Fill | float32 | common, specific |
FloorDiv | float32 | specific |
FloorMod | float32 | specific |
Log | float32 | specific |
Pow | float32 | specific |
Reciprocal | float32 | specific |
Reshape | float32, int8, uint8 | common, specific |
Round | float32 | specific |
Shape | float32, int8, uint8, int32 | common, specific |
Sign | float32 | specific |
Sin | float32 | specific |
Split | float32, int8, uint8 | common, specific |
Sqrt | float32 | specific |
Square | float32 | specific |
Tanh | float32 | specific |
Unpack | float32, int8, uint8 | common, specific |
Where | float32, int8, uint8, int16, uint16, int32, uint32, bool | common, specific |
Common constraints
- input and output tensors must not be dynamic:
  - a variable-length batch dimension (i.e. (None,)) is considered equal to 1
  - tensors must not be greater than 6D
  - each dimension must be in the range [0, 65536[
  - the batch dimension is partially supported in the axis/axes parameters
- data types for the weights/activations tensors must be:
  - float32, int8, uint8
  - only int32 is considered for the bias tensor
  - for some operators, bool and binary types are also supported
- a 1D operator is mapped onto the respective 2D operator by adding a singleton dimension to the input: (12, 3) -> (12, 1, 3)
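The 1D-to-2D mapping of the last bullet can be pictured with plain NumPy (a sketch; the shapes are taken from the example above):

```python
import numpy as np

x_1d = np.zeros((12, 3), dtype=np.float32)  # input shape seen by a 1D operator
x_2d = np.expand_dims(x_1d, axis=1)         # singleton dimension added: (12, 1, 3)
assert x_2d.shape == (12, 1, 3)
```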
Operators
ops.absolute
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
layers.Activation
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
- fused activations (if present): gelu, linear, relu, quantized_relu, relu_n1_to_1, relu_0_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign
Specific constraints/recommendations:
- softmax is always conserved in float32, converters are added if necessary
layers.ActivityRegularization
Applies activity regularization to the input
- category: regularization layers
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- operator is dropped during the conversion
layers.Add
Performs element-wise operation
- category: eltwise operator
- input data types: float32
- output data types: float32
layers.AlphaDropout
Applies Dropout to the input
- category: regularization layers
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- operator is dropped during the conversion
ops.arccos
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
ops.arccosh
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
ops.arcsin
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
ops.arcsinh
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
ops.arctan
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
ops.arctan2
Performs element-wise operation
- category: eltwise operator
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
ops.arctanh
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
ops.argmax
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
ops.argmin
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
layers.Average
Performs element-wise operation
- category: eltwise operator
- input data types: float32
- output data types: float32
layers.AveragePooling1D
Downsamples the input
- category: pooling layer
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
Specific constraints/recommendations:
- arbitrary strides, provided that they are smaller than the input size
- arbitrary pool sizes, provided that they are smaller than the input size
layers.AveragePooling2D
Downsamples the input
- category: pooling layer
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
Specific constraints/recommendations:
- arbitrary strides, provided that they are smaller than the input size
- arbitrary pool sizes, provided that they are smaller than the input size
layers.BatchNormalization
Performs the normalization of the input
- category: normalization layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Only normalization on the last axis (channels) is supported
layers.Bidirectional
Bidirectional wrapper for RNNs
- category: recurrent layer
- input data types: float32
- output data types: float32
ops.ceil
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
layers.Concatenate
Performs concatenation of a list of inputs
- category: merge operator
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- concatenating on the batch dimension is not supported
layers.Conv1D
Performs convolution operation
- category: convolutional layer
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
- fused activations (if present): gelu, linear, relu, quantized_relu, relu_n1_to_1, relu_0_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign
- integer schemes: weights / activations
  - Signed Symmetric / Signed Asymmetric (SSSA)
  - Signed Symmetric per channel (or per-axis) / Signed Asymmetric (SSSA_CH)
  - Signed Symmetric / Unsigned Asymmetric (SSUA)
  - Signed Symmetric per channel (or per-axis) / Unsigned Asymmetric (SSUA_CH)
  - Unsigned Asymmetric / Unsigned Asymmetric (UAUA)
  - Unsigned Asymmetric per channel (or per-axis) / Unsigned Asymmetric (UAUA_CH)
Specific constraints/recommendations:
- arbitrary strides, provided that they are smaller than the input size
- arbitrary filter kernel sizes, provided that they are smaller than the input size
- dilation factors different from 1 are not supported for int8 models
layers.Conv1DTranspose
Transposed convolutional layer
- category: convolutional layer
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
- fused activations (if present): gelu, linear, relu, quantized_relu, relu_n1_to_1, relu_0_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign
Specific constraints/recommendations:
- arbitrary strides, provided that they are smaller than the input size
- arbitrary filter kernel sizes, provided that they are smaller than the input size
layers.Conv2D
Performs convolution operation
- category: convolutional layer
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
- fused activations (if present): gelu, linear, relu, quantized_relu, relu_n1_to_1, relu_0_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign
- integer schemes: weights / activations
  - Signed Symmetric / Signed Asymmetric (SSSA)
  - Signed Symmetric per channel (or per-axis) / Signed Asymmetric (SSSA_CH)
  - Signed Symmetric / Unsigned Asymmetric (SSUA)
  - Signed Symmetric per channel (or per-axis) / Unsigned Asymmetric (SSUA_CH)
  - Unsigned Asymmetric / Unsigned Asymmetric (UAUA)
  - Unsigned Asymmetric per channel (or per-axis) / Unsigned Asymmetric (UAUA_CH)
Specific constraints/recommendations:
- arbitrary strides, provided that they are smaller than the input size
- arbitrary filter kernel sizes, provided that they are smaller than the input size
- dilation factors different from 1 are not supported for int8 models
layers.Conv2DTranspose
Transposed convolutional layer
- category: convolutional layer
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
- fused activations (if present): gelu, linear, relu, quantized_relu, relu_n1_to_1, relu_0_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign
Specific constraints/recommendations:
- arbitrary strides, provided that they are smaller than the input size
- arbitrary filter kernel sizes, provided that they are smaller than the input size
ops.cos
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
ops.cosh
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
layers.Cropping1D
Crops the input
- category: reshaping layer
- input data types: float32
- output data types: float32
layers.Cropping2D
Crops the input
- category: reshaping layer
- input data types: float32
- output data types: float32
CustomDoReFa
Performs element-wise data conversion from full precision to low precision, based on the scale/zero-point parameters
- category: conversion layer
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
CustomDoReFaQuantizer
Performs element-wise data conversion from full precision to low precision, based on the scale/zero-point parameters
- category: conversion layer
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
layers.Lambda
Wraps arbitrary expressions
- category: custom layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- The wrapped operation is not compiled; the conversion may fail if the wrapped TF operator is unsupported
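As an illustration, a Lambda wrapping tf.math.sqrt maps onto the supported custom Sqrt operator listed later in this document (a minimal sketch; the input shape is illustrative):

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(16,))
# tf.math.sqrt is a supported wrapped operator (see the custom operators section)
outputs = tf.keras.layers.Lambda(tf.math.sqrt)(inputs)
model = tf.keras.Model(inputs, outputs)
```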
layers.Dense
Fully Connected operation
- category: core layer
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
- fused activations (if present): gelu, linear, relu, quantized_relu, relu_n1_to_1, relu_0_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign
- integer schemes: weights / activations
  - Signed Symmetric / Signed Asymmetric (SSSA)
  - Signed Symmetric per channel (or per-axis) / Signed Asymmetric (SSSA_CH)
  - Signed Symmetric / Unsigned Asymmetric (SSUA)
  - Signed Symmetric per channel (or per-axis) / Unsigned Asymmetric (SSUA_CH)
  - Unsigned Asymmetric / Unsigned Asymmetric (UAUA)
  - Unsigned Asymmetric per channel (or per-axis) / Unsigned Asymmetric (UAUA_CH)
Specific constraints/recommendations:
- for the floating point model, weights and/or bias can be compressed during the code generation
layers.DepthwiseConv2D
Performs convolution operation
- category: convolutional layer
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
- fused activations (if present): gelu, linear, relu, quantized_relu, relu_n1_to_1, relu_0_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign
- integer schemes: weights / activations
  - Signed Symmetric / Signed Asymmetric (SSSA)
  - Signed Symmetric per channel (or per-axis) / Signed Asymmetric (SSSA_CH)
  - Signed Symmetric / Unsigned Asymmetric (SSUA)
  - Signed Symmetric per channel (or per-axis) / Unsigned Asymmetric (SSUA_CH)
  - Unsigned Asymmetric / Unsigned Asymmetric (UAUA)
  - Unsigned Asymmetric per channel (or per-axis) / Unsigned Asymmetric (UAUA_CH)
Specific constraints/recommendations:
- arbitrary strides, provided that they are smaller than the input size
- arbitrary filter kernel sizes, provided that they are smaller than the input size
- dilation factors different from 1 are not supported for int8 models
ops.divide
Performs element-wise operation
- category: eltwise operator
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
layers.Dropout
Applies Dropout to the input
- category: regularization layers
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- operator is dropped during the conversion
layers.ELU
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
ops.equal
Performs logical element-wise operation
- category: eltwise operator
- input data types: float32
- output data types: bool
Specific constraints/recommendations:
- Supported only in Keras 3 models
ops.erf
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
layers.Flatten
Flattens the non-batch input dimensions to a vector
- category: Reshaping operation
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
Specific constraints/recommendations:
- Flatten on the batch dimension is not supported
- operator is dropped during the conversion
ops.floor
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
ops.floor_divide
Performs element-wise operation
- category: eltwise operator
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
layers.GaussianDropout
Applies Dropout to the input
- category: regularization layers
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- operator is dropped during the conversion
layers.GaussianNoise
Applies additive Gaussian noise to the input
- category: regularization layers
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- operator is dropped during the conversion
layers.GlobalAveragePooling1D
Downsamples the input
- category: pooling layer
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
layers.GlobalAveragePooling2D
Downsamples the input
- category: pooling layer
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
layers.GlobalMaxPooling1D
Downsamples the input
- category: pooling layer
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
layers.GlobalMaxPooling2D
Downsamples the input
- category: pooling layer
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
ops.greater
Performs logical element-wise operation
- category: eltwise operator
- input data types: float32
- output data types: bool
ops.greater_equal
Performs element-wise operation
- category: eltwise operator
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
layers.GRU
Gated Recurrent Unit
- category: recurrent layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- stateless and stateful (batch=1 only) mode support
- fused activation: gelu, linear, relu, quantized_relu, relu_n1_to_1, relu_0_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign
- fused recurrent activation: gelu, linear, relu, quantized_relu, relu_n1_to_1, relu_0_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign
- return_state is not supported
ops.hard_silu
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
layers.InputLayer
Optional placeholder for the network’s input
- category: core layer
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
Specific constraints/recommendations:
- operator is dropped during the conversion
layers.LeakyReLU
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
ops.leaky_relu
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
ops.less
Performs logical element-wise operation
- category: eltwise operator
- input data types: float32
- output data types: bool
Specific constraints/recommendations:
- Supported only in Keras 3 models
ops.less_equal
Performs element-wise operation
- category: eltwise operator
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
ops.log
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
ops.logical_and
Performs element-wise operation
- category: eltwise operator
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
ops.logical_not
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
ops.logical_or
Performs element-wise operation
- category: eltwise operator
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
ops.logical_xor
Performs element-wise operation
- category: eltwise operator
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
layers.LSTM
Applies a multi-layer long short-term memory (LSTM) RNN to an input sequence (batch=1, timesteps, features)
- category: recurrent layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- stateless and stateful (batch=1 only) mode support
- in stateful mode, the user is requested to define two C routines to allocate and deallocate the internal layer states; the initial state must be provided as part of the allocation routine implementation
- the two functions to implement are:
  void _allocate_lstm_states(ai_float **states, ai_u32 size_in_bytes)
  void _deallocate_lstm_states(ai_float **states)
- fused activation: gelu, linear, relu, quantized_relu, relu_n1_to_1, relu_0_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign
- fused recurrent activation: gelu, linear, relu, quantized_relu, relu_n1_to_1, relu_0_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign
- return_state is not supported (see the sketch below)
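A minimal sketch of a stateful LSTM definition compatible with the constraints above (sizes are illustrative; note the fixed batch size of 1 and return_state left at its default False):

```python
import tensorflow as tf

# Stateful mode requires a fixed batch size of 1.
inputs = tf.keras.Input(batch_shape=(1, 16, 8))       # (batch, timesteps, features)
x = tf.keras.layers.LSTM(32, stateful=True)(inputs)   # return_state=False (default)
outputs = tf.keras.layers.Dense(4)(x)
model = tf.keras.Model(inputs, outputs)
```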
ops.maximum
Computes the element-wise maximum of a list of inputs
- category: eltwise operator
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
layers.MaxPooling1D
Downsamples the input
- category: pooling layer
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
Specific constraints/recommendations:
- arbitrary strides, provided that they are smaller than the input size
- arbitrary pool sizes, provided that they are smaller than the input size
layers.MaxPooling2D
Downsamples the input
- category: pooling layer
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
Specific constraints/recommendations:
- arbitrary strides, provided that they are smaller than the input size
- arbitrary pool sizes, provided that they are smaller than the input size
ops.minimum
Computes the element-wise minimum of a list of inputs
- category: eltwise operator
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
ops.mod
Performs element-wise operation
- category: eltwise operator
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
layers.Multiply
Performs element-wise operation
- category: eltwise operator
- input data types: float32
- output data types: float32
ops.negative
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
ops.not_equal
Performs element-wise operation
- category: eltwise operator
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
layers.Permute
Permutes the dimensions of the input according to a given pattern
- category: reshaping layer
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
Specific constraints/recommendations:
- transposing the batch dimension is not supported
layers.PReLU
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
Specific constraints/recommendations:
- shared axes in PReLU are supported only for the leading dimensions (see the sketch below)
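For example, sharing the slope across the spatial (leading) dimensions of a channels-last tensor satisfies this constraint (a sketch; shapes are illustrative):

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(32, 32, 3))
# One learnable slope per channel, shared across the leading H and W axes.
outputs = tf.keras.layers.PReLU(shared_axes=[1, 2])(inputs)
model = tf.keras.Model(inputs, outputs)
```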
qkeras.QActivation
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32, int8, binary
- output data types: binary
Specific constraints/recommendations:
- Custom Keras layer from QKeras framework
- For detailed information, see Deep Quantized Neural Network [DQNN] support article
qkeras.QBatchNormalization
Performs the normalization of the input
- category: normalization layer
- input data types: float32, int8, binary
- output data types: float32, int8, binary
Specific constraints/recommendations:
- Custom Keras layer from QKeras framework
- Only normalization on the last axis (channels) is supported
- For detailed information, see Deep Quantized Neural Network [DQNN] support article
qkeras.QConv1D
Performs convolution operation
- category: convolutional layer
- input data types: float32, int8, binary
- output data types: float32, int8, binary
Specific constraints/recommendations:
- arbitrary strides, provided that they are smaller than the input size
- arbitrary filter kernel sizes, provided that they are smaller than the input size
- Custom Keras layer from QKeras framework
- For detailed information, see Deep Quantized Neural Network [DQNN] support article
qkeras.QConv2D
Performs convolution operation
- category: convolutional layer
- input data types: float32, int8, binary
- output data types: float32, int8, binary
Specific constraints/recommendations:
- arbitrary strides, provided that they are smaller than the input size
- arbitrary filter kernel sizes, provided that they are smaller than the input size
- Custom Keras layer from QKeras framework
- For detailed information, see Deep Quantized Neural Network [DQNN] support article
qkeras.QConv2DTranspose
Transposed convolutional layer
- category: convolutional layer
- input data types: float32, int8, binary
- output data types: float32, int8, binary
Specific constraints/recommendations:
- Custom Keras layer from QKeras framework
- Padding must be valid
- Stride must be (1, 1)
- For detailed information, see Deep Quantized Neural Network [DQNN] support article
qkeras.QDense
- category: core layer
- input data types: float32, int8, binary
- output data types: float32, int8, binary
Specific constraints/recommendations:
- Custom Keras layer from QKeras framework
- Only a 2D input shape is supported: [batch_size, input_dim]. A rank greater than 2 is not supported; a Flatten layer should be added before the QuantDense/QDense operator (see the sketch after this entry)
- For detailed information, see Deep Quantized Neural Network [DQNN] support article
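A minimal QKeras sketch that respects the rank-2 constraint by flattening first (quantizer settings are illustrative, following the public QKeras API):

```python
import tensorflow as tf
from qkeras import QDense, quantized_bits

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4, 4, 8)),
    tf.keras.layers.Flatten(),  # reduce to rank 2: [batch_size, input_dim]
    QDense(
        16,
        kernel_quantizer=quantized_bits(8, 0, 1),  # 8-bit symmetric weights
        bias_quantizer=quantized_bits(8, 0, 1),
    ),
])
```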
qkeras.QDepthwiseConv2D
Performs convolution operation
- category: convolutional layer
- input data types: float32, int8, binary
- output data types: float32, int8, binary
Specific constraints/recommendations:
- arbitrary strides, provided that they are smaller than the input size
- arbitrary filter kernel sizes, provided that they are smaller than the input size
- Custom Keras layer from QKeras framework
- For detailed information, see Deep Quantized Neural Network [DQNN] support article
larq.layers.QuantConv2D
Performs convolution operation
- category: convolutional layer
- input data types: float32, int8, binary
- output data types: float32, int8, binary
Specific constraints/recommendations:
- arbitrary strides, provided that they are smaller than the input size
- arbitrary filter kernel sizes, provided that they are smaller than the input size
- Custom Keras layer from LARQ framework
- For detailed information, see Deep Quantized Neural Network [DQNN] support article
larq.layers.QuantDense
- category: core layer
- input data types: float32, int8, binary
- output data types: float32, int8, binary
Specific constraints/recommendations:
- Custom Keras layer from LARQ framework
- Only a 2D input shape is supported: [batch_size, input_dim]. A rank greater than 2 is not supported; a Flatten layer should be added before the QuantDense/QDense operator (as illustrated under qkeras.QDense above)
- For detailed information, see Deep Quantized Neural Network [DQNN] support article
larq.layers.QuantDepthwiseConv2D
Performs convolution operation
- category: convolutional layer
- input data types: float32, int8, binary
- output data types: float32, int8, binary
Specific constraints/recommendations:
- arbitrary strides, provided that they are smaller than the input size
- arbitrary filter kernel sizes, provided that they are smaller than the input size
- Custom Keras layer from LARQ framework
- For detailed information, see Deep Quantized Neural Network [DQNN] support article
ops.relu
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
layers.ReLU
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
ops.relu6
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
layers.RepeatVector
Repeats the input n times
- category: reshaping layer
- input data types: float32
- output data types: float32
layers.Reshape
Reshapes a tensor
- category: Reshaping operation
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
ops.round
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
ops.rsqrt
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
ops.selu
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
layers.SeparableConv1D
Performs convolution operation
- category: convolutional layer
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
- fused activations (if present): gelu, linear, relu, quantized_relu, relu_n1_to_1, relu_0_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign
- integer schemes: weights / activations
  - Signed Symmetric / Signed Asymmetric (SSSA)
  - Signed Symmetric per channel (or per-axis) / Signed Asymmetric (SSSA_CH)
  - Signed Symmetric / Unsigned Asymmetric (SSUA)
  - Signed Symmetric per channel (or per-axis) / Unsigned Asymmetric (SSUA_CH)
  - Unsigned Asymmetric / Unsigned Asymmetric (UAUA)
  - Unsigned Asymmetric per channel (or per-axis) / Unsigned Asymmetric (UAUA_CH)
Specific constraints/recommendations:
- arbitrary strides, provided that they are smaller than the input size
- arbitrary filter kernel sizes, provided that they are smaller than the input size
- dilation factors different from 1 are not supported for int8 models
layers.SeparableConv2D
Performs convolution operation
- category: convolutional layer
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
- fused activations (if present): gelu, linear, relu, quantized_relu, relu_n1_to_1, relu_0_to_1, leaky_relu, relu6, elu, selu, sigmoid, hard_sigmoid, hard_swish, exponential, tanh, softmax, softplus, softsign
- integer schemes: weights / activations
  - Signed Symmetric / Signed Asymmetric (SSSA)
  - Signed Symmetric per channel (or per-axis) / Signed Asymmetric (SSSA_CH)
  - Signed Symmetric / Unsigned Asymmetric (SSUA)
  - Signed Symmetric per channel (or per-axis) / Unsigned Asymmetric (SSUA_CH)
  - Unsigned Asymmetric / Unsigned Asymmetric (UAUA)
  - Unsigned Asymmetric per channel (or per-axis) / Unsigned Asymmetric (UAUA_CH)
Specific constraints/recommendations:
- arbitrary strides, provided that they are smaller than the input size
- arbitrary filter kernel sizes, provided that they are smaller than the input size
- dilation factors different from 1 are not supported for int8 models
ops.sign
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
ops.sin
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
layers.Softmax
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- It is supported only for 1D tensors, and only on the channel dimension
- For the axis attribute, only the value 1 is supported, not the default value -1
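A compliant use (a sketch; sizes are illustrative) applies Softmax to a 1D channel tensor with axis=1 set explicitly:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(10,))            # 1D (channel) tensor
x = tf.keras.layers.Dense(10)(inputs)
outputs = tf.keras.layers.Softmax(axis=1)(x)    # axis=1, not the default -1
model = tf.keras.Model(inputs, outputs)
```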
layers.SpatialDropout1D
Applies Dropout to the input
- category: regularization layers
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- operator is dropped during the conversion
layers.SpatialDropout2D
Applies Dropout to the input
- category: regularization layers
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- operator is dropped during the conversion
ops.sqrt
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
STCustomDoReFa
Performs element-wise data conversion from full precision to low precision, based on the scale/zero-point parameters
- category: conversion layer
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
layers.Subtract
Performs element-wise operation
- category: eltwise operator
- input data types: float32
- output data types: float32
ops.tan
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
ops.tanh
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported only in Keras 3 models
layers.TFOpLambda
Wraps arbitrary expressions
- category: custom layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- The wrapped operation is not compiled; the conversion may fail if the wrapped TF operator is unsupported
layers.ThresholdedReLU
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
layers.TimeDistributed
Applies a layer to every temporal slice of an input
- category: wrapper layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- Supported layers: Conv2D, Dense, Flatten, MaxPooling2D, ZeroPadding2D, Dropout
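A minimal sketch combining TimeDistributed with layers from the supported list above (shapes are illustrative):

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(10, 28, 28, 3))  # (timesteps, H, W, C)
x = tf.keras.layers.TimeDistributed(tf.keras.layers.Conv2D(8, 3))(inputs)
x = tf.keras.layers.TimeDistributed(tf.keras.layers.Flatten())(x)
outputs = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(4))(x)
model = tf.keras.Model(inputs, outputs)
```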
layers.UpSampling1D
- category: resizing operation
- input data types: float32
- output data types: float32
layers.UpSampling2D
- category: resizing operation
- input data types: float32
- output data types: float32
layers.ZeroPadding1D
Pads an input tensor
- category: Reshaping layer
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
layers.ZeroPadding2D
Pads an input tensor
- category: Reshaping layer
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
Custom operators
Abs
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- related TF operator: tf.math.abs
Acos
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- related TF operator: tf.math.acos
Acosh
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- related TF operator: tf.math.acosh
Asin
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- related TF operator: tf.math.asin
Asinh
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- related TF operator: tf.math.asinh
Atan
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- related TF operator: tf.math.atan
Atanh
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- related TF operator: tf.math.atanh
Ceil
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- related TF operator: tf.math.ceil
Clip
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- related TF operator: tf.clip_by_value
Cos
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- related TF operator: tf.math.cos
Exp
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- related TF operator: tf.math.exp
Fill
Generates a tensor with a given value and shape
- category: constant layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- related TF operator: tf.fill
FloorDiv
Performs element-wise operation
- category: eltwise operator
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- related TF operator: tf.math.floordiv
FloorMod
Performs element-wise operation
- category: eltwise operator
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- related TF operator: tf.math.floormod
Log
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- related TF operator: tf.math.log
Pow
Performs element-wise operation
- category: eltwise operator
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- related TF operator: tf.math.pow
Reciprocal
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- related TF operator: tf.reciprocal
Reshape
Reshapes a tensor
- category: Reshaping operation
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
Specific constraints/recommendations:
- related TF operator: tf.reshape
- operator is dropped during the conversion
Round
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- related TF operator: tf.math.round
Shape
Returns a tensor containing the shape of the input tensor
- category: Reshaping operation
- input data types: float32, int8, uint8
- output data types: int32
Specific constraints/recommendations:
- related TF operator: tf.shape
Sign
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- related TF operator: tf.math.sign
Sin
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- related TF operator: tf.math.sin
Split
Splits a tensor into a list of sub tensors
- category: split operator
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
Specific constraints/recommendations:
- Only supported if the number of splits is equal to the size of the splitting dimension
- related TF operator: tf.split
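For example, splitting an axis of size 4 into 4 sub-tensors satisfies the constraint above (a sketch, assuming the operator is reached through a Lambda wrapper; shapes are illustrative):

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(8, 4))
# Number of splits (4) equals the size of the splitting dimension (4).
branches = tf.keras.layers.Lambda(
    lambda t: tf.split(t, num_or_size_splits=4, axis=-1)
)(inputs)
model = tf.keras.Model(inputs, branches)
```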
Sqrt
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- related TF operator: tf.math.sqrt
Square
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- related TF operator: tf.math.square
Tanh
Applies an activation function to the input tensor
- category: activation layer
- input data types: float32
- output data types: float32
Specific constraints/recommendations:
- related TF operator: tf.math.tanh
Unpack
Unpacks num tensors from the input along the specified axis
- category: split operator
- input data types: float32, int8, uint8
- output data types: float32, int8, uint8
Specific constraints/recommendations:
- related TF operator: tf.unpack
Where
Where layer
- category: generic layer
- input data types: float32, int8, uint8, int16, uint16, int32, uint32, bool
- output data types: float32, int8, uint8, int16, uint16, int32, uint32, bool
Specific constraints/recommendations:
- related TF operator: tf.where