# Android package for ORT Mobile operator and type reduction configuration
#
# The list of operators was generated from:
# - the ONNX operators used by the tf2onnx tflite converter
# - the operators used in a set of tflite models from tfhub, the tflite examples, and the mlperf mobile models
#   - models were optimized with optimizations set to 'basic', 'extended' and 'all'
# - see the readme file for full details

# allow float, int8, uint8. Operators that manipulate shapes or indices have int32 and int64 enabled internally.
!globally_allowed_types;float,int8_t,uint8_t

# ops used by the tf2onnx tflite converter
ai.onnx;12,13,14,15;Abs,Add,And,ArgMax,ArgMin,AveragePool,Cast,Ceil,Clip,Concat,ConstantOfShape,Conv,ConvTranspose,Cos,CumSum,DepthToSpace,DequantizeLinear,Div,DynamicQuantizeLinear,Elu,Equal,Exp,Expand,Flatten,Floor,Gather,GatherND,Gemm,Greater,GreaterOrEqual,Identity,If,LRN,LeakyRelu,Less,LessOrEqual,Log,LogSoftmax,Loop,MatMul,Max,MaxPool,Mean,Min,Mul,Neg,NonMaxSuppression,NonZero,Not,Or,PRelu,Pad,Pow,QuantizeLinear,Range,Reciprocal,ReduceMax,ReduceMean,ReduceMin,ReduceProd,ReduceSum,Relu,Reshape,Resize,ReverseSequence,Round,ScatterND,Shape,Sigmoid,Sin,Size,Slice,Softmax,SpaceToDepth,Split,Sqrt,Squeeze,Sub,Sum,Tanh,ThresholdedRelu,Tile,TopK,Transpose,Unique,Unsqueeze,Where

# other ops found in test models
ai.onnx;12,13,14,15;Erf,GlobalAveragePool,InstanceNormalization,HardSigmoid,MatMulInteger,QLinearConv,QLinearMatMul

# Control flow ops
# - If and Loop are covered by the tflite converter list
# - Scan tends to be used in speech models (it's more efficient than Loop) so include it for support of those
ai.onnx;12,13,14,15;Scan

# Changed ONNX ops by opset version for the above ops. This list is to provide context as to how much was added
# for each additional opset we support.
#
# opset 13
# Abs,Add,ArgMax,ArgMin,Cast,Ceil,Clip,Concat,DepthToSpace,DequantizeLinear,Div,Equal,Erf,Exp,Expand,Flatten,Floor,
# Gather,GatherND,Gemm,Greater,Identity,If,LRN,Less,Log,LogSoftmax,Loop,MatMul,Max,Mean,Min,Mul,Neg,NonZero,Pad,
# Pow,QuantizeLinear,Reciprocal,ReduceMax,ReduceMean,ReduceMin,ReduceProd,ReduceSum,Relu,Reshape,Resize,
# ScatterND,Shape,Sigmoid,Size,Slice,Softmax,SpaceToDepth,Split,Sqrt,Squeeze,Sub,Sum,Tanh,Tile,Transpose,Unsqueeze
#
# opset 14
# Add,CumSum,Div,Identity,Mul,Relu,Reshape,Sub
#
# opset 15
# Pow,Shape

# internal ops added by optimizers
# Note: LayerNormalization is an internal op even though it is (incorrectly) registered in the ONNX domain.
ai.onnx;1;LayerNormalization

com.microsoft;1;DynamicQuantizeMatMul,FusedConv,FusedGemm,FusedMatMul,Gelu,MatMulIntegerToFloat,NhwcMaxPool,QLinearAdd,QLinearAveragePool,QLinearConv,QLinearGlobalAveragePool,QLinearMul,QLinearSigmoid

# NHWC transformer also uses this, so assuming it's valuable enough to include
com.microsoft;1;QLinearLeakyRelu

# Quantized contrib ops that are registered but no usage was found. Excluding for now.
# com.microsoft;1;DynamicQuantizeLSTM,QAttention
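# Example usage: a sketch of how a config file like this is typically consumed by the
# ONNX Runtime build script for a reduced-size mobile build. The exact flag names and
# build options below are assumptions based on the documented reduced-build workflow
# (--include_ops_by_config / --enable_reduced_operator_type_support); verify them
# against the build documentation for the ONNX Runtime version you are using.
#
#   ./build.sh --config MinSizeRel --android --parallel \
#       --include_ops_by_config <path to this config file> \
#       --enable_reduced_operator_type_support
#
# With type reduction enabled, the !globally_allowed_types line above restricts the
# registered kernels to float/int8_t/uint8_t, in addition to the per-domain op lists.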