activation_type: quantization data type of activation. Please refer to https://onnxruntime.ai/docs/performance/quantization.html for more details on data type selection.
calibrate_method: calibration method to use. Currently MinMax and Entropy are supported; pass CalibrationMethod.MinMax or CalibrationMethod.Entropy.
op_types_to_quantize: specify the types of operators to quantize, e.g. ['Conv'] to quantize Conv only. All supported operators are quantized by default.
per_channel: quantize weights per channel.
reduce_range: quantize weights with 7 bits. It may improve accuracy for some models running on non-VNNI machines, especially in per-channel mode.
weight_type: quantization data type of weight. Please refer to https://onnxruntime.ai/docs/performance/quantization.html for more details on data type selection.
nodes_to_quantize: list of node names to quantize. When this list is not None, only the nodes in this list are quantized. Example: ['Conv__224', 'Conv__252'].
nodes_to_exclude: list of node names to exclude. When this list is not None, the nodes in it are excluded from quantization.
optimize_model: deprecating soon! Optimize the model before quantization. NOT recommended: optimization changes the computation graph, making debugging of quantization loss difficult.
use_external_data_format: option for large models (>2GB). False by default.
extra_options: dictionary of key/value pairs for various options. Currently used:
    extra.Sigmoid.nnapi = True/False: Default is False.
    ActivationSymmetric = True/False: symmetrize calibration data for activations (default is False).
    WeightSymmetric = True/False: symmetrize calibration data for weights (default is True).
    EnableSubgraph = True/False: Default is False. If enabled, subgraphs will be quantized. Currently only dynamic mode is supported; more will be supported in the future.
    ForceQuantizeNoInputCheck = True/False: By default, some latent operators such as MaxPool and Transpose are not quantized if their input is not already quantized. Set to True to force such operators to always quantize their input and thus produce quantized output. This behavior can still be disabled per node via nodes_to_exclude.
    MatMulConstBOnly = True/False: Default is False for static mode. If enabled, only MatMul nodes with a constant B input are quantized.
    AddQDQPairToWeight = True/False: Default is False, which quantizes the floating-point weight and feeds it to a single inserted DeQuantizeLinear node. If True, the weight stays in floating point and both QuantizeLinear and DeQuantizeLinear nodes are inserted for it.
    OpTypesToExcludeOutputQuantization = list of op types: Default is []. If any op type is specified, the outputs of ops of those types are not quantized.
    DedicatedQDQPair = True/False: Default is False. When inserting a QDQ pair, multiple nodes can share a single QDQ pair as their input. If True, an identical, dedicated QDQ pair is created for each node.
    QDQOpTypePerChannelSupportToAxis = dictionary: Default is {}. Sets the channel axis for specific op types, for example {'MatMul': 1}. Effective only when per-channel quantization is supported for that op type and per_channel is True. If an op type supports per-channel quantization but no channel axis is explicitly specified, the default channel axis is used.
    CalibTensorRangeSymmetric = True/False: Default is False. If enabled, the final tensor range determined during calibration is made symmetric around the central point "0".
    CalibMovingAverage = True/False: Default is False. If enabled, a moving average of the minimum and maximum values is computed when the selected calibration method is MinMax.
    CalibMovingAverageConstant = float: Default is 0.01. Constant smoothing factor used when computing the moving average of the minimum and maximum values. Effective only when the selected calibration method is MinMax and CalibMovingAverage is True.
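The sketch below shows how several of these parameters fit together in a static quantization call. It assumes the standard onnxruntime.quantization API; the model file names, the input tensor name, and the random calibration data are purely illustrative, and the exact quantize_static signature may vary slightly between onnxruntime versions.

```python
# Minimal sketch of static quantization using several of the options documented above.
# Paths, the input tensor name, and the calibration data are illustrative only.
import numpy as np
from onnxruntime.quantization import (
    CalibrationDataReader,
    CalibrationMethod,
    QuantType,
    quantize_static,
)


class RandomCalibrationReader(CalibrationDataReader):
    """Feeds a few random batches to the calibrator (placeholder for real data)."""

    def __init__(self, input_name="input", shape=(1, 3, 224, 224), num_batches=8):
        self._batches = iter(
            [{input_name: np.random.rand(*shape).astype(np.float32)}
             for _ in range(num_batches)]
        )

    def get_next(self):
        # Return None when the calibration data is exhausted.
        return next(self._batches, None)


quantize_static(
    model_input="model_fp32.onnx",        # hypothetical input model path
    model_output="model_int8.onnx",       # hypothetical output model path
    calibration_data_reader=RandomCalibrationReader(),
    calibrate_method=CalibrationMethod.MinMax,
    op_types_to_quantize=["Conv", "MatMul"],
    per_channel=True,
    activation_type=QuantType.QUInt8,
    weight_type=QuantType.QInt8,
    nodes_to_exclude=["Conv__224"],       # example node name from the list above
    extra_options={
        "ActivationSymmetric": False,
        "WeightSymmetric": True,
        "CalibMovingAverage": True,
        "CalibMovingAverageConstant": 0.01,
    },
)
```

In practice the calibration reader should iterate over a representative sample of real inputs rather than random tensors, since the MinMax (or Entropy) calibrator derives the activation ranges directly from the data it is fed.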