Introduction
Some primitives in the library support input/output tensors with the INT8 (either signed or unsigned) data type. The primary goal is to support reduced precision inference on compatible hardware.
Related materials:
Quantization Model
The primary quantization model that the library assumes is the following:
\[ x_{f32}(:) = scale_{f32} \cdot (x_{int8}(:) - 0_{x\_int8})
\]
where \(scale_{f32}\) is somehow known in advance (typically, the process of obtaining these scale factors is called the calibration process). This might be counter-intuitive, but the library cannot compute any of the scale factors at run-time dynamically. Hence, the model is sometimes called a static quantization model. The main rationale to support only static quantization out-of-the-box is higher performance. Those who want to use dynamic quantization can do so in a few steps:
- Compute the result in higher precision, like dnnl::memory::data_type::s32.
- Find the required characteristics, like min and max values, and derive the scale factor.
- Re-quantize to the lower precision data type.
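A minimal sketch of these steps in plain C++ (the buffer types and the helper name requantize_s32_to_s8 are illustrative only and are not part of the library API):

    #include <algorithm>
    #include <cmath>
    #include <cstdint>
    #include <cstdlib>
    #include <vector>

    // Illustrative helper, not a oneDNN API: derive a scale from the observed
    // range of an s32 result and re-quantize it to s8.
    std::vector<int8_t> requantize_s32_to_s8(
            const std::vector<int32_t> &x_s32, float &scale) {
        // 1. The result was computed in higher precision (s32) by the caller.
        // 2. Find the required characteristics (here, the maximum magnitude)
        //    and derive the scale factor so the data fits into [-128, 127].
        int32_t max_abs = 1;
        for (int32_t v : x_s32) max_abs = std::max(max_abs, std::abs(v));
        scale = static_cast<float>(max_abs) / 127.f;

        // 3. Re-quantize to the lower precision data type (with saturation).
        std::vector<int8_t> x_s8(x_s32.size());
        for (size_t i = 0; i < x_s32.size(); ++i) {
            float q = std::round(static_cast<float>(x_s32[i]) / scale);
            q = std::min(127.f, std::max(-128.f, q));
            x_s8[i] = static_cast<int8_t>(q);
        }
        return x_s8;
    }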
It is also worth mentioning that the library supports a fixed zero position. For most of the primitives, the real zero value is mapped to zero for quantized values; that is, \(0_{x\_int8} = 0\). For example, this is the only model that Convolution and Inner Product currently support. The RNN primitives have limited support for a shifted zero point (for details, refer to the corresponding section in RNN).
For the rest of this guide, we will assume that \(0_{x\_int8} = 0\).
- Warning: Depending on the architecture, the behavior of int8 computations might slightly vary. For more details, refer to Int8 Computation Aspects.
This guide doesn't cover how the appropriate scaling factor can be found. Refer to the materials in the Introduction.
Example: Convolution Quantization Workflow
Let's consider a simple example: a convolution without bias. The tensors are represented as:
- \(\src_{f32}(:) = scale_{\src} \cdot \src_{int8}(:)\)
- \(\weights_{f32}(:) = scale_{\weights} \cdot \weights_{int8}(:)\)
- \(\dst_{f32}(:) = scale_{\dst} \cdot \dst_{int8}(:)\)
Here \(\src_{f32}\), \(\weights_{f32}\), and \(\dst_{f32}\) are not computed at all; the whole work happens with INT8 tensors.
As mentioned above, we also somehow know all the scaling factors: \(scale_{\src}\), \(scale_{\weights}\), and \(scale_{\dst}\).
So the task is to compute the \(\dst_{int8}\) tensor.
Mathematically, the computations are pretty straightforward:
\[
   \dst_{int8}(:) =
      downconvert\_f32\_to\_int8(
         output\_scale \cdot
         conv_{s32}(\src_{int8}, \weights_{int8})
      ),
\]
where:
- \(output\_scale := \frac{scale_{\src} \cdot scale_{\weights}}{scale_{\dst}}\);
- \(conv_{s32}\) is just a regular convolution which takes source and weights with the INT8 data type and computes the result in the INT32 data type (INT32 is chosen to avoid overflows during the computations);
- \(downconvert\_f32\_to\_int8()\) converts an f32 value to int8 with potential saturation if the values are out of the range of the INT8 data type.
Note that in order to perform the operation, one doesn't need to know the exact scaling factors for all the tensors; it is enough to know only the \(output\_scale\). The library utilizes this fact; a user needs to provide only this one extra parameter (see the Output Scaling Attribute section below) to perform the convolution.
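For illustration, a minimal sketch of passing this single extra parameter through the primitive attributes (the scale values here are hypothetical; see the Output Scaling Attribute section below for the full API):

    #include <vector>
    #include "dnnl.hpp"

    // A sketch assuming hypothetical calibration results; it builds attributes
    // carrying the single combined scale the int8 convolution needs.
    dnnl::primitive_attr make_conv_attr() {
        const float src_scale = 0.5f;  // scale_src
        const float wei_scale = 0.02f; // scale_weights
        const float dst_scale = 0.25f; // scale_dst

        // output_scale = scale_src * scale_weights / scale_dst
        const float output_scale = src_scale * wei_scale / dst_scale;

        dnnl::primitive_attr attr;
        // mask = 0: a single scale common for the whole destination tensor
        attr.set_output_scales(0, {output_scale});
        return attr;
    }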
Per-Channel Scaling
Some of the primitives have limited support of multiple scales for a quantized tensor. The most popular use case is a Convolution primitive that supports per-output-channel scaling factors for the weights, meaning that the actual convolution computations would need to scale different output channels differently. This is possible without a significant performance drop because the per-output-channel re-quantization is only required at the very end of the computations. It seems impossible to implement the same trick for the input channels, since that would require re-quantization for every input data point.
Assume we have (the scales are designated as \(\alpha\) to simplify reading):
- \(\src_{f32}(n, ic, ih, iw) = \alpha_{\src} \cdot \src_{int8}(n, ic, ih, iw)\)
- \(\weights_{f32}(oc, ic, kh, kw) = \alpha_{\weights}(oc) \cdot \weights_{int8}(oc, ic, kh, kw)\)
- \(\dst_{f32}(n, oc, oh, ow) = \alpha_{\dst} \cdot \dst_{int8}(n, oc, oh, ow)\)
Note that now the weights' scaling factor depends on the \(oc\).
To compute the \(\dst_{int8}\) we need to perform the following:
\[
   \dst_{int8}(n, oc, oh, ow) =
      downconvert\_f32\_to\_int8(
         output\_scale(oc) \cdot
         conv_{s32}(\src_{int8}, \weights_{int8})|_{(n, oc, oh, ow)}
      ),
\]
where now
- \(output\_scale(oc) := \frac{\alpha_{\src} \cdot \alpha_{\weights}(oc)}{\alpha_{\dst}}\).
It is worth mentioning that a user has to prepare quantized weights accordingly.
For that, oneDNN provides reorders that can perform per-channel scaling:
\[
   \weights_{int8}(oc, ic, kh, kw) =
      downconvert\_f32\_to\_int8(
         output\_scale(oc) \cdot
         \weights_{f32}(oc, ic, kh, kw)
      ),
\]
where:
- \(output\_scale(oc) := \frac{1}{\alpha_{\weights}(oc)}\).
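For example, if \(\alpha_{\weights}(oc) = 0.05\) for a particular output channel (an illustrative value), the reorder multiplies that channel of the f32 weights by \(output\_scale(oc) = 1 / 0.05 = 20\) before down-converting to int8. Example 1 in the API section below shows the corresponding code.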
API
The library API for INT8 support was designed for the quantization model described above. However, it doesn't require users to follow exactly this model. As long as users can fit their model into the given functionality, everything should work fine. With this in mind, we tried to design a minimal and simple yet powerful enough quantization API.
The most common data types for data tensors during INT8 inference are dnnl::memory::data_type::s8 and dnnl::memory::data_type::u8. The scaling factors related to tensors are not attached in any way to the oneDNN memory objects and should be maintained by users.
The library essentially extends the ability of the primitives to scale the
output before storing the result to the memory with the destination data type.
That's exactly the minimum that we need to support INT8 inference (check the equations above: only \(output\_scale\) is non-standard).
The scaling happens in the single precision floating point data type (dnnl::memory::data_type::f32). Before storing, the result is downconverted to the destination data type with saturation if required. The rounding happens according to the current HW setting (for instance, on CPU according to the MXCSR register).
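Conceptually, the down-conversion behaves like the following sketch (plain C++ for illustration only; the library performs this step internally using the hardware rounding mode):

    #include <algorithm>
    #include <cmath>
    #include <cstdint>

    // Illustrative only: saturating f32 -> s8 down-conversion.
    int8_t downconvert_f32_to_s8(float x) {
        // saturate to the representable range of the destination data type
        x = std::min(127.f, std::max(-128.f, x));
        // round according to the current floating-point environment
        // (std::nearbyint honors the rounding mode, similar to the MXCSR setting)
        return static_cast<int8_t>(std::nearbyint(x));
    }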
Output Scaling Attribute
The library uses the Primitive Attributes API for setting the scaling factors for most of the primitives. The supported attributes can be found in the documentation for each primitive. The unsupported cases are handled according to the attributes error handling section.
API:
The primitives do not support output scales if source (and weights) tensors are not of the int8 data type. In other words, regular f32 convolution cannot scale the output result.
The parameters (C++ API for simplicity):
    void dnnl::primitive_attr::set_output_scales(
            int mask,
            const std::vector<float> &scales
            );
In the simplest case, when there is only one common scale, the attribute changes the op behavior from
\[ \dst(:) = Op(...)
\]
to
\[ \dst(:) = scale \cdot Op(...).
\]
To support scales per one or several dimensions, users must set the appropriate mask.
Say the destination is \(D_0 \times ... \times D_{n-1}\) tensor and we want to have output scales per \(d_i\) dimension (where \(0 \le d_i < n\)).
Then the mask should be set to:
- \(mask = \sum \limits_{d_i} 2^{d_i}\),
and the number of scales should be:
- scales.size() = \(\prod\limits_{d_i}D_{d_i}\).
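For example, for a destination tensor with dimensions \(D_0 \times D_1 \times D_2 \times D_3\) (say, \(N \times C \times H \times W\)) and scaling per the channel dimension \(d_i = 1\), the mask is \(2^1 = 2\) and scales.size() must equal \(D_1 = C\).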
Example 1: weights quantization with per-output-channel-and-group scaling
    // weights dimensions
    const int G, OC, IC, KH, KW;

    // the user's original f32 weights in the plain format
    dnnl::memory::desc wei_user_f32_md(
            {G, OC/G, IC/G, KH, KW},           // dims
            dnnl::memory::data_type::f32,      // the data is originally in f32
            dnnl::memory::format_tag::hwigo);  // the plain memory format

    // the weights memory descriptor expected by the int8 convolution,
    // queried from the convolution primitive descriptor (see Example 2)
    dnnl::memory::desc wei_conv_s8_md = conv_pd.weights_desc();

    // the scaling factors for quantized weights:
    // a unique scale for each group and output channel
    std::vector<float> wei_scales(G * OC/G) = {...};

    // prepare the inverse of the scales
    // (f32 = scale * int8  -->  int8 = 1/scale * f32)
    std::vector<float> inv_wei_scales(wei_scales.size());
    for (size_t i = 0; i < wei_scales.size(); ++i)
        inv_wei_scales[i] = 1.f / wei_scales[i];

    // prepare the attributes for the reorder
    dnnl::primitive_attr attr;
    const int mask = 0
        | (1 << 0)  // scale per  G dimension, which is dim #0
        | (1 << 1); // scale per OC dimension, which is dim #1
    attr.set_output_scales(mask, inv_wei_scales);

    // create a reorder that performs:
    //   wei_s8(g, oc, ic, kh, kw) = (1 / wei_scales(g, oc)) * wei_f32(g, oc, ic, kh, kw)
    auto wei_reorder_pd = dnnl::reorder::primitive_desc(
            engine, wei_user_f32_md, // source: the user's f32 weights
            engine, wei_conv_s8_md,  // destination: s8 weights for the convolution
            attr);
    auto wei_reorder = dnnl::reorder(wei_reorder_pd);
Example 2: convolution with groups, with per-output-channel quantization
This example is complementary to the previous example (which should ideally be the first one). Let's say we want to have an INT8 convolution with per-output channel scaling.
    // source and destination scaling factors known from the calibration
    const float src_scale; // src_f32(:) = src_scale * src_int8(:)
    const float dst_scale; // dst_f32(:) = dst_scale * dst_int8(:)

    // the scaling factors for quantized weights (as in Example 1):
    // a unique scale for each group and output channel
    std::vector<float> wei_scales(G * OC/G) = {...};

    // source, weights, and destination memory descriptors for the convolution,
    // with format_tag::any to let the convolution choose the memory format
    dnnl::memory::desc src_conv_s8_any_md(
            {BATCH, IC, IH, IW},
            dnnl::memory::data_type::s8,
            dnnl::memory::format_tag::any);

    dnnl::memory::desc wei_conv_s8_any_md(
            {G, OC/G, IC/G, KH, KW},
            dnnl::memory::data_type::s8,
            dnnl::memory::format_tag::any);

    dnnl::memory::desc dst_conv_s8_any_md(
            {BATCH, OC, OH, OW},
            dnnl::memory::data_type::s8,
            dnnl::memory::format_tag::any);

    // create a convolution operation descriptor; what is important is that
    // the source, weights, and destination are requested in int8
    dnnl::convolution_forward::desc conv_d(
            dnnl::prop_kind::forward_inference,
            dnnl::algorithm::convolution_direct,
            src_conv_s8_any_md,
            wei_conv_s8_any_md,
            dst_conv_s8_any_md,
            strides, padding_l, padding_r);

    // prepare the attributes for the convolution
    dnnl::primitive_attr attr;
    const int mask = 0
        | (1 << 1); // scale per OC dimension, which is dim #1 on the dst

    // construct the convolution output scales:
    //   conv_output_scales(oc) = src_scale * wei_scales(oc) / dst_scale
    std::vector<float> conv_output_scales(G * OC/G);
    for (int g_oc = 0; g_oc < G * OC/G; ++g_oc)
        conv_output_scales[g_oc] = src_scale * wei_scales[g_oc] / dst_scale;
    attr.set_output_scales(mask, conv_output_scales);

    // create a convolution primitive descriptor with the attributes
    auto conv_pd = dnnl::convolution_forward::primitive_desc(
            conv_d,
            attr,
            engine);
Interplay of output scales with post-ops
In general, the post-ops are independent of the output scales. The output scales are applied to the result first; then the post-ops take effect.
For details, refer to the Tanh -> Sum -> ScaleShift example.
That has an implication on the scaling factors passed to the library, however. Consider the following example of a convolution with \(\tanh\) as a post-op:
\[ \dst_{s8}(:) =
\frac{1}{scale_{\dst}}
\cdot
\tanh(
scale_{\src}
\cdot
scale_{\weights}
\cdot
conv_{s32}(\src_{s8}, wei_{s8})
)
\]
As you can see:
- The convolution output scales are now \(conv\_output\_scale = scale_{\src} \cdot scale_{\weights}\), i.e. there is no division by \(scale_{\dst}\);
- And the post-ops scale for \(\tanh\) is set to \(scale\_tanh\_post\_op = \frac{1}{scale_{\dst}}\).
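A minimal sketch of setting this up through the attributes (the scale values are hypothetical; note how the factors are split between the output scales and the \(\tanh\) post-op):

    #include <vector>
    #include "dnnl.hpp"

    // A sketch assuming hypothetical calibration results.
    dnnl::primitive_attr make_conv_tanh_attr() {
        const float src_scale = 0.5f, wei_scale = 0.02f, dst_scale = 0.25f;

        dnnl::primitive_attr attr;
        // conv_output_scale = scale_src * scale_weights (no division by scale_dst)
        attr.set_output_scales(0 /* mask */, {src_scale * wei_scale});

        // tanh post-op with scale 1 / scale_dst (alpha and beta are unused for tanh)
        dnnl::post_ops po;
        po.append_eltwise(1.f / dst_scale, dnnl::algorithm::eltwise_tanh, 0.f, 0.f);
        attr.set_post_ops(po);
        return attr;
    }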