ECQ$^{\text{x}}$: Explainability-Driven Quantization for Low-Bit and Sparse DNNs
The remarkable success of deep neural networks (DNNs) in various applications
is accompanied by a significant increase in network parameters and arithmetic
operations. Such increases in memory and computational demands make deep
learning prohibitive for resource-constrained hardware platforms such as mobile
devices. Recent efforts aim to reduce these overheads while preserving model
performance as much as possible, and include parameter reduction, parameter
quantization, and lossless compression techniques.
In this chapter, we develop and describe a novel quantization paradigm for
DNNs. Our method leverages concepts from explainable AI (XAI) and information
theory: instead of assigning weight values to clusters based solely on their
distances to the cluster centers, the assignment function additionally
considers the weight relevances obtained from Layer-wise Relevance Propagation
(LRP) and the information content of the clusters (entropy optimization). The
ultimate goal is to preserve the most relevant weights in the quantization
clusters of highest information content.
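The assignment idea can be illustrated with a toy sketch. This is not the paper's exact formulation; the cost function (relevance-weighted distortion plus an entropy/rate penalty $-\lambda \log_2 p_k$), the example weights, relevances, and cluster centers are all illustrative assumptions:

```python
import numpy as np

def ecqx_assign(weights, relevances, centers, lam=0.1, iters=5):
    """Toy sketch of an entropy-constrained, relevance-adjusted assignment.

    Each weight is assigned to the cluster minimizing a relevance-weighted
    distortion plus a rate penalty -lam * log2(p_k), where p_k is the
    empirical probability of cluster k. High-relevance weights thus resist
    being pulled into the cheap (high-probability) clusters.
    """
    weights = np.asarray(weights, dtype=float)
    relevances = np.asarray(relevances, dtype=float)
    centers = np.asarray(centers, dtype=float)

    # Initialize with plain nearest-center assignment.
    assign = np.argmin(np.abs(weights[:, None] - centers[None, :]), axis=1)
    for _ in range(iters):
        # Cluster probabilities from the current assignment (rate term).
        counts = np.bincount(assign, minlength=len(centers)).astype(float)
        p = (counts + 1e-12) / counts.sum()
        # Relevance-weighted squared distortion for every (weight, cluster).
        dist = relevances[:, None] * (weights[:, None] - centers[None, :]) ** 2
        cost = dist - lam * np.log2(p)[None, :]
        assign = np.argmin(cost, axis=1)
    return assign

# Hypothetical example: small, low-relevance weights collapse to the
# zero cluster, while relevant weights keep their nearest centers.
w = np.array([0.02, -0.01, 0.6, -0.55, 0.01])
r = np.array([0.1, 0.1, 1.0, 1.0, 0.05])   # LRP relevances (made up)
c = np.array([0.0, 0.5, -0.5])             # centers, including zero
print(ecqx_assign(w, r, c))
```

In this sketch, raising `lam` pushes more low-relevance weights into the dominant zero cluster, which is what produces the sparsity discussed below.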
Experimental results show that this novel Entropy-Constrained and
XAI-adjusted Quantization (ECQ$^{\text{x}}$) method generates ultra
low-precision (2-5 bit) and simultaneously sparse neural networks while
maintaining or even improving model performance. Due to the reduced parameter
precision and the high number of zero elements, the resulting networks are
highly compressible in terms of file size, achieving compression ratios of up
to $103\times$ relative to the full-precision, unquantized DNN model. Our
approach was evaluated on different
types of models and datasets (including Google Speech Commands and CIFAR-10)
and compared with previous work.