Along with the great success of deep neural networks, there is also growing
concern about their black-box nature. The interpretability issue affects
people's trust in deep learning systems. It is also related to many ethical
problems, e.g., algorithmic discrimination. Moreover, interpretability is a
desirable property for deep networks to become powerful tools in other research
fields, e.g., drug discovery and genomics. In this survey, we conduct a
comprehensive review of neural network interpretability research. We first
clarify the definition of interpretability as it has been used in many
different contexts. Then we elaborate on the importance of interpretability and
propose a novel taxonomy organized along three dimensions: the type of
engagement (passive vs. active interpretation approaches), the type of
explanation, and the focus (from local to global interpretability). This
taxonomy provides a meaningful 3D view of the distribution of papers from the
relevant literature, as two of the dimensions are not simply categorical but
allow ordinal
subcategories. Finally, we summarize the existing interpretability evaluation
methods and suggest possible research directions inspired by our new taxonomy.