On the Consistency of Biases in Neural Networks for Computer Vision
Are Convolutional Neural Networks or Transformers more like human vision?
Machine learning models for computer vision exceed humans in accuracy on specific visual recognition tasks, notably on datasets like ImageNet.
However, high accuracy can be achieved in many ways, and the behavior a model learns is shaped not only by the data to which the system is exposed, but also by the inductive biases of the model, which are typically harder to characterize.
In this work, we follow a recent trend of in-depth behavioral analyses of neural network models that go beyond accuracy as an evaluation metric by looking at patterns of errors.
Our focus is on comparing a suite of standard convolutional neural networks (CNNs) and a recently proposed attention-based network, the Vision Transformer (ViT), which relaxes the translation-invariance constraint of CNNs and therefore represents a model with a weaker set of inductive biases.
We demonstrate, using new metrics that examine error consistency at a finer granularity, that the ViT's errors are more consistent with those of humans.
These results have implications both for building more human-like vision models and for understanding visual object recognition in humans.
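The abstract does not spell out what error consistency means, so here is a minimal sketch of the standard trial-by-trial measure used in prior behavioral analyses of this kind: a Cohen's-kappa-style statistic that compares the observed overlap in correct/incorrect responses between two observers against the overlap expected by chance given their accuracies. This is background for the idea the paper's finer-grained metrics build on, not the paper's own metric; the function name and example data below are hypothetical.

```python
import numpy as np

def error_consistency(correct_a, correct_b):
    """Cohen's-kappa-style error consistency between two observers.

    correct_a, correct_b: per-trial booleans marking whether each
    observer (e.g., a model and a human) classified the image correctly.
    """
    correct_a = np.asarray(correct_a, dtype=bool)
    correct_b = np.asarray(correct_b, dtype=bool)
    # Observed fraction of trials where both are right or both are wrong.
    c_obs = np.mean(correct_a == correct_b)
    # Overlap expected if errors were independent, given each accuracy.
    p_a, p_b = correct_a.mean(), correct_b.mean()
    c_exp = p_a * p_b + (1 - p_a) * (1 - p_b)
    # Kappa: 0 means chance-level overlap, 1 means identical error patterns.
    return (c_obs - c_exp) / (1 - c_exp)

# Hypothetical example: a model and a human over six trials.
model = [True, True, False, True, False, True]
human = [True, False, False, True, False, True]
print(error_consistency(model, human))  # > 0: shared error patterns
```

A statistic like this goes beyond accuracy alone: two observers can have identical accuracy yet err on entirely different images, and only a consistency measure distinguishes those cases.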
Authors
Shikhar Tuli, Ishita Dasgupta, Erin Grant, Thomas L. Griffiths