Do Transformer Modifications Transfer Across Implementations and Applications?
The research community has proposed copious modifications to the Transformer
architecture since it was introduced over three years ago, relatively few of
which have seen widespread adoption. In this paper, we comprehensively evaluate
many of these modifications in a shared experimental setting that covers most
of the common uses of the Transformer in natural language processing.
Surprisingly, we find that most modifications do not meaningfully improve
performance. Furthermore, most of the Transformer variants we found beneficial
were either developed in the same codebase that we used or were relatively minor
changes. We conjecture that performance improvements may strongly depend on
implementation details and correspondingly make some recommendations for
improving the generality of experimental results.
Authors
Sharan Narang, Hyung Won Chung, Yi Tay, William Fedus, Thibault Fevry, Michael Matena, Karishma Malkan, Noah Fiedel, Noam Shazeer, Zhenzhong Lan, Yanqi Zhou, Wei Li, Nan Ding, Jake Marcus, Adam Roberts, Colin Raffel