Recent Advances in Natural Language Processing via Large Pre-Trained Language Models: A Survey
We present a survey of recent work that uses large, pre-trained transformer-based language models to solve natural language processing tasks via pre-train then fine-tune, prompting, or text generation approaches.
We also present approaches that use pre-trained language models to generate data for training data augmentation or other purposes.
We conclude with a discussion of limitations and suggested directions for future research.
Authors
Bonan Min, Hayley Ross, Elior Sulem, Amir Pouran Ben Veyseh, Thien Huu Nguyen, Oscar Sainz, Eneko Agirre, Ilana Heintz, Dan Roth