Ask Me Anything: A simple strategy for prompting language models
Large language models (LLMs) transfer well to new tasks out of the box: given only a natural-language prompt that demonstrates how to perform the task, they can perform it with no additional training.
Prompting is a brittle process, however: small modifications to the prompt can cause large variations in the model's predictions, so significant effort is dedicated to designing a painstakingly crafted "perfect prompt" for each task.
To mitigate the high degree of effort involved in prompt design, we instead ask whether producing multiple effective, yet imperfect, prompts and aggregating them can lead to a high-quality prompting strategy.
Output true or false.").
Our approach recursively uses the LLM itself to transform task inputs into this effective QA format.
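A minimal sketch of this kind of recursive prompt chain, where the model first rewrites a task input as a question and then answers it. All names here are hypothetical, and `call_llm` is a toy stand-in for a real model call (e.g. GPT-J-6B behind an inference API):

```python
def call_llm(prompt: str) -> str:
    # Toy stand-in for a real LLM call, so the sketch runs end-to-end.
    # A real implementation would send `prompt` to a text-generation model.
    if prompt.startswith("Rewrite"):
        claim = prompt.split("Claim: ")[1].split("\n")[0]
        return f"Is it true that {claim.rstrip('.').lower()}?"
    return "yes"

def ama_chain(statement: str) -> str:
    # Step 1: use the model itself to reformat the input into QA format.
    question = call_llm(
        "Rewrite the claim as a question.\n"
        f"Claim: {statement}\n"
        "Question:"
    )
    # Step 2: ask the model to answer the generated question.
    answer = call_llm(
        f"Answer the question.\nQuestion: {question}\nAnswer:"
    )
    return answer

print(ama_chain("John went to the park."))  # -> yes
```

Running several such chains, each with differently worded rewrite and answer prompts, yields the multiple imperfect prompts that are later aggregated.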
We apply the collected prompts to obtain several noisy votes for each input's true label, and propose to use weak supervision, a procedure for combining noisy predictions, to produce the final predictions for the inputs.
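Weak supervision learns the accuracy of and dependencies between the prompts' votes; as a minimal sketch, the degenerate case where every prompt is treated as equally reliable and independent reduces to a majority vote:

```python
from collections import Counter

def aggregate_votes(votes: list[str]) -> str:
    # Majority vote over the noisy per-prompt predictions. The paper's
    # weak-supervision aggregator additionally estimates each prompt's
    # accuracy and inter-prompt dependencies; this sketch omits that.
    return Counter(votes).most_common(1)[0][0]

# Three prompt chains vote on one input's label.
print(aggregate_votes(["yes", "no", "yes"]))  # -> yes
```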
This simple strategy enables the open-source GPT-J-6B model to match and exceed the performance of few-shot GPT-3 (175B) on 15 of 20 popular benchmarks, demonstrating an average performance lift of 10.2% over the few-shot baseline.
Authors
Simran Arora, Avanika Narayan, Mayee F. Chen, Laurel Orr, Neel Guha, Kush Bhatia, Ines Chami, Frederic Sala, Christopher Ré