We propose SetFit (Sentence Transformer Fine-tuning), an efficient and prompt-free framework for few-shot fine-tuning of Sentence Transformers (ST).
SetFit works by first fine-tuning a pretrained ST on a small number of text pairs in a contrastive, Siamese manner. The resulting model is then used to generate rich text embeddings, on which a classification head is trained (sketched below).
This simple framework requires no prompts or verbalizers, and achieves high accuracy with orders of magnitude fewer parameters than existing techniques.
Our experiments show that SetFit obtains results comparable to parameter-efficient fine-tuning (PEFT) and pattern-exploiting training (PET) techniques, while being an order of magnitude faster to train.
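To make the two steps concrete, here is a minimal sketch, not the authors' released implementation. It assumes the `sentence-transformers` (classic `model.fit` training API) and `scikit-learn` libraries, a tiny illustrative dataset, and cosine-similarity loss as the contrastive objective; the checkpoint name and all data are placeholder choices.

```python
from itertools import combinations

from sentence_transformers import InputExample, SentenceTransformer, losses
from sklearn.linear_model import LogisticRegression
from torch.utils.data import DataLoader

# Toy few-shot data (illustrative, not from the paper).
texts = ["great movie", "loved every minute", "terrible film", "a waste of time"]
labels = [1, 1, 0, 0]

# Step 1: contrastive fine-tuning. Build text pairs labeled 1.0 when the two
# texts share a class and 0.0 otherwise, then fit the ST with cosine loss.
pairs = [
    InputExample(texts=[texts[i], texts[j]], label=float(labels[i] == labels[j]))
    for i, j in combinations(range(len(texts)), 2)
]
model = SentenceTransformer("sentence-transformers/paraphrase-mpnet-base-v2")
loader = DataLoader(pairs, shuffle=True, batch_size=4)
model.fit(
    train_objectives=[(loader, losses.CosineSimilarityLoss(model))],
    epochs=1,
    warmup_steps=2,
)

# Step 2: train a classification head on the adapted embeddings.
embeddings = model.encode(texts)
head = LogisticRegression().fit(embeddings, labels)

# Inference: embed new text, then classify with the head.
print(head.predict(model.encode(["an instant classic"])))
```

The released `setfit` library wraps both steps behind a single trainer; the sketch above only unpacks the underlying mechanism.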
Authors
Lewis Tunstall, Nils Reimers, Unso Eun Seo Jo, Luke Bates, Daniel Korat, Moshe Wasserblat, Oren Pereg