HyperTransformer: Model Generation for Supervised and Semi-Supervised Few-Shot Learning
We propose a transformer-based model for few-shot learning that generates the weights of a convolutional neural network (CNN) directly from support samples.
Our method is particularly effective for small target architectures, where learning a fixed, task-independent universal embedding is not optimal and performance improves when task-specific information can modulate all model parameters.
For larger models, we find that generating the last layer alone yields results competitive with or better than those of state-of-the-art methods, while remaining end-to-end differentiable.
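To make the weight-generation mechanism concrete, the sketch below illustrates the core idea under simplifying assumptions that are not from the paper: support images are encoded into tokens together with label embeddings, a transformer processes these alongside learned placeholder tokens, and the placeholder outputs are decoded into the weights of a single generated linear classifier (the paper generates weights for convolutional layers as well). All layer sizes and module names here are illustrative.

```python
# A minimal sketch of transformer-based weight generation for few-shot
# classification. Assumptions (not from the paper): a tiny CNN encoder,
# only one generated linear layer, and arbitrary token/embedding sizes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightGenerator(nn.Module):
    def __init__(self, n_way=5, feat_dim=64, d_model=128):
        super().__init__()
        self.n_way, self.feat_dim = n_way, feat_dim
        # Shared feature extractor for support and query images.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.label_embed = nn.Embedding(n_way, d_model - feat_dim)
        # Learned placeholder tokens whose transformer outputs are decoded
        # into the generated weights.
        self.weight_tokens = nn.Parameter(torch.randn(n_way, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.to_weight = nn.Linear(d_model, feat_dim + 1)  # one row of W plus a bias

    def forward(self, support_x, support_y, query_x):
        feats = self.encoder(support_x)                              # (N*K, feat_dim)
        tokens = torch.cat([feats, self.label_embed(support_y)], dim=-1)
        seq = torch.cat([tokens, self.weight_tokens], dim=0)[None]   # (1, L, d_model)
        out = self.transformer(seq)[0, -self.n_way:]                 # weight-token outputs
        wb = self.to_weight(out)                                     # (n_way, feat_dim+1)
        W, b = wb[:, :-1], wb[:, -1]
        # Apply the generated classifier to query features; gradients flow
        # end-to-end into the transformer through the generated weights.
        return F.linear(self.encoder(query_x), W, b)
```

Because the generated weights are an ordinary differentiable function of the support set, the whole episode (support encoding, weight generation, query classification) can be trained with a standard cross-entropy loss on the query predictions, which is what makes the approach end-to-end differentiable.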