ReCo: Region-Controlled Text-to-Image Generation
Recently, large-scale text-to-image (T2I) models have shown impressive
performance in generating high-fidelity images, but they offer limited
controllability; for example, they cannot precisely place content in a
specific region from a free-form text description. In this paper, we propose
an effective technique for such regional control in T2I generation. We
augment the T2I model's input with an extra set of position tokens that
represent quantized spatial coordinates. Each region is specified by four
position tokens, representing its top-left and bottom-right corners, followed
by an open-ended natural-language description of the region. We then
fine-tune a pre-trained T2I model with this new input interface.
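To make the interface concrete, the following minimal Python sketch shows how such an input sequence could be assembled. The bin count, the "<bin_k>" token spelling, and the helper names (quantize, build_reco_prompt) are illustrative assumptions, not the paper's exact vocabulary or implementation.

```python
def quantize(coord, num_bins=1000):
    """Map a normalized coordinate in [0, 1] to a discrete bin index."""
    return min(int(coord * num_bins), num_bins - 1)

def build_reco_prompt(caption, regions, num_bins=1000):
    """Append, for each region, four position tokens (top-left and
    bottom-right corners) followed by its free-form text description."""
    parts = [caption]
    for (x0, y0, x1, y1), text in regions:
        corner_tokens = " ".join(
            f"<bin_{quantize(c, num_bins)}>" for c in (x0, y0, x1, y1)
        )
        parts.append(f"{corner_tokens} {text}")
    return " ".join(parts)

# Example: one image-level caption plus two free-form regional descriptions.
prompt = build_reco_prompt(
    "a living room",
    [
        ((0.10, 0.55, 0.48, 0.95), "a red leather sofa"),
        ((0.60, 0.20, 0.90, 0.50), "a framed painting of mountains"),
    ],
)
print(prompt)
```

The position tokens extend the text vocabulary, so the fine-tuned model consumes one flat token sequence; no architectural change is implied beyond the enlarged embedding table.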
Our model, dubbed ReCo (Region-Controlled T2I), enables region control of
arbitrary objects described by open-ended regional text rather than by object
labels from a constrained category set. Empirically, ReCo achieves better
image quality than
a T2I model augmented with positional words (FID: 8.82->7.36, SceneFID:
15.54->6.51 on COCO), together with more accurately placed objects, yielding
a 20.40% improvement in region classification accuracy on COCO.
Furthermore, we demonstrate that ReCo can better control object count,
spatial relationships, and region attributes such as color and size via the
free-form regional descriptions. Human evaluation on PaintSkill shows that
ReCo is +19.28% and +17.21% more accurate than the T2I model in generating
images with the correct object count and spatial relationships, respectively.