Can Pre-trained Language Models be Used to Resolve Textual and Semantic Merge Conflicts?
Program merging is standard practice when developers integrate their
individual changes into a common code base. When the merge algorithm fails,
the result is a merge conflict. Conflicts manifest either as textual merge
conflicts, where the algorithm fails to produce merged code, or as semantic
merge conflicts, where the merged code results in compiler or test breaks.
Resolving these conflicts in large code projects is expensive because it
requires developers to manually identify the sources of conflict and correct
them.
In this paper, we explore the feasibility of automatically repairing merge
conflicts (both textual and semantic) using k-shot learning with large neural
language models (LMs) such as GPT-3. One challenge in leveraging such language
models is fitting the examples and the query within a small prompt
(2048 tokens). We evaluate LMs and k-shot learning for two broad applications:
(a) textual and semantic merge conflicts for a divergent fork, Microsoft Edge,
and (b) textual merge conflicts for a large number of JavaScript projects on
GitHub. Our results are mixed: on the one hand, LMs provide state-of-the-art
(SOTA) performance on semantic merge conflict resolution for Edge compared to
earlier symbolic approaches; on the other hand, LMs do not yet obviate the
benefits of fine-tuning neural models (when sufficient data is available) or
of designing special-purpose domain-specific languages (DSLs) for restricted
program-synthesis patterns.
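
To make the k-shot prompting setup concrete, the sketch below assembles a prompt from a few resolved-conflict examples plus a query conflict, dropping examples when a rough 2048-token budget is exceeded. This is a minimal illustration, not the authors' implementation: the example conflicts, the whitespace-based token estimate, and the build_kshot_prompt helper are assumptions made for exposition.

```python
# Minimal sketch of k-shot prompt assembly for merge-conflict resolution.
# Assumptions (not from the paper): the conflict/resolution strings, the
# whitespace-based token estimate, and the helper names are illustrative.

TOKEN_BUDGET = 2048  # approximate prompt size limit noted in the abstract


def estimate_tokens(text: str) -> int:
    """Very rough token count; a real setup would use the model's tokenizer."""
    return len(text.split())


def build_kshot_prompt(examples: list[tuple[str, str]], query_conflict: str) -> str:
    """Concatenate (conflict, resolution) examples and the query conflict,
    dropping the oldest examples while the prompt exceeds the token budget."""
    def render(exs: list[tuple[str, str]]) -> str:
        shots = "\n\n".join(f"Conflict:\n{c}\nResolution:\n{r}" for c, r in exs)
        return f"{shots}\n\nConflict:\n{query_conflict}\nResolution:\n"

    exs = list(examples)
    prompt = render(exs)
    while exs and estimate_tokens(prompt) > TOKEN_BUDGET:
        exs.pop(0)  # drop an example until the prompt fits the budget
        prompt = render(exs)
    return prompt


if __name__ == "__main__":
    examples = [
        ("<<<<<<< A\nx = 1\n=======\nx = 2\n>>>>>>> B", "x = 2"),
        ("<<<<<<< A\nlog(a)\n=======\nlog(b)\n>>>>>>> B", "log(a)\nlog(b)"),
    ]
    query = "<<<<<<< A\nreturn foo()\n=======\nreturn bar()\n>>>>>>> B"
    print(build_kshot_prompt(examples, query))
```

The resulting prompt would then be sent to the language model, whose completion is taken as the candidate resolution of the query conflict.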