Symbolic regression, the task of predicting the mathematical expression of a
function from observations of its values, is a difficult problem which usually
involves a two-step procedure: predicting the "skeleton" of the expression up
to the choice of numerical constants, then fitting the constants by optimizing
a non-convex loss function. The dominant approach is genetic programming, which
evolves candidates by iterating this subroutine a large number of times. Neural
networks have recently been tasked with predicting the correct skeleton in a
single try, but remain much less powerful than genetic programming. In this paper, we challenge this two-step
procedure, and task a Transformer to directly predict the full mathematical
expression, constants included. One can subsequently refine the predicted
constants by feeding them to the non-convex optimizer as an informed
initialization. We present ablations to show that this end-to-end approach
yields better results, sometimes even without the refinement step. We evaluate
our model on problems from the SRBench benchmark and show that it
approaches the performance of state-of-the-art genetic programming with several
orders of magnitude faster inference.
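
Below is a minimal sketch, not taken from the paper's code, of the constant-refinement step described above: the constants predicted end-to-end serve as an informed initialization for a non-convex optimizer that fits the expression to the observed values. The example expression, its predicted constants, and the use of scipy's BFGS optimizer are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy data generated by an unknown target function (assumed for illustration).
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=200)
y = 1.3 * np.sin(2.1 * x) + 0.5

# Suppose the model predicted the full expression "a * sin(b * x) + c",
# constants included, with values close to but not exactly the true ones.
def expression(consts, x):
    a, b, c = consts
    return a * np.sin(b * x) + c

predicted_consts = np.array([1.2, 2.0, 0.4])  # end-to-end prediction (assumed)

# Non-convex loss: mean squared error between the expression and the data.
def loss(consts):
    return np.mean((expression(consts, x) - y) ** 2)

# Refinement: local optimization starting from the informed initialization.
result = minimize(loss, predicted_consts, method="BFGS")
print("refined constants:", result.x)  # close to (1.3, 2.1, 0.5)
print("final loss:", result.fun)
```

Starting from an informed initialization, the local optimizer can converge to the true constants, whereas a random start may land in a poor local minimum of the non-convex loss; this is the motivation for predicting the constants rather than only the skeleton.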
Authors
Pierre-Alexandre Kamienny, Stéphane d'Ascoli, Guillaume Lample, François Charton