Model Evaluation, Model Selection, and Algorithm Selection in Machine Learning
The correct use of model evaluation, model selection, and algorithm selection
techniques is vital in academic machine learning research as well as in many
industrial settings. This article reviews different techniques that can be used
for each of these three subtasks and discusses the main advantages and
disadvantages of each technique with references to theoretical and empirical
studies. Further, recommendations are given to encourage best yet feasible
practices in research and applications of machine learning. Common methods such as the holdout method for model evaluation and selection are covered, though these are not recommended when working with small datasets.
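As a concrete illustration of the holdout method, the sketch below sets aside a stratified test set and reports test accuracy as the performance estimate. The dataset, classifier, and 70/30 split ratio are illustrative assumptions, not prescriptions from this article:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Illustrative dataset; any labeled dataset works here
X, y = load_breast_cancer(return_X_y=True)

# Holdout method: set aside a test set that is never touched during fitting;
# stratification preserves the class proportions in both splits
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=123, stratify=y)

clf = make_pipeline(StandardScaler(), LogisticRegression())
clf.fit(X_train, y_train)

# The test-set accuracy serves as the holdout estimate of generalization
# performance
print(f"Holdout accuracy: {clf.score(X_test, y_test):.3f}")
```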
Different flavors of the bootstrap technique are introduced for estimating the uncertainty of performance estimates, as an alternative to confidence intervals via normal approximation if bootstrapping is computationally feasible.
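To make the two kinds of intervals concrete, here is a minimal sketch that computes a normal-approximation confidence interval for a holdout accuracy estimate alongside one simple bootstrap variant, a percentile interval obtained by resampling the test set. The 95% level, the 1,000 bootstrap rounds, and the model/data setup are illustrative assumptions; other flavors such as the 0.632 bootstrap are not shown here:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=123, stratify=y)
clf = make_pipeline(StandardScaler(), LogisticRegression())
clf.fit(X_train, y_train)

acc = clf.score(X_test, y_test)
n = y_test.shape[0]

# Normal approximation: ACC +/- z * sqrt(ACC * (1 - ACC) / n), z = 1.96
# for a 95% confidence interval
se = np.sqrt(acc * (1.0 - acc) / n)
print(f"Normal-approx. 95% CI: [{acc - 1.96 * se:.3f}, {acc + 1.96 * se:.3f}]")

# Percentile bootstrap: resample the test set with replacement and
# recompute the accuracy on each bootstrap replicate
rng = np.random.RandomState(123)
boot_accs = []
for _ in range(1000):
    idx = rng.randint(0, n, size=n)  # draw test indices with replacement
    boot_accs.append(clf.score(X_test[idx], y_test[idx]))
lo, hi = np.percentile(boot_accs, [2.5, 97.5])
print(f"Percentile bootstrap 95% CI: [{lo:.3f}, {hi:.3f}]")
```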
Common cross-validation techniques such as leave-one-out cross-validation and k-fold cross-validation are reviewed, the bias-variance trade-off for choosing k is discussed, and practical tips for the optimal choice of k are given based on empirical evidence.
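The following sketch contrasts 10-fold cross-validation, using k = 10 as a commonly cited, empirically motivated default, with leave-one-out cross-validation, its k = n special case. The dataset and model are again placeholders:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
clf = make_pipeline(StandardScaler(), LogisticRegression())

# k-fold CV with k=10: each example is used for validation exactly once;
# smaller k tends toward a pessimistic bias, larger k toward higher variance
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=123)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"10-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

# Leave-one-out CV is the k = n special case (n model fits, so it can be
# slow on larger datasets)
loo_scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"LOOCV accuracy: {loo_scores.mean():.3f}")
```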
Different statistical tests for algorithm comparisons are presented, and strategies for dealing with multiple comparisons, such as omnibus tests and multiple-comparison corrections, are discussed.
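As one concrete example of such a test, the sketch below applies McNemar's test to two classifiers evaluated on the same test set and then shows a Bonferroni adjustment of the significance threshold. The models, the data, and the assumed number of comparisons m are illustrative, and McNemar's test is only one of several options for this setting:

```python
import numpy as np
from scipy import stats
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=123, stratify=y)

pred_a = make_pipeline(StandardScaler(), LogisticRegression()).fit(
    X_train, y_train).predict(X_test)
pred_b = DecisionTreeClassifier(random_state=123).fit(
    X_train, y_train).predict(X_test)

# McNemar's test uses only the discordant pairs: test cases on which
# exactly one of the two classifiers is correct
b = np.sum((pred_a == y_test) & (pred_b != y_test))  # only A correct
c = np.sum((pred_a != y_test) & (pred_b == y_test))  # only B correct

# Chi-squared statistic with continuity correction, 1 degree of freedom
chi2 = (abs(b - c) - 1) ** 2 / (b + c)
p = stats.chi2.sf(chi2, df=1)
print(f"McNemar chi2 = {chi2:.3f}, p = {p:.4f}")

# With m pairwise comparisons, a Bonferroni correction tests each
# comparison at alpha / m instead of alpha
m, alpha = 3, 0.05
print(f"Bonferroni-adjusted significance threshold: {alpha / m:.4f}")
```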
Finally, alternative methods for algorithm selection, such as the 5x2 cross-validation combined F-test and nested cross-validation, are recommended for comparing machine learning algorithms when datasets are small.
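To close with a sketch of these two approaches, the example below runs the 5x2cv combined F-test (here via the combined_ftest_5x2cv function from the mlxtend library) and a nested cross-validation estimate in which an inner grid search tunes hyperparameters while the outer loop estimates performance. The choice of models, the hyperparameter grid, and the inner/outer fold counts for nested CV are illustrative assumptions:

```python
from mlxtend.evaluate import combined_ftest_5x2cv
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
clf1 = make_pipeline(StandardScaler(), LogisticRegression())
clf2 = DecisionTreeClassifier(random_state=123)

# 5x2cv combined F-test: five repetitions of 2-fold CV; an F statistic is
# computed from the per-fold differences in the two models' scores
f, p = combined_ftest_5x2cv(estimator1=clf1, estimator2=clf2,
                            X=X, y=y, random_seed=123)
print(f"5x2cv combined F-test: F = {f:.3f}, p = {p:.4f}")

# Nested CV: the inner loop tunes hyperparameters via grid search, while
# the outer loop estimates the performance of the whole tuning procedure
inner = GridSearchCV(DecisionTreeClassifier(random_state=123),
                     param_grid={"max_depth": [2, 4, 8, None]}, cv=2)
outer_scores = cross_val_score(inner, X, y, cv=5)
print(f"Nested CV accuracy: {outer_scores.mean():.3f} "
      f"+/- {outer_scores.std():.3f}")
```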