MNIST-1D: A Simple, Low-Memory, and Low-End Benchmark for Deep Learning
Scaling *down* Deep Learning
We introduce a minimalist, low-memory, and low-compute alternative to classic deep learning benchmarks.
The training examples are 20 times smaller than those of classic benchmarks, yet they differentiate more clearly between linear, nonlinear, and convolutional models, which attain 32%, 68%, and 94% accuracy respectively (versus 94%, 99+%, and 99+% on MNIST).
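The "20 times smaller" figure can be sanity-checked with a quick back-of-the-envelope calculation. The 40-point example length below is an assumption drawn from the MNIST-1D dataset description, not stated in this excerpt:

```python
# Rough size comparison between classic MNIST and MNIST-1D examples.
mnist_dims = 28 * 28    # each MNIST image is a 28x28 grayscale grid
mnist1d_dims = 40       # assumed: each MNIST-1D example is a 1D signal of 40 points

ratio = mnist_dims / mnist1d_dims
print(f"MNIST example size:    {mnist_dims} values")
print(f"MNIST-1D example size: {mnist1d_dims} values")
print(f"Size ratio:            {ratio:.1f}x")  # roughly 20x smaller per example
```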
We then present example use cases, which include measuring the spatial inductive biases of lottery tickets, observing deep double descent, and meta-learning an activation function.