The lottery ticket hypothesis conjectures the existence of sparse subnetworks
of large randomly initialized deep neural networks that can be successfully
trained in isolation. Recent work has experimentally observed that some of
these tickets can be practically reused across a variety of tasks, hinting at
some form of universality. We formalize this concept and theoretically prove
that not only do such universal tickets exist but they also do not require
further training. Our proofs introduce technical innovations related to pruning
for strong lottery tickets, including extensions of subset sum results and a
strategy to exploit greater network depth. Our explicit
sparse constructions of universal function families might be of independent
interest, as they highlight representational benefits induced by univariate
convolutional architectures.
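
As a rough illustration of the subset sum idea underlying such pruning arguments (a minimal sketch, not the paper's actual construction; the helper `best_subset_sum` and the exhaustive search are assumptions for illustration), the snippet below approximates a single target weight by a subset of random candidate weights, the coordinate-wise primitive that strong lottery ticket proofs repeat across a network. Classical subset sum results suggest the error shrinks exponentially in the number of candidates, so only logarithmically many random weights per target are needed.

```python
import itertools
import random

def best_subset_sum(values, target):
    """Exhaustively find the subset of `values` whose sum is closest to
    `target` (feasible only for small pools; illustrative only)."""
    best_subset, best_err = (), abs(target)  # empty subset as baseline
    for r in range(1, len(values) + 1):
        for subset in itertools.combinations(values, r):
            err = abs(sum(subset) - target)
            if err < best_err:
                best_subset, best_err = subset, err
    return best_subset, best_err

# A target weight in [-1, 1] and a pool of random candidate weights,
# standing in for the random weights available to the pruning argument.
random.seed(0)
target = 0.37
pool = [random.uniform(-1, 1) for _ in range(15)]
subset, err = best_subset_sum(pool, target)
print(f"approximated {target} with error {err:.2e} using {len(subset)} weights")
```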