On the Principles of Parsimony and Self-Consistency for the Emergence of Intelligence
Ten years into the revival of deep networks and artificial intelligence, we
propose a theoretical framework that sheds light on deep networks within
the bigger picture of Intelligence in general. We introduce two
fundamental principles, Parsimony and Self-consistency, that we believe to be
cornerstones for the emergence of Intelligence, artificial or natural. While
these two principles have rich classical roots, we argue that they can be
stated anew in entirely measurable and computable ways. More specifically, the
two principles lead to an effective and efficient computational framework,
compressive closed-loop transcription, that unifies and explains the evolution
of modern deep networks and many artificial intelligence practices. While we
mainly use the modeling of visual data as an example, we believe the two principles
will unify the understanding of broad families of autonomous intelligent systems
and provide a framework for understanding the brain.