Criticality of Deep Neural Networks in the Infinite-Width Approximation

Critical initialization of wide and deep neural networks through partial Jacobians: general theory and applications to LayerNorm

We introduce partial Jacobians of a network, defined as derivatives of the preactivations in layer l with respect to the preactivations in an earlier layer l0 ≤ l.
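As an illustration of this definition (our own sketch, not the paper's code), the partial Jacobian of a toy tanh MLP can be computed directly with automatic differentiation; the helper `forward_from` and all shapes here are assumptions for the example:

```python
import jax
import jax.numpy as jnp

def forward_from(h, weights):
    """Propagate a preactivation vector h^{l0} through the remaining layers,
    returning the preactivations h^l (hypothetical toy MLP)."""
    for W in weights:
        h = W @ jnp.tanh(h)
    return h

width, depth = 64, 5
keys = jax.random.split(jax.random.PRNGKey(0), depth)
# 1/sqrt(width) scaling keeps preactivation variance O(1) at initialization
weights = [jax.random.normal(k, (width, width)) / jnp.sqrt(width) for k in keys]

h0 = jax.random.normal(jax.random.PRNGKey(1), (width,))
# Partial Jacobian: derivative of layer-l preactivations w.r.t. layer-l0 ones
J = jax.jacobian(lambda h: forward_from(h, weights))(h0)
# Averaged squared Frobenius norm, a scalar whose depth dependence
# diagnoses criticality
jac_norm = jnp.trace(J @ J.T) / width
print(J.shape, float(jac_norm))
```

Here `jax.jacobian` materializes the full width-by-width matrix; for wide networks one would instead estimate the norm with Jacobian-vector products.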

We discuss various properties of the partial Jacobians, such as their scaling with depth and their relation to the Neural Tangent Kernel (NTK).

We derive recurrence relations for the partial Jacobians and utilize them to analyze the criticality of deep neural networks with (and without) LayerNorm.
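The depth scaling that this analysis targets can be seen in a minimal Monte-Carlo sketch (an illustration under assumed tanh activations and Gaussian weights, not the paper's recurrence relations themselves): away from criticality, the averaged partial Jacobian norm grows or decays exponentially with depth.

```python
import numpy as np

def partial_jacobian_norm(depth, sigma_w, width=256, seed=0):
    """Single-sample estimate of Tr(J J^T) / width for a toy tanh MLP,
    where J is the partial Jacobian of depth-l preactivations w.r.t.
    the input-layer preactivations (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    h = rng.normal(size=width)   # initial preactivations
    J = np.eye(width)            # partial Jacobian, built up layer by layer
    for _ in range(depth):
        W = rng.normal(size=(width, width)) * sigma_w / np.sqrt(width)
        # Chain rule step: dh^{l+1}/dh^0 = W diag(tanh'(h^l)) dh^l/dh^0
        J = W @ ((1.0 - np.tanh(h) ** 2)[:, None] * J)
        h = W @ np.tanh(h)
    return np.trace(J @ J.T) / width

# Ordered phase (small sigma_w): norm decays with depth;
# chaotic phase (large sigma_w): norm grows with depth
print(partial_jacobian_norm(10, sigma_w=0.5), partial_jacobian_norm(20, sigma_w=0.5))
print(partial_jacobian_norm(10, sigma_w=2.0), partial_jacobian_norm(20, sigma_w=2.0))
```

Critical initialization corresponds to the weight variance at which this norm stays O(1) at all depths.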

We find that the normalization layer changes the optimal values of the hyperparameters and the critical exponents.

We argue that LayerNorm is more stable when applied to preactivations rather than activations, due to a larger correlation depth.
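The two placements can be contrasted in a small sketch (our illustration, with a plain LayerNorm and no learned affine parameters): normalizing the preactivations guarantees the nonlinearity sees O(1) inputs regardless of the incoming scale, whereas normalizing the activations acts only after any saturation has occurred.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Plain LayerNorm over the feature dimension (no learned scale/shift)."""
    return (x - x.mean()) / np.sqrt(x.var() + eps)

rng = np.random.default_rng(0)
width = 128
W = rng.normal(size=(width, width)) / np.sqrt(width)
h = 10.0 * rng.normal(size=width)  # deliberately large incoming preactivations

# Placement discussed in the text: normalize the preactivations,
# then apply the nonlinearity
pre_ln = np.tanh(layer_norm(W @ h))
# Alternative placement: normalize the activations after the nonlinearity
post_ln = layer_norm(np.tanh(W @ h))
print(pre_ln.std(), post_ln.std())
```

In the first variant tanh receives unit-variance inputs by construction; in the second, the large preactivations already saturate tanh before normalization can act.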