On the Trainability of Quantum Convolutional Neural Networks

Absence of Barren Plateaus in Quantum Convolutional Neural Networks

We rigorously analyze the gradient scaling for the parameters of the quantum convolutional neural network (QCNN) architecture. We find that the variance of the gradient vanishes no faster than polynomially in the system size, implying that QCNNs do not exhibit barren plateaus. This provides an analytical guarantee for the trainability of randomly initialized QCNNs, in contrast to many other quantum neural network (QNN) architectures, whose gradients vanish exponentially under random initialization. We also introduce a novel graph-based method for analyzing expectation values over Haar-distributed unitaries, which is likely to be useful in other contexts. Finally, we perform numerical simulations to verify our analytical results.
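To make the barren-plateau notion concrete, the following is a minimal sketch (not the paper's method) of how gradient variance is typically estimated numerically: sample random parameter initializations of an ansatz, compute one partial derivative via the parameter-shift rule, and measure the variance across samples. The ansatz here is a generic hardware-efficient circuit of RY rotations and CZ entanglers with a global Z⊗…⊗Z cost — a hypothetical stand-in chosen precisely because such circuits *do* exhibit the vanishing gradient variance that QCNNs are shown to avoid.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY(theta) = exp(-i*theta*Y/2) as a real 2x2 matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_single(state, gate, qubit, n):
    """Apply a 2x2 gate to one qubit of an n-qubit statevector."""
    state = state.reshape([2] * n)
    state = np.tensordot(gate, state, axes=([1], [qubit]))
    state = np.moveaxis(state, 0, qubit)
    return state.reshape(-1)

def apply_cz(state, q1, q2, n):
    """Apply a controlled-Z between two qubits (diagonal sign flip)."""
    state = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    state[tuple(idx)] *= -1
    return state.reshape(-1)

def cost(params, n, layers):
    """Expectation of the global observable Z x ... x Z after the ansatz."""
    state = np.zeros(2 ** n)
    state[0] = 1.0
    k = 0
    for _ in range(layers):
        for q in range(n):
            state = apply_single(state, ry(params[k]), q, n)
            k += 1
        for q in range(n - 1):
            state = apply_cz(state, q, q + 1, n)
    z = np.array([1.0, -1.0])
    diag = z
    for _ in range(n - 1):
        diag = np.kron(diag, z)  # eigenvalues of Z x ... x Z
    return float(state @ (diag * state))

def grad_variance(n, layers, samples=200, seed=0):
    """Var over random initializations of dC/d(theta_0), via parameter shift."""
    rng = np.random.default_rng(seed)
    grads = []
    for _ in range(samples):
        p = rng.uniform(0, 2 * np.pi, n * layers)
        p_plus, p_minus = p.copy(), p.copy()
        p_plus[0] += np.pi / 2
        p_minus[0] -= np.pi / 2
        grads.append(0.5 * (cost(p_plus, n, layers) - cost(p_minus, n, layers)))
    return float(np.var(grads))

if __name__ == "__main__":
    for n in (2, 4, 6):
        print(n, grad_variance(n, layers=4))
```

For this generic ansatz the printed variance shrinks as the qubit count grows, which is the barren-plateau signature; the paper's result is that the analogous variance for QCNN parameters is lower-bounded by an inverse polynomial in the system size instead.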