Deep quantum neural networks form Gaussian processes

This paper is a preprint and has not been certified by peer review.


Authors

Diego García-Martín, Martin Larocca, M. Cerezo

Abstract

It is well known that artificial neural networks initialized from independent and identically distributed priors converge to Gaussian processes in the limit of a large number of neurons per hidden layer. In this work we prove an analogous result for Quantum Neural Networks (QNNs). Namely, we show that the outputs of certain models based on Haar random unitary or orthogonal deep QNNs converge to Gaussian processes in the limit of large Hilbert space dimension $d$. The derivation of this result is more nuanced than in the classical case due to the role played by the input states, the measurement observable, and the fact that the entries of unitary matrices are not independent. An important consequence of our analysis is that the ensuing Gaussian processes cannot be used to efficiently predict the outputs of the QNN via Bayesian statistics. Furthermore, our theorems imply that the concentration of measure phenomenon in Haar random QNNs is much worse than previously thought, as we prove that expectation values and gradients concentrate as $\mathcal{O}\left(\frac{1}{e^d \sqrt{d}}\right)$ -- exponentially in the Hilbert space dimension. Finally, we discuss how our results improve our understanding of concentration in $t$-designs.


1 comment

scicastboard

We currently enable AI-dialogue on extended data sets for verified users and moderated papers only. The paper has now been approved. Below are replies to the questions:
- What is the variance of the Gaussian distribution?
The paper provides a proof that certain models of quantum neural networks (QNNs) converge to Gaussian processes in the limit of large Hilbert space dimension. The authors show that the variance of the resulting Gaussian distribution satisfies $\mathrm{Var}_G\!\left[\sum_{i=1}^{d} C_j(i)\right] = 0$ (for $G = U(d), O(d)$), which means that the QNN's outcomes converge to a delta distribution. The paper also discusses the predictive power of the QNN's GP and the concentration of measure phenomenon in Haar random QNNs. The variance of the Gaussian distribution is given explicitly in Corollaries 1 and 2: Corollary 1 states that the variance is $\sigma^2 = \frac{1}{d+1}$ for Haar random unitaries over $U(d)$ and $\sigma^2 = \frac{2}{d+2}$ for Haar random unitaries over $O(d)$, while Corollary 2 provides a more precise characterization of the concentration of measure and barren plateau phenomena for Haar random circuits.
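As an illustrative sanity check (not part of the paper), the $\sigma^2 = \frac{1}{d+1}$ scaling for $U(d)$ can be estimated numerically by sampling Haar random unitaries and measuring a traceless observable with $\mathrm{Tr}[O^2] = d$. The dimension, observable, and sample count below are arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(d, rng):
    """Sample a Haar-random unitary from U(d) via QR of a Ginibre matrix."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    # Fix the phases of the columns so the distribution is exactly Haar.
    return q * (np.diag(r) / np.abs(np.diag(r)))

d = 8                                                # Hilbert space dimension (3 qubits)
obs = np.kron(np.diag([1.0, -1.0]), np.eye(d // 2))  # O = Z x I: traceless, Tr[O^2] = d
n_samples = 20000

vals = np.empty(n_samples)
for k in range(n_samples):
    psi = haar_unitary(d, rng)[:, 0]                 # U|0>: first column of U
    vals[k] = np.real(psi.conj() @ obs @ psi)        # expectation value <0|U^† O U|0>

print(np.mean(vals))                 # ~ Tr[O]/d = 0
print(np.var(vals), 1 / (d + 1))     # sample variance vs. predicted sigma^2 = 1/(d+1)
```

With these settings the empirical variance lands within about one percent of $1/(d+1)$, consistent with the corollary quoted above.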
- What are the proof techniques used by the authors?
The proof techniques used by the authors combine mathematical analysis, linear algebra, and probability theory. They use various propositions and theorems to derive results on the convergence of quantum neural networks to Gaussian processes, the variance of the resulting Gaussian distribution, and the concentration of measure phenomenon in Haar random QNNs. They also use the parameter-shift rule to compute the gradient of the QNN's output with respect to its parameters, and they apply the union bound together with the complementary error function to derive bounds on the probability of certain events. Overall, the authors employ a combination of analytical and probabilistic techniques to rigorously prove their results.
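The parameter-shift rule mentioned above can be sketched on a minimal single-qubit example (an illustration only, not one of the paper's circuits; the choice of an $R_Y$ rotation and a $Z$ observable is ours). For a gate generated by a Pauli operator, two shifted cost evaluations give the exact gradient:

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation RY(theta) = exp(-i * theta * Y / 2)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

Z = np.diag([1.0, -1.0])

def cost(theta):
    """QNN-style cost C(theta) = <0| RY(theta)^† Z RY(theta) |0> = cos(theta)."""
    psi = ry(theta) @ np.array([1.0, 0.0])
    return psi @ Z @ psi

theta = 0.7
# Parameter-shift rule: dC/dtheta from two evaluations shifted by +/- pi/2.
grad = 0.5 * (cost(theta + np.pi / 2) - cost(theta - np.pi / 2))
print(grad, -np.sin(theta))   # both equal the analytic gradient -sin(theta)
```

Here the shift is exact, not a finite-difference approximation: because the gate's generator has eigenvalues $\pm\frac{1}{2}$, the cost is a sinusoid in $\theta$ and the two shifted evaluations recover its derivative identically.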

