# Deep quantum neural networks form Gaussian processes

###### user

What are the proof techniques used by the authors?

**AI:** The text does not provide a clear answer to this question.

###### user

What are the main results?

**AI:** The main results of the text are:

1. Certain models based on Haar random unitary or orthogonal deep QNNs converge to Gaussian processes in the limit of large Hilbert space dimension.
2. The derivation of this result is more nuanced than in the classical case due to the role played by the input states, the measurement observable, and the fact that the entries of unitary matrices are not independent.
3. The ensuing Gaussian processes cannot be used to efficiently predict the outputs of the QNN via Bayesian statistics.
4. Expectation values and gradients concentrate as $\mathcal{O}\left(\frac{1}{e^d \sqrt{d}}\right)$ -- exponentially in the Hilbert space dimension.
5. The concentration of measure phenomenon in Haar random QNNs is much worse than previously thought.
6. The results improve our understanding of concentration in $t$-designs.

###### user

What is the variance of the Gaussian distribution?

**AI:** The text does not provide information about the variance of the Gaussian distribution.

###### user

How is Carleman's condition used?

**AI:** The text does not mention Carleman's condition, so how it is used cannot be determined from this context.

###### user

How does the variance of the Gaussian process compare when the circuit is Haar random over the unitary and orthogonal groups? Which case has the larger variance?

**AI:** The text does not provide information on how the variance of the Gaussian process compares when the circuit is Haar random over the unitary and orthogonal groups, or which case has the larger variance.

## Authors

Diego García-Martín, Martin Larocca, M. Cerezo

## Abstract

It is well known that artificial neural networks initialized from independent and identically distributed priors converge to Gaussian processes in the limit of a large number of neurons per hidden layer. In this work we prove an analogous result for Quantum Neural Networks (QNNs). Namely, we show that the outputs of certain models based on Haar random unitary or orthogonal deep QNNs converge to Gaussian processes in the limit of large Hilbert space dimension $d$. The derivation of this result is more nuanced than in the classical case due to the role played by the input states, the measurement observable, and the fact that the entries of unitary matrices are not independent. An important consequence of our analysis is that the ensuing Gaussian processes cannot be used to efficiently predict the outputs of the QNN via Bayesian statistics. Furthermore, our theorems imply that the concentration of measure phenomenon in Haar random QNNs is much worse than previously thought, as we prove that expectation values and gradients concentrate as $\mathcal{O}\left(\frac{1}{e^d \sqrt{d}}\right)$ -- exponentially in the Hilbert space dimension. Finally, we discuss how our results improve our understanding of concentration in $t$-designs.
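The concentration claim in the abstract can be probed numerically. The following minimal NumPy sketch (not from the paper; the choice of input state, traceless observable, and sample counts are illustrative assumptions) samples Haar-random unitaries via QR decomposition of a complex Ginibre matrix and shows that the empirical variance of the output $\langle\psi|U^\dagger O U|\psi\rangle$ shrinks as the Hilbert space dimension $d$ grows -- consistent with concentration of measure, though far too crude to resolve the paper's sharper exponential tail bounds.

```python
import numpy as np

def haar_unitary(d, rng):
    """Sample a Haar-random d x d unitary via QR of a complex Ginibre matrix."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    # Multiply each column by a unit phase so the distribution is exactly Haar
    phases = np.diag(r) / np.abs(np.diag(r))
    return q * phases

rng = np.random.default_rng(0)
means, variances = {}, {}
for d in (8, 32, 128):
    psi = np.zeros(d, dtype=complex)
    psi[0] = 1.0                                          # fixed input state |0>
    diag_O = np.where(np.arange(d) < d // 2, 1.0, -1.0)   # traceless observable O
    samples = np.empty(2000)
    for k in range(samples.size):
        phi = haar_unitary(d, rng) @ psi
        samples[k] = np.real(np.vdot(phi, diag_O * phi))  # <psi|U^dag O U|psi>
    means[d], variances[d] = samples.mean(), samples.var()
    print(f"d={d:4d}  mean={means[d]:+.4f}  var={variances[d]:.5f}")
```

The printed variances decrease monotonically with $d$ (roughly as $1/d$ for this normalization of $O$), and the sample means stay near zero, matching the Haar average $\operatorname{Tr}(O)/d = 0$.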

#### 1 comment

###### scicastboard

- What is the variance of the Gaussian distribution?

- What are the proof techniques used by the authors?