Universality of Classically Trainable, Quantum-Deployed Boson-Sampling Generative Models

This paper is a preprint and has not been certified by peer review.


Authors

Andrii Kurkin, Ulysse Chabaud, Zoltán Kolarovszki, Bence Bakó, Zoltán Zimborás, Vedran Dunjko

Abstract

Recent work on the instantaneous quantum polynomial-time (IQP) quantum-circuit Born machine (QCBM) highlights a promising paradigm for generative modeling: train classically, deploy quantumly. In this setting, the training objective can be evaluated efficiently on a classical computer, while sampling from the resulting model may still be classically intractable. Furthermore, in the IQP-QCBM framework, extending the model family with ancillary qubits has been proven to yield universality. This paper asks whether similar results hold for linear-optical generative models. To this end, we introduce the Boson Sampling Born Machine (BSBM). Our analysis retraces steps analogous to those taken for IQP-QCBMs, with some twists. Using recent results that enable classical approximation of broad classes of expectation values in linear optics, we show that BSBMs can be trained classically for wide families of loss functions. Next, we argue that "basic" BSBMs are not universal generative models, and that universality can be achieved by expanding the model while preserving efficient classical training and sampling hardness. In our approach, we introduce and analyze the role of constant-function postprocessing, generalizing the construction for IQP-QCBMs, which under suitable conditions can lead to universality while preserving the hardness of classically simulating the models. We showcase a family of BSBMs, characterized by a single hyperparameter, that allows for a monotonic increase in expressivity toward universality while retaining the capacity to represent ostensibly hard distributions. Furthermore, we discuss possible modalities for efficient classical training, in the sense of efficient estimation of gradients of the loss function.
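The train-classically/deploy-quantumly idea in the abstract can be illustrated with a fully classical toy sketch: a Born machine whose probabilities come from squared (normalized) amplitudes, trained by gradient descent on an MMD-style loss whose gradients are estimated classically. This is a hypothetical illustration only, not the paper's BSBM construction; the parameterization, kernel, and loss here are invented for demonstration, and real BSBM training would use the linear-optical expectation-value estimators the paper describes.

```python
import numpy as np

def born_probs(theta):
    # Born rule: outcome probabilities are squared, normalized amplitudes.
    return theta**2 / np.sum(theta**2)

def mmd_loss(theta, target, kernel):
    # Squared maximum mean discrepancy between model and target
    # distributions under a fixed kernel (a common QCBM-style loss).
    d = born_probs(theta) - target
    return d @ kernel @ d

# Toy setup: 8 outcomes, uniform target, Gaussian kernel on outcome labels.
rng = np.random.default_rng(0)
n = 8
theta = rng.uniform(0.5, 1.5, n)          # hypothetical model parameters
target = np.full(n, 1.0 / n)
xs = np.arange(n)
kernel = np.exp(-0.5 * (xs[:, None] - xs[None, :])**2)

def grad_fd(theta, eps=1e-5):
    # Central finite differences stand in for "classically estimated"
    # gradients of the loss (the paper's setting uses classical
    # approximation of linear-optical expectation values instead).
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += eps
        tm[i] -= eps
        g[i] = (mmd_loss(tp, target, kernel) - mmd_loss(tm, target, kernel)) / (2 * eps)
    return g

losses = []
for _ in range(200):
    losses.append(mmd_loss(theta, target, kernel))
    theta -= 0.5 * grad_fd(theta)          # plain gradient descent

print(f"loss: {losses[0]:.6f} -> {losses[-1]:.6f}")
```

In the BSBM setting, the point is that a loop of this shape never needs samples from the model: each step only requires loss and gradient values, which (per the abstract) can be approximated efficiently on a classical computer even when sampling from the trained model is hard.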
