Optimization Problems in Quantum Machine Learning

Date

2023

Abstract

Variational algorithms are a paradigm for designing quantum procedures implementable on noisy intermediate-scale quantum (NISQ) machines, and they are viewed as promising candidates for demonstrating practical quantum advantage.

In this dissertation, we study the optimization aspects of variational quantum algorithms in an attempt to answer when and why a variational quantum algorithm works. We focus mainly on two instantiations of this family: Variational Quantum Eigensolvers (VQEs) and Quantum Neural Networks (QNNs).
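To make the variational loop concrete, here is a minimal toy sketch (not taken from the dissertation): a one-parameter "circuit" |ψ(θ)⟩ = RY(θ)|0⟩ whose energy ⟨ψ(θ)|Z|ψ(θ)⟩ equals cos θ, minimized by classical gradient descent using parameter-shift gradients. The function names, learning rate, and starting point are all illustrative assumptions.

```python
import math

def energy(theta):
    # <psi(theta)| Z |psi(theta)> for |psi(theta)> = RY(theta)|0>; equals cos(theta)
    return math.cos(theta)

def parameter_shift_grad(f, theta):
    # Parameter-shift rule: exact gradient for Pauli-rotation gates,
    # evaluated from two shifted energy measurements
    return 0.5 * (f(theta + math.pi / 2) - f(theta - math.pi / 2))

def train(theta=0.4, lr=0.3, steps=100):
    # Classical optimizer loop driving the quantum (here: simulated) energy
    for _ in range(steps):
        theta -= lr * parameter_shift_grad(energy, theta)
    return theta, energy(theta)

theta, e = train()
print(round(e, 4))  # converges to the ground energy -1.0
```

The loop is the essential structure shared by VQEs and QNNs: a quantum device estimates a cost, and a classical routine updates the circuit parameters.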

We first establish that, for almost all QNN architecture designs, there exist hard problem instances whose optimization landscape is riddled with spurious local minima whenever the QNN is under-parameterized. This observation rules out the possibility of a universally good QNN design achieving exponential advantage over classical neural networks on every dataset, and calls for instance-dependent designs of variational circuits.

We then show that VQE training converges linearly once the number of parameters exceeds an over-parameterization threshold. By tying this threshold to instance-dependent quantities, we develop variants of VQE algorithms that allow training and testing of shallower variational circuits, as circuit depth is usually the implementation bottleneck on NISQ machines.

For QNNs, by analyzing their convergence, we show that the dynamics of QNN training differ from those of any kernel regression, thereby ruling out the popular conjecture that over-parameterized QNNs are equivalent to certain neural tangent kernels, as their classical counterparts are. As a practical implication, our analysis showcases measurement design as a way to accelerate the convergence of QNNs.

At the end of this dissertation, we consider a classical problem of optimization with partial information, the Multi-Armed Bandit (MAB). We show that, when enhanced with quantum access to the arms, MAB algorithms achieve a quadratic speed-up over their classical counterparts, which can serve as a building block for quantum reinforcement learning algorithms.
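For reference, the classical baseline that the quantum-access algorithm improves on quadratically can be sketched as UCB1 on Bernoulli arms (a hedged illustration, not the dissertation's quantum algorithm; arm means, horizon, and function names are assumptions):

```python
import math
import random

def ucb1(means, horizon, seed=0):
    # Classical UCB1: pull the arm maximizing empirical mean + confidence bonus.
    # The quantum-access algorithm in the text improves the sample
    # complexity of identifying the best arm quadratically over this baseline.
    rng = random.Random(seed)
    n = len(means)
    counts = [0] * n
    sums = [0.0] * n
    for t in range(1, horizon + 1):
        if t <= n:
            arm = t - 1  # pull each arm once to initialize
        else:
            arm = max(
                range(n),
                key=lambda i: sums[i] / counts[i]
                + math.sqrt(2.0 * math.log(t) / counts[i]),
            )
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts

# The better arm (mean 0.7) ends up pulled far more often than the worse one
counts = ucb1([0.3, 0.7], horizon=2000)
```

Classically, distinguishing two arms with reward gap Δ requires on the order of 1/Δ² pulls; with quantum access to the arms, amplitude-estimation-style techniques bring this to the order of 1/Δ, which is the source of the quadratic speed-up.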
