

The power of quantum neural networks

A preprint version of the article is available at arXiv.

Abstract

It is unknown whether near-term quantum computers are advantageous for machine learning tasks. In this work we address this question by trying to understand how powerful and trainable quantum machine learning models are in relation to popular classical neural networks. We propose the effective dimension—a measure that captures these qualities—and prove that it can be used to assess any statistical model’s ability to generalize on new data. Crucially, the effective dimension is a data-dependent measure that depends on the Fisher information, which allows us to gauge the ability of a model to train. We demonstrate numerically that a class of quantum neural networks is able to achieve a considerably better effective dimension than comparable feedforward networks and train faster, suggesting an advantage for quantum machine learning, which we verify on real quantum hardware.
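The effective dimension proposed in the abstract is defined, for a model with d parameters on a parameter space Θ of volume V_Θ, as

d_{γ,n} = 2 log( (1/V_Θ) ∫_Θ √(det(I_d + κ_{γ,n} F̂(θ))) dθ ) / log κ_{γ,n}, with κ_{γ,n} = γn / (2π log n),

where n is the number of data samples, γ ∈ (0, 1] and F̂(θ) is the Fisher information matrix normalized so that its trace, averaged over Θ, equals d. The following is a minimal Monte Carlo sketch of this quantity in Python; it is our illustration, not the authors' implementation (theirs is available via the Zenodo repository cited under Code availability). The function name and the assumption that the caller supplies Fisher matrices evaluated at uniformly sampled parameters are ours.

```python
import numpy as np

def effective_dimension(fishers, n, gamma=1.0):
    """Monte Carlo estimate of the effective dimension d_{gamma,n}.

    fishers : array of shape (k, d, d); Fisher information matrices
              evaluated at k parameter points sampled uniformly from Theta.
    n       : number of data samples.
    gamma   : constant in (0, 1].
    """
    k, d, _ = fishers.shape
    # Normalize so that the trace of F-hat, averaged over Theta, equals d.
    fhat = d * fishers / np.mean(np.trace(fishers, axis1=1, axis2=2))
    kappa = gamma * n / (2 * np.pi * np.log(n))
    # log det(I + kappa * F-hat) per sample; slogdet for numerical stability.
    _, logdet = np.linalg.slogdet(np.eye(d) + kappa * fhat)
    # Monte Carlo average of sqrt(det(.)), computed in log space.
    log_avg = np.logaddexp.reduce(0.5 * logdet) - np.log(k)
    return 2 * log_avg / np.log(kappa)
```

Dividing the result by d gives the normalized effective dimension plotted in Fig. 3.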


Fig. 1: Overview of the quantum neural network used in this study.
Fig. 2: Average Fisher information spectrum distribution.
Fig. 3: Normalized effective dimension and training loss.
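Fig. 2 concerns the eigenvalue distribution of the Fisher information matrix for each model. As a rough sketch of how such a spectrum can be computed from per-sample gradients of the log-likelihood (our illustration under the common empirical-Fisher approximation, not the authors' implementation; ref. 37 below discusses the limitations of this approximation):

```python
import numpy as np

def empirical_fisher_spectrum(scores):
    """Eigenvalue spectrum of the empirical Fisher information matrix.

    scores : array of shape (n, d); per-sample gradients of
             log p(y | x; theta) with respect to the d model parameters.
    """
    n, d = scores.shape
    # Empirical Fisher: the average outer product of the score vectors.
    fisher = scores.T @ scores / n
    # Symmetric positive semi-definite, so eigvalsh applies; sort descending.
    return np.sort(np.linalg.eigvalsh(fisher))[::-1]
```

Here `scores` is a hypothetical input; for a quantum neural network these gradients would be estimated from the measured output distribution of the circuit.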


Data availability

The data for the graphs and analyses in this study were generated using Python. Source data are provided with this paper. All other data can be accessed via the following Zenodo repository: https://doi.org/10.5281/zenodo.4732830 (ref. 56).

Code availability

All code to generate the data, figures and analyses in this study is publicly available with detailed information on the implementation via the following Zenodo repository: https://doi.org/10.5281/zenodo.4732830 (ref. 56).

References

  1. Goodfellow, I., Bengio, Y. & Courville, A. Deep Learning (MIT Press, 2016); http://www.deeplearningbook.org

  2. Baldi, P. & Vershynin, R. The capacity of feedforward neural networks. Neural Networks 116, 288–311 (2019).


  3. Dziugaite, G. K. & Roy, D. M. Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data. In Proc. 33rd Conference on Uncertainty in Artificial Intelligence (UAI, 2017).

  4. Schuld, M. Supervised Learning with Quantum Computers (Springer, 2018).

  5. Zoufal, C., Lucchi, A. & Woerner, S. Quantum generative adversarial networks for learning and loading random distributions. npj Quant. Inf. 5, 1–9 (2019).


  6. Romero, J., Olson, J. P. & Aspuru-Guzik, A. Quantum autoencoders for efficient compression of quantum data. Quant. Sci. Technol. 2, 045001 (2017).


  7. Dunjko, V. & Briegel, H. J. Machine learning & artificial intelligence in the quantum domain: a review of recent progress. Rep. Prog. Phys. 81, 074001 (2018).


  8. Ciliberto, C. et al. Quantum machine learning: a classical perspective. Proc. Roy. Soc. A 474, 20170551 (2018).


  9. Killoran, N. et al. Continuous-variable quantum neural networks. Phys. Rev. Res. 1, 033063 (2019).


  10. Schuld, M., Sinayskiy, I. & Petruccione, F. The quest for a quantum neural network. Quant. Inf. Proc. 13, 2567–2586 (2014).


  11. Farhi, E. & Neven, H. Classification with quantum neural networks on near term processors. Quant. Rev. Lett. 1, 2 (2020).


  12. Aaronson, S. Read the fine print. Nat. Phys. 11, 291–293 (2015).


  13. Vapnik, V. The Nature of Statistical Learning Theory Vol. 8, 1–15 (Springer, 2000).

  14. Vapnik, V. N. & Chervonenkis, A. Y. On the uniform convergence of relative frequencies of events to their probabilities. Theory Probab. Appl. 16, 264–280 (1971).


  15. Sontag, E. D. Neural Networks and Machine Learning 69–95 (Springer, 1998).

  16. Vapnik, V., Levin, E. & Cun, Y. L. Measuring the VC-dimension of a learning machine. Neural Comput. 6, 851–876 (1994).


  17. Neyshabur, B., Bhojanapalli, S., McAllester, D. & Srebro, N. Exploring generalization in deep learning. In Advances in Neural Information Processing Systems 30, 5947–5956 (NIPS, 2017).

  18. Arora, S., Ge, R., Neyshabur, B. & Zhang, Y. Stronger generalization bounds for deep nets via a compression approach. In Proc. 35th International Conference on Machine Learning Vol. 80, 254–263 (PMLR, 2018); http://proceedings.mlr.press/v80/arora18b.html

  19. Wright, L. G. & McMahon, P. L. The capacity of quantum neural networks. In Conference on Lasers and Electro-Optics JM4G.5 (Optical Society of America, 2020); http://www.osapublishing.org/abstract.cfm?URI=CLEO_QELS-2020-JM4G.5

  20. Du, Y., Hsieh, M.-H., Liu, T. & Tao, D. Expressive power of parametrized quantum circuits. Phys. Rev. Res. 2, 033125 (2020).


  21. Huang, H.-Y. et al. Power of data in quantum machine learning. Nat. Commun. 12, 2631 (2021).


  22. Berezniuk, O., Figalli, A., Ghigliazza, R. & Musaelian, K. A scale-dependent notion of effective dimension. Preprint at https://arxiv.org/abs/2001.10872 (2020).

  23. Rissanen, J. J. Fisher information and stochastic complexity. IEEE Trans. Inf. Theory 42, 40–47 (1996).


  24. Cover, T. M. & Thomas, J. A. Elements of Information Theory (Wiley, 2006).

  25. Nakaji, K. & Yamamoto, N. Expressibility of the alternating layered ansatz for quantum computation. Quantum 5, 434 (2021).


  26. Holmes, Z., Sharma, K., Cerezo, M. & Coles, P. J. Connecting ansatz expressibility to gradient magnitudes and barren plateaus. Preprint at https://arxiv.org/abs/2101.02138 (2021).

  27. McClean, J. R., Boixo, S., Smelyanskiy, V. N., Babbush, R. & Neven, H. Barren plateaus in quantum neural network training landscapes. Nat. Commun. 9, 1–6 (2018).


  28. Wang, S. et al. Noise-induced barren plateaus in variational quantum algorithms. Preprint at https://arxiv.org/abs/2007.14384 (2020).

  29. Cerezo, M., Sone, A., Volkoff, T., Cincio, L. & Coles, P. J. Cost function dependent barren plateaus in shallow parametrized quantum circuits. Nat. Commun. 12, 1791 (2021).


  30. Verdon, G. et al. Learning to learn with quantum neural networks via classical neural networks. Preprint at https://arxiv.org/abs/1907.05415 (2019).

  31. Volkoff, T. & Coles, P. J. Large gradients via correlation in random parameterized quantum circuits. Quant. Sci. Technol. 6, 025008 (2021).


  32. Skolik, A., McClean, J. R., Mohseni, M., van der Smagt, P. & Leib, M. Layerwise learning for quantum neural networks. Quant. Mach. Intell. 3, 5 (2021).


  33. Huembeli, P. & Dauphin, A. Characterizing the loss landscape of variational quantum circuits. Quant. Sci. Technol. 6, 025011 (2021).


  34. Bishop, C. Exact calculation of the Hessian matrix for the multilayer perceptron. Neural Comput. 4, 494–501 (1992).

  35. LeCun, Y. A., Bottou, L., Orr, G. B. & Müller, K.-R. Efficient BackProp 9–48 (Springer, 2012); https://doi.org/10.1007/978-3-642-35289-8_3

  36. Cerezo, M. & Coles, P. J. Higher order derivatives of quantum neural networks with barren plateaus. Quant. Sci. Technol. 6, 035006 (2021).


  37. Kunstner, F., Hennig, P. & Balles, L. Limitations of the empirical Fisher approximation for natural gradient descent. In Advances in Neural Information Processing Systems 32, 4156–4167 (NIPS, 2019); http://papers.nips.cc/paper/limitations-of-fisher-approximation

  38. Karakida, R., Akaho, S. & Amari, S.-I. Universal statistics of Fisher information in deep neural networks: mean field approach. In Proc. Machine Learning Research Vol. 89, 1032–1041 (PMLR, 2019); http://proceedings.mlr.press/v89/karakida19a.html

  39. Schuld, M., Bocharov, A., Svore, K. M. & Wiebe, N. Circuit-centric quantum classifiers. Phys. Rev. A 101, 032308 (2020).


  40. Schuld, M., Sweke, R. & Meyer, J. J. Effect of data encoding on the expressive power of variational quantum-machine-learning models. Phys. Rev. A 103, 032430 (2021).


  41. Lloyd, S., Schuld, M., Ijaz, A., Izaac, J. & Killoran, N. Quantum embeddings for machine learning. Preprint at https://arxiv.org/abs/2001.03622 (2020).

  42. Cong, I., Choi, S. & Lukin, M. D. Quantum convolutional neural networks. Nat. Phys. 15, 1273–1278 (2019).


  43. Amari, S.-I. Natural gradient works efficiently in learning. Neural Comput. 10, 251–276 (1998).


  44. Liang, T., Poggio, T., Rakhlin, A. & Stokes, J. Fisher–Rao metric, geometry, and complexity of neural networks. In Proc. Machine Learning Research Vol. 89, 888–896 (PMLR, 2019); http://proceedings.mlr.press/v89/liang19a.html

  45. Neyshabur, B., Salakhutdinov, R. R. & Srebro, N. Path-SGD: path-normalized optimization in deep neural networks. In Advances in Neural Information Processing Systems 28, 2422–2430 (NIPS, 2015).

  46. Neyshabur, B., Tomioka, R. & Srebro, N. Norm-based capacity control in neural networks. In Proc. Machine Learning Research Vol. 40, 1376–1401 (PMLR, 2015); http://proceedings.mlr.press/v40/Neyshabur15.html

  47. Bartlett, P. L., Foster, D. J. & Telgarsky, M. J. Spectrally-normalized margin bounds for neural networks. In Advances in Neural Information Processing Systems 30, 6240–6249 (NIPS, 2017); http://papers.nips.cc/paper/7204-spectrally-normalized

  48. Rissanen, J. J. Fisher information and stochastic complexity. IEEE Trans. Inf. Theory 42, 40–47 (1996).


  49. Marrero, C. O., Kieferová, M. & Wiebe, N. Entanglement induced barren plateaus. Preprint at https://arxiv.org/abs/2010.15968 (2020).

  50. Havlíček, V. et al. Supervised learning with quantum-enhanced feature spaces. Nature 567, 209–212 (2019).


  51. Sim, S., Johnson, P. D. & Aspuru-Guzik, A. Expressibility and entangling capability of parameterized quantum circuits for hybrid quantum-classical algorithms. Adv. Quant. Technol. 2, 1900070 (2019).


  52. Jia, Z. & Su, H. Information-theoretic local minima characterization and regularization. In Proc. 37th International Conference on Machine Learning Vol. 119, 4773–4783 (PMLR, 2020); http://proceedings.mlr.press/v119/jia20a.html

  53. Virmaux, A. & Scaman, K. Lipschitz regularity of deep neural networks: analysis and efficient estimation. In Advances in Neural Information Processing Systems 31, 3835–3844 (NIPS, 2018); http://papers.nips.cc/paper/lipschitz-regularity-of-deep-neural-networks

  54. Sweke, R. et al. Stochastic gradient descent for hybrid quantum-classical optimization. Quantum 4, 314 (2020).


  55. Dua, D. & Graff, C. UCI Machine Learning Repository (2017); http://archive.ics.uci.edu/ml

  56. Abbas, A. et al. amyami187/effective_dimension: The Effective Dimension Code (Zenodo, 2021); https://doi.org/10.5281/zenodo.4732830


Acknowledgements

We thank M. Schuld for insightful discussions on data embedding in quantum models. We also thank T. L. Scholten for constructive feedback on the manuscript. C.Z. acknowledges support from the National Centre of Competence in Research Quantum Science and Technology (QSIT).

Author information


Contributions

The main ideas were developed by all of the authors. A.A. provided numerical simulations. D.S. and A.F. proved the technical claims. All authors contributed to the write-up.

Corresponding author

Correspondence to Stefan Woerner.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information Nature Computational Science thanks Patrick Coles and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Handling editor: Jie Pan, in collaboration with the Nature Computational Science team.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information

Supplementary Figs. 1–11, Sections 1–5 and Table 1.

Source data

Source Data Fig. 2

Unprocessed raw text data are stored in text files containing the numerical values used to generate the eigenvalue distributions for each model in the subplots of Fig. 2.

Source Data Fig. 3

The zip folder contains text data files with the numerical values of the effective dimension for each model, labeled accordingly, along with a file containing the axis values. The raw numerical data for the loss values, their averages and the standard deviation around the average loss are also stored in clearly labeled text files for each model.


About this article


Cite this article

Abbas, A., Sutter, D., Zoufal, C. et al. The power of quantum neural networks. Nat Comput Sci 1, 403–409 (2021). https://doi.org/10.1038/s43588-021-00084-1

