Information Processing Capacity of Spin-Based Quantum Reservoir Computing Systems

Published in Cognitive Computation

Abstract

The dynamical behavior of complex quantum systems can be harnessed for information processing. With this aim, quantum reservoir computing (QRC) with Ising spin networks was recently introduced as a quantum version of classical reservoir computing. In turn, reservoir computing is a neuro-inspired machine learning technique that consists of exploiting dynamical systems to solve nonlinear and temporal tasks. We characterize the performance of the spin-based QRC model with the Information Processing Capacity (IPC), which allows us to quantify the computational capabilities of a dynamical system beyond specific tasks. We address the influence on the IPC of the input injection frequency, of time multiplexing, and of different measured observables, encompassing both local spin measurements and correlations. We find conditions for optimal input driving and provide alternatives for the choice of the output variables used for the readout. This work establishes a clear picture of the computational capabilities of a quantum network of spins for reservoir computing. Our results pave the way for future research on QRC from both the theoretical and experimental points of view.





Funding

This study was financially supported by the Spanish State Research Agency, through the Severo Ochoa and María de Maeztu Program for Centers and Units of Excellence in R&D (MDM-2017-0711), CSIC extension to QuaResC (PID2019-109094GB), and CSIC Quantum Technologies PTI-001. The work of RMP and MCS has been supported by MICINN, AEI, FEDER and the University of the Balearic Islands through a predoctoral fellowship (MDM-2017-0711-18-1), and a “Ramon y Cajal” Fellowship (RYC-2015-18140), respectively. GLG acknowledges funding from the CAIB postdoctoral program.

Author information


Corresponding author

Correspondence to R. Martínez-Peña.

Ethics declarations

Conflict of Interest

The authors declare that they have no conflict of interest.

Ethical Approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article belongs to the Topical Collection: Trends in Reservoir Computing

Guest Editors: Claudio Gallicchio, Alessio Micheli, Simone Scardapane, Miguel C. Soriano

Appendices

Appendix 1

Here, we motivate the choice of the parameters h and Js of the model presented in Eq. (5). Our choice is based on the numerical results shown in Fig. 8, in which the IPC is computed for different values of h and Js. The other relevant system parameters are set to Δt = 10 and N = 5, which have been our benchmark in the simulations, in line with the results presented in the main text. We have explored four orders of magnitude for both h and Js to observe the evolution of the normalized capacity together with the distribution of the linear and nonlinear contributions to the IPC. For small values of h, the total capacity does not saturate and the main contribution is the linear memory, which becomes the only contribution for small values of Js. For larger values of either h or Js, the profile of memory capacities shifts towards a larger presence of nonlinear contributions, while the total capacity remains at its bound. We therefore choose h = 1 and Js = 1 as our benchmark (with Δt = 10 and N = 5): it is a well-established operating point, with a saturated total capacity and a substantial presence of nonlinear contributions.

Fig. 8

IPC versus (a) natural frequency of the spins h and (b) coupling strength Js. The parameters in (a) are Δt = 10, Js = 1 and N = 5, while in (b) we have used Δt = 10, h = 1 and N = 5. Notice that the normalization factor in this plot is N. The error bars of the plots correspond to the standard deviation over 10 realizations
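For concreteness, the following is a minimal numerical sketch of the kind of reservoir simulation that underlies such a parameter sweep. It is a sketch under stated assumptions, not the paper's implementation: Eq. (5) is not reproduced in this excerpt, so we assume a transverse-field Ising model with random couplings J_ij drawn uniformly from [−Js/2, Js/2], as is common in spin-based QRC, with the input s_k ∈ [0, 1] injected by resetting the first spin; the paper's exact conventions, as well as time multiplexing, may differ or are omitted here.

```python
# Hedged sketch of a spin-based quantum reservoir (assumed model, not Eq. (5)).
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on(site_op, site, n):
    """Embed a single-qubit operator at position `site` in an n-qubit space."""
    ops = [np.eye(2, dtype=complex)] * n
    ops[site] = site_op
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def ising_hamiltonian(n, h, Js, rng):
    """Assumed H = sum_{i<j} J_ij sx_i sx_j + h sum_i sz_i, J_ij ~ U[-Js/2, Js/2]."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n):
        H += h * op_on(sz, i, n)
        for j in range(i + 1, n):
            H += rng.uniform(-Js / 2, Js / 2) * op_on(sx, i, n) @ op_on(sx, j, n)
    return H

def run_reservoir(inputs, n=5, h=1.0, Js=1.0, dt=10.0, seed=0):
    """Drive the reservoir with inputs in [0, 1]; return <sz_i> at each step.
    Time multiplexing (several measurements within dt) is omitted for brevity."""
    rng = np.random.default_rng(seed)
    U = expm(-1j * dt * ising_hamiltonian(n, h, Js, rng))
    rho = np.eye(2**n, dtype=complex) / 2**n          # maximally mixed start
    obs = [op_on(sz, i, n) for i in range(n)]
    X = np.empty((len(inputs), n))
    for k, s in enumerate(inputs):
        # Input injection: replace the first spin by |psi_s><psi_s|,
        # |psi_s> = sqrt(1-s)|0> + sqrt(s)|1>, tracing out that spin first.
        psi = np.array([np.sqrt(1 - s), np.sqrt(s)], dtype=complex)
        rho_rest = np.trace(rho.reshape(2, 2**(n - 1), 2, 2**(n - 1)),
                            axis1=0, axis2=2)
        rho = np.kron(np.outer(psi, psi.conj()), rho_rest)
        rho = U @ rho @ U.conj().T                    # unitary evolution for dt
        X[k] = [np.real(np.trace(o @ rho)) for o in obs]
    return X
```

Sweeping h and Js over a logarithmic grid and feeding the recorded observables into a capacity estimator such as the one sketched in Appendix 2 would reproduce the type of scan shown in Fig. 8.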

Appendix 2

In this appendix, we explain in more detail how the bars of the IPC are computed. Contributions to the IPC are usually shown according to the degree of the polynomial we want to reproduce. For each degree, we need to sum up the contributions coming from different delays. By delay we mean how far in the past the input still influences the system. This influence is represented in Eq. (11) by taking the inputs \(s_{k-i}\), where i is the delay with respect to the present time k.

In the main text, we have only shown the sum of the capacities over the delays. To deepen our characterization, we include here an illustration of the role of the delay in the reproduction of polynomials of degree 1, i.e., linear memory. The name linear memory comes from the fact that we are computing the capacity of reproducing, or “remembering”, targets of the form \(\bar{y}_{k}=s_{k-i}\). Figure 9 represents the bare capacity of Eq. (10) with respect to the delay i for such a linear memory.

Fig. 9

Linear memory of the spin-based QRC for parameters Δt = 10, h = 1, Js = 1 and N = 5 as a function of the delay in the input
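As an illustration of how such a curve can be obtained, the sketch below trains a linear readout by least squares and evaluates the capacity of reproducing the delayed input. We assume the bare capacity of Eq. (10) is the standard normalized measure C = 1 − MSE/Var(y) of the IPC framework; since Eq. (10) is not reproduced in this excerpt, the normalization may differ, and `max_delay` and `washout` are illustrative choices.

```python
# Hedged sketch of per-delay linear-memory capacities from reservoir data.
import numpy as np

def capacity(X, y):
    """C = 1 - MSE/Var(y) for the least-squares readout on features X."""
    Xb = np.hstack([X, np.ones((len(X), 1))])      # add a bias column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    residual = y - Xb @ w
    return 1.0 - residual.var() / y.var()

def linear_memory_curve(X, s, max_delay=100, washout=200):
    """Capacity of reproducing y_k = s_{k-i} for delays i = 1..max_delay.
    X: observables matrix (one row per step), s: the input sequence."""
    assert washout > max_delay, "washout must exceed the largest delay"
    C = np.empty(max_delay)
    for i in range(1, max_delay + 1):
        y = s[washout - i: len(s) - i]             # delayed input as target
        C[i - 1] = capacity(X[washout:], y)
    return C
```

In practice, the capacity should be evaluated on held-out data or corrected for finite-sample bias, so that spurious contributions do not inflate the curve.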

The area under the curve of Fig. 9 is what we have plotted as the IPC of degree d = 1 in the bar plots throughout this work. In some cases, e.g., Fig. 8(b) (Js = 0.01), the influence of the input extends to delays longer than 100 past inputs, and care must be taken not to disregard small non-vanishing contributions in the computation of the IPC. Representing the linear memory in this way is straightforward, but nonlinear contributions are harder to untangle visually. Representing second- and third-order contributions is still possible with 2D and 3D heatmaps of the capacities with respect to the multiple delays. However, such visual representations of the memory as a function of the delay reach a limit for nonlinearities of degree d ≥ 4. Therefore, although the summation over delay contributions hides some information about how the memory is distributed across delays, it provides a compact and readable representation.
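To make this aggregation concrete, here is a hedged sketch of how a degree-2 contribution can be built and summed over delay pairs. We assume the Legendre-polynomial basis commonly used in the IPC framework for inputs rescaled to [−1, 1], and an illustrative threshold θ for discarding spurious finite-size contributions; neither choice is taken from the excerpt above, and `capacity` is the helper defined in the previous sketch.

```python
# Hedged sketch: one degree-2 bar of the IPC plots (assumed Legendre basis).
import numpy as np

def degree2_capacity(X, s, i, j, washout=200):
    """Capacity for a degree-2 Legendre target at delays (i, j)."""
    u = 2.0 * s - 1.0                              # rescale inputs to [-1, 1]
    ui = u[washout - i: len(u) - i]
    if i == j:
        y = 0.5 * (3.0 * ui**2 - 1.0)              # P2 at a single delay
    else:
        uj = u[washout - j: len(u) - j]
        y = ui * uj                                # P1 * P1 at two delays
    return capacity(X[washout:], y)                # readout from previous sketch

def ipc_degree2(X, s, max_delay=20, theta=1e-3, washout=200):
    """Sum all degree-2 contributions above an (illustrative) threshold theta."""
    total = 0.0
    for i in range(1, max_delay + 1):
        for j in range(i, max_delay + 1):
            c = degree2_capacity(X, s, i, j, washout)
            if c > theta:
                total += c
    return total
```

The degree-1 bar is obtained analogously by summing the thresholded values of the linear-memory curve over all delays.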


About this article


Cite this article

Martínez-Peña, R., Nokkala, J., Giorgi, G.L. et al. Information Processing Capacity of Spin-Based Quantum Reservoir Computing Systems. Cogn Comput 15, 1440–1451 (2023). https://doi.org/10.1007/s12559-020-09772-y

