Binary Classification Quantum Neural Network Model Based on Optimized Grover Algorithm
Figure 1. Basic principles of the quantum classifier. Each $x_i$ is randomly generated from the $R^{N \times M}$ matrix and, together with a label $y_i$ that can only be 0 or 1, forms one of the $N$ data pairs that make up the dataset $T$. Following the classical backpropagation rule $f_{\theta}(\cdot)$, $x_i$ is taken as the input to this rule. The qubits obtain the input–output relationship from the data pairs $(x_i, y_i)$ in the constructed dataset $T$ and learn the relationship contained in the training set. In other words, the purpose of the quantum neural network is to learn the rule $f_{\theta}(\cdot)$ with $x_i$ as its input.
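As a concrete illustration of this setup, the following is a minimal NumPy sketch of generating such a dataset $T$; the function name make_dataset is illustrative, and the random labels are only a placeholder, since the caption does not state how $y_i$ is assigned.

```python
import numpy as np

def make_dataset(n_samples: int, n_features: int, seed: int = 0):
    """Build N data pairs (x_i, y_i): each x_i is a row of a random
    N x M matrix, and each y_i is a binary label in {0, 1}."""
    rng = np.random.default_rng(seed)
    X = rng.random((n_samples, n_features))   # the R^{N x M} matrix
    y = rng.integers(0, 2, size=n_samples)    # placeholder labels y_i
    return list(zip(X, y))                    # the dataset T of N pairs

T = make_dataset(n_samples=16, n_features=4)
```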
Figure 2. The paradigm of BQM. (A) The first $K-1$ loops use $U$, defined in Equation (12), which consists of the unitary operators $U_{data}$, $U_c$, $H$, and $Q_i$. (B) The last cycle uses the unitary operation $U_E$ defined in Equation (13). The qubits that interact with $U_c$ and $Q_i$ form the feature register and the data register.
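Purely as a structural sketch of this paradigm (the concrete circuits $U_{data}$, $U_c$, $Q_i$, and $U_E$ are defined by Equations (12) and (13) in the paper and are passed in here as placeholder subcircuits; the register touched by the Hadamard layer is also an assumption), the $K-1$ cycles of panel (A) followed by the final cycle of panel (B) could be composed as follows.

```python
from qiskit import QuantumCircuit

def bqm_paradigm(u_data, u_c, q_i, u_e, num_qubits: int, K: int):
    """Compose K-1 cycles of U = (U_data, U_c, H, Q_i), then the final
    cycle U_E. All circuit arguments are placeholder QuantumCircuits."""
    qc = QuantumCircuit(num_qubits)
    for _ in range(K - 1):
        qc.compose(u_data, inplace=True)   # data-encoding unitary
        qc.compose(u_c, inplace=True)      # trainable unitary
        qc.h(range(num_qubits))            # Hadamard layer (register assumed)
        qc.compose(q_i, inplace=True)      # Grover-style operator Q_i
    qc.compose(u_e, inplace=True)          # last cycle uses U_E
    return qc
```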
Figure 3. The realization of the $l^{th}$ layer $U(\theta^c)$. It is assumed that the $l^{th}$ layer $U(\theta^c)$ interacts with $M$ qubits. Three trainable parameterized gates $R_Z$, $R_Y$, and $R_Z$ are first applied to each qubit, followed by the $M-1$ CNOT gates.
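A minimal Qiskit sketch of one such layer, assuming each rotation carries its own trainable angle and the $M-1$ CNOT gates form a nearest-neighbour chain (the entangling pattern is read off the figure, so treat it as an assumption):

```python
from qiskit import QuantumCircuit
from qiskit.circuit import ParameterVector

def variational_layer(num_qubits: int, layer_index: int) -> QuantumCircuit:
    """One layer of U(theta^c): RZ-RY-RZ on every qubit, then M-1 CNOTs."""
    theta = ParameterVector(f"theta_{layer_index}", 3 * num_qubits)
    qc = QuantumCircuit(num_qubits)
    for q in range(num_qubits):
        qc.rz(theta[3 * q], q)
        qc.ry(theta[3 * q + 1], q)
        qc.rz(theta[3 * q + 2], q)
    for q in range(num_qubits - 1):        # the M - 1 CNOT gates
        qc.cx(q, q + 1)
    return qc
```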
Figure 4. Circuit implementation of BQM prediction. The same encoding $g(\cdot)$ as in the training process is used: it prepares the state $|g(x)\rangle$, and the trained variational subcircuit $U_c$ is then applied to $|\tilde{\chi}\rangle$.
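A hedged sketch of this prediction step: here encode stands in for the preparation of $|g(x)\rangle$, u_c_trained is the variational subcircuit with its optimized parameters already bound, and the label is obtained by thresholding the probability of reading $|1\rangle$ on a single readout qubit; the actual readout rule of BQM may differ.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

def predict(encode: QuantumCircuit, u_c_trained: QuantumCircuit,
            readout_qubit: int = 0) -> int:
    """Prepare |g(x)>, apply the trained U_c, and threshold the
    probability of measuring |1> on the readout qubit."""
    qc = encode.compose(u_c_trained)
    probs = Statevector.from_instruction(qc).probabilities([readout_qubit])
    return int(probs[1] > 0.5)
```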
Figure 5. Implementation of BQM in numerical simulation. (a) The circuit implementation of the encoding unitary $U_{data}$ corresponding to the feature map $f(x_i)$. (b) The realization of the quantum operation $f(x_i)$. (c) The implementation of BQM, given the input $T_k = \{x_i, x_j, x_m, x_n\}$.
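The figure only identifies $U_{data}$ as the circuit realizing the feature map $f(x_i)$; as an assumption, the sketch below uses plain angle encoding (one $R_Y$ rotation per feature), a common way to build such an encoding unitary, which is not necessarily the construction used in panel (a).

```python
from qiskit import QuantumCircuit

def u_data(x) -> QuantumCircuit:
    """Assumed angle-encoding feature map: one RY(x_k) per feature x_k."""
    qc = QuantumCircuit(len(x))
    for k, value in enumerate(x):
        qc.ry(float(value), k)
    return qc
```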
Figure 6. Dataset evolution. Yellow cubes represent data points in pairs with $y = 1$, blue cubes represent data points in pairs with $y = 0$, and the red circle marks the $K^{th}$ data point. (a) The traditional dataset $T$, as defined in Equation (1). (b) When the $K^{th}$ label equals 1 (the red point labeled A), the $K^{th}$ point is combined, according to the generation formula, with the first $K-1$ points labeled $y = 0$, giving the resulting dataset. (c) When the $K^{th}$ label equals 0 (the red point labeled B), the $K^{th}$ point is combined with the first $K-1$ points labeled $y = 1$, giving the resulting dataset.
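The generation rule described in panels (b) and (c) can be written down directly; the following small Python sketch assumes the dataset is a list of (x, y) pairs, with the function name evolve_dataset being illustrative.

```python
def evolve_dataset(T, K):
    """Group the K-th data point with the first K-1 points that carry
    the opposite label, as in panels (b) and (c)."""
    x_K, y_K = T[K - 1]                    # the K-th data pair (1-indexed)
    opposite = 0 if y_K == 1 else 1
    subset = [(x, y) for (x, y) in T[:K - 1] if y == opposite]
    return subset + [(x_K, y_K)]
```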
Figure 7. The performance of different quantum classifiers at different depolarization rates ($P = 0.1, 0.3$). Depolarizing noise models extracted from quantum hardware are applied to the trainable unitary $U_c(\theta)$ of the three classifiers. The labels 'BQM', 'BCE', and 'MSE' refer to the proposed Grover-based quantum classifier, the quantum kernel classifier with binary cross-entropy loss, and the quantum kernel classifier with mean-square-error loss, respectively. (a,b) show the variation of the training and test accuracies of BQM and the BCE-loss quantum kernel classifier at $P = 0.1$. (c,d) show the same quantities at $P = 0.3$. Vertical bars reflect the variance of the training and test accuracy at each iteration.
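A sketch of how a depolarizing noise model at rate $P$ could be attached to the gates of $U_c(\theta)$ in a Qiskit Aer simulation; the choice of instruction names and of separate one- and two-qubit channels is an assumption, not a detail taken from the paper.

```python
from qiskit_aer import AerSimulator
from qiskit_aer.noise import NoiseModel, depolarizing_error

def noisy_simulator(p: float) -> AerSimulator:
    """Depolarizing noise at rate p on the single-qubit rotations and
    on the CNOTs that make up U_c(theta)."""
    noise = NoiseModel()
    noise.add_all_qubit_quantum_error(depolarizing_error(p, 1), ["rz", "ry"])
    noise.add_all_qubit_quantum_error(depolarizing_error(p, 2), ["cx"])
    return AerSimulator(noise_model=noise)

sim_low, sim_high = noisy_simulator(0.1), noisy_simulator(0.3)   # P = 0.1, 0.3
```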
Figure 8. The iterative process of the BQM algorithm. After the encoded data enter the variational subcircuit of the quantum classifier, one application of the measurement operator is counted as one query.
Abstract
1. Introduction
2. Basic Concepts
2.1. Supervised Learning Classification of QNN
2.2. Younes’ Algorithm
3. The Binary QNN Model
3.1. Preprocessing Stage of the Binary Classification Task
3.2. The Training Process of the Learning Plan
- 1. If the sample pairs before the $K^{th}$ one analogously connect with the label , the expected state is ;
- 2. If the sample pairs before the $K^{th}$ one analogously connect with the label , the expected state is .
3.3. The Evolution of the Quantum State
- After interaction with the unitary , using the input state of Equation (10), this state can be converted to . For all computational basis states in , this means that the quantum operation does not change the state. When we measure the indicator register of the output state, the samples over the computational basis states $i$ follow the distribution .
- After interaction with the unitary , using the input state of Equation (11), this state can be converted to . Mathematically, the resulting state is generated after the interaction with $H$. After the first cycle, the generated state is .
3.4. The Loss Function
3.5. Gradient-Based Parameter Optimization
3.6. Circuit Implementation of Label Prediction
3.7. Synthetic Construction of Datasets
3.8. The Details of BQM
4. Results
4.1. Dataset Evolution
- 1. Represents the original dataset, when the $K^{th}$ point appears as the 'A' cube or when it appears as the 'B' cube in the data pair ;
- 2.
- 3.
4.2. Stability and Convergence Analysis
4.3. Performance Analysis under Depolarization Noise
4.4. Complexity Comparison Analysis
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
Algorithm’s Name |  |  |  |
---|---|---|---|---
BCE |  |  |  |
MSE |  |  |  |
BQM |  |  |  |
Algorithm’s Name | Query Complexity |
---|---|
Grover | |
Younes’ algorithm | |
BCE | |
MSE | |
BQM |