Quantum Lernmatrix
Figure 1. A unit is an abstract model of a biological neuron [24,29,35,36,37].
Figure 2. The Lernmatrix is composed of a set of units that represent a simple model of a real biological neuron. Each unit is composed of weights, which correspond to the synapses and dendrites of the real neuron. In this figure, they are described by $w_{ij} \in \{0,1\}$, where $1 \le i \le m$ and $1 \le j \le n$. $T$ is the threshold of the unit.
Figure 3. The vector pair $\mathbf{x}_1=(1,0,0,0,1)$ and $\mathbf{y}_1=(0,1,1,1,0)$ is learned. The corresponding binary weights of the associated pair are indicated by a black square. In the next step, the vector pair $\mathbf{x}_2=(0,1,1,0,1)$ and $\mathbf{y}_2=(1,1,0,0,1)$ is learned. The corresponding binary weights of the associated pair are indicated by a black circle.
Figure 4. The query vector $\mathbf{x}_q=(0,1,0,0,1)$ differs by one bit from the learned vector $\mathbf{x}_2=(0,1,1,0,1)$. The threshold $T$ is set to the number of "one" components in the query vector $\mathbf{x}_q$, $T=2$. The retrieved vector is the stored vector $\mathbf{y}_2=(1,1,0,0,1)$.
Figure 5. The weight matrix after learning 20,000 test patterns, in which ten ones were randomly set in a 2000-dimensional vector, represents a highly loaded matrix with equally distributed weights. This example shows that the weight matrix diagram often contains nearly no information; information has to be extracted from the structure of the weight matrix. (White represents weights.)
Figure 6. Sigmoid-like probability functions for $N=8$ are indicated by the continuous line, the linear relation by the dashed lines. The x-axis indicates the $k$ values, and the y-axis the probabilities.
Figure 7. Quantum counting circuit with $N=3$ and $k=2$.
Figure 8. $p(|0101\rangle)=0.25$ and $p(|1101\rangle)=0.75$.
Figure 9. Weight matrix represented by four units after learning the correlation of the three patterns $\mathbf{x}_1=(1,0,0,1)$; $\mathbf{y}_1=(1,0,0,1)$, $\mathbf{x}_2=(1,0,0,0)$; $\mathbf{y}_2=(0,1,0,0)$ and $\mathbf{x}_3=(0,0,1,0)$; $\mathbf{y}_3=(0,0,1,0)$. The learning is identical to the learning phase of the Lernmatrix.
Figure 10. The quantum circuit that produces the sleep phase. Qubits 0 to 3 represent the query vector, qubits 4 to 7 the associative memory, qubits 8 to 11 the count, and qubits 12 and 13 are the index qubits, while qubit 14 is the control qubit.
Figure 11. Four superposition states corresponding to the four units of the associative memory. Qubits 0 to 3 represent the query vector $\mathbf{x}_q=(1,0,0,1)$, qubits 4 to 7 the associative memory, qubits 8 to 11 the count, qubits 12 and 13 are the index qubits, and the control qubit 14 is zero. Note that the units are counted in reverse order by the index qubits: 11 for the first unit, 10 for the second unit, 01 for the third unit, and 00 for the fourth unit.
Figure 12. The quantum circuit that produces the active phase: the query and the amplification operations on the count qubits (qubits 8 to 11) and the control qubit 14.
Figure 13. Five superposition states not equal to zero. The control qubit 14 equal to one indicates the firing of the units; the measured value is $0.625$. The two probabilities $0.25$ express the perfect match and the solution $(1,0,0,1)$, indicated by the index qubits 12 and 13 with the values $(11)$ for the first unit and $(00)$ for the fourth unit. Note that the units are counted in reverse order by the index qubits: $(11)$ for the first unit, $(10)$ for the second unit, $(01)$ for the third unit, and $(00)$ for the fourth unit. The control qubit 14 equal to zero indicates the units that do not fire; the measured value is $0.375$. The probability $0.25$ with the index qubits 12 and 13 at $(01)$, for the third unit, indicates the most dissimilar pattern $(0,0,1,0)$.
Figure 14. Weight matrix represented by eight units after learning the correlation of the three patterns $\mathbf{x}_1=(1,1,0,0,0,0,1,0)$; $\mathbf{y}_1=(1,1,0,0,0,0,1,0)$, $\mathbf{x}_2=(0,1,0,1,1,0,0,0)$; $\mathbf{y}_2=(0,1,0,1,1,0,0,0)$ and $\mathbf{x}_3=(0,0,1,0,0,1,0,1)$; $\mathbf{y}_3=(0,0,1,0,0,1,0,1)$. The learning is identical to the learning phase of the Lernmatrix.
Figure 15. The quantum circuit that produces the sleep phase. Qubits 0 to 7 represent the query vector, qubits 8 to 15 the associative memory, qubits 16 to 23 the count, qubits 24, 25 and 26 are the index qubits (8 states), and qubit 27 is the control qubit.
Figure 16. The quantum circuit that produces the active phase: the query and the amplification operations on the count qubits (qubits 16 to 23) and the control qubit 27.
Figure 17. Ten superposition states not equal to zero. Qubits 24, 25 and 26 are the index qubits. Note that the units are counted in reverse order by the index qubits: 111 for the first unit, 110 for the second unit, down to 000 for the eighth unit. The control qubit 27 equal to one indicates the firing of the units. The measured value is just $0.5$; this happens because the weight matrix is relatively small and not homogeneously filled. For the query vector $\mathbf{x}_q=(1,1,0,0,0,0,0,0)$, the three values $0.125$ indicate the answer vector $(1,1,0,0,0,0,1,0)$ by the index qubits 24, 25 and 26: the first unit with the value (111), the second unit (110) and the seventh unit (001). The control qubit 27 equal to zero indicates the units that do not fire.
Figure 18. Circuit representing the application of the control qubit two times for the quantum circuit of Figure 10.
Figure 19. Seven superposition states not equal to zero. This is because the states with the former value $0.125$ were divided into two values $0.125/2=0.0625$ by the two control qubits. The first control qubit 15 equal to one indicates the firing of the units; the measured value is $0.625$. After measuring the first control qubit equal to one, the measured value of the second control qubit 14 equal to one is $0.9$. Assuming independence, the probability of measuring the two control qubits with the value one is $0.5625 = 0.625 \cdot 0.9$. As before, the two values $0.25$ indicate the perfect match and the solution $(1,0,0,1)$ with the values of the index qubits 12 and 13: $(11)$ for the first unit and $(00)$ for the fourth unit.
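The conditional value $0.9$ can be checked directly from the probabilities listed in the caption (my arithmetic, not part of the original figure): given that the first control qubit is one, $p(c_2 = 1 \mid c_1 = 1) = \frac{0.25 + 0.25 + 0.0625}{0.25 + 0.25 + 0.0625 + 0.0625} = \frac{0.5625}{0.625} = 0.9$.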
Figure 20. Assuming we have eight states indicated by the index qubits 2, 3 and 4, one marked state 010 has the count two, and the other seven states have the count one.
Figure 21. The resulting histogram of the measured qubits for one marked state with the count two and the other seven states with the count one, applying the control qubit once.
Figure 22. The resulting histogram of the measured qubits for one marked state with the count two and the other seven states with the count one, applying the control qubit two times.
Figure 23. For $n=2^{16}$, the y-axis indicates the resulting probability. (a) Circles indicate the growth of the probability of the marked state related to the number of steps of Grover's amplification, indicated by the x-axis. The triangles indicate the growth of the probability of the marked state using Trugenberger amplification, with the x-axis indicating the number $b$ of measurements, assuming the control qubits are 1. (b) Under the assumption of independence, measuring the control qubits in the sequence $b=1, b=2, b=3, \cdots, b_B$ results in a low probability, indicated by the circles. The x-axis indicates the number of measurements $b$ of the control qubits. As a consequence, we can measure the sequential control qubits two times before the task becomes intractable.
Figure 24. (a) In our example, we store three patterns, $\mathbf{x}_1=(1,0,0,1)$, $\mathbf{y}_1=(1,0,0,1)$; $\mathbf{x}_2=(1,0,0,0)$, $\mathbf{y}_2=(0,1,0,0)$ and $\mathbf{x}_3=(0,0,1,0)$, $\mathbf{y}_3=(0,0,1,0)$, and the query vector is $\mathbf{x}_q=(1,0,0,1)$. (b) The aggregation is a Boolean OR-based transform of two neighboring unit weights, resulting in a denser memory with $\mathbf{x}_q=(1,0,0,1,1,0,0,1)$.
Figure 25. Five superposition states not equal to zero. The measured probability that the control qubit equals one, indicating the firing of the units, is $0.838$; the measured probability values are $0.213$, $0.125$ and $0.25$.
Figure 26. (a) We compare the cost of $k=\log_2(n/4)$ queries to the quantum Lernmatrix (representing a weight matrix of size $n \times n$), $O(k \cdot n)$ (dashed line), to the cost of a classical Lernmatrix of size $n \times n$, $O(n^2)$. (b) We compare the cost of $k$ queries to the quantum Lernmatrix, $O(k \cdot n)$ (dashed line), to Grover's amplification algorithm on a list of $L$ vectors of dimension $n$ with cost $O(n \cdot \sqrt{L})$.
Abstract
1. Introduction
- The input (reading) problem: The amplitude distribution of a quantum state is initialized by reading $n$ data points. Although the existing quantum algorithm requires only $O(\sqrt{n})$ or $O(\log_2 n)$ steps and is faster than the classical algorithms, $n$ data points must be read. Hence, the complexity of the algorithm does not improve and is $O(n)$.
- The destruction problem: A quantum associative memory [1,2,3,4,5] for $n$ data points of dimension $m$ requires only $O(m)$ or fewer units (quantum bits). An operator, which acts as an oracle [3], indicates the solution. However, this memory can be queried only once because of the collapse during measurement (destruction); hence, quantum associative memory does not have any advantage over classical memory.
- We introduce the Lernmatrix model described by units that model neurons.
- We indicate that the Lernmatrix has a tremendous storage capacity, much higher than most other associative memories. This is valid for sparse vectors in which the ones representing the information are equally distributed.
- Quantum counting of ones based on Euler’s formula is described.
- Based on the Lernmatrix model, a quantum Lernmatrix is introduced in which units are represented in superposition and the query operation is based on quantum counting of ones. The measured result is a position of a one or zero in the answer vector.
- We analyze the Trugenberger amplification.
- Since a one in the answer vector represents information, we assume that we can reconstruct the answer vector by measuring several ones, taking for granted that the rest of the vector is zero. In a sparse code with $k$ ones, $k$ measurements of different ones reconstruct the binary answer vector. We can increase the probability of measuring a one by the introduced tree-like structure.
- The Lernmatrix can store many more patterns than the number of units. Because of this, the cost of loading $L$ patterns into quantum states is much lower than storing the patterns individually.
2. Lernmatrix
- The ability to correct faults if false information is given.
- The ability to complete information if some parts are missing.
- The ability to interpolate information. In other words, if a sub-symbol is not currently stored, the most similar stored sub-symbol is determined.
2.1. Learning and Retrieval
- Hetero-association, if both vectors are different, $\mathbf{x} \neq \mathbf{y}$,
- Association, if $\mathbf{x} = \mathbf{y}$; the answer vector represents the reconstruction of the disturbed query vector (see the sketch following this list).
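The following minimal NumPy sketch illustrates the learning and retrieval phases with the pattern pairs of Figures 3 and 4 (my illustration of the scheme, not part of the original article's code):
- import numpy as np
- # Learning: binary Hebbian rule, w_ij = 1 if y_i = 1 and x_j = 1 for some stored pair
- def learn(pairs, m, n):
-     W = np.zeros((m, n), dtype=int)
-     for x, y in pairs:
-         W |= np.outer(y, x)
-     return W
- # Retrieval: a unit fires if its weighted sum reaches the threshold T,
- # the number of "one" components in the query vector
- def retrieve(W, x):
-     T = x.sum()
-     return (W @ x >= T).astype(int)
- x1, y1 = np.array([1,0,0,0,1]), np.array([0,1,1,1,0])
- x2, y2 = np.array([0,1,1,0,1]), np.array([1,1,0,0,1])
- W = learn([(x1, y1), (x2, y2)], 5, 5)
- xq = np.array([0,1,0,0,1])  # differs from x2 by one bit
- print(retrieve(W, xq))      # [1 1 0 0 1], the stored y2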
Example
2.2. Storage Capacity
2.3. Large Matrices
3. Monte Carlo Lernmatrix
4. Qiskit Experiments
5. Quantum Counting Ones
- from qiskit import QuantumCircuit, Aer, execute
- from qiskit.visualization import plot_histogram
- from math import pi
- qc = QuantumCircuit(4)
- #Input is |101>
- qc.x(0)
- qc.x(2)
- qc.barrier()
- qc.h(3)
- #Phase -pi/6 on the control qubit 3 for each input qubit equal to one
- qc.cp(-pi/6,0,3)
- qc.cp(-pi/6,1,3)
- qc.cp(-pi/6,2,3)
- qc.x(3)
- #Conjugate phase +pi/6 on the flipped control qubit
- qc.cp(pi/6,0,3)
- qc.cp(pi/6,1,3)
- qc.cp(pi/6,2,3)
- qc.x(3)
- qc.h(3)
- simulator = Aer.get_backend('statevector_simulator')
- # Run and get counts
- result=execute(qc,simulator).result()
- counts = result.get_counts()
- plot_histogram(counts)
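The measured distribution follows from the phase kick-back of the circuit above (a short derivation of mine, using Euler's formula): each input qubit equal to one contributes phases $\mp\pi/(2N)$ to the two branches of the control qubit, so for $k$ ones the probability of measuring the control qubit in state one is
$p(1) = \left|\tfrac{1}{2}\left(e^{ik\pi/(2N)} - e^{-ik\pi/(2N)}\right)\right|^2 = \sin^2\!\left(\frac{k\pi}{2N}\right).$
For $N=3$ and $k=2$ this gives $\sin^2(\pi/3) = 0.75$, matching Figure 8.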
6. Quantum Lernmatrix
- The first unit has the count value $k=2$ and the two corresponding states are: for the control equal to one the measured probability is $0.25$, and for the control equal to zero the measured probability is 0.
- The second unit has the count value $k=1$ and the two corresponding states are: for the control equal to one the measured probability is $0.125$, and for the control equal to zero the measured probability is $0.125$.
- The third unit has the count value $k=0$ and the two corresponding states are: for the control equal to one the measured probability is 0, and for the control equal to zero the measured probability is $0.25$.
- The fourth unit has the (decimal) count value $k=2$ and the two corresponding states are: for the control equal to one the measured probability is $0.25$, and for the control equal to zero the measured probability is 0.
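Summing over the four units (each present in the superposition with probability $1/4$) reproduces the measured values of Figure 13:
$p(c=1) = 0.25 + 0.125 + 0 + 0.25 = 0.625, \qquad p(c=0) = 0 + 0.125 + 0.25 + 0 = 0.375.$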
6.1. Generalization
6.2. Example
7. Applying Trugenberger Amplification Several Times
Relation to b
8. Tree-like Structures
9. Costs
Query Cost of Quantum Lernmatrix
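A small numerical sketch of the cost comparison of Figure 26 (an illustration only; the list length $L = n$ is an assumption, and constant factors are ignored):
- from math import log2, sqrt
- # Compare the asymptotic cost terms of Figure 26 for growing n
- for n in [2**8, 2**12, 2**16]:
-     k = log2(n / 4)        # number of queries to the quantum Lernmatrix
-     quantum = k * n        # O(k * n)
-     classical = n ** 2     # O(n^2), classical Lernmatrix of size n x n
-     grover = n * sqrt(n)   # O(n * sqrt(L)), assuming a list of L = n vectors
-     print(n, quantum, classical, grover)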
10. Conclusions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A. Quantum Lernmatrix
- from qiskit import QuantumCircuit, Aer, execute
- from math import pi
- qc = QuantumCircuit(15)
- #0-3 query
- #4-7 data
- #8-11 count
- #Index Pointer
- #12-13
- #Aux
- #14
- #Sleep Phase
- #Index Pointer
- qc.h(12)
- qc.h(13)
- qc.barrier()
- #1st weights
- qc.ccx(12,13,4)
- qc.ccx(12,13,7)
- qc.barrier()
- #2nd weights
- qc.x(12)
- qc.ccx(12,13,4)
- qc.x(12)
- qc.barrier()
- #3rd weights
- qc.x(13)
- qc.ccx(12,13,6)
- qc.x(13)
- qc.barrier()
- #4th weights
- qc.x(12)
- qc.x(13)
- qc.ccx(12,13,4)
- qc.ccx(12,13,7)
- qc.x(13)
- qc.x(12)
- qc.barrier()
- #Active Phase
- #query
- qc.x(0)
- qc.x(3)
- qc.barrier()
- qc.ccx(0,4,8)
- qc.ccx(1,5,9)
- qc.ccx(2,6,10)
- qc.ccx(3,7,11)
- #Dividing
- qc.h(14)
- qc.barrier()
- #Marking
- qc.cp(-pi/4,8,14)
- qc.cp(-pi/4,9,14)
- qc.cp(-pi/4,10,14)
- qc.cp(-pi/4,11,14)
- qc.barrier()
- qc.x(14)
- qc.cp(pi/4,8,14)
- qc.cp(pi/4,9,14)
- qc.cp(pi/4,10,14)
- qc.cp(pi/4,11,14)
- qc.h(14)
- qc.draw(fold=110)
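The circuit can be simulated like the counting circuit of Section 5 (a usage sketch of mine; the statevector backend follows the earlier listing):
- simulator = Aer.get_backend('statevector_simulator')
- result = execute(qc, simulator).result()
- counts = result.get_counts()
- # Qubit 14 (the control qubit) is the leftmost bit of the 15-bit state strings
- p_fire = sum(p for state, p in counts.items() if state[0] == '1')
- print(p_fire)  # 0.625 for the query (1,0,0,1), as in Figure 13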
Appendix B. Quantum Tree-like Lernmatrix
- from qiskit import QuantumCircuit, Aer, execute
- from math import pi
- qc = QuantumCircuit(23)
- #0-3 query
- #4-7 data aggregated
- #8-11 data
- #12-19 count
- #Index Pointer
- #20-21
- #Aux
- #22
- #Sleep Phase
- #Index Pointer
- qc.h(20)
- qc.h(21)
- #1st weights
- #OR Aggregated
- qc.barrier()
- qc.ccx(20,21,4)
- qc.ccx(20,21,7)
- #Original
- qc.barrier()
- qc.ccx(20,21,8)
- qc.ccx(20,21,11)
- #2nd weights
- qc.x(20)
- #OR Aggregated
- qc.barrier()
- qc.ccx(20,21,4)
- qc.ccx(20,21,7)
- #Original
- qc.barrier()
- qc.ccx(20,21,8)
- qc.x(20)
- #3rd weights
- qc.x(21)
- #OR Aggregated
- qc.barrier()
- qc.ccx(20,21,4)
- qc.ccx(20,21,6)
- qc.ccx(20,21,7)
- #Original
- qc.barrier()
- qc.ccx(20,21,10)
- qc.x(21)
- #4th weights
- qc.x(20)
- qc.x(21)
- #OR Aggregated
- qc.barrier()
- qc.ccx(20,21,4)
- qc.ccx(20,21,6)
- qc.ccx(20,21,7)
- #Original
- qc.barrier()
- qc.ccx(20,21,8)
- qc.ccx(20,21,11)
- qc.x(21)
- qc.x(20)
- #Active Phase
- #query
- qc.barrier()
- qc.x(0)
- qc.x(3)
- qc.barrier()
- #query, counting
- #OR Aggregated
- qc.ccx(0,4,12)
- qc.ccx(1,5,13)
- qc.ccx(2,6,14)
- qc.ccx(3,7,15)
- #Original
- qc.ccx(0,8,16)
- qc.ccx(1,9,17)
- qc.ccx(2,10,18)
- qc.ccx(3,11,19)
- #Dividing
- qc.barrier()
- qc.h(22)
- #Marking
- qc.barrier()
- qc.cp(-pi/8,12,22)
- qc.cp(-pi/8,13,22)
- qc.cp(-pi/8,14,22)
- qc.cp(-pi/8,15,22)
- qc.cp(-pi/8,16,22)
- qc.cp(-pi/8,17,22)
- qc.cp(-pi/8,18,22)
- qc.cp(-pi/8,19,22)
- qc.barrier()
- qc.x(22)
- qc.cp(pi/8,12,22)
- qc.cp(pi/8,13,22)
- qc.cp(pi/8,14,22)
- qc.cp(pi/8,15,22)
- qc.cp(pi/8,16,22)
- qc.cp(pi/8,17,22)
- qc.cp(pi/8,18,22)
- qc.cp(pi/8,19,22)
- qc.barrier()
- qc.h(22)
- qc.draw()
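As in Appendix A, the probability that the units fire can be read off a statevector simulation (a usage sketch under the same assumptions):
- result = execute(qc, Aer.get_backend('statevector_simulator')).result()
- # Qubit 22 (the control qubit) is the leftmost bit of the 23-bit state strings
- p_fire = sum(p for state, p in result.get_counts().items() if state[0] == '1')
- print(p_fire)  # 0.838 for the query of Figure 25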
References
- Ventura, D.; Martinez, T. Quantum associative memory with exponential capacity. In Proceedings of the 1998 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), Anchorage, AK, USA, 4–9 May 1998; Volume 1, pp. 509–513.
- Ventura, D.; Martinez, T. Quantum associative memory. Inf. Sci. 2000, 124, 273–296.
- Tay, N.; Loo, C.; Perus, M. Face Recognition with Quantum Associative Networks Using Overcomplete Gabor Wavelet. Cogn. Comput. 2010, 2, 297–302.
- Trugenberger, C.A. Probabilistic Quantum Memories. Phys. Rev. Lett. 2001, 87, 067901.
- Trugenberger, C.A. Quantum Pattern Recognition. Quantum Inf. Process. 2003, 1, 471–493.
- Schuld, M.; Petruccione, F. Supervised Learning with Quantum Computers; Springer: Berlin/Heidelberg, Germany, 2018.
- Grover, L.K. A fast quantum mechanical algorithm for database search. In STOC '96: Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, Philadelphia, PA, USA, 22–24 May 1996; ACM: New York, NY, USA, 1996; pp. 212–219.
- Grover, L.K. Quantum Mechanics helps in searching for a needle in a haystack. Phys. Rev. Lett. 1997, 79, 325.
- Grover, L.K. A framework for fast quantum mechanical algorithms. In STOC '98: Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing, Dallas, TX, USA, 24–26 May 1998; ACM: New York, NY, USA, 1998; pp. 53–62.
- Grover, L.K. Quantum Computers Can Search Rapidly by Using Almost Any Transformation. Phys. Rev. Lett. 1998, 80, 4329–4332.
- Aïmeur, E.; Brassard, G.; Gambs, S. Quantum speed-up for unsupervised learning. Mach. Learn. 2013, 90, 261–287.
- Wittek, P. Quantum Machine Learning: What Quantum Computing Means to Data Mining; Elsevier Insights; Academic Press: Cambridge, MA, USA, 2014.
- Aaronson, S. Quantum Machine Learning Algorithms: Read the Fine Print. Nat. Phys. 2015, 11, 291–293.
- Diamantini, M.C.; Trugenberger, C.A. Mirror modular cloning and fast quantum associative retrieval. arXiv 2022, arXiv:2206.01644.
- Brun, T.; Klauck, H.; Nayak, A.; Rotteler, M.; Zalka, C. Comment on "Probabilistic Quantum Memories". Phys. Rev. Lett. 2003, 91, 209801.
- Harrow, A.; Hassidim, A.; Lloyd, S. Quantum algorithm for solving linear systems of equations. Phys. Rev. Lett. 2009, 103, 150502.
- Schuld, M.; Killoran, N. Quantum Machine Learning in Feature Hilbert Spaces. Phys. Rev. Lett. 2019, 122, 040504.
- Palm, G. Neural Assemblies, an Alternative Approach to Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 1982.
- Hecht-Nielsen, R. Neurocomputing; Addison-Wesley: Reading, MA, USA, 1989.
- Steinbuch, K. Die Lernmatrix. Kybernetik 1961, 1, 36–45.
- Steinbuch, K. Automat und Mensch, 4th ed.; Springer: Berlin/Heidelberg, Germany, 1971.
- Willshaw, D.; Buneman, O.; Longuet-Higgins, H. Non-holographic associative memory. Nature 1969, 222, 960–962.
- Qiskit Contributors. Qiskit: An Open-source Framework for Quantum Computing. Zenodo 2023.
- Palm, G. Assoziatives Gedächtnis und Gehirntheorie. In Gehirn und Kognition; Spektrum der Wissenschaft: Heidelberg, Germany, 1990; pp. 164–174.
- Churchland, P.S.; Sejnowski, T.J. The Computational Brain; The MIT Press: Cambridge, MA, USA, 1994.
- Fuster, J. Memory in the Cerebral Cortex; The MIT Press: Cambridge, MA, USA, 1995.
- Squire, L.R.; Kandel, E.R. Memory: From Mind to Molecules; Scientific American Library: New York, NY, USA, 1999.
- Kohonen, T. Self-Organization and Associative Memory, 3rd ed.; Springer: Berlin/Heidelberg, Germany, 1989.
- Hertz, J.; Krogh, A.; Palmer, R.G. Introduction to the Theory of Neural Computation; Addison-Wesley: Reading, MA, USA, 1991.
- Anderson, J.R. Cognitive Psychology and Its Implications, 4th ed.; W. H. Freeman and Company: New York, NY, USA, 1995.
- Amari, S. Learning Patterns and Pattern Sequences by Self-Organizing Nets of Threshold Elements. IEEE Trans. Comput. 1972, 100, 1197–1206.
- Anderson, J.A. An Introduction to Neural Networks; The MIT Press: Cambridge, MA, USA, 1995.
- Ballard, D.H. An Introduction to Natural Computation; The MIT Press: Cambridge, MA, USA, 1997.
- Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 1982, 79, 2554–2558.
- McClelland, J.; Kawamoto, A. Mechanisms of Sentence Processing: Assigning Roles to Constituents of Sentences. In Parallel Distributed Processing; McClelland, J., Rumelhart, D., Eds.; The MIT Press: Cambridge, MA, USA, 1986; pp. 272–325.
- OFTA. Les Réseaux de Neurones; Masson: Paris, France, 1991.
- Schwenker, F. Künstliche Neuronale Netze: Ein Überblick über die theoretischen Grundlagen. In Finanzmarktanalyse und -Prognose mit Innovativen und Quantitativen Verfahren; Bol, G., Nakhaeizadeh, G., Vollmer, K., Eds.; Physica-Verlag: Heidelberg, Germany, 1996; pp. 1–14.
- Sommer, F.T. Theorie Neuronaler Assoziativspeicher. Ph.D. Thesis, Heinrich-Heine-Universität Düsseldorf, Düsseldorf, Germany, 1993.
- Wickelgren, W.A. Context-Sensitive Coding, Associative Memory, and Serial Order in (Speech) Behavior. Psychol. Rev. 1969, 76, 1–15.
- Sa-Couto, L.; Wichert, A. "What-Where" sparse distributed invariant representations of visual patterns. Neural Comput. Appl. 2022, 34, 6207–6214.
- Sa-Couto, L.; Wichert, A. Competitive learning to generate sparse representations for associative memory. arXiv 2023, arXiv:2301.02196.
- Marcinowski, M. Codierungsprobleme beim Assoziativen Speichern. Master's Thesis, Fakultät für Physik der Eberhard-Karls-Universität Tübingen, Tübingen, Germany, 1987.
- Freeman, J.A. Simulating Neural Networks with Mathematica; Addison-Wesley: Reading, MA, USA, 1994.
- Sacramento, J.; Wichert, A. Tree-like hierarchical associative memory structures. Neural Netw. 2011, 24, 143–147.
- Sacramento, J.; Burnay, F.; Wichert, A. Regarding the temporal requirements of a hierarchical Willshaw network. Neural Netw. 2012, 25, 84–93.
Wichert, A. Quantum Lernmatrix. Entropy 2023, 25, 871. https://doi.org/10.3390/e25060871