Entropy, Volume 24, Issue 1 (January 2022) – 138 articles

Cover Story (view full-size image): Line-entropy is a nonlinear metric from recurrence quantification analysis used to gauge the level of system complexity. For periodic (simple) systems, long recurrent diagonal lines span from border to border and are truncated to unique lengths, so entropy values are high (high complexity). When the triangular recurrence area is restricted (masked) to a tilted box (red/pink), long lines are truncated at the border to identical lengths, so entropy values are low (low complexity). For real-world systems, however, noise prevents long lines from forming and brings the two entropy values closer together. Thus, entropy values computed from Dow-Jones scores (green) track fairly closely for triangular areas (blue) versus boxed areas (red). View this paper
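The line-entropy computation described in the cover story (build a recurrence matrix, collect the diagonal line lengths, and take the Shannon entropy of their distribution) can be sketched in a few lines of Python. This follows standard RQA definitions; the threshold and minimum line length are illustrative choices, not values from the paper:

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence matrix: R[i, j] = 1 when |x[i] - x[j]| < eps."""
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).astype(int)

def diagonal_line_lengths(R, lmin=2):
    """Lengths of all diagonal lines of recurrent points (length >= lmin)."""
    n = R.shape[0]
    lengths = []
    for k in range(-(n - 1), n):       # every diagonal of the matrix
        diag = np.diagonal(R, offset=k)
        run = 0
        for v in list(diag) + [0]:     # sentinel 0 flushes the last run
            if v:
                run += 1
            elif run:
                if run >= lmin:
                    lengths.append(run)
                run = 0
    return lengths

def line_entropy(lengths):
    """Shannon entropy of the diagonal-line length distribution."""
    vals, counts = np.unique(lengths, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))
```

A strictly periodic signal produces long border-to-border diagonals of many distinct (truncated) lengths, which is what drives the entropy up in the triangular case discussed above.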
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
17 pages, 16249 KiB  
Article
Modelling of the Electrical Membrane Potential for Concentration Polarization Conditions
by Kornelia M. Batko, Izabella Ślęzak-Prochazka, Andrzej Ślęzak, Wioletta M. Bajdur and Radomir Ščurek
Entropy 2022, 24(1), 138; https://doi.org/10.3390/e24010138 - 17 Jan 2022
Cited by 1 | Viewed by 2224
Abstract
Based on Kedem–Katchalsky formalism, the model equation of the membrane potential (Δψs) generated in a membrane system was derived for the conditions of concentration polarization. In this system, a horizontally oriented electro-neutral biomembrane separates solutions of the same electrolytes at different concentrations. The consequence of concentration polarization is the creation, on both sides of the membrane, of concentration boundary layers. The basic equation of this model includes the unknown ratio of solution concentrations (Ci/Ce) at the membrane/concentration boundary layers. We present the procedure for calculating Ci/Ce based on novel equations, derived in the paper, that contain the transport parameters of the membrane (Lp, σ, and ω), solutions (ρ, ν), concentration boundary layer thicknesses (δl, δh), concentration Rayleigh number (RC), concentration polarization factor (ζs), volume flux (Jv), mechanical pressure difference (ΔP), and ratio of known solution concentrations (Ch/Cl). From the resulting equation, Δψs was calculated for various combinations of the solution concentration ratio (Ch/Cl), the concentration Rayleigh number (RC), the concentration polarization factor (ζs), and the hydrostatic pressure difference (ΔP). Calculations were performed for a case where an aqueous NaCl solution with a fixed concentration of 1 mol m−3 (Cl) was on one side of the membrane and an aqueous NaCl solution with a concentration between 1 and 15 mol m−3 (Ch) was on the other side. It is shown that Δψs depends on the value of one of the factors (i.e., ΔP, Ch/Cl, RC, and ζs) at fixed values of the other three. Full article
(This article belongs to the Section Thermodynamics)
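As background to the transport parameters listed in the abstract, a minimal sketch of a Kedem–Katchalsky-style volume flux, Jv = Lp(ΔP − σΔπ), with the polarization factor ζs scaling the effective osmotic driving force, might look as follows. This is a simplified illustration, not the paper's model equation for Δψs; the van 't Hoff relation and all numeric values here are illustrative assumptions:

```python
# Illustrative Kedem-Katchalsky volume flux under concentration polarization.
# The factor zeta_s (0 < zeta_s <= 1) is used here to model the reduction of
# the effective osmotic driving force by the concentration boundary layers.
# All parameter values below are made-up examples, not values from the paper.
R = 8.314      # gas constant, J mol^-1 K^-1
T = 295.0      # temperature, K

def volume_flux(Lp, sigma, zeta_s, dP, C_h, C_l):
    """Jv = Lp * (dP - sigma * zeta_s * R*T*(C_h - C_l))  [m s^-1].

    Uses a simple van 't Hoff osmotic pressure (no dissociation factor),
    which is a deliberate simplification for illustration.
    """
    d_pi = R * T * (C_h - C_l)          # osmotic pressure difference, Pa
    return Lp * (dP - sigma * zeta_s * d_pi)

# Example: C_l = 1 mol m^-3 and C_h = 15 mol m^-3, the concentration range
# used in the paper; Lp, sigma, zeta_s, dP are illustrative.
Jv = volume_flux(Lp=5e-12, sigma=0.7, zeta_s=0.22, dP=-100e3,
                 C_h=15.0, C_l=1.0)
```

With ζs near 1 (no polarization) the full osmotic difference acts on the membrane; smaller ζs, as in the boundary-layer regime studied here, weakens it.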
Figure 1
<p>Membrane system (M—membrane; <math display="inline"><semantics> <mrow> <msub> <mi>l</mi> <mi>l</mi> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>l</mi> <mi>h</mi> </msub> </mrow> </semantics></math>—concentration boundary layers; <math display="inline"><semantics> <mrow> <msub> <mi>C</mi> <mi>l</mi> </msub> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>C</mi> <mi>e</mi> </msub> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>C</mi> <mi>i</mi> </msub> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <msub> <mi>C</mi> <mi>h</mi> </msub> </mrow> </semantics></math>—solution concentrations; <math display="inline"><semantics> <mrow> <msub> <mi>J</mi> <mi>v</mi> </msub> </mrow> </semantics></math>—volume flux; <span class="html-italic">J<sub>l</sub></span>, <span class="html-italic">J<sub>m</sub></span>, and <span class="html-italic">J<sub>h</sub></span>—solute fluxes; <span class="html-italic">I<sub>l</sub></span>, <span class="html-italic">I<sub>s</sub></span>, <span class="html-italic">I<sub>m</sub></span>, and <span class="html-italic">I<sub>h</sub></span>—ionic currents; <span class="html-italic">δ<sub>l</sub></span> and <span class="html-italic">δ<sub>h</sub></span>—concentration boundary layer thicknesses; <span class="html-italic">δ<sub>m</sub></span>—membrane thickness.</p>
Figure 2
<p>The families of the characteristics <math display="inline"><semantics> <mrow> <mo>Δ</mo> <msub> <mi>ψ</mi> <mi>s</mi> </msub> <mo>=</mo> <mi>f</mi> <mo stretchy="false">(</mo> <msub> <mi>C</mi> <mi>h</mi> </msub> <mo>/</mo> <msub> <mi>C</mi> <mi>l</mi> </msub> <mo stretchy="false">)</mo> </mrow> </semantics></math> for different fixed values of <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>P</mi> </mrow> </semantics></math>,<math display="inline"><semantics> <mrow> <mo> </mo> <msub> <mi>ζ</mi> <mi>s</mi> </msub> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mi>C</mi> </msub> </mrow> </semantics></math>. (<b>a</b>) characteristics of <math display="inline"><semantics> <mrow> <mo>Δ</mo> <msub> <mi>ψ</mi> <mi>s</mi> </msub> <mo>=</mo> <mi>f</mi> <mo stretchy="false">(</mo> <msub> <mi>C</mi> <mi>h</mi> </msub> <mo>/</mo> <msub> <mi>C</mi> <mi>l</mi> </msub> <mo stretchy="false">)</mo> </mrow> </semantics></math> for the same <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>P</mi> </mrow> </semantics></math> = −100 kPa value and <math display="inline"><semantics> <mrow> <msub> <mi>ζ</mi> <mi>s</mi> </msub> </mrow> </semantics></math> = 0.22 of different set values of <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mi>C</mi> </msub> </mrow> </semantics></math>. 
(<b>b</b>) characteristics of <math display="inline"><semantics> <mrow> <mo>Δ</mo> <msub> <mi>ψ</mi> <mi>s</mi> </msub> <mo>=</mo> <mi>f</mi> <mo stretchy="false">(</mo> <msub> <mi>C</mi> <mi>h</mi> </msub> <mo>/</mo> <msub> <mi>C</mi> <mi>l</mi> </msub> <mo stretchy="false">)</mo> </mrow> </semantics></math> for the same <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>P</mi> </mrow> </semantics></math> = +100 kPa value and <math display="inline"><semantics> <mrow> <msub> <mi>ζ</mi> <mi>s</mi> </msub> <mo> </mo> </mrow> </semantics></math>= 0.049 of different set values of <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mi>C</mi> </msub> </mrow> </semantics></math>. (<b>c</b>) characteristics of <math display="inline"><semantics> <mrow> <mo>Δ</mo> <msub> <mi>ψ</mi> <mi>s</mi> </msub> <mo>=</mo> <mi>f</mi> <mo stretchy="false">(</mo> <msub> <mi>C</mi> <mi>h</mi> </msub> <mo>/</mo> <msub> <mi>C</mi> <mi>l</mi> </msub> <mo stretchy="false">)</mo> </mrow> </semantics></math> for the same <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>P</mi> </mrow> </semantics></math> = −100 kPa value and <math display="inline"><semantics> <mrow> <msub> <mi>ζ</mi> <mi>s</mi> </msub> </mrow> </semantics></math> = 0.049 of different set values of <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mi>C</mi> </msub> </mrow> </semantics></math>; characteristics of <math display="inline"><semantics> <mrow> <mo>Δ</mo> <msub> <mi>ψ</mi> <mi>s</mi> </msub> <mo>=</mo> <mi>f</mi> <mo stretchy="false">(</mo> <msub> <mi>C</mi> <mi>h</mi> </msub> <mo>/</mo> <msub> <mi>C</mi> <mi>l</mi> </msub> <mo stretchy="false">)</mo> </mrow> </semantics></math> for the same <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>P</mi> </mrow> </semantics></math> = −100 kPa value and <math display="inline"><semantics> <mrow> <msub> <mi>ζ</mi> <mi>s</mi> </msub> </mrow> </semantics></math> = 0.07 of different set values of <math display="inline"><semantics> <mrow> <msub> 
<mi>R</mi> <mi>C</mi> </msub> </mrow> </semantics></math> (<b>d</b>) characteristics of <math display="inline"><semantics> <mrow> <mo>Δ</mo> <msub> <mi>ψ</mi> <mi>s</mi> </msub> <mo>=</mo> <mi>f</mi> <mo stretchy="false">(</mo> <msub> <mi>C</mi> <mi>h</mi> </msub> <mo>/</mo> <msub> <mi>C</mi> <mi>l</mi> </msub> <mo stretchy="false">)</mo> </mrow> </semantics></math> for the same <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>P</mi> </mrow> </semantics></math> = −100 kPa value and <math display="inline"><semantics> <mrow> <msub> <mi>ζ</mi> <mi>s</mi> </msub> </mrow> </semantics></math> = 0.3 of different set values of <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mi>C</mi> </msub> </mrow> </semantics></math>; characteristics of <math display="inline"><semantics> <mrow> <mo>Δ</mo> <msub> <mi>ψ</mi> <mi>s</mi> </msub> <mo>=</mo> <mi>f</mi> <mo stretchy="false">(</mo> <msub> <mi>C</mi> <mi>h</mi> </msub> <mo>/</mo> <msub> <mi>C</mi> <mi>l</mi> </msub> <mo stretchy="false">)</mo> </mrow> </semantics></math> for the same <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>P</mi> </mrow> </semantics></math> = +100 kPa value and <math display="inline"><semantics> <mrow> <msub> <mi>ζ</mi> <mi>s</mi> </msub> </mrow> </semantics></math> = 0.22 of different set values of <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mi>C</mi> </msub> </mrow> </semantics></math>.</p>
Figure 3
<p>The families of the characteristics <math display="inline"><semantics> <mrow> <mo>Δ</mo> <msub> <mi>ψ</mi> <mi>s</mi> </msub> <mo>=</mo> <mi>f</mi> <mfenced> <mrow> <mo>Δ</mo> <mi>P</mi> </mrow> </mfenced> </mrow> </semantics></math> for different fixed values of <math display="inline"><semantics> <mrow> <msub> <mi>C</mi> <mi>h</mi> </msub> <mo>/</mo> <msub> <mi>C</mi> <mi>l</mi> </msub> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>ζ</mi> <mi>s</mi> </msub> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mi>C</mi> </msub> </mrow> </semantics></math>. (<b>a</b>) characteristics of <math display="inline"><semantics> <mrow> <mo>Δ</mo> <msub> <mi>ψ</mi> <mi>s</mi> </msub> <mo>=</mo> <mi>f</mi> <mfenced> <mrow> <mo>Δ</mo> <mi>P</mi> </mrow> </mfenced> </mrow> </semantics></math> for the same <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mi>C</mi> </msub> </mrow> </semantics></math> = 244.5968 value and <math display="inline"><semantics> <mrow> <msub> <mi>ζ</mi> <mi>s</mi> </msub> </mrow> </semantics></math> = 0.049 of different set values of <math display="inline"><semantics> <mrow> <mo> </mo> <msub> <mi>C</mi> <mi>h</mi> </msub> <mo>/</mo> <msub> <mi>C</mi> <mi>l</mi> </msub> <mo stretchy="false">)</mo> </mrow> </semantics></math>. 
(<b>b</b>) characteristics of <math display="inline"><semantics> <mrow> <mo>Δ</mo> <msub> <mi>ψ</mi> <mi>s</mi> </msub> <mo>=</mo> <mi>f</mi> <mfenced> <mrow> <mo>Δ</mo> <mi>P</mi> </mrow> </mfenced> </mrow> </semantics></math> for the same <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mi>C</mi> </msub> </mrow> </semantics></math> = 244.5968 value and <math display="inline"><semantics> <mrow> <msub> <mi>ζ</mi> <mi>s</mi> </msub> </mrow> </semantics></math> = 0.049 of different set values of <math display="inline"><semantics> <mrow> <mo> </mo> <msub> <mi>C</mi> <mi>h</mi> </msub> <mo>/</mo> <msub> <mi>C</mi> <mi>l</mi> </msub> </mrow> </semantics></math> = 20. (<b>c</b>) characteristics of <math display="inline"><semantics> <mrow> <mo>Δ</mo> <msub> <mi>ψ</mi> <mi>s</mi> </msub> <mo>=</mo> <mi>f</mi> <mfenced> <mrow> <mo>Δ</mo> <mi>P</mi> </mrow> </mfenced> </mrow> </semantics></math> for the same <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mi>C</mi> </msub> </mrow> </semantics></math> = 4423.6023 value and <math display="inline"><semantics> <mrow> <msub> <mi>ζ</mi> <mi>s</mi> </msub> </mrow> </semantics></math> = 0.03 of different set values of <math display="inline"><semantics> <mrow> <mo> </mo> <msub> <mi>C</mi> <mi>h</mi> </msub> <mo>/</mo> <msub> <mi>C</mi> <mi>l</mi> </msub> </mrow> </semantics></math> = 3.75. 
(<b>d</b>) characteristics of <math display="inline"><semantics> <mrow> <mo>Δ</mo> <msub> <mi>ψ</mi> <mi>s</mi> </msub> <mo>=</mo> <mi>f</mi> <mfenced> <mrow> <mo>Δ</mo> <mi>P</mi> </mrow> </mfenced> </mrow> </semantics></math> for the same value <math display="inline"><semantics> <mrow> <msub> <mi>ζ</mi> <mi>s</mi> </msub> </mrow> </semantics></math> = 0.3 and <math display="inline"><semantics> <mrow> <msub> <mi>C</mi> <mi>h</mi> </msub> <mo>/</mo> <msub> <mi>C</mi> <mi>l</mi> </msub> </mrow> </semantics></math> = 20 of different set values of <math display="inline"><semantics> <mrow> <mo> </mo> <msub> <mi>R</mi> <mi>C</mi> </msub> </mrow> </semantics></math>.</p>
Figure 4
<p>The families of the characteristics <math display="inline"><semantics> <mrow> <mo>Δ</mo> <msub> <mi>ψ</mi> <mi>s</mi> </msub> <mo>=</mo> <mi>f</mi> <mfenced> <mrow> <msub> <mi>R</mi> <mi>C</mi> </msub> </mrow> </mfenced> </mrow> </semantics></math> for different fixed values of <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>P</mi> </mrow> </semantics></math>,<math display="inline"><semantics> <mrow> <mo> </mo> <msub> <mi>ζ</mi> <mi>s</mi> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>C</mi> <mi>h</mi> </msub> <mo>/</mo> <msub> <mi>C</mi> <mi>l</mi> </msub> </mrow> </semantics></math>. (<b>a</b>) characteristics of <math display="inline"><semantics> <mrow> <mo>Δ</mo> <msub> <mi>ψ</mi> <mi>s</mi> </msub> <mo>=</mo> <mi>f</mi> <mfenced> <mrow> <msub> <mi>R</mi> <mi>C</mi> </msub> </mrow> </mfenced> </mrow> </semantics></math> for the same <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>P</mi> </mrow> </semantics></math> = −100 kPa value and <math display="inline"><semantics> <mrow> <msub> <mi>ζ</mi> <mi>s</mi> </msub> </mrow> </semantics></math> = 0.049 of different set values of <math display="inline"><semantics> <mrow> <msub> <mi>C</mi> <mi>h</mi> </msub> <mo>/</mo> <msub> <mi>C</mi> <mi>l</mi> </msub> </mrow> </semantics></math>. (<b>b</b>) characteristics of <math display="inline"><semantics> <mrow> <mo>Δ</mo> <msub> <mi>ψ</mi> <mi>s</mi> </msub> <mo>=</mo> <mi>f</mi> <mfenced> <mrow> <msub> <mi>R</mi> <mi>C</mi> </msub> </mrow> </mfenced> </mrow> </semantics></math> for the same <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>P</mi> </mrow> </semantics></math> = +100 kPa value and <math display="inline"><semantics> <mrow> <msub> <mi>C</mi> <mi>h</mi> </msub> <mo>/</mo> <msub> <mi>C</mi> <mi>l</mi> </msub> </mrow> </semantics></math> = 20 of different set values of <math display="inline"><semantics> <mrow> <msub> <mi>ζ</mi> <mi>s</mi> </msub> </mrow> </semantics></math>. 
(<b>c</b>) characteristics of <math display="inline"><semantics> <mrow> <mo>Δ</mo> <msub> <mi>ψ</mi> <mi>s</mi> </msub> <mo>=</mo> <mi>f</mi> <mfenced> <mrow> <msub> <mi>R</mi> <mi>C</mi> </msub> </mrow> </mfenced> </mrow> </semantics></math> for the same <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>P</mi> </mrow> </semantics></math> = −100 kPa value and <math display="inline"><semantics> <mrow> <msub> <mi>C</mi> <mi>h</mi> </msub> <mo>/</mo> <msub> <mi>C</mi> <mi>l</mi> </msub> </mrow> </semantics></math> = 20 of different set values of <math display="inline"><semantics> <mrow> <msub> <mi>ζ</mi> <mi>s</mi> </msub> </mrow> </semantics></math>. (<b>d</b>) characteristics of <math display="inline"><semantics> <mrow> <msub> <mrow> <mo stretchy="false">(</mo> <msub> <mi>R</mi> <mi>C</mi> </msub> <mo stretchy="false">)</mo> </mrow> <mrow> <mi>l</mi> <mi>i</mi> <mi>m</mi> </mrow> </msub> <mo>=</mo> <mi>f</mi> <mfenced> <mrow> <msub> <mi>ζ</mi> <mi>s</mi> </msub> </mrow> </mfenced> </mrow> </semantics></math> for the different set values of <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>P</mi> </mrow> </semantics></math>.</p>
20 pages, 324 KiB  
Article
A Classical Formulation of Quantum Theory?
by William F. Braasch, Jr. and William K. Wootters
Entropy 2022, 24(1), 137; https://doi.org/10.3390/e24010137 - 17 Jan 2022
Cited by 2 | Viewed by 2533
Abstract
We explore a particular way of reformulating quantum theory in classical terms, starting with phase space rather than Hilbert space, and with actual probability distributions rather than quasiprobabilities. The classical picture we start with is epistemically restricted, in the spirit of a model introduced by Spekkens. We obtain quantum theory only by combining a collection of restricted classical pictures. Our main challenge in this paper is to find a simple way of characterizing the allowed sets of classical pictures. We present one promising approach to this problem and show how it works out for the case of a single qubit. Full article
(This article belongs to the Special Issue Quantum Darwinism and Friends)
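The kind of epistemic restriction the abstract refers to can be made concrete with Spekkens' "toy bit": four ontic states, where maximal-knowledge epistemic states are the two-element subsets, mirroring the six Pauli eigenstates of a qubit. A minimal enumeration follows; it illustrates the general idea, not this paper's specific construction, and the eigenstate labelling shown is one conventional choice:

```python
from itertools import combinations

# Spekkens' toy bit: four ontic states; the knowledge-balance principle
# allows an agent to know at most which two-element subset the system is in.
ONTIC = {1, 2, 3, 4}

# The six maximal-knowledge epistemic states (two-element subsets).
epistemic_states = [set(pair) for pair in combinations(sorted(ONTIC), 2)]

# A toy analogue of the six qubit Pauli eigenstates (one common pairing,
# chosen here for illustration; other labellings are possible):
labels = {
    frozenset({1, 2}): "z+", frozenset({3, 4}): "z-",
    frozenset({1, 3}): "x+", frozenset({2, 4}): "x-",
    frozenset({1, 4}): "y+", frozenset({2, 3}): "y-",
}

def disjoint(a, b):
    """Disjoint epistemic states can be distinguished with certainty,
    playing the role of orthogonal quantum states."""
    return not (a & b)
```

The quantum reformulation discussed in the abstract goes further, combining many such restricted classical pictures rather than working with a single one.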
10 pages, 1703 KiB  
Article
Quantum Switchboard with Coupled-Cavity Array
by Wai-Keong Mok and Leong-Chuan Kwek
Entropy 2022, 24(1), 136; https://doi.org/10.3390/e24010136 - 17 Jan 2022
Viewed by 1716
Abstract
The ability to deterministically control the flow of quantum information is useful for scaling up quantum computation. In this paper, we demonstrate a controllable quantum switchboard which directs the teleportation protocol to one of two targets, fully dependent on the sender’s choice. Importantly, the quantum switchboard also acts as an optimal quantum cloning machine, which allows the receivers to recover the unknown quantum state with a maximal fidelity of 5/6. This protects the system from the complete loss of quantum information in the event that the teleportation protocol fails. We also provide an experimentally feasible physical implementation of the proposal using a coupled-cavity array. The proposed switchboard can be utilized for the efficient routing of quantum information in a large quantum network. Full article
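The 5/6 fidelity quoted above is the known optimum for universal 1-to-2 qubit cloning, where each clone is a shrunk copy of the input mixed with white noise. A quick numerical check of that generic fact (not of the paper's coupled-cavity implementation):

```python
import numpy as np

def clone_reduced_state(psi):
    """Reduced state of one clone from the optimal universal 1->2 qubit
    cloner: the input projector shrunk by 2/3, mixed with white noise."""
    proj = np.outer(psi, psi.conj())
    return (2 / 3) * proj + (1 / 3) * np.eye(2) / 2

def fidelity(psi, rho):
    """Fidelity <psi|rho|psi> of a pure state with a density matrix."""
    return float(np.real(psi.conj() @ rho @ psi))

rng = np.random.default_rng(0)
v = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = v / np.linalg.norm(v)            # random pure qubit state

F = fidelity(psi, clone_reduced_state(psi))   # 5/6 for every input state
```

The fidelity is state-independent (2/3 from the shrunk projector plus 1/6 from the maximally mixed part), which is what makes the cloner "universal".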
Figure 1
<p>A quantum network consisting of many quantum devices and computers connected through a quantum switchboard. The switchboard can also redirect congested channels to less used channels during peak usage.</p>
Figure 2
<p>Schematic of the quantum switchboard. Suppose Alice wishes to send her auxiliary qubit to Bob. She can direct Dick to send his qubit to Charlene (Bob). Charlene then performs a Bell measurement on her qubit with Dick’s qubit and sends the results of her measurement to Bob. Using the information from Charlene, Bob can perfectly recover the state of Alice’s auxiliary qubit. Our switchboard state has the unique feature of also being an optimal quantum cloning machine, which allows the receivers to recover the quantum state with maximal fidelity in the event that the teleportation protocol fails.</p>
Figure 3
<p>An illustration of how the scheme can be extended to a network of nodes with receivers labelled by Bob<math display="inline"><semantics> <msub> <mrow/> <mn>1</mn> </msub> </semantics></math>, Bob<math display="inline"><semantics> <msub> <mrow/> <mn>2</mn> </msub> </semantics></math>, ⋯ and Charlene<math display="inline"><semantics> <msub> <mrow/> <mn>1</mn> </msub> </semantics></math>, Charlene<math display="inline"><semantics> <msub> <mrow/> <mn>2</mn> </msub> </semantics></math>, ⋯. The controllers are omitted from the diagram for simplicity, but are inherently present at each branch of the network to direct the flow of quantum information.</p>
Figure 4
<p>(<b>a</b>) Coupled-cavity array setup. Each unit cell contains a four-level atom coupled to a two-mode cavity field with strengths <math display="inline"><semantics> <msub> <mi>g</mi> <mi>a</mi> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>g</mi> <mi>b</mi> </msub> </semantics></math>. Each atom is driven by four external lasers with Rabi frequencies <math display="inline"><semantics> <mrow> <msub> <mo>Ω</mo> <mn>1</mn> </msub> <mo>,</mo> <msub> <mo>Ω</mo> <mn>2</mn> </msub> <mo>,</mo> <msub> <mo>Ω</mo> <mn>3</mn> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <msub> <mo>Ω</mo> <mn>4</mn> </msub> </semantics></math>. (<b>b</b>) Energy level diagram of the four-level atom. The detunings <math display="inline"><semantics> <msub> <mi>δ</mi> <mi>j</mi> </msub> </semantics></math> are defined in the main text. A double <math display="inline"><semantics> <mo>Λ</mo> </semantics></math> system is formed, with <math display="inline"><semantics> <mrow> <msub> <mo>Λ</mo> <mi>a</mi> </msub> <mo>=</mo> <mrow> <mo>{</mo> <mrow> <mo>|</mo> <mn>1</mn> <mo>〉</mo> </mrow> <mo>,</mo> <mrow> <mo>|</mo> <mn>2</mn> <mo>〉</mo> </mrow> <mo>,</mo> <mrow> <mo>|</mo> <mn>3</mn> <mo>〉</mo> </mrow> <mo>}</mo> </mrow> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mo>Λ</mo> <mi>b</mi> </msub> <mo>=</mo> <mrow> <mo>{</mo> <mrow> <mo>|</mo> <mn>1</mn> <mo>〉</mo> </mrow> <mo>,</mo> <mrow> <mo>|</mo> <mn>2</mn> <mo>〉</mo> </mrow> <mo>,</mo> <mrow> <mo>|</mo> <mn>4</mn> <mo>〉</mo> </mrow> <mo>}</mo> </mrow> </mrow> </semantics></math>. The ground states <math display="inline"><semantics> <mrow> <mo>|</mo> <mn>1</mn> <mo>〉</mo> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mo>|</mo> <mn>2</mn> <mo>〉</mo> </mrow> </semantics></math> are used to encode a qubit.</p>
28 pages, 498 KiB  
Article
An Information Theoretic Interpretation to Deep Neural Networks
by Xiangxiang Xu, Shao-Lun Huang, Lizhong Zheng and Gregory W. Wornell
Entropy 2022, 24(1), 135; https://doi.org/10.3390/e24010135 - 17 Jan 2022
Cited by 14 | Viewed by 4450
Abstract
With the unprecedented performance achieved by deep learning, it is commonly believed that deep neural networks (DNNs) attempt to extract informative features for learning tasks. To formalize this intuition, we apply a local information-geometric analysis and establish an information-theoretic framework for feature selection, which demonstrates the information-theoretic optimality of DNN features. Moreover, we conduct a quantitative analysis to characterize the impact of network structure on the feature extraction process of DNNs. Our investigation naturally leads to a performance metric for evaluating the effectiveness of extracted features, called the H-score, which illustrates the connection between the practical training process of DNNs and the information-theoretic framework. Finally, we validate our theoretical results through experiments on synthesized data and the ImageNet dataset. Full article
(This article belongs to the Special Issue Information Theory and Machine Learning)
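For context, one commonly quoted form of the H-score for paired features f(X) and g(Y) is H(f, g) = E[fᵀg] − E[f]ᵀE[g] − ½ tr(cov(f) cov(g)). The sketch below assumes that form; it is a plausible reading of the metric named in the abstract, not necessarily the exact definition used in the paper:

```python
import numpy as np

def h_score(F, G):
    """H-score of paired feature matrices F (n x k) and G (n x k).

    Assumed form: H = E[f^T g] - E[f]^T E[g] - 0.5 * tr(cov(f) @ cov(g)).
    Larger values indicate a more informative feature pair.
    """
    F = np.asarray(F, dtype=float)
    G = np.asarray(G, dtype=float)
    Fc = F - F.mean(axis=0)                      # center both feature sets
    Gc = G - G.mean(axis=0)
    cross = np.mean(np.sum(Fc * Gc, axis=1))     # E[f^T g] - E[f]^T E[g]
    cov_f = Fc.T @ Fc / len(F)
    cov_g = Gc.T @ Gc / len(G)
    return cross - 0.5 * np.trace(cov_f @ cov_g)
```

For perfectly correlated standardized 1-d features the score is 0.5, and it drops as the correlation between f(X) and g(Y) weakens, which is why it can serve as a training-time proxy for feature quality.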
Figure 1
<p>A deep neural network that uses data <span class="html-italic">X</span> to predict <span class="html-italic">Y</span>. All hidden layers together map the input data <span class="html-italic">X</span> to <span class="html-italic">k</span>-dimensional feature <math display="inline"><semantics> <mrow> <mi>s</mi> <mrow> <mo>(</mo> <mi>x</mi> <mo>)</mo> </mrow> <mo>=</mo> <msup> <mrow> <mo>(</mo> <msub> <mi>s</mi> <mn>1</mn> </msub> <mo>,</mo> <mo>…</mo> <mo>,</mo> <msub> <mi>s</mi> <mi>k</mi> </msub> <mo>)</mo> </mrow> <mi mathvariant="normal">T</mi> </msup> </mrow> </semantics></math>. Then, the probabilistic prediction <math display="inline"><semantics> <msub> <mover accent="true"> <mi>P</mi> <mo>˜</mo> </mover> <mrow> <mi>Y</mi> <mo>|</mo> <mi>X</mi> </mrow> </msub> </semantics></math> of <span class="html-italic">Y</span> is computed from <math display="inline"><semantics> <mrow> <mi>s</mi> <mo>(</mo> <mi>x</mi> <mo>)</mo> <mo>,</mo> <mi>v</mi> <mo>(</mo> <mi>y</mi> <mo>)</mo> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <mi>b</mi> <mo>(</mo> <mi>y</mi> <mo>)</mo> </mrow> </semantics></math>, where <span class="html-italic">v</span> and <span class="html-italic">b</span> are the weights and bias in the last layer.</p>
Figure 2
<p>A multi-layer neural network, where the expressive power of the feature mapping <math display="inline"><semantics> <mrow> <mi>s</mi> <mo>(</mo> <mo>·</mo> <mo>)</mo> </mrow> </semantics></math> is restricted by the hidden representation <span class="html-italic">t</span>. All hidden layers previous to <span class="html-italic">t</span> are fixed, represented by the “pre-processing” module.</p>
Figure 3
<p>A simple neural network with ideal expressive power, which can generate any <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math> dimensional feature <span class="html-italic">s</span> of <span class="html-italic">X</span> by tuning the weights in the first layer.</p>
Figure 4
<p>The trained feature <span class="html-italic">s</span>, weights <span class="html-italic">v</span>, and bias <span class="html-italic">b</span> of the network in <a href="#entropy-24-00135-f003" class="html-fig">Figure 3</a>, which are compared with the corresponding theoretical results to show their coincidences.</p>
Figure 5
<p>The designed network for validating the impact of network structure on feature extraction, with <math display="inline"><semantics> <mrow> <mi>m</mi> <mo>=</mo> <mn>4</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math> neurons in two hidden layers. Our goal is to compare the learned weights <math display="inline"><semantics> <mrow> <mi>w</mi> <mo>(</mo> <mn>1</mn> <mo>)</mo> <mo>,</mo> <mi>w</mi> <mo>(</mo> <mn>2</mn> <mo>)</mo> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>w</mi> <mo>(</mo> <mn>3</mn> <mo>)</mo> </mrow> </semantics></math> and bias <span class="html-italic">c</span> in the hidden layer with our theoretic characterizations in <a href="#sec3dot2dot2-entropy-24-00135" class="html-sec">Section 3.2.2</a>.</p>
Figure 6
<p>The trained weights <span class="html-italic">w</span> and bias <span class="html-italic">c</span> of the network in <a href="#entropy-24-00135-f005" class="html-fig">Figure 5</a>, which are compared with the corresponding theoretical results to show their coincidences.</p>
12 pages, 1728 KiB  
Article
Instance Segmentation of Multiple Myeloma Cells Using Deep-Wise Data Augmentation and Mask R-CNN
by May Phu Paing, Adna Sento, Toan Huy Bui and Chuchart Pintavirooj
Entropy 2022, 24(1), 134; https://doi.org/10.3390/e24010134 - 17 Jan 2022
Cited by 7 | Viewed by 3178
Abstract
Multiple myeloma is a cancer of the bone marrow that can lead to dysfunction of the body and can be fatal to the patient. Manual microscopic analysis of abnormal plasma cells, also known as multiple myeloma cells, is one of the most commonly used diagnostic methods for multiple myeloma. However, as a manual process, it is time-consuming and labor-intensive, and it carries a higher risk of human error. This paper presents computer-aided detection and segmentation of myeloma cells from microscopic images of bone marrow aspiration. Two major contributions are presented in this paper. First, different Mask R-CNN models using different images, including original microscopic images, contrast-enhanced images, and stained cell images, are developed to perform instance segmentation of multiple myeloma cells. As a second contribution, deep-wise augmentation, a deep learning-based data augmentation method, is applied to increase the performance of the Mask R-CNN models. Based on the experimental findings, the Mask R-CNN model using contrast-enhanced images combined with the proposed deep-wise data augmentation provides superior performance compared to the other models. It achieves a mean precision of 0.9973, mean recall of 0.8631, and mean intersection over union (IOU) of 0.9062. Full article
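The contrast enhancement used for one of the input variants (the Figure 2 caption mentions contrast stretching) can be sketched as a simple percentile-based stretch. The percentile choices and output range below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def contrast_stretch(img, low_pct=2, high_pct=98):
    """Percentile-based contrast stretching to the full [0, 255] range.

    Pixels at or below the low percentile map to 0, at or above the high
    percentile to 255, with linear scaling in between. The percentile
    cut-offs are illustrative defaults.
    """
    img = np.asarray(img, dtype=float)
    lo, hi = np.percentile(img, [low_pct, high_pct])
    if hi <= lo:                       # flat image: nothing to stretch
        return np.zeros_like(img, dtype=np.uint8)
    out = (img - lo) / (hi - lo)
    return (np.clip(out, 0, 1) * 255).astype(np.uint8)
```

Stretching based on inner percentiles rather than the raw min/max makes the enhancement robust to a few extreme outlier pixels, which is one common reason to prefer it for stained microscopy images.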
Figure 1
<p>Schematic diagram of the proposed multiple myeloma cell detection. It contains three major parts: (1) segmentation of stained cells, (2) deep-wise data augmentation, and (3) Mask R-CNN.</p>
Figure 2
<p>Segmentation of stained cells. (<b>a</b>) Original input microscopic cell image, (<b>b</b>) Hue (H) color channel of input image, (<b>c</b>) Result of contrast stretching, and (<b>d</b>) Result after removing unstained cells.</p>
Figure 3
<p>Deep-wise data augmentation. (<b>a</b>) Example of polygon masks (ground truths) for myeloma cells using VGG annotation tool, (<b>b</b>) Basic data augmentations of cells 1, 2, and 3, (<b>c</b>) Deep-wise augmentation of cell 1 (denoted by arrow), and (<b>d</b>) Deep-wise augmentation of cell 2 and 3 (denoted by arrows).</p>
Figure 4
<p>Training and validation of proposed Contrast-enhanced Mask R-CNN with Deep-wise data augmentation. (<b>a</b>) Training loss (0.3006) and (<b>b</b>) Validation loss (0.6247).</p>
Figure 5
<p>Some examples of segmentation results by contrast-enhanced Mask R-CNN with deep-wise data augmentation. Each column shows the original image, the contrast-enhanced image, and the segmentation result, respectively. (<b>a</b>) Isolated MM cell; (<b>b</b>) nucleus and cytoplasm with similar color intensity; (<b>c</b>) MM cells in a cluster (touching nuclei); (<b>d</b>) MM cell in a cluster (touching cytoplasm). Green-colored bounding boxes and masks represent ground truth, and red-colored ones represent the segmentation result. The caption above each bounding box shows the prediction score/IOU value.</p>
17 pages, 790 KiB  
Article
Learn Quasi-Stationary Distributions of Finite State Markov Chain
by Zhiqiang Cai, Ling Lin and Xiang Zhou
Entropy 2022, 24(1), 133; https://doi.org/10.3390/e24010133 - 17 Jan 2022
Cited by 1 | Viewed by 2348
Abstract
We propose a reinforcement learning (RL) approach to compute the quasi-stationary distribution. Based on the fixed-point formulation of the quasi-stationary distribution, we minimize the KL-divergence between two Markovian path distributions induced by the candidate distribution and the true target distribution. To solve this challenging [...] Read more.
We propose a reinforcement learning (RL) approach to compute the quasi-stationary distribution. Based on the fixed-point formulation of the quasi-stationary distribution, we minimize the KL-divergence between two Markovian path distributions induced by the candidate distribution and the true target distribution. To solve this challenging minimization problem by gradient descent, we apply a reinforcement learning technique by introducing reward and value functions. We derive the corresponding policy gradient theorem and design an actor-critic algorithm to learn the optimal solution and the value function. Numerical examples on finite-state Markov chains are presented to demonstrate the new method. Full article
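The fixed-point formulation the abstract builds on has a classical baseline: for a finite-state chain with an absorbing state, the quasi-stationary distribution (QSD) is the normalized left eigenvector of the substochastic matrix restricted to the transient states, which power iteration recovers. A minimal sketch of that baseline view (the transition matrix below is an assumption; this is not the paper's actor-critic algorithm):

```python
import numpy as np

def qsd_fixed_point(Q: np.ndarray, iters: int = 200) -> np.ndarray:
    """Power iteration on the substochastic matrix Q (transient states only):
    repeatedly apply mu <- mu Q and renormalize in L1."""
    mu = np.full(Q.shape[0], 1.0 / Q.shape[0])  # uniform initial guess
    for _ in range(iters):
        mu = mu @ Q
        mu /= mu.sum()
    return mu

# Toy chain: two transient states, absorption probability 0.2 from each row
Q = np.array([[0.5, 0.3],
              [0.2, 0.6]])
mu = qsd_fixed_point(Q)  # converges to the QSD (0.4, 0.6)
```

The RL approach of the paper replaces this direct iteration with a learned candidate distribution optimized by policy gradients.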
Show Figures

Figure 1
<p>The loopy Markov chain example with <math display="inline"><semantics> <mrow> <mi>ϵ</mi> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math>. The figure shows the log–log plots of <math display="inline"><semantics> <msub> <mi>L</mi> <mn>2</mn> </msub> </semantics></math>-norm error of the Vanilla Algorithm (<b>a</b>), Projection Algorithm (<b>b</b>), Polyak Averaging Algorithm (<b>c</b>) and our actor-critic algorithm (<b>d</b>). The iteration for the actor-critic algorithm is defined as one step of gradient descent (“<span class="html-italic">t</span>” in Algorithm 1).</p>
Figure 2">
Figure 2
<p>The loopy Markov chain example with <math display="inline"><semantics> <mrow> <mi>ϵ</mi> <mo>=</mo> <mn>0.9</mn> </mrow> </semantics></math>. The figure shows the log–log plots of <math display="inline"><semantics> <msub> <mi>L</mi> <mn>2</mn> </msub> </semantics></math>-norm error of Vanilla Algorithm (<b>a</b>), Projection Algorithm (<b>b</b>), Polyak Averaging Algorithm (<b>c</b>) and our actor-critic algorithm (<b>d</b>).</p>
Figure 3">
Figure 3
<p>The QSD for M/M/1/500 queue with <math display="inline"><semantics> <mrow> <msub> <mi>ρ</mi> <mi>i</mi> </msub> <mo>≡</mo> <mn>1.25</mn> </mrow> </semantics></math> (<b>left</b>) and <math display="inline"><semantics> <mrow> <msub> <mi>ρ</mi> <mi>i</mi> </msub> <mo>=</mo> <mn>2</mn> <mo>−</mo> <mfrac> <mn>3</mn> <mrow> <mn>2</mn> <mi>N</mi> <mo>−</mo> <mn>4</mn> </mrow> </mfrac> <mrow> <mo>(</mo> <mi>i</mi> <mo>−</mo> <mn>1</mn> <mo>)</mo> </mrow> </mrow> </semantics></math> (<b>right</b>).</p>
Figure 4">
Figure 4
<p>The M/M/1/500 queue with <math display="inline"><semantics> <mrow> <msub> <mi>ρ</mi> <mi>i</mi> </msub> <mo>=</mo> <mn>1.25</mn> </mrow> </semantics></math>. The figure shows the log–log plots of <math display="inline"><semantics> <msub> <mi>L</mi> <mn>2</mn> </msub> </semantics></math>-norm error of Vanilla Algorithm (<b>a</b>), Projection Algorithm (<b>b</b>), Polyak Averaging Algorithm (<b>c</b>) and our actor-critic algorithm (<b>d</b>).</p>
Figure 5">
Figure 5
<p>The M/M/1/500 queue with <math display="inline"><semantics> <mrow> <msub> <mi>ρ</mi> <mi>i</mi> </msub> <mo>=</mo> <mn>2</mn> <mo>−</mo> <mfrac> <mn>3</mn> <mrow> <mn>2</mn> <mi>N</mi> <mo>−</mo> <mn>4</mn> </mrow> </mfrac> <mrow> <mo>(</mo> <mi>i</mi> <mo>−</mo> <mn>1</mn> <mo>)</mo> </mrow> </mrow> </semantics></math>. The figure shows the log–log plots of <math display="inline"><semantics> <msub> <mi>L</mi> <mn>2</mn> </msub> </semantics></math>-norm error of Vanilla Algorithm (<b>a</b>), Projection Algorithm (<b>b</b>), Polyak Averaging Algorithm (<b>c</b>) and our actor-critic algorithm (<b>d</b>).</p>
">
15 pages, 1641 KiB  
Article
Understanding Dilated Mathematical Relationship between Image Features and the Convolutional Neural Network’s Learnt Parameters
by Eyad Alsaghir, Xiyu Shi, Varuna De Silva and Ahmet Kondoz
Entropy 2022, 24(1), 132; https://doi.org/10.3390/e24010132 - 16 Jan 2022
Cited by 1 | Viewed by 2058
Abstract
Deep learning, in general, is built on input data transformation and presentation, model training with parameter tuning, and recognition of new observations using the trained model. However, this comes with a high computation cost due to the extensive input database and the length [...] Read more.
Deep learning, in general, is built on input data transformation and presentation, model training with parameter tuning, and recognition of new observations using the trained model. However, this comes with a high computation cost due to the extensive input database and the length of time required for training. Despite the model learning its parameters from the transformed input data, no direct research has been conducted to investigate the mathematical relationship between the transformed information (i.e., features, excitation) and the model’s learnt parameters (i.e., weights). This research aims to explore a mathematical relationship between the input excitations and the weights of a trained convolutional neural network. The objective is to investigate three aspects of this assumed feature-weight relationship: (1) the mathematical relationship between the training input images’ features and the model’s learnt parameters, (2) the mathematical relationship between the images’ features of a separate test dataset and a trained model’s learnt parameters, and (3) the mathematical relationship between the difference of training and testing images’ features and the model’s learnt parameters with a separate test dataset. The paper empirically demonstrates, through ANOVA analysis, the existence of this mathematical relationship between the test image features and the model’s learnt weights. Full article
(This article belongs to the Topic Machine and Deep Learning)
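The ANOVA analysis mentioned in the abstract compares variance between and within groups of feature–weight values. As a hedged illustration of the statistic involved (the groups below are toy assumptions, not the paper's data), a one-way ANOVA F-value can be computed directly:

```python
def one_way_anova_f(groups):
    """F = (between-group mean square) / (within-group mean square)."""
    all_vals = [x for g in groups for x in g]
    grand = sum(all_vals) / len(all_vals)          # grand mean over all samples
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

F = one_way_anova_f([[1, 2, 3], [2, 3, 4]])  # F = 1.5 for these toy groups
```

A large F relative to the F-distribution's critical value indicates the group means differ more than within-group noise explains.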
Show Figures

Figure 1
<p>Diagram of Category 1 for the effect of training dataset sizes on the weights of models. In scenario 1, the small training dataset has 20 images, the number of test images <math display="inline"><semantics> <mrow> <mi>n</mi> <mo>=</mo> <mn>20</mn> </mrow> </semantics></math>, and the number of models <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>11</mn> </mrow> </semantics></math>. In scenario 2, <math display="inline"><semantics> <mrow> <mi>n</mi> <mo>=</mo> <mn>400</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math>, and the size of training datasets are 10,000, 8000, and 1300, respectively. The generated FW vectors are [<span class="html-italic">n</span> × 1] in size.</p>
Figure 2">
Figure 2
<p>Diagram of Category 2 for the effect of unknown-unrelated and unknown-related datasets on the models’ weights. Models 1, 2, and 3 were trained with a large training dataset of 10,000, 8000, and 1300 images, respectively. The test dataset size <math display="inline"><semantics> <mrow> <mi>n</mi> <mo>=</mo> <mn>200</mn> </mrow> </semantics></math> in the third scenario with Model 1, and <math display="inline"><semantics> <mrow> <mi>n</mi> <mo>=</mo> <mn>400</mn> </mrow> </semantics></math> with the three models in the fourth and fifth scenarios. The generated FW vectors are [<span class="html-italic">n</span> × 1] in size.</p>
Figure 3">
Figure 3
<p>Diagram of Category 3 for the difference in FW values created between training and test datasets. In the sixth scenario, the FW difference between known-related and unknown-related test sets are tested. The difference between known-related and unknown-unrelated test sets are tested in the seventh scenario. The three models were trained with a large dataset of 10,000, 8000, and 1300 images, respectively, and tested with <math display="inline"><semantics> <mrow> <mi>n</mi> <mo>=</mo> <mn>200</mn> </mrow> </semantics></math> images per dataset. The generated FW vectors are [<span class="html-italic">n</span> × 1] in size.</p>
Figure 4">
Figure 4
<p>Performance of accuracy (<b>left</b>) and loss (<b>right</b>) for Model 1 in training and validation. Model 1 was trained with 1000 images from the Kaggle Dogs vs. Cats dataset.</p>
">
29 pages, 1643 KiB  
Article
Optimal Control of Uniformly Heated Granular Fluids in Linear Response
by Natalia Ruiz-Pino and Antonio Prados
Entropy 2022, 24(1), 131; https://doi.org/10.3390/e24010131 - 16 Jan 2022
Cited by 6 | Viewed by 1824
Abstract
We present a detailed analytical investigation of the optimal control of uniformly heated granular gases in the linear regime. The intensity of the stochastic driving is therefore assumed to be bounded between two values that are close, which limits the possible values of [...] Read more.
We present a detailed analytical investigation of the optimal control of uniformly heated granular gases in the linear regime. The intensity of the stochastic driving is therefore assumed to be bounded between two values that are close, which limits the possible values of the granular temperature to a correspondingly small interval. Specifically, we are interested in minimising the connection time between the non-equilibrium steady states (NESSs) for two different values of the granular temperature by controlling the time dependence of the driving intensity. The closeness of the initial and target NESSs makes it possible to linearise the evolution equations and rigorously prove, from a mathematical point of view, that the optimal controls are of bang-bang type, with only one switching in the first Sonine approximation. We also look into the dependence of the optimal connection time on the bounds of the driving intensity. Moreover, the limits of validity of the linear regime are investigated. Full article
(This article belongs to the Section Non-equilibrium Phenomena)
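The intuition behind bang-bang optimality can be seen in a toy scalar relaxation x' = u - x: moving between two levels as fast as possible means holding the drive u at its extreme bound, and a wider bound shortens the connection time. This is a sketch of that intuition only, not the paper's two-variable granular dynamics; the parameter values are assumptions:

```python
import math

def connection_time(x_i: float, x_f: float, u: float) -> float:
    """Time for x' = u - x to go from x_i to x_f under a constant drive u.
    Solving the linear ODE gives t = ln((u - x_i) / (u - x_f))."""
    return math.log((u - x_i) / (u - x_f))

# Heating from x = 0 to x = 1: a larger bound u gives a shorter time
t_weak = connection_time(0.0, 1.0, 2.0)     # ln(2)
t_strong = connection_time(0.0, 1.0, 10.0)  # ln(10/9), much shorter
```

In the paper's two-dimensional problem (temperature plus excess kurtosis), reaching the target NESS additionally requires one switching between the extreme drive values.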
Show Figures

Figure 1
<p>Time evolution of the temperature and the excess kurtosis. Specifically, we plot <math display="inline"><semantics> <mrow> <mi>δ</mi> <mi>T</mi> </mrow> </semantics></math> (solid line) and <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>A</mi> <mn>2</mn> </msub> </mrow> </semantics></math> (dashed line), both for the CH protocol (<b>left panel</b>) and for the HC protocol (<b>right panel</b>). Dotted line represents the horizontal axis. The bounds for the driving intensity are <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">max</mo> </msub> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">min</mo> </msub> <mo>=</mo> <mo>−</mo> <mn>0.1</mn> </mrow> </semantics></math>, and the initial temperature is <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>T</mi> <mi mathvariant="normal">i</mi> </msub> <mo>=</mo> <mn>0.01</mn> </mrow> </semantics></math> for CH and <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>T</mi> <mi mathvariant="normal">i</mi> </msub> <mo>=</mo> <mo>−</mo> <mn>0.01</mn> </mrow> </semantics></math> for HC. The evolution under the action of <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">min</mo> </msub> </mrow> </semantics></math> is shown in blue and the evolution under <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">max</mo> </msub> </mrow> </semantics></math> in red. Other parameters are <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0.9</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>d</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math>.</p>
Figure 2">
Figure 2
<p>Phase plane trajectories. The CH case is illustrated in the <b>left panel</b> and the HC case in the <b>right panel</b>. Several trajectories are shown for different initial temperatures <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>T</mi> <mi mathvariant="normal">i</mi> </msub> <mo>∈</mo> <mrow> <mo>[</mo> <mn>0</mn> <mo>,</mo> <mn>0.01</mn> <mo>]</mo> </mrow> </mrow> </semantics></math> (<math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>T</mi> <mi mathvariant="normal">i</mi> </msub> <mo>∈</mo> <mrow> <mo>[</mo> <mo>−</mo> <mn>0.01</mn> <mo>,</mo> <mn>0</mn> <mo>]</mo> </mrow> </mrow> </semantics></math>) for the CH (HC) protocol. The remainder of the system parameters are the same as in <a href="#entropy-24-00131-f001" class="html-fig">Figure 1</a>. In each panel, the solid line (red on the left, blue on the right) represents the second part of the phase trajectory, arriving at the target NESS—the origin <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>δ</mi> <msub> <mi>A</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>0</mn> <mo>,</mo> <mi>δ</mi> <mi>T</mi> <mo>=</mo> <mn>0</mn> <mo>)</mo> </mrow> </semantics></math>. As in the previous figure, red (blue) lines correspond to <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">max</mo> </msub> </mrow> </semantics></math> (<math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">min</mo> </msub> </mrow> </semantics></math>). Again in each panel, the dashed lines represent the first part of the phase trajectory, starting from the initial points <math display="inline"><semantics> <mrow> <mo>(</mo> <mn>0</mn> <mo>,</mo> <mi>δ</mi> <msub> <mi>T</mi> <mi mathvariant="normal">i</mi> </msub> <mo>)</mo> </mrow> </semantics></math>. 
These curves end up at the points <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>δ</mi> <msub> <mi>A</mi> <mrow> <mn>2</mn> <mi>J</mi> </mrow> </msub> <mo>,</mo> <mi>δ</mi> <msub> <mi>T</mi> <mi>J</mi> </msub> <mo>)</mo> </mrow> </semantics></math>, marked with circles, at which the dashed and solid lines intersect.</p>
Figure 3">
Figure 3
<p>Switching time <math display="inline"><semantics> <msub> <mi>t</mi> <mi>J</mi> </msub> </semantics></math> and minimum connection time <math display="inline"><semantics> <msub> <mi>t</mi> <mi mathvariant="normal">f</mi> </msub> </semantics></math> as functions of the thermostat limit values for the CH protocol. Specifically, we have chosen the initial temperature <math display="inline"><semantics> <mrow> <msub> <mi>T</mi> <mi mathvariant="normal">i</mi> </msub> <mo>=</mo> <mn>1.01</mn> <mo>&gt;</mo> <mn>1</mn> </mrow> </semantics></math>. In the <b>left panel</b>, <math display="inline"><semantics> <msub> <mi>t</mi> <mi>J</mi> </msub> </semantics></math> (dashed line) and <math display="inline"><semantics> <msub> <mi>t</mi> <mi mathvariant="normal">f</mi> </msub> </semantics></math> (solid line) are plotted as functions of the lower bound <math display="inline"><semantics> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">min</mo> </msub> </semantics></math>, for a fixed value of the upper bound, namely, <math display="inline"><semantics> <mrow> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">max</mo> </msub> <mo>=</mo> <mn>1.1</mn> </mrow> </semantics></math>. In the <b>right panel</b>, they are plotted as functions of the upper bound <math display="inline"><semantics> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">max</mo> </msub> </semantics></math>, for a fixed value of the lower bound, namely, <math display="inline"><semantics> <mrow> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">min</mo> </msub> <mo>=</mo> <mn>0.9</mn> </mrow> </semantics></math>. Additional parameters are <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0.9</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>d</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math>. 
There are no qualitative changes for other values of <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>α</mi> <mo>,</mo> <mi>d</mi> <mo>)</mo> </mrow> </semantics></math>, aside from an increase in the connecting time as <math display="inline"><semantics> <mi>α</mi> </semantics></math> decreases. The inset shows a zoom of the panel for <math display="inline"><semantics> <mrow> <mn>1</mn> <mo>≤</mo> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">max</mo> </msub> <mo>≤</mo> <msub> <mi>χ</mi> <mi mathvariant="normal">i</mi> </msub> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>χ</mi> <mi mathvariant="normal">i</mi> </msub> <mo>=</mo> <mn>1.015</mn> </mrow> </semantics></math> for <math display="inline"><semantics> <mrow> <msub> <mi>T</mi> <mi mathvariant="normal">i</mi> </msub> <mo>=</mo> <mn>1.01</mn> </mrow> </semantics></math>.</p>
Figure 4">
Figure 4
<p>Switching time <math display="inline"><semantics> <msub> <mi>t</mi> <mi>J</mi> </msub> </semantics></math> and minimum connection time <math display="inline"><semantics> <msub> <mi>t</mi> <mi mathvariant="normal">f</mi> </msub> </semantics></math> as functions of the thermostat limit values for the HC case. The initial temperature is now <math display="inline"><semantics> <mrow> <msub> <mi>T</mi> <mi mathvariant="normal">i</mi> </msub> <mo>=</mo> <mn>0.99</mn> <mo>&lt;</mo> <mn>1</mn> </mrow> </semantics></math>. The remainder of the parameters are the same as in <a href="#entropy-24-00131-f003" class="html-fig">Figure 3</a>. Again, the <b>left</b> (<b>right</b>) panel shows <math display="inline"><semantics> <msub> <mi>t</mi> <mi>J</mi> </msub> </semantics></math> (dashed) and <math display="inline"><semantics> <msub> <mi>t</mi> <mi mathvariant="normal">f</mi> </msub> </semantics></math> (solid) as functions of <math display="inline"><semantics> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">min</mo> </msub> </semantics></math> (<math display="inline"><semantics> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">max</mo> </msub> </semantics></math>), for a fixed value of <math display="inline"><semantics> <mrow> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">max</mo> </msub> <mo>=</mo> <mn>1.1</mn> </mrow> </semantics></math> (<math display="inline"><semantics> <mrow> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">min</mo> </msub> <mo>=</mo> <mn>0.9</mn> </mrow> </semantics></math>). 
The inset in the <b>left</b> panel shows a zoom of the graph for <math display="inline"><semantics> <mrow> <msub> <mi>χ</mi> <mi mathvariant="normal">i</mi> </msub> <mo>≤</mo> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">min</mo> </msub> <mo>≤</mo> <mn>1</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>χ</mi> <mi mathvariant="normal">i</mi> </msub> <mo>=</mo> <mn>0.985</mn> </mrow> </semantics></math> for <math display="inline"><semantics> <mrow> <msub> <mi>T</mi> <mi mathvariant="normal">i</mi> </msub> <mo>=</mo> <mn>0.99</mn> </mrow> </semantics></math>, showing that there is no change in behaviour when <math display="inline"><semantics> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">min</mo> </msub> </semantics></math> crosses <math display="inline"><semantics> <msub> <mi>χ</mi> <mi mathvariant="normal">i</mi> </msub> </semantics></math>.</p>
Figure 5">
Figure 5
<p>Minimum connection time <math display="inline"><semantics> <msub> <mi>t</mi> <mi mathvariant="normal">f</mi> </msub> </semantics></math> versus the initial control <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>χ</mi> <mi mathvariant="normal">i</mi> </msub> </mrow> </semantics></math> for the CH case. Symbols represent the linear response prediction for <math display="inline"><semantics> <msub> <mi>t</mi> <mi mathvariant="normal">f</mi> </msub> </semantics></math>, as given by Equation (<a href="#FD29-entropy-24-00131" class="html-disp-formula">29</a>), for different values of the bounds (from top to bottom: <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">max</mo> </msub> <mo>=</mo> <mn>0.1</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">min</mo> </msub> <mo>=</mo> <mo>−</mo> <mn>0.1</mn> </mrow> </semantics></math> (triangles), <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">max</mo> </msub> <mo>=</mo> <mn>0.3</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">min</mo> </msub> <mo>=</mo> <mo>−</mo> <mn>0.3</mn> </mrow> </semantics></math> (stars), <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">max</mo> </msub> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">min</mo> </msub> <mo>=</mo> <mo>−</mo> <mn>0.7</mn> </mrow> </semantics></math> (diamonds), <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">max</mo> </msub> <mo>=</mo> <mn>99</mn> </mrow> </semantics></math> and <math 
display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">min</mo> </msub> <mo>=</mo> <mo>−</mo> <mn>0.99</mn> </mrow> </semantics></math>) (circles). Dashed lines correspond to Equation (<a href="#FD39-entropy-24-00131" class="html-disp-formula">39</a>) for each case, which shows the soundness of this approximate expression. The solid line corresponds to Equation (<a href="#FD38-entropy-24-00131" class="html-disp-formula">38</a>), which is basically superimposed with the dashed line for <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">max</mo> </msub> <mo>=</mo> <mn>99</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">min</mo> </msub> <mo>=</mo> <mo>−</mo> <mn>0.99</mn> </mrow> </semantics></math>. Other parameters are <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0.9</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>d</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math>.</p>
Figure 6">
Figure 6
<p>Minimum connection time <math display="inline"><semantics> <msub> <mi>t</mi> <mi mathvariant="normal">f</mi> </msub> </semantics></math> versus the initial control <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>χ</mi> <mi mathvariant="normal">i</mi> </msub> </mrow> </semantics></math> for the HC case. The line code is the same as in <a href="#entropy-24-00131-f005" class="html-fig">Figure 5</a>. Again, the solid line corresponding to Equation (<a href="#FD38-entropy-24-00131" class="html-disp-formula">38</a>) is basically superimposed with the linear response prediction for the further from unity bounds. Once more, <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0.9</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>d</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math>.</p>
Figure 7">
Figure 7
<p>Convergence to the non-linear expression (<a href="#FD38-entropy-24-00131" class="html-disp-formula">38</a>) as the bounds go to more extreme values for the CH case. We plot the connection time <math display="inline"><semantics> <msub> <mi>t</mi> <mi mathvariant="normal">f</mi> </msub> </semantics></math> versus the initial control <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>χ</mi> <mi mathvariant="normal">i</mi> </msub> </mrow> </semantics></math> for <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0.9</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>d</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math>. Several sets of data are plotted: (i) the non-linear expression (<a href="#FD38-entropy-24-00131" class="html-disp-formula">38</a>) (blue solid line), and (ii) the linear prediction, as given by Equation (<a href="#FD29-entropy-24-00131" class="html-disp-formula">29</a>), for several values of the bounds, namely, <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">max</mo> </msub> <mo>=</mo> <mn>9</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">min</mo> </msub> <mo>=</mo> <mo>−</mo> <mn>0.9</mn> </mrow> </semantics></math> (stars), <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">max</mo> </msub> <mo>=</mo> <mn>19</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">min</mo> </msub> <mo>=</mo> <mo>−</mo> <mn>0.95</mn> </mrow> </semantics></math> (triangles), <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">max</mo> </msub> <mo>=</mo> <mn>99</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> 
<mi>δ</mi> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">min</mo> </msub> <mo>=</mo> <mo>−</mo> <mn>0.99</mn> </mrow> </semantics></math> (circles). The time <math display="inline"><semantics> <msub> <mi>t</mi> <mi mathvariant="normal">f</mi> </msub> </semantics></math> given by Equation (<a href="#FD29-entropy-24-00131" class="html-disp-formula">29</a>) converges to that in Equation (<a href="#FD38-entropy-24-00131" class="html-disp-formula">38</a>) “from above”.</p>
Figure 8">
Figure 8
<p>Convergence to the non-linear expression (<a href="#FD38-entropy-24-00131" class="html-disp-formula">38</a>) as the bounds go to more extreme values for the HC case. Symbols code of the data shown are the same as in <a href="#entropy-24-00131-f007" class="html-fig">Figure 7</a>. The breakage of the convergence “from above” to the non-linear result is clearly seen: For extreme enough values of the bounds, the linear time becomes smaller than the non-linear prediction for a full-strength thermostat.</p>
Figure A1">
Figure A1
<p>Discriminant <math display="inline"><semantics> <mi mathvariant="sans-serif">Δ</mi> </semantics></math> as a function of the restitution coefficient <math display="inline"><semantics> <mi>α</mi> </semantics></math>. Both the <math display="inline"><semantics> <mrow> <mi>d</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math> (solid line) and <math display="inline"><semantics> <mrow> <mi>d</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math> (dashed) cases are shown. The discriminant remains positive for all values of <math display="inline"><semantics> <mi>α</mi> </semantics></math>, guaranteeing that the optimal control is of the bang-bang type with at most one switching.</p>
Figure A2">
Figure A2
<p>Qualitative picture of the heating and cooling trajectories in the phase plane. Curves for <math display="inline"><semantics> <mrow> <mi>t</mi> <mo>&gt;</mo> <mn>0</mn> </mrow> </semantics></math> are drawn with solid lines, curves for <math display="inline"><semantics> <mrow> <mi>t</mi> <mo>&lt;</mo> <mn>0</mn> </mrow> </semantics></math> with dashed lines; heating ones (<math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">max</mo> </msub> </mrow> </semantics></math>) are in red, cooling ones (<math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">min</mo> </msub> </mrow> </semantics></math>) in blue. The common tangent to the heating and cooling curves at the origin is represented by a black dotted line. The system starts cooling from an initial state <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>T</mi> <mi mathvariant="normal">i</mi> </msub> <mo>&lt;</mo> <mn>0</mn> </mrow> </semantics></math>. Let us assume that the bang-bang protocol is of CH type. In the first step of the bang, the system follows the blue cooling curve: If it is allowed to relax during an infinite time, it reaches the NESS <math display="inline"><semantics> <mrow> <mo>(</mo> <mn>0</mn> <mo>,</mo> <mi>δ</mi> <msub> <mi>T</mi> <mo movablelimits="true" form="prefix">min</mo> </msub> <mo>=</mo> <mfrac> <mn>2</mn> <mn>3</mn> </mfrac> <mi>δ</mi> <msub> <mi>χ</mi> <mrow> <mi>m</mi> <mi>i</mi> <mi>n</mi> </mrow> </msub> <mo>)</mo> </mrow> </semantics></math> over the vertical axis. 
The cooling must be interrupted at some time <math display="inline"><semantics> <mrow> <msub> <mi>t</mi> <mi>J</mi> </msub> <mo>&gt;</mo> <mn>0</mn> </mrow> </semantics></math>, where the driving is switched to <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">max</mo> </msub> </mrow> </semantics></math>: Since the system must reach the target NESS at the origin, it needs to move over the branch of the heating curve corresponding to <math display="inline"><semantics> <mrow> <mi>t</mi> <mo>&lt;</mo> <mn>0</mn> </mrow> </semantics></math> (red dashed line). However, this is impossible, since this heating curve is always above the tangent line and does not intersect the cooling curve. Therefore, it is not feasible to drive the system to the origin using a CH protocol for <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>T</mi> <mi mathvariant="normal">i</mi> </msub> <mo>&lt;</mo> <mn>0</mn> </mrow> </semantics></math>.</p>
Figure A3">
Figure A3
<p>Qualitative picture of the HC protocol in the phase plane. Curves for <math display="inline"><semantics> <mrow> <mi>t</mi> <mo>&gt;</mo> <mn>0</mn> </mrow> </semantics></math> are drawn with solid lines, curves for <math display="inline"><semantics> <mrow> <mi>t</mi> <mo>&lt;</mo> <mn>0</mn> </mrow> </semantics></math> with dashed lines; heating ones (<math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">max</mo> </msub> <mrow> <mo>)</mo> </mrow> </mrow> </semantics></math> are in red, cooling ones (<math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">min</mo> </msub> <mrow> <mo>)</mo> </mrow> </mrow> </semantics></math> in blue. The system starts heating from an initial state <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>T</mi> <mi mathvariant="normal">i</mi> </msub> <mo>&lt;</mo> <mn>0</mn> </mrow> </semantics></math>, corresponding to point <span class="html-italic">I</span>. In the first step of the bang, the system follows the red heating curve: If allowed to relax during an infinite time, it reaches the NESS <math display="inline"><semantics> <mrow> <mo>(</mo> <mn>0</mn> <mo>,</mo> <mi>δ</mi> <msub> <mi>T</mi> <mo movablelimits="true" form="prefix">max</mo> </msub> <mo>=</mo> <mfrac> <mn>2</mn> <mn>3</mn> </mfrac> <mi>δ</mi> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">max</mo> </msub> <mo>)</mo> </mrow> </semantics></math> over the vertical axis. The heating is interrupted at the point <span class="html-italic">J</span> over the cooling curve for <math display="inline"><semantics> <mrow> <mi>t</mi> <mo>&lt;</mo> <mn>0</mn> </mrow> </semantics></math> (blue dashed line), where the driving is switched to <math display="inline"><semantics> <mrow> <mi>δ</mi> <msub> <mi>χ</mi> <mo movablelimits="true" form="prefix">min</mo> </msub> </mrow> </semantics></math>. 
The optimal connection thus comprises the arcs <math display="inline"><semantics> <mrow> <mi>I</mi> <mi>J</mi> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>J</mi> <mi>F</mi> </mrow> </semantics></math>.</p>
Figure A4">
Figure A4
<p>Comparison of the two- and four-step bang-bang processes for <math display="inline"><semantics> <mrow> <msub> <mi>T</mi> <mi mathvariant="normal">i</mi> </msub> <mo>&lt;</mo> <mn>1</mn> </mrow> </semantics></math>. The two-step bang-bang connects <span class="html-italic">I</span> and <span class="html-italic">F</span> with the arcs <math display="inline"><semantics> <mrow> <mi>I</mi> <mi>J</mi> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>J</mi> <mi>F</mi> </mrow> </semantics></math>, whereas the four-step one comprises the arcs <math display="inline"><semantics> <mrow> <mi>I</mi> <mi>K</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>K</mi> <mi>L</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>L</mi> <mi>M</mi> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <mi>M</mi> <mi>F</mi> </mrow> </semantics></math>. The latter also connects <span class="html-italic">I</span> and <span class="html-italic">F</span>, but takes longer to complete, for all possible points <math display="inline"><semantics> <mrow> <msub> <mi>T</mi> <mi>K</mi> </msub> <mo>&lt;</mo> <msub> <mi>T</mi> <mi>J</mi> </msub> </mrow> </semantics></math>. Any four-step protocol whose first cooling arc ends at a point with <math display="inline"><semantics> <mrow> <msub> <mi>T</mi> <mi>K</mi> </msub> <mo>&gt;</mo> <msub> <mi>T</mi> <mi>J</mi> </msub> </mrow> </semantics></math> cannot drive the system to the target NESS <math display="inline"><semantics> <mrow> <mi>F</mi> <mo>=</mo> <mo>(</mo> <mn>1</mn> <mo>,</mo> <mn>1</mn> <mo>)</mo> </mrow> </semantics></math>. Dotted lines represent the axes. Dashed and solid lines correspond to the heating and cooling steps of the bang-bang, respectively.</p>
Full article ">Figure A5
<p>Comparison of the two- and four-step bang-bang processes for <math display="inline"><semantics> <mrow> <msub> <mi>T</mi> <mi mathvariant="normal">i</mi> </msub> <mo>&gt;</mo> <mn>1</mn> </mrow> </semantics></math>. Analogously to the case <math display="inline"><semantics> <mrow> <msub> <mi>T</mi> <mi mathvariant="normal">i</mi> </msub> <mo>&lt;</mo> <mn>1</mn> </mrow> </semantics></math>, the two-step bang-bang connects <span class="html-italic">I</span> and <span class="html-italic">F</span> with the arcs <math display="inline"><semantics> <mrow> <mi>I</mi> <mi>J</mi> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>J</mi> <mi>F</mi> </mrow> </semantics></math>, whereas the four-step one comprises the arcs <math display="inline"><semantics> <mrow> <mi>I</mi> <mi>K</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>K</mi> <mi>L</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>L</mi> <mi>M</mi> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <mi>M</mi> <mi>F</mi> </mrow> </semantics></math>. The latter takes longer to complete, but now <math display="inline"><semantics> <mrow> <msub> <mi>T</mi> <mi>K</mi> </msub> <mo>&gt;</mo> <msub> <mi>T</mi> <mi>J</mi> </msub> </mrow> </semantics></math>. In this case, if <math display="inline"><semantics> <mrow> <msub> <mi>T</mi> <mi>K</mi> </msub> <mo>&lt;</mo> <msub> <mi>T</mi> <mi>J</mi> </msub> </mrow> </semantics></math>, the system cannot be driven to the target NESS. Dotted, dashed, and solid lines have the same meaning as in <a href="#entropy-24-00131-f0A4" class="html-fig">Figure A4</a>.</p>
Full article ">
16 pages, 535 KiB  
Article
Multifractal Company Market: An Application to the Stock Market Indices
by Michał Chorowski and Ryszard Kutner
Entropy 2022, 24(1), 130; https://doi.org/10.3390/e24010130 - 16 Jan 2022
Cited by 2 | Viewed by 1830
Abstract
Using the multiscale normalized partition function, we carry out a multifractal analysis based on directly measurable shares of companies in the market. We present evidence that markets of competing firms are multifractal/multiscale. We verified this by (i) using our model, which describes the critical properties of the company market, and (ii) analyzing a real company market defined by the S&P500 index. As a valuable reference case, we considered a four-group market model that skillfully reconstructs this index’s empirical data. We point out that a four-group company market organization is universal because it can perfectly describe the essential features of the spectrum of dimensions, regardless of the analyzed series of shares. The apparent differences from the empirical data appear only at the level of subtle effects. Full article
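The multifractal machinery the abstract refers to (a partition function over scales, the scaling exponent τ(β), and the Legendre transform to the singularity spectrum f(α)) can be sketched on a toy measure. The snippet below is an illustrative numpy implementation on a deterministic binomial cascade, not the paper's company-share data; the cascade parameter p and the scale range are arbitrary choices made for the sketch.

```python
import numpy as np

def binomial_cascade(p=0.7, depth=12):
    # Toy multifractal measure: deterministic binomial cascade on [0, 1].
    w = np.array([1.0])
    for _ in range(depth):
        w = np.concatenate([w * p, w * (1 - p)])
    return w

def tau_spectrum(weights, betas, depths):
    # Partition function Z(beta, s) = sum_i mu_i(s)^beta at scale s = 2^-d;
    # tau(beta) is the slope of log Z against log s, fitted over the scales.
    taus = []
    for beta in betas:
        log_z = [np.log(np.sum(weights.reshape(2**d, -1).sum(axis=1) ** beta))
                 for d in depths]
        log_s = [d * np.log(0.5) for d in depths]
        taus.append(np.polyfit(log_s, log_z, 1)[0])
    return np.array(taus)

betas = np.linspace(-3, 3, 13)
tau = tau_spectrum(binomial_cascade(), betas, depths=[6, 8, 10, 12])
# Legendre transform: alpha = dtau/dbeta, f(alpha) = beta * alpha - tau
alpha = np.gradient(tau, betas)
f = betas * alpha - tau
```

A nonlinear τ(β), and hence a nondegenerate f(α) with a finite span Δα, is the signature of multifractality that the paper probes on the share data.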
(This article belongs to the Special Issue Three Risky Decades: A Time for Econophysics?)
Show Figures

Figure 1
<p>Quetelet curve: the dependence of the standardized rank of companies generated within our model, i.e., CDF, on their shares <math display="inline"><semantics> <mi>ω</mi> </semantics></math>. It is precisely to analyze this simulation data that we use multifractal formalism.</p>
Full article ">Figure 2
<p>The typical dependence of <math display="inline"><semantics> <mi mathvariant="sans-serif">Λ</mi> </semantics></math> (=<math display="inline"><semantics> <msub> <mi>N</mi> <mrow> <mi>s</mi> <mi>t</mi> </mrow> </msub> </semantics></math>) vs. interventionism level <span class="html-italic">q</span> at fixed <math display="inline"><semantics> <mi>η</mi> </semantics></math> (=0.5) and <math display="inline"><semantics> <mi>λ</mi> </semantics></math> (=0.9). It is a flat phase diagram where a continuous phase transition is clearly visible at <math display="inline"><semantics> <msub> <mi>q</mi> <mi>c</mi> </msub> </semantics></math> (=0.734). All other plots in this section have the same <math display="inline"><semantics> <mi>η</mi> </semantics></math> and <math display="inline"><semantics> <mi>λ</mi> </semantics></math> parameters as this plot.</p>
Full article ">Figure 3
<p>Scaling exponent <math display="inline"><semantics> <mrow> <mi>τ</mi> <mo>(</mo> <mi>β</mi> <mo>)</mo> </mrow> </semantics></math> vs. exponent <math display="inline"><semantics> <mi>β</mi> </semantics></math> (the order of scale). Its nonlinear/multifractal behavior in the range of <math display="inline"><semantics> <mrow> <mi>β</mi> <mo>∈</mo> <mo>[</mo> <mo>−</mo> <mn>1.0</mn> <mo>,</mo> <mspace width="3.33333pt"/> <mn>2.0</mn> <mo>]</mo> </mrow> </semantics></math> for interventionism level <math display="inline"><semantics> <mrow> <mn>0</mn> <mo>&lt;</mo> <mi>q</mi> <mo>&lt;</mo> <mn>1</mn> </mrow> </semantics></math> is clearly seen (especially on the zoomed plot). On the other hand, the plot on the right shows the existence of oblique asymptotes. Multifractality is present if and only if they are different from each other. For example, we have selected ten characteristic levels of interventionism here (see the legend). The sharp decrease in the slope difference of the asymptotes for <math display="inline"><semantics> <mrow> <mi>q</mi> <mo>≈</mo> <mn>1</mn> </mrow> </semantics></math> (blue dashed curves) is visible. We use the same set of <span class="html-italic">q</span> values in all plots in <a href="#sec2-entropy-24-00130" class="html-sec">Section 2</a>.</p>
Full article ">Figure 4
<p>Dependence of Rényi dimensions <span class="html-italic">D</span> on <math display="inline"><semantics> <mi>β</mi> </semantics></math>. A sharp drop in the <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>D</mi> <mo>(</mo> <mi>β</mi> <mo>)</mo> </mrow> </semantics></math> span is clearly visible on the right plot for large values of <math display="inline"><semantics> <mrow> <mo>|</mo> <mi>β</mi> <mo>|</mo> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>q</mi> <mo>≈</mo> <mn>1</mn> </mrow> </semantics></math> (blue dashed curve). This is the result of the behavior of the <math display="inline"><semantics> <mrow> <mi>τ</mi> <mo>(</mo> <mi>β</mi> <mo>)</mo> </mrow> </semantics></math> vs. <math display="inline"><semantics> <mi>β</mi> </semantics></math> curve shown in <a href="#entropy-24-00130-f003" class="html-fig">Figure 3</a>.</p>
Full article ">Figure 5
<p>The dependence of the generalized Hurst exponent <span class="html-italic">h</span> and its span <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>h</mi> </mrow> </semantics></math> on <math display="inline"><semantics> <mi>β</mi> </semantics></math>. A sharp drop in the <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>h</mi> <mo>(</mo> <mi>β</mi> <mo>)</mo> </mrow> </semantics></math> span is clearly visible for large values of <math display="inline"><semantics> <mrow> <mo>|</mo> <mi>β</mi> <mo>|</mo> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>q</mi> <mo>≈</mo> <mn>1</mn> </mrow> </semantics></math> (blue dashed curve). It is the result of the behavior of the <math display="inline"><semantics> <mrow> <mi>τ</mi> <mo>(</mo> <mi>β</mi> <mo>)</mo> </mrow> </semantics></math> vs. <math display="inline"><semantics> <mi>β</mi> </semantics></math> curve shown in <a href="#entropy-24-00130-f003" class="html-fig">Figure 3</a>.</p>
Full article ">Figure 6
<p>Dependence of the local singularity <math display="inline"><semantics> <mi>α</mi> </semantics></math> on <math display="inline"><semantics> <mi>β</mi> </semantics></math>. A sharp drop in the <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>α</mi> <mo>(</mo> <mi>β</mi> <mo>)</mo> </mrow> </semantics></math> span is clearly visible on the right plot for large values of <math display="inline"><semantics> <mrow> <mo>|</mo> <mi>β</mi> <mo>|</mo> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>q</mi> <mo>≈</mo> <mn>1</mn> </mrow> </semantics></math> (blue dashed curve). This is the result of the behavior of the <math display="inline"><semantics> <mrow> <mi>τ</mi> <mo>(</mo> <mi>β</mi> <mo>)</mo> </mrow> </semantics></math> vs. <math display="inline"><semantics> <mi>β</mi> </semantics></math> curve shown in <a href="#entropy-24-00130-f003" class="html-fig">Figure 3</a>.</p>
Full article ">Figure 7
<p>Dependence of the local singularity span <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>α</mi> </mrow> </semantics></math> on <span class="html-italic">q</span> at fixed <math display="inline"><semantics> <mrow> <mi>β</mi> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math>. A slight but distinct peak is located near <math display="inline"><semantics> <mrow> <msub> <mi>q</mi> <mi>c</mi> </msub> <mo>=</mo> <mn>0.735</mn> </mrow> </semantics></math>, which defines the criticality threshold used in our earlier work [<a href="#B15-entropy-24-00130" class="html-bibr">15</a>]. We also included a magnification of this peak.</p>
Full article ">Figure 8
<p>Dependence of spectrum of dimensions, <span class="html-italic">f</span>, from <math display="inline"><semantics> <mi>α</mi> </semantics></math> (left plot) and <math display="inline"><semantics> <mi>β</mi> </semantics></math> (right plot). There is a visible nonlinear dependence of the shape <span class="html-italic">f</span> on the level of interventionism <span class="html-italic">q</span>. Moreover, there is a wide spread in the spectrum of singularities <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>α</mi> </mrow> </semantics></math>. As expected, the same applies to the dependence of <span class="html-italic">f</span> on <math display="inline"><semantics> <mi>β</mi> </semantics></math>. In addition, there is a slight asymmetry of <span class="html-italic">f</span>, i.e., <math display="inline"><semantics> <mrow> <mi>γ</mi> <mo>&gt;</mo> <mn>0</mn> </mrow> </semantics></math>, herein.</p>
Full article ">Figure 9
<p>Dependence of specific heat, <span class="html-italic">c</span>, for a constant volume (<math display="inline"><semantics> <mrow> <mi>V</mi> <mo>=</mo> <mo form="prefix">ln</mo> <msub> <mi>N</mi> <mrow> <mi>s</mi> <mi>t</mi> </mrow> </msub> </mrow> </semantics></math>) on <math display="inline"><semantics> <mi>β</mi> </semantics></math>. The anomalous behavior of <span class="html-italic">c</span> is apparent due to the presence of Schottky peaks for both the positive and negative values of <math display="inline"><semantics> <mi>β</mi> </semantics></math>.</p>
Full article ">Figure 10
<p>Quetelet curve: the empirical dependence of the standardized rank of companies, belonging to the <math display="inline"><semantics> <mrow> <mi>S</mi> <mo>&amp;</mo> <mi>P</mi> <mspace width="3.33333pt"/> <mn>500</mn> </mrow> </semantics></math> index, i.e., CDF, on their shares <math display="inline"><semantics> <mi>ω</mi> </semantics></math>. It is precisely to analyze this data that we use multifractal formalism.</p>
Full article ">Figure 11
<p>Dependence of <math display="inline"><semantics> <mrow> <mi>τ</mi> <mo>(</mo> <mi>β</mi> <mo>)</mo> </mrow> </semantics></math> vs. <math display="inline"><semantics> <mi>β</mi> </semantics></math> for the company market from the <math display="inline"><semantics> <mrow> <mi>S</mi> <mo>&amp;</mo> <mi>P</mi> </mrow> </semantics></math> 500 index. The left plot is a magnification of the <math display="inline"><semantics> <mi>β</mi> </semantics></math> range belonging to the <math display="inline"><semantics> <mrow> <mo>[</mo> <mo>−</mo> <mn>1.5</mn> <mo>,</mo> <mn>2.0</mn> <mo>]</mo> </mrow> </semantics></math> interval. The right plot shows the one in the full <math display="inline"><semantics> <mi>β</mi> </semantics></math> range, i.e., belonging to the <math display="inline"><semantics> <mrow> <mo>[</mo> <mo>−</mo> <mn>10</mn> <mo>,</mo> <mn>10</mn> <mo>]</mo> </mrow> </semantics></math> interval. In the assumed plot’s resolution of the whole (right) graph, it is impossible to distinguish the results of the four-group company market model (red curve) from the empirical (black) curve.</p>
Full article ">Figure 12
<p>Dependence of the generalized Hurst exponent <math display="inline"><semantics> <mrow> <mi>h</mi> <mo>(</mo> <mi>β</mi> <mo>)</mo> </mrow> </semantics></math> on the <math display="inline"><semantics> <mi>β</mi> </semantics></math> exponent. Its span is sufficient for one of the spectra of dimensions presented in <a href="#entropy-24-00130-f013" class="html-fig">Figure 13</a> (both curves have there a common span) to define a solid multifractality. There are slight/subtle local differences between the two curves in both figures (black: the empirical one; red: the four-group company market).</p>
Full article ">Figure 13
<p>Dependence of the spectrum of dimensions <math display="inline"><semantics> <mrow> <mi>f</mi> <mo>(</mo> <mi>α</mi> <mo>)</mo> </mrow> </semantics></math> vs. <math display="inline"><semantics> <mi>α</mi> </semantics></math> for the company market from the <math display="inline"><semantics> <mrow> <mi>S</mi> <mo>&amp;</mo> <mi>P</mi> <mspace width="3.33333pt"/> <mn>500</mn> </mrow> </semantics></math> index (black curve). The <span class="html-italic">f</span> asymmetry favoring large firms is visible. For comparison, we have included the spectra of dimensions for the four-group company market represented by the red curve.</p>
Full article ">Figure 14
<p>Anomalous dependence of specific heat <math display="inline"><semantics> <mrow> <mi>c</mi> <mo>(</mo> <mi>β</mi> <mo>)</mo> </mrow> </semantics></math> vs. <math display="inline"><semantics> <mi>β</mi> </semantics></math> for the company market, for example, from the <math display="inline"><semantics> <mrow> <mi>S</mi> <mo>&amp;</mo> <mi>P</mi> <mspace width="3.33333pt"/> <mn>500</mn> </mrow> </semantics></math> index. As can be seen, the four-group company market model shows apparent differences from the empirical data only at the level of the second <math display="inline"><semantics> <mi>τ</mi> </semantics></math> derivative, i.e., at the level of hyper-fine effects.</p>
Full article ">Figure 15
<p>An example plot of the spectrum of dimensions <span class="html-italic">f</span> vs. <math display="inline"><semantics> <mi>α</mi> </semantics></math> for the company market consisting of the four groups. Characteristic coordinates that we read from the graph, define the conditions (considered in the main text), which help us to determine the unknowns <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>,</mo> <msub> <mi>K</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi>K</mi> <mn>2</mn> </msub> <mo>,</mo> <mi>L</mi> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msup> <mi>ω</mi> <mo movablelimits="true" form="prefix">min</mo> </msup> <mo>,</mo> <msub> <mi>ω</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi>ω</mi> <mn>2</mn> </msub> <mo>,</mo> <msup> <mi>ω</mi> <mo movablelimits="true" form="prefix">max</mo> </msup> </mrow> </semantics></math>.</p>
Full article ">Figure 16
<p>Schematic classification of spectrum of dimensions due to asymmetry <math display="inline"><semantics> <mi>γ</mi> </semantics></math> and degeneration <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>M</mi> <mo>,</mo> <mi>L</mi> <mo>)</mo> </mrow> </semantics></math>.</p>
Full article ">
15 pages, 1956 KiB  
Article
Cooperative Spectrum Sensing Based on Multi-Features Combination Network in Cognitive Radio Network
by Mingdong Xu, Zhendong Yin, Yanlong Zhao and Zhilu Wu
Entropy 2022, 24(1), 129; https://doi.org/10.3390/e24010129 - 15 Jan 2022
Cited by 22 | Viewed by 3190
Abstract
Cognitive radio, as a key technology to improve the utilization of the radio spectrum, has acquired much attention. Moreover, spectrum sensing has an irreplaceable position in the field of cognitive radio and has been widely studied. Convolutional neural networks (CNNs) and the gate recurrent unit (GRU) are complementary in their modelling capabilities. In this paper, we introduce a CNN-GRU network to obtain the local information for single-node spectrum sensing, in which the CNN is used to extract spatial features and the GRU is used to extract temporal features. Then, the combination network receives the features extracted by the CNN-GRU network to achieve multi-feature combination and obtains the final cooperative result. The cooperative spectrum sensing scheme based on the Multifeatures Combination Network enhances sensing reliability by fusing the local information from different sensing nodes. To accommodate the detection of multiple types of signals, we generated 8 kinds of modulation types to train the model. Theoretical analysis and simulation results show that the cooperative spectrum sensing algorithm proposed in this paper improves detection performance with no prior knowledge about the primary user or the channel state. Our proposed method achieves competitive performance under conditions of a large dynamic signal-to-noise ratio. Full article
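As a rough illustration of the data flow described above (a CNN extracting spatial features from each node's received samples, a GRU extracting temporal features, and a combination network fusing the per-node features into one decision), here is a minimal numpy sketch with scalar random weights. It is emphatically not the paper's trained Multifeatures Combination Network; every weight and dimension here is an arbitrary stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    # Valid 1-D convolution: x of length T, kernel w of length k -> T-k+1 outputs.
    k = len(w)
    return np.array([x[i:i + k] @ w for i in range(len(x) - k + 1)])

def gru_last_state(x, Wz, Wr, Wh, Uz, Ur, Uh):
    # Single-unit GRU run over a feature sequence; returns the final hidden state.
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    h = 0.0
    for t in x:
        z = sig(Wz * t + Uz * h)              # update gate
        r = sig(Wr * t + Ur * h)              # reset gate
        h_tilde = np.tanh(Wh * t + Uh * (r * h))
        h = (1 - z) * h + z * h_tilde
    return h

def node_feature(signal, params):
    spatial = conv1d(signal, params['w'])                 # CNN: spatial feature
    return gru_last_state(spatial, *params['gru'])        # GRU: temporal feature

# K sensing nodes observe noisy samples of the same band.
K, T = 4, 64
params = {'w': rng.normal(size=5), 'gru': rng.normal(size=6)}
signals = [rng.normal(size=T) for _ in range(K)]
features = np.array([node_feature(s, params) for s in signals])

# Combination network, here reduced to a logistic readout over the fused features.
w_fuse, b = rng.normal(size=K), 0.0
p_occupied = 1.0 / (1.0 + np.exp(-(features @ w_fuse + b)))
```

The fusion step is what makes the scheme cooperative: the readout sees all K local features at once, so a node with a poor channel can be outvoted by nodes with better observations.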
(This article belongs to the Special Issue Methods in Artificial Intelligence and Information Processing)
Show Figures

Figure 1
<p>System model for cooperative spectrum sensing.</p>
Full article ">Figure 2
<p>Architecture diagram of Multifeatures Combination Network (MCN) for cooperative spectrum sensing.</p>
Full article ">Figure 3
<p>Architecture diagram of gate recurrent unit (GRU) for cooperative spectrum sensing.</p>
Full article ">Figure 4
<p>Schematic diagram of one-dimensional convolution neural network (1D-CNN) for cooperative spectrum sensing.</p>
Full article ">Figure 5
<p>Detection performance for models with different number of nodes.</p>
Full article ">Figure 6
<p>Detection performance for various spectrum sensing methods.</p>
Full article ">Figure 7
<p>Receiver operating characteristic (ROC) curves for different number of nodes at SNR = −18 dB.</p>
Full article ">Figure 8
<p>Performance of different modulation types for MCN (scheme 1) with 4 nodes.</p>
Full article ">
11 pages, 4944 KiB  
Article
Forest Fire Detection via Feature Entropy Guided Neural Network
by Zhenwei Guan, Feng Min, Wei He, Wenhua Fang and Tao Lu
Entropy 2022, 24(1), 128; https://doi.org/10.3390/e24010128 - 15 Jan 2022
Cited by 11 | Viewed by 2458
Abstract
Forest fire detection from videos or images is vital to forest firefighting. Most deep-learning-based approaches rely on converging image loss, which ignores the content of different fire scenes. In fact, complex image content always has higher entropy. From this perspective, we propose a novel feature entropy guided neural network for forest fire detection, which is used to balance the content complexity of different training samples. Specifically, a larger weight is given to the feature of a sample with a high-entropy source when calculating the classification loss. In addition, we also propose a color attention neural network, which mainly consists of several repeated multiple-blocks of color-attention modules (MCM). Each MCM module can adequately extract the color feature information of fire. The experimental results show that our proposed method outperforms the state-of-the-art methods. Full article
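The entropy-guided weighting idea can be sketched directly: compute the Shannon entropy of each training image's gray-level histogram and scale that sample's classification loss accordingly, so that complex (high-entropy) scenes contribute more. The particular weighting below (entropies normalized to sum to one) is an assumption made for illustration, not the paper's exact loss formulation.

```python
import numpy as np

def image_entropy(img, bins=256):
    # Shannon entropy (in bits) of the image's gray-level histogram.
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def entropy_guided_ce(probs, labels, entropies):
    # Per-sample cross-entropy, weighted so that high-entropy (complex)
    # samples contribute more to the total loss.
    w = entropies / entropies.sum()
    ce = -np.log(probs[np.arange(len(labels)), labels])
    return (w * ce).sum()

# A flat image (entropy 0) and a noisy image (high entropy).
imgs = [np.full((8, 8), 128), np.random.default_rng(1).integers(0, 256, (8, 8))]
H = np.array([image_entropy(i) for i in imgs])

probs = np.array([[0.9, 0.1],   # softmax outputs for the two samples
                  [0.6, 0.4]])
loss = entropy_guided_ce(probs, np.array([0, 1]), H)
```

With this weighting, the easy uniform image contributes nothing and the loss is dominated by the harder, high-entropy sample, which is the balancing effect the abstract describes.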
Show Figures

Figure 1
<p>The figure shows training samples with different content complexity. Samples 1 and 2 have small entropy values, and the images are easy to distinguish as fire; samples 3 and 4 have relatively large entropy values, and the images are not easy to distinguish as fire.</p>
Full article ">Figure 2
<p>Our “MCM” module. The size of all convolution kernels is 1 × 1 except for depth separable convolution in <a href="#entropy-24-00128-f002" class="html-fig">Figure 2</a>.</p>
Full article ">Figure 3
<p>Overview of the forest fire detection algorithm using the proposed FireColorNet as the backbone, which contains three parts, Backbone, Neck and Head. Backbone mainly includes multiple blocks of color attention modules “MCM”, and the Neck is a scale feature fusion “PAN” structure, and the Head is used for prediction. The “Conv Block” represents convolution, normalization, and activation function operations. “× k” represents the number of repetitions of this module. Post processing represents non-maximum suppression operation in the inference phase.</p>
Full article ">Figure 4
<p>A few sample images from our created dataset.</p>
Full article ">Figure 5
<p>The above figure shows our four different network connection methods. Among them, the yellow box represents 1 × 1 convolution, and the red box represents mean pooling along the horizontal and vertical direction, and the blue box is marked with the sigmoid activation function, and the green represents feature splicing and a 1 × 1 convolution operation, and the orange represents normalization and activation function, and the gray represents our PA module. (<b>a</b>) variant1; (<b>b</b>) variant2; (<b>c</b>) variant3; (<b>d</b>) variant4.</p>
Full article ">Figure 6
<p>The above figure shows the curves of loss and accuracy with the number of training times in the training phase. The loss function adopts our cross entropy loss based on feature entropy guidance. (<b>a</b>) Variation curve of loss and accuracy for the training phase in dataset 1. (<b>b</b>) Variation curve of loss and accuracy for the training phase in dataset 2.</p>
Full article ">Figure 7
<p>The figure on the left shows the visualized result of the default SE module of EfficientNet-b0, and on the right, that of our MCM module. We can see that our method detects small and less obvious flames that the default SE module fails to detect. We use the same parameters in the detection phase. (<b>a</b>) Default SE module; (<b>b</b>) our MCM module.</p>
Full article ">Figure 8
<p>Compared to our method, Faster R-CNN and Grid R-CNN produce redundant predictions, and ATSS misses the flame in the lower right corner of the figure.</p>
Full article ">
13 pages, 1454 KiB  
Article
Security Analysis of Continuous-Variable Measurement-Device-Independent Quantum Key Distribution Systems in Complex Communication Environments
by Yi Zheng, Haobin Shi, Wei Pan, Quantao Wang and Jiahui Mao
Entropy 2022, 24(1), 127; https://doi.org/10.3390/e24010127 - 14 Jan 2022
Cited by 4 | Viewed by 2295
Abstract
Continuous-variable measurement-device-independent quantum key distribution (CV-MDI QKD) was proposed to remove all imperfections originating from detection. However, some inevitable imperfections remain in a practical CV-MDI QKD system. For example, the channel transmittance fluctuates in complex communication environments. Here, we investigate the security of the system under the effects of fluctuating channel transmittance, whereas in theory the transmittance is regarded as a fixed value related to the communication distance. We first discuss parameter estimation under fluctuating channel transmittance based on the established channel models, which shows an obvious deviation from the parameters estimated in the ideal case. Then, we show the evaluated results when the channel transmittance obeys the two-point distribution and the uniform distribution, respectively. In particular, the two distributions can easily be realized under the manipulation of eavesdroppers. Finally, we analyze the secret key rate of the system when the channel transmittance obeys the above distributions. The simulation analysis indicates that a slight fluctuation of the channel transmittance may seriously reduce the performance of the system, especially in the extreme asymmetric case. Furthermore, the communication between Alice, Bob and Charlie may be immediately interrupted. Therefore, eavesdroppers can manipulate the channel transmittance to mount a denial-of-service attack on a practical CV-MDI QKD system. To resist this attack, the Gaussian post-selection method can be exploited to calibrate the parameter estimation and reduce the deterioration of the system's performance. Full article
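The estimation bias that a fluctuating transmittance introduces can be reproduced in a few lines: a naive estimator that assumes a fixed channel recovers (E[√T])² from the quadrature covariance, which by Jensen's inequality is strictly below the true mean E[T], and the shortfall shows up as spurious excess noise. The two-point transmittance values and the modulation variance below are arbitrary illustrative choices, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
V_mod = 4.0  # Gaussian modulation variance, in shot-noise units

# Two-point fluctuating transmittance (e.g. an eavesdropper toggling the line).
T = rng.choice([0.2, 0.6], size=N, p=[0.5, 0.5])

x_A = rng.normal(0.0, np.sqrt(V_mod), N)            # Alice's quadrature data
x_B = np.sqrt(T) * x_A + rng.normal(0.0, 1.0, N)    # received + vacuum noise only

# Naive estimation assuming a fixed channel: cov(x_A, x_B) = sqrt(T) * V_mod,
# so T_hat = (cov / V_mod)^2, which converges to (E[sqrt(T)])^2.
T_hat = (np.mean(x_A * x_B) / V_mod) ** 2

# Apparent excess noise under the fixed-channel model Var(x_B) = T*V_mod + 1 + xi.
xi_hat = np.var(x_B) - T_hat * V_mod - 1.0
```

Even though no excess noise was injected, xi_hat comes out positive, so the legitimate parties either overestimate Eve's information (cutting the key rate) or, if the fluctuation is strong enough, abort, which is the denial-of-service mechanism the abstract describes.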
(This article belongs to the Topic Quantum Information and Quantum Computing)
Show Figures

Figure 1
<p>EB model of a practical GMCS CV-MDI-QKD system running in complex environments. Here, channel transmittance <math display="inline"><semantics> <msub> <mi>T</mi> <mrow> <mi>A</mi> <mi>C</mi> </mrow> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>T</mi> <mrow> <mi>B</mi> <mi>C</mi> </mrow> </msub> </semantics></math> are modeled to obey a certain distribution, which may be easily controlled by Eve.</p>
Full article ">Figure 2
<p>The probability density function of the channel transmittance when it obeys the two-point distribution, where <span class="html-italic">T</span> represents <math display="inline"><semantics> <msub> <mi>T</mi> <mrow> <mi>A</mi> <mi>C</mi> </mrow> </msub> </semantics></math> or <math display="inline"><semantics> <msub> <mi>T</mi> <mrow> <mi>B</mi> <mi>C</mi> </mrow> </msub> </semantics></math>.</p>
Full article ">Figure 3
<p>The probability density function of the channel transmittance when it obeys the uniform distribution, where <span class="html-italic">T</span> represents <math display="inline"><semantics> <msub> <mi>T</mi> <mrow> <mi>A</mi> <mi>C</mi> </mrow> </msub> </semantics></math> or <math display="inline"><semantics> <msub> <mi>T</mi> <mrow> <mi>B</mi> <mi>C</mi> </mrow> </msub> </semantics></math>.</p>
Full article ">Figure 4
<p>Secret key rate as a function of the transmission distance from Alice to Bob in the symmetric case when the channel transmittance obeys the two-point distribution, where <math display="inline"><semantics> <mrow> <msub> <mi>L</mi> <mrow> <mi>A</mi> <mi>C</mi> </mrow> </msub> <mo>=</mo> <msub> <mi>L</mi> <mrow> <mi>B</mi> <mi>C</mi> </mrow> </msub> </mrow> </semantics></math>. The fiber loss is 0.2 dB/km.</p>
Full article ">Figure 5
<p>Secret key rate vs the transmission distance from Alice to Bob in the extreme asymmetric case when the channel transmittance obeys the two-point distribution, where <math display="inline"><semantics> <mrow> <msub> <mi>L</mi> <mrow> <mi>B</mi> <mi>C</mi> </mrow> </msub> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 6
<p>Secret key rate as a function of the transmission distance from Alice to Bob in the symmetric case when the channel transmittance obeys the uniform distribution, where <math display="inline"><semantics> <mrow> <msub> <mi>L</mi> <mrow> <mi>A</mi> <mi>C</mi> </mrow> </msub> <mo>=</mo> <msub> <mi>L</mi> <mrow> <mi>B</mi> <mi>C</mi> </mrow> </msub> </mrow> </semantics></math>.</p>
Full article ">Figure 7
<p>Secret key rate vs the transmission distance from Alice to Bob in the extreme asymmetric case when the channel transmittance obeys the uniform distribution, where <math display="inline"><semantics> <mrow> <msub> <mi>L</mi> <mrow> <mi>B</mi> <mi>C</mi> </mrow> </msub> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>.</p>
Full article ">
20 pages, 709 KiB  
Article
Immunity in the ABM-DSGE Framework for Preventing and Controlling Epidemics—Validation of Results
by Jagoda Kaszowska-Mojsa, Przemysław Włodarczyk and Agata Szymańska
Entropy 2022, 24(1), 126; https://doi.org/10.3390/e24010126 - 14 Jan 2022
Cited by 2 | Viewed by 2405
Abstract
The COVID-19 pandemic has raised many questions on how to manage an epidemiological and economic crisis around the world. Since the beginning of the COVID-19 pandemic, scientists and policy makers have been asking how effective lockdowns are in preventing and controlling the spread of the virus. In the absence of vaccines, the regulators lacked any plausible alternatives. Nevertheless, after the introduction of vaccinations, it should be considered to what extent the conclusions of these analyses remain valid. In this paper, we present a study on the effect of vaccinations within a dynamic stochastic general equilibrium model with an agent-based epidemic component. We thus validated the results, obtained in November 2020, regarding the need to use lockdowns as an efficient tool for preventing and controlling epidemics. Full article
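The agent-based epidemic component can be sketched as a Markov chain over the seven health states listed in the paper's Figure 1 (healthy, infected, treated, quarantined, deceased, recovered, vaccinated). The transition matrix below uses made-up illustrative probabilities, not the paper's calibrated P^{ij} values; it only demonstrates the simulation mechanics.

```python
import numpy as np

# States: 0 h (healthy), 1 i (infected), 2 t (treated), 3 q (quarantined),
#         4 d (deceased), 5 r (recovered), 6 v (vaccinated).
# Illustrative stand-in probabilities; each row sums to 1.
P = np.array([
    # h     i     t     q     d     r     v
    [0.93, 0.04, 0.00, 0.02, 0.00, 0.00, 0.01],  # healthy
    [0.00, 0.70, 0.25, 0.00, 0.01, 0.04, 0.00],  # infected
    [0.00, 0.00, 0.80, 0.00, 0.02, 0.18, 0.00],  # treated
    [0.90, 0.00, 0.00, 0.10, 0.00, 0.00, 0.00],  # quarantined
    [0.00, 0.00, 0.00, 0.00, 1.00, 0.00, 0.00],  # deceased (absorbing)
    [0.00, 0.00, 0.00, 0.00, 0.00, 1.00, 0.00],  # recovered (absorbing)
    [0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 1.00],  # vaccinated (absorbing here)
])
cum = P.cumsum(axis=1)

rng = np.random.default_rng(42)
pop = np.zeros(10_000, dtype=int)  # everyone starts healthy
pop[:50] = 1                       # seed a few infections

for week in range(52):
    u = rng.random(pop.size)
    # Sample each agent's next state: count cumulative thresholds below u.
    pop = (u[:, None] > cum[pop]).sum(axis=1)

share = np.bincount(pop, minlength=7) / pop.size  # final health-state shares
```

In the full ABM-DSGE framework these health-state shares feed the macroeconomic block (e.g. the share of agents able to work drives aggregate labour productivity), which is how the vaccination scenarios translate into the conditional forecasts shown in the figures.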
(This article belongs to the Special Issue Entropy in Real-World Datasets and Its Impact on Machine Learning)
Show Figures

Figure 1
<p>State transition probabilities in the agent-based epidemic component. Health status: 1—healthy (<span class="html-italic">h</span>), 2—infected (<span class="html-italic">i</span>), 3—treated (<span class="html-italic">t</span>), 4—healthy individuals in preventive quarantine (<span class="html-italic">q</span>), 5—deceased (<span class="html-italic">d</span>), 6—recovered (<span class="html-italic">r</span>), 7—vaccinated (<span class="html-italic">v</span>) <math display="inline"><semantics> <msup> <mi>P</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> </msup> </semantics></math>—transition probability between states <span class="html-italic">i</span> and <span class="html-italic">j</span>, see <a href="#entropy-24-00126-t002" class="html-table">Table 2</a> and <a href="#entropy-24-00126-t003" class="html-table">Table 3</a>.</p>
Figure 2
<p>Changes in the health states in Scenario 1.1 (with immunity).</p>
Figure 3
<p>Changes in the health states in Scenario 1.2 (with immunity).</p>
Figure 4
<p>Changes in the health states in Scenario 1.3 (with immunity).</p>
Figure 5
<p>Aggregate labour productivity under the different COVID-19 prevention and control schemes. Please note that this figure is similar to the one that was published in [<a href="#B1-entropy-24-00126" class="html-bibr">1</a>] in November 2020. This figure enables the results for the scenarios that were analysed in 2021 to be compared with those from 2020.</p>
Figure 6
<p>Aggregate labour productivity under the different COVID-19 vaccination schemes. Vaccination Scenario 1 is (1.1); Vaccination Scenario 2 is (1.2) and Vaccination Scenario 3 is (1.3).</p>
Figure 7
<p>The major macroeconomic indicators under the different COVID-19 prevention and control schemes (conditional forecasts using the DSGE model). Please note that this figure is similar to the one that was published in [<a href="#B1-entropy-24-00126" class="html-bibr">1</a>] in November 2020. However, the capital accumulation process was recalibrated in the DSGE model as is explained in <a href="#sec5-entropy-24-00126" class="html-sec">Section 5</a>. This figure enables the results for scenarios analysed in 2021 to be compared with those from 2020.</p>
Figure 8
<p>The major macroeconomic indicators under the different COVID-19 vaccination schemes (conditional forecasts using the DSGE model). Vaccination Scenario 1 is (1.1); Vaccination Scenario 2 is (1.2) and Vaccination Scenario 3 is (1.3).</p>
17 pages, 1555 KiB  
Article
Inferring a Property of a Large System from a Small Number of Samples
by Damián G. Hernández and Inés Samengo
Entropy 2022, 24(1), 125; https://doi.org/10.3390/e24010125 - 14 Jan 2022
Cited by 2 | Viewed by 1757
Abstract
Inferring the value of a property of a large stochastic system is a difficult task when the number of samples is insufficient to reliably estimate the probability distribution. The Bayesian estimator of the property of interest requires knowledge of the prior distribution, and in many situations, it is not clear which prior should be used. Several estimators have been developed so far in which the proposed prior is individually tailored for each property of interest; such is the case, for example, for the entropy, the amount of mutual information, or the correlation between pairs of variables. In this paper, we propose a general framework to select priors that is valid for arbitrary properties. We first demonstrate that only certain aspects of the prior distribution actually affect the inference process. We then expand the sought prior as a linear combination of a one-dimensional family of indexed priors, each of which is obtained through a maximum entropy approach with constrained mean values of the property under study. In many cases of interest, only one or very few components of the expansion turn out to contribute to the Bayesian estimator, so it is often valid to only keep a single component. The relevant component is selected by the data, so no handcrafted priors are required. We test the performance of this approximation with a few paradigmatic examples and show that it performs well in comparison to the ad hoc methods previously proposed in the literature. Our method highlights the connection between Bayesian inference and equilibrium statistical mechanics, since the most relevant component of the expansion can be argued to be that with the right temperature. Full article
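The component-selection rule described above (the data pick the member of the indexed prior family for which prior and posterior estimates of the property coincide) can be sketched for the entropy of a discrete distribution. As an assumption, the sketch uses the Dirichlet family as the one-dimensional family of priors, with the concentration playing the role of the hyperparameter β; the paper's maximum-entropy construction is more general.

```python
import numpy as np
from scipy.special import xlogy  # xlogy(0, 0) = 0, safe for entropy sums

rng = np.random.default_rng(1)

def mean_entropy(alpha, n_draws=2000):
    """Monte Carlo mean Shannon entropy of q ~ Dirichlet(alpha)."""
    q = rng.dirichlet(alpha, size=n_draws)
    return float(-xlogy(q, q).sum(axis=1).mean())

# Hypothetical ground truth and an undersampled observation of it.
k = 50
true_q = rng.dirichlet(np.full(k, 0.2))
counts = np.bincount(rng.choice(k, size=40, p=true_q), minlength=k)

# beta-matching: pick the hyperparameter for which the prior and posterior
# expected entropies coincide (cf. the intersection in the paper's figures).
betas = np.logspace(-2, 1, 25)
prior = np.array([mean_entropy(np.full(k, b)) for b in betas])
post = np.array([mean_entropy(counts + b) for b in betas])
b0 = betas[np.argmin(np.abs(prior - post))]
estimate = mean_entropy(counts + b0)
true_H = float(-xlogy(true_q, true_q).sum())
print(f"beta0={b0:.3f}  estimate={estimate:.3f}  true={true_H:.3f}")
```

The grid search over β stands in for the posterior marginal evidence of the hyperparameter shown in the paper's Figure 2d.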
(This article belongs to the Special Issue Applications of Information Theory in Statistics)
Figure 1
<p>Conceptual framework used to estimate the property <math display="inline"><semantics> <mrow> <mi>F</mi> <mo>(</mo> <mi mathvariant="bold-italic">q</mi> <mo>)</mo> </mrow> </semantics></math> of a stochastic system with probabilities <math display="inline"><semantics> <mi mathvariant="bold-italic">q</mi> </semantics></math> from a limited number of observations, or samples <math display="inline"><semantics> <mi mathvariant="bold-italic">n</mi> </semantics></math>. (<b>a</b>) Several probabilities <math display="inline"><semantics> <mi mathvariant="bold-italic">q</mi> </semantics></math> can produce the same value of the property. All such <math display="inline"><semantics> <mi mathvariant="bold-italic">q</mi> </semantics></math>-vectors belong to the same level surface of <math display="inline"><semantics> <mrow> <mi>F</mi> <mo>(</mo> <mi mathvariant="bold-italic">q</mi> <mo>)</mo> </mrow> </semantics></math>. (<b>b</b>) Left: Example case in which the property <math display="inline"><semantics> <mrow> <mi>F</mi> <mo>(</mo> <mi mathvariant="bold-italic">q</mi> <mo>)</mo> </mrow> </semantics></math> has circular level surfaces on the simplex embedded in three-dimensional space. Two members of the family of base functions used to expand the prior are shown, containing the different <math display="inline"><semantics> <mi mathvariant="bold-italic">q</mi> </semantics></math> vectors shown in (<b>a</b>). These two members are solutions of the Maxentropy problem with two different expected values <span class="html-italic">f</span> of the property. Middle: The histogram <math display="inline"><semantics> <mi mathvariant="bold-italic">n</mi> </semantics></math> generated by the sampled data produces a likelihood <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>(</mo> <mi mathvariant="bold-italic">n</mi> <mo>|</mo> <mi mathvariant="bold-italic">q</mi> <mo>)</mo> </mrow> </semantics></math> that selectively favors a specific range of surface levels. 
Right: The most favored <span class="html-italic">f</span> value (or equivalently, <math display="inline"><semantics> <mi>β</mi> </semantics></math> value) is the one for which the prior and posterior estimates of the property coincide.</p>
Figure 2
<p>Toy example used to illustrate the estimation of the amount of mutual information in the same system as in <a href="#sec3dot1-entropy-24-00125" class="html-sec">Section 3.1</a>. (<b>a</b>) The system is governed by a bivariate probability distribution, with states characterized by two labels: <span class="html-italic">x</span> and <span class="html-italic">y</span>. There are many <span class="html-italic">x</span> states, <math display="inline"><semantics> <mrow> <msub> <mi>k</mi> <mi>x</mi> </msub> <mo>≫</mo> <mn>1</mn> </mrow> </semantics></math>, and all are equally probable; that is, <math display="inline"><semantics> <mrow> <msub> <mi>q</mi> <mi>x</mi> </msub> <mo>=</mo> <mn>1</mn> <mo>/</mo> <msub> <mi>k</mi> <mi>x</mi> </msub> </mrow> </semantics></math>. The variable <span class="html-italic">y</span> is binary, and its two values are depicted as red and blue, corresponding to 1 and 0, respectively. The conditional probabilities were sampled from a symmetric beta distribution with parameter <math display="inline"><semantics> <mrow> <mn>0.5</mn> </mrow> </semantics></math>. The goal is to estimate the mutual information <math display="inline"><semantics> <msub> <mi>I</mi> <mrow> <mi>X</mi> <mi>Y</mi> </mrow> </msub> </semantics></math> from <span class="html-italic">n</span> samples, with <math display="inline"><semantics> <mrow> <mi>n</mi> <mo>≳</mo> <msub> <mi>k</mi> <mi>x</mi> </msub> </mrow> </semantics></math>. (<b>b</b>) The level surfaces of the mutual information, as well as some sampled <math display="inline"><semantics> <mi mathvariant="bold-italic">q</mi> </semantics></math>-values, are displayed in a three-dimensional subspace of the full <math display="inline"><semantics> <mi mathvariant="bold-italic">q</mi> </semantics></math>-space, for different values of the hyperparameter <math display="inline"><semantics> <mi>β</mi> </semantics></math>. 
(<b>c</b>) Prior mean mutual information as a function of the scaled hyperparameter <math display="inline"><semantics> <mrow> <mi>β</mi> <mo>/</mo> <msub> <mi>k</mi> <mi>x</mi> </msub> </mrow> </semantics></math>, and its fluctuations for <math display="inline"><semantics> <mrow> <msub> <mi>k</mi> <mi>x</mi> </msub> <mo>=</mo> <mn>100</mn> </mrow> </semantics></math>. (<b>d</b>) Prior and posterior mean mutual information as a function of the scaled hyperparameter <math display="inline"><semantics> <mrow> <mi>β</mi> <mo>/</mo> <msub> <mi>k</mi> <mi>x</mi> </msub> </mrow> </semantics></math> for <math display="inline"><semantics> <mrow> <mi>n</mi> <mo>=</mo> <mn>60</mn> </mrow> </semantics></math> samples. The intersection of these curves corresponds to the MAP estimation <math display="inline"><semantics> <mrow> <mo>〈</mo> <mi>I</mi> <mo>|</mo> <msub> <mi>β</mi> <mn>0</mn> </msub> <mo>〉</mo> </mrow> </semantics></math>. In gray, the posterior marginal evidence for the hyperparameter, whose width decreases as the square root of the number of states with at least two samples, <math display="inline"><semantics> <msub> <mi>k</mi> <mn>2</mn> </msub> </semantics></math>. (<b>e</b>) Comparison of the estimation of mutual information between different methods for the considered set of samples (<math display="inline"><semantics> <msub> <mi>I</mi> <mi>HS</mi> </msub> </semantics></math> is the estimator from [<a href="#B8-entropy-24-00125" class="html-bibr">8</a>]). The horizontal line corresponds to the true value of the mutual information.</p>
Figure 3
<p>Entropy estimation in a toy example. (<b>a</b>) A distribution <math display="inline"><semantics> <msub> <mi>q</mi> <mn>0</mn> </msub> </semantics></math> has ranked probabilities that decrease with a power law (<math display="inline"><semantics> <mrow> <msub> <mi>q</mi> <mi>r</mi> </msub> <mo>∝</mo> <msup> <mi>r</mi> <mrow> <mo>−</mo> <mn>3</mn> <mo>/</mo> <mn>2</mn> </mrow> </msup> </mrow> </semantics></math>) within a finite number of states (<math display="inline"><semantics> <mrow> <mi>k</mi> <mo>=</mo> <mn>30</mn> </mrow> </semantics></math>). Distributions such as these represent a challenge for the estimation of entropy. Three possible sets of <math display="inline"><semantics> <mrow> <mi>n</mi> <mo>=</mo> <mn>6</mn> </mrow> </semantics></math> samples are displayed. (<b>b</b>) Prior mean entropy, plus/minus a standard deviation, as a function of the available number of states <span class="html-italic">k</span> for several values of the hyperparameter <math display="inline"><semantics> <mi>β</mi> </semantics></math>. (<b>c</b>) Prior and posterior mean entropy as function of <math display="inline"><semantics> <mi>β</mi> </semantics></math> for the different possible sets of multiplicities that can be obtained from <math display="inline"><semantics> <mrow> <mi>n</mi> <mo>=</mo> <mn>6</mn> </mrow> </semantics></math> samples (three examples shown in panel a, with matching colors). The intersections between prior and posterior mean entropy (MAP estimation) are marked with circles. The size of the circles is proportional to the likelihood of the corresponding multiplicity (a minimum size is imposed, for visibility). (<b>d</b>) Comparison of the average estimation of entropy between different methods over all the multiplicity sets (discarding the set with no coincidences, and the set with all samples in one state), weighting according to their likelihood. The horizontal line corresponds to the true value of the entropy.</p>
10 pages, 985 KiB  
Article
Prebiotic Aggregates (Tissues) Emerging from Reaction–Diffusion: Formation Time, Configuration Entropy and Optimal Spatial Dimension
by Juan Cesar Flores
Entropy 2022, 24(1), 124; https://doi.org/10.3390/e24010124 - 14 Jan 2022
Cited by 2 | Viewed by 1776
Abstract
For the formation of a proto-tissue, rather than a protocell, the use of reactant dynamics in a finite spatial region is considered. The framework is established on the basic concepts of replication, diversity, and heredity. Heredity, in the sense of the continuity of information and of similar traits, is characterized by the number of equivalent patterns conferring viability against selection processes. In the case of structural parameters and the diffusion coefficient of ribonucleic acid, the formation time ranges from a few years to some decades, depending on the spatial dimension (fractional or not). As long as equivalent patterns exist, the configuration entropy of proto-tissues can be defined and used as a practical tool. Consequently, the maximal diversity and weak fluctuations, for which proto-tissues can develop, occur at the spatial dimension 2.5. Full article
(This article belongs to the Section Entropy and Biology)
Figure 1
<p>Stability dimensionless parameter <math display="inline"><semantics> <mrow> <mi>λ</mi> <msub> <mi>L</mi> <mi>c</mi> </msub> <msup> <mrow/> <mn>2</mn> </msup> <mo>/</mo> <msub> <mi>D</mi> <mi>A</mi> </msub> </mrow> </semantics></math> as a function of the normalized wavenumber <math display="inline"><semantics> <mrow> <mrow> <mo>|</mo> <mover accent="true"> <mi>k</mi> <mo stretchy="false">→</mo> </mover> <mo>|</mo> </mrow> <msub> <mi>L</mi> <mi>c</mi> </msub> </mrow> </semantics></math>. The upper region between 0 and 1, verifying Equation (5), corresponds to the unstable manifold, and, consequently, patterns can develop from <math display="inline"><semantics> <mrow> <mi>A</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> (i.e., tissues). The parameter <math display="inline"><semantics> <mrow> <msub> <mi>L</mi> <mi>c</mi> </msub> <mo>=</mo> <msqrt> <mrow> <msub> <mi>D</mi> <mi>A</mi> </msub> <mo>/</mo> <mrow> <mo>(</mo> <mrow> <msub> <mi>k</mi> <mrow> <mi>A</mi> <mi>B</mi> </mrow> </msub> <mi>f</mi> <mrow> <mo>(</mo> <mrow> <mn>0</mn> <mo>,</mo> <msub> <mi>B</mi> <mi>o</mi> </msub> </mrow> <mo>)</mo> </mrow> <mo>−</mo> <msub> <mi>k</mi> <mi>A</mi> </msub> </mrow> <mo>)</mo> </mrow> </mrow> </msqrt> </mrow> </semantics></math> is the characteristic size of a protocell (Equation (9) with <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>).</p>
Figure 2
<p>The main graph presents the formation time <math display="inline"><semantics> <mi>τ</mi> </semantics></math> for tissues as a function of the spatial dimension <math display="inline"><semantics> <mi>d</mi> </semantics></math>. A large spatial dimension requires a significant amount of time for proto-tissues to develop. The graph was constructed using Equation (15) and relates to cell parameters and the RNA diffusion coefficient. The inset graph shows the maximal number of equivalent structures <math display="inline"><semantics> <mrow> <msub> <mi>N</mi> <mrow> <mi>H</mi> <mo>,</mo> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </msub> <mo>/</mo> <msup> <mrow> <mrow> <mo>(</mo> <mrow> <mi>L</mi> <mo>/</mo> <msub> <mi>L</mi> <mi>c</mi> </msub> </mrow> <mo>)</mo> </mrow> </mrow> <mrow> <mi>d</mi> <mo>−</mo> <mn>1</mn> </mrow> </msup> </mrow> </semantics></math> as a function of the spatial dimension. Consequently, the maximal diversity occurs at dimension <math display="inline"><semantics> <mrow> <mo>~</mo> <mn>2.5</mn> </mrow> </semantics></math>. The illustrative blue inset picture depicts <math display="inline"><semantics> <mrow> <msub> <mi>N</mi> <mi>H</mi> </msub> <mo>~</mo> <mn>30</mn> </mrow> </semantics></math> crack patches in which hypothetical tissues can eventually develop.</p>
Figure 3
<p>Main curve, the configuration entropy <math display="inline"><semantics> <mrow> <mi>S</mi> <mo>/</mo> <msub> <mi>k</mi> <mi>B</mi> </msub> </mrow> </semantics></math> of proto-tissues as a function of the spatial dimension <math display="inline"><semantics> <mrow> <mi>d</mi> </mrow> </semantics></math>. The maximum occurs numerically at <math display="inline"><semantics> <mrow> <mi>d</mi> <mo>~</mo> <mn>2.5</mn> </mrow> </semantics></math> and promotes diversity and complexity. Inset curve, the derivative <math display="inline"><semantics> <mrow> <mi>γ</mi> <mo>=</mo> <mrow> <mo>(</mo> <mrow> <mn>1</mn> <mo>/</mo> <msub> <mi>k</mi> <mi>B</mi> </msub> </mrow> <mo>)</mo> </mrow> <mo>∂</mo> <mi>S</mi> <mo>/</mo> <mo>∂</mo> <mi>d</mi> </mrow> </semantics></math> defines equilibrium in a system with uncertain (fluctuating) spatial dimension of average <math display="inline"><semantics> <mi>d</mi> </semantics></math>. At <math display="inline"><semantics> <mrow> <mi>d</mi> <mo>~</mo> <mn>5</mn> <mo>,</mo> <mtext> </mtext> <mi>γ</mi> <mo>~</mo> <mn>0</mn> </mrow> </semantics></math> is related to weak thermal fluctuations (<math display="inline"><semantics> <mrow> <mi>γ</mi> <mo>~</mo> <mi>δ</mi> <mi>T</mi> <mo>/</mo> <mn>2</mn> <mi>T</mi> </mrow> </semantics></math>) and is optimal for proto-tissue formation. The black point corresponds to <math display="inline"><semantics> <mrow> <mi>γ</mi> <mo>≈</mo> <mn>0.076</mn> </mrow> </semantics></math> for the Atacama Desert (Chile).</p>
18 pages, 540 KiB  
Article
Robust Statistical Inference in Generalized Linear Models Based on Minimum Renyi’s Pseudodistance Estimators
by María Jaenada and Leandro Pardo
Entropy 2022, 24(1), 123; https://doi.org/10.3390/e24010123 - 13 Jan 2022
Cited by 6 | Viewed by 2369
Abstract
Minimum Renyi’s pseudodistance estimators (MRPEs) enjoy good robustness properties without a significant loss of efficiency in general statistical models, and, in particular, for linear regression models (LRMs). In this line, Castilla et al. considered robust Wald-type test statistics in LRMs based on these MRPEs. In this paper, we extend the theory of MRPEs to Generalized Linear Models (GLMs) using independent and nonidentically distributed observations (INIDO). We derive asymptotic properties of the proposed estimators and analyze their influence function to assess their robustness properties. Additionally, we define robust Wald-type test statistics for testing linear hypotheses and theoretically study their asymptotic distribution, as well as their influence function. The performance of the proposed MRPEs and Wald-type test statistics is empirically examined for Poisson regression models through a simulation study, focusing on their robustness properties. We finally test the proposed methods in a real dataset related to the treatment of epilepsy, illustrating the superior performance of the robust MRPEs as well as Wald-type tests. Full article
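A minimal sketch of a minimum Rényi's pseudodistance-type fit for Poisson regression. The objective below is one standard form of the Rényi pseudodistance (γ-divergence-type) criterion and is an assumption on our part, not necessarily term-for-term the authors' definition; it tends to the negative log-likelihood as α → 0, while larger α downweights outlying observations through the power α on the model density.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

def rp_loss(beta, X, y, alpha, ymax=200):
    """Renyi-pseudodistance-type loss for Poisson regression (assumed form):
    minimize -(1/alpha) * log mean_i [ f_i(y_i)^alpha / (sum_y f_i(y)^(1+alpha))^(alpha/(1+alpha)) ].
    As alpha -> 0 this tends to the negative log-likelihood."""
    mu = np.exp(X @ beta)
    f_obs = poisson.pmf(y, mu)                      # f_i(y_i)
    grid = np.arange(ymax)[:, None]                 # truncated support grid
    norm = np.sum(poisson.pmf(grid, mu[None, :]) ** (1 + alpha), axis=0)
    w = f_obs ** alpha / norm ** (alpha / (1 + alpha))
    return -np.log(np.mean(w) + 1e-300) / alpha

rng = np.random.default_rng(2)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 0.5])
y = rng.poisson(np.exp(X @ beta_true)).astype(float)
y[:5] = 60.0                                        # five gross outliers

fit = lambda a: minimize(rp_loss, np.zeros(2), args=(X, y, a),
                         method="Nelder-Mead").x
b_ml = fit(1e-6)                                    # ~ maximum likelihood
b_rob = fit(0.5)                                    # robust fit
print("alpha~0:", b_ml, " alpha=0.5:", b_rob)
```

With planted outliers as above, the α = 0.5 fit should sit closer to the generating coefficients than the α ≈ 0 (quasi-ML) fit, which is the qualitative behaviour the simulation study reports.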
(This article belongs to the Section Information Theory, Probability and Statistics)
Figure 1
<p>IF of MRPEs with <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> (<b>left</b>) and <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math> (<b>right</b>) of Poisson regression model.</p>
Figure 2
<p>Mean Squared Error (MSE) on estimation (<b>left</b>) and prediction (<b>right</b>) against contamination level on data.</p>
Figure 3
<p>MSE in estimation of <math display="inline"><semantics> <mi mathvariant="bold-italic">β</mi> </semantics></math> in absence of contamination (<b>left</b>) and under <math display="inline"><semantics> <mrow> <mn>5</mn> <mo>%</mo> </mrow> </semantics></math> of contamination level in data (<b>right</b>) with different values of <math display="inline"><semantics> <mi>α</mi> </semantics></math> against sample size for Poisson regression model.</p>
15 pages, 15147 KiB  
Article
FPGA-Implemented Fractal Decoder with Forward Error Correction in Short-Reach Optical Interconnects
by Svitlana Matsenko, Oleksiy Borysenko, Sandis Spolitis, Aleksejs Udalcovs, Lilita Gegere, Aleksandr Krotov, Oskars Ozolins and Vjaceslavs Bobrovs
Entropy 2022, 24(1), 122; https://doi.org/10.3390/e24010122 - 13 Jan 2022
Cited by 2 | Viewed by 2455
Abstract
Forward error correction (FEC) codes combined with high-order modulation formats, i.e., coded modulation (CM), are essential in optical communication networks to achieve highly efficient and reliable communication. The task of providing additional error control in the design of CM systems with high-performance requirements remains urgent. As an additional control of CM systems, we propose using indivisible error-detection codes based on a positional number system. In this work, we evaluated the indivisible code using the average probability method (APM) for the binary symmetric channel (BSC); the APM is simple and versatile and yields a reliable estimate that is close to reality. The APM allows indivisible codes to be evaluated and compared according to the probabilities of correct transmission and of detectable and undetectable errors. Indivisible codes allow for end-to-end (E2E) control of the transmission and processing of information in digital systems and for the design of devices with a regular structure and high speed. This study investigated a fractal decoder device for additional error control, implemented on a field-programmable gate array (FPGA) with FEC for short-reach optical interconnects with multilevel pulse amplitude modulation (PAM-M) with Gray code mapping. Indivisible codes with natural redundancy require far lower hardware costs to develop and implement encoding and decoding devices with a sufficiently high error-detection efficiency. By using the fractal property of the indivisible code, we reduced the hardware costs of the fractal decoder by 10% to 30% for different n, while obtaining the reciprocal of the golden ratio. Full article
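The three APM classes (correct transmission C, undetectable error V, detectable error Z) can be computed by exact enumeration for small n over an asymmetric BSC with bit-error probabilities p10 (1→0) and p01 (0→1). As an assumption, the sketch models the indivisible codewords as length-n binary strings with no two adjacent ones (the Zeckendorf condition of the Fibonacci positional system, which is consistent with the golden-ratio remark above but is not necessarily the paper's exact code).

```python
from itertools import product

def valid(word):
    """Assumed codeword test: no two adjacent ones (Zeckendorf condition)."""
    return all(not (a and b) for a, b in zip(word, word[1:]))

def apm_classes(n, p10, p01):
    """Exact probabilities of correct transmission (C), undetectable error
    (V) and detectable error (Z), averaged over equiprobable codewords."""
    codewords = [w for w in product((0, 1), repeat=n) if valid(w)]
    C = V = Z = 0.0
    for sent in codewords:
        for received in product((0, 1), repeat=n):
            p = 1.0
            for s, r in zip(sent, received):
                if s == r:
                    p *= (1 - p10) if s == 1 else (1 - p01)
                else:
                    p *= p10 if s == 1 else p01
            if received == sent:
                C += p
            elif valid(received):
                V += p          # lands on another codeword: slips through
            else:
                Z += p          # invalid word: error detected
    m = len(codewords)
    return C / m, V / m, Z / m

C, V, Z = apm_classes(6, p10=3e-4, p01=3e-3)
print(f"C={C:.6f}  V={V:.6f}  Z={Z:.6f}")
```

The number of such codewords grows as a Fibonacci sequence, so the code rate approaches log2 of the golden ratio, which is where the "reciprocal of the golden ratio" structure of the fractal decoder comes from.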
Figure 1
<p>Possible transformations of code combinations of indivisible code into classes <span class="html-italic">C</span>, <span class="html-italic">V</span>, <span class="html-italic">Z</span>.</p>
Figure 2
<p>Probability of proper data transmission of indivisible code (<b>a</b>) Log<sub>10</sub>(<span class="html-italic">C</span>) from <span class="html-italic">n</span> for <span class="html-italic">p</span><sub>10</sub> = 3 × 10<sup>−4</sup>; 3 × 10<sup>−5</sup>; 3 × 10<sup>−6</sup>, and <span class="html-italic">p</span><sub>01</sub> = 3 × 10<sup>−3</sup>; 3 × 10<sup>−4</sup>; 3 × 10<sup>−5</sup>; (<b>b</b>) Log<sub>10</sub>(<span class="html-italic">C</span>) from Log<sub>10</sub>(<span class="html-italic">p</span><sub>10</sub>) for <span class="html-italic">p</span><sub>10</sub> = 1.5 × 10<sup>−3</sup>–1.5 × 10<sup>−7</sup>; 3 × 10<sup>−3</sup>–3 × 10<sup>−7</sup>; 5 × 10<sup>−3</sup>–5 × 10<sup>−7</sup>.</p>
Figure 3
<p>Probability of an undetectable error of the indivisible code (<b>a</b>) Log<sub>10</sub>(<span class="html-italic">V</span>) from <span class="html-italic">n</span> for <span class="html-italic">p</span><sub>10</sub> = 3 × 10<sup>−4</sup>; 3 × 10<sup>−5</sup>; 3 × 10<sup>−6</sup>, and <span class="html-italic">p</span><sub>01</sub> = 3 × 10<sup>−3</sup>; 3 × 10<sup>−4</sup>; 3 × 10<sup>−5</sup>; (<b>b</b>) Log<sub>10</sub>(<span class="html-italic">V</span>) from Log<sub>10</sub>(<span class="html-italic">p</span><sub>01</sub>) for <span class="html-italic">p</span><sub>01</sub> = 1.5 × 10<sup>−3</sup>–1.5 × 10<sup>−6</sup>; 3 × 10<sup>−3</sup>–3 × 10<sup>−6</sup>; 5 × 10<sup>−3</sup>–5 × 10<sup>−6</sup>.</p>
Figure 4
<p>Probability of an error detected in the indivisible code. (<b>a</b>) Log<sub>10</sub>(<span class="html-italic">Z</span>) from <span class="html-italic">n</span> for <span class="html-italic">p</span><sub>10</sub> = 3 × 10<sup>−4</sup>; 3 × 10<sup>−5</sup>; 3 × 10<sup>−6</sup>, and <span class="html-italic">p</span><sub>01</sub> = 3 × 10<sup>−3</sup>; 3 × 10<sup>−4</sup>; 3 × 10<sup>−5</sup>; (<b>b</b>) Log<sub>10</sub>(<span class="html-italic">Z</span>) from Log<sub>10</sub>(<span class="html-italic">p</span><sub>01</sub>) for <span class="html-italic">p</span><sub>01</sub> = 1.5 × 10<sup>−3</sup>–1.5 × 10<sup>−7</sup>; 3 × 10<sup>−3</sup>–3 × 10<sup>−7</sup>; 5 × 10<sup>−3</sup>–5 × 10<sup>−7</sup>.</p>
Figure 5
<p>WDM optical interconnect model.</p>
Figure 6
<p>Post-FEC BER vs. ROP with 56 and 35 Gbaud B2B and 1000 m SSMF for (<b>a</b>) PAM-4 modulation format with interleaved BCH + LDPC and RS + LDPC FEC, and (<b>b</b>) PAM-8 modulation format with interleaved BCH + LDPC and RS + LDPC FEC.</p>
Figure 7
<p>Post-FEC BER vs. pre-FEC BER with 56 and 35 Gbaud PAM-M (M = 4, 8) modulation formats with OSNR and EDFA for (<b>a</b>) PAM-4 modulation format with interleaved BCH + LDPC and RS + LDPC FEC, and (<b>b</b>) PAM-8 modulation format with LDPC, interleaved BCH + LDPC and RS + LDPC FEC.</p>
Figure 8
<p>(<b>a</b>) Fractal decoder block diagram; (<b>b</b>) fractal decoder circuit diagram.</p>
Figure 9
<p>Simulation waveform of the fractal decoder device.</p>
14 pages, 8783 KiB  
Article
A Damping-Tunable Snap System: From Dissipative Hyperchaos to Conservative Chaos
by Patinya Ketthong and Banlue Srisuchinwong
Entropy 2022, 24(1), 121; https://doi.org/10.3390/e24010121 - 13 Jan 2022
Cited by 3 | Viewed by 1810
Abstract
A hyperjerk system described by a single fourth-order ordinary differential equation of the form x⃜=f(x⃛,x¨,x˙,x) has been referred to as a snap system. A damping-tunable snap system, capable of an adjustable attractor dimension (DL) ranging from dissipative hyperchaos (DL<4) to conservative chaos (DL=4), is presented for the first time, in particular not only in a snap system, but also in a four-dimensional (4D) system. Such an attractor dimension is adjustable by nonlinear damping of a relatively simple quadratic function of the form Ax², easily tunable by a single parameter A. The proposed snap system is practically implemented and verified by the reconfigurable circuits of field programmable analog arrays (FPAAs). Full article
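Numerically, a snap system is integrated by rewriting the single fourth-order equation as a 4D first-order system in (x, x˙, x¨, x⃛). The abstract does not give the explicit f, so the right-hand side below is hypothetical; it only illustrates that a quadratic damping term Ax² on the highest derivative makes the phase-space divergence equal to −Ax², so A = 0 gives a volume-preserving (conservative) flow and A > 0 gives on-average contraction (dissipation), which is the tunability the abstract describes.

```python
import numpy as np

def snap_rhs(state, A):
    """Hypothetical snap system (the paper's exact f is not in the abstract):
    x'''' = -x - x'' - A*x**2 * x''', i.e. quadratic damping A*x**2 acting
    on the highest derivative; divergence of the flow is -A*x**2."""
    x, xd, xdd, xddd = state
    return np.array([xd, xdd, xddd, -x - xdd - A * x**2 * xddd])

def rk4(f, state, dt, steps, A):
    """Classical 4th-order Runge-Kutta integration of the 4D flow."""
    out = [state]
    for _ in range(steps):
        k1 = f(state, A)
        k2 = f(state + 0.5 * dt * k1, A)
        k3 = f(state + 0.5 * dt * k2, A)
        k4 = f(state + dt * k3, A)
        state = state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        out.append(state)
    return np.array(out)

x0 = np.array([0.1, 0.0, 0.0, 0.0])
traj_cons = rk4(snap_rhs, x0, 0.01, 1000, A=0.0)  # divergence 0: conservative
traj_diss = rk4(snap_rhs, x0, 0.01, 1000, A=1.0)  # divergence -x**2 <= 0
print(traj_cons[-1], traj_diss[-1])
```

The same conversion to a 4D first-order system is what the FPAA circuit realizes with four cascaded integrators.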
(This article belongs to the Section Complexity)
Figure 1
<p>(<b>a</b>) The Lyapunov dimension (<math display="inline"><semantics> <msub> <mi>D</mi> <mi>L</mi> </msub> </semantics></math>). (<b>b</b>) The spectrum of <math display="inline"><semantics> <mrow> <mi>L</mi> <mi>E</mi> <mi>s</mi> </mrow> </semantics></math> (<math display="inline"><semantics> <mrow> <msub> <mi>L</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi>L</mi> <mn>2</mn> </msub> <mo>,</mo> <msub> <mi>L</mi> <mn>3</mn> </msub> <mo>,</mo> <msub> <mi>L</mi> <mn>4</mn> </msub> </mrow> </semantics></math>), ordered from large to small values, of (<a href="#FD3-entropy-24-00121" class="html-disp-formula">3</a>) versus <span class="html-italic">A</span>.</p>
Figure 2
<p>The damping coefficient (<math display="inline"><semantics> <mi>α</mi> </semantics></math>) versus the parameter <span class="html-italic">A</span>.</p>
Figure 3
<p>A decrease in damping, from <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> </mrow> </semantics></math> 0.2655 to 0.0, allows a phase space expansion with the tendency of an increase in the attractor dimension, from <math display="inline"><semantics> <mrow> <msub> <mi>D</mi> <mi>L</mi> </msub> <mo>=</mo> </mrow> </semantics></math> 1.0 to 4.0.</p>
Figure 4
<p>Basins of attraction of (<a href="#FD11-entropy-24-00121" class="html-disp-formula">11</a>) in red and purple areas of conservative chaos and periodic oscillations, respectively, on an <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>x</mi> <mo>,</mo> <mi>y</mi> <mo>)</mo> </mrow> </semantics></math> plane where <math display="inline"><semantics> <mrow> <mi>z</mi> <mo>=</mo> <mi>w</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>. The blue dot is the equilibrium point at the origin. The system therefore exhibits multistability and coexisting attractors.</p>
Figure 5
<p>Two examples of coexisting attractors of (<a href="#FD11-entropy-24-00121" class="html-disp-formula">11</a>), of which the initial conditions are <math display="inline"><semantics> <mrow> <mo>(</mo> <mn>0.01</mn> <mo>,</mo> <mn>0</mn> <mo>,</mo> <mn>0</mn> <mo>,</mo> <mn>0</mn> <mo>)</mo> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mo>(</mo> <mo>−</mo> <mn>0.01</mn> <mo>,</mo> <mn>0</mn> <mo>,</mo> <mn>0</mn> <mo>,</mo> <mn>0</mn> <mo>)</mo> </mrow> </semantics></math> for conservative chaos and periodic oscillations, respectively.</p>
Figure 6
<p>Circuit implementation of the scaled system in (<a href="#FD14-entropy-24-00121" class="html-disp-formula">14</a>) where <math display="inline"><semantics> <mrow> <mi>A</mi> <mo>≠</mo> <mn>0</mn> </mrow> </semantics></math> for dissipative hyperchaos using the first chip (FPAA1).</p>
Figure 7
<p>Circuit implementation of the scaled system in (<a href="#FD14-entropy-24-00121" class="html-disp-formula">14</a>) where <math display="inline"><semantics> <mrow> <mi>A</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> for conservative chaos using the second chip (FPAA2).</p>
Figure 8
<p>Parameters of the configurable analog modules [<a href="#B15-entropy-24-00121" class="html-bibr">15</a>] of the first chip (FPAA1) for dissipative hyperchaos.</p>
Figure 9
<p>Parameters of the configurable analog modules [<a href="#B15-entropy-24-00121" class="html-bibr">15</a>] of the second chip (FPAA2) for conservative chaos.</p>
Figure 10
<p>Attractors of snap-based dissipative hyperchaos on the (<span class="html-italic">x</span>, <math display="inline"><semantics> <mover accent="true"> <mi>x</mi> <mo>˙</mo> </mover> </semantics></math>), (<span class="html-italic">x</span>, <math display="inline"><semantics> <mover accent="true"> <mi>x</mi> <mo>¨</mo> </mover> </semantics></math>), (<span class="html-italic">x</span>, <math display="inline"><semantics> <mover accent="true"> <mi>x</mi> <mo>⃛</mo> </mover> </semantics></math>), and (<math display="inline"><semantics> <mover accent="true"> <mi>x</mi> <mo>¨</mo> </mover> </semantics></math>, <math display="inline"><semantics> <mover accent="true"> <mi>x</mi> <mo>⃛</mo> </mover> </semantics></math>) planes. (<b>a</b>–<b>d</b>) Numerical trajectories, respectively. (<b>e</b>–<b>h</b>) FPAA-based oscilloscope traces, respectively.</p>
Figure 11
<p>Attractors of snap-based conservative chaos on the (<span class="html-italic">x</span>, <math display="inline"><semantics> <mover accent="true"> <mi>x</mi> <mo>˙</mo> </mover> </semantics></math>), (<span class="html-italic">x</span>, <math display="inline"><semantics> <mover accent="true"> <mi>x</mi> <mo>¨</mo> </mover> </semantics></math>), (<span class="html-italic">x</span>, <math display="inline"><semantics> <mover accent="true"> <mi>x</mi> <mo>⃛</mo> </mover> </semantics></math>), and (<math display="inline"><semantics> <mover accent="true"> <mi>x</mi> <mo>¨</mo> </mover> </semantics></math>, <math display="inline"><semantics> <mover accent="true"> <mi>x</mi> <mo>⃛</mo> </mover> </semantics></math>) planes. (<b>a</b>–<b>d</b>) Numerical trajectories, respectively. (<b>e</b>–<b>h</b>) FPAA-based oscilloscope traces, respectively.</p>
Full article ">
15 pages, 446 KiB  
Article
Weighted Relative Group Entropies and Associated Fisher Metrics
by Iulia-Elena Hirica, Cristina-Liliana Pripoae, Gabriel-Teodor Pripoae and Vasile Preda
Entropy 2022, 24(1), 120; https://doi.org/10.3390/e24010120 - 13 Jan 2022
Cited by 6 | Viewed by 1901
Abstract
A large family of new α-weighted group entropy functionals is defined and associated Fisher-like metrics are considered. All these notions are well-suited semi-Riemannian tools for the geometrization of entropy-related statistical models, where they may act as sensitive controlling invariants. The main result [...] Read more.
A large family of new α-weighted group entropy functionals is defined, and associated Fisher-like metrics are considered. All these notions are well-suited semi-Riemannian tools for the geometrization of entropy-related statistical models, where they may act as sensitive controlling invariants. The main result of the paper establishes a link between such a metric and a canonical one. A sufficient condition is found for the two metrics to be conformal (or homothetic). In particular, we recover a recent result, established for α=1 and for non-weighted relative group entropies. Our conformality condition is “universal”, in the sense that it does not depend on the group exponential. Full article
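In the group-entropy formalism this abstract builds on, Shannon's logarithm is replaced by a deformed "group" logarithm; Tsallis entropy is the textbook instance. A minimal sketch of that idea (this uses the standard Tsallis deformation, not the paper's α-weighted functionals, and the function names are ours):

```python
import math

def ln_q(x, q):
    """Deformed (Tsallis) logarithm ln_q(x) = (x**(1-q) - 1)/(1-q); q -> 1 recovers ln."""
    if abs(q - 1.0) < 1e-12:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def group_entropy(probs, q):
    """S_q = sum_k p_k * ln_q(1/p_k), which equals the Tsallis entropy
    (1 - sum_k p_k**q)/(q - 1) and reduces to Shannon entropy as q -> 1."""
    return sum(p * ln_q(1.0 / p, q) for p in probs if p > 0)

uniform = [0.25] * 4
print(group_entropy(uniform, 1.0))  # Shannon limit: ln 4
print(group_entropy(uniform, 2.0))  # Tsallis, q = 2: 0.75
```

For the uniform distribution both values are maximal, which is the kind of entropy-based invariant the paper's Fisher-like metrics are built from.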
(This article belongs to the Special Issue Measures of Information II)
Show Figures

Figure 1
<p>The variation of <math display="inline"><semantics> <msub> <mi>ρ</mi> <mi>G</mi> </msub> </semantics></math>, with the notation <math display="inline"><semantics> <mrow> <mi>x</mi> <mo>:</mo> <mo>=</mo> <msup> <mi>θ</mi> <mn>2</mn> </msup> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>y</mi> <mo>:</mo> <mo>=</mo> <mi>q</mi> </mrow> </semantics></math>.</p>
Full article ">Figure 2
<p>The variation of <math display="inline"><semantics> <msub> <mi>ρ</mi> <mi>G</mi> </msub> </semantics></math>, with the notation <math display="inline"><semantics> <mrow> <mi>x</mi> <mo>:</mo> <mo>=</mo> <msup> <mi>θ</mi> <mn>2</mn> </msup> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>y</mi> <mo>:</mo> <mo>=</mo> <mi>α</mi> <mo>&gt;</mo> <mn>0</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 3
<p>The variation of <math display="inline"><semantics> <msub> <mi>ρ</mi> <mi>G</mi> </msub> </semantics></math>, with the notation <math display="inline"><semantics> <mrow> <mi>x</mi> <mo>:</mo> <mo>=</mo> <msup> <mi>θ</mi> <mn>2</mn> </msup> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>y</mi> <mo>:</mo> <mo>=</mo> <mi>α</mi> <mo>∈</mo> <mo>(</mo> <mn>0</mn> <mo>,</mo> <mn>1</mn> <mo>)</mo> </mrow> </semantics></math>.</p>
Full article ">
14 pages, 3078 KiB  
Article
Fusion Domain-Adaptation CNN Driven by Images and Vibration Signals for Fault Diagnosis of Gearbox Cross-Working Conditions
by Gang Mao, Zhongzheng Zhang, Bin Qiao and Yongbo Li
Entropy 2022, 24(1), 119; https://doi.org/10.3390/e24010119 - 13 Jan 2022
Cited by 33 | Viewed by 3391
Abstract
The vibration signal of gearboxes contains abundant fault information, which can be used for condition monitoring. However, vibration signal is ineffective for some non-structural failures. In order to resolve this dilemma, infrared thermal images are introduced to combine with vibration signals via fusion [...] Read more.
The vibration signal of gearboxes contains abundant fault information, which can be used for condition monitoring. However, vibration signals are ineffective for some non-structural failures. To resolve this dilemma, infrared thermal images are combined with vibration signals via a fusion domain-adaptation convolutional neural network (FDACNN), which can diagnose both structural and non-structural failures under various working conditions. First, the measured raw signals are converted into the frequency spectrum and squared envelope spectrum to characterize the health states of the gearbox. Second, the sequences of the frequency and squared envelope spectra are arranged into a two-dimensional format and combined with infrared thermal images to form fusion data. Finally, an adversarial network is introduced to realize state recognition of structural and non-structural faults in the unlabeled target domain. An experiment on a gearbox test rig, measuring both vibration signals and infrared thermal images, was used for validation. The results suggest that the proposed FDACNN method performs best in cross-domain fault diagnosis of gearboxes via multi-source heterogeneous data compared with the other four methods. Full article
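The first preprocessing step named in the abstract, converting a raw vibration signal to its squared envelope spectrum, can be sketched in pure Python (an O(N²) DFT is used for self-containment; the authors' exact pipeline, windowing, and FFT tooling are not specified in the abstract):

```python
import cmath
import math

def dft(x):
    """Naive O(N^2) discrete Fourier transform (stand-in for an FFT)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def squared_envelope_spectrum(x):
    """Squared envelope spectrum of a real, even-length signal.

    The analytic signal is built in the frequency domain (double the
    positive frequencies, zero the negative ones); its squared magnitude
    is the squared envelope, whose spectrum reveals modulation frequencies.
    """
    N = len(x)
    X = dft(x)
    gain = [1.0] + [2.0] * (N // 2 - 1) + [1.0] + [0.0] * (N // 2 - 1)
    analytic = idft([X[k] * gain[k] for k in range(N)])
    env_sq = [abs(a) ** 2 for a in analytic]
    return [abs(c) / N for c in dft(env_sq)]

# Amplitude-modulated test tone: carrier at bin 16, modulation at bin 4.
N = 64
sig = [(1.0 + 0.5 * math.cos(2 * math.pi * 4 * n / N)) * math.cos(2 * math.pi * 16 * n / N)
       for n in range(N)]
spec = squared_envelope_spectrum(sig)
# The modulation frequency (bin 4) dominates the non-DC part of the spectrum,
# which is why this representation exposes periodic impacts in gear faults.
```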
Show Figures

Figure 1
<p>The procedures of the proposed method.</p>
Full article ">Figure 2
<p>The gear box fault simulator system: (<b>a</b>) the experimental test rig; (<b>b</b>) the layout of the test rig.</p>
Full article ">Figure 3
<p>Raw signals, spectral distribution and squared envelope spectrum of different health states. (<b>a</b>) Normal; (<b>b</b>) TB 50; (<b>c</b>) TB 100; (<b>d</b>) OS 1500; (<b>e</b>) OS 2000.</p>
Full article ">Figure 4
<p>The infrared thermal image of different health states.</p>
Full article ">Figure 5
<p>Bar diagram of results in different test tasks.</p>
Full article ">Figure 6
<p>Feature visualization of different test tasks.</p>
Full article ">
16 pages, 8083 KiB  
Article
MFAN: Multi-Level Features Attention Network for Fake Certificate Image Detection
by Yu Sun, Rongrong Ni and Yao Zhao
Entropy 2022, 24(1), 118; https://doi.org/10.3390/e24010118 - 13 Jan 2022
Cited by 7 | Viewed by 3209
Abstract
Up to now, most of the forensics methods have attached more attention to natural content images. To expand the application of image forensics technology, forgery detection for certificate images that can directly represent people’s rights and interests is investigated in this paper. Variable [...] Read more.
Up to now, most forensics methods have paid more attention to natural content images. To expand the application of image forensics technology, this paper investigates forgery detection for certificate images, which directly represent people’s rights and interests. Variable tampered-region scales and diverse manipulation types are two typical characteristics of fake certificate images. To tackle this task, a novel method called the Multi-level Feature Attention Network (MFAN) is proposed. MFAN follows an encoder–decoder network structure. To extract features with rich scale information in the encoder, on the one hand, we employ Atrous Spatial Pyramid Pooling (ASPP) on the final layer of a pre-trained residual network to capture contextual information at different scales; on the other hand, low-level features are concatenated to ensure sensitivity to small targets. Furthermore, the resulting multi-level features are recalibrated channel-wise to suppress irrelevant information and enhance the tampered regions, guiding MFAN to adapt to diverse manipulation traces. In the decoder module, the attentive feature maps are convolved and upsampled to effectively generate the prediction mask. Experimental results indicate that the proposed method outperforms some state-of-the-art forensics methods. Full article
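The channel-recalibration step (suppress irrelevant channels, enhance tampering-sensitive ones) reads like the squeeze-and-excitation pattern. A minimal pure-Python sketch under that assumption; MFAN's actual module dimensions, reduction ratio, and weights are not given in the abstract:

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def channel_recalibrate(feature_maps, w1, w2):
    """Squeeze-and-excitation-style channel recalibration.

    feature_maps: C channels, each an H x W grid (list of lists).
    w1, w2: C x C weight matrices of two tiny fully connected layers
    (kept square here for brevity; real modules reduce then expand).
    """
    C = len(feature_maps)
    # squeeze: global average pool each channel to a single descriptor
    squeezed = [sum(sum(row) for row in fm) / (len(fm) * len(fm[0]))
                for fm in feature_maps]
    # excitation: FC -> ReLU -> FC -> sigmoid gives one gate per channel
    hidden = [max(0.0, sum(w1[i][j] * squeezed[j] for j in range(C)))
              for i in range(C)]
    gates = [sigmoid(sum(w2[i][j] * hidden[j] for j in range(C)))
             for i in range(C)]
    # recalibrate: rescale every channel by its gate
    return [[[v * gates[c] for v in row] for row in fm]
            for c, fm in enumerate(feature_maps)]

# Two 2x2 channels with identity FC weights: the stronger channel gets a
# gate closer to 1, while the weaker one is damped more heavily.
maps = [[[1.0, 1.0], [1.0, 1.0]], [[4.0, 4.0], [4.0, 4.0]]]
eye = [[1.0, 0.0], [0.0, 1.0]]
out = channel_recalibrate(maps, eye, eye)
```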
(This article belongs to the Topic Machine and Deep Learning)
Show Figures

Figure 1
<p>Examples of certificate images. From left to right, the first column is the original image; the second column is the tampered certificate image, and the last column is the ground truth mask.</p>
Full article ">Figure 2
<p>The framework of Multi-level Feature Attention Network (MFAN). There are two sub-networks in MFAN: the encoder is used to extract rich features, and the decoder is designed to generate the binary localization map.</p>
Full article ">Figure 3
<p>The structure of feature recalibration module and visualization of <math display="inline"><semantics> <msub> <mi>F</mi> <mrow> <mi>f</mi> <mi>u</mi> <mi>s</mi> <mi>e</mi> </mrow> </msub> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>F</mi> <mi>R</mi> <mi>M</mi> </mrow> </semantics></math>.</p>
Full article ">Figure 4
<p>The structure of decoder network.</p>
Full article ">Figure 5
<p>Visualization results of (<b>a</b>) multi-level feature fusion and (<b>b</b>) feature recalibration.</p>
Full article ">Figure 6
<p>Some detection results on TIANCHI dataset. From top to bottom: tamper images, ground truth masks, CFA [<a href="#B16-entropy-24-00118" class="html-bibr">16</a>], NOI [<a href="#B43-entropy-24-00118" class="html-bibr">43</a>], RRU-Net [<a href="#B36-entropy-24-00118" class="html-bibr">36</a>], ManTra-Net [<a href="#B8-entropy-24-00118" class="html-bibr">8</a>], MVSS-Net [<a href="#B44-entropy-24-00118" class="html-bibr">44</a>], TianchiRank-3 [<a href="#B45-entropy-24-00118" class="html-bibr">45</a>], EncNet [<a href="#B46-entropy-24-00118" class="html-bibr">46</a>] and the proposed MFAN.</p>
Full article ">Figure 7
<p>Comparison results under four attacks. JPEG compression: (<b>a</b>), Gaussian noise: (<b>b</b>), resize: (<b>c</b>) and median blur: (<b>d</b>). Ordinates represent the <math display="inline"><semantics> <msub> <mi>F</mi> <mn>1</mn> </msub> </semantics></math> score.</p>
Full article ">Figure 8
<p>Some detection results on natural content images. From top to bottom: tamper images, ground truth masks, CFA [<a href="#B16-entropy-24-00118" class="html-bibr">16</a>], NOI [<a href="#B43-entropy-24-00118" class="html-bibr">43</a>], ManTra-Net [<a href="#B8-entropy-24-00118" class="html-bibr">8</a>], MVSS-Net [<a href="#B44-entropy-24-00118" class="html-bibr">44</a>], EncNet [<a href="#B46-entropy-24-00118" class="html-bibr">46</a>] and the proposed MFAN.</p>
Full article ">
19 pages, 1139 KiB  
Article
Variational Bayesian-Based Improved Maximum Mixture Correntropy Kalman Filter for Non-Gaussian Noise
by Xuyou Li, Yanda Guo and Qingwen Meng
Entropy 2022, 24(1), 117; https://doi.org/10.3390/e24010117 - 12 Jan 2022
Cited by 4 | Viewed by 2308
Abstract
The maximum correntropy Kalman filter (MCKF) is an effective algorithm that was proposed to solve the non-Gaussian filtering problem for linear systems. Compared with the original Kalman filter (KF), the MCKF is a sub-optimal filter with Gaussian correntropy objective function, which has been [...] Read more.
The maximum correntropy Kalman filter (MCKF) is an effective algorithm that was proposed to solve the non-Gaussian filtering problem for linear systems. Compared with the original Kalman filter (KF), the MCKF is a sub-optimal filter with a Gaussian correntropy objective function, which has been demonstrated to have excellent robustness to non-Gaussian noise. However, the performance of the MCKF is affected by its kernel bandwidth parameter, and a constant kernel bandwidth may lead to severe accuracy degradation in non-stationary noise. To solve this problem, the mixture correntropy method is further explored in this work, and an improved maximum mixture correntropy KF (IMMCKF) is proposed. By derivation, random variables that obey a Beta-Bernoulli distribution are taken as intermediate parameters, and a new hierarchical Gaussian state-space model is established. Finally, the unknown mixing probability and the state estimation vector at each moment are inferred via a variational Bayesian approach, which provides an effective solution for improving the applicability of MCKFs in non-stationary noise. Performance evaluations demonstrate that the proposed filter significantly improves on existing MCKFs in non-stationary noise. Full article
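The core mechanism, weighting the innovation by a correntropy kernel so that outliers are down-weighted, can be sketched for a 1-D measurement update. This is a toy fixed-weight version: the paper's filter additionally mixes two kernels and infers the mixing probability variationally, which this sketch omits.

```python
import math

def gaussian_kernel(e, sigma):
    """Correntropy kernel G_sigma(e) = exp(-e^2 / (2 sigma^2))."""
    return math.exp(-e * e / (2.0 * sigma * sigma))

def mixture_correntropy(e, sigma1, sigma2, mix):
    """Mixture correntropy: a convex combination of two Gaussian kernels,
    less sensitive to a single bandwidth choice than one kernel alone."""
    return mix * gaussian_kernel(e, sigma1) + (1.0 - mix) * gaussian_kernel(e, sigma2)

def robust_update(x_pred, p_pred, z, r, sigma):
    """1-D Kalman measurement update with a correntropy-weighted innovation.

    The measurement variance r is inflated by 1/w, where w in (0, 1] is the
    kernel evaluated at the normalized innovation, so gross outliers barely
    move the state estimate.
    """
    e = (z - x_pred) / math.sqrt(r)             # normalized innovation
    w = gaussian_kernel(e, sigma)
    k = p_pred / (p_pred + r / max(w, 1e-12))   # robustified Kalman gain
    return x_pred + k * (z - x_pred), (1.0 - k) * p_pred

x_in, _ = robust_update(0.0, 1.0, 0.1, 1.0, sigma=2.0)    # inlier: near-standard update
x_out, _ = robust_update(0.0, 1.0, 50.0, 1.0, sigma=2.0)  # outlier: update suppressed
```

With a small residual the gain is close to the standard Kalman gain; with a gross outlier the kernel weight collapses and the state barely moves.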
Show Figures

Figure 1
<p>RMSEs of the position from different filters.</p>
Full article ">Figure 2
<p>RMSEs of the velocity from different filters.</p>
Full article ">Figure 3
<p>ARMSEs of position from different filters with varying <math display="inline"><semantics> <msub> <mi>p</mi> <mn>2</mn> </msub> </semantics></math>.</p>
Full article ">Figure 4
<p>ARMSEs of velocity from different filters with varying <math display="inline"><semantics> <msub> <mi>p</mi> <mn>2</mn> </msub> </semantics></math>.</p>
Full article ">Figure 5
<p>ARMSEs versus iteration number from different filters.</p>
Full article ">Figure 6
<p>The test trajectory of the vehicle.</p>
Full article ">Figure 7
<p>The position errors from different filters in non-Gaussian noises.</p>
Full article ">Figure 8
<p>The velocity errors from different filters in non-Gaussian noises.</p>
Full article ">
12 pages, 304 KiB  
Article
On the Depth of Decision Trees with Hypotheses
by Mikhail Moshkov
Entropy 2022, 24(1), 116; https://doi.org/10.3390/e24010116 - 12 Jan 2022
Cited by 4 | Viewed by 2098
Abstract
In this paper, based on the results of rough set theory, test theory, and exact learning, we investigate decision trees over infinite sets of binary attributes represented as infinite binary information systems. We define the notion of a problem over an information system [...] Read more.
In this paper, based on the results of rough set theory, test theory, and exact learning, we investigate decision trees over infinite sets of binary attributes represented as infinite binary information systems. We define the notion of a problem over an information system and study three functions of the Shannon type, which characterize, in the worst case, how the minimum depth of a decision tree solving a problem depends on the number of attributes in the problem description. The three functions considered correspond to (i) decision trees using attributes, (ii) decision trees using hypotheses (an analog of equivalence queries from exact learning), and (iii) decision trees using both attributes and hypotheses. The first function has two possible types of behavior: logarithmic and linear (this result follows from more general results published by the author earlier). The second and third functions have three possible types of behavior: constant, logarithmic, and linear (these results were published by the author earlier without the proofs that are given in the present paper). Based on the obtained results, we divide the set of all infinite binary information systems into four complexity classes. In each class, the type of behavior of each of the three considered functions does not change. Full article
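The central quantity, the minimum depth of a decision tree that queries binary attributes, can be illustrated by brute force on a tiny finite information system. This is only an illustration of the depth measure: the paper's results concern infinite systems and trees with hypotheses, which this sketch does not model.

```python
def min_depth(rows, attrs):
    """Minimum depth of a decision tree separating all distinct rows,
    querying one binary attribute per internal node (exhaustive search).

    rows: tuples of 0/1 attribute values (the objects to separate),
    attrs: indices of attributes still available to query.
    """
    if len(set(rows)) <= 1:          # all remaining objects identical: a leaf
        return 0
    best = float("inf")
    for a in attrs:
        rest = [b for b in attrs if b != a]
        left = [r for r in rows if r[a] == 0]
        right = [r for r in rows if r[a] == 1]
        if not left or not right:    # this attribute does not split the set
            continue
        best = min(best, 1 + max(min_depth(left, rest), min_depth(right, rest)))
    return best

# Four objects described by three binary attributes: two queries suffice,
# matching the information-theoretic lower bound ceil(log2(4)) = 2.
objects = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(min_depth(objects, [0, 1, 2]))  # 2
```

The logarithmic-versus-linear dichotomy in the paper describes how this worst-case depth grows with the number of attributes over an infinite attribute set.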
(This article belongs to the Special Issue Rough Set Theory and Entropy in Information Science)
20 pages, 914 KiB  
Article
Estimating Distributions of Parameters in Nonlinear State Space Models with Replica Exchange Particle Marginal Metropolis–Hastings Method
by Hiroaki Inoue, Koji Hukushima and Toshiaki Omori
Entropy 2022, 24(1), 115; https://doi.org/10.3390/e24010115 - 12 Jan 2022
Cited by 3 | Viewed by 2549
Abstract
Extracting latent nonlinear dynamics from observed time-series data is important for understanding a dynamic system against the background of the observed data. A state space model is a probabilistic graphical model for time-series data, which describes the probabilistic dependence between latent variables at [...] Read more.
Extracting latent nonlinear dynamics from observed time-series data is important for understanding a dynamic system against the background of the observed data. A state space model is a probabilistic graphical model for time-series data, which describes the probabilistic dependence between latent variables at subsequent times and between latent variables and observations. Since, in many situations, the values of the parameters in the state space model are unknown, estimating the parameters from observations is an important task. The particle marginal Metropolis–Hastings (PMMH) method estimates the marginal posterior distribution of parameters obtained by marginalization over the distribution of latent variables in the state space model. Although, in principle, we can estimate the marginal posterior distribution of parameters by iterating this method indefinitely, in practice the estimated result depends on the initial values after a finite number of iterations. In this paper, we propose a replica exchange particle marginal Metropolis–Hastings (REPMMH) method to address this problem by combining the PMMH method with the replica exchange method. Using the proposed method, we simultaneously realize a global search at high temperatures and a local fine search at low temperatures. We evaluate the proposed method using simulated data obtained from the Izhikevich neuron model and a Lévy-driven stochastic volatility model, and we show that the proposed REPMMH method mitigates the initial-value dependence of the PMMH method and realizes efficient sampling of parameters in state space models compared with existing methods. Full article
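The replica-exchange ingredient can be sketched independently of the particle filter: several Metropolis–Hastings chains run at different temperatures and periodically swap states, so the cold chain can cross barriers between modes. A toy 1-D version (our sketch; the paper's method replaces the exact log-target with a particle-filter estimate of the marginal likelihood):

```python
import math
import random

def replica_exchange_mh(log_target, temps, n_iter, step=1.0, seed=0):
    """Toy replica-exchange Metropolis-Hastings sampler on a 1-D target.

    Replica r performs MH moves on log_target(x) / temps[r]; after each
    sweep, one random pair of neighbouring replicas attempts a state swap,
    letting the cold chain (temps[0]) escape local modes via the hot chains.
    Returns the samples of the coldest replica.
    """
    rng = random.Random(seed)
    x = [0.0] * len(temps)
    cold = []
    for _ in range(n_iter):
        for r, t in enumerate(temps):                 # local MH move per replica
            prop = x[r] + rng.gauss(0.0, step)
            accept = (log_target(prop) - log_target(x[r])) / t
            if rng.random() < math.exp(min(0.0, accept)):
                x[r] = prop
        r = rng.randrange(len(temps) - 1)             # neighbour swap attempt
        delta = (1.0 / temps[r] - 1.0 / temps[r + 1]) * (
            log_target(x[r + 1]) - log_target(x[r]))
        if rng.random() < math.exp(min(0.0, delta)):
            x[r], x[r + 1] = x[r + 1], x[r]
        cold.append(x[0])
    return cold

def log_p(v):
    """Bimodal toy target: equal-weight Gaussians at -3 and +3 (log-sum-exp)."""
    a, b = -0.5 * (v - 3.0) ** 2, -0.5 * (v + 3.0) ** 2
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

draws = replica_exchange_mh(log_p, temps=[1.0, 2.0, 4.0, 8.0], n_iter=5000)
```

A single cold chain tends to get stuck in one of the two modes; the exchange moves give it a path through the flatter, high-temperature targets.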
Show Figures

Figure 1
<p>Probabilistic graphical model of a state space model. <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="bold-italic">z</mi> <mrow> <mn>1</mn> <mo>:</mo> <mi>N</mi> </mrow> </msub> <mo>=</mo> <mfenced separators="" open="{" close="}"> <msub> <mi mathvariant="bold-italic">z</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi mathvariant="bold-italic">z</mi> <mn>2</mn> </msub> <mo>,</mo> <mo>…</mo> <mo>,</mo> <msub> <mi mathvariant="bold-italic">z</mi> <mi>N</mi> </msub> </mfenced> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="bold-italic">y</mi> <mrow> <mn>1</mn> <mo>:</mo> <mi>N</mi> </mrow> </msub> <mo>=</mo> <mfenced separators="" open="{" close="}"> <msub> <mi mathvariant="bold-italic">y</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi mathvariant="bold-italic">y</mi> <mn>2</mn> </msub> <mo>,</mo> <mo>…</mo> <mo>,</mo> <msub> <mi mathvariant="bold-italic">y</mi> <mi>N</mi> </msub> </mfenced> </mrow> </semantics></math>, respectively, represent latent variables and observations for time step <math display="inline"><semantics> <mrow> <mi>n</mi> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mn>2</mn> <mo>,</mo> <mo>…</mo> <mo>,</mo> <mi>N</mi> </mrow> </semantics></math>. 
The arrow to the latent variables <math display="inline"><semantics> <msub> <mi mathvariant="bold-italic">z</mi> <mi>n</mi> </msub> </semantics></math> at the time step <span class="html-italic">n</span> from the latent variables <math display="inline"><semantics> <msub> <mi mathvariant="bold-italic">z</mi> <mrow> <mi>n</mi> <mo>−</mo> <mn>1</mn> </mrow> </msub> </semantics></math> at the previous time step <math display="inline"><semantics> <mrow> <mi>n</mi> <mo>−</mo> <mn>1</mn> </mrow> </semantics></math> represents a system model <math display="inline"><semantics> <mrow> <mi>f</mi> <mfenced separators="" open="(" close=")"> <msub> <mi mathvariant="bold-italic">z</mi> <mi>n</mi> </msub> <mrow> <mrow/> <mo>|</mo> <mrow/> </mrow> <msub> <mi mathvariant="bold-italic">z</mi> <mrow> <mi>n</mi> <mo>−</mo> <mn>1</mn> </mrow> </msub> <mo>,</mo> <msub> <mi mathvariant="bold-italic">θ</mi> <mi>f</mi> </msub> </mfenced> </mrow> </semantics></math>, and the arrow to the observations <math display="inline"><semantics> <msub> <mi mathvariant="bold-italic">y</mi> <mi>n</mi> </msub> </semantics></math> at the time step <span class="html-italic">n</span> from the latent variables <math display="inline"><semantics> <msub> <mi mathvariant="bold-italic">z</mi> <mi>n</mi> </msub> </semantics></math> at the time step <span class="html-italic">n</span> represents an observation model <math display="inline"><semantics> <mrow> <mi>g</mi> <mfenced separators="" open="(" close=")"> <msub> <mi mathvariant="bold-italic">y</mi> <mi>n</mi> </msub> <mrow> <mrow/> <mo>|</mo> <mrow/> </mrow> <msub> <mi mathvariant="bold-italic">z</mi> <mi>n</mi> </msub> <mo>,</mo> <msub> <mi mathvariant="bold-italic">θ</mi> <mi>g</mi> </msub> </mfenced> </mrow> </semantics></math>. 
<math display="inline"><semantics> <mrow> <mi mathvariant="bold-sans-serif">Θ</mi> <mo>=</mo> <mfenced separators="" open="{" close="}"> <msub> <mi mathvariant="bold-italic">θ</mi> <mi>f</mi> </msub> <mo>,</mo> <msub> <mi mathvariant="bold-italic">θ</mi> <mi>g</mi> </msub> </mfenced> </mrow> </semantics></math> are parameters to be estimated.</p>
Full article ">Figure 2
<p>Schematic diagrams of the proposed replica exchange particle marginal Metropolis–Hastings (REPMMH) method. (<b>a</b>) The time-series observations <math display="inline"><semantics> <msub> <mi mathvariant="bold-italic">y</mi> <mrow> <mn>1</mn> <mo>:</mo> <mi>N</mi> </mrow> </msub> </semantics></math> as inputs. (<b>b</b>,<b>c</b>) The REPMMH method consisting of (<b>b</b>) the Metropolis–Hastings (MH) algorithms and (<b>c</b>) the sequential Monte Carlo (SMC) methods parallelly conducted at multiple temperatures. In the SMC method, the sample candidate <math display="inline"><semantics> <msup> <mrow> <mi mathvariant="bold-sans-serif">Θ</mi> </mrow> <mrow> <mfenced open="(" close=")"> <mi>r</mi> </mfenced> <mo>*</mo> </mrow> </msup> </semantics></math> proposed by the MH algorithm is used to obtain the marginal likelihood <math display="inline"><semantics> <mrow> <mi>p</mi> <msup> <mfenced separators="" open="(" close=")"> <msub> <mi mathvariant="bold-italic">y</mi> <mrow> <mn>1</mn> <mo>:</mo> <mi>N</mi> </mrow> </msub> <mrow> <mrow/> <mo>|</mo> <mrow/> </mrow> <msup> <mrow> <mi mathvariant="bold-sans-serif">Θ</mi> </mrow> <mrow> <mfenced open="(" close=")"> <mi>r</mi> </mfenced> <mo>*</mo> </mrow> </msup> </mfenced> <mfrac> <mn>1</mn> <msup> <mi>T</mi> <mfenced open="(" close=")"> <mi>r</mi> </mfenced> </msup> </mfrac> </msup> </mrow> </semantics></math>. By the SMC method, the marginalization over time-series of latent variables <math display="inline"><semantics> <msub> <mi mathvariant="bold-italic">z</mi> <mrow> <mn>1</mn> <mo>:</mo> <mi>N</mi> </mrow> </msub> </semantics></math> is conducted iteratively for time steps <math display="inline"><semantics> <mrow> <mi>n</mi> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mn>2</mn> <mo>,</mo> <mo>…</mo> <mo>,</mo> <mi>N</mi> </mrow> </semantics></math>. 
In the MH algorithm, the marginal likelihood <math display="inline"><semantics> <mrow> <mi>p</mi> <msup> <mfenced separators="" open="(" close=")"> <msub> <mi mathvariant="bold-italic">y</mi> <mrow> <mn>1</mn> <mo>:</mo> <mi>N</mi> </mrow> </msub> <mrow> <mrow/> <mo>|</mo> <mrow/> </mrow> <msup> <mrow> <mi mathvariant="bold-sans-serif">Θ</mi> </mrow> <mrow> <mfenced open="(" close=")"> <mi>r</mi> </mfenced> <mo>*</mo> </mrow> </msup> </mfenced> <mfrac> <mn>1</mn> <msup> <mi>T</mi> <mfenced open="(" close=")"> <mi>r</mi> </mfenced> </msup> </mfrac> </msup> </mrow> </semantics></math> is used to determine whether to accept or reject the sample candidate. In the REPMMH method, exchanges between samples at different temperatures are considered in order to achieve the transitions that are difficult to achieve with the particle marginal Metropolis–Hastings (PMMH) method. The transitions can be realized by passing through a high temperature due to exchange between temperatures, as shown by the red arrows in the MH algorithm. (<b>d</b>) The estimated posterior distributions of parameters <math display="inline"><semantics> <mi mathvariant="bold-sans-serif">Θ</mi> </semantics></math> as the output.</p>
Full article ">Figure 3
<p>Observations and external inputs used to evaluate the proposed method. Observed membrane potential of the Izhikevich neuron model <math display="inline"><semantics> <msub> <mi>y</mi> <mi>n</mi> </msub> </semantics></math> (<b>top</b>) in response to input current <math display="inline"><semantics> <msub> <mi>I</mi> <mrow> <mi>ext</mi> <mo>,</mo> <mi>n</mi> </mrow> </msub> </semantics></math> (<b>bottom</b>) is shown.</p>
Full article ">Figure 4
<p>Estimated posterior distributions obtained by employing the PMMH method in the Izhikevich neuron model. In each graph, the estimated probability density function of the parameter (<span class="html-italic">a</span>, <span class="html-italic">b</span>, <span class="html-italic">c</span> and <span class="html-italic">d</span>) is shown by the blue histogram. The red solid and black dashed lines express the true and initial values, respectively.</p>
Full article ">Figure 5
<p>Estimated posterior distributions obtained by employing the REPMMH method in the Izhikevich neuron model. See also the captions of the figure and subfigures for <a href="#entropy-24-00115-f004" class="html-fig">Figure 4</a>.</p>
Full article ">Figure 6
<p>Estimated posterior distributions obtained by employing the replica exchange particle Gibbs with ancestor sampling (REPGAS) method in the Izhikevich neuron model. See also the captions of the figure and subfigures for <a href="#entropy-24-00115-f004" class="html-fig">Figure 4</a>.</p>
Full article ">Figure 7
<p>Autocorrelation as a function of the lag length for parameters <span class="html-italic">a</span>, <span class="html-italic">b</span>, <span class="html-italic">c</span> and <span class="html-italic">d</span> in the Izhikevich neuron model. Results for the PMMH method (black dashed-dotted line), the REPGAS method (blue dashed line) and the REPMMH method (red solid line) are shown. Each inset figure represents the result when the horizontal axis is the logarithmic scale. In results obtained by the REPGAS method and the REPMMH method, samples at <math display="inline"><semantics> <mrow> <msup> <mi>T</mi> <mfenced open="(" close=")"> <mn>1</mn> </mfenced> </msup> <mo>=</mo> <mn>1.0</mn> </mrow> </semantics></math> were used.</p>
Full article ">Figure 8
<p>Estimated posterior distributions obtained by employing the PMMH method in the Lévy-driven stochastic volatility model. In each graph, the estimated probability density function of parameters (<math display="inline"><semantics> <mi>κ</mi> </semantics></math>, <math display="inline"><semantics> <mi>δ</mi> </semantics></math>, <math display="inline"><semantics> <mi>γ</mi> </semantics></math> and <math display="inline"><semantics> <mi>λ</mi> </semantics></math>) is shown by the blue histogram. The red solid and black dashed lines express the true and initial values, respectively.</p>
Full article ">Figure 9
<p>Estimated posterior distributions obtained by employing the REPMMH method in the Lévy-driven stochastic volatility model. See also the captions of the figure and subfigures for <a href="#entropy-24-00115-f008" class="html-fig">Figure 8</a>.</p>
Full article ">Figure 10
<p>Autocorrelation as a function of the lag length for parameters <math display="inline"><semantics> <mi>κ</mi> </semantics></math>, <math display="inline"><semantics> <mi>δ</mi> </semantics></math>, <math display="inline"><semantics> <mi>γ</mi> </semantics></math> and <math display="inline"><semantics> <mi>λ</mi> </semantics></math> in the Lévy-driven stochastic volatility model. Results for the PMMH method (black dashed-dotted line) and the REPMMH method at <math display="inline"><semantics> <mrow> <msup> <mi>T</mi> <mfenced open="(" close=")"> <mn>1</mn> </mfenced> </msup> <mo>=</mo> <mn>1.0</mn> </mrow> </semantics></math> (red solid line) are shown.</p>
Full article ">
17 pages, 610 KiB  
Review
Singing Voice Detection: A Survey
by Ramy Monir, Daniel Kostrzewa and Dariusz Mrozek
Entropy 2022, 24(1), 114; https://doi.org/10.3390/e24010114 - 12 Jan 2022
Cited by 14 | Viewed by 4470
Abstract
Singing voice detection or vocal detection is a classification task that determines whether there is a singing voice in a given audio segment. This process is a crucial preprocessing step that can be used to improve the performance of other tasks such as [...] Read more.
Singing voice detection or vocal detection is a classification task that determines whether there is a singing voice in a given audio segment. This process is a crucial preprocessing step that can be used to improve the performance of other tasks such as automatic lyrics alignment, singing melody transcription, singing voice separation, vocal melody extraction, and many more. This paper presents a survey on the techniques of singing voice detection with a deep focus on state-of-the-art algorithms such as convolutional LSTM and GRU-RNN. It illustrates a comparison between existing methods for singing voice detection, mainly based on the Jamendo and RWC datasets. Long-term recurrent convolutional networks have reached impressive results on public datasets. The main goal of the present paper is to investigate both classical and state-of-the-art approaches to singing voice detection. Full article
(This article belongs to the Special Issue Methods in Artificial Intelligence and Information Processing)
Show Figures

Figure 1
<p>Calculation steps for MFCC features [<a href="#B35-entropy-24-00114" class="html-bibr">35</a>].</p>
Full article ">Figure 2
<p>LSTM cell.</p>
Full article ">Figure 3
<p>The overview of the BLSTM architecture used for SVD in [<a href="#B16-entropy-24-00114" class="html-bibr">16</a>].</p>
Full article ">Figure 4
<p>The overview of the GRU-RNN architecture used in [<a href="#B41-entropy-24-00114" class="html-bibr">41</a>].</p>
Full article ">Figure 5
<p>Convolutional LSTM used in [<a href="#B18-entropy-24-00114" class="html-bibr">18</a>] (used under the terms and conditions of the Creative Commons Attribution (CC BY) license (<a href="http://creativecommons.org/licenses/by/4.0/" target="_blank">http://creativecommons.org/licenses/by/4.0/</a>) (accessed on 10 November 2021)).</p>
Full article ">Figure 6
<p>The topology of L RCN used in [<a href="#B21-entropy-24-00114" class="html-bibr">21</a>] (used under the terms and conditions of the Creative Commons Attribution (CC BY) license (<a href="http://creativecommons.org/licenses/by/4.0/" target="_blank">http://creativecommons.org/licenses/by/4.0/</a>) (accessed on 10 November 2021)).</p>
Full article ">Figure 7
<p>Inner structure of LRCN layer used in [<a href="#B21-entropy-24-00114" class="html-bibr">21</a>] (used under the terms and conditions of the Creative Commons Attribution (CC BY) license (<a href="http://creativecommons.org/licenses/by/4.0/" target="_blank">http://creativecommons.org/licenses/by/4.0/</a>) (accessed on 10 November 2021)).</p>
Full article ">
16 pages, 379 KiB  
Article
Entropies and IPR as Markers for a Phase Transition in a Two-Level Model for Atom–Diatomic Molecule Coexistence
by Ignacio Baena, Pedro Pérez-Fernández, Manuela Rodríguez-Gallardo and José Miguel Arias
Entropy 2022, 24(1), 113; https://doi.org/10.3390/e24010113 - 12 Jan 2022
Cited by 2 | Viewed by 2011
Abstract
A quantum phase transition (QPT) in a simple model that describes the coexistence of atoms and diatomic molecules is studied. The model, which is briefly discussed, presents a second-order ground state phase transition in the thermodynamic (or large particle number) limit, changing from [...] Read more.
A quantum phase transition (QPT) in a simple model that describes the coexistence of atoms and diatomic molecules is studied. The model, which is briefly discussed, presents a second-order ground state phase transition in the thermodynamic (or large particle number) limit, changing from a molecular condensate in one phase to a coexistence equilibrium of diatomic molecules and atoms in the other. The usual markers for this phase transition are the ground state energy and the expected value of the number of atoms (alternatively, the number of molecules) in the ground state. In this work, other markers for the QPT, such as the inverse participation ratio (IPR) and, particularly, the Rényi entropy, are analyzed and proposed as QPT markers. Both magnitudes present abrupt changes at the critical point of the QPT. Full article
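The two proposed markers have direct textbook definitions. A minimal sketch, assuming a normalized state expanded in a discrete basis (the names are ours; the paper's model-specific basis is not reproduced here):

```python
import math

def ipr(amplitudes):
    """Inverse participation ratio of a normalized state vector:
    IPR = sum_k |c_k|^4, equal to 1 for a fully localized state and
    1/N for a state spread uniformly over N basis components."""
    return sum(abs(c) ** 4 for c in amplitudes)

def renyi_entropy(probs, alpha):
    """Renyi entropy S_alpha = ln(sum_k p_k**alpha) / (1 - alpha),
    with the Shannon entropy recovered in the limit alpha -> 1."""
    if abs(alpha - 1.0) < 1e-12:
        return -sum(p * math.log(p) for p in probs if p > 0)
    return math.log(sum(p ** alpha for p in probs)) / (1.0 - alpha)

# Localized vs. delocalized state over N = 4 basis components.
localized = [1.0, 0.0, 0.0, 0.0]
uniform = [0.5, 0.5, 0.5, 0.5]              # amplitudes; probabilities are 1/4
print(ipr(localized), ipr(uniform))          # 1.0 vs 0.25 = 1/N
print(renyi_entropy([0.25] * 4, 2.0))        # ln 4, maximal for N = 4
```

An abrupt jump in either quantity as a control parameter is varied is what the paper uses to locate the critical point.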
(This article belongs to the Special Issue Current Trends in Quantum Phase Transitions)
Show Figures

Figure 1
<p>Schematic representation of the model used in this work for the atom–diatomic coexistence. This is a two-level model. Diatomic molecules (<b>b</b>) are in the lower level, while single atoms (<b>a</b>) are in the upper level. The quantity <math display="inline"><semantics> <mrow> <msub> <mi>ω</mi> <mn>0</mn> </msub> <mo>−</mo> <mi>ω</mi> </mrow> </semantics></math> represents the energy needed for separating the molecule into its two single atoms. This figure was taken from [<a href="#B16-entropy-24-00113" class="html-bibr">16</a>].</p>
Figure 2
<p>Large–M limit (mean–field) results of the system as a function of the control parameter <math display="inline"><semantics> <mi>λ</mi> </semantics></math> for the cases of <math display="inline"><semantics> <mrow> <msub> <mi>ω</mi> <mn>0</mn> </msub> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>ω</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>. In panel (<b>a</b>) the ground state energy per particle is represented. In panel (<b>b</b>), its first derivative with respect to <math display="inline"><semantics> <mi>λ</mi> </semantics></math> is plotted. Finally, in panel (<b>c</b>), the second derivative of the ground state energy per particle is given as a function of <math display="inline"><semantics> <mi>λ</mi> </semantics></math>. The system undergoes a QPT for <math display="inline"><semantics> <mrow> <msub> <mi>λ</mi> <mi>c</mi> </msub> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>.</p>
Figure 3
<p>Ground state energy per particle in the large–M limit of the system as a function of the control parameter <math display="inline"><semantics> <mi>λ</mi> </semantics></math> for the cases of <math display="inline"><semantics> <mrow> <msub> <mi>ω</mi> <mn>0</mn> </msub> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>ω</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>. The system undergoes a QPT for <math display="inline"><semantics> <mrow> <msub> <mi>λ</mi> <mi>c</mi> </msub> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>. The mean field calculation is depicted in black full line; meanwhile, the exact numerical results for <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>50</mn> </mrow> </semantics></math> (full green line) and <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>700</mn> </mrow> </semantics></math> (dashed red line) are also presented. In order to show the convergence to the mean field with <span class="html-italic">M</span>, the inset represents the difference between the exact <span class="html-italic">M</span> calculation and the mean field result. Different <span class="html-italic">M</span> sizes are shown.</p>
Figure 4
<p>The large-M value for <math display="inline"><semantics> <mrow> <msub> <mi>n</mi> <mi>a</mi> </msub> <mo>/</mo> <mi>M</mi> </mrow> </semantics></math>, number of atoms of type <span class="html-italic">a</span> per particle, as a function of the control parameter <math display="inline"><semantics> <mi>λ</mi> </semantics></math> for the cases of <math display="inline"><semantics> <mrow> <msub> <mi>ω</mi> <mn>0</mn> </msub> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>ω</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>. The system undergoes a QPT for <math display="inline"><semantics> <mrow> <msub> <mi>λ</mi> <mi>c</mi> </msub> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>. This observable behaves as an order parameter. It is zero for <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>&lt;</mo> <msub> <mi>λ</mi> <mi>c</mi> </msub> </mrow> </semantics></math> and different to zero for larger values of <math display="inline"><semantics> <mi>λ</mi> </semantics></math>.</p>
Figure 5
<p>Numerical exact calculation for the expected value of <math display="inline"><semantics> <mrow> <msub> <mover accent="true"> <mi>n</mi> <mo stretchy="false">^</mo> </mover> <mi>a</mi> </msub> <mo>/</mo> <mi>M</mi> </mrow> </semantics></math>. Number of atoms of type <span class="html-italic">a</span> per particle, as a function of the control parameter <math display="inline"><semantics> <mi>λ</mi> </semantics></math> for three different cases: <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>ω</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math> (full black line), <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>ω</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math> (dashed red line) and <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>ω</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math> (dotted and dashed blue line). In all these cases, the critical value for <math display="inline"><semantics> <mi>λ</mi> </semantics></math> has been marked: <math display="inline"><semantics> <mrow> <msub> <mi>λ</mi> <mi>c</mi> </msub> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mi>λ</mi> <mi>c</mi> </msub> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <msub> <mi>λ</mi> <mi>c</mi> </msub> <mo>=</mo> <mn>1.5</mn> </mrow> </semantics></math>, respectively. All calculations were done for <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>700</mn> </mrow> </semantics></math>.</p>
Figure 6
<p>IPR for the ground state as a function of the control parameter <math display="inline"><semantics> <mi>λ</mi> </semantics></math> for <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>700</mn> </mrow> </semantics></math> and for three different <math display="inline"><semantics> <mi>ω</mi> </semantics></math> selections: <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>ω</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math> (full black line), <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>ω</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math> (dashed red line) and <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>ω</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics></math> (dotted and dashed blue line).</p>
Figure 7
<p>IPR for the ground state as a function of the control parameter <math display="inline"><semantics> <mi>λ</mi> </semantics></math> for <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>ω</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>700</mn> </mrow> </semantics></math>.</p>
Figure 8
<p>Representation of the components of the binomial distribution for <math display="inline"><semantics> <mrow> <mi>D</mi> <mo>=</mo> <mn>350</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>p</mi> <mo>=</mo> <mn>1</mn> <mo>/</mo> <mn>3</mn> </mrow> </semantics></math> (dashed red line) compared with the computed squared coefficients for the components of the ground state wavefunction in the basis <math display="inline"><semantics> <mrow> <mrow> <mo>|</mo> <mi>M</mi> <mo>,</mo> </mrow> <msub> <mi>n</mi> <mi>b</mi> </msub> <mrow> <mo>〉</mo> </mrow> </mrow> </semantics></math> (full blue line). This last calculation was done for <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>ω</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math> with <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>700</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>λ</mi> <mo>=</mo> <mn>1000</mn> </mrow> </semantics></math>.</p>
Figure 9
<p>IPR for different M-values as a function of <math display="inline"><semantics> <mi>λ</mi> </semantics></math>. These calculations were done for <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>ω</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math>.</p>
Figure 10
<p>Rényi entropy with <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>1</mn> <mo>/</mo> <mn>2</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <msup> <mi>R</mi> <mrow> <mo>(</mo> <mn>1</mn> <mo>/</mo> <mn>2</mn> <mo>)</mo> </mrow> </msup> </semantics></math>, as a function of <math display="inline"><semantics> <mi>λ</mi> </semantics></math> for different <span class="html-italic">M</span> values.</p>
Figure 11
<p>Shannon entropy as a function of <math display="inline"><semantics> <mi>λ</mi> </semantics></math> for different <span class="html-italic">M</span> values.</p>
Figure 12
<p>Rényi entropy with <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics></math>, <math display="inline"><semantics> <msup> <mi>R</mi> <mrow> <mo>(</mo> <mn>2</mn> <mo>)</mo> </mrow> </msup> </semantics></math>, as a function of <math display="inline"><semantics> <mi>λ</mi> </semantics></math> for different <span class="html-italic">M</span> values.</p>
Figure 13
<p>Different Rényi entropy values as a function of <math display="inline"><semantics> <mi>λ</mi> </semantics></math> for different <math display="inline"><semantics> <mi>α</mi> </semantics></math> values, <math display="inline"><semantics> <mrow> <mi>M</mi> <mo>=</mo> <mn>700</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>ω</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math> in all cases. The case <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>→</mo> <mn>1</mn> </mrow> </semantics></math> is the Shannon entropy.</p>
17 pages, 3890 KiB  
Article
Visual Recognition of Traffic Signs in Natural Scenes Based on Improved RetinaNet
by Shangwang Liu, Tongbo Cai, Xiufang Tang, Yangyang Zhang and Changgeng Wang
Entropy 2022, 24(1), 112; https://doi.org/10.3390/e24010112 - 12 Jan 2022
Cited by 17 | Viewed by 3395
Abstract
To recognize small, blurred, and complex traffic signs in natural scenes, a traffic sign detection method based on RetinaNet-NeXt is proposed. First, to ensure the quality of the dataset, the data were cleaned and augmented to suppress noise. Second, a novel backbone network, ResNeXt, was employed to improve the detection accuracy and effectiveness of RetinaNet. Finally, transfer learning and group normalization were adopted to accelerate network training. Experimental results show that the precision, recall, and mAP of our method, compared with the original RetinaNet, are improved by 9.08%, 9.09%, and 7.32%, respectively. Our method can be effectively applied to traffic sign detection. Full article
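The precision and recall figures quoted above are standard detection metrics. A minimal sketch of how they can be computed from predicted and ground-truth boxes is given below, using greedy one-to-one matching at a fixed IoU threshold; this is a simplification of the full mAP protocol, and the box format and threshold are assumptions, not details from the paper.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def precision_recall(preds, gts, thr=0.5):
    """Greedily match each prediction to the best unmatched ground-truth
    box with IoU >= thr; return (precision, recall)."""
    matched = set()
    tp = 0
    for p in preds:
        best, best_iou = None, thr
        for i, g in enumerate(gts):
            if i in matched:
                continue
            v = iou(p, g)
            if v >= best_iou:
                best, best_iou = i, v
        if best is not None:
            matched.add(best)
            tp += 1
    fp = len(preds) - tp
    fn = len(gts) - len(matched)
    precision = tp / (tp + fp) if preds else 0.0
    recall = tp / (tp + fn) if gts else 0.0
    return precision, recall
```

The full mAP additionally sweeps a confidence threshold over ranked predictions and averages the resulting precision over recall levels and classes.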
Show Figures

Figure 1
<p>Structure of RetinaNet-NeXt network.</p>
Figure 2
<p>Backbone Network (ResNeXt).</p>
Figure 3
<p>Structure of Feature Pyramid Network.</p>
Figure 4
<p>Structure of classification and regression subnet.</p>
Figure 5
<p>The types of traffic signs in the public dataset TT100K. (<b>a</b>) instruction; (<b>b</b>) prohibition; (<b>c</b>) warning. It is mainly used in this work.</p>
Figure 6
<p>Data augmentation results of traffic sign image: (<b>a</b>) Original image; (<b>b</b>) Size cropping; (<b>c</b>) Color change.</p>
Figure 7
<p>Training loss curve.</p>
Figure 8
<p>Traffic sign recognition results.</p>
Figure 9
<p>Distribution of ground truth and prediction: (<b>a</b>) The distribution of box size in ground truth; (<b>b</b>) The distribution of box size in prediction.</p>
Figure 10
<p>Comparison of recognition results of different detection frameworks: (<b>a</b>) Faster RCNN; (<b>b</b>) YOLOv5; (<b>c</b>) RetinaNet; (<b>d</b>) RetinaNet-NeXt.</p>
Figure 11
<p>PR curves of different models under the effect of anchor: (<b>a</b>) The anchor size is (0, 32]; (<b>b</b>) The anchor size is (32, 96]; (<b>c</b>) The anchor size is (96, 512]; (<b>d</b>) The anchor size is (0, 512].</p>
17 pages, 496 KiB  
Article
A Verifiable Arbitrated Quantum Signature Scheme Based on Controlled Quantum Teleportation
by Dianjun Lu, Zhihui Li, Jing Yu and Zhaowei Han
Entropy 2022, 24(1), 111; https://doi.org/10.3390/e24010111 - 11 Jan 2022
Cited by 21 | Viewed by 2135
Abstract
In this paper, we present a verifiable arbitrated quantum signature scheme based on controlled quantum teleportation. A five-qubit entangled state functions as the quantum channel. The proposed scheme uses mutually unbiased bases particles as decoy particles and performs unitary operations on these decoy particles, applying the function values of a symmetric bivariate polynomial. As such, eavesdropping detection and identity authentication can both be executed. The security analysis shows that our scheme can neither be disavowed by the signatory nor denied by the verifier, and it cannot be forged by any malicious attacker. Full article
(This article belongs to the Special Issue Practical Quantum Communication)
Show Figures

Figure 1
<p>The model of controlled quantum teleportation.</p>
Figure 2
<p>Initializing phase schematic diagram.</p>
Figure 3
<p>Schematic diagram of the main steps of the arbitrated quantum signature scheme.</p>
Figure 4
<p>Diagram of transferring signature information.</p>
21 pages, 368 KiB  
Article
Function Computation under Privacy, Secrecy, Distortion, and Communication Constraints
by Onur Günlü
Entropy 2022, 24(1), 110; https://doi.org/10.3390/e24010110 - 11 Jan 2022
Cited by 4 | Viewed by 1881 | Correction
Abstract
The problem of reliable function computation is extended by imposing privacy, secrecy, and storage constraints on a remote source whose noisy measurements are observed by multiple parties. The main additions to the classic function computation problem include (1) privacy leakage to an eavesdropper is measured with respect to the remote source rather than the transmitting terminals’ observed sequences; (2) the information leakage to a fusion center with respect to the remote source is considered a new privacy leakage metric; (3) the function computed is allowed to be a distorted version of the target function, which allows the storage rate to be reduced compared to a reliable function computation scenario, in addition to reducing secrecy and privacy leakages; (4) two transmitting node observations are used to compute a function. Inner and outer bounds on the rate regions are derived for lossless and lossy single-function computation with two transmitting nodes, which recover previous results in the literature. For special cases, including invertible and partially invertible functions, and degraded measurement channels, simplified lossless and lossy rate regions are characterized, and one achievable region is evaluated as an example scenario. Full article
(This article belongs to the Special Issue Information-Theoretic Approach to Privacy and Security)
Show Figures

Figure 1
<p>Single-function computation problem with two transmitting nodes under secrecy, privacy, and storage (or communication) constraints.</p>
21 pages, 425 KiB  
Article
Attaining Fairness in Communication for Omniscience
by Ni Ding, Parastoo Sadeghi, David Smith and Thierry Rakotoarivelo
Entropy 2022, 24(1), 109; https://doi.org/10.3390/e24010109 - 11 Jan 2022
Viewed by 1917
Abstract
This paper studies how to attain fairness in communication for omniscience, which models the multi-terminal compressed sensing problem and the coded cooperative data exchange problem, where a set of users exchange their observations of a discrete multiple random source to attain omniscience—the state that all users recover the entire source. The optimal rate region containing all source coding rate vectors that achieve omniscience with the minimum sum rate is shown to coincide with the core (the solution set) of a coalitional game. Two game-theoretic fairness solutions are studied: the Shapley value and the egalitarian solution. It is shown that the Shapley value assigns each user the source coding rate measured by their remaining information of the multiple source given the common randomness that is shared by all users, while the egalitarian solution simply distributes the rates as evenly as possible in the core. To avoid the exponentially growing complexity of obtaining the Shapley value, a polynomial-time approximation method is proposed which utilizes the fact that the Shapley value is the mean value over all extreme points in the core. In addition, a steepest descent algorithm is proposed that converges in polynomial time to the fractional egalitarian solution in the core, which can be implemented by network coding schemes. Finally, it is shown that the game can be decomposed into subgames so that both the Shapley value and the egalitarian solution can be obtained within each subgame in a distributed manner with reduced complexity. Full article
(This article belongs to the Special Issue Machine Learning for Communications)
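The polynomial-time approximation mentioned in the abstract rests on a Monte Carlo idea: the Shapley value is the average marginal contribution over orderings of the players (in this game, the mean over the extreme points of the core), so averaging over randomly sampled permutations gives an unbiased estimate. The sketch below shows this generic estimator for an abstract characteristic function `v`; it illustrates the sampling idea only, not the paper's algorithm, which works with the entropy function of the omniscience game.

```python
import itertools
import math
import random

def shapley_exact(players, v):
    """Exact Shapley value: each player's marginal contribution
    v(S | {p}) - v(S), averaged over all orderings of the players."""
    phi = {p: 0.0 for p in players}
    for perm in itertools.permutations(players):
        coalition = frozenset()
        for p in perm:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition |= {p}
    n_fact = math.factorial(len(players))
    return {p: s / n_fact for p, s in phi.items()}

def shapley_sampled(players, v, n_samples, seed=0):
    """Monte Carlo estimate: average marginal contributions over
    randomly sampled permutations instead of all n! of them."""
    rng = random.Random(seed)
    phi = {p: 0.0 for p in players}
    for _ in range(n_samples):
        perm = list(players)
        rng.shuffle(perm)
        coalition = frozenset()
        for p in perm:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition |= {p}
    return {p: s / n_samples for p, s in phi.items()}
```

For an additive game v(S) = Σ_{i∈S} w_i, every marginal contribution equals w_i, so both functions return the weights exactly; the subgame decomposition mentioned in the abstract reduces the factorial cost further by applying such computations within smaller player sets.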
Show Figures

Figure 1
<p>The 5-user system with <math display="inline"><semantics> <mrow> <mi>V</mi> <mspace width="3.33333pt"/> <mo>=</mo> <mspace width="3.33333pt"/> <mo>{</mo> <mn>1</mn> <mo>,</mo> <mo>…</mo> <mo>,</mo> <mn>5</mn> <mo>}</mo> </mrow> </semantics></math> in Example 1. The users encode and broadcast <math display="inline"><semantics> <msub> <mi mathvariant="sans-serif">Z</mi> <mi>i</mi> </msub> </semantics></math>s so as to attain omniscience of the source <math display="inline"><semantics> <msub> <mi mathvariant="sans-serif">Z</mi> <mi>V</mi> </msub> </semantics></math>. In the corresponding CCDE problem, each <math display="inline"><semantics> <msub> <mi mathvariant="sans-serif">W</mi> <mi>j</mi> </msub> </semantics></math> denotes a packet that belongs to a field <math display="inline"><semantics> <msub> <mi mathvariant="double-struck">F</mi> <mi>q</mi> </msub> </semantics></math>, and each user <math display="inline"><semantics> <mrow> <mi>i</mi> <mo>∈</mo> <mi>V</mi> </mrow> </semantics></math> broadcasts linear combinations of <math display="inline"><semantics> <msub> <mi mathvariant="sans-serif">Z</mi> <mi>i</mi> </msub> </semantics></math> to help others recover all packets in <math display="inline"><semantics> <msub> <mi mathvariant="sans-serif">Z</mi> <mi>V</mi> </msub> </semantics></math>.</p>
Figure 2
<p>The core <math display="inline"><semantics> <mrow> <msup> <mrow> <mi mathvariant="script">R</mi> </mrow> <mo>*</mo> </msup> <mrow> <mo>(</mo> <mrow> <mo>{</mo> <mn>1</mn> <mo>,</mo> <mn>4</mn> <mo>,</mo> <mn>5</mn> <mo>}</mo> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math> of the subgame <math display="inline"><semantics> <mrow> <mi mathvariant="sans-serif">Ω</mi> <mo>(</mo> <mrow> <mo>{</mo> <mn>1</mn> <mo>,</mo> <mn>4</mn> <mo>,</mo> <mn>5</mn> <mo>}</mo> </mrow> <mo>,</mo> <msub> <mover accent="true"> <mi>f</mi> <mo stretchy="false">^</mo> </mover> <msup> <mi>R</mi> <mo>*</mo> </msup> </msub> <mo>)</mo> </mrow> </semantics></math> of the 5-user system in <a href="#entropy-24-00109-f001" class="html-fig">Figure 1</a>.</p>
Figure 3
<p>For the core <math display="inline"><semantics> <mrow> <msup> <mrow> <mi mathvariant="script">R</mi> </mrow> <mo>*</mo> </msup> <mrow> <mo>(</mo> <mrow> <mo>{</mo> <mn>1</mn> <mo>,</mo> <mn>4</mn> <mo>,</mo> <mn>5</mn> <mo>}</mo> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math> of the subgame <math display="inline"><semantics> <mrow> <mi mathvariant="sans-serif">Ω</mi> <mo>(</mo> <mrow> <mo>{</mo> <mn>1</mn> <mo>,</mo> <mn>4</mn> <mo>,</mo> <mn>5</mn> <mo>}</mo> </mrow> <mo>,</mo> <msub> <mover accent="true"> <mi>f</mi> <mo stretchy="false">^</mo> </mover> <msup> <mi>R</mi> <mo>*</mo> </msup> </msub> <mo>)</mo> </mrow> </semantics></math>, the extreme point set is <math display="inline"><semantics> <mrow> <mi>EX</mi> <mrow> <mo>(</mo> <mrow> <mo>{</mo> <mn>1</mn> <mo>,</mo> <mn>4</mn> <mo>,</mo> <mn>5</mn> <mo>}</mo> </mrow> <mo>)</mo> </mrow> <mspace width="3.33333pt"/> <mo>=</mo> <mspace width="3.33333pt"/> <mrow> <mo>{</mo> <mrow> <mo>(</mo> <mfrac> <mn>3</mn> <mn>2</mn> </mfrac> <mo>,</mo> <mn>4</mn> <mo>,</mo> <mn>0</mn> <mo>)</mo> </mrow> <mo>,</mo> <mrow> <mo>(</mo> <mfrac> <mn>3</mn> <mn>2</mn> </mfrac> <mo>,</mo> <mfrac> <mn>3</mn> <mn>2</mn> </mfrac> <mo>,</mo> <mfrac> <mn>5</mn> <mn>2</mn> </mfrac> <mo>)</mo> </mrow> <mo>,</mo> <mrow> <mo>(</mo> <mn>1</mn> <mo>,</mo> <mfrac> <mn>9</mn> <mn>2</mn> </mfrac> <mo>,</mo> <mn>0</mn> <mo>)</mo> </mrow> <mo>,</mo> <mrow> <mo>(</mo> <mn>1</mn> <mo>,</mo> <mn>2</mn> <mo>,</mo> <mfrac> <mn>5</mn> <mn>2</mn> </mfrac> <mo>)</mo> </mrow> <mo>}</mo> </mrow> </mrow> </semantics></math>, the mean value of which is the Shapley value <math display="inline"><semantics> <mrow> <msub> <mover accent="true"> <mi mathvariant="bold">r</mi> <mo stretchy="false">^</mo> </mover> <mrow> <mo>{</mo> <mn>1</mn> <mo>,</mo> <mn>4</mn> <mo>,</mo> <mn>5</mn> <mo>}</mo> </mrow> </msub> <mspace width="3.33333pt"/> <mo>=</mo> <mspace width="3.33333pt"/> <mrow> <mo>(</mo> <mfrac> <mn>5</mn> <mn>4</mn> </mfrac> <mo>,</mo> <mn>3</mn> 
<mo>,</mo> <mfrac> <mn>5</mn> <mn>4</mn> </mfrac> <mo>)</mo> </mrow> </mrow> </semantics></math>. We apply the random permutation method twice as in Example 5. We randomly generate 3 permutations of 1, 4 and 5 each time and get the two approximations of <math display="inline"><semantics> <msub> <mover accent="true"> <mi mathvariant="bold">r</mi> <mo stretchy="false">^</mo> </mover> <mrow> <mo>{</mo> <mn>1</mn> <mo>,</mo> <mn>4</mn> <mo>,</mo> <mn>5</mn> <mo>}</mo> </mrow> </msub> </semantics></math>. In this figure, the path to <math display="inline"><semantics> <mrow> <mo>(</mo> <mn>1</mn> <mo>,</mo> <mfrac> <mn>9</mn> <mn>2</mn> </mfrac> <mo>,</mo> <mn>0</mn> <mo>)</mo> </mrow> </semantics></math> shows an example of how the Edmond algorithm (Algorithm 3 in [<a href="#B8-entropy-24-00109" class="html-bibr">8</a>]) finds the vertex <math display="inline"><semantics> <mrow> <mo>(</mo> <mn>1</mn> <mo>,</mo> <mfrac> <mn>9</mn> <mn>2</mn> </mfrac> <mo>,</mo> <mn>0</mn> <mo>)</mo> </mrow> </semantics></math> corresponding to the permutation <math display="inline"><semantics> <mrow> <mo>(</mo> <mn>4</mn> <mo>,</mo> <mn>5</mn> <mo>,</mo> <mn>1</mn> <mo>)</mo> </mrow> </semantics></math>.</p>
Figure 4
<p>The error measured by the <math display="inline"><semantics> <msub> <mo>ℓ</mo> <mn>1</mn> </msub> </semantics></math>-norm <math display="inline"><semantics> <msub> <mrow> <mo>∥</mo> <msubsup> <mi mathvariant="bold">r</mi> <mi>V</mi> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> </msubsup> <mo>−</mo> <msubsup> <mi mathvariant="bold">r</mi> <mi>V</mi> <mo>*</mo> </msubsup> <mo>∥</mo> </mrow> <mn>1</mn> </msub> </semantics></math> of the estimation sequence <math display="inline"><semantics> <mrow> <mo>{</mo> <msubsup> <mi mathvariant="bold">r</mi> <mi>V</mi> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> </msubsup> <mo>}</mo> </mrow> </semantics></math> generated by the SDA algorithm in Example 7 to determine the fractional egalitarian solution in <math display="inline"><semantics> <mrow> <msup> <mrow> <mi mathvariant="script">R</mi> </mrow> <mo>*</mo> </msup> <mrow> <mo>(</mo> <mi>V</mi> <mo>)</mo> </mrow> </mrow> </semantics></math>, the minimizer of <math display="inline"><semantics> <mrow> <mo movablelimits="true" form="prefix">min</mo> <mfenced separators="" open="{" close="}"> <msub> <mo>∑</mo> <mrow> <mi>i</mi> <mo>∈</mo> <mi>V</mi> </mrow> </msub> <msubsup> <mi>r</mi> <mi>i</mi> <mn>2</mn> </msubsup> <mo lspace="0pt">:</mo> <msub> <mi mathvariant="bold">r</mi> <mi>V</mi> </msub> <mo>∈</mo> <msup> <mrow> <mi mathvariant="script">R</mi> </mrow> <mo>*</mo> </msup> <mrow> <mo>(</mo> <mi>V</mi> <mo>)</mo> </mrow> <mo>∩</mo> <msubsup> <mi mathvariant="double-struck">Q</mi> <mrow> <msup> <mrow> <mo>|</mo> <mi mathvariant="script">P</mi> <mo>|</mo> </mrow> <mo>*</mo> </msup> <mo>−</mo> <mn>1</mn> </mrow> <mrow> <mo>|</mo> <mi>V</mi> <mo>|</mo> </mrow> </msubsup> </mfenced> </mrow> </semantics></math>. 
The error linearly decreases to zero with gradient <math display="inline"><semantics> <mrow> <mo>−</mo> <mn>1</mn> </mrow> </semantics></math>; i.e., the <math display="inline"><semantics> <msub> <mo>ℓ</mo> <mn>1</mn> </msub> </semantics></math>-norm <math display="inline"><semantics> <msub> <mrow> <mo>∥</mo> <msubsup> <mi mathvariant="bold">r</mi> <mi>V</mi> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> </msubsup> <mo>−</mo> <msubsup> <mi mathvariant="bold">r</mi> <mi>V</mi> <mo>*</mo> </msubsup> <mo>∥</mo> </mrow> <mn>1</mn> </msub> </semantics></math> is reduced by <math display="inline"><semantics> <mrow> <mfrac> <mn>2</mn> <mrow> <mrow> <mo>|</mo> </mrow> <msup> <mi mathvariant="script">P</mi> <mo>*</mo> </msup> <mrow> <mo>|</mo> <mo>−</mo> <mn>1</mn> </mrow> </mrow> </mfrac> <mspace width="3.33333pt"/> <mo>=</mo> <mspace width="3.33333pt"/> <mn>1</mn> </mrow> </semantics></math> in each iteration.</p>
Figure 5
<p>By applying the SDA algorithm to the subgame <math display="inline"><semantics> <mrow> <mi mathvariant="sans-serif">Ω</mi> <mo>(</mo> <mrow> <mo>{</mo> <mn>1</mn> <mo>,</mo> <mn>4</mn> <mo>,</mo> <mn>5</mn> <mo>}</mo> </mrow> <mo>,</mo> <msub> <mover accent="true"> <mi>f</mi> <mo stretchy="false">^</mo> </mover> <msup> <mi>R</mi> <mo>*</mo> </msup> </msub> <mo>)</mo> </mrow> </semantics></math> of the 5-user system in Example 1 with the initial point <math display="inline"><semantics> <mrow> <msubsup> <mi mathvariant="bold">r</mi> <mrow> <mo>{</mo> <mn>1</mn> <mo>,</mo> <mn>4</mn> <mo>,</mo> <mn>5</mn> <mo>}</mo> </mrow> <mrow> <mo>(</mo> <mn>0</mn> <mo>)</mo> </mrow> </msubsup> <mspace width="3.33333pt"/> <mo>=</mo> <mspace width="3.33333pt"/> <mrow> <mo>(</mo> <mn>1</mn> <mo>,</mo> <mfrac> <mn>9</mn> <mn>2</mn> </mfrac> <mo>,</mo> <mn>0</mn> <mo>)</mo> </mrow> </mrow> </semantics></math>, we get the estimation sequence <math display="inline"><semantics> <mrow> <mo>{</mo> <msubsup> <mi mathvariant="bold">r</mi> <mrow> <mo>{</mo> <mn>1</mn> <mo>,</mo> <mn>4</mn> <mo>,</mo> <mn>5</mn> <mo>}</mo> </mrow> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> </msubsup> <mo>}</mo> </mrow> </semantics></math> resulting an update path toward the fractional egalitarian solution <math display="inline"><semantics> <msubsup> <mi mathvariant="bold">r</mi> <mrow> <mo>{</mo> <mn>1</mn> <mo>,</mo> <mn>4</mn> <mo>,</mo> <mn>5</mn> <mo>}</mo> </mrow> <mo>*</mo> </msubsup> </semantics></math>, the minimizer of <math display="inline"><semantics> <mrow> <mo movablelimits="true" form="prefix">min</mo> <mfenced separators="" open="{" close="}"> <msub> <mo>∑</mo> <mrow> <mi>i</mi> <mo>∈</mo> <mo>{</mo> <mn>1</mn> <mo>,</mo> <mn>4</mn> <mo>,</mo> <mn>5</mn> <mo>}</mo> </mrow> </msub> <msubsup> <mi>r</mi> <mi>i</mi> <mn>2</mn> </msubsup> <mo lspace="0pt">:</mo> <msub> <mi mathvariant="bold">r</mi> <mrow> <mo>{</mo> <mn>1</mn> <mo>,</mo> <mn>4</mn> <mo>,</mo> <mn>5</mn> <mo>}</mo> </mrow> 
</msub> <mo>∈</mo> <msup> <mrow> <mi mathvariant="script">R</mi> </mrow> <mo>*</mo> </msup> <mrow> <mo>(</mo> <mrow> <mo>{</mo> <mn>1</mn> <mo>,</mo> <mn>4</mn> <mo>,</mo> <mn>5</mn> <mo>}</mo> </mrow> <mo>)</mo> </mrow> <mo>∩</mo> <msubsup> <mi mathvariant="double-struck">Q</mi> <mrow> <mrow> <mo>|</mo> </mrow> <msup> <mi mathvariant="script">P</mi> <mo>*</mo> </msup> <mrow> <mo>|</mo> <mo>−</mo> <mn>1</mn> </mrow> </mrow> <mn>3</mn> </msubsup> </mfenced> </mrow> </semantics></math>.</p>