Next Issue: Volume 25, February | Previous Issue: Volume 24, December
Entropy, Volume 25, Issue 1 (January 2023) – 179 articles

Cover Story: A molecular electronic wavefunction prepared in a highly excited adiabatic state embedded in a densely quasi-degenerate manifold quickly begins to undergo continual nonadiabatic mixing with other states, each of which in turn mixes further with other states. The mixing is caused by so-called nonadiabatic interactions arising from a significant breakdown of the Born–Oppenheimer approximation. The resulting electronic wavepacket penetrates into a broader domain of the Hilbert space, and the dynamics thus resembles fractional Brownian motion. A monotonically increasing Shannon entropy and other indicators highlight the presence of quantum chaos in the electronic state of molecules. This intense chaos brings about peculiar characteristics in the dynamics of molecules. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
16 pages, 7267 KiB  
Article
Numerical Investigation on the Effect of Section Width on the Performance of Air Ejector with Rectangular Section
by Ying Zhang, Jingming Dong, Shuaiyu Song, Xinxiang Pan, Nan He and Manfei Lu
Entropy 2023, 25(1), 179; https://doi.org/10.3390/e25010179 - 16 Jan 2023
Cited by 3 | Viewed by 2176
Abstract
Due to its simple structure and lack of moving parts, the supersonic air ejector has been widely applied in the fields of machinery, aerospace, and energy-saving. The performance of the ejector is influenced by the flow channel structure and the velocity of the jet, thus the confined jet is an important limiting factor for the performance of the supersonic air ejector. In order to investigate the effect of the confined jet on the performance of the ejector, an air ejector with a rectangular section was designed. The effects of the section width (Wc) on the entrainment ratio, velocity distribution, turbulent kinetic energy distribution, Mach number distribution, and vorticity distribution of the rectangular section air ejector were studied numerically. The numerical results indicated that the entrainment ratio of the rectangular section air ejector increased from 0.34 to 0.65 and the increment of the ER was 91.2% when the section width increased from 1 mm to 10 mm. As Wc increased, the region of the turbulent kinetic energy gradually expanded. The energy exchange between the primary fluid and the secondary fluid was mainly in the form of turbulent diffusion in the mixing chamber. In addition to Wc limiting the fluid flow in the rectangular section air ejector, the structure size of the rectangular section air ejector in the XOY plane also had a limiting effect on the internal fluid flow. In the rectangular section air ejector, the streamwise vortices played an important role in the mixing process. The increase of Wc would increase the distribution of the streamwise vortices in the constant-area section. Meanwhile, the distribution of the spanwise vortices would gradually decrease. Full article
(This article belongs to the Special Issue Entropy and Exergy Analysis in Ejector-Based Systems)
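As a quick check of the numbers quoted in the abstract (assuming the entrainment ratio ER is the usual ratio of secondary to primary mass flow rate), the reported 91.2% increment follows directly from the two ER values:

$$\frac{\Delta ER}{ER_{1\,\mathrm{mm}}} = \frac{0.65 - 0.34}{0.34} \approx 0.912 = 91.2\%.$$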
Figures 1–13: three-dimensional structure, XOY-plane structure, and grid diagrams of the rectangular section air ejector; variation of the ER under different Wc; velocity, turbulent kinetic energy, and Mach number distributions in the XOY and XOZ planes under different Wc; turbulent kinetic energy, streamwise vortex, and spanwise vortex distributions along the X axis under different Wc.
29 pages, 7443 KiB  
Article
Multi-Level Thresholding Image Segmentation Based on Improved Slime Mould Algorithm and Symmetric Cross-Entropy
by Yuanyuan Jiang, Dong Zhang, Wenchang Zhu and Li Wang
Entropy 2023, 25(1), 178; https://doi.org/10.3390/e25010178 - 16 Jan 2023
Cited by 8 | Viewed by 3249
Abstract
Multi-level thresholding image segmentation divides an image into multiple regions of interest and is a key step in image processing and image analysis. Aiming toward the problems of the low segmentation accuracy and slow convergence speed of traditional multi-level threshold image segmentation methods, in this paper, we present multi-level thresholding image segmentation based on an improved slime mould algorithm (ISMA) and symmetric cross-entropy for global optimization and image segmentation tasks. First, elite opposition-based learning (EOBL) was used to improve the quality and diversity of the initial population and accelerate the convergence speed. The adaptive probability threshold was used to adjust the selection probability of the slime mould to enhance the ability of the algorithm to jump out of the local optimum. The historical leader strategy, which selects the optimal historical information as the leader for the position update, was found to improve the convergence accuracy. Subsequently, 14 benchmark functions were used to evaluate the performance of ISMA, comparing it with other well-known algorithms in terms of the optimization accuracy, convergence speed, and significant differences. Subsequently, we tested the segmentation quality of the method proposed in this paper on eight grayscale images and compared it with other image segmentation criteria and well-known algorithms. The experimental metrics include the average fitness (mean), standard deviation (std), peak signal to noise ratio (PSNR), structure similarity index (SSIM), and feature similarity index (FSIM), which we utilized to evaluate the quality of the segmentation. The experimental results demonstrated that the improved slime mould algorithm is superior to the other compared algorithms, and multi-level thresholding image segmentation based on the improved slime mould algorithm and symmetric cross-entropy can be effectively applied to the task of multi-level threshold image segmentation. Full article
(This article belongs to the Special Issue Entropy in Soft Computing and Machine Learning Algorithms II)
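The abstract does not spell out the symmetric cross-entropy criterion, so the sketch below is a minimal NumPy illustration of one common formulation (a two-sided variant of minimum-cross-entropy thresholding in which each class of gray levels is compared against its class mean in both directions); the function name, binning, and brute-force search are illustrative assumptions, not the authors' ISMA-based implementation.

```python
import numpy as np

def symmetric_cross_entropy(hist, thresholds, eps=1e-12):
    """Symmetric cross-entropy of a gray-level histogram for a set of thresholds.

    hist       : 1D array of gray-level counts (e.g., 256 bins).
    thresholds : sorted gray-level thresholds defining the classes.
    Lower values indicate a better multi-level segmentation under this criterion.
    """
    levels = np.arange(len(hist), dtype=float) + eps           # avoid log(0) at gray level 0
    bounds = [0] + sorted(int(t) for t in thresholds) + [len(hist)]
    total = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):                 # one class per threshold interval
        h, g = hist[lo:hi].astype(float), levels[lo:hi]
        if h.sum() == 0:
            continue
        mu = np.sum(h * g) / h.sum()                            # class mean (reconstruction level)
        # two-sided cross-entropy between the gray levels and their class mean
        total += np.sum(h * (g * np.log(g / mu) + mu * np.log(mu / g)))
    return total

# toy usage: brute-force the best two thresholds on a synthetic histogram
hist = np.histogram(np.random.randint(0, 256, 10_000), bins=256, range=(0, 256))[0]
candidates = ((t1, t2) for t1 in range(8, 240, 8) for t2 in range(t1 + 8, 248, 8))
print("best thresholds:", min(candidates, key=lambda ts: symmetric_cross_entropy(hist, ts)))
```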
Figures 1–12: the 14 benchmark functions and the convergence behavior of the compared algorithms on them; the test images and their grayscale histograms; segmentation results for the Lena, Cameraman, Butterfly, Lake, Barbara, Columbia, Milkdrop, and Man images; convergence behavior of the algorithms for all images when d = 5.
22 pages, 817 KiB  
Article
A Concise Account of Information as Meaning Ascribed to Symbols and Its Association with Conscious Mind
by Yunus A. Çengel
Entropy 2023, 25(1), 177; https://doi.org/10.3390/e25010177 - 16 Jan 2023
Cited by 4 | Viewed by 4879
Abstract
The term information is used in different meanings in different fields of study and daily life, causing misunderstanding and confusion. There is a need to clarify what information is and how it relates to knowledge. It is argued that information is meaning represented by physical symbols such as sights, sounds, and words. Knowledge is meaning that resides in a conscious mind. The basic building blocks of information are symbols and meaning, which cannot be reduced to one another. The symbols of information are the physical media of representation and the means of transmission of information. Without the associated meaning, the symbols of information have no significance since meaning is an ascribed and acquired quality and not an inherent property of the symbols. We can transmit symbols of information but cannot transmit meaning from one mind to another without a common protocol or convention. A concise and cohesive framework for information can be established on the common ground of the mind, meaning, and symbols trio. Using reasoned arguments, logical consistency, and conformity with common experiences and observations as the methodology, this paper offers valuable insights to facilitate clear understanding and unifies several definitions of information into one in a cohesive manner. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
Figures 1–2: information as meaning ascribed to tangible symbols by a conscious mind, with knowledge residing in the mind as percepts; transmission of information conveys only the symbols, meaning being ascribed at both ends by conscious minds.
11 pages, 1215 KiB  
Article
On Divided-Type Connectivity of Graphs
by Qiao Zhou, Xiaomin Wang and Bing Yao
Entropy 2023, 25(1), 176; https://doi.org/10.3390/e25010176 - 16 Jan 2023
Viewed by 1786
Abstract
Graph connectivity is a fundamental concept in graph theory. In particular, it plays a vital role in applications related to modern interconnection graphs: for example, it can be used to measure the vulnerability of the corresponding graph and is an important metric for its reliability and fault tolerance. Here, we first introduce two types of divided operations, the vertex-divided operation and the edge-divided operation, together with their inverse operations, the vertex-coincident operation and the edge-coincident operation, as methods for splitting the vertices of graphs. Second, we define a new connectivity, referred to as divided connectivity, which differs from traditional connectivity, and present an equivalence relationship between traditional connectivity and our divided connectivity. We then explore the structures of graphs based on the vertex-divided connectivity. Afterwards, as an application of our divided operations, we give some necessary and sufficient conditions for a graph to be Eulerian. Finally, we propose some valuable and meaningful problems for further research. Full article
(This article belongs to the Section Statistical Physics)
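For reference, the Eulerian-graph application mentioned above builds on the classical characterization (a connected graph has an Euler circuit iff every vertex has even degree); the snippet below checks only that textbook condition and does not attempt to reproduce the divided-operation conditions derived in the paper.

```python
from collections import defaultdict

def has_euler_circuit(edges):
    """Classical test: a connected graph has an Euler circuit iff every vertex degree is even."""
    adj, degree = defaultdict(set), defaultdict(int)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
        degree[u] += 1
        degree[v] += 1
    start = next(iter(adj))                  # DFS connectivity check over non-isolated vertices
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == set(adj) and all(d % 2 == 0 for d in degree.values())

print(has_euler_circuit([(1, 2), (2, 3), (3, 1)]))   # True: a triangle
print(has_euler_circuit([(1, 2), (2, 3), (3, 4)]))   # False: a path has two odd-degree ends
```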
Figures 1–7: schemes illustrating the vertex-splitting and edge-splitting operations and the v-/e-divided and v-/e-coincident operations; a graph H with minimum degree δ(H) = 4 and its e-divided, v-divided, v-deleted, and e-deleted variants with the corresponding connectivities; the proof scheme of Theorem 2; Sierpinski models S(1) and S(2), their connected-perfect subsets, and e-divided graphs showing that S(2) is 4-connected and v-divided 4-connected but e-divided 2-connected; a connected graph G and its v-divided graph G ∧ {x1, x2, x3, x4}.
17 pages, 3459 KiB  
Article
Precision Machine Learning
by Eric J. Michaud, Ziming Liu and Max Tegmark
Entropy 2023, 25(1), 175; https://doi.org/10.3390/e25010175 - 15 Jan 2023
Cited by 18 | Viewed by 6146
Abstract
We explore unique considerations involved in fitting machine learning (ML) models to data with very high precision, as is often required for science applications. We empirically compare various function approximation methods and study how they scale with increasing parameters and data. We find that neural networks (NNs) can often outperform classical approximation methods on high-dimensional examples, by (we hypothesize) auto-discovering and exploiting modular structures therein. However, neural networks trained with common optimizers are less powerful for low-dimensional cases, which motivates us to study the unique properties of neural network loss landscapes and the corresponding optimization challenges that arise in the high precision regime. To address the optimization issue in low dimensions, we develop training tricks which enable us to train neural networks to extremely low loss, close to the limits allowed by numerical precision. Full article
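The scaling comparisons summarized in the figure captions can be written compactly. Assuming an RMSE-type loss and N model parameters on a d-dimensional target, piecewise-polynomial interpolation of order n is expected to scale as

$$\ell(N) \propto N^{-(n+1)/d},$$

so linear simplex interpolation (n = 1) scales as N^(−2/d); the paper reports that neural networks on some high-dimensional problems instead scale roughly as N^(−2/d*) with an effective dimension d* = 2, i.e., close to N^(−1).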
Figures 1–10 and A1–A4: ReLU-network and linear simplex interpolation solutions on 1D and 2D problems; parameter scaling of simplex interpolation (N^(−2/d)) versus ReLU and tanh networks (sometimes closer to N^(−2/d*) with d* = 2) and of polynomial splines (D^(−(n+1)/d)); scaling of networks with enforced modularity; a target function representable to machine precision by a finite-width network yet not reached by optimization; Hessian eigenvalue spectra of the loss landscape after Adam and BFGS training; comparisons of Adam against BFGS with low-curvature subspace training and boosting on 1D, 2D, and 6D problems; and a user's guide for choosing an approximation method.
18 pages, 3117 KiB  
Article
Long-Range Dependence Involutional Network for Logo Detection
by Xingzhuo Li, Sujuan Hou, Baisong Zhang, Jing Wang, Weikuan Jia and Yuanjie Zheng
Entropy 2023, 25(1), 174; https://doi.org/10.3390/e25010174 - 15 Jan 2023
Cited by 7 | Viewed by 2921
Abstract
Logo detection is one of the crucial branches of computer vision owing to various real-world applications, such as automatic logo detection and recognition, intelligent transportation, and trademark infringement detection. Compared with traditional handcrafted-feature-based methods, deep learning-based convolutional neural networks (CNNs) can learn both low-level and high-level image features. Recent decades have witnessed the great feature representation capabilities of deep CNNs and their variants, which have been very good at discovering intricate structures in high-dimensional data and are thereby applicable to many domains, including logo detection. However, logo detection remains challenging, as existing detection methods cannot cope well with multiscale logos and large aspect ratios. In this paper, we tackle these challenges by developing a novel long-range dependence involutional network (LDI-Net). Specifically, we design a strategy called long-range dependence involution (LD involution) that combines a new operator with a self-attention mechanism by rethinking the intrinsic principle of convolution, to alleviate the detection difficulties caused by large aspect ratios. We also introduce a multilevel representation neural architecture search (MRNAS) to detect multiscale logo objects by constructing a novel multipath topology. In addition, we implement an adaptive RoI pooling module (ARM) to improve detection efficiency by addressing the problem of logo deformation. Comprehensive experiments on four benchmark logo datasets demonstrate the effectiveness and efficiency of the proposed approach. Full article
(This article belongs to the Topic Machine and Deep Learning)
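Involution, the operator that LD involution builds on, inverts convolution's sharing pattern: the kernel is generated at each spatial location from that location's feature and is shared across the channels of a group. The NumPy sketch below is a minimal single-group version for illustration only; the kernel-generation map, the shapes, and the long-range-dependence/self-attention additions of LD involution are simplifying assumptions rather than the paper's implementation.

```python
import numpy as np

def involution2d(x, w_gen, k=3):
    """Single-group 2D involution.

    x     : feature map, shape (C, H, W)
    w_gen : matrix of shape (k*k, C) that generates a k*k kernel from each pixel's feature
    Returns a feature map of shape (C, H, W); the generated kernel at each pixel
    is shared across all C channels (the opposite sharing pattern to convolution).
    """
    C, H, W = x.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            kernel = (w_gen @ x[:, i, j]).reshape(k, k)     # per-pixel kernel from this pixel's feature
            patch = xp[:, i:i + k, j:j + k]                  # k x k neighbourhood, all channels
            out[:, i, j] = np.tensordot(patch, kernel, axes=([1, 2], [0, 1]))
    return out

# toy usage with random weights and a small feature map
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))
w_gen = rng.standard_normal((9, 8)) * 0.1
print(involution2d(x, w_gen).shape)  # (8, 16, 16)
```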
Figures 1–9: the three logo challenges (large aspect ratio, multiple scales in one image, deformation); overview of LDI-Net (MRNAS, LD involution, ARM); the involution-based feature-map transformation and kernel construction; example detection results; and comparisons of dynamic R-CNN and LDI-Net over training iterations and on large-aspect-ratio, multiscale, and deformed logo images.
16 pages, 3179 KiB  
Article
From Nonlinear Dominant System to Linear Dominant System: Virtual Equivalent System Approach for Multiple Variable Self-Tuning Control System Analysis
by Jinghui Pan, Kaixiang Peng and Weicun Zhang
Entropy 2023, 25(1), 173; https://doi.org/10.3390/e25010173 - 15 Jan 2023
Viewed by 1638
Abstract
The stability and convergence analysis of multivariable stochastic self-tuning control (STC) systems is very difficult because of their highly nonlinear structure. In this paper, based on the virtual equivalent system method, a structurally nonlinear or nonlinearity-dominated multivariable self-tuning system is transformed into a structurally linear or linearity-dominated system, thus simplifying the stability and convergence analysis of multivariable STC systems. The control process of a multivariable stochastic STC system requires parameter estimation, and the estimates may behave in three ways: they may converge, converge to the actual values, or diverge. For these three cases, this paper provides four theorems and two corollaries, from which it can be concluded directly that convergence of the parameter estimates is a sufficient, but not a necessary, condition for the stability and convergence of stochastic STC systems; the four theorems and two corollaries are independent of the specific controller design strategy and the specific parameter estimation algorithm. The virtual equivalent system theory proposed in this paper needs no specific control strategy, parameters, or estimation algorithm, but only the properties of the system itself, to judge the stability and convergence of a self-tuning system, and it relaxes the dependence of the stability and convergence criterion on structural information about the system. The virtual equivalent system method is shown to be effective whether the parameter estimates converge, converge to the actual values, or diverge. Full article
(This article belongs to the Special Issue Nonlinear Control Systems with Recent Advances and Applications)
Figures 1–13: the stochastic self-tuning control system and its corresponding constant control system; the virtual equivalent systems and their decomposition subsystems for the cases in which the parameter estimates converge, converge to non-true values, or may not converge.
24 pages, 12917 KiB  
Article
A Semantic-Enhancement-Based Social Network User-Alignment Algorithm
by Yuanhao Huang, Pengcheng Zhao, Qi Zhang, Ling Xing, Honghai Wu and Huahong Ma
Entropy 2023, 25(1), 172; https://doi.org/10.3390/e25010172 - 15 Jan 2023
Cited by 11 | Viewed by 2676
Abstract
User alignment can associate multiple social network accounts of the same user. It has important research implications. However, the same user has various behaviors and friends across different social networks. This will affect the accuracy of user alignment. In this paper, we aim to improve the accuracy of user alignment by reducing the semantic gap between the same user in different social networks. Therefore, we propose a semantically enhanced social network user alignment algorithm (SENUA). The algorithm performs user alignment based on user attributes, user-generated contents (UGCs), and user check-ins. The interference of local semantic noise can be reduced by mining the user’s semantic features for these three factors. In addition, we improve the algorithm’s adaptability to noise by multi-view graph-data augmentation. Too much similarity of non-aligned users can have a large negative impact on the user-alignment effect. Therefore, we optimize the embedding vectors based on multi-headed graph attention networks and multi-view contrastive learning. This can enhance the similar semantic features of the aligned users. Experimental results show that SENUA has an average improvement of 6.27% over the baseline method at hit-precision30. This shows that semantic enhancement can effectively improve user alignment. Full article
(This article belongs to the Special Issue Entropy in Machine Learning Applications)
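The hit-precision30 figure quoted above is not defined in the abstract; the helper below assumes the hit-precision@k definition commonly used in the social-network user-alignment literature (a source user scores (k − rank + 1)/k when its true counterpart is ranked within the top k candidates, and 0 otherwise, averaged over test users).

```python
import numpy as np

def hit_precision_at_k(scores, true_idx, k=30):
    """Mean hit-precision@k, assuming the common user-alignment definition.

    scores   : (n_source, n_target) similarity matrix between candidate account pairs
    true_idx : array of length n_source with the index of each user's true counterpart
    """
    total = 0.0
    for row, t in zip(scores, true_idx):
        rank = 1 + int(np.sum(row > row[t]))          # rank of the true counterpart (1 = best)
        total += (k - rank + 1) / k if rank <= k else 0.0
    return total / len(true_idx)

# toy usage on a random similarity matrix
rng = np.random.default_rng(1)
sims = rng.random((100, 500))
truth = rng.integers(0, 500, size=100)
print(round(hit_precision_at_k(sims, truth, k=30), 4))
```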
Figures 1–9: user-alignment diagram; framework of the proposed algorithm; multi-level semantic feature representation; construction of the preference-sharing relationship; visualizations of user similarity before and after training on the Douban, Weibo, DBLP17, and DBLP19 data; comparisons with baselines; and the impacts of GAT and graph-data augmentation on user alignment for Douban-Weibo and DBLP17-DBLP19.
16 pages, 785 KiB  
Article
Willow Catkin Optimization Algorithm Applied in the TDOA-FDOA Joint Location Problem
by Jeng-Shyang Pan, Si-Qi Zhang, Shu-Chuan Chu, Hong-Mei Yang and Bin Yan
Entropy 2023, 25(1), 171; https://doi.org/10.3390/e25010171 - 14 Jan 2023
Cited by 9 | Viewed by 2295
Abstract
The heuristic optimization algorithm is a popular optimization method for solving optimization problems. A novel meta-heuristic algorithm was proposed in this paper, which is called the Willow Catkin Optimization (WCO) algorithm. It mainly consists of two processes: spreading seeds and aggregating seeds. In the first process, WCO tries to make the seeds explore the solution space to find the local optimal solutions. In the second process, it works to develop each optimal local solution and find the optimal global solution. In the experimental section, the performance of WCO is tested with 30 test functions from CEC 2017. WCO was applied in the Time Difference of Arrival and Frequency Difference of Arrival (TDOA-FDOA) co-localization problem of moving nodes in Wireless Sensor Networks (WSNs). Experimental results show the performance and applicability of the WCO algorithm. Full article
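For context on the localization problem that WCO is applied to (standard TDOA/FDOA definitions rather than anything specific to this paper): with a moving source at position p and velocity v, sensors at p_i with velocities v_i, carrier frequency f_c, propagation speed c, and sensor 0 as the reference, the measurements constrain range differences and range-rate differences,

$$c\,\Delta t_i = \lVert p - p_i\rVert - \lVert p - p_0\rVert,\qquad
\Delta f_i = \frac{f_c}{c}\left[\frac{(v - v_i)^{\top}(p - p_i)}{\lVert p - p_i\rVert} - \frac{(v - v_0)^{\top}(p - p_0)}{\lVert p - p_0\rVert}\right]$$

(up to the sign convention chosen for the Doppler shift), and the fitness function handed to a meta-heuristic such as WCO is typically the sum of squared residuals between these model values and the measured ones.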
Figures 1–7: meta-heuristic algorithm classification; obtaining u and v from wind vectors; the function curve of parameter a; flowchart of WCO; root mean square error and bias of each algorithm; and the TDOA/FDOA joint location model.
13 pages, 10690 KiB  
Article
Can One Series of Self-Organized Nanoripples Guide Another Series of Self-Organized Nanoripples during Ion Bombardment: From the Perspective of Power Spectral Density Entropy?
by Hengbo Li, Jinyu Li, Gaoyuan Yang, Ying Liu, Frank Frost and Yilin Hong
Entropy 2023, 25(1), 170; https://doi.org/10.3390/e25010170 - 14 Jan 2023
Viewed by 1947
Abstract
Ion bombardment (IB) is a promising nanofabrication tool for self-organized nanostructures. When ions bombard a nominally flat solid surface, self-organized nanoripples can be induced on the irradiated target surface; these are called the intrinsic nanoripples of the target material. As with other self-organization methods, the degree of ordering of the nanoripples is an outstanding issue to be overcome. In this study, the IB-induced nanoripples on bilayer systems with enhanced quality are revisited from the perspective of guided self-organization. First, power spectral density (PSD) entropy is introduced to evaluate the degree of ordering of the irradiated nanoripples; it is calculated from the PSD curve of an atomic force microscopy image (i.e., the Fourier transform of the surface height) and characterizes the degree of ordering of the nanoripples: the lower the PSD entropy, the higher the degree of ordering. Second, to deepen the understanding of the enhanced quality of nanoripples on bilayer systems, the temporal evolution of the nanoripples on the photoresist (PR)/antireflection coating (ARC) and Au/ARC bilayer systems is compared with that of single PR and ARC layers. Finally, we demonstrate that a series of intrinsic IB-induced nanoripples on the top layer may act as a kind of self-organized template that guides the development of another series of latent IB-induced nanoripples on the underlying layer, with the aim of improving the ripple ordering. A template with a self-organized nanostructure may alleviate the critical requirement for periodic templates with a small period of ~100 nm. The work may also provide inspiration for guided self-organization in other fields. Full article
(This article belongs to the Special Issue Recent Advances in Guided Self-Organization)
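A minimal sketch of the PSD-entropy idea described in the abstract, assuming the PSD of an AFM height map is normalized into a discrete probability distribution over spectral bins and the Shannon entropy is taken over those bins (the exact 1D reduction and binning used by the authors may differ):

```python
import numpy as np

def psd_entropy(height_map):
    """Shannon entropy of the normalized power spectral density of a height map.

    height_map : 2D array of surface heights (e.g., from an AFM scan).
    Lower entropy corresponds to power concentrated in a few spatial frequencies,
    i.e., a higher degree of ripple ordering.
    """
    h = height_map - height_map.mean()                 # remove the DC offset
    psd = np.abs(np.fft.rfft2(h)) ** 2                 # 2D power spectral density
    psd = psd.ravel()[1:]                              # drop the residual zero-frequency bin
    p = psd / psd.sum()                                # normalize to a probability distribution
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# toy comparison: ordered ripples vs. random roughness
y, x = np.mgrid[0:256, 0:256]
ripples = np.sin(2 * np.pi * x / 16)                   # well-ordered ripple pattern
rough = np.random.default_rng(2).standard_normal((256, 256))
print(psd_entropy(ripples) < psd_entropy(rough))       # True: ripples have lower PSD entropy
```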
Figures 1–8: AFM images and PSD curves of the irradiated ARC, PR, PR/ARC, and Au/ARC surfaces at 400 eV ion energy and a 50° ion-incidence angle for bombardment stages A–E; PSD entropy, overall/high-/low-frequency roughness, and ripple wavelength evolution for the four surfaces; and schematics of pattern transfer with a grating mask, curvature-dependent sputtering (Bradley–Harper), and guided self-organization during ion bombardment of a bilayer system.
16 pages, 3521 KiB  
Article
GRPAFusion: A Gradient Residual and Pyramid Attention-Based Multiscale Network for Multimodal Image Fusion
by Jinxin Wang, Xiaoli Xi, Dongmei Li, Fang Li and Guanxin Zhang
Entropy 2023, 25(1), 169; https://doi.org/10.3390/e25010169 - 14 Jan 2023
Cited by 7 | Viewed by 2173
Abstract
Multimodal image fusion aims to retain valid information from different modalities, remove redundant information to highlight critical targets, and maintain rich texture details in the fused image. However, current image fusion networks only use simple convolutional layers to extract features, ignoring global dependencies and channel contexts. This paper proposes GRPAFusion, a multimodal image fusion framework based on gradient residual and pyramid attention. The framework uses multiscale gradient residual blocks to extract multiscale structural features and multigranularity detail features from the source image. The depth features from different modalities were adaptively corrected for inter-channel responses using a pyramid split attention module to generate high-quality fused images. Experimental results on public datasets indicated that GRPAFusion outperforms the current fusion methods in subjective and objective evaluations. Full article
(This article belongs to the Special Issue Advances in Image Fusion)
Figures 1–9: example fusion results for FusionGAN, DenseFuse, and GRPAFusion; the overall framework (encoder, feature fusion layer, decoder); the MGR block and the PSA module; qualitative results of the parameter-sensitivity and network-architecture validation experiments; subjective comparisons on the Street, Nato_camp, Ship, Meeting, and Sandpath scenes of the TNO dataset; and eight objective evaluation metrics on 21 representative TNO image pairs.
14 pages, 1592 KiB  
Article
Super-Exponential Growth in Models of a Binary String World
by Marco Villani and Roberto Serra
Entropy 2023, 25(1), 168; https://doi.org/10.3390/e25010168 - 13 Jan 2023
Cited by 1 | Viewed by 2311
Abstract
The Theory of the Adjacent Possible (TAP) equation has been proposed as an appropriate description of super-exponential growth phenomena, where a phase of slow growth is followed by a rapid increase, leading to a “hockey stick” curve. This equation, initially conceived to describe the growth in time of the number of new types of artifacts, has also been applied to several natural phenomena. A possible drawback is that it may overestimate the number of new artifact types, since it does not take into account the fact that interactions among existing types may produce types which have already been discovered. We introduce here a Binary String World (BSW) where new string types can be generated by interactions among (at most two) already existing types. We introduce a continuous limit of the TAP equation for the BSW; we solve it analytically and show that it leads to divergence in finite time. We also introduce a criterion to distinguish this type of behavior from the familiar exponential growth, which diverges only as t → ∞. In the BSW, it is possible to directly model the generation of new types and to check whether the newborns are actually novel types, thus discarding rediscoveries of already existing types. We show that the type of growth is still TAP-like rather than exponential, although, of course, true divergence can never be observed in simulations. We also show that this property is robust with respect to some changes in the model, as long as it deals with types (and not with individuals). Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
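A sketch of the reasoning behind the finite-time divergence, assuming the pairwise-interaction form of the TAP equation suggested by the abstract (and by the parameters α1, α2, and P appearing in the figure captions): with at most two interacting types, the continuous limit reads

$$\frac{dM}{dt} = P\left[\alpha_1 M + \alpha_2 \binom{M}{2}\right] \approx \frac{P\,\alpha_2}{2}\,M^2 \quad \text{for large } M,$$

and a quadratic right-hand side makes 1/M decrease linearly in time, so M(t) blows up at a finite t*, whereas pure exponential growth dM/dt = αM diverges only as t → ∞; this is the basis of the criterion used to tell the two behaviors apart.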
Figure 1. (a) Time behavior of the continuous Equation (7) (red curve) and of the discrete version, Equation (4) (blue circles). The parameter values are α1 = 0.1, α2 = 0.9, P = 0.01 (discrete simulation) and P = 0.0095 (continuous case). The slight difference between the values of continuous P and discrete P is due to the difference between discrete computation (in which dynamics occur in finite steps of equal size, independently of the number of reactions involved) and continuous dynamics. (b) The exponential function is unable to follow the trend of the BTAP over its whole extension: if the slow initial part is well approximated, the fast final ascent is not well approached, while a good approximation of the final ascent does not allow a proper approximation of the initial part. We recall that the approximation of the final part is in any case only partial, because the exponential function does not diverge in finite time.
Figure 2. Number of existing string types in model BSSM and in the BTAP Equation (7). The values of the parameters α1 and α2 that best fit the simulation results are, respectively, 0.1 and 0.909. The corresponding divergence point of Equation (8) is also shown.
Figure 3. (a) The transformation of BSSM using Equation (10) does not show saturation, while an exponential growth of the number of types (with the same parameters as BSSM; in this case, α1 = 0.001, α2 = 0.9096 and P = 1) shows the typical behavior of a logistic function. (b) A magnification of the final part of (a), showing that the transformation of BSSM does not show any kind of saturation within the time duration of the simulation.
Figure 4. Number of existing string types in model BSSM2, interpolated using the BTAP equation; the divergence point of Equation (7) is also shown. (a) Case in which the coefficients of cleavages and condensations have the same value (Con1Cl1; interpolation with the TAP equation gives α1 = 0.00108 and α2 = 1.408). (b) Case in which the coefficient of the condensations is one hundredth of the coefficient of the cleavages (Con001Cl1; interpolation with the TAP equation gives α1 = 0.297 and α2 = 0.0585).
Figure 5. Number of chemical species present inside a CSTR and inside a container separated from the outside by a semi-permeable membrane (case “SemiP”). The CSTR flux and the environmental concentrations outside the container are set so that the final number of chemical species is comparable in the two cases. Both systems start with monomers, dimers and trimers inside; the flow entering the CSTR carries a constant concentration of monomers “A” and “B”, the same substances which, in the “SemiP” case, can pass the membrane and whose concentrations are buffered in the external environment.
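As a rough illustration of the contrast drawn in the abstract above between TAP-like and exponential growth, the following minimal sketch iterates a discrete two-body TAP-style update, M ← M + P(α1·M + α2·M(M−1)/2). The update rule and the parameter values are illustrative assumptions chosen to echo the α1, α2, P values quoted in the figure captions; they are not taken from the paper's Equations (4) and (7).

```python
# Minimal sketch (not the paper's code): a discrete TAP-like update with
# at most two-body combinations, compared against plain exponential growth.
# The specific update rule and parameter values are illustrative assumptions.

alpha1, alpha2, P = 0.1, 0.9, 0.01   # innovation rates and success probability
M_tap, M_exp = 5.0, 5.0              # initial number of types in both models

for t in range(60):
    # TAP-like: new types from single types and from unordered pairs of types
    births = P * (alpha1 * M_tap + alpha2 * M_tap * (M_tap - 1) / 2.0)
    M_tap += births
    # Exponential benchmark: growth proportional to the current number of types
    M_exp *= 1.0 + P * alpha1
    if t % 10 == 0 or M_tap > 1e12:
        print(f"t={t:3d}  TAP-like={M_tap:12.3e}  exponential={M_exp:12.3e}")
    if M_tap > 1e12:                 # the quadratic term makes growth explode
        print("TAP-like growth blows up in finite time; the exponential does not.")
        break
```

The slow initial phase followed by a sudden blow-up reproduces the "hockey stick" behavior described in the abstract, while the exponential benchmark barely moves on the same time scale.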
15 pages, 2103 KiB  
Article
Image Registration for Visualizing Magnetic Flux Leakage Testing under Different Orientations of Magnetization
by Shengping Li, Jie Zhang, Gaofei Liu, Nanhui Chen, Lulu Tian, Libing Bai and Cong Chen
Entropy 2023, 25(1), 167; https://doi.org/10.3390/e25010167 - 13 Jan 2023
Viewed by 1824
Abstract
The Magnetic Flux Leakage (MFL) visualization technique is widely used in the surface defect inspection of ferromagnetic materials. However, the information in the images detected through the MFL method is incomplete when the defect (especially a crack) is complex, and some information is lost when the specimen is magnetized unidirectionally. A multidirectional magnetization method is therefore proposed to fuse the images detected under different magnetization orientations. This raises a critical problem: existing image registration methods cannot be applied to align the images, because the images differ when detected under different magnetization orientations. This study presents a novel image registration method for MFL visualization to solve this problem. In order to evaluate the registration, and to fuse the information detected in different directions, the mutual information between the reference image and the MFL image calculated by the forward model is used as the measure. Furthermore, Particle Swarm Optimization (PSO) is used to optimize the registration process. Comparative experimental results demonstrate that this method achieves higher registration accuracy for the MFL images of complex cracks than existing methods. Full article
Figure 1. The distribution of the MFL field with DOM (simulated by COMSOL).
Figure 2. Registration schematic of MFL images under DOM.
Figure 3. Magneto-optical imaging: (a) schematic of MOI; (b) the MOI system.
Figure 4. Schematic of the process.
Figure 5. The proposed method.
Figure 6. The comparison of graphs along the row and column directions: (a) before transformation; (b) after transformation.
Figure 7. Example images: (a) the original image of the defect; (b) the first direction of magnetization; (c) the second direction of magnetization (the angle between (c) and (b) is about 85°–90°).
Figure 8. The segmentation of the captured image.
Figure 9. The solenoid in the specimen.
Figure 10. Schematic of solenoid interaction: (a) no interactions; (b) with interactions.
Figure 11. Comparison of the magnetic dipole model and the solenoid model: (a) magnetic dipole model; (b) solenoid model; (c) the actual MOI image.
Figure 12. Example of operational feasibility: (a) results of registration; (b) convergence of iterations.
Figure 13. The platform for MFL image detection.
Figure 14. The samples: (a) manual T-shaped crack; (b) natural one-line crack; (c) natural three-line crack.
Figure 15. Comparison of correlation coefficients with manual registration results.
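The registration measure described in the abstract above can be illustrated with a generic mutual-information computation between two grayscale images; an optimizer such as PSO would then search transform parameters that maximize this measure. The sketch below is a minimal illustration under stated assumptions: the histogram-based estimator, the synthetic test images and the bin count are choices of this example, and the paper's MFL forward model is not reproduced.

```python
# Minimal sketch (not the authors' code): mutual information between two
# grayscale images, the kind of similarity measure that an optimizer such as
# PSO can maximize over transform parameters during registration.
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Shannon mutual information estimated from a joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
shifted = np.roll(ref, shift=3, axis=1) + 0.05 * rng.random((64, 64))
print("MI(ref, ref)     =", round(mutual_information(ref, ref), 3))
print("MI(ref, shifted) =", round(mutual_information(ref, shifted), 3))
```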
25 pages, 1812 KiB  
Article
A Pseudorandom Number Generator Based on the Chaotic Map and Quantum Random Walks
by Wenbo Zhao, Zhenhai Chang, Caochuan Ma and Zhuozhuo Shen
Entropy 2023, 25(1), 166; https://doi.org/10.3390/e25010166 - 13 Jan 2023
Cited by 11 | Viewed by 3169
Abstract
In this paper, a surjective mapping that satisfies Li–Yorke chaos in the unit area is constructed, and a perturbation algorithm (disturbing its parameters and inputs through another high-dimensional chaos) is proposed to enhance the randomness of the constructed chaotic system and expand its key space. An algorithm for the composition of two systems (combining a sequence based on quantum random walks with the chaotic system's outputs) is designed to improve the distribution of the system outputs, and a compound chaotic system is ultimately obtained. The new compound chaotic system is evaluated using test methods such as time series complexity, autocorrelation and the distribution of output frequency. The test results show that the new system has complex dynamic behavior, such as high randomicity, unpredictability and a uniform output distribution. Then, a new scheme for generating pseudorandom numbers is presented utilizing the composite chaotic system. The proposed pseudorandom number generator (PRNG) is evaluated using a series of test suites, such as the NIST SP 800-22 software, as well as other tools and methods. The results of the tests are promising, as the proposed PRNG passed all of them. Thus, the proposed PRNG can be used in the information security field. Full article
(This article belongs to the Special Issue Information Network Mining and Applications)
Figure 1. Conical section in the unit region.
Figure 2. Lyapunov exponent of the map (4).
Figure 3. Bifurcation diagram of the map (4).
Figure 4. Iterative trajectory of the map (4).
Figure 5. Lyapunov exponent and bifurcation diagram of system (9) for k14.
Figure 6. Block diagram of the optimization scheme.
Figure 7. Trajectories of different chaotic systems.
Figure 8. Approximate entropy.
Figure 9. Histograms of the sequences generated by the digital chaotic system.
Figure 10. Autocorrelation analysis.
Figure 11. RQA measures for the proposed PRNG.
Figure 12. Normalized Shannon entropy Hs and intensive SCM Cj for the proposed PRNG.
Figure 13. The scale index of the Henon map, the logistic map and the proposed PRNG.
Figure 14. Windowed scale index of the Henon map, the logistic map and the proposed PRNG.
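As a generic illustration of how a chaotic map can be turned into a pseudorandom bit stream, the sketch below thresholds iterates of the logistic map. This is only a stand-in: the authors' surjective Li–Yorke map, the high-dimensional perturbation of parameters and the quantum-random-walk combination step are not reproduced, and the logistic map, seed and thresholding rule here are assumptions made for illustration.

```python
# Minimal sketch (not the paper's generator): turning iterates of a chaotic
# map into pseudorandom bytes. The logistic map stands in for the authors'
# Li-Yorke chaotic map; their quantum-random-walk mixing step is omitted.
def chaotic_bytes(seed: float, n_bytes: int, r: float = 3.99) -> bytes:
    x, out = seed, bytearray()
    for _ in range(100):                 # discard transient iterates
        x = r * x * (1.0 - x)
    for _ in range(n_bytes):
        byte = 0
        for _ in range(8):
            x = r * x * (1.0 - x)
            byte = (byte << 1) | (1 if x > 0.5 else 0)   # one bit per iterate
        out.append(byte)
    return bytes(out)

stream = chaotic_bytes(seed=0.123456789, n_bytes=16)
print(stream.hex())
```

A real design would, as the abstract stresses, still have to pass statistical batteries such as NIST SP 800-22 before any security use.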
16 pages, 552 KiB  
Article
Fluctuation Theorem for Information Thermodynamics of Quantum Correlated Systems
by Jung Jun Park and Hyunchul Nha
Entropy 2023, 25(1), 165; https://doi.org/10.3390/e25010165 - 13 Jan 2023
Cited by 2 | Viewed by 2035
Abstract
We establish a fluctuation theorem for an open quantum bipartite system that explicitly manifests the role played by quantum correlation. Generally quantum correlations may substantially modify the universality of classical thermodynamic relations in composite systems. Our fluctuation theorem finds a non-equilibrium parameter of genuinely quantum nature that sheds light on the emerging quantum information thermodynamics. Specifically we show that the statistics of quantum correlation fluctuation obtained in a time-reversed process can provide a useful insight into addressing work and heat in the resulting thermodynamic evolution. We illustrate these quantum thermodynamic relations by two examples of quantum correlated systems. Full article
(This article belongs to the Special Issue Quantum Thermodynamics: Fundamentals and Applications)
Figure 1. Two-level system undergoing an isothermal change of level splitting. The two states |0⟩ and |1⟩, which are initially degenerate, are split into |0′⟩ and |1′⟩ with distinct energy levels separated by an amount Δ.
Figure 2. Top: schematic of a quantum adiabatic process for a quantum correlated bipartite system. Bottom: thermodynamic quantities ⟨w⟩ (green), Δf_qc − β⁻¹ ln κ (blue) and ΔF_AB (orange).
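For the two-level splitting depicted in Figure 1, the ordinary (correlation-free) fluctuation relation can be checked in a few lines. The sketch below assumes a sudden quench treated with the standard two-point-measurement scheme and verifies the Jarzynski equality ⟨e^(−βw)⟩ = e^(−βΔF); the genuinely quantum correlation terms introduced in the paper (Δf_qc, κ) are not modeled here.

```python
# Minimal sketch (assumption: a sudden quench treated with the standard
# two-point-measurement scheme): checking the ordinary Jarzynski relation
# <exp(-beta*w)> = exp(-beta*dF) for the two-level splitting of Figure 1.
# The paper's genuinely quantum-correlation terms are not modeled here.
import math
import random

beta, delta = 1.0, 2.0                      # inverse temperature, level splitting
random.seed(1)

samples = []
for _ in range(200_000):
    occupied_upper = random.random() < 0.5  # degenerate initial thermal state
    w = delta if occupied_upper else 0.0    # work done when the level is raised
    samples.append(math.exp(-beta * w))

lhs = sum(samples) / len(samples)
dF = -math.log((1.0 + math.exp(-beta * delta)) / 2.0) / beta
print("<exp(-beta w)>  =", round(lhs, 4))
print("exp(-beta dF)   =", round(math.exp(-beta * dF), 4))
```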
12 pages, 398 KiB  
Article
Synchronization Transition of the Second-Order Kuramoto Model on Lattices
by Géza Ódor and Shengfeng Deng
Entropy 2023, 25(1), 164; https://doi.org/10.3390/e25010164 - 13 Jan 2023
Cited by 5 | Viewed by 2496
Abstract
The second-order Kuramoto equation describes the synchronization of coupled oscillators with inertia, which occur, for example, in power grids. In contrast to the first-order Kuramoto equation, its synchronization transition behavior is significantly less well known. In the case of Gaussian self-frequencies, it is discontinuous, in contrast to the continuous transition for the first-order Kuramoto equation. Herein, we investigate this transition on large 2D and 3D lattices and provide numerical evidence of hybrid phase transitions, whereby the oscillator phases θ_i exhibit a crossover, while the frequency spread undergoes a real phase transition in 3D. Thus, a lower critical dimension d_l^O = 2 is expected for the frequencies and d_l^R = 4 for the phases, as in the massless case. We provide numerical estimates for the critical exponents, finding that the frequency spread decays as t^(−d/2) in the case of an aligned initial state of the phases, in agreement with the linear approximation. In 3D, however, in the case of an initially random distribution of θ_i, we find a faster decay, characterized by t^(−1.8(1)), as a consequence of the enhanced nonlinearities which appear through the random phase fluctuations. Full article
(This article belongs to the Special Issue Non-equilibrium Phase Transitions)
Figure 1. The frequency spread in 2D at α = 3 for different K values (see legend), for L = 2000, in the case of ordered initial conditions. The dashed line marks a numerical fit at the critical point K_c = 3.4(1) with t^(−d/2). Inset: finite-size scaling of the frequency entrainment transition point K_c for various system sizes in 2D (black asterisks) and 3D (red boxes), for α = 3 and ordered initial conditions. One can see a logarithmic growth in 2D and a convergence to the constant value K_c = 1.15(5) in 3D.
Figure 2. The frequency spread in 2D at α = 0.4 for different K values (see legend), for L = 2000, using ordered initial conditions. The dashed line marks a numerical fit at the critical point K_c = 3.5(5) with t^(−1.03(3)). Inset: steady-state values obtained by starting from ordered (black bullets) and disordered (red boxes) initial conditions.
Figure 3. The frequency spread in 2D at α = 3 for different K values (see legend), for L = 2000, in the case of disordered initial conditions. The dashed line marks a numerical fit at the critical point K_c = 8.0(5) with t^(−1.09(5)). Inset: part of the hysteresis loop of R in 2D obtained with ordered (black bullets) and disordered (red boxes) initial conditions for α = 3 and L = 200.
Figure 4. Steady-state Kuramoto order parameter (black dots) in 2D and its variance (red squares) at α = 3 for different K values, for L = 200. Inset: R(t, L = 200) for K = 1, 2, 3, 5, 7, 8, 9, 10, 11, 12, 13, 14, 20, 25, 35, 45 (bottom to top curves).
Figure 5. Finite-size behavior of R in 2D for α = 3 and ordered initial conditions shows a crossover. Inset: finite-size scaling of K_c′ as estimated by the half values of R (black boxes) and by the σ(R) peaks (red bullets) exhibits a linear growth.
Figure 6. The frequency spread in 3D at α = 3 for K = 0.1, 0.2, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1, 1.05, 1.1, 2 (top-to-bottom curves), for lattices of linear size L = 200 and phase-ordered initial conditions. The dashed line marks a numerical fit at the critical point K_c = 1.02(2) with t^(−d/2). Inset: steady-state values obtained by starting from ordered (black bullets) and disordered (red boxes) initial conditions.
Figure 7. The frequency spread in 3D at α = 3 for different K values (see legend), for L = 200 and disordered initial conditions. The dashed line marks a numerical fit at the critical point K = K_c ≃ 7 with t^(−1.8(1)).
Figure 8. Finite-size behavior of R in 3D for α = 3 and ordered initial conditions shows a crossover. Inset: finite-size scaling of K_c′ as estimated by the half values of R (black bullets) and by the σ(R) peaks (red boxes) exhibits a power-law growth.
Figure A1. The frequency spread in 2D at α = 0.4 for different K values (see legend), for L = 2000, with disordered initial conditions. The dashed line marks a numerical fit at the critical point K_c = 9.5(5) with t^(−0.96(5)).
Figure A2. Steady-state Kuramoto order parameter (black '+') in 3D and its variance (red '*') at α = 3 for different K′ values, for L = 100. Inset: R(t, L = 100).
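A minimal sketch of the dynamics studied above: the second-order (inertial) Kuramoto equation, θ̈_i = −α θ̇_i + ω_i + K Σ_j sin(θ_j − θ_i), integrated on a small 2D periodic lattice with Gaussian self-frequencies. The lattice size, time step, explicit Euler integrator and parameter values are illustrative assumptions and are far smaller than the L = 2000 systems used in the paper.

```python
# Minimal sketch (not the authors' code): second-order (inertial) Kuramoto
# dynamics on a small 2D periodic lattice, integrated with an explicit
# semi-implicit Euler step. Lattice size, alpha, K and dt are illustrative.
import numpy as np

L, K, alpha, dt, steps = 20, 3.4, 3.0, 0.01, 2000
rng = np.random.default_rng(0)
omega0 = rng.normal(size=(L, L))          # Gaussian self-frequencies
theta = np.zeros((L, L))                  # ordered initial phases
omega = np.zeros((L, L))                  # initial phase velocities

def coupling(theta):
    """Sum of sin(theta_j - theta_i) over the four nearest neighbours."""
    total = np.zeros_like(theta)
    for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
        total += np.sin(np.roll(theta, shift, axis=axis) - theta)
    return total

for t in range(steps):
    acc = -alpha * omega + omega0 + K * coupling(theta)   # inertial Kuramoto
    omega += dt * acc
    theta += dt * omega
    if t % 500 == 0:
        R = abs(np.exp(1j * theta).mean())                # Kuramoto order parameter
        spread = omega.var()                              # frequency spread
        print(f"t={t*dt:6.2f}  R={R:.3f}  var(omega)={spread:.3f}")
```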
11 pages, 275 KiB  
Article
Entropy, Graph Homomorphisms, and Dissociation Sets
by Ziyuan Wang, Jianhua Tu and Rongling Lang
Entropy 2023, 25(1), 163; https://doi.org/10.3390/e25010163 - 13 Jan 2023
Cited by 1 | Viewed by 1623
Abstract
Given two graphs G and H, a mapping f: V(G) → V(H) is called a graph homomorphism from G to H if it maps adjacent vertices of G to adjacent vertices of H. For a graph G, a subset of vertices is called a dissociation set of G if it induces a subgraph of G containing no path of order three, i.e., a subgraph with maximum degree at most one. Graph homomorphisms and dissociation sets are two generalizations of the concept of independent sets. In this paper, by utilizing an entropy approach, we provide upper bounds on the number of graph homomorphisms from a bipartite graph G to a graph H and on the number of dissociation sets in a bipartite graph G. Full article
(This article belongs to the Special Issue Spectral Graph Theory, Topological Indices of Graph, and Entropy)
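To make the notion of a dissociation set concrete, the sketch below enumerates, by brute force, all vertex subsets of a small graph whose induced subgraph has maximum degree at most one. The example graph (the 6-cycle C6) is an arbitrary choice for illustration; the paper's entropy-based upper bounds are not implemented here.

```python
# Minimal sketch (illustrative only): brute-force enumeration of the
# dissociation sets of a small graph, i.e., vertex subsets whose induced
# subgraph has maximum degree at most one.
from itertools import combinations

def is_dissociation_set(edges, subset):
    sub = set(subset)
    degree = {v: 0 for v in sub}
    for u, v in edges:
        if u in sub and v in sub:       # edge inside the induced subgraph
            degree[u] += 1
            degree[v] += 1
    return all(d <= 1 for d in degree.values())

n = 6
edges = [(i, (i + 1) % n) for i in range(n)]   # the cycle C6
count = sum(
    1
    for k in range(n + 1)
    for subset in combinations(range(n), k)
    if is_dissociation_set(edges, subset)
)
print("Number of dissociation sets of C6:", count)
```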
35 pages, 12354 KiB  
Article
Numerical Calculation of the Irreversible Entropy Production of Additively Manufacturable Off-Set Strip Fin Heat-Transferring Structures
by Marco Fuchs, Nico Lubos and Stephan Kabelac
Entropy 2023, 25(1), 162; https://doi.org/10.3390/e25010162 - 13 Jan 2023
Cited by 4 | Viewed by 2007
Abstract
In this manuscript, off-set strip fin structures are presented which are adapted to the possibilities of additive manufacturing. For this purpose, the geometric parameters, including fin height, fin spacing, fin length, and fin longitudinal displacement, are varied, and the Colburn j-factor and the Fanning friction factor are numerically calculated in the Reynolds number range of 80–920. The structures are classified with respect to their entropy production number according to Bejan. This method is compared with the results from partial differential equations for the calculation of the irreversible entropy production rate due to shear stresses and heat conduction. This study reveals that the chosen temperature difference leads to deviation in terms of entropy production due to heat conduction, whereas the dissipation by shear stresses shows only small deviations of less than 2%. It is further shown that the variation in fin height and fin spacing has only a small influence on heat transfer and pressure drop, while a variation in fin length and fin longitudinal displacement shows a larger influence. With respect to the entropy production number, short and long fins, as well as large fin spacing and fin longitudinal displacement, are shown to be beneficial. A detailed examination of a single structure shows that the entropy production rate due to heat conduction is dominated by the entropy production rate in the wall, while the fluid has only a minor influence. Full article
(This article belongs to the Special Issue Applications of CFD in Heat and Fluid Flow Processes)
Figure 1. Left: schematic 3D model of the investigated structures with geometric parameters; right: schematic 3D model of the internal fin arrangement of the investigated section.
Figure 2. Three-dimensional model of the complete calculation domain with boundary conditions. Periodic conditions are also applied on the top grey fin and side wall areas. The inlet and outlet temperatures and the pressure drop are evaluated at the coloured inlet and outlet locations.
Figure 3. Schematic of the internal structures (hot or cold side) with the locations of the wall and fluid temperatures used to determine the heat transfer coefficient.
Figure 4. Left: three-dimensional model of the validation case with boundary conditions; right: locations of the fluid and wall temperatures used to determine the heat transfer coefficient.
Figure 5. Left: infinitesimal fluid element for Equation (27); right: simplified cross section of Figure 2 with the corresponding temperatures according to Figure 3. For simplification, the heat flow rate over the periodic boundary condition (dotted line) is added to the overall heat flow rate Q̇.
Figure 6. Colburn j-factor and Fanning f-factor for rectangular off-set strip fins (hot and cold sides), compared with values from the correlations (Equations (16)–(21)) of Manglik and Bergles [36], Chennu [39], and Joshi and Webb [37].
Figure 7. Comparison of the irreversible entropy production rate for the entire domain using the calculation method of Bejan [8], the 2nd law of thermodynamics [44], and the differential equations [7,17].
Figure 8. Comparison of the entropy production rate by heat conduction and shear stresses in the cold (left) and hot (right) fluids, using the method of Bejan [8] and the differential equations [17].
Figure 9. Left: entropy production rate by heat conduction in the hot/cold fluid, in the wall/fins, and the overall entropy production rate by heat conduction; right: relative proportions of the entropy generation by heat conduction of the hot/cold fluid and the wall. For the total entropy production rate and the entropy production rate in the wall and fins, the mean Reynolds number of the hot and cold sides is used.
Figure 10. Relative proportions of the molecular and fluctuating irreversible entropy production rates by heat conduction (left) and shear stresses (right) for the hot and cold fluids.
Figure 11. Left: entropy production rate by heat conduction and shear stresses in the fluid and the wall; right: relative share of the fluid and wall entropy production rates compared to the overall entropy production rate. For the entropy production rate in the wall, the mean Reynolds number of the hot and cold sides is used.
Figure 12. Bejan number for the hot and cold fluids.
Figure 13. Local volumetric entropy production rate by shear stress (mean values and fluctuations) at Re = 110 (left) and Re = 713 (right) for the cold fluid side (flow direction: bottom to top); slicing plane for the contour plot.
Figure 14. Local volumetric entropy production rate by heat conduction (mean and fluctuating values) at Re = 110 (left) and Re = 713 (right) for the cold fluid side.
Figure 15. Cross-sectional view of the local volumetric entropy production rate by shear stress at Re = 110 (left) and Re = 713 (right) for the cold fluid side; slicing plane for the contour plot.
Figure 16. Cross-sectional view of the local volumetric entropy production rate by heat conduction at Re = 110 (left) and Re = 713 (right) for the cold fluid side.
Figure 17. (a) Entropy production number due to shear stresses for the cold side; (b) entropy production number due to shear stresses for the hot side for different fin heights; (c) entropy production number due to heat conduction in the hot/cold fluid and the walls/fins; (d) overall entropy production number for different fin heights; (e,f) j-factor, f-factor, and Nusselt number of the hot/cold side for different fin heights.
Figure 18. Same layout as Figure 17, but for different fin spacings.
Figure 19. Same layout as Figure 17, but for different fin lengths.
Figure 20. Same layout as Figure 17, but for different longitudinal fin displacements.
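The quantities post-processed from the CFD fields in the article above can be illustrated on a toy 1D profile: the standard local volumetric entropy production rates due to heat conduction, k(dT/dx)²/T², and to viscous dissipation, μ(du/dx)²/T, together with the Bejan number formed from their ratio. The profiles, property values and grid in the sketch are assumptions made for illustration, not the paper's geometry or solver output.

```python
# Minimal sketch (illustrative, not the paper's CFD post-processing): local
# volumetric entropy production rates from heat conduction and shear stress,
# S'''_heat = k (dT/dx)^2 / T^2 and S'''_visc = mu (du/dx)^2 / T, evaluated
# for simple assumed 1D temperature and velocity profiles across a gap.
import numpy as np

k, mu = 0.026, 1.8e-5                   # air-like conductivity and viscosity
x = np.linspace(0.0, 1e-3, 200)         # 1 mm gap
T = 300.0 + 50.0 * x / x[-1]            # assumed linear temperature rise of 50 K
u = 2.0 * x / x[-1]                     # assumed linear velocity profile, 2 m/s at wall

dTdx = np.gradient(T, x)
dudx = np.gradient(u, x)
s_heat = k * dTdx**2 / T**2             # W/(m^3 K), conduction contribution
s_visc = mu * dudx**2 / T               # W/(m^3 K), dissipation contribution

print("mean S'''_heat =", f"{s_heat.mean():.3e}", "W/(m^3 K)")
print("mean S'''_visc =", f"{s_visc.mean():.3e}", "W/(m^3 K)")
print("Bejan number Be =", round(float(s_heat.sum() / (s_heat + s_visc).sum()), 4))
```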
12 pages, 1209 KiB  
Article
Ray-Stretching Statistics and Hot-Spot Formation in Weak Random Disorder
by Sicong Chen and Lev Kaplan
Entropy 2023, 25(1), 161; https://doi.org/10.3390/e25010161 - 13 Jan 2023
Cited by 1 | Viewed by 1741
Abstract
Weak scattering in a random disordered medium and the associated extreme-event statistics are of great interest in various physical contexts. Here, in the context of non-relativistic particle motion through a weakly correlated random potential, we show how extreme events in particle densities are strongly related to the stretching exponents, where the 'hot spots' in the intensity profile correspond to minima in the stretching exponents. This strong connection is expected to be valid for different random potential distributions, as long as the disorder is correlated and weak, and is also expected to apply to other physical contexts, such as deep ocean waves. Full article
Figure 1. (a) A Gaussian random potential generated on a 256 × 256 grid with zero mean, standard deviation σ = 0.1, and a Gaussian two-point correlation function with correlation length ξ = 5. The potential is constructed by starting with 256² delta-shaped peaks on a square grid with unit spacing and then convolving with a Gaussian, as described in Equation (5). (b,c) The ray density and stretching exponent of particles scattered in the random potential field shown in (a), with x and y rescaled to a 512 × 512 grid. Panel (b) shows the particle density I(x, y), with intensity normalized to unity before scattering, and (c) shows the normalized average stretching exponent α(x, y). The correlation is clearly negative; i.e., higher densities in panel (b) are associated with lower stretching exponents in panel (c).
Figure 2. Scatter plots of density vs. stretching exponent before, during, and after the first generation of hot spots (around y = 80–130). Panels (a–d) show data for y = 50 (before), y = 100 (during), y = 125 (during), and y = 400 (after), respectively.
Figure 3. (a) Probability distribution of the particle density I in the region 60 < y < 500. (b) Average stretching exponent α in the same region, with each data point corresponding to one bin in panel (a).
Figure 4. Evolution of intensity and stretching-exponent statistics as a function of forward travel distance y: (a) average and variance of the density I(x, y); (b) average and variance of the stretching exponent α(x, y).
12 pages, 457 KiB  
Article
Locality, Realism, Ergodicity and Randomness in Bell’s Experiment
by Alejandro Andrés Hnilo
Entropy 2023, 25(1), 160; https://doi.org/10.3390/e25010160 - 13 Jan 2023
Cited by 1 | Viewed by 1654
Abstract
Assuming that there is no way of sending signals propagating faster than light and that free will exists, the loophole-free observed violation of Bell’s inequalities demonstrates that at least one of three fundamental hypotheses involved in the derivation and observation of the inequalities is false: Locality, Realism, or Ergodicity. An experiment is proposed to obtain some evidence about which one is the false one. It is based on recording the time evolution of the rate of non-random series of outcomes that are generated in a specially designed Bell’s setup. The results of such experiment would be important not only to the foundations of Quantum Mechanics, but they would also have immediate practical impact on the efficient use of quantum-based random number generators and the security of Quantum Key Distribution using entangled states. Full article
(This article belongs to the Special Issue Quantum Mechanics and Its Foundations III)
Figure 1. Sketch of a Bell setup and the proposed experiment. Source S emits pairs of photons maximally entangled in polarization; analyzers are set at angles {α, β} at stations A and B placed at distance L; single-photon detectors are placed at the output gates of the analyzers, producing binary time series. In the proposed experiment, pairs are emitted during well-separated pulses of duration ≈2L/c; the settings {α, β} are changed just before the arrival of each pulse and remain fixed for the pulses' duration. Hence, local (i.e., spatially isolated) measurements at A and B are enforced during the first half of the pulses (t < L/c), but not during the second half (t > L/c). Measurements during the first (second) half of the pulses are hence performed in loophole-free (not loophole-free) conditions. Therefore, observing the violation of Bell's inequalities during the first half implies that at least one of the three hypotheses (Locality, Realism, and Ergodicity) necessary to derive and use Bell's inequalities must be false. On the other hand, observing the violation during the second half is possible even if the three hypotheses are true. Depending on which one of the three hypotheses is false, the level of randomness of the series recorded in the first and second halves may be different.
10 pages, 5152 KiB  
Article
EspEn Graph for the Spatial Analysis of Entropy in Images
by Ricardo Alonso Espinosa Medina
Entropy 2023, 25(1), 159; https://doi.org/10.3390/e25010159 - 12 Jan 2023
Viewed by 1914
Abstract
The quantification of entropy in images is a topic of interest that has found applications in agronomy, product generation and medicine. Several algorithms have been proposed to quantify the irregularity present in an image; however, the computational cost for large images and the reliability of measurements for small images remain open challenges. In this research we propose an algorithm, EspEn Graph, which allows the quantification and graphic representation of the irregularity present in an image, revealing where the more and less irregular textures are located. EspEn is used to calculate the entropy because it provides reliable and stable measurements for small images. This allows an image to be subdivided into small sections, the entropy to be calculated in each section, and the values then to be converted so as to graphically show the regularity present in the image. In conclusion, the EspEn Graph returns information on the spatial regularity of an image containing different textures, and the average of these entropy values provides a reliable measure of the overall entropy of the image. Full article
(This article belongs to the Special Issue Measures of Information II)
Figure 1. Representation of the calculation of EspEn in one grain of the EspEn Graph algorithm. Red represents greater irregularity, blue represents greater regularity, and greenish-yellow colors represent intermediate values of irregularity.
Figure 2. Images from the normalized Brodatz texture database (NBT) used in the second numerical experiment. Top row: regular images (low entropy value); middle row: moderately irregular images (intermediate entropy value); bottom row: irregular images (high entropy value).
Figure 3. Synthetic images with homogeneous fractions of regular and irregular shapes (ImMIX, (a–c)). The result of the EspEn Graph for each case (d–f) shows the most irregular areas in red, the most regular areas in blue, and the areas with intermediate degrees of contamination in orange and light blue. The distribution pattern of the original images with different sections (a–c) is maintained in the results (d–f).
Figure 4. Result of the EspEn Graph for ImMIX images with different spatial distributions of noise contamination, in equal amounts and different sizes. Images (a–c) show greater detail in the identification of regularity through colors (red = irregular, dark blue = regular, light blue = 33% contamination, orange = 66% contamination); images (d–f) show a clear differentiation of the regularity with less detail; images (g–i) show a differentiation of the regularity of the image that is not very detailed but still identifiable.
Figure 5. Mean ± std of the EspEn Graph for the different groups of ImNBT images.
Figure 6. ImNBT images with different textures and different predominance of irregularity (low, middle and high) and their respective EspEn Graph images with the mean values of the EspEn Graph entropy.
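The tiling idea behind the EspEn Graph, namely subdividing the image into small grains, computing an entropy value per grain and assembling the values into a spatial map, can be sketched as follows. Plain histogram-based Shannon entropy is used here as a stand-in for EspEn itself, and the grain size, bin count and synthetic test image are assumptions made for illustration.

```python
# Minimal sketch (illustrative): split an image into small square "grains",
# compute an entropy value per grain, and collect the values into a
# low-resolution map. Plain Shannon entropy of the pixel histogram is used
# here as a stand-in for the EspEn measure itself.
import numpy as np

def tile_entropy_map(image, grain=16, bins=16):
    h, w = image.shape
    rows, cols = h // grain, w // grain
    emap = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            tile = image[i*grain:(i+1)*grain, j*grain:(j+1)*grain]
            p, _ = np.histogram(tile, bins=bins, range=(0.0, 1.0))
            p = p / p.sum()
            p = p[p > 0]
            emap[i, j] = -(p * np.log2(p)).sum()      # Shannon entropy in bits
    return emap

rng = np.random.default_rng(0)
img = np.zeros((128, 128))
img[:, :64] = rng.random((128, 64))      # irregular (noisy) half
img[:, 64:] = 0.5                        # regular (constant) half
emap = tile_entropy_map(img)
print("mean entropy, noisy half   :", round(emap[:, : emap.shape[1] // 2].mean(), 3))
print("mean entropy, constant half:", round(emap[:, emap.shape[1] // 2 :].mean(), 3))
```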
39 pages, 738 KiB  
Article
On Regularized Systems of Equations for Gas Mixture Dynamics with New Regularizing Velocities and Diffusion Fluxes
by Alexander Zlotnik and Timofey Lomonosov
Entropy 2023, 25(1), 158; https://doi.org/10.3390/e25010158 - 12 Jan 2023
Cited by 6 | Viewed by 1699
Abstract
We deal with multidimensional regularized systems of equations for the one-velocity and one-temperature inert gas mixture dynamics consisting of the balance equations for the mass of components and the momentum and total energy of the mixture, with diffusion fluxes between the components as well as the viscosity and heat conductivity terms. The regularizations are kinetically motivated and aimed at constructing conditionally stable symmetric in space discretizations without limiters. We consider a new combined form of regularizing velocities containing the total pressure of the mixture. To confirm the physical correctness of the regularized systems, we derive the balance equation for the mixture entropy with the non-negative entropy production, under generalized assumptions on the diffusion fluxes. To confirm nice regularizing properties, we derive the systems of equations linearized at constant solutions and provide the existence, uniqueness and L2-dissipativity of weak solutions to an initial-boundary problem for them. For the original systems, we also discuss the related Petrovskii parabolicity property and its important corollaries. In addition, in the one-dimensional case, we also present the special three-point and symmetric finite-difference discretization in space of the regularized systems and prove that it inherits the entropy correctness property. We also give results of numerical experiments confirming that the discretization is able to simulate well various dynamic problems of contact between two different gases. Full article
(This article belongs to the Section Multidisciplinary Applications)
Figure 1. Example 1, case A. Results for a = 0.5, β = 0.7, N = 251 (orange) and 1001 (brown) at t = 0.2. Hereafter, the graphs for the two values of N almost coincide, except in the vicinities of the contact discontinuity and the shock wave.
Figure 2. Example 1, case B. Results for a = 0.2, β = 0.1, N = 251 (bronze) and 1001 (red) at t = 0.2.
Figure 3. Example 2, case A. Results for a = 0.5, β = 0.4, N = 501 (orange) and 2001 (brown) at t = 0.0002.
Figure 4. Example 2, case B. Results for a = 0.5, β = 0.4, N = 501 (bronze) and 2001 (red) at t = 0.0002.
Figure 5. Example 3, case A. Results for a = 0.25, β = 0.4, N = 1001 (orange) and 4001 (brown) at t = 0.011.
Figure 6. Example 3, case C. Results for a = 0.25, β = 0.2, N = 1001 (copper) and 4001 (purple) at t = 0.011. The graphs of ρ1 have high “fingers” at the contact discontinuity.
Figure 7. Example 4, case A. Results for a = 0.5, β = 0.2, N = 1001 (orange) and 4001 (brown) at t = 4.
Figure 8. Example 4, case B. Results for a = 0.5, β = 0.2, N = 1001 (bronze) and 4001 (red) at t = 4.
16 pages, 1036 KiB  
Article
Dynamics of Quantum Networks in Noisy Environments
by Chang-Yue Zhang, Zhu-Jun Zheng, Shao-Ming Fei and Mang Feng
Entropy 2023, 25(1), 157; https://doi.org/10.3390/e25010157 - 12 Jan 2023
Cited by 1 | Viewed by 2179
Abstract
Noise exists inherently in realistic quantum systems and affects the evolution of quantum systems. We investigate the dynamics of quantum networks in noisy environments by using the fidelity of the quantum evolved states and the classical percolation theory. We propose an analytical framework that allows us to characterize the stability of quantum networks in terms of quantum noises and network topologies. The calculation results of the framework determine the maximal time that quantum networks with different network topologies can maintain the ability to communicate under noise. We demonstrate the results of the framework through examples of specific graphs under amplitude damping and phase damping noises. We further consider the capacity of the quantum network in a noisy environment according to the proposed framework. The analytical framework helps us better understand the evolution time of a quantum network and provides a reference for designing large quantum networks. Full article
(This article belongs to the Special Issue New Advances in Quantum Communication and Networks)
Figure 1. Entanglement swapping in the quantum network. Alice and Bob can establish a quantum channel through two steps. Step (a) is selecting a path from a quantum network, such as A → R → B or A → R1 → R2 → ⋯ → RN → B. Step (b) is that intermediate nodes perform Bell state measurements to build a long-distance entanglement between the communication parties.
Figure 2. Schematic diagrams of bond percolation and site percolation. (a) Bond percolation; (b) Site percolation.
Figure 3. The fidelities of Bell states under amplitude damping and phase damping noises. For simulation, we consider the following parameters as identical: τ1 = τ2 = γ1 = γ2 = τ.
Figure 4. Quantum network in a one-dimensional chain.
Figure 5. Quantum networks in two-dimensional lattices. (a) Square; (b) Triangle; (c) Honeycomb.
Figure 6. The distribution of the size of GCC for square quantum networks under amplitude damping and phase damping noises. The simulations were performed with N = M × M, M = 50, 100, 120.
Figure 7. The critical times for triangle, square and honeycomb quantum networks. The three intersection points are (F1, T1) = (0.6527, 0.0810), (F2, T2) = (0.5, 0.2117) and (F3, T3) = (0.3473, 1.4044), respectively.
Figure 8. The graphical representation of the distribution of connected components generated by H1(x).
Figure 9. The distribution of the size of GCC for ER quantum networks under amplitude damping and phase damping noise.
Figure 10. Flow chart of finding all the paths between the source and destination nodes by using a greedy algorithm.
19 pages, 603 KiB  
Article
Lossless Image Coding Using Non-MMSE Algorithms to Calculate Linear Prediction Coefficients
by Grzegorz Ulacha and Mirosław Łazoryszczak
Entropy 2023, 25(1), 156; https://doi.org/10.3390/e25010156 - 12 Jan 2023
Cited by 6 | Viewed by 2275
Abstract
This paper presents a lossless image compression method with a fast decoding time and flexible adjustment of coder parameters affecting its implementation complexity. A comparison of several approaches for computing non-MMSE prediction coefficients with different levels of complexity was made. The data modeling stage of the proposed codec was based on linear (calculated by the non-MMSE method) and non-linear (complemented by a context-dependent constant component removal block) predictions. Prediction error coding uses a two-stage compression: an adaptive Golomb code and a binary arithmetic code. The proposed solution results in 30% shorter decoding times and a lower bit average than competing solutions (by 7.9% relative to the popular JPEG-LS codec). Full article
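As a rough illustration of non-MMSE coefficient fitting, the sketch below minimizes the sum of |e|^M prediction errors with a generic optimizer. It is not the ISA procedure from the paper; the synthetic data, SciPy's Nelder–Mead routine, and the helper name non_mmse_coefficients are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize

def non_mmse_coefficients(X, y, M=0.75):
    """Prediction coefficients minimizing sum(|y - X w|**M); M = 2 recovers MMSE."""
    w0, *_ = np.linalg.lstsq(X, y, rcond=None)              # least-squares start point
    cost = lambda w: np.sum((np.abs(y - X @ w) + 1e-12) ** M)
    res = minimize(cost, w0, method="Nelder-Mead",
                   options={"maxiter": 20000, "maxfev": 20000})
    return res.x

# Synthetic stand-in for causal-neighborhood data (r = 8 neighbors per pixel),
# not taken from the paper's test images.
rng = np.random.default_rng(1)
X = rng.integers(0, 256, size=(400, 8)).astype(float)
y = X @ rng.normal(size=8) + rng.laplace(scale=2.0, size=400)
print(non_mmse_coefficients(X, y, M=0.75).round(3))
```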
(This article belongs to the Special Issue Advances in Information and Coding Theory)
Figure 1. Neighborhood pixel numbering.
Figure 2. Entropy as a function of the parameter M for the Noisesquare image encoded with the linear predictor of the order r = 8.
Figure 3. Average entropy value (for the set of 45 test images) as a function of the prediction order obtained using four minimization criteria: MMSE (M = 2)—marked with diamonds; MMAE (M = 1)—marked with squares; Minkowski (M = 0.75)—marked with triangles; entropy function H—marked with circles.
Figure 4. The average value of entropy as a function of N fractional bits of precision of the prediction coefficients (for the set of 45 images at r = 14) obtained by two methods: classic MMSE—dashed line, ISA based on Formula (6) at M = 2—solid line.
Figure 5. Block diagram of the proposed cascade coding.
Figure 6. A base test set of nine pictures (Ballon, Barb, Barb2, Board, Boats, Girl, Gold, Hotel, Zelda).
10 pages, 1036 KiB  
Article
Thermoelectric Cycle and the Second Law of Thermodynamics
by Ti-Wei Xue and Zeng-Yuan Guo
Entropy 2023, 25(1), 155; https://doi.org/10.3390/e25010155 - 12 Jan 2023
Cited by 5 | Viewed by 2860
Abstract
In 2019, Schilling et al. claimed that they achieved the supercooling of a body without external intervention in their thermoelectric experiments, thus arguing that the second law of thermodynamics was bent. Kostic suggested that their claim lacked full comprehension of the second law of thermodynamics. A review of history shows that what Clausius referred to as the second law of thermodynamics is the theorem of the equivalence of transformations (unfairly ignored historically) in a reversible heat–work cycle, rather than the statement that “heat can never pass from a cold to a hot body without some other change”, which Clausius viewed only as a natural phenomenon. Here, we propose the theorem of the equivalence of transformations for reversible thermoelectric cycles. The analysis shows that the supercooling phenomenon Schilling et al. observed is achieved by a reversible combined power–refrigeration cycle. According to the theorem of the equivalence of transformations in reversible thermoelectric cycles, reducing the body temperature below the ambient temperature requires the body itself to have started at a temperature above ambient as compensation. Far from bending the second law, the supercooling phenomenon provides strong evidence for it. Full article
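For readers unfamiliar with the equivalence-of-transformations argument, a minimal reversible three-reservoir balance can be written down; the symbols below are generic and are not taken from the paper's notation.

```latex
% Reversible combined power-refrigeration cycle with no net external work,
% exchanging heat with a hot body (T_h), the ambient (T_r) and a cold body (T_c),
% with T_c < T_r < T_h and all Q_i taken as positive magnitudes (assumed conventions):
Q_h + Q_c = Q_r, \qquad
\frac{Q_h}{T_h} + \frac{Q_c}{T_c} - \frac{Q_r}{T_r} = 0 .
% The negative transformation (lifting Q_c from below the ambient temperature)
% must be compensated by the positive transformation associated with Q_h,
% i.e., by a body that starts hotter than the ambient.
```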
Figure 1. Schilling et al.’s thermoelectric experiments [1]. Tb(0) denotes the initial temperature of the body, Tb,min denotes the minimum temperature the body can reach, Tr denotes the ambient temperature and τ denotes time. Π is the Peltier element, L is the electric inductance and I is the electric current.
Figure 2. Thermodynamic cycle with three heat sources [5]. (a) p-V diagram; (b) t-S diagram.
Figure 3. Kelvin’s thermoelectric circuit [21].
Figure 4. Schematic of transformations in a reversible combined power–refrigeration cycle.
Figure 5. Combined cycle in Schilling et al.’s thermoelectric experiments.
Figure 6. Schematic of transformations in Schilling et al.’s combined cycle. (a) Normal expression. (b) Simplified expression.
39 pages, 1240 KiB  
Article
Analysis of Kernel Matrices via the von Neumann Entropy and Its Relation to RVM Performances
by Lluís A. Belanche-Muñoz and Małgorzata Wiejacha
Entropy 2023, 25(1), 154; https://doi.org/10.3390/e25010154 - 12 Jan 2023
Cited by 2 | Viewed by 1924
Abstract
Kernel methods have played a major role in the last two decades in the modeling and visualization of complex problems in data science. The choice of kernel function remains an open research area and the reasons why some kernels perform better than others are not yet understood. Moreover, the high computational costs of kernel-based methods make it extremely inefficient to use standard model selection methods, such as cross-validation, creating a need for careful kernel design and parameter choice. These reasons justify the prior analyses of kernel matrices, i.e., mathematical objects generated by the kernel functions. This paper explores these topics from an entropic standpoint for the case of kernelized relevance vector machines (RVMs), pinpointing desirable properties of kernel matrices that increase the likelihood of obtaining good model performances in terms of generalization power, as well as relate these properties to the model’s fitting ability. We also derive a heuristic for achieving close-to-optimal modeling results while keeping the computational costs low, thus providing a recipe for efficient analysis when processing resources are limited. Full article
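The central quantity, the von Neumann entropy of a kernel matrix, is simple to compute. The sketch below uses the common convention of trace-normalizing the kernel matrix and summing −λ log λ over its eigenvalues; the RBF kernel, the synthetic data, and the log base are illustrative assumptions, and the paper's exact normalization may differ.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gaussian (RBF) kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def von_neumann_entropy(K, base=None):
    """Entropy -sum(l_i * log l_i) of the trace-normalized kernel matrix."""
    lam = np.linalg.eigvalsh(K)
    lam = np.clip(lam, 0.0, None)
    lam = lam / lam.sum()
    lam = lam[lam > 0]
    h = -np.sum(lam * np.log(lam))
    return h / np.log(base) if base else h

# Synthetic data, purely for illustration of the entropy/kernel-width trend.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
for gamma in (0.01, 0.1, 1.0, 10.0):
    print(gamma, round(von_neumann_entropy(rbf_kernel(X, gamma), base=2), 3))
```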
Figure 1. The von Neumann entropy of a matrix generated with a uniform eigenvalue distribution with no noise (left), Gaussian noise with standard deviation = 0.1 (middle), and Gaussian noise with standard deviation = 1 (right).
Figure 2. The von Neumann entropy of a matrix generated with eigenvalues decreasing linearly with no noise (left), Gaussian noise with standard deviation = 10 (middle), and Gaussian noise with standard deviation = 100 (right).
Figure 3. The von Neumann entropy of a matrix generated with eigenvalues decreasing along a hyperbola with varying offset ranging from a = 0, a = 0.1, a = 1 to a = 10, respectively, from left to right.
Figure 4. The von Neumann entropy of a matrix generated with a varying number of non-zero eigenvalues: 1, 5, 10, 25, 50, 75, 90, 99, respectively, from the top left to the bottom right.
Figure 5. Effect of normalizing the kernel matrix and scaling data rows to unit length on the von Neumann entropy, depending on the degree of the normalized polynomial kernel per dataset.
Figure 6. Effect of scaling data rows to unit length on the von Neumann entropy depending on the kernel width of the RBF kernel per dataset.
Figure 7. Modeling the relationship between the RBF kernel width and the von Neumann entropy (in black) using a logarithm of a chosen base (in red) per dataset.
Figure 8. Modeling the relationship between a normalized polynomial kernel degree and the von Neumann entropy (in black) using a logarithm of a chosen base (in red) per dataset.
Figure 9. Fitting ability of an RVM model with the RBF kernel: training NMSE against the von Neumann entropy across 30 train–test split folds.
Figure 10. Fitting ability of an RVM model with the normalized polynomial kernel: training NMSE against the von Neumann entropy across 30 train–test split folds.
Figure 11. Fitting ability of an RVM model with an ELM kernel: training NMSE against the von Neumann entropy across 30 train–test split folds.
Figure 12. Generalization power of an RVM model with the RBF kernel across 30 train–test split folds.
Figure 13. Generalization power of an RVM model with the normalized polynomial kernel across 30 train–test split folds.
Figure 14. Generalization power of an RVM model with the RBF kernel on a single train–test split fold and a polynomial regression of the degree 2 curve.
Figure 15. Generalization power of an RVM model with the normalized polynomial kernel on a single train–test split fold and a polynomial regression of the degree 2 curve.
Figure 16. Illustration of the procedure used to transform a non-monotonic relationship into a monotonic one.
Figure 17. Relationship between the von Neumann entropy and test NMSE for the RBF kernel after taking a square root and reversing the NMSE sign to the left of the minimum of the parabola.
Figure 18. Relationship between the von Neumann entropy and test NMSE for the normalized polynomial kernel after taking a square root and reversing the NMSE sign to the left of the minimum of the parabola.
Figure 19. The fitting ability of an RVM model with the RBF kernel (left) and normalized polynomial kernel (right): training NMSE against the von Neumann entropy across 5 train–test split folds.
Figure 20. Generalization power of an RVM model with the RBF kernel (left) and normalized polynomial kernel (right): test NMSE against the von Neumann entropy across 5 train–test split folds.
Figure 21. Generalization power of an RVM model with the RBF kernel (left) and normalized polynomial kernel (right) on a single train–test split fold and a polynomial regression of the degree 2 curve.
Figure 22. Correlation between the von Neumann entropy and test NMSE for the RBF kernel (left) and normalized polynomial kernel (right) after taking a square root and reversing the NMSE sign to the left of the minimum of the parabola.
16 pages, 1750 KiB  
Article
Entanglement of Signal Paths via Noisy Superconducting Quantum Devices
by Wenbo Shi and Robert Malaney
Entropy 2023, 25(1), 153; https://doi.org/10.3390/e25010153 - 12 Jan 2023
Cited by 2 | Viewed by 2008
Abstract
Quantum routers will provide for important functionality in emerging quantum networks, and the deployment of quantum routing in real networks will initially be realized on low-complexity (few-qubit) noisy quantum devices. A true working quantum router will represent a new application for quantum entanglement—the coherent superposition of multiple communication paths traversed by the same quantum signal. Most end-user benefits of this application are yet to be discovered, but a few important use-cases are now known. In this work, we investigate the deployment of quantum routing on low-complexity superconducting quantum devices. In such devices, we verify the quantum nature of the routing process as well as the preservation of the routed quantum signal. We also implement quantum random access memory, a key application of quantum routing, on these same devices. Our experiments then embed a five-qubit quantum error-correcting code within the router, outlining the pathway for error-corrected quantum routing. We detail the importance of the qubit-coupling map for a superconducting quantum device that hopes to act as a quantum router, and experimentally verify that optimizing the number of controlled-X gates decreases hardware errors that impact routing performance. Our results indicate that near-term realization of quantum routing using noisy superconducting quantum devices within real-world quantum networks is possible. Full article
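The routing primitive itself, a control qubit in superposition steering a signal between two path qubits via a controlled-SWAP, can be sketched in a few lines. The snippet below assumes Qiskit as the toolchain and uses an illustrative gate placement and qubit ordering, not the exact circuit of the paper.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector, partial_trace

# Qubit 0: signal path A, qubit 1: signal path B, qubit 2: control (assumed layout).
qc = QuantumCircuit(3)
qc.x(0)            # toy "signal": put |1> on path A as a stand-in for an arbitrary state
qc.h(2)            # control in superposition -> superposition of routes
qc.t(2)            # relative pi/4 phase on the control (illustrative)
qc.cswap(2, 0, 1)  # controlled-SWAP: route the signal to path B when the control is |1>

state = Statevector.from_instruction(qc)
print(state)                       # entangled control-path state
print(partial_trace(state, [2]))   # reduced (mixed) state of the two paths, control traced out
```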
(This article belongs to the Special Issue Quantum Entanglement and Its Application in Quantum Communication)
Figure 1. (a) Schematic diagram illustrating the principle of a quantum router. A sender prepares and sends a quantum signal to the quantum router via a quantum channel, and a control qubit directs the quantum signal’s path based on the control information it stores. The output of the quantum router is an entanglement between the control qubit and the two paths. (b) Quantum circuit of a quantum router with state tomography. H stands for the Hadamard gate, and T is the phase gate that introduces a π/4 phase. All qubits start from the |0⟩ state, and the second qubit initially is |ϕ⟩n = |0⟩n. The 3-qubit gate in purple is the controlled-swap gate, which exchanges the two quantum states (represented by the two crosses) when the control qubit (represented by the solid circle) is in the |1⟩ state. The state tomography reconstructs the density matrix of the quantum router’s output. Rc represents a classical register that contains the quantum measurement results of the state tomography. (c) Quantum circuit of a QRAM with state tomography. The 2-qubit gate is the CX (controlled-X) gate, which is applied to a control qubit (represented by the solid circle) and a target qubit (represented by the circle with a plus sign).
Figure 2. Probability density function of F between ρ and ρr obtained from 100,000 samples.
Figure 3. Quantum circuit of the 5-qubit QECC embedded within the quantum router with state tomography. The first part of this quantum circuit is the state preparation of |ϕ⟩s on the third qubit (counting from the top), followed by the encoding, error-finding, error correction, and quantum routing. The first five qubits, except for the third qubit, are ancillary qubits for encoding, the sixth qubit is |ϕ⟩n, and the last qubit is prepared as |ϕ⟩c. Z stands for the Pauli-Z gate, and the 2-qubit gate with two solid circles is the CZ (controlled-Z) gate. The 3-qubit and the 5-qubit gates are either a multi-CX or a multi-CZ gate, whose solid and hollow circles indicate that the control state is the |1⟩ or |0⟩ state, respectively. Note that the row being continued across the three blocks is the continuation of the previous row.
Figure 4. A transpiled version of the router circuit. Note that the state tomography is not included here, and 9 CX gates are involved. The quantum gates included in this transpiled circuit are basis gates that can be physically operated on the ibmq_jakarta.
Figure 5. Theoretical and experimental density matrices of |Φ⟩f. (a,b) represent the real and imaginary parts of the theoretical density matrix, ρ, respectively. (c,d) show the real and imaginary parts of the experimental density matrix, ρ1, respectively. The entanglement fidelity, F, between ρ and ρ1 is 0.85.
Figure 6. Coupling maps of the ibmq_jakarta (a), the ibmq_quito and ibmq_belem (b), and the ibmqx4 (c). The two-way arrows in (a,b) indicate that the CX gate can be implemented between the two connected qubits in both directions. The one-way arrows in (c) indicate that the CX gate can only be implemented in one direction (the arrowheads point to the target qubits). In the connected qubit pair labeled 0 and 1 in (c), for example, qubit 1 can only act as the control qubit of the CX gate with qubit 0 as the target qubit.
Figure 7. Theoretical and experimental density matrices of |ϕ⟩s after the quantum routing process with |ϕ⟩c′ = |1⟩c. (a,b) represent the real and imaginary parts of the theoretical density matrix, σ, respectively. (c,d) show the real and imaginary parts of the experimental density matrix, σ1, respectively. The state fidelity, FS, between σ and σ1 is 0.89.
Figure 8. F of the quantum router without and with the QECC.
12 pages, 1398 KiB  
Article
Heart Rate Complexity and Autonomic Modulation Are Associated with Psychological Response Inhibition in Healthy Subjects
by Francesco Riganello, Martina Vatrano, Paolo Tonin, Antonio Cerasa and Maria Daniela Cortese
Entropy 2023, 25(1), 152; https://doi.org/10.3390/e25010152 - 12 Jan 2023
Cited by 1 | Viewed by 2125
Abstract
Background: the ability to suppress/regulate impulsive reactions has been identified as a common factor underlying performance in all executive function tasks. We analyzed the HRV signals (power of the high (HF) and low (LF) frequency bands, Sample Entropy (SampEn), and Complexity Index (CI)) during the execution of cognitive tests assessing flexibility, inhibition abilities, and rule learning. Methods: we enrolled thirty-six healthy subjects, recording five minutes of resting state and two tasks of increasing complexity based on 220 visual stimuli with 12 × 12 cm red and white squares on a black background. Results: at baseline, CI was negatively correlated with age, and LF was negatively correlated with SampEn. In Task 1, the CI and LF/HF were negatively correlated with errors. In Task 2, the reaction time positively correlated with the CI and the LF/HF ratio errors. Using a binary logistic regression model, age, CI, and LF/HF ratio classified performance groups with a sensitivity and specificity of 73% and 71%, respectively. Conclusions: this study performed an important initial exploration in defining the complex relationship between CI, sympathovagal balance, and age in regulating impulsive reactions during cognitive tests. Our approach could be applied in assessing cognitive decline, providing additional information on the brain–heart interaction. Full article
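For reference, the two nonlinear HRV measures used here can be computed with a short script. The sketch below implements a standard SampEn estimator and sums it over the first three coarse-grained scales to obtain a CI, as described in the figure captions; recomputing the tolerance on each coarse-grained series is a simplification, so details may differ from the authors' pipeline.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) = -ln(A/B), with the tolerance r given as a fraction of the SD."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)  # Chebyshev distance
        return (np.sum(d <= tol) - len(templates)) / 2                       # exclude self-matches
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def complexity_index(x, scales=(1, 2, 3), m=2, r=0.2):
    """CI: sum of SampEn over coarse-grained scales (multiscale entropy, simplified)."""
    x = np.asarray(x, dtype=float)
    ci = 0.0
    for s in scales:
        n = len(x) // s
        coarse = x[:n * s].reshape(n, s).mean(axis=1)   # non-overlapping window means
        ci += sample_entropy(coarse, m, r)
    return ci

# Surrogate RR-interval series (ms), purely illustrative.
rng = np.random.default_rng(0)
rr = 800 + 50 * rng.standard_normal(500)
print(round(sample_entropy(rr), 3), round(complexity_index(rr), 3))
```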
Figure 1. Experimental detail: the subject sits comfortably in front of the screen at 70 cm of distance from the monitor, with the hand positioned near the spacebar of the computer keyboard. In the first task, the GO is represented by the white square: the subject must hit the spacebar when it appears and inhibit the response when the red square appears (the chess pattern with squares was the distractor). In the second task, the GO is represented by the appearance of a square with a color equal to the previous square, while the NoGo is represented by a change in the square color. The first square of the tasks appeared after 30 s of a black image. The stimulus duration was 500 ms and the interval between stimuli was 1500 ms.
Figure 2. Multiscale Entropy and Complexity Index scheme. (A) For each subject, the tachogram was extracted at baseline, task 1, and task 2. N represents the length of each original detrended (trend line in red) time series (baseline N = 422 ± 100; task 1 N = 509 ± 83; task 2 N = 524 ± 102). (B) The sample entropy (SampEn) was calculated for the original and coarse-grained series A and B, setting the parameters m and r at 2 and 0.2, respectively. The complexity index (CI) was calculated as the sum of the SampEn of scales 1 (original series), 2 (a), and 3 (b).
Figure 3. Mean errors during the tasks: number of errors during task 1 (green line) and task 2 (red line) over time.
Figure 4. Boxplot of the HRV parameters: boxplot of the natural logarithm of low-frequency power (LnLF) and high-frequency power (LnHF), and the Complexity Index (CI) at resting state (baseline) and during task 1 and task 2.
Figure 5. Correlations among HRV parameters and performance: correlation among HRV parameters (SampEn (SE), Complexity Index (CI), HF power (LnHF), LF power (LnLF), LF/HF), performance levels (errors and reaction time) and age of the group. Negative and positive correlations are shown in red and black, respectively. (A) baseline; (B) task 1; (C) task 2.
16 pages, 334 KiB  
Article
Stochastic Particle Creation: From the Dynamical Casimir Effect to Cosmology
by Matías Mantiñan, Francisco D. Mazzitelli and Leonardo G. Trombetta
Entropy 2023, 25(1), 151; https://doi.org/10.3390/e25010151 - 11 Jan 2023
Cited by 3 | Viewed by 1483
Abstract
We study a stochastic version of the dynamical Casimir effect, computing the particle creation inside a cavity produced by a random motion of one of its walls. We first present a calculation perturbative in the amplitude of the motion. We compare the stochastic [...] Read more.
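In the single-mode approximation mentioned above, the structure of the problem can be indicated schematically: the mode amplitude behaves as a parametric oscillator whose frequency is randomly modulated by the wall motion, which is the same form as a cosmological mode equation with a stochastic pump term. The symbols below are generic and are not taken from the paper's notation.

```latex
% Single-mode, schematic form: q(t) is the mode amplitude, omega its unperturbed
% frequency, epsilon a small modulation amplitude and xi(t) the random process
% driving the wall motion (assumed notation, for illustration only).
\ddot{q}(t) + \omega^{2}\left[1 + \varepsilon\,\xi(t)\right] q(t) = 0 .
```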
We study a stochastic version of the dynamical Casimir effect, computing the particle creation inside a cavity produced by a random motion of one of its walls. We first present a calculation perturbative in the amplitude of the motion. We compare the stochastic particle creation with the deterministic counterpart. Then, we go beyond the perturbative evaluation using a stochastic version of the multiple scale analysis, that takes into account stochastic parametric resonance. We stress the relevance of the coupling between the different modes induced by the stochastic motion. In the single-mode approximation, the equations are formally analogous to those that describe the stochastic particle creation in a cosmological context, that we rederive using multiple scale analysis. Full article
(This article belongs to the Special Issue Quantum Nonstationary Systems)
16 pages, 662 KiB  
Article
Mining Mobile Network Fraudsters with Augmented Graph Neural Networks
by Xinxin Hu, Haotian Chen, Hongchang Chen, Xing Li, Junjie Zhang and Shuxin Liu
Entropy 2023, 25(1), 150; https://doi.org/10.3390/e25010150 - 11 Jan 2023
Cited by 10 | Viewed by 3033
Abstract
With the rapid evolution of mobile communication networks, the number of subscribers and their communication practices are increasing dramatically worldwide. However, fraudsters are also sniffing out the benefits. Detecting fraudsters from the massive volume of call detail records (CDR) in mobile communication networks has become an important yet challenging topic. Fortunately, graph neural networks (GNNs) bring new possibilities for telecom fraud detection. However, the presence of the graph imbalance and GNN oversmoothing problems makes fraudster detection unsatisfactory. To address these problems, we propose a new fraud detector. First, we transform the user features with the help of a multilayer perceptron. Then, a reinforcement learning-based neighbor sampling strategy is designed to balance the number of neighbors of different classes of users. Next, we perform user feature aggregation using a GNN. Finally, we innovatively treat the above augmented GNN as a weak classifier and integrate multiple weak classifiers using the AdaBoost algorithm. A balanced focal loss function is also used to monitor the model training error. Extensive experiments are conducted on two open real-world telecom fraud datasets, and the results show that the proposed method is significantly effective for the graph imbalance problem and the oversmoothing problem in telecom fraud detection. Full article
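One concrete piece of this pipeline, the balanced focal loss used to weight the rare fraud class, can be sketched directly. The snippet below is a generic class-balanced binary focal loss in PyTorch; the weighting constants and the exact form used in the paper may differ.

```python
import torch
import torch.nn.functional as F

def balanced_focal_loss(logits, labels, alpha=0.75, gamma=2.0):
    """Generic class-balanced binary focal loss (illustrative weighting, not
    necessarily the paper's exact formulation).

    logits: (N,) raw scores; labels: (N,) in {0, 1}, where 1 marks a fraudster.
    """
    labels = labels.float()
    ce = F.binary_cross_entropy_with_logits(logits, labels, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * labels + (1 - p) * (1 - labels)              # probability of the true class
    alpha_t = alpha * labels + (1 - alpha) * (1 - labels)  # up-weight the rare (fraud) class
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

# Toy usage with random scores and labels.
logits = torch.randn(8)
labels = torch.randint(0, 2, (8,))
print(balanced_focal_loss(logits, labels))
```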
(This article belongs to the Special Issue Information Network Mining and Applications)
Figure 1. Illustration of fraud detection in a mobile communication network.
Figure 2. The pipeline of our proposed method.
Figure 3. Performance comparison between the proposed method and baselines with different imbalance rates (IR) on the Sichuan Dataset (a,b) and the BUPT Dataset (c,d).
Figure 4. Performance comparison between the proposed method and baselines with different numbers of GNN layers on the BUPT Dataset. (a) illustrates the performance score of the models on the AUC metric; (b) illustrates the performance score of the models on the Recall metric.