Entropy, Volume 24, Issue 5 (May 2022) – 174 articles

Cover Story: Fifth-generation mobile communication systems (5G) have to accommodate both Ultra-Reliable Low-Latency Communication (URLLC) and enhanced Mobile Broadband (eMBB) services. While eMBB applications support high data rates, URLLC services aim to guarantee low latency and high reliability. eMBB and URLLC services are scheduled on the same frequency band, where their different latency requirements render their coexistence challenging. In this survey, we review coding schemes that simultaneously accommodate URLLC and eMBB transmissions and show that they outperform traditional scheduling approaches. Various communication scenarios are considered, including point-to-point channels, broadcast channels, interference networks, cellular models, and C-RANs.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
18 pages, 405 KiB  
Article
Bayesian Network Model Averaging Classifiers by Subbagging
by Shouta Sugahara, Itsuki Aomi and Maomi Ueno
Entropy 2022, 24(5), 743; https://doi.org/10.3390/e24050743 - 23 May 2022
Cited by 2 | Viewed by 2749
Abstract
When applied to classification problems, Bayesian networks are often used to infer a class variable when given feature variables. Earlier reports have described that the classification accuracy of Bayesian network structures achieved by maximizing the marginal likelihood (ML) is lower than that achieved by maximizing the conditional log likelihood (CLL) of a class variable given the feature variables. Nevertheless, because ML has asymptotic consistency, the performance of Bayesian network structures achieved by maximizing ML is not necessarily worse than that achieved by maximizing CLL for large data. However, the error of learning structures by maximizing the ML becomes much larger for small sample sizes. That large error degrades the classification accuracy. As a method to resolve this shortcoming, model averaging has been proposed to marginalize the class variable posterior over all structures. However, the posterior standard error of each structure in the model averaging becomes large as the sample size becomes small; it subsequently degrades the classification accuracy. The main idea of this study is to improve the classification accuracy using subbagging, which is modified bagging using random sampling without replacement, to reduce the posterior standard error of each structure in model averaging. Moreover, to guarantee asymptotic consistency, we use the K-best method with the ML score. The experimentally obtained results demonstrate that our proposed method provides more accurate classification than earlier BNC methods and the other state-of-the-art ensemble methods do.
(This article belongs to the Topic Machine and Deep Learning)
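The subbagging step summarized above lends itself to a compact illustration. Below is a minimal sketch of bagging with subsampling without replacement, averaging class posteriors over the ensemble; the base classifier (scikit-learn's GaussianNB), subsample fraction, and ensemble size are illustrative stand-ins, not the paper's Bayesian network classifier or its K-best model averaging.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB  # illustrative base learner, not the paper's BNC

def subbagging_predict(X_train, y_train, X_test, n_estimators=20, frac=0.5, seed=0):
    """Average class posteriors over learners fit on subsamples drawn
    WITHOUT replacement (subbagging), then pick the most probable class."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y_train)
    probs = np.zeros((len(X_test), len(classes)))
    m = int(frac * len(X_train))
    for _ in range(n_estimators):
        # no replacement: the key difference from ordinary bagging
        idx = rng.choice(len(X_train), size=m, replace=False)
        clf = GaussianNB().fit(X_train[idx], y_train[idx])
        p = clf.predict_proba(X_test)
        for j, c in enumerate(classes):  # align columns in case a subsample missed a class
            if c in clf.classes_:
                probs[:, j] += p[:, list(clf.classes_).index(c)]
    return classes[probs.argmax(axis=1)]
```

Sampling without replacement keeps each subsample free of duplicated records, which is the mechanism the abstract credits with reducing the posterior standard error of each structure in the model averaging.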
Figures:
Figure 1. Example of a Bayesian network.
Figure 2. Average posterior standard errors of structures (APSES) of the KB100 and those of SubbKB10.
21 pages, 1998 KiB  
Article
An Image Compression Encryption Algorithm Based on Chaos and ZUC Stream Cipher
by Xiaomeng Song, Mengna Shi, Yanqi Zhou and Erfu Wang
Entropy 2022, 24(5), 742; https://doi.org/10.3390/e24050742 - 23 May 2022
Cited by 10 | Viewed by 2886
Abstract
In order to improve the transmission efficiency and security of image encryption, we combined a ZUC stream cipher and chaotic compressed sensing to perform image encryption. The parallel compressed sensing method is adopted to ensure the encryption and decryption efficiency. The ZUC stream cipher is used to sample the one-dimensional chaotic map to reduce the correlation between elements and improve the randomness of the chaotic sequence. The compressed sensing measurement matrix is constructed by using the sampled chaotic sequence to improve the image restoration effect. In order to reduce the block effect after the parallel compressed sensing operation, we also propose a method of randomly blocking images. Simulation analysis shows that the algorithm demonstrates better encryption and compression performance.
(This article belongs to the Special Issue Computational Imaging and Image Encryption with Entropy)
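As a rough illustration of keystream-driven sampling of a chaotic map to build a measurement matrix, here is a minimal sketch. The logistic map, the NumPy generator standing in for the ZUC keystream, and the normalization are assumptions for illustration only, not the paper's STL map or its exact construction.

```python
import numpy as np

def chaotic_measurement_matrix(m, n, x0=0.37, r=3.99, key=1234):
    """Build an m-by-n compressed-sensing measurement matrix from a logistic map
    whose iterates are sub-sampled at keystream-driven intervals."""
    rng = np.random.default_rng(key)          # placeholder for a ZUC keystream
    skips = rng.integers(1, 8, size=m * n)    # how many map iterates to discard each step
    x = x0
    samples = np.empty(m * n)
    for i, s in enumerate(skips):
        for _ in range(s):                    # burn 's' iterates to decorrelate samples
            x = r * x * (1.0 - x)
        samples[i] = x
    phi = (2.0 * samples - 1.0).reshape(m, n) # map (0, 1) onto (-1, 1)
    return phi / np.sqrt(m)                   # simple energy normalization

Phi = chaotic_measurement_matrix(64, 256)
```

Discarding a keystream-determined number of iterates between samples is what weakens the correlation between consecutive elements of the chaotic sequence, which is the effect the abstract attributes to the ZUC sampling step.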
Figures:
Figure 1. STL map structure.
Figure 2. Bifurcation diagram: (a) Sine map; (b) Tent map; (c) Logistic map; (d) STL map.
Figure 3. ZUC system structure.
Figure 4. The sampling process diagram.
Figure 5. Lyapunov exponents: (a) Logistic map; (b) Sine map; (c) STL map.
Figure 6. Random block rendering: (a) 4 sub-blocks; (b) 8 sub-blocks.
Figure 7. Encryption flow chart.
Figure 8. Decryption flow chart.
Figure 9. Simulation results of encryption and decryption: (a1)–(a3) are the plain images Lena, Baboon and Boat; (b1)–(b3) are the encrypted images; and (c1)–(c3) are the decrypted images.
Figure 10. Histogram of the RGB components of the Lena image: (a1)–(a3) are the plaintext images; (b1)–(b3) are the encrypted images; and (c1)–(c3) are the decrypted images.
Figure 11. Histograms of grayscale images before and after encryption: (a1)–(a3) Baboon and (b1)–(b3) Boat.
Figure 12. Pixel correlation distribution: (a) original image; (b) encrypted image.
Figure 13. Encryption key sensitivity: (a) the original image; (b) the encrypted image; (c) the encrypted image after changing the key; (d) the difference between the two encrypted images.
Figure 14. Incorrectly decrypted images: (a) key changed by 10^-14; (b) key changed by 10^-15; (c) key changed by 10^-16.
Figure 15. PSNR comparison (ours vs. Deng et al. 2017 [43]) under different compression ratios.
Figure 16. Encryption time comparison (ours vs. Gong et al. 2019 [40], Deng et al. 2017 [43]).
16 pages, 3933 KiB  
Article
Weakly Supervised Building Semantic Segmentation Based on Spot-Seeds and Refinement Process
by Khaled Moghalles, Heng-Chao Li and Abdulwahab Alazeb
Entropy 2022, 24(5), 741; https://doi.org/10.3390/e24050741 - 23 May 2022
Cited by 6 | Viewed by 2658
Abstract
Automatic building semantic segmentation is a critical and relevant task in several geospatial applications. Methods based on convolutional neural networks (CNNs) are mainly used in current building segmentation. The requirement for huge numbers of pixel-level labels is a significant obstacle to achieving semantic segmentation of buildings with CNNs. In this paper, we propose a novel weakly supervised framework for building segmentation, which generates high-quality pixel-level annotations and optimizes the segmentation network. A superpixel segmentation algorithm predicts a boundary map for the training images. Then, a superpixels-CRF model built on the superpixel regions is guided by spot seeds to propagate information from the seeds to unlabeled regions, resulting in high-quality pixel-level annotations. Using these high-quality pixel-level annotations, we can train a more robust segmentation network and predict segmentation maps. To iteratively optimize the segmentation network, the predicted segmentation maps are refined and the segmentation network is retrained. Comparative experiments demonstrate that the proposed segmentation framework achieves a marked improvement in building segmentation quality while reducing human labeling effort.
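A minimal sketch of the first step, turning sparse spot seeds into region-level initial annotations via superpixels, is given below. It uses scikit-image's SLIC as the superpixel algorithm and omits the superpixels-CRF propagation and the iterative refinement, so it illustrates only the labelling idea, not the authors' pipeline.

```python
import numpy as np
from skimage.segmentation import slic

def initial_labels_from_seeds(image, seeds, n_segments=400):
    """Propagate sparse spot-seed labels to whole superpixels.
    `seeds` is a list of (row, col, class_id); unlabeled regions stay -1."""
    sp = slic(image, n_segments=n_segments, start_label=0)
    labels = np.full(image.shape[:2], -1, dtype=int)
    for r, c, cls in seeds:
        # every pixel in the seeded superpixel inherits the seed's class
        labels[sp == sp[r, c]] = cls
    return labels
```

In the paper's framework, these region-level labels would then be refined by the CRF and used as initial ground truth for training the segmentation network.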
Figures:
Figure 1. Pipeline of the proposed framework for building semantic segmentation. (a) Generating the initial ground truth: spot seeds guide a superpixels-CRF model over a superpixel segmentation to produce the initial ground truth. (b) The framework then uses the initial ground truth to train the segmentation network and predict segmentation masks for the training images. To produce more accurate ground truth, a refinement process smooths the predictions, and the network is retrained to give more precise segmentation as the optimization iterates.
Figure 2. The main steps of the proposed framework.
Figure 3. Building segmentation on the Potsdam dataset. From left to right: original image, ground truth, our results, the multiple-feature reuse network (MFRN), Deeplab-V3, and the dense-attention network (DAN). Red boxes indicate improvement; yellow boxes indicate a false classification.
Figure 4. Building segmentation on the WHU building dataset. From left to right: original image, ground truth, our results, Deeplab-V3, and FastCCN. Red boxes indicate improvement; yellow boxes indicate a false classification.
Figure 5. Building segmentation on the Vaihingen dataset. From left to right: original image, ground truth, our results, Deeplab-V3, UNet++, and UNet-8s. Red boxes indicate improvement; yellow boxes indicate a false classification.
21 pages, 1199 KiB  
Article
Block-Iterative Reconstruction from Dynamically Selected Sparse Projection Views Using Extended Power-Divergence Measure
by Kazuki Ishikawa, Yusaku Yamaguchi, Omar M. Abou Al-Ola, Takeshi Kojima and Tetsuya Yoshinaga
Entropy 2022, 24(5), 740; https://doi.org/10.3390/e24050740 - 23 May 2022
Cited by 2 | Viewed by 2461
Abstract
Iterative reconstruction of density pixel images from measured projections in computed tomography has attracted considerable attention. The ordered-subsets algorithm is an acceleration scheme that uses subsets of projections in a previously decided order. Several methods have been proposed to improve the convergence rate by permuting the order of the projections. However, they do not incorporate object information, such as shape, into the selection process. We propose a block-iterative reconstruction from sparse projection views with dynamic selection of subsets, based on an estimating function constructed from an extended power-divergence measure, so as to decrease the objective function as much as possible. We give a unified proposition for the inequality related to the difference between objective functions caused by one iteration as the theoretical basis of the proposed optimization strategy. Through theory and numerical experiments, we show that nonuniform and sparse use of projection views leads to reconstruction of higher-quality images and that an ordered subset is not the most effective choice for block-iterative reconstruction. The two-parameter class of extended power-divergence measures is the key to estimating an effective decrease in the objective function and plays a significant role in constructing an algorithm that is robust against noise.
(This article belongs to the Section Multidisciplinary Applications)
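For orientation, the classical one-parameter (Cressie–Read) power divergence that this family extends can be written down compactly. The sketch below is that standard measure for nonnegative projection data; it is not the paper's two-parameter (γ, α) extension, which is not reproduced here.

```python
import numpy as np

def power_divergence(p, q, lam=1.0, eps=1e-12):
    """Cressie-Read power divergence D_lam(p || q) for nonnegative vectors.
    lam -> 0 recovers the generalized Kullback-Leibler divergence;
    lam = 1 gives half of Pearson's chi-square statistic."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    if abs(lam) < 1e-8:  # limiting case lam -> 0
        return float(np.sum(p * np.log(p / q) - p + q))
    return float(np.sum(p * ((p / q) ** lam - 1.0) - lam * (p - q)) / (lam * (lam + 1.0)))
```

Roughly speaking, measures of this kind are evaluated between the measured projections and the forward-projected current estimate; the paper's estimating function uses its extended version to decide which subset of views to use next.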
Graphical abstract.
Figures:
Figure 1. Access scheme for projections y_i, i = 1, 2, …, I^1 + I^2 + I^3 + I^4, with M = 4 in (a) SAS and (b) MLS.
Figure 2. Phantom image e in 20 × 20 pixels.
Figure 3. Scatter plot with identity line (red) for BI-SART in Equation (6).
Figure 4. Scatter plot with identity line (red) for BI-MLEM in Equation (7).
Figure 5. Scatter plot with identity line (red) for BI-MART in Equation (8).
Figure 6. Phantom images of (a) Shepp–Logan and (b) chessboard pattern.
Figure 7. Objective functions D^m(e, z(n)) for WBIR and conventional SAS-OSEM algorithms at each iteration n = 0, 1, 2, …, 60 in the experiment using the Shepp–Logan phantom.
Figure 8. Reconstructed images (upper panel) and subtraction images (lower panel) for SAS-OSEM and WBIR in the experiment using the Shepp–Logan phantom.
Figure 9. Frequency bar chart of subset indices m = 1, 2, …, 30 for WBIR after 60 iterations in the experiment using the Shepp–Logan phantom.
Figure 10. (a) SNR and (b) SSIM for WBIR and conventional SAS-OSEM algorithms at each iteration n = 0, 1, 2, …, 60 in the experiment using the Shepp–Logan phantom.
Figure 11. Objective functions D^m(e, z(n)) for WBIR and OSEM algorithms by PND, FAS, WDS, and MLS at each iteration n = 0, 1, 2, …, 60 in the experiment using the Shepp–Logan phantom.
Figure 12. (a) SNR(e, z(n)) and (b) SSIM(e, z(n)) for WBIR and OSEM algorithms by PND, FAS, WDS, and MLS at each iteration n = 0, 1, 2, …, 60 in the experiment using the Shepp–Logan phantom.
Figure 13. Objective functions D^m(e, z(n)) for WBIR and OSEM algorithms by PND, FAS, WDS, and MLS at each iteration n = 0, 1, 2, …, 60 in the experiment using the chessboard phantom with noise-free projections.
Figure 14. Reconstructed images at the iterations denoted by the number beside each image for MLS-OSEM and WBIR in the experiment using the chessboard phantom. Thirty iterations of OSEM take almost the same computation time as 25 of WBIR.
Figure 15. Frequency bar chart of subset indices m = 1, 2, …, 30 for WBIR after 30 iterations in the experiment using the chessboard phantom.
Figure 16. Contour plots of objective functions log10(D^m(e, z(N))) with N equal to (a) 10, (b) 20, and (c) 30 in the experiment using noise-free projections. The white dot indicates the position of (γ, α) = (1, 1).
Figure 17. Contour plots of objective functions log10(D^m(e, z(N))) with N equal to (a) 10, (b) 20, and (c) 30 in the experiment with noisy projections. The white dot indicates the position of (γ, α) = (0.5, 0.5).
Figure 18. Objective functions D^m(e, z(n)) for WBIR and OSEM algorithms by PND, FAS, WDS, and MLS at each iteration n = 0, 1, 2, …, 60 in the experiment using the chessboard phantom with noisy projections.
14 pages, 5095 KiB  
Article
massiveGST: A Mann–Whitney–Wilcoxon Gene-Set Test Tool That Gives Meaning to Gene-Set Enrichment Analysis
by Luigi Cerulo and Stefano Maria Pagnotta
Entropy 2022, 24(5), 739; https://doi.org/10.3390/e24050739 - 23 May 2022
Cited by 1 | Viewed by 3167
Abstract
Gene-set enrichment analysis is the key methodology for obtaining biological information from the statistical results of transcriptomic analyses. Since its introduction, gene-set enrichment analysis methods have obtained more reliable results and a wider range of applications. Great attention has been devoted to global tests, in contrast to competitive methods, which have been largely ignored although they appear more flexible because they are independent of the source of the gene profiles. We analyzed the properties of the Mann–Whitney–Wilcoxon test, a competitive method, and adapted its interpretation in the context of enrichment analysis by introducing a Normalized Enrichment Score that summarizes two interpretations: a probability estimate and a location index. Two implementations are presented and compared with relevant literature methods: an R package and an online web tool. Both allow for obtaining tabular and graphical results with attention to reproducible research.
(This article belongs to the Special Issue Computational Methods and Algorithms for Bioinformatics)
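A minimal sketch of the competitive Mann–Whitney–Wilcoxon gene-set test idea follows: the U statistic, rescaled by the product of the group sizes, estimates the probability that a gene inside the set outranks a gene outside it, which mirrors the probability interpretation of the Normalized Enrichment Score mentioned in the abstract. This is an illustration of the general statistic, not the massiveGST implementation, and its exact normalization and location-index form may differ.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def mww_gene_set_test(profile, gene_set):
    """`profile` maps gene -> ranking statistic (e.g., log fold change).
    Returns a normalized enrichment score in (0, 1) and a two-sided p-value."""
    in_set = np.array([v for g, v in profile.items() if g in gene_set])
    out_set = np.array([v for g, v in profile.items() if g not in gene_set])
    u, p = mannwhitneyu(in_set, out_set, alternative="two-sided")
    # common-language effect size: P(random in-set gene ranks above random out-set gene)
    nes = u / (len(in_set) * len(out_set))
    return nes, p
```

Because the statistic only uses ranks of the gene profile, the test does not depend on how the profile was produced, which is the flexibility the abstract attributes to competitive methods.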
Figures:
Figure 1. Scatter plot of gene-set size (transformed as log10(1 + size)) against the Normalized Enrichment Score: (a) for GSEA; (b) for massiveGST. Data come from the gene-profile included in the R package and 4046 gene-sets. The color intensity is proportional to the p-value (lighter color for higher p-values).
Figure 2. Software architecture of the online web service.
Figure 3. Flow chart for running the analysis both in the web service and in the R environment.
Figure 4. Results of the simulation. Thirty gene-profiles were queried with the MSigDB C1 collection of 278 gene-sets using (a) R implementations of the methodologies (clusterProfiler with DOSE and fGSEA options, fast GSEA, pre-ranked GSEA, massiveGST, and camera pre-ranked) and (b) online tools (GeneTrial3 with weighted GSEA and Wilcoxon Rank Sum test options, massiveGST, and WebGestalt GSEA). The time, in seconds, is log10 transformed. The raw data are in Table A1.
Figure 5. Screenshot of the tabular results of the gene-profile associated with FGFR3-TACC3 fusion-positive samples in GBM. The C5 and Hallmark collections (10,321 gene-sets in total) from MSigDB interrogated the gene-profile in 1.55 s.
Figure 6. Graphical rendering of the tabular results of the analysis. Each ball is a gene-set; the radius matches its size, and the color corresponds to the NES. When two gene-sets share some genes, they appear connected, and the strength of similarity determines the thickness of the segment.
21 pages, 703 KiB  
Article
Sensing Enhancement on Social Networks: The Role of Network Topology
by Markus Brede and Guillermo Romero-Moreno
Entropy 2022, 24(5), 738; https://doi.org/10.3390/e24050738 - 22 May 2022
Viewed by 2335
Abstract
Sensing and processing information from dynamically changing environments is essential for the survival of animal collectives and the functioning of human society. In this context, previous work has shown that communication between networked agents with some preference towards adopting the majority opinion can enhance the quality of error-prone individual sensing from dynamic environments. In this paper, we compare the potential of different types of complex networks for such sensing enhancement. Numerical simulations on complex networks are complemented by a mean-field approach for limited connectivity that captures essential trends in dependencies. Our results show that, whilst bestowing advantages on a small group of agents, degree heterogeneity tends to impede overall sensing enhancement. In contrast, clustering and spatial structure play a more nuanced role depending on overall connectivity. We find that ring graphs exhibit superior enhancement for large connectivity and that random graphs outperform for small connectivity. Further exploring the role of clustering and path lengths in small-world models, we find that sensing enhancement tends to be boosted in the small-world regime.
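To make the kind of dynamics described above concrete, here is a minimal sketch of a binary sensing-and-imitation update on a ring graph: with probability p an agent senses the environment directly (correctly with accuracy q), otherwise it adopts an opinion with a majority preference controlled by an exponent alpha. The exact update rule, the nonlinearity f^alpha / (f^alpha + (1-f)^alpha), the scheduling, and the parameter values are assumptions for illustration, not the authors' model.

```python
import numpy as np

def step(opinions, env, p=0.45, q=0.51, alpha=2.0, k=10, rng=np.random.default_rng(0)):
    """One synchronous sweep of a binary sensing/imitation model on a ring graph."""
    n = len(opinions)
    new = opinions.copy()
    for i in range(n):
        if rng.random() < p:                         # sense the environment directly
            new[i] = env if rng.random() < q else 1 - env
        else:                                        # imitate neighbours with majority preference
            nbrs = [(i + d) % n for d in range(-k // 2, k // 2 + 1) if d != 0]
            f = float(np.mean(opinions[nbrs]))       # fraction of neighbours holding opinion 1
            prob_one = f ** alpha / (f ** alpha + (1.0 - f) ** alpha)
            new[i] = 1 if rng.random() < prob_one else 0
    return new

ops = np.zeros(1000, dtype=int)
for _ in range(200):
    ops = step(ops, env=1)
print(ops.mean())   # fraction of agents holding the correct opinion
```

Averaging the final fraction of correct agents over runs and environmental switches is one way to estimate the quantity P that the figures below plot against the sensing intensity p.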
Figures:
Figure 1. (Left) Dependence of the average fraction of correctly sensing agents P on the sensing intensity p. The figure compares numerical data obtained for an all-to-all connected system with 1000 nodes (black squares) to the mean-field orbit diagram for the bi-stable system and a mean-field estimate for P(p) from Equation (6) (dotted lines), along with an estimate of the stationary outcome of the switching dynamics based on Equation (8) (magenta line). Parameters: n = 2, switching rate u = 0.001, α = 2, sensing accuracy q = 0.51. Numerical data are from simulations with over 10,000 iterations of the dynamical process averaged over 10 independent runs. In the orbit diagram there is a critical point p* ≈ 0.458 such that below p* the system is bi-stable, and above p* it follows the sensing of all agents and manages to adapt to changing signals. For a switching rate of the external signal of u = 0.001, the maximum fraction of agents aware of the correct signal is found at p ≈ 0.466, slightly above the bifurcation point. (Right) Comparison of the mean-field estimate (based on Equation (8), black lines) vs. numerical data for different switching rates u.
Figure 2. Maximum sensing enhancement P/q as a function of the sensing accuracy q. (Left) Comparison between numerical data for different types of complex networks for u = 0.001, α = 2, and ⟨k⟩ = 10. (Right) Comparison for random regular graphs with connectivity ⟨k⟩ = 10 and ⟨k⟩ = 40 with the mean-field estimates for limited connectivity (dotted lines) and all-to-all coupling (solid line).
Figure 3. Average probability to sense the correct state of the environment P as a function of node degree k, averaged over 10 Barabasi–Albert-type SF networks with N = 10^4 for u = 0.001, α = 2, q = 0.51, p = 0.31, and ⟨k⟩ = 10. Error bars give standard errors.
Figure 4. Dependence of the point of optimal sensing enhancement on connectivity for different networks. (Left) Optimal sensing P_max vs. connectivity ⟨k⟩. (Right) Required sensing intensity at the optimum vs. connectivity. (Bottom) Analysis of the P(p) dependence for RGs for different system sizes ranging from N = 500 to N = 8000, where the inset magnifies the region 0.4 ≤ p ≤ 0.5. Numerical data obtained from simulations of 10^4 iterations of the updating process, averaged over 20 networks of size N = 1000 (for the first two panels), for q = 0.51, α = 2, and u = 0.001. The black lines give mean-field estimates.
Figure 5. Dependence of the point of maximum sensing enhancement on α. (Left) Dependence of optimal sensing P_max on α for various networks with ⟨k⟩ = 40, u = 0.001, and q = 0.51. There is a value of α at which sensing is maximally enhanced. (Right) Dependence of the required sensing intensity p at the maximum point on α for various complex networks. The solid line gives the mean-field estimate. (Bottom) Results from the mean-field analysis for the dependence of P_max and the width of the enhancement region p_{P>0.52} on α. More consensus enhancement is possible for smaller α > 1, but the width of the peak converges to zero and transients become longer as one approaches α = 1 from above.
Figure 6. Dependence of the average fraction of correctly sensing agents on the sensing intensity from simulations on small-world networks with different small-world parameters f = 0, 0.01, 0.02, and 0.2 for n = 2, q = 0.51, α = 2, u = 0.001, and ⟨k⟩ = 16. Data points provide averages over 20 small-world networks of size N = 1000.
Figure 7. Dependence of the maximum achievable share of correctly sensing agents P_max (left) and the sensing intensity at which this maximum is achieved (right) on the small-world parameter f used to construct the social network, for networks of different average connectivity from ⟨k⟩ = 8 to ⟨k⟩ = 20. The bottom-left panel gives the dependence of the optimal small-world parameter f, for which sensing enhancement is maximised, on the average degree. Data obtained from simulations on regular small-world networks of size N = 1000 for n = 2, α = 2, u = 0.001, and q = 0.51. For each data point, 50 small-world networks were constructed, and for each, the optimal sensing intensity and maximum of P(p) were determined and then averaged. Optimal sensing enhancement can be achieved for networks in the small-world region (small f).
Figure 8. Comparison between the dependencies of maximum achievable sensing enhancement (left) and the required sensing intensity at the optimum (right) on network connectivity for ring graphs (RGs), regular random graphs (RRGs), and optimal small worlds (OSWs). Data obtained from averages over 50 simulations on networks of size N = 1000 for n = 2, α = 2, u = 0.001, and q = 0.51.
Figure 9. Dependence of the fraction of correct agents P on the sensing intensity p in environments with different numbers of states n. Results are from numerical simulations for RRG networks of size N = 1000 averaged over 20 networks (symbols) and from mean-field estimates (solid lines). Other parameters: q = 0.51, u = 0.001, α = 2, and ⟨k⟩ = 60.
Figure 10. Comparison of the dependence of the largest accuracy enhancement P/q on the sensing accuracy q for different numbers of environmental states n. From left to right and top to bottom, panels give data for RRGs, RGs, ER-type random networks, and scale-free networks (n = 11 is not plotted for SF networks, because no enhancement is possible in that case). Data averaged over 20 realisations of networks of size N = 1000. Other parameters: u = 0.001, α = 2, and ⟨k⟩ = 40.
Figure 11. Comparison of the dependence of the sensing intensity for which the largest accuracy enhancement can be achieved (see Figure 10) on the sensing accuracy q for different numbers of environmental states n. From left to right and top to bottom, panels give data for RRGs, RGs, ER-type random networks, and scale-free networks. Parameter settings as in Figure 10.
Figure 12. Comparison of the dependence of maximum achievable accuracy enhancement (top left), the sensing intensity at which the optimal accuracy enhancement can be achieved (top right), and the maximum sensing accuracy for which sensing enhancement can be achieved (bottom) on the number of environmental states for RRGs, RGs, ER-type networks, and scale-free networks. Parameter settings as in Figures 10 and 11.
Figure A1. Dependence of the optimal achievable sensing enhancement P_max (left) and the sensing intensities at which it can be realised (right) on the system size for different types of networks (RRG, RG, ER, and SF from top to bottom). Data points are for q = 0.51, u = 0.001, α = 2, and ⟨k⟩ = 20; points in the figures represent averages over 50 network configurations.
Figure A2. (Top) Dependence of the optimal sensing enhancement P_max (left) and optimal sensing intensity (right) on the small-world parameter f for different system sizes N = 250, N = 1000, and N = 8000. (Middle) Dependence of the optimal small-world parameter on system size (error bars given by the discretisation of f). (Bottom) Dependence of the optimal enhancement at the optimal small-world parameter (left) and the sensing intensity at which this can be realised (right) on system size. Data points are for q = 0.51, u = 0.001, α = 2, and ⟨k⟩ = 10; points in the figures represent averages over 50 network configurations.
14 pages, 2163 KiB  
Article
Adaptive Fixed-Time Neural Networks Control for Pure-Feedback Non-Affine Nonlinear Systems with State Constraints
by Yang Li, Quanmin Zhu, Jianhua Zhang and Zhaopeng Deng
Entropy 2022, 24(5), 737; https://doi.org/10.3390/e24050737 - 22 May 2022
Cited by 1 | Viewed by 2211
Abstract
A new fixed-time adaptive neural network control strategy is designed for pure-feedback non-affine nonlinear systems with state constraints according to the feedback signal of the error system. Based on the adaptive backstepping technique, a Lyapunov function is designed for each subsystem. The neural network is used to identify the unknown parameters of the system within a fixed time, and the designed control strategy makes the output signal of the system track the expected signal within a fixed time. The stability analysis proves that the tracking error converges within a fixed time, and the upper bound of the settling time of the error system can be set by adjusting only the controller parameters and the adaptive law; it does not depend on the initial conditions.
(This article belongs to the Special Issue Nonlinear Control Systems with Recent Advances and Applications)
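For context on why the settling-time bound can be independent of the initial state, the standard fixed-time stability lemma that underlies designs of this kind (a known general result, not reproduced from this paper) reads:

```latex
\dot{V}(x) \le -\,a\,V(x)^{p} - b\,V(x)^{q}, \qquad a,b>0,\ 0<p<1<q
\;\;\Longrightarrow\;\;
T_{\mathrm{settle}} \le \frac{1}{a\,(1-p)} + \frac{1}{b\,(q-1)}.
```

The bound depends only on the design constants a, b, p, q, which is why the settling-time bound can be tuned through the controller parameters and adaptive law alone, as the abstract states.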
Figures:
Figure 1. Fixed-time adaptive neural network control system.
Figure 2. Fixed-time adaptive neural network control algorithm.
Figure 3. Trajectories of the output and the desired signal.
Figure 4. Trajectories of the homeomorphism mapping states.
Figure 5. Trajectories of the system states.
Figure 6. Trajectories of the controller.
Figure 7. Trajectories of the output and the desired signal.
Figure 8. Trajectories of the controller.
18 pages, 2887 KiB  
Article
Task Offloading Strategy Based on Mobile Edge Computing in UAV Network
by Wei Qi, Hao Sun, Lichen Yu, Shuo Xiao and Haifeng Jiang
Entropy 2022, 24(5), 736; https://doi.org/10.3390/e24050736 - 22 May 2022
Cited by 10 | Viewed by 2857
Abstract
When an unmanned aerial vehicle (UAV) performs tasks such as power patrol inspection, water quality detection, or field scientific observation, the limitations of its computing capacity and battery power prevent it from completing the tasks efficiently. An effective remedy is to deploy edge servers near the UAV, so that the UAV can offload some of the computationally intensive and real-time tasks to them. In this paper, a mobile edge computing offloading strategy based on reinforcement learning is proposed. Firstly, the Stackelberg game model is introduced to model the UAVs and edge nodes in the network, and a utility function is used to maximize the offloading revenue. Secondly, since the problem is a mixed-integer non-linear programming (MINLP) problem, we introduce the multi-agent deep deterministic policy gradient (MADDPG) method to solve it. Finally, the effects of the number of UAVs and the total amount of computing resources on the total revenue of the UAVs are evaluated through simulation experiments. The experimental results show that, compared with other algorithms, the algorithm proposed in this paper can more effectively improve the total benefit of the UAVs.
(This article belongs to the Special Issue Wireless Sensor Networks and Their Applications)
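A minimal sketch of a two-stage Stackelberg interaction of the kind the abstract describes is given below: the edge server (leader) posts a unit resource price, each UAV (follower) best-responds by choosing how much computation to offload, and the leader searches over prices. The logarithmic utility, the serving cost, and all parameter values are illustrative assumptions, not the paper's model, and the MADDPG solver is not sketched here.

```python
import numpy as np

def follower_best_response(price, value=10.0, max_offload=5.0):
    """UAV utility: value * log(1 + x) - price * x; closed-form maximiser, clipped."""
    x = value / price - 1.0
    return float(np.clip(x, 0.0, max_offload))

def leader_best_price(prices=np.linspace(0.5, 10.0, 200), n_uavs=4, cost=0.2):
    """Edge server picks the price maximising its revenue minus serving cost,
    anticipating the followers' best responses (the Stackelberg structure)."""
    best = max(prices, key=lambda p: n_uavs * follower_best_response(p) * (p - cost))
    return best, follower_best_response(best)

price, offload = leader_best_price()
print(price, offload)
```

The point of the two-stage structure is that the leader optimizes while anticipating the followers' reactions; in the paper this game is embedded in a dynamic setting and solved with MADDPG rather than by grid search.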
Figures:
Figure 1. Task offloading system model in the UAV network.
Figure 2. Two-stage Stackelberg game model.
Figure 3. Interaction process between agent and environment.
Figure 4. Curve of system utility versus number of iterations.
Figure 5. Curve of task success rate.
Figure 6. Average delay curve of different algorithms.
Figure 7. Average energy consumption curve of different algorithms.
Figure 8. Average system utility curve for different algorithms.
Figure 9. Average UAV utility curve for different algorithms.
23 pages, 1720 KiB  
Article
Information Fragmentation, Encryption and Information Flow in Complex Biological Networks
by Clifford Bohm, Douglas Kirkpatrick, Victoria Cao and Christoph Adami
Entropy 2022, 24(5), 735; https://doi.org/10.3390/e24050735 - 21 May 2022
Cited by 5 | Viewed by 3814
Abstract
Assessing where and how information is stored in biological networks (such as neuronal and genetic networks) is a central task both in neuroscience and in molecular genetics, but most available tools focus on the network’s structure as opposed to its function. Here, we introduce a new information-theoretic tool—information fragmentation analysis—that, given full phenotypic data, allows us to localize information in complex networks, determine how fragmented (across multiple nodes of the network) the information is, and assess the level of encryption of that information. Using information fragmentation matrices we can also create information flow graphs that illustrate how information propagates through these networks. We illustrate the use of this tool by analyzing how artificial brains that evolved in silico solve particular tasks, and show how information fragmentation analysis provides deeper insights into how these brains process information and “think”. The measures of information fragmentation and encryption that result from our methods also quantify the complexity of information processing in these networks and how this processing complexity differs between primary exposure to sensory data (early in the lifetime) and later routine processing.
(This article belongs to the Special Issue Foundations of Biological Computation)
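A minimal sketch of the fragmentation idea follows: scan subsets of network nodes and find the smallest set whose joint state carries as much mutual information about a feature as all nodes together, which corresponds to the "smallest distinct informative set" highlighted in the figures below. The brute-force subset scan, the tolerance, and the data layout are illustrative; the paper's tooling and its encryption measure are not reproduced here.

```python
import itertools
from collections import Counter
import numpy as np

def entropy(samples):
    """Shannon entropy (bits) of a sequence of hashable states."""
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def mutual_information(xs, ys):
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def smallest_informative_set(feature, nodes, tol=1e-9):
    """Return the smallest subset of node time series whose joint state carries
    as much information about `feature` as all nodes combined.
    `nodes` maps node name -> list of states; `feature` is a list of states."""
    names = list(nodes)
    full = mutual_information(feature, list(zip(*[nodes[n] for n in names])))
    for r in range(1, len(names) + 1):
        for subset in itertools.combinations(names, r):
            joint = list(zip(*[nodes[n] for n in subset]))
            if mutual_information(feature, joint) >= full - tol:
                return subset, full
    return tuple(names), full
```

A feature whose information is carried only by large subsets, while every small subset reveals little, is fragmented and, in the extreme XOR-like case illustrated in Figure 4 below, encrypted.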
Figures:
Figure 1. Entropy Venn diagram showing how the information about the joint variable X = X1 X2 stored in Y is distributed across the subsystems X1 and X2. The information I(X1; Y) shared between X1 and Y is indicated by right-hatching, while the information I(X2; Y) is shown with left-hatching. As X1 and X2 can share entropy, the sum of I(X1; Y) and I(X2; Y) double counts any information shared between all three, I(X1; X2; Y) (cross-hatched). Because information shared between three (or more) parties can be negative, the sum I(X1; Y) + I(X2; Y) can be larger or smaller than I(X; Y).
Figure 2. Node configuration for Markov Brains and Recurrent Neural Networks. (a) Networks for the n-Back task have a single input node (blue), eight memory nodes (green) and five output nodes (red) to report on prior inputs. (b) Networks for the Block Catch task have four “retinal” or sensor (input) nodes, eight memory nodes, and two motor (output) nodes that allow the agent to move left or right.
Figure 3. Fragmentation matrices for the n-Back task. Matrices from four Markov Brains that evolved perfect performance on the n-Back task, shown as (a–d). The features labeling the rows of the matrix are the expected outputs on the current update, while sets (labeling columns) are combinations of the brain’s eight memory values m1 … m8. The amount of information between each feature and each set is indicated by gray-scale, where white squares indicate perfect correlation and gray to black represents successively less correlation. A black diamond within a white square indicates the smallest distinct informative set (DIS) that predicts each feature. A portion of each matrix containing sets of intermediate size is not shown to save space.
Figure 4. Entropy Venn diagram for element {o5, m2, m7} of the fragmentation matrix shown in Figure 3c. As o5(t) = m2(t−1) ⊗ m7(t−1) (⊗ is the XOR operator), information about o5 is perfectly encrypted, so that each of the nodes m2 and m7 reveals no information about o5. Because this Venn diagram is symmetric, it is arbitrary which variable is called the sender, the receiver, or the key.
Figure 5. Information flow through nodes of the Markov Brains evolved to solve the n-Back task. Diagrams (a–d) correspond to the fragmentation matrices shown in Figure 3a–d. Input node i1 is in green, output neurons o_k are blue, and memory neurons m_k are white. The numbers within the nodes are the entropy of that node throughout a trial (as the inputs are random, each node has one bit of entropy). The arrows going into each node represent the connections necessary to account for the total entropy in that node. The labels accompanying each arrow and the arrows’ widths both indicate the proportion of the entropy in the downstream node that can be accounted for by each arrow alone, but because information is distributed and not additive, the sum of informations often does not equal the entropy of the downstream node. Memory nodes with zero entropy are not shown to simplify the graphs (all brains have eight memory nodes). In this configuration, n-Back agents were required to report on the outputs corresponding to t−1, t−3, t−5, t−7 and t−8, where t is the current time.
Figure 6. Depiction of the two tasks used. (a) In the n-Back task, successive bits are provided to the agent at input i and must pass through various portions of the memory m and be delivered to outputs at later times, such that the outputs o1, o2, o3, o4, and o5 at a given time t provide the input state from prior time points t−1, t−3, t−5, t−7, and t−8, respectively. (b) In the Block Catch task, blocks of various sizes with left or right lateral motion are dropped. Some blocks must be avoided (those shown in red) while other blocks (shown in green) are to be caught. The right portion of (b) shows a subsection of the environment at a particular moment, with a left-falling size-4 block (red). The agent is depicted in blue, with the sensors in dark blue and the “blind spot” in light blue. As currently positioned, only the rightmost sensor of the agent would be activated. Here, the agent should miss the block. The agent “catches” a block if any part of the block intersects any part of the agent.
Full article ">Figure 7
<p>Fragmentation matrices for three Markov Brains evolved to perfect performance on the Block Catch task. For each brain two fragmentation matrices are shown, the first using the state information from the full lifetime (all 120 conditions for 31 updates), and the other only the late lifetime, that is, the last 25% of updates (all 120 conditions). The features (labeling the rows) represent various salient features of the world state. The columns are combinations (sets) of the brain’s 8 memory nodes <math display="inline"><semantics> <mrow> <msub> <mi>m</mi> <mn>1</mn> </msub> <mo>⋯</mo> <msub> <mi>m</mi> <mn>8</mn> </msub> </mrow> </semantics></math>. The amount of information between each feature and each memory set is indicated by gray-scale, where white squares indicate perfect correlation, and gray to black represents successively less correlation. A portion of each matrix containing sets of intermediate size is not shown to save space. (<b>a</b>) Full-lifetime fragmentation matrix of a simple brain (1), same brain, late-lifetime fragmentation matrix (2); (<b>b</b>) full-lifetime and late-lifetime fragmentation matrices for an intermediate-complexity brain (1 and 2, respectively); (<b>c</b>) full-lifetime and late-lifetime fragmentation matrices for a complex brain (1 and 2, respectively).</p>
Full article ">Figure 8
<p>Full-lifetime (<b>a.1</b>,<b>b.1</b>,<b>c.1</b>) and late-lifetime (<b>a.2</b>,<b>b.2</b>,<b>c.2</b>) information flow diagrams for the Block Catch task, for the three brains shown in <a href="#entropy-24-00735-f007" class="html-fig">Figure 7</a>. Green, white, and blue nodes indicate inputs (<span class="html-italic">i</span>), memory (<span class="html-italic">m</span>), and output (<span class="html-italic">o</span>) nodes respectively. The numbers in the nodes indicate the entropy (in bits) in that node. The labels accompanying each connecting link and the link’s width both indicate the proportion of the entropy in the downstream node that can be accounted for by that link. The links rendered in black going into each node represent the connections necessary to account for the total entropy in that node. Red links indicate connections that may (but do not necessarily) account for downstream information (indicating redundant predictive sets). Memory nodes with zero entropy are not shown to simplify the figures (all brains have eight memory nodes). Figure labels correspond to results shown in <a href="#entropy-24-00735-f007" class="html-fig">Figure 7</a>.</p>
Full article ">Figure 9
<p>Mutational robustness (average degradation of performance) vs. flow-complexity (number of informative arrows in the information flow diagram), for Markov Brains (left panel) and RNNs (right panel). In the left panel, three dots are circled and annotated (a–c) to indicate values generated by the three networks shown in <a href="#entropy-24-00735-f007" class="html-fig">Figure 7</a> and <a href="#entropy-24-00735-f008" class="html-fig">Figure 8</a>. Black solid lines indicate a line of best linear fit.</p>
Full article ">Figure 10
<p>Depiction of the two cognitive systems used in this work. Both brain types have the same general structure, which consists of a “before” state, <math display="inline"><semantics> <msub> <mi>T</mi> <mn>0</mn> </msub> </semantics></math> and an “after” state, <math display="inline"><semantics> <msub> <mi>T</mi> <mn>1</mn> </msub> </semantics></math>. The <math display="inline"><semantics> <msub> <mi>T</mi> <mn>0</mn> </msub> </semantics></math> state is made up of inputs and prior memory, while the <math display="inline"><semantics> <msub> <mi>T</mi> <mn>1</mn> </msub> </semantics></math> state is made up of outputs and updated memory. (<b>a</b>) shows the structure of the RNNs where data flows from <math display="inline"><semantics> <msub> <mi>T</mi> <mn>0</mn> </msub> </semantics></math> (input and prior memory) through summation and threshold nodes to <math display="inline"><semantics> <msub> <mi>T</mi> <mn>1</mn> </msub> </semantics></math> (outputs and updated memory). (<b>b</b>) shows the structure of the Markov Brains, where information flows from <math display="inline"><semantics> <msub> <mi>T</mi> <mn>0</mn> </msub> </semantics></math>, through genetically encoded logic gates, to <math display="inline"><semantics> <msub> <mi>T</mi> <mn>1</mn> </msub> </semantics></math>.</p>
Full article ">
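">
The XOR relationship described in the Figure 4 caption above can be checked directly. The following minimal sketch (illustrative code, not taken from the paper) computes the relevant entropies for Y = X1 XOR X2 with independent uniform inputs, confirming that each pairwise mutual information is zero while the joint mutual information is one bit, so the three-way term equals -1 bit.

```python
# Hypothetical illustration (not code from the paper): for Y = X1 XOR X2 with
# independent fair-coin inputs, each of I(X1;Y) and I(X2;Y) is 0 bits, the joint
# I(X1,X2;Y) is 1 bit, and the shared term I(X1;X2;Y) is -1 bit.
import itertools
from collections import Counter
from math import log2

# Joint distribution over (x1, x2, y) with y = x1 ^ x2.
states = [(x1, x2, x1 ^ x2) for x1, x2 in itertools.product((0, 1), repeat=2)]
p = {s: 1 / 4 for s in states}

def H(*idx):
    """Entropy (bits) of the marginal over the given coordinate indices."""
    marg = Counter()
    for s, ps in p.items():
        marg[tuple(s[i] for i in idx)] += ps
    return -sum(q * log2(q) for q in marg.values() if q > 0)

I_x1_y = H(0) + H(2) - H(0, 2)           # I(X1;Y)    = 0 bits
I_x2_y = H(1) + H(2) - H(1, 2)           # I(X2;Y)    = 0 bits
I_x12_y = H(0, 1) + H(2) - H(0, 1, 2)    # I(X1,X2;Y) = 1 bit
shared = I_x1_y + I_x2_y - I_x12_y       # I(X1;X2;Y) = -1 bit

print(I_x1_y, I_x2_y, I_x12_y, shared)
```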
18 pages, 379 KiB  
Article
Optimal Control of Background-Based Uncertain Systems with Applications in DC Pension Plan
by Wei Liu, Wanying Wu, Xiaoyi Tang and Yijun Hu
Entropy 2022, 24(5), 734; https://doi.org/10.3390/e24050734 - 21 May 2022
Viewed by 1910
Abstract
In this paper, we propose a new optimal control model for uncertain systems with jump. The model incorporates background-state variables, which are governed by an uncertain differential equation. Meanwhile, the state variables are governed by another uncertain differential equation with jump, in which both the background-state variables and the control variables are involved. Under the optimistic value criterion, using the uncertain dynamic programming method, we establish the principle and the equation of optimality. As an application, the optimal investment strategy and optimal payment rate for DC pension plans are given, where the corresponding background-state variables represent the salary process. This application to DC pension plans illustrates the effectiveness of the proposed model. Full article
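For readers unfamiliar with the optimistic value criterion mentioned in the abstract, the standard definition from uncertainty theory is recalled below for orientation; the notation is illustrative and not quoted from the paper.

```latex
% alpha-optimistic value of an uncertain variable \xi under uncertain measure \mathcal{M}
% (standard definition in uncertainty theory; notation illustrative).
\xi_{\sup}(\alpha) \;=\; \sup\left\{\, r \;:\; \mathcal{M}\{\xi \ge r\} \ge \alpha \,\right\},
\qquad \alpha \in (0,1].
```

Maximizing this quantity ranks uncertain objectives by the largest value that is attained with belief degree at least α.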
Figures 1–3 (captions summarized): effects of the parameters α, μ, σ1, σ2, μL, and σL on the optimal investment proportion ω*(t) and the optimal payment rate B*(t).
34 pages, 23413 KiB  
Article
Finite-Time Pinning Synchronization Control for T-S Fuzzy Discrete Complex Networks with Time-Varying Delays via Adaptive Event-Triggered Approach
by Xiru Wu, Yuchong Zhang, Qingming Ai and Yaonan Wang
Entropy 2022, 24(5), 733; https://doi.org/10.3390/e24050733 - 21 May 2022
Cited by 1 | Viewed by 2421
Abstract
This paper is concerned with the adaptive event-triggered finite-time pinning synchronization control problem for T-S fuzzy discrete complex networks (TSFDCNs) with time-varying delays. In order to accurately describe discrete dynamical behaviors, we build a general model of discrete complex networks via T-S fuzzy rules, which extends a continuous-time model in existing results. Based on an adaptive threshold and measurement errors, a discrete adaptive event-triggered approach (AETA) is introduced to govern signal transmission. With the hope of improving the resource utilization and reducing the update frequency, an event-based fuzzy pinning feedback control strategy is designed to control a small fraction of network nodes. Furthermore, by new Lyapunov–Krasovskii functionals and the finite-time analysis method, sufficient criteria are provided to guarantee the finite-time bounded stability of the closed-loop error system. Under an optimization condition and linear matrix inequality (LMI) constraints, the desired controller parameters with respect to minimum finite time are derived. Finally, several numerical examples are conducted to show the effectiveness of obtained theoretical results. For the same system, the average triggering rate of AETA is significantly lower than existing event-triggered mechanisms and the convergence rate of synchronization errors is also superior to other control strategies. Full article
(This article belongs to the Special Issue Dynamics of Complex Networks)
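To make the event-triggered idea concrete, the sketch below implements a generic discrete-time rule in which a node transmits only when its measurement error exceeds an adaptive threshold. The threshold law, gains, and test signal are hypothetical; this is not the paper's AETA condition or its fuzzy pinning controller.

```python
# Minimal sketch of a discrete adaptive event-triggered transmission rule
# (hypothetical parameters and adaptation law; not the AETA from the paper).
import numpy as np

def run_event_trigger(x, sigma0=0.5, rho=0.05, sigma_min=0.05):
    """Release x[k] only when the error since the last release exceeds an
    adaptive threshold; returns the list of triggering instants."""
    last = x[0]
    sigma = sigma0
    instants = [0]
    for k in range(1, len(x)):
        err = x[k] - last                                   # measurement error e(k)
        if err @ err > sigma * (x[k] @ x[k] + 1e-9):        # trigger condition
            last = x[k]                                     # transmit / update controller
            instants.append(k)
            sigma = max(sigma_min, sigma - rho * sigma)     # tighten threshold after a release
        else:
            sigma = sigma + rho * (sigma0 - sigma)          # relax threshold back toward sigma0
    return instants

# Example: a decaying oscillation as a stand-in for a synchronization error signal.
k = np.arange(200)
x = np.stack([np.exp(-0.02 * k) * np.cos(0.3 * k),
              np.exp(-0.02 * k) * np.sin(0.3 * k)], axis=1)
events = run_event_trigger(x)
print(f"{len(events)} releases out of {len(k)} samples "
      f"(triggering rate {len(events) / len(k):.2f})")
```

The printed triggering rate illustrates the kind of transmission-saving metric compared across mechanisms in the figures.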
Figures 1–22 (captions summarized): communication coupling structures for the two fuzzy rules; node states and synchronization errors of the TSFDCNs without control; synchronization errors, Lyapunov terms, and control inputs of the closed-loop TSFDCNs with the proposed controllers; triggered instants and triggering rates under the AETA compared with the static and periodic event-triggered mechanisms of [18,39,48], and triggering rates for varying σi; synchronization errors obtained with the methods of [29,44]; chaotic trajectories of the two fuzzy modes and the corresponding uncontrolled and controlled synchronization errors, state trajectories, Lyapunov terms, control inputs, and triggered instants, with comparisons against [29,34]; and the communication structure, node states, synchronization errors, Lyapunov terms, control inputs, and triggered instants for the DCN example.
19 pages, 6717 KiB  
Article
Ecological Function Analysis and Optimization of a Recompression S-CO2 Cycle for Gas Turbine Waste Heat Recovery
by Qinglong Jin, Shaojun Xia and Tianchao Xie
Entropy 2022, 24(5), 732; https://doi.org/10.3390/e24050732 - 21 May 2022
Cited by 8 | Viewed by 2100
Abstract
In this paper, a recompression S-CO2 Brayton cycle model is established that considers the finite-temperature-difference heat transfer between the heat source and the working fluid, irreversible compression and expansion, and other irreversibilities. First, the ecological function is analyzed. Then the mass flow rate, pressure ratio, diversion coefficient, and the heat conductance distribution ratios (HCDRs) of the four heat exchangers (HEXs) are chosen as variables to optimize cycle performance, and the problem of long optimization time is solved by building a neural network prediction model. The results show that when the mass flow rate is small, the pressure ratio and the HCDRs of the heater and high-temperature regenerator are the main factors influencing the ecological function; when the mass flow rate is large, the influences of the re-compressor and of the HCDRs of the low-temperature regenerator and cooler on the ecological function increase; reasonable adjustment of the HCDRs of the four HEXs can improve cycle performance, but the mass flow rate plays a more important role; and the ecological function can be increased by 12.13%, 31.52%, 52.2%, 93.26%, and 96.99% compared with the initial design point after one, two, three, four, and five rounds of optimization, respectively. Full article
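The neural network prediction model mentioned in the abstract is an instance of surrogate-assisted optimization. The sketch below shows only the general pattern: a toy analytic function stands in for the expensive cycle evaluation, and all variable names, ranges, and network sizes are made up.

```python
# Surrogate-assisted search sketch: fit a small neural network to samples of an
# expensive objective, then screen many candidates on the cheap surrogate.
# The toy objective below is NOT the recompression S-CO2 cycle model; it only
# stands in for "ecological function E as a function of design variables".
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def expensive_cycle_model(x):
    # Placeholder for the thermodynamic cycle evaluation (mass flow, pressure
    # ratio, diversion coefficient, HCDRs -> ecological function E).
    return -np.sum((x - 0.6) ** 2, axis=1) + 0.1 * np.sin(10 * x[:, 0])

# 1) Sample the design space and evaluate the expensive model.
X_train = rng.uniform(0.0, 1.0, size=(200, 4))
y_train = expensive_cycle_model(X_train)

# 2) Train the surrogate.
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                         random_state=0).fit(X_train, y_train)

# 3) Screen a large candidate set on the surrogate, then verify the best few exactly.
candidates = rng.uniform(0.0, 1.0, size=(20000, 4))
best = candidates[np.argsort(surrogate.predict(candidates))[-5:]]
print("verified E at surrogate-selected designs:", expensive_cycle_model(best))
```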
Figures 1–15 (captions summarized): device and T-s diagrams of the RCSCBC; calculation flow chart; effects of the mass flow rate m_wf, pressure ratio π, the parameter x_p, and the turbine and compressor efficiencies η_t and η_c on the ecological function E, including three-dimensional E-m_wf-π, E-x_p-π, and E-ψ_H-ψ_HTR relationships; the optimization flow chart; comparison of neural-network-predicted and calculated values of E and T_1; profiles of E and the corresponding heat-conductance distribution ratios versus ψ_HTR, ψ_LTR, and m_wf; and the variation of T_1, T_4, and T_H,out with m_wf.
31 pages, 6269 KiB  
Article
Robust Variable-Step Perturb-and-Observe Sliding Mode Controller for Grid-Connected Wind-Energy-Conversion Systems
by Ilham Toumi, Billel Meghni, Oussama Hachana, Ahmad Taher Azar, Amira Boulmaiz, Amjad J. Humaidi, Ibraheem Kasim Ibraheem, Nashwa Ahmad Kamal, Quanmin Zhu, Giuseppe Fusco and Naglaa K. Bahgaat
Entropy 2022, 24(5), 731; https://doi.org/10.3390/e24050731 - 20 May 2022
Cited by 14 | Viewed by 2938
Abstract
In order to extract efficient power generation, a wind turbine (WT) system requires an accurate maximum power point tracking (MPPT) technique. Therefore, a novel robust variable-step perturb-and-observe (RVS-P&amp;O) algorithm was developed for the machine-side converter (MSC). The control strategy was applied to a WT-based permanent-magnet synchronous generator (PMSG) to overcome the downsides of currently published P&amp;O MPPT methods. In particular, two main points were involved: firstly, a systematic step-size selection based on normalized power and speed measurements was proposed; secondly, to obtain acceptable robustness for large and long wind-speed variations, a new correction for calculating the power variation was carried out. The grid-side converter (GSC) was controlled using a second-order sliding mode controller (SOSMC) with an adaptive-gain super-twisting algorithm (STA) to realize high-quality seamless setting of the power injected into the grid, satisfactory power factor correction, high harmonic performance of the AC source, and removal of the chattering effect present in the traditional first-order sliding mode controller (FOSMC). Simulation results showed the superiority of the suggested RVS-P&amp;O over the competing P&amp;O-based techniques. The RVS-P&amp;O offered the WT an efficiency of 99.35%, an increase of 3.82% over the variable-step P&amp;O algorithm. The settling time was also markedly improved, at 0.00794 s, compared with LS-P&amp;O (0.0841 s), SS-P&amp;O (0.1617 s), and VS-P&amp;O (0.2224 s). Therefore, in terms of energy efficiency as well as transient and steady-state response performance under various operating conditions, the RVS-P&amp;O algorithm is an accurate candidate for online MPP tracking. Full article
(This article belongs to the Special Issue Nonlinear Control Systems with Recent Advances and Applications)
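For orientation, the sketch below implements a generic variable-step perturb-and-observe loop in which the perturbation step is scaled by a normalized power change. The turbine model, gains, and step limits are invented for illustration; the paper's RVS-P&amp;O step selection and robustness correction are not reproduced here.

```python
# Generic variable-step perturb-and-observe MPPT sketch (hypothetical turbine
# model and gains; not the paper's RVS-P&O law or its robustness correction).
def turbine_power(omega, v):
    """Toy power curve with its maximum at omega_opt = 8*v (made up for illustration)."""
    omega_opt = 8.0 * v
    return max(0.0, 1.0 - ((omega - omega_opt) / omega_opt) ** 2) * v ** 3

def vs_po_mppt(v_profile, omega0=40.0, k_step=20.0, step_min=0.2, step_max=2.0,
               p_rated=200.0):
    omega, p_prev, direction = omega0, 0.0, 1.0
    for v in v_profile:
        p = turbine_power(omega, v)
        dp = p - p_prev
        # Variable step: proportional to the normalized power change, bounded.
        step = min(step_max, max(step_min, k_step * abs(dp) / p_rated))
        # Classic P&O direction logic: keep direction if power rose, otherwise reverse.
        if dp < 0:
            direction = -direction
        omega += direction * step
        p_prev = p
    return omega, p

profile = [7.0] * 300 + [9.0] * 300          # step change in wind speed
omega_end, p_end = vs_po_mppt(profile)
print(f"settled near omega = {omega_end:.1f} (optimum ~ {8 * 9.0:.1f}), P = {p_end:.1f}")
```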
Figures 1–14 (captions summarized): configuration of the studied wind-generation system; power coefficient Cp versus tip speed ratio λ; the complete control system; power/speed curve showing the operating regions of the VSWT; block diagrams of the MSC controller and of the GSC-based DPC-SVM with the SOSMC-STA controller; working principle, operation principle, and detailed flowchart of the P&amp;O and RVS-P&amp;O MPPT techniques; machine-side results (power coefficient, tip speed ratio, rotor speed and its error, mechanical power, extracted-power error, and step size) under gradual and fluctuating wind-speed profiles; tracking of the optimal rotational-speed curve (ORC) for SS-P&amp;O, LS-P&amp;O, VS-P&amp;O, and RVS-P&amp;O; dynamic responses of the four competing algorithms; and grid-side results (DC-link voltage, grid active and reactive power, phase-A grid current, and THD) for the FOSMC and SOSMC algorithms.
16 pages, 487 KiB  
Article
An Extensive Assessment of Network Embedding in PPI Network Alignment
by Marianna Milano, Chiara Zucco, Marzia Settino and Mario Cannataro
Entropy 2022, 24(5), 730; https://doi.org/10.3390/e24050730 - 20 May 2022
Cited by 5 | Viewed by 2938
Abstract
Network alignment is a fundamental task in network analysis. In the biological field, where protein–protein interaction (PPI) networks are represented as graphs, network alignment has allowed the discovery of underlying biological knowledge such as conserved evolutionary pathways and functionally conserved proteins across different species. A recent trend in network science concerns network embedding, i.e., the modelling of nodes in a network as low-dimensional feature vectors. In this survey, we present an overview of current PPI network embedding alignment methods, a comparison among them, and a comparison to classical PPI network alignment algorithms. The results of this comparison highlight that: (i) only five network-embedding-based network alignment algorithms have been applied in the biological context, whereas the literature presents several classical network alignment algorithms; (ii) there is a need for an evaluation framework that enables a unified comparison between different algorithms; (iii) the majority of the proposed algorithms perform network embedding through matrix-factorization-based techniques; (iv) three out of five algorithms leverage external biological resources, while the remaining two are designed for domain-agnostic network alignment and tested on PPI networks; (v) two of the three algorithms are stated to perform multi-network alignment, while the remaining ones perform pairwise network alignment. Full article
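As a concrete (and deliberately generic) illustration of embedding-based network alignment, the sketch below embeds two isomorphic toy graphs by truncated SVD of their adjacency matrices, aligns the embedding spaces with a few anchor pairs via orthogonal Procrustes, and matches the remaining nodes by nearest neighbours. It is not one of the five surveyed algorithms and omits the external biological resources several of them exploit.

```python
# Generic sketch of embedding-based network alignment: matrix-factorization
# embeddings + anchor-based Procrustes mapping + nearest-neighbour matching.
import numpy as np
import networkx as nx
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(1)
G1 = nx.erdos_renyi_graph(60, 0.12, seed=1)
G2 = nx.relabel_nodes(G1, {i: (i * 7) % 60 for i in range(60)})   # shuffled copy of G1

def embed(G, dim=8):
    A = nx.to_numpy_array(G, nodelist=sorted(G.nodes()))
    U, s, _ = np.linalg.svd(A)
    return U[:, :dim] * np.sqrt(s[:dim])          # truncated-SVD node embedding

X1, X2 = embed(G1), embed(G2)

# Assume a handful of known anchor correspondences: node i in G1 <-> (7*i mod 60) in G2.
anchors = [(i, (i * 7) % 60) for i in range(16)]
R, _ = orthogonal_procrustes(X1[[a for a, _ in anchors]], X2[[b for _, b in anchors]])
X1_mapped = X1 @ R                                # map G1 embeddings into G2's space

# Nearest-neighbour matching in the aligned embedding space.
pred = {i: int(np.argmin(np.linalg.norm(X2 - X1_mapped[i], axis=1))) for i in range(60)}
accuracy = np.mean([pred[i] == (i * 7) % 60 for i in range(60)])
print(f"node-alignment accuracy on the toy pair: {accuracy:.2f}")
```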
Figures 1–2 (captions summarized): examples of one-to-one and many-to-many pairwise (PNA) and multiple (MNA) network alignment; and the proposed taxonomy of the considered NE-NA algorithms for PPI networks based on their general pipeline.
22 pages, 2767 KiB  
Article
Functional Dynamics of Substrate Recognition in TEM Beta-Lactamase
by Chris Avery, Lonnie Baker and Donald J. Jacobs
Entropy 2022, 24(5), 729; https://doi.org/10.3390/e24050729 - 20 May 2022
Cited by 4 | Viewed by 2954
Abstract
The beta-lactamase enzyme provides effective resistance to beta-lactam antibiotics due to substrate recognition controlled by point mutations. Recently, extended-spectrum and inhibitor-resistant mutants have become a global health problem. Here, the functional dynamics that control substrate recognition in TEM beta-lactamase are investigated using all-atom molecular dynamics simulations. Comparisons are made between wild-type TEM-1 and TEM-2 and the extended-spectrum mutants TEM-10 and TEM-52, both in apo form and in complex with four different antibiotics (ampicillin, amoxicillin, cefotaxime and ceftazidime). Dynamic allostery is predicted based on a quasi-harmonic normal mode analysis using a perturbation scan. An allosteric mechanism known to inhibit enzymatic function in TEM beta-lactamase is identified, along with other allosteric binding targets. Mechanisms for substrate recognition are elucidated using multivariate comparative analysis of molecular dynamics trajectories to identify changes in dynamics resulting from point mutations and ligand binding, and the conserved dynamics, which are functionally important, are extracted as well. The results suggest that the H10-H11 loop (residues 214-221) is a secondary anchor for larger extended spectrum ligands, while the H9-H10 loop (residues 194-202) is distal from the active site and stabilizes the protein against structural changes. These secondary non-catalytically-active loops offer attractive targets for novel noncompetitive inhibitors of TEM beta-lactamase. Full article
(This article belongs to the Special Issue Molecular Dynamics Simulations of Biomolecules)
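The quasi-harmonic normal mode analysis and the essential-dynamics projections referenced in the abstract both reduce to diagonalizing the covariance matrix of atomic fluctuations. The sketch below shows that computation on synthetic data standing in for an MD trajectory; it is illustrative only and omits trajectory alignment, mass-weighting, and the perturbation scan.

```python
# Sketch of quasi-harmonic / essential-dynamics analysis: diagonalize the
# covariance of atomic fluctuations and project the trajectory onto the top
# modes. Random data stands in for a real MD trajectory (frames x 3N coords).
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_atoms = 2000, 50
traj = rng.normal(size=(n_frames, 3 * n_atoms))
traj[:, 0] += 3.0 * np.sin(np.linspace(0, 20, n_frames))   # one slow collective motion

# Remove the mean structure (alignment/fitting is assumed to be done already).
fluct = traj - traj.mean(axis=0)

# Quasi-harmonic modes = eigenvectors of the covariance matrix.
cov = fluct.T @ fluct / (n_frames - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Essential dynamics: projections onto the top principal components,
# in the spirit of the PC1/PC2 plots summarized for Figure 3.
pc_proj = fluct @ eigvecs[:, :2]
explained = eigvals[:2].sum() / eigvals.sum()
print(f"top-2 modes explain {100 * explained:.1f}% of the positional variance")
```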
Graphical abstract and Figures 1–11 (captions summarized): the TEM beta-lactamase structure with catalytic residues, primary binding site, and omega loop (residues 163-178); structures of the four ligands (ampicillin, amoxicillin, cefotaxime, ceftazidime); essential dynamics (RMSD and principal-component projections) for apo and holo simulations; allosteric targets from the ΔΔG propensity response for TEM-1, TEM-2, TEM-10, and TEM-52; dynamical differences (dRMSF) between mutants and between apo and holo simulations; H10-H11 loop conformations and stabilizing contacts with the extended-spectrum ligands; conserved dynamics (iRMSF); cumulative overlap between SPLOC and PCA modes; H9-H10 loop conformations; and a potential binding pocket for inhibiting H9-H10 motions.
10 pages, 272 KiB  
Article
Quantum Estimates for Different Type Intequalities through Generalized Convexity
by Ohud Bulayhan Almutairi
Entropy 2022, 24(5), 728; https://doi.org/10.3390/e24050728 - 20 May 2022
Cited by 4 | Viewed by 1827
Abstract
This article estimates several integral inequalities involving (h,m)-convexity via the quantum calculus, through which important integral inequalities, including Simpson-like, midpoint-like, averaged midpoint-trapezoid-like, and trapezoid-like inequalities, are extended. We generalize some quantum integral inequalities for q-differentiable (h,m)-convex functions. Our results could serve as the refinement and unification of some classical results existing in the literature, which are recovered by taking the limit q → 1−. Full article
(This article belongs to the Special Issue Advanced Numerical Methods for Differential Equations)
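For orientation, the classical Jackson q-derivative and q-integral that underlie quantum (q-) calculus estimates of this kind are recalled below; the interval-shifted analogues and the (h,m)-convexity framework used in the paper build on the same pattern. These are standard definitions, not excerpts from the article.

```latex
% Classical Jackson q-derivative and q-integral (standard definitions; the
% interval-shifted analogues used for inequalities on [a,b] follow the same pattern).
D_q f(x) = \frac{f(x) - f(qx)}{(1 - q)\, x}, \qquad x \neq 0, \; 0 < q < 1,
\qquad \lim_{q \to 1^-} D_q f(x) = f'(x),
\\[6pt]
\int_0^{b} f(x)\, \mathrm{d}_q x = (1 - q)\, b \sum_{n=0}^{\infty} q^{n} f\!\left(q^{n} b\right).
```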
13 pages, 527 KiB  
Article
A Hybrid Scheme of MCS Selection and Spectrum Allocation for URLLC Traffic under Delay and Reliability Constraints
by Yuehong Gao, Haotian Yang, Xiao Hong and Lu Chen
Entropy 2022, 24(5), 727; https://doi.org/10.3390/e24050727 - 20 May 2022
Cited by 3 | Viewed by 2441
Abstract
The Ultra-Reliable Low-Latency Communication (URLLC) is expected to be an important feature of 5G and beyond networks. Supporting URLLC in a resource-efficient manner demands optimal Modulation and Coding Scheme (MCS) selection and spectrum allocation. This paper presents a study on MCS selection and spectrum allocation to support URLLC. The essential idea is to establish an analytical connection between the delay and reliability requirements of URLLC data transmission and the underlying MCS selection and spectrum allocation. In particular, the connection factors in fundamental aspects of wireless data communication include channel quality, coding and modulation, spectrum allocation and data traffic characteristics. With this connection, MCS selection and spectrum allocation can be efficiently performed based on the delay and reliability requirements of URLLC. Theoretical results in the scenario of a 5G New Radio system are presented, where the Signal-to-Noise Ratio (SNR) thresholds for adaptive MCS selection, data-transmission rate and delay, as well as spectrum allocation under different configurations, including data duplication, are discussed. Simulation results are also obtained and compared with the theoretical results, which validate the analysis and its efficiency. Full article
(This article belongs to the Special Issue Advances in Information and Coding Theory)
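The "required coding length" results summarized in the figures are typically obtained from finite-blocklength analysis. The sketch below uses the standard normal approximation for the AWGN channel (capacity minus a dispersion penalty) to compute the minimum blocklength for k information bits at error probability ε; it illustrates the general calculation, not the paper's exact expressions or scheduling scheme.

```python
# Normal-approximation sketch of the required blocklength n to send k information
# bits with error probability eps at a given SNR (AWGN channel dispersion form);
# illustrative only -- not the paper's exact analysis.
import numpy as np
from scipy.stats import norm

def required_blocklength(k, eps, snr_db):
    snr = 10 ** (snr_db / 10)
    C = np.log2(1 + snr)                                   # capacity, bits per channel use
    V = (1 - 1 / (1 + snr) ** 2) * np.log2(np.e) ** 2      # channel dispersion
    q_inv = norm.isf(eps)                                  # Q^{-1}(eps)
    for n in range(1, 20000):
        max_bits = n * C - np.sqrt(n * V) * q_inv + 0.5 * np.log2(n)
        if max_bits >= k:
            return n
    return None

# Example in the spirit of the coding-length figures (k = 256 bits, eps = 1e-5):
for snr_db in (0, 5, 10):
    print(snr_db, "dB ->", required_blocklength(256, 1e-5, snr_db), "channel uses")
```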
Figures 1–8 (captions summarized): the system model and the equivalent analysis model of the considered wireless communication system; required coding length for k = 256 bits with ε = 10⁻⁵ and ε = 10⁻³; capacity loss under different reliability requirements; transmission rate and maximum delay for k = 256 bits, ε = 10⁻³, and W = 540 kHz; and the capacity region for k = 256 bits, d₀ = 1 ms, and ε₀ = 10⁻³.
16 pages, 1302 KiB  
Article
Information Dynamics of Electric Field Intensity before and during the COVID-19 Pandemic
by Gorana Mijatovic, Dragan Kljajic, Karolina Kasas-Lazetic, Miodrag Milutinov, Salvatore Stivala, Alessandro Busacca, Alfonso Carmelo Cino, Sebastiano Stramaglia and Luca Faes
Entropy 2022, 24(5), 726; https://doi.org/10.3390/e24050726 - 20 May 2022
Cited by 1 | Viewed by 2100
Abstract
This work investigates the temporal statistical structure of time series of electric field (EF) intensity recorded with the aim of exploring the dynamical patterns associated with periods with different human activity in urban areas. The analyzed time series were obtained from a sensor of the EMF RATEL monitoring system installed in the campus area of the University of Novi Sad, Serbia. The sensor performs wideband cumulative EF intensity monitoring of all active commercial EF sources, thus including those linked to human utilization of wireless communication systems. Monitoring was performed continuously during the years 2019 and 2020, allowing us to investigate the effects on the patterns of EF intensity of varying conditions of human mobility, including regular teaching and exam activity within the campus, as well as limitations to mobility related to the COVID-19 pandemic. Time series analysis was performed using both simple statistics (mean and variance) and combining the information-theoretic measure of information storage (IS) with the method of surrogate data to quantify the regularity of EF dynamic patterns and detect the presence of nonlinear dynamics. Moreover, to assess the possible coexistence of dynamic behaviors across multiple temporal scales, IS analysis was performed over consecutive observation windows lasting one day, week, month, and year, respectively coarse grained at time scales of 6 min, 30 min, 2 h, and 1 day. Our results document that the EF intensity patterns of variability are modulated by the movement of people at daily, weekly, and monthly scales, and are blunted during periods of restricted mobility related to the COVID-19 pandemic. Mobility restrictions also affected significantly the regularity of the EF intensity time series, resulting in lower values of IS observed simultaneously with a loss of nonlinear dynamics. Thus, our analysis can be useful to investigate changes in the global patterns of human mobility both during pandemics or other types of events, and from this perspective may serve to implement strategies for safety assessment and for optimizing the design of networks of EF sensors. Full article
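A simplified version of the regularity analysis described above can be sketched as follows: information storage is estimated as the mutual information between the present sample and its recent past (here with a crude plug-in estimator on quantile-binned data) and compared against IAAFT surrogates that preserve the amplitude distribution and power spectrum. The estimator, embedding, scales, and toy signal are all simplifications and do not reproduce the paper's settings.

```python
# Simplified sketch of the IS + surrogate-data test. An original IS well above
# the surrogate range points to nonlinear dynamics. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def binned_is(x, n_bins=4, m=2):
    """Plug-in IS estimate I(x_t ; x_{t-1..t-m}) in bits on quantile-binned data."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    d = np.digitize(x, edges)
    past = np.array([d[m - i - 1:len(d) - i - 1] for i in range(m)]).T
    pres = d[m:]
    past_sym = past @ (n_bins ** np.arange(m))

    def H(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return -(p * np.log2(p)).sum()

    return H(pres) + H(past_sym) - H(past_sym * n_bins + pres)

def iaaft_surrogate(x, n_iter=100):
    """Iterative amplitude-adjusted Fourier-transform surrogate of x."""
    amp, sorted_x, s = np.abs(np.fft.rfft(x)), np.sort(x), rng.permutation(x)
    for _ in range(n_iter):
        phases = np.angle(np.fft.rfft(s))
        s = np.fft.irfft(amp * np.exp(1j * phases), n=len(x))
        s = sorted_x[np.argsort(np.argsort(s))]   # restore the amplitude distribution
    return s

# Toy series standing in for coarse-grained EF intensity: daily-like rhythm + noise.
t = np.arange(1024)
x = np.sin(2 * np.pi * t / 240) + 0.5 * rng.normal(size=t.size)

is_orig = binned_is(x)
is_surr = [binned_is(iaaft_surrogate(x)) for _ in range(19)]
print(f"IS = {is_orig:.3f} bits; surrogate range = [{min(is_surr):.3f}, {max(is_surr):.3f}]")
```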
Show Figures

Figure 1

Figure 1
Figure 1: Representative time series of EF intensity monitored during the same day (a), week (b), and month (c) of 2019 (orange) and 2020 (green), as well as during the whole years 2019 and 2020 (d). The time series samples are obtained by averaging the EF intensity over a time scale peculiar to each observation window: τ_day = 6 min in (a); τ_week = 30 min in (b); τ_month = 2 h in (c); τ_year = 1 day in (d).
Figure 2: Mean of the EF intensity time series computed over observation windows lasting one day at the time scale τ_day = 6 min (a), one week at τ_week = 30 min (b), one month at τ_month = 2 h (c), and one year at τ_year = 1 day (d). The colored areas identify periods of different activities in the campus area of the University of Novi Sad, occurring before (2019) and during (2020) the COVID-19 pandemic; they may differ slightly between the two analyzed years (± a few days).
Figure 3: Variance of the EF intensity time series computed over the same observation windows and time scales as in Figure 2; colored areas as in Figure 2.
Figure 4: Information storage computed on the representative time series reported in Figure 1 (filled symbols, left) and on 100 IAAFT surrogates (empty circles, right). The thresholds set to detect statistically significant nonlinear dynamics are indicated by blue lines; time series with significant nonlinearity are detected when the original IS exceeds the threshold level (orange or green circles), while the time series is regarded as linear when the original IS is below the threshold (black squares).
Figure 5: Information storage of the EF intensity time series computed over the same observation windows and time scales as in Figure 2; colored areas as in Figure 2. Black squares indicate the presence of linear dynamics, while orange (2019) and green (2020) circles indicate nonlinear dynamics detected through the method of surrogate data.
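">
The nonlinearity test sketched in the captions of Figures 4 and 5 (comparing the information storage of each series against a threshold built from 100 IAAFT surrogates) follows a standard surrogate-data recipe. Below is a minimal Python sketch of that recipe, assuming a generic discriminating `statistic` where an information storage estimator would be plugged in; all names and the 95th-percentile threshold are illustrative choices, not code or settings taken from the paper.

```python
import numpy as np

def iaaft_surrogate(x, n_iter=100, seed=None):
    """One IAAFT surrogate: preserves the amplitude spectrum and the
    amplitude distribution of x while scrambling nonlinear structure."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    amp = np.abs(np.fft.rfft(x))          # target Fourier amplitudes
    sorted_x = np.sort(x)                 # target amplitude distribution
    s = rng.permutation(x)                # random initial shuffle
    for _ in range(n_iter):
        # impose the Fourier amplitudes of the original series
        phases = np.angle(np.fft.rfft(s))
        s = np.fft.irfft(amp * np.exp(1j * phases), n=len(x))
        # impose the original amplitude distribution (rank-order remapping)
        ranks = np.argsort(np.argsort(s))
        s = sorted_x[ranks]
    return s

def nonlinearity_test(x, statistic, n_surr=100, alpha=0.05):
    """One-sided surrogate test: is statistic(x) above the (1 - alpha)
    quantile of its distribution over IAAFT surrogates?"""
    surr_stats = np.array([statistic(iaaft_surrogate(x, seed=k))
                           for k in range(n_surr)])
    threshold = np.quantile(surr_stats, 1.0 - alpha)
    return statistic(x) > threshold, threshold
```

With 100 surrogates, exceeding such a surrogate-based quantile plays the role of the blue threshold lines in Figure 4.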
15 pages, 615 KiB  
Article
More Causes Less Effect: Destructive Interference in Decision Making
by Irina Basieva, Vijitashwa Pandey and Polina Khrennikova
Entropy 2022, 24(5), 725; https://doi.org/10.3390/e24050725 - 20 May 2022
Cited by 5 | Viewed by 2253
Abstract
We present a new experiment demonstrating destructive interference in customers’ estimates of conditional probabilities of product failure. We take the perspective of a manufacturer of consumer products and consider two situations of cause and effect. Whereas, individually, the effect of the causes is similar, it is observed that when combined, the two causes produce the opposite effect. Such negative interference of two or more product features may be exploited for better modeling of the cognitive processes taking place in customers’ minds. Doing so can enhance the likelihood that a manufacturer will be able to design a better product, or a feature within it. Quantum probability has been used to explain some commonly observed “non-classical” effects, such as the disjunction effect, question order effect, violation of the sure-thing principle, and the Machina and Ellsberg paradoxes. In this work, we present results from a survey on the impact of multiple observed symptoms on the drivability of a vehicle. The symptoms are assumed to be conditionally independent. We demonstrate that the response statistics cannot be directly explained using classical probability, but quantum formulation easily models it, as it allows for both positive and negative “interference” between events. Since quantum formalism also accounts for classical probability’s predictions, it serves as a richer paradigm for modeling decision making behavior in engineering design and behavioral economics. Full article
(This article belongs to the Special Issue Quantum Models of Cognition and Decision-Making II)
Figure 1: Conditions A and B (red and blue) increase the probability of D (black), even from values close to zero. Combined conditions A and B (magenta) have a smaller effect or none. Meanwhile, the probability of the combination of A and B when D is true (black dashed line) is significantly higher than when D is not true (blue dotted line).
Figure 2: Prior and conditional probabilities fit to the experimental data.
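">
As a purely illustrative aside (not the authors' fitted model), the destructive interference described in this entry's abstract can be reproduced with a minimal qubit calculation: two conditions that each make failure likely can, when superposed with opposite phases, drive the combined conditional probability down. The states, outcome vector, and numbers below are hypothetical.

```python
import numpy as np

def prob_D(psi, D):
    """Born-rule probability of outcome D for normalized state psi."""
    return np.abs(np.vdot(D, psi)) ** 2

# Outcome vector for "vehicle is not drivable" (computational basis state |1>).
D = np.array([0.0, 1.0], dtype=complex)

# Hypothetical states prepared by observing symptom A alone or symptom B alone,
# each giving a fairly high probability of "not drivable".
psi_A = np.array([np.sqrt(0.3),  np.sqrt(0.7)], dtype=complex)   # P(D|A) = 0.7
psi_B = np.array([np.sqrt(0.3), -np.sqrt(0.7)], dtype=complex)   # P(D|B) = 0.7

# Combined condition modeled as an equal-weight superposition of the two states.
psi_AB = psi_A + psi_B
psi_AB = psi_AB / np.linalg.norm(psi_AB)

print(prob_D(psi_A, D), prob_D(psi_B, D))   # 0.7 and 0.7
print(prob_D(psi_AB, D))                    # 0.0: destructive interference
```

The cross term between the two amplitudes is what classical probability lacks; with a different relative phase the same construction can produce constructive interference instead.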
8 pages, 1907 KiB  
Editorial
Entropy 2022 Best Paper Award
by Entropy Editorial Office
Entropy 2022, 24(5), 724; https://doi.org/10.3390/e24050724 - 20 May 2022
Viewed by 2045
Abstract
On behalf of the Editor-in-Chief, Prof [...] Full article
16 pages, 2404 KiB  
Article
Joint Optimization of Control Strategy and Energy Consumption for Energy Harvesting WSAN
by Zhuwei Wang, Zhicheng Liu, Lihan Liu, Chao Fang, Meng Li and Jingcheng Zhao
Entropy 2022, 24(5), 723; https://doi.org/10.3390/e24050723 - 19 May 2022
Cited by 1 | Viewed by 1817
Abstract
With the rapid development of wireless sensor technology, recent progress in wireless sensor and actuator networks (WSANs) with energy harvesting provides the possibility for various real-time applications. Meanwhile, extensive research activities are carried out in the fields of efficient energy allocation and control strategy design. However, the joint design considering physical plant control, energy harvesting, and energy consumption is rarely considered in existing works. In this paper, in order to enhance control stability and improve the energy efficiency of the WSAN, a novel three-step joint optimization algorithm is proposed through control strategy and energy management analysis. First, the optimal sampling interval is obtained based on the energy harvesting, consumption, and remaining-energy conditions. Then, the control gain for each sampling interval is derived by using a backward iteration. Finally, the optimal control strategy is determined as a linear function of the current plant states and previous control strategies. The application to a UAV formation flight system demonstrates that better system performance and control stability can be achieved by the proposed joint optimization design in poor, sufficient, and general energy harvesting scenarios. Full article
Figure 1: The architecture of the WSAN system with an energy harvesting controller.
Figure 2: Energy harvesting and consumption model for the controller.
Figure 3: The UAV formation flight system with an energy harvesting controller.
Figure 4: Energy level comparison between fixed and adaptive sampling intervals in the poor energy harvesting condition.
Figure 5: Relative distance between the follower and the leader in the poor energy harvesting condition.
Figure 6: Energy level comparison between fixed and adaptive sampling intervals in the sufficient energy harvesting condition.
Figure 7: Relative distance between the follower and the leader in the sufficient energy harvesting condition.
Figure 8: Energy level comparison between fixed and adaptive sampling intervals in the general energy harvesting condition.
Figure 9: Relative distance between the follower and the leader in the general energy harvesting condition.
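">
The second step of the abstract's three-step algorithm derives the control gain for each sampling interval by a backward iteration. The sketch below shows the generic finite-horizon discrete-time LQR form of such a backward recursion in Python; the plant matrices, costs, and horizon are placeholders for illustration, not the energy-harvesting WSAN model from the paper.

```python
import numpy as np

def backward_lqr_gains(A, B, Q, R, QN, N):
    """Finite-horizon discrete LQR: iterate the Riccati recursion backward
    from the terminal cost QN and return gains K_0, ..., K_{N-1}
    for the control law u_k = -K_k x_k."""
    P = QN.copy()
    gains = [None] * N
    for k in reversed(range(N)):
        S = R + B.T @ P @ B
        K = np.linalg.solve(S, B.T @ P @ A)   # K_k
        P = Q + A.T @ P @ (A - B @ K)         # Riccati update
        gains[k] = K
    return gains

# Hypothetical double-integrator plant as a stand-in for the controlled system.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
Q = np.eye(2); R = np.array([[0.1]]); QN = 10.0 * np.eye(2)

K = backward_lqr_gains(A, B, Q, R, QN, N=50)
x = np.array([[1.0], [0.0]])
for k in range(50):                            # roll the closed loop forward
    u = -K[k] @ x
    x = A @ x + B @ u
```

The backward pass mirrors the structure described in the abstract: gains are computed from the terminal condition backward, then applied forward in time.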
22 pages, 950 KiB  
Article
E-Learning Performance Prediction: Mining the Feature Space of Effective Learning Behavior
by Feiyue Qiu, Lijia Zhu, Guodao Zhang, Xin Sheng, Mingtao Ye, Qifeng Xiang and Ping-Kuo Chen
Entropy 2022, 24(5), 722; https://doi.org/10.3390/e24050722 - 19 May 2022
Cited by 16 | Viewed by 3371
Abstract
Learning analysis provides a new opportunity for the development of online education, and has received extensive attention from scholars worldwide. How to use data and models to predict learners’ academic success or failure and give teaching feedback in a timely manner is a core problem in the field of learning analytics. At present, many scholars use key learning behaviors to improve the prediction effect by exploring the implicit relationship between learning behavior data and grades. At the same time, it is very important to explore the association between behavior categories and prediction effects in learning behavior classification. This paper proposes a self-adaptive feature fusion strategy based on learning behavior classification, aiming to mine the effective E-learning behavior feature space and further improve the performance of the learning performance prediction model. First, a behavior classification model (E-learning Behavior Classification Model, EBC Model) based on interaction objects and the learning process is constructed; second, the feature space is preliminarily reduced by the entropy weight method and the variance filtering method; finally, the EBC Model and a self-adaptive feature fusion strategy are combined to build a learning performance predictor. The experiment uses the Open University Learning Analytics Dataset (OULAD). Through the experimental analysis, an effective feature space is obtained; that is, the basic interactive behavior (BI) and knowledge interaction behavior (KI) categories of learning behavior have the strongest correlation with learning performance. It is also shown that the self-adaptive feature fusion strategy proposed in this paper can effectively improve the performance of the learning performance predictor, with the performance indices accuracy (ACC), F1-score (F1), and kappa (K) reaching 98.44%, 0.9893, and 0.9600, respectively. This study constructs E-learning performance predictors and mines the effective feature space from a new perspective, and provides auxiliary references for online learners and managers. Full article
Figure 1: Framework of the proposed method.
Figure 2: E-learning behavior classification model (EBC Model).
Figure 3: CH score chart for K kinds of clustering.
Figure 4: Visualizing clustering results.
Figure 5: Accuracy of behavioral feature subsets under 7 algorithms.
Figure 6: F1-score of behavioral feature subsets under 7 algorithms.
Figure 7: Kappa coefficients of behavioral feature subsets under 7 algorithms.
Figure 8: Accuracy of the three groups of prediction models.
Figure 9: F1-score of the three groups of prediction models.
Figure 10: Kappa of the three groups of prediction models.
Figure 11: Computation time of the three groups of prediction models.
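">
The abstract's second step reduces the feature space with the entropy weight method before fusion. The following is a minimal sketch of the standard entropy-weight computation on a feature matrix of behavior counts; the data and the 50%-of-uniform-weight filter are placeholders, and the paper's exact preprocessing may differ.

```python
import numpy as np

def entropy_weights(X, eps=1e-12):
    """Standard entropy weight method.

    X: (n_samples, n_features) matrix of non-negative behavior counts.
    Returns one weight per feature; low-entropy (more discriminative)
    features receive larger weights."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    # column-wise proportions p_ij
    P = X / (X.sum(axis=0, keepdims=True) + eps)
    # information entropy of each feature, normalized by ln(n)
    E = -(P * np.log(P + eps)).sum(axis=0) / np.log(n)
    d = 1.0 - E                        # degree of diversification
    return d / d.sum()

rng = np.random.default_rng(0)
clicks = rng.poisson(lam=[3, 10, 1, 6], size=(200, 4))   # toy behavior counts
w = entropy_weights(clicks)
keep = w > 0.5 * (1.0 / len(w))        # e.g., drop features with tiny weight
```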
26 pages, 620 KiB  
Review
Some Recent Advances in Energetic Variational Approaches
by Yiwei Wang and Chun Liu
Entropy 2022, 24(5), 721; https://doi.org/10.3390/e24050721 - 18 May 2022
Cited by 9 | Viewed by 2778
Abstract
In this paper, we summarize, through several examples, some recent advances related to the energetic variational approach (EnVarA), a general variational framework for building thermodynamically consistent models of complex fluids. Particular focus is placed on how to model systems involving chemo-mechanical couplings and non-isothermal effects. Full article
(This article belongs to the Special Issue Modeling and Simulation of Complex Fluid Flows)
Figure 1: An illustration of the flow map.
Figure 2: Schematic diagram of breakage and combination processes in wormlike micellar solutions, in which different species are indicated by different colors. (a) General reaction mechanism (75); (b) the reaction mechanism (76) considered in this paper (α = 1).
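">
For orientation, the general structure behind the energetic variational approach referenced in this abstract can be summarized schematically as follows. This is a hedged sketch in generic notation of the standard framework (energy dissipation law plus the Least Action and Maximum Dissipation principles), not equations reproduced from the paper.

```latex
% Energetic variational approach: schematic structure (generic notation).
\begin{align*}
  &\frac{\mathrm{d}}{\mathrm{d}t}\, E^{\mathrm{total}}\big[\mathbf{x}(t)\big]
     = -\,\triangle\big[\mathbf{x}(t), \dot{\mathbf{x}}(t)\big], \qquad \triangle \geq 0,\\[4pt]
  &\text{Least Action Principle:}\quad
     \delta \mathcal{A} = \delta \int_0^T \big(\mathcal{K} - \mathcal{F}\big)\,\mathrm{d}t
     \;\Longrightarrow\; \text{conservative forces},\\[4pt]
  &\text{Maximum Dissipation Principle:}\quad
     \delta_{\dot{\mathbf{x}}}\,\tfrac{1}{2}\triangle
     \;\Longrightarrow\; \text{dissipative forces},\\[4pt]
  &\text{force balance:}\quad
     \frac{\delta \mathcal{A}}{\delta \mathbf{x}}
     = \frac{\delta\,(\tfrac{1}{2}\triangle)}{\delta \dot{\mathbf{x}}},
\end{align*}
% where E^total = kinetic energy K + free energy F and triangle is the rate
% of energy dissipation.
```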
19 pages, 7014 KiB  
Article
Bionic Covert Underwater Acoustic Communication Based on Time–Frequency Contour of Bottlenose Dolphin Whistle
by Lei Xie, Jiahui Zhu, Yuqing Jia and Huifang Chen
Entropy 2022, 24(5), 720; https://doi.org/10.3390/e24050720 - 18 May 2022
Cited by 5 | Viewed by 3332
Abstract
In order to meet the requirements of communication security and concealment, as well as to protect marine life, bionic covert communication has become a hot research topic in underwater acoustic communication (UAC). In this paper, we propose a bionic covert UAC (BC-UAC) method based on the time–frequency contour (TFC) of the bottlenose dolphin whistle, which overcomes the safety problem of traditional low signal-to-noise ratio (SNR) covert communication and allows a detected communication signal to be dismissed as marine biological noise. In the proposed BC-UAC method, the TFC of the bottlenose dolphin whistle is segmented to improve the transmission rate. Two BC-UAC schemes based on the segmented TFC of the whistle are addressed: one using the whistle signal with time-delay (BC-UAC-TD) and one using the whistle signal with frequency-shift (BC-UAC-FS). The original whistle signal is used as a synchronization signal. Moreover, the virtual time reversal mirror (VTRM) technique is adopted to equalize the channel and mitigate the multipath effect. The performance of the proposed BC-UAC method, in terms of the Pearson correlation coefficient (PCC) and bit error rate (BER), is evaluated under simulated and measured underwater channels. Numerical results show that the proposed BC-UAC method performs well in terms of covertness and reliability. Furthermore, the covertness of the bionic modulated signal in BC-UAC-TD is better than that of BC-UAC-FS, whereas the reliability of BC-UAC-FS is better than that of BC-UAC-TD. Full article
(This article belongs to the Special Issue Entropy and Information Theory in Acoustics II)
Figure 1: The effect of human activities on toothed whales.
Figure 2: Four categories of whistle: (a) up-sweep whistle; (b) down-sweep whistle; (c) flat-sweep whistle; (d) sinusoidal whistle.
Figure 3: A bottlenose dolphin whistle: (a) waveform; (b) TFC.
Figure 4: The normalized autocorrelation of the whistle shown in Figure 3a.
Figure 5: The frame structure of covert communication.
Figure 6: The model of the proposed BC-UAC system.
Figure 7: The frequency offset to characterize the time-delay in BC-UAC-TD.
Figure 8: The frequency-shift in BC-UAC-FS.
Figure 9: The workflow of the channel estimation and equalization in the BC-UAC method.
Figure 10: The impulse response of the BELLHOP channel.
Figure 11: The impact of T_sym on the BER of the BC-UAC method under the simulated channel: (a) BC-UAC-TD; (b) BC-UAC-FS.
Figure 12: The impact of the frequency offset on the BER of the BC-UAC method under the simulated channel: (a) BC-UAC-TD (Δf); (b) BC-UAC-FS (Δf₀).
Figure 13: The impact of M on the BER of the proposed BC-UAC method under the simulated channel: (a) BC-UAC-TD; (b) BC-UAC-FS.
Figure 14: The impulse response of the measured underwater channel.
Figure 15: The impact of T_sym on the BER of the proposed BC-UAC method under the measured channel: (a) BC-UAC-TD; (b) BC-UAC-FS.
Figure 16: The impact of the frequency offset on the BER of the proposed BC-UAC method under the measured channel: (a) BC-UAC-TD (Δf); (b) BC-UAC-FS (Δf₀).
Figure 17: The impact of M on the BER of the proposed BC-UAC method under the measured channel: (a) BC-UAC-TD; (b) BC-UAC-FS.
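">
To make the frequency-shift idea concrete, the sketch below synthesizes a whistle-like signal from a time–frequency contour and embeds one bit per contour segment by adding a small shift Δf₀, in the spirit of BC-UAC-FS. The contour shape, sample rate, segment count, and shift value are all illustrative placeholders, not parameters taken from the paper.

```python
import numpy as np

def synth_from_contour(f_inst, fs):
    """Synthesize a unit-amplitude signal whose instantaneous frequency
    follows the contour f_inst (Hz) sampled at rate fs."""
    phase = 2.0 * np.pi * np.cumsum(f_inst) / fs
    return np.cos(phase)

fs = 96_000                                  # sample rate (placeholder)
seg_dur, n_seg = 0.05, 8                     # 8 contour segments of 50 ms each
t = np.arange(int(seg_dur * fs)) / fs

# Whistle-like base contour: an upsweep from 5 kHz to 15 kHz split into segments.
f0, f1 = 5_000.0, 15_000.0
base = [f0 + (f1 - f0) * (k + t / seg_dur) / n_seg for k in range(n_seg)]

bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])    # payload, one bit per segment
delta_f0 = 300.0                             # frequency shift encoding a '1' (Hz)

# Transmit signal: each segment's contour is shifted by delta_f0 when its bit is 1.
tx = np.concatenate([synth_from_contour(base[k] + delta_f0 * bits[k], fs)
                     for k in range(n_seg)])
```

A receiver would recover the bits by estimating each segment's contour and comparing it with the unshifted reference; the smaller the shift, the closer the transmission stays to a natural whistle, which is the covertness/reliability trade-off discussed in the abstract.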
8 pages, 430 KiB  
Article
Fractional Stochastic Differential Equation Approach for Spreading of Diseases
by Leonardo dos Santos Lima
Entropy 2022, 24(5), 719; https://doi.org/10.3390/e24050719 - 17 May 2022
Cited by 8 | Viewed by 2732
Abstract
A nonlinear fractional stochastic differential equation approach with Hurst parameter H in the interval H ∈ (0, 1) is used to study the time evolution of the number of people infected by the coronavirus in countries where the number of cases is large, such as Brazil. The daily rises and falls of new cases, i.e., the fluctuations in the official data, are treated as a random term in the stochastic differential equation for fractional Brownian motion. The projection of new cases into the future is treated via the quadratic mean deviation in the official daily data of new cases from the beginning of the pandemic up to the present. Moreover, rescaled range analysis (R/S) is employed to determine the Hurst index for the time series of new cases, and some statistical tests are performed with the aim of determining the shape of the probability density of new cases in the future. Full article
Figure 1: Dynamics of new cases N(t) in Brazil. The zigzag behavior of the results reflects the stochastic term in Equation (1). The time series of the model Equation (1) is plotted for a value of the Hurst parameter H > 1/2, namely H = 0.55 (above), and for H < 1/2, namely H = 0.30 (below). The black squares are the daily new cases reported by the Ministry of Health and the red line is the fit of the model Equation (1).
Figure 2: Behavior of the half-width of the distribution, σ(t), as a function of t. The half-width gives an expectation of new cases on each day t.
Figure 3: Behavior of the kurtosis λ₄(t) as a function of t for different values of the Hurst index: H = 0.5 (black solid line), H = 0.2 (dashed red line), and H = 0.8 (dot-dashed green line), i.e., for a value above and below H = 0.5, which corresponds to standard Brownian motion. The range of negative values gives an estimate of the shape of the distribution, which becomes closest to a Gaussian for λ₄ = 0 at large t since the first reported cases.
Figure 4: Log-log plot used to determine the Hurst index with the rescaled range (R/S) method.
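">
Figure 4's log-log fit is the classical rescaled range (R/S) estimate of the Hurst index. A minimal sketch of that estimator is given below on synthetic data; the window sizes and the toy series are placeholders, not the official case data used in the paper.

```python
import numpy as np

def hurst_rs(x, window_sizes=None):
    """Estimate the Hurst exponent of a 1-D series by rescaled range (R/S)
    analysis: H is the slope of log(R/S) versus log(window size)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if window_sizes is None:
        window_sizes = np.unique(np.logspace(1, np.log10(n // 2), 20).astype(int))
    log_w, log_rs = [], []
    for w in window_sizes:
        rs_vals = []
        for start in range(0, n - w + 1, w):      # non-overlapping blocks
            seg = x[start:start + w]
            z = np.cumsum(seg - seg.mean())       # cumulative deviate series
            r = z.max() - z.min()                 # range
            s = seg.std(ddof=1)                   # standard deviation
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_w.append(np.log(w))
            log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_w, log_rs, 1)
    return slope

rng = np.random.default_rng(1)
toy_daily_increments = np.abs(rng.normal(0, 1, 1000))   # toy series, not real data
print(hurst_rs(toy_daily_increments))                   # close to 0.5 for i.i.d. noise
```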
43 pages, 3606 KiB  
Article
Interpolating Strange Attractors via Fractional Brownian Bridges
by Sebastian Raubitzek, Thomas Neubauer, Jan Friedrich and Andreas Rauber
Entropy 2022, 24(5), 718; https://doi.org/10.3390/e24050718 - 17 May 2022
Cited by 4 | Viewed by 2344
Abstract
We present a novel method for interpolating univariate time series data. The proposed method combines multi-point fractional Brownian bridges, a genetic algorithm, and Takens’ theorem for reconstructing a phase space from univariate time series data. The basic idea is to first generate a population of different stochastically interpolated time series and, second, to use a genetic algorithm to find the pieces in the population which generate the smoothest reconstructed phase space trajectory. A smooth trajectory curve is hereby taken to have a low variance of second derivatives along the curve. For simplicity, we refer to the developed method as PhaSpaSto-interpolation, which is an abbreviation for phase-space-trajectory-smoothing stochastic interpolation. The proposed approach is tested and validated with a univariate time series of the Lorenz system and five non-model data sets, and compared to a cubic spline interpolation and a linear interpolation. We find that the criterion for smoothness guarantees low errors on known model and non-model data. Finally, we interpolate the discussed non-model data sets, and show the corresponding improved phase space portraits. The proposed method is useful for interpolating low-sampled time series data sets for, e.g., machine learning, regression analysis, or time series prediction approaches. Further, the results suggest that the variance of second derivatives along a given phase space trajectory is a valuable tool for phase space analysis of non-model time series data, and we expect it to be useful for future research. Full article
Figure 1: Depiction of the employed scheme.
Figure 2: Errors from Table 1 depending on the different numbers of interpolation points.
Figure 3: Reconstructed attractors for the interpolated Lorenz system. (a) Non-interpolated original data (i.e., the data the errors are calculated with); (b) average interpolation of the whole population; (c) linear interpolation; (d) spline interpolation; (e) the interpolation of the population with the lowest RMSE; (f) interpolation improved by the presented genetic algorithm approach.
Figure 4: Original vs. interpolated time series data. (a) Non-interpolated original data and population average; (b) genetic-algorithm-improved interpolation; (c) the interpolation of the population with the lowest RMSE; (d) population average vs. genetic-algorithm-improved interpolation; (e) linear interpolation vs. genetic-algorithm-improved interpolation; (f) spline interpolation vs. genetic-algorithm-improved interpolation.
Figures 5–11: Interpolated data and reconstructed attractors for the NYC measles outbreaks, car sales in Quebec, Perrin Freres champagne sales, monthly international airline passengers, monthly mean temperature in Nottingham castle, shampoo sales, and annual maize yields in Austria data sets, respectively. In each case: (a) original and interpolated time series; (b) phase space reconstruction of the original data; (c) phase space reconstruction of the average population data; (d) phase space reconstruction of the genetic-algorithm-improved data.
Figures A1–A7: Reconstructed phase space trajectories for different time delays (AMI, ACF, and τ = 1) for the monthly international airline passengers, monthly mean temperature in Nottingham castle, Perrin Freres champagne sales, car sales in Quebec, measles cases in NYC, annual maize yields in Austria, and shampoo sales data sets, respectively.
Figure A8: Evolution of errors depending on the number of interpolation points for the non-model data validation: (a) measles cases in NYC (Table 2); (b) car sales in Quebec (Table 3); (c) Perrin Freres champagne sales (Table 4); (d) monthly international airline passengers (Table 5); (e) monthly mean temperature in Nottingham castle (Table 6); (f) shampoo sales (Table 7); (g) annual maize yields in Austria (Table 8).
Figures A9–A22: Interpolated validation data and reconstructed validation attractors for the measles cases in NYC (25 interpolation points), car sales in Quebec (one point), Perrin Freres champagne sales (seven points), monthly international airline passengers (three points), monthly mean temperature in Nottingham castle (one point), shampoo sales (one point), and annual maize yields in Austria (one point) data sets. Panels (a–e) show the average population, linear, spline, best random, and genetic-algorithm-improved validation interpolations.
Figure A23: Reconstructed attractors for the interpolated Lorenz system for different loss functions: (a) nearest neighbour distance; (b) first derivative mean; (c) first derivative variance; (d) second derivative mean.
Figure A24: Loss surface for the Lorenz attractor; (a,b) show the same surface from different angles. This is the employed loss function (Section 3.3.1) depending on a varying embedding dimension and time delay. The orange dot marks the correct embedding dimension and time delay.
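">
The fitness the abstract describes, the variance of second derivatives along the delay-embedded trajectory, is easy to sketch: delay-embed the candidate series (Takens) and take the variance of its discrete second differences. The embedding parameters and toy data below are assumptions for illustration, not the paper's settings or genetic-algorithm code.

```python
import numpy as np

def delay_embed(x, dim=3, tau=1):
    """Takens delay embedding of a 1-D series into `dim` dimensions."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def trajectory_roughness(x, dim=3, tau=1):
    """Smoothness criterion: variance of the second differences (a discrete
    second derivative) along the reconstructed phase-space trajectory.
    Lower values correspond to smoother trajectories."""
    traj = delay_embed(np.asarray(x, dtype=float), dim, tau)
    second_diff = np.diff(traj, n=2, axis=0)
    return second_diff.var()

# Ranking two candidate interpolations of the same sparse series (toy data):
t = np.linspace(0, 8 * np.pi, 400)
smooth_candidate = np.sin(t)
noisy_candidate = np.sin(t) + 0.2 * np.random.default_rng(2).normal(size=t.size)
print(trajectory_roughness(smooth_candidate) < trajectory_roughness(noisy_candidate))  # True
```

A genetic algorithm would use such a roughness score to rank and recombine stochastically interpolated candidates, keeping the pieces that yield the smoothest reconstructed trajectory.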
28 pages, 4958 KiB  
Article
A Thermodynamically Consistent, Microscopically-Based, Model of the Rheology of Aggregating Particles Suspensions
by Soham Jariwala, Norman J. Wagner and Antony N. Beris
Entropy 2022, 24(5), 717; https://doi.org/10.3390/e24050717 - 17 May 2022
Cited by 5 | Viewed by 2697
Abstract
In this work, we outline the development of a thermodynamically consistent microscopic model for a suspension of aggregating particles under arbitrary, inertia-less deformation. As a proof of concept, we show how a simplified population-balance-based description of the aggregating particle microstructure can be combined with the single-generator bracket description of nonequilibrium thermodynamics, leading naturally to the formulation of the model equations. Notable elements of the model are a lognormal distribution for the aggregate size population, a population-balance-based model of the aggregation and breakup processes, and a conformation tensor-based viscoelastic description of the elastic network of the particle aggregates. The resulting example model is evaluated in steady and transient shear and elongational flows and shown to offer predictions that are consistent with the observed rheological behavior of typical systems of aggregating particles. Additionally, an expression for the total entropy production is provided that allows one to judge the thermodynamic consistency and to evaluate the importance of the various dissipative phenomena involved in given flow processes. Full article
(This article belongs to the Special Issue Modeling and Simulation of Complex Fluid Flows)
Show Figures

Figure 1

Figure 1
<p>Schematic indicating the spans of length scales involved in fractal agglomerates. Such structures are commonly observed in dispersions such as carbon black in mineral oil [<a href="#B1-entropy-24-00717" class="html-bibr">1</a>,<a href="#B27-entropy-24-00717" class="html-bibr">27</a>] and fumed silica in paraffin oil [<a href="#B28-entropy-24-00717" class="html-bibr">28</a>,<a href="#B29-entropy-24-00717" class="html-bibr">29</a>].</p>
Full article ">Figure 2
<p>Model predictions for steady-state simple shear flow: (<b>a</b>) shear stress: total (solid line), elastic component (dashed line) and viscous component (dotted line); (<b>b</b>) first normal stress difference. The model parameters are: <math display="inline"><semantics> <mrow> <msub> <mi>λ</mi> <mrow> <mi>b</mi> <mi>a</mi> </mrow> </msub> <mo>=</mo> <mn>100</mn> <mo>,</mo> <mtext> </mtext> <msub> <mi>λ</mi> <mrow> <mi>R</mi> <mi>a</mi> </mrow> </msub> <mo>=</mo> <mn>0.5</mn> <mo>,</mo> <mtext> </mtext> <mi>k</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 3
<p>Model predictions for steady-state simple shear flow: (<b>a</b>) dimensionless viscosity (solid line), elastic modulus (dashed line) and agglomerate volume parameter (dotted line); (<b>b</b>) zeroth moment (solid line) and second moment (dot-dashed line) of the aggregate size distribution. The model parameters are: <math display="inline"><semantics> <mrow> <msub> <mi>λ</mi> <mrow> <mi>b</mi> <mi>a</mi> </mrow> </msub> <mo>=</mo> <mn>100</mn> <mo>,</mo> <mtext> </mtext> <msub> <mi>λ</mi> <mrow> <mi>R</mi> <mi>a</mi> </mrow> </msub> <mo>=</mo> <mn>0.5</mn> <mo>,</mo> <mtext> </mtext> <mi>k</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 4
<p>Model predictions for steady-state simple shear flow: (<b>a</b>) steady shear and (<b>b</b>) normal stress differences for different exponent values in Equation (36) (model parameters <math display="inline"><semantics> <mrow> <msub> <mi>λ</mi> <mrow> <mi>b</mi> <mi>a</mi> </mrow> </msub> <mo>=</mo> <mn>100</mn> <mo>,</mo> <mtext> </mtext> <msub> <mi>λ</mi> <mrow> <mi>R</mi> <mi>a</mi> </mrow> </msub> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math>).</p>
Full article ">Figure 5
<p>Model predictions for steady-state simple shear flow: (<b>a</b>) shear stress and (<b>b</b>) first normal stress difference for different values of the model parameter <math display="inline"><semantics> <mrow> <msub> <mi>λ</mi> <mrow> <mi>b</mi> <mi>a</mi> </mrow> </msub> </mrow> </semantics></math> for <math display="inline"><semantics> <mrow> <msub> <mi>λ</mi> <mrow> <mi>R</mi> <mi>a</mi> </mrow> </msub> <mo>=</mo> <mn>0.5</mn> <mo>,</mo> <mtext> </mtext> <mi>k</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 6
<p>Model predictions for steady-state simple shear flow: (<b>a</b>) shear stress and (<b>b</b>) first normal stress difference for different values of the model parameter <math display="inline"><semantics> <mrow> <msub> <mi>λ</mi> <mrow> <mi>R</mi> <mi>a</mi> </mrow> </msub> </mrow> </semantics></math> for <math display="inline"><semantics> <mrow> <msub> <mi>λ</mi> <mrow> <mi>b</mi> <mi>a</mi> </mrow> </msub> <mo>=</mo> <mn>100</mn> <mo>,</mo> <mtext> </mtext> <mi>k</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 7
<p>Model predictions for steady shear start-up transients from a quiescent condition subjected to different shear rates. The different curves correspond to increasing <math display="inline"><semantics> <mrow> <mi>W</mi> <mi>i</mi> </mrow> </semantics></math> values, increasing from the bottom to top curves, with values as indicated in the insert legend: (<b>a</b>) shear stresses, scaled by their final steady-state values; (<b>b</b>) total shear stress along with viscoelastic and viscous components for indicated <math display="inline"><semantics> <mrow> <mi>W</mi> <mi>i</mi> </mrow> </semantics></math> values; (<b>c</b>) zeroth and (<b>d</b>) second moments of the agglomerate size distribution. The model parameters are: <math display="inline"><semantics> <mrow> <msub> <mi>λ</mi> <mrow> <mi>b</mi> <mi>a</mi> </mrow> </msub> <mo>=</mo> <mn>100</mn> <mo>,</mo> <mtext> </mtext> <msub> <mi>λ</mi> <mrow> <mi>R</mi> <mi>a</mi> </mrow> </msub> <mo>=</mo> <mn>0.5</mn> <mo>,</mo> <mtext> </mtext> <mi>k</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 8
<p>Model predictions of relaxation upon cessation of steady shear flow from an initial deformation rate indicated by Weissenberg number, with the curves corresponding to decreasing <math display="inline"><semantics> <mrow> <mi>W</mi> <mi>i</mi> </mrow> </semantics></math> values from bottom to top, with values as indicated in the insert legend. The fluid is initially subjected to steady shear from a quiescent state and allowed to attain steady state. Once that is attained, the deformation rate is set to zero. The evolution of the (<b>a</b>) shear stress and (<b>b</b>) elastic modulus after flow cessation is reported as a function of the time since the time <math display="inline"><semantics> <mrow> <mi>t</mi> <mo>=</mo> <msub> <mi>t</mi> <mi>m</mi> </msub> </mrow> </semantics></math> when the flow is stopped. The model parameters are: <math display="inline"><semantics> <mrow> <msub> <mi>λ</mi> <mrow> <mi>b</mi> <mi>a</mi> </mrow> </msub> <mo>=</mo> <mn>100</mn> <mo>,</mo> <mtext> </mtext> <msub> <mi>λ</mi> <mrow> <mi>R</mi> <mi>a</mi> </mrow> </msub> <mo>=</mo> <mn>0.5</mn> <mo>,</mo> <mtext> </mtext> <mi>k</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 9
<p>Model predictions for triangular shear transients. Hysteresis loops for shear stress (<b>a</b>,<b>c</b>) and first normal stress difference (<b>b</b>,<b>d</b>) for different values of <math display="inline"><semantics> <mrow> <msub> <mi>t</mi> <mi>m</mi> </msub> </mrow> </semantics></math> plotted for <math display="inline"><semantics> <mrow> <mi>W</mi> <msub> <mi>i</mi> <mrow> <mi>max</mi> </mrow> </msub> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math> in (<b>a</b>,<b>b</b>) and <math display="inline"><semantics> <mrow> <mi>W</mi> <msub> <mi>i</mi> <mrow> <mi>max</mi> </mrow> </msub> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math> in (<b>b</b>,<b>d</b>). The model parameters are: <math display="inline"><semantics> <mrow> <msub> <mi>λ</mi> <mrow> <mi>b</mi> <mi>a</mi> </mrow> </msub> <mo>=</mo> <mn>5</mn> <mo>,</mo> <mtext> </mtext> <msub> <mi>λ</mi> <mrow> <mi>R</mi> <mi>a</mi> </mrow> </msub> <mo>=</mo> <mn>0.5</mn> <mo>,</mo> <mtext> </mtext> <mi>k</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 10
<p>(<b>a</b>) Stress response of the model for parameters <math display="inline"><semantics> <mrow> <msub> <mi>λ</mi> <mrow> <mi>b</mi> <mi>a</mi> </mrow> </msub> <mo>=</mo> <mn>100</mn> <mo>,</mo> <mtext> </mtext> <msub> <mi>λ</mi> <mrow> <mi>R</mi> <mi>a</mi> </mrow> </msub> <mo>=</mo> <mn>5</mn> <mo>,</mo> <mtext> </mtext> <mi>k</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> for an intermittent shear rate step test, where a shear rate corresponding to <math display="inline"><semantics> <mrow> <mi>W</mi> <mi>i</mi> <mo>=</mo> <mn>7.5</mn> </mrow> </semantics></math> is applied on a fluid at equilibrium, followed by a step down in shear rate to <math display="inline"><semantics> <mrow> <mi>W</mi> <mi>i</mi> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math> at <math display="inline"><semantics> <mrow> <mi>t</mi> <mo>/</mo> <msub> <mi>τ</mi> <mi>a</mi> </msub> <mo>=</mo> <mn>100</mn> </mrow> </semantics></math>, and finally a step up in shear rate to <math display="inline"><semantics> <mrow> <mi>W</mi> <mi>i</mi> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math> at <math display="inline"><semantics> <mrow> <mi>t</mi> <mo>/</mo> <msub> <mi>τ</mi> <mi>a</mi> </msub> <mo>=</mo> <mn>200</mn> </mrow> </semantics></math>. The applied deformation rate is depicted by the dotted orange line. The component-wise contribution to the total shear stress (scaled by its steady-state value) is shown in (<b>b</b>). The dashed blue line depicts the contribution from the viscoelastic term and the dotted red line is the viscous contribution, showing the inelastic thixotropy independent of the viscoelastic term.</p>
Full article ">Figure 11
<p>Model predictions for start-up transients for uniaxial elongation for different Weissenberg numbers, <math display="inline"><semantics> <mrow> <mi>W</mi> <msub> <mi>i</mi> <mi>ε</mi> </msub> <mo>=</mo> <msubsup> <mi>τ</mi> <mi>R</mi> <mrow> <mi>e</mi> <mi>q</mi> </mrow> </msubsup> <mover accent="true"> <mi>ε</mi> <mo>˙</mo> </mover> </mrow> </semantics></math>: (<b>a</b>) first normal stress difference; (<b>b</b>) zeroth moment. The model parameters are: <math display="inline"><semantics> <mrow> <msub> <mi>λ</mi> <mrow> <mi>b</mi> <mi>a</mi> </mrow> </msub> <mo>=</mo> <mn>2</mn> <mo>,</mo> <mtext> </mtext> <msub> <mi>λ</mi> <mrow> <mi>R</mi> <mi>a</mi> </mrow> </msub> <mo>=</mo> <mn>0.3</mn> <mo>,</mo> <mtext> </mtext> <mi>k</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 12
<p>(<b>a</b>) Model predictions for the entropy production corresponding to the total relaxation terms, <math display="inline"><semantics> <mrow> <mi>T</mi> <msub> <mi>σ</mi> <mrow> <mi>s</mi> <mo>,</mo> <msub> <mi>t</mi> <mi>i</mi> </msub> </mrow> </msub> <mo>,</mo> <mi>i</mi> <mo>=</mo> <mn>0</mn> <mo>,</mo> <mn>2</mn> </mrow> </semantics></math>, in Equation (43) plotted for simple shear start-up from the quiescent state for various Weissenberg numbers. The different curves correspond to increasing <math display="inline"><semantics> <mrow> <mi>W</mi> <mi>i</mi> </mrow> </semantics></math> values, increasing from the bottom to top curves, with values as indicated in the insert legend. The contributions from the fifth and sixth terms in the entropy production (mixing terms) are explicitly plotted (<b>b</b>). It is clear that these contributions are not always non-negative; however, their additive contribution to the total relaxation terms, <math display="inline"><semantics> <mrow> <mi>T</mi> <msub> <mi>σ</mi> <mrow> <mi>s</mi> <mo>,</mo> <msub> <mi>t</mi> <mi>i</mi> </msub> </mrow> </msub> <mo>,</mo> <mi>i</mi> <mo>=</mo> <mn>0</mn> <mo>,</mo> <mn>2</mn> </mrow> </semantics></math>, in Equation (43) is always positive. The model parameters are <math display="inline"><semantics> <mrow> <msub> <mi>λ</mi> <mrow> <mi>b</mi> <mi>a</mi> </mrow> </msub> <mo>=</mo> <mn>100</mn> <mo>,</mo> <mtext> </mtext> <msub> <mi>λ</mi> <mrow> <mi>R</mi> <mi>a</mi> </mrow> </msub> <mo>=</mo> <mn>0.5</mn> <mo>,</mo> <mtext> </mtext> <mi>k</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 13
Model predictions for the individual contributions to the entropy production, as indicated in Equation (41), for the simple-shear start-up test at Wi = 5. The dissipation is overwhelmingly dominated by the viscous effects, whereas the contribution from the structural moments is smaller by several orders of magnitude. Negative values are indicated with a dashed line. Model parameters: λ_ba = 100, λ_Ra = 0.5, k = 0.
Full article ">
10 pages, 273 KiB  
Article
Derivation of Two-Fluid Model Based on Onsager Principle
by Jiajia Zhou and Masao Doi
Entropy 2022, 24(5), 716; https://doi.org/10.3390/e24050716 - 17 May 2022
Cited by 2 | Viewed by 2129
Abstract
Using the Onsager variational principle, we study the dynamic coupling between the stress and the composition in a polymer solution. In the original derivation of the two-fluid model of Doi and Onuki, the polymer stress was introduced a priori; therefore, a constitutive equation is required to close the equations. Based on our previous study of viscoelastic fluids with homogeneous composition, we start with a dumbbell model for the polymer, and derive all dynamic equations using the Onsager variational principle. Full article
(This article belongs to the Special Issue Modeling and Simulation of Complex Fluid Flows)
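For context on the method named in the abstract above: the Onsager variational principle is commonly stated in Rayleighian form, as in Doi's earlier work. The generic statement below is a textbook-level sketch, not the specific dissipation function or free energy used in this paper.

```latex
% Generic Rayleighian form of the Onsager variational principle:
% \Phi is the dissipation function (quadratic in the rates \dot{x})
% and \dot{A} is the rate of change of the free energy A(x).
\mathcal{R}(\dot{x}) = \Phi(\dot{x},\dot{x}) + \dot{A}(x;\dot{x}),
\qquad
\frac{\partial \mathcal{R}}{\partial \dot{x}} = 0
\;\Rightarrow\; \text{evolution equations for } x(t).
```

Minimizing the Rayleighian with respect to the rates balances dissipative and thermodynamic driving forces, which is the route the abstract describes for obtaining the two-fluid equations without introducing the polymer stress a priori.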
13 pages, 504 KiB  
Article
Measured Composite Collision Models: Quantum Trajectory Purities and Channel Divisibility
by Konstantin Beyer, Kimmo Luoma, Tim Lenz and Walter T. Strunz
Entropy 2022, 24(5), 715; https://doi.org/10.3390/e24050715 - 17 May 2022
Viewed by 2080
Abstract
We investigate a composite quantum collision model with measurements on the memory part, which effectively probe the system. The framework allows us to adjust the measurement strength, thereby tuning the dynamical map of the system. For a two-qubit setup with a symmetric and informationally complete measurement on the memory, we study the divisibility of the resulting dynamics as a function of the measurement strength. The measurements give rise to quantum trajectories of the system, and we show that the average asymptotic purity depends on the specific form of the measurement. With the help of numerical simulations, we demonstrate that the different performance of the measurements is generic and holds for almost all interaction gates between the system and the memory in the composite collision model. The discrete model is then extended to a time-continuous limit. Full article
(This article belongs to the Special Issue Quantum Collision Models)
Figure 1
The composite collision model. The system S interacts repeatedly with the same memory system M. The memory M subsequently interacts with a fresh ancilla in each step. If only the reduced dynamics of S and M is of interest, the ancillas can be traced out after their collision. However, here we consider a model where the ancillas are measured after the interaction, which allows us to obtain some information about the current state of S and M. This indirect measurement can be seen as a dilation of a direct measurement on M, and therefore the model can be given in an equivalent but more compact form without an explicit reference to the ancillas. We will use the latter description in this article. Discarding the outcome, the measurements constitute a channel Λ.
Full article ">Figure 2
(a) Fraction of unitaries W that lead to indivisible dynamics, as a function of the measurement strength g. For weak measurements, almost all W lead to indivisibility; for larger g the ratio decreases. Even in the limit g → 1, however, where the channel Λ is entanglement breaking in each step, more than half of the unitaries W still lead to indivisible dynamics. (b) Average of the indivisibility quantifier N (see Equation (9)). The average value of N decreases quickly with increasing measurement strength g; thus, even though many W lead to indivisible dynamics also for large g, these are hardly distinguishable from P-divisible dynamics by the measure N.
Full article ">Figure 3
Average purity of the steady-state ensembles, averaged over random unitary interaction gates W (in each run the gate is fixed for all steps of the collision model). As expected, the purity increases with the measurement strength; furthermore, the better performance of measurement B is a generic feature.
Full article ">Figure 4
(a) Fraction V of Hamiltonians in the Gaussian unitary ensemble (σ = 1) that lead to indivisible dynamics. For small measurement strength (weak depolarisation), almost all Hamiltonians lead to indivisible dynamics; for stronger measurements, more and more Hamiltonians generate divisible dynamics. (b) The average divisibility (see Equation (10)) decreases steeply with increasing measurement strength γ. Already at γ = 2, where more than 50% of the Hamiltonians still lead to indivisible dynamics, the average divisibility has dropped below N̄ = 0.01, so most of the dynamics are hard to distinguish from P-divisible ones.
Full article ">Figure 5
Average ensemble purity P̄ for the three different measurements (see Equations (19), (21) and (22)) as a function of the measurement strength γ. The average is taken over interaction Hamiltonians from the Gaussian unitary ensemble (σ = 1). As in the discrete model, measurement B leads to the purest ensembles, followed by A and C. All three measurement scenarios lead to pure ensembles in the strong-measurement limit γ → ∞.
Full article ">
14 pages, 285 KiB  
Article
Privacy: An Axiomatic Approach
by Alexander Ziller, Tamara T. Mueller, Rickmer Braren, Daniel Rueckert and Georgios Kaissis
Entropy 2022, 24(5), 714; https://doi.org/10.3390/e24050714 - 16 May 2022
Cited by 1 | Viewed by 2151
Abstract
The increasing prevalence of large-scale data collection in modern society represents a potential threat to individual privacy. Addressing this threat, for example through privacy-enhancing technologies (PETs), requires a rigorous definition of what exactly is being protected, that is, of privacy itself. In this work, we formulate an axiomatic definition of privacy based on quantifiable and irreducible information flows. Our definition synthesizes prior work from the domain of social science with a contemporary understanding of PETs such as differential privacy (DP). Our work highlights the fact that the inevitable difficulties of protecting privacy in practice are fundamentally information-theoretic. Moreover, it enables quantitative reasoning about PETs based on what they are protecting, thus fostering objective policy discourse about their societal implementation. Full article
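As a concrete example of the kind of privacy-enhancing technology the abstract refers to, the sketch below shows the standard Laplace mechanism of differential privacy. It is a generic illustration, not the axiomatic formalism developed in the paper.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with Laplace noise of scale sensitivity/epsilon,
    the standard calibration for epsilon-differential privacy of a query
    with the given L1 sensitivity."""
    rng = np.random.default_rng() if rng is None else rng
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: a counting query (L1 sensitivity 1) released at epsilon = 0.5.
print(laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5))
```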