Search Results (264)

Search Parameters:
Keywords = sparse coding

23 pages, 13392 KiB  
Article
Incorporation of Histogram Intersection and Semantic Information into Non-Negative Local Laplacian Sparse Coding for Image Classification
by Ying Shi, Yuan Wan, Xinjian Wang and Huanhuan Li
Mathematics 2025, 13(2), 219; https://doi.org/10.3390/math13020219 - 10 Jan 2025
Viewed by 378
Abstract
Traditional sparse coding has proven to be an effective method for image feature representation in recent years, yielding promising results in image classification. However, it faces several challenges, such as sensitivity to feature variations, code instability, and inadequate distance measures. Additionally, image representation and classification often operate independently, potentially resulting in the loss of semantic relationships. To address these issues, a new method is proposed, called Histogram intersection and Semantic information-based Non-negativity Local Laplacian Sparse Coding (HS-NLLSC) for image classification. This method integrates Non-negativity and Locality into Laplacian Sparse Coding (NLLSC) optimisation, enhancing coding stability and ensuring that similar features are encoded into similar codewords. In addition, histogram intersection is introduced to redefine the distance between feature vectors and codebooks, effectively preserving their similarity. By comprehensively considering both the processes of image representation and classification, more semantic information is retained, thereby leading to a more effective image representation. Finally, a multi-class linear Support Vector Machine (SVM) is employed for image classification. Experimental results on four standard and three maritime image datasets demonstrate superior performance compared to the previous six algorithms. Specifically, the classification accuracy of our approach improved by 5% to 19% compared to the previous six methods. This research provides valuable insights for various stakeholders in selecting the most suitable method for specific circumstances. Full article
(This article belongs to the Special Issue Optimization Models and Algorithms in Data Science)
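As a quick illustration of the histogram-intersection similarity the abstract describes (redefining the distance between feature vectors and codebook atoms), here is a minimal NumPy sketch; the array shapes and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def histogram_intersection(features, codebook):
    """Pairwise histogram-intersection similarity between N feature
    histograms (N x d) and K codebook atoms (K x d)."""
    # sum of element-wise minima; larger values mean more similar histograms
    return np.array([[np.minimum(f, c).sum() for c in codebook] for f in features])

# toy usage: four SIFT-like histograms against a three-atom codebook
rng = np.random.default_rng(0)
X = rng.random((4, 128))
D = rng.random((3, 128))
S = histogram_intersection(X, D)   # shape (4, 3)
print(S.argmax(axis=1))            # most similar codeword per feature
```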
Figures:
Figure 1: The framework of the HS-NLLSC algorithm.
Figure 2: Some pictures of the Caltech-101 dataset.
Figure 3: Some pictures of the MID.
Figure 4: The obtained dictionaries with non-negativity, locality, bandpass characteristics, and directionality for the three methods.
Figure 5: Visualisation of code V learned from (a) SC, (b) EH-NLSC, and (c) HS-NLLSC in Scene-15.
Figure 6: Classification accuracy (average value ± standard deviation) for seven different methods on four standard image datasets.
Figure 7: Classification accuracy (average value ± standard deviation) for the three maritime datasets.
Figure 8: The impact of λ and β on the classification results.
Figure 9: Image representations of four different methods in Caltech-256.
13 pages, 464 KiB  
Review
Entropy of Neuronal Spike Patterns
by Artur Luczak
Entropy 2024, 26(11), 967; https://doi.org/10.3390/e26110967 - 11 Nov 2024
Viewed by 914
Abstract
Neuronal spike patterns are the fundamental units of neural communication in the brain, which is still not fully understood. Entropy measures offer a quantitative framework to assess the variability and information content of these spike patterns. By quantifying the uncertainty and informational content of neuronal patterns, entropy measures provide insights into neural coding strategies, synaptic plasticity, network dynamics, and cognitive processes. Here, we review basic entropy metrics and then provide examples of recent advancements in using entropy as a tool to improve our understanding of neuronal processing, focusing especially on studies of critical dynamics in neural networks and the relation of entropy to predictive coding and cortical communication. We highlight the necessity of expanding entropy measures from single neurons to encompass multi-neuronal activity patterns, as cortical circuits communicate through coordinated spatiotemporal activity patterns, called neuronal packets. We discuss how the sequential and partially stereotypical nature of neuronal packets influences the entropy of cortical communication. Stereotypy reduces entropy by enhancing reliability and predictability in neural signaling, while variability within packets increases entropy, allowing for greater information capacity. This balance between stereotypy and variability supports both robustness and flexibility in cortical information processing. We also review challenges in applying entropy to analyze such spatiotemporal neuronal spike patterns, notably the “curse of dimensionality” in estimating entropy for high-dimensional neuronal data. Finally, we discuss strategies to overcome these challenges, including dimensionality reduction techniques, advanced entropy estimators, sparse coding schemes, and the integration of machine learning approaches. Thus, this work summarizes the most recent developments on how entropy measures contribute to our understanding of principles underlying neural coding. Full article
(This article belongs to the Section Multidisciplinary Applications)
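The estimation problem the review discusses can be illustrated with a plug-in Shannon-entropy estimator over binary spike "words". This is a generic sketch on toy data, not code from the article, and the naive estimator inherits exactly the bias and dimensionality issues the abstract mentions.

```python
import numpy as np
from collections import Counter

def spike_word_entropy(spikes, word_len=8):
    """Plug-in (maximum-likelihood) Shannon entropy, in bits, of binary
    spike 'words' formed from consecutive time bins of one neuron.
    Biased downward for short recordings and high-dimensional words."""
    n_words = len(spikes) // word_len
    words = spikes[: n_words * word_len].reshape(n_words, word_len)
    counts = Counter(map(tuple, words))
    p = np.array(list(counts.values()), dtype=float) / n_words
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(1)
train = (rng.random(10_000) < 0.05).astype(int)   # sparse, Poisson-like toy spiking
print(spike_word_entropy(train))                  # bits per 8-bin word
```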
Figures:
Figure 1: Cartoon illustration of neuronal activity packets. (A) Sequential activity patterns (packets) during deep sleep, where activity occurs sporadically and neurons fire in a stereotyped sequential pattern. (B) In the awake state, packets follow one another without long periods of silence, with temporal relationships between neurons similar to those in sleep. (C) Geometrical interpretation of consistency and variability in neuronal packets: the space of theoretically possible packets, the subspace observed in the brain, and the smaller subspaces occupied by packets evoked by different stimuli (figure modified from [18]).
18 pages, 6989 KiB  
Article
A Deep Unfolding Network for Multispectral and Hyperspectral Image Fusion
by Bihui Zhang, Xiangyong Cao and Deyu Meng
Remote Sens. 2024, 16(21), 3979; https://doi.org/10.3390/rs16213979 - 26 Oct 2024
Viewed by 889
Abstract
Multispectral and hyperspectral image fusion (MS/HS fusion) aims to generate a high-resolution hyperspectral (HRHS) image by fusing a high-resolution multispectral (HRMS) image with a low-resolution hyperspectral (LRHS) image. The deep unfolding-based MS/HS fusion method is a representative deep learning paradigm due to its excellent performance and sufficient interpretability. However, existing deep unfolding-based MS/HS fusion methods rely only on a fixed linear degradation model, which focuses on modeling the relationships between HRHS and HRMS, as well as HRHS and LRHS. In this paper, we break free from this observation model framework and propose a new observation model. Firstly, the proposed observation model is built on the convolutional sparse coding (CSC) technique, and a proximal gradient algorithm is designed to solve it. Secondly, we unfold the iterative algorithm into a deep network, dubbed MHF-CSCNet, where the proximal operators are learned using convolutional neural networks. Finally, all trainable parameters can be learned automatically end-to-end from the training pairs. Experimental evaluations conducted on various benchmark datasets demonstrate the superiority of our method both quantitatively and qualitatively compared to other state-of-the-art methods. Full article
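The proximal gradient iteration that such unfolding networks unroll can be sketched for a plain (non-convolutional) sparse coding problem. The ISTA toy example below is a stand-in under simplifying assumptions, not the paper's CSC solver or MHF-CSCNet itself.

```python
import numpy as np

def soft_threshold(x, t):
    # proximal operator of the l1 norm
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(D, y, lam=0.05, n_iter=500):
    """Proximal-gradient (ISTA) solver for min_x 0.5*||y - D x||^2 + lam*||x||_1,
    the kind of iteration a deep unfolding network replaces with learned layers."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)           # gradient step on the data term
        x = soft_threshold(x - grad / L, lam / L)  # proximal step on the l1 term
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
x_true = np.zeros(128)
x_true[[3, 40, 99]] = [1.0, -2.0, 0.5]
x_hat = ista(D, D @ x_true)
print(np.flatnonzero(np.abs(x_hat) > 0.1))  # should approximately recover {3, 40, 99}
```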
Figures:
Figure 1: (a) The overall architecture of the proposed unfolding network; (b) the structures of U-Net and V-Net; (c) the structure of C-Net.
Figure 2: The 11 test images from the CAVE dataset (balloons, CD, chart and stuffed toy, clay, fake and real beers, fake and real lemon slices, fake and real tomatoes, feathers, flowers, hairs, jelly beans).
Figure 3: Fused results on the CAVE dataset (200 × 200): selected spectral vectors, simulated RGB images, the ground-truth HRHS image, and the results of 12 comparison methods with zoomed detail regions.
Figure 4: Fused results on the Chikusei dataset (680 × 680): selected spectral vectors, simulated RGB images, the ground-truth HRHS image, and the results of 12 comparison methods with zoomed detail regions.
Figure 5: Simulated RGB image of a CAVE test sample and the results of 12 comparison methods, with zoomed detail regions.
Figure 6: Simulated RGB image of a Harvard test sample, the ground-truth HRHS image, and the results of 12 comparison methods, with zoomed detail regions.
Figure 7: Selected spectral vectors for the different fusion methods on the Harvard dataset (512 × 512).
22 pages, 4759 KiB  
Article
An Improved Nonnegative Matrix Factorization Algorithm Combined with K-Means for Audio Noise Reduction
by Yan Liu, Haozhen Zhu, Yongtuo Cui, Xiaoyu Yu, Haibin Wu and Aili Wang
Electronics 2024, 13(20), 4132; https://doi.org/10.3390/electronics13204132 - 21 Oct 2024
Viewed by 845
Abstract
Clustering algorithms are simple and efficient and do not require large datasets, making them suitable for noise reduction in audio module mass-production testing. To address the NMF algorithm's tendency to become trapped in locally optimal solutions and its difficulty in extracting feature signals, an improved NMF audio denoising algorithm combined with K-means initialization was designed. Firstly, the Euclidean distance formula of K-means was improved to extract audio signal features across multiple dimensions. Combined with the K-means initialization strategy, the initial dictionary matrix of the NMF algorithm was optimized to avoid locally optimal solutions and effectively improve the robustness of the algorithm. Secondly, in the sparse coding part of the NMF algorithm, feature extraction expressions were added to address residual noise and the partial loss of spectral content in audio signals during processing. At the same time, the size of the coefficient matrix was limited to reduce computation time and improve the accuracy of feature extraction for high-precision audio signals. Comparative experiments were then conducted using the NOIZEUS and NOISEX-92 datasets, as well as random noise audio signals. The algorithm improved the signal-to-noise ratio by 10–20 dB and reduced harmonic distortion by approximately −10 dB. Finally, a high-precision audio acquisition unit based on an FPGA was designed, and practical applications have shown that it effectively improves the signal-to-noise ratio of audio signals and reduces harmonic distortion. Full article
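A rough sketch of the core idea, seeding an NMF dictionary from K-means centroids before factorizing a magnitude spectrogram, using scikit-learn. The rank, spectrogram shape, and initialization details are illustrative assumptions and do not reproduce the paper's improved distance formula.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF

def kmeans_initialised_nmf(V, rank=8, random_state=0):
    """Sketch of NMF on a magnitude spectrogram V (freq x frames), with the
    dictionary W seeded from K-means centroids instead of random values."""
    km = KMeans(n_clusters=rank, n_init=10, random_state=random_state).fit(V.T)
    W0 = np.maximum(km.cluster_centers_.T, 1e-6)                 # freq x rank, non-negative
    H0 = np.maximum(np.linalg.lstsq(W0, V, rcond=None)[0], 1e-6)  # rank x frames
    model = NMF(n_components=rank, init="custom", max_iter=500,
                random_state=random_state)
    W = model.fit_transform(V, W=W0, H=H0)
    return W, model.components_                                   # dictionary, activations

rng = np.random.default_rng(0)
V = np.abs(rng.standard_normal((257, 400)))   # stand-in for an STFT magnitude
W, H = kmeans_initialised_nmf(V)
print(W.shape, H.shape)                        # (257, 8) (8, 400)
```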
Figures:
Figure 1: The framework of the improved NMF algorithm.
Figure 2: The pure speech signal.
Figure 3: The noisy speech signal.
Figure 4: The estimated speech signal.
Figure 5: Overall block diagram of audio signal noise reduction.
Figure 6: Optimized distributed structure.
Figure 7: Physical diagram of the hardware circuit.
Figure 8: Audio signal analysis interface.
Figure 9: Processed audio signal converted to a WAV file.
19 pages, 5557 KiB  
Article
Microwave Coincidence Imaging with Phase-Coded Stochastic Radiation Field
by Hang Lin, Hongyan Liu, Yongqiang Cheng, Ke Xu, Kang Liu and Yang Yang
Remote Sens. 2024, 16(20), 3851; https://doi.org/10.3390/rs16203851 - 16 Oct 2024
Viewed by 911
Abstract
Microwave coincidence imaging (MCI) represents a novel forward-looking radar imaging method with high-resolution capabilities. Most MCI methods rely on random frequency modulation to generate stochastic radiation fields, which increases the complexity of radar systems and limits imaging quality under noisy conditions. In this paper, microwave coincidence imaging with phase-coded stochastic radiation fields is proposed, which generates spatio-temporally uncorrelated stochastic radiation fields through phase coding. Firstly, the radiation field characteristics are analyzed, and the coding sequences are designed. Then, pulse compression is applied to obtain a one-dimensional range image. Furthermore, an azimuthal imaging model is built, and a reference matrix is derived in the frequency domain. Finally, sparse Bayesian learning (SBL) and alternating direction method of multipliers (ADMM)-based total variation are implemented to reconstruct targets. The methods achieve better imaging performance at low signal-to-noise ratios (SNRs), as shown by the imaging results and mean square error (MSE) curves. In addition, compared with the SBL and ADMM-based total variation methods based on the direct frequency-domain solution, the proposed method’s computational complexity is reduced, giving it great potential in forward-looking high-resolution scenarios, such as autonomous obstacle avoidance with vehicle-mounted radar and terminal guidance. Full article
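The pulse-compression step mentioned in the abstract can be illustrated with a simple matched filter applied to a binary phase-coded waveform. The code length, scatterer delays, and noise level below are made up for the toy example and are unrelated to the paper's waveform design.

```python
import numpy as np

def pulse_compress(rx, code):
    """Matched-filter pulse compression: correlate the received signal with
    the transmitted phase-coded waveform to obtain a 1-D range profile.
    np.correlate conjugates its second argument, which is what we want."""
    return np.correlate(rx, code, mode="valid")

rng = np.random.default_rng(0)
code = np.exp(1j * np.pi * rng.integers(0, 2, 127))      # illustrative binary phase code
rx = np.zeros(600, dtype=complex)
for delay, amp in [(100, 1.0), (260, 0.6)]:              # two point scatterers
    rx[delay:delay + len(code)] += amp * code
rx += 0.1 * (rng.standard_normal(600) + 1j * rng.standard_normal(600))
profile = np.abs(pulse_compress(rx, code))
print(profile.argsort()[-2:])                             # strongest range cells: near 100 and 260
```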
Graphical abstract.
Figures:
Figure 1: Principle of radar coincidence imaging.
Figure 2: Wavefront undulations at the same moment for bandwidths of 200, 400, 600, and 800 MHz.
Figure 3: Effective-rank change curves for different influencing factors: (a) bandwidth; (b) array spacing and number of codes.
Figure 4: Radiation field distribution of chaotic sequences generating cyclic codes.
Figure 5: Variation in the effective rank for random codes and chaotic sequences at different imaging times (B = 600 MHz and B = 800 MHz).
Figure 6: Aircraft point targets.
Figure 7: SBL imaging results for the FM and AziPM methods at 0 dB and 5 dB.
Figure 8: Azimuthal dimensional slices at 0 dB for the FM and AziPM methods.
Figure 9: MSE curves of the FM and AziPM methods with 16, 48, 64, and 96 modulations.
Figure 10: Imaging results with cyclic and random codes at 0 dB.
Figure 11: MSE curves of the cyclic-code and random-code methods.
Figure 12: Imaging results for different targets at 0 dB and 5 dB.
Figure 13: ADMM total variation imaging results at 0 dB and 5 dB.
Figure 14: MSE curves for different methods.
Figure 15: Comparison of runtimes of different methods.
22 pages, 4062 KiB  
Article
A Distributed Non-Intrusive Load Monitoring Method Using Karhunen–Loeve Feature Extraction and an Improved Deep Dictionary
by Siqi Liu, Zhiyuan Xie and Zhengwei Hu
Electronics 2024, 13(19), 3970; https://doi.org/10.3390/electronics13193970 - 9 Oct 2024
Viewed by 959
Abstract
In recent years, non-intrusive load monitoring (NILM) methods based on sparse coding have shown promising research prospects. This type of method learns a sparse dictionary for each monitored target device and expresses load decomposition as a problem of signal reconstruction using dictionaries and sparse vectors. Existing sparse-coding-based NILM methods have several shortcomings: they cannot be applied to multi-state and time-varying devices, they rely on a single load characteristic, and they recognize similar devices poorly in distributed settings. Based on this analysis, this paper focuses on devices with similar features in households and proposes a distributed non-intrusive load monitoring method using Karhunen–Loeve (KL) feature extraction and an improved deep dictionary. Firstly, Karhunen–Loeve expansion (KLE) is used to perform subspace expansion on the power waveform of the target device, and a new load feature is extracted by combining singular value decomposition (SVD) dimensionality reduction. Afterwards, the states of all the target devices are modeled as super states, and an improved deep dictionary based on the distance separability measure function (DSM-DDL) is learned for each super state. The state transition probability matrix and observation probability matrix of the hidden Markov model (HMM) are introduced as the basis for selecting the dictionary order during load decomposition. The KL feature matrix of power observations and the improved deep dictionary are used to discriminate the current super state based on the minimum reconstruction error criterion. Test results on the UK-DALE dataset show that the KL feature matrix can effectively reduce the load similarity of devices. Combined with DSM-DDL, it provides a certain information-acquisition ability with acceptable computational complexity, effectively improving the load decomposition accuracy of similar devices and quickly and accurately estimating the working status and power demand of household appliances. Full article
(This article belongs to the Special Issue New Advances in Distributed Computing and Its Applications)
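A minimal sketch of Karhunen–Loeve feature extraction via SVD of centred power-waveform windows, in the spirit of the KLE + SVD step described above; the window length, number of retained components, and toy appliance signature are assumptions, not values from the paper.

```python
import numpy as np

def kl_features(windows, n_keep=4):
    """Karhunen-Loeve expansion of power-waveform windows (n_windows x T):
    project onto the leading eigenvectors of the empirical covariance and
    keep the truncated coefficients as a compact load feature."""
    X = windows - windows.mean(axis=0, keepdims=True)
    # SVD of the centred data gives the KL basis without forming the T x T covariance
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:n_keep]                      # leading KL eigenfunctions
    return X @ basis.T, s[:n_keep]           # coefficients, singular values

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
fridge = 100 + 20 * np.sin(2 * np.pi * 3 * t)              # toy appliance signature
windows = fridge + rng.normal(0, 5, size=(50, 200))         # 50 noisy observations
coeffs, sv = kl_features(windows)
print(coeffs.shape)                                          # (50, 4)
```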
Figures:
Figure 1: Flowchart of KLE feature extraction.
Figure 2: Overlapping-region graph of super-state class clusters.
Figure 3: Flowchart of energy disaggregation.
Figure 4: Power curves of three appliances: (a) TV and refrigerator; (b) refrigerator and computer.
Figure 5: Quantitative feature curves of three kinds of equipment: (a) TV and refrigerator; (b) desktop computer and refrigerator–TV set.
Figure 6: KL characteristic matrices of three appliances: (a) TV; (b) PC; (c) refrigerator.
Figure 7: Similarity index comparison.
Figure 8: The relationship between the number of super states and the number of appliances.
Figure 9: Effect of the super-state number on load decomposition time.
Figure 10: Comparison of accuracy between DDL and DSM-DDL.
28 pages, 2389 KiB  
Article
Simulating Weak Attacks in a New Duplication–Divergence Model with Node Loss
by Ruihua Zhang and Gesine Reinert
Entropy 2024, 26(10), 813; https://doi.org/10.3390/e26100813 - 25 Sep 2024
Viewed by 698
Abstract
A better understanding of protein–protein interaction (PPI) networks representing physical interactions between proteins could be beneficial for evolutionary insights as well as for practical applications such as drug development. As statistical models for PPI networks, duplication–divergence models have been proposed, but they tend to produce either very sparse networks in which most of the proteins are isolated, or networks which are much denser than what is usually observed, with almost no isolated proteins. Moreover, in real networks, where a gene encodes a protein, gene loss may occur. The loss of nodes has not been captured in duplication–divergence models to date. Here, we introduce a new duplication–divergence model which includes node loss. This mechanism results in networks in which the proportion of isolated proteins can take on values which are strictly between 0 and 1. To understand this new model, we apply strong and weak attacks to networks from duplication–divergence models with and without node loss, and compare the results to those obtained when carrying out similar attacks on two real PPI networks of E. coli and of S. cerevisiae. We find that the new model more closely reflects the damage caused by strong and weak attacks found in the PPI networks. Full article
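A toy simulation in networkx conveys the mechanism of duplication–divergence growth with node loss. The exact duplication, divergence, and deletion rules below are assumed for illustration only and are not the paper's model specification.

```python
import random
import networkx as nx

def duplication_divergence_with_loss(n_steps, p=0.4, q=0.2, seed=0):
    """Toy duplication-divergence growth with node loss (assumed rules):
    duplicate a random node, keep each inherited edge with probability p,
    and with probability q delete a random existing node."""
    rng = random.Random(seed)
    G = nx.Graph([(0, 1), (1, 2), (0, 2)])            # start from a triangle
    next_id = 3
    for _ in range(n_steps):
        anchor = rng.choice(list(G.nodes))
        G.add_node(next_id)                           # duplication
        for nb in list(G.neighbors(anchor)):
            if rng.random() < p:                      # divergence: some edges survive
                G.add_edge(next_id, nb)
        next_id += 1
        if rng.random() < q and G.number_of_nodes() > 3:
            G.remove_node(rng.choice(list(G.nodes)))  # node (gene) loss
    return G

G = duplication_divergence_with_loss(500)
isolated = sum(1 for n in G if G.degree(n) == 0)
print(G.number_of_nodes(), G.number_of_edges(), isolated / G.number_of_nodes())
```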
Figures:
Figure 1: Sensitivity analysis for the number of isolated nodes in the E. coli and S. cerevisiae PPI networks across varying STRING score thresholds.
Figure 2: Attack strategies: (A) complete knockout, in which all edges connected to the attacked node are eliminated; (B1) partial knockout, in which half of the edges connected to the attacked node are eliminated; (C1) distributed knockout, in which randomly selected edges are eliminated. Adapted from Fig. 1 in [1].
Figure 3: Graph illustration of a duplication–divergence model.
Figure 4: Graph illustration of the new duplication–divergence model with node loss.
Figure 5: Average number of edges in the 1-step ego network of the E. coli and S. cerevisiae PPI networks after 25 attacks, comparing complete knockouts with partial knockouts of one, two, or five nodes per attack.
Figure 6: Network efficiency after up to 25 weak attacks (knockout, attenuation, and distributed variants) on simulations from the duplication–divergence model starting from a triangle with divergence rate p = 0.4.
Figure 7: Network efficiency after up to 25 weak attacks on simulations from the new node loss model starting from a triangle with p = 0.4 and node loss probability q = 0.2 (undirected graph with unit edge weight).
Figure 8: Effect of q (0.2, 0.4, 0.6, 0.8) on the efficiency of weak attacks on simulated networks from the node loss model with p = 0.4.
Figure A1: Effect of p for complete or weak knockout attacks on duplication–divergence networks (q = 0) starting from a triangle.
Figure A2: Effect of p for complete or weak knockout attacks on node loss networks starting from a triangle, with q = 0.4.
Figure A3: Network efficiency after up to 25 weak attacks on node loss networks starting from a triangle with p = 0.4 and q = 0.4, 0.6, and 0.8.
Figure A4: Effect of p for complete or weak knockout attacks on duplication–divergence networks starting from an edge.
Figure A5: Effect of p for complete or weak knockout attacks on node loss networks starting from an edge, with q = 0.4.
Figure A6: Weak attacks on node loss networks starting from an edge with p = 0.4 and q = 0.2 (undirected graph with unit edge weight).
Figure A7: Weak attacks on node loss networks starting from an edge with p = 0.4 and q = 0.6 or 0.8.
Figure A8: Weak attacks on node loss networks starting from an edge with p = 0.2 and q = 0.4.
Figure A9: Effect of STRING score thresholds (0.200, 0.400, 0.600) for complete or weak knockout attacks on the real PPI networks.
14 pages, 505 KiB  
Article
Few-Shot Learning Sensitive Recognition Method Based on Prototypical Network
by Guoquan Yuan, Xinjian Zhao, Liu Li, Song Zhang and Shanming Wei
Mathematics 2024, 12(17), 2791; https://doi.org/10.3390/math12172791 - 9 Sep 2024
Viewed by 869
Abstract
Traditional machine learning-based entity extraction methods rely heavily on feature engineering by experts, and the generalization ability of such models is poor. Prototype networks, on the other hand, can effectively use a small amount of labeled data to train models while using category prototypes to enhance their generalization ability. Therefore, this paper proposes a prototype network-based named entity recognition (NER) method, the FSPN-NER model, to address the difficulty of recognizing sensitive data in data-sparse text. The model utilizes the positional coding model (PCM) to pre-train the data and perform feature extraction, then computes prototype vectors to achieve entity matching, and finally introduces a boundary detection module to enhance the performance of the prototype network in the named entity recognition task. The model is compared with LSTM, BiLSTM, CRF, Transformer and their combination models; the experimental results on the test dataset show that it outperforms the comparative models with an accuracy of 84.8%, a recall of 85.8% and an F1 value of 0.853. Full article
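The prototype-matching step is easy to sketch: average the support embeddings per class and assign queries to the nearest prototype. The embedding dimension and episode layout below are illustrative; the PCM encoder and the boundary detection module are not included.

```python
import numpy as np

def prototypes(support_emb, support_labels):
    """Class prototypes = mean embedding of the support examples per class."""
    classes = np.unique(support_labels)
    protos = np.stack([support_emb[support_labels == c].mean(axis=0) for c in classes])
    return classes, protos

def classify(query_emb, protos):
    """Nearest-prototype assignment by squared Euclidean distance."""
    d = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

rng = np.random.default_rng(0)
# toy 2-way 5-shot episode with 16-dim embeddings standing in for encoder features
support = np.concatenate([rng.normal(0, 1, (5, 16)), rng.normal(3, 1, (5, 16))])
labels = np.array([0] * 5 + [1] * 5)
classes, protos = prototypes(support, labels)
queries = rng.normal(3, 1, (4, 16))
print(classes[classify(queries, protos)])   # expected: mostly class 1
```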
Figures:
Figure 1: The overall architecture of the model.
Figure 2: Effect of training sample size on F1 scores.
Figure 3: Comparing the performance of word vectors with different dimensions.
16 pages, 2564 KiB  
Article
Modeling Chickpea Productivity with Artificial Image Objects and Convolutional Neural Network
by Mikhail Bankin, Yaroslav Tyrykin, Maria Duk, Maria Samsonova and Konstantin Kozlov
Plants 2024, 13(17), 2444; https://doi.org/10.3390/plants13172444 - 1 Sep 2024
Viewed by 881
Abstract
The chickpea plays a significant role in global agriculture and occupies an increasing share in the human diet. The main aim of the research was to develop a model for the prediction of two chickpea productivity traits in the available dataset. Genomic data for accessions were encoded in Artificial Image Objects, and a model for the thousand-seed weight (TSW) and number of seeds per plant (SNpP) prediction was constructed using a Convolutional Neural Network, dictionary learning and sparse coding for feature extraction, and extreme gradient boosting for regression. The model was capable of predicting both traits with an acceptable accuracy of 84–85%. The most important factors for model solution were identified using the dense regression attention maps method. The SNPs important for the SNpP and TSW traits were found in 34 and 49 genes, respectively. Genomic prediction with a constructed model can help breeding programs harness genotypic and phenotypic diversity to more effectively produce varieties with a desired phenotype. Full article
(This article belongs to the Section Plant Genetics, Genomics and Biotechnology)
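A rough stand-in for the feature-extraction-plus-regression part of this kind of pipeline, using scikit-learn dictionary learning for sparse codes and gradient boosting for the trait regression (in place of XGBoost); the feature matrix is synthetic and the hyperparameters are placeholders.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
features = rng.random((120, 256))                     # hypothetical per-accession features
trait = features[:, :10].sum(axis=1) + rng.normal(0, 0.1, 120)  # synthetic TSW-like trait

# learn a dictionary and use the sparse codes as new features
dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0)
codes = dico.fit_transform(features)

# gradient boosting regression on the codes (sklearn stand-in for XGBoost)
reg = GradientBoostingRegressor(random_state=0).fit(codes[:100], trait[:100])
print(round(reg.score(codes[100:], trait[100:]), 3))  # held-out R^2
```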
Graphical abstract.
Figures:
Figure 1: The overview of the research.
Figure 2: Histogram for the TSW trait.
Figure 3: Histogram for the SNpP trait.
Figure 4: Example AIO (128 × 128 px, enlarged here); the color of each pixel encodes genomic factors via Equation (1).
Figure 5: The architecture of the CNN.
Figure 6: Convergence for the SNpP trait.
Figure 7: Comparison of measured and predicted number of seeds per plant (training points in blue, test points in red; model accuracy a = 84%).
Figure 8: Example attention map for the SNpP trait for an individual accession (intensity differences increased for visualization).
Figure 9: Convergence for the TSW trait.
Figure 10: Comparison of measured and predicted thousand-seed weight (training points in blue, test points in red; model accuracy a = 85%).
Figure 11: Example attention map for the TSW trait for an individual accession (intensity enhanced as in Figure 8).
18 pages, 7128 KiB  
Article
RGBTSDF: An Efficient and Simple Method for Color Truncated Signed Distance Field (TSDF) Volume Fusion Based on RGB-D Images
by Yunqiang Li, Shuowen Huang, Ying Chen, Yong Ding, Pengcheng Zhao, Qingwu Hu and Xujie Zhang
Remote Sens. 2024, 16(17), 3188; https://doi.org/10.3390/rs16173188 - 29 Aug 2024
Viewed by 1439
Abstract
RGB-D image mapping is an important tool in applications such as robotics, 3D reconstruction, autonomous navigation, and augmented reality (AR). Efficient and reliable mapping methods can improve the accuracy, real-time performance, and flexibility of sensors in various fields. However, the widely used Truncated Signed Distance Field (TSDF) still suffers from inefficient memory management, making it difficult to use directly for large-scale 3D reconstruction. To address this problem, this paper proposes a highly efficient and accurate TSDF voxel fusion method, RGBTSDF. First, based on the sparse characteristics of the volume, an improved grid octree is used to manage the whole scene, and a hard coding method is proposed for indexing. Second, during the depth map fusion process, the depth map is interpolated to achieve a more accurate voxel fusion effect. Finally, a mesh extraction method with texture constraints is proposed to overcome the effects of noise and holes and improve the smoothness and refinement of the extracted surface. We comprehensively evaluate RGBTSDF and similar methods through experiments on public datasets and on datasets collected by commercial scanning devices. Experimental results show that RGBTSDF requires less memory and achieves real-time performance using only the CPU. It also improves fusion accuracy and achieves finer grid details. Full article
(This article belongs to the Special Issue New Insight into Point Cloud Data Processing)
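The textbook weighted-average TSDF update that fusion methods of this kind build on can be sketched per voxel as below; the truncation distance and weight cap are illustrative, and this is not RGBTSDF's grid-octree implementation.

```python
import numpy as np

def fuse_tsdf(tsdf, weight, color, sdf_obs, color_obs, trunc=0.04, max_weight=64):
    """One weighted-average TSDF fusion step for a batch of voxels: clamp the
    observed signed distance to the truncation band, then blend the value,
    the weight, and the RGB color with the running averages."""
    d = np.clip(sdf_obs / trunc, -1.0, 1.0)              # truncated, normalised SDF
    w_new = np.minimum(weight + 1.0, max_weight)
    tsdf_new = (tsdf * weight + d) / w_new
    color_new = (color * weight[:, None] + color_obs) / w_new[:, None]
    return tsdf_new, w_new, color_new

# three voxels observed at signed distances of -2 cm, 1 cm, and 10 cm from a surface
tsdf, weight, color = np.zeros(3), np.zeros(3), np.zeros((3, 3))
t, w, c = fuse_tsdf(tsdf, weight, color,
                    np.array([-0.02, 0.01, 0.10]), np.full((3, 3), 128.0))
print(t, w)
```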
Figures:
Figure 1: The pipeline of RGBTSDF.
Figure 2: The structure of an octree.
Figure 3: The structure of a grid octree.
Figure 4: The diagram of hard coding.
Figure 5: 2D schematic of Marching Cubes.
Figure 6: Direct mesh extraction (top) versus mesh extraction with texture constraints (bottom).
Figure 7: Qualitative and detail results on the ICL-NUIM dataset: (a) Open3D; (b) VDBFusion; (c) Gradient-SDF; (d) RGBTSDF.
Figure 8: Qualitative results on the TUM dataset: (a) Open3D; (b) VDBFusion; (c) Gradient-SDF; (d) RGBTSDF.
Figure 9: Qualitative and detail results on the Venus model data: (a) Open3D; (b) VDBFusion; (c) Gradient-SDF; (d) RGBTSDF; (e) RGBTSDF with texture-constrained mesh extraction.
Figure 10: Qualitative and detail results on the Furniture data: (a) Open3D; (b) VDBFusion; (c) Gradient-SDF; (d) RGBTSDF; (e) RGBTSDF with texture-constrained mesh extraction.
Figure 11: Comparison of reconstruction scores for the four fusion methods with different voxel sizes and truncation.
22 pages, 937 KiB  
Article
Radar Emitter Recognition Based on Spiking Neural Networks
by Zhenghao Luo, Xingdong Wang, Shuo Yuan and Zhangmeng Liu
Remote Sens. 2024, 16(14), 2680; https://doi.org/10.3390/rs16142680 - 22 Jul 2024
Cited by 2 | Viewed by 1274
Abstract
Efficient and effective radar emitter recognition is critical for electronic support measurement (ESM) systems. However, in complex electromagnetic environments, intercepted pulse trains generally contain substantial data noise, including spurious and missing pulses. Currently, radar emitter recognition methods utilizing traditional artificial neural networks (ANNs) like CNNs and RNNs are susceptible to data noise and require intensive computations, posing challenges to meeting the performance demands of modern ESM systems. Spiking neural networks (SNNs) exhibit stronger representational capabilities compared to traditional ANNs due to the temporal dynamics of spiking neurons and richer information encoded in precise spike timing. Furthermore, SNNs achieve higher computational efficiency by performing event-driven sparse addition calculations. In this paper, a lightweight spiking neural network is proposed by combining direct coding, leaky integrate-and-fire (LIF) neurons, and surrogate gradients to recognize radar emitters. Additionally, an improved SNN for radar emitter recognition is proposed, leveraging the local timing structure of pulses to enhance adaptability to data noise. Simulation results demonstrate the superior performance of the proposed method over existing methods. Full article
(This article belongs to the Special Issue Technical Developments in Radar—Processing and Application)
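A forward-only sketch of the leaky integrate-and-fire dynamics at the heart of such SNNs; training with surrogate gradients is not shown, and the membrane time constant, threshold, and input currents are placeholders rather than the paper's settings.

```python
import numpy as np

def lif_forward(inputs, tau=2.0, v_threshold=1.0, v_reset=0.0):
    """Discrete-time leaky integrate-and-fire layer (forward pass only; training
    would replace the Heaviside step with a surrogate gradient). `inputs` is
    (T, n_neurons) of synaptic currents; returns the binary spike trains."""
    v = np.zeros(inputs.shape[1])
    spikes = np.zeros_like(inputs)
    for t, x in enumerate(inputs):
        v = v + (x - (v - v_reset)) / tau        # leaky integration of the input current
        spikes[t] = (v >= v_threshold).astype(float)
        v = np.where(spikes[t] > 0, v_reset, v)  # hard reset after a spike
    return spikes

rng = np.random.default_rng(0)
currents = rng.uniform(0, 0.8, size=(20, 5))     # 20 time steps, 5 LIF neurons
print(lif_forward(currents).sum(axis=0))         # spike count per neuron
```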
Figures:
Figure 1: Information processing in (a) spiking neural networks and (b) artificial neural networks.
Figure 2: Radar pulse train diagram: (a) the original pulse stream without data noise; (b) an intercepted radar pulse train with three common data noises.
Figure 3: SNN structure for radar emitter recognition: the digitized PRI and PW are transformed into one-hot vectors, embedded into lower-dimensional features, concatenated, and encoded into spike trains; the output neuron corresponding to the correct category has the highest spike firing rate.
Figure 4: Structure of the improved SNN: the local timing structure (L_t) and pulse width of each pulse are encoded into spike trains, and the output neuron corresponding to the correct class has the highest spike-firing rate.
Figure 5: Spike trains fired by neurons in the encoding layer, the two hidden layers, and the output layer during a radar pulse train recognition experiment.
Figure 6: Confusion matrices of SNN-based recognition for missing/spurious pulse rates of 0.1/0.2, 0.3/0.6, 0.5/1.0, and 0.7/1.4.
Figure 7: Recall and precision of SNN-based recognition for spurious pulse rates from 0 to 1.8 at fixed missing pulse rates of 0.3 and 0.5.
Figure 8: Confusion matrices of improved-SNN-based recognition for missing/spurious pulse rates of 0.1/0.2, 0.3/0.6, 0.5/1.0, and 0.7/1.4.
Figure 9: Recall and precision of improved-SNN-based recognition for spurious pulse rates from 0 to 1.8 at fixed missing pulse rates of 0.3 and 0.5.
Figure 10: The recognition accuracy under different missing pulse rates.
Figure 11: The recognition accuracy under different spurious pulse rates.
Figure 12: The running time of different methods when dealing with pulse streams of different lengths.
32 pages, 5258 KiB  
Article
Developing GA-FuL: A Generic Wide-Purpose Library for Computing with Geometric Algebra
by Ahmad Hosny Eid and Francisco G. Montoya
Mathematics 2024, 12(14), 2272; https://doi.org/10.3390/math12142272 - 20 Jul 2024
Viewed by 1058
Abstract
The Geometric Algebra Fulcrum Library (GA-FuL) version 1.0 is introduced in this paper as a comprehensive computational library for geometric algebra (GA) and Clifford algebra (CA), in addition to other classical algebras. As a sophisticated software system, GA-FuL is useful for practical applications requiring numerical or symbolic prototyping, optimized code generation, and geometric visualization. A comprehensive overview of the GA-FuL design is provided, including its core design intentions, data-driven programming characteristics, and extensible layered design. The library is capable of representing and manipulating sparse multivectors of any dimension, scalar kind, or metric signature, including conformal and projective geometric algebras. Several practical and illustrative use cases of the library are provided to highlight its potential for mathematical, scientific, and engineering applications. The metaprogramming code optimization capabilities of GA-FuL are found to be unique among other software systems. This allows for the automated production of highly efficient code, based on powerful geometric modeling formulations provided by geometric algebra. Full article
(This article belongs to the Section B: Geometry and Topology)
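To illustrate the kind of sparse multivector representation the library description refers to, here is a toy geometric product over {basis-blade bitmask: coefficient} dictionaries under a Euclidean metric. This is not GA-FuL's API (GA-FuL is a .NET library); it is only a sketch of the underlying idea.

```python
from collections import defaultdict

def reorder_sign(a, b):
    """Sign from reordering the basis vectors of blades a and b (bitmask encoding)."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1.0 if swaps & 1 else 1.0

def geometric_product(x, y):
    """Geometric product of two sparse multivectors stored as
    {basis-blade bitmask: coefficient}, assuming a Euclidean metric
    (every basis vector squares to +1)."""
    out = defaultdict(float)
    for a, ca in x.items():
        for b, cb in y.items():
            out[a ^ b] += reorder_sign(a, b) * ca * cb
    return dict(out)

e1, e2 = {0b01: 1.0}, {0b10: 1.0}
print(geometric_product(e1, e2))   # {3: 1.0}  -> the bivector e1^e2
print(geometric_product(e1, e1))   # {0: 1.0}  -> e1*e1 = +1
```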
Figures:
Figure 1: Main layers of the GA-FuL design.
Figure 2: GA-FuL algebra layer generic multivector thin-wrapper classes.
Figure 3: GA-FuL modeling layer interfaces for linear maps on generic multivectors.
Figure 4: Public interfaces of GA-FuL meta-expressions in the meta-context sub-layer.
Figure 5: Illustration of the geometric procedure for defining a family of rotations between two unit vectors.
17 pages, 8945 KiB  
Article
A Method for In-Loop Video Coding Restoration
by Carlos Salazar, Maria Trujillo and John W. Branch-Bedoya
Electronics 2024, 13(12), 2422; https://doi.org/10.3390/electronics13122422 - 20 Jun 2024
Viewed by 1778
Abstract
In-loop restoration is a post-processing task aiming to reduce losses caused by the quantization and the inverse quantization phases in a video coding process. Emerging in-loop restoration methods, most of them based on deep learning, have reported higher quality gains than classical filters. However, the complexity at the decoder side remains a challenge. The Sparse Restoration Method (SRM) is presented as a low-complexity method that utilizes sparse representation and Natural Scene Statistic metrics to enhance visual quality at the block level. Our method shows potential restoration benefits when applied to synthetic video sequences. Full article
(This article belongs to the Special Issue Image and Video Processing Based on Deep Learning)
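A simplified stand-in for block-level sparse restoration: code a decoded block over a fixed overcomplete DCT dictionary with OMP and resynthesize it from a few atoms, which suppresses quantization-like noise. The dictionary construction, block size, and sparsity level are assumptions, not SRM's actual design.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def dct_dictionary(block=8, atoms=16):
    """Overcomplete separable 2-D cosine dictionary (block*block x atoms*atoms)."""
    n = np.arange(block)[:, None]
    k = np.arange(atoms)[None, :]
    D1 = np.cos(np.pi * (2 * n + 1) * k / (2 * atoms))    # block x atoms 1-D atoms
    D = np.kron(D1, D1)                                    # 64 x 256 2-D atoms
    return D / np.linalg.norm(D, axis=0, keepdims=True)

def restore_block(distorted, D, n_nonzero=8):
    """Sparse-code the decoded block with OMP and rebuild it from a few atoms."""
    coeffs = orthogonal_mp(D, distorted.ravel(), n_nonzero_coefs=n_nonzero)
    return (D @ coeffs).reshape(distorted.shape)

rng = np.random.default_rng(0)
clean = np.outer(np.linspace(0, 1, 8), np.ones(8))         # smooth gradient block
decoded = clean + rng.normal(0, 0.05, clean.shape)          # quantization-like noise
D = dct_dictionary()
print(np.abs(restore_block(decoded, D) - clean).mean())     # residual error after restoration
```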
Figures:
Figure 1: Post-processing filters (simplified view).
Figure 2: Boundary artifacts caused by quantization during AV1 video compression: an original frame (top, with the boundary marked before the split into two blocks) and a decoded frame (bottom, with the boundary discontinuity caused by per-block quantization highlighted).
Figure 3: Boundary blocks used to determine the size of a deblocking filter.
Figure 4: Boundary pixels involved in deblocking filtering.
Figure 5: Ringing artifact: original frame (left) and ringing around the object edges (right) [12].
Figure 6: CDEF in eight directions, where the dark cube represents the central pixel [11].
Figure 7: Architecture of the CNN in-loop filter [16].
Figure 8: Architecture of the Guided CNN restoration [7].
Figure 9: Visual and objective comparison of a frame restoration with sparse coefficients sent to the decoder (top) versus predicted by the decoder (bottom); the distorted original reports a PSNR of 38.81 dB, with QP = 135.
Figure 10: Video test sequences A2–A5 and B1.
Figure 11: The AWS computer architecture used for tests.
Figure 12: Subjective assessment (luma plane) of SRM restoration at the block level with QP = 210: original block, block distorted by AV2 compression artifacts, and restored block after SRM with predicted DR (decoding residual) compensation.
Figure 13: SRM frame restoration using the Y plane (U and V planes unchanged) on sequence B1 with QP = 210: original, distorted, and restored blocks (+1 dB VMAF gain), plus the full reference frame.
13 pages, 3145 KiB  
Article
Expanding and Enriching the LncRNA Gene–Disease Landscape Using the GeneCaRNA Database
by Shalini Aggarwal, Chana Rosenblum, Marshall Gould, Shahar Ziman, Ruth Barshir, Ofer Zelig, Yaron Guan-Golan, Tsippi Iny-Stein, Marilyn Safran, Shmuel Pietrokovski and Doron Lancet
Biomedicines 2024, 12(6), 1305; https://doi.org/10.3390/biomedicines12061305 - 12 Jun 2024
Cited by 1 | Viewed by 1371
Abstract
The GeneCaRNA human gene database is a member of the GeneCards Suite. It presents ~280,000 human non-coding RNA genes, identified algorithmically from ~690,000 RNAcentral transcripts. This expands by ~tenfold the ncRNA gene count relative to other sources. GeneCaRNA thus contains ~120,000 long non-coding RNAs (LncRNAs, >200 bases long), including ~100,000 novel genes. The latter have sparse functional information, a vast terra incognita for future research. LncRNA genes are uniformly represented on all nuclear chromosomes, with 10 genes on mitochondrial DNA. Data obtained from MalaCards, another GeneCards Suite member, finds 1547 genes associated with 1 to 50 diseases. About 15% of the associations portray experimental evidence, with cancers tending to be multigenic. Preliminary text mining within GeneCaRNA discovers interactions of lncRNA transcripts with target gene products, with 25% being ncRNAs and 75% proteins. GeneCaRNA has a biological pathways section, which at present shows 131 pathways for 38 lncRNA genes, a basis for future expansion. Finally, our GeneHancer database provides regulatory elements for ~110,000 lncRNA genes, offering pointers for co-regulated genes and genetic linkages from enhancers to diseases. We anticipate that the broad vista provided by GeneCaRNA will serve as an essential guide for further lncRNA research in disease decipherment. Full article
(This article belongs to the Section Molecular Genetics and Genetic Diseases)
Figures
Graphical abstract
Figure 1: Rank graph for annotative information, separately illustrated for two groups: genes from the major gene sources (NCBI, HGNC, and ENSEMBL) (orange), and genes inferred from RNAcentral transcripts without any annotation from the major gene sources, TRIGGS (blue).
Figure 2: A GeneCaRNA-based suggested subclassification of 11,430 lncRNA subclass-definable genes (9.4% of the total). LincRNA: LncRNAs transcribed from the DNA stretch between two protein-coding genes. Divergent: LncRNAs transcribed from a promoter shared with a protein-coding gene. Protein suspect: LncRNAs that have the potential to encode a peptide/protein. Intronic: LncRNAs transcribed purely from the intron(s) of a coding gene. Antisense: LncRNAs transcribed in antisense to a protein-coding DNA strand.
Figure 3: Gene–disease associations: detailed scrutiny of the gene–disease associations among the 1554 lncRNA genes and 2019 diseases. (a) The count of diseases per gene. (b) The count of genes per disease. (c) The number of genes per disease for a sample of 9 diseases with 4 or more elite associations.
Figure 4: A map of gene-to-disease associations. Chord diagram of the network of the top 10 most-recurring genes and the top 5 most-recurring diseases. Each chord represents a gene/disease association, where the width of the line correlates with the association score. The association score follows the MalaCards gene-to-disease scoring system, where score values depend on the level of manual curation of the information source and on the significance assigned by the source itself to its different annotation classes [21].
Figure 5: LncRNA-related pathways. (a) The counts of pathways (≥2) mapped to lncRNA genes. (b) The count of genes per pathway; 96 pathways include 1 gene (Table S4) and, at maximum, a pathway had 4 genes. (c) A map of the "STAT3 signaling in hepatocellular carcinoma" pathway obtained from WikiPathways (ID: WP4337). The lncRNA genes involved illustrate the cruciality of lncRNA genes in regulating protein-coding genes, joining five MirRNA genes in the ncRNA category.
Figure 6: A diagram showing a sample of 199 target gene products with one-to-one interactions with 118 lncRNA transcripts. The lncRNA targets include 171 proteins and a range of ncRNAs spanning three classes.
16 pages, 3410 KiB  
Article
Feature Extraction Based on Sparse Coding Approach for Hand Grasp Type Classification
by Jirayu Samkunta, Patinya Ketthong, Nghia Thi Mai, Md Abdus Samad Kamal, Iwanori Murakami and Kou Yamada
Algorithms 2024, 17(6), 240; https://doi.org/10.3390/a17060240 - 3 Jun 2024
Viewed by 775
Abstract
The kinematics of the human hand exhibit complex and diverse characteristics unique to each individual. Various techniques such as vision-based, ultrasonic-based, and data-glove-based approaches have been employed to analyze human hand movements. However, a critical challenge remains in efficiently analyzing and classifying hand grasp types based on time-series kinematic data. In this paper, we propose a novel sparse coding feature extraction technique based on dictionary learning to address this challenge. Our method enhances model accuracy, reduces training time, and minimizes overfitting risk. We benchmarked our approach against principal component analysis (PCA) and sparse coding based on a Gaussian random dictionary. Our results demonstrate a significant improvement in classification accuracy: 81.78% with our method compared to 31.43% for PCA and 77.27% for the Gaussian random dictionary. Furthermore, our technique outperforms both baselines in terms of macro-average F1-score and average area under the curve (AUC) while also significantly reducing the number of features required. Full article
(This article belongs to the Special Issue Algorithms for Feature Selection (2nd Edition))
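To make the comparison in the abstract above concrete, the sketch below shows one way to use learned-dictionary sparse codes as classifier features for time-series kinematic data, side by side with a PCA baseline. It is a hedged illustration, not the authors' implementation; `load_grasp_dataset`, the number of atoms, the sparsity level, and the classifier settings are all assumptions.

```python
from sklearn.decomposition import MiniBatchDictionaryLearning, PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def grasp_pipelines(n_atoms=32, k=5):
    """Two feature-extraction front ends feeding the same neural-network classifier."""
    sparse_coder = MiniBatchDictionaryLearning(
        n_components=n_atoms,            # dictionary atoms learned from the training data
        transform_algorithm="omp",       # sparse codes via orthogonal matching pursuit
        transform_n_nonzero_coefs=k,     # at most k active atoms per sample
        random_state=0,
    )
    return {
        "pca":  make_pipeline(PCA(n_components=n_atoms),
                              MLPClassifier(max_iter=1000, random_state=0)),
        "dict": make_pipeline(sparse_coder,
                              MLPClassifier(max_iter=1000, random_state=0)),
    }

# Hypothetical usage: X holds flattened joint-angle time series, y the grasp-type labels.
# X, y = load_grasp_dataset()
# for name, pipe in grasp_pipelines().items():
#     print(name, cross_val_score(pipe, X, y, cv=5).mean())
```

The design choice mirrored here is the one the abstract argues for: a dictionary fitted to the data tends to yield more discriminative sparse codes than either a dense PCA projection or a fixed Gaussian random dictionary, while keeping the number of active features per sample small.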
Figures
Figure 1: Overview of proposed methodology.
Figure 2: The sparse-coding-based feature extraction technique based on dictionary learning.
Figure 3: Details of proposed method.
Figure 4: Sequences to illustrate five grasp types.
Figure 5: Kinematic model of the human hand.
Figure 6: Confusion matrices for NN classification using several feature extraction techniques: (a) raw data, (b) PCA, (c) sparse coding based on Gaussian random dictionary, and (d) sparse coding based on dictionary learning.
Figure 7: ROC curve of feature extraction techniques: (a) raw data, (b) PCA, (c) sparse coding based on Gaussian random dictionary, and (d) sparse coding based on dictionary learning.
Figure 8: Comparison of AUC values for each class between raw data, PCA, sparse coding based on Gaussian random dictionary, and dictionary learning approach.