Signals, Volume 4, Issue 4 (December 2023) – 14 articles

Cover Story (view full-size image): One of the main limitations in the analysis of EEG signals during movement is the presence of artefacts due to cranial muscle contraction. The study addresses two main aspects: validating a tool capable of decreasing movement artefacts while developing a reliable method for the quantitative analysis of EEG data, and using this method to analyse the EEG signal recorded during a particular motor activity. The image shows a montage of the 16 EEG channels with monopolar reference and the statistically significant differences (coloured electrodes) between the eyes-open (left) and eyes-closed (right) conditions in three postural tasks: seated (top), standing with two-legged support (centre), and standing with single-legged support (bottom) for the PSD variable. The procedure showed excellent reliability, allowing for a significant decrease in movement artefacts. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
15 pages, 1098 KiB  
Article
Automatic Detection of Electrodermal Activity Events during Sleep
by Jacopo Piccini, Elias August, Sami Leon Noel Aziz Hanna, Tiina Siilak and Erna Sif Arnardóttir
Signals 2023, 4(4), 877-891; https://doi.org/10.3390/signals4040048 - 18 Dec 2023
Cited by 1 | Viewed by 1774
Abstract
Currently, there is significant interest in developing algorithms for processing electrodermal activity (EDA) signals recorded during sleep. The interest is driven by the growing popularity and increased accuracy of wearable devices capable of recording EDA signals. If properly processed and analysed, these signals can be used for various purposes, such as identifying sleep stages and sleep-disordered breathing, while being minimally intrusive. Because manually scoring EDA sleep signals is tedious, an algorithm to automate scoring is needed. In this paper, we present a novel scoring algorithm for the detection of EDA events and EDA storms using signal processing techniques. We apply the algorithm to EDA recordings from two different and unrelated studies that have also been manually scored, and we evaluate its performance in terms of precision, recall, and F1 score. We obtain F1 scores of about 69% for EDA events and about 56% for EDA storms. Compared with literature values for scoring agreement between experts, we observe strong agreement between automatic and manual scoring of EDA events and moderate agreement between automatic and manual scoring of EDA storms. EDA events and EDA storms detected with the algorithm can be further processed and used as training variables in machine learning algorithms to classify sleep health.
(This article belongs to the Special Issue Advanced Methods of Biomedical Signal Processing)
Show Figures
Figure 1: A sample raw EDA signal from a single-night PSG recording.
Figure 2: The haar function, used to limit the effect of sharp changes due to measurement noise.
Figure 3: The db44 function.
Figure 4: The coif3 function.
Figure 5: Flow diagram of the algorithm developed in this work. After applying a band-pass filter, the signal is processed in two different ways in parallel. One branch is for EDA event detection, while the other is for motion artefact detection. The outputs of the two branches are then merged and artefacts removed. The final steps consist of removing periods of wakefulness and EDA storm detection.
Figure 6: Example of an artefact. The signal exhibits a sudden voltage drop followed by high-frequency oscillations.
Figure 7: Example of an artefact's power spectrum. The high-frequency contribution exceeds the threshold, so the segment is labelled an artefact.
Figure 8: An EDA event is detected: the power spectrum of frequencies between 0.25 Hz and 3 Hz exceeds the EDA event threshold.
Figure 9: An EDA event is not detected: the power spectrum does not exceed the EDA event threshold.
Figure 10: An example of an EDA storm.
Figure 11: Difference between manual and automatic scoring.
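The detection rule summarized in the abstract and in Figures 8 and 9, scoring an EDA event when the spectral power between 0.25 Hz and 3 Hz exceeds a threshold, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the sampling rate, segment length, and threshold value are placeholders.

```python
import math

def band_power(segment, fs, f_lo=0.25, f_hi=3.0):
    """Sum the DFT power of the frequency bins between f_lo and f_hi (in Hz)."""
    n = len(segment)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(segment))
            im = sum(-x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(segment))
            power += (re * re + im * im) / n
    return power

def is_eda_event(segment, fs, threshold):
    """Score a segment as an EDA event when its 0.25-3 Hz power exceeds the threshold."""
    return band_power(segment, fs) > threshold
```

A segment dominated by slow oscillations in the 0.25-3 Hz band passes the test, while a flat segment does not; in practice the threshold would be calibrated against manually scored recordings.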
18 pages, 810 KiB  
Article
Benford’s Law and Perceptual Features for Face Image Quality Assessment
by Domonkos Varga
Signals 2023, 4(4), 859-876; https://doi.org/10.3390/signals4040047 - 5 Dec 2023
Viewed by 1709
Abstract
The rapid growth in multimedia, storage systems, and digital computers has resulted in huge repositories of multimedia content and large image datasets in recent years. For instance, biometric databases, which can be used to identify individuals based on fingerprints, facial features, or iris patterns, have gained a lot of attention both from academia and industry. Specifically, face image quality assessment (FIQA) has become a very important part of face recognition systems, since the performance of such systems strongly depends on the quality of input data, such as blur, focus, compression, pose, or illumination. The main contribution of this paper is an analysis of Benford’s law-inspired first digit distribution and perceptual features for FIQA. To be more specific, I investigate the first digit distributions in different domains, such as wavelet or singular values, as quality-aware features for FIQA. My analysis revealed that first digit distributions with perceptual features are able to reach a high performance in the task of FIQA.
Show Figures
Figure 1: Relative frequency of leading digits based on the prediction of Benford's law.
Figure 2: Process for investigating the effectiveness of Benford's law-inspired and perceptual features for face image quality assessment.
Figure 3: Illustration of the two-dimensional discrete wavelet transform.
Figure 4: Measured empirical distribution of GFIQA-20k's [56] quality ratings.
Figure 5: Sample images from GFIQA-20k [56]. Quality ratings are printed on the face images in the upper left corners.
Figure 6: PLCC values of different regression methods as box plots, measured over 100 random train–test splits on GFIQA-20k [56]. In each box plot, the central mark denotes the median, the bottom and top edges correspond to the 25th and 75th percentiles, red plus signs mark outliers, and the whiskers extend to the most extreme values not considered outliers.
Figure 7: SROCC values of different regression methods as box plots, measured over 100 random train–test splits on GFIQA-20k [56]; box plot conventions as in Figure 6.
Figure 8: KROCC values of different regression methods as box plots, measured over 100 random train–test splits on GFIQA-20k [56]; box plot conventions as in Figure 6.
Figure 9: Performance comparison of FDD and perceptual features. Median SROCC values were measured over 100 random train–test splits on GFIQA-20k [56].
Figure 10: Performance of the proposed feature vector when part of the feature vector is removed. The performance of the whole feature vector is denoted by 'X'. Median SROCC values were measured over 100 random train–test splits on GFIQA-20k [56].
Figure 11: Ground-truth vs. predicted quality scores scatterplot on a GFIQA-20k [56] test set.
Figure 12: Radar graph for the visual comparison of median PLCC, SROCC, and KROCC values obtained on GFIQA-20k [56] after 100 random train–test splits.
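The first-digit statistics used as quality-aware features can be illustrated with a minimal sketch: take the leading significant digit of each value (e.g., of wavelet coefficients or singular values), build the empirical digit distribution, and compare it with Benford's prediction P(d) = log10(1 + 1/d). This is a simplified illustration of the feature construction, not the paper's exact pipeline.

```python
import math

def first_digit(x):
    """Leading significant digit of a nonzero number."""
    s = abs(x)
    while s < 1:
        s *= 10
    while s >= 10:
        s /= 10
    return int(s)

def first_digit_distribution(values):
    """Relative frequencies of leading digits 1-9 over the nonzero values."""
    counts = [0] * 9
    nonzero = [v for v in values if v != 0]
    for v in nonzero:
        counts[first_digit(v) - 1] += 1
    return [c / len(nonzero) for c in counts]

def benford_expected(d):
    """Benford's law: P(d) = log10(1 + 1/d)."""
    return math.log10(1 + 1 / d)
```

The deviation between the empirical distribution and `benford_expected` over digits 1-9 is the kind of quantity that can be fed to a regressor as a quality feature.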
23 pages, 10816 KiB  
Article
Integrating Data from Multiple Nondestructive Evaluation Technologies Using Machine Learning Algorithms for the Enhanced Assessment of a Concrete Bridge Deck
by Mustafa Khudhair and Nenad Gucunski
Signals 2023, 4(4), 836-858; https://doi.org/10.3390/signals4040046 - 4 Dec 2023
Cited by 2 | Viewed by 1295
Abstract
Several factors impact the durability of concrete bridge decks, including traffic loads, fatigue, temperature changes, environmental stress, and maintenance activities. Detecting problems such as corrosion, delamination, or concrete degradation early on can lower maintenance costs. Nondestructive evaluation (NDE) techniques can detect these issues at early stages. Each NDE method, meanwhile, has limitations that reduce the accuracy of the assessment. In this study, multiple NDE technologies were combined with machine learning algorithms to improve the interpretation of half-cell potential (HCP) and electrical resistivity (ER) measurements. A parametric study was performed to analyze the influence of five parameters on HCP and ER measurements: the degree of saturation, corrosion length, delamination depth, concrete cover, and moisture condition of the delamination. The results were obtained through finite element simulations and used to build two machine learning algorithms, a classification algorithm and a regression algorithm, based on Random Forest methodology. The algorithms were tested using data collected from a bridge deck in the BEAST® facility. Both machine learning algorithms were effective in improving the interpretation of the ER and HCP measurements using data from multiple NDE technologies.
Show Figures
Figure 1: Finite element 3D model components.
Figure 2: A-A cross-section.
Figure 3: Potential distribution for 15 cm corrosion length and 0.5 DoS: (a) no delamination; (b) AFD at 30 mm; (c) WFD at 30 mm.
Figure 4: HCP measurements for sound concrete: (a) 38 mm CC; (b) 51 mm CC; (c) 63 mm CC; (d) 76 mm CC.
Figure 5: C-C cross-section.
Figure 6: Concrete potential distributions from ER simulations: (a) sound concrete; (b) AFD at 70 mm; (c) WFD at 70 mm.
Figure 7: ER for the models: (a) for sound concrete; (b) with AFD at different depths; (c) with WFD at different depths.
Figure 8: B-B cross-section for sound concrete.
Figure 9: Acceleration time history.
Figure 10: Frequency spectra.
Figure 11: Workflow diagram for the HCP and ER classification algorithm.
Figure 12: Confusion matrix for the number of instances: (a) for the HCP; (b) for ER.
Figure 13: Workflow diagram for the HCP and ER regression algorithm.
Figure 14: Scatter plot of Random Forest predictions against actual values: (a) for ER; (b) for the HCP.
Figure 15: The loading device that represents the truck axle loads at the BEAST® facility.
Figure 16: Data collection of the NDE technologies on the BEAST slab: (a) grid mark on the concrete deck; (b) electrical resistivity; (c) half-cell potential; (d) impact echo; (e) MOIST-Scan.
Figure 17: NDE surface maps from the data collected on the BEAST slab: (a) electrical resistivity; (b) half-cell potential; (c) degree of saturation; (d) impact echo condition assessment; (e) concrete cover, obtained via GPR survey.
Figure 18: Electrical resistivity result comparison.
Figure 19: Half-cell potential result comparison.
Figure 20: Half-cell potential result comparison.
Figure 21: Electrical resistivity result comparison.
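The fusion idea, training a Random Forest so that one measurement (say, half-cell potential) is interpreted in the context of others (resistivity, saturation, cover), can be sketched with a toy ensemble of bootstrap-trained decision stumps. This is a deliberately minimal stand-in for the Random Forest used in the paper (real forests grow deep trees over random feature subsets), and the feature values and class labels below are invented for illustration only.

```python
import random
from collections import Counter

def stump_error(data, f, t):
    """Errors of the rule 'predict class 1 when feature f exceeds threshold t'."""
    return sum((x[f] > t) != y for x, y in data)

def train_stump(data):
    """Exhaustively pick the (feature, threshold, polarity) with the fewest errors."""
    n_features = len(data[0][0])
    best, best_err = None, len(data) + 1
    for f in range(n_features):
        for t in sorted({x[f] for x, _ in data}):
            err = stump_error(data, f, t)
            for polarity, e in ((1, err), (0, len(data) - err)):
                if e < best_err:
                    best_err, best = e, (f, t, polarity)
    return best

def predict_stump(stump, x):
    f, t, polarity = stump
    return int(x[f] > t) if polarity else int(x[f] <= t)

def train_forest(data, n_trees=25, seed=0):
    """Bagging: each stump is trained on a bootstrap resample of the data."""
    rng = random.Random(seed)
    return [train_stump([rng.choice(data) for _ in data]) for _ in range(n_trees)]

def predict_forest(forest, x):
    """Majority vote over the ensemble."""
    return Counter(predict_stump(s, x) for s in forest).most_common(1)[0][0]
```

With feature vectors such as (HCP in mV, ER in kΩ·cm) and a binary corroded/sound label, the ensemble vote illustrates how combining measurements can stabilize an interpretation that any single reading would leave ambiguous.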
20 pages, 6552 KiB  
Article
EEG-Based Seizure Detection Using Variable-Frequency Complex Demodulation and Convolutional Neural Networks
by Yedukondala Rao Veeranki, Riley McNaboe and Hugo F. Posada-Quintero
Signals 2023, 4(4), 816-835; https://doi.org/10.3390/signals4040045 - 28 Nov 2023
Cited by 12 | Viewed by 2141
Abstract
Epilepsy is a complex neurological disorder characterized by recurrent and unpredictable seizures that affect millions of people around the world. Early and accurate epilepsy detection is critical for timely medical intervention and improved patient outcomes. Several methods and classifiers for automated epilepsy detection have been developed in previous research. However, the existing research landscape requires innovative approaches that can further improve the accuracy of diagnosing and managing patients. This study investigates the application of variable-frequency complex demodulation (VFCDM) and convolutional neural networks (CNN) to discriminate between healthy, interictal, and ictal states using electroencephalogram (EEG) data. For testing this approach, the EEG signals were collected from the publicly available Bonn dataset. A high-resolution time–frequency spectrum (TFS) of each EEG signal was obtained using the VFCDM. The TFS images were fed to the CNN classifier for the classification of the signals. The performance of CNN was evaluated using leave-one-subject-out cross-validation (LOSO CV). The TFS shows variations in its frequency for different states that correspond to variation in the neural activity. The LOSO CV approach yields a consistently high performance, ranging from 90% to 99% between different combinations of healthy and epilepsy states (interictal and ictal). The extensive LOSO CV validation approach ensures the reliability and robustness of the proposed method. As a result, the research contributes to advancing the field of epilepsy detection and brings us one step closer to developing practical, reliable, and efficient diagnostic tools for clinical applications.
(This article belongs to the Special Issue Advanced Methods of Biomedical Signal Processing)
Show Figures
Figure 1: Pipeline of the proposed methodology.
Figure 2: Illustration of the CNN structure used in this study.
Figure 3: EEG signal characteristics in different brain states. Representative EEG signals depicting neural activity in healthy, interictal, and ictal states. The figure illustrates distinctive amplitude and fluctuation patterns: healthy state (Z and O), low and stable RMS amplitude (40–43 µV) with consistent fluctuations (28–29 µV); interictal state (F and N), higher RMS amplitude (49–50.5 µV) with pronounced and erratic fluctuations; ictal state (S), intermediate RMS amplitude (36.5 µV) with intense, synchronous neuronal activity.
Figure 4: Time–frequency spectrograms (TFS) of EEG signals across brain states. Representative TFS depicting neural activity in healthy (Z and O), interictal (F and N), and ictal (S) states. The figure showcases key TFS features, including the mean frequency range, spectral power, stability, dominant frequency, and % of significant frequency transitions.
Figure 5: Radar plot representation of the CNN performance metrics for classifying various combinations of healthy and ictal states.
Figure 6: The (a) training and validation metrics and (b) normalized confusion matrix of the CNN in classifying various combinations of healthy and ictal states.
Figure 7: Radar plot representation of the CNN performance metrics for classifying healthy vs. epileptic subjects.
Figure 8: The (a) training and validation metrics and (b) normalized confusion matrix of the CNN in classifying various combinations of healthy vs. epileptic subjects.
Figure 9: Radar plot representation of the CNN performance metrics for classifying healthy vs. interictal vs. ictal states.
Figure 10: The (a) training and validation metrics and (b) normalized confusion matrix of the CNN in classifying healthy vs. interictal vs. ictal states.
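The RMS amplitude feature that Figure 3 uses to characterize the brain states can be computed per analysis window as below. The window size and hop are placeholder choices for illustration; this is not the paper's VFCDM pipeline, only the elementary amplitude statistic its figure cites.

```python
import math

def rms(window):
    """Root-mean-square amplitude of one window of samples."""
    return math.sqrt(sum(v * v for v in window) / len(window))

def windowed_rms(signal, size, hop):
    """RMS amplitude over sliding windows of `size` samples advanced by `hop`."""
    return [rms(signal[i:i + size]) for i in range(0, len(signal) - size + 1, hop)]
```

Tracking how this statistic drifts across windows (e.g., stable near 40 µV vs. erratic near 50 µV) is one simple way to separate the regimes the figure describes.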
16 pages, 3538 KiB  
Article
Nearly Linear-Phase 2-D Recursive Digital Filters Design Using Balanced Realization Model Reduction
by Abdussalam Omar, Dale Shpak and Panajotis Agathoklis
Signals 2023, 4(4), 800-815; https://doi.org/10.3390/signals4040044 - 27 Nov 2023
Viewed by 1247
Abstract
This paper presents a new method for the design of separable-denominator 2-D IIR filters with nearly linear phase in the passband. The design method is based on a balanced realization model reduction technique. The nearly linear-phase 2-D IIR filter is designed using 2-D model reduction from a linear-phase 2-D FIR filter, which serves as the initial filter. The structured controllability and observability Gramians Ps and Qs, which are diagonal positive-definite matrices that satisfy 2-D Lyapunov equations, serve as the foundation for this technique. An efficient method is used to compute these Gramians by minimizing the traces of Ps and Qs under linear matrix inequality (LMI) constraints. The use of these Gramians ensures that the resulting 2-D IIR filter preserves stability and can be implemented using a separable-denominator 2-D filter with fewer coefficients than the original 2-D FIR filter. Numerical examples show that the proposed method compares favorably with existing techniques.
(This article belongs to the Topic Research on the Application of Digital Signal Processing)
Show Figures
Figure 1: Flowchart of the proposed method.
Figure 2: Magnitude responses and group delays of the 2-D lowpass FIR and IIR filters discussed in Example 1 of Section 5.1: (a) 2-D FIR lowpass filter of order (24,24); (b) 2-D IIR filter of reduced order (13,13) in [6]; (c) 2-D IIR filter of reduced order (13,13) using LMI; (d) magnitude contour of the reduced-order filter; (e) group delay τ₁ of reduced order (13,13); (f) group delay τ₂ of reduced order (13,13).
Figure 3: Magnitude responses and group delays of the reduced 2-D IIR bandpass filter described in Section 5.2.
Figure 4: 2-D FIR and IIR fan filters described in Section 5.3: (a) initial 2-D FIR fan filter of order (49,49); (b) reduced 2-D IIR fan filter of order (34,34); (c) group delay τ₁ of the reduced 2-D IIR fan filter; (d) group delay τ₂ of the reduced 2-D IIR fan filter; (e) impulse response of the 2-D IIR fan filter; (f) magnitude contour of the reduced fan filter.
Figure 5: Magnitude response of the 2-D FIR and IIR fan filters described in Section 5.4.
Figure 6: Original and filtered plane wave images using the 2-D FIR fan filter and their spectra.
Figure 7: Original and filtered plane wave images using the reduced-order 2-D IIR fan filter and their spectra.
12 pages, 1236 KiB  
Technical Note
Evaluating the Feasibility of Euler Angles for Bed-Based Patient Movement Monitoring
by Jonathan Mayer, Rejath Jose, Gregory Kurgansky, Paramvir Singh, Chris Coletti, Timothy Devine and Milan Toma
Signals 2023, 4(4), 788-799; https://doi.org/10.3390/signals4040043 - 14 Nov 2023
Viewed by 1315
Abstract
In the field of modern healthcare, technology plays a crucial role in improving patient care and ensuring their safety. One area where advancements can still be made is in alert systems, which provide timely notifications to hospital staff about critical events involving patients. These early warning systems allow for swift responses and appropriate interventions when needed. A commonly used patient alert technology is nurse call systems, which empower patients to request assistance using bedside devices. Over time, these systems have evolved to include features such as call prioritization, integration with staff communication tools, and links to patient monitoring setups that can generate alerts based on vital signs. There is currently a shortage of smart systems that use sensors to inform healthcare workers about the activity levels of patients who are confined to their beds. Current systems mainly focus on alerting staff when patients become disconnected from monitoring machines. In this technical note, we discuss the potential of utilizing cost-effective sensors to monitor and evaluate typical movements made by hospitalized bed-bound patients. To improve the care provided to unaware patients further, healthcare professionals could benefit from implementing trigger alert systems that are based on detecting patient movements. Such systems would promptly notify mobile devices or nursing stations whenever a patient displays restlessness or leaves their bed urgently and requires medical attention.
(This article belongs to the Special Issue Advanced Methods of Biomedical Signal Processing)
Show Figures
Figure 1: Illustration of pitch, roll, and yaw relative to a patient in both supine and sitting positions.
Figure 2: Procrustes shape analysis applied to a set of landmarks. The shapes have been aligned after translation, rotation, and scaling, revealing underlying patterns of shape variation. Statistical analysis can now be performed to explore shape differences and relationships.
Figure 3: Graphics explaining the comparative analyses performed.
Figure 4: Euler angles around the vertical axis (i.e., roll, as demonstrated in Figure 1) representing different body movements in bed, contrasted with the standard motion of rolling over (blue dot-dashed line): (a) rollover vs. drop from the bed; (b) rollover vs. roll to the side; (c) rollover vs. seizure episode; (d) rollover vs. head drop.
Figure 5: Euler angles around the frontal axis (i.e., pitch, as demonstrated in Figure 1) representing different body movements in bed, contrasted with the standard motion of rolling over (blue dot-dashed line): (a) rollover vs. drop from the bed; (b) rollover vs. roll to the side; (c) rollover vs. seizure episode; (d) rollover vs. head drop.
Figure 6: The values indicating differences between body movements are obtained through the comparison of each movement with the others. The five types of movements evaluated in this study are rolling over from the back to the stomach (A), dropping from bed (B), rolling to the side (C), experiencing a seizure (D), and head drops (E).
Figure 7: Dissimilarity values of varying body movements in bed when compared against each other; e.g., the first row of panel (a) represents rollover measurement #1 compared to four other rollover measurements.
Figure 8: An instance of a particular bodily motion executed in bed (rollover 3 as the reference data set) and the corresponding dissimilarity values obtained from comparing it to other similar rollovers versus contrasting it with diverse body movements.
Figure 9: To identify movements in bed, dissimilarity values yielded by a Procrustes analysis are compared when calculated using the Euler angles associated with different rollovers (Figure 7a) versus the Euler angles associated with other bodily movements. Such relatively large differences suggest that these dissimilarity values have potential as markers for detecting specific forms of movement within a bed environment.
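The Procrustes comparison behind Figures 7-9, aligning two landmark sets by removing translation, scale, and rotation and then measuring the residual difference, has a closed form in two dimensions. The sketch below is an illustrative rotation-only implementation (some variants also allow reflection) and is not the authors' code.

```python
import math

def procrustes_2d(a, b):
    """Procrustes dissimilarity between two equal-length 2-D landmark sets.

    Each set is centred, scaled to unit norm, and optimally rotated onto the
    other; the return value is the residual sum of squared differences
    (0 means the shapes are identical up to translation, scale, and rotation).
    """
    def normalize(points):
        cx = sum(x for x, _ in points) / len(points)
        cy = sum(y for _, y in points) / len(points)
        centred = [(x - cx, y - cy) for x, y in points]
        s = math.sqrt(sum(x * x + y * y for x, y in centred))
        return [(x / s, y / s) for x, y in centred]

    a, b = normalize(a), normalize(b)
    # Optimal rotation in 2-D has a closed form via these two inner products.
    c = sum(xa * xb + ya * yb for (xa, ya), (xb, yb) in zip(a, b))
    s = sum(xa * yb - ya * xb for (xa, ya), (xb, yb) in zip(a, b))
    return 2 - 2 * math.hypot(c, s)
```

A transformed copy of a shape scores near zero, while a genuinely different shape scores higher, which is exactly the property used to separate rollovers from other bed movements.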
20 pages, 686 KiB  
Article
High-Quality and Reproducible Automatic Drum Transcription from Crowdsourced Data
by Mickaël Zehren, Marco Alunno and Paolo Bientinesi
Signals 2023, 4(4), 768-787; https://doi.org/10.3390/signals4040042 - 10 Nov 2023
Cited by 1 | Viewed by 2024
Abstract
Within the broad problem known as automatic music transcription, we considered the specific task of automatic drum transcription (ADT). This is a complex task that has recently shown significant advances thanks to deep learning (DL) techniques. Most notably, massive amounts of labeled data obtained from crowds of annotators have made it possible to implement large-scale supervised learning architectures for ADT. In this study, we explored the untapped potential of these new datasets by addressing three key points: First, we reviewed recent trends in DL architectures and focused on two techniques, self-attention mechanisms and tatum-synchronous convolutions. Then, to mitigate the noise and bias that are inherent in crowdsourced data, we extended the training data with additional annotations. Finally, to quantify the potential of the data, we compared many training scenarios by combining up to six different datasets, including zero-shot evaluations. Our findings revealed that crowdsourced datasets outperform previously utilized datasets, and regardless of the DL architecture employed, they are sufficient in size and quality to train accurate models. By fully exploiting this data source, our models produced high-quality drum transcriptions, achieving state-of-the-art results. Thanks to this accuracy, our work can be more successfully used by musicians (e.g., to learn new musical pieces by reading, or to convert their performances to MIDI) and researchers in music information retrieval (e.g., to retrieve information from the notes instead of audio, such as the rhythm or structure of a piece).
(This article belongs to the Topic Research on the Application of Digital Signal Processing)
Show Figures
Figure 1: Detailed architecture of the frame-synchronous CNN encoder (left) and the tatum-synchronous CNN encoder (right). Each architecture has two stacks of two convolutional layers (cyan) with batch normalization, followed by max-pooling and dropout layers. Tatum synchronicity is achieved with max-pooling on the frame dimension (blue).
Figure 2: Detailed architecture of the RNN decoder (left) and the self-attention decoder (right). The RNN consists of three layers of bi-directional gated recurrent units (green); the self-attention mechanism consists of L stacks of multi-head self-attention (orange).
Figure 3: A beat divided into 12 even intervals accommodates 16th notes and 16th-note triplets.
Figure 4: Distribution of the tatum intervals, derived from Madmom's beats subdivided 12 times, for each dataset.
Figure 5: Genre distribution for both ADTOF datasets.
Figure A1: F-measure of the frame RNN model with varying hit-rate tolerance on both ADTOF datasets, before and after alignment.
22 pages, 3715 KiB  
Article
Radix-2² Algorithm for the Odd New Mersenne Number Transform (ONMNT)
by Yousuf Al-Aali, Mounir T. Hamood and Said Boussakta
Signals 2023, 4(4), 746-767; https://doi.org/10.3390/signals4040041 - 23 Oct 2023
Viewed by 1412
Abstract
This paper introduces a new derivation of the radix-2² fast algorithm for the forward odd new Mersenne number transform (ONMNT) and the inverse odd new Mersenne number transform (IONMNT). This involves introducing new equations and functions in finite fields, bringing particular challenges unlike those in other fields. The radix-2² algorithm combines the benefits of the reduced number of operations of the radix-4 algorithm and the simple butterfly structure of the radix-2 algorithm, making it suitable for various applications such as lightweight ciphers, authenticated encryption, hash functions, signal processing, and convolution calculations. The multidimensional linear index mapping technique is the conventional method used to derive the radix-2² algorithm. However, this method does not provide clear insights into the underlying structure and flexibility of the radix-2² approach. This paper addresses this limitation and proposes a derivation based on bit-unscrambling techniques, which reverse the ordering of the output sequence, resulting in efficient calculations with fewer operations. Butterfly and signal flow diagrams are also presented to illustrate the structure of the fast algorithm for both ONMNT and IONMNT. The proposed method should pave the way for efficient and flexible implementation of ONMNT and IONMNT in applications such as lightweight ciphers and signal processing. The algorithm has been implemented in C and is validated with an example. Full article
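The building block of both the radix-2 and radix-2² structures is the add/subtract butterfly, here carried out in a finite field modulo a Mersenne number rather than over the complex numbers. A minimal illustration (the modulus 2⁷ − 1 = 127 is chosen only for this example and is not a parameter from the paper):

```python
M = (1 << 7) - 1  # illustrative Mersenne modulus: 2^7 - 1 = 127

def butterfly(a, b, mod=M):
    """Radix-2 butterfly over Z_mod: the sum corresponds to the solid
    line and the difference to the dashed line in the paper's
    signal-flow-diagram convention."""
    return (a + b) % mod, (a - b) % mod

# A radix-2^2 stage fuses two such butterfly layers, keeping the simple
# radix-2 structure while matching the operation count of radix-4.
```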
Figures:
Figure 1: An in-place butterfly diagram of the radix-2² ONMNT algorithm; solid lines and dashed lines represent addition and subtraction operations, respectively.
Figure 2: A signal flow graph of radix-2² ONMNT for the transform length N = 16; solid lines and dashed lines represent addition and subtraction operations, respectively.
Figure 3: An in-place butterfly diagram of the radix-2² IONMNT algorithm; solid lines and dashed lines represent addition and subtraction operations, respectively.
Figure 4: Signal flow graph of radix-2² IONMNT for the transform length N = 16; solid lines and dashed lines represent addition and subtraction operations, respectively.
Figure 5: Comparison of total calculations for direct calculations, radix-2 and radix-2².
21 pages, 1191 KiB  
Article
Restoration for Intensity Nonuniformities with Discontinuities in Whole-Body MRI
by Stathis Hadjidemetriou, Ansgar Malich, Lorenz Damian Rossknecht, Luca Ferrarini and Ismini E. Papageorgiou
Signals 2023, 4(4), 725-745; https://doi.org/10.3390/signals4040040 - 18 Oct 2023
Cited by 1 | Viewed by 2295
Abstract
The reconstruction in MRI assumes a uniform radio-frequency field. However, this assumption is violated by coil field nonuniformity and sensitivity variations. In whole-body MRI, the nonuniformities are more complex because imaging uses multiple coils that typically have different overall sensitivities, resulting in sharp sensitivity changes at the junctions between adjacent coils. These lead to images with anatomically inconsequential intensity nonuniformities, including jump discontinuities at the junctions between adjacent coils. The body is also imaged with multiple contrasts, which produces images with different nonuniformities. A method is presented for the joint intensity uniformity restoration of two such images to achieve intensity homogenization. The effect of the spatial intensity distortion on the auto-co-occurrence statistics of each image, as well as on the joint-co-occurrence statistics of the two images, is modeled in terms of Point Spread Functions (PSFs). The PSFs and their non-stationary deconvolution from the statistics offer posterior Bayesian expectation estimates of the nonuniformity with Bayesian coring. Subsequently, a piecewise smoothness constraint is imposed on the nonuniformity. This uses non-isotropic smoothing of the restoration field to allow the modeling of junction discontinuities. The implementation of the restoration method is iterative and imposes stability and validity constraints on the nonuniformity estimates. The effectiveness and accuracy of the method are demonstrated extensively with whole-body MRI image pairs of thirty-one cancer patients. Full article
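The paper's restoration is driven by co-occurrence statistics and Bayesian coring, but its underlying assumption, that the nonuniformity field varies slowly compared with anatomy, can be illustrated with a much simpler log-domain smoothing estimate. The sketch below is a generic illustration of that assumption only, not the authors' method:

```python
import numpy as np

def estimate_bias_1d(profile, width=100):
    """Estimate a slowly varying multiplicative bias along a 1-D
    intensity profile by smoothing in the log domain, where a
    multiplicative bias becomes additive."""
    logs = np.log(np.maximum(profile, 1e-6))
    kernel = np.ones(width) / width
    return np.exp(np.convolve(logs, kernel, mode="same"))

# corrected = profile / estimate_bias_1d(profile)
# Note: a plain smoothing estimate like this cannot model the jump
# discontinuities at coil junctions that the paper's non-isotropic
# smoothing is specifically designed to preserve.
```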
Figures:
Figure 1: Overview of the non-parametric Bayesian coring derivation of the posterior conditional expectation. The conditional expectation is further expanded with Bayes’ rule. The expressions for the prior and the likelihood are then substituted. It provides the intermediate vector field û = E(u|v, x) for image restoration.
Figure 2: First example of joint restoration of a T1w TSE and a T1w + T2w STIR. The cumulative restoration fields account for the spatial variations of the coil sensitivities, their different overall sensitivities, and the junctions between them. (a) Initial T1w TSE image; (b) cumulative restoration field for TSE; (c) restored T1w TSE image; (d) initial T1w + T2w STIR image; (e) cumulative restoration field for STIR; (f) restored T1w + T2w STIR image.
Figure 3: Co-occurrence statistics of the original and restored images of the first example in Figure 2. The restored statistics are sharper, and the distributions of the different tissues are better shown in the restored joint-co-occurrence statistics in (b). (a) Original joint-co-occurrence statistics; (b) restored joint-co-occurrence statistics; (c) original T1w TSE co-occurrence statistics; (d) restored T1w TSE co-occurrence statistics; (e) original T2w STIR co-occurrence statistics; (f) restored T2w STIR co-occurrence statistics.
Figure 4: Second example of joint restoration of a T1w TSE and a T1w + T2w STIR. The cumulative restoration fields account for the spatial variations of the coil sensitivities, their different overall sensitivities, and the junctions between them. (a) Initial T1w TSE image; (b) cumulative restoration field for TSE; (c) restored T1w TSE image; (d) initial T1w + T2w STIR image; (e) cumulative restoration field for STIR; (f) restored T1w + T2w STIR image.
Figure 5: Co-occurrence statistics of the original and restored images of the second example in Figure 4. The restored joint-co-occurrence statistics better show the distributions of the different tissues, and the tissue distributions are also improved in the restored co-occurrence statistics. (a) Original joint-co-occurrence statistics; (b) restored joint-co-occurrence statistics; (c) original T1w TSE co-occurrence statistics; (d) restored T1w TSE co-occurrence statistics; (e) original T2w STIR co-occurrence statistics; (f) restored T2w STIR co-occurrence statistics.
17 pages, 6326 KiB  
Article
Quantitative Electroencephalography: Cortical Responses under Different Postural Conditions
by Marco Ivaldi, Lorenzo Giacometti and David Conversi
Signals 2023, 4(4), 708-724; https://doi.org/10.3390/signals4040039 - 18 Oct 2023
Viewed by 1291
Abstract
In this study, the alpha and beta spectral frequency bands and amplitudes of EEG signals recorded from 10 healthy volunteers using an experimental cap with neoprene jacketed electrodes were analysed. Background: One of the main limitations in the analysis of EEG signals during movement is the presence of artefacts due to cranial muscle contraction; the objectives of this study therefore focused on two main aspects: (1) validating a tool capable of decreasing movement artefacts, while developing a reliable method for the quantitative analysis of EEG data; (2) using this method to analyse the EEG signal recorded during a particular motor activity (bi- and monopodalic postural control). Methods: The EEG sampling frequency was 512 Hz; the signal was acquired on 16 channels with monopolar montage and the reference on Cz. The recorded signals were processed using a specifically written Matlab script and also by exploiting open-source software (Eeglab). Results: The procedure used showed excellent reliability, allowing for a significant decrease in movement artefacts even during motor tasks performed both with eyes open and with eyes closed. Conclusions: This preliminary study lays the foundation for correctly recording EEG signals as an additional source of information in the study of human movement. Full article
(This article belongs to the Special Issue Advancing Signal Processing and Analytics of EEG Signals)
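The alpha/beta comparisons described above reduce, per channel, to estimating band power from the 512 Hz EEG recordings. A minimal periodogram-based sketch of that computation (a simple stand-in for the study's Matlab/Eeglab processing; the band edges and function name are illustrative):

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    """Integrate the periodogram of x over the band [f_lo, f_hi) Hz."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x))
    return psd[(freqs >= f_lo) & (freqs < f_hi)].sum()

fs = 512                         # sampling rate used in the study
t = np.arange(4 * fs) / fs       # a 4 s epoch
eeg = np.sin(2 * np.pi * 10 * t)     # synthetic 10 Hz alpha rhythm
alpha = band_power(eeg, fs, 8, 13)   # alpha band
beta = band_power(eeg, fs, 13, 30)   # beta band; near zero for this signal
```

In practice, averaging over windowed segments (Welch's method) gives a lower-variance estimate than this single periodogram.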
Figures:
Figure 1: Detail of the experimental cap. Notice the introduction of the conductive gel into the electrode in the neoprene jacket.
Figure 2: Representative diagram of the construction of the experimental electrode.
Figure 3: Electrode placement in accordance with the 10–20 system.
Figure 4: Example of bipodalic posture performed on the stabilometric platform.
Figure 5: Seated eyes-closed (OC) task frequency spectrum example (frequency domain).
Figure 6: Seated eyes-open (OA) task frequency spectrum example (frequency domain).
Figure 7: (a,b) Significance between OA and OC in alpha band and beta band (ARV).
Figure 8: (a,b) Significance between BIPOA and BIPOC in alpha band and beta band (ARV).
Figure 9: (a,b) Significance between MONOA and MONOC in alpha band and beta band (ARV).
Figure 10: (a,b) Significance between OA and OC in alpha band and beta band (PSD).
Figure 11: (a,b) Significance between BIPOA and BIPOC in alpha band and beta band (PSD).
Figure 12: (a,b) Significance between MONOA and MONOC in alpha band and beta band (PSD).
Figure 13: (a,b) Average ARV of all channels in the alpha and beta bands. * = p < 0.05; ** = p < 0.01; *** = p < 0.001.
Figure 14: (a,b) Average PSD of all channels in the alpha and beta bands. * = p < 0.05; ** = p < 0.01; *** = p < 0.001.
Figure 15: Brodmann area 17 (primary visual cortex) is shown in red in this image, area 18 in orange and area 19 in yellow.
21 pages, 11269 KiB  
Review
Exploitation Techniques of IoST Vulnerabilities in Air-Gapped Networks and Security Measures—A Systematic Review
by Razi Hamada and Ievgeniia Kuzminykh
Signals 2023, 4(4), 687-707; https://doi.org/10.3390/signals4040038 - 13 Oct 2023
Cited by 1 | Viewed by 2353
Abstract
IP cameras and digital video recorders, as part of the Internet of Surveillance Things (IoST) technology, can sometimes allow unauthenticated access to the video feed or management dashboard. These vulnerabilities may result from weak APIs, misconfigurations, or hidden firmware backdoors. What is particularly concerning is that these vulnerabilities can stay unnoticed for extended periods, spanning weeks, months, or even years, until a malicious attacker decides to exploit them. The response actions in case of identifying the vulnerability, such as updating software and firmware for millions of IoST devices, might be challenging and time-consuming. Implementing an air-gapped video surveillance network, which is isolated from the internet and external access, can reduce the cybersecurity threats associated with internet-connected IoST devices. However, such networks can also be susceptible to other threats and attacks, which need to be explored and analyzed. In this work, we perform a systematic literature review on the current state of research and use cases related to compromising and protecting cameras in logical and physical air-gapped networks. We provide a network diagram for each mode of exploitation, discuss the vulnerabilities that could result in a successful attack, demonstrate the potential impacts on organizations in the event of IoST compromise, and outline the security measures and mechanisms that can be deployed to mitigate these security risks. Full article
(This article belongs to the Special Issue Internet of Things for Smart Planet: Present and Future)
Figures:
Figure 1: Number of papers published each year related to our study (a) after removing duplicates and before IC and EC, and (b) selected for analysis.
Figure 2: The topology of non-air-gapped (a), physically air-gapped (b) and logically air-gapped (c) networks [24].
Figure 3: Exploitation diagram for network misconfiguration in a logically air-gapped network.
Figure 4: Results of search for open API server ports with possible unauthenticated access.
Figure 5: Use case of Yellowstone regional airport, human error in network misconfiguration.
Figure 6: Exploitation diagram for the evil twin exploitation technique in a logically air-gapped network.
Figure 7: Exploitation diagram for the supply chain exploitation technique in a logically air-gapped network.
Figure 8: Exploitation diagram for the Cyber Kill Chain exploitation technique in a logically air-gapped network.
Figure 9: Framework for identifying the security measures in Cyber Kill Chain attacks.
Figure 10: Diagram of the in-built exploitation technique in a logically air-gapped network.
Figure 11: Diagram of the IR LED exploitation technique in a physically air-gapped network.
18 pages, 8959 KiB  
Article
Beyond Staircasing Effect: Robust Image Smoothing via ℓ0 Gradient Minimization and Novel Gradient Constraints
by Ryo Matsuoka and Masahiro Okuda
Signals 2023, 4(4), 669-686; https://doi.org/10.3390/signals4040037 - 26 Sep 2023
Cited by 1 | Viewed by 1720
Abstract
In this paper, we propose robust image-smoothing methods based on ℓ0 gradient minimization with novel gradient constraints to effectively suppress pseudo-edges. Simultaneously minimizing the ℓ0 gradient, i.e., the number of nonzero gradients in an image, and the ℓ2 data fidelity results in a smooth image. However, this optimization often leads to undesirable artifacts, such as pseudo-edges, known as the “staircasing effect”, and halos, which become more visible in image enhancement tasks, like detail enhancement and tone mapping. To address these issues, we introduce two types of gradient constraints: box and ball. These constraints are applied using a reference image (e.g., the input image is used as a reference for image smoothing) to suppress pseudo-edges in homogeneous regions and the blurring effect around strong edges. We also present an ℓ0 gradient minimization problem based on the box-/ball-type gradient constraints using an alternating direction method of multipliers (ADMM). Experimental results on important applications of ℓ0 gradient minimization demonstrate the advantages of our proposed methods compared to existing ℓ0 gradient-based approaches. Full article
(This article belongs to the Topic Research on the Application of Digital Signal Processing)
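The ℓ0-gradient objective min_u ‖u − f‖² + λ‖∇u‖₀ is non-convex, but the standard half-quadratic splitting alternates a closed-form hard-thresholding step on the gradients with an FFT-domain least-squares solve. The 1-D sketch below illustrates that baseline scheme only (without the paper's box/ball constraints or its ADMM formulation); all parameter values are illustrative:

```python
import numpy as np

def l0_smooth_1d(f, lam=0.02, kappa=2.0, beta_max=1e4):
    """Baseline l0 gradient smoothing of a 1-D signal via half-quadratic
    splitting with circular forward differences (Xu-et-al-style scheme)."""
    f = np.asarray(f, dtype=float)
    n = len(f)
    Fk = np.fft.fft(f)
    Dk = np.exp(2j * np.pi * np.arange(n) / n) - 1.0  # DFT of forward diff
    u, beta = f.copy(), 2.0 * lam
    while beta < beta_max:
        g = np.roll(u, -1) - u                         # current gradients Du
        h = np.where(g * g > lam / beta, g, 0.0)       # hard threshold (l0 step)
        rhs = Fk + beta * np.conj(Dk) * np.fft.fft(h)  # least-squares u-step
        u = np.real(np.fft.ifft(rhs / (1.0 + beta * np.abs(Dk) ** 2)))
        beta *= kappa
    return u
```

This unconstrained baseline is exactly the setting in which pseudo-edges appear in flat-but-noisy regions, which is what the paper's reference-image gradient constraints are introduced to suppress.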
Figures:
Figure 1: Example of image smoothing: the black line, the dotted blue line, the dotted green line, and the dotted red line show the gradient signals of the input image, the smoothing results of the ℓ0 gradient minimization [31], the ℓ0 gradient projection [37], and ours with the box-type gradient constraints, respectively. Each gradient signal is the vertical first-order gradient of the G channel. The red arrows indicate pseudo-edges.
Figure 2: Examples of the proposed gradient constraints: the vertical and horizontal axes d_v and d_h represent gradient values of the vertical and horizontal directions, respectively. The blue solid arrow and the dotted red arrow indicate the gradient vectors of the reference and ideal images, respectively. The dotted blue box and circle represent areas that satisfy the gradient constraints. (a) Box-type gradient constraint. (b) Ball-type gradient constraint.
Figure 3: Ground truth and input images on zero-mean Gaussian noise removal (σ = 5.0 × 10⁻³). (a) Ground truth. (b) Input image.
Figure 4: Noise removal results and their PSNR values [dB]: (from left to right) (a1–4) reference images, (b1–4) our image with the box-type constraint, and (c1–4) our image with the ball-type one.
Figure 5: Detail enhancement results 1: (upper) smoothing results and (lower) detail enhancement results. (a) Input. (b) ℓ0 gradient minimization [31]. (c) ℓ0 gradient projection [37]. (d) Ours. ©2018 IEEE. Reprinted, with permission, from [62].
Figure 6: Detail enhancement results 2: (upper) smoothing results and (lower) detail enhancement results. (a) Input. (b) ℓ0 gradient minimization [31]. (c) ℓ0 gradient projection [37]. (d) Ours. ©2018 IEEE. Reprinted, with permission, from [62].
Figure 7: Tone-mapping results. (a) Global operator [64]. (b) ℓ0 gradient minimization [31]. (c) ℓ0 gradient projection [37]. (d) Ours. ©2018 IEEE. Reprinted, with permission, from [62].
Figure 8: Tone-mapping results obtained by different smoothing parameters and their MSE values: (a) Reinhard et al.’s global operator [64]; (b–d) are obtained by Durand and Dorsey’s tone-mapping framework [7] with (·-1) ℓ0 gradient minimization [31] and (·-2) our methods. ©2018 IEEE. Reprinted, with permission, from [62].
Figure 9: Comparison of PSNR and SSIM for clip-art JPEG artifact removal. (a) Peak signal-to-noise ratio (PSNR) [dB]. (b) Structural similarity (SSIM) [65].
Figure 10: Results of clip-art JPEG artifact removal and their PSNR [dB]. (a) Ground truth, (b) input image, (c-1) [31], 31.51, (c-2) [31], 32.18, (d-1) [37], 31.19, (d-2) [37], 31.89, (e-1) ours, 31.81, (e-2) ours, 32.23.
18 pages, 2589 KiB  
Article
Improved RSSI Indoor Localization in IoT Systems with Machine Learning Algorithms
by Madduma Wellalage Pasan Maduranga, Valmik Tilwari and Ruvan Abeysekera
Signals 2023, 4(4), 651-668; https://doi.org/10.3390/signals4040036 - 25 Sep 2023
Cited by 8 | Viewed by 2512
Abstract
Recent developments in machine learning algorithms are playing a significant role in wireless communication and Internet of Things (IoT) systems. Location-based Internet of Things services (LBIoTS) are considered one of the primary IoT applications. The key information involved in LBIoTS is an object’s geographical location. The Global Positioning System (GPS) performs poorly in indoor environments due to multipath propagation. Numerous methods have been investigated for indoor localization scenarios. However, precise location estimation of a moving object in such applications is challenging due to high signal fluctuations. Therefore, this paper presents machine learning algorithms that estimate an object’s location from the Received Signal Strength Indicator (RSSI) values collected through Bluetooth Low Energy (BLE) nodes. In this experiment, we utilize a publicly available RSSI dataset. The RSSI data are collected, with labels, from different BLE iBeacon nodes installed in a complex indoor environment. The RSSI data are then linearized using the weighted least-squares method and filtered using moving average filters. Machine learning algorithms are used for training and testing on the dataset to estimate the precise location of the objects. All the proposed algorithms were tested and evaluated under different hyperparameters. The tested models provided approximately 85% accuracy for KNN, 84% for SVM, and 76% for FFNN. Full article
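The pipeline described above, smoothing the raw RSSI streams and then classifying fingerprint vectors, can be sketched with a moving-average filter and a small nearest-neighbour classifier. The fingerprint values below are made up for illustration and are not from the cited dataset:

```python
import numpy as np

def moving_average(rssi, w=5):
    """Simple FIR smoothing of a noisy RSSI stream (dBm)."""
    return np.convolve(rssi, np.ones(w) / w, mode="valid")

def knn_predict(train_X, train_y, query, k=3):
    """Majority vote among the k closest RSSI fingerprints."""
    dist = np.linalg.norm(train_X - query, axis=1)
    nearest = train_y[np.argsort(dist)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

# Two-beacon fingerprints for two zones (hypothetical dBm values).
X = np.array([[-40, -70], [-42, -68], [-41, -69],
              [-70, -40], [-68, -42], [-69, -41]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])
zone = knn_predict(X, y, np.array([-41.0, -70.0]))  # query near zone 0
```

Filtering before classification matters because raw indoor RSSI can fluctuate by several dB between consecutive samples, which is exactly the instability the abstract highlights.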
Show Figures

Figure 1: Arrangement of anchor node and target nodes.
Figure 2: Arrangement of the sensor nodes in the testbed [8].
Figure 3: RSSI data in the frequency and time domains.
Figure 4: Testing accuracy vs. number of epochs.
Figure 5: Training accuracy vs. number of epochs.
Figure 6: Training time of the various machine learning algorithms.
Figure 7: Accuracy of the various machine learning algorithms.
Figure 8: F1-score of the various machine learning algorithms.
Figure 9: Accuracy vs. number of samples, for different numbers of vectors.
Figure 10: Accuracy vs. number of samples, for different k values.
7 pages, 3207 KiB  
Communication
Localizing the First Interstellar Meteor with Seismometer Data
by Amir Siraj and Abraham Loeb
Signals 2023, 4(4), 644-650; https://doi.org/10.3390/signals4040035 - 25 Sep 2023
Cited by 4 | Viewed by 4364
Abstract
The first meter-scale interstellar meteor (IM1) was detected by US government sensors in 2014, identified as an interstellar object candidate in 2019, and confirmed by the Department of Defense (DoD) in 2022. We use data from a nearby seismometer to localize the fireball to a ∼16 km² region within the ∼120 km² zone allowed by the precision of the DoD-provided coordinates. The improved localization is of great importance for a forthcoming expedition to retrieve the meteor fragments. Full article
(This article belongs to the Special Issue Enabling a More Prosperous Space Era: A Signal Processing Perspective)
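The core geometric step in such a localization — converting the delay between the optical fireball and the first air-mediated seismic arrival into a ground distance — can be sketched as below. This is a simplified illustration, not the paper’s model: it assumes straight-line propagation at a single effective sound speed (the value of `c_eff` is an assumed average over the path, not a figure from the paper), whereas the paper also fits ocean-surface reflections and the signal’s peak timing.

```python
import math

def ground_distance(delay_s, altitude_m, c_eff=320.0):
    """Horizontal (ground) distance to a fireball at the given altitude,
    from the delay between the flash and the first air-mediated seismic
    arrival, assuming a straight ray at effective sound speed c_eff (m/s).
    """
    slant = c_eff * delay_s  # total acoustic path length in metres
    if slant < altitude_m:
        raise ValueError("delay too short for this altitude")
    # Pythagoras: slant^2 = r^2 + z^2, solved for the ground distance r.
    return math.sqrt(slant**2 - altitude_m**2)
```

With two timing constraints (first arrival and peak, as in the paper’s Figure 2), both the ground distance r and the altitude z can in principle be solved for jointly; the one-equation version above recovers only r given an assumed z.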
Show Figures

Figure 1: Schematic showing the geometry of the meteor fireball and the seismometer. Sound rays can travel directly through the air or reflect off the surface of the ocean. The illustrated paths are not meant to be accurate depictions of individual sound waves, and the symbol designating the seismometer does not indicate its height. By summing over the different paths, we are able to fit the recorded sound signal for particular values of the horizontal distance (r) and altitude above sea level (z).
Figure 2: AU MANU seismic signal as a probability function of counts normalized to unit area. The green region (270.5–271.5 s after the fireball) indicates the constraint for the arrival of the air-mediated sound waves, and the orange region (296–297 s after the fireball) illustrates the constraint for the peak signal produced by the air-mediated sound waves. The first, second, and third gray lines correspond to the first, second, and third peaks in the IM1 light curve [2].
Figure 3: The second packet of the seismic signal, with the counts normalized to unity. The orange curve indicates the result of the sound-wave reflection model described in the text for a ground distance of r = 83.9 ± 0.7 km and an altitude of z = 16.9 ± 0.9 km.
Figure 4: Geographical area of interest, with the location of the AU MANU seismometer and the DoD-specified fireball location highlighted.
Figure 5: The DoD-reported location is represented by the “plus” at the center of the plot, with the 0.1° × 0.1° box corresponding to the area allowed by the precision of the DoD-reported coordinates. The AU MANU distance constraint (projected onto the surface of the ocean) is illustrated in blue, with the ∼16 km² portion that agrees with the DoD-reported coordinates (given their level of precision) highlighted in red. The purple region indicates the constraint given by the AU COEN seismic signal, which is fully consistent with the AU MANU distance constraint. The gray arrow indicates the direction the meteor was traveling in, according to DoD data.