Sensors, Volume 20, Issue 4 (February-2 2020) – 289 articles

Cover Story: Driver drowsiness is a major cause of traffic accidents. Automated driving might counteract this problem, but in the lower levels of automation, the driver is still responsible as a fallback. Therefore, reliable drowsiness detection systems are required. Techniques that use physiological signals seem to be especially promising. However, in a dynamic driving environment, only non- or minimally intrusive methods are accepted, and vibrations could lead to reduced sensor performance. In our work, encouraged by the progress in the development of wrist-worn wearables, their suitability in an automotive environment was investigated. We propose a drowsiness detection system with a machine learning approach applied solely to physiological data from a wrist-worn wearable sensor. The use of wrist-worn wearables inside a vehicle would enable the recording of physiological signals in a way that is familiar to the [...]
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
12 pages, 3307 KiB  
Article
Low Noise Interface ASIC of Micro Gyroscope with Ball-disc Rotor
by Mingyuan Ren, Honghai Xu, Xiaowei Han, Changchun Dong and Xuebin Lu
Sensors 2020, 20(4), 1238; https://doi.org/10.3390/s20041238 - 24 Feb 2020
Cited by 6 | Viewed by 3594
Abstract
A low-noise interface ASIC for a micro gyroscope with a ball-disc rotor is realized in 0.5 µm CMOS technology. The interface circuit utilizes a transimpedance pre-amplifier, which reduces the input noise. The proposed interface achieves a noise density of 0.003°/s/√Hz and a sensitivity of 0.003°/s over a ±100°/s measurement range. The functionality of the full circuit, including circuit analysis, noise analysis, and measurement results, has been demonstrated.
(This article belongs to the Section Physical Sensors)
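The reported white-noise density can be translated into an RMS angular-rate resolution for a given measurement bandwidth by multiplying by the square root of that bandwidth. A minimal sketch of this standard conversion (the 10 Hz bandwidth is an assumed example, not a value from the paper):

```python
import math

def rms_rate_noise(noise_density_dps_rtHz: float, bandwidth_hz: float) -> float:
    """RMS angular-rate noise (deg/s) from a white noise density (deg/s/sqrt(Hz))."""
    return noise_density_dps_rtHz * math.sqrt(bandwidth_hz)

# Reported density: 0.003 deg/s/sqrt(Hz); assume a 10 Hz measurement bandwidth.
sigma = rms_rate_noise(0.003, 10.0)
```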
Show Figures

Figure 1. Mechanical structure of the micro gyroscope with ball-disc rotor.
Figure 2. Test of driving interference for the micro gyroscope with ball-disc rotor.
Figure 3. Frequency response of driving interference for the micro gyroscope.
Figure 4. Noise of the open-loop system.
Figure 5. Noise model of the transimpedance amplifier.
Figure 6. Overall realization scheme for the interface ASIC of the micro gyroscope with ball-disc rotor.
Figure 7. Schematic of the single-path detection principle of the micro gyroscope with ball-disc rotor.
Figure 8. Transimpedance structure of the charge-voltage conversion circuit.
Figure 9. Layout of the interface circuit for the micro gyroscope with ball-disc rotor.
Figure 10. Spectrum of system output noise.
19 pages, 10074 KiB  
Article
A Robust Detection Algorithm for Infrared Maritime Small and Dim Targets
by Yuwei Lu, Lili Dong, Tong Zhang and Wenhai Xu
Sensors 2020, 20(4), 1237; https://doi.org/10.3390/s20041237 - 24 Feb 2020
Cited by 21 | Viewed by 4033
Abstract
Infrared maritime target detection is the key technology of maritime target search systems. However, infrared images generally suffer from low signal-to-noise ratio and low resolution. At the same time, the maritime environment is complicated and changeable: under interference from islands, waves, and other disturbances, the brightness of small dim targets is easily obscured, which makes them difficult to distinguish and hard for traditional target detection algorithms to handle. To solve these problems, this paper analyzes infrared maritime images containing small dim targets under a variety of sea conditions and concludes that, in such images, small targets occupy very few pixels, often lack any edge contour information, and have very low gray and contrast values, whereas background elements such as islands and strong sea waves occupy a large number of pixels, show obvious texture features, and often have high gray values. By deeply analyzing the difference between target and background, this paper proposes a detection algorithm (SRGM) for infrared small dim targets under different maritime backgrounds. First, an efficient background filter for common maritime backgrounds is applied: a median filter based on sensitive-region selection extracts the image background accurately, and the background is then eliminated by differencing it with the original image. In addition, by analyzing the differences in gradient features between targets and strong background interference, a small dim target extraction operator with two analysis factors that fit the target features is proposed and combined with adaptive threshold segmentation to realize accurate extraction of small dim targets. The experimental results show that, compared with currently popular small dim target detection algorithms, the proposed method achieves better target detection performance in various maritime environments.
(This article belongs to the Section Physical Sensors)
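The background-subtraction step described in the abstract (median-filter background estimate, image differencing, adaptive threshold) can be sketched as follows. This is an illustrative reconstruction under stated assumptions (plain k × k median filter, mean-plus-k-sigma threshold), not the authors' SRGM implementation, which additionally uses sensitive-region selection and gradient analysis factors:

```python
import numpy as np

def median_background_subtract(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Estimate the background with a k x k median filter (edge-replicated
    padding), then subtract it so that small bright targets stand out."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    bg = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            bg[i, j] = np.median(padded[i:i + k, j:j + k])
    return img.astype(float) - bg

def adaptive_threshold(diff: np.ndarray, kfac: float = 4.0) -> np.ndarray:
    """Binary target mask: keep pixels more than kfac standard deviations
    above the mean of the background-subtracted image."""
    return diff > diff.mean() + kfac * diff.std()

# Synthetic 32x32 sea background with one 1-pixel "dim target".
rng = np.random.default_rng(0)
img = rng.normal(100.0, 2.0, (32, 32))
img[16, 16] += 25.0  # small dim target on top of the clutter
mask = adaptive_threshold(median_background_subtract(img))
```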
Show Figures

Figure 1. Results of three target detection algorithms in two typical maritime backgrounds: (a) the original images. In the upper image of (b–d), only the target on the left is detected; in the lower image of (b–d), the target is detected but island and wave interference remains.
Figure 2. Infrared maritime image including island and target: (a) the original image; (b) the gray distribution of each area.
Figure 3. Gray distribution of waves and target: (a) the original image (640 × 512); (b) the gray distribution of waves and target in two directions (upper: vertical; lower: horizontal).
Figure 4. Gradient distribution of waves and targets.
Figure 5. Infrared maritime images including small dim targets: the upper parts of (a–d) are the original images, and the lower parts are the gray distribution maps of each image.
Figure 6. Gray distribution of target and salt noise.
Figure 7. Flow chart of the proposed method.
Figure 8. Flow chart of the proposed algorithm (SRGM).
Figure 9. Flow chart of the proposed median filter based on sensitive-area acquisition.
Figure 10. Comparison of the effect of two kinds of filters for target extraction.
Figure 11. Result of using G_L and G_V in turn to analyze the gradient.
Figure 12. Experimental images: (a) two targets with different contrast on a calm sea; (b) three targets with different contrast on a sea with uneven brightness; (c) target on a sea with obvious waves; (d) two targets on the horizon line; (e) island interference in the image; (f) fog interference in the image; (g) extremely small and dim target.
Figure 13. Detection results of the five algorithms: (a) two targets with different contrast on a calm sea; (b) three targets with different contrast on a sea with uneven brightness; (c) target on a sea with obvious waves; (d) two targets on the horizon line; (e) island interference in the image; (f) fog interference in the image; (g) extremely small and dim target.
Figure 14. Five types of typical maritime images: (a) island interference in the image; (b) target on the horizon line; (c) target on the calm sea; (d) fog interference in the image; (e) target on the sea with obvious waves.
10 pages, 5540 KiB  
Article
Method to Determine the Far-Field Beam Pattern of a Long Array from Subarray Beam Pattern Measurements
by Donghwan Jung and Jeasoo Kim
Sensors 2020, 20(4), 1236; https://doi.org/10.3390/s20041236 - 24 Feb 2020
Cited by 1 | Viewed by 5950
Abstract
Beam pattern measurement is essential to verifying the performance of an array sonar. However, beam pattern measurement of arrays commonly faces constraints on achieving the far-field condition and obtaining plane waves, mainly due to limited measurement space, as in an acoustic water tank. The conventional method of measuring beam patterns in limited spaces, which transforms near-field measurement data into far-field results, addresses this, but it is time-consuming because of the dense spatial sampling required. Hence, we devised a method to measure the beam pattern of a discrete line array in a limited space based on the subarray method. In this method, a discrete line array whose measurement space does not satisfy the far-field condition is divided into several subarrays, and the beam pattern of the line array is then determined from the subarray measurements by spatial convolution, which is equivalent to beam pattern multiplication. The proposed method was verified through simulation and experimental measurement on a line array of 256 elements divided into 16 subarrays.
(This article belongs to the Special Issue Underwater Sensor Networks)
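The pattern-multiplication identity underlying the subarray method can be checked numerically: for a uniform line array of isotropic elements, the array factor of the full array equals the array factor of one subarray multiplied by the sampling function of the subarray centers. A sketch (element counts match the 256-element, 16-subarray case; the electrical angle is arbitrary; this is not the authors' code):

```python
import cmath

def array_factor(n_elems: int, psi: float, stride: int = 1) -> complex:
    """Array factor of n_elems isotropic elements with an inter-element
    phase of stride * psi, where psi = k * d * sin(theta)."""
    return sum(cmath.exp(1j * stride * psi * n) for n in range(n_elems))

M, P = 16, 16          # elements per subarray, number of subarrays (256 total)
psi = 0.7              # arbitrary electrical angle k*d*sin(theta)

full = array_factor(M * P, psi)         # full 256-element line array
sub = array_factor(M, psi)              # one 16-element subarray
samp = array_factor(P, psi, stride=M)   # sampling function of subarray centers

# Pattern multiplication: full pattern = subarray pattern x sampling function.
```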
Show Figures

Figure 1. Geometry of a uniform rectangular array, where each element is a red diamond.
Figure 2. Flowchart of the proposed method.
Figure 3. Array divided into 16 subarrays.
Figure 4. Beam pattern simulation for far-field and near-field (r = 1.22 m, 2.44 m, 4.88 m).
Figure 5. Beam pattern of subarray (a) and sampling function (b) in far-field.
Figure 6. Beam pattern estimated by the proposed method.
Figure 7. Beam pattern for the near field obtained by the proposed method and beam patterns for the far and near fields.
Figure 8. Example of raw data of the beam pattern of the subarray: (a) ex 1; (b) zoomed ex 1; (c) ex 2; (d) zoomed ex 2; (e) ex 3; (f) zoomed ex 3; (g) ex 4; (h) zoomed ex 4.
Figure 9. Example of filtered data: (a) ex 1; (b) zoomed ex 1; (c) ex 2; (d) zoomed ex 2; (e) ex 3; (f) zoomed ex 3; (g) ex 4; (h) zoomed ex 4.
Figure 10. Examples of subarray beam patterns obtained from near-field measurements: (a) ex 1; (b) ex 2; (c) ex 3; (d) ex 4; (e) ex 5.
Figure 11. Subarray beam pattern used to obtain the total beam pattern with the proposed method: (a) subarray beam pattern obtained from near-field measurements; (b) sampling beam pattern.
Figure 12. Beam patterns obtained from simulation and the proposed method.
12 pages, 744 KiB  
Article
A Comprehensive Machine-Learning-Based Software Pipeline to Classify EEG Signals: A Case Study on PNES vs. Control Subjects
by Giuseppe Varone, Sara Gasparini, Edoardo Ferlazzo, Michele Ascoli, Giovanbattista Gaspare Tripodi, Chiara Zucco, Barbara Calabrese, Mario Cannataro and Umberto Aguglia
Sensors 2020, 20(4), 1235; https://doi.org/10.3390/s20041235 - 24 Feb 2020
Cited by 17 | Viewed by 4795
Abstract
The diagnosis of psychogenic nonepileptic seizures (PNES) by means of electroencephalography (EEG) is not a trivial task in clinical practice for neurologists. No clear PNES electrophysiological biomarker has yet been found, and the only tools available for diagnosis are video EEG monitoring with the recording of a typical episode and the clinical history of the subject. In this paper, a data-driven machine learning (ML) pipeline for classifying EEG segments (i.e., epochs) of PNES patients and healthy controls (CNT) is introduced. This software pipeline consists of a semiautomatic signal processing technique and a supervised ML classifier to aid the clinical discriminative diagnosis of PNES from EEG time series. In our ML pipeline, statistical features such as the mean, standard deviation, kurtosis, and skewness are extracted from a power spectral density (PSD) map split into five conventional EEG bands (delta, theta, alpha, beta, and the whole band, i.e., 1–32 Hz). The feature vector is then fed into three different supervised ML algorithms, namely, the support vector machine (SVM), linear discriminant analysis (LDA), and Bayesian network (BN), to perform EEG segment classification of CNT vs. PNES. The performance of the pipeline was evaluated on a dataset of 20 EEG signals (10 PNES and 10 CNT) recorded in the eyes-closed resting condition at the Regional Epilepsy Centre, Great Metropolitan Hospital of Reggio Calabria, University of Catanzaro, Italy. The experimental results showed that PNES vs. CNT discrimination validated with random split (RS) achieved an average accuracy of 0.97 ± 0.013 (RS-SVM), 0.99 ± 0.02 (RS-LDA), and 0.82 ± 0.109 (RS-BN), while leave-one-out (LOO) validation achieved 0.98 ± 0.0233 (LOO-SVM), 0.98 ± 0.124 (LOO-LDA), and 0.81 ± 0.109 (LOO-BN). Our findings showed that BN was outperformed by SVM and LDA. The promising results of the proposed software pipeline suggest that it may be a valuable tool to support existing clinical diagnosis.
(This article belongs to the Special Issue Novel Approaches to EEG Signal Processing)
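The feature-extraction stage described above (four statistics over five PSD bands, giving a 20-dimensional feature vector per epoch) can be sketched as follows; the toy 1/f spectrum stands in for a Welch PSD estimate of a real EEG epoch:

```python
import numpy as np

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 32), "whole": (1, 32)}

def band_features(freqs: np.ndarray, psd: np.ndarray) -> list:
    """Mean, standard deviation, skewness, and kurtosis of the PSD in each
    of the five bands used in the paper (5 bands x 4 stats = 20 features)."""
    feats = []
    for lo, hi in BANDS.values():
        x = psd[(freqs >= lo) & (freqs < hi)]
        mu, sd = x.mean(), x.std()
        feats += [mu, sd,
                  ((x - mu) ** 3).mean() / sd ** 3,   # skewness
                  ((x - mu) ** 4).mean() / sd ** 4]   # kurtosis
    return feats

# Toy PSD on a 1-32 Hz grid; a real pipeline would estimate it with
# Welch's method from a recorded EEG epoch.
freqs = np.linspace(1, 32, 256)
psd = 1.0 / freqs  # 1/f-like spectrum as a stand-in
features = band_features(freqs, psd)
```

The resulting 20-element vector would then be fed to the SVM, LDA, or BN classifier.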
Show Figures

Figure 1. Software pipeline.
Figure 2. Receiver operating characteristic (ROC) curves of the three classifiers, namely, support vector machine (SVM), Bayesian network (BN), and linear discriminant analysis (LDA), (simulated data) with different area under the curve (AUC) values: (A,D) SVM, (B,E) BN, and (C,F) LDA. The classifiers were evaluated through two different methods: (A–C) random split and (D–F) leave-one-out (LOO).
14 pages, 3912 KiB  
Article
Batch Processing through Particle Swarm Optimization for Target Motion Analysis with Bottom Bounce Underwater Acoustic Signals
by Raegeun Oh, Taek Lyul Song and Jee Woong Choi
Sensors 2020, 20(4), 1234; https://doi.org/10.3390/s20041234 - 24 Feb 2020
Cited by 8 | Viewed by 3972
Abstract
Target angular information in three-dimensional space consists of an elevation angle and an azimuth angle. Acoustic signals propagating along multiple paths in underwater environments usually have different elevation angles. Target motion analysis (TMA) uses the underwater acoustic signals received by a passive horizontal line array to track an underwater target. The target angle measured by the horizontal line array is, in fact, a conical angle that indicates the direction of the signal arriving at the line array sonar system. Accordingly, bottom bounce paths produce inaccurate target locations if they are interpreted as azimuth angles in the horizontal plane, as is commonly assumed in existing TMA technologies. It is therefore necessary to consider the effect of the conical angle on bearings-only TMA (BO-TMA). In this paper, a target conical angle causing angular ambiguity is simulated using a ray tracing method in an underwater environment, and a BO-TMA method using particle swarm optimization (PSO) is proposed for batch processing to solve the angular ambiguity problem.
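The batch estimator can be illustrated with a minimal inertia-weight PSO. The cost function here is a placeholder squared error to a known four-component state [x, y, vx, vy]; in the paper, the cost would instead be the batch of conical-angle residuals between measured and predicted angles:

```python
import random

def pso(cost, dim, bounds, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal inertia-weight particle swarm optimizer; returns the best
    state found. A textbook PSO, not the authors' exact variant."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = cost(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Placeholder cost: squared error to an assumed initial state [x, y, vx, vy].
target = [0.0, 2500.0, 0.0, -3.0]
cost = lambda s: sum((a - b) ** 2 for a, b in zip(s, target))
best, best_f = pso(cost, dim=4, bounds=(-5000.0, 5000.0))
```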
Show Figures

Figure 1. Trajectories of (a) the target and (b) the observer in the horizontal plane.
Figure 2. Geometry between observer and target. φ_l and φ_n are the azimuth angles from due north and from the direction of the HLA, respectively. c_o, θ, and μ are the heading angle of the HLA, the conical angle, and the elevation angle in the vertical plane, respectively.
Figure 3. (a) Ray paths predicted by the ray tracing method based on (b) Munk's sound speed profile. Direct and bottom bounce paths are plotted with magenta and blue lines, respectively.
Figure 4. Bearing-time records (BTRs) of the scenario.
Figure 5. (a) Eigenray tracing result used to determine the expected target range. The distance at which the ray arrives at the expected target depth after bottom reflection becomes the estimated target range in the φ̂_l(k, i) direction. (b) Top-view illustration showing the line of the conical angle and the bearing line. For the k-th scan, the line connecting the I possible target positions estimated using eigenray tracing is a bearing line (red line in figure).
Figure 6. Bearing lines (solid lines) and lines of conical angles (dashed lines) at k = 1 and k = K.
Figure 7. BTRs for conical angle measurements including Gaussian measurement error with zero mean and standard deviations of (a) 0.2°, (b) 0.4°, and (c) 0.6°.
Figure 8. Distribution of initial states estimated using TMA for 1000 random runs with standard deviations of zero-mean Gaussian measurement errors of (a) 0.2°, (b) 0.4°, and (c) 0.6°. The true initial state vector of the target is [0 m, 2500 m, 0 m/s, −3 m/s]. The left column shows the initial target position estimates, and the right shows target velocity estimates. The true initial state vector and the mean of the estimated state vectors are indicated by red circles and yellow triangles, respectively. The regions within one standard deviation of the mean are indicated by black ellipses.
Figure 9. Distribution of initial states estimated using TMA for 1000 random runs with a standard deviation of zero-mean Gaussian measurement error of 0.4° and measurement numbers of (a) 15, (b) 30, and (c) 60.
21 pages, 7435 KiB  
Article
Learning Attention Representation with a Multi-Scale CNN for Gear Fault Diagnosis under Different Working Conditions
by Yong Yao, Sen Zhang, Suixian Yang and Gui Gui
Sensors 2020, 20(4), 1233; https://doi.org/10.3390/s20041233 - 24 Feb 2020
Cited by 78 | Viewed by 6205
Abstract
The gear fault signal under different working conditions is non-linear and non-stationary, which makes it difficult to distinguish faulty signals from normal ones. Currently, gear fault diagnosis under different working conditions is mainly based on vibration signals. However, vibration signal acquisition is limited by its requirement for contact measurement, while vibration signal analysis methods rely heavily on diagnostic expertise and prior knowledge of signal processing technology. To solve this problem, a novel acoustic-based diagnosis (ABD) method for gear fault diagnosis under different working conditions, based on a multi-scale convolutional learning structure and an attention mechanism, is proposed in this paper. The multi-scale convolutional learning structure was designed to automatically mine features at multiple scales from raw acoustic signals using different filter banks. Subsequently, a novel attention mechanism built on the multi-scale convolutional learning structure was established to adaptively allow the multi-scale network to focus on relevant fault pattern information under different working conditions. Finally, a stacked convolutional neural network (CNN) model was proposed to detect the fault mode of gears. The experimental results show that our method achieved much better performance in acoustic-based gear fault diagnosis under different working conditions than a standard CNN model (without an attention mechanism), an end-to-end CNN model based on time- and frequency-domain signals, and other traditional fault diagnosis methods involving feature engineering.
(This article belongs to the Special Issue Sensors Fault Diagnosis Trends and Applications)
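The two ingredients named in the abstract, multi-scale feature extraction and attention over scales, can be sketched with fixed moving-average filter banks and an energy-based softmax weighting. This is a hand-crafted stand-in for the learned convolutions and learned attention of the paper:

```python
import numpy as np

def multi_scale_features(x: np.ndarray, kernel_sizes=(8, 32, 128)) -> np.ndarray:
    """One smoothed feature sequence per scale: larger kernels capture
    coarser structure of the raw acoustic signal (illustrative fixed
    filter bank, not the paper's learned convolutions)."""
    return np.stack([np.convolve(x, np.ones(k) / k, mode="same")
                     for k in kernel_sizes])

def attention_pool(feats: np.ndarray) -> np.ndarray:
    """Softmax attention over scales: score each scale by the mean energy
    of its feature sequence, then combine into one representation."""
    scores = (feats ** 2).mean(axis=1)          # one score per scale
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return (w[:, None] * feats).sum(axis=0)

sig = np.sin(np.linspace(0, 60 * np.pi, 4096))  # toy "acoustic" signal
pooled = attention_pool(multi_scale_features(sig))
```

In the paper, the pooled representation would then pass to the stacked CNN classifier.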
Show Figures

Figure 1. Block diagram of the proposed method.
Figure 2. Multi-scale feature extraction mechanism.
Figure 3. Temporal attention mechanism.
Figure 4. Architecture of the attention-based multi-scale convolutional neural network (CNN) model. It contains (a) an attention-based multi-scale feature extraction module and (b) a fault pattern recognition module.
Figure 5. Experimental system in a semi-anechoic chamber.
Figure 6. Fault patterns of the gears.
Figure 7. Four different types of gear signals in the time and frequency domains under two load conditions: (a) signal at 900 rev/min, (b) signal at 1800 rev/min, and (c) signal at 2700 rev/min.
Figure 8. Frequency magnitude response of the multi-scale convolutional filters of the first layer, sorted by center frequency. Left: frequency response of the low-scale network; middle: mid-scale network; right: high-scale network.
Figure 9. Visualization results of randomly selected filters from the multi-scale pool layer and the temporal attention output for four different types of gear input signals under the two load conditions, corresponding to the attention-based multi-scale CNN model for evaluation A.
Figure 10. Visualization results of randomly selected filters from the multi-scale pool layer and the temporal attention output for four different types of raw gear signals under the two load conditions, corresponding to the attention-based multi-scale CNN model for evaluation B.
Figure 11. Confusion matrices for the proposed attention-based multi-scale CNN model. The left matrix shows the statistics for evaluation (A), while the right matrix shows the statistics for evaluation (B).
Figure 12. Feature visualization via t-SNE (t-distributed stochastic neighbor embedding). Left: feature representations of the last fully connected layer of the attention-based multi-scale CNN model for evaluation (A); right: feature representations for evaluation (B).
13 pages, 4348 KiB  
Article
A Near-infrared Turn-on Fluorescent Sensor for Sensitive and Specific Detection of Albumin from Urine Samples
by Yoonjeong Kim, Eunryeol Shin, Woong Jung, Mi Kyoung Kim and Youhoon Chong
Sensors 2020, 20(4), 1232; https://doi.org/10.3390/s20041232 - 24 Feb 2020
Cited by 18 | Viewed by 3970
Abstract
A readily synthesizable fluorescent probe, DMAT-π-CAP, was evaluated for the sensitive and selective detection of human serum albumin (HSA). DMAT-π-CAP showed selective turn-on fluorescence at 730 nm in the presence of HSA, with more than 720-fold enhancement in emission intensity ([DMAT-π-CAP] = 10 μM), and rapid detection of HSA was accomplished in 3 s. The fluorescence intensity of DMAT-π-CAP increased in an HSA concentration-dependent manner (Kd = 15.4 ± 3.3 μM), and the limit of detection of DMAT-π-CAP was determined to be 10.9 nM (0.72 mg/L). A 1:1 stoichiometry between DMAT-π-CAP and HSA was determined, and a displacement assay revealed that DMAT-π-CAP competes with hemin for a unique binding site that rarely accommodates drugs and endogenous compounds. Based on its HSA-selective turn-on NIR fluorescence as well as this unique binding site, DMAT-π-CAP was anticipated to serve as a fluorescence sensor for quantitative detection of HSA levels in biological samples with minimal background interference. Thus, urine samples were analyzed directly with DMAT-π-CAP to assess albumin levels, and the results were comparable to those obtained by immunoassay. The sensitivity and specificity similar to those of the immunoassay, along with the simple, cost-effective, and fast detection of HSA, warrant practical application of the NIR fluorescent albumin sensor DMAT-π-CAP in the analysis of albumin levels in various biological environments.
(This article belongs to the Section Biomedical Sensors)
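Two of the reported quantities follow from standard formulas: the fraction of probe bound under a simple 1:1 binding model with the reported Kd, and a 3σ/slope limit-of-detection estimate. A sketch (any σ and slope values supplied to `limit_of_detection` would be placeholders, not the paper's calibration data):

```python
def fraction_bound(conc_hsa_uM: float, kd_uM: float = 15.4) -> float:
    """Fraction of probe bound at a given HSA concentration (probe in
    excess HSA), from the reported Kd = 15.4 uM and a 1:1 binding model."""
    return conc_hsa_uM / (kd_uM + conc_hsa_uM)

def limit_of_detection(sigma_blank: float, slope: float) -> float:
    """Standard 3-sigma/slope LOD estimate from a linear calibration."""
    return 3.0 * sigma_blank / slope
```

At [HSA] = Kd, the 1:1 model gives exactly half of the probe bound, which is the defining property of the dissociation constant.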
Show Figures

Graphical abstract
Figure 1. (a) Fluorescence spectra of DMAT-π-CAP (5 μM) in the absence and presence of HSA (10 μM). A magnified view of the boxed region is shown in the inset. (b) Fluorescence intensity of DMAT-π-CAP (5 μM) in the presence of various bioanalytes [10 μM except BSA (2 μM)]. (F0−F)/F0 indicates relative fluorescence intensity. Data are the mean ± standard error (n = 3).
Figure 2. (a) Emission fluorescence spectra of DMAT-π-CAP (5 μM, λem = 730 nm) upon addition of increasing concentrations of HSA (0, 1.25, 2.5, 5, 10, 20, 30, 40, 50, 60, 70, 80, 90, and 100 μM). (b) HSA-dependent change in the fluorescence intensity of DMAT-π-CAP. Inset shows the linear relationship between the fluorescence intensity of DMAT-π-CAP (5 μM) at 730 nm and the concentration of HSA (0–10 μM). (F0−F)/F0 indicates relative fluorescence intensity.
Figure 3. (a) Job's plot analysis. Fluorescence intensity of a mixture of DMAT-π-CAP and HSA (total molar concentration = 10 μM in PBS) was measured at different molar ratios of the two components (0.1–0.9) (λem = 730 nm). (F0−F)/F0 indicates relative fluorescence intensity. (b) Changes in the fluorescence intensity (λem = 730 nm) of a mixture of DMAT-π-CAP (5 μM) and HSA (10 μM) upon addition of site-specific markers (warfarin, ibuprofen, and hemin).
Figure 4. The albumin levels in urine samples determined by the direct fluorometric method and the standard addition method using DMAT-π-CAP were compared with those obtained by immunoassay. Data are mean ± SD. *, p < 0.05; **, p < 0.01; ***, p < 0.001.
Scheme 1. Synthesis of the title compound, DMAT-π-CAP (5).
19 pages, 4428 KiB  
Article
Estimation of the Yield and Plant Height of Winter Wheat Using UAV-Based Hyperspectral Images
by Huilin Tao, Haikuan Feng, Liangji Xu, Mengke Miao, Guijun Yang, Xiaodong Yang and Lingling Fan
Sensors 2020, 20(4), 1231; https://doi.org/10.3390/s20041231 - 24 Feb 2020
Cited by 92 | Viewed by 6722
Abstract
Crop yield is related to national food security and economic performance, and it is therefore important to estimate this parameter quickly and accurately. In this work, we estimate the yield of winter wheat using spectral indices (SIs), the ground-measured plant height (H), and the plant height extracted from UAV-based hyperspectral images (HCSM) with three regression techniques, namely partial least squares regression (PLSR), an artificial neural network (ANN), and random forest (RF). The SIs, H, and HCSM were used as input variables to train the three regression models, which were then used for modeling and verification to test the stability of the yield estimation. The results showed that: (1) HCSM is strongly correlated with H (R2 = 0.97); (2) of the regression techniques, the best yield prediction was obtained using PLSR, followed closely by ANN, while RF had the worst prediction performance; and (3) the best prediction results were obtained using PLSR trained on a combination of the SIs and HCSM as inputs (R2 = 0.77, RMSE = 648.90 kg/ha, NRMSE = 10.63%). It can therefore be concluded that PLSR allows the accurate estimation of crop yield from hyperspectral remote sensing data, and that the combination of the SIs and HCSM gives the most accurate yield estimates. These results indicate that the crop plant height extracted from UAV-based hyperspectral measurements can improve yield estimation, and that the comparative analysis of the PLSR, ANN, and RF regression techniques can provide a reference for agricultural management.
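The accuracy figures quoted above (R2, RMSE, NRMSE) can be reproduced from observed and predicted yields with a few lines of numpy. The plot yields below are invented, and NRMSE is taken here as RMSE divided by the mean observed yield, which is one common convention:

```python
import numpy as np

def r2_rmse_nrmse(y_true, y_pred):
    """R^2, RMSE, and RMSE normalised by the mean observation (in %)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    nrmse = 100.0 * rmse / y_true.mean()
    return r2, rmse, nrmse

# Illustrative winter-wheat yields (kg/ha) for a handful of plots
y_obs = np.array([5800.0, 6100.0, 6400.0, 5900.0, 6300.0])
y_hat = np.array([5900.0, 6000.0, 6350.0, 6050.0, 6200.0])
r2, rmse, nrmse = r2_rmse_nrmse(y_obs, y_hat)
```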
(This article belongs to the Section Remote Sensors)
Show Figures

Figure 1: Location and aerial images of the test area.
Figure 2: The unmanned aerial vehicle (UAV) remote sensing platform (model DJI S1000) that was used to acquire hyperspectral images. UHD 185: UHD 185 Firefly hyperspectral sensor.
Figure 3: Measured and estimated heights of winter wheat. The three colors represent different growth stages: blue, jointing stage; red, flagging stage; green, flowering stage.
Figure 4: The relationships between the ground-measured winter-wheat yield and the yield predicted using the PBI, H, and HCSM in different growth stages of winter wheat. (a-c) show the relationships between the measured yield and the yield predicted using the PBI, H, and HCSM at the jointing stage, respectively; (d-f) show the same as (a-c) for the flagging stage; and (g-i) show the same as (a-c) for the flowering stage.
Figure 5: The relationships between the measured yield and the yield predicted using a combination of the PBI and H, and using a combination of the PBI and HCSM, at different growth stages. (a,b) show the relationships for the jointing stage, (c,d) for the flagging stage, and (e,f) for the flowering stage.
Figure 6: The relationships between the ground-measured and predicted values of winter-wheat yield at different growth stages based on partial least squares regression (PLSR). (a-c) show the results for the SIs, a combination of the SIs and H, and a combination of the SIs and HCSM, respectively, at the jointing stage; (d-f) show the same as (a-c) for the flagging stage; and (g-i) show the same as (a-c) for the flowering stage.
Figure 7: The relationships between the ground-measured and predicted values of winter-wheat yield at different growth stages based on an artificial neural network (ANN). Panels as in Figure 6.
Figure 8: The relationships between the ground-measured and predicted values of winter-wheat yield at different growth stages based on the random forest (RF) method. Panels as in Figure 6.
Figure 9: Maps showing the predicted yield of winter wheat in the 48 experimental plots obtained using PLSR and a combination of the SIs and HCSM. (a) Jointing stage, (b) flagging stage, and (c) flowering stage.
26 pages, 3628 KiB  
Article
Speed Calibration and Traceability for Train-Borne 24 GHz Continuous-Wave Doppler Radar Sensor
by Lei Du, Qiao Sun, Jie Bai, Xiaolei Wang and Tianqi Xu
Sensors 2020, 20(4), 1230; https://doi.org/10.3390/s20041230 - 24 Feb 2020
Cited by 14 | Viewed by 7361
Abstract
The 24 GHz continuous-wave (CW) Doppler radar sensor (DRS) is widely used for measuring the instantaneous speed of moving objects in a non-contact manner, and in recent years it has begun to be used for train-borne speed measurement in China because of its advanced performance. The architecture and working principle of train-borne DRSs with different structures are first introduced, including single-channel DRSs used for freight train speed measurement on railway freight dedicated lines and dual-channel DRSs used for speed measurement of high-speed and urban rail trains on railway passenger dedicated lines. Then, the disadvantages of two traditional speed calibration methods for train-borne DRSs are described, and a new speed calibration method is proposed that simulates the Doppler shift by imposing a signal modulation on the incident CW microwave signal. A 24 GHz CW radar target simulation system was realized to verify the proposed calibration method, with traceability and performance evaluation of the simulated speed taken into account. The simulated speed range of the system is (5~500) km/h for simulated incident angles within (45 ± 8)°, and the maximum permissible error (MPE) of the simulated speed is ±0.05 km/h. Finally, the calibration and uncertainty evaluation results of two typical train-borne dual-channel DRS samples validated the effectiveness and feasibility of the proposed speed calibration approach over the full speed range, in the laboratory as well as in the field.
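The speed-to-frequency mapping that such a radar target simulator must reproduce is the standard two-way CW Doppler relation f_d = 2·v·cos(θ)/λ. A sketch of that relation and its inverse; the 24.125 GHz carrier and the 45° incident angle are assumptions for illustration, not the paper's exact sensor parameters:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def doppler_shift_hz(speed_kmh, carrier_hz=24.125e9, incident_deg=45.0):
    """Two-way CW Doppler shift f_d = 2 * v * cos(theta) / lambda.

    A target simulator imposes this shift on the incident carrier to
    emulate a chosen ground speed at a chosen incident angle.
    """
    v = speed_kmh / 3.6                  # km/h -> m/s
    lam = C / carrier_hz                 # carrier wavelength, m
    return 2.0 * v * math.cos(math.radians(incident_deg)) / lam

def speed_from_shift_kmh(f_d_hz, carrier_hz=24.125e9, incident_deg=45.0):
    """Inverse relation, as used by the DRS itself to report speed."""
    lam = C / carrier_hz
    v = f_d_hz * lam / (2.0 * math.cos(math.radians(incident_deg)))
    return v * 3.6

f_d_100 = doppler_shift_hz(100.0)        # shift for 100 km/h, a few kHz
v_back = speed_from_shift_kmh(f_d_100)   # round trip back to km/h
```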
(This article belongs to the Special Issue Advances in Microwave and Millimeter Wave Radar Sensors)
Show Figures

Figure 1: The installation schematic of a train-borne DRS.
Figure 2: Train-borne single-channel DRS: (a) appearance view of the DRS4; (b) schematic diagram of the working principle.
Figure 3: Problems of the train-borne single-channel DRS: (a) the angle deflection problem caused by installation deviation and the jolt of motion; (b) the multipath effect problem.
Figure 4: Vehicle-borne dual-channel DRS with the Janus configuration: (a) installation schematic; (b) schematic diagram of the working principle in the ideal situation without a deflection angle; (c) schematic diagram of the working principle in the actual situation with a deflection angle.
Figure 5: Train-borne dual-channel DRS: (a) appearance view of the DRS05; (b) working principle in the ideal situation without a deflection angle; (c) working principle in the actual situation with a deflection angle.
Figure 6: Schematic diagram of the traditional speed calibration method for a train-borne single-channel DRS using tuning forks.
Figure 7: Schematic diagram of the traditional speed calibration method for a train-borne DRS using a moving pavement simulator.
Figure 8: Schematic diagram of the proposed speed calibration method: (a) for the train-borne single-channel DRS; (b) for the train-borne dual-channel DRS.
Figure 9: Flow diagram of the new speed calibration process for a dual-channel DRS.
Figure 10: Realization and traceability of the simulation system.
Figure 11: Calibration setup for the DRS05/1a sample.
Figure 12: Calibration setup for the DRS05S1c sample.
12 pages, 10192 KiB  
Article
Multi-Modal, Remote Breathing Monitor
by Nir Regev and Dov Wulich
Sensors 2020, 20(4), 1229; https://doi.org/10.3390/s20041229 - 24 Feb 2020
Cited by 10 | Viewed by 5424
Abstract
Monitoring breathing is important for a plethora of applications including, but not limited to, baby monitoring, sleep monitoring, and elderly care. This paper presents a way to fuse vision-based and RF-based modalities for the task of estimating the breathing rate of a human. The modalities used are the F200 Intel RealSense RGB and depth (RGBD) sensor and an ultra-wideband (UWB) radar. RGB image-based features and their corresponding image coordinates are detected on the human body and tracked using the Lucas-Kanade optical flow algorithm. The depth at these coordinates is tracked as well. The synced radar received signal is processed to extract the breathing pattern. All of these signals are then passed to a harmonic signal detector based on a generalized likelihood ratio test. Finally, a spectral estimation algorithm based on the reformed Pisarenko algorithm tracks the breathing fundamental frequencies in real time, which are then fused into one optimal breathing rate in a maximum likelihood fashion. We tested this multimodal set-up on 14 human subjects and report a maximum error of 0.5 BPM compared to the true breathing rate.
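Under an independent Gaussian noise model, fusing several per-modality rate estimates "in a maximum likelihood fashion" reduces to an inverse-variance weighted mean. A sketch with invented rates and variances; the paper's actual estimator and weighting may differ:

```python
import numpy as np

def fuse_estimates(rates_bpm, variances):
    """ML fusion of independent Gaussian estimates of the same quantity.

    The maximum-likelihood combination is the inverse-variance weighted
    mean: noisier modalities contribute less to the fused rate.
    """
    w = 1.0 / np.asarray(variances, dtype=float)
    r = np.asarray(rates_bpm, dtype=float)
    return float(np.sum(w * r) / np.sum(w))

# Illustrative RGB-, depth-, and radar-derived rates (BPM) with assumed
# per-modality estimator variances; all numbers are made up.
rates = [12.1, 11.8, 12.0]
variances = [0.04, 0.09, 0.01]
fused = fuse_estimates(rates, variances)
```

The fused rate always lies between the smallest and largest input rate, pulled toward the lowest-variance modality (the radar here).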
(This article belongs to the Special Issue IR-UWB Radar Sensors)
Show Figures

Figure 1: Xethru X4 radar mounted on a laptop with the RealSense sensor on it.
Figure 2: The subject under test sits on this chair in front of the radar and RealSense sensor.
Figure 3: Example of a three-axis (x, y, depth) breathing pattern before band-pass filtering.
Figure 4: Example of a three-axis (x, y, depth) breathing pattern after band-pass filtering.
Figure 5: CRB and Equation (4) vs. SNR for K = 1000.
Figure 6: Radar- and vision-extracted breathing signals.
Figure 7: Extracted signal comparisons for a breathing rate of 9.37 BPM. (a) Radar and ground truth signals. (b) Depth and ground truth signals. (c) Radar and depth signals.
Figure 8: Extracted signal comparisons for a breathing rate of 23.43 BPM. (a) Radar and ground truth signals. (b) RGB and ground truth signals. (c) Radar and RGB signals.
Figure 9: Extracted signal comparisons for a breathing rate of 33.98 BPM. (a) Radar and ground truth signals. (b) RGB and ground truth signals. (c) Radar and RGB signals.
Figure 10: Extracted signal comparisons for a breathing rate of 56.25 BPM. (a) Radar and ground truth signals. (b) RGB and ground truth signals. (c) Radar and RGB signals.
17 pages, 6202 KiB  
Article
Strong Wind Characteristics and Buffeting Response of a Cable-Stayed Bridge under Construction
by Lei Yan, Lei Ren, Xuhui He, Siying Lu, Hui Guo and Teng Wu
Sensors 2020, 20(4), 1228; https://doi.org/10.3390/s20041228 - 24 Feb 2020
Cited by 10 | Viewed by 4062
Abstract
This study carries out a detailed full-scale investigation of the strong wind characteristics at a cable-stayed bridge site and the associated buffeting response of the bridge structure during construction, using a field monitoring system. It is found that the wind turbulence parameters during typhoon and monsoon conditions share a considerable amount of similarity and can serve as the input turbulence parameters for current wind-induced vibration theory. While the longitudinal turbulence integral scales are consistent with those in regional structural codes, the turbulence intensities and gust factors are less than the recommended values. The wind spectra obtained via the field measurements are well approximated by the von Karman spectra. Regarding the buffeting response of the bridge under strong winds, the vertical acceleration responses at the extreme single-cantilever state are significantly larger than those in the horizontal direction, and the two directions also show different increasing tendencies with the mean wind velocity. The identified frequencies of the bridge are used to validate its finite element model (FEM), and the field-measured acceleration results are compared with those from the FEM-based numerical buffeting analysis with measured turbulence parameters.
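Two of the turbulence parameters discussed above, turbulence intensity and gust factor, have simple definitions that can be sketched directly. The definitions below (I_u = σ_u/U, and gust factor as the peak 3-s moving-average speed over the record mean) follow common wind-engineering practice and may differ in detail from the paper's processing; the wind record is synthetic:

```python
import numpy as np

def turbulence_intensity(u):
    """Turbulence intensity I_u = sigma_u / U over the record."""
    return float(np.std(u, ddof=1) / np.mean(u))

def gust_factor(u, fs, gust_s=3.0):
    """Gust factor: peak gust_s-second moving-average speed / mean speed."""
    n = max(1, int(round(gust_s * fs)))
    gust_peak = np.convolve(u, np.ones(n) / n, mode="valid").max()
    return float(gust_peak / np.mean(u))

# Synthetic 10-min record at 1 Hz: 15 m/s mean plus a slow sinusoidal gust
fs = 1.0
t = np.arange(600.0)
u = 15.0 + 1.5 * np.sin(2.0 * np.pi * t / 60.0)

Iu = turbulence_intensity(u)
G = gust_factor(u, fs)
```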
(This article belongs to the Special Issue Sensors in Structural Health Monitoring and Seismic Protection)
Show Figures

Figure 1: Bridge and its location.
Figure 2: Architecture of the wireless monitoring system.
Figure 3: Arrangement of the measurement sensors on the bridge in the extreme single-cantilever state: (a) elevation view; (b) plan view; (c) cross section of the bridge girder.
Figure 4: Measured mean wind velocities and directions at a one-hour interval from August 13th to October 5th: (a) mean wind velocity; (b) mean wind direction.
Figure 5: Variation of turbulence intensities during strong winds: (a) Typhoon Bailu; (b) strong monsoon event.
Figure 6: Variation of gust factors during strong winds: (a) Typhoon Bailu; (b) strong monsoon event.
Figure 7: Variation of integral length scales during strong winds: (a) Typhoon Bailu; (b) strong monsoon event.
Figure 8: Wind spectra obtained via the field measurements: (a) longitudinal velocity; (b) lateral velocity; (c) vertical velocity.
Figure 9: Time histories of the accelerations of the bridge girder and the corresponding wind velocity: (a) vertical acceleration; (b) horizontal acceleration; (c) wind velocity.
Figure 10: Dependence of the acceleration responses on strong winds: (a) vertical; (b) horizontal.
Figure 11: Acceleration spectra of the bridge girder: (a) vertical; (b) horizontal.
Figure 12: Dependence of the turbulence intensity and turbulence integral scale on the mean wind velocity: (a) turbulence intensity; (b) turbulence integral scale.
Figure 13: Comparison of the standard deviations of the acceleration response obtained by field measurements and numerical analysis in Case 1 and Case 2: (a) vertical; (b) horizontal.
Figure 14: Comparison of the standard deviations of the acceleration response obtained by field measurements and numerical analysis in Case 2 and Case 3: (a) vertical; (b) horizontal.
29 pages, 10015 KiB  
Article
Seismic Assessment of Footbridges under Spatial Variation of Earthquake Ground Motion (SVEGM): Experimental Testing and Finite Element Analyses
by Izabela Joanna Drygala, Joanna Maria Dulinska and Maria Anna Polak
Sensors 2020, 20(4), 1227; https://doi.org/10.3390/s20041227 - 24 Feb 2020
Cited by 10 | Viewed by 3426
Abstract
In this paper, the seismic responses of two footbridges, i.e., a single-span steel frame footbridge and a three-span cable-stayed structure, to the spatial variation of earthquake ground motion (SVEGM) are assessed. A model of nonuniform kinematic excitation was used for the dynamic analyses of the footbridges. The influence of SVEGM on the dynamic performance of the structures was assessed both experimentally and numerically. Comprehensive tests were planned and carried out on both structures. The investigation was divided into two parts: an in situ experiment and numerical analyses. The experimental part served to validate the finite element (FE) modal models of the structures and the theoretical model of nonuniform excitation, as well as the appropriateness of the FE procedures used for the dynamic analyses. First, the modal properties were validated. The differences between the numerical and experimental natural frequencies, obtained using operational modal analysis, were less than 10%. The comparison of the experimental and numerical mode shapes also showed good agreement, since the modal assurance criterion values were satisfactory for both structures. Secondly, nonuniform kinematic excitation was experimentally imposed using vibroseis tests. The apparent wave velocities, evaluated from the cross-correlation functions of the acceleration-time histories registered at two consecutive supports, were 203 and 214 m/s for the two structures, respectively. The coherence functions also proved the similarity of the signals, especially in the frequency range of 5 to 15 Hz. Then, artificial kinematic excitation was generated on the basis of the adopted model of nonuniform excitation.
The power spectral density functions of the acceleration-time histories registered at all supports, as well as the cross-spectral density functions between the registered and artificial acceleration-time histories, confirmed the strong similarity of the measured and artificial signals. Finally, experimental and numerical assessments of the footbridges' performance under the known dynamic excitation generated by the vibroseis were carried out. The FE models and procedures were positively validated by linking the full-scale tests and numerical calculations. In the numerical part of the research, seismic analyses of the footbridges were conducted, and the dynamic responses of the structures to a representative seismic shock were calculated. Both the uniform and nonuniform models of excitation were applied to demonstrate and quantify the influence of SVEGM on the seismic assessment of footbridges. It turned out that SVEGM may produce non-conservative results in comparison with classic uniform seismic excitation. For the stiff steel frame footbridge, the maximum dynamic response was obtained for the model of nonuniform excitation with the lowest wave velocity; in particular, zones located close to the stiff frame nodes were significantly more disturbed. For the flexible cable-stayed footbridge, in the case of nonuniform excitation, the dynamic response was enhanced only at the points located in the extreme spans and in the midspan close to the pillars.
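The apparent wave velocities quoted above (203 and 214 m/s) follow from the lag at which the cross-correlation between the acceleration records at two consecutive supports peaks: velocity = support distance / lag. A self-contained numpy sketch on a synthetic delayed pulse; the 20.3 m support spacing is invented for illustration and is not taken from the paper:

```python
import numpy as np

def apparent_wave_velocity(a1, a2, fs, support_distance_m):
    """Apparent wave passage velocity between two support records.

    The lag maximising the cross-correlation of the two acceleration
    histories estimates the travel time; velocity = distance / lag.
    Assumes a2 lags a1 (positive lag), as for a wave passing a1 first.
    """
    n = len(a1)
    xcorr = np.correlate(a2, a1, mode="full")    # lag of a2 relative to a1
    lag_s = (np.argmax(xcorr) - (n - 1)) / fs    # peak index -> seconds
    return support_distance_m / lag_s

# Synthetic test: a Gaussian pulse arriving 0.1 s later at the 2nd support
fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
pulse = np.exp(-((t - 0.5) ** 2) / (2.0 * 0.01 ** 2))
a1 = pulse
a2 = np.roll(pulse, int(0.1 * fs))               # delayed copy
v = apparent_wave_velocity(a1, a2, fs, support_distance_m=20.3)
```

With a 0.1 s lag over 20.3 m, the estimator returns 203 m/s, matching the order of magnitude reported in the abstract.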
(This article belongs to the Special Issue Nondestructive Sensing in Civil Engineering)
Show Figures

Figure 1: General concept of the experimental investigation, carried out in three stages: first stage, validation of the modal models of both footbridges against the results of the in situ experiment; second stage, validation of the theoretical model of nonuniform kinematic excitation through a similarity assessment of the measured and artificial excitations; third stage, validation of the dynamic responses of the footbridges through a comparison of the measured and calculated accelerations at representative points of the structures.
Figure 2: The single-span steel frame footbridge: (a) general view and (b) structural layout.
Figure 3: The cable-stayed footbridge: (a) general view and (b) structural layout.
Figure 4: (a) Layout of the measurement points and accelerometer anchorage for the frame footbridge (F_IMP, input measurement points; F_OMP, output measurement points); schemes of the active measurement points for the (b) first, (c) second, and (d) third stage of the experiment.
Figure 5: (a) Layout of the measurement points and accelerometer anchorage for the cable-stayed footbridge (C_IMP, input measurement points; C_OMP, output measurement points); schemes of the active measurement points for the (b) first, (c) second, and (d) third stage of the experiment.
Figure 6: THOMAS vibroseis apparatus.
Figure 7: Time-frequency characteristics of the vibroseis tests: (a) linear sweep and (b) exponential sweep.
Figure 8: (a) Acceleration-time history at output measurement point F_OMP_2 of the frame footbridge and (b) natural frequency estimator.
Figure 9: (a) Acceleration-time history at output measurement point C_OMP_3 of the cable-stayed footbridge and (b) natural frequency estimator.
Figure 10: (a) 1st, (b) 2nd, (c) 3rd, (d) 4th, and (e) 5th mode shapes of the frame footbridge [6].
Figure 11: (a) 1st, (b) 2nd, (c) 3rd, (d) 4th, and (e) 5th mode shapes of the cable-stayed footbridge [7].
Figure 12: (a) Acceleration-time histories at input measurement points F_IMP_1 and F_IMP_2 of the frame footbridge due to the exponential sweep and (b) coherence function between the signals at points F_IMP_1 and F_IMP_2.
Figure 13: Time-frequency characteristics of the excitation at points (a) F_IMP_1 and (b) F_IMP_2 of the frame footbridge due to the exponential sweep.
Figure 14: (a) Acceleration-time histories at input measurement points C_IMP_2 and C_IMP_3 of the cable-stayed footbridge due to the exponential sweep and (b) coherence function of the signals at points C_IMP_2 and C_IMP_3.
Figure 15: Time-frequency characteristics of the excitation at points (a) C_IMP_2 and (b) C_IMP_3 of the cable-stayed footbridge due to the exponential sweep.
Figure 16: Cross-correlation functions between signals caused by the exponential sweep at points (a) F_IMP_1 and F_IMP_2 of the frame footbridge and (b) C_IMP_2 and C_IMP_3 of the cable-stayed footbridge.
Figure 17: Comparison of the maximum accelerations registered during the exponential sweeps (a) at two input points of the frame footbridge and (b) at four input points of the cable-stayed footbridge with the artificial curves of amplitude reduction.
Figure 18: PSD function of the registered acceleration-time history due to the exponential sweep vs. CSD function between the registered and artificial acceleration-time histories for (a) point F_IMP_2 of the frame footbridge and (b) point C_IMP_3 of the cable-stayed footbridge.
Figure 19: Acceleration-time histories of the seismic event in the (a) X direction, (b) Y direction, and (c) Z direction [31].
Figure 20: Frequency spectra of the accelerations for the seismic event in the (a) X direction, (b) Y direction, and (c) Z direction.
Figure 21: Von Mises stress-time histories for the frame footbridge at output control points (a) F_OMP_1, (b) F_OMP_4, and (c) F_OMP_7.
Figure 22: Maximum von Mises stresses for different values of the wave velocity for the frame footbridge at output points (a) F_OMP_1, (b) F_OMP_4, and (c) F_OMP_7.
Figure 23: Differences between the von Mises stresses obtained for the uniform and nonuniform models of kinematic excitation with a wave velocity of 203 m/s at the output points of the frame footbridge.
Figure 24: Von Mises stress-time histories for the cable-stayed footbridge at control points (a) C_OMP_1, (b) C_OMP_2, and (c) C_OMP_3.
Figure 25: Maximum von Mises stresses for different values of the wave passage velocity for the cable-stayed footbridge at output points (a) C_OMP_1, (b) C_OMP_2, and (c) C_OMP_3.
Figure 26: Differences between the von Mises stresses obtained for the uniform and nonuniform models of kinematic excitation with a wave velocity of 214 m/s at the output points of the cable-stayed footbridge.
18 pages, 27853 KiB  
Article
CLRS: Continual Learning Benchmark for Remote Sensing Image Scene Classification
by Haifeng Li, Hao Jiang, Xin Gu, Jian Peng, Wenbo Li, Liang Hong and Chao Tao
Sensors 2020, 20(4), 1226; https://doi.org/10.3390/s20041226 - 24 Feb 2020
Cited by 36 | Viewed by 5007
Abstract
Remote sensing image scene classification has high application value in agriculture, the military, and other fields. A large amount of remote sensing data is obtained every day. After learning a new batch of data, scene classification algorithms based on deep learning face the problem of catastrophic forgetting; that is, they cannot maintain their performance on the old batches. Therefore, it has become increasingly important to ensure that scene classification models are capable of continual learning, i.e., of learning new batches of data without forgetting their performance on the old ones. However, the existing remote sensing image scene classification datasets all use static benchmarks and lack a standard for dividing a dataset into a number of sequential training batches, which largely limits the development of continual learning in remote sensing image scene classification. First, this study gives criteria for partitioning training batches under three continual learning scenarios and proposes a large-scale remote sensing image scene classification database called the Continual Learning Benchmark for Remote Sensing (CLRS). The goal of CLRS is to help develop state-of-the-art continual learning algorithms for remote sensing image scene classification. In addition, a new method for constructing a large-scale remote sensing image classification database based on a target detection pretrained model is proposed, which can effectively reduce manual annotation. Finally, several mainstream continual learning methods are tested and analyzed under the three continual learning scenarios, and the results can be used as a baseline for future work.
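As a concrete picture of the batch-partition criteria, the sketch below splits a labelled image set into New Classes (NC) training batches, where each batch introduces previously unseen classes. This is a generic scheme for illustration only, not the exact CLRS partitioning protocol:

```python
def make_nc_batches(samples, classes_per_batch):
    """Split (item, label) pairs into New-Classes (NC) training batches.

    Each batch contains only classes that no earlier batch contains,
    which is the defining property of the NC scenario.
    """
    by_class = {}
    for item, label in samples:
        by_class.setdefault(label, []).append((item, label))
    labels = sorted(by_class)
    batches = []
    for i in range(0, len(labels), classes_per_batch):
        batch = []
        for label in labels[i:i + classes_per_batch]:
            batch.extend(by_class[label])
        batches.append(batch)
    return batches

# Toy example: 6 classes with 4 samples each, 2 new classes per batch
data = [(f"img_{c}_{i}", c) for c in range(6) for i in range(4)]
batches = make_nc_batches(data, classes_per_batch=2)
```

In the NI scenario the split would instead keep all classes in every batch and vary the instances; NIC combines both.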
(This article belongs to the Section Remote Sensors)
Show Figures

Figure 1

Figure 1
<p>(<b>a</b>) New Instances scenario (NI): The bare_land in Batch 1, Batch 2, Batch 3, ⋯ Batch n are different in background, texture, resolution, area, etc. (<b>b</b>) New Classes scenario (NC): Different scene categories appear in subsequent training batches. (<b>c</b>) New Instances and Classes scenario (NIC): Subsequent training batches contain new scene categories and new instances of the same category.</p>
Full article ">Figure 2
<p>The construction process of remote sensing image scene classification database.</p>
Full article ">Figure 3
<p>Continual Learning Benchmark for Remote Sensing (CLRS) construction process based on OpenStreetMap data. Step1: Superimposing and registering. Step2: Filtering the target area according to the OpenStreetMap (OSM) attribute. Step3: Focusing on the target area, add 10 pixels each in length and width, and crop the target image block to <math display="inline"><semantics> <mrow> <mn>256</mn> <mo>×</mo> <mn>256</mn> </mrow> </semantics></math> size.</p>
Full article ">Figure 4
<p>CLRS construction process based on the target detection pretrained model. Step1: Cropping the remote sensing image into <math display="inline"><semantics> <mrow> <mn>1024</mn> <mo>×</mo> <mn>1024</mn> </mrow> </semantics></math> to satisfy the model input size. Step2: The target location information is obtained by YOLOV2 detection. Step3: The image was cropped to <math display="inline"><semantics> <mrow> <mn>256</mn> <mo>×</mo> <mn>256</mn> </mrow> </semantics></math> size according to the detected coordinate range.</p>
Full article ">Figure 5
<p>Cropping principles. (<b>a</b>) Detected object is at the center of the remote sensing image: crop the image according to the center of the object. (<b>b</b>) Detected object is on the edge of the remote sensing image: crop the image according to the boundary of remote sensing image.</p>
Full article ">Figure 6
<p>Some example images from the CLRS data set: there are 15,000 images in 25 classes, with 600 samples per scene class. Two examples per class are shown.</p>
Full article ">Figure 7
<p>CLRS image acquisition area map; red marker points indicate images collected from the area.</p>
Full article ">Figure 8
<p>Higher intraclass diversity. (<b>a</b>) Instances of the same category in different seasons. (<b>b</b>) Instances of the same category in different climates and geographical environment. (<b>c</b>) Instances of the same category in different cultures and architectural styles. (<b>d</b>) Instances of the same category in different resolutions.</p>
Full article ">Figure 9
<p>Larger interclass similarity. (<b>a</b>) Similar structures between different categories. (<b>b</b>) Similar objects between different categories. (<b>c</b>) Similar background between different categories. (<b>d</b>) Similar textures between different categories.</p>
Full article ">Figure 10
<p>Test accuracy of the four methods on the New Instances (NI) scenario. The final result of each method is the average over five shuffles of the training batch order. EWC = Elastic Weights Consolidation; LWF = Learning Without Forgetting.</p>
Full article ">Figure 11
<p>Test accuracy of the five methods on the New Classes (NC) scenario. The final result of each method is the average over five shuffles of the training batch order. CWR = CopyWeights with Re-init.</p>
Full article ">Figure 12
<p>Test accuracy of the five methods on the New Instances and Classes (NIC) scenario. The final result of each method is the average over three shuffles of the training batch order.</p>
12 pages, 2375 KiB  
Article
Effect of Structural Uncertainty in Passive Microwave Soil Moisture Retrieval Algorithm
by Lanka Karthikeyan, Ming Pan, Dasika Nagesh Kumar and Eric F. Wood
Sensors 2020, 20(4), 1225; https://doi.org/10.3390/s20041225 - 24 Feb 2020
Cited by 5 | Viewed by 3256
Abstract
Passive microwave sensors use a radiative transfer model (RTM) to retrieve soil moisture (SM) using brightness temperatures (TB) at low microwave frequencies. Vegetation optical depth (VOD) is a key input to the RTM. Retrieval algorithms can analytically invert the RTM using dual-polarized TB measurements to retrieve the VOD and SM concurrently. Algorithms in this regard typically use the τ-ω types of models, which consist of two third-order polynomial equations and, thus, can have multiple solutions. Through this work, we find that uncertainty occurs due to the structural indeterminacy that is inherent in all τ-ω types of models in passive microwave SM retrieval algorithms. In the process, a new analytical solution for concurrent VOD and SM retrieval is presented, along with two widely used existing analytical solutions. All three solutions are applied to a fixed framework of RTM to retrieve VOD and SM on a global scale, using X-band Advanced Microwave Scanning Radiometer-Earth Observing System (AMSR-E) TB data. Results indicate that structural uncertainty has a noticeable impact on the VOD and SM retrievals. In an era where the sensitivity of retrieval algorithms is still being researched, we believe the structural indeterminacy of the RTM identified here contributes to uncertainty in soil moisture retrievals. Full article
(This article belongs to the Special Issue Radar and Radiometric Sensors and Sensing)
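As a toy illustration of the equifinality problem described above: a third-order polynomial, standing in for one of the τ-ω brightness-temperature equations, can have several real roots, so a single observation alone does not pin down a unique SM value. The coefficients below are invented; only the cubic form comes from the abstract.

```python
# Toy stand-in for a tau-omega brightness-temperature residual: find all
# real roots of f on [lo, hi] by sign-change bracketing plus bisection.
# The cubic's coefficients are made up for illustration.

def real_roots_cubic(f, lo, hi, steps=10000, tol=1e-9):
    """Return the real roots of f on [lo, hi]."""
    roots = []
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    for a, b in zip(xs, xs[1:]):
        fa, fb = f(a), f(b)
        if fa == 0:
            roots.append(a)            # root exactly on a grid point
        elif fa * fb < 0:
            while b - a > tol:         # bisect the bracketing interval
                m = (a + b) / 2
                if f(a) * f(m) <= 0:
                    b = m
                else:
                    a = m
            roots.append((a + b) / 2)
    return roots

# f(sm) = 0 encodes "modelled T_B equals observed T_B" for a toy cubic:
f = lambda sm: sm**3 - 6 * sm**2 + 11 * sm - 6
print(real_roots_cubic(f, 0.0, 4.0))  # three candidate solutions: 1, 2, 3
```

Three distinct roots satisfy the same observation equation, which is exactly the structural indeterminacy the paper examines.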
Show Figures

Figure 1
<p>The problem of equifinality (structural indeterminacy) in the concurrent retrieval of soil moisture (SM) and vegetation optical depth (<span class="html-italic">τ</span>). The three axes represent brightness temperatures (<span class="html-italic">T<sub>B</sub></span>), SM and <span class="html-italic">τ</span>. The two planes shaded in orange and blue correspond to observed satellite measurements <span class="html-italic">T<sub>BVObs</sub></span> and <span class="html-italic">T<sub>BHObs</sub></span> at a timestep, respectively (in <span class="html-italic">V</span> and <span class="html-italic">H</span> polarizations, respectively). The thick curves shown in orange and blue on these planes represent the RTM equations in <span class="html-italic">V</span> and <span class="html-italic">H</span>, respectively (Equations (1) and (2)), which are primarily functions of SM and <span class="html-italic">τ</span>. The dashed lines on the SM–τ plane are the projections of the solid curves. The points where these dashed lines intersect (black squares) on the SM–τ plane are the possible solutions to both RTM equations. Using the Pan solution results in (SM<span class="html-italic"><sub>Pan</sub></span>, <span class="html-italic">τ<sub>Pan</sub></span>), the New solution results in (SM<span class="html-italic"><sub>New</sub></span>, <span class="html-italic">τ<sub>New</sub></span>), and the Meesters solution results in (SM<span class="html-italic"><sub>Meesters</sub></span>, <span class="html-italic">τ<sub>Meesters</sub></span>). The problem of equifinality arises from the fact that all three solutions lead to similar <span class="html-italic">T<sub>BV</sub></span> and <span class="html-italic">T<sub>BH</sub></span> values. The existence of more solution(s) is also possible (‘More Solutions?’ in the SM–τ plane).</p>
Full article ">Figure 2
<p>Scatter density plots of the mean advanced microwave scanning radiometer (AMSR)-E SM retrievals obtained by employing the three analytical solutions in the RTM framework. Retrievals corresponding to ascending and descending passes are plotted in the first and second columns of the figure, respectively. The thick and dotted lines in each plot represent the normal line (45° line) and the best-fit line, respectively.</p>
Full article ">Figure 3
<p>Unbiased root mean squared difference (ubRMSD) between the SM products obtained using the three solutions <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">Γ</mi> <mrow> <mi>C</mi> <mo>,</mo> <mi>N</mi> <mi>e</mi> <mi>w</mi> </mrow> </msub> </mrow> </semantics></math> (indicated as SM<span class="html-italic"><sub>New</sub></span>), <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">Γ</mi> <mrow> <mi>C</mi> <mo>,</mo> <mi>P</mi> <mi>a</mi> <mi>n</mi> </mrow> </msub> </mrow> </semantics></math> (indicated as SM<span class="html-italic"><sub>Pan</sub></span>) and <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">Γ</mi> <mrow> <mi>C</mi> <mo>,</mo> <mi>M</mi> <mi>e</mi> <mi>e</mi> <mi>s</mi> <mi>t</mi> <mi>e</mi> <mi>r</mi> <mi>s</mi> </mrow> </msub> </mrow> </semantics></math> (indicated as SM<span class="html-italic"><sub>Meesters</sub></span>) for ascending and descending passes. The number in bold in each tile indicates the global average of ubRMSD computed between the corresponding SM products.</p>
Full article ">Figure 4
<p>Scatter plots of VOD (top row) and SM (bottom row) simulations pertaining to 50,000 parameter sets (<span class="html-italic">h</span>, <span class="html-italic">Q</span>, <span class="html-italic">ω</span>) obtained by altering the analytical solution at Site 1.</p>
23 pages, 4073 KiB  
Article
The Application of Robust Least Squares Method in Frequency Lock Loop Fusion for Global Navigation Satellite System Receivers
by Mengyue Han, Qian Wang, Yuanlan Wen, Min He and Xiufeng He
Sensors 2020, 20(4), 1224; https://doi.org/10.3390/s20041224 - 23 Feb 2020
Cited by 3 | Viewed by 3337
Abstract
The tracking accuracy of a traditional Frequency Lock Loop (FLL) decreases significantly in a complex environment, thus reducing the overall performance of a satellite receiver. In order to ensure high tracking accuracy in a complex environment, this paper proposes a new tracking loop combining the vector FLL (VFLL) with a robust least squares method, which accurately matches the weights of received signals of different qualities to ensure high positioning accuracy. The weights of received signals are selected at the signal level, not at the observation level. In this paper, the ranges of strong and weak signals of the loop are determined according to the different expressions of the distribution function at different signal strengths, and the concept of loop segmentation is introduced. The segmentation results of the FLL are taken as the basis of the weight selection and combined with the Institute of Geodesy and Geophysics (IGGIII) weight function to obtain the equivalent weight matrix; experiments are conducted to demonstrate the advantages of the proposed method over traditional methods. The experimental results show that the proposed VFLL tracking method has strong denoising capability under both normal-signal and harsh application environment conditions. Accordingly, the proposed model has promising application prospects. Full article
(This article belongs to the Special Issue GNSS Data Processing and Navigation)
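The IGGIII weight function used to build the equivalent weight matrix is commonly written as a three-segment scheme over the standardized residual. A sketch under that common textbook formulation follows; the thresholds k0 and k1 below are typical default values and may differ from those used in the paper:

```python
# A common three-segment formulation of the IGG-III robust weight function.
# k0 and k1 are assumed textbook thresholds on the standardized residual,
# not necessarily the values used in the paper.

def igg3_weight(v, k0=1.5, k1=3.0):
    """Equivalent weight for a standardized residual v."""
    av = abs(v)
    if av <= k0:
        return 1.0                                    # normal region: full weight
    if av <= k1:
        return (k0 / av) * ((k1 - av) / (k1 - k0)) ** 2  # suspicious: down-weight
    return 0.0                                        # outlier region: reject

residuals = [0.3, 2.0, 5.0]
weights = [igg3_weight(v) for v in residuals]
print(weights)  # full weight, reduced weight, zero weight
```

Small residuals keep full weight, moderate ones are smoothly down-weighted, and gross outliers are excluded, which is how signals of different qualities receive matched weights.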
Show Figures

Figure 1
<p>Relationship between the probability density function and SNR.</p>
Full article ">Figure 2
<p>The structure of the proposed model.</p>
Full article ">Figure 3
<p>The flow of the Kalman filter.</p>
Full article ">Figure 4
<p>The output segmentation results of the FLL frequency discriminator.</p>
Full article ">Figure 5
<p>The FLL filter output error dependence on the CNR value.</p>
Full article ">Figure 6
<p>The CH3 tracking frequency errors of the three models under the open-environment conditions.</p>
Full article ">Figure 7
<p>(<b>a</b>) Velocity error comparison of the three models. (<b>b</b>) Enlarged view of the results presented in the dotted-line bordered region in figure (<b>a</b>).</p>
Full article ">Figure 7 Cont.
<p>(<b>a</b>) Velocity error comparison of the three models. (<b>b</b>) Enlarged view of the results presented in the dotted-line bordered region in figure (<b>a</b>).</p>
Full article ">Figure 8
<p>The standard errors of a unit weight of the velocity error of the three models.</p>
Full article ">Figure 9
<p>The frequency tracking error of the traditional SFLL.</p>
Full article ">Figure 10
<p>The frequency tracking error of the traditional VFLL.</p>
Full article ">Figure 11
<p>The frequency tracking error of the VFLL model based on the robust least squares method.</p>
Full article ">Figure 12
<p>The frequency tracking error of channel CH6.</p>
Full article ">Figure 13
<p>The overall positioning error of the traditional SFLL model.</p>
Full article ">Figure 14
<p>The overall positioning errors of the two VFLL models.</p>
Full article ">Figure 15
<p>The standard errors of a unit weight of the positioning errors of the two models.</p>
Full article ">Figure 16
<p>Positioning accuracy of the three models.</p>
Full article ">Figure 17
<p>The standard errors of a unit weight of the positioning errors of the three models.</p>
8 pages, 1394 KiB  
Technical Note
Deep Neural Networks for the Classification of Pure and Impure Strawberry Purees
by Zhong Zheng, Xin Zhang, Jinxing Yu, Rui Guo and Lili Zhangzhong
Sensors 2020, 20(4), 1223; https://doi.org/10.3390/s20041223 - 23 Feb 2020
Cited by 9 | Viewed by 3534
Abstract
In this paper, a comparative study of the effectiveness of deep neural networks (DNNs) in the classification of pure and impure strawberry purees is conducted. Three types of DNNs—the Gated Recurrent Unit (GRU), the Long Short Term Memory (LSTM), and the temporal convolutional network (TCN)—are employed for the detection of adulteration of strawberry purees. The Strawberry dataset, a time series spectroscopy dataset from the UCR time series classification repository, is utilized to evaluate the performance of the different DNNs. Experimental results demonstrate that the TCN obtains a higher classification accuracy than the GRU and LSTM. Moreover, the TCN achieves a new state-of-the-art classification accuracy on the Strawberry dataset. These results indicate the great potential of using the TCN for the detection of adulteration of fruit purees in the future. Full article
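One reason a TCN suits long spectroscopy sequences is that dilated causal convolutions grow the receptive field exponentially with depth. A back-of-the-envelope sketch, assuming the common TCN configuration of dilation doubling per level with two convolutions per residual block (the paper's exact hyperparameters may differ):

```python
# Receptive field of a generic TCN stack with dilation 2**i at level i.
# kernel_size, levels, and convs_per_level are illustrative assumptions,
# not the paper's reported architecture.

def tcn_receptive_field(kernel_size, levels, convs_per_level=2):
    """Number of input time steps visible to one output time step."""
    return 1 + convs_per_level * (kernel_size - 1) * sum(2 ** i for i in range(levels))

# e.g. kernel 3 with 6 dilation levels covers 253 time steps
print(tcn_receptive_field(3, 6))  # 253
```

A few levels thus suffice to span an entire spectrum, whereas an RNN (GRU/LSTM) must propagate information step by step across the whole sequence.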
Show Figures

Figure 1
<p>The schematic of the Gated Recurrent Unit (GRU) for time series classification (TSC).</p>
Full article ">Figure 2
<p>The schematic of the Long Short Term Memory (LSTM) for TSC.</p>
Full article ">Figure 3
<p>The schematic of the temporal convolutional network (TCN) for TSC.</p>
Full article ">Figure 4
<p>Training losses of the GRU, LSTM, TCN, and MLP.</p>
13 pages, 563 KiB  
Article
Throughput Analysis of Buffer-Aided Decode-and-Forward Wireless Relaying with RF Energy Harvesting
by Phat Huynh, Khoa T. Phan, Bo Liu and Robert Ross
Sensors 2020, 20(4), 1222; https://doi.org/10.3390/s20041222 - 23 Feb 2020
Cited by 3 | Viewed by 2564
Abstract
In this paper, we investigated a buffer-aided decode-and-forward (DF) wireless relaying system over fading channels, where the source and relay harvest radio-frequency (RF) energy from a power station for data transmissions. We derived exact expressions for end-to-end throughput considering half-duplex (HD) and full-duplex (FD) relaying schemes. The numerical results illustrate the throughput and energy efficiencies of the relaying schemes under different self-interference (SI) cancellation levels and relay deployment locations. It was demonstrated that throughput-optimal relaying is not necessarily energy efficiency-optimal. The results provide guidance on optimal relaying network deployment and operation under different performance criteria. Full article
(This article belongs to the Section Internet of Things)
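The end-to-end throughput of a two-hop DF relay is limited by the weaker hop; in HD the non-harvesting time is split between the two hops, while in FD both hops run concurrently but the relay suffers residual self-interference. A hedged sketch of this trade-off using generic Shannon-rate expressions (not the paper's exact derived formulas; all parameter values are invented):

```python
# Generic HD vs. FD DF-relay throughput sketch. alpha is the RF energy
# harvesting time fraction; si is a residual self-interference-to-noise
# ratio at the FD relay. These are illustrative expressions only.
import math

def rate(snr):
    """Shannon rate in bits/s/Hz."""
    return math.log2(1 + snr)

def throughput_hd(alpha, snr_sr, snr_rd):
    """Half duplex: remaining time (1 - alpha) split between the hops."""
    half = (1 - alpha) / 2
    return min(half * rate(snr_sr), half * rate(snr_rd))

def throughput_fd(alpha, snr_sr, snr_rd, si):
    """Full duplex: both hops concurrent, relay SNR degraded by SI."""
    t = 1 - alpha
    return min(t * rate(snr_sr / (1 + si)), t * rate(snr_rd))
```

With perfect SI cancellation FD doubles the HD throughput; as the residual SI grows, the source-to-relay hop degrades and FD can fall below HD, which is the kind of crossover the paper's numerical results examine.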
Show Figures

Figure 1
<p>A wireless communication system consisting of an energy-constrained source and relay.</p>
Full article ">Figure 2
<p>Time allocation in a transmission block: (<b>a</b>) Half duplex, (<b>b</b>) Full duplex.</p>
Full article ">Figure 3
<p>Process to determine optimal <math display="inline"><semantics> <msub> <mi>α</mi> <mn>2</mn> </msub> </semantics></math> and <math display="inline"><semantics> <msub> <mi>α</mi> <mn>3</mn> </msub> </semantics></math> values.</p>
Full article ">Figure 4
<p>End-to-end throughput <math display="inline"><semantics> <mi>τ</mi> </semantics></math> vs. harvesting time factor <math display="inline"><semantics> <msub> <mi>α</mi> <mn>1</mn> </msub> </semantics></math> with <math display="inline"><semantics> <msub> <mi>d</mi> <mn>1</mn> </msub> </semantics></math> = 9, <math display="inline"><semantics> <msub> <mi>d</mi> <mn>3</mn> </msub> </semantics></math> = 10, <math display="inline"><semantics> <mrow> <mi>P</mi> <mo>=</mo> <mn>10</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 5
<p>Energy efficiency <math display="inline"><semantics> <msub> <mi>η</mi> <mrow> <mi>E</mi> <mi>E</mi> </mrow> </msub> </semantics></math> vs. harvesting time factor <math display="inline"><semantics> <msub> <mi>α</mi> <mn>1</mn> </msub> </semantics></math> with <math display="inline"><semantics> <msub> <mi>d</mi> <mn>1</mn> </msub> </semantics></math> = 9, <math display="inline"><semantics> <msub> <mi>d</mi> <mn>3</mn> </msub> </semantics></math> = 10, <math display="inline"><semantics> <mrow> <mi>P</mi> <mo>=</mo> <mn>10</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 6
<p>Maximum energy efficiency vs. distance from source to relay <math display="inline"><semantics> <msub> <mi>d</mi> <mn>1</mn> </msub> </semantics></math> with <math display="inline"><semantics> <msub> <mi>d</mi> <mn>3</mn> </msub> </semantics></math> = 10, <math display="inline"><semantics> <mrow> <mi>P</mi> <mo>=</mo> <mn>10</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 7
<p>Maximum throughput vs. distance from source to relay <math display="inline"><semantics> <msub> <mi>d</mi> <mn>1</mn> </msub> </semantics></math> with <math display="inline"><semantics> <msub> <mi>d</mi> <mn>3</mn> </msub> </semantics></math> = 10, <math display="inline"><semantics> <mrow> <mi>P</mi> <mo>=</mo> <mn>10</mn> </mrow> </semantics></math>.</p>
23 pages, 5750 KiB  
Article
Wireless Sensing of Lower Lip and Thumb-Index Finger ‘Ramp-and-Hold’ Isometric Force Dynamics in a Small Cohort of Unilateral MCA Stroke: Discussion of Preliminary Findings
by Steven Barlow, Rebecca Custead, Jaehoon Lee, Mohsen Hozan and Jacob Greenwood
Sensors 2020, 20(4), 1221; https://doi.org/10.3390/s20041221 - 23 Feb 2020
Cited by 6 | Viewed by 3558
Abstract
Automated wireless sensing of force dynamics during a visuomotor control task was used to rapidly assess residual motor function during finger pinch (right and left hand) and lower lip compression in a cohort of seven adult males with chronic, unilateral middle cerebral artery (MCA) stroke with infarct confirmed by anatomic magnetic resonance imaging (MRI). A matched cohort of 25 neurotypical adult males served as controls. Dependent variables were extracted from digitized records of ‘ramp-and-hold’ isometric contractions to target levels (0.25, 0.5, 1, and 2 Newtons) presented in a randomized block design, and included force reaction time, peak force, and dF/dtmax associated with force recruitment, and end-point accuracy and variability metrics during the contraction hold-phase (mean, SD, criterion percentage ‘on-target’). Maximum voluntary contraction force (MVCF) was also assessed to establish the force operating range. Results based on linear mixed modeling (LMM, adjusted for age and handedness) revealed significant patterns of dissolution in fine force regulation among MCA stroke participants, especially for the contralesional thumb-index finger, followed by the ipsilesional digits and the lower lip. For example, the contralesional thumb-index finger manifested increased reaction time and greater overshoot in peak force during recruitment compared to controls. Impaired force regulation among MCA stroke participants during the contraction hold-phase was associated with significant increases in force SD, and a dramatic reduction in the ability to regulate force output within the prescribed target force window (±5% of target). Impaired force regulation during the contraction hold-phase was greatest in the contralesional hand muscle group, followed by significant dissolution in the ipsilateral digits, with smaller effects found for the lower lip.
These changes in fine force dynamics were accompanied by large reductions in the MVCF, with LMM marginal means for contralesional and ipsilesional pinch forces at just 34.77% (15.93 N vs. 45.82 N) and 66.45% (27.23 N vs. 40.98 N) of control performance, respectively. Biomechanical measures of fine force and MVCF performance in adult stroke survivors provide valuable information on the profile of residual motor function, which can help inform clinical treatment strategies and quantitatively monitor the efficacy of rehabilitation or neuroprotection strategies. Full article
(This article belongs to the Special Issue Sensors for People–Environment Interactions in Health Research)
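The hold-phase endpoint metrics described above (mean force, SD, and criterion percentage of samples inside a ±5% target window) can be sketched directly; the force samples below are invented for illustration:

```python
# Minimal sketch of the hold-phase metrics: mean, population SD, and the
# percentage of samples within a +/-5% window around the target force.
# The sample forces are made up; only the metric definitions come from
# the abstract.

def hold_phase_metrics(forces, target, window=0.05):
    """Return (mean, SD, criterion percentage 'on-target') for a hold phase."""
    n = len(forces)
    mean = sum(forces) / n
    sd = (sum((f - mean) ** 2 for f in forces) / n) ** 0.5
    lo, hi = target * (1 - window), target * (1 + window)
    on_target = 100.0 * sum(lo <= f <= hi for f in forces) / n
    return mean, sd, on_target

forces = [0.98, 1.02, 1.00, 1.10, 0.97]   # Newtons, sampled during a 1 N hold
mean, sd, pct = hold_phase_metrics(forces, target=1.0)
print(mean, sd, pct)  # the 1.10 N sample falls outside the +/-5% window
```

A higher force SD and a lower on-target percentage together capture the dissolution in fine force regulation reported for the stroke cohort.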
Show Figures

Figure 1
<p>Wireless strain gage sensors (upper row). Left thumb-index finger ‘pinch,’ right thumb-index finger pinch, and lower lip midline ‘compression’ in situ. Middle plot panel shows individual ramp-and-hold force trials at 0.25, 0.5, 1, and 2 N target force levels in a waterfall display format for the left thumb-index finger, right thumb-index finger, and lower lip sampled from a neurotypical 63 year-old male. Bottom plot panel shows the same data types for a 66 year-old male R-MCA (right- middle cerebral artery) stroke survivor. Heat maps are shown below each muscle group time series to illustrate the variability between participants in force onset and absolute force amplitude coded by a color scale.</p>
Full article ">Figure 2
<p>Boxplot of force reaction time (s) as a function of muscle group (RF = right finger [contralesional for stroke participants], LF = left finger [ipsilesional for stroke participants], and LL = lower lip), stroke status, and target force. Interquartile boxes (cyan, pink), 95% confidence interval for the median (yellow), ▼ = mean, ● = median. The whiskers represent the ranges for the bottom 25% and the top 25% of the data values, excluding outliers.</p>
Full article ">Figure 3
<p>Boxplot of peak force (N) during recruitment as a function of muscle group (RF = right finger [contralesional for stroke participants], LF = left finger [ipsilesional for stroke participants], and LL = lower lip), stroke status, and target force. Interquartile boxes (cyan, pink), 95% confidence interval for the median (yellow), ▼ = mean, ● = median. The whiskers represent the ranges for the bottom 25% and the top 25% of the data values, excluding outliers.</p>
Full article ">Figure 4
<p>Boxplot of dF/dt<sub>max</sub> (N/s) during recruitment as a function of muscle group (RF = right finger [contralesional for stroke participants], LF = left finger [ipsilesional for stroke participants], and LL = lower lip), stroke status, and target force. Interquartile boxes (cyan, pink), 95% confidence interval for the median (yellow), ▼ = mean, ● = median. The whiskers represent the ranges for the bottom 25% and the top 25% of the data values, excluding outliers.</p>
Full article ">Figure 5
<p>Boxplots of T1 and T2 Hold Phase Mean force (N) as a function of muscle group (RF = right finger [contralesional for stroke participants], LF = left finger [ipsilesional for stroke participants], and LL = lower lip), stroke status, and target force. Interquartile boxes (cyan, pink), 95% confidence interval for the median (yellow), ▼ = mean, ● = median. The whiskers represent the ranges for the bottom 25% and the top 25% of the data values, excluding outliers.</p>
Full article ">Figure 6
<p>Boxplots of T1 and T2 Hold Phase Force Standard Deviation (N) as a function of muscle group (RF = right finger [contralesional for stroke participants], LF = left finger [ipsilesional for stroke participants], and LL = lower lip), stroke status, and target force. Interquartile boxes (cyan, pink), 95% confidence interval for the median (yellow), ▼ = mean, ● = median. The whiskers represent the ranges for the bottom 25% and the top 25% of the data values, excluding outliers.</p>
Full article ">Figure 7
<p>Boxplots of T1 and T2 Hold Phase Criterion Percentage (within a ±5% target force criterion window) as a function of muscle group (RF = right finger [contralesional for stroke participants], LF = left finger [ipsilesional for stroke participants], and LL = lower lip), stroke status, and target force. Interquartile boxes (cyan, pink), 95% confidence interval for the median (yellow), ▼ = mean, ● = median. The whiskers represent the ranges for the bottom 25% and the top 25% of the data values, excluding outliers.</p>
Full article ">Figure 8
<p>Boxplot of maximum voluntary contraction force (MVCF) as a function of stroke status and muscle group (RF = right finger [contralesional for stroke participants], LF = left finger [ipsilesional for stroke participants], and LL = lower lip). Interquartile boxes (cyan, pink), 95% confidence interval for the median (yellow), ▼ = mean, ● = median. The whiskers represent the ranges for the bottom 25% and the top 25% of the data values, excluding outliers.</p>
30 pages, 16199 KiB  
Article
Free-Resolution Probability Distributions Map-Based Precise Vehicle Localization in Urban Areas
by Kyu-Won Kim and Gyu-In Jee
Sensors 2020, 20(4), 1220; https://doi.org/10.3390/s20041220 - 23 Feb 2020
Cited by 9 | Viewed by 4076
Abstract
We propose a free-resolution probability distributions map (FRPDM) and an FRPDM-based precise vehicle localization method using 3D light detection and ranging (LIDAR). An FRPDM is generated by Gaussian mixture modeling based on road-marking and vertical-structure point clouds. Unlike single-resolution or multi-resolution probability distribution maps, the FRPDM has no fixed resolution, and an object can be represented by probability distributions of various sizes. Thus, the shape of an object can be represented efficiently, and the map size is very small (61 KB/km) because each object is effectively represented by a small number of probability distributions. Based on the generated FRPDM, point-to-probability distribution scan matching and feature-point matching were performed to obtain the measurements, and the position and heading of the vehicle were derived using an extended Kalman filter-based navigation filter. The experimental area is the Gangnam area of Seoul, South Korea, which has many buildings around the road. The root mean square (RMS) position errors for the lateral and longitudinal directions were 0.057 m and 0.178 m, respectively, and the RMS heading error was 0.281°. Full article
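The core operation behind point-to-probability distribution scan matching is associating each LIDAR point with its nearest Gaussian component, for example by Mahalanobis distance. A minimal 2D sketch follows (the component parameters are invented, and the paper's actual matching and EKF update are more involved):

```python
# Illustrative point-to-distribution association in 2D: each map object is
# a Gaussian (mean, covariance), and a scan point is assigned to the
# component with the smallest squared Mahalanobis distance.

def mahalanobis2(p, mean, cov):
    """Squared Mahalanobis distance of 2D point p to N(mean, cov)."""
    dx, dy = p[0] - mean[0], p[1] - mean[1]
    (a, b), (c, d) = cov
    det = a * d - b * c
    # closed-form inverse of the 2x2 covariance
    ia, ib, ic, id_ = d / det, -b / det, -c / det, a / det
    return dx * (ia * dx + ib * dy) + dy * (ic * dx + id_ * dy)

def associate(point, components):
    """Index of the Gaussian component nearest to the point."""
    d2 = [mahalanobis2(point, m, c) for m, c in components]
    return d2.index(min(d2))

# two invented map components: an isotropic one and an elongated one
components = [((0.0, 0.0), ((1.0, 0.0), (0.0, 1.0))),
              ((5.0, 0.0), ((0.5, 0.0), (0.0, 2.0)))]
print(associate((4.5, 0.2), components))  # matched to the second component
```

Because a free-resolution map lets each component's covariance match the object's actual extent, this distance is meaningful for both small road markings and large building facades.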
Show Figures

Figure 1
<p>Comparison of probability distribution map (PDM) generation result: (<b>a</b>) Point cloud, (<b>b</b>) single resolution PDM (2 m), (<b>c</b>) single resolution PDM (1 m), and (<b>d</b>) free-resolution probability distributions map (FRPDM).</p>
Full article ">Figure 2
<p>Process of Vehicle Localization.</p>
Full article ">Figure 3
<p>Flowchart of the FRPDM generation process.</p>
Full article ">Figure 4
<p>Map trajectory of an urban area.</p>
Full article ">Figure 5
<p>Result of pose graph optimization.</p>
Full article ">Figure 6
<p>Road surface extraction using height filter.</p>
Full article ">Figure 7
<p>Result of road-marking extraction.</p>
Full article ">Figure 8
<p>Result of line fitting using iterative-end-point-fit (IEPF) algorithm.</p>
Full article ">Figure 9
<p>Pseudocode of outlier removal algorithm.</p>
Full article ">Figure 10
<p>Result of vertical structure line extraction.</p>
Full article ">Figure 11
<p>Road-marking and vertical structure extraction results.</p>
Full article ">Figure 12
<p>Result of outlier removal using the occupancy grid filter.</p>
Full article ">Figure 13
<p>Result of object clustering.</p>
Full article ">Figure 14
<p>Pseudocode of the expectation–maximization (EM) algorithm for Gaussian mixture modeling (GMM).</p>
Full article ">Figure 15
<p>Example of increasing the number of probability distributions.</p>
Full article ">Figure 16
<p>Result of probability distributions transform.</p>
Full article ">Figure 17
<p>Pseudocode of algorithm for removal of overlapped probability distributions.</p>
Full article ">Figure 18
<p>Result of overlapped probability distribution removal method. (<b>a</b>) Case of the smaller probability distribution inside a probability distribution, (<b>b</b>) Case of the probability distributions overlap with each other.</p>
Full article ">Figure 19
<p>Flowchart of the precise vehicle localization process based on the FRPDM.</p>
Full article ">Figure 20
<p>Process of point cloud accumulation.</p>
Full article ">Figure 21
<p>Pseudocode of point-to-probability distribution scan matching.</p>
Full article ">Figure 22
<p>Pseudocode of point-to-probability distribution data association.</p>
Full article ">Figure 23
<p>Result of Point-to-Probability Distribution Scan Matching: (<b>a</b>) Single resolution PDM (2 m), (<b>b</b>) FRPDM.</p>
Full article ">Figure 24
<p>Example of point cloud to probability distribution conversion.</p>
Full article ">Figure 25
<p>Pseudocode of probability distribution data association.</p>
Full article ">Figure 26
<p>Error of the GPS/DR in urban area.</p>
Full article ">Figure 27
<p>Example of map matching result when initial position error is large. (<b>a</b>) Single resolution PDM (2 m), (<b>b</b>) FRPDM.</p>
Full article ">Figure 28
<p>Flowchart of error correction of initial vehicle position.</p>
Full article ">Figure 29
<p>Result of error correction of initial vehicle position.</p>
Full article ">Figure 30
<p>Experimental configuration: (<b>a</b>) Vehicle platform, (<b>b</b>) 3D light detection and ranging (3D LIDAR) (Velodyne HDL-32E) specification.</p>
Full article ">Figure 31
<p>Position error (map type). (<b>a</b>) Lateral position error, (<b>b</b>) Longitudinal position error.</p>
Full article ">Figure 32
<p>Heading error (map type).</p>
Full article ">Figure 33
<p>GPS/DR increment error.</p>
Full article ">Figure 34
<p>Position error (map matching method). (<b>a</b>) Lateral position error, (<b>b</b>) Longitudinal position error.</p>
Full article ">Figure 35
<p>Heading error (map matching method).</p>
Full article ">Figure 36
<p>Comparison of Map Matching Result: (<b>a</b>) Scan Matching; (<b>b</b>) Scan Matching + Feature–Point Matching.</p>
14 pages, 2790 KiB  
Article
A Highly Sensitive Piezoresistive Pressure Sensor Based on Graphene Oxide/Polypyrrole@Polyurethane Sponge
by Bing Lv, Xingtong Chen and Chunguo Liu
Sensors 2020, 20(4), 1219; https://doi.org/10.3390/s20041219 - 23 Feb 2020
Cited by 51 | Viewed by 6147
Abstract
In this work, polyurethane sponge is employed as the structural substrate of the sensor. Graphene oxide (GO) and polypyrrole (PPy) are alternately coated on the sponge fiber skeleton by charge-driven layer-by-layer (LBL) assembly to form a multilayer composite conductive layer, yielding the piezoresistive sensors. The 2D GO sheets help form the GO layers and separate the PPy layers. The prepared GO/PPy@PU (polyurethane) conductive sponges retained high compressibility. The unique fragmental microstructure and synergistic effect gave the sensor a high sensitivity of 0.79 kPa−1. The sensor could detect pressures as low as 75 Pa, exhibited a response time of less than 70 ms and reproducibility over 10,000 cycles, and could be used for different types of motion detection. This work opens up new opportunities for GO/PPy composites in high-performance piezoresistive sensors and other electronic devices. Full article
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1
<p>(<b>a</b>) Schematic diagram for preparation of graphene oxide/polypyrrole@polyurethane (GO/PPy@PU) conductive sponges. (<b>b</b>) Photographs of the initial PU sponge and (<b>c</b>) GO/PPy@PU conductive sponge. (<b>d</b>,<b>e</b>) Scanning electron microscope (SEM) images of the initial PU sponge. (<b>f</b>–<b>h</b>) SEM images of GO/PU sponge, GO/Py@PU sponge, GO/PPy@PU conductive sponge.</p>
Full article ">Figure 2
<p>(<b>a</b>) Schematic diagram of the GO/PPy synthesis process. (<b>b</b>,<b>c</b>) SEM images of GO/PPy@PU conductive sponge with one and two dipping coatings. (<b>d</b>) Initial resistance of the sponge sensors with different dipping times.</p>
Full article ">Figure 3
<p>(<b>a</b>,<b>b</b>) Photographs of the maximum compressibility of GO/PPy@PU conductive sponge. (<b>c</b>) The numerical and experimental results of stress–strain curves of GO/PPy@PU with 0–75% strain. (<b>d</b>) Cyclic stress–strain curves of GO/PPy@PU.</p>
Full article ">Figure 4
<p>(<b>a</b>,<b>b</b>) SEM images of the microcrack caused by small strain on the conductive skeleton, magnification: (<b>a</b>) 3000× and (<b>b</b>) 10,000×. (<b>c</b>,<b>d</b>) Schematic diagram of the large deformation GO/PPy@PU conductive sponge. (<b>e</b>) Schematic diagram of GO/PPy@PU conductive sponge skeleton contact during compression.</p>
Full article ">Figure 5
<p>(<b>a</b>) Schematic diagram of the manufacture of sponge sensor. (<b>b</b>) The numerical and experimental results of relative resistance change (ΔR/R<sub>0</sub>) of the GO/PPy@PU conductive sponge and compressive strain and corresponding GF changes with two dips of coating. (<b>c</b>) Performance of different samples with two dipping coatings.</p>
Full article ">Figure 6
<p>(<b>a</b>) The numerical and experimental results of the relative resistance change (ΔR/R<sub>0</sub>) of the GO/PPy@PU conductive sponge with two to five times of dipping the coating and gradually increasing pressure. (<b>b</b>) Hysteresis curves for the GO/PPy@PU sponge sensor.</p>
Full article ">Figure 7
<p>(<b>a</b>) Cyclic curves of relative resistance change of the GO/PPy@PU sponge sensor under low strain. (<b>b</b>) Cyclic curves of relative resistance change of the GO/PPy@PU sponge sensor under high strain. (<b>c</b>) The hysteresis test of the GO/PPy@PU sponge sensor with a response time &lt;70 ms. (<b>d</b>) Cyclic stability test of GO/PPy@PU sponge sensor for 10,000 cycles at 45% strain.</p>
Full article ">Figure 8
<p>(<b>a</b>) A photograph of the GO/PPy@PU sponge sensor fixed to the tester’s throat. (<b>b</b>) Characteristic current curves during pronouncing different words, (<b>c</b>) swallowing saliva, coughing. (<b>d</b>) A photograph of the GO/PPy@PU sponge sensor fixed to the wrist. (<b>e</b>) Characteristic current curve of pulse monitoring. (<b>f</b>) Characteristic current curve of finger joint bending.</p>
Full article ">
13 pages, 4688 KiB  
Article
Effective Light Beam Modulation by Chirp IDT on a Suspended LiNbO3 Membrane for 3D Holographic Displays
by Yongbeom Lee and Keekeun Lee
Sensors 2020, 20(4), 1218; https://doi.org/10.3390/s20041218 - 23 Feb 2020
Cited by 1 | Viewed by 3815
Abstract
An acousto-optic (AO) holographic display unit based on a suspended waveguide membrane was developed. The AO unit consists of a wide bandwidth chirp interdigital transducer (IDT) on a 20 µm thick suspended crystalline 128° YX LiNbO3 membrane, a light blocker with a [...] Read more.
An acousto-optic (AO) holographic display unit based on a suspended waveguide membrane was developed. The AO unit consists of a wide bandwidth chirp interdigital transducer (IDT) on a 20 µm thick suspended crystalline 128° YX LiNbO3 membrane, a light blocker with a 20 µm hole near the entrance, and an active lens near the exit. The 20 µm thickness of the floating membrane significantly enhanced surface acoustic wave (SAW) confinement. The light blocker was installed in front of the AO unit to enhance the coupling efficiency of the incident light to the waveguide membrane and to remove perturbations to the photodetector during measurement at the exit region. The active lens was vertically attached to the waveguide sidewall to collect the diffracted beam without loss and to modulate the focal length in free space through the applied voltage. As SAWs were radiated from the IDT, a Bragg grating with a periodic refractive index was formed along the waveguide membrane. The grating diffracted the incident light. The deflection angle and phase, and the intensity of the light beam were controlled by the SAW frequency and input power, respectively. The maximum diffraction efficiency achieved was approximately 90% for a 400 MHz SAW. COMSOL simulation and coupling-of-modes (COM) modeling were performed to optimize design parameters and predict device performance. Full article
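The frequency-controlled deflection described above follows the first-order Bragg condition sin θ_B = λf/(2v), where Λ = v/f is the grating period written by a SAW of frequency f. A sketch of that relation on assumed values (the ~3980 m/s SAW velocity for 128° YX LiNbO3 and the 633 nm probe wavelength are assumptions for illustration, not figures from the paper):

```python
import math

# Bragg condition for an acousto-optic grating: the SAW of frequency f forms a
# grating with period Lambda = v / f, so sin(theta_B) = lambda * f / (2 * v),
# i.e. the deflection angle grows with SAW frequency, as the paper reports.
def bragg_angle_deg(wavelength_m, saw_freq_hz, saw_velocity_mps=3980.0):
    grating_period = saw_velocity_mps / saw_freq_hz   # Lambda = v / f
    return math.degrees(math.asin(wavelength_m / (2 * grating_period)))

# Deflection angle versus SAW frequency for an assumed 633 nm beam.
for f_mhz in (200, 300, 400):
    print(f_mhz, "MHz ->", round(bragg_angle_deg(633e-9, f_mhz * 1e6), 3), "deg")
```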
(This article belongs to the Special Issue Advances in Surface Acoustic Wave Sensors)
Show Figures

Figure 1
<p>Overall schematic view of the developed AO system. (<b>a</b>) 3D view and (<b>b</b>) cross-sectional (artificial) view.</p>
Full article ">Figure 2
<p>(<b>a</b>) Simulated AO unit using COMSOL and (<b>b</b>) surface displacement in terms of waveguide thickness and frequency.</p>
Full article ">Figure 3
<p>Schematic views of (<b>a</b>) slanted IDT (<b>b</b>) chirp IDT for COM modeling.</p>
Full article ">Figure 4
<p>Passband responses of the slanted and chirp IDTs with respect to frequencies obtained from COM modeling.</p>
Full article ">Figure 5
<p>Fabrication procedure. (<b>a</b>) LN waveguide bonding onto Si wafer, (<b>b</b>) Al patterns for wide bandwidth IDTs, (<b>c</b>) PR patterns for masking and protection layer during DRIE process, (<b>d</b>) DRIE, (<b>e</b>) cleaning with microstriper, (<b>f</b>) light blocker assembling for effective light coupling, (<b>g</b>) Al patterns for lens, (<b>h</b>) SU-8 spin coating and then UV exposure via proximity lithography, (<b>i</b>) developing process to form hemisphere shape of the polymer, (<b>j</b>) Al patterns, (<b>k</b>) PZT patterns via RF sputter, and (<b>l</b>) Al patterns for sandwiched PZT by two electrodes and lift-off.</p>
Full article ">Figure 6
<p>Optical images of the fabricated (<b>a</b>) slanted and (<b>b</b>) chirp IDTs. (<b>c</b>) Cross-sectional view of the LN membrane via SEM.</p>
Full article ">Figure 7
<p>Experimental insertion loss (S<sub>21</sub>) vs. frequency in terms of LN thicknesses from (<b>a</b>) slanted IDT and (<b>b</b>) chirp IDT.</p>
Full article ">Figure 8
<p>(<b>a</b>) Exit view from CCD camera without incident light beam (<b>b</b>) the zeroth order light beam at exit without SAW perturbation.</p>
Full article ">Figure 9
<p>(<b>a</b>) CCD views by applying different frequencies from chirp IDTs. (<b>b</b>) Contour plot and color mapping with respect to different SAW frequencies. (<b>c</b>) Diffraction angle vs. SAW frequency in terms of the wavelength of the incident light beam.</p>
Full article ">Figure 10
<p>(<b>a</b>) Measurement setup for the diffraction efficiency after placing a photodetector at the first order beam position and (<b>b</b>) output voltage of photodetector before and after beam diffraction.</p>
Full article ">Figure 11
<p>(<b>a</b>) Optical and magnified views for active lens testing and (<b>b</b>) visual view for AO stacks and lens array to form holographic video.</p>
Full article ">
15 pages, 3546 KiB  
Article
Graph Constraint and Collaborative Representation Classifier Steered Discriminative Projection with Applications for the Early Identification of Cucumber Diseases
by Yuhua Li, Fengjie Wang, Ye Sun and Yingxu Wang
Sensors 2020, 20(4), 1217; https://doi.org/10.3390/s20041217 - 23 Feb 2020
Cited by 8 | Viewed by 2881
Abstract
Accurate, rapid and non-destructive disease identification in the early stage of infection is essential to ensure the safe and efficient production of greenhouse cucumbers. Nevertheless, the effectiveness of most existing methods relies on the disease already exhibiting obvious symptoms in the middle to [...] Read more.
Accurate, rapid and non-destructive disease identification in the early stage of infection is essential to ensure the safe and efficient production of greenhouse cucumbers. Nevertheless, the effectiveness of most existing methods relies on the disease already exhibiting obvious symptoms in the middle to late stages of infection. Therefore, this paper presents an early identification method for cucumber diseases based on the techniques of hyperspectral imaging and machine learning, which consists of two procedures. First, reconstruction fidelity terms and graph constraints are constructed based on the decision criterion of the collaborative representation classifier and the desired spatial distribution of spectral curves (391 to 1044 nm), respectively. The former constrains the same-class and different-class reconstruction residuals while the latter constrains the weighted distances between spectral curves. They are further fused to steer the design of an offline algorithm. The algorithm aims to train a linear discriminative projection to transform the original spectral curves into a low dimensional space, where the projected spectral curves of different diseases exhibit better separation trends. Then, the collaborative representation classifier is utilized to achieve online early diagnosis. Five experiments were performed on the hyperspectral data collected in the early infection stage of cucumber anthracnose and Corynespora cassiicola diseases. Experimental results demonstrated that the proposed method was feasible and effective, providing a maximal identification accuracy of 98.2% and an average online identification time of 0.65 ms. The proposed method has a promising future in practical production due to its high diagnostic accuracy and short diagnosis time. Full article
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1
<p>The flowchart of the CRC-steered discriminative projection learning method (CRC-DP).</p>
Full article ">Figure 2
<p>(<b>a</b>) Two classes of data; (<b>b</b>) 100 points from each class; (<b>c</b>) the one-dimensional results using different DR methods.</p>
Full article ">Figure 3
<p>The variances of 13 features on the wine data.</p>
Full article ">Figure 4
<p>The two-dimensional results of the wine data using different dimension-reduction (DR) methods: (<b>a</b>) CRC-steered discriminative projection learning method (CRC-DP); (<b>b</b>) locality preserving projection (LPP); (<b>c</b>) neighborhood preserving embedding (NPE); (<b>d</b>) principal component analysis (PCA); (<b>e</b>) sparsity preserving projection (SPP).</p>
Full article ">Figure 5
<p>The coverage areas of spectral curves corresponding to different diseases: (<b>a</b>) normal, anthracnose and <span class="html-italic">Corynespora cassiicola</span>; (<b>b</b>) anthracnose and <span class="html-italic">Corynespora cassiicola</span>; (<b>c</b>) normal and <span class="html-italic">Corynespora cassiicola</span>; (<b>d</b>) normal and anthracnose.</p>
Full article ">Figure 6
<p>The identification accuracy versus the reduced sample dimension <span class="html-italic">m</span>.</p>
Full article ">Figure 7
<p>(<b>a</b>) The collaborative representation coefficients of a query sample from the first class; (<b>b</b>) the reconstruction residuals corresponding to each disease.</p>
Full article ">Figure 8
<p>(<b>a</b>) Comparison results of the CRC-DP method with and without graph constraint; (<b>b</b>) identification accuracy versus the enrollment size by CRC-DP method.</p>
Full article ">
13 pages, 8105 KiB  
Article
Mechatronics and Remote Driving Control of the Drive-by-Wire for a Go Kart
by Chien-Hsun Wu, Wei-Chen Lin and Kun-Sheng Wang
Sensors 2020, 20(4), 1216; https://doi.org/10.3390/s20041216 - 23 Feb 2020
Cited by 2 | Viewed by 5768
Abstract
This research mainly aims at the construction of the novel acceleration pedal, the brake pedal and the steering system by mechanical designs and mechatronics technologies, an approach of which is rarely seen in Taiwan. Three highlights can be addressed: 1. The original steering [...] Read more.
This research mainly aims at the construction of the novel acceleration pedal, the brake pedal and the steering system by mechanical designs and mechatronics technologies, an approach rarely seen in Taiwan. Three highlights can be addressed: 1. The original steering parts were removed, with a fault-tolerance design implemented so that the basic steering function still remains in case the control system fails. 2. A larger steering angle of the front wheels in response to a specific rotated angle of the steering wheel is devised when cornering or parking at low speed in the interest of drivability, while a smaller one is designed at high speed in favor of driving stability. 3. The operating patterns of the throttle, brake, and steering wheel can be customized in accordance with various driving environments and drivers’ requirements using the self-developed software. The implementation of a steer-by-wire system in the remote driving control for a go kart is described in this study. The mechatronic system is designed to support the conversion from human driving to autonomous driving for the go kart in the future. The go kart, using machine vision, is wirelessly controlled in the WiFi frequency bands. The steer-by-wire system was initially modeled as a standalone system for one wheel and subsequently developed into its complete form, including front wheel steering components, acceleration components, brake components, a microcontroller, a drive circuit and a digital-to-analog converter. The control output section delivers the commands to the subsystem controllers, relays and converters. The remote driving control of the go kart is activated when proper commands are sent by the vehicle control unit (VCU). All simulation and experiment results demonstrated that the control strategies of the dual motors and the VCU control were successfully optimized.
The feasibility study and performance evaluation of Taiwan’s go karts will be conducted as an extension of this study in the near future. Full article
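Highlight 2 above is a speed-dependent steering ratio: at low speed a given steering-wheel input commands a larger front-wheel angle (easy cornering and parking), while at high speed the same input commands a smaller one (stability). A sketch of such a mapping; the ratio curve below is a made-up linear interpolation, not the authors' calibration:

```python
# Illustrative speed-dependent steer-by-wire mapping. All numbers (ratios,
# speed breakpoints) are hypothetical, chosen only to show the behavior:
# a 90-degree wheel input steers more sharply at 5 km/h than at 100 km/h.
def front_wheel_angle_deg(steering_wheel_deg, speed_kph,
                          ratio_low=10.0, ratio_high=20.0,
                          v_low=10.0, v_high=80.0):
    if speed_kph <= v_low:
        ratio = ratio_low                       # small ratio -> sharp steering
    elif speed_kph >= v_high:
        ratio = ratio_high                      # large ratio -> gentle steering
    else:
        t = (speed_kph - v_low) / (v_high - v_low)
        ratio = ratio_low + t * (ratio_high - ratio_low)
    return steering_wheel_deg / ratio

print(front_wheel_angle_deg(90, 5))    # parking: 9.0 deg
print(front_wheel_angle_deg(90, 100))  # highway: 4.5 deg
```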
(This article belongs to the Special Issue Selected Papers from IEEE ICKII 2019)
Show Figures

Graphical abstract
Full article ">Figure 1
<p>The working principle of the acceleration/brake-by-wire system.</p>
Full article ">Figure 2
<p>The schematic configuration of the acceleration/brake-by-wire system.</p>
Full article ">Figure 3
<p>The schematic configuration of the acceleration/brake-by-wire system.</p>
Full article ">Figure 4
<p>The schematic configuration of the acceleration/brake-by-wire system.</p>
Full article ">Figure 5
<p>Schematic configuration of the steer-by-wire system by ideal design: (<b>a</b>) neutral position; (<b>b</b>) extreme position.</p>
Full article ">Figure 6
<p>(<b>a</b>) Motor mount vs. stress for the steer-by-wire system; (<b>b</b>) bracket vs. stress for the steer-by-wire system.</p>
Full article ">Figure 7
<p>Mechanical design of the steer-by-wire system.</p>
Full article ">Figure 8
<p>The control configuration of the remote controller.</p>
Full article ">Figure 9
<p>The schematic configuration of the remote controller.</p>
Full article ">Figure 10
<p>The schematic configuration of the vehicle control unit.</p>
Full article ">Figure 11
<p>The control flow of the vehicle control unit.</p>
Full article ">Figure 12
<p>(<b>a</b>) The hardware configuration of steer-by-wire in a real go kart; (<b>b</b>) hardware configuration of drive-by-wire using the steering wheel, accelerator and brake pedal.</p>
Full article ">Figure 13
<p>(<b>a</b>) Accelerator voltage vs. time for the drive-by-wire; (<b>b</b>) brake pedal voltage vs. time for the drive-by-wire.</p>
Full article ">Figure 14
<p>Steering wheel angle vs. time for the drive-by-wire.</p>
Full article ">Figure 15
<p>The system verification of drive-by-wire: (<b>a</b>) remote control to go straight; (<b>b</b>) remote control to turn left; (<b>c</b>) remote control to turn right under the passenger mode.</p>
Full article ">
19 pages, 981 KiB  
Article
An Efficient, Anonymous and Robust Authentication Scheme for Smart Home Environments
by Soumya Banerjee, Vanga Odelu, Ashok Kumar Das, Samiran Chattopadhyay and Youngho Park
Sensors 2020, 20(4), 1215; https://doi.org/10.3390/s20041215 - 22 Feb 2020
Cited by 57 | Viewed by 5051
Abstract
In recent years, the Internet of Things (IoT) has exploded in popularity. The smart home, as an important facet of IoT, has gained its focus for smart intelligent systems. As users communicate with smart devices over an insecure communication medium, the sensitive information [...] Read more.
In recent years, the Internet of Things (IoT) has exploded in popularity. The smart home, as an important facet of IoT, has become a focus for intelligent systems. As users communicate with smart devices over an insecure communication medium, the sensitive information exchanged among them becomes vulnerable to an adversary. Thus, there is a strong push to develop an anonymous authentication scheme to provide secure communication for smart home environments. Most recently, an anonymous authentication scheme for smart home environments with provable security has been proposed in the literature. In this paper, we analyze the recent scheme to highlight several of its vulnerabilities. We then address the security drawbacks and present a more secure and robust authentication scheme that overcomes the drawbacks found in the analyzed scheme, while retaining its advantages. Finally, through a detailed comparative study, we demonstrate that the proposed scheme provides significantly better security and more functionality features, with communication and computational overheads comparable to similar schemes. Full article
Show Figures

Figure 1
<p>A typical smart home architecture (adapted from [<a href="#B7-sensors-20-01215" class="html-bibr">7</a>]).</p>
Full article ">Figure 2
<p>Summary of user registration.</p>
Full article ">Figure 3
<p>Summary of login and authentication phase.</p>
Full article ">Figure 4
<p>The simulation results under OFMC &amp; CL-AtSe back-ends.</p>
Full article ">Figure 5
<p>(<b>a</b>) Throughput (bytes per second) (<b>b</b>) End-to-end delay (seconds).</p>
Full article ">
16 pages, 2130 KiB  
Article
Detecting Respiratory Pathologies Using Convolutional Neural Networks and Variational Autoencoders for Unbalancing Data
by María Teresa García-Ordás, José Alberto Benítez-Andrades, Isaías García-Rodríguez, Carmen Benavides and Héctor Alaiz-Moretón
Sensors 2020, 20(4), 1214; https://doi.org/10.3390/s20041214 - 22 Feb 2020
Cited by 82 | Viewed by 7461
Abstract
The aim of this paper was the detection of pathologies through respiratory sounds. The ICBHI (International Conference on Biomedical and Health Informatics) Benchmark was used. This dataset is composed of 920 sounds of which 810 are of chronic diseases, 75 of non-chronic diseases [...] Read more.
The aim of this paper was the detection of pathologies through respiratory sounds. The ICBHI (International Conference on Biomedical and Health Informatics) Benchmark was used. This dataset is composed of 920 sounds of which 810 are of chronic diseases, 75 of non-chronic diseases and only 35 of healthy individuals. As more than 88% of the samples of the dataset are from the same class (Chronic), the use of a Variational Convolutional Autoencoder, together with other well-known oversampling techniques, was proposed to generate new labeled data after determining that the dataset classes are unbalanced. Once the preprocessing step was carried out, a Convolutional Neural Network (CNN) was used to classify the respiratory sounds into healthy, chronic, and non-chronic disease. In addition, we carried out a more challenging classification trying to distinguish between the different types of pathologies or healthy: URTI, COPD, Bronchiectasis, Pneumonia, and Bronchiolitis. We achieved results up to 0.993 F-Score in the three-label classification and 0.990 F-Score in the more challenging six-class classification. Full article
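The oversampling step above relies on the VAE's reparameterization trick: the encoder's per-sample output (μ, log σ²) defines a Gaussian in latent space, and new latent vectors z = μ + σ·ε are decoded into synthetic minority-class spectrograms. A sketch of just the sampling step; the encoder outputs below are stand-in numbers, not the paper's convolutional networks:

```python
import numpy as np

# Reparameterization step of a VAE used for data augmentation: draw
# z = mu + exp(0.5 * log_var) * eps with eps ~ N(0, I). Each z would then be
# passed through the (omitted) decoder to produce a new labeled spectrogram.
def sample_latent(mu, log_var, n_new, rng):
    eps = rng.standard_normal((n_new, mu.shape[0]))
    return mu + np.exp(0.5 * log_var) * eps   # z = mu + sigma * eps

rng = np.random.default_rng(42)
mu = np.array([0.5, -1.0, 2.0])   # hypothetical encoder means for one sample
log_var = np.full(3, -2.0)        # sigma = exp(-1) ~ 0.37 per latent dimension
z = sample_latent(mu, log_var, n_new=1000, rng=rng)
print(z.shape)                    # (1000, 3): 1000 new latent vectors
```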
(This article belongs to the Special Issue Sensor and Systems Evaluation for Telemedicine and eHealth)
Show Figures

Figure 1
<p>In (<b>a</b>), a Variational AutoEncoder (VAE) scheme with the mean and standard deviation layers used to sample the latent vector. In (<b>b</b>), the vanilla autoencoder with a simple latent vector.</p>
Full article ">Figure 2
<p>A vanilla Convolutional Neural Network (CNN) representation.</p>
Full article ">Figure 3
<p>The sounds were recorded from seven different locations remarked in red.</p>
Full article ">Figure 4
<p>Examples of the Mel Spectrograms for the chronic, non-chronic and healthy classes after preprocessing.</p>
Full article ">Figure 5
<p>VAE scheme configuration for data augmentation.</p>
Full article ">Figure 6
<p>In (<b>a</b>), the new images generated using the VAE. In (<b>b</b>), the variation between the original and the generated images.</p>
Full article ">Figure 7
<p>(<b>a</b>) Confusion matrix of the unbalanced dataset. (<b>b</b>) Confusion matrix of the unbalanced dataset with weights in the training. (<b>c</b>) Confusion matrix of the balanced dataset using our proposed scheme.</p>
Full article ">Figure 8
<p>Comparison between our proposed method and the best results in the state-of-the-art using the International Conference on Biomedical and Health Informatics (ICBHI) dataset.</p>
Full article ">Figure 9
<p>(<b>a</b>) Confusion matrix of the unbalanced dataset. (<b>b</b>) Confusion matrix of the unbalanced dataset with weights in the training. (<b>c</b>) Confusion matrix of the balanced dataset using our proposed scheme with VAE.</p>
Full article ">
18 pages, 803 KiB  
Article
An Identity Authentication Method of a MIoT Device Based on Radio Frequency (RF) Fingerprint Technology
by Qiao Tian, Yun Lin, Xinghao Guo, Jin Wang, Osama AlFarraj and Amr Tolba
Sensors 2020, 20(4), 1213; https://doi.org/10.3390/s20041213 - 22 Feb 2020
Cited by 25 | Viewed by 6455
Abstract
With the continuous development of science and engineering technology, our society has entered the era of the mobile Internet of Things (MIoT). MIoT refers to the combination of advanced manufacturing technologies with the Internet of Things (IoT) to create a flexible digital manufacturing [...] Read more.
With the continuous development of science and engineering technology, our society has entered the era of the mobile Internet of Things (MIoT). MIoT refers to the combination of advanced manufacturing technologies with the Internet of Things (IoT) to create a flexible digital manufacturing ecosystem. The wireless communication technology in the Internet of Things is a bridge between mobile devices. Therefore, the introduction of machine learning (ML) algorithms into MIoT wireless communication has become a research direction of growing interest. However, the traditional key-based wireless communication method has security problems and cannot meet the security requirements of the MIoT. Based on the research on the communication of the physical layer and the support vector data description (SVDD) algorithm, this paper establishes a radio frequency fingerprint (RFF or RF fingerprint) authentication model for a communication device. The communication device in the MIoT is accurately and efficiently identified by extracting the radio frequency fingerprint of the communication signal. In the simulation experiment, this paper introduces the neighborhood component analysis (NCA) method and the SVDD method to establish a communication device authentication model. At a signal-to-noise ratio (SNR) of 15 dB, the authentic-device authentication success rate (ASR) and the rogue-device detection success rate (RSR) both reach 90%. Full article
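The SVDD decision above amounts to enclosing the authentic device's fingerprint features in a hypersphere and flagging queries that fall outside it as rogue. A real SVDD solves a quadratic program for the center and radius; the self-contained stand-in below uses the feature mean as center and the 95th-percentile training distance as radius, which keeps the accept/reject logic while remaining a simplification (synthetic features, not RF data):

```python
import numpy as np

# Simplified hypersphere authenticator in the spirit of SVDD: a query is
# authentic if its distance to the enclosing sphere's center is within the
# learned radius. Center/radius here come from simple statistics, not the
# SVDD quadratic program.
class SphereAuthenticator:
    def fit(self, X):
        self.center = X.mean(axis=0)
        d = np.linalg.norm(X - self.center, axis=1)
        self.radius = np.quantile(d, 0.95)   # tolerate 5% training outliers
        return self

    def is_authentic(self, x):
        return np.linalg.norm(x - self.center) <= self.radius

rng = np.random.default_rng(1)
authentic = rng.normal(0.0, 0.2, (200, 8))   # synthetic fingerprint features
rogue = rng.normal(1.5, 0.2, 8)              # a device with different features
auth = SphereAuthenticator().fit(authentic)
inside = sum(auth.is_authentic(x) for x in authentic)
print(inside, auth.is_authentic(rogue))
```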
(This article belongs to the Special Issue Security and Privacy in Wireless Sensor Network)
Show Figures

Figure 1
<p>A support vector data description geometric model.</p>
Full article ">Figure 2
<p>The experimental setup.</p>
Full article ">Figure 3
<p>Transient signal waveforms of four wireless devices.</p>
Full article ">Figure 4
<p>The radio frequency fingerprint (RFF) feature generation process. Neighborhood component analysis (NCA).</p>
Full article ">Figure 5
<p>Schematic diagram of the distance between the sample point and the hypersphere. Signal-to-noise ratio (SNR).</p>
Full article ">Figure 6
<p>The process of the RFF authentication.</p>
Full article ">Figure 7
<p>The specific form of the RFF authentication model. Support vector data description (SVDD).</p>
Full article ">Figure 8
<p>Authentic devices authentication success rate.</p>
Full article ">Figure 9
<p>Rogue device detection success rate.</p>
Full article ">
18 pages, 5985 KiB  
Article
Detecting of the Longitudinal Grouting Quality in Prestressed Curved Tendon Duct Using Piezoceramic Transducers
by Tianyong Jiang, Bin He, Yaowen Zhang and Lei Wang
Sensors 2020, 20(4), 1212; https://doi.org/10.3390/s20041212 - 22 Feb 2020
Cited by 14 | Viewed by 3518
Abstract
To understand the characteristics of longitudinal grouting quality, this paper developed a stress wave-based active sensing method using piezoceramic transducers to detect longitudinal grouting quality of the prestressed curved tendon ducts. There were four lead zirconate titanate (PZT) transducers installed in the same [...] Read more.
To understand the characteristics of longitudinal grouting quality, this paper developed a stress wave-based active sensing method using piezoceramic transducers to detect the longitudinal grouting quality of prestressed curved tendon ducts. Four lead zirconate titanate (PZT) transducers were installed in the same longitudinal plane. One of them, mounted on the bottom of the curved tendon duct, served as an actuator generating stress waves. The other three, bonded to the top of the curved tendon duct, served as sensors detecting the wave responses. The experimental process was divided into five states during the grouting: 0%, 50%, 75%, 90%, and 100% grouting. The voltage signals, power spectral density (PSD) energy and wavelet packet energy were adopted in this research. Experimental results showed that the amplitudes of all the above analysis indicators were small before the grouting reached 90%; only when the grouting reached 100% did these parameters increase significantly. The results of the different longitudinal PZT sensors were mainly determined by the distance from the actuator, the position of the grouting holes, and the fluidity of the grouting materials. These results showed that the longitudinal grouting quality can be effectively evaluated by analyzing the differences between the signals received by the PZT transducers in the curved tendon duct. The devised method has practical value for detecting the longitudinal grouting quality of prestressed curved tendon ducts. Full article
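The PSD-energy indicator above boils down to the spectral energy of the received PZT signal, which jumps once grout bridges the wave path. A sketch on synthetic signals (damped/strong sinusoids standing in for the real voltage traces); by Parseval's theorem the one-sided spectral sum below equals the time-domain energy:

```python
import numpy as np

# Spectral-energy indicator: sum the one-sided power spectrum. For a real
# signal of even length N, |X_0|^2 + 2*sum|X_k|^2 + |X_{N/2}|^2 over the rfft
# bins, divided by N, equals sum(x^2) (Parseval's theorem).
def psd_energy(x):
    X = np.fft.rfft(x)
    p = np.abs(X) ** 2
    p[1:-1] *= 2            # double the interior one-sided bins
    return p.sum() / len(x)

t = np.linspace(0, 1e-3, 1000, endpoint=False)
# Synthetic received signals: weak transmission before full grouting,
# strong transmission at 100% grout (amplitudes are illustrative).
ungrouted = 0.05 * np.sin(2 * np.pi * 40e3 * t)
grouted = 1.0 * np.sin(2 * np.pi * 40e3 * t)
print(psd_energy(ungrouted) < psd_energy(grouted))  # True
```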
(This article belongs to the Special Issue Damage Detection of Structures Based on Piezoelectric Sensors)
Show Figures

Figure 1
<p>Schematic diagram of detecting the longitudinal grouting quality. (<b>a</b>) 50% grouting; (<b>b</b>) 75% grouting; (<b>c</b>) 90% grouting; (<b>d</b>) 100% grouting.</p>
Full article ">Figure 2
<p>The test specimen. (<b>a</b>) The frame work before pouring the concrete. (<b>b</b>) The test specimen after the concrete is poured. (<b>c</b>) Three-dimensional view of the model.</p>
Full article ">Figure 3
<p>Specimen dimensions (unit: mm). (<b>a</b>) Longitudinal section. (<b>b</b>) Transverse section.</p>
Full article ">Figure 4
<p>Experimental equipment.</p>
Full article ">Figure 5
<p>Test specimen in different states.</p>
Full article ">Figure 6
<p>Operating interface of the special test program of NI USB-6363.</p>
Full article ">Figure 7
<p>Voltage signals of PZT 2 in one period.</p>
Full article ">Figure 8
<p>Voltage signals of PZT 3 in one period.</p>
Full article ">Figure 9
<p>Voltage signals of PZT 4 in one period.</p>
Full article ">Figure 10
<p>PSD energy of PZT 2 in different grouting states.</p>
Full article ">Figure 11
<p>PSD energy of PZT 3 in different grouting states.</p>
Full article ">Figure 12
<p>PSD energy of PZT 4 in different grouting states.</p>
Full article ">Figure 13
<p>Wavelet packet energy of PZT sensors in different grouting states. (<b>a</b>) PZT 2 sensor; (<b>b</b>) PZT 3 sensor; (<b>c</b>) PZT 4 sensor.</p>
Full article ">
25 pages, 13209 KiB  
Article
Indoor NLOS Positioning System Based on Enhanced CSI Feature with Intrusion Adaptability
by Ke Han, Lingjie Shi, Zhongliang Deng, Xiao Fu and Yun Liu
Sensors 2020, 20(4), 1211; https://doi.org/10.3390/s20041211 - 22 Feb 2020
Cited by 9 | Viewed by 3968
Abstract
With the wide deployment of commercial WiFi devices, the fine-grained channel state information (CSI) has received widespread attention with broad application domain including indoor localization and intrusion detection. From the perspective of practicality, dynamic intrusion may be confused under non-line-of-sight (NLOS) conditions and [...] Read more.
With the wide deployment of commercial WiFi devices, the fine-grained channel state information (CSI) has received widespread attention across broad application domains, including indoor localization and intrusion detection. From the perspective of practicality, dynamic intrusion may be confused under non-line-of-sight (NLOS) conditions, and the continuous operation of a passive positioning system brings much unnecessary computation. In this paper, we propose an enhanced CSI-based indoor positioning system with pre-intrusion detection suitable for NLOS scenarios (C-InP). It mainly consists of two modules: intrusion detection and positioning estimation. The detection module acts as a prerequisite gate for the positioning module. In order to improve the discrimination of features under NLOS conditions, we propose a modified calibration method for phase transformation, while the amplitude outliers are filtered by the variance distribution with the median sequence. In addition, binary and improved multiple support vector classification (SVC) models are established to realize NLOS intrusion detection and high-discrimination fingerprint localization, respectively. Comprehensive experimental verification is carried out in typical indoor scenarios. Experimental results show that C-InP outperforms existing systems in NLOS environments, where the mean distance error (MDE) reached 0.49 m in the integrated room and 0.81 m in the complex garage. Full article
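The phase calibration referenced above starts from the standard linear transformation used in CSI fingerprinting: raw per-subcarrier phase carries an unknown slope (timing offset) and constant (frequency offset), which are removed by subtracting a linear fit across subcarrier indices. The paper's "modified" method refines this; the classic version is sketched here on synthetic data (the subcarrier layout and coefficients are illustrative):

```python
import numpy as np

# Classic linear CSI phase calibration: after unwrapping, subtract the slope
# between the first and last subcarriers and the mean phase, leaving only the
# multipath-induced structure that is useful for fingerprinting.
def calibrate_phase(phase, subcarrier_idx):
    phase = np.unwrap(phase)
    k = (phase[-1] - phase[0]) / (subcarrier_idx[-1] - subcarrier_idx[0])
    b = phase.mean()
    return phase - k * subcarrier_idx - b

idx = np.arange(-15, 16)              # 31 subcarriers (hypothetical layout)
true_phase = 0.3 * np.sin(idx / 5.0)  # multipath-induced structure
raw = true_phase + 0.8 * idx + 2.0    # + timing slope + constant offset
cal = calibrate_phase(raw, idx)
# The calibrated phase spans a far smaller range than the raw phase.
print(round(float(np.ptp(raw)), 2), round(float(np.ptp(cal)), 2))
```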
(This article belongs to the Section Sensor Networks)
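The amplitude cleaning step described in the abstract can be illustrated with a small sketch. This is a generic median-based outlier filter, not the paper's exact variance criterion; the function name, the threshold factor `k`, and the clamp-to-median policy are all illustrative assumptions.

```python
from statistics import median

def filter_amplitude_outliers(samples, k=3.0):
    """Clamp CSI amplitude samples that deviate too far from the median.

    A sample is treated as an outlier when its absolute deviation from the
    median exceeds k times the median absolute deviation (MAD) of the window.
    Outliers are replaced by the median rather than dropped, so the sequence
    keeps its length for later fingerprinting.
    """
    med = median(samples)
    mad = median(abs(s - med) for s in samples)
    return [med if abs(s - med) > k * mad else s for s in samples]
```

Replacing rather than deleting outliers keeps the per-subcarrier time series aligned, which matters when amplitudes are later stacked into fingerprint vectors.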
Show Figures
Figure 1. Channel state information (CSI) amplitude outlier filtering processing at position 1 in scenario 1 (spare room). (a) Raw CSI amplitude. (b) Distribution heat map of the amplitude probability. (c) Processed CSI amplitude. (d) Boxplot of the measured CSI amplitude.
Figure 2. CSI amplitude outlier filtering processing at position 2 in scenario 2 (single laboratory). (a) Raw CSI amplitude. (b) Distribution heat map of the amplitude probability. (c) Processed CSI amplitude. (d) Boxplot of the raw CSI amplitude.
Figure 3. CSI amplitude outlier filtering processing at position 3 in scenario 3 (integrated non-line-of-sight (NLOS) room). (a) Raw CSI amplitude. (b) Distribution heat map of the amplitude probability. (c) Processed CSI amplitude. (d) Boxplot of the raw CSI amplitude.
Figure 4. CSI amplitude outlier filtering processing in scenario 4 (complex garage). (a) Raw CSI amplitude at position 4. (b) Distribution heat map of the amplitude probability at position 4. (c) Processed CSI amplitude at position 4. (d) Boxplot of the raw CSI amplitude at position 4.
Figure 5. Amplitude deviation variance in different scenes. (a) Raw CSI amplitude distribution under single line-of-sight (LOS) conditions (scenes 1 and 2). (b) Raw CSI amplitude distribution under complex conditions (scenes 3 and 4).
Figure 6. CSI phase calibration at a LOS position. (a) CSI phase on one antenna. (b) CSI phase after linear transformation on the first subcarrier. (c) CSI phase after conventional calibration. (d) CSI phase after modified calibration.
Figure 7. CSI phase calibration at an NLOS position. (a) CSI phase on one antenna. (b) CSI phase after linear transformation on the first subcarrier. (c) CSI phase after conventional calibration. (d) CSI phase after modified calibration.
Figure 8. Phases at four different positions on the first antenna.
Figure 9. Temporal variances of CSI under LOS and NLOS conditions. (a) CSI amplitude variances between adjacent entries. (b) CSI phase variances between adjacent entries.
Figure 10. System architecture of C-InP.
Figure 11. Time stability analysis. (a) CSI amplitude. (b) CSI phase.
Figure 12. Experimental environments of the integrated NLOS room combining a laboratory, meeting room, and corridor.
Figure 13. Experimental environments of the complex garage.
Figure 14. Experimental scenarios. (a) Integrated NLOS room. (b) Complex garage.
Figure 15. Detailed integrated room environment.
Figure 16. Detection results. (a) B-SVC. (b) Naive Bayes classification.
Figure 17. Cumulative distribution function (CDF) of the error distance of C-InP in the two scenes.
Figure 18. Comparison of positioning performance among C-InP, DBSCAN, and the raw-data-based approach. (a) CDF comparison in the integrated room. (b) CDF comparison in the garage.
Figure 19. Positioning performance for different values of the parameter K in SVD. (a) CDF comparison in the integrated room. (b) CDF comparison in the garage.
Figure 20. Performance comparison of different positioning systems. (a) CDF comparison among C-InP, MDS-KNN, and NB in the integrated room. (b) CDF comparison among C-InP, MDS-KNN, and NB in the garage.
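The conventional calibration shown in Figures 6 and 7 is commonly implemented by removing a linear term across subcarriers. The sketch below shows only that generic detrending step; the paper's modified calibration is not reproduced here, and the endpoint-based slope estimate is an illustrative simplification.

```python
def detrend_phase(phases):
    """Remove the linear slope and constant offset from unwrapped CSI phases.

    The measured phase of subcarrier i is roughly true_phase_i + a*i + b,
    where a and b stem from timing and frequency offsets at the receiver.
    Estimating a from the endpoints and b from the mean, then subtracting,
    leaves a relative phase usable as a feature.
    """
    n = len(phases)
    a = (phases[-1] - phases[0]) / (n - 1)  # slope across subcarriers
    b = sum(phases) / n                     # constant offset
    return [p - a * i - b for i, p in enumerate(phases)]
```

Applied to a purely linear phase ramp, the residual is constant, which is exactly why this step cannot distinguish positions on its own and a further (e.g. modified) calibration is needed.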
45 pages, 4518 KiB  
Review
Human Activity Sensing with Wireless Signals: A Survey
by Jiao Liu, Guanlong Teng and Feng Hong
Sensors 2020, 20(4), 1210; https://doi.org/10.3390/s20041210 - 22 Feb 2020
Cited by 65 | Viewed by 10818
Abstract
Wireless networks have been widely deployed to meet the high demand for wireless data traffic. The ubiquitous availability of wireless signals brings new opportunities for non-intrusive human activity sensing. To promote a thorough understanding of existing wireless sensing techniques and provide insights for future directions, this survey reviews the existing research on human activity sensing with wireless signals. We review and compare existing work from seven perspectives: the type of wireless signal, theoretical model, signal preprocessing technique, activity segmentation, feature extraction, classification, and application. As new wireless technologies are developed and deployed, more opportunities for sensing human activities will emerge. Based on this analysis, the survey identifies seven challenges in wireless human activity sensing research: robustness, the non-coexistence of sensing and communications, privacy, multi-user activity sensing, limited sensing range, complex deep learning, and the lack of standard datasets. Finally, it presents four likely research trends: new theoretical models, the coexistence of sensing and communications, sensing awareness on receivers, and the construction of open datasets to enable new wireless sensing opportunities for human activities. Full article
(This article belongs to the Special Issue Smart Sensing: Leveraging AI for Sensing)
Show Figures
Figure 1. Overview of wireless sensing and survey organization.
Figure 2. Four-dimensional CSI matrix of MIMO-OFDM channels.
Figure 3. Indoor multipath effect model.
Figure 4. Phase difference distribution.
Figure 5. Path length change due to human movement.
Figure 6. Geometric relationship between human velocity and Doppler velocity.
Figure 7. Directional ambiguity for symmetric velocities.
Figure 8. Doppler effect in multiple directions.
Figure 9. Principle of the FMCW chirp.
Figure 10. Frequency deviation of the FMCW chirp under human motion.
Figure 11. Fresnel zone model.
Figure 12. Signal amplitude with the human residing on the boundaries of odd- and even-numbered Fresnel zones. (a) Odd zone. (b) Even zone.
Figure 13. Delayed waveforms for subcarriers 1 and 2.
Figure 14. AoA model with the antenna array.
Figure 15. Raw vs. pre-processed CSI phase. (a) CSI phase vs. subcarrier index. (b) CSI phase vs. sampling time.
Figure 16. Segmentation example of the received amplitude sequence during five squats. A green check labels each squat segment; the red vertical lines mark the start and end timestamps of each squat.
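As the survey's Doppler figures (Figures 5–8) suggest, human movement changes the length of a reflected signal path, producing a Doppler shift proportional to the path-length change rate. A back-of-envelope sketch, where the 5.8 GHz carrier and 0.5 m/s rate are example values, not figures from the survey:

```python
SPEED_OF_LIGHT = 3.0e8  # m/s

def doppler_shift_hz(path_rate_m_per_s, carrier_hz):
    """Doppler shift of a reflected path whose total length changes at the
    given rate: f_d = path_rate / wavelength."""
    wavelength = SPEED_OF_LIGHT / carrier_hz
    return path_rate_m_per_s / wavelength
```

At a 5.8 GHz carrier, a path shortening at 0.5 m/s yields a shift of roughly 10 Hz, which is why the body-induced Doppler spread of WiFi signals sits in a low-frequency band that simple filters can isolate.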
12 pages, 2171 KiB  
Article
Development of a Low-Cost Narrow Band Multispectral Imaging System Coupled with Chemometric Analysis for Rapid Detection of Rice False Smut in Rice Seed
by Haiyong Weng, Ya Tian, Na Wu, Xiaoling Li, Biyun Yang, Yiping Huang, Dapeng Ye and Renye Wu
Sensors 2020, 20(4), 1209; https://doi.org/10.3390/s20041209 - 22 Feb 2020
Cited by 14 | Viewed by 3635
Abstract
Spectral imaging is a promising technique for assessing the quality of rice seeds, but the high cost of such systems has limited their practical application. This study aimed to develop a low-cost narrow-band multispectral imaging system for detecting rice false smut (RFS) in rice seeds. Two cultivars of rice seeds were artificially inoculated with RFS. The results demonstrate that spectral features at 460, 520, 660, 740, 850, and 940 nm were well linked to RFS. Using the least squares-support vector machine (LS-SVM), the system achieved an overall accuracy of 98.7% with a false negative rate of 3.2% for Zheliang, and 91.4% with 6.7% for Xiushui. Moreover, the robustness of the model was validated by transferring the model trained on Zheliang to Xiushui, yielding an overall accuracy of 90.3% and a false negative rate of 7.8%. These results demonstrate the feasibility of the developed system for low-cost RFS identification. Full article
(This article belongs to the Special Issue Hyperspectral Imaging (HSI) Sensing and Analysis)
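The band-feature classification described above can be sketched in miniature. The paper's LS-SVM is not reproduced here; the stand-in below is a simple nearest-centroid classifier over six-band reflectance vectors, and the function names and sample reflectance values are illustrative.

```python
from math import dist  # Euclidean distance (Python 3.8+)

BANDS_NM = (460, 520, 660, 740, 850, 940)  # wavelengths reported in the paper

def fit_centroids(spectra, labels):
    """Average the six-band reflectance vectors of each class into a centroid."""
    sums, counts = {}, {}
    for x, y in zip(spectra, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def classify(centroids, x):
    """Assign a spectrum to the class with the nearest centroid."""
    return min(centroids, key=lambda y: dist(centroids[y], x))
```

A margin-based classifier such as LS-SVM typically separates the classes better than raw centroid distance, but the centroid view makes the underlying idea visible: healthy and RFS-infected seeds occupy distinct regions of the six-band reflectance space.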
Show Figures
Figure 1. Two genotypes of rice seeds, Zheliang (a) and Xiushui (b), with different degrees of rice false smut (RFS) infection.
Figure 2. Schematic overview of the analytical procedure for rice false smut (RFS) disease detection.
Figure 3. (a) Linearity of the CCD under different exposure times and (b) illuminance distribution at wavelengths of 460, 520, 660, 740, 850, and 940 nm at a working distance of 18 cm.
Figure 4. Mean reflectance spectra of healthy and rice false smut (RFS)-infected rice seeds of Zheliang (a) and Xiushui (b). Principal component analysis of reflectance at six wavelengths for healthy, slightly infected, and severely infected rice seeds of Zheliang (c) and Xiushui (d).
Figure 5. Overall accuracies and false negative rates from the least squares-support vector machine (LS-SVM) for rice false smut (RFS) disease detection in Xiushui based on the model established from Zheliang.