Sensors, Volume 20, Issue 19 (October-1 2020) – 277 articles

Cover Story (view full-size image): The digitization of the manufacturing industry under the Industry 4.0 concept has led to more efficient production. Datasets collected from shop floor assets feed data-driven analytics that support more informed business intelligence decisions. However, these results are currently used in isolated and dispersed parts of the production process, and full integration of artificial intelligence (AI) across manufacturing systems is still lacking. In this context, a more holistic integration of AI that promotes collaboration is presented. Collaboration is understood as a multidimensional conceptual term that covers all important enablers for AI adoption and is promoted in terms of business intelligence optimization, human-in-the-loop, and secure federation across manufacturing sites.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for published papers, which are available in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
23 pages, 15685 KiB  
Article
Rough or Noisy? Metrics for Noise Estimation in SfM Reconstructions
by Ivan Nikolov and Claus Madsen
Sensors 2020, 20(19), 5725; https://doi.org/10.3390/s20195725 - 8 Oct 2020
Cited by 2 | Viewed by 3590
Abstract
Structure from Motion (SfM) can produce highly detailed 3D reconstructions, but distinguishing real surface roughness from reconstruction noise and geometric inaccuracies has always been a difficult problem to solve. Existing commercial SfM solutions achieve noise removal through a combination of aggressive global smoothing and the reconstructed texture for smaller details, which is a subpar solution when the results are used for surface inspection. Other noise estimation and removal algorithms do not take advantage of all the additional data connected with SfM. We propose a number of geometrical and statistical metrics for noise assessment, based on both the reconstructed object and the capturing camera setup. We test the correlation of each metric with the presence of noise on reconstructed surfaces and demonstrate that classical supervised learning methods, trained with these metrics, can distinguish between noise and roughness with an accuracy above 85%, with an additional 5–6% of performance coming from the capturing setup metrics. Our proposed solution can easily be integrated into existing SfM workflows, as it does not require more image data or additional sensors. Finally, as part of the testing, we create an image dataset for SfM from a number of objects with varying shapes and sizes, which is available online together with ground truth annotations. Full article
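The classification step the abstract describes can be sketched as follows. This is a minimal illustrative stand-in, not the authors' pipeline: a nearest-centroid rule replaces the classical supervised learners they evaluate, and the three-component metric vectors (local roughness, difference of normals, vertex density) are synthetic.

```python
# Sketch: labeling vertices as noise vs. roughness from per-vertex metric
# vectors. A nearest-centroid rule stands in for the classical supervised
# learners used in the paper; all metric values below are synthetic.

def fit_centroids(samples, labels):
    """Average the metric vectors of each class ('noise' / 'rough')."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def predict(centroids, x):
    """Assign the class whose centroid is nearest in Euclidean distance."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, x))
    return min(centroids, key=lambda y: dist2(centroids[y]))

# Synthetic vertices: [local roughness, difference of normals, vertex density]
train_x = [[0.9, 0.8, 0.7], [0.8, 0.9, 0.8], [0.2, 0.1, 0.2], [0.1, 0.2, 0.1]]
train_y = ["noise", "noise", "rough", "rough"]
centroids = fit_centroids(train_x, train_y)
print(predict(centroids, [0.85, 0.8, 0.75]))  # -> noise
```

In the paper the feature vectors come from the proposed mesh- and capturing setup-based metrics; any off-the-shelf classifier could be trained the same way.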
Graphical abstract

Figure 1: Illustration of Structure from Motion (SfM) reconstruction geometrical errors, which need to be distinguished from real surface roughness. Noise parts are shown in red. The problematic areas in (a,c) lead to geometrical errors in the reconstruction, as seen in (b,d).
Figure 2: Overview of the proposed idea: metrics extracted from the mesh and from the capturing setup used for SfM reconstruction determine whether the underlying surface is noisy or rough.
Figure 3: Examples of the five main observational hypotheses used as a basis for the chosen mesh-based and capturing setup-based metrics.
Figure 4: Visualization of all the proposed metrics as heat maps. For Local Roughness from Gaussian Curvature (LRGC), DON_m, and VD_m, higher values (shown in red) indicate a higher risk of noise, while for VIE_m, NCV_s, PF_s, ViF_s, VPC_s, and VAV_s, higher values indicate a lower risk of noise.
Figure 5: An image used as input to the SfM solution and the calculated feature points. A radius is set around each feature, and all points within that area are projected onto the reconstructed mesh.
Figure 6: Visualization of the calculated hemisphere positioned above each vertex in the mesh and the camera position, together with the intersection points. The camera-to-vertex distance is drawn at a smaller scale for easier visualization. Once all intersection points are found, the area between them is calculated, and its ratio to the whole hemisphere area is used for the metric.
Figure 7: Overview of the implementation pipeline, showing the inputs and programming environments used to calculate each metric. The mesh-based metrics are computed directly in Python, while the capturing setup-based ones use a combination of Python and the Unity game engine.
Figure 8: Views from the Unity implementation used for the capturing setup-based metric extraction.
Figure 9: Objects selected for the robustness test, with widely varying shapes, sizes, roughness profiles, and materials.
Figure 10: View of the annotation tool used to create the roughness-versus-noise ground truth for each mesh. Vertices painted red are marked as reconstruction noise.
Figure 11: Correlation matrix of the used metrics, together with the dependent variable. The metrics are shown with their coded names: VPC_s (vertices seen from parallel camera), VAV_s (vertex area visibility), ViF_s (vertices in focus), NCV_s (number of cameras seeing each vertex), PF_s (projected 2D features), VIE_m (vertex local color entropy), LRGC_m (local roughness from Gaussian curvature), DON_m (difference of normals), and VD_m (vertex local spatial density).
Figure 12: The annotated ground-truth vertices (left) and the same vertices classified with the proposed method (right). Noise vertices are colored red; non-noise vertices are blue.
Figure 13: Visualization of the noise estimation results using different subsets of metrics, together with the ground-truth annotation. The testing scenarios are separated for easier comparison.
Figure 14: The wind turbine blade used for the second testing scenario (a), together with the precision-recall curve of the classification model (b) and the annotation compared with the classified vertices (c). Red vertices are noise; blue are non-noise.
15 pages, 6454 KiB  
Article
Study on Propagation Depth of Ultrasonic Longitudinal Critically Refracted (LCR) Wave
by Yongmeng Liu, Enxiao Liu, Yuanlin Chen, Xiaoming Wang, Chuanzhi Sun and Jiubin Tan
Sensors 2020, 20(19), 5724; https://doi.org/10.3390/s20195724 - 8 Oct 2020
Cited by 15 | Viewed by 3164
Abstract
The accurate measurement of stress at different depths in the end face of a high-pressure compressor rotor is particularly important, as it is directly related to the assembly quality and overall performance of aero-engines. The ultrasonic longitudinal critically refracted (LCR) wave is sensitive to stress and can measure stress at different depths, giving it a prominent advantage in non-destructive stress measurement. To accurately characterize the propagation depth of LCR waves and improve the spatial resolution of stress measurement, a finite element model suited to studying LCR wave propagation depth was established based on the wave equation and Snell's law, and the generation and propagation of LCR waves were analyzed. By analyzing the blocking effect of grooves of different depths on the wave, the propagation depth of the LCR wave was determined in turn at seven specific frequencies. On this basis, an LCR wave propagation depth model was established, and the effects of wedge material, piezoelectric element diameter, and excitation voltage on the propagation depth of LCR waves were discussed. This study is of great significance for improving the spatial resolution of stress measurement at different depths in the end face of the aero-engine rotor. Full article
(This article belongs to the Special Issue Ultrasonic Sensors and Technology for Material Characterization)
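The role of Snell's law here can be illustrated with the first critical angle, the wedge incidence angle at which a longitudinal wave excites an LCR wave in the specimen. A minimal sketch, using typical textbook longitudinal wave speeds for PMMA and steel (assumed values, not taken from the paper):

```python
# Sketch: first critical incidence angle from Snell's law. Wave speeds
# below are typical textbook values (assumptions, not from the paper).
import math

def first_critical_angle(v_wedge, v_specimen):
    """arcsin(v_wedge / v_specimen) in degrees; requires v_wedge < v_specimen."""
    return math.degrees(math.asin(v_wedge / v_specimen))

v_pmma = 2730.0   # m/s, longitudinal wave speed in PMMA (typical value)
v_steel = 5900.0  # m/s, longitudinal wave speed in steel (typical value)
print(round(first_critical_angle(v_pmma, v_steel), 1))  # -> 27.6
```

At incidence just beyond this angle, the refracted longitudinal wave travels along the surface, which is the LCR wave the paper studies.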
Figure 1: Coordinate system of the deformed object.
Figure 2: Longitudinal critically refracted (LCR) wave excitation and reception device and wedge structure.
Figure 3: Ultrasonic transducers with different frequencies can generate LCR waves propagating at different depths in the material. (a) Propagation depth of the LCR wave at frequency f1; (b) at frequency f2; (c) at frequency f3.
Figure 4: Geometric model used in the simulation.
Figure 5: Finite element mesh used in the simulation.
Figure 6: Generation and propagation of the LCR wave at times (a) t = 8 × 10⁻⁷ s, (b) t = 1.4 × 10⁻⁶ s, (c) t = 2.2 × 10⁻⁶ s, (d) t = 3.2 × 10⁻⁶ s, (e) t = 4.2 × 10⁻⁶ s, (f) t = 5.2 × 10⁻⁶ s, (g) t = 6.2 × 10⁻⁶ s, and (h) t = 7.0 × 10⁻⁶ s.
Figure 7: Sound pressure signals of the transmitting and receiving transducers.
Figure 8: Method for determining the propagation depth of LCR waves.
Figure 9: Time extension of the 4 MHz LCR wave signal at different groove depths.
Figure 10: Relationship between LCR wave propagation depth and (a) frequency and (b) wavelength.
Figure 11: Effect of wedge material on LCR wave propagation depth: (a) polymethyl methacrylate (PMMA) wedge; (b) polystyrene (PS) wedge; (c) propagation time extension caused by a 1.24 mm groove when the wedge material is PS.
Figure 12: Effect of piezoelectric element diameter on LCR wave propagation depth: (a) 5 mm diameter; (b) 2.5 mm diameter; (c) propagation time extension caused by a 1.24 mm groove at 2.5 mm diameter.
Figure 13: Effect of excitation voltage on LCR wave propagation depth: (a) excitation signal S1(t); (b) excitation signal S2(t); (c) propagation time extension caused by a 1.24 mm groove for excitation signal S2(t).
10 pages, 1716 KiB  
Letter
Influence of Nivolumab for Intercellular Adhesion Force between a T Cell and a Cancer Cell Evaluated by AFM Force Spectroscopy
by Hyonchol Kim, Kenta Ishibashi, Masumi Iijima, Shun’ichi Kuroda and Chikashi Nakamura
Sensors 2020, 20(19), 5723; https://doi.org/10.3390/s20195723 - 8 Oct 2020
Cited by 2 | Viewed by 3012
Abstract
The influence of nivolumab on intercellular adhesion forces between T cells and cancer cells was evaluated quantitatively using atomic force microscopy (AFM). Two model T cells, one expressing high levels of programmed cell death protein 1 (PD-1) (PD-1^high Jurkat) and the other with low PD-1 expression levels (PD-1^low Jurkat), were analyzed. In addition, two model cancer cells, one expressing programmed death-ligand 1 (PD-L1) on the cell surface (PC-9, PD-L1^+) and the other without PD-L1 (MCF-7, PD-L1^−), were also used. A T cell was attached to the apex of the AFM cantilever using a cup-attached AFM chip, and the intercellular adhesion forces were measured. Although PD-1^high T cells adhered strongly to PD-L1^+ cancer cells, the adhesion force was smaller than that with PD-L1^− cancer cells. After treatment of PD-1^high T cells with nivolumab, the adhesion force with PD-L1^+ cancer cells increased to a level similar to that with PD-L1^− cancer cells. These results can be explained by nivolumab upregulating the adhesion ability of PD-1^high T cells to PD-L1^+ cancer cells. These results were obtained by measuring intercellular adhesion forces quantitatively, indicating the usefulness of single-cell AFM analysis. Full article
(This article belongs to the Special Issue Nanosensors for Biomedical Applications)
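The force readout behind these measurements can be sketched simply: the maximum adhesion force is the deepest negative excursion of the retract force curve. The curve below is synthetic; a real AFM curve would first be converted from cantilever deflection and spring constant.

```python
# Sketch: reading the maximum adhesion force off an AFM retract curve.
# The force samples are synthetic (assumed values for illustration); the
# deepest negative excursion during retraction is reported as adhesion.

def max_adhesion_force(retract_forces_nN):
    """Return the magnitude of the deepest negative force (nN)."""
    return abs(min(retract_forces_nN))

# Synthetic retract segment: contact, rupture dip, free cantilever
curve = [0.0, -0.5, -1.8, -2.6, -1.1, -0.2, 0.0]
print(max_adhesion_force(curve))  # -> 2.6
```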
Figure 1: Immunofluorescent staining of PD-1 molecules on T cells (a) and PD-L1 molecules on cancer cells (b) used in this study. BF, bright field; FL, fluorescence images. Bars, 50 µm.
Figure 2: Experimental setup used in this study. A T cell was picked up with the cup-chip and brought into contact with a cancer cell, and the intercellular adhesion force was measured.
Figure 3: Typical force curves obtained between PD-L1^+ cancer cells and either PD-1^high (a) or PD-1^low (b) T cells. Dwell time, 20 s.
Figure 4: Relationship between maximum adhesion force and contact time between PD-L1^+ cancer cells and either PD-1^high (red) or PD-1^low (blue) T cells. Data points are shifted slightly left and right for visualization. p < 0.01 for all contact times.
Figure 5: Summary of the intercellular adhesion force of T cells to PD-L1^− cancer cells. (a) Relationship between maximum adhesion force and contact time between PD-L1^− cancer cells and either PD-1^high (red) or PD-1^low (blue) T cells; p > 0.05 for all contact times. (b) Comparison of the adhesion force of PD-1^high T cells to PD-L1^+ (circles, same data as Figure 4) and PD-L1^− (squares) cancer cells. Data points are shifted slightly left and right for visualization; p < 0.05 for all contact times.
Figure 6: Summary of the influence of nivolumab on intercellular adhesion forces between T cells and cancer cells. (a) Comparison of the intercellular adhesion forces between T cells (either PD-1^high or PD-1^low) and PD-L1^+ cancer cells before (circles, same data as Figure 4) and after (crosses) treatment with nivolumab. For PD-1^high vs. PD-L1^+ before and after nivolumab treatment, p < 0.05 over 20 s contact time. (b) Comparison of the adhesion force of PD-1^high T cells to PD-L1^+ (circles, Figure 4), PD-L1^− (squares, Figure 5), and PD-L1^+ cancer cells after nivolumab treatment (crosses); p > 0.05 between PD-L1^− (squares) and PD-L1^+ after nivolumab (crosses) for all contact times. Data points are shifted slightly left and right for visualization.
27 pages, 5708 KiB  
Article
Evaluation of Inertial Sensor Data by a Comparison with Optical Motion Capture Data of Guitar Strumming Gestures
by Sérgio Freire, Geise Santos, Augusto Armondes, Eduardo A. L. Meneses and Marcelo M. Wanderley
Sensors 2020, 20(19), 5722; https://doi.org/10.3390/s20195722 - 8 Oct 2020
Cited by 12 | Viewed by 4331
Abstract
Computing technologies have opened up a myriad of possibilities for expanding the sonic capabilities of acoustic musical instruments. Musicians nowadays employ a variety of rather inexpensive, wireless sensor-based systems to obtain refined control of interactive musical performances in actual musical situations like live concerts. It is essential, though, to clearly understand the capabilities and limitations of such acquisition systems and their potential influence on high-level control of musical processes. In this study, we evaluate one such system, composed of an inertial sensor (MetaMotionR) and a hexaphonic nylon guitar, for capturing strumming gestures. To characterize this system, we compared it with a high-end commercial motion capture system (Qualisys), typically used in the controlled environments of research laboratories, in two complementary tasks: comparisons of rotational and translational data. For the rotations, we were able to compare our results with those found in the literature, obtaining RMSE below 10° for 88% of the curves. The translations were compared in two ways: by double derivation of positional data from the mocap and by double integration of IMU acceleration data. For the task of estimating displacements from acceleration data, we developed a compensative-integration method to deal with the oscillatory character of the strumming, whose approximate results are highly dependent on the type of gestures and segmentation; a value of 0.77 was obtained for the average of the normalized covariance coefficients of the displacement magnitudes. Although not in the ideal range, these results point to a clearly acceptable trade-off between the flexibility, portability, and low cost of the proposed system and the limited use and cost of the high-end motion capture standard in interactive music setups. Full article
(This article belongs to the Special Issue Wearable Sensors for Human Motion Analysis)
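The double integration of IMU acceleration data mentioned above can be sketched with a plain cumulative trapezoidal integrator. This is the textbook operation, not the authors' compensative-integration scheme; the sampling rate and acceleration values are invented for illustration.

```python
# Sketch: double integration of evenly sampled acceleration into
# displacement (cumulative trapezoidal rule). Values are synthetic.

def integrate(samples, dt, y0=0.0):
    """Cumulative trapezoidal integral of evenly sampled data."""
    out = [y0]
    for a, b in zip(samples, samples[1:]):
        out.append(out[-1] + 0.5 * (a + b) * dt)
    return out

dt = 0.01                                                  # 100 Hz (assumed)
accel = [0.0, 1.0, 1.0, 1.0, 0.0, -1.0, -1.0, -1.0, 0.0]  # m/s^2, synthetic
vel = integrate(accel, dt)   # m/s
disp = integrate(vel, dt)    # m
print(round(disp[-1], 6))    # net displacement of the short push-pull gesture
```

In practice, low-frequency drift makes naive double integration diverge over time, which is exactly why the paper segments the oscillatory strumming and compensates between lobes.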
Figure 1: Time spans between IMU incoming messages. (a) Histogram of timestamps of one sensor with a bin width of 15 ms; (b) histogram of time spans between subsequent messages of all physical quantities in a particular take.
Figure 2: Plot of timestamp vs. message number for each physical quantity.
Figure 3: Measurement of the IMU response delay. Left: preparation for the free fall. Right: two-channel recording with audio and sensor data.
Figure 4: Guitar and IMU used in this study, set as rigid bodies in the Qualisys motion capture system. (a) Guitar with markers; (b) IMU with markers.
Figure 5: Double integration of the acceleration curve of a simple linear trajectory. In the second row, the solid line indicates a balance toward the stronger lobe, the dashed line a balance toward the weaker lobe. Small circles indicate the zero-crossings extracted from the acceleration curve.
Figure 6: IMU data from six wrist rotations. (a) Wrist rotation; (b) curves of three-dimensional (3D) accelerations, angular velocities, and Euler angles of wrist rotations.
Figure 7: First row: 3D acceleration curves rotated to the IMU frame of reference. The following rows depict the speed and displacement curves obtained by integration.
Figure 8: Musical excerpt 1.
Figure 9: Musical excerpt 2.
Figure 10: Musical excerpt 3.
Figure 11: Different IMU positions used by the musicians to perform the musical excerpts. (a) Used by Musician 2 in excerpts 1 and 2; (b) used by Musician 1 in excerpts 1 and 2; (c) used by Musician 1 in excerpt 3; (d) used by Musician 2 in excerpt 3.
Figure 12: Rotation curves per axis in take m2r1t2.
Figure 13: Rotation curves per axis in take m2r3t2c.
Figure 14: Acceleration curves per axis in take m2r2t1.
Figure 15: 3D acceleration curves and zero-crossings from take m2r1t2.
Figure 16: Displacement curves estimated for each axis of take m2r1t2, balanced according to the stronger lobe.
Figure 17: Displacement curves estimated for each axis of take m1r2t2, balanced according to the stronger lobe.
Figure 18: Acceleration curves from take m2r1t2: original, rotated to Qualisys angles, and IMU-rotated.
12 pages, 4566 KiB  
Letter
A Novel Approach for Measurement of Composition and Temperature of N-Decane/Butanol Blends Using Two-Color Laser-Induced Fluorescence of Nile Red
by Matthias Koegl, Mohammad Pahlevani and Lars Zigan
Sensors 2020, 20(19), 5721; https://doi.org/10.3390/s20195721 - 8 Oct 2020
Cited by 9 | Viewed by 3076
Abstract
In this work, the possibility of using a two-color LIF (laser-induced fluorescence) approach for fuel composition and temperature measurements using nile red dissolved in n-decane/butanol blends is investigated. The studies were conducted in a specially designed micro cell enabling the detection of the spectral LIF intensities over a wide range of temperatures (283–423 K) and butanol concentrations (0–100 vol.%) in mixtures with n-decane. Furthermore, absorption spectra were analyzed for these fuel mixtures. At constant temperature, the absorption and LIF signals exhibit a large spectral shift toward higher wavelengths with increasing butanol concentration. Based on this fact, a two-color detection approach is proposed that enables the determination of the butanol concentration. This is reasonable when temperature changes and evaporation effects accompanied by dye enrichment can be neglected. For n-decane, neither a spectral shift nor a broadening of the spectrum is observed at various temperatures. However, with butanol admixture, two-color thermometry is possible as long as the dye and butanol concentrations are kept constant. For example, the LIF spectrum shows a distinct broadening for B20 (i.e., 80 vol.% n-decane, 20 vol.% butanol) and a shift of the peak toward lower wavelengths of about 40 nm for a temperature variation of 140 K. Full article
(This article belongs to the Section Optical Sensors)
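At its core, the two-color evaluation integrates the emission spectrum over two detection bands and forms their ratio, which a calibration curve then maps to temperature or butanol content. A minimal sketch with an invented discretized spectrum and invented band edges:

```python
# Sketch: two-color ratio from a discretized LIF emission spectrum.
# Band edges and spectrum values are invented for illustration.

def band_ratio(wavelengths_nm, intensities, band1, band2):
    """Ratio of summed intensities inside band1 vs. band2 (inclusive edges)."""
    def band_sum(lo, hi):
        return sum(i for w, i in zip(wavelengths_nm, intensities) if lo <= w <= hi)
    return band_sum(*band1) / band_sum(*band2)

wl = [540, 560, 580, 600, 620, 640, 660]           # nm
spec = [0.1, 0.4, 0.9, 1.0, 0.7, 0.3, 0.1]         # synthetic normalized LIF
r = band_ratio(wl, spec, (540, 580), (600, 660))
print(round(r, 3))
```

A temperature- or composition-dependent shift of the spectrum changes how much signal falls in each band, so the ratio tracks the shift while canceling overall intensity changes.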
Figure 1: Optical setup (left) and internal design (sectional view) of the micro cell (right).
Figure 2: Absorption spectra of nile red (left: in n-decane (B0); right: in B20), normalized to the respective maximum intensity at 30 mg/L. Inset diagrams show the linearity (B0: R² = 0.997, B20: R² = 0.999) of the integral LIF (laser-induced fluorescence) signal for various dye concentrations; 293 K.
Figure 3: LIF emission spectra of nile red (left: in n-decane (B0); right: in B20), normalized to the respective maximum intensity at 30 mg/L. Inset diagrams show the linearity (B0: R² = 0.999, B20: R² = 0.993) of the integral LIF signal for various dye concentrations; 293 K.
Figure 4: Normalized LIF emission spectra of nile red in B0 and B20 for various dye concentrations; 293 K.
Figure 5: Temperature-dependent normalized emission spectra of nile red (7.5 mg/L) in B0 (a) and B20 (b), normalized to the individual maximum intensity; normalized integral fluorescence intensities (c: B0, d: B20) for various temperatures.
Figure 6: Temperature-dependent emission spectra of nile red (7.5 mg/L) in B20 (left), normalized to the maximum intensity, and the intensity ratio (R² = 0.999) for two-color thermometry (right) at various temperatures.
Figure 7: Absorption (left) and emission spectra (right) of nile red (7.5 mg/L) in n-decane at various butanol concentrations, normalized to the respective maximum values; 293 K, 0.1 MPa.
Figure 8: Normalized emission spectra of nile red (7.5 mg/L) in n-decane at various butanol concentrations with an overlay of suitable filters (left); intensity ratio (right, R² = 0.997); 293 K, 0.1 MPa.
18 pages, 3052 KiB  
Article
A Smart Sensing and Routing Mechanism for Wireless Sensor Networks
by Li-Ling Hung
Sensors 2020, 20(19), 5720; https://doi.org/10.3390/s20195720 - 8 Oct 2020
Cited by 5 | Viewed by 2590
Abstract
Wireless sensor networks (WSNs) have long been used for many applications. The efficiency of a WSN is subject to its monitoring accuracy and limited energy capacity. Thus, accurate detection and limited energy are two crucial problems for WSNs. Some studies have focused on building energy-efficient transmission mechanisms to extend monitoring lifetimes, and others have focused on building additional systems to support monitoring for enhanced accuracy. Herein, we propose a distributed cooperative mechanism where neighboring sensors mutually confirm event occurrences for improved monitoring accuracy. Moreover, the mechanism transmits events in a time- and energy-efficient manner by using smart antennae to extend monitoring lifetimes. The results of the simulations reveal that monitoring lifetime is extended and time for event notifications is shortened under the proposed mechanism. The evaluations also demonstrate that the monitoring accuracy of the proposed mechanism is much higher than that of other existing mechanisms. Full article
Figure 1: Example coordinates for part of the environment.
Figure 2: Neighboring sensors monitoring the cell with coordinate (0, 0, 0).
Figure 3: Three types of period in the timeline.
Figure 4: (a) Directions and neighbors of the sensor at coordinate (0, 0, 0); (b) sensor groups according to the coordinates in (a).
Figure 5: (a) Directions of transmissions for sensors located in different groups during the first mini-slot; (b) concurrent transmissions among sensors during the first mini-slot.
Figure 6: (a) Energy consumed by sensors over monitoring time when the event occurrence rate is 10%; (b) average notification time for one event.
Figure 7: (a) An area monitored by two neighboring sensors in the EERH and CoDet mechanisms; (b) an area monitored by four neighboring sensors in the proposed mechanism, SSRM.
Figure 8: (a) Miss rates of event detection for mechanisms with different sensor fault rates; (b) monitoring performance of the mechanisms.
Figure 9: (a) Energy consumed by sensors over monitoring time for different unmonitored region rates; (b) sensing range of sensors adjusted in these situations.
23 pages, 6564 KiB  
Article
Comparative Experiments of V2X Security Protocol Based on Hash Chain Cryptography
by Shimaa A. Abdel Hakeem, Mohamed A. Abd El-Gawad and HyungWon Kim
Sensors 2020, 20(19), 5719; https://doi.org/10.3390/s20195719 - 8 Oct 2020
Cited by 16 | Viewed by 3715
Abstract
Vehicle-to-everything (V2X) is the communication technology designed to support road safety for drivers and autonomous driving. A lightweight security solution is crucial to meeting the real-time needs of on-board V2X applications. However, most recently proposed V2X security protocols—based on the Elliptic Curve Digital Signature Algorithm (ECDSA)—are not efficient enough to support fast processing and to reduce the communication overhead between vehicles. ECDSA provides a high security level at the cost of excessive communication and computation overhead, which motivates us to propose a lightweight message authentication and privacy preservation protocol for V2X communications. The proposed protocol achieves highly secure message authentication at a substantially lower cost by introducing a hash chain of secret keys for a Message Authentication Code (MAC). We implemented the proposed protocol using commercial V2X devices to prove its performance advantages over standard and non-standard protocols. We constructed real V2X networks using commercial V2X devices that run our implemented protocol. Our extensive experiments with real networks demonstrate that the proposed protocol reduces the communication overhead by 6 times and the computation overhead by more than 100 times compared with the IEEE 1609.2 standard. Moreover, the proposed protocol reduces the communication overhead by 4 times and the computation overhead by up to 100 times compared with a non-standard security protocol, TESLA. The proposed protocol substantially reduces the average end-to-end delay to 2.5 ms, which is a 24- and 28-fold reduction compared with the IEEE 1609 and TESLA protocols, respectively. Full article
(This article belongs to the Section Sensor Networks)
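The core idea of replacing per-message ECDSA signatures with MACs keyed from a one-way hash chain can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the chain length, SHA-256, and HMAC are assumed choices. A receiver that knows the chain anchor can authenticate any later-disclosed key by re-hashing it back to the anchor.

```python
import hashlib
import hmac

def generate_hash_chain(seed: bytes, length: int) -> list:
    """Build a chain where each key is the hash of the next one.
    chain[0] is the anchor distributed to receivers in advance;
    chain[1], chain[2], ... are disclosed over time as MAC keys."""
    chain = [hashlib.sha256(seed).digest()]
    for _ in range(length - 1):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain[::-1]  # most-hashed value first = anchor

def sign_message(key: bytes, payload: bytes) -> bytes:
    """MAC a V2X payload (e.g., a CAM/BSM) with the current chain key."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify_key(disclosed: bytes, anchor: bytes, max_steps: int) -> bool:
    """A disclosed key is authentic if repeated hashing reaches the anchor."""
    h = disclosed
    for _ in range(max_steps):
        h = hashlib.sha256(h).digest()
        if h == anchor:
            return True
    return False
```

The cost asymmetry is the point: verification is a handful of hash invocations instead of an elliptic-curve operation, which is where the reported 100-fold computation reduction plausibly comes from.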
Show Figures

Figure 1. V2V and V2I communication modes in the proposed protocol.
Figure 2. The generated hash chain keys.
Figure 3. The proposed protocol full architecture.
Figure 4. Message Signing Procedure.
Figure 5. Message Verification Procedure.
Figure 6. The proposed protocol’s message format in the case of OBU and RSU.
Figure 7. WSA messages signed with implicit certificate in IEEE1609.2.
Figure 8. (a) Secure CAM message with authorization ticket; (b) Secure CAM message with certificate digest using SHA-256.
Figure 9. (a) Cohda wireless MK5 On-board Unit (OBU), (b) Cohda wireless MK5 Road Side Unit (RSU), (c) Hardware architecture of MK5 device.
Figure 10. Our test platform consisting of test vehicles, GPS antenna, MK5 OBU devices, and RSU devices.
Figure 11. Average end-to-end delay for the proposed protocol applied to IEEE1609.
Figure 12. Average end-to-end delay measured for V2X messages of the IEEE 1609 standard.
Figure 13. End-to-end delay measured for V2X messages of the ETSI standard.
Figure 14. End-to-end delay for IEEE1609 enhanced with proposed protocol, IEEE1609 Standard with software module, and ETSI Standard with software module.
Figure 15. Verification ratio of IEEE1609 and proposed protocol for different vehicle density using software security module.
Figure 16. Average end-to-end delay for the proposed protocol and non-standard security protocols using IEEE1609 different messages.
19 pages, 5953 KiB  
Article
Network Optimisation and Performance Analysis of a Multistatic Acoustic Navigation Sensor
by Rohan Kapoor, Alessandro Gardi and Roberto Sabatini
Sensors 2020, 20(19), 5718; https://doi.org/10.3390/s20195718 - 8 Oct 2020
Cited by 3 | Viewed by 2457
Abstract
This paper addresses some of the existing research gaps in the practical use of acoustic waves for navigation of autonomous air and surface vehicles. After providing a characterisation of ultrasonic transducers, a multistatic sensor arrangement is discussed, with multiple transmitters broadcasting their respective signals in a round-robin fashion, following a time division multiple access (TDMA) scheme. In particular, an optimisation methodology for the placement of transmitters in a given test volume is presented with the objective of minimizing the position dilution of precision (PDOP) and maximizing the sensor availability. Additionally, the contribution of platform dynamics to positioning error is also analysed in order to support future ground and flight vehicle test activities. Results are presented of both theoretical and experimental data analysis performed to determine the positioning accuracy attainable from the proposed multistatic acoustic navigation sensor. In particular, the ranging errors due to signal delays and attenuation of sound waves in air are analytically derived, and static indoor positioning tests are performed to determine the positioning accuracy attainable with different transmitter–receiver-relative geometries. Additionally, it is shown that the proposed transmitter placement optimisation methodology leads to increased accuracy and better coverage in an indoor environment, where the required position, velocity, and time (PVT) data cannot be delivered by satellite-based navigation systems. Full article
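The PDOP criterion used for transmitter placement can be computed from the GNSS-style geometry matrix of receiver-to-transmitter unit vectors. The sketch below is a generic illustration under standard assumptions (a fourth clock-bias column, four transmitters); the coordinates are invented, not taken from the paper.

```python
import numpy as np

def pdop(receiver, transmitters):
    """PDOP = sqrt of the trace of the position block of (G^T G)^-1,
    where each row of G is a receiver->transmitter unit vector
    augmented with a 1 for the receiver clock-bias unknown."""
    rx = np.asarray(receiver, dtype=float)
    txs = np.asarray(transmitters, dtype=float)
    rays = txs - rx
    units = rays / np.linalg.norm(rays, axis=1, keepdims=True)
    G = np.hstack([units, np.ones((len(txs), 1))])
    Q = np.linalg.inv(G.T @ G)  # covariance-shape matrix of the least-squares fix
    return float(np.sqrt(np.trace(Q[:3, :3])))

# Illustrative geometries: ceiling-mounted transmitters spread to the
# corners of a 5 m x 5 m area versus bunched together in one spot.
corners = [(0, 0, 3), (5, 0, 3), (0, 5, 3), (5, 5, 3)]
clustered = [(2.0, 2.0, 3), (2.2, 2.0, 3), (2.0, 2.2, 3), (2.2, 2.2, 3)]
receiver = (1.0, 2.0, 0.0)
```

Spreading the transmitters widens the angular separation of the rays and drives PDOP down, which is why the optimised arrangements in the paper favour the edges and corners of the test volume.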
Show Figures

Figure 1. Schematic of an ultrasonic transducer.
Figure 2. (a) Transmitter circuit. (b) Receiver circuit.
Figure 3. Signal timeline and sources of error (not to scale) [22].
Figure 4. Acoustic positioning and navigation system (adapted from [24]).
Figure 5. Variation of receiver delay with distance from the transmitter.
Figure 6. Tetrahedron formed by joining unit vectors from the receiver to the four transmitters.
Figure 7. Angular variation of transmitter–receiver directivity.
Figure 8. Position dilution of precision (PDOP) distribution for different transmitter arrangements. (a) Transmitters arranged in a triangle, with one transmitter at its median. (b) Transmitters arranged along a rectangle within the test area. (c) Transmitters arranged at the centre of the edges of the test area. (d) Transmitters arranged at the corners of the test area.
Figure 9. Transmitter arrangement optimisation using PDOP. (a) Optimised transmitter arrangement. (b) PDOP for an optimised transmitter arrangement.
Figure 10. (a) Receiver motion in a spiral trajectory at 3 m/s (r = 3 m). (b) Actual vs calculated receiver distance at 3 m/s. (c) Positioning error propagation due to receiver moving at different velocities.
Figure 11. (a) Transmitter arrangement and receiver motion. (b) Error in positioning due to receiver velocity for various time delays.
Figure 12. Positioning tests with transmitters arranged in rectangular arrangement.
Figure 13. Positioning tests with transmitters arranged in trigonal planar arrangement.
Figure 14. Variation of positioning error with PDOP. (a) Rectangular transmitter arrangement. (b) Trigonal planar transmitter arrangement.
16 pages, 5028 KiB  
Article
Mapping Small-Scale Horizontal Velocity Field in Panzhinan Waterway by Coastal Acoustic Tomography
by Haocai Huang, Xinyi Xie, Yong Guo and Hangzhou Wang
Sensors 2020, 20(19), 5717; https://doi.org/10.3390/s20195717 - 8 Oct 2020
Cited by 2 | Viewed by 2851
Abstract
Mapping small-scale high-precision velocity fields is of great significance to oceanic environment research. Coastal acoustic tomography (CAT) is a frontier technology used to observe large-scale velocity fields in the horizontal slice. Nonetheless, it is difficult to observe the velocity field using CAT in small-scale areas, specifically where the flow field is complex, such as in ocean ranches and artificial upwelling areas. This paper conducted a sound transmission experiment using four 50 kHz CAT systems in the Panzhinan waterway. Notably, sound transmission based on the round-robin method was recommended for small-scale CAT observation. The travel time between stations, obtained by correlation of raw data, was applied to reconstruct the horizontal velocity fields using tapered least squares inversion. The minimum net volume transport was 8.7 m³/s at 12:32, 1.63% of the total inflow volume transport, indicating that the observational errors were acceptable. The relative errors of the range-average velocity calculated from the differential travel time were 1.54% (path 2) and 0.92% (path 6), respectively. Moreover, the inversion velocity root-mean-square errors (RMSEs) were 0.5163, 0.1494, 0.2103, 0.2804 and 0.2817 m/s for paths 1, 2, 3, 4 and 6, respectively. The feasibility and acceptable accuracy of the CAT method in small-scale velocity field measurement were validated. Furthermore, three-dimensional (3-D) velocity field mapping should be performed with combined analysis of horizontal and vertical slices. Full article
(This article belongs to the Special Issue Intelligent Sound Measurement Sensor and System)
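The range-average velocity mentioned above follows from the standard reciprocal-transmission relation used in CAT: along a path of length L, the downstream and upstream travel times are t_down = L/(c + v) and t_up = L/(c − v), so both the path-averaged current v and sound speed c can be recovered exactly. A minimal sketch (the station distance and values below are illustrative, not the paper's):

```python
def range_average_velocity(distance_m, t_downstream, t_upstream):
    """Invert reciprocal travel times:
    t_down = L/(c+v), t_up = L/(c-v)
    =>  v = (L/2) * (1/t_down - 1/t_up)
        c = (L/2) * (1/t_down + 1/t_up)."""
    v = 0.5 * distance_m * (1.0 / t_downstream - 1.0 / t_upstream)
    c = 0.5 * distance_m * (1.0 / t_downstream + 1.0 / t_upstream)
    return v, c
```

Because v comes from the *difference* of the reciprocal travel times, common-mode errors (e.g., a shared clock offset in GPS-synchronized stations) largely cancel, which is what makes the differential travel time usable for the small relative errors reported.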
Show Figures

Figure 1. Sound transmission.
Figure 2. Map of the Panzhinan waterway near the Zhairuoshan Island with adjacent regions (left panel) and the CAT station array at a magnified scale (right panel).
Figure 3. Experimental settings between the two stations; the acoustic transceivers were suspended down to 1–2 m under the surface with an additional weight by a fixed mount; coastal acoustic tomography (CAT) systems were synchronized with the GPS.
Figure 4. (a) Ray patterns of S1–S3 in the range-independent ray simulation; (b) sound–speed profile data measured using conductivity–temperature–depth (CTD); (c) temperature profile data measured using CTD.
Figure 5. Stack diagram of the correlation patterns with first-arrival peaks marked with red circles.
Figure 6. Time plots of the differential travel time for five station pairs.
Figure 7. Horizontal velocity fields mapping.
Figure 8. ADCP velocity data with vector plots along the tracks. (a) Five ADCP ship tracks. (b) Vector plots for ADCP velocity.
Figure 9. (a,c) ADCP measurement data along S1–S3 and S3–S4, respectively; (b,d) comparison of the ADCP average data and the range-average velocity V_m. The solid blue lines represent V_m and the red dots represent the ADCP average value calculated from ADCP measurement data, respectively.
Figure 10. Comparison between the CAT inversion and the range-average velocity V_m at nine moments.
36 pages, 8154 KiB  
Review
A Review of Microelectronic Systems and Circuit Techniques for Electrical Neural Recording Aimed at Closed-Loop Epilepsy Control
by Reza Ranjandish and Alexandre Schmid
Sensors 2020, 20(19), 5716; https://doi.org/10.3390/s20195716 - 8 Oct 2020
Cited by 11 | Viewed by 6625
Abstract
Closed-loop implantable electronics offer a new trend in therapeutic systems aimed at controlling some neurological diseases such as epilepsy. Seizures are detected and electrical stimulation applied to the brain or groups of nerves. To this aim, the signal recording chain must be very carefully designed so as to operate in low-power and low-latency, while enhancing the probability of correct event detection. This paper reviews the electrical characteristics of the target brain signals pertaining to epilepsy detection. Commercial systems are presented and discussed. Finally, the major blocks of the signal acquisition chain are presented with a focus on the circuit architecture and a careful attention to solutions to issues related to data acquisition from multi-channel arrays of cortical sensors. Full article
(This article belongs to the Special Issue Integrated Circuits and Systems for Smart Sensory Applications)
Show Figures

Figure 1. The new classification of seizure types based on the International League Against Epilepsy (ILAE) report (reprinted with permission of Wiley Periodicals, Inc., © 2017 International League Against Epilepsy) [7].
Figure 2. General overview of (a) a low-power seizure detection system and (b) a low-power seizure control system.
Figure 3. Bandwidths of some vital physiological signals (modified from [20]).
Figure 4. Signal characteristics in neural recording systems.
Figure 5. Simplified block diagram of a practical neural recording system.
Figure 6. Placement of the different types of electrodes for neural recording.
Figure 7. Different types of electrodes for neural recording, including (a) grid array for intradural or subdural recording, (b) laminar intracortical electrodes, (c) intracortical micro-electrode array (reprinted with permission of Wiley Periodicals, Inc., © 2003 The American Laryngological, Rhinological and Otological Society, Inc.) [38], and (d) SEEG electrode (courtesy DIXI medical).
Figure 8. Vagus Nerve Stimulation (VNS) devices by LivaNova [47], including (a) AspireSR and (b) SenTiva (reprinted with permission; copyrighted material of LivaNova).
Figure 9. FDA-approved responsive neurostimulation (RNS) system for closed-loop epilepsy detection and stimulation (courtesy of NeuroPace, Inc.) [53].
Figure 10. FDA-approved deep-brain stimulation (DBS) system for epilepsy (courtesy of the Food and Drug Administration (FDA)).
Figure 11. FDA-approved DBS system for epilepsy [57] (image with kind permission of Medtronic).
Figure 12. Transcutaneous Vagus Nerve Stimulation device for epilepsy control [63] (image reprinted with kind permission of tVNS Technologies GmbH).
Figure 13. Embrace2 by Empatica Inc. [72] for seizure detection (image reprinted with kind permission of Empatica Inc.).
Figure 14. Overview of the ac-coupled capacitive-feedback structure for neural amplifiers. (a) Classical feedback topology and (b) capacitive T-feedback topology.
Figure 15. Pseudo-resistor architectures. (a) Fixed-value and (b) tunable pseudo-resistor.
Figure 16. The ac-coupled chopper amplifier architectures. (a) Classical topology with servo loop and (b) alternated topology without servo loop.
Figure 17. General architecture of dc-coupled amplifiers.
Figure 18. Amplifier sharing techniques: (a) time-division multiplexing (TDM) technique proposed in [84] and (b) frequency-division multiplexing (FDM) technique proposed in [85].
Figure 19. First four Rademacher functions.
Figure 20. Effect of the dc servo loop on the overall frequency response of the chopper amplifier. (a) Topology including a dc servo loop. (b) Block-level frequency responses of the topology including a servo loop. (c) Overall frequency response.
Figure 21. Noise analysis model of (a) a single-channel chopper amplifier and (b) a two-channel chopper amplifier.
Figure 22. Multichannel compressive sensing (MCS). (a) Overview of the MCS concept. (b) Structure of the conventional multi-input single-output compressive sensing (MISOCS) block with 16 inputs.
Figure 23. Schematic of the proposed multiple-input single-output compressive sensing block, mMISOCS.
25 pages, 2800 KiB  
Article
Determination of the Inaccuracies of Calculated EEG Indices
by Mariusz Borawski, Konrad Biercewicz and Jarosław Duda
Sensors 2020, 20(19), 5715; https://doi.org/10.3390/s20195715 - 8 Oct 2020
Cited by 1 | Viewed by 2498
Abstract
The data obtained from an EEG measurement are burdened with inaccuracies related to the measurement process itself and to the need to remove recorded disturbances. The article presents an example of how to calculate the Approach-Withdraw Index (EEG-AW) and the Memorization Index (MI) in such a way that their inaccuracy resulting from the removal of artifacts can be calculated periodically. This inaccuracy is expressed in terms of standard deviation, which makes it possible to assess the reliability of conclusions drawn when examining elements of a 2D computer game created in the Unity engine. Full article
(This article belongs to the Section Wearables)
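Two of the quantities this entry revolves around can be sketched in a few lines: global field power (GFP) as the spatial standard deviation across electrodes at each sample, and an approach-withdrawal index in the common frontal alpha-asymmetry form. This is a hedged, generic sketch; the specific band powers, electrode sets, and the authors' error-propagation scheme are not reproduced here.

```python
import numpy as np

def gfp(eeg):
    """Global field power: spatial standard deviation across channels
    at each time sample. `eeg` has shape (n_samples, n_channels)."""
    x = np.asarray(eeg, dtype=float)
    return x.std(axis=1)

def approach_withdraw_index(alpha_power_right, alpha_power_left):
    """One common form of the AW index: log-ratio of right vs. left
    frontal alpha-band power (the sign convention is an assumption)."""
    r = np.asarray(alpha_power_right, dtype=float)
    l = np.asarray(alpha_power_left, dtype=float)
    return np.log(r) - np.log(l)
```

The paper's contribution is to attach a standard deviation to such indices, so that a spike in the index after artifact removal can be judged against its own uncertainty rather than taken at face value.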
Show Figures

Figure 1. Excerpt from the ICA component related to the eye artifact.
Figure 2. ICA as the transformation of the coordinate system.
Figure 3. Chronological order of events while playing a computer game.
Figure 4. Excerpt from the ICA component containing eye artefacts.
Figure 5. Standard deviations and average values determined for the ICA component to be removed.
Figure 6. Standard deviations for electrodes at different distances from the eyes.
Figure 7. GFP for the alpha band calculated for the electrodes on the right side of the head and its approximate values of the upper limit of standard deviations.
Figure 8. Index AW and its approximate values of the upper limit of standard deviations.
Figure 9. Standardized index AW and its approximate values of the upper limit of standard deviations.
Figure 10. (a) A screenshot of the game, where the hero is in the middle of the next level; the points where the player is looking are marked with red stars; below are graphs (blue) of the Approach-Withdrawal and Memory Index and the standard deviation value for both indices (red line). (b) A graph showing the occurrence of an ocular artifact for an event related to the hero’s transition to the next level.
Figure 11. (a) A screenshot of the game, where the hero revives at a fixed point after death; the points where the player is looking are marked with red stars; below are graphs (blue) of the Approach-Withdrawal and Memory Index and the standard deviation value for both indices (red line). (b) A graph showing the occurrence of an ocular artifact before the hero’s rebirth event.
Figure 12. (a) A screenshot of the game, where the player fights the opponent; the points where the player is looking are marked with red stars; below are graphs (blue) of the Approach-Withdrawal and Memory Index and the standard deviation value for both indices (red line). (b) A graph showing the absence of an artifact for an event related to fighting an opponent.
Figure 13. (a) A screenshot of the game, where the player overcomes an obstacle (spikes); the points where the player is looking are marked with red stars; below are graphs (blue) of the Approach-Withdrawal and Memory Index and the standard deviation value for both indices (red line). (b) A graph showing the occurrence of an ocular artifact after an obstacle event.
24 pages, 10121 KiB  
Article
The Effect of Deflections and Elastic Deformations on Geometrical Deviation and Shape Profile Measurements of Large Crankshafts with Uncontrolled Supports
by Krzysztof Nozdrzykowski, Stanisław Adamczak, Zenon Grządziel and Paweł Dunaj
Sensors 2020, 20(19), 5714; https://doi.org/10.3390/s20195714 - 8 Oct 2020
Cited by 10 | Viewed by 2917
Abstract
This article presents a multi-criteria analysis of the errors that may occur while measuring the geometric deviations of crankshafts that require multi-point support. The analysis included in the paper confirmed that the currently used conventional support method—in which the journals of large crankshafts rest on a set of fixed rigid vee-blocks—significantly limits the detectability of their geometric deviations, especially those of the main journal axes’ positions. Insights for performing practical measurements, which will improve measurement procedures and increase measurement accuracy, are provided. The results are presented both graphically and as discrete amplitude spectra to make a visual, qualitative comparison, which is complemented by a quantitative assessment based on correlation analysis. Full article
(This article belongs to the Special Issue Measurement Methods in the Operation of Ships and Offshore Facilities)
Show Figures

Figure 1. Geometrical deviation measurement methods: (a) horizontal, on a set of rigid vee-block supports; (b) vertical, using precision measuring machines (courtesy of ADCOLE).
Figure 2. Geometrical model of the analyzed shaft with journal numeration.
Figure 3. A graphic representation of the cases under consideration.
Figure 4. Finite element model of the analyzed shaft: (a) isometric view; (b) mesh close-up; (c) main journals support.
Figure 5. SAJD measurement system: (a) layout of the system; (b) measurement method [11]: 1—measuring head MUK 25-600, 2—shaft journal, 3—drive motor, 4—displacement sensor, F—measuring head pressing force.
Figure 6. Exemplary finite element analysis results for case 1.
Figure 7. Changes in deflections measured in the vertical plane at individual main journals with the shaft rotated by 15° at a time and when one of the supports (of journal no. 5, counting from the timing gear end) is offset upwards by 0.03 mm relative to the others, shown in the (a) Cartesian and (b) polar coordinate systems.
Figure 8. Exemplary finite element analysis results for case 2.
Figure 9. Changes in deflections measured in the vertical plane at individual main journals with the shaft rotated by 15° at a time and when one of the supports (of journal no. 5, counting from the timing gear end) was offset downwards by 0.03 mm relative to the others, in the (a) Cartesian and (b) polar coordinate systems.
Figure 10. Changes in deflections measured in the vertical plane at individual main journals with the shaft rotated by 15° at a time, and when one of the main journal axes (of journal no. 5, counting from the timing gear end) was offset eccentrically upwards by 0.03 mm relative to the others, shown in the (a) Cartesian and (b) polar coordinate systems.
Figure 11. Changes in deflections measured in the vertical plane at individual main journals, when one of the main journal axes (no. 5, counting from the timing gear end) was offset eccentrically downwards by 0.03 mm relative to the others, shown in the (a) Cartesian and (b) polar coordinate systems.
Figure 12. Supplementary diagram of the successive stages of shaft deflection caused by its eccentricity with respect to the axis of rotation of the measuring system when the support retains its fixed height.
Figure 13. The journal displacements measured by the sensor for a full shaft rotation (0°–360°) are recorded when the support maintaining a constant height restricts these displacements, as shown in the (a) Cartesian and (b) polar coordinate systems.
Figure 14. Graph showing the measurable value w of eccentricity e as a function of the vertical position x of the support.
Figure 15. A supplementary diagram to determine the measurement error caused by a journal’s eccentric displacement when measuring geometric deviations of the shaft.
Figure 16. Deformation of the object supported by two vee-blocks and the linear and angular displacements resulting from this deformation.
Figure 17. Roundness profile of journal no. 5 measured by the reference method (a); the discrete amplitude spectrum (b).
Figure 18. Eccentric movement profile of the center of the roundness profile being measured; the eccentricity was 0.03 mm, and the support did not limit the shaft displacement (a); discrete amplitude spectrum (b).
Figure 19. Eccentric movement profile of the center of the roundness profile being measured when the supports were set at the same height (x = 0 mm), with an eccentricity of the main journal e = 0.03 mm (a); discrete amplitude spectrum (b).
Figure 20. The profile obtained by superimposing the measured roundness profile of journal no. 5 on the full profile of the eccentric movement of the measured roundness profile center (a); discrete amplitude spectrum of the superimposed profile (b).
Figure 21. Profile obtained by superimposing the measured roundness profile of journal no. 5 on the eccentric movement profile of the center of the measured roundness profile with the supports set at the same height (x = 0 mm), with an eccentricity of the main journal e = 0.03 mm (a); discrete amplitude spectrum of the superimposed profile (b).
Figure 22. Profile obtained by superimposing the measured roundness profile of journal no. 5—rotated by 60° with respect to the reference profile—onto the profile of the eccentric movement of the center of the measured roundness profile, with the supports set at the same height (x = 0 mm), with an eccentricity of the main journal e = 0.03 mm (a); discrete amplitude spectrum of the superimposed profile (b).
Figure 23. Profile obtained by superimposing the measured roundness profile of journal no. 5 (green)—rotated by 90° with respect to the reference profile—onto the eccentric movement profile of the center of the measured roundness profile (blue), with the supports set at the same height (x = 0 mm), with an eccentricity of the main journal e = 0.03 mm (a); discrete amplitude spectrum of the superimposed profile (b).
Figure 24. Measured and theoretical profiles (excluding harmonic component no. 1) obtained by superimposing the measured roundness profile of journal no. 5 onto the full eccentric movement profile of the center of the measured roundness profile.
Figure 25. Measured and theoretical profiles (excluding harmonic component no. 1) obtained by superimposing the measured roundness profile of journal no. 5 onto the eccentric movement profile of the center of the measured roundness profile with the supports set at the same height (x = 0 mm), with an eccentricity of the main journal e = 0.03 mm: (a) at a starting position; (b) rotated by 60°; (c) rotated by 90°; (d) rotated by 225°; (e) rotated by 300°.
Figure 26. Variation in the correlation coefficient ρ as a function of the angular shift between the measured and evaluated profiles.
21 pages, 8471 KiB  
Article
A Compact and Flexible UHF RFID Tag Antenna for Massive IoT Devices in 5G System
by Muhammad Hussain, Yasar Amin and Kyung-Geun Lee
Sensors 2020, 20(19), 5713; https://doi.org/10.3390/s20195713 - 8 Oct 2020
Cited by 22 | Viewed by 7974
Abstract
Upcoming 5th-generation (5G) systems incorporate physical objects (referred to as things), which sense the presence of components such as gears, gadgets, and sensors. They may transmit many kinds of states in the smart city context, such as new deals at malls, safe distances [...] Read more.
Upcoming 5th-generation (5G) systems incorporate physical objects (referred to as things), which sense the presence of components such as gears, gadgets, and sensors. They may transmit many kinds of states in the smart city context, such as new deals at malls, safe distances on roads, patient heart rhythms (especially in hospitals), and logistic control at aerodromes and seaports around the world. These serve to form the so-called future internet of things (IoT). From this futuristic perspective, everything should have its own identity. In this context, radio frequency identification (RFID) plays a specific role, providing wireless communication in a secure manner. Passive RFID tags work using energy harvested within massive systems. RFID has been habitually realized as a prerequisite for IoT, and the combination is called IoT RFID (I-RFID). For this scenario, tags should be productive, low-profile, compact, easily mountable, and eco-friendly. The presently available tags are not cost-effective, have not been proven as green tags for environmentally friendly IoT in 5G systems, and are not suitable for long-range communication in 5G systems. The proposed I-RFID tag uses the meandering angle technique (MAT) to construct a design that satisfies the features of a lower-cost printed antenna over the worldwide UHF RFID band standard (860–960 MHz). In our research, MAT tag antennas are fabricated on paper-based Korsnäs by screen- and flexo-printing; these show the smallest simulated performance variation under the dielectric changes caused by humidity and provide a plausible read range (RR) for the European (EU; 866–868 MHz) and North American (NA; 902–928 MHz) UHF band standards. The I-RFID tag size is reduced by 36% to 38% w.r.t. a previously published case, the tag gain is improved by 23.6% to 33.12%, and its read range is enhanced by 50.9% and 59.6% for the EU and NA UHF bands, respectively.
It performs impressively on several platforms (e.g., plastic, paper, and glass), yielding a new state-of-the-art I-RFID tag with better qualities for 5G systems. Full article
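The read range (RR) quoted above can be estimated with the standard Friis-based formula for passive UHF tags; the sketch below is illustrative only (the EIRP, power transmission coefficient, and chip sensitivity are assumed values, not taken from the paper):

```python
import math

def read_range_m(freq_hz, eirp_w, tag_gain_dbi, tau, ic_sensitivity_w):
    """Theoretical passive-tag read range from the Friis transmission
    equation: r = (lambda / 4*pi) * sqrt(EIRP * G_tag * tau / P_th)."""
    wavelength = 299_792_458.0 / freq_hz
    g_tag = 10.0 ** (tag_gain_dbi / 10.0)          # dBi -> linear gain
    return (wavelength / (4.0 * math.pi)) * math.sqrt(
        eirp_w * g_tag * tau / ic_sensitivity_w)

# Illustrative EU-band numbers (not the paper's): 3.28 W EIRP,
# 2.75 dBi tag gain, power transmission coefficient 0.9,
# chip wake-up sensitivity of -18 dBm (~15.8 uW).
rr = read_range_m(867e6, 3.28, 2.75, 0.9, 15.8e-6)
```

With these assumed inputs the formula yields a range on the order of 15 m, which is the right ballpark for a well-matched passive UHF tag.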
Show Figures

Figure 1
<p>Internet of things (IoT) four-tier architecture for smart X environments. The tiers are linked with each other through internet gateway access. Under this scenario, end users can easily manage and control the devices tier.</p>
Full article ">Figure 2
<p>Ultra-high frequency (UHF) radio frequency identification (RFID) band (865–930 MHz) standards of different countries.</p>
Full article ">Figure 3
<p>Flow chart of design process for I-RFID UHF tags for 5G IoT systems.</p>
Full article ">Figure 4
<p>(<b>a</b>) Conductive Asahi paste I-RFID Tag for European UHF RFID standard and (<b>b</b>) conductive Asahi paste I-RFID Tag for North American UHF RFID standard.</p>
Full article ">Figure 5
<p>Reflection coefficient measured under −10 dB: m1 is the minimum reflection coefficient S11 at <span class="html-italic">f<sub>c</sub></span> = 867 MHz (which is −29 dB), m2 is the resistance (which is 22 Ω), and m3 is the reactance (which is 195 Ω). These markers (m1, m2, m3) exist within the target bandwidth (37 MHz) from MX1 = 851 MHz to MX2 = 888 MHz at S11 = −10 dB and conjugate impedance (22 + <span class="html-italic">j</span>195 Ω) of the European UHF RFID band. The tag antenna is matched w.r.t. the targeted IC impedance (22 − <span class="html-italic">j</span>195 Ω) for the proposed tag and previously published tags (PPT).</p>
Full article ">Figure 6
<p>Reflection coefficient measured under −10 dB: m1 is the minimum reflection coefficient S11 at <span class="html-italic">f<sub>c</sub></span> = 913 MHz (which is −19 dB), m2 is the resistance (which is 22 Ω), and m3 is the reactance (which is 195 Ω). These markers (m1, m2, m3) exist within the target bandwidth (37 MHz) from MX1 = 896 MHz to MX2 = 933 MHz at S11 = −10 dB and conjugate impedance (22 + <span class="html-italic">j</span>195 Ω) of the North American UHF RFID band. The tag antenna is matched w.r.t. the targeted IC impedance (22 − <span class="html-italic">j</span>195 Ω) for the proposed tag and previously published tags (PPT).</p>
Full article ">Figure 7
<p>Torus-shaped omnidirectional radiating pattern of antenna gain: for European tag, 0.6 dB (2.75 dBi); for North American tag, 0.98 dB (3.14 dBi).</p>
Full article ">Figure 8
<p>The effect of meandering angle variation on the reflection coefficient (S11). As the meandering angle varies, the minima of the S11 curves fluctuate from −22 to −29 dB, while the tag antenna bandwidth changes slightly (from 33 to 37 MHz). The tag antenna has a minimum reflection coefficient of −29 dB and a maximum bandwidth of 37 MHz under −10 dB S11 at <span class="html-italic">θ</span> = 30°.</p>
Full article ">Figure 9
<p>Reflection coefficient results w.r.t. dielectric change (ε<sub>r</sub>) due to humidity-altering effect.</p>
Full article ">Figure 10
<p>Read range (RR) and reader power sensitivity (P<sub>R</sub>) for UHF RFID band countries w.r.t. center frequency (<span class="html-italic">f<sub>c</sub></span>) for proposed tags and previously published tags (PPT).</p>
Full article ">Figure 11
<p>(<b>a</b>) Inkjet printing setup and paper substrate printed antennas; (<b>b</b>) Experimental setup in an anechoic chamber [<a href="#B16-sensors-20-05713" class="html-bibr">16</a>].</p>
Full article ">Figure 12
<p>Prototypes of the antennas of previously published tags (PPT) and proposed tags (PT): (<b>a</b>) PPT EU UHF RFID tag (98 × 15 × 0.39 mm<sup>3</sup>); (<b>b</b>) PPT NA UHF RFID tag (97 × 13 × 0.39 mm<sup>3</sup>); (<b>c</b>) PT EU UHF RFID tag (101.2 × 10.5 × 0.39 mm<sup>3</sup>); and (<b>d</b>) PT NA UHF RFID tag (92.4 × 10 × 0.39 mm<sup>3</sup>).</p>
Full article ">Figure 13
<p>I-RFID indoor and outdoor work mechanism for IoT in smart X environments (e.g., smart logistics, smart market retail, smart passport immigration clearance, and smart city transportation).</p>
Full article ">Figure 14
<p>Simulation model for all platforms (140 mm × 30 mm × 1 mm) w.r.t. the RFID tag.</p>
Full article ">Figure 15
<p>Torus-shaped omnidirectional radiating pattern of antenna gain for North American UHF RFID tag on several mounting platforms: 0.77 dB (2.92 dBi) for plastic; 0.36 dB (2.51 dBi) for paper; −0.5 dB (1.58 dBi) for glass; and −3.2 dB (−1.05 dBi) for water.</p>
Full article ">Figure 16
<p>Tag gain, read range (RR), and reader power sensitivity (<span class="html-italic">P<sub>R</sub></span>) of the NA UHF RFID tag on mounting platforms and in free space.</p>
Full article ">
27 pages, 6183 KiB  
Article
Determining UAV Flight Trajectory for Target Recognition Using EO/IR and SAR
by Wojciech Stecz and Krzysztof Gromada
Sensors 2020, 20(19), 5712; https://doi.org/10.3390/s20195712 - 8 Oct 2020
Cited by 15 | Viewed by 6892
Abstract
The paper presents the concept of planning the optimal trajectory of fixed-wing unmanned aerial vehicle (UAV) of a short-range tactical class, whose task is to recognize a set of ground objects as a part of a reconnaissance mission. Tasks carried out by such [...] Read more.
The paper presents the concept of planning the optimal trajectory of fixed-wing unmanned aerial vehicle (UAV) of a short-range tactical class, whose task is to recognize a set of ground objects as a part of a reconnaissance mission. Tasks carried out by such systems are mainly associated with an aerial reconnaissance using Electro-Optical/Infrared (EO/IR) systems and Synthetic Aperture Radars (SARs) to support military operations. Execution of a professional reconnaissance of the indicated objects requires determining the UAV flight trajectory in the close neighborhood of the target, in order to collect as much interesting information as possible. The paper describes the algorithm for determining UAV flight trajectories, which is tasked with identifying the indicated objectives using the sensors specified in the order. The presence of UAV threatening objects is taken into account. The task of determining the UAV flight trajectory for recognition of the target is a component of the planning process of the tactical class UAV mission, which is also presented in the article. The problem of determining the optimal UAV trajectory has been decomposed into several subproblems: determining the reconnaissance flight method in the vicinity of the currently recognized target depending on the sensor used and the required parameters of the recognition product (photo, film, or SAR scan), determining the initial possible flight trajectory that takes into account potential UAV threats, and planning detailed flight trajectory considering the parameters of the air platform based on the maneuver planning algorithm designed for tactical class platforms. UAV route planning algorithms with time constraints imposed on the implementation of individual tasks were used to solve the task of determining UAV flight trajectories. The problem was formulated in the form of a Mixed Integer Linear Problem (MILP) model. 
For determining the flight path in the neighborhood of the target, the optimal control algorithm was also presented in the form of a MILP model. The determined trajectory is then corrected using a construction algorithm that determines real UAV flight segments based on Dubins curves. Full article
(This article belongs to the Section Remote Sensors)
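The Dubins-curve correction mentioned in the abstract joins poses with circular arcs and straight segments under a minimum-turn-radius constraint. As an illustrative sketch (not the authors' algorithm), the length of one candidate word, LSL, can be computed from the left-turn circle centres:

```python
import math

def dubins_lsl_length(start, goal, r):
    """Length of the LSL (left-straight-left) Dubins candidate between two
    poses (x, y, heading_rad) for a platform with minimum turn radius r.
    A full planner evaluates all six candidate words (LSL, RSR, LSR, RSL,
    RLR, LRL) and keeps the shortest feasible one."""
    x0, y0, th0 = start
    x1, y1, th1 = goal
    # Centres of the left-turn circles at the start and goal poses
    c0 = (x0 - r * math.sin(th0), y0 + r * math.cos(th0))
    c1 = (x1 - r * math.sin(th1), y1 + r * math.cos(th1))
    dx, dy = c1[0] - c0[0], c1[1] - c0[1]
    d = math.hypot(dx, dy)                   # straight segment length
    phi = math.atan2(dy, dx)                 # heading along the straight part
    arc0 = (phi - th0) % (2.0 * math.pi)     # first left arc
    arc1 = (th1 - phi) % (2.0 * math.pi)     # second left arc
    return r * arc0 + d + r * arc1
```

For two aligned poses 10 m apart with the same heading, the LSL path degenerates to a straight segment of length 10.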
Show Figures

Figure 1
<p>Route plan example with two targets. The orange route segment is an alternative that can be used for target recognition if the main route segment must be omitted.</p>
Full article ">Figure 2
<p>An example of projecting the matrix on the earth surface at a right angle—Ground Sampled Distance (GSD) definition.</p>
Full article ">Figure 3
<p>Graphical representation of geometries used in algorithm and its output: <math display="inline"><semantics> <mrow> <mi>C</mi> <msub> <mi>l</mi> <mrow> <mi>F</mi> <mi>O</mi> <mi>V</mi> </mrow> </msub> </mrow> </semantics></math> closed loop defining outer edges of Field of View.</p>
Full article ">Figure 4
<p>Dimensions sketch in horizontal plane for single scan with SAR under the squint angle <math display="inline"><semantics> <mi>θ</mi> </semantics></math>.</p>
Full article ">Figure 5
<p>Coordinate systems used in mission planning and execution.</p>
Full article ">Figure 6
<p>Parameters generated by the algorithm.</p>
Full article ">Figure 7
<p>Route plan for a UAV with a predefined target that is to be recognized with SAR. The green hemisphere models the location of the anti-aircraft defense. The dark blue polygon models the Field of View of the SAR. The target to be recognized is placed in the middle of the polygon and is marked by a dark triangle.</p>
Full article ">Figure 8
<p>Parameters defining segments, circles and arcs.</p>
Full article ">Figure 9
<p>Three-arc Dubins curve.</p>
Full article ">Figure 10
<p>Single arc ingress segment curve calculation—initial state.</p>
Full article ">Figure 11
<p>Single arc ingress segment curve calculation result (Algorithm 5).</p>
Full article ">Figure 12
<p>Single arc out segment curve calculation.</p>
Full article ">Figure 13
<p>Route plan for UAV with predefined targets that are to be recognized with SAR and EO/IR. Dark blue polygons model the Field of View for SAR and EO/IR, respectively. FoV for EO/IR is usually smaller than FoV of SAR.</p>
Full article ">Figure 14
<p>Result of SAR scanning the terrain of port in Gdynia. SAR scan with geographical reference put on map as additional layer. (<b>a</b>) Trajectory is presented on the image. Blue segment represents synthetic aperture; (<b>b</b>) zoomed SAR scan.</p>
Full article ">Figure 15
<p>An example of data fusion from EO/IR and SAR. (<b>a</b>) The orange polygon shows the scanned area. The area bounded by yellow sections shows Field of View for SAR. Background map is part of OpenStreetMap; (<b>b</b>) zoomed SAR scan.</p>
Full article ">Figure 16
<p>An example of fusion of photo data from an EO device gathered at different times. UAV trajectory segments are marked. The smaller photo was taken by the UAV for recognition of objects located in this smaller area.</p>
Full article ">Figure 17
<p>Sample UAV flight trajectories generated for various UAV flight start and end points. The blue line shows the flight segment in which the UAV recognizes a target. Optimization function is defined in (<a href="#FD34-sensors-20-05712" class="html-disp-formula">34</a>).</p>
Full article ">Figure 18
<p>Four UAV flight trajectories generated for different optimization functions. The trajectories for the objective function that minimizes the number of control changes (<a href="#FD34-sensors-20-05712" class="html-disp-formula">34</a>) are marked in blue and yellow. The trajectories for the function that minimizes the length of the UAV flight path (<a href="#FD47-sensors-20-05712" class="html-disp-formula">47</a>) are marked in red and purple.</p>
Full article ">
22 pages, 3316 KiB  
Article
Real-Time Impedance Monitoring of Epithelial Cultures with Inkjet-Printed Interdigitated-Electrode Sensors
by Dahiana Mojena-Medina, Moritz Hubl, Manuel Bäuscher, José Luis Jorcano, Ha-Duong Ngo and Pablo Acedo
Sensors 2020, 20(19), 5711; https://doi.org/10.3390/s20195711 - 8 Oct 2020
Cited by 28 | Viewed by 7731
Abstract
From electronic devices to large-area electronics, from individual cells to skin substitutes, printing techniques are providing compelling applications in wide-ranging fields. Research has thus fueled the vision of a hybrid, printing platform to fabricate sensors/electronics and living engineered tissues simultaneously. Following this interest, [...] Read more.
From electronic devices to large-area electronics, from individual cells to skin substitutes, printing techniques are providing compelling applications in wide-ranging fields. Research has thus fueled the vision of a hybrid printing platform to fabricate sensors/electronics and living engineered tissues simultaneously. Following this interest, we have fabricated interdigitated-electrode sensors (IDEs) by inkjet printing to monitor epithelial cell cultures. We have fabricated IDEs using flexible substrates with silver nanoparticles as a conductive element and SU-8 as the passivation layer. Our sensors are cytocompatible, have a topography that simulates microgrooves of 300 µm width and ~4 µm depth, and can be reused for cellular studies without detriment to their electrical performance. To test the inkjet-printed sensors and demonstrate their potential use for monitoring laboratory-grown skin tissues, we have developed a real-time system and monitored label-free proliferation, migration, and detachment of keratinocytes by impedance spectroscopy. We have found that variations in the impedance correlate linearly with the cell densities initially seeded and that the main component influencing the total impedance is the isolated effect of the cell membranes. Results obtained show that impedance can track cellular migration over the surface of the sensors, exhibiting a linear relationship with the standard method of image processing. Our results provide a useful approach for non-destructive in-situ monitoring of processes related to both in vitro epidermal models and wound healing with low-cost ink-jetted sensors. This type of flexible sensor, as well as the impedance method, is promising for the envisioned hybrid technology of 3D-bioprinted smart skin substitutes with built-in electronics. Full article
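The cell-index quantity used throughout the figures can be sketched as a relative impedance change against a cell-free baseline; the definition below is a common convention (used by commercial ECIS-style systems), not necessarily the exact formula of the paper, and the magnitudes are hypothetical:

```python
def cell_index(z_cells, z_blank):
    """Cell index from impedance magnitudes |Z| (ohms) measured at the same
    set of frequencies with cells and with the cell-free (blank) sensor.
    A common definition takes the maximum relative impedance change over
    frequency; this is an assumed convention, not necessarily the paper's."""
    return max((zc - zb) / zb for zc, zb in zip(z_cells, z_blank))

# Hypothetical magnitudes at three measurement frequencies
ci = cell_index([1500.0, 900.0, 420.0], [1000.0, 800.0, 400.0])
```

A confluent, well-attached layer raises |Z| at low frequencies most, so the index is dominated by those points; a blank sensor gives an index of zero.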
Show Figures

Figure 1
<p>(<b>a</b>) Sketch and (<b>b</b>–<b>e</b>) manufacturing strategies for inkjet printing coplanar capacitors, showing the materials used and curing parameters for the development of the sensors.</p>
Full article ">Figure 2
<p>(<b>a</b>) Assembly and packaging of the IDE-based devices to perform impedance spectroscopy of cultured cells. (<b>b</b>) Schematic representation of the impedance-based biosensor system. The system consists of inkjet-printed IDE sensors, an IDE-based chamber, an impedance analyzer, a microscope with an integrated bioreactor, and a PC to control image acquisition and impedance recording in real time.</p>
Full article ">Figure 3
<p>(<b>a</b>) Optical micrography of the inkjet-printed sensors (scale bar 100 µm). (<b>b</b>) Surface profilometry of the two continuous inkjet-printed electrodes, revealing a thickness of a 0.6 µm for bare Ag electrodes (<b>b.1</b>) and 4 µm for passivated electrodes (<b>b.2</b>). (<b>c</b>) SEM micrography of the inkjet-printed sensor surface in a top view (scale bar 200 µm). (<b>d</b>) The effect of UV curing on the SU-8 passivation layers showed an increase in the smoothness of the outmost layer. UV treated samples showed a smoother surface (<b>d.2</b>) compared with non-treated samples (<b>d.1</b>). (<b>e</b>) Electrical performance of pristine inkjet-printed sensors, sensors with collagen functionalization, and sensors after three days in vitro with HaCaT cell cultures. Magnitude of the impedance (left) and phase (right).</p>
Full article ">Figure 4
<p>(<b>a</b>) Live/dead assay on the HaCaT cell line after 3 days of incubation. The distribution of live and dead cells was analyzed in a total of 60 images of areas around different electrode sites on the IDEs. The error bars represent the observed standard deviations. (<b>b</b>) Fluorescent image of the surface of two consecutive fingers of the electrodes (scale bar 100 μm), showing simultaneous detection of live and dead cells: live cells appear green, while the red spots identify dead cells. Owing to the non-transparency of the conductive lines (AgNP), stained cells can be observed only in the gaps of the interdigitated electrodes, as the sample was illuminated from below. (<b>c</b>) SEM image of the IDEs covered by a confluent layer of HaCaT cells (scale bar 1 mm), confirming that when stained cells are observed in the gaps of the interdigitated fingers, they are also present over the electrode lines. The magnification of one area is highlighted, showing cells adhered to the surface of the electrode (scale bar 100 μm).</p>
Full article ">Figure 5
<p>Monitoring of proliferation and determining detection limits of the sensor at 100 Hz. (<b>a</b>) Cell index values versus time in hours for real-time impedance monitoring of HaCaT-GFP cell adhesion and proliferation over 96 h. Cells were seeded at different initial densities (12,000 cells/cm<sup>2</sup>; 35,000 cells/cm<sup>2</sup>; 75,000 cells/cm<sup>2</sup>; and 130,000 cells/cm<sup>2</sup>) on collagen-coated inkjet-printed IDEs. The higher the initial cell density, the higher the cell index obtained. For the initial cell density of 75,000 cells/cm<sup>2</sup>, the maximum cell index value is reached at around 72 h, indicating cell confluence in the culture. For the low initial cell density (12,000 cells/cm<sup>2</sup>), the measurements are significantly lower than those of the other cultures, remaining low even after 69 h and slowly increasing after that. For the highest initial cell density (130,000 cells/cm<sup>2</sup>), the maximum cell index value is detected around 60 h, sharply decreasing in the following measurements. As cultures with different initially seeded cell densities have different growth rates, the maximum cell index is expected to be detected at different time points, which is verified in the cell index curve. (<b>b</b>) Cell index linearly correlated with the number of cells initially seeded on the sensors between 35,000 and 130,000 cells/cm<sup>2</sup> at 17 h after seeding (error bars represent the standard deviation, <span class="html-italic">n</span> = 3). As the cell index of the cultures with the lowest cell density (i.e., 12,000 cells/cm<sup>2</sup>) remained low throughout the 69 h, these values were not included in the analysis of the correlation between initial cell densities and cell index, as we assumed they were below the sensitivity limit of the device.</p>
Full article ">Figure 6
<p>(<b>a</b>) Monitoring the addition of chemicals by impedance spectroscopy. Triton X-100 was added to the cell culture to destroy the bi-lipid membrane of the cells. The impedance drastically dropped in the first 10 min after the addition of the chemical. (<b>b</b>) Optical observation of the presence of cells under the sensors 20 min after adding Triton X-100.</p>
Full article ">Figure 7
<p>(<b>a</b>) Equivalent circuit model to fit the impedance measured in a non-faradic analysis. (<b>a.1</b>) Example of the electrical equivalent applying partial capacitance techniques of a sensor with four electrodes, considering the effect of the finite multilayer passivation on top of the electrodes on the static capacitance. (<b>a.2</b>) Electric circuit model of the IDEs with cells in a culture medium, modeling the resistivity of the culture medium, Rc, the capacitance of the IDEs, Cg, and the impedance of the surface interface. The presence of cells is modeled by a constant phase element, CPE<sub>DL</sub>, and a resistance Rs. (<b>a.3</b>) Schematic representation of the biophysical interpretation of the equivalent circuit model shown in a.2. The total current can pass through the interface as a sum of the contributions from the geometrical capacitance of the IDEs (curved arrow I<sub>CG</sub>) and the non-faradic process that takes place at the interface of the electrodes due to the presence of cells (straight arrow I<sub>surface</sub>). The dashed lines in the schematic represent the electric field generated due to the nature of the interdigitated electrodes. Green circles represent the HaCaT-GFP cells seeded on the sensors, while the blue dots represent the cellular wastes resulting from consumption of the growth medium. (<b>b</b>) Nyquist plot of experimental impedance spectra for monitoring cell detachment and death during an 80 h period. The inset plot shows the measured impedance and the corresponding fitting with the circuit model at time 15 h. Each point in the Nyquist plot represents the impedance at one frequency, in which higher frequencies appear closer to the origin (i.e., lower values of the real part of the impedance).</p>
Full article ">Figure 8
<p>Monitoring of cell migration using real-time impedance spectroscopy. (<b>a</b>) Mean edge displacement of a monolayer of cell cultures in control groups (Petri dish) and experimental groups (cells seeded on the IDE devices) during 4000 min of time-lapse observation. (<b>b</b>) Visualization of the experiment under a microscope. For both the sensors and the control cultures, the cell edge advances towards the top of the figure over time (migration). Note how in the sensor cultures the electrodes appear as dark stripes due to the opacity of the inks (illumination from below). (<b>c</b>) The relation between real-time impedance monitoring of cell migration and the displacement of cells obtained by processing the microscope images. (<b>d</b>) Linear correlation between the cell index values and the recovery degree of wound healing, with a linear regression coefficient R = 0.89.</p>
Full article ">
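One plausible reading of the equivalent circuit described in Figure 7 (the medium resistance in series with the Rs-plus-CPE surface branch, all in parallel with the geometric capacitance Cg of the IDEs) can be evaluated numerically. Component values below are illustrative, not fitted:

```python
def ide_impedance(omega, Rc, Cg, Rs, Q, n):
    """Impedance of one plausible reading of the Figure 7 circuit: medium
    resistance Rc in series with the surface branch (Rs + constant phase
    element), all in parallel with the geometric capacitance Cg of the IDEs.
    The CPE is Z = 1 / (Q * (j*omega)^n), with n in (0, 1]."""
    z_cpe = 1.0 / (Q * (1j * omega) ** n)     # constant phase element
    z_branch = Rc + Rs + z_cpe                # conduction/surface path
    z_cg = 1.0 / (1j * omega * Cg)            # geometric capacitance path
    return z_branch * z_cg / (z_branch + z_cg)

# Illustrative (not fitted) component values; |Z| falls with frequency,
# as in the measured spectra.
z_lo = ide_impedance(1e2, Rc=500.0, Cg=1e-10, Rs=200.0, Q=1e-6, n=0.9)
z_hi = ide_impedance(1e6, Rc=500.0, Cg=1e-10, Rs=200.0, Q=1e-6, n=0.9)
```

Sweeping omega and plotting -Im(Z) against Re(Z) reproduces the kind of Nyquist arcs fitted in Figure 7b.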
18 pages, 2362 KiB  
Article
A Hybrid Intrusion Detection Model Combining SAE with Kernel Approximation in Internet of Things
by Yukun Wu, Wei William Lee, Xuan Gong and Hui Wang
Sensors 2020, 20(19), 5710; https://doi.org/10.3390/s20195710 - 8 Oct 2020
Cited by 11 | Viewed by 2745
Abstract
Owing to the constraints of time and space complexity, network intrusion detection systems (NIDSs) based on support vector machines (SVMs) face the “curse of dimensionality” in a large-scale, high-dimensional feature space. This study proposes a joint training model that combines a stacked autoencoder [...] Read more.
Owing to the constraints of time and space complexity, network intrusion detection systems (NIDSs) based on support vector machines (SVMs) face the “curse of dimensionality” in a large-scale, high-dimensional feature space. This study proposes a joint training model that combines a stacked autoencoder (SAE) with an SVM and the kernel approximation technique. The training model uses the SAE to perform feature dimension reduction and uses random Fourier features to perform kernel approximation; the random Fourier mapping is then explicitly applied to the sub-sample to generate the random feature space, making it possible for a linear SVM to uniformly approximate the Gaussian kernel SVM. Finally, the SAE performs joint training with the efficient linear SVM. We studied the effects of the SAE structure and the random Fourier features on classification performance, and compared that performance with that of other training models, including some without kernel approximation. At the same time, we compare the accuracy of the proposed model with that of other models, including basic machine learning models and state-of-the-art models from the literature. The experimental results demonstrate that the proposed model outperforms the previously proposed methods in terms of classification performance and also reduces the training time. Our model is feasible and works efficiently on large-scale datasets. Full article
(This article belongs to the Section Internet of Things)
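The random Fourier feature step at the heart of the kernel approximation can be sketched in a few lines. This is a generic Rahimi–Recht-style mapping, not the authors' code; gamma and the sample values are illustrative, and in the paper this map would be applied to SAE-encoded features before training a linear SVM:

```python
import numpy as np

def random_fourier_features(X, n_features, gamma, rng):
    """Map X into a random feature space whose inner products approximate the
    Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2), so a linear SVM on
    the mapped data approximates a Gaussian-kernel SVM on the raw data."""
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(2, 5))                 # e.g. two SAE-encoded samples
Z = random_fourier_features(X, 2000, gamma=0.5, rng=rng)
approx = Z[0] @ Z[1]                        # approximate kernel value
exact = np.exp(-0.5 * np.sum((X[0] - X[1]) ** 2))   # exact RBF kernel value
```

With 2000 random features the dot product of the mapped samples agrees with the exact RBF kernel to within a few percent, which is what lets the cheap linear SVM stand in for the Gaussian-kernel one.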
Show Figures

Figure 1
<p>Basic structure of the autoencoder (AE).</p>
Full article ">Figure 2
<p>Stacked autoencoder (SAE) structure.</p>
Full article ">Figure 3
<p>Proposed model framework. (KA denotes kernel approximation)</p>
Full article ">Figure 4
<p>Proposed joint training model.</p>
Full article ">Figure 5
<p>Experimental results with random features from 100 to 450. (<b>a</b>) Accuracy. (<b>b</b>) Training time.</p>
Full article ">Figure 6
<p>Binary-category classification accuracy.</p>
Full article ">Figure 7
<p>Five-category classification accuracy.</p>
Full article ">
18 pages, 3478 KiB  
Article
Force Shadows: An Online Method to Estimate and Distribute Vertical Ground Reaction Forces from Kinematic Data
by Alexander Weidmann, Bertram Taetz, Matthias Andres, Felix Laufer and Gabriele Bleser
Sensors 2020, 20(19), 5709; https://doi.org/10.3390/s20195709 - 8 Oct 2020
Cited by 2 | Viewed by 4029
Abstract
Kinetic models of human motion rely on boundary conditions which are defined by the interaction of the body with its environment. In the simplest case, this interaction is limited to the foot contact with the ground and is given by the so called [...] Read more.
Kinetic models of human motion rely on boundary conditions which are defined by the interaction of the body with its environment. In the simplest case, this interaction is limited to the foot contact with the ground and is given by the so-called ground reaction force (GRF). A major challenge in the reconstruction of GRF from kinematic data is the double support phase, referring to the state with multiple ground contacts. In this case, the GRF prediction is not well defined. In this work we present an approach to reconstruct and distribute vertical GRF (vGRF) to each foot separately, using only kinematic data. We propose the biomechanically inspired force shadow method (FSM) to obtain a unique solution for any contact phase, including double support, of an arbitrary motion. We create a kinematic based function, model an anatomical foot shape and mimic the effect of hip muscle activations. We compare our estimations with the measurements of a Zebris pressure plate and obtain correlations of 0.39 ≤ r ≤ 0.94 for double support motions and 0.83 ≤ r ≤ 0.87 for a walking motion. The presented data is based on inertial human motion capture, showing the applicability for scenarios outside the laboratory. The proposed approach has low computational complexity and allows for online vGRF estimation. Full article
(This article belongs to the Special Issue Sensor-Based Systems for Kinematics and Kinetics)
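The core idea of distributing the total vertical GRF between the feet via Gaussian "force shadows" can be sketched in a few lines. This is a deliberately simplified, isotropic version of the method (the paper additionally shapes the functions with an anatomical foot model and hip-flexion terms); the spread sigma is an assumed value:

```python
import math

def distribute_vgrf(total_vgrf, com_xy, left_xy, right_xy, sigma=0.15):
    """Split the total vertical GRF between the feet using isotropic Gaussian
    'force shadows' centred under each foot -- a simplified sketch of the
    paper's idea. com_xy is the ground projection of the centre of mass,
    left_xy/right_xy the foot positions (m); sigma (m) is an assumed spread."""
    def shadow(foot_xy):
        d2 = (com_xy[0] - foot_xy[0]) ** 2 + (com_xy[1] - foot_xy[1]) ** 2
        return math.exp(-d2 / (2.0 * sigma ** 2))
    wl, wr = shadow(left_xy), shadow(right_xy)
    total_w = wl + wr
    return total_vgrf * wl / total_w, total_vgrf * wr / total_w

# Double support with the CoM midway between the feet -> a 50/50 split
fl, fr = distribute_vgrf(700.0, (0.0, 0.0), (-0.1, 0.0), (0.1, 0.0))
```

As the centre of mass shifts toward one foot, that foot's shadow dominates and the split smoothly approaches single support, which is what makes the distribution well defined through the double support phase.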
Show Figures

Figure 1
<p>Kinematic data with constructed force shadow function underneath the feet; the blue vector describes the ground reaction force <math display="inline"><semantics> <msub> <mi>F</mi> <mrow> <mi>GRF</mi> </mrow> </msub> </semantics></math> (<b>left plot</b>). Standard foot model for the left and right foot with partitioned regions <math display="inline"><semantics> <msub> <mo>Ω</mo> <mi>i</mi> </msub> </semantics></math> (<b>right plot</b>); the location of the first- (<math display="inline"><semantics> <msub> <mi>p</mi> <mrow> <mi>F</mi> <mi>M</mi> </mrow> </msub> </semantics></math>) and the fifth metatarsal head (<math display="inline"><semantics> <msub> <mi>p</mi> <mrow> <mi>V</mi> <mi>M</mi> </mrow> </msub> </semantics></math>) and the calcaneus bone <math display="inline"><semantics> <mrow> <mo>(</mo> <msub> <mi>p</mi> <mrow> <mi>C</mi> <mi>A</mi> </mrow> </msub> <mo>)</mo> </mrow> </semantics></math> are marked as blue points.</p>
Full article ">Figure 2
<p>Standard foot model registered via right (<math display="inline"><semantics> <mrow> <msub> <mi>M</mi> <mi>R</mi> </msub> <mo>,</mo> <msub> <mi>t</mi> <mi>R</mi> </msub> </mrow> </semantics></math>) and left (<math display="inline"><semantics> <mrow> <msub> <mi>M</mi> <mi>L</mi> </msub> <mo>,</mo> <msub> <mi>t</mi> <mi>L</mi> </msub> </mrow> </semantics></math>) transformation using the anatomic landmarks from both the model and the kinematics.</p>
Full article ">Figure 3
<p>Left cuboid bone to model the arch of the foot using the relative height <math display="inline"><semantics> <msub> <mi>h</mi> <mi>a</mi> </msub> </semantics></math> (<b>left</b>) and muscles responsible for hip flexion, i.e., psoas major and iliacus muscles (<b>right</b>).</p>
Full article ">Figure 4
<p>vGRF trajectories for the left and right foot applying isometric Gaussian functions, i.e., <math display="inline"><semantics> <mrow> <msub> <mi>a</mi> <mi>s</mi> </msub> <mo>,</mo> <msub> <mi>b</mi> <mi>s</mi> </msub> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mspace width="4.pt"/> <mi>for</mi> <mspace width="4.pt"/> <mi>all</mi> <mspace width="4.pt"/> <mi>s</mi> <mo>∈</mo> <mi mathvariant="script">S</mi> </mrow> </semantics></math>, (blue squares) and optimized, elliptic functions (green triangles) during AR movement compared to the pressure plate (red line); both plots show the estimations and measurements in Newton per kilogram.</p>
Full article ">Figure 5
<p><math display="inline"><semantics> <msub> <mi>vGRF</mi> <mi>Arch</mi> </msub> </semantics></math> trajectories for the left and right foot before (blue squares) and after (green triangles) modeling the foot arch during AR movement (left) compared to the pressure plate (red line); both plots show the estimations and measurements in Newton per kilogram.</p>
Full article ">Figure 6
<p>vGRF trajectories for the left and right foot before (blue squares) and after (green triangles) modeling the hip flexion compared to the pressure plate (red line) during AR movement; both plots show the estimations and measurements in Newton per kilogram.</p>
Full article ">Figure 7
<p>vGRF trajectories for the left and right foot resulting from <math display="inline"><semantics> <mrow> <mi>FSM</mi> </mrow> </semantics></math> (green indication) compared to the pressure plate (red indication) normalized by the total body weight; the lines illustrate the mean vGRF values taken from multiple gait cycles (from heel-strike to toe-off) with the respective variations of one standard deviation.</p>
Full article ">Figure 8
<p>vGRF trajectories for the left and right foot demonstrating the effect of poorly tracked <math display="inline"><semantics> <mrow> <mi>CoM</mi> </mrow> </semantics></math> while shifting the weight back and forth; both plots show the estimations and measurements in Newton per kilogram.</p>
Full article ">
26 pages, 3879 KiB  
Article
Condition-Based Maintenance with Reinforcement Learning for Dry Gas Pipeline Subject to Internal Corrosion
by Zahra Mahmoodzadeh, Keo-Yuan Wu, Enrique Lopez Droguett and Ali Mosleh
Sensors 2020, 20(19), 5708; https://doi.org/10.3390/s20195708 - 7 Oct 2020
Cited by 20 | Viewed by 4388
Abstract
Gas pipeline systems are among the largest energy infrastructures in the world and are known to be very efficient and reliable. However, this does not mean they are free of risk. Corrosion is a significant problem in gas pipelines that poses serious risks, such as ruptures and leakage, to the environment and the pipeline system. Therefore, various maintenance actions are performed routinely to ensure the integrity of the pipelines. Corrosion-related maintenance actions account for a significant portion of the pipeline’s operation and maintenance costs, and minimizing this large cost is a highly compelling subject that has been addressed by many studies. In this paper, we investigate the benefits of applying reinforcement learning (RL) techniques to the corrosion-related maintenance management of dry gas pipelines. We first address the rising need for a simulated testbed by proposing a test bench that models corrosion degradation while interacting with the maintenance decision-maker within the RL environment. Second, we propose a condition-based maintenance management approach that leverages a data-driven RL decision-making methodology. An RL maintenance scheduler is applied to the proposed test bench, and the results show that the proposed condition-based maintenance management technique can reduce maintenance costs by up to 58% compared to a periodic maintenance policy while securing pipeline reliability. Full article
(This article belongs to the Special Issue The Application of Sensors in Fault Diagnosis and Prognosis)
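The condition-based scheduler described in this abstract (and the Q-learning pseudocode of Figure 7) can be pictured in miniature as tabular Q-learning over discretized corrosion states. Everything below, including the degradation model, the costs, and the state binning, is an illustrative assumption rather than the paper's test bench.

```python
import random

# Tabular Q-learning over discretized corrosion-depth states.
# Actions: 0 = continue operation, 1 = perform maintenance (resets defect).
# Costs, growth probabilities, and bin count are illustrative assumptions.

N_STATES = 5           # corrosion-depth bins: 0 (new) .. 4 (near failure)
ACTIONS = (0, 1)
REPAIR_COST = 10.0
FAILURE_COST = 200.0   # incurred while the defect sits in the deepest bin

def step(state, action, rng):
    """One transition of the toy degradation environment."""
    if action == 1:                                      # maintenance resets depth
        return 0, -REPAIR_COST
    nxt = min(state + rng.choice((0, 1)), N_STATES - 1)  # stochastic defect growth
    reward = -FAILURE_COST if nxt == N_STATES - 1 else 0.0
    return nxt, reward

def train(episodes=2000, horizon=50, alpha=0.2, gamma=0.95, eps=0.1, seed=1):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            if rng.random() < eps:                       # epsilon-greedy exploration
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[s][act])
            s2, r = step(s, a, rng)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
policy = [max(ACTIONS, key=lambda act: q[s][act]) for s in range(N_STATES)]
```

The learned policy is read off as the per-state argmax of the Q-table; with these toy costs it repairs once corrosion approaches the failure bin instead of on a fixed schedule, which is the essence of condition-based over periodic maintenance.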
Figure 1
<p>The proposed test bench for interaction evaluation between a maintenance decision-maker and a corrosive pipeline.</p>
Full article ">Figure 2
<p>A rectangular-like corrosion defect in the pipeline in (<b>a</b>) top view and (<b>b</b>) cross-section view.</p>
Full article ">Figure 3
<p>A schematic diagram of the two-stage model for uniform corrosion as a function of time (after [<a href="#B20-sensors-20-05708" class="html-bibr">20</a>]).</p>
Full article ">Figure 4
<p>A schematic diagram of the model for pitting corrosion as a function of time.</p>
Full article ">Figure 5
<p>Reinforcement learning (RL) agent and the environment interaction.</p>
Full article ">Figure 6
<p>Transition between two consecutive time steps for a simple Markov Decision Process (MDP).</p>
Full article ">Figure 7
<p>Pseudocode for the implemented Q-learning algorithm.</p>
Full article ">Figure 8
<p>Corrosion depth and length in one full-life evaluation episode by Q-learning.</p>
Full article ">Figure 9
<p>Corrosion depth and length in one full-life evaluation episode for sensitivity scenario one.</p>
Full article ">Figure 10
<p>Corrosion depth and length in one 24-year full-life evaluation episode for sensitivity scenario two.</p>
Full article ">Figure 11
<p>Corrosion depth and length in one 60-year full-life evaluation episode for sensitivity scenario two.</p>
Full article ">Figure 12
<p>Corrosion depth and length in one full-life evaluation episode for sensitivity scenario three.</p>
Full article ">Figure 13
<p>Corrosion depth and length in one full-life evaluation episode for sensitivity scenario four.</p>
Full article ">Figure 14
<p>Corrosion depth and length in one full-life evaluation episode for sensitivity scenario five.</p>
Full article ">
14 pages, 591 KiB  
Letter
A Comparative Analysis of Hybrid Deep Learning Models for Human Activity Recognition
by Saedeh Abbaspour, Faranak Fotouhi, Ali Sedaghatbaf, Hossein Fotouhi, Maryam Vahabi and Maria Linden
Sensors 2020, 20(19), 5707; https://doi.org/10.3390/s20195707 - 7 Oct 2020
Cited by 59 | Viewed by 7024
Abstract
Recent advances in artificial intelligence and machine learning (ML) have led to effective methods and tools for analyzing human behavior. Human Activity Recognition (HAR) is one of the fields that has seen an explosive research interest in the ML community due to its wide range of applications. HAR is one of the most helpful technology tools for supporting the elderly’s daily life and for helping people who suffer from cognitive disorders, Parkinson’s disease, dementia, etc. It is also very useful in areas such as transportation, robotics and sports. Deep learning (DL) is a branch of ML based on complex Artificial Neural Networks (ANNs) that has demonstrated a high level of accuracy and performance in HAR. Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) are two types of DL models widely used in recent years to address the HAR problem. The purpose of this paper is to investigate the effectiveness of their integration in recognizing daily activities, e.g., walking. We analyze four hybrid models that integrate CNNs with four powerful RNNs, i.e., LSTMs, BiLSTMs, GRUs and BiGRUs. The outcomes of our experiments on the PAMAP2 dataset indicate that our proposed hybrid models achieve an outstanding level of performance with respect to several indicative measures, e.g., F-score, accuracy, sensitivity, and specificity. Full article
(This article belongs to the Special Issue Sensors for Activity Recognition)
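The evaluation measures named at the end of the abstract (accuracy, sensitivity, specificity, F-score) reduce to simple confusion-matrix arithmetic. A minimal sketch for one activity class treated as positive; the label vectors are made-up examples, not PAMAP2 results.

```python
# Confusion-matrix based evaluation measures for one activity class
# treated as "positive" (e.g. walking). Label vectors are invented.

def binary_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0   # a.k.a. recall
    specificity = tn / (tn + fp) if tn + fp else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f_score = (2 * precision * sensitivity / (precision + sensitivity)
               if precision + sensitivity else 0.0)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "f_score": f_score}

# ten sliding windows: 1 = walking, 0 = any other activity
m = binary_metrics([1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
                   [1, 1, 1, 0, 0, 0, 0, 0, 1, 0])
```

In a multi-class HAR setting these per-class values are typically macro-averaged over all activities.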
Figure 1
<p>A schematic view of the analysis process.</p>
Full article ">Figure 2
<p>The architecture of the HAR system.</p>
Full article ">Figure 3
<p>The loss value of the hybrid models.</p>
Full article ">
20 pages, 1353 KiB  
Article
Accuracy–Power Controllable LiDAR Sensor System with 3D Object Recognition for Autonomous Vehicle
by Sanghoon Lee, Dongkyu Lee, Pyung Choi and Daejin Park
Sensors 2020, 20(19), 5706; https://doi.org/10.3390/s20195706 - 7 Oct 2020
Cited by 31 | Viewed by 7873
Abstract
Light detection and ranging (LiDAR) sensors help autonomous vehicles detect the surrounding environment and the exact distance to an object’s position. Conventional LiDAR sensors require a certain amount of power because they detect objects by transmitting lasers at a regular interval according to a horizontal angular resolution (HAR). However, because this continuous, inefficient power consumption severely burdens autonomous and electric vehicles running on battery power, power consumption efficiency needs to be improved. In this paper, we propose algorithms that address the inefficient power consumption of conventional LiDAR sensors and reduce power consumption in two ways: (a) controlling the HAR to vary the laser transmission period (TP) of a laser diode (LD) depending on the vehicle’s speed and (b) reducing the static power consumption using a sleep mode, depending on the surrounding environment. The proposed LiDAR sensor with the HAR control algorithm reduces the power consumption of the LD by 6.92% to 32.43% depending on the vehicle’s speed, compared to the maximum number of laser transmissions (Nx.max). The sleep mode with a surrounding environment-sensing algorithm reduces the power consumption by 61.09%. The algorithm of the proposed LiDAR sensor was tested on a commercial processor chip, and the integrated processor was designed as an IC using the Global Foundries 55 nm CMOS process. Full article
(This article belongs to the Section Intelligent Sensors)
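The HAR-control idea in (a) can be illustrated with back-of-the-envelope arithmetic: for a fixed scan-cycle time, coarsening the angular resolution at higher speed lowers the per-cycle transmission count Nx and stretches the period TP. The field of view, cycle time, and speed-to-HAR mapping below are assumptions chosen only to reproduce the order of magnitude of the pulse counts in Figure 15, not the paper's calibrated design.

```python
# Speed-dependent horizontal angular resolution (HAR): with a fixed scan
# cycle, a coarser HAR at higher speed means fewer transmissions Nx per
# cycle and a longer transmission period TP. HFOV_DEG, CYCLE_S, and the
# speed thresholds are assumptions, not the paper's calibrated values.

HFOV_DEG = 145.0        # assumed horizontal field of view, degrees
CYCLE_S = 6.71e-3       # assumed fixed scan-cycle time, seconds

def har_for_speed(speed_kmh):
    """Toy mapping: faster vehicle -> coarser angular resolution (degrees)."""
    if speed_kmh < 30:
        return 0.25
    if speed_kmh < 60:
        return 0.35
    return 0.50

def laser_schedule(speed_kmh):
    """Transmissions per cycle (Nx) and transmission period (TP)."""
    n_x = int(HFOV_DEG / har_for_speed(speed_kmh))
    t_p = CYCLE_S / n_x
    return n_x, t_p

n_slow, tp_slow = laser_schedule(20)   # dense scan at low speed
n_fast, tp_fast = laser_schedule(80)   # sparse scan at high speed
pulse_saving = 1 - n_fast / n_slow     # relative reduction in LD pulses
```

With these assumed numbers the schedule spans the 580-pulse / 11.57 μs and 290-pulse / 23.14 μs extremes shown in Figure 15, and halving the pulse count halves the laser-diode duty per cycle.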
Figure 1
<p>Time-of-flight (ToF) of laser.</p>
Full article ">Figure 2
<p>Structure and operation of conventional light detection and ranging (LiDAR) sensor.</p>
Full article ">Figure 3
<p>Block diagram of multi-channel scanning LiDAR sensor.</p>
Full article ">Figure 4
<p>Time domain of time-to-digital converter (TDC).</p>
Full article ">Figure 5
<p>Power consumption of LiDAR sensor.</p>
Full article ">Figure 6
<p>Power consumption of TDC.</p>
Full article ">Figure 7
<p>Relation of power consumption and accuracy.</p>
Full article ">Figure 8
<p>Structure of the proposed LiDAR sensor’s sensing.</p>
Full article ">Figure 9
<p>Horizontal field of view (HFoV) of LiDAR sensor.</p>
Full article ">Figure 10
<p>Point cloud of LiDAR sensor.</p>
Full article ">Figure 11
<p>Power consumption of speed detection-based LiDAR sensor.</p>
Full article ">Figure 12
<p>Power consumption of environment sensing-based LiDAR sensor.</p>
Full article ">Figure 13
<p>Time domain of sleep mode.</p>
Full article ">Figure 14
<p>Test environment and implemented LiDAR system. (<b>a</b>) Test environment for LiDAR evaluation (bicycle, person, and car are detected). (<b>b</b>) Power consumption measurement setup for evaluating implemented LiDAR system.</p>
Full article ">Figure 15
<p>Comparison of <math display="inline"><semantics> <msub> <mi>T</mi> <mi>P</mi> </msub> </semantics></math>. (<b>a</b>) 580 laser transmissions at 11.57 <math display="inline"><semantics> <mi mathvariant="sans-serif">μ</mi> </semantics></math> s intervals. (<b>b</b>) 483 laser transmissions at 13.89 <math display="inline"><semantics> <mi mathvariant="sans-serif">μ</mi> </semantics></math> s intervals. (<b>c</b>) 414 laser transmissions at 16.21 <math display="inline"><semantics> <mi mathvariant="sans-serif">μ</mi> </semantics></math> s intervals. (<b>d</b>) 362 laser transmissions at 18.54 <math display="inline"><semantics> <mi mathvariant="sans-serif">μ</mi> </semantics></math> s intervals. (<b>e</b>) 322 laser transmissions at 20.84 <math display="inline"><semantics> <mi mathvariant="sans-serif">μ</mi> </semantics></math> s intervals. (<b>f</b>) 290 laser transmissions at 23.14 <math display="inline"><semantics> <mi mathvariant="sans-serif">μ</mi> </semantics></math> s intervals.</p>
Full article ">Figure 16
<p>Comparison of <math display="inline"><semantics> <msub> <mi>T</mi> <mi>P</mi> </msub> </semantics></math> in normal and sleep mode. (<b>a</b>) Laser transmissions of Nx every cycle in normal mode. (<b>b</b>) Laser transmissions of 290 times for only 1-cycle in 5-cycles in sleep mode.</p>
Full article ">Figure 17
<p>Comparison of power consumption depending on <math display="inline"><semantics> <msub> <mi>N</mi> <mi>x</mi> </msub> </semantics></math>. (<b>a</b>) Average power consumption graph of 7.442 W in 580 laser transmissions. (<b>b</b>) Average power consumption graph of 7.409 W in 483 laser transmissions. (<b>c</b>) Average power consumption graph of 7.347 W in 414 laser transmissions. (<b>d</b>) Average power consumption graph of 7.308 W in 362 laser transmissions. (<b>e</b>) Average power consumption graph of 7.289 W in 322 laser transmissions. (<b>f</b>) Average power consumption graph of 7.154 W in 290 laser transmissions.</p>
Full article ">Figure 18
<p>Designed microprocessor (MP) chip.</p>
Full article ">
21 pages, 550 KiB  
Article
Does the Position of Foot-Mounted IMU Sensors Influence the Accuracy of Spatio-Temporal Parameters in Endurance Running?
by Markus Zrenner, Arne Küderle, Nils Roth, Ulf Jensen, Burkhard Dümler and Bjoern M. Eskofier
Sensors 2020, 20(19), 5705; https://doi.org/10.3390/s20195705 - 7 Oct 2020
Cited by 25 | Viewed by 5774
Abstract
Wearable sensor technology already has a great impact on the endurance running community. Smartwatches and heart rate monitors are heavily used to evaluate runners’ performance and monitor their training progress. Additionally, foot-mounted inertial measurement units (IMUs) have drawn the attention of sport scientists due to the possibility to monitor biomechanically relevant spatio-temporal parameters outside the lab in real-world environments. Researchers have developed and investigated algorithms to extract various features using IMU data from different sensor positions on the foot. In this work, we evaluate whether the position of IMUs mounted to running shoes has an impact on the accuracy of different spatio-temporal parameters. We compare both the raw data of the IMUs at different sensor positions and the accuracy of six endurance running-related parameters. We contribute a study with 29 subjects wearing running shoes equipped with four IMUs on both the left and the right shoe and a motion capture system as ground truth. The results show that the IMUs measure different raw data depending on their position on the foot and that the accuracy of the spatio-temporal parameters depends on the sensor position. We recommend integrating IMU sensors in a cavity in the sole of a running shoe under the foot’s arch, because the raw data of this sensor position is best suited for the reconstruction of the foot trajectory during a stride. Full article
(This article belongs to the Special Issue Technologies for Sports Engineering and Analytics)
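The foot-trajectory reconstruction that motivates the sensor-position recommendation (see also Figures A2 to A4) integrates gravity-corrected acceleration to velocity, enforces zero velocity at both midstances by linear dedrifting, and integrates again for position. A one-dimensional sketch with a synthetic stride; the sampling rate, sensor bias, and acceleration profile are invented for illustration.

```python
# Zero-velocity-update (ZUPT) style stride reconstruction in one dimension.
# Sampling rate, bias, and the piecewise acceleration profile are invented.

def integrate(samples, dt):
    """Cumulative trapezoidal integration starting at zero."""
    out = [0.0]
    for a0, a1 in zip(samples, samples[1:]):
        out.append(out[-1] + 0.5 * (a0 + a1) * dt)
    return out

def dedrifted_stride(acc, dt):
    """Integrate acceleration, remove linear velocity drift, integrate again."""
    v = integrate(acc, dt)
    drift = v[-1]                  # residual velocity at the second midstance
    n = len(v) - 1
    v = [vi - drift * (i / n) for i, vi in enumerate(v)]  # linear dedrifting
    return v, integrate(v, dt)

DT = 0.005                         # assumed 200 Hz sampling
# toy forward acceleration: push-off, cruise, braking, plus a sensor bias
acc = [4.0] * 40 + [0.0] * 80 + [-4.0] * 40
acc = [a + 0.05 for a in acc]      # constant bias that causes velocity drift
v, s = dedrifted_stride(acc, DT)
stride_length = s[-1]              # metres, for this synthetic stride
```

The dedrifting step forces the velocity to be exactly zero at both midstances, so the constant accelerometer bias no longer accumulates quadratically into the position estimate.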
Figure 1
<p>Visualization of the running gait cycle.</p>
Full article ">Figure 2
<p>Visualization of sole angle and range of motion.</p>
Full article ">Figure 3
<p>Visualization of sensor positions on the running shoes, the global coordinate system <math display="inline"><semantics> <mrow> <mo>(</mo> <msub> <mi>x</mi> <mi>g</mi> </msub> <mo>,</mo> <msub> <mi>y</mi> <mi>g</mi> </msub> <mo>,</mo> <msub> <mi>z</mi> <mi>g</mi> </msub> <mo>)</mo> </mrow> </semantics></math>, the shoe coordinate system <math display="inline"><semantics> <mrow> <mo>(</mo> <msub> <mi>x</mi> <mi>s</mi> </msub> <mo>,</mo> <msub> <mi>y</mi> <mi>s</mi> </msub> <mo>,</mo> <msub> <mi>z</mi> <mi>s</mi> </msub> <mo>)</mo> </mrow> </semantics></math>, and the individual sensor coordinate systems. When the foot is flat on the ground, the global and the shoe coordinate system are aligned.</p>
Full article ">Figure 4
<p>Visualization of the functional calibration procedure. The first part of the functional calibration consisted of standing still with the foot flat on the ground in order to measure gravity. During the second part the subjects rotated their feet on a balance board to compute the medial/lateral axis using a principal component analysis.</p>
Full article ">Figure 5
<p>Exemplary IMU data of one stride segmented from IC to IC for the four different sensor positions.</p>
Full article ">Figure 6
<p>Visualization of the stride segmentation for the cavity sensor using the gyroscope signal in the sagittal plane <math display="inline"><semantics> <mrow> <msub> <mi>ω</mi> <mi>x</mi> </msub> <mrow> <mo>[</mo> <mi>t</mi> <mo>]</mo> </mrow> </mrow> </semantics></math>. The fiducial points at swing phase <math display="inline"><semantics> <msub> <mi>n</mi> <mrow> <mi>S</mi> <mi>P</mi> </mrow> </msub> </semantics></math> are local minima of the angular rate in the sagittal plane. The index <math display="inline"><semantics> <msub> <mi>n</mi> <mrow> <mi>I</mi> <mi>C</mi> </mrow> </msub> </semantics></math> indicates the index of IC, which corresponds to the bias corrected local maximum after <math display="inline"><semantics> <msub> <mi>n</mi> <mrow> <mi>S</mi> <mi>P</mi> </mrow> </msub> </semantics></math>. The MS event <math display="inline"><semantics> <msub> <mi>n</mi> <mrow> <mi>M</mi> <mi>S</mi> </mrow> </msub> </semantics></math> is at the minimum of the gyroscopic energy. The TO event at <math display="inline"><semantics> <msub> <mi>n</mi> <mrow> <mi>T</mi> <mi>O</mi> </mrow> </msub> </semantics></math> is based on the second local maxima and a bias correction.</p>
Full article ">Figure 7
<p>Visualization of angle computation for a sample stride from the cavity sensor. The angles are depicted from <math display="inline"><semantics> <msub> <mi>n</mi> <mrow> <mi>I</mi> <mi>C</mi> </mrow> </msub> </semantics></math> (<math display="inline"><semantics> <mrow> <mi>t</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> s) to <math display="inline"><semantics> <msub> <mi>n</mi> <mrow> <mi>T</mi> <mi>O</mi> </mrow> </msub> </semantics></math> (<math display="inline"><semantics> <mrow> <mi>t</mi> <mo>=</mo> <mn>0.32</mn> </mrow> </semantics></math> s). The sole angle is defined as the rotation in the sagittal plane between IC and MS. As the orientation is initialized with zero at MS, the sole angle is the angle at <math display="inline"><semantics> <msub> <mi>n</mi> <mrow> <mi>I</mi> <mi>C</mi> </mrow> </msub> </semantics></math>. The range of motion is defined as the difference between the maximum and minimum (red dots) of the angle in the frontal plane during ground contact.</p>
Full article ">Figure 8
<p>Results of the evaluation of the Pearson’s correlation coefficients between the IMU raw signals. Each box visualizes the correlation coefficients between two sensors for all the strides in <span class="html-italic">x</span>, <span class="html-italic">y</span>, and <span class="html-italic">z</span> direction. The box plots also display the median of the correlations (median line), the IQR (box), and the 5 and 95 percentiles (whiskers). The upper plot depicts the correlation of the full strides, the middle plot the correlations during the ground contact phase, and the lower plot the correlations during the swing phase.</p>
Full article ">Figure 9
<p>Visualization of the error for (<b>a</b>) stride length and (<b>b</b>) the acceleration at the zero-velocity update for the four different sensor positions in different speed ranges.</p>
Full article ">Figure A1
<p>Acceleration <math display="inline"><semantics> <mrow> <mover accent="true"> <mi>a</mi> <mo>→</mo> </mover> <mrow> <mo>[</mo> <mi>t</mi> <mo>]</mo> </mrow> </mrow> </semantics></math> and angular rate <math display="inline"><semantics> <mrow> <mover accent="true"> <mi>ω</mi> <mo>→</mo> </mover> <mrow> <mo>[</mo> <mi>t</mi> <mo>]</mo> </mrow> </mrow> </semantics></math> raw data of heel sensor.</p>
Full article ">Figure A2
<p>Visualization of the gravity removal in the acceleration signal for a sample stride of the heel sensor. The upper plot shows the raw acceleration <math display="inline"><semantics> <mrow> <mover accent="true"> <mi>a</mi> <mo>→</mo> </mover> <mrow> <mo>[</mo> <mi>t</mi> <mo>]</mo> </mrow> </mrow> </semantics></math> segmented from MS to MS measured by the accelerometer. The lower plot shows the gravity corrected acceleration signal <math display="inline"><semantics> <mrow> <msub> <mover accent="true"> <mi>a</mi> <mo>→</mo> </mover> <mrow> <mi>g</mi> <mi>c</mi> </mrow> </msub> <mrow> <mo>[</mo> <mi>t</mi> <mo>]</mo> </mrow> </mrow> </semantics></math> after rotating the raw acceleration by the quaternion sequence <math display="inline"><semantics> <mrow> <mi mathvariant="bold">q</mi> <mo>[</mo> <mi>n</mi> <mo>]</mo> </mrow> </semantics></math> and removing gravity from the rotated signal. After the gravity removal, both the <span class="html-italic">z</span>-components of the acceleration at the first midstance (<math display="inline"><semantics> <mrow> <mi>t</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math> s) and the second midstance (<math display="inline"><semantics> <mrow> <mi>t</mi> <mo>=</mo> <mn>0.81</mn> </mrow> </semantics></math> s) have values close to zero.</p>
Full article ">Figure A3
<p>Visualization of the dedrifting of the velocity after the first integration of the acceleration signal for a sample stride of the heel sensor. The upper plot shows the velocity <math display="inline"><semantics> <mrow> <mover accent="true"> <mi>v</mi> <mo>→</mo> </mover> <mrow> <mo>[</mo> <mi>t</mi> <mo>]</mo> </mrow> </mrow> </semantics></math> before dedrifting. This signal displays that the velocity at the second midstance (<math display="inline"><semantics> <mrow> <mi>t</mi> <mo>=</mo> <mn>0.81</mn> </mrow> </semantics></math> s) is not zero. We enforce the velocity to be zero by dedrifing the velocity using a linear dedrifting function. The lower plot shows the velocity <math display="inline"><semantics> <mrow> <msub> <mover accent="true"> <mi>v</mi> <mo>→</mo> </mover> <mrow> <mi>d</mi> <mi>e</mi> <mi>d</mi> <mi>r</mi> <mi>i</mi> <mi>f</mi> <mi>t</mi> <mi>e</mi> <mi>d</mi> </mrow> </msub> <mrow> <mo>[</mo> <mi>t</mi> <mo>]</mo> </mrow> </mrow> </semantics></math> after dedrifting. Now, the velocity at the second MS is zero in all directions.</p>
Full article ">Figure A4
<p>Visualization of the trajectory for a sample stride of the heel sensor. The upper plot shows the orientation <math display="inline"><semantics> <mover accent="true"> <mrow> <mi>α</mi> <mo>[</mo> <mi>t</mi> <mo>]</mo> </mrow> <mo stretchy="false">→</mo> </mover> </semantics></math> obtained by the quaternion based forward integration after converting the quaternions back to their angle representation. The lower plot shows the translation <math display="inline"><semantics> <mrow> <mover accent="true"> <mi>s</mi> <mo stretchy="false">→</mo> </mover> <mrow> <mo>[</mo> <mi>t</mi> <mo>]</mo> </mrow> </mrow> </semantics></math> obtained by dedrifted double integration of the gravity corrected acceleration <math display="inline"><semantics> <mrow> <msub> <mover accent="true"> <mi>a</mi> <mo stretchy="false">→</mo> </mover> <mrow> <mi>g</mi> <mi>c</mi> </mrow> </msub> <mrow> <mo>[</mo> <mi>t</mi> <mo>]</mo> </mrow> </mrow> </semantics></math>.</p>
Full article ">
9 pages, 7088 KiB  
Letter
Detection of Small Magnetic Fields Using Serial Magnetic Tunnel Junctions with Various Geometrical Characteristics
by Zhenhu Jin, Yupeng Wang, Kosuke Fujiwara, Mikihiko Oogane and Yasuo Ando
Sensors 2020, 20(19), 5704; https://doi.org/10.3390/s20195704 - 7 Oct 2020
Cited by 14 | Viewed by 3629
Abstract
Thanks to their high magnetoresistance and integration capability, magnetic tunnel junction (MTJ)-based magnetoresistive sensors are widely utilized to detect weak, low-frequency magnetic fields in a variety of applications. Low detectivity in MTJs is necessary to obtain a high signal-to-noise ratio when detecting small variations in magnetic fields. We fabricated serial MTJ-based sensors with various junction areas and free-layer electrode aspect ratios. Our investigation showed that their sensitivity and noise power are affected by the MTJ geometry due to the variation in the magnetic shape anisotropy. Their MR curves demonstrated a decrease in sensitivity with an increase in the aspect ratio of the free-layer electrode, and their noise properties showed that MTJs with larger junction areas exhibit lower noise spectral density in the low-frequency region. All of the sensors were able to detect a small AC magnetic field (Hrms = 0.3 Oe at 23 Hz). Among the MTJ sensors we examined, the sensor with a square free layer and large junction area exhibited a high signal-to-noise ratio (4792 ± 646). These results suggest that MTJ geometrical characteristics play a critical role in enhancing the detectivity of MTJ-based sensors. Full article
(This article belongs to the Section Intelligent Sensors)
Figure 1
<p>(a) Magnetic film structure. (b) Schematic diagram of 20 serial MgO-barrier magnetic tunnel junctions (MTJs). (c)–(f) Microscopic images of 20 serial MTJs with different junction areas (length <math display="inline"><semantics> <mo>×</mo> </semantics></math> width = 15 μm <math display="inline"><semantics> <mo>×</mo> </semantics></math> 40 μm; 23 μm <math display="inline"><semantics> <mo>×</mo> </semantics></math> 26 μm; 30 μm <math display="inline"><semantics> <mo>×</mo> </semantics></math> 20 μm; 40 μm <math display="inline"><semantics> <mo>×</mo> </semantics></math> 15 μm) and footprint areas of the free layer (length <math display="inline"><semantics> <mo>×</mo> </semantics></math> width = 50 μm <math display="inline"><semantics> <mrow> <mo>×</mo> </mrow> </semantics></math> 50 μm; 70 μm <math display="inline"><semantics> <mrow> <mo>×</mo> </mrow> </semantics></math> 36 μm; 86 μm <math display="inline"><semantics> <mrow> <mo>×</mo> </mrow> </semantics></math> 29 μm; 100 μm <math display="inline"><semantics> <mrow> <mo>×</mo> </mrow> </semantics></math> 25 μm) with various free layer pattern (FL electrode) aspect ratios. White arrows denote the direction of external magnetic fields.</p>
Full article ">Figure 2
<p>Schematic diagram showing detection of the AC magnetic field using an MTJ sensor.</p>
Full article ">Figure 3
<p>Magnetoresistance transfer curves for MTJ sensor Series A (<b>a</b>–<b>d</b>), B (<b>e</b>–<b>h</b>), and C (<b>i</b>–<b>l</b>) at a bias voltage of 100 mV and zero magnetic field. The aspect ratios of the free layer electrodes ranged from 1 to 4.</p>
Full article ">Figure 4
<p>Dependence of (<b>a</b>) the linear range of the resistance response and (<b>b</b>) sensitivity on the aspect ratio of the free layer that ranged from 1 to 4 at a bias voltage of 100 mV and zero magnetic field. Here, the linear range shows a nonlinearity of 10% FS for each sensor.</p>
Full article ">Figure 5
<p>(<b>a</b>) Noise spectral density <span class="html-italic">S</span><sub>v</sub> as a function of frequency for serial MTJs with various pinned junction areas at a bias voltage of 100 mV under a magnetic field of 0 Oe. (<b>b</b>) Relationships among various geometrical characteristics of serial MTJs and noise spectral density at a certain frequency (23 Hz).</p>
Full article ">Figure 6
<p>(<b>a</b>) Output signals from serial MTJs with same aspect ratio of free layer but different junction area <span class="html-italic">A</span>, at a bias voltage of 100 mV. (<b>b</b>) SNR (<span class="html-italic">S</span><sub>peak</sub>/<span class="html-italic">S</span><sub>background</sub>) from the detection of magnetic field of 0.3 Oe using MTJs with various dimensional characteristics. Aspect ratios of the free layer electrode ranged from 1 to 4.</p>
Full article ">
17 pages, 8640 KiB  
Article
Multibeam Characteristics of a Negative Refractive Index Shaped Lens
by Salbiah Ab Hamid, Nurul Huda Abd Rahman, Yoshihide Yamada, Phan Van Hung and Dinh Nguyen Quoc
Sensors 2020, 20(19), 5703; https://doi.org/10.3390/s20195703 - 7 Oct 2020
Cited by 6 | Viewed by 2988
Abstract
Narrow beamwidth, higher gain and multibeam characteristics are demanded by 5G technology. Array antennas utilized in existing mobile base stations have many drawbacks when operating in upper 5G frequency bands. For example, due to the high-frequency operation, the antenna elements become smaller; thus, in order to provide higher gain, more antenna elements and arrays are required, which makes the feeding network design more complex. The lens antenna is one of the potential candidates to replace the current structure in mobile base stations. Therefore, a negative refractive index shaped lens is proposed to provide high gain and narrow beamwidth using energy conservation and Abbe’s sine principle. The aim of this study is to investigate the multibeam characteristics of a negative refractive index shaped lens in mobile base station applications. In this paper, the feed positions for the multibeam are selected on a circle around the center of the lens, and the accuracy of the feed positions is validated through electromagnetic (EM) simulation. Based on the analysis performed in this study, a negative refractive index shaped lens with a smaller radius and a slimmer profile than the conventional lens is designed, with the additional capability of performing wide-angle beam scanning. Full article
(This article belongs to the Special Issue Antenna Design for 5G and Beyond)
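Abbe's sine condition used in the lens shaping maps a ray leaving the focus at angle theta to an aperture height of f times sin(theta), which is what keeps laterally displaced feeds (the multibeam arrangement) well corrected. A small numerical sketch; the focal length and feed offsets are illustrative assumptions, not the designed lens parameters.

```python
import math

# Abbe's sine condition: a ray leaving the focal point at angle theta must
# cross the aperture at height rho = f * sin(theta). F_MM and the feed
# offsets are illustrative assumptions, not the designed lens values.

F_MM = 60.0                        # assumed equivalent focal length, mm

def aperture_height(theta_deg, f=F_MM):
    """Aperture height prescribed by Abbe's sine condition."""
    return f * math.sin(math.radians(theta_deg))

def scan_angle_deg(offset_mm, f=F_MM):
    """Approximate beam-scan angle for a feed displaced along the focal arc."""
    return math.degrees(math.atan2(offset_mm, f))

# Equal-angle rays map to sine-spaced (not linearly spaced) aperture heights:
heights = [aperture_height(t) for t in (0, 10, 20, 30, 40)]

# Feed positions on a circle around the lens centre give the scan angles:
scans = [scan_angle_deg(d) for d in (0.0, 15.0, 30.0, 45.0, 60.0)]
```

The sine-spaced mapping is the reason equal feed displacements produce nearly equal beam-scan steps over a wide angular range, which is the multibeam property examined in the paper.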
Figure 1
<p>Proposed mobile base station structure.</p>
Full article ">Figure 2
<p>Lens configuration and parameters.</p>
Full article ">Figure 3
<p>Abbe’s sine condition.</p>
Full article ">Figure 4
<p>MATLAB Program.</p>
Full article ">Figure 5
<p>Initial parameters at the lens edge.</p>
Full article ">Figure 6
<p>Energy conservation design.</p>
Full article ">Figure 7
<p>Comparison of aperture distribution.</p>
Full article ">Figure 8
<p>Shaped lens.</p>
Full article ">Figure 9
<p>Rays in and rays out.</p>
Full article ">Figure 10
<p>Abbe’s sine shaped lens.</p>
Full article ">Figure 11
<p>Rays in and rays out.</p>
Full article ">Figure 12
<p>Performance of horn antenna as a feed radiator.</p>
Full article ">Figure 13
<p>Energy conservation shaped lens performance.</p>
Full article ">Figure 14
<p>Feed position arrangement.</p>
Full article ">Figure 15
<p>Antenna gain for scanning angle 0° to 40°.</p>
Full article ">Figure 16
<p>Electric intensity distribution.</p>
Full article ">Figure 17
<p>Electric phase distribution.</p>
Full article ">Figure 18
<p>Abbe’s sine shaped lens performance.</p>
Full article ">Figure 19
<p>Feed position arrangement.</p>
Full article ">Figure 20
<p>Antenna gain for scanning angle 0° to 40°.</p>
Full article ">Figure 21
<p>Electric field distribution during off-focus.</p>
Full article ">Figure 22
<p>Electric intensity distribution.</p>
Full article ">Figure 23
<p>Electric phase distribution.</p>
Full article ">Figure 24
<p>Performance comparison for both types of lens.</p>
Full article ">Figure 25
<p>Feed position analysis arrangement.</p>
Full article ">Figure 26
<p>Antenna gain for all feed position for energy conservation lens.</p>
Full article ">Figure 27
<p>Antenna gain for all feed position for Abbe’s sine lens.</p>
Full article ">
20 pages, 6723 KiB  
Article
GNSS-Based Non-Negative Absolute Ionosphere Total Electron Content, its Spatial Gradients, Time Derivatives and Differential Code Biases: Bounded-Variable Least-Squares and Taylor Series
by Yury Yasyukevich, Anna Mylnikova and Artem Vesnin
Sensors 2020, 20(19), 5702; https://doi.org/10.3390/s20195702 - 7 Oct 2020
Cited by 26 | Viewed by 3955
Abstract
Global navigation satellite systems (GNSS) allow estimating total electron content (TEC). However, calculating absolute ionosphere parameters from GNSS data remains a problem: negative TEC values can appear, and most existing algorithms cannot estimate TEC spatial gradients or TEC time derivatives. We developed an algorithm to recover the absolute non-negative vertical and slant TEC, its derivatives and its gradients, as well as the GNSS equipment differential code biases (DCBs), by using the Taylor series expansion and bounded-variable least-squares. We termed this algorithm TuRBOTEC. Bounded-variable least-squares fitting ensures non-negative values of both slant TEC and vertical TEC. The second-order Taylor series expansion can provide relevant TEC spatial gradients and TEC time derivatives. The technique was validated against independent experimental data over 2014 and the IRI-2012 and IRI-plas models. As TEC sources we used Madrigal maps, CODE (the Center for Orbit Determination in Europe) global ionosphere maps (GIMs), the IONOLAB software, and the SEEMALA-TEC software developed by Dr. Seemala. For the Asian mid-latitudes, TuRBOTEC results agree with the GIM and IONOLAB data (root-mean-square < 3 TECU) but disagree with the SEEMALA-TEC and Madrigal data (root-mean-square > 10 TECU). About 9% of vertical TEC estimates from TuRBOTEC exceed (by more than 1 TECU) those from the same algorithm without constraints. The analysis of TEC spatial gradients showed that at distances of 10–15° in latitude, the TEC estimation error exceeds 10 TECU; longitudinal gradients produce a smaller error over the same distance. Experimental GLObal Navigation Satellite System (GLONASS) DCBs from TuRBOTEC and CODE differ by up to 15 TECU, while GPS DCBs agree. Slant TEC series indicate that the TuRBOTEC data for GLONASS are physically more plausible. Full article
(This article belongs to the Special Issue GNSS Signals and Sensors)
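As an aside on the bounded-variable least-squares (BVLS) idea the abstract mentions, the non-negativity constraint can be illustrated with SciPy's `lsq_linear` on a toy slant-TEC model (the design matrix, mapping factors, and observations below are invented for illustration; this is not the authors' code):

```python
import numpy as np
from scipy.optimize import lsq_linear

# Toy model: each slant-TEC observation = mapping * vertical TEC + DCB.
# Mapping factors and observations are illustrative values in TECU.
mapping = np.array([1.2, 1.5, 2.0, 2.6])
A = np.column_stack([mapping, np.ones_like(mapping)])
observed = np.array([12.1, 15.3, 20.4, 26.2])

# Plain least squares can return a negative vertical TEC in noisy geometry;
# bounding the TEC component at zero keeps the estimate physical.
res = lsq_linear(A, observed, bounds=([0.0, -np.inf], [np.inf, np.inf]))
vtec, dcb = res.x
```

Without the lower bound of zero, noise can drive the fitted TEC negative; the bound keeps the estimate physical, which is the point of the BVLS step.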
Show Figures

Graphical abstract

Figure 1: IONOLAB TEC (a), SEEMALA-TEC (b), Madrigal TEC (c). Grey dots on panels (a,b) show absolute slant TEC from different satellites. Grey dots on panel (c) show the vertical TEC for different cells (50°–54°N, 102°–106°E). Orange dots show the median along the 5° × 5° region, while the orange line shows the spline interpolation regarded as the absolute vertical TEC. Panel (d) shows all the mentioned vertical TECs along with the TEC from CODE GIMs (black line). The data are for 1 January 2014.
Figure 2: Eliminating the phase ambiguity (a), outliers (b), and cycle slips (c). The panels show the numbers of the corresponding GNSS satellites. Red dots mark the uncorrected phase TEC, blue dots the corrected phase TEC, orange dots the uncorrected pseudorange TEC, and grey dots the corrected pseudorange TEC.
Figure 3: Results of simulating the recovery of the absolute vertical TEC (a), the time derivative (b), and the latitudinal gradient (c) for 10 April 2012. The red solid line indicates values recovered using the second-order I_V expansion, the blue dotted line the first-order expansion, and the dot-and-dash black line the zero-order expansion. The solid black line shows the IRI-2012 data.
Figure 4: TEC dependence on elevation for different α. The simulation used the IRI-2012 model. The red solid line shows the IRI-2012 slant TEC data; the red dotted line shows the slant TEC converted from the vertical one using α = 1; the blue solid line uses α = 0.97, the green dotted line α = 0.94, and the black dotted line α = 0.87. The blue dot-and-dash line shows the elevation of the satellite.
Figure 5: Contribution of measurements from individual satellites to the estimates at different instants.
Figure 6: Results of simulating the absolute vertical TEC recovery for different stations on 10 April 2012: (a) mid-latitude IRKJ, α = 0.97; (b) equatorial NTUS, α = 0.87; (c) high-latitude THU2, α = 0.94. The black dotted line shows the TuRBOTEC results, the red line the IRI-2012 vertical TEC data, and the blue line the differences between the two. The green line shows the similar differences for the IRI-plas simulation. The axes for the blue and green lines are on the right; those for the red and black lines are on the left.
Figure 7: Absolute vertical TEC recovered from real GPS/GLONASS data for IRKJ (red line). Panel (a) is for 5 March 2015 (Kp_max = 2.3), panel (b) for 17 March 2015 (Kp_max = 7.7). The dot-and-dash blue line presents the JPL data, the black dotted line the CODE data, and the red solid line the TuRBOTEC values.
Figure 8: Normalized ΔI_V histograms (difference between I_V values from alternative data and TuRBOTEC). (a) GIM CODE vs. TuRBOTEC; (b) Madrigal vs. TuRBOTEC; (c) IONOLAB vs. TuRBOTEC; (d) SEEMALA-TEC vs. TuRBOTEC. Panel (e) shows the distribution of the difference between JPL GIM TEC and CODE GIM TEC; panel (f) compares the techniques with (TuRBOTEC) and without the constraint. The data are for IRKJ over 2014.
Figure 9: Simulation of the TEC latitude (a–c) and longitude (d–f) gradient recovery for IRKJ (a,d), NTUS (b,e), and THU2 (c,f). The black dotted line presents the TuRBOTEC results, the red line the IRI-2012 TEC gradient data, and the blue line the difference between the two. The green line shows the similar differences for the IRI-plas simulation. The axes for the blue and green lines are on the right; those for the red and black lines are on the left.
Figure 10: Errors in TEC estimation at different UT for various distances from IRKJ. ΔI_V is the difference between the model-specified values and the model-recovered data; panels (a) and (b) are for the distance from the station by latitude and longitude, respectively. IRI-2012 simulation as of 10 April 2012 for the IRKJ geometry.
Figure 11: Simulating the TEC time derivative recovery for the IRKJ (a), NTUS (b), and THU2 (c) stations. The black dotted line presents the TuRBOTEC results, the red line the IRI-2012 TEC time derivative data, and the blue line the difference between the two. The green line shows the similar differences for the IRI-plas simulation. The axes for the blue and green lines are on the right; those for the red and black lines are on the left.
Figure 12: GNSS-based TEC time derivative for IRKJ. (a) 5 March 2015 (Kp_max = 2.3); (b) 17 March 2015 (Kp_max = 7.7). The blue dot-and-dash line presents the JPL data, the black dotted line the CODE data, and the red solid line the TuRBOTEC values.
Figure 13: Simulating the DCB recovery from GPS (a) and GLONASS (b). The black triangles are the initial values, the red circles the recovered values, and the blue line the difference between the TuRBOTEC results and the initial values. The data are in TECU.
Figure 14: DCB estimates for GPS (a) and GLONASS (b) satellites from the IRKJ 1 March 2015 data. The red circles show the results of the TuRBOTEC algorithm; the black diamonds present the CODE data.
Figure 15: GNSS absolute slant TEC after least squares without the constraint (black dots) and the TuRBOTEC technique (blue dots).
Figure 16: Influence of intra-day DCB variations on pseudorange TEC (a, blue dots) and vertical TEC estimates (b, black line). The red line on panel (b) shows the phase-without-pseudorange solution. The data are for IRKJ, 12 July 2009.
17 pages, 9984 KiB  
Article
Defect-Induced Gas-Sensing Properties of a Flexible SnS Sensor under UV Illumination at Room Temperature
by Nguyen Manh Hung, Chuong V. Nguyen, Vinaya Kumar Arepalli, Jeha Kim, Nguyen Duc Chinh, Tien Dai Nguyen, Dong-Bum Seo, Eui-Tae Kim, Chunjoong Kim and Dojin Kim
Sensors 2020, 20(19), 5701; https://doi.org/10.3390/s20195701 - 7 Oct 2020
Cited by 15 | Viewed by 4165
Abstract
Tin sulfide (SnS) is known for its effective gas-detecting ability at low temperatures. However, the development of a portable and flexible SnS sensor is hindered by its high resistance, low response, and long recovery time. Like other chalcogenides, the electronic and gas-sensing properties [...] Read more.
Tin sulfide (SnS) is known for its effective gas-detecting ability at low temperatures. However, the development of a portable and flexible SnS sensor is hindered by its high resistance, low response, and long recovery time. Like other chalcogenides, the electronic and gas-sensing properties of SnS strongly depend on its surface defects. Therefore, understanding the effects of its surface defects on its electronic and gas-sensing properties is a key factor in developing low-temperature SnS gas sensors. Herein, using thin SnS films annealed at different temperatures, we demonstrate that SnS exhibits n-type semiconducting behavior upon the appearance of S vacancies. Furthermore, the presence of S vacancies gives the n-type SnS sensor better sensing performance under UV illumination at room temperature (25 °C) than that of a p-type SnS sensor. These results are thoroughly investigated using various experimental analysis techniques and density functional theory calculations. In addition, n-type SnS deposited on a polyimide substrate can be used to fabricate high-stability flexible sensors, which can be further developed for real applications. Full article
(This article belongs to the Section Chemical Sensors)
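For readers unfamiliar with how chemiresistive sensing performance is quantified, one common convention (response as a resistance ratio, recovery time as the time to undo 90% of the gas-induced change) can be sketched as follows; the exponential trace and all values are synthetic, not data from this paper:

```python
import numpy as np

def sensor_response(r_air, r_gas):
    """Chemiresistive response to an oxidizing gas such as NO2. For an
    n-type film the resistance rises under NO2, so one common convention
    is the ratio R_gas / R_air."""
    return r_gas / r_air

def recovery_time(t, resistance, r_air, r_gas, fraction=0.9):
    """Time (from the start of recovery) until the resistance has undone
    `fraction` of the gas-induced change."""
    target = r_gas - fraction * (r_gas - r_air)
    return t[np.argmax(resistance <= target)]  # first sample at/below target

# Synthetic exponential recovery trace (illustrative, not measured data).
t = np.linspace(0.0, 300.0, 301)     # seconds, 1 s sampling
r_air, r_gas = 1.0e6, 5.0e6          # ohms, before/after gas exposure
r = r_air + (r_gas - r_air) * np.exp(-t / 60.0)
```

With these synthetic numbers, the response is 5.0 and the 90% recovery takes a little over two minutes, illustrating why long recovery times are a practical concern.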
Show Figures

Graphical abstract

Figure 1: Illustration of the fabrication of tin sulfide (SnS) thin-film sensors by sputtering: (a) a p-type SnS sensor, and (b) an n-type SnS sensor.
Figure 2: (a–d) SEM surface images of different as-deposited SnS thin films. The insets show cross-sectional SEM images with the corresponding thicknesses (the scale bar represents 100 nm). (e) X-ray diffraction spectra and (f) Raman spectra of as-deposited SnS thin films.
Figure 3: (a–d) AFM surface images of different as-deposited SnS thin films: (a) SnS-30, (b) SnS-50, (c) SnS-80, and (d) SnS-100. (e,f) AFM surface images of (e) SnS-80-H250 and (f) SnS-80-H300.
Figure 4: (a–c) SEM surface images of SnS-80 thin films annealed at different temperatures: (a) 250 °C (SnS-80-H250), (b) 300 °C (SnS-80-H300), and (c) 400 °C (SnS-80-H400). (d) Corresponding XRD and (e) Raman spectra of SnS-80 thin films annealed at different temperatures (250–400 °C) and of the as-deposited SnS-80 thin film. (f) Tauc plots for the as-deposited SnS-80 and SnS-80-H300 thin films. The inset shows the corresponding absorbance curves.
Figure 5: Survey (a) and high-resolution ((b) Sn 3d and (c) S 2p) XPS profiles of SnS-80, SnS-80-H250, and SnS-80-H300.
Figure 6: (a) TEM image of a SnS-80-H300 thin film on a glass substrate. (b) SAED trace of the selected region in (a), demonstrating the polycrystalline structure of the thin film. High-resolution TEM images of different regions (c) without and (d) with pinholes. The inset in (c) shows the distance between (113) lattice plane fringes. (e) Elemental mapping results for a SnS-80-H300 thin film on a glass substrate.
Figure 7: (a) Real-time resistance curves obtained under different NO2 concentrations for as-deposited SnS-80, SnS-80-H250, and SnS-80-H300 sensors. (b) Response curves for various NO2 concentrations of SnS sensors with different thicknesses annealed at 300 °C. (c) Corresponding responses of the sensors as derived from (b). (d) Dynamic response curves for as-deposited SnS thin-film sensors with different thicknesses showing p-type sensing behavior toward high-concentration NO2 gas. All gas-sensing measurements were conducted under UV illumination at RT (25 °C).
Figure 8: Side-view (top) and top-view (bottom) images showing the charge density difference of NO2 molecules adsorbed on various SnS monolayer structures: (a) SnS, (b) SnS-Sn, and (c) SnS-S. The yellow and cyan regions represent areas of electron accumulation and depletion, respectively.
Figure 9: (a) Real-time resistance curves of SnS-80-H300 recorded under different NO2 concentrations and with different bending angles. (b) Effect of the bending angle on the base resistance and response of the sensors, derived from (a). (c) Short-term stability of the SnS-80-H300 sensor over three cycles (top) and corresponding long-term stability (bottom) toward 5 ppm NO2 measured over 60 days. (d) Relative responses of SnS-80-H300 to various target gases, showing the high selectivity of the sensor toward NO2. (e) Real-time response of the SnS-80-H300 sensor to 5 ppm NO2 under different RH conditions. (f) Effect of RH on the base resistance and response of the sensor. All gas-sensing measurements were conducted under UV illumination at RT (25 °C).
11 pages, 659 KiB  
Letter
Interference Spreading through Random Subcarrier Allocation Technique and Its Error Rate Performance in Cognitive Radio Networks
by Amit Kachroo, Adithya Popuri, Mostafa Ibrahim, Ali Imran and Sabit Ekin
Sensors 2020, 20(19), 5700; https://doi.org/10.3390/s20195700 - 7 Oct 2020
Cited by 3 | Viewed by 2470
Abstract
In this letter, we investigate the idea of interference spreading and its effect on bit error rate (BER) performance in a cognitive radio network (CRN). The interference spreading phenomenon arises from the random allocation of subcarriers in an orthogonal frequency division [...] Read more.
In this letter, we investigate the idea of interference spreading and its effect on bit error rate (BER) performance in a cognitive radio network (CRN). The interference spreading phenomenon arises from the random allocation of subcarriers in an orthogonal frequency division multiplexing (OFDM)-based CRN without any spectrum-sensing mechanism. The CRN assumed in this work is of underlay configuration, where the frequency bands are accessed concurrently by both primary users (PUs) and secondary users (SUs). With random allocation, subcarrier collisions occur among the carriers of PUs and SUs, leading to interference among subcarriers. The interference caused by subcarrier collisions spreads across the subcarriers of multiple PUs rather than falling on an individual PU, thereby avoiding a high BER for any single PU. Theoretical and simulated signal to interference and noise ratio (SINR) for the collision and no-collision cases are validated for M-quadrature amplitude modulation (M-QAM) techniques. Similarly, theoretical BER expressions are derived and compared for M-QAM modulation orders under Rayleigh fading channel conditions. The BERs for different M-QAM modulation orders are compared, and the relationship of the average BER with the interference temperature is explored further. Full article
(This article belongs to the Special Issue Advances in Cognitive Radio Networks)
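The letter's comparison of simulated and theoretical BER under Rayleigh fading can be illustrated, for the simplest M-QAM order and without the CRN interference model, by a generic Monte Carlo check against the textbook closed form (this is a standard exercise, not the authors' derivation):

```python
import numpy as np

rng = np.random.default_rng(0)

def qpsk_ber_rayleigh(snr_db, n_bits=200_000):
    """Monte Carlo BER of Gray-coded 4-QAM (QPSK) over flat Rayleigh fading
    with perfect channel knowledge; snr_db is the symbol SNR Es/N0."""
    snr = 10.0 ** (snr_db / 10.0)
    bits = rng.integers(0, 2, size=(n_bits // 2, 2))
    sym = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)
    n = len(sym)
    h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2 * snr)
    z = (h * sym + noise) / h                      # zero-forcing equalization
    bits_hat = np.stack([z.real < 0, z.imag < 0], axis=1).astype(int)
    return float(np.mean(bits_hat != bits))

def qpsk_ber_theory(snr_db):
    """Closed-form QPSK BER over Rayleigh fading: 0.5*(1 - sqrt(g/(1+g))),
    where g is the per-bit SNR, Eb/N0 = (Es/N0)/2."""
    g = 10.0 ** (snr_db / 10.0) / 2.0
    return 0.5 * (1.0 - np.sqrt(g / (1.0 + g)))
```

At moderate SNR the two agree to within Monte Carlo noise, which is the kind of validation the letter performs for its collision and no-collision SINR expressions.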
Show Figures

Figure 1: Interference spreading in an OFDM-based cognitive radio network, showing subcarrier collisions and no subcarrier collisions.
Figure 2: A cognitive radio network (CRN) with a primary base station (PBS) and primary users (P-PUs), and a secondary base station (SBS) with secondary users (S-SUs).
Figure 3: Theoretical and simulated SNR in the no-collision case with (a) p = 10 dB and q = 1 dB (i.e., p > q) and (b) q = 10 dB and p = 1 dB (i.e., q > p).
Figure 4: BER of M-QAM for the no-collision case with IT (q = 10 dB).
Figure 5: Theoretical and simulated signal to interference and noise ratio (SINR) in the collision case with (a) p = 10 dB and q = 1 dB (i.e., p > q) and (b) p = 1 dB and q = 10 dB (i.e., q > p).
Figure 6: Bit error rate (BER) of M-quadrature amplitude modulation (M-QAM) for the collision case with q = 17 dB.
Figure 7: Comparison of the average BER for 16-QAM with different values of the interference temperature (q = 10, 20, 30 dB).
Figure 8: Mean BER of M-QAM with M = 4, 16, 64, and 256 and q = 17 dB.
28 pages, 1808 KiB  
Review
Vital Sign Monitoring in Car Seats Based on Electrocardiography, Ballistocardiography and Seismocardiography: A Review
by Michaela Sidikova, Radek Martinek, Aleksandra Kawala-Sterniuk, Martina Ladrova, Rene Jaros, Lukas Danys and Petr Simonik
Sensors 2020, 20(19), 5699; https://doi.org/10.3390/s20195699 - 6 Oct 2020
Cited by 30 | Viewed by 8214
Abstract
This paper presents a thorough summary of vital function measuring methods in vehicles. It summarizes and compares existing methods integrated into car seats, including capacitive electrocardiography (cECG) and the mechanical motion analyses [...] Read more.
This paper presents a thorough summary of vital function measuring methods in vehicles. It summarizes and compares existing methods integrated into car seats, including capacitive electrocardiography (cECG) and the mechanical motion analyses ballistocardiography (BCG) and seismocardiography (SCG). In addition, a comprehensive overview of other methods of vital sign monitoring, such as camera-based systems or steering wheel sensors, is presented. Furthermore, this work contains a thorough background study on advanced signal processing methods and their potential application to vital sign monitoring in cars, which is prone to various disturbances and artifacts that have to be eliminated. Full article
(This article belongs to the Section Biomedical Sensors)
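As a sketch of the signal processing discussed in the review, separating a cardiac component from a seat signal typically starts with band-pass filtering followed by peak detection; the trace below is synthetic and the band edges are illustrative choices, not values from any of the reviewed systems:

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 250.0                                  # seat-sensor sampling rate, Hz
t = np.arange(0.0, 30.0, 1.0 / fs)

# Synthetic seat signal: a 1.2 Hz cardiac component (72 bpm) buried in a
# much larger 0.25 Hz respiration component plus broadband noise.
rng = np.random.default_rng(1)
raw = (0.2 * np.sin(2 * np.pi * 1.2 * t)
       + 1.0 * np.sin(2 * np.pi * 0.25 * t)
       + 0.1 * rng.standard_normal(t.size))

# Band-pass 0.7-3 Hz: keeps the cardiac band, rejects respiration and drift.
b, a = butter(4, [0.7 / (fs / 2), 3.0 / (fs / 2)], btype="bandpass")
cardiac = filtfilt(b, a, raw)

# Beats = sufficiently tall peaks at least 0.4 s apart; rate from spacing.
peaks, _ = find_peaks(cardiac, height=0.1, distance=int(fs * 0.4))
bpm = 60.0 * (len(peaks) - 1) / ((peaks[-1] - peaks[0]) / fs)
```

Real in-car signals also contain motion artifacts far larger than this noise floor, which is why the review devotes so much attention to advanced artifact-removal methods beyond simple filtering.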
Show Figures

Figure 1: Number of drivers who died while driving due to health reasons, according to statistics from the Police of the Czech Republic.
Figure 2: The incidence of diseases having a major impact.
Figure 3: Location of sensors in the car seat and their signals.
Figure 4: Frequency ranges (visible, near-infrared, far-infrared) usable for optical monitoring techniques.
Figure 5: The locations for the cameras. Locations can be distinguished by the angle of attack ρ.
Figure 6: Noisy vital signal obtained by a conventional algorithm during a motion period.
Figure 7: Overview of signal processing methods and the car seat: (a) BCG signal processing and (b) SCG signal processing.
15 pages, 4514 KiB  
Article
A Pilot Study of the Impact of Microwave Ablation on the Dielectric Properties of Breast Tissue
by Luz Maria Neira, R. Owen Mays, James F. Sawicki, Amanda Schulman, Josephine Harter, Lee G. Wilke, Nader Behdad, Barry D. Van Veen and Susan C. Hagness
Sensors 2020, 20(19), 5698; https://doi.org/10.3390/s20195698 - 6 Oct 2020
Cited by 3 | Viewed by 2994
Abstract
Percutaneous microwave ablation (MWA) is a promising technology for patients with breast cancer, as it may help treat individuals who have less aggressive cancers or do not respond to targeted therapies in the neoadjuvant or pre-surgical setting. In this study, we investigate changes [...] Read more.
Percutaneous microwave ablation (MWA) is a promising technology for patients with breast cancer, as it may help treat individuals who have less aggressive cancers or do not respond to targeted therapies in the neoadjuvant or pre-surgical setting. In this study, we investigate changes to the microwave dielectric properties of breast tissue that are induced by MWA. While similar changes have been characterized for relatively homogeneous tissues, such as liver, those prior results are not directly translatable to breast tissue because of the extreme tissue heterogeneity present in the breast. This study was motivated, in part, by the expectation that the changes in the dielectric properties of the microwave antenna's operating environment depend on the tissue composition of the ablation target, which includes not only the tumor but also its margins. Accordingly, this target comprises a heterogeneous mix of malignant, healthy glandular, and adipose tissue. Therefore, knowledge of the impact of MWA on breast dielectric properties is essential for the successful development of MWA systems for breast cancer. We performed ablations in 14 human ex-vivo prophylactic mastectomy specimens from surgeries conducted at the UW Hospital and monitored the temperature in the vicinity of the MWA antenna during ablation. After ablation, we measured the dielectric properties of the tissue and analyzed the tissue samples to determine both the tissue composition and the extent of damage due to the ablation. We observed that MWA induced cell damage across all tissue compositions, and found that the microwave frequency-dependent relative permittivity and conductivity of damaged tissue are lower than those of healthy tissue, especially for tissue with high fibroglandular content. The results provide information for future developments of breast MWA systems. Full article
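Tissue dielectric spectra like those measured here are commonly parameterized with a single-pole Cole-Cole model; the sketch below uses illustrative parameter values, not the fits reported in this paper:

```python
import numpy as np

EPS0 = 8.854e-12   # vacuum permittivity, F/m

def cole_cole(freq_hz, eps_inf, delta_eps, tau, alpha, sigma_s):
    """Single-pole Cole-Cole model of complex relative permittivity,
    a standard parameterization for tissue dielectric spectra."""
    omega = 2.0 * np.pi * freq_hz
    return (eps_inf
            + delta_eps / (1.0 + (1j * omega * tau) ** (1.0 - alpha))
            + sigma_s / (1j * omega * EPS0))

f = np.logspace(8.5, 10.0, 50)                  # ~0.3-10 GHz
eps = cole_cole(f, eps_inf=4.0, delta_eps=40.0, tau=10e-12,
                alpha=0.1, sigma_s=0.4)         # illustrative parameters
rel_permittivity = eps.real                     # dielectric constant
conductivity = -eps.imag * 2.0 * np.pi * f * EPS0   # effective sigma, S/m
```

The model reproduces the qualitative trends reported for tissue: relative permittivity falls with frequency while effective conductivity rises, so ablation-induced changes can be summarized by shifts in a few Cole-Cole parameters.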
Show Figures

Figure 1: Steps of the experimental procedure.
Figure 2: Photograph of the microwave ablation and temperature monitoring configuration. The rigid ablation antenna is inserted horizontally into the mastectomy specimen, while four fiber-optic temperature probes are inserted vertically into the tissue through biopsy needles. The yellow and dark blue color of the specimen is due to ink applied to the specimen surface as part of the standard pathology grossing procedure.
Figure 3: Schematic showing measurement locations relative to the MWA antenna. In this cross-sectional view, the ablation antenna is oriented perpendicular to the plane of the page. The temperature probes are shown as vertical black lines. The tips of the temperature probes are positioned along a horizontal line perpendicular to the longitudinal axis of the antenna and spaced in 5-mm increments away from the antenna. Dielectric spectroscopy measurements and samples for histological analysis were taken from these same temperature measurement locations. The dark region represents an example of ablated tissue.
Figure 4: Data acquired and criteria for exclusion.
Figure 5: Photograph of a sliced tissue specimen post ablation. The orientation here is the same as the plane of Figure 3. Black ink spots mark the locations of the four dielectric measurements. The second dark circle from the left is the hole where the MWA antenna was inserted.
Figure 6: Illustration showing the orientation of the histology section taken at the location of a black ink spot. The 1.5-mm by 5.0-mm region beneath the ink spot marks the area assessed during histological analysis of the section.
Figure 7: 2X magnification of histology slide showing the 1.5-mm by 5.0-mm analysis region.
Figure 8: Mean (solid) and standard deviation range (dotted) of temperatures observed at the following locations as a function of time for the first four minutes of MWA: (a) 5 mm to the left of the antenna, (b) 5 mm to the right of the antenna, (c) 10 mm to the right of the antenna, and (d) 15 mm to the right of the antenna. The discontinuities are due to the availability of measurement data sets for a larger number of tissue specimens during the first 2 min of ablation. Only measurements from the 12 ablations at 100 W are shown.
Figure 9: Histogram showing tissue composition of each sample, as determined via histological analysis. The bars show the number of cases at each measurement location (5 mm to the left and 5/10/15 mm to the right) in each of the following composition groups: 0–30% adipose, 31–84% adipose, and 85–100% adipose. Twelve of the 15 samples in the 85–100% adipose group were 100% adipose.
Figure 10: 20X magnification of histology slide showing an intact glandular lobule (a) and evidence of thermal damage in a glandular lobule (b).
Figure 11: Histogram showing the histological outcomes for thermal damage assessment. The bars show the number of cases at each measurement location exhibiting damage versus not exhibiting damage in the histological analysis. The 12 samples of 100% adipose tissue are not reported here, since their damage state cannot be assessed via histological analysis.
Figure 12: Peak temperature observed at each measurement location versus adipose content of the tissue at that location. The marker style indicates whether damage was or was not observed during the histological analysis.
Figure 13: Median fitted curves for the permittivity (a,c) and conductivity (b,d) of ablated breast tissue (red lines, this study) as compared to healthy breast tissue (gray lines, [15]). The variability bars show the 25th and 75th percentiles. Bottom plots (c,d) provide more detail in the lower range of the y-axis.
13 pages, 14508 KiB  
Article
Clustering-Based Component Fraction Estimation in Solid–Liquid Two-Phase Flow in Dredging Engineering
by Chang Sun, Shihong Yue, Qi Li and Huaxiang Wang
Sensors 2020, 20(19), 5697; https://doi.org/10.3390/s20195697 - 6 Oct 2020
Cited by 7 | Viewed by 1953
Abstract
Component fraction (CF) is one of the most important parameters in multiphase flow. Due to the complexity of solid–liquid two-phase flow, CF estimation has long remained unsolved in both scientific research and industrial applications. Electrical resistance tomography (ERT) is [...] Read more.
Component fraction (CF) is one of the most important parameters in multiphase flow. Due to the complexity of solid–liquid two-phase flow, CF estimation has long remained unsolved in both scientific research and industrial applications. Electrical resistance tomography (ERT) is an advanced conductivity detection technique owing to its low cost, fast response, non-invasiveness, and freedom from radiation. However, when the existing ERT method is used to measure the CF value in solid–liquid two-phase flow in dredging engineering, there are at least three problems: (1) dependence on a reference distribution whose CF value is zero; (2) the detected objects may be too small to be found by ERT; and (3) there is no efficient way to estimate the effect of artifacts in ERT. In this paper, we propose a method based on the clustering technique, in which a fast fuzzy clustering algorithm partitions the ERT image into three clusters that correspond to the liquid phase, the solid phase, and their mixtures and artifacts, respectively. The clustering algorithm does not need any reference distribution in the CF estimation. In the case of small solid objects or artifacts, the CF value can still be effectively computed using prior information. To validate the new method, a group of typical CF estimation experiments in dredging engineering was implemented. The results show that the new method effectively overcomes the limitations of the existing method and provides a practical and more accurate way to estimate CF. Full article
(This article belongs to the Special Issue Selected papers from ISMTMF-2019)
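The fuzzy clustering step can be illustrated with a minimal generic fuzzy c-means on synthetic 1-D ERT pixel values (this is plain FCM, not the paper's fast f-FCM variant; the class means and counts below are invented):

```python
import numpy as np

def fcm(x, c=3, m=2.0, iters=100):
    """Minimal fuzzy c-means on 1-D data (e.g. ERT pixel values)."""
    # Spread the initial centers across the data range via quantiles.
    centers = np.quantile(x, np.linspace(0.1, 0.9, c))
    for _ in range(iters):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))        # u_ik proportional to d_ik^(-2/(m-1))
        u /= u.sum(axis=0)                 # memberships sum to 1 per pixel
        um = u ** m
        centers = um @ x / um.sum(axis=1)  # membership-weighted means
    order = np.argsort(centers)
    return centers[order], u[order]

# Synthetic ERT pixel values for three classes: solid (~0.1),
# mixtures/artifacts (~0.5), liquid (~1.0); purely illustrative.
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(0.1, 0.03, 200),
                         rng.normal(0.5, 0.03, 100),
                         rng.normal(1.0, 0.03, 300)])
centers, u = fcm(pixels)
```

Each pixel receives a fuzzy membership in every cluster rather than a hard label, which is what lets the intermediate "mixture and artifact" cluster be handled separately when the CF is computed.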
Show Figures

Figure 1: Electrical resistance tomography (ERT) sensor and measurement process: (a) ERT electrode plane and detected field; (b) discretized ERT field.
Figure 2: Comparison of two imaging conditions: (a) separated solid and liquid objects; (b) ERT image of (a); (c) statistical histogram of (b); (d) mixed solid and liquid objects; (e) ERT image of (d); (f) statistical histogram of (e).
Figure 3: The three clusters partitioned by the f-FCM algorithm.
Figure 4: The variation of σ(C_u) with the two related variables n_1 and (n_s − n_1): (a) σ(C_u) under various (n_s − n_1) and n_1; (b) relation between σ(C_u) and (n_s − n_1).
Figure 5: Experimental facility in dredging engineering: (a) ERT sensor and measuring meter; (b) solid–liquid flow in the pipe.
Figure 6: The estimated values of CF by CBCF and MG.
23 pages, 3330 KiB  
Review
A Review of Measurement Calibration and Interpretation for Seepage Monitoring by Optical Fiber Distributed Temperature Sensors
by Yaser Ghafoori, Andrej Vidmar, Jaromír Říha and Andrej Kryžanowski
Sensors 2020, 20(19), 5696; https://doi.org/10.3390/s20195696 - 6 Oct 2020
Cited by 29 | Viewed by 5525
Abstract
Seepage flow through embankment dams and their sub-base is a crucial safety concern that can initiate internal erosion of the structure. The thermometric method of seepage monitoring is based on the heat transfer characteristics of soils, as the temperature distribution in earth-filled [...] Read more.
Seepage flow through embankment dams and their sub-base is a crucial safety concern, as it can initiate internal erosion of the structure. The thermometric method of seepage monitoring exploits the heat-transfer characteristics of soils: the temperature distribution in earth-filled structures is influenced by the presence of seepage, so continuous temperature measurements allow seepage flows to be detected. With recent advances in optical fiber temperature sensing, accurate and fast temperature measurements with relatively high spatial resolution have become possible using optical fiber distributed temperature sensors (DTSs). As with any sensor system, the DTS measurements need to be calibrated to obtain precise temperatures. DTS systems calibrate the measurements automatically using an internal thermometer and a reference section; in addition, manual calibration techniques have been developed, which are discussed in this paper. The temperature data do not provide direct information about seepage and therefore require further processing and analysis. Several methods have been developed to interpret the temperature data, localize the seepage, and in some cases estimate the seepage quantity. An efficient DTS application in seepage monitoring depends strongly on the installation approach, the calibration technique, and the interpretation and post-processing of the temperature data. This paper reviews the different techniques for calibrating DTS measurements as well as the methods for interpreting the temperature data. Full article
(This article belongs to the Special Issue Environmental Sensors and Their Applications)
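The Raman-based temperature retrieval the abstract refers to can be illustrated with the widely used single-ended DTS calibration equation, which maps the Stokes/anti-Stokes intensity ratio to temperature along the fiber. The sketch below is illustrative only: the parameter values (`gamma`, `C`, `dalpha`) and the synthetic intensity trace are assumptions, not values from this paper; in a real deployment these parameters are fitted against reference sections (e.g., fiber coils in baths of known temperature), as the review discusses.

```python
import numpy as np

def dts_temperature(stokes, anti_stokes, z, gamma=485.0, C=0.327, dalpha=2.0e-4):
    """Single-ended DTS calibration (standard form):

        T(z) = gamma / (ln(P_S / P_aS) + C - dalpha * z)

    gamma  [K]   -- Raman shift energy expressed as a temperature
    C      [-]   -- lumped instrument constant
    dalpha [1/m] -- differential attenuation between the two bands
    All three values here are illustrative; in practice they are
    fitted from reference sections held at known temperatures.
    """
    stokes = np.asarray(stokes, dtype=float)
    anti_stokes = np.asarray(anti_stokes, dtype=float)
    z = np.asarray(z, dtype=float)
    return gamma / (np.log(stokes / anti_stokes) + C - dalpha * z)

# Synthetic trace: a localized rise in anti-Stokes intensity around
# z = 50 m mimics a warm anomaly such as a leakage path.
z = np.linspace(0.0, 100.0, 201)
stokes = np.ones_like(z)
anti_stokes = 0.26 + 0.01 * np.exp(-((z - 50.0) ** 2) / 20.0)
T = dts_temperature(stokes, anti_stokes, z) - 273.15  # K -> degC
print(f"anomaly near z = {z[np.argmax(T)]:.1f} m, peak T = {T.max():.1f} degC")
```

Seepage interpretation methods such as those reviewed in the paper then operate on traces like `T`, looking for points whose temperature behavior deviates from the surrounding fiber.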
Show Figures

Figure 1
<p>Stokes and anti-Stokes intensity and obtained temperature from the laboratory DTS measurement using the Silixa XT-DTS system.</p>
Figure 2">
Figure 2
<p>The basic components of an optical fiber DTS system. Adapted from [<a href="#B30-sensors-20-05696" class="html-bibr">30</a>]. (Reproduced with permission of the publisher.)</p>
Figure 3
<p>The spatial resolution of the Silixa XT-DTS system.</p>
Figure 4
<p>The loss in the intensity of Stokes and anti-Stokes scattering due to a connection between two types of optical fiber cable.</p>
Figure 5
<p>Temperature measurement with a duplexed single-ended approach and calibration based on two reference points on each side (four reference points in total).</p>
Figure 6
<p>General typology of DTS data calibration.</p>
Figure 7
<p>Monitoring and detection of seepage in an embankment dam.</p>
Figure 8
<p>Singularity detection using a short-term experimental measurement: the measuring point at x = 134.08 m, corresponding to the location of a leakage path, is the singularity, showing a trend of temperature variation different from that of the non-singularity zones.</p>
Figure 9
<p>IRFTA analysis for the optical fiber temperature measurements at the dam downstream toe with three artificial leakage paths, which are localized by arrows [<a href="#B9-sensors-20-05696" class="html-bibr">9</a>]: (<b>a</b>) the signal damping factor; (<b>b</b>) the time lag. The cable is embedded at two different elevations in the downstream toe. The thicker line with triangle marks corresponds to the temperature measurement by the cable embedded at the mid-height of the dam toe, while the other cable is embedded at the bottom level of the toe. (Reproduced with permission of the publisher.)</p>