Sensors, Volume 19, Issue 2 (January-2 2019) – 220 articles

Cover Story (view full-size image): Thrusters play an important role in the motion control of amphibious spherical robots. A thrust model for a new water-jet thruster, based on hydrodynamic analyses, is proposed as a way of achieving accurate motion control. The hydrodynamic characteristics of the new thruster were numerically analyzed using the commercial computational fluid dynamics (CFD) software CFX. The moving reference frame (MRF) technique was utilized to simulate propeller rotation. The basic framework of the thrust model was built according to hydromechanics theory, and its parameters were identified from the results of the hydrodynamic simulation. Compared with related experimental results, the maximum error of the simulation was only 7%, which indicates that the thrust model is precise enough to be used in the motion control of amphibious spherical robots. View this paper.
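The paper's exact thrust model is not reproduced in this listing; as a rough illustration of identifying model parameters from CFD data, the sketch below fits the textbook propeller thrust law T = K_T·ρ·n²·D⁴ to hypothetical simulation samples (all numbers are made up, not the paper's values):

```python
import numpy as np

# Minimal sketch: identify a thrust coefficient K_T from hypothetical CFD
# samples using the standard propeller thrust law T = K_T * rho * n^2 * D^4,
# then predict thrust at a new rotational speed.
rho = 998.0          # water density, kg/m^3
D = 0.04             # propeller diameter, m (illustrative value)

# Hypothetical CFD results: rotational speed (rev/s) vs. simulated thrust (N)
n_samples = np.array([20.0, 40.0, 60.0, 80.0, 100.0])
T_samples = np.array([0.11, 0.45, 1.02, 1.81, 2.83])

# Least-squares fit of K_T (thrust is linear in n^2)
x = rho * n_samples**2 * D**4
K_T = np.dot(x, T_samples) / np.dot(x, x)

n_new = 70.0
print(f"K_T = {K_T:.4f}, predicted thrust at {n_new} rev/s: "
      f"{K_T * rho * n_new**2 * D**4:.3f} N")
```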
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
16 pages, 2995 KiB  
Article
A 4K-Input High-Speed Winner-Take-All (WTA) Circuit with Single-Winner Selection for Change-Driven Vision Sensors
by Fernando Pardo, Càndid Reig, José A. Boluda and Francisco Vegara
Sensors 2019, 19(2), 437; https://doi.org/10.3390/s19020437 - 21 Jan 2019
Cited by 7 | Viewed by 4394
Abstract
Winner-Take-All (WTA) circuits play an important role in applications where a single element must be selected according to its relevance. They have been successfully applied in neural networks and vision sensors. These applications usually require a large number of inputs for the WTA circuit, especially in vision applications, where thousands to millions of pixels may compete to be selected. WTA circuits usually exhibit poor response-time scaling with the number of competitors, and most current WTA implementations are designed to work with fewer than 100 inputs. Another problem related to the large number of inputs is the difficulty of selecting just one winner, since many competitors may differ by less than the WTA resolution. In this paper, a WTA circuit is presented that handles more than four thousand inputs, to the best of our knowledge the largest WTA reported to date, with response times below one microsecond and a guarantee of single-winner selection. This performance is obtained by combining a standard analog WTA circuit with a fast digital single-winner selector at almost no size penalty. This WTA circuit has been successfully employed in the fabrication of a Selective Change-Driven Vision Sensor based on 180 nm CMOS technology. Both simulated and experimental results are presented in the paper, showing that a single pixel event can be selected in just 560 ns, and a multipixel event can be processed in 100 μs. Similar results with a conventional approach would require a camera working at more than 1 Mfps for the single-pixel event detection, and at 10 kfps for the whole multipixel event to be processed. Full article
(This article belongs to the Section Physical Sensors)
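The abstract does not give circuit-level details here; the following behavioral sketch only illustrates the idea of pairing a finite-resolution analog WTA with a digital stage that guarantees a single winner (the resolution value and the tie-breaking rule are assumptions, not the paper's design):

```python
import numpy as np

def single_winner(inputs, resolution=0.01):
    """Behavioral sketch of an analog WTA plus digital single-winner selector.

    The analog stage cannot distinguish inputs closer than `resolution` to
    the maximum; the digital stage then picks exactly one of those
    candidates (here: the lowest index, like a priority encoder).
    """
    inputs = np.asarray(inputs, dtype=float)
    candidates = np.flatnonzero(inputs >= inputs.max() - resolution)
    return int(candidates[0])  # guaranteed single winner

# 4096 competing pixel signals with three near-equal peaks at the top
rng = np.random.default_rng(0)
signals = rng.random(4096)
signals[[100, 2000, 3500]] = signals.max() + 0.005
print("winner index:", single_winner(signals))  # -> 100
```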
Figures:
Figure 1: WTA circuit proposed by Lazzaro.
Figure 2: (a) Cascode WTA [5]. (b) Boosted-cascode WTA [7].
Figure 3: Adjustable comparator for winner identification.
Figure 4: Sensor blocks with the digital single-winner selection circuits and signals.
Figure 5: Normal WTA operation of successive losing pixels.
Figure 6: WTA delay when there is a large change in the illumination of one pixel and when it is promoted to be a winner.
Figure 7: Worst case when there is a clear single winner, it is selected, and all other pixels fight to become the winner.
Figure 8: Illumination differences of 4000 pixels read in a period of 1.12 µs. The event (LED switching on) is introduced after Pixel 500 (0.560 ms).
Figure 9: Reconstructed image showing the illumination level obtained with the fast LED switching experiment in Selective Change Driven (SCD) mode.
Figure 10: Three-dimensional representation of pixel co-ordinates (row and column) over time. Event occurs at 0.560 ms.
Figure 11: Output and LED voltages measured with a digital 1 GHz oscilloscope. Pixel-reading period is 1.12 µs.
Figure 12: Output and LED voltages measured with a digital 1 GHz oscilloscope. Pixel-reading period was 560 ns.
Figure 13: Three-dimensional representation of pixel co-ordinates (row and column) over time. Pixel-reading period was 560 ns. Event occurred after Pixel 500 (0.280 ms).
Figure 14: Illumination differences read with delays of (a) 0 s, (b) 10 s, (c) 20 s, and (d) 40 s after the event.
15 pages, 16126 KiB  
Article
Experimental-Numerical Design and Evaluation of a Vibration Bioreactor Using Piezoelectric Patches
by David Valentín, Charline Roehr, Alexandre Presas, Christian Heiss, Eduard Egusquiza and Wolfram A. Bosbach
Sensors 2019, 19(2), 436; https://doi.org/10.3390/s19020436 - 21 Jan 2019
Cited by 6 | Viewed by 5149
Abstract
In the present study, we propose a method for exposing biological cells to mechanical vibration. The motive for our research was to design a bioreactor prototype in which in-depth in vitro studies of the influence of vibration on cells and their metabolism can be performed. Cancer therapy and antibacterial measures are applications of interest. In addition, questions about the reaction of neurons to vibration are still largely unanswered. In our methodology, we used a piezoelectric patch (PZTp) to induce mechanical vibration in the structure. To control the vibration amplitude, the structure could be excited in different frequency ranges, including resonance and non-resonance conditions. Experimental results show the vibration amplitudes expected for every frequency range tested, as well as the vibration pattern of those excitations. These are essential parameters for quantifying the effect of vibration on cell behavior. Furthermore, a numerical model was validated against the experimental results, predicting those parameters accurately. With the calibrated numerical model, we will study in greater depth the effects of different vibration patterns on the abovementioned cell types. Full article
(This article belongs to the Section Physical Sensors)
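As a reminder of why exciting at resonance versus off-resonance controls the vibration amplitude, here is a textbook single-degree-of-freedom sketch (not the paper's finite-element model; all parameter values are illustrative):

```python
import numpy as np

# Steady-state amplitude of a damped oscillator under harmonic forcing:
# |X| = F0 / sqrt((k - m*w^2)^2 + (c*w)^2).
m, k, zeta = 0.01, 4.0e5, 0.02          # mass (kg), stiffness (N/m), damping ratio
c = 2.0 * zeta * np.sqrt(k * m)         # damping coefficient
F0 = 1.0                                # forcing amplitude (N), illustrative
f_n = np.sqrt(k / m) / (2.0 * np.pi)    # natural frequency, Hz

for f in (0.25 * f_n, f_n, 4.0 * f_n):  # below, at, and above resonance
    w = 2.0 * np.pi * f
    X = F0 / np.sqrt((k - m * w**2) ** 2 + (c * w) ** 2)
    print(f"f = {f:8.1f} Hz -> amplitude {X * 1e6:8.2f} um")
```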
Figures:
Figure 1: Schematic of the planned bioreactor. (a) Cross-section view; (b) assembly of the petri dish.
Figure 2: Schematic diagram of the prototyping methodology.
Figure 3: Disc sketch. (a) Main dimensions of the disc; (b) isometric view of the disc.
Figure 4: (a) Picture of the experimental set-up; (b) detail of the piezoelectric patch (PZTp); (c) detail of the disc.
Figure 5: Time signal of the PZTp excitation and the responses of the laser Doppler vibrometer (LDV) and ultrasound sensor for the three different ranges of excitation tested.
Figure 6: Mesh sensitivity study and optimal mesh selected. Difference obtained from the relationship between the natural frequency in each simulation and the natural frequency with the optimal mesh.
Figure 7: (a) Geometry for the operational deflection shape (ODS); (b) example of a mode-shape of the disc obtained using the ODS.
Figure 8: Autospectra of four points located at different radii of the disc during the experimental modal analysis (EMA).
Figure 9: Root-mean-square (RMS) vibration velocity within the range 0–22 kHz for each point of the disc. R1 to R4, from external radius to internal radius of the disc.
Figure 10: Ultrasound sensor autospectra for different frequency ranges.
Figure 11: Normalized frequency response function (FRF) of four different mode-shapes. Numerical and experimental comparison. (a) (3,0); (b) (7,0); (c) (9,0); (d) (14,0).
12 pages, 5118 KiB  
Article
In-Fiber Collimator-Based Fabry-Perot Interferometer with Enhanced Vibration Sensitivity
by Bin Du, Xizhen Xu, Jun He, Kuikui Guo, Wei Huang, Fengchan Zhang, Min Zhang and Yiping Wang
Sensors 2019, 19(2), 435; https://doi.org/10.3390/s19020435 - 21 Jan 2019
Cited by 27 | Viewed by 6784
Abstract
A simple vibration sensor based on an optical fiber Fabry-Perot interferometer (FPI) with an in-fiber collimator is proposed and demonstrated. The device was fabricated by splicing a quarter-pitch graded index fiber (GIF) to a section of hollow-core fiber (HCF) interposed between single mode fibers (SMFs). The static displacement sensitivity of the FPI with an in-fiber collimator was 5.17 × 10−4 μm−1, whereas the maximum static displacement sensitivity of the device without a collimator was 1.73 × 10−4 μm−1. Moreover, the vibration sensitivity of the FPI with the collimator was 60.22 mV/g at 100 Hz, significantly higher than that of the FPI without a collimator (11.09 mV/g at 100 Hz). The proposed FPI with an in-fiber collimator also exhibited a vibration sensitivity nearly one order of magnitude higher than the device without the collimator at frequencies ranging from 40 to 200 Hz. This low-cost FPI sensor is highly sensitive, robust, and easy to fabricate. It could potentially be used for vibration monitoring in remote and harsh environments. Full article
(This article belongs to the Special Issue Cantilever Sensor)
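For readers unfamiliar with low-finesse FPIs, the following sketch evaluates the standard two-beam reflection model for a 200 μm air cavity, the cavity length used in the paper (the reflectivities and wavelength grid are generic assumptions, not the authors' simulation parameters):

```python
import numpy as np

# Two-beam approximation of a low-finesse fiber FPI reflection spectrum:
# I(lam) = R1 + R2 + 2*sqrt(R1*R2)*cos(4*pi*n*L/lam).
R1 = R2 = 0.04                 # ~4% glass-air Fresnel reflectivity
n, L = 1.0, 200e-6             # air cavity, 200 um length (as in the paper)
lam = np.linspace(1540e-9, 1560e-9, 2000)

phase = 4.0 * np.pi * n * L / lam
I = R1 + R2 + 2.0 * np.sqrt(R1 * R2) * np.cos(phase)

fsr = lam.mean() ** 2 / (2.0 * n * L)   # free spectral range near 1550 nm
print(f"FSR near 1550 nm: {fsr * 1e9:.2f} nm, fringe contrast: "
      f"{(I.max() - I.min()):.3f}")
```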
Figures:
Figure 1: A schematic diagram of the proposed vibration sensor system (TL: tunable laser; FPI: Fabry-Perot interferometer; 1/4 pitch GIF: a quarter-pitch graded index fiber (fiber collimator); SMF: single mode fiber; HCF: hollow-core fiber (air cavity of the Fabry-Perot interferometer); PD: photodiode; PZT: piezoelectric transducer; TIA: transimpedance amplifier; DAQ: data acquisition board; PC: personal computer). Insets (a) and (b) show the FPI sensor structure and the detailed schematic of the FPI sensor with a lateral displacement, respectively.
Figure 2: The working principle of the proposed FPI vibration sensor. Insets: (a) a schematic of the proposed FPI (GIF: graded index fiber; HCF: hollow-core fiber; SMF: single mode fiber); (b) the beam profile on M1.
Figure 3: The fabrication process for the vibration sensor based on an FPI with an in-fiber collimator. (a) Step 1: splicing a segment of GIF to the SMF; (b) Step 2: cleaving the GIF to a remaining length of 200 μm; (c) Step 3: splicing a segment of HCF to the end of the GIF; (d) Step 4: cleaving the HCF to a remaining length of 200 μm; (e) Step 5: splicing a segment of SMF to the end of the HCF; (f) Step 6: cleaving an inclined end face of the SMF with a residual length of 20 mm.
Figure 4: (a–c) Microscope images of an SMF-FPI, a quarter-pitch (245 μm) GIF-FPI, and a half-pitch (490 μm) GIF-FPI with the same FPI cavity length of ~200 μm, respectively. (d) Measured and simulated reflection spectra of the SMF-FPI, the quarter-pitch GIF-FPI, and the half-pitch GIF-FPI. (Simulation parameters: Δ = 0.02, r = 31.25 μm, ω0 = 5 μm (in SMF), λ = 1550 nm, n0 = 1.491, R = 0.04, and L1 = 199.6, 191.8, and 198.8 μm, respectively, for the SMF-FPI, quarter-pitch GIF-FPI, and half-pitch GIF-FPI.)
Figure 5: (a–c) Reflection spectra of an SMF-FPI, a quarter-pitch GIF-FPI, and a half-pitch GIF-FPI with a cavity length of 200 μm at various static displacements; (d) simulation results and static displacement responses of the three devices with the same cavity length of 200 μm. (Simulation parameters: Δ = 0.02, r = 31.25 μm, ω0 = 5 μm, λ = 1550 nm, n0 = 1.491 (GIF), L2 = 10 mm, R = 0.04.)
Figure 6: The time-domain (a) and frequency-domain (b) vibration responses of an SMF-FPI, a quarter-pitch GIF-FPI, and a half-pitch GIF-FPI with a cavity length of 200 μm at 100 Hz.
Figure 7: The linearity of the vibration responses of the SMF-FPI, the quarter-pitch GIF-FPI, and the half-pitch GIF-FPI with the same cavity length of 200 μm at a vibration frequency of 100 Hz.
Figure 8: The frequency responses of the vibration sensors based on the SMF-FPI, the quarter-pitch GIF-FPI, and the half-pitch GIF-FPI with the same cavity length of 200 μm. Vibration frequencies ranging from 40 Hz to 500 Hz were investigated.
20 pages, 1271 KiB  
Article
Motion Plan of Maritime Autonomous Surface Ships by Dynamic Programming for Collision Avoidance and Speed Optimization
by Xiongfei Geng, Yongcai Wang, Ping Wang and Baochen Zhang
Sensors 2019, 19(2), 434; https://doi.org/10.3390/s19020434 - 21 Jan 2019
Cited by 48 | Viewed by 6107
Abstract
Maritime Autonomous Surface Ships (MASS) with advanced guidance, navigation, and control capabilities have attracted great attention in recent years. Sailing safely and efficiently are critical requirements for the autonomous control of MASS. The MASS utilizes the information collected by the radar, camera, and Automatic Identification System (AIS) with which it is equipped. This paper investigates the problem of optimal motion planning for MASS, so that it can accomplish its sailing task early and safely when sailing together with conventional ships. We develop velocity obstacle models for both dynamic and static obstacles to represent the potential conflict-free region around other objects. A greedy interval-based motion-planning algorithm is proposed based on the Velocity Obstacle (VO) model, and we show that the greedy approach may fail to avoid collisions in successive intervals. A way-blocking metric is proposed to evaluate the risk of collision and improve the greedy algorithm. Then, by assuming constant velocities of the surrounding ships, a novel Dynamic Programming (DP) method is proposed to generate the optimal multiple-interval motion plan for MASS. The proposed algorithms are verified by extensive simulations, which show that the DP algorithm provides the lowest collision rate overall and better sailing efficiency than the greedy approaches. Full article
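The basic VO test the abstract builds on can be sketched in a few lines: a candidate velocity is unsafe if the relative velocity points into the collision cone subtended by the other ship's safety circle. This is the generic VO construction, not the paper's full interval-based planner:

```python
import numpy as np

def in_velocity_obstacle(p_a, p_b, v_a, v_b, r_sum):
    """Generic VO test: does the relative velocity of ship A with respect
    to ship B point into the collision cone spanned by the circle of
    radius r_sum (sum of the two safety radii) around B?"""
    d = np.asarray(p_b, float) - np.asarray(p_a, float)
    v_rel = np.asarray(v_a, float) - np.asarray(v_b, float)
    dist = np.linalg.norm(d)
    if dist <= r_sum:
        return True                      # already in conflict
    half_angle = np.arcsin(r_sum / dist) # half-angle of the cone seen from A
    speed = np.linalg.norm(v_rel)
    if speed == 0.0:
        return False
    angle = np.arccos(np.clip(np.dot(v_rel, d) / (speed * dist), -1.0, 1.0))
    return angle <= half_angle

# Ship A heading straight at ship B -> this candidate velocity is unsafe
print(in_velocity_obstacle(p_a=(0, 0), p_b=(100, 0),
                           v_a=(5, 0), v_b=(0, 0), r_sum=20.0))  # True
```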
Figures:
Figure 1: Illustration of the Velocity Obstacle (VO) model. (a) Ship A is a Maritime Autonomous Surface Ship (MASS) and ship B is a conventional ship, moving with velocity vectors v_A and v_B, respectively; (b) represents the geometric shapes of the two ships by circles; (c) represents the collision region by relative velocity: if the relative velocity vector v_AB falls into the shaded area, the MASS A may collide with the ship B; (d) represents the collision region by the absolute velocity vector of A, i.e., if the velocity vector v_A ends in the shaded area, the MASS A may collide with B; (e) the velocity obstacle VO_B for a short time interval.
Figure 2: (a) VO of static objects; (b) illustration of the VO model of multiple mobile objects. B_1 and B_2 are two objects. The shaded areas show VO_B1 and VO_B2, respectively.
Figure 3: The red points show the discretized possible acceleration vectors at time t that satisfy the motion constraints.
Figure 4: Illustration of the VO model of multiple mobile objects. B_1 and B_2 are two mobile objects. The shaded areas show VO_B1 and VO_B2, respectively.
Figure 5: The greedy motion plan in one step may lead to a collision in the next step.
Figure 6: An illustration of the multiple-interval dynamic programming approach for motion planning.
Figure 7: Four snapshots in the simulation when the MASS started from (0, 0) with destination (1000, 1000). The black circles are collision regions with other ships that should be avoided. The red cones show the VOs.
Figure 8: Collision probability vs. ship density when λ3 = 10.
Figure 9: Collision probability vs. ship density when λ3 = 100.
Figure 10: Collision probability vs. maximum velocity of other ships when λ3 = 10.
Figure 11: Collision probability vs. maximum velocity of other ships when λ3 = 100.
Figure 12: Time to destination vs. density of ships when λ3 = 10.
Figure 13: Time to destination vs. density of ships when λ3 = 100.
12 pages, 6624 KiB  
Article
Experimental Analysis of Bragg Reflection Peak Splitting in Gratings Fabricated Using a Multiple Order Phase Mask
by Gabriela Statkiewicz-Barabach, Karol Tarnowski, Dominik Kowal and Pawel Mergo
Sensors 2019, 19(2), 433; https://doi.org/10.3390/s19020433 - 21 Jan 2019
Viewed by 4099
Abstract
We performed an experimental analysis of the effect of phase mask alignment on the reflection spectra of Bragg gratings fabricated in polymer optical fiber around the wavelength λB = 1560 nm using a multiple order phase mask. We monitored the evolution of the reflection spectra for different values of the angle ϕ describing the tilt between the phase mask and the fiber. We observed that the peak at λB is split into five separate peaks for nonzero tilt and that the separation of the peaks increases linearly with ϕ. Through comparison with theoretical data we were able to identify the five peaks as products of different grating periodicities, which are associated with the interference of different pairs of diffraction orders from the phase mask. Full article
(This article belongs to the Section Physical Sensors)
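A minimal sketch of the peak bookkeeping, assuming the textbook relation λ_(m,q)^(p) = 2·n_eff·Λ_PM/(p·|m−q|) for fringes written by interfering mask orders m and q; n_eff and Λ_PM below are illustrative values (not the paper's measured parameters), and the tilt dependence that actually splits the degenerate peaks is not modeled:

```python
# Hedged sketch: which (m, q, p) combinations share the primary Bragg
# wavelength at zero tilt?  Orders m and q write fringes of period
# Lambda_PM / |m - q|; reflection order p then gives
# lambda_(m,q)^(p) = 2 * n_eff * Lambda_PM / (p * |m - q|).
n_eff = 1.48                        # assumed effective index of the POF core
Lam_PM = 1560e-9 / n_eff            # mask period placing lambda_(-1,1)^(1) at 1560 nm

for m in range(-2, 3):
    for q in range(m + 1, 3):
        for p in (1, 2):
            lam = 2.0 * n_eff * Lam_PM / (p * abs(m - q))
            tag = "  <- degenerate at lambda_B" if abs(lam - 1560e-9) < 1e-12 else ""
            print(f"(m={m:+d}, q={q:+d}, p={p}): {lam * 1e9:7.1f} nm{tag}")
```

Every combination with p·|m−q| = 2 lands at λB in this zero-tilt picture; tilting the fiber lifts that degeneracy, which is consistent with the splitting reported in the abstract.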
Figures:
Figure 1: Schematic representation of the grating inscription system. The blue plane represents the phase mask. k, wave vector of the incident beam; K, inverse vector of the phase mask; s, versor indicating the fiber orientation; ϕ, tilt angle.
Figure 2: Calculated positions of the Bragg reflection peaks λ_(m,q)^(p), where m and q are the diffraction order numbers and p is the reflection order (p = 1 or 2). The auxiliary axis shows the difference between each position λ_(m,q)^(p) and the position of the central peak λ_(−1,1)^(1).
Figure 3: Cross section of the polymer optical fiber (POF) used for Bragg grating inscription. PMMA: polymethyl methacrylate; PS: polystyrene.
Figure 4: OptoSigma goniometer stage (a) and rotational stage (b) for precise adjustment of the θ and ϕ angles, respectively. The directions of rotation of the phase mask relative to the POF (polymer optical fiber) are marked with red arrows.
Figure 5: Images of the POF and the phase mask used for evaluation of the angle θ (a) and tilt angle ϕ (b).
Figure 6: Evolution of the reflection spectra of the fiber Bragg grating (FBG) observed for different tilt angles of the POF.
Figure 7: The relative position of the peaks as a function of the irradiation time for the tilt angle ϕ = 0.77°.
Figure 8: The reflection spectra near the primary Bragg wavelength λB of the FBGs fabricated in fibers tilted with respect to the phase mask at angles ϕ = 0° (a), ϕ = 0.36° (b), ϕ = 0.77° (c), ϕ = 1.0° (d) and ϕ = 1.1° (e).
Figure 9: Measured and calculated shift of the side-peaks in the spectra of the FBGs with respect to the position of the central peak (a), and the separation of the inner and outer pairs of side-peaks (b). Experimental results are marked with circles.
13 pages, 1174 KiB  
Article
Wearable Sensor-Based Exercise Biofeedback for Orthopaedic Rehabilitation: A Mixed Methods User Evaluation of a Prototype System
by Rob Argent, Patrick Slevin, Antonio Bevilacqua, Maurice Neligan, Ailish Daly and Brian Caulfield
Sensors 2019, 19(2), 432; https://doi.org/10.3390/s19020432 - 21 Jan 2019
Cited by 48 | Viewed by 8836
Abstract
The majority of wearable sensor-based biofeedback systems used in exercise rehabilitation lack end-user evaluation as part of the development process. This study sought to evaluate an exemplar sensor-based biofeedback system, investigating the feasibility, usability, perceived impact and user experience of using the platform. Fifteen patients who had recently undergone knee replacement surgery participated in the study. Participants were provided with the system for two weeks at home, completing a semi-structured interview alongside the System Usability Scale (SUS) and the user version of the Mobile Application Rating Scale (uMARS). The analysis of the SUS (mean = 90.8 [SD = 7.8]) suggests a high degree of usability, supported by the qualitative findings. The mean adherence rate was 79%, with participants reporting a largely positive user experience and suggesting the system offers additional support for the rehabilitation regime. Overall quality from the mean uMARS score was 4.1 out of 5 (SD = 0.39); however, a number of bugs and inaccuracies were highlighted, along with suggestions for additional features to enhance engagement. This study has shown that patients perceive value in the use of wearable sensor-based biofeedback systems; it has highlighted the benefit of user evaluation during the design process, illustrated the need for real-world accuracy validation, and supports the ongoing development of such systems. Full article
(This article belongs to the Special Issue Data Analytics and Applications of the Wearable Sensors in Healthcare)
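For reference, an SUS score like the mean of 90.8 reported above is computed with the standard scoring rule; a short sketch (the example responses are made up, not study data):

```python
def sus_score(responses):
    """Standard System Usability Scale score (0-100) from the ten 1-5
    Likert responses: odd-numbered items contribute (r - 1), even-numbered
    items contribute (5 - r); the sum is scaled by 2.5."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # i=0 is item 1 (odd)
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 5, 2]))  # -> 92.5
```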
Figures:
Figure 1: User setup and IMU orientation of the biofeedback system consisting of a single IMU and associated Android tablet application (figure adapted from [23]).
Figure 2: Screenshot of the Android tablet application during the straight leg raise exercise.
21 pages, 3273 KiB  
Article
A Smart Recommender Based on Hybrid Learning Methods for Personal Well-Being Services
by Rayan M. Nouh, Hyun-Ho Lee, Won-Jin Lee and Jae-Dong Lee
Sensors 2019, 19(2), 431; https://doi.org/10.3390/s19020431 - 21 Jan 2019
Cited by 39 | Viewed by 9499
Abstract
The main focus of this paper is to propose a smart recommender system based on hybrid learning methods for personal well-being services, called a smart recommender system of hybrid learning (SRHL). Personal lifestyle is considered the essential health factor, with the help of a critical examination of various disciplines. Integrating the recommender system effectively contributes to the prevention of disease, and it also leads to a reduction in treatment cost, which contributes to an improvement in the quality of life. At the same time, various challenges exist within recommender systems, mainly cold start and scalability. To address these inefficiencies effectively, we propose combined hybrid methods based on machine learning. The primary aim of this learning method is to integrate the most effective and efficient learning algorithms to examine how combined hybrid filtering can help alleviate the cold start problem in the provision of personalized well-being services for health food. These methods include: (1) switching between content-based and collaborative filtering; (2) identifying the user context with the integration of dynamic filtering; and (3) learning the profiles with the help of processing and screening of the reflected feedback loops. The experimental results were evaluated using three absolute error measures, providing results comparable with other studies in machine learning domains. The effects of using the hybrid learning method are reflected in the experimental results. Our experiments also show that the hybrid method reduces the average prediction error by 14.61% in comparison to collaborative methods, which mainly focus on the computation of similar entities. Full article
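The three absolute error measures mentioned above are standard; a generic sketch follows (the paper's experiments were run in R, and the ratings below are hypothetical):

```python
import numpy as np

def error_measures(y_true, y_pred):
    """MAE, MAPE, and MSE, the three absolute error measures used to
    evaluate the recommender (a generic sketch of the definitions)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    mape = np.mean(np.abs(err / y_true)) * 100.0   # assumes y_true != 0
    mse = np.mean(err ** 2)
    return mae, mape, mse

# Hypothetical ratings on a 1-5 scale
mae, mape, mse = error_measures([4, 3, 5, 2, 4], [3.6, 3.4, 4.5, 2.8, 4.2])
print(f"MAE={mae:.3f}  MAPE={mape:.1f}%  MSE={mse:.3f}")
```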
Figures:
Figure 1: Well-being recommender services.
Figure 2: Service model.
Figure 3: System architecture.
Figure 4: Methods.
Figure 5: Profile structure.
Figure 6: Screenshot of the operation code that is part of the recommendation models using R.
Figure 7: Screenshot of the results of MAE, MAPE, and MSE in R.
Figure 8: Each experiment's MAE results.
Figure 9: Each experiment's MAPE results.
Figure 10: Each experiment's MSE results.
21 pages, 3578 KiB  
Article
Spatial and Temporal Variation of Drought Based on Satellite Derived Vegetation Condition Index in Nepal from 1982–2015
by Binod Baniya, Qiuhong Tang, Ximeng Xu, Gebremedhin Gebremeskel Haile and Gyan Chhipi-Shrestha
Sensors 2019, 19(2), 430; https://doi.org/10.3390/s19020430 - 21 Jan 2019
Cited by 51 | Viewed by 6806
Abstract
Identification of drought is essential for many environmental and agricultural applications. To further understand drought, this study presents spatial and temporal variations of drought based on the satellite-derived Vegetation Condition Index (VCI) on annual (Jan–Dec), seasonal monsoon (Jun–Nov) and pre-monsoon (Mar–May) scales from 1982–2015 in Nepal. The Vegetation Condition Index (VCI) obtained from NOAA AVHRR (National Oceanic and Atmospheric Administration, Advanced Very High Resolution Radiometer) data and climate data from meteorological stations were used. VCI was used to grade the drought, the Mann–Kendall test and linear trend analysis were conducted to examine drought trends, and the Pearson correlation between VCI and climatic factors (i.e., temperature and precipitation) was also computed. The results show that severe drought occurred in 1982, 1984, 1985 and 2000 on all time scales. However, VCI increased at rates of 1.14 yr−1 (p = 0.04), 1.31 yr−1 (p = 0.03) and 0.77 yr−1 (p = 0.77) on the annual, seasonal monsoon and pre-monsoon scales, respectively. These increases in VCI indicate decreases in drought. Spatially, however, increasing drought trends were found in some regions of Nepal. For instance, northern areas, mainly in the Trans-Himalayan regions, experienced severe drought. The foothills and the lowlands of Terai (southern Nepal) experienced normal VCI, i.e., no drought. Similarly, the Anomaly Vegetation Condition Index (AVCI) was mostly negative before 2000, which indicated deficient soil moisture. The exceedance probability analysis on the annual time scale showed that there was a 20% chance of severe drought (VCI ≤ 35%) and a 35% chance of normal drought (35% ≤ VCI ≤ 50%) occurring in Nepal. Drought was also linked with climate: temperature on the annual and seasonal monsoon scales was significantly and positively correlated with VCI. Drought occurrence and trends in Nepal need to be studied further for comprehensive information and understanding. Full article
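The VCI itself is Kogan's standard rescaling of NDVI between its multi-year extremes; a minimal sketch using the drought thresholds quoted in the abstract (the NDVI series below is made up):

```python
import numpy as np

def vci(ndvi_series):
    """Vegetation Condition Index (Kogan): per-pixel NDVI rescaled between
    its multi-year minimum and maximum, in percent."""
    ndvi = np.asarray(ndvi_series, float)
    return 100.0 * (ndvi - ndvi.min()) / (ndvi.max() - ndvi.min())

def drought_class(v):
    # Thresholds as used in the abstract: VCI <= 35 severe; 35-50 normal drought
    return "severe" if v <= 35 else "normal drought" if v <= 50 else "no drought"

ndvi_series = np.array([0.42, 0.31, 0.30, 0.35, 0.48, 0.52, 0.55])  # made-up
for year, v in zip(range(1982, 1989), vci(ndvi_series)):
    print(year, f"VCI={v:5.1f}%", drought_class(v))
```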
Figures:
Figure 1: Location of Nepal and its average Normalized Difference Vegetation Index (NDVI) value from 1982–2015.
Figure 2: Climatic Research Unit (CRU) gridded average annual temperature and precipitation distribution from 1982–2015 in Nepal, downscaled to 1 km.
Figure 3: Temporal variation of the drought index (VCI) on different time scales: (a) annual VCI (VCI_12); (b) seasonal monsoon VCI (VCI_6) and (c) pre-monsoon VCI (VCI_3) from 1982–2015 in Nepal.
Figure 4: Spatial variation of the temporally averaged VCI for different time scales: (a) annual VCI (VCI_12); (b) seasonal monsoon VCI (VCI_6) and (c) pre-monsoon VCI (VCI_3) from 1982–2015 in Nepal.
Figure 5: Spatial and temporal trends of VCI: (a) annual (VCI_12); (b) seasonal monsoon (VCI_6) and (c) pre-monsoon (VCI_3) VCI trends in Nepal from 1982–2015.
Figure 6: Anomaly of the Vegetation Condition Index for the annual (AVCI_12), seasonal monsoon (AVCI_6) and pre-monsoon (AVCI_3) seasons from 1982–2015 in Nepal.
Figure 7: Drought index probability curves: (a) annual; (b) seasonal monsoon and (c) pre-monsoon from 1982–2015 in Nepal.
23 pages, 5106 KiB  
Article
A Transductive Model-based Stress Recognition Method Using Peripheral Physiological Signals
by Minjia Li, Lun Xie and Zhiliang Wang
Sensors 2019, 19(2), 429; https://doi.org/10.3390/s19020429 - 21 Jan 2019
Cited by 11 | Viewed by 4983
Abstract
Existing research on stress recognition focuses on the extraction of physiological features and uses classifiers based on global optimization. Challenges remain concerning the differences between individual physiological signals for stress recognition, including dispersed distribution and sample imbalance. In this work, we propose a framework for real-time stress recognition using peripheral physiological signals, which aims to reduce the errors caused by individual differences and to improve the regression performance of stress recognition. The proposed framework is a transductive model based on transductive learning, which treats local learning as leveraging the neighborhood knowledge of training examples. The degree of dispersion of the continuous labels in the y space is also one of the influencing factors of the transductive model. For prediction, we selected epsilon-support vector regression (ε-SVR) to construct the transductive model. The non-linear real-time features were extracted using a combination of wavelet packet decomposition and bi-spectrum analysis. The performance of the proposed approach was evaluated using the DEAP dataset and Stroop training. The results indicate the effectiveness of the transductive model, which achieved better prediction performance than traditional methods. Furthermore, a real-time interactive experiment was conducted in field studies to explore the usability of the proposed framework. Full article
(This article belongs to the Section Biosensors)
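A minimal sketch of the transductive idea, assuming a k-nearest-neighbor notion of locality and scikit-learn's ε-SVR; the paper's feature extraction and label-dispersion weighting are omitted, and all data below are synthetic:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVR

def transductive_svr_predict(X_train, y_train, X_test, k=20, eps=0.1):
    """For each test example, fit an epsilon-SVR only on its k nearest
    training neighbors (local learning) instead of one globally
    optimized model."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    preds = []
    for x in np.atleast_2d(X_test):
        idx = nn.kneighbors(x.reshape(1, -1), return_distance=False)[0]
        model = SVR(epsilon=eps).fit(X_train[idx], y_train[idx])
        preds.append(model.predict(x.reshape(1, -1))[0])
    return np.array(preds)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))                         # stand-in features
y = X[:, 0] * 0.5 + rng.normal(scale=0.1, size=200)   # stand-in stress labels
print(transductive_svr_predict(X[:150], y[:150], X[150:153]))
```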
Figures:
Figure 1: Comparison of the recent study and the proposed transductive model: (a) the flow diagram of the recent study, including the two main processes; (b) the flow diagram of the transductive model.
Figure 2: Block diagram of the proposed stress recognition in the learning scenario.
Figure 3: Block diagram of the Stroop training.
Figure 4: The process of congruent labels.
Figure 5: Block diagram of the experiment in the learning scenario. The three lines denote three concurrent processes.
Figure 6: Examples of morphological wavelet packet-bi-spectrum analysis from the BVP and GSR: (a) the signal decomposition of BVP; (b) bispectrum magnitude of BVP; (c) the signal decomposition of GSR; (d) bispectrum magnitude of GSR.
Figure 7: Visualization of the high-dimensional features using t-SNE. Each class represents a specific subject ID: (a) from Stroop training; (b) from the DEAP dataset.
Figure 8: The training process of T-SVR and ST-SVR.
Figure 9: The label distribution of DEAP (valence). Each legend entry represents the number of subjects.
Figure 10: The confusion matrices of the ST-SVR and T-SVR: (a) ST-SVR; (b) T-SVR.
Figure 11: An example of the predicted stress state value from Subject 7.
Figure A1: The Empatica electrodes.
Figure A2: The interactive interface for learning English words.
15 pages, 6055 KiB  
Article
Guava Detection and Pose Estimation Using a Low-Cost RGB-D Sensor in the Field
by Guichao Lin, Yunchao Tang, Xiangjun Zou, Juntao Xiong and Jinhui Li
Sensors 2019, 19(2), 428; https://doi.org/10.3390/s19020428 - 21 Jan 2019
Cited by 118 | Viewed by 10016
Abstract
Fruit detection in real outdoor conditions is necessary for automatic guava harvesting, and the branch-dependent pose of fruits is also crucial to guide a robot to approach and detach the target fruit without colliding with its mother branch. To conduct automatic, collision-free picking, this study investigates a fruit detection and pose estimation method by using a low-cost red–green–blue–depth (RGB-D) sensor. A state-of-the-art fully convolutional network is first deployed to segment the RGB image to output a fruit and branch binary map. Based on the fruit binary map and RGB-D depth image, Euclidean clustering is then applied to group the point cloud into a set of individual fruits. Next, a multiple three-dimensional (3D) line-segments detection method is developed to reconstruct the segmented branches. Finally, the 3D pose of the fruit is estimated using its center position and nearest branch information. A dataset was acquired in an outdoor orchard to evaluate the performance of the proposed method. Quantitative experiments showed that the precision and recall of guava fruit detection were 0.983 and 0.948, respectively; the 3D pose error was 23.43° ± 14.18°; and the execution time per fruit was 0.565 s. The results demonstrate that the developed method can be applied to a guava-harvesting robot. Full article
(This article belongs to the Collection Sensors in Agriculture and Forestry)
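Euclidean clustering of the fruit point cloud can be sketched as KD-tree region growing within a fixed radius (the radius, minimum cluster size, and synthetic data below are assumptions, not the paper's settings):

```python
import numpy as np
from scipy.spatial import cKDTree

def euclidean_cluster(points, radius=0.02, min_size=30):
    """Euclidean clustering sketch: grow each cluster by repeatedly adding
    every point within `radius` of a point already in the cluster (the
    same idea PCL's EuclideanClusterExtraction implements)."""
    tree = cKDTree(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = {seed}, [seed]
        while frontier:
            i = frontier.pop()
            for j in tree.query_ball_point(points[i], radius):
                if j in unvisited:
                    unvisited.remove(j)
                    cluster.add(j)
                    frontier.append(j)
        if len(cluster) >= min_size:
            clusters.append(sorted(cluster))
    return clusters

# Two synthetic fruit-like blobs 10 cm apart
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal((0.0, 0.0, 0.5), 0.005, (100, 3)),
                 rng.normal((0.1, 0.0, 0.5), 0.005, (100, 3))])
print([len(c) for c in euclidean_cluster(pts)])  # -> two clusters of 100
```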
Figures:
Figure 1: The guava-harvesting robot and its vision sensing system. (a) Robot system; (b) vision sensing system.
Figure 2: Flow diagram of the developed vision sensing algorithm.
Figure 3: Fully convolutional network (FCN) configuration. The first row uses a deconvolution stride of 32, resulting in a coarse prediction. The second row fuses the outputs from the conv7 layer, the pool3 layer, and the pool4 layer at stride 8, leading to a finer prediction. The deconvolution parameter is defined as '(stride) × deconv'.
Figure 4: Segmentation results of the FCN model. (a) An aligned red–green–blue (RGB) image where black pixels represent objects outside the working range of the Kinect V2 sensor; (b) segmentation result where the red parts represent the fruits, and the green parts are the branches.
Figure 5: Fruit detection results. (a) Fruit point cloud extracted from Figure 4b; (b) clustering results, where each cluster is marked with a random color.
Figure 6: Branch reconstruction process. (a) Branch skeletons extracted from Figure 4b; (b) branch point cloud; (c) detected line segments, where each segment is marked with a random color.
Figure 7: Principle of fruit pose estimation. (a) Schematic diagram; (b) three-dimensional (3D) pose estimation result, where the red arrow represents the fruit pose.
Figure 8: Example showing ground-truth labels. (a) Ground-truth labels for three classes: fruit (blue), branch (red), and background. (b) Ground-truth fruit (blue) and the corresponding mother branch (red).
Figure 9: Examples illustrating unsuccessful detections.
Figure 10: Examples illustrating the fruit poses estimated by the proposed algorithm. The yellow arrow represents the fruit pose.
Figure 11: Failure examples. The yellow arrow represents the estimated pose, while the white arrow is the ground-truth pose.
15 pages, 1754 KiB  
Article
HEALPix-IA: A Global Registration Algorithm for Initial Alignment
by Yongzhuo Gao, Zhijiang Du, Wei Xu, Mingyang Li and Wei Dong
Sensors 2019, 19(2), 427; https://doi.org/10.3390/s19020427 - 21 Jan 2019
Cited by 6 | Viewed by 4972
Abstract
Point cloud registration methods based on the ICP algorithm are often limited by their convergence rate, which depends on the initial guess. A good initial alignment transformation can sharply reduce convergence time and raise efficiency. In this paper, we propose a global registration method to estimate the initial alignment transformation based on HEALPix (Hierarchical Equal Area isoLatitude Pixelation of a sphere), an algorithm for spherical projections. We adopt the EGI (Extended Gaussian Image) method to map the normals of the point cloud and estimate the transformation with optimized point correspondences. A cross-correlation method is used to search for the best alignment in consideration of the accuracy and robustness of the algorithm. The efficiency and accuracy of the proposed algorithm were verified on synthetic models and real data from various sensors, in comparison with similar methods. Full article
(This article belongs to the Special Issue Sensors for MEMS and Microsystems)
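A minimal EGI sketch, assuming the healpy package is available for the HEALPix pixelation: point-cloud normals are binned into equal-area spherical cells, producing the histogram that HEALPix-IA would then compare across rotations (that cross-correlation search is omitted here):

```python
import numpy as np
import healpy as hp

def egi_histogram(normals, nside=8):
    """Extended Gaussian Image sketch: bin unit normals of a point cloud
    into HEALPix cells of equal area (nside=8 -> 768 cells)."""
    n = np.asarray(normals, float)
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    theta = np.arccos(np.clip(n[:, 2], -1.0, 1.0))    # colatitude
    phi = np.arctan2(n[:, 1], n[:, 0]) % (2 * np.pi)  # longitude
    pix = hp.ang2pix(nside, theta, phi)
    return np.bincount(pix, minlength=hp.nside2npix(nside))

rng = np.random.default_rng(0)
normals = rng.normal(size=(5000, 3))   # stand-in surface normals
egi = egi_histogram(normals)
print(egi.shape, egi.sum())            # (768,) 5000
```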
Figures:
Figure 1: HEALPix: the resolution increases by three steps from the base level (figure cited from https://healpix.jpl.nasa.gov/).
Figure 2: CAD models for HEALPix-IA tests.
Figure 3: The results of different EGI-based methods for HEALPix-IA tests.
Figure 4: RMS results, in logarithm, of the four rough registration methods' contrast tests.
Figure 5: Running time results of the four rough registration methods' contrast tests.
Figure 6: Other types of models for HEALPix-IA tests.
Figure 7: Various sensors used for tests.
Figure 8: Results of different types of models for HEALPix-IA tests.
Figure 9: Results of real workpiece MA analysis using HEALPix-IA.
21 pages, 5139 KiB  
Article
Chemical Source Searching by Controlling a Wheeled Mobile Robot to Follow an Online Planned Route in Outdoor Field Environments
by Ji-Gong Li, Meng-Li Cao and Qing-Hao Meng
Sensors 2019, 19(2), 426; https://doi.org/10.3390/s19020426 - 21 Jan 2019
Cited by 21 | Viewed by 4929
Abstract
In this paper, we present an estimation-based route planning (ERP) method for chemical source searching using a wheeled mobile robot and validate its effectiveness with outdoor field experiments. The ERP method plans a dynamic route for the robot to follow in searching for a chemical source according to the time-varying wind and an estimated chemical-patch path (C-PP), where the C-PP is the historical trajectory of a chemical patch detected by the robot, and normally differs from the chemical plume formed by the spatial distribution of all chemical patches previously released from the source. Owing to the limitations of typical gas sensors and the actuation capability of ground mobile robots, it is quite hard for a single robot to directly trace the intermittent and rapidly swinging chemical plume resulting from frequent and random changes of wind speed and direction in outdoor field environments. In these circumstances, tracking the C-PP back toward the chemical source could help the robot approach the source. The proposed ERP method was tested in two different outdoor fields using a wheeled mobile robot. Experimental results indicate that the robot adapts to the time-varying airflow, arriving at the chemical source with an average success rate of about 90% and an approaching effectiveness of about 0.4~0.6. Full article
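The C-PP idea can be sketched as back-advection of the detected patch through the logged wind history. This is a hedged illustration of the concept only: the paper's uncertainty ellipses OS(t_l, t_j) and the route planner itself are omitted, and the wind log is made up:

```python
import numpy as np

def estimate_cpp(p_detect, t_detect, wind_log, t_release):
    """Advect the detected chemical patch backwards through the logged
    wind history to estimate where it was at earlier times.  `wind_log`
    maps sample times (s) to 2-D wind vectors (m/s)."""
    times = sorted(t for t in wind_log if t_release <= t <= t_detect)
    path, p = [], np.asarray(p_detect, float)
    for t0, t1 in zip(reversed(times[:-1]), reversed(times[1:])):
        p = p - np.asarray(wind_log[t1], float) * (t1 - t0)  # step backwards
        path.append((t0, p.copy()))
    return path[::-1]   # chronological patch positions

wind_log = {0.0: (1.0, 0.2), 1.0: (0.8, 0.5), 2.0: (1.1, 0.0), 3.0: (0.9, 0.3)}
for t, pos in estimate_cpp((10.0, 4.0), 3.0, wind_log, 0.0):
    print(f"t={t:.0f}s  patch at ({pos[0]:.1f}, {pos[1]:.1f}) m")
```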
Figures:
Figure 1: Chemical-source localization by following the online planned route. The planned route is represented by two colors, blue-black and cyan, corresponding to the two different parts SL_u and SL_d, respectively. The wind initially blows from left to right (marked "Direction 1") for a period in (a), and then changes by about 30 degrees (marked "Direction 2") for another period in (b). It is assumed that the wind is uniform throughout the planar space around the robot during each period (see Section 4.1 for more details). All chemical patches previously released from the chemical source therefore form the plume shown by the gray strip. The chemical patches in (a) are collectively transported in wind direction 2 in (b). When a chemical-detection event happens in (b), a C-PP, i.e., the ensemble of the elliptic areas {OS(t_l, t_j), l = f, f+1, …, j−1}, is estimated, and a search route is then planned online so that the robot attempts to re-encounter the chemical clue. The robot finally approaches the source through an iterative searching procedure.
Figure 2: Generation of the deviation-path point L_off(t_l).
Figure 3: Framework of the CSS based on the ERP method. The ERP procedure is represented by the steps enclosed within broken lines.
Figure 4: Experimental platform: the wheeled mobile robot MrSOS and the onboard sensors. An enlarged view of the gas sensor (MiCS 5135) is shown at the top right corner.
Figure 5: The value of n_D/N_D for eight different thresholds η and five different distances from the source.
Figure 6: Two of the 100 estimated C-PPs when the distance between the robot and the source was 4 m and the threshold η in Equation (6) was chosen to be 10^−2 m^−2. The ellipses indicate the boundaries of the areas OS(t_l, t_j) in Equations (5) and (6). (a) A case of more stable airflow ([σx², σy²] = [0.029, 0.053] m²/s²). (b) A case of less stable airflow ([σx², σy²] = [0.260, 0.160] m²/s²).
Figure 7: Plan sketch (upper) and picture (lower) of the experimental field for the O group. The trials of the O group were conducted in the small square (limited to 35 m × 55 m) north of Teaching Building No. 26-D&C, where a small garden stands about 0.4 m above the ground.
Figure 8: Sketch of the approaching-effectiveness definition. The trajectory from A to B indicates the chemical-clue-finding phase, and B to C belongs to the plume tracing/traversal phase. The red point S stands for the chemical source. For our research, the plume tracing/traversal phase corresponds to the odor source searching with the ERP method and might include the chemical-clue-refinding phase.
Figure 9: The process of chemical-clue finding and the ERP procedure in the football field of Tianjin University. (a) t = 56.0 s. (b) t = 125.5 s. (c) t = 149.5 s. (d) t = 194.0 s. 'A' denotes the start position of the robot, 'S' indicates the true location of the chemical source, and 'B' is where the ERP procedure was launched. The planned route is represented by two colors, blue-black and cyan, corresponding to the two different parts SL_u and SL_d, respectively. Along the trajectory of the robot, each '+' symbol represents a chemical-detection event. The solid arrow near the robot denotes the instantaneous wind direction observed by the robot. The trajectory of the robot from point 'A' to point 'B' belongs to the chemical-clue-finding phase and the remaining part belongs to the ERP procedure. The timer began when the robot started to find the chemical clue at point 'A'. The total tracing time (excluding the chemical-clue-finding time) was 138.0 s.
Figure 10: Instantaneous wind directions/speeds and chemical-detection events during the ERP phase shown in Figure 9.
Figure 11: Results of 21 ERP experiments in the F group. (a) Approaching effectiveness. (b) Time costs. Each mark in the two subgraphs stands for the corresponding value of an experiment.
Figure 12: Four reconstructed scenes during the CSS process in the square field. (a) t = 62.0 s, the robot was at (28.81, 15.41) m. (b) t = 132.5 s, the robot was at (25.80, 15.07) m. (c) t = 173.0 s, the robot was at (24.06, 20.28) m. (d) t = 196.0 s, the robot was at (21.01, 22.12) m. The unit of each subgraph is meters. The total tracing time (excluding the chemical-clue-finding time) was 134.0 s.
Figure 13: Instantaneous wind directions/speeds and chemical-detection events during the ERP phase shown in Figure 12.
Figure 14: Results of 16 ERP experiments in the O group. (a) Approaching effectiveness. (b) Time cost. Each mark in the two subgraphs stands for the corresponding value of an experiment.
20 pages, 8040 KiB  
Article
Damage Quantification with Embedded Piezoelectric Aggregates Based on Wavelet Packet Energy Analysis
by Zijian Wang, Li Wei and Maosen Cao
Sensors 2019, 19(2), 425; https://doi.org/10.3390/s19020425 - 21 Jan 2019
Cited by 21 | Viewed by 4653
Abstract
Cement-based components are widely used in civil engineering structures. However, due to wear and deterioration, cement-based components may fail in a brittle manner. To provide early warning and to support predictive reinforcement, piezoelectric materials are embedded into the cement-based components to excite and receive elastic waves. By recognizing abnormalities in the elastic waves, hidden damage can be identified in advance. However, little research has been published regarding damage quantification. In this paper, wavelet packet analysis is adopted to calculate the energy of the elastic waves transmitted by the improved piezoelectric aggregates (IPAs). As the damage grows, fewer elastic waves can pass through the damage zone, decreasing the energy of the acquired signals. A set of cement beams with different crack depths at the mid-span was tested both numerically and experimentally, and a damage quantification index, the wavelet packet-based energy index (WPEI), was developed. Both the numerical and experimental results demonstrate that the WPEI decreases with crack depth, and regression analysis reveals a strong linear relationship between the WPEI and the crack depth. Using this linear relationship, the crack depth can be estimated from the WPEI with good accuracy. The results demonstrate that the IPAs and the WPEI enable real-time quantification of crack depth in cement beams. Full article
(This article belongs to the Special Issue Sensors Based NDE and NDT)
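The core computation described in the abstract can be sketched briefly. Below is a minimal Python illustration of a wavelet packet-based energy index, assuming the WPEI is the wavelet-packet energy of the current signal normalized by that of an undamaged baseline; the wavelet, decomposition level and signals are illustrative, not the paper's exact settings.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_packet_energy(signal, wavelet="db4", level=3):
    """Sum of squared coefficients over all terminal wavelet-packet nodes."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet,
                            mode="symmetric", maxlevel=level)
    return sum(np.sum(node.data ** 2) for node in wp.get_level(level, "natural"))

def wpei(signal, baseline):
    """Assumed WPEI: energy relative to the undamaged baseline signal."""
    return wavelet_packet_energy(signal) / wavelet_packet_energy(baseline)

# A crack attenuates the transmitted wave, so the WPEI drops below 1.
t = np.linspace(0.0, 1e-3, 2048)
healthy = np.sin(2 * np.pi * 50e3 * t)
damaged = 0.6 * healthy        # crude stand-in for a deeper crack
print(wpei(damaged, healthy))  # ~0.36
```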
Show Figures

Figure 1
Decomposition tree of the wavelet packet analysis.
Figure 2
The manufacturing process of the traditional piezoelectric aggregate (TPA): (a) the Lead Zirconate Titanate (PZT) patches; (b) the steel mold; (c) the configuration of the TPA; and (d) the finished TPA.
Figure 3
The manufacturing process of the improved piezoelectric aggregate (IPA): (a) the Polymethyl Methacrylate (PMMA) tube; (b) the configuration of the IPA; and (c) the finished IPA.
Figure 4
The pitch-catch mode to compare the signals of different piezoelectric aggregates.
Figure 5
Signals acquired by different piezoelectric aggregates.
Figure 6
Frequency responses of the IPA and the TPA.
Figure 7
Configuration of the finite element model.
Figure 8
Snapshots of the numerical model with the crack depth = 0 mm: (a) t = 0 ms; (b) t = 75 ms; (c) t = 150 ms; (d) t = 225 ms; and (e) t = 300 ms.
Figure 9
Snapshots of the numerical model with the crack depth = 20 mm: (a) t = 0 ms; (b) t = 75 ms; (c) t = 150 ms; (d) t = 225 ms; and (e) t = 300 ms.
Figure 10
Snapshots of the numerical model with the crack depth = 55 mm: (a) t = 0 ms; (b) t = 75 ms; (c) t = 150 ms; (d) t = 225 ms; and (e) t = 300 ms.
Figure 11
Acquired signals from simulations with different crack depths: (a) crack depth = 0 mm (0%); (b) crack depth = 5 mm (5%); (c) crack depth = 10 mm (10%); (d) crack depth = 15 mm (15%); (e) crack depth = 20 mm (20%); (f) crack depth = 25 mm (25%); (g) crack depth = 30 mm (30%); (h) crack depth = 35 mm (35%); (i) crack depth = 40 mm (40%); (j) crack depth = 45 mm (45%); (k) crack depth = 50 mm (50%); and (l) crack depth = 55 mm (55%).
Figure 12
Linearity of the WPEI with respect to the crack depth (numerical investigation).
Figure 13
Setup of the testing equipment.
Figure 14
The specimens of the cement beams with different crack depths: (a) crack depth = 0 mm; (b) crack depth = 20 mm; and (c) crack depth = 55 mm.
Figure 15
The acquired signals from the experiments with different crack depths: (a) crack depth = 0 mm (0%); (b) crack depth = 5 mm (5%); (c) crack depth = 10 mm (10%); (d) crack depth = 15 mm (15%); (e) crack depth = 20 mm (20%); (f) crack depth = 25 mm (25%); (g) crack depth = 30 mm (30%); (h) crack depth = 35 mm (35%); (i) crack depth = 40 mm (40%); (j) crack depth = 45 mm (45%); (k) crack depth = 50 mm (50%); and (l) crack depth = 55 mm (55%).
Figure 16
Linearity of the WPEI with respect to the crack depth (experimental investigation).
15 pages, 3249 KiB  
Article
A Hybrid Method to Improve the BLE-Based Indoor Positioning in a Dense Bluetooth Environment
by Ke Huang, Ke He and Xuecheng Du
Sensors 2019, 19(2), 424; https://doi.org/10.3390/s19020424 - 21 Jan 2019
Cited by 60 | Viewed by 7252
Abstract
Indoor positioning using Bluetooth Low Energy (BLE) beacons has attracted considerable attention after the release of the BLE protocol. A number of efforts have been made to improve the performance of BLE-based indoor positioning. However, few studies pay attention to BLE-based indoor positioning in a dense Bluetooth environment, where the propagation of BLE signals becomes more complex and fluctuant. In this paper, we draw attention to the problems caused by a dense Bluetooth environment, and show that it results in high received signal strength indication (RSSI) variation and long BLE collection intervals. Hence, to mitigate these effects, we propose a hybrid method that fuses sliding-window filtering, trilateration, dead reckoning and Kalman filtering to improve the performance of BLE indoor positioning. The Kalman filter is exploited to merge the trilateration and dead-reckoning estimates. Extensive experiments in a real implementation were conducted to examine the performance of three approaches: trilateration, dead reckoning and the fusion method. The results show that the fusion method is the most effective for improving positioning accuracy and timeliness in a dense Bluetooth environment; the positioning root-mean-square error (RMSE) results show that the hybrid method achieves real-time positioning and reduces the indoor positioning error. Full article
(This article belongs to the Section Physical Sensors)
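Two of the building blocks the hybrid method fuses are easy to sketch. The Python fragment below shows a sliding-window RSSI filter and a log-distance path-loss conversion from smoothed RSSI to beacon range (the input to trilateration); the window size, TX power and path-loss exponent are assumed values, not the paper's calibration.

```python
from collections import deque

class SlidingWindowRSSI:
    """Average the last `size` RSSI samples to suppress fluctuation."""
    def __init__(self, size=10):
        self.window = deque(maxlen=size)

    def update(self, rssi):
        self.window.append(rssi)
        return sum(self.window) / len(self.window)

def rssi_to_distance(rssi, tx_power=-59.0, n=2.5):
    """Log-distance model: rssi = tx_power - 10 * n * log10(d)."""
    return 10.0 ** ((tx_power - rssi) / (10.0 * n))

filt = SlidingWindowRSSI()
smoothed = filt.update(-72.0)
print(f"estimated range: {rssi_to_distance(smoothed):.1f} m")
```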
Show Figures

Figure 1
Received Signal Strength Indicator (RSSI) value and the scanning time interval of the Bluetooth Low Energy (BLE).
Figure 2
The overview of BLE positioning solution.
Figure 3
The curves of smoothed RSSI and real-time RSSI.
Figure 4
The RSSI measurement value and BLE propagation model value.
Figure 5
Average acceleration record of 20 steps.
Figure 6
The processing pipeline of the hybrid indoor positioning.
Figure 7
Diagram of two positioning phases.
Figure 8
Layout of the test scenario.
Figure 9
The estimated path and actual walked path of the dead reckoning.
Figure 10
The estimated path and actual walked path of the trilateration.
Figure 11
The estimated path and actual walked path of the hybrid method.
15 pages, 951 KiB  
Article
Spatio-Temporal Features in Action Recognition Using 3D Skeletal Joints
by Mihai Trăscău, Mihai Nan and Adina Magda Florea
Sensors 2019, 19(2), 423; https://doi.org/10.3390/s19020423 - 21 Jan 2019
Cited by 11 | Viewed by 4602
Abstract
Robust action recognition methods are a cornerstone of Ambient Assisted Living (AAL) systems employing optical devices. Using 3D skeleton joints extracted from depth images taken with time-of-flight (ToF) cameras has been a popular solution for accomplishing these tasks. Though seemingly scarce in terms of information availability compared to its RGB or depth-image counterparts, the skeletal representation has proven effective in the task of action recognition. This paper explores different interpretations of both the spatial and the temporal dimensions of a sequence of frames describing an action. We show that rather intuitive approaches, often borrowed from other computer vision tasks, can improve accuracy. We report results based on these modifications and propose an architecture that uses temporal convolutions, with results comparable to the state of the art. Full article
(This article belongs to the Section Internet of Things)
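The temporal-convolution idea can be sketched in a few lines. The PyTorch layer below follows the figure captions (1D convolutions, kernel size 9, stride 2 when the output size changes, and an optional 1x1 projection when channel counts differ); everything else is an assumption rather than the authors' exact architecture.

```python
import torch
import torch.nn as nn

class TCNLayer(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=9, stride=1):
        super().__init__()
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size,
                              stride=stride, padding=kernel_size // 2)
        self.bn = nn.BatchNorm1d(out_ch)
        self.relu = nn.ReLU()
        # Optional block: project the input so the residual shapes match.
        self.project = (nn.Conv1d(in_ch, out_ch, 1, stride=stride)
                        if in_ch != out_ch or stride != 1 else None)

    def forward(self, x):
        residual = x if self.project is None else self.project(x)
        return self.relu(self.bn(self.conv(x)) + residual)

# 25 joints x 3 coordinates = 75 channels over a 300-frame sequence.
x = torch.randn(8, 75, 300)
print(TCNLayer(75, 128, stride=2)(x).shape)  # torch.Size([8, 128, 150])
```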
Show Figures

Figure 1
Configuration of 25 body joints proposed in the NTU RGB+D Dataset [8]: 1, base of the spine; 2, middle of the spine; 3, neck; 4, head; 5, left shoulder; 6, left elbow; 7, left wrist; 8, left hand; 9, right shoulder; 10, right elbow; 11, right wrist; 12, right hand; 13, left hip; 14, left knee; 15, left ankle; 16, left foot; 17, right hip; 18, right knee; 19, right ankle; 20, right foot; 21, spine; 22, tip of the left hand; 23, left thumb; 24, tip of the right hand; 25, right thumb.
Figure 2
FCs + LSTM - Illustration of the network structure of our first model with FC layers and Stacked LSTM with depth 3. For each layer, the output size is specified.
Figure 3
Densely Connected FCs + LSTM - Network structure of the Densely Connected model. Each of the first 4 layers is applied to the input concatenated with the outputs of the previous layers.
Figure 4
2D Arrangement CNN + LSTM - Illustration of the network architecture based on convolutions on the 2D joint matrix arrangement we proposed. For each convolutional layer, the number of channels produced by the convolution, the size of the convolving kernel and the padding are specified.
Figure 5
GCNs + LSTM - Network structure of the GCN model with one GCN layer, one Stacked LSTM with depth 3 and one FC layer.
Figure 6
TCNs - Network architecture of the TCN model. For each TCN layer, the number of channels produced by the convolution and the stride (if it is different than 1) are specified. For this model, we used 1D convolutional layers with kernel_size = 9 and stride = 2 for units with a different output size, and stride = 1 for the others.
Figure 7
TCN Layer - Architectural elements in a TCN layer. The optional block is used only when the number of channels between the input and the output differs.
Figure 8
Sample frames of the NTU RGB+D dataset.
14 pages, 1734 KiB  
Article
Low-Temperature Storage Improves the Over-Time Stability of Implantable Glucose and Lactate Biosensors
by Giulia Puggioni, Giammario Calia, Paola Arrigo, Andrea Bacciu, Gianfranco Bazzu, Rossana Migheli, Silvia Fancello, Pier Andrea Serra and Gaia Rocchitta
Sensors 2019, 19(2), 422; https://doi.org/10.3390/s19020422 - 21 Jan 2019
Cited by 18 | Viewed by 4943
Abstract
Molecular biomarkers are very important in biology, biotechnology and even in medicine, but it is quite hard to convert biology-related signals into measurable data. For this purpose, amperometric biosensors have proven particularly suitable because of their specificity and sensitivity. The operational and shelf stability of a biosensor are quite important features, and storage procedures therefore play an important role in preserving its performance. In the present study, two different designs for both the glucose and the lactate biosensor, differing only in the containment net, made of either polyurethane or glutaraldehyde, were studied under different storage conditions (+4, −20 and −80 °C) and monitored over a period of 120 days, in order to evaluate the variations of the kinetic parameters, VMAX and KM, and of the LRS as the analytical parameter. Surprisingly, storage at −80 °C yielded the best results because of an unexpected and, most of all, long-lasting increase of VMAX and LRS, denoting an interesting improvement in enzyme performance and stability over time. The present study also aimed to evaluate the impact of short-period storage in dry ice on biosensor performance, in order to simulate a hypothetical preparation-conservation-shipment scenario. Full article
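The kinetic parameters named in the abstract can be recovered from a calibration curve. Below is a hedged Python sketch, assuming a Michaelis-Menten response and taking the LRS as the slope of the initial linear region; the calibration points are made up for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import linregress

def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

conc = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0])       # substrate, mM
current = np.array([0.0, 9.0, 16.5, 27.0, 41.5, 52.0, 60.0])  # response, nA

(vmax, km), _ = curve_fit(michaelis_menten, conc, current, p0=(60.0, 3.0))
lrs = linregress(conc[:4], current[:4]).slope  # linear region slope
print(f"VMAX = {vmax:.1f} nA, KM = {km:.1f} mM, LRS = {lrs:.1f} nA/mM")
```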
Show Figures

Graphical abstract
Figure 1
Graphical representation of the two main designs of glucose and lactate biosensors used in this study: GB1 (Panel A): Pt_c/PPD/[PEI(0.5%)-GOx]_5/PU(5%); GB2 (Panel B): Pt_c/PPD/[PEI(0.5%)-GOx]_5/GTA(1%)-BSA(2%); LB1 (Panel C): Pt_c/PPD/[PEI(0.5%)-LOx]_5/PU(5%); LB2 (Panel D): Pt_c/PPD/[PEI(0.5%)-LOx]_5/GTA(1%)-BSA(2%). Pt_c: Pt cylinder, 1 mm long, 125 μm diameter; GOx: glucose oxidase; LOx: lactate oxidase; PPD: poly-ortho-phenylenediamine; PEI: polyethyleneimine; PU: polyurethane; GTA: glutaraldehyde; BSA: bovine serum albumin. The subscript "x" represents the number of dip-evaporation deposition stages, and the concentrations of the components are given in brackets.
Figure 2
Scatter plot describing the variations of V_MAX (Panels A,B), K_M (Panels C,D) and LRS (Panels E,F), over a range of 28 days (left inset) and of four months (right inset), for the GB1 design Pt_c/PPD/[PEI(0.5%)-GOx]_5/PU(5%) when stored at +4 °C (red plot), −20 °C (green plot) and −80 °C (blue plot).
Figure 3
Scatter plot describing the variations of V_MAX (Panels A,B), K_M (Panels C,D) and LRS (Panels E,F), over a range of 28 days (left inset) and of four months (right inset), for the GB2 design Pt_c/PPD/[PEI(0.5%)-GOx]_5/GTA(1%)-BSA(2%) when stored at +4 °C (red plot), −20 °C (green plot) and −80 °C (blue plot).
Figure 4
Scatter plot describing the variations of V_MAX (Panels A,B), K_M (Panels C,D) and LRS (Panels E,F), over a range of 28 days (left inset) and of four months (right inset), for the LB1 design Pt_c/PPD/[PEI(0.5%)-LOx]_5/GTA(1%)-BSA(2%) in the defined range of 120 days, when stored at +4 °C (red plot), −20 °C (green plot) and −80 °C (blue plot).
Figure 5
Scatter plot describing the variations of V_MAX (Panels A,B), K_M (Panels C,D) and LRS (Panels E,F), over a range of 28 days (left inset) and of four months (right inset), for the LB2 design Pt_c/PPD/[PEI(0.5%)-LOx]_5/GTA(1%)-BSA(2%) in the defined range of 120 days, when stored at +4 °C (red plot), −20 °C (green plot) and −80 °C (blue plot).
22 pages, 2172 KiB  
Article
Appearance-Based Salient Regions Detection Using Side-Specific Dictionaries
by Mian Muhammad Sadiq Fareed, Qi Chun, Gulnaz Ahmed, Adil Murtaza, Muhammad Rizwan Asif and Muhammad Zeeshan Fareed
Sensors 2019, 19(2), 421; https://doi.org/10.3390/s19020421 - 21 Jan 2019
Cited by 2 | Viewed by 4031
Abstract
Image saliency detection is a very helpful step in many computer vision-based smart systems, reducing computational complexity by focusing only on the salient parts of the image. Currently, image saliency is detected through representation-based generative schemes, as these schemes are helpful for extracting concise representations of the stimuli and for capturing high-level semantics in visual information with a small number of active coefficients. In this paper, we propose a novel framework for salient region detection that uses appearance-based and regression-based schemes. The framework segments the image and forms reconstructive dictionaries from the four sides of the image. These side-specific dictionaries are further utilized to obtain the saliency maps of the sides. A unified version of these maps is subsequently employed by a representation-based model to obtain a contrast-based salient region map. That map is used to obtain two regression-based maps with LAB and RGB color features, which are unified through an optimization-based method to achieve the final saliency map. Furthermore, the side-specific reconstructive dictionaries are extracted from the boundary and background pixels, which are enriched with geometrical and visual information. The approach has been thoroughly evaluated on five datasets and compared with the seven most recent approaches. The simulation results reveal that our model performs favorably in comparison with the current saliency detection schemes. Full article
(This article belongs to the Special Issue Visual Sensors)
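The dictionary-based step admits a compact sketch. Below, each superpixel descriptor is coded over a background (side-specific) dictionary with ridge-regularized least squares, and the reconstruction error serves as the saliency cue; the coding scheme and all numbers are stand-ins for the paper's actual formulation.

```python
import numpy as np

def reconstruction_saliency(features, dictionary, lam=0.5):
    """features: (n, d) descriptors; dictionary: (m, d) background atoms."""
    D = dictionary.T                                # d x m
    G = D.T @ D + lam * np.eye(D.shape[1])          # regularized Gram matrix
    codes = np.linalg.solve(G, D.T @ features.T)    # m x n coefficients
    residual = features.T - D @ codes
    return np.linalg.norm(residual, axis=0)         # large error => salient

rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, (40, 6))          # boundary superpixels
probe = np.vstack([rng.normal(0.0, 1.0, (5, 6)),    # background-like
                   rng.normal(4.0, 1.0, (5, 6))])   # salient-looking
print(reconstruction_saliency(probe, background).round(2))
```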
Show Figures

Figure 1
The pipeline of the proposed salient region detection model.
Figure 2
The need for visual features for extracting a good saliency result is obvious from the depicted results. It is worth noting that the results in the second column are comparably less significant and missing a lot of real image information.
Figure 3
The effectiveness of the heuristic background dictionary for highly precise and exact salient object map extraction.
Figure 4
The validity of obtaining a background coefficient matrix is noticeable from the demonstrated results. The results are arranged as OI, the dense representation error map, the ABM map, and the GT.
Figure 5
Some examples demonstrating the difference between the single and multi-level cue integration steps. The results are arranged as OI, the salient region map with single feature integration, and the saliency map extracted through multi-label feature incorporation.
Figure 6
We individually compare the salient region map of each stage of the proposed method by using the ASD database [38]. The results are organized as OI, the segmented image, the ABM salient region map, the enhanced salient region map through RBM, and the final salient region map.
Figure 7
Visual comparison of our scheme with some recent approaches using the ASD database. The SRD results are arranged as OI, MI, RS, AM, BD, MC, HS, UC, our scheme, and the GT. We can note that the SRD maps of our proposed scheme are very close to the GT.
Figure 8
PR-curves to validate our proposed method with different parameter values for the MSRA database. The balancing parameter is tuned at different values to verify the refinement function and its effect on the final SRD map.
Figure 9
Graphical performance comparison of different stages of our method using PR-curves to validate the single-feature, multi-featured, and enhanced results using the MSRA dataset.
Figure 10
The graphical assessment of our model against seven current approaches, AM [29], BD [42], RS [43], MC [44], MI [30], HS [39] and UC [31], using the ASD dataset.
Figure 11
The graphical evaluation of our method with seven current approaches, AM [29], BD [42], RS [43], MC [44], MI [30], HS [39] and UC [31], on the DUT-OMRON database.
Figure 12
Graphical evaluation of our model using the PR-curve, F-measure, ROC-curve, and MAE with the seven most recent models.
Figure 13
The graphical analysis of our SRD using four different saliency measures with other techniques.
Figure 14
A few cases where our model performance is not very persuasive.
20 pages, 1143 KiB  
Article
Indoor Positioning System Based on Chest-Mounted IMU
by Chuanhua Lu, Hideaki Uchiyama, Diego Thomas, Atsushi Shimada and Rin-ichiro Taniguchi
Sensors 2019, 19(2), 420; https://doi.org/10.3390/s19020420 - 21 Jan 2019
Cited by 59 | Viewed by 9186
Abstract
Demand for indoor navigation systems has been rapidly increasing with regard to location-based services. As a cost-effective choice, inertial measurement unit (IMU)-based pedestrian dead reckoning (PDR) systems have been developed for years, because they do not require external devices to be installed in the environment. In this paper, we propose a PDR system based on a chest-mounted IMU, a novel installation position for body-suit-type systems. Since the IMU is mounted on the upper body, the zero-velocity-update framework cannot be applied, because there are no periodic moments of zero velocity. Therefore, we propose a novel regression model for estimating step lengths from accelerations only, to correctly compute step displacement from the IMU data acquired at the chest. In addition, we integrated an efficient map-matching algorithm based on particle filtering into our system to improve positioning and heading accuracy. Since our system was designed for 3D navigation, which can estimate position in a multifloor building, we used a barometer to update the pedestrian's altitude, and the components of our map explicitly represent building-floor information. With our complete PDR system, we were awarded second place among 10 teams in the IPIN 2018 Competition Track 2, achieving a mean error of 5.2 m after the 800 m walking event. Full article
(This article belongs to the Special Issue Sensor Fusion and Novel Technologies in Positioning and Navigation)
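Since there are no zero-velocity moments at the chest, step length must come from accelerations alone. As a hedged illustration, the classical Weinberg estimator below stands in for the authors' regression model; the constant K would normally be calibrated per user, and the sample values are invented.

```python
import numpy as np

def weinberg_step_length(acc_norms, k=0.45):
    """acc_norms: accelerometer magnitudes (m/s^2) within one detected step."""
    return k * (np.max(acc_norms) - np.min(acc_norms)) ** 0.25

step_window = np.array([9.3, 9.9, 11.4, 12.8, 11.0, 9.6, 8.9])
print(f"step length ~ {weinberg_step_length(step_window):.2f} m")
```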
Show Figures

Figure 1
A prototype system of the chest-mounted inertial measurement unit (IMU)-based pedestrian dead reckoning (PDR).
Figure 2
System overview.
Figure 3
Sensor and world frames.
Figure 4
Step-detection process.
Figure 5
Definition of one step.
Figure 6
Data of one step.
Figure 7
Map components.
Figure 8
Definition of heading correction h_i.
Figure 9
Process of backtracking test. Blue arrows are recent steps.
Figure 10
Pressure variations when going upstairs.
Figure 11
Map-editing process.
Figure 12
Calibrate IMU pose with initial heading.
Figure 13
Experiment 1: system evaluation with different configurations.
Figure 14
Experiment 1: errors at each keypoint.
Figure 15
Experiment 2: evaluation in IPIN 2018 Competition Track 2.
Figure 16
Experiment 2: errors at each keypoint.
14 pages, 2982 KiB  
Article
Ripeness Prediction of Postharvest Kiwifruit Using a MOS E-Nose Combined with Chemometrics
by Dongdong Du, Jun Wang, Bo Wang, Luyi Zhu and Xuezhen Hong
Sensors 2019, 19(2), 419; https://doi.org/10.3390/s19020419 - 21 Jan 2019
Cited by 72 | Viewed by 6559
Abstract
Postharvest kiwifruit continues to ripen for a period until it reaches the optimal "eating ripe" stage. Without damaging the fruit, it is very difficult to identify the ripeness of postharvest kiwifruit by conventional means. In this study, an electronic nose (E-nose) with 10 metal oxide semiconductor (MOS) gas sensors was used to predict the ripeness of postharvest kiwifruit. Three different feature extraction methods (the max/min values, the difference values and the 70th s values) were employed to discriminate kiwifruit at different ripening times by linear discriminant analysis (LDA), and the results showed that the 70th s values method performed best in discriminating kiwifruit at different ripening stages, obtaining a 100% original accuracy rate and a 99.4% cross-validation accuracy rate. Partial least squares regression (PLSR), support vector machine (SVM) and random forest (RF) were employed to build prediction models for overall ripeness, soluble solids content (SSC) and firmness. The regression results showed that the RF algorithm performed best in predicting the ripeness indexes of postharvest kiwifruit compared with PLSR and SVM, which illustrates that the E-nose data have high correlations with overall ripeness (training: R2 = 0.9928; testing: R2 = 0.9928), SSC (training: R2 = 0.9749; testing: R2 = 0.9143) and firmness (training: R2 = 0.9814; testing: R2 = 0.9290). This study demonstrates that the E-nose can be a comprehensive approach to predicting the ripeness of postharvest kiwifruit through aroma volatiles. Full article
(This article belongs to the Special Issue Electronic Noses and Their Application)
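The discrimination step can be reproduced in outline. Below is a minimal scikit-learn sketch with synthetic sensor responses standing in for the 10-sensor feature vectors (e.g., the 70th s values); the data and class structure are invented for illustration only.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_per_stage, n_sensors, n_stages = 30, 10, 3
# One feature vector per measurement: the value of each MOS sensor.
X = np.vstack([rng.normal(loc=stage, scale=0.8, size=(n_per_stage, n_sensors))
               for stage in range(n_stages)])
y = np.repeat(np.arange(n_stages), n_per_stage)

lda = LinearDiscriminantAnalysis()
print(cross_val_score(lda, X, y, cv=5).mean())  # cross-validated accuracy
```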
Show Figures

Figure 1
Typical response signals of E-nose system during the ripening process: (a) day 0; (b) day 4; (c) day 7.
Figure 2
Linear discriminant analysis (LDA) by ripening day based on three different feature extraction methods: (a) the max/min values; (b) the difference values; (c) the 70th s values.
Figure 3
LDA by overall ripeness based on three different feature extraction methods: (a) the max/min values; (b) the difference values; (c) the 70th s values.
Figure 4
Predicted versus actual values from different prediction models: (a) presents PLSR, (b) presents SVM, and (c) presents RF; (1) stands for overall ripeness, (2) stands for SSC, and (3) stands for firmness.
Figure 5
Searching of the penalty and kernel parameters for building the LIBSVM model: (a) overall ripeness; (b) SSC; (c) firmness.
Figure 6
Searching of the decision trees for building the RF model: (a) overall ripeness; (b) SSC; (c) firmness.
14 pages, 5339 KiB  
Article
Combining Non-Uniform Time Slice and Finite Difference to Improve 3D Ghost Imaging
by Fanghua Zhang, Jie Cao, Qun Hao, Kaiyu Zhang, Yang Cheng, Yingbo Wang and Yongchao Feng
Sensors 2019, 19(2), 418; https://doi.org/10.3390/s19020418 - 21 Jan 2019
Cited by 4 | Viewed by 4038
Abstract
Three-dimensional ghost imaging (3DGI) using a single detector is widely used in many applications. The performance of 3DGI based on uniform time slices is difficult to improve because obtaining an accurate time-slice position remains a challenge. This paper reports a novel structure based on non-uniform time slices combined with finite difference. In this approach, the finite difference improves the sensitivity of zero-crossing detection, so the position of the target in the field of view can be obtained accurately. Simultaneously, non-uniform time slices are used to quickly obtain the 3DGI of a target of interest. Results show that our proposed method achieves better 3DGI performance than the traditional method. Moreover, the relation between the time slice and the signal-to-noise ratio of 3DGI is discussed, and the optimal differential distance is obtained, thus motivating the development of high-performance 3DGI. Full article
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
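The finite-difference zero-crossing idea is simple to demonstrate. In the sketch below, differencing two delayed copies of the echo turns a return peak into a zero crossing, whose position is easier to localize than the peak itself; the Gaussian pulse and the delay are assumptions, not the paper's parameters.

```python
import numpy as np

t = np.linspace(0.0, 1e-6, 2000)                    # 1 us observation window
echo = np.exp(-((t - 400e-9) / 30e-9) ** 2)         # return centred at 400 ns

delay = 10                                          # samples (differential distance)
diff = echo[delay:] - echo[:-delay]                 # finite difference

# The negative-going zero crossing of the difference marks the peak.
idx = np.where((diff[:-1] > 0) & (diff[1:] <= 0))[0][0]
print(f"estimated return at {t[idx + delay // 2] * 1e9:.0f} ns")
```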
Show Figures

Figure 1
Comparison between uniform and non-uniform time-slice sequences. (a) Multiple targets are illuminated by a pulsed laser. (b) Traditional uniform time-slice method (UTSM); the interval between neighboring time slices is uniform. (c) Non-uniform time-slice method (NUTSM); time slices are located at peak positions.
Figure 2
Principle of 3D ghost imaging (GI) based on the non-uniform time-slice method (NUTSM) combined with finite difference.
Figure 3
Echo signal comparison between peak and zero crossing. (a) Peak position by a single time-resolved bucket detector. (b) Zero crossing by finite difference.
Figure 4
Step I: Depth information of targets is extracted based on finite difference.
Figure 5
Step II: 3DGI is constructed by using non-uniform time slices.
Figure 6
3D target models: (a) Perspective view and (b) side view.
Figure 7
Results of echo signal based on finite difference, 2D, and 3DGI: (a) Echo signals based on a single detector and finite difference, (b)–(f) 2D images under different distances (500–508 m), and (g) 3DGI obtained by combining 2DGI. All the above results are without any post-processing.
Figure 8
Comparative results: (a) and (b) are 3DGI based on the uniform time-slice method (UTSM) and NUTSM, respectively; (c) echo signals based on UTSM, with points A and B representing the corresponding positions of the time slices; and (d) echo signals based on NUTSM, with points A' and B' representing the corresponding positions of the time slices.
Figure 9
3DGI and 2DGI based on UTSM: (a) Echo signal based on a single detector, (b) 3DGI via combining 2DGI, and (c) corresponding 2D images under different time slices in (a).
Figure 10
Comparative results based on NUTSM and UTSM. (a) is based on NUTSM and (b) is based on UTSM. (c) and (e) are the ghost images at positions ③ and ④ corresponding to (a). (d) and (f) are the ghost images at positions ⑤ and ⑦ corresponding to (b).
Figure 11
Comparative results based on finite difference and peak detection: (a) Root-mean-square errors (RMSEs) based on finite difference and peak detection, and (b) relationship between relative position and signal-to-noise ratio (SNR).
Figure 12
The performance of the time slice under different differential optical paths. (a) Echo signal power vs. time. (b) Sensitivity vs. differential distance.
Figure 13
Example of the issue of the difference not being exactly zero when taking the signal difference. (a) The echo signal from the depth of the field of view. (b) Zoomed area for the black rectangle in (a).
17 pages, 6799 KiB  
Article
Robust Kalman Filter Aided GEO/IGSO/GPS Raw-PPP/INS Tight Integration
by Zhouzheng Gao, You Li, Yuan Zhuang, Honglei Yang, Yuanjin Pan and Hongping Zhang
Sensors 2019, 19(2), 417; https://doi.org/10.3390/s19020417 - 21 Jan 2019
Cited by 14 | Viewed by 5684
Abstract
Reliable and continuous navigation solutions are essential for high-accuracy location-based services. Currently, the real-time kinematic (RTK) Global Positioning System (GPS) is widely utilized to satisfy such requirements. However, RTK's accuracy and continuity are limited by the insufficient number of visible satellites and the increasing length of baselines between reference stations and rovers. Recently, benefiting from the development of precise point positioning (PPP) and the BeiDou satellite navigation system (BDS), the issues existing in GPS RTK can be mitigated by using GPS and BDS together. However, the number of visible GPS + BDS satellites may decrease in dynamic environments. Therefore, an inertial navigation system (INS) is adopted to bridge GPS + BDS PPP solutions during signal outage periods. Meanwhile, because the quality of BDS geostationary Earth orbit (GEO) satellites is much lower than that of inclined geosynchronous orbit (IGSO) satellites, a robust extended Kalman filter (R-EKF) based on predicted observation residuals is adopted to adjust the weight of GEO and IGSO data. In this paper, the mathematical model of the R-EKF aided GEO/IGSO/GPS PPP/INS tight integration, which uses the raw observations of GPS + BDS, is presented. Then, the influences of GEO, IGSO, INS, and the R-EKF on PPP are evaluated by processing land-borne vehicle data. Results indicate that (1) both GEO and IGSO improve the accuracy of GPS PPP; however, the contribution of IGSO is much more visible than that of GEO; (2) PPP's accuracy and stability can be further improved by using INS; (3) the R-EKF is helpful for adjusting the weight of GEO and IGSO in the GEO/IGSO/GPS PPP/INS tight integration and provides significantly higher positioning accuracy. Full article
(This article belongs to the Section Remote Sensors)
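The residual-based reweighting can be sketched with an IGG-style function, used here as an assumed stand-in for the paper's scheme: the standardized predicted residual keeps, inflates, or rejects each observation's measurement variance before the EKF update.

```python
import numpy as np

def robust_variance(residual, sigma, k0=1.5, k1=4.0):
    """Inflate the measurement variance of one observation (IGG-III style)."""
    v = abs(residual) / sigma            # standardized predicted residual
    if v <= k0:
        return sigma ** 2                # well-behaved: keep nominal weight
    if v <= k1:
        return sigma ** 2 * (v / k0) * ((k1 - k0) / (k1 - v)) ** 2
    return np.inf                        # outlier: effectively rejected

for r in (0.3, 1.2, 2.5):                # residuals in metres, sigma = 0.5 m
    print(r, robust_variance(r, sigma=0.5))
```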
Show Figures

Figure 1
Sky-plots of GPS satellites (MEO) (a) and BDS GEO/IGSO/MEO regional satellites (b). MEO is medium Earth orbit, BDS is BeiDou satellite navigation system, GEO is geostationary Earth orbit, and IGSO is inclined geosynchronous satellite orbit.
Figure 2
Algorithm structure of the R-EKF aided GPS/BDS PPP/INS tight integration. R-EKF is robust extended Kalman filter, PPP is precise point positioning, INS is inertial navigation system, IMU is inertial measurement unit, and PCO/PCV is phase center offset/variation.
Figure 3
IMUs and GPS/BDS antenna (a) used in the test and their locations on the land-borne vehicle (b).
Figure 4
Trajectory in the navigation frame (a) and the 3D velocity and movement direction (b) of the vehicle.
Figure 5
Satellite sky plot (a) and the corresponding position dilution of precision (PDOP) (b) during the land-borne test.
Figure 6
Position offsets of PPP (a) and PPP/INS tight integration (b) compared with GPS + BDS RTK/INS tight integration solutions.
Figure 7
Position offsets of robust PPP/INS tight integration (a) and position RMS of all data processing modes (b); in subfigure (b), 1–4 denote PPP using GPS, GPS + GEO, GPS + IGSO, and GPS + GEO + IGSO; 5–8 denote PPP/INS tight integration using GPS, GPS + GEO, GPS + IGSO, and GPS + GEO + IGSO; 9–10 denote robust PPP/INS tight integration using GPS + GEO, GPS + IGSO, and GPS + GEO + IGSO.
Figure 8
Azimuth-elevation based sky plot of the observed GEO, IGSO, and GPS satellites.
Figure 9
Code residuals (a) and phase residuals (b) of GEO, IGSO, and GPS satellites.
Figure 10
Velocity offsets (a) of the PPP, PPP/INS, and robust PPP/INS tight integration models, and attitude offsets (b) of PPP/INS with and without the R-EKF.
Figure 11
Heading angles calculated from PPP velocity (top) and the corresponding horizontal velocity (bottom).
13 pages, 7231 KiB  
Article
Ambulatory Evaluation of ECG Signals Obtained Using Washable Textile-Based Electrodes Made with Chemically Modified PEDOT:PSS
by Amale Ankhili, Xuyuan Tao, Cédric Cochrane, Vladan Koncar, David Coulon and Jean-Michel Tarlet
Sensors 2019, 19(2), 416; https://doi.org/10.3390/s19020416 - 21 Jan 2019
Cited by 46 | Viewed by 6606
Abstract
The development of washable PEDOT:PSS (poly(3,4-ethylenedioxythiophene) polystyrene sulfonate) polyamide textile-based electrodes is an interesting alternative to the traditional Ag/AgCl disposable electrodes usually used in clinical practice, helping to improve medical assessment and treatment before the appearance or progression of patients' cardiovascular symptoms. This study was conducted in order to determine whether the physical properties of PEDOT:PSS had a significant impact on the coated electrodes' electrocardiogram (ECG) signal quality, particularly after 50 washing cycles in a domestic laundry machine. The tests performed included the comparison of two PEDOT:PSS solutions in terms of viscosity, with emphasis on wetting tests including surface tension and contact angle measurements. In addition, polyamide textile fabrics were used as the substrate to make thirty electrodes and to characterize the amount of PEDOT:PSS absorbed as a function of time. The results showed that the surface tension of PEDOT:PSS had a significant impact on the wetting of the polyamide textile fabric and consequently on the absorbed amount; in fact, lower surface tension of the solution leads to lower contact angles between PEDOT:PSS and the textile fabric (good wettability). Before washing, no significant difference was observed among the signal-to-noise ratios (SNR) measured for electrodes coated with the two PEDOT:PSS solutions. However, after 50 washing cycles, the SNR decreased strongly for electrodes coated with the lower-viscosity solution, since it contained less solid content. This was confirmed by scanning electron microscopy (SEM) images and also by analyzing the color change of the electrodes based on the calculation of CIELAB color space coordinates. Moreover, the spectral power density of the recorded ECG signals was computed and presented. All cardiac waves were still visible in the ECG signals after 50 washing cycles. Furthermore, an experienced cardiologist considered all the acquired ECG signals acceptable. Accordingly, our newly developed polyamide textile-based electrodes seem suitable for long-term monitoring. The study also provided new insights into the better choice of PEDOT:PSS formulation as a function of a specific process, in order to manufacture cheaper electrodes faster. Full article
(This article belongs to the Special Issue Smart Textiles and Wearable Sensors)
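One plausible way to compute the SNR used to compare electrodes is sketched below: band-pass the recording to isolate the ECG energy and treat the out-of-band remainder as noise. The band edges and the dB definition are assumptions, not the paper's protocol.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def ecg_snr_db(x, fs, band=(0.5, 40.0)):
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    signal = filtfilt(b, a, x)           # in-band component
    noise = x - signal                   # out-of-band remainder
    return 10.0 * np.log10(np.sum(signal ** 2) / np.sum(noise ** 2))

fs = 500.0
t = np.arange(0.0, 10.0, 1.0 / fs)
ecg_like = np.sin(2 * np.pi * 1.2 * t)   # ~72 bpm fundamental
noisy = ecg_like + 0.1 * np.random.default_rng(2).normal(size=t.size)
print(f"{ecg_snr_db(noisy, fs):.1f} dB")
```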
Show Figures

Figure 1
Illustration of Wilhelmy plate method.
Figure 2
Contact angle measurement with 3S instrument.
Figure 3
(a) Polyamide textile fabrics before immersion coating; (b) Polyamide textile electrodes coated by PEDOT:PSS (S1); (c) Polyamide textile electrodes coated by PEDOT:PSS (S2).
Figure 4
Polyamide textile electrodes sewn onto textile substrate before washing.
Figure 5
CIELAB color space.
Figure 6
The set-up of electrocardiogram (ECG) measurement.
Figure 7
Surface tension illustration.
Figure 8
Weight of PEDOT:PSS absorbed by polyamide textile fabric during 300 s: (a) by using S1; (b) by using S2.
Figure 9
ECG signals, before washing, obtained by (a) polyamide textile electrodes coated by S1; (b) polyamide textile electrodes coated by S2.
Figure 10
ECG signals, after 50 washing cycles, obtained by (a) polyamide textile electrodes coated by S1; (b) polyamide textile electrodes coated by S2.
Figure 11
Power spectral densities of ECG signal measured from electrodes made by (a) S1 solution; (b) S2 solution.
Figure 12
SEM images of polyamide electrodes coated by S1 and S2 solutions: (a) S1 before washing; (b) S1 after 50 washes; (c) S2 before washing; (d) S2 after 50 washes.
18 pages, 712 KiB  
Article
A Regulatory View on Smart City Services
by Mario Weber and Ivana Podnar Žarko
Sensors 2019, 19(2), 415; https://doi.org/10.3390/s19020415 - 21 Jan 2019
Cited by 38 | Viewed by 7928
Abstract
Even though various commercial Smart City solutions are widely available on the market, we are still witnessing their rather limited adoption, where solutions are typically bound to specific verticals or remain in pilot stages. In this paper we argue that the lack of a Smart City regulatory framework is one of the major obstacles to a wider adoption of Smart City services in practice. Such a framework should be accompanied by examples of good practice which stress the necessity of adopting interoperable Smart City services. Development and deployment of Smart City services can incur significant costs to cities, service providers and sensor manufacturers, and thus it is vital to adjust national legislation to ensure legal certainty to all stakeholders, and at the same time to protect the interests of the citizens and the state. Additionally, due to the vast number of heterogeneous devices and Smart City services, both existing and future, their interoperability becomes vital for service replicability and massive deployment, leading to the digital transformation of future cities. The paper provides a classification of technical and regulatory characteristics of IoT services for Smart Cities, which are mapped to corresponding roles in the IoT value chain. Four example use cases (Smart Parking, Smart Metering, Smart Street Lighting and Mobile Crowd Sensing) are chosen to showcase the legal implications relevant to each service. Based on the analysis, we propose a set of recommendations for each role in the value chain related to the regulatory requirements of the aforementioned Smart City services. The analysis and recommendations serve as examples of good practice, in the hope that they will facilitate a wider adoption and longevity of IoT-based Smart City services. Full article
(This article belongs to the Special Issue Smart Cities)
Show Figures

Figure 1
IoT value chain model.
Figure 2
Taxonomy of Smart City service characteristics.
19 pages, 1735 KiB  
Article
A Pattern-Based Approach for Detecting Pneumatic Failures on Temporary Immersion Bioreactors
by Octavio Loyola-González, Miguel Angel Medina-Pérez, Dayton Hernández-Tamayo, Raúl Monroy, Jesús Ariel Carrasco-Ochoa and Milton García-Borroto
Sensors 2019, 19(2), 414; https://doi.org/10.3390/s19020414 - 20 Jan 2019
Cited by 8 | Viewed by 5657
Abstract
Temporary Immersion Bioreactors (TIBs) are used for increasing plant quality and plant multiplication rates. These TIBs are actuated by means of a pneumatic system. A failure in the pneumatic system could produce severe damage to the TIB; consequently, the whole biological process would be aborted, increasing the production cost. Therefore, an important task is to detect failures in a temporary immersion bioreactor system. In this paper, we propose to approach this task using a contrast pattern-based classifier. We show that our proposal for detecting pneumatic failures in a TIB outperforms other approaches reported in the literature. In addition, we introduce a feature representation based on the differences among feature values. Additionally, we collected a new pineapple micropropagation database for detecting four new types of pneumatic failures in TIBs. Finally, we provide an analysis of our experimental results together with experts in both biotechnology and pneumatic devices. Full article
(This article belongs to the Section Physical Sensors)
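The difference-based feature representation mentioned in the abstract can be illustrated compactly; our reading (an assumption, with invented numbers) is that each sample is augmented with the pairwise differences among its feature values, so that a pattern-based classifier can contrast sensor channels directly.

```python
import numpy as np
from itertools import combinations

def with_pairwise_differences(x):
    """x: 1-D array of sensor readings for one time window."""
    diffs = [x[i] - x[j] for i, j in combinations(range(x.size), 2)]
    return np.concatenate([x, diffs])

sample = np.array([1.10, 0.95, 1.42])      # e.g., three pressure readings
print(with_pairwise_differences(sample))   # [1.1 0.95 1.42 0.15 -0.32 -0.47]
```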
Show Figures

Figure 1
Temporary immersion bioreactor diagram.
Figure 2
Operating cycle of the TIB: (a) non-immersed stage; (b) beginning of the immersed stage; and (c) immersed stage.
Figure 3
Phases inside and outside TIBs: (a) multiplication: typical shoots; (b) elongation: commonly produced into containers of high volume; (c) rooting: previous phase before carrying a plant to field; and (d) acclimatization: final phase where plants adapt to the environment (i.e., outside the TIB).
Figure 4
The new failures that we propose to detect in the TIBs. SCADA refers to Supervisory Control And Data Acquisition; PLC refers to Programmable Logic Controller; and HDMI refers to High-Definition Multimedia Interface, which is used for communicating with the SCADA system.
Figure 5
A representation based on time series for the average data of normal class and each of the six faulty classes.
Figure 6
Average ranking for the tested classifiers, according to AUC vs ZFP. Those results closest to the origin (1,1) are the best considering both performance metrics; in addition, those enclosed in an ellipse have no statistical differences among them regarding both evaluated measures.
Figure 7
Average AUC vs average ZFP for the tested classifiers. Those results closest to the upper right corner are the best considering both performance metrics; in addition, those enclosed in an ellipse have no statistical differences among them regarding both evaluated measures.
10 pages, 2380 KiB  
Article
Three-Dimensional Monitoring of Plant Structural Parameters and Chlorophyll Distribution
by Kenta Itakura, Itchoku Kamakura and Fumiki Hosoi
Sensors 2019, 19(2), 413; https://doi.org/10.3390/s19020413 - 20 Jan 2019
Cited by 20 | Viewed by 6110
Abstract
Image analysis is widely used for accurate and efficient plant monitoring. Plants have complex three-dimensional (3D) structures; hence, 3D image acquisition and analysis is useful for determining the status of plants. Here, 3D images of plants were reconstructed using a photogrammetric approach called "structure from motion". Chlorophyll content is an important parameter that indicates the status of plants, and here it was estimated from the 3D images of the plants with color information. To observe changes in the chlorophyll content and plant structure, a potted plant was kept for five days under a water stress condition and its 3D images were taken once a day. As a result, the normalized Red value and the chlorophyll content were correlated, with a high R2 value (0.81). The absolute error of the chlorophyll content estimation in cross-validation studies was 4.0 × 10−2 μg/mm2. At the same time, the structural parameters (i.e., the leaf inclination angle and the azimuthal angle) were calculated, so that changes in the plant's status could be monitored simultaneously in terms of chlorophyll content and structure. By combining these parameters in plant image analysis, early detection of plant stressors, such as water stress, becomes possible. Full article
(This article belongs to the Section Remote Sensors)
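The color-based estimation amounts to a one-variable regression. A hedged Python sketch follows, with a linear form and made-up calibration points chosen only to illustrate a strong correlation, not the paper's actual calibration:

```python
import numpy as np

def normalized_red(rgb):
    """rgb: (n, 3) colors of 3D points; returns R / (R + G + B)."""
    rgb = np.asarray(rgb, dtype=float)
    return rgb[:, 0] / rgb.sum(axis=1)

# Calibration against measured chlorophyll (illustrative numbers):
r = np.array([0.28, 0.31, 0.35, 0.40, 0.44])    # normalized Red values
chl = np.array([0.52, 0.45, 0.38, 0.27, 0.20])  # chlorophyll, ug/mm^2
slope, intercept = np.polyfit(r, chl, 1)
print(slope * 0.33 + intercept)                 # estimate at r = 0.33
```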
Show Figures

Graphical abstract
Figure 1
Flowchart of the validation of chlorophyll estimation from the plant 3D images.
Figure 2
Relationship between the normalized Red value at different points in the plant 3D images and the local plant chlorophyll content.
Figure 3
Reconstructed 3D images and distributions of chlorophyll content. Panels (a) and (c) show the same 3D images viewed from different angles. Panels (b) and (d) show the corresponding distributions of the chlorophyll content.
Figure 4
Examples of temporal snapshots of chlorophyll content, inclination angle, and azimuthal angle distributions within one leaf. Histograms (a–c), (d–f), and (g–i) capture the information for days 1, 3, and 5, respectively.
Figure 5
Time series data of chlorophyll content, inclination angle, and azimuthal angle, at a small point (leaf centroid, left edge, and right edge) within one leaf. The profiles of three leaves are illustrated. Panels (a–c), (d–f), and (g–i) capture the information for leaves 1, 2, and 3, respectively. Circles, squares, and triangles show the results for the leaves' centroids, left edges, and right edges, respectively.
17 pages, 1923 KiB  
Article
Differential Equation-Based Prediction Model for Early Change Detection in Transient Running Status
by Xin Wen, Guangyuan Chen, Guoliang Lu, Zhiliang Liu and Peng Yan
Sensors 2019, 19(2), 412; https://doi.org/10.3390/s19020412 - 20 Jan 2019
Cited by 4 | Viewed by 4344
Abstract
Early detection of changes in transient running status from sensor signals is attracting increasing attention in modern industries. To this end, this paper presents a new differential equation-based prediction model that realizes one-step-ahead prediction of machine status. Together with this model, a [...] Read more.
Early detection of changes in transient running status from sensor signals is attracting increasing attention in modern industries. To this end, this paper presents a new differential equation-based prediction model that realizes one-step-ahead prediction of machine status. Together with this model, a continuous monitoring scheme for the condition signal, based on null hypothesis testing, is presented to diagnose whether an abnormal status change occurs during successive machine operations. The detection operation is executed periodically and continuously, so that the machine running status can be monitored in an online, real-time manner. The effectiveness of the proposed method is demonstrated in three representative real-engineering applications: external loading status monitoring, bearing health status monitoring, and speed condition monitoring. The method is also compared with benchmark methods reported in the literature. The results show significant improvements over these methods, suggesting its superiority and great potential in real applications. Full article
(This article belongs to the Special Issue Sensors for Prognostics and Health Management)
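The abstract does not reproduce the differential-equation model itself, so the sketch below substitutes a generic linear one-step-ahead predictor; what it illustrates is only the detection logic of predicting one step ahead and testing the residual against a null band estimated from normal operation. All coefficients and thresholds here are illustrative assumptions:

```python
import numpy as np

def one_step_predict(history, a1, a2):
    """One-step-ahead prediction from the last two samples (a stand-in
    linear predictor, not the paper's differential-equation model)."""
    return a1 * history[-1] + a2 * history[-2]

def detect_changes(signal, a1, a2, train_len=200, n_sigma=6.0):
    """Flag sample indices whose prediction residual leaves the null band
    (mean +/- n_sigma * std) estimated from an in-control training segment."""
    residuals = np.array([signal[t] - one_step_predict(signal[:t], a1, a2)
                          for t in range(2, len(signal))])
    mu, sigma = residuals[:train_len].mean(), residuals[:train_len].std()
    return [t + 2 for t, r in enumerate(residuals) if abs(r - mu) > n_sigma * sigma]

# Example: a noisy sinusoid whose amplitude jumps at t = 400. A pure sinusoid
# of angular frequency w satisfies x[t] = 2*cos(w)*x[t-1] - x[t-2], so the
# amplitude change shows up as a residual spike near t = 400.
rng = np.random.default_rng(0)
t = np.arange(800)
x = np.sin(0.1 * t) * np.where(t < 400, 1.0, 1.8) + 0.01 * rng.normal(size=800)
print(detect_changes(x, a1=2 * np.cos(0.1), a2=-1.0))
```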
Figure 1: Results on testing data containing different changes: (a) amplitude change; (b) frequency change; (c) amplitude and frequency change.
Figure 2: Flowchart of the proposed framework.
Figure 3: An example of change detection for a load condition change from 1 hp to 2 hp: (a) DE model; (b) ARIMA model; (c) kurtosis; (d) RMS.
Figure 4: Results of change detection for bearing early-failure detection: (a) bearing 1; (b) bearing 2; (c) bearing 3; (d) bearing 4. In each piece of testing data: (1) the result of the DE model; (2) the result of the ARIMA model; (3) the result of kurtosis; and (4) the result of RMS.
Figure 5: Experimental setup.
Figure 6: An example of change detection for a speed condition change from 350 rpm to 400 rpm: (a) DE model; (b) ARIMA model; (c) kurtosis; (d) RMS.
17 pages, 878 KiB  
Article
Experimentation Management in the Co-Created Smart-City: Incentivization and Citizen Engagement
by Johnny Choque, Luis Diez, Arturo Medela and Luis Muñoz
Sensors 2019, 19(2), 411; https://doi.org/10.3390/s19020411 - 20 Jan 2019
Cited by 13 | Viewed by 4432
Abstract
Under the smart city paradigm, cities are changing at a rapid pace. In this context, it is necessary to develop tools that allow service providers to perform rapid deployments of novel solutions that can be validated by citizens. In this sense, the OrganiCity [...] Read more.
Under the smart city paradigm, cities are changing at a rapid pace. In this context, it is necessary to develop tools that allow service providers to perform rapid deployments of novel solutions that can be validated by citizens. In this sense, the OrganiCity experimentation-as-a-service platform offers a unique solution for experimenting with new urban services in a co-creative way among all the involved stakeholders. On top of this, it is also necessary to ensure that users are engaged in the experimentation process, so as to guarantee that the resulting services actually fulfill their needs. In this work, we present the engagement monitoring framework that has been developed within the OrganiCity platform. This framework permits the tailored definition of metrics according to the experiment’s characteristics and provides valuable information about how citizens react to service modifications and incentivization campaigns. Full article
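OrganiCity’s actual API is not shown here; the following sketch only illustrates the general idea of a tailored engagement metric: counting a participant’s logged actions inside a time window and mapping the count through a saturating utility function (one of several shapes an experimenter could choose). All names, records, and parameters are hypothetical:

```python
from datetime import datetime, timedelta

# Hypothetical log records: (participant_id, action, timestamp).
logs = [
    ("user-01", "annotation", datetime(2017, 5, 2, 10, 15)),
    ("user-01", "discovery", datetime(2017, 5, 2, 11, 40)),
    ("user-02", "annotation", datetime(2017, 5, 3, 9, 5)),
]

def engagement(participant, window_start, window_end, saturation=10):
    """Count a participant's logged actions in a time window and map the
    count through a saturating (capped linear) utility function, one of
    the shapes an experimenter might define."""
    count = sum(1 for pid, _, ts in logs
                if pid == participant and window_start <= ts < window_end)
    return min(count / saturation, 1.0)

start = datetime(2017, 5, 1)
print(engagement("user-01", start, start + timedelta(days=7)))  # 0.2
```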
Figure 1: Overarching description of the OrganiCity EaaS facility.
Figure 2: Overall OrganiCity experimentation flow.
Figure 3: Illustrative example of an experiment and the communities defined during it. In both cases, the relevant identifiers are highlighted.
Figure 4: Examples of participants and metrics defined for one experiment.
Figure 5: Description of the logging system. Currently, the annotation and discovery services provide the system with the logs used to define metrics.
Figure 6: Illustrative examples of the log information and the value obtained by one metric, for one participant.
Figure 7: Example of user-defined utility functions. The graph shows the three function shapes currently available within OrganiCity.
Figure 8: OrganiCity service adoption level during the first and second open calls.
Figure 9: Technical readiness evolution of the experiments during the OrganiCity second open call, according to experimenters’ feedback. Concrete experiment names are omitted.
27 pages, 2654 KiB  
Article
Visible-Light Camera Sensor-Based Presentation Attack Detection for Face Recognition by Combining Spatial and Temporal Information
by Dat Tien Nguyen, Tuyen Danh Pham, Min Beom Lee and Kang Ryoung Park
Sensors 2019, 19(2), 410; https://doi.org/10.3390/s19020410 - 20 Jan 2019
Cited by 13 | Viewed by 5790
Abstract
Face-based biometric recognition systems that can recognize human faces are widely employed in places such as airports, immigration offices, and companies, and applications such as mobile phones. However, the security of this recognition method can be compromised by attackers (unauthorized persons), who might [...] Read more.
Face-based biometric recognition systems that can recognize human faces are widely employed in places such as airports, immigration offices, and companies, and in applications such as mobile phones. However, the security of this recognition method can be compromised by attackers (unauthorized persons), who might bypass the recognition system using artificial facial images. In addition, most previous studies on face presentation attack detection (PAD) have utilized only spatial information. To address this problem, we propose a visible-light camera sensor-based presentation attack detection method based on both spatial and temporal information, using deep features extracted by a stacked convolutional neural network (CNN)-recurrent neural network (RNN) along with handcrafted features. Through experiments on two public datasets, we demonstrate that the temporal information is sufficient for detecting attacks using face images. In addition, we establish that the handcrafted image features efficiently enhance the detection performance of the deep features, and that the proposed method outperforms previous methods. Full article
(This article belongs to the Special Issue Deep Learning-Based Image Sensors)
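As a minimal sketch of the stacked CNN-RNN idea (layer sizes, depths, and the classification head below are illustrative, and the handcrafted MLBP branch and the fusion step are omitted), a small CNN can encode each frame while an LSTM aggregates the per-frame features over time:

```python
import torch
import torch.nn as nn

class StackedCnnRnn(nn.Module):
    """A minimal stacked CNN-RNN sketch: a small CNN encodes each frame,
    an LSTM aggregates the per-frame features over time, and a linear
    head scores real vs. attack. Sizes are illustrative, not the
    architecture trained in the paper."""
    def __init__(self, feat_dim=64, hidden_dim=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)  # real face vs. presentation attack

    def forward(self, clips):             # clips: (batch, time, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        _, (h_n, _) = self.rnn(feats)     # last hidden state summarizes the clip
        return self.head(h_n[-1])

scores = StackedCnnRnn()(torch.randn(2, 8, 3, 64, 64))
print(scores.shape)  # torch.Size([2, 2])
```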
Figure 1: Working sequence of the proposed method for face-PAD.
Figure 2: Demonstration of the preprocessing step: (a) input face image from the NUAA dataset [16]; (b) face region detected on the input image using the ERT method; (c) face region aligned using the center points of the face and of the left and right eyes; (d) final extracted face region.
Figure 3: Demonstration of an RNN network: (a) a simple RNN cell; (b) structure of a standard LSTM cell.
Figure 4: General architecture of a stacked CNN-RNN network for temporal image feature extraction.
Figure 5: Handcrafted image feature extraction using the MLBP method: (a) an input face image from the NUAA dataset [16]; (b) formation of the MLBP features of (a) (left: encoded LBP image; right: LBP features).
Figure 6: Feature-level fusion approach.
Figure 7: Score-level fusion approach.
Figure 8: Convergence graphs (accuracy and loss) of the training procedure on the CASIA dataset.
Figure 9: DET curves of the face-PAD systems using various feature combination approaches on a testing subset of the CASIA dataset.
Figure 10: Convergence graphs (accuracy and loss) of the training procedure on the Replay-mobile dataset.
15 pages, 4359 KiB  
Article
Evaluation of a Wi-Fi Signal Based System for Freeway Traffic States Monitoring: An Exploratory Field Test
by Fan Ding, Xiaoxuan Chen, Shanglu He, Guangming Shou, Zhen Zhang and Yang Zhou
Sensors 2019, 19(2), 409; https://doi.org/10.3390/s19020409 - 20 Jan 2019
Cited by 12 | Viewed by 4229
Abstract
Monitoring traffic states on the road is attracting increasing attention from traffic management authorities. To complete the picture of real-time traffic states, novel data sources have been introduced and studied in the transportation community for decades. This paper explores a supplementary and novel [...] Read more.
Monitoring traffic states on the road is attracting increasing attention from traffic management authorities. To complete the picture of real-time traffic states, novel data sources have been introduced and studied in the transportation community for decades. This paper explores a supplementary and novel data source, Wi-Fi signal data, to extract traffic information through a well-designed system. An IoT (Internet of Things)-based Wi-Fi signal detector consisting of a solar power module, a high-capacity module, and an IoT functioning module was constructed to collect Wi-Fi signal data. On this basis, a filtration and mining algorithm was developed to extract traffic state information (i.e., travel time, traffic volume, and speed). In addition, to evaluate the performance of the proposed system, a field test was conducted in which the system monitored the traffic states of a major corridor in China. Comparison with loop data indicated that the traffic speed obtained from the system was consistent with that collected from loop detectors; the mean absolute percentage error reached 3.55% in the best case. Furthermore, a preliminary analysis confirmed a strong correlation between the volumes obtained from the system and those from loop detectors. The evaluation confirmed the feasibility of applying Wi-Fi signal data to the acquisition of traffic information, indicating that Wi-Fi signal data could serve as a supplementary data source for monitoring real-time traffic states. Full article
(This article belongs to the Section Sensor Networks)
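The core of the travel-time extraction can be sketched as MAC-address re-identification between two detectors: devices seen at both sites yield travel times, implausible matches (e.g., pedestrians or parked devices) are filtered out, and the median travel time converts to a segment speed. The detector spacing, thresholds, and sightings below are hypothetical, not the paper's field-test values:

```python
from statistics import median

SEGMENT_KM = 2.0  # hypothetical spacing between the two Wi-Fi detectors

# Hypothetical sightings: MAC address -> detection time (s) at each site.
upstream   = {"aa:bb:01": 100.0, "aa:bb:02": 130.0, "aa:bb:03": 140.0}
downstream = {"aa:bb:01": 190.0, "aa:bb:02": 205.0, "aa:bb:04": 300.0}

def segment_speed(upstream, downstream, min_s=30.0, max_s=600.0):
    """Re-identify MAC addresses seen at both detectors, turn the time
    difference into a travel time, filter implausible matches, and
    report the median speed over the segment in km/h."""
    times = [downstream[mac] - upstream[mac]
             for mac in upstream.keys() & downstream.keys()]
    valid = [dt for dt in times if min_s <= dt <= max_s]
    return median(SEGMENT_KM / dt * 3600 for dt in valid) if valid else None

print(segment_speed(upstream, downstream))  # 88.0 km/h for the data above
```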
Figure 1: Schematic diagram of the proposed Wi-Fi signal-based traffic monitoring system.
Figure 2: (a) The abstract architecture of the designed detector; (b) the appearance of the detector; (c) the principle circuit of the detector.
Figure 3: Illustration of an example state for data packets in a Wi-Fi channel.
Figure 4: General view of the research corridor, together with the detailed layout and indications of the testbed (note: the background map photo is from Google Maps).
Figure 5: Comparisons of speed and volume obtained from the Wi-Fi signal-based system and from loop detectors over time: (a) speed, Beijing to Shanghai; (b) speed, Shanghai to Beijing; (c) volume, Beijing to Shanghai; (d) volume, Shanghai to Beijing.
Figure 6: Absolute percentage errors over time for speed (a) and volume (b) between loop data and the outputs of the Wi-Fi signal-based system.
Figure 7: Average vehicle gross weight (a) and length (b) over time.
Figure 8: Comparison of speed distributions between the numbers of vehicles and of unique MAC addresses (after filtering and mining): (a) Beijing to Shanghai, 22 April 2017; (b) Shanghai to Beijing, 22 April 2017; (c) Beijing to Shanghai, 23 April 2017; (d) Shanghai to Beijing, 23 April 2017.
24 pages, 4406 KiB  
Article
Infrared-Inertial Navigation for Commercial Aircraft Precision Landing in Low Visibility and GPS-Denied Environments
by Lei Zhang, Zhengjun Zhai, Lang He, Pengcheng Wen and Wensheng Niu
Sensors 2019, 19(2), 408; https://doi.org/10.3390/s19020408 - 20 Jan 2019
Cited by 19 | Viewed by 6875
Abstract
This paper proposes a novel infrared-inertial navigation method for the precise landing of commercial aircraft in low visibility and Global Positioning System (GPS)-denied environments. Within a Square-root Unscented Kalman Filter (SR_UKF), inertial measurement unit (IMU) data, forward-looking infrared (FLIR) images and airport geo-information [...] Read more.
This paper proposes a novel infrared-inertial navigation method for the precise landing of commercial aircraft in low visibility and Global Positioning System (GPS)-denied environments. Within a Square-root Unscented Kalman Filter (SR_UKF), inertial measurement unit (IMU) data, forward-looking infrared (FLIR) images and airport geo-information are integrated to estimate the position, velocity and attitude of the aircraft during landing. A homography between the synthetic image and the real image, which encodes the camera pose deviations, is constructed as the vision measurement. To accurately extract real runway features, the current runway detection results are used as prior knowledge for detection in the next frame. To avoid ambiguity among the possible homography decomposition solutions, the homography is directly converted to a vector and fed to the SR_UKF. Moreover, the proposed navigation system is proven to be observable by a nonlinear observability analysis. Finally, a general aircraft was carefully equipped with vision and inertial sensors to collect flight data for algorithm verification. The experimental results demonstrate that the proposed method can be used for the precise landing of commercial aircraft in low visibility and GPS-denied environments. Full article
(This article belongs to the Special Issue Aerospace Sensors and Multisensor Systems)
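The vision measurement can be illustrated with the standard plane-induced homography H = K(R − t·nᵀ/d)K⁻¹ between two views of the runway plane; as in the paper, the matrix is not decomposed but instead normalized and flattened into a 9-vector for the filter. The intrinsics and pose values below are illustrative, not the flight-test calibration:

```python
import numpy as np

def homography_measurement(K, R, t, n, d):
    """Plane-induced homography H = K (R - t n^T / d) K^{-1} between two
    views of the runway plane, normalized and flattened to a 9-vector so
    it can be fed to the SR_UKF without decomposing it."""
    H = K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)
    H /= H[2, 2]                  # normalize so the measurement is scale-consistent
    return H.ravel()

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # illustrative intrinsics
R = np.eye(3)                      # small pose deviation: pure translation here
t = np.array([0.5, 0.0, 0.1])      # camera translation between the two views (m)
n = np.array([0.0, 0.0, 1.0])      # runway-plane normal in the camera frame
d = 50.0                           # distance from the camera to the plane (m)
print(homography_measurement(K, R, t, n, d))
```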
Figure 1: Approach and landing procedure.
Figure 2: Framework of the proposed landing navigation; the blue box is the core part of the proposed approach.
Figure 3: Homography between synthetic and real images.
Figure 4: Reference frames and runway model.
Figure 5: Projection of features in the synthetic image.
Figure 6: Real runway detection: the black solid rectangle is the runway ROI, the red lines are the extracted line segments, the blue quadrangle is the synthetic runway contour, the black dashed rectangles are the neighborhoods of the runway edges, and the green quadrangle is the fitted runway edge.
Figure 7: The flight data acquisition platform: (a) ISS; (b) ISS installation; (c) aircraft landing; (d) instruments for flight data acquisition; (e) DGPS ground station.
Figure 8: Block diagram of the experimental platform.
Figure 9: Line segment extraction from the ROI: (a) EDLines: 173 lines, 3.1 ms; (b) LSD: 213 lines, 17.1 ms.
Figure 10: Runway detection at typical flight heights: (a) 200 ft; (b) 100 ft; (c) 60 ft.
Figure 11: Approach and landing trajectory.
Figure 12: Errors of motion estimation: (a) position errors; (b) attitude errors; (c) velocity errors.
Figure 13: Flight height among the six modes during landing.
Previous Issue
Back to Top