Sensors, Volume 19, Issue 14 (July-2 2019) – 213 articles

Cover Story (view full-size image): 5G technology will enable the development of a plethora of new services in vehicular scenarios. In this context, the infrastructure must guarantee the availability of network resources to different types of applications and users. To meet this demand, network slicing in 5G advocates mechanisms to assure quality of service (QoS) to specific data flows and subscribers. In this work, we aim to bring the slicing concept to reality in the vehicular domain. Hence, we present a slicing framework in an experimental vehicular test bench built on a mobile edge computing (MEC)-based architecture. It permits traffic differentiation to ensure flow isolation, resource assignment, and network scalability for the Internet of Vehicles (IoV). The presented results demonstrate the validity of the solution in terms of short and predictable slice-creation time, QoS assurance, and service scalability.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
13 pages, 4579 KiB  
Article
Feasibility Study on Temperature Distribution Measurement Method of Thrust Sliding Bearing Bush Based on FBG Quasi-Distributed Sensing
by Hu Liu, Qiang Yu, Yuegang Tan, Wenjun Xu, Bing Huang, Zhichao Xie and Jian Mao
Sensors 2019, 19(14), 3245; https://doi.org/10.3390/s19143245 - 23 Jul 2019
Cited by 5 | Viewed by 4533
Abstract
Given the characteristics of the temperature distribution of the thrust sliding bearing bush, the principle and method of quasi-distributed fiber Bragg grating (FBG) sensing are used to measure it. Key problems such as the calibration, arrangement and laying of the optical FBG sensors are studied using a simulated thrust sliding bearing bush customized in the laboratory. Measurement experiments were carried out on the bearing bush in two groups: steady-state experiments and a transient experiment. The steady-state experiments obtained the temperature data measured by the FBG temperature sensors at each set temperature, and the transient experiment obtained the relationship between the temperature measured by each sensor and time during the heating and cooling process. The experimental results showed that the FBG temperature sensors had good accuracy, stability and consistency when measuring the temperature distribution of the bearing bush.
(This article belongs to the Special Issue Fiber-Based Sensing Technology: Recent Progresses and New Challenges)
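The quasi-distributed measurement rests on each grating's linear wavelength-to-temperature calibration determined in the calibration experiments. As a rough illustration (not the authors' code; the center wavelength, reference temperature, and 10 pm/°C sensitivity below are placeholder values), converting interrogator wavelength readings to a temperature profile might look like:

```python
def fbg_temperature(wavelength_nm, lambda0_nm=1550.000, t0_c=25.0,
                    sensitivity_pm_per_c=10.0):
    """Linear FBG calibration: T = T0 + (lambda - lambda0) / k,
    where k is the grating's temperature sensitivity in pm/degC."""
    shift_pm = (wavelength_nm - lambda0_nm) * 1000.0  # nm -> pm
    return t0_c + shift_pm / sensitivity_pm_per_c

# One grating per measuring point: converting the interrogator's
# wavelength readings yields the quasi-distributed temperature profile.
profile = [fbg_temperature(w) for w in (1550.000, 1550.150, 1550.300)]
# -> roughly [25.0, 40.0, 55.0] degC with the placeholder calibration
```

In practice each grating gets its own fitted slope and reference wavelength from the calibration run, rather than shared defaults.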
Show Figures

Figure 1: Installation of a thermocouple. (a) Installation schematic diagram; (b) installation physical drawing.
Figure 2: Structure diagram of the thrust sliding bearing test bench.
Figure 3: Internal structure of the thrust sliding bearing.
Figure 4: Schematic diagram of quasi-distributed fiber Bragg grating (FBG) temperature sensors.
Figure 5: Installation process flow of the FBG temperature sensors for the thrust sliding bearing bush.
Figure 6: Dimension diagram of the imitation thrust bearing bush.
Figure 7: Schematic diagram of the measurement points of the FBG temperature sensors.
Figure 8: Calibration experiment system of the FBG temperature sensors.
Figure 9: Experimental heating schematic diagram of the thrust sliding bearing bush temperature distribution measurement.
Figure 10: The architecture of the testbed.
Figure 11: Steady-state measurement results of the temperature at each measuring point.
Figure 12: Transient measurement results of the temperature at each measuring point.
Figure 13: Comparison of the measurement results of the thermocouple and measuring point 1-1 of the FBG sensor.
Figure 14: Comparison of the steady-state temperature measured at the four measuring points of the 1# FBG temperature sensor.
Figure 15: Comparison of the transient temperature measured at the four measuring points of the 1# FBG temperature sensor.
Figure 16: Comparison of the steady-state temperature measurements at the second measuring point of each temperature sensor.
Figure 17: Comparison of the transient temperature measurements at the second measuring point of each temperature sensor.
17 pages, 7123 KiB  
Article
Can You Ink While You Blink? Assessing Mental Effort in a Sensor-Based Calligraphy Trainer
by Bibeg Hang Limbu, Halszka Jarodzka, Roland Klemke and Marcus Specht
Sensors 2019, 19(14), 3244; https://doi.org/10.3390/s19143244 - 23 Jul 2019
Cited by 16 | Viewed by 5169
Abstract
Sensors can monitor physical attributes and record multimodal data in order to provide feedback. The calligraphy trainer application exploits these affordances in the context of handwriting learning: it records an expert's handwriting performance to compute an expert model, and then uses this model to provide guidance and feedback to learners. However, new learners can be overwhelmed by the feedback, as handwriting learning is a tedious task. This paper presents a pilot study conducted with the calligraphy trainer to evaluate the mental effort induced by the various types of feedback the application provides. Ten participants, five in the control group and five in the treatment group, all Ph.D. students in the technology-enhanced learning domain, took part in the study. The participants used the application to learn three characters from the Devanagari script. The results show higher mental effort in the treatment group when all types of feedback are provided simultaneously; the mental effort for individual feedback types was similar to that of the control group. In conclusion, the feedback provided by the calligraphy trainer does not impose high mental effort, and therefore the design considerations of the calligraphy trainer can be insightful for multimodal feedback designers.
(This article belongs to the Special Issue Advanced Sensors Technology in Education)
Show Figures

Figure 1: System model for supporting the framework.
Figure 2: System model for supporting the framework.
Figure 3: Pressure feedback with saturation.
Figure 4: Stroke feedback with color.
Figure 5: Visual inspection tool for providing summative feedback.
Figure 6: Mean self-reported mental effort between the two groups.
Figure 7: Mean reaction time between the two groups [in seconds].
Figure 8: Time taken by the two groups [in seconds].
Figure 9: Pupil diameter [in millimeters].
Figure 10: Visual scan path of the participant while writing.
5 pages, 193 KiB  
Correction
Correction: Design and Simulation of a Wireless SAW–Pirani Sensor with Extended Range and Sensitivity
by Sofia Toto, Pascal Nicolay, Gian Luca Morini, Michael Rapp, Jan G. Korvink and Juergen J. Brandner
Sensors 2019, 19(14), 3243; https://doi.org/10.3390/s19143243 - 23 Jul 2019
Cited by 2 | Viewed by 3350
Abstract
The authors wish to make the following erratum to Reference [...]
(This article belongs to the Special Issue Advances in Surface Acoustic Wave Sensors)
30 pages, 4572 KiB  
Article
A Novel Centralized Range-Free Static Node Localization Algorithm with Memetic Algorithm and Lévy Flight
by Jin Yang, Yongming Cai, Deyu Tang and Zhen Liu
Sensors 2019, 19(14), 3242; https://doi.org/10.3390/s19143242 - 23 Jul 2019
Cited by 22 | Viewed by 3897
Abstract
Node localization, formulated as an unconstrained NP-hard optimization problem, is considered one of the most significant issues in wireless sensor networks (WSNs). Recently, many swarm intelligence algorithms (SIAs) have been applied to solve this problem. This study aimed to determine node locations with high precision by SIA and presents a new localization algorithm named LMQPDV-hop. In LMQPDV-hop, an improved DV-Hop is employed as the underlying mechanism to gather distance estimates, in which the average hop distance is modified by a defined weight to reduce the distance errors among nodes. Furthermore, an efficient quantum-behaved particle swarm optimization algorithm (QPSO), named LMQPSO, is developed to find the best coordinates of unknown nodes. In LMQPSO, the memetic algorithm (MA) and Lévy flight are introduced into QPSO to enhance the global searching ability, and a new fast local search rule is designed to speed up convergence. Extensive simulations were conducted on different WSN deployment scenarios to evaluate the performance of the new algorithm, and the results show that it can effectively improve position precision.
(This article belongs to the Section Sensor Networks)
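The DV-Hop stage that the abstract builds on can be sketched as follows. This is a minimal textbook version (three anchors, unweighted hop sizes, Cramer's-rule trilateration, with toy example data), not the weighted LMQPDV-hop variant or the LMQPSO optimizer described in the paper:

```python
import math

def dv_hop_position(anchors, hops_between, hops_to_node):
    """Classic DV-Hop with three anchors: (1) each anchor derives an
    average hop size from inter-anchor distances and hop counts,
    (2) hop counts to the unknown node become range estimates,
    (3) linearized trilateration (Cramer's rule) yields (x, y)."""
    n = len(anchors)
    hop_size = []
    for i, (xi, yi) in enumerate(anchors):
        dist = sum(math.hypot(xi - xj, yi - yj)
                   for j, (xj, yj) in enumerate(anchors) if j != i)
        hops = sum(hops_between[frozenset((i, j))] for j in range(n) if j != i)
        hop_size.append(dist / hops)
    d = [hop_size[i] * hops_to_node[i] for i in range(3)]
    (x0, y0), (x1, y1), (x2, y2) = anchors
    # Subtract anchor 2's circle equation to linearize the system.
    a11, a12 = 2 * (x0 - x2), 2 * (y0 - y2)
    a21, a22 = 2 * (x1 - x2), 2 * (y1 - y2)
    b1 = x0**2 - x2**2 + y0**2 - y2**2 + d[2]**2 - d[0]**2
    b2 = x1**2 - x2**2 + y1**2 - y2**2 + d[2]**2 - d[1]**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Toy network: three anchors, hop counts consistent with ~5 m hops;
# the unknown node actually sits near (5, 5).
est = dv_hop_position(
    [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)],
    {frozenset((0, 1)): 2, frozenset((0, 2)): 2, frozenset((1, 2)): 3},
    [2, 2, 2])
```

The residual error of this estimate is exactly what the paper's SIA stage (LMQPSO) then minimizes, treating the estimated ranges as a fitness function over candidate coordinates.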
Show Figures

Figure 1: Illustration of gathering the information of nodes by the BS.
Figure 2: Flowchart of LMQPDV-hop.
Figure 3: Illustration of deployment for WSN#1.
Figure 4: Illustration of deployment for WSN#2.
Figure 5: Comparison results of PSO1, PSO2, CS, MMQPSO and LMQPSO.
Figure 6: The convergence of LMQPSO.
Figure 7: The effect of the number of anchors on location error for WSN#1.
Figure 8: The effect of the number of anchors on location error for WSN#2.
Figure 9: Communication range 150 for WSN#1.
Figure 10: Communication range 150 for WSN#2.
Figure 11: Communication range 200 for WSN#1.
Figure 12: Communication range 200 for WSN#2.
Figure 13: Communication range 250 for WSN#1.
Figure 14: Communication range 250 for WSN#2.
Figure 15: Communication range 300 for WSN#1.
Figure 16: Communication range 300 for WSN#2.
Figure 17: The influence of R for WSN#1.
Figure 18: The influence of R for WSN#2.
Figure 19: The influence of anchors for WSN#1.
Figure 20: The influence of anchors for WSN#2.
Figure 21: The location error of LMQPDV-hop when the anchor proportion was 30% and R = 250 for WSN#1.
Figure 22: The location error of LMQPDV-hop when the anchor proportion was 30% and R = 250 for WSN#2.
10 pages, 2949 KiB  
Article
An Active Self-Driven Piezoelectric Sensor Enabling Real-Time Respiration Monitoring
by Ahmed Rasheed, Emad Iranmanesh, Weiwei Li, Yangbing Xu, Qi Zhou, Hai Ou and Kai Wang
Sensors 2019, 19(14), 3241; https://doi.org/10.3390/s19143241 - 23 Jul 2019
Cited by 23 | Viewed by 5858
Abstract
In this work, we report an active respiration monitoring sensor based on a piezoelectric-transducer-gated thin-film transistor (PTGTFT), aiming to measure respiration-induced dynamic force in real time with high sensitivity and robustness. It differs from passive piezoelectric sensors in that the piezoelectric transducer signal is rectified and amplified by the PTGTFT. Thus, a detailed and easy-to-analyze respiration rhythm waveform can be collected with sufficient time resolution. The respiration rate, the three phases of the respiration cycle, and phase patterns can be further extracted for prognosis and warning of potential apnea and other respiratory abnormalities, making the PTGTFT highly promising for long-term real-time respiration monitoring.
(This article belongs to the Section Physical Sensors)
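Once the PTGTFT yields a clean rhythm waveform, extracting the respiration rate reduces to peak counting. A minimal sketch, assuming a uniformly sampled signal and a refractory gap between breaths (both the gap and the mean-crossing threshold are illustrative choices, not the authors' method):

```python
import math

def respiration_rate(signal, fs_hz, min_gap_s=1.0):
    """Breaths per minute, counting local maxima that rise above the
    signal mean and are separated by a refractory gap."""
    mean = sum(signal) / len(signal)
    min_gap = int(min_gap_s * fs_hz)
    peaks, last = 0, -min_gap
    for i in range(1, len(signal) - 1):
        if (signal[i] > mean and signal[i] >= signal[i - 1]
                and signal[i] > signal[i + 1] and i - last >= min_gap):
            peaks, last = peaks + 1, i
    return peaks / (len(signal) / fs_hz / 60.0)

# Synthetic check: a 0.25 Hz breathing-like sine sampled at 10 Hz
# should come out near 15 breaths per minute.
demo = [math.sin(2 * math.pi * 0.25 * i / 10) for i in range(600)]
rate = respiration_rate(demo, 10)
```

Segmenting the inspiratory, pause, and expiratory phases would additionally require locating the zero crossings and plateaus between successive peaks.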
Show Figures

Figure 1: (a) Cross-sectional schematic illustration of a piezoelectric-transducer-gated thin-film transistor (PTGTFT), where a PVDF transducer is connected to a FIN-shaped a-Si:H piezoelectric transducer and a dual-gate thin-film transistor (DG-TFT); (b) equivalent circuit diagram of the respiration monitoring sensor system composed of a PTGTFT, a low-power analog front end (AFE) and a conventional data acquisition module (MSP430F149); (c) photo of the experimental setup for measuring the respiration rhythm signal with the proposed sensor system.
Figure 2: (a) Transfer characteristics of the 3-D FIN-shaped DG-TFT when operating in self-driven mode; (b) current variation of the fabricated 3-D FIN-shaped DG-TFT when operating in the saturation region; (c) mechanical stability assessment and time-resolution evaluation of the PTGTFT sensor at 6.5 Hz and a force of 1 N.
Figure 3: Dynamic response of the sensor at two peripheral points of (a) the neck and chest; (b) signal of one complete respiration cycle.
Figure 4: (a) Dynamic response of the sensor in different respiration modes at rest; (b) respiration monitoring tests for four daily activities, with the subject (I) sitting, (II) lying, (III) standing, and (IV) walking.
Figure 5: (a) Phase analysis of the human respiration rhythm; relation of expiratory time to the sum of inspiratory and pause time during (b) deep, (c) moderate, and (d) rapid respiration modes.
14 pages, 1268 KiB  
Article
Does the Femoral Head Size in Hip Arthroplasty Influence Lower Body Movements during Squats, Gait and Stair Walking? A Clinical Pilot Study Based on Wearable Motion Sensors
by Helena Grip, Kjell G Nilsson, Charlotte K Häger, Ronnie Lundström and Fredrik Öhberg
Sensors 2019, 19(14), 3240; https://doi.org/10.3390/s19143240 - 23 Jul 2019
Cited by 17 | Viewed by 4949
Abstract
A hip prosthesis design with a larger femoral head size may improve functional outcomes compared to the conventional total hip arthroplasty (THA) design. Our aim was to compare the range of motion (RoM) in lower body joints during squats, gait and stair walking using a wearable movement analysis system based on inertial measurement units (IMUs) in three age-matched male groups: 6 males with a conventional THA (THAC), 9 with a large femoral head (LFH) design, and 8 hip- and knee-asymptomatic controls (CTRL). We hypothesized that the LFH design would allow a greater hip RoM, providing movement patterns more like CTRL, and that the side difference in hip RoM would be larger in THAC than in LFH and controls. IMUs were attached to the pelvis, thighs and shanks during five trials of squats, gait, and stair ascending/descending performed at self-selected speed. THAC and LFH participants completed the Hip dysfunction and Osteoarthritis Outcome Score (HOOS). The results showed a larger hip RoM during squats in LFH compared to THAC. Side differences in the LFH and THAC groups (operated vs. non-operated side) indicated that movement function was not fully recovered in either group, further corroborated by non-maximal mean HOOS scores (LFH: 83 ± 13, THAC: 84 ± 19, vs. 100 for normal function). The IMU system may have the potential to enhance clinical movement evaluations as an adjunct to clinical scales.
(This article belongs to the Special Issue Gyroscopes and Accelerometers)
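The RoM outcome measure itself is simple to compute once joint angles are available from the IMU sensor fusion: RoM is the peak-to-peak excursion of the angle trace, and the side difference compares the operated and non-operated limbs. A minimal sketch (angle estimation from raw IMU data is assumed to be done upstream; the traces below are hypothetical):

```python
def range_of_motion(angles_deg):
    """Peak-to-peak excursion of a joint angle trace, in degrees."""
    return max(angles_deg) - min(angles_deg)

def side_difference(operated_deg, non_operated_deg):
    """Absolute RoM difference between operated and non-operated sides."""
    return abs(range_of_motion(operated_deg) - range_of_motion(non_operated_deg))

# Hypothetical hip flexion traces (degrees) during one squat trial:
rom_op = range_of_motion([2, 35, 78, 40, 5])
rom_non = range_of_motion([1, 40, 95, 45, 3])
diff = side_difference([2, 35, 78, 40, 5], [1, 40, 95, 45, 3])
```

In a study like this one, such per-trial values would be averaged over the five trials before group comparison.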
Show Figures

Figure 1: Hip dysfunction and Osteoarthritis Outcome Score (HOOS) profiles for the conventional THA prosthesis group (THAC) and the large femoral head (LFH) prosthesis group. 100 indicates normal function and 0 indicates severe problems related to hip function. The five categories analyzed were Pain, Symptoms, Activities of daily living (ADL), Sport and recreation function (Sport) and Hip-related quality of life (QOL).
Figure 2: Angle curves in the hip and knee joints during gait and stair walking for sagittal plane motion (A), frontal plane motion (B) and transverse plane motion (C). The average angle curves are plotted with standard deviations as a shaded area for the non-dominant side of healthy controls (CTRL) and the operated side of the group with a conventional prosthesis (THAC) or large femoral head (LFH) design.
Figure 3: The boxplots illustrate range of motion (RoM) in the operated hips (light grey) and non-operated hips (dark grey) within the group with a total hip replacement (THAC) and the group with the resurfaced hip design (LFH). Significant differences between groups (brackets with end points) and between sides (simple brackets) are marked in each graph, based on the statistical tests.
29 pages, 13748 KiB  
Article
Low Power Wide Area Networks (LPWAN) at Sea: Performance Analysis of Offshore Data Transmission by Means of LoRaWAN Connectivity for Marine Monitoring Applications
by Lorenzo Parri, Stefano Parrino, Giacomo Peruzzi and Alessandro Pozzebon
Sensors 2019, 19(14), 3239; https://doi.org/10.3390/s19143239 - 23 Jul 2019
Cited by 45 | Viewed by 8217
Abstract
In this paper the authors discuss the realization of a Long Range Wide Area Network (LoRaWAN) infrastructure to be employed for monitoring activities in the marine environment. In particular, transmission ranges, as well as parameters like Signal to Noise Ratio (SNR) and Received Signal Strength Indicator (RSSI), are analyzed in the specific context of an aquaculture industrial plant, setting up a transmission channel from an offshore monitoring structure equipped with a LoRaWAN transmitter to an ashore receiving device composed of two LoRaWAN Gateways. A theoretical analysis of the feasibility of the transmission is provided. The performance of the system is then measured with different network parameters (in particular the Spreading Factor, SF) as well as with two different heights for the transmitting antenna. Test results prove that efficient data transmission can be achieved at a distance of 8.33 km even using worst-case network settings: this suggests the effectiveness of the system in harsher environmental conditions, which entail a lower-quality transmission channel, or over larger transmission ranges.
(This article belongs to the Section Internet of Things)
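The theoretical feasibility analysis mentioned in the abstract typically combines free-space path loss, the first Fresnel zone, and the radio horizon. A hedged sketch of these standard textbook formulas; the 868 MHz band, the 8.33 km range and the 3.5 m TX mast echo the paper's scenario, but the TX power, antenna gains, gateway height and receiver sensitivity below are illustrative assumptions, not the authors' values:

```python
import math

EARTH_RADIUS_KM = 6371.0

def fspl_db(d_km, f_mhz):
    """Free-space path loss in dB."""
    return 32.45 + 20 * math.log10(d_km) + 20 * math.log10(f_mhz)

def fresnel_radius_m(d_km, f_ghz):
    """First Fresnel zone radius at midpath, in meters."""
    return 17.32 * math.sqrt(d_km / (4 * f_ghz))

def radio_horizon_km(h_m, k=4 / 3):
    """Distance to the radio horizon for antenna height h_m, using the
    effective-Earth-radius factor k (4/3 for standard refraction)."""
    return math.sqrt(2 * k * EARTH_RADIUS_KM * h_m / 1000.0)

def link_margin_db(tx_dbm, gains_db, d_km, f_mhz, sensitivity_dbm):
    """Margin left after free-space loss; positive means feasible."""
    return tx_dbm + gains_db - fspl_db(d_km, f_mhz) - sensitivity_dbm

# Illustrative 868 MHz sea link: 14 dBm TX, 4 dB total antenna gain,
# an SF12-class receiver sensitivity, and a 10 m gateway mast.
margin = link_margin_db(14, 4, 8.33, 868, -137)
horizon = radio_horizon_km(3.5) + radio_horizon_km(10)  # TX + RX masts
```

Over water the link also needs adequate first-Fresnel-zone clearance, which is why the paper examines ground profiles and Earth bulge for both antenna heights.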
Show Figures

Figure 1: Gains (blue) and losses (yellow) of a communication channel to be accounted for in the link budget equation.
Figure 2: First Fresnel zone example.
Figure 3: Evaluation of the distance from the horizon.
Figure 4: Example of the height of the Earth bulge in transmission links.
Figure 5: Offshore setup for the measurement campaign: (a) offshore end node antenna and its pole; (b) view approaching the test spot.
Figure 6: Ashore setup of the Gateways.
Figure 7: Map showing the positions of the end node (point A) and of the Gateways (point B), along with the covered distance (red line).
Figure 8: Ground profiles for both exploited altitudes of the transmitting antenna: (a) 3.5 m; (b) 2.1 m.
Figure 9: Finer estimates of the actual clearances for both exploited altitudes of the transmitting antenna: (a) 20% for h_TX1 = 3.5 m; (b) 15% for h_TX2 = 2.1 m.
Figure 10: Group #1 RSSI PMFs: (a) SF = 7; (b) SF = 8; (c) SF = 9; (d) SF = 10; (e) SF = 11; (f) SF = 12.
Figure 11: Group #1 RSSI temporal trends: (a) SF = 7; (b) SF = 8; (c) SF = 9; (d) SF = 10; (e) SF = 11; (f) SF = 12.
Figure 12: Group #1 SNR PMFs: (a) SF = 7; (b) SF = 8; (c) SF = 9; (d) SF = 10; (e) SF = 11; (f) SF = 12.
Figure 13: Group #1 SNR temporal trends: (a) SF = 7; (b) SF = 8; (c) SF = 9; (d) SF = 10; (e) SF = 11; (f) SF = 12.
Figure 14: Group #2 RSSI PMFs: (a) SF = 7; (b) SF = 8; (c) SF = 9; (d) SF = 10; (e) SF = 11; (f) SF = 12.
Figure 15: Group #2 RSSI temporal trends: (a) SF = 7; (b) SF = 8; (c) SF = 9; (d) SF = 10; (e) SF = 11; (f) SF = 12.
Figure 16: Group #2 SNR PMFs: (a) SF = 7; (b) SF = 8; (c) SF = 9; (d) SF = 10; (e) SF = 11; (f) SF = 12.
Figure 17: Group #2 SNR temporal trends: (a) SF = 7; (b) SF = 8; (c) SF = 9; (d) SF = 10; (e) SF = 11; (f) SF = 12.
Figure 18: Group comparison graphical analysis divided by SF: (a) RSSI mean values and standard deviations; (b) SNR mean values and standard deviations; (c) percentage of received packets.
18 pages, 1642 KiB  
Article
IoT-Based Home Monitoring: Supporting Practitioners’ Assessment by Behavioral Analysis
by Niccolò Mora, Ferdinando Grossi, Dario Russo, Paolo Barsocchi, Rui Hu, Thomas Brunschwiler, Bruno Michel, Francesca Cocchi, Enrico Montanari, Stefano Nunziata, Guido Matrella and Paolo Ciampolini
Sensors 2019, 19(14), 3238; https://doi.org/10.3390/s19143238 - 23 Jul 2019
Cited by 29 | Viewed by 6092
Abstract
This paper introduces technical solutions devised to support the Deployment Site - Regione Emilia Romagna (DS-RER) of the ACTIVAGE project, which aims at promoting IoT (Internet of Things)-based solutions for Active and Healthy Ageing. DS-RER focuses on improving continuity of care for older adults (65+) suffering from the aftereffects of a stroke event. A Wireless Sensor Kit based on Wi-Fi connectivity was suitably engineered and realized to monitor behavioral aspects possibly relevant to health and wellbeing assessment, including bed/rest patterns, toilet usage, room presence and many others. Besides hardware design and validation, cloud-based analytics services are introduced, suitable for the automatic extraction of relevant information (trends and anomalies) from raw sensor data streams. The approach is general and applicable to a wider range of use cases; for readability's sake, however, two simple cases are analyzed, related to bed and toilet usage patterns. In particular, a regression framework is introduced, suitable for detecting trends (long- and short-term) and labeling anomalies, along with a methodology for assessing multi-modal daily behavioral profiles based on unsupervised clustering techniques. The proposed framework has been successfully deployed at several real users' homes, allowing for its functional validation. Clinical effectiveness will instead be assessed through a Randomized Control Trial study, currently being carried out.
(This article belongs to the Special Issue IoT Sensors in E-Health)
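The trend and anomaly labeling described in the abstract can be sketched minimally as a rolling Poisson check: estimate the rate from a trailing window of daily counts and flag days whose upper-tail probability is implausibly small. The window length, the 1% threshold, and the sample counts below are illustrative assumptions, not values from the paper:

```python
import math

def poisson_sf(k, lam):
    """Upper-tail probability P(X >= k) for X ~ Poisson(lam)."""
    cdf = sum(math.exp(-lam) * lam ** i / math.factorial(i) for i in range(k))
    return 1.0 - cdf

def rolling_anomalies(counts, window=7, alpha=0.01):
    """Flag days whose count is implausibly high under a Poisson model
    whose rate is estimated from the preceding window of days."""
    flags = []
    for t in range(window, len(counts)):
        lam = sum(counts[t - window:t]) / window  # MLE of the Poisson rate
        if poisson_sf(counts[t], lam) < alpha:
            flags.append(t)
    return flags

# Hypothetical daily toilet-usage counts; the jump to 14 on the last day
# is labeled as an anomaly (index 9).
daily_counts = [5, 6, 4, 5, 7, 5, 6, 5, 6, 14]
anomalies = rolling_anomalies(daily_counts)  # [9]
```

A full regression framework as in the paper would also fit a time trend; the fixed-rate window above is only the simplest instance of the same tail-probability test.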
Figure 1
<p>Diagram of the ACTIVAGE DS-RER system architecture, interacting with the FSE (Fascicolo Sanitario Elettronico, the patients’ interface to regional Electronics Health Record system) and the SOLE network (the interface for clinicians).</p>
Figure 2
<p>IoT sensor power-up. A large current peak is observed (blue line, left <span class="html-italic">y</span>-axis), corresponding to the super-capacitors' charge current; meanwhile, the bus voltage (red line, right <span class="html-italic">y</span>-axis), which is connected to the super-capacitors, ramps up to the nominal value. After charging, the current settles around an average of 80 mA while the sensor scans for Wi-Fi networks to join.</p>
Figure 3
<p>Analysis of toilet count data by a rolling Poisson regression model: the blue dotted line represents the predicted mean counts. A significant abrupt trend is detected in the last days (the region is highlighted by the orange area); the effect of the abrupt trend is shown by the black dashed line, along with the linear one in solid gray. Data points not properly explained by the model are highlighted in red.</p>
Figure 4
<p>SP visualizations, resulting from data-driven pattern clustering. Average SPs are plotted as solid lines, representing the probability of the person resting in bed at each point in time of the day. Shaded areas, instead, represent the uncertainty, in the form of 95% confidence intervals, in such point-wise probability estimates. The time on the <span class="html-italic">x</span>-axis is given in UTC, whereas the pilot timezone is UTC+1. The deviations between the two pattern clusters, from 12:30 p.m. to 2:00 p.m. (UTC), are found to be statistically significant (<math display="inline"><semantics> <mrow> <mi>p</mi> <mo>&lt;</mo> <mn>0.01</mn> </mrow> </semantics></math>).</p>
Figure 5
<p>(<b>a</b>) graphical representation of “deviant” patterns, discovered with the NS score. A blue dashed line represents the pattern taken as reference, whereas solid red lines show profiles with a high NS score, i.e., deviating from the reference; (<b>b</b>) histogram approximation of the NS distribution obtained from the SP traces shown in (<b>a</b>). Deviating patterns are identified by means of a simple filter based on the Inter-Quartile Range.</p>
25 pages, 2736 KiB  
Article
On the Evaluation of the NB-IoT Random Access Procedure in Monitoring Infrastructures
by Sergio Martiradonna, Giuseppe Piro and Gennaro Boggia
Sensors 2019, 19(14), 3237; https://doi.org/10.3390/s19143237 - 23 Jul 2019
Cited by 30 | Viewed by 5277
Abstract
NarrowBand IoT (NB-IoT) is emerging as a promising communication technology offering a reliable wireless connection to a large number of devices employed in pervasive monitoring scenarios, such as Smart City, Precision Agriculture, and Industry 4.0. Since most NB-IoT transmissions occur in the uplink, the random access channel (the primary interface between devices and the base station) can become the main bottleneck of the entire system. For this reason, analytical models and simulation tools able to investigate its behavior in different scenarios are of the utmost importance for driving current and future research activities. Unfortunately, the scientific literature only partially addresses the current open issues, in many cases by means of simplified and not standard-compliant approaches. To provide a significant step forward in this direction, the contribution of this paper is threefold. First, it presents a flexible, open-source, and 3GPP-compliant implementation of the NB-IoT random access procedure. Second, it formulates an analytical model capturing both the collision and success probabilities associated with the aforementioned procedure. Third, it presents the cross-validation of both the analytical model and the simulation tool, taking into account reference application scenarios of sensor networks enabling periodic reporting in monitoring infrastructures. The obtained results demonstrate remarkable accuracy and a well-calibrated instrument, which will also be useful for future research activities. Full article
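As a point of reference for the collision and success probabilities the paper models, the textbook single-shot preamble model can be written down directly: each of n devices independently picks one of m orthogonal preambles, and a preamble succeeds only if picked exactly once. This baseline ignores retransmissions and coverage classes, so it is not the paper's 3GPP-compliant model, only a first-order approximation:

```python
def access_success_prob(n_devices, n_preambles):
    """Probability that a tagged device's preamble is chosen by none of
    the other contending devices in one random access opportunity."""
    return (1 - 1 / n_preambles) ** (n_devices - 1)

def expected_collided_preambles(n_devices, n_preambles):
    """Expected number of preambles picked by two or more devices."""
    m, n = n_preambles, n_devices
    p_empty = (1 - 1 / m) ** n                   # nobody picks the preamble
    p_single = (n / m) * (1 - 1 / m) ** (n - 1)  # exactly one device picks it
    return m * (1 - p_empty - p_single)

# With 48 subcarriers/preambles (one possible NPRACH configuration) and
# 100 contending devices, a tagged device succeeds with probability ~0.12.
p = access_success_prob(100, 48)
```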
Figure 1
<p>Time–frequency structure of NB-IoT uplink channels.</p>
Figure 2
<p>NPRACH preambles.</p>
Figure 3
<p>RAOs timing diagram.</p>
Figure 4
<p>Random access procedure sequence diagram.</p>
Figure 5
<p>Coverage class hopping of three distinct users during random access procedure with <math display="inline"><semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>4</mn> </mrow> </semantics></math>.</p>
Figure 6
<p>Big picture of the main research contributions.</p>
Figure 7
<p>Overview of the main building blocks of the NB-IoT simulation platform.</p>
Figure 8
<p>Block diagram of the implemented random access procedure.</p>
Figure 9
<p>Average number of devices accessing RAOs for Configuration 1.</p>
Figure 10
<p>Average number of devices accessing RAOs for Configuration 2.</p>
Figure 10 Cont.
<p>Average number of devices accessing RAOs for Configuration 2.</p>
Figure 11
<p>Collision and success probabilities of RAOs for Configuration 1.</p>
Figure 12
<p>Collision and success probabilities of RAOs for Configuration 2.</p>
Figure 12 Cont.
<p>Collision and Success probabilities of RAOs for Configuration 2.</p>
Figure 13
<p>ECDF of the end-to-end delays obtained in all simulations.</p>
Figure 14
<p>Average number of devices accessing RAOs for Configuration 3.</p>
Figure 15
<p>Collision and success probabilities of RAOs for Configuration 3.</p>
20 pages, 4422 KiB  
Article
Methylated Poly(ethylene)imine Modified Capacitive Micromachined Ultrasonic Transducer for Measurements of CO2 and SO2 in Their Mixtures
by Dovydas Barauskas, Donatas Pelenis, Gailius Vanagas, Darius Viržonis and Jonas Baltrušaitis
Sensors 2019, 19(14), 3236; https://doi.org/10.3390/s19143236 - 23 Jul 2019
Cited by 20 | Viewed by 4980
Abstract
A gravimetric gas detection device based on surface-functionalized Capacitive Micromachined Ultrasound Transducers (CMUTs) was designed, fabricated, and tested for detection of carbon dioxide (CO2) and sulfur dioxide (SO2) mixtures in nitrogen. The measurement setup for continuous data collection, integrated with in-situ Fourier Transform Infrared (FT-IR) spectroscopy, allows a better understanding of the molecular interactions with the sensing layer (methylated poly(ethylene)imine) and of the surface functionalization needed for multiple gas detection. During the CO2 experiments, weak molecular interactions were observed in the spectroscopy data, and a linear frequency-shift response was observed for CO2 concentrations ranging from 0.16 vol % to 1 vol %. The Raman and FT-IR spectroscopy data showed much stronger interactions between SO2 and the polymer: molecules were bound by stronger forces and irreversibly changed the polymer film properties. Nevertheless, the change in resonance frequency remained linear over the tested range of 1 vol % to 5 vol % SO2. This interaction changed not only the device resonance frequency but also the magnitude of the electroacoustic impedance, which was used to differentiate gas mixtures of CO2 and SO2 in dry N2. Full article
(This article belongs to the Special Issue Infrared Spectroscopy and Sensors)
Figure 1
<p>Capacitive micromachined ultrasonic transducer (CMUT) analytical model results: (<b>a</b>) membrane collapse voltage as a function of membrane side length and thickness; (<b>b</b>) membrane resonance frequency as a function of membrane side length and thickness. A small inset shows the structure of the single CMUT cell and the parameters simulated in the analytical model.</p>
Figure 2
<p>CMUT wafer bonding fabrication steps: (<b>a</b>) Thermal oxidation; (<b>b</b>) oxide wet etching; (<b>c</b>) wafer bonding; (<b>d</b>) backside protection with plasma enhanced chemical vapor deposition (PECVD) silicon nitride; (<b>e</b>) handle wafer removal using CMP and wet etch; (<b>f</b>) Oxford cryogenic etching process for separating devices by etching device layer; (<b>g</b>) opening of contact pads with reactive ion etch (RIE), (<b>h</b>) top electrode formation and contact pads metallization using lift-off procedure.</p>
Figure 3
<p>(<b>a</b>) Two CMUT chips assembled on a custom printed circuit board with connection pins and contact pads bonded to the printed circuit board (PCB) with gold wires, (<b>b</b>) design of the CMUT chip.</p>
Figure 4
<p>Impedance magnitude spectra of the CMUT device before and after spin-coated methylated poly(ethylene-imine) (mPEI) layer as a function of frequency.</p>
Figure 5
<p>Experimental setup for in-situ simultaneous real-time CMUT resonance frequency and the magnitude of electroacoustic impedance measurement and Fourier transform infrared spectroscopy [<a href="#B59-sensors-19-03236" class="html-bibr">59</a>].</p>
Figure 6
<p>Continuous resonance frequency shift and magnitude of the impedance spectra as a function of time when transitioning between N<sub>2</sub> and CO<sub>2</sub>.</p>
Figure 7
<p>Continuous resonance frequency shift and magnitude of the impedance spectra as a function of time when transitioning between N<sub>2</sub> and SO<sub>2</sub>.</p>
Figure 8
<p>Continuous resonance frequency shift and magnitude of the impedance spectra as a function of time when transitioning between N<sub>2</sub>, CO<sub>2</sub> and CO<sub>2</sub> + SO<sub>2</sub> mixture.</p>
Figure 9
<p>Recorded resonance frequency shift of the mPEI-covered CMUT as a function of CO<sub>2</sub> concentration in inert N<sub>2</sub> at 23 °C (square data points), and resonance frequency dynamics of the mPEI-covered CMUT with different SO<sub>2</sub> concentrations in inert N<sub>2</sub> at 23 °C (diamond data points).</p>
Figure 10
<p>Fourier Transform Infrared spectroscopy data peaks formed at wavenumbers in region 2200 to 2500 cm<sup>−1</sup> and region 3500 to 3800 cm<sup>−1</sup> produced by absorbance of CO<sub>2</sub> phase molecules.</p>
Figure 11
<p>Fourier Transform Infrared spectroscopy data peaks formed at wavenumbers in region 1250 to 1450 cm<sup>−1</sup> produced by absorbance of SO<sub>2</sub> phase molecules.</p>
Figure 12
<p>Raman spectra comparison of Si, Si with spin coated thin layer of mPEI and Si with a thin layer of mPEI after SO<sub>2</sub> gas experiments [<a href="#B59-sensors-19-03236" class="html-bibr">59</a>].</p>
21 pages, 3670 KiB  
Article
Curve Similarity Model for Real-Time Gait Phase Detection Based on Ground Contact Forces
by Huacheng Hu, Jianbin Zheng, Enqi Zhan and Lie Yu
Sensors 2019, 19(14), 3235; https://doi.org/10.3390/s19143235 - 23 Jul 2019
Cited by 11 | Viewed by 3790
Abstract
This paper proposes a novel method to adaptively detect gait patterns in real time from the ground contact forces (GCFs) measured by load cells. A curve similarity model (CSM) is used to identify the division of off-ground and on-ground statuses and to differentiate gait patterns based on detection rules. Traditionally, published threshold-based methods detect gait patterns by setting a fixed threshold to divide the GCFs into on-ground and off-ground statuses; however, these methods are neither adaptive nor real-time. In this paper, a curve is composed of a series of continuous or discrete ordered GCF data points, and the CSM is built offline to obtain a training template. The testing curve is then compared with the training template to compute a degree of similarity: if the computed degree of similarity is less than a given threshold, the curves are considered similar, which yields the division of off-ground and on-ground statuses. Finally, gait patterns are differentiated according to this status division based on the detection rules. To evaluate the detection error rate of the proposed method, a method from the literature is used as a reference to obtain comparative results. The experimental results indicate that the proposed method can detect gait patterns adaptively in real time with a low error rate compared with the reference method. Full article
(This article belongs to the Special Issue Wearable Sensors for Gait and Motion Analysis 2018)
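The matching step of the CSM can be illustrated with a toy sketch: a fixed four-point template (cf. Figures 4 and 5) slides along the GCF stream, and windows whose distance to the template falls below a threshold mark status flags. The mean-absolute-difference metric, the threshold, and the sample values are assumptions for illustration, not the paper's calibrated choices:

```python
def curve_distance(template, curve):
    """Degree of similarity between two equal-length GCF curves, here the
    mean absolute difference (one plausible metric)."""
    return sum(abs(a - b) for a, b in zip(template, curve)) / len(template)

def detect_status_flags(gcf_stream, template, threshold):
    """Slide a window the size of the template over the GCF stream and
    return the start indices where the window matches the template."""
    win = len(template)
    flags = []
    for t in range(len(gcf_stream) - win + 1):
        if curve_distance(template, gcf_stream[t:t + win]) < threshold:
            flags.append(t)
    return flags

# A rising-edge template marks the start of the on-ground status; the
# stream below contains one matching edge starting at index 2.
template = [0, 10, 30, 60]            # hypothetical training template
stream = [0, 0, 0, 10, 30, 60, 80, 80]
flags = detect_status_flags(stream, template, threshold=5.0)  # [2]
```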
Figure 1
<p>Two load cells mounted separately in the ball and the heel; a lid is used to enlarge the contact area.</p>
Figure 2
<p>Abnormal values existing in the data collection for ground contact force (GCF). Thr, threshold.</p>
Figure 3
<p>Change in GCFs with walking speed change.</p>
Figure 4
<p>A curve consisting of four GCF points is used to identify the starting flag of on-ground status or the ending flag of off-ground status.</p>
Figure 5
<p>A curve consisting of four GCF points is used to identify the starting flag of off-ground status or the ending flag of on-ground status.</p>
Figure 6
<p>(<b>a</b>) Original gait GCF data; (<b>b</b>) four gait phase indices; (<b>c</b>) two gait classifications.</p>
Figure 7
<p>Flow chart of real-time gait detection algorithm. CSM, curve similarity model.</p>
Figure 8
<p>Status division for ball and heel. (<b>a</b>) GCFs and status division for the ball, (<b>b</b>) GCFs and status division for the heel.</p>
Figure 9
<p>(<b>a</b>) GCFs in the ball and heel, (<b>b</b>) results of gait pattern detection.</p>
Figure 10
<p>Gait phase detection for one subject at speeds of 2~6 km/h.</p>
Figure 11
<p>The average error rate over 10 runs. STTTA, self-tuning triple-threshold algorithm; PM, proportional method.</p>
Figure 12
<p>Result of gait phase detection.</p>
Figure 13
<p>The best result of gait phase detection.</p>
Figure 14
<p>The worst result of gait phase detection.</p>
18 pages, 4052 KiB  
Article
A Comparable Study of CNN-Based Single Image Super-Resolution for Space-Based Imaging Sensors
by Haopeng Zhang, Pengrui Wang, Cong Zhang and Zhiguo Jiang
Sensors 2019, 19(14), 3234; https://doi.org/10.3390/s19143234 - 23 Jul 2019
Cited by 19 | Viewed by 4858
Abstract
In the case of space-based space surveillance (SBSS), images of target space objects captured by space-based imaging sensors usually suffer from low spatial resolution due to the extremely long distance between the target and the imaging sensor. Image super-resolution is an effective data processing operation to obtain informative high-resolution images. In this paper, we comparatively study four recent popular models for single image super-resolution based on convolutional neural networks (CNNs) with space applications in mind. We fine-tune the super-resolution models designed for natural images using simulated images of space objects, and test the performance of the different CNN-based models under the conditions mainly considered for SBSS. Experimental results show the advantages and drawbacks of these models, which could be helpful for choosing a proper CNN-based super-resolution method to deal with image data of space objects. Full article
(This article belongs to the Special Issue Intelligent Sensors Applications in Aerospace)
Figure 1
<p>Network structure of SRCNN used in this paper. ILR, interpolated low-resolution image.</p>
Figure 2
<p>Network structure of FSRCNN used in this paper.</p>
Figure 3
<p>Network structure of VDSR used in this paper. ILR, interpolated low-resolution image; R_image, residual image.</p>
Figure 4
<p>Network structure of DRCN used in this paper.</p>
Figure 5
<p>Visualization of super-resolution reconstruction.</p>
Figure 6
<p>Scale factor experiment for “glonas” in BUAA-SID 1.0. The method <math display="inline"><semantics> <mrow> <mi>s</mi> <mi>m</mi> <mo>−</mo> <mi>s</mi> <mi>n</mi> </mrow> </semantics></math> means the method is trained for <math display="inline"><semantics> <mrow> <mo>×</mo> <mi>m</mi> </mrow> </semantics></math> SR and tested for <math display="inline"><semantics> <mrow> <mo>×</mo> <mi>n</mi> </mrow> </semantics></math> SR.</p>
Figure 7
<p>Performance of DRCN training by different methods.</p>
Figure 8
<p>Super-resolution results of “cobe” (BUAA-SID 1.0) with scale factor × 2. Models are trained on T91, directly trained on BUAA-SID 1.0, and transfer trained from T91 respectively.</p>
Figure 9
<p>PSNR curves for different standard deviations (std) of Gaussian noise.</p>
17 pages, 5806 KiB  
Article
A Device-Free Indoor Localization Method Using CSI with Wi-Fi Signals
by Xiaochao Dang, Xuhao Tang, Zhanjun Hao and Yang Liu
Sensors 2019, 19(14), 3233; https://doi.org/10.3390/s19143233 - 23 Jul 2019
Cited by 26 | Viewed by 6955
Abstract
Amid the ever-accelerating development of wireless communication technology, demand for location-based services has grown; thus, passive indoor positioning has gained widespread attention. Channel State Information (CSI) has attracted researchers' interest because it provides more detailed and fine-grained information. Existing indoor positioning methods, however, are vulnerable to the environment and fail to fully reflect all position features, due to the limited accuracy of the fingerprint. As a solution, a CSI-based passive indoor positioning method is proposed: Wavelet Domain Denoising (WDD) is adopted to process the collected CSI amplitude, and the CSI phase information is unwrapped and linearly transformed in the offline phase. The post-processed amplitude and phase are taken as fingerprint data to build a fingerprint database, correlated with reference-point position information. Experimental results under two different environments show that the present method achieves lower positioning error and higher stability than similar methods and can offer decimeter-level positioning accuracy. Full article
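The offline phase processing mentioned in the abstract (unwrapping followed by a linear transformation) is commonly realized by removing the linear slope and mean offset of the raw CSI phase across subcarriers; a minimal sketch under that assumption, not necessarily the paper's exact formulation:

```python
import math

def unwrap(phases):
    """Remove 2*pi jumps between consecutive subcarrier phases."""
    out = [phases[0]]
    for p in phases[1:]:
        d = p - out[-1]
        d -= 2 * math.pi * round(d / (2 * math.pi))
        out.append(out[-1] + d)
    return out

def sanitize_phase(raw_phase, subcarrier_idx):
    """Unwrap the raw CSI phase, then subtract the linear slope across
    subcarriers and the mean offset (the usual linear transformation)."""
    phi = unwrap(raw_phase)
    k = subcarrier_idx
    a = (phi[-1] - phi[0]) / (k[-1] - k[0])  # slope from the end points
    b = sum(phi) / len(phi)                  # mean offset
    return [p - a * ki - b for p, ki in zip(phi, k)]

# A purely linear raw phase sanitizes to (numerically) all zeros, since
# the slope and offset carry no position information.
idx = list(range(-6, 7))
clean = sanitize_phase([0.2 * k + 0.5 for k in idx], idx)
```

This calibration removes the timing-offset slope that varies between packets, leaving a phase fingerprint that is stable enough to store in the database.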
Figure 1
<p>Channel State Information (CSI) Signals.</p>
Figure 2
<p>Flow Chart of Wavelet Domain Denoising (WDD).</p>
Figure 3
<p>System Framework.</p>
Figure 4
<p>Testing Environment.</p>
Figure 5
<p>CSI Data Comparison Under Different Locations: (<b>a</b>) Unmanned Environment; (<b>b</b>) Position A; (<b>c</b>) Position B.</p>
Figure 6
<p>The Amplitude of Wavelet Domain Denoising: (<b>a</b>) Unmanned Environment; (<b>b</b>) Position A; (<b>c</b>) Position B.</p>
Figure 7
<p>(<b>a</b>–<b>c</b>) Original Phase; (<b>d</b>–<b>e</b>) The Phase after Linear Transformation.</p>
Figure 8
<p>Experimental Scenarios: (<b>a</b>) Laboratory; (<b>b</b>) Conference Room.</p>
Figure 9
<p>Impacts of the Number of Packets on Positioning Accuracy.</p>
Figure 10
<p>Cumulative Distribution Function (CDF) of the Number of Reference Points.</p>
Figure 11
<p>CDF of Data Quality (<b>a</b>) Laboratory; (<b>b</b>) Conference Room.</p>
Figure 12
<p>CDF of Different Localization Methods: (<b>a</b>) Laboratory; (<b>b</b>) Conference Room.</p>
Figure 13
<p>Execution Time of Different Stages.</p>
16 pages, 4700 KiB  
Article
Influence of Volumetric Damage Parameters on Patch Antenna Sensor-Based Damage Detection of Metallic Structure
by Zhiping Liu, Hanjin Yu, Kai Zhou, Runfa Li and Qian Guo
Sensors 2019, 19(14), 3232; https://doi.org/10.3390/s19143232 - 23 Jul 2019
Cited by 6 | Viewed by 4072
Abstract
Antenna sensors have been employed for crack monitoring of metallic materials. Existing studies have mainly focused on the mathematical relationship between the surface crack length of the metallic material and the resonant frequency; the influence of the crack depth on the sensor output, and whether it matters that the crack penetrates the full depth, remain unexplored. Therefore, in this work, a numerical simulation method was used to investigate the current density distribution characteristics of the ground plane (metallic material) with different crack geometric parameters. The data reveal that the crack depth has a greater influence on the resonant frequency than the crack length. The relationship between the frequency and the crack geometric parameters was discussed by characterizing the current density and sensor output under different crack lengths and depths. The feasibility of monitoring another common type of damage in metallic materials, i.e., corrosion pits, was also explored. Furthermore, the influences of crack and corrosion-pit geometric parameters on the output results were validated by experiments. Full article
(This article belongs to the Special Issue Sensors for Structural Health Monitoring and Condition Monitoring)
Figure 1
<p>Crack monitoring mechanism of the patch antenna sensor. (<b>a</b>) Patch antenna sensor structure and (<b>b</b>) resonant cavity model.</p>
Figure 2
<p>Current path classification on the ground plane.</p>
Figure 3
<p>Dimension diagram of the patch antenna sensor.</p>
Figure 4
<p>Simulation model of the patch antenna sensor.</p>
Figure 5
<p>Dependence of the resonant frequency on the crack length associated with different crack depths.</p>
Figure 6
<p>Relationship between the resonant frequency and the crack depth associated with different crack lengths.</p>
Figure 7
<p>Current density distribution of the volumetric crack inner surface.</p>
Figure 8
<p>Two-dimensional (2D) and three-dimensional (3D) schematic showing the current density distribution on the (<b>a</b>) crack length side, (<b>b</b>) crack width side, and (<b>c</b>) crack bottom.</p>
Figure 9
<p>Current density distribution along a vertical axis of symmetry associated with the inner sides of the crack. (<b>a</b>) Line a and (<b>b</b>) line b.</p>
Figure 10
<p>Current density distribution along the axis of symmetry on the crack bottom. (<b>a</b>) Line c and (<b>b</b>) line d.</p>
Figure 11
<p>Influence of the volumetric corrosion pit radius on the resonant frequency.</p>
Figure 12
<p>Influence of the volumetric corrosion pit depth on resonant frequency.</p>
Figure 13
<p>Current density distribution corresponding to the inner surface of the volumetric corrosion pit.</p>
Figure 14
<p>Current density distribution of the corrosion pit inner side. (<b>a</b>) Line a and (<b>b</b>) line b.</p>
Figure 15
<p>Experiment setup. (<b>a</b>) Sample and (<b>b</b>) experiment platform.</p>
Figure 16
<p>Comparison of simulation and experiment data corresponding to a crack depth of 1 mm.</p>
13 pages, 382 KiB  
Article
Optimal Offloading Decision Strategies and Their Influence Analysis of Mobile Edge Computing
by Jiuyun Xu, Zhuangyuan Hao and Xiaoting Sun
Sensors 2019, 19(14), 3231; https://doi.org/10.3390/s19143231 - 23 Jul 2019
Cited by 14 | Viewed by 3977
Abstract
Mobile edge computing (MEC) has become increasingly popular in both academia and industry. With the help of edge servers and cloud servers, it is one of the key technologies for overcoming the latency between cloud servers and wireless devices and the limited computation capability and storage of wireless devices. In mobile edge computing, wireless devices are responsible for providing input data, while edge servers and cloud servers take charge of computation and storage. However, how to balance the power consumption of edge devices against time delay has not been well addressed in mobile edge computing. In this paper, we focus on strategies for the task offloading decision and analyze the influence of offloading decisions in different environments. First, we propose a system model considering both energy consumption and time delay and formulate it as an optimization problem. Then, we employ two algorithms, Enumeration and Branch-and-Bound, to obtain the optimal or near-optimal decision for minimizing the system cost, including the time delay and energy consumption. Furthermore, we compare the performance of the two algorithms and conclude that the Branch-and-Bound algorithm performs better overall. Finally, we analyze in detail the factors influencing the optimal offloading decisions and the minimum cost by varying key parameters. Full article
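The offloading decision described in the abstract can be sketched as a brute-force enumeration over per-task placements (local, edge, or cloud), minimizing a weighted sum of energy and delay. The cost numbers and equal weights below are illustrative assumptions, not the paper's parameters:

```python
from itertools import product

# Hypothetical (energy, delay) cost per task for each placement; these
# numbers are illustrative, not taken from the paper.
COSTS = {
    "local": (4.0, 1.0),
    "edge":  (1.5, 2.0),
    "cloud": (0.5, 5.0),
}

def system_cost(decision, w_energy=0.5, w_delay=0.5):
    """Weighted sum of total energy and total delay for one decision
    vector (one placement per task)."""
    energy = sum(COSTS[d][0] for d in decision)
    delay = sum(COSTS[d][1] for d in decision)
    return w_energy * energy + w_delay * delay

def best_decision(n_tasks, w_energy=0.5, w_delay=0.5):
    """Exhaustively enumerate all 3^n placement vectors (the Enumeration
    baseline) and return the cheapest one."""
    return min(product(COSTS, repeat=n_tasks),
               key=lambda d: system_cost(d, w_energy, w_delay))

# Equal weights favor the edge for every task; weighting delay heavily
# instead pushes the tasks back to local execution.
choice = best_decision(3)  # ("edge", "edge", "edge")
```

The Branch-and-Bound algorithm in the paper prunes this same search space; the enumeration above is the exhaustive baseline it is compared against.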
Figure 1
<p>Mobile Edge Computing Architecture.</p>
Figure 2
<p>Time consumption.</p>
Figure 3
<p>The accuracy rate of the Branch and Bound algorithm.</p>
Figure 4
<p>Cost value comparison.</p>
Figure 5
<p>The number of each decision for each set of tasks.</p>
Figure 6
<p>The cost values under different parameters.</p>
Figure 7
<p>Offloading decision comparison.</p>
25 pages, 6641 KiB  
Article
Enhanced 3-D GM-MAC Protocol for Guaranteeing Stability and Energy Efficiency of IoT Mobile Sensor Networks
by Yoonkyung Jang, Ahreum Shin and Intae Ryoo
Sensors 2019, 19(14), 3230; https://doi.org/10.3390/s19143230 - 23 Jul 2019
Cited by 1 | Viewed by 3268
Abstract
In wireless sensor networks, energy efficiency is important because sensor nodes have limited energy. 3-dimensional group management medium access control (3-D GM-MAC) is an attractive MAC protocol for the Internet of Things (IoT) environment with various sensors. 3-D GM-MAC outperforms existing MAC schemes in terms of energy efficiency, but has some stability issues. In this paper, methods that improve the stability and transmission performance of 3-D GM-MAC are proposed: a buffer management scheme for sensor nodes is newly proposed, fixed sensor nodes with a higher priority than mobile sensor nodes in determining group numbers are added, and an advanced group number management scheme is introduced. The proposed methods were simulated and analyzed. The newly derived buffer threshold showed energy efficiency similar to the original 3-D GM-MAC, but improved the data loss rate and data collection rate. Data delay was not included among the comparison factors, as 3-D GM-MAC targets non-real-time applications. When using fixed sensor nodes, the number of group number resets was reduced by about 43.4% and energy efficiency increased by about 10%. Advanced group number management improved energy efficiency by about 23.4%. In addition, advanced group number management with periodic group number resets of the entire set of sensor nodes showed about a 48.9% improvement in energy efficiency. Full article
Figure 1
<p>Average active time of sensor nodes.</p>
Figure 2
<p>Initial group number setting flowchart.</p>
Figure 3
<p>Initial setting of group number and data transmission path.</p>
Figure 4
<p>Group number resetting.</p>
Figure 5
<p>Buffer thresholds for node groups.</p>
Figure 6
<p>Sample topology for experiments of deriving buffer threshold equation.</p>
Figure 7
<p>Network lifetime in case of using polynomial function with ‘a’ = 0.</p>
Figure 8
<p>Data loss rate in case of using polynomial function with ‘a’ = 0.</p>
Figure 9
<p>Network lifetime in case of using polynomial function with ‘a’ = 1.</p>
Figure 10
<p>Data loss rate in case of using polynomial function with ‘a’ = 1.</p>
Figure 11
<p>Network lifetime in case of using logarithmic function.</p>
Figure 12
<p>Data loss rate in case of using logarithmic function.</p>
Figure 13
<p>Primary and secondary settings of node group numbers.</p>
Figure 14
<p>Initial group number setting using fixed sensor nodes.</p>
Figure 15
<p>Incorrect group number setting scenario and its resolution.</p>
Figure 16
<p>Incorrect group number resetting scenario.</p>
Figure 17
<p>Group number resetting scenario.</p>
Figure 18
<p>Buffer threshold.</p>
Figure 19
<p>Comparison of node group number resets.</p>
Figure 20
<p>Energy consumption of 3-D GM-MAC without fixed sensor nodes.</p>
Figure 21
<p>Energy consumption of 3-D GM-MAC using fixed sensor nodes.</p>
Figure 22
<p>Energy consumption of original 3-D GM-MAC.</p>
Full article ">Figure 23
<p>Energy consumption of 3-D GM-MAC using advanced group number management.</p>
Full article ">Figure 24
<p>Energy consumption of 3-D GM-MAC using advanced group number management and entire group number resetting.</p>
Full article ">
13 pages, 7649 KiB  
Article
Structured Light Three-Dimensional Measurement Based on Machine Learning
by Chuqian Zhong, Zhan Gao, Xu Wang, Shuangyun Shao and Chenjia Gao
Sensors 2019, 19(14), 3229; https://doi.org/10.3390/s19143229 - 23 Jul 2019
Cited by 14 | Viewed by 4780
Abstract
The three-dimensional measurement of structured light is commonly used and has widespread applications in many industries. In this study, machine learning is used for structured light 3D measurement to recover the phase distribution of the measured object by employing two machine learning models. [...] Read more.
The three-dimensional measurement of structured light is commonly used and has widespread applications in many industries. In this study, machine learning is used for structured light 3D measurement to recover the phase distribution of the measured object by employing two machine learning models. Because no phase shifting is required, the operational complexity and computation time of the measurement decline, making real-time measurement possible. Finally, a grating-based structured light measurement system is constructed, and machine learning is used to recover the phase. The calculated phase distribution is wrapped in only one dimension rather than in two dimensions, as in other methods. The measurement error is observed to be under 1%. Full article
(This article belongs to the Section Physical Sensors)
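The abstract's key point — that the recovered phase is wrapped along one dimension only — can be illustrated with the Fourier-transform baseline such systems are usually compared against. The sketch below simulates a single fringe pattern, retrieves the wrapped phase from the fundamental lobe, and unwraps it along one axis; the fringe parameters are invented for illustration, and the paper's machine learning models are not reproduced here.

```python
import numpy as np

# Simulated single-shot fringe: I(x) = bias + amp * cos(2*pi*f0*x + phi(x)).
x = np.linspace(0.0, 1.0, 512)
phi_true = 18.0 * x**2                      # hypothetical object phase (rad)
f0 = 20.0                                   # carrier frequency of the grating
fringe = 0.5 + 0.4 * np.cos(2 * np.pi * f0 * x + phi_true)

# Fourier-transform method: isolate the +f0 lobe, shift it to baseband,
# and take the complex angle to obtain the wrapped phase.
spec = np.fft.fft(fringe)
freqs = np.fft.fftfreq(x.size, d=x[1] - x[0])
lobe = np.where((freqs > f0 / 2) & (freqs < 3 * f0 / 2), spec, 0.0)
analytic = np.fft.ifft(lobe)
wrapped = np.angle(analytic * np.exp(-2j * np.pi * f0 * x))

# Because the phase is wrapped in one dimension only, a 1-D unwrap suffices.
phi_rec = np.unwrap(wrapped)
```

Away from the window edges (where spectral leakage dominates), the unwrapped phase tracks the simulated object phase up to a constant offset.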
Show Figures

Figure 1
<p>Optical path diagram of surface measured by a structured light measuring system.</p>
Full article ">Figure 2
<p>Structured light 3D measurement system.</p>
Full article ">Figure 3
<p>(<b>a</b>) Phases of calibration planes; (<b>b</b>) relative phases of calibration planes.</p>
Full article ">Figure 4
<p>(<b>a</b>) Measured object; (<b>b</b>) grating fringe pattern of the measured object.</p>
Full article ">Figure 5
<p>Phase of the measured object.</p>
Full article ">Figure 6
<p>Unwrapped phase of the measured object.</p>
Full article ">Figure 7
<p>Relative phase of the measured object.</p>
Full article ">Figure 8
<p>Measurement errors corresponding to columns 480–529.</p>
Full article ">Figure 9
<p>(<b>a</b>) Measured cylinder; (<b>b</b>) Measured cylinder’s structured light fringe pattern.</p>
Full article ">Figure 10
<p>(<b>a</b>) Cylinder’s phase distribution calculated by machine learning; (<b>b</b>) cylinder’s phase distribution calculated by FFT.</p>
Full article ">Figure 11
<p>(<b>a</b>) Measured calabash; (<b>b</b>) measured calabash’s structured light fringe pattern.</p>
Full article ">Figure 12
<p>(<b>a</b>) Calabash’s phase distribution calculated by machine learning; (<b>b</b>) calabash’s phase distribution calculated by FFT.</p>
Full article ">
25 pages, 7609 KiB  
Article
RTK with the Assistance of an IMU-Based Pedestrian Navigation Algorithm for Smartphones
by Zun Niu, Ping Nie, Lin Tao, Junren Sun and Bocheng Zhu
Sensors 2019, 19(14), 3228; https://doi.org/10.3390/s19143228 - 22 Jul 2019
Cited by 36 | Viewed by 6370
Abstract
Real-time kinematic (RTK) technique is widely used in modern society because of its high accuracy and real-time positioning. The appearance of Android P and the application of BCM47755 chipset make it possible to use single-frequency RTK and dual-frequency RTK on smartphones. The Xiaomi [...] Read more.
The real-time kinematic (RTK) technique is widely used in modern society because of its high accuracy and real-time positioning. The appearance of Android P and the application of the BCM47755 chipset make it possible to use single-frequency RTK and dual-frequency RTK on smartphones. The Xiaomi Mi 8 is the first dual-frequency Global Navigation Satellite System (GNSS) smartphone equipped with the BCM47755 chipset. However, the performance of RTK in urban areas is much poorer than its performance under the open sky because satellite signals can be blocked by buildings and trees. RTK cannot provide positioning results in some specific areas, such as urban canyons and crossings under an overpass. This paper combines RTK with an IMU-based pedestrian navigation algorithm. We utilize an attitude and heading reference system (AHRS) algorithm and a zero velocity update (ZUPT) algorithm based on the micro-electro-mechanical systems (MEMS) inertial measurement unit (IMU) in smartphones to assist RTK and improve positioning performance in urban areas. Tests are carried out to verify the performance of RTK on the Xiaomi Mi 8, and we assess the performance of RTK in urban areas with and without the assistance of the IMU-based pedestrian navigation algorithm. Results of actual tests show that RTK with the assistance of an IMU-based pedestrian navigation algorithm is more robust and more adaptable to complex environments than RTK without it. Full article
(This article belongs to the Section Remote Sensors)
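The ZUPT idea referenced in the abstract can be sketched in a few lines: detect stance phases where the foot-mounted IMU's acceleration magnitude stays near gravity, and clamp the integrated velocity to zero during those intervals so integration drift cannot accumulate. The detector window and thresholds below are illustrative choices, not the paper's tuned values.

```python
import numpy as np

def detect_stance(acc, g=9.81, win=15, tol=0.4):
    # Stance detector: flag samples whose surrounding window keeps the
    # accelerometer magnitude within `tol` of gravity (foot at rest).
    mag = np.linalg.norm(acc, axis=1)
    stance = np.zeros(len(acc), dtype=bool)
    for i in range(len(acc) - win + 1):
        if np.all(np.abs(mag[i:i + win] - g) < tol):
            stance[i:i + win] = True
    return stance

def integrate_velocity(acc_nav, stance, dt):
    # Naive velocity integration with zero-velocity updates: whenever a
    # stance sample is detected, the velocity estimate is reset to zero,
    # which bounds the drift caused by accelerometer bias.
    v = np.zeros(len(acc_nav))
    for k in range(1, len(acc_nav)):
        v[k] = 0.0 if stance[k] else v[k - 1] + acc_nav[k] * dt
    return v
```

Even a constant accelerometer bias then produces zero velocity error during stance, which is what makes foot-mounted dead reckoning usable between RTK fixes.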
Show Figures

Figure 1
<p>Visualization of frame <math display="inline"><semantics> <mi>A</mi> </semantics></math> and frame <math display="inline"><semantics> <mi>B</mi> </semantics></math>. Frame <math display="inline"><semantics> <mi>A</mi> </semantics></math> rotates an angle <math display="inline"><semantics> <mi>θ</mi> </semantics></math> around the <math display="inline"><semantics> <mi>r</mi> </semantics></math> axis to become frame <math display="inline"><semantics> <mi>B</mi> </semantics></math>.</p>
Full article ">Figure 2
<p>Visualization of the various GNSS carrier frequencies.</p>
Full article ">Figure 3
<p>Close-up of tying the Xiaomi Mi 8 to our foot.</p>
Full article ">Figure 4
<p>RTK performances in static mode. (<b>a</b>) The visible signals; (<b>b</b>) vertical positioning errors; (<b>c</b>) horizontal positioning errors (single frequency); (<b>d</b>) horizontal positioning errors (dual frequency).</p>
Full article ">Figure 5
<p>View of the base station and the rover. (<b>a</b>) View of the M300 receiver; (<b>b</b>) view of the antenna on the roof; (<b>c</b>) view of the rover; (<b>d</b>) brief description of the rover.</p>
Full article ">Figure 6
<p>RTK performances based on the NovAtel receiver. (<b>a</b>) Visible signals and satellites; (<b>b</b>) number of different signals); (<b>c</b>) single-frequency RTK; (<b>d</b>) dual-frequency RTK.</p>
Full article ">Figure 7
<p>Signals tracked by the Xiaomi Mi 8. (<b>a</b>) Visible signals and satellites; (<b>b</b>) number of signals.</p>
Full article ">Figure 8
<p>RTK performances on the sports ground. (<b>a</b>) GPS(L1); (<b>b</b>) GPS(L1 + L5); (<b>c</b>) GPS(L1) + Galileo(E1); (<b>d</b>) GPS(L1 + L5) + Galileo(E1 + E5a).</p>
Full article ">Figure 9
<p>RTK performances shown in Google Earth.</p>
Full article ">Figure 10
<p>RTK performances on the basketball court. (<b>a</b>) GPS(L1) + Galileo(E1); (<b>b</b>) GPS(L1 + L5) + Galileo(E1 + E5a).</p>
Full article ">Figure 11
<p>The coordinate system used by the Android system.</p>
Full article ">Figure 12
<p>The performance of the Madgwick algorithm.</p>
Full article ">Figure 13
<p>The performance of the ZUPT aiding INS. (<b>a</b>) Place where we walked; (<b>b</b>) the estimated trajectory.</p>
Full article ">Figure 14
<p>A narrow path with tall teaching buildings on both sides.</p>
Full article ">Figure 15
<p>Comparison between the performances of RTK without the assistance of an IMU-based pedestrian navigation algorithm and RTK with the assistance of an IMU-based pedestrian navigation algorithm. (<b>a</b>) The performance of RTK without the assistance of an IMU-based pedestrian navigation algorithm; (<b>b</b>) the performance of RTK with the assistance of an IMU-based pedestrian navigation algorithm.</p>
Full article ">Figure 16
<p>Trajectories in Google Earth.</p>
Full article ">
17 pages, 10344 KiB  
Article
Hough Transform-Based Large Dynamic Reflection Coefficient Micro-Motion Target Detection in SAR
by Yang Zhou, Daping Bi, Aiguo Shen, Xiaoping Wang and Shuliang Wang
Sensors 2019, 19(14), 3227; https://doi.org/10.3390/s19143227 - 22 Jul 2019
Viewed by 3459
Abstract
Special phase modulation of SAR echoes resulting from target rotation or vibration is a phenomenon called the micro-Doppler (m-D) effect. Such an effect offers favorable information for micro-motion (MM) target detection, thereby improving the performance of the synthetic aperture radar (SAR) system. However, [...] Read more.
Special phase modulation of SAR echoes resulting from target rotation or vibration is a phenomenon called the micro-Doppler (m-D) effect. Such an effect offers favorable information for micro-motion (MM) target detection, thereby improving the performance of the synthetic aperture radar (SAR) system. However, when there are MM targets with large differences in reflection coefficient, the weak reflection components are difficult to detect. To address this problem, we propose a novel algorithm. First, we extract and detect the strongest reflection component. By removing the strongest remaining reflection component from the azimuth echo one by one, we detect the reflection components sequentially, from the strongest to the weakest. Our algorithm applies to detecting MM targets with different reflection coefficients and achieves high precision of parameter estimation. The results of simulation and field experiments verify the advantages of the algorithm. Full article
(This article belongs to the Special Issue Sensors In Target Detection)
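The "detect the strongest, subtract, repeat" structure of the proposed algorithm is CLEAN-like and can be sketched on a toy signal. For clarity, the sketch below uses plain tones with very different amplitudes in place of the sinusoidally frequency-modulated m-D returns, and a simple FFT peak search in place of the paper's Hough-transform parameter estimation; the point is that the weak component only becomes detectable after the strong one is removed.

```python
import numpy as np

def clean_extract(x, n_comp, fs):
    # Sequential detection: find the strongest spectral component,
    # estimate its frequency/amplitude/phase, subtract it from the
    # signal, and repeat, so weak components are no longer masked.
    t = np.arange(len(x)) / fs
    found = []
    for _ in range(n_comp):
        X = np.fft.rfft(x)
        k = np.argmax(np.abs(X))          # strongest remaining component
        f = k * fs / len(x)
        amp = 2.0 * np.abs(X[k]) / len(x)
        ph = np.angle(X[k])
        found.append((f, amp))
        x = x - amp * np.cos(2 * np.pi * f * t + ph)   # remove it
    return found
```

On a signal whose two tones differ in amplitude by a factor of 20, a single-pass peak search would only report the strong one; the iterative removal recovers both.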
Show Figures

Figure 1
<p>Radar-target geometry.</p>
Full article ">Figure 2
<p>Time-frequency (TF) distribution of three micro-motion (MM) targets with different reflection coefficients.</p>
Full article ">Figure 3
<p>The schematic diagram of TF curve extraction: (<b>a</b>) TF curve extraction process, (<b>b</b>) TF curve extraction result.</p>
Full article ">Figure 4
<p>The algorithm flow chart.</p>
Full article ">Figure 5
<p>Detection results of <math display="inline"><semantics> <mrow> <mi>x</mi> <mrow> <mo>(</mo> <mrow> <msub> <mi>t</mi> <mi mathvariant="normal">a</mi> </msub> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math>. (<b>a</b>) Autocorrelation result of <math display="inline"><semantics> <mrow> <mi>x</mi> <mrow> <mo>(</mo> <mrow> <msub> <mi>t</mi> <mi mathvariant="normal">a</mi> </msub> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>b</b>) TF distribution result of <math display="inline"><semantics> <mrow> <mi>x</mi> <mrow> <mo>(</mo> <mrow> <msub> <mi>t</mi> <mi mathvariant="normal">a</mi> </msub> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>c</b>) <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mn>1</mn> </msub> </mrow> </semantics></math>-domain; (<b>d</b>) <math display="inline"><semantics> <mrow> <mrow> <mo>{</mo> <mrow> <msub> <mi>A</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi>φ</mi> <mn>1</mn> </msub> </mrow> <mo>}</mo> </mrow> </mrow> </semantics></math> domain when <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mn>1</mn> </msub> <mo>=</mo> <mo>−</mo> <mn>69</mn> <mtext> </mtext> <mi>Hz</mi> </mrow> </semantics></math>.</p>
Full article ">Figure 6
<p>Detection results of <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>1</mn> </msub> <mrow> <mo>(</mo> <mrow> <msub> <mi>t</mi> <mi mathvariant="normal">a</mi> </msub> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math>. (<b>a</b>) Autocorrelation result of <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>1</mn> </msub> <mrow> <mo>(</mo> <mrow> <msub> <mi>t</mi> <mi mathvariant="normal">a</mi> </msub> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>b</b>) TF distribution result of <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>1</mn> </msub> <mrow> <mo>(</mo> <mrow> <msub> <mi>t</mi> <mi mathvariant="normal">a</mi> </msub> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>c</b>) <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mn>2</mn> </msub> </mrow> </semantics></math>-domain; (<b>d</b>) <math display="inline"><semantics> <mrow> <mrow> <mo>{</mo> <mrow> <msub> <mi>A</mi> <mn>2</mn> </msub> <mo>,</mo> <msub> <mi>φ</mi> <mn>2</mn> </msub> </mrow> <mo>}</mo> </mrow> </mrow> </semantics></math> domain when <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>21</mn> <mtext> </mtext> <mi>Hz</mi> </mrow> </semantics></math>.</p>
Full article ">Figure 7
<p>Detection result of <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>2</mn> </msub> <mrow> <mo>(</mo> <mrow> <msub> <mi>t</mi> <mi mathvariant="normal">a</mi> </msub> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math>. (<b>a</b>) Autocorrelation result of <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>2</mn> </msub> <mrow> <mo>(</mo> <mrow> <msub> <mi>t</mi> <mi mathvariant="normal">a</mi> </msub> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>b</b>) TF distribution result of <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>2</mn> </msub> <mrow> <mo>(</mo> <mrow> <msub> <mi>t</mi> <mi mathvariant="normal">a</mi> </msub> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>c</b>) <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mn>3</mn> </msub> </mrow> </semantics></math>-domain; (<b>d</b>) <math display="inline"><semantics> <mrow> <mrow> <mo>{</mo> <mrow> <msub> <mi>A</mi> <mn>3</mn> </msub> <mo>,</mo> <msub> <mi>φ</mi> <mn>3</mn> </msub> </mrow> <mo>}</mo> </mrow> </mrow> </semantics></math> domain when <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mn>3</mn> </msub> <mo>=</mo> <mn>43</mn> <mtext> </mtext> <mi>Hz</mi> </mrow> </semantics></math>.</p>
Full article ">Figure 8
<p>Detection result of <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>3</mn> </msub> <mrow> <mo>(</mo> <mrow> <msub> <mi>t</mi> <mi mathvariant="normal">a</mi> </msub> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math>. (<b>a</b>) Autocorrelation result of <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>3</mn> </msub> <mrow> <mo>(</mo> <mrow> <msub> <mi>t</mi> <mi mathvariant="normal">a</mi> </msub> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math>; (<b>b</b>) TF distribution result of <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>3</mn> </msub> <mrow> <mo>(</mo> <mrow> <msub> <mi>t</mi> <mi mathvariant="normal">a</mi> </msub> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math>.</p>
Full article ">Figure 9
<p>IRT result. (<b>a</b>) When the rotational center coordinate of T1 is <math display="inline"><semantics> <mrow> <mrow> <mo>(</mo> <mrow> <mo>−</mo> <mn>53</mn> <mtext> </mtext> <mi mathvariant="normal">m</mi> <mo>,</mo> <mn>8000</mn> <mtext> </mtext> <mi mathvariant="normal">m</mi> <mo>,</mo> <mn>0</mn> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math>. (<b>b</b>) When the rotational center coordinate of T1 is <math display="inline"><semantics> <mrow> <mrow> <mo>(</mo> <mrow> <mn>0</mn> <mo>,</mo> <mn>8000</mn> <mtext> </mtext> <mi mathvariant="normal">m</mi> <mo>,</mo> <mn>0</mn> </mrow> <mo>)</mo> </mrow> </mrow> </semantics></math>.</p>
Full article ">Figure 10
<p>Detection result in [<a href="#B15-sensors-19-03227" class="html-bibr">15</a>].</p>
Full article ">Figure 11
<p>Relative errors versus different Set the signal-to-noise ratios (SNRs). (<b>a</b>) Result of T1. (<b>b</b>) Result of T2. (<b>c</b>) Result of T3.</p>
Full article ">Figure 12
<p>On-site shooting. (<b>a</b>) Yun-8 aircraft; (<b>b</b>) Angle reflectors.</p>
Full article ">Figure 13
<p>SAR image result of the scene.</p>
Full article ">Figure 14
<p>Simulation detection result of rotating angle reflectors. (<b>a</b>) TF distribution result of the azimuth echo; (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mn>1</mn> </msub> </mrow> </semantics></math>-domain; (<b>c</b>) <math display="inline"><semantics> <mrow> <mrow> <mo>{</mo> <mrow> <msub> <mi>A</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi>φ</mi> <mn>1</mn> </msub> </mrow> <mo>}</mo> </mrow> </mrow> </semantics></math> domain when <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>1</mn> <mi>Hz</mi> </mrow> </semantics></math>; (<b>d</b>) TF distribution result of the residual signal; (<b>e</b>) <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mn>2</mn> </msub> </mrow> </semantics></math>-domain; (<b>f</b>) <math display="inline"><semantics> <mrow> <mrow> <mo>{</mo> <mrow> <msub> <mi>A</mi> <mn>2</mn> </msub> <mo>,</mo> <msub> <mi>φ</mi> <mn>2</mn> </msub> </mrow> <mo>}</mo> </mrow> </mrow> </semantics></math> domain when <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 15
<p>Real echo detection result of rotating angle reflectors. (<b>a</b>) TF distribution result of the azimuth echo; (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mn>1</mn> </msub> </mrow> </semantics></math>-domain; (<b>c</b>) <math display="inline"><semantics> <mrow> <mrow> <mo>{</mo> <mrow> <msub> <mi>A</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi>φ</mi> <mn>1</mn> </msub> </mrow> <mo>}</mo> </mrow> </mrow> </semantics></math> domain when <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>1</mn> <mi>Hz</mi> </mrow> </semantics></math>; (<b>d</b>) TF distribution result of the residual signal; (<b>e</b>) <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mn>2</mn> </msub> </mrow> </semantics></math>-domain; (<b>f</b>) <math display="inline"><semantics> <mrow> <mrow> <mo>{</mo> <mrow> <msub> <mi>A</mi> <mn>2</mn> </msub> <mo>,</mo> <msub> <mi>φ</mi> <mn>2</mn> </msub> </mrow> <mo>}</mo> </mrow> </mrow> </semantics></math> domain when <math display="inline"><semantics> <mrow> <msub> <mi>F</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>.</p>
Full article ">
16 pages, 545 KiB  
Article
Use of Computing Devices as Sensors to Measure Their Impact on Primary and Secondary Students’ Performance
by Francisco Luis Fernández-Soriano, Belén López, Raquel Martínez-España, Andrés Muñoz and Magdalena Cantabella
Sensors 2019, 19(14), 3226; https://doi.org/10.3390/s19143226 - 22 Jul 2019
Cited by 5 | Viewed by 4300
Abstract
The constant innovation in new technologies and the increase in the use of computing devices in different areas of the society have contributed to a digital transformation in almost every sector. This digital transformation has also reached the world of education, making it [...] Read more.
The constant innovation in new technologies and the increasing use of computing devices in different areas of society have contributed to a digital transformation in almost every sector. This digital transformation has also reached the world of education, making it possible for members of the educational community to adopt Learning Management Systems (LMS), where the digital contents replacing traditional textbooks are exploited and managed. This article studies the relationship between the type of computing device from which students access the LMS and how it affects their performance. To achieve this, the LMS accesses of students in a school covering stages from elementary to bachelor's degree have been monitored, with the different computing devices acting as sensors to gather data such as the type of device and operating system used by the students. The main conclusion is that students who access the LMS significantly improve their performance, and that the type of device and the operating system influence the number of passed subjects. Moreover, a predictive model has been generated to predict the number of passed subjects according to these factors, showing promising results. Full article
(This article belongs to the Special Issue Advanced Sensors Technology in Education)
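The Mann-Whitney test used throughout the figures compares the distribution of passed subjects between students who did and did not access the LMS. Its U statistic can be computed in a self-contained way from joint ranks, as below; the sample data in the usage check are synthetic, purely to show the mechanics.

```python
import numpy as np

def mann_whitney_u(a, b):
    # Rank all observations jointly (ties get the average rank), then
    # U_a = R_a - n_a(n_a + 1)/2, where R_a is the rank sum of group a.
    combined = np.concatenate([a, b])
    order = combined.argsort(kind="stable")
    ranks = np.empty(combined.size)
    i = 0
    while i < combined.size:
        j = i
        while j < combined.size and combined[order[j]] == combined[order[i]]:
            j += 1
        ranks[order[i:j]] = (i + j + 1) / 2.0   # average of ranks i+1 .. j
        i = j
    r_a = ranks[: len(a)].sum()
    u_a = r_a - len(a) * (len(a) + 1) / 2.0
    u_b = len(a) * len(b) - u_a                  # U_a + U_b = n_a * n_b
    return u_a, u_b
```

The smaller of the two U values is then compared against a critical value (or converted to a p-value via the normal approximation) to decide whether the two groups differ.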
Show Figures

Figure 1
<p>Statistical differences according to the Mann-Whitney test considering if there is any relation between accessing or not accessing the LMS and the number of passed subjects. The right side of the subfigures (in green color) represents the students who have logged into the LMS and the left side (blue color) represents the students who have not logged into the LMS. (<b>a</b>) Differences between the number of passed subjects for students accessing and not accessing the LMS considering all the educational stages. (<b>b</b>) Differences between the number of passed subjects for students accessing and not accessing the LMS considering the elementary stage. (<b>c</b>) Differences between the number of passed subjects for students accessing and not accessing the LMS considering the secondary stage. (<b>d</b>) Differences between the number of passed subjects for students accessing and not accessing the LMS considering the bachelor degree’s stage.</p>
Full article ">Figure 2
<p>Statistical differences according to the Mann-Whitney test considering if there is any relation between accessing or not accessing the LMS and the number of failed subjects. The right side of the subfigures (in green color) represents the students who have logged into the LMS and the left side (blue color) represents the students who have not logged into the LMS. (<b>a</b>) Differences between the number of failed subjects for students accessing and not accessing the LMS considering all the educational stages. (<b>b</b>) Differences between the number of failed subjects for students accessing and not accessing the LMS considering the elementary stage. (<b>c</b>) Differences between the number of failed subjects for students accessing and not accessing the LMS considering the secondary stage. (<b>d</b>) Differences between the number of failed subjects for students accessing and not accessing the LMS considering the bachelor degree’s stage.</p>
Full article ">
15 pages, 2665 KiB  
Article
Meat and Fish Freshness Assessment by a Portable and Simplified Electronic Nose System (Mastersense)
by Silvia Grassi, Simona Benedetti, Matteo Opizzio, Elia di Nardo and Susanna Buratti
Sensors 2019, 19(14), 3225; https://doi.org/10.3390/s19143225 - 22 Jul 2019
Cited by 73 | Viewed by 7606
Abstract
The evaluation of meat and fish quality is crucial to ensure that products are safe and meet the consumers’ expectation. The present work aims at developing a new low-cost, portable, and simplified electronic nose system, named Mastersense, to assess meat and fish freshness. [...] Read more.
The evaluation of meat and fish quality is crucial to ensure that products are safe and meet consumers’ expectations. The present work aims at developing a new low-cost, portable, and simplified electronic nose system, named Mastersense, to assess meat and fish freshness. Four metal oxide semiconductor sensors were selected by principal component analysis and inserted in an “ad hoc” designed measuring chamber. The Mastersense system was used to test beef and poultry slices and plaice and salmon fillets during their shelf life at 4 °C, from the day of packaging to beyond the expiration date. The same samples were tested for Total Viable Count, and the microbial results were used to define freshness classes and to develop classification models by the K-Nearest Neighbours algorithm and Partial Least Squares–Discriminant Analysis. All the obtained models gave global sensitivity and specificity in prediction higher than 83.3% and 84.0%, respectively. Moreover, a McNemar’s test was performed to compare the prediction abilities of the two classification algorithms, which proved comparable (p > 0.05). Thus, the Mastersense prototype implemented with the K-Nearest Neighbours model is considered the most convenient strategy for assessing meat and fish freshness. Full article
(This article belongs to the Section Chemical Sensors)
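The McNemar comparison mentioned in the abstract only looks at the discordant samples — those one classifier labels correctly and the other does not. A minimal implementation with the usual continuity correction is below; the toy freshness labels in the check are invented, not the study's data.

```python
def mcnemar_chi2(y_true, pred_a, pred_b):
    # b = samples classifier A gets right and B gets wrong; c = the reverse.
    # Samples both classifiers agree on carry no information about which
    # classifier is better and are ignored.
    b = sum(pa == t and pb != t for t, pa, pb in zip(y_true, pred_a, pred_b))
    c = sum(pa != t and pb == t for t, pa, pb in zip(y_true, pred_a, pred_b))
    if b + c == 0:
        return 0.0          # the classifiers never disagree
    # Chi-squared with continuity correction, 1 degree of freedom; values
    # below ~3.84 mean no significant difference at p = 0.05, which is how
    # "comparable (p > 0.05)" is read in the abstract.
    return (abs(b - c) - 1) ** 2 / (b + c)
```

A statistic of 16/7 ≈ 2.29, for instance, stays under 3.84, so the two classifiers would be declared statistically comparable.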
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Layout of the measurement chamber: EV = solenoid valve; EV1 drive = control driver for solenoid valve; EV2 drive = driver for additional solenoid valve; CN6-CN7-CN8 = connectors dedicated for settings; FTDI = USB to Serial converter; USB = USB port; S1-S2-S3-S4 = connectors for the four sensor’s board; MCU = microcontroller; PUMP DRIVE = pump controller; DC/DC = DC/DC 12 V out converter for battery charge management; FLAT = Connector used for debugging; PUMP = brushless pump (model KNF NMP 03 KPDCB-1, 3.3 Volt) for continuous 24 h operation; SW = ON/OFF switch; DC JACK = power supply input (DC 15–36 V).</p>
Full article ">Figure 2
<p>Sensor control board (<b>a</b>) and motherboard (<b>b</b>).</p>
Full article ">Figure 3
<p>Metal oxide semiconductor sensor responses for meat and fish samples collected the first day of sampling (<b>a</b>) and at the expiration date of the product (<b>b</b>). The histograms represent the 10 sensor responses after 50 s of sampling. Each bar colour corresponds to a sensor.</p>
Full article ">Figure 4
<p>Principal Component Analysis-biplots of e-nose data collected on beef (<b>a</b>) poultry (<b>b</b>) plaice (<b>c</b>) and salmon (<b>d</b>) samples classified as unspoiled (US)-green; acceptable (<b>A</b>)-yellow, spoiled (<b>S</b>)-red.</p>
Full article ">
20 pages, 23888 KiB  
Article
SemanticDepth: Fusing Semantic Segmentation and Monocular Depth Estimation for Enabling Autonomous Driving in Roads without Lane Lines
by Pablo R. Palafox, Johannes Betz, Felix Nobis, Konstantin Riedl and Markus Lienkamp
Sensors 2019, 19(14), 3224; https://doi.org/10.3390/s19143224 - 22 Jul 2019
Cited by 22 | Viewed by 7091
Abstract
Typically, lane departure warning systems rely on lane lines being present on the road.
However, in many scenarios, e.g., secondary roads or some streets in cities, lane lines are either
not present or not sufficiently well signaled. In this work, we present a [...] Read more.
Typically, lane departure warning systems rely on lane lines being present on the road.
However, in many scenarios, e.g., secondary roads or some streets in cities, lane lines are either
not present or not sufficiently well signaled. In this work, we present a vision-based method to
locate a vehicle within the road when no lane lines are present using only RGB images as input.
To this end, we propose to fuse together the outputs of a semantic segmentation and a monocular
depth estimation architecture to reconstruct locally a semantic 3D point cloud of the viewed scene.
We only retain points belonging to the road and, additionally, to any kind of fences or walls that
might be present right at the sides of the road. We then compute the width of the road at a certain
point on the planned trajectory and, additionally, what we denote as the fence-to-fence distance.
Our system is suited to any kind of motoring scenario and is especially useful when lane lines are
not present on the road or do not signal the path correctly. The additional fence-to-fence distance
computation is complementary to the road’s width estimation. We quantitatively test our method
on a set of images featuring streets of the city of Munich that contain a road-fence structure, so as
to compare our two proposed variants, namely the road’s width and the fence-to-fence distance
computation. In addition, we also validate our system qualitatively on the Stuttgart sequence of the
publicly available Cityscapes dataset, where no fences or walls are present at the sides of the road,
thus demonstrating that our system can be deployed in a standard city-like environment. For the
benefit of the community, we make our software open source. Full article
(This article belongs to the Special Issue Sensor Data Fusion for Autonomous and Connected Driving)
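The road-width computation the abstract describes — measuring the lateral extent of the road's 3D points at a chosen point of the planned trajectory — can be sketched as below. The camera-frame axis convention (x lateral, y up, z forward), the slab half-width, and the percentile trimming are illustrative assumptions, not the paper's exact plane-fitting procedure.

```python
import numpy as np

def road_width_at(road_points, z0, band=0.5, trim=1.0):
    # Take the slab of road-labelled points whose forward distance lies
    # within `band` of z0, then measure its lateral extent; percentile
    # trimming keeps stray mislabelled points from inflating the width.
    slab = road_points[np.abs(road_points[:, 2] - z0) < band]
    if slab.shape[0] == 0:
        return None          # no road points at that forward distance
    left = np.percentile(slab[:, 0], trim)
    right = np.percentile(slab[:, 0], 100.0 - trim)
    return right - left
```

The fence-to-fence distance would follow the same pattern, applied to the fence/wall points instead of the road points.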
Show Figures

Figure 1
<p>Our method locates the vehicle within the road in scenarios where no lane lines are available. To do so, we reconstruct a local, semantic 3D point cloud of the viewed scene and then propose two complementary computations: (1) extracting the width of the road by employing only the road’s 3D point cloud and (2) additionally leveraging fences/walls to the sides of the road to compute the fence-to-fence distance. Our system can work in any kind of motoring scenario. (<b>a</b>) Original image. (<b>b</b>) Computation of the road’s width and the fence-to-fence distance.</p>
Full article ">Figure 2
<p>Example views of Roborace racetracks. The main elements are the road and the fences. (<b>a</b>) Frame from New York’s racetrack. (<b>b</b>) Frame from Montreal’s racetrack.</p>
Full article ">Figure 3
<p>Typical scheme of a fully-convolutional network [<a href="#B13-sensors-19-03224" class="html-bibr">13</a>].</p>
Full article ">Figure 4
<p>Pipeline describing our proposed approach.</p>
Full article ">Figure 5
<p>Segmentation of Roborace frames berlin_00127 (left) and berlin_00234 (right) into the classes road (purple) and fence (blue) by a model trained on Cityscapes (<b>a</b>,<b>b</b>) and by a model trained on Roborace images (<b>c</b>,<b>d</b>).</p>
Full article ">Figure 6
<p>Semantic segmentation step in our method’s pipeline. (<b>a</b>) Original image. (<b>b</b>) Segmented image. (<b>c</b>) Road mask. (<b>d</b>) Fence mask.</p>
Full article ">Figure 7
<p>Disparity map obtained from a single image using monodepth [<a href="#B25-sensors-19-03224" class="html-bibr">25</a>]. (<b>a</b>) Original image. (<b>b</b>) Disparity map.</p>
Full article ">Figure 8
<p>3D point clouds featuring points belonging to classes road and fence. (<b>a</b>) Front view. (<b>b</b>) Top view. (<b>c</b>) Left view. (<b>d</b>) Right view.</p>
Full article ">Figure 9
<p>Exemplary computation of the road’s width. (<b>a</b>) Top view. (<b>b</b>) Right view.</p>
Full article ">Figure 10
<p>Exemplary computation of the fence-to-fence distance. The light green line represents the fence-to-fence distance, while the red line represents the road’s width. (<b>a</b>) Top view. (<b>b</b>) Right view.</p>
Full article ">Figure 11
<p>Munich test set, on which we quantitatively tested the performance of the proposed system. (<b>a</b>) Munich-1. (<b>b</b>) Munich-2. (<b>c</b>) Munich-3. (<b>d</b>) Munich-4. (<b>e</b>) Munich-5.</p>
Full article ">Figure 12
<p>Exemplary results of applying our method on frame Munich-3 from the Munich test set. (<b>a</b>) Original image; (<b>b</b>) Output image displaying the predicted distances on the segmented frame; (<b>c</b>) Post-processed 3D point cloud, featuring the fitted planes and the computed distances: road’s width (red line) and fence-to-fence (green line); (<b>d</b>) Initial 3D point cloud, featuring the computed distances: road’s width (red line) and fence-to-fence (green line).</p>
Full article ">Figure 13
<p>We test our system on the Stuttgart sequence of the Cityscapes dataset. The complete sequence can be found at <a href="https://youtu.be/0yBb6kJ3mgQ" target="_blank">https://youtu.be/0yBb6kJ3mgQ</a>, as well as in the <a href="#app1-sensors-19-03224" class="html-app">Supplementary Material</a>. (<b>a</b>) Sample Frame 5185 of the Stuttgart sequence. (<b>b</b>) Sample Frame 5209 of the Stuttgart sequence.</p>
Full article ">
8 pages, 1122 KiB  
Article
Orthogonal Demodulation Pound–Drever–Hall Technique for Ultra-Low Detection Limit Pressure Sensing
by Jinliang Hu, Sheng Liu, Xiang Wu, Liying Liu and Lei Xu
Sensors 2019, 19(14), 3223; https://doi.org/10.3390/s19143223 - 22 Jul 2019
Cited by 7 | Viewed by 4064
Abstract
We report on a novel optical microcavity sensing scheme using the orthogonal demodulation Pound–Drever–Hall (PDH) technique. We found that a larger sensitivity could be obtained over a broad range of cavity quality factors (Q). Taking microbubble resonator (MBR) pressure sensing as an example, [...] Read more.
We report a novel optical microcavity sensing scheme based on the orthogonal demodulation Pound–Drever–Hall (PDH) technique, which yields higher sensitivity over a broad range of cavity quality factors (Q). Taking microbubble resonator (MBR) pressure sensing as an example, a lower detection limit was achieved than with the conventional wavelength-shift detection method. When the MBR cavity Q is about 10<sup>5</sup>–10<sup>6</sup>, the technique decreases the detection limit by one to two orders of magnitude. The pressure-frequency sensitivity is 11.6 GHz/bar at a wavelength of 850 nm, and the detection limit can approach 0.0515 mbar. The technique can also be applied to other kinds of microcavity sensors to improve their sensing performance. Full article
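As a quick sanity check of the figures quoted in the abstract: assuming a linear pressure-to-frequency response (an assumption for this back-of-envelope sketch), the stated sensitivity and detection limit together imply the smallest frequency shift the readout must resolve. The constants come from the abstract; the function name is ours.

```python
# Back-of-envelope check of the abstract's figures, assuming a linear
# pressure-to-frequency response (illustrative only).

SENSITIVITY_GHZ_PER_BAR = 11.6   # pressure-frequency sensitivity (abstract)
DETECTION_LIMIT_MBAR = 0.0515    # reported detection limit (abstract)

def min_resolvable_shift_mhz(sensitivity_ghz_per_bar: float,
                             detection_limit_mbar: float) -> float:
    """Smallest frequency shift (MHz) that must be resolved to reach
    the stated pressure detection limit."""
    detection_limit_bar = detection_limit_mbar * 1e-3
    shift_ghz = sensitivity_ghz_per_bar * detection_limit_bar
    return shift_ghz * 1e3  # GHz -> MHz

shift = min_resolvable_shift_mhz(SENSITIVITY_GHZ_PER_BAR, DETECTION_LIMIT_MBAR)
print(round(shift, 3))  # ~0.597 MHz
```

So reaching a 0.0515 mbar detection limit requires resolving frequency shifts of roughly 0.6 MHz, well within reach of the PDH error-signal readout.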
Show Figures

Figure 1
<p>Schematic setup of lock-in and orthogonal demodulation Pound–Drever–Hall (PDH) technique with 850 nm tunable laser (New Focus TLB 6716). PC, polarization controller; PM, phase electro-optical modulator (iXBlue NIR-MPX800); MBR, micro-bubble resonator; PD, photoelectric detector (Thorlabs PDA10A2); LPF, low-pass filter; DAQ, NI data acquisition card (NI PCIe 6351); HFSG, high frequency function signal generator; PS, pressure sensor.</p>
Full article ">Figure 2
<p>(<b>a</b>) Normalized mode demodulated signal when Q is 3 × 10<sup>6</sup> at a wavelength of 850 nm; (<b>b</b>) wavelength discriminant (WD) at different Q near resonant wavelength at a wavelength of 850 nm.</p>
Full article ">Figure 3
<p>Normalized mode ratio of (<b>a</b>) WD and sensitivity of light intensity at different Q when <span class="html-italic">f</span><sub>m</sub> = 50 MHz; (<b>b</b>) WD and maximal sensitivity of light intensity at different modulation frequency when <span class="html-italic">Q</span> = 3 × 10<sup>6</sup>.</p>
Full article ">Figure 4
<p>(<b>a</b>) Wavelength shift sensing results; the inset illustrates <span class="html-italic">Q</span> = 2.26 × 10<sup>5</sup> and the process of wavelength shift; (<b>b</b>) orthogonal demodulation PDH sensing results; the inset illustrates the system noise.</p>
Full article ">Figure 5
<p>(<b>a</b>) Wavelength shift sensing results; the inset illustrates <span class="html-italic">Q</span> = 2.34 × 10<sup>6</sup> and the process of wavelength shift; (<b>b</b>) orthogonal demodulation PDH sensing results; the inset illustrates the system noise.</p>
Full article ">
12 pages, 7376 KiB  
Article
Wireless, Portable Fiber Bragg Grating Interrogation System Employing Optical Edge Filter
by Ken Ogawa, Shouhei Koyama, Yuuki Haseda, Keiichi Fujita, Hiroaki Ishizawa and Keisaku Fujimoto
Sensors 2019, 19(14), 3222; https://doi.org/10.3390/s19143222 - 22 Jul 2019
Cited by 34 | Viewed by 6042
Abstract
A small-size, high-precision fiber Bragg grating interrogator was developed for continuous plethysmograph monitoring. The interrogator employs optical edge filters, which were integrated with a broad-band light source and photodetector to demodulate the Bragg wavelength shift. An amplifier circuit was designed to effectively amplify the plethysmograph signal, obtained as a small vibration of optical power on the large offset. The standard deviation of the measured Bragg wavelength was about 0.1 pm. The developed edge filter module and amplifier circuit were encased with a single-board computer and communicated with a laptop computer via Wi-Fi. As a result, the plethysmograph was clearly obtained remotely, indicating the possibility of continuous vital sign measurement. Full article
(This article belongs to the Special Issue Wearable Sensors and Devices for Healthcare Applications)
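The edge-filter demodulation described above reduces, in its linear operating region, to a first-order conversion from the normalized differential signal D to the Bragg wavelength. A minimal sketch using the linear fits quoted in the Figure 5 caption below (the coefficients are from that caption; the function names are illustrative):

```python
# Convert the normalized differential signal D of each edge filter into a
# Bragg wavelength (nm) using the linear fits quoted in the Figure 5
# caption. Function names are illustrative; coefficients are from the paper.

def bragg_wavelength_1543(d: float) -> float:
    """1543 nm-centered edge filter (positive slope)."""
    return 1.3321 * d + 1542.9567

def bragg_wavelength_1561(d: float) -> float:
    """1561 nm-centered edge filter (negative slope)."""
    return -1.6497 * d + 1560.5514

print(bragg_wavelength_1543(0.0))  # 1542.9567 nm at D = 0
print(bragg_wavelength_1561(0.0))  # 1560.5514 nm at D = 0
```

Because the conversion is a fixed linear map, it can run on the single-board computer in real time; only the two fitted coefficients per filter need to be stored.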
Show Figures

Figure 1
<p>(<b>a</b>) The schematic diagram of the FBG sensor. The incident light propagates through the optical fiber core, which is surrounded by cladding. FBGs have a periodic refractive-index change inscribed on the optical fiber core. An example of the incident light (<b>b</b>) and the measured reflected spectrum of the FBG and its change under tension (<b>c</b>) are shown in the graphs. These spectra were obtained with an optical spectrum analyzer (OSA) (AQ6370D, Yokogawa Electric Co., Tokyo, Japan) at a resolution setting of 0.1 nm.</p>
Full article ">Figure 2
<p>(<b>a</b>) Typical setup of the edge filter-based FBG interrogation. The light emitted from a broad band light source (BBS) enters the FBG and only the light of the Bragg wavelength is reflected. The reflected light enters the edge filter and is divided into the reflected and transmitted light, whose ratio depends on the wavelength, as in (<b>b</b>). These lights are observed by photo-detectors (PDs). (<b>b</b>) Conceptual diagram of the wavelength dependence of the transmission/reflection ratio of the edge filter. The dashed square indicates the slope used for FBG demodulation. The solid and dashed lines indicate the transmittance and reflectance ratio, respectively. The FBG wavelength is set in the dashed square range, in which the wavelength dependence could approximate a linear function.</p>
Full article ">Figure 3
<p>The edge filter module and amplifier circuit. The module contains an SLD, five PDs, a WDM, a half mirror, a SC/PC-pigtail and two edge filters.</p>
Full article ">Figure 4
<p>Schematic diagram of the edge filter module and amplifier circuit. (<b>a</b>) The schematic diagram of the developed edge filter module. In addition to <a href="#sensors-19-03222-f002" class="html-fig">Figure 2</a>a, this module contains a WDM filter to measure two FBGs with different center wavelengths. Corresponding to each FBG, two edge filters are used. (<b>b</b>) The schematic diagram of the amplifier circuit. Each PD converts the light into an electrical current, then the current-voltage converter converts this into voltage. This voltage is divided into two paths: the first path enters the low-pass filter (LPF) to output the slower signal (output 2). The other is amplified by the differential amplifier with the output voltage of the low-pass filter to obtain a largely amplified faster signal (output 1).</p>
Full article ">Figure 5
<p>(<b>a</b>,<b>b</b>) The spectra of the (<b>a</b>) 1543 and (<b>b</b>) 1561 nm-centered edge filters and FBGs. (<b>c</b>,<b>d</b>) The results of the Bragg wavelength vs. normalized differential signal of the (<b>c</b>) 1543 and (<b>d</b>) 1561 nm-centered edge filters. Dashed lines indicate the linear approximation of the data. The formulas are <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">λ</mi> <mi mathvariant="normal">B</mi> </msub> <mo>=</mo> <mn>1.3321</mn> <mtext> </mtext> <mi mathvariant="normal">D</mi> <mo>+</mo> <mn>1542.9567</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="sans-serif">λ</mi> <mi mathvariant="normal">B</mi> </msub> <mo>=</mo> <mo>−</mo> <mn>1.6497</mn> <mtext> </mtext> <mi mathvariant="normal">D</mi> <mo>+</mo> <mn>1560.5514</mn> </mrow> </semantics></math> for the 1543 and 1561 nm-centered edge filters, respectively.</p>
Full article ">Figure 6
<p>The measurement setup of the experiment to obtain the coefficients of the normalized-differential-to-wavelength conversion formula, with the 1543 and 1561 nm-centered FBGs glued separately on a steel plate and the strain controlled manually. The transmitted light of the FBGs is measured using an OSA, from which the Bragg wavelengths are calculated.</p>
Full article ">Figure 7
<p>The casing of the interrogator (left). The edge filter module, single-board computer and smaller AD board are encased. A portable battery (right) supplies the power.</p>
Full article ">Figure 8
<p>The settings of the plethysmograph measurement. The FBG sensor is taped near the brachial artery. Measured data are saved on a PC, which is connected to the interrogator via Wi-Fi.</p>
Full article ">Figure 9
<p>Plethysmogram obtained with the developed FBG interrogator and commercially available interrogator.</p>
Full article ">Figure 10
<p>A Butterworth-type bandpass filter (order 2, low-cutoff frequency 0.5 Hz, high-cutoff frequency 5 Hz) is applied onto the data of <a href="#sensors-19-03222-f009" class="html-fig">Figure 9</a>. The offset value of the reference FBG is subtracted before filtering.</p>
Full article ">
18 pages, 35018 KiB  
Article
Experimental Validation of Gaussian Process-Based Air-to-Ground Communication Quality Prediction in Urban Environments
by Pawel Ladosz, Jongyun Kim, Hyondong Oh and Wen-Hua Chen
Sensors 2019, 19(14), 3221; https://doi.org/10.3390/s19143221 - 22 Jul 2019
Cited by 1 | Viewed by 3475
Abstract
This paper presents a detailed experimental assessment of Gaussian Process (GP) regression for air-to-ground communication channel prediction for relay missions in urban environments. Considering the restrictions on outdoor urban flight experiments, a way to simulate complex urban environments at an indoor room scale is introduced. Since water significantly absorbs wireless communication signals, water containers are used in place of the buildings of a real-world city. To evaluate the performance of the GP-based channel prediction approach, several indoor experiments in an artificial urban environment were conducted. The performance of the GP-based and empirical model-based prediction methods for a relay mission was evaluated by measuring and comparing the communication signal strength at the optimal relay position obtained from each method. The GP-based prediction approach shows an advantage over the model-based one, as it provides reasonable performance in dynamic urban environments without needing a priori information about the environment (e.g., a 3D map of the city and communication model parameters). Full article
(This article belongs to the Section Sensor Networks)
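A minimal numerical sketch of GP regression for channel prediction, in the spirit of the approach assessed above: interpolate noisy RSSI samples over ground positions with an RBF kernel and read off the posterior mean and variance at unvisited positions. The kernel choice, hyperparameters, and toy data here are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=5.0, variance=4.0):
    """Squared-exponential kernel between two sets of 2-D positions."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale ** 2)

def gp_predict(x_train, y_train, x_test, noise=1.0):
    """Posterior mean and variance of RSSI at x_test given noisy samples."""
    k = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    k_s = rbf_kernel(x_train, x_test)
    k_ss = rbf_kernel(x_test, x_test)
    alpha = np.linalg.solve(k, y_train)
    mean = k_s.T @ alpha
    v = np.linalg.solve(k, k_s)
    var = np.diag(k_ss - k_s.T @ v)
    return mean, var

# Toy usage: RSSI (dBm) sampled along a scan line, predicted between samples.
x_train = np.array([[0.0, 0.0], [5.0, 0.0], [10.0, 0.0]])
y_train = np.array([-40.0, -55.0, -60.0])
mean, var = gp_predict(x_train, y_train, np.array([[2.5, 0.0]]))
```

The posterior variance is what makes GP attractive for relay placement: it quantifies how trustworthy the predicted signal strength is at positions the UAV has not yet scanned.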
Show Figures

Figure 1
<p>Pattern of UAV scan flight on a sample urban scenario.</p>
Full article ">Figure 2
<p>RSSI values with the distance between the UGV (Turtlebot) and the quadrotor UAV. The distance is the ground distance between the UGV and the UAV.</p>
Full article ">Figure 3
<p>Experimental procedure for calculating the LOS signal strength model.</p>
Full article ">Figure 4
<p>Reduction of signal strength with the length of line-of-sight obstruction in a building.</p>
Full article ">Figure 5
<p>Experimental procedure for calculating the NLOS signal reduction model.</p>
Full article ">Figure 6
<p>A snapshot of an indoor flight experiment in an artificial urban environment where there are water containers inside boxes representing buildings.</p>
Full article ">Figure 7
<p>Overview of the Turtlebot 3 UGV used in this experiment with important components highlighted.</p>
Full article ">Figure 8
<p>Overview of the quad-rotor UAV used in this experiment with important components highlighted.</p>
Full article ">Figure 9
<p>System overview for the aerial relay vehicle.</p>
Full article ">Figure 10
<p>Hardware overview of common ROS/Autopilot system components.</p>
Full article ">Figure 11
<p>Experiment result for Case I.</p>
Full article ">Figure 12
<p>Experiment results for Case II.</p>
Full article ">Figure 13
<p>Experiment results for Case III.</p>
Full article ">Figure 14
<p>Experiment results for Case IV.</p>
Full article ">Figure 15
<p>Error histogram averaged over multiple runs.</p>
Full article ">
28 pages, 26116 KiB  
Article
Monocular Robust Depth Estimation Vision System for Robotic Tasks Interventions in Metallic Targets
by Carlos Veiga Almagro, Mario Di Castro, Giacomo Lunghi, Raúl Marín Prades, Pedro José Sanz Valero, Manuel Ferre Pérez and Alessandro Masi
Sensors 2019, 19(14), 3220; https://doi.org/10.3390/s19143220 - 22 Jul 2019
Cited by 10 | Viewed by 5441
Abstract
Robotic interventions in hazardous scenarios need to pay special attention to safety, as in most cases it is necessary to have an expert operator in the loop. Moreover, the use of a multi-modal Human-Robot Interface allows the user to interact with the robot using manual control in critical steps, as well as semi-autonomous behaviours in more secure scenarios, for example by using object tracking and recognition techniques. This paper describes a novel vision system to track and estimate the depth of metallic targets for robotic interventions. The system has been designed for on-hand monocular cameras, focusing on overcoming lack of visibility and partial occlusions. This solution has been validated during real interventions at the European Organization for Nuclear Research (CERN) accelerator facilities, achieving 95% success in autonomous mode and 100% in a supervised manner. The system increases the safety and efficiency of the robotic operations, reducing the cognitive fatigue of the operator during non-critical mission phases. The integration of such an assistance system is especially important when facing complex (or repetitive) tasks, in order to reduce the workload and accumulated stress of the operator, enhancing the performance and safety of the mission. Full article
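Depth estimation with an on-hand monocular camera after a known translation reduces to classic triangulation: the robot's translation acts as a baseline, and the pixel disparity of the tracked target gives the depth. A hedged sketch of that geometric core with illustrative numbers (not calibration data from the paper):

```python
# Classic motion-parallax triangulation for a laterally translated
# pinhole camera: Z = f * b / d. Numbers below are illustrative.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Depth (m) of a tracked point from focal length (px), known camera
    translation (m), and observed pixel disparity of the point."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A 5 cm arm translation shifting the tracked ROI by 40 px at f = 800 px:
z = depth_from_disparity(focal_px=800.0, baseline_m=0.05, disparity_px=40.0)
print(z)  # 1.0 m
```

In practice the paper's system tracks the ROI across frames (SURF+KCF) to obtain the disparity robustly under occlusions; this sketch only shows the final depth relation.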
Show Figures

Figure 1
<p>Unified multimodal human–robot interaction (HRI) to control the modular and reconfigurable CERNBot robotic system.</p>
Full article ">Figure 2
<p>Diagram of the general principle of the system's operation.</p>
Full article ">Figure 3
<p>Left to right: initial state; rotation and/or translation in the same direction; reduction of the region of interest (ROI) as the target moves away; enlargement of the ROI as the target approaches.</p>
Full article ">Figure 4
<p>SURF+KCF system sequence working with different patterns in the same execution. The red square shows the position where the pattern was taken. The colourful square depicts the SURF-based homography estimation, and the blue square represents the tracked ROI.</p>
Full article ">Figure 5
<p>Triangulation proposal in <span class="html-italic">X</span> and <span class="html-italic">Y</span> axis. X1P1 and Y1P1 are the projection of the point (P) on the first image, and X2P2 and Y2P2 the projection of the same point on the second picture.</p>
Full article ">Figure 6
<p>Deep Learning integration diagram.</p>
Full article ">Figure 7
<p>(<b>Top</b>) Raw key-point correlations between two images. (<b>Middle</b>) Key-point correlations between the two images after the translation adaptation of the isolated ROI (used as a pattern). (<b>Bottom</b>) Key-point correlations after filtering with the Euclidean-distance-based threshold.</p>
Full article ">Figure 8
<p>Rotation solution for the homography.</p>
Full article ">Figure 9
<p>Task sequence carried out by CERNBot2: (<b>a</b>) target detection through a Pan–Tilt–Zoom (PTZ) camera, (<b>b</b>) approaching to the target, (<b>c</b>) depth estimation (<b>d</b>) switch actuation.</p>
Full article ">Figure 10
<p>CERNBot in the scorpion setup, showing how the robot carries the two kinds of Axis cameras outlined above.</p>
Full article ">Figure 11
<p>PTZ cameras: (<b>a</b>) Axis V5914 PTZ, usually attached to the CERNBot platform. (<b>b</b>) Bowtech BP-DTR-100-Z underwater camera for further use in radioactive-dust and underwater scenarios.</p>
Full article ">Figure 12
<p>Use of the square-homography intersection to fix the orientation. The squares indicate: the left one needs to turn right, the centre one is well oriented, and the right one needs to turn left.</p>
Full article ">Figure 13
<p>Operator guidance by the tracking-based depth estimation system on a metallic surface.</p>
Full article ">Figure 14
<p>System behavior under partial occlusions.</p>
Full article ">Figure 15
<p>Recovering System diagram.</p>
Full article ">Figure 16
<p>Confidence test. Grey and blue lines represent the current distance range. The orange line is the estimation. The red box shows the only error over 1 cm.</p>
Full article ">Figure 17
<p>The performance of the tracking algorithms tested prior to system development, showing the origin position, the translation in <span class="html-italic">X</span>, and the translation in <span class="html-italic">Y</span>. Translations are with respect to the robot TCP.</p>
Full article ">Figure 18
<p>Difference in stability between the two proposed solutions (SURF-based and KCF-based), where the abscissa represents the distance to the target and the ordinate the percentage of frames providing the correct measurement.</p>
Full article ">Figure 19
<p>Relationship between translation and rotation (<span class="html-italic">X</span> and/or <span class="html-italic">Y</span> axes) to achieve the triangulation.</p>
Full article ">Figure 20
<p>System execution, where the tracking and estimation are shown via augmented reality (AR).</p>
Full article ">Figure 21
<p>Grasping Determination calculation on a metallic connector, to be grasped and inserted (specular symmetry).</p>
Full article ">Figure 22
<p>Grasping Determination calculation on a rounded and symmetric metallic object (radial symmetry).</p>
Full article ">
10 pages, 3287 KiB  
Article
Two Degree-of-Freedom Fiber-Coupled Heterodyne Grating Interferometer with Milli-Radian Operating Range of Rotation
by Fuzhong Yang, Ming Zhang, Yu Zhu, Weinan Ye, Leijie Wang and Yizhou Xia
Sensors 2019, 19(14), 3219; https://doi.org/10.3390/s19143219 - 22 Jul 2019
Cited by 14 | Viewed by 3644
Abstract
In the displacement measurement of the wafer stage in lithography machines, signal quality is affected by the relative angular position between the encoder head and the grating. In this study, a two-degree-of-freedom fiber-coupled heterodyne grating interferometer with a large operating range of rotation is presented. Fibers without fiber couplers are utilized to receive the interference beams, yielding high-contrast signals under large angular displacements; ZEMAX ray-tracing simulations and experimental validation have been carried out. Meanwhile, a reference beam generated inside the encoder head is adopted to suppress the thermal drift of the interferometer. Experimental results prove that the proposed grating interferometer achieves sub-nanometer displacement measurement stability in both the in-plane and out-of-plane directions, with 3σ values of 0.246 nm and 0.465 nm, respectively, within 30 s. Full article
(This article belongs to the Section Optical Sensors)
Show Figures

Figure 1
<p>The schematic diagram of the encoder measurement system of the wafer stage.</p>
Full article ">Figure 2
<p>(<b>a</b>) Schematic of the two-degree-of-freedom fiber-coupled heterodyne grating interferometer (AOM: acousto-optical modulators, G: diffraction grating); (<b>b</b>) The optical configuration of the encoder (BS: beam splitter, P: polarizer, PBS: polarizing beam splitter, PM: prism mirror, QWP: quarter-wave plate, TP: trapezoidal prism).</p>
Full article ">Figure 3
<p>(<b>a</b>) The fiber without fiber coupler adopted for receiving the interference beams; (<b>b</b>) Intensity distribution diagram in the A–A cross-section.</p>
Full article ">Figure 4
<p>The sketch of the light path inside the encoder head.</p>
Full article ">Figure 5
<p>(<b>a</b>) Simulation model set in ZEMAX; (<b>b</b>) simulation result of the signal contrast with angular deflection varying from −1.5 mrad to 1.5 mrad in three directions of rotation.</p>
Full article ">Figure 6
<p>The interference signals obtained by the multimode fibers without fiber couplers: (<b>a</b>) waveforms; (<b>b</b>) the amplitude spectrum.</p>
Full article ">Figure 7
<p>(<b>a</b>) The overall perspective of experiment setup (SMF: single mode fiber; MMF, multimode fiber); (<b>b</b>) the assembly model of the encoder head.</p>
Full article ">Figure 8
<p>The measurement results of the grating interferometer: (<b>a</b>) Measurement stability in 30 s; (<b>b</b>) the cumulative amplitude spectrum (CAS) of <a href="#sensors-19-03219-f008" class="html-fig">Figure 8</a>a; (<b>c</b>) Environmental temperature fluctuation in 2 h; (<b>d</b>) the CAS of <a href="#sensors-19-03219-f008" class="html-fig">Figure 8</a>c.</p>
Full article ">
13 pages, 3830 KiB  
Article
Electrochemical Sensing of α-Fetoprotein Based on Molecularly Imprinted Polymerized Ionic Liquid Film on a Gold Nanoparticle Modified Electrode Surface
by Yingying Wu, Yanying Wang, Xing Wang, Chen Wang, Chunya Li and Zhengguo Wang
Sensors 2019, 19(14), 3218; https://doi.org/10.3390/s19143218 - 22 Jul 2019
Cited by 24 | Viewed by 4302
Abstract
A molecularly imprinted sensor was fabricated for alpha-fetoprotein (AFP) using an ionic liquid as a functional monomer. Ionic liquids possess many excellent characteristics that can improve the sensing performance of an imprinted electrochemical sensor. To this end, the 1-[3-(N-cystamine)propyl]-3-vinylimidazolium tetrafluoroborate ionic liquid [(Cys)VIMBF4] was synthesized and used as a functional monomer to fabricate an AFP-imprinted polymerized ionic liquid film on a gold nanoparticle-modified glassy carbon electrode (GCE) surface at room temperature. After removing the AFP template, a molecularly imprinted electrochemical sensor was successfully prepared. The imprinted sensor exhibits excellent selectivity towards AFP and can be used for its sensitive determination. Under optimized conditions, the sensor shows a good linear response to AFP in the concentration range of 0.03 ng mL−1~5 ng mL−1, with a detection limit estimated to be 2 pg mL−1. Full article
(This article belongs to the Section Chemical Sensors)
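The quoted detection limit can be illustrated with the standard 3σ/slope estimate from a linear calibration curve: fit the calibration line, then divide three times the blank noise by its slope. The data below are synthetic, not the paper's measurements.

```python
import numpy as np

# Illustrative 3*sigma/slope detection-limit estimate from a linear
# calibration curve. Concentrations and responses are synthetic.

def detection_limit(conc, signal, sigma_blank):
    """LOD = 3 * sigma_blank / slope of the least-squares calibration line."""
    slope, _ = np.polyfit(conc, signal, 1)
    return 3.0 * sigma_blank / slope

conc = np.array([0.03, 0.1, 0.5, 1.0, 3.0, 5.0])  # ng/mL, as in the abstract's range
signal = 12.0 * conc + 0.5                         # synthetic linear response
lod = detection_limit(conc, signal, sigma_blank=0.01)
print(round(lod, 4))  # 0.0025 ng/mL with these synthetic numbers
```

A steeper calibration slope or lower blank noise directly lowers the LOD, which is why the ionic-liquid film's enhanced response matters for reaching the pg mL−1 level.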
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Scanning electron microscopic images of gold nanoparticles modified glassy carbon electrode surface.</p>
Full article ">Figure 2
<p>A glassy carbon electrode before (<b>a</b>) and after (<b>b</b>) being modified with gold nanoparticles was characterized with X-ray diffraction.</p>
Full article ">Figure 3
<p>Scanning electron microscopic images of an imprinted film modified electrode before (<b>a</b>) and after (<b>b</b>) removing AFP.</p>
Full article ">Figure 4
<p>(<b>A</b>) Cyclic voltammograms of an AuNPs/GCE (<b>a</b>), a bare GCE (<b>b</b>), a MIP-poly[(Cys)VIMBF<sub>4</sub>]/AuNPs/GCE before (<b>e</b>) and after (<b>c</b>) removing AFP, and rebinding AFP (<b>d</b>); (<b>B</b>) Cyclic voltammograms of an AuNPs/GCE (<b>a</b>´), a bare GCE(<b>b</b>´), a NIP-poly[(Cys)VIMBF<sub>4</sub>]/AuNPs/GCE before (<b>e</b>´) and after (<b>c</b>´) removing AFP, and rebinding AFP (<b>d</b>´); AFP used for incubation is at the concentration of 20.0 ng mL<sup>−1</sup>; Scan rate: 0.1 V s<sup>−1</sup>.</p>
Full article ">Figure 5
<p>(<b>A</b>) Nyquist plots of an AuNPs/GCE (<b>a</b>), a bare GCE (<b>b</b>), a MIP-poly[(Cys)VIMBF<sub>4</sub>]/AuNPs/GCE before (<b>e</b>) and after (<b>c</b>) removing AFP, and rebinding AFP (<b>d</b>); (<b>B</b>) Nyquist plots of an AuNPs/GCE (<b>a´</b>), a bare GCE (<b>b´</b>), a NIP-poly[(Cys)VIMBF<sub>4</sub>]/AuNPs/GCE before (<b>e´</b>) and after (<b>c´</b>) washing, and interacting with AFP (<b>d´</b>) at the concentration of 20.0 ng mL<sup>−1</sup>; Supporting electrolyte: 5.0 mmol L<sup>−1</sup> K<sub>3</sub>[Fe(CN)<sub>6</sub>]/K<sub>4</sub>[Fe(CN)<sub>6</sub>] (1:1) + 0.01 mol L<sup>−1</sup> phosphate buffer (pH 7.4) + 0.1 mol L<sup>−1</sup> KCl solution.</p>
Full article ">Figure 6
<p>Influence of the pH value on the current response of the imprinted sensor towards AFP at the concentration of 1.0 ng mL<sup>−1</sup>.</p>
Full article ">Figure 7
<p>Influence of the incubation time on the current response of the imprinted sensor towards AFP at the concentration of 1.0 ng mL<sup>−1</sup>.</p>
Full article ">Figure 8
<p>Calibration curve for determining AFP at the imprinted sensor. The inset is the differential pulse voltammograms of K<sub>4</sub>Fe(CN)<sub>6</sub>/K<sub>3</sub>Fe(CN)<sub>6</sub> at the imprinted sensor with different AFP concentrations (0.03, 0.05, 0.1, 0.3, 0.5, 0.8, 1, 3, 5 ng mL<sup>−1</sup>).</p>
Full article ">Figure 9
<p>Current response of the imprinted sensor towards potential interferents.</p>
Full article ">Figure 10
<p>Electrochemical responses of the imprinted sensor toward 1.0 ng mL<sup>−1</sup> AFP in the presence of 50 ng mL<sup>−1</sup> NSE, PSA, IGg, AA, L-Cys, Gly and L-His.</p>
Full article ">Scheme 1
<p>Schematics for the synthetic route of (Cys)VIMBF<sub>4</sub> ionic liquid.</p>
Full article ">Scheme 2
<p>Schematic illustration for the alpha-fetoprotein (AFP) imprinted sensor fabrication and the electrochemical responses.</p>
Full article ">
14 pages, 1502 KiB  
Article
Moving Object Detection Based on Optical Flow Estimation and a Gaussian Mixture Model for Advanced Driver Assistance Systems
by Jaechan Cho, Yongchul Jung, Dong-Sun Kim, Seongjoo Lee and Yunho Jung
Sensors 2019, 19(14), 3217; https://doi.org/10.3390/s19143217 - 22 Jul 2019
Cited by 33 | Viewed by 6827
Abstract
Most approaches for moving object detection (MOD) based on computer vision are limited to stationary camera environments. In advanced driver assistance systems (ADAS), however, ego-motion is added to image frames owing to the use of a moving camera. This results in mixed motion in the image frames and makes it difficult to classify target objects and the background. In this paper, we propose an efficient MOD algorithm that can cope with moving camera environments. In addition, we present a hardware design and implementation results for the real-time processing of the proposed algorithm. The proposed moving object detector was designed using a hardware description language (HDL), and its real-time performance was evaluated using an FPGA-based test system. Experimental results demonstrate that our design achieves better detection performance than existing MOD systems. The proposed moving object detector was implemented with 13.2K logic slices, 104 DSP48s, and 163 BRAMs, and can support real-time processing of 30 fps at an operating frequency of 200 MHz. Full article
(This article belongs to the Special Issue Perception Sensors for Road Applications)
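As a heavily simplified illustration of the Gaussian background-modeling stage: keep one running Gaussian per pixel and flag pixels deviating by more than 2.5σ as foreground. The ego-motion compensation step of the proposed algorithm is omitted here, and all names and parameters are illustrative, not the paper's design.

```python
import numpy as np

# One running Gaussian per pixel (a single-component simplification of a
# Gaussian mixture model): pixels beyond k*sigma are flagged as foreground,
# and only background-matched pixels update the model.

def update_background(mean, var, frame, lr=0.05, k=2.5):
    """Flag foreground pixels and update the per-pixel background model
    in place. Returns a boolean foreground mask."""
    dist = np.abs(frame - mean)
    foreground = dist > k * np.sqrt(var)
    bg = ~foreground
    mean[bg] += lr * (frame[bg] - mean[bg])
    var[bg] += lr * ((frame[bg] - mean[bg]) ** 2 - var[bg])
    return foreground

# Toy frame: a static background near 100 with one "moving object" pixel.
mean = np.full((4, 4), 100.0)
var = np.full((4, 4), 25.0)
frame = np.full((4, 4), 101.0)
frame[0, 0] = 200.0
fg = update_background(mean, var, frame)
print(fg[0, 0], fg[1, 1])  # True False
```

A full GMM keeps several weighted Gaussians per pixel so that multi-modal backgrounds (e.g., swaying vegetation) are modeled; with a moving camera, the model memories must additionally be shifted by the estimated ego-motion, as Figure 2 below illustrates.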
Show Figures

Figure 1
<p>Overall scheme of the proposed moving object detection (MOD) algorithm.</p>
Full article ">Figure 2
<p>Compensation for the integer parts of the ego-motion. The shaded region denotes empty space generated by the shift operation: (<b>a</b>) <math display="inline"><semantics> <msub> <mi>μ</mi> <mrow> <mi>n</mi> <mo>,</mo> <mi>t</mi> <mo>−</mo> <mn>1</mn> </mrow> </msub> </semantics></math> memory; (<b>b</b>) <math display="inline"><semantics> <msub> <mi>w</mi> <mrow> <mi>n</mi> <mo>,</mo> <mi>t</mi> <mo>−</mo> <mn>1</mn> </mrow> </msub> </semantics></math> memory; (<b>c</b>) <math display="inline"><semantics> <msubsup> <mi>σ</mi> <mrow> <mi>n</mi> <mo>,</mo> <mi>t</mi> <mo>−</mo> <mn>1</mn> </mrow> <mn>2</mn> </msubsup> </semantics></math> memory.</p>
Full article ">Figure 3
<p>Block diagram of the proposed moving object detector.</p>
Full article ">Figure 4
<p>Hardware structure: (<b>a</b>) optical flow estimator; (<b>b</b>) convolution calculator.</p>
Full article ">Figure 5
<p>Block diagram of the resolution process unit.</p>
Full article ">Figure 6
<p>Hardware structure of the camera motion estimator.</p>
Full article ">Figure 7
<p>Block diagram of the background detector.</p>
Full article ">Figure 8
<p>Block diagram of the object detector.</p>
Full article ">Figure 9
<p>FPGA test platform: (<b>a</b>) test environment; (<b>b</b>) Xilinx Virtex-5 FPGA based evaluation board; (<b>c</b>) 640 × 480 resolution camera.</p>
Full article ">Figure 10
<p>MOD performance of the proposed moving object detector.</p>
Full article ">
13 pages, 2671 KiB  
Article
Detecting Anomalies of Satellite Power Subsystem via Stage-Training Denoising Autoencoders
by Weihua Jin, Bo Sun, Zhidong Li, Shijie Zhang and Zhonggui Chen
Sensors 2019, 19(14), 3216; https://doi.org/10.3390/s19143216 - 22 Jul 2019
Cited by 15 | Viewed by 3816
Abstract
Satellite telemetry data contain satellite status information, and ground-monitoring personnel need to promptly detect satellite anomalies from these data. This paper takes the satellite power subsystem as an example and presents a reliable anomaly detection method. Given the lack of abnormal data, the autoencoder is a powerful tool for unsupervised anomaly detection. This study proposes a novel stage-training denoising autoencoder (ST-DAE) that trains the features in stages. The novel method has better reconstruction capabilities than common autoencoders, sparse autoencoders, and denoising autoencoders. Meanwhile, a cluster-based anomaly-threshold determination method is proposed. Specific methods were designed to evaluate autoencoder performance from three perspectives. Experiments were carried out on real satellite telemetry data, and the results showed that the proposed ST-DAE generally outperformed the compared autoencoders. Full article
(This article belongs to the Section Remote Sensors)
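A sketch of a cluster-based anomaly threshold in the spirit of the method above: split the reconstruction errors into two groups with a one-dimensional two-means and place the threshold midway between the cluster centers. This is an illustrative stand-in with synthetic errors, not the paper's exact procedure.

```python
import numpy as np

# Cluster-based anomaly threshold (illustrative): 1-D two-means over
# reconstruction errors, threshold at the midpoint of the two centers.

def anomaly_threshold(errors, iters=50):
    """Return a scalar threshold separating the low-error (normal) cluster
    from the high-error (anomalous) cluster."""
    c_lo, c_hi = errors.min(), errors.max()
    for _ in range(iters):
        assign = np.abs(errors - c_lo) <= np.abs(errors - c_hi)
        c_lo = errors[assign].mean()
        c_hi = errors[~assign].mean()
    return 0.5 * (c_lo + c_hi)

rng = np.random.default_rng(0)
errors = np.concatenate([rng.normal(0.1, 0.02, 200),  # normal telemetry
                         rng.normal(1.0, 0.10, 10)])  # injected anomalies
thr = anomaly_threshold(errors)
print(0.1 < thr < 1.0)  # True
```

Compared with a fixed mean-plus-kσ rule, a cluster-based threshold adapts to the actual error distribution, which helps when the normal and anomalous error populations are well separated.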
Show Figures

Figure 1
<p>Diagram of the satellite power subsystem.</p>
Full article ">Figure 2
<p>The autoencoder architecture (n &lt; m).</p>
Full article ">Figure 3
<p>High reconstruction error caused by the interaction between features. (<b>a</b>) The BEA signal value and BCR input current. (<b>b</b>) The reconstruction result of the BEA signal value. (<b>c</b>) The BCR input current and solar cell array current. (<b>d</b>) The reconstruction result of the BCR input current. (<b>e</b>) The BEA signal value and battery set charge current. (<b>f</b>) The reconstruction result of the battery set charge current.</p>
Full article ">Figure 4
<p>Comparison of the original bus current and the one processed by the moving-average method. (<b>a</b>) The original bus current. (<b>b</b>) The bus current processed by the moving-average method.</p>
Full article ">Figure 5
<p>The architecture of the autoencoders.</p>
Full article ">Figure 6
<p>Comparison of the original BEA value with the BEA value with Gaussian noise. (<b>a</b>) The BEA value. (<b>b</b>) The BEA value with Gaussian noise.</p>
Full article ">Figure 7
<p>BEA reconstruction results and errors.</p>
Full article ">Figure 8
<p>Battery charge regulator (BCR) input current reconstruction results and errors.</p>
Full article ">Figure 9
<p>Battery set charge current reconstruction results and errors.</p>
Full article ">Figure 10
<p>Detection result of the main error amplifier (MEA) circuit failure ((A) indicates the anomalous curve). (<b>a</b>) The normal BDR1 output current and anomalous BDR1 output current. (<b>b</b>) The normal BDR2 output current and anomalous BDR2 output current. (<b>c</b>) The normal BDR3 output current and anomalous BDR3 output current. (<b>d</b>) The anomaly score of the reconstruction results.</p>
Full article ">Figure 11
<p>Detection result of the battery set open-circuit failure ((A) indicates the anomalous curve). (<b>a</b>) The normal MEA voltage and anomalous MEA voltage. (<b>b</b>) The normal battery set discharge current and anomalous battery set discharge current. (<b>c</b>) The anomaly score of the reconstruction results.</p>
Full article ">