Search Results (1,113)

Search Parameters:
Keywords = low-cost camera

21 pages, 10689 KiB  
Article
Human Occupancy Monitoring and Positioning with Speed-Responsive Adaptive Sliding Window Using an Infrared Thermal Array Sensor
by Yukai Lin and Qiangfu Zhao
Sensors 2025, 25(1), 129; https://doi.org/10.3390/s25010129 - 28 Dec 2024
Viewed by 332
Abstract
In the current era of advanced IoT technology, human occupancy monitoring and positioning technology is widely used in various scenarios. For example, it can optimize passenger flow in public transportation systems, enhance safety in large shopping malls, and adjust smart home devices based on the location and number of occupants for energy savings. Additionally, in homes requiring special care, it can provide timely assistance. However, this technology faces limitations such as privacy concerns, environmental factors, and costs. Traditional cameras may not effectively address these issues, but infrared thermal sensors can offer similar applications while overcoming these challenges. Infrared thermal sensors detect the infrared heat emitted by the human body, protecting privacy and functioning effectively day and night with low power consumption, making them ideal for continuous monitoring scenarios like security systems or elderly care. In this study, we propose a system using the AMG8833, an 8 × 8 Infrared Thermal Array Sensor. The sensor data are processed through interpolation, adaptive thresholding, and blob detection, and the merged human heat signatures are separated. To enhance stability in human position estimation, a dynamic sliding window adjusts its size based on movement speed, effectively handling environmental changes and uncertainties.
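The processing chain the abstract names (interpolation, adaptive thresholding, blob detection, speed-responsive window) maps onto a few lines of NumPy/SciPy. A minimal sketch, assuming hypothetical threshold and window parameters rather than the authors' actual values:

```python
import numpy as np
from scipy import ndimage

def detect_occupants(frame_8x8, upscale=8, offset=1.5):
    """Upsample a thermal frame, threshold it adaptively, label blobs.

    `offset` (degrees C above the ambient median) is a hypothetical
    tuning parameter, not a value from the paper.
    """
    heat = ndimage.zoom(frame_8x8, upscale, order=3)  # bicubic interpolation
    threshold = np.median(heat) + offset              # adaptive threshold
    labels, n = ndimage.label(heat > threshold)       # blob detection
    if n == 0:
        return []
    return ndimage.center_of_mass(heat, labels, range(1, n + 1))

def window_size(speed, base=5, gain=10.0, max_size=15):
    """Grow the sliding window with estimated movement speed
    (grid cells per frame), as the speed-responsive scheme suggests."""
    return int(min(base + gain * speed, max_size))
```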
(This article belongs to the Special Issue Indoor Positioning Technologies for Internet-of-Things)
Figure 1: Original heatmap from AMG8833.
Figure 2: Different interpolation methods applied to original heatmaps.
Figure 3: (A) 4-connectivity. (B) 8-connectivity.
Figure 4: Connected component labeling results.
Figure 5: Thermal merger due to close proximity.
Figure 6: (A) Binary Image. (B) Topographic Image.
Figure 7: (A) Detection Error. (B) Correct Detection.
Figure 8: Sliding window size adjustment based on movement speed.
Figure 9: Test environment.
Figure 10: Hardware configuration.
Figure 11: Pin diagram.
Figure 12: One individual.
Figure 13: Two individuals.
Figure 14: Three individuals.
Figure 15: Dark environment.
Figure 16: Watershed algorithm applied.
22 pages, 5600 KiB  
Article
Coffee Rust Severity Analysis in Agroforestry Systems Using Deep Learning in Peruvian Tropical Ecosystems
by Candy Ocaña-Zuñiga, Lenin Quiñones-Huatangari, Elgar Barboza, Naili Cieza Peña, Sherson Herrera Zamora and Jose Manuel Palomino Ojeda
Agriculture 2025, 15(1), 39; https://doi.org/10.3390/agriculture15010039 - 27 Dec 2024
Viewed by 274
Abstract
Agroforestry systems can influence the occurrence and abundance of pests and diseases because integrating crops with trees or other vegetation can create diverse microclimates that may either enhance or inhibit their development. This study analyzes the severity of coffee rust in two agroforestry systems in the provinces of Jaén and San Ignacio in the department of Cajamarca (Peru). This research used a quantitative descriptive approach, and 319 photographs were collected with a professional camera during field trips. The photographs were segmented, classified and analyzed using the deep learning MobileNet and VGG16 transfer learning models with two methods for measuring rust severity from SENASA Peru and SENASICA Mexico. The results reported that grade 1 is the most prevalent rust severity according to the SENASA methodology (1 to 5% of the leaf affected) and SENASICA Mexico (0 to 2% of the leaf affected). Moreover, the proposed MobileNet model presented the best classification accuracy rate of 94% over 50 epochs. This research demonstrates the capacity of machine learning algorithms in disease diagnosis, which could be an alternative to help experts quantify the severity of coffee rust in coffee trees and broadens the field of research for future low-cost computational tools for disease recognition and classification.
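The severity classifier described here is standard transfer learning on MobileNet. A minimal Keras sketch, assuming five severity grades per the SENASA 1–5 scale and hypothetical dataset objects (`train_ds`, `val_ds`); this is an illustration of the technique, not the authors' code:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # assumption: one class per SENASA severity grade

# Pretrained ImageNet backbone with the classification head removed.
base = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze backbone features for transfer learning

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=50)  # 50 epochs as reported
```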
(This article belongs to the Section Digital Agriculture)
Figure 1: Location of the study area, (a,b) agroforestry system at Finca "La Palestina" and (c,d) agroforestry system at Cooperativa Agraria Cafetalera "La Prosperidad".
Figure 2: Methodological process to estimate rust severity using deep learning.
Figure 3: Coffee rust severity scale according to the methodologies of (A) SENASA of Peru and (B) SENASICA of Mexico.
Figure 4: Convolutional neural network structure for classifying severity grades.
Figure 5: Percentage of shade in batches: (a) using HabitApp, (b) using a visual template in Chirinos, (c) using HabitApp, (d) using a visual template in San Jose del Alto.
Figure 6: Characterization of agroforestry systems with coffee: (a) shade monoculture, (b) traditional polyculture, (c) rustic system, (d) traditional polyculture, (e) traditional polyculture, (f) traditional polyculture, (g) shade monoculture, (h) rustic system, (i) traditional polyculture, (j) shade monoculture.
Figure 7: Confusion matrix: (a,b) SENASA and (c,d) SENASICA using MobileNet and VGG16.
Figure 8: Model accuracy for SENASA.
Figure 9: Model accuracy for SENASICA.
13 pages, 5669 KiB  
Article
Optimization of Video Surveillance System Deployment Based on Space Syntax and Deep Reinforcement Learning
by Bingchan Li and Chunguo Li
Electronics 2025, 14(1), 38; https://doi.org/10.3390/electronics14010038 - 26 Dec 2024
Viewed by 219
Abstract
With the widespread deployment of video surveillance devices, a large number of indoor and outdoor places are under the coverage of cameras, which plays a significant role in enhancing regional safety management and hazard detection. However, a vast number of cameras lead to high installation, maintenance, and analysis costs. At the same time, low-quality images and potential blind spots in key areas prevent the full utilization of the video system’s effectiveness. This paper proposes an optimization method for video surveillance system deployment based on space syntax analysis and deep reinforcement learning. First, space syntax is used to calculate the connectivity value, control value, depth value, and integration of the surveillance area. Combined with visibility and axial analysis results, a weighted index grid map of the area’s surveillance importance is constructed. This index describes the importance of video coverage at a given point in the area. Based on this index map, a deep reinforcement learning network based on DQN (Deep Q-Network) is proposed to optimize the best placement positions and angles for a given number of cameras in the area. Experiments show that the proposed framework, integrating space syntax and deep reinforcement learning, effectively improves video system coverage efficiency and allows for quick adjustment and refinement of camera placement by manually setting parameters for specific areas. Compared to existing coverage-first or experience-based optimization, the proposed method demonstrates significant performance and efficiency advantages.
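A minimal PyTorch sketch of a DQN head for this kind of placement problem, with the importance grid as state and discretized (cell, pan-angle) pairs as actions. Layer sizes, grid dimensions, and the epsilon value are illustrative assumptions, not values from the paper:

```python
import torch
import torch.nn as nn

class PlacementDQN(nn.Module):
    """Q-network mapping the surveillance-importance grid to
    camera-placement actions, one Q-value per (cell, angle) pair."""
    def __init__(self, grid_cells=400, n_angles=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(grid_cells, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, grid_cells * n_angles),  # Q(s, a) per action
        )

    def forward(self, importance_map):
        # importance_map: (batch, rows, cols) weighted index grid
        return self.net(importance_map.flatten(start_dim=1))

def select_action(q_net, state, epsilon=0.1):
    """Epsilon-greedy choice over all (cell, angle) actions."""
    if torch.rand(1).item() < epsilon:
        return torch.randint(q_net.net[-1].out_features, (1,)).item()
    with torch.no_grad():
        return q_net(state).argmax(dim=1).item()
```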
(This article belongs to the Special Issue Advances in Data-Driven Artificial Intelligence)
Figure 1: The overall framework of camera placement optimization.
Figure 2: Convex space analysis and its graph representation. (a) Floor CAD, (b) segmented convex space, and (c) topology graph.
Figure 3: Example of visibility analysis.
Figure 4: Example of axial analysis. (a) Original map; (b) detected axial nodes; (c) topology relation graph of axial nodes; (d) importance of each axial line.
Figure 5: Reward computation.
Figure 6: Definition of the improved DQN framework.
Figure 7: Space syntax computation results: (a) visibility; (b) connectivity.
Figure 8: DQN model training results: (a) loss; (b) prediction accuracy.
Figure 9: Model training time.
Figure 10: DQN model training results: (a) max coverage; (b) human; (c) DQN.
20 pages, 9743 KiB  
Article
UAV-Based Survey of the Earth Pyramids at the Kuklica Geosite (North Macedonia)
by Ivica Milevski, Bojana Aleksova and Slavoljub Dragićević
Heritage 2025, 8(1), 6; https://doi.org/10.3390/heritage8010006 - 26 Dec 2024
Viewed by 642
Abstract
This paper presents methods for a UAV-based survey of the site “Kuklica” near Kratovo, North Macedonia. Kuklica is a rare natural complex with earth pyramids, and because of its exceptional scientific, educational, touristic, and cultural significance, it was proclaimed to be a Natural Monument in 2008. However, after the proclamation, the interest in visiting this site and the threats in terms of its potential degradation rapidly grew, increasing the need for a detailed survey of the site and monitoring. Given the site’s small size (0.5 km2), the freely available satellite images and digital elevation models are not suitable for comprehensive analysis and monitoring of the site, especially in terms of the individual forms within the site. Instead, new tools are increasingly being used for such tasks, including UAVs (unmanned aerial vehicles) and LiDAR (Light Detection and Ranging). Since professional LiDAR is very expensive and still not readily available, we used a low-cost UAV (DJI Mini 4 Pro) to carry out a detailed survey. First, the flight path, the altitude of the UAV, the camera angle, and the photo recording intervals were precisely planned and defined. Also, the ground markers (checkpoints) were carefully selected. Then, the photos taken by the drone were aligned and processed using Agisoft Metashape software (v. 2.1.4), producing a digital elevation model and orthophoto imagery with a very high (sub-decimeter) resolution. Following this procedure, more than 140 earth pyramids were delineated, ranging in height from 1–2 m up to 30 m at their highest. At this stage, a very accurate UAV-based 3D model of the most remarkable earth pyramids was developed (the accuracy was checked using the iPhone 14 Pro LiDAR module), and their morphometrical properties were calculated. Also, the site’s erosion rate and flash flood potential were calculated, showing high susceptibility to both. The final goal was to monitor the changes and to minimize the degradation of the unique landscape, thus better protecting the geosite and its value.
(This article belongs to the Section Geoheritage and Geo-Conservation)
Figure 1: Location of the NM Kuklica site in North Macedonia (left) and its catchment area (right).
Figure 2: The east (left) and the west (right) side of the site of the earth pyramids in the NM Kuklica site.
Figure 3: Test polygons as the input for the machine learning classification (ANN).
Figure 4: Machine learning ANN classification of land cover in the Kuklica area.
Figure 5: Identified and delineated earth pyramids in the Agisoft Metashape 3D model.
Figure 6: Inventory of the remarkable earth pyramids in NM Kuklica. Morphometry is calculated using the 0.1 m resolution UAV-based DEM.
Figure 7: Erosion susceptibility map (A) and mean annual erosion rate (B) of the Kuklica catchment area and corresponding maps of the NM Kuklica site (C,D).
Figure 8: Flash Flood Potential Index in regard to the Kuklica catchment (A,B) and NM Kuklica site (C).
Figure 9: Careful inspection and comparison of earth pyramid photos from the first research carried out in 1995 and the last visit in 2024, which shows discrete morphological changes.
10 pages, 1474 KiB  
Communication
Comparative Analysis of Low-Cost Portable Spectrophotometers for Colorimetric Accuracy on the RAL Design System Plus Color Calibration Target
by Jaša Samec, Eva Štruc, Inese Berzina, Peter Naglič and Blaž Cugmas
Sensors 2024, 24(24), 8208; https://doi.org/10.3390/s24248208 - 23 Dec 2024
Viewed by 271
Abstract
Novel low-cost portable spectrophotometers could be an alternative to traditional spectrophotometers and calibrated RGB cameras by offering lower prices and convenient measurements but retaining high colorimetric accuracy. This study evaluated the colorimetric accuracy of low-cost, portable spectrophotometers on the established color calibration target—RAL Design System Plus (RAL+). Four spectrophotometers with a listed price between USD 100–1200 (Nix Spectro 2, Spectro 1 Pro, ColorReader, and Pico) and a smartphone RGB camera were tested on a representative subset of 183 RAL+ colors. Key performance metrics included the devices’ ability to match and measure RAL+ colors in the CIELAB color space using the color difference CIEDE2000 ΔE. The results showed that Nix Spectro 2 had the best performance, matching 99% of RAL+ colors with an estimated ΔE of 0.5–1.05. Spectro 1 Pro and ColorReader matched approximately 85% of colors with ΔE values between 1.07 and 1.39, while Pico and the Asus 8 smartphone matched 54–77% of colors, with ΔE of around 1.85. Our findings showed that low-cost, portable spectrophotometers offered excellent colorimetric measurements. They mostly outperformed existing RGB camera-based colorimetric systems, making them valuable tools in science and industry.
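The CIEDE2000 ΔE metric used for these comparisons is available off the shelf. A minimal sketch with scikit-image, using illustrative Lab values rather than actual RAL+ data:

```python
import numpy as np
from skimage.color import deltaE_ciede2000

# Reference (e.g., a RAL+ patch) and measured colors in CIELAB,
# one row per patch. Values below are illustrative only.
lab_reference = np.array([[51.0, 2.0, -18.0]])
lab_measured = np.array([[50.4, 2.6, -17.1]])

delta_e = deltaE_ciede2000(lab_reference, lab_measured)
print(delta_e)  # ΔE values around 1 or below are barely perceptible
```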
(This article belongs to the Special Issue Recent Trends and Advances in Color and Spectral Sensors: 2nd Edition)
Figure 1: Color calibration target (CCT) RAL Design System Plus (©RAL gGmbH, Bonn, Germany, reproduced with permission from RAL gGmbH).
Figure 2: Spectrophotometers (a) Nix Spectro 2, (b) Spectro 1 Pro, (c) ColorReader, and (d) Pico ((a) ©Nix Sensor Ltd., Hamilton, ON, Canada; (b) ©Variable Inc., Chattanooga, TN, USA; (c) ©Datacolor GmbH, Marl, Germany; (d) ©Palette Pty Ltd., Melbourne, Victoria, Australia; images are reproduced with permissions from Nix Sensor Ltd., Variable Inc., Datacolor GmbH, and Palette Pty Ltd.).
18 pages, 10480 KiB  
Article
Bacterial and Viral-Induced Changes in the Reflectance Spectra of Nicotiana benthamiana Plants
by Alyona Grishina, Maxim Lysov, Maria Ageyeva, Victoria Diakova, Oksana Sherstneva, Anna Brilkina and Vladimir Vodeneev
Horticulturae 2024, 10(12), 1363; https://doi.org/10.3390/horticulturae10121363 - 19 Dec 2024
Viewed by 463
Abstract
Phytopathogens pose a serious threat to agriculture, causing a decrease in yield and product quality. This necessitates the development of methods for early detection of phytopathogens, which will reduce losses and improve product quality by using lower quantities of agrochemicals. In this study, the efficiency of spectral imaging in the early detection and differentiation of diseases caused by pathogens of different types (Potato virus X (PVX) and the bacterium Pseudomonas syringae) was analyzed. An evaluation of the visual symptoms of diseases demonstrated the presence of pronounced symptoms in the case of bacterial infection and an almost complete absence of visual symptoms in the case of viral infection. P. syringae caused severe inhibition of photosynthetic activity in the infected leaf, while PVX did not have a pronounced effect on photosynthetic activity. Reflectance spectra of infected and healthy plants were detected in the range from 400 to 1000 nm using a hyperspectral camera, and the dynamics of infection-induced changes during disease progression were analyzed. P. syringae caused a strong increase in reflectance in the blue and red spectral ranges, as well as a decrease in the near-infrared range. PVX-induced changes in the reflectance spectrum had smaller amplitudes compared to P. syringae, and were localized mainly in the red edge (RE) range. The entire set of normalized reflectance indices (NRI) for the analyzed spectral range was calculated. The most sensitive NRIs to bacterial (NRI510/545, NRI510/850) and viral (NRI600/850, NRI700/850) infections were identified. The use of these indices makes it possible to detect the disease at an early stage. The study of the identified NRIs demonstrated the possibility of using the multispectral imaging method in early pathogen detection, which has high performance and a low cost of analysis.
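A normalized reflectance index is straightforward to compute from a hyperspectral cube. A minimal NumPy sketch, assuming the standard normalized-difference form NRI = (R1 − R2)/(R1 + R2) implied by index names like NRI510/850; function and variable names are hypothetical:

```python
import numpy as np

def nri(cube, wavelengths, lam1, lam2):
    """Normalized reflectance index map from a hyperspectral cube.

    cube: (rows, cols, bands) reflectance; wavelengths: band centers
    in nm. Picks the bands nearest lam1/lam2 and forms (R1-R2)/(R1+R2).
    """
    wl = np.asarray(wavelengths)
    i1 = np.argmin(np.abs(wl - lam1))
    i2 = np.argmin(np.abs(wl - lam2))
    r1 = cube[..., i1].astype(float)
    r2 = cube[..., i2].astype(float)
    return (r1 - r2) / (r1 + r2 + 1e-9)  # epsilon avoids division by zero

# Example: the bacterial-infection index NRI510/850
# nri_map = nri(cube, wavelengths, 510, 850)
```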
(This article belongs to the Section Plant Pathology and Disease Management (PPDM))
Figure 1: Experiment design: 0 days post inoculation (dpi) is the day of inoculation of the 4th leaf of N. benthamiana with bacterial or viral pathogens. The reflected light (hyperspectral and RGB imaging), chlorophyll fluorescence (PAM imaging), and GFP fluorescence in systemic-inoculated leaves infected with bacteria and viruses were recorded every day up to 12 dpi. The reflectance spectra of the systemically bacteria- and virus-infected and control leaves were obtained from hyperspectral images and subjected to further processing. The differences in the spectra as well as the heat maps of the differences of the normalized reflectance indices (NRIs) were calculated. The final step was to select NRIs that effectively differentiate bacterial and viral pathogens.
Figure 2: Spread of visual symptoms of Pss infection in non-inoculated leaves of N. benthamiana. RGB images of whole plants with visual symptoms after inoculation of the 4th (bottom) leaf are shown. Signs of bacterial infection appeared in non-inoculated 10th, 11th, and 13th leaves (L10, L11 and L13). One of the youngest leaves (14th leaf) did not have any visual symptoms.
Figure 3: Images of N. benthamiana plants with signs of PVX infection. Infection was performed in the 4th leaf (L4). Then, the appearance of symptoms in the upper leaves (L10, L11, L12) was observed. (A) RGB images of plants, (B) fluorescence images of tobacco with the virus obtained by surface fluorescence imaging. The GFP fluorescence signal in the virus capsid is shown in pseudocolor scale.
Figure 4: Dynamics of changes in the parameters of chlorophyll fluorescence showing the activity of PSII in the leaves of N. benthamiana with signs of bacterial or viral infection and uninfected control plants. (A) The ratio of variable and maximum fluorescence (Fv/Fm), (B) the effective quantum yield of PSII (ΦPSII), (C) the quantum yield of regulated non-photochemical energy dissipation in PSII (ΦNPQ), (D) the quantum yield of unregulated energy dissipation of PSII (ΦNO). Values are mean ± SEM. * indicates statistically significant differences between the values in the control and infected plants (p < 0.05) (n = 6 (Pss); n = 6 (PVX); n = 12 (control)).
Figure 5: Images showing the change in the Fv/Fm in leaves of the N. benthamiana plants with Pss (A) or PVX (B) at different dpi. The images were obtained from the same plants as in Figures 2 and 3.
Figure 6: Dynamics of the reflectance spectrum of the leaf of the control plant (A). Dynamics of the difference in reflectance spectra between plants infected by Pss and control plants (B) and between the plants infected by PVX and control plants (C). The curves represent average means (n = 6 (Pss); n = 6 (PVX); n = 12 (control)).
Figure 7: Heat maps of the NRIs of the non-inoculated control plants (A), ΔNRI between Pss-infected and control plants (B), ΔNRI between PVX-infected and control plants (C). The indices were calculated for leaves at 10 dpi. The vertical color scale reflects the NRI (A) or the ΔNRI value (B,C); the horizontal gray scale reflects the level of statistical significance of the differences (n = 6 (Pss); n = 6 (PVX); n = 12 (control)).
Figure 8: Dynamics of the difference in the values of NRIs between control plants and plants infected with Pss (A) or PVX (B) at different dpi. * indicates statistically significant (p < 0.05) difference of the value from 0. Mean values ± SEM are shown (n = 6 (Pss); n = 6 (PVX); n = 12 (control)).
Figure 9: Dependence of the difference in NRI of plants infected with Pss (A) or PVX (B) and healthy plants on the width of the spectral band (n = 6 (Pss); n = 6 (PVX); n = 12 (control)).
Figure 10: Images of tobacco plants infected with PVX (A) or Pss (B), obtained based on NRI values, as shown in the figure. Images were obtained at 4, 8, and 12 dpi, respectively.
30 pages, 63876 KiB  
Article
A Low-Cost 3D Mapping System for Indoor Scenes Based on 2D LiDAR and Monocular Cameras
by Xiaojun Li, Xinrui Li, Guiting Hu, Qi Niu and Luping Xu
Remote Sens. 2024, 16(24), 4712; https://doi.org/10.3390/rs16244712 - 17 Dec 2024
Viewed by 668
Abstract
The cost of indoor mapping methods based on three-dimensional (3D) LiDAR can be relatively high, and they lack environmental color information, thereby limiting their application scenarios. This study presents an innovative, low-cost, omnidirectional 3D color LiDAR mapping system for indoor environments. The system consists of two two-dimensional (2D) LiDARs, six monocular cameras, and a servo motor. The point clouds are fused with imagery using a pixel-spatial dual-constrained depth gradient adaptive regularization (PS-DGAR) algorithm to produce dense 3D color point clouds. During fusion, the point cloud is reconstructed inversely based on the predicted pixel depth values, compensating for areas of sparse spatial features. For indoor scene reconstruction, a globally consistent alignment algorithm based on particle filter and iterative closest point (PF-ICP) is proposed, which incorporates adjacent frame registration and global pose optimization to reduce mapping errors. Experimental results demonstrate that the proposed density enhancement method achieves an average error of 1.5 cm, significantly improving the density and geometric integrity of sparse point clouds. The registration algorithm achieves a root mean square error (RMSE) of 0.0217 and a runtime of less than 4 s, both of which outperform traditional iterative closest point (ICP) variants. Furthermore, the proposed low-cost omnidirectional 3D color LiDAR mapping system demonstrates superior measurement accuracy in indoor environments.
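A plain point-to-point ICP baseline for adjacent-frame registration can be set up in a few lines with Open3D. Note this is not the paper's PF-ICP, which additionally seeds ICP with a particle-filter pose estimate and adds global pose optimization; the voxel size and correspondence distance below are illustrative:

```python
import numpy as np
import open3d as o3d

def register_frames(source, target, voxel=0.05, max_dist=0.2):
    """Point-to-point ICP between two adjacent point-cloud frames.

    source/target: o3d.geometry.PointCloud. Returns the estimated
    4x4 transform and the inlier RMSE used to judge registration.
    """
    src = source.voxel_down_sample(voxel)  # downsample for speed
    tgt = target.voxel_down_sample(voxel)
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation, result.inlier_rmse
```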
(This article belongs to the Special Issue New Perspectives on 3D Point Cloud (Third Edition))
Graphical abstract
Figure 1: The structure of the proposed low-cost 3D indoor mapping system.
Figure 2: Process of associating 3D point clouds with 2D image pixels. From right to left, (i) rigid body transformation from the world coordinate system to the camera coordinate system, (ii) perspective projection from the camera coordinate system to the image plane, (iii) mapping from the image plane to the pixel coordinate system.
Figure 3: Neighborhood search and depth prediction diagram. The example of a failed prediction is represented by a triangle symbol and highlighted with a red dashed box at the top, while the example of a successful prediction is represented by a star symbol and highlighted with a red dashed box on the far right. At the bottom, the conditions for algorithm convergence and the corresponding output are shown. The arrows represent the prediction steps.
Figure 4: Globally uniform alignment schematic diagram. The main components include key LiDAR frames (in blue), device poses (in orange) and several state nodes corresponding to the sensor fusion results. The system uses particle filtering (green lines) to update the system state and uses ICP (red dashed lines) to perform registration between adjacent frames. The diagram also highlights the fusion process between LiDAR and device observations to achieve global pose alignment.
Figure 5: Experimental system design drawing. (a) Structure of a low-cost 3D point cloud acquisition device. (b) Diagram of camera detection range and installation configuration. (c) Overall design diagram of the acquisition system integrated into the UGV.
Figure 6: Experimental scene layout and partial display. The upper figure is a 2D schematic of the experimental scene, while the lower images show real-world photos from certain nodes along with the measured dimensions of related objects.
Figure 7: Sampling density analysis of the low-cost 3D point cloud acquisition system, showing the sampling density distribution along the X-axis, Y-axis, and Z-axis from top to bottom.
Figure 8: The depth prediction result maps generated by the PS-DGAR algorithm and the SuperPixel segmentation-based prediction algorithm at different scan distances, as well as the enhanced 3D point cloud imaging result maps based on these depth maps.
Figure 9: The point cloud registration results for adjacent frames under different initial transformations, from top to bottom: initial transformation, G-ICP, R-ICP, M-ICP, FGR, and the proposed PF-ICP method. Specifically, (a) an initial transformation involving only translation along the x-axis without any rotation (x = 3.0 m, y = 0.0 m, z = 0.0 m, roll 0°, pitch 0°, yaw 0°); (b) an initial transformation with translation along the x-axis and slight rotation about the three axes (x = 3.144 m, y = 0.001 m, z = 0.0 m, roll −0.002°, pitch 0.004°, yaw −0.001°); (c) an initial transformation involving translation along the y-axis and significant rotation (x = 0.004 m, y = −2.4 m, z = −0.001 m, roll −0.001°, pitch −0.001°, yaw −90°).
Figure 10: Result of globally consistent indoor colored map reconstruction. Subfigures (a–f) show the mapping results and real scenes of randomly selected indoor scene nodes.
Figure 11: Three-dimensional imaging results of indoor complex environments.
20 pages, 98934 KiB  
Article
Automated Snow Avalanche Monitoring and Alert System Using Distributed Acoustic Sensing in Norway
by Antoine Turquet, Andreas Wuestefeld, Guro K. Svendsen, Finn Kåre Nyhammer, Espen Lauvlund Nilsen, Andreas Per-Ola Persson and Vetle Refsum
GeoHazards 2024, 5(4), 1326-1345; https://doi.org/10.3390/geohazards5040063 - 17 Dec 2024
Viewed by 384
Abstract
Avalanches present a substantial hazard risk in mountainous regions, particularly when they obstruct roads, either hitting vehicles directly or leaving traffic exposed to subsequent avalanches during avalanche cycles. Traditional detection methods are often designed to cover only a limited section of a road stretch, hampering effective risk management. This research introduces a novel approach using Distributed Acoustic Sensing (DAS) for avalanche detection. The monitoring site in Northern Norway is known to be frequently impacted by avalanches. Between 2022 and 2024, we continuously monitored the road for avalanches blocking the traffic. The automated alert system identifies avalanches affecting the road and estimates accumulated snow. The system provides continuous, real-time monitoring with competitive sensitivity and accuracy over large areas (up to 170 km) and for multiple sites in parallel. The DAS-powered alert system can work unaffected by visual barriers or adverse weather conditions. The system successfully identified 10 road-impacting avalanches (100% detection rate). Our results via DAS align with previous works and indicate that the low-frequency part of the signal (<20 Hz) is crucial for detection and size estimation of avalanche events. Alternative fiber installation methods are evaluated for optimal sensitivity to avalanches. Consequently, this study demonstrates the system’s durability and lower maintenance requirements, especially when compared to the high setup costs and coverage limitations of radar systems, or the weather and lighting vulnerabilities of cameras. Furthermore, the system can detect vehicles on the road as important supplemental information for search and rescue operations, so the authorities can be alerted, thereby playing a vital role in urgent rescue efforts.
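Since the abstract reports that the sub-20 Hz band carries the avalanche signature, a detector can be sketched as a low-pass filter plus per-channel energy thresholding. A minimal SciPy sketch; the threshold logic and parameter values are hypothetical, as the production system's detection criteria are not described in this listing:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lowband_energy(das_traces, fs, cutoff=20.0):
    """Energy of the <20 Hz band per DAS channel.

    das_traces: (channels, samples) strain-rate data sampled at fs Hz.
    """
    sos = butter(4, cutoff, btype="lowpass", fs=fs, output="sos")
    low = sosfiltfilt(sos, das_traces, axis=-1)  # zero-phase filtering
    return np.sum(low ** 2, axis=-1)

def flag_channels(energy, background, factor=10.0):
    """Flag channels whose low-band energy exceeds the running
    background level by a (hypothetical) factor."""
    return np.nonzero(energy > factor * background)[0]
```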
Figure 1: (a) Map showing the existing cable (blue) and new cable extension (orange). Photos taken during the installation are attached from (i) the cabinet at the northern end of the monitoring system, (ii) the vehicle warning system at the end of the north section, (iii) the vehicle warning system at the beginning of the south section, and (iv) an example photo of microtrenching. (b) Map of Norway and the region surrounding the avalanche monitoring zone. Important places are marked, including Holmbuktura, the location of the installation. (c) A cross-section sketch showing the details of microtrench cable installation (iv). The direct-buried new cable is installed at 15 cm depth and the plastic-tube-covered installation at 20 cm depth from the surface.
Figure 2: Aerial overview of the Holmbuktura region detailing characteristic avalanche paths and the avalanche monitoring setup. The image on the left (a) shows a comprehensive view of the valley with shaded areas for avalanche zones in the north and south. The paths (1–5) along the slope show 5 characteristic avalanche paths, delineating the primary areas of avalanche activity. The cyan line represents the trajectory of the sensor cable installation, placed to capture both the dynamics of avalanches and the road traffic activity. The plot on the right (b) shows the altitude evolution along the 5 selected paths, giving an impression of the topography of the region. Image © 2024 Google Earth, Image Landsat/Copernicus, Image © 2024 Maxar Technologies, Image © 2024 CNES/Airbus.
Figure 3: Simplified flowchart of the automated avalanche detection and monitoring system. Data is continuously collected and processed through edge computing in two separate modules: (1) vehicle detection and (2) avalanche detection, which operate independently to avoid interference. Detected avalanches and vehicles are then transferred to a central repository and messaging module. This module evaluates risk levels, checks for stranded or at-risk vehicles, and prepares necessary visualizations and alerts. If the risk level exceeds a predefined threshold, the system sends alerts, including plots and messages, via the SCADA message system and email using 4G communication.
Figure 4: Examples of signals recorded during monitoring with the DAS system in Holmbuktura. The strain rate waterfall plot (Z) highlights features of different events: (a) avalanche activity in the north, (b) avalanche activity in the south, (c) a passenger car, and (d) a snowplow.
Figure 5: The power spectral density (PSD) computed from signals recorded during monitoring with the DAS system in Holmbuktura. The signals represent distinct events: (a) avalanche activity in the north, (b) avalanche activity in the south, (c) a passenger car, and (d) a snowplow.
Figure 6: Most energetic traces from all avalanches, presented as raw signals. In (a) avalanche signals are presented and marked with “Zone N” and “Zone S” showing where the avalanches happened; Event 0 is an avalanche which stopped right before the road, presented for comparison. The corresponding mean frequency of the 200 s trace is computed and marked at the end of each trace. In (b) we present the spectrogram of all avalanches. In (c,d) we compare the power spectra of north avalanches and south avalanches, respectively. The associated log-averaged power spectra are also plotted.
Figure 7: Co-located direct-buried “D” (red) and piped loopback cable “P” (blue) traces from avalanches hitting only the southern section (a). The corresponding mean frequency of the entire trace is computed and marked on the trace as well. On the right we compare the power spectra from the direct-buried cable (b) and the piped cable (c). The associated log-averaged power spectra are also plotted.
Figure 8: Detailed analysis of the most energetic trace from Event 5. The avalanche signal is analyzed using a 20 s sliding time window to investigate avalanche dynamics. In (a), the normalized signal is shown in the time domain; (b) presents the mean frequency of the 20 s time window sliding every 1 s; and (c) displays the power spectra of selected time windows. The colored boxes in (a) indicate time windows, which are highlighted with markers in the mean frequency plot (b) and in the power spectra plot (c) in corresponding colors.
Figure A1: Comparison of avalanche dates with historical data of environmental variables. Temperature, snow depth, rain and wind speed for the region covering October 2022 to May 2024, obtained from OpenMeteo [69], are presented. We have plotted the 200 h moving-averaged data to visualize long-term trends.
11 pages, 5810 KiB  
Article
Reading Dye-Based Colorimetric Inks: Achieving Color Consistency Using Color QR Codes
by Ismael Benito-Altamirano, Laura Engel, Ferran Crugeira, Miriam Marchena, Jürgen Wöllenstein, Joan Daniel Prades and Cristian Fàbrega
Chemosensors 2024, 12(12), 260; https://doi.org/10.3390/chemosensors12120260 - 13 Dec 2024
Viewed by 504
Abstract
Color consistency when reading colorimetric sensors is a key factor for this technology. Here, we demonstrate how machine-readable patterns, like QR codes, can be used to solve the problem. We present our approach of using back-compatible color QR codes as colorimetric sensors, which are common QR codes that also embed a set of hundreds of color references as well as colorimetric indicators. The method allows locating the colorimetric sensor within the captured scene and performing automated color correction to ensure color consistency regardless of the hardware used. To demonstrate it, a CO2-sensitive colorimetric indicator was printed on top of a paper-based substrate using screen printing. This indicator was formulated for Modified Atmosphere Packaging (MAP) applications. To verify the method, the sensors were exposed to several environmental conditions (both in gas composition and light conditions), and images were captured with an 8M pixel digital camera sensor, similar to those used in smartphones. Our results show that the sensors have a relative error of 9% when exposed to a CO2 concentration of 20%. This is a good result for low-cost disposable sensors that are not intended for permanent use. However, as soon as light conditions change (2500–6500 K), this error increases up to ϵ20 = 440% (rel. error at 20% CO2 concentration), rendering the sensors unusable. Within this work, we demonstrate that our color QR codes can reduce the relative error to ϵ20 = 14%. Furthermore, we show that the most common color correction, white balance, is not sufficient to address the color consistency issue, resulting in a relative error of ϵ20 = 90%.
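One common way to exploit embedded reference colors is to fit an affine color-correction matrix by least squares against the known patch values. A minimal NumPy sketch of that general approach; the paper's actual correction pipeline may differ, and all names here are illustrative:

```python
import numpy as np

def fit_color_correction(measured_rgb, reference_rgb):
    """Least-squares 3x4 affine color correction from reference patches.

    measured_rgb, reference_rgb: (N, 3) arrays of patch colors in
    [0, 1], e.g. the QR code's embedded references as captured vs.
    their known values.
    """
    ones = np.ones((measured_rgb.shape[0], 1))
    A = np.hstack([measured_rgb, ones])  # (N, 4) augmented with offset
    M, *_ = np.linalg.lstsq(A, reference_rgb, rcond=None)
    return M  # (4, 3) correction matrix

def correct(rgb, M):
    """Apply the fitted correction to arbitrary (N, 3) pixels."""
    ones = np.ones((rgb.shape[0], 1))
    return np.clip(np.hstack([rgb, ones]) @ M, 0.0, 1.0)
```

Unlike plain white balance (a per-channel gain), the affine fit also absorbs cross-channel and offset effects, which is one plausible reason the abstract finds white balance alone insufficient.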
(This article belongs to the Special Issue Novel Gas Sensing Approaches: From Fabrication to Application)
Figure 1: A back-compatible color QR code [18] for the evaluation of colorimetric indicators. This QR code is read by commercial scanners and should display the URL: c-s.is/#38RmtGVV6RQSf (accessed on 12 December 2024). It includes up to 125 reference colors, and the colorimetric dye is printed above the lower finder pattern, represented here as seven purple modules.
Figure 2: The structure of the color QR code from Figure 1: (a,b) possible sensor ink placements. (a) Big sensor outside the QR code. (b) Smaller form factors (3×2, 1×1, ...) within the QR code. (c) Color references and how they are spread over the QR code area. (d) Whole sensor layout of the gas-sensitive color QR code.
Figure 3: The sensor changes from purple to yellow when exposed to CO2.
Figure 4: A mass-flow controller station, a capture station, and a user-access computer. The mass-flow controller station supplies a chamber in which the gas sensors are placed with modified atmospheres. The capture station takes time-lapse images of the sensor through an optical window of the chamber under controlled light settings. Finally, the user computer presents a web page interface to operate the system.
Figure 5: A printed sensor featuring a color QR code and two different colorimetric indicators (CO2 indicator above, NH3 below, which was not used in this experiment) inside the sensor chamber. The image shows the sensor before exposure to the target gas under three different light conditions: 2500 K (left), 4500 K (middle) and 6500 K (right).
Figure 6: Response of the green channel under nine different light conditions (2500 K to 6500 K) with all pulses overlapped in the same time frame and after correction of the measured values using a color correction method. Each target gas concentration (20%, 30%, 35%, 40%, 50%) was exposed three times under the respective light condition, resulting in a total of 27 pulses for every gas concentration.
Figure 7: Up: fitting the responses to a model without performing any color correction (NONE), which is the worst-case scenario, with different colors of the data points indicating different illumination conditions and different transparency indicating different repetition samples. Down: fitting the responses to a model for the ground-truth responses (PERF), which is the best-case scenario, where all color corrections recover the D65 color of the sensor perfectly.
19 pages, 8273 KiB  
Article
The Integration of Image Intensity and Texture for the Estimation of Particle Mass in Sorting Processes
by Pedro Compais, Belén Morales, Alberto Gala and Marta Guerrero
Processes 2024, 12(12), 2837; https://doi.org/10.3390/pr12122837 - 11 Dec 2024
Viewed by 517
Abstract
Although mass is one of the most relevant process variables, industries may lack inline monitoring of mass, which in some cases has a high cost. Due to their availability in sorting processes, cameras have potential as a low-cost alternative for the estimation of mass in recycling applications. Nevertheless, further research is needed to transform image information into mass. This work tackles this challenge by proposing a novel method of converting image information into particle mass, complementing size measures with intensity and texture features extracted from the whole picture. Models were adjusted, employing machine learning techniques, using an industrial waste sample of post-consumer plastic film. The visual properties showed a dependency on mass labels, and the models achieved an error of 9 g for subsamples between 2 and 82 g. The analysis and validation of this image processing method provide a new alternative for the estimation of particle mass.
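The "texture sum entropy" feature this article pairs with intensity and geometry is one of the Haralick statistics derived from a gray-level co-occurrence matrix (GLCM). A minimal scikit-image sketch; the distance/angle choice is illustrative, not the paper's configuration:

```python
import numpy as np
from skimage.feature import graycomatrix

def sum_entropy(gray_img, levels=256):
    """Haralick sum entropy from a GLCM.

    gray_img: uint8 grayscale image with values < levels.
    """
    glcm = graycomatrix(gray_img, distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]  # normalized co-occurrence probabilities
    # p_{x+y}(k) = sum of p(i, j) over all pairs with i + j = k
    k = np.add.outer(np.arange(levels), np.arange(levels))
    p_sum = np.bincount(k.ravel(), weights=p.ravel(),
                        minlength=2 * levels - 1)
    nz = p_sum > 0
    return -np.sum(p_sum[nz] * np.log2(p_sum[nz]))
```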
(This article belongs to the Section Process Control and Monitoring)
Figure 1: Bales of post-consumer plastic film (a) compacted and (b) opened.
Figure 2: Scheme of the laboratory setup for image acquisition.
Figure 3: Method for the estimation of waste mass based on images.
Figure 4: Method for the processing of waste images.
Figure 5: Method for the modeling of mass based on image features.
Figure 6: Color images for the ten subsamples of a group of waste fragments, shown according to cover levels: (a) high, (b) medium-high, (c) low-medium, and (d) low.
Figure 7: (a) Subsample containing a black waste particle and (b) histograms of the grayscale images from Figure 6, averaged for each cover level.
Figure 8: Mass measurements and quartiles of the general dataset.
Figure 9: Experimental measurements and theoretical values assuming a uniform distribution, obtained for the subsamples (a) from 1 to 10 and (b) from 51 to 60.
Figure 10: Boxplots of the mass for each cover level, computed from the general dataset.
Figure 11: (a) F-values and (b) p-values for the characteristics extracted from the waste images of the general training set.
Figure 12: Image features with a stronger linear dependency on mass for each property type: (a) geometry area, (b) intensity mean, and (c) texture sum entropy. Shown data were extracted from the general training set.
Figure 13: Evolution of geometry area, intensity mean, and texture sum entropy along the four cover levels for a group of waste particles. Shown data were extracted from the general training set.
Figure 14: Root mean squared error of the cross-validated models for the training and validation subsets, considering the (a) general, (b) light, and (c) heavy range.
Figure 15: RMSE for the training, validation, and test sets, considering the best model of each mass interval.
Figure 16: Predicted and actual values of mass during the training and test of the (a) general and (b) specific models.
13 pages, 616 KiB  
Article
Dose–Response Curve in REMA Test: Determination from Smartphone-Based Pictures
by Eugene B. Postnikov, Alexander V. Sychev and Anastasia I. Lavrova
Analytica 2024, 5(4), 619-631; https://doi.org/10.3390/analytica5040041 - 10 Dec 2024
Viewed by 447
Abstract
We report a workflow and a software description for digital image colorimetry aimed at obtaining a quantitative dose–response curve and the minimal inhibitory concentration in the Resazurin Microtiter Assay (REMA) test of the activity of antimycobacterial drugs. The principle of this analysis is based on the newly established correspondence between the intensity of the a* channel of the CIE L*a*b* colour space and the concentration of resorufin produced in the course of this test. The whole procedure can be carried out using free software. It has sufficiently mild requirements for the quality of colour images, which can be taken by a typical smartphone camera. Thus, the approach does not impose additional costs on the medical examination points and is widely accessible. Its efficiency is verified by applying it to the case of two representatives of substituted 2-(quinolin-4-yl) imidazolines. The direct comparison with the data on the indicator’s fluorescence obtained using a commercial microplate reader argues that the proposed approach provides results of the same range of accuracy on the quantitative level. As a result, it would be possible to apply the strategy not only for new low-cost studies but also for expanding databases on drug candidates by quantitatively reprocessing existing data, which were earlier documented by images of microplates but analysed only qualitatively.
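Extracting the a* channel the method keys on takes two library calls. A minimal scikit-image sketch, assuming the well's pixel region has already been located (the slices are placeholders for that step):

```python
import numpy as np
from skimage import io, color

def well_a_star(image_path, row_slice, col_slice):
    """Mean a* value (CIE L*a*b*) of one microplate well region."""
    rgb = io.imread(image_path)[..., :3] / 255.0  # drop alpha, scale to [0, 1]
    lab = color.rgb2lab(rgb)                      # L* = ch 0, a* = ch 1, b* = ch 2
    return float(lab[row_slice, col_slice, 1].mean())

# Dose-response: since a* tracks resorufin concentration, plotting the
# normed a* response of each well against drug concentration yields the
# curve from which the MIC is read off.
```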
Graphical abstract
Figure 1: Main window of the developed program’s graphic user interface. It shows the graphic window displaying the normed drug–response curve and its regression according to the notation in the input data files, as well as the control panel, which allows the user to choose the required data, run calculations, and save the results.
Figure 2: Distribution of extracted average colours over the plate wells in the REMA test results for the compounds 16 (A) and 18 (B); markers denote the normed responses in the a* channel of the CIE L*a*b* colour system (circles) and the fluorescence data (asterisks) for the compounds 16 (C) and 18 (D), with abscissa labelling according to the notation in the pictures of plates reported in the work [25], which includes not only drug concentrations but also control cells (0, C1% (1% control dilution of the bacterial culture), and C (the control representing medium without the bacterial culture), not used in the calculations). Coloured markers are used for the regression; grey ones are not. The red solid (colorimetric) and blue dashed (fluorometric) curves satisfy Equation (4), with parameters listed in Table 1.
Figure 3: The model colour change due to mixing of resazurin and resorufin with a step of 10% in resorufin’s molar concentration (A) and the respective change in the CIE L*a*b* colour space coordinates (B). The dashed straight line highlights the linear correlation.
29 pages, 2318 KiB  
Review
A Review of Smart Camera Sensor Placement in Construction
by Wei Tian, Hao Li, Hao Zhu, Yongwei Wang, Xianda Liu, Rongzheng Yang, Yujun Xie, Meng Zhang, Jun Zhu and Xiangyu Wang
Buildings 2024, 14(12), 3930; https://doi.org/10.3390/buildings14123930 - 9 Dec 2024
Viewed by 561
Abstract
Cameras, with their low cost and efficiency, are widely used in construction management and structural health monitoring. However, existing reviews on camera sensor placement (CSP) are outdated due to rapid technological advancements. Furthermore, the construction industry poses unique challenges for CSP implementation due to its scale, complexity, and dynamic nature. Previous reviews have not specifically addressed these industry-specific demands. This study aims to fill this gap by analyzing articles from the Web of Science and ASCE databases that focus exclusively on CSP in construction. A rigorous selection process ensures the relevance and quality of the included studies. This comprehensive review navigates through the complexities of camera and environment models, advocating for advanced optimization techniques like genetic algorithms, greedy algorithms, Swarm Intelligence, and Markov Chain Monte Carlo to refine CSP strategies. Simultaneously, Building Information Modeling is employed to consider the progress of construction and visualize optimized layouts, improving the effect of CSP. This paper delves into perspective distortion, field of view considerations, and occlusion impacts, proposing a unified framework that bridges practical execution with the theory of optimal CSP. Furthermore, a roadmap for future exploration in construction CSP is proposed. This work enriches the study of construction CSP, charting a course for future inquiry, and emphasizes the need for adaptable and technologically congruent CSP approaches amid evolving application landscapes.
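Of the optimization techniques the review names, the greedy strategy is the simplest to illustrate: repeatedly place the camera whose view adds the most uncovered importance. A minimal NumPy sketch with hypothetical data structures standing in for the visibility analysis:

```python
import numpy as np

def greedy_placement(importance, visible_sets, n_cameras):
    """Greedy CSP baseline.

    importance: flat array of per-cell importance weights.
    visible_sets: dict mapping each candidate (position, angle)
    placement to the set of cell indices it can see; both are
    assumed outputs of a prior visibility analysis.
    """
    covered = np.zeros(importance.size, dtype=bool)
    chosen = []
    for _ in range(n_cameras):
        best, best_gain = None, -1.0
        for cand, cells in visible_sets.items():
            if cand in chosen:
                continue
            idx = np.fromiter(cells, dtype=int)
            gain = importance[idx[~covered[idx]]].sum()  # uncovered weight
            if gain > best_gain:
                best, best_gain = cand, gain
        chosen.append(best)
        covered[np.fromiter(visible_sets[best], dtype=int)] = True
    return chosen
```

Because coverage gain is submodular, this greedy baseline carries the classic (1 − 1/e) approximation guarantee, which is one reason greedy methods keep appearing alongside GAs and swarm approaches in the CSP literature.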
(This article belongs to the Special Issue Smart and Digital Construction in AEC Industry)
Show Figures

Figure 1: Research methodology.
Figure 2: Methodological workflow.
Figure 3: Camera models: (a) bullet/dome camera; (b) omnidirectional camera.
Figure 4: The general framework of GAs.
Figure 5: The iteration process of PSO.
Figure 6: The camera placement optimization framework based on BIM.
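Among the optimization techniques surveyed in this review (cf. the GA and PSO frameworks in Figures 4 and 5), the greedy algorithm is the simplest to sketch. The following minimal, hypothetical example repeatedly picks the candidate camera pose covering the most still-uncovered targets; the covers callback is a stand-in for the review's camera/environment models, where field-of-view, perspective-distortion, and occlusion checks would live.

from typing import Callable, Hashable, Iterable

def greedy_csp(
    candidates: Iterable[Hashable],
    targets: set,
    covers: Callable[[Hashable], set],
    budget: int,
) -> list:
    """Pick up to `budget` camera poses, greedily maximizing covered targets."""
    chosen, uncovered = [], set(targets)
    pool = list(candidates)
    for _ in range(budget):
        # Candidate that covers the most currently uncovered targets.
        best = max(pool, key=lambda c: len(covers(c) & uncovered), default=None)
        if best is None or not (covers(best) & uncovered):
            break  # no remaining candidate adds coverage
        chosen.append(best)
        uncovered -= covers(best)
        pool.remove(best)
    return chosen

# Toy usage: three candidate poses covering subsets of five targets.
cams = {"c1": {1, 2, 3}, "c2": {3, 4}, "c3": {4, 5}}
print(greedy_csp(cams, {1, 2, 3, 4, 5}, lambda c: cams[c], budget=2))  # -> ['c1', 'c3']

For coverage objectives of this form (monotone submodular), greedy selection is known to achieve at least a (1 - 1/e) fraction of the optimal coverage, a classical guarantee that makes it a common CSP baseline alongside GAs and PSO.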
14 pages, 715 KiB  
Article
High-Precision Digital-to-Time Converter with High Dynamic Range for 28 nm 7-Series Xilinx FPGA and SoC Devices
by Fabio Garzetti, Nicola Lusardi, Nicola Corna, Gabriele Fiumicelli, Federico Cattaneo, Gabriele Bonanno, Andrea Costa, Enrico Ronconi and Angelo Geraci
Electronics 2024, 13(23), 4825; https://doi.org/10.3390/electronics13234825 - 6 Dec 2024
Viewed by 438
Abstract
Over the last ten years, the need for high-resolution time-domain digital signal generation has grown rapidly, and applications increasingly call for digital-to-time converters (DTCs) that are extremely accurate and precise; skew compensation and camera shutter control are just two examples. The advantages of a flexible, rapid time-to-market strategy based on fast prototyping with programmable logic devices, such as field-programmable gate arrays (FPGAs) and systems-on-chip (SoCs), have become increasingly evident and outweigh those of performance-focused yet expensive application-specific integrated circuits (ASICs), whose high non-recurring engineering (NRE) costs make them unsuitable for low-volume production, especially in research or prototyping environments. To address this trend, we introduce an innovative DTC IP-Core with a resolution, also known as the least significant bit (LSB), of 52 ps, compatible with all Xilinx 7-Series FPGAs and SoCs. Measurements performed on a low-end Artix-7 XC7A100TFTG256-2 show a jitter below 50 ps r.m.s. and a high dynamic range of up to 56 ms. With resource utilization below 1% and a dynamic power dissipation of 285 mW on the target FPGA, the design maintains excellent differential and integral nonlinearity errors (DNL/INL) of 1.19 LSB and 1.56 LSB, respectively. Full article
(This article belongs to the Special Issue Feature Papers in Circuit and Signal Processing)
Show Figures

Figure 1: Schematic plot of a PDL.
Figure 2: Schematics (a) and waveforms (b) of Nutt interpolation.
Figure 3: (a) Register-transfer level (RTL) representation of the 2-bit circular buffer used to generate the CE signals. (b) Two BUFGCEs are used to generate the 180°-shifted clocks.
Figure 4: Schematic (a) and waveforms (b) of the proposed dual-clock synchronous coarse logic.
Figure 5: Overview of the proposed FPGA-based Nutt-interpolated DTC, where the dual-clock counter is used as coarse logic and the IDELAYE2 is employed as the fine interpolator.
Figure 6: Jitter of the DTC with f_IDELAYCTRL = 300 MHz and LSB = 52.083 ps over the 0–1023 LSB dynamic range at 25 °C.
Figure 7: DNL of the DTC with f_IDELAYCTRL = 300 MHz and LSB = 52.083 ps over the 0–1023 LSB dynamic range at 25 °C.
Figure 8: INL of the DTC with f_IDELAYCTRL = 300 MHz and LSB = 52.083 ps over the 0–1023 LSB dynamic range at 25 °C.
Figure 9: Block diagram of the measurement setup.
Figure 10: Block diagram of the IDELAYE2-based PDL-DTC used as a benchmark.
Figure 11: Jitter of the IDELAYE2-based PDL-DTC with f_IDELAYCTRL = 300 MHz and LSB = 52.083 ps over the 0–960 LSB dynamic range at 25 °C.
Figure 12: DNL of the IDELAYE2-based PDL-DTC with f_IDELAYCTRL = 300 MHz and LSB = 52.083 ps over the 0–960 LSB dynamic range at 25 °C.
Figure 13: INL of the IDELAYE2-based PDL-DTC with f_IDELAYCTRL = 300 MHz and LSB = 52.083 ps over the 0–960 LSB dynamic range at 25 °C.
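The 52.083 ps LSB quoted throughout these captions is consistent with the Xilinx 7-Series IDELAYE2 tap-delay relation t_tap = 1 / (32 × 2 × f_IDELAYCTRL) documented in UG471. As a quick, hedged sanity check (not code from the paper), the following snippet reproduces the LSB and relates it to the reported 56 ms dynamic range:

# Back-of-the-envelope check of the figures above: the 7-Series IDELAYE2
# tap delay is 1 / (32 * 2 * f_IDELAYCTRL) per Xilinx UG471, which at a
# 300 MHz reference reproduces the reported 52.083 ps LSB.
f_idelayctrl_hz = 300e6                       # IDELAYCTRL reference clock
lsb_s = 1.0 / (32 * 2 * f_idelayctrl_hz)      # fine-interpolator tap delay
print(f"LSB = {lsb_s * 1e12:.3f} ps")         # -> LSB = 52.083 ps

dynamic_range_s = 56e-3                       # reported maximum dynamic range
print(f"range = {dynamic_range_s / lsb_s:.3e} LSB")  # ~1.075e9 LSB, i.e. ~2^30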
15 pages, 2366 KiB  
Article
Gas Leakage Detection Using Tiny Machine Learning
by Majda El Barkani, Nabil Benamar, Hanae Talei and Miloud Bagaa
Electronics 2024, 13(23), 4768; https://doi.org/10.3390/electronics13234768 - 2 Dec 2024
Viewed by 629
Abstract
Gas leakage detection is a critical concern in both industrial and residential settings, where real-time systems are essential for quickly identifying potential hazards and preventing dangerous incidents. Traditional detection systems often rely on centralized data processing, which can introduce delays and scalability issues. To overcome these limitations, we present a solution based on tiny machine learning (TinyML), which processes data directly on the device. TinyML can execute machine learning algorithms locally, in real time, on tiny devices such as microcontrollers, ensuring faster and more efficient responses to potential dangers. Our approach combines an MLX90640 thermal camera with two optimized convolutional neural networks (CNNs), MobileNetV1 and EfficientNet-B0, deployed on the Arduino Nano 33 BLE Sense. The results show that our system provides real-time analytics with high accuracy, 88.92% for MobileNetV1 and 91.73% for EfficientNet-B0, while achieving inference times of 1414 milliseconds and using just 124.8 KB of memory. Compared to existing solutions, our edge-based system overcomes common latency and scalability challenges, making it a reliable, fast, and efficient option. This work demonstrates the potential of low-cost, scalable gas detection systems that can be deployed widely to enhance safety in various environments. By integrating machine learning models with affordable IoT devices, we aim to make safety more accessible, regardless of financial limitations, and to pave the way for further innovation in environmental monitoring solutions. Full article
Show Figures

Figure 1: Experimental setup for data collection (reprinted, with permission, from [20] © 2022 MDPI).
Figure 2: The four dataset categories: (a) Mixture; (b) No Gas; (c) Perfume; (d) Smoke.
Figure 3: Confusion matrix for the MobileNetV1 model.
Figure 4: Confusion matrix for the EfficientNet-B0 model.
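As a rough illustration of the inference step described in the abstract, the sketch below runs a CNN exported to TensorFlow Lite over a single MLX90640 frame using the Python interpreter. This is a desktop-side approximation, not the deployed Arduino firmware; the "gas_model.tflite" file name, the 24×32 input shape (the MLX90640's native resolution), and the float input type are assumptions.

import numpy as np
import tensorflow as tf

LABELS = ["Mixture", "No Gas", "Perfume", "Smoke"]  # the four dataset classes

# Hypothetical exported model; the deployed system uses MobileNetV1 or
# EfficientNet-B0 compiled for the Arduino Nano 33 BLE Sense.
interpreter = tf.lite.Interpreter(model_path="gas_model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

frame = np.random.rand(24, 32).astype(np.float32)  # stand-in MLX90640 frame
x = frame[np.newaxis, ..., np.newaxis]             # NHWC batch of one
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
probs = interpreter.get_tensor(out["index"])[0]
print(LABELS[int(np.argmax(probs))])               # predicted category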
18 pages, 5154 KiB  
Article
Detection of Hydrogen Peroxide Vapors Using Acidified Titanium(IV)-Based Test Strips
by Rayhan Hossain and Nicholas F. Materer
Materials 2024, 17(23), 5887; https://doi.org/10.3390/ma17235887 - 1 Dec 2024
Viewed by 543
Abstract
One method for the colorimetric detection of hydrogen peroxide vapor is based on a titanium–hydrogen peroxide complex. A color-changing material based on a titania hydroxypropyl cellulose thin film was initially developed; however, as this material dries, its sensitivity is significantly reduced. An alternative sensing material was therefore developed, based on titanium(IV) oxysulfate, an ionic liquid, and, in some cases, trifluoromethanesulfonic acid adsorbed onto low-cost silica thin-layer chromatography (TLC) plates. TiO₂ was heated with concentrated sulfuric acid in a controlled environment, usually at temperatures ranging from 100 °C to 250 °C. These sensors are disposable and single-use, as well as simple and inexpensive. When the resulting thin-film sensors are exposed to ppm levels of hydrogen peroxide vapor, they turn from a white reflective material to an intense yellow or orange. Combining Ti(IV) oxysulfate with an acid catalyst and an ionic-liquid-based material enhances the sensor activity towards the peroxide vapor and lowers the detection limit. Kinetic measurements were made by quantifying the intensity of light reflected from the sensor, held in a special cell, as a function of exposure time, using a low-cost web camera and a tungsten lamp. The measured rate of the color change indicates high sensitivity and first-order kinetics over a hydrogen peroxide concentration range of approximately 2 to 31 ppm. These new materials are a starting point for the preparation of more active sensor materials for hydrogen peroxide and organic peroxide vapor detection. Full article
Show Figures

Figure 1: Calibration curve for the different concentrations of standard 30% H₂O₂ solution.
Figure 2: Schematic diagram of the apparatus, depicting (A) the flow controller, (B) the bubbler used to entrain the peroxide vapor, (C) the exposure chamber, (D) the detection system, and (E) a bubbler used to determine the total concentration of peroxide in the flow.
Figure 3: UV–vis reflection spectra for silica and cellulose test strips.
Figure 4: Visualization showing the interaction of Ti(IV) oxysulfate, ionic liquids, and an acid catalyst at the microstructural level.
Figure 5: XRD pattern of Ti(IV) oxysulfate with ionic liquid and an acid catalyst.
Figure 6: FTIR spectrum of Ti(IV) oxysulfate with ionic liquid and an acid catalyst.
Figure 7: XPS spectrum of Ti(IV) oxysulfate with ionic liquid and an acid catalyst.
Figure 8: Scanning electron micrograph of a two-week-old titania film at magnifications of (A) 3000× and (B) 20,000×.
Figure 9: Reflected images of the thin film (A) before and (B) after peroxide exposure. The thin film was exposed to a hydrogen peroxide vapor concentration of 30.9 ppm for 1 h.
Figure 10: Intensity versus exposure time for (A) a cellulose test strip and (B) a silica test strip with acid; each inset shows a blowup of the region of rapid intensity decrease between 2 and 10 min.
Figure 11: First-order behavior over the first 10 min of exposure for (A) a cellulose test strip exposed to a peroxide concentration of 28 ppm for 1 h and (B) a silica test strip exposed to a peroxide concentration of 30.9 ppm for 1 h.
Figure 12: (A) Intensity versus exposure time for a silica test strip without acid and (B) the first-order behavior of the first 10 min of exposure. The silica test strip was exposed to a peroxide concentration of 29 ppm for 1 h.
Figure 13: Phenomenological first-order rate constants obtained from the test strip as a function of the peroxide concentration. Error bars were determined from thin-film homogeneity, as discussed in the text.
Figure 14: Phenomenological first-order rate constants obtained from the silica test strip with acid as a function of the peroxide concentration. Error bars were determined from thin-film homogeneity, as discussed in the text.
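The first-order kinetics reported above imply that the reflected intensity decays as I(t) = I_inf + (I₀ − I_inf)·exp(−kt). Below is a hedged sketch of how such a phenomenological rate constant k could be extracted from webcam intensity readings; the sample arrays are illustrative only, not the paper's measurements.

import numpy as np
from scipy.optimize import curve_fit

def first_order(t, i_inf, i0, k):
    """First-order intensity decay toward a plateau i_inf."""
    return i_inf + (i0 - i_inf) * np.exp(-k * t)

# Illustrative mean-pixel-intensity readings over the first 10 min of exposure.
t_min = np.array([0, 1, 2, 3, 4, 6, 8, 10], dtype=float)
i_obs = np.array([200, 168, 143, 124, 110, 91, 80, 74], dtype=float)

popt, pcov = curve_fit(first_order, t_min, i_obs, p0=(70, 200, 0.3))
i_inf, i0, k = popt
print(f"k = {k:.3f} 1/min")  # phenomenological rate constant, cf. Figures 13-14

Fitting k at several peroxide concentrations and plotting k against concentration would then reproduce the kind of trend shown in Figures 13 and 14.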