Search Results (1,388)

Search Parameters:
Keywords = sparse area

17 pages, 1733 KiB  
Article
Endpoint Distribution Modeling-Based Capture Algorithm for Interfering Multi-Target
by Xiangliang Zhang, Junlin Li, Pengjie Li, Fang Si, Xiangzhi Liu, Yu Gu, Shuguang Meng, Jibin Yin and Tao Liu
Sensors 2024, 24(24), 8191; https://doi.org/10.3390/s24248191 - 22 Dec 2024
Abstract
In physical spaces, pointing interactions cannot rely on cursors, rays, or virtual hands for feedback as in virtual environments; users must rely solely on their perception and experience to capture targets. Currently, research on modeling target distribution for pointing interactions in physical space is relatively sparse: area division is typically simplistic, and theoretical models are lacking. To address this issue, we propose two models for target distribution in physical-space pointing interactions: the single-target pointing endpoint distribution model (ST-PEDM) and the multi-target pointing endpoint distribution model (MT-PEDM). Based on these models, we have developed a basic region partitioning algorithm (BRPA) and an enhanced region partitioning algorithm (ERPA). We conducted experiments with 15 participants (11 male, 4 female) to validate the proposed distribution models and region partitioning algorithms. The results indicate that the target distribution models accurately describe the distribution areas of targets, and the region partitioning algorithms determine user intentions during pointing interactions with high precision and efficiency. At target distances of 200 cm and 300 cm, the accuracy without any algorithm is 60.54% and 42.39%, respectively; with BRPA, the accuracy is 72.94% and 68.57%, while with ERPA it reaches 84.11% and 82.74%. This technology can be utilized in interaction scenarios involving handheld pointing devices, such as handheld remote controls. It can also be applied to the rapid capture, control, and trajectory planning of drone swarms: users can quickly and accurately capture and control target drones using pointing interaction, issue commands, and transmit data through smart glasses, thereby achieving effective drone control and trajectory planning.
(This article belongs to the Special Issue Sensors for Human Posture and Movement)
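The abstract does not spell out the internals of the ST-/MT-PEDM models, but the core idea of endpoint-distribution-based capture can be sketched simply: model each target's pointing endpoints as a bivariate Gaussian and capture the target whose distribution best explains an observed endpoint. The sketch below is a hypothetical illustration of that scheme, not the paper's algorithm; all target positions, covariances, and function names are invented.

```python
import numpy as np

def mahalanobis_sq(x, mean, cov):
    """Squared Mahalanobis distance from an endpoint x to a target's model."""
    d = x - mean
    return float(d @ np.linalg.inv(cov) @ d)

def capture_target(endpoint, targets):
    """Assign a pointing endpoint to the most likely target.

    `targets` maps target id -> (mean, cov) of an assumed bivariate-Gaussian
    endpoint distribution (a stand-in for the paper's endpoint models).
    """
    return min(targets, key=lambda t: mahalanobis_sq(endpoint, *targets[t]))

targets = {
    "A": (np.array([0.0, 0.0]), np.eye(2) * 0.04),
    "B": (np.array([1.0, 0.0]), np.eye(2) * 0.04),
}
print(capture_target(np.array([0.2, 0.1]), targets))  # closer to A's distribution
```

With equal covariances this reduces to nearest-target capture; unequal covariances are where a distribution model starts to pay off, since an elongated endpoint cloud can make a farther target the more likely one.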
19 pages, 9717 KiB  
Article
Piping Plover Habitat Changes and Nesting Responses Following Post-Tropical Cyclone Fiona on Prince Edward Island, Canada
by Ryan Guild and Xiuquan Wang
Remote Sens. 2024, 16(24), 4764; https://doi.org/10.3390/rs16244764 - 20 Dec 2024
Abstract
Climate change is driving regime shifts across ecosystems, exposing species to novel challenges of extreme weather, altered disturbance regimes, food web disruptions, and habitat loss. For disturbance-dependent species like the endangered piping plover (Charadrius melodus), these shifts present both opportunities and risks. While most piping plover populations show net growth following storm-driven habitat creation, similar gains have not been documented in the Eastern Canadian breeding unit. In September 2022, post-tropical cyclone Fiona caused record coastal changes in this region, prompting our study of population and nesting responses within the central subunit of Prince Edward Island (PEI). Using satellite imagery and machine learning tools, we mapped storm-induced change in open sand habitat on PEI and compared nest outcomes across habitat conditions from 2020 to 2023. Open sand area had increased by 9–12 months post-storm, primarily through landward beach expansion. However, the following breeding season showed no change in abundance, minimal use of new habitats, and mixed nest success. Across study years, backshore zones, pure sand habitats, and sandspits/sandbars had lower apparent nest success, while washover zones, sparsely vegetated areas, and wider beaches had higher success. Following PTC Fiona, nest success on terminal spits declined sharply, dropping from 45–55% of nests hatched in pre-storm years to just 5%, partly due to increased flooding. This suggests reduced suitability, possibly from storm-induced changes to beach elevation or slope. Further analyses incorporating geomorphological and ecological data are needed to determine whether the availability of suitable habitat is limiting population growth. These findings highlight the importance of conserving and replicating critical habitat features to support piping plover recovery in vulnerable areas.
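The hatch-success analysis uses logistic-regression GLMs with coefficients reported on the log-odds scale. As a rough, self-contained illustration of fitting such a binary-outcome model (not the authors' model-averaging procedure), the sketch below generates synthetic habitat data (a standardized beach-width metric and a washover-zone flag, both hypothetical) and fits a logistic regression by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the habitat metrics. Both effects are set positive,
# echoing the abstract's finding that wider beaches and washover zones had
# higher nest success; the data themselves are invented.
n = 500
width = rng.normal(0, 1, n)            # standardized beach width
washover = rng.integers(0, 2, n)       # washover-zone flag
true_logit = -0.5 + 0.8 * width + 1.0 * washover
hatched = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Fit hatch success ~ width + washover by gradient descent on the logistic
# negative log-likelihood; fitted coefficients are on the log-odds scale.
X = np.column_stack([np.ones(n), width, washover])
beta = np.zeros(3)
for _ in range(3000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta -= 0.1 * X.T @ (p - hatched) / n

print(np.round(beta, 2))  # should land near [-0.5, 0.8, 1.0] up to sampling noise
```

A positive coefficient means the habitat metric raises the odds of hatching, which is how the signs in the paper's coefficient plots are read.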
Figure 1: Map of all nesting sites on PEI with breeding activity since 2011.
Figure 2: Classified landcover pre-storm (A) and 1-year post-storm (B) and total change in open sand area (C) over critical barrier island habitats for PIPL on PEI. Counts of breeding pairs (BP) and fledglings (FL) from each site are indicated by light and dark blue bars, respectively. The black circle on the island map indicates a nesting site with no data (ND).
Figure 3: Classified landcover pre-storm (A) and post-storm (B) and total change in open sand area (C) over critical sandspit/bar habitats for PIPL on PEI. Counts of breeding pairs (BP) and fledglings (FL) from each site are indicated by light and dark blue bars, respectively. Black circles on the island map indicate nesting sites with no data (ND). Additional classified sandspit/bar nesting sites are displayed in Figure S3.
Figure 4: Classified landcover pre-storm (A) and 1-year post-storm (B) and total change in open sand area (C) over critical mainland beach habitats for PIPL on PEI. Counts of breeding pairs (BP) and fledglings (FL) from each site are indicated by light and dark blue bars, respectively. Black circles on the island map indicate nesting sites with no data (ND).
Figure 5: Nest locations and outcomes during the three breeding seasons preceding (top three panels) and the initial season following (fourth panel) PTC Fiona over Conway Sandhills, PEI. Classified change in dry and wet sand area at one year post-storm is depicted in the bottom panel, with colours representing the change classes used in Figures 2–4.
Figure 6: Fledging rate across common nesting sites with at least three years of nesting attempts between 2020 and 2023. The horizontal line at 1.65 fledglings/pair indicates the ECCC productivity target for the Eastern Canadian recovery unit. The vertical line distinguishes between pre- and post-storm breeding seasons.
Figure 7: Model-averaged coefficient estimates (log-odds scale) from the top-ranked logistic regression GLMs of binary hatch success (left) and data summaries of hatch outcomes across habitat metrics from 2020–2023 on PEI (right). Numbers in white indicate nest counts in each category; error bars in the GLM output display 95% confidence intervals (SE × 1.96). D2 ACCESS is represented as both a categorical and a continuous variable to convey complementary insights.
Figure 8: Summaries of binary hatch success by year across habitat measures on PEI from 2020–2023. Numbers in white indicate the number of nests in each category. D2 ACCESS is represented as both a categorical and a continuous variable to convey complementary insights.
Figure 9: Model-averaged coefficient estimates (log-odds scale) from the top-ranked logistic regression GLMs of flooding and predation occurrences (left) and data summaries of nest outcomes across habitat metrics from 2020–2023 on PEI (right). Error bars in the GLM output display 95% confidence intervals (SE × 1.96). D2 ACCESS is represented as both a categorical and a continuous variable to convey complementary insights.
20 pages, 12164 KiB  
Article
Heuristic Optimization-Based Trajectory Planning for UAV Swarms in Urban Target Strike Operations
by Chen Fei, Zhuo Lu and Weiwei Jiang
Drones 2024, 8(12), 777; https://doi.org/10.3390/drones8120777 - 20 Dec 2024
Abstract
Unmanned aerial vehicle (UAV) swarms have shown substantial potential to enhance operational efficiency and reduce strike costs, presenting extensive applications in modern urban warfare. However, achieving effective strike performance in complex urban environments remains challenging, particularly when three-dimensional obstacles and threat zones must be considered simultaneously, which can significantly degrade strike effectiveness. To address this challenge, this paper proposes a target strike strategy based on the Electric Eel Foraging Optimization (EEFO) algorithm, a heuristic optimization method designed to ensure precise strikes in complex environments. The problem is formulated with specific constraints, modeling each UAV as an electric eel with a random initial position and velocity. The algorithm simulates the interaction, resting, hunting, and migrating behaviors of electric eels during foraging. During the interaction phase, UAVs engage in global exploration through communication and environmental sensing. The resting phase allows UAVs to temporarily hold their positions, preventing premature convergence to local optima. In the hunting phase, the swarm identifies and pursues optimal paths, while in the migration phase the UAVs transition to target areas, avoiding threats and obstacles while seeking safer routes. The algorithm enhances overall optimization capability by sharing information among surrounding individuals and promoting group cooperation, effectively planning flight paths and avoiding obstacles for precise strikes. The MATLAB (R2024b) simulation platform is used to compare the performance of five optimization algorithms (SO, SCA, WOA, MFO, and HHO) against the proposed EEFO algorithm for UAV swarm target strike missions. The experimental results demonstrate that in a sparse, undefended environment, EEFO outperforms the other algorithms in trajectory planning efficiency, stability, and minimal trajectory cost, while also exhibiting faster convergence. In densely defended environments, EEFO not only achieves the optimal target strike trajectory but also shows superior convergence trends and trajectory cost reduction, along with the highest mission completion rate. These results highlight the effectiveness of EEFO in both sparse and densely defended scenarios, making it a promising approach for UAV swarm operations in dynamic urban environments.
(This article belongs to the Special Issue Space–Air–Ground Integrated Networks for 6G)
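EEFO's eel-inspired operators are specific to the cited algorithm, but the overall shape of such a heuristic trajectory planner can be sketched generically: candidate waypoint sets are scored by path length plus a threat-zone penalty, and a population of candidates is improved by random perturbation with elitist selection. The threat geometry, penalty weight, and mutation scheme below are all illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(42)

THREATS = [(np.array([5.0, 5.0]), 2.0)]     # (centre, radius): hypothetical no-fly zone
START, GOAL = np.array([0.0, 0.0]), np.array([10.0, 10.0])
N_WAY = 4                                    # free waypoints per trajectory

def cost(flat):
    """Trajectory cost: path length plus a penalty for waypoints inside
    threat circles (segment-level intersection checks omitted for brevity)."""
    pts = np.vstack([START, flat.reshape(N_WAY, 2), GOAL])
    length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    penalty = sum(max(0.0, r - np.linalg.norm(p - c))
                  for p in pts for c, r in THREATS)
    return length + 50.0 * penalty

# Generic population-based search (a stand-in for EEFO's interaction/rest/
# hunt/migrate phases): mutate each candidate, keep the better of parent/child.
pop = rng.uniform(0, 10, (30, N_WAY * 2))
for _ in range(300):
    children = pop + rng.normal(0, 0.5, pop.shape)
    for i in range(len(pop)):
        if cost(children[i]) < cost(pop[i]):
            pop[i] = children[i]

best = pop[np.argmin([cost(p) for p in pop])]
print(round(cost(best), 2))
```

The straight start-to-goal line here passes through the threat circle, so a good solution detours slightly around it; comparing final costs across such optimizers is essentially what the paper's fitness-value charts do.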
Figure 1: Three-dimensional configuration space.
Figure 2: Schematic diagram of an urban building.
Figure 3: Schematic diagram of ground threats.
Figure 4: Flight altitude constraint.
Figure 5: Maximum range constraint.
Figure 6: Waypoint obstacle avoidance constraint.
Figure 7: Cubic B-spline smoothing curve.
Figure 8: Charts comparing the UAV swarm target strike results in the sparse environment scenario with hostile defense: (a–f) respectively represent the target strike trajectories of the EEFO, HHO, MFO, SCA, SO, and WOA algorithms in the 3D environment.
Figure 9: Charts comparing the UAV swarm target strike results in the sparse environment scenario with hostile defense: (a–f) respectively represent the target strike trajectories of the EEFO, HHO, MFO, SCA, SO, and WOA algorithms in the 2D environments.
Figure 10: Comparison of UAV swarm target strike results in the sparse environment scenario with invincible defense: (a) line chart comparing the optimal fitness values; (b) distribution chart, with bars showing differences in the optimal fitness values; (c) heatmap comparing the optimal fitness values.
Figure 11: Charts comparing the UAV swarm target strike results in the dense environment scenario with hostile defense: (a–f) respectively represent the target strike trajectories of the EEFO, HHO, MFO, SCA, SO, and WOA algorithms in the 3D environment.
Figure 12: Comparison of UAV swarm target strike results in the dense environment scenario with hostile defense: (a–f) respectively represent the target strike trajectories of the EEFO, HHO, MFO, SCA, SO, and WOA algorithms in the 2D environments.
Figure 13: Comparison of UAV swarm target strike results in the sparse environment scenario with hostile defense: (a) line chart comparing optimal fitness values; (b) distribution chart with bars representing the difference in optimal fitness values; (c) heatmap chart comparing the optimal fitness values.
26 pages, 13651 KiB  
Article
Dense In Situ Underwater 3D Reconstruction by Aggregation of Successive Partial Local Clouds
by Loïca Avanthey and Laurent Beaudoin
Remote Sens. 2024, 16(24), 4737; https://doi.org/10.3390/rs16244737 - 19 Dec 2024
Abstract
Assessing the completeness of an underwater 3D reconstruction on-site is crucial: it allows acquisitions to be rescheduled to capture missing data during the mission, avoiding the additional cost of a subsequent mission. This assessment needs to rely on a dense point cloud, since a sparse cloud lacks detail and a triangulated model can hide gaps. The challenge is to generate a dense cloud with field-deployable tools: traditional dense reconstruction methods can take several dozen hours on low-capacity systems like laptops or embedded units. To speed up this process, we propose building the dense cloud incrementally within an SfM framework while incorporating data redundancy management to eliminate recalculations and filter already-processed data. The method evaluates overlap area limits and computes depths by propagating the matching around SeaPoints, the keypoints we designed to identify reliable areas regardless of the quality of the processed underwater images. This produces partial local dense clouds, which are aggregated into a common frame via the SfM pipeline to produce the global dense cloud. Compared to the production of complete local dense clouds, this approach reduces the computation time by about 70% while maintaining a comparable final density. The underlying prospect of this work is to enable real-time completeness estimation directly on board, allowing for the dynamic re-planning of the acquisition trajectory.
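The figure captions for this article describe the SeaPoint detector pipeline in detail: a Harris response map, non-maximum suppression (NMS), then a threshold chosen so the keypoint count lands in a target interval. The sketch below imitates that pipeline on a synthetic image; it simply keeps the strongest NMS survivors up to a cap instead of the paper's recursive cumulative-histogram threshold search, and the window sizes and Harris constant are illustrative.

```python
import numpy as np

def box3(a):
    """3x3 box sum, a crude stand-in for the windowing in a Harris detector."""
    h, w = a.shape
    p = np.pad(a, 1)
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def seapoint_like(img, max_points=120, radius=2):
    """Corner-like keypoints via a Harris response map + non-maximum suppression.

    The paper's detector picks its threshold by recursively analyzing the
    cumulative histogram of the response map to hit a target count range;
    here we just sort the NMS survivors and truncate to `max_points`.
    """
    gy, gx = np.gradient(img.astype(float))
    ixx, iyy, ixy = box3(gx * gx), box3(gy * gy), box3(gx * gy)
    r = ixx * iyy - ixy * ixy - 0.04 * (ixx + iyy) ** 2  # Harris measure
    h, w = r.shape
    found = []
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            patch = r[y - radius:y + radius + 1, x - radius:x + radius + 1]
            if r[y, x] > 0 and r[y, x] == patch.max():  # local maximum (NMS)
                found.append((r[y, x], y, x))
    found.sort(reverse=True)
    return [(y, x) for _, y, x in found[:max_points]]

rng = np.random.default_rng(7)
pts = seapoint_like(rng.random((64, 64)))  # textured synthetic image
print(len(pts))
```

The count-driven threshold is the interesting design choice: rather than tuning a response threshold per image, the detector adapts it so poor-contrast underwater frames still yield roughly the same number of reliable seeds for match propagation.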
Figure 1: Overview of the global dense point clouds constructed from images of the four datasets: the Mermaid dataset (top left), the Lost Freediver Rock dataset (top right), the Flying Fortress dataset (bottom left), and the Landingship Wreck dataset (bottom right).
Figure 2: Workflow diagram of a standard incremental SfM framework, with options for incremental dense cloud generation and flexible application of loop closure detection and bundle adjustment based on criteria such as sparsity or exhaustiveness.
Figure 3: Diagram of the algorithm for generating a partial local dense point cloud from an image pair selected in the incremental flow to optimize spatial sampling and depth resolution. Its main steps include detecting reliable areas using SeaPoints, assessing the overlap rate and identifying the overlap area based on prior information, and performing dense matching by propagating matches in the vicinity of SeaPoints outside the overlap area to obtain a partial disparity map. The resulting dense points can then be reprojected into the 3D frame, as with sparse points within the SfM framework, to form a partial local dense cloud that is subsequently aligned with previous local clouds.
Figure 4: Diagram of the SeaPoint detector algorithm. First, a map containing the Harris measurement for each pixel is constructed. Next, non-maximum suppression (NMS) is applied to the map to retain only the local maxima within a specified radius. The threshold to select the SeaPoints among these values is then determined through an analysis of the cumulative histogram of the map, aiming to achieve a given range of points. If the currently analyzed histogram bin lacks sufficient granularity (too many values in one bin), the range of values is expanded, generating a new histogram, and the analysis continues recursively until convergence is achieved. The target interval, which indicates the desired minimum and maximum number of points, must be sufficiently wide to ensure convergence; a target interval spanning a few hundred points typically guarantees convergence across a wide range of image types. Usually, several thousand points are sought on 10 MP images.
Figure 5: Example of the histogram during the SeaPoint detector process for the image of the Lost Freediver Rock dataset in Figure 6. Left: the first histogram; center: a zoom on this histogram. Not enough points have accumulated on reaching bin 98 (2202 points) of the first histogram to meet the minimum of the target interval (2500 points), but taking bin 97 (3793 points) would exceed the maximum of the target interval (3000 points). The contents of bin 97 are therefore re-expanded into a new histogram by a recursive call (right). The algorithm finally converges on 2502 points at bin 186 of the second histogram. Here, the histograms were calculated on 256 bins.
Figure 6: SeaPoint detector examples with a target interval of [2500, 3000]. Top left: 2502 SeaPoints found on an image of the Lost Freediver Rock dataset in two recursive rounds (threshold adjusted to 38.32% of the max value). Top right: 2930 SeaPoints found on an image of the Flying Fortress dataset in one round (threshold adjusted to 90.20% of the max value). Bottom left: 2500 SeaPoints found on an image of the Mermaid statue in two recursive rounds (threshold adjusted to 62.31% of the max value). Bottom right: 2504 SeaPoints found on an image of the Landingship Wreck dataset in one round (threshold adjusted to 47.45% of the max value).
Figure 7: In green and yellow: visualization of the intrapair matchings used to form the seeds for densifying the matching through propagation and generating the partial local dense clouds (4). In blue: visualization of the interpair matching used to evaluate the overlap rate for selecting the next pair (1), to estimate the relative pose for registering the new local cloud (2), and to automatically exclude the overlap area from the 3D reconstruction of the new local cloud (3).
Figure 8: Flowchart illustrating the local statistical filtering applied to SeaPoint directional vectors to distinguish inliers from outliers: the consistency score of each vector is incremented for each neighboring vector with a similar norm and direction. Vectors with low consistency scores are classified as outliers and removed, resulting in a refined, robust list of matched SeaPoints.
Figure 9: Directional vector flow is a representation of the matching within a single view. A local statistical filtering process, based on neighborhood coherence, is applied: neighboring vectors exhibiting similar norms and directions contribute to the assessment of the studied vector. The greater the number of votes, the more coherent the vector is deemed, and the most locally coherent vectors are kept as inliers. In this image, the resulting inliers are shown in blue, while those identified as outliers are marked in red; the latter have a different direction and/or norm from their neighbors (or too few neighbors to establish coherence).
Figure 10: Identification of the overlapping area through the establishment of an area of influence around the interpair SeaPoints. The blue circles indicate the influence areas around the interpair matches on a view V (left) and on its subsequent view V + 2 (right), delimiting the overlap area between the two views.
Figure 11: Mask on V + 2 given the areas of influence (see Figure 10) calculated between V and V + 2 using the interpair SeaPoints. The sum of all white pixels in the mask is used to estimate the overlap rate between V and V + 2 with regard to the total number of pixels in V + 2.
Figure 12: Diagram of the algorithm used to densify the matching by propagation around the seeds. In the first iteration, the neighborhood of a list of seeds is analyzed (the initial seeds are all SeaPoints). After all the best possible matches have been selected, and if they are not too far from their initial seed (this distance can be approximated by the number of iterations, for example), they are added to a new list of seeds. This new list is studied in the next iteration, and so on until no more seeds are added to the next list (the points did not match, are already matched with the best score, or are too far from the initial SeaPoint).
Figure 13: Partial reconstruction: on the left, the disparity map of the first pair with the SeaPoints in blue; on the right, the disparity map obtained for a normal propagation of the second pair (red + green) as well as the partial disparity map (green only) taking into account the exclusion of the overlap area.
Figure 14: Diagram of the algorithm that reconstructs a partial local cloud by propagating the matching around the seeds while automatically excluding the overlapping area (and areas without reliable information).
Figure 15: Illustration of two types of occlusion: on the left, intrapair occlusion areas into which local seeds (circled in red in the black areas) have not spread; on the right, an interpair occlusion area (circled in red) for which there is an absence of SeaPoints matched during the interpair matching (no blue circles).
Figure 16: Modified diagram of the algorithm that reconstructs a partial local cloud by propagating the matching around the seeds while automatically excluding the overlapping area, taking occlusion problems into account (compared to Figure 14, the changes are framed in red).
Figure 17: On the left, the disparity map obtained after a partial propagation entirely excluding the overlap area; on the right, the disparity map obtained after a partial propagation taking into account intrapair and interpair occlusions.
Figure 18: From left to right: example results of intermatching ORB points, SIFT points, and SeaPoints, each using approximately 3000 keypoints in both images of the interpair (top row), along with their corresponding masks showing influence areas applied around the matches to segment the reliable regions of the overlap area (bottom row).
Figure 19: At the top, the two successive local clouds reconstructed classically (total reconstruction); in the center, the two successive local clouds, the second of which is partially reconstructed following our method. Below, the fusion of the two classic local clouds on the left and the fusion of the two partial local clouds on the right.
10 pages, 1269 KiB  
Article
Impact of Climatic Factors on the Temporal Trend of Malaria in India from 1961 to 2021
by Muniaraj Mayilsamy, Rajamannar Veeramanoharan, Kamala Jain, Vijayakumar Balakrishnan and Paramasivan Rajaiah
Trop. Med. Infect. Dis. 2024, 9(12), 309; https://doi.org/10.3390/tropicalmed9120309 - 19 Dec 2024
Abstract
Malaria remains a significant public health problem in India. Although temperature influences Anopheline mosquito feeding intervals, population density, and longevity, as well as the reproductive potential of the Plasmodium parasite, and rainfall influences the availability of larval habitats, evidence correlating the impact of climatic factors with the incidence of malaria is sparse. Understanding the influence of climatic factors on malaria transmission will help us predict the future spread and intensification of the disease. The present study aimed to determine the impact of the temporal trends of climatic factors, namely annual average maximum, minimum, and mean temperature and rainfall over the 61 years from 1961 to 2021, and relative humidity over the 41 years from 1981 to 2021, on the annual incidence of malaria cases in India. Two analyses were performed. In the first, the annual incidence of malaria and the meteorological parameters (annual maximum, minimum, and mean temperature, annual rainfall, and relative humidity) were plotted separately to see whether the temporal trends of the climatic factors showed any coherence with, or influence over, the annual incidence of malaria cases. In the second, scatter plots were used to determine the relationship between the incidence of malaria and the associated climatic factors. The incidence of malaria per million population was also calculated. In the first analysis, annual malaria cases showed negative correlations of varying degrees with relative humidity and with minimum, maximum, and mean temperature, while rainfall showed a positive correlation. The second analysis confirmed this pattern: the scatter plots showed that rainfall had a positive correlation with malaria cases, while the remaining climatic factors, temperature and humidity, had negative correlations of varying degrees. Of the 61 years studied, the square root count of malaria cases exceeded 1000 in 29 years when the minimum temperature was 18–19 °C, in 33 years when the maximum temperature was 30–31 °C, in 37 years when the mean temperature was 24–25 °C, in 20 years when the rainfall was in the range of 100–120, and in 29 years when the relative humidity was 55–65%. While rainfall showed a strong positive correlation with the annual incidence of malaria, temperature and relative humidity showed negative correlations of various degrees. Increasing temperature may push the boundaries of malaria from the southern peninsular region towards higher altitudes and northern sub-tropical areas. Although scanty rainfall reduces transmission, increased rainfall would increase malaria incidence in India.
(This article belongs to the Special Issue The Global Burden of Malaria and Control Strategies)
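The scatter-plot analysis described above boils down to correlating each climatic series with the (square-root-transformed) annual case counts. The sketch below shows that computation on synthetic series whose correlation directions are constructed to mirror the reported ones (rainfall positive, temperature negative); none of the numbers are the study's actual data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for 61 annual observations (the real study spans
# 1961-2021). The dependence structure is invented for illustration only.
years = 61
rainfall = rng.normal(110, 10, years)
temp = rng.normal(24.5, 0.6, years)
sqrt_cases = 800 + 5 * (rainfall - 110) - 60 * (temp - 24.5) + rng.normal(0, 20, years)

def pearson(a, b):
    """Pearson correlation coefficient, as read off a scatter-plot trendline."""
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

print(round(pearson(rainfall, sqrt_cases), 2),
      round(pearson(temp, sqrt_cases), 2))
```

The square-root transform used in the paper tames the heavy right skew of annual case counts so the linear trendline is not dominated by epidemic years.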
Figure 1: The temporal trend of the annual incidence of malaria (blue bars) in India compared with the temporal trends of annual average maximum temperature (A), minimum temperature (B), mean temperature (C), rainfall (D), and relative humidity (E). The R² values, shown in dotted lines, indicate the reliability of the trendline against the actual trend.
Figure 2: Distribution of the square root of malaria cases in a particular year against the corresponding average annual climatic factor (A–E). Red dots show square root counts of cases above 1000, and blue dots show square root counts of cases below 1000. The trendline shows the pattern of the square root count of malaria cases against the corresponding climatic factors. The incidence of malaria cases (per million population) vs. the total number of reported malaria cases in India is presented in (F).
25 pages, 1330 KiB  
Article
Combined Barrier–Target Coverage for Directional Sensor Network
by Balázs Kósa, Márk Bukovinszki, Tamás V. Michaletzky and Viktor Tihanyi
Sensors 2024, 24(24), 8093; https://doi.org/10.3390/s24248093 - 18 Dec 2024
Abstract
Over the past twenty years, camera networks have become increasingly popular. In response to the various demands imposed on these networks, several coverage models have been developed in the scientific literature, such as area, trap, barrier, and target coverage. In this paper, a new type of coverage task, the Maximum Target Coverage with k-Barrier Coverage (MTCBC-k) problem, is defined. Here, the goal is to cover as many moving targets as possible from time step to time step while continuously maintaining k-barrier coverage over the region of interest (ROI). This is different from independently solving the two tasks and then merging the results. An Integer Linear Programming (ILP) formulation for the MTCBC-k problem is presented. Additionally, two types of camera clustering methods were developed, allowing smaller ILPs to be solved within clusters and their solutions combined. Furthermore, a polynomial-time greedy algorithm is introduced as an alternative way to solve the MTCBC-k problem. An example is also given of how the aforementioned methods can be modified to handle a more realistic scenario in which only the targets detected by the cameras are known, rather than all targets within the ROI. Simulations were run with both dense and sparse camera placements, convincingly supporting the usefulness of the clustering and greedy methods.
(This article belongs to the Section Intelligent Sensors)
Figure 1
<p>(<b>a</b>) A directional sensor with its sectors and the quadruple characterization of a sector. (<b>b</b>) A sensor and its sectors used in the experiments.</p>
Full article ">Figure 2
<p>The coverage graph of a permissible sector selection.</p>
Full article ">Figure 3
<p>A simple example of the transformation from a coverage graph to the corresponding network graph.</p>
Full article ">Figure 4
<p>(<b>a</b>) An example of horizontal clusters. (<b>b</b>) An example of vertical clusters.</p>
Full article ">Figure 5
<p>(<b>a</b>) An example for how line segments are used to form the backbone of vertical clusters. (<b>b</b>) Adding multiple line segments to a vertical cluster. (<b>c</b>) Merging the source and sink nodes.</p>
Full article ">Figure 6
<p>An example when <math display="inline"><semantics> <mrow> <mi>G</mi> <mi>r</mi> <mi>e</mi> <mi>e</mi> <mi>d</mi> <msub> <mi>y</mi> <mrow> <mi>b</mi> <mi>c</mi> <mtext>_</mtext> <mi>m</mi> <mi>i</mi> <mi>n</mi> <mtext>_</mtext> <mi>s</mi> <mi>e</mi> <mi>c</mi> <mi>t</mi> </mrow> </msub> </mrow> </semantics></math> does not return an optimal result. (<b>a</b>) Sector. (<b>b</b>) Network graph.</p>
Full article ">
20 pages, 11605 KiB  
Article
GeometryFormer: Semi-Convolutional Transformer Integrated with Geometric Perception for Depth Completion in Autonomous Driving Scenes
by Siyuan Su and Jian Wu
Sensors 2024, 24(24), 8066; https://doi.org/10.3390/s24248066 - 18 Dec 2024
Viewed by 185
Abstract
Depth completion is widely employed in Simultaneous Localization and Mapping (SLAM) and Structure from Motion (SfM), which are of great significance to the development of autonomous driving. Recently, the methods based on the fusion of vision transformer (ViT) and convolution have brought the [...] Read more.
Depth completion is widely employed in Simultaneous Localization and Mapping (SLAM) and Structure from Motion (SfM), which are of great significance to the development of autonomous driving. Recently, the methods based on the fusion of vision transformer (ViT) and convolution have brought the accuracy to a new level. However, there are still two shortcomings that need to be solved. On the one hand, for the poor performance of ViT in details, this paper proposes a semi-convolutional vision transformer to optimize local continuity and designs a geometric perception module to learn the positional correlation and geometric features of sparse points in three-dimensional space to perceive the geometric structures in depth maps for optimizing the recovery of edges and transparent areas. On the other hand, previous methods implement single-stage fusion to directly concatenate or add the outputs of ViT and convolution, resulting in incomplete fusion of the two, especially in complex outdoor scenes, which will generate lots of outliers and ripples. This paper proposes a novel double-stage fusion strategy, applying learnable confidence after self-attention to flexibly learn the weight of local features. Our network achieves state-of-the-art (SoTA) performance with the NYU-Depth-v2 Dataset and the KITTI Depth Completion Dataset. It is worth mentioning that the root mean square error (RMSE) of our model on the NYU-Depth-v2 Dataset is 87.9 mm, which is currently the best among all algorithms. At the end of the article, we also verified the generalization ability in real road scenes. Full article
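The RMSE figure quoted for NYU-Depth-v2 (87.9 mm) follows the usual depth-completion convention of evaluating only pixels with valid ground truth. A minimal sketch of that metric; the function name and toy values are illustrative, not from the paper:

```python
import numpy as np

def depth_rmse(pred, gt):
    """RMSE over valid (gt > 0) pixels, in the same units as the inputs.
    Pixels with no ground-truth depth (encoded as 0) are excluded."""
    mask = gt > 0
    return float(np.sqrt(np.mean((pred[mask] - gt[mask]) ** 2)))

gt = np.array([[1000.0, 0.0],        # depths in mm; 0 = no ground truth
               [2000.0, 1500.0]])
pred = np.array([[1100.0, 500.0],
                 [1900.0, 1500.0]])
rmse = depth_rmse(pred, gt)          # averages over the 3 valid pixels only
```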
(This article belongs to the Section Remote Sensors)
Figure 1
<p>The depth values in the transparent plane with significant semantic changes are recovered incorrectly without geometric perception.</p>
Full article ">Figure 2
<p>Geometry Transformer and Local Context Block. We connect convolution and transformer in parallel and fuse the local features extracted by the convolutional layer after the self-attention and feed-forward modules.</p>
Full article ">Figure 3
<p>Comparison of our self-attention with ViT and ConvFormer. The left is self-attention of ViT, which directly generates the self-attention matrix after tokenization; the middle is our semi-convolution self-attention, which predicts the self-attention matrix by convolution layer; the right is pure convolution self-attention of ConvFormer, which generates dynamic convolution kernels to build long-range dependency.</p>
Full article ">Figure 4
<p>The left image is a projection diagram. The point <span class="html-italic">P</span> in the camera coordinate system is projected onto the pixel plane as <span class="html-italic">P′</span>, while the right image shows the geometric relationship of triangulation; ΔAOP is similar to ΔBOP′.</p>
Full article ">Figure 5
<p>Different geometric features at edges and planes.</p>
Full article ">Figure 6
<p>Geometric Perception. When the input data are in the form of feature maps, the three-dimensional coordinate map is concatenated to the input. After the feature map is tokenized, the coordinate map is also reshaped into a vector form and added to the tokens.</p>
Full article ">Figure 7
<p>Qualitative results on NYUv2 Dataset. Comparisons of our method against SoTA methods.</p>
Full article ">Figure 8
<p>Qualitative results on KITTI DC test dataset. Comparisons of our method against SoTA methods.</p>
Full article ">Figure 9
<p>Visual effect comparison of single-stage and double-stage fusion.</p>
Full article ">Figure 10
<p>Qualitative results on KITTI DC selected validation dataset with 4 and 16 LiDAR scanning lines. Comparisons of our method against SoTA methods.</p>
Full article ">Figure 11
<p>Our real-car experimental platform for collecting real road scenes data.</p>
Full article ">Figure 12
<p>Our calibration process and results on Autoware.</p>
Full article ">Figure 13
<p>Comparison with SoTA method in real road scenes.</p>
Full article ">
26 pages, 8827 KiB  
Article
IMERG V07B and V06B: A Comparative Study of Precipitation Estimates Across South America with a Detailed Evaluation of Brazilian Rainfall Patterns
by José Roberto Rozante and Gabriela Rozante
Remote Sens. 2024, 16(24), 4722; https://doi.org/10.3390/rs16244722 - 17 Dec 2024
Viewed by 311
Abstract
Satellite-based precipitation products (SPPs) are essential for climate monitoring, especially in regions with sparse observational data. This study compares the performance of the latest version (V07B) and its predecessor (V06B) of the Integrated Multi-satellitE Retrievals for GPM (IMERG) across South America and the [...] Read more.
Satellite-based precipitation products (SPPs) are essential for climate monitoring, especially in regions with sparse observational data. This study compares the performance of the latest version (V07B) and its predecessor (V06B) of the Integrated Multi-satellitE Retrievals for GPM (IMERG) across South America and the adjacent oceans. It focuses on evaluating their accuracy under different precipitation regimes in Brazil using 22 years of IMERG Final data (2000–2021), aggregated into seasonal totals (summer, autumn, winter, and spring). The observations used for the evaluation were organized into 0.1° × 0.1° grid points to match IMERG’s spatial resolution. The analysis was restricted to grid points containing at least one rain gauge, and in cases where multiple gauges were present within a grid point the average value was used. The evaluation metrics included the Root Mean Square Error (RMSE) and categorical indices. The results reveal that while both versions effectively capture major precipitation systems such as the mesoscale convective system (MCS), South Atlantic Convergence Zone (SACZ), and Intertropical Convergence Zone (ITCZ), significant discrepancies emerge in high-rainfall areas, particularly over oceans and tropical zones. Over the continent, however, these discrepancies are reduced due to the correction of observations in the final version of IMERG. A comprehensive analysis of the RMSE across Brazil, both as a whole and within the five analyzed regions, without differentiating precipitation classes, demonstrates that version V07B effectively reduces errors compared to version V06B. The analysis of statistical indices across Brazil’s five regions highlights distinct performance patterns between IMERG versions V06B and V07B, driven by regional and seasonal precipitation characteristics. V07B demonstrates a superior performance, particularly in regions with intense rainfall (R1, R2, and R5), showing a reduced RMSE and improved categorical indices. 
These advancements are linked to V07B’s reduced overestimation in cold-top cloud regions, although both versions consistently overestimate at rain/no-rain thresholds and for light rainfall. However, in regions prone to underestimation, such as the interior of the Northeastern region (R3) during winter, and the northeastern coast (R4) during winter and spring, V07B exacerbates these issues, highlighting challenges in accurately estimating precipitation from warm-top cloud systems. This study concludes that while V07B exhibits notable advancements, further enhancements are needed to improve accuracy in underperforming regions, specifically those influenced by warm-cloud precipitation systems. Full article
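The categorical indices used in such SPP evaluations are typically hit/miss/false-alarm scores at a rain/no-rain threshold. A minimal sketch assuming the common POD/FAR/CSI definitions and an illustrative 0.1 mm threshold; the paper's exact index set and threshold may differ:

```python
import numpy as np

def categorical_indices(sat, gauge, threshold=0.1):
    """Probability of detection (POD), false alarm ratio (FAR) and critical
    success index (CSI) for satellite vs. gauge rainfall, using a rain/no-rain
    threshold in mm. No zero-division guard: a sketch, not production code."""
    hits = np.sum((sat >= threshold) & (gauge >= threshold))
    misses = np.sum((sat < threshold) & (gauge >= threshold))
    false_alarms = np.sum((sat >= threshold) & (gauge < threshold))
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + misses + false_alarms)
    return pod, far, csi

# toy daily totals (mm) at five co-located grid points
sat = np.array([0.0, 1.2, 3.5, 0.4, 0.0])
gauge = np.array([0.0, 0.8, 0.0, 1.1, 2.0])
pod, far, csi = categorical_indices(sat, gauge)
```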
30 pages, 63876 KiB  
Article
A Low-Cost 3D Mapping System for Indoor Scenes Based on 2D LiDAR and Monocular Cameras
by Xiaojun Li, Xinrui Li, Guiting Hu, Qi Niu and Luping Xu
Remote Sens. 2024, 16(24), 4712; https://doi.org/10.3390/rs16244712 - 17 Dec 2024
Viewed by 533
Abstract
The cost of indoor mapping methods based on three-dimensional (3D) LiDAR can be relatively high, and they lack environmental color information, thereby limiting their application scenarios. This study presents an innovative, low-cost, omnidirectional 3D color LiDAR mapping system for indoor environments. The system [...] Read more.
The cost of indoor mapping methods based on three-dimensional (3D) LiDAR can be relatively high, and they lack environmental color information, thereby limiting their application scenarios. This study presents an innovative, low-cost, omnidirectional 3D color LiDAR mapping system for indoor environments. The system consists of two two-dimensional (2D) LiDARs, six monocular cameras, and a servo motor. The point clouds are fused with imagery using a pixel-spatial dual-constrained depth gradient adaptive regularization (PS-DGAR) algorithm to produce dense 3D color point clouds. During fusion, the point cloud is reconstructed inversely based on the predicted pixel depth values, compensating for areas of sparse spatial features. For indoor scene reconstruction, a globally consistent alignment algorithm based on particle filter and iterative closest point (PF-ICP) is proposed, which incorporates adjacent frame registration and global pose optimization to reduce mapping errors. Experimental results demonstrate that the proposed density enhancement method achieves an average error of 1.5 cm, significantly improving the density and geometric integrity of sparse point clouds. The registration algorithm achieves a root mean square error (RMSE) of 0.0217 and a runtime of less than 4 s, both of which outperform traditional iterative closest point (ICP) variants. Furthermore, the proposed low-cost omnidirectional 3D color LiDAR mapping system demonstrates superior measurement accuracy in indoor environments. Full article
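The ICP registration at the core of the comparison above can be sketched as a single point-to-point iteration: nearest-neighbour matching followed by a best-fit rigid transform via SVD (the Kabsch solution). This is the textbook building block that the ICP variants share, not the proposed PF-ICP:

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP iteration: brute-force nearest-neighbour
    correspondences, then the least-squares rigid transform (Kabsch/SVD)
    mapping src onto its matches."""
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    matched = dst[np.argmin(d, axis=1)]
    mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
    H = (src - mu_s).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_s
    return R, t

# toy 2D frames: dst is src under a small rigid motion, so the nearest
# neighbours are the true correspondences and one step recovers the motion
theta = 0.1                           # radians
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.05, -0.03])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
dst = src @ R_true.T + t_true
R, t = icp_step(src, dst)
```

In practice the step is iterated until the alignment RMSE converges; for large initial offsets (as in the paper's Figure 9 experiments) the correspondences are wrong at first, which is exactly where the variants differ.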
(This article belongs to the Special Issue New Perspectives on 3D Point Cloud (Third Edition))
Figure 1
<p>The structure of the proposed low-cost 3D indoor mapping system.</p>
Full article ">Figure 2
<p>Process of associating 3D point clouds with 2D image pixels. From right to left, (i) rigid body transformation from the world coordinate system to the camera coordinate system, (ii) perspective projection from the camera coordinate system to the image plane, (iii) mapping from the image plane to the pixel coordinate system.</p>
Full article ">Figure 3
<p>Neighborhood search and depth prediction diagram. The example of a failed prediction is represented by a triangle symbol and highlighted with a red dashed box at the top, while the example of a successful prediction is represented by a star symbol and highlighted with a red dashed box on the far right. At the bottom, the conditions for algorithm convergence and the corresponding output are shown. The arrows represent the prediction steps.</p>
Full article ">Figure 4
<p>Globally uniform alignment schematic diagram. The main components include key LiDAR frames (in blue), device poses (in orange) and several state nodes corresponding to the sensor fusion results. The system uses particle filtering (green lines) to update the system state and uses ICP (red dashed lines) to perform registration between adjacent frames. The diagram also highlights the fusion process between LiDAR and device observations to achieve global pose alignment.</p>
Full article ">Figure 5
<p>Experimental system design drawing. (<b>a</b>) Structure of a low-cost 3D point cloud acquisition device. (<b>b</b>) Diagram of camera detection range and installation configuration. (<b>c</b>) Overall design diagram of the acquisition system integrated into the UGV.</p>
Full article ">Figure 6
<p>Experimental scene layout and partial display. The upper figure is a 2D schematic of the experimental scene, while the lower images show real-world photos from certain nodes along with the measured dimensions of related objects.</p>
Full article ">Figure 7
<p>Sampling density analysis of the low-cost 3D point cloud acquisition system, showing the sampling density distribution along the X-axis, Y-axis, and Z-axis from top to bottom.</p>
Full article ">Figure 8
<p>The depth prediction result maps generated by the PS-DGAR algorithm and the SuperPixel segmentation-based prediction algorithm at different scan distances, as well as the enhanced 3D point cloud imaging result maps based on these depth maps.</p>
Full article ">Figure 9
<p>The point cloud registration results for adjacent frames under different initial transformations are illustrated as follows, from top to bottom: initial transformation, G−ICP, R−ICP, M−ICP, FGR, and the proposed PF−ICP method. Specifically, (<b>a</b>) an initial transformation involving only translation along the x-axis without any rotation (<math display="inline"><semantics> <mrow> <mi>x</mi> <mo>=</mo> <mn>3.0</mn> <mspace width="0.166667em"/> <mi mathvariant="normal">m</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>y</mi> <mo>=</mo> <mn>0.0</mn> <mspace width="0.166667em"/> <mi mathvariant="normal">m</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>z</mi> <mo>=</mo> <mn>0.0</mn> <mspace width="0.166667em"/> <mi mathvariant="normal">m</mi> </mrow> </semantics></math>, roll <math display="inline"><semantics> <msup> <mn>0</mn> <mo>°</mo> </msup> </semantics></math>, pitch <math display="inline"><semantics> <msup> <mn>0</mn> <mo>°</mo> </msup> </semantics></math>, yaw <math display="inline"><semantics> <msup> <mn>0</mn> <mo>°</mo> </msup> </semantics></math>). 
(<b>b</b>) An initial transformation with translation along the x-axis and slight rotation about the three axes (<math display="inline"><semantics> <mrow> <mi>x</mi> <mo>=</mo> <mn>3.144</mn> <mspace width="0.166667em"/> <mi mathvariant="normal">m</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>y</mi> <mo>=</mo> <mn>0.001</mn> <mspace width="0.166667em"/> <mi mathvariant="normal">m</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>z</mi> <mo>=</mo> <mn>0.0</mn> <mspace width="0.166667em"/> <mi mathvariant="normal">m</mi> </mrow> </semantics></math>, roll <math display="inline"><semantics> <mrow> <mo>−</mo> <mn>0</mn> <mo>.</mo> <msup> <mn>002</mn> <mo>°</mo> </msup> </mrow> </semantics></math>, pitch <math display="inline"><semantics> <mrow> <mn>0</mn> <mo>.</mo> <msup> <mn>004</mn> <mo>°</mo> </msup> </mrow> </semantics></math>, yaw <math display="inline"><semantics> <mrow> <mo>−</mo> <mn>0</mn> <mo>.</mo> <msup> <mn>001</mn> <mo>°</mo> </msup> </mrow> </semantics></math>). 
(<b>c</b>) An initial transformation involving translation along the y-axis and significant rotation (<math display="inline"><semantics> <mrow> <mi>x</mi> <mo>=</mo> <mn>0.004</mn> <mspace width="0.166667em"/> <mi mathvariant="normal">m</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>y</mi> <mo>=</mo> <mo>−</mo> <mn>2.4</mn> <mspace width="0.166667em"/> <mi mathvariant="normal">m</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>z</mi> <mo>=</mo> <mo>−</mo> <mn>0.001</mn> <mspace width="0.166667em"/> <mi mathvariant="normal">m</mi> </mrow> </semantics></math>, roll <math display="inline"><semantics> <mrow> <mo>−</mo> <mn>0</mn> <mo>.</mo> <msup> <mn>001</mn> <mo>°</mo> </msup> </mrow> </semantics></math>, pitch <math display="inline"><semantics> <mrow> <mo>−</mo> <mn>0</mn> <mo>.</mo> <msup> <mn>001</mn> <mo>°</mo> </msup> </mrow> </semantics></math>, yaw <math display="inline"><semantics> <mrow> <mo>−</mo> <msup> <mn>90</mn> <mo>°</mo> </msup> </mrow> </semantics></math>).</p>
Full article ">Figure 10
<p>Result of globally consistent indoor colored map reconstruction. Subfigures (<b>a</b>–<b>f</b>) show the mapping results and real scenes of randomly selected indoor scene nodes.</p>
Full article ">Figure 11
<p>Three-dimensional imaging results of indoor complex environments.</p>
Full article ">
22 pages, 23617 KiB  
Article
Exploring the Footprint of COVID-19 on the Evolution of Public Bus Transport Demand Using GIS
by Rafael González-Escobar, Juan Miguel Vega Naranjo, Montaña Jiménez-Espada and Jonathan Galeano Vivas
Sustainability 2024, 16(24), 10901; https://doi.org/10.3390/su162410901 (registering DOI) - 12 Dec 2024
Viewed by 433
Abstract
The scope of the research work described in this article involved identifying the effects of the COVID-19 pandemic on the urban public transport system in a medium-sized city and its adjacent metropolitan area, using as reference information the number of tickets effectively sold [...] Read more.
The scope of the research work described in this article involved identifying the effects of the COVID-19 pandemic on the urban public transport system in a medium-sized city and its adjacent metropolitan area, using as reference information the number of tickets effectively sold in order to determine the fluctuation in the volume of passengers on the different bus lines before, during and after the pandemic. At the methodological level, a combined approach was employed, involving, on the one hand, the collection of open access public data from institutional repositories and information provided by the government and, on the other hand, network analysis and graphical mapping using GIS tools. The results obtained at the micro level (individualised study of each urban bus line) reveal a significant decrease in the number of passengers during the pandemic, showing the effect of mobility restrictions and the fear of contagion. However, a gradual recovery in post-pandemic demand has been observed, highlighting a large variability in recovery patterns between different bus lines. Such a situation could be attributable to several factors, such as the socio-demographic characteristics of the areas served, the frequency of the service, connectivity with other modes of transport and users’ perception of the quality of the service. At the macro level (comparison between urban and interurban transport), lines with higher demand prior to the pandemic have shown greater resilience and faster recovery. However, urban transport has experienced a more uniform and accelerated recuperation than interurban transport, with significant percentage differences in the years analysed. This disparity could be explained by the greater dependence of inhabitants on urban transport for their daily trips, due to its greater frequency and geographical coverage. Interurban transport, on the other hand, shows a more fluctuating demand and a lower dependence of users. 
Finally, the lack of previous research focused on the impact of the pandemic in sparsely populated rural areas restricts the ability to establish a solid frame of reference and generalise the results of this study. The authors consider that more detailed future research, including a comparative analysis of different alternative transport modes in inter-urban settings and considering a broader set of socio-demographic variables of passengers, is needed to better understand mobility dynamics in these areas and their evolution in the context of the pandemic. Full article
(This article belongs to the Special Issue Sustainable Transport and Land Use for a Sustainable Future)
Figure 1
<p>Methodology flowchart.</p>
Full article ">Figure 2
<p>Location map of the study area.</p>
Full article ">Figure 3
<p>Population of Cáceres city.</p>
Full article ">Figure 4
<p>Representation of the bus lines in the city of Cáceres. Source: SuBus Vectalia (<a href="https://caceres.vectalia.es/planos/" target="_blank">https://caceres.vectalia.es/planos/</a>, accessed on 2 December 2024).</p>
Full article ">Figure 5
<p>Percentage of total tickets sold per year on each route compared to 2019.</p>
Full article ">Figure 6
<p>Percentage of tickets sold per route each year.</p>
Full article ">Figure 7
<p>Total number of tickets sold per intercity bus route.</p>
Full article ">Figure 8
<p>Total number of tickets sold by Cáceres city bus lines and by year.</p>
Full article ">Figure 9
<p>Percentage of total tickets sold per year on each Cáceres urban transport route compared to 2019.</p>
Full article ">Figure 10
<p>Percentage of total tickets sold per year on each intercity bus route compared to 2019.</p>
Full article ">Figure 11
<p>Visualization of the trend and pattern of behaviour of intercity bus lines.</p>
Full article ">Figure 12
<p>Visualization of the trend and pattern of behaviour of the urban bus lines in the city of Cáceres.</p>
Full article ">
27 pages, 3310 KiB  
Article
Evaluation of Correction Algorithms for Sentinel-2 Images Implemented in Google Earth Engine for Use in Land Cover Classification in Northern Spain
by Iyán Teijido-Murias, Marcos Barrio-Anta and Carlos A. López-Sánchez
Forests 2024, 15(12), 2192; https://doi.org/10.3390/f15122192 - 12 Dec 2024
Viewed by 642
Abstract
This study examined the effect of atmospheric, topographic, and Bidirectional Reflectance Distribution Function (BRDF) corrections of Sentinel-2 images implemented in Google Earth Engine (GEE) for use in land cover classification. The study was carried out in an area of complex orography in northern [...] Read more.
This study examined the effect of atmospheric, topographic, and Bidirectional Reflectance Distribution Function (BRDF) corrections of Sentinel-2 images implemented in Google Earth Engine (GEE) for use in land cover classification. The study was carried out in an area of complex orography in northern Spain and made use of the Spanish National Forest Inventory plots and other systematically located plots to cover non-forest classes. A total of 2991 photo-interpreted ground plots and 15 Sentinel-2 images, acquired in summer at a spatial resolution of 10–20 m per pixel, were used for this purpose. The overall goal was to determine the optimal level of image correction in GEE for subsequent use in time series analysis of images for accurate forest cover classification. Particular attention was given to the classification of cover by the major commercial forest species: Eucalyptus globulus, Eucalyptus nitens, Pinus pinaster, and Pinus radiata. The Second Simulation of the Satellite Signal in the Solar Spectrum (Py6S) algorithm, used for atmospheric correction, provided the best compromise between execution time and image size, in comparison with other algorithms such as Sentinel-2 Level 2A Processor (Sen2Cor) and Sensor Invariant Atmospheric Correction (SIAC). To correct the topographic effect, we tested the modified Sun-canopy-sensor topographic correction (SCS + C) algorithm with digital elevation models (DEMs) of three different spatial resolutions (90, 30, and 10 m per pixel). The combination of Py6S, the SCS + C algorithm and the high-spatial resolution DEM (10 m per pixel) yielded the greatest precision, which demonstrated the need to match the pixel size of the image and the spatial resolution of the DEM used for topographic correction. We used the Ross-Thick/Li-Sparse-Reciprocal BRDF to correct the variation in reflectivity captured by the sensor. 
The BRDF corrections did not significantly improve the accuracy of the land cover classification with the Sentinel-2 images acquired in summer; however, we retained this correction for subsequent time series analysis of the images, as we expected it to be of much greater importance in images with larger solar incidence angles. Our final proposed dataset, with image correction for atmospheric (Py6S), topographic (SCS + C), and BRDF (Ross-Thick/Li-Sparse-Reciprocal BRDF) effects and a DEM of spatial resolution 10 m per pixel, yielded better goodness-of-fit statistics than other datasets available in the GEE catalogue. The Sentinel-2 images currently available in GEE are therefore not the most accurate for constructing land cover classification maps in areas with complex orography, such as northern Spain. Full article
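The SCS + C correction referenced above rescales each pixel by a slope/illumination ratio, with the empirical constant C estimated per band from a regression of reflectance against the illumination cosine (the usual Soenen et al. formulation). A minimal sketch with synthetic inputs; variable names are illustrative:

```python
import numpy as np

def scs_c_correction(refl, cos_i, slope, solar_zenith):
    """SCS+C topographic correction sketch.
    refl: observed reflectance; cos_i: cosine of the local illumination
    angle; slope and solar_zenith in radians. C = b/m comes from the
    band-wise linear fit refl = m * cos_i + b."""
    m, b = np.polyfit(cos_i.ravel(), refl.ravel(), 1)
    C = b / m
    return refl * (np.cos(slope) * np.cos(solar_zenith) + C) / (cos_i + C)

# synthetic band whose signal is purely illumination-driven: after SCS+C
# the corrected reflectance should be uniform across the slope
cos_i = np.array([0.4, 0.6, 0.8, 1.0])
refl = 0.3 * cos_i + 0.05
slope = np.deg2rad(20.0)
solar_zenith = np.deg2rad(35.0)
corrected = scs_c_correction(refl, cos_i, slope, solar_zenith)
```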
Figure 1
<p>Workflow adopted in this study to analyze different combinations of Sentinel-2 imagery corrections. In Algorithm_AT00B, Algorithm_ is the name or abbreviation of the algorithm used, A denotes “atmospheric correction”, T “topographic correction”, the number 00 refers to the spatial resolution of the digital elevation model (DEM) (90, 30, and 10 m per pixel, respectively) and B refers to “application of BRDF correction”. The datasets are shown in three different colours: datasets available in the GEE repository, in blue, the dataset developed in Sentinel Application Platform—SNAP 11.0.0 and uploaded in GEE assets, in purple; and the Level 1 C datasets derived from the GEE platform, in orange. In all cases, the Random Forest algorithm was used for fitting each processing dataset.</p>
Full article ">Figure 2
<p>Overview of (<b>a</b>) the location of the study area overlapping the Spanish National Forest Inventory plots used in this study, (<b>b</b>) Sentinel-2 granules for the study area, and (<b>c</b>) location of the region of interest in northern Spain. WGS 84/UTM zone 29N (EPSG: 32629).</p>
Full article ">Figure 3
<p>Visual comparison of the 4 datasets.</p>
Full article ">Figure 4
<p>Box plots of the overall accuracy (Accuracy) of the whole land cover classification corresponding to different levels of S2 image processing: absence of atmospheric, topographic, or BRDF correction (1C), atmospheric correction with the Sen2Cor algorithm and topographic correction with the Sen2Cor algorithm with DEM of 90 m per pixel (S2C_AT90) and atmospheric correction with the Py6S algorithm, topographic correction with the SCS + C algorithm with DEM of 10 m per pixel and the BRDF correction (Py6S_AT10B). The letters at the top of the box indicate the results of Tukey’s HSD multiple comparison test (different letters indicate significant differences between the difference levels of database processing and/or correction algorithms used).</p>
Full article ">
26 pages, 843 KiB  
Review
Brown and Beige Adipose Tissue: One or Different Targets for Treatment of Obesity and Obesity-Related Metabolic Disorders?
by Yulia A. Kononova, Taisiia P. Tuchina and Alina Yu. Babenko
Int. J. Mol. Sci. 2024, 25(24), 13295; https://doi.org/10.3390/ijms252413295 - 11 Dec 2024
Viewed by 323
Abstract
The failure of the fight against obesity makes us turn to new goals in its treatment. Now, brown adipose tissue has attracted attention as a promising target for the treatment of obesity and associated metabolic disorders such as insulin resistance, dyslipidemia, and glucose [...] Read more.
The failure of the fight against obesity makes us turn to new goals in its treatment. Now, brown adipose tissue has attracted attention as a promising target for the treatment of obesity and associated metabolic disorders such as insulin resistance, dyslipidemia, and glucose tolerance disorders. Meanwhile, the expansion of our knowledge has led to awareness about two rather different subtypes: classic brown and beige (inducible brown) adipose tissue. These subtypes have different origins and differ in the expression of individual genes, but they also have a lot in common. Both tissues are thermogenic, which means that, by increasing energy expenditure, they can help offset excess energy intake. Both tissues are activated in response to specific inducers (cold, beta-adrenergic receptor activation, certain foods and drugs), but beige adipose tissue transdifferentiates back into white adipose tissue after the inducing action ceases, whereas classic brown adipose tissue persists, although its activity decreases. In this review, we attempted to understand whether there are differences in the effects of different groups of thermogenesis-affecting drugs on these tissues. The analysis showed that this area of research is rather sparse and requires close attention in further studies. Full article
Figure 1
<p>Origin of brown and beige adipocytes. β3-AR–beta3-adrenoreceptors; DIO2-type 2 deiodinase; PPAR-peroxisome proliferator-activated receptor; FGF21-fibroblast growth factor 21.</p>
Full article ">
18 pages, 6618 KiB  
Article
A Convolutional Graph Neural Network Model for Water Distribution Network Leakage Detection Based on Segment Feature Fusion Strategy
by Xuan Li and Yongqiang Wu
Water 2024, 16(24), 3555; https://doi.org/10.3390/w16243555 - 10 Dec 2024
Viewed by 495
Abstract
In this study, an innovative leak detection model based on Convolutional Graph Neural Networks (CGNNs) is proposed to enhance response speed during pipeline bursts and to improve detection accuracy. By integrating node features into pipe segment features, the model effectively combines CGNN with [...] Read more.
In this study, an innovative leak detection model based on Convolutional Graph Neural Networks (CGNNs) is proposed to enhance response speed during pipeline bursts and to improve detection accuracy. By integrating node features into pipe segment features, the model effectively combines CGNN with water distribution networks, achieving leak detection at the pipe segment level. Optimizing the receptive field and convolutional layers ensures high detection performance even with sparse monitoring device density. Applied to two representative water distribution networks in City H, China, the model was trained on synthetic leak data generated by EPANET simulations and validated using real-world leak events. The experimental results show that the model achieves 90.28% accuracy in high-density monitoring areas, and over 85% accuracy within three pipe segments of actual leaks in low-density areas (10%–20%). The impact of feature engineering on model performance is also analyzed and strategies are suggested for optimizing monitoring point placement, further improving detection efficiency. This research provides valuable technical support for the intelligent management of water distribution networks under resource-limited conditions. Full article
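The idea of "integrating node features into pipe segment features" can be illustrated in its simplest form: each segment inherits the average of its two endpoint (junction) feature vectors. This is a hypothetical one-hop sketch, not the paper's multi-layer convolutional fusion with expanding receptive fields:

```python
import numpy as np

def segment_features(node_feats, edges):
    """Fuse node (junction) features into pipe-segment features by
    averaging the two endpoint feature vectors of each segment."""
    return np.array([(node_feats[u] + node_feats[v]) / 2.0
                     for u, v in edges])

# 4 junctions with 2 features each (e.g. pressure, demand) and 3 segments
node_feats = np.array([[1.0, 0.0],
                       [3.0, 2.0],
                       [5.0, 4.0],
                       [7.0, 6.0]])
edges = [(0, 1), (1, 2), (2, 3)]      # segment i connects junctions u, v
seg = segment_features(node_feats, edges)
```

Stacking convolutional layers on top of such a fusion widens each segment's receptive field to more distant junctions, which is how the model tolerates sparse monitoring-point density.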
(This article belongs to the Section Urban Water Management)
Figure 1
<p>Flowchart of the leak localization model process.</p>
Full article ">Figure 2
<p>Eight warning events in the SCADA system of City H.</p>
Full article ">Figure 3
<p>The process of fusing node features into segment features through convolutional layers. (<b>a</b>) Pipe segment 5 in the water supply network. (<b>b</b>) Convolutional layer connectivity. (<b>c</b>) Receptive field expansion of node 5. (<b>d</b>) Receptive field expansion of node 6. (<b>e</b>) Final receptive field of pipe segment 5.</p>
Full article ">Figure 4
<p>A schematic overview of the entire process of the Convolutional Graph Neural Network (CGNN).</p>
Full article ">Figure 5
<p>Water distribution network of City H.</p>
Full article ">Figure 6
<p>On-site photos of pressure sensors in City H.</p>
Full article ">Figure 7
<p>Water supply network map of Ring A Water Plant.</p>
Full article ">Figure 8
<p>Zonal analysis of the water distribution network in City H.</p>
Full article ">
16 pages, 9484 KiB  
Article
Variability of Interpolation Errors and Mutual Enhancement of Different Interpolation Methods
by Yunxia He, Mingliang Luo, Hui Yang, Leichao Bai and Zhongsheng Chen
Appl. Sci. 2024, 14(24), 11493; https://doi.org/10.3390/app142411493 - 10 Dec 2024
Viewed by 419
Abstract
Data interpolation methods are important statistical analysis tools that can fill in data gaps and missing areas by predicting and estimating unknown data points, thereby improving the accuracy and credibility of data analysis and research. Different interpolation methods are widely used in related fields, but the error between different interpolation methods and their interpolation fusion optimization have a significant impact on the interpolation accuracy, which still deserves further exploration. This study is based on two different types of point data: PM2.5 (PM2.5 refers to particulate matter in the atmosphere with a diameter of 2.5 μm or less, also known as inhalable particles or fine particulate matter) in Xinyang City, Henan Province, and the elevation of typical gullies in Yuanmou County, Yunnan Province. Using relative difference coefficients and hotspot analysis methods, the differences in error characteristics among four interpolation methods, ordinary kriging (OK), universal kriging (UK), inverse distance weighted (IDW), and radial basis functions (RBFs), were compared, and the influence of interpolation fusion methods on the accuracy of interpolation results was explored. The results show that after interpolation of PM2.5 concentration and gully elevation, the error difference between OK and UK is the smallest in both datasets. For PM2.5 concentration data, IDW and UK interpolation errors have the largest difference; for elevation data, the differences between RBF and UK interpolation are the largest. The weighted fusion results show that the interpolation error accuracy of PM2.5 concentration data with an interpolation point density of 0.009 points per square kilometer is improved, and the root mean square error (RMSE) after fusion is reduced from 0.374 μg/m3 to 0.004 μg/m3. However, the error accuracy of the elevation data of the gully with an interpolation point density of 0.76 points/m2 did not improve significantly. This indicates that characteristics such as the density of the original data are important factors that affect the accuracy of interpolation. In the case of sparse interpolation points, it is possible to consider fusing the interpolation results with different error patterns to improve their accuracy. This study provides a new idea for improving the accuracy of interpolation errors. Full article
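Two of the compared interpolators and the weighted-fusion step can be sketched briefly: IDW and a small RBF interpolant are evaluated at query points and their outputs are blended, after which RMSE is measured. The sample points, the inverse-multiquadric kernel, and the fixed 50/50 fusion weights below are illustrative assumptions; the paper derives its fusion weights from the error patterns of each method, and none of its datasets are reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(20, 2))            # known sample locations
vals = np.sin(pts[:, 0]) + np.cos(pts[:, 1])      # known values at those locations
q = np.array([[5.0, 5.0], [2.0, 8.0]])            # query locations

def idw(q, pts, vals, power=2.0):
    """Inverse distance weighting: nearby samples dominate the estimate."""
    d = np.linalg.norm(q[:, None, :] - pts[None, :, :], axis=-1)
    w = 1.0 / np.maximum(d, 1e-12) ** power
    return (w @ vals) / w.sum(axis=1)

def rbf_im(q, pts, vals, eps=1.0):
    """Radial basis interpolation with an inverse-multiquadric kernel."""
    def K(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return 1.0 / np.sqrt(1.0 + (eps * d) ** 2)
    coef = np.linalg.solve(K(pts, pts), vals)     # fit kernel coefficients
    return K(q, pts) @ coef

# Weighted fusion of the two estimates (fixed weights here for illustration).
est = 0.5 * idw(q, pts, vals) + 0.5 * rbf_im(q, pts, vals)

true = np.sin(q[:, 0]) + np.cos(q[:, 1])
rmse = np.sqrt(np.mean((est - true) ** 2))
print(est, rmse)
```

The fusion helps when the two methods make errors with different spatial patterns, which is exactly the condition the abstract identifies for sparse interpolation points.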
(This article belongs to the Section Earth Sciences)
Figures:
Figure 1. Distribution of PM2.5 in Xinyang City.
Figure 2. Histogram of PM2.5.
Figure 3. Elevation data of Shadi Village in Yuanmou County.
Figure 4. Histogram of elevation data.
Figure 5. Similarity in interpolation error results: (a) the PM2.5 concentration and (b) the elevation data.
Figure 6. PM2.5 error hotspot distribution by the four methods.
Figure 7. Hotspot distribution of elevation errors of the four methods.
19 pages, 18714 KiB  
Article
Hardware Implementation for Triaxial Contact-Force Estimation from Stress Tactile Sensor Arrays: An Efficient Design Approach
by María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín and José A. Hidalgo-López
Sensors 2024, 24(23), 7829; https://doi.org/10.3390/s24237829 - 7 Dec 2024
Viewed by 576
Abstract
This paper presents a contribution to the state of the art in the design of tactile sensing algorithms that take advantage of the characteristics of generalized sparse matrix-vector multiplication to reduce the area, power consumption, and data storage required for real-time hardware implementation. This work also addresses the challenge of implementing the hardware to execute multiaxial contact-force estimation algorithms from a normal stress tactile sensor array on a field-programmable gate-array development platform, employing a high-level description approach. This paper describes the hardware implementation of the proposed sparse algorithm and that of an algorithm previously reported in the literature, comparing the results of both hardware implementations with the software results already validated. The calculation of force vectors on the proposed hardware required an average time of 58.68 ms, with an estimation error of 12.6% for normal forces and 7.7% for tangential forces on a 10 × 10 taxel tactile sensor array. Some advantages of the developed hardware are that it does not require additional memory elements, achieves a 4× reduction in processing elements compared to a non-sparse implementation, and meets the requirements of being generalizable, scalable, and efficient, allowing an expansion of the applications of normal stress sensors in low-power tactile systems. Full article
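The efficiency claim rests on sparse matrix-vector multiplication: once a calibration matrix is filtered down to its significant entries, each output only touches the stored non-zeros. A minimal CSR (compressed sparse row) sketch of that idea follows; the small matrix is a made-up example, not the paper's calibration matrix, and this plain-Python loop stands in for the parallel processing elements of the FPGA design.

```python
import numpy as np

def to_csr(M, tol=1e-12):
    """Convert a dense matrix to CSR arrays, dropping entries below tol."""
    data, indices, indptr = [], [], [0]
    for row in M:
        nz = np.flatnonzero(np.abs(row) > tol)
        data.extend(row[nz])
        indices.extend(nz)
        indptr.append(len(indices))
    return np.array(data), np.array(indices), np.array(indptr)

def csr_matvec(data, indices, indptr, x):
    """y = M @ x using only the stored non-zero entries of M."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):                  # one short dot product per row
        s, e = indptr[i], indptr[i + 1]
        y[i] = data[s:e] @ x[indices[s:e]]
    return y

M = np.array([[4.0, 0.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 2.0]])
x = np.array([1.0, 2.0, 3.0])
data, indices, indptr = to_csr(M)
y = csr_matvec(data, indices, indptr, x)
print(y)                                     # matches M @ x
```

Storage and multiply-accumulate count both scale with the number of retained non-zeros rather than with the full matrix size, which is the source of the reported reduction in processing elements.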
(This article belongs to the Special Issue Recent Development of Flexible Tactile Sensors and Their Applications)
Figures:
Figure 1. Triaxial forces reconstruction algorithm (TFRA).
Figure 2. Relationship between m-stress values and m-triaxial forces for an m-taxel tactile sensor array under the Boussinesq equation: (a) direct problem, (b) inverse problem, and (c) ill-posed inverse problem from normal stress data.
Figure 3. Normal stress on a tactile sensor array of m taxels: (a) sensor top view, and (b) interaction between the i-th taxel (red dot) and the j-th force vector (green arrow) on the sensor surface for m-stress values and m-force vectors.
Figure 4. Blocks for the TFRA implementation in hardware: (A) Memory stores the set of matrices in memory, (B) Bz reads the data from the b_z sensor in memory, (C) CeAn computes the contact centroids and tangential force angle, (D) We calculates the initial solution weight vector w_z, (E) Co calculates the g_pq coefficients, and (F) Op finds the optimal values of the algorithm. The symbol * represents a functional block in hardware that implements a matrix-vector multiplication.
Figure 5. Behavior of magnitudes of the TFRA matrices (size 100 × 100) for a tactile sensor of 10 × 10 taxels. The first three graphs show the components of the matrix C33^-1 for different rows or columns.
Figure 6. Hardware implementation of the SpTFRA algorithm, highlighting the blocks modified with respect to the original TFRA, which change from normal matrix operations to sparse matrix operations.
Figure 7. SpTFRA filters applied to matrix B: (a) SpTFRA-F1 selects the non-zero values closest to the diagonal; (b) SpTFRA-F2 selects the non-zero values greater than or equal to a percentage p of the maximum value of each row.
Figure 8. Application of SpTFRA-F1 to the C33^-1 matrix.
Figure 9. Application of SpTFRA-F2 to the C33^-1 matrix.
Figure 10. Estimated friction coefficient obtained by applying the SpTFRA-F1 and SpTFRA-F2 filters.
Figure 11. Comparative response of the filters applied to the SpTFRA model, evaluating e_mu(B_sp)·Nnz(B_sp) for each matrix. The orange cells with white text represent the best cases for SpTFRA-F3.
Figure 12. Hardware resource consumption of the TFRA (HW1, HW2) and SpTFRA for the two sensors analyzed.
Figure 13. Resultant forces obtained in the TFRA and SpTFRA hardware implementations.
Figure 14. Friction coefficient and tangential force orientation results for the SpTFRA hardware implementations.
Figure 15. Time response of SpTFRA-HW10 and SpTFRA-HW13.