Search Results (3,060)

Search Parameters:
Keywords = days open

23 pages, 3243 KiB  
Article
A Modular AI-Driven Intrusion Detection System for Network Traffic Monitoring in Industry 4.0, Using Nvidia Morpheus and Generative Adversarial Networks
by Beatrice-Nicoleta Chiriac, Florin-Daniel Anton, Anca-Daniela Ioniță and Bogdan-Valentin Vasilică
Sensors 2025, 25(1), 130; https://doi.org/10.3390/s25010130 (registering DOI) - 28 Dec 2024
Abstract
Every day, a considerable number of new cybersecurity attacks are reported, and traditional defense methods struggle to keep up with them. In the current digital era, where industrial environments handle large data volumes, new cybersecurity solutions are required, and intrusion detection systems (IDSs) based on artificial intelligence (AI) algorithms offer an answer to this critical issue. This paper presents an approach for implementing a generic model of a network-based intrusion detection system for Industry 4.0 by integrating the computational advantages of the Nvidia Morpheus open-source AI framework. The solution is built modularly with two pipelines for data analysis. The pipelines use a pre-trained XGBoost (eXtreme Gradient Boosting) model that achieved an accuracy score of up to 90%. The proposed IDS analyzes data quickly, handling more than 500,000 inputs in almost 10 s, thanks to the application of the federated learning methodology. The classification performance of the model was improved by integrating a generative adversarial network (GAN) that generates polymorphic network traffic packets. Full article
(This article belongs to the Special Issue Data Protection and Privacy in Industry 4.0 Era)
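A minimal sketch of the classification step the abstract describes (a pre-trained XGBoost model applied to network-traffic records) is given below; the file name, feature set, and hyperparameters are illustrative assumptions, not details from the paper.

import pandas as pd
import xgboost as xgb
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical flow-level dataset: one row per network flow, with a binary "label" column.
flows = pd.read_csv("flows.csv")
X = flows.drop(columns=["label"])      # e.g., packet counts, byte counts, inter-arrival statistics
y = flows["label"]                     # 0 = normal traffic, 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# Gradient-boosted tree classifier; hyperparameters are placeholders.
clf = xgb.XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1, eval_metric="logloss")
clf.fit(X_tr, y_tr)

print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))   # the paper reports up to ~90%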
Figures: Figure 1, distribution of normal vs. abnormal network traffic datagrams; Figure 2, example of a JSON object parsed into a data frame using cuDF; Figure 3, main IDS pipeline for classifying the PCAP capture and generating polymorphic attacks; Figure 4, pipeline for testing the classification stage with polymorphic inputs; Figure 5, GAN model inside the polymorphic-attack generation stage; Figure 6, workflow of the proposed IDS; Figure 7, prototype hardware architecture for monitoring with an Nvidia Morpheus IDS; Figures 8 and 9, model accuracy before and after introducing generated data.
16 pages, 5090 KiB  
Article
Accuracy of ASCAT-DIREX Soil Moisture Mapping in a Small Alpine Catchment
by Patrik Sleziak, Michal Danko, Martin Jančo, Ladislav Holko, Isabella Greimeister-Pfeil, Mariette Vreugdenhil and Juraj Parajka
Water 2025, 17(1), 49; https://doi.org/10.3390/w17010049 (registering DOI) - 28 Dec 2024
Viewed by 178
Abstract
Recent improvements in soil moisture mapping using satellites provide estimates at higher spatial and temporal resolutions. The accuracy in alpine regions is, however, still not well understood. The main objective of this study is to evaluate the accuracy of the experimental ASCAT-DIREX soil moisture product in a small alpine catchment and to identify factors that control the soil moisture agreement between the satellite estimates and in situ observations in open and forest sites. The analysis is carried out in the experimental mountain catchment of Jalovecký Creek, situated in the Western Tatra Mountains (Slovakia). The satellite soil moisture estimates are derived by merging the ASCAT and Sentinel-1 retrievals (the ASCAT-DIREX dataset), providing relative daily soil moisture estimates at 500 m spatial resolution in the period 2012–2019. The soil water estimates represent four characteristic timescales of 1, 2, 5, and 10 days, which are compared with in situ topsoil moisture observations. The results show that the correlation between satellite-derived and in situ soil moisture is larger at the open site and for larger characteristic timescales (10 days). The correlations have a strong seasonal pattern, showing low (negative) correlations in winter and spring and larger (more than 0.5) correlations in summer and autumn. The main reason for low correlations in winter and spring is insufficient masking of the snowpack. Masking the soil moisture retrievals with local snow data in the period December–March improved the soil moisture agreement in April from negative correlations to 0.68 at the open site and 0.92 at the forest site. Low soil moisture correlations in the summer months may also be due to small-scale precipitation variability and vegetation dynamics, which result in satellite soil moisture overestimation. Full article
(This article belongs to the Section Hydrology)
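A minimal sketch of the kind of comparison the abstract reports (monthly Pearson correlation between satellite SWI and in situ topsoil moisture, with snow-covered days masked using local snow data) follows; the column names and file path are assumptions for illustration only.

import pandas as pd

# Hypothetical daily series with satellite SWI, in situ topsoil moisture and snow water equivalent.
df = pd.read_csv("jalovecky_daily.csv", parse_dates=["date"])
df = df[df["swe_mm"] == 0]             # mask days with a snowpack, as the abstract suggests

df["month"] = df["date"].dt.month
monthly_r = (
    df.groupby("month")[["swi_1day", "insitu_topsoil"]]
      .corr()                          # Pearson correlation within each month
      .xs("swi_1day", level=1)["insitu_topsoil"]
)
print(monthly_r)                       # low in winter/spring, above 0.5 in summer/autumn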
Figures: Figure 1, topography, land cover, and location of the Jalovecký Creek catchment with the open (C1500) and forest (C1420) measurement sites; Figure 2, in situ soil moisture at the open and forest sites in 2012–2019 with daily discharge at the catchment outlet; Figures 3 and 4, Pearson correlations between ASCAT-DIREX SWI variants (Ts = 1, 2, 5, and 10 days) and in situ soil moisture by year and by month; Figure 5, temporal variability in the monthly correlations at both sites; Figure 6, scatterplot of monthly correlations at the open vs. forest sites by season; Figure 7, soil moisture, precipitation, snow water equivalent, and runoff dynamics in winter and spring 2014 at the open site; Figures 8 and 9, disagreement between in situ and satellite soil moisture in June 2016 (small-scale precipitation variability) and June 2019 (inter-annual vegetation variability affecting the ASCAT disaggregation).
17 pages, 1585 KiB  
Article
Integrating Genomic Selection and a Genome-Wide Association Study to Improve Days Open in Thai Dairy Holstein Cattle: A Comprehensive Genetic Analysis
by Akhmad Fathoni, Wuttigrai Boonkum, Vibuntita Chankitisakul, Sayan Buaban and Monchai Duangjinda
Animals 2025, 15(1), 43; https://doi.org/10.3390/ani15010043 (registering DOI) - 27 Dec 2024
Viewed by 145
Abstract
Days open (DO) is a critical economic and reproductive trait that is commonly employed in genetic selection. Making improvements using conventional genetic techniques is exceedingly challenging. Therefore, new techniques are required to improve the accuracy of genetic selection using genomic data. This study examined the genetic approaches of traditional AIREML and single-step genomic AIREML (ssGAIREML) to assess genetic parameters and the accuracy of estimated breeding values while also investigating SNP regions associated with DO and identifying candidate genes through a genome-wide association study (GWAS). The dataset included 59415 DO records from 36368 Thai–Holstein crossbred cows and 882 genotyped animals. The cows were classified according to their Holstein genetic proportion (breed group, BG) as follows: BG1 (>93.7% Holstein genetics), BG2 (87.5% to 93.6% Holstein genetics), and BG3 (<87.5% Holstein genetics). AIREML was utilized to estimate genetic parameters and variance components. The results of this study reveal that the average DO values for BG1, BG2, and BG3 were 97.64, 97.25, and 96.23 days, respectively. The heritability values were estimated to be 0.02 and 0.03 for the traditional AIREML and ssGAIREML approaches, respectively. Depending on the dataset, the ssGAIREML method produced more accurate estimated breeding values than the traditional AIREML method, ranging from 40.5 to 45.6%. The highest values were found in the top 20% of the dam dataset. For the GWAS, we found 12 potential candidate genes (DYRK1A, CALCR, MIR489, MIR653, SLC36A1, GNA14, GNAQ, TRNAC-GCA, XYLB, ACVR2B, SLC22A14, and EXOC2) that are believed to have a significant influence on days open. In summary, the ssGAIREML method has the potential to enhance the accuracy and heritability of reproductive values compared to those obtained using conventional AIREML. Consequently, it is a viable alternative for transitioning from conventional methodologies to the ssGAIREML method in the breeding program for dairy cattle in Thailand. Moreover, the 12 identified potential candidate genes can be utilized in future studies to select markers for days open in regard to dairy cattle. Full article
(This article belongs to the Collection Advances in Cattle Breeding, Genetics and Genomics)
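As a small worked example of the quantities reported in the abstract (heritability of roughly 0.02–0.03 and EBV accuracy of about 40–46%), the sketch below computes narrow-sense heritability from variance components and accuracy from reliability; all numbers are illustrative, not the study's estimates.

# Narrow-sense heritability from variance components (values are illustrative only).
sigma2_additive = 65.0                 # hypothetical additive genetic variance for days open
sigma2_residual = 3200.0               # hypothetical residual variance

h2 = sigma2_additive / (sigma2_additive + sigma2_residual)
print(f"heritability h2 = {h2:.3f}")   # small values (~0.02-0.03), as the abstract reports

# Accuracy of an estimated breeding value is the square root of its reliability.
reliability = 0.18                     # hypothetical reliability from the genomic model
print(f"EBV accuracy = {reliability ** 0.5:.2f}")   # ~0.42, within the 40.5-45.6% range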
Figures: Figure 1, mean days open (with standard error) by breed group and parity, with superscripts marking significant differences (p < 0.05) within each breed group; Figure 2, SNP effects from GEBVs for days open in Thai–Holstein crossbred cattle; Figure 3, Manhattan plots of the additive genetic variance explained by windows of five adjacent SNPs; Figure 4, Manhattan plot of the GWAS for days open with the suggestive genome-wide threshold of 5 × 10⁻⁸.
11 pages, 203 KiB  
Article
Absorbable Powder Haemostat Use in Minimally Invasive Thoracic Surgery
by Sara Ricciardi, Akshay Jatin Patel, Danilo Alunni Fegatelli, Sara Volpi, Federico Femia, Lea Petrella, Andrea Bille and Giuseppe Cardillo
J. Clin. Med. 2025, 14(1), 85; https://doi.org/10.3390/jcm14010085 - 27 Dec 2024
Viewed by 184
Abstract
Background: Significant intraoperative and postoperative blood loss are rare but potentially life-threatening complications after lung resection surgery, whether during open or minimally invasive procedures. Microporous Polysaccharide Haemospheres (ARISTA™AH) have demonstrated time-efficient haemostasis, lower postoperative blood volumes and a lower blood transfusion requirement, without any identified adverse events across other specialities. The primary aim of our study was to evaluate the impact of ARISTA™AH on short-term postoperative outcomes in thoracic surgery. Our secondary aim was to compare ARISTA™AH with other commonly used haemostatic agents. Methods: We retrospectively reviewed a prospectively collected database of consecutive early-stage lung cancer patients surgically treated in two European centres (October 2020–December 2022). Exclusion criteria included open surgery, patients with coagulopathy/anticoagulant medication, major intraoperative bleeding, non-anatomical lung resection and age <18 years. The cohort was divided into five groups according to the haemostatic agent that was used. Propensity score matching was used to estimate the effect of ARISTA™AH on various intra- and postoperative parameters (continuous and binary outcome modelling). Results: A total of 482 patients (M/F:223/259; VATS 97/RATS 385) with a mean age of 68.9 (±10.6) years were analysed. In 253 cases, ARISTA™AH was intraoperatively used to control bleeding. This cohort of patients had a significant reduction in total drain volume of 135 mL (standard error 53.9; p = 0.012). The use of ARISTA™AH reduced the average length of hospital stay (−1.47 days) and the duration of chest drainage (−0.596 days), albeit not significantly. In the ARISTA™AH group, we observed no postoperative bleeding, no blood transfusion requirement, no 30-day mortality and no requirement for redo surgery. The use of ARISTA™AH significantly reduced the odds of postoperative complications, as well as the need for transfusion and redo surgery. Conclusions: Our data showed that Microporous Polysaccharide Haemospheres are a safe and effective haemostatic device. Their use has a positive effect on the short-term postoperative outcomes of patients surgically treated for early-stage lung cancer. Full article
(This article belongs to the Section Pulmonology)
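The abstract relies on propensity score matching to estimate the effect of ARISTA™AH; a minimal sketch of that technique is shown below, assuming a hypothetical patient-level table whose column names are illustrative only.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Hypothetical patient-level table; covariates are assumed numeric or already encoded.
df = pd.read_csv("thoracic_cohort.csv")
covariates = ["age", "bmi", "fev1_pct", "male"]

# 1. Propensity score: probability of receiving ARISTA given baseline covariates.
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["arista"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

treated = df[df["arista"] == 1]
control = df[df["arista"] == 0]

# 2. 1:1 nearest-neighbour matching on the propensity score.
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched_controls = control.iloc[idx.ravel()]

# 3. Compare an outcome (e.g., total drain volume) between matched groups.
diff = treated["drain_volume_ml"].mean() - matched_controls["drain_volume_ml"].mean()
print(f"matched difference in drain volume: {diff:.1f} mL")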
12 pages, 1613 KiB  
Article
Evolution of Liver Resection for Hepatocellular Carcinoma: Change Point Analysis of Textbook Outcome over Twenty Years
by Yeshong Park, Ho-Seong Han, Seung Yeon Lim, Hyelim Joo, Jinju Kim, MeeYoung Kang, Boram Lee, Hae Won Lee, Yoo-Seok Yoon and Jai Young Cho
Medicina 2025, 61(1), 12; https://doi.org/10.3390/medicina61010012 - 26 Dec 2024
Viewed by 221
Abstract
Background and Objectives: The aim of this study was to comprehensively analyze the evolution in textbook outcome (TO) achievement after liver resection for hepatocellular carcinoma (HCC) over two decades at a single tertiary referral center. Materials and Methods: All consecutive liver resections for HCC at Seoul National University Bundang Hospital from 2003 to 2022 were analyzed. The included 1334 patients were divided into four groups by time intervals identified through change point analysis. TO was defined as no intraoperative transfusions, positive margins, major complications, 30-day readmission or mortality, and prolonged length of hospital stay (LOS). Results: Multiple change point analysis identified three change points (2006, 2012, 2017), and patients were divided into four groups. More recent time interval groups were associated with older age (59 vs. 59 vs. 61 vs. 63 years, p < 0.0001) and more comorbidities. Minimally invasive procedures were increasingly performed (open/laparoscopic/robotic 37.0%/63.0%/0% vs. 43.8%/56.2%/0% vs. 17.1%/82.4%/0.5% vs. 22.9%/75.9%/1.2%, p < 0.0001). TO achievement improved over time (1.9% vs. 18.5% vs. 47.7% vs. 62.5%, p < 0.0001), and LOS was the greatest limiting factor. Conclusions: TO after liver resection improved with advances in minimally invasive techniques and parenchymal sparing procedures, even in older patients with more comorbidities and advanced tumors. Full article
(This article belongs to the Special Issue Advances in Liver Surgery)
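A minimal sketch of a multiple change point analysis over annual textbook-outcome rates, in the spirit of the abstract's method, is given below using the ruptures library; the yearly rates are synthetic placeholders, not the study's data.

import numpy as np
import ruptures as rpt

years = np.arange(2003, 2023)
# Illustrative annual TO achievement rates (%) with a step-like trend; not the study's data.
to_rate = np.array([2, 2, 3, 5, 15, 17, 18, 19, 20, 21,
                    45, 46, 48, 49, 50, 60, 61, 62, 63, 64], dtype=float)

algo = rpt.Dynp(model="l2", min_size=2, jump=1).fit(to_rate)   # piecewise-constant mean segmentation
breakpoints = algo.predict(n_bkps=3)                           # end index of each segment
print([int(years[i]) for i in breakpoints[:-1]])               # first year of each new regime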
Figures: Figure 1, multiple change point analysis of textbook outcome achievement (Bayesian Information Criterion comparison for different numbers of change points, OLS-based cumulative sum test, and the three change points identified at 2006, 2012, and 2017); Figure 2, trends in the number of annual operations during each time interval; Figure 3, achievement of the overall textbook outcome and its individual components over time.
18 pages, 1652 KiB  
Article
Role of Cement Type on Properties of High Early-Strength Concrete
by Nader Ghafoori, Matthew O. Maler, Meysam Najimi, Ariful Hasnat and Aderemi Gbadamosi
J. Compos. Sci. 2025, 9(1), 3; https://doi.org/10.3390/jcs9010003 - 25 Dec 2024
Viewed by 220
Abstract
Properties of high early-strength concretes (HESCs) containing Type V, Type III, and rapid hardening calcium sulfoaluminate (CSA) cements were investigated at curing ages of opening time, 24 h, and 28 days. Investigated properties included the fresh (workability, setting time, air content, unit weight, and released heat of hydration), mechanical (compressive and flexural strengths), transport (absorption, volume of permeable voids, water penetration, rapid chloride permeability, and accelerated corrosion resistance), dimensional stability (drying shrinkage), and durability (de-icing salt and abrasion resistance) properties. Test results revealed that the HESC containing Rapid-Set cement achieved the shortest opening time to attain the required minimum strength, followed by Type III and Type V cement HESCs. For the most part, Type V cement HESC produced the best transport and de-icing salt resistance, whereas Rapid-Set cement HESC displayed the best dimensional stability and wear resistance. Full article
(This article belongs to the Section Composites Applications)
Figures: Figure 1, size distribution of the fine aggregate; Figure 2, HESC mixing procedure and sample casting; Figure 3, heat-of-hydration trends for non-air-entrained HESCs; Figure 4, total charge passed for air-entrained and non-air-entrained HESCs; Figure 5, drying shrinkage of non-air-entrained HESCs after 7.5 months; Figure 6, ultimate mass loss of air-entrained HESCs after 25 freezing and thawing cycles; Figure 7, abrasion depth of non-air-entrained HESCs subjected to 20,000 revolutions.
11 pages, 234 KiB  
Article
Comparative Evaluation of Temporomandibular Disorders and Dental Wear in Video Game Players
by Cezar Ionia, Alexandru Eugen Petre, Alexandra Velicu and Adriana Sarah Nica
J. Clin. Med. 2025, 14(1), 31; https://doi.org/10.3390/jcm14010031 - 25 Dec 2024
Viewed by 157
Abstract
Background/Objectives: The increasing prevalence of video gaming has raised concerns about its potential impact on musculoskeletal health, particularly temporomandibular disorders (TMDs). This study aims to compare TMD symptoms, mandibular function, and dental wear between gamers and non-gamers among university students. Methods: An observational study included 108 students aged 20 to 23 years, divided into gamers (n = 48) and non-gamers (n = 60). Participants completed questionnaires assessing TMD symptoms, gaming habits, and screen time. Clinical examinations measured mandibular movements, palpation-induced pain, and dental wear using the Smith and Knight Tooth Wear Index. Statistical analyses included independent t-tests, chi-square tests, Pearson’s correlations, and logistic regression. Seven comprehensive tables present the findings with p-values. Results: Gamers reported significantly higher screen time (Mean = 6.5 h/day) compared to non-gamers (Mean = 4.0 h/day; p < 0.001). Maximum unassisted mouth opening was greater in gamers (Mean = 48.31 mm) than in non-gamers (Mean = 46.33 mm; p = 0.04). Gamers exhibited a higher prevalence of pain on palpation of the masseter muscle (45.8% vs. 30.0%; p = 0.05). Dental wear scores were significantly higher in gamers for teeth 2.3 (upper left canine) and 3.3 (lower left canine) (p < 0.05). Positive correlations were found between hours spent gaming and maximum mouth opening (r = 0.25; p = 0.01) and dental wear (r = 0.30; p = 0.002). Logistic regression showed that gaming status significantly predicted the presence of TMD symptoms (Odds Ratio = 2.5; p = 0.03). Conclusions: Gamers exhibit greater mandibular opening, increased dental wear, and a higher prevalence of masticatory muscle pain compared to non-gamers. Prolonged gaming may contribute to altered mandibular function and increased risk of TMD symptoms. Further research is needed to explore underlying mechanisms and develop preventive strategies. Full article
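A minimal sketch of the logistic regression reported in the abstract (gaming status predicting TMD symptoms) follows, using synthetic data for illustration; the odds ratio it prints will not match the study's value of 2.5 except by coincidence.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
gamer = rng.integers(0, 2, size=108)           # 0 = non-gamer, 1 = gamer (n = 108, as in the study)
tmd = rng.binomial(1, 0.25 + 0.20 * gamer)     # synthetic TMD outcome, more likely for gamers

X = sm.add_constant(gamer.astype(float))
fit = sm.Logit(tmd, X).fit(disp=False)
print(f"odds ratio for gaming status: {np.exp(fit.params[1]):.2f}")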
18 pages, 10250 KiB  
Article
Effects of Floral Characters on the Pollination Biology and Breeding System of Iris setosa (Iridaceae): A Cold-Tolerant Ornamental Species from Jilin Province
by Xiyue Zhang, Ruoqi Liu, Lifei Chen, Tianhao Pei, Yu Gao, Xi Lu and Yunwei Zhou
Biology 2025, 14(1), 2; https://doi.org/10.3390/biology14010002 - 24 Dec 2024
Viewed by 231
Abstract
Floral phenology and features are intricately linked to pollinator behavior and pollination systems. Iris setosa is one of the ornamental irises of the family Iridaceae with beautiful flowers and leaves, and little research has been reported on its pollination biology. This study analyzed how phenology, floral features, breeding systems, and pollinator visits affect the reproductive success of I. setosa populations in Jilin Province. Field observations and pollination studies demonstrated that I. setosa reached the bud stage in late May, with an average flowering time of 30 days. The anthers were outwardly dehiscent toward the outer edge of the style branches. In herkogamy, the relative locations of the anthers and stigma remained unchanged during flower opening. The stamens matured first. The pollen was most viable and the stigmas were most receptive on the first day of flowering. The nectar had the maximum sugar content. The sexual reproduction system was mainly outcrossing, with some self-compatibility and a need for pollinators. After artificial self-pollination, fluorescence microscopy revealed the winding of pollen tubes. The predominant flower-visiting insects were Apis mellifera, Megachile sp., Syrphus corollae, Episyrphus balteatus, and Lasioglossum sp., among which A. mellifera, Megachile sp., and Lasioglossum sp. were effective pollinators. Understanding the pollination mechanisms and strategies of I. setosa provides basic reference data on its reproductive potential and informs conservation efforts. Full article
(This article belongs to the Special Issue Pollination Biology)
Figures: Figure 1, floral structure of I. setosa (overall corolla, top view, and labeled floral unit); Figure 2, changes in stamens and pistils during flowering, from the day before bloom to the second day of bloom; Figure 3, annual phenology of the I. setosa population from nutritive growth to fruiting; Figure 4, the flowering process from bud to ovary enlargement; Figures 5 to 7, pollen vitality, stigma receptivity, and pollen viability under different storage temperatures; Figures 8 to 10, fluorescence observations of pollen tube growth after natural flowering, artificial self-pollination, and artificial cross-pollination; Figure 11, flower-visiting insect species (Apis mellifera, Megachile sp., Syrphus corollae, Episyrphus balteatus, Lasioglossum sp., Mordellidae); Figure 12, behaviour of flower-visiting insects during pollination.
16 pages, 4572 KiB  
Article
Models of Geospatially Referenced People Distribution as a Basis for Studying the Daily Cycles of Urban Infrastructure Use by Residents
by Danila Parygin, Alexander Anokhin, Anton Anikin, Anton Finogeev and Alexander Gurtyakov
Smart Cities 2025, 8(1), 1; https://doi.org/10.3390/smartcities8010001 - 24 Dec 2024
Viewed by 330
Abstract
City services and infrastructure are oriented toward consumers and can perform their functions effectively only under normal workload conditions. In this regard, the correct organization of a public service system is directly related to knowledge of the quantitative and qualitative composition of people in the city during the day. The article discusses existing solutions for analyzing the distribution of people in a territory based on data collected by mobile operators, payment terminals, navigation systems and other network solutions, as well as the modeling methods derived from them. The scientific aim of the study is to propose a solution for modeling the daily distribution of people based on open statistics collected from the Internet and open-web mapping data. The stages of development of the modeling software environment and the methods for spatial analysis of available data on a digital cartographic basis are described. The proposed approach includes the use of archetypes of social groups, occupational statistics, the gender and age composition of a given territory, and the characteristics of urban infrastructure objects in terms of composition and purpose. The study produced solutions for modeling the 48 h distribution of city residents, tied to particular infrastructure facilities (residential, public and working), during working and weekend days with an hourly breakdown of the simulated values. A simulation of the daily distribution of people in the city was carried out using the example of the city of Volgograd, Russian Federation. The modeling yielded a picture of the daily distribution of city residents by district and by specific buildings. The proposed approach and the created algorithm can be applied to any city. Full article
(This article belongs to the Section Applied Science and Humanities for Smart Cities)
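A minimal sketch of the archetype-based assignment idea described in the abstract (residents distributed among residential, work, study, and public places by hour of day) is shown below; the archetypes, shares, and activity rules are invented for illustration.

# Archetype -> (population share, hour-of-day rule mapping hour to a place type).
archetypes = {
    "worker":  (0.45, lambda h: "work" if 9 <= h < 18 else "home"),
    "student": (0.20, lambda h: "study" if 8 <= h < 15 else "home"),
    "retiree": (0.35, lambda h: "public" if 11 <= h < 13 else "home"),
}

def simulate_hour(population: int, hour: int) -> dict:
    """Count how many residents are at each place type at a given weekday hour."""
    counts = {"home": 0, "work": 0, "study": 0, "public": 0}
    for share, activity in archetypes.values():
        counts[activity(hour)] += round(population * share)
    return counts

for hour in (3, 10, 20):                            # night, working hours, evening
    print(hour, simulate_hour(population=10_000, hour=hour))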
Figures: Figure 1, architecture of the simulation software environment; Figure 2, model data visualization interface; Figure 3, numeric values within distribution boundaries (test city boundaries and cluster markers for the distribution of people by work activity); Figure 4, distribution with color gradation of population activity types (resting, studying, working); Figure 5, number of employees in one of the commercial and business districts, detailed at the level of individual buildings and at the level of departments and stores.
28 pages, 70926 KiB  
Article
Fusion of Visible and Infrared Aerial Images from Uncalibrated Sensors Using Wavelet Decomposition and Deep Learning
by Chandrakanth Vipparla, Timothy Krock, Koundinya Nouduri, Joshua Fraser, Hadi AliAkbarpour, Vasit Sagan, Jing-Ru C. Cheng and Palaniappan Kannappan
Sensors 2024, 24(24), 8217; https://doi.org/10.3390/s24248217 - 23 Dec 2024
Viewed by 323
Abstract
Multi-modal systems extract information about the environment using specialized sensors that are optimized based on the wavelength of the phenomenology and material interactions. To maximize the entropy, complementary systems operating in regions of non-overlapping wavelengths are optimal. VIS-IR (Visible-Infrared) systems have been at the forefront of multi-modal fusion research and are used extensively to represent information in all-day all-weather applications. Prior to image fusion, the image pairs have to be properly registered and mapped to a common resolution palette. However, due to differences in the device physics of image capture, information from VIS-IR sensors cannot be directly correlated, which is a major bottleneck for this area of research. In the absence of camera metadata, image registration is performed manually, which is not practical for large datasets. Most of the work published in this area assumes calibrated sensors and the availability of camera metadata providing registered image pairs, which limits the generalization capability of these systems. In this work, we propose a novel end-to-end pipeline termed DeepFusion for image registration and fusion. Firstly, we design a recursive crop and scale wavelet spectral decomposition (WSD) algorithm for automatically extracting the patch of visible data representing the thermal information. After data extraction, both the images are registered to a common resolution palette and forwarded to the DNN for image fusion. The fusion performance of the proposed pipeline is compared and quantified with state-of-the-art classical and DNN architectures for open-source and custom datasets demonstrating the efficacy of the pipeline. Furthermore, we also propose a novel keypoint-based metric for quantifying the quality of fused output. Full article
(This article belongs to the Section Physical Sensors)
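A minimal sketch of the registration idea the abstract describes (wavelet detail bands fused into edge maps, candidate crops of the visible image scored against the infrared template by SSIM) follows; the file paths, wavelet choice, and crop loop are assumptions, not the paper's exact pipeline.

import cv2
import numpy as np
import pywt
from skimage.metrics import structural_similarity as ssim

def edge_map(gray: np.ndarray) -> np.ndarray:
    """Fuse the horizontal/vertical/diagonal detail bands of one DWT level into an edge map."""
    _, (cH, cV, cD) = pywt.dwt2(gray.astype(float), "db2")
    return np.abs(cH) + np.abs(cV) + np.abs(cD)

vis = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)    # hypothetical high-resolution visible image
ir = cv2.imread("infrared.png", cv2.IMREAD_GRAYSCALE)    # hypothetical low-resolution infrared image
ir_edges = edge_map(ir)

best_score, best_crop = -1.0, None
for margin in range(0, 200, 10):                         # iteratively strip the visible border
    crop = vis[margin:vis.shape[0] - margin, margin:vis.shape[1] - margin]
    crop_edges = edge_map(cv2.resize(crop, (ir.shape[1], ir.shape[0])))
    score = ssim(crop_edges, ir_edges, data_range=float(max(crop_edges.max(), ir_edges.max())))
    if score > best_score:                               # keep the crop at the SSIM maximum
        best_score, best_crop = score, crop

print(f"best SSIM: {best_score:.3f}")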
Figures: Figure 1, visible vs. thermal example from the Kittler dataset (a low-contrast person clearly visible only in the thermal image); Figure 2, architecture of the proposed DeepFusion pipeline (DWT-based image matching followed by VIS-IR fusion with VIRFusionNet); Figures 3 to 6, limitations of SIFT, ORB, LoFTR, and LightGlue feature matching on multi-modal image pairs from the JesseHall and Kittler datasets; Figures 7 and 8, skewed homography estimates from LightGlue/SuperPoint keypoints on the Rolla and JesseHall datasets; Figure 9, matched wavelet (Daubechies) edge maps for a VIS-IR pair; Figure 10, block diagram of the VIRFusionNet multi-channel encoder–decoder with attention blocks; Figure 11, cross-channel keypoint-based evaluation (precision, recall, F1); Figures 12 and 13, wavelet spectral decomposition matching performance tracked by SSIM index and an example from the TKHouse dataset; Figures 14 and 15, SIFT-keypoint analysis and qualitative comparison of DeepFusion against state-of-the-art fusion methods on the VIFB kettle image pair; Figures 16 and 17, color-coded keypoint visualizations for Kittler and JesseHall image pairs showing retained exclusive keypoints in the fused output.
10 pages, 762 KiB  
Article
Post-Operative Outcome Predictions in Vestibular Schwannoma Using Machine Learning Algorithms
by Abigail Dichter, Khushi Bhatt, Mohan Liu, Timothy Park, Hamid R. Djalilian and Mehdi Abouzari
J. Pers. Med. 2024, 14(12), 1170; https://doi.org/10.3390/jpm14121170 - 22 Dec 2024
Viewed by 339
Abstract
Background/Objectives: This study aimed to develop a machine learning (ML) algorithm that can predict unplanned reoperations and surgical/medical complications after vestibular schwannoma (VS) surgery. Methods: All pre- and peri-operative variables available in the American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP) database (n = 110), except those directly related to our outcome variables, were used as input variables. A deep neural network model consisting of seven layers was developed using the Keras open-source library, with a 70:30 breakdown for training and testing. The feature importance of input variables was measured to elucidate their relative permutation effect in the ML model. Results: Of the 1783 patients with VS undergoing surgery, unplanned reoperation, surgical complications, and medical complications were seen in 8.5%, 5.2%, and 6.2% of patients, respectively. The deep neural network model had area under the curve of receiver operating characteristics (ROC-AUC) of 0.6315 (reoperation), 0.7939 (medical complications), and 0.719 (surgical complications). Accuracy, specificity, and negative predictive values of the model for all outcome variables ranged from 82.1 to 96.6%, while positive predictive values and sensitivity ranged from 16.7 to 51.5%. Variables such as the length of stay post-operation until discharge, days from operation to discharge, and the total hospital length of stay had the highest permutation importance. Conclusions: We developed an effective ML algorithm predicting unplanned reoperation and surgical/medical complications post-VS surgery. This may offer physicians guidance into potential post-surgical outcomes to allow for personalized medical care plans for VS patients. Full article
(This article belongs to the Section Clinical Medicine, Cell, and Organism Physiology)
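A minimal sketch of a Keras dense network with a 70:30 train/test split and ROC-AUC evaluation, along the lines the abstract describes, is shown below; the layer widths, feature files, and training settings are illustrative assumptions.

import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from tensorflow import keras

X = np.load("nsqip_features.npy")       # hypothetical pre-processed ACS-NSQIP variables
y = np.load("reoperation_labels.npy")   # 1 = unplanned reoperation, 0 = none
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

model = keras.Sequential([
    keras.layers.Input(shape=(X.shape[1],)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),   # probability of the outcome
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X_tr, y_tr, epochs=50, batch_size=64, verbose=0)

print("ROC-AUC:", roc_auc_score(y_te, model.predict(X_te).ravel()))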
Show Figures

Figure 1

Figure 1
<p>PR curves for predictions of (<b>a</b>) surgical complications; (<b>b</b>) medical complications; and (<b>c</b>) reoperation.</p>
Figure 2
<p>ROC curve for predictions of (<b>a</b>) surgical complications; (<b>b</b>) medical complications; and (<b>c</b>) reoperation.</p>
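The two captions above correspond to precision-recall and ROC curves for each outcome. As a self-contained illustration (not the authors' plotting code), the sketch below generates placeholder labels and scores and plots both curve types with scikit-learn and Matplotlib.

```python
# Hedged sketch: PR and ROC curves from test labels and predicted probabilities.
# The labels and scores below are synthetic placeholders, not study data.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve, roc_curve

rng = np.random.default_rng(1)
y_test = rng.binomial(1, 0.085, size=535)                     # placeholder labels
probs = np.clip(0.3 * y_test + 0.7 * rng.random(535), 0, 1)   # placeholder scores

precision, recall, _ = precision_recall_curve(y_test, probs)
fpr, tpr, _ = roc_curve(y_test, probs)

fig, (ax_pr, ax_roc) = plt.subplots(1, 2, figsize=(9, 4))
ax_pr.plot(recall, precision)
ax_pr.set(xlabel="Recall", ylabel="Precision", title="PR curve")
ax_roc.plot(fpr, tpr)
ax_roc.plot([0, 1], [0, 1], linestyle="--")   # chance line
ax_roc.set(xlabel="False positive rate", ylabel="True positive rate", title="ROC curve")
plt.tight_layout()
plt.show()
```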
12 pages, 2335 KiB  
Article
The First Report on Liver Resection Using the Novel Japanese hinotori™ Surgical Robot System: First Case Series Report of 10 Cases
by Kenichi Nakamura, Tetsuya Koide, Takahiko Higashiguchi, Kazuhiro Matsuo, Tomoyoshi Endo, Kenji Kikuchi, Koji Morohara, Hidetoshi Katsuno, Ichiro Uyama, Koichi Suda and Zenichi Morise
J. Clin. Med. 2024, 13(24), 7819; https://doi.org/10.3390/jcm13247819 - 21 Dec 2024
Viewed by 341
Abstract
Background: In Japan, the hinotori™ surgical robot system (Medicaroid Corporation, Kobe, Japan) was approved for gastrointestinal surgeries in October 2022. This report details our initial experience performing liver resection using the hinotori™ system. Methods: Ten patients, who were assessed as cases that would [...] Read more.
Background: In Japan, the hinotori™ surgical robot system (Medicaroid Corporation, Kobe, Japan) was approved for gastrointestinal surgeries in October 2022. This report details our initial experience performing liver resection using the hinotori™ system. Methods: Ten patients, who were assessed as cases that would benefit from the robot-assisted procedure, underwent liver resections using the hinotori™ system at Fujita Health University, Okazaki Medical Center, between August 2023 and October 2024. The backgrounds (patient, tumor, and liver function conditions, along with types of liver resections and previous surgical procedures) and short-term outcomes (operation time, blood loss, postoperative complications, open conversion, length of hospital stay, and mortality) of the cases were evaluated. Results: Eight cases of partial liver resection, one extended left medial sectionectomy, and one left hemi-hepatectomy were performed. Six cases of hepatocellular carcinoma, three cases of liver metastases, and one case of hepatolithiasis were included. There were seven male and three female patients, with a median age of 70 years. Three physical status class III and seven class II patients were included. The median body mass index was 24. Five patients had previous upper abdominal surgical histories and five patients had liver cirrhosis. The median operation time was 419.5 min, and the median intraoperative blood loss was 276 mL. Open conversion was required in one hepatocellular carcinoma case due to bleeding from collateral vessels in the round ligament. The median length of hospital stay was 7.5 days. A grade IIIa complication (delayed bile leakage) developed in one case. All patients with tumors underwent R0 resection. There were no cases of mortality. Conclusions: Liver resection using the hinotori™ system proved feasible. This study reports the first global use of the hinotori™ system for liver resection. Full article
(This article belongs to the Section Gastroenterology & Hepatopancreatobiliary Medicine)
Figure 1
<p>The hinotori™ Surgical Robot System. (<b>a</b>) The operation unit with four robotic arms; the Hinotori™ Surgical Robot System has four arms, similar to those of the da Vinci™ Surgical System. However, the manipulating arms do not require docking with the ports. (<b>b</b>) Robotic arms with eight axes of motion. These arms have one more axis than the arms of the da Vinci™ Surgical System, allowing flexibility of arm movement and minimizing the risk of interference between the arms. (<b>c</b>) Monitor cart and (<b>d</b>) surgeon cockpit. The surgeon’s cockpit features a flexible 3D viewer that helps reduce neck and shoulder fatigue. The operating procedure is similar to that of the da Vinci™ Surgical System.</p>
Figure 2
<p>The docking-free design of hinotori. (<b>a</b>) Pivoting using a pivoter. The pivot point (the center of the movement on the abdominal wall) of the instruments is controlled via software. Therefore, no docking of the port and arm is required. This has the potential to reduce damage to the abdominal wall caused by port traction. (<b>b</b>) A large working space around the port. There is no docking of the port, and the arm provides more space around the port, making external manipulation easier.</p>
Figure 3
<p>Operating theater configuration. Two assistant surgeons stand beside the patient to perform the extracorporeal intermittent Pringle maneuver, change the robot’s instruments, etc. In addition, the hinotori™ Surgical Robot System does not have a vessel sealing system or a suction–irrigation device; thus, the assistant surgeons use laparoscopic devices to assist with the surgery.</p>
10 pages, 641 KiB  
Article
Robotic Pancreaticoduodenectomy for Pancreatic Head Tumour: A Single-Centre Analysis
by Vera Hartman, Bart Bracke, Thiery Chapelle, Bart Hendrikx, Ellen Liekens and Geert Roeyen
Cancers 2024, 16(24), 4243; https://doi.org/10.3390/cancers16244243 - 20 Dec 2024
Viewed by 411
Abstract
Background: The robotic approach is an appealing way to perform minimally invasive pancreaticoduodenectomy. We compare robotic cases’ short-term and oncological outcomes to a historical cohort of open cases. Methods: Data were collected in a prospective database between 2016 and 2024; complications [...] Read more.
Background: The robotic approach is an appealing way to perform minimally invasive pancreaticoduodenectomy. We compare the short-term and oncological outcomes of robotic cases to a historical cohort of open cases. Methods: Data were collected in a prospective database between 2016 and 2024; complications were graded using the ISGPS definitions for the specific pancreas-related complications and the Clavien–Dindo classification for overall complications. Furthermore, the Comprehensive Complication Index was calculated. All patients undergoing pancreaticoduodenectomy were included, except those with acute or chronic pancreatitis, vascular tumour involvement or multi-visceral resections. Only the subset of patients with malignancy was considered for the oncologic outcome. Results: In total, 100 robotic and 102 open pancreaticoduodenectomy cases are included. Equal proportions of patients have a main pancreatic duct ≤3 mm (p = 1.00) and soft consistency of the pancreatic remnant (p = 0.78). Surgical time is longer for robotic pancreaticoduodenectomy (p < 0.01), and more patients have delayed gastric emptying (44% vs. 28.4%, p = 0.03). In the robotic group, the number of patients without any postoperative complications is higher (p = 0.02), and there is less chyle leak (p < 0.01). Ninety-day mortality, postoperative pancreatic fistula, and postpancreatectomy haemorrhage are similar. The lymph node retrieval and R0 resection rates are comparable. Conclusions: After robotic pancreaticoduodenectomy, bearing in mind that all cases from the learning curve are included, less chyle leak is observed, the proportion of patients without any complication is significantly larger, the surgical duration is longer, and more patients have delayed gastric emptying. Oncological results, i.e., lymph node yield and R0 resection rate, are comparable to open pancreaticoduodenectomy. Full article
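The abstract reports between-group p-values such as p = 0.03 for delayed gastric emptying (44% of 100 robotic vs. 28.4% of 102 open cases) without naming the statistical test. One conventional choice for such 2 x 2 comparisons is a chi-squared test; a sketch using the counts implied by those percentages is shown below, purely as an illustration.

```python
# Hedged sketch: chi-squared test on the 2x2 table of delayed gastric emptying,
# using counts implied by the reported rates (44/100 robotic, ~29/102 open).
# The test actually used in the paper is not stated in the abstract.
from scipy.stats import chi2_contingency

table = [[44, 56],   # robotic: with / without delayed gastric emptying
         [29, 73]]   # open:    with / without (28.4% of 102 is about 29)
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```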
Figure 1
<p>Evolution of surgical duration of robotic procedures.</p>
Figure 2
<p>Percentage of delayed gastric emptying in robotic procedures, shown per quartile.</p>
25 pages, 6793 KiB  
Article
Specific Design of a Self-Compacting Concrete with Raw-Crushed Wind-Turbine Blade
by Manuel Hernando-Revenga, Víctor Revilla-Cuesta, Nerea Hurtado-Alonso, Javier Manso-Morato and Vanesa Ortega-López
J. Compos. Sci. 2024, 8(12), 540; https://doi.org/10.3390/jcs8120540 - 19 Dec 2024
Viewed by 423
Abstract
Wind-turbine blades pose significant disposal challenges in the wind-energy sector due to the increasing demand for wind farms. Therefore, this study researched the revaluation of Raw-Crushed Wind-Turbine Blade (RCWTB), obtained through a non-selective blade crushing process, as a partial substitute for aggregates in [...] Read more.
Wind-turbine blades pose significant disposal challenges in the wind-energy sector due to the increasing demand for wind farms. Therefore, this study researched the revaluation of Raw-Crushed Wind-Turbine Blade (RCWTB), obtained through a non-selective blade crushing process, as a partial substitute for aggregates in Self-Compacting Concrete (SCC). The aim was to determine the most suitable water/cement (w/c) ratio and amount of superplasticizing admixtures required to achieve adequate flowability and 7-day compressive strength in SCC for increasing proportions of RCWTB, through the production of more than 40 SCC mixes. The results showed that increasing RCWTB additions decreased the slump flow of SCC by 6.58% per 1% RCWTB on average and also reduced the compressive strength, although a minimum value of 25 MPa was always reached. Following a multi-criteria decision-making analysis, a w/c ratio of 0.45 and a superplasticizer content of 2.8% of the cement mass were optimum to produce SCC with up to 2% RCWTB. A w/c ratio of 0.50 and superplasticizer contents of 4.0% and 4.6% were optimum to produce SCC with 3% and 4% RCWTB, respectively. Concrete mixes containing 5% RCWTB did not achieve self-compacting properties under any design condition. All modifications of the SCC mix design showed statistically significant effects according to an analysis of variance at a confidence level of 95%. Overall, this study confirms that the incorporation of RCWTB into SCC through a careful mix design is feasible in terms of flowability and compressive strength, opening a new research avenue for the recycling of wind-turbine blades as an SCC component. Full article
(This article belongs to the Special Issue Novel Cement and Concrete Materials)
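The abstract selects optimum w/c ratios and admixture contents via a multi-criteria decision-making (MCDM) analysis, but the specific method is not named in this listing. As a loose illustration only, the sketch below ranks hypothetical mix designs with a simple weighted-sum score over normalized slump flow and 7-day strength; the data values, weights, and scoring scheme are assumptions, not the study's analysis.

```python
# Hedged sketch: weighted-sum ranking of candidate SCC mix designs.
# Mix data, criteria weights, and normalization are illustrative assumptions;
# the paper's actual MCDM method is not specified here.
import numpy as np

# columns: slump flow [mm], 7-day compressive strength [MPa] (made-up values)
mixes = {
    "w/c 0.45, 2.8% ad.": [700.0, 38.0],
    "w/c 0.50, 4.0% ad.": [680.0, 33.0],
    "w/c 0.55, 2.8% ad.": [650.0, 27.0],
}
weights = np.array([0.5, 0.5])   # equal weight on flowability and strength (assumption)

data = np.array(list(mixes.values()))
normalized = data / data.max(axis=0)   # larger-is-better criteria, max normalization
scores = normalized @ weights

for name, score in sorted(zip(mixes, scores), key=lambda pair: -pair[1]):
    print(f"{name}: score = {score:.3f}")
```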
Graphical abstract
Figure 1
<p>Granulometry of the aggregates.</p>
Figure 2
<p>RCWTB: (<b>a</b>) composition; (<b>b</b>) appearance.</p>
Figure 3
<p>Overall gradation of the aggregates.</p>
Figure 4
<p>Workability of the mixes of the Group 1: (<b>a</b>) slump flow; (<b>b</b>) slump of the <span class="html-italic">W3</span> mixes (w/c: water/cement ratio; ad.: amount of superplasticizing admixtures).</p>
Figure 5
<p>Compressive strength of the mixes of the Group 1 (w/c: water/cement ratio; ad.: amount of superplasticizing admixtures).</p>
Figure 6
<p>Workability of the mixes of the Group 2: (<b>a</b>) slump flow (0.55 w/c); (<b>b</b>) slump (0.45–0.50 w/c) (w/c: water/cement ratio; ad.: amount of superplasticizing admixtures).</p>
Figure 7
<p>Water segregation in SCC with 3% RCWTB, 0.55 w/c ratio and 2.8% admixture: (<b>a</b>) slump-flow test; (<b>b</b>) detail of water segregation.</p>
Figure 8
<p>Compressive strength of the mixes of the Group 2 (w/c: water/cement ratio; ad.: amount of superplasticizing admixtures).</p>
Figure 9
<p>Workability of the mixes of the Group 3: (<b>a</b>) slump flow (0.45 w/c for 6.4% ad.; 0.50 w/c from 4.0% ad.); (<b>b</b>) slump (0.45 w/c up to 5.8% ad.; 0.50 w/c for 3.4% ad.) (w/c: water/cement ratio; ad.: amount of superplasticizing admixtures).</p>
Figure 10
<p>Compressive strength of the mixes of the Group 3 (w/c: water/cement ratio; ad.: amount of superplasticizing admixtures).</p>
Figure 11
<p>Workability of the mixes of the Group 4: (<b>a</b>) slump flow (<span class="html-italic">W4</span> mixes from 4.6% ad.); (<b>b</b>) slump (<span class="html-italic">W4</span> mix with 4.0% ad.; <span class="html-italic">W5</span> mixes) (w/c: water/cement ratio; ad.: amount of superplasticizing admixtures).</p>
Figure 12
<p>Compressive strength of the mixes of the Group 4 (w/c: water/cement ratio; ad.: amount of superplasticizing admixtures).</p>
Figure 13
<p>MCDM analysis of the SCC mixes: (<b>a</b>) optimum w/c ratios; (<b>b</b>) optimum amounts of superplasticizing admixtures.</p>
9 pages, 1115 KiB  
Article
The Presence/Absence of an Awake-State Dominant EEG Rhythm in Delirious Patients Is Related to Different Symptoms of Delirium Evaluated by the Intensive Care Delirium Screening Checklist (ICDSC)
by Toshikazu Shinba, Yusuke Fujita, Yusuke Ogawa, Yujiro Shinba and Shuntaro Shinba
Sensors 2024, 24(24), 8097; https://doi.org/10.3390/s24248097 - 19 Dec 2024
Viewed by 309
Abstract
(1) Background: Delirium is a serious condition in patients undergoing treatment for somatic diseases, leading to poor prognosis. However, the pathophysiology of delirium is not fully understood and should be clarified for its adequate treatment. This study analyzed the relationship between confusion symptoms [...] Read more.
(1) Background: Delirium is a serious condition in patients undergoing treatment for somatic diseases, leading to poor prognosis. However, the pathophysiology of delirium is not fully understood and should be clarified for its adequate treatment. This study analyzed the relationship between confusion symptoms in delirium and resting-state electroencephalogram (EEG) power spectrum (PS) profiles to investigate the heterogeneity of delirium. (2) Methods: The participants were 28 inpatients in a general hospital showing confusion symptoms with an Intensive Care Delirium Screening Checklist (ICDSC) score of 4 or above. EEG was measured at Pz in the daytime awake state for 100 s with the eyes open and 100 s with the eyes closed on the day of the ICDSC evaluation. PS analysis was conducted on consecutive 10 s segments. (3) Results: Two resting EEG PS patterns were observed: the presence or absence of a dominant rhythm, with the PS showing alpha or theta peaks in the former and no clear peak in the latter. The patients showing a dominant EEG rhythm were frequently accompanied by hallucination or delusion (p = 0.039); conversely, those lacking a dominant rhythm tended to exhibit fluctuations in the delirium symptoms (p = 0.020). The other ICDSC scores did not differ between the participants with these two EEG patterns. (4) Discussion: The present study indicates that the presence and absence of a dominant EEG rhythm in delirious patients are related to different symptoms of delirium. Using EEG monitoring in the care of delirium will help characterize its heterogeneous pathophysiology, which requires multiple management strategies. Full article
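Since this listing only summarizes the method (power spectra of consecutive 10 s Pz segments, classified by the presence of a dominant alpha/theta peak), the sketch below shows one way such spectra could be computed with SciPy's Welch estimator. The sampling rate, synthetic signal, and peak criterion are assumptions, not the study's analysis parameters.

```python
# Hedged sketch: Welch power spectra of consecutive 10 s EEG segments and a
# crude check for a dominant peak in the theta/alpha band (4-13 Hz).
# Sampling rate, synthetic data, and the prominence threshold are assumptions.
import numpy as np
from scipy.signal import welch, find_peaks

fs = 250                                  # assumed sampling rate [Hz]
rng = np.random.default_rng(0)
eeg = rng.standard_normal(100 * fs)       # placeholder for a 100 s Pz recording

segment_len = 10 * fs
for i in range(0, len(eeg), segment_len):
    segment = eeg[i:i + segment_len]
    freqs, psd = welch(segment, fs=fs, nperseg=2 * fs)   # ~0.5 Hz resolution
    band = (freqs >= 4) & (freqs <= 13)
    peaks, props = find_peaks(psd[band], prominence=np.median(psd[band]))
    if len(peaks):
        f_peak = freqs[band][peaks[np.argmax(props["prominences"])]]
        print(f"segment {i // segment_len}: dominant rhythm near {f_peak:.1f} Hz")
    else:
        print(f"segment {i // segment_len}: no dominant rhythm")
```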
Figure 1
<p>Different types of EEG power spectra from sequential 10 s data at the parietal head position (Pz) in the eyes-open (Open) and eyes-closed (Closed) conditions in two delirious patients. The data shown in red were inserted at 50 s intervals. The arrow indicates the power spectrum peak at 8 Hz in a participant showing a dominant EEG rhythm (Dominant Rhythm (+)). No peak is present in the power spectrum of a participant without a dominant EEG rhythm (Dominant Rhythm (−)).</p>
Figure 2
<p>Total ICDSC scores of participants with (+) and without (−) dominant rhythm. Each filled circle indicates the individual data. The horizontal bar shows the average.</p>
Figure 3
<p>The numbers of participants scoring 1 or 0 for each ICDSC index: altered level of consciousness (Consciousness), inattention (Inattention), disorientation (Disorientation), hallucination or delusion (Delusion), psychomotor agitation or retardation (Psychomotor), inappropriate mood or speech (Inappropriate), sleep/wake cycle disturbance (Sleep/Awake), and symptom fluctuation (Fluctuation). The ICDSC score distribution (1 or 0) is shown as the number of participants with (black column) and without (white column) a dominant EEG rhythm. The differences were assessed using Fisher’s exact test. A significant difference (<span class="html-italic">p</span> &lt; 0.05) is indicated by an asterisk next to the index.</p>
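The caption states that the group differences in Figure 3 were assessed with Fisher's exact test on the 1/0 score counts. A minimal SciPy sketch is given below; the counts are hypothetical and chosen only to total the study's 28 participants, not taken from the paper.

```python
# Hedged sketch: Fisher's exact test on how many participants scored 1 vs. 0 for
# one ICDSC index in the dominant-rhythm (+) and (-) groups.
# The counts are hypothetical, not the study's data.
from scipy.stats import fisher_exact

#                 scored 1, scored 0
table = [[10, 4],   # dominant rhythm (+), hypothetical counts
         [5, 9]]    # dominant rhythm (-), hypothetical counts
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```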