Search Results (604)

Search Parameters:
Keywords = cloud data centers

28 pages, 397 KiB  
Review
Exploring In-Network Computing with Information-Centric Networking: Review and Research Opportunities
by Marica Amadeo and Giuseppe Ruggeri
Future Internet 2025, 17(1), 42; https://doi.org/10.3390/fi17010042 (registering DOI) - 18 Jan 2025
Viewed by 104
Abstract
The advent of 6G networks and beyond calls for innovative paradigms to address the stringent demands of emerging applications, such as extended reality and autonomous vehicles, as well as technological frameworks like digital twin networks. Traditional cloud computing and edge computing architectures fall short in providing their required flexibility, scalability, and ultra-low latency. Cloud computing centralizes resources in distant data centers, leading to high latency and increased network congestion, while edge computing, though closer to data sources, lacks the agility to dynamically adapt to fluctuating workloads, user mobility, and real-time requirements. In-network computing (INC) offers a transformative solution by integrating computational capabilities directly into the network fabric, enabling dynamic and distributed task execution. This paper explores INC through the lens of information-centric networking (ICN), a revolutionary communication paradigm implementing routing-by-name and in-network caching, and thus emerging as a natural enabler for INC. We review state-of-the-art advancements involving INC and ICN, addressing critical topics such as service naming, executor selection strategies, compute reuse, and security. Furthermore, we discuss key challenges and propose research directions for deploying INC via ICN, thereby outlining a cohesive roadmap for future investigation. Full article
(This article belongs to the Special Issue Featured Papers in the Section Internet of Things, 2nd Edition)
Figures

Figure 1. Reference scenario.
Figure 2. NDN node architecture.
21 pages, 13265 KiB  
Article
Synoptic and Mesoscale Atmospheric Patterns That Triggered the Natural Disasters in the Metropolitan Region of Belo Horizonte, Brazil, in January 2020
by Thaís Aparecida Cortez Pinto, Enrique Vieira Mattos, Michelle Simões Reboita, Diego Oliveira de Souza, Paula S. S. Oda, Fabrina Bolzan Martins, Thiago Souza Biscaro and Glauber Willian de Siqueira Ferreira
Atmosphere 2025, 16(1), 102; https://doi.org/10.3390/atmos16010102 (registering DOI) - 18 Jan 2025
Viewed by 96
Abstract
Between 23 and 25 January 2020, the Metropolitan Region of Belo Horizonte (MRBH) in Brazil experienced 32 natural disasters, which affected 90,000 people, resulted in 13 fatalities, and caused economic damages of approximately USD 250 million. This study aims to describe the synoptic and mesoscale conditions that triggered these natural disasters in the MRBH and the physical properties of the associated clouds and precipitation. To achieve this, we analyzed data from various sources, including natural disaster records from the National Center for Monitoring and Early Warning of Natural Disasters (CEMADEN), GOES-16 satellite imagery, soil moisture data from the Soil Moisture Active Passive (SMAP) satellite mission, ERA5 reanalysis, reflectivity from weather radar, and lightning data from the Lightning Location System. The South Atlantic Convergence Zone, coupled with a low-pressure system off the southeast coast of Brazil, was the predominant synoptic pattern responsible for creating favorable conditions for precipitation during the studied period. Clouds and precipitating cells, with cloud-top temperatures below −65 °C, over several days contributed to the high precipitation volumes and lightning activity. Prolonged rainfall, with a maximum of 240 mm day−1 and 48 mm h−1, combined with the region’s soil characteristics, enhanced water infiltration and was critical in triggering and intensifying natural disasters. These findings highlight the importance of monitoring atmospheric conditions in conjunction with soil moisture over an extended period to provide additional information for mitigating the impacts of natural disasters. Full article
(This article belongs to the Special Issue Prediction and Modeling of Extreme Weather Events)
17 pages, 30535 KiB  
Article
A Method to Evaluate Orientation-Dependent Errors in the Center of Contrast Targets Used with Terrestrial Laser Scanners
by Bala Muralikrishnan, Xinsu Lu, Mary Gregg, Meghan Shilling and Braden Czapla
Sensors 2025, 25(2), 505; https://doi.org/10.3390/s25020505 - 16 Jan 2025
Viewed by 276
Abstract
Terrestrial laser scanners (TLS) are portable dimensional measurement instruments used to obtain 3D point clouds of objects in a scene. While TLSs do not require the use of cooperative targets, they are sometimes placed in a scene to fuse or compare data from different instruments or data from the same instrument but from different positions. A contrast target is an example of such a target; it consists of alternating black/white squares that can be printed using a laser printer. Because contrast targets are planar as opposed to three-dimensional (like a sphere), the center of the target might suffer from errors that depend on the orientation of the target with respect to the TLS. In this paper, we discuss a low-cost method to characterize such errors and present results obtained from a short-range TLS and a long-range TLS. Our method involves comparing the center of a contrast target against the center of spheres and, therefore, does not require the use of a reference instrument or calibrated objects. For the short-range TLS, systematic errors of up to 0.5 mm were observed in the target center as a function of the angle for the two distances (5 m and 10 m) and resolutions (30 points-per-degree (ppd) and 90 ppd) considered for this TLS. For the long-range TLS, systematic errors of about 0.3 mm to 0.8 mm were observed in the target center as a function of the angle for the two distances (5 m and 10 m) at low resolution (28 ppd). Errors of under 0.3 mm were observed in the target center as a function of the angle for the two distances at high resolution (109 ppd). Full article
(This article belongs to the Special Issue Laser Scanning and Applications)
Figures

Figure 1. (a) Commercially procured contrast target with magnetic/adhesive backing, (b) contrast target printed on cardstock using a laser printer, (c) contrast target mounted on a two-axis gimbal, (d) contrast target with a partial 38.1 mm (1.5 inches) sphere on the back.
Figure 2. Artifact comprising four spheres and a contrast target to study errors as a function of orientation.
Figure 3. Different orientations of the artifact, (a–c) rotation about the vertical axis, i.e., yaw, (d–f) rotation about the horizontal axis, i.e., pitch. Photos of the artifact oriented so that (g) yaw = 0°, pitch = 0°, (h) yaw = 40°, pitch = 0°, (i) yaw = 0°, pitch = −40°. The TLS is located directly in front of the target in part (g) at a distance of either 5 m or 10 m.
Figure 4. (a) Intensity plot of the entire artifact, (b) intensity plot of the contrast target and the edge points (transition between the black (blue dots in figure) and the white (red dots) regions of a target).
Figure 5. The 68% data ellipses visualizing the pooled within-sample covariance matrices for the four distance/resolution scenarios. Text annotations correspond to the standard deviations in the X (horizontal) and Y (vertical) coordinates for the far distance (10 m), low resolution (30 ppd) scenario (bolded and italicized values in Table 1), visualized by the magnitude of the dashed lines, and the near distance (5 m), high resolution (90 ppd) scenario, indicated by solid lines (bolded values in Table 1).
Figure 6. The 95% data ellipses from low-resolution scans (30 ppd) from TLS I for (a) 5 m distance and (b) 10 m distance. The range in the average X and Y coordinates from Table 2 has been added as text annotations.
Figure 7. The 95% data ellipses from high-resolution scans (90 ppd) from TLS I for (a) 5 m distance and (b) 10 m distance. The range in the average X and Y coordinates from Table 2 has been added as text annotations.
Figure 8. The 68% data ellipses visualizing the pooled within-sample covariance matrices for the four distance/resolution scenarios from the TLS II data. Text annotations correspond to the standard deviations in the X (horizontal) and Y (vertical) coordinates for the far distance (10 m), low resolution (28 ppd) scenario, visualized by the magnitude of the dashed lines (bolded and italicized values in Table 3), and the near distance (5 m), high resolution (109 ppd) scenario, indicated by solid lines (bolded values in Table 3).
Figure 9. The 95% data ellipses from low-resolution scans (28 ppd) from TLS II for (a) 5 m distance and (b) 10 m distance. The range in the average X and Y coordinates from Table 4 has been added as text annotations.
Figure 10. The 95% data ellipses from high-resolution scans (109 ppd) from TLS II for (a) 5 m distance and (b) 10 m distance. The range in the average X and Y coordinates from Table 4 has been added as text annotations.
17 pages, 5548 KiB  
Article
Decoupling and Collaboration: An Intelligent Gateway-Based Internet of Things System Architecture for Meat Processing
by Jun Liu, Chenggang Zhou, Haoyuan Wei, Jie Pi and Daoying Wang
Agriculture 2025, 15(2), 179; https://doi.org/10.3390/agriculture15020179 - 15 Jan 2025
Viewed by 314
Abstract
The complex multi-stage process of meat processing encompasses critical phases, including slaughtering, cooling, cutting, packaging, warehousing, and logistics. The quality and nutritional value of the final meat product are significantly influenced by each processing link. To address the major challenges in the meat processing industry, including device heterogeneity, model deficiencies, rapidly increasing demands for data analysis, and limitations of cloud computing, this study proposes an Internet of Things (IoT) architecture. This architecture is centered around an intelligently decoupled gateway design and edge-cloud collaborative intelligent meat inspection. Pork freshness detection is used as an example. In this paper, a high-precision and lightweight pork freshness detection model is developed by optimizing the MobileNetV3 model with Efficient Channel Attention (ECA). The experimental results indicate that the model’s accuracy on the test set is 99.8%, with a loss function value of 0.019. Building upon these results, this paper presents an experimental platform for real-time pork freshness detection, implemented by deploying the model on an intelligent gateway. The platform demonstrates stable performance with peak model memory usage under 600 MB, average CPU utilization below 20%, and gateway internal response times not exceeding 100 ms. Full article
(This article belongs to the Section Digital Agriculture)
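The freshness classifier described in this abstract pairs MobileNetV3 with Efficient Channel Attention (ECA). As a rough illustration, not the authors' code, the sketch below shows a standard ECA block in PyTorch and one hypothetical way to attach it to a three-class MobileNetV3-Small; the kernel size and the insertion point are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v3_small

class ECA(nn.Module):
    """Efficient Channel Attention: global average pooling, a 1D convolution
    across the channel axis, and a sigmoid gate that re-weights channels."""
    def __init__(self, k_size: int = 3):  # kernel size is an assumption
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.pool(x)                                 # (B, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(-1, -2))   # convolve over channels: (B, 1, C)
        y = torch.sigmoid(y.transpose(-1, -2).unsqueeze(-1))
        return x * y                                     # channel re-weighting

# Hypothetical three-class model (fresh / half-fresh / spoiled): append ECA
# after the MobileNetV3 feature extractor; the placement is illustrative only.
backbone = mobilenet_v3_small(num_classes=3)
model = nn.Sequential(backbone.features, ECA(), backbone.avgpool,
                      nn.Flatten(1), backbone.classifier)
```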
Figures

Figure 1. Intelligent gateway scheme schematic.
Figure 2. Intelligent detection with edge-cloud collaboration.
Figure 3. IoT architecture for meat processing.
Figure 4. Intelligent gateway combinable schematic.
Figure 5. Software systems framework.
Figure 6. Original and enhanced images of pork samples. (a) Fresh. (b) Half-fresh. (c) Spoiled. (d) Translation. (e) Inversion. (f) Rotation.
Figure 7. Pork freshness testing experimental platform. (a) Test platform. (b) Algorithmic container. (c) Acquisition interface.
Figure 8. Model iteration curves. (A) Accuracy iteration curve and (B) loss value iteration curve.
13 pages, 1770 KiB  
Article
Exploring Musical Feedback for Gait Retraining: A Novel Approach to Orthopedic Rehabilitation
by Luisa Cedin, Christopher Knowlton and Markus A. Wimmer
Healthcare 2025, 13(2), 144; https://doi.org/10.3390/healthcare13020144 - 14 Jan 2025
Viewed by 382
Abstract
Background/Objectives: Gait retraining is widely used in orthopedic rehabilitation to address abnormal movement patterns. However, retaining walking modifications can be challenging without guidance from physical therapists. Real-time auditory biofeedback can help patients learn and maintain gait alterations. This study piloted the feasibility of the musification of feedback to medialize the center of pressure (COP). Methods: To provide musical feedback, COP and plantar pressure were captured in real time at 100 Hz from a wireless 16-sensor pressure insole. Twenty healthy subjects (29 ± 5 years old, 75.9 ± 10.5 Kg, 1.73 ± 0.07 m) were recruited to walk using this system and were further analyzed via marker-based motion capture. A lowpass filter muffled a pre-selected music playlist when the real-time center of pressure exceeded a predetermined lateral threshold. The only instruction participants received was to adjust their walking to avoid the muffling of the music. Results: All participants significantly medialized their COP (−9.38% ± 4.37, range −2.3% to −19%), guided solely by musical feedback. Participants were still able to reproduce this new walking pattern when the musical feedback was removed. Importantly, no significant changes in cadence or walking speed were observed. The results from a survey showed that subjects enjoyed using the system and suggested that they would adopt such a system for rehabilitation. Conclusions: This study highlights the potential of musical feedback for orthopedic rehabilitation. In the future, a portable system will allow patients to train at home, while clinicians could track their progress remotely through cloud-enabled telemetric health data monitoring. Full article
(This article belongs to the Special Issue 2nd Edition of the Expanding Scope of Music in Healthcare)
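The feedback rule described in this abstract, muffling the music with a lowpass filter whenever the real-time center of pressure drifts laterally past a threshold, can be sketched in a few lines. The sample rate, cutoff frequency, and threshold below are placeholders, not values reported in the study.

```python
import numpy as np
from scipy.signal import butter, lfilter

FS_AUDIO = 44_100        # audio sample rate in Hz (assumed)
CUTOFF_HZ = 800          # lowpass cutoff used to "muffle" the music (assumed)
COP_THRESHOLD = 0.02     # lateral COP threshold in metres (assumed)

b, a = butter(4, CUTOFF_HZ / (FS_AUDIO / 2), btype="low")

def feedback_chunk(audio: np.ndarray, cop_lateral: float) -> np.ndarray:
    """Return the next audio chunk: muffled when the centre of pressure
    exceeds the lateral threshold, passed through unchanged otherwise."""
    if cop_lateral > COP_THRESHOLD:
        return lfilter(b, a, audio)
    return audio
```

In a real-time loop the filter state would be carried across chunks (lfilter's zi/zf arguments) so that toggling the muffling does not introduce clicks.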
Figures

Figure 1. Flow chart of the study design with descriptions of data collected in each condition. COP: center of pressure.
Figure 2. Representation of insoles' geometry, sensors' locations, medial (blue-colored sensors) and lateral (red-colored sensors) boundaries, and IMU positioning. Adapted from the manufacturer's user guide (Insole3, Moticon OpenGo, Munich, Germany).
Figure 3. Musical feedback design.
Figure 4. Gait line at the baseline (measured at warmup) and training with musical feedback. Shaded regions represent ±1 SD.
Figure 5. Mean plantar pressure throughout the stance phase at (a) the baseline and (b) training with musical feedback for one participant. Figure generated via a Moticon OpenGo report.
21 pages, 4740 KiB  
Article
Multi-Level Network Topology and Time Series Multi-Scenario Optimization Planning Method for Hybrid AC/DC Distribution Systems in Data Centers
by Bing Chen, Yongjun Zhang and Handong Liang
Electronics 2025, 14(2), 264; https://doi.org/10.3390/electronics14020264 - 10 Jan 2025
Viewed by 357
Abstract
With the rapid development of the Internet, cloud computing, big data, artificial intelligence, and other information technologies, data centers have become a crucial part of modern society’s infrastructure, which puts forward very high requirements for the safety and reliability of power supply. Most of the servers, networks, and other equipment in data centers are DC-driven loads, which can significantly enhance resource utilization efficiency by efficiently accessing the DC power supply through voltage source converter-based high-voltage direct current transmission and distribution technology. For this reason, this paper first proposes a multi-level network topology design method for AC/DC distribution systems in the context of data centers. Based on the analysis of the adaptability of AC/DC distribution systems in data center access, the design and analysis of its multi-level network topology is carried out at the physical level for the construction of hybrid AC/DC distribution systems in data center. On this basis, a time series multi-scenario planning model of AC/DC distribution system with distributed generation in data center is established, the configuration strategy of AC/DC distribution system is investigated, and a time series multi-scenario optimization planning method for hybrid AC/DC distribution systems in data centers is proposed. Finally, the validity of the proposed method is verified by simulation examples. Full article
Figures

Figure 1. Diagram of a flexible DC distribution system in an A or Tier IV/III data center.
Figure 2. Diagram of a flexible DC distribution system in a B data center.
Figure 3. Diagram of a flexible DC distribution system in a B or Tier II data center.
Figure 4. Diagram of a flexible DC distribution system in a C data center.
Figure 5. Diagram of a flexible DC distribution system in a C or Tier I data center.
Figure 6. Planning flowchart.
Figure 7. Topological morphology optimization result.
Figure 8. Optimal topology evolution.
Figure 9. IEEE 33-node power distribution system.
Figure 10. Hybrid AC/DC distribution systems formed after planning.
Figure 11. Comparison of annual average voltage stability indicators for planning schemes with or without DC consideration.
21 pages, 1808 KiB  
Article
An Authentication Approach in a Distributed System Through Synergetic Computing
by Jia-Jen Wang, Yaw-Chung Chen and Meng-Chang Chen
Computers 2025, 14(1), 16; https://doi.org/10.3390/computers14010016 - 6 Jan 2025
Viewed by 334
Abstract
A synergetic computing mechanism is proposed to authenticate the validity of event data in merchandise exchange applications. The events are handled by the proposed synergetic computing system which is composed of edge devices. Asteroid_Node_on_Duty (ANOD) acts like a supernode to take the duty of coordination. The computation performed by nodes in local area can reduce round-trip data propagation delay to distant data centers. Events with different risk levels are processed in parallel through different flows by using Chief chain (CC) and Telstar chain (TC) methods. Low-risk events are computed in edge nodes to form TC, which can be periodically integrated into CC that contains data of high-risk events. New authentication methods are proposed. The difficulty of authentication tasks is adjusted for different scenarios where lower difficulty in low-risk tasks may accelerate the process of validation. Authentication by a certain number of nodes is required so that the system may ensure the consistency of data. Participants in the system may need to register as members. The transaction processing speed on low-risk events may reach 25,000 TPS based on the assumption of certain member classes given that all of ANOD, and Asteroid_Node_of_Backup (ANB), Edge Cloud, and Core Cloud function normally. Full article
Figures

Figure 1. Flow chart of item exchange.
Figure 2. System diagram.
Figure 3. Authentication process one. Parameter 1 is an input value for step C; Parameter 2 is an input value for step D; ⊕ represents the XOR operation.
Figure 4. Authentication process two. Parameter 1 is an input value for step C; Parameter 2 is an input value for step D; ⊕ represents the XOR operation.
Figure 5. TC contest on writing to CC.
23 pages, 2766 KiB  
Article
Unveiling Patterns in Forecasting Errors: A Case Study of 3PL Logistics in Pharmaceutical and Appliance Sectors
by Maciej Wolny and Mariusz Kmiecik
Sustainability 2025, 17(1), 214; https://doi.org/10.3390/su17010214 - 31 Dec 2024
Viewed by 618
Abstract
Purpose: The study aims to analyze forecast errors for various time series generated by a 3PL logistics operator across 10 distribution channels managed by the operator. Design/methodology/approach: This study examines forecasting errors across 10 distribution channels managed by a 3PL operator using Google Cloud AI forecasting. The R environment was used in the study. The research centered on analyzing forecast error series, particularly decomposition analysis of the series, to identify trends and seasonality in forecast errors. Findings: The analysis of forecast errors reveals diverse patterns and characteristics of errors across individual channels. A systematic component was observed in all analyzed household appliance channels (seasonality in all channels, and no significant trend identified only in Channel 10). In contrast, significant trends were identified in one pharmaceutical channel (Channel 02), while no systematic components were detected in the remaining channels within this group. Research limitations: Logistics operations typically depend on numerous variables, which may affect forecast accuracy. Additionally, the lack of information on the forecasting models, mechanisms (black box), and input data limits a comprehensive understanding of the sources of errors. Value of the paper: The study highlights the valuable insights that can be derived from analyzing forecast errors in the time series within the context of logistics operations. The findings underscore the need for a tailored forecasting approach for each channel, the importance of enhancing the forecasting tool, and the potential for improving forecast accuracy by focusing on trends and seasonality. The findings also emphasize that customized forecasting tools can significantly enhance operational efficiency by improving demand planning accuracy and reducing resource misallocation. This analysis makes a significant contribution to the theory and practice of demand forecasting by logistics operators in distribution networks. The research offers valuable contributions to ongoing efforts in demand forecasting by logistics operators. Full article
(This article belongs to the Special Issue Advances in Business Model Innovation and Corporate Sustainability)
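The core analysis in this study is a classical trend/seasonal decomposition of each channel's forecast-error series. The authors worked in the R environment; the snippet below is only an illustrative Python equivalent, and the weekly period is an assumed value.

```python
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

def decompose_forecast_errors(errors: pd.Series, period: int = 7):
    """Split a forecast-error series (actual minus forecast, indexed by date)
    into trend, seasonal, and residual components."""
    parts = seasonal_decompose(errors, model="additive", period=period)
    return parts.trend, parts.seasonal, parts.resid

# Hypothetical usage for one distribution channel:
# trend, seasonal, resid = decompose_forecast_errors(channel_03_errors)
# A visible trend or seasonal component signals a systematic, correctable bias.
```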
Figures

Figure 1. The modeling pipeline for the ARIMA_PLUS time series models. Source: https://cloud.google.com/bigquery/docs/reference/standard-sql/bigqueryml-syntax-create-time-series (accessed on 6 November 2024).
Figure 2. General overview of the distribution network with 3PL.
Figure 3. Hypothesis.
Figure 4. Analytical procedure for unveiling patterns in forecasting errors.
Figure 5. Time series of forecasting errors for the considered channels.
Figure 6. The decomposition of the Channel_03 errors time series.
Figure 7. The decomposition of the Channel_10 errors time series.
17 pages, 4615 KiB  
Article
Analysis of Bulk Queueing Model with Load Balancing and Vacation
by Subramani Palani Niranjan, Suthanthiraraj Devi Latha, Sorin Vlase and Maria Luminita Scutaru
Axioms 2025, 14(1), 18; https://doi.org/10.3390/axioms14010018 - 30 Dec 2024
Viewed by 345
Abstract
Data center architecture plays an important role in effective server management network systems. Load balancing is one such data architecture used to efficiently distribute network traffic to the server. In this paper, we incorporated the load-balancing technique used in cloud computing with power business intelligence (BI) and cloud load based on the queueing theoretic approach. This model examines a bulk arrival and batch service queueing system, incorporating server overloading and underloading based on the queue length. In a batch service system, customers are served in groups following a general bulk service rule with the server operating between the minimum value a and the maximum value b. But in certain situations, maintaining the same extreme values of the server is difficult, and it needs to be changed according to the service request. In this paper, server load balancing is introduced for a batch service queueing model, which is the capacity of the server that can be adjusted, either increased or decreased, based upon the service request by the customer. On service completion, if the service request is not enough to start any of the services, the server will be assigned to perform a secondary job (vacation). After vacation completion based upon the service request, the server will start regular service, overload or underload. Cloud computing using power BI can be analyzed based on server load balancing. The function that determines the probability of the queue size at any given time is derived for the specified queueing model using the supplementary variable technique with the remaining time as the supplementary variable. Additionally, various system characteristics are calculated and illustrated with suitable numerical examples. Full article
Figures

Figure 1. Schematic representation of the queueing model. Q—queue size.
Figure 2. Service rate vs. efficiency metrics of cloud resource utilization.
Figure 3. Arrival rate vs. efficiency metrics of cloud resource utilization.
Figure 4. Boundary value 'c' vs. aggregate mean cost.
Figure 5. Boundary value 'a' vs. aggregate mean cost, with ξ = 2, b = 5, N = 8, µ1 = 7, µ2 = 6, φ = 5, c = 3.
20 pages, 7144 KiB  
Article
A Study of NOAA-20 VIIRS Band M1 (0.41 µm) Striping over Clear-Sky Ocean
by Wenhui Wang, Changyong Cao, Slawomir Blonski and Xi Shao
Remote Sens. 2025, 17(1), 74; https://doi.org/10.3390/rs17010074 - 28 Dec 2024
Viewed by 368
Abstract
The Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the National Oceanic and Atmospheric Administration-20 (NOAA-20) satellite was launched on 18 November 2017. The on-orbit calibration of the NOAA-20 VIIRS visible and near-infrared (VisNIR) bands has been very stable over time. However, NOAA-20 operational M1 (a dual gain band with a center wavelength of 0.41 µm) sensor data records (SDR) have exhibited persistent scene-dependent striping over clear-sky ocean (high gain, low radiance) since the beginning of the mission, different from other VisNIR bands. This paper studies the root causes of the striping in the operational NOAA-20 M1 SDRs. Two potential factors were analyzed: (1) polarization effect-induced striping over clear-sky ocean and (2) imperfect on-orbit radiometric calibration-induced striping. NOAA-20 M1 is more sensitive to the polarized lights compared to other NOAA-20 short-wavelength bands and the similar bands on the Suomi NPP and NOAA-21 VIIRS, with detector and scan angle-dependent polarization sensitivity up to ~6.4%. The VIIRS M1 top of atmosphere radiance is dominated by Rayleigh scattering over clear-sky ocean and can be up to ~70% polarized. In this study, the impact of the polarization effect on M1 striping was investigated using radiative transfer simulation and a polarization correction method similar to that developed by the NOAA ocean color team. Our results indicate that the prelaunch-measured polarization sensitivity and the polarization correction method work well and can effectively reduce striping over clear-sky ocean scenes by up to ~2% at near nadir zones. Moreover, no significant change in NOAA-20 M1 polarization sensitivity was observed based on the data analyzed in this study. After the correction of the polarization effect, residual M1 striping over clear-sky ocean suggests that there exists half-angle mirror (HAM)-side and detector-dependent striping, which may be caused by on-orbit radiometric calibration errors. HAM-side and detector-dependent striping correction factors were analyzed using deep convective cloud (DCC) observations (low gain, high radiances) and verified over the homogeneous Libya-4 desert site (low gain, mid-level radiance); neither are significantly affected by the polarization effect. The imperfect on-orbit radiometric calibration-induced striping in the NOAA operational M1 SDR has been relatively stable over time. After the correction of the polarization effect, the DCC-based striping correction factors can further reduce striping over clear-sky ocean scenes by ~0.5%. The polarization correction method used in this study is only effective over clear-sky ocean scenes that are dominated by the Rayleigh scattering radiance. The DCC-based striping correction factors work well at all radiance levels; therefore, they can be deployed operationally to improve the quality of NOAA-20 M1 SDRs. Full article
(This article belongs to the Collection The VIIRS Collection: Calibration, Validation, and Application)
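The polarization correction referenced in this abstract follows the general approach used in ocean color processing: the measured top-of-atmosphere radiance is divided by a factor built from the instrument's Mueller terms (m12, m13) and the Rayleigh-dominated Stokes ratios from a radiative transfer model such as 6SV. The sketch below shows that commonly used form; whether it matches this paper's exact formulation, and the example numbers, are assumptions.

```python
import numpy as np

def correct_polarization(lt_measured: np.ndarray,
                         q_over_i: np.ndarray, u_over_i: np.ndarray,
                         m12: float, m13: float) -> np.ndarray:
    """Remove the instrument polarization signature from TOA radiance.

    q_over_i, u_over_i: Stokes ratios Q/I and U/I simulated for the scene
    geometry (e.g., with 6SV); m12, m13: detector-, HAM-side- and
    scan-angle-dependent polarization terms from prelaunch characterization.
    """
    return lt_measured / (1.0 + m12 * q_over_i + m13 * u_over_i)

# Hypothetical pixel: a ~3% polarization sensitivity combined with strongly
# polarized Rayleigh radiance shifts the corrected radiance by roughly 2%.
print(correct_polarization(np.array([100.0]), np.array([0.6]), np.array([0.2]), 0.03, 0.01))
```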
Figures

Figure 1. Monthly DCC reflectance (mode) time series for NOAA-20 VIIRS bands M1–M4 from May 2018 to June 2024.
Figure 2. NOAA-20 M1 (a) detector-level relative response (RSR, represented by different colors) functions and (b) operational F-factors on 31 December 2023.
Figure 3. Example of 6SV-simulated Stokes vectors (a) I, (b) Q, (c) U, and (d) DoLP for a NOAA-20 VIIRS M1 granule on 9 January 2024 20:36–20:38 UTC (Pacific Coast, latitude: 29.27°, longitude: −116.95°).
Figure 4. 6SV-simulated degree of linear polarization (DoLP, unitless) over clear-sky ocean at a surface pressure of 1013.5 hPa and a wind speed of 5 m/s: (a) DoLP as a function of view zenith angle (VZA) and relative azimuth angle (RAA) at a solar zenith angle (SZA) of 22.5°; (b) DoLP as a function of SZA and RAA at a VZA of 22.5°.
Figure 5. 6SV-simulated DoLP (black dots) for NOAA-20 M1 over clear-sky ocean as a function of scattering angle, at a surface pressure of 1013.5 hPa and a wind speed of 5 m/s. The blue vertical dashed line marks the 90° scattering angle.
Figure 6. Polar plots of NOAA-20 VIIRS M1 prelaunch polarization sensitivity and phase angle at different scan angles for (a) HAM-A and (b) HAM-B. Polarization sensitivity (unit: percent) is represented by the length of a vector on the polar plot, while polarization phase angle is represented by the direction of the vector. Scan angle is represented by different colors; detector is represented by different symbols.
Figure 7. NOAA-20 VIIRS M1 detector- and HAM-side-dependent m12 (left panel) and m13 (right panel) terms as a function of the scan angle, derived using prelaunch-characterized polarization amplitude and phase angle.
Figure 8. NOAA-20 M1 striping over a clear-sky ocean scene on 9 January 2024 20:36 UTC: (a) operational SDR image; (b) HAM-side and detector-level reflectance divergence in the operational SDR; (c) operational reflectance ratios between individual detectors and the band-averaged value; (d–f) are similar to (a–c), but after applying the polarization correction. The gray horizontal dashed lines in (c,f) mark reflectance ratio values of 0.99, 1.00, and 1.01.
Figure 9. Similar to Figure 8, but for a NOAA-20 M1 clear-sky ocean scene on 23 September 2018 06:12 UTC (Indian Ocean, west coast of Australia).
Figure 10. Comparison of NOAA-20 M1 DCC-based striping correction factors for (a) considering detector-dependent striping only and (b) considering both HAM-side- and detector-dependent striping.
Figure 11. Impacts of DCC-based striping correction factors for NOAA-20 M1 over the Libya-4 desert site (30 March 2024, 11:32 UTC): (a) operational SDR image; (b) HAM-side and detector-level reflectance divergence in the operational SDR; (c) operational reflectance ratios between individual detectors and the band-averaged value; (d–f) are similar to (a–c), but after applying the DCC-based striping correction.
Figure 12. Similar to Figure 8, but after applying both the DCC-based striping correction and the polarization correction: (a) SDR image; (b) HAM-side and detector-level reflectance divergence; (c) reflectance ratios between individual detectors and the band-averaged value.
Figure 13. Similar to Figure 9, but after applying both the DCC-based striping correction and the polarization correction.
16 pages, 15762 KiB  
Article
A LiDAR-Based Backfill Monitoring System
by Xingliang Xu, Pengli Huang, Zhengxiang He, Ziyu Zhao and Lin Bi
Appl. Sci. 2024, 14(24), 12073; https://doi.org/10.3390/app142412073 - 23 Dec 2024
Viewed by 432
Abstract
A backfill system in underground mines supports the walls and roofs of mined-out areas and improves the structural integrity of mines. However, there has been a significant gap in the visualization and monitoring of the backfill progress. To better observe the process of the paste backfill material filling the tunnels, a LiDAR-based backfill monitoring system is proposed. As long as the rising top surface of the backfill material enters the LiDAR range, the proposed system can compute the plane coefficient of this surface. The intersection boundary of the tunnel and the backfill material can be obtained by substituting the plane coefficient into the space where the initial tunnel is located. A surface point generation and slurry point determination algorithm are proposed to obtain the point cloud of the backfill body based on the intersection boundary. After Poisson surface reconstruction and volume computation, the point cloud model is reconstructed into a 3D mesh, and the backfill progress is digitized as the ratio of the backfill body volume to the initial tunnel volume. The volumes of the meshes are compared with the results computed by two other algorithms; the error is less than 1%. The time to compute a set of data increases with the amount of data, ranging from 8 to 20 s, which is sufficient to update a set of data with a tiny increase in progress. As the digitized results update, the visualization progress is transmitted to the mining control center, allowing unexpected problems inside the tunnel to be monitored and addressed based on the messages provided by the proposed system. Full article
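The backfill progress metric in this abstract reduces to a ratio of two mesh volumes, and the signed-volume step can be sketched as follows. A watertight mesh with consistently oriented faces is assumed, and the variable names are illustrative.

```python
import numpy as np

def mesh_volume(vertices: np.ndarray, faces: np.ndarray) -> float:
    """Volume of a closed triangle mesh: sum of signed volumes of the
    tetrahedra formed by each face and the origin (divergence theorem)."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    signed = np.einsum("ij,ij->i", v0, np.cross(v1, v2)) / 6.0
    return float(abs(signed.sum()))

# Backfill progress as the ratio of the reconstructed backfill-body volume
# to the initial tunnel volume (names are hypothetical):
# progress = mesh_volume(fill_verts, fill_faces) / mesh_volume(tunnel_verts, tunnel_faces)
```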
Figures

Figure 1. Coordinate of one point scanned by LiDAR.
Figure 2. Backfill monitor.
Figure 3. Backfill progress computation workflow.
Figure 4. Slurry points determination.
Figure 5. Moving sampling method.
Figure 6. Calculation of signed volume.
Figure 7. System workflow.
Figure 8. The overview map of the underground mine simulation environment: (a) lateral view; (b) top view. In the CARLA simulation environment, cuboids and retaining walls are used to restrict the LiDAR laser beams; therefore, the size can be larger than in real-world conditions.
Figure 9. Simulation of backfill progress: (a) external simulation environment before backfill; (b) internal simulation environment before backfill; (c) internal simulation environment of the earlier backfill process; (d) internal simulation environment of the later backfill process; (e) external simulation environment of the interim backfill process.
Figure 10. Diagram of point cloud data in the backfill monitor.
Figure 11. Results of steps for a set of LiDAR scans in different tunnels: (a) results of each step in a straight tunnel; (b) results of each step in a slightly curvy tunnel; (c) results of each step in a curvy tunnel.
Figure 12. The time to process each set of scans: (a) results in the LiDAR range of 100 m; (b) results in the LiDAR range of 200 m.
16 pages, 6025 KiB  
Article
Synergetic Use of Bare Soil Composite Imagery and Multitemporal Vegetation Remote Sensing for Soil Mapping (A Case Study from Samara Region’s Upland)
by Andrey V. Chinilin, Nikolay I. Lozbenev, Pavel M. Shilov, Pavel P. Fil, Ekaterina A. Levchenko and Daniil N. Kozlov
Land 2024, 13(12), 2229; https://doi.org/10.3390/land13122229 - 20 Dec 2024
Viewed by 617
Abstract
This study presents an approach for predicting soil class probabilities by integrating synthetic composite imagery of bare soil with long-term vegetation remote sensing data and soil survey data. The goal is to develop detailed soil maps for the agro-innovation center “Orlovka-AIC” (Samara Region), with a focus on lithological heterogeneity. Satellite data were sourced from a cloud-filtered collection of Landsat 4–5 and 7 images (April–May, 1988–2010) and Landsat 8–9 images (June–August, 2012–2023). Bare soil surfaces were identified using threshold values for NDVI (<0.06), NBR2 (<0.05), and BSI (>0.10). Synthetic bare soil images were generated by calculating the median reflectance values across available spectral bands. Following the adoption of no-till technology in 2012, long-term average NDVI values were additionally calculated to assess the condition of agricultural lands. Seventy-one soil sampling points within “Orlovka-AIC” were classified using both the Russian and WRB soil classification systems. Logistic regression was applied for pixel-based soil class prediction. The model achieved an overall accuracy of 0.85 and a Cohen’s Kappa coefficient of 0.67, demonstrating its reliability in distinguishing the two main soil classes: agrochernozems and agrozems. The resulting soil map provides a robust foundation for sustainable land management practices, including erosion prevention and land use optimization. Full article
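The compositing step described in this abstract amounts to per-pixel masking by three spectral indices followed by a median reduction over the image stack. The numpy sketch below uses the thresholds quoted in the abstract (NDVI &lt; 0.06, NBR2 &lt; 0.05, BSI &gt; 0.10); the array layout and function name are assumptions made for illustration.

```python
import numpy as np

def bare_soil_composite(reflectance: np.ndarray, ndvi: np.ndarray,
                        nbr2: np.ndarray, bsi: np.ndarray) -> np.ndarray:
    """Median synthetic bare-soil image over all scenes where a pixel is
    flagged as bare soil.

    reflectance: (scenes, bands, rows, cols); ndvi/nbr2/bsi: (scenes, rows, cols).
    """
    bare = (ndvi < 0.06) & (nbr2 < 0.05) & (bsi > 0.10)   # thresholds from the abstract
    masked = np.where(bare[:, None, :, :], reflectance, np.nan)
    return np.nanmedian(masked, axis=0)                   # (bands, rows, cols)
```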
Figures

Figure 1. The location of the research object: (a) the position of the Samara oblast within the Russian Federation; (b) the position of the research object within the Samara oblast, with the red marker showing the position of the regional center (Samara); (c) agricultural lands of the AIC with the display of a digital elevation model; the boundaries of key sites are highlighted in red.
Figure 2. The 1st column of images is the position of the soil survey points within key areas; the 2nd column is a synthetic composite image of the bare soil surface in natural colors (R = Band 3; G = Band 2; B = Band 1); the 3rd column is a synthetic composite image in artificial colors (R = Band 4; G = Band 3; B = Band 2); and the 4th column is the average annual vegetation index NDVI.
Figure 3. Comparison of the median NDVI values for the considered groups of soils. Colored circles correspond to the colors of the soil survey points in Figure 2.
Figure 4. Probability density functions (PDF) of the long-term averaged NDVI values (a) and averaged reflectivity spectra by soil type (b).
Figure 5. PCA results.
Figure 6. Resulting maps of the probability distributions of soil types, with OpenStreetMap as the underlayer.
Figure 7. The map of soil classes (1—areas of agrochernozems, &gt;80% probability; 2—areas of combinations (mosaics) of agrochernozems and agrozems, 40–60% probability; and 3—areas of agrozems, &gt;80% probability), with OpenStreetMap as the underlayer.
23 pages, 2180 KiB  
Article
A Multi-Objective Approach for Optimizing Virtual Machine Placement Using ILP and Tabu Search
by Mohamed Koubàa, Rym Regaieg, Abdullah S. Karar, Muhammad Nadeem and Faouzi Bahloul
Telecom 2024, 5(4), 1309-1331; https://doi.org/10.3390/telecom5040065 - 16 Dec 2024
Viewed by 673
Abstract
Efficient Virtual Machine (VM) placement is a critical challenge in optimizing resource utilization in cloud data centers. This paper explores both exact and approximate methods to address this problem. We begin by presenting an exact solution based on a Multi-Objective Integer Linear Programming (MOILP) model, which provides an optimal VM Placement (VMP) strategy. Given the NP-completeness of the MOILP model when handling large-scale problems, we then propose an approximate solution using a Tabu Search (TS) algorithm. The TS algorithm is designed as a practical alternative for addressing these complex scenarios. A key innovation of our approach is the simultaneous optimization of three performance metrics: the number of accepted VMs, resource wastage, and power consumption. To the best of our knowledge, this is the first application of a TS algorithm in the context of VMP. Furthermore, these three performance metrics are jointly optimized to ensure operational efficiency (OPEF) and minimal operational expenditure (OPEX). We rigorously evaluate the performance of the TS algorithm through extensive simulation scenarios and compare its results with those of the MOILP model, enabling us to assess the quality of the approximate solution relative to the optimal one. Additionally, we benchmark our approach against existing methods in the literature to emphasize its advantages. Our findings demonstrate that the TS algorithm strikes an effective balance between efficiency and practicality, making it a robust solution for VMP in cloud environments. The TS algorithm outperforms the other algorithms considered in the simulations, achieving a gain of 2% to 32% in OPEF, with a worst-case increase of up to 6% in OPEX. Full article
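The abstract describes a Tabu Search that explores VM-to-PM assignments while jointly scoring acceptance, wastage, and power. The paper's actual moves, tabu rules, and multi-objective weighting are not given here, so the skeleton below is only a generic single-objective illustration with a relocate-one-VM neighborhood and a fixed-length tabu list; the cost and feasibility callables are placeholders.

```python
import random
from collections import deque

def tabu_search(vms, pms, cost, feasible, iters=500, tenure=20):
    """Generic Tabu Search over VM-to-PM assignments (a sketch, not the
    paper's algorithm).

    vms, pms: lists of VM and PM identifiers.
    cost(assignment) -> float to minimise, e.g. a weighted sum of rejected
    VMs, resource wastage and power; feasible(assignment) -> bool capacity check.
    """
    current = {vm: random.choice(pms) for vm in vms}   # naive start; a greedy
    best, best_cost = dict(current), cost(current)     # first-fit would be better
    tabu = deque(maxlen=tenure)                        # recently reversed moves

    for _ in range(iters):
        candidates = []
        for vm in vms:
            for pm in pms:
                if pm == current[vm] or (vm, pm) in tabu:
                    continue
                neighbour = dict(current)
                neighbour[vm] = pm                     # relocate a single VM
                if feasible(neighbour):
                    candidates.append((cost(neighbour), vm, neighbour))
        if not candidates:
            break
        c, vm, neighbour = min(candidates, key=lambda t: t[0])
        tabu.append((vm, current[vm]))                 # forbid moving it straight back
        current = neighbour
        if c < best_cost:
            best, best_cost = dict(neighbour), c
    return best, best_cost
```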
Figures

Figure 1. Comparative analysis of VMP solutions: solution 1 vs. solution 2. (a) VMP solution 1; (b) VMP solution 2.
Figure 2. An example of a VMP of three VMs over one PM. (a) A first example of a VMP of three VMs over one PM; (b) a second example of a VMP of three VMs over one PM.
Figure 3. Graphical representation of the three-stage MOILP solution.
Figure 4. Distribution of VM sizes for various values of N.
Figure 5. Average percentage of hosted VMs.
Figure 6. Average residual resource wastage.
Figure 7. Average total power consumption.
Figure 8. Average percentage of hosted VMs of type S.
Figure 9. Average percentage of hosted VMs of type M.
Figure 10. Average percentage of hosted VMs of type L.
Figure 11. Average percentage of hosted VMs of type XL.
Figure 12. Distribution of hosted VMs by type across PMs for N = 200.
Figure 13. Average percentage of CPU usage among active PMs.
Figure 14. Average percentage of RAM usage among active PMs.
Figure 15. Average percentage of storage usage among active PMs.
Figure 16. Average CPU execution time for various values of N.
11 pages, 844 KiB  
Article
Clarifying the Actual Situation of Old-Old Adults with Unknown Health Conditions and Those Indifferent to Health Using the National Health Insurance Database (KDB) System
by Mio Kitamura, Takaharu Goto, Tetsuo Ichikawa and Yasuhiko Shirayama
Geriatrics 2024, 9(6), 156; https://doi.org/10.3390/geriatrics9060156 - 6 Dec 2024
Viewed by 766
Abstract
Background/Objectives: This study aimed to investigate the actual situation of individuals with unknown health conditions (UHCs) and those indifferent to health (IH) among old-old adults (OOAs) aged 75 years and above using the National Health Insurance Database (KDB) system. Methods: A total of 102 individuals with no history of medical examinations were selected from the KDB system in a city in Japan. Data were collected through home visit interviews and blood pressure monitors distributed by public health nurses (PHNs) from Community Comprehensive Support Centers (CCSCs). The collected data included personal attributes, health concern levels, and responses to a 15-item OOA questionnaire. Semi-structured interviews were conducted with seven PHNs. The control group consisted of 76 users of the “Kayoinoba” service (Kayoinoba users: KUs). Results: Of the 83 individuals who could be interviewed, 50 (49.0%) were classified as UHCs and 11 (10.8%) were classified as IH, including 5 from the low health concern group and 6 who refused to participate. In the word cloud generated from the PHNs’ interviews, the words and phrases “community welfare commissioner”, “community development”, “blood pressure monitor”, “troublesome”, “suspicious”, and “young” were highlighted. In the comparison of health assessments between UHCs and KUs, “body weight loss” and “cognitive function” were more prevalent among KUs, and “smoking” and “social participation” were more prevalent among UHCs. Conclusions: The home visit activities of CCSCs utilizing the KDB system may contribute to an understanding of the actual situation of UHCs, including IHs, among OOAs. UHCs (including patients with IH status) had a higher proportion of risk factors related to smoking and lower social participation than KUs. Full article
Figures

Figure 1. Flowchart and results of the home visit interview survey. a: Individuals who received medical examinations. b: Individuals whose receipt of medical examinations was unclear. c: Individuals with unknown health conditions (UHCs). d: Individuals who were classified as "unknown" due to missing data, absence from their residence, or institutionalization. e: Individuals who refused intervention and for whom intervention was deemed difficult (categorized as IH). f: Individuals with high health concern whose receipt of medical examinations was unclear. g: Individuals with low health concern whose receipt of medical examinations was unclear (categorized as IH). h: UHCs with high health concern. i: UHCs with low health concern (categorized as IH).
Figure 2. The word cloud generated from the interview survey of public health nurses.
25 pages, 2551 KiB  
Article
Optimizing Scheduled Virtual Machine Requests Placement in Cloud Environments: A Tabu Search Approach
by Mohamed Koubàa, Abdullah S. Karar and Faouzi Bahloul
Computers 2024, 13(12), 321; https://doi.org/10.3390/computers13120321 - 2 Dec 2024
Viewed by 644
Abstract
This paper introduces a novel model for virtual machine (VM) requests with predefined start and end times, referred to as scheduled virtual machine demands (SVMs). In cloud computing environments, SVMs represent anticipated resource requirements derived from historical data, usage trends, and predictive analytics, allowing cloud providers to optimize resource allocation for maximum efficiency. Unlike traditional VMs, SVMs are not active concurrently. This allows providers to reuse physical resources such as CPU, RAM, and storage for time-disjoint requests, opening new avenues for optimizing resource distribution in data centers. To leverage this opportunity, we propose an advanced VM placement algorithm designed to maximize the number of hosted SVMs in cloud data centers. We formulate the SVM placement problem (SVMPP) as a combinatorial optimization challenge and introduce a tailored Tabu Search (TS) meta-heuristic to provide an effective solution. Our algorithm demonstrates significant improvements over existing placement methods, achieving up to a 15% increase in resource efficiency compared to baseline approaches. This advancement highlights the TS algorithm’s potential to deliver substantial scalability and optimization benefits, particularly for high-demand scenarios, albeit with a necessary consideration for computational cost. Full article
(This article belongs to the Section Cloud Continuum and Enabled Applications)
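The gain exploited in this paper comes from the fact that two SVMs whose scheduled windows never overlap can reuse the same physical capacity, so a set of SVMs fits on a machine as long as the peak overlapping demand stays within its limits. The sketch below illustrates that feasibility test; the interval representation and resource fields are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class SVM:
    cpu: int
    ram: int
    start: float   # scheduled activation time
    end: float     # scheduled release time

def peak_usage(svms, resource: str) -> int:
    """Peak simultaneous demand for one resource: time-disjoint SVMs never
    run together, so only overlapping requests add up (sweep-line over events)."""
    events = []
    for s in svms:
        events += [(s.start, getattr(s, resource)), (s.end, -getattr(s, resource))]
    load = peak = 0
    for _, delta in sorted(events):   # releases sort before starts at equal times
        load += delta
        peak = max(peak, load)
    return peak

def fits_on_pm(svms, cpu_cap: int, ram_cap: int) -> bool:
    """A set of SVMs fits on one physical machine if the peak overlapping
    demand stays within capacity for every resource."""
    return peak_usage(svms, "cpu") <= cpu_cap and peak_usage(svms, "ram") <= ram_cap
```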
Figures

Figure 1. Enhancing VM placement efficiency by exploiting SVMs' time disjointness.
Figure 2. Percentage of accepted PVMs vs. the number of arriving PVMs.
Figure 3. ILP model vs. TS: relative deviation in hosted PVMs.
Figure 4. Number of hosted PVMs of type S.
Figure 5. Number of hosted PVMs of type M.
Figure 6. Number of hosted PVMs of type L.
Figure 7. Number of hosted PVMs of type XL.
Figure 8. CPU normalized residual capacity.
Figure 9. RAM normalized residual capacity.
Figure 10. Storage normalized residual capacity.
Figure 11. CPU execution time.
Figure 12. Percentage of hosted SVMs vs. the number of arriving SVMs.
Figure 13. TS gain in terms of percentage of accepted SVMs compared to ACO and PSO.
Figure 14. Number of hosted SVMs of type S.
Figure 15. Number of hosted SVMs of type M.
Figure 16. Number of hosted SVMs of type L.
Figure 17. Number of hosted SVMs of type XL.
Figure 18. CPU execution time.
Figure 19. Impact of time correlation.