
Search Results (6,275)

Search Parameters:
Keywords = computer architecture

35 pages, 1085 KiB  
Article
Multi-Channel Speech Enhancement Using Labelled Random Finite Sets and a Neural Beamformer in Cocktail Party Scenario
by Jayanta Datta, Ali Dehghan Firoozabadi, David Zabala-Blanco and Francisco R. Castillo-Soria
Appl. Sci. 2025, 15(6), 2944; https://doi.org/10.3390/app15062944 (registering DOI) - 8 Mar 2025
Abstract
In this research, a multi-channel target speech enhancement scheme is proposed that is based on a deep learning (DL) architecture and assisted by multi-source tracking using a labeled random finite set (RFS) framework. A neural network based on the minimum variance distortionless response (MVDR) beamformer is the beamformer of choice: a residual dense convolutional graph-U-Net is applied in a generative adversarial network (GAN) setting to model the beamformer for target speech enhancement under reverberant conditions involving multiple moving speech sources. The input dataset for this neural architecture is constructed by applying multi-source tracking with multi-sensor generalized labeled multi-Bernoulli (MS-GLMB) filtering, which belongs to the labeled RFS framework, to estimate the sources' positions and the associated labels at each time frame with high accuracy despite undesirable factors like reverberation and background noise. The tracked positions and labels discriminate the target source from the interferers across all time frames and are used to generate time–frequency (T-F) masks corresponding to the target source from the output of a time-varying MVDR beamformer. These T-F masks constitute the target label set used to train the proposed deep neural architecture. The exploitation of MS-GLMB filtering and a time-varying MVDR beamformer provides spatial information about the sources, in addition to spectral information, within the neural speech enhancement framework during training. Moreover, the GAN framework exploits adversarial optimization as an alternative to maximum likelihood (ML)-based frameworks, which further boosts target speech enhancement under reverberant conditions. Computer simulations demonstrate that the proposed approach outperforms existing state-of-the-art DL-based methods that do not incorporate the labeled RFS-based approach: it achieves 75% ESTOI and a PESQ of 2.70, compared with 46.74% ESTOI and a PESQ of 1.84 for Mask-MVDR with a self-attention mechanism at a reverberation time (RT60) of 550 ms.
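As background for the beamforming step described above, the classical MVDR weight vector for a steering vector d and noise covariance R is w = R⁻¹d / (dᴴR⁻¹d). A minimal NumPy sketch, not the authors' code: the array geometry and covariance estimation below are illustrative assumptions.

```python
import numpy as np

def mvdr_weights(R: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Classical MVDR beamformer: w = R^{-1} d / (d^H R^{-1} d)."""
    Rinv_d = np.linalg.solve(R, d)          # avoids forming R^{-1} explicitly
    return Rinv_d / (d.conj().T @ Rinv_d)

# Illustrative example: 4-mic array, narrowband snapshot covariance.
rng = np.random.default_rng(0)
mics, snapshots = 4, 200
noise = rng.standard_normal((mics, snapshots)) + 1j * rng.standard_normal((mics, snapshots))
R = noise @ noise.conj().T / snapshots      # sample noise covariance
d = np.exp(-1j * np.pi * np.arange(mics) * np.sin(np.deg2rad(30)))  # steering vector, 30° target
w = mvdr_weights(R, d)
print(abs(w.conj().T @ d))                  # distortionless constraint: ~1 toward the target
```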
28 pages, 2674 KiB  
Article
The Euler-Type Universal Numerical Integrator (E-TUNI) with Backward Integration
by Paulo M. Tasinaffo, Gildárcio S. Gonçalves, Johnny C. Marques, Luiz A. V. Dias and Adilson M. da Cunha
Algorithms 2025, 18(3), 153; https://doi.org/10.3390/a18030153 (registering DOI) - 8 Mar 2025
Viewed by 51
Abstract
The Euler-Type Universal Numerical Integrator (E-TUNI) is a discrete numerical structure that couples a first-order Euler-type numerical integrator with a feed-forward neural network architecture. E-TUNI can therefore model non-linear dynamic systems when the real-world plant's analytical model is unknown. From the discrete solution provided by E-TUNI, the integration process can run either forward or backward. In this article, we use E-TUNI in a backward integration framework to model autonomous non-linear dynamic systems. Three case studies, including the dynamics of the non-linear inverted pendulum, were developed for the computational and numerical validation of the proposed model.
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 3rd Edition)
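A minimal sketch of the idea, not the authors' implementation: a learned network f_θ supplies the derivative for a first-order Euler step, and the same step runs backward with a negative step size. The stand-in dynamics and step size below are illustrative assumptions.

```python
import numpy as np

def f_theta(x: np.ndarray) -> np.ndarray:
    # Stand-in for the trained feed-forward network approximating the
    # (unknown) derivative of the plant; here: a damped pendulum.
    theta, omega = x
    return np.array([omega, -0.2 * omega - 9.81 * np.sin(theta)])

def euler_step(x: np.ndarray, h: float) -> np.ndarray:
    # First-order Euler step; pass h < 0 to integrate backward in time.
    return x + h * f_theta(x)

x = np.array([0.5, 0.0])
forward = euler_step(x, 0.01)
recovered = euler_step(forward, -0.01)   # backward step approximately undoes the forward one
print(forward, recovered)
```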
20 pages, 6467 KiB  
Article
A Lightweight TA-YOLOv8 Method for the Spot Weld Surface Anomaly Detection of Body in White
by Weijie Liu, Miao Jia, Shuo Zhang, Siyu Zhu, Jin Qi and Jie Hu
Appl. Sci. 2025, 15(6), 2931; https://doi.org/10.3390/app15062931 (registering DOI) - 8 Mar 2025
Viewed by 12
Abstract
The deep learning architecture YOLO (You Only Look Once) has demonstrated superior visual detection performance across computer vision tasks and is widely applied to automatic surface defect detection. In this paper, we propose a lightweight YOLOv8-based method for the quality inspection of car body welding spots. We developed a TA-YOLOv8 network structure with an improved Task-Aligned (TA) detection head, designed to handle the small sample sizes, imbalanced positive and negative samples, and high noise that characterize Body-in-White welding spot data. By learning with fewer parameters, the model achieves more efficient and accurate classification. Additionally, our algorithm framework can perform anomaly segmentation and classification on our open-world raw datasets obtained from actual production environments. The experimental results show that the lightweight module improves processing speed by an average of 2.8%, with increases in mAP@50-95 and recall of 1.35% and 0.1226, respectively.
(This article belongs to the Special Issue Motion Control for Robots and Automation)
Figures:
Figure 1. Architecture of the YOLOv8 model. The different color parts of the input batches represent different image data; the different color parts of the architecture represent different function modules.
Figure 2. Improved backbone network of our architecture. On the left is a schematic diagram of the backbone network process for detecting spot weld images; the right side shows the corresponding structure layer parameters and related information. The color coding matches Figure 1.
Figure 3. Proposed Multiple Cross-Layer FPN (MC-FPN) network. The different color dotted lines represent multiple cross layers, with P2–P5 being simplified representations of the intermediate connection layers.
Figure 4. Task-Aligned head structure: learns extensive task-interactive features from multiple convolutional layers.
Figure 5. Welding spot sample images and annotated data in Body-in-White production lines. (a) Samples collected in the production lines; (b) the pretraining dataset and labels (yellow squares).
Figure 6. Performance comparison with typical object detection algorithms on the test set.
Figure 7. Results of WSDDM and comparison between small welding spot detection models. Green marks detected weld spots that are normal; red marks detected weld spots with defects or abnormalities.
Figure 8. The weld spot dataset obtained from image segmentation using the WSDDM.
Figure 9. Data augmentation and labeling.
Figure 10. Visualization and validation sample results for model testing. The model captures the location of welding defects through highlighted (green) regions.
Figure 11. Validation sample results for model generalization ability.
Figure 12. Experimental pipeline and integrated detection systems.
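For context on the Task-Aligned head named in the abstract above: TOOD-style task alignment scores each candidate by combining its classification score s and its IoU u with the ground truth as t = s^α · u^β, so only candidates that are both confident and well localized rank highly. A minimal sketch; the α and β values are illustrative defaults, not taken from this paper.

```python
import numpy as np

def task_alignment(cls_score: np.ndarray, iou: np.ndarray,
                   alpha: float = 1.0, beta: float = 6.0) -> np.ndarray:
    # t = s^alpha * u^beta: high only when classification AND localization agree
    return (cls_score ** alpha) * (iou ** beta)

scores = np.array([0.9, 0.6, 0.9])
ious   = np.array([0.3, 0.8, 0.8])
print(task_alignment(scores, ious))  # the confident, well-localized candidate wins
```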
27 pages, 899 KiB  
Article
Comparative Analysis of AlexNet, ResNet-50, and VGG-19 Performance for Automated Feature Recognition in Pedestrian Crash Diagrams
by Baraah Qawasmeh, Jun-Seok Oh and Valerian Kwigizile
Appl. Sci. 2025, 15(6), 2928; https://doi.org/10.3390/app15062928 (registering DOI) - 8 Mar 2025
Viewed by 5
Abstract
Pedestrians are the most vulnerable road users in traffic crashes, prompting transportation researchers and urban planners to prioritize pedestrian safety given the elevated risk and growing incidence of injuries and fatalities. Thorough pedestrian crash data are indispensable for safety research, as the most detailed descriptions of crash scenes and pedestrian actions are typically found in crash narratives and diagrams. However, extracting and analyzing this information from police crash reports poses significant challenges. This study tackles these issues by introducing image-processing techniques to analyze crash diagrams, aiming to uncover and extract hidden features from pedestrian crash data in Michigan and thereby enhance the understanding and prevention of such incidents. The study evaluates the effectiveness of three Convolutional Neural Network (CNN) architectures—VGG-19, AlexNet, and ResNet-50—in classifying multiple hidden features in pedestrian crash diagrams: intersection type (three-leg or four-leg), road type (divided or undivided), presence of a marked crosswalk (yes or no), intersection angle (skewed or unskewed), presence of a Michigan left turn (yes or no), and presence of nearby residences (yes or no). The research uses the 2020–2023 Michigan UD-10 pedestrian crash reports, comprising 5437 pedestrian crash diagrams for large urbanized areas and 609 for rural areas. The CNNs were evaluated with metrics including accuracy and F1-score to assess their capacity to reliably classify multiple pedestrian crash features. The results reveal that AlexNet consistently surpasses the other models, attaining the highest accuracy and F1-score, which underscores the importance of choosing an appropriate architecture for crash diagram analysis and of minimizing classification errors in transportation safety studies. Computational efficiency was also considered, and AlexNet emerged as the most efficient model, which is valuable where computing resources are limited. This study contributes novel insights to pedestrian safety research by leveraging image-processing technology and highlights CNNs' potential for detecting concealed pedestrian crash patterns. The results lay the groundwork for future research and promise to support safety initiatives and countermeasure development for researchers, planners, engineers, and agencies.
(This article belongs to the Special Issue Traffic Safety Measures and Assessment)
Figures:
Figure 1. Methodological framework for pedestrian crash diagram classification using CNNs.
Figure 2. (a) Mean training loss of all CNN models for all feature classifications. (b) Mean validation loss of all CNN models for all feature classifications.
Figure 3. Computational time of all CNN models for all feature classifications over 50 epochs.
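A minimal sketch of the kind of comparison described above: fine-tuning several torchvision backbones on one binary diagram-classification task. The class count and the parameter-count comparison are illustrative, not the authors' exact setup.

```python
import torch.nn as nn
from torchvision import models

def build(name: str, num_classes: int = 2) -> nn.Module:
    # Swap the classifier head of each ImageNet-pretrained backbone.
    if name == "alexnet":
        m = models.alexnet(weights="DEFAULT")
        m.classifier[6] = nn.Linear(4096, num_classes)
    elif name == "vgg19":
        m = models.vgg19(weights="DEFAULT")
        m.classifier[6] = nn.Linear(4096, num_classes)
    else:  # resnet50
        m = models.resnet50(weights="DEFAULT")
        m.fc = nn.Linear(m.fc.in_features, num_classes)
    return m

for name in ["alexnet", "vgg19", "resnet50"]:
    model = build(name)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.1f} M parameters")  # one axis of the efficiency comparison
```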
18 pages, 2974 KiB  
Article
Evolving Towards Artificial-Intelligence-Driven Sixth-Generation Mobile Networks: An End-to-End Framework, Key Technologies, and Opportunities
by Zexu Li, Jingyi Wang, Song Zhao, Qingtian Wang and Yue Wang
Appl. Sci. 2025, 15(6), 2920; https://doi.org/10.3390/app15062920 - 7 Mar 2025
Viewed by 192
Abstract
The incorporation of artificial intelligence (AI) into sixth-generation (6G) mobile networks is expected to revolutionize communication systems, transforming them into intelligent platforms that provide seamless connectivity and intelligent services. This paper explores the evolution of 6G architectures and the enabling technologies required to integrate AI across the cloud, core network (CN), radio access network (RAN), and terminals. It begins by examining why AI must be embedded into 6G networks as a native capability. The analysis then outlines potential evolutionary paths for the RAN architecture and proposes an end-to-end AI-driven framework. Key technologies such as cross-domain AI collaboration, native computing, and native security mechanisms are discussed. The study identifies potential use cases, including embodied intelligence, wearable devices, and generative AI, which offer valuable insights into fostering collaboration within the AI-driven ecosystem and highlight new revenue-model opportunities and challenges. The paper concludes with a forward-looking perspective on the convergence of AI and 6G technology.
(This article belongs to the Special Issue 5G/6G Mechanisms, Services, and Applications)
Figures:
Figure 1. Typical 6G use cases defined by the ITU-R.
Figure 2. Possible computing resource exposure options based on GPC and dedicated hardware.
Figure 3. AI-driven E2E next-generation framework.
Figure 4. AI collaboration across different domains.
Figure 5. RAN AI for wearable devices.
Figure 6. Cloud–edge–device collaboration for embodied intelligence.
Figure 7. Comparison: (a) robotic guide dog without AI-native RAN; (b) robotic guide dog with AI-native RAN.
26 pages, 34185 KiB  
Article
Design and Implementation of ESP32-Based Edge Computing for Object Detection
by Yeong-Hwa Chang, Feng-Chou Wu and Hung-Wei Lin
Sensors 2025, 25(6), 1656; https://doi.org/10.3390/s25061656 - 7 Mar 2025
Viewed by 151
Abstract
This paper explores the application of the ESP32 microcontroller in edge computing, focusing on the design and implementation of an edge server system to evaluate the performance improvements achieved by integrating edge and cloud computing. Responding to the growing need to reduce cloud load and latency, this research develops an edge server, detailing the ESP32 hardware architecture, software environment, communication protocols, and server framework. A complementary cloud server software framework is also designed to support edge processing. A deep learning model for object recognition is selected, trained, and deployed on the edge server. System performance is assessed using classification time, MQTT (Message Queuing Telemetry Transport) transmission time, and data from various MQTT brokers, with particular attention to the impact of image size adjustments. Experimental results demonstrate that the edge server significantly reduces bandwidth usage and latency, effectively alleviating the load on the cloud server. This study discusses the system's strengths and limitations, interprets the experimental findings, and suggests potential improvements and future applications. By integrating AI and IoT, the edge server design and object recognition system demonstrate the benefits of localized edge processing in enhancing efficiency and reducing cloud dependency.
Figures:
Figure 1. Installation process of ESP32-CAM in the Arduino IDE.
Figure 2. ESP32-CAM module: 1.8-inch LCD (left), onboard camera (right).
Figure 3. MQTT communication in the edge–cloud environment.
Figure 4. Overall software framework.
Figure 5. Software framework for the ESP32-CAM edge device.
Figure 6. Cloud server software framework.
Figure 7. Entire process from data collection to model deployment.
Figure 8. Experimental setup for image capture and recognition.
Figure 9. Samples of testing images.
Figure 10. Complete object detection process in the edge–cloud system.
Figures 11–13. Response time of the Mosquitto broker (Options 1–3).
Figures 14–16. Response time of the MQTTGO broker (Options 1–3).
Figures 17–19. Response time of the Eclipse broker (Options 1–3).
Figure 20. Samples of validation images: person (a,b,d,f,g,k,l,m,n); non-person (c,e,h,i,j,o,p,q,r,s,t).
Figure 21. Snapshots of object recognition under the domestic broker and Option 3: (a) a person is detected; (b) object recognition by the edge; (c) object recognition by the cloud.
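A minimal sketch of the MQTT leg of such an edge–cloud pipeline, assuming the paho-mqtt 1.x client API; the broker host, topic name, and JSON payload schema are illustrative assumptions, not the paper's configuration.

```python
import json
import time
import paho.mqtt.client as mqtt

BROKER, TOPIC = "test.mosquitto.org", "edge/detections"  # illustrative broker/topic

client = mqtt.Client()
client.connect(BROKER, 1883, keepalive=60)
client.loop_start()

# An edge device would publish one message per classified frame.
detection = {"label": "person", "confidence": 0.94, "ts": time.time()}
info = client.publish(TOPIC, json.dumps(detection), qos=1)
info.wait_for_publish()   # block until the broker acknowledges (QoS 1)

client.loop_stop()
client.disconnect()
```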
18 pages, 949 KiB  
Article
Accelerating Pattern Recognition with a High-Precision Hardware Divider Using Binary Logarithms and Regional Error Corrections
by Dat Ngo, Suhun Ahn, Jeonghyeon Son and Bongsoon Kang
Electronics 2025, 14(6), 1066; https://doi.org/10.3390/electronics14061066 - 7 Mar 2025
Viewed by 61
Abstract
Pattern recognition applications involve extensive arithmetic operations, including additions, multiplications, and divisions. When implemented on resource-constrained edge devices, these operations demand dedicated hardware, with division being the most complex. Conventional hardware dividers, however, incur substantial overhead in resource consumption and latency. To address these limitations, we employ binary logarithms with regional error correction to approximate division. By leveraging the approximation errors at region boundaries to formulate logarithm and antilogarithm offsets, our approach reduces hardware complexity while minimizing the inherent errors of binary-logarithm-based division. Additionally, we propose a six-stage pipelined hardware architecture, synthesized and validated on a Zynq UltraScale+ FPGA platform. The implementation results demonstrate that the proposed divider outperforms conventional division methods in resource utilization and power savings. Furthermore, its application to image dehazing and object detection highlights its potential for real-time, high-performance computing systems.
(This article belongs to the Special Issue Biometrics and Pattern Recognition)
Figures:
Figure 1. Block diagram of binary logarithm-based division. The red-dashed blocks require approximation techniques that introduce errors into the quotient.
Figure 2. Errors introduced by Mitchell's algorithm. (a) Error resulting from the approximation log2(1 + x) ≈ x. (b) Distribution of division errors when applying Mitchell's algorithm.
Figure 3. Comparison of methods improving upon Mitchell's algorithm. (a) Approximation lines used in each method, with the region 0.8 ≤ x ≤ 0.9 enlarged for better visualization. (b) Corresponding approximation errors.
Figure 4. Approximation lines corresponding to different offset definitions: (a) Δ_right, (b) Δ_center, (c) Δ_avg. The fraction is divided into four regions, with an enlarged view of the third region for clarity.
Figure 5. Approximation error analysis of the proposed method. (a) Comparison of errors among different methods. (b) Approximation errors of the proposed method for varying values of M.
Figure 6. Hardware architecture of the proposed divider. REG, MSB, and LSB denote register, most significant bit, and least significant bit, respectively. The "…" symbol indicates that the data path for the divisor is identical to that of the dividend.
Figure 7. YOLOv9 object detection results on aerial images under varying haze levels using IFDH. Yellow labels represent airplanes; blue labels represent birds.
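For intuition about the baseline this paper improves on: Mitchell's approximation writes N = 2^k (1 + f) and takes log2(N) ≈ k + f, so a quotient becomes an exponent subtraction followed by an approximate antilogarithm. A minimal sketch of plain Mitchell division, without the paper's regional error corrections:

```python
import math

def mitchell_log2(n: int) -> float:
    # N = 2**k * (1 + f)  ->  log2(N) ~= k + f  (Mitchell's approximation)
    k = n.bit_length() - 1
    f = n / (1 << k) - 1.0
    return k + f

def mitchell_div(a: int, b: int) -> float:
    d = mitchell_log2(a) - mitchell_log2(b)
    k = math.floor(d)
    f = d - k
    return (1.0 + f) * (2.0 ** k)     # antilog: 2**(k+f) ~= 2**k * (1 + f)

print(mitchell_div(100, 7), 100 / 7)  # approximate vs. exact quotient
```

The residual error of this scheme is exactly what the logarithm and antilogarithm offsets in the paper are designed to cancel, region by region.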
22 pages, 5968 KiB  
Article
The Optimization of PID Controller and Color Filter Parameters with a Genetic Algorithm for Pineapple Tracking Using an ROS2 and MicroROS-Based Robotic Head
by Carolina Maldonado-Mendez, Sergio Fabian Ruiz-Paz, Isaac Machorro-Cano, Antonio Marin-Hernandez and Sergio Hernandez-Mendez
Computation 2025, 13(3), 69; https://doi.org/10.3390/computation13030069 - 7 Mar 2025
Viewed by 42
Abstract
This work proposes a vision system mounted on the head of an omnidirectional robot to track pineapples and keep them at the center of its field of view. The robot head is equipped with a pan–tilt unit that facilitates dynamic adjustments. The system architecture, implemented in Robot Operating System 2 (ROS2), captures images from a webcam embedded in the robot head, segments the object of interest by color, and computes its centroid. If the centroid deviates from the center of the image plane, a proportional–integral–derivative (PID) controller adjusts the pan–tilt unit to reposition the object at the center, enabling continuous tracking. A multivariate Gaussian function is employed to segment objects with complex color patterns, such as the body of a pineapple. The parameters of both the PID controller and the multivariate Gaussian filter are optimized using a genetic algorithm. The PID controller receives as input the (x, y) positions of the pan–tilt unit, obtained via an embedded board and MicroROS, and generates control signals for the servomotors that drive the pan–tilt mechanism. The experimental results demonstrate that the robot successfully tracks a moving pineapple, and the color segmentation filter can be further optimized to detect other textured fruits, such as soursop and melon. This research contributes to the advancement of smart agriculture, particularly for fruit crops with rough textures and complex color patterns.
Figures:
Figure 1. Omnidirectional robot used in the experiments. The robot's head, at the top, carries a webcam mounted on the pan–tilt unit.
Figure 2. The two images on the left show the robot head mounted on a PTU with a webcam housed inside; the angular positions of the PTU are read with an Arduino Due board. The image on the right illustrates the PTU, which consists of two motors: one for movement along the x-axis and another for movement along the y-axis.
Figure 3. The architecture implemented in ROS2 comprises hardware and software components. (1) Hardware: input and output devices such as a servo motor, a webcam, and an Arduino Due board running microROS to control the servomotors that move the pan–tilt unit (PTU). (2) Software: modules for object segmentation, centroid calculation, and tracking with two PID controllers. The servomotors are powered by lithium batteries; ROS2 manages communication between the software and hardware layers.
Figure 4. Communication graph of nodes and topics in ROS2.
Figure 5. A total of 35 photos of pineapples were used to train the MGF and optimize its parameters with the GA.
Figure 6. Top: three images in which the pineapple body is manually colored black. Bottom: images generated in MATLAB, where the black pixels are changed to white and the remaining pixels are colored black.
Figure 7. Pineapples segmented using the best individual from the GA.
Figure 8. Image processing steps applied to calculate the centroid of the pineapple body: (a) original image, (b) image segmented with the GA-optimized MGF filter, (c) result of the smoothing filter, (d) image after contour detection, and (e) blue point indicating the centroid of the pineapple contour.
Figure 9. Top: pineapples captured in an outdoor environment. Bottom: segmented pineapples highlighted with a green outline.
Figure 10. Images of pineapples with green hues captured outdoors [35]. The green outline indicates the segmentation of a pineapple, which is an incorrect detection.
Figure 11. Images of pineapples with shades similar to those used during training, taken from [35]. The green circle highlights the segmented pineapple; the blue dot is the centroid of the segmented area. In (b,c,e,g,h) the segmentation is accurate; in (a) only the pineapple body with yellow hues was segmented; in (d,f) segmentation is partial due to occlusions.
Figure 12. The visual information is processed to segment the object of interest and compute its centroid. Two PID controllers drive the pan–tilt unit to track the object with minimal error and smooth movement, keeping the object's centroid at the center of the image plane.
Figure 13. Green square placed at the bottom of the board. In (a,c) the centroid of the green object does not coincide with the center of the image plane; in (b,d) the control action has been applied and the centroid sits near the center. The purple circle indicates the segmented object.
Figure 14. Change in the average pan and tilt error across the 20 repetitions.
Figure 15. Tracking the green object with the robot head. The purple circle indicates the segmented object.
Figure 16. Change in the average pan and tilt error across the 20 repetitions while following the moving green object, with the PID optimized by the GA.
Figure 17. Change in the average pan and tilt error across the 20 repetitions while following the moving green object.
Figure 18. Pineapple placed on a table. Left column: the centroid of the pineapple is not at the center of the robot head's image plane. Right column: after the control action, the centroid is close to the center. The purple circle indicates the segmented object.
Figure 19. Change in the average pan and tilt error across the 20 repetitions, with the PID tuned manually.
Figure 20. Tracking the pineapple with the robot head. The purple circle indicates the segmented object.
Figure 21. Change in the average pan and tilt error across the 20 repetitions while tracking the moving pineapple.
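A minimal sketch of the per-axis PID loop described above, where the error is the pixel offset between the object centroid and the image center. The gains, sample time, and output units are illustrative assumptions; the paper tunes the gains with a genetic algorithm.

```python
class PID:
    """Discrete PID controller; one instance per pan/tilt axis."""
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error: float) -> float:
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Illustrative gains and a 30 ms frame period.
pan = PID(kp=0.004, ki=0.0005, kd=0.001, dt=0.03)
centroid_x, image_center_x = 410.0, 320.0
correction = pan.update(image_center_x - centroid_x)  # signal sent to the pan servo
print(correction)
```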
9 pages, 520 KiB  
Article
Research on Approximate Computation of Signal Processing Algorithms for AIoT Processors Based on Deep Learning
by Yingzhe Liu, Fangfa Fu and Xuejian Sun
Electronics 2025, 14(6), 1064; https://doi.org/10.3390/electronics14061064 - 7 Mar 2025
Viewed by 41
Abstract
In the post-Moore era, the sheer volume of information poses great challenges to the performance of computing systems. To cope with these challenges, approximate computation has developed rapidly, enhancing system performance with minor degradation in accuracy. In this paper, we investigate the use of an Artificial Intelligence of Things (AIoT) processor for approximate computing. First, we employed neural architecture search (NAS) to obtain a neural network structure for approximate computation, approximating the functions of the FFT, DCT, FIR, and IIR. Subsequently, based on this structure, we quantized and trained a neural network implemented on the AI accelerator of the MAX78000 development board. To evaluate performance, we implemented the same functions using the CMSIS-DSP library. The results demonstrate that the computational efficiency of approximate computation on the AI accelerator is significantly higher than that of traditional DSP implementations. Therefore, approximate computation based on AIoT devices can be effectively utilized in real-time applications.
(This article belongs to the Special Issue The Progress in Application-Specific Integrated Circuit Design)
Figures:
Figure 1. Scatter plot of evaluation results: (a) FFT, (b) FIR, (c) DCT, (d) IIR.
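A toy sketch of the core idea: training a small network to approximate a DSP transform, here an unnormalized 8-point DCT-II. The NAS, quantization, and MAX78000 deployment steps are omitted, and the network shape and training schedule are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

N = 8
n, k = np.arange(N), np.arange(N).reshape(-1, 1)
dct_matrix = np.cos(np.pi * (2 * n + 1) * k / (2 * N)).astype(np.float32)  # DCT-II basis

x = np.random.randn(4096, N).astype(np.float32)
y = x @ dct_matrix.T                              # exact transform as the training target

net = nn.Sequential(nn.Linear(N, 32), nn.ReLU(), nn.Linear(32, N))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
xt, yt = torch.from_numpy(x), torch.from_numpy(y)
for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(xt), yt)
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.4e}")            # small residual = acceptable approximation error
```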
22 pages, 8390 KiB  
Article
Dual-Attention-Enhanced MobileViT Network: A Lightweight Model for Rice Disease Identification in Field-Captured Images
by Meng Zhang, Zichao Lin, Shuqi Tang, Chenjie Lin, Liping Zhang, Wei Dong and Nan Zhong
Agriculture 2025, 15(6), 571; https://doi.org/10.3390/agriculture15060571 - 7 Mar 2025
Viewed by 153
Abstract
Accurate identification of rice diseases is crucial for improving rice yield and ensuring food security. In this study, we constructed an image dataset containing six classes of rice diseases captured under real field conditions to address challenges such as complex backgrounds, varying lighting, and symptom similarities. Based on the MobileViT-XXS architecture, we propose an enhanced model named MobileViT-DAP, which integrates Channel Attention (CA), Efficient Channel Attention (ECA), and PoolFormer blocks to achieve precise classification of rice diseases. The experimental results demonstrate that the improved model achieves superior performance with 0.75 M parameters and 0.23 G FLOPs, ensuring computational efficiency while maintaining high classification accuracy. On the testing set, the model achieves an accuracy of 99.61%, a precision of 99.64%, a recall of 99.59%, and a specificity of 99.92%. Compared to traditional lightweight models, MobileViT-DAP shows significant improvements in model complexity, computational efficiency, and classification performance, effectively balancing lightweight design with high accuracy. Furthermore, visualization analysis confirms that the model's decision-making process relies primarily on lesion-related features, enhancing its interpretability and reliability. This study provides a novel perspective for optimizing plant disease recognition tasks and contributes to improving plant protection strategies, offering a solution for accurate and efficient disease monitoring in agricultural applications.
(This article belongs to the Section Digital Agriculture)
Figures:
Figure 1. Examples of rice diseases in the dataset: (a) rice brown spot, (b) rice bacterial leaf blight, (c) rice sheath blight, (d) rice bacterial leaf streak, (e) rice false smut, and (f) rice blast.
Figure 2. Structure of MobileViT. The MV2 structure with a downward arrow indicates a stride of 2, representing downsampling (adapted from [24]). C, H, W, d, N, and P denote data sizes; X and Y are the input and output, respectively.
Figure 3. Diagram of Unfold and Fold operations.
Figure 4. Structure of the ECA module. The value of k determines the size of the convolutional kernel; C is the number of input feature channels (adapted from [27]).
Figure 5. Structure of the CA module (adapted from [12]).
Figure 6. Structure of MobileViT-DAP: (a) MbECA module structure, (b) PoolFormer block structure, and (c) workflow of MobileViT-DAP.
Figure 7. Confusion matrices for rice disease image classification using the dual-attention-enhanced MobileViT models: (a) MobileViT+CA+ECA and (b) MobileViT-DAP, with the addition of the PoolFormer blocks.
Figure 8. Accuracy and loss curves of different networks: (a) training accuracy, (b) testing accuracy, (c) training loss, and (d) testing loss.
Figure 9. Visualization of Grad-CAM results. Each row is a rice disease class; each column corresponds to a model with a different attention mechanism. Red areas indicate the regions the model primarily focuses on.
Figure 10. Visualization of SHAP values for rice disease images using MobileViT-DAP. Each row is a rice disease class; each column displays the SHAP values when classifying the sample into the specified category. Blue represents a negative impact on the prediction and red a positive impact.
Figure 11. t-SNE visualization of test set features using MobileViT-DAP; tighter clusters indicate better classification performance.
Figure 12. Symptoms of late-stage rice bacterial leaf streak and early-stage rice bacterial leaf blight: (a) rice bacterial leaf streak (10021) and (b) rice bacterial leaf blight (10017).
Figure 13. Performance comparison of different models for rice disease image classification.
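For reference, the ECA block integrated above is small enough to sketch completely: a global average pool, a 1D convolution across the channel dimension, and a sigmoid gate. A minimal PyTorch sketch of the standard ECA design; the kernel size k = 3 is an illustrative default rather than this paper's setting.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    def __init__(self, channels: int, k_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (B, C, H, W) -> (B, C, 1, 1) channel descriptor
        y = self.pool(x)
        # 1D conv across channels captures local cross-channel interaction cheaply
        y = self.conv(y.squeeze(-1).transpose(-1, -2)).transpose(-1, -2).unsqueeze(-1)
        return x * self.sigmoid(y)   # reweight each channel of the input

print(ECA(64)(torch.randn(2, 64, 32, 32)).shape)  # torch.Size([2, 64, 32, 32])
```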
20 pages, 16857 KiB  
Article
D-YOLO: A Lightweight Model for Strawberry Health Detection
by Enhui Wu, Ruijun Ma, Daming Dong and Xiande Zhao
Agriculture 2025, 15(6), 570; https://doi.org/10.3390/agriculture15060570 - 7 Mar 2025
Viewed by 181
Abstract
In complex agricultural settings, accurately and rapidly identifying the growth and health conditions of strawberries remains a formidable challenge. This study therefore develops a deep framework, Disease-YOLO (D-YOLO), based on the YOLOv8s model to monitor the health status of strawberries. Key innovations include (1) replacing the original backbone with MobileNetv3 to optimize computational efficiency; (2) implementing a Bidirectional Feature Pyramid Network for enhanced multi-scale feature fusion; (3) integrating Contextual Transformer attention modules in the neck network to improve lesion localization; and (4) adopting a weighted intersection-over-union loss to address class imbalance. Evaluated on our custom strawberry disease dataset containing 1301 annotated images across three fruit development stages and five plant health states, D-YOLO achieved 89.6% mAP on the training set and 90.5% mAP on the test set while reducing parameters by 72.0% and floating-point operations by 75.1% compared to the baseline YOLOv8s. Its balanced performance and computational efficiency surpass conventional models, including Faster R-CNN, RetinaNet, YOLOv5s, YOLOv6s, and YOLOv8s, in comparative trials. Cross-domain validation on a maize disease dataset demonstrated D-YOLO's superior generalization with 94.5% mAP, outperforming YOLOv8 by 0.6%. This lightweight solution enables precise, real-time crop health monitoring, and the proposed architectural improvements provide a practical paradigm for intelligent disease detection in precision agriculture.
(This article belongs to the Section Digital Agriculture)
Figures:
Figure 1. Overview of the processes for the D-YOLO model.
Figure 2. Images of different categories.
Figure 3. Network structure of D-YOLO. (a) D-YOLO network structure: conv is convolution; concat_BiFPN is a feature fusion method in which channels are added by weight; benck is the MobileNet module; CoT is the convolutional attention module. (b) Improvement modules for each part of D-YOLO: the MobileNet block, the BiFPN multi-scale feature network, and the CoT attention mechanism.
Figure 4. Confusion matrices of different attention mechanisms: (a) EMA, (b) CBAM, (c) CA, (d) CoT. A—flower, B—health, C—ripe, D—fruit, E—fertilizer, F—powdery, G—acalcerosis, H—greyleaf, and I—background.
Figure 5. Results of different models.
Figure 6. Feature map of strawberry health condition detection.
Figure A1. Results of different models in complex environments.
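The IoU building block behind the weighted IoU loss mentioned above is easy to sketch; the weighting applied on top is defined in the paper, so the weight in this sketch is only an illustrative placeholder.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A weighted IoU loss scales (1 - IoU) per sample, e.g. to emphasize rare classes.
pred, gt = (10, 10, 50, 50), (20, 20, 60, 60)
w = 2.0                                # illustrative class-imbalance weight
print(w * (1.0 - iou(pred, gt)))
```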
41 pages, 603 KiB  
Review
Edge and Cloud Computing in Smart Cities
by Maria Trigka and Elias Dritsas
Future Internet 2025, 17(3), 118; https://doi.org/10.3390/fi17030118 - 6 Mar 2025
Viewed by 159
Abstract
The evolution of smart cities is intrinsically linked to advancements in computing paradigms that support real-time data processing, intelligent decision-making, and efficient resource utilization. Edge and cloud computing have emerged as fundamental pillars that enable scalable, distributed, and latency-aware services in urban environments: cloud computing provides extensive computational capabilities and centralized data storage, whereas edge computing ensures localized processing that mitigates network congestion and latency. This survey presents an in-depth analysis of the integration of edge and cloud computing in smart cities, highlighting architectural frameworks, enabling technologies, application domains, and key research challenges. It examines resource allocation strategies, real-time analytics, and security considerations, emphasizing the synergies and trade-offs between the two paradigms, and notes future directions that address critical challenges, paving the way for sustainable and intelligent urban development.
(This article belongs to the Special Issue IoT, Edge, and Cloud Computing in Smart Cities)
Figures:
Figure 1. An overview of surveyed key topics: edge and cloud computing in smart cities.
Figure 2. Schematic representation of the three-tier architecture.
26 pages, 5572 KiB  
Article
Leveraging Symmetry and Addressing Asymmetry Challenges for Improved Convolutional Neural Network-Based Facial Emotion Recognition
by Gabriela Laura Sălăgean, Monica Leba and Andreea Cristina Ionica
Symmetry 2025, 17(3), 397; https://doi.org/10.3390/sym17030397 - 6 Mar 2025
Viewed by 175
Abstract
This study introduces a custom-designed CNN architecture that extracts robust, multi-level facial features and incorporates preprocessing techniques to correct or reduce asymmetry before classification. The innovation of this research lies in its integrated approach to overcoming facial asymmetry challenges and enhancing CNN-based emotion recognition. This is achieved through well-known data augmentation strategies—vertical flipping and shuffling—that generate symmetric variations of facial images, effectively balancing the dataset and improving recognition accuracy. Additionally, a Loss Weight parameter fine-tunes training, optimizing performance across diverse and unbalanced emotion classes. Together, these elements yield an efficient, real-time facial emotion recognition system that outperforms traditional CNN models and offers practical benefits for various applications while addressing the inherent challenges of facial asymmetry in emotion detection. Our experimental results demonstrate superior performance compared to other CNN methods, marking a step forward in applications ranging from human–computer interaction to immersive technologies while acknowledging privacy and ethical considerations.
Figures:
Figure 1. Research flowchart illustrating the objectives and methods employed.
Figure 2. Emotion image distribution in the training and validation sets.
Figure 3. Stages of facial expression recognition.
Figure 4. Network structure.
Figure 5. Confusion matrix—Case 1: data augmentation with the Shuffle function.
Figure 6. Confusion matrix—Case 2: data augmentation with the Shuffle and Flip-Vertical functions.
Figure 7. Confusion matrix—Case 3: data augmentation with the Loss Weight parameter.
Figure 8. Confusion matrix—Case 4: data augmentation with the Shuffle function and the Loss Weight parameter.
Figure 9. Similarities between feelings in the FER2013 dataset.
Figure 10. Confusion matrix—Case 5: data augmentation with the Flip-Vertical and Shuffle functions and the Loss Weight parameter.
Figure 11. Confusion matrix—Case 6: data augmentation with the Shuffle function and a larger number of images in the test set.
Figure 12. Application results displaying emotions (images captured and used with the subject's consent).
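A minimal PyTorch sketch of the two levers named above: flip-based augmentation and a class-weighted ("Loss Weight") objective. The class counts, transform settings, and seven-class layout are illustrative assumptions, not the paper's exact configuration; shuffling corresponds to the DataLoader's shuffle=True.

```python
import torch
import torch.nn as nn
import torchvision.transforms as T

# Augmentation mirroring the paper's Flip-Vertical strategy (illustrative values).
train_tf = T.Compose([
    T.RandomVerticalFlip(p=0.5),
    T.ToTensor(),
])

# "Loss Weight": weight each class inversely to its frequency so rare emotions
# are not drowned out; counts below are illustrative FER2013-style numbers.
class_counts = torch.tensor([4953., 547., 5121., 8989., 6077., 4002., 6198.])
weights = class_counts.sum() / (len(class_counts) * class_counts)
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 7)                  # dummy batch of 8 predictions
labels = torch.randint(0, 7, (8,))
print(criterion(logits, labels))
```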
32 pages, 2960 KiB  
Article
Comparing Application-Level Hardening Techniques for Neural Networks on GPUs
by Giuseppe Esposito, Juan-David Guerrero-Balaguera, Josie E. Rodriguez Condia and Matteo Sonza Reorda
Electronics 2025, 14(5), 1042; https://doi.org/10.3390/electronics14051042 - 6 Mar 2025
Viewed by 89
Abstract
Neural networks (NNs) are essential in advancing modern safety-critical systems. Lightweight NN architectures are deployed on resource-constrained devices using hardware accelerators such as Graphics Processing Units (GPUs) for fast responses. However, the latest semiconductor technologies may be affected by physical faults that can jeopardize NN computations, making fault mitigation crucial for safety-critical domains. Recent studies propose software-based Hardening Techniques (HTs) to address these faults, but the proposed countermeasures are evaluated with different hardware-agnostic error models and different test benches, neglecting the effort required for their implementation. Comparing application-level HTs across studies is therefore challenging, leaving it unclear (i) how effective they are against hardware-aware error models on any NN and (ii) which HTs provide the best trade-off between reliability enhancement and implementation cost. In this study, application-level HTs are evaluated homogeneously and independently through a feasibility-of-implementation study and a reliability assessment under two hardware-aware error models: (i) weight single bit-flips and (ii) neuron bit error rate. Our results indicate that not all HTs suit every NN architecture, and their effectiveness varies with the evaluated error model. Techniques based on range restriction of the activation function consistently outperform the others, achieving up to 58.23% greater mitigation effectiveness while keeping the inference-time overhead low and requiring contained implementation effort.
Figures:
Figure 1. Basic convolutional block.
Figure 2. Hierarchical GPU organization [46].
Figure 3. Effect of fault propagation up to the application level.
Figure 4. Hardening technique evaluation procedure.
Figure 5. Fault class distribution. HT names are abbreviated as follows: Baseline (BL), Adaptive Clipper (AC), Swap ReLU6 (SR), and Median Filter (MF).
Figure 6. Accuracy degradation of the HTs under errors injected into weights, per bit location.
Figure 7. Accuracy degradation of the HTs when injecting errors into multiple FM bit locations.
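A minimal sketch of the weight single-bit-flip error model and the range-restriction idea (ReLU6-style clipping) that the study finds most effective; the specific weight value and bit position below are illustrative assumptions.

```python
import struct
import numpy as np

def flip_bit(value: float, bit: int) -> float:
    # Weight single bit-flip model: reinterpret the float32 bit pattern,
    # toggle one bit, and reinterpret it back.
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    (faulty,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return faulty

def relu6(x: np.ndarray) -> np.ndarray:
    # Range restriction: clipping bounds how far a corrupted value can
    # propagate, which is why these HTs mitigate high-order bit-flips.
    return np.clip(x, 0.0, 6.0)

w = 0.5
w_faulty = flip_bit(w, 30)                 # flipping a high exponent bit: 0.5 -> ~1.7e38
x = np.array([1.0, 2.0])
print(w_faulty, relu6(w_faulty * x))       # clipped activations stay bounded at 6.0
```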
34 pages, 10596 KiB  
Article
Scalable Container-Based Time Synchronization for Smart Grid Data Center Networks
by Kennedy Chinedu Okafor, Wisdom Onyema Okafor, Omowunmi Mary Longe, Ikechukwu Ignatius Ayogu, Kelvin Anoh and Bamidele Adebisi
Technologies 2025, 13(3), 105; https://doi.org/10.3390/technologies13030105 - 5 Mar 2025
Viewed by 329
Abstract
The integration of edge-to-cloud infrastructures in smart grid (SG) data center networks requires a scalable, efficient, and secure architecture. Traditional server-based SG data center architectures face high computational loads and delays; addressing this requires a lightweight data center network (DCN) with low-cost, fast-converging optimization. This paper introduces a container-based time synchronization model (CTSM) within a spine–leaf virtual private cloud (SL-VPC), deployed via an AWS CloudFormation stack as a practical use case. The CTSM optimizes resource utilization, security, and traffic management while reducing computational overhead. The model was benchmarked against five DCN topologies—DCell, Mesh, Skywalk, Dahu, and Ficonn—using Mininet simulations and a software-defined CloudFormation stack on an Amazon EC2 HPC testbed under realistic SG traffic patterns. The results show that CTSM achieved near-100% reliability, with the highest received energy data (29.87%), the lowest packetization delay (13.11%), and the highest traffic availability (70.85%). Stateless container engines improved resource allocation, reducing administrative overhead and enhancing grid stability. Software-defined network (SDN)-driven adaptive routing and load balancing further optimized performance under dynamic demand conditions. These findings position CTSM-SL-VPC as a secure, scalable, and efficient solution for next-generation smart grid automation.
Figures:
Figure 1. Residential units with layered SGDA and edge-to-cloud interfaces.
Figure 2. Smart grid edge-to-cloud integration using the CTSM multi-queue system for heterogeneous fleet servers.
Figure 3. (a,b) Implementation of the load management AMI hardware in the SGDA.
Figure 4. Proof-of-concept Advanced Metering Infrastructure (AMI) employing full-duplex computational modeling of energy generation and distribution. The model uses exponential, gamma, Bernoulli, and binomial distributions to simulate GENCO lifespan, aggregated energy output, and smart meter reading accuracy for dynamic load management in the cloud. Key components include smart load control switching modules, voltage and current sensors, and IoT RF communication modules for monitoring electrical parameters and real-time data exchange; enclosed edge aggregation boxes with disabled and active load points organize distributed energy resources; data acquisition mobile devices gather operational data; and high-frequency display modules provide energy readings and system status updates for informed decision-making and effective grid management.
Figure 5. Computation of the neural controller architecture for the SG architecture.
Figure 6. Mean square error plot for the SG edge neural network model.
Figure 7. Simulated SGDA implementation. The edge-to-cloud AMI experiments ran on an EC2 HPC testbed with Intel Xeon Gold 6132 CPUs, NVIDIA GeForce GTX 1080Ti GPUs, and 192 GB of RAM; the CTSM modules were implemented in Python 3.7.4 and PyTorch 1.1.0. A Cisco Nexus 7700 core switch with 18 slots managed network connectivity, supporting up to 768 × 1 and 10 Gigabit Ethernet ports, 384 × 40 Gigabit Ethernet ports, and 192 × 100 Gigabit Ethernet ports, efficiently handling the SG workloads and automation processes.
Figure 8. SGDA energy data received response.
Figure 9. SGDA service delay response.
Figure 10. SGDA media access delay response.
Figure 11. SGDA service throughput response.
Figure 12. SGDA traffic availability response.
Figure 13. SGDA security overhead response.