Computational Methods and Application in Machine Learning, 2nd Edition

A special issue of Mathematics (ISSN 2227-7390).

Deadline for manuscript submissions: 31 December 2024 | Viewed by 3906

Special Issue Editors


Guest Editor
Department of Mathematics and Computer Science, Zhejiang Normal University, Jinhua 321004, China
Interests: data mining; machine learning

Guest Editor
College of Computer Science and Electronic Engineering, Hunan University, Changsha 410082, China
Interests: cross-modal data retrieval; data analysis; representation and mining

Special Issue Information

Dear Colleagues,

Machine learning is an interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, optimization, algorithmic complexity theory, and related fields. It focuses on how computers simulate or realize human learning behaviors in order to acquire new knowledge or skills, and it lies at the core of artificial intelligence. In essence, the aim of machine learning is to enable computers to simulate human learning behaviors, automatically acquire knowledge and skills through learning, continuously improve their performance, and thereby realize artificial intelligence.

The main focus of this Special Issue is progress in machine learning methods and applications, as well as emerging intelligent applications and models in topics of interest, including, but not limited to, information retrieval, expert systems, automatic reasoning, natural language understanding, pattern recognition, computer vision, intelligent robots, and deep learning.

The goal of this Special Issue is to establish a community of authors and readers to discuss the latest research, propose new ideas and research directions, and connect them with practical applications. In terms of applications, we welcome papers on topics including, but not limited to, new machine learning models for vision, natural language, bioinformatics, intelligent robots, and expert systems. We will consider any theoretically solid contribution to the fields related to machine learning.

Prof. Dr. Huawen Liu
Dr. Chengyuan Zhang
Dr. Chunwei Tian
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • big data and analysis
  • machine learning
  • deep learning
  • natural language understanding
  • pattern recognition
  • computer vision
  • information retrieval
  • data mining
  • bioinformatics and biomedical applications
  • reinforcement learning
  • multimedia analysis and retrieval
  • multimodal representation learning
  • feature selection
  • clustering

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.


Published Papers (4 papers)


Research


20 pages, 2546 KiB  
Article
Enhancing Vehicle Location Prediction Accuracy with Road-Aware Rectification for Multi-Access Edge Computing Applications
by Asif Mehmood, Afaq Muhammad, Faisal Mehmood and Wang-Cheol Song
Mathematics 2024, 12(24), 3980; https://doi.org/10.3390/math12243980 - 18 Dec 2024
Viewed by 283
Abstract
In future 6G networks, real-time and accurate vehicular data are key requirements for enhancing the data-driven multi-access edge computing (MEC) applications. Existing estimation techniques to forecast vehicle position aim to meet the real-time data needs but compromise accuracy due to a lack of context awareness. While algorithms such as the Kalman filter improve estimation accuracy by considering certainty-grading and current-state estimate of measurements, they do not include the road context, which is vital for more accurate predictions. Unfortunately, current implementations of linear Kalman filters are not road-aware and struggle to predict a two-dimensional movement accurately. To this end, we propose a significant road-aware rectification-assisted prediction mechanism that enhances the modified Kalman filter predictions by incorporating road awareness. The parameters used for the Kalman filter include vehicle location, angle, speed, and time. In contrast, road-aware location rectification incorporates predicted location and lane shape, increasing the accuracy and precision of vehicle location predictions, reaching up to 99.9%. Performance is evaluated by comparing actual, predicted, and rectified vehicular traces at different speeds. The results demonstrate that the prediction error is approximately 0.005, while the proposed rectification process further reduces the error to 0.001, highlighting the effectiveness of the proposed approach. Overall, results support the idea of provisioning accurate, proactive, and real-time vehicular location data at the edge using a road-aware approach, thereby revolutionizing 6G vehicle location provisioning in MEC.
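As a rough illustration of the two-stage idea described in this abstract (linear Kalman prediction followed by road-aware rectification), the minimal Python sketch below predicts a 2-D position with a constant-velocity Kalman filter and then snaps the prediction onto a lane centreline given as a polyline. The state layout, noise values, and lane geometry are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the paper's implementation): constant-velocity Kalman
# prediction over 2-D position, followed by "road-aware" rectification that
# projects the predicted point onto an assumed lane-centreline polyline.
import numpy as np

def kalman_predict(x, P, F, Q):
    """Standard linear Kalman prediction step: x' = F x, P' = F P F^T + Q."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def project_onto_polyline(point, polyline):
    """Rectification: return the closest point on a lane polyline (illustrative)."""
    best, best_d = None, np.inf
    for a, b in zip(polyline[:-1], polyline[1:]):
        ab = b - a
        t = np.clip(np.dot(point - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        candidate = a + t * ab
        d = np.linalg.norm(point - candidate)
        if d < best_d:
            best, best_d = candidate, d
    return best

# State: [x, y, vx, vy]; dt is the prediction horizon (assumed values).
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
Q = 0.01 * np.eye(4)                     # assumed process noise
x = np.array([0.0, 0.0, 1.0, 0.2])       # current estimate (position + velocity)
P = np.eye(4)

x_pred, P_pred = kalman_predict(x, P, F, Q)
lane = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.1], [3.0, 0.3]])  # hypothetical lane shape
rectified = project_onto_polyline(x_pred[:2], lane)
print("predicted:", x_pred[:2], "rectified:", rectified)
```

The rectified point is simply the closest point on the assumed lane shape, which captures the essence of the road-aware correction described above.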
Figures:
Figure 1: Proposed modified Kalman filter and its interworking. The process includes Step 0: initialization, Step 1: measurement (update), Step 2: state update (Kalman gain calculation), Step 3: prediction (updates in latitude and longitude), and Step 4: rectification (updates in latitude and longitude for enhanced accuracy). The black and white arrows indicate the sequence of steps in the process, while the yellow dashed lines represent the flow of vehicular and road data provision and movement.
Figure 2: ERD of vTrachea-Store supporting road awareness in the modified Kalman filter.
Figure 3: Illustration highlighting the key differences between prediction and rectification.
Figure 4: Comparison between predicted and rectified trajectories. The orange dashed frame highlights significant discrepancies in the predicted path and the corresponding corrections applied by the modified Kalman filter for improved accuracy.
25 pages, 5540 KiB  
Article
IMITASD: Imitation Assessment Model for Children with Autism Based on Human Pose Estimation
by Hany Said, Khaled Mahar, Shaymaa E. Sorour, Ahmed Elsheshai, Ramy Shaaban, Mohamed Hesham, Mustafa Khadr, Youssef A. Mehanna, Ammar Basha and Fahima A. Maghraby
Mathematics 2024, 12(21), 3438; https://doi.org/10.3390/math12213438 - 3 Nov 2024
Viewed by 834
Abstract
Autism is a challenging brain disorder affecting children at global and national scales. Applied behavior analysis is commonly conducted as an efficient medical therapy for children. This paper focused on one paradigm of applied behavior analysis, imitation, where children mimic certain lessons to enhance children’s social behavior and play skills. This paper introduces IMITASD, a practical monitoring assessment model designed to evaluate autistic children’s behaviors efficiently. The proposed model provides an efficient solution for clinics and homes equipped with mid-specification computers attached to webcams. IMITASD automates the scoring of autistic children’s videos while they imitate a series of lessons. The model integrates two core modules: attention estimation and imitation assessment. The attention module monitors the child’s position by tracking the child’s face and determining the head pose. The imitation module extracts a set of crucial key points from both the child’s head and arms to measure the similarity with a reference imitation lesson using dynamic time warping. The model was validated using a refined dataset of 268 videos collected from 11 Egyptian autistic children during conducting six imitation lessons. The analysis demonstrated that IMITASD provides fast scoring, takes less than three seconds, and shows a robust measure as it has a high correlation with scores given by medical therapists, about 0.9, highlighting its effectiveness for children’s training applications.
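To make the scoring step concrete, the sketch below compares a child's keypoint trajectory with a reference lesson using plain dynamic time warping (DTW). The keypoint arrays, the O(n*m) DTW implementation, and the distance-to-score mapping are illustrative assumptions, not the exact pipeline used by IMITASD.

```python
# Generic sketch of DTW-based imitation scoring over pose-keypoint sequences.
# Assumes keypoints were already extracted (e.g., head/arm landmarks per frame).
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic O(n*m) dynamic time warping over per-frame feature vectors."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(0)
reference = rng.random((40, 8))                       # 40 frames x 8 keypoint coords (illustrative)
child = reference[::2] + 0.05 * rng.random((20, 8))   # slower, noisier imitation of the lesson

dist = dtw_distance(child, reference)
score = 1.0 / (1.0 + dist / len(reference))           # assumed mapping of distance to a [0, 1] score
print(f"DTW distance: {dist:.2f}, imitation score: {score:.2f}")
```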
Figures:
Figure 1: List of imitation tasks.
Figure 2: Landmarks from the MediaPipe hand and body pose tracking module [69,70].
Figure 3: Room setting inside the medical clinic.
Figure 4: GUI controls available to the admin.
Figure 5: GUI interface; the left part (child preview) is visible on the child’s screen.
Figure 6: Child attention module.
Figure 7: Imitation assessment block module.
Figure 8: Feature extraction flowchart.
Figure 9: Comparison between the IMITASD score and medical evaluation.
Figure 10: Detailed comparison of distance metrics and expert evaluation scores.
Figure 11: Comparison of distance metrics and expert evaluation scores for each imitation task.
Figure 12: Running time to process a video segment.
Figure 13: Number of videos that could not be processed by MediaPipe, grouped by participant.
Figure 14: Number of videos that could not be processed by MediaPipe, grouped by participant and task.
Figure 15: Number of videos that could not be processed by MediaPipe, grouped by task.
46 pages, 27418 KiB  
Article
Enhanced Parameter Estimation of DENsity CLUstEring (DENCLUE) Using Differential Evolution
by Omer Ajmal, Shahzad Mumtaz, Humaira Arshad, Abdullah Soomro, Tariq Hussain, Razaz Waheeb Attar and Ahmed Alhomoud
Mathematics 2024, 12(17), 2790; https://doi.org/10.3390/math12172790 - 9 Sep 2024
Cited by 1 | Viewed by 863
Abstract
The task of finding natural groupings within a dataset exploiting proximity of samples is known as clustering, an unsupervised learning approach. Density-based clustering algorithms, which identify arbitrarily shaped clusters using spatial dimensions and neighbourhood aspects, are sensitive to the selection of parameters. For instance, DENsity CLUstEring (DENCLUE)—a density-based clustering algorithm—requires a trial-and-error approach to find suitable parameters for optimal clusters. Earlier attempts to automate the parameter estimation of DENCLUE have been highly dependent either on the choice of prior data distribution (which could vary across datasets) or by fixing one parameter (which might not be optimal) and learning other parameters. This article addresses this challenge by learning the parameters of DENCLUE through the differential evolution optimisation technique without prior data distribution assumptions. Experimental evaluation of the proposed approach demonstrated consistent performance across datasets (synthetic and real datasets) containing clusters of arbitrary shapes. The clustering performance was evaluated using clustering validation metrics (e.g., Silhouette Score, Davies–Bouldin Index and Adjusted Rand Index) as well as qualitative visual analysis when compared with other density-based clustering algorithms, such as DPC, which is based on weighted local density sequences and nearest neighbour assignments (DPCSA) and Variable KDE-based DENCLUE (VDENCLUE).
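The tuning loop described in this abstract can be illustrated with a short sketch: differential evolution searches a clustering parameter space and an internal validity index scores each candidate. Since DENCLUE is not available in scikit-learn, the example below substitutes MeanShift (another kernel-density-based method) and tunes its bandwidth against the Silhouette Score; it demonstrates the optimisation pattern only, not the paper's actual algorithm or objective.

```python
# Sketch of the tuning loop: differential evolution searches a clustering
# parameter space, scored by an internal validity index (Silhouette here).
# MeanShift stands in for DENCLUE, which scikit-learn does not provide.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.cluster import MeanShift
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

def objective(params):
    bandwidth = params[0]
    labels = MeanShift(bandwidth=bandwidth).fit_predict(X)
    if len(set(labels)) < 2:              # degenerate clustering: penalise
        return 1.0
    return -silhouette_score(X, labels)   # minimise the negative Silhouette

result = differential_evolution(objective, bounds=[(0.5, 5.0)],
                                maxiter=10, popsize=6, seed=0)
print("best bandwidth:", result.x[0], "silhouette:", -result.fun)
```

The same pattern extends to several parameters at once by widening the bounds list, which is the spirit of the multi-parameter DENCLUE estimation described above.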
Figures:
Figure 1: Flowchart of the proposed approach.
Figure 2: Visualisations of clustering by different methods on the synthetic datasets (from top to bottom: Aggregation, Moons, Path-based, Shapes, Spiral, Zahn’s Compound, S2, and A3); each cluster is shown in a unique colour and marker.
Figure 3: Comparison of predicted vs. true clusters.
Figure 4: Comparison of best ARI.
Figure 5: Comparison of average ARI.
Figures A1–A8: Clusters from different runs (unique colour/marker per cluster) for Aggregation, Two Moons, Path-based, Shapes, Spiral, Zahn’s Compound, S2, and A3.
Figures A9–A16: Correlation of metrics for the same synthetic datasets; colour shows correlation strength, p-values indicate significance, and confidence intervals show range certainty.
Figures A17–A18: Distribution of DBCV and ARI for Two Moons and Path-based.
Figures A19–A28: Pairwise feature plots for the real datasets (IRIS: sepal length vs. sepal width; Heart Disease: resting blood pressure vs. cholesterol, heart rate vs. depression, age vs. resting blood pressure, age vs. maximum heart rate; Seeds: compactness vs. asymmetry, compactness vs. kernel groove; Wine: total phenols vs. diluted wines, nonflavanoid vs. hue, proanthocyanins vs. hue).
Figures A29–A32: Correlation of metrics for IRIS, Heart Disease, Seeds, and Wine; colour shows correlation strength, p-values indicate significance, and confidence intervals show range certainty.
Figures A33–A34: Distribution of DBCV and ARI for Heart Disease and Seeds.
Figure A35: Clustering visualisations of the proposed method vs. grid search on the original DENCLUE for the synthetic datasets (from top to bottom: Aggregation, Moons, Path-based, Shapes, Spiral, Zahn’s Compound, S2, and A3); each cluster is shown in a unique colour and marker.

Review


37 pages, 4940 KiB  
Review
Graph Convolutional Network for Image Restoration: A Survey
by Tongtong Cheng, Tingting Bi, Wen Ji and Chunwei Tian
Mathematics 2024, 12(13), 2020; https://doi.org/10.3390/math12132020 - 28 Jun 2024
Cited by 1 | Viewed by 1376
Abstract
Image restoration technology is a crucial field in image processing and is extensively utilized across various domains. Recently, with advancements in graph convolutional network (GCN) technology, methods based on GCNs have increasingly been applied to image restoration, yielding impressive results. Despite these advancements, there is a gap in comprehensive research that consolidates various image denoising techniques. In this paper, we conduct a comparative study of image restoration techniques using GCNs. We begin by categorizing GCN methods into three primary application areas: image denoising, image super-resolution, and image deblurring. We then delve into the motivations and principles underlying various deep learning approaches. Subsequently, we provide both quantitative and qualitative comparisons of state-of-the-art methods using public denoising datasets. Finally, we discuss potential challenges and future directions, aiming to pave the way for further advancements in this domain. Our key findings include the identification of superior performance of GCN-based methods in capturing long-range dependencies and improving image quality across different restoration tasks, highlighting their potential for future research and applications.
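Most of the surveyed methods build on the standard graph-convolution propagation rule, roughly H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W), applied to graphs constructed over pixels or patches. The NumPy sketch below implements one such layer on a toy patch graph; the graph, feature sizes, and weights are illustrative assumptions, not any specific model from the survey.

```python
# Minimal single GCN layer (Kipf & Welling-style propagation) over a toy
# patch graph; illustrative only, not a specific restoration network.
import numpy as np

def gcn_layer(H, A, W):
    """H' = ReLU(D^-1/2 (A + I) D^-1/2 H W) with self-loops and symmetric normalisation."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt       # symmetric normalisation
    return np.maximum(A_norm @ H @ W, 0.0)         # ReLU activation

rng = np.random.default_rng(0)
num_patches, feat_in, feat_out = 6, 16, 8
H = rng.standard_normal((num_patches, feat_in))    # patch features (e.g., noisy patches)
A = (rng.random((num_patches, num_patches)) > 0.6).astype(float)
A = np.maximum(A, A.T)                             # make adjacency symmetric
np.fill_diagonal(A, 0.0)                           # self-loops are added inside the layer
W = 0.1 * rng.standard_normal((feat_in, feat_out)) # layer weights

H_next = gcn_layer(H, A, W)
print(H_next.shape)  # (6, 8): aggregated, transformed patch features
```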
Figures:
Figure 1: Two common types of system-related degradations. The first row shows turbulence-degraded images under two different turbulence degrees (k), and the second row shows motion-blurred images under different motion degrees.
Figure 2: Two common types of statistical degradations resulting from noise. The first row shows images after Gaussian blur and the second row shows images after Rayleigh blur.
Figure 3: Outline of the survey, consisting of the basic framework, categories, performance comparison, and challenges and potential directions. The categories comprise GCNs for image denoising, GCNs for image super-resolution, GCNs for image deblurring, and LLMs and GCNs for image restoration.
Figure 4: Social network relationship diagram. P1–P4 represent different characters, and ω1–ω4 represent the weights of the relationships between them.
Figure 5: Basic network framework of GCNs.
Figure 6: Visual illustration of the GraphSAGE sample-and-aggregate approach.
Figure 7: Visualization of GATs’ distribution of weights to different nodes.
Figure 8: Experimental dataset image samples; from left to right, Set12, BSD68, and Urban100.
Figure 9: Visual comparison of gray-scale image denoising by various methods on one sample from Urban100 with noise level σ = 25.
Figure 10: Visual comparisons of various methods on the SysData benchmark dataset (Joint, Sharp-Sphere, Carter, and Gargoyle models with Gaussian noise of level 0.3, in mean edge length).
Figure 11: Denoised Kinect v2 single-frame meshes (first row) and Kinect Fusion models (second row).
Figure 12: Denoising real-scan meshes in PrintData.
Figure 13: Visual comparisons on the Set14 dataset at ×4 scale.
Figure 14: Visual comparisons on the Urban100 dataset at ×4 scale.