Advanced Machine Learning Technologies and Their Applications in Intelligent Imaging and Image Processing

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Electronic Multimedia".

Deadline for manuscript submissions: 15 March 2025 | Viewed by 1384

Special Issue Editors


Guest Editor
College of Electrical and Information Engineering, Hunan University, Changsha 410082, China
Interests: image processing; deep learning; machine learning

Guest Editor
School of Computer Science and Engineering, Hunan University of Science and Technology, Xiangtan 411201, China
Interests: signal processing; image restoration and fast imaging; deep learning; machine learning

Guest Editor
Institute of Optics and Electronics, Nanjing University of Information Science and Technology, Nanjing, China
Interests: image processing; hyperspectral image anomaly detection; pattern recognition

Special Issue Information

Dear Colleagues,

Intelligent imaging and image processing is one of the fundamental tasks in machine learning and artificial intelligence, and continuous progress is being made in the underlying machine learning methods. Powered by advanced machine learning techniques, intelligent imaging and image processing has attracted increasing attention due to its wide range of applications, such as face image analysis, fast medical imaging, snapshot compressive imaging, hyperspectral image restoration, and machine vision sensing. Despite the promising results achieved with advanced machine learning technologies and the growing number of related applications, several challenges remain unsolved in practice, such as efficient image prior modeling and fast, robust large-scale optimization algorithms. There is thus ample room for improvement in contemporary theories and methodologies for intelligent imaging, image processing, and their applications.

The aim of this Special Issue is to discuss new machine learning technologies and their applications in intelligent imaging and image processing. Topics include, but are not limited to, new deep learning techniques; low-level image processing, restoration, and enhancement; intelligent sensing systems; signal processing; multi-sensor imaging fusion; and high-level vision tasks, including image classification and recognition.

Dr. Licheng Liu
Dr. Yunyi Li
Prof. Dr. Bing Tu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • intelligent imaging
  • image restoration
  • multi-sensor imaging fusion
  • face image analysis
  • hyperspectral image processing
  • advanced machine learning algorithms
  • new applications from novel AI technologies

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (3 papers)


Research

20 pages, 1096 KiB  
Article
Detection of Sealing Surface of Electric Vehicle Electronic Water Pump Housings Based on Lightweight YOLOv8n
by Li Sun, Yi Shen, Jie Li, Weiyu Jiang, Xiang Bian and Mingxin Yuan
Electronics 2025, 14(2), 258; https://doi.org/10.3390/electronics14020258 - 9 Jan 2025
Abstract
Due to the characteristics of large size differences and shape variations in the sealing surface of electric vehicle electronic water pump housings, and the shortcomings of traditional YOLO defect detection models such as large volume and low accuracy, a lightweight defect detection algorithm based on YOLOv8n (You Only Look Once version 8n) is proposed for the sealing surface of electric vehicle electronic water pump housings. First, on the basis of introducing the MobileNetV3 module, the YOLOv8n network structure is redesigned, which not only achieves network lightweighting but also improves the detection accuracy of the model. Then, DualConv (Dual Convolution) is introduced and the CMPDual (Cross Max Pooling Dual) module is designed to further optimize the detection model, which reduces redundant parameters and the computational complexity of the model. Finally, in response to the characteristics of large size differences and shape variations in sealing surface defects, the Inner-WIoU (Inner-Wise-IoU) loss function is used instead of the CIoU (Complete-IoU) loss function in YOLOv8n, which improves the positioning accuracy of the defect area bounding box and further enhances the detection accuracy of the model. The ablation experiment based on the dataset constructed in this paper shows that, compared with the YOLOv8n model, the weight of the proposed model is reduced by 61.9%, the computational complexity is reduced by 58.0%, the detection accuracy is improved by 9.4%, and the mAP@0.5 is improved by 6.9%. The comparison of detection results from different models shows that the proposed model achieves an average improvement of 6.9% in detection accuracy and an average improvement of 8.6% in mAP@0.5, which indicates that the proposed detection model effectively improves defect detection accuracy while ensuring model lightweighting. Full article
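
For readers less familiar with IoU-family box-regression losses such as CIoU and Inner-WIoU, the minimal sketch below (an illustration only, not the authors' Inner-WIoU) shows the common starting point they all build on: the overlap between a predicted and a ground-truth box turned into a loss. The function name, the (x1, y1, x2, y2) box layout, and the example values are assumptions made for illustration; the variants named in the abstract add further distance, shape, and weighting terms on top of this.

# Minimal IoU-based box-regression loss sketch (illustration only; not the paper's Inner-WIoU).
# Boxes are (x1, y1, x2, y2) tensors of shape (N, 4); names and values are hypothetical.
import torch

def iou_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    # Intersection rectangle between each predicted and ground-truth box
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)

    # Union = area(pred) + area(target) - intersection
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter + eps

    iou = inter / union
    # CIoU, WIoU, Inner-WIoU, etc. subtract additional penalty/weighting terms at this point
    return (1.0 - iou).mean()

pred = torch.tensor([[10.0, 10.0, 50.0, 50.0]])
gt = torch.tensor([[12.0, 12.0, 48.0, 52.0]])
print(iou_loss(pred, gt))  # a scalar in [0, 1]; 0 means perfect overlap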
21 pages, 6186 KiB  
Article
Automatic Measurement of Comprehensive Skin Types Based on Image Processing and Deep Learning
by Jianghong Ran, Guolong Dong, Fan Yi, Li Li and Yue Wu
Electronics 2025, 14(1), 49; https://doi.org/10.3390/electronics14010049 - 26 Dec 2024
Viewed by 397
Abstract
The skin serves as a physical and chemical barrier, effectively protecting us against the external environment. The Baumann Skin Type Indicator (BSTI) classifies skin into 16 types based on traits such as dry/oily (DO), sensitive/resistant (SR), pigmented/nonpigmented (PN), and wrinkle-prone/tight (WT). Traditional assessments are time-consuming and challenging, as they require the involvement of experts. While deep learning has been widely used in skin disease classification, its application in skin type classification, particularly using multimodal data, remains largely unexplored. To address this, we propose an improved Inception-v3 model incorporating transfer learning, based on the four-dimensional BSTI classification, which demonstrates outstanding accuracy. The dataset used in this study includes non-invasive physiological indicators, BSTI questionnaires, and skin images captured under various light sources. By comparing performance across different light sources, regions of interest (ROIs), and baseline models, the improved Inception-v3 model achieved the best results, with accuracy reaching 91.11% in DO, 81.13% in SR, 91.72% in PN, and 74.9% in WT, demonstrating its effectiveness in skin type classification. This study surpasses traditional classification methods and previous similar research, offering a new, objective approach to measuring comprehensive skin types using multimodal and multi-light-source data. Full article
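
As a rough sketch of the transfer-learning setup described above (not the authors' exact architecture or multimodal pipeline), the following PyTorch/torchvision snippet loads an ImageNet-pre-trained Inception-v3, freezes the backbone, and replaces the classification heads for a new two-class task such as a single BSTI dimension. The class count, freezing policy, and learning rate are illustrative assumptions.

# A minimal transfer-learning sketch with Inception-v3 (illustrative; not the paper's modified model).
import torch
import torch.nn as nn
from torchvision import models

def build_skin_classifier(num_classes: int = 2) -> nn.Module:
    # Load ImageNet-pre-trained Inception-v3 (expects 299x299 RGB inputs)
    model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)

    # Freeze the pre-trained backbone so only the new heads are trained at first
    for p in model.parameters():
        p.requires_grad = False

    # Replace the main and auxiliary classification heads for the new task
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, num_classes)
    return model

model = build_skin_classifier(num_classes=2)  # e.g. one BSTI dimension such as DO
optimizer = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-3)
# Training then proceeds as usual; unfreezing deeper blocks afterwards corresponds to the
# fine-tuning stage shown in the training-curve figures below.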
Show Figures

Figure 1. Graphical summary.
Figure 2. Model structure diagram. (1), (2), (3), and (4) are the specific settings for the four level models DO, SR, PN, and WT, respectively.
Figure 3. Heat map of correlation analysis significance results.
Figure 4. Data enhancement.
Figure 5. Data optimization results. The left side shows the original data; the right side shows the optimized data. (1) and (2), (3) and (4), and (5) and (6) are, respectively, the DO, SR, and PN model datasets.
Figure 6. DO model results. (1,2) represent training curves for the initial version of the model; (3,4) represent training curves after fine-tuning. The vertical line in (1,2) is the retained optimal model epoch.
Figure 7. SR model results. (1,2) represent training curves for the initial version of the model; (3,4) represent training curves after fine-tuning.
Figure 8. PN model results. (1,2) represent training curves for the initial version of the model; (3,4) represent training curves after fine-tuning.
Figure 9. WT model results. (1,2) represent training curves for the initial version of the model; (3,4) represent training curves after fine-tuning. The vertical line in (3,4) is the retained optimal model epoch.
Figure 10. Comparison of classification accuracy using MobileNet, ResNet-50, and Inception-v3 at optimal light sources. Bolded in the table is our proposed model.
Figure 11. Comparison of model accuracy under optimal and comparison light sources.
Figure 12. ROI selection. (1) shows the ROI choices for the DO, SR, and PN dimensions; (2) shows the ROI choices for WT for the left and right side faces.
Figure 13. Comparison of model accuracy for different areas. Bolded in the table is the optimal ROI we used.
13 pages, 4672 KiB  
Article
A Four-Point Orientation Method for Scene-to-Model Point Cloud Registration of Engine Blades
by Duanjiao Li, Ying Zhang, Ziran Jia, Zhiyu Wang, Qiu Fang and Xiaogang Zhang
Electronics 2024, 13(23), 4634; https://doi.org/10.3390/electronics13234634 - 25 Nov 2024
Viewed by 512
Abstract
The use of 3D optical equipment for multi-view scanning is a promising approach to assessing the processing errors of engine blades. However, incomplete scanned point cloud data may impact the accuracy of point cloud registration (PCR). This paper proposes a four-point orientation point cloud registration method to improve the efficiency and accuracy of the coarse registration of turbine blades and prevent PCR failure. First, the point cloud is divided into four labeling blocks based on a principal component analysis. Second, keypoints are detected in each block based on their distance from the plane formed by the principal axes and described with a location-label descriptor based on their position. Third, a keypoint pair set is chosen based on the descriptor, and a suitable keypoint base is selected through singular value decomposition to obtain the final rigid transformation. To verify the effectiveness of the method, experiments are conducted on different blades. The results demonstrate the improved performance and efficiency of the proposed method of coarse registration for turbine blades. Full article
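
To make the abstract's numerical machinery more concrete, here is a small NumPy sketch (an illustration under simplifying assumptions, not the paper's full method) of its two standard building blocks: the principal axes of a point cloud via PCA, and the SVD-based best-fit rigid transform between matched keypoints (the classic Kabsch step). The block division, keypoint detection, and location-label descriptor are not reproduced; all names and the synthetic data are hypothetical.

# PCA axes and SVD rigid transform: generic building blocks, not the paper's full pipeline.
import numpy as np

def principal_axes(points: np.ndarray) -> np.ndarray:
    """Return the three principal axes (as rows) of an (N, 3) point cloud."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt  # rows are ordered by decreasing variance along each axis

def rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Best-fit rotation R and translation t mapping matched points src onto dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    h = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))     # guard against a reflection solution
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst_c - r @ src_c
    return r, t

# Synthetic check: recover a known rotation and translation
rng = np.random.default_rng(0)
model_pts = rng.normal(size=(100, 3))
true_r, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(true_r) < 0:                  # make it a proper rotation
    true_r[:, 0] *= -1
scene_pts = model_pts @ true_r.T + np.array([0.5, -1.0, 2.0])
r, t = rigid_transform(model_pts, scene_pts)
print(np.allclose(model_pts @ r.T + t, scene_pts))  # True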
Show Figures

Figure 1. The flowchart for measuring blade manufacturing errors using a 3D structured light device.
Figure 2. Two ways of scanning objects with marker points. The left places marker points on the object to be scanned; the right places objects on the fixture table with marker points attached to the pillars.
Figure 3. An example of a PCA-computed coordinate axis of the point cloud of a blade.
Figure 4. Two three-point bases with similar structures.
Figure 5. Incorrect keypoint pairing.
Figure 6. An example of a divided-block point cloud.
Figure 7. Original experiment data. The left is the scene point cloud; the right is the model point cloud.
Figure 8. Registration experiment result; red is the scene point cloud and green is the model point cloud. From left to right are Blade 1 to Blade 5, respectively.
Figure 9. Color deviation graph comparison for Blade 3 with three common methods. From left to right are KDDAR (ours), PCA, Super 4PCS, and RANSAC, respectively.
Figure 10. Comparison of the registration deviation of a certain cross-section of Blade 3. From left to right are KDDAR (ours), PCA, Super 4PCS, and RANSAC, respectively.