Search Results (2,153)

Search Parameters:
Keywords = image restoration

18 pages, 2206 KiB  
Article
RGB Approach for Pixel-Wise Identification of Cellulose Nitrate Photo Negative Yellowing
by Anastasia Povolotckaia, Svetlana Kaputkina, Irina Grigorieva, Dmitrii Pankin, Evgenii Borisov, Anna Vasileva, Valeria Lipovskaia and Maria Dynnikova
Heritage 2025, 8(1), 16; https://doi.org/10.3390/heritage8010016 - 3 Jan 2025
Viewed by 303
Abstract
Film-based cellulose nitrate negatives are a unique class of objects that contain important information about life, historical buildings, and the natural landscapes of past years. Increased sensitivity to storage conditions makes these objects highly flammable and can lead to irretrievable loss. In this regard, timely identification of the degradation process is a necessary step towards further conservation and restoration. This work studies the possibility of detecting the degradation process based on cellulose nitrate artifact yellowing. A total of 20 normal and 20 yellowed negatives from the collection of Karl Kosse (The State Museum and Exhibition Center ROSPHOTO) were selected as objects for statistical study. The novelty of this work is in its demonstration of the possibility to divide negatives into normal and yellowed areas with different shades based on different B/R and B/G ratios of both light and dark negatives, i.e., regardless of the distribution of RGB component values for the obtained digital photo from the negative. Moreover, the obtained differentiation result was demonstrated for individual image pixels, without the need for averaging over a certain area. Full article
(This article belongs to the Section Materials and Heritage)
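The abstract's core idea, flagging yellowed pixels from low blue-to-red and blue-to-green ratios, can be sketched in a few lines. The thresholds below are illustrative placeholders, not the values derived in the paper:

```python
import numpy as np

def classify_yellowing(rgb, br_thresh=1.0, bg_thresh=1.0):
    """Label each pixel as yellowed when its blue channel is weak
    relative to red and green, i.e. low B/R and B/G ratios.
    Thresholds are illustrative, not the paper's fitted values."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-9  # guard against division by zero on fully dark pixels
    br = b / (r + eps)
    bg = b / (g + eps)
    return (br < br_thresh) & (bg < bg_thresh)  # True = yellowed pixel
```

Because both ratios normalize the blue channel against the other channels of the same pixel, the rule behaves similarly on light and dark negatives, which is the per-pixel, averaging-free property the abstract highlights.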
19 pages, 3737 KiB  
Article
End-to-End Multi-Scale Adaptive Remote Sensing Image Dehazing Network
by Xinhua Wang, Botao Yuan, Haoran Dong, Qiankun Hao and Zhuang Li
Sensors 2025, 25(1), 218; https://doi.org/10.3390/s25010218 - 2 Jan 2025
Viewed by 203
Abstract
Satellites frequently encounter atmospheric haze during imaging, leading to the loss of detailed information in remote sensing images and significantly compromising image quality. This detailed information is crucial for applications such as Earth observation and environmental monitoring. In response to the above issues, this paper proposes an end-to-end multi-scale adaptive feature extraction method for remote sensing image dehazing (MSD-Net). In our network model, we introduce a dilated convolution adaptive module to extract global and local detail features of remote sensing images. The design of this module can extract important image features at different scales. By expanding convolution, the receptive field is expanded to capture broader contextual information, thereby obtaining a more global feature representation. At the same time, a self-adaptive attention mechanism is also used, allowing the module to automatically adjust the size of its receptive field based on image content. In this way, important features suitable for different scales can be flexibly extracted to better adapt to the changes in details in remote sensing images. To fully utilize the features at different scales, we also adopted feature fusion technology. By fusing features from different scales and integrating information from different scales, more accurate and rich feature representations can be obtained. This process aids in retrieving lost detailed information from remote sensing images, thereby enhancing the overall image quality. A large number of experiments were conducted on the HRRSD and RICE datasets, and the results showed that our proposed method can better restore the original details and texture information of remote sensing images in the field of dehazing and is superior to current state-of-the-art methods. Full article
(This article belongs to the Section Sensing and Imaging)
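The receptive-field claim for the dilated convolution module can be made concrete: with stride 1, a k x k kernel at dilation d spans d(k - 1) + 1 input positions, so stacking dilations grows context cheaply. A small sketch of that arithmetic (not the paper's MSD-Net code):

```python
def effective_kernel(k, d):
    # A k x k kernel with dilation d covers d*(k-1)+1 input positions per axis.
    return d * (k - 1) + 1

def receptive_field(layers):
    """Receptive field of a stack of (kernel, dilation) conv layers, stride 1."""
    rf = 1
    for k, d in layers:
        rf += effective_kernel(k, d) - 1
    return rf
```

Three 3 x 3 layers at dilations 1, 2, 4 already see a 15 x 15 window, which is why dilation is a common way to capture "broader contextual information" without adding parameters.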
Show Figures

Figure 1: Diagram of the MSD-Net model architecture.
Figure 2: Diagram of the internal structure of the MSD-Net group module.
Figure 3: Visualization table of shallow concentration haze on HRRSD dataset.
Figure 4: Visualization table of equal concentration haze in HRRSD dataset.
Figure 5: HRRSD dataset dense haze visualization table.
Figure 6: RICE dataset visualization table.
Figure 7: RICE dataset visualization table.
Figure 8: Visualization of ablation experiments on the HRRSD dataset.
21 pages, 9210 KiB  
Article
sRrsR-Net: A New Low-Light Image Enhancement Network via Raw Image Reconstruction
by Zhiyong Hong, Dexin Zhen, Liping Xiong, Xuechen Li and Yuhan Lin
Appl. Sci. 2025, 15(1), 361; https://doi.org/10.3390/app15010361 - 2 Jan 2025
Viewed by 287
Abstract
Most existing low-light image enhancement (LIE) methods are primarily designed for human-vision-friendly image formats, such as sRGB, due to their convenient storage and smaller file sizes. In addition, raw images provide greater detail and a wider dynamic range, which makes them more suitable for LIE tasks. Despite these advantages, raw images, the original format captured by cameras, are larger and less accessible and are hard to use in methods of LIE with mobile devices. In order to leverage both the advantages of sRGB and raw domains while avoiding the direct use of raw images as training data, this paper introduces sRrsR-Net, a novel framework with the image translation process of sRGB–raw–sRGB for LIE task. In our approach, firstly, the RGB-to-iRGB module is designed to convert sRGB images into intermediate RGB feature maps. Then, with these intermediate feature maps, to bridge the domain gap between sRGB and raw pixels, the RAWFormer module is proposed to employ global attention to effectively align features between the two domains to generate reconstructed raw images. For enhancing the raw images and restoring them back to normal-light sRGB, unlike traditional Image Signal Processing (ISP) pipelines, which are often bulky and integrate numerous processing steps, we propose the RRAW-to-sRGB module. This module simplifies the process by focusing only on color correction and white balance, while still delivering competitive results. Extensive experiments on four benchmark datasets referring to both domains demonstrate the effectiveness of our approach. Full article
(This article belongs to the Special Issue Advances in Image Enhancement and Restoration Technology)
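The RRAW-to-sRGB module is described as focusing only on color correction and white balance. As a rough classical stand-in for that step (the paper's module is learned, so this is purely an illustrative assumption), gray-world white balance scales each channel so its mean matches the global mean:

```python
import numpy as np

def gray_world_wb(img):
    """Gray-world white balance: scale each channel so its mean matches
    the global mean. A classical stand-in for the color-correction step
    the RRAW-to-sRGB module focuses on (the paper's module is learned)."""
    img = img.astype(np.float64)
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel means
    gains = means.mean() / means              # gain that equalizes them
    return img * gains
```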
Show Figures

Figure 1: Experimental results on four datasets. A comprehensive comparison chart of results in both sRGB and raw domains, with detailed data available in Section 4. The horizontal and vertical axes represent PSNR and SSIM, respectively; the better the performance, the further right and up the model sits on the chart. Our model achieves excellent performance.
Figure 2: Flowchart of sRrsR-Net integrating the Sampler, RGB-iRGB, RAWFormer, and RRAW-sRGB modules.
Figure 3: The structure of the RGB-iRGB module.
Figure 4: The structure of the RAWFormer module.
Figure 5: The structure of the RRAW-sRGB module.
Figure 6: Visual comparison results on the LOL-v1 and LOL-v2 datasets. The magnified portion is marked with a red box; the same applies to the following figures.
Figure 7: Visual comparison of raw domain image reconstruction results using sRrsR-Net and six other methods.
Figure 8: sRrsR-Net's visualization results on the VE-LOL test set. From top to bottom, the images represent real and synthetic scenarios. From left to right, the input, output, and ground-truth images are depicted.
Figure 9: Comparison of running times across different datasets. The left side compares the average running times of other methods, while the right side shows our method's running time compared to other state-of-the-art methods.
Figure 10: Visualization results of ablation study.
22 pages, 15972 KiB  
Article
Regeneration Filter: Enhancing Mosaic Algorithm for Near Salt & Pepper Noise Reduction
by Ratko M. Ivković, Ivana M. Milošević and Zoran N. Milivojević
Sensors 2025, 25(1), 210; https://doi.org/10.3390/s25010210 - 2 Jan 2025
Viewed by 243
Abstract
This paper presents a Regeneration filter for reducing near Salt-and-Pepper (nS&P) noise in images, designed for selective noise removal while simultaneously preserving structural details. Unlike conventional methods, the proposed filter eliminates the need for median or other filters, focusing exclusively on restoring noise-affected pixels through localized contextual analysis in the immediate surroundings. Our approach employs an iterative processing method, where additional iterations do not degrade the image quality achieved after the first filtration, even with high noise densities up to 97% spatial distribution. To ensure the results are measurable and comparable with other methods, the filter’s performance was evaluated using standard image quality assessment metrics. Experimental evaluations across various image databases confirm that our filter consistently provides high-quality results. The code is implemented in the R programming language, and both data and code used for the experiments are available in a public repository, allowing for replication and verification of the findings. Full article
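The selective, iterative restoration the abstract describes (rewrite only noise-flagged pixels from their valid local neighbourhood, leave clean pixels untouched) can be sketched as follows. This is an illustrative reading of the approach, not the authors' R implementation:

```python
import numpy as np

def regenerate(img, noise_mask, max_iters=50):
    """Illustrative selective restoration: only pixels flagged in
    noise_mask are rewritten, each from the mean of its already-valid
    8-neighbours; clean pixels are never modified."""
    out = img.astype(np.float64).copy()
    valid = ~noise_mask.copy()
    for _ in range(max_iters):
        if valid.all():
            break
        updated = False
        for y, x in np.argwhere(~valid):
            ys = slice(max(y - 1, 0), y + 2)   # 3x3 window, clamped at edges
            xs = slice(max(x - 1, 0), x + 2)
            nb_valid = valid[ys, xs]
            if nb_valid.any():
                out[y, x] = out[ys, xs][nb_valid].mean()
                valid[y, x] = True
                updated = True
        if not updated:   # no valid neighbours anywhere; give up
            break
    return out
```

Because each pass validates pixels that then serve as context for their neighbours, restoration can propagate inward even at very high noise densities; the paper reports usable results up to 97% spatial density.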
Show Figures

Figure 1: Original samples of digital images.
Figure 2: Digital image samples with added nS&P noise: (a) 1%, (b) 2%, (c) 3%, (d) 4%, (e) 5%, (f) 7.5%, (g) 10%, (h) 10%, (i) 30%, (j) 40%, (k) 50%, (l) 50%, (m) 70%, (n) 80%, (o) 80%, (p) 90%.
Figure 3: Mosaic algorithm optimized for nS&P noise.
Figure 4: (a) Values of the SSIM parameter for reconstructed images with increasing noise spatial density; (b) values of the standard deviation of the SSIM parameter.
Figure 5: (a) Values of the entropy parameter for reconstructed images with increasing noise spatial density; (b) values of the standard deviation of the entropy.
Figure 6: (a) Values of the MSE parameter for reconstructed images with increasing noise spatial density; (b) values of the standard deviation of the MSE parameter.
Figure 7: (a) Values of the PSNR parameter for reconstructed images with increasing noise spatial density; (b) values of the standard deviation of the PSNR parameter.
Figure 8: (a) Values of the LoD parameter for reconstructed images with increasing noise spatial density; (b) values of the standard deviation of the LoD parameter.
Figure 9: (a) Values of the CSI parameter for reconstructed images with increasing noise spatial density; (b) values of the standard deviation of the CSI parameter.
Figure 10: Images with added nS&P noise and the result of reduction by the regeneration filter for 90% noise spatial density: (a,c) after the first treatment and (b,d) after the second iteration.
Figure 11: Results of regeneration filter treatment over test specimens from Figure 2.
7 pages, 1349 KiB  
Case Report
Fibrous Dysplasia of the Ethmoid Bone Diagnosed in a 10-Year-Old Patient
by Zofia Resler, Monika Morawska-Kochman, Katarzyna Resler and Tomasz Zatoński
Medicina 2025, 61(1), 45; https://doi.org/10.3390/medicina61010045 - 31 Dec 2024
Viewed by 333
Abstract
Fibrous dysplasia is an uncommon bone disorder affecting various parts of the skeleton, often affecting facial and cranial bones. In this case, a 10-year-old patient was diagnosed with fibrous dysplasia of the ethmoid sinus at an early age. The patient has experienced nasal congestion, snores, and worsening nasal patency since 2019. A CT scan revealed an expansive proliferative lesion, likely from the frontal or ethmoid bone, protruding into the nasal cavity, ethmoid sinus, and right orbit. The tumor causes bone defects in the area of the nasal bone, leading to fluid retention in the peripheral parts of the right maxillary sinus. The patient’s parents decided not to undergo surgery to remove the diseased tissue and reconstruct the area, as it would be very extensive, risky, and disfiguring. The patient is being treated conservatively with an MRI, with a contrast performed approximately every six months and infusions of bisphosphonates. Despite the lesion’s size, the patient does not experience pain characteristic of dysplasia, and functions typically. Fibrous dysplasia of bone is a rare condition that presents with the most visually apparent manifestations, often mistaken for other bone conditions. Advanced diagnostic tools, like CT and MRI, are used to identify conditions affecting the ethmoid sinus more frequently. However, diagnostic errors often occur in imaging studies, leading to confusion. The most common period for clinical manifestations and diagnosis is around 10 years of age. The preferred approach in managing fibrous dysplasia involves symptomatic treatment, which can alleviate airway obstruction, restore normal globe position and visual function, and address physical deformities. Surgical intervention is recommended only for patients with severe functional impairment, progressive deformities, or malignant transformation. Full article
Show Figures

Figure 1: Histopathological findings. Tumor tissue composed of diffuse irregular, circular attenuated bone trabeculae on a background of fibrous tissue (H + E, ×100).
Figure 2: Differential diagnosis of craniofacial fibrous dysplasia [7].
Figure 3: The pattern of the cascade of events resulting from GNAS mutations leading to fibrous dysplasia (↑: rise).
Figure 4: CT image in: (A) sagittal projection, (B) frontal projection, (C,D) 3D image reconstructions.
22 pages, 11189 KiB  
Article
VUF-MIWS: A Visible and User-Friendly Watermarking Scheme for Medical Images
by Chia-Chen Lin, Yen-Heng Lin, En-Ting Chu, Wei-Liang Tai and Chun-Jung Lin
Electronics 2025, 14(1), 122; https://doi.org/10.3390/electronics14010122 - 30 Dec 2024
Viewed by 356
Abstract
The integration of Internet of Medical Things (IoMT) technology has revolutionized healthcare, allowing rapid access to medical images and enhancing remote diagnostics in telemedicine. However, this advancement raises serious cybersecurity concerns, particularly regarding unauthorized access and data integrity. This paper presents a novel, user-friendly, visible watermarking scheme for medical images—Visual and User-Friendly Medical Image Watermarking Scheme (VUF-MIWS)—designed to secure medical image ownership while maintaining usability for diagnostic purposes. VUF-MIWS employs a unique combination of inpainting and data hiding techniques to embed hospital logos as visible watermarks, which can be removed seamlessly once image authenticity is verified, restoring the image to its original state. Experimental results demonstrate the scheme’s robust performance, with the watermarking process preserving critical diagnostic information with high fidelity. The method achieved Peak Signal-to-Noise Ratios (PSNR) above 70 dB and Structural Similarity Index Measures (SSIM) of 0.99 for inpainted images, indicating minimal loss of image quality. Additionally, VUF-MIWS effectively restored the ROI region of medical images post-watermark removal, as verified through test cases with restored watermarked regions matching the original images. These findings affirm VUF-MIWS’s suitability for secure telemedicine applications. Full article
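The reported quality figures rest on standard metrics; PSNR, the one behind the "above 70 dB" claim, is a one-liner. This is the textbook definition, not code from the paper:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two same-shape images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

At 70 dB the mean squared error is a tiny fraction of a gray level, which is why the authors can argue the watermarking leaves diagnostic content essentially intact.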
Show Figures

Figure 1: Extra verification procedure for doctors.
Figure 2: Framework of Yu et al.'s enhanced generative inpainting framework [19].
Figure 3: Inpainting results of Yu et al.'s scheme [19]. (a) Original image; (b) Original image; (c) Original image; (d) Image (a) with a mask; (e) Image (b) with a mask; (f) Image (c) with a mask; (g) Inpainting results of (d) (PSNR = 26.45 dB); (h) Inpainting results of (e) (PSNR = 43.37 dB); (i) Inpainting results of (f) (PSNR = 54.21 dB).
Figure 4: The enhancement of Yu et al.'s [19] model and the details of the GAN network. (a) The enhancement of Yu et al.'s [19] model; (b) Generative Adversarial Network.
Figure 5: Framework of the proposed VUF-MIWS.
Figure 6: Flowchart of the recovery information generation phase.
Figure 7: Flowchart of the embedding phase.
Figure 8: The circular hiding path for embedding at the LL subband.
Figure 9: Flowchart of the watermark removal and restoration.
Figure 10: Eight medical test images. (a) 10.png; (b) 11.png; (c) 14.png; (d) 16.png; (e) 19.png; (f) 26.png; (g) 31.png; (h) 57.png.
Figure 11: Two datasets are used to test the stable performance of the proposed scheme. (a–d) are Dataset 1, images of the pituitary gland taken from back to front. (e–h) are Dataset 2, images of the pituitary gland taken from top to bottom.
Figure 12: Six general grayscale images sized 512 × 512 are used for the third experiment.
Figure 13: The logo with a size of 64 × 64. (a) NCUT logo; (b) Squirrel logo.
Figure 14: In the first and second experiments, nine sub-regions were designated as position candidates for the visible watermark.
Figure 15: Eight watermarked images. (a) Watermarked 10.png; (b) Watermarked 11.png; (c) Watermarked 14.png; (d) Watermarked 16.png; (e) Watermarked 19.png; (f) Watermarked 26.png; (g) Watermarked 31.png; (h) Watermarked 57.png.
Figure 16: The restored images. (a) Restored 10.png; (b) Restored 11.png; (c) Restored 14.png; (d) Restored 16.png; (e) Restored 19.png; (f) Restored 26.png; (g) Restored 31.png; (h) Restored 57.png.
Figure 17: Image recovery analysis. (a) Enlarged part of 11.png; (b) Enlarged watermarked version of (a); (c) Restored image of (b); (d) Enlarged part of 14.png; (e) Enlarged watermarked version of (d); (f) Restored image of (e).
Figure 18: Inpainting results analysis. (a) Inpainting results of Yu [19]; (b) Histogram analysis of Yu [19]; (c) Inpainting results of the proposed scheme; (d) Histogram analysis of the proposed scheme.
22 pages, 2055 KiB  
Article
Reversible Data Hiding in Absolute Moment Block Truncation Codes via Arithmetical and Logical Differential Coding
by Ching-Chun Chang, Yijie Lin, Jui-Chuan Liu and Chin-Chen Chang
Cryptography 2025, 9(1), 4; https://doi.org/10.3390/cryptography9010004 - 30 Dec 2024
Viewed by 216
Abstract
To reduce bandwidth usage in communications, absolute moment block truncation coding is employed to compress cover images. Confidential data are embedded into compressed images using reversible data-hiding technology for purposes such as image management, annotation, or authentication. As data size increases, enhancing embedding capacity becomes essential to accommodate larger volumes of secret data without compromising image quality or reversibility. Instead of using conventional absolute moment block truncation coding to encode each image block, this work proposes an effective reversible data-hiding scheme that enhances the embedding results by utilizing the traditional set of values: a bitmap, a high value, and a low value. In addition to the traditional set of values, a value is calculated using arithmetical differential coding and may be used for embedding. A process involving joint neighborhood coding and logical differential coding is applied to conceal the secret data in two of the three value tables, depending on the embedding capacity evaluation. An indicator is recorded to specify which two values are involved in the embedding process. The embedded secret data can be correctly extracted using a corresponding two-stage extraction process based on the indicator. To defeat the state-of-the-art scheme, bitmaps are also used as carriers in our scheme yet are compacted even more with Huffman coding. To reconstruct the original image, the low and high values of each block are reconstructed after data extraction. Experimental results show that our proposed scheme typically achieves an embedding rate exceeding 30%, surpassing the latest research by more than 2%. Our scheme reaches outstanding embedding rates while allowing the image to be perfectly restored to its original absolute moment block truncation coding form. Full article
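For context, classic AMBTC encodes each block as a bitmap plus two reconstruction levels, the triple this scheme embeds into. A textbook sketch of the baseline coder, not the paper's extended scheme:

```python
import numpy as np

def ambtc_encode(block):
    """Encode one grayscale block as (low, high, bitmap): pixels at or
    above the block mean map to 1 and are reconstructed as the mean of
    that group (high); the rest map to 0 and get the low group's mean."""
    block = block.astype(np.float64)
    bitmap = block >= block.mean()
    high = block[bitmap].mean()
    low = block[~bitmap].mean() if (~bitmap).any() else high
    return low, high, bitmap

def ambtc_decode(low, high, bitmap):
    # Rebuild the block from the two levels and the bitmap.
    return np.where(bitmap, high, low)
```

The proposed scheme adds a third value obtained via arithmetical differential coding and hides data in two of the three value tables; the sketch above shows only the baseline representation being extended.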
Show Figures

Figure 1: Flow of AMBTC compression phase.
Figure 2: Four reference values of h_{i,j} and their corresponding codes.
Figure 3: The multi-layer SM bitmap embedding. (a) The first layer; (b) The second layer.
Figure 4: Flow of our proposed scheme.
Figure 5: Test images: (a–d) standard grayscale images; (e–h) complex grayscale images; (i–l) color images.
24 pages, 6819 KiB  
Article
Three-Dimensional Reconstruction of Road Structural Defects Using GPR Investigation and Back-Projection Algorithm
by Lutai Wang, Zhen Liu, Xingyu Gu and Danyu Wang
Sensors 2025, 25(1), 162; https://doi.org/10.3390/s25010162 - 30 Dec 2024
Viewed by 349
Abstract
Ground-Penetrating Radar (GPR) has demonstrated significant advantages in the non-destructive detection of road structural defects due to its speed, safety, and efficiency. This paper proposes a three-dimensional (3D) reconstruction method for GPR images, integrating the back-projection (BP) imaging algorithm to accurately determine the size, location, and other parameters of road structural defects. Initially, GPR detection images were preprocessed, including direct wave removal and wavelet denoising, followed by the application of the BP algorithm to effectively restore the defect’s location and size. Subsequently, a 3D data set was constructed through interpolation, and the effective reflection data were extracted by using a clustering algorithm. This algorithm distinguished the effective reflection data from the background data by determining the distance threshold between the data points. The 3D imaging of the defect was then performed in MATLAB. The proposed method was validated using both gprMax simulations and laboratory test models. The experimental results indicate that the correlation between the reconstructed and actual defects was approximately 0.67, demonstrating the method’s efficacy in accurately achieving the 3D reconstruction of road structural defects. Full article
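The clustering step, separating effective reflection data from background by a distance threshold between data points, can be approximated with a two-centroid 1-D k-means over point amplitudes. This is a simplified stand-in under stated assumptions, not the paper's exact algorithm:

```python
import numpy as np

def two_means_split(values, iters=50):
    """Two-centroid 1-D k-means: returns a boolean mask marking samples
    assigned to the high-valued cluster. A simplified stand-in for the
    paper's step separating reflection data from background."""
    v = np.asarray(values, dtype=np.float64)
    c_lo, c_hi = v.min(), v.max()                  # centroids start at the extremes
    hi = np.abs(v - c_hi) < np.abs(v - c_lo)
    for _ in range(iters):
        hi = np.abs(v - c_hi) < np.abs(v - c_lo)   # assign to nearest centroid
        new_lo = v[~hi].mean() if (~hi).any() else c_lo
        new_hi = v[hi].mean() if hi.any() else c_hi
        if (new_lo, new_hi) == (c_lo, c_hi):       # converged
            break
        c_lo, c_hi = new_lo, new_hi
    return hi
```

Points in the high cluster would then be kept as candidate reflection data for the 3D interpolation and MATLAB rendering stages the abstract describes.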
Show Figures

Figure 1: Three-dimensional reconstruction process for road structural defects.
Figure 2: Principle of GPR detection.
Figure 3: Three-level wavelet decomposition.
Figure 4: GPR image of the underground cavity model.
Figure 5: Principle of BP algorithm imaging.
Figure 6: BP imaging.
Figure 7: Basic flow of K-means clustering algorithm.
Figure 8: Defect model.
Figure 9: B-Scan images.
Figure 10: Results of BP imaging.
Figure 11: Results of 3D reconstruction.
Figure 12: Laboratory test model.
Figure 13: IDS-RIS GPR.
Figure 14: B-Scan images and processing results of laboratory test.
Figure 15: Radar used in detection.
Figure 16: Result of 3D reconstruction on the actual road.
Figure 17: Core sample.
24 pages, 9570 KiB  
Article
Fringe Texture Driven Droplet Measurement End-to-End Network Based on Physics Aberrations Restoration of Coherence Scanning Interferometry
by Zhou Zhang, Jiankui Chen, Hua Yang and Zhouping Yin
Micromachines 2025, 16(1), 42; https://doi.org/10.3390/mi16010042 - 30 Dec 2024
Viewed by 263
Abstract
Accurate and efficient measurement of deposited droplets’ volume is vital to achieve zero-defect manufacturing in inkjet printed organic light-emitting diode (OLED), but it remains a challenge due to droplets’ featurelessness. In our work, coherence scanning interferometry (CSI) is utilized to measure the volume. However, the CSI redundant sampling and image degradation led by the sample’s transparency decrease the efficiency and accuracy. Based on the prior degradation and strong representation for context, a novel method, volume measurement via fringe distribution module (VMFD), is proposed to directly measure the volume by single interferogram without redundant sampling. Firstly, the 3D point spread function (PSF) for CSI imaging is modeling to relate the degradation and image. Secondly, the Zernike to PSF (ZTP) module is proposed to efficiently compute the aberrations to PSF. Then, a physics aberration restoration network (PARN) is designed to remove the degradation via the channel Transformer and U-net architecture. The long term context is learned by PARN and beneficial to restoration. The restored fringes are used to measure the droplet’s volume by constrained regression network (CRN) module. Finally, the performances on public datasets and the volume measurement experiments show the promising deblurring, measurement precision and efficiency. Full article
Show Figures

Figure 1

Figure 1
<p>Schematic of inkjet printing in OLED manufacturing. Nozzles with different jetting volumes are directed by motion planning to the corresponding positions so that the deposited droplets mix to the target volume.</p>
Full article
Figure 2
<p>Features of deposited droplets in OLED. (<b>a</b>) Smooth surface with few texture features. (<b>b</b>) Droplets with various deposited shapes determined by the pixel shapes. (<b>c</b>) Cross-scale difference between the deposited droplets and the panel. (<b>d</b>) Non-contact measurement to avoid ink contamination.</p>
Full article
Figure 3
<p>(<b>a</b>) Measuring the curvature from the fringe distribution of Newton&#8217;s rings; (<b>b</b>) measuring the volume of a deposited droplet by 3D reconstruction based on redundant sampling or by fringe mapping based on a single interferogram.</p>
Full article
Figure 4
<p>The physics aberrations caused by (<b>a</b>) the optical system and (<b>b</b>) the characteristics of the transparent sample.</p>
Full article
Figure 5
<p>The imaging process for a deposited-droplet interferogram sequence.</p>
Full article
Figure 6
<p>The framework of VMFD, comprising three modules: ZTP, PARN and CRN. The ZTP module transforms phase aberrations into the PSF. PARN is trained to remove the blur. CRN measures the volume from a single restored interferogram.</p>
Full article
Figure 7
<p>Scheme of TN (CAT+MCFN). PARN is based on a U-net architecture composed of TN blocks that deblur at different scales via the channel Transformer.</p>
Full article
Figure 8
<p>The difference in fringe distribution in a single interferogram at different sampling heights.</p>
Full article
Figure 9
<p>The encoder-decoder structure of CRN.</p>
Full article
Figure 10
<p>The volume measurement regression network of CRN.</p>
Full article
Figure 11
<p>Visual comparisons on the dataset DIV2K [<a href="#B61-micromachines-16-00042" class="html-bibr">61</a>]. PARN achieves the best restoration.</p>
Full article
Figure 12
<p>Visual comparisons on the dataset Set5 [<a href="#B63-micromachines-16-00042" class="html-bibr">63</a>]. PARN restores a sharper image.</p>
Full article
Figure 13
<p>Visual comparisons on the dataset Set14 [<a href="#B64-micromachines-16-00042" class="html-bibr">64</a>]. PARN better captures contextual features to deblur the image.</p>
Full article
Figure 14
<p>Visual comparisons on the dataset BSD100 [<a href="#B62-micromachines-16-00042" class="html-bibr">62</a>]. The image restored by PARN is visually closer to the ground truth (GT).</p>
Full article
Figure 15
<p>OLED inkjet printing manufacturing equipment. (<b>a</b>) Inkjet printer. (<b>b</b>) Droplet measurement system with Mirau objective and deposited-droplet fabrication system. (<b>c</b>) Droplet weighing experimental setup via QCM.</p>
Full article
Figure 16
<p>Comparison of interferograms restored by PARN and other methods.</p>
Full article
Figure 17
<p>Droplet measurement results with and without PARN compared to the weighing result by QCM. Ori: without restoration. Res: restored by PARN. MeanOri: the average volume of the original group. MeanRes: the average volume of the restored group.</p>
Full article
Figure 18
<p>Comparison of measurement error and time consumption for different methods and the traditional scanning method.</p>
Full article
20 pages, 7164 KiB  
Article
A Method for Borehole Image Reverse Positioning and Restoration Based on Grayscale Characteristics
by Shuangyuan Chen, Zengqiang Han, Yiteng Wang, Yuyong Jiao, Chao Wang and Jinchao Wang
Appl. Sci. 2025, 15(1), 222; https://doi.org/10.3390/app15010222 - 30 Dec 2024
Viewed by 215
Abstract
Borehole imaging technology is a critical means for the meticulous measurement of rock mass structures. However, the inherent issue of probe eccentricity significantly compromises the quality of borehole images obtained during testing. This paper proposes a method based on grayscale feature analysis for [...] Read more.
Borehole imaging technology is a critical means for the meticulous measurement of rock mass structures. However, the inherent issue of probe eccentricity significantly compromises the quality of borehole images obtained during testing. This paper proposes a method based on grayscale feature analysis for reverse positioning of imaging probes and image restoration. An analysis of the response characteristics of probe eccentricity was conducted, leading to the development of a grayscale feature model and a method for reverse positioning analysis. By calculating the error matrix from the probe’s spatial trajectory, this method corrects the grayscale errors caused by probe eccentricity and restores the images. Quantitative analysis was conducted on the azimuthal errors in borehole images caused by probe eccentricity, establishing a method for correcting image perspective errors based on probe spatial-positioning calibration. Results indicate significant enhancement in the effectiveness and measurement accuracy of borehole images. Full article
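The grayscale correction idea in the abstract, removing an eccentricity-induced brightness bias with an offset (error) matrix, can be illustrated with a simple additive model on an unrolled borehole image, where each image column corresponds to one azimuth. This is a hedged sketch under that assumed model, not the paper's algorithm; the function name and the per-column averaging are illustrative.

```python
import numpy as np

def restore_grayscale(image):
    """Remove azimuth-dependent brightness bias from an unrolled borehole image.

    In an unrolled (panoramic) borehole image, each column corresponds to one
    azimuth. An eccentric probe brightens the near wall and darkens the far
    wall, so we estimate a per-column offset from the column means and
    subtract it (a simple additive error model).
    """
    img = image.astype(float)
    col_mean = img.mean(axis=0)           # azimuthal brightness profile
    offset = col_mean - col_mean.mean()   # grayscale offset per azimuth
    restored = np.clip(img - offset[np.newaxis, :], 0, 255)
    return restored.astype(np.uint8)
```

In the paper the offset is derived from the probe's reconstructed 3D trajectory rather than from the image statistics alone; the column-mean estimate above is only a stand-in for that error matrix.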
Show Figures

Figure 1

Figure 1
<p>Digital panoramic borehole camera system.</p>
Full article
Figure 2
<p>Schematic of panoramic image transformation.</p>
Full article
Figure 3
<p>Brightness variation bands in a borehole image.</p>
Full article
Figure 4
<p>Imaging principle of the borehole camera. 1&#8212;Borehole wall, 2&#8212;imaging probe, 3&#8212;CMOS camera, 4&#8212;light source, 5&#8212;truncated cone mirror.</p>
Full article
Figure 5
<p>Coordinate system of the borehole wall and probe.</p>
Full article
Figure 6
<p>Algorithm flow for reverse positioning of the borehole imaging probe.</p>
Full article
Figure 7
<p>Typical borehole image under probe eccentricity.</p>
Full article
Figure 8
<p>Regression analysis curve at depth 24.5 m. (<b>a</b>) Regression before fixing <span class="html-italic">λ</span>; (<b>b</b>) regression after fixing <span class="html-italic">λ</span>.</p>
Full article
Figure 9
<p>Estimation of parameter λ and sample mean.</p>
Full article
Figure 10
<p>Estimated 3D trajectory of the probe in the borehole.</p>
Full article
Figure 11
<p>Result of color space transfer.</p>
Full article
Figure 12
<p>Grayscale error caused by probe eccentricity. (<b>a</b>) The probe working centered in the borehole; (<b>b</b>) borehole image with the probe centered in the borehole; (<b>c</b>) the probe working off-center in the borehole; (<b>d</b>) borehole image with the probe eccentrically positioned in the borehole.</p>
Full article
Figure 13
<p>Calculation result of the grayscale offset matrix.</p>
Full article
Figure 14
<p>Borehole image after grayscale restoration.</p>
Full article
Figure 15
<p>Grayscale histogram of the borehole image. (<b>a</b>) Histogram before restoration; (<b>b</b>) histogram after restoration.</p>
Full article
Figure 16
<p>Perspective error caused by probe eccentricity. (<b>a</b>) The probe working centered in the borehole; (<b>b</b>) borehole image with the probe centered in the borehole; (<b>c</b>) the probe working off-center in the borehole; (<b>d</b>) borehole image with the probe eccentrically positioned in the borehole.</p>
Full article
Figure 17
<p>Probe coordinate system and borehole coordinate system.</p>
Full article
Figure 18
<p>Calculation result of the perspective offset matrix.</p>
Full article
Figure 19
<p>Borehole image after grayscale and perspective restoration.</p>
Full article
7 pages, 793 KiB  
Case Report
The Use of REBOA in a Zone Trauma Center Emergency Department for the Management of Massive Hemorrhages Secondary to Major Trauma, with Subsequent Transfer to a Level 1 Trauma Center for Surgery After Hemodynamic Stabilization
by Iacopo Cappellini, Alessio Baldini, Maddalena Baraghini, Maurizio Bartolucci, Stefano Cantafio, Antonio Crocco, Matteo Zini, Simone Magazzini, Francesco Menici, Vittorio Pavoni and Franco Lai
Emerg. Care Med. 2025, 2(1), 1; https://doi.org/10.3390/ecm2010001 - 27 Dec 2024
Viewed by 388
Abstract
Introduction: Non-compressible torso hemorrhage (NCTH) is a major cause of preventable mortality in trauma, particularly when immediate surgical intervention is not available. Resuscitative Endovascular Balloon Occlusion of the Aorta (REBOA) has emerged as a promising technique to control severe hemorrhaging and stabilize patients [...] Read more.
Introduction: Non-compressible torso hemorrhage (NCTH) is a major cause of preventable mortality in trauma, particularly when immediate surgical intervention is not available. Resuscitative Endovascular Balloon Occlusion of the Aorta (REBOA) has emerged as a promising technique to control severe hemorrhaging and stabilize patients until definitive surgical care can be performed. Case Presentation: We report the case of a 45-year-old woman who sustained multiple traumatic injuries—including thoracic, pelvic, and aortic damage—after a fall from approximately 5 m in an apparent suicide attempt. She arrived at a secondary-level trauma center in profound hemorrhagic shock, unresponsive to standard resuscitation. Interventions: As the patient’s condition deteriorated to cardiac arrest, an emergent REBOA procedure was performed by emergency physicians. This intervention rapidly restored hemodynamic stability, enabling damage control resuscitation and safe transfer to a Level 1 Trauma Center for definitive surgical management, including thoracic endovascular aortic repair and splenectomy. Outcomes: After prolonged intensive care, the patient recovered sufficiently to be discharged for rehabilitation. This case illustrates the life-saving potential of early REBOA deployment in a non-surgical, resource-limited setting to bridge patients to definitive care. Conclusions: This case supports integrating REBOA into emergency trauma protocols, particularly in centers without immediate surgical capabilities. Further research is warranted to refine REBOA deployment strategies, balloon positioning, patient selection, and the role of imaging guidance. Full article
Show Figures

Figure 1

Figure 1
<p>A comprehensive overview of the patient’s clinical course, from prehospital management to definitive care. The visualization underscores the timely decision-making and procedural interventions critical to the patient’s survival. The placement of REBOA during cardiac arrest and its role in achieving temporary hemodynamic stabilization are particularly noteworthy, demonstrating its utility as a bridge to advanced surgical care.</p>
Full article
Figure 2
<p>Correct placement of the REBOA in Zone 1 (blind insertion by assessing the device’s centimeter scale). Device insertion during chest compressions with the LUCAS automatic CPR device (patient in cardiac arrest).</p>
Full article
21 pages, 66390 KiB  
Article
Photorealistic Texture Contextual Fill-In
by Radek Richtr
Heritage 2025, 8(1), 9; https://doi.org/10.3390/heritage8010009 - 27 Dec 2024
Viewed by 277
Abstract
This paper presents a comprehensive study of the application of AI-driven inpainting techniques to the restoration of historical photographs of the Czech city Most, with a focus on restoration and reconstructing the lost architectural heritage. The project combines state-of-the-art methods, including generative adversarial [...] Read more.
This paper presents a comprehensive study of the application of AI-driven inpainting techniques to the restoration of historical photographs of the Czech city Most, with a focus on restoration and reconstructing the lost architectural heritage. The project combines state-of-the-art methods, including generative adversarial networks (GANs), patch-based inpainting, and manual retouching, to restore and enhance severely degraded images. The reconstructed/restored photographs of the city Most offer an invaluable visual representation of a city that was largely destroyed for industrial purposes in the 20th century. Through a series of blind and informed user tests, we assess the subjective quality of the restored images and examine how knowledge of edited areas influences user perception. Additionally, this study addresses the technical challenges of inpainting, including computational demands, interpretability, and bias in AI models. Ethical considerations, particularly regarding historical authenticity and speculative reconstruction, are also discussed. The findings demonstrate that AI techniques can significantly contribute to the preservation of cultural heritage, but must be applied with careful oversight to maintain transparency and cultural integrity. Future work will focus on improving the interpretability and efficiency of these methods, while ensuring that reconstructions remain historically and culturally sensitive. Full article
(This article belongs to the Section Cultural Heritage)
Show Figures

Figure 1

Figure 1
<p>Examples of original archive photos of Most of varying quality.</p>
Full article
Figure 2
<p>Examples of the ability to reconstruct color and restore an artificially damaged photograph.</p>
Full article
Figure 3
<p>(<b>Left</b>): A sample photo where shading obscures much of the building, making reconstruction difficult. The appearance of a large part of the building is unknown and cannot be recovered even from historical photographs; (<b>Right</b>): Objects removed and replaced with one of the possible reconstructions of the obscured content.</p>
Full article
Figure 4
<p>Four possible results of filling the obscured area using the content-aware fill method [<a href="#B6-heritage-08-00009" class="html-bibr">6</a>].</p>
Full article
Figure 5
<p>Reference data for the colorization process. Unfortunately, the number of similar, usually hand-colored, photographs is extremely small.</p>
Full article
Figure 6
<p>Samples of several colorized complex photographs of the city of Most.</p>
Full article
Figure 7
<p>(<b>First row</b>): Original black-and-white photograph and reconstructed color image without obscuring objects; (<b>Second row left</b>): colorized photo with obstructing objects marked; (<b>rest</b>): objects removed step by step, with content supplied by generative AI. Objects must be removed in order, from the most distant obscuring object to the object closest to the reconstructed one.</p>
Full article
Figure 8
<p>Two possible colorization results for the shop signs on Peace Square.</p>
Full article
Figure 9
<p>(<b>Left</b>): Informed user test of the quality of photo synthesis for Most (selection of twenty random photos of Peace Square and adjacent streets); (<b>Right</b>): Blind user test of the same selection. In both, a value of 1 is the minimum (unsuccessful retouching, obvious manipulation) and 5 is the maximum (high-quality, successful retouching with imperceptible manipulation).</p>
Full article
Figure 10
<p>Retouched photo example with color-coded overlays. Each colored region indicates an area where a significant object was removed and subsequently reconstructed.</p>
Full article
13 pages, 4502 KiB  
Article
In Vitro Investigation of Novel Peptide Hydrogels for Enamel Remineralization
by Codruta Sarosi, Alexandrina Muntean, Stanca Cuc, Ioan Petean, Sonia Balint, Marioara Moldovan and Aurel George Mohan
Gels 2025, 11(1), 11; https://doi.org/10.3390/gels11010011 - 27 Dec 2024
Viewed by 269
Abstract
This study investigates the microstructure of dental enamel following demineralization and remineralization processes, using DIAGNOdent scores, images obtained via scanning electron microscopy (SEM) and atomic force microscopy (AFM), and Vickers microhardness measurements. The research evaluates the effects of two experimental hydrogels, Anti-Amelogenin isoform X [...] Read more.
This study investigates the microstructure of dental enamel following demineralization and remineralization processes, using DIAGNOdent scores, images obtained via scanning electron microscopy (SEM) and atomic force microscopy (AFM), and Vickers microhardness measurements. The research evaluates the effects of two experimental hydrogels, Anti-Amelogenin isoform X (ABT260, S1) and Anti-Kallikrein L1 (K3014, S2), applied to demineralized enamel surfaces over periods of 14 and 21 days. The study involved 60 extracted teeth, free from cavities or other lesions, divided into four groups: a positive group (+), a negative group (−), and groups S1 and S2. The last three groups underwent demineralization with 37% phosphoric acid for 20 min. The negative group (−) received no remineralization treatment. The DIAGNOdent scores indicate that the S1 group treated with Anti-Amelogenin is more effective in remineralizing the enamel surface than the S2 group treated with Anti-Kallikrein. These findings were corroborated by SEM and AFM images, which revealed elongated hydroxyapatite (HAP) nanoparticles integrated into the demineralized structures. Demineralization reduced enamel microhardness to about one third of that of healthy enamel. Both tested hydrogels restored enamel hardness, with S1 being more effective than S2. Both peptides facilitated the interaction between the newly added minerals and residual protein binders on the enamel surface, thereby contributing to effective enamel restoration. Full article
Show Figures

Figure 1

Figure 1
<p>DIAGNOdent evaluation of the enamel surface.</p>
Full article
Figure 2
<p>SEM images of the general aspect of the enamel surface microstructure for (<b>a</b>) healthy—untreated, (<b>b</b>) demineralized, (<b>c</b>) treated with S1 for 14 days, (<b>d</b>) treated with S2 for 14 days, (<b>e</b>) treated with S1 for 21 days, and (<b>f</b>) treated with S2 for 21 days.</p>
Full article
Figure 3
<p>AFM topographic images of the control samples: positive control sample—healthy untreated enamel: (<b>a</b>) fine microstructure, (<b>b</b>) nanostructure, and negative control sample—demineralized enamel: (<b>c</b>) fine microstructure, (<b>d</b>) nanostructure. The tridimensional profile is presented below each topographic image.</p>
Full article
Figure 4
<p>AFM topographic images of the treated enamel surface: S1 for 14 days: (<b>a</b>) fine microstructure, (<b>b</b>) nanostructure; S1 for 21 days: (<b>c</b>) fine microstructure, (<b>d</b>) nanostructure; S2 for 14 days: (<b>e</b>) fine microstructure, (<b>f</b>) nanostructure; S2 for 21 days: (<b>g</b>) fine microstructure, (<b>h</b>) nanostructure. The tridimensional profile is presented below each topographic image.</p>
Full article
Figure 5
<p>Roughness variation at (<b>a</b>) the fine microstructure level and (<b>b</b>) the nanostructure level. Error bars represent standard deviation.</p>
Full article
Figure 6
<p>Mean hardness variation with the applied treatments. Error bars represent standard deviation.</p>
Full article
Figure 7
<p>Sample size distribution scheme.</p>
Full article
25 pages, 7197 KiB  
Article
Performance Restoration of Chemically Recycled Carbon Fibres Through Surface Modification with Sizing
by Dionisis Semitekolos, Sofia Terzopoulou, Silvia Zecchi, Dimitrios Marinis, Ergina Farsari, Eleftherios Amanatides, Marcin Sajdak, Szymon Sobek, Weronika Smok, Tomasz Tański, Sebastian Werle, Alberto Tagliaferro and Costas Charitidis
Polymers 2025, 17(1), 33; https://doi.org/10.3390/polym17010033 - 26 Dec 2024
Viewed by 472
Abstract
The recycling of Carbon Fibre-Reinforced Polymers (CFRPs) is becoming increasingly crucial due to the growing demand for sustainability in high-performance industries such as automotive and aerospace. This study investigates the impact of two chemical recycling techniques, chemically assisted solvolysis and plasma-enhanced solvolysis, on [...] Read more.
The recycling of Carbon Fibre-Reinforced Polymers (CFRPs) is becoming increasingly crucial due to the growing demand for sustainability in high-performance industries such as automotive and aerospace. This study investigates the impact of two chemical recycling techniques, chemically assisted solvolysis and plasma-enhanced solvolysis, on the morphology and properties of carbon fibres (CFs) recovered from end-of-life automotive parts. In addition, the effects of fibre sizing are explored to enhance the performance of the recycled carbon fibres (rCFs). The surface morphology of the fibres was characterised using Scanning Electron Microscopy (SEM), and their structural integrity was assessed through Thermogravimetric Analysis (TGA) and Raman spectroscopy. An automatic analysis method based on optical microscopy images was also developed to quantify filament loss during the recycling process. Mechanical testing of single fibres and yarns showed that although rCFs from both recycling methods exhibited a ~20% reduction in tensile strength compared to reference fibres, the application of sizing significantly mitigated these effects (~10% reduction). X-ray Photoelectron Spectroscopy (XPS) further confirmed the introduction of functional oxygen-containing groups on the fibre surface, which improved fibre-matrix adhesion. Overall, the results demonstrate that plasma-enhanced solvolysis was more effective at fully decomposing the resin, while the subsequent application of sizing enhanced the mechanical performance of rCFs, restoring their properties closer to those of virgin fibres. Full article
Show Figures

Figure 1

Figure 1
<p>Plasma reactor set-up.</p>
Full article
Figure 2
<p>Fibre sizing line.</p>
Full article
Figure 3
<p>SEM images of (<b>a</b>) Ref_CF and (<b>b</b>) EDS spot, (<b>c</b>) Ch_rCF and (<b>d</b>) EDS spot, (<b>e</b>) Pl_rCF and (<b>f</b>) EDS spot, (<b>g</b>) Sized_Ch_rCF and (<b>h</b>) EDS spot, (<b>i</b>) Sized_Pl_rCF and (<b>j</b>) EDS spot.</p>
Full article
Figure 4
<p>TGA results of Ref_CFs, Sized_Pl_rCFs and Sized_Ch_rCFs.</p>
Full article
Figure 5
<p>Raman spectra of (<b>a</b>) Ref_CFs, Pl_rCFs, and Ch_rCFs and deconvoluted spectra illustrating fitting signals for (<b>b</b>) Ref_CFs, (<b>c</b>) Pl_rCFs and (<b>d</b>) Ch_rCF.</p>
Full article
Figure 6
<p>(<b>a</b>) Ref_CF, (<b>b</b>) Pl_rCF, and (<b>c</b>) Ch_rCF.</p>
Full article
Figure 7
<p>(<b>a</b>) Ref_CF from Olympus software, (<b>b</b>) Ref_CF from Python, and (<b>c</b>) Ref_CF from ImageJ software.</p>
Full article
Figure 8
<p>HR spectra of Ref_CF.</p>
Full article
Figure 9
<p>HR spectra of Pl_rCF.</p>
Full article
Figure 10
<p>HR spectra of Sized_Pl_rCF.</p>
Full article
25 pages, 6883 KiB  
Article
Hybrid Frequency–Spatial Domain Learning for Image Restoration in Under-Display Camera Systems Using Augmented Virtual Big Data Generated by the Angular Spectrum Method
by Kibaek Kim, Yoon Kim and Young-Joo Kim
Appl. Sci. 2025, 15(1), 30; https://doi.org/10.3390/app15010030 - 24 Dec 2024
Viewed by 263
Abstract
In the rapidly advancing realm of mobile technology, under-display camera (UDC) systems have emerged as a promising solution for achieving seamless full-screen displays. Despite their innovative potential, UDC systems face significant challenges, including low light transmittance and pronounced diffraction effects that degrade image [...] Read more.
In the rapidly advancing realm of mobile technology, under-display camera (UDC) systems have emerged as a promising solution for achieving seamless full-screen displays. Despite their innovative potential, UDC systems face significant challenges, including low light transmittance and pronounced diffraction effects that degrade image quality. This study aims to address these issues by examining degradation phenomena through optical simulation and employing a deep neural network model incorporating hybrid frequency–spatial domain learning. To effectively train the model, we generated a substantial synthetic dataset that virtually simulates the unique image degradation characteristics of UDC systems, utilizing the angular spectrum method for optical simulation. This approach enabled the creation of a diverse and comprehensive dataset of virtual degraded images by accurately replicating the degradation process from pristine images. The augmented virtual data were combined with actual degraded images as training data, compensating for the limitations of real data availability. Through our proposed methods, we achieved a marked improvement in image quality, with the average structural similarity index measure (SSIM) value increasing from 0.8047 to 0.9608 and the peak signal-to-noise ratio (PSNR) improving from 26.383 dB to 36.046 dB on an experimentally degraded image dataset. These results highlight the potential of our integrated optics and AI-based methodology in addressing image restoration challenges within UDC systems and advancing the quality of display technology in smartphones. Full article
(This article belongs to the Special Issue Advances in Image Enhancement and Restoration Technology)
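The angular spectrum method named in the abstract propagates a complex optical field by filtering its spatial-frequency spectrum with the free-space transfer function H = exp(i·kz·z), where kz = sqrt(k² − kx² − ky²). Below is a minimal sketch of that propagation step, assuming a square sampling grid and suppressing evanescent components; the function name and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z via the angular spectrum method.

    U(z) = IFFT( FFT(U(0)) * H ), with transfer function H = exp(i*kz*z) and
    kz = sqrt(k^2 - kx^2 - ky^2). Evanescent components (kz^2 < 0) are dropped.
    Units of wavelength, dx, and z must match (e.g., metres).
    """
    n = field.shape[0]
    k = 2.0 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)                    # spatial frequencies, cycles/unit
    kx, ky = np.meshgrid(2.0 * np.pi * fx, 2.0 * np.pi * fx)
    kz_sq = k**2 - kx**2 - ky**2
    kz = np.sqrt(np.maximum(kz_sq, 0.0))
    H = np.exp(1j * kz * z) * (kz_sq > 0)           # suppress evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Propagating the display pixel pattern's transmitted field this way yields the diffracted PSF, from which virtually degraded images can be synthesized by convolution with pristine ones, as the study describes.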
Show Figures

Figure 1

Figure 1
<p>Conceptual pixel layouts of (<b>a</b>) a typical punch-hole camera system and (<b>b</b>) a UDC system, and their respective image quality.</p>
Full article
Figure 2
<p>Overall flowchart of the proposed method for image restoration in the UDC system.</p>
Full article
Figure 3
<p>Mimicking the UDC panel structure for the experiment: (<b>a</b>) observed image from a commercial smartphone and (<b>b</b>) fabricated UDC panel pattern.</p>
Full article
Figure 4
<p>Setup for the simulation and optical experiment to observe PSF fields generated by the UDC panel.</p>
Full article
Figure 5
<p>(<b>a</b>) Camera spectral sensitivity and (<b>b</b>) simulated 3ch PSF fields.</p>
Full article
Figure 6
<p>Representation of k-space regions showing lost or altered data for four different degraded sample images and the averaged loss across the entire dataset.</p>
Full article
Figure 7
<p>Process of generating virtual degraded images.</p>
Full article
Figure 8
<p>Image acquisition with a fixed camera and holding frame (<b>a</b>) without the UDC panel and (<b>b</b>) with the UDC panel.</p>
Full article
Figure 9
<p>Image restoration result: (<b>a</b>) Visual comparison of pristine images (green), degraded images (red), images restored by the cGAN (yellow), and images restored by the hybrid domain learning cGAN (blue) in cases of validation using a virtual validation dataset. Quantified results of the image restoration quality based on (<b>b</b>) SSIM index and (<b>c</b>) PSNR index for the virtual validation dataset.</p>
Full article
Figure 10
<p>Image restoration result: (<b>a</b>) Visual comparison of pristine images (green), degraded images (red), images restored by the cGAN (yellow), and images restored by the hybrid domain learning cGAN (blue) in cases of validation using an experimentally obtained degraded dataset. Quantified results of the image restoration quality based on (<b>b</b>) SSIM index and (<b>c</b>) PSNR index for the experimentally obtained degraded dataset.</p>
Full article
Figure A1
<p>Common architecture of the generator and the discriminator for both frequency and spatial domains in the hybrid domain learning framework. The numbers above each tensor (N, M<sup>2</sup>) represent the tensor structure, with N dimensions and a size of M × M.</p>
Full article
Figure A2
<p>The architecture of a single cGAN network unit used in both the frequency and spatial domains.</p>
Full article