QRNet: A Quaternion-Based Retinex Framework for Enhanced Wireless Capsule Endoscopy Image Quality
Figure 1. An overview of the proposed QRNet pipeline for low-light WCE enhancement. The input image is decomposed into two feature maps Q1 (reflectance) and Q2 (illumination), each processed by separate Mamba-based state-space (SSM) encoder–decoder branches. Wavelet-transformed sub-bands guide an attention mechanism that refines both reflectance and illumination features. Finally, the enhanced quaternions are combined via the Hamilton product to generate the output image with improved contrast and preserved color fidelity.
Figure 2. Visual comparison on the Kvasir-Capsule dataset. GT is the ground truth. Our method preserves natural color balance and detail better than competing approaches.
Figure 3. Visual comparison on the RLE dataset. Ground truth (top-left) vs. various enhancement methods. Our framework balances brightness and color fidelity, aiding lesion visibility.
Abstract
1. Introduction
- Limited illumination in GI environments, resulting in underexposed anatomical features;
- Contrast problems from overexposure and reflective surfaces;
- Color distortion due to independent channel processing;
- Loss of inter-channel relationships critical for accurate diagnosis;
- The trade-off between enhancement and preservation of diagnostic features.
- A specialized Mamba-based network architecture optimized for endoscopic environments reduces computational overhead through efficient parameter usage, enables real-time processing on resource-limited endoscopic devices, maintains high enhancement quality while using 30% fewer parameters than conventional architectures, and facilitates deployment in clinical settings where computing resources are constrained.
- A novel Retinex framework for comprehensive RGB channel processing: preserves crucial diagnostic details through quaternion-valued representation; enhances color accuracy by separately handling reflectance and illumination components; reduces color distortion common in traditional enhancement methods; and improves visibility of subtle tissue variations and lesions through better color fidelity.
- A complete framework for endoscopic image enhancement with a custom loss function: delivers consistently superior results across multiple objective quality metrics; provides enhanced visual clarity that aids in detecting subtle pathological changes; improves diagnostic accuracy by highlighting clinically relevant features; and maintains natural tissue appearance while enhancing the visibility of essential structures.
- Extensive evaluation: experiments on the Kvasir-Capsule (2000 images) and Red Lesion Endoscopy (1500 images) datasets demonstrate significant improvements: PSNR improvement (+2.3 dB), SSIM enhancement (+0.089), LPIPS reduction (−0.126), and lesion segmentation accuracy (+5%). Ablation studies validate the effectiveness of the quaternion representation in color preservation, which is particularly beneficial for detecting early-stage lesions in challenging lighting conditions.
2. Literature Review
3. Materials and Methods
3.1. Learning in Decomposition Transform Domain
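To make the decomposition-transform domain concrete, a single-level 2-D Haar wavelet transform can be sketched as follows. This is a minimal NumPy illustration; the averaging-based normalization is one common convention and is an assumption here, not necessarily the paper's exact filter scaling.

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2-D Haar decomposition of an array with even sides.

    Returns four sub-bands (LL, LH, HL, HH), each half the input size.
    Uses average/difference filters (normalization is illustrative).
    """
    x = np.asarray(x, dtype=float)
    # Low-pass / high-pass along rows (pairwise average / difference)
    a = (x[0::2, :] + x[1::2, :]) / 2.0
    d = (x[0::2, :] - x[1::2, :]) / 2.0
    # Then along columns, yielding the four sub-bands
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0  # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0  # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0  # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0  # diagonal detail
    return ll, lh, hl, hh
```

On a flat (constant) image, all detail sub-bands vanish and LL reproduces the constant, which is the property that lets the detail bands highlight edges and texture.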
3.2. Proposed Framework
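As Figure 1 outlines, the enhanced reflectance and illumination quaternions are recombined via the Hamilton product. For reference, here is the standard Hamilton product for quaternions written as (w, x, y, z); in a quaternion image a pixel is typically embedded as (0, R, G, B) and the same formula is applied per pixel (a sketch, not the paper's exact implementation).

```python
import numpy as np

def hamilton(q1, q2):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,  # real part
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,  # i component
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,  # j component
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,  # k component
    ])
```

Because the product is non-commutative (ij = k but ji = −k), it couples all channels jointly, which is what preserves the inter-channel color relationships that per-channel processing loses.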
- The current feature map comes from either the reflectance branch or the illumination branch.
- The Haar wavelet sub-band features are computed in the decomposition stage.
- Conv(⋅) is a learnable convolution that fuses these inputs to produce attention weights.
- σ(⋅) is a sigmoid (or similar) activation that normalizes the weights.
- ⊙ denotes elementwise multiplication.
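Putting these pieces together, the gating can be sketched as a 1×1 convolution over the concatenated features, a sigmoid, and an elementwise product. This NumPy sketch assumes illustrative tensor shapes and names (e.g., `wavelet_attention`); it is not the paper's exact implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def wavelet_attention(feat, wave, weights, bias):
    """Wavelet-guided attention gate (sketch).

    feat:    (C, H, W) branch feature map
    wave:    (Cw, H, W) wavelet sub-band features at the same spatial size
    weights: (C, C + Cw) weights of a 1x1 convolution (hypothetical shape)
    bias:    (C,) convolution bias
    """
    stacked = np.concatenate([feat, wave], axis=0)  # fuse both inputs
    # A 1x1 convolution is a per-pixel linear map over channels
    attn = np.einsum('oc,chw->ohw', weights, stacked) + bias[:, None, None]
    return sigmoid(attn) * feat  # normalized weights gate the features
```

With zero weights the gate is sigmoid(0) = 0.5 everywhere, i.e., a uniform half-pass; training moves the gate toward emphasizing wavelet-indicated detail regions.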
3.3. Loss Function
- The pixel-wise reconstruction loss directly compares the enhanced output with the ground truth.
- The perceptual loss uses pretrained feature networks (a VGG16 encoder) to encourage perceptual similarity in high-level representations [36].
- The frequency-domain loss is computed by taking the 2D FFT of the output and the ground truth and measuring their discrepancy in log-magnitude space [37]. This helps mitigate color banding and enforces consistent global illumination.
- The SSIM loss enforces structural similarity at a global scale [38].
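As an illustration, the pixel-wise and frequency-domain terms can be sketched as below. The term weights `w_pix` and `w_fft` are hypothetical placeholders, and the perceptual and SSIM terms are omitted here because they require a pretrained VGG16 and windowed statistics, respectively.

```python
import numpy as np

def fft_log_magnitude_loss(pred, gt, eps=1e-8):
    """L1 discrepancy between 2-D FFT log-magnitudes (frequency-domain term)."""
    mp = np.log(np.abs(np.fft.fft2(pred)) + eps)
    mg = np.log(np.abs(np.fft.fft2(gt)) + eps)
    return np.mean(np.abs(mp - mg))

def total_loss(pred, gt, w_pix=1.0, w_fft=0.1):
    """Sketch of the combined objective: pixel + frequency terms only.

    The weights are illustrative assumptions, not the paper's values.
    """
    pix = np.mean(np.abs(pred - gt))  # pixel-wise reconstruction term
    return w_pix * pix + w_fft * fft_log_magnitude_loss(pred, gt)
```

The log-magnitude comparison penalizes global spectral mismatches (e.g., banding or uneven illumination) that a purely pixel-wise loss weighs only locally.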
4. Results
4.1. Dataset
4.2. Metrics
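PSNR, the first full-reference metric reported below, compares the enhanced output against the ground truth on a logarithmic scale; a minimal sketch for images scaled to [0, 1] (SSIM and LPIPS additionally require windowed statistics and a pretrained network, so they are not sketched here):

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to ground truth."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    mse = np.mean((pred - gt) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For example, a uniform error of 0.1 on a [0, 1] image gives an MSE of 0.01 and hence a PSNR of 20 dB.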
4.3. Implementation Details and Methods for Comparison
- LIME [8]: A classic low-light enhancement method using illumination maps.
- ZeroDCE [47]: A zero-reference deep curve estimation technique for low-light adjustment.
- LLFormer [48]: A transformer-based model specialized in low-light image correction.
- LytNet [31]: A lightweight CNN framework aiming for real-time low-light enhancement.
- WaveNet [49]: Uses wavelet transforms to enhance image details in the frequency domain.
- HVI-CIDNet [32]: An efficient network that decouples color and intensity in the HVI color space for low-light enhancement.
- RetinexFormer [27]: A Transformer variant that decomposes images via a Retinex-inspired design.
- LighTDiff [7]: A diffusion-based approach that iteratively refines the image and is often computationally expensive.
4.4. Low-Light Image Enhancement Results
4.5. Ablation Studies
5. Discussion
5.1. Effectiveness and Contribution
5.2. Limitations and Future Directions
5.3. Relevance of Design Choices to WCE-Specific Enhancement Challenges
6. Conclusions
- Quantitative improvements across standard metrics (PSNR: +2.3 dB, SSIM: +0.089, LPIPS: −0.126);
- A 5% enhancement in lesion segmentation accuracy;
- Preservation of critical inter-channel color relationships;
- Robust performance in challenging lighting conditions;
- Effective balance between enhancement and diagnostic fidelity.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Peery, A.F.; Crockett, S.D.; Murphy, C.C.; Lund, J.L.; Dellon, E.S.; Williams, J.L.; Jensen, E.T.; Shaheen, N.J.; Barritt, A.S.; Lieber, S.R.; et al. Burden and Cost of Gastrointestinal, Liver, and Pancreatic Diseases in the United States: Update 2018. Gastroenterology 2019, 156, 254–272.e11.
- Chang, L. The Role of Stress on Physiologic Responses and Clinical Symptoms in Irritable Bowel Syndrome. Gastroenterology 2011, 140, 761–765.
- Viale, P.H. The American Cancer Society’s Facts & Figures: 2020 Edition. J. Adv. Pract. Oncol. 2020, 11, 135–136.
- Iddan, G.; Meron, G.; Glukhovsky, A.; Swain, P. Wireless Capsule Endoscopy. Nature 2000, 405, 417.
- Hewett, D.G.; Kahi, C.J.; Rex, D.K. Efficacy and Effectiveness of Colonoscopy: How Do We Bridge the Gap? Gastrointest. Endosc. Clin. N. Am. 2010, 20, 673–684.
- Bai, L.; Chen, T.; Wu, Y.; Wang, A.; Islam, M.; Ren, H. LLCaps: Learning to Illuminate Low-Light Capsule Endoscopy with Curved Wavelet Attention and Reverse Diffusion. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2023.
- Chen, T.; Lyu, Q.; Bai, L.; Guo, E.; Gao, H.; Yang, X.; Ren, H.; Zhou, L. LighTDiff: Surgical Endoscopic Image Low-Light Enhancement with T-Diffusion. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2024.
- Guo, X.; Li, Y.; Ling, H. LIME: Low-Light Image Enhancement via Illumination Map Estimation. IEEE Trans. Image Process. 2017, 26, 982–993.
- Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep Retinex Decomposition for Low-Light Enhancement. arXiv 2018, arXiv:1808.04560.
- Liu, P.; Wang, Y.; Yang, J.; Li, W. An Adaptive Enhancement Method for Gastrointestinal Low-Light Images of Capsule Endoscope. In Proceedings of the ICASSP 2023—2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 4–10 June 2023; pp. 1–5.
- Nam, S.; Lim, Y.; Nam, J.; Lee, H.; Hwang, Y.; Park, J.; Chun, H. 3D Reconstruction of Small Bowel Lesions Using Stereo Camera-Based Capsule Endoscopy. Sci. Rep. 2020, 10, 6025.
- Tanwar, S.; Vijayalakshmi, S.; Sabharwal, M.; Kaur, M.; AlZubi, A.A.; Lee, H.-N. Detection and Classification of Colorectal Polyp Using Deep Learning. Biomed Res. Int. 2022, 2022, 2805607.
- Stoleru, C.-A.; Dulf, E.H.; Ciobanu, L. Automated Detection of Celiac Disease Using Machine Learning Algorithms. Sci. Rep. 2022, 12, 4071.
- Liu, X.; Li, Z.; Ishii, M.; Hager, G.D.; Taylor, R.H.; Unberath, M. SAGE: SLAM with Appearance and Geometry Prior for Endoscopy. IEEE Int. Conf. Robot. Autom. 2022, 2022, 5587–5593.
- Li, C.; Guo, C.; Han, L.; Jiang, J.; Cheng, M.-M.; Gu, J.; Loy, C.C. Low-Light Image and Video Enhancement Using Deep Learning: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 9396–9416.
- Jobson, D.J.; Rahman, Z.; Woodell, G.A. Properties and Performance of a Center/Surround Retinex. IEEE Trans. Image Process. 1997, 6, 451–462.
- Rahman, Z.-U.; Jobson, D.J.; Woodell, G.A. Multiscale Retinex for Color Rendition and Dynamic Range Compression. In Proceedings of the Applications of Digital Image Processing XIX; Tescher, A.G., Ed.; SPIE: Bellingham, WA, USA, 14 November 1996.
- Jobson, D.J.; Rahman, Z.; Woodell, G.A. A Multiscale Retinex for Bridging the Gap between Color Images and the Human Observation of Scenes. IEEE Trans. Image Process. 1997, 6, 965–976.
- Fu, X.; Zeng, D.; Huang, Y.; Zhang, X.-P.; Ding, X. A Weighted Variational Model for Simultaneous Reflectance and Illumination Estimation. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2782–2790.
- Wang, L.; Wu, B.; Wang, X.; Zhu, Q.; Xu, K. Endoscopic Image Luminance Enhancement Based on the Inverse Square Law for Illuminance and Retinex. Int. J. Med. Robot. 2022, 18, e2396.
- Ng, M.K.; Wang, W. A Total Variation Model for Retinex. SIAM J. Imaging Sci. 2011, 4, 345–365.
- Fu, X.; Zeng, D.; Huang, Y.; Liao, Y.; Ding, X.; Paisley, J. A Fusion-Based Enhancing Method for Weakly Illuminated Images. Signal Process. 2016, 129, 82–96.
- Zhang, Y.; Zhang, J.; Guo, X. Kindling the Darkness: A Practical Low-Light Image Enhancer. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; ACM: New York, NY, USA, 2019.
- Wu, W.; Weng, J.; Zhang, P.; Wang, X.; Yang, W.; Jiang, J. URetinex-Net: Retinex-Based Deep Unfolding Network for Low-Light Image Enhancement. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022.
- Zhao, Z.; Xiong, B.; Wang, L.; Ou, Q.; Yu, L.; Kuang, F. RetinexDIP: A Unified Deep Framework for Low-Light Image Enhancement. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 1076–1088.
- Zhu, A.; Zhang, L.; Shen, Y.; Ma, Y.; Zhao, S.; Zhou, Y. Zero-Shot Restoration of Underexposed Images via Robust Retinex Decomposition. In Proceedings of the 2020 IEEE International Conference on Multimedia and Expo (ICME), London, UK, 6–10 July 2020; pp. 1–6.
- Cai, Y.; Bian, H.; Lin, J.; Wang, H.; Timofte, R.; Zhang, Y. Retinexformer: One-Stage Retinex-Based Transformer for Low-Light Image Enhancement. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023.
- Bai, J.; Yin, Y.; He, Q.; Li, Y.; Zhang, X. Retinexmamba: Retinex-Based Mamba for Low-Light Image Enhancement. arXiv 2024, arXiv:2405.03349.
- Yi, X.; Xu, H.; Zhang, H.; Tang, L.; Ma, J. Diff-Retinex: Rethinking Low-Light Image Enhancement with A Generative Diffusion Model. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023.
- Huang, C.; Fang, Y.; Wu, T.; Zeng, T.; Zeng, Y. Quaternion Screened Poisson Equation for Low-Light Image Enhancement. IEEE Signal Process. Lett. 2022, 29, 1417–1421.
- Brateanu, A.; Balmez, R.; Avram, A.; Orhei, C.; Ancuti, C. LYT-NET: Lightweight YUV Transformer-Based Network for Low-Light Image Enhancement. arXiv 2024, arXiv:2401.15204.
- Yan, Q.; Feng, Y.; Zhang, C.; Wang, P.; Wu, P.; Dong, W.; Sun, J.; Zhang, Y. You Only Need One Color Space: An Efficient Network for Low-Light Image Enhancement. arXiv 2024, arXiv:2402.05809.
- Gómez, P.; Semmler, M.; Schützenberger, A.; Bohr, C.; Döllinger, M. Low-Light Image Enhancement of High-Speed Endoscopic Videos Using a Convolutional Neural Network. Med. Biol. Eng. Comput. 2019, 57, 1451–1463.
- Xia, W.; Chen, E.C.S.; Peters, T. Endoscopic Image Enhancement with Noise Suppression. Healthc. Technol. Lett. 2018, 5, 154–157.
- Zhu, L.; Liao, B.; Zhang, Q.; Wang, X.; Liu, W.; Wang, X. Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model. arXiv 2024, arXiv:2401.09417.
- Johnson, J.; Alexandre, A.; Li, F. Perceptual Losses for Real-Time Style Transfer and Super-Resolution. In Computer Vision—ECCV 2016; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2016; pp. 694–711. ISBN 9783319464749.
- Jiang, L.; Dai, B.; Wu, W.; Loy, C.C. Focal Frequency Loss for Image Reconstruction and Synthesis. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 13919–13929.
- Hwang, J.; Yu, C.; Shin, Y. SAR-to-Optical Image Translation Using SSIM and Perceptual Loss Based Cycle-Consistent GAN. In Proceedings of the 2020 International Conference on Information and Communication Technology Convergence (ICTC), Jeju, Republic of Korea, 21–23 October 2020; pp. 191–194.
- Smedsrud, P.; Thambawita, V.L.; Hicks, S.; Gjestang, H.L.; Nedrejord, O.O.; Næss, E.; Borgli, H.; Jha, D.; Berstad, T.J.; Eskeland, S.L.; et al. Kvasir-Capsule, a Video Capsule Endoscopy Dataset. Sci. Data 2021, 8, 142.
- Coelho, P.; Pereira, A.; Leite, A.; Salgado, M.; Cunha, A. A Deep Learning Approach for Red Lesions Detection in Video Capsule Endoscopies. In Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2018; pp. 553–561. ISBN 9783319929996.
- Hore, A.; Ziou, D. Image Quality Metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369.
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
- Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 586–595.
- Venkatanath, N.; Praneeth, D.; Bh, M.C.; Channappayya, S.S.; Medasani, S.S. Blind Image Quality Evaluation Using Perception Based Features. In Proceedings of the 2015 Twenty First National Conference on Communications (NCC), Mumbai, India, 27 February–1 March 2015; pp. 1–6.
- Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “Completely Blind” Image Quality Analyzer. IEEE Signal Process. Lett. 2013, 20, 209–212.
- Müller, D.; Soto-Rey, I.; Kramer, F. Towards a Guideline for Evaluation Metrics in Medical Image Segmentation. BMC Res. Notes 2022, 15, 210.
- Guo, C.; Li, C.; Guo, J.; Loy, C.C.; Hou, J.; Kwong, S.; Cong, R. Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020.
- Wang, T.; Zhang, K.; Shen, T.; Luo, W.; Stenger, B.; Lu, T. Ultra-High-Definition Low-Light Image Enhancement: A Benchmark and Transformer-Based Method. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023.
- Dang, J.; Li, Z.; Zhong, Y.; Wang, L. WaveNet: Wave-Aware Image Enhancement. In Proceedings of the Pacific Graphics, Daejeon, Republic of Korea, 10–13 October 2023.
- Chen, J.; Lu, Y.; Yu, Q.; Luo, X.; Adeli, E.; Wang, Y.; Lu, L.; Yuille, A.L.; Zhou, Y. TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation. arXiv 2021, arXiv:2102.04306.
Table 1. Quantitative comparison on the Kvasir-Capsule dataset.

Method | PSNR (dB) | SSIM | LPIPS |
---|---|---|---|
LIME [8] | 12.058 | 0.3021 | 0.460 |
ZeroDCE [47] | 13.671 | 0.446 | 0.515 |
LLFormer [48] | 34.419 | 0.962 | 0.032 |
LytNet [31] | 30.302 | 0.948 | 0.071 |
WaveNet [49] | 35.893 | 0.975 | 0.026 |
HVI-CIDNet [32] | 21.595 | 0.798 | 0.102 |
RetinexFormer [27] | 33.660 | 0.959 | 0.033 |
LighTDiff [7] | 31.413 | 0.920 | 0.067 |
QRNet (ours) | 37.781 | 0.976 | 0.022 |
Table 2. Quantitative comparison on the Red Lesion Endoscopy (RLE) dataset.

Method | PSNR (dB) | SSIM | LPIPS |
---|---|---|---|
LIME [8] | 14.248 | 0.225 | 0.513 |
ZeroDCE [47] | 15.277 | 0.323 | 0.429 |
LLFormer [48] | 30.021 | 0.898 | 0.094 |
LytNet [31] | 25.556 | 0.823 | 0.189 |
WaveNet [49] | 31.405 | 0.922 | 0.088 |
HVI-CIDNet [32] | 24.083 | 0.702 | 0.140 |
RetinexFormer [27] | 28.724 | 0.891 | 0.097 |
LighTDiff [7] | 31.413 | 0.920 | 0.067 |
QRNet (ours) | 34.030 | 0.936 | 0.055 |
Table 3. No-reference NIQE scores (lower is better).

Method | NIQE |
---|---|
LIME [8] | 46.345 |
ZeroDCE [47] | 38.057 |
LLFormer [48] | 31.432 |
LytNet [31] | 50.041 |
WaveNet [49] | 37.773 |
HVI-CIDNet [32] | 43.748 |
RetinexFormer [27] | 48.237 |
LighTDiff [7] | 15.462 |
Table 4. Lesion segmentation results with different enhancement methods (NO = no enhancement).

Method | F1 | mIoU | Accuracy | Precision | Sensitivity | Specificity |
---|---|---|---|---|---|---|
NO | 0.7830 | 0.6434 | 0.6930 | 0.6587 | 0.9651 | 0.3266 |
LIME [8] | 0.5713 | 0.3999 | 0.4775 | 0.5398 | 0.6068 | 0.3033 |
ZeroDCE [47] | 0.6061 | 0.4349 | 0.5214 | 0.5743 | 0.6417 | 0.3594 |
LLFormer [48] | 0.7715 | 0.6280 | 0.6718 | 0.6424 | 0.9656 | 0.2761 |
LytNet [31] | 0.7507 | 0.6009 | 0.6316 | 0.6136 | 0.9666 | 0.1804 |
WaveNet [49] | 0.7754 | 0.6332 | 0.6768 | 0.6449 | 0.9721 | 0.2792 |
HVI-CIDNet [32] | 0.7768 | 0.6351 | 0.6825 | 0.6510 | 0.9629 | 0.3048 |
RetinexFormer [27] | 0.7577 | 0.6100 | 0.6500 | 0.6285 | 0.9538 | 0.2407 |
LighTDiff [7] | 0.7755 | 0.6333 | 0.6790 | 0.6478 | 0.9659 | 0.2927 |
Ours | 0.7786 | 0.6345 | 0.6919 | 0.6516 | 0.9725 | 0.2792 |
Table 5. Ablation study on the Kvasir-Capsule dataset.

Method | PSNR (dB) | SSIM | LPIPS |
---|---|---|---|
Full QRNet | 37.78 | 0.976 | 0.022 |
w/o Quaternion | 35.99 | 0.960 | 0.031 |
w/o Wavelet | 36.56 | 0.970 | 0.027 |
w/o FFT Loss | 37.44 | 0.975 | 0.030 |
Frants, V.; Agaian, S. QRNet: A Quaternion-Based Retinex Framework for Enhanced Wireless Capsule Endoscopy Image Quality. Bioengineering 2025, 12, 239. https://doi.org/10.3390/bioengineering12030239