Abstract
Enhancing low-light images is crucial for many applications in computer vision, yet current approaches often fall short in balancing image quality and detail preservation. This study introduces a novel method for enhancing low-light images using techniques from geometric function theory. Specifically, we employ Sakaguchi-type class functions, subordinated with the Gegenbauer polynomial, to derive coefficient estimates. These estimates are then used in convolution kernels to produce enhanced versions of the image. The method was tested on the LOw-Light (LOL) dataset, which contains challenging low-light images with noise and artifacts. Our approach’s effectiveness is validated through quantitative metrics, including PSNR and SSIM, as well as visual comparisons. The results demonstrate significant improvements over existing state-of-the-art methods, offering better visibility and detail retention. This method holds promise for enhancing images in critical fields such as surveillance and medical imaging.
Introduction
Advancements in technology and photographic equipment have increased the demand for high-quality images. However, environmental factors often hinder the acquisition of desired images, resulting in issues such as blurred details, uneven lighting, low light conditions, and backlighting that affect image quality. To tackle these challenges, low-light image enhancement techniques have become instrumental. These techniques are of immense importance in fields such as computer vision (for tasks like target detection and recognition), surveillance systems, home security, medical image segmentation, and autonomous driving. Notably, many existing algorithms primarily focus on brightening images but often overlook the preservation of essential original image details. This often results in a reduction in the information entropy of the image, hindering the complete expression of the original image information.
Numerous algorithms have been proposed to address the challenge of enhancing low-light images, with the histogram equalization technique being a commonly employed method due to its avoidance of saturation and simplicity. However, this method has its limitations. It enhances low-light images by using the entire image’s histogram details as a transformation function, primarily focusing on improving contrast. This singular focus on contrast enhancement can result in over-enhancement in areas with low contrast, sometimes leading to regions that are either under or over-enhanced within the image.
Similarly, Wang et al.’s1 algorithm aimed to enhance contrast while preserving illumination in the image. Unfortunately, this approach fell short in maintaining the overall visual quality of the image. A different perspective can be found in techniques inspired by Retinex theory, originally developed to model human color perception, which has been effectively applied to enhance images captured under challenging lighting conditions. This theory posits that color perception is influenced by both the illumination and the reflectance properties of objects. While Retinex can theoretically facilitate image decomposition into illumination and reflectance, this is primarily effective under conditions of uniform lighting and for matte surfaces. Consequently, many contemporary techniques inspired by Retinex focus on estimating and removing the illumination component to enhance overall image quality. For example, Wang et al.2 proposed methods that focus on eliminating the illumination component, while others, such as the approach outlined in Wang et al.1, aim to retain a portion of the illumination to preserve natural image effects. However, these methods often introduce distortions that compromise visual quality, as they fail to account for the intrinsic characteristics of the camera’s response. Additionally, notable implementations of Retinex that do not perform image decomposition, such as SuPeR3, Light Random Spray Retinex4, STAR5, and Milano-Retinex6, provide alternative approaches to improving image clarity and detail.
Furthermore, certain techniques involve image dehazing, aiming to preserve the natural distribution of pixel values, as exemplified by the work of Dong et al.7, who proposed a method for dehazing low-visibility input images. Subsequently, image inversion is performed to obtain illuminated images. The dark channel prior, as proposed in Ref.8, is a well-known technique focused on enhancing the quality of hazy images characterized by low visibility. The colour attenuation prior9 has been presented as an effective approach for recovering deteriorated hazy images. Cai et al.10 investigated learning-based dehazing for an end-to-end haze removal procedure, while Ancuti et al.11 presented a fusion-based night picture dehazing approach. Many researchers have been working on ways to improve low-light images in various environments12,13,14,15. Although these methods yield satisfactory results, they may not fully capture the true illumination and contrast of the scene. Additionally, some of these approaches do not account for the impact of noise in resulting images, potentially leading to varying results under different lighting conditions.
In addition to the approaches discussed earlier, several related works have significantly contributed to the field of image enhancement. For instance, the study “Bilateral Tone Mapping Scheme for Color Correction and Contrast Adjustment in Nearly Invisible Medical Images”16 addresses the challenges of poor illumination in medical imaging. The authors propose a method that enhances contrast while maintaining natural color quality, effectively improving the visibility of crucial details in medical images. Another notable contribution is the work by Bhandari et al.17, which emphasizes the importance of balancing contrast and brightness in color images, particularly under suboptimal lighting conditions. By optimizing these elements, the authors demonstrate improved color fidelity, making their technique valuable for applications where accurate color representation is essential. Furthermore, Subramani et al.18 introduced a method that effectively restores visual quality in degraded images through intensity mapping adjustments. By manipulating the Bezier curve, the authors were able to enhance contrast and reveal hidden details, making it a useful tool for improving image quality in various scenarios. In Ref.19, by applying an optimized Bezier curve-based intensity mapping scheme, the authors enhanced visibility and detail in dark images. This advancement is particularly relevant for applications such as night photography and surveillance, where capturing high-quality images under low-light conditions is crucial.
The application of geometric function theory in computer vision remains a relatively unexplored domain, with only a limited number of researchers venturing into this area. Notably, Priya et al.20 introduced the class \(C_\Sigma\), utilizing a Sakaguchi type class subordinated with a Horadam polynomial, where the coefficient bounds of \(C_\Sigma\) play a pivotal role in texture enhancement. Similarly, Nithiyanandham et al.21 defined a class \(p-\Phi S^* (t,\mu ,\nu ,J,K)\), crafted from a Mittag-Leffler type Poisson distribution, and investigated its geometrical properties. By making use of the coefficient bounds derived from this class, Nithiyanandham et al.22 achieved successful enhancements in retinal images. Aarthy et al.23 further extended these findings by applying the same class with varied parameter values to enhance images from datasets like ’DAISY,’ ’MEDICAL,’ and ’MISCELLANEOUS.’ The coefficient bounds, serving as critical factors, influence the enhancement process by providing essential constraints.
Despite these contributions, a notable knowledge gap exists in understanding the coefficient bounds of Gegenbauer polynomials and their application in low light image enhancement. This study aims to address this gap by determining the coefficient bounds of Gegenbauer polynomials. Subsequently, the obtained coefficient bounds are convoluted in eight different directions using a \(3 \times 3\) mask. This innovative approach facilitates uniform enhancement across the entire image. Our research contributes to advancing the understanding and application of geometric function theory in low-light image enhancement.
The structure of this article is as follows: Section “Mathematical interpretation” explains the mathematical interpretation of coefficient bounds. In “Proposed methodology”, we provide a detailed explanation of our proposed model. Sections “Experimental analysis and results” and “Performance analysis” address both qualitative and quantitative assessments, comparing our approach to other cutting-edge methodologies. Finally, “Conclusion” contains the conclusion of our study as well as a discussion on future research prospects.
Mathematical interpretation
Gegenbauer polynomials, a set of orthogonal polynomials, have been applied across various domains, including image processing. In image processing, the moments derived from these polynomials are used for tasks such as image representation24 and analysis25. To achieve a controlled image enhancement process, we utilize the concept of coefficient bounds from geometric function theory. This requires the construction of a specific class, for which we employ the Gegenbauer polynomial and the Sakaguchi type function, combined through subordination. This subordination creates a mathematical connection between the Sakaguchi-type function and the Gegenbauer polynomial, enabling us to define a new class, denoted as \(G_S (\Phi )\) . For defining the class and determining the coefficient bounds, we provide a few fundamental definitions below. The coefficient bounds derived from this new class are central to our enhancement process, as they control each element of the convolution kernel based on local image characteristics. This targeted use of coefficient bounds provides a structured and adaptable approach to enhance image clarity, particularly under challenging low-light conditions.
The class of all analytic functions of the form
is denoted by \({\mathcal {A}}\) and normalized in the open unit disk \({\mathscr {D}} = \{ \mu :|\mu |<1 \}\). The aforementioned equation is then substituted in the class \(G_S (\Phi )\) to derive the coefficient bounds. \({\mathcal {S}}\) is the class of all \(univalent\) (one-to-one) functions in \({\mathcal {A}}\). According to the Koebe one-quarter theorem, the image of \({\mathscr {D}}\) under every univalent function \(f \in {\mathcal {A}}\) contains a disk of radius \(\frac{1}{4}\), and for every univalent function \(f \in {\mathcal {A}}\) there exists an inverse map \(f^{-1}=g\) satisfying
and
is the inverse function. A function \(f \in {\mathcal {A}}\) is said to be \(bi\)-\(univalent\) in \({\mathscr {D}}\) if both \(f\) and \(f^{-1}\) are univalent in \({\mathscr {D}}\). Let \(\Sigma\) be the class of all bi-univalent functions in the unit disk \({\mathscr {D}}\). For early results on analytic and bi-univalent functions, refer to Refs.26,27.
Let the two functions \(f, \ g\) be analytic in \({\mathscr {D}}\), and let \(\varpi\) be a Schwarz function satisfying \(\varpi (0)=0\) and \(|\varpi (\mu )| < 1\) such that \(f(\mu )=g(\varpi (\mu ))\); then \(f\) is said to be subordinate to \(g\), written \(f \prec g\). If \(g\) is univalent, then \(f \prec g\) iff \(f(0)=g(0)\) and \(f({\mathscr {D}}) \subset g({\mathscr {D}})\). The generating function of the \(Gegenbauer \ polynomials\) is defined by
where \(\Phi\) is a nonzero real constant, \(x \in [-1,1]\) and \(\mu \in {\mathscr {D}}\). The Gegenbauer polynomials satisfy the following recurrence relation:
with the initial values
Special cases:
(i) When \(\Phi =1/2\), \(C_{n}^{\Phi }(x)\) reduces to the Legendre polynomials.
(ii) When \(\Phi =1\), \(C_{n}^{\Phi }(x)\) reduces to the Chebyshev polynomials.
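These special cases can be checked numerically. Below is a minimal sketch (pure Python) using the standard three-term recurrence \(n\,C_n^{\Phi}(x) = 2x(n+\Phi-1)\,C_{n-1}^{\Phi}(x) - (n+2\Phi-2)\,C_{n-2}^{\Phi}(x)\); note that \(\Phi = 1\) yields the Chebyshev polynomials of the second kind:

```python
def gegenbauer(n, phi, x):
    """Evaluate C_n^phi(x) by the standard three-term recurrence:
    n*C_n = 2x(n + phi - 1)*C_{n-1} - (n + 2*phi - 2)*C_{n-2}."""
    if n == 0:
        return 1.0
    c_prev, c_curr = 1.0, 2.0 * phi * x   # initial values C_0 and C_1
    for k in range(2, n + 1):
        c_prev, c_curr = c_curr, (2.0 * x * (k + phi - 1) * c_curr
                                  - (k + 2.0 * phi - 2) * c_prev) / k
    return c_curr

# phi = 1/2: Legendre polynomials, e.g. P_2(x) = (3x^2 - 1)/2
assert abs(gegenbauer(2, 0.5, 0.3) - (3 * 0.3 ** 2 - 1) / 2) < 1e-12
# phi = 1: Chebyshev polynomials of the second kind, e.g. U_2(x) = 4x^2 - 1
assert abs(gegenbauer(2, 1.0, 0.3) - (4 * 0.3 ** 2 - 1)) < 1e-12
```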
In recent years, a growing number of researchers have investigated bi-univalent functions in association with orthogonal polynomials28,29,30,31.
Class definition
A function \(f\in \Sigma\) given by Eq. (1) is said to be in the class \(G_S(\Phi )\) if the following conditions hold for all \(\mu ,\nu \in {\mathscr {D}}\):
and
where \(x\in (\frac{1}{2},1]\), \(-1 \le t < 1\), the function \(g(\nu )=f^{-1}(\nu )\) is defined by Eq. (2), and \(H_{\Phi }\) is the generating function of the Gegenbauer polynomial given by Eq. (3).
Theorem
Let the function \(f \in \Sigma\) given by Eq. (1) be in the class \(G_S(\Phi )\). Then
where \(u_n = \frac{1-t^n}{1+t}\) , \(\tau = \Phi (1+\Phi )(2-u_2)^2-2\Phi ^2 [(3-u_3)-u_2(2-u_2)]\)
Proof
Let \(f\in G_S(\Phi )\); then, by the class definition, we have
and
for some analytic functions
such that \(w(0)=v(0)=0\), \(|w(\mu )|<1 \ (\mu \in {\mathscr {D}})\) and \(|v(\nu )|<1 \ (\nu \in {\mathscr {D}})\). It follows from Eqs. (11) and (12) that
By expanding (Eq. 1), we get
Applying the aforementioned expansions in Eq. (15) gives
By further expansion and equating the coefficients of \(\mu ^2\) and \(\mu ^3\), we arrive at the following equations
Similarly, for Eq. (16), we obtain
From Eqs. (18) and (20), we get
and
Summing up Eqs. (19) and (21), we have
By equating Eqs. (22) and (23), we obtain
It is well known from Ref.32 that if
then
By applying Eqs. (26) and (5) in Eq. (25), we obtain
Rearranging the above equation yields the required inequality (Eq. 9).
Next, by subtracting Eq. (21) from Eq. (19), we have
Further, in view of Eq. (22), it follows from Eq. (26) that
By making use of Eqs. (23) and (26), we obtain from the aforementioned equation the desired inequality (Eq. 10).
\(\square\)
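As a quick numerical sanity check on the theorem's constants, a minimal sketch (pure Python; `phi` and `t` play the roles of \(\Phi\) and \(t\) from the theorem, and \(t \ne -1\) is required for \(u_n\) to be defined):

```python
def bound_constants(phi, t):
    """Compute u_2, u_3 and tau from the theorem:
    u_n = (1 - t^n)/(1 + t), tau = phi(1+phi)(2-u_2)^2 - 2 phi^2 [(3-u_3) - u_2(2-u_2)].
    Requires t != -1."""
    u2 = (1 - t ** 2) / (1 + t)
    u3 = (1 - t ** 3) / (1 + t)
    tau = phi * (1 + phi) * (2 - u2) ** 2 - 2 * phi ** 2 * ((3 - u3) - u2 * (2 - u2))
    return u2, u3, tau

# At t = 0 both u_2 and u_3 equal 1, and tau collapses to phi*(1 - phi)
u2, u3, tau = bound_constants(1.0, 0.0)
assert (u2, u3, tau) == (1.0, 1.0, 0.0)
```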
Proposed methodology
We introduce a new mathematical approach in this study for improving images in low-light conditions. Our approach makes use of coefficients obtained from the class \(G_S (\Phi )\) to develop a highly effective enhancement technique. The coefficient bounds act as guiding factors, ensuring a controlled and effective enhancement that addresses the challenges posed by low-light environments, ultimately contributing to improved image quality. In image processing, a kernel, also known as a convolution matrix or mask, plays a pivotal role in filtering and transforming images. It is a small matrix composed of coefficient bounds that is convolved with an input image, enabling operations like blurring, sharpening, embossing, and edge detection. Kernels are essentially mathematical functions that dictate how each pixel in the output image depends on the values of nearby pixels, including the pixel itself. To achieve effective image enhancement with low computational complexity, we employ \(3 \times 3\) kernels rather than larger sizes, as increasing the kernel size can lead to image blurring. The \(3 \times 3\) kernel size strikes a balance between enhancement and preservation of image details. The kernels associated with the coefficients in the eight directions are given in Fig. 1.
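The actual kernel entries come from the derived coefficient bounds and are given in Fig. 1; purely to illustrate the convolution mechanics of a \(3 \times 3\) mask on one colour channel, a minimal sketch (Python with NumPy; the identity kernel below is a hypothetical placeholder, not one of the eight directional kernels):

```python
import numpy as np

def conv3x3(channel, kernel):
    """Slide a 3x3 kernel over one colour channel (edge-replicated borders)."""
    h, w = channel.shape
    padded = np.pad(channel.astype(float), 1, mode="edge")
    out = np.zeros((h, w))
    for di in range(3):          # accumulate the nine shifted, weighted copies
        for dj in range(3):
            out += kernel[di, dj] * padded[di:di + h, dj:dj + w]
    return out

# Sanity check: the identity kernel passes the channel through unchanged.
identity = np.zeros((3, 3))
identity[1, 1] = 1.0
channel = np.arange(12, dtype=float).reshape(3, 4)
assert np.allclose(conv3x3(channel, identity), channel)
```

(Strictly, this computes cross-correlation; for symmetric kernels it coincides with convolution.)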
The proposed method is as follows.
- Step 1: Take a low-light color image of size \((M\times N)\).
- Step 2: Define the range of t as \(-1\) to 0.9.
- Step 3: Determine the coefficient bounds \(|a_1|\) (Eq. 8), \(|a_2|\) (Eq. 9) and \(|a_3|\) (Eq. 10).
- Step 4: Create \(3 \times 3\) kernels for eight different directions (\(0^\circ ,45^\circ ,90^\circ ,135^\circ ,180^\circ ,225^\circ ,270^\circ\) and \(315^\circ\)).
- Step 5: Apply the previously determined coefficient bounds to the kernels.
- Step 6: Convolve the image with each of these kernels separately for all eight directions, performing the convolution on each individual color channel (R, G, and B).
- Step 7: Calculate the average of the results obtained from the convolution operations.
- Step 8: Add some additional brightness to the enhanced image using the function ’imadd’, which performs pixel-wise addition and contributes to an overall improvement in visual quality.
- Step 9: Compute the entropy of the enhanced image. According to entropy theory, the more extensive and detailed the information in an image, the greater the entropy (IE)33. Entropy computation aids in determining the optimal parameter for effective low-light image enhancement.
- Step 10: Repeat Step 3 through Step 9 for each t value.
- Step 11: Identify the t value that yields the maximum entropy; this value is the optimal parameter for image enhancement.
- Step 12: Finally, display the enhanced image produced with the chosen optimal t value.
For the proposed methodology, the total computational complexity is \(O(n \times M \times N)\), where \(n\) is the number of iterations over the \(t\) values and \(M\) and \(N\) are the dimensions of the image.
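Steps 3 through 11 amount to an entropy-maximizing search over t. The selection loop can be sketched as follows (pure Python over a flattened pixel list; `toy_enhance` is a hypothetical stand-in for the eight-direction kernel pipeline, used only to exercise the search):

```python
from collections import Counter
from math import log2

def shannon_entropy(pixels):
    """Image entropy IE = -sum p(x) log2 p(x) over the intensity histogram."""
    n = len(pixels)
    return -sum((c / n) * log2(c / n) for c in Counter(pixels).values())

def best_t(pixels, enhance, t_values):
    """Steps 3-11: enhance at each candidate t; keep the t whose output has maximum entropy."""
    return max(t_values, key=lambda t: shannon_entropy(enhance(pixels, t)))

# Toy stand-in for the kernel pipeline: scale intensities by (1 + t) and clip.
toy_enhance = lambda px, t: [min(255, int(p * (1 + t))) for p in px]
dark = [0, 0, 10, 10, 20, 20]
t_opt = best_t(dark, toy_enhance, [k / 10 for k in range(-9, 10)])
```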
Experimental analysis and results
The experiment was conducted in the following setting: a computer with a dual Intel Core i3 processor running at 1.1 GHz, equipped with 8 GB of 3733 MHz LPDDR4X RAM and running macOS 13.5. The experiments were executed using MATLAB Online (MathWorks). LOL34 is a collection of 500 image pairs captured in varying lighting conditions, encompassing scenes such as houses, campuses, clubs, and streets. The dataset is designed for enhancing images taken in low-light situations and includes 485 training pairs and 15 testing pairs. The low-light images contain noise introduced during the capture process. The majority of the images depict indoor scenes, and all images have been standardized to a resolution of \(400 \times 600\) pixels and saved in Portable Network Graphics (PNG) format.
Figure 2 presents the original images along with their respective enhanced versions, each associated with its corresponding t value. Figure 3 displays sample low-light images sourced from the dataset, highlighting the challenges associated with enhancement. Our method aims to decrease noise levels and generate high-quality images; however, with increasing noise, our approach introduces a subtle color variation in the enhanced image. Figures 4, 5 and 6 display a clear comparison between our method and other existing techniques. The method proposed by He et al.8 generates images with uneven enhancement. The methods of Li et al.36 and Chen et al.35 produce images with diminished visibility and limited enhancement. The methods of Ancuti et al.11, Hari et al.37, Lecca et al.38 and Zhang et al.39, as well as ours, exhibit notable performance on the LOL dataset.
Performance analysis
Performance-based analysis is a systematic approach for assessing the quality and efficacy of algorithms or techniques using mathematical functions. It involves the use of specific metrics and criteria to assess how well a particular method achieves its intended objectives. In our study, we consider quantitative measures such as PSNR, SSIM, and entropy. These analyses provide valuable insights into the strengths and weaknesses of image enhancement techniques.
PSNR
Peak Signal-to-Noise Ratio (PSNR) serves as a metric for evaluating image quality. It involves the comparison of an image to a reference image. PSNR provides a quantification of how the maximum achievable intensity in a clean image relates to the impact of any noise or errors that might compromise the image’s fidelity.
The formula for PSNR is as follows:
where P represents the number of potential intensity levels in the image, with the lowest intensity level considered as 0. The calculation of Mean Squared Error (MSE) in PSNR is defined as:
where O denotes the matrix data of the original, noise-free image and D denotes the matrix data of the degraded or altered image; i and j denote the numbers of rows and columns of pixels, respectively, and m and n index a specific row and column, respectively.
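A minimal sketch of these definitions (pure Python over flattened pixel lists, assuming the standard form \(\mathrm{PSNR} = 10\log_{10}(\mathrm{peak}^2/\mathrm{MSE})\) with peak intensity 255 for 8-bit images):

```python
from math import log10

def mse(original, degraded):
    """Mean squared error over flattened pixel lists of equal length."""
    return sum((o - d) ** 2 for o, d in zip(original, degraded)) / len(original)

def psnr(original, degraded, peak=255):
    """PSNR in dB; 'peak' is the maximum intensity (P - 1 for P intensity levels)."""
    m = mse(original, degraded)
    return float("inf") if m == 0 else 10 * log10(peak ** 2 / m)

# Identical images have zero MSE, hence infinite PSNR
assert psnr([0, 255], [0, 255]) == float("inf")
```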
SSIM
The Structural Similarity Index Measure (SSIM) is a perceptual image quality metric designed to measure the similarity between an original image and a processed version of it. SSIM considers three fundamental aspects of human visual perception: luminance (brightness), contrast, and structural information. It produces a score within the range of \(-1\) to 1, where a score of 1 signifies perfect similarity. A higher SSIM score indicates a greater resemblance between the two images. In our evaluation, we have employed the reference image instead of the original image for SSIM calculations. The formula for SSIM is
where a and b are the images being compared; \(\mu _a\) and \(\mu _b\) are the means of a and b; \(\sigma _a^2\) and \(\sigma _b^2\) are their variances; \(\sigma _{ab}\) is the covariance of a and b; and \(C_1\) and \(C_2\) are small positive constants added to avoid instability when the means and variances are close to zero.
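This formula can be sketched directly as a global (single-window) SSIM; practical implementations usually compute it locally over sliding windows and average, but the global form shows the structure of the formula. The constants \(C_1=(0.01L)^2\) and \(C_2=(0.03L)^2\) are the conventional choices for dynamic range \(L\):

```python
def ssim(a, b, L=255):
    """Global (single-window) SSIM between two equal-length pixel lists.
    C1 = (0.01*L)^2 and C2 = (0.03*L)^2 are the conventional stabilising constants."""
    n = len(a)
    mu_a, mu_b = sum(a) / n, sum(b) / n
    var_a = sum((x - mu_a) ** 2 for x in a) / n
    var_b = sum((y - mu_b) ** 2 for y in b) / n
    cov = sum((x - mu_a) * (y - mu_b) for x, y in zip(a, b)) / n
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    return ((2 * mu_a * mu_b + C1) * (2 * cov + C2)) / \
           ((mu_a ** 2 + mu_b ** 2 + C1) * (var_a + var_b + C2))

# An image compared with itself scores exactly 1
assert abs(ssim([10, 20, 30, 40], [10, 20, 30, 40]) - 1.0) < 1e-12
```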
Entropy
Entropy in image processing is a metric that quantifies the amount of information, or randomness, in an image. It is calculated using pixel values and helps evaluate the diversity of intensities within an image. The formula for entropy is
where p(x) represents the probability of a pixel having intensity x.
Tables 1 and 2 provide a performance comparison of our method with those proposed by He et al.8, Chen et al.35, Li et al.36, Ancuti et al.11, Hari et al.37, Lecca et al.38 and Zhang et al.39. From the data presented in Tables 1 and 2, it is evident that for the set-15 image, our algorithm outperforms other methods in terms of PSNR and SSIM. Specifically, it performs 46\(\%\) better than He et al.8, 78\(\%\) better than Chen et al.35, 125\(\%\) better than Li et al.36, 4\(\%\) better than Ancuti et al.11, 2\(\%\) better than Zhang et al.39 and 7\(\%\) better than Lecca et al.38. However, it falls slightly behind by 2\(\%\) compared to Hari et al.37. In terms of SSIM, our algorithm demonstrates superiority with a 43\(\%\) improvement over He et al.8, 55\(\%\) over Chen et al.35, 161\(\%\) over Li et al.36, 16\(\%\) over Ancuti et al.11, 7\(\%\) over Hari et al.37, and 12\(\%\) over Lecca et al.38 and Zhang et al.39. Although the average PSNR improvement is smaller than that of Hari et al.37, our method preserves structural similarity well and visually produces higher-quality results, even in the presence of noise. We performed a one-way ANOVA to compare the means of the different enhancement methods based on their respective performance metrics. The groups were classified according to the enhancement algorithms, with 15 images taken for each method. The results are presented as a boxplot in Fig. 7. The ANOVA test yielded a p-value of \(2.227 \times 10^{-21}\) for the PSNR comparison and \(9.2285 \times 10^{-15}\) for the SSIM comparison, both far below the conventional threshold of 0.05. These extremely small p-values indicate a statistically significant difference between the groups. Therefore, we can conclude that at least one of the enhancement methods has a significantly different effect compared to the others.
However, to identify which specific methods differ, post-hoc tests are required. The post hoc analysis, presented in Fig. 8, revealed that the proposed method excels in both PSNR and SSIM metrics, consistently outperforming most of the state-of-the-art techniques tested. With a high average PSNR (mean = 18.7400) and the highest SSIM (mean = 0.7193), our method shows superior effectiveness in enhancing low-light images, preserving both image clarity and structural similarity. The significant differences observed in the statistical analysis further reinforce the robustness of our approach.
Conclusion
The main focus of our work is to explore the application of geometric function theory in the enhancement of low-light images, aiming to achieve a superior enhanced version. The proposed method successfully delivers an improved image with reduced noise, balanced brightness, enhanced clarity, and preservation of crucial details. When compared with other state-of-the-art methods, both qualitative and quantitative assessments demonstrate the superior performance of our approach. In future work, the methodology employed in this study can be extended to video enhancement by applying the same techniques to each frame of the video sequence. This extension involves adapting the low-light image enhancement approach to accommodate the temporal dynamics and characteristics inherent in video data. By consistently applying the proposed methodology across video frames, it becomes possible to enhance the visual quality of the entire video sequence, addressing challenges posed by low-light conditions in dynamic scenarios. This expansion would extend the applicability of the proposed method to scenarios where video data is crucial, such as surveillance, monitoring, and other computer vision applications, where enhancing scenes under such conditions can greatly improve the accuracy of object detection and recognition.
Data availability
The dataset analysed during the current study is available at https://daooshee.github.io/BMVC2018website/. The MATLAB code used in this investigation is accessible via: K, Sivagami Sundari; B, Srutha Keerthi (2024), “Low light image enhancement - GFT”, Mendeley Data, V1, doi: https://doi.org/10.17632/2gcrkjr88z.1.
References
Wang, S., Zheng, J., Hu, H. M. & Li, B. Naturalness preserved enhancement algorithm for non-uniform illumination images. IEEE Trans. Image Process. 22(9), 3538–3548. https://doi.org/10.1109/TIP.2013.2261309 (2013).
Wang, L., Xiao, L., Liu, H. & Wei, Z. Variational Bayesian method for retinex. IEEE Trans. Image Process. 23(8), 3381–3396. https://doi.org/10.1109/TIP.2014.2324813 (2014).
Lecca, M. & Messelodi, S. SuPeR: Milano Retinex implementation exploiting a regular image grid. JOSA A. 36(8), 1423–1432 (2019).
Tanaka, M., Lanaro, M. P., Horiuchi, T. & Rizzi, A. Random spray Retinex extensions considering region of interest and eye movements. J. Imaging Sci. Technol. 63(6), 060403–1. https://doi.org/10.2352/J.ImagingSci.Technol.2019.63.6.060403 (2019).
Lecca, M. STAR: A segmentation-based approximation of point-based sampling Milano Retinex for color image enhancement. IEEE Trans. Image Process. 27(12), 5802–5812. https://doi.org/10.1109/TIP.2018.2858541 (2018).
Lecca, M., Simone, G., Bonanomi, C. & Rizzi, A. Point-based spatial colour sampling in Milano-Retinex: A survey. IET Image Process. 12(6), 833–849. https://doi.org/10.1049/iet-ipr.2017.1224 (2018).
Dong, X., Pang, Y., & Wen, J. (2010). Fast efficient algorithm for enhancement of low lighting video. in ACM SIGGRAPH 2010 Posters, pp. 1–1. https://doi.org/10.1145/1836845.1836920
He, K., Sun, J. & Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353. https://doi.org/10.1109/CVPR.2009.5206515 (2010).
Fattal, R. Dehazing using color-lines. ACM Trans. Graph. (TOG) 34(1), 1–14. https://doi.org/10.1145/2651362 (2014).
Cai, B., Xu, X., Jia, K., Qing, C. & Tao, D. Dehazenet: An end-to-end system for single image haze removal. IEEE Trans. Image Process. 25(11), 5187–5198 (2016).
Ancuti, C., Ancuti, C. O., De Vleeschouwer, C., & Bovik, A. C. (2016). Night-time dehazing by fusion. in 2016 IEEE International Conference on Image Processing (ICIP) IEEE. 2256–2260. https://doi.org/10.1109/ICIP.2016.7532760
Abraham, N. J., Daway, H. G. & Ali, R. A. Enhancement of images with very low light by using modified brightness low lightness areas algorithm based on sigmoid function. Traitement du Signal 39(4), 1323. https://doi.org/10.18280/ts.390425 (2022).
Yao, Z. (2022). Low-light image enhancement and target detection based on deep learning. Traitement du Signal. 39(4).
Zhang, Q., Lu, S., Liu, L., Liu, Y., Zhang, J., & Shi, D. (2021). Color enhancement of low illumination garden landscape images. Traitement du Signal. https://doi.org/10.18280/ts.380618.
Prakash, S. J., Chetty, M. S. R., & Aravapalli, J. (2022). Swarm based optimization for image dehazing from noise filtering perspective. Ingénierie des Systèmes d’Information. https://doi.org/10.18280/isi.270416
Subramani, B. & Veluchamy, M. Bilateral tone mapping scheme for color correction and contrast adjustment in nearly invisible medical images. Color Res. Appl. 48(6), 748–760. https://doi.org/10.1002/col.22887 (2023).
Bhandari, A. K., Subramani, B. & Veluchamy, M. Multi-exposure optimized contrast and brightness balance color image enhancement. Digital Signal Process. 123, 103406. https://doi.org/10.1016/j.dsp.2022.103406 (2022).
Subramani, B., Bhandari, A. K. & Veluchamy, M. Optimal Bezier curve modification function for contrast degraded images. IEEE Trans. Instrum. Meas. 70, 1–10. https://doi.org/10.1109/TIM.2021.3073320 (2021).
Veluchamy, M., Bhandari, A. K. & Subramani, B. Optimized Bezier curve based intensity mapping scheme for low light image enhancement. IEEE Trans. Emerg. Topics Comput. Intell. 6(3), 602–612. https://doi.org/10.1109/TETCI.2021.3053253 (2021).
Priya, H. & Sruthakeerthi, B. Texture analysis using Horadam polynomial coefficient estimate for the class of Sakaguchi kind function. Sci. Rep. 13(1), 14436. https://doi.org/10.1038/s41598-023-41734-w (2023).
Nithiyanandham, E. K. & Sruthakeerthi, B. Properties on subclass of Sakaguchi type functions using a Mittag-Leffler type Poisson distribution series. Mathematica Bohemica[SPACE]https://doi.org/10.21136/MB.2023.0061-23 (2023).
Nithiyanandham, E. K. & Keerthi, B. S. A new proposed model for image enhancement using the coefficients obtained by a subclass of the Sakaguchi-type function. Signal Image Video Process. 1, 8. https://doi.org/10.1007/s11760-023-02861-z (2023).
Aarthy, B. & Keerthi, B. S. Enhancement of various images using coefficients obtained from a class of Sakaguchi type functions. Sci. Rep. 13(1), 18722. https://doi.org/10.1038/s41598-023-45938-y (2023).
Hosny, K. M. Image representation using accurate orthogonal Gegenbauer moments. Pattern Recognit. Lett. 32(6), 795–804 (2011).
Hosny, K. M., Darwish, M. M. & Eltoukhy, M. M. New fractional-order shifted Gegenbauer moments for image analysis and recognition. J. Adv. Res. 25, 57–66. https://doi.org/10.1016/j.jare.2020.05.024 (2020).
Bulut, S., Priya, H. & Keerthi, B. S. Coefficient bounds for Sakaguchi kind of functions associated with sine function. Aust. J. Math. Anal. Appl. 20(1), 1–8 (2023).
Sundari, K. S. & Keerthi, B. S. Geometrical properties of subclass of analytic function with odd degree. Aust. J. Math. Anal. Appl. 20(2), 1–12 (2023).
Liu, W. & Wang, L. L. Asymptotics of the generalized Gegenbauer functions of fractional degree. J. Approximation Theory 253, 105378. https://doi.org/10.1016/j.jat.2020.105378 (2020).
Magesh, N. & Bulut, S. Chebyshev polynomial coefficient estimates for a class of analytic bi-univalent functions related to pseudo-starlike functions. Afrika Matematika 29, 203–209. https://doi.org/10.1007/s13370-017-0535-3 (2018).
Orhan, H., Magesh, N. & Balaji, V. K. Second Hankel determinant for certain class of bi-univalent functions defined by Chebyshev polynomials. Asian-Eur. J. Math. 12(02), 1950017. https://doi.org/10.1142/S1793557119500177 (2019).
Yousef, F., Frasin, B. A., & Al-Hawary, T. (2018). Fekete-Szego inequality for analytic and bi-univalent functions subordinate to Chebyshev polynomials. arXiv preprint arXiv:1801.09531.
Duren, P. L. Univalent Functions. Grundlehren der Mathematischen Wissenschaften 259 (Springer-Verlag, 1983).
Wang, W., Chen, Z., Yuan, X. & Wu, X. Adaptive image enhancement method for correcting low-illumination images. Inform. Sci. 496, 25–41 (2019).
Wei, C., Wang, W., Yang, W. & Liu, J. Deep Retinex decomposition for low-light enhancement. arXiv preprint arXiv:1808.04560 (2018).
Chen, C., Do, M. N., & Wang, J. Robust image and video dehazing with visual artifact suppression via gradient residual minimization. in Computer Vision-ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11–14, 2016, Proceedings, Part II 14, 576–591. (Springer International Publishing, 2016). https://doi.org/10.1007/978-3-319-46475-6_36
Li, Y., Tan, R. T., & Brown, M. S. (2015). Nighttime haze removal with glow and multiple light colors. in Proceedings of the IEEE International Conference on Computer Vision, 226–234. https://doi.org/10.1109/ICCV.2015.34
Unnikrishnan, H., & Azad, R. B. (2022). Non-local Retinex based dehazing and low light enhancement of images. Traitement du Signal. https://doi.org/10.18280/ts.390313
Lecca, M. & Messelodi, S. SuPeR: Milano Retinex implementation exploiting a regular image grid. JOSA A 36(8), 1423–1432 (2019).
Zhang, L., Zhang, L., Liu, X., Shen, Y., Zhang, S., & Zhao, S. (2019). Zero-shot restoration of back-lit images using deep internal learning. in Proceedings of the 27th ACM International Conference on Multimedia, 1623–1631.
Funding
This research received no specific grant from any funding agency.
Author information
Contributions
K. Sivagami Sundari - Writing original draft, conceptualization. Dr. B. Srutha Keerthi - Supervision.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Sundari, K.S., Keerthi, B.S. Enhancing low-light images using Sakaguchi type function and Gegenbauer polynomial. Sci Rep 14, 29679 (2024). https://doi.org/10.1038/s41598-024-80605-w