
CN114359200A - Image definition evaluation method based on pulse coupling neural network and terminal equipment (Google Patents)

Info

Publication number: CN114359200A
Application number: CN202111629067.8A
Authority: CN (China)
Prior art keywords: matrix, input, pulse, domain, image
Legal status: Granted, currently active (the legal status listed is an assumption, not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN114359200B (granted publication)
Inventors: 陈韬宇, 王华伟, 刘庆, 常三三
Current Assignee: XiAn Institute of Optics and Precision Mechanics of CAS
Original Assignee: XiAn Institute of Optics and Precision Mechanics of CAS
Application filed by XiAn Institute of Optics and Precision Mechanics of CAS
Priority to CN202111629067.8A

Classifications

    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02P: Climate change mitigation technologies in the production or processing of goods
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to an image processing method, and in particular to an image sharpness evaluation method based on a pulse-coupled neural network (PCNN) and to a terminal device, which apply the PCNN to address the technical problem of low image sharpness. The relevant parameters of the feedback input domain, the coupling connection domain and the pulse generation domain of the PCNN are set in advance; the network is iterated cyclically to compute an image sharpness evaluation function value for each image of the sequence acquired during focusing, and a curve is plotted to represent the sharpness trend of the image sequence. The method can be adapted to different scene types and uses only the grayscale information of the image, which improves computational accuracy, avoids interference from other factors, offers good real-time performance, and is computationally efficient enough on terminal equipment to meet engineering requirements.

Description

Image sharpness evaluation method based on a pulse-coupled neural network, and terminal device

Technical Field

The invention relates to an image processing method, and in particular to an image sharpness evaluation method based on a pulse-coupled neural network, and to a terminal device.

Background

During focusing, the mechanical structure of an optical system may fail to settle exactly at the focal point for various reasons, and the focus position of a system that has already been focused may also drift; the resulting defocus degrades imaging quality, so an autofocus function is required. Autofocus technology is widely used in many fields, and current research concentrates on image-processing-based autofocus methods, whose main idea is to obtain information about the degree of defocus of the optical system and then drive a motor to perform focus control according to that information. Using an image sharpness evaluation function to characterize the degree of focus of the optical system, and computing image sharpness quickly and accurately with that function, is the key to the image processing. At the same time, limited by inherent defects of the mechanical structure, the lens may not reach the best focus position precisely, or the image quality may remain poor even at the focus position; an image sharpness evaluation method is therefore needed to improve the final imaging quality.

Summary of the Invention

The purpose of the present invention is to solve the technical problem of low image sharpness by means of a pulse-coupled neural network, and to provide an image sharpness evaluation method based on a pulse-coupled neural network and a terminal device. The pulse-coupled neural network is used to compute image sharpness and, combined with a control strategy, to achieve lens focusing, while its image-enhancement property is used to output clearer, higher-quality images.

To solve the above technical problem, the technical solution adopted by the present invention is as follows:

An image sharpness evaluation method based on a pulse-coupled neural network, characterized in that it comprises the following steps:

Step 1: Construct the pulse-coupled neural network.

The pulse-coupled neural network comprises a feedback input domain, a coupling connection domain and a pulse generation domain.

The feedback input domain comprises the input matrix F, the coupling connection domain comprises the coupling connection input matrix L, and the pulse generation domain comprises the dynamic threshold matrix E, the neuron internal activity matrix U and the pulse output matrix Y.

Step 2: Preprocess the image and set the initialization parameters.

2.1) Compute the grayscale matrix of the image and normalize it; the result is the stimulus input matrix S.

The size i*j of the stimulus input matrix S is determined by the resolution of the image, where i indexes the rows of S and j indexes the columns of S.

2.2) Assign the values of the stimulus input matrix S to the input matrix F.

2.3) Initialize the coupling connection input matrix L, the neuron internal activity matrix U, the pulse output matrix Y and the global feature matrix Q of the pulse-coupled neural network to zero matrices.

2.4) Set the feedback input domain coefficient matrix M and the coupling connection domain coefficient matrix W of the pulse-coupled neural network.

2.5) Compute the dynamic threshold matrix E.

Compute the convolution of the input matrix F with the Laplacian operator, which returns a matrix of the same size as F; subtract this result from F to obtain the initial dynamic threshold matrix E.
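As an illustration, a minimal NumPy sketch of the preprocessing in steps 2.1 to 2.6 might look as follows. The 3x3 Laplacian kernel, the symmetric boundary handling and the min-max normalization to [0, 1] are assumptions, since the patent does not fix them.

```python
import numpy as np
from scipy.signal import convolve2d


def preprocess(gray):
    """Steps 2.1-2.6: build S, F, the initial dynamic threshold E and the maximum T.

    gray : 2-D array of raw grayscale values (one entry per pixel).
    """
    # 2.1) normalize the grayscale matrix to [0, 1] -> stimulus input matrix S
    S = (gray - gray.min()) / max(float(gray.max() - gray.min()), 1e-12)
    # 2.2) the input matrix F starts as a copy of S
    F = S.copy()
    # 2.5) convolve F with a Laplacian operator (assumed 3x3 kernel) and
    #      subtract the result from F to obtain the initial threshold E
    laplacian = np.array([[0.0,  1.0, 0.0],
                          [1.0, -4.0, 1.0],
                          [0.0,  1.0, 0.0]])
    E = F - convolve2d(F, laplacian, mode="same", boundary="symm")
    # 2.6) maximum value T of the input matrix F
    T = float(F.max())
    return S, F, E, T
```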

2.6) Find the maximum value T of the input matrix F.

2.7) Set the amplification factors and decay time constants.

Set the amplification factor vF of the feedback input domain, the amplification factor vL of the coupling connection domain, the amplification factor vE of the pulse generation domain, the decay time constant αF of the input matrix F, the decay time constant αL of the coupling connection input matrix L, and the decay time constant αE of the dynamic threshold matrix E. Here vF, vL and vE are natural numbers greater than or equal to 1, and αF, αL and αE satisfy 0<αF<1, 0<αL<1 and 0<αE<1.

2.8) Set the connection coefficient β of the coupling connection input matrix L with respect to the neuron internal activity matrix U, where β expresses the proportional contribution of L to U and satisfies 0<β<1.

2.9) Let n denote the cycle index, with N≥n≥1, where N is the upper limit on the number of cycles; set the initial value n=1.
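A hedged example of how the free parameters of step 2 might be chosen is sketched below; every numeric value (the 3x3 weight matrices M and W, the gains, the decay constants, β and N) is an illustrative assumption, since the patent only constrains the ranges.

```python
import numpy as np


def init_parameters():
    """Illustrative choices for steps 2.4 and 2.7-2.9; step 2.3 (zero matrices
    for L, U, Y and Q) is handled inside the iteration sketch further below."""
    # 2.4) 3x3 weight matrices for the feedback and coupling domains
    #      (the values are an assumption; the patent only fixes the 3x3 size)
    M = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    W = M.copy()
    # 2.7) gains (natural numbers >= 1) and decay constants (0 < alpha < 1)
    vF, vL, vE = 1, 1, 1
    aF, aL, aE = 0.1, 0.1, 0.5
    # 2.8) coupling coefficient, 0 < beta < 1
    beta = 0.2
    # 2.9) upper limit N on the number of cycles
    N = 50
    return M, W, (vF, vL, vE), (aF, aL, aE), beta, N
```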

Step 3: Based on the pulse-coupled neural network, compute the dynamic threshold matrix E(n), the neuron internal activity matrix U(n) and the pulse output matrix Y(n).

Step 4: Evaluate the loop result

4.1) If U(n)≤E(n), then Y(n) is the zero matrix, the loop ends and step 5 is executed.

If U(n)>E(n), check whether n equals N.

If n≠N, set n=n+1 and return to step 3.

If n=N, the loop ends and step 5 is executed.

Step 5: Output the result

5.1) Compute the global feature matrix Q

Q(n) = Q(n-1) + Y(n), i.e. Q is the accumulation of the pulse output matrices Y(n) produced in all cycles.

5.2) Standardize the global feature matrix Q obtained in step 5.1, and output the standardized global feature matrix Q.

5.3) Compute the sum of all elements of the standardized global feature matrix Q obtained in step 5.2; this sum is the grayscale sum of the image corresponding to Q and serves as the value of the image sharpness evaluation function.
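A minimal sketch of step 5, assuming that "standardizing" Q means min-max normalization to [0, 1] (the patent does not specify the normalization); the sharpness value is then simply the element sum.

```python
import numpy as np


def sharpness_from_Q(Q):
    """Step 5: standardize the accumulated global feature matrix Q and
    return the sum of its elements as the sharpness evaluation value."""
    # 5.2) min-max normalization is assumed here
    Q_std = (Q - Q.min()) / max(float(Q.max() - Q.min()), 1e-12)
    # 5.3) grayscale sum of the standardized matrix = sharpness evaluation value
    return float(Q_std.sum())
```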

Further, in step 3, computing the dynamic threshold matrix E(n), the neuron internal activity matrix U(n) and the pulse output matrix Y(n) based on the pulse-coupled neural network specifically comprises:

3.1) Compute the convolution of the pulse output matrix Y(n-1) with the feedback input domain coefficient matrix M and save it as the intermediate matrix J(n); when n=1, Y(n-1), i.e. Y(0), is the initial value of the pulse output matrix Y.

3.2) Compute the convolution of the pulse output matrix Y(n-1) with the coupling connection domain coefficient matrix W and save it as the intermediate matrix K(n).

3.3) Compute the input matrix F(n) = exp(-αF)*F(n-1) + vF*J(n-1)

where:

F(n-1) is the value of F(n) in the previous cycle; when n=1, F(n-1), i.e. F(0), is the initial value of the input matrix F;

J(n-1) is the value of J(n) in the previous cycle; when n=1, J(n-1), i.e. J(0), is the initial value of the intermediate matrix J, which is the zero matrix.

3.4) Compute the coupling connection input matrix L(n) = exp(-αL)*L(n-1) + vL*K(n-1)

where:

L(n-1) is the value of L(n) in the previous cycle; when n=1, L(n-1), i.e. L(0), is the initial value of the coupling connection input matrix L;

K(n-1) is the value of K(n) in the previous cycle; when n=1, K(n-1), i.e. K(0), is the initial value of the intermediate matrix K, which is the zero matrix.

3.5) Compute the dynamic threshold matrix E(n) = exp(-αE)*E(n-1) + vE*Y(n-1)

where:

E(n-1) is the value of E(n) in the previous cycle; when n=1, E(n-1), i.e. E(0), is the initial value of the dynamic threshold matrix E;

Y(n-1) is the value of Y(n) in the previous cycle.

3.6) Compute the entries Uij(n) of the neuron internal activity matrix U(n) according to:

Uij(n) = Fij(n)*(1 + β*Lij(n))

where:

Uij(n) is the entry in the i-th row and j-th column of U(n);

Fij(n) is the entry in the i-th row and j-th column of F(n);

Lij(n) is the entry in the i-th row and j-th column of L(n).

3.7) Compute the pulse output matrix Y(n) = (lnT - (n-1)*αE)*(U(n) - E(n)).
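Putting steps 3.1 to 3.7 together with the step-4 loop test and the accumulation of Q described in the embodiment, a minimal NumPy sketch of one full run might look like the following. It follows the update equations exactly as written (including the use of J(n-1) and K(n-1) in 3.3 and 3.4); interpreting the stopping test U(n)≤E(n) as an element-wise comparison over the whole matrix is an assumption.

```python
import numpy as np
from scipy.signal import convolve2d


def run_pcnn(F, E, T, M, W, gains, decays, beta, N):
    """Steps 3 and 4: iterate the PCNN and accumulate the global feature matrix Q."""
    vF, vL, vE = gains
    aF, aL, aE = decays
    # 2.3) L, Y and Q start as zero matrices; so do the intermediate J(0), K(0)
    L = np.zeros_like(F)
    Y = np.zeros_like(F)
    Q = np.zeros_like(F)
    J_prev = np.zeros_like(F)
    K_prev = np.zeros_like(F)
    for n in range(1, N + 1):
        # 3.1 / 3.2) convolve the previous pulse output Y(n-1) with M and W
        J = convolve2d(Y, M, mode="same", boundary="symm")
        K = convolve2d(Y, W, mode="same", boundary="symm")
        # 3.3 / 3.4 / 3.5) decayed previous value plus amplified feed term
        F = np.exp(-aF) * F + vF * J_prev
        L = np.exp(-aL) * L + vL * K_prev
        E = np.exp(-aE) * E + vE * Y
        # 3.6) neuron internal activity
        U = F * (1.0 + beta * L)
        # 3.7) pulse output, scaled by (ln T - (n-1)*aE)
        Y = (np.log(T) - (n - 1) * aE) * (U - E)
        # step 4) stop when U no longer exceeds E (assumed element-wise everywhere)
        if np.all(U <= E):
            Y = np.zeros_like(F)
            break
        # accumulate each cycle's pulse output into Q (step 5.1, per the embodiment)
        Q += Y
        J_prev, K_prev = J, K
    return Q
```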

Further, the method also comprises step 6: repeat steps 1 to 5.3 to obtain the image sharpness evaluation function value for each image of an image sequence, and plot a curve representing the sharpness trend of the image sequence.
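A sketch of step 6 under the same assumptions: the helpers preprocess, init_parameters, run_pcnn and sharpness_from_Q are the illustrative functions defined in the sketches above, not names from the patent.

```python
import matplotlib.pyplot as plt


def sharpness_curve(gray_images):
    """Step 6: evaluate every frame of a focusing sequence and plot the trend."""
    values = []
    for gray in gray_images:
        _, F, E, T = preprocess(gray)                        # steps 2.1-2.6
        M, W, gains, decays, beta, N = init_parameters()     # steps 2.4, 2.7-2.9
        Q = run_pcnn(F, E, T, M, W, gains, decays, beta, N)  # steps 3-4 with Q accumulation
        values.append(sharpness_from_Q(Q))                   # step 5
    plt.plot(values)
    plt.xlabel("frame index in the focusing sequence")
    plt.ylabel("sharpness evaluation value")
    plt.show()
    return values
```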

Further, in step 2.3), the feedback input domain coefficient matrix M and the coupling connection domain coefficient matrix W are both of size 3*3.

Further, the input of the feedback input domain is the pulse output of the previous cycle, and its output is the feedback input of the current cycle; the input of the coupling connection domain is the pulse output of the neighbouring neurons in the previous cycle, and its output is the coupling connection input of the current cycle; the input of the pulse generation domain is the internal activity strength determined by the current feedback input and coupling connection input, and its outputs are the current pulse output and the dynamic threshold that determines the output strength.

The present invention also provides a terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the above method.

Compared with the prior art, the beneficial effects of the technical solution of the present invention are:

The image sharpness evaluation method of the present invention applies the pulse-coupled neural network to image sharpness evaluation and autofocus. The network parameters of the pulse-coupled neural network are set in advance and require no lengthy multi-sample training or parameter tuning, so the method can be applied directly in engineering and saves a great deal of engineering time. Compared with traditional image processing methods it is more versatile and realizes several functions: the pulse-coupled neural network can be used on its own for image enhancement to improve image quality, on its own for image sharpness evaluation, or for both combined. In addition, by modifying the network parameters the method can be adapted to different scene types; it uses only the grayscale information of the image, which improves computational accuracy and avoids interference from other factors; and it has good real-time performance, with computation efficient enough on hardware to meet engineering requirements.

Brief Description of the Drawings

FIG. 1 is a schematic diagram of the loop-iteration structure of the pulse-coupled neural network model in the image sharpness evaluation method based on a pulse-coupled neural network of the present invention.

The reference numbers in the figure are:

1 - feedback input domain, 2 - coupling connection domain, 3 - pulse generation domain.

Detailed Description

The technical solution of the present invention will be described clearly and completely below with reference to the embodiments of the present invention and the accompanying drawing; obviously, the described embodiments do not limit the present invention.

A pulse-coupled neural network (PCNN) is a quantitative description of the signalling behaviour of neurons in the mammalian visual cortex; its biological characteristics match the lag and exponentially decaying visual persistence with which the human eye perceives brightness changes. This property is often exploited in image processing tasks such as image segmentation and edge extraction, and by changing the network structure and parameters, functions such as image enhancement, filtering and fusion can also be realized.

The pulse-coupled neural network comprises three parts: the feedback input domain 1, the coupling connection domain 2 and the pulse generation domain 3. The input of the feedback input domain 1 is the pulse output of the previous cycle, and its output is the feedback input of the current cycle; the input of the coupling connection domain 2 is the pulse output of the neighbouring neurons in the previous cycle, and its output is the coupling connection input of the current cycle; the input of the pulse generation domain 3 is the internal activity strength determined by the current feedback input and coupling connection input, and its outputs are the current pulse output and the dynamic threshold that determines the output strength.

The image processing model of the pulse-coupled neural network consists of a two-dimensional single-layer array of pulse-coupled neurons. The number of neurons equals the number of pixels, with a one-to-one correspondence between neurons and pixels. Each neuron sits at the centre of a 3*3 feedback input domain coefficient matrix M and a 3*3 coupling connection domain coefficient matrix W (M and W are weight connection matrices), and the adjacent pixels correspond to the neighbouring neurons addressed by M and W. Each neuron can be connected to its neighbouring neurons with various weights, which are expressed by the values of the weight connection matrices.

The pulse-coupled neural network model evolved from a biological visual neuron model and is more sensitive to darker regions, i.e. regions with low gray values. Before image processing, in order to better highlight the edge features of the dark parts of the image, the grayscale differences between image pixels are often enlarged artificially, enhancing the image edges so as to emphasize the feature information. For this purpose the initial value of the dynamic threshold matrix E is modified: the stimulus input matrix S is edge-enhanced by filtering with a Laplacian operator, and the image is then inverted. The originally dark regions of the image become brighter and their gray values increase; assigning these values to the dynamic threshold matrix E raises the threshold of the dark regions so that they decay more slowly and receive more processing during the loop iterations.

Once the stimulus input matrix S has been produced, the neurons are activated. Each neuron receives signals related to the input matrix F and the coupling connection input matrix L, producing the neuron internal activity matrix U. A neuron fires only when the internal activity matrix U exceeds the dynamic threshold matrix E, producing the pulse output matrix Y; the strength of Y is proportional to the difference between U and E, and the running accumulation of Y forms the global feature matrix Q. Each neuron is also influenced by the neighbouring neurons of the previous cycle: the pulse output matrix Y of the previous cycle acts on the adjacent neurons through the feedback input domain coefficient matrix M and the coupling connection domain coefficient matrix W, producing a new input matrix F and a new coupling connection input matrix L, so the loop iteration resembles a pulse process. After several pulse cycles the internal activity matrix U no longer exceeds the dynamic threshold matrix E and the pulse output matrix Y is no longer excited, i.e. the loop ends. The global feature matrix Q at that point is the enhanced image, analogous to the image seen by the human eye after processing by a spiking neural network.

During lens focusing on the same target image (or video), the image at the in-focus position has richer dark-region detail, so its gray values after inversion are larger, more cycles are required, and the neurons around the image are activated to produce pulses; the accumulated global feature matrix Q therefore reaches a larger gray value. By computing the grayscale sum of the global feature matrix Q of the enhanced image, this feature can be extracted as a representation of image sharpness. According to the final result, the enhanced image and the sharpness evaluation function value are output for the next stage of operation, and autofocus is achieved in combination with a control strategy. The specific method is as follows:
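The patent leaves the control strategy open. As one hedged illustration, a simple sweep-and-pick focus search could use the sharpness value as follows; move_to, capture_gray and evaluate_sharpness are hypothetical placeholders for the focus motor, the camera interface and the evaluation pipeline sketched above.

```python
def autofocus_sweep(move_to, capture_gray, evaluate_sharpness, positions):
    """Hypothetical control loop: sweep the lens, pick the sharpest position."""
    values = []
    for p in positions:
        move_to(p)                                   # drive the focus motor to position p
        values.append(evaluate_sharpness(capture_gray()))
    best = positions[values.index(max(values))]
    move_to(best)                                    # return to the peak-sharpness position
    return best, values
```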

An image sharpness evaluation method based on a pulse-coupled neural network comprises the following steps:

Step 1: Construct the pulse-coupled neural network.

The pulse-coupled neural network comprises the feedback input domain 1, the coupling connection domain 2 and the pulse generation domain 3.

The feedback input domain 1 comprises the input matrix F, the coupling connection domain 2 comprises the coupling connection input matrix L, and the pulse generation domain 3 comprises the dynamic threshold matrix E, the neuron internal activity matrix U and the pulse output matrix Y.

Step 2: Preprocess the image and set the initialization parameters.

2.1) Compute the grayscale matrix of the image and normalize it; the result is the stimulus input matrix S.

The size i*j of the stimulus input matrix S is determined by the resolution of the image, where i indexes the rows of S and j indexes the columns of S.

2.2) Assign the values of the stimulus input matrix S to the input matrix F.

2.3) Initialize the coupling connection input matrix L, the neuron internal activity matrix U, the pulse output matrix Y and the global feature matrix Q of the pulse-coupled neural network to zero matrices; here the feedback input domain coefficient matrix M and the coupling connection domain coefficient matrix W are both of size 3*3.

2.4) Set the feedback input domain coefficient matrix M and the coupling connection domain coefficient matrix W of the pulse-coupled neural network.

2.5) Compute the dynamic threshold matrix E.

Compute the convolution of the input matrix F with the Laplacian operator, which returns a matrix of the same size as F; subtract this result from F to obtain the initial dynamic threshold matrix E.

2.6) Find the maximum value T of the input matrix F.

2.7) Set the amplification factors and decay time constants.

Set the amplification factor vF of the feedback input domain 1, the amplification factor vL of the coupling connection domain 2, the amplification factor vE of the pulse generation domain 3, the decay time constant αF of the input matrix F, the decay time constant αL of the coupling connection input matrix L, and the decay time constant αE of the dynamic threshold matrix E. Here vF, vL and vE are natural numbers greater than or equal to 1, and αF, αL and αE satisfy 0<αF<1, 0<αL<1 and 0<αE<1.

2.8) Set the connection coefficient β of the coupling connection input matrix L with respect to the neuron internal activity matrix U, where β expresses the proportional contribution of L to U and satisfies 0<β<1.

2.9) Let n denote the cycle index, with N≥n≥1, where N is the upper limit on the number of cycles; set the initial value n=1.

Step 3: Based on the pulse-coupled neural network, compute the dynamic threshold matrix E(n), the neuron internal activity matrix U(n) and the pulse output matrix Y(n).

3.1) Compute the convolution of the pulse output matrix Y(n-1) with the feedback input domain coefficient matrix M and save it as the intermediate matrix J(n); when n=1, Y(n-1), i.e. Y(0), is the initial value of the pulse output matrix Y.

3.2) Compute the convolution of the pulse output matrix Y(n-1) with the coupling connection domain coefficient matrix W and save it as the intermediate matrix K(n).

3.3) Compute the input matrix F(n) = exp(-αF)*F(n-1) + vF*J(n-1)

where:

F(n-1) is the value of F(n) in the previous cycle; when n=1, F(n-1), i.e. F(0), is the initial value of the input matrix F;

J(n-1) is the value of J(n) in the previous cycle; when n=1, J(n-1), i.e. J(0), is the initial value of the intermediate matrix J, which is the zero matrix.

3.4) Compute the coupling connection input matrix L(n) = exp(-αL)*L(n-1) + vL*K(n-1)

where:

L(n-1) is the value of L(n) in the previous cycle; when n=1, L(n-1), i.e. L(0), is the initial value of the coupling connection input matrix L;

K(n-1) is the value of K(n) in the previous cycle; when n=1, K(n-1), i.e. K(0), is the initial value of the intermediate matrix K, which is the zero matrix.

3.5) Compute the dynamic threshold matrix E(n) = exp(-αE)*E(n-1) + vE*Y(n-1)

where:

E(n-1) is the value of E(n) in the previous cycle; when n=1, E(n-1), i.e. E(0), is the initial value of the dynamic threshold matrix E;

Y(n-1) is the value of Y(n) in the previous cycle.

3.6) Compute the entries Uij(n) of the neuron internal activity matrix U(n) according to:

Uij(n) = Fij(n)*(1 + β*Lij(n))

where:

Uij(n) is the entry in the i-th row and j-th column of U(n);

Fij(n) is the entry in the i-th row and j-th column of F(n);

Lij(n) is the entry in the i-th row and j-th column of L(n).

3.7) Compute the pulse output matrix Y(n) = (lnT - (n-1)*αE)*(U(n) - E(n)).

Step 4: Evaluate the loop result

4.1) If U(n)≤E(n), then Y(n) is the zero matrix, the loop ends and step 5 is executed.

If U(n)>E(n), check whether n equals N.

If n≠N, set n=n+1 and return to step 3.

If n=N, the loop ends and step 5 is executed.

Step 5: Output the result

5.1) Compute the global feature matrix Q

Q(n) = Q(n-1) + Y(n), i.e. Q is the accumulation of the pulse output matrices Y(n) produced in all cycles.

5.2) Standardize the global feature matrix Q obtained in step 5.1, and output the standardized global feature matrix Q.

5.3) Compute the sum of all elements of the standardized global feature matrix Q obtained in step 5.2; this sum is the grayscale sum of the image corresponding to Q and serves as the value of the image sharpness evaluation function.

Step 6: Repeat steps 1 to 5.3 to obtain the image sharpness evaluation function value for each image of the image sequence, and plot a curve representing the sharpness trend of the image sequence.

In addition, the image sharpness evaluation method based on a pulse-coupled neural network of the present invention can also be applied to a terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps of the image sharpness evaluation method of the present invention are implemented. The terminal device here may be a computing device such as a computer, a notebook, a handheld computer or any of various cloud servers, and the processor may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit or another programmable logic device.

In this embodiment, the loop iteration of the pulse-coupled neural network model proceeds as follows:

The feedback input domain 1 has the feedback input domain coefficient matrix M, the intermediate matrix J, the input matrix F, the amplification factor vF of the feedback input domain 1 and the decay time constant αF of the input matrix F.

In the feedback input domain 1, the input matrix F is composed of two parts. One part comes from the pulse output matrix Y of the previous cycle (i.e. Y(n-1)): it is convolved with the feedback input domain coefficient matrix M to give the intermediate matrix J, which is multiplied by the amplification factor vF of the feedback input domain 1. The other part is the input matrix F of the previous cycle (i.e. F(n-1)) attenuated by its decay time constant αF (i.e. multiplied by exp(-αF)). Adding the two parts gives the input matrix F of the current cycle (i.e. F(n)).

The coupling connection domain 2 has the coupling connection domain coefficient matrix W, the intermediate matrix K, the coupling connection input matrix L, the amplification factor vL of the coupling connection domain 2 and the decay time constant αL of the coupling connection input matrix L.

In the coupling connection domain 2, the coupling connection input matrix L is composed of two parts. One part comes from the pulse output matrix Y of the neighbouring neurons in the previous cycle: it is convolved with the coupling connection domain coefficient matrix W to give the intermediate matrix K, which is multiplied by the amplification factor vL of the coupling connection domain 2. The other part is the coupling connection input matrix L of the previous cycle (i.e. L(n-1)) attenuated by its decay time constant αL (i.e. multiplied by exp(-αL)). Adding the two parts gives the coupling connection input matrix L of the current cycle (i.e. L(n)).

The pulse generation domain 3 has the dynamic threshold matrix E, the amplification factor vE of the pulse generation domain 3, the decay time constant αE of the dynamic threshold matrix E, the pulse output matrix Y, the maximum image gray value T and the neuron internal activity matrix U.

In the pulse generation domain 3, the dynamic threshold matrix E is composed of two parts: one part is the pulse output matrix Y of the previous cycle multiplied by the amplification factor vE of the pulse generation domain 3, and the other part is the dynamic threshold matrix E of the previous cycle (i.e. E(n-1)) attenuated by its decay time constant αE (i.e. multiplied by exp(-αE)). Adding the two parts gives the dynamic threshold matrix E of the current cycle (i.e. E(n)).

At the same time, the current neuron internal activity matrix U is also composed of two parts: one part is the influence of the central neuron, i.e. the input matrix F of the current cycle; the other part is the influence of the pulse output matrix Y of the neighbouring neurons in the previous cycle, obtained by multiplying the coupling connection input matrix L of the current cycle by the connection coefficient β and adding the base weight 1. The product of these two parts is the neuron internal activity matrix U.

When the neuron internal activity matrix U exceeds the dynamic threshold matrix E, a pulse excitation is generated: the pulse generator produces a pulse output matrix Y whose strength is positively correlated with the maximum value T, and the next judgement or operation is performed. The pulse output matrices Y produced in each cycle are accumulated, and the resulting global feature matrix Q is the final output.

The above is only an embodiment of the present invention and does not limit the scope of protection of the present invention; any equivalent structural transformation made using the contents of the description and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (6)

1. An image sharpness evaluation method based on a pulse-coupled neural network, characterized in that it comprises the following steps:

Step 1: Construct the pulse-coupled neural network;

the pulse-coupled neural network comprises a feedback input domain (1), a coupling connection domain (2) and a pulse generation domain (3);

the feedback input domain (1) comprises the input matrix F, the coupling connection domain (2) comprises the coupling connection input matrix L, and the pulse generation domain (3) comprises the dynamic threshold matrix E, the neuron internal activity matrix U and the pulse output matrix Y;

Step 2: Preprocess the image and set the initialization parameters;

2.1) compute the grayscale matrix of the image and normalize it to obtain the stimulus input matrix S; the size i*j of S is determined by the resolution of the image, where i indexes the rows of S and j indexes the columns of S;

2.2) assign the values of the stimulus input matrix S to the input matrix F;

2.3) initialize the coupling connection input matrix L, the neuron internal activity matrix U, the pulse output matrix Y and the global feature matrix Q of the pulse-coupled neural network to zero matrices;

2.4) set the feedback input domain coefficient matrix M and the coupling connection domain coefficient matrix W of the pulse-coupled neural network;

2.5) compute the dynamic threshold matrix E: compute the convolution of the input matrix F with the Laplacian operator, which returns a matrix of the same size as F, and subtract the result from F to obtain the initial dynamic threshold matrix E;

2.6) find the maximum value T of the input matrix F;

2.7) set the amplification factor vF of the feedback input domain (1), the amplification factor vL of the coupling connection domain (2), the amplification factor vE of the pulse generation domain (3), the decay time constant αF of the input matrix F, the decay time constant αL of the coupling connection input matrix L, and the decay time constant αE of the dynamic threshold matrix E, where vF, vL and vE are natural numbers greater than or equal to 1, and 0<αF<1, 0<αL<1, 0<αE<1;

2.8) set the connection coefficient β of the coupling connection input matrix L with respect to the neuron internal activity matrix U, where β expresses the proportional contribution of L to U and 0<β<1;

2.9) let n denote the cycle index, N≥n≥1, where N is the upper limit on the number of cycles, and set the initial value n=1;

Step 3: Based on the pulse-coupled neural network, compute the dynamic threshold matrix E(n), the neuron internal activity matrix U(n) and the pulse output matrix Y(n);

Step 4: Evaluate the loop result

4.1) if U(n)≤E(n), then Y(n) is the zero matrix, the loop ends and step 5 is executed;

if U(n)>E(n), check whether n equals N;

if n≠N, set n=n+1 and return to step 3;

if n=N, the loop ends and step 5 is executed;

Step 5: Output the result

5.1) compute the global feature matrix Q by accumulating the pulse output matrices of all cycles, Q(n) = Q(n-1) + Y(n);

5.2) standardize the global feature matrix Q obtained in step 5.1 and output the standardized global feature matrix Q;

5.3) compute the sum of all elements of the standardized global feature matrix Q obtained in step 5.2, which is the grayscale sum of the corresponding image and serves as the value of the image sharpness evaluation function.

2. The image sharpness evaluation method based on a pulse-coupled neural network according to claim 1, characterized in that in step 3, computing the dynamic threshold matrix E(n), the neuron internal activity matrix U(n) and the pulse output matrix Y(n) based on the pulse-coupled neural network specifically comprises:

3.1) computing the convolution of the pulse output matrix Y(n-1) with the feedback input domain coefficient matrix M and saving it as the intermediate matrix J(n); when n=1, Y(n-1), i.e. Y(0), is the initial value of the pulse output matrix Y;

3.2) computing the convolution of the pulse output matrix Y(n-1) with the coupling connection domain coefficient matrix W and saving it as the intermediate matrix K(n);

3.3) computing the input matrix F(n) = exp(-αF)*F(n-1) + vF*J(n-1), where F(n-1) is the value of F(n) in the previous cycle (when n=1, F(n-1), i.e. F(0), is the initial value of the input matrix F), and J(n-1) is the value of J(n) in the previous cycle (when n=1, J(n-1), i.e. J(0), is the initial value of the intermediate matrix J, which is the zero matrix);

3.4) computing the coupling connection input matrix L(n) = exp(-αL)*L(n-1) + vL*K(n-1), where L(n-1) is the value of L(n) in the previous cycle (when n=1, L(n-1), i.e. L(0), is the initial value of the coupling connection input matrix L), and K(n-1) is the value of K(n) in the previous cycle (when n=1, K(n-1), i.e. K(0), is the initial value of the intermediate matrix K, which is the zero matrix);

3.5) computing the dynamic threshold matrix E(n) = exp(-αE)*E(n-1) + vE*Y(n-1), where E(n-1) is the value of E(n) in the previous cycle (when n=1, E(n-1), i.e. E(0), is the initial value of the dynamic threshold matrix E), and Y(n-1) is the value of Y(n) in the previous cycle;

3.6) computing the entries Uij(n) of the neuron internal activity matrix U(n) as Uij(n) = Fij(n)*(1 + β*Lij(n)), where Uij(n), Fij(n) and Lij(n) are the entries in the i-th row and j-th column of U(n), F(n) and L(n) respectively;

3.7) computing the pulse output matrix Y(n) = (lnT - (n-1)*αE)*(U(n) - E(n)).

3. The image sharpness evaluation method based on a pulse-coupled neural network according to claim 2, characterized in that it further comprises step 6: repeating steps 1 to 5.3 to obtain the image sharpness evaluation function value for each image of an image sequence, and plotting a curve representing the sharpness trend of the image sequence.

4. The image sharpness evaluation method based on a pulse-coupled neural network according to claim 1 or 2, characterized in that in step 2.3), the feedback input domain (1) coefficient matrix M and the coupling connection domain (2) coefficient matrix W are both of size 3*3.

5. The image sharpness evaluation method based on a pulse-coupled neural network according to claim 1 or 2, characterized in that the input of the feedback input domain (1) is the pulse output of the previous cycle, and its output is the feedback input of the current cycle; the input of the coupling connection domain (2) is the pulse output of the neighbouring neurons in the previous cycle, and its output is the coupling connection input of the current cycle; and the input of the pulse generation domain (3) is the internal activity strength determined by the current feedback input and coupling connection input, and its outputs are the current pulse output and the dynamic threshold that determines the output strength.

6. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 5.
CN202111629067.8A 2021-12-28 2021-12-28 Image definition evaluation method based on pulse coupling neural network and terminal equipment Active CN114359200B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111629067.8A CN114359200B (en) 2021-12-28 2021-12-28 Image definition evaluation method based on pulse coupling neural network and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111629067.8A CN114359200B (en) 2021-12-28 2021-12-28 Image definition evaluation method based on pulse coupling neural network and terminal equipment

Publications (2)

Publication Number Publication Date
CN114359200A true CN114359200A (en) 2022-04-15
CN114359200B CN114359200B (en) 2023-04-18

Family

ID=81102416

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111629067.8A Active CN114359200B (en) 2021-12-28 2021-12-28 Image definition evaluation method based on pulse coupling neural network and terminal equipment

Country Status (1)

Country Link
CN (1) CN114359200B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104008537A (en) * 2013-11-04 2014-08-27 无锡金帆钻凿设备股份有限公司 Novel noise image fusion method based on CS-CT-CHMM
CN108985252A (en) * 2018-07-27 2018-12-11 陕西师范大学 The image classification method of improved pulse deep neural network
WO2021012752A1 (en) * 2019-07-23 2021-01-28 中建三局智能技术有限公司 Spiking neural network-based short-range tracking method and system
CN112785539A (en) * 2021-01-30 2021-05-11 西安电子科技大学 Multi-focus image fusion method based on image adaptive decomposition and parameter adaptive

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104008537A (en) * 2013-11-04 2014-08-27 无锡金帆钻凿设备股份有限公司 Novel noise image fusion method based on CS-CT-CHMM
CN108985252A (en) * 2018-07-27 2018-12-11 陕西师范大学 The image classification method of improved pulse deep neural network
WO2021012752A1 (en) * 2019-07-23 2021-01-28 中建三局智能技术有限公司 Spiking neural network-based short-range tracking method and system
CN112785539A (en) * 2021-01-30 2021-05-11 西安电子科技大学 Multi-focus image fusion method based on image adaptive decomposition and parameter adaptive

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
D. AGRAWAL 等: "Multifocus image fusion using modified pulse coupled neural network for improved image quality", 《IET IMAGE PROCESSING》 *
KANGJIAN HE 等: "Multi-focus image fusion combining focus-region-level partition and pulse-coupled neural network", 《METHODOLOGIES AND APPLICATION》 *
TAOYU CHEN 等: "Research on Auto-focusing Method Based on Pulse Coupled Neural Network", 《INTERNATIONAL CONFERENCE ON ADVANCED ALGORITHMS AND CONTROL ENGINEERING (ICAACE 2021)》 *
王爱文等: "基于脉冲耦合神经网络的图像分割", 《计算机科学》 *
陈广秋等: "基于图像质量评价参数的FDST域图像融合", 《光电子.激光》 *

Also Published As

Publication number Publication date
CN114359200B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
Li et al. Fast multi-scale structural patch decomposition for multi-exposure image fusion
Wen et al. A simple local minimal intensity prior and an improved algorithm for blind image deblurring
Talebi et al. Learned perceptual image enhancement
Fu et al. A multi-task network for joint specular highlight detection and removal
CN108986050B (en) Image and video enhancement method based on multi-branch convolutional neural network
Xiao et al. Brightness and contrast controllable image enhancement based on histogram specification
CN109902715B (en) Infrared dim target detection method based on context aggregation network
Bar et al. Semi-blind image restoration via Mumford-Shah regularization
CN109472193A (en) Method for detecting human face and device
Zhong et al. Deep attentional guided image filtering
Kumwilaisak et al. Image denoising with deep convolutional neural and multi-directional long short-term memory networks under Poisson noise environments
CN114092760A (en) Self-adaptive feature fusion method and system in convolutional neural network
CN106204502B (en) Based on mixing rank L0Regularization fuzzy core estimation method
CN105931191B (en) Blind Image Deconvolution Method Based on Convex Mixed Regular Prior
CN111783935B (en) Convolutional neural network construction method, device, equipment and medium
Tan et al. High dynamic range imaging for dynamic scenes with large-scale motions and severe saturation
Herulambang et al. Comparison of SVM And BPNN methods in the classification of batik patterns based on color histograms and invariant moments
CN111507135A (en) Face detection method and device, computer equipment and storage medium
CN108156130B (en) Network attack detection method and device
CN110211122A (en) A kind of detection image processing method and processing device
CN112614108B (en) Method and device for detecting nodules in thyroid ultrasound image based on deep learning
CN114359200B (en) Image definition evaluation method based on pulse coupling neural network and terminal equipment
Shi et al. Combined channel and spatial attention for YOLOv5 during target detection
CN114241204B (en) Image recognition method, device, equipment, medium and computer product
Zhu et al. HDRfeat: A feature-rich network for high dynamic range image reconstruction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant