TWI769603B - Image processing method and computer readable medium thereof
- Publication number: TWI769603B (application TW109142281A)
- Authority: TW (Taiwan)
- Prior art keywords: image, prediction, feature, neural network, convolutional neural
- Prior art date: 2019-12-31
Classifications
- G06F18/24 — Classification techniques (G — Physics; G06 — Computing, calculating or counting; G06F — Electric digital data processing; G06F18/00 — Pattern recognition; G06F18/20 — Analysing)
- G06N3/045 — Combinations of networks (G06N — Computing arrangements based on specific computational models; G06N3/00 — Computing arrangements based on biological models; G06N3/02 — Neural networks; G06N3/04 — Architecture, e.g. interconnection topology)
- G06N3/08 — Learning methods (G06N — Computing arrangements based on specific computational models; G06N3/00 — Computing arrangements based on biological models; G06N3/02 — Neural networks)
- G06V10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components (G06V — Image or video recognition or understanding; G06V10/00 — Arrangements for image or video recognition or understanding; G06V10/40 — Extraction of image or video features)
Description
The present invention relates to the technical field of image recognition, and in particular to an image processing method and a storage medium.
Using deep-learning convolutional networks to discriminate images is the current mainstream approach, and such networks perform excellently at image discrimination. They can also be applied to object detection in images, but deep-learning convolutional networks have certain limitations when discriminating targets. For example, it is difficult for developers to interpret the model a convolutional network produces; its quality can only be judged from its discrimination results, so it is not easy to control and improve the network in a targeted way. On the other hand, convolutional networks tend to ignore the relationship between the parts and the whole: for example, they cannot extract features such as the "position", "size", and "orientation" of a target in an image. In some cases, however, these factors are an important basis for judging a target, and a model that lacks them will discriminate targets less accurately.
In view of the above problems, the present invention proposes an image processing method and a storage medium to improve the accuracy of target determination in image processing.
A first aspect of the present application provides an image processing method. The method includes: acquiring at least one image of a target to be tested; generating a feature heat map based on the image; inputting the image and the feature heat map into a deep learning model at the same time to obtain training feature values; and obtaining a prediction result from the deep learning model according to the training feature values.
Preferably, generating a feature heat map based on the image includes: extracting feature values of the image; importing the feature values into a preset matrix; and generating the feature heat map based on the populated preset matrix.
Preferably, after extracting the feature values of the image, the method further includes: normalizing the feature values.
Preferably, the preset matrix includes a plurality of classification blocks, each classification block including a plurality of elements, and importing the feature values into the preset matrix includes: dividing the feature values according to the type of each classification block; and importing the divided feature values into the corresponding classification block, where each feature value corresponds to one element of the classification block.
Preferably, the feature values include feature values describing the size of the target in the image, feature values describing the position of the target in the image, feature values describing the texture of the target, and feature values describing the orientation of the target in the image.
Preferably, generating the feature heat map based on the populated preset matrix includes: converting all elements of the populated preset matrix into grayscale values; and generating the feature heat map according to the converted preset matrix.
Preferably, the deep learning model includes a first convolutional neural network, a second convolutional neural network, and a prediction unit.
Preferably, inputting the image and the feature heat map into the deep learning model at the same time to obtain the training feature values includes: obtaining first prediction parameters from the first convolutional neural network and the image; obtaining second prediction parameters from the second convolutional neural network and the feature heat map; and combining the first prediction parameters and the second prediction parameters to obtain the training feature values.
Preferably, the first convolutional neural network and the second convolutional neural network each include a plurality of convolutional layers and pooling layers, the pooling layers being arranged between one or more convolutional layers.
Preferably, at least one convolutional layer and at least one pooling layer are combined into an intermediate layer of the first convolutional neural network and of the second convolutional neural network.
Preferably, the first prediction parameters are extracted from the pooling layers of the intermediate layers of the first convolutional neural network, at least one first prediction parameter being the output of another first prediction parameter after processing by the convolutional layers of an intermediate layer; the second prediction parameters are extracted from the pooling layers of the intermediate layers of the second convolutional neural network, at least one second prediction parameter being the output of another second prediction parameter after processing by the convolutional layers of an intermediate layer.
Preferably, combining the first prediction parameters and the second prediction parameters to obtain the training feature values includes: taking the first prediction parameter extracted from the last intermediate layer of the first convolutional neural network as a third prediction parameter of the image; taking the second prediction parameter extracted from the last intermediate layer of the second convolutional neural network as a fourth prediction parameter of the feature heat map; and combining the plurality of first prediction parameters with the third prediction parameter, and the plurality of second prediction parameters with the fourth prediction parameter, to obtain the training feature values.
Preferably, obtaining the prediction result from the deep learning model according to the training feature values includes: inputting the training feature values into the prediction unit of the deep learning model; and the prediction unit outputting the prediction result.
Preferably, the prediction unit outputting the prediction result includes: obtaining a calculation result according to a loss function in the prediction unit; exponentially normalizing the calculation result; and converting the normalized calculation result into a corresponding output label, where the output label corresponds to a grade label of the target.
A second aspect of the present invention provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the image processing method described above.
The image processing method and medium provided by the present invention generate a feature heat map based on the image, input the image and the feature heat map into a deep learning model at the same time to obtain training feature values, and obtain a prediction result from the deep learning model according to the training feature values. This solves the problem that a convolutional neural network cannot extract and learn the feature values of image measurement data. Discriminating targets in an image with the method provided in this application achieves higher accuracy.
S1-S4: steps
201: first classification block
202: second classification block
203: third classification block
204: fourth classification block
100: image processing system
101: acquisition module
102: generation module
103: input module
104: processing module
10: electronic device
11: memory
12: processor
13: computer program
FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present invention.
FIG. 2A is a schematic diagram of the classification blocks of a preset matrix provided by an embodiment of the present invention.
FIG. 2B is a schematic diagram of a preset matrix provided by an embodiment of the present invention.
FIG. 3A is a schematic diagram of an image A to be processed provided by an embodiment of the present invention.
FIG. 3B is a schematic diagram of the feature heat map corresponding to image A provided by an embodiment of the present invention.
FIG. 3C is a schematic diagram of an image B to be processed provided by an embodiment of the present invention.
FIG. 3D is a schematic diagram of the feature heat map corresponding to image B provided by an embodiment of the present invention.
FIG. 4 is a schematic diagram of a deep learning model provided by an embodiment of the present invention.
FIG. 5 is a schematic diagram of an image processing system provided by an embodiment of the present invention.
FIG. 6 is a schematic diagram of the architecture of an electronic device provided by an embodiment of the present invention.
In order to understand the objects, features, and advantages of the present invention more clearly, the present invention is described in detail below with reference to the accompanying drawings and specific embodiments. It should be noted that, where there is no conflict, the embodiments of the present application and the features within them may be combined with each other.
Many specific details are set forth in the following description to facilitate a full understanding of the present invention; the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which the present invention belongs. The terms used in the description of the present invention are for the purpose of describing specific embodiments only and are not intended to limit the present invention.
Please refer to FIG. 1, a schematic flowchart of an image processing method provided by an embodiment of the present invention. The order of the steps in the flowchart may be changed and some steps may be omitted according to different requirements. For ease of description, only the parts related to the embodiment of the present invention are shown.
As shown in FIG. 1, the image processing method includes the following steps.
Step S1: acquire at least one image of the target to be tested.
In one embodiment, at least one image may be obtained by photographing the target to be tested with a camera. The camera may be a mobile phone, a single-lens camera, a line-scan camera, or the like, and the target to be tested may be a person, an object, an animal, a mobile phone, a personal computer, and so on. In another embodiment, acquiring at least one image of the target may mean receiving at least one image of the target transmitted by a server. In other embodiments, at least one image of the target may be obtained from a local database. In this embodiment, the image may be a complete or partial image of the target to be tested. The image may be of any resolution and may be upsampled or downsampled, depending on actual needs.
Step S2: generate a feature heat map based on the image.
In one embodiment, the method for generating a feature heat map based on the image includes:
(1) Extracting the feature values of the image. In one embodiment, the target to be tested may be one or more targets in the image, and the feature values include feature values describing the size of the target in the image, feature values describing the position of the target in the image, feature values describing the texture of the target, and feature values describing the orientation of the target in the image. Feature values describing the size of the target include defect length, grayscale difference, single-point defect area, width, defect brightness, clustered defect area, and aspect ratio. Feature values describing the position of the target in the image include whether it lies in the first, second, or third hole region; whether it lies in the first, second, third, or fourth longitudinal partition; and whether it lies in the rounded-corner region. Feature values describing the texture of the target include density, cluster density, entropy, contrast, correlation, and uniformity. The feature values describing the orientation of the target in the image may be the angles between the target and the X-axis and/or Y-axis of a coordinate system (XOY) established from the image, such as angle_0, angle_1, angle_2, and so on.
In one embodiment, the target may be a defect region on a workpiece to be tested. Defects are caused when the workpiece is scraped, scratched, or bruised during production, and may also include stained areas on the workpiece.
It should be noted that the position of the target on the workpiece can be determined from the position of the target in the image, and likewise the orientation of the target on the workpiece can be determined from the orientation of the target in the image.
(2) Normalizing the feature values. In one embodiment, the feature values may be normalized to values in the range 0 to 1 (see the sketch after step (4)).
(3) Importing the feature values into a preset matrix. In this embodiment, the preset matrix may include a plurality of classification blocks, each containing a plurality of elements. One feature value can be imported into each element. A classification block describes one kind of target characteristic, such as size or texture.
Preferably, when the number of feature values is less than the total number of elements in a classification block, zeros are padded into the classification block.
For example, as shown in FIG. 2A, the preset matrix includes four classification blocks: a first classification block 201, a second classification block 202, a third classification block 203, and a fourth classification block 204. The first classification block 201 describes the size of the target, the second classification block 202 describes its texture, the third classification block 203 describes its position, and the fourth classification block 204 describes its orientation; that is, the four classification blocks are size, texture, position, and orientation.
As shown in FIG. 2B, the first classification block 201, the second classification block 202, the third classification block 203, and the fourth classification block 204 each include nine elements. The first classification block 201 is filled with the feature values describing the size of the target, such as defect length, grayscale difference, single-point defect area, width, defect brightness, clustered defect area, and aspect ratio, with one zero padded in. The second classification block 202 is filled with the feature values describing the texture of the target, such as density, cluster density, entropy, contrast, correlation, and uniformity, with three zeros padded in. The third classification block 203 is filled with the feature values describing the position of the target, such as whether it lies in the first, second, or third hole region, whether it lies in the first, second, third, or fourth longitudinal partition, and whether it lies in the rounded-corner region, with one zero padded in. The fourth classification block 204 is filled with the feature values describing the orientation of the target, for example angle_0 through angle_8.
(4) Generating the feature heat map based on the populated preset matrix. In one embodiment, all elements of the preset matrix are converted into grayscale values, and the feature heat map is generated from the converted matrix.
In one embodiment, each element of the preset matrix is multiplied by 255 to convert it into a grayscale value, and each element of the converted matrix is then treated as one pixel to obtain the feature heat map. FIG. 3A shows an image A to be processed, and FIG. 3B shows the feature heat map obtained from image A by the above method; FIG. 3C shows an image B to be processed, and FIG. 3D shows the feature heat map obtained from image B.
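As a concrete illustration of steps (2) to (4), the following Python sketch normalizes each feature group, fills four 3x3 classification blocks with zero padding, tiles them into a 6x6 preset matrix as in the FIG. 2B example, and scales by 255 to obtain the grayscale heat map. The min-max normalization, the block layout, and the feature ordering are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np

def normalize(values):
    """Min-max scale one group of feature values into [0, 1].

    Assumption: the patent only states values are normalized to 0-1;
    min-max scaling is one plausible choice.
    """
    v = np.asarray(values, dtype=np.float32)
    rng = v.max() - v.min()
    return (v - v.min()) / rng if rng > 0 else np.zeros_like(v)

def make_heatmap(size_f, texture_f, position_f, angle_f):
    """Build a grayscale feature heat map from four feature groups.

    Each group fills one 3x3 classification block, zero-padded when it
    has fewer than nine values; the four blocks are tiled into a 6x6
    preset matrix whose elements are scaled by 255, one pixel each.
    """
    def block(values):
        v = np.zeros(9, dtype=np.float32)
        vals = normalize(values)
        v[:len(vals)] = vals               # pad remaining elements with 0
        return v.reshape(3, 3)

    top = np.hstack([block(size_f), block(texture_f)])
    bottom = np.hstack([block(position_f), block(angle_f)])
    matrix = np.vstack([top, bottom])      # the 6x6 preset matrix
    return (matrix * 255).astype(np.uint8)

# Example: 7 size values, 6 texture values, 8 position flags, 9 angles
heatmap = make_heatmap(np.random.rand(7), np.random.rand(6),
                       np.random.rand(8), np.random.rand(9))
print(heatmap.shape)  # (6, 6)
```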
In one embodiment, the extracted feature values of the image are partitioned by category and then ordered by the correlation between features; finally, the arranged preset matrix is converted into a feature heat map, so that the heat map can be fed to a convolutional neural network model together with the image.
Preferably, after the feature heat map of the image is obtained, the method further includes: resizing the feature heat map. To meet the requirements of the deep learning model, the size of the feature heat map must first be adjusted.
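As a sketch of this resizing step, continuing from `heatmap` above (the interpolation method and target size are assumptions; nearest-neighbor keeps the block structure of the small matrix visible when upscaling):

```python
import cv2

# Upscale the 6x6 heat map to the input size the second network expects.
resized = cv2.resize(heatmap, (224, 224), interpolation=cv2.INTER_NEAREST)
```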
Step S3: input the image and the feature heat map into the deep learning model at the same time to obtain the training feature values.
In one embodiment, the image and the feature heat map are input into the deep learning model simultaneously; the deep learning model includes a first convolutional neural network, a second convolutional neural network, and a prediction unit.
In one embodiment, the image is input into the first convolutional neural network of the deep learning model, while the feature heat map is input into the second convolutional neural network. The deep learning model is a pre-trained model that, given an input image, outputs the classification grade of the target in the image, as shown in FIG. 4. The deep learning model is trained with existing training methods, which are not repeated here.
It should be noted that the parameters of the deep learning model can be adjusted dynamically; the deep learning model is not limited to the target classification of this application and is also applicable to the recognition of any other image.
Preferably, the first convolutional neural network and the second convolutional neural network each include a plurality of convolutional layers and pooling layers, the pooling layers being arranged between one or more convolutional layers. At least one convolutional layer and at least one pooling layer are combined into an intermediate layer of the first and second convolutional neural networks.
Preferably, the structures of the first and second convolutional neural networks may be the same or different. For example, the first convolutional neural network may have four intermediate layers and the second convolutional neural network may also have four. Alternatively, when the feature heat map is less important, a simpler architecture can be used, for example by reducing the number of intermediate layers of the second convolutional neural network.
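A minimal PyTorch sketch of one such branch is given below: four intermediate layers, each a convolutional layer followed by a pooling layer, with the pooled output of every intermediate layer returned for later readout. The channel widths and kernel sizes are illustrative assumptions, not values from the patent.

```python
import torch
import torch.nn as nn

class Branch(nn.Module):
    """One convolutional branch with four intermediate layers.

    Each intermediate layer is conv + ReLU + max-pooling; forward()
    returns the pooled output of every intermediate layer so that
    per-layer prediction parameters can be read out, as described below.
    """
    def __init__(self, in_ch):
        super().__init__()
        chs = [in_ch, 16, 32, 64, 128]     # illustrative channel widths
        self.stages = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(chs[i], chs[i + 1], kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),           # pooling between conv layers
            )
            for i in range(4)
        ])

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)                   # each layer feeds the next
            feats.append(x)                # pooled output of this layer
        return feats                       # parameters A, B, C, D
```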
Preferably, inputting the image and the feature heat map into the deep learning model at the same time to obtain the training feature values includes: obtaining first prediction parameters from the first convolutional neural network and the image; obtaining second prediction parameters from the second convolutional neural network and the feature heat map; and combining the first and second prediction parameters to obtain the training feature values.
In one embodiment, the first prediction parameters may be extracted from the pooling layers of the first convolutional neural network, at least one first prediction parameter being the output of another first prediction parameter after processing by a convolutional layer. The second prediction parameters are extracted from the pooling layers of the second convolutional neural network, at least one second prediction parameter being the output of another second prediction parameter after processing by a convolutional layer.
The first convolutional neural network may include multiple intermediate layers, each comprising several convolutional layers and pooling layers, and likewise for the second convolutional neural network. As shown in FIG. 4, the first convolutional neural network has four intermediate layers; extracting the prediction parameters of each intermediate layer in sequence yields four first prediction parameters, namely first prediction parameters A, B, C, and D. First prediction parameter A is extracted from the first intermediate layer, B from the second, C from the third, and D from the fourth.
In one embodiment, an attention mechanism may be used to extract the first prediction parameter of each intermediate layer of the first convolutional neural network, giving the multiple first prediction parameters of the image. Other extraction methods may also be used; this application does not limit them.
The second convolutional neural network also has four intermediate layers; extracting the prediction parameters of each intermediate layer in sequence yields four second prediction parameters, namely second prediction parameters A, B, C, and D, extracted from the first through fourth intermediate layers respectively. Similarly, an attention mechanism may be used to extract the second prediction parameters of the intermediate layers of the second convolutional neural network, giving the multiple second prediction parameters of the feature heat map.
It should be noted that an attention mechanism may likewise be used to extract the second prediction parameter of each intermediate layer of the second convolutional neural network. Other extraction methods may also be used; this application does not limit them.
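Since the patent names an attention mechanism without fixing its form, the following sketch uses a simple spatial-attention readout as one plausible interpretation: a 1x1 convolution scores each location of an intermediate feature map, and the map is averaged with softmax weights to give one prediction-parameter vector per layer.

```python
class AttentionPool(nn.Module):
    """Attention-style readout of one intermediate feature map.

    Assumption: a 1x1 convolution scores each spatial location, and the
    feature map is averaged with softmax weights, yielding one
    prediction-parameter vector per intermediate layer.
    """
    def __init__(self, ch):
        super().__init__()
        self.score = nn.Conv2d(ch, 1, kernel_size=1)

    def forward(self, feat):                                  # (N, C, H, W)
        w = torch.softmax(self.score(feat).flatten(2), dim=-1)  # (N, 1, H*W)
        v = feat.flatten(2)                                   # (N, C, H*W)
        return (v * w).sum(dim=-1)                            # (N, C)
```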
In this embodiment, a first combined prediction parameter may be generated by combining the multiple first prediction parameters, and a second combined prediction parameter by combining the multiple second prediction parameters; the first and second combined prediction parameters are then concatenated to form the fully connected layer of the deep learning model.
In one embodiment, combining the first prediction parameters and the second prediction parameters to obtain the training feature values includes:
(1) Taking the first prediction parameter extracted from the last intermediate layer of the first convolutional neural network as the third prediction parameter. In this embodiment, the parameters of the last layer of the first convolutional neural network can be extracted to obtain the third prediction parameter of the image.
(2) Taking the second prediction parameter extracted from the last intermediate layer of the second convolutional neural network as the fourth prediction parameter of the feature heat map. In this embodiment, the parameters of the last layer of the second convolutional neural network can be extracted to obtain the fourth prediction parameter of the feature heat map.
(3) Combining the multiple first prediction parameters with the third prediction parameter, and the multiple second prediction parameters with the fourth prediction parameter, to obtain the training feature values. In this embodiment, a first combined prediction parameter may be generated by combining the multiple first prediction parameters with the third prediction parameter, and a second combined prediction parameter by combining the multiple second prediction parameters with the fourth prediction parameter; the two combined prediction parameters are then concatenated to form the fully connected layer of the deep learning model, as sketched below.
It can be understood that the training feature values include at least the first combined prediction parameter and the second combined prediction parameter.
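Continuing the sketch above, the two branches can be wired together as described: the per-layer readouts of each branch are concatenated with the last-layer readout (reused as the third and fourth prediction parameters) to form the input of the fully connected layer. The number of classes and the feature dimensions are assumptions.

```python
class TwoBranchModel(nn.Module):
    """Two-branch model sketch: image branch, heat-map branch, and a
    fully connected layer over the combined prediction parameters."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.img_branch = Branch(in_ch=3)   # first CNN: the image
        self.map_branch = Branch(in_ch=1)   # second CNN: the heat map
        chs = [16, 32, 64, 128]
        self.img_pools = nn.ModuleList([AttentionPool(c) for c in chs])
        self.map_pools = nn.ModuleList([AttentionPool(c) for c in chs])
        # Each branch contributes its four readouts plus the last-layer
        # readout reused as the third/fourth prediction parameter.
        feat_dim = 2 * (sum(chs) + chs[-1])
        self.fc = nn.Linear(feat_dim, num_classes)

    def forward(self, image, heatmap):
        p1 = [p(f) for p, f in zip(self.img_pools, self.img_branch(image))]
        p2 = [p(f) for p, f in zip(self.map_pools, self.map_branch(heatmap))]
        third, fourth = p1[-1], p2[-1]      # last intermediate layers
        combined = torch.cat(p1 + [third] + p2 + [fourth], dim=1)
        return self.fc(combined)            # logits for the prediction unit
```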
Step S4: obtain the prediction result from the deep learning model according to the training feature values.
In this embodiment, the training feature values are input into the prediction unit of the deep learning model, and the prediction unit outputs the prediction result. In one embodiment, the fully connected layer may be input into the prediction unit, a calculation result obtained according to the loss function in the prediction unit, the calculation result exponentially normalized, and the normalized result converted into the corresponding output label, where the output label corresponds to a grade label of the target.
In one embodiment, the prediction unit may be a classifier, and the classifier may use a Softmax loss function; the result of the loss function corresponds to the probability distribution of the input image over the grade labels. The fully connected layer is input into the Softmax loss function to obtain a calculation result, the calculation result is exponentially normalized, and it is finally converted into the corresponding output label. The output label corresponds to a grade label of the target.
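A hypothetical usage sketch of the prediction unit follows. In PyTorch terms, the "Softmax loss function" corresponds to cross-entropy over the logits during training, while softmax (the exponential normalization) followed by argmax converts the output into the grade label; the batch size, input resolution, and number of grades are assumptions.

```python
model = TwoBranchModel(num_classes=4)
images = torch.randn(8, 3, 224, 224)       # batch of input images
heatmaps = torch.randn(8, 1, 224, 224)     # heat maps resized to match
grades = torch.randint(0, 4, (8,))         # ground-truth grade labels

logits = model(images, heatmaps)
loss = nn.CrossEntropyLoss()(logits, grades)  # the "Softmax loss"
probs = torch.softmax(logits, dim=1)       # exponential normalization
pred_grade = probs.argmax(dim=1)           # corresponding output label
```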
This application uses image processing technology to extract the target region of the image and measure its data feature values, generates a feature heat map of the image based on those feature values, and feeds the heat map and the image together to convolutional neural networks for learning and prediction, adding image-measurement features that a convolutional network cannot extract from the original image alone.
In addition to learning the feature values of the image itself, the convolutional neural networks of this application can simultaneously learn the features of a user-specified associated image (such as the feature heat map), solving the problem that a convolutional neural network cannot extract and learn image-measurement feature values such as "position", "size", and "orientation". Detecting and discriminating images with this application achieves higher accuracy.
FIGS. 1 to 4 describe the image processing method of the present invention in detail; the method can improve the accuracy of target determination in image processing. The functional modules of the software system and the hardware architecture that implement the method are introduced below with reference to FIGS. 5 and 6.
FIG. 5 is a structural diagram of an image processing system provided by an embodiment of the present invention.
In some embodiments, the image processing system 100 may include a plurality of functional modules composed of program code segments. The program code of each segment in the image processing system 100 can be stored in the memory of a computer device and executed by at least one processor of the computer device to realize the image detection function.
Referring to FIG. 5, in this embodiment the image processing system 100 can be divided into a plurality of functional modules according to the functions it performs, each functional module executing one of the steps of the embodiment corresponding to FIG. 1 to realize the image processing function. In this embodiment, the functional modules of the image processing system 100 include an acquisition module 101, a generation module 102, an input module 103, and a processing module 104. The functions of each module are detailed in the following embodiments.
The acquisition module 101 is used to acquire at least one image of the target to be tested. In one embodiment, at least one image may be obtained by photographing the target with a camera. The camera may be a mobile phone, a single-lens camera, a line-scan camera, or the like, and the target to be tested may be a person, an object, an animal, a mobile phone, a personal computer, and so on. In another embodiment, acquiring at least one image of the target may mean receiving at least one image transmitted by a server. In other embodiments, the image of the target may be obtained from a local database. In this embodiment, the image may be a complete or partial image of the target to be tested, may be of any resolution, and may be upsampled or downsampled, depending on actual needs.
The generation module 102 is used to generate a feature heat map based on the image.
In one embodiment, the generation module 102 may be used to:
(1) Extract the feature values of the image. In one embodiment, the feature values include feature values describing the size of the target, feature values describing the position of the target in the image, feature values describing the texture of the target, and feature values describing the orientation of the target in the image. Feature values describing the size of the target include length, grayscale difference, area, width, brightness, color, saturation, and aspect ratio. Feature values describing the position of the target in the image include whether it lies in the first, second, or third region; whether it lies in the first, second, third, or fourth longitudinal partition; and whether it lies in the rounded-corner region. Feature values describing the texture of the target include density, cluster density, entropy, contrast, correlation, and uniformity. The feature values describing the orientation of the target in the image may be the angles between the target and the X-axis and/or Y-axis of a coordinate system (XOY) established from the image, such as angle_0, angle_1, angle_2, and so on.
(2) Normalize the feature values. In one embodiment, the feature values may be normalized to values in the range 0 to 1.
(3) Import the feature values into a preset matrix. In one embodiment, the preset matrix may include a plurality of classification blocks, each containing a plurality of elements, and one feature value can be imported into each element. A classification block describes one kind of target characteristic, such as size, texture, or color.
Preferably, when the number of feature values is less than the total number of elements in a classification block, zeros are padded into the classification block.
For example, as shown in FIG. 2A, the preset matrix includes four classification blocks: a first classification block 201, a second classification block 202, a third classification block 203, and a fourth classification block 204. The first classification block 201 describes the size of the target, the second its texture, the third its position, and the fourth its orientation; the four classification blocks are size, texture, position, and orientation.
As shown in FIG. 2B, the four classification blocks each include nine elements. The first classification block 201 is filled with the feature values describing the size of the target, such as length, grayscale difference, color, width, brightness, area, and aspect ratio, with one zero padded in. The second classification block 202 is filled with the feature values describing the texture of the target, such as density, cluster density, entropy, contrast, correlation, and uniformity, with three zeros padded in. The third classification block 203 is filled with the feature values describing the position of the target, such as whether it lies in the first, second, or third region, whether it lies in the first, second, third, or fourth longitudinal partition, and whether it lies in the rounded-corner region, with one zero padded in. The fourth classification block 204 is filled with the feature values describing the orientation of the target, for example angle_0 through angle_8.
(4) Generate the feature heat map based on the populated preset matrix. In one embodiment, all elements of the preset matrix are converted into grayscale values, and the feature heat map is generated from the converted matrix.
In one embodiment, each element of the preset matrix is multiplied by 255 to convert it into a grayscale value, and each element of the converted matrix is then treated as one pixel to obtain the feature heat map. FIG. 3A shows an image A to be processed and FIG. 3B its feature heat map; FIG. 3C shows an image B to be processed and FIG. 3D its feature heat map.
In one embodiment, the generation module 102 may be used to partition the extracted feature values by category, order them by the correlation between features, and finally convert the arranged preset matrix into a feature heat map, so that the heat map can be fed to a convolutional neural network model together with the image.
Preferably, after obtaining the feature heat map of the image, the generation module 102 may resize the feature heat map to meet the requirements of the deep learning model.
The input module 103 is used to input the image and the feature heat map into the deep learning model at the same time to obtain the training feature values.
In one embodiment, the input module 103 may input the image and the feature heat map into the deep learning model simultaneously; the deep learning model includes a first convolutional neural network, a second convolutional neural network, and a prediction unit.
In one embodiment, the image is input into the first convolutional neural network of the deep learning model, while the feature heat map is input into the second convolutional neural network. The deep learning model is a pre-trained model that, given an input image, outputs the detection result of the target, as shown in FIG. 4. The deep learning model is trained with existing training methods, which are not repeated here.
It should be noted that the parameters of the deep learning model can be adjusted dynamically; the deep learning model is not limited to the target determination of this application and is also applicable to the recognition of any other image.
The processing module 104 is used to obtain the prediction result from the deep learning model according to the training feature values.
In one embodiment, the processing module 104 is a deep learning model and can output the prediction result from the prediction unit of the deep learning model.
In one embodiment, the prediction unit may be a classifier, and the classifier may use a Softmax loss function; the result of the loss function corresponds to the probability distribution of the input image over the grade labels. The training feature values are input into the Softmax loss function to obtain a calculation result, which is exponentially normalized and finally converted into the corresponding output label. The output label corresponds to the detection-result label of the target.
FIG. 6 is a schematic diagram of the functional modules of an electronic device provided by an embodiment of the present invention. The electronic device 10 includes a memory 11, a processor 12, and a computer program 13, such as an image processing program, stored in the memory 11 and executable on the processor 12.
In this embodiment, the electronic device 10 may be, but is not limited to, a smartphone, a tablet computer, a desktop computer, and the like.
Exemplarily, the computer program 13 may be divided into one or more modules/units, which are stored in the memory 11 and executed by the processor 12. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the instruction segments describing the execution of the computer program 13 in the electronic device 10. For example, the computer program 13 may be divided into the modules 101-104 of FIG. 5.
Those skilled in the art will understand that FIG. 6 is merely an example of the electronic device 10 and does not limit it; the electronic device 10 may include more or fewer components than shown, combine certain components, or use different components. For example, the electronic device 10 may also include input and output devices.
The processor 12 may be a central processing unit (CPU), and may also include other general-purpose processors, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor may be a microprocessor or any conventional processor. The processor 12 is the control center of the electronic device 10 and connects the various parts of the entire device through various interfaces and lines.
The memory 11 may be used to store the computer program 13 and/or the modules/units; the processor 12 realizes the various functions of the electronic device 10 by running or executing the computer programs and/or modules/units stored in the memory 11 and by calling the data stored in it. The memory 11 may include an external storage medium or internal memory. In addition, the memory 11 may include high-speed random access memory as well as non-volatile memory, such as a hard disk, memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage device.
If the modules/units integrated in the electronic device 10 are implemented as software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes of the method embodiments of the present invention may also be implemented by a computer program instructing the relevant hardware; the computer program may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of each method embodiment.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to preferred embodiments, those of ordinary skill in the art should understand that the technical solutions of the present invention may be modified or equivalently substituted without departing from their spirit and scope.
Claims (13)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911415118.XA CN111222558B (en) | 2019-12-31 | 2019-12-31 | Image processing method and storage medium |
CN201911415118.X | 2019-12-31 |
Publications (2)
Publication Number | Publication Date |
---|---|
TW202127312A (en) | 2021-07-16 |
TWI769603B (en) | 2022-07-01 |
Family
ID=70808302
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
TW109142281A TWI769603B (en) | 2019-12-31 | 2020-12-01 | Image processing method and computer readable medium thereof |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111222558B (en) |
TW (1) | TWI769603B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113297866A (en) * | 2021-05-21 | 2021-08-24 | 苏州视印智能系统有限公司 | Industrial code reader based on deep learning and heat map technology |
CN115695787A (en) * | 2021-07-27 | 2023-02-03 | 脸萌有限公司 | Segmentation information in neural network-based video coding and decoding |
TWI813522B (en) * | 2022-12-20 | 2023-08-21 | 悟智股份有限公司 | Classification Model Building Method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107833220A (en) * | 2017-11-28 | 2018-03-23 | 河海大学常州校区 | Fabric defect detection method based on depth convolutional neural networks and vision significance |
CN109472790A (en) * | 2018-11-22 | 2019-03-15 | 南昌航空大学 | A kind of machine components defect inspection method and system |
TW201939365A (en) * | 2018-02-23 | 2019-10-01 | 荷蘭商Asml荷蘭公司 | Methods for training machine learning model for computation lithography |
US20190371080A1 (en) * | 2018-06-05 | 2019-12-05 | Cristian SMINCHISESCU | Image processing method, system and device |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106250819A (en) * | 2016-07-20 | 2016-12-21 | 上海交通大学 | Based on face's real-time monitor and detection facial symmetry and abnormal method |
CN106326937B (en) * | 2016-08-31 | 2019-08-09 | 郑州金惠计算机系统工程有限公司 | Crowd density distribution estimation method based on convolutional neural networks |
CN108509963B (en) * | 2017-02-28 | 2023-04-28 | 株式会社日立制作所 | Target difference detection method and target difference detection equipment based on deep learning |
US10650515B2 (en) * | 2017-05-23 | 2020-05-12 | Case Western Reserve University | Characterizing intra-tumoral heterogeneity for response and outcome prediction using radiomic spatial textural descriptor (RADISTAT) |
US10902252B2 (en) * | 2017-07-17 | 2021-01-26 | Open Text Corporation | Systems and methods for image based content capture and extraction utilizing deep learning neural network and bounding box detection training techniques |
CN107292886B (en) * | 2017-08-11 | 2019-12-31 | 厦门市美亚柏科信息股份有限公司 | Target object intrusion detection method and device based on grid division and neural network |
CN109886125A (en) * | 2019-01-23 | 2019-06-14 | 青岛慧拓智能机器有限公司 | A kind of method and Approach for road detection constructing Road Detection model |
CN110111313B (en) * | 2019-04-22 | 2022-12-30 | 腾讯科技(深圳)有限公司 | Medical image detection method based on deep learning and related equipment |
CN110619350B (en) * | 2019-08-12 | 2021-06-18 | 北京达佳互联信息技术有限公司 | Image detection method, device and storage medium |
- 2019-12-31: application CN201911415118.XA filed in CN; published as CN111222558B (active)
- 2020-12-01: application TW109142281A filed in TW; published as TWI769603B (active)
Also Published As
Publication number | Publication date |
---|---|
CN111222558A (en) | 2020-06-02 |
TW202127312A (en) | 2021-07-16 |
CN111222558B (en) | 2024-01-30 |