CN109242792B - White balance correction method based on white object - Google Patents
White balance correction method based on white object
- Publication number
- CN109242792B (application CN201810964247.3A)
- Authority
- CN
- China
- Prior art keywords
- white
- image
- model
- color
- white object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T5/80 — Geometric correction (under G06T5/00 Image enhancement or restoration; G06T Image data processing or generation, in general; G06 Computing; G Physics)
- G16H30/40 — ICT specially adapted for the handling or processing of medical images, for processing medical images, e.g. editing (under G16H Healthcare informatics; G16 ICT specially adapted for specific application fields; G Physics)
- G06T2207/10004 — Still image; Photographic image (under G06T2207/10 Image acquisition modality; G06T2207/00 Indexing scheme for image analysis or image enhancement)
- G06T2207/20081 — Training; Learning (under G06T2207/20 Special algorithmic details; G06T2207/00 Indexing scheme for image analysis or image enhancement)
- G06T2207/20084 — Artificial neural networks [ANN] (under G06T2207/20 Special algorithmic details; G06T2207/00 Indexing scheme for image analysis or image enhancement)
Abstract
The invention discloses a white balance correction method based on a white object, which comprises the following steps: a white object comparison model is established from a large number of collected white object samples through continuous training and learning; the model is then applied to on-site capture, where the image is recognized, the white object region is extracted, and its pixel mean and color gain values are calculated so that the colors can be adjusted to achieve white balance correction of the image. The method has a simple implementation principle, low resource cost and a short running time, and produces an accurate, near-ideal white balance result. The tissue recognition technique adopted by the invention is based on a convolutional neural network algorithm: it learns from a large sample set, extracts high-level image features step by step, and classifies these features to complete recognition. It can therefore tolerate a certain amount of offset, scale change and deformation of the tissue, guarantees strong separability of the features, classifies them with a good recognition result, reduces the dependence of recognition on external conditions, and at the same time keeps the model complexity low.
Description
Technical Field
The invention relates to the field of image color processing, and in particular to a white balance correction method based on a white object.
Background
Traditional Chinese medicine diagnosis comprises inspection, auscultation and olfaction, inquiry, and pulse-taking, and tongue inspection is a key part of inspection. In the theory of traditional Chinese medicine, however complex the pathological symptoms of the internal organs may be, the nature of a disease, how deep or superficial it is, and the exuberance or decline of qi and blood can be judged intuitively and quickly by observing the tongue, so tongue inspection is a simple and effective auxiliary method of medical diagnosis. Conventionally, doctors observe the tongue morphology, color and other features of a patient with the naked eye and then diagnose the disease from medical experience. Because of China's large population and serious aging problem, the number of patients seeking traditional Chinese medicine treatment grows every day while doctors are in short supply; moreover, waiting to be seen is a cumbersome and time-consuming task for the patient.
To ease this imbalance between patients and doctors, many tongue diagnosis products based on computer-aided image processing have appeared on the market. These products work by capturing images of the patient's tongue so that a doctor can diagnose from the images, which greatly improves diagnostic efficiency. However, the images are collected by the camera of a terminal device, for example a mobile phone camera, which, unlike the human eye, cannot adapt to changing illumination; shooting under different lighting therefore causes distorted color reproduction due to imbalance in the CCD output. As a result, the images acquired by tongue diagnosis products often have a color cast, which biases the medical diagnosis and prevents targeted treatment and prevention of the patient's disease.
Therefore, a correction method that can accurately restore the colors of captured images is needed.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a white balance correction method based on a white object that has a simple implementation principle, low resource cost and a short running time, and produces an accurate, near-ideal white balance result. The tissue recognition technique adopted by the method is based on a convolutional neural network algorithm: high-level image features are extracted step by step by learning from a large sample set and are then classified to complete recognition. The method can therefore tolerate a certain amount of offset, scale change and deformation of the tissue, guarantees strong separability of the features, classifies them with a good recognition result, reduces the dependence of recognition on external conditions, and at the same time keeps the model complexity low.
To achieve this purpose, the technical solution provided by the invention is as follows:
a white balance correction method based on a white object, comprising:
S1, establishing a white object comparison model from a large number of collected white object samples through continuous training and learning;
S11, collecting a large number of white object images and establishing an object sample database;
S111, collecting a large number of static images of the white object;
S112, performing grayscale processing on the collected static images and calculating the gray value of each pixel of each static image, so that the images become black-white-gray (grayscale) images;
S113, dividing the grayscale-processed static image samples into test images and training images, storing them on a server, and establishing the white object sample database;
S12, establishing a model and training it with the samples to realize automatic recognition of the white object:
S121, the system establishes a model and feeds the training images of the white object samples to it for repeated training, so that the white object in an image is recognized automatically;
S122, judging whether the number of training iterations has reached the system's preset threshold; if not, go to S121, and if so, go to S123;
S123, stopping model training and calculating the model recognition accuracy from the loss function;
S124, judging whether the accuracy obtained above reaches a set threshold; if not, readjust the sample category information, and if so, go to S125;
S125, running trial tests of the model on the test images of the white object sample database, and applying the model to on-site capture once the test accuracy reaches a preset threshold.
S2, applying the white object comparison model to on-site capture, recognizing the image, extracting the white object region, and calculating its pixel mean and color gain values so as to adjust the colors and achieve white balance correction of the image:
S21, applying the model to on-site capture and saving an image in which the user's tongue and a white object appear together;
S22, recognizing the white object in the user image: through the white object comparison model and its algorithm-based recognition technique, which has learned from the sample data, high-level image features are extracted step by step and classified to complete recognition;
S23, extracting the white object region of the user image, trimming the boundary of the region, and keeping its middle part;
S24, calculating the mean value of the pixels in the middle part of the region as the color value of the white object;
S25, comparing the object color value with the standard white color value to obtain the color gain between them;
S26, performing an overall color adjustment of the user image based on the gain values;
S27, outputting the color-adjusted user image.
Further, the formula used to calculate the gray value of each pixel in step S112 is the weighted-average grayscale formula:
f(i,j)=0.30R(i,j)+0.59G(i,j)+0.11B(i,j)
where i, j denote the position of a pixel in the two-dimensional image, namely row i, column j.
Further, the sample category information in step S124 includes the material and size of the object.
Further, the algorithm on which the recognition technique in step S22 is based is a convolutional neural network algorithm.
Further, the threshold for the number of training iterations in step S122 is 200,000, the threshold for the model training accuracy in step S124 is 80%, and the threshold for the model test accuracy in step S125 is 80%.
Further, the white object is a white tissue.
Compared with the prior art, the method has a simple implementation principle, low resource cost and a short running time, and produces an accurate, near-ideal white balance result. The tissue recognition technique it adopts is based on a convolutional neural network algorithm: high-level image features are extracted step by step by learning from a large sample set and are then classified to complete recognition. The method can therefore tolerate a certain amount of offset, scale change and deformation of the tissue, guarantees strong separability of the features, classifies them with a good recognition result, reduces the dependence of recognition on external conditions, and at the same time keeps the model complexity low.
Drawings
FIG. 1: a detailed flowchart of step S1 of the white balance correction method according to the present invention;
FIG. 2: a detailed flowchart of step S2 of the white balance correction method according to the present invention.
Detailed Description
The following detailed description of embodiments of the present invention is provided in connection with the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
In order to accurately restore the colors of acquired images, the invention provides a white balance correction method based on a white object. In the embodiment, white paper towels (tissues) are used as the reference samples.
Referring to FIG. 1, the system collects a large number of white tissue images and builds a tissue sample database. A model is established and repeatedly trained on these samples to recognize the white tissue automatically; once the recognition accuracy calculated from the loss function reaches a set threshold, for example 80%, the model is applied to on-site capture.
Referring to FIG. 2, on site the user saves a combined image of the tongue and a white tissue through the tongue diagnosis terminal. Using the model, the system recognizes the tissue in the user image. Based on the recognition result, the system extracts the tissue region of the image, trims the borders by 10% of the region size, and keeps the middle part of the region. From this middle part the system calculates the mean of the pixels as the color value of the tissue region. It then compares this color value with the standard white color value to obtain the color gains between them. Using the gains, the system recalculates the color values of the user image to achieve the white balance effect.
In the embodiment, the color characteristics of the white tissue provide a white reference object for the tongue diagnosis image, and comparison with standard white yields the color gains between them; the white balance of the tongue diagnosis image is then achieved from these gains. The method has a simple implementation principle, low resource cost and a short running time, and produces an accurate, near-ideal white balance result. A prior calibration method based on white balance technology proceeds as follows: first the image is partitioned into blocks; the white points of the image are detected from the block means and accumulated values; a final white reference point is determined; the mean of the reference points is compared with the maximum brightness among them to obtain a gain value; and finally the image colors are adjusted according to that gain. Compared with the present invention, the prior method has the following drawbacks: the implementation principle is complex, the resource cost and amount of computation are high, and the efficiency is low; furthermore, because the image already has a color cast, the reference point with the maximum brightness is not necessarily standard white, so the subsequent color adjustment may still leave a color cast and the white balance result is inaccurate. The tissue recognition technique adopted by the invention is based on a convolutional neural network algorithm: it learns from a large sample set, extracts high-level image features step by step, and classifies them to complete recognition; it can tolerate a certain amount of offset, scale change and deformation of the tissue, guarantees strong separability of the features, classifies them with a good recognition result, reduces the dependence of recognition on external conditions, and keeps the model complexity low.
In the embodiment, the specific implementation is as follows:
s1: collecting a large number of white paper towel images and establishing a paper towel sample database.
S1.1: white tissue images were collected in large quantities.
Through modes such as webpage grabbing, the system collects a large number of static images of the white paper towel.
S1.2: perform grayscale processing on the images.
A color image is composed of many pixels, each represented by the three RGB values; grayscale processing does not affect the texture feature information of the image, and afterwards each pixel can be represented by a single gray value, which greatly improves the efficiency of image processing. The weighted-average grayscale formula is:
f(i,j)=0.30R(i,j)+0.59G(i,j)+0.11B(i,j)
where i, j denote the position of a pixel in the two-dimensional image, namely row i, column j. The gray value of each pixel of each static image is calculated with this formula, in the range 0-255, so that the images become black-white-gray (grayscale) images.
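As an illustration only (not part of the patent text), the weighted-average formula above can be written as a short NumPy routine; the function name and the 8-bit RGB input format are assumptions:

```python
import numpy as np

def to_gray(rgb):
    """Weighted-average grayscale conversion f(i,j) = 0.30R + 0.59G + 0.11B.

    rgb: H x W x 3 uint8 array in RGB order; returns an H x W uint8 array
    with gray values in the range 0-255.
    """
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    gray = 0.30 * r + 0.59 * g + 0.11 * b
    return np.clip(gray, 0, 255).astype(np.uint8)
```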
S1.3: store the images on a server to complete the tissue sample database.
After grayscale processing the static images become grayscale images. The system divides the whole set into two categories, training images and test images. The training images make up 90% of the total and are mainly used for model training; the test images make up 10% and are mainly used for the trial run once the model has been trained. All images are stored on a local server, completing the tissue sample database.
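A minimal sketch of such a 90/10 split is given below; the function name, the shuffle, and the fixed random seed are illustrative assumptions rather than details taken from the patent:

```python
import random

def split_samples(image_paths, train_ratio=0.9, seed=0):
    """Shuffle the grayscale sample images and split them into a 90% training
    set and a 10% test set, as described in step S1.3."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    cut = int(len(paths) * train_ratio)
    return paths[:cut], paths[cut:]  # (training images, test images)
```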
S2: establish a model to realize automatic recognition of the white tissue.
The system establishes a model and feeds the tissue samples to it for repeated training. The training mainly uses a convolutional neural network, which learns from the large sample set and classifies the high-level features output by the different convolutional layers, thereby completing automatic recognition of the white tissue. When the number of training iterations over the training images of the tissue sample database reaches the threshold set by the system, for example 200,000, training is stopped. The recognition accuracy of the model is then obtained from the loss function. If the accuracy reaches a set threshold, for example 80%, the model is considered satisfactory and can be trial-tested on the test images of the tissue sample database; otherwise the model is considered unsatisfactory, the tissue category information (material, size, etc.) is readjusted, and repeated training on the training images of the tissue sample database continues.
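The patent only states that a convolutional neural network is used and does not specify an architecture; the following Keras sketch shows one plausible shape for a small tissue / non-tissue classifier, with the input size, layer widths, optimizer and loss all assumed:

```python
import tensorflow as tf

def build_tissue_classifier(input_shape=(128, 128, 1)):
    """Small CNN that classifies a grayscale patch as tissue / non-tissue.
    All layer sizes are illustrative assumptions, not taken from the patent."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])

model = build_tissue_classifier()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, ...)  # repeated training as in step S2;
# accuracy from model.evaluate(...) would then be checked against the 80% threshold.
```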
S3: apply the model to on-site capture for tissue recognition in the user image.
When the trial run on the test images of the tissue sample database is finished and the accuracy reaches a set threshold, for example 80%, the model can be applied to on-site capture. On site, the user saves a combined image of the tongue and a white tissue through the tongue diagnosis terminal. Using the model, the system recognizes the tissue in the user image.
S4: extract the tissue region of the user image based on the model recognition result and calculate its color.
Using the model, the system recognizes the white tissue in the user image and, according to the recognition result, extracts the tissue region. The extracted region should be white, but other colors surround it, so the white near the region boundary is affected by light transmitted or reflected from the surroundings; for example, if the surroundings are red, the white at the boundary is mixed with some red. To keep the white of the extracted region stable, the system trims the boundary by 10% of the region size and keeps the middle part. From this middle part it calculates the mean of the pixel colors as the color value of the region, that is, the color value of the white tissue in the user image.
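A minimal sketch of this trim-and-average step, assuming the recognition stage yields an axis-aligned bounding box (x0, y0, x1, y1) for the tissue region; the box format and function name are assumptions:

```python
import numpy as np

def tissue_color_value(image, box, margin=0.10):
    """Trim 10% of the bounding box off each side and return the mean RGB of
    the remaining middle part (steps S4 / S24).

    image: H x W x 3 array; box: (x0, y0, x1, y1) from the recognition model.
    """
    x0, y0, x1, y1 = box
    dx = int((x1 - x0) * margin)
    dy = int((y1 - y0) * margin)
    middle = image[y0 + dy:y1 - dy, x0 + dx:x1 - dx]
    return middle.reshape(-1, 3).mean(axis=0)  # (Ravew, Gavew, Bavew)
```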
S5: compare the tissue color with standard white to obtain the color gains between them.
The system's calculation of the mean color of the pixels in the middle of the extracted region gives the specific color value of the tissue. Because this value carries a color cast, the system compares it with the standard white color value to obtain the color gains between them and so guarantee the white balance of the user image. In the RGB color model, standard white is R = G = B = 255. The color gains are calculated as follows:
Rgain=Ymax/Ravew
Ggain=Ymax/Gavew
Bgain=Ymax/Bavew
These formulas give the gain of each color channel between the tissue in the user image and standard white, where Rgain, Ggain and Bgain are the per-channel gains, Ymax is the color value of each channel of standard white, and Ravew, Gavew and Bavew are the mean color values of each channel of the tissue in the user image.
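The gain formulas translate directly into code; this continues the sketch above, with the function name assumed:

```python
def color_gains(tissue_mean, y_max=255.0):
    """Rgain = Ymax / Ravew, Ggain = Ymax / Gavew, Bgain = Ymax / Bavew,
    comparing the tissue color with standard white (R = G = B = 255)."""
    r_avew, g_avew, b_avew = tissue_mean
    return y_max / r_avew, y_max / g_avew, y_max / b_avew
```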
S6: perform an overall color adjustment of the user image based on the color gains.
Comparing the color value of the tissue in the user image with the standard white color value yields the color gains. Using these gains, the system recalculates the color value of each color channel of the user image, performs an overall color adjustment, and achieves the white balance effect. The adjusted channel values are calculated as follows:
R'=R×Rgain
G'=G×Ggain
B'=B×Bgain
These formulas give the adjusted color value of each channel of the user image, where R', G' and B' are the channel values after the adjustment and R, G, B are the channel values before it.
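A minimal sketch of applying the per-channel gains to every pixel; clipping the result back into the 0-255 range is an assumption the patent does not spell out:

```python
import numpy as np

def apply_white_balance(image, gains):
    """R' = R * Rgain, G' = G * Ggain, B' = B * Bgain applied to every pixel."""
    r_gain, g_gain, b_gain = gains
    out = image.astype(np.float64) * np.array([r_gain, g_gain, b_gain])
    return np.clip(out, 0, 255).astype(np.uint8)  # keep values in 0-255
```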
S7: output the color-adjusted user image.
Using the color gains between the tissue in the image and standard white, the system adjusts the overall color of the user image, which is then output by the tongue diagnosis terminal for the doctor's diagnosis.
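Putting the sketches above together, the on-site flow of steps S3 to S7 might look roughly as follows; detect_tissue_box is a hypothetical stand-in for the CNN recognition step, and the other helpers are the assumed functions from the earlier sketches:

```python
def detect_tissue_box(model, image):
    """Placeholder for the CNN-based tissue recognition of step S3; a real
    implementation would return the bounding box of the detected tissue."""
    raise NotImplementedError("tissue detection is model-specific")

def correct_white_balance(user_image, model):
    """End-to-end sketch of steps S3-S7: recognize the tissue, measure its
    color, derive the per-channel gains, and return the adjusted image."""
    box = detect_tissue_box(model, user_image)          # hypothetical helper
    tissue_mean = tissue_color_value(user_image, box)   # from the S4 sketch
    gains = color_gains(tissue_mean)                    # from the S5 sketch
    return apply_white_balance(user_image, gains)       # from the S6 sketch
```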
The invention provides a white balance correction method based on a white object with the following advantages: a simple implementation principle, low resource cost, a short running time, and an accurate, near-ideal white balance result. The tissue recognition technique it adopts is based on a convolutional neural network algorithm: high-level image features are extracted step by step by learning from a large sample set and are then classified to complete recognition. The method can therefore tolerate a certain amount of offset, scale change and deformation of the tissue, guarantees strong separability of the features, classifies them with a good recognition result, reduces the dependence of recognition on external conditions, and at the same time keeps the model complexity low.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that are within the spirit and principle of the present invention are intended to be included therein.
Claims (8)
1. A white balance correction method based on a white object, comprising:
S1, establishing a white object comparison model from a large number of collected white object samples through continuous training and learning;
S2, applying the white object comparison model to on-site capture, recognizing the image, extracting the white object region, and calculating the pixel mean and color gain values of the region so as to adjust the colors and achieve white balance correction of the image;
the step S1 specifically includes:
S11, collecting a large number of white object images and establishing an object sample database;
S12, establishing a model and training it with the samples to realize automatic recognition of the white object;
the step S2 specifically includes:
S21, applying the model to on-site capture and saving an image in which the user's tongue and a white object appear together;
S22, recognizing the white object in the user image: through the white object comparison model and its algorithm-based recognition technique, which has learned from the sample data, high-level image features are extracted step by step and classified to complete recognition;
S23, extracting the white object region of the user image, trimming the boundary of the region, and keeping its middle part;
S24, calculating the mean value of the pixels in the middle part of the region as the color value of the white object;
S25, comparing the object color value with the standard white color value to obtain the color gain between them;
S26, performing an overall color adjustment of the user image based on the gain values;
S27, outputting the color-adjusted user image.
2. The white balance correction method according to claim 1, wherein the step S11 specifically includes:
S111, collecting a large number of static images of the white object;
S112, performing grayscale processing on the collected static images and calculating the gray value of each pixel of each static image, so that the images become black-white-gray (grayscale) images;
S113, dividing the grayscale-processed static image samples into test images and training images, storing them on a server, and establishing the white object sample database.
3. The white balance correction method according to claim 2, wherein the formula for calculating the gray value of each pixel in step S112 is the weighted-average grayscale formula:
f(i,j)=0.30R(i,j)+0.59G(i,j)+0.11B(i,j)
where i, j denote the position of a pixel in the two-dimensional image, namely row i, column j.
4. The white balance correction method according to claim 1, wherein the step S12 specifically includes:
S121, the system establishes a model and feeds the training images of the white object samples to it for repeated training, so that the white object in an image is recognized automatically;
S122, judging whether the number of training iterations has reached the system's preset threshold; if not, go to S121, and if so, go to S123;
S123, stopping model training and calculating the model recognition accuracy from the loss function;
S124, judging whether the accuracy obtained above reaches a set threshold; if not, readjust the sample category information, and if so, go to S125;
S125, running trial tests of the model on the test images of the white object sample database, and applying the model to on-site capture once the test accuracy reaches a preset threshold.
5. The white balance correction method according to claim 4, wherein the sample category information in step S124 includes the material and size of the object.
6. The white balance correction method according to claim 4, wherein the threshold for the number of training iterations in step S122 is 200,000, the threshold for the model training accuracy in step S124 is 80%, and the threshold for the model test accuracy in step S125 is 80%.
7. The white balance correction method according to claim 1, wherein the algorithm on which the recognition technique in step S22 is based is a convolutional neural network algorithm.
8. The white balance correction method according to any one of claims 1 to 7, wherein the white object is a white tissue.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810964247.3A CN109242792B (en) | 2018-08-23 | 2018-08-23 | White balance correction method based on white object |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810964247.3A CN109242792B (en) | 2018-08-23 | 2018-08-23 | White balance correction method based on white object |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109242792A CN109242792A (en) | 2019-01-18 |
CN109242792B (en) | 2020-11-17
Family
ID=65068283
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810964247.3A Active CN109242792B (en) | 2018-08-23 | 2018-08-23 | White balance correction method based on white object |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109242792B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110740306B (en) * | 2019-10-24 | 2021-05-11 | 深圳市视特易智能科技有限公司 | Color white balance statistical correction method |
CN114071106B (en) * | 2020-08-10 | 2023-07-04 | 合肥君正科技有限公司 | Cold start fast white balance method for low-power-consumption equipment |
CN112333437B (en) * | 2020-09-21 | 2022-05-31 | 宁波萨瑞通讯有限公司 | AI camera debugging parameter generator |
CN112532960B (en) * | 2020-12-18 | 2022-10-25 | Oppo(重庆)智能科技有限公司 | White balance synchronization method and device, electronic equipment and storage medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1079255A2 (en) * | 1999-08-23 | 2001-02-28 | Olympus Optical Co., Ltd. | Light source device for endoscope using dmd |
CN103957396A (en) * | 2014-05-14 | 2014-07-30 | 姚杰 | Image processing method and device used when tongue diagnosis is conducted with intelligent device and equipment |
CN104658003A (en) * | 2015-03-16 | 2015-05-27 | 北京理工大学 | Tongue image segmentation method and device |
CN104856680A (en) * | 2015-05-11 | 2015-08-26 | 深圳贝申医疗技术有限公司 | Automatic detection method and system for neonatal jaundice |
CN106339719A (en) * | 2016-08-22 | 2017-01-18 | 微梦创科网络科技(中国)有限公司 | Image identification method and image identification device |
CN106412547A (en) * | 2016-08-29 | 2017-02-15 | 厦门美图之家科技有限公司 | Image white balance method and device based on convolutional neural network, and computing device |
CN107578390A (en) * | 2017-09-14 | 2018-01-12 | 长沙全度影像科技有限公司 | A kind of method and device that image white balance correction is carried out using neutral net |
CN109273071A (en) * | 2018-08-23 | 2019-01-25 | 广东数相智能科技有限公司 | A method of establishing white object contrast model |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7423779B2 (en) * | 2004-03-30 | 2008-09-09 | Omnivision Technologies, Inc. | Method and apparatus for automatic white balance |
US20150049177A1 (en) * | 2012-02-06 | 2015-02-19 | Biooptico Ab | Camera Arrangement and Image Processing Method for Quantifying Tissue Structure and Degeneration |
CN105738364B (en) * | 2015-12-28 | 2018-08-17 | 清华大学深圳研究生院 | Silastic surface algal grown degree measurement method and device based on image procossing |
CN106295139B (en) * | 2016-07-29 | 2019-04-02 | 汤一平 | A kind of tongue body autodiagnosis health cloud service system based on depth convolutional neural networks |
CN108205671A (en) * | 2016-12-16 | 2018-06-26 | 浙江宇视科技有限公司 | Image processing method and device |
CN107507250B (en) * | 2017-06-02 | 2020-08-21 | 北京工业大学 | Surface color and tongue color image color correction method based on convolutional neural network |
- 2018-08-23: CN application CN201810964247.3A, patent CN109242792B, status Active
Also Published As
Publication number | Publication date |
---|---|
CN109242792A (en) | 2019-01-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109242792B (en) | White balance correction method based on white object | |
US10115191B2 (en) | Information processing apparatus, information processing system, information processing method, program, and recording medium | |
US8027533B2 (en) | Method of automated image color calibration | |
US10210433B2 (en) | Method for evaluating quality of tone-mapping image based on exposure analysis | |
WO2016179981A1 (en) | Automatic detection method and system for neonatal jaundice | |
CN103974053B (en) | A kind of Automatic white balance antidote extracted based on ash point | |
CA3153067C (en) | Picture-detecting method and apparatus | |
Xiao et al. | Retinal hemorrhage detection by rule-based and machine learning approach | |
CN110495888B (en) | Standard color card based on tongue and face images of traditional Chinese medicine and application thereof | |
CN115965607A (en) | Intelligent traditional Chinese medicine tongue diagnosis auxiliary analysis system | |
JP7087390B2 (en) | Diagnostic support device, image processing method and program | |
Wang et al. | Facial image medical analysis system using quantitative chromatic feature | |
CN113012093B (en) | Training method and training system for glaucoma image feature extraction | |
CN110874572B (en) | Information detection method and device and storage medium | |
CN111105407A (en) | Pathological section staining quality evaluation method, device, equipment and storage medium | |
CN104766068A (en) | Random walk tongue image extraction method based on multi-rule fusion | |
CN109711306B (en) | Method and equipment for obtaining facial features based on deep convolutional neural network | |
CN112464871A (en) | Deep learning-based traditional Chinese medicine tongue image processing method and system | |
CN114862851B (en) | Processing method based on tongue picture analysis | |
KR102342334B1 (en) | Improved method for diagnosing jaundice and system thereof | |
CN111291706B (en) | Retina image optic disc positioning method | |
CN114972065A (en) | Training method and system of color difference correction model, electronic equipment and mobile equipment | |
CN110726536B (en) | Color correction method for color digital reflection microscope | |
CN118196218B (en) | Fundus image processing method, device and equipment | |
Chakraborty et al. | A decision scheme based on adaptive morphological image processing for mobile detection of early stage diabetic retinopathy |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |