
CN110009626A - Method and apparatus for generating image - Google Patents

Method and apparatus for generating image

Info

Publication number
CN110009626A
CN110009626A (application CN201910290517.1A)
Authority
CN
China
Prior art keywords
eye fundus
fundus image
image
region
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910290517.1A
Other languages
Chinese (zh)
Inventor
杨大陆
杨叶辉
王磊
许言午
黄艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201910290517.1A priority Critical patent/CN110009626A/en
Publication of CN110009626A publication Critical patent/CN110009626A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Quality & Reliability (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

Embodiments of the disclosure provide a method and apparatus for generating an image. One specific embodiment of the method includes: acquiring a fundus image; preprocessing the fundus image to obtain a preprocessed fundus image; inputting the preprocessed fundus image into a pre-trained fundus image recognition model to obtain fundus image type information; for a fundus image whose image type is a predetermined type, generating a model region-of-interest image using class activation mapping; and generating a fundus image including a predetermined-type image region based on the model region-of-interest image and the fundus image. The embodiment reduces the computational load of the processor and saves its computing resources.

Description

Method and apparatus for generating image
Technical field
Embodiments of the disclosure relate to the field of computer technology, and in particular to a method and apparatus for generating an image.
Background technique
With the development of science and technology, computer image processing is gradually being applied in more and more fields. For example, biomedical images can assist in the diagnosis and treatment of disease. In biomedical image processing, a general image processing method for fundus images is currently anticipated.
The fundus consists of the retina, fundus blood vessels, the optic papilla, optic nerve fibers, the macular area on the retina, and the choroid behind the retina. A fundus image is an image of the fundus region captured with a fundus camera. The purpose of fundus image processing is to provide a digitized description of features of the fundus such as physiological structures and lesions.
Summary of the invention
Embodiments of the disclosure propose a method and apparatus for generating an image.
In a first aspect, embodiments of the disclosure provide a method for generating an image, the method comprising: acquiring a fundus image; preprocessing the fundus image to obtain a preprocessed fundus image; inputting the preprocessed fundus image into a pre-trained fundus image recognition model to obtain fundus image type information, where the fundus image recognition model includes a feature-map extraction layer, a global average pooling layer, a global max pooling layer, and a fully connected layer; in response to the fundus image type information indicating that the fundus image type is a predetermined type, generating a model region-of-interest image based on the feature maps extracted by the feature-map extraction layer and class activation mapping; and generating a fundus image including a predetermined-type image region based on the model region-of-interest image and the fundus image.
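The class-activation-mapping step described above can be sketched as follows. This is a minimal NumPy illustration, assuming a CNN whose feature-map extraction layer outputs a (C, H, W) tensor that feeds global average pooling and a fully connected layer; the function and variable names are hypothetical, not the disclosure's:

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Weight each feature map by the fully connected layer's weight for
    the target class, sum over channels, and normalize to [0, 1].

    feature_maps: (C, H, W) output of the feature-map extraction layer
    fc_weights:   (num_classes, C) weights of the fully connected layer
    """
    weights = fc_weights[class_idx]                    # (C,)
    cam = np.tensordot(weights, feature_maps, axes=1)  # (H, W)
    cam = np.maximum(cam, 0)                           # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

In the method above, the resulting map would then be upsampled to the fundus image's size to serve as the model region-of-interest image.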
In some embodiments, preprocessing the fundus image comprises: determining whether the fundus image is a green-channel fundus image; in response to determining that the fundus image is a green-channel fundus image, performing at least one of the following on the fundus image: data standardization and size normalization; in response to determining that the fundus image is not a green-channel fundus image, performing green-channel extraction on the fundus image to obtain a green-channel fundus image, and then performing at least one of the following on the green-channel fundus image: data standardization and size normalization.
In some embodiments, the fundus image recognition model is trained as follows: obtain a training sample set, where each training sample includes a sample fundus image and sample type information identifying the image type of that sample fundus image; select at least two training samples from the set and perform the following training step: input each sample fundus image of the selected samples into the initial fundus image recognition model in turn to obtain image type information for each sample fundus image; compare the obtained image type information for each sample fundus image with that sample's type information to obtain the prediction accuracy of the initial model; determine whether the prediction accuracy exceeds a preset accuracy threshold; in response to determining that it does, take the initial model as the fundus image recognition model; in response to determining that it does not, adjust the relevant parameters of the initial model, select at least two training samples from the set again, and perform the training step again with the adjusted model as the initial model.
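The training loop just described — sample at least two training samples, measure prediction accuracy, stop once it exceeds the preset threshold, otherwise adjust parameters and repeat — can be sketched as follows. This is a toy illustration: the stand-in model and the `adjust` callback are hypothetical, not the disclosure's actual network or optimizer:

```python
import random

def train_until_accurate(model, samples, accuracy_threshold, adjust, max_rounds=100):
    """samples: list of (fundus_image, sample_type) pairs.
    adjust: callback that perturbs the model's parameters when the
    accuracy does not exceed the threshold."""
    for _ in range(max_rounds):
        batch = random.sample(samples, min(2, len(samples)))  # at least two samples
        correct = sum(1 for image, label in batch if model(image) == label)
        accuracy = correct / len(batch)
        if accuracy > accuracy_threshold:
            return model          # accuracy exceeds the threshold: done
        adjust(model)             # otherwise adjust parameters and retry
    return model
```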
In some embodiments, generating the fundus image including the predetermined-type image region based on the model region-of-interest image and the fundus image comprises: thresholding the model region-of-interest image according to a preset second threshold to obtain a thresholded model region-of-interest image; generating an initial fundus image including the predetermined-type image region according to the pixel-position correspondence between the thresholded model region-of-interest image and the fundus image; and applying wavelet threshold denoising to the initial fundus image according to a preset third threshold to generate the fundus image including the predetermined-type image region.
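The thresholding and pixel-position mapping steps can be illustrated with NumPy. This is a simplified sketch under assumed shapes; the final wavelet threshold denoising step is omitted, as it would typically rely on a wavelet library such as PyWavelets:

```python
import numpy as np

def region_of_interest_fundus(cam, fundus, second_threshold):
    """Binarize the model region-of-interest image at `second_threshold`,
    then keep only the fundus pixels at the corresponding positions."""
    mask = (cam >= second_threshold).astype(fundus.dtype)  # thresholded ROI image
    return fundus * mask                                   # pixel-position mapping
```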
In some embodiments, the method further includes: sending the fundus image including the predetermined-type image region to a target display device, and controlling the target display device to display it.
In a second aspect, embodiments of the disclosure provide an apparatus for generating an image, the apparatus comprising: an acquisition unit configured to obtain a fundus image; a preprocessing unit configured to preprocess the fundus image to obtain a preprocessed fundus image; a recognition unit configured to input the preprocessed fundus image into a pre-trained fundus image recognition model to obtain fundus image type information, where the fundus image recognition model includes a feature-map extraction layer, a global average pooling layer, a global max pooling layer, and a fully connected layer; a first generation unit configured, in response to the fundus image type information indicating that the fundus image type is a predetermined type, to generate a model region-of-interest image based on the feature maps extracted by the feature-map extraction layer and class activation mapping; and a second generation unit configured to generate a fundus image including a predetermined-type image region based on the model region-of-interest image and the fundus image.
In some embodiments, the preprocessing unit comprises: a determination subunit configured to determine whether the fundus image is a green-channel fundus image; a first processing subunit configured, in response to determining that the fundus image is a green-channel fundus image, to perform at least one of the following on the fundus image: data standardization and size normalization; an extraction subunit configured, in response to determining that the fundus image is not a green-channel fundus image, to perform green-channel extraction on the fundus image to obtain a green-channel fundus image; and a second processing subunit configured to perform at least one of the following on the green-channel fundus image: data standardization and size normalization.
In some embodiments, the apparatus further includes a fundus image recognition model training unit, which comprises: a training-sample-set acquisition subunit configured to obtain a training sample set, where each training sample includes a sample fundus image and sample type information identifying the image type of that sample fundus image; a first model training subunit configured to select at least two training samples from the set and perform the following training step: input each selected sample fundus image into the initial fundus image recognition model in turn to obtain image type information for each sample fundus image; compare the obtained image type information for each sample fundus image with that sample's type information to obtain the prediction accuracy of the initial model; determine whether the prediction accuracy exceeds a preset accuracy threshold; and, in response to determining that it does, take the initial model as the fundus image recognition model; and a second model training subunit configured, in response to determining that the prediction accuracy does not exceed the preset accuracy threshold, to adjust the relevant parameters of the initial model, select at least two training samples from the set again, and perform the training step again with the adjusted model as the initial model.
In some embodiments, the second generation unit comprises: an image thresholding subunit configured to threshold the model region-of-interest image according to a preset second threshold to obtain a thresholded model region-of-interest image; an image mapping subunit configured to generate an initial fundus image including the predetermined-type image region according to the pixel-position correspondence between the thresholded model region-of-interest image and the fundus image; and a wavelet denoising subunit configured to apply wavelet threshold denoising to the initial fundus image according to a preset third threshold to generate the fundus image including the predetermined-type image region.
In some embodiments, the apparatus further includes: a control unit configured to send the fundus image including the predetermined-type image region to a target display device and control the target display device to display it.
In a third aspect, embodiments of the disclosure provide an electronic device comprising: one or more processors; and a storage device on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement the method described in any embodiment of the first aspect.
In a fourth aspect, embodiments of the disclosure provide a computer-readable medium on which a computer program is stored, which, when executed by a processor, implements the method described in any embodiment of the first aspect.
The method and apparatus for generating an image provided by embodiments of the disclosure first obtain a fundus image and preprocess it to obtain a preprocessed fundus image. The preprocessed fundus image is then input into a pre-trained fundus image recognition model to obtain fundus image type information, from which it is determined whether the fundus image is of a predetermined type. After the fundus image is determined to be of the predetermined type, a model region-of-interest image is generated using class activation mapping, and a fundus image including a predetermined-type image region is generated based on the model region-of-interest image and the fundus image. In this embodiment, the fundus image recognition model is pre-trained and can identify the type of a fundus image, so image generation processing is performed only on fundus images that the model identifies as the predetermined type. This reduces the computational load of the processor and saves its computing resources.
Brief description of the drawings
Other features, objects, and advantages of the disclosure will become more apparent by reading the following detailed description of non-restrictive embodiments with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which an embodiment of the disclosure may be applied;
Fig. 2 is a flowchart of one embodiment of the method for generating an image according to the disclosure;
Fig. 3 is a schematic diagram of an application scenario of the method for generating an image according to the disclosure;
Fig. 4 is a structural schematic diagram of one embodiment of the apparatus for generating an image according to the disclosure;
Fig. 5 is a structural schematic diagram of an electronic device suitable for implementing embodiments of the disclosure.
Detailed description of embodiments
The disclosure is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the relevant disclosure, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts relevant to the disclosure.
It should be noted that, in the absence of conflict, the embodiments of the disclosure and the features in the embodiments may be combined with each other. The disclosure is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 in which the method for generating an image or the apparatus for generating an image of embodiments of the disclosure may be applied.
As shown in Fig. 1, the system architecture 100 may include terminals 101 and 102, a network 103, a database server 104, and a server 105. The network 103 is the medium providing communication links between the terminals 101 and 102, the database server 104, and the server 105. The network 103 may include various connection types, such as wired or wireless communication links, or fiber-optic cables.
A user 110 may use the terminals 101 and 102 to interact with the server 105 through the network 103 to receive or send messages and the like. Various client applications may be installed on the terminals 101 and 102, such as image processing applications, picture browsing applications, shopping applications, payment applications, web browsers, and instant messaging tools.
The terminals 101 and 102 here may be hardware or software. When they are hardware, they may be various electronic devices with a display screen, including but not limited to smartphones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), laptop computers, and desktop computers. When the terminals 101 and 102 are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
The database server 104 may be a database server providing various services. For example, a training sample set containing a large number of training samples may be stored in the database server. Each training sample may include a sample fundus image and sample type information identifying the image type of that sample fundus image. In this way, the user 110 may also select training samples, through the terminals 101 and 102, from the training sample set stored in the database server 104.
The server 105 may also be a server providing various services, for example a backend server supporting the various applications displayed on the terminals 101 and 102. The backend server may train the fundus image recognition model using the training samples in the training sample set sent by the terminals 101 and 102, and may send the training result (for example, the trained fundus image recognition model) to the terminals 101 and 102. The backend server may also obtain a to-be-processed fundus image stored in the database server 104 or receive a to-be-processed fundus image sent by the terminals 101 and 102, use the trained fundus image recognition model to identify the fundus image type of the to-be-processed fundus image, and generate a fundus image including a predetermined-type image region based on the feature maps extracted by the feature-map extraction layer of the model, class activation mapping, and the to-be-processed fundus image.
The database server 104 and the server 105 here may likewise be hardware or software. When they are hardware, they may be implemented as a distributed server cluster composed of multiple servers or as a single server. When they are software, they may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
It should be noted that the method for generating an image provided by embodiments of the disclosure is generally performed by the server 105. Correspondingly, the apparatus for generating an image is generally also disposed in the server 105.
It should be pointed out that, in the case where the server 105 can implement the relevant functions of the database server 104, the database server 104 may be omitted from the system architecture 100.
It should be understood that the numbers of terminals, networks, database servers, and servers in Fig. 1 are merely illustrative. There may be any number of terminals, networks, database servers, and servers according to implementation needs.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for generating an image according to the disclosure is shown. The method for generating an image comprises the following steps:
Step 201: obtain a fundus image.
In this embodiment, the executing body of the method for generating an image (for example, the server 105 shown in Fig. 1) may obtain the fundus image in several ways. For example, the executing body may obtain an existing fundus image stored in a database server (for example, the database server 104 shown in Fig. 1) through a wired or wireless connection. As another example, the executing body may collect the fundus image through a terminal (for example, the terminals 101 and 102 shown in Fig. 1).
In this embodiment, the fundus image may include a color image and/or a green-channel image. The color image is, for example, an RGB (red, green, blue) image containing three color channels. The format of the fundus image is not limited in the disclosure: formats such as JPG (Joint Photographic Experts Group), BMP (Bitmap), or RAW (RAW Image Format, a lossless format) are all acceptable, as long as the executing body can read and recognize them.
Step 202: preprocess the fundus image to obtain a preprocessed fundus image.
In this embodiment, the executing body may first determine whether the fundus image obtained in step 201 is a green-channel fundus image. If it determines that the fundus image is a green-channel fundus image, the executing body may perform image preprocessing on it directly. If it determines that the fundus image is not a green-channel fundus image, the executing body may first perform green-channel extraction on the fundus image to obtain a green-channel fundus image, and then preprocess the green-channel fundus image. The green channel is the channel richest in vessel information in a color fundus image, while the red and blue channels vary widely across fundus images captured by fundus cameras from different manufacturers. Therefore, when the purpose of processing the fundus image is to identify the blood vessel structure, the executing body can discard the red and blue channels of the fundus image and process only the green-channel fundus image. This reduces the computational load of the processor and saves processing time without affecting the vessel-structure recognition result.
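Green-channel extraction from an RGB fundus image is a single slicing operation. A minimal sketch, assuming the image is an (H, W, 3) NumPy array in RGB channel order (the function name is illustrative):

```python
import numpy as np

def green_channel(fundus):
    """Return the green channel of an RGB fundus image; pass a
    single-channel image through unchanged."""
    if fundus.ndim == 2:          # already a green-channel image
        return fundus
    return fundus[:, :, 1]        # channel 1 is green in RGB order
```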
The executing body may preprocess the green-channel fundus image through the following steps:
First, perform data standardization on the green-channel fundus image to obtain a data-standardized fundus image.
In this embodiment, the executing body may first determine the mean and variance of the image data of the green-channel fundus image, and then, based on the mean and variance, apply a data standardization formula to the pixel value of each pixel in the green-channel fundus image. The data standardization formula can take many forms; one of them is:

p' = (p - μ) / ρ

where p denotes the original pixel value of a pixel in the green-channel fundus image; p' denotes the pixel value of that pixel after data standardization; μ denotes the mean of the image data of the green-channel fundus image; and ρ denotes the variance of the image data of the green-channel fundus image.
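The data standardization formula can be sketched as follows. Note this minimal version divides by the standard deviation, a common convention; the text calls ρ the variance, so treat the choice of divisor as an assumption:

```python
import numpy as np

def standardize(green):
    """Subtract the mean of the image data and divide by its spread.
    Here rho is taken as the standard deviation (assumption); the
    text's rho could equally be read as the variance."""
    mu = green.mean()
    rho = green.std()
    return (green - mu) / rho if rho > 0 else green - mu
```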
Second, perform size normalization on the data-standardized fundus image.
In this embodiment, the executing body may perform size normalization on the data-standardized fundus image obtained in the first step. First, determine the size relationship between the image size of the data-standardized fundus image and a target image size. If the relationship indicates that the image size of the data-standardized fundus image is larger than the target image size, shrink the image to the target image size; if it is smaller, enlarge the image to the target image size. Here, the target image size is a preset standard size that can be set according to actual needs.
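The shrink-or-enlarge step can be sketched with a nearest-neighbor rescale. This is a toy stand-in for a real resampling routine (for example OpenCV's `cv2.resize`), kept dependency-free for illustration:

```python
import numpy as np

def resize_to_target(img, target_h, target_w):
    """Nearest-neighbor rescale of a 2-D image to (target_h, target_w):
    shrinks when the image is larger than the target, enlarges when
    it is smaller."""
    h, w = img.shape
    rows = np.arange(target_h) * h // target_h   # source row for each output row
    cols = np.arange(target_w) * w // target_w   # source column for each output column
    return img[rows][:, cols]
```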
In some optional implementations of this embodiment, the executing body may preprocess the fundus image through the following steps:
First, determine whether the fundus image is a green-channel fundus image.
In this optional implementation, the fundus image obtained in step 201 may include a color image or a green-channel image. The executing body may first determine whether the fundus image is a green-channel fundus image.
Second, in response to determining that the fundus image is a green-channel fundus image, perform at least one of the following on the fundus image: data standardization and size normalization.
In this optional implementation, the data standardization may include: the executing body may first take the pixel value of the background pixels of the green-channel fundus image as a standardization threshold, then determine the mean and variance of the image data of the green-channel fundus image whose values exceed the standardization threshold, and finally apply a data standardization formula to the pixel value of each pixel in the green-channel fundus image. The data standardization formula can take many forms; one of them is:

p' = (p - μ) / ρ

where p denotes the original pixel value of a pixel in the green-channel fundus image; p' denotes the pixel value of that pixel after data standardization; μ denotes the mean of the image data of the green-channel fundus image whose values exceed the standardization threshold; and ρ denotes the variance of that image data.
In this optional implementation, by taking the pixel value of the background pixels of the green-channel fundus image as the standardization threshold and computing the mean and variance only over image data whose values exceed that threshold, the executing body weakens the image background in the data-standardized green-channel fundus image, which in turn reduces the influence of the image background on the accuracy of the fundus image type recognition result.
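The background-aware variant can be sketched like this: the statistics are computed only over pixels brighter than the standardization threshold, so the dark background does not skew the mean. Again, taking ρ as the standard deviation is an assumption of this sketch, and the function name is illustrative:

```python
import numpy as np

def standardize_foreground(green, standardization_threshold=0.0):
    """Compute mean and spread only over image data whose values exceed
    the standardization threshold (the background pixel value), then
    standardize the whole image with those statistics."""
    foreground = green[green > standardization_threshold]
    mu = foreground.mean()
    rho = foreground.std()
    return (green - mu) / rho if rho > 0 else green - mu
```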
In this optional implementation, the size normalization may include: the executing body may first determine the length-width ratio of the green-channel fundus image and the lateral width of the fundus region within it. Then, keeping the length-width ratio unchanged, the executing body adjusts the image size of the green-channel fundus image so that the lateral width of the fundus region equals a predetermined width (for example, 1024 pixels). Afterwards, the executing body crops and/or pads the green-channel fundus image so that its length and width are equal. Here, the green-channel fundus image may be the one produced by the data standardization in the first step, or a green-channel fundus image produced by the data standardization function of general image processing software.
In the optional implementation, the eyeground region in green channel eye fundus image is eye fundus image type identification mistake Vital image-region in journey.The characteristics of image of this image-region determines the type identification result of eye fundus image.On Executing subject is stated while carrying out size-normalized processing to whole green channel eye fundus image, it is desirable that green channel eyeground figure The transverse width in eyeground region is equal to predetermined width as in.Eyeground region in green channel eye fundus image can be preferably protected in this way Characteristics of image, help to improve the accuracy of eye fundus image type identification result.
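The size normalization above can be sketched as follows, under stated assumptions: the fundus-region width would come from a segmentation step not shown here, a nearest-neighbour resize stands in for a library resize call, and zero-padding (one form of image completion) is used to reach a square; all names are illustrative.

```python
import numpy as np

def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour resize (stand-in for a library resize call)."""
    h, w = img.shape
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    return img[rows][:, cols]

def standardize_size(img, fundus_width, target_width=1024):
    """Scale the whole image, preserving its aspect ratio, so that the
    fundus region's horizontal width equals `target_width`, then zero-pad
    the shorter axis so height and width are equal."""
    scale = target_width / fundus_width
    h, w = img.shape
    img = resize_nearest(img, round(h * scale), round(w * scale))
    h, w = img.shape
    side = max(h, w)
    out = np.zeros((side, side), dtype=img.dtype)   # pad to a square
    out[(side - h) // 2:(side - h) // 2 + h,
        (side - w) // 2:(side - w) // 2 + w] = img
    return out
```

Scaling by the fundus-region width rather than the full image width is the point of the scheme: it fixes the physical scale of the diagnostically important region regardless of how much black border the capture device added.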
In the third step, in response to determining that the fundus image is not a green-channel fundus image, green-channel extraction is performed on the fundus image to obtain a green-channel fundus image.

In this optional implementation, when the executing subject determines that the fundus image obtained in step 201 is not a green-channel fundus image, it extracts the green-channel image data from the image data of the fundus image, obtaining a green-channel fundus image.
In the fourth step, at least one of the following is applied to the green-channel fundus image of the fundus image: data normalization and size normalization.

In this optional implementation, once the green-channel fundus image has been obtained, the executing subject need not perform green-channel extraction again, and can directly apply to it at least one of the data normalization and the size normalization described in the second step of this optional implementation.
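The branch in the second through fourth steps can be sketched as follows. The RGB channel order (green at index 1) and the single-channel test are assumptions about the data layout, not the patent's reference implementation.

```python
import numpy as np

def extract_green_channel(fundus):
    """If the input is a full-colour fundus image (H x W x 3, assumed RGB),
    take its green channel, which typically shows the strongest vessel
    contrast; if it is already single-channel, return it unchanged."""
    if fundus.ndim == 2:          # already a green-channel fundus image
        return fundus
    return fundus[..., 1]         # channel index 1 assumed to be green
```

After this call the same normalization pipeline applies in both branches, which is why the fourth step simply reuses the operations of the second step.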
Step 203: input the preprocessed fundus image into a fundus image recognition model trained in advance, obtaining fundus image type information.

In this embodiment, the executing subject may input the preprocessed fundus image generated in step 202 into the fundus image recognition model trained in advance, thereby obtaining the image type information of the fundus image.
In this embodiment, the image type information output by the fundus image recognition model may be any one of the following: a first-type fundus image (for example, a fundus image without obvious diabetic retinopathy); a second-type fundus image (for example, a fundus image with non-proliferative diabetic retinopathy); or a third-type fundus image (for example, a fundus image with proliferative diabetic retinopathy).
In this embodiment, the fundus image recognition model characterizes the correspondence between a fundus image and its image type information. The fundus image recognition model may be a convolutional neural network comprising a feature map extraction layer, a global average pooling layer, a global max pooling layer and a fully connected layer. The feature map extraction layer extracts feature maps from the input image. The global average pooling (GAP) layer and the global max pooling (GMP) layer perform feature selection and information filtering on the feature maps extracted by the feature map extraction layer. The fully connected (FC) layer integrates the class-discriminative feature information from the feature map extraction layer, the global average pooling layer and the global max pooling layer to obtain the image type information. In this embodiment, the feature map extraction layer may include, but is not limited to, a residual network (ResNet) or a densely connected convolutional network (DenseNet). The parameters of each layer of the fundus image recognition model may differ. As an example, the executing subject may feed the preprocessed fundus image obtained in step 202 into the input side of the fundus image recognition model, pass it through the processing of each layer in turn, and take the information output at the output side as the image type information of the fundus image.
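The pooling-and-fusion head described above can be sketched with plain NumPy. This is a toy forward pass under stated assumptions: the GAP and GMP vectors are concatenated before the fully connected layer, no activation is applied, and the backbone (ResNet/DenseNet) is not reproduced — the patent does not specify how the three sources of feature information are fused, so all names and shapes are illustrative.

```python
import numpy as np

def gap(feature_maps):
    """Global average pooling: (C, H, W) feature maps -> (C,) vector."""
    return feature_maps.mean(axis=(1, 2))

def gmp(feature_maps):
    """Global max pooling: (C, H, W) feature maps -> (C,) vector."""
    return feature_maps.max(axis=(1, 2))

def classify(feature_maps, fc_weights, fc_bias):
    """Toy head: concatenate the GAP and GMP features, apply a fully
    connected layer, and return the index of the predicted image type."""
    pooled = np.concatenate([gap(feature_maps), gmp(feature_maps)])
    logits = fc_weights @ pooled + fc_bias
    return int(np.argmax(logits))
```

With C feature maps, `fc_weights` has shape (num_types, 2·C); the three type indices would correspond to the first, second and third fundus image types described above.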
In this embodiment, the executing subject may train, in a number of ways, the pre-trained fundus image recognition model that characterizes the correspondence between fundus images and fundus image type information. As an example, the executing subject may first obtain, from a database server (such as the database server 104 shown in Fig. 1), multiple sample fundus images together with, for each sample fundus image, a sample type label identifying its image type. It may then use each sample fundus image as the input of an initial fundus image recognition model and the corresponding sample type label as the desired output, and train to obtain the fundus image recognition model. Here, the executing subject may obtain the multiple sample fundus images and present them to those skilled in the art, who may label each sample fundus image with a sample type label based on experience. The executing subject may also use general-purpose image processing software to apply image data normalization and image size normalization to each sample fundus image. The initial fundus image recognition model trained by the executing subject may be an untrained convolutional neural network, or a convolutional neural network whose training has not been completed; each layer of the initial recognition model may be set with initial parameters, which are continuously adjusted during training.
In some optional implementations of this embodiment, the fundus image recognition model may be obtained by training through the following steps:

In the first step, a training sample set is obtained, where each training sample includes a sample fundus image and sample type information identifying the image type of the sample fundus image.

In this optional implementation, the executing subject (such as the server 105 shown in Fig. 1) may obtain the training sample set in various ways. For example, it may obtain an existing training sample set stored in a database server (such as the database server 104 shown in Fig. 1) through a wired or wireless connection. As another example, users may collect training samples through terminals (such as the terminals 101 and 102 shown in Fig. 1); the executing subject may then receive the training samples collected by the terminals and store them locally to generate the training sample set.

In this embodiment, the training sample set may include at least two training samples, where each training sample may include a sample fundus image and sample type information identifying the image type of that sample fundus image.
In the second step, at least two training samples are chosen from the training sample set, and the following training step is executed: each sample fundus image in the chosen training samples is input in turn into the initial fundus image recognition model, obtaining the image type information corresponding to each sample fundus image; the image type information corresponding to each sample fundus image is compared with the sample type information corresponding to that sample fundus image, obtaining the prediction accuracy of the initial fundus image recognition model; it is determined whether the prediction accuracy is greater than a preset accuracy threshold; and, in response to determining that it is, the initial fundus image recognition model is determined to be the fundus image recognition model.

In this optional implementation, based on the training sample set obtained in the first step, the executing subject may choose at least two training samples from the training sample set. It then inputs each sample fundus image of the chosen training samples in turn into the initial fundus image recognition model, obtaining the image type information corresponding to each sample fundus image. Here, the initial fundus image recognition model may be an untrained recognition model or one whose training has not been completed; each of its layers is set with initial parameters, which are continuously adjusted during training.

In this optional implementation, the executing subject compares the image type information corresponding to each sample fundus image with the sample type information corresponding to that sample fundus image, obtaining the prediction accuracy of the initial fundus image recognition model. Specifically, if the image type corresponding to a sample fundus image is identical to the sample type information corresponding to that sample fundus image, the prediction of the initial recognition model is correct; if they differ, the prediction is wrong. The executing subject may compute the ratio of the number of correct predictions to the total number of chosen samples and use it as the prediction accuracy of the initial fundus image recognition model.
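The accuracy computation just described is simple enough to state directly. A minimal sketch; the function name is illustrative, and in training this value would then be compared against the preset accuracy threshold:

```python
def prediction_accuracy(predicted_types, sample_labels):
    """Fraction of sample fundus images whose predicted image type
    matches the corresponding sample type label."""
    correct = sum(p == s for p, s in zip(predicted_types, sample_labels))
    return correct / len(sample_labels)
```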
In this optional implementation, the executing subject compares the prediction accuracy of the initial fundus image recognition model with the preset accuracy threshold. If the prediction accuracy is greater than the preset accuracy threshold, the training of the initial fundus image recognition model is complete. At this point, the executing subject may determine the trained initial fundus image recognition model to be the fundus image recognition model.
In the third step, in response to determining that the prediction accuracy is not greater than the preset accuracy threshold, the relevant parameters of the initial fundus image recognition model are adjusted, at least two training samples are chosen again from the training sample set, and the training step is executed again using the adjusted initial fundus image recognition model as the initial fundus image recognition model.

In this optional implementation, when the prediction accuracy of the initial fundus image recognition model is not greater than the preset accuracy threshold, the executing subject may adjust the parameters of the initial fundus image recognition model and return to the first and second steps of this optional implementation, until a fundus image recognition model that characterizes the correspondence between fundus images and fundus image type information is trained.

In this optional implementation, the sample fundus images used by the executing subject during the training of the fundus image recognition model are fundus images without pixel-level annotation. This reduces the cost of image data annotation and enhances the generalization ability of the model.
Step 204: in response to the fundus image type indicated by the fundus image type information being a predetermined type, a model region-of-interest image is generated based on the feature maps extracted by the feature map extraction layer of the fundus image recognition model and class activation mapping.

In this embodiment, the predetermined type may, for example, be the third type. When the image type information output by the fundus image recognition model in step 203 indicates a third-type fundus image, the image type of the fundus image is the predetermined type. Class activation mapping (CAM) is a technique for visualizing the feature maps, produced inside the fundus image recognition model, that are directly related to the classification result. Using class activation mapping, the executing subject computes a weighted sum of the feature maps directly related to the classification result to generate the model region-of-interest image. For example, for a third-type fundus image, one of the important image features is the presence of a neovascular region in the fundus image. Using class activation mapping, the executing subject computes a weighted sum of the feature maps directly related to the third-type classification result, generating a model region-of-interest image in which the neovascular region is accurately displayed.
In this embodiment, when the fundus image type information obtained in step 203 indicates a third-type fundus image, the executing subject may first generate an initial model region-of-interest image based on the feature maps extracted by the feature map extraction layer of the fundus image recognition model and the class activation mapping CAM. This may be expressed as:

M(x, y) = Σ_{n=1}^{N} (w_an + w_mn) · F_n(x, y)

where M(x, y) denotes the pixel matrix of the model region-of-interest image generated by class activation mapping; N denotes the number of feature maps extracted by the feature map extraction layer of the fundus image recognition model; F_n denotes the pixel matrix of the n-th feature map extracted by the feature map extraction layer; w_an denotes the global average pooling weight of the global average pooling layer GAP; and w_mn denotes the global max pooling weight of the global max pooling layer GMP.

The executing subject may then adjust the image size of the initial model region-of-interest image so that it is adapted to the image size of the fundus image (for example, so that the two image sizes are equal, or stand in a fixed proportional relationship).

Next, the executing subject may choose a binarization threshold according to the distribution of the image data of the initial model region-of-interest image, and binarize the initial model region-of-interest image based on that threshold to generate the model region-of-interest image. Here, the binarization threshold may be set according to actual needs.
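The three-stage construction above (weighted sum, resize, binarize) can be sketched as follows, assuming the weighted-sum form M(x, y) = Σ_n (w_an + w_mn)·F_n(x, y); the nearest-neighbour upscale and the externally supplied threshold are illustrative assumptions, not the patent's reference implementation.

```python
import numpy as np

def class_activation_map(feature_maps, w_avg, w_max, out_h, out_w, thresh):
    """Build a binary region-of-interest mask from (N, H, W) feature maps:
    weight each map by its GAP weight plus its GMP weight, sum over maps,
    upscale to the fundus image size, and binarize with `thresh`."""
    cam = np.tensordot(w_avg + w_max, feature_maps, axes=1)  # (H, W) map
    h, w = cam.shape
    rows = np.arange(out_h) * h // out_h   # nearest-neighbour upscale
    cols = np.arange(out_w) * w // out_w
    cam = cam[rows][:, cols]
    return (cam > thresh).astype(np.uint8)
```

In practice the threshold would be chosen from the distribution of the map's values (for example a fixed quantile), as the text describes.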
Step 205: based on the model region-of-interest image and the fundus image, a fundus image including a predetermined-type image region is generated.

In this embodiment, the predetermined-type image region may be an important image region in a third-type fundus image (for example, a neovascular image region). The model region-of-interest image generated in step 204 includes the predetermined-type image region.

In this embodiment, there is an image size fitting relationship between the model region-of-interest image generated in step 204 and the fundus image. The executing subject may first establish, according to that fitting relationship, the positional correspondence between the pixels of the model region-of-interest image and the pixels of the fundus image. It may then locate in the fundus image, according to that positional correspondence, the predetermined-type image region included in the model region-of-interest image, obtaining a fundus image including the predetermined-type image region. For example, if the model region-of-interest image includes a neovascular image, the executing subject first establishes the positional correspondence between the pixels of the two images according to their image size fitting relationship, and then locates in the fundus image the neovascular image region included in the model region-of-interest image, obtaining a fundus image including the neovascular image region.
In some optional implementations of this embodiment, the executing subject may generate the fundus image including the predetermined-type image region through the following steps:

In the first step, the model region-of-interest image is thresholded according to a preset second threshold, obtaining a thresholded model region-of-interest image.

In this optional implementation, the executing subject may first perform connected component analysis on the model region-of-interest image generated in step 204. It then filters out, according to the preset second threshold, the connected regions of the model region-of-interest image whose area is greater than the preset second threshold, obtaining the thresholded model region-of-interest image. Here, the preset second threshold may be set according to actual needs. For example, a model region-of-interest image corresponding to a third-type fundus image generally includes a proliferative membrane region and a neovascular region, and the proliferative membrane region is usually noticeably larger than the neovascular region. The executing subject first performs connected component analysis on the model region-of-interest image, then determines the preset second threshold according to the sizes of the connected regions, and finally filters out the connected regions whose area is greater than the preset second threshold, thereby removing the proliferative membrane region from the model region-of-interest image.
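The area-based filtering above can be sketched with a plain breadth-first connected-component search (a library routine such as `scipy.ndimage.label` would normally be used; the hand-rolled 4-connectivity BFS here keeps the sketch self-contained, and the function name is illustrative).

```python
import numpy as np
from collections import deque

def filter_large_regions(mask, max_area):
    """Zero out every 4-connected component of the binary mask whose pixel
    count exceeds `max_area` (the "second threshold"), so large structures
    (e.g. a proliferative membrane) are removed while small regions
    (e.g. neovascular patches) are kept."""
    mask = mask.astype(bool)
    out = mask.copy()
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if not mask[sy, sx] or seen[sy, sx]:
                continue
            comp, queue = [], deque([(sy, sx)])
            seen[sy, sx] = True
            while queue:                       # BFS over one component
                y, x = queue.popleft()
                comp.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                        seen[ny, nx] = True
                        queue.append((ny, nx))
            if len(comp) > max_area:           # too big: filter it out
                for y, x in comp:
                    out[y, x] = False
    return out.astype(np.uint8)
```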
In the second step, a fundus image initially including the predetermined-type image region is generated according to the pixel position correspondence between the thresholded model region-of-interest image and the fundus image.

In this optional implementation, there is an image size fitting relationship between the thresholded model region-of-interest image obtained in the first step and the fundus image. The executing subject may first establish, according to that fitting relationship, the positional correspondence between the pixels of the thresholded model region-of-interest image and the pixels of the fundus image. It then locates in the fundus image, according to that positional correspondence, the predetermined-type image region included in the thresholded model region-of-interest image, obtaining the fundus image initially including the predetermined-type image region.
In the third step, wavelet threshold denoising is applied, according to a preset third threshold, to the fundus image initially including the predetermined-type image region, generating the fundus image including the predetermined-type image region.

In this optional implementation, the executing subject may apply a wavelet decomposition to the fundus image initially including the predetermined-type image region obtained in the second step, obtaining the low-frequency component and the high-frequency component of the decomposition. The executing subject then high-pass filters the high-frequency component based on the preset third threshold, obtaining a filtered high-frequency component. Here, the preset third threshold may be set according to actual needs. The executing subject then applies the inverse wavelet transform to the low-frequency component and the filtered high-frequency component, generating the fundus image including the predetermined-type image region. For example, the fundus image initially including the predetermined-type image region may still contain a proliferative membrane region not filtered out in the first step of this optional implementation, in addition to the neovascular region. Because the neovascular region is more complex than the proliferative membrane region, after the wavelet decomposition the neovascular region corresponds to a stronger signal in the high-frequency component, while the proliferative membrane region corresponds to a weaker one. The executing subject may first choose the preset third threshold to filter out the weaker signal in the high-frequency component, and then apply the inverse wavelet transform to the low-frequency component and the filtered high-frequency component, generating a fundus image including the neovascular image region. Through the processing steps of this optional implementation, the executing subject can effectively filter out the proliferative membrane region in the fundus image, and can thereby identify the neovascular region in the fundus image more accurately.
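The wavelet-threshold step can be sketched with a single-level 2D Haar transform (the patent does not name the wavelet, so Haar is an assumption; a library such as PyWavelets would normally be used, and even image dimensions are assumed). Detail coefficients below the "third threshold" are zeroed, suppressing weak high-frequency structure, before the image is reconstructed.

```python
import numpy as np

def haar_denoise(img, detail_thresh):
    """Single-level 2D Haar decomposition, hard-threshold the detail
    (high-frequency) bands at `detail_thresh`, and invert the transform.
    Assumes even height and width."""
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    ll = (a + b + c + d) / 4.0                      # low-frequency component
    lh = (a - b + c - d) / 4.0                      # detail bands
    hl = (a + b - c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    for band in (lh, hl, hh):                       # zero weak detail signal
        band[np.abs(band) < detail_thresh] = 0.0
    out = np.empty_like(img, dtype=float)           # inverse Haar transform
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll - lh + hl - hh
    out[1::2, 0::2] = ll + lh - hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out
```

With the threshold set between the membrane's weak detail magnitudes and the neovascular region's strong ones, the reconstruction keeps the neovascular structure while smoothing the membrane away, matching the behaviour described above.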
In some optional implementations of this embodiment, the method further includes sending the fundus image including the predetermined-type image region to a target display device, and controlling the target display device to display the fundus image including the predetermined-type image region.

In this optional implementation, the target display device is a device communicatively connected with the executing subject and used to display the images the executing subject sends (such as the terminals 101 and 102 shown in Fig. 1). In practice, the executing subject may send a control signal to the target display device, thereby controlling the target display device to display the fundus image including the predetermined-type image region.

The image structure of the fundus region in a predetermined-type fundus image is complex. If the executing subject sent the predetermined-type fundus image directly to the target display device for display, the user would find it difficult to obtain, by human observation, the relevant information about key fundus structures (for example, the position of a neovascular region, or the number of neovascular regions). In this optional implementation, the fundus image including the predetermined-type image region generated in step 205 clearly shows the relevant information about the key fundus structures in the predetermined-type fundus image. By sending this fundus image to the target display device and controlling the device to display it, the executing subject can save the user the time of obtaining the relevant information through visual analysis of the key fundus structures, thereby reducing the consumption of display resources.
With further reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for generating an image according to this embodiment. In the application scenario of Fig. 3, an image generation application may be installed on the terminal 31 used by a user. When the user opens the application and uploads a fundus image 3210, the server 32 that provides back-end support for the application may run the method for generating an image, which includes: preprocessing 3201 the fundus image 3210 uploaded by the terminal 31 to obtain a preprocessed fundus image 3211; inputting the preprocessed fundus image 3211 into the fundus image recognition model, which outputs the image type information 3213, where the fundus image recognition model includes a feature map extraction layer 3202, a global average pooling layer 3203, a global max pooling layer 3204 and a fully connected layer 3205; and determining whether the fundus image type indicated by the image type information 3213 output by the fundus image recognition model is the predetermined type 3206. If the fundus image type indicated by the image type information 3213 is the predetermined type, the first generation unit 3207 generates a model region-of-interest image 3214 based on the feature maps 3212 extracted by the feature map extraction layer 3202, the global average pooling weights of the global average pooling layer 3203, the global max pooling weights of the global max pooling layer 3204, and class activation mapping. The second generation unit 3208 then generates, based on the model region-of-interest image 3214 and the fundus image 3210, a fundus image 3215 including the predetermined-type image region.

The method and apparatus for generating an image provided by the embodiments of the disclosure obtain a fundus image, preprocess it to obtain a preprocessed fundus image, input the preprocessed fundus image into a fundus image recognition model trained in advance to obtain fundus image type information, and determine from the image type information whether the fundus image is a predetermined-type fundus image. After the type of the fundus image is determined to be the predetermined type, a model region-of-interest image is generated using class activation mapping, and a fundus image including the predetermined-type image region is generated based on the model region-of-interest image and the fundus image. In this embodiment, the fundus image recognition model is trained in advance and can effectively identify the type of a fundus image. The executing subject then performs the image generation processing only on fundus images whose image type is the predetermined type. This embodiment can thus rapidly and accurately identify the neovascular region in a fundus image, while reducing the computation load of the processor and saving its computing resources.
As shown in Fig. 4, the apparatus 400 for generating an image of this embodiment includes: an acquisition unit 401, a preprocessing unit 402, a recognition unit 403, a first generation unit 404 and a second generation unit 405. The acquisition unit 401 is configured to acquire a fundus image. The preprocessing unit 402 is configured to preprocess the fundus image to obtain a preprocessed fundus image. The recognition unit 403 is configured to input the preprocessed fundus image into a fundus image recognition model trained in advance to obtain fundus image type information, where the fundus image recognition model includes a feature map extraction layer, a global average pooling layer, a global max pooling layer and a fully connected layer. The first generation unit 404 is configured to generate, in response to the fundus image type indicated by the fundus image type information being a predetermined type, a model region-of-interest image based on the feature maps extracted by the feature map extraction layer and class activation mapping. The second generation unit 405 is configured to generate, based on the model region-of-interest image and the fundus image, a fundus image including a predetermined-type image region.

In this embodiment, the specific processing of the acquisition unit 401, the preprocessing unit 402, the recognition unit 403, the first generation unit 404 and the second generation unit 405 of the apparatus 400 for generating an image, and the technical effects brought thereby, may refer respectively to the related descriptions of step 201, step 202, step 203, step 204 and step 205 in the embodiment corresponding to Fig. 2, and are not repeated here.
In some optional implementations of the present embodiment, pretreatment unit 402 comprises determining that subelement, is configured It whether is green channel eye fundus image at determining eye fundus image;First processing subelement, is configured in response to determine eyeground figure Seem green channel eye fundus image, to the eye fundus image perform the following operation at least one of: data normalization processing, size Standardization processing.Subelement is extracted, is configured in response to determine that eye fundus image is not green channel eye fundus image, to the eyeground Image carries out green channel images extraction process, obtains green channel eye fundus image;Second processing subelement, is configured to this The green channel eye fundus image of eye fundus image perform the following operation at least one of: data normalization processing, it is size-normalized Processing.
In some optional implementations of the present embodiment, the device 400 for generating image can also include eye fundus image Identification model training unit (not shown), which may include: acquisition training sample Gather subelement, be configured to obtain training sample set, wherein training sample includes sample eye fundus image and for identifying sample The sample type label of the image type of this eye fundus image;The first subelement of model training, is configured from training sample set At least two training samples are chosen, and execute following training step: by each sample at least two training samples of selection This eye fundus image sequentially inputs initial eye fundus image identification model, obtains each sample eyeground figure at least two training samples As corresponding picture type information;By image type corresponding to each sample eye fundus image at least two training samples Information is compared with sample type information corresponding to the sample eye fundus image, obtains the pre- of initial eye fundus image identification model Accuracy rate is surveyed, determines whether predictablity rate is greater than default accuracy rate threshold value, in response to determining that predictablity rate is greater than default standard True rate threshold value, is determined as eyeground identification model for initial eyeground identification model;The second subelement of model training, is configured to respond to Accuracy rate threshold value is preset in determining that predictablity rate is not more than, adjusts the relevant parameter in initial eyeground identification model, Yi Jicong Again at least two training samples are chosen in training sample set, use initial eye fundus image identification model adjusted as just Beginning eyeground identification model, executes training step again.
In some optional implementations of the present embodiment, the second generation unit 405 comprises: an image thresholding subelement, configured to perform image thresholding processing on the model region-of-interest image according to a preset second threshold to obtain a thresholded model region-of-interest image; an image mapping subelement, configured to generate an initial eye fundus image including a predefined-type image region according to the pixel position correspondence between the thresholded model region-of-interest image and the eye fundus image; and a wavelet denoising subelement, configured to perform wavelet threshold denoising processing on the initial eye fundus image including the predefined-type image region according to a preset third threshold, to generate the eye fundus image including the predefined-type image region.
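A minimal sketch of the second generation unit's processing chain follows. The simple magnitude zeroing in the last step is only a stand-in for the wavelet threshold denoising described above, and both threshold values are illustrative, not the embodiment's preset thresholds:

```python
import numpy as np

def extract_region(cam, fundus, second_threshold=0.5, third_threshold=0.1):
    """Sketch of the second generation step: threshold the model
    region-of-interest image, map surviving pixel positions back onto the
    fundus image, then suppress small residual values (a stand-in for
    wavelet threshold denoising)."""
    # Image thresholding with the preset second threshold.
    mask = (cam >= second_threshold).astype(fundus.dtype)
    # Pixel-position mapping onto the fundus image.
    region = fundus * mask
    # Denoising stand-in governed by the preset third threshold.
    region[np.abs(region) < third_threshold] = 0
    return region
```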
In some optional implementations of the present embodiment, the apparatus 400 for generating an image may further comprise a control unit (not shown in the figures), configured to send the eye fundus image including the predefined-type image region to a target display device, and to control the target display device to display the eye fundus image including the predefined-type image region.
In the apparatus provided by the above embodiment of the present disclosure, the eye fundus image acquired by the acquiring unit 401 is preprocessed by the preprocessing unit 402 to obtain a preprocessed eye fundus image. Then, the recognition unit 403 inputs the preprocessed eye fundus image into a pre-trained eye fundus image recognition model to obtain eye fundus image type information. Afterwards, for an image whose image type is the predefined type, the first generation unit 404 applies Class Activation Mapping to generate a model region-of-interest image. Finally, the second generation unit 405 generates an eye fundus image including a predefined-type image region based on the model region-of-interest image and the eye fundus image. This embodiment reduces the computational load on the processor and saves the processor's computing resources.
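The Class Activation Mapping step applied by the first generation unit 404 may be sketched as follows, assuming feature maps of shape (channels, height, width) and fully connected weights of shape (classes, channels); the ReLU and normalization steps are conventional for CAM rather than mandated by the embodiment:

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Sketch of Class Activation Mapping: weight each feature map from the
    feature map extraction layer by the fully connected weight of the
    predicted class and sum, yielding the model region-of-interest image."""
    w = fc_weights[class_idx]                    # (channels,)
    cam = np.tensordot(w, feature_maps, axes=1)  # weighted sum -> (H, W)
    cam = np.maximum(cam, 0)                     # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                    # normalize to [0, 1]
    return cam
```

The resulting map can then be thresholded and mapped back onto the fundus image as described for the second generation unit 405.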
Referring now to Fig. 5, a schematic structural diagram of an electronic device 500 (for example, the server or terminal device shown in Fig. 1) suitable for implementing embodiments of the present disclosure is shown. Terminal devices in embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, laptop computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (for example, vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in Fig. 5 is merely an example and should not impose any limitation on the functions or scope of use of embodiments of the present disclosure.
As shown in Fig. 5, the electronic device 500 may include a processing device (e.g., a central processing unit, a graphics processor) 501, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage device 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data required for the operation of the electronic device 500. The processing device 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
In general, the following devices may be connected to the I/O interface 505: an input device 506 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, and the like; an output device 507 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, and the like; a storage device 508 including, for example, a magnetic tape, a hard disk, and the like; and a communication device 509. The communication device 509 may allow the electronic device 500 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 5 shows the electronic device 500 with various devices, it should be understood that not all of the devices shown are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in Fig. 5 may represent one device, or may represent multiple devices as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 509, or installed from the storage device 508, or installed from the ROM 502. When the computer program is executed by the processing device 501, the above-described functions defined in the methods of the embodiments of the present disclosure are performed. It should be noted that the computer-readable medium of the embodiments of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In embodiments of the present disclosure, the computer-readable storage medium may be any tangible medium containing or storing a program, which may be used by or in connection with an instruction execution system, apparatus, or device. In embodiments of the present disclosure, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium may send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: an electric wire, an optical cable, RF (radio frequency), or any suitable combination of the above.
The above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or may exist separately without being assembled into the electronic device. The above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to: acquire an eye fundus image; preprocess the eye fundus image to obtain a preprocessed eye fundus image; input the preprocessed eye fundus image into a pre-trained eye fundus image recognition model to obtain eye fundus image type information, the eye fundus image recognition model including a feature map extraction layer, a global average pooling layer, a global maximum pooling layer, and a fully connected layer; in response to the eye fundus image type indicated by the eye fundus image type information being the predefined type, generate a model region-of-interest image based on the feature maps extracted by the feature map extraction layer and Class Activation Mapping; and generate an eye fundus image including a predefined-type image region based on the model region-of-interest image and the eye fundus image.
Computer program code for performing the operations of embodiments of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should further be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. The described units may also be provided in a processor; for example, a processor may be described as including a first acquiring unit, a second acquiring unit, and a training unit. The names of these units do not, under certain circumstances, constitute a limitation on the units themselves; for example, the first acquiring unit may also be described as "a unit for acquiring a training sample set".
The above description is only a preferred embodiment of the present disclosure and an explanation of the applied technical principles. Those skilled in the art should understand that the scope of the invention involved in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the embodiments of the present disclosure.

Claims (12)

1. A method for generating an image, the method comprising:
acquiring an eye fundus image;
preprocessing the eye fundus image to obtain a preprocessed eye fundus image;
inputting the preprocessed eye fundus image into a pre-trained eye fundus image recognition model to obtain eye fundus image type information, the eye fundus image recognition model including a feature map extraction layer, a global average pooling layer, a global maximum pooling layer, and a fully connected layer;
in response to the eye fundus image type indicated by the eye fundus image type information being a predefined type, generating a model region-of-interest image based on feature maps extracted by the feature map extraction layer and Class Activation Mapping; and
generating an eye fundus image including a predefined-type image region based on the model region-of-interest image and the eye fundus image.
2. The method according to claim 1, wherein the preprocessing the eye fundus image comprises:
determining whether the eye fundus image is a green channel eye fundus image;
in response to determining that the eye fundus image is a green channel eye fundus image, performing at least one of the following operations on the eye fundus image: data normalization processing and size normalization processing;
in response to determining that the eye fundus image is not a green channel eye fundus image, performing green channel image extraction processing on the eye fundus image to obtain a green channel eye fundus image; and
performing at least one of the following operations on the green channel eye fundus image obtained from the eye fundus image: data normalization processing and size normalization processing.
3. The method according to claim 1, wherein the eye fundus image recognition model is trained and obtained in the following manner:
acquiring a training sample set, wherein a training sample includes a sample eye fundus image and sample type information identifying an image type of the sample eye fundus image;
selecting at least two training samples from the training sample set, and performing the following training steps: sequentially inputting each sample eye fundus image in the selected at least two training samples into an initial eye fundus image recognition model to obtain image type information corresponding to each sample eye fundus image in the at least two training samples; comparing the image type information corresponding to each sample eye fundus image in the at least two training samples with the sample type information corresponding to that sample eye fundus image, to obtain a prediction accuracy of the initial eye fundus image recognition model; determining whether the prediction accuracy is greater than a preset accuracy threshold; and, in response to determining that the prediction accuracy is greater than the preset accuracy threshold, determining the initial eye fundus image recognition model as the eye fundus image recognition model; and
in response to determining that the prediction accuracy is not greater than the preset accuracy threshold, adjusting relevant parameters of the initial eye fundus image recognition model, selecting at least two training samples from the training sample set again, and performing the training steps again using the adjusted initial eye fundus image recognition model as the initial eye fundus image recognition model.
4. The method according to claim 1, wherein the generating an eye fundus image including a predefined-type image region based on the model region-of-interest image and the eye fundus image comprises:
performing image thresholding processing on the model region-of-interest image according to a preset second threshold to obtain a thresholded model region-of-interest image;
generating an initial eye fundus image including a predefined-type image region according to a pixel position correspondence between the thresholded model region-of-interest image and the eye fundus image; and
performing wavelet threshold denoising processing on the initial eye fundus image including the predefined-type image region according to a preset third threshold, to generate the eye fundus image including the predefined-type image region.
5. The method according to claim 1, wherein the method further comprises:
sending the eye fundus image including the predefined-type image region to a target display device, and controlling the target display device to display the eye fundus image including the predefined-type image region.
6. An apparatus for generating an image, comprising:
an acquiring unit, configured to acquire an eye fundus image;
a preprocessing unit, configured to preprocess the eye fundus image to obtain a preprocessed eye fundus image;
a recognition unit, configured to input the preprocessed eye fundus image into a pre-trained eye fundus image recognition model to obtain eye fundus image type information, the eye fundus image recognition model including a feature map extraction layer, a global average pooling layer, a global maximum pooling layer, and a fully connected layer;
a first generation unit, configured to, in response to the eye fundus image type indicated by the eye fundus image type information being a predefined type, generate a model region-of-interest image based on feature maps extracted by the feature map extraction layer and Class Activation Mapping; and
a second generation unit, configured to generate an eye fundus image including a predefined-type image region based on the model region-of-interest image and the eye fundus image.
7. The apparatus according to claim 6, wherein the preprocessing unit comprises:
a determining subelement, configured to determine whether the eye fundus image is a green channel eye fundus image;
a first processing subelement, configured to, in response to determining that the eye fundus image is a green channel eye fundus image, perform at least one of the following operations on the eye fundus image: data normalization processing and size normalization processing;
an extracting subelement, configured to, in response to determining that the eye fundus image is not a green channel eye fundus image, perform green channel image extraction processing on the eye fundus image to obtain a green channel eye fundus image; and
a second processing subelement, configured to perform at least one of the following operations on the green channel eye fundus image obtained from the eye fundus image: data normalization processing and size normalization processing.
8. The apparatus according to claim 6, wherein the apparatus further comprises an eye fundus image recognition model training unit, the eye fundus image recognition model training unit comprising:
a training sample set acquiring subelement, configured to acquire a training sample set, wherein a training sample includes a sample eye fundus image and sample type information identifying an image type of the sample eye fundus image;
a first model training subelement, configured to select at least two training samples from the training sample set, and perform the following training steps: sequentially inputting each sample eye fundus image in the selected at least two training samples into an initial eye fundus image recognition model to obtain image type information corresponding to each sample eye fundus image in the at least two training samples; comparing the image type information corresponding to each sample eye fundus image in the at least two training samples with the sample type information corresponding to that sample eye fundus image, to obtain a prediction accuracy of the initial eye fundus image recognition model; determining whether the prediction accuracy is greater than a preset accuracy threshold; and, in response to determining that the prediction accuracy is greater than the preset accuracy threshold, determining the initial eye fundus image recognition model as the eye fundus image recognition model; and
a second model training subelement, configured to, in response to determining that the prediction accuracy is not greater than the preset accuracy threshold, adjust relevant parameters of the initial eye fundus image recognition model, select at least two training samples from the training sample set again, and perform the training steps again using the adjusted initial eye fundus image recognition model as the initial eye fundus image recognition model.
9. The apparatus according to claim 6, wherein the second generation unit comprises:
an image thresholding subelement, configured to perform image thresholding processing on the model region-of-interest image according to a preset second threshold to obtain a thresholded model region-of-interest image;
a position mapping subelement, configured to generate an initial eye fundus image including a predefined-type image region according to a pixel position correspondence between the thresholded model region-of-interest image and the eye fundus image; and
a wavelet denoising subelement, configured to perform wavelet threshold denoising processing on the initial eye fundus image including the predefined-type image region according to a preset third threshold, to generate the eye fundus image including the predefined-type image region.
10. The apparatus according to claim 6, wherein the apparatus further comprises:
a control unit, configured to send the eye fundus image including the predefined-type image region to a target display device, and to control the target display device to display the eye fundus image including the predefined-type image region.
11. An electronic device, comprising:
one or more processors; and
a storage device on which one or more programs are stored,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the method according to any one of claims 1 to 5.
12. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1 to 5.
CN201910290517.1A 2019-04-11 2019-04-11 Method and apparatus for generating image Pending CN110009626A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910290517.1A CN110009626A (en) 2019-04-11 2019-04-11 Method and apparatus for generating image


Publications (1)

Publication Number Publication Date
CN110009626A true CN110009626A (en) 2019-07-12

Family

ID=67171283

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910290517.1A Pending CN110009626A (en) 2019-04-11 2019-04-11 Method and apparatus for generating image

Country Status (1)

Country Link
CN (1) CN110009626A (en)


Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0625332A3 (en) * 1993-04-16 1995-08-09 Canon Kk Viewing apparatus.
CN203149266U (en) * 2013-03-29 2013-08-21 苏州微影光电科技有限公司 Alignment device for inner layer printed circuit board of PCB (Printed Circuit Board) exposure machine
CN105261015A (en) * 2015-09-29 2016-01-20 北京工业大学 Automatic eyeground image blood vessel segmentation method based on Gabor filters
CN105787927A (en) * 2016-02-06 2016-07-20 上海市第人民医院 Diffusate detection method of retina fundus image
CN106355599A (en) * 2016-08-30 2017-01-25 上海交通大学 Non-fluorescent eye fundus image based automatic segmentation method for retinal blood vessels
CN106408562A (en) * 2016-09-22 2017-02-15 华南理工大学 Fundus image retinal vessel segmentation method and system based on deep learning
CN107072765A (en) * 2014-06-17 2017-08-18 科尼尔赛德生物医学公司 Method and apparatus for ocular disorders after treatment
CN107437092A (en) * 2017-06-28 2017-12-05 苏州比格威医疗科技有限公司 The sorting algorithm of retina OCT image based on Three dimensional convolution neutral net
CN107679525A (en) * 2017-11-01 2018-02-09 腾讯科技(深圳)有限公司 Image classification method, device and computer-readable recording medium
CN108122236A (en) * 2017-12-18 2018-06-05 上海交通大学 Iterative eye fundus image blood vessel segmentation method based on distance modulated loss
CN108231194A (en) * 2018-04-04 2018-06-29 苏州医云健康管理有限公司 A kind of disease diagnosing system
CN108230341A (en) * 2018-03-07 2018-06-29 汕头大学 A kind of eye fundus image blood vessel segmentation method that nomography is scratched based on layering
CN108290933A (en) * 2015-06-18 2018-07-17 布罗德研究所有限公司 Reduce the CRISPR enzyme mutants of undershooting-effect
CN108346149A (en) * 2018-03-02 2018-07-31 北京郁金香伙伴科技有限公司 image detection, processing method, device and terminal
CN108470359A (en) * 2018-02-11 2018-08-31 艾视医疗科技成都有限公司 A kind of diabetic retinal eye fundus image lesion detection method
CN108564026A (en) * 2018-04-10 2018-09-21 复旦大学附属肿瘤医院 Network establishing method and system for Thyroid Neoplasms smear image classification
CN108615051A (en) * 2018-04-13 2018-10-02 博众精工科技股份有限公司 Diabetic retina image classification method based on deep learning and system
CN108961296A (en) * 2018-07-25 2018-12-07 腾讯科技(深圳)有限公司 Eye fundus image dividing method, device, storage medium and computer equipment
CN108986106A (en) * 2017-12-15 2018-12-11 浙江中医药大学 Retinal vessel automatic division method towards glaucoma clinical diagnosis


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111144408A (en) * 2019-12-24 2020-05-12 Oppo广东移动通信有限公司 Image recognition method, image recognition device, electronic equipment and storage medium
CN111523593A (en) * 2020-04-22 2020-08-11 北京百度网讯科技有限公司 Method and apparatus for analyzing medical images
CN111523593B (en) * 2020-04-22 2023-07-21 北京康夫子健康技术有限公司 Method and device for analyzing medical images
CN111861999A (en) * 2020-06-24 2020-10-30 北京百度网讯科技有限公司 Detection method and device for artery and vein cross compression sign, electronic equipment and readable storage medium
CN112200794A (en) * 2020-10-23 2021-01-08 苏州慧维智能医疗科技有限公司 Multi-model automatic sugar network lesion screening method based on convolutional neural network
CN112711999A (en) * 2020-12-24 2021-04-27 西人马帝言(北京)科技有限公司 Image recognition method, device, equipment and computer storage medium
CN113076379A (en) * 2021-04-27 2021-07-06 上海德衡数据科技有限公司 Method and system for distinguishing element number areas based on digital ICD

Similar Documents

Publication Publication Date Title
CN110009626A (en) Method and apparatus for generating image
CN111091576B (en) Image segmentation method, device, equipment and storage medium
KR102311654B1 (en) Smart skin disease discrimination platform system constituting API engine for discrimination of skin disease using artificial intelligence deep run based on skin image
WO2021179852A1 (en) Image detection method, model training method, apparatus, device, and storage medium
CN109635627A (en) Pictorial information extracting method, device, computer equipment and storage medium
CN108229419A Method and apparatus for clustering images
CN108776786A (en) Method and apparatus for generating user's truth identification model
CN108388889B (en) Method and device for analyzing face image
CN110414607A Classification method, device, equipment and medium for capsule endoscope images
WO2020107156A1 (en) Automated classification method and device for breast medical ultrasound images
CN110298850B (en) Segmentation method and device for fundus image
CN102063570A (en) Method and system for processing psychological illness information based on mobile phone
CN108509921A (en) Method and apparatus for generating information
CN110287926A (en) Infusion monitoring alarm method, user equipment, storage medium and device
Ali et al. Design of automated computer-aided classification of brain tumor using deep learning
Maria et al. A comparative study on prominent connectivity features for emotion recognition from EEG
CN110473176B (en) Image processing method and device, fundus image processing method and electronic equipment
CN115620384A (en) Model training method, fundus image prediction method and device
US20230036366A1 (en) Image attribute classification method, apparatus, electronic device, medium and program product
EP3561815A1 (en) A unified platform for domain adaptable human behaviour inference
CN117392119B (en) Tumor lesion area detection method and device based on position priori and feature perception
AU2012268887A1 (en) Saliency prediction method
CN109241930A (en) Method and apparatus for handling supercilium image
CN109961060A (en) Method and apparatus for generating crowd density information
CN116258942A (en) Picture identification method, device, equipment and medium based on neural network model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination