CN110084280B - Method and device for determining classification label - Google Patents
- Publication number
- CN110084280B CN110084280B CN201910252729.0A CN201910252729A CN110084280B CN 110084280 B CN110084280 B CN 110084280B CN 201910252729 A CN201910252729 A CN 201910252729A CN 110084280 B CN110084280 B CN 110084280B
- Authority
- CN
- China
- Prior art keywords
- picture
- pictures
- covering
- original
- domain
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
An embodiment of the invention provides a method and a device for determining classification labels, wherein the method comprises: acquiring sample pictures for training a preset model, the preset model being used to identify original pictures; and classifying all sample pictures, wherein the classification types comprise a class II out-of-domain classification label, determined based on original pictures without medical judgment value, original pictures with an attached covering, and original pictures containing digestive residue. The device performs the above method. Because all sample pictures used for training the preset model are classified, and the classification types include class II out-of-domain classification labels, the method and device of the embodiment can improve the rationality and accuracy of selecting classification labels for sample pictures.
Description
Technical Field
The embodiment of the invention relates to the technical field of picture processing, in particular to a method and a device for determining a classification label.
Background
Capsule endoscopy has the advantages of being painless and non-invasive and of capturing images with a large amount of information, and therefore has wide application value.
In the prior art, the original pictures taken by a capsule endoscope are identified and classified manually. To identify original pictures more accurately and efficiently, a model must be constructed; however, such a model usually needs to be trained before use, and there is no effective method for selecting training samples (which may be sample pictures) from the original pictures or for determining the classification labels of those sample pictures.
Therefore, how to avoid the above drawbacks and improve the rationality and accuracy of selecting classification labels for sample pictures has become an urgent problem to be solved.
Disclosure of Invention
To address the problems in the prior art, embodiments of the invention provide a method and a device for determining classification labels.
The embodiment of the invention provides a method for determining a classification label, which comprises the following steps:
acquiring a sample picture for training a preset model; the preset model is used for identifying an original picture;
classifying all sample pictures; wherein the classification types comprise a class II out-of-domain classification label; the class II out-of-domain classification label is determined based on original pictures without medical judgment value, original pictures with an attached covering, and original pictures containing digestive residue.
The embodiment of the invention provides a device for determining a classification label, which comprises:
the acquisition unit is used for acquiring a sample picture for training a preset model; the preset model is used for identifying an original picture;
the classification unit is used for classifying all sample pictures; wherein the classification types comprise a class II out-of-domain classification label; the class II out-of-domain classification label is determined based on original pictures without medical judgment value, original pictures with an attached covering, and original pictures containing digestive residue.
An embodiment of the present invention provides an electronic device, including: a processor, a memory, and a bus, wherein,
the processor and the memory are communicated with each other through the bus;
the memory stores program instructions executable by the processor, the processor invoking the program instructions to perform a method comprising:
acquiring a sample picture for training a preset model; the preset model is used for identifying an original picture;
classifying all sample pictures; wherein the classification types comprise a class II out-of-domain classification label; the class II out-of-domain classification label is determined based on original pictures without medical judgment value, original pictures with an attached covering, and original pictures containing digestive residue.
An embodiment of the present invention provides a non-transitory computer-readable storage medium, including:
the non-transitory computer readable storage medium stores computer instructions that cause the computer to perform a method comprising:
acquiring a sample picture for training a preset model; the preset model is used for identifying an original picture;
classifying all sample pictures; wherein the classification types comprise a class II out-of-domain classification label; the class II out-of-domain classification label is determined based on original pictures without medical judgment value, original pictures with an attached covering, and original pictures containing digestive residue.
According to the method and the device for determining classification labels provided by the embodiment of the invention, all sample pictures used for training the preset model are classified, and the classification types include class II out-of-domain classification labels, so the rationality and accuracy of selecting classification labels for sample pictures can be improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a flow chart of an embodiment of a method of determining a class label according to the present invention;
FIGS. 2(a) to 2(g) are all screenshots of homogeneous whole pictures taken in accordance with an embodiment of the present invention, the pictures being independent of each other;
fig. 3(a) to fig. 3(h) are all screenshots of waterline pictures taken by the embodiment of the invention;
FIGS. 4(a) -4 (g) are all screenshots of full-coverage pictures taken with an embodiment of the present invention;
FIGS. 5(a) -5 (h) are all screenshots of pictures of a half-cover taken in accordance with an embodiment of the present invention;
FIGS. 6(a) -6 (h) are all screenshots of bubble cap pictures taken in accordance with an embodiment of the present invention;
FIGS. 7(a) to 7(h) are all screenshots of pictures of the webbed covering taken according to the embodiment of the present invention;
fig. 8(a) to 8(h) are all screenshots of original pictures containing the digestion residues taken by the embodiment of the invention;
FIG. 9 is a schematic structural diagram of an embodiment of an apparatus for determining a category label according to the present invention;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a flowchart of an embodiment of a method for determining a classification tag according to the present invention, and as shown in fig. 1, the method for determining a classification tag according to the embodiment of the present invention includes the following steps:
s101: acquiring a sample picture for training a preset model; the preset model is used for identifying an original picture.
Specifically, the device acquires a sample picture for training a preset model; the preset model is used for identifying an original picture. It should be noted that: the original picture is shot by the capsule endoscope, and the working process of the capsule endoscope is explained as follows:
the capsule endoscope enters the digestive tract from the oral cavity and is naturally discharged from the anus.
The battery of the capsule endoscope has limited endurance, so its effective working range covers only part of the oral cavity, esophagus, stomach, duodenum, small intestine and large intestine.
Each pass of the capsule endoscope produces in-field examination pictures and out-of-field examination pictures.
An in-field examination picture is taken of a particular section of the digestive tract.
An out-of-field examination picture is any other picture taken by the capsule endoscope.
All pictures can be automatically identified without any human intervention (including image pre-processing).
After identification, the pictures taken by the capsule endoscope are divided into six major categories (125 subcategories) and automatically saved into 125 picture folders. The six major categories may be:
The first major category: class I out-of-domain classification labels (10 subcategories).
The second major category: class II out-of-domain classification labels (13 subcategories).
The third major category: first-target-picture classification labels based on local structural features (14 subcategories).
The fourth major category: first-target-picture classification labels for hole structures (8 subcategories).
The fifth major category: first-target-picture classification labels based on global structural features (24 subcategories).
The sixth major category: second-target-picture classification labels (56 subcategories).
It is possible to automatically recognize different parts of the digestive tract such as the oral cavity, the esophagus, the stomach, the duodenum, the small intestine, and the large intestine.
Each capsule endoscope may take 2000 to 3000 original pictures per examination, which is the size of the picture set acquired by the capsule endoscope.
The raw pictures taken by the capsule endoscope (in JPG format) can be exported from the hospital information system without any processing. That is, the sample pictures may be all the pictures corresponding to the six major categories (125 subcategories); the second major category is described in this embodiment of the invention.
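The division into six major categories saved across 125 subcategory folders can be sketched as follows. This is a minimal illustration, not the patent's implementation; the folder and category names are assumptions, with only the subcategory counts (10 + 13 + 14 + 8 + 24 + 56 = 125) taken from the text.

```python
import os
import shutil

# Illustrative names for the six major categories; the subcategory
# counts come from the text and sum to 125.
MAJOR_CATEGORIES = {
    "class_I_out_of_domain": 10,
    "class_II_out_of_domain": 13,
    "first_target_local_features": 14,
    "first_target_hole_structure": 8,
    "first_target_global_features": 24,
    "second_target": 56,
}

def build_folders(root):
    """Create one folder per subcategory, 125 in total."""
    paths = []
    for major, n_sub in MAJOR_CATEGORIES.items():
        for i in range(1, n_sub + 1):
            path = os.path.join(root, major, f"sub_{i:02d}")
            os.makedirs(path, exist_ok=True)
            paths.append(path)
    return paths

def save_picture(picture_path, major, sub_index, root):
    """Copy a recognized picture into its subcategory folder."""
    dest = os.path.join(root, major, f"sub_{sub_index:02d}")
    shutil.copy(picture_path, dest)
    return dest
```

In use, the recognizer would call `save_picture` once per original picture, with the major category and subcategory index it predicted.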
S102: classifying all sample pictures; wherein the classification types comprise a class II out-of-domain classification label; the class II out-of-domain classification label is determined based on original pictures without medical judgment value, original pictures with an attached covering, and original pictures containing digestive residue.
Specifically, the device classifies all sample pictures; wherein the classification types comprise a class II out-of-domain classification label; the class II out-of-domain classification label is determined based on original pictures without medical judgment value, original pictures with an attached covering, and original pictures containing digestive residue. The original pictures without medical judgment value may comprise:
homogeneous whole pictures and waterline pictures. In a homogeneous whole picture, the outer surface of the subject is flat and smooth, without texture, and uniform in color; a waterline picture contains an interface line between air and water. Figs. 2(a) to 2(g) are all screenshots of homogeneous whole pictures taken by the embodiment of the invention; the pictures in figs. 2(a) to 2(g) are independent of each other and each represents a form of the homogeneous whole picture. Such pictures are characterized as follows:
homogeneity map (beta _ whole): the outer surface of the shot is flat and smooth, has no texture and uniform color, and although the shooting quality is high, the medical judgment value is lost due to the fact that the content is too single (the position, the angle, the organ carrier, the anatomical feature and the like of the shot object cannot be judged). The number of pictures is about 5.8% which is very high. Such pictures, although not seemingly spam pictures (corresponding noisy pictures, i.e. pictures that cannot be used for picture recognition), are in fact not distinguished from "spam pictures" due to the loss of medical value. The subsequent treatment process can be completely omitted.
Figs. 3(a) to 3(h) are all screenshots of waterline pictures taken by the embodiment of the invention; the pictures in figs. 3(a) to 3(h) are independent of each other and each represents a form of the waterline picture. Such pictures are characterized as follows:
waterline picture (beta _ watermark): the picture has boundary lines of air and water, and the image structure is clear and simple. The part exposed in the air has the content similar to that of the homogenized image and has no medical value; the part submerged under the water surface is covered by the water film, valuable information is not exposed, and therefore the whole picture has no medical value and can be regarded as a 'garbage picture'. The picture count is about 3.8%.
According to the proportion of area occupied by the water surface versus the air surface, waterline pictures can be subdivided into the following subcategories: beta_waterline_bubble, beta_waterline_edge (water surface in a small part), beta_waterline_middle (water surface about half), and beta_waterline_major (water surface in the majority).
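The subdivision by water-surface area ratio can be sketched as a simple threshold rule. The subcategory identifiers below are normalized from the (garbled) names in the text, and the threshold values are illustrative assumptions, since the patent names the subcategories but not their exact boundaries:

```python
def waterline_subcategory(water_area, frame_area):
    """Assign a waterline subcategory from the water-surface area ratio.

    Thresholds are assumed for illustration only; the patent gives the
    subcategory names but not numeric boundaries.
    """
    ratio = water_area / frame_area
    if ratio < 0.10:
        return "beta_waterline_bubble"   # water surface barely present
    elif ratio < 0.35:
        return "beta_waterline_edge"     # water surface in a small part
    elif ratio < 0.65:
        return "beta_waterline_middle"   # water surface about half
    else:
        return "beta_waterline_major"    # water surface in the majority
```

For example, a frame whose lower 80% is water would fall into `beta_waterline_major` under these assumed thresholds.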
The original pictures with an attached covering may include:
full-covering pictures and half-covering pictures; the full-covering picture and the half-covering picture are distinguished according to the size of the area occupied by the covering.
Figs. 4(a) to 4(g) are all screenshots of full-covering pictures taken by the embodiment of the invention; the pictures in figs. 4(a) to 4(g) are independent of each other and each represents a form of the full-covering picture. Such pictures are characterized as follows:
full-cover picture (beta _ bubble _ full): 95% of the pictures are covered by some kind of covering (e.g. lumps, bubbles, mucous, etc.), resulting in a substantial loss of medical value of the pictures. The digestate is not a float of this type because it is voluminous and unique in character and therefore is classified independently. The number of pictures is about 1.7%. Such pictures are also "garbage pictures" due to loss of medical value. However, if the abnormal characteristic part is covered, some liquid and floating objects secreted by the abnormal characteristic part are mixed to form the abnormal characteristic accompanied by the covering, all the abnormal characteristics accompanied by the covering can be collected, and a preset covering accompanying characteristic set is formed, wherein the set comprises all the preset covering accompanying characteristics. If some covering features in the full-covering picture or the half-covering picture are consistent with the comparison result of the preset covering accompanying features, the positions covered by the coverings are indicated as abnormal feature positions, and the target pictures covered by the coverings are led into a comparison picture set (focus _ bubble) of the abnormal features accompanied by the coverings, wherein the focus _ bubble corresponds to the preset covering accompanying feature set; if the comparison results are not consistent, the positions covered by the coverings are non-abnormal characteristic positions, and the target pictures covered by the coverings are used as garbage pictures. It should be noted that: the anomalous features may include raised features, which may include swollen, particulate matter raised, and/or designated color features. The designated color characteristics may include red and white, and are not particularly limited. 
Abnormal features can serve as intermediate reference features in the diagnosis of certain diseases, but are not by themselves sufficient to diagnose a disease.
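The routing described above — matching a picture's covering features against the preset covering-accompanying feature set, then importing the picture into the focus_bubble comparison set or treating it as garbage — can be sketched as follows. The feature representation (plain strings) and the set contents are simplifying assumptions; only the set names and the routing rule come from the text:

```python
# Preset covering-accompanying feature set; the entries here are
# illustrative stand-ins for real feature descriptors.
preset_covering_accompanying_features = {
    "swollen", "particulate_raised", "red", "white",
}

focus_bubble = []   # comparison picture set: abnormal features with coverings
garbage = []        # pictures whose covered position is non-abnormal

def route_covered_picture(picture, covering_features):
    """If any covering feature matches a preset covering-accompanying
    feature, the covered position is an abnormal-feature position and the
    picture joins focus_bubble; otherwise it is a garbage picture."""
    if any(f in preset_covering_accompanying_features for f in covering_features):
        focus_bubble.append(picture)
        return "focus_bubble"
    garbage.append(picture)
    return "garbage"
```

A picture whose covering shows a matching "swollen" descriptor would thus land in `focus_bubble`, while a plain bubble covering would be routed to `garbage`.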
Figs. 5(a) to 5(h) are all screenshots of half-covering pictures taken by the embodiment of the invention; the pictures in figs. 5(a) to 5(h) are independent of each other and each represents a form of the half-covering picture. Such pictures are characterized as follows:
half-covered picture (beta _ bubble _ half): 20% -90% of the image frame is covered by some kind of covering (such as lump-like suspended matter, bubble group, mucous membrane), although the picture still has medical value, but it is difficult to use for formal citation of medical report. The number of the pictures is about 8%, and the number of the pictures is huge. When the pictures are collected, careful identification is needed, the uncovered part of the pictures cannot have any abnormal features, even slight abnormal features can lead the abnormal feature pictures to the class, and pressure is brought to subsequent network model recognition pictures. If the semi-covered picture does not contain any abnormal features, the semi-covered picture can be regarded as a 'garbage picture', and subsequent examination is not performed any more, and the method can be realized by the following method steps:
identifying features other than the covering in the half-covering picture;
comparing all such features with the features in a preset abnormal-feature comparison picture set, and treating the pictures whose features do not match as interference pictures; an interference picture is a picture that cannot be used for picture identification.
The method may further comprise: importing the pictures whose features match into the preset abnormal-feature comparison picture set. If too many pictures are imported and the scene becomes too complicated, a further subdivision may be considered, for example into cotton-wool-like coverings, lump-like coverings, and foam-filled coverings, which may be subdivided further according to the area proportion occupied by the covering.
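The two method steps above, plus the optional import step, can be sketched as one routine. The feature extraction and the contents of the preset abnormal-feature comparison picture set are stand-in assumptions, since the patent does not specify how features are represented:

```python
# Preset abnormal-feature comparison picture set, modeled here as the
# known abnormal features plus the pictures imported so far. The feature
# strings are illustrative assumptions.
comparison_features = {"swollen", "particulate_raised", "red", "white"}
comparison_pictures = []     # pictures imported on a consistent comparison
interference_pictures = []   # pictures unusable for picture identification

def screen_picture(picture, features_outside_covering):
    """Compare a picture's non-covering features with the preset
    abnormal-feature comparison set; import the picture on a match,
    otherwise mark it as an interference picture."""
    if any(f in comparison_features for f in features_outside_covering):
        comparison_pictures.append(picture)   # consistent comparison result
        return "imported"
    interference_pictures.append(picture)     # inconsistent: interference
    return "interference"
```

The same screening pattern applies to the bubble-covering, cobweb-covering, and digestive-residue pictures described below, with only the covering type changing.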
The original pictures with an attached covering may further include:
bubble-covering pictures and cobweb-covering pictures; in a bubble-covering picture, the outer surface of the subject is covered by a bubble and shows light reflection; in a cobweb-covering picture, the outer surface of the subject is covered by a cobweb-like covering.
Figs. 6(a) to 6(h) are all screenshots of bubble-covering pictures taken by the embodiment of the invention; the pictures in figs. 6(a) to 6(h) are independent of each other and each represents a form of the bubble-covering picture. Such pictures are characterized as follows:
large bubble reflect (corresponding to bubble cover picture): the outer surface of the shot is covered by a large bubble, and the shot content is only the reflection effect of the large bubble, so that the picture loses medical value. The number of pictures is about 0.7%. In most cases, the coverage of the bubbles is relatively large, and occupies an area exceeding 70% of the image width, but there are always a few pictures, and all the pictures which are not covered by the bubbles need to be ensured to have no abnormal features, so that the pictures guided by the category are all pictures which do not contain the abnormal features, and can be classified into 'garbage pictures' and do not participate in subsequent processing, and the implementation process for ensuring that no abnormal features appear can be realized in the following manner:
Identifying features other than the covering in the bubble-covering picture.
Comparing all such features with the features in the preset abnormal-feature comparison picture set, and treating the pictures whose features do not match as interference pictures; an interference picture is a picture that cannot be used for picture identification.
The method may further comprise: importing the pictures whose features match into the preset abnormal-feature comparison picture set.
Figs. 7(a) to 7(h) are all screenshots of cobweb-covering pictures taken by the embodiment of the invention; the pictures in figs. 7(a) to 7(h) are independent of each other and each represents a form of the cobweb-covering picture. Such pictures are characterized as follows:
cobweb cover (beta _ bubble _ spider): some thread-like floaters shield the lens, and shot contents are seriously interfered, so that the medical judgment value is greatly reduced. The number of pictures is about 1.8%. When the guide label of the part of the picture is selected, great care is needed, and for the content which is not covered by the mesh covering object, no abnormal feature is ensured to appear, so that the picture guided by the part of the label can be regarded as a 'garbage picture', and does not participate in the subsequent processing flow. The implementation process for ensuring that no abnormal feature occurs can be implemented as follows:
Identifying features other than the covering in the cobweb-covering picture.
Comparing all such features with the features in the preset abnormal-feature comparison picture set, and treating the pictures whose features do not match as interference pictures; an interference picture is a picture that cannot be used for picture identification.
The method may further comprise: importing the pictures whose features match into the preset abnormal-feature comparison picture set.
Figs. 8(a) to 8(h) are all screenshots of original pictures containing digestive residue taken by the embodiment of the invention; the pictures in figs. 8(a) to 8(h) are independent of each other and each represents a form of the original picture containing digestive residue. Such pictures are characterized as follows:
residue from digestion picture (beta _ bubble _ food): there is no clear food residue in the digestive tract, which is possible in the stomach and intestines, and the number of pictures is about 1%. In most cases, the coverage of the digestion residues is large, and the digestion residues occupy more than 50% of the area of the image, but as long as there is a place which is not covered, no abnormal feature is required to be ensured, so that the images guided by the category are all images without the abnormal feature, and can be classified into 'garbage images' and do not participate in subsequent processing.
Ensuring that no abnormal feature appears can be implemented as follows:
Identifying features other than the digestive residue in the original picture containing digestive residue.
Comparing all such features with the features in the preset abnormal-feature comparison picture set, and treating the pictures whose features do not match as interference pictures; an interference picture is a picture that cannot be used for picture identification.
The method may further comprise: importing the pictures whose features match into the preset abnormal-feature comparison picture set.
According to the method for determining classification labels provided by the embodiment of the invention, all sample pictures used for training the preset model are classified, and the classification types include class II out-of-domain classification labels, so the rationality and accuracy of selecting classification labels for sample pictures can be improved.
On the basis of the above embodiment, the original pictures without medical judgment value include:
homogeneous whole pictures and waterline pictures; in a homogeneous whole picture, the outer surface of the subject is flat and smooth, without texture, and uniform in color; a waterline picture contains an interface line between air and water. Reference may be made to the above embodiments; details are not repeated here.
The method for determining the classification label provided by the embodiment of the invention can further improve the rationality and accuracy of the selection of the classification label of the sample picture.
On the basis of the above embodiment, the original pictures with an attached covering include:
full-covering pictures and half-covering pictures; the full-covering picture and the half-covering picture are distinguished according to the size of the area occupied by the covering. Reference may be made to the above embodiments; details are not repeated here.
The method for determining the classification label provided by the embodiment of the invention can further improve the rationality and accuracy of the selection of the classification label of the sample picture.
On the basis of the above embodiment, the original pictures with an attached covering further include:
bubble-covering pictures and cobweb-covering pictures; in a bubble-covering picture, the outer surface of the subject is covered by a bubble and shows light reflection; in a cobweb-covering picture, the outer surface of the subject is covered by a cobweb-like covering. Reference may be made to the above embodiments; details are not repeated here.
The method for determining the classification label provided by the embodiment of the invention can further improve the rationality and accuracy of the selection of the classification label of the sample picture.
On the basis of the above embodiment, the method further includes:
identifying covering features in the full-covering picture and the half-covering picture.
Specifically, the device identifies covering features in the full-covering picture and the half-covering picture. Reference may be made to the above embodiments; details are not repeated here.
Comparing all covering features with the preset covering-accompanying features, and importing the target pictures whose covering features match into the comparison picture set of abnormal features accompanied by coverings; wherein the abnormal features comprise raised features and/or designated color features, and the target pictures comprise target full-covering pictures and target half-covering pictures.
Specifically, the device compares all covering features with the preset covering-accompanying features and imports the target pictures whose covering features match into the comparison picture set of abnormal features accompanied by coverings; wherein the abnormal features comprise raised features and/or designated color features, and the target pictures comprise target full-covering pictures and target half-covering pictures. Reference may be made to the above embodiments; details are not repeated here.
The method for determining the classification label provided by the embodiment of the invention can further improve the rationality and accuracy of the selection of the classification label of the sample picture.
On the basis of the above embodiment, the method further includes:
identifying features other than the covering and the digestive residue in the half-covering picture and the original picture containing digestive residue.
Specifically, the device identifies features other than the covering and the digestive residue in the half-covering picture and the original picture containing digestive residue. Reference may be made to the above embodiments; details are not repeated here.
Comparing all such features with the features in the preset abnormal-feature comparison picture set, and treating the pictures whose features do not match as interference pictures; an interference picture is a picture that cannot be used for picture identification.
Specifically, the device compares all such features with the features in the preset abnormal-feature comparison picture set and treats the pictures whose features do not match as interference pictures; an interference picture is a picture that cannot be used for picture identification. Reference may be made to the above embodiments; details are not repeated here.
The method for determining the classification label provided by the embodiment of the invention can further improve the rationality and accuracy of the selection of the classification label of the sample picture.
On the basis of the above embodiment, the method further includes:
identifying features other than the covering in the bubble-covering picture and the cobweb-covering picture.
Specifically, the device identifies features other than the covering in the bubble-covering picture and the cobweb-covering picture. Reference may be made to the above embodiments; details are not repeated here.
Comparing all such features with the features in the preset abnormal-feature comparison picture set, and treating the pictures whose features do not match as interference pictures; an interference picture is a picture that cannot be used for picture identification.
Specifically, the device compares all such features with the features in the preset abnormal-feature comparison picture set and treats the pictures whose features do not match as interference pictures; an interference picture is a picture that cannot be used for picture identification. Reference may be made to the above embodiments; details are not repeated here.
The method for determining the classification label provided by the embodiment of the invention can further improve the rationality and accuracy of the selection of the classification label of the sample picture.
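The interference-picture filtering step described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the patent does not specify a feature representation or matching rule, so the feature vectors, the cosine-similarity measure, the threshold, and names such as `filter_interference` and `abnormal_feature_set` are assumptions for illustration only.

```python
# Sketch: compare each picture's features (other than the covering) against a
# preset abnormal-feature comparison set; a picture whose features match no
# entry in the set (inconsistent comparison result) is an interference picture
# and is excluded from picture identification. All names and the similarity
# measure are illustrative assumptions.
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


def filter_interference(pictures, abnormal_feature_set, threshold=0.9):
    """Split pictures into (usable, interference).

    `pictures` maps a picture id to its feature vector; a picture is kept only
    if its features are consistent with at least one entry in the preset
    abnormal-feature comparison set.
    """
    usable, interference = [], []
    for pic_id, features in pictures.items():
        consistent = any(
            cosine_similarity(features, ref) >= threshold
            for ref in abnormal_feature_set
        )
        (usable if consistent else interference).append(pic_id)
    return usable, interference
```

A picture whose features align with a reference entry is retained as usable; all others are set aside as interference pictures rather than being fed into picture identification.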
Fig. 9 is a schematic structural diagram of an embodiment of an apparatus for determining a classification label according to the present invention. As shown in Fig. 9, an embodiment of the present invention provides an apparatus for determining a classification label, which includes an obtaining unit 901 and a classification unit 902, where:
the obtaining unit 901 is configured to obtain a sample picture for training a preset model; the preset model is used for identifying an original picture; the classification unit 902 is configured to classify all sample pictures; wherein the classification type comprises a class II out-of-domain classification label; the class II out-of-domain classification label is determined based on an original picture without medical judgment value, an original picture attached with a covering and an original picture containing a digestive residue object.
Specifically, the obtaining unit 901 is configured to obtain a sample picture for training a preset model; the preset model is used for identifying an original picture; the classification unit 902 is configured to classify all sample pictures; wherein the classification type comprises a class II out-of-domain classification label; the class II out-of-domain classification label is determined based on an original picture without medical judgment value, an original picture attached with a covering and an original picture containing a digestive residue object.
According to the device for determining the classification labels, provided by the embodiment of the invention, all sample pictures used for training the preset model are classified, and the classification types comprise class II out-of-domain classification labels, so that the rationality and the accuracy of the selection of the classification labels of the sample pictures can be improved.
The apparatus for determining a classification label provided in the embodiment of the present invention may be specifically configured to execute the processing flow of each of the above method embodiments; its functions are not described herein again, and reference may be made to the detailed description of the method embodiments.
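The two-unit apparatus of Fig. 9 can be sketched as follows. This is a hedged illustration only: the class and predicate names (`ObtainingUnit`, `ClassificationUnit`, `no_medical_value`, `has_covering`, `has_digestive_residue`) are assumptions, and the real units would wrap the model-based recognition described in the embodiments rather than simple dictionary flags.

```python
# Sketch of the apparatus: an obtaining unit collects sample pictures for
# training the preset model, and a classification unit assigns labels,
# including the class II out-of-domain label for pictures without medical
# judgment value, pictures with an attached covering, or pictures containing
# digestive residue. Predicate names are illustrative assumptions.
class ObtainingUnit:
    def obtain(self, source):
        """Collect sample pictures from a source (e.g. a capsule endoscope feed)."""
        return list(source)


class ClassificationUnit:
    CLASS_II_OUT_OF_DOMAIN = "class_II_out_of_domain"
    IN_DOMAIN = "in_domain"

    def classify(self, picture):
        """Assign a classification label to one sample picture."""
        if (picture.get("no_medical_value")
                or picture.get("has_covering")
                or picture.get("has_digestive_residue")):
            return self.CLASS_II_OUT_OF_DOMAIN
        return self.IN_DOMAIN


obtaining_unit = ObtainingUnit()
classification_unit = ClassificationUnit()
samples = obtaining_unit.obtain([{"has_covering": True}, {}])
labels = [classification_unit.classify(p) for p in samples]
```

The design mirrors the separation in Fig. 9: acquisition of training samples is decoupled from label assignment, so either unit can be replaced independently.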
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, and as shown in Fig. 10, the electronic device includes: a processor (processor) 1001, a memory (memory) 1002, and a bus 1003;
the processor 1001 and the memory 1002 communicate with each other through the bus 1003;
the processor 1001 is configured to call the program instructions in the memory 1002 to execute the methods provided by the above-mentioned method embodiments, for example, including: acquiring a sample picture for training a preset model; the preset model is used for identifying an original picture; classifying all sample pictures; wherein the classification type comprises a class II out-of-domain classification label; the class II out-of-domain classification label is determined based on an original picture without medical judgment value, an original picture attached with a covering and an original picture containing a digestive residue object.
The present embodiment discloses a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the method provided by the above-mentioned method embodiments, for example, comprising: acquiring a sample picture for training a preset model; the preset model is used for identifying an original picture; classifying all sample pictures; wherein the classification type comprises a class II out-of-domain classification label; the class II out-of-domain classification label is determined based on an original picture without medical judgment value, an original picture attached with a covering and an original picture containing a digestive residue object.
The present embodiment provides a non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the methods provided by the above method embodiments, for example, including: acquiring a sample picture for training a preset model; the preset model is used for identifying an original picture; classifying all sample pictures; wherein the classification type comprises a class II out-of-domain classification label; the class II out-of-domain classification label is determined based on an original picture without medical judgment value, an original picture attached with a covering and an original picture containing a digestive residue object.
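The construction of the coverage-accompanied abnormal-feature comparison picture set, recited in the embodiments above, can be sketched as follows. This is an assumption-laden illustration, not the patented method: the patent describes covering-accompanying features only as raised features and/or specified-color features, so representing them as simple string tags and the names `PRESET_ACCOMPANYING_FEATURES` and `build_abnormal_feature_set` are illustrative choices.

```python
# Sketch: covering features identified in full-coverage pictures are compared
# with preset covering-accompanying features; where the comparison is
# consistent, the covered part is treated as an abnormal-feature part and the
# corresponding target picture is imported into the comparison set. The tag
# representation of features is an assumption for illustration only.
PRESET_ACCOMPANYING_FEATURES = {"raised", "specified_color"}


def build_abnormal_feature_set(full_coverage_pictures):
    """Return the coverage-accompanied abnormal-feature comparison picture set.

    `full_coverage_pictures` maps a picture id to the set of covering features
    identified in that picture.
    """
    comparison_set = {}
    for pic_id, covering_features in full_coverage_pictures.items():
        matched = covering_features & PRESET_ACCOMPANYING_FEATURES
        if matched:  # consistent comparison: covered part is abnormal
            comparison_set[pic_id] = matched
    return comparison_set
```

Pictures whose covering features match none of the preset accompanying features contribute nothing to the set; only target full-coverage pictures with a consistent comparison are imported.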
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (7)
1. A method of determining a class label, comprising:
acquiring a sample picture for training a preset model; the preset model is used for identifying an original picture;
the sample picture and the original picture are shot by a capsule endoscope, the capsule endoscope automatically identifies different parts of the alimentary tract, the pictures shot by the capsule endoscope are divided into intra-domain pictures and out-of-domain pictures, the intra-domain picture is a shooting result of a certain section of the alimentary tract, and the out-of-domain picture is a picture shot by the capsule endoscope other than the intra-domain pictures;
the capsule endoscopy automatically identifies sample pictures and classifies the pictures; wherein the classification type comprises a class II out-of-domain classification label; the second class of out-of-domain classification labels are determined based on original pictures without medical judgment values, original pictures attached with coverings and original pictures containing digestive residues;
the original picture with the covering attached comprises:
full-coverage pictures and semi-coverage pictures; the full-coverage picture and the semi-coverage picture are distinguished according to the size of the area occupied by the covering;
the method further comprises the following steps:
identifying covering features in the full-coverage picture;
comparing all the covering features with preset covering-accompanying features, wherein if the comparison results are consistent, the parts covered by the covering are abnormal-feature parts, and importing the target pictures corresponding to the covering features whose comparison results are consistent into an abnormal-feature comparison picture set accompanied by the covering; the abnormal features comprise raised features and/or specified-color features, and the target picture is a target full-coverage picture;
identifying features other than the covering and the digestive residue in the semi-coverage picture and the original picture containing digestive residue;
comparing the features, other than the covering and the digestive residue, of the semi-coverage picture and the original picture containing digestive residue with the features in a preset abnormal-feature comparison picture set, and taking the pictures corresponding to features whose comparison results are inconsistent as interference pictures; the interference picture is a picture that cannot be used for picture identification.
2. The method according to claim 1, wherein the original picture without medical judgment value comprises:
a homogeneous whole picture and a waterline picture; the outer surface of the photographed subject in the homogeneous whole picture is flat and smooth, without texture, and uniform in color; the waterline picture contains an interface line between air and water.
3. The method of claim 1, wherein the original picture with the overlay attached further comprises:
bubble covering pictures and cobweb-shaped covering pictures; the outer surface of the photographed subject in the bubble covering picture is covered by bubbles and exhibits light reflection; the outer surface of the photographed subject in the cobweb-shaped covering picture is covered by a cobweb-shaped covering.
4. The method of claim 3, further comprising:
identifying features other than the covering in the bubble covering picture and the cobweb-shaped covering picture;
comparing the features, other than the covering, of the bubble covering picture and the cobweb-shaped covering picture with the features in a preset abnormal-feature comparison picture set, and taking the pictures corresponding to features whose comparison results are inconsistent as interference pictures; the interference picture is a picture that cannot be used for picture identification.
5. An apparatus for determining a classification label, comprising:
the acquisition unit is used for acquiring a sample picture for training a preset model; the preset model is used for identifying an original picture;
the sample picture and the original picture are shot by a capsule endoscope, the capsule endoscope can automatically identify different parts of the alimentary tract, the pictures shot by the capsule endoscope are divided into intra-domain pictures and out-of-domain pictures, the intra-domain picture is a shooting result of a certain section of the alimentary tract, and the out-of-domain picture is a picture shot by the capsule endoscope other than the intra-domain pictures;
the classification unit is used for classifying all sample pictures; wherein the classification type comprises a class II out-of-domain classification label; the class II out-of-domain classification label is determined based on original pictures without medical judgment value, original pictures with an attached covering, and original pictures containing digestive residue;
the original picture with the covering attached comprises:
full-coverage pictures and semi-coverage pictures; the full-coverage picture and the semi-coverage picture are distinguished according to the size of the area occupied by the covering;
the classification unit is further configured to:
identifying covering features in the full-coverage picture;
comparing all the covering features with preset covering-accompanying features, wherein if the comparison results are consistent, the parts covered by the covering are abnormal-feature parts, and importing the target pictures corresponding to the covering features whose comparison results are consistent into an abnormal-feature comparison picture set accompanied by the covering; the abnormal features comprise raised features and/or specified-color features, and the target picture is a target full-coverage picture;
identifying features other than the covering and the digestive residue in the semi-coverage picture and the original picture containing digestive residue;
comparing the features, other than the covering and the digestive residue, of the semi-coverage picture and the original picture containing digestive residue with the features in a preset abnormal-feature comparison picture set, and taking the pictures corresponding to features whose comparison results are inconsistent as interference pictures; the interference picture is a picture that cannot be used for picture identification.
6. An electronic device, comprising: a processor, a memory, and a bus, wherein,
the processor and the memory are communicated with each other through the bus;
the memory stores program instructions executable by the processor, the processor invoking the program instructions to perform the method of any of claims 1 to 4.
7. A non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the method of any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910252729.0A CN110084280B (en) | 2019-03-29 | 2019-03-29 | Method and device for determining classification label |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910252729.0A CN110084280B (en) | 2019-03-29 | 2019-03-29 | Method and device for determining classification label |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110084280A CN110084280A (en) | 2019-08-02 |
CN110084280B true CN110084280B (en) | 2021-08-31 |
Family
ID=67413994
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910252729.0A Active CN110084280B (en) | 2019-03-29 | 2019-03-29 | Method and device for determining classification label |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110084280B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104083143A (en) * | 2013-12-23 | 2014-10-08 | 北京华科创智健康科技股份有限公司 | Endoscopic OCT (optical coherence tomography) system capable of automatically identifying valid areas and invalid areas of image |
CN107292347A (en) * | 2017-07-06 | 2017-10-24 | 中冶华天南京电气工程技术有限公司 | A kind of capsule endoscope image-recognizing method |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102436665A (en) * | 2011-08-25 | 2012-05-02 | 清华大学 | Two-dimensional plane representation method for images of alimentary tract |
JP6150583B2 (en) * | 2013-03-27 | 2017-06-21 | オリンパス株式会社 | Image processing apparatus, endoscope apparatus, program, and operation method of image processing apparatus |
CN106204599B (en) * | 2016-07-14 | 2019-04-26 | 安翰科技(武汉)股份有限公司 | Automatic segmentation system and method for image in alimentary canal |
CN107145840B (en) * | 2017-04-18 | 2020-04-21 | 重庆金山医疗器械有限公司 | Endoscope expert diagnosis knowledge embedded computer aided WCE sequence image data identification method |
CN107240091B (en) * | 2017-04-21 | 2019-09-03 | 安翰科技(武汉)股份有限公司 | Capsule endoscope image preprocessing system and method |
CN109523522B (en) * | 2018-10-30 | 2023-05-09 | 腾讯医疗健康(深圳)有限公司 | Endoscopic image processing method, device, system and storage medium |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104083143A (en) * | 2013-12-23 | 2014-10-08 | 北京华科创智健康科技股份有限公司 | Endoscopic OCT (optical coherence tomography) system capable of automatically identifying valid areas and invalid areas of image |
CN107292347A (en) * | 2017-07-06 | 2017-10-24 | 中冶华天南京电气工程技术有限公司 | A kind of capsule endoscope image-recognizing method |
Also Published As
Publication number | Publication date |
---|---|
CN110084280A (en) | 2019-08-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109741346B (en) | Region-of-interest extraction method, device, equipment and storage medium | |
KR102028106B1 (en) | Systems, methods, and computer-readable media for identifying when a subject is likely to be affected by a medical condition | |
CN113743384B (en) | Stomach picture identification method and device | |
CN114612389B (en) | Fundus image quality evaluation method and device based on multi-source multi-scale feature fusion | |
CN110867233B (en) | System and method for generating electronic laryngoscope medical test reports | |
CN111462115B (en) | Medical image display method and device and computer equipment | |
CN112232977A (en) | Aquatic product cultivation evaluation method, terminal device and storage medium | |
WO2020078114A1 (en) | Animal labour identification method, apparatus, and device | |
CN113516639B (en) | Training method and device for oral cavity abnormality detection model based on panoramic X-ray film | |
CN111062929A (en) | Intelligent analysis and diagnosis system for livestock and poultry disease pictures through software design | |
CN110110750B (en) | Original picture classification method and device | |
CN110084280B (en) | Method and device for determining classification label | |
CN110097080B (en) | Construction method and device of classification label | |
McKenna et al. | Automated classification for visual-only postmortem inspection of porcine pathology | |
CN110084267A (en) | Portrait clustering method, device, electronic equipment and readable storage medium storing program for executing | |
CN117315787B (en) | Infant milk-spitting real-time identification method, device and equipment based on machine vision | |
CN110083727B (en) | Method and device for determining classification label | |
CN116596927B (en) | Endoscope video processing method, system and device | |
CN110097082B (en) | Splitting method and device of training set | |
CN110084277B (en) | Splitting method and device of training set | |
JP7463589B1 (en) | Fish school behavior analysis system, information processing device, fish school behavior analysis method and program | |
CN110084276B (en) | Splitting method and device of training set | |
CN110084278B (en) | Splitting method and device of training set | |
CN115797729A (en) | Model training method and device, and motion artifact identification and prompting method and device | |
CN110070113B (en) | Training method and device for training set |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |