CN112991238B - Food image segmentation method, system and medium based on texture and color mixing - Google Patents
Food image segmentation method, system and medium based on texture and color mixing
- Publication number: CN112991238B
- Application number: CN202110197874.0A
- Authority: CN (China)
- Prior art keywords: food, segmentation, texture, areas, color
- Legal status: Active
Classifications
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/40—Image enhancement or restoration using histogram techniques
- G06T7/11—Region-based segmentation
- G06T7/13—Edge detection
- G06T7/136—Segmentation; edge detection involving thresholding
- G06T7/194—Segmentation; edge detection involving foreground-background segmentation
- G06T7/90—Determination of colour characteristics
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention provides a food image segmentation method, system, medium and terminal based on texture and color mixing. The method comprises the following steps: acquiring a food image and the total number of food types contained in it; performing color space conversion on the food image; generating a texture feature map; obtaining a best-fit dinner-plate edge ellipse; obtaining a marking matrix; obtaining a segmented-region marking matrix; obtaining an updated segmented-region marking matrix; judging the type of each segmented region using the best-fit dinner-plate edge ellipse; and merging the segmented regions of type food until their number equals the total number, obtaining the final segmented-region marking matrix. The invention is suitable for segmenting images in which several foods share the same container, performs well when separating foods of similar color but different texture, can improve the accuracy of multi-target food image segmentation, and facilitates further food image recognition.
Description
Technical Field
The present invention relates to the field of physics, and in particular to image processing technology; more particularly, it relates to a food image segmentation method, system, medium and terminal based on texture and color mixing.
Background
Obesity is a chronic disease harmful to individuals' physical and mental health and to society. According to Western medical theory, obesity is caused by an imbalance between calorie intake and expenditure, and it also increases the risk of chronic metabolic diseases such as diabetes. Traditionally, nutritionists have tried to address these problems by analyzing and monitoring patients' daily eating habits, or by viewing images of the foods patients consume. Traditional dietary nutrition evaluation requires recording daily food consumption, manually identifying food types and estimating food quantity; these methods are inconvenient, particularly for elderly people, above all when it comes to accurately estimating food intake and the time spent recording food content. For this reason, an integrated system is needed that automatically performs all food analysis tasks, such as food image segmentation, food identification, volume estimation and nutrition analysis; this has become a focus of much recent research. In recent years, with the development of smart phones and image analysis technologies, a new generation of automatic dietary evaluation systems based on computer vision has become possible.
In an automatic dietary evaluation system, the types of food in a picture taken by the user, together with the weight and volume of the different foods, can be estimated, and, combined with the user's health condition, an evaluation can be given of whether the pictured food meets the user's healthy-diet requirements. In such a system, dividing the picture according to the regions occupied by different foods (food image segmentation for short) is the first and most critical technology. Although computer graphics and computer vision have advanced greatly in recent years, segmenting food images remains difficult: some food images may not show obvious shape contours, food edges or other distinctive features; when food is chopped up, different foods mix together; and segmentation becomes even harder when foods overlap and occlude each other. In addition, Chinese food, unlike Western food, may contain several components of different colors and textures in a single serving, and these are often cooked into different shapes, which makes food image segmentation more challenging.
Although some solutions have been applied to food image segmentation, most of them have difficulty accurately distinguishing the different food regions of an image at the pixel level.
Taking invention patents as an example: in the method for photographing and measuring dish quantity proposed by Xu Chunlei et al, the GrabCut algorithm is used to segment food images, but it requires the user to manually draw a box around the region where the food lies, so the segmentation cannot be fully automatic. Zheng Xin et al propose a food volume estimation method based on 3D model fitting, in which the K-Means method is used to segment the food image, but the method works well only when different foods are placed far apart in the image; if different foods are placed next to each other on the dinner plate, the algorithm cannot segment them effectively. In the intelligent checkout system and method based on Hough circles and color Euclidean distance proposed by Ni Jianjun et al, the image is segmented using Hough circles and the Canny algorithm; the output is a rectangular box marking the food region rather than a pixel-level segmentation, so the method only extracts the approximate position of the food in the image and cannot serve as a basis for further estimating food quantity. Similarly, Ji Gang et al propose a method for identifying and classifying dishes based on image analysis, which obtains only a bounding box (a rectangle containing the food) and not an accurate pixel-level segmentation. Chen Xiaopeng et al also propose a food image segmentation method, but it can only segment and extract the dinner-plate region of the image and cannot separate the various foods on the plate, which differs greatly from the problem solved by the present invention. Xu Bing et al propose segmenting snack-food images with SLIC super-pixel segmentation and Gabor texture filtering, but the method can only distinguish food from non-food regions and cannot segment multi-target food images. Wang Yanqing et al propose an intelligent meal assessment method based on cloud computing that mentions food image segmentation, but only a brief example is given and no concrete technical scheme for implementation is presented. J. Dehais et al propose estimating food volume and carbohydrate content, in which a food image segmentation algorithm uses mean-shift filtering and a region growing algorithm to segment food in the CIELAB color space; however, the algorithm considers only color information, is suitable only for foods with large color and texture differences, and has very limited ability to distinguish food regions of similar color in an image.
Taking academic research as an example: M. Anthimopoulos et al segment food in the CIELAB color space using mean-shift filtering and a region growing algorithm, but the algorithm considers only color information and applies only to foods with large color and texture differences. Su et al propose a segmentation algorithm based on super-pixel segmentation with the local-variation idea and region analysis, but the algorithm does not consider the texture information of the image. There is also a method of segmenting food images captured by a wearable camera, which, after detecting a dish, combines a saliency map with an active-contour method to segment the food in it; the algorithm achieves a good result on single-target food images but cannot segment multi-target food images. Wang et al propose a weakly supervised, graph-model-based food image segmentation method using Class Activation Mapping (CAM), but it likewise fails on multi-target food images; it uses two classifiers, a food-class classifier and a food/non-food classifier: with CAM, the food-class classifier highlights the food region excluding the dinner plate, while the food/non-food classifier highlights the food region including the dinner plate, yet it still cannot segment images containing multiple foods. A graph-partitioning method based on super-pixels and normalized cut (Ncut) has also been proposed to segment food images, but it only generates a global partition and cannot judge whether a generated region is a food region. Zhu et al achieve multi-target food image segmentation by assigning a class label to each pixel in the image, using the food classifier's result as segmentation feedback and estimating the number of segmented regions from the confidence score assigned to each segment; this approach outperforms the traditional Ncut approach, but has high time complexity and still does not effectively separate food regions from non-food regions. In another study, food images are processed by two independent methods, a saturation-based method and a color-texture-based method (JSEG), to find possible food regions; however, JSEG must repeatedly perform local J-value calculation and region growing at multiple scales during segmentation, which demands a large amount of computation and has high time complexity. Furthermore, a semantic segmentation method based on deep neural networks has been proposed, but it needs a large number of manually pre-segmented food images as a training set. In yet another study, the algorithm requires the user to draw a bounding box and select the appropriate food label from a list, after which the food region is segmented automatically with the GrabCut technique; this semi-automatic method proved effective on a large image dataset but still requires partial manual intervention by the user.
In summary, the existing food image segmentation techniques have many shortcomings, and these algorithms perform poorly when segmenting multi-target food images with diverse textures. How to devise an image segmentation algorithm that comprehensively considers texture and color to segment food images is therefore a technical problem to be solved by those skilled in the art.
Disclosure of Invention
In view of the above drawbacks of the prior art, an object of the present invention is to provide a food image segmentation method, system, medium and terminal based on texture-color mixing, which comprehensively consider the texture and color of an image and achieve accurate segmentation of food images.
To achieve the above and other related objects, the present invention provides a food image segmentation method based on texture-color mixing, comprising the following steps: acquiring a food image and the total number of food categories contained in it; performing color space conversion on the food image to convert it from the RGB color space to the LAB color space, obtaining a processed image; converting the food image from the RGB color space to a grayscale image and applying an LBP operator on it to generate a texture feature map; locating the dinner-plate edge in the food image with an ellipse detection algorithm to obtain a best-fit dinner-plate edge ellipse; performing super-pixel segmentation on the processed image to generate a preset number of super-pixel regions and obtain a marking matrix; aggregating the super-pixel regions based on a texture-color mixed region growing algorithm and the marking matrix to generate segmented regions and obtain a segmented-region marking matrix; merging each segmented region whose area is smaller than a first preset threshold with the adjacent segmented region nearest in texture and color, until the areas of all segmented regions are larger than a second preset threshold, obtaining an updated segmented-region marking matrix; judging the type of each segmented region using the best-fit dinner-plate edge ellipse, the types comprising food, dinner plate and background; and merging the segmented regions of type food until their number equals the total number, obtaining the final segmented-region marking matrix.
In one embodiment of the present invention, locating the dinner-plate edge in the food image using an ellipse detection algorithm and obtaining a best-fit dinner-plate edge ellipse comprises the following steps: applying an edge detection operator to generate an edge map of the food image; discarding edge segments whose area is smaller than a third preset threshold; and fitting the remaining edge segments in the edge map with a least squares method to obtain the best-fit dinner-plate edge ellipse.
In an embodiment of the present invention, aggregating the super-pixel regions based on a texture-color mixed region growing algorithm and the marking matrix to generate segmented regions, and obtaining the segmented-region marking matrix, comprises the following steps: randomly selecting from the processed image a super-pixel region that is not marked as belonging to any segmented region in the marking matrix; executing the region growing algorithm with that super-pixel region as the starting point and super-pixels as the basic units, generating a segmented region; the similarity distance between different segmented regions is calculated with a texture-color mixed metric.

The color similarity distance Dis_color between two regions on the processed image is computed from l, a, b, the values of all pixels of the segmented region on the L, A, B channels of the LAB color space, and from L, A, B, the corresponding channel values of all pixels of a super-pixel region adjacent to the segmented region.

The texture similarity distance Dis_texture between two regions on the processed image is computed from LBP feature histograms (hist) derived from the texture feature map, where lbp_1(x) and lbp_2(x) denote the normalized values at the x-th bin of the histograms of the segmented region and of an adjacent super-pixel region, respectively; x is an integer in 0-255 indexing the x-th bin of the LBP feature histogram.

The mixed similarity distance combining color and texture is calculated as:

Dis = p*Dis_color + q*Dis_texture;

wherein Dis represents the mixed similarity distance of combined color and texture, and p and q represent the weights of Dis_color and Dis_texture, respectively.
The above steps are repeated until every super-pixel region is marked as belonging to some segmented region in the marking matrix, and the segmented-region marking matrix is obtained.

In an embodiment of the present invention, merging each segmented region whose area is smaller than the first preset threshold with the adjacent segmented region nearest in texture and color, until the areas of all segmented regions are larger than the second preset threshold, and obtaining the updated segmented-region marking matrix comprises the following steps: arranging all segmented regions by area from small to large; and taking out, in order from small to large, the segmented regions whose area is smaller than the first preset threshold and merging each into its most similar adjacent segmented region according to the texture-color mixed metric, obtaining the updated segmented-region marking matrix.
In one embodiment of the present invention, judging the type of a segmented region using the best-fit dinner-plate edge ellipse comprises the following steps: for each segmented region, if more than a first preset proportion of its area lies outside the best-fit dinner-plate edge ellipse, the region is marked as background; for a segmented region not marked as background, if more than a second preset proportion of its boundary length is adjacent to regions marked as background, the region is marked as dinner plate; the segmented regions marked as neither background nor dinner plate are marked as food.

In one embodiment of the present invention, merging the segmented regions of type food comprises the following steps: selecting all segmented regions marked as food and arranging them by area from small to large; and selecting the segmented regions in order from small to large and merging each into its most similar adjacent segmented region according to the texture-color mixed metric.
The invention provides a food image segmentation system based on texture and color mixing, comprising: a first acquisition module, a second acquisition module, a feature map generation module, a third acquisition module, a fourth acquisition module, a fifth acquisition module, a sixth acquisition module, a type judgment module and a seventh acquisition module. The first acquisition module is used for acquiring a food image and the total number of food types contained in it; the second acquisition module is used for performing color space conversion on the food image to convert it from the RGB color space to the LAB color space and obtain a processed image; the feature map generation module is used for converting the food image from the RGB color space to a grayscale image and applying an LBP operator on it to generate a texture feature map; the third acquisition module is used for locating the dinner-plate edge in the food image with an ellipse detection algorithm and obtaining a best-fit dinner-plate edge ellipse; the fourth acquisition module is used for performing super-pixel segmentation on the processed image, generating a preset number of super-pixel regions and obtaining a marking matrix; the fifth acquisition module is used for aggregating the super-pixel regions based on a texture-color mixed region growing algorithm and the marking matrix, generating segmented regions and obtaining a segmented-region marking matrix; the sixth acquisition module is used for merging each segmented region whose area is smaller than a first preset threshold with its adjacent segmented region nearest in texture and color, until the areas of all segmented regions are larger than a second preset threshold, obtaining an updated segmented-region marking matrix; the type judgment module is used for judging the type of each segmented region using the best-fit dinner-plate edge ellipse, the types comprising food, dinner plate and background; and the seventh acquisition module is used for merging the segmented regions of type food until their number equals the total number, obtaining the final segmented-region marking matrix.
The present invention provides a storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described texture-color-mixing-based food image segmentation method.
The invention provides a terminal, comprising: a processor and a memory; the memory is used for storing a computer program; the processor is used for executing the computer program stored in the memory so as to enable the terminal to execute the food image segmentation method based on texture and color mixing.
As described above, the texture-color-mixing-based food image segmentation method, system, medium and terminal provided by the invention have the following beneficial effects:
(1) Compared with the prior art, the invention achieves accurate pixel-level segmentation of the food in a picture without requiring the image to be preprocessed manually; the segmentation result is little affected by the placement and colors of the different foods in the image, and the method runs efficiently.
(2) The invention performs segmentation of the target food image based on the mixed color and texture features of image super-pixels, so the method works well when segmenting foods of similar color but different texture, and also performs well when the color of the food is close to that of the dinner plate; meanwhile, it is suitable for segmenting images with multiple types of food in the same container and can greatly improve the accuracy of multi-target food image segmentation.
(3) The method has great application value and plays a key role in applications such as further identifying food types and the weights of different foods from pictures.
Drawings
Fig. 1 is a flowchart of a texture-color hybrid-based food image segmentation method according to an embodiment of the invention.
FIG. 2 is a flowchart of locating the dinner-plate edge in a food image using an ellipse detection algorithm to obtain a best-fit dinner-plate edge ellipse, in an embodiment of the invention.

FIG. 3 is a flowchart of aggregating the super-pixel regions based on the texture-color mixed region growing algorithm and the marking matrix to generate segmented regions and obtain the segmented-region marking matrix, in an embodiment of the invention.

FIG. 4 is a flowchart of merging each segmented region whose area is smaller than a first preset threshold with its adjacent segmented region nearest in texture and color, until the areas of all segmented regions are larger than a second preset threshold, and obtaining the updated segmented-region marking matrix, in an embodiment of the invention.

FIG. 5 is a flowchart of judging the type of the segmented regions using the best-fit dinner-plate edge ellipse, in an embodiment of the invention.

FIG. 6 is a flowchart of merging the segmented regions of type food, in an embodiment of the invention.
Fig. 7 is a schematic diagram of a practical application scenario of the texture-color hybrid-based food image segmentation method according to an embodiment of the invention.
Fig. 8 is a schematic diagram illustrating a texture-color hybrid-based food image segmentation system according to an embodiment of the invention.
Fig. 9 is a schematic structural diagram of a terminal according to an embodiment of the invention.
Description of the reference numerals
81-a first acquisition module; 82-a second acquisition module; 83-a feature map generation module; 84-a third acquisition module; 85-a fourth acquisition module; 86-a fifth acquisition module; 87-a sixth acquisition module; 88-a type judgment module; 89-a seventh acquisition module; 91-a processor; 92-a memory.
Detailed Description
The following specific examples illustrate the implementation of the present invention, and those skilled in the art can easily understand other advantages and effects of the invention from the contents disclosed in this specification. The invention may also be implemented or applied through other, different embodiments, and the details of this specification may be modified or changed in various ways without departing from the spirit of the invention. It should be noted that, where no conflict arises, the following embodiments and their features may be combined with each other.
It should be noted that the illustrations provided in the following embodiments merely illustrate the basic concept of the present invention by way of illustration, and only the components related to the present invention are shown in the illustrations, not according to the number, shape and size of the components in actual implementation, and the form, number and proportion of each component in actual implementation may be arbitrarily changed, and the layout of the components may be more complex.
Compared with the prior art, the food image segmentation method, system, medium and terminal based on texture-color mixing provided by the invention achieve accurate pixel-level segmentation of the food in an image without manual preprocessing; the segmentation result is little affected by the placement and colors of the different foods in the image, and the method runs efficiently. The invention performs segmentation of the target food image based on the mixed color and texture features of image super-pixels, so the method works well when segmenting foods of similar color but different texture, and also when the color of the food is close to that of the dinner plate; meanwhile, it is suitable for segmenting images with multiple types of food in the same container and can greatly improve the accuracy of multi-target food image segmentation. The method has great application value and plays a key role in applications such as further identifying food types and the weights of different foods from pictures.
As shown in fig. 1, in an embodiment, the texture-color hybrid-based food image segmentation method of the present invention is applied to a terminal; specifically, the food image segmentation method based on texture color mixing comprises the following steps:
step S1, acquiring food images and total number of food types contained in the food images.
Specifically, the user inputs a photographed food image into the terminal together with the total number of food types it contains.
And S2, performing color space conversion on the food image to convert the food image from RGB color space to LAB color space, and acquiring a processed image.
It should be noted that the RGB color space is converted into the LAB color space because, compared with RGB, LAB is perceptually uniform to the human eye, so distances in LAB better represent the color differences a human actually perceives; meanwhile, in LAB space the brightness information is carried independently by the L channel, unrelated to the other channels, which makes it easier to design a color distance formula that reduces the influence of brightness on the color distance.
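For illustration only, the conversion of step S2 can be sketched with OpenCV; the file name and the choice of OpenCV are assumptions, not part of the invention (note that OpenCV loads images in BGR channel order):

```python
import cv2

bgr = cv2.imread("food.jpg")                  # assumed input file; OpenCV loads BGR
lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)    # the "processed image" in LAB space
l_chan, a_chan, b_chan = cv2.split(lab)       # brightness is isolated in the L channel
```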
And step S3, converting the food image from an RGB color space to a gray image, and applying an LBP operator on the gray image to generate a texture feature map.
It should be noted that LBP (Local Binary Pattern) is an operator for describing the local texture features of an image. LBP is chosen as the texture feature operator because it is simple but very effective, has the significant advantages of rotation invariance and grayscale invariance, and is also strongly robust to illumination variation.
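A minimal sketch of step S3 using scikit-image follows; interpreting the 3×3 window of the later embodiment as P=8 sampling points at radius R=1 is an assumption:

```python
import cv2
from skimage.feature import local_binary_pattern

gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)  # bgr from the previous sketch
# A 3x3 window corresponds to P=8 sampling points at radius R=1; the
# 'default' method yields integer codes in [0, 255], matching the
# 256-bin histogram used in step S6.
texture_map = local_binary_pattern(gray, P=8, R=1, method="default")
```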
It should be noted that the execution order of steps S2 and S3 is not a limitation of the present invention: step S2 may be executed before step S3, step S3 before step S2, or both may be executed simultaneously.
And S4, positioning the edge of the dinner plate in the food image by using an ellipse detection algorithm, and obtaining a best-fit dinner plate edge ellipse.
In one embodiment, as shown in fig. 2, locating the dinner-plate edge in the food image using an ellipse detection algorithm and obtaining a best-fit dinner-plate edge ellipse comprises the following steps:
step S41, an edge detection operator is applied to generate an edge map of the food image.
Preferably, the edge detection operator adopts a Canny operator; specifically, the edge map is generated using the Canny operator.
And step S42, discarding the edge section with the area smaller than a third preset threshold value in the edge map.
The third preset threshold is a preset value, and the specific value is not a condition for limiting the present invention, and may be set according to different application scenarios.
And S43, fitting the rest edge segments in the edge map by using a least square method, and obtaining the best-fit dinner plate edge ellipse.
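Steps S41-S43 can be sketched as follows with OpenCV, whose fitEllipse performs a least-squares ellipse fit; filtering segments by point count and keeping the fit of the longest remaining segment as the plate edge are simplifying assumptions:

```python
import cv2
import numpy as np

def fit_plate_ellipse(bgr, min_points=100):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                      # S41: Canny edge map
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_NONE)
    # S42: discard small edge segments (measured here by point count, a
    # simplification; cv2.fitEllipse also needs at least 5 points)
    contours = [c for c in contours if len(c) >= max(5, min_points)]
    if not contours:
        return None
    # S43: least-squares ellipse fit of the longest remaining segment,
    # taken as the best-fit dinner-plate edge ellipse
    return cv2.fitEllipse(max(contours, key=len))
```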
And S5, performing super-pixel segmentation on the processed image to generate a preset number of super-pixel regions, and obtaining a marking matrix.
Specifically, the processed image obtained in step S2 is super-pixel segmented into a preset number m of super-pixel regions, and the marking matrix is obtained.
It should be noted that how many super pixel regions (i.e., what value is taken by m) the processed image is divided into is not a limitation of the present invention, and may be set depending on different application scenarios.
Preferably, m takes 800.
And S6, aggregating the super pixel areas based on a texture and color mixed area growing algorithm and the marking matrix to generate a segmentation area, and obtaining a segmentation area marking matrix.
As shown in fig. 3, in one embodiment, aggregating the super-pixel regions based on a texture-color hybrid region growing algorithm and the marking matrix to generate a segmented region, obtaining a segmented region marking matrix includes the steps of:
Step S61, randomly selecting a super-pixel area from the processed image, which is not marked as any divided area in the marking matrix.
And step S62, executing the region growing algorithm by taking the super pixel region as a starting point and taking the super pixel as a basic unit to generate the segmentation region.
It should be noted that the similarity distance between different segmented regions is calculated with a texture-color mixed metric. The color similarity distance Dis_color between two super-pixel regions on the processed image is computed from l, a, b, the values of all pixels of the segmented region on the L, A, B channels of the LAB color space, and from L, A, B, the corresponding channel values of all pixels of a super-pixel region adjacent to the segmented region.
It should be noted that the texture similarity distance Dis_texture between two super-pixel regions on the processed image is computed from LBP feature histograms (hist) derived from the texture feature map, where lbp_1(x) and lbp_2(x) denote the normalized values at the x-th bin of the histograms of the segmented region and of an adjacent super-pixel region, respectively; x is an integer in 0-255 indexing the x-th bin of the LBP feature histogram.

It should be noted that each pixel of the texture feature map takes an integer value in 0-255, and the LBP feature histogram of a region is a statistical histogram of the number of its pixels taking each value; each possible pixel value forms its own bin, so the histogram has 256 bins in total. Accordingly, hist_i(x) denotes the normalized number of pixels whose value is x in the texture feature map of segmented region i.
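For example, the normalized 256-bin histogram hist_i of a region can be computed as follows (a sketch; texture_map is the LBP map of step S3 and mask selects the pixels of region i):

```python
import numpy as np

def lbp_histogram(texture_map, mask):
    # normalized 256-bin histogram of the LBP codes inside a region mask
    codes = texture_map[mask].astype(np.int64)
    hist = np.bincount(codes, minlength=256).astype(np.float64)
    return hist / max(hist.sum(), 1.0)        # normalize so the bins sum to 1
```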
It should be noted that the mixed similarity distance combining color and texture is calculated as:

Dis = p*Dis_color + q*Dis_texture;

wherein Dis represents the mixed similarity distance of combined color and texture, and p and q represent the weights of Dis_color and Dis_texture, respectively.

The specific values of p and q are not a limitation of the invention and may be set for different application scenarios, as long as p + q = 1 is ensured.
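The color and texture distance formulas themselves are not reproduced in the text above, so the sketch below substitutes common choices: the Euclidean distance between mean LAB vectors for Dis_color and the L1 distance between normalized LBP histograms for Dis_texture. Both are assumptions; only the combination Dis = p*Dis_color + q*Dis_texture and the default weights p=0.7, q=0.3 (from the worked example below) follow the text:

```python
import numpy as np

def mixed_distance(lab, texture_map, mask1, mask2, p=0.7, q=0.3):
    # Dis_color: assumed Euclidean distance between the regions' mean
    # (L, A, B) vectors; the patent's exact formula is not reproduced here.
    m1 = lab[mask1].reshape(-1, 3).mean(axis=0)
    m2 = lab[mask2].reshape(-1, 3).mean(axis=0)
    dis_color = float(np.linalg.norm(m1 - m2))
    # Dis_texture: assumed L1 distance between the normalized 256-bin
    # LBP histograms (lbp_1 and lbp_2 in the text).
    h1 = lbp_histogram(texture_map, mask1)
    h2 = lbp_histogram(texture_map, mask2)
    dis_texture = float(np.abs(h1 - h2).sum())
    # From the patent: Dis = p*Dis_color + q*Dis_texture, with p + q = 1.
    return p * dis_color + q * dis_texture
```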
And step S63, repeating the steps until all the super pixel areas are marked as a segmentation area in the marking matrix, and obtaining a segmentation area marking matrix.
Specifically, step S61 and step S62 are repeated until all the super pixel areas are marked as a certain divided area in the above marking matrix, and the divided area marking matrix is obtained.
And S7, merging each segmented region whose area is smaller than a first preset threshold with the adjacent segmented region nearest in texture and color, until the areas of all segmented regions are larger than a second preset threshold, and obtaining the updated segmented-region marking matrix.
It should be noted that, the first preset threshold value and the second preset threshold value are both set in advance, and the specific values are not used as conditions for limiting the present invention, and can be set according to different application scenarios.
As shown in fig. 4, in an embodiment, merging each segmented region whose area is smaller than the first preset threshold with its adjacent segmented region nearest in texture and color, until the areas of all segmented regions are larger than the second preset threshold, and obtaining the updated segmented-region marking matrix comprises the following steps:

Step S71, arranging all segmented regions by area from small to large.

Step S72, taking out, in order from small to large, the segmented regions whose area is smaller than the first preset threshold and merging each into its most similar adjacent segmented region according to the texture-color mixed metric, obtaining the updated segmented-region marking matrix.

Specifically, the segmented regions whose area is smaller than the first preset threshold are taken out from small to large in turn and merged into the most similar adjacent segmented region according to the texture-color mixed metric described in step S62.
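Steps S71-S72 can be sketched as below, reusing the mixed_distance and lbp_histogram sketches above; finding neighbors through a one-pixel dilation ring is a simplification, and min_area (in pixels) stands for the first preset threshold:

```python
import cv2
import numpy as np

def merge_small_regions(labels, lab, texture_map, min_area):
    kernel = np.ones((3, 3), np.uint8)
    while True:
        ids, areas = np.unique(labels, return_counts=True)
        small = ids[areas < min_area]
        if small.size == 0:
            return labels                     # all regions are large enough
        r = small[np.argmin(areas[areas < min_area])]   # smallest first (S71)
        mask = labels == r
        ring = (cv2.dilate(mask.astype(np.uint8), kernel) > 0) & ~mask
        neighbors = [n for n in np.unique(labels[ring]) if n != r]
        if not neighbors:
            return labels
        # S72: merge into the most similar adjacent region (mixed metric)
        best = min(neighbors,
                   key=lambda n: mixed_distance(lab, texture_map, mask, labels == n))
        labels[mask] = best
```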
And S8, judging the type of the segmentation area by utilizing the best fit dinner plate edge ellipse.
Note that the types include: food, dinner plate and background.
In one embodiment, as shown in fig. 5, judging the type of a segmented region using the best-fit dinner-plate edge ellipse comprises the following steps:
Step S81, for each segmented region, if more than a first preset proportion of its area lies outside the best-fit dinner-plate edge ellipse, the region is marked as background.
It should be noted that, the first preset ratio is set in advance, and the specific value is not a condition for limiting the present invention, and may be set according to different application scenarios.
Preferably, the first preset proportion is set to 20%.
Step S82, for a segmented region not marked as background, if more than a second preset proportion of its boundary length is adjacent to segmented regions marked as background, the region is marked as dinner plate.
It should be noted that, the second preset ratio is set in advance, and the specific value is not a condition for limiting the present invention, and may be set according to different application scenarios.
Preferably, the second preset ratio is set to 1/3.
Step S83, the segmented regions marked as neither background nor dinner plate are marked as food.
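Steps S81-S83 can be sketched as follows; rasterizing the ellipse with OpenCV and measuring boundary adjacency through a one-pixel dilation ring are illustrative simplifications, with the 20% and 1/3 proportions taken from the preferred values above:

```python
import cv2
import numpy as np

def classify_regions(labels, ellipse, bg_ratio=0.20, plate_ratio=1 / 3):
    inside = np.zeros(labels.shape, np.uint8)
    cv2.ellipse(inside, ellipse, 1, -1)       # filled best-fit plate ellipse
    kernel = np.ones((3, 3), np.uint8)
    types = {}
    for r in np.unique(labels):               # S81: mostly outside -> background
        mask = labels == r
        if 1.0 - inside[mask].mean() > bg_ratio:
            types[r] = "background"
    for r in np.unique(labels):               # S82: bordering background -> plate
        if r in types:
            continue
        mask = (labels == r).astype(np.uint8)
        ring = cv2.dilate(mask, kernel) - mask        # one-pixel boundary ring
        ring_labels = labels[ring > 0]
        touching = sum(types.get(n) == "background" for n in ring_labels)
        if ring_labels.size and touching / ring_labels.size > plate_ratio:
            types[r] = "plate"
    for r in np.unique(labels):               # S83: the rest are food
        types.setdefault(r, "food")
    return types
```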
And step S9, merging the segmented areas with the type of food until the number of the segmented areas with the type of food is equal to the total number, and acquiring a final segmented area marking matrix.
As shown in fig. 6, in one embodiment, merging the partitioned areas of the type of food includes the steps of:
step S91, selecting all the divided areas marked as the food, and arranging the divided areas from small to large.
Step S92, the segmentation areas are selected successively from small to large according to the area, and are combined into the most similar segmentation areas adjacent to the segmentation areas according to the texture color mixing type measurement mode.
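Step S9 (S91-S92) can then be sketched as below, reusing the helpers above and stopping when the number of food regions equals the user-supplied total:

```python
import cv2
import numpy as np

def merge_food_regions(labels, types, lab, texture_map, total):
    kernel = np.ones((3, 3), np.uint8)
    food = [r for r in np.unique(labels) if types.get(r) == "food"]
    while len(food) > total:
        food.sort(key=lambda r: (labels == r).sum())      # S91: area ascending
        r = food.pop(0)                                   # smallest food region
        mask = labels == r
        ring = (cv2.dilate(mask.astype(np.uint8), kernel) > 0) & ~mask
        neighbors = [n for n in np.unique(labels[ring]) if n in food]
        if not neighbors:                 # no adjacent food region (sketch guard)
            break
        # S92: merge into the most similar adjacent food region (mixed metric)
        best = min(neighbors,
                   key=lambda n: mixed_distance(lab, texture_map, mask, labels == n))
        labels[mask] = best
    return labels
```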
It should be noted that, the execution sequence of the above steps is not a condition for limiting the present invention, and all changes (the working principle is the same) of the execution sequence based on the content of the above steps are within the protection scope of the present invention.
The food image segmentation method based on texture-color mixing according to the present invention is further explained by the following specific examples.
As shown in fig. 7, in an embodiment, the texture-color hybrid-based food image segmentation method is applied to a terminal for segmenting the food image (a) in fig. 7.
Specifically, the terminal first receives the food image (a) photographed by the user together with the total number of foods contained in it, which the user inputs manually; in this example the number is 2.
Then the input food image (a) is converted into the LAB color space and into grayscale, and an LBP operator with a window size of 3 is applied on the grayscale image to generate the LBP feature map. Subsequently, a least-squares best-fit ellipse model is applied to determine the edge of the dinner plate holding the food in image (a) (as shown in fig. 7 (b)).
Next, the food image converted into LAB space is super-pixel-segmented using the SLIC super-pixel segmentation algorithm, generating 800 super-pixel regions (as shown in (c) of fig. 7).
It should be noted that SLIC (Simple Linear Iterative Clustering) converts a color image into 5-dimensional feature vectors consisting of the CIELAB color values and the XY coordinates, and then locally clusters the image pixels under a distance metric defined on these vectors.
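For illustration, scikit-image provides a SLIC implementation; passing convert2lab=True reproduces the CIELAB+XY feature space described above, while the compactness value is an assumption:

```python
from skimage.segmentation import slic

# marking matrix: one integer label per pixel, ~800 superpixels (step S5);
# bgr[:, :, ::-1] converts the OpenCV BGR image to the RGB order slic expects
labels = slic(bgr[:, :, ::-1], n_segments=800, compactness=10,
              convert2lab=True, start_label=1)
```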
Then the region growing algorithm is applied with the super-pixel regions as the minimum growth units, merging super-pixel blocks into segmented regions; the threshold of the region growing algorithm is set to 7, and in the mixed similarity distance formula of step S62 the weight p is set to 0.7 and the weight q to 0.3. Subsequently, all segmented regions whose area is smaller than 1/50 of the whole image are merged with their neighboring regions according to step S7 (as shown in (d) of fig. 7).
Then the food regions among the segmented regions are identified according to step S8, and finally the regions identified as "food" are merged according to step S9 until 2 remain, giving the final segmentation result (as shown in (e), (f) and (g) of fig. 7).
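For readability, the concrete parameter values quoted in this embodiment can be collected in one place; the variable names are illustrative only:

```python
EXAMPLE_PARAMS = {
    "lbp_window": 3,          # 3x3 LBP window (P=8, R=1)
    "n_superpixels": 800,     # m in step S5
    "grow_threshold": 7,      # region-growing threshold on Dis
    "p_color": 0.7,           # weight of Dis_color
    "q_texture": 0.3,         # weight of Dis_texture (p + q = 1)
    "min_area_frac": 1 / 50,  # regions below 1/50 of the image merged (S7)
    "bg_ratio": 0.20,         # first preset proportion (S81)
    "plate_ratio": 1 / 3,     # second preset proportion (S82)
    "total_foods": 2,         # user-supplied food count for image (a)
}
```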
It should be noted that, the protection scope of the food image segmentation method based on texture-color mixing according to the present invention is not limited to the execution sequence of the steps listed in the embodiment, and all the schemes implemented by increasing or decreasing steps and replacing steps according to the prior art according to the principles of the present invention are included in the protection scope of the present invention.
As shown in fig. 8, in an embodiment, the texture-color hybrid-based food image segmentation system of the present invention includes a first acquisition module 81, a second acquisition module 82, a feature map generation module 83, a third acquisition module 84, a fourth acquisition module 85, a fifth acquisition module 86, a sixth acquisition module 87, a type judgment module 88, and a seventh acquisition module 89.
The first obtaining module 81 is configured to obtain a food image and a total number of food types contained in the food image.
The second obtaining module 82 is configured to perform color space conversion on the food image to convert the food image from RGB color space to LAB color space, and obtain a processed image.
The feature map generating module 83 is configured to convert the food image from the RGB color space to a gray scale image, and apply an LBP operator on the gray scale image to generate a texture feature map.
The third acquisition module 84 is configured to locate the dinner-plate edge in the food image using an ellipse detection algorithm and obtain a best-fit dinner-plate edge ellipse.
The fourth acquisition module 85 is configured to perform super-pixel segmentation on the processed image, generate a preset number of super-pixel regions, and obtain a marking matrix.
The fifth obtaining module 86 is configured to aggregate the super-pixel regions based on a texture-color hybrid region growing algorithm and the marking matrix to generate a segmented region, and obtain a segmented region marking matrix.
The sixth obtaining module 87 is configured to combine the segmented region with the area smaller than the first preset threshold with the segmented region with the closest texture color adjacent to the segmented region until the areas of all the segmented regions are larger than the second preset threshold, and obtain the updated segmented region marking matrix.
The type judgment module 88 is configured to judge the type of each segmented region using the best-fit dinner-plate edge ellipse.
Note that the types include: food, dinner plate and background.
The seventh obtaining module 89 is configured to combine the divided areas of the type of food until the number of the divided areas of the type of food is equal to the total number, and obtain a final divided area marking matrix.
It should be noted that the structures and principles of the first obtaining module 81, the second obtaining module 82, the feature map generating module 83, the third obtaining module 84, the fourth obtaining module 85, the fifth obtaining module 86, the sixth obtaining module 87, the type determining module 88, and the seventh obtaining module 89 are in one-to-one correspondence with the steps (step S1 to step S9) in the above-mentioned food image segmentation method based on texture-color mixing, and therefore will not be described here again.
It should be understood that the division of the above system into modules is merely a division of logical functions; in actual implementation they may be fully or partially integrated into one physical entity or physically separated. These modules may all be implemented as software invoked by a processing element, all as hardware, or partly as software invoked by a processing element and partly as hardware. For example, the x module may be a separately established processing element, may be integrated in a chip of the system, or may be stored in the memory of the system in the form of program code whose function is invoked and executed by a processing element of the system; the other modules are implemented similarly. In addition, all or some of the modules may be integrated together or implemented independently. The processing element described here may be an integrated circuit with signal processing capability. In implementation, each step of the above method, or each of the above modules, may be completed by an integrated logic circuit of hardware in the processor element or by instructions in the form of software.
For example, the modules above may be one or more integrated circuits configured to implement the methods above, such as: one or more application specific integrated circuits (Application Specific Integrated Circuit, abbreviated as ASIC), or one or more digital signal processors (Digital Signal Processor, abbreviated as DSP), or one or more field programmable gate arrays (Field Programmable Gate Array, abbreviated as FPGA), etc. For another example, when a module above is implemented in the form of a processing element scheduler code, the processing element may be a general-purpose processor, such as a central processing unit (Central Processing Unit, CPU) or other processor that may invoke the program code. For another example, the modules may be integrated together and implemented in the form of a System-On-a-Chip (SOC).
The storage medium of the present invention stores a computer program which, when executed by a processor, implements the above-described texture-color-mixing-based food image segmentation method. The storage medium includes: various media capable of storing program codes, such as ROM, RAM, magnetic disk, U-disk, memory card, or optical disk.
As shown in fig. 9, the terminal of the present invention includes a processor 91 and a memory 92.
The memory 92 is used for storing a computer program; preferably, the memory 92 includes: various media capable of storing program codes, such as ROM, RAM, magnetic disk, U-disk, memory card, or optical disk.
The processor 91 is connected to the memory 92 and is configured to execute a computer program stored in the memory 92, so that the terminal performs the above-mentioned food image segmentation method based on texture-color mixing.
Preferably, the processor 91 may be a general-purpose processor, including a central processing unit (Central Processing Unit, abbreviated as CPU), a network processor (Network Processor, abbreviated as NP), etc.; but also digital signal processors (Digital Signal Processor, DSP for short), application specific integrated circuits (Application Specific Integrated Circuit, ASIC for short), field programmable gate arrays (Field Programmable Gate Array, FPGA for short) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
It should be noted that, the texture-color hybrid-based food image segmentation system of the present invention may implement the texture-color hybrid-based food image segmentation method of the present invention, but the implementation device of the texture-color hybrid-based food image segmentation method of the present invention includes, but is not limited to, the structure of the texture-color hybrid-based food image segmentation system as exemplified in the present embodiment, and all the structural modifications and substitutions of the prior art according to the principles of the present invention are included in the protection scope of the present invention.
In summary, compared with the prior art, the food image segmentation method, system, medium and terminal based on texture-color mixing provided by the invention achieve accurate pixel-level segmentation of the food in an image without manual preprocessing; the segmentation result is little affected by the placement and colors of the different foods in the image, and the method runs efficiently. The invention performs segmentation of the target food image based on the mixed color and texture features of image super-pixels, so the method works well when segmenting foods of similar color but different texture, and also when the color of the food is close to that of the dinner plate; meanwhile, it is suitable for segmenting images with multiple types of food in the same container and can greatly improve the accuracy of multi-target food image segmentation. The method has great application value and plays a key role in applications such as further identifying food types and the weights of different foods from pictures. The invention thus effectively overcomes various defects in the prior art and has high industrial utilization value.
The above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Anyone familiar with this technology may modify or change the above embodiments without departing from the spirit and scope of the invention. Accordingly, all equivalent modifications and changes completed by persons of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the invention shall still be covered by the claims of the invention.
Claims (7)
1. A food image segmentation method based on texture and color mixing, characterized by comprising the following steps:
acquiring a food image and the total number of food categories contained in the food image, the total number being greater than one;
performing color space conversion on the food image to convert it from the RGB color space to the LAB color space, obtaining a processed image;
converting the food image from the RGB color space to a grayscale image, and applying an LBP operator to the grayscale image to generate a texture feature map;
locating the dinner plate edge in the food image using an ellipse detection algorithm to obtain a best-fit dinner plate edge ellipse;
performing super-pixel segmentation on the processed image to generate a preset number of super-pixel regions and obtain a marking matrix;
aggregating the super-pixel regions based on a texture-color-mixed region-growing algorithm and the marking matrix to generate segmented regions and obtain a segmented-region marking matrix, which comprises the following steps:
randomly selecting, from the processed image, a super-pixel region that is not yet marked as belonging to any segmented region in the marking matrix;
taking the selected super-pixel region as a starting point and using super-pixel regions as the basic units, executing the region-growing algorithm to generate a segmented region; the similarity distance between different regions is calculated using a mixed texture-color measure, as follows:
Dis = p * Dis_color + q * Dis_texture;
wherein Dis represents the similarity distance, and p and q represent the weights of the color similarity distance Dis_color and of the texture similarity distance Dis_texture, respectively;
the calculation formula of the color similarity distance is:
Dis_color = sqrt( (l − L)² + (a − A)² + (b − B)² );
wherein Dis_color represents the color similarity distance between two super-pixel regions on the processed image; l, a, b respectively represent the mean values of all pixel points of the segmented region on the L, A, B channels of the LAB color space; L, A, B respectively represent the mean values of all pixel points of a super-pixel region adjacent to the segmented region on the L, A, B channels of the LAB color space;
the calculation formula of the texture similarity distance is:
Dis_texture = sqrt( Σ_{x=0}^{255} ( lbp_1(x) − lbp_2(x) )² );
wherein Dis_texture represents the texture similarity distance between two super-pixel regions on the processed image; the LBP feature histograms are computed from the texture feature map; lbp_1(x) and lbp_2(x) respectively represent the value at the x-th bin of the normalized LBP feature histograms of the segmented region and of a super-pixel region adjacent to it; x is an integer ranging from 0 to 255 and indexes the x-th bin of the LBP feature histogram;
repeating the above steps until every super-pixel region is marked as belonging to a segmented region in the marking matrix, thereby obtaining the segmented-region marking matrix;
merging each segmented region whose area is smaller than a first preset threshold with its adjacent segmented region having the closest texture and color, until the areas of all segmented regions are larger than a second preset threshold, obtaining an updated segmented-region marking matrix;
judging the type of each segmented region using the best-fit dinner plate edge ellipse, the types including food, dinner plate, and background;
merging the segmented regions whose type is food until the number of such regions equals the total number, and obtaining a final segmented-region marking matrix, thereby realizing pixel-level segmentation of the food in the food image; merging the segmented regions whose type is food comprises the following steps:
selecting all segmented regions marked as food and arranging them by area from small to large;
taking out the segmented regions in order of increasing area and merging each into its most similar adjacent segmented region according to the mixed texture-color measure (illustrative sketches of these steps follow).
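The preprocessing steps of claim 1 (color space conversion, LBP texture feature map, and super-pixel marking matrix) can be sketched as follows. This is a minimal illustration rather than the patented implementation: the choice of SLIC for superpixels, the superpixel count, and the LBP parameters (P=8, R=1 with the default method, which yields the 256 codes matching the histogram bins in the claim) are assumptions.

```python
import cv2
from skimage.feature import local_binary_pattern
from skimage.segmentation import slic

def preprocess(food_bgr, n_superpixels=400):
    # RGB -> LAB conversion yields the "processed image"
    lab = cv2.cvtColor(food_bgr, cv2.COLOR_BGR2LAB)
    # RGB -> grayscale, then the LBP operator gives the texture feature map
    gray = cv2.cvtColor(food_bgr, cv2.COLOR_BGR2GRAY)
    texture_map = local_binary_pattern(gray, P=8, R=1, method="default")
    # super-pixel segmentation produces the marking (label) matrix
    rgb = cv2.cvtColor(food_bgr, cv2.COLOR_BGR2RGB)
    marking_matrix = slic(rgb, n_segments=n_superpixels,
                          compactness=10, start_label=0)
    return lab, texture_map, marking_matrix
```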
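The mixed similarity measure Dis = p*Dis_color + q*Dis_texture can likewise be sketched. Since the source text does not reproduce the two sub-formulas, the Euclidean forms below (distance between mean LAB values for color, L2 distance between normalized LBP histograms for texture) and the equal weights p = q = 0.5 are assumptions.

```python
import numpy as np

def lbp_histogram(texture_map, mask):
    # normalized 256-bin LBP feature histogram of one region
    hist, _ = np.histogram(texture_map[mask], bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)

def similarity_distance(lab, texture_map, mask_a, mask_b, p=0.5, q=0.5):
    # Dis = p*Dis_color + q*Dis_texture (claim 1); both terms below are
    # assumed Euclidean forms, not formulas reproduced from the source
    mean_a = lab[mask_a].mean(axis=0)
    mean_b = lab[mask_b].mean(axis=0)
    dis_color = np.linalg.norm(mean_a - mean_b)
    h_a = lbp_histogram(texture_map, mask_a)
    h_b = lbp_histogram(texture_map, mask_b)
    dis_texture = np.linalg.norm(h_a - h_b)
    return p * dis_color + q * dis_texture
```

In the region-growing loop of the claim, this distance would be evaluated between the growing segmented region and each adjacent super-pixel region, absorbing a neighbor whenever the distance stays below a growth threshold, until every super-pixel is assigned.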
2. The food image segmentation method based on texture and color mixing according to claim 1, wherein locating the dinner plate edge in the food image using an ellipse detection algorithm to obtain the best-fit dinner plate edge ellipse comprises the following steps:
applying an edge detection operator to generate an edge map of the food image;
discarding the edge segments in the edge map whose area is smaller than a third preset threshold;
fitting the remaining edge segments in the edge map using the least squares method to obtain the best-fit dinner plate edge ellipse (see the sketch below).
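A minimal sketch of claim 2 using OpenCV, assuming Canny as the edge detection operator and assuming concrete threshold values; cv2.fitEllipse performs a least-squares ellipse fit over the pooled edge points.

```python
import cv2
import numpy as np

def best_fit_plate_ellipse(food_bgr, min_segment_area=500.0):
    gray = cv2.cvtColor(food_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)  # edge map (Canny assumed)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_NONE)
    # discard small edge segments (the "third preset threshold")
    kept = [c for c in contours
            if len(c) >= 5 and cv2.contourArea(c) >= min_segment_area]
    points = np.vstack(kept)       # pool the remaining edge points
    # least-squares fit, returned as ((cx, cy), (width, height), angle)
    return cv2.fitEllipse(points)
```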
3. The food image segmentation method based on texture and color mixing according to claim 1, wherein merging the segmented regions whose area is smaller than the first preset threshold with their adjacent segmented regions of closest texture and color, until the areas of all segmented regions are larger than the second preset threshold, to obtain the updated segmented-region marking matrix, comprises the following steps:
arranging all segmented regions by area from small to large;
taking out, in order of increasing area, the segmented regions whose area is smaller than the first preset threshold, and merging each into its most similar adjacent segmented region according to the mixed texture-color measure, to obtain the updated segmented-region marking matrix (see the sketch below).
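A sketch of the merging loop in claim 3, simplified to a single area threshold (the claim distinguishes a first threshold for selecting regions and a second for the stopping condition). Here distance(a, b) stands for the mixed texture-color measure of claim 1, and finding adjacent regions via a one-pixel dilation is an assumed implementation detail.

```python
import cv2
import numpy as np

def adjacent_regions(seg, rid):
    # labels of segmented regions touching region `rid`
    mask = (seg == rid).astype(np.uint8)
    grown = cv2.dilate(mask, np.ones((3, 3), np.uint8)).astype(bool)
    return set(np.unique(seg[grown & (seg != rid)]))

def merge_small_regions(seg, min_area, distance):
    # repeatedly take the smallest under-threshold region and merge it
    # into its most similar neighbor under the mixed measure
    while True:
        ids, areas = np.unique(seg, return_counts=True)
        small = areas < min_area
        if not small.any():
            return seg
        rid = ids[small][np.argmin(areas[small])]
        neighbors = adjacent_regions(seg, rid)
        if not neighbors:
            return seg
        best = min(neighbors, key=lambda n: distance(rid, n))
        seg[seg == rid] = best
```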
4. The food image segmentation method based on texture and color mixing according to claim 1, wherein judging the type of each segmented region using the best-fit dinner plate edge ellipse comprises the following steps:
for each segmented region, if more than a first preset proportion of its area lies outside the best-fit dinner plate edge ellipse, marking the segmented region as background;
for a segmented region not marked as background, if more than a second preset proportion of its edge length is adjacent to segmented regions marked as background, marking the segmented region as dinner plate;
marking the segmented regions that are marked as neither background nor dinner plate as food (see the sketch below).
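A sketch of the type judgment in claim 4. The parameters prop_outside and prop_border stand in for the first and second preset proportions, and adjacency to background is approximated by counting neighboring regions rather than measuring shared edge length, which simplifies the claim.

```python
import numpy as np

def classify_regions(seg, ellipse, prop_outside=0.5, prop_border=0.3):
    (cx, cy), (w, h), angle = ellipse
    # per-pixel "inside the rotated ellipse" test
    yy, xx = np.mgrid[0:seg.shape[0], 0:seg.shape[1]]
    t = np.deg2rad(angle)
    xr = (xx - cx) * np.cos(t) + (yy - cy) * np.sin(t)
    yr = -(xx - cx) * np.sin(t) + (yy - cy) * np.cos(t)
    inside = (xr / (w / 2)) ** 2 + (yr / (h / 2)) ** 2 <= 1.0

    types = {}
    # background: too much of the region lies outside the ellipse
    for rid in np.unique(seg):
        mask = seg == rid
        if (mask & ~inside).sum() / mask.sum() > prop_outside:
            types[rid] = "background"
    # plate: mostly adjacent to background; everything else: food
    for rid in np.unique(seg):
        if rid in types:
            continue
        nbrs = adjacent_regions(seg, rid)  # helper from the claim-3 sketch
        bg = sum(1 for n in nbrs if types.get(n) == "background")
        types[rid] = "plate" if bg / max(len(nbrs), 1) > prop_border else "food"
    return types
```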
5. A multi-target food image segmentation system based on texture and color mixing, comprising: a first acquisition module, a second acquisition module, a feature map generation module, a third acquisition module, a fourth acquisition module, a fifth acquisition module, a sixth acquisition module, a type judgment module, and a seventh acquisition module;
the first acquisition module is used for acquiring a food image and the total number of food categories contained in the food image, the total number being greater than one;
the second acquisition module is used for performing color space conversion on the food image to convert it from the RGB color space to the LAB color space and obtain a processed image;
the feature map generation module is used for converting the food image from the RGB color space to a grayscale image and applying an LBP operator to the grayscale image to generate a texture feature map;
the third acquisition module is used for locating the dinner plate edge in the food image using an ellipse detection algorithm to obtain a best-fit dinner plate edge ellipse;
the fourth acquisition module is used for performing super-pixel segmentation on the processed image to generate a preset number of super-pixel regions and obtain a marking matrix;
the fifth acquisition module is used for aggregating the super-pixel regions based on a texture-color-mixed region-growing algorithm and the marking matrix to generate segmented regions and obtain a segmented-region marking matrix, which comprises the following steps:
randomly selecting, from the processed image, a super-pixel region that is not yet marked as belonging to any segmented region in the marking matrix;
taking the selected super-pixel region as a starting point and using super-pixel regions as the basic units, executing the region-growing algorithm to generate a segmented region; the similarity distance between different regions is calculated using a mixed texture-color measure, as follows:
Dis = p * Dis_color + q * Dis_texture;
wherein Dis represents the similarity distance, and p and q represent the weights of the color similarity distance Dis_color and of the texture similarity distance Dis_texture, respectively;
the calculation formula of the color similarity distance is:
Dis_color = sqrt( (l − L)² + (a − A)² + (b − B)² );
wherein Dis_color represents the color similarity distance between two super-pixel regions on the processed image; l, a, b respectively represent the mean values of all pixel points of the segmented region on the L, A, B channels of the LAB color space; L, A, B respectively represent the mean values of all pixel points of a super-pixel region adjacent to the segmented region on the L, A, B channels of the LAB color space;
the calculation formula of the texture similarity distance is:
Dis_texture = sqrt( Σ_{x=0}^{255} ( lbp_1(x) − lbp_2(x) )² );
wherein Dis_texture represents the texture similarity distance between two super-pixel regions on the processed image; the LBP feature histograms are computed from the texture feature map; lbp_1(x) and lbp_2(x) respectively represent the value at the x-th bin of the normalized LBP feature histograms of the segmented region and of a super-pixel region adjacent to it; x is an integer ranging from 0 to 255 and indexes the x-th bin of the LBP feature histogram;
repeating the above steps until every super-pixel region is marked as belonging to a segmented region in the marking matrix, thereby obtaining the segmented-region marking matrix;
the sixth acquisition module is used for merging each segmented region whose area is smaller than a first preset threshold with its adjacent segmented region having the closest texture and color, until the areas of all segmented regions are larger than a second preset threshold, to obtain an updated segmented-region marking matrix;
the type judgment module is used for judging the type of each segmented region using the best-fit dinner plate edge ellipse, the types including food, dinner plate, and background;
the seventh acquisition module is used for merging the segmented regions whose type is food until the number of such regions equals the total number, and obtaining a final segmented-region marking matrix to realize pixel-level segmentation of the food in the food image; merging the segmented regions whose type is food comprises the following steps:
selecting all segmented regions marked as food and arranging them by area from small to large;
taking out the segmented regions in order of increasing area and merging each into its most similar adjacent segmented region according to the mixed texture-color measure.
6. A storage medium having stored thereon a computer program which, when executed by a processor, implements the food image segmentation method based on texture and color mixing according to any one of claims 1 to 4.
7. A terminal, comprising: a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to execute the computer program stored in the memory, so that the terminal performs the food image segmentation method based on texture and color mixing according to any one of claims 1 to 4.
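Putting the sketches together, an end-to-end run over one photograph might look as follows. All function names are the assumed helpers defined in the sketches above, not an API disclosed by the patent, and the region-growing aggregation step of claim 1 is elided.

```python
import cv2

img = cv2.imread("tray.jpg")  # hypothetical input photo
lab, texture_map, seg = preprocess(img, n_superpixels=400)
ellipse = best_fit_plate_ellipse(img)

# mixed texture-color distance between two region labels of `seg`
dist = lambda a, b: similarity_distance(lab, texture_map, seg == a, seg == b)

seg = merge_small_regions(seg, min_area=800, distance=dist)
types = classify_regions(seg, ellipse)
food_ids = [rid for rid, t in types.items() if t == "food"]
```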
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110197874.0A CN112991238B (en) | 2021-02-22 | 2021-02-22 | Food image segmentation method, system and medium based on texture and color mixing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112991238A CN112991238A (en) | 2021-06-18 |
CN112991238B true CN112991238B (en) | 2023-08-22 |
Family
ID=76349521
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113536014B (en) * | 2021-06-30 | 2023-09-01 | 青岛中科英泰商用系统股份有限公司 | Dish information retrieval method integrating container information |
CN114092486A (en) * | 2021-10-11 | 2022-02-25 | 安庆师范大学 | Automatic segmentation method and device for image texture background |
CN114973237B (en) * | 2022-06-07 | 2023-01-10 | 慧之安信息技术股份有限公司 | Optical disk rate detection method based on image recognition |
CN114782711B (en) * | 2022-06-20 | 2022-09-16 | 四川航天职业技术学院(四川航天高级技工学校) | Intelligent risk detection method and system based on image recognition |
CN115439846B (en) * | 2022-08-09 | 2023-04-25 | 北京邮电大学 | Image segmentation method and device, electronic equipment and medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103530882A (en) * | 2013-10-17 | 2014-01-22 | 南京大学 | Improved image segmentation method based on picture and color texture features |
CN104123417A (en) * | 2014-07-22 | 2014-10-29 | 上海交通大学 | Image segmentation method based on cluster ensemble |
CN105118049A (en) * | 2015-07-22 | 2015-12-02 | 东南大学 | Image segmentation method based on super pixel clustering |
CN108280469A (en) * | 2018-01-16 | 2018-07-13 | 佛山市顺德区中山大学研究院 | A kind of supermarket's commodity image recognition methods based on rarefaction representation |
CN108357227A (en) * | 2017-01-26 | 2018-08-03 | 天津市阿波罗信息技术有限公司 | A kind of method of the direct coding of variable information |
CN110163239A (en) * | 2019-01-25 | 2019-08-23 | 太原理工大学 | A kind of Weakly supervised image, semantic dividing method based on super-pixel and condition random field |
CN110610505A (en) * | 2019-09-25 | 2019-12-24 | 中科新松有限公司 | Image segmentation method fusing depth and color information |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108986119B (en) * | 2018-07-25 | 2020-07-28 | 京东方科技集团股份有限公司 | Image segmentation method and device, computer equipment and readable storage medium |
Non-Patent Citations (1)
Title |
---|
Image-based segmentation and recognition of Chinese dishes; Su Guoyang; China Master's Theses Full-text Database, Engineering Science and Technology I; 2020-01-15; pp. 13-35 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112991238B (en) | Food image segmentation method, system and medium based on texture and color mixing | |
Li et al. | Robust saliency detection via regularized random walks ranking | |
Kim et al. | Salient region detection via high-dimensional color transform | |
US9014467B2 (en) | Image processing method and image processing device | |
Wang et al. | Deep networks for saliency detection via local estimation and global search | |
Kumar et al. | Leafsnap: A computer vision system for automatic plant species identification | |
EP3101594A1 (en) | Saliency information acquisition device and saliency information acquisition method | |
US10121245B2 (en) | Identification of inflammation in tissue images | |
US8666170B2 (en) | Computer system and method of matching for images and graphs | |
Do et al. | Early melanoma diagnosis with mobile imaging | |
WO2017181892A1 (en) | Foreground segmentation method and device | |
CN108710916B (en) | Picture classification method and device | |
CN108629783A (en) | Image partition method, system and medium based on the search of characteristics of image density peaks | |
Feng et al. | A color image segmentation method based on region salient color and fuzzy c-means algorithm | |
Gade et al. | Feature extraction using GLCM for dietary assessment application | |
CN108280469A (en) | A kind of supermarket's commodity image recognition methods based on rarefaction representation | |
Chen et al. | Visual saliency detection based on homology similarity and an experimental evaluation | |
CN104050674B (en) | Salient region detection method and device | |
Mairon et al. | A closer look at context: From coxels to the contextual emergence of object saliency | |
Wo et al. | A saliency detection model using aggregation degree of color and texture | |
JP2009123234A (en) | Object identification method, apparatus and program | |
CN115082551A (en) | Multi-target detection method based on unmanned aerial vehicle aerial video | |
Heravi et al. | Low price foot pressure distribution screening technique: optical podoscope with accurate foot print segmentation using hidden Markov random field model | |
Xu et al. | Low complexity image quality measures for dietary assessment using mobile devices | |
Saputra et al. | Integration GLCM and geometric feature extraction of region of interest for classifying tuna |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |