Disclosure of Invention
Embodiments of the invention provide a food doneness identification method, a food doneness identification device, a food doneness identification system, an electric appliance, a server and a medium.
The food doneness identification method comprises the following steps: acquiring an initial state image of food;
obtaining a target image according to the initial state image and a preset adversarial neural network model, wherein the target image represents an image of the food when it reaches the required doneness;
acquiring a current state image of the food;
and comparing the similarity of the target image and the current state image to obtain the food doneness.
In some embodiments, the obtaining a target image according to the initial state image and a preset adversarial neural network model includes:
acquiring characteristic information of the initial state image according to the initial state image and the preset adversarial neural network model;
and obtaining the target image according to the characteristic information and the preset adversarial neural network model.
In some embodiments, the preset adversarial neural network model comprises a plurality of sub-neural network models corresponding to different types of food,
and the obtaining of the target image according to the initial state image and the preset adversarial neural network model comprises:
searching the preset adversarial neural network model according to the initial state image to obtain a corresponding sub-neural network model;
inputting the initial state image into the sub-neural network model to obtain the characteristic information of the initial state image;
and obtaining the target image according to the characteristic information and the sub-neural network model.
In some embodiments, the comparing the similarity between the target image and the current state image to obtain the food doneness comprises:
acquiring pixel values corresponding to all pixel points of the target image and pixel values corresponding to all pixel points of the current state image;
calculating the difference between the pixel value of the pixel point of the target image and the pixel value of the pixel point of the current state image to obtain a pixel difference value;
determining that the two corresponding pixel points are similar under the condition that the pixel difference value is smaller than a preset threshold value;
determining that the two corresponding pixel points are not similar under the condition that the pixel difference value is larger than the preset threshold value;
and calculating the proportion of similar pixel points to all pixel points to obtain the similarity, wherein the similarity represents the food doneness.
In some embodiments, the comparing the similarity between the target image and the current state image to obtain the food doneness comprises:
determining that the current food doneness is not cooked under the condition that the similarity is smaller than a preset doneness threshold value;
and determining that the current food doneness is cooked under the condition that the similarity is greater than the preset doneness threshold value.
The food doneness identification device of the embodiment of the present invention includes: the image acquisition module is used for acquiring an initial state image of food and acquiring a current state image of the food;
the image processing module is used for obtaining a target image according to the initial state image and a preset adversarial neural network model, wherein the target image represents an image of the food when it reaches the required doneness;
and the image comparison module is used for comparing the similarity between the target image and the current state image to obtain the food doneness.
The cooking electric appliance of the embodiment of the present invention comprises the food doneness identification device.
The server of the embodiment of the present invention comprises a communication module, an image processing module and an image comparison module, wherein the communication module is used for receiving an initial state image of food and a current state image of the food uploaded by a cooking appliance;
the image processing module is used for obtaining a target image according to the initial state image and a preset adversarial neural network model, wherein the target image represents an image of the food when it reaches the required doneness;
and the image comparison module is used for comparing the similarity between the target image and the current state image to obtain the food doneness, so as to transmit the food doneness to the cooking appliance.
The food doneness identification system of the embodiment of the present invention comprises a cooking appliance and a server,
the cooking appliance comprises an image acquisition module and an image comparison module, wherein the image acquisition module is used for acquiring an initial state image of food and a current state image of the food so as to transmit the initial state image and the current state image to the server, and the image comparison module is used for comparing the similarity between the target image and the current state image to obtain the food doneness;
the server comprises an image processing module, and the image processing module is used for obtaining the target image according to the initial state image and a preset adversarial neural network model, so as to transmit the target image to the cooking appliance.
Embodiments of the present invention further provide a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the food doneness identification method of any of the above embodiments.
According to the food doneness identification method, device, system, electric appliance, server and medium described above, the doneness of the food is obtained by comparing the similarity of the target image and the current state image. This satisfies the doneness-judgment requirements of many types of food and generalizes well; in addition, the method occupies few computing resources, which reduces the cost of doneness judgment.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
In the description of the embodiments of the present invention, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implying any number of technical features indicated. Thus, features defined as "first", "second", may explicitly or implicitly include one or more of the described features. In the description of the embodiments of the present invention, "a plurality" means two or more.
Referring to fig. 1, a food doneness identification method according to an embodiment of the present invention includes:
S10, acquiring an initial state image of the food;
S20, obtaining a target image according to the initial state image and a preset adversarial neural network model, wherein the target image represents an image of the food when it reaches the required doneness;
S30, acquiring a current state image of the food;
and S40, comparing the similarity of the target image and the current state image to obtain the food doneness.
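As an illustrative sketch only (not part of the claimed method), steps S10-S40 can be outlined with placeholder callables standing in for the preset adversarial neural network model and the similarity comparison; the `darken` and `match` stand-ins below are toy assumptions, not the actual network or metric.

```python
def identify_doneness(initial_image, current_image, generate_target, compare):
    # S20: obtain the target image from the initial state image via the model
    target_image = generate_target(initial_image)
    # S40: similarity between target and current state image = food doneness
    return compare(target_image, current_image)

# Toy stand-ins: the "model" darkens the raw image toward a cooked look,
# and similarity is the fraction of exactly matching pixel values.
darken = lambda img: [max(p - 100, 0) for p in img]
match = lambda a, b: sum(x == y for x, y in zip(a, b)) / len(a)

raw = [200, 210, 220, 230]            # S10: hypothetical initial pixels
partly_cooked = [100, 115, 120, 135]  # S30: hypothetical current pixels
print(identify_doneness(raw, partly_cooked, darken, match))  # 0.5
```

Here two of the four pixels match the generated target exactly, so the doneness reads as 0.5.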
According to the food doneness identification method, the doneness of the food is obtained by comparing the similarity of the target image and the current state image. This satisfies the doneness-judgment requirements of many types of food and generalizes well; in addition, the method occupies few computing resources, which reduces the cost of doneness judgment.
Specifically, among existing methods for judging the doneness of food, some cooking devices detect the temperature of the food with an inserted probe, but the probe is troublesome to insert and clean and can generally only be used on thicker foods, so the degree of generalization is low. Other cooking devices judge doneness with an oxygen sensor: the change in doneness is inferred from the change in oxygen content in the oven cavity, and the food is considered cooked to an edible doneness when the oxygen content falls below a set threshold. However, such devices support doneness judgment for only a few types of food, typically limited to certain baked goods or meat, and the user must input the food to be cooked before the oxygen-sensor threshold of the corresponding recipe can be initialized, so the degree of intelligent generalization is not high and the user experience is poor.
According to the food doneness identification method of the embodiment of the invention, the target image corresponding to the initial state image is generated through the preset adversarial neural network model, so that the current state of the food can be known from the similarity once the current state image of the food is obtained. In this way, the doneness of many types of food can be judged; in addition, judging doneness by similarity occupies few computing resources, which reduces the cost of doneness judgment.
Specifically, the initial state image of the food is an image including an initial state of the food, the image may be an image including elements of the food, the baking tray, the environment, and the like, the image may be an image including all the food in the cooking appliance 200, and the image may also be an image including some food in the cooking appliance 200, which is not limited herein.
It should be noted that the original image acquired by the image acquisition device may be processed to remove elements such as the baking tray and the cavity, so that the pixels representing those elements are set to black and only the pixels representing the food are retained; the processed image is then used as the initial state image of the food. The current state image of the food should take the same form as the initial state image: if the initial state image includes elements such as the food, the baking tray and the environment, the current state image should include the same elements, and both images should be collected by the image acquisition device from the same position.
It is understood that the higher the similarity between the target image and the current state image, the closer the current state of the food is to the required doneness. It should be noted that the interval between acquisitions of the current state image should not be too long, so as to avoid the food becoming overcooked because the current state image is only captured after the food has already reached the required doneness.
It should be noted that the initial state image includes, but is not limited to, a state image of raw food, that is, the initial state image may be an image acquired when food is just put into the cooking appliance 200, or the initial state image may be an image acquired after food is put into the cooking appliance 200 for a certain period of time, and is not limited in this respect.
The preset adversarial neural network model may be an adversarial neural network model based on ACGAN (Auxiliary Classifier Generative Adversarial Network), CGAN (Conditional Generative Adversarial Network), or the like.
In some embodiments, referring to fig. 2, step S20 includes:
step S21, acquiring characteristic information of the initial state image according to the initial state image and a preset adversarial neural network model;
and step S22, obtaining a target image according to the characteristic information and the preset adversarial neural network model.
Therefore, the target image can be obtained, and a basis is provided for judging the food doneness.
Specifically, the characteristic information includes information representing characteristics such as the color, shape, size and texture of the food. In this embodiment, the preset adversarial neural network model can be used to generate target images for many types of food, such as steak, cake and barbecue. The initial state image can be input into the preset adversarial neural network model, which extracts characteristic information from the initial state image through convolutional encoding to obtain an intermediate image carrying the characteristic information; the model then maps the intermediate image through a deconvolution generator to produce the target image. Owing to limited computing power, the intermediate image is typically smaller than the target image and the initial state image; for example, when the target image and the initial state image are both 1080p, the intermediate image may be a 480p image.
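The encode/decode flow just described can be caricatured as follows; real convolution and deconvolution layers of an ACGAN or CGAN are replaced here by naive 2x downsampling and upsampling, purely to illustrate that the intermediate feature image is smaller than the input and output (the 480p versus 1080p relationship in the text).

```python
def encode(image):
    # stand-in for convolutional encoding: keep every other "pixel",
    # yielding a smaller intermediate image carrying the features
    return image[::2]

def decode(features):
    # stand-in for the deconvolution generator: expand each feature
    # back to two pixels, restoring the full target-image size
    out = []
    for f in features:
        out.extend([f, f])
    return out

initial = [10, 20, 30, 40, 50, 60, 70, 80]  # 8-"pixel" initial state image
intermediate = encode(initial)              # 4 "pixels" (the 480p analogue)
target = decode(intermediate)               # back to 8 "pixels"
print(len(initial), len(intermediate), len(target))  # 8 4 8
```

The only point this sketch preserves is the shape relationship: the feature representation is compressed, then expanded back to full resolution by the generator.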
In some embodiments, referring to fig. 3, the preset adversarial neural network model includes a plurality of sub-neural network models corresponding to different types of food, and step S20 includes:
step S23, searching the preset adversarial neural network model according to the initial state image to obtain a corresponding sub-neural network model;
step S24, inputting the initial state image into the sub-neural network model to obtain the characteristic information of the initial state image;
and step S25, obtaining a target image according to the characteristic information and the sub-neural network model.
Therefore, different sub-neural network models are selected according to different food types, and the judgment accuracy is improved.
Specifically, the preset adversarial neural network model may include a plurality of sub-neural network models, each corresponding to a different type of food such as steak, cake or barbecue. The type of food in the initial state image can be identified by an image identification module, the corresponding sub-neural network model is then searched for in the preset adversarial neural network model according to the identification result, and the target image is generated by the sub-neural network model corresponding to that type of food, making the generated target image more accurate. Alternatively, the preset adversarial neural network model may first extract characteristic information from the initial state image through convolutional encoding to form an intermediate image carrying the characteristic information, the corresponding sub-neural network model is searched for according to that intermediate image, and the sub-neural network model then maps the intermediate image through a deconvolution generator to generate the target image.
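A hedged sketch of the sub-model selection above: one generator per food type is kept in a lookup table, a classifier picks the type from the initial image, and the matching sub-model produces the target image. The classifier, the pixel offsets and the brightness rule are all toy placeholders, not the actual sub-neural network models.

```python
# Toy per-food-type "generators" standing in for the sub-neural
# network models; the pixel offsets are illustrative assumptions only.
SUB_MODELS = {
    "steak":    lambda img: [max(p - 80, 0) for p in img],   # browned meat
    "cake":     lambda img: [max(p - 30, 0) for p in img],   # light crust
    "barbecue": lambda img: [max(p - 120, 0) for p in img],  # heavy char
}

def classify_food(image):
    # placeholder for the image identification module:
    # brighter images are treated as "cake", darker ones as "steak"
    return "cake" if sum(image) / len(image) > 150 else "steak"

def generate_target(initial_image):
    food_type = classify_food(initial_image)  # identify the food type
    sub_model = SUB_MODELS[food_type]         # step S23: look up sub-model
    return sub_model(initial_image)           # steps S24-S25: generate

print(generate_target([200, 180, 190]))  # classified "cake" -> [170, 150, 160]
```

Routing through a per-type sub-model is what lets a single entry point serve steak, cake and barbecue with type-appropriate target images.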
In some embodiments, referring to fig. 4, step S40 includes:
step S41, acquiring pixel values corresponding to each pixel point of the target image and pixel values corresponding to each pixel point of the current state image;
step S42, calculating the difference between the pixel value of the pixel point of the target image and the pixel value of the pixel point of the current state image to obtain a pixel difference value;
step S43, determining that two corresponding pixel points are similar under the condition that the pixel difference value is smaller than a preset threshold value;
step S44, determining that the two corresponding pixel points are not similar when the pixel difference value is larger than a preset threshold value;
and step S45, calculating the proportion of the similar pixel points to all the pixel points to obtain the similarity, wherein the similarity represents the food doneness.
Therefore, the similarity between the target image and the current state image can be obtained, and a basis is provided for judging the food doneness.
Specifically, the pixel value may be the RGB (red, green, blue) color channels of the pixel point. The pixel values may be represented in binary code, and likewise the pixel difference values may be represented in binary code.
The pixel difference value may be the difference between the pixel values of two corresponding pixel points. Specifically, for a pixel point at a given position of the target image and the pixel point at the corresponding position of the current state image, the difference between their R color channels, the difference between their G color channels and the difference between their B color channels together form the pixel difference value of that pixel point. Repeating this for each position yields the pixel difference value between every pixel point of the target image and the corresponding pixel point of the current state image.
It should be noted that the target image and the current state image are shot by the image acquisition device at the same position, so that a pixel point at a certain position of the target image corresponds to a pixel point at the same position of the current state image.
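A minimal sketch of steps S41-S45 on RGB pixel values follows; the preset threshold of 30 is an assumed example value (the text does not fix one), and a pixel pair is treated as similar only when every channel difference is below it.

```python
def pixel_similarity(target, current, threshold=30):
    """Fraction of positionally corresponding pixels whose R, G and B
    differences are all below the preset threshold (steps S41-S45)."""
    similar = 0
    for (r1, g1, b1), (r2, g2, b2) in zip(target, current):
        # step S42: per-channel pixel difference;
        # steps S43/S44: similar only if every channel is under threshold
        if (abs(r1 - r2) < threshold and abs(g1 - g2) < threshold
                and abs(b1 - b2) < threshold):
            similar += 1
    # step S45: proportion of similar pixels = similarity = doneness
    return similar / len(target)

target = [(120, 80, 60), (130, 90, 70), (110, 75, 55), (125, 85, 65)]
current = [(118, 82, 58), (170, 90, 70), (112, 74, 57), (126, 86, 64)]
print(pixel_similarity(target, current))  # 0.75: second pixel's R differs by 40
```

Because the images are captured from the same position, the i-th pixel of one image can be compared directly against the i-th pixel of the other, as the note above requires.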
It should be added that there are many ways to obtain the similarity, for example, after step S41, the histogram statistics is used to obtain the similarity; for another example, the target image and the current state image are subjected to feature extraction by using the same convolutional neural network, and the features of the target image and the features of the current state image are compared to obtain similarity; for another example, the target image and the current state image are input into a convolutional neural network, and the similarity between the two images is obtained by using a regression method.
Further, referring to fig. 5, step S40 includes:
step S46, determining that the current food doneness is not cooked under the condition that the similarity is smaller than a preset doneness threshold value;
and step S47, determining that the current food doneness is cooked under the condition that the similarity is greater than the preset doneness threshold value.
Thus, the doneness of the food can be judged according to the similarity.
Specifically, in addition to judging whether the food is in a cooked or uncooked state according to the similarity, other threshold ranges can be set to judge how far the food has progressed within the uncooked state. For example, a threshold range of 5%-10% may correspond to the initial stage of cooking, so that if the similarity is 7%, the current food doneness is obtained as the initial stage of cooking; a threshold range of 50%-60% may correspond to the middle stage of cooking, so that if the similarity is 55%, the current food doneness is obtained as the middle stage of cooking; further ranges are not listed here.
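The staged judgment above can be sketched as follows; the 5%-10% and 50%-60% ranges come from the example in the text, while the final cooked threshold of 90% is an assumed value standing in for the preset doneness threshold.

```python
def doneness_stage(similarity, cooked_threshold=0.90):
    if similarity >= cooked_threshold:
        return "cooked"                    # step S47
    if 0.50 <= similarity <= 0.60:
        return "middle stage of cooking"   # e.g. a similarity of 55%
    if 0.05 <= similarity <= 0.10:
        return "initial stage of cooking"  # e.g. a similarity of 7%
    return "not cooked"                    # step S46 (no finer range set)

print(doneness_stage(0.07))  # initial stage of cooking
print(doneness_stage(0.55))  # middle stage of cooking
print(doneness_stage(0.95))  # cooked
```

Extra ranges refine the "not cooked" verdict without changing the basic cooked/not-cooked decision of steps S46 and S47.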
In general, it is sufficient that the cooking appliance 200 determines whether the state of the food is cooked or not, and if the state of the food is cooked, the heating is stopped in time, and if the state of the food is not cooked, the heating is continued, so that it is only necessary to determine whether the state of the food is cooked or not according to the similarity.
Referring to fig. 6, an embodiment of the invention provides a food doneness identification device 100, which includes an image acquisition module 10, an image processing module 20 and an image comparison module 30. The image acquisition module 10 is used for acquiring an initial state image of the food and acquiring a current state image of the food. The image processing module 20 is configured to obtain a target image according to the initial state image and the preset adversarial neural network model, where the target image represents an image of the food when it reaches the required doneness. The image comparison module 30 is used for comparing the similarity between the target image and the current state image to obtain the food doneness.
According to the food doneness identification device 100 of the embodiment of the invention, the preset adversarial neural network model generates the target image corresponding to the initial state image, so that the current state of the food can be known from the similarity once the current state image of the food is obtained. In this way, the doneness of many types of food can be judged; in addition, judging doneness by similarity occupies few computing resources, which reduces the cost of doneness judgment.
Referring to fig. 7, an embodiment of the invention provides an electric cooking appliance 200 including the food doneness identification device 100.
According to the cooking appliance 200 of the embodiment of the invention, the target image corresponding to the initial state image is generated through the preset adversarial neural network model, so that the current state of the food can be known from the similarity once the current state image of the food is obtained. In this way, the doneness of many types of food can be judged; in addition, judging doneness by similarity occupies few computing resources, which reduces the cost of doneness judgment.
Specifically, the cooking appliance 200 may be an oven, a stove, or the like. Taking the cooking appliance 200 as an oven as an example, the cooking appliance 200 may further include a heating device, a door switch detection device, a controller, and the like. The heating device is used for heating food in the oven, the oven door switch detection device is used for detecting whether the oven door of the oven is opened, and the controller is used for controlling the heating device, the oven door switch detection device, the food doneness recognition device 100 and the like to work.
The image acquisition module 10 can be electrically connected with the door switch detection device, and when the door switch detection device detects that the oven door is opened, that is, when a user puts food into the oven through the door, the image acquisition module 10 is controlled to start working, so as to acquire images of the food in the cooking appliance 200 and obtain the initial state image of the food. The heating device can be electrically connected with the image comparison module 30, and when the image comparison module 30 obtains the food doneness and the controller judges that the food is currently cooked, the heating device can be controlled to stop heating, so that the food is prevented from being overcooked.
The cooking appliance 200 may further include a switch detection device for detecting whether the cooking appliance 200 is in an operating state, i.e., detecting whether the cooking appliance 200 starts to cook. The image capturing module 10 may be electrically connected to the switch detection device, and when the switch detection device detects that the cooking appliance 200 starts to work, the image capturing module 10 may start to capture an image as an initial state image of the food.
The image capturing module 10 may be a camera mounted on the cooking appliance 200, or may be a camera mounted on another electrical appliance and electrically connected to the cooking appliance 200, which is not limited in this respect.
The controller can control the image acquisition module 10 to perform image acquisition once every 5 seconds to update the current state image of the food, or once every 10 seconds. Many time intervals are possible, such as 3 seconds, 20 seconds or 1 minute, and the interval may be adjusted according to the use habits of the user, the power of the cooking appliance 200, the type of food and the like, which is not specifically limited here. It should be noted that the interval between image acquisitions by the image acquisition module 10 should not be too long, so as to avoid the food becoming overcooked because the current state image has not been updated and the overcooking is not perceived in time.
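The periodic-capture control just described might look like the loop below; the camera, doneness evaluation and heater control are passed in as callables, and the per-cycle wait (e.g. 5 seconds) is elided so the sketch stays self-contained. All names here are illustrative assumptions, not the controller's actual interface.

```python
def cook_until_done(capture, doneness_of, stop_heating,
                    cooked_threshold=0.9, max_cycles=100):
    """Periodically refresh the current state image and stop heating
    once the measured doneness reaches the cooked threshold."""
    for _ in range(max_cycles):      # one cycle per capture interval
        current = capture()          # update the current state image
        if doneness_of(current) >= cooked_threshold:
            stop_heating()           # stop in time to avoid overcooking
            return True
    return False                     # doneness never reached in time

# Toy stand-ins: each capture looks "more done" than the last.
frames = iter([0.2, 0.5, 0.8, 0.95])
events = []
done = cook_until_done(lambda: next(frames), lambda d: d,
                       lambda: events.append("heater off"))
print(done, events)  # True ['heater off']
```

The key property is that heating stops on the first cycle whose refreshed current-state image already reads as cooked, which is why the capture interval must stay short.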
Referring to fig. 8, a server 300 is provided according to an embodiment of the present invention. The server 300 includes a communication module 40, an image processing module 20 and an image comparison module 30. The communication module 40 is configured to receive the initial state image of the food and the current state image of the food uploaded by the cooking appliance 200. The image processing module 20 is used for obtaining a target image according to the initial state image and the preset adversarial neural network model, wherein the target image represents an image of the food when it reaches the required doneness. The image comparison module 30 is used for comparing the similarity between the target image and the current state image to obtain the food doneness, so as to transmit the food doneness to the cooking appliance 200.
According to the server 300 of the embodiment of the invention, the preset adversarial neural network model generates the target image corresponding to the initial state image, so that the current state of the food can be known from the similarity once the current state image of the food is obtained. In this way, the doneness of many types of food can be judged; in addition, judging doneness by similarity occupies few computing resources, which reduces the cost of doneness judgment.
Specifically, the communication module 40 may implement communication with the cooking appliance 200 in a bluetooth mode, a WiFi mode, or the like, and the communication module 40 may also implement communication with the cooking appliance 200 by being electrically connected with the cooking appliance 200, which is not limited herein.
Referring to fig. 9, an embodiment of the invention provides a food doneness identification system 500, which includes a cooking appliance 200 and a server 300. The cooking appliance 200 includes an image acquisition module 10 and an image comparison module 30. The image acquisition module 10 is configured to obtain an initial state image of the food and a current state image of the food, and transmit them to the server 300. The image comparison module 30 is used for comparing the similarity between the target image and the current state image to obtain the food doneness. The server 300 includes an image processing module 20, which is configured to obtain the target image according to the initial state image and the preset adversarial neural network model, so as to transmit the target image to the cooking appliance 200.
According to the food doneness identification system 500 of the embodiment of the invention, the target image corresponding to the initial state image is generated through the preset adversarial neural network model, so that the current state of the food can be known from the similarity once the current state image of the food is obtained. In this way, the doneness of many types of food can be judged; in addition, judging doneness by similarity occupies few computing resources, which reduces the cost of doneness judgment.
Specifically, the cooking appliance 200 may be an oven, a stove, or the like. Taking the cooking appliance 200 as an oven as an example, the cooking appliance 200 may further include a heating device, a door switch detection device, a controller, and the like. The heating device is used for heating food in the oven, the oven door switch detection device is used for detecting whether the oven door of the oven is opened, and the controller is used for controlling the heating device, the oven door switch detection device, the food doneness recognition device 100 and the like to work.
The image acquisition module 10 can be electrically connected with the door switch detection device, and when the door switch detection device detects that the door is opened, namely, when a user puts food into the oven through the door, the image acquisition module 10 is controlled to start working, and image acquisition is performed on the food in the cooking appliance 200 to obtain an initial state image of the food. The heating device can be electrically connected with the image comparison module 30, and when the image comparison module 30 obtains the cooked degree of food, and the controller judges that the current food is cooked, the heating device can be controlled to stop heating, so that the food is prevented from being cooked too much.
The image capturing module 10 may be a camera mounted on the cooking appliance 200, or may be a camera mounted on another electrical appliance and electrically connected to the cooking appliance 200, which is not limited in this respect.
The controller can control the image acquisition module 10 to perform image acquisition once every 5 seconds to update the current state image of the food, or once every 10 seconds. Many time intervals are possible, such as 3 seconds, 20 seconds or 1 minute, and the interval may be adjusted according to the use habits of the user, the power of the cooking appliance 200, the type of food and the like, which is not specifically limited here. It should be noted that the interval between image acquisitions by the image acquisition module 10 should not be too long, so as to avoid the food becoming overcooked because the current state image has not been updated and the overcooking is not perceived in time.
The communication module 40 may communicate with the cooking appliance 200 through bluetooth, WiFi, or the like, and the communication module 40 may also communicate with the cooking appliance 200 through being electrically connected with the cooking appliance 200, which is not limited herein.
Embodiments of the present invention provide a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the food doneness identification method of any of the above embodiments.
The computer-readable storage medium of the embodiment of the invention generates a target image corresponding to the initial state image through the preset antagonistic neural network model, so that when the current state image of the food is obtained, the state of the food can be determined from the similarity between the two images. In this way, the doneness of many types of food can be judged; moreover, the computing resources occupied by the similarity judgment are reduced, lowering the cost of doneness judgment.
The computer-readable medium may be provided in the cooking appliance 200 or in the server. The cooking appliance 200 can communicate with the server to obtain the corresponding program. It will be appreciated that the computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), a software distribution medium, and the like.
A computer-readable storage medium may be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires (electronic device), a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). Additionally, the computer-readable storage medium may even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be noted that the above description of the embodiments and the advantageous effects of the food doneness identification method is also applicable to the food doneness identification device 100, the cooking appliance 200, the server 300, the food doneness identification system 500 and the computer readable medium of the embodiments of the present invention, and is not detailed herein to avoid redundancy.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the above terms are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, various embodiments or examples described in this specification, as well as features of different embodiments or examples, can be combined by those skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and alternate implementations are included within the scope of the preferred embodiment of the present invention in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present invention.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.