
CN109712707A - Tongue diagnosis method and apparatus, computing device and computer storage medium - Google Patents

Tongue diagnosis method and apparatus, computing device and computer storage medium Download PDF

Info

Publication number
CN109712707A
Authority
CN
China
Prior art keywords
tongue
network
image
positioning
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811634491.XA
Other languages
Chinese (zh)
Inventor
王鑫宇
周桂文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Het Data Resources and Cloud Technology Co Ltd
Original Assignee
Shenzhen Het Data Resources and Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Het Data Resources and Cloud Technology Co Ltd filed Critical Shenzhen Het Data Resources and Cloud Technology Co Ltd
Priority to CN201811634491.XA priority Critical patent/CN109712707A/en
Publication of CN109712707A publication Critical patent/CN109712707A/en
Pending legal-status Critical Current

Links

Landscapes

  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The present invention relates to the field of artificial intelligence, and in particular discloses a tongue diagnosis method and apparatus, a computing device, and a computer storage medium. The method includes: obtaining a tongue image of a user; inputting the tongue image into a positioning neural network to obtain a positioned tongue image after the tongue target has been located, where the positioning neural network is obtained by training a first neural network and the first neural network includes an SSD neural network; and inputting the positioned tongue image into a recognition neural network to obtain a recognition result, where the recognition neural network is obtained by training a second neural network, and the first neural network and the second neural network are either the same neural network or different neural networks. It can be seen that intelligent tongue diagnosis can be realized with the solution of the present invention.

Description

Tongue diagnosis method and apparatus, computing device and computer storage medium
Technical field
Embodiments of the present invention relate to the field of artificial intelligence, and in particular to a tongue diagnosis method and apparatus, a computing device, and a computer storage medium.
Background technique
Tongue diagnosis, which observes changes in the color and form of the tongue to assist in diagnosing and identifying disease, is a simple and effective method and one of the diagnostic methods of traditional Chinese medicine (TCM). In recent years, intelligent TCM diagnosis systems have emerged that use intelligent technology to assist TCM diagnosis and reduce the heavy workload of practitioners. At present, tongue diagnosis technology falls broadly into two categories: tongue diagnosis based on traditional image localization techniques, and instrument-based tongue diagnosis.
In implementing the present invention, the inventors found the following. Tongue diagnosis methods based on traditional image localization generally use classical algorithms such as Haar and Snake, whose robustness is poor: they cannot localize the tongue well under different lighting conditions or when the shape and color of the tongue change. Instrument-based diagnosis works well, but the instruments are expensive and require the patient's tongue image to be captured in a fixed environment in a hospital, which is inconvenient for users.
Summary of the invention
In view of the above problems, the present invention is proposed in order to provide a tongue diagnosis method and apparatus, a computing device, and a computer storage medium that overcome, or at least partially solve, the above problems.
In order to solve the above technical problems, one technical solution adopted by embodiments of the present invention is to provide a tongue diagnosis method, including: obtaining a tongue image of a user; inputting the tongue image into a positioning neural network to obtain a positioned tongue image after the tongue target has been located, where the positioning neural network is obtained by training a first neural network and the first neural network includes an SSD neural network; and inputting the positioned tongue image into a recognition neural network to obtain a recognition result, where the recognition neural network is obtained by training a second neural network, and the first neural network and the second neural network are either the same neural network or different neural networks.
Optionally, the method further includes: obtaining a preset number of tongue sample images; annotating the bounding box of the tongue region in each tongue sample image to obtain positioned tongue sample images; and inputting the tongue sample images and the positioned tongue sample images into the first neural network for training to obtain the positioning neural network.
Optionally, the method further includes: obtaining the positioned tongue sample images; adding a tongue label to each positioned tongue sample image, where the tongue label indicates the physical condition of the user corresponding to that tongue sample image; and inputting the positioned tongue sample images and their corresponding tongue labels into the second neural network for training to obtain the recognition neural network.
Optionally, the method further includes: detecting, according to a Haar algorithm, whether the image within the shooting range of a mobile terminal camera lens contains the tongue image of a user; and if it does, obtaining the tongue image of the user.
Optionally, the method further includes: if the image within the shooting range of the mobile terminal camera lens does not contain the tongue image of the user, prompting the user to adjust the position of the mobile terminal camera lens until the image within the shooting range of the mobile terminal camera lens contains the tongue image of the user.
Optionally, the method further includes: performing image cleaning, normalization, dimensionality reduction and/or whitening on the tongue image and the tongue sample images.
In order to solve the above technical problems, another technical solution adopted by embodiments of the present invention is to provide a tongue diagnosis apparatus, characterized by comprising: an obtaining module, configured to obtain a tongue image of a user; a positioning module, configured to input the tongue image into a positioning neural network to obtain a positioned tongue image after the tongue target has been located, where the positioning neural network is obtained by training a first neural network and the first neural network includes an SSD neural network; and a recognition module, configured to input the positioned tongue image into a recognition neural network to obtain a recognition result, where the recognition neural network is obtained by training a second neural network, and the first neural network and the second neural network are either the same neural network or different neural networks.
Optionally, in the positioning module, obtaining the positioning neural network by training the first neural network includes: obtaining a preset number of tongue sample images; annotating the bounding box of the tongue region in each tongue sample image to obtain positioned tongue sample images; and inputting the tongue sample images and the positioned tongue sample images into the preset first neural network to obtain the positioning neural network.
Optionally, in the recognition module, obtaining the recognition neural network by training the second neural network includes: obtaining the positioned tongue sample images; adding a tongue label to each positioned tongue sample image, where the tongue label indicates the physical condition of the user corresponding to that tongue sample image; and inputting the positioned tongue sample images and their corresponding tongue labels into the second neural network for training to obtain the recognition neural network.
Optionally, the apparatus further includes: a detection module, configured to detect, according to a Haar algorithm, whether the image within the shooting range of a mobile terminal camera lens contains the tongue image of a user; and a prompting module, configured to prompt the user to adjust the position of the mobile terminal camera lens when the image within the shooting range does not contain the tongue image of the user, until the image within the shooting range of the mobile terminal camera lens contains the tongue image of the user.
Optionally, the apparatus further includes: an image processing module, configured to perform image cleaning, normalization, dimensionality reduction and/or whitening on the tongue image and the tongue sample images.
In order to solve the above technical problems, another technical solution adopted by embodiments of the present invention is to provide a computing device, including: a processor, a memory, a communication interface and a communication bus, where the processor, the memory and the communication interface communicate with each other through the communication bus;
the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform operations corresponding to the tongue diagnosis method.
In order to solve the above technical problems, another technical solution adopted by embodiments of the present invention is to provide a computer storage medium, where at least one executable instruction is stored in the storage medium, and the executable instruction causes a processor to perform operations corresponding to the tongue diagnosis method.
The beneficial effects of the embodiments of the present invention are as follows. In contrast to the prior art, the embodiments recognize the physical condition of a user through a positioning neural network and a recognition neural network, so intelligent tongue diagnosis can be realized. Compared with existing intelligent tongue diagnosis techniques that use neural networks, the embodiments use an SSD neural network to locate the user's tongue image and a neural network including an SSD neural network to recognize it, obtaining the user's physical condition, so the recognition result is more reliable. In addition, a Haar algorithm is used to detect whether the shooting range of the user's mobile terminal camera lens contains the user's tongue image, which gives a preliminary localization of the tongue image before the positioning neural network locates it and thus improves the efficiency of the positioning neural network; moreover, the embodiments can be used on mobile terminals, which is convenient for users.
The above is only an overview of the technical solution of the present invention. In order that the technical means of the present invention can be understood more clearly and implemented in accordance with the contents of the specification, and in order that the above and other objects, features and advantages of the present invention can be more readily understood, specific embodiments of the present invention are set forth below.
Detailed description of the invention
Various other advantages and benefits will become clear to those of ordinary skill in the art by reading the following detailed description of the preferred embodiments. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered as limiting the invention. Throughout the drawings, the same reference numerals denote the same parts. In the drawings:
Fig. 1 is a flowchart of a tongue diagnosis method according to an embodiment of the present invention;
Fig. 2 is a training flowchart of the positioning neural network in a tongue diagnosis method according to an embodiment of the present invention;
Fig. 3 is a training flowchart of the recognition neural network in a tongue diagnosis method according to an embodiment of the present invention;
Fig. 4 is a flowchart of a tongue diagnosis method according to another embodiment of the present invention;
Fig. 5 is a functional block diagram of a tongue diagnosis apparatus according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of a computing device according to an embodiment of the present invention.
Specific embodiment
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the present disclosure, it should be understood that the present disclosure may be implemented in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and its scope will be fully conveyed to those skilled in the art.
Fig. 1 is a flowchart of a tongue diagnosis method according to an embodiment of the present invention. As shown in Fig. 1, the method includes the following steps:
Step S101: obtaining the tongue image of a user.
In this step, the tongue image of the user refers to an image containing the tongue region of the user.
Step S102: inputting the tongue image into the positioning neural network to obtain the positioned tongue image after the tongue target has been located.
In this step, the positioning neural network is obtained by training the first neural network, and the first neural network includes an SSD neural network.
Fig. 2 shows the training flowchart of the positioning neural network in the embodiment of the present invention. As shown in Fig. 2, the training of the positioning neural network includes the following steps:
Step S1021: obtaining a preset number of tongue sample images.
In this step, the tongue sample images are a large number of tongue sample images collected by the research staff from hospitals or from the Internet.
Step S1022: annotating the bounding box of the tongue region in each tongue sample image to obtain positioned tongue sample images.
In this step, the tongue region in each tongue sample image is annotated with a bounding box, and the tongue region image annotated with the bounding box is used as the positioned tongue sample image.
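For illustration only, the following Python sketch shows one way such an annotation might be recorded; the function name, dictionary layout and file name are assumptions, since the patent does not prescribe any annotation format.

```python
# A minimal sketch, assuming a simple per-image annotation record.
def annotate_tongue(sample_image_path, x1, y1, x2, y2):
    """Record the bounding box of the tongue region for one sample image."""
    return {
        "image": sample_image_path,
        "tongue_box": (x1, y1, x2, y2),  # used as the ground truth box during training
    }

# Example record for one hypothetical sample.
positioned_tongue_sample = annotate_tongue("tongue_0001.jpg", 120, 210, 380, 455)
```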
Step S1023: inputting the tongue sample images and the positioned tongue sample images into the first neural network for training to obtain the positioning neural network.
In this step, the preset first neural network may use any specific backbone under the SSD framework, such as MobileNet, ResNet, SqueezeNet or ShuffleNet. Taking MobileNet as an example, the training parameters in MobileNet, including anchor, scale and ratio, are first defined. When setting the anchors, the anchor boxes are set to rectangles according to the shape of the tongue; for example, a rectangle is denoted by Rec, x denotes its length and y denotes its width, and the anchor boxes are configured so that x >= y. In order to obtain the optimal anchor boxes, different scales and ratios are set, yielding a series of anchor boxes, where scale denotes the ratio of the length x to the width y and ratio defines the scaling ratio of the anchor box; in this embodiment, ratio is set to the four values 32, 16, 8 and 4. The patient's tongue image is divided into a grid of fixed-size cells, where the patient's tongue image is called the feature map and a fixed-size cell is called a feature map cell; the anchor boxes are a series of fixed-size boxes on each feature map cell. Each of these fixed-size boxes is compared with the positioned tongue sample image, i.e. the ground truth box, to obtain the single anchor box corresponding to that ground truth box. The comparison method is to compute the IOU, i.e. the degree of overlap between each box and the ground truth box, and the computed result is used as the criterion for selecting the best box, ensuring that each ground truth box corresponds to exactly one anchor box. In the embodiment of the present invention, the IOU threshold is set to 0.85: when the IOU is greater than or equal to 0.85, the anchor box is determined to correspond to the ground truth box; that anchor box is a positive sample and the remaining anchor boxes are negative samples. After the anchor boxes are determined, the weights of the preset first neural network are computed using stochastic gradient descent, and the weights of each layer of the preset first neural network are updated using the back-propagation algorithm to obtain the positioning neural network.
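As an illustration of the matching rule above (one anchor box per ground truth box, with an IOU threshold of 0.85), the following Python sketch computes the IOU and selects the single positive anchor; the helper names and the (x1, y1, x2, y2) box format are assumptions, not part of the original disclosure.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def match_anchors(anchor_boxes, ground_truth_box, iou_threshold=0.85):
    """Mark anchors as positive/negative against one ground truth tongue box,
    keeping only the single best-overlapping anchor as positive."""
    scores = np.array([iou(a, ground_truth_box) for a in anchor_boxes])
    labels = np.zeros(len(anchor_boxes), dtype=int)  # 0 = negative sample
    best = int(np.argmax(scores))
    if scores[best] >= iou_threshold:
        labels[best] = 1                              # 1 = positive sample
    return labels, scores
```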
Step S103: inputting the positioned tongue image into the recognition neural network to obtain the recognition result.
In this step, the recognition neural network is obtained by training the second neural network. The first neural network and the second neural network may be the same neural network, or they may be different neural networks; for example, the second neural network may be an SSD neural network identical to the first neural network, or it may be some other mainstream neural network, such as a convolutional neural network or an Inception neural network.
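The two-stage inference flow of steps S102 and S103 can be summarized by the following Python sketch; positioning_net and recognition_net are placeholders for the trained networks, and the assumption that the positioning network returns a bounding box over a NumPy image array is made only for illustration.

```python
def diagnose(tongue_image, positioning_net, recognition_net):
    """Locate the tongue first, then classify the cropped tongue region."""
    # Stage 1: the positioning network (SSD-based) returns the tongue bounding box.
    x1, y1, x2, y2 = positioning_net(tongue_image)
    positioned_tongue = tongue_image[y1:y2, x1:x2]

    # Stage 2: the recognition network maps the positioned tongue image
    # to a physical-condition result.
    recognition_result = recognition_net(positioned_tongue)
    return recognition_result
```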
Fig. 3 shows the training flowchart of the recognition neural network in the embodiment of the present invention. As shown in Fig. 3, the training of the recognition neural network includes the following steps:
Step S1031: obtaining the positioned tongue sample images.
In this step, the positioned tongue sample images are those obtained during the training of the positioning neural network by annotating the bounding box of the tongue region in the tongue sample images.
Step S1032: adding a tongue label to each positioned tongue sample image, where the tongue label indicates the physical condition of the user corresponding to that tongue sample image.
In this step, the disease type corresponding to each positioned tongue sample image is known, and each positioned tongue sample image is labeled with its corresponding physical condition type. In the embodiment of the present invention, the physical condition types fall into five categories: no disease, respiratory disease, digestive system disease, urinary system disease, and heart disease. When labeling, each condition corresponds to one label: for example, no disease is labeled 1, respiratory disease is labeled 2, digestive system disease is labeled 3, urinary system disease is labeled 4, and heart disease is labeled 5.
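The five-category labeling scheme might be expressed as in the following Python sketch; the English key names are assumptions for illustration, since the patent only fixes the numeric labels 1-5.

```python
# A minimal sketch of the labeling scheme described above.
CONDITION_LABELS = {
    "no_disease": 1,
    "respiratory_disease": 2,
    "digestive_system_disease": 3,
    "urinary_system_disease": 4,
    "heart_disease": 5,
}

def label_sample(positioned_tongue_image, condition_name):
    """Pair a positioned tongue sample image with its integer tongue label."""
    return positioned_tongue_image, CONDITION_LABELS[condition_name]
```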
Step S1033: inputting the positioned tongue sample images and their corresponding tongue labels into the second neural network for training to obtain the recognition neural network.
In this step, when the second neural network is the same as the first neural network, the positioned tongue sample images and their corresponding labels are input into the first neural network for training. It should be noted that if the second neural network is the same as the first neural network, a large number of sample images are needed when training the positioning neural network; when the number of sample images is very large, the first neural network, which includes an SSD neural network, is sufficient to achieve the purpose of disease diagnosis. Under this approach, a score is computed for the disease type corresponding to each positioned tongue sample image, and when the score exceeds a set threshold, the positioned tongue sample image is considered to correspond to that physical condition. When there are fewer positioned tongue sample images, the second neural network is used for recognition; the specific training process is a known technique and is not described here. In the embodiment of the present invention, a large number of samples have been collected, so an SSD neural network is used for recognition and the score threshold is set to 0.8.
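The score-threshold decision described above can be sketched as follows in Python, assuming the recognition network outputs one score per condition class; only the 0.8 threshold and the 1-5 label numbering come from the embodiment, and the function and variable names are illustrative assumptions.

```python
import numpy as np

SCORE_THRESHOLD = 0.8  # threshold from the embodiment

def decide_condition(class_scores, threshold=SCORE_THRESHOLD):
    """Return the label (1..5) whose score clears the threshold, or None if none does."""
    best = int(np.argmax(class_scores))
    if class_scores[best] >= threshold:
        return best + 1  # labels are numbered from 1 in the embodiment
    return None
```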
The embodiment of the present invention recognizes the physical condition of a user through a positioning neural network and a recognition neural network, so intelligent tongue diagnosis can be realized. Compared with existing intelligent tongue diagnosis techniques that use neural networks, the embodiment uses an SSD neural network to locate the user's tongue image and a neural network including an SSD neural network to recognize the user's tongue image, obtaining the user's physical condition, so the recognition result is more reliable.
Fig. 4 is a flowchart of a tongue diagnosis method according to another embodiment of the present invention. As shown in Fig. 4, compared with the previous embodiment, this embodiment further includes the following steps before obtaining the tongue image of the user:
Step S401: detecting, according to a Haar algorithm, whether the image within the shooting range of the mobile terminal camera lens contains the tongue image of the user; if it does, executing step S402; if it does not, executing step S403.
In this step, the user captures an image with a mobile terminal, the Haar algorithm is set up on the mobile terminal, and whether the user's camera lens contains a tongue is monitored in real time. When the user captures an image, in order to obtain the best shooting effect, the tongue region should occupy 20%-40% of the whole image; when the detected tongue reaches this proportion of the whole image, the user is prompted to hold still and the image is captured.
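A possible realization of this Haar-based pre-check, using OpenCV's cascade classifier and the 20%-40% area criterion, is sketched below; the cascade file name "tongue_cascade.xml" is hypothetical, since a tongue cascade would have to be trained separately and is not provided by OpenCV or by the patent.

```python
import cv2

# A minimal sketch, assuming a separately trained Haar cascade for tongues.
tongue_cascade = cv2.CascadeClassifier("tongue_cascade.xml")  # hypothetical model file

def tongue_in_frame(frame_bgr, min_ratio=0.20, max_ratio=0.40):
    """Return True when a detected tongue occupies 20%-40% of the frame area."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    detections = tongue_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    frame_area = frame_bgr.shape[0] * frame_bgr.shape[1]
    for (x, y, w, h) in detections:
        ratio = (w * h) / frame_area
        if min_ratio <= ratio <= max_ratio:
            return True   # prompt the user to hold still and capture the image
    return False          # prompt the user to adjust the camera position
```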
Step S402: obtaining the tongue image of the user.
In this step, the tongue image of the user is the part of the captured image that contains the user's tongue region.
In one embodiment, image cleaning, normalization, dimensionality reduction and/or whitening are performed on the user's tongue image and on the sample images. Image cleaning removes images that do not contain the tongue region, images whose pixel resolution is too low, and images in which the tongue region occupies too small a proportion to be recognized easily; at the same time, in order to obtain more accurate results, images containing multiple tongue regions are also removed. The purpose of normalization is to scale the tongue images to the same size, which is convenient for network training. Dimensionality reduction and/or whitening reduce the complexity of model training.
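One way the normalization and whitening steps might look in practice is sketched below; the 300x300 target size and the epsilon value are assumptions, since the patent does not give concrete values.

```python
import cv2
import numpy as np

TARGET_SIZE = (300, 300)  # typical SSD input size, assumed here for illustration

def preprocess(tongue_image_bgr, eps=1e-6):
    """Normalize a tongue image to a fixed size and whiten it to zero mean, unit variance."""
    resized = cv2.resize(tongue_image_bgr, TARGET_SIZE)
    x = resized.astype(np.float32) / 255.0      # scale pixels to [0, 1]
    x = (x - x.mean()) / (x.std() + eps)        # per-image whitening
    return x
```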
Step S403: prompting the user to adjust the position of the mobile terminal camera lens until the image within the shooting range of the mobile terminal camera lens contains the tongue image of the user.
The embodiment of the present invention uses a Haar algorithm to detect whether the shooting range of the user's mobile terminal camera lens contains the user's tongue image, which gives a preliminary localization of the tongue image before the positioning neural network locates it and thus improves the efficiency of the positioning neural network; moreover, the embodiment is used on a mobile terminal, which is convenient for users.
Fig. 5 is a functional block diagram of a tongue diagnosis apparatus according to an embodiment of the present invention. As shown in Fig. 5, the apparatus includes: an obtaining module 501, a positioning module 502, a recognition module 503, a detection module 504, a prompting module 505 and an image processing module 506. The obtaining module 501 is configured to obtain a tongue image of a user. The positioning module 502 is configured to input the tongue image into a positioning neural network to obtain a positioned tongue image after the tongue target has been located, where the positioning neural network is obtained by training a first neural network and the first neural network includes an SSD neural network. The recognition module 503 is configured to input the positioned tongue image into a recognition neural network to obtain a recognition result, where the recognition neural network is obtained by training a second neural network, and the first neural network and the second neural network are either the same neural network or different neural networks. The detection module 504 is configured to detect, according to a Haar algorithm, whether the image within the shooting range of a mobile terminal camera lens contains the tongue image of a user. The prompting module 505 is configured to prompt the user to adjust the position of the mobile terminal camera lens when the image within the shooting range does not contain the tongue image of the user, until the image within the shooting range contains the tongue image of the user. The image processing module 506 is configured to perform image cleaning, normalization, dimensionality reduction and/or whitening on the tongue image and the tongue sample images.
Each module of the above apparatus corresponds to the tongue diagnosis method of the embodiments; for the specific function of each module, reference may be made to the tongue diagnosis method of the embodiments, and details are not repeated here.
The embodiment of the present invention recognizes the physical condition of a user through the positioning module and the recognition module, so intelligent tongue diagnosis can be realized. In addition, the detection module detects whether the shooting range of the user's mobile terminal camera lens contains the user's tongue image, which gives a preliminary localization of the tongue image before the positioning module locates it and thus improves the efficiency of the positioning module; moreover, the embodiment can be used on a mobile terminal, which is convenient for users.
An embodiment of the present application provides a non-volatile computer storage medium storing at least one executable instruction, where the computer executable instruction can perform the operations corresponding to the tongue diagnosis method in any of the above method embodiments.
Fig. 6 is a schematic structural diagram of a computing device embodiment of the present invention; the specific embodiments of the present invention do not limit the specific implementation of the computing device.
As shown in Fig. 6, the computing device may include: a processor (processor) 602, a communications interface (Communications Interface) 604, a memory (memory) 606 and a communication bus 608.
Wherein:
The processor 602, the communications interface 604 and the memory 606 communicate with each other through the communication bus 608.
The communications interface 604 is configured to communicate with network elements of other devices, such as clients or other servers.
The processor 602 is configured to execute a program 610, and may specifically perform the relevant steps in the above tongue diagnosis method embodiments.
Specifically, the program 610 may include program code, and the program code includes computer operation instructions.
The processor 602 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention. The one or more processors included in the computing device may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
The memory 606 is configured to store the program 610. The memory 606 may include a high-speed RAM memory, and may also include a non-volatile memory, for example at least one magnetic disk memory.
The program 610 may specifically be used to cause the processor 602 to perform the following operations:
Obtaining a tongue image of a user; inputting the tongue image into a positioning neural network to obtain a positioned tongue image after the tongue target has been located, where the positioning neural network is obtained by training a first neural network and the first neural network includes an SSD neural network; and inputting the positioned tongue image into a recognition neural network to obtain a recognition result, where the recognition neural network is obtained by training a second neural network, and the first neural network and the second neural network are either the same neural network or different neural networks.
In an optional mode, the program 610 may further be used to cause the processor 602 to perform the following operations:
obtaining a preset number of tongue sample images; annotating the bounding box of the tongue region in each tongue sample image to obtain positioned tongue sample images; and inputting the tongue sample images and the positioned tongue sample images into the preset first neural network to obtain the positioning neural network.
In an optional mode, the program 610 may further be used to cause the processor 602 to perform the following operations: obtaining the positioned tongue sample images; adding a tongue label to each positioned tongue sample image, where the tongue label indicates the physical condition of the user corresponding to that tongue sample image; and inputting the positioned tongue sample images and their corresponding tongue labels into the second neural network for training to obtain the recognition neural network.
In an optional mode, the program 610 may further be used to cause the processor 602 to perform the following operations: detecting, according to a Haar algorithm, whether the image within the shooting range of the mobile terminal camera lens contains the tongue image of the user; if it does, obtaining the tongue image of the user; and if the image within the shooting range of the mobile terminal camera lens does not contain the tongue image of the user, prompting the user to adjust the position of the mobile terminal camera lens until the image within the shooting range of the mobile terminal camera lens contains the tongue image of the user.
In an optional mode, the program 610 may further be used to cause the processor 602 to perform the following operations: performing image cleaning, normalization, dimensionality reduction and/or whitening on the tongue image and the tongue sample images.
The algorithms and displays provided herein are not inherently related to any particular computer, virtual system or other apparatus. Various general-purpose systems may also be used with the teachings herein. As described above, the structure required to construct such systems is obvious. In addition, the present invention is not directed to any particular programming language. It should be understood that various programming languages may be used to implement the contents of the present invention described herein, and the above description of a specific language is intended to disclose the best mode of the present invention.
In the specification provided here, numerous specific details are set forth. It should be understood, however, that embodiments of the present invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this specification.
Similarly, it should be understood that, in order to simplify the disclosure and to aid understanding of one or more of the various inventive aspects, in the above description of exemplary embodiments of the present invention, various features of the present invention are sometimes grouped together into a single embodiment, figure, or description thereof. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the specific embodiments are hereby expressly incorporated into the specific embodiments, with each claim standing on its own as a separate embodiment of the present invention.
Those skilled in the art will understand that the modules in the devices of the embodiments may be adaptively changed and arranged in one or more devices different from those of the embodiments. The modules, units or components of the embodiments may be combined into one module, unit or component, and furthermore they may be divided into multiple sub-modules, sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or apparatus so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, equivalent or similar purpose.
In addition, those skilled in the art will understand that, although some embodiments described herein include certain features included in other embodiments but not other features, combinations of features of different embodiments are meant to be within the scope of the present invention and to form different embodiments. For example, in the following claims, any one of the claimed embodiments may be used in any combination.
The various component embodiments of the present invention may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or a digital signal processor (DSP) may be used in practice to realize some or all of the functions of some or all of the components of the tongue diagnosis apparatus according to embodiments of the present invention. The present invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the method described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may be in the form of one or more signals; such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the invention, and those skilled in the art can design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claims. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The present invention may be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second and third does not indicate any order; these words may be interpreted as names.

Claims (9)

1. A tongue diagnosis method, characterized by comprising:
obtaining a tongue image of a user;
inputting the tongue image into a positioning neural network to obtain a positioned tongue image after the tongue target has been located, wherein the positioning neural network is obtained by training a first neural network, and the first neural network comprises an SSD neural network;
inputting the positioned tongue image into a recognition neural network to obtain a recognition result, wherein the recognition neural network is obtained by training a second neural network, and the first neural network and the second neural network are the same neural network, or the first neural network and the second neural network are different neural networks.
2. The method according to claim 1, characterized in that the positioning neural network is obtained by training the first neural network, comprising:
training the first neural network to obtain the positioning neural network;
wherein training the first neural network to obtain the positioning neural network comprises:
obtaining a preset number of tongue sample images;
annotating the bounding box of the tongue region in each tongue sample image to obtain positioned tongue sample images;
inputting the tongue sample images and the positioned tongue sample images into the first neural network for training to obtain the positioning neural network.
3. The method according to claim 2, characterized in that the recognition neural network is obtained by training the second neural network, comprising:
training the second neural network to obtain the recognition neural network;
wherein training the second neural network to obtain the recognition neural network comprises:
obtaining the positioned tongue sample images;
adding a tongue label to each positioned tongue sample image, the tongue label indicating the physical condition of the user corresponding to that tongue sample image;
inputting the positioned tongue sample images and the tongue labels corresponding to the positioned tongue sample images into the second neural network for training to obtain the recognition neural network.
4. The method according to claim 1, characterized in that, before obtaining the tongue image of the user, the method further comprises:
detecting, according to a Haar algorithm, whether the image within the shooting range of a mobile terminal camera lens contains the tongue image of the user;
if it does, obtaining the tongue image of the user.
5. The method according to claim 4, characterized in that the method further comprises:
if the image within the shooting range of the mobile terminal camera lens does not contain the tongue image of the user, prompting the user to adjust the position of the mobile terminal camera lens until the image within the shooting range of the mobile terminal camera lens contains the tongue image of the user.
6. The method according to any one of claims 1-4, characterized in that the method further comprises: performing image cleaning, normalization, dimensionality reduction and/or whitening on the tongue image and the tongue sample images.
7. An intelligent tongue diagnosis apparatus based on a mobile terminal, characterized in that the apparatus comprises:
an obtaining module, configured to obtain a tongue image of a user;
a positioning module, configured to input the tongue image into a positioning neural network to obtain a positioned tongue image after the tongue target has been located, the positioning neural network being obtained by training a first neural network, and the first neural network comprising an SSD neural network;
a recognition module, configured to input the positioned tongue image into a recognition neural network to obtain a recognition result, the recognition neural network being obtained by training a second neural network, and the first neural network and the second neural network being the same neural network or different neural networks.
8. A computing device, comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with each other through the communication bus;
the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform operations corresponding to the tongue diagnosis method according to any one of claims 1-6.
9. A computer storage medium, wherein at least one executable instruction is stored in the storage medium, and the executable instruction causes a processor to perform operations corresponding to the tongue diagnosis method according to any one of claims 1-6.
CN201811634491.XA 2018-12-29 2018-12-29 Tongue diagnosis method and apparatus, computing device and computer storage medium Pending CN109712707A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811634491.XA CN109712707A (en) 2018-12-29 2018-12-29 Tongue diagnosis method and apparatus, computing device and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811634491.XA CN109712707A (en) 2018-12-29 2018-12-29 Tongue diagnosis method and apparatus, computing device and computer storage medium

Publications (1)

Publication Number Publication Date
CN109712707A true CN109712707A (en) 2019-05-03

Family

ID=66258234

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811634491.XA Pending CN109712707A (en) 2018-12-29 2018-12-29 Tongue diagnosis method and apparatus, computing device and computer storage medium

Country Status (1)

Country Link
CN (1) CN109712707A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110164550A (en) * 2019-05-22 2019-08-23 杭州电子科技大学 A kind of congenital heart disease aided diagnosis method based on multi-angle of view conspiracy relation
CN110363072A (en) * 2019-05-31 2019-10-22 正和智能网络科技(广州)有限公司 Tongue image recognition method, apparatus, computer equipment and computer readable storage medium
CN113362334A (en) * 2020-03-04 2021-09-07 北京悦熙兴中科技有限公司 Tongue picture processing method and device
CN113837987A (en) * 2020-12-31 2021-12-24 京东科技控股股份有限公司 Tongue image acquisition method and device and computer equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103905733A (en) * 2014-04-02 2014-07-02 哈尔滨工业大学深圳研究生院 Method and system for conducting real-time tracking on faces by monocular camera
CN105357442A (en) * 2015-11-27 2016-02-24 小米科技有限责任公司 Shooting angle adjustment method and device for camera
CN106295139A (en) * 2016-07-29 2017-01-04 姹ゅ钩 A kind of tongue body autodiagnosis health cloud service system based on degree of depth convolutional neural networks
CN108986912A (en) * 2018-07-12 2018-12-11 北京三医智慧科技有限公司 Chinese medicine stomach trouble tongue based on deep learning is as information intelligent processing method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103905733A (en) * 2014-04-02 2014-07-02 哈尔滨工业大学深圳研究生院 Method and system for conducting real-time tracking on faces by monocular camera
CN105357442A (en) * 2015-11-27 2016-02-24 小米科技有限责任公司 Shooting angle adjustment method and device for camera
CN106295139A (en) * 2016-07-29 2017-01-04 姹ゅ钩 A kind of tongue body autodiagnosis health cloud service system based on degree of depth convolutional neural networks
CN108986912A (en) * 2018-07-12 2018-12-11 北京三医智慧科技有限公司 Chinese medicine stomach trouble tongue based on deep learning is as information intelligent processing method

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110164550A (en) * 2019-05-22 2019-08-23 杭州电子科技大学 A kind of congenital heart disease aided diagnosis method based on multi-angle of view conspiracy relation
CN110164550B (en) * 2019-05-22 2021-07-09 杭州电子科技大学 Congenital heart disease auxiliary diagnosis method based on multi-view cooperative relationship
CN110363072A (en) * 2019-05-31 2019-10-22 正和智能网络科技(广州)有限公司 Tongue image recognition method, apparatus, computer equipment and computer readable storage medium
CN110363072B (en) * 2019-05-31 2023-06-09 正和智能网络科技(广州)有限公司 Tongue picture identification method, tongue picture identification device, computer equipment and computer readable storage medium
CN113362334A (en) * 2020-03-04 2021-09-07 北京悦熙兴中科技有限公司 Tongue picture processing method and device
CN113362334B (en) * 2020-03-04 2024-05-24 北京悦熙兴中科技有限公司 Tongue photo processing method and device
CN113837987A (en) * 2020-12-31 2021-12-24 京东科技控股股份有限公司 Tongue image acquisition method and device and computer equipment
CN113837987B (en) * 2020-12-31 2023-11-03 京东科技控股股份有限公司 Tongue image acquisition method and device and computer equipment

Similar Documents

Publication Publication Date Title
CN109712707A (en) A kind of lingual diagnosis method, apparatus calculates equipment and computer storage medium
CN110232383B (en) Focus image recognition method and focus image recognition system based on deep learning model
CN109964235A (en) For carrying out the prediction model of vision sorter to insect
CN109829446A (en) Eye fundus image recognition methods, device, electronic equipment and storage medium
CN108875934A (en) A kind of training method of neural network, device, system and storage medium
CN109376663A (en) A kind of human posture recognition method and relevant apparatus
CN110472737A (en) Training method, device and the magic magiscan of neural network model
CN106682127A (en) Image searching system and method
CN115345938B (en) Global-to-local-based head shadow mark point positioning method, equipment and medium
CN113516639B (en) Training method and device for oral cavity abnormality detection model based on panoramic X-ray film
CN111681247A (en) Lung lobe and lung segment segmentation model training method and device
CN112614573A (en) Deep learning model training method and device based on pathological image labeling tool
CN110742690A (en) Method for configuring endoscope and terminal equipment
Raja et al. Convolutional Neural Networks based Classification and Detection of Plant Disease
US10970863B2 (en) System and method of analyzing features of the human face and breasts using one or more overlay grids
Sapkota et al. Comparing YOLO11 and YOLOv8 for instance segmentation of occluded and non-occluded immature green fruits in complex orchard environment
CN110110750A (en) A kind of classification method and device of original image
Minowa et al. Convolutional neural network applied to tree species identification based on leaf images
Pho et al. Segmentation-driven hierarchical retinanet for detecting protozoa in micrograph
CN112668668B (en) Postoperative medical image evaluation method and device, computer equipment and storage medium
CN114495097A (en) Multi-model-based urine cell identification method and system
CN110110749A (en) Image processing method and device in a kind of training set
Aleynikov et al. Automated diseases detection of plant diseases in space greenhouses
Dhar et al. A Deep Learning Approach to Teeth Segmentation and Orientation from Panoramic X-rays
Parakh et al. Detection of Bell Pepper Crop Diseases Using Convolution Neural Network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190503

RJ01 Rejection of invention patent application after publication