CN107918726A - Distance sensing method, device and storage medium - Google Patents
Distance sensing method, device and storage medium
- Publication number
- CN107918726A CN107918726A CN201710981331.1A CN201710981331A CN107918726A CN 107918726 A CN107918726 A CN 107918726A CN 201710981331 A CN201710981331 A CN 201710981331A CN 107918726 A CN107918726 A CN 107918726A
- Authority
- CN
- China
- Prior art keywords
- distance
- current user
- sensing
- face
- preset
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 43
- 230000001939 inductive effect Effects 0.000 title claims abstract description 34
- 230000002452 interceptive effect Effects 0.000 claims abstract description 14
- 238000004458 analytical method Methods 0.000 claims description 59
- 230000001815 facial effect Effects 0.000 claims description 48
- 230000001755 vocal effect Effects 0.000 claims description 29
- 230000006870 function Effects 0.000 claims description 17
- 239000000284 extract Substances 0.000 claims description 8
- 238000004364 calculation method Methods 0.000 claims description 6
- 238000012937 correction Methods 0.000 claims description 6
- 230000006698 induction Effects 0.000 claims 1
- 230000004044 response Effects 0.000 abstract description 12
- 238000010586 diagram Methods 0.000 description 5
- 238000012545 processing Methods 0.000 description 5
- 238000004891 communication Methods 0.000 description 4
- 230000003993 interaction Effects 0.000 description 4
- 238000013135 deep learning Methods 0.000 description 3
- 230000008569 process Effects 0.000 description 3
- 238000010835 comparative analysis Methods 0.000 description 2
- 238000007405 data analysis Methods 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 238000001914 filtration Methods 0.000 description 2
- 230000029058 respiratory gaseous exchange Effects 0.000 description 2
- 238000012216 screening Methods 0.000 description 2
- 238000012549 training Methods 0.000 description 2
- 230000005856 abnormality Effects 0.000 description 1
- 238000013528 artificial neural network Methods 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 230000006835 compression Effects 0.000 description 1
- 238000007906 compression Methods 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 238000010191 image analysis Methods 0.000 description 1
- 238000007477 logistic regression Methods 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 238000001228 spectrum Methods 0.000 description 1
- 230000006641 stabilisation Effects 0.000 description 1
- 238000011105 stabilization Methods 0.000 description 1
- 230000002618 waking effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Acoustics & Sound (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Computational Linguistics (AREA)
- Signal Processing (AREA)
- Computer Security & Cryptography (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a distance sensing method, device, and storage medium. A distance sensing device acquires face image information of the current user and calculates a first reference distance to the current user according to the face image information; it acquires sound audio information in the current environment and calculates a second reference distance to the current user according to the sound audio information; it determines a target distance according to the first reference distance and the second reference distance; and it compares the target distance with a preset distance and starts the human-machine dialogue function when the target distance is smaller than the preset distance. Because the distance obtained through face recognition and acoustic sensing is compared with a preset distance, the device can make different responses according to proximity and can be woken without relying on a particular keyword, which shortens the device's response time, makes the device more intelligent and user-friendly, and improves the user experience.
Description
Technical field
The present invention relates to the field of human-computer interaction, and more particularly to a distance sensing method, device, and storage medium.
Background art
Face recognition is a biometric technology that performs identity recognition based on facial feature information. It covers a series of techniques in which a camera collects images or video streams containing faces, automatically detects and tracks the faces in those images, and then analyses the detected faces; it is also commonly called portrait recognition or facial recognition. At present, face recognition is mainly used when taking photos to locate the position of the face so that the camera can sharpen that region and make the photo more attractive; it is not used to calculate the distance between the device and the person. Likewise, devices on the market that carry microphones simply use them for voice input and do not calculate the distance between the sound source and the device. Over the past century the human-computer interface has evolved from knobs and buttons to touch screens, and the smart speakers released by some manufacturers represent another change in interaction: they require no wearable accessory at all and are woken purely by a voice keyword to carry out voice interaction. However, current smart speakers implement human-computer interaction mainly through a microphone array and must be woken by a particular keyword before a voice conversation can take place. If the user forgets the keyword or says it incorrectly, the device cannot be woken, which to some extent prevents a good human-machine interaction experience and brings inconvenience to everyday use.
Summary of the invention
The main object of the present invention is to provide a distance sensing method, device, and storage medium, so as to solve the technical problem in the prior art that a device must be woken by a particular keyword, which is inconvenient for the user.
To achieve the above object, the present invention provides a distance sensing method. The distance sensing method comprises the following steps:
a distance sensing device acquires face image information of the current user and calculates a first reference distance to the current user according to the face image information;
the device acquires sound audio information in the current environment and calculates a second reference distance to the current user according to the sound audio information;
a target distance is determined according to the first reference distance and the second reference distance, the target distance being the actual distance between the current user and the distance sensing device;
the target distance is compared with a preset distance, and when the target distance is smaller than the preset distance the human-machine dialogue function is started.
Preferably, acquiring the face image information of the current user with the distance sensing device and calculating the first reference distance to the current user according to the face image information specifically comprises:
collecting a face image of the current user, performing dynamic comparative analysis on the face image, and generating a first analysis result;
taking the first analysis result as the face image information, and calculating the first reference distance to the current user from the face image information using a preset face recognition algorithm.
Preferably, collecting the face image of the current user, performing dynamic comparative analysis on the face image, and generating the first analysis result specifically comprises:
collecting face images of the current user from different angles, extracting a plurality of face feature points from the face images, comparing each face feature point with preset face feature points in a preset face feature point database, and generating the first analysis result.
Preferably, acquiring the sound audio information in the current environment and calculating the second reference distance to the current user according to the sound audio information specifically comprises:
collecting the sound produced by the current user in the current environment, performing voiceprint analysis on the sound, and generating a second analysis result;
taking the second analysis result as the sound audio information, and calculating the second reference distance to the current user from the sound audio information using a preset voiceprint recognition algorithm.
Preferably, collecting the sound produced by the current user in the current environment, performing voiceprint analysis on the sound, and generating the second analysis result specifically comprises:
collecting the sound produced by the current user in the current environment, extracting a plurality of voiceprint features from the sound, comparing each voiceprint feature with preset voiceprint features in a preset voiceprint feature database, and generating the second analysis result.
Preferably, determining the target distance according to the first reference distance and the second reference distance, the target distance being the actual distance between the current user and the distance sensing device, specifically comprises:
calculating a weighted average of the first reference distance and the second reference distance according to a preset weight ratio, and obtaining the target distance from the result of the calculation, the target distance being the actual distance between the current user and the distance sensing device.
Preferably, before comparing the target distance with the preset distance and starting the human-machine dialogue function when the target distance is smaller than the preset distance, the distance sensing method further comprises:
acquiring human-body infrared information of the current user, and calculating a third reference distance to the current user according to the human-body infrared information;
correcting the target distance according to the third reference distance, and taking the corrected distance as the new target distance.
Preferably, before acquiring the face image information of the current user with the distance sensing device and calculating the first reference distance to the current user according to the face image information, the distance sensing method further comprises:
acquiring the current operating mode and, when the current operating mode is a preset privacy mode, acquiring the face image information of the current user and matching the face image information against preset face image information in a preset face image information database;
when the face image information does not match the preset face image information, generating a prompt message and sending it to a preset information receiving terminal.
In addition, to achieve the above object, the present invention also proposes a distance sensing device. The distance sensing device comprises a memory, a processor, and a distance sensing program stored on the memory and operable on the processor, the distance sensing program being configured to implement the steps of the distance sensing method described above.
In addition, to achieve the above object, the present invention also proposes a storage medium. A distance sensing program is stored on the storage medium, and when the distance sensing program is executed by a processor the steps of the distance sensing method described above are implemented.
In the distance sensing method proposed by the present invention, a distance sensing device acquires face image information of the current user and calculates a first reference distance to the current user according to the face image information; it acquires sound audio information in the current environment and calculates a second reference distance to the current user according to the sound audio information; it determines a target distance, the actual distance between the current user and the distance sensing device, according to the first and second reference distances; and it compares the target distance with a preset distance and starts the human-machine dialogue function when the target distance is smaller than the preset distance. Because the distance obtained through face recognition and acoustic sensing is compared with a preset distance, the device can make different responses without relying on a particular keyword to be woken, which shortens the device's response time, makes the device more intelligent and user-friendly, and improves the user experience.
Brief description of the drawings
Fig. 1 is a schematic structural diagram of the distance sensing device in the hardware operating environment involved in the embodiments of the present invention;
Fig. 2 is a schematic flowchart of a first embodiment of the distance sensing method of the present invention;
Fig. 3 is a schematic flowchart of a second embodiment of the distance sensing method of the present invention;
Fig. 4 is a schematic flowchart of a third embodiment of the distance sensing method of the present invention.
The realization of the objects, functional characteristics, and advantages of the present invention will be further described with reference to the accompanying drawings in connection with the embodiments.
Detailed description of the embodiments
It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
The solution of the embodiments of the present invention is mainly as follows: a distance sensing device acquires face image information of the current user and calculates a first reference distance to the current user according to the face image information; it acquires sound audio information in the current environment and calculates a second reference distance to the current user according to the sound audio information; it determines a target distance, the actual distance between the current user and the distance sensing device, according to the first and second reference distances; and it compares the target distance with a preset distance and starts the human-machine dialogue function when the target distance is smaller than the preset distance. The distance obtained through face recognition and acoustic sensing is compared with a preset distance so that the device can make different responses without relying on a particular keyword to be woken, which shortens the device's response time, makes the device more intelligent and user-friendly, improves the user experience, and solves the prior-art problem that a device must be woken by a particular keyword, which is inconvenient for the user.
Referring to Fig. 1, Fig. 1 is a schematic structural diagram of the distance sensing device in the hardware operating environment involved in the embodiments of the present invention.
As shown in Fig. 1, the distance sensing device may include a processor 1001 such as a CPU, a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 implements the connections and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and may optionally also include standard wired and wireless interfaces. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a Wi-Fi interface). The memory 1005 may be a high-speed RAM memory or a stable non-volatile memory such as a magnetic disk memory, and may optionally also be a storage device independent of the aforementioned processor 1001.
Those skilled in the art will understand that the device structure shown in Fig. 1 does not constitute a limitation on the distance sensing device, which may include more or fewer components than illustrated, for example a camera, an infrared sensor, a microphone array, and a loudspeaker, may combine certain components such as a smart speaker, or may have a different arrangement of components.
As shown in Fig. 1, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and a distance sensing program.
In the distance sensing device of the present invention, the processor 1001 calls the distance sensing program stored in the memory 1005 and performs the following operations:
acquiring face image information of the current user with the distance sensing device, and calculating a first reference distance to the current user according to the face image information;
acquiring sound audio information in the current environment, and calculating a second reference distance to the current user according to the sound audio information;
determining a target distance according to the first reference distance and the second reference distance, the target distance being the actual distance between the current user and the distance sensing device;
comparing the target distance with a preset distance, and starting the human-machine dialogue function when the target distance is smaller than the preset distance.
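As a rough illustration of the control flow these operations describe, the sketch below fuses a face-based estimate and a sound-based estimate into a target distance and makes the dialogue decision. The helper functions and every numeric constant are assumptions made for this sketch, not algorithms or values taken from the patent.

```python
def estimate_distance_from_face(face_width_px):
    """Placeholder for the preset face recognition algorithm:
    a pinhole-camera approximation with assumed calibration values."""
    return 800.0 * 0.16 / max(face_width_px, 1)

def estimate_distance_from_sound(rms):
    """Placeholder for the preset voiceprint algorithm: louder is closer,
    relative to an assumed reference of 0.1 RMS at 1 m."""
    return 1.0 * 0.1 / max(rms, 1e-6)

def sensing_step(face_width_px, rms, preset_distance_m=1.5, weights=(0.6, 0.4)):
    """One pass of the program: two reference distances, a weighted
    target distance, and the dialogue decision."""
    d_face = estimate_distance_from_face(face_width_px)    # first reference distance
    d_sound = estimate_distance_from_sound(rms)            # second reference distance
    target = weights[0] * d_face + weights[1] * d_sound    # target distance (step S30)
    return target, target < preset_distance_m              # True -> start dialogue (step S40)

print(sensing_step(face_width_px=200, rms=0.08))
```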
Further, the processor 1001 may call the distance sensing program stored in the memory 1005 to also perform the following operations:
collecting a face image of the current user, performing dynamic comparative analysis on the face image, and generating a first analysis result;
taking the first analysis result as the face image information, and calculating the first reference distance to the current user from the face image information using a preset face recognition algorithm.
Further, the processor 1001 may call the distance sensing program stored in the memory 1005 to also perform the following operations:
collecting face images of the current user from different angles, extracting a plurality of face feature points from the face images, comparing each face feature point with preset face feature points in a preset face feature point database, and generating the first analysis result.
Further, the processor 1001 may call the distance sensing program stored in the memory 1005 to also perform the following operations:
collecting the sound produced by the current user in the current environment, performing voiceprint analysis on the sound, and generating a second analysis result;
taking the second analysis result as the sound audio information, and calculating the second reference distance to the current user from the sound audio information using a preset voiceprint recognition algorithm.
Further, the processor 1001 may call the distance sensing program stored in the memory 1005 to also perform the following operations:
collecting the sound produced by the current user in the current environment, extracting a plurality of voiceprint features from the sound, comparing each voiceprint feature with preset voiceprint features in a preset voiceprint feature database, and generating the second analysis result.
Further, the processor 1001 may call the distance sensing program stored in the memory 1005 to also perform the following operations:
calculating a weighted average of the first reference distance and the second reference distance according to a preset weight ratio, and obtaining the target distance from the result of the calculation, the target distance being the actual distance between the current user and the distance sensing device.
Further, the processor 1001 may call the distance sensing program stored in the memory 1005 to also perform the following operations:
acquiring human-body infrared information of the current user, and calculating a third reference distance to the current user according to the human-body infrared information;
correcting the target distance according to the third reference distance, and taking the corrected distance as the new target distance.
Further, the processor 1001 may call the distance sensing program stored in the memory 1005 to also perform the following operations:
acquiring the current operating mode and, when the current operating mode is a preset privacy mode, acquiring the face image information of the current user and matching the face image information against preset face image information in a preset face image information database;
when the face image information does not match the preset face image information, generating a prompt message and sending it to a preset information receiving terminal.
Through the above scheme, this embodiment acquires face image information of the current user with the distance sensing device, calculates a first reference distance to the current user according to the face image information, acquires sound audio information in the current environment, calculates a second reference distance to the current user according to the sound audio information, determines the target distance, the actual distance between the current user and the distance sensing device, according to the first and second reference distances, compares the target distance with a preset distance, and starts the human-machine dialogue function when the target distance is smaller than the preset distance. The distance obtained through face recognition and acoustic sensing is compared with a preset distance so that the device can make different responses without relying on a particular keyword to be woken, which shortens the device's response time, makes the device more intelligent and user-friendly, and improves the user experience.
Based on the above hardware structure, embodiments of the distance sensing method of the present invention are proposed.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of the first embodiment of the distance sensing method of the present invention.
In the first embodiment, the distance sensing method comprises the following steps:
Step S10: the distance sensing device acquires face image information of the current user and calculates a first reference distance to the current user according to the face image information.
It should be noted that the distance sensing device may be any device capable of collecting face images and acoustic information, for example a smart speaker, a television set, or a tablet computer; it may of course also be a smart home control terminal or another terminal device, and this embodiment places no limitation on it. The current user is the user who directly or indirectly uses the distance sensing device; this may be a designated user of the device stored in a database, or any person within a preset range of the place where the distance sensing device is applied, and this embodiment places no limitation on it.
It will be understood that the distance sensing device acquires the face image information of the current user and calculates the first reference distance to the current user according to that information: from the face image information, the distance between the current user and the distance sensing device can be calculated according to the size of the face and other parameters. The first reference distance may be an approximate distance, that is, a distance range with an error value; it may of course also be a single value, a relatively precise distance reference value obtained through extensive computation and comparison, and this embodiment places no limitation on it.
It should be understood that the algorithm used to calculate the first reference distance from the face image information may be a deep-learning-based algorithm such as a deep neural network, a wavelet algorithm for image compression and recognition, a logistic regression algorithm, or another face recognition algorithm such as one based on unsupervised learning; this embodiment places no limitation on it.
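The patent does not fix a concrete formula for this step. As one hedged illustration only, a pinhole-camera approximation can map the apparent face width in a captured image to a rough distance, assuming a calibrated focal length and an average real face width; both values below are assumptions, not figures from the patent.

```python
def distance_from_face_width(face_width_px, focal_length_px=800.0,
                             real_face_width_m=0.16):
    """Rough first reference distance from the apparent face size.

    Pinhole-camera approximation: distance = f * W_real / w_pixels.
    The focal length (in pixels) and the 0.16 m average face width are
    assumed calibration values."""
    if face_width_px <= 0:
        raise ValueError("face width must be positive")
    return focal_length_px * real_face_width_m / face_width_px

# A face 200 pixels wide is estimated at about 0.64 m.
print(round(distance_from_face_width(200), 2))
```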
Step S20: the device acquires sound audio information in the current environment and calculates a second reference distance to the current user according to the sound audio information.
It should be noted that the current environment may be the environment in which the distance sensing device is located, a pre-set usage area of the distance sensing device, or an environment of the device determined by other conditions; this embodiment places no limitation on it. The sound audio information produced by the current user in the current environment may be obtained by collecting and processing sounds made by the current user such as footsteps, breathing, and speech, or by collecting all sounds in the current environment and applying processing such as filtering, echo cancellation, and high-fidelity enhancement; it may of course also be sound audio information obtained in other ways, and this embodiment places no limitation on it.
It will be understood that the distance sensing device acquires the sound audio information produced by the current user in the current environment and calculates the second reference distance to the current user according to that information: from the sound audio information, the distance between the current user and the distance sensing device can be calculated according to the magnitude of the sound wave and other parameters. The second reference distance may be an approximate distance, that is, a distance range with an error value; it may of course also be a single value, a relatively precise distance reference value obtained through extensive computation and comparison, and this embodiment places no limitation on it.
It should be understood that the algorithm used to calculate the second reference distance from the sound audio information may be a deep-learning-based algorithm such as a deep feature (Deep Feature) algorithm or a deep vector (Deep Vector) method, a Mel frequency cepstral coefficient (MFCC) algorithm, a perceptual linear prediction (PLP) algorithm, or another voiceprint recognition algorithm such as a filter bank (FB) feature algorithm; this embodiment places no limitation on it.
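Again the patent leaves the exact calculation open. As one hedged sketch, the received signal level can be mapped to a rough distance using the 1/r falloff of sound pressure, given a calibrated reference level at a known distance; the reference values below are assumptions.

```python
import numpy as np

def distance_from_sound_level(frame, ref_rms=0.1, ref_distance_m=1.0):
    """Rough second reference distance from the received signal level:
    a voice that measures ref_rms at ref_distance_m measures about
    ref_rms * ref_distance_m / d at distance d.  The reference values
    are assumed calibration constants, not figures from the patent."""
    rms = float(np.sqrt(np.mean(np.square(frame)))) + 1e-6
    return ref_distance_m * ref_rms / rms

# A quieter frame (peak 0.05, RMS about 0.035) maps to roughly 2.8 m.
sr = 16000
frame = 0.05 * np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
print(round(distance_from_sound_level(frame), 2))
```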
Step S30: a target distance is determined according to the first reference distance and the second reference distance, the target distance being the actual distance between the current user and the distance sensing device.
It should be noted that determining the target distance according to the first and second reference distances may consist of applying a preset algorithm to the first and second reference distances, obtaining a calculation result, and determining the target distance according to that result. The target distance is the actual distance between the current user and the distance sensing device, and is taken as the single distance between the current user and the device at the current moment.
Further, step S30 specifically comprises:
calculating a weighted average of the first reference distance and the second reference distance according to a preset weight ratio, and obtaining the target distance from the result of the calculation, the target distance being the actual distance between the current user and the distance sensing device.
It should be understood that the preset weight ratio is the weight ratio applied to the first reference distance and the second reference distance when they are combined. It may be an appropriate ratio obtained by technical personnel through extensive training and learning, a ratio set by the user, the default ratio of the distance sensing device, or a ratio suited to the user that the device derives from the user's usage habits combined with big-data analysis; it may of course also be a weight ratio determined in other ways, and this embodiment places no limitation on it.
It will be understood that the first and second reference distances are combined by a weighted average according to the preset weight ratio and the target distance is obtained from the result. Calculating the distance between the current user and the distance sensing device with a preset weight ratio is more accurate and closer to the actual distance between the device and the current user, which improves the precision of the recognised distance and provides accurate data for subsequent operations.
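For instance, with an assumed 0.7/0.3 weight ratio, a 1.2 m face-based estimate and a 1.6 m sound-based estimate combine to 1.32 m; the numbers are purely illustrative, since the patent leaves the preset weight ratio to training, user setting, or device defaults.

```python
import numpy as np

# Illustrative weights: trust the face-based estimate a little more.
d_face, d_sound = 1.2, 1.6                                   # metres (example values)
target_distance = np.average([d_face, d_sound], weights=[0.7, 0.3])
print(f"target distance: {target_distance:.2f} m")           # 1.32 m
```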
Step S40: the target distance is compared with a preset distance, and when the target distance is smaller than the preset distance the human-machine dialogue function is started.
It should be noted that the preset distance is the distance, set in advance, at which the distance sensing device is triggered to start the human-machine dialogue function. The preset distance may be an appropriate distance obtained by technical personnel through extensive training and learning, a distance set by the user, the default distance of the distance sensing device, or a distance suited to the user that the device derives from the user's usage habits combined with big-data analysis; it may of course also be a distance determined in other ways, and this embodiment places no limitation on it.
It will be understood that the distance sensing device may connect to other smart household appliances or smart home devices by wireless communication, so that after acquiring the user's voice information it can generate corresponding control instructions and thereby achieve voice control of those appliances. The target distance is compared with the preset distance, and when the target distance is smaller than the preset distance the human-machine dialogue function is started. For example, the distance sensing device may play a prompt such as "Welcome home! Shall I turn on the television for you?" or "Welcome home! Shall I open the curtains for you?"; after receiving the user's voice feedback it generates the corresponding control instruction and sends it to the corresponding smart household appliance or smart home device, thereby achieving voice control of the appliance.
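A minimal sketch of this trigger, assuming a 1.5 m preset distance and reusing one of the greetings quoted above; the threshold value and function name are assumptions.

```python
def maybe_start_dialogue(target_distance_m, preset_distance_m=1.5):
    """Step S40 as a sketch: start the dialogue only when the user is
    closer than the preset distance.  The 1.5 m threshold is an assumed
    configuration value; the greeting echoes the one quoted above."""
    if target_distance_m < preset_distance_m:
        return "Welcome home! Shall I turn on the television for you?"
    return None   # stay idle; no wake-up keyword is needed either way

print(maybe_start_dialogue(1.32))   # greeting
print(maybe_start_dialogue(3.0))    # None
```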
In this embodiment, the distance sensing device acquires face image information of the current user and calculates a first reference distance to the current user according to the face image information, acquires sound audio information in the current environment and calculates a second reference distance to the current user according to the sound audio information, determines the target distance, the actual distance between the current user and the distance sensing device, according to the first and second reference distances, compares the target distance with a preset distance, and starts the human-machine dialogue function when the target distance is smaller than the preset distance. The distance obtained through face recognition and acoustic sensing is compared with a preset distance so that the device can make different responses without relying on a particular keyword to be woken, which shortens the device's response time, makes the device more intelligent and user-friendly, and improves the user experience.
Further, as shown in Fig. 3, a second embodiment of the distance sensing method of the present invention is proposed on the basis of the first embodiment. In this embodiment, step S10 specifically comprises the following steps:
Step S11: collecting a face image of the current user, performing dynamic comparative analysis on the face image, and generating a first analysis result.
It should be noted that the face image of the current user may be collected through the camera of the distance sensing device. The image may be a visible-light face image, a near-infrared face image, a multi-band face image covering mid-infrared, far-infrared, and thermal infrared, or of course another type of face image; this embodiment places no limitation on it.
It will be understood that after the face image of the current user is collected, it is compared with face images in a database. The comparison may be performed against a local database or against a cloud database, and the result of the face image analysis, that is, the first analysis result, is determined.
Further, step S11 specifically comprises:
collecting face images of the current user from different angles, extracting a plurality of face feature points from the face images, comparing each face feature point with preset face feature points in a preset face feature point database, and generating the first analysis result.
It should be understood that the preset face feature point database may be a local database or a cloud database, and must store a large amount of face feature point data in advance. The camera of the distance sensing device collects face images of the current user from multiple angles, a plurality of face feature points are extracted from the images, each face feature point is compared with the preset face feature points in the preset face feature point database, interfering feature points are screened out, and the first analysis result is generated after the comparative analysis.
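As a hedged sketch of the "compare and screen out interference points" step: the normalised coordinates, the nearest-neighbour tolerance, and the example landmark values are all assumptions made for illustration, not data from the patent.

```python
import numpy as np

def compare_landmarks(points, preset_points, max_offset=0.05):
    """Compare extracted face feature points with preset ones and screen
    out interference points, keeping the matches as the first analysis
    result.  Coordinates are assumed normalised to [0, 1]; the 0.05
    tolerance is an illustrative value."""
    points = np.asarray(points, dtype=float)
    preset = np.asarray(preset_points, dtype=float)
    # Distance from every extracted point to every preset point.
    dists = np.linalg.norm(points[:, None, :] - preset[None, :, :], axis=2)
    nearest = dists.min(axis=1)
    kept = points[nearest <= max_offset]          # plausible landmarks
    return {"matched": len(kept), "total": len(points), "points": kept}

preset_db = [[0.3, 0.4], [0.7, 0.4], [0.5, 0.6], [0.5, 0.8]]   # eyes, nose, mouth
detected = [[0.31, 0.41], [0.69, 0.39], [0.52, 0.61], [0.1, 0.1]]  # last one is noise
print(compare_landmarks(detected, preset_db)["matched"])   # 3
```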
Step S12: taking the first analysis result as the face image information, and calculating the first reference distance to the current user from the face image information using a preset face recognition algorithm.
It should be noted that the preset face recognition algorithm is an algorithm, set in advance, that calculates the distance to the current user using the face image information. It may be an algorithm based on face feature point recognition and whole-face image recognition, an algorithm based on face template matching, an algorithm based on neural network recognition, or of course another pre-set algorithm based on other recognition methods; this embodiment places no limitation on it.
It will be understood that the face image information can be substituted into the preset face recognition algorithm to calculate the distance to the current user, and that distance is taken as the first reference distance.
Correspondingly, step S20 specifically comprises the following steps:
Step S21: collecting the sound produced by the current user in the current environment, performing voiceprint analysis on the sound, and generating a second analysis result.
It should be noted that the sound produced by the current user in the current environment is any sound the current user makes there. It may be obtained by processing sounds of the current user such as footsteps, breathing, and speech collected by the microphone array of the distance sensing device, or by collecting all sounds in the current environment and applying processing such as filtering, echo cancellation, and high-fidelity enhancement; it may of course also be sound obtained in other ways, and this embodiment places no limitation on it.
It will be understood that after the sound produced by the current user in the current environment is collected, it is compared with sounds in a database. The comparison may be performed against a local database or against a cloud database, and the result of the sound analysis, that is, the second analysis result, is determined.
Further, step S21 specifically comprises:
collecting the sound produced by the current user in the current environment, extracting a plurality of voiceprint features from the sound, comparing each voiceprint feature with preset voiceprint features in a preset voiceprint feature database, and generating the second analysis result.
It should be understood that the preset voiceprint feature database may be a local database or a cloud database, and must store a large amount of voiceprint feature data in advance. The microphone array of the distance sensing device can collect, from all directions, the sound produced by the current user in the current environment; a plurality of voiceprint features are extracted from the sound, each voiceprint feature is compared with the preset voiceprint features in the preset voiceprint feature database, interfering noise and echo are screened out, and the second analysis result is generated after the comparative analysis.
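As a hedged sketch of the voiceprint comparison, using MFCC features (one of the feature types mentioned in this description) and cosine similarity: the use of the librosa library, the 20-coefficient setting, and the synthetic test signal are assumptions for illustration, not the patent's exact algorithm.

```python
import numpy as np
import librosa   # assumed available for the MFCC computation

def voiceprint_similarity(audio, sr, preset_voiceprint, n_mfcc=20):
    """Summarise an utterance as a mean MFCC vector and compare it with a
    preset voiceprint by cosine similarity.  A minimal sketch of the
    'compare each voiceprint feature with the preset voiceprint feature
    database' step."""
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, frames)
    vec = mfcc.mean(axis=1)
    cos = np.dot(vec, preset_voiceprint) / (
        np.linalg.norm(vec) * np.linalg.norm(preset_voiceprint) + 1e-9)
    return float(cos)

# Example with a synthetic tone standing in for recorded speech.
sr = 16000
audio = 0.1 * np.sin(2 * np.pi * 180 * np.arange(sr) / sr).astype(np.float32)
enrolled = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20).mean(axis=1)
print(round(voiceprint_similarity(audio, sr, enrolled), 3))   # ~1.0 for the same signal
```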
Step S22: taking the second analysis result as the sound audio information, and calculating the second reference distance to the current user from the sound audio information using a preset voiceprint recognition algorithm.
It should be noted that the preset voiceprint recognition algorithm is an algorithm, set in advance, that calculates the distance to the current user using the sound audio information. It may be a deep-learning-based algorithm such as a deep feature algorithm or a deep vector method, a Mel frequency cepstral coefficient algorithm, a perceptual linear prediction algorithm, or another voiceprint recognition algorithm such as a filter bank feature algorithm into which the sound audio information is substituted to calculate the second reference distance; this embodiment places no limitation on it.
It will be understood that the sound audio information can be substituted into the preset voiceprint recognition algorithm to calculate the distance to the current user, and that distance is taken as the second reference distance.
In this embodiment, a face image of the current user is collected, dynamic comparative analysis is performed on it, and a first analysis result is generated; the first analysis result is taken as the face image information, and the first reference distance to the current user is calculated from it using a preset face recognition algorithm. The sound produced by the current user in the current environment is collected, voiceprint analysis is performed on it, and a second analysis result is generated; the second analysis result is taken as the sound audio information, and the second reference distance to the current user is calculated from it using a preset voiceprint recognition algorithm. The preset face recognition algorithm and the preset voiceprint recognition algorithm allow the distance to the current user to be calculated quickly, which speeds up the distance calculation, further shortens the device's response time, makes the device more intelligent and user-friendly, and improves the user experience.
Further, as shown in Fig. 4, a third embodiment of the distance sensing method of the present invention is proposed on the basis of the second embodiment. In this embodiment, the distance sensing method further comprises the following steps after step S30:
Step S301: acquiring human-body infrared information of the current user, and calculating a third reference distance to the current user according to the human-body infrared information.
It should be noted that the human-body infrared information of the current user may be obtained by collecting an infrared map of the current user's body with an infrared sensor, by collecting the user's infrared spectrum with a near-infrared camera, or by collecting an infrared map with mid-infrared, far-infrared, and thermal-infrared cameras; it may of course also be obtained in other ways, and this embodiment places no limitation on it.
It will be understood that after the human-body infrared information of the current user is obtained, the distance to the current user can be calculated from it using a preset infrared ranging algorithm, and that distance is taken as the third reference distance.
It should be understood that the distance to the current user may be calculated by substituting the human-body infrared information into the preset infrared ranging algorithm, or by an infrared ranging unit in the distance sensing device; the third reference distance may of course also be calculated from the human-body infrared information in other ways, and this embodiment places no limitation on it.
Step S302: correcting the target distance according to the third reference distance, and taking the corrected distance as the new target distance.
It will be understood that correcting the target distance according to the third reference distance may consist of setting an error compensation value according to the third reference distance and then applying that compensation to the target distance, or of taking a further weighted average of the third reference distance and the target distance and using the averaged value as the new target distance; the target distance may of course also be corrected with the third reference distance in other ways, and this embodiment places no limitation on it.
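A minimal sketch of the second option, the further weighted average, with an assumed 0.3 weight on the infrared reference:

```python
def correct_target_distance(target_m, ir_reference_m, ir_weight=0.3):
    """Correct the fused target distance with the infrared-based third
    reference distance by a second weighted average, as one of the
    options described above.  The 0.3 weight is an assumed value."""
    return (1.0 - ir_weight) * target_m + ir_weight * ir_reference_m

print(correct_target_distance(1.32, 1.3))   # 1.314 m
```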
Correspondingly, before step S10 the distance sensing method further comprises the following steps:
Step S01: acquiring the current operating mode and, when the current operating mode is a preset privacy mode, acquiring the face image information of the current user and matching the face image information against preset face image information in a preset face image information database.
It should be noted that the current operating mode is the operating mode the distance sensing device is currently in. The operating modes may include a preset privacy mode and a preset open mode. The preset privacy mode resembles a security protection mode: apart from the users corresponding to the preset face images in the preset face image information database, no other user who wishes to use the distance sensing device is permitted to do so. In the preset open mode, anyone may use the distance sensing device. Other preset modes may of course also serve as operating modes of the distance sensing device, and this embodiment places no limitation on it.
It will be understood that when the current operating mode of the distance sensing device is the preset privacy mode, the device can also serve to monitor the current environment: by acquiring the face image information of the current user and matching it against the preset face image information in the preset face image information database, the device can learn whether the current user is one who is permitted to use it.
Step S02: when the face image information does not match the preset face image information, generating a prompt message and sending it to a preset information receiving terminal.
It should be noted that when the face image information does not match the preset face image information, the user corresponding to the current face image information is not one who is permitted to use the device and is very likely a stranger, or even an unidentified person such as a thief. The distance sensing device can generate a prompt message from the collected face image information and sound audio information and send it to the preset information receiving terminal. The preset information receiving terminal is the terminal, set in advance, that receives the corresponding prompt message when the distance sensing device detects an abnormality in the preset privacy mode; it may be a mobile terminal, a computer, or another terminal device capable of receiving information, and this embodiment places no limitation on it.
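A hedged sketch of this check, treating the preset face images as embedding vectors and comparing by cosine similarity: the embedding representation, the 0.8 threshold, and the example vectors are assumptions for illustration, and the delivery channel for the message is discussed next.

```python
import numpy as np

def check_private_mode(face_embedding, preset_embeddings, threshold=0.8):
    """Return an alert message when the current face matches none of the
    preset face images in privacy mode; return None for a permitted
    user.  Cosine similarity and the 0.8 threshold are assumptions."""
    v = np.asarray(face_embedding, dtype=float)
    v = v / (np.linalg.norm(v) + 1e-9)
    for ref in preset_embeddings:
        r = np.asarray(ref, dtype=float)
        r = r / (np.linalg.norm(r) + 1e-9)
        if float(np.dot(v, r)) >= threshold:
            return None                       # permitted user, no alert
    return "Unrecognised user detected while in privacy mode."

allowed = [[0.1, 0.9, 0.2], [0.8, 0.1, 0.3]]
print(check_private_mode([0.12, 0.88, 0.21], allowed))   # None -> permitted
print(check_private_mode([0.9, 0.05, -0.4], allowed))    # alert message
```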
It will be understood that besides generating the prompt message from the face image information and sound audio information, the device may also generate a simple notification message, for example a message indicating that the distance sensing device is being used abnormally; the prompt message may of course also be generated in other ways, and this embodiment places no limitation on it. The prompt message may be sent by short message, by an instant-messaging service such as WeChat, or by other means such as a phone call or e-mail; the preset information receiving terminal may of course also be notified in other ways, and this embodiment places no limitation on it.
It should be understood that after receiving the prompt message, the preset information receiving terminal displays it to its user, who can then take corresponding measures according to the message, such as raising an alarm or authorising the use of the device. In this way the distance sensing device can monitor its environment in real time, and if an accident such as a fire or a theft occurs it can promptly generate a prompt message and send it to the preset information receiving terminal, thereby protecting the user's property.
In this embodiment, the human-body infrared information of the current user is acquired, a third reference distance to the current user is calculated according to it, the target distance is corrected according to the third reference distance, and the corrected distance is taken as the new target distance, which provides a more accurate distance between the current user and the distance sensing device, speeds up the distance calculation, further shortens the device's response time, and makes the device more intelligent and user-friendly, improving the user experience. In addition, the current operating mode is acquired; when it is the preset privacy mode, the face image information of the current user is acquired and matched against the preset face image information in the preset face image information database, and when they do not match a prompt message is generated and sent to the preset information receiving terminal, which makes the distance sensing device more intelligent and user-friendly, protects the user's property, and further improves the user experience.
In addition, an embodiment of the present invention also proposes a storage medium on which a distance sensing program is stored. When executed by a processor, the distance sensing program implements the following operations:
acquiring face image information of the current user with the distance sensing device, and calculating a first reference distance to the current user according to the face image information;
acquiring sound audio information in the current environment, and calculating a second reference distance to the current user according to the sound audio information;
determining a target distance according to the first reference distance and the second reference distance, the target distance being the actual distance between the current user and the distance sensing device;
comparing the target distance with a preset distance, and starting the human-machine dialogue function when the target distance is smaller than the preset distance.
Further, the distance perspective answers program also to realize following operation when being executed by processor:
The facial image of the active user is gathered, dynamic comparison analysis is carried out to the facial image, and generate first
Analysis result;
Using first analysis result as the human face image information, using default face recognition algorithms according to the people
Face image information calculates first reference distance between the active user.
Further, the distance perspective answers program also to realize following operation when being executed by processor:
The facial image of active user's different angle is gathered, extracts multiple face characteristics in the facial image
Point, each human face characteristic point and the default human face characteristic point in default face characteristic point data base are compared, and generate institute
State the first analysis result.
Further, the distance perspective answers program also to realize following operation when being executed by processor:
The sound that the active user produces in the current environment is gathered, voiceprint analysis are carried out to the sound, and
Generate the second analysis result;
Using second analysis result as the wave audio information, using default voiceprint recognition algorithm according to the sound
Sound audio-frequency information calculates second reference distance between the active user.
Further, when the distance sensing program is executed by the processor, the following operations are also implemented:
collecting the sound produced by the current user in the current environment, extracting multiple voiceprint features from the sound, comparing each voiceprint feature with the preset voiceprint features in a preset voiceprint feature database, and generating the second analysis result.
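For the voiceprint comparison, a common formulation treats the extracted voiceprint features as a vector and compares it with the preset vectors by cosine similarity; the sketch below assumes that representation and an illustrative acceptance threshold, neither of which is specified in the disclosure.

```python
# Sketch of comparing extracted voiceprint features against a preset voiceprint
# feature database using cosine similarity. The vector representation and the
# acceptance threshold are assumptions, not taken from the disclosure.
import math


def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def match_voiceprint(features, preset_db, threshold=0.8):
    """Return the name of the best-matching preset voiceprint, or None."""
    name, score = max(
        ((k, cosine_similarity(features, v)) for k, v in preset_db.items()),
        key=lambda kv: kv[1],
    )
    return name if score >= threshold else None


if __name__ == "__main__":
    preset_db = {"user_a": [0.9, 0.1, 0.3], "user_b": [0.1, 0.8, 0.5]}
    print(match_voiceprint([0.85, 0.15, 0.25], preset_db))  # "user_a"
```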
Further, when the distance sensing program is executed by the processor, the following operations are also implemented:
performing a weighted average calculation on the first reference distance and the second reference distance according to a preset weight ratio, and obtaining the target distance from the calculation result, the target distance being the actual distance between the current user and the distance sensing device.
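The weighted average itself is straightforward; the sketch below shows one realization in which the preset weight ratio is normalized so the weights sum to one. The 0.6/0.4 split is an assumed example rather than a value taken from the disclosure.

```python
# One realization of the weighted-average step: the two reference distances
# are combined with a preset weight ratio, normalized to sum to one. The
# 0.6/0.4 split is an assumed example, not a value from the disclosure.

def weighted_target_distance(d_face_m, d_sound_m, w_face=0.6, w_sound=0.4):
    """Weighted average of the first and second reference distances (m)."""
    total = w_face + w_sound
    return (w_face * d_face_m + w_sound * d_sound_m) / total


if __name__ == "__main__":
    # Face estimate 1.2 m, sound estimate 1.8 m -> target distance 1.44 m
    print(weighted_target_distance(1.2, 1.8))
```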
Further, when the distance sensing program is executed by the processor, the following operations are also implemented:
obtaining human body infrared information of the current user, and calculating a third reference distance to the current user according to the human body infrared information;
correcting the target distance according to the third reference distance, and taking the corrected distance as the new target distance.
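How the third reference distance corrects the target distance is not detailed. One simple scheme, sketched below, blends the existing target distance with the infrared-based estimate using a fixed confidence weight; the weight value is an assumption.

```python
# Sketch of correcting the target distance with the infrared-based third
# reference distance by blending the two estimates. The blend weight is an
# assumed value, not part of the disclosure.

def corrected_target_distance(target_m, infrared_m, infrared_weight=0.3):
    """Blend the current target distance with the third reference distance."""
    return (1.0 - infrared_weight) * target_m + infrared_weight * infrared_m


if __name__ == "__main__":
    # Target 1.44 m, infrared estimate 1.6 m -> corrected target about 1.49 m
    print(round(corrected_target_distance(1.44, 1.6), 2))
```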
Further, when the distance sensing program is executed by the processor, the following operations are also implemented:
obtaining the current operating mode, and, when the current operating mode is a preset private mode, obtaining the facial image information of the current user and matching it against the preset facial image information in a preset facial image information database;
when the facial image information does not match the preset facial image information, generating a prompt message and sending it to a preset information receiving terminal.
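A minimal sketch of this private-mode guard follows. The send_prompt function and the receiving-terminal identifier are hypothetical placeholders for whatever channel (for example, a push notification or SMS gateway) delivers the prompt message to the preset information receiving terminal.

```python
# Sketch of the private-mode guard: when the device is in the preset private
# mode and the current user's face does not match any preset face, a prompt
# message is sent to the preset information receiving terminal. The
# send_prompt function and terminal identifier are hypothetical placeholders.

def send_prompt(receiver, message):
    """Placeholder for the delivery channel (e.g. push notification or SMS)."""
    print(f"to {receiver}: {message}")


def private_mode_check(current_mode, face_matches_preset, receiver="owner-terminal"):
    """Send a prompt only in private mode when the face is not recognized."""
    if current_mode != "private":
        return
    if not face_matches_preset:
        send_prompt(receiver, "Unrecognized user detected by the device.")


if __name__ == "__main__":
    private_mode_check("private", face_matches_preset=False)
```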
Through the above scheme, this embodiment obtains the facial image information of the current user via the distance sensing device, calculates the first reference distance to the current user from the facial image information, obtains the sound audio information in the current environment, calculates the second reference distance to the current user from the sound audio information, determines the target distance from the first reference distance and the second reference distance, the target distance being the actual distance between the current user and the distance sensing device, compares the target distance with the preset distance, and starts the human-machine dialogue function when the target distance is less than the preset distance. By comparing the distance obtained through face recognition and acoustic sensing with the preset distance, the device can make different responses and be woken without relying on a particular keyword, which shortens the device's response time, makes the device more intelligent and user-friendly, and improves the user experience.
It should be noted that, as used herein, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or system that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or system. In the absence of further limitations, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or system that includes that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the invention. Any equivalent structural or process transformation made using the contents of the specification and accompanying drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.
Claims (10)
1. A distance sensing method, characterized in that the distance sensing method comprises:
obtaining, by a distance sensing device, facial image information of a current user, and calculating a first reference distance to the current user according to the facial image information;
obtaining sound audio information in a current environment, and calculating a second reference distance to the current user according to the sound audio information;
determining a target distance according to the first reference distance and the second reference distance, the target distance being the actual distance between the current user and the distance sensing device;
comparing the target distance with a preset distance, and starting a human-machine dialogue function when the target distance is less than the preset distance.
2. The distance sensing method according to claim 1, characterized in that obtaining, by the distance sensing device, the facial image information of the current user, and calculating the first reference distance to the current user according to the facial image information specifically comprises:
collecting a facial image of the current user, performing dynamic comparative analysis on the facial image, and generating a first analysis result;
taking the first analysis result as the facial image information, and calculating the first reference distance to the current user from the facial image information using a preset face recognition algorithm.
3. The distance sensing method according to claim 2, characterized in that collecting the facial image of the current user, performing dynamic comparative analysis on the facial image, and generating the first analysis result specifically comprises:
collecting facial images of the current user from different angles, extracting multiple facial feature points from the facial images, comparing each facial feature point with the preset facial feature points in a preset facial feature point database, and generating the first analysis result.
4. The distance sensing method according to claim 1, characterized in that obtaining the sound audio information in the current environment, and calculating the second reference distance to the current user according to the sound audio information specifically comprises:
collecting the sound produced by the current user in the current environment, performing voiceprint analysis on the sound, and generating a second analysis result;
taking the second analysis result as the sound audio information, and calculating the second reference distance to the current user from the sound audio information using a preset voiceprint recognition algorithm.
5. The distance sensing method according to claim 4, characterized in that collecting the sound produced by the current user in the current environment, performing voiceprint analysis on the sound, and generating the second analysis result specifically comprises:
collecting the sound produced by the current user in the current environment, extracting multiple voiceprint features from the sound, comparing each voiceprint feature with the preset voiceprint features in a preset voiceprint feature database, and generating the second analysis result.
6. The distance sensing method according to claim 1, characterized in that determining the target distance according to the first reference distance and the second reference distance, the target distance being the actual distance between the current user and the distance sensing device, specifically comprises:
performing a weighted average calculation on the first reference distance and the second reference distance according to a preset weight ratio, and obtaining the target distance from the calculation result, the target distance being the actual distance between the current user and the distance sensing device.
7. The distance sensing method according to any one of claims 1 to 6, characterized in that before comparing the target distance with the preset distance and starting the human-machine dialogue function when the target distance is less than the preset distance, the distance sensing method further comprises:
obtaining human body infrared information of the current user, and calculating a third reference distance to the current user according to the human body infrared information;
correcting the target distance according to the third reference distance, and taking the corrected distance as the new target distance.
8. The distance sensing method according to any one of claims 1 to 6, characterized in that before the distance sensing device obtains the facial image information of the current user and calculates the first reference distance to the current user according to the facial image information, the distance sensing method further comprises:
obtaining the current operating mode, and, when the current operating mode is a preset private mode, obtaining the facial image information of the current user and matching it against the preset facial image information in a preset facial image information database;
when the facial image information does not match the preset facial image information, generating a prompt message and sending it to a preset information receiving terminal.
9. A distance sensing device, characterized in that the distance sensing device comprises: a memory, a processor, and a distance sensing program stored on the memory and executable on the processor, the distance sensing program being configured to implement the steps of the distance sensing method according to any one of claims 1 to 8.
10. A storage medium, characterized in that a distance sensing program is stored on the storage medium, and when the distance sensing program is executed by a processor, the steps of the distance sensing method according to any one of claims 1 to 8 are implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710981331.1A CN107918726A (en) | 2017-10-18 | 2017-10-18 | Apart from inducing method, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107918726A true CN107918726A (en) | 2018-04-17 |
Family
ID=61894826
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710981331.1A Pending CN107918726A (en) | 2017-10-18 | 2017-10-18 | Apart from inducing method, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107918726A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104703090A (en) * | 2013-12-05 | 2015-06-10 | 北京东方正龙数字技术有限公司 | Automatic adjustment pick-up equipment based on face recognition and automatic adjustment method |
CN104092836A (en) * | 2014-06-11 | 2014-10-08 | 小米科技有限责任公司 | Power-saving method and apparatus |
CN105744441A (en) * | 2016-03-30 | 2016-07-06 | 苏州合欣美电子科技有限公司 | Self-adaptive volume adjustment loudspeaker box based on distance sensing |
CN106210511A (en) * | 2016-06-30 | 2016-12-07 | 纳恩博(北京)科技有限公司 | A kind of method and apparatus positioning user |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111832535A (en) * | 2018-08-24 | 2020-10-27 | 创新先进技术有限公司 | Face recognition method and device |
CN111081257A (en) * | 2018-10-19 | 2020-04-28 | 珠海格力电器股份有限公司 | Voice acquisition method, device, equipment and storage medium |
CN111105792A (en) * | 2018-10-29 | 2020-05-05 | 华为技术有限公司 | Voice interaction processing method and device |
US11620995B2 (en) | 2018-10-29 | 2023-04-04 | Huawei Technologies Co., Ltd. | Voice interaction processing method and apparatus |
CN113543874A (en) * | 2019-03-08 | 2021-10-22 | 富士胶片株式会社 | Data generation device and method, and learning device and method |
CN113543874B (en) * | 2019-03-08 | 2023-06-30 | 富士胶片株式会社 | Learning device and method |
CN113095116A (en) * | 2019-12-23 | 2021-07-09 | 深圳云天励飞技术有限公司 | Identity recognition method and related product |
CN113095116B (en) * | 2019-12-23 | 2024-03-22 | 深圳云天励飞技术有限公司 | Identity recognition method and related product |
CN111276142A (en) * | 2020-01-20 | 2020-06-12 | 北京声智科技有限公司 | Voice awakening method and electronic equipment |
CN111276142B (en) * | 2020-01-20 | 2023-04-07 | 北京声智科技有限公司 | Voice wake-up method and electronic equipment |
CN113242163A (en) * | 2021-06-09 | 2021-08-10 | 思必驰科技股份有限公司 | Voice wake-up method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107918726A (en) | Apart from inducing method, equipment and storage medium | |
CN105022981B (en) | A kind of method, device and mobile terminal detecting human eye health state | |
CN103561652B (en) | Method and system for assisting patients | |
US20170024885A1 (en) | Health information service system | |
CN109982124A (en) | User's scene intelligent analysis method, device and storage medium | |
CN109558512A (en) | A kind of personalized recommendation method based on audio, device and mobile terminal | |
CN107945625A (en) | A kind of pronunciation of English test and evaluation system | |
CN107635147A (en) | Health information management TV based on multi-modal man-machine interaction | |
US10325144B2 (en) | Wearable apparatus and information processing method and device thereof | |
WO2021047069A1 (en) | Face recognition method and electronic terminal device | |
CN111967770A (en) | Questionnaire data processing method and device based on big data and storage medium | |
CN110007758A (en) | A kind of control method and terminal of terminal | |
CN109255064A (en) | Information search method, device, intelligent glasses and storage medium | |
CN109119080A (en) | Sound identification method, device, wearable device and storage medium | |
KR20090001848A (en) | Method and system monitoring facial expression | |
CN113096808B (en) | Event prompting method, device, computer equipment and storage medium | |
CN113764099A (en) | Psychological state analysis method, device, equipment and medium based on artificial intelligence | |
US11418757B1 (en) | Controlled-environment facility video communications monitoring system | |
CN113301372A (en) | Live broadcast method, device, terminal and storage medium | |
CN110491384B (en) | Voice data processing method and device | |
CN109102813B (en) | Voiceprint recognition method and device, electronic equipment and storage medium | |
CN114636231A (en) | Control method and device of air conditioner, terminal and medium | |
JP4631464B2 (en) | Physical condition determination device and program thereof | |
CN108615020A (en) | A kind of floating population number statistical method in video monitoring regional | |
US20200301398A1 (en) | Information processing device, information processing method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| AD01 | Patent right deemed abandoned | Effective date of abandoning: 20210702 |