CN108831505A - Method and apparatus for identifying the usage scenario of an application - Google Patents
Method and apparatus for identifying the usage scenario of an application
- Publication number
- CN108831505A (application CN201810538486.2A)
- Authority
- CN
- China
- Prior art keywords
- voice information
- probability
- scene
- target application
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/03—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
- G10L25/18—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72448—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
- H04M1/72454—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Spectroscopy & Molecular Physics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
An embodiment of the present application discloses a method and apparatus for identifying the usage scenario of an application. One specific embodiment of the method includes: in response to detecting that a sub-application of a target application belonging to a preset category is running, collecting voice information from the surrounding environment; performing feature extraction on the collected voice information and inputting the extracted feature information into a pre-trained scene recognition model to obtain a recognition result, where the recognition result includes the probability that the voice information was collected in a preset scene and the scene recognition model is used to characterize the correspondence between feature information and recognition results; and determining, based on the probability, whether the current usage scenario of the target application is the preset scene. This embodiment achieves identification of the current usage scenario of the target application.
Description
Technical field
Embodiments of the present application relate to the field of computer technology, and in particular to a method and apparatus for identifying the usage scenario of an application.
Background art
Existing applications generally have at least one usage scenario. Taking a navigation application as an example, its usage scenarios can roughly be divided into two kinds: in-vehicle, e.g., a user navigating with the application while in a car; and out-of-vehicle, e.g., a user querying a route with the application indoors or elsewhere. By identifying the current usage scenario of an application, the resulting recognition result can be applied to scenarios such as information pushing and/or user-profile generation.
Summary of the invention
Embodiments of the present application propose a method and apparatus for identifying the usage scenario of an application.
In a first aspect, an embodiment of the present application provides a method for identifying the usage scenario of an application, the method including: in response to detecting that a sub-application of a target application belonging to a preset category is running, collecting voice information from the surrounding environment; performing feature extraction on the collected voice information and inputting the extracted feature information into a pre-trained scene recognition model to obtain a recognition result, where the recognition result includes the probability that the voice information was collected in a preset scene and the scene recognition model is used to characterize the correspondence between feature information and recognition results; and determining, based on the probability, whether the current usage scenario of the target application is the preset scene.
In some embodiments, the method further includes: generating prompt information for indicating whether the usage scenario of the target application is the preset scene, and outputting the prompt information.
In some embodiments, outputting the prompt information includes: outputting the prompt information to a server side that provides support for the target application.
In some embodiments, determining, based on the probability, whether the current usage scenario of the target application is the preset scene includes: determining whether the probability is greater than a probability threshold; and, if the probability is greater than the probability threshold, determining that the current usage scenario of the target application is the preset scene.
In some embodiments, determining, based on the probability, whether the current usage scenario of the target application is the preset scene further includes: if the probability is not greater than the probability threshold, determining that the current usage scenario of the target application is not the preset scene.
In some embodiments, performing feature extraction on the collected voice information includes: extracting the spectrum of the voice information using a fast Fourier transform algorithm and using the spectrum as the feature information.
In some embodiments, the scene recognition model is obtained by performing the following training operations on a preset deep neural network: obtaining a preset set of training samples, where a training sample includes voice information and annotation information, the annotation information indicating whether the corresponding voice information was collected in the preset scene; and, for a training sample in the set, performing feature extraction on the voice information in the training sample, inputting the extracted feature information into the deep neural network to obtain the probability that the voice information in the training sample was collected in the preset scene, determining the difference between that probability and the annotation information in the training sample, and adjusting the parameters of the deep neural network based on the difference.
In a second aspect, an embodiment of the present application provides an apparatus for identifying the usage scenario of an application, the apparatus including: an acquisition unit configured to collect voice information from the surrounding environment in response to detecting that a sub-application of a target application belonging to a preset category is running; a recognition unit configured to perform feature extraction on the collected voice information and input the extracted feature information into a pre-trained scene recognition model to obtain a recognition result, where the recognition result includes the probability that the voice information was collected in a preset scene and the scene recognition model is used to characterize the correspondence between feature information and recognition results; and a determination unit configured to determine, based on the probability, whether the current usage scenario of the target application is the preset scene.
In some embodiments, the apparatus further includes an output unit configured to generate prompt information for indicating whether the usage scenario of the target application is the preset scene and to output the prompt information.
In some embodiments, the output unit is further configured to output the prompt information to a server side that provides support for the target application.
In some embodiments, the determination unit includes: a determination sub-unit configured to determine whether the probability is greater than a probability threshold; and a first processing sub-unit configured to determine, if the probability is greater than the probability threshold, that the current usage scenario of the target application is the preset scene.
In some embodiments, the determination unit further includes a second processing sub-unit configured to determine, if the probability is not greater than the probability threshold, that the current usage scenario of the target application is not the preset scene.
In some embodiments, the recognition unit is further configured to extract the spectrum of the voice information using a fast Fourier transform algorithm and use the spectrum as the feature information.
In some embodiments, the scene recognition model is obtained by performing the following training operations on a preset deep neural network: obtaining a preset set of training samples, where a training sample includes voice information and annotation information, the annotation information indicating whether the corresponding voice information was collected in the preset scene; and, for a training sample in the set, performing feature extraction on the voice information in the training sample, inputting the extracted feature information into the deep neural network to obtain the probability that the voice information in the training sample was collected in the preset scene, determining the difference between that probability and the annotation information in the training sample, and adjusting the parameters of the deep neural network based on the difference.
In a third aspect, an embodiment of the present application provides an electronic device including: one or more processors; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable medium storing a computer program which, when executed by a processor, implements the method described in any implementation of the first aspect.
The method and apparatus for identifying the usage scenario of an application provided by embodiments of the present application collect voice information from the surrounding environment in response to detecting that a sub-application of a target application belonging to a preset category is running, perform feature extraction on the collected voice information, and input the extracted feature information into a pre-trained scene recognition model to obtain a recognition result including the probability that the voice information was collected in a preset scene, so that whether the current usage scenario of the target application is the preset scene can be determined based on that probability. This achieves identification of the current usage scenario of the target application.
Brief description of the drawings
Other features, objects, and advantages of the present application will become more apparent from the following detailed description of non-restrictive embodiments, read with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which the present application may be applied;
Fig. 2 is a flowchart of one embodiment of the method for identifying the usage scenario of an application according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the method for identifying the usage scenario of an application according to the present application;
Fig. 4 is a flowchart of another embodiment of the method for identifying the usage scenario of an application according to the present application;
Fig. 5 is a structural schematic diagram of one embodiment of the apparatus for identifying the usage scenario of an application according to the present application;
Fig. 6 is a structural schematic diagram of a computer system adapted to implement an electronic device of embodiments of the present application.
Detailed description
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain the related invention, not to restrict it. It should also be noted that, for ease of description, only the parts relevant to the related invention are shown in the drawings.
It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with each other. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 in which embodiments of the method or apparatus for identifying the usage scenario of an application of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include a terminal device 101, a network 102, and a voice acquisition device 103. The network 102 provides the medium for a communication link between the terminal device 101 and the voice acquisition device 103, and may include various connection types, such as wired or wireless communication links or fiber-optic cables.
The terminal device 101 may interact with the voice acquisition device 103 through the network 102 to obtain the required voice information from the voice acquisition device 103 and to analyze and otherwise process that voice information. Various communication client applications may be installed on the terminal device 101, such as web-browser applications and applications capable of providing voice services (e.g., navigation applications, input-method applications, photography applications, and so on).
The terminal device 101 may be hardware or software. When the terminal device 101 is hardware, it may be any of various electronic devices, including but not limited to a smartphone, a tablet computer, a laptop portable computer, a desktop computer, and the like. When the terminal device 101 is software, it may be installed in any of the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is made here.
The voice acquisition device 103 may be, for example, a microphone. The voice acquisition device 103 may be used to collect voice information and send the collected voice information to the terminal device 101.
It should be noted that the method for identifying the usage scenario of an application provided by embodiments of the present application is generally executed by the terminal device 101. Correspondingly, the apparatus for identifying the usage scenario of an application is generally disposed in the terminal device 101.
It should be understood that the numbers of terminal devices, networks, and voice acquisition devices in Fig. 1 are merely illustrative. Any number of terminal devices, networks, and voice acquisition devices may be provided according to implementation needs.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for identifying the usage scenario of an application according to the present application is shown. The flow 200 of the method includes the following steps.
Step 201: in response to detecting that a sub-application of a target application belonging to a preset category is running, collect voice information from the surrounding environment.
In this embodiment, the executing body of the method (e.g., the terminal device 101 shown in Fig. 1) may collect voice information from the surrounding environment in response to detecting that a sub-application of a target application belonging to a preset category is running. Here, the target application may be an application installed on the executing body that includes a sub-application belonging to the preset category. The preset category may include, for example, a voice-service-providing category; a sub-application under this category may be a sub-application for providing voice services.
It should be noted that the executing body may detect, in real time or periodically, whether the sub-application of the target application belonging to the preset category is running, and may use any of various detection methods to do so.
For example, a sub-application may be associated with a process identifier. When a sub-application is run, its associated process identifier may be written into a preset process-identifier list. The executing body may detect whether the process identifier associated with the sub-application of the target application belonging to the preset category is present in that list; if it is, the executing body may determine that the sub-application is running (a minimal sketch of this check follows the examples below).
As another example, a sub-application may be associated with a process running state, which may be the running state of the process indicated by the sub-application's associated process identifier. The process running state may include, for example, a first state indicating that the process is running. The executing body may detect whether the process running state associated with the sub-application of the target application belonging to the preset category is the first state; if so, the executing body may determine that the sub-application is running.
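To make the first example concrete, the following Python sketch illustrates the process-identifier check. Everything in it is hypothetical, since the disclosure names no platform API: the `get_running_process_ids()` helper and the sub-application identifier are stand-ins for whatever the operating system provides.

```python
# Hypothetical sketch of the process-identifier check; the identifier and
# the process-list helper are stand-ins, not part of the disclosure.
VOICE_SUB_APP_ID = "com.example.nav.voiceservice"  # made-up sub-application id

def get_running_process_ids():
    """Stand-in for the platform's preset process-identifier list."""
    return {"com.example.nav.voiceservice", "com.example.launcher"}

def voice_sub_app_running():
    """True if the voice-service sub-application's identifier is in the list."""
    return VOICE_SUB_APP_ID in get_running_process_ids()
```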
It should be pointed out that, after detecting that the sub-application of the target application belonging to the preset category is running, the executing body may use a connected voice acquisition device (e.g., the voice acquisition device 103 shown in Fig. 1) to collect voice information from the surrounding environment. The duration of the collected voice information may be a specified duration (e.g., 0.1 or 0.2 seconds). It should be understood that the specified duration can be adjusted according to actual needs; this embodiment places no limitation on it.
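For concreteness, the collection step might look like the Python sketch below, which uses the third-party sounddevice package; the 16 kHz mono stream is an assumption, and the 0.2-second duration is one of the example durations above.

```python
# Sketch of collecting a short clip of ambient audio. The sounddevice
# package and a 16 kHz mono stream are assumptions; the 0.2 s duration
# follows the example above.
import sounddevice as sd

SAMPLE_RATE = 16_000  # Hz (assumed)
DURATION = 0.2        # seconds (example value from the text)

def collect_voice_information():
    clip = sd.rec(int(DURATION * SAMPLE_RATE), samplerate=SAMPLE_RATE,
                  channels=1, dtype="float32")
    sd.wait()            # block until the recording is finished
    return clip.ravel()  # 1-D array of audio samples
```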
Step 202: perform feature extraction on the collected voice information and input the extracted feature information into a pre-trained scene recognition model to obtain a recognition result.
In this embodiment, the executing body may perform feature extraction on the collected voice information and input the extracted feature information into the pre-trained scene recognition model to obtain a recognition result. The recognition result may include the probability that the voice information was collected in a preset scene. The preset scene may include, for example, a vehicle-mounted scene, which may also be called the "in-vehicle" or "on-vehicle" scene.
The scene recognition model may be used to characterize the correspondence between feature information and recognition results. The scene recognition model may be a mapping table pre-established by a technician based on statistics over a large amount of feature information and recognition results, characterizing the correspondence between the two; it may also be a classification model trained with, for example, a Naive Bayesian Model (NBM), a Support Vector Machine (SVM), or XGBoost (eXtreme Gradient Boosting), or obtained using a classification function (such as the softmax function).
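As one concrete reading of the classical-classifier option, the scikit-learn sketch below trains a probability-producing SVM. The random arrays and the 64-dimensional feature length are placeholders standing in for extracted spectra and their 0/1 scene annotations, not part of the disclosure.

```python
# Sketch: an SVM as the scene recognition model. The arrays are
# placeholders for extracted spectra and 0/1 scene annotations.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((200, 64))        # placeholder feature vectors
y = rng.integers(0, 2, 200)      # placeholder scene labels

model = SVC(probability=True).fit(X, y)      # probability=True enables predict_proba
p_preset = model.predict_proba(X[:1])[0, 1]  # probability of the preset scene
```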
In this embodiment, the executing body may, for example, use a Short-Time Fourier Transform (STFT) algorithm to perform feature extraction on the collected voice information. The short-time Fourier transform is a mathematical transform related to the Fourier transform that can determine the frequency and phase of the local sine-wave components of a time-varying signal.
In some optional implementations of this embodiment, the executing body may use a Fast Fourier Transform (FFT) algorithm to extract the spectrum of the collected voice information and use that spectrum as the feature information. The spectrum may be represented as a vector. The FFT is a fast algorithm for the discrete Fourier transform; it transforms a voice signal to the frequency domain and thereby yields the signal's spectrum.
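A minimal NumPy sketch of this optional FFT extraction follows; the FFT length, and hence the 512-bin feature vector, is an assumption chosen only so that the vector size matches the network sketch given further below.

```python
# Sketch of the optional FFT feature extraction: the magnitude spectrum
# of the clip, as a fixed-length vector. n_fft=1022 gives 512 bins
# (an assumed size, not fixed by the disclosure).
import numpy as np

def fft_features(clip, n_fft=1022):
    spectrum = np.fft.rfft(clip, n=n_fft)      # zero-pads or truncates to n_fft
    return np.abs(spectrum).astype("float32")  # 512-dimensional magnitude vector
```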
In some optional implementations of this embodiment, the scene recognition model may be obtained by an executing terminal (e.g., the executing body itself, or a server in remote communication with the executing body) by performing training operations on a preset deep neural network. The deep neural network may be an untrained or incompletely trained multilayer deep neural network, for example a seven-layer deep neural network. Starting from the first layer, the layers of such a seven-layer network may in turn be: an input layer (a convolutional layer performing one-dimensional convolution), a fully connected layer, a fully connected layer, a fully connected layer, a pooling layer, a fully connected layer, and an output layer (a layer for classification).
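For illustration only, this seven-layer arrangement can be realized as the following PyTorch module. The disclosure fixes only the layer types and their order, so every channel count and layer width here is an assumption, and a sigmoid output stands in for the classification layer.

```python
# Sketch of the seven-layer network: a one-dimensional convolutional
# input layer, three fully connected layers, a pooling layer, a fully
# connected layer, and a classifying output layer. All widths are assumed.
import torch
import torch.nn as nn

class SceneRecognitionNet(nn.Module):
    def __init__(self, feat_len=512):
        super().__init__()
        self.conv = nn.Conv1d(1, 8, kernel_size=5, padding=2)  # input layer (1-D convolution)
        self.fc1 = nn.Linear(8 * feat_len, 256)                # fully connected
        self.fc2 = nn.Linear(256, 256)                         # fully connected
        self.fc3 = nn.Linear(256, 256)                         # fully connected
        self.pool = nn.MaxPool1d(2)                            # pooling layer
        self.fc4 = nn.Linear(128, 64)                          # fully connected
        self.out = nn.Linear(64, 1)                            # output layer (classification)

    def forward(self, x):                              # x: (batch, feat_len) spectra
        h = torch.relu(self.conv(x.unsqueeze(1)))      # (batch, 8, feat_len)
        h = torch.relu(self.fc1(h.flatten(1)))
        h = torch.relu(self.fc2(h))
        h = torch.relu(self.fc3(h))
        h = self.pool(h.unsqueeze(1)).squeeze(1)       # (batch, 128)
        h = torch.relu(self.fc4(h))
        return torch.sigmoid(self.out(h))              # probability of the preset scene
```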
The training operations may include the following.
First, a preset set of training samples is obtained. The executing terminal may obtain the training sample set locally or from a connected server. A training sample may include voice information and annotation information, the annotation information indicating whether the corresponding voice information was collected in the preset scene. The annotation information may be represented by the digit 0 or 1: an annotation of 0 may indicate that the corresponding voice information was not collected in the preset scene, and an annotation of 1 may indicate that it was.
Then, for training samples in the set (for example, each training sample in the set), the executing terminal may perform the following operations.
First, perform feature extraction on the voice information in the training sample. Here, the executing terminal may, for example, use the same algorithm used in step 202 when performing feature extraction on collected voice information.
Second, input the extracted feature information into the deep neural network to obtain the probability that the voice information in the training sample was collected in the preset scene.
Third, determine the difference between that probability and the annotation information in the training sample. The executing terminal may determine the difference using any of various loss functions; using a loss function to determine a difference is a widely researched and applied technique at present, and is not described in detail here.
Fourth, adjust the parameters of the deep neural network based on the difference. The executing terminal may do so in any of various ways, for example using the BP (Back Propagation) algorithm or the SGD (Stochastic Gradient Descent) algorithm.
In practice, the deep neural network is trained when the differences between the probabilities obtained for the training samples and their annotation information satisfy a preset convergence condition, for example that the difference between the probability that the voice information in a training sample was collected in the preset scene and the sample's annotation information is smaller than a preset threshold.
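Under the same assumptions, the training operations might be sketched as the loop below, with binary cross-entropy as one instance of the "various loss functions" and plain SGD for the parameter adjustment; it reuses the hypothetical fft_features and SceneRecognitionNet sketches above.

```python
# Sketch of the training operations: extract features, obtain the
# probability from the network, measure the difference from the 0/1
# annotation (binary cross-entropy here), and adjust parameters by
# backpropagation with SGD. Assumes `samples` is a non-empty iterable
# of (clip, annotation) pairs with annotations given as 0.0 or 1.0.
import torch

def train(model, samples, epochs=10, convergence_threshold=1e-3):
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.BCELoss()
    for _ in range(epochs):
        for clip, annotation in samples:
            feats = torch.from_numpy(fft_features(clip)).unsqueeze(0)
            prob = model(feats).squeeze()   # probability of the preset scene
            loss = loss_fn(prob, torch.tensor(annotation))
            optimizer.zero_grad()
            loss.backward()                 # backpropagation
            optimizer.step()
        if loss.item() < convergence_threshold:  # simple convergence test
            return model
    return model
```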
Step 203: based on the probability in the recognition result, determine whether the current usage scenario of the target application is the preset scene.
In this embodiment, the executing body may determine, based on the probability in the recognition result obtained in step 202, whether the current usage scenario of the target application is the preset scene. As an example, the executing body may compare the probability with a probability threshold; if the probability is greater than the probability threshold, the executing body may determine that the current usage scenario of the target application is the preset scene.
In some optional implementations of this embodiment, if the probability in the recognition result is not greater than the probability threshold, the executing body may determine that the current usage scenario of the target application is not the preset scene.
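The comparison itself reduces to a one-line check; in the sketch below, the 0.5 default is an assumed value, since the embodiment leaves the probability threshold open.

```python
# Sketch of the threshold comparison in step 203. The 0.5 default is
# an assumed value; the disclosure does not fix the probability threshold.
def usage_scenario_is_preset(probability, probability_threshold=0.5):
    return probability > probability_threshold
```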
In some optional implementations of this embodiment, the executing body may generate prompt information for indicating whether the usage scenario of the target application is the preset scene, and output the prompt information, for example to a connected information pushing system and/or user-profile generation system. In this way, the application scenarios of the recognition result obtained by identifying the current usage scenario of the target application can be extended.
It should be noted that, if the executing body determines that the current usage scenario of the target application is the preset scene, it may generate prompt information including the name or identifier of the preset scene, or prompt information including the digit 1, which characterizes that the current usage scenario of the target application is the preset scene. If the executing body determines that the current usage scenario of the target application is not the preset scene, it may generate prompt information including the negation of the preset scene's name (for example, if the preset scene is named "vehicle-mounted scene", the prompt information may include "off-board scene"), or prompt information including the digit 0, which characterizes that the current usage scenario of the target application is not the preset scene.
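The alternatives just described (the scene name or its negation, or the digit 1/0 encoding) can be sketched as a small helper; the function name and defaults are illustrative only.

```python
# Sketch of prompt-information generation: either the preset scene's
# name / its negation, or the digit-1 / digit-0 encoding described above.
def make_prompt_information(is_preset, scene_name="vehicle-mounted scene",
                            numeric=False):
    if numeric:
        return "1" if is_preset else "0"
    # The text's example negation for "vehicle-mounted scene" is "off-board scene".
    return scene_name if is_preset else "not " + scene_name
```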
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the method for identifying the usage scenario of an application according to this embodiment. In the application scenario of Fig. 3, a user may hold a terminal device connected to a voice acquisition device. A navigation application may be installed on the terminal device; the navigation application may include a sub-application for providing voice services, which may be started automatically in response to the navigation application being launched. The preset scene is the vehicle-mounted scene. When driving, the user may open the navigation application on the terminal device to navigate. As shown by reference numeral 301, in response to detecting that the sub-application is running, the terminal device may send a voice-information collection instruction to the voice acquisition device. Then, as shown by reference numeral 302, the voice acquisition device may, based on the collection instruction, collect voice information from the surrounding environment and send the collected voice information to the terminal device. Next, as shown by reference numeral 303, the terminal device may perform feature extraction on the collected voice information and input the extracted feature information into the pre-trained scene recognition model to obtain a recognition result, where the recognition result may include the probability that the voice information was collected in the vehicle-mounted scene. Then, as shown by reference numeral 304, the terminal device may compare the probability with a probability threshold to determine whether the probability is greater than the threshold. Finally, as shown by reference numeral 305, in response to determining that the probability is greater than the probability threshold, the terminal device may determine that the current usage scenario of the navigation application is the vehicle-mounted scene.
The method provided by the above embodiment of the present application collects voice information from the surrounding environment in response to detecting that a sub-application of a target application belonging to a preset category is running, performs feature extraction on the collected voice information, and inputs the extracted feature information into a pre-trained scene recognition model to obtain a recognition result including the probability that the voice information was collected in a preset scene, so that whether the current usage scenario of the target application is the preset scene is determined based on that probability, achieving identification of the current usage scenario of the target application.
With further reference to Fig. 4, a flow 400 of another embodiment of the method for identifying the usage scenario of an application is shown. The flow 400 includes the following steps.
Step 401: in response to detecting that a sub-application of a target application belonging to a preset category is running, collect voice information from the surrounding environment.
Step 402: perform feature extraction on the collected voice information and input the extracted feature information into a pre-trained scene recognition model to obtain a recognition result.
Step 403: based on the probability in the recognition result, determine whether the current usage scenario of the target application is the preset scene.
In this embodiment, for the explanation of steps 401-403, reference may be made to the related descriptions of steps 201-203 in the embodiment shown in Fig. 2, which are not repeated here.
Step 404: generate prompt information for indicating whether the usage scenario of the target application is the preset scene, and output the prompt information to a server side that provides support for the target application.
For step 404, the executing body of the method (e.g., the terminal device 101 shown in Fig. 1) may generate prompt information for indicating whether the usage scenario of the target application is the preset scene. If the executing body determines that the current usage scenario of the target application is the preset scene, it may generate prompt information including the name or identifier of the preset scene, or prompt information including the digit 1, which characterizes that the current usage scenario of the target application is the preset scene. If the executing body determines that the current usage scenario of the target application is not the preset scene, it may generate prompt information including the negation of the preset scene's name (for example, if the preset scene is named "vehicle-mounted scene", the prompt information may include "off-board scene"), or prompt information including the digit 0, which characterizes that the current usage scenario of the target application is not the preset scene.
In addition, the executing body may output the generated prompt information to a server side that provides support for the target application. In this way, the server side can, based on the current usage scenario of the target application, customize scene-personalized services for the user to whom the executing body belongs.
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 in this embodiment highlights the steps of generating prompt information for indicating whether the current usage scenario of the target application is the preset scene and outputting that prompt information to the server side that provides support for the target application. The scheme described in this embodiment can thus not only identify the current usage scenario of the target application but also enable well-targeted information pushing: by pushing the generated prompt information to the server side that supports the target application, it can help the server side customize scene-personalized services for the user.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present application provides one embodiment of an apparatus for identifying the usage scenario of an application. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may specifically be applied in various electronic devices.
As shown in Fig. 5, the apparatus 500 for identifying the usage scenario of an application of this embodiment includes an acquisition unit 501, a recognition unit 502, and a determination unit 503. The acquisition unit 501 may be configured to collect voice information from the surrounding environment in response to detecting that a sub-application of a target application belonging to a preset category is running. The recognition unit 502 may be configured to perform feature extraction on the collected voice information and input the extracted feature information into a pre-trained scene recognition model to obtain a recognition result, where the recognition result may include the probability that the voice information was collected in a preset scene and the scene recognition model may be used to characterize the correspondence between feature information and recognition results. The determination unit 503 may be configured to determine, based on the probability, whether the current usage scenario of the target application is the preset scene.
In this embodiment, for the specific processing of the acquisition unit 501, the recognition unit 502, and the determination unit 503 of the apparatus 500 and the technical effects they bring, reference may be made to the related descriptions of steps 201, 202, and 203 in the embodiment corresponding to Fig. 2, which are not repeated here.
In some optional implementations of this embodiment, the apparatus 500 may further include an output unit (not shown) configured to generate prompt information for indicating whether the usage scenario of the target application is the preset scene and to output the prompt information.
In some optional implementations of this embodiment, the output unit may be further configured to output the prompt information to a server side that provides support for the target application.
In some optional implementations of this embodiment, the determination unit 503 may include: a determination sub-unit (not shown) configured to determine whether the probability is greater than a probability threshold; and a first processing sub-unit (not shown) configured to determine, if the probability is greater than the probability threshold, that the current usage scenario of the target application is the preset scene.
In some optional implementations of this embodiment, the determination unit 503 may further include a second processing sub-unit (not shown) configured to determine, if the probability is not greater than the probability threshold, that the current usage scenario of the target application is not the preset scene.
In some optional implementations of this embodiment, the recognition unit 502 may be further configured to extract the spectrum of the voice information using a fast Fourier transform algorithm and use the spectrum as the feature information.
In some optional implementations of this embodiment, the scene recognition model may be obtained by performing the following training operations on a preset deep neural network: obtaining a preset set of training samples, where a training sample may include voice information and annotation information, the annotation information indicating whether the corresponding voice information was collected in the preset scene; and, for a training sample in the set, performing feature extraction on the voice information in the training sample, inputting the extracted feature information into the deep neural network to obtain the probability that the voice information in the training sample was collected in the preset scene, determining the difference between that probability and the annotation information in the training sample, and adjusting the parameters of the deep neural network based on the difference.
The apparatus provided by the above embodiment of the present application collects voice information from the surrounding environment in response to detecting that a sub-application of a target application belonging to a preset category is running, performs feature extraction on the collected voice information, and inputs the extracted feature information into a pre-trained scene recognition model to obtain a recognition result including the probability that the voice information was collected in a preset scene, so that whether the current usage scenario of the target application is the preset scene is determined based on that probability, achieving identification of the current usage scenario of the target application.
Referring now to Fig. 6, a structural schematic diagram of a computer system 600 adapted to implement an electronic device (e.g., the terminal device 101 shown in Fig. 1) of embodiments of the present application is shown. The electronic device shown in Fig. 6 is only an example and should not impose any restriction on the function and scope of use of embodiments of the present application.
As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage portion 608 into a random access memory (RAM) 603. Various programs and data required for the operation of the system 600 are also stored in the RAM 603. The CPU 601, the ROM 602, and the RAM 603 are connected to one another through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including, for example, a cathode-ray tube (CRT), a liquid crystal display (LCD), and a speaker; a storage portion 608 including a hard disk and the like; and a communication portion 609 including a network interface card such as a LAN card or a modem. The communication portion 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read therefrom is installed into the storage portion 608 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 609 and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above functions defined in the system of the present application are executed.
It should be noted that the computer-readable medium shown in the present application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination of the above. In the present application, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any appropriate combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; such a medium can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to: wireless, wire, optical cable, RF, or any appropriate combination of the above.
Computer program code for executing the operations of the present application may be written in one or more programming languages or a combination thereof. These include object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or a part of code, and the module, program segment, or part of code contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the boxes may occur in an order different from that noted in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams or flowcharts, and combinations of boxes in the block diagrams or flowcharts, can be implemented by a dedicated hardware-based system that executes the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in embodiments of the present application may be implemented by means of software or by means of hardware. The described units may also be disposed in a processor; for example, a processor may be described as including an acquisition unit, a recognition unit, and a determination unit. The names of these units do not, under certain conditions, constitute a limitation on the units themselves; for example, the acquisition unit may also be described as "a unit that collects voice information from the surrounding environment".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments or may exist separately without being assembled into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: in response to detecting that a sub-application of a target application belonging to a preset category is running, collect voice information from the surrounding environment; perform feature extraction on the collected voice information and input the extracted feature information into a pre-trained scene recognition model to obtain a recognition result, where the recognition result may include the probability that the voice information was collected in a preset scene and the scene recognition model may be used to characterize the correspondence between feature information and recognition results; and determine, based on the probability, whether the current usage scenario of the target application is the preset scene.
The above description is only a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features; without departing from the above inventive concept, it should also cover other technical solutions formed by any combination of the above technical features or their equivalent features, for example technical solutions formed by mutually replacing the above features with technical features having similar functions disclosed in (but not limited to) the present application.
Claims (16)
1. the method for the usage scenario that one kind is applied for identification, including:
In response to detecting that the son application for belonging to pre-set categories of target application is run, to the voice messaging in ambient enviroment
It is acquired;
Feature extraction is carried out to collected voice messaging, by the characteristic information extracted input scene Recognition mould trained in advance
Type obtains recognition result, wherein the recognition result includes that the voice messaging is the probability acquired under a preset scenario, institute
Scene Recognition model is stated for the corresponding relationship between characteristic feature information and recognition result;
Based on the probability, determine whether the current usage scenario of the target application is the default scene.
2. according to the method described in claim 1, wherein, the method also includes:
Generate for prompt the target application usage scenario whether be the default scene prompt information, and output institute
State prompt information.
3. according to the method described in claim 2, wherein, the output prompt information, including:
The prompt information is exported to the target application, the server-side of support is provided.
4. method described in one of -3 according to claim 1, wherein it is described to be based on the probability, determine that the target application is worked as
Whether preceding usage scenario is the default scene, including:
Determine whether the probability is greater than probability threshold value;
If the probability is greater than the probability threshold value, it is determined that the current usage scenario of the target application is the default field
Scape.
5. according to the method described in claim 4, wherein, described to be based on the probability, determining that the target application is current makes
Whether it is the default scene with scene, further includes:
If the probability is not more than the probability threshold value, it is determined that the current usage scenario of the target application is not described default
Scene.
6. it is described that feature extraction is carried out to collected voice messaging according to the method described in claim 1, wherein, including:
The frequency spectrum that the voice messaging is extracted using fast fourier transform algorithm is believed the frequency spectrum as the feature
Breath.
7. The method according to claim 1, wherein the scene recognition model is obtained by performing the following training operations on a preset deep neural network:
obtaining a preset training sample set, wherein a training sample comprises voice information and annotation information, the annotation information indicating whether the corresponding voice information was collected in the preset scene; and
for a training sample in the training sample set: performing feature extraction on the voice information in the training sample; inputting the extracted feature information into the deep neural network to obtain a probability that the voice information in the training sample was collected in the preset scene; determining a difference between the probability and the annotation information in the training sample; and adjusting parameters of the deep neural network based on the difference.
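A minimal PyTorch sketch of the training operation in claim 7; the network shape, the mean-pooling over frames, binary cross-entropy as the "difference", and the optimizer settings are illustrative assumptions, not details fixed by the claim:

```python
import torch
from torch import nn

# Stand-in for the preset deep neural network; the 257 input features
# match the rFFT of a 512-sample frame (an assumption carried over from
# the earlier sketches).
net = nn.Sequential(nn.Linear(257, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
loss_fn = nn.BCELoss()  # measures the probability/annotation difference
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

def training_operation(features: torch.Tensor, label: torch.Tensor) -> float:
    """One pass over a training sample: predict the preset-scene probability,
    compare it with the annotation, and adjust the network parameters."""
    prob = net(features).mean()   # pool per-frame outputs into one clip-level score
    loss = loss_fn(prob, label)   # label = torch.tensor(1.0) if collected in the preset scene
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Calling `training_operation(torch.randn(8, 257), torch.tensor(1.0))` would perform one such update on a synthetic sample.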
8. An apparatus for identifying a usage scenario of an application, comprising:
a collection unit, configured to collect voice information from the surrounding environment in response to detecting that a sub-application of a target application belonging to a preset category is running;
a recognition unit, configured to perform feature extraction on the collected voice information and input the extracted feature information into a pre-trained scene recognition model to obtain a recognition result, wherein the recognition result comprises a probability that the voice information was collected in a preset scene, and the scene recognition model is used to characterize a correspondence between feature information and recognition results; and
a determination unit, configured to determine, based on the probability, whether the current usage scenario of the target application is the preset scene.
9. The apparatus according to claim 8, further comprising:
an output unit, configured to generate prompt information for indicating whether the usage scenario of the target application is the preset scene, and to output the prompt information.
10. The apparatus according to claim 9, wherein the output unit is further configured to:
output the prompt information to a server that provides support for the target application.
11. The apparatus according to any one of claims 8-10, wherein the determination unit comprises:
a determination subunit, configured to determine whether the probability is greater than a probability threshold; and
a first processing subunit, configured to determine, if the probability is greater than the probability threshold, that the current usage scenario of the target application is the preset scene.
12. The apparatus according to claim 11, wherein the determination unit further comprises:
a second processing subunit, configured to determine, if the probability is not greater than the probability threshold, that the current usage scenario of the target application is not the preset scene.
13. The apparatus according to claim 8, wherein the recognition unit is further configured to:
extract a frequency spectrum of the voice information using a fast Fourier transform algorithm, and use the frequency spectrum as the feature information.
14. The apparatus according to claim 8, wherein the scene recognition model is obtained by performing the following training operations on a preset deep neural network:
obtaining a preset training sample set, wherein a training sample comprises voice information and annotation information, the annotation information indicating whether the corresponding voice information was collected in the preset scene; and
for a training sample in the training sample set: performing feature extraction on the voice information in the training sample; inputting the extracted feature information into the deep neural network to obtain a probability that the voice information in the training sample was collected in the preset scene; determining a difference between the probability and the annotation information in the training sample; and adjusting parameters of the deep neural network based on the difference.
15. An electronic device, comprising:
one or more processors; and
a storage device on which one or more programs are stored,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-7.
16. A computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810538486.2A CN108831505B (en) | 2018-05-30 | 2018-05-30 | Method and device for identifying use scenes of application |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108831505A true CN108831505A (en) | 2018-11-16 |
CN108831505B CN108831505B (en) | 2020-01-21 |
Family
ID=64146402
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810538486.2A Active CN108831505B (en) | 2018-05-30 | 2018-05-30 | Method and device for identifying use scenes of application |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108831505B (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
JP2006195302A (en) * | 2005-01-17 | 2006-07-27 | Honda Motor Co Ltd | Speech recognition system and vehicle equipped with the speech recognition system
CN102074231A (en) * | 2010-12-30 | 2011-05-25 | 万音达有限公司 | Voice recognition method and system
CN106486127A (en) * | 2015-08-25 | 2017-03-08 | 中兴通讯股份有限公司 | Method, device and mobile terminal for automatically adjusting speech recognition parameters
CN105355201A (en) * | 2015-11-27 | 2016-02-24 | 百度在线网络技术(北京)有限公司 | Scene-based voice service processing method, device and terminal device
CN106775562A (en) * | 2016-12-09 | 2017-05-31 | 奇酷互联网络科技(深圳)有限公司 | Audio parameter processing method and device
CN106875949A (en) * | 2017-04-28 | 2017-06-20 | 深圳市大乘科技股份有限公司 | Speech recognition correction method and device
CN107454260A (en) * | 2017-08-03 | 2017-12-08 | 深圳天珑无线科技有限公司 | Control method for a terminal to enter a specific scene mode, electronic terminal and storage medium
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN109783028A (en) * | 2019-01-16 | 2019-05-21 | Oppo广东移动通信有限公司 | I/O scheduling optimization method, device, storage medium and intelligent terminal
CN109783028B (en) * | 2019-01-16 | 2022-07-15 | Oppo广东移动通信有限公司 | Optimization method and device for I/O scheduling, storage medium and intelligent terminal
CN109817236A (en) * | 2019-02-01 | 2019-05-28 | 安克创新科技股份有限公司 | Scene-based audio noise reduction method, apparatus, electronic equipment and storage medium
CN109871807A (en) * | 2019-02-21 | 2019-06-11 | 百度在线网络技术(北京)有限公司 | Face image processing method and device
CN109919244A (en) * | 2019-03-18 | 2019-06-21 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating a scene recognition model
CN110473568B (en) * | 2019-08-08 | 2022-01-07 | Oppo广东移动通信有限公司 | Scene recognition method and device, storage medium and electronic equipment
CN110473568A (en) * | 2019-08-08 | 2019-11-19 | Oppo广东移动通信有限公司 | Scene recognition method, device, storage medium and electronic equipment
CN110580897A (en) * | 2019-08-23 | 2019-12-17 | Oppo广东移动通信有限公司 | Audio verification method and device, storage medium and electronic equipment
CN110825446A (en) * | 2019-10-28 | 2020-02-21 | Oppo广东移动通信有限公司 | Parameter configuration method and device, storage medium and electronic equipment
CN110825446B (en) * | 2019-10-28 | 2023-12-08 | Oppo广东移动通信有限公司 | Parameter configuration method and device, storage medium and electronic equipment
CN111008595A (en) * | 2019-12-05 | 2020-04-14 | 武汉大学 | Method for detecting rear-seat babies/pets leaning against the window and identifying the interior atmosphere of a private car
CN111414944A (en) * | 2020-03-11 | 2020-07-14 | 北京声智科技有限公司 | Electronic equipment control method and electronic equipment
CN111414944B (en) * | 2020-03-11 | 2023-09-15 | 北京声智科技有限公司 | Electronic equipment control method and electronic equipment
CN113741226A (en) * | 2020-05-28 | 2021-12-03 | 上海汽车集团股份有限公司 | Scene control method and device in a vehicle-mounted system
Also Published As
Publication number | Publication date |
---|---|
CN108831505B (en) | 2020-01-21 |
Similar Documents
Publication | Title
---|---
CN108831505A (en) | Method and apparatus for identifying a usage scenario of an application
CN108022586B (en) | Method and apparatus for controlling a page
CN108154196B (en) | Method and apparatus for outputting images
CN110110811A (en) | Method and apparatus for training a model, and method and apparatus for predicting information
CN109460513A (en) | Method and apparatus for generating a click-through rate prediction model
CN109922032A (en) | Method and apparatus for determining the risk of a login account
CN109325213A (en) | Method and apparatus for labeling data
CN108416310A (en) | Method and apparatus for generating information
CN109545192A (en) | Method and apparatus for generating a model
CN108989882A (en) | Method and apparatus for extracting music clips from a video
CN109086719A (en) | Method and apparatus for outputting data
CN109993150A (en) | Method and apparatus for identifying age
CN109815365A (en) | Method and apparatus for processing video
CN109086780A (en) | Method and apparatus for detecting burrs on electrode sheets
CN108986805A (en) | Method and apparatus for sending information
CN109299477A (en) | Method and apparatus for generating a text headline
CN108933730A (en) | Information pushing method and device
CN108960110A (en) | Method and apparatus for generating information
CN109766418A (en) | Method and apparatus for outputting information
CN109460652A (en) | Method, device and computer-readable medium for labeling image samples
CN109359194A (en) | Method and apparatus for predicting information classification
CN109902446A (en) | Method and apparatus for generating an information prediction model
CN109214501A (en) | Method and apparatus for identifying information
CN109895781A (en) | Method and apparatus for controlling a vehicle
CN109389182A (en) | Method and apparatus for generating information
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |