
CN105933272A - Voiceprint recognition method capable of preventing recording attack, server, terminal, and system - Google Patents


Info

Publication number
CN105933272A
CN105933272A (application number CN201511020257.4A)
Authority
CN
China
Prior art keywords
character
user speech
voiceprint
voice
described user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201511020257.4A
Other languages
Chinese (zh)
Inventor
徐燕军
何朔
尹亚伟
万四爽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Unionpay Co Ltd
Original Assignee
China Unionpay Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Unionpay Co Ltd filed Critical China Unionpay Co Ltd
Priority to CN201511020257.4A priority Critical patent/CN105933272A/en
Publication of CN105933272A publication Critical patent/CN105933272A/en
Priority to PCT/CN2016/111714 priority patent/WO2017114307A1/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00: Network architectures or network communication protocols for network security
    • H04L 63/08: Network architectures or network communication protocols for network security for authentication of entities
    • H04L 63/0861: Network architectures or network communication protocols for network security for authentication of entities using biometrical features, e.g. fingerprint, retina-scan
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/08: Speech classification or search
    • G10L 15/18: Speech classification or search using natural language modelling
    • G10L 15/183: Speech classification or search using natural language modelling using context dependencies, e.g. language models
    • G10L 15/187: Phonemic context, e.g. pronunciation rules, phonotactical constraints or phoneme n-grams
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/26: Speech to text systems
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 17/00: Speaker identification or verification techniques
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00: Network architectures or network communication protocols for network security
    • H04L 63/14: Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L 63/1441: Countermeasures against malicious traffic
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 9/00: Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L 9/32: Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The invention provides a voiceprint recognition method that prevents recording attacks, together with a server, a terminal, and a system. The method comprises: generating a character combination and character pronunciation rules in response to a user's voiceprint recognition request; sending the character combination and pronunciation rules to the requesting terminal; receiving the user speech that the requesting terminal inputs according to the character combination and pronunciation rules; performing voiceprint recognition based on the user speech, the character combination, and the pronunciation rules; and sending the voiceprint recognition result to the requesting terminal. The invention effectively prevents recording attacks.

Description

Voiceprint authentication method, server, terminal, and system for preventing recording attacks
Technical field
The invention belongs to the field of voiceprint recognition, and in particular relates to a voiceprint authentication method, server, terminal, and system capable of preventing recording attacks.
Background technology
Like a fingerprint, a voiceprint is an important biometric feature that can characterize a person's identity. Compared with traditional means such as password authentication, voiceprints offer high security and convenience. The most common attacks against voiceprint authentication are recording replay attacks, speaker impersonation attacks, and forged-voice attacks.
In a recording replay attack, the attacker obtains speech samples of the user by various means with a high-fidelity recording device, and uses the user's original recording, or the "true voice of the speaker" synthesized by cutting, splicing, and similar processing; then, when the authentication system collects the user's speech, the attacker plays the recording back through a high-fidelity amplifier. In a speaker impersonation attack, an attacker skilled at mimicry imitates the speaking manner and pronunciation characteristics of the target speaker. In a forged-voice attack, the attacker forges the victim's voice by technical means such as synthesis, conversion, or splicing.
Speaker impersonation requires the attacker to have strong mimicry skills, and forging an authentication voice usually requires considerable technical expertise, so the cost of both attacks is high. Moreover, whether the voice is imitated or forged, it is ultimately not the genuine voice, and existing voiceprint recognition technology can largely cope with these two classes of attack.
The recording replay attack is the most serious problem faced in voiceprint recognition: after obtaining the sound, the attacker mounts the attack with software synthesis. Recording attacks occur in two situations: the user's speech in other contexts is stolen and used for an attack, or malicious software captures the user's voice while the user is performing voiceprint recognition.
For recording attacks, the prior art mainly offers the following two solutions:
The first scheme distinguishes recorded from original speech by analyzing differences in channel-characteristic patterns; the second scheme verifies not only the speaker's voiceprint but also the spoken content, because a replay attacker does not know that content in advance.
However, the first scheme places very high demands on signal quality, signal-to-noise ratio, and channel quality, and achieves poor results in practical applications.
In the second scheme, if the user is asked to read a long random passage of text, the user experience is poor. If the voice input is instead kept short, as in patent application 201310123555.0 ("Dynamic-password-voice-based identity confirmation system and method"), which randomly combines characters drawn from the 26 English letters and 10 digits into a dynamic password that the user then reads aloud, a simple recording attack can be resisted, because the dynamic password is not known in advance; this is a better solution. But because that patent combines only 36 characters in total (26 letters and 10 digits), an attacker who separates recordings of those 36 characters from captured speech can, no matter which random string is issued, mount an attack simply by splicing the 36 characters together.
Summary of the invention
The present invention provides a voiceprint authentication method, server, and terminal with a recording-attack-prevention function, to remedy the vulnerability of prior-art methods that cannot effectively prevent recording attacks.
To solve the above technical problem, the present invention provides a voiceprint authentication method capable of preventing recording attacks, comprising:
generating a character combination and character pronunciation rules according to a user's voiceprint authentication request;
sending the character combination and pronunciation rules to a requesting terminal;
receiving the user speech that the requesting terminal inputs according to the character combination and pronunciation rules;
performing voiceprint authentication according to the user speech, the character combination, and the pronunciation rules;
sending the voiceprint authentication result to the requesting terminal.
The present invention further provides a voiceprint authentication method capable of preventing recording attacks, comprising:
sending a user's voiceprint authentication request to a server;
receiving and displaying the character combination and character pronunciation rules sent by the server;
receiving the user speech input by the user according to the character combination and pronunciation rules;
sending the user speech to the server;
receiving the voiceprint authentication result sent by the server.
The present invention further provides a voiceprint authentication server capable of preventing recording attacks, comprising:
a generating unit, for generating a character combination and character pronunciation rules according to a user's request;
a sending unit, for sending the character combination and pronunciation rules to the requesting terminal, and for sending the voiceprint authentication result to the requesting terminal;
a receiving unit, for receiving the user speech that the requesting terminal inputs according to the character combination and pronunciation rules;
a sound detection unit, for performing voiceprint authentication according to the user speech, the character combination, and the pronunciation rules.
The present invention further provides a voiceprint authentication terminal capable of preventing recording attacks, comprising:
a requesting unit, for sending a user's voiceprint authentication request to a server;
a receiving unit, for receiving and displaying the character combination and character pronunciation rules sent by the server, and for receiving the voiceprint authentication result sent by the server;
an input unit, for receiving the user speech input by the user according to the character combination and pronunciation rules;
a sending unit, for sending the user speech to the server.
The present invention further provides a voiceprint authentication system capable of preventing recording attacks, comprising a server and a requesting terminal. The server generates a character combination and character pronunciation rules according to a user's voiceprint authentication request; sends the character combination and pronunciation rules to the requesting terminal; receives the user speech that the requesting terminal inputs according to the character combination and pronunciation rules; performs voiceprint authentication according to the user speech, the character combination, and the pronunciation rules; and sends the voiceprint authentication result to the requesting terminal.
The requesting terminal sends the user's voiceprint authentication request to the server; receives and displays the character combination and pronunciation rules sent by the server; receives the user speech input by the user according to them; sends the user speech to the server; and receives the voiceprint authentication result sent by the server.
The voiceprint authentication method, server, terminal, and system proposed by the present invention verify whether the characters and pronunciation in the user speech are consistent with the character combination and character pronunciation rules generated by the server, and can thereby effectively prevent recording attacks: even if an attacker obtains user speech through other channels that matches the voice content, it cannot satisfy the required pronunciation. Further, to prevent replay of speech the user has previously input, after judging that the characters and pronunciation in the user speech are consistent with the server-generated character combination and pronunciation rules, the method also judges whether the speech to be verified is identical to this user's speech in a history voice library; if it is identical, a recording attack exists. The invention can effectively prevent recording attacks in voiceprint authentication.
Brief description of the drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required for the description of the embodiments are briefly introduced below. The drawings show only some embodiments of the invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flow chart of a voiceprint authentication method for preventing recording attacks according to an embodiment of the invention;
Fig. 2 is a flow chart of a voiceprint authentication process for preventing recording attacks according to an embodiment of the invention;
Fig. 3 is a flow chart of a voiceprint authentication process for preventing recording attacks according to an embodiment of the invention;
Fig. 4 is the waveform corresponding to the pronunciation of the digit "0" according to an embodiment of the invention;
Fig. 5 is a flow chart of a voiceprint authentication method for preventing recording attacks according to an embodiment of the invention;
Fig. 6 shows a voiceprint authentication server for preventing recording attacks according to an embodiment of the invention;
Fig. 7 shows a voiceprint authentication terminal for preventing recording attacks according to an embodiment of the invention;
Fig. 8 shows a voiceprint authentication system for preventing recording attacks according to an embodiment of the invention;
Fig. 9 is a flow chart of a voiceprint authentication method with a recording-attack-prevention function according to an embodiment of the invention.
Detailed description of the invention
To make the technical features and effects of the present invention clearer, the technical solution is further described below with reference to the drawings. The invention may also be illustrated or implemented in other specific forms; any equivalent substitution made by a person skilled in the art within the scope of the claims belongs to the protection scope of the invention.
As shown in Fig. 1, a flow chart of a voiceprint authentication method for preventing recording attacks according to an embodiment of the invention.
This embodiment describes the voiceprint authentication method from the server side: the server performs voiceprint authentication on the user speech fed back by the terminal according to the character combination and character pronunciation rules it generated. This embodiment can, to a certain extent, prevent recording attacks.
Specifically, the voiceprint authentication method for preventing recording attacks comprises the following steps:
Step 101: generate a character combination and character pronunciation rules according to a user's voiceprint authentication request.
The character combination includes, but is not limited to, letters, digits, and Chinese characters; the character pronunciation rules include, but are not limited to, the tone and the duration of pronunciation. In one embodiment, each character in the combination corresponds to one pronunciation rule; in another embodiment, every two characters in the combination correspond to one pronunciation rule. The invention does not limit the concrete form of the character combination or of the pronunciation rules of its characters.
In one embodiment of the application, the character combination and the character pronunciation rules are generated randomly.
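Step 101 can be sketched as follows. This is a minimal illustration under stated assumptions: the character set, the four-tone vocabulary, and the short/long duration options are placeholders, since the patent leaves the concrete character set and the form of the pronunciation rules open.

```python
import random

# Placeholder vocabularies; the patent does not fix these.
CHARSET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
TONES = ["tone1", "tone2", "tone3", "tone4"]
DURATIONS = ["short", "long"]

def generate_challenge(n_chars=6, rng=random):
    """Randomly draw a character combination and, per character,
    a pronunciation rule (a tone plus a duration)."""
    chars = "".join(rng.choice(CHARSET) for _ in range(n_chars))
    rules = {i: (rng.choice(TONES), rng.choice(DURATIONS))
             for i in range(n_chars)}
    return chars, rules
```

Because both the characters and their per-character rules are drawn fresh for every request, a recording captured during one authentication is very unlikely to satisfy the next challenge.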
Step 102: send the character combination and character pronunciation rules to the requesting terminal.
The terminal of the invention includes, but is not limited to, a mobile phone, a tablet (PAD), a desktop computer, and a notebook computer.
Step 103: receive the user speech that the requesting terminal inputs according to the character combination and pronunciation rules.
Step 104: perform voiceprint authentication according to the user speech, the character combination, and the pronunciation rules.
Step 105: send the voiceprint authentication result to the requesting terminal.
In this embodiment, even if an attacker can obtain the spoken character information, it cannot obtain the character pronunciation rules; by adding pronunciation-rule verification, recording attacks can be effectively prevented.
Specifically, step 104 further comprises:
judging whether the user speech and the speech the user input in history come from the same person;
judging whether the characters in the user speech are identical to the characters in the character combination;
judging whether the pronunciation of the characters in the user speech matches the pronunciation rules of those characters.
Only when all three conditions hold simultaneously, namely the user speech and the historically input speech come from the same person, the characters in the user speech are identical to those in the character combination, and the pronunciation of the characters matches the pronunciation rules, does voiceprint authentication pass; in all other cases it fails. That is, if the user speech and the historically input speech are not from the same person, and/or the characters in the user speech differ from those in the character combination, and/or the pronunciation of the characters does not match the pronunciation rules, voiceprint authentication fails.
The invention does not limit the order of the above judgments; any order can realize the voiceprint authentication decision.
Preferably, as shown in Fig. 2, step 104 further comprises:
Step 201: first judge whether the user speech and the speech the user input in history come from the same person. If not, voiceprint authentication fails; if so, continue with step 202.
In a concrete implementation, before step 202 the user speech sent by the client must first be segmented by character, and the characters in the user speech extracted.
Step 202: judge whether the characters in the user speech are identical to the characters in the character combination.
If the characters in the user speech differ from those in the character combination, voiceprint authentication fails;
if they are identical, continue with step 203.
Step 203: judge whether the pronunciation of the characters in the user speech matches the pronunciation rules of those characters.
If the pronunciation of the characters in the user speech does not match the pronunciation rules, voiceprint authentication fails;
if it matches, voiceprint authentication passes.
Performing voiceprint authentication in the order described in this embodiment speeds up authentication and improves the user experience while preventing recording attacks. Unless otherwise specified, the embodiments below perform voiceprint authentication in this order.
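The staged decision of steps 201 to 203 can be sketched as a small function. This is a minimal illustration: the speaker comparison result, the characters recognized in the speech, and the pronunciation classification are all assumed to be supplied by upstream voiceprint and speech-recognition modules, which the patent does not specify.

```python
def authenticate(same_speaker, spoken_chars, expected_chars,
                 spoken_rules, expected_rules):
    """Staged check in the order of Fig. 2: speaker first, then
    character content, then pronunciation rules.
    Returns (passed, failed_stage)."""
    if not same_speaker:                  # step 201
        return False, "speaker"
    if spoken_chars != expected_chars:    # step 202
        return False, "characters"
    if spoken_rules != expected_rules:    # step 203
        return False, "pronunciation"
    return True, None
```

Failing fast at the first mismatched stage is what lets this ordering speed up authentication, as the embodiment notes.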
Referring again to Fig. 2: after judging that the user speech and the user's historically input speech come from the same person, that the characters in the user speech are identical to those in the character combination, and that the pronunciation of the characters matches the pronunciation rules, the method further stores the user speech in the history voice library, to facilitate later retrieval of the user's voice information.
As shown in Fig. 3, in one embodiment of the application, after judging that the user speech and the user's historically input speech come from the same person, that the characters in the user speech are identical to those in the character combination, and that the pronunciation of the characters matches the pronunciation rules, the method further comprises:
Step 204: judge whether the user speech is identical to the user's speech in the history voice library.
If the user speech is identical to the user's speech in the history voice library, voiceprint authentication fails.
If the user speech is not identical to the user's speech in the history voice library, voiceprint authentication passes, and the user speech is stored in the history voice library.
By verifying whether the user speech is identical to this user's speech in the history voice library, recording attacks in which the same user speech is replayed across the user's different authentication attempts can be prevented.
In one embodiment of the invention, step 204 of the previous embodiment further comprises:
extracting the feature parameters of the user speech;
computing the Euclidean distance between the feature parameters of the user speech and the feature parameters of the user's speech in the history database. When the Euclidean distance is smaller than a predetermined threshold, the user speech is identical to the user's speech in the history voice library; when the Euclidean distance is larger than the threshold, the user speech is not identical to the user's speech in the history voice library.
The predetermined threshold of this embodiment can be determined from the natural variability with which a person produces the same sound.
In a concrete implementation, the detailed process of judging whether the user speech is identical to the user's speech in the history voice library is:
1) Split the user speech into multiple segments by character and preprocess each segment, including framing, pre-emphasis, and windowing, to obtain a segment of sound that can be computed on further.
2) Find the start and end of the effective speech part in each segment.
As shown in Fig. 4, the waveform corresponding to the pronunciation of the digit "0", there are many silent segments or faint noise segments before and after the sound. If these invalid acoustic signals are not removed, an attacker can process the invalid ends of a recording and so undermine the recording detection.
In a concrete implementation, the start and end of the effective speech part can be judged by the short-time energy and the short-time zero-crossing rate.
The short-time energy is the sum of the magnitudes of one frame of the speech signal. The short-time energy $E_n$ of the $n$-th frame is:
$E_n = \sum_{m=0}^{N-1} |x_n(m)|$
where $m$ indexes the $m$-th sample of the $n$-th frame, $N$ is the frame size, and $x_n(m)$ is the normalized value of the $m$-th sample of the $n$-th frame.
The short-time zero-crossing rate is the number of times the waveform of one frame of the speech signal crosses the horizontal axis, denoted $Z_n$:
$Z_n = \frac{1}{2} \sum_{m=0}^{N-1} \left| \mathrm{sgn}[x_n(m)] - \mathrm{sgn}[x_n(m-1)] \right|$
where $m$, $N$, and $x_n(m)$ are as above.
When the short-time energy $E_n$ exceeds a threshold $E$ or the short-time zero-crossing rate $Z_n$ exceeds a threshold $Z$, the frame marks the start of effective speech; when $E_n$ falls below $E$ or $Z_n$ falls below $Z$, the frame marks the end of effective speech.
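The endpoint detection just described can be sketched with the two quantities defined above. This is a minimal sketch under assumptions: the frame length and the thresholds E and Z are illustrative values, since the patent does not fix them, and a frame is treated as active when either quantity exceeds its threshold.

```python
import numpy as np

def short_time_energy(frame):
    # E_n: sum of |x_n(m)| over the frame's normalized samples
    return float(np.sum(np.abs(frame)))

def short_time_zcr(frame):
    # Z_n = (1/2) * sum of |sgn(x_n(m)) - sgn(x_n(m-1))|
    s = np.sign(frame)
    return float(0.5 * np.sum(np.abs(s[1:] - s[:-1])))

def endpoints(signal, frame_len=160, e_thresh=2.0, z_thresh=20.0):
    """Return (first, last) indices of frames judged to contain
    effective speech, or None if no frame crosses either threshold."""
    signal = np.asarray(signal, dtype=float)
    n_frames = len(signal) // frame_len
    active = [
        i for i in range(n_frames)
        if short_time_energy(signal[i * frame_len:(i + 1) * frame_len]) > e_thresh
        or short_time_zcr(signal[i * frame_len:(i + 1) * frame_len]) > z_thresh
    ]
    if not active:
        return None
    return active[0], active[-1]
```

Trimming the signal to these endpoints removes the invalid leading and trailing segments that, per the text, an attacker could otherwise manipulate to evade recording detection.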
3) Extract feature parameters from the effective speech using Mel-frequency cepstral coefficients (MFCC). This is currently a common feature-extraction approach in sound processing and is not detailed further here.
After the user's recording has passed the three preprocessing steps above, with the inactive parts of the speech segmented away and the feature parameters extracted, the user's speech for a given character is denoted T:
T has N frame vectors {T(1), T(2), …, T(n), …, T(N)}, where T(n) is the speech feature vector of the n-th frame.
The same preprocessing is applied to this user's sound for the same character in the history library; after the inactive parts are segmented away and the feature parameters extracted, it is denoted R:
R has M frame vectors R = {R(1), R(2), …, R(m), …, R(M)}, where R(m) is the speech feature vector of the m-th frame.
4) Compute the similarity between the user's voice and the sound stored in the history voice library, i.e. the similarity of T and R; this can be judged by computing the Euclidean distance between T and R.
$d(T(i_n), R(i_m))$ denotes the Euclidean distance between the $i_n$-th frame feature of T and the $i_m$-th frame feature of R; if the two waveforms coincide exactly at some frame, the distance d is 0. To compare their similarity, the distance D[T, R] between them can be computed; the smaller the distance, the higher the similarity.
If N = M, i.e. the two speech segments have the same length, the Euclidean distance between the user speech and the speech stored in the history voice library can be computed directly as D[T, R] = d(1, 1) + d(2, 2) + … + d(N, N); if the two segments are exactly identical, then D[T, R] = 0. This method can only judge whether T and R are identical, but in a real attack the replay attacker often stretches, shortens, or deletes parts of the original recording, so simply computing the distance cannot defend well against such attacks.
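The equal-length case can be written out directly. A small sketch in which the frame distances d(i, i) are taken as Euclidean distances between feature vectors:

```python
import numpy as np

def direct_distance(T, R):
    """D[T, R] = d(1, 1) + d(2, 2) + ... + d(N, N) for equal-length
    feature sequences, with d the Euclidean frame distance."""
    T, R = np.asarray(T, dtype=float), np.asarray(R, dtype=float)
    assert len(T) == len(R), "direct comparison requires N == M"
    return float(sum(np.linalg.norm(t - r) for t, r in zip(T, R)))
```

As the text notes, this direct distance only detects exact repeats; a replay that the attacker has stretched or shortened defeats it, which is what motivates the time-warped comparison.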
When N and M differ, T(n) and R(m) must be aligned. Alignment could use linear expansion: if N < M, T can be linearly mapped to a sequence of M frames, and its distance to {R(1), R(2), …, R(M)} computed. But an attacker will not process the whole recording; often only parts of the sound are processed, so this method would judge the similarity of the two sounds to be very low.
Therefore, comparing the similarity of T and R requires combining time warping with the distance measurement: find a function $i_m = \Phi(i_n)$ that nonlinearly maps the time axis $n$ of T onto the time axis $m$ of R, such that the distance between T and R satisfies:
$D[T, R] = \min_{\Phi(i_n)} \sum_{i_n=1}^{N} d\big(T(i_n), R(\Phi(i_n))\big)$
where:
$\Phi(1) = 1, \quad \Phi(N) = M$
$\Phi(i_n + 1) \ge \Phi(i_n)$
$\Phi(i_n + 1) - \Phi(i_n) \le 1$
These conditions evidently satisfy the requirements of dynamic programming, so the distance can be solved with a dynamic programming algorithm, whose recurrence is:
$D(i_n, i_m) = d(T(i_n), R(i_m)) + \min\{D(i_n - 1, i_m),\ D(i_n - 1, i_m - 1),\ D(i_n - 1, i_m - 2)\}$
Setting D(1, 1) = 0, the search starts from the point (1, 1) and recurses repeatedly until (N, M), which yields the optimal path; D(N, M) is then the matching distance corresponding to the best matching path.
Since everyone's speech is affected by many factors, no one can produce acoustically identical waveforms when repeatedly pronouncing the same characters; a difference certainly exists, and this difference defines the reservation threshold for the judgment. If D(N, M) = 0, the two speech segments T and R are completely identical, which proves that T and R are one and the same sound, and a recording attack may exist. If D(N, M) < the threshold, T and R are extremely similar, and a recording attack may likewise exist. If D(N, M) ≥ the threshold, T and R are not the same speech, and no recording attack exists.
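The recurrence can be sketched as a small dynamic-time-warping routine. This is a minimal sketch under assumptions: frame distances are Euclidean, D(1, 1) is initialized to 0 as in the text, and no global path constraint or path-length normalization is applied (a production matcher would add both).

```python
import numpy as np

def dtw_distance(T, R):
    """D(N, M) from the recurrence
    D(i, j) = d(T(i), R(j)) + min(D(i-1, j), D(i-1, j-1), D(i-1, j-2)),
    searched from (1, 1) with D(1, 1) = 0, as in the description."""
    T, R = np.asarray(T, dtype=float), np.asarray(R, dtype=float)
    N, M = len(T), len(R)
    D = np.full((N, M), np.inf)
    D[0, 0] = 0.0  # the text sets D(1, 1) = 0
    for i in range(1, N):
        for j in range(M):
            prev = D[i - 1, j]
            if j >= 1:
                prev = min(prev, D[i - 1, j - 1])
            if j >= 2:
                prev = min(prev, D[i - 1, j - 2])
            if np.isfinite(prev):
                # d(T(i), R(j)) is the Euclidean frame distance
                D[i, j] = np.linalg.norm(T[i] - R[j]) + prev
    return float(D[N - 1, M - 1])
```

Comparing D(N, M) against the reservation threshold then implements the decision of the text: a distance of zero or below the threshold flags a likely replay, while a distance at or above it indicates genuinely fresh speech.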
The voiceprint authentication method for preventing recording attacks proposed by the present invention verifies whether the characters and pronunciation in the user speech are consistent with the character combination and character pronunciation rules generated by the server, and can thereby effectively prevent recording attacks: even if an attacker obtains user speech through other channels that matches the voice content, it cannot satisfy the required pronunciation. Further, to prevent replay of speech the user has previously input, after judging that the characters and pronunciation in the user speech are consistent with the server-generated character combination and pronunciation rules, the method also judges whether the speech to be verified is identical to this user's speech in the history voice library; if it is identical, a recording attack exists. The invention can effectively prevent recording attacks in voiceprint authentication.
As shown in Fig. 5, Fig. 5 is a flow chart of a voiceprint authentication method capable of preventing recording attacks according to an embodiment of the present invention. The method is described from the side of the requesting terminal. Specifically, the voiceprint authentication method includes:
Step 501: sending a voiceprint authentication request of a user to a server;
Step 502: receiving and displaying the character combination and character pronunciation rules sent by the server;
Step 503: receiving the user speech entered by the user according to the character combination and character pronunciation rules;
Step 504: sending the user speech to the server;
Step 505: receiving the voiceprint authentication result sent by the server.
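The five terminal-side steps can be sketched as a thin client. The server object, the challenge fields ("characters", "rule"), and the return values below are hypothetical stand-ins, since the patent does not specify a transport or message format:

```python
class StubServer:
    """Hypothetical in-process stand-in for the real authentication server."""

    def request_authentication(self, user_id):
        # step 502: the challenge returned to the terminal
        return {"characters": "3729", "rule": "whisper the 2nd and 4th digits"}

    def verify(self, user_id, voice):
        # steps 504/505: the real server would run the checks of the method above
        return "pass" if voice else "fail"

def authenticate(server, user_id, record_voice, display):
    """Terminal-side flow of steps 501-505."""
    challenge = server.request_authentication(user_id)   # step 501: send the request
    display(challenge["characters"], challenge["rule"])  # step 502: show the challenge
    voice = record_voice(challenge)                      # step 503: user reads it in
    return server.verify(user_id, voice)                 # steps 504-505: send, get result
```

Because the recording and display callbacks are injected, the same flow works whether the terminal is a phone app or a browser client.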
As shown in Fig. 6, Fig. 6 shows a voiceprint authentication server capable of preventing recording attacks according to an embodiment of the present invention. The server 600 includes: a generating unit 601, configured to generate a character combination and character pronunciation rules according to a user's request;
a transmitting unit 602, configured to send the character combination and character pronunciation rules to a requesting terminal, and to send the voiceprint authentication result to the requesting terminal;
a receiving unit 603, configured to receive the user speech entered at the requesting terminal according to the character combination and character pronunciation rules;
a voice detection unit 604, configured to perform voiceprint authentication according to the user speech, the character combination and the character pronunciation rules.
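As one way the generating unit 601 might work, the sketch below draws a random digit string and a per-position pronunciation rule. The rule names and the digit alphabet are assumptions for illustration; the patent leaves the concrete rule set open:

```python
import random

# Hypothetical pronunciation rules; the patent does not enumerate them.
RULES = ["normal voice", "whisper", "slow", "loud"]

def generate_challenge(length=4, seed=None):
    """Generating-unit sketch: a random digit string plus one rule per position."""
    rng = random.Random(seed)  # seedable for reproducible tests
    characters = "".join(rng.choice("0123456789") for _ in range(length))
    pronunciation = {pos: rng.choice(RULES) for pos in range(length)}
    return {"characters": characters, "pronunciation": pronunciation}
```

Randomizing both the characters and the per-character articulation is what makes a pre-recorded utterance unlikely to match any fresh challenge.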
As shown in Fig. 7, Fig. 7 shows a voiceprint authentication terminal capable of preventing recording attacks according to an embodiment of the present invention. Specifically, the authentication terminal 700 includes: a request unit 701, configured to send a voiceprint authentication request of a user to a server;
a receiving unit 702, configured to receive and display the character combination and character pronunciation rules sent by the server, and to receive the voiceprint authentication result sent by the server;
an entry unit 703, configured to receive the user speech entered by the user according to the character combination and character pronunciation rules;
a transmitting unit 704, configured to send the user speech to the server.
As shown in Fig. 8, Fig. 8 shows a voiceprint authentication system capable of preventing recording attacks according to an embodiment of the present invention.
The voiceprint authentication system includes a server 600 and a requesting terminal 700, wherein the server 600 is configured to generate a character combination and character pronunciation rules according to a voiceprint authentication request of a user; send the character combination and character pronunciation rules to the requesting terminal; receive the user speech entered at the requesting terminal according to the character combination and character pronunciation rules; perform voiceprint authentication according to the user speech, the character combination and the character pronunciation rules; and send the voiceprint authentication result to the requesting terminal;
the requesting terminal 700 is configured to send the voiceprint authentication request of the user to the server; receive and display the character combination and character pronunciation rules sent by the server; receive the user speech entered by the user according to the character combination and character pronunciation rules; send the user speech to the server; and receive the voiceprint authentication result sent by the server.
The voiceprint authentication method, server, terminal and system proposed by the present invention, which are capable of preventing recording attacks, verify whether the characters and articulation in the user speech are consistent with the character combination and character pronunciation rules generated by the server, and can thereby effectively prevent recording attacks: even if an attacker obtains the user's speech through other channels and its content matches, it cannot satisfy the required articulation. Further, to prevent a user speech previously entered by the user from being used in a recording attack, after judging that the characters and articulation in the user speech are consistent with the server-generated character combination and pronunciation rules, it is also judged whether the voice to be verified is identical to a voice of this user in the history voice library; if identical, a recording attack exists. The present invention can effectively prevent recording attacks in voiceprint authentication.
To explain the technical solution of the present application more clearly, a specific embodiment is described below. As shown in Fig. 9, the workflow of the system for preventing recording attacks is as follows:
Step 901: the client sends an identity authentication request to the server;
Step 902: the server receives the identity authentication request;
Step 903: the server randomly generates a verification character combination and character pronunciation rules according to the identity authentication request, and sends them to the client;
Step 904: after the client receives the character combination to be verified and the character pronunciation rules issued by the server, it prompts the user to read the characters as required;
Step 905: the client receives the user speech read in by the user, and sends it to the server;
Step 906: the server performs voiceprint verification, judging whether the received user speech and the prestored voice of this user belong to the same person; a conventional voiceprint verification algorithm may be used in a concrete implementation;
If the voiceprint verification finds they are not the same person, a user authentication failure is returned to the client directly;
If the voiceprint verification finds they are the same person, recording detection continues;
Step 907: verify whether the characters in the user speech are identical to the characters in the character combination generated by the server. If they differ, the character verification fails and a user authentication failure is returned to the client; if they are identical, the character verification passes and the flow continues with step 908;
Step 908: verify whether the articulation of the characters in the user speech matches the character pronunciation rules generated by the server. If it does not match, the pronunciation verification fails and a user authentication failure is returned to the client; if it matches, the pronunciation verification passes and the flow continues with step 909;
Step 909: verify whether the user speech already exists in the history voice library. If it does, this proves that a recording attack exists, the authentication fails, and the failure result is sent to the client; if it does not, the voiceprint authentication passes, the user speech is stored into the history voice library, and the authentication result is sent to the client.
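Steps 906-909 form a short-circuiting pipeline that can be sketched as follows. The four detector predicates are injected stand-ins, since the patent leaves the concrete voiceprint, speech-recognition, and articulation algorithms open:

```python
def verify_request(voice, challenge, enrolled_print, history,
                   same_speaker, recognize_text, pronunciation_matches, in_history):
    """Server-side checks of steps 906-909; any failed check stops the flow."""
    if not same_speaker(voice, enrolled_print):           # step 906: voiceprint check
        return "fail: not the enrolled speaker"
    if recognize_text(voice) != challenge["characters"]:  # step 907: character check
        return "fail: wrong characters"
    if not pronunciation_matches(voice, challenge):       # step 908: pronunciation check
        return "fail: wrong pronunciation"
    if in_history(voice, history):                        # step 909: replay check
        return "fail: recording attack"
    history.append(voice)  # store the accepted voice for future replay checks
    return "pass"
```

Note that storing the accepted voice in the history library is what defeats the attacker who records the user's own successful authentication: replaying that exact utterance later trips the step-909 check.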
The process of verifying whether the user speech exists in the history voice library has been described in detail in the above embodiments and is not repeated here. After the voiceprint authentication passes, the client proceeds with the corresponding operation, which is not limited by the present invention.
The above is merely intended to illustrate the technical solution of the present invention; any person of ordinary skill in the art may modify and vary the above embodiments without departing from the spirit and scope of the invention. The scope of the present invention should therefore be determined by the claims.

Claims (11)

1. A voiceprint authentication method capable of preventing recording attacks, characterized by comprising:
generating a character combination and character pronunciation rules according to a voiceprint authentication request of a user;
sending the character combination and character pronunciation rules to a requesting terminal;
receiving the user speech entered at the requesting terminal according to the character combination and character pronunciation rules;
performing voiceprint authentication according to the user speech, the character combination and the character pronunciation rules; and sending the voiceprint authentication result to the requesting terminal.
2. The voiceprint authentication method capable of preventing recording attacks of claim 1, characterized in that performing voiceprint authentication according to the user speech, the character combination and the character pronunciation rules further comprises:
judging whether the user speech and the voice historically entered by the user are the voice of the same person;
judging whether the characters in the user speech are identical to the characters in the character combination;
judging whether the articulation of the characters in the user speech matches the character pronunciation rules;
the voiceprint authentication passes only when the user speech and the voice historically entered by the user are from the same person, the characters in the user speech are identical to the characters in the character combination, and the articulation of the characters in the user speech matches the character pronunciation rules; in all other cases the voiceprint authentication fails.
3. The voiceprint authentication method capable of preventing recording attacks of claim 2, characterized in that, after judging that the user speech and the voice historically entered by the user are from the same person, that the characters in the user speech are identical to the characters in the character combination, and that the articulation of the characters in the user speech matches the character pronunciation rules, the method further comprises:
storing the user speech into a history voice library.
4. The voiceprint authentication method capable of preventing recording attacks of claim 2, characterized in that, after judging that the user speech and the voice historically entered by the user are from the same person, that the characters in the user speech are identical to the characters in the character combination, and that the articulation of the characters in the user speech matches the character pronunciation rules, the method further comprises:
judging whether the user speech is identical to a voice of the user in a history voice library;
if the user speech is identical to a voice of the user in the history voice library, the voiceprint authentication fails;
if the user speech is not identical to any voice of the user in the history voice library, the voiceprint authentication passes, and the user speech is stored into the history voice library.
5. The voiceprint authentication method capable of preventing recording attacks of claim 4, characterized in that judging whether the user speech is identical to a voice of the user in the history voice library further comprises:
extracting characteristic parameters of the user speech;
calculating the Euclidean distance between the characteristic parameters of the user speech and the characteristic parameters of the user's voice in the history database; when the Euclidean distance is less than a predetermined threshold, the user speech is identical to the voice of the user in the history voice library, and when the Euclidean distance is greater than the predetermined threshold, the user speech is not identical to the voice of the user in the history voice library.
6. The voiceprint authentication method capable of preventing recording attacks of claim 5, characterized in that extracting the characteristic parameters of the user speech further comprises:
preprocessing the user speech, and dividing the user speech into multiple speech segments by character;
finding the start point and end point of the effective speech portion in each speech segment;
extracting the characteristic parameters of the effective speech portion.
7. The voiceprint authentication method capable of preventing recording attacks of claim 1, characterized in that the character combination and the character pronunciation rules are randomly generated.
8. A voiceprint authentication method capable of preventing recording attacks, characterized by comprising:
sending a voiceprint authentication request of a user to a server;
receiving and displaying the character combination and character pronunciation rules sent by the server;
receiving the user speech entered by the user according to the character combination and character pronunciation rules;
sending the user speech to the server;
receiving the voiceprint authentication result sent by the server.
9. A voiceprint authentication server capable of preventing recording attacks, characterized by comprising:
a generating unit, configured to generate a character combination and character pronunciation rules according to a user's request;
a transmitting unit, configured to send the character combination and character pronunciation rules to a requesting terminal, and to send the voiceprint authentication result to the requesting terminal;
a receiving unit, configured to receive the user speech entered at the requesting terminal according to the character combination and character pronunciation rules;
a voice detection unit, configured to perform voiceprint authentication according to the user speech, the character combination and the character pronunciation rules.
10. A voiceprint authentication terminal capable of preventing recording attacks, characterized by comprising:
a request unit, configured to send a voiceprint authentication request of a user to a server;
a receiving unit, configured to receive and display the character combination and character pronunciation rules sent by the server, and to receive the voiceprint authentication result sent by the server;
an entry unit, configured to receive the user speech entered by the user according to the character combination and character pronunciation rules;
a transmitting unit, configured to send the user speech to the server.
11. A voiceprint authentication system capable of preventing recording attacks, characterized by comprising a server and a requesting terminal, wherein the server is configured to generate a character combination and character pronunciation rules according to a voiceprint authentication request of a user; send the character combination and character pronunciation rules to the requesting terminal; receive the user speech entered at the requesting terminal according to the character combination and character pronunciation rules; perform voiceprint authentication according to the user speech, the character combination and the character pronunciation rules; and send the voiceprint authentication result to the requesting terminal;
and the requesting terminal is configured to send the voiceprint authentication request of the user to the server; receive and display the character combination and character pronunciation rules sent by the server; receive the user speech entered by the user according to the character combination and character pronunciation rules; send the user speech to the server; and receive the voiceprint authentication result sent by the server.
CN201511020257.4A 2015-12-30 2015-12-30 Voiceprint recognition method capable of preventing recording attack, server, terminal, and system Pending CN105933272A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201511020257.4A CN105933272A (en) 2015-12-30 2015-12-30 Voiceprint recognition method capable of preventing recording attack, server, terminal, and system
PCT/CN2016/111714 WO2017114307A1 (en) 2015-12-30 2016-12-23 Voiceprint authentication method capable of preventing recording attack, server, terminal, and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201511020257.4A CN105933272A (en) 2015-12-30 2015-12-30 Voiceprint recognition method capable of preventing recording attack, server, terminal, and system

Publications (1)

Publication Number Publication Date
CN105933272A true CN105933272A (en) 2016-09-07

Family

ID=56839979

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201511020257.4A Pending CN105933272A (en) 2015-12-30 2015-12-30 Voiceprint recognition method capable of preventing recording attack, server, terminal, and system

Country Status (2)

Country Link
CN (1) CN105933272A (en)
WO (1) WO2017114307A1 (en)

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017114307A1 (en) * 2015-12-30 2017-07-06 中国银联股份有限公司 Voiceprint authentication method capable of preventing recording attack, server, terminal, and system
CN109087647A (en) * 2018-08-03 2018-12-25 平安科技(深圳)有限公司 Application on Voiceprint Recognition processing method, device, electronic equipment and storage medium
CN109218269A (en) * 2017-07-05 2019-01-15 阿里巴巴集团控股有限公司 Identity authentication method, device, equipment and data processing method
CN109935233A (en) * 2019-01-29 2019-06-25 天津大学 A kind of recording attack detection method based on amplitude and phase information
CN110169014A (en) * 2017-01-03 2019-08-23 诺基亚技术有限公司 Device, method and computer program product for certification
CN111316668A (en) * 2017-11-14 2020-06-19 思睿逻辑国际半导体有限公司 Detection of loudspeaker playback
CN111524528A (en) * 2020-05-28 2020-08-11 Oppo广东移动通信有限公司 Voice awakening method and device for preventing recording detection
US10984083B2 (en) 2017-07-07 2021-04-20 Cirrus Logic, Inc. Authentication of user using ear biometric data
CN112735426A (en) * 2020-12-24 2021-04-30 深圳市声扬科技有限公司 Voice verification method and system, computer device and storage medium
US11017252B2 (en) 2017-10-13 2021-05-25 Cirrus Logic, Inc. Detection of liveness
US11023755B2 (en) 2017-10-13 2021-06-01 Cirrus Logic, Inc. Detection of liveness
US11037574B2 (en) 2018-09-05 2021-06-15 Cirrus Logic, Inc. Speaker recognition and speaker change detection
US11042617B2 (en) 2017-07-07 2021-06-22 Cirrus Logic, Inc. Methods, apparatus and systems for biometric processes
US11042618B2 (en) 2017-07-07 2021-06-22 Cirrus Logic, Inc. Methods, apparatus and systems for biometric processes
US11042616B2 (en) 2017-06-27 2021-06-22 Cirrus Logic, Inc. Detection of replay attack
CN113012684A (en) * 2021-03-04 2021-06-22 电子科技大学 Synthesized voice detection method based on voice segmentation
US11164588B2 (en) 2017-06-28 2021-11-02 Cirrus Logic, Inc. Magnetic detection of replay attack
US11264037B2 (en) 2018-01-23 2022-03-01 Cirrus Logic, Inc. Speaker identification
US11270707B2 (en) 2017-10-13 2022-03-08 Cirrus Logic, Inc. Analysing speech signals
US11276409B2 (en) 2017-11-14 2022-03-15 Cirrus Logic, Inc. Detection of replay attack
CN114826709A (en) * 2022-04-15 2022-07-29 马上消费金融股份有限公司 Identity authentication and acoustic environment detection method, system, electronic device and medium
US11475899B2 (en) 2018-01-23 2022-10-18 Cirrus Logic, Inc. Speaker identification
US11631402B2 (en) 2018-07-31 2023-04-18 Cirrus Logic, Inc. Detection of replay attack
US11704397B2 (en) 2017-06-28 2023-07-18 Cirrus Logic, Inc. Detection of replay attack
US11705135B2 (en) 2017-10-13 2023-07-18 Cirrus Logic, Inc. Detection of liveness
US11735189B2 (en) 2018-01-23 2023-08-22 Cirrus Logic, Inc. Speaker identification
US11748462B2 (en) 2018-08-31 2023-09-05 Cirrus Logic Inc. Biometric authentication
US11755701B2 (en) 2017-07-07 2023-09-12 Cirrus Logic Inc. Methods, apparatus and systems for authentication
US11829461B2 (en) 2017-07-07 2023-11-28 Cirrus Logic Inc. Methods, apparatus and systems for audio playback

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109754817A (en) * 2017-11-02 2019-05-14 北京三星通信技术研究有限公司 signal processing method and terminal device
CN112365895B (en) * 2020-10-09 2024-04-19 深圳前海微众银行股份有限公司 Audio processing method, device, computing equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1808567A (en) * 2006-01-26 2006-07-26 覃文华 Voice-print authentication device and method of authenticating people presence
CN102457845A (en) * 2010-10-14 2012-05-16 阿里巴巴集团控股有限公司 Wireless service identity authentication method, equipment and system
CN102543084A (en) * 2010-12-29 2012-07-04 盛乐信息技术(上海)有限公司 Online voiceprint recognition system and implementation method thereof
CN102737634A (en) * 2012-05-29 2012-10-17 百度在线网络技术(北京)有限公司 Authentication method and device based on voice
CN104717219A (en) * 2015-03-20 2015-06-17 百度在线网络技术(北京)有限公司 Vocal print login method and device based on artificial intelligence

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9202460B2 (en) * 2008-05-14 2015-12-01 At&T Intellectual Property I, Lp Methods and apparatus to generate a speech recognition library
CN104901808A (en) * 2015-04-14 2015-09-09 时代亿宝(北京)科技有限公司 Voiceprint authentication system and method based on time type dynamic password
CN105185379B (en) * 2015-06-17 2017-08-18 百度在线网络技术(北京)有限公司 voiceprint authentication method and device
CN105096121B (en) * 2015-06-25 2017-07-25 百度在线网络技术(北京)有限公司 voiceprint authentication method and device
CN105933272A (en) * 2015-12-30 2016-09-07 中国银联股份有限公司 Voiceprint recognition method capable of preventing recording attack, server, terminal, and system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1808567A (en) * 2006-01-26 2006-07-26 覃文华 Voice-print authentication device and method of authenticating people presence
CN102457845A (en) * 2010-10-14 2012-05-16 阿里巴巴集团控股有限公司 Wireless service identity authentication method, equipment and system
CN102543084A (en) * 2010-12-29 2012-07-04 盛乐信息技术(上海)有限公司 Online voiceprint recognition system and implementation method thereof
CN102737634A (en) * 2012-05-29 2012-10-17 百度在线网络技术(北京)有限公司 Authentication method and device based on voice
CN104717219A (en) * 2015-03-20 2015-06-17 百度在线网络技术(北京)有限公司 Vocal print login method and device based on artificial intelligence

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAO, Li: "Speech Signal Processing" (《语音信号处理》), 31 May 2009 *

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017114307A1 (en) * 2015-12-30 2017-07-06 中国银联股份有限公司 Voiceprint authentication method capable of preventing recording attack, server, terminal, and system
US11283631B2 (en) 2017-01-03 2022-03-22 Nokia Technologies Oy Apparatus, method and computer program product for authentication
CN110169014A (en) * 2017-01-03 2019-08-23 诺基亚技术有限公司 Device, method and computer program product for certification
US11042616B2 (en) 2017-06-27 2021-06-22 Cirrus Logic, Inc. Detection of replay attack
US12026241B2 (en) 2017-06-27 2024-07-02 Cirrus Logic Inc. Detection of replay attack
US11704397B2 (en) 2017-06-28 2023-07-18 Cirrus Logic, Inc. Detection of replay attack
US11164588B2 (en) 2017-06-28 2021-11-02 Cirrus Logic, Inc. Magnetic detection of replay attack
CN109218269A (en) * 2017-07-05 2019-01-15 阿里巴巴集团控股有限公司 Identity authentication method, device, equipment and data processing method
US11042618B2 (en) 2017-07-07 2021-06-22 Cirrus Logic, Inc. Methods, apparatus and systems for biometric processes
US10984083B2 (en) 2017-07-07 2021-04-20 Cirrus Logic, Inc. Authentication of user using ear biometric data
US11755701B2 (en) 2017-07-07 2023-09-12 Cirrus Logic Inc. Methods, apparatus and systems for authentication
US11714888B2 (en) 2017-07-07 2023-08-01 Cirrus Logic Inc. Methods, apparatus and systems for biometric processes
US11042617B2 (en) 2017-07-07 2021-06-22 Cirrus Logic, Inc. Methods, apparatus and systems for biometric processes
US11829461B2 (en) 2017-07-07 2023-11-28 Cirrus Logic Inc. Methods, apparatus and systems for audio playback
US12135774B2 (en) 2017-07-07 2024-11-05 Cirrus Logic Inc. Methods, apparatus and systems for biometric processes
US11270707B2 (en) 2017-10-13 2022-03-08 Cirrus Logic, Inc. Analysing speech signals
US11023755B2 (en) 2017-10-13 2021-06-01 Cirrus Logic, Inc. Detection of liveness
US11705135B2 (en) 2017-10-13 2023-07-18 Cirrus Logic, Inc. Detection of liveness
US11017252B2 (en) 2017-10-13 2021-05-25 Cirrus Logic, Inc. Detection of liveness
US11051117B2 (en) 2017-11-14 2021-06-29 Cirrus Logic, Inc. Detection of loudspeaker playback
CN111316668A (en) * 2017-11-14 2020-06-19 思睿逻辑国际半导体有限公司 Detection of loudspeaker playback
US11276409B2 (en) 2017-11-14 2022-03-15 Cirrus Logic, Inc. Detection of replay attack
CN111316668B (en) * 2017-11-14 2021-09-28 思睿逻辑国际半导体有限公司 Detection of loudspeaker playback
US11264037B2 (en) 2018-01-23 2022-03-01 Cirrus Logic, Inc. Speaker identification
US11694695B2 (en) 2018-01-23 2023-07-04 Cirrus Logic, Inc. Speaker identification
US11735189B2 (en) 2018-01-23 2023-08-22 Cirrus Logic, Inc. Speaker identification
US11475899B2 (en) 2018-01-23 2022-10-18 Cirrus Logic, Inc. Speaker identification
US11631402B2 (en) 2018-07-31 2023-04-18 Cirrus Logic, Inc. Detection of replay attack
CN109087647A (en) * 2018-08-03 2018-12-25 平安科技(深圳)有限公司 Application on Voiceprint Recognition processing method, device, electronic equipment and storage medium
US11748462B2 (en) 2018-08-31 2023-09-05 Cirrus Logic Inc. Biometric authentication
US11037574B2 (en) 2018-09-05 2021-06-15 Cirrus Logic, Inc. Speaker recognition and speaker change detection
CN109935233A (en) * 2019-01-29 2019-06-25 天津大学 A kind of recording attack detection method based on amplitude and phase information
CN111524528A (en) * 2020-05-28 2020-08-11 Oppo广东移动通信有限公司 Voice awakening method and device for preventing recording detection
CN112735426A (en) * 2020-12-24 2021-04-30 深圳市声扬科技有限公司 Voice verification method and system, computer device and storage medium
CN113012684A (en) * 2021-03-04 2021-06-22 电子科技大学 Synthesized voice detection method based on voice segmentation
CN114826709A (en) * 2022-04-15 2022-07-29 马上消费金融股份有限公司 Identity authentication and acoustic environment detection method, system, electronic device and medium

Also Published As

Publication number Publication date
WO2017114307A1 (en) 2017-07-06

Similar Documents

Publication Publication Date Title
CN105933272A (en) Voiceprint recognition method capable of preventing recording attack, server, terminal, and system
Yu et al. Spoofing detection in automatic speaker verification systems using DNN classifiers and dynamic acoustic features
Reynolds An overview of automatic speaker recognition technology
CN107104803B (en) User identity authentication method based on digital password and voiceprint joint confirmation
Mukhopadhyay et al. All your voices are belong to us: Stealing voices to fool humans and machines
CN105913855B (en) A kind of voice playback attack detecting algorithm based on long window scale factor
US11979398B2 (en) Privacy-preserving voiceprint authentication apparatus and method
Reynolds Automatic speaker recognition: Current approaches and future trends
Chen et al. Towards understanding and mitigating audio adversarial examples for speaker recognition
CN105933323B (en) Voiceprint registration, authentication method and device
WO2017162053A1 (en) Identity authentication method and device
CN1808567A (en) Voice-print authentication device and method of authenticating people presence
JPWO2005013263A1 (en) Voice authentication system
CN110459226A (en) A method of voice is detected by vocal print engine or machine sound carries out identity veritification
Turner et al. Attacking speaker recognition systems with phoneme morphing
CN110379433A (en) Method, apparatus, computer equipment and the storage medium of authentication
CN109273012A (en) A kind of identity identifying method based on Speaker Identification and spoken digit recognition
Yuan et al. Overview of the development of speaker recognition
Al-Shayea et al. Speaker identification: A novel fusion samples approach
Reynolds et al. Automatic speaker recognition
EP4170526A1 (en) An authentication system and method
RU2351023C2 (en) User verification method in authorised access systems
CN113012684B (en) Synthesized voice detection method based on voice segmentation
Chen et al. Personal threshold in a small scale text-dependent speaker recognition
Mishra et al. Speaker identification, differentiation and verification using deep learning for human machine interface

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20160907