CN109117622A - Identity authentication method based on audio fingerprints - Google Patents
Identity authentication method based on audio fingerprints
- Publication number
- CN109117622A CN201811095315.3A
- Authority
- CN
- China
- Prior art keywords
- audio fingerprint
- input
- user
- voice information
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
Landscapes
- Engineering & Computer Science (AREA)
- Computer Security & Cryptography (AREA)
- Theoretical Computer Science (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Telephone Function (AREA)
Abstract
The present invention relates to an identity authentication method based on audio fingerprints. A terminal device collects in advance voice information input by a user together with the text corresponding to that voice information, performs word segmentation on the input text, and obtains the importance of the position of each keyword in the input text; the voice information is divided into several sub-segments according to these keywords, and an audio fingerprint is extracted for each sub-segment. When the user performs identity authentication, the similarity between the pre-stored audio fingerprints and the audio fingerprints of the newly input voice is calculated, weighted by the positional importance of each keyword in the input text; if the similarity exceeds a predetermined threshold, authentication succeeds. The method fully exploits the semantic features of the corresponding text, so the matching precision is higher and the security of the terminal device is further improved.
Description
Technical field
The present invention relates to the field of audio recognition technology, and in particular to an identity authentication method based on audio fingerprints.
Background art
With advances in technology, smart phones and tablet computers play an increasingly important role in people's work and life. A terminal device holds a large amount of personal information and operating records, so the security of mobile phones and tablet computers is becoming more and more important.
Existing protection schemes generally rely on traditional passwords or physical fingerprints. Although a traditional password can provide a certain degree of security for identity authentication, it has clear weaknesses in many scenarios: once the password is stolen, the protection collapses entirely.
Products from companies such as Apple, Samsung and Huawei already support fingerprint recognition, but the fingerprint recognition process first requires the user's fingerprint to be enrolled; at unlock time a sensor must scan the contact area of the finger before the fingerprint data can be verified, so the whole process depends on acquiring the user's fingerprint. In practice, some users cannot have their fingerprints collected because a finger is peeling, the fingerprint is worn smooth, or the finger is heavily wrinkled, so smart phones and tablet computers cannot rely on fingerprint authentication to improve their security.
An audio fingerprint is a compact identifier that a specific algorithm extracts from the unique numerical features of a piece of audio; it is used to identify a sound sample among a massive collection or to track and locate the position of the sample in a database. As the core algorithm of audio content recognition, audio fingerprinting has been widely used in fields such as music recognition, copyright monitoring of broadcast content, de-duplication of content libraries and second-screen TV interaction.
Audio fingerprinting extracts data features from the sound and compares the audio to be identified against a reference fingerprint database. The identification process is independent of the storage format, encoding, bit rate and compression technique of the audio itself. Audio fingerprint matching is high-precision matching that does not depend on file metadata, watermarks or file hash values.
Since some users cannot use fingerprint authentication on current smart phones and tablet computers, and considering that an audio interface is almost universally present on these devices, the invention proposes a method that uses audio fingerprints for identity authentication.
Summary of the invention
The invention discloses an identity authentication method based on audio fingerprints, which can perform identity authentication from a person's audio fingerprint information and thereby improve the security of the terminal device.
The identity authentication method specifically comprises the following steps:
Step S10: the terminal device detects a first operation of the user;
Step S20: if the first operation meets a preset condition, the terminal device display interface shows the voice information to be input by the user and the method proceeds to step S30; otherwise it returns to step S10 and continues monitoring;
Step S30: the user inputs the voice information described in step S20, and the audio fingerprint of the input voice information is extracted;
Step S40: the audio fingerprint is matched against the audio fingerprint pre-stored in the terminal device;
Step S50: if the match succeeds, the terminal device unlocks; otherwise the method returns to step S10 and continues monitoring.
Before step S10 there is an audio fingerprint acquisition step S00, which specifically includes: the user inputs voice information and the text corresponding to that voice information, both of which are saved; audio fingerprints are extracted from the input voice information, and the extracted fingerprints are saved.
The extraction of audio fingerprints from the input voice information in step S00 specifically includes: performing word segmentation on the text input by the user that corresponds to the voice information; extracting n keywords K_i (i = 1, 2, ..., n, where n is a natural number greater than 2) using the TF-IDF method; obtaining the importance k_i of the position of each keyword in the input text; dividing the voice information into several sub-segments according to these keywords; and extracting an audio fingerprint P_i for each sub-segment.
The first operation in step S10 is a long press on the touch screen or on a designated button.
The voice information shown to the user on the terminal device display interface in step S20 is the text information input by the user in step S00.
Extracting the audio fingerprint of the user's input voice information in step S30 specifically includes: dividing the input voice into several sub-segments according to the keywords extracted in step S00, and extracting an audio fingerprint P'_i for each sub-segment.
Matching the audio fingerprint against the audio fingerprint pre-stored in the terminal device in step S40 specifically includes: calculating the similarity of the audio fingerprints from the importance α_i of the position of each keyword in the input text, the pre-stored audio fingerprints P_i and the audio fingerprints P'_i of the user's input voice information; if the similarity is greater than a predetermined threshold, the audio fingerprint matches the audio fingerprint pre-stored in the terminal device, otherwise the match fails.
After step S50 there is a further step S60: after a successful match, the pre-stored audio fingerprints are refined based on the voice information and audio fingerprints of this input.
The technical solution provided by the embodiments of the present invention has the following beneficial effects. The terminal device collects in advance the voice information input by the user together with the corresponding text, performs word segmentation on that text, extracts n keywords using the TF-IDF method, and obtains the importance of the position of each keyword in the input text; the voice information is divided into several sub-segments according to these keywords and an audio fingerprint is extracted for each sub-segment. When the user performs identity authentication, the user must speak the voice information corresponding to the text prompted on the display interface; the input voice is likewise divided into sub-segments according to the previously extracted keywords, and an audio fingerprint is extracted for each sub-segment. The similarity of the audio fingerprints is then calculated from the positional importance of each keyword in the input text, the pre-stored audio fingerprints and the audio fingerprints of the input voice; if the similarity is greater than a predetermined threshold, the audio fingerprint matches the audio fingerprint pre-stored in the terminal device. This matching process fully takes the semantic features of the corresponding text into account by using the positional importance of each keyword in the input text as the weight of the similarity of the corresponding sub-segment, so the matching precision is higher and the security of the terminal device is further improved.
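For orientation, the sketch below lays out the enrollment (S00) and verification (S10-S50) flow described above. The EnrolledProfile structure, the injected extraction and similarity callables and the 0.6 threshold are illustrative assumptions; the patent leaves the concrete fingerprinting and similarity functions to the detailed description.

```python
# High-level sketch of the flow of steps S00 and S10-S50; names, types and the
# default threshold are assumptions for illustration only.
from dataclasses import dataclass
from typing import Callable, List, Sequence, Set

@dataclass
class EnrolledProfile:
    text: str                      # prompt text the user must read back at unlock time
    keywords: List[str]            # keywords K_i extracted with TF-IDF
    importances: List[float]       # positional importance of each keyword
    fingerprints: List[Set[int]]   # one set of audio-fingerprint hash keys P_i per sub-segment

def enroll(text: str, audio: Sequence[float],
           extract: Callable[[str, Sequence[float]], EnrolledProfile]) -> EnrolledProfile:
    """Step S00: build and store the profile (text, keywords, importances, fingerprints)."""
    return extract(text, audio)

def verify(profile: EnrolledProfile, audio: Sequence[float],
           refingerprint: Callable[[EnrolledProfile, Sequence[float]], List[Set[int]]],
           similarity: Callable[[List[Set[int]], List[Set[int]], List[float]], float],
           threshold: float = 0.6) -> bool:
    """Steps S30-S50: fingerprint the fresh utterance and compare it with the stored profile."""
    new_fingerprints = refingerprint(profile, audio)
    return similarity(profile.fingerprints, new_fingerprints, profile.importances) >= threshold
```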
Brief description of the drawings
Fig. 1 is a flowchart of an identity authentication method based on audio fingerprints according to an embodiment of the present invention;
Fig. 2 is a flowchart of an audio fingerprint extraction method according to an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The present invention can be used in numerous general-purpose or special-purpose computing device environments or configurations, for example: personal computers, handheld or portable devices, laptop devices, multi-processor devices, and distributed computing environments including any of the above devices.
The invention discloses an identity authentication method based on audio fingerprints, which can perform identity authentication from a person's audio fingerprint information and thereby improve the security of the terminal device.
As shown in Fig. 1, the identity authentication method includes the following steps:
Step S10: the terminal device detects a first operation of the user.
A user carrying a portable terminal device will often touch it accidentally. To prevent such accidental touches from starting the authentication process and draining the battery, a preset user operation is specified and the authentication process is started only when the user's operation matches this predetermined operation, which effectively saves battery power. The predetermined operation may be a long press on the touch screen or on a designated button.
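As a small illustration of this gate, the check below accepts only a sufficiently long press on the touch screen or the designated button before authentication starts; the 0.8 s duration and the event fields are assumptions, since the text only names the long press itself.

```python
# Minimal sketch of the step-S10 gate; LONG_PRESS_SECONDS and the target values
# are assumptions made for illustration.
LONG_PRESS_SECONDS = 0.8

def is_first_operation(target: str, press_started_at: float, press_ended_at: float) -> bool:
    """True only for a long press on the touch screen or the designated button."""
    long_enough = (press_ended_at - press_started_at) >= LONG_PRESS_SECONDS
    return long_enough and target in ("touch_screen", "designated_button")
```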
Before step S10 there is an audio fingerprint acquisition step S00, which specifically includes: the user inputs voice information and the text corresponding to that voice information, both of which are saved; audio fingerprints are extracted from the input voice information, and the extracted fingerprints are saved.
The extraction of audio fingerprints from the input voice information in step S00 specifically includes: performing word segmentation on the text input by the user that corresponds to the voice information; extracting n keywords K_i (i = 1, 2, ..., n, where n is a natural number greater than 2) using the TF-IDF method; obtaining the importance k_i of the position of each keyword in the input text; dividing the voice information into several sub-segments according to these keywords; and extracting an audio fingerprint P_i for each sub-segment.
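The sketch below shows one way step S00 could be realised: a hand-rolled TF-IDF over the segmented enrollment text, a simple position-based importance, and an equal split of the enrollment audio into one sub-segment per keyword. The reference corpus, the 1/(1+position) weighting and the equal split are assumptions; the text above only fixes the use of TF-IDF keywords K_i, positional importances k_i and per-keyword sub-segments.

```python
# Minimal sketch of keyword extraction and audio segmentation for step S00.
import math
from collections import Counter

def tfidf_keywords(tokens, corpus, n=3):
    """Top-n tokens of the enrollment text ranked by TF-IDF against a small reference corpus."""
    tf = Counter(tokens)
    def idf(term):
        docs_with_term = sum(1 for doc in corpus if term in doc)
        return math.log((1 + len(corpus)) / (1 + docs_with_term)) + 1.0
    ranked = sorted(tf, key=lambda term: tf[term] * idf(term), reverse=True)
    return ranked[:n]

def positional_importance(tokens, keywords):
    """Importance k_i of keyword K_i from its position (earlier positions weighted higher, an assumed rule)."""
    return [1.0 / (1.0 + tokens.index(keyword)) for keyword in keywords]

def split_audio_by_keywords(samples, n_keywords):
    """One audio sub-segment per keyword; an equal split is assumed for illustration."""
    segment_length = max(1, len(samples) // n_keywords)
    return [samples[i * segment_length:(i + 1) * segment_length] for i in range(n_keywords)]
```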
Whether the audio fingerprint extraction method is effective directly determines the retrieval efficiency and precision of the audio fingerprints.
At the 2002 international conference on Recent Advances in Visual Information Systems (Proceedings of Recent Advances in Visual Information Systems 2002), Oostveen, Kalker et al. proposed an audio fingerprint feature extraction method in the article "Feature Extraction and a Database Strategy for Video Fingerprinting", but the precision of this feature extraction method is not high in actual use.
Jaap Haitsma et al. proposed an audio fingerprint extraction method and a corresponding detection algorithm in the paper "A Highly Robust Audio Fingerprinting System". In that paper the authors judge whether the audio under test contains a preset template by comparing whether the audio fingerprints of the preset template and of the audio under test are identical. Testing shows that the recall of this method is low, and analysis shows that the audio fingerprints it extracts have poor noise robustness: after the audio undergoes certain transformations (compression, transmission), its sound quality changes, the audio fingerprint obtained with this method also changes greatly, and the recall therefore drops. Building on this, Jerome Lebosse et al. proposed a cumulative-energy difference method in "A Robust Audio Fingerprint Extraction Algorithm". Compared with the method of Jaap Haitsma et al., the robustness of the audio fingerprints of Lebosse et al. is enhanced, so the hit rate of the fingerprints during detection increases and the recall improves, but this in turn introduces a certain number of false alarms.
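The recall discussion above turns on how Haitsma-style 32-bit sub-fingerprints are compared. A minimal sketch of the usual bit-error-rate test is given below; the 0.35 acceptance threshold is the value commonly quoted for that scheme and is stated here as an assumption rather than something taken from the present text.

```python
# Bit-error-rate comparison of two sequences of 32-bit sub-fingerprints.
import numpy as np

def bit_error_rate(fp_a, fp_b):
    """Fraction of differing bits between two equal-length sequences of 32-bit sub-fingerprints."""
    a = np.asarray(fp_a, dtype=np.uint32)
    b = np.asarray(fp_b, dtype=np.uint32)
    differing_bits = np.unpackbits((a ^ b).view(np.uint8)).sum()
    return differing_bits / (32 * len(a))

def same_audio(fp_a, fp_b, threshold=0.35):
    """Accept when the bit error rate falls below the (assumed) threshold."""
    return bit_error_rate(fp_a, fp_b) < threshold
```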
In the prior art, audio files are also retrieved with the Philips algorithm: the audio segment is windowed and split into overlapping frames according to a certain frame overlap; a Fast Fourier Transform (FFT) is applied to each audio frame to obtain its spectrum, each frame is divided into 33 sub-bands in the frequency domain, and the energy of each sub-band is computed from the frame spectrum. Then, for each audio frame, the energy difference between every two adjacent sub-bands is computed, giving 32 energy differences per frame. Next, for every two consecutive frames, the difference between each energy difference of the previous frame and the corresponding energy difference of the next frame is computed, giving 32 differences; for each of these 32 differences a bit is set to 1 if the difference is greater than 0 and to 0 otherwise, yielding a 32-bit audio fingerprint, which is then used to search the audio file library. However, the Philips algorithm tends to produce spurious formants, so the extracted audio fingerprints are inaccurate, which degrades the accuracy of audio file retrieval and results in a low matching rate.
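For reference, a minimal sketch of the Philips-style sub-fingerprint just described follows. The frame length, hop size and the log-spaced 300-2000 Hz band layout are common choices assumed here for illustration; the passage above only fixes the 33 sub-bands, the 32 adjacent-band energy differences and the sign test on their frame-to-frame differences.

```python
# Sketch of the Philips-style 32-bit sub-fingerprint described above; frame and band
# parameters are assumptions, not values taken from the text.
import numpy as np

def philips_fingerprints(signal, sample_rate, frame_len=4096, hop=512, n_bands=33):
    """signal: 1-D numpy array of samples. Returns one 32-bit integer sub-fingerprint per frame,
    starting from the second frame."""
    window = np.hanning(frame_len)
    n_frames = max(0, 1 + (len(signal) - frame_len) // hop)
    # Band edges: log-spaced between 300 Hz and 2000 Hz (a common choice, assumed here).
    edges = np.geomspace(300.0, 2000.0, n_bands + 1)
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sample_rate)

    band_energy = np.zeros((n_frames, n_bands))
    for t in range(n_frames):
        frame = signal[t * hop: t * hop + frame_len] * window
        spectrum = np.abs(np.fft.rfft(frame)) ** 2
        for b in range(n_bands):
            in_band = (freqs >= edges[b]) & (freqs < edges[b + 1])
            band_energy[t, b] = spectrum[in_band].sum()

    # E(t, b) - E(t, b+1): 32 adjacent-band energy differences per frame.
    band_diff = band_energy[:, :-1] - band_energy[:, 1:]
    # Frame-to-frame difference of those differences, thresholded at zero -> 32 bits per frame.
    bits = (band_diff[1:] - band_diff[:-1]) > 0
    # Pack each row of 32 bits into a single integer sub-fingerprint.
    weights = (1 << np.arange(31, -1, -1, dtype=np.int64)).astype(np.uint64)
    return (bits.astype(np.uint64) * weights).sum(axis=1)
```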
The present invention extracts the audio fingerprint of each sub-segment as follows.
First, a Fourier transform is applied to the audio data and the position of the energy maximum, i.e. the spectral peak point, is extracted from the spectrum of every frame. Peak selection comprises the following steps: determining candidate peak points, and selecting peak points forward and backward from the candidate peak points using a threshold vector.
Second, a candidate region is determined centred on the maximum point among the peak points; two extreme points are selected within the candidate region and, together with the maximum point, form a triangle vector that serves as the audio fingerprint. The candidate region consists of the m nodes following the maximum point in time order, where m is a natural number greater than 2.
All audio fingerprints are mapped to integers used as hash keys and inserted into a hash table.
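A minimal sketch of this peak-point scheme follows. The peak-acceptance rule (a frame maximum must exceed a multiple of the frame's median energy), the choice of the two extreme points as the highest- and lowest-frequency peaks inside the m-node region, and the packing of the triangle into an integer hash key are all assumptions; the paragraphs above only fix the overall structure (per-frame spectral maxima, an m-node candidate region, a triangle vector hashed into a table).

```python
# Sketch of peak-point fingerprinting and hash-table insertion; the thresholding
# rule and the integer packing are illustrative assumptions.
import numpy as np
from collections import defaultdict

def spectral_peaks(frame_spectra, prominence=5.0):
    """One candidate peak (time index, frequency bin of the energy maximum) per frame.
    frame_spectra: iterable of per-frame power spectra, e.g. np.abs(np.fft.rfft(frame)) ** 2."""
    peaks = []
    for t, spectrum in enumerate(frame_spectra):
        k = int(np.argmax(spectrum))
        # Assumed acceptance rule: the maximum must stand out against the frame's median energy.
        if spectrum[k] > prominence * np.median(spectrum):
            peaks.append((t, k))
    return peaks

def triangle_fingerprints(peaks, m=5):
    """Pair each anchor peak with the two frequency extremes among the next m peaks."""
    fingerprints = []
    for i, (t0, f0) in enumerate(peaks):
        region = peaks[i + 1: i + 1 + m]
        if len(region) < 2:
            continue
        region_freqs = [f for _, f in region]
        t1, f1 = region[int(np.argmax(region_freqs))]   # highest-frequency point in the region
        t2, f2 = region[int(np.argmin(region_freqs))]   # lowest-frequency point in the region
        # Mixed-radix packing of the triangle (anchor and extreme frequencies, time offsets)
        # into a single integer hash key; the field sizes are arbitrary choices.
        key = (((f0 * 4096 + f1) * 4096 + f2) * 256 + (t1 - t0)) * 256 + (t2 - t0)
        fingerprints.append((key, t0))
    return fingerprints

def build_hash_table(fingerprints):
    """Insert every (hash key, anchor time) pair into a hash table."""
    table = defaultdict(list)
    for key, t0 in fingerprints:
        table[key].append(t0)
    return table
```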
Step S20: if the first operation meets the preset condition, the terminal device display interface shows the voice information to be input by the user and the method proceeds to step S30; otherwise it returns to step S10 and continues monitoring.
When the terminal device detects the first operation of the user and judges it to be a long press on the touch screen or on a designated button, the display interface of the terminal device shows the stored text information corresponding to the previously input voice information; if the first operation is not such a long press, the operation is ignored and the device waits for the user's next operation.
Step S30: the user inputs the voice information described in step S20, and the audio fingerprint of the input voice information is extracted.
The user speaks the voice information corresponding to the text shown on the display interface of the terminal device, and audio fingerprints are extracted from the voice information input by the user.
Extracting the audio fingerprint of the user's input voice information in step S30 specifically includes: dividing the input voice into several sub-segments according to the keywords extracted in step S00, and extracting an audio fingerprint P'_i for each sub-segment.
The audio fingerprint of each sub-segment is extracted as follows.
First, a Fourier transform is applied to the audio data and the position of the energy maximum, i.e. the spectral peak point, is extracted from the spectrum of every frame. Peak selection comprises the following steps: determining candidate peak points, and selecting peak points forward and backward from the candidate peak points using a threshold vector.
Second, a candidate region is determined centred on the maximum point among the peak points; two extreme points are selected within the candidate region and, together with the maximum point, form a triangle vector that serves as the audio fingerprint. The candidate region consists of the m nodes following the maximum point in time order, where m is a natural number greater than 2.
All audio fingerprints are mapped to integers used as hash keys and inserted into a hash table.
Step S40: the audio fingerprint is matched against the audio fingerprint pre-stored in the terminal device.
Matching the audio fingerprint against the audio fingerprint pre-stored in the terminal device in step S40 specifically includes: calculating the similarity of the audio fingerprints from the importance α_i of the position of each keyword in the input text, the pre-stored audio fingerprints P_i and the audio fingerprints P'_i of the user's input voice information; if the similarity is greater than a predetermined threshold, the audio fingerprint matches the audio fingerprint pre-stored in the terminal device, otherwise the match fails.
The pre-stored audio fingerprints P_i and the audio fingerprints P'_i of the user's input voice information correspond to hash keys in the hash table.
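The text does not spell out the per-segment similarity or the exact weighted aggregation, so the sketch below uses the Jaccard overlap of each sub-segment's hash-key sets and a normalised α_i-weighted sum as illustrative assumptions; the 0.6 threshold is likewise assumed.

```python
# Sketch of the importance-weighted matching of step S40 over per-segment hash-key sets.

def segment_similarity(stored_keys, input_keys):
    """Jaccard overlap between the hash-key sets of one stored and one input sub-segment."""
    stored, fresh = set(stored_keys), set(input_keys)
    if not stored and not fresh:
        return 0.0
    return len(stored & fresh) / len(stored | fresh)

def weighted_similarity(stored_fps, input_fps, importances):
    """stored_fps[i], input_fps[i]: hash keys of sub-segment i; importances[i]: alpha_i."""
    total_weight = sum(importances)
    score = sum(alpha * segment_similarity(stored, fresh)
                for alpha, stored, fresh in zip(importances, stored_fps, input_fps))
    return score / total_weight if total_weight else 0.0

def matches(stored_fps, input_fps, importances, threshold=0.6):
    """Step S40/S50 decision: succeed when the weighted similarity clears the threshold."""
    return weighted_similarity(stored_fps, input_fps, importances) >= threshold
```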
Step S50: if the match succeeds, the terminal device unlocks; otherwise the method returns to step S10 and continues monitoring.
After step S50 there is a further step S60: after a successful match, the pre-stored audio fingerprints are refined based on the voice information and audio fingerprints of this input.
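Step S60 is only described as refining the stored fingerprints; one simple reading, shown below as an assumption rather than the patent's prescription, is to merge the hash keys of a successfully matched input into the stored per-segment sets so that natural variation in the user's voice is gradually absorbed.

```python
# Hypothetical refinement for step S60: union the newly matched hash keys into the stored sets.
from typing import List, Set

def refine_stored_fingerprints(stored_fps: List[Set[int]], input_fps: List[Set[int]]) -> List[Set[int]]:
    """stored_fps and input_fps hold one hash-key set per keyword sub-segment."""
    return [stored | fresh for stored, fresh in zip(stored_fps, input_fps)]
```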
Those skilled in the art will further appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of their functions. Whether these functions are implemented in hardware or software depends on the specific application and the design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present invention.
The steps of the methods or algorithms described in connection with the embodiments disclosed herein may be implemented in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The specific embodiments described above further explain the purpose, technical solutions and beneficial effects of the present invention in detail. It should be understood that the above is merely a specific embodiment of the present invention and is not intended to limit its protection scope; any modification, equivalent replacement or improvement made within the scope of the present invention shall be included within its protection scope.
Claims (8)
1. An identity authentication method based on audio fingerprints, comprising the following steps:
Step S10: a terminal device detects a first operation of a user;
Step S20: if the first operation meets a preset condition, the terminal device display interface shows the voice information to be input by the user and the method proceeds to step S30; otherwise it returns to step S10 and continues monitoring;
Step S30: the user inputs the voice information described in step S20, and the audio fingerprint of the input voice information is extracted;
Step S40: the audio fingerprint is matched against the audio fingerprint pre-stored in the terminal device;
Step S50: if the match succeeds, the terminal device unlocks; otherwise the method returns to step S10 and continues monitoring.
2. The method according to claim 1, characterized in that before step S10 there is an audio fingerprint acquisition step S00, which specifically includes: the user inputs voice information and the text corresponding to that voice information, both of which are saved; audio fingerprints are extracted from the input voice information, and the extracted fingerprints are saved.
3. The method according to claim 2, characterized in that the extraction of audio fingerprints from the input voice information in step S00 specifically includes: performing word segmentation on the text input by the user that corresponds to the voice information; extracting n keywords K_i (i = 1, 2, ..., n, where n is a natural number greater than 2) using the TF-IDF method; obtaining the importance k_i of the position of each keyword in the input text; dividing the voice information into several sub-segments according to the keywords; and extracting an audio fingerprint P_i for each sub-segment.
4. The method according to claim 1, characterized in that the first operation in step S10 is a long press on the touch screen or on a designated button.
5. The method according to claim 1, characterized in that the voice information shown to the user on the terminal device display interface in step S20 is the text information input by the user in step S00.
6. The method according to claim 3, characterized in that extracting the audio fingerprint of the user's input voice information in step S30 specifically includes: dividing the input voice into several sub-segments according to the keywords extracted in step S00, and extracting an audio fingerprint P'_i for each sub-segment.
7. The method according to claim 6, characterized in that matching the audio fingerprint against the audio fingerprint pre-stored in the terminal device in step S40 specifically includes: calculating the similarity of the audio fingerprints from the importance α_i of the position of each keyword in the input text, the pre-stored audio fingerprints P_i and the audio fingerprints P'_i of the user's input voice information; if the similarity is greater than a predetermined threshold, the audio fingerprint matches the audio fingerprint pre-stored in the terminal device, otherwise the match fails.
8. The method according to claim 6, characterized in that after step S50 there is a further step S60: after a successful match, the pre-stored audio fingerprints are refined based on the voice information and audio fingerprints of this input.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811095315.3A CN109117622B (en) | 2018-09-19 | 2018-09-19 | Identity authentication method based on audio fingerprints |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811095315.3A CN109117622B (en) | 2018-09-19 | 2018-09-19 | Identity authentication method based on audio fingerprints |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109117622A (en) | 2019-01-01
CN109117622B (en) | 2020-09-01
Family
ID=64859819
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811095315.3A Active CN109117622B (en) | 2018-09-19 | 2018-09-19 | Identity authentication method based on audio fingerprints |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109117622B (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8036464B2 (en) * | 2007-09-07 | 2011-10-11 | Satyam Computer Services Limited | System and method for automatic segmentation of ASR transcripts |
CN101465123A (en) * | 2007-12-20 | 2009-06-24 | 株式会社东芝 | Verification method and device for speaker authentication and speaker authentication system |
CN103685185A (en) * | 2012-09-14 | 2014-03-26 | 上海掌门科技有限公司 | Mobile equipment voiceprint registration and authentication method and system |
CN103440313A (en) * | 2013-08-27 | 2013-12-11 | 复旦大学 | Music retrieval system based on audio fingerprint features |
CN103888606A (en) * | 2014-03-11 | 2014-06-25 | 上海乐今通信技术有限公司 | Mobile terminal and unlocking method thereof |
CN106098068A (en) * | 2016-06-12 | 2016-11-09 | 腾讯科技(深圳)有限公司 | A kind of method for recognizing sound-groove and device |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110223709A (en) * | 2019-05-31 | 2019-09-10 | 维沃移动通信有限公司 | A kind of recording frequency spectrum display methods and terminal device |
CN110223709B (en) * | 2019-05-31 | 2021-08-27 | 维沃移动通信有限公司 | Recorded audio spectrum display method and terminal equipment |
CN111506888A (en) * | 2020-04-15 | 2020-08-07 | 厦门快商通科技股份有限公司 | Identity authentication method, device and equipment based on audio fingerprints |
CN112214635A (en) * | 2020-10-23 | 2021-01-12 | 昆明理工大学 | Fast audio retrieval method based on cepstrum analysis |
CN113392262A (en) * | 2020-11-26 | 2021-09-14 | 腾讯科技(北京)有限公司 | Music identification method, recommendation method, device, equipment and storage medium |
WO2024077588A1 (en) * | 2022-10-14 | 2024-04-18 | Qualcomm Incorporated | Voice-based user authentication |
Also Published As
Publication number | Publication date |
---|---|
CN109117622B (en) | 2020-09-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109117622A (en) | Identity authentication method based on audio fingerprints | |
US9612791B2 (en) | Method, system and storage medium for monitoring audio streaming media | |
Haitsma et al. | A highly robust audio fingerprinting system with an efficient search strategy | |
Seo et al. | Audio fingerprinting based on normalized spectral subband centroids | |
CN107293307B (en) | Audio detection method and device | |
EP3023882B1 (en) | Method and apparatus for generating fingerprint of an audio signal | |
US20140280304A1 (en) | Matching versions of a known song to an unknown song | |
US9224385B1 (en) | Unified recognition of speech and music | |
Maiorana et al. | User adaptive fuzzy commitment for signature template protection and renewability | |
CN105989000B (en) | Audio-video copy detection method and device | |
Zhang et al. | Spectrogram-based Efficient Perceptual Hashing Scheme for Speech Identification. | |
Wu et al. | Robust and blind audio watermarking algorithm in dual domain for overcoming synchronization attacks | |
CN109271501B (en) | Audio database management method and system | |
EP1761895A1 (en) | Searching for a scaling factor for watermark detection | |
Zhang et al. | ASLNet: An Encoder‐Decoder Architecture for Audio Splicing Detection and Localization | |
Haitsma et al. | An efficient database search strategy for audio fingerprinting | |
Wu et al. | Audio watermarking algorithm with a synchronization mechanism based on spectrum distribution | |
Kekre et al. | A review of audio fingerprinting and comparison of algorithms | |
CN104637496B (en) | Computer system and audio comparison method | |
Zhang et al. | An efficient retrieval algorithm of encrypted speech based on inverse fast Fourier transform and measurement matrix | |
You et al. | Music Identification System Using MPEG‐7 Audio Signature Descriptors | |
CN108198573B (en) | Audio recognition method and device, storage medium and electronic equipment | |
Seo et al. | Affine transform resilient image fingerprinting | |
Wang et al. | Audio fingerprint based on spectral flux for audio retrieval | |
CN114444464A (en) | Document detection processing method and device, storage medium and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||