CN112397047B - Speech synthesis method, device, electronic device and readable storage medium - Google Patents
- Publication number
- CN112397047B (application CN202011442571.2A)
- Authority
- CN
- China
- Prior art keywords
- text
- vector
- standard
- matrix
- phoneme
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/02—Methods for producing synthetic speech; Speech synthesisers
- G10L13/04—Details of speech synthesis systems, e.g. synthesiser structure or memory management
- G10L13/08—Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
Abstract
The invention relates to speech synthesis technology and discloses a speech synthesis method comprising the following steps: obtaining sample audio, and performing sound feature extraction, conversion, and vectorization processing on the sample audio to obtain a standard speech vector; when a text to be synthesized is received, performing phoneme conversion on it to obtain a text phoneme sequence; performing vector conversion on the text phoneme sequence to obtain a text matrix; performing vector splicing on the standard speech vector and the text matrix to obtain a target matrix; extracting spectral features from the target matrix to obtain spectral feature information; and performing speech synthesis on the spectral feature information by using a preset vocoder to obtain synthesized audio. The invention also relates to blockchain technology; the spectral feature information can be stored in a blockchain. The invention further provides a speech synthesis apparatus, an electronic device, and a readable storage medium. The invention can improve the flexibility of speech synthesis.
Description
Technical Field
The present invention relates to the field of speech synthesis, and in particular, to a speech synthesis method, apparatus, electronic device, and readable storage medium.
Background
With the development of artificial intelligence, speech synthesis has become one of its important components: it can convert arbitrary text into standard, fluent speech in real time, effectively giving a machine an artificial mouth. Speech synthesis technology is therefore receiving more and more attention.
However, current speech synthesis methods can only synthesize text into speech of a fixed style or language; for example, a system that can only synthesize Chinese text into Beijing-accented Mandarin cannot produce Sichuan-accented or Japanese speech. Such methods cannot meet people's demand for multi-style speech synthesis, so the flexibility of speech synthesis is poor.
Disclosure of Invention
The invention provides a speech synthesis method, a speech synthesis apparatus, an electronic device, and a computer-readable storage medium, and mainly aims to improve the flexibility of speech synthesis.
In order to achieve the above object, the present invention provides a speech synthesis method, including:
obtaining sample audio, and performing sound feature extraction, conversion, and vectorization processing on the sample audio to obtain a standard speech vector;
when receiving a text to be synthesized, performing phoneme conversion on the text to be synthesized to obtain a text phoneme sequence;
performing vector conversion on the text phoneme sequence to obtain a text matrix;
performing vector splicing on the standard speech vector and the text matrix to obtain a target matrix;
extracting spectral features from the target matrix to obtain spectral feature information;
and performing speech synthesis on the spectral feature information by using a preset vocoder to obtain synthesized audio.
Optionally, the performing sound feature extraction, conversion, and vectorization processing on the sample audio to obtain a standard speech vector includes:
performing sound feature extraction and conversion on the sample audio to obtain a target spectrogram;
and extracting features from the target spectrogram by using a pre-constructed picture classification model to obtain the standard speech vector.
Optionally, the performing sound feature extraction and conversion on the sample audio to obtain a target spectrogram includes:
resampling the sample audio to obtain a digital voice signal;
performing pre-emphasis on the digital voice signal to obtain a standard digital voice signal;
and performing feature conversion on the standard digital voice signal to obtain the target spectrogram.
Optionally, the extracting features from the target spectrogram by using the pre-constructed picture classification model to obtain the standard speech vector includes:
obtaining the outputs of all nodes of a fully connected layer contained in the picture classification model to obtain a target spectrogram feature value set;
and longitudinally combining the feature values in the target spectrogram feature value set according to the order of the nodes of the fully connected layer to obtain the standard speech vector.
Optionally, the performing feature conversion on the standard digital voice signal to obtain the target spectrogram includes:
mapping the standard digital voice signal into the frequency domain by using a preset sound processing algorithm to obtain the target spectrogram.
Optionally, the performing vector splicing on the standard speech vector and the text matrix to obtain a target matrix includes:
calculating the phoneme frame length of each phoneme in the text phoneme sequence by using a preset algorithm model to obtain a phoneme frame length sequence;
converting the phoneme frame length sequence into a phoneme frame length vector;
transversely splicing the phoneme frame length vector and the text matrix to obtain a standard text matrix;
and longitudinally splicing the standard speech vector with each column of the standard text matrix to obtain the target matrix.
Optionally, the performing phoneme conversion on the text to be synthesized to obtain a text phoneme sequence includes:
deleting punctuation marks of the text to be synthesized to obtain a standard text;
and marking phonemes corresponding to each character in the standard text by using a preset phonetic symbol rule to obtain the text phoneme sequence.
In order to solve the above problems, the present invention also provides a speech synthesis apparatus, the apparatus comprising:
The audio processing module is used for acquiring sample audio, carrying out sound feature extraction conversion and vectorization processing on the sample audio, and obtaining a standard speech vector;
the text processing module is used for carrying out phoneme conversion on the text to be synthesized to obtain a text phoneme sequence when receiving the text to be synthesized; vector conversion is carried out on the text phoneme sequence to obtain a text matrix; vector splicing is carried out on the standard voice vector and the text matrix to obtain a target matrix;
The voice synthesis module is used for extracting the frequency spectrum characteristics of the target matrix to obtain frequency spectrum characteristic information; and performing voice synthesis on the frequency spectrum characteristic information by using a preset vocoder to obtain synthesized audio.
In order to solve the above-mentioned problems, the present invention also provides an electronic apparatus including:
a memory storing at least one computer program; and
a processor that executes the computer program stored in the memory to implement the speech synthesis method described above.
In order to solve the above-described problems, the present invention also provides a computer-readable storage medium having stored therein at least one computer program that is executed by a processor in an electronic device to implement the above-described speech synthesis method.
According to the embodiments of the invention, sound feature extraction, conversion, and vectorization processing are performed on sample audio to obtain a standard speech vector. When a text to be synthesized is received, phoneme conversion is performed on it to obtain a text phoneme sequence, which eliminates the pronunciation differences between different types of characters and makes speech synthesis more flexible. Vector conversion is then performed on the text phoneme sequence to obtain a text matrix, and the standard speech vector and the text matrix are vector-spliced into a target matrix, flexibly combining the voice characteristics with the features of the text to be synthesized and thereby ensuring flexible synthesis of the subsequent speech. Finally, spectral features are extracted from the target matrix to obtain spectral feature information, and a preset vocoder performs speech synthesis on the spectral feature information to obtain synthesized audio. The speech synthesis method, apparatus, electronic device, and computer-readable storage medium provided by the embodiments of the invention therefore improve the flexibility of speech synthesis.
Drawings
FIG. 1 is a flow chart of a speech synthesis method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a target spectrogram obtained in a speech synthesis method according to an embodiment of the present invention;
FIG. 3 is a flow chart of a standard speech vector obtained in a speech synthesis method according to an embodiment of the invention;
FIG. 4 is a schematic block diagram of a speech synthesis apparatus according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the internal structure of an electronic device for implementing a speech synthesis method according to an embodiment of the present invention.
The achievement of the objects, functional features, and advantages of the present invention will be further described with reference to the accompanying drawings and in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The embodiments of the present application provide a speech synthesis method. The execution subject of the speech synthesis method includes, but is not limited to, at least one of a server, a terminal, and other electronic devices that can be configured to execute the method provided by the embodiments of the present application. In other words, the speech synthesis method may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The server includes, but is not limited to, a single server, a server cluster, a cloud server, a cloud server cluster, and the like.
Referring to fig. 1, a flow chart of a speech synthesis method according to an embodiment of the present invention is shown, where in the embodiment of the present invention, the speech synthesis method includes:
S1, acquiring sample audio, and carrying out sound feature extraction conversion and vectorization processing on the sample audio to obtain a standard speech vector;
In the embodiment of the present invention, the sample audio is speech data of the target speaker whose voice is to be generated later; for example, if subsequent text is to be synthesized into speaker A's voice, the sample audio is speech data of speaker A.
Further, in order to synthesize speech for subsequent text more accurately, the invention performs feature extraction processing on the sample audio to obtain the standard speech vector.
Because raw speech data is large and difficult to process, sound feature extraction and conversion are performed on the sample audio to obtain a spectrogram.
In detail, in the embodiment of the present invention, referring to FIG. 2, the performing sound feature extraction and conversion on the sample audio to obtain a target spectrogram includes:
S11, resampling the sample audio to obtain a digital voice signal;
In the embodiment of the present invention, in order to facilitate data processing of the sample audio, the sample audio is resampled to obtain the digital voice signal; preferably, an analog-to-digital converter is used to resample the sample audio.
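As an illustrative sketch only (not part of the claimed method), the resampling step can be approximated with the librosa library; the 16 kHz target rate and the file name are assumptions:

```python
import librosa

# Load the sample audio and resample it during loading; the 16 kHz rate
# and the file name are assumed values for illustration only.
waveform, sample_rate = librosa.load("sample_audio.wav", sr=16000)
# waveform is a 1-D float array: the digital voice signal.
```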
S12, pre-emphasis is carried out on the digital voice signal to obtain a standard digital voice signal;
in detail, the pre-emphasis operation is performed by using the following formula:
y(t) = x(t) - μ·x(t-1)
where x(t) is the digital voice signal, t is time, y(t) is the standard digital voice signal, and μ is a preset adjustment value for the pre-emphasis operation; preferably, μ takes a value in [0.9, 1.0].
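A minimal numpy sketch of the pre-emphasis formula above; the default μ = 0.97 is an assumed choice within the stated range:

```python
import numpy as np

def pre_emphasis(x: np.ndarray, mu: float = 0.97) -> np.ndarray:
    """Apply y(t) = x(t) - mu * x(t-1); the first sample is left unchanged."""
    y = np.copy(x)
    y[1:] = x[1:] - mu * x[:-1]
    return y
```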
S13, performing feature conversion on the standard digital voice signal to obtain the target spectrogram;
In the embodiment of the invention, the standard digital voice signal only shows how the audio changes in the time domain and cannot show the audio characteristics of the signal. In order to present those audio characteristics more intuitively and clearly, feature conversion is performed on the standard digital voice signal.
In detail, in the embodiment of the present invention, performing feature conversion on the standard digital voice signal includes: mapping the standard digital voice signal into the frequency domain by using a preset sound processing algorithm to obtain the target spectrogram. Preferably, in the embodiment of the present invention, the sound processing algorithm is a mel filtering algorithm.
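Continuing the sketch, the mapping into the frequency domain with a mel filter bank might look as follows; the STFT and filter-bank parameters are illustrative assumptions, not values taken from the patent:

```python
import librosa
import numpy as np

def to_target_spectrogram(signal: np.ndarray, sr: int = 16000) -> np.ndarray:
    """Map a pre-emphasized signal into the frequency domain via mel filtering."""
    mel = librosa.feature.melspectrogram(
        y=signal, sr=sr, n_fft=1024, hop_length=256, n_mels=80
    )
    return librosa.power_to_db(mel)  # log scale makes the picture clearer
```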
Further, in order to further simplify the data and improve processing efficiency, the embodiment of the present invention performs vectorization processing on the target spectrogram, including: extracting features from the target spectrogram by using a pre-constructed picture classification model to obtain the standard speech vector. Preferably, in the embodiment of the present invention, the pre-constructed picture classification model is a residual network model trained on a historical spectrogram set, where the historical spectrogram set is a set of multiple spectrograms of the same type as, but with different content from, the target spectrogram.
In detail, in the embodiment of the present invention, referring to FIG. 3, the extracting features from the target spectrogram by using the pre-constructed picture classification model to obtain the standard speech vector includes:
S21, obtaining the outputs of all nodes of a fully connected layer contained in the picture classification model to obtain a target spectrogram feature value set;
For example, if the fully connected layer of the picture classification model contains 1000 nodes in total, inputting a target spectrogram T into the model yields 1000 node output values, which form the target spectrogram feature value set of T; since each node's output is one feature value of T, the set contains 1000 feature values in total.
S22, longitudinally combining the feature values in the target spectrogram feature value set according to the order of the nodes of the fully connected layer to obtain the standard speech vector;
For example, suppose the fully connected layer contains 3 nodes, in order a first node, a second node, and a third node, and the feature value set of a target spectrogram A contains the 3 feature values 3, 5, and 1, where feature value 1 is the output of the first node, feature value 3 the output of the second node, and feature value 5 the output of the third node. Longitudinally combining the three feature values in node order gives the standard speech vector of target spectrogram A, namely the column vector (1, 3, 5)ᵀ.
S2, when receiving a text to be synthesized, performing phoneme conversion on the text to be synthesized to obtain a text phoneme sequence;
In the embodiment of the invention, the text to be synthesized is the text for which speech is required. The pronunciation of text in any language can be represented as phonemes under a universal phonetic symbol rule; therefore, in order to eliminate the differences between texts of different languages, the embodiment of the invention performs phoneme conversion on the text to be synthesized to obtain a text phoneme sequence.
In detail, in the embodiment of the present invention, the performing phoneme conversion on the text to be synthesized to obtain the text phoneme sequence includes: deleting the punctuation marks of the text to be synthesized to obtain a standard text; and marking the phoneme corresponding to each character in the standard text by using a preset phonetic symbol rule to obtain the text phoneme sequence. For example, if the preset phonetic symbol rule is the international phonetic symbol rule and the phoneme corresponding to the character 'o' is marked as a, the resulting text phoneme sequence is [a].
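A minimal sketch of the two sub-steps, with a hypothetical one-entry phoneme table standing in for a full phonetic symbol rule:

```python
import re

# Hypothetical phoneme table for illustration; a real system would cover
# every character under a complete phonetic symbol rule.
PHONEME_TABLE = {"o": ["a"]}

def text_to_phoneme_sequence(text: str) -> list:
    standard_text = re.sub(r"[^\w\s]", "", text)   # delete punctuation marks
    sequence = []
    for character in standard_text:
        sequence.extend(PHONEME_TABLE.get(character, []))
    return sequence

print(text_to_phoneme_sequence("o!"))  # ['a'], matching the example above
```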
S3, carrying out vector conversion on the text phoneme sequence to obtain a text matrix;
In the embodiment of the invention, each phoneme in the text phoneme sequence is converted into a column vector by using a one-hot encoding algorithm to obtain the text matrix.
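For example, a one-hot encoding of the phoneme sequence could be sketched as follows (the five-phoneme inventory is an assumption chosen to match the 5×4 example used later):

```python
import numpy as np

def phonemes_to_text_matrix(sequence, inventory):
    """One-hot encode each phoneme as a column; columns follow sequence order."""
    index = {p: i for i, p in enumerate(inventory)}
    matrix = np.zeros((len(inventory), len(sequence)))
    for col, phoneme in enumerate(sequence):
        matrix[index[phoneme], col] = 1.0
    return matrix

# A 5-phoneme inventory and a 4-phoneme sequence give a 5x4 text matrix.
text_matrix = phonemes_to_text_matrix(["a", "b", "c", "a"],
                                      ["a", "b", "c", "d", "e"])
```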
S4, vector splicing is carried out on the standard voice vector and the text matrix to obtain a target matrix;
In detail, in the embodiment of the present invention, in order to better perform the subsequent speech synthesis, each phoneme in the text phoneme sequence must also be aligned with the speech; that is, the pronunciation duration of each phoneme, i.e., its phoneme frame length, must be determined. The embodiment of the present invention therefore calculates the phoneme frame length of each phoneme in the text phoneme sequence by using a preset algorithm model to obtain a phoneme frame length sequence; the preset algorithm model may be a DNN-HMM network model.
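The sketch below is only a schematic stand-in for the duration model named above: it shows the interface such a model exposes (one frame length per phoneme), not an implementation of DNN-HMM alignment:

```python
import torch
import torch.nn as nn

class DurationPredictor(nn.Module):
    """Schematic duration model: maps phoneme ids to one frame length each."""
    def __init__(self, n_phonemes: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Embedding(n_phonemes, hidden),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, phoneme_ids: torch.Tensor) -> torch.Tensor:
        return self.net(phoneme_ids).squeeze(-1)

# One predicted frame length per phoneme in the sequence.
frame_lengths = DurationPredictor(n_phonemes=5)(torch.tensor([0, 1, 2, 0]))
```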
Further, in the embodiment of the present invention, the phoneme frame length sequence is converted into a phoneme frame length vector, that is, into the corresponding row vector, and the phoneme frame length vector and the text matrix are transversely spliced to obtain the standard text matrix. For example, if the phoneme frame length vector is a 1×4 row vector and the text matrix is a 5×4 matrix, appending the phoneme frame length vector as the sixth row of the text matrix yields a 6×4 standard text matrix.
In detail, in the embodiment of the present invention, the standard speech vector is longitudinally spliced with each column of the standard text matrix to obtain the target matrix. For example, given the 6×4 standard text matrix above and a standard speech vector of length k, stacking the standard speech vector under each column of the standard text matrix yields a (6+k)×4 target matrix.
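The two splicing operations can be reproduced with numpy; the shapes follow the example above and the placeholder values are assumptions:

```python
import numpy as np

text_matrix = np.zeros((5, 4))                    # 5x4 text matrix
frame_length_vector = np.ones((1, 4))             # 1x4 frame-length row
speech_vector = np.array([[1.0], [3.0], [5.0]])   # k x 1 column, k = 3 here

# Transverse splice: append the frame-length row -> 6x4 standard text matrix.
standard_text_matrix = np.vstack([text_matrix, frame_length_vector])

# Longitudinal splice: stack the speech vector under every column.
columns = standard_text_matrix.shape[1]
target_matrix = np.vstack([standard_text_matrix,
                           np.tile(speech_vector, (1, columns))])
print(target_matrix.shape)  # (9, 4), i.e. (6 + k) x 4
```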
S5, extracting spectral features of the target matrix to obtain spectral feature information;
In order to perform the subsequent speech synthesis, the embodiment of the present invention further needs to determine the spectral features of the target matrix; the spectral features may be a Mel spectrum.
In detail, in the embodiment of the invention, spectral feature extraction is performed on the target matrix by using a trained acoustic model to obtain the spectral feature information. Preferably, the acoustic model may be a Transformer model.
Further, before the trained acoustic model is used to perform spectral feature extraction on the target matrix, the method further includes: acquiring a historical text matrix set; labeling each historical text matrix in the historical text matrix set with spectral feature information to obtain a training set; and training the acoustic model with the training set until the acoustic model converges, to obtain the trained acoustic model. The historical text matrix set is a set of multiple historical text matrices, each historical text matrix being a target matrix corresponding to a text different from the text to be synthesized.
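A generic PyTorch training loop consistent with the procedure just described; the Adam optimizer, learning rate, L1 loss, and fixed epoch count are assumed choices, and any module (e.g., a Transformer) can stand in for the acoustic model:

```python
import torch
import torch.nn as nn

def train_acoustic_model(model: nn.Module, training_set, epochs: int = 10):
    """training_set yields (historical text matrix, labelled spectral features)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.L1Loss()  # assumed loss on the spectral targets
    for _ in range(epochs):  # a fixed epoch count approximates convergence
        for text_matrix, spectral_target in training_set:
            optimizer.zero_grad()
            loss = criterion(model(text_matrix), spectral_target)
            loss.backward()
            optimizer.step()
    return model
```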
In another embodiment of the present invention, to ensure the privacy of data, the spectral feature information may be stored in a blockchain node.
S6, performing voice synthesis on the frequency spectrum characteristic information by using a preset vocoder to obtain synthesized audio.
In detail, in the embodiment of the present invention, the spectral feature information is input to a preset vocoder to obtain the synthesized audio.
Preferably, the vocoder is a WORLD vocoder.
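For illustration, the pyworld binding of the WORLD vocoder converts WORLD-style spectral parameters (f0, spectral envelope, aperiodicity) into a waveform. Here the parameters are extracted from a reference recording purely to obtain example inputs of the right shapes; in the method they would come from the spectral feature information:

```python
import numpy as np
import pyworld
import soundfile as sf

# Analysis step, only to obtain example inputs with consistent shapes.
x, fs = sf.read("reference.wav")
f0, spectral_envelope, aperiodicity = pyworld.wav2world(x.astype(np.float64), fs)

# Synthesis step: the vocoder turns spectral feature information into audio.
synthesized_audio = pyworld.synthesize(f0, spectral_envelope, aperiodicity, fs)
sf.write("synthesized.wav", synthesized_audio, fs)
```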
FIG. 4 is a functional block diagram of the speech synthesis apparatus according to the present invention.
The speech synthesis apparatus 100 of the present invention may be installed in an electronic device. Depending on the functions implemented, the speech synthesis apparatus may include an audio processing module 101, a text processing module 102, and a speech synthesis module 103. These modules, which may also be referred to as units, are series of computer program segments that are stored in the memory of the electronic device and can be executed by its processor to perform fixed functions.
In the present embodiment, the functions concerning the respective modules/units are as follows:
the audio processing module 101 is configured to obtain a sample audio, and perform sound feature extraction conversion and vectorization processing on the sample audio to obtain a standard speech vector.
In the embodiment of the present invention, the sample audio is speech data of the target speaker whose voice is to be generated later; for example, if subsequent text is to be synthesized into speaker A's voice, the sample audio is speech data of speaker A.
Further, in order to synthesize speech for subsequent text more accurately, the audio processing module 101 performs feature extraction processing on the sample audio to obtain the standard speech vector.
Because raw speech data is large and difficult to process, the audio processing module 101 performs sound feature extraction and conversion on the sample audio to obtain a target spectrogram.
In detail, in the embodiment of the present invention, the audio processing module 101 performs sound feature extraction and conversion on the sample audio to obtain a target spectrogram as follows:
resampling the sample audio to obtain a digital voice signal;
In the embodiment of the present invention, in order to facilitate data processing of the sample audio, the sample audio is resampled to obtain the digital voice signal; preferably, an analog-to-digital converter is used to resample the sample audio.
Pre-emphasis is carried out on the digital voice signal to obtain a standard digital voice signal;
in detail, the pre-emphasis operation is performed by using the following formula:
y(t) = x(t) - μ·x(t-1)
where x(t) is the digital voice signal, t is time, y(t) is the standard digital voice signal, and μ is a preset adjustment value for the pre-emphasis operation; preferably, μ takes a value in [0.9, 1.0].
Performing feature conversion on the standard digital voice signal to obtain the target spectrogram;
In the embodiment of the invention, the standard digital voice signal only shows how the audio changes in the time domain and cannot show the audio characteristics of the signal. In order to present those audio characteristics more intuitively and clearly, feature conversion is performed on the standard digital voice signal.
In detail, in the embodiment of the present invention, the audio processing module 101 performs feature conversion on the standard digital voice signal, including: mapping the standard digital voice signal into the frequency domain by using a preset sound processing algorithm to obtain the target spectrogram. Preferably, in the embodiment of the present invention, the sound processing algorithm is a mel filtering algorithm.
Further, in order to further simplify the data and improve processing efficiency, the audio processing module 101 according to the embodiment of the present invention performs vectorization processing on the target spectrogram, including: extracting features from the target spectrogram by using a pre-constructed picture classification model to obtain the standard speech vector. Preferably, in the embodiment of the present invention, the pre-constructed picture classification model is a residual network model trained on a historical spectrogram set, where the historical spectrogram set is a set of multiple spectrograms of the same type as, but with different content from, the target spectrogram.
In detail, in the embodiment of the present invention, the audio processing module 101 performs feature extraction on the target spectrogram by using the following means to obtain the standard speech vector, including:
obtaining the outputs of all nodes of a fully connected layer contained in the picture classification model to obtain a target spectrogram feature value set;
For example, if the fully connected layer of the picture classification model contains 1000 nodes in total, inputting a target spectrogram T into the model yields 1000 node output values, which form the target spectrogram feature value set of T; since each node's output is one feature value of T, the set contains 1000 feature values in total.
according to the order of the nodes of the fully connected layer, longitudinally combining the feature values in the target spectrogram feature value set to obtain the standard speech vector;
For example, suppose the fully connected layer contains 3 nodes, in order a first node, a second node, and a third node, and the feature value set of a target spectrogram A contains the 3 feature values 3, 5, and 1, where feature value 1 is the output of the first node, feature value 3 the output of the second node, and feature value 5 the output of the third node. Longitudinally combining the three feature values in node order gives the standard speech vector of target spectrogram A, namely the column vector (1, 3, 5)ᵀ.
The text processing module 102 is configured to perform phoneme conversion on a text to be synthesized to obtain a text phoneme sequence when receiving the text to be synthesized; vector conversion is carried out on the text phoneme sequence to obtain a text matrix; and vector splicing is carried out on the standard voice vector and the text matrix to obtain a target matrix.
In the embodiment of the invention, the text to be synthesized is the text for which speech is required. The pronunciation of text in any language can be represented as phonemes under a universal phonetic symbol rule; therefore, in order to eliminate the differences between texts of different languages, the embodiment of the invention performs phoneme conversion on the text to be synthesized to obtain a text phoneme sequence.
In detail, in the embodiment of the present invention, the text processing module 102 performs phoneme conversion on the text to be synthesized to obtain the text phoneme sequence, which includes: deleting the punctuation marks of the text to be synthesized to obtain a standard text; and marking the phoneme corresponding to each character in the standard text by using a preset phonetic symbol rule to obtain the text phoneme sequence. For example, if the preset phonetic symbol rule is the international phonetic symbol rule and the phoneme corresponding to the character 'o' is marked as a, the resulting text phoneme sequence is [a].
In the embodiment of the present invention, the text processing module 102 converts each phoneme in the text phoneme sequence into a column vector by using a one-hot encoding algorithm to obtain the text matrix.
In detail, in the embodiment of the present invention, in order to better perform the subsequent speech synthesis, each phoneme in the text phoneme sequence must also be aligned with the speech; that is, the pronunciation duration of each phoneme, i.e., its phoneme frame length, must be determined. The text processing module 102 therefore calculates the phoneme frame length of each phoneme in the text phoneme sequence by using a preset algorithm model to obtain a phoneme frame length sequence; the preset algorithm model may be a DNN-HMM network model.
Further, in the embodiment of the present invention, the text processing module 102 converts the phoneme frame length sequence into a phoneme frame length vector, that is, into the corresponding row vector, and transversely splices the phoneme frame length vector and the text matrix to obtain the standard text matrix. For example, if the phoneme frame length vector is a 1×4 row vector and the text matrix is a 5×4 matrix, appending the phoneme frame length vector as the sixth row of the text matrix yields a 6×4 standard text matrix.
In detail, in the embodiment of the present invention, the text processing module 102 longitudinally splices the standard speech vector with each column of the standard text matrix to obtain the target matrix. For example, given the 6×4 standard text matrix above and a standard speech vector of length k, stacking the standard speech vector under each column of the standard text matrix yields a (6+k)×4 target matrix.
The voice synthesis module 103 is configured to perform spectral feature extraction on the target matrix to obtain spectral feature information; and performing voice synthesis on the frequency spectrum characteristic information by using a preset vocoder to obtain synthesized audio.
In order to perform the subsequent speech synthesis, the embodiment of the present invention further needs to determine the spectral features of the target matrix; the spectral features may be a Mel spectrum.
In detail, in the embodiment of the invention, spectral feature extraction is performed on the target matrix by using a trained acoustic model to obtain the spectral feature information. Preferably, the acoustic model may be a Transformer model.
Further, in the embodiment of the present invention, before the speech synthesis module 103 performs spectral feature extraction on the target matrix by using the trained acoustic model, the method further includes: acquiring a historical text matrix set; labeling each historical text matrix in the historical text matrix set with spectral feature information to obtain a training set; and training the acoustic model with the training set until the acoustic model converges, to obtain the trained acoustic model. The historical text matrix set is a set of multiple historical text matrices, each historical text matrix being a target matrix corresponding to a text different from the text to be synthesized.
In another embodiment of the present invention, to ensure the privacy of data, the spectral feature information may be stored in a blockchain node.
In detail, in the embodiment of the present invention, the speech synthesis module 103 inputs the spectral feature information to a preset vocoder to obtain the synthesized audio.
Preferably, the vocoder is a WORLD vocoder.
Fig. 5 is a schematic structural diagram of an electronic device for implementing the speech synthesis method according to the present invention.
The electronic device 1 may comprise a processor 10, a memory 11 and a bus, and may further comprise a computer program, such as a speech synthesis program 12, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, including flash memory, removable hard disks, multimedia cards, card-type memories (e.g., SD or DX memory), magnetic memories, magnetic disks, optical disks, and the like. In some embodiments the memory 11 may be an internal storage unit of the electronic device 1, such as a hard disk of the electronic device 1. In other embodiments the memory 11 may also be an external storage device of the electronic device 1, such as a plug-in removable hard disk, a smart media card (SMC), a Secure Digital (SD) card, or a flash card provided on the electronic device 1. Further, the memory 11 may include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 can be used not only to store application software installed in the electronic device 1 and various types of data, such as the code of a speech synthesis program, but also to temporarily store data that has been output or is to be output.
In some embodiments the processor 10 may be composed of integrated circuits, for example a single packaged integrated circuit, or of multiple integrated circuits packaged with the same or different functions, including one or more central processing units (CPUs), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and the like. The processor 10 is the control unit of the electronic device: it connects the components of the entire electronic device using various interfaces and lines, and executes the functions of the electronic device 1 and processes data by running or executing the programs or modules stored in the memory 11 (e.g., a speech synthesis program) and calling data stored in the memory 11.
The bus may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. The bus is arranged to enable communication between the memory 11, the at least one processor 10, and other components.
FIG. 5 shows only an electronic device with certain components; it will be understood by those skilled in the art that the structure shown in FIG. 5 does not limit the electronic device 1, which may include fewer or more components than shown, combine certain components, or arrange the components differently.
For example, although not shown, the electronic device 1 may further include a power source (such as a battery) for supplying power to each component, and preferably, the power source may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management, and the like are implemented through the power management device. The power supply may also include one or more of any of a direct current or alternating current power supply, recharging device, power failure detection circuit, power converter or inverter, power status indicator, etc. The electronic device 1 may further include various sensors, bluetooth modules, wi-Fi modules, etc., which will not be described herein.
Further, the electronic device 1 may also comprise a network interface, optionally the network interface may comprise a wired interface and/or a wireless interface (e.g. WI-FI interface, bluetooth interface, etc.), typically used for establishing a communication connection between the electronic device 1 and other electronic devices.
The electronic device 1 may optionally further comprise a user interface, which may be a display or an input unit such as a keyboard, and may use a standard wired interface or a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an organic light-emitting diode (OLED) touch display, or the like. The display may also be referred to as a display screen or display unit, and is used for displaying information processed in the electronic device 1 and for displaying a visual user interface.
It should be understood that these embodiments are for illustration only and do not limit the scope of the patent application to this configuration.
The speech synthesis program 12 stored in the memory 11 in the electronic device 1 is a combination of a plurality of computer programs, which, when run in the processor 10, can realize:
obtaining sample audio, and performing sound feature extraction, conversion, and vectorization processing on the sample audio to obtain a standard speech vector;
when receiving a text to be synthesized, performing phoneme conversion on the text to be synthesized to obtain a text phoneme sequence;
performing vector conversion on the text phoneme sequence to obtain a text matrix;
performing vector splicing on the standard speech vector and the text matrix to obtain a target matrix;
extracting spectral features from the target matrix to obtain spectral feature information;
and performing speech synthesis on the spectral feature information by using a preset vocoder to obtain synthesized audio.
In particular, the specific implementation method of the processor 10 on the computer program may refer to the description of the relevant steps in the corresponding embodiment of fig. 1, which is not repeated herein.
Further, if the modules/units integrated in the electronic device 1 are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. The computer-readable medium may be non-volatile or volatile, and may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, and a read-only memory (ROM).
Embodiments of the present invention may also provide a computer readable storage medium storing a computer program which, when executed by a processor of an electronic device, may implement:
obtaining sample audio, and performing sound feature extraction, conversion, and vectorization processing on the sample audio to obtain a standard speech vector;
when receiving a text to be synthesized, performing phoneme conversion on the text to be synthesized to obtain a text phoneme sequence;
performing vector conversion on the text phoneme sequence to obtain a text matrix;
performing vector splicing on the standard speech vector and the text matrix to obtain a target matrix;
extracting spectral features from the target matrix to obtain spectral feature information;
and performing speech synthesis on the spectral feature information by using a preset vocoder to obtain synthesized audio.
Further, the computer-usable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created from the use of blockchain nodes, and the like.
In the several embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units can be realized in a form of hardware or a form of hardware and a form of software functional modules.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
A blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks generated in association using cryptographic methods, each block containing a batch of network transaction information used to verify the validity (anti-counterfeiting) of its information and to generate the next block. A blockchain may include a blockchain underlying platform, a platform product services layer, an application services layer, and the like.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means through software or hardware. Terms such as first and second are used to denote names and do not indicate any particular order.
Finally, it should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.
Claims (9)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011442571.2A CN112397047B (en) | 2020-12-11 | 2020-12-11 | Speech synthesis method, device, electronic device and readable storage medium |
PCT/CN2021/083824 WO2022121176A1 (en) | 2020-12-11 | 2021-03-30 | Speech synthesis method and apparatus, electronic device, and readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011442571.2A CN112397047B (en) | 2020-12-11 | 2020-12-11 | Speech synthesis method, device, electronic device and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112397047A (en) | 2021-02-23
CN112397047B (en) | 2024-11-19
Family
ID=74625646
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011442571.2A Active CN112397047B (en) | 2020-12-11 | 2020-12-11 | Speech synthesis method, device, electronic device and readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112397047B (en) |
WO (1) | WO2022121176A1 (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112397047B (en) * | 2020-12-11 | 2024-11-19 | 平安科技(深圳)有限公司 | Speech synthesis method, device, electronic device and readable storage medium |
CN113096625A (en) * | 2021-03-24 | 2021-07-09 | 平安科技(深圳)有限公司 | Multi-person Buddha music generation method, device, equipment and storage medium |
CN112927677B (en) * | 2021-03-29 | 2023-07-25 | 北京大米科技有限公司 | Speech synthesis method and device |
CN113327578B (en) * | 2021-06-10 | 2024-02-02 | 平安科技(深圳)有限公司 | Acoustic model training method and device, terminal equipment and storage medium |
CN113436608B (en) * | 2021-06-25 | 2023-11-28 | 平安科技(深圳)有限公司 | Double-flow voice conversion method, device, equipment and storage medium |
US11869483B2 (en) * | 2021-10-07 | 2024-01-09 | Nvidia Corporation | Unsupervised alignment for text to speech synthesis using neural networks |
CN114373443A (en) * | 2022-01-14 | 2022-04-19 | 腾讯科技(深圳)有限公司 | Speech synthesis method and apparatus, computing device, storage medium, and program product |
CN114400005A (en) * | 2022-01-18 | 2022-04-26 | 平安科技(深圳)有限公司 | Voice message generation method and device, computer equipment and storage medium |
CN114783406B (en) * | 2022-06-16 | 2022-10-21 | 深圳比特微电子科技有限公司 | Speech synthesis method, apparatus and computer-readable storage medium |
CN116705058B (en) * | 2023-08-04 | 2023-10-27 | 贝壳找房(北京)科技有限公司 | Processing method of multimode voice task, electronic equipment and readable storage medium |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022121176A1 (en) * | 2020-12-11 | 2022-06-16 | 平安科技(深圳)有限公司 | Speech synthesis method and apparatus, electronic device, and readable storage medium |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4966048B2 (en) * | 2007-02-20 | 2012-07-04 | 株式会社東芝 | Voice quality conversion device and speech synthesis device |
US10186252B1 (en) * | 2015-08-13 | 2019-01-22 | Oben, Inc. | Text to speech synthesis using deep neural network with constant unit length spectrogram |
CN109754778B (en) * | 2019-01-17 | 2023-05-30 | 平安科技(深圳)有限公司 | Text speech synthesis method and device and computer equipment |
CN109767752B (en) * | 2019-02-27 | 2023-05-26 | 平安科技(深圳)有限公司 | Voice synthesis method and device based on attention mechanism |
CN111161702B (en) * | 2019-12-23 | 2022-08-26 | 爱驰汽车有限公司 | Personalized speech synthesis method and device, electronic equipment and storage medium |
CN112002305B (en) * | 2020-07-29 | 2024-06-18 | 北京大米科技有限公司 | Speech synthesis method, device, storage medium and electronic equipment |
- 2020-12-11: CN application CN202011442571.2A filed (granted as CN112397047B, active)
- 2021-03-30: WO application PCT/CN2021/083824 filed (publication WO2022121176A1)
Also Published As
Publication number | Publication date |
---|---|
WO2022121176A1 (en) | 2022-06-16 |
CN112397047A (en) | 2021-02-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112397047B (en) | Speech synthesis method, device, electronic device and readable storage medium | |
CN112086086B (en) | Speech synthesis method, device, equipment and computer readable storage medium | |
CN107220235B (en) | Speech recognition error correction method and device based on artificial intelligence and storage medium | |
CN112820269B (en) | Text-to-speech method and device, electronic equipment and storage medium | |
CN112951203B (en) | Speech synthesis method, device, electronic equipment and storage medium | |
CN107707745A (en) | Method and apparatus for extracting information | |
CN113205814B (en) | Voice data labeling method and device, electronic equipment and storage medium | |
CN113555003B (en) | Speech synthesis method, device, electronic equipment and storage medium | |
CN113345431B (en) | Cross-language voice conversion method, device, equipment and medium | |
WO2022121157A1 (en) | Speech synthesis method and apparatus, electronic device and storage medium | |
CN113420556B (en) | Emotion recognition method, device, equipment and storage medium based on multi-mode signals | |
CN111862937A (en) | Singing voice synthesis method, singing voice synthesis device and computer readable storage medium | |
CN113096242A (en) | Virtual anchor generation method and device, electronic equipment and storage medium | |
CN114866807A (en) | Avatar video generation method and device, electronic equipment and readable storage medium | |
WO2022121158A1 (en) | Speech synthesis method and apparatus, and electronic device and storage medium | |
CN114155832A (en) | Speech recognition method, device, equipment and medium based on deep learning | |
CN112201253A (en) | Character marking method and device, electronic equipment and computer readable storage medium | |
CN113990286B (en) | Speech synthesis method, device, equipment and storage medium | |
CN114842880A (en) | Intelligent customer service voice rhythm adjusting method, device, equipment and storage medium | |
CN115511704A (en) | Virtual customer service generation method and device, electronic equipment and storage medium | |
CN114863945A (en) | Text-based voice changing method and device, electronic equipment and storage medium | |
WO2022141867A1 (en) | Speech recognition method and apparatus, and electronic device and readable storage medium | |
CN118135994A (en) | Speech synthesis method, device, equipment and medium | |
CN112489628A (en) | Voice data selection method and device, electronic equipment and storage medium | |
CN116564322A (en) | Voice conversion method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |