
CN112397047B - Speech synthesis method, device, electronic device and readable storage medium - Google Patents


Info

Publication number
CN112397047B
CN112397047B (application CN202011442571.2A)
Authority
CN
China
Prior art keywords
text
vector
standard
matrix
phoneme
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011442571.2A
Other languages
Chinese (zh)
Other versions
CN112397047A (en)
Inventor
陈闽川
马骏
王少军
肖京
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202011442571.2A priority Critical patent/CN112397047B/en
Publication of CN112397047A publication Critical patent/CN112397047A/en
Priority to PCT/CN2021/083824 priority patent/WO2022121176A1/en
Application granted granted Critical
Publication of CN112397047B publication Critical patent/CN112397047B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G10L13/02 Methods for producing synthetic speech; Speech synthesisers
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G10L13/02 Methods for producing synthetic speech; Speech synthesisers
    • G10L13/04 Details of speech synthesis systems, e.g. synthesiser structure or memory management
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00 Speech synthesis; Text to speech systems
    • G10L13/08 Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention relates to speech synthesis technology and discloses a speech synthesis method comprising the following steps: obtaining sample audio, and performing sound feature extraction conversion and vectorization processing on the sample audio to obtain a standard speech vector; when receiving a text to be synthesized, performing phoneme conversion on the text to be synthesized to obtain a text phoneme sequence; performing vector conversion on the text phoneme sequence to obtain a text matrix; performing vector splicing on the standard speech vector and the text matrix to obtain a target matrix; extracting spectral features of the target matrix to obtain spectral feature information; and performing speech synthesis on the spectral feature information by using a preset vocoder to obtain synthesized audio. The invention also relates to blockchain technology: the spectral feature information can be stored in a blockchain. The invention further provides a speech synthesis apparatus, an electronic device, and a readable storage medium. The invention can improve the flexibility of speech synthesis.

Description

Speech synthesis method, apparatus, electronic device and readable storage medium
Technical Field
The present invention relates to the field of speech synthesis, and in particular, to a speech synthesis method, apparatus, electronic device, and readable storage medium.
Background
With the development of artificial intelligence, speech synthesis has become an important component of it: speech synthesis can convert arbitrary text information into standard, fluent speech and read it out in real time, which is equivalent to fitting a machine with an artificial mouth. Speech synthesis technology is therefore receiving more and more attention.
However, current speech synthesis methods can only synthesize text into speech of a single style or language. For example, a method that can only synthesize Chinese text into Beijing-accented Mandarin cannot produce a Sichuan accent or Japanese speech. Such methods cannot meet people's demand for multi-style speech synthesis, so the flexibility of speech synthesis is poor.
Disclosure of Invention
The invention provides a voice synthesis method, a voice synthesis device, electronic equipment and a computer readable storage medium, and mainly aims to improve the flexibility of voice synthesis.
In order to achieve the above object, the present invention provides a speech synthesis method, including:
obtaining sample audio, and carrying out sound feature extraction conversion and vectorization processing on the sample audio to obtain a standard speech vector;
when receiving a text to be synthesized, performing phoneme conversion on the text to be synthesized to obtain a text phoneme sequence;
carrying out vector conversion on the text phoneme sequence to obtain a text matrix;
carrying out vector splicing on the standard speech vector and the text matrix to obtain a target matrix;
extracting spectral features of the target matrix to obtain spectral feature information;
and performing speech synthesis on the spectral feature information by using a preset vocoder to obtain synthesized audio.
Optionally, the performing acoustic feature extraction conversion and vectorization processing on the sample audio to obtain a standard speech vector includes:
extracting and converting sound characteristics of the sample audio to obtain a target spectrogram;
and extracting features of the target spectrogram by using a pre-constructed picture classification model to obtain the standard voice vector.
Optionally, the performing sound feature extraction and conversion on the sample audio to obtain a target spectrogram includes:
resampling the sample audio to obtain a digital voice signal;
pre-emphasis is carried out on the digital voice signal to obtain a standard digital voice signal;
And performing feature conversion on the standard digital voice signal to obtain the target spectrogram.
Optionally, the extracting features of the target spectrogram by using a pre-constructed image classification model to obtain the standard speech vector includes:
Obtaining the output of all nodes of a full-connection layer contained in the picture classification model to obtain a target spectrogram characteristic value set;
and according to the sequence of all the nodes of the full-connection layer, longitudinally combining the characteristic values in the characteristic value set of the target spectrogram to obtain a standard voice vector.
Optionally, the performing feature conversion on the standard digital voice signal to obtain the target spectrogram includes:
and mapping the standard digital voice signal into a frequency domain by using a preset sound processing algorithm to obtain the target spectrogram.
Optionally, the vector splicing the standard speech vector and the text matrix to obtain a target matrix includes:
Calculating the phoneme frame length of each phoneme in the text phoneme sequence by using a preset algorithm model to obtain a phoneme frame length sequence;
Converting the phoneme frame length sequence into a phoneme frame length vector;
transversely splicing the phoneme frame length vector and the text matrix to obtain a standard text matrix;
And carrying out longitudinal splicing on the standard voice vector and each column of the standard text matrix to obtain the target matrix.
Optionally, the performing phoneme conversion on the text to be synthesized to obtain a text phoneme sequence includes:
deleting punctuation marks of the text to be synthesized to obtain a standard text;
and marking phonemes corresponding to each character in the standard text by using a preset phonetic symbol rule to obtain the text phoneme sequence.
In order to solve the above problems, the present invention also provides a speech synthesis apparatus, the apparatus comprising:
The audio processing module is used for acquiring sample audio, carrying out sound feature extraction conversion and vectorization processing on the sample audio, and obtaining a standard speech vector;
the text processing module is used for carrying out phoneme conversion on the text to be synthesized to obtain a text phoneme sequence when receiving the text to be synthesized; vector conversion is carried out on the text phoneme sequence to obtain a text matrix; vector splicing is carried out on the standard voice vector and the text matrix to obtain a target matrix;
The voice synthesis module is used for extracting the frequency spectrum characteristics of the target matrix to obtain frequency spectrum characteristic information; and performing voice synthesis on the frequency spectrum characteristic information by using a preset vocoder to obtain synthesized audio.
In order to solve the above-mentioned problems, the present invention also provides an electronic apparatus including:
A memory storing at least one computer program; and
And a processor executing the computer program stored in the memory to implement the above-described speech synthesis method.
In order to solve the above-described problems, the present invention also provides a computer-readable storage medium having stored therein at least one computer program that is executed by a processor in an electronic device to implement the above-described speech synthesis method.
According to the embodiment of the invention, the sample audio is subjected to sound feature extraction conversion and vectorization processing to obtain a standard speech vector; when receiving a text to be synthesized, performing phoneme conversion on the text to be synthesized to obtain a text phoneme sequence, eliminating the pronunciation difference of different types of characters, and realizing more flexible speech synthesis; vector conversion is carried out on the text phoneme sequence to obtain a text matrix; vector splicing is carried out on the standard voice vector and the text matrix to obtain a target matrix, so that flexible combination of voice characteristics and characteristics of a text to be synthesized is realized, and flexible synthesis of subsequent voices is ensured; extracting spectral features of the target matrix to obtain spectral feature information; and performing voice synthesis on the frequency spectrum characteristic information by using a preset vocoder to obtain synthesized audio. Therefore, the voice synthesis method, the voice synthesis device, the electronic equipment and the computer readable storage medium provided by the embodiment of the invention improve the flexibility of voice synthesis.
Drawings
FIG. 1 is a flow chart of a speech synthesis method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a target spectrogram obtained in a speech synthesis method according to an embodiment of the present invention;
FIG. 3 is a flow chart of a standard speech vector obtained in a speech synthesis method according to an embodiment of the invention;
FIG. 4 is a schematic block diagram of a speech synthesis apparatus according to an embodiment of the present invention;
fig. 5 is a schematic diagram of an internal structure of an electronic device for implementing a speech synthesis method according to an embodiment of the present invention;
the achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The embodiment of the application provides a voice synthesis method. The execution subject of the speech synthesis method includes, but is not limited to, at least one of a server, a terminal, and the like, which can be configured to execute the method provided by the embodiment of the application. In other words, the speech synthesis method may be performed by software or hardware installed in a terminal device or a server device, and the software may be a blockchain platform. The service end includes but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like.
Referring to fig. 1, a flow chart of a speech synthesis method according to an embodiment of the present invention is shown, where in the embodiment of the present invention, the speech synthesis method includes:
S1, acquiring sample audio, and carrying out sound feature extraction conversion and vectorization processing on the sample audio to obtain a standard speech vector;
in the embodiment of the present invention, the sample audio is speech data of the target speaker whose voice is to be reproduced later, for example: if subsequent text is to be synthesized into the speech of speaker A, the sample audio is speech data of speaker A.
Furthermore, in order to better and more accurately synthesize the voice of the subsequent text, the invention carries out feature extraction processing on the sample audio to obtain the standard voice vector.
Because speech data is large and not easy to process directly, sound feature extraction and conversion are performed on the sample audio to obtain a target spectrogram.
In detail, in the embodiment of the present invention, referring to fig. 2, the performing acoustic feature extraction and conversion on the sample audio to obtain a target spectrogram includes:
s11, resampling the sample audio to obtain a digital voice signal;
In the embodiment of the present invention, in order to facilitate data processing on the sample audio, resampling is performed on the sample audio to obtain the digital voice signal; preferably, an analog-to-digital converter is used to resample the sample audio.
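As a library-free illustration of the resampling step, linear interpolation can convert the sample audio to a fixed sampling rate. The 44.1 kHz source and 16 kHz target rates below are assumptions for the example; the patent does not specify concrete rates:

```python
import numpy as np

def resample_linear(audio, orig_sr, target_sr):
    """Resample a 1-D signal to target_sr using linear interpolation."""
    duration = len(audio) / orig_sr
    n_target = int(round(duration * target_sr))
    # Original and target sample positions on a common time axis.
    t_orig = np.arange(len(audio)) / orig_sr
    t_target = np.arange(n_target) / target_sr
    return np.interp(t_target, t_orig, audio)

# Example: downsample a 1-second 44.1 kHz sine tone to 16 kHz.
sr_in, sr_out = 44100, 16000
x = np.sin(2 * np.pi * 440 * np.arange(sr_in) / sr_in)
y = resample_linear(x, sr_in, sr_out)
print(len(y))  # 16000
```

A production system would typically use a band-limited resampler to avoid aliasing; linear interpolation is only meant to show the shape of the operation.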
S12, pre-emphasis is carried out on the digital voice signal to obtain a standard digital voice signal;
in detail, the pre-emphasis operation is performed by using the following formula:
y(t)=x(t)-μx(t-1)
Wherein x (t) is the digital voice signal, t is time, y (t) is the standard digital voice signal, μ is a preset adjustment value of the pre-emphasis operation, and preferably, the value range of μ is [0.9,1.0].
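The pre-emphasis formula above translates directly into a vectorized implementation. A minimal NumPy sketch, with μ = 0.97 chosen as an example value inside the stated [0.9, 1.0] range:

```python
import numpy as np

def pre_emphasis(x, mu=0.97):
    """y(t) = x(t) - mu * x(t-1); the first sample is passed through unchanged."""
    y = np.empty_like(x)
    y[0] = x[0]
    y[1:] = x[1:] - mu * x[:-1]
    return y

x = np.array([1.0, 1.0, 1.0, 1.0])
print(pre_emphasis(x))  # [1.   0.03 0.03 0.03]
```

On a constant signal the output drops to 1 - μ after the first sample, showing how pre-emphasis suppresses low-frequency content and boosts high frequencies.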
S13, performing feature conversion on the standard digital voice signal to obtain the target spectrogram;
In the embodiment of the invention, the standard digital voice signal can only show how the audio changes in the time domain and cannot show the audio characteristics of the standard voice signal. In order to present those audio characteristics more intuitively and clearly, feature conversion is performed on the standard digital voice signal.
In detail, in the embodiment of the present invention, performing feature conversion on the standard digital voice signal includes: and mapping the standard digital voice signal into a frequency domain by using a preset sound processing algorithm to obtain the target spectrogram. Preferably, in the embodiment of the present invention, the sound processing algorithm is a mel filtering algorithm.
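As an illustrative sketch of the mel filtering step, the following NumPy code computes a short-time power spectrum and applies a triangular mel filterbank to map the signal into the frequency domain. The frame size, hop length and number of mel bands are assumed values, not parameters taken from the patent:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_spectrogram(x, sr, n_fft=512, hop=128, n_mels=40):
    """Windowed power spectrum followed by a triangular mel filterbank."""
    window = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * window
              for i in range(0, len(x) - n_fft + 1, hop)]
    # Power spectrum of each frame (n_fft//2 + 1 frequency bins).
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    # Triangular filters centered at points evenly spaced on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        for k in range(l, c):
            fbank[m - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fbank[m - 1, k] = (r - k) / max(r - c, 1)
    return spec @ fbank.T  # shape: (n_frames, n_mels)

sr = 16000
x = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
S = mel_spectrogram(x, sr)
print(S.shape)  # one row per frame, one column per mel band
```

The resulting matrix is the kind of time-frequency representation the target spectrogram describes: each row is one time frame, each column one mel-scaled frequency band.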
Further, in order to further simplify the data and improve the processing efficiency of the data, the embodiment of the present invention performs vectorization processing on the target spectrogram, including: and extracting features of the target spectrogram by using a pre-constructed picture classification model to obtain the standard voice vector. Preferably, in the embodiment of the present invention, the pre-constructed image classification model is a residual network model trained by using a historical spectrogram set, where the historical spectrogram set is a plurality of spectrogram sets with the same type and different content from the target spectrogram.
In detail, in the embodiment of the present invention, referring to fig. 3, the feature extraction is performed on the target spectrogram by using a pre-constructed image classification model to obtain the standard speech vector, which includes:
s21, obtaining the output of all nodes of a full-connection layer contained in the picture classification model, and obtaining a target spectrogram characteristic value set;
For example: suppose the full-connection layer included in the picture classification model comprises 1000 nodes in total. Inputting a target spectrogram T into the picture classification model produces 1000 node output values, where the output of each node is one characteristic value of the target spectrogram T; the target spectrogram characteristic value set of T therefore comprises 1000 characteristic values in total.
S22, longitudinally combining the characteristic values in the characteristic value set of the target spectrogram according to the sequence of all the nodes of the full-connection layer to obtain a standard voice vector;
For example: the full-connection layer comprises 3 nodes, which are in sequence a first node, a second node and a third node. The characteristic value set of the target spectrogram A contains 3 characteristic values: 3, 5 and 1, where characteristic value 1 is the output of the first node, characteristic value 3 is the output of the second node, and characteristic value 5 is the output of the third node. Longitudinally combining the three characteristic values according to the node order gives the standard speech vector of the target spectrogram A, namely the column vector (1, 3, 5).
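The longitudinal combination in the example above can be sketched as follows; the node names and outputs are the hypothetical values from the example:

```python
import numpy as np

# Hypothetical outputs of the fully connected layer's three nodes,
# keyed by node order: first node -> 1, second -> 3, third -> 5.
node_outputs = {"node1": 1.0, "node2": 3.0, "node3": 5.0}

# Longitudinal (vertical) combination in node order gives a column vector.
standard_speech_vector = np.array(
    [[node_outputs[k]] for k in ("node1", "node2", "node3")]
)
print(standard_speech_vector.shape)    # (3, 1)
print(standard_speech_vector.ravel())  # [1. 3. 5.]
```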
S2, when receiving a text to be synthesized, performing phoneme conversion on the text to be synthesized to obtain a text phoneme sequence;
In the embodiment of the invention, the text to be synthesized is the text whose speech is to be synthesized. The phonemes of the pronunciations of texts in different languages can be represented by a universal phonetic symbol rule; therefore, in order to eliminate the differences between texts of different languages, the embodiment of the invention performs phoneme conversion on the text to be synthesized to obtain a text phoneme sequence.
In detail, in the embodiment of the present invention, the performing phoneme conversion on the text to be synthesized to obtain the text phoneme sequence includes: deleting punctuation marks from the text to be synthesized to obtain a standard text; and marking the phoneme corresponding to each character in the standard text by using a preset phonetic symbol rule to obtain the text phoneme sequence. For example: if the preset phonetic symbol rule is the international phonetic symbol rule and the phoneme marked for the character 'o' is a, the obtained text phoneme sequence is [a].
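A toy sketch of the described conversion, assuming a tiny hypothetical phoneme table (a real system would use a full grapheme-to-phoneme dictionary built on the phonetic symbol rule):

```python
import re

# Hypothetical character-to-phoneme table; entries here are assumptions
# for illustration, not the patent's actual phonetic symbol rule.
PHONEME_TABLE = {"o": ["a"], "k": ["k"], "e": ["e"]}

def text_to_phonemes(text):
    """Delete punctuation, then mark each remaining character's phonemes."""
    standard_text = re.sub(r"[^\w]", "", text)  # punctuation removal
    sequence = []
    for ch in standard_text:
        sequence.extend(PHONEME_TABLE.get(ch, [ch]))
    return sequence

print(text_to_phonemes("o!"))  # ['a']
```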
S3, carrying out vector conversion on the text phoneme sequence to obtain a text matrix;
In the embodiment of the invention, each phoneme in the text phoneme sequence is converted into a column vector by using a one-hot encoding algorithm to obtain the text matrix.
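A minimal sketch of the one-hot conversion, where each phoneme becomes a column vector; the five-phoneme inventory is a hypothetical example:

```python
import numpy as np

def phonemes_to_matrix(phoneme_seq, inventory):
    """One-hot encode each phoneme as a column; columns follow sequence order."""
    index = {p: i for i, p in enumerate(inventory)}
    matrix = np.zeros((len(inventory), len(phoneme_seq)))
    for col, p in enumerate(phoneme_seq):
        matrix[index[p], col] = 1.0
    return matrix

inventory = ["a", "e", "k", "o", "t"]  # hypothetical phoneme inventory
M = phonemes_to_matrix(["k", "a", "t"], inventory)
print(M.shape)  # (5, 3): one row per phoneme type, one column per phoneme
```

Each column contains exactly one 1, at the row of the corresponding phoneme, so the matrix width equals the length of the text phoneme sequence.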
S4, vector splicing is carried out on the standard voice vector and the text matrix to obtain a target matrix;
In detail, in the embodiment of the present invention, in order to better perform the subsequent speech synthesis, each phoneme in the text phoneme sequence must also be aligned with the speech; that is, the pronunciation duration of each phoneme, the phoneme frame length, must be determined. Therefore, the embodiment of the present invention calculates the phoneme frame length of each phoneme in the text phoneme sequence by using a preset algorithm model to obtain a phoneme frame length sequence, where the preset algorithm model may be a DNN-HMM network model.
Further, in the embodiment of the present invention, the phoneme frame length sequence is converted into a phoneme frame length vector, that is, the phoneme frame length sequence is converted into a corresponding row vector, and the phoneme frame length vector and the text matrix are transversely spliced to obtain the standard text matrix. For example: if the phoneme frame length vector is a 1×4 row vector and the text matrix is a 5×4 matrix, appending the phoneme frame length vector as an additional row of the text matrix gives a 6×4 standard text matrix.
In detail, in the embodiment of the present invention, the standard speech vector is longitudinally spliced with each column of the standard text matrix to obtain the target matrix. For example: if the standard text matrix is a 6×4 matrix and the standard speech vector is a column vector of length 3, appending the standard speech vector below each column of the standard text matrix gives a 9×4 target matrix.
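The two splicing steps, appending the frame-length row to the text matrix and then stacking the standard speech vector below every column, can be sketched with NumPy. The dimensions follow the 1×4 / 5×4 example above; the numeric values are hypothetical:

```python
import numpy as np

text_matrix = np.zeros((5, 4))            # 5x4 text matrix (one-hot columns)
frame_lengths = np.array([[3, 2, 4, 1]])  # 1x4 phoneme frame length row vector
speech_vector = np.array([[1.0], [3.0], [5.0]])  # 3x1 standard speech vector

# Transverse splice: append the frame-length row -> 6x4 standard text matrix.
standard_text_matrix = np.vstack([text_matrix, frame_lengths])

# Longitudinal splice: stack the speech vector below each column -> 9x4 target.
target_matrix = np.vstack([
    standard_text_matrix,
    np.tile(speech_vector, (1, standard_text_matrix.shape[1])),
])
print(standard_text_matrix.shape, target_matrix.shape)  # (6, 4) (9, 4)
```

Tiling the speech vector across all columns is what lets every phoneme column carry the same speaker characteristics into the acoustic model.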
S5, extracting spectral features of the target matrix to obtain spectral feature information;
In order to further perform speech synthesis, the embodiment of the present invention further needs to determine a spectral feature of the target matrix, where the spectral feature may be a Mel spectrum.
In detail, in the embodiment of the invention, spectral feature extraction is performed on the target matrix by using a trained acoustic model to obtain the spectral feature information. Preferably, the acoustic model may be a Transformer model.
Further, before the trained acoustic model is used to perform spectral feature extraction on the target matrix, the method further includes: acquiring a historical text matrix set; marking each historical text matrix in the historical text matrix set with spectral feature information to obtain a training set; and training the acoustic model by using the training set until the acoustic model converges, so as to obtain the trained acoustic model. The historical text matrix set is a set of a plurality of historical text matrices, and each historical text matrix is a target matrix corresponding to a text different from the text to be synthesized.
In another embodiment of the present invention, to ensure the privacy of data, the spectral feature information may be stored in a blockchain node.
S6, performing voice synthesis on the frequency spectrum characteristic information by using a preset vocoder to obtain synthesized audio.
In detail, in the embodiment of the present invention, the spectral feature information is input to a preset vocoder to obtain the synthesized audio.
Preferably, the vocoder is a WORLD vocoder.
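The patent's preferred vocoder is WORLD; as a self-contained stand-in that illustrates the same final step, turning spectral feature information back into a waveform, the following sketch uses the classic Griffin-Lim iteration over a magnitude spectrogram. This is an illustrative substitute, not the WORLD algorithm, and the frame parameters are assumptions:

```python
import numpy as np

def stft(x, n_fft=512, hop=128):
    w = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * w for i in range(0, len(x) - n_fft + 1, hop)]
    return np.fft.rfft(frames, axis=1)

def istft(X, n_fft=512, hop=128):
    """Overlap-add inverse STFT with window-power normalization."""
    w = np.hanning(n_fft)
    n = hop * (X.shape[0] - 1) + n_fft
    out = np.zeros(n)
    norm = np.zeros(n)
    for i, spec in enumerate(X):
        frame = np.fft.irfft(spec, n=n_fft)
        out[i * hop:i * hop + n_fft] += frame * w
        norm[i * hop:i * hop + n_fft] += w ** 2
    return out / np.maximum(norm, 1e-8)

def griffin_lim(mag, n_iter=30, n_fft=512, hop=128):
    """Recover a waveform from a magnitude spectrogram by iterating
    between the time domain and the given magnitudes to estimate phase."""
    phase = np.exp(2j * np.pi * np.random.rand(*mag.shape))
    for _ in range(n_iter):
        x = istft(mag * phase, n_fft, hop)
        phase = np.exp(1j * np.angle(stft(x, n_fft, hop)))
    return istft(mag * phase, n_fft, hop)

sr = 16000
x = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
mag = np.abs(stft(x))
y = griffin_lim(mag, n_iter=5)
print(y.shape)  # reconstructed waveform, same length as the input
```

A neural or WORLD vocoder would produce far higher-quality audio from the same kind of spectral input; the sketch only shows the spectrogram-to-waveform contract that S6 relies on.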
As shown in fig. 4, a functional block diagram of the speech synthesis apparatus according to the present invention is shown.
The speech synthesis apparatus 100 of the present invention may be installed in an electronic device. Depending on the functions implemented, the speech synthesis apparatus may comprise an audio processing module 101, a text processing module 102 and a speech synthesis module 103. These modules, which may also be referred to as units, are series of computer program segments that are stored in a memory of the electronic device, can be executed by a processor of the electronic device, and perform fixed functions.
In the present embodiment, the functions concerning the respective modules/units are as follows:
the audio processing module 101 is configured to obtain a sample audio, and perform sound feature extraction conversion and vectorization processing on the sample audio to obtain a standard speech vector.
In the embodiment of the present invention, the sample audio is speech data of the target speaker whose voice is to be reproduced later, for example: if subsequent text is to be synthesized into the speech of speaker A, the sample audio is speech data of speaker A.
Further, in order to better and more accurately synthesize the speech of the subsequent text, the audio processing module 101 performs feature extraction processing on the sample audio to obtain the standard speech vector.
Because speech data is large and not easy to process directly, the audio processing module 101 performs sound feature extraction and conversion on the sample audio to obtain a target spectrogram.
In detail, in the embodiment of the present invention, the audio processing module 101 performs sound feature extraction and conversion on the sample audio by using the following means to obtain a target spectrogram, including:
resampling the sample audio to obtain a digital voice signal;
In the embodiment of the present invention, in order to facilitate data processing on the sample audio, resampling is performed on the sample audio to obtain the digital voice signal; preferably, an analog-to-digital converter is used to resample the sample audio.
Pre-emphasis is carried out on the digital voice signal to obtain a standard digital voice signal;
in detail, the pre-emphasis operation is performed by using the following formula:
y(t)=x(t)-μx(t-1)
Wherein x (t) is the digital voice signal, t is time, y (t) is the standard digital voice signal, μ is a preset adjustment value of the pre-emphasis operation, and preferably, the value range of μ is [0.9,1.0].
Performing feature conversion on the standard digital voice signal to obtain the target spectrogram;
In the embodiment of the invention, the standard digital voice signal can only show how the audio changes in the time domain and cannot show the audio characteristics of the standard voice signal. In order to present those audio characteristics more intuitively and clearly, feature conversion is performed on the standard digital voice signal.
In detail, in the embodiment of the present invention, the audio processing module 101 performs feature conversion on the standard digital voice signal, including: and mapping the standard digital voice signal into a frequency domain by using a preset sound processing algorithm to obtain the target spectrogram. Preferably, in the embodiment of the present invention, the sound processing algorithm is a mel filtering algorithm.
Further, in order to further simplify the use of data and improve the processing efficiency of data, the audio processing module 101 according to the embodiment of the present invention performs vectorization processing on the target spectrogram, including: and extracting features of the target spectrogram by using a pre-constructed picture classification model to obtain the standard voice vector. Preferably, in the embodiment of the present invention, the pre-constructed image classification model is a residual network model trained by using a historical spectrogram set, where the historical spectrogram set is a plurality of spectrogram sets with the same type and different content from the target spectrogram.
In detail, in the embodiment of the present invention, the audio processing module 101 performs feature extraction on the target spectrogram by using the following means to obtain the standard speech vector, including:
Obtaining the output of all nodes of a full-connection layer contained in the picture classification model to obtain a target spectrogram characteristic value set;
For example: suppose the full-connection layer included in the picture classification model comprises 1000 nodes in total. Inputting a target spectrogram T into the picture classification model produces 1000 node output values, where the output of each node is one characteristic value of the target spectrogram T; the target spectrogram characteristic value set of T therefore comprises 1000 characteristic values in total.
According to the sequence of all nodes of the full-connection layer, longitudinally combining the characteristic values in the characteristic value set of the target spectrogram to obtain a standard voice vector;
For example: the full-connection layer comprises 3 nodes, which are in sequence a first node, a second node and a third node. The characteristic value set of the target spectrogram A contains 3 characteristic values: 3, 5 and 1, where characteristic value 1 is the output of the first node, characteristic value 3 is the output of the second node, and characteristic value 5 is the output of the third node. Longitudinally combining the three characteristic values according to the node order gives the standard speech vector of the target spectrogram A, namely the column vector (1, 3, 5).
The text processing module 102 is configured to perform phoneme conversion on a text to be synthesized to obtain a text phoneme sequence when receiving the text to be synthesized; vector conversion is carried out on the text phoneme sequence to obtain a text matrix; and vector splicing is carried out on the standard voice vector and the text matrix to obtain a target matrix.
In the embodiment of the invention, the text to be synthesized is the text whose speech is to be synthesized. The phonemes of the pronunciations of texts in different languages can be represented by a universal phonetic symbol rule; therefore, in order to eliminate the differences between texts of different languages, the embodiment of the invention performs phoneme conversion on the text to be synthesized to obtain a text phoneme sequence.
In detail, in the embodiment of the present invention, the text processing module 102 performs phoneme conversion on the text to be synthesized to obtain the text phoneme sequence, which includes: deleting punctuation marks from the text to be synthesized to obtain a standard text; and marking the phoneme corresponding to each character in the standard text by using a preset phonetic symbol rule to obtain the text phoneme sequence. For example: if the preset phonetic symbol rule is the international phonetic symbol rule and the phoneme marked for the character 'o' is a, the obtained text phoneme sequence is [a].
In the embodiment of the present invention, the text processing module 102 converts each phoneme in the text phoneme sequence into a column vector by using a one-hot encoding algorithm to obtain the text matrix.
In detail, in the embodiment of the present invention, in order to better perform the subsequent speech synthesis, each phoneme in the text phoneme sequence must also be aligned with the speech; that is, the pronunciation duration of each phoneme, the phoneme frame length, must be determined. Therefore, the text processing module 102 calculates the phoneme frame length of each phoneme in the text phoneme sequence by using a preset algorithm model to obtain a phoneme frame length sequence, where the preset algorithm model may be a DNN-HMM network model.
Further, in the embodiment of the present invention, the text processing module 102 converts the phoneme frame length sequence into a phoneme frame length vector, that is, into the corresponding row vector, and laterally splices the phoneme frame length vector and the text matrix to obtain the standard text matrix. For example, if the phoneme frame length vector is a 1×4 row vector and the text matrix is a 5×4 matrix, appending the phoneme frame length vector as a sixth row of the text matrix yields a 6×4 standard text matrix.
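The lateral splicing in this example can be sketched with numpy (the frame-length values are illustrative placeholders):

```python
import numpy as np

text_matrix = np.zeros((5, 4))            # toy 5x4 text matrix
frame_lengths = np.array([[3, 5, 4, 6]])  # assumed 1x4 phoneme frame-length row vector

# Appending the frame-length row vector as an extra row of the text matrix
# yields the 6x4 standard text matrix described in the example.
standard_text_matrix = np.vstack([text_matrix, frame_lengths])
print(standard_text_matrix.shape)  # (6, 4)
```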
In detail, in the embodiment of the present invention, the text processing module 102 performs a vertical concatenation of the standard speech vector and each column of the standard text matrix to obtain the target matrix; that is, the standard speech vector is spliced onto every column of the standard text matrix, so that each column of the target matrix carries both the speech features and the text features. (The original specification illustrates this with example matrices that are not reproduced here.)
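A sketch of this vertical concatenation, with toy values; the patent does not fix whether the speech vector goes above or below each column, so placing it below is an assumption here:

```python
import numpy as np

standard_text_matrix = np.arange(24.0).reshape(6, 4)  # toy 6x4 standard text matrix
speech_vector = np.array([[0.1], [0.2], [0.3]])       # toy 3x1 standard speech vector

# Append the standard speech vector to every column: tile it across the
# matrix's columns, then stack row-wise.
tiled = np.tile(speech_vector, (1, standard_text_matrix.shape[1]))  # 3x4
target_matrix = np.vstack([standard_text_matrix, tiled])
print(target_matrix.shape)  # (9, 4)
```

Every column of the resulting target matrix now ends with the same three speech-feature values.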
The voice synthesis module 103 is configured to perform spectral feature extraction on the target matrix to obtain spectral feature information; and performing voice synthesis on the frequency spectrum characteristic information by using a preset vocoder to obtain synthesized audio.
In order to further perform speech synthesis, the embodiment of the present invention further needs to determine a spectral feature of the target matrix, where the spectral feature may be a Mel spectrum.
In detail, in the embodiment of the invention, spectral feature extraction is performed on the target matrix by using a trained acoustic model to obtain the spectral feature information. Preferably, the acoustic model may be a Transformer model.
Further, in the embodiment of the present invention, before the speech synthesis module 103 performs spectral feature extraction on the target matrix by using the trained acoustic model, the method further includes: acquiring a historical text matrix set; marking each historical text matrix in the historical text matrix set with spectral feature information to obtain a training set; and training the acoustic model with the training set until the acoustic model converges, to obtain the trained acoustic model. The historical text matrix set is a set of multiple historical text matrices, each of which is a target matrix corresponding to a text different from the text to be synthesized.
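The fit-on-labeled-matrices workflow above can be sketched with a deliberately simplified stand-in: the snippet below replaces the acoustic model with a plain linear least-squares map from flattened historical text matrices to assumed spectral-feature labels. This is purely illustrative; the patent's acoustic model would be a trained neural network, not a linear map, and all shapes and data here are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# 20 flattened historical target matrices (toy 9x4 matrices -> 36 features)
historical = rng.normal(size=(20, 36))
# Assumed spectral-feature labels, constructed to be linearly recoverable
labels = historical @ rng.normal(size=(36, 8))

# "Training": solve for the least-squares weights mapping matrices to spectra.
weights, *_ = np.linalg.lstsq(historical, labels, rcond=None)

# "Inference": predict the spectral features for one historical matrix.
prediction = historical[:1] @ weights
print(prediction.shape)  # (1, 8)
```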
In another embodiment of the present invention, to ensure the privacy of data, the spectral feature information may be stored in a blockchain node.
In detail, in the embodiment of the present invention, the speech synthesis module 103 inputs the spectral feature information to a preset vocoder to obtain the synthesized audio.
Preferably, the vocoder is a WORLD vocoder.
Fig. 5 is a schematic structural diagram of an electronic device for implementing the speech synthesis method according to the present invention.
The electronic device 1 may comprise a processor 10, a memory 11 and a bus, and may further comprise a computer program, such as a speech synthesis program 12, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, including a flash memory, a mobile hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a magnetic memory, a magnetic disk, an optical disk, and the like. The memory 11 may in some embodiments be an internal storage unit of the electronic device 1, such as a hard disk of the electronic device 1. The memory 11 may in other embodiments also be an external storage device of the electronic device 1, such as a plug-in mobile hard disk, a smart media card (SMC), a Secure Digital (SD) card, or a flash card provided on the electronic device 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only to store application software installed in the electronic device 1 and various types of data, such as the code of the speech synthesis program, but also to temporarily store data that has been output or is to be output.
The processor 10 may in some embodiments be composed of integrated circuits, for example a single packaged integrated circuit, or of multiple integrated circuits packaged with the same or different functions, including one or more central processing units (CPUs), microprocessors, digital processing chips, graphics processors, combinations of various control chips, and the like. The processor 10 is the control unit of the electronic device: it connects the components of the entire electronic device using various interfaces and lines, and executes the functions of the electronic device 1 and processes data by running or executing the programs or modules stored in the memory 11 (e.g., the speech synthesis program) and calling the data stored in the memory 11.
The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like, and may be classified as an address bus, a data bus, a control bus, etc. The bus is arranged to enable communication between the memory 11, the at least one processor 10, and other components.
Fig. 5 shows only an electronic device with certain components; a person skilled in the art will understand that the structure shown in Fig. 5 does not limit the electronic device 1, which may comprise fewer or more components than shown, combine certain components, or arrange the components differently.
For example, although not shown, the electronic device 1 may further include a power source (such as a battery) for supplying power to each component, and preferably, the power source may be logically connected to the at least one processor 10 through a power management device, so that functions of charge management, discharge management, power consumption management, and the like are implemented through the power management device. The power supply may also include one or more of any of a direct current or alternating current power supply, recharging device, power failure detection circuit, power converter or inverter, power status indicator, etc. The electronic device 1 may further include various sensors, bluetooth modules, wi-Fi modules, etc., which will not be described herein.
Further, the electronic device 1 may also comprise a network interface, optionally the network interface may comprise a wired interface and/or a wireless interface (e.g. WI-FI interface, bluetooth interface, etc.), typically used for establishing a communication connection between the electronic device 1 and other electronic devices.
The electronic device 1 may optionally further comprise a user interface, which may be a display or an input unit such as a keyboard, and may be a standard wired interface or a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display, or the like. The display may also be referred to as a display screen or display unit, and is used to display the information processed in the electronic device 1 and to display a visual user interface.
It should be understood that the embodiments described are for illustration only, and the scope of the patent application is not limited to this configuration.
The speech synthesis program 12 stored in the memory 11 in the electronic device 1 is a combination of a plurality of computer programs, which, when run in the processor 10, can realize:
obtaining sample audio, and carrying out sound feature extraction conversion and vectorization processing on the sample audio to obtain a standard speech vector;
When receiving a text to be synthesized, performing phoneme conversion on the text to be synthesized to obtain a text phoneme sequence;
vector conversion is carried out on the text phoneme sequence to obtain a text matrix;
Vector splicing is carried out on the standard voice vector and the text matrix to obtain a target matrix;
extracting spectral features of the target matrix to obtain spectral feature information;
And performing voice synthesis on the frequency spectrum characteristic information by using a preset vocoder to obtain synthesized audio.
In particular, for the specific implementation of the computer program by the processor 10, reference may be made to the description of the relevant steps in the embodiment corresponding to Fig. 1, which is not repeated here.
Further, the modules/units integrated in the electronic device 1, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. The computer-readable medium may be non-volatile or volatile, and may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a read-only memory (ROM).
Embodiments of the present invention may also provide a computer readable storage medium storing a computer program which, when executed by a processor of an electronic device, may implement:
obtaining sample audio, and carrying out sound feature extraction conversion and vectorization processing on the sample audio to obtain a standard speech vector;
When receiving a text to be synthesized, performing phoneme conversion on the text to be synthesized to obtain a text phoneme sequence;
vector conversion is carried out on the text phoneme sequence to obtain a text matrix;
Vector splicing is carried out on the standard voice vector and the text matrix to obtain a target matrix;
extracting spectral features of the target matrix to obtain spectral feature information;
And performing voice synthesis on the frequency spectrum characteristic information by using a preset vocoder to obtain synthesized audio.
Further, the computer-usable storage medium may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created from the use of blockchain nodes, and the like.
In the several embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be other manners of division when actually implemented.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical units, may be located in one place, or may be distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units can be realized in a form of hardware or a form of hardware and a form of software functional modules.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks generated in association using cryptographic methods, each block containing a batch of network transaction information used to verify the validity (anti-counterfeiting) of its information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product services layer, an application services layer, and the like.
Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means through software or hardware. Terms such as first and second are used to denote names and do not denote any particular order.
Finally, it should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.

Claims (9)

1. A speech synthesis method, characterized in that the method comprises: acquiring sample audio, and performing sound feature extraction, conversion and vectorization processing on the sample audio to obtain a standard speech vector; when receiving a text to be synthesized, performing phoneme conversion on the text to be synthesized to obtain a text phoneme sequence; performing vector conversion on the text phoneme sequence to obtain a text matrix; performing vector concatenation of the standard speech vector and the text matrix to obtain a target matrix; performing spectral feature extraction on the target matrix to obtain spectral feature information; and performing speech synthesis on the spectral feature information by using a preset vocoder to obtain synthesized audio; wherein the performing vector concatenation of the standard speech vector and the text matrix to obtain the target matrix comprises: calculating the phoneme frame length of each phoneme in the text phoneme sequence by using a preset algorithm model to obtain a phoneme frame length sequence; converting the phoneme frame length sequence into a phoneme frame length vector; laterally concatenating the phoneme frame length vector and the text matrix to obtain a standard text matrix; and longitudinally concatenating the standard speech vector with each column of the standard text matrix to obtain the target matrix.
2. The speech synthesis method according to claim 1, wherein the performing sound feature extraction, conversion and vectorization processing on the sample audio to obtain the standard speech vector comprises: performing sound feature extraction and conversion on the sample audio to obtain a target spectrogram; and performing feature extraction on the target spectrogram by using a pre-built picture classification model to obtain the standard speech vector.
3. The speech synthesis method according to claim 2, wherein the performing sound feature extraction and conversion on the sample audio to obtain the target spectrogram comprises: resampling the sample audio to obtain a digital speech signal; pre-emphasizing the digital speech signal to obtain a standard digital speech signal; and performing feature conversion on the standard digital speech signal to obtain the target spectrogram.
4. The speech synthesis method according to claim 2, wherein the performing feature extraction on the target spectrogram by using the pre-built picture classification model to obtain the standard speech vector comprises: obtaining the outputs of all nodes of a fully connected layer included in the picture classification model to obtain a target spectrogram feature value set; and longitudinally combining the feature values in the target spectrogram feature value set according to the order of all the nodes of the fully connected layer to obtain the standard speech vector.
5. The speech synthesis method according to claim 3, wherein the performing feature conversion on the standard digital speech signal to obtain the target spectrogram comprises: mapping the standard digital speech signal to the frequency domain by using a preset sound processing algorithm to obtain the target spectrogram.
6. The speech synthesis method according to any one of claims 1 to 5, wherein the performing phoneme conversion on the text to be synthesized to obtain the text phoneme sequence comprises: deleting the punctuation marks of the text to be synthesized to obtain a standard text; and marking the phoneme corresponding to each character in the standard text by using a preset phonetic notation rule to obtain the text phoneme sequence.
7. A speech synthesis apparatus for implementing the speech synthesis method according to any one of claims 1 to 6, characterized by comprising: an audio processing module, configured to acquire sample audio, and perform sound feature extraction, conversion and vectorization processing on the sample audio to obtain a standard speech vector; a text processing module, configured to, when receiving a text to be synthesized, perform phoneme conversion on the text to be synthesized to obtain a text phoneme sequence, perform vector conversion on the text phoneme sequence to obtain a text matrix, and perform vector concatenation of the standard speech vector and the text matrix to obtain a target matrix; and a speech synthesis module, configured to perform spectral feature extraction on the target matrix to obtain spectral feature information, and perform speech synthesis on the spectral feature information by using a preset vocoder to obtain synthesized audio.
8. An electronic device, characterized in that the electronic device comprises: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores computer program instructions executable by the at least one processor, and the computer program instructions are executed by the at least one processor to enable the at least one processor to perform the speech synthesis method according to any one of claims 1 to 6.
9. A computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the speech synthesis method according to any one of claims 1 to 6 is implemented.
CN202011442571.2A 2020-12-11 2020-12-11 Speech synthesis method, device, electronic device and readable storage medium Active CN112397047B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011442571.2A CN112397047B (en) 2020-12-11 2020-12-11 Speech synthesis method, device, electronic device and readable storage medium
PCT/CN2021/083824 WO2022121176A1 (en) 2020-12-11 2021-03-30 Speech synthesis method and apparatus, electronic device, and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011442571.2A CN112397047B (en) 2020-12-11 2020-12-11 Speech synthesis method, device, electronic device and readable storage medium

Publications (2)

Publication Number Publication Date
CN112397047A CN112397047A (en) 2021-02-23
CN112397047B true CN112397047B (en) 2024-11-19

Family

ID=74625646

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011442571.2A Active CN112397047B (en) 2020-12-11 2020-12-11 Speech synthesis method, device, electronic device and readable storage medium

Country Status (2)

Country Link
CN (1) CN112397047B (en)
WO (1) WO2022121176A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112397047B (en) * 2020-12-11 2024-11-19 平安科技(深圳)有限公司 Speech synthesis method, device, electronic device and readable storage medium
CN113096625A (en) * 2021-03-24 2021-07-09 平安科技(深圳)有限公司 Multi-person Buddha music generation method, device, equipment and storage medium
CN112927677B (en) * 2021-03-29 2023-07-25 北京大米科技有限公司 Speech synthesis method and device
CN113327578B (en) * 2021-06-10 2024-02-02 平安科技(深圳)有限公司 Acoustic model training method and device, terminal equipment and storage medium
CN113436608B (en) * 2021-06-25 2023-11-28 平安科技(深圳)有限公司 Double-flow voice conversion method, device, equipment and storage medium
US11869483B2 (en) * 2021-10-07 2024-01-09 Nvidia Corporation Unsupervised alignment for text to speech synthesis using neural networks
CN114373443A (en) * 2022-01-14 2022-04-19 腾讯科技(深圳)有限公司 Speech synthesis method and apparatus, computing device, storage medium, and program product
CN114400005A (en) * 2022-01-18 2022-04-26 平安科技(深圳)有限公司 Voice message generation method and device, computer equipment and storage medium
CN114783406B (en) * 2022-06-16 2022-10-21 深圳比特微电子科技有限公司 Speech synthesis method, apparatus and computer-readable storage medium
CN116705058B (en) * 2023-08-04 2023-10-27 贝壳找房(北京)科技有限公司 Processing method of multimode voice task, electronic equipment and readable storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022121176A1 (en) * 2020-12-11 2022-06-16 平安科技(深圳)有限公司 Speech synthesis method and apparatus, electronic device, and readable storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4966048B2 (en) * 2007-02-20 2012-07-04 株式会社東芝 Voice quality conversion device and speech synthesis device
US10186252B1 (en) * 2015-08-13 2019-01-22 Oben, Inc. Text to speech synthesis using deep neural network with constant unit length spectrogram
CN109754778B (en) * 2019-01-17 2023-05-30 平安科技(深圳)有限公司 Text speech synthesis method and device and computer equipment
CN109767752B (en) * 2019-02-27 2023-05-26 平安科技(深圳)有限公司 Voice synthesis method and device based on attention mechanism
CN111161702B (en) * 2019-12-23 2022-08-26 爱驰汽车有限公司 Personalized speech synthesis method and device, electronic equipment and storage medium
CN112002305B (en) * 2020-07-29 2024-06-18 北京大米科技有限公司 Speech synthesis method, device, storage medium and electronic equipment

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022121176A1 (en) * 2020-12-11 2022-06-16 平安科技(深圳)有限公司 Speech synthesis method and apparatus, electronic device, and readable storage medium

Also Published As

Publication number Publication date
WO2022121176A1 (en) 2022-06-16
CN112397047A (en) 2021-02-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant