
EP1846918B1 - Method of estimating a voice conversion function - Google Patents

Method of estimating a voice conversion function

Info

Publication number
EP1846918B1
Authority
EP
European Patent Office
Prior art keywords
voice
speaker
recorded
conversion
voice message
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Not-in-force
Application number
EP05850632A
Other languages
German (de)
French (fr)
Other versions
EP1846918A1 (en)
Inventor
Olivier Rosec
Taoufik En-Najjary
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Orange SA
Original Assignee
France Telecom SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by France Telecom SA
Publication of EP1846918A1
Application granted
Publication of EP1846918B1
Status: Not-in-force
Anticipated expiration

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 13/00 Speech synthesis; Text to speech systems
    • G10L 13/02 Methods for producing synthetic speech; Speech synthesisers
    • G10L 13/033 Voice editing, e.g. manipulating the voice of the synthesiser
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L 21/003 Changing voice quality, e.g. pitch or formants
    • G10L 21/007 Changing voice quality, e.g. pitch or formants characterised by the process used
    • G10L 21/013 Adapting to target pitch
    • G10L 2021/0135 Voice conversion or morphing

Definitions

  • It also relates to a method for estimating a voice conversion function between, on the one hand, the voice of a source speaker defined from a first voice message recorded by said source speaker and, on the other hand, the voice of a target speaker defined from a second voice message recorded by said target speaker.
  • The invention finds an advantageous application whenever it is desired to have a speaker say a voice message recorded by another speaker. It is thus possible, for example, to diversify the voices used in speech synthesis systems or, conversely, to render anonymously messages recorded by different speakers. It is also conceivable to implement the method according to the invention for dubbing films.
  • voice conversion consists of estimating a transformation function, or conversion function, which, applied to a first speaker whose voice is defined from a recorded voice message, makes it possible to reproduce as faithfully as possible the voice of a second speaker.
  • Said second speaker may be a reference speaker whose voice is defined by a speech synthesis database, or a so-called "target" speaker whose voice is also defined from a recorded voice message, the first speaker then being called the "source".
  • segmental characteristics: timbre, pitch of voice, vocal quality
  • suprasegmental characteristics: speech style
  • The principle of voice conversion consists, in a known manner, of a learning operation which aims to estimate a function relating the timbre of the voice of the first speaker to that of the voice of the second speaker.
  • Two parallel recordings of the two speakers, that is to say recordings containing the same voice message, are necessary.
  • An analysis is conducted on each of the recordings in order to extract parameters representative of the timbre of the voice.
  • Many transformation methods based on this principle have been proposed, for example conversion by vector quantization (M. Abe, S. Nakamura, K. Shikano and H. Kuwabara, "Voice conversion through vector quantization", Proceedings of ICASSP, pp. 655-658, 1988).
  • The speaker adaptation module makes it possible to personalize an HMM-based synthesis system.
  • A classification of the context-dependent HMM models by decision tree is carried out to build an "average" voice model.
  • The parameters of these HMM models are then adapted according to the target speaker. Objective and subjective tests have shown the usefulness of the method in the context of HMM synthesis, but the quality of the converted speech attainable with HMM synthesis systems nevertheless remains very poor.
  • A technical problem to be solved by the object of the present invention is therefore to propose a method of estimating a voice conversion function between, on the one hand, the voice of a speaker defined from a voice message recorded by said speaker and, on the other hand, the voice of a reference speaker defined by a speech synthesis database, which would provide converted speech of better quality than that provided by the known non-parallel corpus methods.
  • The document US 2002/0173962 discloses a method for synthesizing a personalized voice from text, where the learning operation is performed between a synthetic voice message obtained from the text and a corresponding voice message spoken by the target speaker.
  • Said speech synthesis database is a database of a concatenative speech synthesis system.
  • Said speech synthesis database is a database of a corpus-based speech synthesis system.
  • the acoustic database is not restricted to a dictionary of mono-represented diphones, but contains these same elements recorded in different contexts (grammatical, syntactic, phonemic, phonological or prosodic). Each element thus manipulated, also called “unit”, is thus a segment of speech characterized by a set of symbolic descriptors relative to the context in which it was recorded.
  • The nature of the synthesis problem then changes radically: it is no longer a matter of distorting the speech signal while degrading the quality of the timbre as little as possible, but rather of having a sufficiently rich database.
  • the selection of units can therefore be likened to a problem of minimizing a cost function composed of two types of metrics: a "target cost” which measures the adequacy of the units with the symbolic parameters resulting from the language processing modules of the system and a "concatenation cost” which accounts for the acoustic compatibility of two consecutive units.
  • Figure 1 is a block diagram showing the steps of a voice conversion method between a speaker and a reference speaker.
  • Figure 3 is a diagram of a voice conversion system implementing the estimation method according to the invention.
  • Figure 1 illustrates a method of estimating a voice conversion between a speaker and a reference speaker.
  • The voice of said speaker is defined from a recorded voice message, while the voice of said reference speaker is defined from an acoustic database of a concatenative speech synthesis system, preferably corpus-based, although a mono-represented diphone synthesis system can also be used.
  • A synthetic recording parallel to the voice message recorded by the speaker is generated from said speech synthesis database.
  • A first block required for this generation is intended to extract, from the recording of the speaker in question, symbolic information relating to the message contained in said recording.
  • A first type of processing envisaged consists in extracting the spoken message in textual form from the voice recording. This can be obtained automatically by a speech recognition system, or manually by listening to and transcribing the voice messages. In this case, the text thus recognized directly feeds the speech synthesis system 30, thereby generating the desired reference synthetic recording.
  • a prosodic annotation algorithm can be integrated in the method or a manual annotation phase of the corpus can be considered to take into account melodic markers deemed relevant.
  • The acoustic analysis is carried out, for example, by means of the HNM ("Harmonic plus Noise Model") model, which assumes that a voiced segment (also called a frame) of the speech signal s(n) can be decomposed into a harmonic part h(n), representing the quasi-periodic component of the signal and consisting of a sum of L harmonic sinusoids of amplitudes A_l and phases φ_l, and a noise part b(n), representing the friction noise and the variation of the glottal excitation from one period to the next, modeled by an LPC ("Linear Prediction Coefficients") filter excited by white Gaussian noise.
  • For an unvoiced frame, the harmonic part is absent and the signal is simply modeled by white noise shaped by auto-regressive (AR) filtering.
  • The fundamental frequency F0 and the maximum voicing frequency, that is to say the frequency beyond which the signal is considered to consist solely of noise, are first determined. Then, an analysis synchronized on F0 makes it possible to estimate the parameters of the harmonic part (the amplitudes and the phases) as well as the parameters of the noise.
  • The harmonic parameters are calculated by minimizing a weighted least squares criterion (see the article by Y. Stylianou cited above).
  • the parts of the spectrum corresponding to noise are modeled using a simple linear prediction.
  • The frequency response of the AR model thus estimated is then sampled at a constant step, which provides an estimate of the spectral envelope over the noisy regions.
  • The parameters modeling this spectral envelope are deduced using the regularized discrete cepstrum method (O. Cappe, E. Moulines, "Regularization techniques for discrete cepstrum estimation", IEEE Signal Processing Letters, Vol. 3 (4), pp. 100-102, April 1996).
  • the order of cepstral modeling was set at 20.
  • a Bark scale transformation is performed.
  • dynamic alignment (DTW, "Dynamic Time Warping")
  • the alignment path can be constrained so as to respect the segmentation marks.
  • a joint classification of the acoustic vectors of the two aligned recordings is performed.
  • Let x_{1:N} = [x_1, x_2, ..., x_N] and y_{1:N} = [y_1, y_2, ..., y_N] be the sequences of aligned acoustic vectors.
  • the random variable z is modeled by a mixture of Gaussian laws (in English GMM for "Gaussian Mixture Model") of order Q.
  • The estimation of the model parameters is carried out by applying a classical iterative procedure, namely the EM (Expectation-Maximization) algorithm (A.P. Dempster, N.M. Laird, D.B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm", Journal of the Royal Statistical Society B, vol. 39, pp. 1-38, 1977).
  • the determination of the initial parameters of the GMM model is obtained using a standard vector quantization technique.
  • the figure 2 illustrates a method for estimating a voice conversion function between a source speaker and a target speaker whose voices are respectively defined from voice messages recorded by each of the speakers, these recordings being non-parallel.
  • Synthetic reference recordings are generated from said recorded voice messages according to a procedure similar to that just described with reference to figure 1.
  • a voice conversion system incorporating the described estimation method is represented on the figure 3 .
  • The analysis step still relies on HNM modeling, but this time it is conducted in a pitch-synchronous manner, as this allows better pitch and spectral envelope modifications (see the article by Y. Stylianou cited above).
  • The extracted spectral parameters are then transformed using a conversion module 80 performing the conversion determined by relation (6).
  • The modified parameters, as well as the residual information necessary for sound generation, are transmitted to an HNM synthesis module.
  • the harmonic component of the signal defined by equation (2) and present for the voiced signal frames is generated by summation of sinusoids previously tabulated whose amplitudes are calculated from the converted spectral parameters.
  • the stochastic portion is determined by inverse Fourier Transform (IFFT) on the spectrum calculated from the spectral parameters.
  • The HNM model can be replaced by other models known to those skilled in the art, such as linear prediction coding (LPC) models, sinusoidal models or MBE ("Multi-Band Excited") models.
  • the GMM conversion method can be replaced by conventional vector quantization (VQ) or fuzzy vector quantization (Fuzzy VQ) techniques.
  • the steps of the method are determined by the instructions of a program for estimating a voice conversion function incorporated in a server, and the method according to the invention is implemented when this program is loaded into a computer whose operation is then controlled by the execution of the program.
  • the information carrier may be any entity or device capable of storing the program.
  • The medium may comprise a storage means, such as a ROM, for example a CD-ROM or a microelectronic circuit ROM, or a magnetic recording means, for example a floppy disk or a hard disk.
  • the information medium may be a transmissible medium such as an electrical or optical signal, which may be conveyed via an electrical or optical cable, by radio or by other means.
  • The program according to the invention can in particular be downloaded over an Internet-type network.
  • the information carrier may be an integrated circuit in which the program is incorporated, the circuit being adapted to execute or to be used in the execution of the method in question.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Machine Translation (AREA)
  • Devices For Executing Special Programs (AREA)

Abstract

The invention relates to a method of estimating a voice conversion function between (i) the voice of a speaker, defined from a voice message recorded by said speaker, and (ii) the voice of a reference speaker, defined by a speech synthesis database. According to the invention, the method comprises the following steps consisting in: generating a synthetic recording of the voice message recorded by the speaker from the speech synthesis database, and estimating the voice conversion function using a training operation which is performed on the recorded voice message and the synthetic recording.

Description

The present invention relates to a method for estimating a voice conversion function between, on the one hand, the voice of a speaker defined from a voice message recorded by said speaker and, on the other hand, the voice of a reference speaker defined by a speech synthesis database.

It also relates to a method for estimating a voice conversion function between, on the one hand, the voice of a source speaker defined from a first voice message recorded by said source speaker and, on the other hand, the voice of a target speaker defined from a second voice message recorded by said target speaker.

The invention finds an advantageous application whenever it is desired to have a speaker say a voice message recorded by another speaker. It is thus possible, for example, to diversify the voices used in speech synthesis systems or, conversely, to render anonymously messages recorded by different speakers. It is also conceivable to implement the method according to the invention for dubbing films.

In general, voice conversion consists of estimating a transformation function, or conversion function, which, applied to a first speaker whose voice is defined from a recorded voice message, makes it possible to reproduce as faithfully as possible the voice of a second speaker. In the context of the invention, said second speaker may be a reference speaker whose voice is defined by a speech synthesis database, or a so-called "target" speaker whose voice is also defined from a recorded voice message, the first speaker then being called the "source".

The vocal identity of a speaker depends on many characteristics, whether segmental (timbre, pitch of voice, vocal quality) or suprasegmental (speech style). Of these, timbre remains the most important, which is why most work in the field of voice conversion deals mainly with the modification of timbre. Nevertheless, during the conversion, a modification of the fundamental frequency, also called "pitch", can also be performed in order to globally respect the pitch of the voice of the second speaker.

In essence, the principle of voice conversion consists, in a known manner, of a learning operation which aims to estimate a function relating the timbre of the voice of the first speaker to that of the voice of the second speaker. For this, two parallel recordings of the two speakers, that is to say recordings containing the same voice message, are necessary. An analysis is conducted on each of the recordings in order to extract parameters representative of the timbre of the voice. Then, after aligning the two recordings, a classification is performed, that is to say a partition of the acoustic spaces of the two speakers. This classification is then used to estimate the conversion function. Many transformation methods based on this principle have been proposed, for example conversion by vector quantization (M. Abe, S. Nakamura, K. Shikano and H. Kuwabara, "Voice conversion through vector quantization", Proceedings of ICASSP, pp. 655-658, 1988), by multiple linear regression (H. Valbret, "Système de conversion de voix pour la synthèse de la parole", PhD Thesis, ENST Paris, 1992), by dynamic frequency alignment (H. Valbret, E. Moulines, J.P. Tubach, "Voice transformation using PSOLA technique", Speech Communication, vol. 11, pp. 175-187, 1995), by neural networks (M. Narendranath, H.A. Murthy, S. Rajendran and B. Yegnanarayana, "Transformation of formants for voice conversion using artificial neural networks", Speech Communication, vol. 16, pp. 207-216, 1995), or by Gaussian mixture model (GMM), as proposed in (Y. Stylianou, O. Cappe, E. Moulines, "Continuous probabilistic transform for voice conversion", IEEE Transactions on Speech and Audio Processing, Vol. 6 (2), pp. 131-142, March 1998) and improved by Kain (A. Kain and M. Macon, "Text-to-speech voice adaptation from sparse training data", Proceedings of ICSLP, 1998).
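
To make the GMM-based mapping concrete, the sketch below shows how a single source cepstral vector could be converted once a joint GMM over concatenated source/target vectors has been trained (for example with the EM algorithm mentioned later). It is a minimal Python/numpy illustration, not the implementation of the cited works; all function and variable names are assumptions.

```python
import numpy as np

def gmm_convert(x, weights, means, covs):
    """Map one source cepstral vector x to the target space with a joint GMM.

    weights: (Q,) mixture weights
    means:   (Q, 2d) joint means [mu_x, mu_y] per component
    covs:    (Q, 2d, 2d) joint covariances [[Sxx, Sxy], [Syx, Syy]]
    """
    d = x.shape[0]
    Q = weights.shape[0]
    # Posterior probability of each Gaussian component given the source vector x.
    post = np.empty(Q)
    for q in range(Q):
        mu_x = means[q, :d]
        Sxx = covs[q, :d, :d]
        diff = x - mu_x
        expo = -0.5 * diff @ np.linalg.solve(Sxx, diff)
        post[q] = weights[q] * np.exp(expo) / np.sqrt(np.linalg.det(2 * np.pi * Sxx))
    post /= post.sum() + 1e-300  # small guard against an all-zero row
    # Converted vector: posterior-weighted conditional expectation of y given x.
    y = np.zeros(d)
    for q in range(Q):
        mu_x, mu_y = means[q, :d], means[q, d:]
        Sxx = covs[q, :d, :d]
        Syx = covs[q, d:, :d]
        y += post[q] * (mu_y + Syx @ np.linalg.solve(Sxx, x - mu_x))
    return y
```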

The methods for estimating voice conversion functions that have just been presented use recordings, or corpora, of parallel messages from the two speakers. However, it is not always possible to obtain such recordings. This is why, in parallel with the development of conversion methods based on the use of parallel corpora, other work has been carried out to make conversion possible in cases where the source and target corpora are not parallel. This work is largely inspired by the speaker adaptation techniques conventionally used in speech recognition by hidden Markov models (HMM). An interesting application has been proposed (J. Yamagishi, M. Tamura, T. Masuko, K. Tokuda and T. Kobayashi, "A context clustering technique for average voice models", IEICE Trans. Inf. & Syst., vol. E86-D (3), pp. 534-542, March 2003), where a speaker adaptation module makes it possible to personalize an HMM-based synthesis system. As a first step, a classification of the context-dependent HMM models by decision tree is carried out to build an "average" voice model. Then, the parameters of these HMM models are adapted according to the target speaker. Objective and subjective tests have indeed shown the usefulness of the method in the context of HMM synthesis, but the quality of the converted speech attainable with HMM synthesis systems nevertheless remains very poor.

A speaker adaptation technique has also been proposed (A. Mouchtaris, J. van der Spiegel and P. Mueller, "Non-parallel training for voice conversion by maximum likelihood constrained adaptation", Proceedings of ICASSP, 2004, vol. 1, pp. 1-4) to obtain voice conversion based on non-parallel corpora. In this application, the assumption is made that two parallel corpora A and B are available. To carry out the conversion between the non-parallel source corpus C and target corpus D, it is further assumed that the corpora C and D are parallel to part of the corpora A and B respectively. In this case, the conversion function between speakers C and D is expressed as the composition of three conversion functions, from speaker C to A, A to B and B to D respectively. The scope of application of this method seems rather restrictive, since it nevertheless requires parallel recording portions. Moreover, no mechanism for checking the parallelism of the corpora used is proposed. Finally, the composition of the three conversion functions may lead to significant transformation errors. In the end, the quality of the converted speech obtained by this method is considered to be lower than that obtained from parallel corpora.

A technical problem to be solved by the object of the present invention is therefore to propose a method of estimating a voice conversion function between, on the one hand, the voice of a speaker defined from a voice message recorded by said speaker and, on the other hand, the voice of a reference speaker defined by a speech synthesis database, which would provide converted speech of better quality than that provided by the known non-parallel corpus methods.

The document US 2002/0173962 discloses a method for synthesizing a personalized voice from text, where the learning operation is performed between a synthetic voice message obtained from the text and a corresponding voice message spoken by the target speaker.

According to the present invention, the solution to the technical problem posed consists in that said method comprises the steps of:

  • generating, from said voice message recorded by the speaker and said speech synthesis database, a synthetic recording of said voice message,
  • estimating said voice conversion function by a learning operation performed on said recorded voice message and said synthetic recording.

Thus, it will be understood that the method according to the invention makes it possible to obtain two parallel recordings of the same voice message, one being recorded directly by the speaker, which constitutes in a way the basic message, and the other being a synthetic reproduction of this basic message. The estimation of the conversion function sought is then carried out by a conventional learning operation performed on two parallel recordings. The different stages of this processing will be described in detail below.

Two applications of the method according to the invention can be envisaged, namely, on the one hand, an application to the conversion of voice messages recorded by a source speaker into corresponding messages reproduced by said reference speaker and, on the other hand, an application to the conversion of synthetic messages recorded by a reference speaker into corresponding messages reproduced by a target speaker. The first application makes it possible to render anonymous voice messages recorded by different speakers, since they are all reproduced by the same reference speaker. The second application aims, on the contrary, to diversify the voices used in speech synthesis.

The same principle of parallelizing messages via a reference speaker can be applied to voice conversion between two speakers, in accordance with a method of estimating a voice conversion function between, on the one hand, the voice of a source speaker defined from a first voice message recorded by said source speaker and, on the other hand, the voice of a target speaker defined from a second voice message recorded by said target speaker, which, according to the invention, is remarkable in that said method comprises the steps of:

  • generating, from said first voice message recorded by the source speaker and a speech synthesis database, a synthetic recording of said first voice message,
  • estimating a first voice conversion function between the voice of the source speaker and the voice of a reference speaker defined by said speech synthesis database, by a learning operation performed on said first voice message recorded by the source speaker and said synthetic recording of the first voice message,
  • generating, from said second voice message recorded by the target speaker and said speech synthesis database, a synthetic recording of said second voice message,
  • estimating a second voice conversion function between the voice of said reference speaker and the voice of the target speaker, by a learning operation performed on said synthetic recording of the second voice message and said second voice message recorded by the target speaker,
  • estimating said voice conversion function by composing said first and second voice conversion functions (a minimal sketch of this composition is given after the list).
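
As a rough illustration of the two-stage estimation just listed, the following sketch chains the two conversion functions through the reference speaker. It assumes a training routine is available (for instance one returning a GMM-based mapping such as the sketch given earlier) together with aligned frame matrices; every name here is hypothetical.

```python
import numpy as np

def estimate_composed_conversion(train_fn, src_frames, src_synth_frames,
                                 tgt_synth_frames, tgt_frames):
    """Estimate a source->target conversion via a reference (synthetic) speaker.

    train_fn(X, Y) must return a function mapping one X-space vector to Y-space,
    trained on the aligned frame matrices X and Y (e.g. a GMM regression).
    """
    # First function: source speaker -> reference speaker (its synthetic parallel).
    f_src_to_ref = train_fn(src_frames, src_synth_frames)
    # Second function: reference speaker (synthetic parallel) -> target speaker.
    f_ref_to_tgt = train_fn(tgt_synth_frames, tgt_frames)
    # The overall conversion is simply the composition of the two functions.
    return lambda x: f_ref_to_tgt(f_src_to_ref(np.asarray(x)))
```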

According to a first embodiment of the invention, said speech synthesis database is a database of a concatenative speech synthesis system.

According to a second embodiment of the invention, said speech synthesis database is a database of a corpus-based speech synthesis system.

It should be recalled that concatenative synthesis systems can use databases of mono-represented diphones. The choice of the diphone, rather than the phone (the acoustic realization of a phoneme), results from the importance, for the intelligibility of the speech signal, of the transient zone between two phones, which is thus preserved. Diphone synthesis generally leads to a synthetic signal whose intelligibility is quite good. On the other hand, the modifications made by the TD-PSOLA algorithm (F. Charpentier and E. Moulines, "Pitch-synchronous waveform processing techniques for text-to-speech synthesis using diphones", Proceedings of Eurospeech, 1989), in order to satisfy the prosodic targets, introduce distortions into the synthesized signal and thus significantly degrade the quality of the synthetic speech produced.

The recent availability of significant computing resources has allowed the emergence of new solutions grouped under the name of corpus-based synthesis. In this approach, the acoustic database is not restricted to a dictionary of mono-represented diphones, but contains these same elements recorded in different contexts (grammatical, syntactic, phonemic, phonological or prosodic). Each element thus manipulated, also called a "unit", is therefore a segment of speech characterized by a set of symbolic descriptors relating to the context in which it was recorded. In this corpus-based approach, the nature of the synthesis problem changes radically: it is no longer a matter of distorting the speech signal while degrading the quality of the timbre as little as possible, but rather of having a sufficiently rich database and fine-grained algorithms allowing the selection of the units best suited to the context while minimizing artifacts at the concatenation instants. The selection of units can therefore be likened to the problem of minimizing a cost function composed of two types of metrics: a "target cost", which measures the adequacy of the units with respect to the symbolic parameters produced by the linguistic processing modules of the system, and a "concatenation cost", which accounts for the acoustic compatibility of two consecutive units.

For reasons of algorithmic complexity, enumerating and processing from the outset all the combinations of units corresponding to the phonetization of a given text is hardly feasible. The data must therefore be filtered before deciding on the choice of the optimal sequence. For this reason, the unit selection module generally operates in two steps: first a "pre-selection", which consists in selecting sets of candidate units for each target sequence, then a "final selection", which aims to determine the optimal sequence according to a certain predetermined cost function. The pre-selection methods are mostly variants of the method called "Context Oriented Clustering" introduced by Nakajima (S. Nakajima and H. Hiroshi, "Automatic Generation of Synthesis Units Based on Context Oriented Clustering", Proceedings of ICASSP, pp. 659-662, New York, USA, April 1988). By way of example, one can cite the work of Black and Taylor (A.W. Black and P. Taylor, "Automatically clustering similar units for unit selection in speech synthesis", Proceedings of Eurospeech, Rhodes, Greece, September 1997) and of Donovan (R.E. Donovan, "Trainable Speech Synthesis", PhD Thesis, University of Cambridge, United Kingdom, 1996) on this subject. The final selection is done by minimizing a cost function, generally with a Viterbi-type algorithm. Many cost functions have been proposed, differing essentially in the nature of the individual costs used and in the way these costs are combined. It should be noted, however, that determining such heterogeneous cost functions automatically remains delicate, despite the numerous studies in this area (H. Peng, Y. Zhong and M. Chu, "Perpetually optimizing the cost function for unit selection in a TTS system with one single run of MOS evaluation", Proceedings of ICSLP, pp. 2613-2616, 2002), (S.S. Park, C.K. Kim and N.S. Kim, "Discriminative weight training for unit-selection based speech synthesis", Proceedings of Eurospeech, pp. 281-284, 2003), (T. Toda, H. Kawai and M. Tsuzaki, "Optimizing sub-cost functions for segment selection based on perceptual evaluations in concatenative speech synthesis", Proceedings of ICASSP, pp. 657-660, Montreal, Canada, 2004).

The following description, given with reference to the accompanying drawings by way of non-limiting examples, will make it clear what the invention consists of and how it can be implemented.

Figure 1 is a block diagram showing the steps of a voice conversion method between a speaker and a reference speaker.

Figure 2 is a block diagram showing the steps of a voice conversion method between a source speaker and a target speaker.

Figure 3 is a diagram of a voice conversion system implementing the estimation method according to the invention.

Figure 1 illustrates a method of estimating a voice conversion between a speaker and a reference speaker. The voice of said speaker is defined from a recorded voice message, while the voice of said reference speaker is defined from an acoustic database 10 of a concatenative speech synthesis system, preferably corpus-based, although a mono-represented diphone synthesis system can also be used.

In a first step, a synthetic recording parallel to the voice message recorded by the speaker is generated from said speech synthesis database 10.

For this purpose, a first block required for this generation, called the analysis and annotation block 20, is intended to extract, from the recording of the speaker in question, symbolic information relating to the message contained in said recording.

A first type of processing envisaged consists in extracting the spoken message in textual form from the voice recording. This can be obtained automatically by a speech recognition system, or manually by listening to and transcribing the voice messages. In this case, the text thus recognized directly feeds the speech synthesis system 30, thereby generating the desired reference synthetic recording.

However, it may be advantageous to determine the phonetic string actually produced by the speaker in question. For this, standard acoustic-phonetic decoding procedures, for example based on HMM models, can be used. With this variant, it is possible to constrain the speech synthesizer to reproduce exactly the phonetization thus determined.

More generally, it is desirable to introduce a mechanism for annotating the recording in order to extract as much information as can be taken into account by the concatenative synthesis system. Among this information, that relating to intonation seems particularly relevant, because it makes it possible to better control the speaker's speaking style. Thus, a prosodic annotation algorithm can be integrated into the method, or a manual annotation phase of the corpus can be envisaged, in order to take into account melodic markers deemed relevant.

It is then possible to estimate the conversion function sought by applying to the two available parallel recordings, namely the recorded voice message and the synthetic reference recording, a learning operation which will now be described in detail.

As can be seen in figure 1, the processing applied to the two recordings involves several operations needed to obtain the desired conversion function. These operations are, in order:

  • acoustic analysis 40,
  • alignment 50 of the corpora,
  • acoustic classification 60,
  • estimation 70 of the conversion function.

The acoustic analysis is carried out, for example, by means of the HNM ("Harmonic plus Noise Model") model, which assumes that a voiced segment (also called a frame) of the speech signal s(n) can be decomposed into a harmonic part h(n), representing the quasi-periodic component of the signal and consisting of a sum of L harmonic sinusoids of amplitudes A_l and phases φ_l, and a noise part b(n), representing the friction noise and the variation of the glottal excitation from one period to the next, modeled by an LPC ("Linear Prediction Coefficients") filter excited by white Gaussian noise (Y. Stylianou, "Harmonic plus Noise Model for speech, combined with statistical methods, for speech and speaker modification", PhD thesis, Ecole Nationale Supérieure des Télécommunications, France, 1996):

s(n) = h(n) + b(n)     (1)

with

h(n) = Σ_{l=1}^{L} A_l(n) cos(φ_l(n))     (2)
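
A minimal sketch of the harmonic part h(n) of equation (2) is given below, assuming the amplitudes and phases are held constant over the frame and that the harmonic frequencies are exact multiples of F0; the parameter names are illustrative only.

```python
import numpy as np

def harmonic_frame(f0, amplitudes, phases, n_samples, fs=16000):
    """Generate the harmonic part h(n) of a voiced HNM frame.

    f0:          fundamental frequency in Hz
    amplitudes:  (L,) harmonic amplitudes A_l, assumed constant over the frame
    phases:      (L,) harmonic phases phi_l at the start of the frame
    """
    n = np.arange(n_samples)
    h = np.zeros(n_samples)
    for l, (A, phi) in enumerate(zip(amplitudes, phases), start=1):
        # l-th harmonic: a sinusoid at l * f0 with the given amplitude and phase
        h += A * np.cos(2 * np.pi * l * f0 * n / fs + phi)
    return h
```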

For an unvoiced frame, the harmonic part is absent and the signal is simply modeled by white noise shaped by auto-regressive (AR) filtering.

The first step of the HNM analysis consists in deciding whether the analyzed frame is voiced or not. This processing is performed asynchronously, with an analysis step set at 10 ms.

For a voiced frame, the fundamental frequency F0 and the maximum voicing frequency, that is to say the frequency beyond which the signal is considered to consist solely of noise, are determined first. An analysis synchronized on F0 then makes it possible to estimate the parameters of the harmonic part (the amplitudes and the phases) as well as the noise parameters. The harmonic parameters are calculated by minimizing a weighted least-squares criterion (see the article by Y. Stylianou cited above):

$$E = \sum_{n=-T_0^{i}}^{T_0^{i}} w^2(n)\,\bigl(s(n) - h(n)\bigr)^2$$

where s(n) is the original signal, h(n) is the harmonic part defined by relation (5), w(n) is the analysis window, and T_0^i is the fundamental period of the current frame. It should be noted that the analysis frame has a duration equal to twice the fundamental period (see the article by Y. Stylianou cited above). This harmonic analysis is important insofar as it provides reliable information on the value of the spectrum at the harmonic frequencies. Such information is necessary to obtain a robust estimate of the spectral envelope.
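As an illustration of this estimation step (not taken from the patent), the following Python sketch fits the harmonic amplitudes and phases on a frame of two fundamental periods by minimizing the weighted least-squares criterion E. The Hanning window, the helper name estimate_harmonics, and the assumption that the caller passes a segment s of 2*T0 + 1 samples centred on the analysis instant are choices made for the example.

```python
import numpy as np

def estimate_harmonics(s, f0, fs):
    """Weighted least-squares fit of harmonic amplitudes and phases.
    `s` is assumed to be a segment of 2*T0 + 1 samples centred on the
    analysis instant, so that the sum runs over n = -T0 .. T0."""
    T0 = int(round(fs / f0))
    n = np.arange(-T0, T0 + 1)
    w = np.hanning(len(n))                       # analysis window w(n)
    L = int(0.5 * fs / f0)                       # harmonics up to Nyquist
    omega = 2 * np.pi * f0 * np.arange(1, L + 1) / fs
    # h(n) = sum_l a_l cos(omega_l n) + b_l sin(omega_l n)
    M = np.hstack([np.cos(np.outer(n, omega)), np.sin(np.outer(n, omega))])
    # Minimise sum_n w(n)^2 (s(n) - h(n))^2  <=>  weighted linear least squares.
    c, *_ = np.linalg.lstsq(w[:, None] * M, w * s, rcond=None)
    a, b = c[:L], c[L:]
    return np.hypot(a, b), np.arctan2(-b, a)     # amplitudes A_l, phases phi_l
```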

The parts of the spectrum corresponding to noise (whether the noise component of a voiced frame or an unvoiced frame) are modeled using simple linear prediction. The frequency response of the AR model thus estimated is then sampled at a constant step, which provides an estimate of the spectral envelope over the noisy zones.
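The following Python sketch illustrates this noise modeling under stated assumptions: an autocorrelation-method linear prediction of order 18 (an arbitrary choice) followed by uniform sampling of the AR frequency response. The function name ar_envelope is hypothetical.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import freqz

def ar_envelope(frame, order=18, n_points=256):
    """AR (linear prediction) model of a noise frame, and its frequency
    response sampled at a constant step as a spectral-envelope estimate."""
    frame = frame * np.hanning(len(frame))
    r = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    # Autocorrelation method: solve the Toeplitz normal equations R a = r.
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    ar = np.concatenate(([1.0], -a))             # A(z) = 1 - sum_k a_k z^-k
    w, h = freqz([1.0], ar, worN=n_points)       # sample 1/A(e^{jw}) uniformly
    return w, 20.0 * np.log10(np.abs(h) + 1e-12)
```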

In the proposed embodiment, given this sampling of the spectral envelope, the parameters modeling this envelope are deduced using the regularized discrete cepstrum method (O. Cappé, E. Moulines, "Regularization techniques for discrete cepstrum estimation", IEEE Signal Processing Letters, Vol. 3 (4), pp. 100-102, April 1996). The order of the cepstral modeling was set at 20. Moreover, to reproduce the properties of the human ear as faithfully as possible, a Bark-scale transformation is performed. These coefficients are thus comparable to the MFCC ("Mel Frequency Cepstral Coefficients") conventionally encountered in speech recognition. Thus, for each speech frame, an acoustic vector consisting of cepstral parameters is calculated.
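By way of a simplified illustration only, the sketch below fits cepstral coefficients to log-amplitudes observed at discrete (for example harmonic) frequencies by regularized least squares. The penalty used here (weighting high-order coefficients) and the regularization constant are assumptions made for the example; the method of Cappé and Moulines uses a penalty on the curvature of the envelope, and the Bark-scale warping mentioned above is omitted.

```python
import numpy as np

def discrete_cepstrum(freqs_hz, amplitudes, fs, order=20, lam=5e-4):
    """Regularised least-squares fit of a cepstral envelope
    log|S(w)| ~ c_0 + 2 * sum_k c_k cos(k w) to amplitudes observed
    at discrete frequencies (simplified sketch)."""
    omega = 2 * np.pi * np.asarray(freqs_hz, dtype=float) / fs
    k = np.arange(order + 1)
    M = np.cos(np.outer(omega, k))
    M[:, 1:] *= 2.0
    target = np.log(np.asarray(amplitudes, dtype=float) + 1e-12)
    R = np.diag(k.astype(float) ** 2)            # simple smoothness penalty
    c = np.linalg.solve(M.T @ M + lam * R, M.T @ target)
    return c                                     # cepstral coefficients c_0..c_order
```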

It should also be noted that other types of parameters modeling the spectral envelope can be used, for example LSF ("Line Spectral Frequencies") or LAR ("Log Area Ratios").

After the acoustic analysis, the acoustic vectors of the two recordings must be put into correspondence. For this, a conventional dynamic alignment algorithm (DTW, for "Dynamic Time Warping") is used.

Advantageously, if an annotation and a segmentation of the two recordings are available (for example a division into phonemes) and if this information is concordant between the two recordings, then the alignment path can be constrained so as to respect the segmentation marks.
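As an unconstrained illustration of this alignment (the segmentation constraint described above is not implemented), the following Python sketch computes a basic DTW path between two sequences of acoustic vectors; the function name and the Euclidean local cost are assumptions made for the example.

```python
import numpy as np

def dtw_align(X, Y):
    """Basic dynamic time warping between two sequences of acoustic vectors
    (one vector per row); returns the list of aligned frame-index pairs."""
    N, M = len(X), len(Y)
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)   # local costs
    D = np.full((N + 1, M + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, N + 1):
        for j in range(1, M + 1):
            D[i, j] = d[i - 1, j - 1] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack the optimal path from (N, M) to (1, 1).
    path, i, j = [], N, M
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]
```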

In the proposed embodiment, a joint classification of the acoustic vectors of the two aligned recordings is performed. Let x_{1:N} = [x_1, x_2, ..., x_N] and y_{1:N} = [y_1, y_2, ..., y_N] be the sequences of aligned acoustic vectors. Let x and y be the random variables relating to the acoustic vectors of each of the recordings and z = (x, y) the associated pair. In the acoustic classification described here, the random variable z is modeled by a mixture of Gaussians (GMM, for "Gaussian Mixture Model") of order Q. Its probability density is then written in the following form:

$$p(z) = \sum_{i=1}^{Q} \alpha_i \, N(z; \mu_i; \Sigma_i), \qquad \sum_{i=1}^{Q} \alpha_i = 1, \quad \alpha_i \geq 0$$

where N(z; μ; Σ) is the probability density of the normal distribution with mean μ and covariance matrix Σ, and where the α_i are the coefficients of the mixture (α_i is the a priori probability that z is generated by the i-th Gaussian).

The parameters of the model are estimated by applying a conventional iterative procedure, namely the EM (Expectation-Maximization) algorithm (A. P. Dempster, N. M. Laird, D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm", Journal of the Royal Statistical Society B, vol. 39, pp. 1-38, 1977). The initial parameters of the GMM model are obtained using a standard vector quantization technique.
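For illustration, and assuming the scikit-learn library as a stand-in for the EM implementation described above, a joint GMM can be fitted on the concatenated vectors z = (x, y) as sketched below; the k-means initialization plays the role of the vector quantization mentioned in the text, and the order Q = 8 is an arbitrary example value.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_joint_gmm(X, Y, Q=8):
    """Fit a GMM of order Q on the joint vectors z = (x, y), where X and Y
    are the DTW-aligned acoustic vectors (one frame per row)."""
    Z = np.hstack([X, Y])                                # z_n = (x_n, y_n)
    gmm = GaussianMixture(n_components=Q, covariance_type='full',
                          max_iter=200, init_params='kmeans', random_state=0)
    return gmm.fit(Z)                                    # EM estimation of (alpha_i, mu_i, Sigma_i)
```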

Once the GMM model has been learned, it can be used to determine, by regression, a conversion function between the speaker and the reference speaker. In the case of a conversion from a speaker x to a speaker y, this function is written in the form:

$$\hat{y} = F(x) = E[y \mid x] = \sum_{i=1}^{Q} h_i(x)\left[\mu_i^{y} + \Sigma_i^{yx}\left(\Sigma_i^{xx}\right)^{-1}\left(x - \mu_i^{x}\right)\right],$$

where

$$h_i(x) = \frac{\alpha_i \, N(x; \mu_i^{x}; \Sigma_i^{xx})}{\sum_{j=1}^{Q} \alpha_j \, N(x; \mu_j^{x}; \Sigma_j^{xx})}$$

is the a posteriori probability that x is generated by the Gaussian of index i, with

$$\Sigma_i = \begin{pmatrix} \Sigma_i^{xx} & \Sigma_i^{xy} \\ \Sigma_i^{yx} & \Sigma_i^{yy} \end{pmatrix} \quad \text{and} \quad \mu_i = \begin{pmatrix} \mu_i^{x} \\ \mu_i^{y} \end{pmatrix}.$$
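The regression above can be sketched in Python as follows, assuming a joint GMM fitted with full covariance matrices (for example by the sketch following the EM paragraph) and a source-vector dimension p; the function name convert and the use of scipy for the Gaussian densities are illustrative choices.

```python
import numpy as np
from scipy.stats import multivariate_normal

def convert(gmm, x, p):
    """GMM-based regression y_hat = E[y|x] for one source vector x of
    dimension p, using a GMM fitted on joint vectors z = (x, y)."""
    weights, means, covs = gmm.weights_, gmm.means_, gmm.covariances_
    # Posterior probabilities h_i(x) of each Gaussian given x.
    lik = np.array([w * multivariate_normal.pdf(x, m[:p], C[:p, :p])
                    for w, m, C in zip(weights, means, covs)])
    h = lik / lik.sum()
    # Weighted sum of the per-component conditional expectations.
    y_hat = np.zeros(means.shape[1] - p)
    for i in range(len(weights)):
        mu_x, mu_y = means[i, :p], means[i, p:]
        Sxx, Syx = covs[i][:p, :p], covs[i][p:, :p]
        y_hat += h[i] * (mu_y + Syx @ np.linalg.solve(Sxx, x - mu_x))
    return y_hat
```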

Figure 2 illustrates a method of estimating a voice conversion function between a source speaker and a target speaker whose voices are respectively defined on the basis of voice messages recorded by each of the speakers, these recordings being non-parallel.

In a first step, synthetic reference recordings are generated from said recorded voice messages, according to a procedure analogous to that just described with regard to Figure 1.

Two conversion steps are then necessary to convert the voice of the source speaker into that of the target speaker. First, the parameters of the source speaker must be converted into those of the reference speaker, and these must then be transformed so as to reproduce the desired target speaker. Thus, a function performing the desired source-to-target conversion can be estimated by composing two transformation functions given by (4):

$$F_{source \rightarrow target}(x) = F_{reference \rightarrow target}\bigl(F_{source \rightarrow reference}(x)\bigr).$$
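A minimal sketch of this composition, assuming the two intermediate conversions are available as Python callables (for example two instances of the GMM regression sketched earlier), is given below.

```python
def compose_conversions(convert_source_to_reference, convert_reference_to_target):
    """Return the source-to-target conversion obtained by composing the two
    intermediate conversion functions, each mapping one acoustic vector."""
    def convert_source_to_target(x):
        return convert_reference_to_target(convert_source_to_reference(x))
    return convert_source_to_target
```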

A voice conversion system incorporating the described estimation method is shown in Figure 3. In the proposed embodiment, the analysis step still relies on HNM modeling, but this time it is carried out in a pitch-synchronous manner, since this allows better-quality modifications of the pitch and of the spectral envelope (see the article by Y. Stylianou cited above). The extracted spectral parameters are then transformed using a conversion module 80 performing the conversion determined by relation (6).

These modified parameters, as well as the residual information necessary for sound generation (fundamental frequency, phases of the harmonics, gain of the noise part, maximum voicing frequency), are transmitted to an HNM synthesis module. The harmonic component of the signal, defined by equation (2) and present for the voiced signal frames, is generated by summation of previously tabulated sinusoids whose amplitudes are calculated from the converted spectral parameters. The stochastic part is determined by inverse Fourier transform (IFFT) of the spectrum calculated from the spectral parameters.
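As a loose illustration of the harmonic synthesis only (the stochastic part and the tabulated-sinusoid optimization are omitted), the sketch below reads the harmonic amplitudes off a converted cepstral envelope, using the same envelope convention as the cepstrum sketch above, and sums the corresponding sinusoids; all names and parameter choices are assumptions for the example.

```python
import numpy as np

def synthesize_voiced_frame(cepstrum, f0, phases, fs, n_samples):
    """Toy harmonic synthesis: amplitudes are taken from the cepstral envelope
    log|S(w)| = c_0 + 2 * sum_k c_k cos(k w) at multiples of F0."""
    L = min(len(phases), int(0.5 * fs / f0))
    n = np.arange(n_samples)
    k = np.arange(1, len(cepstrum))
    frame = np.zeros(n_samples)
    for l in range(1, L + 1):
        w = 2 * np.pi * l * f0 / fs
        log_amp = cepstrum[0] + 2.0 * np.sum(cepstrum[1:] * np.cos(k * w))
        frame += np.exp(log_amp) * np.cos(w * n + phases[l - 1])
    return frame
```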

As a variant, the HNM model can be replaced by other models known to the person skilled in the art, such as linear prediction models (LPC, for "Linear Predictive Coding"), sinusoidal models or MBE ("Multi-Band Excited") models. The GMM-based conversion method can be replaced by conventional vector quantization (VQ) or fuzzy vector quantization (Fuzzy VQ) techniques.

The description that has just been given of the estimation method according to the invention has referred only to the transformation of parameters relating to timbre. It goes without saying, however, that the same method can also be applied to the transformation of other types of parameters, such as the fundamental frequency ("pitch") or parameters related to voice quality.

According to a preferred implementation of the invention, the steps of the method are determined by the instructions of a program for estimating a voice conversion function incorporated in a server, and the method according to the invention is implemented when this program is loaded into a computer whose operation is then controlled by the execution of the program.

Consequently, the invention also applies to a computer program, in particular a computer program on or in an information medium, suitable for implementing the invention. This program may use any programming language, and be in the form of source code, object code, or intermediate code between source code and object code, such as in a partially compiled form, or in any other form desirable for implementing the method according to the invention.

The information medium may be any entity or device capable of storing the program. For example, the medium may comprise a storage means, such as a ROM, for example a CD-ROM or a microelectronic circuit ROM, or a magnetic recording means, for example a floppy disk or a hard disk.

On the other hand, the information medium may be a transmissible medium such as an electrical or optical signal, which may be conveyed via an electrical or optical cable, by radio or by other means. The program according to the invention may in particular be downloaded over a network of Internet type.

Alternatively, the information medium may be an integrated circuit in which the program is incorporated, the circuit being adapted to execute the method in question or to be used in its execution.

Claims (8)

  1. Method of estimating a voice conversion function for converting between, on the one hand, the voice of a speaker defined on the basis of a voice message recorded by said speaker, and, on the other hand, the voice of a reference speaker defined by a voice synthesis database, characterized in that said method comprises the steps consisting in:
    - generating, on the basis of said voice message recorded by the speaker and of said voice synthesis database, a synthetic recording of said voice message,
    - estimating said voice conversion function by a training operation performed on said recorded voice message and said synthetic recording.
  2. Method of estimating a voice conversion function for converting between, on the one hand, the voice of a source speaker defined on the basis of a first voice message recorded by said source speaker, and, on the other hand, the voice of a target speaker defined on the basis of a second voice message recorded by said target speaker, characterized in that said method comprises the steps consisting in:
    - generating, on the basis of said first voice message recorded by the source speaker and of a voice synthesis database, a synthetic recording of said first voice message,
    - estimating a first voice conversion function for converting between the voice of the source speaker and the voice of a reference speaker defined by said voice synthesis database, by a training operation performed on said first voice message recorded by the source speaker and said synthetic recording of the first voice message,
    - generating, on the basis of said second voice message recorded by the target speaker and of said voice synthesis database, a synthetic recording of said second voice message,
    - estimating a second voice conversion function for converting between the voice of said reference speaker and the voice of the target speaker, by a training operation performed on said synthetic recording of the second voice message and said second voice message recorded by the target speaker,
    - estimating said voice conversion function by composition of said first and said second voice conversion functions.
  3. Method according to one of Claims 1 or 2,
    characterized in that said voice synthesis database is a database of a concatenation-based speech synthesis system.
  4. Method according to one of Claims 1 or 2,
    characterized in that said voice synthesis database is a database of a corpus-based speech synthesis system.
  5. Application of the method according to Claim 1 to the conversion of voice messages recorded by a source speaker into corresponding messages reproduced by said reference speaker.
  6. Application of the method according to Claim 1 to the conversion of synthetic messages recorded by a reference speaker into corresponding messages reproduced by a target speaker.
  7. Voice conversion system, characterized in that it comprises a voice conversion module comprising means for implementing the method according to any one of Claims 1 to 4.
  8. Computer program on an information medium, said program comprising program instructions suitable for implementing a method according to any one of Claims 1 to 4, when said program is loaded and executed in a computer system.
EP05850632A 2005-01-31 2005-12-28 Method of estimating a voice conversion function Not-in-force EP1846918B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR0550278 2005-01-31
PCT/FR2005/003308 WO2006082287A1 (en) 2005-01-31 2005-12-28 Method of estimating a voice conversion function

Publications (2)

Publication Number Publication Date
EP1846918A1 EP1846918A1 (en) 2007-10-24
EP1846918B1 true EP1846918B1 (en) 2009-02-25

Family

ID=34954674

Family Applications (1)

Application Number Title Priority Date Filing Date
EP05850632A Not-in-force EP1846918B1 (en) 2005-01-31 2005-12-28 Method of estimating a voice conversion function

Country Status (5)

Country Link
EP (1) EP1846918B1 (en)
AT (1) ATE424022T1 (en)
DE (1) DE602005012998D1 (en)
ES (1) ES2322909T3 (en)
WO (1) WO2006082287A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101015522B1 (en) * 2005-12-02 2011-02-16 아사히 가세이 가부시키가이샤 Voice quality conversion system
JP4241736B2 (en) * 2006-01-19 2009-03-18 株式会社東芝 Speech processing apparatus and method
CN108780643B (en) 2016-11-21 2023-08-25 微软技术许可有限责任公司 Automatic dubbing method and device
CN111179902B (en) * 2020-01-06 2022-10-28 厦门快商通科技股份有限公司 Speech synthesis method, equipment and medium for simulating resonance cavity based on Gaussian model

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1156819C (en) * 2001-04-06 2004-07-07 国际商业机器公司 Method of producing individual characteristic speech sound from text

Also Published As

Publication number Publication date
WO2006082287A1 (en) 2006-08-10
ES2322909T3 (en) 2009-07-01
DE602005012998D1 (en) 2009-04-09
EP1846918A1 (en) 2007-10-24
ATE424022T1 (en) 2009-03-15

Similar Documents

Publication Publication Date Title
EP1944755B1 (en) Modification of a voice signal
EP1730729A1 (en) Improved voice signal conversion method and system
EP1970894A1 (en) Method and device for modifying an audio signal
FR2553555A1 (en) SPEECH CODING METHOD AND DEVICE FOR IMPLEMENTING IT
LU88189A1 (en) Speech segment coding and pitch control methods for speech synthesis
EP1769489B1 (en) Voice recognition method and system adapted to non-native speakers' characteristics
EP1593116A1 (en) Method for differentiated digital voice and music processing, noise filtering, creation of special effects and device for carrying out said method
EP1730728A1 (en) Method and system for the quick conversion of a voice signal
EP1606792B1 (en) Method for analyzing fundamental frequency information and voice conversion method and system implementing said analysis method
Muralishankar et al. Modification of pitch using DCT in the source domain
Meyer et al. Effect of speech-intrinsic variations on human and automatic recognition of spoken phonemes
Türk New methods for voice conversion
EP1526508B1 (en) Method for the selection of synthesis units
EP1789953B1 (en) Method and device for selecting acoustic units and a voice synthesis device
EP1846918B1 (en) Method of estimating a voice conversion function
Mary et al. Automatic syllabification of speech signal using short time energy and vowel onset points
Kakouros et al. Comparison of spectral tilt measures for sentence prominence in speech—Effects of dimensionality and adverse noise conditions
Csapó et al. Modeling irregular voice in statistical parametric speech synthesis with residual codebook based excitation
Orphanidou et al. Wavelet-based voice morphing
US11302300B2 (en) Method and apparatus for forced duration in neural speech synthesis
Gupta et al. G-Cocktail: An Algorithm to Address Cocktail Party Problem of Gujarati Language Using Cat Boost
Xiao et al. Speech intelligibility enhancement by non-parallel speech style conversion using CWT and iMetricGAN based CycleGAN
Gupta et al. A new framework for artificial bandwidth extension using H∞ filtering
Bous A neural voice transformation framework for modification of pitch and intensity
Alrige et al. End-to-End Text-to-Speech Systems in Arabic: A Comparative Study

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20070831

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

17Q First examination report despatched

Effective date: 20080129

RIN1 Information on inventor provided before grant (corrected)

Inventor name: EN-NAJJARY, TAOUFIK

Inventor name: ROSEC, OLIVIER

DAX Request for extension of the european patent (deleted)
GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: FRENCH

REF Corresponds to:

Ref document number: 602005012998

Country of ref document: DE

Date of ref document: 20090409

Kind code of ref document: P

REG Reference to a national code

Ref country code: ES

Ref legal event code: FG2A

Ref document number: 2322909

Country of ref document: ES

Kind code of ref document: T3

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090225

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090225

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090225

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090225

NLV1 Nl: lapsed or annulled due to failure to fulfill the requirements of art. 29p and 29m of the patents act
PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090525

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090225

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090225

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090225

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090625

REG Reference to a national code

Ref country code: IE

Ref legal event code: FD4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090225

Ref country code: IE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090225

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090225

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090812

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090225

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090225

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090225

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090525

26N No opposition filed

Effective date: 20091126

BERE Be: lapsed

Owner name: FRANCE TELECOM

Effective date: 20091231

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20100701

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090526

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20091231

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20091231

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20091231

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090225

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20091228

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090826

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090225

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090225

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 11

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 12

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20161121

Year of fee payment: 12

Ref country code: FR

Payment date: 20161121

Year of fee payment: 12

Ref country code: GB

Payment date: 20161128

Year of fee payment: 12

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: ES

Payment date: 20161125

Year of fee payment: 12

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 602005012998

Country of ref document: DE

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20171228

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

Effective date: 20180831

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180102

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20180703

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20171228

REG Reference to a national code

Ref country code: ES

Ref legal event code: FD2A

Effective date: 20190704

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20171229