EP1790173A2 - Method for processing sound signals for a communication terminal and communication terminal implementing said method - Google Patents
Method for processing sound signals for a communication terminal and communication terminal implementing said method
- Publication number
- EP1790173A2 (application number EP05778168A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- communication terminal
- voice recognition
- signals
- filtering
- voice
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
- 238000004891 communication Methods 0.000 title claims abstract description 95
- 238000000034 method Methods 0.000 title claims abstract description 31
- 238000012545 processing Methods 0.000 title claims abstract description 9
- 230000005236 sound signal Effects 0.000 title claims description 17
- 230000009471 action Effects 0.000 claims abstract description 5
- 238000001914 filtration Methods 0.000 claims description 42
- 238000003672 processing method Methods 0.000 claims description 3
- 230000006870 function Effects 0.000 description 14
- 238000005192 partition Methods 0.000 description 9
- 238000010295 mobile communication Methods 0.000 description 6
- 230000008569 process Effects 0.000 description 6
- 238000010586 diagram Methods 0.000 description 3
- 230000004044 response Effects 0.000 description 3
- 230000003044 adaptive effect Effects 0.000 description 2
- 230000008054 signal transmission Effects 0.000 description 2
- 230000003595 spectral effect Effects 0.000 description 2
- 230000006978 adaptation Effects 0.000 description 1
- 238000012937 correction Methods 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 238000002592 echocardiography Methods 0.000 description 1
- 230000002401 inhibitory effect Effects 0.000 description 1
- 230000035945 sensitivity Effects 0.000 description 1
- 230000009897 systematic effect Effects 0.000 description 1
- 238000012549 training Methods 0.000 description 1
- 238000010200 validation analysis Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/28—Constructional details of speech recognition systems
Definitions
- the present invention relates to a sound signal processing method for a communication terminal and to a communication terminal implementing this method, in particular for using this communication terminal with different sound acquisition systems.
- This invention can in particular be used in mobile telephony.
- The voice recognition means, in particular the means for processing and storing information, are limited in a communication terminal because of the weight, cost and space constraints that must be respected by the designers of these communication terminals, particularly in the case of portable communication terminals.
- The same communication terminal, and therefore the same set of voice recognition means, can be used with different sound acquisition systems, including in particular different microphones and/or connection means to the communication terminal, as detailed below.
- FIG. 1 shows schematically the operation of voice recognition in an example of the prior art.
- A communication terminal 100, including internal voice recognition means 108, alternately uses different sound acquisition systems: a system 101 including an internal microphone 102, a system 103 of a pedestrian hands-free kit including a microphone 104 external to the communication terminal 100, or a system 105 of a hands-free car kit including a microphone 106 external to the communication terminal 100.
- These recognition means compare parameters extracted from a signal 114, 116 or 118, respectively transmitted by one of the systems 101, 103 or 105, with parameters contained in a database 110 internal to the communication terminal, each set of stored parameters representing a datum, such as a name, or a function.
- This operation generally computes a recognition score for each comparison and chooses the set of stored parameters having the best recognition score when that score exceeds a certain validation threshold.
- If a set of stored parameters is sufficiently close to the parameters extracted from the received signal, this set is transmitted to management means 112 of the communication terminal to perform an operation, such as making a call.
- This proximity is also called the speech recognition rate of a communication terminal. It is accepted that this success rate must be greater than 95% for the speech recognition process to be valid.
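- The score-and-threshold selection described above can be pictured with a short sketch. This is only an illustration, not the patent's algorithm: the feature vectors, the cosine similarity and the 0.95 threshold are assumptions chosen to mirror the "best score above a validation threshold" idea and the 95% figure quoted above.

```python
import numpy as np

VALIDATION_THRESHOLD = 0.95  # assumed value, echoing the 95% rate mentioned above

def recognition_score(extracted, stored):
    """Similarity between two parameter vectors, mapped to [0, 1].
    A plain cosine similarity stands in for whatever measure the
    recognition means actually use."""
    cos = float(np.dot(extracted, stored)) / (
        np.linalg.norm(extracted) * np.linalg.norm(stored) + 1e-12)
    return 0.5 * (1.0 + cos)

def recognize(extracted, database):
    """Return the entry (name or function) whose stored parameter set scores best,
    provided that best score exceeds the validation threshold."""
    best_entry, best_score = None, -1.0
    for entry, stored in database.items():
        score = recognition_score(extracted, stored)
        if score > best_score:
            best_entry, best_score = entry, score
    if best_score >= VALIDATION_THRESHOLD:
        return best_entry   # handed to the management means, e.g. to place a call
    return None             # no stored parameter set is close enough

# Toy database: one stored parameter set per recognizable name.
database = {"alice": np.array([0.9, 0.1, 0.3]), "bob": np.array([0.2, 0.8, 0.5])}
print(recognize(np.array([0.88, 0.12, 0.28]), database))  # -> 'alice'
```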
- The database 110 is built in particular from factory recordings of so-called multi-speaker sequences because, for the same sequence, they integrate the potential sound differences between different people.
- The user can use the communication terminal 100 with different sound acquisition systems 101, 103 or 105, and each of these systems introduces its own distortion into the signal transmitted by the user (in particular its harmonic distortion, its own distortion of volumes or its sensitivity to ambient noise and echoes).
- The speech recognition rate of a communication terminal is often considered insufficient for the user to use the speech recognition of his communication terminal when this communication terminal is used with a sound signal acquisition system different from the one with which the learning procedure was performed or on the basis of which the multi-speaker pre-recordings were made.
- The invention relates to a voice signal processing method for a communication terminal using voice recognition means that compare these voice signals with data stored in a base in order to identify the data corresponding to these signals.
- These identified data are transmitted to management means for triggering an action, the method being characterized in that, since the voice signals can be provided by different sound acquisition systems, separate voice recognition means are used for each acquisition system.
- the voice recognition rate is made satisfactory for various sound acquisition systems of the communication terminal since the signal processing is adapted to each acquisition system.
- A user can therefore satisfactorily use the voice recognition function with all the sound acquisition systems that can be used with his communication terminal.
- Independent sub-bases are included in the database, each sub-base being associated with a sound acquisition system, such that the voice recognition means primarily use the sub-base associated with the sound acquisition system used by the user to perform the comparison.
- the comparison between a signal and the stored data is performed successively for each of the sub-bases until a required recognition rate is reached by this comparison.
- A speech recognition learning procedure is performed with the different sound acquisition systems to generate sub-bases specific to each acquisition system.
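- A minimal sketch of this partitioned lookup is given below, assuming a simple dictionary of sub-bases keyed by acquisition system; the names, the placeholder similarity measure and the required score of 0.95 are illustrative assumptions, not taken from the patent.

```python
import numpy as np

REQUIRED_SCORE = 0.95  # assumed required recognition rate

def score(extracted, stored):
    # Placeholder similarity in [0, 1]; any suitable distance could be used here.
    cos = float(np.dot(extracted, stored)) / (
        np.linalg.norm(extracted) * np.linalg.norm(stored) + 1e-12)
    return 0.5 * (1.0 + cos)

def recognize_with_subbases(extracted, sub_bases, active_system):
    """Search the sub-base of the active acquisition system first, then the others,
    stopping as soon as the required recognition score is reached."""
    ordered = [active_system] + [s for s in sub_bases if s != active_system]
    for system in ordered:
        for entry, stored in sub_bases[system].items():
            if score(extracted, stored) >= REQUIRED_SCORE:
                return entry, system
    return None, None

# One independent sub-base (partition) per sound acquisition system.
sub_bases = {
    "internal_mic":   {"alice": np.array([0.9, 0.1, 0.3])},
    "pedestrian_kit": {"alice": np.array([0.7, 0.3, 0.4])},
    "car_kit":        {"alice": np.array([0.5, 0.5, 0.6])},
}
print(recognize_with_subbases(np.array([0.71, 0.29, 0.41]), sub_bases, "pedestrian_kit"))
```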
- At least two sound signal filters are integrated in the voice recognition means of the communication terminal, each of the filters being specific to a sound acquisition system of the communication terminal.
- the filters have predetermined filtering characteristics.
- The signals delivered by the filters are processed identically by the voice recognition means with respect to the database.
- The voice recognition means contain fixed filtering means associated with a first sound acquisition system and dynamic filtering means associated with a second system, these dynamic filtering means 612 detecting the characteristics of the fixed filtering so as to output a signal similar to the signal delivered by this fixed filtering.
- the invention also relates to a communication terminal processing voice signals by means of voice recognition means comparing these voice signals with data stored in a base in order to identify the data corresponding to these signals, these identified data being transmitted to management means for triggering an action, characterized in that, since the voice signals can be provided by different sound acquisition systems, it comprises separate voice recognition means for each acquisition system.
- the communication terminal is characterized in that the database is located outside the communication terminal in a server.
- the communication terminal comprises, in the database, independent sub-bases, each sub-base being associated with a sound acquisition system considered so that the voice recognition means preferably uses the sub-base associated with the sound acquisition system used by the user to perform the comparison.
- the communication terminal comprises means for performing the comparison between a signal and the data stored successively for each of the sub-bases until a required recognition rate is reached by this comparison.
- the communication terminal comprises means for performing a speech recognition learning procedure with the different sound acquisition systems so as to generate the sub-bases specific to each acquisition system.
- the communication terminal comprises in the voice recognition means at least two sound signal filters, each of the filters being specific to a sound acquisition system of the communication terminal.
- the communication terminal comprises filters that have fixed and predetermined filtering characteristics.
- the communication terminal comprises means for the signals delivered by the filters to be processed identically by the voice recognition means with respect to the database.
- the communication terminal comprises voice recognition means which contain fixed filtering means associated with a first sound acquisition system and dynamic filtering means associated with a second system, these dynamic filtering means detecting the characteristics of the fixed filtering so as to deliver a signal similar to the signal delivered by this fixed filtering.
- the communication terminal comprises a microphone.
- one of these sound acquisition systems is a pedestrian hands-free kit, a hands-free kit for a vehicle or an acquisition system integrated into the communication terminal.
- FIG. 2 is a schematic representation of use cases in which the invention is implemented
- FIG. 3 is a diagram of a first embodiment of the invention
- FIG. 4 is a diagram of a second example of the invention
- FIG. 5 is a diagram showing a spectral correction introduced in various embodiments of the invention.
- FIG. 6 is a schematic representation of a third embodiment of the invention.
- FIG. 2 diagrammatically represents the implementation of the speech recognition method according to the invention for three sound acquisition systems of the same mobile communication terminal 204, implemented by a user 202.
- The so-called learning step for voice recognition has already been carried out, the user being able to trigger, with his voice or any other recognizable sound signal, a function of the communication terminal.
- the user 202 commands his communication terminal 204, through his voice 203, to make a call to a correspondent by simply mentioning the first name of the correspondent.
- the use case 200 of the voice recognition of the mobile communication terminal 204 is implemented for example with a sound acquisition system 206 integrated with the communication terminal 204 and comprising a microphone.
- the voice recognition means of the communication terminal compare the parameters of the signal then transmitted by the system 206 with the sets of parameters stored in the database.
- the communication terminal 204 triggers the call to the desired party.
- The user 202 can then decide to put his communication terminal 204 on his belt or in a pocket, in a use case 210 of the mobile communication terminal 204 with a sound acquisition system 212, commonly called a pedestrian hands-free kit, including a microphone 216 close to the mouth of the user 202, a headset 214, and the cables and connection means connecting them to the communication terminal 204.
- The user can, thanks to the invention, pronounce the name of his correspondent into the microphone 216 and successfully trigger the call to this correspondent.
- the user 202 can then decide to use his communication terminal 204 with the aid of another sound acquisition system 228 in a car 220, in a use case 218 of the mobile communication terminal 204 with a hands-free car kit, including a microphone 230 and cables and connecting means 222 connecting them to the communication terminal 204.
- the user pronounces the name of his correspondent through the microphone 230 and thus controls the call to this correspondent.
- A user 202 can thus use the voice recognition function of his communication terminal 204 with various sound acquisition systems 206, 212 or 228 without this presenting a voice recognition problem when a method according to the invention is used, three preferred embodiments of the invention being described below:
- A first embodiment is shown diagrammatically in FIG. 3, including a communication terminal 300 equipped in particular with voice recognition means 302, a database 304 of sets of parameters, each of said sets corresponding to a function to be recognized, an internal sound acquisition system 305 including an integrated microphone 306, and means 312 for managing the communication terminal 300.
- This communication terminal can also use a sound acquisition system 307, for example corresponding to the pedestrian hands-free kit, including a microphone 308, and a sound acquisition system 309, corresponding for example to the car hands-free kit, including in particular a microphone 310.
- The user performs the speech recognition learning procedure with the various systems 305, 307 and 309 incorporating different microphones 306, 308 and 310.
- the communication terminal comprises means for detecting the sound acquisition system used and inhibiting the other systems.
- a user carries out the learning process with the integrated microphone 306 of his communication terminal 300, for example by selecting on his communication terminal the function to which he wishes to associate a sequence of sounds and then pronouncing this sequence of sounds one or more times.
- The voice recognition means 302 extract a set of parameters from this signal 320, which is then stored in a sub-base, or partition, 314 of the database 304.
- The user then sets up the system 307 of the pedestrian hands-free kit, including another microphone 308, and also carries out the learning procedure with the microphone 308 for the previously processed function.
- The voice recognition means 302 extract a set of parameters from the signal then transmitted by the system 307, which is then stored in another partition of the database 304.
- The user next sets up the system 309 of the hands-free car kit, including another microphone 310, and carries out once again the learning process for the same data or the same function as previously.
- The voice recognition means 302 extract a set of parameters from the signal 324 then transmitted by the system 309, which is then stored in a partition 318 of the database 304.
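- The learning procedure that fills these partitions can be sketched as follows, one partition per acquisition system. The parameter extraction shown (a crude band-energy signature), the system names and the placeholder recordings are illustrative assumptions; the patent does not specify how the parameters are computed.

```python
import numpy as np

def extract_parameters(samples, bands=8):
    """Toy parameter set: log energy in a few frequency bands.
    This stands in for whatever parameters the recognition means 302 extract."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    chunks = np.array_split(spectrum, bands)
    return np.log(np.array([c.sum() for c in chunks]) + 1e-12)

def learn(database, system, entry_name, recordings):
    """Store, in the partition of `system`, the averaged parameter set
    extracted from one or more recordings of the same command."""
    params = np.mean([extract_parameters(r) for r in recordings], axis=0)
    database.setdefault(system, {})[entry_name] = params

# The user repeats the procedure with each microphone (306, 308, 310 in FIG. 3).
database = {}
rng = np.random.default_rng(0)
for system in ("internal_mic", "pedestrian_kit", "car_kit"):
    recordings = [rng.standard_normal(8000) for _ in range(2)]  # placeholder audio
    learn(database, system, "call_alice", recordings)

print({k: v["call_alice"].round(2) for k, v in database.items()})
```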
- The communication terminal recognizes the system used; such recognition is already used to reduce echo or ambient noise. Finally, it compares the parameters extracted by the means 302 from the received signal with the sets of parameters stored in the corresponding partition of the database 304.
- This embodiment is capable of many variants.
- a variant uses the comparison of the sequence pronounced by the user with the partition used at that moment.
- FIG. 4 illustrates a communication terminal 400 containing, in particular, voice recognition means 402, a database 404, means 412 for managing the communication terminal, and a system 405 for sound acquisition including a microphone 406.
- The communication terminal can also operate with two other sound acquisition systems including two other microphones: a system 407 including a microphone 408, said system 407 being for example a hands-free kit, and a system 409 including a microphone 410, said system 409 being for example a hands-free car kit.
- the signal transmission characteristics of the different sound signal acquisition systems 405, 407 and 409 associated with the communication terminal 400 are known before the use of said systems.
- the various systems 405, 407 and 409 for acquiring the sound signal associated with the communication terminal 400 behave like filters.
- - filtering means 414 associated with the system 405, internal to the communication terminal 400, for acquiring the sound signal;
- - filtering means 416 associated with the system 407, external to the communication terminal 400, for acquiring the sound signal;
- - filtering means 418 associated with the system 409, external to the communication terminal 400, for acquiring the sound signal.
- FIG. 5 is an example of adaptation of the spectral characteristics by inverse filtering which is a particular filtering that can be used in this embodiment.
- This FIG. 5 represents three curves connecting the attenuation, for example in dB, on the ordinate 502 as a function of the frequency on the abscissa 504.
- Curve 506 represents the frequency response of a sound acquisition system 405, 407 or 409.
- Curve 508 represents the frequency response of one of the filtering means 414, 416 or 418 respectively associated with the system 405, 407 or 409.
- When the two are combined, a flat response 510 is obtained which does not depend on the frequency within the required bandwidth and which does not depend on the sound acquisition system used.
- All the corresponding parameters stored in the database 404 can then be compared homogeneously by the voice recognition means 420 with any one of the signals 422, 424 or 426 input into said voice recognition means 420, regardless of whether said signals 422, 424 or 426 have been produced by the filtering means 414, 416 or 418 from the signals 428, 430 or 432 respectively.
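- The inverse filtering of FIG. 5 can be sketched numerically: given the measured frequency response of an acquisition system (curve 506), a correcting filter (curve 508) is built so that the cascade of the two is approximately flat (curve 510) in the useful band. The response model, the band limits and the regularization constant below are assumptions, not values from the patent.

```python
import numpy as np

def inverse_filter_gain(system_gain, eps=1e-3):
    """Gain of the correcting filter (curve 508) given the gain of the
    acquisition system (curve 506), with a small regularization so that
    deep notches are not boosted towards infinity."""
    return np.conj(system_gain) / (np.abs(system_gain) ** 2 + eps)

# Toy frequency response of one acquisition system over an assumed 300-3400 Hz band.
freqs = np.linspace(300.0, 3400.0, 256)
system_gain = 1.0 / (1.0 + 1j * freqs / 2000.0)   # assumed low-pass-like behaviour

corrected = system_gain * inverse_filter_gain(system_gain)
flatness_db = 20 * np.log10(np.abs(corrected))
print(flatness_db.min().round(3), flatness_db.max().round(3))  # close to 0 dB: flat response (510)
```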
- This embodiment is capable of numerous variants such as, for example, externalizing the filtering means 414 with respect to the internal system 405.
- In the third embodiment, shown in FIG. 6, a communication terminal 600 contains, in particular, voice recognition means 602, a database 614, means 616 for managing the communication terminal, and means 607 for acquiring the sound signal, said means 607 comprising in particular a microphone 608.
- Another system 609 for acquiring the sound signal can be connected to the communication terminal 600 if this is the wish of the user.
- This system 609 can be in particular a hands-free kit or a hands-free car kit.
- the voice recognition means 602 comprise:
- adaptive filtering means 612; and algorithm means 606 implementing a voice recognition algorithm with the database 614.
- The adaptive filtering means 612 make it possible to detect the signal processing characteristics of the system 609 by comparing, during a time when the user does not speak, a signal 618 coming from the system 609 with a signal 622, so as to identify the filtering 612 delivering a signal similar to the signal delivered by the fixed filtering.
- This comparison uses a double listening of the ambient medium through the system 607 and the system 609, alternately or simultaneously depending on the embodiment.
- A variant of this embodiment consists in operating this double listening not in the learning step but systematically in the operating step, in particular at given time intervals or at each call made or received.
- The adapted signal 618 becomes a signal 620 which can then be processed by the algorithm means 606 to extract the necessary parameters therefrom and then compare these parameters with the sets of parameters stored in the database 614.
- FIG. 6 also shows means 604 that process a signal 624 from the sound signal acquisition system 607 to also adapt it to predetermined levels and transform it into a signal 622.
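- A minimal sketch of such adaptive filtering, using a textbook normalized-LMS update during a double-listening period without speech, is given below. The filter length, step size and signals are assumptions used only to illustrate how the filtering 612 could learn to make the signal from the system 609 resemble the reference signal 622 derived from the system 607; it is not the patent's specific implementation.

```python
import numpy as np

def nlms_identify(external, reference, taps=32, mu=0.5, eps=1e-6):
    """Adapt an FIR filter so that filtering `external` (signal 618 from system 609)
    approximates `reference` (signal 622 derived from system 607).
    Returns the adapted filter coefficients."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(external)):
        x = external[n - taps + 1:n + 1][::-1]    # most recent sample first
        y = np.dot(w, x)                          # adapted output (towards signal 620)
        e = reference[n] - y                      # mismatch with the internal reference
        w += mu * e * x / (np.dot(x, x) + eps)    # normalized LMS update
    return w

# Double listening of the same ambient noise through both systems (no speech).
rng = np.random.default_rng(1)
ambient = rng.standard_normal(4000)
true_path = np.array([0.6, 0.3, 0.1])   # assumed difference between the two acquisition chains
reference = np.convolve(ambient, true_path, mode="full")[:len(ambient)]

w = nlms_identify(ambient, reference)
print(w[:3].round(2))   # the first taps approach [0.6, 0.3, 0.1]
```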
- the mobile communication terminal 300, 400, 600 transmits and receives communications in a radio communication network.
- the database 304, 404, 614 is located outside the mobile communication terminal in a server 700 also located in the radio communication network.
Landscapes
- Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Telephonic Communication Services (AREA)
- Telephone Function (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR0451186A FR2871978B1 (en) | 2004-06-16 | 2004-06-16 | METHOD FOR PROCESSING SOUND SIGNALS FOR A COMMUNICATION TERMINAL AND COMMUNICATION TERMINAL USING THE SAME |
PCT/FR2005/050450 WO2006003340A2 (en) | 2004-06-16 | 2005-06-16 | Method for processing sound signals for a communication terminal and communication terminal implementing said method |
Publications (1)
Publication Number | Publication Date |
---|---|
EP1790173A2 true EP1790173A2 (en) | 2007-05-30 |
Family
ID=34945192
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP05778168A Withdrawn EP1790173A2 (en) | 2004-06-16 | 2005-06-16 | Method for processing sound signals for a communication terminal and communication terminal implementing said method |
Country Status (5)
Country | Link |
---|---|
US (1) | US20080172231A1 (en) |
EP (1) | EP1790173A2 (en) |
CN (1) | CN101128865A (en) |
FR (1) | FR2871978B1 (en) |
WO (1) | WO2006003340A2 (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101335203B1 (en) * | 2010-03-26 | 2013-11-29 | 숙명여자대학교산학협력단 | Peptides for Promotion of Angiogenesis and the use thereof |
US9493698B2 (en) | 2011-08-31 | 2016-11-15 | Universal Display Corporation | Organic electroluminescent materials and devices |
CN102510426A (en) * | 2011-11-29 | 2012-06-20 | 安徽科大讯飞信息科技股份有限公司 | Personal assistant application access method and system |
WO2014081429A2 (en) | 2012-11-21 | 2014-05-30 | Empire Technology Development | Speech recognition |
CN103200329A (en) * | 2013-04-10 | 2013-07-10 | 威盛电子股份有限公司 | Voice control method, mobile terminal device and voice control system |
JP7062958B2 (en) * | 2018-01-10 | 2022-05-09 | トヨタ自動車株式会社 | Communication system and communication method |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS6057261B2 (en) * | 1980-03-18 | 1985-12-13 | 日本電気株式会社 | Multi-line audio input/output device |
US6125347A (en) * | 1993-09-29 | 2000-09-26 | L&H Applications Usa, Inc. | System for controlling multiple user application programs by spoken input |
DE19533541C1 (en) * | 1995-09-11 | 1997-03-27 | Daimler Benz Aerospace Ag | Method for the automatic control of one or more devices by voice commands or by voice dialog in real time and device for executing the method |
JPH0981183A (en) * | 1995-09-14 | 1997-03-28 | Pioneer Electron Corp | Generating method for voice model and voice recognition device using the method |
JPH10105191A (en) * | 1996-09-30 | 1998-04-24 | Toshiba Corp | Speech recognition device and microphone frequency characteristic converting method |
EP0911808B1 (en) * | 1997-10-23 | 2002-05-08 | Sony International (Europe) GmbH | Speech interface in a home network environment |
US5970446A (en) * | 1997-11-25 | 1999-10-19 | At&T Corp | Selective noise/channel/coding models and recognizers for automatic speech recognition |
US6233559B1 (en) * | 1998-04-01 | 2001-05-15 | Motorola, Inc. | Speech control of multiple applications using applets |
EP1224569A4 (en) * | 1999-05-28 | 2005-08-10 | Sehda Inc | Phrase-based dialogue modeling with particular application to creating recognition grammars for voice-controlled user interfaces |
US6937977B2 (en) * | 1999-10-05 | 2005-08-30 | Fastmobile, Inc. | Method and apparatus for processing an input speech signal during presentation of an output audio signal |
US7139709B2 (en) * | 2000-07-20 | 2006-11-21 | Microsoft Corporation | Middleware layer between speech related applications and engines |
US6964023B2 (en) * | 2001-02-05 | 2005-11-08 | International Business Machines Corporation | System and method for multi-modal focus detection, referential ambiguity resolution and mood classification using multi-modal input |
US7072837B2 (en) * | 2001-03-16 | 2006-07-04 | International Business Machines Corporation | Method for processing initially recognized speech in a speech recognition session |
JP3997459B2 (en) * | 2001-10-02 | 2007-10-24 | 株式会社日立製作所 | Voice input system, voice portal server, and voice input terminal |
US7222073B2 (en) * | 2001-10-24 | 2007-05-22 | Agiletv Corporation | System and method for speech activated navigation |
- 2004
- 2004-06-16 FR FR0451186A patent/FR2871978B1/en not_active Expired - Fee Related
- 2005
- 2005-06-16 CN CNA2005800276716A patent/CN101128865A/en active Pending
- 2005-06-16 WO PCT/FR2005/050450 patent/WO2006003340A2/en active Application Filing
- 2005-06-16 EP EP05778168A patent/EP1790173A2/en not_active Withdrawn
- 2005-06-16 US US11/570,755 patent/US20080172231A1/en not_active Abandoned
Non-Patent Citations (1)
Title |
---|
See references of WO2006003340A2 * |
Also Published As
Publication number | Publication date |
---|---|
US20080172231A1 (en) | 2008-07-17 |
CN101128865A (en) | 2008-02-20 |
FR2871978A1 (en) | 2005-12-23 |
WO2006003340A2 (en) | 2006-01-12 |
WO2006003340A3 (en) | 2007-09-13 |
FR2871978B1 (en) | 2006-09-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP0974221B1 (en) | Radiotelephone voice control device, in particular for use in a motor vehicle | |
EP1606795B1 (en) | Distributed speech recognition system | |
EP0932964B1 (en) | Method and device for blind equalizing of transmission channel effects on a digital speech signal | |
EP2057834B1 (en) | Circuit for reducing acoustic echo for a hands-free device usable with a portable telephone | |
EP1606796B1 (en) | Distributed speech recognition method | |
EP1401183B1 (en) | Method and device for echo cancellation | |
CA2183899A1 (en) | Acoustic echo suppressor with subband filtering | |
US20020097844A1 (en) | Speech enabled, automatic telephone dialer using names, including seamless interface with computer-based address book programs | |
EP0692883B1 (en) | Blind equalisation method, and its application to speech recognition | |
EP0856832A1 (en) | Word recognition method and device | |
EP1790173A2 (en) | Method for processing sound signals for a communication terminal and communication terminal implementing said method | |
EP1400097B1 (en) | Method for adaptive control of multichannel acoustic echo cancellation system and device therefor | |
WO2004029934A1 (en) | Voice recognition method with automatic correction | |
US6772118B2 (en) | Automated speech recognition filter | |
EP0786920A1 (en) | Transmission system of correlated signals | |
EP1634435A1 (en) | Echo processing method and device | |
FR2775407A1 (en) | Telephone terminal with voice recognition system | |
FR2767941A1 (en) | ECHO SUPPRESSOR BY SENSE TRANSFORMATION AND ASSOCIATED METHOD | |
EP1155497A1 (en) | Antenna treatment method and system | |
WO2007042677A1 (en) | Interfacing circuit for pre- and post-processing of audio signals before or after software processing operations executed by a processor | |
FR2581469A1 (en) | Vocal entry/exit device and speech recognition or synthesis installation making use of it | |
EP1492312A1 (en) | Telephony user perceptible alerts generation device | |
FR2803927A1 (en) | Voice recognition system for activation of vehicle on-board electronic equipment e.g. navigation equipment or mobile telephones, has system to recognise short forms of keywords |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
AK | Designated contracting states |
Kind code of ref document: A2 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU MC NL PL PT RO SE SI SK TR |
|
AX | Request for extension of the european patent |
Extension state: AL BA HR LV MK YU |
|
DAX | Request for extension of the european patent (deleted) | ||
R17D | Deferred search report published (corrected) |
Effective date: 20070913 |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: G10L 15/28 20060101AFI20070920BHEP |
|
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: LEJAY, FREDERIC Inventor name: PARISEL, ARNAUD |
|
17P | Request for examination filed |
Effective date: 20080313 |
|
RBV | Designated contracting states (corrected) |
Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU MC NL PL PT RO SE SI SK TR |
|
RAP1 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: ALCATEL LUCENT |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20130103 |