US20070255554A1 - Language translation service for text message communications - Google Patents

Language translation service for text message communications

Info

Publication number
US20070255554A1
Authority
US
United States
Prior art keywords
destination
text
language
source
terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/411,450
Inventor
Yigang Cai
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nokia of America Corp
Original Assignee
Lucent Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lucent Technologies Inc filed Critical Lucent Technologies Inc
Priority to US11/411,450 priority Critical patent/US20070255554A1/en
Assigned to LUCENT TECHNOLOGIES INC. reassignment LUCENT TECHNOLOGIES INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CAI, YIQANG
Priority to JP2009507742A priority patent/JP5089683B2/en
Priority to EP07755799A priority patent/EP2011034A2/en
Priority to PCT/US2007/009662 priority patent/WO2007127141A2/en
Priority to KR1020087025912A priority patent/KR101057852B1/en
Priority to CNA2007800147080A priority patent/CN101427244A/en
Publication of US20070255554A1 publication Critical patent/US20070255554A1/en
Abandoned legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/18 Information format or content conversion, e.g. adaptation by the network of the transmitted or received information for the purpose of wireless delivery to users or terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/40 Processing or translation of natural language
    • G06F 40/58 Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 13/00 Speech synthesis; Text to speech systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 13/00 Speech synthesis; Text to speech systems
    • G10L 13/08 Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/26 Speech to text systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B 1/00 Details of transmission systems, not covered by a single one of groups H04B 3/00 - H04B 13/00; Details of transmission systems not characterised by the medium used for transmission
    • H04B 1/38 Transceivers, i.e. devices in which transmitter and receiver form a structural unit and in which at least one part is used for functions of transmitting and receiving
    • H04B 1/40 Circuits

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Telephonic Communication Services (AREA)
  • Machine Translation (AREA)

Abstract

This invention relates to a method and apparatus for providing automatic language translation in a telecommunications network. Communications from a speaker or generator of text in one language are routed to a service center for automatically converting to another language. The service center stores the language preference(s) of destination terminals served by the center. If the input language of a message is one serviced by the center and one of the preferred languages of the destination terminal is serviced by the center, the center translates the message before delivering it to the destination terminal. Advantageously, this arrangement greatly enhances communications between individuals who have no common language in which both are fluent.

Description

    TECHNICAL FIELD
  • This invention relates to methods and apparatus for communications between terminals whose users speak different languages.
  • BACKGROUND OF THE INVENTION
  • As the globalization movement accelerates, it is becoming more and more necessary to allow communications between parties speaking different languages. Further, the range of languages is rapidly increasing as more of the communications are between people speaking an Asian language (e.g., Chinese, Japanese, Hindi, Filipino, Malay) and one of the European languages or English. While many Asian speakers also know English, communications between an English speaker and an Asian speaker place a great burden on the Asian speaker if they are carried out in English, frequently to the disadvantage of the Asian speaker. This burden is one which Asian speakers are increasingly reluctant to bear. Unfortunately, the number of English speakers who are also fluent in an Asian language is still small.
  • Fortunately, software packages which translate between two languages are becoming increasingly sophisticated and of higher quality. For example, SYSTRAN translates between English and any of French, Dutch, Japanese, Chinese, Arabic, Spanish, German, Swedish, Italian, Portuguese and Korean.
  • A problem of the prior art is that in spite of the above factors, communications between speakers of different languages continue to be inefficient and awkward.
  • SUMMARY OF THE INVENTION
  • The above problem is substantially alleviated and an advance is made over the teachings of the prior art in accordance with this invention wherein communications between speakers of two different languages are routed through a service center in which a text in one language is converted to a text in another language; advantageously, this type of text-to-text translation can typically be carried out with present software packages in as little as one second for a short message.
  • In accordance with one feature of Applicant's invention, the translated text is converted with essentially no additional delay into spoken text for announcement to a receiving party of a communication. A voice mail message of this type can then be delivered to the recipient at the recipient's convenience.
  • In accordance with one feature of Applicant's invention, each user specifies a preferred language or two or more languages that are acceptable. If a message is generated in an acceptable language, the translation process is bypassed.
  • In accordance with another feature of Applicant's invention, if the caller is provided with translation software in his/her customer equipment, the network reports to the customer equipment the preferred language(s) of the message recipient. Then, if no translation is necessary or the translation is to a language for which the calling customer's terminal has translation capabilities, no translation is required in the network; otherwise, the call is routed to a service center where the required translation is performed.
  • In accordance with one feature of Applicant's invention, calls which are candidates for translation are identified by a suitable prefix. Calls which do not include such a prefix are processed in the normal manner of the prior art.
  • Advantageously, these arrangements greatly enhance communications between individuals who have no common language in which both are comfortable and fluent.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram illustrating the operation of Applicant's invention; and
  • FIG. 2 is a flow diagram illustrating the operation of Applicant's invention.
  • DETAILED DESCRIPTION
  • FIG. 1 is a block diagram illustrating the operation of the invention. The calling party has terminal equipment 1 or 2. Terminal equipment 1 is a cellular station equipped to transmit text messages or voice. Terminal 2 is a land-based station equipped to transmit text messages or voice. Terminals 1 and 2 can, optionally, be equipped with limited text translation capabilities, which can accept text messages in a first language or one of a plurality of first languages, and can translate the text into one of a plurality of second languages. The called station is one of two stations 5 or 6. Called station 5 is a cellular station equipped to receive text messages or voice messages; land-based station 6 is connected by land-based facilities and is equipped to receive data messages or voice messages. Either of the terminals 5 or 6 can be optionally equipped with software to receive data messages in one language and display or print the translation of these messages into a second language.
  • The calling and called parties are connected via network 10, which can be a network for transmitting IP Multimedia Subsystem (IMS) signals representing data, text data, video, or voice. Network 10 is connected to a service center 20, which includes text-to-text translators 22, speech-to-text translators (operating in a single language) 24, and text-to-speech converters (also operating in a single language) 26. The service center also contains a database 28 for storing the language preferences of destination terminals served by the service center (an illustrative data-model sketch appears after this description). The service center can be part of an instant message server, an e-mail server, or a short message service server. If the output is to be delivered as speech, a voice mail facility 12 in the network can be used to store and deliver voice signals representing the message. Alternatively, the service center can be a separate unit connected to one of these servers by the network 10; in that case, the service center is called whenever one of the servers recognizes the need to translate a message.
  • If the desired mode of operation is not simply text-to-text, but is speech-to-text, text-to-speech, or speech-to-speech, then the speech-to-text translator 24 is used before the text-to-text translator 22, or the text-to-speech converter 26 is used after the text-to-text translator 22 has finished its work, or both, respectively (see the pipeline sketch after this description). The service center then sends text to the called party, or sends speech to a voice messaging system for subsequent communication to the called party. Because the text-to-text translation process is relatively slow with the present state of the art, speech-to-speech translation, in which the translated speech is immediately recognized and can be responded to, does not yet appear to be feasible; that is why translated speech is delivered to a voice messaging system for access by the called party. Note that at the present time, simple text-to-text conversion appears to be the most desirable mode.
  • If the called party is willing to accept text or speech in one of two, or more, languages, then the service center can decide whether the input language is one of the acceptable languages, and can bypass the translation step.
  • FIG. 2 illustrates the operation of Applicant's invention. A caller originates a call (action block 200). The caller accesses the database of the service center to determine the preferred languages of the destination party (action block 201). Test 202 is used to determine whether the caller wishes to have his/her message translated to a language available in his/her own software. If so, the call is translated and then routed as in the prior art (action block 215). If not, then test 203 is used to determine whether a caller specifies a translation requirement. In the preferred embodiment of Applicant's invention, this is done by using a prefix in the call addressing mechanism. The prefix may also be augmented by treating calls to a set of specified terminating customer equipments as requiring translation. If the caller has specified the translation option, test 205 is used to determine whether the caller specifies the target language. If so, the network routes the call to the service center (action block 207) and specifies to the service center the identity of the target language as well as the language of the original message.
  • If the caller does not specify the target language, then the network routes the call to the service center (action block 209). The service center then determines the target language (action block 211). If the caller has not specified translation, then test 213 is used to determine whether the called party has requested that incoming messages be translated to one of one or more specific target languages, and whether those target languages are all different from the source language. If the called party has not requested translation, or if the calling party's language is the same as a called-party target language, then the call is routed as in the prior art (action block 215). If the called party has requested translation to a language different from the source language, then the network routes the call to the service center (action block 209, previously discussed). The routing sketch after this description summarizes tests 202, 203, 205, and 213.
  • Following execution of action block 207 or 211, the service center performs the translation (action block 217). The translated message is then routed to the called party (action block 219). If the caller inputs voice, the speech-to-text translator 24 generates text for use by the text-to-text translator 22 in generating text in the target language. If the called party wishes to have speech delivered, the output of the text-to-text translator 22 is presented to the text-to-speech converter 26, which then generates a voice mail message for storage in voice mail unit 28 for subsequent delivery to the called party.
  • Note that if the caller knows that the called party has software for translating from the caller's language to a language desired by the called party, then the call can be handled as in the prior art. In that case, the original request to determine the languages acceptable to the called party includes the source languages from which the called party can make a translation. The called party then must recognize the need for translation and invoke the required software, or can recognize that calls from a specific caller, identified by caller identification, must be translated.
  • If a message is destined for a plurality of terminals, the service center can generate messages to each of the plurality in the preferred language of that terminal (see the fan-out sketch after this description).
  • In some cases, the recipient of a message may wish to examine the original source text, since translation is an imperfect process. The service center should store the source text, and transmit this source text upon request or routinely.
  • The above description is of one preferred embodiment of Applicant's invention. Other embodiments will be apparent to those of ordinary skill in the art. The invention is limited only by the attached claims.
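
As a rough illustration of the arrangement described above, the destination-preference database held by service center 20 might be modeled as a simple keyed record per destination terminal, as in the Python sketch below; the class, method, and field names are hypothetical and are not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class LanguagePreferences:
    # Languages the destination terminal is willing to receive.
    preferred_languages: List[str]
    # Languages the destination terminal can translate from on its own, if any.
    translatable_sources: List[str] = field(default_factory=list)
    # Whether the destination wants delivery as speech (text-to-speech plus voice mail).
    deliver_as_speech: bool = False


class PreferenceDatabase:
    """Stands in for the service-center database of destination language preferences."""

    def __init__(self) -> None:
        self._records: Dict[str, LanguagePreferences] = {}

    def register(self, destination: str, prefs: LanguagePreferences) -> None:
        self._records[destination] = prefs

    def lookup(self, destination: str) -> Optional[LanguagePreferences]:
        # This is the information the network could report back to the caller's
        # equipment when deciding whether local translation software suffices.
        return self._records.get(destination)


if __name__ == "__main__":
    db = PreferenceDatabase()
    db.register("+15551230000", LanguagePreferences(preferred_languages=["zh", "en"]))
    print(db.lookup("+15551230000"))
```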
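
The mode-dependent chaining of the speech-to-text translator 24, the text-to-text translator 22, and the text-to-speech converter 26 can be summarized in the pipeline sketch below. The translator arguments are placeholder callables standing in for real recognition, translation, and synthesis engines; the sketch illustrates only the ordering described above, under the assumption that speech output is always stored as voice mail.

```python
from typing import Callable, Optional


def deliver(payload: str,
            input_is_speech: bool,
            output_as_speech: bool,
            source_lang: str,
            target_lang: str,
            speech_to_text: Callable[[str, str], str],     # translator 24 (single language)
            text_to_text: Callable[[str, str, str], str],  # translator 22
            text_to_speech: Callable[[str, str], str],     # converter 26 (single language)
            store_voice_mail: Callable[[str], None]) -> Optional[str]:
    # Speech input is first converted to text in the source language.
    text = speech_to_text(payload, source_lang) if input_is_speech else payload
    # Text-to-text translation performs the actual language conversion.
    translated = text_to_text(text, source_lang, target_lang)
    if output_as_speech:
        # Real-time speech-to-speech conversation is not attempted; the synthesized
        # speech is left in voice mail for the called party to retrieve later.
        store_voice_mail(text_to_speech(translated, target_lang))
        return None
    # Plain text-to-text delivery: the translated text goes to the called party.
    return translated
```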
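
The decision logic of FIG. 2 (tests 202, 203, 205, and 213) reduces to a small routing function, sketched below. The parameter names are hypothetical, and the returned strings simply name the corresponding action blocks rather than performing any routing.

```python
from typing import Optional, Set


def route(source_lang: str,
          caller_requests_translation: bool,         # test 203: e.g. a dialing prefix
          caller_target_lang: Optional[str],         # test 205: caller-specified target
          caller_local_targets: Set[str],            # languages the caller's own software can produce
          callee_preferred_langs: Set[str]) -> str:  # from the service-center database
    # Test 202: the caller's own equipment can already produce an acceptable language.
    if caller_local_targets & callee_preferred_langs:
        return "translate locally, then route as in the prior art (block 215)"
    # Tests 203 and 205: the caller asked for network translation.
    if caller_requests_translation:
        if caller_target_lang is not None:
            return "route to the service center with source and target languages (block 207)"
        return "route to the service center; the center determines the target (blocks 209/211)"
    # Test 213: the called party wants translation and no preferred language matches the source.
    if callee_preferred_langs and source_lang not in callee_preferred_langs:
        return "route to the service center (block 209)"
    # No translation needed.
    return "route as in the prior art (block 215)"
```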
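
For a message addressed to a plurality of terminals, per-terminal translation amounts to a simple fan-out over the stored preferences, as in the sketch below; the text-to-text translator is again a placeholder, and translation is bypassed when the source language is already acceptable.

```python
from typing import Callable, Dict


def fan_out(source_text: str,
            source_lang: str,
            preferred_lang_by_destination: Dict[str, str],
            text_to_text: Callable[[str, str, str], str]) -> Dict[str, str]:
    """Translate one source message separately for each destination terminal."""
    deliveries = {}
    for destination, preferred in preferred_lang_by_destination.items():
        if preferred == source_lang:
            deliveries[destination] = source_text
        else:
            deliveries[destination] = text_to_text(source_text, source_lang, preferred)
    return deliveries
```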

Claims (18)

1. A method of processing a text communications call, comprising the steps of:
determining whether a source language is the same as a preferred target language for a text message call between a source terminal and a destination terminal;
if the source language does not match a preferred target language, routing the call via a service center for automatic translation of a source text in a first language to a destination text in a second language; and
routing said destination text from said service center to said destination terminal.
2. The method of claim 1 further comprising the steps of:
determining whether a caller party wishes to input text by speech; and
converting the input speech text to data text for use as a source text for said automatic translation.
3. The method of claim 1 further comprising the steps of:
determining whether a called party wishes to receive said destination as speech; and
converting said destination text to destination speech prior to transmitting said destination text to said destination terminal.
4. The method of claim 3 further comprising the step of:
transmitting said destination speech to a voice mail system prior to transmitting said destination speech to said destination terminal.
5. The method of claim 1 wherein said destination text is in one of a plurality of acceptable destination languages; and
wherein said translation is bypassed if said source language is one of said acceptable destination languages.
6. The method of claim 1 wherein a source terminal has limited translation capabilities, and said call is routed directly to said destination terminal if said source terminal can translate to a language acceptable to said destination terminal.
7. The method of claim 1 wherein at least one of said source and said destination terminals is a cellular terminal.
8. The method of claim 1 wherein said service center comprises a database for storing language preferences of destination terminals served by said service center.
9. The method of claim 1 wherein said language preferences include languages which a destination terminal can translate.
10. Apparatus for processing a text communications call, comprising:
a service center for automatic translation of a source text in a first language to a destination text in a second language;
means for determining whether a source language is the same as a preferred target language for a text message call between a source terminal and a destination terminal;
if the source language does not match a preferred target language, means for routing the call via said service center for automatic translation of a source text in a first language to a destination text in a second language; and
means for routing said destination text from said service center to said destination terminal.
11. The apparatus of claim 10 further comprising:
means for determining whether a caller party wishes to input text by speech; and
means for converting the input speech text to data text for use as a source text for said automatic translation.
12. The apparatus of claim 10 further comprising:
means for determining whether a called party wishes to receive said destination as speech; and
means for converting said destination text to destination speech prior to transmitting said destination text to said destination terminal.
13. The apparatus of claim 12 further comprising:
a voice mail system;
means for transmitting said destination speech to said voice mail system prior to transmitting said destination speech to said destination terminal.
14. The apparatus of claim 10 wherein said destination text is in one of a plurality of acceptable destination languages; and
wherein said translation by said service center is bypassed if said source language is one of said acceptable destination languages.
15. The apparatus of claim 10 wherein a source terminal comprises limited translation capabilities, and wherein said call is routed directly to said destination terminal if said source terminal can translate to a language acceptable to said destination terminal.
16. The apparatus of claim 10 wherein at least one of said source and said destination terminals is a cellular terminal.
17. The apparatus of claim 10 wherein said service center comprises a database for storing language preferences of destination terminals served by said service center.
18. The apparatus of claim 10 wherein said language preferences include languages which a destination terminal can translate.
US11/411,450 2006-04-26 2006-04-26 Language translation service for text message communications Abandoned US20070255554A1 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
US11/411,450 US20070255554A1 (en) 2006-04-26 2006-04-26 Language translation service for text message communications
JP2009507742A JP5089683B2 (en) 2006-04-26 2007-04-19 Language translation service for text message communication
EP07755799A EP2011034A2 (en) 2006-04-26 2007-04-19 Language translation service for text message communications
PCT/US2007/009662 WO2007127141A2 (en) 2006-04-26 2007-04-19 Language translation service for text message communications
KR1020087025912A KR101057852B1 (en) 2006-04-26 2007-04-19 Method and apparatus for handling text communication call
CNA2007800147080A CN101427244A (en) 2006-04-26 2007-04-19 Language translation service for text message communications

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US11/411,450 US20070255554A1 (en) 2006-04-26 2006-04-26 Language translation service for text message communications

Publications (1)

Publication Number Publication Date
US20070255554A1 true US20070255554A1 (en) 2007-11-01

Family

ID=38563125

Family Applications (1)

Application Number Title Priority Date Filing Date
US11/411,450 Abandoned US20070255554A1 (en) 2006-04-26 2006-04-26 Language translation service for text message communications

Country Status (6)

Country Link
US (1) US20070255554A1 (en)
EP (1) EP2011034A2 (en)
JP (1) JP5089683B2 (en)
KR (1) KR101057852B1 (en)
CN (1) CN101427244A (en)
WO (1) WO2007127141A2 (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110046939A1 (en) * 2009-08-21 2011-02-24 Avaya Inc. Alerting of language preference and translating across language boundaries
US20120016656A1 (en) * 2010-07-13 2012-01-19 Enrique Travieso Dynamic language translation of web site content
US20120166183A1 (en) * 2009-09-04 2012-06-28 David Suendermann System and method for the localization of statistical classifiers based on machine translation
US20120271619A1 (en) * 2011-04-21 2012-10-25 Sherif Aly Abdel-Kader Methods and systems for sharing language capabilities
WO2012151479A3 (en) * 2011-05-05 2013-03-21 Ortsbo, Inc. Cross-language communication between proximate mobile devices
US20140157113A1 (en) * 2012-11-30 2014-06-05 Ricoh Co., Ltd. System and Method for Translating Content between Devices
US8949223B2 (en) 2003-02-21 2015-02-03 Motionpoint Corporation Dynamic language translation of web site content
WO2015049697A1 (en) 2013-10-04 2015-04-09 Deshmukh Rakesh A gesture based system for translation and transliteration of input text and a method thereof
US20150268943A1 (en) * 2014-03-19 2015-09-24 Mediatek Singapore Pte. Ltd. File processing method and electronic apparatus
US20160117315A1 (en) * 2013-07-18 2016-04-28 Tencent Technology (Shenzhen) Company Limited Method And Apparatus For Processing Message
WO2018125003A1 (en) * 2016-12-30 2018-07-05 Turkcell Teknoloji̇ Araştirma Ve Geli̇şti̇rme Anoni̇m Şi̇rketi̇ A translation system
US20200168203A1 (en) * 2018-11-26 2020-05-28 International Business Machines Corporation Sharing confidential information with privacy using a mobile phone
CN113726968A (en) * 2021-08-18 2021-11-30 中国联合网络通信集团有限公司 Terminal communication method, device, server and storage medium

Families Citing this family (127)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8677377B2 (en) 2005-09-08 2014-03-18 Apple Inc. Method and apparatus for building an intelligent automated assistant
US9318108B2 (en) 2010-01-18 2016-04-19 Apple Inc. Intelligent automated assistant
US10002189B2 (en) 2007-12-20 2018-06-19 Apple Inc. Method and apparatus for searching using an active ontology
US9330720B2 (en) 2008-01-03 2016-05-03 Apple Inc. Methods and apparatus for altering audio output signals
US8996376B2 (en) 2008-04-05 2015-03-31 Apple Inc. Intelligent text-to-speech conversion
US20100030549A1 (en) 2008-07-31 2010-02-04 Lee Michael M Mobile device having human language translation capability with positional feedback
US8676904B2 (en) 2008-10-02 2014-03-18 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
US10241644B2 (en) 2011-06-03 2019-03-26 Apple Inc. Actionable reminder entries
US10241752B2 (en) 2011-09-30 2019-03-26 Apple Inc. Interface for a virtual digital assistant
US9431006B2 (en) 2009-07-02 2016-08-30 Apple Inc. Methods and apparatuses for automatic speech recognition
US10276170B2 (en) 2010-01-18 2019-04-30 Apple Inc. Intelligent automated assistant
US8682667B2 (en) 2010-02-25 2014-03-25 Apple Inc. User profiling for selecting user specific voice input processing information
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
US10057736B2 (en) 2011-06-03 2018-08-21 Apple Inc. Active transport based notifications
US10134385B2 (en) 2012-03-02 2018-11-20 Apple Inc. Systems and methods for name pronunciation
US9483461B2 (en) 2012-03-06 2016-11-01 Apple Inc. Handling speech synthesis of content for multiple languages
US10417037B2 (en) 2012-05-15 2019-09-17 Apple Inc. Systems and methods for integrating third party services with a digital assistant
US9721563B2 (en) 2012-06-08 2017-08-01 Apple Inc. Name recognition system
US9547647B2 (en) 2012-09-19 2017-01-17 Apple Inc. Voice-based media searching
KR102380145B1 (en) 2013-02-07 2022-03-29 애플 인크. Voice trigger for a digital assistant
WO2014197334A2 (en) 2013-06-07 2014-12-11 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
WO2014197335A1 (en) 2013-06-08 2014-12-11 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
CN110442699A (en) 2013-06-09 2019-11-12 苹果公司 Operate method, computer-readable medium, electronic equipment and the system of digital assistants
US10296160B2 (en) 2013-12-06 2019-05-21 Apple Inc. Method for extracting salient dialog usage from live data
US9633004B2 (en) 2014-05-30 2017-04-25 Apple Inc. Better resolution when referencing to concepts
WO2015184186A1 (en) 2014-05-30 2015-12-03 Apple Inc. Multi-command single utterance input method
US9715875B2 (en) 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
US10170123B2 (en) 2014-05-30 2019-01-01 Apple Inc. Intelligent assistant for home automation
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US9818400B2 (en) 2014-09-11 2017-11-14 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US10074360B2 (en) 2014-09-30 2018-09-11 Apple Inc. Providing an indication of the suitability of speech recognition
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US10127911B2 (en) 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US10152299B2 (en) 2015-03-06 2018-12-11 Apple Inc. Reducing response latency of intelligent automated assistants
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
US9721566B2 (en) 2015-03-08 2017-08-01 Apple Inc. Competing devices responding to voice triggers
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US10460227B2 (en) 2015-05-15 2019-10-29 Apple Inc. Virtual assistant in a communication session
US10083688B2 (en) 2015-05-27 2018-09-25 Apple Inc. Device voice control for selecting a displayed affordance
US9578173B2 (en) 2015-06-05 2017-02-21 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
US20160378747A1 (en) 2015-06-29 2016-12-29 Apple Inc. Virtual assistant for media playback
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
US10671428B2 (en) 2015-09-08 2020-06-02 Apple Inc. Distributed personal assistant
US10366158B2 (en) 2015-09-29 2019-07-30 Apple Inc. Efficient word encoding for recurrent neural network language models
US11010550B2 (en) 2015-09-29 2021-05-18 Apple Inc. Unified language modeling framework for word prediction, auto-completion and auto-correction
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
CN106804031B (en) * 2015-11-26 2020-08-07 中国移动通信集团公司 Voice conversion method and device
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US11227589B2 (en) 2016-06-06 2022-01-18 Apple Inc. Intelligent list reading
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
DK179309B1 (en) 2016-06-09 2018-04-23 Apple Inc Intelligent automated assistant in a home environment
US10509862B2 (en) 2016-06-10 2019-12-17 Apple Inc. Dynamic phrase expansion of language input
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
US10490187B2 (en) 2016-06-10 2019-11-26 Apple Inc. Digital assistant providing automated status report
US10586535B2 (en) 2016-06-10 2020-03-10 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
DK179049B1 (en) 2016-06-11 2017-09-18 Apple Inc Data driven natural language event detection and classification
DK179415B1 (en) 2016-06-11 2018-06-14 Apple Inc Intelligent device arbitration and control
DK179343B1 (en) 2016-06-11 2018-05-14 Apple Inc Intelligent task discovery
DK201670540A1 (en) 2016-06-11 2018-01-08 Apple Inc Application integration with a digital assistant
US10474753B2 (en) 2016-09-07 2019-11-12 Apple Inc. Language identification using recurrent neural networks
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US11281993B2 (en) 2016-12-05 2022-03-22 Apple Inc. Model and ensemble compression for metric learning
US10593346B2 (en) 2016-12-22 2020-03-17 Apple Inc. Rank-reduced token representation for automatic speech recognition
US11204787B2 (en) 2017-01-09 2021-12-21 Apple Inc. Application integration with a digital assistant
WO2018205072A1 (en) * 2017-05-08 2018-11-15 深圳市卓希科技有限公司 Method and apparatus for converting text into speech
US10417266B2 (en) 2017-05-09 2019-09-17 Apple Inc. Context-aware ranking of intelligent response suggestions
DK201770383A1 (en) 2017-05-09 2018-12-14 Apple Inc. User interface for correcting recognition errors
US10395654B2 (en) 2017-05-11 2019-08-27 Apple Inc. Text normalization based on a data-driven learning network
DK201770439A1 (en) 2017-05-11 2018-12-13 Apple Inc. Offline personal assistant
US10726832B2 (en) 2017-05-11 2020-07-28 Apple Inc. Maintaining privacy of personal information
DK179496B1 (en) 2017-05-12 2019-01-15 Apple Inc. USER-SPECIFIC Acoustic Models
DK179745B1 (en) 2017-05-12 2019-05-01 Apple Inc. SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT
DK201770427A1 (en) 2017-05-12 2018-12-20 Apple Inc. Low-latency intelligent automated assistant
US11301477B2 (en) 2017-05-12 2022-04-12 Apple Inc. Feedback analysis of a digital assistant
DK201770431A1 (en) 2017-05-15 2018-12-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
DK201770432A1 (en) 2017-05-15 2018-12-21 Apple Inc. Hierarchical belief states for digital assistants
US10403278B2 (en) 2017-05-16 2019-09-03 Apple Inc. Methods and systems for phonetic matching in digital assistant services
DK179560B1 (en) 2017-05-16 2019-02-18 Apple Inc. Far-field extension for digital assistant services
US10311144B2 (en) 2017-05-16 2019-06-04 Apple Inc. Emoji word sense disambiguation
US20180336275A1 (en) 2017-05-16 2018-11-22 Apple Inc. Intelligent automated assistant for media exploration
US10657328B2 (en) 2017-06-02 2020-05-19 Apple Inc. Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling
US10445429B2 (en) 2017-09-21 2019-10-15 Apple Inc. Natural language understanding using vocabularies with compressed serialized tries
US10755051B2 (en) 2017-09-29 2020-08-25 Apple Inc. Rule-based natural language processing
US10636424B2 (en) 2017-11-30 2020-04-28 Apple Inc. Multi-turn canned dialog
US10733982B2 (en) 2018-01-08 2020-08-04 Apple Inc. Multi-directional dialog
US10733375B2 (en) 2018-01-31 2020-08-04 Apple Inc. Knowledge-based framework for improving natural language understanding
US10789959B2 (en) 2018-03-02 2020-09-29 Apple Inc. Training speaker recognition models for digital assistants
US10592604B2 (en) 2018-03-12 2020-03-17 Apple Inc. Inverse text normalization for automatic speech recognition
US10818288B2 (en) 2018-03-26 2020-10-27 Apple Inc. Natural assistant interaction
US10909331B2 (en) 2018-03-30 2021-02-02 Apple Inc. Implicit identification of translation payload with neural machine translation
US11145294B2 (en) 2018-05-07 2021-10-12 Apple Inc. Intelligent automated assistant for delivering content from user experiences
US10928918B2 (en) 2018-05-07 2021-02-23 Apple Inc. Raise to speak
CN108650419A (en) * 2018-05-09 2018-10-12 深圳市知远科技有限公司 Telephone interpretation system based on smart mobile phone
US10984780B2 (en) 2018-05-21 2021-04-20 Apple Inc. Global semantic word embeddings using bi-directional recurrent neural networks
DK201870355A1 (en) 2018-06-01 2019-12-16 Apple Inc. Virtual assistant operation in multi-device environments
DK179822B1 (en) 2018-06-01 2019-07-12 Apple Inc. Voice interaction at a primary device to access call functionality of a companion device
DK180639B1 (en) 2018-06-01 2021-11-04 Apple Inc DISABILITY OF ATTENTION-ATTENTIVE VIRTUAL ASSISTANT
US10892996B2 (en) 2018-06-01 2021-01-12 Apple Inc. Variable latency device coordination
US11386266B2 (en) 2018-06-01 2022-07-12 Apple Inc. Text correction
US10496705B1 (en) 2018-06-03 2019-12-03 Apple Inc. Accelerated task performance
US11010561B2 (en) 2018-09-27 2021-05-18 Apple Inc. Sentiment prediction from textual data
US10839159B2 (en) 2018-09-28 2020-11-17 Apple Inc. Named entity normalization in a spoken dialog system
US11462215B2 (en) 2018-09-28 2022-10-04 Apple Inc. Multi-modal inputs for voice commands
US11170166B2 (en) 2018-09-28 2021-11-09 Apple Inc. Neural typographical error modeling via generative adversarial networks
US11475898B2 (en) 2018-10-26 2022-10-18 Apple Inc. Low-latency multi-speaker speech recognition
CN109739664B (en) * 2018-12-29 2021-05-18 联想(北京)有限公司 Information processing method, information processing apparatus, electronic device, and medium
US11638059B2 (en) 2019-01-04 2023-04-25 Apple Inc. Content playback on multiple devices
US11348573B2 (en) 2019-03-18 2022-05-31 Apple Inc. Multimodality in digital assistant systems
DK201970509A1 (en) 2019-05-06 2021-01-15 Apple Inc Spoken notifications
US11307752B2 (en) 2019-05-06 2022-04-19 Apple Inc. User configurable task triggers
US11475884B2 (en) 2019-05-06 2022-10-18 Apple Inc. Reducing digital assistant latency when a language is incorrectly determined
US11423908B2 (en) 2019-05-06 2022-08-23 Apple Inc. Interpreting spoken requests
US11140099B2 (en) 2019-05-21 2021-10-05 Apple Inc. Providing message response suggestions
US11289073B2 (en) 2019-05-31 2022-03-29 Apple Inc. Device text to speech
US11496600B2 (en) 2019-05-31 2022-11-08 Apple Inc. Remote execution of machine-learned models
DK180129B1 (en) 2019-05-31 2020-06-02 Apple Inc. User activity shortcut suggestions
US11360641B2 (en) 2019-06-01 2022-06-14 Apple Inc. Increasing the relevance of new available information
US11488406B2 (en) 2019-09-25 2022-11-01 Apple Inc. Text detection using global geometry estimators

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5987401A (en) * 1995-12-08 1999-11-16 Apple Computer, Inc. Language translation for real-time text-based conversations
US20020022954A1 (en) * 2000-07-25 2002-02-21 Sayori Shimohata Conversation system and conversation method
US6438524B1 (en) * 1999-11-23 2002-08-20 Qualcomm, Incorporated Method and apparatus for a voice controlled foreign language translation device
US20020169592A1 (en) * 2001-05-11 2002-11-14 Aityan Sergey Khachatur Open environment for real-time multilingual communication
US20030065504A1 (en) * 2001-10-02 2003-04-03 Jessica Kraemer Instant verbal translator
US20030120478A1 (en) * 2001-12-21 2003-06-26 Robert Palmquist Network-based translation system
US20030149557A1 (en) * 2002-02-07 2003-08-07 Cox Richard Vandervoort System and method of ubiquitous language translation for wireless devices
US20030163300A1 (en) * 2002-02-22 2003-08-28 Mitel Knowledge Corporation System and method for message language translation
US20040167770A1 (en) * 2003-02-24 2004-08-26 Microsoft Corporation Methods and systems for language translation
US6789057B1 (en) * 1997-01-07 2004-09-07 Hitachi, Ltd. Dictionary management method and apparatus
US6980953B1 (en) * 2000-10-31 2005-12-27 International Business Machines Corp. Real-time remote transcription or translation service
US7310605B2 (en) * 2003-11-25 2007-12-18 International Business Machines Corporation Method and apparatus to transliterate text using a portable device
US7409333B2 (en) * 2002-11-06 2008-08-05 Translution Holdings Plc Translation of electronically transmitted messages

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0353377A (en) * 1989-07-21 1991-03-07 Hitachi Ltd Decentralized hierarchical translation system
JPH10289235A (en) * 1997-04-17 1998-10-27 Brother Ind Ltd Electronic mail device
US6219638B1 (en) * 1998-11-03 2001-04-17 International Business Machines Corporation Telephone messaging and editing system
US7333507B2 (en) * 2001-08-31 2008-02-19 Philip Bravin Multi modal communications system
JP2003288340A (en) * 2002-03-27 2003-10-10 Ntt Comware Corp Speech translation device
US8392173B2 (en) * 2003-02-10 2013-03-05 At&T Intellectual Property I, L.P. Message translations
US20040267527A1 (en) 2003-06-25 2004-12-30 International Business Machines Corporation Voice-to-text reduction for real time IM/chat/SMS
JP4025730B2 (en) * 2004-01-16 2007-12-26 富士通株式会社 Information system, information providing method, and program

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5987401A (en) * 1995-12-08 1999-11-16 Apple Computer, Inc. Language translation for real-time text-based conversations
US6789057B1 (en) * 1997-01-07 2004-09-07 Hitachi, Ltd. Dictionary management method and apparatus
US6438524B1 (en) * 1999-11-23 2002-08-20 Qualcomm, Incorporated Method and apparatus for a voice controlled foreign language translation device
US20020022954A1 (en) * 2000-07-25 2002-02-21 Sayori Shimohata Conversation system and conversation method
US6980953B1 (en) * 2000-10-31 2005-12-27 International Business Machines Corp. Real-time remote transcription or translation service
US20020169592A1 (en) * 2001-05-11 2002-11-14 Aityan Sergey Khachatur Open environment for real-time multilingual communication
US20030065504A1 (en) * 2001-10-02 2003-04-03 Jessica Kraemer Instant verbal translator
US20030120478A1 (en) * 2001-12-21 2003-06-26 Robert Palmquist Network-based translation system
US20030149557A1 (en) * 2002-02-07 2003-08-07 Cox Richard Vandervoort System and method of ubiquitous language translation for wireless devices
US20030163300A1 (en) * 2002-02-22 2003-08-28 Mitel Knowledge Corporation System and method for message language translation
US7409333B2 (en) * 2002-11-06 2008-08-05 Translution Holdings Plc Translation of electronically transmitted messages
US20040167770A1 (en) * 2003-02-24 2004-08-26 Microsoft Corporation Methods and systems for language translation
US7310605B2 (en) * 2003-11-25 2007-12-18 International Business Machines Corporation Method and apparatus to transliterate text using a portable device

Cited By (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11308288B2 (en) 2003-02-21 2022-04-19 Motionpoint Corporation Automation tool for web site content language translation
US10621287B2 (en) 2003-02-21 2020-04-14 Motionpoint Corporation Dynamic language translation of web site content
US10409918B2 (en) 2003-02-21 2019-09-10 Motionpoint Corporation Automation tool for web site content language translation
US9910853B2 (en) 2003-02-21 2018-03-06 Motionpoint Corporation Dynamic language translation of web site content
US9652455B2 (en) 2003-02-21 2017-05-16 Motionpoint Corporation Dynamic language translation of web site content
US8949223B2 (en) 2003-02-21 2015-02-03 Motionpoint Corporation Dynamic language translation of web site content
US9626360B2 (en) 2003-02-21 2017-04-18 Motionpoint Corporation Analyzing web site for translation
US9367540B2 (en) 2003-02-21 2016-06-14 Motionpoint Corporation Dynamic language translation of web site content
US20110046939A1 (en) * 2009-08-21 2011-02-24 Avaya Inc. Alerting of language preference and translating across language boundaries
US20120166183A1 (en) * 2009-09-04 2012-06-28 David Suendermann System and method for the localization of statistical classifiers based on machine translation
US9558183B2 (en) * 2009-09-04 2017-01-31 Synchronoss Technologies, Inc. System and method for the localization of statistical classifiers based on machine translation
US9465782B2 (en) 2010-07-13 2016-10-11 Motionpoint Corporation Dynamic language translation of web site content
US11481463B2 (en) 2010-07-13 2022-10-25 Motionpoint Corporation Dynamic language translation of web site content
US9213685B2 (en) * 2010-07-13 2015-12-15 Motionpoint Corporation Dynamic language translation of web site content
US9311287B2 (en) 2010-07-13 2016-04-12 Motionpoint Corporation Dynamic language translation of web site content
US11157581B2 (en) 2010-07-13 2021-10-26 Motionpoint Corporation Dynamic language translation of web site content
US10936690B2 (en) 2010-07-13 2021-03-02 Motionpoint Corporation Dynamic language translation of web site content
US9411793B2 (en) 2010-07-13 2016-08-09 Motionpoint Corporation Dynamic language translation of web site content
US10922373B2 (en) 2010-07-13 2021-02-16 Motionpoint Corporation Dynamic language translation of web site content
US20120016656A1 (en) * 2010-07-13 2012-01-19 Enrique Travieso Dynamic language translation of web site content
US11409828B2 (en) 2010-07-13 2022-08-09 Motionpoint Corporation Dynamic language translation of web site content
US10387517B2 (en) 2010-07-13 2019-08-20 Motionpoint Corporation Dynamic language translation of web site content
US10296651B2 (en) 2010-07-13 2019-05-21 Motionpoint Corporation Dynamic language translation of web site content
US11030267B2 (en) 2010-07-13 2021-06-08 Motionpoint Corporation Dynamic language translation of web site content
US9128918B2 (en) 2010-07-13 2015-09-08 Motionpoint Corporation Dynamic language translation of web site content
US9858347B2 (en) 2010-07-13 2018-01-02 Motionpoint Corporation Dynamic language translation of web site content
US9864809B2 (en) 2010-07-13 2018-01-09 Motionpoint Corporation Dynamic language translation of web site content
US10210271B2 (en) 2010-07-13 2019-02-19 Motionpoint Corporation Dynamic language translation of web site content
US10977329B2 (en) 2010-07-13 2021-04-13 Motionpoint Corporation Dynamic language translation of web site content
US10073917B2 (en) 2010-07-13 2018-09-11 Motionpoint Corporation Dynamic language translation of web site content
US10089400B2 (en) 2010-07-13 2018-10-02 Motionpoint Corporation Dynamic language translation of web site content
US10146884B2 (en) 2010-07-13 2018-12-04 Motionpoint Corporation Dynamic language translation of web site content
US8775157B2 (en) * 2011-04-21 2014-07-08 Blackberry Limited Methods and systems for sharing language capabilities
US20120271619A1 (en) * 2011-04-21 2012-10-25 Sherif Aly Abdel-Kader Methods and systems for sharing language capabilities
WO2012151479A3 (en) * 2011-05-05 2013-03-21 Ortsbo, Inc. Cross-language communication between proximate mobile devices
KR101732515B1 (en) 2011-05-05 2017-05-24 야픈 코포레이션 Cross-language communication between proximate mobile devices
US9053097B2 (en) 2011-05-05 2015-06-09 Ortsbo, Inc. Cross-language communication between proximate mobile devices
EP2705444A4 (en) * 2011-05-05 2015-08-26 Ortsbo Inc Cross-language communication between proximate mobile devices
US9858271B2 (en) * 2012-11-30 2018-01-02 Ricoh Company, Ltd. System and method for translating content between devices
US20140157113A1 (en) * 2012-11-30 2014-06-05 Ricoh Co., Ltd. System and Method for Translating Content between Devices
US20160117315A1 (en) * 2013-07-18 2016-04-28 Tencent Technology (Shenzhen) Company Limited Method And Apparatus For Processing Message
WO2015049697A1 (en) 2013-10-04 2015-04-09 Deshmukh Rakesh A gesture based system for translation and transliteration of input text and a method thereof
US9684498B2 (en) * 2014-03-19 2017-06-20 Mediatek Singapore Pte. Ltd. File processing method and electronic apparatus
US20150268943A1 (en) * 2014-03-19 2015-09-24 Mediatek Singapore Pte. Ltd. File processing method and electronic apparatus
WO2018125003A1 (en) * 2016-12-30 2018-07-05 Turkcell Teknoloji̇ Araştirma Ve Geli̇şti̇rme Anoni̇m Şi̇rketi̇ A translation system
US10891939B2 (en) * 2018-11-26 2021-01-12 International Business Machines Corporation Sharing confidential information with privacy using a mobile phone
US20200168203A1 (en) * 2018-11-26 2020-05-28 International Business Machines Corporation Sharing confidential information with privacy using a mobile phone
CN113726968A (en) * 2021-08-18 2021-11-30 中国联合网络通信集团有限公司 Terminal communication method, device, server and storage medium

Also Published As

Publication number Publication date
WO2007127141A3 (en) 2008-04-03
WO2007127141A2 (en) 2007-11-08
JP2009535906A (en) 2009-10-01
CN101427244A (en) 2009-05-06
KR20090018604A (en) 2009-02-20
EP2011034A2 (en) 2009-01-07
KR101057852B1 (en) 2011-08-19
JP5089683B2 (en) 2012-12-05

Similar Documents

Publication Publication Date Title
US20070255554A1 (en) Language translation service for text message communications
US10614173B2 (en) Auto-translation for multi user audio and video
US6816468B1 (en) Captioning for tele-conferences
US8856236B2 (en) Messaging response system
US8442563B2 (en) Automated text-based messaging interaction using natural language understanding technologies
US7027986B2 (en) Method and device for providing speech-to-text encoding and telephony service
EP1495413B1 (en) Messaging response system
US20080084974A1 (en) Method and system for interactively synthesizing call center responses using multi-language text-to-speech synthesizers
US7450698B2 (en) System and method of utilizing a hybrid semantic model for speech recognition
US7751551B2 (en) System and method for speech-enabled call routing
US20160284352A1 (en) Method and device for providing speech-to-text encoding and telephony service
US20080300852A1 (en) Multi-Lingual Conference Call
US20140269678A1 (en) Method for providing an application service, including a managed translation service
CN111478971A (en) Multilingual translation telephone system and translation method
JP5243645B2 (en) Service server device, service providing method, service providing program
CA2973566C (en) System and method for language specific routing
KR20160097406A (en) Telephone service system and method supporting interpreting and translation
JP5461651B2 (en) Service server device, service providing method, service providing program
Hegde et al. MULTILINGUAL VOICE SUPPORT FOR CONTACT CENTER AGENTS

Legal Events

Date Code Title Description
AS Assignment

Owner name: LUCENT TECHNOLOGIES INC., NEW JERSEY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CAI, YIQANG;REEL/FRAME:017816/0024

Effective date: 20060425

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION