CN110602334A - Intelligent outbound method and system based on man-machine cooperation - Google Patents
- Publication number
- CN110602334A (application CN201910827971.6A)
- Authority
- CN
- China
- Prior art keywords
- outbound
- voice
- module
- calling
- robot
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/02—Methods for producing synthetic speech; Speech synthesisers
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/63—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M3/00—Automatic or semi-automatic exchanges
- H04M3/42—Systems providing special services or facilities to subscribers
- H04M3/487—Arrangements for providing information services, e.g. recorded voice services or time announcements
- H04M3/493—Interactive information services, e.g. directory enquiries ; Arrangements therefor, e.g. interactive voice response [IVR] systems or voice portals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M3/00—Automatic or semi-automatic exchanges
- H04M3/42—Systems providing special services or facilities to subscribers
- H04M3/487—Arrangements for providing information services, e.g. recorded voice services or time announcements
- H04M3/493—Interactive information services, e.g. directory enquiries ; Arrangements therefor, e.g. interactive voice response [IVR] systems or voice portals
- H04M3/4936—Speech interaction details
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M3/00—Automatic or semi-automatic exchanges
- H04M3/42—Systems providing special services or facilities to subscribers
- H04M3/50—Centralised arrangements for answering calls; Centralised arrangements for recording messages for absent or busy subscribers ; Centralised arrangements for recording messages
- H04M3/51—Centralised call answering arrangements requiring operator intervention, e.g. call or contact centers for telemarketing
- H04M3/5166—Centralised call answering arrangements requiring operator intervention, e.g. call or contact centers for telemarketing in combination with interactive voice response systems or voice portals, e.g. as front-ends
Abstract
The invention provides an intelligent outbound method and system based on man-machine cooperation, wherein the method comprises the following steps: S1, acquiring the script speech of the outbound agent, the robot making outbound calls according to the script speech; S2, recording the robot's outbound records and judging whether manual intervention is required; if so, executing S4, otherwise executing S3; S3, the robot continuing the outbound call; S4, the outbound agent intervening in the outbound call; S5, judging whether to end the outbound call; if so, executing S6, otherwise executing S3; and S6, ending the outbound call. The intelligent outbound method and system based on man-machine cooperation allow manual intervention when the robot is making outbound calls, improving the customer's perception.
Description
Technical Field
The invention relates to the technical field of outbound call systems, and in particular to an intelligent outbound method and system based on man-machine cooperation.
Background
An outbound call is a call actively initiated to a user in order to communicate relevant service information. Because much telephone traffic simply repeats the same content to different users, and as call volume and the variety of service responses grew, Interactive Voice Response (IVR) technology emerged: "calling robots" can replace human agents in handling common customer questions, saving manpower and improving service efficiency.
However, in prior-art interactive voice response, the outbound robot's interaction is inflexible and its script is rigid, so the customer's perception is poor.
Disclosure of Invention
In view of this, the technical problem to be solved by the present invention is to provide an intelligent outbound method and system based on man-machine cooperation that allow manual intervention when the robot is making outbound calls, so as to improve the customer's perception.
The technical scheme of the invention is realized as follows:
an intelligent outbound method based on man-machine cooperation comprises the following steps:
S1, acquiring the script speech of the outbound agent, and the robot making outbound calls according to the script speech;
S2, recording the robot's outbound records and judging whether manual intervention is required; if so, executing S4, otherwise executing S3;
S3, the robot continuing the outbound call;
S4, the outbound agent intervening in the outbound call;
S5, judging whether to end the outbound call; if so, executing S6, otherwise executing S3;
and S6, ending the outbound call.
Preferably, before S2, the method further includes:
and selecting the number of robots cooperated with the human-machine.
Preferably, in S1, the method further includes:
recognizing the customer's emotion;
acquiring the customer's voice data stream, performing audio-dimension analysis on it, performing text transcription and semantic analysis on it, and recognizing the customer's emotion.
Preferably, the acquiring the script speech of the outbound agent comprises:
recording the agent's voice, then editing, denoising and storing it;
and/or
generating the agent's script speech through speech synthesis and storing it.
Preferably, after S4, the method further includes:
the outbound person exits the outbound call.
The invention also provides an intelligent outbound system based on man-machine cooperation, comprising an acquisition module, a recording module, a first judgment module, a robot, an intervention module and a second judgment module;
the acquisition module is used for acquiring the script speech of the outbound agent;
the robot is used for making outbound calls according to the script speech;
the recording module is used for recording the robot's outbound records;
the first judgment module is used for judging whether manual intervention is required; if so, the intervention module is started, otherwise the second judgment module is started;
the intervention module is used for the outbound agent to intervene in the outbound call;
the second judgment module is used for judging whether to end the outbound call; if so, the outbound call is ended, otherwise it is continued.
Preferably, the system further comprises a selection module;
the selection module is used for receiving a selection signal and selecting the number of robots for man-machine cooperation.
Preferably, the system further comprises an emotion recognition module;
the emotion recognition module is used for acquiring the customer's voice data stream, performing audio-dimension analysis on it, performing text transcription and semantic analysis on it, and recognizing the customer's emotion.
Preferably, the acquiring the script speech of the outbound agent comprises:
recording the agent's voice, then editing, denoising and storing it;
and/or
generating the agent's script speech through speech synthesis and storing it.
Preferably, the system further comprises an exit module;
the exit module is used for acquiring an exit signal for the outbound agent to exit the outbound call.
According to the intelligent outbound method and system based on man-machine cooperation, the robot makes outbound calls using the script speech of the outbound agent, and when a certain condition is met, the agent intervenes in the call. Because the script speech used by the robot is collected from the agent, the robot and the human agent can be switched seamlessly, ensuring that the customer has a good perception.
Drawings
Fig. 1 is a flowchart of an intelligent outbound method based on human-computer cooperation according to an embodiment of the present invention;
fig. 2 is a block diagram of an intelligent outbound system based on human-machine cooperation according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, an embodiment of the present invention provides an intelligent outbound method based on human-computer cooperation, including the following steps:
S1, acquiring the script speech of the outbound agent, and the robot making outbound calls according to the script speech;
S2, recording the robot's outbound records and judging whether manual intervention is required; if so, executing S4, otherwise executing S3;
S3, the robot continuing the outbound call;
S4, the outbound agent intervening in the outbound call;
S5, judging whether to end the outbound call; if so, executing S6, otherwise executing S3;
and S6, ending the outbound call.
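The S1-S6 flow above can be sketched as a simple control loop. This is an illustrative Python sketch under stated assumptions, not the patent's implementation; the per-turn flags and function names are invented for the example:

```python
def run_outbound(events):
    """Illustrative S1-S6 loop. `events` is one dict per call turn with
    'intervene' (manual intervention needed?) and 'end' (end the call?)
    flags; returns who controlled each recorded turn."""
    turns = []
    speaker = "robot"                 # S1: robot starts with the agent's script speech
    for ev in events:
        turns.append(speaker)         # S2: record the outbound turn
        if ev["end"]:                 # S5: end the outbound call?
            break                     # S6: end
        # S4: agent intervenes, or S3: robot continues
        speaker = "agent" if ev["intervene"] else "robot"
    return turns
```

Driving the loop with three turns where the second triggers intervention yields `["robot", "robot", "agent"]`, matching the S3/S4 branching described above.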
Therefore, according to the intelligent outbound method based on man-machine cooperation provided by the embodiment of the invention, the robot makes outbound calls using the script speech of the outbound agent, and when a certain condition is met, the agent intervenes in the call. Because the script speech used by the robot is collected from the agent, the robot and the human agent can be switched seamlessly, ensuring that the customer has a good perception.
In a preferred embodiment of the present invention, before S2, the method further includes:
and selecting the number of robots cooperated with the human-machine.
In the application, before the outbound call is started, the number of the robots can be selected by the outbound person according to the requirement.
The requirement may be the difficulty of speaking, the familiarity of the caller, etc.
In a preferred embodiment of the present invention, S1 further includes:
recognizing the customer's emotion;
acquiring the customer's voice data stream, performing audio-dimension analysis on it, performing text transcription and semantic analysis on it, and recognizing the customer's emotion.
In the embodiment of the invention, recognizing the customer's emotion helps judge whether manual intervention is needed; the emotion can be recognized both from the audio and from the semantics of the customer's speech.
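As a sketch, the two analyses described above (audio-dimension analysis plus transcription and text-semantic analysis) might be fused as follows. The keyword lists, weights and thresholds are illustrative assumptions, not from the patent:

```python
def recognize_emotion(audio_score, transcript):
    """Fuse an audio-dimension emotion score (assumed in [-1, 1], e.g. from
    pitch/energy features) with a text-semantic score computed from the
    transcribed speech, and map the result to an emotion label."""
    negative_words = {"angry", "complaint", "cancel", "terrible"}
    positive_words = {"thanks", "great", "interested", "good"}
    words = transcript.lower().split()
    raw = sum(w in positive_words for w in words) - sum(w in negative_words for w in words)
    text_score = max(-1.0, min(1.0, raw / 3))          # clamp to [-1, 1]
    combined = 0.5 * audio_score + 0.5 * text_score    # equal-weight fusion
    if combined < -0.2:
        return "negative"
    if combined > 0.2:
        return "positive"
    return "neutral"
```

A strongly negative label from such a function could then feed the first judgment module's decision on whether to trigger manual intervention.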
In a preferred embodiment of the present invention, acquiring the script speech of the outbound agent comprises:
recording the agent's voice, then editing, denoising and storing it;
and/or
generating the agent's script speech through speech synthesis and storing it.
In the application, the robot makes outbound calls using speech based on the outbound agent's own voice; because the robot uses the agent's voice template, the customer is unaware of the switch when the agent intervenes.
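A minimal sketch of such a script-speech store, assuming hypothetical `denoise` and `synthesize` callables standing in for real audio-editing and speech-synthesis tooling:

```python
class ScriptSpeechStore:
    """Store the outbound agent's script speech: prefer an edited, denoised
    recording of the agent; otherwise synthesize in the agent's voice."""

    def __init__(self, denoise, synthesize):
        self.denoise = denoise          # audio clean-up function (assumed)
        self.synthesize = synthesize    # TTS in the agent's voice (assumed)
        self.clips = {}

    def add_recording(self, phrase, raw_audio):
        # record the agent's voice, edit/denoise it, and store it
        self.clips[phrase] = self.denoise(raw_audio)

    def get(self, phrase):
        # fall back to speech synthesis, and store the result
        if phrase not in self.clips:
            self.clips[phrase] = self.synthesize(phrase)
        return self.clips[phrase]
```

Because both branches produce speech in the agent's own voice, the robot's turns and the agent's live turns sound alike, which is what makes the handover imperceptible to the customer.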
In a preferred embodiment of the present invention, after S4, the method further includes:
the outbound person exits the outbound call.
In the application, after the manual intervention is completed, the calling personnel can mark a new calling stage, quit the intervention picture, the robot can take over the calling again, and the calling is continued from the new calling stage.
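The stage handoff just described can be sketched as follows; the stage names and class shape are assumptions for the example:

```python
class OutboundCall:
    """Track who controls the call and which outbound stage it is in."""

    def __init__(self, stages):
        self.stages = stages          # e.g. ["greeting", "offer", "closing"]
        self.stage = stages[0]
        self.controller = "robot"     # robot calls with the agent's script speech

    def intervene(self):
        self.controller = "agent"     # manual intervention (S4)

    def exit_intervention(self, new_stage):
        if new_stage not in self.stages:
            raise ValueError("unknown stage: " + new_stage)
        self.stage = new_stage        # agent marks the new outbound stage
        self.controller = "robot"     # robot takes over and continues from it
```

The key design point is that control always returns to the robot together with an explicit stage marker, so the robot resumes the script from the right place rather than from the beginning.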
As shown in fig. 2, the present invention further provides an intelligent outbound system based on man-machine cooperation, comprising an acquisition module 1, a recording module 2, a first judgment module 3, a robot 4, an intervention module 5, and a second judgment module 6;
the acquisition module 1 is used for acquiring the script speech of the outbound agent;
the robot 4 is used for making outbound calls according to the script speech;
the recording module 2 is used for recording the robot's outbound records;
the first judgment module 3 is used for judging whether manual intervention is required; if so, the intervention module 5 is started, otherwise the second judgment module 6 is started;
the intervention module 5 is used for the outbound agent to intervene in the outbound call;
the second judgment module 6 is used for judging whether to end the outbound call; if so, the outbound call is ended, otherwise it is continued.
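The module wiring can be sketched by treating each module as a callable; all names, signatures and termination conditions here are illustrative assumptions:

```python
def intelligent_outbound_system(script_speech, robot_call, agent_call,
                                first_judgment, second_judgment):
    """Wire the modules together: the robot calls with the acquired script
    speech, the recording module keeps the records, the first judgment
    module triggers intervention, and the second decides when to end."""
    records = []                                   # recording module
    while True:
        records.append(robot_call(script_speech))  # robot makes the outbound call
        if first_judgment(records):                # manual intervention needed?
            records.append(agent_call())           # intervention module
        if second_judgment(records):               # end the outbound call?
            return records
```

With stub callables for the robot and agent, the record sequence shows the robot-agent-robot alternation the first and second judgment modules produce.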
Therefore, according to the intelligent outbound system based on man-machine cooperation provided by the embodiment of the invention, the robot makes outbound calls using the script speech of the outbound agent, and when a certain condition is met, the agent intervenes in the call. Because the script speech used by the robot is collected from the agent, the robot and the human agent can be switched seamlessly, ensuring that the customer has a good perception.
In a preferred embodiment of the invention, the system further comprises a selection module;
the selection module is used for receiving a selection signal and selecting the number of robots for man-machine cooperation.
In a preferred embodiment of the invention, the system further comprises an emotion recognition module;
the emotion recognition module is used for acquiring the customer's voice data stream, performing audio-dimension analysis on it, performing text transcription and semantic analysis on it, and recognizing the customer's emotion.
In a preferred embodiment of the present invention, acquiring the script speech of the outbound agent comprises:
recording the agent's voice, then editing, denoising and storing it;
and/or
generating the agent's script speech through speech synthesis and storing it.
In a preferred embodiment of the invention, the system further comprises an exit module;
the exit module is used for acquiring an exit signal for the outbound agent to exit the outbound call.
According to the intelligent outbound method and system based on man-machine cooperation, the robot makes outbound calls using the script speech of the outbound agent, and when a certain condition is met, the agent intervenes in the call. Because the script speech used by the robot is collected from the agent, the robot and the human agent can be switched seamlessly, ensuring that the customer has a good perception.
After the agent's manual intervention, the system automatically prompts the stage the call has reached, so that the agent can quickly know the script to be used at that stage and the customer's actual problem. Meanwhile, the customer's emotion is identified through emotion recognition, and the agent can review the robot's call records and the task's standard script at any time. Because the robot makes outbound calls using the agent's voice template, the customer does not notice the switch; after manual intervention is completed, the agent can mark a new outbound stage and exit the intervention screen, and the robot takes over the call again and continues from the new stage.
In summary, the embodiments of the present invention can at least achieve the following effects:
In the embodiment of the invention, the robot makes outbound calls using the script speech of the outbound agent, and when a certain condition is met, the agent intervenes in the call. Because the script speech used by the robot is collected from the agent, the robot and the human agent can be switched seamlessly, ensuring that the customer has a good perception.
Finally, it is to be noted that: the above description is only a preferred embodiment of the present invention, and is only used to illustrate the technical solutions of the present invention, and not to limit the protection scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.
Claims (10)
1. An intelligent outbound method based on man-machine cooperation is characterized by comprising the following steps:
S1, acquiring the script speech of the outbound agent, and the robot making outbound calls according to the script speech;
S2, recording the robot's outbound records and judging whether manual intervention is required; if so, executing S4, otherwise executing S3;
S3, the robot continuing the outbound call;
S4, the outbound agent intervening in the outbound call;
S5, judging whether to end the outbound call; if so, executing S6, otherwise executing S3;
and S6, ending the outbound call.
2. The intelligent outbound method based on man-machine cooperation as claimed in claim 1, further comprising, before said S2:
selecting the number of robots for man-machine cooperation.
3. The intelligent outbound method based on man-machine cooperation as claimed in claim 1, wherein said S1 further comprises:
recognizing the customer's emotion;
acquiring the customer's voice data stream, performing audio-dimension analysis on it, performing text transcription and semantic analysis on it, and recognizing the customer's emotion.
4. The intelligent outbound method based on man-machine cooperation as claimed in claim 1, wherein said acquiring the script speech of the outbound agent comprises:
recording the agent's voice, then editing, denoising and storing it;
and/or
generating the agent's script speech through speech synthesis and storing it.
5. The intelligent outbound method based on man-machine cooperation as claimed in any one of claims 1 to 4, further comprising, after said S4:
the outbound agent exiting the outbound call.
6. An intelligent outbound system based on man-machine cooperation, characterized by comprising an acquisition module, a recording module, a first judgment module, a robot, an intervention module and a second judgment module;
the acquisition module is used for acquiring the script speech of the outbound agent;
the robot is used for making outbound calls according to the script speech;
the recording module is used for recording the robot's outbound records;
the first judgment module is used for judging whether manual intervention is required; if so, the intervention module is started, otherwise the second judgment module is started;
the intervention module is used for the outbound agent to intervene in the outbound call;
the second judgment module is used for judging whether to end the outbound call; if so, the outbound call is ended, otherwise it is continued.
7. The intelligent outbound system based on man-machine cooperation as claimed in claim 6, further comprising a selection module;
the selection module is used for receiving a selection signal and selecting the number of robots for man-machine cooperation.
8. The intelligent outbound system based on man-machine cooperation as claimed in claim 6, further comprising an emotion recognition module;
the emotion recognition module is used for acquiring the customer's voice data stream, performing audio-dimension analysis on it, performing text transcription and semantic analysis on it, and recognizing the customer's emotion.
9. The intelligent outbound system based on man-machine cooperation as claimed in claim 6, wherein said acquiring the script speech of the outbound agent comprises:
recording the agent's voice, then editing, denoising and storing it;
and/or
generating the agent's script speech through speech synthesis and storing it.
10. The intelligent outbound system based on man-machine cooperation as claimed in any one of claims 6 to 9, further comprising an exit module;
the exit module is used for acquiring an exit signal for the outbound agent to exit the outbound call.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910827971.6A CN110602334A (en) | 2019-09-03 | 2019-09-03 | Intelligent outbound method and system based on man-machine cooperation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110602334A true CN110602334A (en) | 2019-12-20 |
Family
ID=68857154
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910827971.6A Pending CN110602334A (en) | 2019-09-03 | 2019-09-03 | Intelligent outbound method and system based on man-machine cooperation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110602334A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111885272A (en) * | 2020-07-24 | 2020-11-03 | 南京易米云通网络科技有限公司 | Intelligent call-out method for supporting telephone by call center seat and intelligent call center system |
CN112003990A (en) * | 2020-02-20 | 2020-11-27 | 邓晓冬 | Method and system for controlling voice robot by mobile equipment |
CN113656551A (en) * | 2021-08-19 | 2021-11-16 | 中国银行股份有限公司 | Intelligent outbound interruption method and device, storage medium and electronic equipment |
CN117350739A (en) * | 2023-10-26 | 2024-01-05 | 广州易风健康科技股份有限公司 | Manual and AI customer service random switching method for carrying out question-answer dialogue based on emotion recognition |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106409283A (en) * | 2016-08-31 | 2017-02-15 | 上海交通大学 | Audio frequency-based man-machine mixed interaction system and method |
CN108363745A (en) * | 2018-01-26 | 2018-08-03 | 阿里巴巴集团控股有限公司 | The method and apparatus that robot customer service turns artificial customer service |
CN108900726A (en) * | 2018-06-28 | 2018-11-27 | 北京首汽智行科技有限公司 | Artificial customer service forwarding method based on speech robot people |
CN109587358A (en) * | 2017-09-29 | 2019-04-05 | 吴杰 | Artificial intelligence customer service turns artificial customer service call method |
WO2019104180A1 (en) * | 2017-11-22 | 2019-05-31 | [24]7.ai, Inc. | Method and apparatus for managing agent interactions with enterprise customers |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20191220 |