CN108459838B - Information processing method and electronic equipment
- Publication number: CN108459838B
- Application number: CN201810276894.5A
- Authority: CN (China)
- Prior art keywords: voice, application, information, response result, task
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F3/167 — Audio in a user interface, e.g. using voice commands for navigating, audio feedback
- G06F3/0487 — Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
Abstract
An embodiment of the present disclosure provides an information processing method and an electronic device. The method includes: obtaining a voice input; obtaining a processing result for the voice input, where the processing result represents an operation task; and outputting query information if a response result of the operation task is obtained.
Description
Technical Field
The present disclosure relates to the field of information technologies, and in particular, to an information processing method and an electronic device.
Background
Electronic devices such as mobile phones, tablet computers, and smart home appliances all have a voice interaction function. The voice interaction function means that the electronic device collects voice input by a user, recognizes the voice to obtain the corresponding text, processes the text to obtain its semantics, and provides the voice service the user needs based on those semantics. For example, when a user performs a search through voice interaction, the electronic device identifies what the user wants to search for by recognizing the user's voice, and then provides the requested content in the form of voice or text, thereby serving the user through voice interaction.
In the prior art, once a task is completed, the voice interaction function of the electronic device is ended automatically and immediately in order to save power and reduce the device's resource usage. If voice interaction is needed again, the user has to manually trigger the electronic device to restart it. This makes the next voice interaction cumbersome to operate and, in addition, introduces a large delay before voice interaction for that task can start.
Disclosure of Invention
In view of this, it is desirable to provide an information processing method and an electronic device that at least partially solve the problems of cumbersome operation and large delay in voice interaction.
The technical scheme of the disclosure is realized as follows: in a first aspect, an embodiment of the present disclosure provides an information processing method, including:
obtaining a voice input;
obtaining a processing result aiming at the voice input, wherein the processing result is used for representing an operation task;
and if the response result of the operation task is obtained, outputting inquiry information.
Optionally, the method further comprises:
controlling the voice engine to be in a sound receiving state.
Optionally, the method further comprises:
and if no new voice input is received within the first predetermined time, controlling the voice engine to enter a dormant state.
Optionally, the obtaining a response result of the operation task includes:
if the control task represented by the processing result is a single task, displaying content information corresponding to the response result in a voice interaction interface corresponding to a voice engine based on the response result;
if the response result of the operation task is obtained, outputting inquiry information, including:
and if the content information corresponding to the response result is displayed in the voice interaction interface, outputting inquiry information.
Optionally, the obtaining a response result of the operation task includes:
if the control task represented by the processing result comprises a plurality of interaction steps, monitoring the execution condition of the last interaction step;
if the response result of the operation task is obtained, outputting inquiry information, including:
and outputting inquiry information after the last interactive step is executed.
Optionally, the outputting query information if the response result of the operation task is obtained includes:
if the response result of the operation task is obtained, outputting the inquiry information in a voice interaction interface of the first application;
or,
and if the response result of the operation task is obtained, outputting the inquiry information in an application interface of a second application except the first application.
In a second aspect, an embodiment of the present disclosure provides an electronic device, including:
a first obtaining module for obtaining a voice input;
a second obtaining module for obtaining a processing result for the voice input; the processing result is used for representing an operation task;
and the output module is used for outputting inquiry information if the response result of the operation task is obtained.
Optionally, the electronic device further comprises:
and the control module is used for controlling the voice engine to be in a sound receiving state.
Optionally, the control module is further configured to control the speech engine to enter a sleep state if no new voice input is received within a first predetermined time.
Optionally, the first obtaining module is configured to, if the control task represented by the processing result is a single task, display content information corresponding to the response result in a voice interaction interface corresponding to a voice engine based on the response result; the output module is used for outputting inquiry information if the content information corresponding to the response result is displayed in the voice interaction interface;
or,
the first obtaining module is configured to monitor an execution condition of a last interaction step if the control task represented by the processing result includes multiple interaction steps; and the output module is used for outputting the inquiry information after the last interactive step is executed.
According to the information processing method and the electronic device provided by the embodiments of the present disclosure, after a response result is obtained for an operation task triggered by a voice input, the device does not immediately perform a terminating operation (for example, stopping the sound-reception state of the voice engine); instead, it outputs query information. The user can then conveniently perform a corresponding operation based on the query information, for example input another voice command or a reply to the query. This prevents the electronic device from, for example, stopping the sound-reception state of the voice engine, keeps the voice engine receiving sound, facilitates the user's subsequent voice interaction, reduces the delay of that interaction, and improves the speed of responding to operation tasks corresponding to the user's voice input.
Drawings
Fig. 1 is a schematic flowchart of an information processing method according to an embodiment of the present disclosure;
FIG. 2A is an interaction diagram of a single interaction task provided by an embodiment of the present disclosure;
FIG. 2B is an interaction diagram of a multi-interaction task according to an embodiment of the present disclosure;
fig. 3 is a schematic flow chart of another information processing method provided in the embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of another electronic device provided in the embodiment of the present disclosure.
Detailed Description
The technical solution of the present disclosure is further described in detail below with reference to the drawings and specific embodiments of the specification.
As shown in fig. 1, the present embodiment provides an information processing method including:
step S110: obtaining a voice input;
step S120: obtaining a processing result aiming at the voice input, wherein the processing result is used for representing an operation task;
step S130: and if the response result of the operation task is obtained, outputting inquiry information.
The information processing method provided in this embodiment can be applied to an electronic device, and the electronic device can be a terminal device such as a mobile phone, a tablet computer, or a wearable device.
In step S110, the terminal device may collect voice input by the user, or voice input by another device, through various voice collecting devices (e.g., a microphone). In step S120 the collected voice is processed, for example by voice analysis, to obtain a processing result adapted to the voice input. The terminal device may process the voice input locally to obtain the processing result, or it may send the voice input to a cloud device, which performs the information processing and returns the processing result to the terminal device. For example, the terminal device collects a voice input, recognizes it in step S120 to obtain the corresponding voice content, and sends that content to the cloud device through the network; the cloud device performs the information processing, obtains the processing result, and returns it to the terminal device.
For example, a voice input is collected in step S110 that asks for today's schedule information. If the user's schedule information is stored on the terminal device, such as a mobile phone, local processing is performed directly in step S120 to obtain the processing result; the local processing includes voice recognition of the voice input, semantic analysis, and extracting the schedule information from the schedule application. The processing result may represent information such as the execution status of the operation task; the execution status information may include one or more of: information indicating whether the operation task is completed, information indicating whether the operation task was executed successfully, and information indicating the current progress of the operation task. In some embodiments, the processing result may also indicate the task type of the operation task. Distinguished by the number of interaction steps, the task type may be: a single-interaction task comprising a single interactive operation, or a multi-interaction task comprising multiple interactive operations. In short, the information corresponding to the processing result may take various forms and is not limited to any one of the above.
For another example, in step S110 the terminal device acquires a voice input asking for today's weather in location A, and a terminal device such as a mobile phone does not store this weather information. In step S120 the voice input is recognized and converted into text, semantic analysis is performed (for example with a bag-of-words method), and it is determined that the voice input asks for today's weather in location A. A query is then constructed from the text corresponding to the voice input, or a search term is constructed according to the semantic analysis, and the answer is searched for on the network, where a search server can provide it. In this case at least part of the task in step S120 is performed by a cloud device on the network side.
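To make the shape of such a processing result concrete, the following is a purely illustrative sketch in Python; the field names (`operation_task`, `task_type`, `execution_status`, `progress`) and the enum values are assumptions introduced for illustration, not structures defined by the disclosure.

```python
from dataclasses import dataclass
from enum import Enum, auto

class TaskType(Enum):
    SINGLE_INTERACTION = auto()   # one interactive operation completes the task
    MULTI_INTERACTION = auto()    # several interaction steps are required

class ExecutionStatus(Enum):
    IN_PROGRESS = auto()
    COMPLETED = auto()
    FAILED = auto()

@dataclass
class ProcessingResult:
    """Hypothetical layout of the result of recognizing and analyzing one voice input."""
    operation_task: str                 # e.g. "query_schedule", "query_weather"
    task_type: TaskType                 # single-interaction vs multi-interaction task
    execution_status: ExecutionStatus   # whether the task has finished, succeeded, etc.
    progress: float = 0.0               # current progress of the operation task, 0.0-1.0
```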
In step S130, query information, which may be a voice query, is output, for example, the terminal device outputs a query voice. In other embodiments, the query information output in step S130 may also be text query information, for example, text of the query information is displayed in a specific display interface.
In the present embodiment, the query information is output after the response result of the operation task is obtained. The response result may be a successful response result (the operation task succeeded) or a failed response result (the operation task failed). Whether the response result is successful or failed, once the response result of the operation task is obtained and the content information corresponding to it is output, the response to the operation task is complete. In this embodiment, in order to determine whether the voice application (voice acquisition, voice recognition, semantic analysis, and so on) should be kept in the on state after the response to the operation task is complete, rather than being closed directly, the query information is output. The query information may be first query information asking whether the user needs to keep the voice application in the activated state, second query information asking whether the user needs to close the voice application, third query information asking whether the user will continue with voice input, fourth query information asking whether the user has other operation tasks, or fifth query information asking whether the user needs the device to keep waiting for voice input. In summary, the query information in this embodiment may include various information asking whether the electronic device needs to keep the voice application in the open state.
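The overall flow of steps S110 to S130 can be sketched as follows. This is a minimal illustration only; the helper functions `collect_voice`, `process_voice`, `wait_for_response`, and `output_query` are hypothetical placeholders, not APIs defined by the disclosure.

```python
def handle_voice_interaction():
    # Step S110: obtain a voice input from the microphone or another device.
    voice_input = collect_voice()

    # Step S120: process the voice (locally or via a cloud device) to obtain a
    # processing result that represents an operation task.
    processing_result = process_voice(voice_input)

    # Execute the operation task and wait until a response result is obtained
    # (either a successful response result or a failed one).
    response_result = wait_for_response(processing_result.operation_task)

    # Step S130: instead of closing the voice application immediately, output
    # query information asking whether the voice application should stay open.
    output_query("Do you need anything else?", response_result)
```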
In some embodiments, the step S130 includes:
displaying question text of the inquiry information;
collecting voice replies aiming at the question texts;
and finally determining whether to close the voice application, quit the voice application or maintain the open state of the voice application according to the voice reply.
In other embodiments, the step S130 may include:
displaying question text of the inquiry information;
displaying an operation control responding to the question text, such as a 'cancel' control and/or a 'confirm' control;
detecting a manual operation acting on the operation control,
and determining whether to close the voice application, quit the voice application, or maintain the open state of the voice application according to the question text and the meaning of the control on which the manual operation acted.
The maintaining of the on state of the voice application here means that the sound reception state of the voice engine needs to be maintained, and if the voice application is exited or closed, the voice engine can be controlled to enter the sleep state or the off state.
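A possible way to turn the query reply (a voice reply or a tap on a "confirm"/"cancel" control) into an engine-state decision is sketched below; the names `engine`, `keep_listening`, and `sleep` are assumptions introduced only for illustration.

```python
def apply_query_reply(engine, reply_means_continue: bool):
    """Switch the speech engine state according to the user's reply to the query.

    `reply_means_continue` is True when the voice reply or the tapped control
    indicates the user still wants to use the voice application.
    """
    if reply_means_continue:
        engine.keep_listening()   # maintain the sound-reception state
    else:
        engine.sleep()            # exit/close the voice application path
```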
In short, in this embodiment, by outputting the query information, the user can instruct the electronic device to perform operations related to the voice input according to his or her own needs, for example operations related to switching the state of the voice application. This avoids the problems that arise when the electronic device automatically closes the voice application after completing an operation task triggered by a voice input: the voice application has to be restarted the next time the user wants it, which is a cumbersome operation, and the response delay of the operation task triggered by the next single voice input is large. The method therefore offers simple operation, high user satisfaction, and a fast response to operation tasks triggered by a single voice input.
In some embodiments, the electronic device may, through voice recognition or the like, generate and display in the interactive interface a reply text corresponding to a voice reply input by the user. The interactive interface shown in fig. 2A shows the text of a voice input that triggers an operation task: "What new movies are there today?". After the electronic device performs the corresponding processing, it outputs the content information corresponding to the response result of the operation task: "movie A and movie B". The electronic device also detects the user's reply to the query information (the reply may be a voice reply or a manual operation), here a reply indicating that nothing further is needed.
Optionally, as shown in fig. 3, the method further includes:
step S140: controlling the voice engine to be in a sound receiving state.
In some embodiments, the step S140 may include: controlling the voice engine to be in the sound-reception state according to the query reply obtained in response to the query information; for example, if the query reply indicates that the voice application needs to continue to be used, or that the voice application should be kept in the open state, the voice engine is controlled to be in the sound-reception state. A speech engine in the sound-reception state may capture voice input by the user; in some embodiments, a speech engine in the sound-reception state can perform one or more of capturing voice, recognizing voice, and performing voice analysis, in which case the voice application needs to be maintained in an active state.
In still other embodiments, the step S140 may include:
and when the inquiry information is output, controlling the voice engine to be in a sound receiving state, and controlling the voice engine to be in the sound receiving state within a certain time after the inquiry information is output.
In this embodiment, the speech engine may be the kernel engine of a voice application, configured to perform the core functions of the voice application such as voice acquisition, voice recognition, and voice analysis. The voice application may be a separate dedicated voice application, e.g. Siri, or an application with embedded voice functionality, such as an application with voice input and output capabilities like WeChat.
Optionally, as shown in fig. 3, the method further includes:
step S150: and if no new voice input is received within the first predetermined time, controlling the voice engine to enter a dormant state.
In some embodiments, the first predetermined time may be a predetermined period of time in which the output time of obtaining the response result is a starting time; alternatively, the first predetermined time may be a predetermined period of time in which the output time of the inquiry information is the starting time.
Whether a new voice input arrives within the first predetermined time is detected; the new voice is distinct from the voice input that triggered the previous operation task. If a new voice input is detected, the voice engine is kept in the sound-reception state; otherwise the voice engine enters the sleep state. If the sleep state is entered, using the speech engine again may require re-waking it. If no new voice input is received within the first predetermined time, the probability that the user will use the voice engine in the short term is very small, so the voice engine may enter the sleep state to save power consumption of the electronic device.
In this embodiment, the duration corresponding to the first predetermined time is a predetermined duration; once the starting time of the first predetermined time is determined and the predetermined duration is known, the ending time is determined. The known duration may be determined from the rate at which the user interacts with the electronic device: for example, the average reply duration or the median reply duration is determined from how quickly the user replied to the electronic device's query information at historical times, and the predetermined duration is then derived from that value. For instance, the predetermined duration may be a times the average reply duration or the median reply duration, where a is a coefficient not less than 1, for example 1.2, 1.3, or 1.5. In other embodiments, the predetermined duration may be a specific duration preset by the voice application.
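This derivation of the predetermined duration can be illustrated with a short calculation; the coefficient `a` and the use of the average or median come from the text above, while the function name, the fallback default, and the data source are hypothetical.

```python
import statistics

def first_predetermined_time(historical_reply_durations, a=1.2, use_median=False):
    """Derive the first predetermined time from how fast the user replied in the past.

    `a` is a coefficient not less than 1 (e.g. 1.2, 1.3 or 1.5); the base value is
    either the average reply duration or the median of the reply durations.
    """
    if not historical_reply_durations:
        return 5.0  # fall back to a preset duration in seconds (assumed default)
    base = (statistics.median(historical_reply_durations) if use_median
            else statistics.mean(historical_reply_durations))
    return a * base

# Example: past replies took 3, 4 and 5 seconds -> 1.2 * 4.0 = 4.8 seconds.
timeout = first_predetermined_time([3.0, 4.0, 5.0], a=1.2)
```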
To facilitate prompting the user to reply to the query message, in some embodiments, the method further comprises:
displaying a countdown of the reply to the query message;
and if the countdown finishes, determining that the user does not need to continue using the voice engine, and controlling the voice engine to switch from the sound-reception state to the dormant state.
If the countdown is displayed, the user can immediately give a reply to the inquiry information or directly ignore the inquiry information according to the use requirement of the user when seeing the countdown. In this embodiment, the receiving no new voice input within the first predetermined time may include:
the reply voice of the inquiry information is not received;
and/or,
no other voice input indicative of a new operational task is received.
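Putting the countdown and the timeout handling together, a minimal sketch is given below, assuming hypothetical helpers `show_countdown` and `poll_voice_input` and an `engine` object with a `sleep()` method; it is an illustration of the behavior described above, not a prescribed implementation.

```python
import time

def wait_for_reply_or_sleep(engine, timeout_s, poll_interval_s=0.5):
    """Display a countdown for the query reply; enter the sleep state on timeout.

    A "new voice input" here is either a reply to the query information or a
    voice that triggers a new operation task.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        show_countdown(remaining=deadline - time.monotonic())
        voice = poll_voice_input()          # returns None if nothing was captured
        if voice is not None:
            return voice                    # keep the engine in the reception state
        time.sleep(poll_interval_s)
    engine.sleep()                          # countdown finished: enter dormant state
    return None
```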
In some embodiments, the obtaining a response result of the operation task includes:
and if the control task represented by the processing result is a single task, displaying content information corresponding to the response result in a voice interaction interface corresponding to a voice engine based on the response result.
A given voice input may trigger a single task or multiple tasks. Generally, if the operation task is a single task, the electronic device only needs to complete one operation (or a few related operations) to obtain a response result, and the corresponding content information then needs to be presented in the voice interaction interface based on that response result. The voice interaction interface may be the interactive interface of a voice application, for example the interface of the voice application corresponding to the voice engine. Correspondingly, the single task may be a single-interaction task. A multi-task, by contrast, requires multiple interactive operations and may be referred to as a multi-interaction task. For a single-interaction task, the user may only need to input one voice command, and the electronic device only needs to provide the content information of one response result; the electronic device does not need the user to provide any additional auxiliary speech. For a multi-interaction task, the electronic device may need to prompt the user during the voice interaction to provide more voice inputs, and thus more auxiliary information, to complete the response to the operation task, so multiple voice interactions and/or text interactions with the user are required.
The step S130 may include:
and if the content information corresponding to the response result is displayed in the voice interaction interface, outputting inquiry information.
In this embodiment, if the voice interaction interface displays the content information corresponding to the response result, the query information is output, for example, after the content information of the response result is displayed in the voice interaction interface, the query text is output.
For example, the search content corresponding to a search result is displayed or projected in the voice interaction interface, and the query information is displayed in the same interaction interface as the content information corresponding to the response result. The electronic device can therefore display the query information directly, without switching the voice interaction interface, which is seamless for the user and further improves user satisfaction.
In this embodiment, the displaying the content information may include: and displaying content information corresponding to the response result through various display screens, and/or projecting the content information corresponding to the response result to a projection area outside the eyes of the user, and/or projecting the content information corresponding to the response result into the eyes of the user.
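For the single-task branch, the order "show the content information of the response result first, then output the query in the same interface" can be sketched as follows; the interface object, its methods, and the `execute_task` helper are assumptions used only for illustration.

```python
def handle_single_task(processing_result, voice_ui):
    """Single-interaction task: one response result, then the query, same interface."""
    response_result = execute_task(processing_result.operation_task)  # hypothetical helper

    # Display the content information of the response result in the voice
    # interaction interface that corresponds to the speech engine.
    voice_ui.show_content(response_result.content)

    # Only after the content is displayed, output the query information in the
    # same interaction interface -- no interface switch is needed.
    voice_ui.show_query("Do you need anything else?")
```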
Optionally, the obtaining a response result of the operation task includes:
if the control task represented by the processing result comprises a plurality of interaction steps, monitoring the execution condition of the last interaction step;
the step S130 may include:
and outputting inquiry information after the last interactive step is executed.
In some embodiments, the operation task may be a multi-interaction task with multiple interaction steps. Generally, if the electronic device executing the present information processing method belongs to an intelligent voice question-and-answer system and the user poses an open-ended question, the electronic device needs to interact with the user by voice several times to obtain more information related to the question. In this case the execution of the last interaction step needs to be monitored: if the last interaction step has been completed, the response result of the operation task can be considered obtained; otherwise the operation task is not yet complete and its interaction steps still need to be performed. The query information is output after the last interaction step has been completed.
In still other embodiments, the user instructs the electronic device to perform a specific task that is an open-ended task; the electronic device may likewise require multiple interactions with the user to obtain the complete information needed to perform the open-ended task, in which case the operation task is a multi-interaction task.
FIG. 2A shows an interaction diagram of a single interaction task; the voice input shown in fig. 2A triggers a single interactive task, so only one interactive operation is needed to make the electronic device complete the operation task.
Fig. 2B illustrates an interaction diagram of a multi-interaction task, the voice input illustrated in fig. 2B triggers the multi-interaction task, and the electronic device needs to further acquire various information for performing the operation task through subsequent inquiry or prompt after receiving the voice input, for example, the mail content illustrated in fig. 2B, and the like.
In this embodiment, the method may be as shown in fig. 2A, and the method may include at least one of:
if the query reply indicates that the voice application or the voice engine is no longer used, controlling the voice engine to enter a dormant state;
if the query response indicates that the voice application or the voice engine is used, the voice engine is controlled to remain in the sound receiving state.
Optionally, the completion of the last interaction step may include: displaying the content information corresponding to the response result of the operation task.
For example, in step S110 the collected voice input is: "Please help me make a travel plan for April." After receiving the voice input, the electronic device recognizes and analyzes it and obtains a processing result indicating that the operation task is a multi-interaction task. The voice input only tells the electronic device the planned time of the trip; it does not specify the duration of the trip, the place the user wants to visit, the travel budget, and so on. The electronic device therefore shows further interactive text or plays interactive voice, for example the interactive question "I recommend 7 days, 15 days or 3 days; which do you prefer?", or "Would you like the seaside or a city?", or "What is your travel budget range?". In summary, if the electronic device is to complete the travel-planning task, it may need to obtain sufficient information through multiple interactions with the user. Such an operation task is a typical multi-interaction operation task.
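The multi-interaction branch can be pictured as a small driver that runs the interaction steps in order, treats the completion of the last step as obtaining the response result, and only then outputs the query. The step list and the helper names (`collect_voice`, `build_plan`, the `voice_ui` methods) are illustrative assumptions.

```python
def handle_multi_interaction_task(interaction_steps, voice_ui):
    """Run each interaction step (e.g. ask trip length, destination, budget)."""
    answers = {}
    for i, step in enumerate(interaction_steps):
        voice_ui.ask(step.prompt)                 # e.g. "Seaside or city?"
        answers[step.name] = collect_voice()      # gather the auxiliary information

        is_last_step = (i == len(interaction_steps) - 1)
        if is_last_step:
            # The last interaction step has finished executing: this counts as
            # obtaining the response result of the operation task.
            response = build_plan(answers)        # hypothetical: e.g. the travel plan
            voice_ui.show_content(response)
            voice_ui.show_query("Do you need anything else?")
    return answers
```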
Optionally, the step S130 may include:
and if the response result of the operation task is obtained, outputting the inquiry information in the voice interaction interface of the first application.
In this embodiment, the interactive page outputting the query information is directly the voice interactive interface of the first application. The first application may be an application that obtains the speech input.
In other embodiments, the step S130 may include:
and if the response result of the operation task is obtained, outputting the inquiry information in an application interface of a second application except the first application.
In this embodiment, the second application is different from the first application. The first application may no longer need to be used, so the speech engine in the first application may be turned off; the query information, however, may be output in an application interface of a second application other than the first application, to facilitate one or more subsequent voice operations of the electronic device such as voice acquisition, recognition, and analysis. For example, the second application may be an application dedicated to controlling whether the speech engine of the electronic device is in the dormant state or the sound-reception state.
If the query information is output in the application interface of the second application, the voice interaction interface of the first application can be closed, or the voice engine of the first application can be put into the dormant state, without affecting the operation of the first application; displaying the query information in an application other than the first application allows the electronic device to control the working state of the speech engine through the second application.
For example, the first application may be a voice-embedded application, i.e., an application with embedded voice interaction functionality such as WeChat or a search application with voice search, and the second application may be a dedicated voice interaction application such as Siri. With this form of step S130, the first application may exit the voice interaction interface according to its own execution logic and enter another interaction interface (for example, WeChat entering the friend-circle page), while the query information is displayed in the dedicated voice interaction application such as Siri; this makes it convenient for the user to switch the microphone and/or speaker between the working and non-working states.
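Choosing between the two output targets described above can be expressed as a small dispatch; `first_app`, `second_app`, and their attributes are hypothetical names used only to illustrate the two branches.

```python
def output_query_info(first_app, second_app, first_app_still_needed: bool):
    """Output the query either in the first application's voice interaction
    interface or in a separate second application's interface."""
    if first_app_still_needed:
        # Keep the first application's voice interaction interface and show
        # the query there.
        first_app.voice_ui.show_query("Do you need anything else?")
    else:
        # The first application (e.g. a voice-embedded app) can leave its voice
        # interface or let its engine sleep; the query is shown by a dedicated
        # voice application that controls the engine's working state.
        first_app.voice_engine.sleep()
        second_app.ui.show_query("Do you need anything else?")
```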
As shown in fig. 4, the present embodiment provides an electronic device, including:
a first obtaining module 110, configured to obtain a voice input;
a second obtaining module 120, configured to obtain a processing result for the voice input; the processing result is used for representing an operation task;
and the output module 130 is configured to output inquiry information if the response result of the operation task is obtained.
In this embodiment, the first obtaining module, the second obtaining module, and the output module 130 may each correspond to a program module; when these program modules are executed by a processor or a processing circuit, the obtaining of the voice input, the obtaining of the processing result, and the outputting of the query information can be achieved. This avoids the low user satisfaction caused by automatically closing voice applications such as voice interaction immediately after an operation task is completed, and improves user experience and device intelligence.
Optionally, the electronic device further comprises:
and the control module is used for controlling the voice engine to be in a sound receiving state.
The control module may be specifically configured to control the voice engine to be in a sound reception state according to the reply instruction of the inquiry information, or control the voice engine to be maintained in the sound reception state after a certain time of outputting the inquiry information.
Optionally, the control module is further configured to control the speech engine to enter a sleep state if no new language input is received within a first predetermined time.
The new voice input may be different from the voice input, obtained by the first obtaining module, that triggered the electronic device to respond to the operation task; the new voice input may be a reply to the query information or a voice that triggers a new operation task.
Alternatively,
in some embodiments, the first obtaining module 110 is configured to, if the control task represented by the processing result is a single task, display content information corresponding to the response result in a voice interaction interface corresponding to a voice engine based on the response result; the output module 130 is configured to output query information if the content information corresponding to the response result is displayed in the voice interaction interface;
in other embodiments, the first obtaining module 110 is configured to monitor an execution condition of a last interaction step if the control task represented by the processing result includes a plurality of interaction steps; the output module 130 is configured to output the query information after the last interaction step is completed.
As shown in fig. 5, an embodiment of the present disclosure provides an electronic device, including:
the output device 210 may include various types of display screens and/or projection modules or voice output modules such as speakers;
a memory 220 that may be used to store information, e.g., computer-executable instructions executable by a processor;
the processor 230 is connected to the output device and the memory, and can implement the information processing method provided by one or more of the foregoing technical solutions, for example, the information processing method shown in fig. 1 and/or fig. 3, by executing the computer-executable instructions.
The processor 230 may be connected with the output device 210 and the memory 220 by a bus or the like, and the processor 230 may be various types of processing chips or processing circuits, such as a central processing unit, a microprocessor, a digital signal processor, a programmable array, or the like.
The processor 230 may execute a computer program or software code and thereby implement the information processing method of one or more of the foregoing technical solutions.
Several specific examples are provided below in connection with any of the embodiments described above:
example 1:
the present example provides an information processing method including:
the electronic equipment actively keeps the microphone open by continuously and tentatively inquiring whether the user has the next step requirement or not through inquiring information after a voice conversation is finished, so that the voice interaction process is effectively prolonged, the trouble that the user starts the voice again is avoided, the user experience of the conversational voice interaction is improved, and the conversation efficiency is improved.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the modules is only one logical functional division, and there may be other division ways in actual implementation, such as: multiple modules or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented. In addition, the coupling, direct coupling or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or modules may be electrical, mechanical or other.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, that is, may be located in one place, or may be distributed on a plurality of network modules; some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, all functional modules in the embodiments of the present disclosure may be integrated into one processing module, or each module may be separately regarded as one module, or two or more modules may be integrated into one module; the integrated module can be realized in a hardware form, and can also be realized in a form of hardware and a software functional module.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present disclosure, and all the changes or substitutions should be covered within the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims (10)
1. An information processing method comprising:
obtaining a voice input;
obtaining a processing result aiming at the voice input, wherein the processing result is used for representing an operation task;
if a response result representing the completion of the response to the operation task is obtained, outputting query information; wherein the query information includes: information asking whether the electronic device needs to keep the voice application in the open state;
wherein, if the response result of the operation task is obtained, outputting inquiry information, including:
if the response result of the operation task is obtained and the first application does not need to be used continuously, outputting the inquiry information in an application interface of a second application except the first application; wherein the first application is an application that obtains the speech input.
2. The method of claim 1, wherein the method further comprises:
controlling the voice engine to be in a sound receiving state.
3. The method of claim 2, wherein the method further comprises:
and if no new voice input is received within the first predetermined time, controlling the voice engine to enter a dormant state.
4. The method of claim 1, wherein,
the obtaining of the response result of the operation task comprises:
if the control task represented by the processing result is a single task, displaying content information corresponding to the response result in a voice interaction interface corresponding to a voice engine based on the response result;
if the response result of the operation task is obtained, outputting inquiry information, including:
and if the content information corresponding to the response result is displayed in the voice interaction interface, outputting inquiry information.
5. The method of claim 1, wherein,
the obtaining of the response result of the operation task comprises:
if the control task represented by the processing result comprises a plurality of interaction steps, monitoring the execution condition of the last interaction step;
if the response result of the operation task is obtained, outputting inquiry information, including:
and outputting inquiry information after the last interactive step is executed.
6. The method of claim 1, wherein,
if the response result of the operation task is obtained, query information is output, and the method further comprises the following steps:
and if the response result of the operation task is obtained, outputting the inquiry information in the voice interaction interface of the first application.
7. An electronic device, comprising:
a first obtaining module for obtaining a voice input;
a second obtaining module for obtaining a processing result for the voice input; the processing result is used for representing an operation task;
the output module is used for outputting query information if a response result representing the completion of the response to the operation task is obtained; wherein the query information includes: information asking whether the electronic device needs to keep the voice application in the open state;
the output module is specifically configured to output the query information in an application interface of a second application other than the first application if the response result of the operation task is obtained and the first application does not need to be used continuously; wherein the first application is an application that obtains the speech input.
8. The electronic device of claim 7, wherein the electronic device further comprises:
and the control module is used for controlling the voice engine to be in a sound receiving state.
9. The electronic device of claim 8,
the control module is further configured to control the speech engine to enter a sleep state if no new voice input is received within a first predetermined time.
10. The electronic device of claim 7,
the first obtaining module is configured to, if the control task represented by the processing result is a single task, display content information corresponding to the response result in a voice interaction interface corresponding to a voice engine based on the response result; the output module is used for outputting inquiry information if the content information corresponding to the response result is displayed in the voice interaction interface;
or,
the first obtaining module is configured to monitor an execution condition of a last interaction step if the control task represented by the processing result includes multiple interaction steps; and the output module is used for outputting the inquiry information after the last interactive step is executed.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN201810276894.5A | 2018-03-30 | 2018-03-30 | Information processing method and electronic equipment
Publications (2)
Publication Number | Publication Date
---|---
CN108459838A | 2018-08-28
CN108459838B | 2020-12-18
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN1425158A | 1999-11-23 | 2003-06-18 | Qualcomm Inc. | Method and apparatus for voice controlled foreign language translation device
CN107608586A | 2012-06-05 | 2018-01-19 | Apple Inc. | Voice instructions during navigation
CN104898821A | 2014-03-03 | 2015-09-09 | Lenovo (Beijing) Co., Ltd. | Information processing method and electronic equipment
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |