CN112102823B - Voice interaction method of intelligent terminal, intelligent terminal and storage medium - Google Patents
- Publication number
- CN112102823B (application number CN202010707456.7A)
- Authority
- CN
- China
- Prior art keywords
- control
- label
- voice interaction
- interactable
- intelligent terminal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L15/06—Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
- G10L15/063—Training
- G10L2015/0638—Interactive procedures
- G10L2015/223—Execution procedure of a spoken command
Abstract
The embodiment of the invention discloses a voice interaction method of an intelligent terminal, the intelligent terminal, and a storage medium. The method comprises the following steps: extracting control information of the interactable controls on the current display interface; displaying a control label corresponding to each interactable control on the current display interface according to the control information; receiving a voice instruction input by a user and matching the voice instruction with the control labels; and determining the control to be interacted with selected by the user according to the matching result, and triggering the operation event corresponding to that control. The technical scheme provided by the embodiment of the invention can be carried out automatically at the system layer, so the voice interaction function does not have to be integrated into each application individually; this solves the problem that it is difficult for third-party applications to be extended with voice interaction, and reduces the development cost of integrating a voice interaction function into an application. Meanwhile, by setting a corresponding control label for each interactable control, the efficiency of voice interaction is further improved and the learning cost of voice interaction is reduced.
Description
Technical Field
The embodiment of the invention relates to the technical field of voice control, in particular to a voice interaction method of an intelligent terminal, the intelligent terminal and a storage medium.
Background
With the convergence of telecommunications, broadcast, and Internet networks ("triple play"), the intelligent terminal has gradually become the center of home entertainment. Faced with the complex new functions and increasingly diverse application software of intelligent terminals, manual operation can no longer satisfy consumers' demand for simple and convenient control. Voice-based human-machine interaction reduces cumbersome operation, is efficient and natural, and has a low learning cost, so it is increasingly used in user-facing intelligent terminals such as smart speakers and set-top boxes. However, most existing graphical-interface applications running on intelligent terminals do not support voice interaction, so the voice interaction capability needs to be extended on the intelligent terminal.
In the prior art, for a third-party application, it is generally necessary to wait for the third-party vendor to develop the application's voice interaction function and to perform interface adaptation with the voice module on the intelligent terminal; for applications bundled with the system, secondary development of the system is generally required to add a voice interaction function. Because the applications on the system may be developed by different providers, adding a voice interaction function is difficult, and adding it to each application separately involves a great deal of repeated work, wasting considerable manpower and material resources.
Disclosure of Invention
The embodiment of the invention provides a voice interaction method of an intelligent terminal, the intelligent terminal and a storage medium, which are used for avoiding the independent integration of voice interaction functions for each application, thereby solving the problem that the voice interaction functions are difficult to expand for a third party application and reducing the development cost of the application integrated voice interaction functions.
In a first aspect, an embodiment of the present invention provides a voice interaction method of an intelligent terminal, where the method includes:
extracting control information of an interactable control on a current display interface;
displaying a control label corresponding to each interactable control on the current display interface according to the control information;
receiving a voice instruction input by a user, and matching the voice instruction with the control labels;
and determining the control to be interacted with selected by the user according to the matching result, and triggering an operation event corresponding to that control.
In a second aspect, an embodiment of the present invention further provides an intelligent terminal, where the intelligent terminal includes:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the voice interaction method of the intelligent terminal provided by any embodiment of the present invention.
In a third aspect, an embodiment of the present invention further provides a computer readable storage medium, where a computer program is stored, where the program when executed by a processor implements the voice interaction method of the intelligent terminal provided in any embodiment of the present invention.
The embodiment of the invention provides a voice interaction method of an intelligent terminal. The method first extracts control information of the interactable controls on the current display interface, then displays a control label corresponding to each interactable control on the current display interface according to the control information, matches a received voice instruction input by the user against the control labels, determines the control to be interacted with selected by the user according to the matching result, and finally triggers the operation event corresponding to that control. The method can be carried out automatically at the system layer, so the voice interaction function does not have to be integrated into each application individually; this solves the problem that it is difficult for third-party applications to be extended with voice interaction, and reduces the development cost of integrating a voice interaction function into an application. Meanwhile, because a corresponding control label is set for each interactable control and matching is performed against those labels, the speech-recognition training process can be completed with less data, which further improves the efficiency of voice interaction and reduces its learning cost.
Drawings
Fig. 1 is a flowchart of a voice interaction method of an intelligent terminal according to a first embodiment of the present invention;
fig. 2 is a flowchart of a voice interaction method of an intelligent terminal according to a second embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a voice interaction device of an intelligent terminal according to a third embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an intelligent terminal according to a fourth embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
Before discussing exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart depicts steps as a sequential process, many of the steps may be implemented in parallel, concurrently, or with other steps. Furthermore, the order of the steps may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figures. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Example 1
Fig. 1 is a flowchart of a voice interaction method of an intelligent terminal according to an embodiment of the present invention. The embodiment is applicable to the situation of adding a voice interaction function to the existing user graphical interface application, the method can be executed by the voice interaction device of the intelligent terminal, the device can be realized by hardware and/or software, and the device can be generally integrated in the intelligent terminal, wherein the intelligent terminal can be, but is not limited to, an intelligent sound box, an intelligent television, a set top box and the like. As shown in fig. 1, the method specifically comprises the following steps:
S11, extracting control information of the interactable controls on the current display interface.
An interactable control is a control that is visible on the current display interface and can be operated through a traditional graphical-interface interaction mode (such as a remote-control button or a touch screen). Correspondingly, non-interactable controls may also exist on the current display interface; these are used to display content on the page but cannot be operated further. The voice interaction function aims to replace the original manual interaction process with voice, and it therefore targets the interactable controls, so only the control information of the interactable controls needs to be extracted. Optionally, the control information may include the control type, control position, control size, and so on. Specifically, after entering the voice interaction flow, the control information of all interactable controls on the current display interface can be extracted automatically through an extended graphical-user-interface module, the system browser kernel, and the like.
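The extraction step above can be sketched as follows. This is a minimal Python illustration, not the patent's implementation: the `ControlInfo` structure, the dictionary-based view tree, and the `extract_controls` name are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class ControlInfo:
    control_type: str          # e.g. "button", "edit", "checkbox", "list"
    x: int                     # top-left corner of the control's bounding box
    y: int
    width: int
    height: int

def extract_controls(view_tree):
    """Walk a (simplified, flat) view tree and keep only interactable controls;
    non-interactable controls are display-only and are skipped."""
    found = []
    for node in view_tree:
        if node.get("interactable"):
            found.append(ControlInfo(node["type"], node["x"], node["y"],
                                     node["w"], node["h"]))
    return found

tree = [
    {"type": "button", "x": 10, "y": 10, "w": 100, "h": 40, "interactable": True},
    {"type": "text",   "x": 10, "y": 60, "w": 200, "h": 20, "interactable": False},
]
controls = extract_controls(tree)
print(len(controls))  # only the button survives
```

In a real system the tree would come from the extended GUI module or browser kernel mentioned above, not from literal dictionaries.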
And S12, displaying the control labels corresponding to each interactable control on the current display interface according to the control information.
Optionally, a control label consists of digits only, letters only, or a combination of digits and letters. To simplify the speech-recognition process, the control label may be set to contain one or two characters, according to the typical number of interactable controls on a display interface. In a preferred embodiment, considering that a graphical user interface rarely displays an excessive number of interactable controls at once, the control label is a two-digit number, i.e. it takes a value from 00 to 99. Specifically, because the user's gaze generally rests on the interactable control to be operated, each control label can be displayed on or near the corresponding interactable control, and mutually non-repeating control labels can be generated at random within the label value range according to the number of interactable controls. After the control labels are displayed, the voice module can be started in the background of the device to wait for the user's voice instruction.
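Generating mutually non-repeating two-digit labels, as described above, amounts to sampling without replacement from the 00–99 pool. A sketch (the function name and the optional seed are illustrative choices, not part of the patent):

```python
import random

def generate_labels(n, seed=None):
    """Draw n mutually distinct two-digit labels from the 00-99 pool."""
    if n > 100:
        raise ValueError("more controls than available two-digit labels")
    rng = random.Random(seed)
    # random.sample picks without replacement, guaranteeing no repeats
    return rng.sample([f"{i:02d}" for i in range(100)], n)

labels = generate_labels(8, seed=42)
assert len(set(labels)) == 8            # all distinct
assert all(len(lab) == 2 for lab in labels)
```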
Optionally, displaying, on the current display interface according to the control information, a control label corresponding to each interactable control, including: determining the display position and the label name of the control label according to the control information; and displaying the control label corresponding to each interactable control on the current display interface according to the display position and the label name.
Specifically, the display position and label name of a control label can be determined from the control position in the control information, where the control position may be the coordinate range of the corresponding interactable control in the current display interface. The display position of the control label may be the center coordinate of that range; that is, the control label can be drawn in a label layer above the graphical-interface layer in which the interactable control resides, and a mask can be added to the graphical-interface layer to dim the background and highlight each control label and the line associating it with its interactable control. Optionally, a transition animation can be added when the control labels appear, providing a better visual experience. Alternatively, the display position may be any boundary coordinate of the range, which still indicates the association between the control label and its interactable control while reducing occlusion, so the user's view of the control is not affected. Whichever method is used, the control labels are displayed on the premise that their display areas do not conflict with one another. The label names can be assigned from the center coordinate of each interactable control, following a left-to-right, then top-to-bottom rule.
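The left-to-right, then top-to-bottom naming rule can be sketched as a row-major sort of control centers. The row-bucketing tolerance and all names here are assumptions for illustration; the patent prescribes no concrete algorithm:

```python
def order_controls(controls, row_tolerance=20):
    """Assign sequential two-digit label names left-to-right, then
    top-to-bottom, based on each control's center coordinate. Controls
    whose center-y values fall in the same tolerance bucket are treated
    as one row."""
    def center(c):
        return (c["x"] + c["w"] // 2, c["y"] + c["h"] // 2)

    ordered = sorted(controls, key=lambda c: (center(c)[1] // row_tolerance,
                                              center(c)[0]))
    return [(f"{i:02d}", c) for i, c in enumerate(ordered)]

buttons = [
    {"x": 200, "y": 10,  "w": 50, "h": 30},  # top right
    {"x": 10,  "y": 10,  "w": 50, "h": 30},  # top left
    {"x": 10,  "y": 100, "w": 50, "h": 30},  # second row
]
labelled = order_controls(buttons)
print([lab for lab, _ in labelled])  # top-left gets "00", top-right "01"
```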
After the display position and label name of each control label are determined, the control label corresponding to each interactable control can be displayed on the current display interface accordingly. Specifically, adjacent control labels can be visually differentiated with a graph-coloring algorithm that uses the fewest possible colors, and each label name can then be drawn in its assigned color at the display position determined in the current display interface.
Optionally, the control information includes a control size, and a control label corresponding to each interactable control is displayed on the current display interface according to the control information, and the method further includes: and determining the label size of each corresponding control label according to the proportion between the control sizes of each interactable control.
Specifically, the size of an interactable control on the display interface generally reflects its frequency of use and importance. The label size of each control label can therefore be determined from the ratio between the control sizes of the interactable controls on the current display interface; that is, the larger the control, the larger its label, which matches the control size and better fits people's viewing habits.
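One way to realize this proportional sizing is to scale a base font size by each control's area relative to the mean, with clamping so labels stay readable. The base size and clamp bounds below are assumed values, not taken from the patent:

```python
def label_sizes(control_areas, base=16, min_px=12, max_px=32):
    """Scale each label's font size by its control's area relative to the
    mean area, clamped to [min_px, max_px] so no label becomes unreadable
    or overwhelming."""
    mean = sum(control_areas) / len(control_areas)
    return [max(min_px, min(max_px, round(base * area / mean)))
            for area in control_areas]

# A control twice the average area gets a proportionally larger label.
sizes = label_sizes([4000, 8000, 4000])
print(sizes)  # -> [12, 24, 12]
```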
Optionally, the control information includes a control position, and determining a display position and a label name of the control label according to the control information includes: determining a distribution rule of the interactable controls and a display position of a control label according to the control position of each interactable control; and determining the label name of the control label according to the distribution rule.
Specifically, whether the interactable controls are in the same row, the same column or the same group can be determined according to the control positions of the interactable controls, if so, the label names of all control labels can be determined by adopting a grouping correlation naming method according to the distribution rules, and particularly, the interactable controls in the same row, the same column or the same group can have adjacent or same beginning label names so as to more accord with the observation habit of people.
Optionally, the label name is a fixed length, and determining the display position and the label name of the control label according to the control information includes: if the number of the interactable controls is larger than the number of the settable non-repeated label names, dividing the current display interface into areas; and determining the display position and the label name of the corresponding control label according to the divided area selected by the user and the control information of the interactable control in the divided area.
For example, as above, the control label may be a two-digit number taking values 00-99, in which case the fixed label-name length is 2 and 100 non-repeating label names can be set; or the control label may be a two-letter string taking values aa-zz, in which case the fixed length is again 2 and 676 non-repeating label names can be set. If the number of interactable controls on the current display interface exceeds the number of settable non-repeating label names, the current display interface can first be divided into regions: it can simply be split into left/right or top/bottom halves, or divided into groups according to the distribution rule, and the divided regions can be numbered. After the division is complete, the user's selection of a region can be received, for example through a voice instruction containing a direction (up/down/left/right) or a region number. The display positions and label names of the control labels for the interactable controls in the selected region are then determined from that region and the corresponding control information, and the control labels are displayed.
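The overflow case can be sketched as follows: only when the controls exceed the label capacity is the screen split (here, into numbered left/right halves; the function name and region-numbering scheme are assumptions for the example):

```python
def split_into_regions(controls, screen_w, label_capacity=100):
    """If there are more controls than available label names, split the
    screen into numbered left/right halves so the user can first narrow
    down by voice; otherwise return everything as a single region 0."""
    if len(controls) <= label_capacity:
        return {0: controls}
    mid = screen_w // 2
    regions = {1: [], 2: []}  # 1 = left half, 2 = right half
    for c in controls:
        cx = c["x"] + c["w"] // 2  # classify by the control's center x
        regions[1 if cx < mid else 2].append(c)
    return regions

many = [{"x": i * 10, "y": 0, "w": 8, "h": 8} for i in range(10)]
regions = split_into_regions(many, screen_w=100, label_capacity=4)
print(len(regions[1]), len(regions[2]))  # -> 5 5
```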
By dividing the current display interface into areas and displaying only the control labels of the interactable controls in the corresponding areas according to the selection of the user, the voice interaction process with the intelligent terminal can be realized through the simpler control labels, the voice interaction efficiency is further improved, the learning cost of voice interaction is further reduced, meanwhile, a solution is provided for the condition that the number of interactable controls exceeds the number of the settable labels, and therefore various voice interaction processes can be successfully completed, and the application range is wider.
S13, receiving a voice instruction input by a user, and matching the voice instruction with the control label.
The voice instruction input by the user can be entered through a voice input device other than the intelligent terminal, such as a Bluetooth remote control, a mobile phone, or a tablet computer, and received by the voice module on the intelligent terminal; alternatively, the voice module can listen for the user's voice instruction directly. The user can read out the label name of a control label displayed on the current display interface, such as a one- or two-digit number, as the voice instruction. The natural language processing (NLP) result that the voice module produces for the voice instruction can then be matched against the control labels in the current display interface to determine the interactable control the user wants to operate.
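The matching step can be sketched as a lookup of the normalized transcript in the label table. The digit-word normalization is an assumed detail (speech recognizers may transcribe "07" as "zero seven"); the patent only requires matching the NLP result against the labels:

```python
def match_command(nlp_text, label_map):
    """Match the NLP transcript of a spoken command against the displayed
    labels. Returns the matched control, or None (no response) if no
    label matches."""
    # Assumed normalization: spoken digits may come back as words.
    words_to_digits = {"zero": "0", "one": "1", "two": "2", "three": "3",
                       "four": "4", "five": "5", "six": "6", "seven": "7",
                       "eight": "8", "nine": "9"}
    tokens = [words_to_digits.get(t, t) for t in nlp_text.lower().split()]
    candidate = "".join(tokens).zfill(2)  # pad "7" to "07"
    return label_map.get(candidate)

labels = {"07": "play-button", "12": "search-box"}
print(match_command("zero seven", labels))  # -> play-button
```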
S14, determining a control to be interacted selected by a user according to the matching result, and triggering an operation event corresponding to the control to be interacted.
Specifically, after the voice command is matched with the control label, if the control label successfully matched with the voice command exists, the interactable control corresponding to the control label is determined to be a control to be interacted selected by a user, an operation event corresponding to the control to be interacted is triggered, and if the control label successfully matched does not exist, no response can be performed. The operation event may be, among other things, sending a click event, waiting for the user to voice input text, recalculating a tag, etc. Optionally, before triggering the operation event, a focus of the corresponding interactable control may be set, and the operation event may be performed at the focus position.
Optionally, the control information includes a control type, and determining a control to be interacted selected by a user according to a matching result, and triggering an operation event corresponding to the control to be interacted, including: and determining an operation event corresponding to the control to be interacted according to the control type.
Specifically, the control types may include button controls, edit controls, check-box controls, list controls, and so on. The operation event of a button control may be sending a click event, i.e. simulating a click on the button and then jumping or loading accordingly. The operation event of an edit control may be waiting for the user to voice-input text: when an edit control is selected, the intelligent terminal enters a text-input state, then receives and can display the text content the user speaks. The operation event of a check-box control may likewise be a click event, i.e. simulating a click on the check box and selecting the corresponding option. The operation event of a list control may be recalculating the labels, i.e. after the list content is displayed, setting a label for each list item and waiting for the user to select further by voice. Based on this correspondence, after the control information of the interactable controls on the current display interface is extracted, the operation event corresponding to the control to be interacted with selected by the user can be determined.
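The type-to-event correspondence just described is naturally expressed as a dispatch table. The event names below are illustrative placeholders, not identifiers from the patent:

```python
def dispatch(control_type):
    """Map a control type to its operation event, mirroring the
    correspondence described above."""
    events = {
        "button":   "send_click_event",         # simulate a click, then jump/load
        "edit":     "await_voice_text_input",   # enter text-input state
        "checkbox": "send_click_event",         # simulate a click on the option
        "list":     "expand_and_relabel_items", # relabel each list item
    }
    return events.get(control_type, "no_response")

print(dispatch("edit"))  # -> await_voice_text_input
```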
According to the technical scheme provided by the embodiment of the invention, the control information of the interactable controls on the current display interface is extracted first; a control label corresponding to each interactable control is then displayed on the current display interface according to the control information; the received voice instruction input by the user is matched against the control labels; the control to be interacted with selected by the user is determined according to the matching result; and finally the operation event corresponding to that control is triggered. The method can be carried out automatically at the system layer, so the voice interaction function does not have to be integrated into each application individually; this solves the problem that it is difficult for third-party applications to be extended with voice interaction, and reduces the development cost of integrating a voice interaction function into an application. Meanwhile, because a corresponding control label is set for each interactable control and matching is performed against those labels, the speech-recognition training process can be completed with less data, which further improves the efficiency of voice interaction and reduces its learning cost.
Example two
Fig. 2 is a flowchart of a voice interaction method of an intelligent terminal according to a second embodiment of the present invention. This embodiment further refines the technical scheme above: optionally, the intelligent terminal is woken up before entering the voice interaction flow, and after the condition for stopping voice interaction is met, the labels are cleaned up. Specifically, in this embodiment, before extracting the control information of the interactable controls on the current display interface, the method further includes: receiving a wake-up instruction input by the user to enter the voice interaction flow. After determining the control to be interacted with selected by the user according to the matching result and triggering the corresponding operation event, the method further includes: if waiting for a voice instruction times out, cleaning up the currently displayed control labels and stopping the voice interaction flow. Correspondingly, as shown in fig. 2, the method specifically comprises the following steps:
S21, receiving a wake-up instruction input by a user to enter the voice interaction flow.
Specifically, the wake-up instruction input by the user can be entered through a voice input device other than the intelligent terminal. Before entering the voice interaction flow, the voice module of the intelligent terminal can remain in an inactive state; only after the intelligent terminal receives the user's wake-up instruction is the system triggered to enter the voice interaction flow. The wake-up instruction may be a simple representative phrase such as "I want".
S22, extracting control information of the interactive control on the current display interface.
S23, displaying the control labels corresponding to each interactable control on the current display interface according to the control information.
S24, receiving a voice instruction input by a user, and matching the voice instruction with the control label.
S25, determining a control to be interacted selected by the user according to the matching result, and triggering an operation event corresponding to the control to be interacted.
S26, if waiting for the voice instruction times out, cleaning up the currently displayed control labels and stopping the voice interaction flow.
Specifically, if waiting for a voice instruction times out, i.e. no voice instruction is received from the user within the preset time range, the current voice interaction flow can be considered finished. The currently displayed control labels can then be cleaned up, the current display interface restored to its normal working state, and the voice interaction flow stopped: the voice module stops waiting for the user's voice instruction, and the system switches to waiting for the next voice wake-up. Optionally, the control labels can also be cleaned up and the voice interaction flow stopped after the operation event of the control to be interacted with has completed, or after a voice instruction to exit voice interaction is received from the user.
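The wake → listen → timeout/cleanup cycle can be sketched as a small state machine. The 5-second window below is an assumed default; the patent only speaks of "a preset time range", and all class and method names are illustrative:

```python
import time

class VoiceSession:
    """Minimal sketch of the wake -> listen -> timeout/cleanup cycle."""

    def __init__(self, timeout_s=5.0):
        self.timeout_s = timeout_s
        self.active = False        # is the voice interaction flow running?
        self.labels_shown = False  # are control labels currently displayed?

    def wake(self):
        """Wake-up instruction received: enter the voice interaction flow."""
        self.active = True
        self.labels_shown = True
        self._deadline = time.monotonic() + self.timeout_s

    def tick(self, now=None):
        """Call periodically; on timeout, clean up the labels and stop
        the flow, returning the system to the wake-word-waiting state."""
        now = time.monotonic() if now is None else now
        if self.active and now >= self._deadline:
            self.labels_shown = False  # clean up displayed control labels
            self.active = False        # back to waiting for the next wake-up

s = VoiceSession(timeout_s=5.0)
s.wake()
s.tick(now=time.monotonic() + 6)  # simulate 6 s of silence
print(s.active, s.labels_shown)   # -> False False
```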
According to the technical scheme provided by this embodiment, the voice interaction flow is entered only after a wake-up instruction input by the user is received, and it is stopped once the condition for stopping voice interaction is met. The intelligent terminal therefore enters the voice interaction state only when needed and resumes its normal working state at other times, saving the large amount of computing resources the voice interaction flow would otherwise consume. Meanwhile, cleaning up the currently displayed control labels when the flow ends prepares for the next round of label generation and provides a clearer display interface for the manual interaction process.
Example III
Fig. 3 is a schematic structural diagram of a voice interaction device of an intelligent terminal according to a third embodiment of the present invention. The device may be implemented in hardware and/or software and may generally be integrated in an intelligent terminal, which may be, but is not limited to, a smart speaker, a smart television, a set-top box, or the like. As shown in fig. 3, the apparatus includes:
the information extraction module 31 is used for extracting control information of the interactable controls on the current display interface;
The label display module 32 is configured to display a control label corresponding to each interactable control on the current display interface according to the control information;
The instruction matching module 33 is configured to receive a voice instruction input by a user, and match the voice instruction with a control label;
And the operation triggering module 34 is used for determining the control to be interacted selected by the user according to the matching result and triggering an operation event corresponding to the control to be interacted.
According to the technical scheme provided by this embodiment of the invention, control information of the interactable controls on the current display interface is first extracted; a control label corresponding to each interactable control is then displayed on the current display interface according to the control information; the received voice instruction input by the user is matched against the control labels; the control to be interacted with that the user selected is determined according to the matching result; and the operation event corresponding to that control is triggered. The voice interaction method of the intelligent terminal provided by this embodiment can be completed automatically at the system layer, avoiding integrating a voice interaction function separately into each application. This solves the problem that the voice interaction function is difficult for third-party applications to extend, and reduces the development cost of integrating voice interaction into applications. Meanwhile, by setting a corresponding control label for each interactable control and matching against the control labels, speech training can be completed with less data, which further improves the efficiency of voice interaction and reduces its learning cost.
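The cooperation of the four modules in Fig. 3 can be sketched end to end as below. The class and method names, and the dictionary shape of a control, are assumptions made for illustration; they do not appear in the patent.

```python
class VoiceInteractionDevice:
    """Hypothetical sketch of the four modules of Fig. 3 working together."""

    def extract_control_info(self, interface):
        # Information extraction module 31: collect the interactable
        # controls present on the current display interface.
        return [c for c in interface if c.get("interactable")]

    def show_labels(self, controls):
        # Label display module 32: assign a label name to each control
        # (here simply "1", "2", ... in display order).
        return {str(i + 1): c for i, c in enumerate(controls)}

    def match_command(self, command, labels):
        # Instruction matching module 33: map the voice instruction
        # to the control whose label it names.
        return labels.get(command.strip())

    def trigger(self, control):
        # Operation triggering module 34: fire the control's event.
        return control["action"]()
```

A caller would chain these: extract, label, match the recognized text, then trigger the matched control's operation event.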
Based on the above technical solution, optionally, the tag display module 32 includes:
the position and name determining sub-module is used for determining the display position and the label name of the control label according to the control information;
And the label display sub-module is used for displaying the control label corresponding to each interactable control on the current display interface according to the display position and the label name.
Based on the above technical solution, optionally, the control information includes a control size, and the label display module 32 further includes:
And the size determination sub-module is used for determining the label size of each corresponding control label according to the proportion between the control sizes of each interactable control.
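The size determination sub-module above scales each label with its control. One way this proportion could be realised is sketched below; the square-root scaling rule, base size, and clamp are assumptions for illustration, since the patent only states that label size follows the proportion between control sizes.

```python
def label_sizes(control_sizes, base=12, max_size=24):
    """Scale each label's font size with its control's size (e.g. area).

    Sketch of the size determination sub-module: the smallest control
    gets the base size, larger controls get proportionally larger labels
    (square-root scaling so huge controls do not get huge labels),
    clamped to `max_size`.
    """
    smallest = min(control_sizes)
    return [min(max_size, round(base * (s / smallest) ** 0.5))
            for s in control_sizes]
```

For example, a control four times the area of the smallest would receive a label twice the base size.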
On the basis of the above technical solution, optionally, the control information includes a control position, and the position and name determining sub-module includes:
the distribution rule determining unit is used for determining the distribution rule of the interactable control and the display position of the control label according to the control position of each interactable control;
and the name determining unit is used for determining the label name of the control label according to the distribution rule.
Based on the above technical solution, optionally, the tag name has a fixed length, and the position and name determining sub-module includes:
The area dividing unit is used for dividing the area of the current display interface if the number of the interactable controls is larger than the number of the settable non-repeated label names;
And the position and name determining unit is used for determining the display position and the label name of the corresponding control label according to the divided area selected by the user and the control information of the interactable control in the divided area.
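The area dividing unit above splits the display when the interactable controls outnumber the distinct label names that can be formed at a fixed length. A rough sketch of such a division follows; the 2x2 grid, the 1920x1080 screen, and the control coordinate fields are assumptions, as the patent does not fix a particular partitioning scheme.

```python
def divide_regions(controls, max_labels, cols=2, rows=2,
                   width=1920, height=1080):
    """Divide the display into regions when controls outnumber labels.

    Sketch of the area dividing unit: if there are more interactable
    controls than settable non-repeating label names, split the screen
    into a cols x rows grid, so the user first selects a region and
    labels are then assigned only within that region.
    """
    if len(controls) <= max_labels:
        return None  # enough label names; no division needed
    cell_w, cell_h = width / cols, height / rows
    regions = {}
    for c in controls:
        col = min(cols - 1, int(c["x"] // cell_w))
        row = min(rows - 1, int(c["y"] // cell_h))
        regions.setdefault((row, col), []).append(c)
    return regions
```

After the user names a divided region, the same labelling flow runs again on just that region's controls.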
Based on the above technical solution, optionally, the control information includes a control type, and the operation triggering module 34 includes:
And the operation event determination submodule is used for determining an operation event corresponding to the control to be interacted according to the control type.
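Determining the operation event from the control type, as the sub-module above does, can be as simple as a lookup table. The specific type names and event names below are illustrative assumptions; the patent only states that the event is determined by the control type.

```python
def operation_event(control_type):
    """Map a control type to its operation event (sub-module sketch).

    Illustrative mapping: the type and event names are assumptions,
    with "click" as a fallback for unrecognised types.
    """
    events = {
        "button": "click",
        "switch": "toggle",
        "slider": "adjust",
        "edit_text": "focus_and_input",
        "list_item": "select",
    }
    return events.get(control_type, "click")
```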
On the basis of the above technical scheme, optionally, the voice interaction device of the intelligent terminal further comprises:
and the wake-up module is used for receiving a wake-up instruction input by a user before extracting control information of the interactive control on the current display interface so as to enter a voice interaction flow.
On the basis of the above technical scheme, optionally, the voice interaction device of the intelligent terminal further comprises:
And the cleaning module is used for cleaning the currently displayed control label and stopping the voice interaction flow if waiting to receive a voice instruction times out, after the control to be interacted with selected by the user is determined according to the matching result and the operation event corresponding to that control is triggered.
The voice interaction device of the intelligent terminal provided by the embodiment of the invention can execute the voice interaction method of the intelligent terminal provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
It should be noted that, in the embodiment of the voice interaction device of the intelligent terminal, each unit and module included are only divided according to the functional logic, but not limited to the above-mentioned division, so long as the corresponding functions can be implemented; in addition, the specific names of the functional units are also only for distinguishing from each other, and are not used to limit the protection scope of the present invention.
Example IV
Fig. 4 is a schematic structural diagram of an intelligent terminal provided in a fourth embodiment of the present invention, and shows a block diagram of an exemplary intelligent terminal suitable for implementing an embodiment of the present invention. The intelligent terminal shown in fig. 4 is only an example and should not impose any limitation on the functions or scope of application of the embodiments of the present invention. As shown in fig. 4, the intelligent terminal includes a processor 41, a memory 42, an input device 43 and an output device 44. The number of processors 41 in the intelligent terminal may be one or more; one processor 41 is taken as an example in fig. 4. The processor 41, the memory 42, the input device 43 and the output device 44 in the intelligent terminal may be connected by a bus or in other manners; connection by a bus is taken as an example in fig. 4.
The memory 42 is used as a computer readable storage medium, and may be used to store software programs, computer executable programs, and modules, such as program instructions/modules corresponding to the voice interaction method of the intelligent terminal in the embodiment of the present invention (for example, the information extraction module 31, the tag display module 32, the instruction matching module 33, and the operation triggering module 34 in the voice interaction device of the intelligent terminal). The processor 41 executes various functional applications and data processing of the intelligent terminal by running software programs, instructions and modules stored in the memory 42, i.e. implements the above-mentioned voice interaction method of the intelligent terminal.
The memory 42 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, at least one application program required for functions; the storage data area may store data created according to the use of the smart terminal, etc. In addition, memory 42 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, memory 42 may further include memory remotely located relative to processor 41, which may be connected to the intelligent terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 43 may be used to receive voice instructions entered by a user, generate key signal inputs related to user settings and function controls of the intelligent terminal, etc. The output device 44 may include a display screen or the like that may be used to present the user with interactable controls, as well as labels for the interactable controls, and the like.
Example five
A fifth embodiment of the present invention further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a voice interaction method of an intelligent terminal, the method comprising:
extracting control information of an interactable control on a current display interface;
displaying a control label corresponding to each interactable control on the current display interface according to the control information;
receiving a voice command input by a user, and matching the voice command with a control label;
And determining the control to be interacted selected by the user according to the matching result, and triggering an operation event corresponding to the control to be interacted.
Storage medium: any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media such as CD-ROM, floppy disk or tape devices; computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; non-volatile memory such as flash memory or magnetic media (e.g., hard disk or optical storage); registers or other similar types of memory elements, etc. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in the computer system in which the program is executed, or may be located in a different second computer system connected to the computer system through a network (such as the internet). The second computer system may provide program instructions to the computer for execution. The term "storage medium" may include two or more storage media that may reside in different locations (e.g., in different computer systems connected by a network). The storage medium may store program instructions (e.g., embodied as a computer program) executable by one or more processors.
Of course, the storage medium containing the computer executable instructions provided in the embodiments of the present invention is not limited to the above-mentioned method operations, and may also perform related operations in the voice interaction method of the intelligent terminal provided in any embodiment of the present invention.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
From the above description of embodiments, it will be clear to a person skilled in the art that the present invention may be implemented by means of software and necessary general purpose hardware, but of course also by means of hardware, although in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a FLASH Memory (FLASH), a hard disk, or an optical disk of a computer, etc., and include several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments of the present invention.
Note that the above is only a preferred embodiment of the present invention and the technical principle applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.
Claims (10)
1. The voice interaction method of the intelligent terminal is characterized by comprising the following steps of:
extracting control information of an interactable control on a current display interface;
Displaying a control label corresponding to each interactable control on the current display interface according to the control information; the control labels are mutually non-repeating labels randomly generated within the value range of the control labels according to the number of the interactable controls;
Receiving a voice command input by a user, and matching the voice command with the control label;
And determining a control to be interacted selected by a user according to the matching result, and triggering an operation event corresponding to the control to be interacted.
2. The voice interaction method of the intelligent terminal according to claim 1, wherein the displaying, according to the control information, a control label corresponding to each interactable control on the current display interface includes:
determining the display position and the label name of the control label according to the control information;
And displaying the control label corresponding to each interactable control on the current display interface according to the display position and the label name.
3. The voice interaction method of the intelligent terminal according to claim 2, wherein the control information includes a control size, and the displaying, on the current display interface according to the control information, a control label corresponding to each interactable control further includes:
And determining the label size of each corresponding control label according to the proportion between the control sizes of each interactable control.
4. The voice interaction method of the intelligent terminal according to claim 2, wherein the control information includes a control position, and the determining the display position and the label name of the control label according to the control information includes:
Determining a distribution rule of the interactable controls and a display position of the control label according to the control position of each interactable control;
and determining the label name of the control label according to the distribution rule.
5. The voice interaction method of the intelligent terminal according to claim 2, wherein the tag name is a fixed length, and the determining the display position and the tag name of the control tag according to the control information includes:
If the number of the interactable controls is larger than the number of the settable non-repeated label names, dividing the current display interface into areas;
and determining the display position and the label name of the corresponding control label according to the divided area selected by the user and the control information of the interactable control in the divided area.
6. The voice interaction method of the intelligent terminal according to claim 1, wherein the control information includes a control type, the determining a control to be interacted selected by a user according to a matching result, and triggering an operation event corresponding to the control to be interacted includes:
and determining an operation event corresponding to the control to be interacted according to the control type.
7. The voice interaction method of the intelligent terminal according to claim 1, further comprising, before the extracting control information of the interactable control on the current display interface:
And receiving a wake-up instruction input by a user to enter a voice interaction flow.
8. The voice interaction method of the intelligent terminal according to claim 1, wherein after determining the control to be interacted selected by the user according to the matching result and triggering the operation event corresponding to the control to be interacted, the method further comprises:
And if waiting to receive a voice instruction times out, cleaning the currently displayed control label and stopping the voice interaction flow.
9. An intelligent terminal, characterized by comprising:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the voice interaction method of the intelligent terminal of any of claims 1-8.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements a voice interaction method of an intelligent terminal according to any of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010707456.7A CN112102823B (en) | 2020-07-21 | 2020-07-21 | Voice interaction method of intelligent terminal, intelligent terminal and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010707456.7A CN112102823B (en) | 2020-07-21 | 2020-07-21 | Voice interaction method of intelligent terminal, intelligent terminal and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112102823A CN112102823A (en) | 2020-12-18 |
CN112102823B (en) | 2024-06-21 |
Family
ID=73749719
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010707456.7A Active CN112102823B (en) | 2020-07-21 | 2020-07-21 | Voice interaction method of intelligent terminal, intelligent terminal and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112102823B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112634896B (en) * | 2020-12-30 | 2023-04-11 | 智道网联科技(北京)有限公司 | Operation method of application program on intelligent terminal and intelligent terminal |
CN115048161A (en) * | 2021-02-26 | 2022-09-13 | 华为技术有限公司 | Application control method, electronic device, apparatus, and medium |
CN113900620B (en) * | 2021-11-09 | 2024-05-03 | 杭州逗酷软件科技有限公司 | Interaction method, device, electronic equipment and storage medium |
CN114048726B (en) * | 2022-01-13 | 2022-04-08 | 北京中科汇联科技股份有限公司 | Computer graphic interface interaction method and system |
CN118410301A (en) * | 2024-05-23 | 2024-07-30 | 广东电网有限责任公司佛山供电局 | Signal acceptance method and device, electronic equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105551492A (en) * | 2015-12-04 | 2016-05-04 | 青岛海信传媒网络技术有限公司 | Speech control method, speech control device and terminal |
CN109215650A (en) * | 2018-09-17 | 2019-01-15 | 珠海格力电器股份有限公司 | Voice control method and system based on terminal interface recognition and intelligent terminal |
CN111147777A (en) * | 2019-10-12 | 2020-05-12 | 深圳Tcl数字技术有限公司 | Intelligent terminal voice interaction method and device and storage medium |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9741343B1 (en) * | 2013-12-19 | 2017-08-22 | Amazon Technologies, Inc. | Voice interaction application selection |
CN107481721A (en) * | 2017-08-16 | 2017-12-15 | 北京百度网讯科技有限公司 | Voice interactive method and wearable electronic for wearable electronic |
CN108364644A (en) * | 2018-01-17 | 2018-08-03 | 深圳市金立通信设备有限公司 | A kind of voice interactive method, terminal and computer-readable medium |
CN108320744B (en) * | 2018-02-07 | 2020-06-23 | Oppo广东移动通信有限公司 | Voice processing method and device, electronic equipment and computer readable storage medium |
CN108733343B (en) * | 2018-05-28 | 2021-08-06 | 北京小米移动软件有限公司 | Method, device and storage medium for generating voice control instruction |
CN110610701B (en) * | 2018-06-14 | 2023-08-25 | 淘宝(中国)软件有限公司 | Voice interaction method, voice interaction prompting method, device and equipment |
CN109036396A (en) * | 2018-06-29 | 2018-12-18 | 百度在线网络技术(北京)有限公司 | A kind of exchange method and system of third-party application |
CN109656512A (en) * | 2018-12-20 | 2019-04-19 | Oppo广东移动通信有限公司 | Exchange method, device, storage medium and terminal based on voice assistant |
CN109669754A (en) * | 2018-12-25 | 2019-04-23 | 苏州思必驰信息科技有限公司 | The dynamic display method of interactive voice window, voice interactive method and device with telescopic interactive window |
CN109960537A (en) * | 2019-03-29 | 2019-07-02 | 北京金山安全软件有限公司 | Interaction method and device and electronic equipment |
CN111209437B (en) * | 2020-01-13 | 2023-11-28 | 腾讯科技(深圳)有限公司 | Label processing method and device, storage medium and electronic equipment |
- 2020-07-21 CN CN202010707456.7A patent/CN112102823B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105551492A (en) * | 2015-12-04 | 2016-05-04 | 青岛海信传媒网络技术有限公司 | Speech control method, speech control device and terminal |
CN109215650A (en) * | 2018-09-17 | 2019-01-15 | 珠海格力电器股份有限公司 | Voice control method and system based on terminal interface recognition and intelligent terminal |
CN111147777A (en) * | 2019-10-12 | 2020-05-12 | 深圳Tcl数字技术有限公司 | Intelligent terminal voice interaction method and device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN112102823A (en) | 2020-12-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112102823B (en) | Voice interaction method of intelligent terminal, intelligent terminal and storage medium | |
CN105549819B (en) | The display methods and device of background application information | |
JP5706587B2 (en) | Method for controlling the system bar of a user equipment and user equipment | |
CN106373249B (en) | Call out the numbers device, bank outlets the method being lined up by all kinds of means and system | |
CN107748686A (en) | Starting guide method, apparatus, storage medium and the intelligent terminal of application program | |
CN107786730A (en) | A kind of task management method and terminal | |
CN107168551A (en) | The input method that a kind of list is filled in | |
CN104090648A (en) | Data entry method and terminal | |
CN112286485B (en) | Method and device for controlling application through voice, electronic equipment and storage medium | |
CN104142778B (en) | A kind of method of text-processing, device and mobile terminal | |
CN105630461A (en) | Managing method of android application interface | |
WO2017156983A1 (en) | List callup method and device | |
CN107678780A (en) | A kind of EMS memory management process, device, storage medium and terminal device | |
CN105575390A (en) | Voice control method and device | |
CN106028172A (en) | Audio/video processing method and device | |
CN112163432A (en) | Translation method, translation device and electronic equipment | |
CN112399222A (en) | Voice instruction learning method and device for smart television, smart television and medium | |
CN112181253A (en) | Information display method and device and electronic equipment | |
CN114115673B (en) | Control method of vehicle-mounted screen | |
CN113485779A (en) | Operation guiding method and device for application program | |
CN100466827C (en) | New message of short message presenting method | |
CN117608552A (en) | GUI-oriented task automatic execution plug-in generation method and service acquisition method | |
CN114356083B (en) | Virtual personal assistant control method, device, electronic equipment and readable storage medium | |
CN111352360A (en) | Robot control method, robot control device, robot, and computer storage medium | |
CN109544587A (en) | A kind of FIG pull handle method, apparatus and terminal device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||