
CN107978316A - Method and device for controlling a terminal - Google Patents

Method and device for controlling a terminal

Info

Publication number
CN107978316A
CN107978316A (application CN201711130491.1A)
Authority
CN
China
Prior art keywords
voice
voice information
terminal
user
instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711130491.1A
Other languages
Chinese (zh)
Inventor
刘鑫
安凯
邵明绪
石强
李凯文
田姣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Bee Language Mdt Infotech Ltd
Original Assignee
Xi'an Bee Language Mdt Infotech Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Bee Language Mdt Infotech Ltd filed Critical Xi'an Bee Language Mdt Infotech Ltd
Priority to CN201711130491.1A priority Critical patent/CN107978316A/en
Publication of CN107978316A publication Critical patent/CN107978316A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/26: Speech to text systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16: Sound input; Sound output
    • G06F3/167: Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M1/00: Substation equipment, e.g. for use by subscribers
    • H04M1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403: User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Signal Processing (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computational Linguistics (AREA)
  • Acoustics & Sound (AREA)
  • Telephone Function (AREA)

Abstract

The present disclosure relates to a method and device for controlling a terminal. The method includes: acquiring a voice wake-up instruction, the voice wake-up instruction being used to instruct a wearable device to collect user voice information; executing the voice wake-up instruction and collecting the user voice information; and sending the user voice information to a terminal wirelessly connected to the wearable device, the user voice information being used to control the terminal to perform the operation it indicates. With the technical solution provided by the present disclosure, a wearable device can control a terminal to perform various operations through user voice, realizing various terminal functions, increasing the types of applications the wearable device can control, and bringing the user a brand-new voice human-machine interaction experience.

Description

Method and device for controlling terminal
Technical Field
The present disclosure relates to the field of terminal technologies, and in particular, to a method and an apparatus for controlling a terminal.
Background
A wearable device such as a wireless headset applies wireless technology to a hands-free earphone, freeing the user from tangled wires; it can be worn on the body at any time. The wearable device can establish a wireless connection with a smartphone through a wireless chip inside the device, and once the connection is established, the user can conveniently control the smartphone by operating the wearable device, realizing functions such as making calls and listening to music. However, when a user controls a terminal through a wearable device such as a Bluetooth headset, the only available mode of operation is key pressing: the human-machine interaction mode is limited, only functions such as making calls and playing music can be realized, and the range of controllable applications is narrow.
Disclosure of Invention
The embodiments of the present disclosure provide a method and a device for controlling a terminal, with which a wearable device can control the terminal to perform various operations through user voice, realizing various terminal functions, increasing the types of applications the wearable device can control, and bringing the user a brand-new voice human-machine interaction experience. The technical scheme is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a method for controlling a terminal, which is applied to a wearable device, the method including:
acquiring a voice wake-up instruction, the voice wake-up instruction being used to instruct the wearable device to collect user voice information;
executing the voice wake-up instruction and collecting the user voice information;
and sending the user voice information to a terminal wirelessly connected to the wearable device, the user voice information being used to control the terminal to perform the operation indicated by the user voice information.
In one embodiment, the method further comprises:
receiving feedback voice information returned by the terminal, the feedback voice information being used to report the terminal's execution of the user voice information;
and playing the feedback voice information.
In one embodiment, acquiring the voice wake-up instruction includes:
acquiring preset key operation information;
or,
acquiring preset wake-up voice information.
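The wearable-device-side flow above (wake-up via a preset key press or preset wake-up voice, then collect and forward the user voice) can be sketched as follows. This is an illustrative model only, not part of the patent; the class, method names, and the `sent` list standing in for the wireless link are all invented:

```python
from dataclasses import dataclass, field

@dataclass
class WearableDevice:
    # stands in for data sent over the wireless connection to the terminal
    sent: list = field(default_factory=list)

    def is_wake_instruction(self, source: str) -> bool:
        # The wake-up instruction may be a preset key press
        # or preset wake-up voice information.
        return source in ("wake_key", "wake_word")

    def collect_voice(self) -> bytes:
        # placeholder for microphone capture
        return b"call Zhang San"

    def handle(self, source: str) -> None:
        if self.is_wake_instruction(source):
            self.sent.append(self.collect_voice())

device = WearableDevice()
device.handle("wake_key")    # wake-up instruction: voice collected and sent
device.handle("volume_up")   # not a wake-up instruction: ignored
print(len(device.sent))
```

Only voice collected after a wake-up event is forwarded, which mirrors how the headset distinguishes control voice from ordinary call audio.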
According to a second aspect of the embodiments of the present disclosure, there is provided a method for controlling a terminal, which is applied to the terminal, the method including:
receiving user voice information sent by a wearable device wirelessly connected to the terminal, the user voice information being used to control the terminal to perform the operation indicated by the user voice information;
determining a voice instruction in the user voice information;
and executing the operation indicated by the voice instruction.
In one embodiment, the performing the operation indicated by the voice instruction includes:
determining an application type related to the operation indicated by the voice instruction, wherein the application type comprises a system application or a third-party application;
when the operation indicated by the voice instruction comprises operation in system application, calling a system application interface, and controlling the system application to execute the operation indicated by the voice instruction;
and when the operation indicated by the voice instruction comprises the operation in the third-party application, calling a third-party application interface to control the third-party application to execute the operation indicated by the voice instruction.
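The application-type dispatch described above (call the system application interface for system-app operations, the third-party application interface otherwise) can be illustrated with a small sketch. The registries, app names, and operation strings are hypothetical stand-ins, not actual terminal APIs:

```python
def execute_instruction(instruction, system_apps, third_party_apps):
    # Route the operation to a system application or a third-party
    # application, depending on which registry knows the app.
    app, operation = instruction["app"], instruction["operation"]
    if app in system_apps:                       # system application interface
        return system_apps[app](operation)
    if app in third_party_apps:                  # third-party application interface
        return third_party_apps[app](operation)
    raise ValueError(f"no application registered for {app!r}")

system_apps = {"phone": lambda op: f"system phone app: {op}"}
third_party_apps = {"wechat": lambda op: f"wechat app: {op}"}

result = execute_instruction(
    {"app": "phone", "operation": "dial Zhang San"},
    system_apps, third_party_apps)
print(result)
```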
In one embodiment, the method further comprises:
acquiring feedback voice information for the user voice information, the feedback voice information being used to report the terminal's execution of the user voice information;
and returning the feedback voice information to the wearable device.
In one embodiment, the determining the voice instruction in the user voice information includes:
detecting the end point of the user voice information and determining the user voice before the end point;
sending the user voice information before the end point to a voice processing cloud;
and receiving, from the voice processing cloud, the voice instruction recognized in the user voice information.
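The end-point detection step can be illustrated with a toy energy-based detector. The patent does not specify an algorithm, so the energy threshold and silence-run length here are invented for the example:

```python
def find_speech_end(energies, threshold=0.1, min_silence_frames=5):
    # The voice is considered ended once a run of low-energy frames
    # reaches min_silence_frames; return the index where silence begins.
    silent = 0
    for i, e in enumerate(energies):
        silent = silent + 1 if e < threshold else 0
        if silent >= min_silence_frames:
            return i - min_silence_frames + 1  # first frame of the silence run
    return len(energies)

frames = [0.8, 0.9, 0.7, 0.05, 0.04, 0.03, 0.02, 0.01]
end = find_speech_end(frames)
speech = frames[:end]   # only this part would go to the voice processing cloud
print(end)
```

Everything before the detected end point is what the terminal would forward to the cloud for recognition.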
In one embodiment, the obtaining feedback voice information for the user voice information includes:
monitoring the execution of the operation through an interface of the application involved in the operation indicated by the voice instruction;
generating feedback text for the user voice information according to the execution condition;
sending the feedback text for the user voice information to the voice processing cloud;
receiving, from the voice processing cloud, feedback voice information in digital-signal form corresponding to the feedback text;
and converting the feedback voice information from digital-signal form into analog-signal form.
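The feedback path above (monitor execution, generate feedback text, have the cloud synthesize a digital signal, convert it to an analog signal) can be sketched as a pipeline. The `tts_cloud` callable is a hypothetical stand-in for the voice processing cloud, and the scaling step is only a crude placeholder for digital-to-analog conversion:

```python
def build_feedback(status_text: str, tts_cloud):
    text = f"Executing: {status_text}"              # feedback text
    digital = tts_cloud(text)                       # digital-signal voice from cloud
    analog = [(s - 128) / 128.0 for s in digital]   # crude digital-to-analog step
    return analog

# pretend cloud synthesis: one 8-bit "sample" per character of text
fake_cloud = lambda text: [ord(c) % 256 for c in text]
signal = build_feedback("calling Zhang San", fake_cloud)
print(len(signal))
```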
According to a third aspect of the embodiments of the present disclosure, there is provided an apparatus for controlling a terminal, which is applied to a wearable device, the apparatus including:
the first acquisition module is used for acquiring a voice wake-up instruction, the voice wake-up instruction instructing the wearable device to collect user voice information;
the collection module is used for executing the voice wake-up instruction and collecting the user voice information;
and the sending module is used for sending the user voice information to a terminal wirelessly connected to the wearable device, the user voice information being used to control the terminal to perform the operation indicated by the user voice information.
In one embodiment, the apparatus further comprises:
the first receiving module is used for receiving feedback voice information returned by the terminal, the feedback voice information being used to report the terminal's execution of the user voice information;
and the playing module is used for playing the feedback voice information.
In one embodiment, the first obtaining module comprises:
the first obtaining submodule is used for obtaining preset key operation information;
or,
and the second acquisition submodule is used for acquiring preset wake-up voice information.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an apparatus for controlling a terminal, which is applied to the terminal, the apparatus including:
the second receiving module is used for receiving user voice information sent by a wearable device wirelessly connected to the terminal, the user voice information being used to control the terminal to perform the operation indicated by the user voice information;
the determining module is used for determining a voice instruction in the user voice information;
and the execution module is used for executing the operation indicated by the voice instruction.
In one embodiment, the execution module comprises:
the determining submodule is used for determining an application type related to the operation indicated by the voice instruction, and the application type comprises a system application or a third-party application;
the first control submodule is used for calling a system application interface when the operation indicated by the voice instruction comprises the operation in the system application, and controlling the system application to execute the operation indicated by the voice instruction;
and the second control submodule is used for calling a third-party application interface when the operation indicated by the voice instruction comprises the operation in the third-party application, and controlling the third-party application to execute the operation indicated by the voice instruction.
In one embodiment, the apparatus further comprises:
the second acquisition module is used for acquiring feedback voice information for the user voice information, the feedback voice information being used to report the terminal's execution of the user voice information;
a return module for returning the feedback voice information to the wearable device.
In one embodiment, the determining module comprises:
the detection submodule is used for detecting the end point of the user voice information and determining the user voice information before the end point;
the first sending submodule is used for sending the user voice information before the end point to a voice processing cloud;
and the first receiving submodule is used for receiving, from the voice processing cloud, the voice instruction recognized in the user voice information.
In one embodiment, the second obtaining module comprises:
the monitoring submodule is used for monitoring the execution of the operation through an interface of the application involved in the operation indicated by the voice instruction;
the generating submodule is used for generating feedback text for the user voice information according to the execution condition;
the second sending submodule is used for sending the feedback text for the user voice information to the voice processing cloud;
the second receiving submodule is used for receiving, from the voice processing cloud, feedback voice information in digital-signal form corresponding to the feedback text;
and the conversion submodule is used for converting the feedback voice information from digital-signal form into analog-signal form.
According to a fifth aspect of the embodiments of the present disclosure, there is provided an apparatus for controlling a terminal, which is applied to a wearable device, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring a voice wake-up instruction, the voice wake-up instruction being used to instruct the wearable device to collect user voice information;
executing the voice wake-up instruction and collecting the user voice information;
and sending the user voice information to a terminal wirelessly connected to the wearable device, the user voice information being used to control the terminal to perform the operation indicated by the user voice information.
According to a sixth aspect of the embodiments of the present disclosure, there is provided an apparatus for controlling a terminal, which is applied to the terminal, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
receiving user voice information sent by a wearable device wirelessly connected to the terminal, the user voice information being used to control the terminal to perform the operation indicated by the user voice information;
determining a voice instruction in the user voice information;
and executing the operation indicated by the voice instruction.
According to a seventh aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing computer instructions for a wearable device; when executed by a processor, the computer instructions implement the steps of the method applied to the wearable device.
According to an eighth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing computer instructions for a terminal; when executed by a processor, the computer instructions implement the steps of the method applied to the terminal.
In this embodiment, after acquiring a voice wake-up instruction that instructs it to collect voice for control, the wearable device executes the instruction and collects user voice information; the user voice information is then sent to a terminal wirelessly connected to the wearable device and controls the terminal to perform the operation it indicates. The wearable device can thus control the terminal to perform various operations through user voice, realizing various terminal functions, increasing the types of applications the wearable device can control, and bringing the user a brand-new voice human-machine interaction experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a block diagram illustrating a system implementing the method of controlling a terminal according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating a method of controlling a terminal, applied to a wearable device, according to an exemplary embodiment.
Fig. 3 is a flowchart illustrating a method of controlling a terminal, applied to a terminal, according to an exemplary embodiment.
Fig. 4 is a flowchart illustrating a method of controlling a terminal according to an exemplary embodiment.
Fig. 5 is a block diagram illustrating an apparatus for controlling a terminal, applied to a wearable device, according to an exemplary embodiment.
Fig. 6 is a block diagram illustrating an apparatus for controlling a terminal, applied to a wearable device, according to an exemplary embodiment.
Fig. 7 is a block diagram illustrating an apparatus for controlling a terminal, applied to a wearable device, according to an exemplary embodiment.
Fig. 8 is a block diagram illustrating an apparatus for controlling a terminal, applied to a wearable device, according to an exemplary embodiment.
Fig. 9 is a block diagram illustrating an apparatus for controlling a terminal, applied to a terminal, according to an exemplary embodiment.
Fig. 10 is a block diagram illustrating an apparatus for controlling a terminal, applied to a terminal, according to an exemplary embodiment.
Fig. 11 is a block diagram illustrating an apparatus for controlling a terminal, applied to a terminal, according to an exemplary embodiment.
Fig. 12 is a block diagram illustrating an apparatus for controlling a terminal, applied to a terminal, according to an exemplary embodiment.
Fig. 13 is a block diagram illustrating an apparatus for controlling a terminal, applied to a terminal, according to an exemplary embodiment.
Fig. 14 is a block diagram illustrating an apparatus for controlling a terminal, applied to a wearable device, according to an exemplary embodiment.
Fig. 15 is a block diagram illustrating an apparatus for controlling a terminal, applied to a terminal, according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a block diagram illustrating a system implementing the method of controlling a terminal according to an exemplary embodiment; the following embodiments are described with reference to the system illustrated in fig. 1.
Wearable device side embodiment:
Fig. 2 is a flowchart illustrating a method of controlling a terminal, applied to a wearable device, according to an exemplary embodiment. The method is used in a wearable device or similar device and, as shown in fig. 2, includes the following steps 201 to 203:
in step 201, a voice wake-up instruction is obtained, where the voice wake-up instruction is used to instruct the wearable device to collect user voice information.
In step 202, the voice wake-up command is executed to collect the voice information of the user.
In step 203, the user voice information is sent to a terminal wirelessly connected to the wearable device, the user voice information being used to control the terminal to perform the operation indicated by the user voice information.
Here, the wearable device may be any of various wearable devices, such as a smart band, a smart watch, smart glasses, or a smart headset. This embodiment takes a headset as an example. The headset is mainly a wireless headset, for example a Bluetooth headset, and the wireless connection may be a Bluetooth connection; the headset can therefore send user voice information to the terminal over the wireless connection and control the terminal to perform various operations through that information. Note that the headset sends two kinds of user voice to the terminal: user voice information for controlling the terminal, and call voice sent to the other end of a phone call. To distinguish the two, the headset collects user voice information for controlling the terminal only after it has obtained the voice wake-up instruction, and then sends that information to the terminal over the wireless connection to control the terminal's operations.
Illustratively, the wireless headset is a Bluetooth headset. As shown in fig. 1, the Bluetooth headset 11 includes a Bluetooth module 110, a power module 111, a key module 112, a microphone 113, a speaker 114, and a micro control unit (MCU) 115. The micro control unit 115 controls the overall operation of the Bluetooth headset 11, such as data communication and related operations; the power module 111 supplies power to the other modules so that the Bluetooth headset 11 can operate.
Here, the key module 112 includes a power key (also called an on/off key) and volume +/volume - keys. The user presses the power key to turn on the Bluetooth headset 11. After the headset is turned on, the Bluetooth module 110 broadcasts the identification information of the Bluetooth headset 11 so that a terminal with its Bluetooth function enabled can discover the headset. The terminal then sends a connection establishment request to the Bluetooth module 110 of the discovered headset; the Bluetooth module 110 automatically replies with a connection establishment response, and the wireless connection between the Bluetooth module 110 of the Bluetooth headset 11 and the terminal is established.
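The connection flow just described (broadcast of identification information, connection establishment request, automatic connection establishment response) can be modeled as a minimal state holder. The IDs, method names, and reply string are invented for illustration and do not reflect the actual Bluetooth protocol exchanges:

```python
class BluetoothModule:
    def __init__(self, headset_id: str):
        self.headset_id = headset_id
        self.connected_to = None

    def broadcast(self) -> str:
        # identification information broadcast after power-on
        return self.headset_id

    def on_connect_request(self, terminal_id: str) -> str:
        # automatically reply with a connection establishment response
        self.connected_to = terminal_id
        return "connection_response"

module = BluetoothModule("BT-Headset-11")
discovered = module.broadcast()              # the terminal searches and finds this ID
reply = module.on_connect_request("terminal-1")
print(module.connected_to, reply)
```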
Here, after the Bluetooth headset 11 is turned on, it may acquire a voice wake-up instruction that instructs the headset to collect the user's voice information. An independent voice wake-up key may be added to the key module 112. After the user presses the voice wake-up key, the key module 112 generates the voice wake-up instruction and sends it to the micro control unit 115. After acquiring the instruction, the micro control unit 115 executes it and controls the microphone 113 to collect the user's voice information; the microphone 113 sends the collected voice information to the micro control unit 115, and the micro control unit 115 controls the Bluetooth module 110 to send it to the terminal wirelessly connected to the Bluetooth headset 11. The user voice information is used to control the terminal to perform the operation it indicates: for example, on receiving the user voice information "call Zhang San", the terminal performs the indicated operation and dials Zhang San's number.
It should be noted that the user voice information can control the terminal to perform operations other than making calls. For example, the user voice may be "what's the weather today"; on receiving it, the terminal calls a weather application, controls it to look up today's weather, and presents the result to the user, for example by displaying it on the terminal screen or reading it out by voice. Or the user voice may be "send Wang Wu a WeChat message saying I'll arrive in a moment"; on receiving it, the terminal invokes the WeChat application and controls it to send the message "I'll arrive in a moment" to Wang Wu. The headset can thus collect user voice information to control the terminal to perform various operations, realizing various functions.
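A terminal mapping such utterances to operations could use simple pattern rules, as in the sketch below. The patterns and intent tuples are invented for the example and do not reflect how the patent's voice processing cloud actually recognizes instructions:

```python
import re

def parse_intent(utterance: str):
    # Each rule pairs a pattern with a builder producing an intent tuple.
    rules = [
        (r"call (.+)", lambda m: ("phone", "dial", m.group(1))),
        (r"what'?s the weather", lambda m: ("weather", "query", "today")),
        (r"send (.+?) a wechat message saying (.+)",
         lambda m: ("wechat", "send", m.group(1), m.group(2))),
    ]
    for pattern, build in rules:
        m = re.match(pattern, utterance, re.IGNORECASE)
        if m:
            return build(m)
    return None  # unrecognized utterance

print(parse_intent("call Zhang San"))
```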
In this embodiment, a voice wake-up instruction can be acquired, the voice wake-up instruction instructing the wearable device to collect voice for control; the instruction is executed and user voice information is collected; and the user voice information is sent to a terminal wirelessly connected to the wearable device, controlling the terminal to perform the operation it indicates. The wearable device can thus control the terminal to perform various operations through user voice information, realizing various terminal functions, increasing the types of applications the wearable device can control, and bringing the user a brand-new voice human-machine interaction experience.
In a possible implementation, the method for controlling a terminal may further include the following steps A1 and A2.
In step A1, feedback voice information returned by the terminal is received, the feedback voice information being used to report the terminal's execution of the user voice information.
In step A2, the feedback voice information is played.
Here, after receiving the user voice information, the terminal may determine the voice instruction in it and perform the operation indicated by that instruction; meanwhile, the terminal may also send feedback voice information to the wearable device to report how the user voice was executed. For example, after receiving the user voice information "call Zhang San", the terminal performs the operation of dialing Zhang San's number and at the same time feeds back the voice information "calling Zhang San, please wait" to the headset. After receiving this feedback voice, the wearable device plays it; the user thus learns from the feedback voice information how the terminal executed the user voice.
Here, as shown in fig. 1, taking the wearable device as a Bluetooth headset as an example: the headset receives the feedback voice information returned by the terminal through the Bluetooth module 110, the Bluetooth module 110 sends it to the micro control unit 115, and the micro control unit 115 controls the speaker 114 to play it.
In this embodiment, feedback voice information returned by the terminal can be received and played, the feedback voice information reporting the terminal's execution of the user voice information; the user thus clearly knows how the terminal executed the user voice and can carry out subsequent processing accordingly.
In a possible implementation, step 201 in the method for controlling a terminal may also be implemented as the following step B1.
In step B1, preset key operation information is acquired.
Here, the voice wake-up instruction may be preset key operation information, which may relate to a hard key or a virtual key. For example, an independent voice wake-up key (hard or virtual) may be set on the key module 112 of the headset, and the preset key operation information may be the user pressing that key: after the user presses the voice wake-up key, the key module 112 acquires the preset key operation information and sends it to the micro control unit 115, which then controls the other modules to start performing steps 202 and 203. Alternatively, the preset key operation information may be the user pressing the power key and the volume + key at the same time, so that the headset acquires the preset key operation information when both keys are pressed simultaneously. The keys described here may be hard keys or virtual keys; this is not limited.
In this embodiment, the voice control function can be activated by a user key operation, which is quick and convenient and has a low learning cost.
In a possible implementation, step 201 in the method for controlling a terminal may also be implemented as the following step B2.
In step B2, a preset wake-up voice message is obtained.
Here, the voice wake-up instruction may be preset wake-up voice information; for example, the voice of certain preset words, such as "control", may serve as the preset wake-up voice information. When a user wants to control the terminal by voice through a wearable device such as a headset, the user speaks the word "control"; the headset collects it through the microphone 113 and sends it to the micro control unit 115. After recognizing the wake-up voice, the micro control unit 115 controls the microphone 113 to collect the user voice information and controls the Bluetooth module 110 to send it to the terminal wirelessly connected to the headset.
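The wake-word check described here can be reduced to comparing the recognized text against a preset word list. The word list and function name are illustrative, not taken from the patent:

```python
def is_wake_voice(recognized_text: str, wake_words=("control",)) -> bool:
    # Only a preset wake-up word activates voice collection.
    return recognized_text.strip().lower() in wake_words

print(is_wake_voice("Control"))   # wake word: start collecting user voice
print(is_wake_voice("hello"))     # not a wake word: stay idle
```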
This embodiment activates the voice control function by the user speaking, without any key press, freeing both of the user's hands.
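The recognition step in the micro control unit can be reduced, at its simplest, to matching recognized text against the preset wake-up vocabulary. The function below is an illustrative stand-in; the wake-word set and normalization are assumptions, and the actual keyword recognizer is out of scope:

```python
# Illustrative wake-word check: compare recognized text against the preset
# wake-up vocabulary (e.g. the word "control" from the embodiment above).

WAKE_WORDS = {"control"}

def is_wake_word(recognized_text: str) -> bool:
    # normalize case and surrounding whitespace before matching
    return recognized_text.strip().lower() in WAKE_WORDS
```

In practice the micro control unit would run a low-power keyword-spotting model on audio frames rather than on text, but the gating decision it makes is the same membership test.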
Terminal side embodiment:
fig. 3 is a flowchart illustrating a method for controlling a terminal according to an exemplary embodiment, which is applied to a terminal; as shown in fig. 3, the method includes the following steps 301 to 303:
in step 301, user voice information sent by a wearable device wirelessly connected to the terminal is received, where the user voice information is used to control the terminal to perform an operation indicated by the user voice information.
In step 302, a voice instruction in the user voice information is determined.
In step 303, the operation indicated by the voice instruction is executed.
Here, when the wearable device controls the terminal through the user voice information, the wearable device sends the user voice information to the terminal. After obtaining it, the terminal performs voice recognition on the user voice information, recognizes the voice instruction it contains, and executes the operation indicated by the voice instruction. The operation may be executed by a system application in the terminal or by a third-party application, which is not limited herein. For example, if the user voice information received by the terminal is "send a WeChat message to Wang Wu saying I'll arrive in a moment", the terminal can recognize the voice instruction: send the WeChat message "I'll arrive in a moment" to Wang Wu. After recognizing the voice instruction, the terminal executes the indicated operation: open WeChat and send the message "I'll arrive in a moment" to the friend Wang Wu in the WeChat contact list.
According to this embodiment, after receiving the user voice information sent by the wearable device, the terminal determines the voice instruction in the user voice information and executes the operation it indicates, so that the wearable device can control the terminal to perform various operations through the user's voice. This realizes a variety of terminal functions, broadens the applications the headset can control, and brings the user a brand-new voice human-machine interaction experience.
In a possible implementation, step 303 in the method of controlling a terminal described above may be implemented as the following steps C1 to C3.
In step C1, the application type involved in the operation indicated by the voice instruction is determined, and the application type includes a system application or a third-party application.
In step C2, when the operation indicated by the voice instruction includes an operation in a system application, a system application interface is called to control the system application to execute the operation indicated by the voice instruction.
In step C3, when the operation indicated by the voice instruction includes an operation in a third-party application, a third-party application interface is called to control the third-party application to execute the operation indicated by the voice instruction.
Here, after recognizing the voice instruction, the terminal may determine the application type involved in the operation indicated by the voice instruction, where the application type is a system application or a third-party application. If the user voice information received by the terminal is "call Zhang San", the recognized voice instruction is to call Zhang San, and the dialing operation indicated by the voice instruction can be determined to involve a system application, namely the call application. If the user voice information received by the terminal is "send a WeChat message to Wang Wu saying I'll arrive in a moment", the recognized voice instruction is to send the WeChat message "I'll arrive in a moment" to the WeChat friend Wang Wu, and the sending operation indicated by the voice instruction can be determined to involve a third-party application, namely the WeChat application.
Here, when the operation indicated by the voice instruction includes an operation in a system application, the terminal may call the system application interface, send control information to the system application through that interface, and control the system application to execute the operation indicated by the voice instruction. For example, after receiving the user voice information "call Zhang San", the terminal determines that the voice instruction is to call Zhang San; the terminal may then call the call application interface to control the call application to dial Zhang San's number.
Alternatively, when the operation indicated by the voice instruction includes an operation in a third-party application, the terminal may call the third-party application interface and send control information through it to control the third-party application to execute the operation indicated by the voice instruction. For example, after receiving the user voice information "send a WeChat message to Wang Wu saying I'll arrive in a moment", the terminal determines that the voice instruction is to send Wang Wu the WeChat message "I'll arrive in a moment"; the terminal then calls the WeChat application interface to control the WeChat application to send that message to Wang Wu.
This embodiment determines the application type involved in the operation indicated by the voice instruction, where the application type is a system application or a third-party application; when the operation involves a system application, the system application interface is called to control the system application to execute the operation; when it involves a third-party application, the third-party application interface is called to control the third-party application to execute the operation. In this way the terminal can be controlled to operate across many applications, realizing a variety of terminal functions, broadening the applications the headset can control, and bringing the user a brand-new voice human-machine interaction experience.
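The routing in steps C1 to C3 can be sketched as a lookup from the instruction's application to the matching interface. The registries, interface names, and instruction shape below are assumptions for illustration; real system and third-party interfaces would be platform APIs:

```python
# Sketch of steps C1-C3: route a recognized voice instruction to a system
# application interface or a third-party application interface. The registry
# contents and returned descriptors are hypothetical.

SYSTEM_APPS = {"call": "telephony_api"}           # e.g. the call application
THIRD_PARTY_APPS = {"wechat": "wechat_open_api"}  # e.g. the WeChat application

def dispatch(instruction: dict) -> str:
    """Determine the application type, then invoke the matching interface."""
    app = instruction["app"]
    if app in SYSTEM_APPS:                        # step C2: system application
        return f"system:{SYSTEM_APPS[app]}:{instruction['action']}"
    if app in THIRD_PARTY_APPS:                   # step C3: third-party application
        return f"third_party:{THIRD_PARTY_APPS[app]}:{instruction['action']}"
    raise ValueError(f"unknown application: {app}")

print(dispatch({"app": "call", "action": "dial Zhang San"}))
print(dispatch({"app": "wechat",
                "action": "send 'I'll arrive in a moment' to Wang Wu"}))
```

The key design point is that the dispatcher only needs the application type; the concrete interface bound to each application can be registered independently.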
In a possible implementation, the method of controlling a terminal described above may also be implemented as the following steps D1 to D2.
In step D1, feedback voice information for the user voice information is obtained, where the feedback voice information is used to report the terminal's execution of the user voice information.
In step D2, the feedback voice information is returned to the wearable device.
Here, after acquiring the user voice information, the terminal determines the voice instruction in it and executes the indicated operation; meanwhile, the terminal may generate feedback voice information according to how it executed the voice instruction, and return that feedback voice information to the wearable device to report the execution. For example, after receiving the user voice information "call Zhang San" sent by the headset, the terminal may attempt to dial Zhang San's number; if during execution the terminal finds that the contact "Zhang San" does not exist in the address book, it may generate the feedback voice information "Zhang San: contact not found" and return it to the headset. After receiving the feedback voice "Zhang San: contact not found", the headset plays it. The user thus learns that the terminal's address book contains no Zhang San, perhaps because Zhang San's number was never stored or was stored under a different name, and can input new user voice accordingly.
This embodiment returns feedback voice information to the wearable device, which is used to report the terminal's execution of the user voice information; the user can thus clearly understand how the terminal handled the user voice and carry out subsequent processing.
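Generating the feedback content can be sketched as mapping an execution result to a short sentence that is later converted to speech. The function and its phrasing are illustrative assumptions, loosely mirroring the "contact not found" example above:

```python
# Sketch of steps D1-D2: turn the terminal's execution result into the
# feedback text that will become feedback voice information. The message
# templates are hypothetical.

def feedback_text(action: str, ok: bool, detail: str = "") -> str:
    if ok:
        return f"Done: {action}"
    return f"Could not complete '{action}': {detail}"

# e.g. the contact "Zhang San" is missing from the address book:
msg = feedback_text("call Zhang San", ok=False, detail="contact not found")
```

The resulting string would then be sent to the text-to-speech stage and the synthesized audio returned to the wearable device for playback.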
In a possible implementation, step 301 in the method of controlling a terminal described above may be implemented as the following steps E1 to E3.
In step E1, the voice end of the user voice information is detected, and the user voice before the voice end is determined.
In step E2, the user voice information before the voice end is sent to the voice processing cloud.
In step E3, a voice instruction in the user voice information returned by the voice processing cloud is received.
Illustratively, as shown in fig. 1, the terminal 12 includes a Bluetooth headset application module 121 and an audio playing module 122; the Bluetooth headset application module 121 includes a voice recording sub-module 1211, a voice activity detection sub-module 1212, and a voice intention distribution sub-module 1213. The speech processing cloud 13 includes an Automatic Speech Recognition (ASR) module 131, a Natural Language Processing (NLP) module 132, and a Text-To-Speech (TTS) module 133.
Here, on the terminal 12, the voice recording sub-module 1211 of the Bluetooth headset application module 121 is configured to receive the user voice information, convert it into the required format, and send it to the voice activity detection sub-module 1212. The voice activity detection sub-module 1212 may use a VAD (Voice Activity Detection) algorithm to automatically detect the end point where the user's voice ends and determine the user voice before that end point, thereby identifying and eliminating long silent periods from the user voice and saving voice channel resources without reducing service quality. After determining the user voice before the voice end, the voice activity detection sub-module 1212 sends the user voice information before the voice end to the automatic speech recognition module 131 of the voice processing cloud 13. The automatic speech recognition module 131 converts the vocabulary content of the human voice into computer-readable input, i.e. a voice text, and sends the converted text to the natural language processing module 132, which performs lexical, syntactic, and semantic analysis on it to obtain the voice intention of the user voice information, i.e. the voice instruction in the user voice. After obtaining the voice instruction, the natural language processing module 132 returns it to the voice intention distribution sub-module 1213 of the terminal 12, which calls a system application interface or a third-party APP (Application) interface according to the voice intention and controls the corresponding application to execute the operation intended by the user voice.
For example, after receiving the user voice information "call Zhang San", the terminal determines that the user intends to call Zhang San; the voice intention distribution sub-module 1213 then calls the terminal system application, i.e. the call application interface, and controls the call application to dial Zhang San. Alternatively, after the terminal receives the user voice information "send a WeChat message to Wang Wu saying I'll arrive in a moment", the terminal determines that the user intends to send Wang Wu the WeChat message "I'll arrive in a moment"; the voice intention distribution sub-module 1213 then calls the terminal third-party application, i.e. the WeChat interface, and controls the WeChat application to send that message to Wang Wu.
This embodiment detects the voice end of the user voice information, determines the user voice information before the voice end, sends it to the voice processing cloud, and obtains the voice instruction in the user voice information from the cloud. Performing voice processing in the cloud reduces the load on the terminal, and the cloud's processing is more accurate.
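The end-point detection performed by the voice activity detection sub-module 1212 can be illustrated with a minimal energy-based sketch. Thresholds, frame sizes, and the energy representation are assumptions; production VAD algorithms are considerably more robust:

```python
# Minimal energy-based VAD sketch: detect the end point of user speech so
# only the voice before that point is sent to the cloud. Each element of
# `frames` stands for one audio frame's energy (a hypothetical unit).

def find_speech_end(frames, threshold=0.1, trailing_silence=3):
    """Return the index one past the last speech frame once `trailing_silence`
    consecutive low-energy frames are seen, or None if speech never ends."""
    silent = 0
    for i, energy in enumerate(frames):
        if energy < threshold:
            silent += 1
            if silent == trailing_silence:
                return i - trailing_silence + 1  # first frame of the silence run
        else:
            silent = 0
    return None

frames = [0.5, 0.6, 0.4, 0.02, 0.01, 0.01, 0.0]
end = find_speech_end(frames)  # frames[:end] would be sent to the cloud
```

Trimming the trailing silence is what saves voice channel resources: only `frames[:end]` needs to be uploaded for recognition.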
In a possible implementation, the step D1 in the method of controlling a terminal described above may be implemented as the following steps D11 to D15.
In step D11, monitoring the execution of the operation through the interface of the application related to the operation indicated by the voice instruction;
in step D12, generating the feedback text for the user voice information according to the execution condition;
in step D13, sending feedback text for the user voice information to the voice processing cloud.
In step D14, feedback voice information in the form of a digital signal corresponding to the feedback text returned by the voice processing cloud is received.
In step D15, the feedback speech information in the form of digital signals is converted into feedback speech information in the form of analog signals.
Here, after obtaining the voice instruction, i.e. the voice intention, the voice intention distribution sub-module 1213 of the terminal calls the terminal system application interface or the third-party application interface according to the voice intention and controls the corresponding application to execute the operation indicated by the user's voice intention. Meanwhile, the voice intention distribution sub-module 1213 monitors the execution of the operation by the system application or third-party application through the corresponding interface, obtains the terminal's execution of the user voice information, and generates feedback text for the user voice information according to the execution. The voice intention distribution sub-module 1213 then sends the feedback text to the text-to-speech module 133 of the voice processing cloud 13, which converts the feedback text into feedback voice information in the form of a digital signal and returns it to the audio playing module 122 of the terminal 12. The audio playing module 122 converts the feedback voice information from digital-signal form into analog-signal form and sends it to the Bluetooth module 110 of the headset 11; the Bluetooth module 110 forwards the analog-form feedback voice information to the micro control unit 115, and the micro control unit 115 controls the speaker 114 to play it.
In this embodiment, the execution of the operation is monitored through the interface of the application involved in the operation indicated by the voice instruction; feedback text for the user voice information is generated according to the execution and sent to the voice processing cloud; feedback voice information in digital-signal form corresponding to the feedback text is received from the cloud and converted into analog-signal form. Obtaining the feedback voice from the voice processing cloud reduces the load on the terminal, and the cloud's processing is more accurate.
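The digital-to-analog step (D15) can be illustrated in software by decoding the TTS output, assumed here to be little-endian 16-bit PCM, into normalized sample values that a DAC stage would turn into the analog signal. The format choice is an assumption; the patent does not specify an encoding:

```python
# Illustrative sketch of step D15: decode feedback speech received as a
# 16-bit PCM digital signal into normalized float samples, the software
# analogue of what the audio playing module 122 hands toward the DAC.

import struct

def pcm16_to_floats(pcm: bytes):
    """Decode little-endian signed 16-bit PCM into floats in [-1.0, 1.0)."""
    n = len(pcm) // 2
    ints = struct.unpack("<%dh" % n, pcm[: 2 * n])
    return [s / 32768.0 for s in ints]

samples = pcm16_to_floats(struct.pack("<3h", 0, 16384, -32768))
```

On a real device the conversion to an analog waveform is done by DAC hardware; the decode step above is only the preceding software stage.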
The implementation is described in detail below by way of several embodiments.
Fig. 4 is a flowchart illustrating a method for controlling a terminal according to an exemplary embodiment; as shown in fig. 4, the method may be implemented by a wearable device, a terminal, and a voice processing cloud, and includes steps 401 to 415.
In step 401, the wearable device acquires a voice wake-up instruction.
The voice wake-up instruction is used for instructing the wearable device to collect user voice information. The wearable device acquires the voice wake-up instruction by acquiring preset key operation information or by acquiring preset wake-up voice information.
In step 402, the wearable device executes the voice wake-up instruction and collects voice information of the user.
In step 403, the wearable device sends the user voice information to a terminal wirelessly connected to the wearable device, and the terminal receives the user voice information sent by the wearable device wirelessly connected to the terminal, where the user voice information is used to control the terminal to execute an operation indicated by the user voice information.
In step 404, the terminal detects a voice end of the user voice information, and determines the user voice information before the voice end.
In step 405, the terminal sends the user voice information before the voice end to the voice processing cloud.
In step 406, the voice processing cloud determines the voice command in the user voice message.
In step 407, the voice processing cloud sends the voice instruction in the user voice information to the terminal, and the terminal receives the voice instruction in the user voice information returned by the voice processing cloud.
In step 408, the terminal determines the application type related to the operation indicated by the voice instruction, and when the operation indicated by the voice instruction includes an operation in a system application, calls a system application interface to control the system application to execute the operation indicated by the voice instruction; and when the operation indicated by the voice instruction comprises the operation in the third-party application, calling a third-party application interface to control the third-party application to execute the operation indicated by the voice instruction.
In step 409, the terminal monitors the execution of the operation through the interface of the application related to the operation indicated by the voice instruction.
In step 410, the terminal generates the feedback text for the user voice information according to the execution condition.
In step 411, the terminal sends feedback text for the user voice information to the voice processing cloud.
In step 412, the voice processing cloud converts the feedback text into feedback voice information in digital-signal form and sends it to the terminal, and the terminal receives the feedback voice information in digital-signal form corresponding to the feedback text returned by the voice processing cloud.
In step 413, the terminal converts the feedback voice information in the form of the digital signal into feedback voice information in the form of an analog signal.
In step 414, the terminal returns the feedback voice information to the wearable device, and the wearable device receives the feedback voice information returned by the terminal.
The feedback voice information is used to report the terminal's execution of the user voice information.
In step 415, the wearable device plays the feedback voice.
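The whole flow of steps 401 to 415 can be condensed into a toy walk-through. Every component below is a stand-in: the "cloud" function fakes ASR and NLP with string matching, and no real recognition or synthesis is involved:

```python
# Toy end-to-end walk-through of steps 401-415: the headset captures "call
# Zhang San", the terminal forwards it to a mocked cloud, dispatches the
# instruction, and produces the feedback voice text played back to the user.

def cloud_recognize(audio_text: str) -> dict:
    # stands in for the ASR + NLP modules of the voice processing cloud
    if audio_text.startswith("call "):
        return {"app": "call", "action": audio_text[5:]}
    return {"app": "unknown", "action": audio_text}

def terminal_handle(audio_text: str, contacts: set) -> str:
    instr = cloud_recognize(audio_text)          # steps 405-407
    if instr["app"] == "call":                   # step 408: system application
        if instr["action"] in contacts:
            return f"Calling {instr['action']}"
        return f"{instr['action']}: contact not found"
    return "Sorry, I did not understand"         # fallback feedback

# the terminal's address book lacks Zhang San, so feedback reports the failure
feedback = terminal_handle("call Zhang San", contacts={"Wang Wu"})
```

The string `feedback` corresponds to the feedback text of step 410, which the cloud's TTS module would convert to speech for the headset to play in step 415.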
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods.
Fig. 5 is a block diagram illustrating an apparatus applied to a control terminal of a wearable device, which may be implemented as part or all of the wearable device through software, hardware, or a combination of the two, according to an example embodiment. As shown in fig. 5, the apparatus for controlling a terminal includes: a first obtaining module 501, a collecting module 502 and a sending module 503; wherein:
a first obtaining module 501, configured to obtain a voice wake-up instruction, where the voice wake-up instruction is used to instruct the wearable device to collect user voice information;
an acquisition module 502, configured to execute the voice wake-up instruction and acquire user voice information;
a sending module 503, configured to send the user voice information to a terminal wirelessly connected to the wearable device, where the user voice information is used to control the terminal to execute an operation indicated by the user voice information.
In a possible embodiment, fig. 6 is a block diagram illustrating an apparatus applied to a control terminal of a wearable device according to an exemplary embodiment, and as shown in fig. 6, the apparatus of the control terminal may be further configured to include a first receiving module 504 and a playing module 505, where:
a first receiving module 504, configured to receive feedback voice information returned by the terminal, where the feedback voice information is used to notify the terminal of an execution condition of the user voice information;
and a playing module 505, configured to play the feedback voice information.
In one possible embodiment, fig. 7 is a block diagram illustrating an apparatus applied to a control terminal of a wearable device according to an exemplary embodiment, and as shown in fig. 7, a first obtaining module 501 in the apparatus of the control terminal may be configured to include a first obtaining sub-module 5011, where:
the first obtaining submodule 5011 is configured to obtain preset key operation information;
in one possible embodiment, fig. 8 is a block diagram illustrating an apparatus applied to a control terminal of a wearable device according to an exemplary embodiment, and as shown in fig. 8, a first obtaining module 501 in the apparatus of the control terminal may be configured to include a second obtaining sub-module 5012, where:
the second obtaining submodule 5012 is configured to obtain preset wake-up voice information.
Fig. 9 is a block diagram illustrating an apparatus for controlling a terminal, which is applied to a terminal and can be implemented as part or all of the terminal through software, hardware, or a combination of both, according to an exemplary embodiment. As shown in fig. 9, the apparatus for controlling a terminal includes: a second receiving module 901, a determining module 902 and an executing module 903; wherein:
a second receiving module 901, configured to receive user voice information sent by a wearable device wirelessly connected to the terminal, where the user voice information is used to control the terminal to execute an operation indicated by the user voice information;
a determining module 902, configured to determine a voice instruction in the user voice information;
and the execution module 903 is used for executing the operation indicated by the voice instruction.
In a possible embodiment, fig. 10 is a block diagram illustrating an apparatus for controlling a terminal applied to a terminal according to an exemplary embodiment, and as shown in fig. 10, the apparatus for controlling a terminal may further configure the executing module 903 to include a determining submodule 9031, a first controlling submodule 9032, and a second controlling submodule 9033, wherein:
the determining submodule 9031 is configured to determine an application type involved in the operation indicated by the voice instruction, where the application type includes a system application or a third-party application;
the first control sub-module 9032 is configured to, when the operation indicated by the voice instruction includes an operation in a system application, call a system application interface, and control the system application to execute the operation indicated by the voice instruction;
and the second control sub-module 9033 is configured to, when the operation indicated by the voice instruction includes an operation in a third-party application, call a third-party application interface to control the third-party application to execute the operation indicated by the voice instruction.
In a possible embodiment, fig. 11 is a block diagram illustrating an apparatus for controlling a terminal applied to a terminal according to an exemplary embodiment, and as shown in fig. 11, the apparatus for controlling a terminal may be further configured to include a second obtaining module 904 and a returning module 905, where:
a second obtaining module 904, configured to obtain feedback voice information for the user voice information, where the feedback voice information is used to notify the terminal of an execution condition of the user voice information;
a returning module 905, configured to return the feedback voice information to the wearable device.
In one possible embodiment, fig. 12 is a block diagram illustrating an apparatus applied to a control terminal of a terminal according to an exemplary embodiment, and as shown in fig. 12, a determining module 902 in the apparatus of the control terminal may be configured to include a detecting submodule 9021, a first sending submodule 9022 and a first receiving submodule 9023, wherein:
the detection submodule 9021 is configured to detect a voice end of the user voice information, and determine user voice information before the voice end;
the first sending submodule 9022 is configured to send the user voice information before the voice end to the voice processing cloud;
the first receiving submodule 9023 is configured to receive a voice instruction in the user voice information returned by the voice processing cloud.
In one possible embodiment, fig. 13 is a block diagram illustrating an apparatus applied to a control terminal of a terminal according to an exemplary embodiment, and as shown in fig. 13, the second obtaining module 904 in the apparatus of the control terminal may be configured to include a monitoring submodule 9041, a generating submodule 9042, a second sending submodule 9043, a second receiving submodule 9044, and a converting submodule 9045, where:
the monitoring submodule 9041 is configured to monitor an execution situation of the operation through an interface of an application related to the operation indicated by the voice instruction;
a generating submodule 9042, configured to generate the feedback text for the user voice information according to the execution condition;
the second sending submodule 9043 is configured to send the feedback text for the user voice information to the voice processing cloud;
the second receiving submodule 9044 is configured to receive feedback voice information in the form of a digital signal corresponding to the feedback text, which is returned by the voice processing cloud;
and the conversion sub-module 9045 is configured to convert the feedback voice information in the digital signal form into feedback voice information in an analog signal form.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 14 is a block diagram illustrating an apparatus for controlling a terminal, which is suitable for a wearable device or the like, according to an exemplary embodiment. The apparatus 1400 includes a processing component 1411 that further includes one or more processors, and memory resources, represented by the memory 1412, for storing instructions, such as applications, that are executable by the processing component 1411. The application programs stored in the memory 1412 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1411 is configured to execute instructions to perform the above-described method.
The apparatus 1400 may also include a power component 1413 configured to perform power management for the apparatus 1400, a communication interface 1414 configured to connect the apparatus 1400 to other devices such as terminals, and an input/output (I/O) interface 1415. The input/output (I/O) interface 1415 may provide an interface between the processing component 1411 and peripheral interface modules, such as a key module. The keys may include, but are not limited to: a power button, a volume button, and the like. The apparatus 1400 may operate based on an operating system stored in the memory 1412, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
The apparatus 1400 may also include an audio component 1416, the audio component 1416 configured to output and/or input audio signals. For example, the audio component 1416 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 1400 is in an operating mode, such as a voice capture mode. The received audio signals may further be transmitted via communications interface 1414. In some embodiments, the audio component 1416 also includes a speaker for outputting audio signals.
The present embodiment provides a computer readable storage medium, wherein the instructions of the storage medium, when executed by the processor of the apparatus 1400, implement the following steps:
acquiring a voice awakening instruction, wherein the voice awakening instruction is used for indicating the wearable equipment to acquire user voice information;
executing the voice awakening instruction and collecting voice information of a user;
and sending the user voice information to a terminal in wireless connection with the wearable device, wherein the user voice information is used for controlling the terminal to execute the operation indicated by the user voice information.
The instructions in the storage medium when executed by the processor may further implement the steps of:
the method further comprises the following steps:
receiving feedback voice information returned by the terminal, wherein the feedback voice information is used for informing the terminal of the execution condition of the user voice information;
and playing the feedback voice information.
The instructions in the storage medium when executed by the processor may further implement the steps of:
the acquiring of the voice wake-up instruction comprises:
acquiring preset key operation information;
or,
and acquiring preset awakening voice information.
The present disclosure also provides a device for controlling a terminal, which is applied to a wearable device, and includes:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
acquiring a voice awakening instruction, wherein the voice awakening instruction is used for indicating the wearable equipment to acquire user voice information;
executing the voice awakening instruction and collecting voice information of a user;
and sending the user voice information to a terminal in wireless connection with the wearable device, wherein the user voice information is used for controlling the terminal to execute the operation indicated by the user voice information.
The processor may be further configured to perform:
receiving feedback voice information returned by the terminal, wherein the feedback voice information is used for indicating the terminal's execution status of the user voice information;
and playing the feedback voice information.
The processor may be further configured such that the acquiring of the voice wake-up instruction comprises:
acquiring preset key operation information;
or acquiring preset wake-up voice information.
Fig. 15 is a block diagram illustrating an apparatus 1500 for controlling a terminal according to an exemplary embodiment, where the apparatus 1500 may be provided as a terminal. For example, the apparatus 1500 may be a mobile phone, a game console, a computer, a tablet device, or a personal digital assistant.
The apparatus 1500 may include one or more of the following components: a processing component 1501, a memory 1502, a power component 1503, a multimedia component 1504, an audio component 1505, an input/output (I/O) interface 1506, a sensor component 1507, and a communication component 1508.
The processing component 1501 generally controls the overall operation of the device 1500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 1501 may include one or more processors 1520 executing instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 1501 can include one or more modules that facilitate interaction between the processing component 1501 and other components. For example, the processing component 1501 may include a multimedia module to facilitate interaction between the multimedia component 1504 and the processing component 1501.
The memory 1502 is configured to store various types of data to support operations at the apparatus 1500. Examples of such data include instructions for any application or method operating on the device 1500, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1502 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power component 1503 provides power to the various components of the device 1500. The power component 1503 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 1500.
The multimedia component 1504 includes a screen that provides an output interface between the device 1500 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1504 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the apparatus 1500 is in an operation mode, such as a shooting mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focus and optical zoom capability.
Audio component 1505 is configured to output and/or input audio signals. For example, the audio component 1505 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 1500 is in operating modes, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in memory 1502 or transmitted via communications component 1508. In some embodiments, audio component 1505 also includes a speaker for outputting audio signals.
The I/O interface 1506 provides an interface between the processing component 1501 and peripheral interface modules, such as keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 1507 includes one or more sensors for providing various aspects of status assessment for the device 1500. For example, the sensor assembly 1507 may detect the open/closed state of the apparatus 1500 and the relative positioning of components, such as the display and keypad of the apparatus 1500. The sensor assembly 1507 may also detect a change in the position of the apparatus 1500 or of a component of the apparatus 1500, the presence or absence of user contact with the apparatus 1500, the orientation or acceleration/deceleration of the apparatus 1500, and a change in the temperature of the apparatus 1500. The sensor assembly 1507 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 1507 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1507 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 1508 is configured to facilitate wired or wireless communication between the apparatus 1500 and other devices. The apparatus 1500 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1508 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 1508 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 1500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer readable storage medium comprising instructions, such as the memory 1502 comprising instructions, executable by the processor 1520 of the apparatus 1500 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
The present embodiment provides a computer-readable storage medium. When executed by the processor of the apparatus 1500, the instructions in the storage medium implement the following steps:
receiving user voice information sent by a wearable device in wireless connection with the terminal, wherein the user voice information is used for controlling the terminal to execute the operation indicated by the user voice information;
determining a voice instruction in the user voice information;
and executing the operation indicated by the voice instruction.
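The terminal-side pipeline above can be sketched in three steps. The `recognize` and `execute` callables here are hypothetical placeholders (an assumption, the disclosure delegates recognition to a voice processing cloud and execution to application interfaces, neither of which is specified as code):

```python
# Sketch of the terminal-side pipeline: receive the user voice information
# from the wearable, determine the voice instruction in it, then execute
# the indicated operation. Both stages are stubbed.

def handle_user_voice(audio, recognize, execute):
    instruction = recognize(audio)   # determine the voice instruction
    return execute(instruction)      # execute the indicated operation

result = handle_user_voice(
    b"pcm-frames",
    recognize=lambda audio: "open_camera",
    execute=lambda instruction: ("done", instruction),
)
```

Keeping recognition and execution as injected callables mirrors the document's split between the cloud (recognition) and the local application interfaces (execution).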
When executed by the processor, the instructions in the storage medium may further implement the following steps:
the executing of the operation indicated by the voice instruction comprises:
determining an application type related to the operation indicated by the voice instruction, wherein the application type comprises a system application or a third-party application;
when the operation indicated by the voice instruction comprises an operation in a system application, calling a system application interface and controlling the system application to execute the operation indicated by the voice instruction;
and when the operation indicated by the voice instruction comprises an operation in a third-party application, calling a third-party application interface and controlling the third-party application to execute the operation indicated by the voice instruction.
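The application-type dispatch just described can be sketched as a simple router. The application names and the two interface callables below are illustrative assumptions, not names from the disclosure:

```python
# Sketch of routing an operation to either the system application
# interface or the third-party application interface, per the text above.

SYSTEM_APPS = {"dialer", "alarm"}        # assumed examples of system applications
THIRD_PARTY_APPS = {"music_player"}      # assumed example of a third-party application

def execute_operation(app, operation, system_interface, third_party_interface):
    if app in SYSTEM_APPS:
        return system_interface(app, operation)       # system application interface
    if app in THIRD_PARTY_APPS:
        return third_party_interface(app, operation)  # third-party application interface
    raise ValueError(f"no interface for application: {app!r}")

calls = []
execute_operation("dialer", "dial",
                  lambda a, op: calls.append(("system", a, op)),
                  lambda a, op: calls.append(("third_party", a, op)))
execute_operation("music_player", "play",
                  lambda a, op: calls.append(("system", a, op)),
                  lambda a, op: calls.append(("third_party", a, op)))
```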
When executed by the processor, the instructions in the storage medium may further implement the following steps:
acquiring feedback voice information for the user voice information, wherein the feedback voice information is used for indicating the terminal's execution status of the user voice information;
and returning the feedback voice information to the wearable device.
When executed by the processor, the instructions in the storage medium may further implement the following steps:
the determining of the voice instruction in the user voice information comprises:
detecting a voice end point of the user voice information, and determining the user voice information before the voice end point;
sending the user voice information before the voice end point to a voice processing cloud;
and receiving, from the voice processing cloud, the voice instruction in the user voice information.
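The end-point detection step above can be sketched with a simple silence heuristic. The disclosure does not fix a detection method, so the energy-threshold rule below is purely an assumption, and the cloud recognizer is a stub:

```python
# Sketch: detect the voice end point (a run of low-energy frames), keep
# only the audio before it, and send that segment to a speech cloud.

def find_speech_end(frames, silence_threshold=0.1, min_silence_frames=3):
    """Return the index just after the last voiced frame, treating a run
    of min_silence_frames low-energy frames as the end of speech."""
    silent = 0
    for i, energy in enumerate(frames):
        silent = silent + 1 if energy < silence_threshold else 0
        if silent >= min_silence_frames:
            return i - min_silence_frames + 1
    return len(frames)

def recognize(frames, cloud_asr):
    end = find_speech_end(frames)
    return cloud_asr(frames[:end])   # the cloud returns the voice instruction

frames = [0.8, 0.9, 0.7, 0.05, 0.02, 0.01, 0.0]
end = find_speech_end(frames)
```

Trimming at the end point means only the spoken segment, not trailing silence, is uploaded, which keeps the cloud round-trip short.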
When executed by the processor, the instructions in the storage medium may further implement the following steps:
the acquiring of the feedback voice information for the user voice information comprises:
monitoring the execution status of the operation through an interface of the application related to the operation indicated by the voice instruction;
generating feedback text for the user voice information according to the execution status;
sending the feedback text for the user voice information to the voice processing cloud;
receiving, from the voice processing cloud, feedback voice information in digital signal form corresponding to the feedback text;
and converting the feedback voice information in digital signal form into feedback voice information in analog signal form.
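The feedback path above (execution status → feedback text → cloud speech synthesis → digital-to-analog conversion) can be sketched as follows. The cloud TTS and the digital-to-analog step are stubs whose names and behavior are assumptions, not details from the disclosure:

```python
# Sketch of the feedback path: build feedback text from the execution
# status, have the cloud synthesize it into a digital voice signal, then
# convert that signal to an analog-like form for the wearable's speaker.

def make_feedback_text(instruction, succeeded):
    return f"Operation '{instruction}' {'completed' if succeeded else 'failed'}"

def tts_cloud(text):
    # Stub: a real voice processing cloud would return synthesized PCM.
    return text.encode("utf-8")

def digital_to_analog(pcm):
    # Toy DAC: map each byte to a normalized amplitude in [0.0, 1.0].
    return [b / 255 for b in pcm]

text = make_feedback_text("play music", succeeded=True)
analog = digital_to_analog(tts_cloud(text))
```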
The present disclosure also provides a device for controlling a terminal, which is applied to a terminal, and includes:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
receiving user voice information sent by a wearable device in wireless connection with the terminal, wherein the user voice information is used for controlling the terminal to execute the operation indicated by the user voice information;
determining a voice instruction in the user voice information;
and executing the operation indicated by the voice instruction.
The processor may be further configured such that the executing of the operation indicated by the voice instruction comprises:
determining an application type related to the operation indicated by the voice instruction, wherein the application type comprises a system application or a third-party application;
when the operation indicated by the voice instruction comprises an operation in a system application, calling a system application interface and controlling the system application to execute the operation indicated by the voice instruction;
and when the operation indicated by the voice instruction comprises an operation in a third-party application, calling a third-party application interface and controlling the third-party application to execute the operation indicated by the voice instruction.
The processor may be further configured to perform:
acquiring feedback voice information for the user voice information, wherein the feedback voice information is used for indicating the terminal's execution status of the user voice information;
and returning the feedback voice information to the wearable device.
The processor may be further configured such that the determining of the voice instruction in the user voice information comprises:
detecting a voice end point of the user voice information, and determining the user voice information before the voice end point;
sending the user voice information before the voice end point to a voice processing cloud;
and receiving, from the voice processing cloud, the voice instruction in the user voice information.
The processor may be further configured such that the acquiring of the feedback voice information for the user voice information comprises:
monitoring the execution status of the operation through an interface of the application related to the operation indicated by the voice instruction;
generating feedback text for the user voice information according to the execution status;
sending the feedback text for the user voice information to the voice processing cloud;
receiving, from the voice processing cloud, feedback voice information in digital signal form corresponding to the feedback text;
and converting the feedback voice information in digital signal form into feedback voice information in analog signal form.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (20)

1. A method for controlling a terminal, applied to a wearable device, the method comprising:
acquiring a voice wake-up instruction, wherein the voice wake-up instruction instructs the wearable device to collect user voice information;
executing the voice wake-up instruction and collecting the user voice information;
and sending the user voice information to a terminal in wireless connection with the wearable device, wherein the user voice information is used for controlling the terminal to execute the operation indicated by the user voice information.
2. The method of claim 1, further comprising:
receiving feedback voice information returned by the terminal, wherein the feedback voice information is used for indicating the terminal's execution status of the user voice information;
and playing the feedback voice information.
3. The method of claim 1, wherein the acquiring of the voice wake-up instruction comprises:
acquiring preset key operation information;
or,
acquiring preset wake-up voice information.
4. A method for controlling a terminal, the method being applied to the terminal and comprising:
receiving user voice information sent by a wearable device in wireless connection with the terminal, wherein the user voice information is used for controlling the terminal to execute the operation indicated by the user voice information;
determining a voice instruction in the user voice information;
and executing the operation indicated by the voice instruction.
5. The method of claim 4, wherein the executing of the operation indicated by the voice instruction comprises:
determining an application type related to the operation indicated by the voice instruction, wherein the application type comprises a system application or a third-party application;
when the operation indicated by the voice instruction comprises an operation in a system application, calling a system application interface and controlling the system application to execute the operation indicated by the voice instruction;
and when the operation indicated by the voice instruction comprises an operation in a third-party application, calling a third-party application interface and controlling the third-party application to execute the operation indicated by the voice instruction.
6. The method of claim 4, further comprising:
acquiring feedback voice information for the user voice information, wherein the feedback voice information is used for indicating the terminal's execution status of the user voice information;
returning the feedback voice information to the wearable device.
7. The method of claim 4, wherein the determining of the voice instruction in the user voice information comprises:
detecting a voice end point of the user voice information, and determining the user voice information before the voice end point;
sending the user voice information before the voice end point to a voice processing cloud;
and receiving, from the voice processing cloud, the voice instruction in the user voice information.
8. The method of claim 6, wherein the acquiring of the feedback voice information for the user voice information comprises:
monitoring the execution status of the operation through an interface of the application related to the operation indicated by the voice instruction;
generating feedback text for the user voice information according to the execution status;
sending the feedback text for the user voice information to the voice processing cloud;
receiving, from the voice processing cloud, feedback voice information in digital signal form corresponding to the feedback text;
and converting the feedback voice information in digital signal form into feedback voice information in analog signal form.
9. An apparatus for controlling a terminal, applied to a wearable device, the apparatus comprising:
a first acquisition module, configured to acquire a voice wake-up instruction, wherein the voice wake-up instruction instructs the wearable device to collect user voice information;
a collection module, configured to execute the voice wake-up instruction and collect the user voice information;
and a sending module, configured to send the user voice information to a terminal in wireless connection with the wearable device, wherein the user voice information is used for controlling the terminal to execute the operation indicated by the user voice information.
10. The apparatus of claim 9, further comprising:
the first receiving module is used for receiving feedback voice information returned by the terminal, wherein the feedback voice information is used for informing the terminal of the execution condition of the user voice information;
and the playing module is used for playing the feedback voice information.
11. The apparatus of claim 9, wherein the first acquisition module comprises:
a first acquisition submodule, configured to acquire preset key operation information;
or,
a second acquisition submodule, configured to acquire preset wake-up voice information.
12. An apparatus for controlling a terminal, the apparatus being applied to the terminal, the apparatus comprising:
the second receiving module is used for receiving user voice information sent by wearable equipment in wireless connection with the terminal, and the user voice information is used for controlling the terminal to execute the operation indicated by the user voice information;
the determining module is used for determining a voice instruction in the user voice information;
and the execution module is used for executing the operation indicated by the voice instruction.
13. The apparatus of claim 12, wherein the execution module comprises:
a determining submodule, configured to determine an application type related to the operation indicated by the voice instruction, wherein the application type comprises a system application or a third-party application;
a first control submodule, configured to, when the operation indicated by the voice instruction comprises an operation in a system application, call a system application interface and control the system application to execute the operation indicated by the voice instruction;
and a second control submodule, configured to, when the operation indicated by the voice instruction comprises an operation in a third-party application, call a third-party application interface and control the third-party application to execute the operation indicated by the voice instruction.
14. The apparatus of claim 12, further comprising:
the second acquisition module is used for acquiring feedback voice information aiming at the user voice information, wherein the feedback voice information is used for informing the terminal of the execution condition of the user voice information;
a return module for returning the feedback voice information to the wearable device.
15. The apparatus of claim 12, wherein the determining module comprises:
the detection submodule is used for detecting a voice ending end of the user voice information and determining the user voice information before the voice ending end;
the first sending submodule is used for sending the user voice information before the voice end terminal to a voice processing cloud terminal;
and the first receiving submodule is used for receiving a voice instruction in the user voice information returned by the voice processing cloud.
16. The apparatus of claim 14, wherein the second acquisition module comprises:
a monitoring submodule, configured to monitor the execution status of the operation through an interface of the application related to the operation indicated by the voice instruction;
a generating submodule, configured to generate feedback text for the user voice information according to the execution status;
a second sending submodule, configured to send the feedback text for the user voice information to the voice processing cloud;
a second receiving submodule, configured to receive, from the voice processing cloud, feedback voice information in digital signal form corresponding to the feedback text;
and a conversion submodule, configured to convert the feedback voice information in digital signal form into feedback voice information in analog signal form.
17. An apparatus for controlling a terminal, applied to a wearable device, the apparatus comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform:
acquiring a voice wake-up instruction, wherein the voice wake-up instruction instructs the wearable device to collect user voice information;
executing the voice wake-up instruction and collecting the user voice information;
and sending the user voice information to a terminal in wireless connection with the wearable device, wherein the user voice information is used for controlling the terminal to execute the operation indicated by the user voice information.
18. An apparatus for controlling a terminal, applied to the terminal, the apparatus comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to perform:
receiving user voice information sent by a wearable device in wireless connection with the terminal, wherein the user voice information is used for controlling the terminal to execute the operation indicated by the user voice information;
determining a voice instruction in the user voice information;
and executing the operation indicated by the voice instruction.
19. A computer-readable storage medium storing computer instructions, for use in a wearable device, wherein the computer instructions, when executed by a processor, implement the steps of the method of any one of claims 1 to 3.
20. A computer-readable storage medium storing computer instructions, for use in a terminal, wherein the computer instructions, when executed by a processor, implement the steps of the method of any one of claims 4 to 8.
CN201711130491.1A 2017-11-15 2017-11-15 The method and device of control terminal Pending CN107978316A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711130491.1A CN107978316A (en) 2017-11-15 2017-11-15 The method and device of control terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711130491.1A CN107978316A (en) 2017-11-15 2017-11-15 The method and device of control terminal

Publications (1)

Publication Number Publication Date
CN107978316A true CN107978316A (en) 2018-05-01

Family

ID=62013601

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711130491.1A Pending CN107978316A (en) 2017-11-15 2017-11-15 The method and device of control terminal

Country Status (1)

Country Link
CN (1) CN107978316A (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108922537A (en) * 2018-05-28 2018-11-30 Oppo广东移动通信有限公司 Audio identification methods, device, terminal, earphone and readable storage medium storing program for executing
CN109065050A (en) * 2018-09-28 2018-12-21 上海与德科技有限公司 A kind of sound control method, device, equipment and storage medium
CN109192207A (en) * 2018-09-17 2019-01-11 顺丰科技有限公司 Voice communication assembly, voice communication method and system, equipment, storage medium
CN109274723A (en) * 2018-08-30 2019-01-25 出门问问信息科技有限公司 A kind of information-pushing method and device based on earphone
CN109413268A (en) * 2018-10-10 2019-03-01 深圳市领芯者科技有限公司 A kind of assisting navigation software plays the methods, devices and systems of voice
CN109448709A (en) * 2018-10-16 2019-03-08 华为技术有限公司 A kind of terminal throws the control method and terminal of screen
CN109637542A (en) * 2018-12-25 2019-04-16 圆通速递有限公司 A kind of outer paging system of voice
CN109767764A (en) * 2018-12-29 2019-05-17 浙江比逊河鞋业有限公司 A kind of intelligent children's footwear and its control method based on voice control
CN109783733A (en) * 2019-01-15 2019-05-21 三角兽(北京)科技有限公司 User's portrait generating means and method, information processing unit and storage medium
CN109862178A (en) * 2019-01-17 2019-06-07 珠海市黑鲸软件有限公司 A kind of wearable device and its voice control communication method
CN109859762A (en) * 2019-01-02 2019-06-07 百度在线网络技术(北京)有限公司 Voice interactive method, device and storage medium
CN111010482A (en) * 2019-12-13 2020-04-14 上海传英信息技术有限公司 Voice retrieval method, wireless device and computer readable storage medium
CN111048066A (en) * 2019-11-18 2020-04-21 云知声智能科技股份有限公司 Voice endpoint detection system assisted by images on child robot
CN111667827A (en) * 2020-05-28 2020-09-15 北京小米松果电子有限公司 Voice control method and device of application program and storage medium
CN112969116A (en) * 2021-02-01 2021-06-15 深圳市美恩微电子有限公司 Interactive control system of wireless earphone and intelligent terminal
CN113409788A (en) * 2021-07-15 2021-09-17 深圳市同行者科技有限公司 Voice wake-up method, system, device and storage medium
WO2021232913A1 (en) * 2020-05-18 2021-11-25 Oppo广东移动通信有限公司 Voice information processing method and apparatus, and storage medium and electronic device

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108922537B (en) * 2018-05-28 2021-05-18 Oppo广东移动通信有限公司 Audio recognition method, device, terminal, earphone and readable storage medium
CN108922537A (en) * 2018-05-28 2018-11-30 Oppo广东移动通信有限公司 Audio identification methods, device, terminal, earphone and readable storage medium storing program for executing
CN109274723A (en) * 2018-08-30 2019-01-25 出门问问信息科技有限公司 A kind of information-pushing method and device based on earphone
CN109274723B (en) * 2018-08-30 2021-09-14 出门问问信息科技有限公司 Information pushing method and device based on earphone
CN109192207A (en) * 2018-09-17 2019-01-11 顺丰科技有限公司 Voice communication assembly, voice communication method and system, equipment, storage medium
CN109065050A (en) * 2018-09-28 2018-12-21 上海与德科技有限公司 A kind of sound control method, device, equipment and storage medium
CN109413268A (en) * 2018-10-10 2019-03-01 深圳市领芯者科技有限公司 A kind of assisting navigation software plays the methods, devices and systems of voice
CN109448709A (en) * 2018-10-16 2019-03-08 华为技术有限公司 A kind of terminal throws the control method and terminal of screen
CN109637542A (en) * 2018-12-25 2019-04-16 圆通速递有限公司 A kind of outer paging system of voice
CN109767764A (en) * 2018-12-29 2019-05-17 浙江比逊河鞋业有限公司 A kind of intelligent children's footwear and its control method based on voice control
CN109859762A (en) * 2019-01-02 2019-06-07 百度在线网络技术(北京)有限公司 Voice interactive method, device and storage medium
CN109783733B (en) * 2019-01-15 2020-11-06 腾讯科技(深圳)有限公司 User image generation device and method, information processing device, and storage medium
CN109783733A (en) * 2019-01-15 2019-05-21 三角兽(北京)科技有限公司 User's portrait generating means and method, information processing unit and storage medium
CN109862178A (en) * 2019-01-17 2019-06-07 珠海市黑鲸软件有限公司 A kind of wearable device and its voice control communication method
CN111048066A (en) * 2019-11-18 2020-04-21 云知声智能科技股份有限公司 Voice endpoint detection system assisted by images on child robot
CN111010482A (en) * 2019-12-13 2020-04-14 上海传英信息技术有限公司 Voice retrieval method, wireless device and computer readable storage medium
WO2021232913A1 (en) * 2020-05-18 2021-11-25 Oppo广东移动通信有限公司 Voice information processing method and apparatus, and storage medium and electronic device
US12001758B2 (en) 2020-05-18 2024-06-04 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Voice information processing method and electronic device
CN111667827A (en) * 2020-05-28 2020-09-15 北京小米松果电子有限公司 Voice control method and device of application program and storage medium
CN111667827B (en) * 2020-05-28 2023-10-17 北京小米松果电子有限公司 Voice control method and device for application program and storage medium
CN112969116A (en) * 2021-02-01 2021-06-15 深圳市美恩微电子有限公司 Interactive control system of wireless earphone and intelligent terminal
CN113409788A (en) * 2021-07-15 2021-09-17 深圳市同行者科技有限公司 Voice wake-up method, system, device and storage medium

Similar Documents

Publication Publication Date Title
CN107978316A (en) The method and device of control terminal
KR101571993B1 (en) Method for voice calling method for voice playing, devices, program and storage medium thereof
CN112037787B (en) Wake-up control method, device and computer readable storage medium
CN110610699B (en) Voice signal processing method, device, terminal, server and storage medium
CN111696553B (en) Voice processing method, device and readable medium
CN111063354B (en) Man-machine interaction method and device
CN104836897A (en) Method and device for controlling terminal communication through wearable device
EP3779968A1 (en) Audio processing
CN106888327B (en) Voice playing method and device
CN104394265A (en) Automatic session method and device based on mobile intelligent terminal
CN109087650B (en) Voice wake-up method and device
CN105407433A (en) Method and device for controlling sound output equipment
CN111009239A (en) Echo cancellation method, echo cancellation device and electronic equipment
CN109151619A (en) Data communications method and device
CN111968680B (en) Voice processing method, device and storage medium
CN111540350B (en) Control method, device and storage medium of intelligent voice control equipment
CN108766427B (en) Voice control method and device
CN110223500A (en) Infrared equipment control method and device
CN105338163B (en) Method and device for realizing communication and multi-card mobile phone
CN106603882A (en) Incoming call sound volume adjusting method, incoming call sound volume adjusting device and terminal
US20160142885A1 (en) Voice call prompting method and device
CN113726952B (en) Simultaneous interpretation method and device in call process, electronic equipment and storage medium
CN112863511B (en) Signal processing method, device and storage medium
CN111694539B (en) Method, device and medium for switching between earphone and loudspeaker
CN117751585A (en) Control method and device of intelligent earphone, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20180501