CN118248145A - Smart home system voice control method, device, equipment and smart home system - Google Patents
- Publication number: CN118248145A
- Application number: CN202410539822.0A
- Authority: CN
- Country: China
- Prior art keywords: equipment, controlled, control, language model, user
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Classifications
- G10L15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L15/16: Speech classification or search using artificial neural networks
- G10L15/183: Speech classification or search using natural language modelling, using context dependencies, e.g. language models
- G10L15/26: Speech to text systems
- H04L12/2816: Controlling appliance services of a home automation network by calling their functionalities
- Y02P90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Abstract
The application discloses a voice control method, device, and equipment for a smart home system, as well as the smart home system itself, belonging to the field of smart home. After receiving user voice data, the method first judges whether the voice data matches a preset recognition word. If not, the system's built-in speech recognition model cannot handle it, so the voice data is sent to a large language model, which recognizes the user's intent. When the user intends to control a home appliance in the system, the large language model outputs a control word in a preset form; a control instruction is then generated from the control word and sent to the device to be controlled. With this scheme, no additional speech recognition model needs to be trained for the smart home system: when the simple built-in model fails to recognize the input, the large language model identifies the intent directly. This reduces the cost of the smart home system while ensuring recognition accuracy, greatly improving the user experience.
Description
Technical Field
The application relates to the field of smart home, and in particular to a voice control method, device, and equipment for a smart home system, and to the smart home system itself.
Background
In a smart home system, a main controller controls each home appliance. For ease of use, existing smart home systems and appliances provide a voice function: the appliances are controlled through speech recognition.
Early speech recognition works by checking, with a speech recognition model, whether the user's speech matches a preset recognition word. If it does, the control instruction corresponding to that recognition word is executed; if not, nothing happens. The user's actual intent cannot be recognized, so the user experience is poor. The prior art therefore trains more intelligent speech recognition models to obtain the user's intent and control the appliances accordingly. Although such models do recognize intent in some scenarios where the initial models cannot, their cost is high, and the quality of intent recognition varies with the degree of training.
Disclosure of Invention
To overcome these defects of the prior art, the application provides a voice control method, device, and equipment for a smart home system, and a smart home system, so as to solve the problems that the intelligent speech recognition models in existing smart home systems are costly and that their intent-recognition quality varies with the degree of training.
The technical solution adopted to solve these technical problems is as follows:
In a first aspect, a method for controlling voice of an intelligent home system is provided, including:
Receiving user voice data;
If the user voice data is not a preset recognition word, sending the user voice data to a large language model, so that when the large language model recognizes that the user intends to control a home appliance, it outputs a control word in a preset form, the control word comprising the type of the device to be controlled and the action to be executed;
and when the control word is received, generating a control instruction according to the type of the device to be controlled and the action to be executed, and sending the control instruction to the device to be controlled.
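The dispatch described in the first aspect can be sketched as follows. This is a minimal illustration, not the claimed implementation: `PRESET_WORDS`, the control-word fields, and the `query_llm` callable are all hypothetical stand-ins.

```python
import json

# Minimal sketch of the first-aspect flow: preset recognition words map
# directly to stored instructions, and anything else is deferred to the
# large language model. All names here are illustrative assumptions.

PRESET_WORDS = {"turn on the light": {"device": "lamp", "action": "on"}}

def handle_voice(text, query_llm):
    """query_llm: callable standing in for the large-language-model API call."""
    if text in PRESET_WORDS:
        return PRESET_WORDS[text]  # instruction stored for the recognition word
    reply = query_llm(text)        # otherwise let the LLM recognize the intent
    word = json.loads(reply)       # control word in the preset JSON form
    return {"device": word["device"], "action": word["action"]}

# A stub LLM that always answers with one control word in the preset form.
stub = lambda _: '{"device": "air conditioner", "action": "set temperature"}'
print(handle_voice("make the living room cooler", stub))
```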
Further, the method further comprises the following steps:
And if the user voice data is a preset recognition word, acquiring a control instruction corresponding to the recognition word and sending the control instruction.
Further, the method further comprises the following steps:
And sending first system information and an output form requirement to the large language model, wherein the first system information consists of equipment types and equipment functions, and the output form requirement is used for indicating the large language model to output control words in a preset form, and the preset form comprises the structure and the format of the control words.
Further, the generating a control instruction according to the type of the device to be controlled and the action to be executed and sending the control instruction to the device to be controlled includes:
If the equipment type of only one piece of equipment in the smart home system is the equipment type to be controlled, the equipment is used as the equipment to be controlled;
and generating a control instruction of the equipment to be controlled according to the action to be executed and sending the control instruction to the equipment to be controlled.
Further, the generating a control instruction according to the type of the device to be controlled and the action to be executed and sending the control instruction to the device to be controlled includes:
If at least two equipment types of equipment to be selected exist in the smart home system and are the equipment types to be controlled, acquiring a receiving position when receiving user voice data;
taking the equipment to be selected at the receiving position as equipment to be controlled;
and generating a control instruction of the equipment to be controlled according to the action to be executed and sending the control instruction to the equipment to be controlled.
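The device-resolution rules in the two claims above can be sketched as one function: a unique device of the target type is used directly; with several candidates, the position where the voice data was received breaks the tie. The device schema here is a hypothetical illustration.

```python
# Sketch of resolving the device to be controlled from the control word's
# device type, falling back to the receiving position when ambiguous.
# The dict schema ("type", "position") is an assumption for illustration.

def resolve_device(devices, target_type, receiving_position=None):
    candidates = [d for d in devices if d["type"] == target_type]
    if len(candidates) == 1:
        return candidates[0]  # only one device of this type in the system
    located = [d for d in candidates if d["position"] == receiving_position]
    return located[0] if located else None  # ambiguous without a position match

DEVICES = [
    {"type": "air conditioner", "position": "living room"},
    {"type": "air conditioner", "position": "bedroom"},
    {"type": "washing machine", "position": "balcony"},
]
print(resolve_device(DEVICES, "washing machine"))
print(resolve_device(DEVICES, "air conditioner", "bedroom"))
```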
Further, the method further comprises the following steps:
and sending second system information and an output form requirement to the large language model, wherein the second system information comprises equipment type, equipment position and equipment function, the output form requirement is used for indicating the large language model to output a control word in a preset form, and the preset form comprises the structure and the format of the control word.
Further, the method further comprises the following steps:
Transmitting a type name and a classification requirement to the large language model, wherein the type name is used for indicating the type of the user intention when the user intention is to control a home appliance, the classification requirement is used for indicating how to classify the user intention, and different types of user intention correspond to different preset forms.
Further, the method further comprises the following steps:
When the output form requirement, the type name, the classification requirement, the first system information, or the second system information is sent to the large language model, it is input through voice or text.
Further, when the large language model identifies that the user's intention is not to control a home appliance, it outputs a reply corresponding to the user voice data;
and when the reply output by the large language model is received, the playing device and/or the display device is controlled to present the reply to the user.
Further, the method further comprises the following steps:
Acquiring historical operation data and historical state data of the smart home system, and acquiring the current environmental parameters at the current moment, wherein the current environmental parameters comprise indoor temperature and outdoor temperature, the historical state data are the historical environmental parameters and historical operation parameters of all home appliances in the smart home system at each moment in the historical period, and the historical operation data are the adjustment values of all home appliances in the smart home system at the same moments as the historical state data;
Constructing a virtual environment model based on the historical data and a prediction algorithm, wherein the prediction algorithm is any one of the following: GRU, LSTM, ARIMAX;
Inputting the current environmental parameters into the virtual environment model, and training based on a reinforcement learning algorithm to obtain a target model, the reinforcement learning algorithm being any one of the following: DQN, SAC, PPO;
When the action to be performed does not include a specific value, the generating a control instruction according to the type of the device to be controlled and the action to be performed includes:
Inputting the current environmental parameters into the target model to obtain specific numerical values;
And generating a control instruction according to the specific value and the action to be executed.
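The training loop behind these claims can be illustrated with a toy. The claims name GRU/LSTM/ARIMAX for the virtual environment model and DQN/SAC/PPO for training; the sketch below substitutes a hand-written environment and tabular Q-learning purely so the agent/environment loop is visible. The 25 °C comfort target and the 18–30 °C range are illustrative assumptions, not from the application.

```python
import random

# Toy stand-in for the claimed pipeline: a hand-written virtual environment
# and tabular Q-learning instead of a learned GRU/LSTM/ARIMAX model with
# DQN/SAC/PPO. All numbers are illustrative assumptions.

ACTIONS = (-1, 0, 1)  # lower, keep, or raise the set temperature by 1 degC

def step(temp, action):
    """Virtual environment: next state and reward for an action."""
    new_temp = max(18, min(30, temp + action))
    reward = -abs(new_temp - 25)  # assumed habitual comfort point: 25 degC
    return new_temp, reward

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    random.seed(0)
    q = {(t, a): 0.0 for t in range(18, 31) for a in ACTIONS}
    for _ in range(episodes):
        temp = random.randint(18, 30)  # random starting state
        for _ in range(20):
            if random.random() < eps:  # epsilon-greedy exploration
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(temp, x)])
            nxt, r = step(temp, a)
            q[(temp, a)] += alpha * (r + gamma * max(q[(nxt, b)] for b in ACTIONS) - q[(temp, a)])
            temp = nxt
    return q

def best_setting(q, temp, max_steps=20):
    """Follow the greedy policy until it chooses to keep the temperature."""
    for _ in range(max_steps):
        a = max(ACTIONS, key=lambda x: q[(temp, x)])
        if a == 0:
            break
        temp, _ = step(temp, a)
    return temp

q = train()
print(best_setting(q, 20))  # should settle near the assumed comfort point
```

In the claimed system, `best_setting` would play the role of the target model that turns the current environmental parameters into a specific value for the control instruction.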
Further, when the action to be performed does not include a specific value, the generating a control instruction according to the type of the device to be controlled and the action to be performed includes:
Acquiring historical operation data and historical state data of the smart home system, and acquiring the current environmental parameters and current operation parameters at the current moment, wherein the current environmental parameters comprise indoor temperature and outdoor temperature, the historical state data are the historical environmental parameters and historical operation parameters of all home appliances in the smart home system at each moment in the historical period, and the historical operation data are the adjustment values of all home appliances in the smart home system at the same moments as the historical state data;
Obtaining a first text vector of a current environment parameter and a current operation parameter and a second text vector of a historical environment parameter and a historical operation parameter at each moment in the historical time by adopting a vector extraction model;
calculating cosine similarity of the first text vector and each second text vector;
Taking the adjustment value of the historical operation data at the moment corresponding to the second text vector with the maximum cosine similarity as a specific value;
And generating a control instruction according to the specific value and the action to be executed.
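The pattern-matching path of these claims can be sketched as follows: embed the current parameters and each historical moment's parameters, pick the most similar historical moment by cosine similarity, and reuse its recorded adjustment value. A real system would use the vector extraction model named in the claims; here the "vectors" are raw (indoor °C, outdoor °C) tuples, a deliberate simplification, and all numbers are illustrative.

```python
import math

# Sketch of the claimed similarity matching. The raw parameter tuples stand
# in for the vector extraction model's text vectors; numbers are made up.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

HISTORY = [
    {"vector": (29.0, 33.0), "adjustment": 24},  # hot day: user set 24 degC
    {"vector": (20.0, 5.0), "adjustment": 27},   # cold day: user set 27 degC
]

def pick_value(current):
    """Adjustment value of the historical moment most similar to now."""
    best = max(HISTORY, key=lambda h: cosine(current, h["vector"]))
    return best["adjustment"]

print(pick_value((28.0, 32.0)))  # -> 24
```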
Further, the method further comprises the following steps:
When the control word is received, if the intelligent home system comprises the type of the equipment to be controlled and equipment with the type of the equipment to be controlled can execute the action to be executed, generating a control instruction according to the type of the equipment to be controlled and the action to be executed and sending the control instruction to the equipment to be controlled;
And if the intelligent home system does not comprise the equipment type to be controlled and/or equipment with the equipment type to be controlled cannot execute the action to be executed, sending an error prompt to the large language model so that the large language model regenerates the control word and outputs the control word.
In a second aspect, a voice control device for an intelligent home system is provided, including:
The voice data receiving module is used for receiving user voice data;
The voice data sending module is used for sending the user voice data to a large language model if the user voice data is not a preset recognition word, so that when the large language model recognizes that a user intends to control the household appliance, a control word in a preset form is output, and the control word comprises a type of equipment to be controlled and an action to be executed;
And the control instruction generation module is used for generating a control instruction according to the type of the equipment to be controlled and the action to be executed and sending the control instruction to the equipment to be controlled when the control word is received.
In a third aspect, a smart home system voice control apparatus is provided, including:
at least one processor and at least one memory;
The memory stores executable instructions of the processor;
the processor is configured to perform the smart home system voice control method described above.
In a fourth aspect, a smart home system is provided, which applies the above voice control method of the smart home system.
The beneficial effects are that:
The technical scheme of the application provides a voice control method, device, and equipment for a smart home system, and the smart home system. After receiving user voice data, the method first judges whether it matches a preset recognition word; if not, the built-in speech recognition model cannot handle it, so the voice data is sent to the large language model, which recognizes the user's intent. When the user intends to control a home appliance in the system, the large language model outputs a control word in a preset form, from which a control instruction is generated and sent to the device to be controlled. No additional speech recognition model needs to be trained for the smart home system: when the simple built-in model fails to recognize the input, the large language model identifies the intent directly, which reduces the cost of the smart home system while ensuring recognition accuracy and greatly improving the user experience.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a smart home system according to an embodiment of the present application;
Fig. 2 is a schematic diagram of a control flow of an intelligent home system according to an embodiment of the present application;
FIG. 3 is a schematic diagram of reinforcement learning according to an embodiment of the present application;
FIG. 4 is a flowchart of a method for controlling voice of an intelligent home system according to an embodiment of the present application;
FIG. 5 is a block diagram of a voice control device for a smart home system according to an embodiment of the present application;
fig. 6 is a block diagram of a voice control device of an intelligent home system according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present application more apparent, the technical solutions of the application are described in detail below with reference to the accompanying drawings and embodiments. It will be apparent that the described embodiments are only some, not all, embodiments of the application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without inventive effort fall within the scope of the application as defined by the claims.
The speech recognition model originally carried by smart home systems or voice appliances triggers control only upon a specific recognition word, which the user must remember and speak exactly. An improvement in the prior art is to train a speech recognition model that can recognize the user's intent, so that control can be triggered without special recognition words. But on the one hand, the higher the accuracy of the model, the higher the training cost and the longer the training period. On the other hand, different smart home systems comprise different appliances, so a single speech recognition model cannot fully adapt to all smart home systems.
With the rapid development of large language models, such as OpenAI's ChatGPT/GPT-4, Baidu's ERNIE Bot, and iFlytek's Spark, the task of speech understanding can be handed over to a large language model. Based on this, the application provides a smart home system, as shown in fig. 1, comprising a main controller, home appliances, and a large language model server.
The main controller can be connected to external devices such as a microphone, a loudspeaker, and a display screen. It comprises a WiFi module, a main chip, a voice recognition module, and a storage module: the main controller interacts with the home appliances and the large language model server through the WiFi module, the storage module stores data, the main chip processes data, and the voice recognition module recognizes the recognition words.
Each home appliance comprises an actuator, a control main board, a WiFi module, and possibly a voice recognition module. The actuator represents the specific working components of the appliance, such as a fan, compressor, or motor; the control main board is the circuit board that executes programs. The appliances connect to the main controller through their WiFi modules. The voice recognition module is present when the appliance is itself a voice-controlled device, in which case the actuator also includes at least a microphone for receiving user speech, so that the device can be controlled by voice directly.
In the scheme of the application, the user may issue a voice control instruction either to a home appliance with the voice control function or to the main controller.
The large language model server is a third-party service providing a large language model API, such as OpenAI's ChatGPT/GPT-4, Baidu's ERNIE Bot, or iFlytek's Spark.
The main controller connects to the large language model server by calling this API.
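The request the main controller assembles for such a call might look as follows. This is a hypothetical sketch: the "messages" list with role/content fields mirrors common chat-style APIs, but every field name here is a placeholder, since the application does not specify a wire format.

```python
import json

# Hypothetical request payload for the LLM API call. The schema is an
# assumption modelled on common chat APIs, not taken from the application.

def build_request(user_text, system_prompt):
    return json.dumps({
        "messages": [
            # system prompt carries the device list and output-form requirement
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_text},
        ]
    })

payload = build_request(
    "the living room is a bit hot",
    "Reply only with a JSON control word containing device, position, action, value.",
)
print(json.loads(payload)["messages"][1]["content"])
```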
The flow of smart home control by the user through the system is shown in fig. 2, and is described in detail as follows:
After the system is powered on and started for the first time, it enters the stage of smart home system description and intention prompt word description. The main task of the smart home system description is to inform the large language model of the existing devices, their functions, and their positions, by voice or text input, and to require the large language model to output control words in a preset form.
For example, the user inputs the following text or voice through the main controller: "There is an air conditioner in the living room that can cool, heat, and dehumidify; a washing machine on the balcony with standard wash, fast wash, rinse, and clean modes; and an electric cooker in the kitchen with fast cook, cook rice, cook porridge, keep warm, and cook soup modes." Alternatively, the system information S can be sent to the large language model in JSON format:
[
  {
    "device type": "air conditioner",
    "position": "living room",
    "function": ["refrigeration", "heating", "dehumidification"]
  },
  {
    "device type": "washing machine",
    "position": "balcony",
    "function": ["standard wash", "fast wash", "rinse", "clean"]
  },
  {
    "device type": "electric cooker",
    "position": "kitchen",
    "function": ["fast cook", "cook rice", "cook porridge", "keep warm", "cook soup"]
  }
]
If a new smart device is added at home, the large language model is informed of the new device.
Next comes the intention prompt word description. The prompt word P helps the large language model classify intentions that belong to home appliance control. P comprises three classes: the accurate control class A, the experience class B, and the scene class C.
For the accurate control class A, the user inputs: "If I say 'set the living room air conditioner to 25 °C', the control word output by the large language model is {"device": "air conditioner", "position": "living room", "action": "set temperature", "value": "25"}; if I say 'set the washing machine to standard wash', the control word is {"device": "washing machine", "action": "set mode", "mode": "standard wash"}. Such intents belong to the accurate control class."
For the experience class B, one example of user input is: when the input is "the living room is too hot", the control word output by the large language model is {"position": "living room", "device": "air conditioner", "action": "set temperature", "value": "25"}. The large language model can generalize to other situations from the user's description; for example, if the user says "the bedroom is cold", the output control word is {"position": "bedroom", "device": "air conditioner", "action": "set temperature", "value": "28"}. The above is only a partial example; the user may input any number of prompt words describing feelings as needed.
For the scene class C, covering custom scenes, the user inputs: "If I say 'home mode', output [{"device": "air conditioner", "action": "set temperature", "value": "25"}, {"device": "lamp", "action": "on"}, {"device": "water heater", "action": "on"}]. If I say 'leave mode', output [{"device": "air conditioner", "action": "off"}, {"device": "lamp", "action": "off"}, {"device": "monitoring camera", "action": "on"}]." The user may define a variety of modes based on the devices in the home and their own needs; these are only examples.
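On the controller side, expanding a scene name into its list of control words is a simple lookup. The sketch below mirrors the "home mode"/"leave mode" examples; the stored lists are user-defined and these entries are only illustrative.

```python
# Sketch of scene-class expansion: one scene name maps to a list of control
# words. The scene contents below just mirror the examples in the text.

SCENES = {
    "home mode": [
        {"device": "air conditioner", "action": "set temperature", "value": "25"},
        {"device": "lamp", "action": "on"},
        {"device": "water heater", "action": "on"},
    ],
    "leave mode": [
        {"device": "air conditioner", "action": "off"},
        {"device": "lamp", "action": "off"},
        {"device": "monitoring camera", "action": "on"},
    ],
}

def expand_scene(name):
    """Return the control words for a named scene (empty if undefined)."""
    return SCENES.get(name, [])

print(len(expand_scene("home mode")))  # -> 3
```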
After the user inputs the three classes of prompt words, the main controller stores the prompt word P so that it can be loaded the next time the system starts. These three classes of prompt words cover most user demands in smart home interaction scenarios, and because of the large language model's ability to understand the user's language, examples alone may suffice. Of course, the three classes may also be defined precisely and sent to the large language model.
Large language model intent recognition
The large language model judges the user's intention with the prompt word P loaded in the previous step. If the intention falls into the accurate control class A, the experience class B, or the scene class C, the corresponding control word is generated and the home appliance control flow is entered: the main controller interacts with the smart appliance and obtains feedback. If the intention is not to control an appliance, the large language model gives another kind of reply.
Household appliance control and feedback
After the main controller acquires the control word from the large language model, it checks whether the control word is legal, for example whether the named device exists and whether that device has the requested function. If a control word is illegal, the large language model is asked to modify it until it is legal.
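The legality check can be sketched as follows: the device type named in the control word must exist in the system description, and that device must support the requested action or mode. The schema and function lists here are illustrative, since the application does not fix them.

```python
# Sketch of the control-word legality check. SYSTEM_INFO mirrors the JSON
# system description; the extra "set temperature" entry is an assumption.

SYSTEM_INFO = [
    {"device type": "air conditioner", "position": "living room",
     "function": ["refrigeration", "heating", "dehumidification", "set temperature"]},
    {"device type": "washing machine", "position": "balcony",
     "function": ["standard wash", "fast wash", "rinse", "clean"]},
]

def check_control_word(word):
    """Return (legal, reason); an illegal word triggers a regeneration request."""
    for dev in SYSTEM_INFO:
        if dev["device type"] == word["device"]:
            if word.get("action") in dev["function"] or word.get("mode") in dev["function"]:
                return True, "ok"
            return False, "device cannot perform this action"
    return False, "no such device"

print(check_control_word({"device": "washing machine", "action": "set mode", "mode": "standard wash"}))
```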
In the control process, one problem remains to be solved. If the user input is not in the accurate control class A, no value is specified for the specific action, and appropriate control data must be chosen. For example, the user says "the living room is somewhat cold": what temperature should the air conditioner be set to? For this problem, an appropriate value is generally set according to the user's habits and the current indoor and outdoor environment. Common implementations include a pattern matching method and a reinforcement learning method. The pattern matching method finds, among previously collected historical operation data and indoor and outdoor environment data, the record most similar to the current situation, and uses the corresponding setting temperature.
The reinforcement learning method constructs a virtual environment model from historical data using an LSTM (long short-term memory network) or ARIMA (autoregressive integrated moving average) algorithm, inputs the current environmental parameters into the virtual environment model, trains with a reinforcement learning algorithm (such as DQN, SAC, or PPO), and iterates toward an optimal setting temperature based on the user's habits and the current environment. A schematic diagram of the reinforcement learning model is shown in fig. 3.
The agent is the carrier of the reinforcement learning algorithm; it runs the relevant algorithm on the environment state and reward to generate actions.
Virtual environment based on a prediction algorithm: an environment model simulating the running state of the household appliance is constructed with a prediction method (such as GRU, LSTM or ARIMAX), and it generates the corresponding state and reward for the actions of the agent.
The action At is an instruction generated by the agent, such as switching on/off or adjusting the set temperature; t denotes a certain moment.
The states St and St+1 are the device operating state at a given moment, such as temperature, humidity, power consumption, compressor frequency, indoor temperature, outdoor temperature and air quality.
The rewards Rt and Rt+1 are scalars that measure the effect of an action.
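As a minimal, self-contained illustration of this agent/environment loop, the toy sketch below replaces the LSTM/ARIMA virtual environment and the DQN/SAC/PPO agent with an invented comfort model and simple tabular value updates; the preferred temperature, action range and learning constants are assumptions, not values from this application.

```python
import random

# Toy sketch of the loop in Fig. 3: the agent picks a set temperature
# (action At), the virtual environment returns a scalar reward Rt+1, and the
# agent's value estimates are updated. The comfort model is invented purely
# for illustration.

ACTIONS = list(range(18, 29))   # candidate set temperatures (deg C)
USER_PREFERRED = 24             # assumed habit-derived comfort point

def env_step(action):
    """Stand-in virtual environment: reward is higher the closer the set
    temperature is to the (assumed) preferred temperature."""
    return -abs(action - USER_PREFERRED)

q = {a: 0.0 for a in ACTIONS}   # value estimate per action
for episode in range(2000):
    # epsilon-greedy action selection
    a = random.choice(ACTIONS) if random.random() < 0.1 else max(q, key=q.get)
    r = env_step(a)
    q[a] += 0.1 * (r - q[a])    # incremental value update

best = max(q, key=q.get)
print("iterated optimal set temperature:", best)
```

In a real deployment, `env_step` would be the trained virtual environment model returning the next state St+1 and reward Rt+1 (conditioned on user habit data and the current indoor/outdoor conditions), and the tabular update would be replaced by a DQN, SAC or PPO agent.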
After the control instruction has been generated and verified, the main controller sends it to the corresponding device through the WiFi module. The device carries out the control requirement and feeds the result back to the main controller, which reports the control result to the user through the loudspeaker and the display screen, increasing the fun of the interaction.
In this process, the description of the devices in the smart home system does not involve any unique device identifier, so the user's privacy is protected to the greatest extent. By constructing the three types of intention prompt words, the user's intention can be accurately understood and the user's requirement met.
It should be noted that, in the system structure diagram, the network communication module through which the home appliances and the main controller interact may, besides WiFi, be a networking module such as Bluetooth, ZigBee or 2G/3G/4G/5G.
In addition, when the display screen shows the control effect, a virtual digital person with a 3D holographic display can be used.
According to the embodiment of the application, the large language model accurately identifies the user's intention in smart home control, and the smart home system is described with a privacy-removing method, so the user's requirements can be accurately identified while the user's privacy is protected to a certain extent. Meanwhile, interaction between the virtual digital human image and the user increases the diversity and fun of the interaction, improves user stickiness to the product, and improves the user experience.
Based on the same inventive concept, the application provides a voice control method of an intelligent home system, which is applied to a main controller, as shown in fig. 4, and comprises the following steps:
S11: receiving user voice data. The user voice data may be acquired by a sound pickup device, such as a microphone, provided at the main controller, or may be collected by the pickup device of a home appliance and uploaded to the main controller. In actual control, a wake-up word is generally set to avoid misoperation triggered by ordinary user speech. The step of receiving user voice data is entered when the user utters the wake-up word; if the wake-up word is not received, no subsequent step is performed even if the pickup device collects the user's voice.
S12: if the user voice data is not a preset recognition word, the user voice data is sent to a large language model, so that when the large language model recognizes that the user intends to control the household appliance, a control word in a preset form is output, and the control word comprises a type of equipment to be controlled and an action to be executed;
And if the user voice data is a preset recognition word, a control instruction corresponding to the recognition word is acquired and sent. A built-in speech recognition model that can only recognize the preset recognition words is used to recognize the user voice data, and the corresponding control instruction is triggered the moment a preset recognition word is detected. There is then no need to call an API interface to transmit the user voice data to the large language model, which speeds up recognition and improves the response speed of the smart home system to the user's voice.
When the large language model recognizes that the user's intention is not to control a home appliance, it outputs a reply corresponding to the user voice data; when the main controller receives the reply output by the large language model, it controls the playing device and/or the display device to inform the user of the reply. The reply is generated by the large language model.
S13: when the control word is received, a control instruction is generated according to the type of the device to be controlled and the action to be executed, and sent to the device to be controlled.
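A minimal sketch of step S13, assuming purely for illustration that the preset form of the control word is JSON; the field names and the instruction layout are likewise assumptions, not a format fixed by the application.

```python
import json

# Illustrative sketch of step S13: the large language model returns a control
# word in a preset form (assumed here to be JSON), and the main controller
# turns it into a control instruction for the device to be controlled.

control_word = '{"device_type": "air_conditioner", "action": "set_temperature", "value": 26}'

def to_control_instruction(control_word_json):
    word = json.loads(control_word_json)
    # The instruction layout below is an assumption for illustration.
    return {
        "target": word["device_type"],
        "command": word["action"],
        "parameter": word.get("value"),   # may be absent for experience-class intents
    }

print(to_control_instruction(control_word))
```

A fixed, machine-parseable preset form like this is what lets the main controller recognize the control word quickly, as the output form requirement below is intended to ensure.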
It should be noted that, since the control word generated by the large language model cannot be guaranteed to be completely correct, the main controller must verify it to ensure that it meets the requirements. Specifically, when the control word is received, if the smart home system includes the device type to be controlled and a device of that type can execute the action to be executed, a control instruction is generated according to the device type to be controlled and the action to be executed and sent to the device to be controlled; if the smart home system does not include the device type to be controlled and/or no device of that type can execute the action to be executed, an error prompt is sent to the large language model so that it regenerates and outputs the control word.
In addition, each time the main controller is started, specific information needs to be sent to the large language model, so that the output of the large language model can be suitable for the intelligent home system where the main controller is located.
In the prior art, some applications of large language models send all the information of a system (such as the device number of each device) to the large language model, so that the large language model directly generates control instructions to control the device to be controlled. This carries a risk of privacy disclosure for the user, and a security risk in that the large language model can control the system directly. Therefore, the large language model of the application cannot control the devices directly; it can only send control words to the main controller, and the final control instruction is generated by the main controller.
In order to protect the user's privacy, in one embodiment the application sends only the general composition of the system to the large language model, without any specific device numbers, thereby avoiding disclosure of the user's privacy. Specifically, second system information and an output form requirement are sent to the large language model, where the second system information includes the device types, device positions and device functions, and the output form requirement instructs the large language model to output a control word in a preset form, the preset form including the structure and format of the control word. The second system information is input so that the large language model knows the structure of the system, understands the user intention of the user voice data and generates the corresponding control word; because the second system information contains no private information such as device serial numbers, the user's privacy is guaranteed. The output form requirement is set to facilitate recognition by the main controller and to speed up the response.
For some users, the number of home devices in the system and their positions are also private. Therefore, in another embodiment, first system information and an output form requirement are sent to the large language model, where the first system information consists of the device types and device functions, and the output form requirement instructs the large language model to output a control word in a preset form, the preset form including the structure and format of the control word. That is, the main controller only tells the large language model what device types are in the system and what functions devices of each type can perform. The large language model thus cannot learn the specific composition of the system (such as the number of devices of a given type or the specific installation position of a device), and the user's privacy is protected.
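Assembling the first system information can be sketched as below: only device types and their functions are sent, with no device numbers, counts or positions. All device names and the prompt wording are illustrative assumptions.

```python
# Hedged sketch of building the "first system information" sent to the large
# language model on start-up. Only device types and functions appear; no
# identifiers, counts or installation positions are disclosed.

first_system_info = {
    "air_conditioner": ["power on/off", "set temperature", "set mode"],
    "light": ["power on/off", "set brightness"],
}

output_form_requirement = (
    'Reply with a control word as JSON: {"device_type": ..., "action": ...}'
)

def build_startup_prompt(system_info, form_requirement):
    lines = ["The smart home system contains the following device types:"]
    for dev_type, functions in system_info.items():
        lines.append(f"- {dev_type}: " + ", ".join(functions))
    lines.append(form_requirement)
    return "\n".join(lines)

prompt = build_startup_prompt(first_system_info, output_form_requirement)
print(prompt)
```

The second-system-information variant of the embodiment above would simply add a position field per device type; either way, no device serial numbers ever reach the large language model.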
Because the large language model does not know the specific structure of the system, the control word it outputs is relatively simple and can only state which type of device to adjust; it cannot tell the main controller which device in which position to control. The main controller must therefore itself determine the specific device to control from the control word.
If only one device in the smart home system has the device type to be controlled, that device is taken as the device to be controlled; a control instruction for the device to be controlled is generated according to the action to be executed and sent to the device to be controlled.
If at least two candidate devices in the smart home system have the device type to be controlled, the receiving position at which the user voice data was received is acquired; the candidate device at the receiving position is taken as the device to be controlled; and a control instruction for the device to be controlled is generated according to the action to be executed and sent to the device to be controlled.
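The two branches above (a single device of the type, or several disambiguated by the receiving position of the voice data) can be sketched as follows; the device records are invented for illustration.

```python
# Sketch of resolving the concrete device to control when the control word
# only names a device type. With one matching device it is used directly;
# with several, the receiving position of the voice data picks the one in
# the same room. Device ids and positions are illustrative assumptions.

DEVICES = [
    {"id": "ac-01", "type": "air_conditioner", "position": "living_room"},
    {"id": "ac-02", "type": "air_conditioner", "position": "bedroom"},
    {"id": "wm-01", "type": "washing_machine", "position": "bathroom"},
]

def resolve_device(device_type, receiving_position):
    candidates = [d for d in DEVICES if d["type"] == device_type]
    if len(candidates) == 1:
        return candidates[0]               # only one device of this type
    for d in candidates:                   # several: use the voice position
        if d["position"] == receiving_position:
            return d
    return None

print(resolve_device("washing_machine", "living_room")["id"])  # wm-01
print(resolve_device("air_conditioner", "bedroom")["id"])      # ac-02
```

Note that the device ids live only in the main controller; the large language model never sees them, which is the privacy point of this design.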
In order for the large language model to accurately identify the user intention, the application also classifies user intentions, which facilitates recognition by the large language model and output of the control word. Specifically, type names and a classification requirement are sent to the large language model, where the type names indicate the possible types of user intention when the intention is to control a home appliance, the classification requirement indicates how to classify the user intention, and the preset form differs for different types of user intention. For the specific classification, reference may be made to the accurate control class A, experience class B and scene class C in the embodiments above.
For a user intention of the experience class, the control word output by the large language model contains no specific numerical value. The main controller can therefore either choose qualitative control when generating the control instruction (for example, when controlling temperature, sending only a raise or lower command without a specific value), or generate the specific value itself in order to produce the control instruction to send.
In one embodiment, a default value is set for each working mode of each device type and is used directly as the specific value, but this approach does not meet personalization requirements.
In another embodiment, the historical operation data and historical state data of the smart home system are acquired, together with the current environmental parameters at the current moment, the current environmental parameters including the indoor temperature and outdoor temperature; the historical state data are the historical environmental parameters and historical operating parameters of the home appliances in the smart home system at each moment in the historical period, and the historical operation data are the adjustment values of those home appliances at the same moments as the historical state data. A virtual environment model is constructed from the historical data with a prediction algorithm, the prediction algorithm being any one of GRU, LSTM and ARIMAX. The current environmental parameters are input into the virtual environment model and training is performed with a reinforcement learning algorithm to obtain a target model, the reinforcement learning algorithm being any one of DQN, SAC and PPO.
When the action to be performed does not include a specific value, the generating a control instruction according to the type of the device to be controlled and the action to be performed includes: inputting the current environmental parameters into the target model to obtain specific numerical values; and generating a control instruction according to the specific value and the action to be executed.
However, this approach requires training a model, which is costly. In another embodiment of the application, when the action to be executed does not include a specific value, generating a control instruction according to the device type to be controlled and the action to be executed includes:
Acquiring the historical operation data and historical state data of the smart home system, together with the current environmental parameters and current operating parameters at the current moment, the current environmental parameters including the indoor temperature and outdoor temperature; the historical state data are the historical environmental parameters and historical operating parameters of the home appliances in the smart home system at each moment in the historical period, and the historical operation data are the adjustment values of those home appliances at the same moments as the historical state data. A vector extraction model (such as BERT) is used to obtain a first text vector of the current environmental and operating parameters and a second text vector of the historical environmental and operating parameters at each moment in the historical period. The cosine similarity between the first text vector and each second text vector is calculated; the adjustment value in the historical operation data at the moment corresponding to the second text vector with the maximum cosine similarity is taken as the specific value; and a control instruction is generated from the specific value and the action to be executed. Although a vector extraction model is also required in this embodiment, an existing model can be used directly, without targeted training.
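The pattern-matching branch can be sketched as below. The application embeds the parameter texts with a vector extraction model such as BERT; to keep this illustration dependency-free, plain numeric feature vectors stand in for those text vectors, and all historical records are invented.

```python
import math

# Sketch of the pattern-matching alternative: find the historical moment most
# similar to the current conditions (by cosine similarity) and reuse its
# adjustment value as the specific value. All data below is invented.

# (indoor_temp, outdoor_temp, running_power) -> adjustment value set then
HISTORY = [
    ((18.0, 5.0, 900.0), 24),
    ((27.0, 33.0, 1200.0), 25),
    ((22.0, 15.0, 600.0), 23),
]

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def specific_value(current):
    # pick the adjustment value from the most similar historical state
    return max(HISTORY, key=lambda rec: cosine_similarity(current, rec[0]))[1]

print(specific_value((19.0, 6.0, 880.0)))  # nearest to the first record
```

With BERT-style text vectors, `cosine_similarity` is applied to the embedding vectors instead of raw numeric features, but the nearest-record lookup is the same.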
When the output form requirement, the type names, the classification requirement, or the first or second system information is sent to the large language model, it may be input as voice or as text.
After receiving user voice data, the smart home system voice control method provided by the embodiment of the application first judges whether the user voice data is a preset recognition word. If not, the voice recognition model in the system cannot recognize it, so the user voice data is sent to the large language model, which recognizes the user intention. When the user intends to control a home appliance in the system, the large language model outputs a control word in the preset form, and finally a control instruction is generated according to the control word and sent to the device to be controlled. With this scheme, no voice recognition model needs to be additionally trained for the smart home system: after the simple voice recognition model in the system fails to recognize the input, the large language model is used directly to recognize the intention, which reduces the cost of the smart home system, ensures recognition accuracy, and greatly improves the user experience.
Based on the same inventive concept, as shown in fig. 5, the present application provides a smart home system voice control apparatus 50, comprising:
A voice data receiving module 51 for receiving user voice data;
The voice data sending module 52 is configured to send the user voice data to a large language model if the user voice data is not a preset recognition word, so that when the large language model recognizes that the user intends to control the home appliance, a control word in a preset form is output, where the control word includes a type of a device to be controlled and an action to be performed;
And if the user voice data is a preset recognition word, acquiring a control instruction corresponding to the recognition word and sending the control instruction.
When the large language model recognizes that the user's intention is not to control a home appliance, it outputs a reply corresponding to the user voice data; when the reply output by the large language model is received, the playing device and/or the display device is controlled to inform the user of the reply.
And the control instruction generating module 53 is configured to generate a control instruction according to the type of the device to be controlled and the action to be performed when the control word is received, and send the control instruction to the device to be controlled.
When the control word is received, if the smart home system comprises the type of the equipment to be controlled and equipment with the type of the equipment to be controlled can execute the action to be executed, generating a control instruction according to the type of the equipment to be controlled and the action to be executed and sending the control instruction to the equipment to be controlled; and if the intelligent home system does not comprise the equipment type to be controlled and/or equipment with the equipment type to be controlled cannot execute the action to be executed, sending an error prompt to the large language model so that the large language model regenerates the control word and outputs the control word.
In one embodiment, the method further comprises the step of sending first system information and an output form requirement to the large language model, wherein the first system information is composed of a device type and a device function, and the output form requirement is used for indicating the large language model to output a control word in a preset form, and the preset form comprises a structure and a format of the control word.
The method for generating the control instruction according to the type of the equipment to be controlled and the action to be executed and sending the control instruction to the equipment to be controlled comprises the following steps:
If the equipment type of only one piece of equipment in the smart home system is the equipment type to be controlled, the equipment is used as the equipment to be controlled; and generating a control instruction of the equipment to be controlled according to the action to be executed and sending the control instruction to the equipment to be controlled.
If at least two equipment types of equipment to be selected exist in the smart home system and are the equipment types to be controlled, acquiring a receiving position when receiving user voice data; taking the equipment to be selected at the receiving position as equipment to be controlled; and generating a control instruction of the equipment to be controlled according to the action to be executed and sending the control instruction to the equipment to be controlled.
In another embodiment, second system information and output form requirements are sent to the large language model, wherein the second system information comprises equipment type, equipment position and equipment function, the output form requirements are used for indicating the large language model to output control words in a preset form, and the preset form comprises the structure and the format of the control words.
Furthermore, the method further comprises: transmitting a type name and a classification requirement to the large language model, wherein the type name is used for indicating the type of the user intention when the user intention is to control the household appliance; the classification requirement is used to indicate how to classify the user's intent; the type of user intention is different and the preset form is different.
Further, when the output form requirement, the type names, the classification requirement, or the first or second system information is sent to the large language model, it may be input as voice or as text.
In one embodiment, the historical operation data and historical state data of the smart home system are acquired, together with the current environmental parameters at the current moment, the current environmental parameters including the indoor temperature and outdoor temperature; the historical state data are the historical environmental parameters and historical operating parameters of the home appliances in the smart home system at each moment in the historical period, and the historical operation data are the adjustment values of those home appliances at the same moments as the historical state data. A virtual environment model is constructed from the historical data with a prediction algorithm, the prediction algorithm being any one of GRU, LSTM and ARIMAX. The current environmental parameters are input into the virtual environment model and training is performed with a reinforcement learning algorithm to obtain a target model, the reinforcement learning algorithm being any one of DQN, SAC and PPO.
When the action to be performed does not include a specific value, the generating a control instruction according to the type of the device to be controlled and the action to be performed includes: inputting the current environmental parameters into the target model to obtain specific numerical values; and generating a control instruction according to the specific value and the action to be executed.
In another embodiment, when the action to be performed does not include a specific value, the generating a control instruction according to the type of the device to be controlled and the action to be performed includes:
Acquiring the historical operation data and historical state data of the smart home system, together with the current environmental parameters and current operating parameters at the current moment, the current environmental parameters including the indoor temperature and outdoor temperature; the historical state data are the historical environmental parameters and historical operating parameters of the home appliances in the smart home system at each moment in the historical period, and the historical operation data are the adjustment values of those home appliances at the same moments as the historical state data. A vector extraction model is used to obtain a first text vector of the current environmental and operating parameters and a second text vector of the historical environmental and operating parameters at each moment in the historical period. The cosine similarity between the first text vector and each second text vector is calculated; the adjustment value in the historical operation data at the moment corresponding to the second text vector with the maximum cosine similarity is taken as the specific value; and a control instruction is generated from the specific value and the action to be executed.
After receiving user voice data, the smart home system voice control device provided by the embodiment of the application first judges whether the user voice data is a preset recognition word. If not, the voice recognition model in the system cannot recognize it, so the user voice data is sent to the large language model, which recognizes the user intention. When the user intends to control a home appliance in the system, the large language model outputs a control word in the preset form, and finally a control instruction is generated according to the control word and sent to the device to be controlled. With this scheme, no voice recognition model needs to be additionally trained for the smart home system: after the simple voice recognition model in the system fails to recognize the input, the large language model is used directly to recognize the intention, which reduces the cost of the smart home system, ensures recognition accuracy, and greatly improves the user experience.
Based on the same inventive concept, as shown in fig. 6, the present application provides a smart home system voice control apparatus 60, comprising:
At least one processor 61 and at least one memory 62;
The memory stores executable instructions of the processor;
the processor is configured to perform the smart home system voice control method provided by the above embodiment.
According to the smart home system voice control equipment provided by the embodiment of the application, the memory stores executable instructions of the processor. When the instructions are executed, the processor, after receiving user voice data, first judges whether the user voice data is a preset recognition word; if not, the voice recognition model in the system cannot recognize it, so the user voice data is sent to the large language model, which recognizes the user intention. When the user intends to control a home appliance in the system, the large language model outputs a control word in the preset form, and finally a control instruction is generated according to the control word and sent to the device to be controlled. With this scheme, no voice recognition model needs to be additionally trained for the smart home system: after the simple voice recognition model in the system fails to recognize the input, the large language model is used directly to recognize the intention, which reduces the cost of the smart home system, ensures recognition accuracy, and greatly improves the user experience.
It is to be understood that the same or similar parts in the above embodiments may be referred to each other, and that in some embodiments, the same or similar parts in other embodiments may be referred to.
It should be noted that in the description of the present application, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Furthermore, in the description of the present application, unless otherwise indicated, the meaning of "plurality" means at least two.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and further implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field programmable gate arrays (FPGAs), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, alternatives and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the application.
Claims (15)
1. A voice control method of an intelligent home system is characterized by comprising the following steps:
Receiving user voice data;
If the user voice data is not a preset recognition word, the user voice data is sent to a large language model, so that when the large language model recognizes that the user intends to control the household appliance, a control word in a preset form is output, and the control word comprises a type of equipment to be controlled and an action to be executed;
And when the control word is received, generating a control instruction according to the type of the equipment to be controlled and the action to be executed, and sending the control instruction to the equipment to be controlled.
2. The method as recited in claim 1, further comprising:
If the user voice data is a preset recognition word, acquiring the control instruction corresponding to the recognition word and sending the control instruction.
3. The method as recited in claim 1, further comprising:
Sending first system information and an output form requirement to the large language model, wherein the first system information consists of equipment types and equipment functions, and the output form requirement is used for instructing the large language model to output the control word in the preset form, the preset form comprising the structure and the format of the control word.
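The system information, output form requirement, and (per claim 7) intent type names and classification requirement amount to priming the large language model with a system prompt. A hypothetical sketch of assembling such a prompt, under the assumption that the preset form is a JSON object (the source does not fix a concrete format):

```python
def build_system_prompt(devices, intent_types):
    """Assemble a system prompt carrying the system information (equipment
    types and functions, claim 3), the output form requirement, and the
    intent type names / classification requirement (claim 7). The JSON
    shape shown is an assumed preset form, not one given by the source."""
    lines = ["You are a smart home voice controller.", "Known equipment:"]
    for dev_type, functions in devices.items():
        lines.append(f"- {dev_type}: {', '.join(functions)}")
    lines.append("Classify each utterance as one of: " + ", ".join(intent_types) + ".")
    lines.append('If the intent is to control a household appliance, reply ONLY with '
                 'JSON of the form {"device_type": "<type>", "action": "<action>"}.')
    return "\n".join(lines)
```

Per claim 8, this text could equally be dictated by voice; the prompt string is the same either way.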
4. The method according to claim 3, characterized in that generating a control instruction according to the type of the equipment to be controlled and the action to be executed and sending the control instruction to the equipment to be controlled comprises:
If exactly one piece of equipment in the smart home system is of the equipment type to be controlled, taking that equipment as the equipment to be controlled;
and generating a control instruction of the equipment to be controlled according to the action to be executed and sending the control instruction to the equipment to be controlled.
5. The method according to claim 3, characterized in that generating a control instruction according to the type of the equipment to be controlled and the action to be executed and sending the control instruction to the equipment to be controlled comprises:
If at least two pieces of candidate equipment in the smart home system are of the equipment type to be controlled, acquiring the receiving position at which the user voice data was received;
taking the candidate equipment located at the receiving position as the equipment to be controlled;
and generating a control instruction of the equipment to be controlled according to the action to be executed and sending the control instruction to the equipment to be controlled.
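The target-resolution logic of claims 4 and 5 can be sketched together: a single match is used directly, and an ambiguous match is broken by the position where the voice data was received. The data shapes below are assumptions for illustration:

```python
def resolve_target(devices, wanted_type, receiving_position=None):
    """Pick the equipment to control: if exactly one device matches the
    wanted type, use it (claim 4); if several match, prefer the candidate
    located at the position where the voice data was received (claim 5)."""
    candidates = [d for d in devices if d["type"] == wanted_type]
    if len(candidates) == 1:
        return candidates[0]
    if receiving_position is not None:
        located = [d for d in candidates if d.get("position") == receiving_position]
        if len(located) == 1:
            return located[0]
    return None  # not present or still ambiguous; caller must ask back or report an error
```

Returning `None` for the ambiguous case is a design choice of this sketch; the claims do not specify a fallback.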
6. The method as recited in claim 1, further comprising:
Sending second system information and an output form requirement to the large language model, wherein the second system information comprises equipment types, equipment positions and equipment functions, and the output form requirement is used for instructing the large language model to output the control word in the preset form, the preset form comprising the structure and the format of the control word.
7. The method according to claim 3 or 6, further comprising:
Sending a type name and a classification requirement to the large language model, wherein the type name indicates the type of the user intention when the user intention is to control a household appliance, the classification requirement indicates how to classify the user intention, and different types of user intention correspond to different preset forms.
8. The method as recited in claim 7, further comprising:
When the output form requirement, the type name, the classification requirement, the first system information or the second system information is sent to the large language model, it is input by voice or by text.
9. The method as recited in claim 1, further comprising: causing the large language model to output a reply corresponding to the user voice data when it recognizes that the user intention is not to control a household appliance;
and when receiving the reply output by the large language model, controlling the playing device and/or the display device to inform the user of the reply.
10. The method as recited in claim 1, further comprising:
Acquiring historical operation data and historical state data of the smart home system, and acquiring current environmental parameters at the current moment, wherein the current environmental parameters comprise an indoor temperature and an outdoor temperature, the historical state data are the historical environmental parameters and the historical operation parameters of all household appliances in the smart home system at each moment in the historical time, and the historical operation data are the adjustment values of all household appliances in the smart home system at the same moments as the historical state data;
Constructing a virtual environment model based on the historical operation data, the historical state data and a prediction algorithm, wherein the prediction algorithm comprises any one of the following: GRU, LSTM, ARIMAX;
Inputting current environmental parameters into the virtual environment model, and training based on a reinforcement learning algorithm to obtain a target model; the reinforcement learning algorithm includes any one of the following: DQN, SAC, PPO;
When the action to be executed does not include a specific value, generating a control instruction according to the type of the equipment to be controlled and the action to be executed comprises:
Inputting the current environmental parameters into the target model to obtain specific numerical values;
And generating a control instruction according to the specific value and the action to be executed.
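The pipeline of claim 10 (learn an environment model from history, train a policy against it, then query the policy for the missing specific value) can be illustrated with a deliberately tiny stand-in: a hand-written `virtual_env` function plays the role of the GRU/LSTM/ARIMAX model, and tabular Q-learning stands in for DQN/SAC/PPO. Everything here is a toy sketch, not the claimed implementation:

```python
import random

def train_setpoint_policy(virtual_env, temps, setpoints, episodes=500,
                          alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    """Toy stand-in for claim 10: `virtual_env(temp, setpoint)` returns
    (next_temp, reward) and plays the learned environment model; a tabular
    epsilon-greedy Q-learner stands in for DQN/SAC/PPO. The trained table
    is queried with the current temperature to obtain the specific value
    missing from the control word."""
    rng = random.Random(seed)
    q = {(t, s): 0.0 for t in temps for s in setpoints}
    for _ in range(episodes):
        temp = rng.choice(temps)
        for _ in range(10):
            if rng.random() < epsilon:
                sp = rng.choice(setpoints)          # explore
            else:
                sp = max(setpoints, key=lambda s: q[(temp, s)])  # exploit
            nxt, reward = virtual_env(temp, sp)
            best_next = max(q[(nxt, s)] for s in setpoints)
            q[(temp, sp)] += alpha * (reward + gamma * best_next - q[(temp, sp)])
            temp = nxt
    def specific_value(current_temp):
        # Inference step: current environmental parameter in, specific value out.
        return max(setpoints, key=lambda s: q[(current_temp, s)])
    return specific_value
```

The reward function lives inside `virtual_env`; in the claimed system it would reflect user comfort or energy goals, which the source does not specify.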
11. The method according to claim 1, characterized in that, when the action to be executed does not include a specific value, generating a control instruction according to the type of the equipment to be controlled and the action to be executed comprises:
Acquiring historical operation data and historical state data of the smart home system, and acquiring current environment parameters and current operation parameters at the current moment, wherein the current environment parameters comprise an indoor temperature and an outdoor temperature, the historical state data are the historical environment parameters and the historical operation parameters of all household appliances in the smart home system at each moment in the historical time, and the historical operation data are the adjustment values of all household appliances in the smart home system at the same moments as the historical state data;
Obtaining, by a vector extraction model, a first text vector of the current environment parameters and the current operation parameters, and a second text vector of the historical environment parameters and the historical operation parameters at each moment in the historical time;
calculating cosine similarity of the first text vector and each second text vector;
Taking, as the specific value, the adjustment value in the historical operation data at the moment corresponding to the second text vector with the maximum cosine similarity;
And generating a control instruction according to the specific value and the action to be executed.
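The retrieval step of claim 11 is a nearest-neighbour lookup by cosine similarity. A minimal sketch, assuming the vector extraction model has already produced the vectors (the `history` record shape is hypothetical):

```python
import math

def nearest_adjustment(current_vec, history):
    """Claim 11 sketch: compare the current parameters' vector with each
    historical moment's vector by cosine similarity, and reuse the
    adjustment value recorded at the most similar moment as the specific
    value for the control instruction."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0
    best = max(history, key=lambda item: cos(current_vec, item["vector"]))
    return best["adjustment"]
```

A production system would typically batch this with a vector library rather than a Python loop, but the logic is the same.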
12. The method as recited in claim 1, further comprising:
When the control word is received, if the smart home system comprises equipment of the equipment type to be controlled and that equipment can execute the action to be executed, generating a control instruction according to the type of the equipment to be controlled and the action to be executed and sending the control instruction to the equipment to be controlled;
If the smart home system does not comprise equipment of the equipment type to be controlled and/or the equipment of the equipment type to be controlled cannot execute the action to be executed, sending an error prompt to the large language model so that the large language model regenerates and outputs the control word.
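The validate-and-retry loop of claim 12 might look like the following sketch; the retry cap, the capability map, and the callback names are illustrative additions, since the claim itself does not bound the number of regenerations:

```python
def dispatch_with_validation(control_word, capabilities, send, llm_retry,
                             max_retries=2):
    """Claim 12 sketch: dispatch only when the system actually has the
    equipment type and that type supports the action; otherwise feed an
    error prompt back to the large language model so it regenerates the
    control word. `max_retries` is an assumed safeguard, not in the claim."""
    for _ in range(max_retries + 1):
        dev_type = control_word["device_type"]
        action = control_word["action"]
        supported = capabilities.get(dev_type)
        if supported is not None and action in supported:
            send({"target": dev_type, "command": action})
            return True
        control_word = llm_retry(
            f"No equipment of type '{dev_type}' supporting '{action}'; "
            "please output a corrected control word.")
    return False
```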
13. A smart home system voice control device, characterized by comprising:
The voice data receiving module is used for receiving user voice data;
The voice data sending module is used for sending the user voice data to a large language model if the user voice data is not a preset recognition word, so that, when the large language model recognizes that the user intends to control a household appliance, it outputs a control word in a preset form, the control word comprising a type of equipment to be controlled and an action to be executed;
And the control instruction generation module is used for generating a control instruction according to the type of the equipment to be controlled and the action to be executed and sending the control instruction to the equipment to be controlled when the control word is received.
14. Smart home system voice control equipment, characterized by comprising:
at least one processor and at least one memory;
The memory stores instructions executable by the processor;
The processor is configured to perform the method of any of claims 1-12.
15. A smart home system, characterized in that the smart home system applies the method of any one of claims 1-12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410539822.0A CN118248145A (en) | 2024-04-30 | 2024-04-30 | Smart home system voice control method, device, equipment and smart home system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118248145A | 2024-06-25
Family
ID=91555607
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410539822.0A | Smart home system voice control method, device, equipment and smart home system | 2024-04-30 | 2024-04-30
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118248145A (en) |
- 2024-04-30: CN application CN202410539822.0A filed (publication CN118248145A), status Pending
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||