
CN114253612A - Control method and control system - Google Patents

Control method and control system

Info

Publication number
CN114253612A
Authority
CN
China
Prior art keywords
frame
face
face frame
preset
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111410583.1A
Other languages
Chinese (zh)
Other versions
CN114253612B (en)
Inventor
张旦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Qigan Electronic Information Technology Co ltd
Original Assignee
Shanghai Qigan Electronic Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Qigan Electronic Information Technology Co ltd
Priority to CN202111410583.1A
Publication of CN114253612A
Application granted
Publication of CN114253612B
Legal status: Active (Current)
Anticipated expiration

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44 Arrangements for executing specific programs
    • G06F9/4401 Bootstrapping
    • G06F9/4418 Suspend and resume; Hibernate and awake
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 User authentication
    • G06F21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Hardware Design (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a control method for controlling the wake-up of an electronic device. The method comprises: acquiring video stream data and performing face detection on the video stream data to obtain a face frame; performing face recognition on the image in the face frame to determine the degree to which the image in the face frame matches a preset user image; and, when that matching degree reaches a preset threshold, starting voice recognition to decide whether to control the electronic device to wake up. Controlling wake-up from a video stream allows the electronic device to be woken in time and increases its start-up speed, while combining face recognition with voice recognition improves the security of the electronic device at start-up. The invention also provides a control system for implementing the control method.

Description

Control method and control system
Technical Field
The present invention relates to the field of control systems, and in particular, to a control method and a control system.
Background
Existing electronic equipment needs to be woken manually after the operator arrives; it cannot be made ready before the operator arrives, so the operator cannot use the electronic equipment immediately, which makes the equipment inconvenient to use.
Therefore, there is a need to provide a novel control method and control system to solve the above problems in the prior art.
Disclosure of Invention
The invention aims to provide a control method and a control system that automatically control the wake-up of an electronic device, increasing the start-up speed of the electronic device and improving its security at start-up.
In order to achieve the above object, the control method of the present invention is used for controlling the wake-up of an electronic device, and includes the following steps:
S1: acquiring video stream data, and then carrying out face detection on the video stream data to obtain a face frame;
S2: performing face recognition on the image in the face frame to determine the matching degree of the image in the face frame and a preset user image;
S3: and when the matching degree of the image in the face frame and a preset user image reaches a preset threshold value, starting voice recognition to judge whether to control the electronic equipment to wake up.
The control method has the following beneficial effects: video stream data is acquired and face detection is performed on it to obtain a face frame; face recognition is performed on the image in the face frame to determine the degree to which it matches a preset user image; and, when that matching degree reaches a preset threshold, voice recognition is started to decide whether to wake the electronic device. Controlling wake-up from a video stream allows the electronic device to be woken in time and increases its start-up speed, while combining face recognition with voice recognition improves the security of the electronic device at start-up.
Optionally, before the step S2 is executed, a first face frame comparison step is further included, where the first face frame comparison step includes:
comparing the face frame with a first preset face frame to judge whether the face frame is larger than the first preset face frame;
if the face frame is larger than the first preset face frame, the step S2 is executed. The beneficial effects are: using this as the condition for starting face recognition avoids unnecessary face recognition and improves face recognition efficiency.
Optionally, before executing the step S2, a frame counting step is further included, where the frame counting step includes:
if the face frame is judged to be smaller than or equal to a first preset face frame, adding 1 to the frame count to obtain a new frame count, and replacing the frame count with the new frame count;
comparing the new frame count with a preset frame count to judge whether the new frame count is greater than the preset frame count;
and if the new frame count is judged to be larger than the preset frame count, obtaining the movement trend of the target according to the change trend of the face frame.
Optionally, before the step S2 is executed, a second face frame comparison step is further included, where the second face frame comparison step includes:
judging whether the movement trend of the target is close to the electronic equipment or not;
if the motion trend of the target is judged to be close to the electronic equipment, comparing the newly obtained face frame with a second preset face frame to judge whether the newly obtained face frame is larger than the second preset face frame;
if the newly obtained face frame is judged to be larger than the second preset face frame, the step S2 is executed. The beneficial effects are: using this as the condition for starting face recognition avoids unnecessary face recognition, improves face recognition efficiency, and prevents face recognition from going unstarted for a long time.
Optionally, the acquiring video stream data and then performing face detection on the video stream data to obtain a face frame includes:
acquiring each frame of image in video stream data, and sequentially carrying out face detection on each frame of image according to the sequence of each frame of image in the video stream data to obtain a face frame.
Optionally, if face detection on the same frame of image yields at least two face frames, only the face frame closest to the central point of the image is obtained. The beneficial effect is: acquiring a single face frame ensures that subsequent processing proceeds correctly.
Optionally, the starting voice recognition to determine whether to control the electronic device to wake up includes:
collecting sound information, and then comparing the sound information with pre-stored sound information to judge whether the sound information is matched with the pre-stored sound information;
and if the sound information is judged to be matched with the pre-stored sound information, controlling the electronic equipment to wake up.
The invention also provides a control system, which comprises a video stream data acquisition unit, a face detection unit, a face recognition unit, a voice recognition unit and a wake-up unit, wherein the video stream data acquisition unit is used for acquiring video stream data; the face detection unit is used for receiving the video stream data acquired by the video stream data acquisition unit and then carrying out face detection on the video stream data to obtain a face frame; the face recognition unit is used for carrying out face recognition on the image in the face frame so as to determine the matching degree of the image in the face frame and a preset user image, and starting voice recognition to judge whether to control the electronic equipment to wake up or not after the matching degree of the image in the face frame and the preset user image reaches a preset threshold value; the awakening unit is used for awakening the electronic equipment.
The control system has the following advantages: the video stream data acquisition unit acquires video stream data; the face detection unit receives the video stream data acquired by the video stream data acquisition unit and performs face detection on it to obtain a face frame; the face recognition unit performs face recognition on the image in the face frame to determine the degree to which it matches a preset user image and, once that matching degree reaches a preset threshold, starts voice recognition to decide whether to wake the electronic device; and the wake-up unit wakes the electronic device. Controlling wake-up from a video stream allows the electronic device to be woken in time and increases its start-up speed, while combining face recognition with voice recognition improves the security of the electronic device at start-up.
Optionally, the control system further includes a first face frame comparing unit, where the first face frame comparing unit is configured to compare the face frame with a first preset face frame, so as to determine whether the face frame is larger than the first preset face frame.
Optionally, the control system further comprises a frame counting unit, wherein the frame counting unit is configured to add 1 to the frame count to obtain a new frame count, and then replace the frame count with the new frame count.
Optionally, the control system further includes a frame count comparison unit, where the frame count comparison unit is configured to determine whether the new frame count is greater than a preset frame count, and if the new frame count is greater than the preset frame count, obtain a motion trend of the target according to a change trend of the face frame.
Optionally, the control system further includes a second face frame comparison unit, where the second face frame comparison unit is configured to determine whether a motion trend of the target is approaching the electronic device, and if the motion trend of the target is approaching the electronic device, compare the newly obtained face frame with a second preset face frame to determine whether the newly obtained face frame is larger than the second preset face frame.
Drawings
FIG. 1 is a block diagram of a control system according to the present invention;
FIG. 2 is a flow chart of a control method of the present invention;
fig. 3 is a schematic diagram of two face frames appearing in one frame image according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention. Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this invention belongs. As used herein, the word "comprising" and similar words are intended to mean that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items.
In order to solve the problems in the prior art, an embodiment of the present invention provides a control system, and referring to fig. 1, the control system 100 includes a video stream data obtaining unit 101, a face detection unit 102, a face recognition unit 103, a voice recognition unit 104, and a wake-up unit 105, where the video stream data obtaining unit 101 is configured to obtain video stream data; the face detection unit 102 is configured to receive video stream data acquired by the video stream data acquisition unit 101, and then perform face detection on the video stream data to obtain a face frame; the face recognition unit 103 is configured to perform face recognition on the image in the face frame to determine a matching degree between the image in the face frame and a preset user image, and start voice recognition to determine whether to control the electronic device to wake up after the matching degree between the image in the face frame and the preset user image reaches a preset threshold; the wake-up unit 105 is configured to wake up the electronic device.
In some embodiments, the control system may further include a first face frame comparing unit, where the first face frame comparing unit is configured to compare the face frame with a first preset face frame to determine whether the face frame is larger than the first preset face frame.
In some embodiments, the control system may further comprise a frame count unit for adding n to the frame count to obtain a new frame count and then replacing the frame count with the new frame count, n being greater than 0.
In some embodiments, the control system may further include a frame count comparison unit, where the frame count comparison unit is configured to determine whether the new frame count is greater than a preset frame count, and if the new frame count is greater than the preset frame count, obtain a motion trend of the target according to a change trend of the face frame.
In some embodiments, the control system may further include a second face frame comparing unit, where the second face frame comparing unit is configured to determine whether the motion trend of the target is approaching the electronic device, and if the motion trend of the target is approaching the electronic device, compare the latest face frame with a second preset face frame to determine whether the latest face frame is larger than the second preset face frame.
In some embodiments, the control system may further include a frame count selection unit for selecting or inputting a preset frame count.
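By way of illustration only, the unit composition described in these embodiments can be sketched as a set of cooperating objects. The Python class names, interfaces and default values below are assumptions introduced for the sketch; the invention does not prescribe any particular software structure.

```python
# Illustrative decomposition of control system 100 into its units; the class
# names, interfaces and default values are assumptions, not the claimed design.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

FaceFrame = Tuple[int, int, int, int]  # assumed representation: (x, y, width, height)

@dataclass
class FrameCountUnit:
    """Frame counting unit: adds n (> 0) to the frame count and keeps the new count."""
    count: int = 1
    step: int = 1  # "n"

    def increment(self) -> int:
        self.count += self.step
        return self.count

@dataclass
class ControlSystem:
    get_video_stream: Callable[[], object]                 # video stream data acquisition unit
    detect_faces: Callable[[object], List[FaceFrame]]      # face detection unit
    recognize_face: Callable[[object, FaceFrame], float]   # face recognition unit (matching degree)
    recognize_voice: Callable[[], bool]                    # voice recognition unit
    wake_up: Callable[[], None]                            # wake-up unit
    frame_counter: FrameCountUnit = field(default_factory=FrameCountUnit)
    preset_frame_count: int = 30                           # value chosen via the frame count selection unit
```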
FIG. 2 is a flow chart of a control method in some embodiments of the invention. Referring to fig. 2, the control method is implemented by the control system for controlling wake-up of the electronic device, and includes the following steps:
S1: acquiring video stream data, and then carrying out face detection on the video stream data to obtain a face frame;
S2: performing face recognition on the image in the face frame to determine the matching degree of the image in the face frame and a preset user image;
S3: and when the matching degree of the image in the face frame and a preset user image reaches a preset threshold value, starting voice recognition to judge whether to control the electronic equipment to wake up.
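A minimal Python sketch of steps S1 to S3 is given below. It assumes OpenCV's Haar cascade detector for step S1, a simple histogram comparison standing in for the face-recognition unit of step S2, and a stub for the voice-recognition gate of step S3; the detector, the matching measure, the threshold value 0.8 and the wake-up action are all illustrative assumptions, not the claimed implementation.

```python
# Minimal sketch of steps S1-S3; detector, matcher, threshold and wake-up
# action are illustrative assumptions, not the claimed implementation.
import cv2

FACE_MATCH_THRESHOLD = 0.8  # "preset threshold" -- illustrative value

def match_face(face_gray, user_gray) -> float:
    """Stand-in for the face recognition unit: crude histogram-based matching degree."""
    h1 = cv2.calcHist([cv2.resize(face_gray, (64, 64))], [0], None, [32], [0, 256])
    h2 = cv2.calcHist([cv2.resize(user_gray, (64, 64))], [0], None, [32], [0, 256])
    return cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL)

def match_voice() -> bool:
    """Stand-in for the voice recognition unit of step S3."""
    return False  # replace with matching against the pre-stored sound information

def run_wakeup_pipeline(video_source, preset_user_image) -> bool:
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    user_gray = cv2.cvtColor(preset_user_image, cv2.COLOR_BGR2GRAY)
    cap = cv2.VideoCapture(video_source)                      # S1: acquire video stream data
    try:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = detector.detectMultiScale(gray, 1.1, 5)   # S1: face detection -> face frames
            for (x, y, w, h) in faces:
                degree = match_face(gray[y:y + h, x:x + w], user_gray)
                if degree >= FACE_MATCH_THRESHOLD:            # S2: matching degree reaches threshold
                    if match_voice():                         # S3: start voice recognition
                        return True                           # control the electronic device to wake up
    finally:
        cap.release()
    return False
```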
In some embodiments, the face detection and face recognition are performed by a trained neural network model, and the training method of the neural network model includes: collecting a custom data set containing face pictures; performing normalization preprocessing on the face pictures in RGB format; obtaining a custom training network model that is compressed based on YOLOv4; inputting the normalized face pictures into the custom training network model for training; calculating the training loss of the face pictures with a loss function, back-propagating the training loss to update the training network model, and finishing training when the performance of the training network model on a verification set meets a preset threshold; and pruning the trained network model and then training the pruned model on all of the data at least ten more times to obtain the trained neural network model for face detection or face recognition. Face detection and face recognition may also be performed in other ways; their implementation is not specifically limited here.
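A hedged outline of this training procedure, written with PyTorch, is sketched below. The tiny backbone stands in for the network compressed from YOLOv4, and the loss function, pruning amount, validation metric and epoch limits are illustrative assumptions only.

```python
# Hedged outline of the described training flow; TinyDetector is a placeholder
# for the network compressed from YOLOv4, and all hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def normalize_rgb(images: torch.Tensor) -> torch.Tensor:
    """Normalization preprocessing of RGB face pictures (scale pixel values to [0, 1])."""
    return images.float() / 255.0

class TinyDetector(nn.Module):
    """Placeholder backbone standing in for the compressed custom training network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 4))  # 4 box values

    def forward(self, x):
        return self.net(x)

def evaluate(model, val_loader) -> float:
    """Stand-in validation score; a real system would measure detection accuracy."""
    model.eval()
    with torch.no_grad():
        losses = [float(nn.functional.smooth_l1_loss(model(normalize_rgb(x)), y))
                  for x, y in val_loader]
    model.train()
    return 1.0 / (1.0 + sum(losses) / max(len(losses), 1))

def train(model, train_loader, val_loader, val_threshold=0.9, max_epochs=100):
    criterion = nn.SmoothL1Loss()                      # illustrative loss function
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    def one_epoch():
        for images, boxes in train_loader:
            loss = criterion(model(normalize_rgb(images)), boxes)
            optimizer.zero_grad()
            loss.backward()                            # back-propagate the training loss
            optimizer.step()

    for _ in range(max_epochs):
        one_epoch()
        if evaluate(model, val_loader) >= val_threshold:   # validation meets preset threshold
            break

    # Network pruning, then train on all of the data at least ten more times.
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=0.2)
    for _ in range(10):
        one_epoch()
    return model
```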
In some embodiments, before performing the step S2, a first face frame comparison step may be further included, where the first face frame comparison step includes: comparing the face frame with a first preset face frame to judge whether the face frame is larger than the first preset face frame; if the face frame is larger than the first predetermined face frame, the step S2 is executed.
In some embodiments, before performing the step S2, a frame counting step may be further included, the frame counting step including: if the face frame is judged to be smaller than or equal to a first preset face frame, adding 1 to the frame count to obtain a new frame count, and replacing the frame count with the new frame count; comparing the new frame count with a preset frame count to judge whether the new frame count is greater than the preset frame count; and if the new frame count is judged to be larger than the preset frame count, obtaining the movement trend of the target according to the change trend of the face frame.
In some embodiments, taking as an example a frame count of 1, a first preset face frame length of 6 and a first preset face frame height of 7:
the coordinates of the four vertices of the face frame in the ninth frame image of the video stream data are (10, 11), (15, 11), (10, 6) and (15, 6), so the length of the face frame in the ninth frame image is 5 and its height is 5;
the coordinates of the four vertices of the face frame in the tenth frame image are (9.75, 11.25), (15.25, 11.25), (9.75, 5.75) and (15.25, 5.75), so the length of the face frame in the tenth frame image is 5.5 and its height is 5.5;
the coordinates of the four vertices of the face frame in the eleventh frame image are (9.5, 11.5), (15.5, 11.5), (9.5, 5.5) and (15.5, 5.5), so the length of the face frame in the eleventh frame image is 6 and its height is 6;
it is judged that the length of the face frame in the ninth frame image is smaller than the length of the first preset face frame and that its height is smaller than the height of the first preset face frame, so the face frame in the ninth frame image is smaller than the first preset face frame; the frame count is 1, and adding 1 gives 2, i.e. the new frame count is 2; the frame count is then replaced with the new frame count, so the frame count is also 2;
it is judged that the length of the face frame in the tenth frame image is smaller than the length of the first preset face frame and that its height is smaller than the height of the first preset face frame, so the face frame in the tenth frame image is smaller than the first preset face frame; the frame count is 2, and adding 1 gives 3, i.e. the new frame count is 3; the frame count is then replaced with the new frame count, so the frame count is also 3;
it is judged that the length of the face frame in the eleventh frame image is equal to the length of the first preset face frame and that its height is smaller than the height of the first preset face frame, so the face frame in the eleventh frame image is smaller than the first preset face frame; the frame count is 3, and adding 1 gives 4, i.e. the new frame count is 4; the frame count is then replaced with the new frame count, so the frame count is also 4.
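The worked example above can be reproduced with a short computation over the vertex coordinates; the variable names and the comparison rule written out below are assumptions made for the sketch.

```python
# Reproduces the frame count example: the face frame of each image is compared
# with a first preset face frame of length 6 and height 7; names are illustrative.
PRESET_LENGTH, PRESET_HEIGHT = 6, 7

face_frame_vertices = {
    "ninth":    [(10, 11), (15, 11), (10, 6), (15, 6)],
    "tenth":    [(9.75, 11.25), (15.25, 11.25), (9.75, 5.75), (15.25, 5.75)],
    "eleventh": [(9.5, 11.5), (15.5, 11.5), (9.5, 5.5), (15.5, 5.5)],
}

frame_count = 1
for name, vertices in face_frame_vertices.items():
    xs, ys = [x for x, _ in vertices], [y for _, y in vertices]
    length, height = max(xs) - min(xs), max(ys) - min(ys)
    if length <= PRESET_LENGTH and height <= PRESET_HEIGHT:  # not larger than the preset frame
        frame_count += 1                                     # frame count + 1 becomes the new frame count
    print(f"{name} frame image: length={length}, height={height}, frame count={frame_count}")
# Prints frame counts of 2, 3 and 4 for the ninth, tenth and eleventh frame images.
```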
In some embodiments, before performing step S2, a second face frame comparison step may be further included, where the second face frame comparison step includes: judging whether the movement trend of the target is close to the electronic equipment or not; if the motion trend of the target is judged to be close to the electronic equipment, comparing the newly obtained face frame with a second preset face frame to judge whether the newly obtained face frame is larger than the second preset face frame; if the latest face frame obtained is judged to be larger than a second preset face frame, the step S2 is executed.
In some embodiments, the obtaining of the movement trend of the target according to the variation trend of the face frame includes: sequentially comparing the sizes of adjacent face frames in the order in which the face frames were acquired; multiplying a first calculated value by a first amplification threshold or a first reduction threshold according to the comparison result of the sizes of the adjacent face frames to obtain a new first calculated value, and then replacing the first calculated value with the new first calculated value; and comparing the new first calculated value with a first judgment threshold to obtain the variation trend of the face frame, and thereby the movement trend of the target. The first calculated value is greater than 0.
In some embodiments, the first amplification threshold is greater than 1 and less than 2, the first reduction threshold is greater than 0 and less than 1, and the sum of the first amplification threshold and the first reduction threshold is 2. For example, the first amplification threshold is 1.2 and the first reduction threshold is 0.8; as another example, the first amplification threshold is 1.1 and the first reduction threshold is 0.9.
In some embodiments, the comparing of the new first calculated value with the first judgment threshold to obtain the variation trend of the face frame, and thereby the movement trend of the target, includes: comparing the new first calculated value with the first judgment threshold to obtain the variation trend of the face frame; if the new first calculated value is larger than the first judgment threshold, judging that the face frame is growing larger and larger, from which the movement trend of the target is obtained as approaching the electronic device; and if the new first calculated value is smaller than the first judgment threshold, judging that the face frame is growing smaller and smaller, from which the movement trend of the target is obtained as moving away from the electronic device.
In some embodiments, the first judgment threshold is equal to the initial value of the first calculated value.
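The multiplicative trend estimate of these embodiments can be sketched as follows. Comparing face-frame areas, the constants 1.2 and 0.8, and the initial value 1.0 are illustrative choices consistent with the ranges stated above, not values fixed by the invention.

```python
# Illustrative motion-trend estimate from consecutive face frames given as
# (x, y, width, height); the constants follow the ranges stated above.
FIRST_AMPLIFICATION_THRESHOLD = 1.2   # greater than 1 and less than 2
FIRST_REDUCTION_THRESHOLD = 0.8       # the two thresholds sum to 2
INITIAL_CALCULATED_VALUE = 1.0        # also used as the first judgment threshold

def motion_trend(face_frames):
    """Return 'approaching', 'receding' or 'unclear' for the target."""
    value = INITIAL_CALCULATED_VALUE
    for previous, current in zip(face_frames, face_frames[1:]):
        if current[2] * current[3] > previous[2] * previous[3]:    # adjacent face frame grew
            value *= FIRST_AMPLIFICATION_THRESHOLD
        elif current[2] * current[3] < previous[2] * previous[3]:  # adjacent face frame shrank
            value *= FIRST_REDUCTION_THRESHOLD
    if value > INITIAL_CALCULATED_VALUE:   # face frame trend is growing
        return "approaching"
    if value < INITIAL_CALCULATED_VALUE:   # face frame trend is shrinking
        return "receding"
    return "unclear"

# The growing face frames of the worked example give an "approaching" target.
print(motion_trend([(10, 6, 5, 5), (9.75, 5.75, 5.5, 5.5), (9.5, 5.5, 6.0, 6.0)]))
```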
In some embodiments, before performing step S1, the method further comprises a frame count selecting step, wherein the frame count selecting step comprises: a preset frame count is selected or entered.
In some embodiments, the obtaining video stream data and then performing face detection on the video stream data to obtain a face box includes: acquiring each frame of image in video stream data, and sequentially carrying out face detection on each frame of image according to the sequence of each frame of image in the video stream data to obtain a face frame.
In some embodiments, if the face detection is performed on the same frame of image to obtain at least two face frames, only the face frame closest to the center point of the image is obtained.
Fig. 3 is a schematic diagram of two face frames appearing in one frame image according to the present invention. Referring to fig. 3, the image includes a central point 201, a left face frame 202, and a right face frame 203 of the image, where the central point 201 is an intersection of diagonal lines of the image, a distance from the left face frame 202 to the central point 201 is smaller than a distance from the right face frame 203 to the central point 201, and when the face frame in the image is obtained, only the left face frame 202 is obtained.
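Selecting the face frame nearest the central point of the image, as illustrated in Fig. 3, can be sketched as below; representing a face frame as (x, y, width, height) is an assumption of the sketch.

```python
# Keep only the face frame whose centre is nearest the central point of the
# image (Fig. 3); representing a frame as (x, y, width, height) is an assumption.
import math

def select_face_frame(face_frames, image_width, image_height):
    """Return the single face frame closest to the image centre."""
    cx, cy = image_width / 2.0, image_height / 2.0  # central point 201: intersection of the diagonals

    def distance_to_centre(frame):
        x, y, w, h = frame
        return math.hypot(x + w / 2.0 - cx, y + h / 2.0 - cy)

    return min(face_frames, key=distance_to_centre)

# A left face frame 202 nearer the centre than a right face frame 203 is kept.
print(select_face_frame([(100, 200, 80, 80), (500, 210, 80, 80)], 640, 480))
```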
In some embodiments, the starting voice recognition to determine whether to control the electronic device to wake up includes: collecting sound information, and then comparing the sound information with pre-stored sound information to judge whether the sound information matches the pre-stored sound information; and, if the sound information is judged to match the pre-stored sound information, controlling the electronic device to wake up. For example, if the pre-stored sound information is "hello computer" and the collected sound information is "hello computer", the sound information is judged to match the pre-stored sound information and the electronic device is controlled to wake up; if the collected sound information is not "hello computer", the sound information is judged not to match the pre-stored sound information, the electronic device is not controlled to wake up, and sound information continues to be collected.
In some embodiments, the starting voice recognition to determine whether to control the electronic device to wake up may further include: and starting timing while starting voice recognition, if the timing exceeds preset time and the electronic equipment is not controlled to wake up, emptying all the acquired face frames, and re-executing the step S1.
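A sketch of the voice-recognition gate with the timeout behaviour of these embodiments is given below; the sound-capture callback, the 10-second preset time and the pre-stored phrase are illustrative assumptions.

```python
# Voice-recognition gate with the timeout behaviour described above; the
# capture callback, the 10-second preset time and the phrase are assumptions.
import time
from typing import Callable, Optional

PRESET_TIME_SECONDS = 10.0
PRE_STORED_SOUND = "hello computer"

def voice_wakeup(capture_sound: Callable[[], Optional[str]]) -> bool:
    """Return True to wake the device, or False once the preset time has elapsed."""
    start = time.monotonic()                       # start timing when voice recognition starts
    while time.monotonic() - start < PRESET_TIME_SECONDS:
        heard = capture_sound()                    # collect sound information
        if heard == PRE_STORED_SOUND:              # matches the pre-stored sound information
            return True                            # control the electronic device to wake up
    return False  # timed out: the caller clears all acquired face frames and re-executes step S1
```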
Although the embodiments of the present invention have been described in detail hereinabove, it is apparent to those skilled in the art that various modifications and variations can be made to these embodiments. However, it is to be understood that such modifications and variations are within the scope and spirit of the present invention as set forth in the following claims. Moreover, the invention as described herein is capable of other embodiments and of being practiced or of being carried out in various ways.

Claims (12)

1. A control method for controlling wake-up of an electronic device, comprising the steps of:
S1: acquiring video stream data, and then carrying out face detection on the video stream data to obtain a face frame;
S2: performing face recognition on the image in the face frame to determine the matching degree of the image in the face frame and a preset user image;
S3: and when the matching degree of the image in the face frame and a preset user image reaches a preset threshold value, starting voice recognition to judge whether to control the electronic equipment to wake up.
2. The control method according to claim 1, wherein before performing the step S2, the method further comprises a first face frame comparison step, and the first face frame comparison step comprises:
comparing the face frame with a first preset face frame to judge whether the face frame is larger than the first preset face frame;
if the face frame is larger than the first preset face frame, the step S2 is executed.
3. The control method according to claim 2, further comprising a frame counting step before performing the step S2, the frame counting step comprising:
if the face frame is judged to be smaller than or equal to a first preset face frame, adding 1 to the frame count to obtain a new frame count, and replacing the frame count with the new frame count;
comparing the new frame count with a preset frame count to judge whether the new frame count is greater than the preset frame count;
and if the new frame count is judged to be larger than the preset frame count, obtaining the movement trend of the target according to the change trend of the face frame.
4. The control method according to claim 3, wherein before executing the step S2, the method further comprises a second face frame comparison step, wherein the second face frame comparison step comprises:
judging whether the movement trend of the target is close to the electronic equipment or not;
if the motion trend of the target is judged to be close to the electronic equipment, comparing the newly obtained face frame with a second preset face frame to judge whether the newly obtained face frame is larger than the second preset face frame;
if the newly obtained face frame is judged to be larger than the second preset face frame, the step S2 is executed.
5. The control method according to claim 1, wherein the obtaining video stream data and then performing face detection on the video stream data to obtain a face frame comprises:
acquiring each frame of image in video stream data, and sequentially carrying out face detection on each frame of image according to the sequence of each frame of image in the video stream data to obtain a face frame.
6. The control method according to claim 5, wherein if at least two face frames are obtained by performing face detection on the same frame of the image, only the face frame closest to the center point of the image is obtained.
7. The control method according to claim 1, wherein the starting voice recognition to determine whether to control the electronic device to wake up comprises:
collecting sound information, and then comparing the sound information with pre-stored sound information to judge whether the sound information is matched with the pre-stored sound information;
and if the sound information is judged to be matched with the pre-stored sound information, controlling the electronic equipment to wake up.
8. A control system for implementing the control method according to any one of claims 1 to 7, the control system comprising a video stream data acquisition unit for acquiring video stream data, a face detection unit, a face recognition unit, a voice recognition unit, and a wake-up unit; the face detection unit is used for receiving the video stream data acquired by the video stream data acquisition unit and then carrying out face detection on the video stream data to obtain a face frame; the face recognition unit is used for carrying out face recognition on the image in the face frame so as to determine the matching degree of the image in the face frame and a preset user image, and starting voice recognition to judge whether to control the electronic equipment to wake up or not after the matching degree of the image in the face frame and the preset user image reaches a preset threshold value; the awakening unit is used for awakening the electronic equipment.
9. The control system according to claim 8, further comprising a first face frame comparison unit, wherein the first face frame comparison unit is configured to compare the face frame with a first preset face frame to determine whether the face frame is larger than the first preset face frame.
10. The control system of claim 9, further comprising a frame count unit configured to increment a frame count by 1 to obtain a new frame count and then replace the frame count with the new frame count.
11. The control system according to claim 10, further comprising a frame count comparison unit, wherein the frame count comparison unit is configured to determine whether the new frame count is greater than a preset frame count, and if the new frame count is greater than the preset frame count, obtain a movement trend of the target according to a change trend of the face frame.
12. The control system according to claim 11, further comprising a second face frame comparison unit, wherein the second face frame comparison unit is configured to determine whether a movement trend of the target is approaching the electronic device, and if the movement trend of the target is approaching the electronic device, compare the newly obtained face frame with a second preset face frame to determine whether the newly obtained face frame is larger than the second preset face frame.
CN202111410583.1A 2021-11-25 2021-11-25 Control method and control system Active CN114253612B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111410583.1A CN114253612B (en) 2021-11-25 2021-11-25 Control method and control system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111410583.1A CN114253612B (en) 2021-11-25 2021-11-25 Control method and control system

Publications (2)

Publication Number Publication Date
CN114253612A (en) 2022-03-29
CN114253612B CN114253612B (en) 2024-09-06

Family

ID=80791186

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111410583.1A Active CN114253612B (en) 2021-11-25 2021-11-25 Control method and control system

Country Status (1)

Country Link
CN (1) CN114253612B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105700363A (en) * 2016-01-19 2016-06-22 深圳创维-Rgb电子有限公司 Method and system for waking up smart home equipment voice control device
CN108876998A (en) * 2018-06-13 2018-11-23 上海快鱼电子有限公司 Access control method and system unifying person and ID credential based on biometric feature recognition
CN109976506A (en) * 2017-12-28 2019-07-05 深圳市优必选科技有限公司 Awakening method of electronic equipment, storage medium and robot
CN110782568A (en) * 2018-07-13 2020-02-11 宁波其兰文化发展有限公司 Access control system based on video photography
WO2021190387A1 (en) * 2020-03-25 2021-09-30 维沃移动通信有限公司 Detection result output method, electronic device, and medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105700363A (en) * 2016-01-19 2016-06-22 深圳创维-Rgb电子有限公司 Method and system for waking up smart home equipment voice control device
CN109976506A (en) * 2017-12-28 2019-07-05 深圳市优必选科技有限公司 Awakening method of electronic equipment, storage medium and robot
CN108876998A (en) * 2018-06-13 2018-11-23 上海快鱼电子有限公司 Access control method and system unifying person and ID credential based on biometric feature recognition
CN110782568A (en) * 2018-07-13 2020-02-11 宁波其兰文化发展有限公司 Access control system based on video photography
WO2021190387A1 (en) * 2020-03-25 2021-09-30 维沃移动通信有限公司 Detection result output method, electronic device, and medium

Also Published As

Publication number Publication date
CN114253612B (en) 2024-09-06

Similar Documents

Publication Publication Date Title
CN109035246B (en) Face image selection method and device
CN106782536B (en) Voice awakening method and device
CN105632486B (en) Voice awakening method and device of intelligent hardware
CN110970016B (en) Awakening model generation method, intelligent terminal awakening method and device
TW201823927A (en) Method for awaking an intelligent robot and intelligent robot
CN110767231A (en) Voice control equipment awakening word identification method and device based on time delay neural network
CN113160815B (en) Intelligent control method, device, equipment and storage medium for voice wakeup
CN110443350B (en) Model quality detection method, device, terminal and medium based on data analysis
CN111936990A (en) Method and device for waking up screen
CN104616002A (en) Facial recognition equipment used for judging age groups
CN111128155B (en) Awakening method, device, equipment and medium for intelligent equipment
CN111312222A (en) Awakening and voice recognition model training method and device
CN108509225B (en) Information processing method and electronic equipment
CN109955257A (en) Robot awakening method and device, terminal equipment and storage medium
CN112686189A (en) Illegal user processing method and device and electronic equipment
CN108399009A (en) Method and device for waking up a smart device using human-computer interaction gestures
CN114253612A (en) Control method and control system
CN112700568B (en) Identity authentication method, equipment and computer readable storage medium
CN114253611A (en) Control method and control system
CN113821109B (en) Control method and control system
CN114253613A (en) Control method and control system
CN106909880A (en) Facial image preprocess method in recognition of face
CN106679102A (en) Air conditioner control method and device based on terminal equipment
CN111161745A (en) Awakening method, device, equipment and medium for intelligent equipment
CN117576767A (en) Digital human interaction simulation method, device and terminal based on line of sight recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant