
CN110225196B - Terminal control method and terminal equipment - Google Patents


Info

Publication number
CN110225196B
CN110225196B (granted from application CN201910464526.8A)
Authority
CN
China
Prior art keywords
user
screen image
image
target
terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910464526.8A
Other languages
Chinese (zh)
Other versions
CN110225196A (en)
Inventor
余志豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN201910464526.8A priority Critical patent/CN110225196B/en
Publication of CN110225196A publication Critical patent/CN110225196A/en
Application granted granted Critical
Publication of CN110225196B publication Critical patent/CN110225196B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Environmental & Geological Engineering (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present invention provide a terminal control method and a terminal device, relate to the field of communication technology, and aim to solve the problem that existing terminal devices can have a strongly adverse effect on minors. The terminal control method comprises the following steps: collecting a face image of a user; obtaining expression information of the user based on facial feature information in the face image; when the expression information matches a preset expression type, acquiring a target screen image that belongs to the same time period as the face image; and sending the target screen image to a target terminal. The terminal control method in the embodiments of the invention is applied to a terminal device.

Description

Terminal control method and terminal equipment
Technical Field
Embodiments of the present invention relate to the field of communication technology, and in particular to a terminal control method and a terminal device.
Background
At present, terminal devices such as smartphones are used ever more widely, not only by adults but also by minors.
On the one hand, parents and children can quickly stay in contact through terminal devices, and children can use them as learning aids. On the other hand, when using a terminal device to watch videos or play games, minors may be exposed to bloody, violent or even pornographic content, which can harm their healthy development.
Existing terminal devices can therefore have a strongly adverse effect on minors.
Disclosure of Invention
Embodiments of the present invention provide a terminal control method that aims to solve the problem that existing terminal devices can have a strongly adverse effect on minors.
In order to solve the technical problem, the invention is realized as follows:
An embodiment of the invention provides a terminal control method, applied to a terminal device, comprising: collecting a face image of a user; obtaining expression information of the user based on facial feature information in the face image; when the expression information matches a preset expression type, acquiring a target screen image that belongs to the same time period as the face image; and sending the target screen image to a target terminal.
An embodiment of the present invention further provides a terminal device, comprising: a face image acquisition module for collecting a face image of a user; an expression information obtaining module for obtaining expression information of the user based on facial feature information in the face image; a target screen image acquisition module for acquiring, when the expression information matches a preset expression type, a target screen image that belongs to the same time period as the face image; and a target screen image sending module for sending the target screen image to a target terminal.
An embodiment of the invention also provides a terminal device comprising a processor, a memory, and a computer program stored in the memory and runnable on the processor, wherein the computer program, when executed by the processor, implements the steps of the terminal control method.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the steps of the terminal control method.
In this embodiment, when a user uses a terminal device, for example to watch a video or play a game, the device collects a face image of the current user, extracts facial feature information from it, and derives the user's expression information from that feature information. When the expression information matches a preset expression type, the device can conclude that the user's expression belongs to a specific type, for example one that differs from a normal expression, and it acquires the target screen image that triggered the current expression and sends it to a target terminal. In particular, when the user is a child who sees abnormal content, the child's facial features differ from their normal state: the child may look upset, distressed or frightened. The target screen image that caused the abnormal expression is therefore captured and sent to the target terminal used by the guardian, so that the guardian sees it immediately, can give the child appropriate guidance, and can help the child use the terminal device sensibly, reducing the device's adverse effects on the child.
Drawings
Fig. 1 is one of flowcharts of a terminal control method of an embodiment of the present invention;
fig. 2 is a second flowchart of a terminal control method according to an embodiment of the present invention;
fig. 3 is a third flowchart of a terminal control method according to an embodiment of the present invention;
fig. 4 is a fourth flowchart of a terminal control method according to an embodiment of the present invention;
FIG. 5 is one of the block diagrams of a terminal device of an embodiment of the present invention;
fig. 6 is a second block diagram of the terminal device according to the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. The described embodiments are some, not all, of the embodiments of the invention. All other embodiments that a person skilled in the art can derive from them without creative effort fall within the scope of protection of the invention.
Referring to fig. 1, a flowchart of a terminal control method according to an embodiment of the present invention is shown. The method is applied to a terminal device and comprises:
step S1: the face image of a user is collected.
While the user is using the terminal device, a face image of the user is collected, preferably in real time, so that abnormalities in the user's expression can be detected promptly.
Preferably, the front camera of the terminal device is controlled to collect a face image of the user in front of the screen.
Step S2: and obtaining expression information of the user based on the facial feature information in the facial image of the user.
Preferably, facial feature information is extracted from the face image captured by the front camera using face recognition technology, including the mouth, eyebrows, eyes and so on. This facial feature information is then analyzed to obtain the user's expression information.
In general, when a person is angry, the face shows a wrinkled forehead, a fixed gaze, flared nostrils, and a mouth either squared open or tightly shut, so the collected expression information corresponds to anger. When a person is afraid, the face shows raised, straightened eyebrows, widened eyes, a raised forehead or parallel wrinkles, slightly furrowed brows and tense facial muscles, so the collected expression information corresponds to fear. When a person is sad, the face shows drooping eyebrows and drawn-down eye corners, possibly with tears, so the collected expression information corresponds to sadness. When a person is calm, the facial features can be considered in their normal state, so the collected expression information corresponds to calmness.
Thus, in this step, analysis of the facial feature information in the user's face image determines whether the user's expression is calm (the normal state) or abnormal: angry, sad, afraid, happy, excited, and so on.
For example, a threshold may be set: when the degree of match between the facial feature information in the user's face image and the facial feature information of a face image under a given emotion exceeds the threshold, the user's expression information is taken to correspond to that emotion.
More specifically, each feature point in the face image may be matched in turn against the corresponding feature point of a reference face image under a given emotion, and the resulting match degrees combined to decide whether the user's expression corresponds to that emotion.
For example, the face recognition system matches the feature information of the face image against the facial feature information of reference images under different emotions, according to the change of each piece of feature information. When the degree of match with the reference image of a given emotion reaches 80% or more, the user is recognized as being under that emotion, and the user's expression information is that emotional expression.
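The per-part matching and 80% threshold described above can be sketched as follows. This is an illustrative toy implementation, not the patent's actual algorithm: the feature vectors, reference values, and similarity measure are all assumptions.

```python
# Toy sketch of threshold-based expression matching: feature vectors for the
# mouth, eyebrows and eyes are compared against stored references for each
# emotion, and an emotion is reported when the averaged match degree reaches
# the 80% threshold mentioned in the text. All values are illustrative.
from math import sqrt

# Reference feature vectors per emotion (invented toy values; a real system
# would learn these from labelled face images).
REFERENCES = {
    "anger": {"mouth": [0.9, 0.1], "eyebrow": [0.8, 0.2], "eye": [0.7, 0.3]},
    "fear":  {"mouth": [0.2, 0.8], "eyebrow": [0.3, 0.7], "eye": [0.9, 0.1]},
    "calm":  {"mouth": [0.5, 0.5], "eyebrow": [0.5, 0.5], "eye": [0.5, 0.5]},
}

THRESHOLD = 0.8  # 80% match degree

def similarity(a, b):
    """Cosine similarity in [0, 1] between two non-negative feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def classify_expression(features):
    """Match each facial part against every emotion; return the best emotion
    whose averaged match degree reaches the threshold, else None."""
    best_emotion, best_score = None, 0.0
    for emotion, refs in REFERENCES.items():
        parts = [similarity(features[p], refs[p]) for p in refs]
        score = sum(parts) / len(parts)  # integrate per-part match degrees
        if score > best_score:
            best_emotion, best_score = emotion, score
    return best_emotion if best_score >= THRESHOLD else None
```

A face whose features exactly match a stored reference scores 1.0 for that emotion and is classified accordingly.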
Preferably, face images under specified emotions may be prestored as the reference standard for analyzing the facial feature information of captured face images.
Preferably, large numbers of face images under different emotions can be collected as big data and analyzed with image recognition technology; the main facial feature information needed for emotion recognition, such as the mouth, eyebrows and eyes, is extracted, and the common facial feature information obtained serves as the reference standard for analyzing facial feature information in face images.
Step S3: and under the condition that the expression information of the user is matched with the preset expression type, acquiring a target screen image which belongs to the same time period with the face image of the user.
Preferably, in order to find out the condition that the expression of the user is abnormal in time, expressions such as anger, sadness, fear, sadness, joy, excitement and the like in a state of extraordinary state can be used as the preset expression types.
In this step, when the expression information of the user is an abnormal expression, a target screen image belonging to the same period as the face image of the user is acquired to acquire a target screen image causing the abnormal expression of the user.
Preferably, if the expression of the face image of the user at a certain moment is abnormal, acquiring a screenshot at the moment or screen videos before and after the moment; and if the expression of the face image of the user in the whole time interval is abnormal, acquiring the screen video in the time interval or the screen videos before and after the time interval.
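The choice between a momentary screenshot and a recording that covers an abnormal interval plus a margin could be expressed as follows; the function name, margin value, and one-second cutoff are illustrative assumptions, not taken from the patent.

```python
def choose_capture(abnormal_start, abnormal_end, margin=5):
    """Per the text above: a momentary abnormal expression yields a screenshot
    at that moment; a sustained abnormal interval yields a screen recording
    covering the interval plus a margin before and after. Times are in
    seconds; the one-second cutoff for "momentary" is an assumption."""
    if abnormal_end - abnormal_start < 1:
        return ("screenshot", abnormal_start)
    return ("recording", abnormal_start - margin, abnormal_end + margin)
```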
Step S4: and sending the target screen image to the target terminal.
Preferably, the identification information of the target terminal may be entered into the terminal device in advance, so that when the user's expression is abnormal, the screen content displayed by the terminal device is sent to the target terminal promptly.
In this way, the user of the target terminal can discover the abnormal situation in time and take appropriate measures.
In particular, the terminal control method of this embodiment is well suited to a child mode of the terminal device.
For example, a child mode can be added to the system modes of the terminal device. In this mode the device can be bound to a specified target terminal and a child-monitoring function enabled; under this function the front camera automatically collects the user's face image, preferably a child's, so that the user of the target terminal, typically a guardian, can be informed promptly when the child's expression is abnormal.
In this embodiment, when a user uses a terminal device, for example to watch a video or play a game, the device collects a face image of the current user, extracts facial feature information from it, and derives the user's expression information from that feature information. When the expression information matches a preset expression type, the device can conclude that the user's expression belongs to a specific type, for example one that differs from a normal expression, and it acquires the target screen image that triggered the current expression and sends it to a target terminal. In particular, when the user is a child who sees abnormal content, the child's facial features differ from their normal state: the child may look upset, distressed or frightened. The target screen image that caused the abnormal expression is therefore captured and sent to the target terminal used by the guardian, so that the guardian sees it immediately, can give the child appropriate guidance, and can help the child use the terminal device sensibly, reducing the device's adverse effects on the child.
On the basis of the embodiment shown in fig. 1, fig. 2 shows a flowchart of a terminal control method according to an embodiment of the present invention. Before step S3, the method further includes:
Step S5: record the screen image displayed by the terminal device based on the terminal device's response to the user's input.
Under the child-monitoring function, the terminal device can automatically open the front camera to collect the user's face image and record the user's on-screen activity in real time.
Based on an input performed by the user on the terminal device, such as a touch, press or swipe, the device displays the corresponding interface on the screen. Under the child-monitoring function, the device records the screen contents as video, thereby recording the user's activity on the screen.
Preferably, the recorded video can be continuously updated on a fixed period and stored locally or in the cloud so that it can be viewed at any time. This helps a guardian understand how the child uses the terminal device, helps the child use it sensibly, and further reduces the device's adverse effects on the child.
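The periodically updated recording described above can be sketched as a bounded ring buffer of screen frames, so that old footage is discarded automatically and memory stays limited. Class and method names here are illustrative assumptions, not from the patent.

```python
# Illustrative sketch of a continuously updated screen-recording buffer:
# frames are kept in a fixed-length ring buffer (only the most recent
# footage survives), and a time window can be clipped out when an abnormal
# expression is detected. The buffer could be flushed to local storage or
# the cloud on a fixed period.
from collections import deque
import time

class ScreenRecorder:
    def __init__(self, fps=10, keep_seconds=120):
        # Keep only the last `keep_seconds` of footage at `fps` frames/sec.
        self.frames = deque(maxlen=fps * keep_seconds)

    def on_frame(self, frame):
        """Called for every captured screen frame; timestamps it."""
        self.frames.append((time.time(), frame))

    def clip(self, start, end):
        """Return the frames whose timestamps fall in [start, end]."""
        return [f for (t, f) in self.frames if start <= t <= end]
```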
Correspondingly, step S3 includes:
step S31: and under the condition that the expression information at the first moment is matched with the preset expression type, acquiring a first screen image in the period of the first moment.
Preferably, when the expression information of the user at the first time matches the preset expression type, the first screen image displayed in the period to which the first time belongs may be intercepted from the monitoring video recorded in step S5.
The first screen image is a dynamic image, such as a video, a motion picture, etc., the start time of the first screen image is earlier than the first time, and the end time of the first screen image is later than the first time.
Preferably, the first screen image within one minute period before and after the first time may be selected.
It is conceivable that the first screen image may be a still image at a certain time or a moving image at a certain time. The dynamic image is more beneficial to a guardian to know the real situation so as to determine the antecedent consequence of the abnormal expression of the child.
In this embodiment, the terminal device may monitor an operation process of the user, and the operation process of the user may be reflected in the screen display change, so as to record the screen display change as a monitoring video and acquire a face image at the same time. On one hand, the recorded monitoring video can be used for being checked at any time, so that a guardian can know the child conveniently; on the other hand, when the user is abnormal, the screen image which causes the abnormality can be captured in time so as to be sent to the guardian in time. Therefore, the child is monitored from two aspects, and the adverse effect of the terminal device on the child is reduced.
In more embodiments, in order to reduce the memory occupied by the monitoring video, the target screen image may be directly captured only when needed or within a preset time period after the moment without real-time recording.
On the basis of the embodiment shown in fig. 1, fig. 3 shows a flowchart of a terminal control method according to an embodiment of the present invention, and step S4 includes:
step S41: and sending a notification message to the target terminal.
Preferably, a notification message may be transmitted to the target terminal.
On the one hand, the notification message is a familiar function of the terminal device, which helps attract the guardian's attention and prevents it being overlooked; on the other hand, being a common function, it is easy to implement.
Notification messages include SMS messages, push notifications, and the like.
The notification message contains at least the target screen image belonging to the same period as the user's face image, and content representing the user's expression information.
Preferably, when the user's expression information matches a preset expression type, the user's face image at that moment can also be output, in video or picture form.
Further, the output face image and the target screen image belonging to the same period are sent together to the target terminal, so the guardian can combine the two to help the child.
For example, if the face image shows the child is frightened, the guardian can comfort the child, offer reassurance, and keep the child from watching videos related to the target screen image. If the target screen image contains pornographic or violent content and the face image shows the child is afraid, the child is likewise kept from related videos. If the face image shows the child smiling, the guardian can learn the child's preferences from the target screen image and communicate and educate accordingly.
Preferably, when the user's expression information matches the preset expression type, other content representing the user's expression at that moment can be generated, such as keywords: anxious, upset, and so on.
Further, this other content representing the user's expression at that moment is sent to the target terminal together with the target screen image belonging to the same period as the face image.
Preferably, the form of the content representing the user's expression information is not limited: it may be images, text, speech, and so on.
In this embodiment, the guardian is notified by a notification message. When the child shows an abnormal expression, the message contains both content representing the user's expression information and the target screen image present when the face image was captured, so the guardian sees the expression-related content and the target screen image together. This gives the guardian more reference information, making it easier to guide the child correctly and help the child grow up healthily.
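The notification message described above, combining the target screen image with content representing the user's expression, might be assembled like this. The field names and JSON format are illustrative assumptions; the patent does not specify a payload format.

```python
# Hypothetical notification payload: the target screen image (or clip), an
# expression keyword, and optionally the face image, bundled for delivery to
# the bound target terminal. All field names are illustrative.
import json
import time

def build_notification(expression, screen_clip_path, face_image_path=None):
    msg = {
        "type": "expression_alert",
        "timestamp": int(time.time()),
        "expression": expression,          # e.g. "fear", "sad", "anxious"
        "screen_image": screen_clip_path,  # target screen image or clip
    }
    if face_image_path:
        msg["face_image"] = face_image_path  # optional face snapshot
    return json.dumps(msg)
```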
On the basis of the embodiment shown in fig. 1, fig. 4 shows a flowchart of a terminal control method according to an embodiment of the present invention, and step S4 includes:
step S42: and acquiring a pre-stored first account number.
Step S43: and sending the target screen image to a target terminal corresponding to the first account.
Preferably, the pre-stored first account is an identification code of a target terminal bound in advance.
The identification code of the target terminal may be a telephone number, a social-application account, and the like.
For example, in the child mode described above, a default contact number or WeChat account, such as a guardian's phone number, can be bound; when the terminal device detects an abnormal situation, the system immediately sends a notification message to the bound number.
Alternatively, the system can immediately dial the bound number and, when the call is answered, play a preset voice message to report the abnormality.
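The pre-stored first account and its lookup could be sketched as a minimal binding store; the class and method names are assumptions for illustration only.

```python
# Minimal sketch of binding a guardian's account in child mode and looking
# it up when an alert must be sent. Names are illustrative.
class ChildModeConfig:
    def __init__(self):
        self._bound = None

    def bind(self, identifier):
        """identifier: phone number or social-app account of the guardian's
        target terminal, entered in advance."""
        self._bound = identifier

    def bound_target(self):
        """Return the pre-stored first account, or fail if none is bound."""
        if self._bound is None:
            raise RuntimeError("no guardian terminal bound")
        return self._bound
```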
In summary, to address the situation where no guardian is nearby to supervise a child using a terminal device in real time, the terminal device system itself can provide real-time monitoring. It captures the child's facial expression in real time using face recognition, analyzes the expression type, infers from the expression type that the picture the child is seeing is abnormal, for example bloody or violent content, and captures the screen accordingly.
In further embodiments, after step S3 the method may also include: analyzing the target screen image that belongs to the same time period as the user's face image, and sending it to the target terminal only if it matches a preset image.
For example, the child's facial expression is captured in real time with face recognition and its type analyzed; from the expression type it is inferred that the target screen image the child is watching is abnormal. That image is then acquired and analyzed, and only if it matches a preset image, which may be a bloody or violent image, is it sent. Adding this analysis step avoids mistaken transmissions caused by misjudging the child's facial expression, and improves the accuracy of sending the target screen image to the target terminal.
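The second-stage check described above, comparing the captured screen image against preset reference images before sending, could look like the following. A real system would use a trained image classifier; this toy version compares grey-level histograms, and all names and thresholds are assumptions.

```python
# Toy sketch of matching a captured screen image against a preset reference
# image before the alert is sent, to avoid false positives from misjudged
# expressions. Images are flat lists of grey-level pixel values (0-255).
def histogram(pixels, bins=4, max_value=256):
    """Normalized grey-level histogram with a few coarse bins."""
    h = [0] * bins
    for p in pixels:
        h[p * bins // max_value] += 1
    total = len(pixels) or 1
    return [c / total for c in h]

def matches_preset(screen_pixels, preset_pixels, threshold=0.9):
    """Histogram-intersection match: send the alert only if the captured
    screen image resembles the preset image closely enough."""
    hs, hp = histogram(screen_pixels), histogram(preset_pixels)
    overlap = sum(min(a, b) for a, b in zip(hs, hp))
    return overlap >= threshold
```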
Fig. 5 shows a block diagram of a terminal device according to another embodiment of the present invention, including:
the face image acquisition module 10 is used for acquiring a face image of a user;
the expression information obtaining module 20 is configured to obtain expression information of the user based on facial feature information in a facial image of the user;
the target screen image acquisition module 30 is configured to acquire a target screen image which belongs to the same time period as the facial image of the user when the expression information matches a preset expression type;
and a target screen image transmitting module 40 for transmitting the target screen image to the target terminal.
In this embodiment, when a user uses a terminal device, for example to watch a video or play a game, the device collects a face image of the current user, extracts facial feature information from it, and derives the user's expression information from that feature information. When the expression information matches a preset expression type, the device can conclude that the user's expression belongs to a specific type, for example one that differs from a normal expression, and it acquires the target screen image that triggered the current expression and sends it to a target terminal. In particular, when the user is a child who sees abnormal content, the child's facial features differ from their normal state: the child may look upset, distressed or frightened. The target screen image that caused the abnormal expression is therefore captured and sent to the target terminal used by the guardian, so that the guardian sees it immediately, can give the child appropriate guidance, and can help the child use the terminal device sensibly, reducing the device's adverse effects on the child.
Preferably, the terminal device further includes:
the screen image recording module is used for recording the screen image displayed by the terminal equipment based on the input response of the terminal equipment to the user;
the target screen image acquisition module 30 includes:
the first obtaining unit is used for obtaining a first screen image in a time period of a first moment when the expression information of the first moment is matched with a preset expression type;
the first screen image is a dynamic image, the starting time of the first screen image is earlier than the first time, and the ending time of the first screen image is later than the first time.
Preferably, the target screen image transmission module 40 includes:
a message sending unit, configured to send a notification message to a target terminal;
wherein the notification message at least includes a target screen image belonging to the same period as the face image of the user and contents for representing expression information of the user.
Preferably, the target screen image transmission module 40 includes:
the first account number obtaining unit is used for obtaining a pre-stored first account number;
and the first sending unit is used for sending the target screen image to the target terminal corresponding to the first account.
The terminal device provided in the embodiment of the present invention can implement each process implemented by the terminal device in the method embodiments of fig. 1 to fig. 4, and is not described herein again to avoid repetition.
Fig. 6 is a schematic diagram of a hardware structure of a terminal device for implementing various embodiments of the present invention, where the terminal device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 6 does not constitute a limitation of the terminal device, and that the terminal device may include more or fewer components than shown, or combine certain components, or a different arrangement of components. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal device, a wearable device, a pedometer, and the like.
The processor 110 is configured to acquire a face image of a user; obtaining expression information of the user based on the facial feature information in the facial image of the user; under the condition that the expression information is matched with a preset expression type, acquiring a target screen image which belongs to the same time period with the face image of the user; and sending the target screen image to a target terminal.
In this embodiment, when a user uses a terminal device, for example to watch a video or play a game, the terminal device acquires a face image of the current user, extracts facial feature information from the face image for analysis, and obtains the user's expression information based on that facial feature information. When the expression information matches a preset expression type, that is, when the user's expression is found to meet a specific expression type (for example, one that differs from a normal expression), the target screen image that caused the user's current expression is acquired and sent to the target terminal. This is particularly useful when the user is a child: when a child sees certain abnormal pictures, the child's facial features differ from their normal state, that is, the child may show expressions such as sadness, distress, or fear. The target screen image that caused the child's abnormal expression is therefore acquired and sent to the target terminal used by the guardian, so that the guardian can see the target screen image immediately, provide corresponding guidance to the child, help the child use the terminal device reasonably, and reduce the adverse effects of the terminal device on the child.
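The flow this embodiment describes (capture the face, classify the expression, and on a match capture and send the same-period screen image) can be sketched at a high level as follows. All helper names (`capture_face`, `classify_expression`, `capture_screen`, `send`) are hypothetical placeholders standing in for the device's camera, recognition, screen-recording, and messaging components; they are not APIs from the patent.

```python
# Hypothetical sketch of the monitoring step: classify the user's expression
# and, on a match with a preset type, forward the same-period screen image.

PRESET_TYPES = {"anger", "sadness", "fear", "distress"}

def monitor_step(capture_face, classify_expression, capture_screen, send):
    face = capture_face()
    expression = classify_expression(face)
    if expression in PRESET_TYPES:
        screen = capture_screen()   # image from the same period as the face
        send(screen, expression)    # to the target (guardian) terminal
        return expression
    return None

# Example with stub callbacks standing in for real device components:
sent = []
result = monitor_step(
    capture_face=lambda: "face-img",
    classify_expression=lambda f: "fear",
    capture_screen=lambda: "screen-img",
    send=lambda s, e: sent.append((s, e)),
)
```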
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process. Specifically, downlink data received from a base station is delivered to the processor 110 for processing, and uplink data is transmitted to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The terminal device provides wireless broadband internet access to the user through the network module 102, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal device 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, and the graphics processing unit 1041 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and may be capable of processing such sound into audio data. In a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101 for output.
The terminal device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the terminal device 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near the touch panel 1071 using a finger, a stylus, or any suitable object or attachment). The touch panel 1071 may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch position, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, the other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061. When the touch panel 1071 detects a touch operation on or near it, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and the processor 110 then provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 6 the touch panel 1071 and the display panel 1061 are two independent components implementing the input and output functions of the terminal device, in some embodiments the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the terminal device, which is not limited herein.
The interface unit 108 is an interface for connecting an external device to the terminal apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 100 or may be used to transmit data between the terminal apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the mobile phone, and the like. Further, the memory 109 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 110 is a control center of the terminal device, connects various parts of the entire terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the terminal device. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The terminal device 100 may further include a power supply 111 (such as a battery) for supplying power to each component, and preferably, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the terminal device 100 includes some functional modules that are not shown, and are not described in detail here.
Preferably, an embodiment of the present invention further provides a terminal device, which includes a processor 110, a memory 109, and a computer program stored in the memory 109 and capable of running on the processor 110. When executed by the processor 110, the computer program implements each process of the terminal control method embodiment and can achieve the same technical effect; to avoid repetition, details are not described here again.
The embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the terminal control method embodiment and can achieve the same technical effect; to avoid repetition, details are not repeated here. The computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (8)

1. A terminal control method is applied to terminal equipment and is characterized by comprising the following steps:
collecting a face image of a user;
obtaining expression information of the user based on the facial feature information in the facial image of the user;
under the condition that the expression information is matched with a preset expression type, acquiring a target screen image which belongs to the same time period with the face image of the user, wherein the preset expression type is one of anger, sadness, fear, distress, happiness, and excitement;
sending the target screen image to a target terminal;
the sending the target screen image to a target terminal includes:
sending a notification message to a target terminal;
wherein the notification message at least includes a target screen image belonging to the same time period as the face image of the user and content representing the user's expression information.
2. The method of claim 1, wherein before acquiring the target screen image within the same time period as the facial image of the user when the expression information matches a preset expression type, the method further comprises:
recording a screen image displayed by the terminal equipment based on the input response of the terminal equipment to the user;
under the condition that the expression information is matched with a preset expression type, acquiring a target screen image which belongs to the same time period with the face image of the user, wherein the target screen image comprises:
when the expression information at a first moment matches a preset expression type, acquiring a first screen image within the time period to which the first moment belongs;
the first screen image is a dynamic image, the starting time of the first screen image is earlier than the first time, and the ending time of the first screen image is later than the first time.
3. The method of claim 1, wherein the sending the target screen image to a target terminal comprises:
acquiring a pre-stored first account;
and sending the target screen image to a target terminal corresponding to the first account.
4. A terminal device, comprising:
the face image acquisition module is used for acquiring a face image of a user;
the expression information obtaining module is used for obtaining the expression information of the user based on the facial feature information in the facial image of the user;
the target screen image acquisition module is used for acquiring a target screen image which belongs to the same time period with the face image of the user under the condition that the expression information is matched with a preset expression type, wherein the preset expression type is one of anger, sadness, fear, distress, happiness, and excitement;
the target screen image sending module is used for sending the target screen image to a target terminal;
the target screen image transmission module includes:
a message sending unit, configured to send a notification message to a target terminal;
wherein the notification message at least includes a target screen image belonging to the same time period as the face image of the user and content representing the user's expression information.
5. The terminal device of claim 4, further comprising:
the screen image recording module is used for recording the screen image displayed by the terminal equipment based on the input response of the terminal equipment to the user;
the target screen image acquisition module includes:
the first obtaining unit is used for acquiring, when the expression information at a first moment matches a preset expression type, a first screen image within the time period to which the first moment belongs;
the first screen image is a dynamic image, the starting time of the first screen image is earlier than the first time, and the ending time of the first screen image is later than the first time.
6. The terminal device according to claim 4, wherein the target screen image transmission module includes:
the first account number obtaining unit is used for obtaining a pre-stored first account number;
and the first sending unit is used for sending the target screen image to a target terminal corresponding to the first account.
7. A terminal device, characterized in that it comprises a processor, a memory, and a computer program stored on the memory and executable on the processor, the computer program implementing the steps of the terminal control method according to any one of claims 1 to 3 when executed by the processor.
8. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the terminal control method according to any one of claims 1 to 3.
CN201910464526.8A 2019-05-30 2019-05-30 Terminal control method and terminal equipment Active CN110225196B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910464526.8A CN110225196B (en) 2019-05-30 2019-05-30 Terminal control method and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910464526.8A CN110225196B (en) 2019-05-30 2019-05-30 Terminal control method and terminal equipment

Publications (2)

Publication Number Publication Date
CN110225196A CN110225196A (en) 2019-09-10
CN110225196B true CN110225196B (en) 2021-01-26

Family

ID=67818670

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910464526.8A Active CN110225196B (en) 2019-05-30 2019-05-30 Terminal control method and terminal equipment

Country Status (1)

Country Link
CN (1) CN110225196B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110866465A (en) * 2019-10-29 2020-03-06 维沃移动通信有限公司 Control method of electronic equipment and electronic equipment
CN111445417B (en) * 2020-03-31 2023-12-19 维沃移动通信有限公司 Image processing method, device, electronic equipment and medium
CN113709165A (en) * 2021-08-31 2021-11-26 贵州东冠科技有限公司 Information security filtering system and method for micro-expressions

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101917674A (en) * 2010-06-23 2010-12-15 中兴通讯股份有限公司 Method and device for transmitting information in call
CN102546905A (en) * 2010-12-20 2012-07-04 康佳集团股份有限公司 Mobile terminal, method for realizing screen capture in same and system
CN102799868B (en) * 2012-07-10 2014-09-10 吉林禹硕动漫游戏科技股份有限公司 Method for identifying key facial expressions of human faces
KR102182398B1 (en) * 2013-07-10 2020-11-24 엘지전자 주식회사 Electronic device and control method thereof
TW201512882A (en) * 2013-09-30 2015-04-01 Hon Hai Prec Ind Co Ltd Identity authentication system and method thereof
CN106502712A (en) * 2015-09-07 2017-03-15 北京三星通信技术研究有限公司 APP improved methods and system based on user operation
CN106897725A (en) * 2015-12-18 2017-06-27 西安中兴新软件有限责任公司 A kind of method and device for judging user's asthenopia
CN105938543A (en) * 2016-03-30 2016-09-14 乐视控股(北京)有限公司 Addiction-prevention-based terminal operation control method, device, and system
CN109284221B (en) * 2018-10-31 2022-06-03 中国农业银行股份有限公司 Early warning system and method
CN109257649B (en) * 2018-11-28 2021-12-24 维沃移动通信有限公司 Multimedia file generation method and terminal equipment
CN109640119B (en) * 2019-02-21 2021-06-11 百度在线网络技术(北京)有限公司 Method and device for pushing information

Also Published As

Publication number Publication date
CN110225196A (en) 2019-09-10

Similar Documents

Publication Publication Date Title
CN109151180B (en) Object identification method and mobile terminal
CN109381165B (en) Skin detection method and mobile terminal
CN107613131B (en) Application program disturbance-free method, mobile terminal and computer-readable storage medium
CN108632658B (en) Bullet screen display method and terminal
CN108494665B (en) Group message display method and mobile terminal
CN109960813A (en) A kind of interpretation method, mobile terminal and computer readable storage medium
CN108616448B (en) Information sharing path recommendation method and mobile terminal
CN107742072B (en) Face recognition method and mobile terminal
CN110825226A (en) Message viewing method and terminal
CN111614544B (en) Message processing method and electronic equipment
CN108683850B (en) Shooting prompting method and mobile terminal
CN110225196B (en) Terminal control method and terminal equipment
CN108334196B (en) File processing method and mobile terminal
CN107734170B (en) Notification message processing method, mobile terminal and wearable device
CN108962187B (en) Screen brightness adjusting method and mobile terminal
CN108366221A (en) A kind of video call method and terminal
CN109166164B (en) Expression picture generation method and terminal
CN107809674A (en) A kind of customer responsiveness acquisition, processing method, terminal and server based on video
CN108133708B (en) Voice assistant control method and device and mobile terminal
CN110062281B (en) Play progress adjusting method and terminal equipment thereof
CN109164908B (en) Interface control method and mobile terminal
CN110750198A (en) Expression sending method and mobile terminal
CN109819331B (en) Video call method, device and mobile terminal
CN110213439B (en) Message processing method and terminal
CN109361804B (en) Incoming call processing method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant