CN111258414B - Method and device for adjusting screen
- Publication number
- CN111258414B (application CN201811459959.6A)
- Authority
- CN
- China
- Prior art keywords
- screen
- expression
- face image
- determining
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Multimedia (AREA)
- General Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- User Interface Of Digital Computer (AREA)
- Image Processing (AREA)
Abstract
The embodiments of the application disclose a method and a device for adjusting a screen. One embodiment of the method comprises: acquiring at least one image of a preset space in front of a screen; in response to determining that the at least one image includes a face image, extracting feature information of the face image; and in response to determining that the feature information satisfies a preset condition, adjusting display information of the screen. According to this embodiment, the screen can be adjusted according to the state of the user's face, which improves the interactivity between the user and the terminal.
Description
Technical Field
The embodiment of the application relates to the technical field of computers, in particular to a method and a device for adjusting a screen.
Background
With the development of communication technology, terminal devices are used more and more widely. Through terminal devices, people can retrieve data and browse pictures or videos on the Internet, which brings great convenience to their life and work.
Existing terminal devices, however, cannot adjust the display screen according to the state of the user currently using them.
Disclosure of Invention
The embodiment of the application provides a method and a device for adjusting a screen.
In a first aspect, an embodiment of the present application provides a method for adjusting a screen, including: acquiring at least one image of a preset space in front of a screen; in response to determining that the at least one image includes a face image, extracting feature information of the face image; and in response to determining that the feature information satisfies a preset condition, adjusting display information of the screen.
In some embodiments, the above method further comprises: the screen is locked in response to determining that the at least one image does not include a face image.
In some embodiments, the extracting feature information of the face image includes: extracting the expression characteristics of the face image, and carrying out expression recognition on the face image according to the extracted expression characteristics to obtain an expression recognition result.
In some embodiments, the adjusting the display information of the screen in response to determining that the feature information satisfies a preset condition includes: and in response to determining that the preset expression set comprises the expression indicated by the expression recognition result, selecting an expression picture corresponding to the expression indicated by the expression recognition result from the preset expression picture set, and controlling the screen to display the selected expression picture.
In some embodiments, the extracting feature information of the face image includes: extracting eye features of a face image, and identifying the eye state of a face object indicated by the face image according to the extracted eye features; and determining the closing degree of the human eyes according to the human eye state.
In some embodiments, the adjusting the display information of the screen in response to determining that the feature information satisfies a preset condition includes: and reducing the display brightness of the screen in response to determining that the number of images with the human eye closure degree larger than the first preset threshold is larger than the second preset threshold.
In a second aspect, an embodiment of the present application provides an apparatus for adjusting a screen, including: an image acquisition unit configured to acquire at least one image of a preset space in front of a screen; a feature extraction unit configured to extract feature information of a face image in response to determining that the at least one image includes the face image; and a screen adjustment unit configured to adjust display information of the screen in response to determining that the feature information satisfies a preset condition.
In some embodiments, the apparatus further comprises: and a screen locking unit configured to lock the screen in response to a determination that the at least one image does not include a face image.
In some embodiments, the above feature extraction unit is further configured to: extracting the expression characteristics of the face image, and carrying out expression recognition on the face image according to the extracted expression characteristics to obtain an expression recognition result.
In some embodiments, the screen adjustment unit is further configured to: in response to determining that the preset expression set comprises the expression indicated by the expression recognition result, selecting an expression picture corresponding to the expression indicated by the expression recognition result from the preset expression picture set; and controlling the screen to display the selected expression picture.
In some embodiments, the above feature extraction unit is further configured to: extracting eye features of a face image, and identifying the eye state of a face object indicated by the face image according to the extracted eye features; and determining the closing degree of the human eyes according to the human eye state.
In some embodiments, the screen adjustment unit is further configured to: and reducing the display brightness of the screen in response to determining that the number of images with the human eye closure degree larger than the first preset threshold is larger than the second preset threshold.
In a third aspect, embodiments of the present application provide an apparatus, including: one or more processors; and a storage device having one or more programs stored thereon, which when executed by the one or more processors cause the one or more processors to implement the method as described in any of the embodiments of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer readable medium having stored thereon a computer program which, when executed by a processor, implements a method as described in any of the embodiments of the first aspect.
The method and the device for adjusting a screen provided in the above embodiments of the present application may first acquire at least one image of a preset space in front of the screen. Then, when it is determined that the at least one image includes a face image, feature information of the face image is extracted. Finally, when it is determined that the feature information satisfies the preset condition, the display information of the screen is adjusted. The screen can thus be adjusted according to the state of the user's face, improving the interactivity between the user and the terminal.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings, in which:
FIG. 1 is an exemplary system architecture diagram in which an embodiment of the present application may be applied;
FIG. 2 is a flow chart of one embodiment of a method for adjusting a screen according to the present application;
FIG. 3 is a schematic illustration of one application scenario of a method for adjusting a screen according to the present application;
FIG. 4 is a flow chart of another embodiment of a method for adjusting a screen according to the present application;
FIG. 5 is a flow chart of yet another embodiment of a method for adjusting a screen according to the present application;
FIG. 6 is a schematic structural view of one embodiment of an apparatus for adjusting a screen according to the present application;
FIG. 7 is a schematic diagram of a computer system suitable for use in implementing the apparatus of the embodiments of the present application.
Detailed Description
The present application is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 to which embodiments of the methods for adjusting a screen or the apparatus for adjusting a screen of the present application may be applied.
As shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like. Various communication client applications, such as a web browser application, a shopping class application, a search class application, an instant messaging tool, a mailbox client, social platform software, etc., may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be various electronic devices with display screens including, but not limited to, smartphones, tablets, laptop and desktop computers, and the like. An image capturing device (not shown in the figure) may also be provided near the display screen of the terminal apparatus 101, 102, 103 for capturing an image in front of the display screen of the terminal apparatus 101, 102, 103. The image capturing device may be a camera, a front camera, or the like mounted on the terminal apparatus 101, 102, 103, or may be a monitoring camera or the like mounted in a space where the terminal apparatus 101, 102, 103 is located.
The server 105 may be a server that provides various services, such as a feature extraction server that performs feature extraction on face images of users in front of the terminal devices 101, 102, 103. The feature extraction server may perform processing such as analysis on the received data such as images, and feed back the processing results (e.g., screen adjustment information) to the terminal devices 101, 102, 103.
The server may be hardware or software. When the server is hardware, the server may be implemented as a distributed server cluster formed by a plurality of servers, or may be implemented as a single server. When the server is software, it may be implemented as a plurality of software or software modules (e.g., to provide distributed services), or as a single software or software module. The present invention is not particularly limited herein.
It should be noted that the steps included in the method for adjusting a screen provided in the embodiments of the present application may be performed entirely by the terminal devices 101, 102, 103, or entirely by the server 105. Alternatively, some of the steps may be performed by the terminal devices 101, 102, 103 and the others by the server 105. Accordingly, the units or modules included in the apparatus for adjusting a screen may all be provided in the terminal devices 101, 102, 103, may all be provided in the server 105, or may be divided such that some units or modules are provided in the terminal devices 101, 102, 103 and the others in the server 105. When the method for adjusting a screen is performed by the terminal devices 101, 102, 103, the system architecture 100 may not include the network 104 and the server 105.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of a method for adjusting a screen according to the present application is shown. The method for adjusting a screen of the present embodiment includes the steps of:
Step 201, at least one image of a preset space in front of a screen is acquired.
In the present embodiment, the execution body of the method for adjusting a screen (e.g., the terminal devices 101, 102, 103 or the server 105 shown in fig. 1) may acquire at least one image of a preset space in front of the screen through a wired or wireless connection. The screen here refers to the screen of the terminal device. The terminal device may be provided with an image acquisition device, such as a camera, for acquiring images of the preset space in front of the screen. Alternatively, a monitoring camera arranged in the space where the terminal device is located may be used for the same purpose. The preset space may be a space within a predetermined distance in front of the screen, for example, within 1 meter. The execution body may acquire images of the preset space in front of the screen in real time through the image acquisition device.
It should be noted that the wireless connection may include, but is not limited to, 3G/4G connections, WiFi connections, Bluetooth connections, WiMAX connections, ZigBee connections, UWB (ultra-wideband) connections, and other wireless connection means now known or developed in the future.
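As an illustration of this acquisition step, the following is a minimal sketch in Python, assuming OpenCV as the capture library and a locally attached camera; the patent does not prescribe any particular capture API, and the camera index, frame count, and sampling interval are all assumed values.

```python
import time
import cv2  # OpenCV -- an assumed choice; the patent names no capture library

def acquire_images(camera_index=0, num_images=5, interval_s=0.2):
    """Capture `num_images` frames from the camera facing the preset space."""
    capture = cv2.VideoCapture(camera_index)
    frames = []
    try:
        for _ in range(num_images):
            ok, frame = capture.read()
            if ok:
                frames.append(frame)
            time.sleep(interval_s)  # spacing below a preset duration (step 501)
    finally:
        capture.release()
    return frames
```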
Step 202, extracting feature information of the face image in response to determining that the at least one image includes a face image.
After acquiring the at least one image, the execution body may determine whether each of the at least one image includes a face image. If it is determined that each image includes a face image, feature information of the face images is extracted. Since the feature information may describe the shapes of and distances between facial organs, it may reflect the user's expression, eye state, smile, and so on. It can be appreciated that face image feature extraction is a widely applied technology at present and will not be described here.
In some optional implementations of this embodiment, the screen may be locked if the execution body determines that none of the at least one image includes a face image.
In this implementation, when none of the at least one image includes a face image, the execution body may determine that the user is not in front of the screen or is not looking at the screen, and may then lock the screen of the terminal device. This saves electric energy on the one hand and protects the user's privacy on the other.
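A minimal sketch of the face-presence check and the optional screen lock follows, assuming OpenCV's bundled Haar-cascade frontal-face detector as a stand-in for the unspecified face-detection step; `lock_screen` is a hypothetical, platform-specific call.

```python
import cv2

# Haar-cascade face detector shipped with OpenCV -- an assumed stand-in,
# since the patent does not name a particular face-detection method.
_FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def contains_face(image) -> bool:
    """Return True when at least one face is detected in a BGR image."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = _FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

def lock_if_absent(images, lock_screen) -> None:
    """Optional implementation: lock the screen when no image contains a face.
    `lock_screen` is a hypothetical platform callable."""
    if not any(contains_face(img) for img in images):
        lock_screen()
```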
Step 203, in response to determining that the feature information satisfies a preset condition, display information of the screen is adjusted.
In this embodiment, after extracting the feature information of the face image, the execution body may determine whether the feature information satisfies a preset condition. If so, the display information of the screen may be adjusted. The preset condition is a condition imposed on the feature information. For example, when the feature information includes a facial expression, the preset condition may be that the expression is sadness. The display information of the screen may be the content displayed on the screen, the display brightness of the screen, the display contrast of the screen, and the like.
With continued reference to fig. 3, fig. 3 is a schematic diagram of one application scenario of the method for adjusting a screen according to the present embodiment. In the application scenario of fig. 3, the user is reading news on a website in front of the screen and shows a smiling expression upon seeing an interesting item. The camera mounted on the screen captures images including the user's face image, and the expression of the face image is extracted. Then, if the expression satisfies the preset condition, a smiling-face picture is displayed on the screen.
The method for adjusting a screen provided in the above embodiment of the present application may first acquire at least one image of a preset space in front of the screen. Then, when it is determined that the at least one image includes a face image, feature information of the face image is extracted. Finally, when it is determined that the feature information satisfies the preset condition, the display information of the screen is adjusted. The screen can thus be adjusted according to the state of the user's face, improving the interactivity between the user and the terminal.
With continued reference to fig. 4, a flow 400 of another embodiment of a method for adjusting a screen according to the present application is shown. As shown in fig. 4, the method for adjusting a screen of the present embodiment includes the steps of:
Step 401, at least one image of a preset space in front of a screen is acquired.
This step is similar to the principle of step 201 shown in fig. 2 and will not be described again here.
Step 402, in response to determining that the at least one image includes a face image, extracting expression features of the face image and performing expression recognition on the face image according to the extracted expression features to obtain an expression recognition result.
After the execution body determines that each image includes a face image, it may extract the expression features of the face images, perform expression recognition on the face images, and obtain expression recognition results. The execution body may implement expression recognition in various ways, for example, with a template-matching-based method, a neural-network-based method, a probabilistic-model-based method, or a support-vector-machine-based method.
In some optional implementations of the present embodiment, the execution body may implement expression recognition of the face image as follows: the face image is imported into a pre-established expression recognition model to obtain an expression recognition result of the face image. The expression recognition model may be used to characterize the correspondence between face images and expression recognition results.
As an example, the expression recognition model may include a feature extraction part and a correspondence table. The feature extraction part may be used to extract features of a face image to obtain its feature vector. The correspondence table may store correspondences between a plurality of feature vectors and expression recognition results, and may be preset by a technician based on statistics over a large number of feature vectors and expression recognition results. In this way, the expression recognition model first performs feature extraction on the imported face image to obtain a target feature vector. The target feature vector is then compared with the feature vectors in the correspondence table in turn, and if a feature vector in the table is the same as or similar to the target feature vector, the expression recognition result corresponding to that feature vector is taken as the expression recognition result for the target feature vector.
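A sketch of the table-lookup recognition just described, assuming cosine similarity as the notion of "same or similar" and a tunable similarity threshold; neither choice is fixed by the patent.

```python
import numpy as np

def recognize_expression(target_vec, table, threshold=0.9):
    """Nearest-neighbour lookup in a correspondence table.

    `table` is a list of (feature_vector, expression_label) pairs;
    returns the label of the most similar entry above `threshold`,
    or None when no entry is similar enough.
    """
    best_label, best_sim = None, threshold
    for vec, label in table:
        # Cosine similarity between the target and a stored feature vector.
        sim = float(np.dot(target_vec, vec) /
                    (np.linalg.norm(target_vec) * np.linalg.norm(vec) + 1e-12))
        if sim >= best_sim:
            best_label, best_sim = label, sim
    return best_label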
In some alternative implementations, the expression recognition model may be a neural network, which may include an input network, an intermediate network, and an output network, the intermediate network including a separable convolution layer and an activation function layer. Here, the neural network may be trained by the above execution body, or by another execution body, through the following steps:
First, a sample set is acquired. The samples in the sample set may include sample face images and the expressions of the faces corresponding to the sample face images. A sample face image may be a face image directly acquired by an image acquisition device (e.g., a camera).
Then, the neural network is trained by taking the sample face images of the samples in the sample set as input and the expressions of the faces corresponding to the input sample face images as expected output. As an example, when training the neural network, a sample face image may first be used as input to the initial neural network to obtain a predicted expression corresponding to it. Here, the initial neural network refers to an untrained or incompletely trained neural network. Second, the predicted expression corresponding to the sample face image is compared with the labeled expression, and whether the initial neural network has reached a preset condition is determined according to the comparison result. The preset condition may be that the difference between the predicted expression and the labeled expression is smaller than a preset difference threshold. Then, in response to determining that the preset condition is reached, the initial neural network may be taken as the trained neural network. Finally, in response to determining that the preset condition is not reached, the network parameters of the initial neural network may be adjusted and the above training process continued with unused samples. As an example, the network parameters of the initial neural network may be adjusted using the back propagation algorithm (BP algorithm) and gradient descent. It should be noted that back propagation and gradient descent are well-known techniques that are widely studied and applied at present and will not be described here.
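The following PyTorch sketch illustrates such a network and training loop under stated assumptions: the layer widths, the seven-class expression label set, and the use of ReLU and SGD are illustrative choices, not details fixed by the patent; only the input/intermediate/output split and the separable convolution plus activation come from the description above.

```python
import torch
import torch.nn as nn

class ExpressionNet(nn.Module):
    """Input network, intermediate network (separable convolution +
    activation), and output network, as described above."""
    def __init__(self, num_classes=7):  # 7 expression classes: an assumption
        super().__init__()
        self.input_net = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.intermediate = nn.Sequential(
            nn.Conv2d(16, 16, kernel_size=3, padding=1, groups=16),  # depthwise
            nn.Conv2d(16, 32, kernel_size=1),                        # pointwise
            nn.ReLU(),  # activation function layer
        )
        self.output_net = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes))

    def forward(self, x):
        return self.output_net(self.intermediate(self.input_net(x)))

def train(model, loader, epochs=10, lr=1e-3):
    """Train on (sample face image, expression label) pairs using
    back propagation and gradient descent, as in the text."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()  # back propagation
            opt.step()       # gradient-descent parameter update
```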
Step 403, in response to determining that the preset expression set includes the expression indicated by the expression recognition result, selecting an expression picture corresponding to the expression indicated by the expression recognition result from the preset expression picture set.
After expression recognition is performed on the face image, it can be determined whether the preset expression set includes the expression indicated by the expression recognition result. If so, an expression picture corresponding to the indicated expression is selected from a preset expression picture set. In this embodiment, the expression set may include smiling, laughing, crying, and other expressions. The expression picture set may include pictures of various expressions, for example, a laughing picture, a smiling picture, and the like. It can be understood that each expression picture in the expression picture set is marked with an expression label, through which the execution body can determine the expression the picture represents.
Step 404, controlling the screen to display the selected expression picture.
The execution body may display the selected expression picture on the screen to enhance interactivity with the user.
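A small sketch of steps 403-404, assuming the labeled picture set is a simple label-to-path table and that `display.show_image` is a hypothetical screen-control call; the labels and paths below are illustrative only.

```python
# Hypothetical stand-in for the preset expression picture set; each entry's
# key is the expression label the picture is marked with.
EXPRESSION_PICTURES = {
    "smile": "pictures/smile.png",
    "laugh": "pictures/laugh.png",
    "cry":   "pictures/cry.png",
}

def show_expression_picture(recognized_expression, display):
    """Display the matching picture only when the recognized expression
    belongs to the preset expression set (here, the table's keys)."""
    path = EXPRESSION_PICTURES.get(recognized_expression)
    if path is not None:
        display.show_image(path)  # hypothetical screen-control call
```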
In some optional implementations of this embodiment, the execution body may further set the display manner of the expression picture. For example, it may control the expression picture to fall from the top of the screen to the bottom, or to gradually disappear after vibrating several times in the middle of the screen, and so on.
The method for adjusting the screen provided by the embodiment of the application can adjust the display picture of the screen according to the expression of the user, so that the interactivity between the terminal and the user is enhanced.
With continued reference to fig. 5, a flow 500 of yet another embodiment of a method for adjusting a screen according to the present application is shown. As shown in fig. 5, the method for adjusting a screen of the present embodiment includes the steps of:
Step 501, at least one image of a preset space in front of a screen is acquired.
In this embodiment, the execution body may control the image acquisition device to acquire an image of a preset space in front of the screen according to a certain acquisition frequency. The shooting time interval between the images is smaller than the preset duration.
Step 502, in response to determining that the at least one image includes a face image, extracting eye features of the face image and identifying a human eye state of a face object indicated by the face image from the extracted eye features.
After determining that each image includes a face image, the execution body may extract eye features of the face image to identify the eye state of the face object indicated by the face image. The eye state may include a closed state and an open state. The execution body may first locate the feature points of the upper eyelid and the lower eyelid of the face object in each face image, and then determine the distance between the upper eyelid feature point and the lower eyelid feature point. When the distance is greater than 0, the eye state is open; when the distance equals 0, the eye state is closed.
Step 503, determining the degree of eye closure according to the eye state.
In this embodiment, after determining the eye state of the face object indicated by the face image, the execution body may determine the degree of eye closure. Specifically, the execution body may determine, across the face images, the maximum distance between the upper eyelid feature point and the lower eyelid feature point, and take this maximum distance as the fully open state of the eyes. For each face image, the execution body may calculate the ratio of the eyelid distance in that image to the maximum distance, and then compute the difference between 1 and this ratio; the resulting value is the degree of eye closure.
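In code, the closure computation reads as follows; this is a direct transcription of the rule above, assuming the per-image eyelid distances have already been measured.

```python
def closure_degrees(eyelid_distances):
    """Per-image eye-closure degree: 1 minus the ratio of each image's
    upper/lower-eyelid distance to the maximum distance observed across
    the images (the maximum is taken as the fully open state)."""
    d_max = max(eyelid_distances)
    if d_max == 0:
        # Eyes closed in every image; treat each as fully closed.
        return [1.0 for _ in eyelid_distances]
    return [1.0 - d / d_max for d in eyelid_distances]
```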
Step 504, in response to determining that the number of images with an eye closure degree greater than a first preset threshold is greater than a second preset threshold, the display brightness of the screen is reduced.
When the execution body determines that the eye closure degree is greater than the first preset threshold, it determines that the user is tired. It may then further count whether the number of images whose eye closure degree exceeds the first preset threshold is greater than the second preset threshold. If so, it is determined that the user has been tired for a long time, and the execution body reduces the display brightness of the screen, which avoids the stimulation of an overly bright screen to the eyes and also saves electric energy.
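A sketch of step 504 under stated assumptions: the two threshold values, the halving factor, and the `set_brightness` callable are all illustrative; the patent fixes only the counting rule.

```python
def adjust_for_fatigue(closures, set_brightness, current_brightness,
                       first_threshold=0.8, second_threshold=3):
    """Reduce brightness when more than `second_threshold` images show an
    eye-closure degree above `first_threshold`."""
    fatigued_count = sum(1 for c in closures if c > first_threshold)
    if fatigued_count > second_threshold:
        set_brightness(current_brightness * 0.5)  # halving is an assumed policy
```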
The method for adjusting the screen provided by the embodiment of the application can adjust the display brightness of the screen according to the human eye state of the user, is beneficial to protecting eyes and saves electric energy.
With further reference to fig. 6, as an implementation of the method shown in the foregoing figures, the present application provides an embodiment of an apparatus for adjusting a screen, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be specifically applied to various electronic devices.
As shown in fig. 6, the apparatus 600 for adjusting a screen of the present embodiment includes: an image acquisition unit 601, a feature extraction unit 602, and a screen adjustment unit 603.
An image acquisition unit 601 is configured to acquire at least one image of a preset space in front of a screen.
The feature extraction unit 602 is configured to extract feature information of a face image in response to determining that the at least one image includes the face image.
The screen adjustment unit 603 is configured to adjust display information of the screen in response to determining that the feature information satisfies a preset condition.
In some optional implementations of this embodiment, the apparatus 600 may further include a screen locking unit, not shown in fig. 6, configured to lock the screen in response to determining that the at least one image does not include a face image.
In some optional implementations of this embodiment, the feature extraction unit 602 is further configured to: extracting the expression characteristics of the face image, and carrying out expression recognition on the face image according to the extracted expression characteristics to obtain an expression recognition result.
In some optional implementations of this embodiment, the screen adjustment unit 603 is further configured to: in response to determining that the preset expression set comprises the expression indicated by the expression recognition result, selecting an expression picture corresponding to the expression indicated by the expression recognition result from the preset expression picture set; and controlling the screen to display the selected expression picture.
In some optional implementations of this embodiment, the feature extraction unit 602 is further configured to: extracting eye features of the face image and identifying the eye state of a face object indicated by the face image according to the extracted eye features; and determining the closing degree of the human eyes according to the human eye state.
In some optional implementations of this embodiment, the screen adjustment unit 603 is further configured to: and reducing the display brightness of the screen in response to determining that the number of images with the human eye closure degree greater than the first preset threshold is greater than the second preset threshold.
The device for adjusting a screen provided in the above embodiment of the present application may first acquire at least one image of a preset space in front of the screen. Then, when it is determined that the at least one image includes a face image, feature information of the face image is extracted. Finally, when it is determined that the feature information satisfies the preset condition, the display information of the screen is adjusted. The screen can thus be adjusted according to the state of the user's face, improving the interactivity between the user and the terminal.
It should be understood that the units 601 to 603 described in the apparatus 600 for adjusting a screen correspond to the respective steps in the method described with reference to fig. 2, respectively. Thus, the operations and features described above with respect to the method for adjusting a screen are equally applicable to the apparatus 600 and the units contained therein, and are not described in detail herein.
Referring now to FIG. 7, there is illustrated a schematic diagram of a computer system 700 suitable for use in implementing the apparatus of the embodiments of the present application. The apparatus shown in fig. 7 is merely an example, and should not be construed as limiting the functionality and scope of use of the embodiments herein.
As shown in fig. 7, the computer system 700 includes a Central Processing Unit (CPU) 701, which can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the system 700 are also stored. The CPU 701, ROM 702, and RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
The following components are connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD), a speaker, and the like; a storage section 708 including a hard disk or the like; and a communication section 709 including a network interface card such as a LAN card or a modem. The communication section 709 performs communication processing via a network such as the Internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as necessary, so that a computer program read therefrom is installed into the storage section 708 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a machine-readable medium, the computer program comprising program code for performing the method shown in the flow diagrams. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 709, and/or installed from the removable medium 711. The above-described functions defined in the method of the present application are performed when the computer program is executed by a Central Processing Unit (CPU) 701.
It should be noted that the computer readable medium described in the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present application may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" language or similar. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by software or by hardware. The described units may also be provided in a processor, for example, described as: a processor including an image acquisition unit, a feature extraction unit, and a screen adjustment unit. The names of these units do not constitute a limitation on the units themselves in some cases; for example, the image acquisition unit may also be described as "a unit that acquires at least one image of a preset space in front of a screen".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist alone without being incorporated into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquire at least one image of a preset space in front of a screen; in response to determining that the at least one image includes a face image, extract feature information of the face image; and in response to determining that the feature information satisfies a preset condition, adjust display information of the screen.
The foregoing description covers only the preferred embodiments of the present application and explains the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the invention referred to in this application is not limited to the specific combinations of the features described above; it is also intended to cover other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the invention, for example, embodiments formed by interchanging the above features with (but not limited to) technical features of similar function disclosed in this application.
Claims (12)
1. A method for adjusting a screen, comprising:
acquiring at least one image of a preset space in front of a screen;
in response to determining that the at least one image includes a face image, extracting feature information of the face image;
in response to determining that the feature information satisfies a preset condition, adjusting display information of the screen;
wherein the display information of the screen comprises display content of the screen; and
the adjusting display information of the screen in response to determining that the feature information satisfies a preset condition comprises:
in response to determining that the preset expression set comprises the expression indicated by the expression recognition result based on the feature information of the face image, selecting an expression picture corresponding to the expression indicated by the expression recognition result from the preset expression picture set;
and controlling the screen to display the selected expression picture.
2. The method of claim 1, wherein the method further comprises:
the screen is locked in response to determining that the at least one image does not include a face image.
3. The method of claim 1, wherein the extracting feature information of the face image comprises:
extracting the expression characteristics of the face image, and carrying out expression recognition on the face image according to the extracted expression characteristics to obtain an expression recognition result.
4. The method of claim 1, wherein the extracting feature information of the face image comprises:
extracting eye features of the face image and identifying the eye state of a face object indicated by the face image according to the extracted eye features;
and determining the closing degree of the human eyes according to the human eye state.
5. The method of claim 4, wherein the adjusting the display information of the screen in response to determining that the characteristic information satisfies a preset condition comprises:
and reducing the display brightness of the screen in response to determining that the number of images with the human eye closure degree greater than the first preset threshold is greater than the second preset threshold.
6. An apparatus for adjusting a screen, comprising:
an image acquisition unit configured to acquire at least one image of a preset space in front of a screen;
a feature extraction unit configured to extract feature information of a face image in response to determining that the at least one image includes the face image;
a screen adjustment unit configured to adjust display information of the screen in response to determining that the feature information satisfies a preset condition;
wherein the display information of the screen comprises display content of the screen; and the screen adjustment unit is further configured to:
in response to determining that the preset expression set comprises the expression indicated by the expression recognition result based on the feature information of the face image, selecting an expression picture corresponding to the expression indicated by the expression recognition result from the preset expression picture set;
and controlling the screen to display the selected expression picture.
7. The apparatus of claim 6, wherein the apparatus further comprises:
and a screen locking unit configured to lock the screen in response to determining that the at least one image does not include a face image.
8. The apparatus of claim 6, wherein the feature extraction unit is further configured to:
extracting the expression characteristics of the face image, and carrying out expression recognition on the face image according to the extracted expression characteristics to obtain an expression recognition result.
9. The apparatus of claim 6, wherein the feature extraction unit is further configured to:
extracting eye features of the face image and identifying the eye state of a face object indicated by the face image according to the extracted eye features;
and determining the closing degree of the human eyes according to the human eye state.
10. The apparatus of claim 9, wherein the screen adjustment unit is further configured to:
and reducing the display brightness of the screen in response to determining that the number of images with the human eye closure degree greater than the first preset threshold is greater than the second preset threshold.
11. An apparatus, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
12. A computer readable medium having stored thereon a computer program, wherein the program when executed by a processor implements the method of any of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811459959.6A CN111258414B (en) | 2018-11-30 | 2018-11-30 | Method and device for adjusting screen |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811459959.6A CN111258414B (en) | 2018-11-30 | 2018-11-30 | Method and device for adjusting screen |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111258414A CN111258414A (en) | 2020-06-09 |
CN111258414B true CN111258414B (en) | 2023-08-04 |
Family
ID=70944774
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811459959.6A Active CN111258414B (en) | 2018-11-30 | 2018-11-30 | Method and device for adjusting screen |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111258414B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112298059A (en) * | 2020-10-26 | 2021-02-02 | 武汉华星光电技术有限公司 | Vehicle-mounted display screen adjusting device and vehicle |
CN112416284B (en) * | 2020-12-10 | 2022-09-23 | 三星电子(中国)研发中心 | Method, apparatus, device and storage medium for sharing screen |
CN113760156A (en) * | 2021-02-08 | 2021-12-07 | 北京沃东天骏信息技术有限公司 | Method and device for adjusting terminal screen display |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013175847A1 (en) * | 2012-05-22 | 2013-11-28 | Sony Corporation | Image processing device, image processing method, and program |
CN103777760A (en) * | 2014-02-26 | 2014-05-07 | 北京百纳威尔科技有限公司 | Method and device for switching screen display direction |
CN104460995A (en) * | 2014-11-28 | 2015-03-25 | 广东欧珀移动通信有限公司 | Display processing method, display processing device and terminal |
CN104866082A (en) * | 2014-02-25 | 2015-08-26 | 北京三星通信技术研究有限公司 | User behavior based reading method and device |
CN105353875A (en) * | 2015-11-05 | 2016-02-24 | 小米科技有限责任公司 | Method and apparatus for adjusting visible area of screen |
CN105630143A (en) * | 2014-11-18 | 2016-06-01 | 中兴通讯股份有限公司 | Screen display adjusting method and device |
CN105653041A (en) * | 2016-01-29 | 2016-06-08 | 北京小米移动软件有限公司 | Display state adjusting method and device |
CN106057171A (en) * | 2016-07-21 | 2016-10-26 | 广东欧珀移动通信有限公司 | Control method and device |
EP3154270A1 (en) * | 2015-10-08 | 2017-04-12 | Xiaomi Inc. | Method and device for adjusting and displaying an image |
CN106569611A (en) * | 2016-11-11 | 2017-04-19 | 努比亚技术有限公司 | Apparatus and method for adjusting display interface, and terminal |
CN106855744A (en) * | 2016-12-30 | 2017-06-16 | 维沃移动通信有限公司 | A kind of screen display method and mobile terminal |
CN107077593A (en) * | 2014-07-14 | 2017-08-18 | 华为技术有限公司 | For the enhanced system and method for display screen |
CN107092352A (en) * | 2017-03-27 | 2017-08-25 | 深圳市金立通信设备有限公司 | A kind of screen control method answered based on distance perspective and terminal |
CN108037824A (en) * | 2017-12-06 | 2018-05-15 | 广东欧珀移动通信有限公司 | Screen parameter adjusting method, device and equipment |
- 2018-11-30: Application CN201811459959.6A filed in CN; granted as CN111258414B (active)
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013175847A1 (en) * | 2012-05-22 | 2013-11-28 | Sony Corporation | Image processing device, image processing method, and program |
CN104866082A (en) * | 2014-02-25 | 2015-08-26 | 北京三星通信技术研究有限公司 | User behavior based reading method and device |
CN103777760A (en) * | 2014-02-26 | 2014-05-07 | 北京百纳威尔科技有限公司 | Method and device for switching screen display direction |
CN107077593A (en) * | 2014-07-14 | 2017-08-18 | 华为技术有限公司 | For the enhanced system and method for display screen |
CN105630143A (en) * | 2014-11-18 | 2016-06-01 | 中兴通讯股份有限公司 | Screen display adjusting method and device |
CN104460995A (en) * | 2014-11-28 | 2015-03-25 | 广东欧珀移动通信有限公司 | Display processing method, display processing device and terminal |
EP3154270A1 (en) * | 2015-10-08 | 2017-04-12 | Xiaomi Inc. | Method and device for adjusting and displaying an image |
CN105353875A (en) * | 2015-11-05 | 2016-02-24 | 小米科技有限责任公司 | Method and apparatus for adjusting visible area of screen |
CN105653041A (en) * | 2016-01-29 | 2016-06-08 | 北京小米移动软件有限公司 | Display state adjusting method and device |
CN106057171A (en) * | 2016-07-21 | 2016-10-26 | 广东欧珀移动通信有限公司 | Control method and device |
CN106569611A (en) * | 2016-11-11 | 2017-04-19 | 努比亚技术有限公司 | Apparatus and method for adjusting display interface, and terminal |
CN106855744A (en) * | 2016-12-30 | 2017-06-16 | 维沃移动通信有限公司 | A kind of screen display method and mobile terminal |
CN107092352A (en) * | 2017-03-27 | 2017-08-25 | 深圳市金立通信设备有限公司 | A kind of screen control method answered based on distance perspective and terminal |
CN108037824A (en) * | 2017-12-06 | 2018-05-15 | 广东欧珀移动通信有限公司 | Screen parameter adjusting method, device and equipment |
Non-Patent Citations (1)
Title |
---|
Research on Android Security Protection Mechanism and Decryption Method; Sun Yi; Netinfo Security (01); pp. 71-74 *
Also Published As
Publication number | Publication date |
---|---|
CN111258414A (en) | 2020-06-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10936919B2 (en) | Method and apparatus for detecting human face | |
WO2020000879A1 (en) | Image recognition method and apparatus | |
CN108830235B (en) | Method and apparatus for generating information | |
US11436863B2 (en) | Method and apparatus for outputting data | |
CN107622240B (en) | Face detection method and device | |
CN110827378A (en) | Virtual image generation method, device, terminal and storage medium | |
CN109993150B (en) | Method and device for identifying age | |
CN109189544B (en) | Method and device for generating dial plate | |
CN111258414B (en) | Method and device for adjusting screen | |
US11232560B2 (en) | Method and apparatus for processing fundus image | |
CN112527115A (en) | User image generation method, related device and computer program product | |
CN110570383B (en) | Image processing method and device, electronic equipment and storage medium | |
CN110046571B (en) | Method and device for identifying age | |
CN108399401B (en) | Method and device for detecting face image | |
CN108470131B (en) | Method and device for generating prompt message | |
CN108038473B (en) | Method and apparatus for outputting information | |
CN109949213B (en) | Method and apparatus for generating image | |
CN110059624A (en) | Method and apparatus for detecting living body | |
CN112732553A (en) | Image testing method and device, electronic equipment and storage medium | |
CN110008926B (en) | Method and device for identifying age | |
CN115830668A (en) | User authentication method and device based on facial recognition, computing equipment and medium | |
CN109241930B (en) | Method and apparatus for processing eyebrow image | |
CN112967299B (en) | Image cropping method and device, electronic equipment and computer readable medium | |
CN111260756B (en) | Method and device for transmitting information | |
CN108256451B (en) | Method and device for detecting human face |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |