WO2020111776A1 - Electronic device for focus tracking photographing and method thereof - Google Patents
- Publication number
- WO2020111776A1 (PCT/KR2019/016477)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- target
- target character
- feature set
- weight
- matched
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
- H04N23/673—Focus control based on electronic image sensor signals based on contrast or high frequency components of image signals, e.g. hill climbing method
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/631—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
- H04N23/632—Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/63—Control of cameras or camera modules by using electronic viewfinders
- H04N23/633—Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
- H04N23/635—Region indicators; Field of view indicators
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
Definitions
- embodiments of the present disclosure generally relate to a photographing device and method, and more particularly, to an electronic device capable of photographing and a focus tracking photographing method of the electronic device.
- when photographing a person by using a mobile terminal in the prior art, if there are a plurality of faces in the preview image captured by the camera, the terminal device usually either focuses on all faces in the preview image automatically based on face recognition, or lets the user select a target to focus on by clicking on the screen with a finger.
- Exemplary embodiments address the above disadvantages and other disadvantages not described above. Moreover, the exemplary embodiments are not required to overcome the disadvantages described above, and the exemplary embodiments may not overcome any of the problems described above.
- a focus tracking photographing method for an electronic device may include: obtaining a preview image including a plurality of faces; performing facial feature extraction for each of the plurality of faces; matching extracted facial features for each of the plurality of faces with at least one target character feature set respectively, to identify at least one matched target character in the preview image; and performing auto-focusing on the at least one matched target character in the preview image based on information associated with the identified at least one matched target character.
- an electronic device comprising: a feature extracting module configured to perform facial feature extraction for each of a plurality of faces included in a preview image; a feature matching module configured to match extracted facial features for each of the plurality of faces with at least one target character feature set respectively, to identify at least one matched target character in the preview image; and a focusing module configured to perform auto-focusing on the at least one matched target character in the preview image based on information associated with the identified at least one matched target character.
- the electronic device may quickly identify a person in the crowd who the user of the electronic device may be interested in and perform auto-focusing, thereby sparing the user the difficulty of adjusting the focus manually, which has the additional advantages of reducing shooting time and improving shooting quality.
- the electronic device may effectively reduce the impact of strangers.
- FIG. 1 is a diagram illustrating an example configuration of an electronic device according to an embodiment of the present disclosure
- FIG. 2 is a flowchart illustrating a method of establishing and updating a target character feature set by a target character feature set managing module according to an embodiment of the present disclosure
- FIGS. 3A, 3B, 3C, 3D and 3E are diagrams illustrating operations of a focusing module in a focus tracking mode, according to an embodiment of the present disclosure.
- FIG. 4 is a flowchart illustrating a focus tracking photographing method of an electronic device in the focus tracking mode according to an embodiment of the present disclosure.
- FIG. 1 is a block diagram illustrating an example configuration of the electronic device according to an embodiment of the present disclosure.
- the electronic device described herein may be any electronic device having a photographing function.
- although a portable device will be employed as a representative example of the electronic device in some embodiments of the disclosure, it should be understood that some components of the electronic device may be omitted or replaced.
- the electronic device 100 may include a camera module 110 and a controller 120.
- the controller 120 may include a feature extracting module 122 and a feature matching module 124.
- the feature extracting module 122 may determine whether a face appears in a preview image captured by the image capturing module 112 of the camera module 110, and perform auto-focusing or manual focusing on the detected face according to a preset setting.
- since a human face has certain structural distribution features (such as the structural and positional features of facial components, e.g., eyes, nose, mouth, and eyebrows), rules generated from these features may be used to determine whether a face is included in the image.
- AdaBoost (Adaptive Boosting)
- the method for determining whether a face is included in an image by the feature extracting module 122 may be implemented through various known methods. Further, in the embodiment of the present disclosure, the operation of performing auto-focusing or manual-focusing on the face in the preview image may also be implemented through various known methods.
- the feature extracting module 122 may determine whether a plurality of faces appear in the preview image captured by the image capturing module 112 of the camera module 110, and if faces appear, facial feature extraction may be performed for each face. In an embodiment, after determining that a plurality of faces are included in the preview image, the feature extracting module 122 may extract facial features for all of the faces included in the preview image based on deep learning techniques. As described above, a human face has certain structural distribution features; for example, feature data that may be used for face recognition may be obtained based on the shape description of the facial components of the face and the distance features between them. The geometrical description of the facial components and the structural relationships between them may also be used as feature data for facial recognition. In the embodiment of the present disclosure, the method for extracting facial features by the feature extracting module 122 may be implemented through various known methods such as the AdaBoost learning algorithm.
- the feature matching module 124 may match facial features of each face extracted by the feature extracting module 122 with one or more target character feature sets respectively, to determine whether a plurality of faces included in the image include a face which has a target character feature set corresponding thereto, so that it is possible to determine whether a matched target character appears in the image.
- the target character feature set is established for each character separately, and each target character feature set includes facial features of a character corresponding thereto. A character having the matched target character feature set may be recognized as the target character.
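As a concrete illustration of this matching step, the sketch below compares each extracted feature vector against every stored target character feature set and reports which faces matched. The vector representation, the cosine-similarity measure, and the `match_faces`/`cosine` helper names are illustrative assumptions, not the patent's specified implementation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def match_faces(face_features, target_sets, threshold=0.8):
    """For each face, find the target character whose stored features are
    most similar; the face matches only if that similarity reaches the
    threshold. Returns {face_index: character_id}."""
    matches = {}
    for i, feat in enumerate(face_features):
        best_id, best_sim = None, threshold
        for char_id, ref_feats in target_sets.items():
            for ref in ref_feats:  # each stored feature vector of this character
                sim = cosine(feat, ref)
                if sim >= best_sim:
                    best_id, best_sim = char_id, sim
        if best_id is not None:
            matches[i] = best_id
    return matches
```

A face with no sufficiently similar stored features (a stranger) is simply absent from the result, which is one way the impact of strangers can be excluded from focusing.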
- controller 120 may further include control circuits and other elements for controlling the overall operation of the electronic device 100 or the operations of elements within the electronic device 100.
- the controller 120 may control a predetermined operation or function in the electronic device 100 to be performed according to a predetermined setting or in response to any combination of user inputs.
- the controller 120 may further include a target character feature set managing module (not shown) for managing the target character feature sets.
- A method of establishing and updating a target character feature set will be described in detail below with reference to FIG. 2.
- the camera module 110 may include an image capturing module 112 and a focusing module 114.
- the image capturing module 112 may, when the camera module 110 is driven, convert an image obtained by an image sensor module (not shown) in the image capturing module 112 into a preview image and display it in a display module (not shown) of the electronic device 100; when a request for capturing an image is generated through the shutter button, the image capturing module 112 may store the captured image in the electronic device, either directly or after performing a series of image processing on the captured image.
- the focusing module 114 may include a driving module capable of driving a lens system included in the image sensor module.
- the focusing module 114 may focus on the point at which a user input is detected on the display module. Further, the focusing module 114 may determine whether the focal length is appropriate by judging the sharpness of images taken at different focal lengths, and may further adjust the focal length by controlling the lens system.
- the focusing module 114 may control the lens system to perform single focusing or central focusing on the face determined as the subject.
- the focus mode may be set in the preview mode, and the focus mode may be switched at any time according to the user's input.
- the focus mode is not limited to the focus mode described above, and may include other focus modes known in the art.
- the operations of the focusing module 114 in the focus tracking mode will be described in detail with reference to FIG. 3.
- the focusing module 114 may be configured to automatically focus on matched target characters who appear in the image according to related information of the matched target characters who appear in the image.
- the focusing module 114 may also include a separate processing module (not shown) for determining to use at least one of a single-target focusing mode and a multi-targets focusing mode in the focus tracking mode.
- the single-target focusing mode may indicate focusing on the face of a single character, or focusing on the faces of a plurality of characters separately.
- the multi-targets focusing mode refers to focusing on the center point of a polygon formed by lines connecting the center point of the face of each of a plurality of characters.
- the single-target focusing mode, the multi-targets focusing mode, or both may be selected according to positions of a plurality of target characters.
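For the multi-targets focusing mode described above, the focus point can be computed as the centroid of the face centers, which coincides with the midpoint of the connecting line for two faces. A minimal sketch (the `focus_point` helper name and the (x, y) pixel-coordinate representation are assumptions):

```python
def focus_point(face_centers):
    """Focus point for multi-targets focusing: the centroid of the given
    face-center coordinates. For two faces this is the midpoint of the
    line segment between them; for three or more it is the centroid of
    the polygon's vertices."""
    xs = [x for x, _ in face_centers]
    ys = [y for _, y in face_centers]
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

The centroid of the vertices is one natural reading of "the center point of a polygon formed by lines connecting the center point of the face of each of a plurality of characters"; the patent does not pin down a specific center definition.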
- the electronic device 100 may also include a communication module (not shown) for communicating with an external device or connecting to a network.
- the communication module may also be used to transmit data to outside or receive data from the outside.
- the electronic device 100 may further include a memory (not shown) for storing local images and target character feature sets established by the target character feature set managing module.
- the electronic device 100 may also include an internal bus for communicating between elements in electronic device 100.
- FIG. 2 is a flowchart illustrating a method for establishing and updating a target character feature set by the target character feature set managing module, according to an embodiment of the present disclosure.
- the target character feature set managing module may be configured to establish a target character feature set for each target character separately based on local images in the electronic device 100.
- the local images may include images in an album of the electronic device 100, images downloaded from the Internet or a cloud server via the communication module, and images received from another electronic device via the communication module.
- a character image may refer to an image in which a character is included.
- the target character feature set managing module may communicate with the feature extracting module 122 to determine which of the local images are character images. In general, the target character feature set managing module operates in the background while the electronic device 100 is in a standby state or the camera module 110 is in an inactive state.
- the operations of establishing a target character feature set by the target character feature set managing module may include the following steps. First, in step 201, the target character feature set managing module selects the character images from the local images as the target image set.
- the target character feature set managing module may identify and extract facial features of each face included in all the character images in the target image set by using the feature extracting module 122.
- the target character feature set managing module may communicate with the feature matching module 124 such that the feature matching module 124 compares the extracted facial features of each face with each other, and may establish a target character feature set for each character separately based on a degree of similarity of the extracted facial features of each face. Facial features whose degree of similarity is greater than a predetermined threshold will be determined to be facial features of the same character.
- Each target character feature set contains facial features of the target character corresponding thereto.
- the target character feature set managing module may allocate a weight to each target character feature set, the weight being determined by counting the number of occurrences of the same character in all images of the target image set. Specifically, the target character feature set managing module may count the number of occurrences of the same character in all the character images of the target image set; for example, if a character appears only once, a target character feature set will not be established for the facial features of this character, or the target character feature set corresponding to this character may be allocated the lowest weight. In addition, the target character feature set managing module may further allocate weights to different target character feature sets according to the appearing frequencies of different target characters in the target image set.
- the more frequently a person appears in the target image set, the higher the weight of the target character feature set corresponding to the person.
- the weight of a target character feature set actually reflects the extent to which the user of the electronic device is interested in the character corresponding to that target character feature set.
- a character for a target character feature set with a higher weight may be a character who the user of the electronic device has a higher interest in.
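The weight-allocation rule described above — counting occurrences per character, discarding characters seen too rarely, and weighting the rest by appearance frequency — might be sketched as follows (the `allocate_weights` helper, the normalisation to a sum of 1, and the `min_count` cut-off are illustrative assumptions):

```python
from collections import Counter

def allocate_weights(appearances, min_count=2):
    """appearances: one character id per face detected across the target
    image set. Characters appearing fewer than min_count times get no
    feature set (or, equivalently, the lowest weight); the rest are
    weighted by their appearance frequency, normalised to sum to 1."""
    counts = Counter(appearances)
    kept = {c: n for c, n in counts.items() if n >= min_count}
    if not kept:
        return {}
    total = sum(kept.values())
    return {c: n / total for c, n in kept.items()}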
- the operations of establishing a target character feature set by the target character feature set managing module may be implemented using deep learning algorithms.
- the established target character feature set may be stored in the memory of the electronic device.
- the target character feature set managing module may further update the target character feature sets according to a command input by the user. In another embodiment, the target character feature set managing module may further update the target character feature sets according to a time period preset by the user or according to a time period defaulted by the system. The updating of the target character feature sets may include adding a new target character feature set, updating the weights of the target character feature sets, and deleting a target character feature set the weight of which is lower than a threshold.
- the target character feature set managing module may perform the selection of character images, the extraction of facial features, and the establishment of target character feature sets only for images newly added to the local images since the last update, re-allocate weights to the target character feature sets previously stored in the memory and the newly established target character feature sets, and delete any target character feature set whose weight is lower than the threshold.
- the periodic updating of the target character feature sets may be classified into short periodic update and long periodic update. For example, in a case where the short period is one week and the long period is one month, the target character feature set managing module may repeat the above-described updating operations every week for the images newly added to the local images since the last update, and may update based on all the images in the local images every month.
- the target character feature sets may accurately reflect the extent of user's interest in different characters.
- the focusing module 114 may be configured to perform auto-focusing on matched target characters who appear in the image according to related information of the matched target characters who appear in the image.
- the related information of the matched target characters who appear in the image includes at least one of the number of the matched target characters, the distance between centers of the faces of the matched target characters, and the weights of the target character feature sets corresponding to the matched target characters.
- the focusing module 114 may select at least one of a single-target focusing mode and a multi-targets focusing mode according to the number of matched target characters, the distance between the centers of the faces of matched target characters, and the weights of the target character feature sets corresponding to the matched target characters to perform auto-focusing on the matched target characters.
- if the feature matching module 124 determines that there is only one matched target character (for example, the object 302), the focusing module 114 may select the single-target focusing mode to focus on the matched target character, i.e., the object 302.
- the single-target focusing mode includes performing a separate focusing on the matched target character, that is, focusing on the center of the face of the matched target character.
- the focusing module 114 may determine the differences between the plurality of weights of the target character feature sets corresponding to the matched two or more target characters, respectively, wherein if the differences between the highest weight and the other weights among the plurality of weights are all greater than a predetermined threshold, the single-target focusing mode is selected to focus on the target character for the target character feature set with the highest weight. As shown in FIG. 3A, in another embodiment, the objects 301 to 304 may have target character feature sets corresponding thereto respectively, that is, the objects 301 to 304 may all be determined as the target characters.
- the focusing module 114 may determine weights of the target character feature sets corresponding to the objects 301 to 304, if the target character feature set corresponding to the object 302 has the highest weight, and each of the differences between the weight of the target character feature set corresponding to the object 302 and the weights of the target character feature sets corresponding to the object 301, the object 303, and the object 304 is greater than a preset weight threshold, the focusing module 114 may determine the object 302 as the focused object, and focus on the object 302 under the single-target focusing mode.
- the objects 301 to 304 are all determined as the target characters, the user is much more interested in the object 302 than the other three objects, therefore, the focusing module may focus on the object 302 only.
- the focusing module may determine, as the objects to be focused, the object for the target character feature set with the highest weight and an object for a target character feature set whose weight differs from the highest weight by less than the preset weight threshold, by determining the weights of the target character feature sets corresponding to the objects 301 to 304 and the differences between the highest weight and the other weights among them.
- the object 302 and the object 303 are respectively determined as the object for target character feature set with the highest weight and the object for the target character feature set with a weight, a difference of which from the highest weight being less than the preset weight threshold, among the objects 301 to 304, then the focusing module 114 may determine the object 302 and the object 303 as the objects to be focused.
- the focusing module 114 may select a focus mode for object 302 and the object 303.
- the object 302 and the object 304 are respectively determined as the object for target character feature set with the highest weight and the object for the target character feature set with a weight, a difference of which from the highest weight being less than the preset weight threshold, among the objects 301 to 304, then the focusing module 114 may determine the object 302 and the object 304 as the objects to be focused.
- the focusing module 114 may select a focus mode for the objects 302 and the object 304.
- the focusing module 114 may determine the objects 302 to 304 as the objects to be focused. That is, although the objects 301 to 304 are all determined as the target characters, the extent of the user's interest in the objects 302 to 304 are similar and much higher than that of the user's interest in the object 301, and therefore, the focusing module 114 may select a focus mode for the objects 302 to 304.
- the object 301 and the objects 303 to 304 are respectively determined as the object for target character feature set with the highest weight and the object for the target character feature set with a weight, a difference of which from the highest weight being less than the preset weight threshold, among the objects 301 to 304, then the focusing module 114 may determine the object 301 and the objects 303 to 304 as the objects to be focused.
- the focusing module 114 may select a focus mode for the object 301 and the objects 303 to 304.
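The candidate-selection rule running through the examples above — keep the character with the highest weight, plus every character whose weight is within the preset weight threshold of it — can be sketched as follows (the `select_candidates` helper is a hypothetical name, and the example weights are made up):

```python
def select_candidates(weights, weight_threshold):
    """weights: {character_id: feature-set weight} for the matched target
    characters. Keeps the highest-weight character together with every
    character whose weight differs from the highest by less than the
    preset weight threshold."""
    top = max(weights.values())
    return [c for c, w in weights.items() if top - w < weight_threshold]
```

With weights such as {301: 0.1, 302: 0.5, 303: 0.45, 304: 0.2} and a threshold of 0.2, only the objects 302 and 303 survive, reproducing the FIG. 3B scenario; if every other weight were far below the highest, only the object 302 would remain and single-target focusing would be used.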
- the focusing module 114 determines a distance between the centers of the faces of the target character for the target character feature set with the highest weight and the target character for the target character feature set with the at least one weight. That is, in the example shown in FIG. 3B, the distance between the centers of the faces of the object 302 and the object 303 is determined. In the example shown in FIG. 3C, the distance between the centers of the faces of the object 302 and the object 304 is determined.
- in the example shown in FIG. 3D, the distances between the centers of the faces of the objects 302 to 304 are determined respectively.
- in the example shown in FIG. 3E, the distances between the centers of the faces of the object 301 and the objects 303 to 304 are determined respectively.
- the focusing module 114 may perform central focusing only on two or more target characters the distance between which is less than a predetermined threshold. If the distance between the centers of the faces of two target characters is less than the predetermined threshold, the focusing module 114 may determine that the two target characters are in a state of being next to each other. For example, if the distance between the centers of the faces of the object 302 and the object 303 in FIG. 3B is less than the predetermined threshold, it is determined that the object 302 and the object 303 are in a state of being next to each other. Likewise, if the distances between the center of the face of the object 303 and the centers of the faces of the object 302 and the object 304 in FIG. 3D are all less than the predetermined threshold, it is determined that the objects 302 to 304 are in a state of being next to each other.
- in FIG. 3E, only the distance between the centers of the faces of the object 303 and the object 304 is less than the predetermined threshold, so it is determined that the object 303 and the object 304 are in a state of being next to each other.
- the focusing module 114 may perform central focusing only on two or more target characters, the distance between which being less than the predetermined threshold, that is to say, the focusing module 114 may perform central focusing only on the target characters in the state of being next to each other.
- the central focusing may mean focusing on the center of a straight line or the center of a polygon formed by the centers of the faces of the plurality of target characters.
- the object 302 and the object 303 are respectively determined as the object for target character feature set with the highest weight and the object for the target character feature set with a weight, a difference of which from the highest weight being less than the preset weight threshold, among the objects 301 to 304, and the distance between the centers of the faces of the object 302 and the object 303 is less than the predetermined threshold, therefore, the focusing module 114 selects the multi-targets focusing mode to perform central focusing on the object 302 and the object 303.
- the object 302 and the object 304 are respectively determined as the object for target character feature set with the highest weight and the object for the target character feature set with a weight, a difference of which from the highest weight being less than the preset weight threshold, among the objects 301 to 304, and the distance between the center of the face of the object 302 and the center of the face of the object 304 is greater than the predetermined threshold, therefore, the focusing module 114 selects the single-target focusing mode to perform separate focusing on the object 302 and the object 304 respectively.
- the objects 302 to 304 are respectively determined as the object for target character feature set with the highest weight and the object for the target character feature set with a weight, a difference of which from the highest weight being less than the preset weight threshold, among the objects 301 to 304, and the distances between the centers of the faces of the object 303 and the object 302 and object 304 are all less than the predetermined threshold, therefore, the focusing module 114 selects the multi-targets focusing mode to perform central focusing on the objects 302 to 304.
- the objects 301 and the objects 303 to 304 are respectively determined as the object for target character feature set with the highest weight and the object for the target character feature set with a weight, a difference of which from the highest weight being less than the preset weight threshold, among the objects 301 to 304, and only the distance between the centers of the faces of the object 303 and the object 304 is less than the predetermined threshold, therefore, the focusing module 114 selects the single-target focusing mode to perform separate focusing on the object 301, and selects the multi-targets focusing mode to perform central focusing on the objects 303 to 304.
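The distance test in FIGS. 3B to 3E amounts to grouping target characters whose face centers lie within the predetermined threshold of one another: each group of two or more receives central focusing, while singletons receive separate single-target focusing. A simple greedy sketch (the `group_by_distance` helper is an assumption; it does not merge groups bridged only by a later point, a detail the patent leaves unspecified):

```python
import math

def group_by_distance(centers, dist_threshold):
    """centers: {character_id: (x, y) face center} for the focus candidates.
    Greedily places each character into the first group containing a member
    closer than dist_threshold, else starts a new group. Groups of two or
    more then get central focusing; singletons get separate focusing."""
    groups = []
    for cid, pt in centers.items():
        for g in groups:
            if any(math.dist(pt, centers[other]) < dist_threshold for other in g):
                g.append(cid)
                break
        else:
            groups.append([cid])
    return groups
```

With face centers placed so that the objects 303 and 304 are adjacent and the object 301 is far away, the sketch yields one two-member group and one singleton, matching the FIG. 3E outcome of central focusing on 303 and 304 and separate focusing on 301.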
- FIGS. 3A to 3E are merely exemplary embodiments, and the present disclosure is not limited to the embodiments illustrated in FIGS. 3A to 3E; rather, various modifications of the embodiments described above may also be included.
- FIG. 4 is a flowchart illustrating a focus tracking photographing method of the electronic device in the focus tracking mode according to an embodiment of the present disclosure.
- in step 401, it is determined whether a plurality of faces appear in an image captured in a preview mode; if a plurality of faces appear, the method proceeds to step 402, and if no face or only one face appears, the method proceeds to step 404.
- in step 402, facial feature extraction is performed for each face.
- the facial feature extraction may be implemented based on machine learning techniques, for example, AdaBoost learning algorithm.
- in step 403, the extracted facial features of each face are matched with one or more target character feature sets respectively to determine whether a matched target character appears in the image. If no matched target character appears in the image, the method proceeds to step 404. If a matched target character appears, the method proceeds to step 405.
- in step 404, that is, in the case where no face or only one face appears in the image captured in the preview mode, or where faces appear but no matched target character appears, the focus mode may be automatically switched to the normal focus mode.
- in step 405, auto-focusing on the matched target characters who appear in the image is performed according to the related information of the matched target characters who appear in the image.
- the related information of the matched target characters who appear in the image includes at least one of the number of matched target characters, the distance between the centers of the faces of the matched target characters, and the weights of the target character feature sets corresponding to the matched target characters.
- the performing of auto-focusing on matched target characters who appear in the image according to related information of the matched target characters who appear in the image may include: selecting at least one of a single-target focusing mode and a multi-targets focusing mode based on the related information of the matched target characters who appear in the image.
- In a case where the number of the matched target characters is one, the single-target focusing mode is selected. In a case where the number of the matched target characters is greater than or equal to two, the differences between a plurality of weights of the target character feature sets corresponding to the matched target characters are determined, wherein, if the differences between the highest weight and the other weights among the plurality of weights are all greater than a preset weight threshold, the single-target focusing mode is selected to focus on the target character for the target character feature set with the highest weight, and wherein, if the difference between the highest weight of the plurality of weights and at least one of the other weights is less than the preset weight threshold, the distance between the centers of the faces of the target character for the target character feature set with the highest weight and the target character for the target character feature set with the at least one weight is determined, and the multi-targets focusing mode is selected to focus on two or more target characters whose face centers are within a preset distance threshold of each other.
- An electronic device may preferably use an album of the electronic device or other image resources downloaded to the electronic device as a macroscopic database, and analyze characters who frequently appear in the macroscopic database. Characters who frequently appear in the macroscopic database are typically the owner of the electronic device (i.e., the user of the electronic device), family members or close friends of the owner, or persons the owner is interested in.
- The electronic device may quickly identify, in a crowd, a person who the user of the electronic device may be interested in and perform auto-focusing, thereby sparing the user the difficulty of adjusting the focus, which has the additional advantages of reducing shooting time and improving shooting quality.
- In addition, it may effectively reduce the impact of strangers on focusing.
- Various embodiments of the present disclosure can be implemented by program commands that can be executed by various computers and stored in a recording medium readable by a computer.
- Recording media readable by a computer may include program commands, data files, data structures, and combinations thereof.
- the program commands stored in the recording medium may be program commands specifically designed for the present disclosure or program commands commonly used in the field of computer software.
- A non-transitory computer readable recording medium is any data storage device that can store data which can be subsequently read by a computer system.
- Examples of the non-transitory computer readable recording medium include a read only memory (ROM), a random access memory (RAM), a compact disk ROM (CD-ROM), a magnetic tape, a floppy disk, and an optical data storage device.
- the non-transitory computer readable recording medium can further be distributed over a network coupled computer system to store and execute the computer readable code in a distributed mode.
- programmers skilled in the art to which the present disclosure pertains may readily interpret functional programs, code, and code segments for implementing the present disclosure.
- Processors operating in accordance with stored instructions may implement the functions associated with the various embodiments of the present disclosure as described above. In this case, the instructions may be stored in one or more non-transitory processor readable media, which fall within the scope of the present disclosure. Examples of the processor readable media include ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices. The processor readable media can also be distributed over network coupled computer systems for storing and executing instructions in a distributed mode.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Studio Devices (AREA)
Abstract
An electronic device and a focus tracking photographing method thereof are provided. A focus tracking photographing method for an electronic device may include: obtaining a preview image including a plurality of faces; performing facial feature extraction for each of the plurality of faces; matching the extracted facial features for each of the plurality of faces with at least one target character feature set respectively, to identify at least one matched target character in the preview image; and performing auto-focusing on the at least one matched target character in the preview image based on a result of the matching.
Description
The present disclosure relates generally to a photographing device and method and, more particularly, to an electronic device capable of photographing and a focus tracking photographing method of the electronic device.
With the development of technology, smart devices with cameras have become the mainstream terminal devices of the current era and the main tools for taking pictures. In the prior art, when a person is photographed using a mobile terminal and a plurality of faces are present in the preview image captured by the camera, the terminal device usually focuses on all faces in the preview image automatically based on face recognition, or the user selects a target to focus on by tapping the screen with a finger.
In the prior art, when an electronic device performs photographing, there are at least the following defects: the user is required to select a target and repeatedly perform focusing, and the process is cumbersome, resulting in inefficient operations.
Exemplary embodiments address the above disadvantages and other disadvantages not described above. Moreover, the exemplary embodiments are not required to overcome the disadvantages described above, and the exemplary embodiments may not overcome any of the problems described above.
According to an aspect of the present disclosure, a focus tracking photographing method for an electronic device is provided, the method may include: obtaining a preview image including a plurality of faces; performing facial feature extraction for each of the plurality of faces; matching extracted facial features for each of the plurality of faces with at least one target character feature set respectively, to identify at least one matched target character in the preview image; and performing auto-focusing on the at least one matched target character in the preview image based on information associated with the identified at least one matched target character.
According to another aspect of the present disclosure, an electronic device is provided, the electronic device comprises: a feature extracting module configured to perform facial feature extraction for each of a plurality of faces included in a preview image; a feature matching module configured to match extracted facial features for each of the plurality of faces with at least one target character feature set respectively, to identify at least one matched target character in the preview image; and a focusing module configured to perform auto-focusing on the at least one matched target character in the preview image based on information associated with the identified at least one matched target character.
When the focus tracking mode of the electronic device is activated, the electronic device may quickly identify, in a crowd, a person who the user of the electronic device may be interested in, and perform auto-focusing, thereby sparing the user the difficulty of adjusting the focus, which has the additional advantages of reducing shooting time and improving shooting quality. In addition, when taking photos outdoors, at scenic spots, and in other public places, it may effectively reduce the impact of strangers.
The above and other features and advantages of the present disclosure will become more apparent by explaining exemplary embodiments of the present disclosure in detail with reference to the accompanying drawings, in which:
FIG. 1 is a diagram illustrating an example configuration of an electronic device according to an embodiment of the present disclosure;
FIG. 2 is a flowchart illustrating a method of establishing and updating a target character feature set by a target character feature set managing module according to an embodiment of the present disclosure;
FIGS. 3A, 3B, 3C, 3D and 3E are diagrams illustrating operations of a focusing module in a focus tracking mode, according to an embodiment of the present disclosure; and
FIG. 4 is a flowchart illustrating a focus tracking photographing method of an electronic device in the focus tracking mode according to an embodiment of the present disclosure.
The invention will be described more fully hereinafter with reference to the drawings illustrating illustrative embodiments of the invention. However, the present invention may be embodied in many forms and should not be construed as being limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art. Throughout the disclosure, the same reference numerals may be understood to refer to the same parts, components and structures.
The terms used herein are solely intended to describe a specific embodiment, and not to limit the scope of the present disclosure.
It is to be understood that singular forms are intended to include plural forms unless the context clearly indicates otherwise. It will also be understood that the term "comprise", when used in the specification, indicates the presence of specified features, integers, steps, operations, elements and/or components, but does not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, or combinations thereof.
Unless otherwise defined, all terms used herein, including technical and scientific terms, have the same meanings as those commonly understood by those of ordinary skill in the art to which this invention pertains. It will also be understood that terms (such as those defined in generally used dictionaries) should be interpreted as having meanings that are consistent with their meanings in the context of the relevant art, and are not to be interpreted in an idealized or overly formal sense, unless expressly so defined herein.
Related elements in the electronic device will be described in detail below with reference to FIG. 1.
FIG. 1 is a block diagram illustrating an example configuration of the electronic device according to an embodiment of the present invention. The electronic device described herein may be any electronic device having a photographing function. Although a portable device will be employed as a representative example of the electronic device in some embodiments of the invention, it should be understood that some components of the electronic device may be omitted or replaced.
Referring to FIG. 1, the electronic device 100 may include a camera module 110 and a controller 120.
The controller 120 may include a feature extracting module 122 and a feature matching module 124.
In a normal focus mode, the feature extracting module 122 may determine whether a face appears in a preview image captured by an image capturing module of the camera module 110, and perform auto-focusing or manual focusing on the detected face according to a preset setting. For example, a human face has certain structural distribution features (such as the structural and positional features of facial parts (e.g., eyes, nose, mouth, eyebrows, etc.)), and rules generated from these features may be used to determine whether a face is included in the image. In the prior art, there are many methods for determining whether a face is included in an image, such as the AdaBoost (Adaptive Boosting) learning algorithm and the like. In an embodiment of the present disclosure, the method for determining whether a face is included in an image by the feature extracting module 122 may be implemented through various known methods. Further, in the embodiment of the present disclosure, the operation of performing auto-focusing or manual focusing on the face in the preview image may also be implemented through various known methods.
In the focus tracking mode, the feature extracting module 122 may determine whether a plurality of faces appear in the preview image captured by the image capturing module 112 of the camera module 110, and if a plurality of faces appear, facial feature extraction may be performed for each face. In an embodiment, after determining that a plurality of faces are included in the preview image, the feature extracting module 122 may extract facial features for all of the faces included in the preview image based on deep learning techniques. As described above, the human face has certain structural distribution features; for example, feature data that may be used for face recognition may be obtained based on the shape description of the facial parts and the distance features between them. The geometrical description of the facial parts and the structural relationships between them may also be used as feature data for facial recognition. In the embodiment of the present disclosure, the method for extracting facial features by the feature extracting module 122 may be implemented through various known methods, such as the AdaBoost learning algorithm.
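As a rough illustration of such distance-based facial features, the following sketch builds a feature vector from pairwise distances between a handful of hypothetical landmark points. The landmark names and the normalization step are illustrative assumptions only; an actual implementation would rely on a trained detector such as AdaBoost or a deep network.

```python
import math

def landmark_distances(landmarks):
    """Build a simple geometric feature vector from facial landmarks.

    `landmarks` maps a landmark name to an (x, y) point, e.g. the eyes,
    nose, and mouth. The pairwise distances between landmarks serve as
    crude facial features, mirroring the distance-feature description
    in the text.
    """
    names = sorted(landmarks)
    features = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            (x1, y1), (x2, y2) = landmarks[a], landmarks[b]
            features.append(math.hypot(x2 - x1, y2 - y1))
    if not features:
        return []
    # Normalize by the largest distance so the vector is scale-invariant.
    scale = max(features)
    return [d / scale for d in features]

face = {"left_eye": (30, 40), "right_eye": (70, 40),
        "nose": (50, 60), "mouth": (50, 80)}
vec = landmark_distances(face)
```

With four landmarks the vector has six entries (one per landmark pair), and normalization makes it comparable across faces photographed at different scales.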
The feature matching module 124 may match facial features of each face extracted by the feature extracting module 122 with one or more target character feature sets respectively, to determine whether a plurality of faces included in the image include a face which has a target character feature set corresponding thereto, so that it is possible to determine whether a matched target character appears in the image. In an embodiment, the target character feature set is established for each character separately, and each target character feature set includes facial features of a character corresponding thereto. A character having the matched target character feature set may be recognized as the target character.
Further, the controller 120 may further include control circuits and other elements for controlling the overall operation of the electronic device 100 or the operations of elements within the electronic device 100. The controller 120 may control a predetermined operation or function in the electronic device 100 to be performed according to a predetermined setting or in response to any combination of user inputs.
The controller 120 may further include a target character feature set managing module (not shown) for managing the target character feature sets. A method of establishing and updating a target character feature set will be described in detail below with reference to FIG. 2.
The camera module 110 may include an image capturing module 112 and a focusing module 114.
The image capturing module 112 may convert an image obtained by an image sensor module (not shown) in the image capturing module 112 into a preview image and display it on a display module (not shown) of the electronic device 100 when the camera module 110 is driven. When a request for capturing an image is generated through the shutter button, the image capturing module 112 may store the captured image in the electronic device, optionally after performing a series of image processing on the captured image.
The focusing module 114 may include a driving module capable of driving a lens system included in the image sensor module. In the normal focus mode, the focusing module 114 may focus on the point at which a user input on the display module is detected. Further, the focusing module 114 may determine whether the focal length is appropriate by judging the sharpness of images taken at different focal lengths, and may further adjust the focal length by controlling the lens system. In the focus tracking mode, the focusing module 114 may control the lens system to perform single focusing or central focusing on the face determined as the subject. The focus mode may be set in the preview mode and may be switched at any time according to the user's input. The focus mode is not limited to the focus modes described above and may include other focus modes known in the art. Hereinafter, the operations of the focusing module 114 in the focus tracking mode will be described in detail with reference to FIGS. 3A to 3E.
The focusing module 114 may be configured to automatically focus on matched target characters who appear in the image according to related information of the matched target characters who appear in the image. The focusing module 114 may also include a separate processing module (not shown) for determining whether to use at least one of a single-target focusing mode and a multi-targets focusing mode in the focus tracking mode. The single-target focusing mode refers to focusing on the face of a single character, or on the faces of a plurality of characters respectively; the multi-targets focusing mode refers to focusing on the center point of a polygon formed by lines connecting the center points of the faces of a plurality of characters. In the embodiment of the present disclosure, when photographing is performed in the focus tracking mode, the single-target focusing mode, the multi-targets focusing mode, or both may be selected according to positions of a plurality of target characters.
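The multi-targets focus point described above can be illustrated as the centroid of the face centers. A minimal sketch, assuming faces are represented by (x, y) center coordinates in the preview frame:

```python
def focus_point(face_centers):
    """Compute the focus point for a list of (x, y) face centers.

    A single face is focused on directly; for several faces, the focus
    point is the centroid of the polygon formed by their face centers,
    which is the multi-targets focusing behavior described in the text.
    """
    if not face_centers:
        raise ValueError("no faces to focus on")
    if len(face_centers) == 1:
        return face_centers[0]
    n = len(face_centers)
    return (sum(x for x, _ in face_centers) / n,
            sum(y for _, y in face_centers) / n)
```

For three faces at (0, 0), (4, 0), and (2, 6), the computed focus point is the centroid (2.0, 2.0).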
The electronic device 100 may also include a communication module (not shown) for communicating with an external device or connecting to a network. The communication module may also be used to transmit data to the outside or receive data from the outside. Further, the electronic device 100 may further include a memory (not shown) for storing local images and target character feature sets established by the target character feature set managing module. Moreover, the electronic device 100 may also include an internal bus for communication between elements in the electronic device 100.
FIG. 2 is a flowchart illustrating a method for establishing and updating a target character feature set by the target character feature set managing module, according to an embodiment of the present disclosure.
The target character feature set managing module may be configured to establish a target character feature set for each target character separately based on local images in the electronic device 100. The local images may include images in an album of the electronic device 100, images downloaded from the Internet or a cloud server via the communication module, and images received from another electronic device via the communication module. A character image may refer to an image in which a character is included. The target character feature set managing module may communicate with the feature extracting module 122 to identify the character images among the local images. In general, the target character feature set managing module operates in the background while the electronic device 100 is in a standby state or the camera module 110 is in an inactive state.
As shown in FIG. 2, the operations of establishing a target character feature set by the target character feature set managing module may include the following steps. First, in step 201, the target character feature set managing module selects the character images from the local images as the target image set.
Then, in step 202, the target character feature set managing module may identify and extract facial features of each face included in all the character images in the target image set by using the feature extracting module 122.
In step 203, the target character feature set managing module may communicate with the feature matching module 124 such that the feature matching module 124 compares the extracted facial features of each face with each other, and may establish a target character feature set for each character separately based on the degree of similarity of the extracted facial features. Facial features whose degree of similarity is greater than a predetermined threshold will be determined to be facial features of the same character. Each target character feature set contains the facial features of the target character corresponding thereto.
The target character feature set managing module may allocate a weight to each target character feature set; the weight is determined by counting the number of occurrences of the same character in all images of the target image set. For example, if a character appears only once, a target character feature set may not be established for the facial features of this character, or the target character feature set corresponding to this character may be allocated the lowest weight. In addition, the target character feature set managing module may further allocate weights to different target character feature sets according to the appearing frequencies of different target characters in the target image set: the more frequently a character appears in the character images, the higher the weight of the target character feature set corresponding to that character. Thus, the weight of a target character feature set actually reflects the extent to which the user of the electronic device is interested in the character for that target character feature set; a character for a target character feature set with a higher weight may be a character in whom the user has a higher interest.
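The occurrence-counting and weight-allocation scheme might be sketched as follows. The minimum-count cutoff and the frequency-proportional weights are illustrative assumptions; the disclosure only requires that more frequent characters receive higher weights and that one-off appearances are dropped or weighted lowest.

```python
from collections import Counter

def build_weights(appearances, min_count=2):
    """Allocate a weight to each recurring character.

    `appearances` is the list of character identities recognized across
    the target image set (one entry per occurrence). Characters seen
    fewer than `min_count` times get no target character feature set;
    the rest are weighted in proportion to how often they appear.
    """
    counts = Counter(appearances)
    total = sum(c for c in counts.values() if c >= min_count)
    if not total:
        return {}
    return {name: c / total
            for name, c in counts.items() if c >= min_count}
```

For an album where "mom" appears three times, "friend" twice, and "stranger" once, the stranger is dropped and "mom" ends up with the highest weight.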
In an embodiment of the present disclosure, the operations of establishing a target character feature set by the target character feature set managing module may be implemented using deep learning algorithms. The established target character feature set may be stored in the memory of the electronic device.
In an embodiment, the target character feature set managing module may further update the target character feature sets according to a command input by the user. In another embodiment, the target character feature set managing module may further update the target character feature sets according to a time period preset by the user or according to a time period defaulted by the system. The updating of the target character feature sets may include adding a new target character feature set, updating the weights of the target character feature sets, and deleting a target character feature set the weight of which is lower than a threshold.
In an embodiment, when the updating is performed, the target character feature set managing module may perform the selection of character images, the extraction of facial features, and the establishment of target character feature sets only for images newly added to the local images after the last update, re-allocate weights to the target character feature sets previously stored in the memory and the newly established target character feature sets, and delete any target character feature set whose weight is lower than the threshold.
In another embodiment, the periodic updating of the target character feature sets may be classified into a short-period update and a long-period update. For example, in a case where the short period is one week and the long period is one month, the target character feature set managing module may repeat the above-described updating operations once a week for the images newly added to the local images after the last update, and may perform the update based on all the images in the local images once a month.
By updating the target character feature sets, the target character feature sets may accurately reflect the extent of user's interest in different characters.
Hereinafter, operations of the focusing module 114 of the electronic device 100 in the focus tracking mode will be described in detail with reference to FIGS. 3A to 3E.
As described above, the focusing module 114 may be configured to perform auto-focusing on matched target characters who appear in the image according to related information of the matched target characters who appear in the image. The related information of the matched target characters who appear in the image includes at least one of the number of the matched target characters, the distance between centers of the faces of the matched target characters, and the weights of the target character feature sets corresponding to the matched target characters. Specifically, the focusing module 114 may select at least one of a single-target focusing mode and a multi-targets focusing mode according to the number of matched target characters, the distance between the centers of the faces of matched target characters, and the weights of the target character feature sets corresponding to the matched target characters to perform auto-focusing on the matched target characters.
As shown in FIG. 3A, in the case where the feature matching module 124 determines that there is only one matched target character (for example, the object 302), the one matched target character (i.e., the object 302) is focused on under the single-target focusing mode, where the single-target focusing mode includes performing separate focusing on the matched target character, that is, focusing on the center of the face of the matched target character.
In the case where the number of the matched target characters is two or more, the focusing module 114 may determine the differences between the plurality of weights of the target character feature sets corresponding to the two or more matched target characters, respectively, wherein, if the differences between the highest weight and the other weights among the plurality of weights are all greater than a predetermined threshold, the single-target focusing mode is selected to focus on the target character for the target character feature set with the highest weight. Still referring to FIG. 3A, in another embodiment, the objects 301 to 304 have target character feature sets corresponding thereto respectively; that is, the objects 301 to 304 are all determined to be target characters. In this case, the focusing module 114 may determine the weights of the target character feature sets corresponding to the objects 301 to 304. If the target character feature set corresponding to the object 302 has the highest weight, and each of the differences between the weight of the target character feature set corresponding to the object 302 and the weights of the target character feature sets corresponding to the object 301, the object 303, and the object 304 is greater than a preset weight threshold, the focusing module 114 may determine the object 302 as the focused object and focus on the object 302 under the single-target focusing mode. In the above example, it may be understood that although the objects 301 to 304 are all determined to be target characters, the user is much more interested in the object 302 than in the other three objects; therefore, the focusing module may focus on the object 302 only.
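The weight-difference test just described can be sketched as follows. The threshold value and the return convention (mode name plus candidate list) are assumptions for illustration.

```python
def select_focus_targets(weights, weight_threshold=0.2):
    """Pick focus candidates from the matched characters' weights.

    If the highest weight exceeds every other weight by more than the
    threshold, only that character is focused (single-target mode);
    otherwise, every character within the threshold of the highest
    weight remains a candidate for multi-targets focusing.
    """
    best = max(weights, key=weights.get)
    top = weights[best]
    close = [name for name, w in weights.items()
             if top - w < weight_threshold]
    mode = "single" if close == [best] else "multi"
    return mode, close
```

With weights {302: 0.5, 301: 0.2, 303: 0.1}, only the top character survives; with {302: 0.4, 303: 0.3, 301: 0.1}, the top two are kept as candidates, matching the FIG. 3B scenario.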
As shown in FIGS. 3B to 3E, in the case where the number of the matched target characters is two or more, for example, where the objects 301 to 304 are all determined to be target characters, the focusing module may determine the weights of the target character feature sets corresponding to the objects 301 to 304 and the differences between the highest weight and the other weights, and may then determine, as the objects to be focused, the object for the target character feature set with the highest weight together with every object for a target character feature set whose weight differs from the highest weight by less than the preset weight threshold.
In the case where the objects 301 to 304 are all determined to be target characters, as shown in FIG. 3B, if the object 302 and the object 303 are respectively determined to be the object for the target character feature set with the highest weight and the object for a target character feature set whose weight differs from the highest weight by less than the preset weight threshold, then the focusing module 114 may determine the object 302 and the object 303 as the objects to be focused. That is, although the objects 301 to 304 are all determined to be target characters, the extent of the user's interest in the object 302 and the object 303 is similar and much higher than the extent of the user's interest in the other two objects; therefore, the focusing module 114 may select a focus mode for the object 302 and the object 303.
In the case where the objects 301 to 304 are all determined to be target characters, as shown in FIG. 3C, if the object 302 and the object 304 are respectively determined to be the object for the target character feature set with the highest weight and the object for a target character feature set whose weight differs from the highest weight by less than the preset weight threshold, then the focusing module 114 may determine the object 302 and the object 304 as the objects to be focused. That is, although the objects 301 to 304 are all determined to be target characters, the extent of the user's interest in the object 302 and the object 304 is similar and much higher than the extent of the user's interest in the other two objects; therefore, the focusing module 114 may select a focus mode for the object 302 and the object 304.
In the case where the objects 301 to 304 are all determined to be target characters, as shown in FIG. 3D, if the objects 302 to 304 are determined to be the object for the target character feature set with the highest weight and the objects for target character feature sets whose weights differ from the highest weight by less than the preset weight threshold, then the focusing module 114 may determine the objects 302 to 304 as the objects to be focused. That is, although the objects 301 to 304 are all determined to be target characters, the extent of the user's interest in the objects 302 to 304 is similar and much higher than the extent of the user's interest in the object 301; therefore, the focusing module 114 may select a focus mode for the objects 302 to 304.
In the case where the objects 301 to 304 are all determined to be target characters, as shown in FIG. 3E, if the object 301 and the objects 303 to 304 are determined to be the object for the target character feature set with the highest weight and the objects for target character feature sets whose weights differ from the highest weight by less than the preset weight threshold, then the focusing module 114 may determine the object 301 and the objects 303 to 304 as the objects to be focused. That is, although the objects 301 to 304 are all determined to be target characters, the extent of the user's interest in the object 301 and the objects 303 to 304 is similar and much higher than the extent of the user's interest in the object 302; therefore, the focusing module 114 may select a focus mode for the object 301 and the objects 303 to 304.
In the examples shown in FIGS. 3B to 3E, in the case where the number of the matched target characters is greater than or equal to two, and it is determined that the difference between the highest weight among the plurality of weights of the target character feature sets corresponding to the matched target characters and at least one of the other weights is less than the preset weight threshold, the focusing module 114 determines the distance between the centers of the faces of the target character for the target character feature set with the highest weight and the target character for the target character feature set with the at least one weight. That is, in the example shown in FIG. 3B, the distance between the centers of the faces of the object 302 and the object 303 is determined. In the example shown in FIG. 3C, the distance between the centers of the faces of the object 302 and the object 304 is determined. In the example shown in FIG. 3D, the distances between the centers of the faces of the objects 302 to 304 are determined respectively. In the example shown in FIG. 3E, the distances between the centers of the faces of the object 301 and the objects 303 to 304 are determined respectively.
The focusing module 114 may perform central focusing only on two or more target characters whose face centers are separated by less than a predetermined threshold. If the distance between the centers of the faces of two target characters is less than the predetermined threshold, the focusing module 114 may determine that the two target characters are in a state of being next to each other. For example, if the distance between the centers of the faces of the object 302 and the object 303 in FIG. 3B is less than the predetermined threshold, it is determined that the object 302 and the object 303 are in a state of being next to each other. Similarly, if the distances between the center of the face of the object 303 and the centers of the faces of the object 302 and the object 304 in FIG. 3D are all less than the predetermined threshold, it is determined that the objects 302 to 304 are in a state of being next to each other. For another example, in FIG. 3E, only the distance between the centers of the faces of the object 303 and the object 304 is less than the predetermined threshold, and thus it is determined that the object 303 and the object 304 are in a state of being next to each other.
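The "next to each other" determination described above amounts to a pairwise distance test over face centers. The following is an illustrative sketch only; the function name, the face-center coordinates, and the threshold value are hypothetical and are not specified by this disclosure.

```python
from itertools import combinations
from math import hypot

# Hypothetical distance threshold, in pixels, for illustration only.
DIST_THRESHOLD = 150.0

def adjacent_pairs(centers, threshold=DIST_THRESHOLD):
    """Return pairs of object ids whose face centers are closer than threshold,
    i.e. the pairs of target characters 'next to each other'."""
    pairs = []
    for (id_a, (xa, ya)), (id_b, (xb, yb)) in combinations(sorted(centers.items()), 2):
        if hypot(xa - xb, ya - yb) < threshold:
            pairs.append((id_a, id_b))
    return pairs

# Hypothetical face centers for the objects 301 to 304 of a FIG. 3B-like scene.
centers = {301: (80, 200), 302: (300, 210), 303: (390, 205), 304: (700, 220)}
print(adjacent_pairs(centers))  # [(302, 303)]
```

With these coordinates, only the objects 302 and 303 are determined to be in a state of being next to each other, matching the FIG. 3B scenario.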
As described above, the focusing module 114 may perform central focusing only on two or more target characters whose face centers are separated by less than the predetermined threshold; that is to say, the focusing module 114 may perform central focusing only on target characters in the state of being next to each other. Central focusing may mean focusing on the center of the straight line, or the center of the polygon, formed by the centers of the faces of the plurality of target characters.
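The central focusing point described above can be computed as the centroid of the grouped face centers: the midpoint of the line segment for two faces, and the vertex centroid of the polygon otherwise. This is a minimal sketch with hypothetical coordinates; reading the "center of a polygon" as the vertex centroid is an assumption, since the disclosure does not fix a definition.

```python
def central_focus_point(face_centers):
    """Point of central focusing: the centroid of the given face centers
    (midpoint of the segment for two faces, vertex centroid otherwise)."""
    n = len(face_centers)
    return (sum(x for x, _ in face_centers) / n,
            sum(y for _, y in face_centers) / n)

# Two adjacent faces: focus on the midpoint of the segment joining their centers.
print(central_focus_point([(300, 210), (390, 205)]))  # (345.0, 207.5)
```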
In the example shown in FIG. 3B, the object 302 and the object 303 are respectively determined as the object for the target character feature set with the highest weight and the object for a target character feature set whose weight differs from the highest weight by less than the preset weight threshold, among the objects 301 to 304, and the distance between the centers of the faces of the object 302 and the object 303 is less than the predetermined threshold; therefore, the focusing module 114 selects the multi-targets focusing mode to perform central focusing on the object 302 and the object 303.

In the example shown in FIG. 3C, the object 302 and the object 304 are respectively determined as the object for the target character feature set with the highest weight and the object for a target character feature set whose weight differs from the highest weight by less than the preset weight threshold, among the objects 301 to 304, and the distance between the center of the face of the object 302 and the center of the face of the object 304 is greater than the predetermined threshold; therefore, the focusing module 114 selects the single-target focusing mode to perform separate focusing on the object 302 and the object 304 respectively.

In the example shown in FIG. 3D, the objects 302 to 304 are respectively determined as the object for the target character feature set with the highest weight and the objects for target character feature sets whose weights differ from the highest weight by less than the preset weight threshold, among the objects 301 to 304, and the distances between the center of the face of the object 303 and the centers of the faces of the object 302 and the object 304 are all less than the predetermined threshold; therefore, the focusing module 114 selects the multi-targets focusing mode to perform central focusing on the objects 302 to 304.

In the example shown in FIG. 3E, the object 301 and the objects 303 to 304 are respectively determined as the object for the target character feature set with the highest weight and the objects for target character feature sets whose weights differ from the highest weight by less than the preset weight threshold, among the objects 301 to 304, and only the distance between the centers of the faces of the object 303 and the object 304 is less than the predetermined threshold; therefore, the focusing module 114 selects the single-target focusing mode to perform separate focusing on the object 301, and selects the multi-targets focusing mode to perform central focusing on the objects 303 to 304.
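The selection logic of FIGS. 3B to 3E can be summarized as: keep the highest-weight target plus every target whose weight is within the preset weight threshold of it, group the kept targets by face-center distance, then apply the multi-targets focusing mode to each group and the single-target focusing mode to each isolated target. The sketch below is illustrative only; the function name, weights, coordinates, and threshold values are hypothetical and not fixed by the disclosure.

```python
import math

def select_focus_targets(weights, centers, weight_threshold, dist_threshold):
    """Sketch of the mode selection of FIGS. 3B to 3E.

    weights: {object_id: weight of its matched target character feature set}
    centers: {object_id: (x, y) face center}
    Returns a list of focus actions: ('single', [id]) or ('multi', [ids]).
    """
    top = max(weights.values())
    # Candidates: the highest-weight target plus any target whose weight
    # differs from the highest weight by less than the weight threshold.
    candidates = [oid for oid, w in weights.items() if top - w < weight_threshold]
    if len(candidates) == 1:
        return [('single', candidates)]
    # Group candidates that are "next to each other": a candidate joins a
    # group if it is within dist_threshold of any member of that group.
    groups = []
    for oid in sorted(candidates):
        for group in groups:
            if any(math.dist(centers[oid], centers[m]) < dist_threshold for m in group):
                group.append(oid)
                break
        else:
            groups.append([oid])
    return [('multi' if len(g) > 1 else 'single', g) for g in groups]

# FIG. 3E-like situation: object 301 is isolated, objects 303 and 304 are adjacent,
# and object 302 falls outside the weight threshold.
weights = {301: 0.90, 302: 0.20, 303: 0.85, 304: 0.88}
centers = {301: (80, 200), 302: (300, 400), 303: (390, 205), 304: (450, 210)}
print(select_focus_targets(weights, centers, 0.10, 150.0))
# [('single', [301]), ('multi', [303, 304])]
```

Because a candidate joins a group when it is close to any existing member, the FIG. 3D case (the object 303 close to both the object 302 and the object 304) yields a single three-member group, as described above.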
It should be noted that the cases illustrated in FIGS. 3A to 3E are merely exemplary embodiments, and the present disclosure is not limited to the embodiments illustrated in FIGS. 3A to 3E; various modifications of the embodiments described above may also be included.
FIG. 4 is a flowchart illustrating a focus tracking photographing method of the electronic device in the focus tracking mode according to an embodiment of the present disclosure.
In step 401, it is determined whether a plurality of faces appear in an image captured in a preview mode. If a plurality of faces appear, the method proceeds to step 402; otherwise (i.e., if no face or only one face appears), the method proceeds to step 404.
In step 402, facial feature extraction is performed for each face. In an embodiment, the facial feature extraction may be implemented based on machine learning techniques, for example, the AdaBoost learning algorithm.
In step 403, the extracted facial features of each face are matched with one or more target character feature sets respectively to determine whether a matched target character appears in the image. If no matched target character appears in the image, the method proceeds to step 404. If a matched target character appears, the method proceeds to step 405.
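The matching of step 403 can be sketched as a best-reference search under a similarity threshold. The disclosure does not fix a feature representation or a similarity metric; the fixed-length feature vectors, the cosine similarity measure, the threshold value, and all names below are assumptions for illustration only.

```python
import math

MATCH_THRESHOLD = 0.9  # hypothetical similarity threshold

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def match_target(face_features, target_feature_sets, threshold=MATCH_THRESHOLD):
    """Return the id of the best-matching target character feature set,
    or None if no set is similar enough (-> normal focus mode, step 404)."""
    best_id, best_sim = None, threshold
    for target_id, ref in target_feature_sets.items():
        sim = cosine_similarity(face_features, ref)
        if sim >= best_sim:
            best_id, best_sim = target_id, sim
    return best_id

targets = {'A': [1.0, 0.0, 0.5], 'B': [0.0, 1.0, 0.2]}
print(match_target([0.9, 0.1, 0.5], targets))  # A
print(match_target([0.0, 0.0, 1.0], targets))  # None
```

A `None` result models the branch to step 404: a face appears, but no matched target character is found.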
In step 404, that is, in the case where no face or only one face appears in the image captured in the preview mode, or where faces appear in the image captured in the preview mode but no matched target character is found, the focus mode may be automatically switched to the normal focus mode.
In step 405, auto-focusing is performed on the matched target characters who appear in the image according to the related information of the matched target characters who appear in the image. According to an embodiment of the present application, the related information of the matched target characters who appear in the image includes at least one of the number of the matched target characters, the distance between the centers of the faces of the matched target characters, and the weights of the target character feature sets corresponding to the matched target characters. According to an embodiment of the present application, the performing of auto-focusing on the matched target characters who appear in the image according to the related information may include: selecting at least one of a single-target focusing mode and a multi-targets focusing mode based on the related information of the matched target characters who appear in the image. According to an embodiment of the present application, in a case where the number of the matched target characters is one, the single-target focusing mode is selected; in a case where the number of the matched target characters is greater than or equal to two, the differences between a plurality of weights of the target character feature sets corresponding to the matched target characters are determined, wherein if the differences between the highest weight and the other weights among the plurality of weights are all greater than a preset weight threshold, the single-target focusing mode is selected to focus on the target character for the target character feature set with the highest weight, and wherein, if a difference between the highest weight of the plurality of weights and at least one weight of the other weights is less than the preset weight threshold, a distance between the centers of the faces of the target character for the target character feature set with the highest weight and a target character for the target character feature set with the at least one weight is determined, and the multi-targets focusing mode is selected to focus on two or more target characters, the distance between which is less than a preset distance threshold, among the target character for the target character feature set with the highest weight and the target character for the target character feature set with the at least one weight, and the single-target focusing mode is selected to focus on a single target character, the distance for which is greater than the preset distance threshold, among the target character for the target character feature set with the highest weight and the target character for the target character feature set with the at least one weight.
An electronic device according to an embodiment of the present disclosure may preferably use an album of the electronic device, or other image resources downloaded to the electronic device, as a macroscopic database, and analyze the characters who frequently appear in the macroscopic database. Characters who frequently appear in the macroscopic database are typically the owner of the electronic device (i.e., the user of the electronic device), family members or close friends of the owner, or persons the owner is interested in. When the focus tracking mode of the electronic device is activated, the electronic device may quickly identify, in a crowd, a person whom the user of the electronic device may be interested in and perform auto-focusing, thereby sparing the user the difficulty of adjusting the focus manually, with the additional advantages of reducing shooting time and improving shooting quality. In addition, when taking photos outdoors, at scenic spots, and in other public places, this may effectively reduce the impact of strangers.
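The frequency analysis described above, which claim 3 ties to weights (counting occurrences of the same character across the target image set), can be sketched as follows. The album data, the character labels, and the normalization of counts into weights are illustrative assumptions; the disclosure only specifies counting occurrences.

```python
from collections import Counter

def character_weights(image_character_labels):
    """Weights for target character feature sets: for each character, the
    number of images in the local album (the 'macroscopic database') in which
    the character appears, normalized by the total number of images
    (normalization is an assumption, not specified by the disclosure)."""
    counts = Counter(c for labels in image_character_labels for c in set(labels))
    total = len(image_character_labels)
    return {c: n / total for c, n in counts.items()}

# Hypothetical album: each entry lists the characters recognized in one image.
album = [['owner'], ['owner', 'friend'], ['owner'], ['stranger'], ['owner', 'friend']]
print(sorted(character_weights(album).items()))
# [('friend', 0.4), ('owner', 0.8), ('stranger', 0.2)]
```

A frequently appearing character such as the owner receives a high weight, so the focusing module later prefers that character when several matched faces compete for focus.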
As described above, various embodiments of the present disclosure can be performed by program commands that can be executed by various computers and can be stored in a recording medium readable by a computer. Recording media readable by a computer may include program commands, data files, data structures, and combinations thereof. The program commands stored in the recording medium may be program commands specifically designed for the present disclosure or program commands commonly used in the field of computer software.
Certain aspects of the present disclosure can further be implemented as computer readable code on a non-transitory computer readable recording medium. The non-transitory computer readable recording medium is any data storage device that can store data which can be subsequently read by a computer system. Examples of the non-transitory computer readable recording medium include a read only memory (ROM), a random access memory (RAM), a compact disk ROM (CD-ROM), a magnetic tape, a floppy disk, and an optical data storage device. The non-transitory computer readable recording medium can further be distributed over network-coupled computer systems so that the computer readable code is stored and executed in a distributed fashion. Furthermore, programmers skilled in the art to which the present disclosure pertains may readily interpret functional programs, code, and code segments for implementing the present disclosure.
For example, particular electrical components may be employed in a mobile device or in similar or related circuitry for implementing the functions associated with the various embodiments of the present disclosure described above. Alternatively, one or more processors operating in accordance with stored instructions may implement the functions associated with the various embodiments of the present disclosure as described above. In that case, the instructions may be stored in one or more non-transitory processor readable media, which falls within the scope of the present disclosure. Examples of the processor readable media include ROM, RAM, CD-ROM, magnetic tape, floppy disks, and optical data storage devices. The processor readable media can also be distributed over network-coupled computer systems so that the instructions are stored and executed in a distributed fashion. Furthermore, programmers skilled in the art to which the present disclosure pertains may readily interpret functional computer programs, code, and code segments for implementing the present disclosure.
Although the present disclosure has been shown and described with respect to the embodiments of the present disclosure, it will be understood by those skilled in the art that, various changes in form and detail may be made herein without departing from the spirit and scope of the disclosure as defined by the claims and their equivalents.
Claims (15)
- A focus tracking photographing method for an electronic device, comprising:
  obtaining a preview image including a plurality of faces;
  performing facial feature extraction for each of the plurality of faces;
  matching extracted facial features for each of the plurality of faces with at least one target character feature set respectively, to identify at least one matched target character in the preview image; and
  performing auto-focusing on the at least one matched target character in the preview image based on information associated with the identified at least one matched target character.
- The method of claim 1, wherein the at least one target character feature set is established for each target character separately based on local images in the electronic device, and wherein a target character feature set is established by:
  selecting images including at least one character from the local images as a target image set;
  extracting facial features of each face of the at least one character included in images of the target image set;
  comparing the extracted facial features of each face with each other; and
  establishing the target character feature set for each of the at least one character separately based on a degree of similarity of the extracted facial features of each face.
- The method of claim 2, wherein each target character feature set is allocated with a weight, the weight being determined by counting the number of occurrences of the same character in all images of the target image set.
- The method of claim 1, wherein the information associated with the matched target characters includes at least one of a number of the matched target characters, the distance between centers of the faces of the matched target characters, and the weights of the target character feature sets corresponding to the matched target characters.
- The method of claim 4, wherein the performing of auto-focusing comprises: selecting a single-target focusing mode or a multi-targets focusing mode based on the information associated with the matched target characters,
  wherein in a case where the number of the matched target characters is one, the single-target focusing mode is selected.
- The method of claim 5, wherein in a case where the number of the matched target characters is greater than or equal to two, differences between a plurality of weights of the target character feature sets corresponding to the matched target characters are determined,
  wherein if differences between the highest weight and other weights among the plurality of weights are all greater than a preset weight threshold, the single-target focusing mode is selected to focus on a target character for a target character feature set with the highest weight, and
  wherein, if a difference between the highest weight of the plurality of weights and at least one weight of the other weights is less than the preset weight threshold, a distance between the centers of the faces of the target character for the target character feature set with the highest weight and a target character for the target character feature set with the at least one weight is determined, and the multi-targets focusing mode is selected to focus on two or more target characters, the distance between which is less than a preset distance threshold, among the target character for the target character feature set with the highest weight and the target character for the target character feature set with the at least one weight, and the single-target focusing mode is selected to focus on a single target character, the distance for which is greater than the preset distance threshold, among the target character for the target character feature set with the highest weight and the target character for the target character feature set with the at least one weight.
- An electronic device comprising:
  a feature extracting module configured to perform facial feature extraction for each of a plurality of faces included in a preview image;
  a feature matching module configured to match extracted facial features for each of the plurality of faces with at least one target character feature set respectively, to identify at least one matched target character in the preview image; and
  a focusing module configured to perform auto-focusing on the at least one matched target character in the preview image based on information associated with the identified at least one matched target character.
- The electronic device of claim 7, further comprising:
  a target character feature set managing module configured to establish a target character feature set for each target character separately based on local images in the electronic device,
  wherein the target character feature set managing module is further configured to:
  select images including at least one character from the local images as a target image set,
  extract facial features of each face of the at least one character included in images of the target image set,
  compare extracted facial features of each face with each other, and
  establish the target character feature set for each of the at least one character separately based on a degree of similarity of the extracted facial features of each face.
- The electronic device of claim 8, wherein the target character feature set managing module is further configured to allocate a weight to each target character feature set, the weight being determined by counting the number of occurrences of the same character in all images of the target image set.
- The electronic device of claim 7, wherein the information associated with the matched target characters includes at least one of a number of the matched target characters, the distance between centers of the faces of the matched target characters, and the weights of the target character feature sets corresponding to the matched target characters.
- The electronic device of claim 10, wherein the focusing module is further configured to select a single-target focusing mode or a multi-targets focusing mode based on the information associated with the matched target characters,
  wherein in a case where the number of the matched target characters is one, the single-target focusing mode is selected.
- The electronic device of claim 11, wherein in a case where the number of the matched target characters is greater than or equal to two, the differences between a plurality of weights of the target character feature sets corresponding to the matched target characters are determined,
  wherein if differences between the highest weight and other weights among the plurality of weights are all greater than a preset weight threshold, the single-target focusing mode is selected to focus on a target character for a target character feature set with the highest weight, and
  wherein, if a difference between the highest weight of the plurality of weights and at least one weight of the other weights is less than the preset weight threshold, a distance between the centers of the faces of the target character for the target character feature set with the highest weight and a target character for the target character feature set with the at least one weight is determined, and the multi-targets focusing mode is selected to focus on two or more target characters, the distance between which is less than a preset distance threshold, among the target character for the target character feature set with the highest weight and the target character for the target character feature set with the at least one weight, and the single-target focusing mode is selected to focus on a single target character, the distance for which is greater than the preset distance threshold, among the target character for the target character feature set with the highest weight and the target character for the target character feature set with the at least one weight.
- An electronic device comprising:
  a camera module;
  a controller configured to:
  obtain a preview image including a plurality of faces;
  perform facial feature extraction for each of the plurality of faces;
  match extracted facial features for each of the plurality of faces with at least one target character feature set respectively, to identify at least one matched target character in the preview image; and
  control the camera module to perform auto-focusing on the at least one matched target character in the preview image based on information associated with the identified at least one matched target character.
- The electronic device of claim 13, wherein the at least one target character feature set is established for each target character separately based on local images in the electronic device, and wherein the controller is further configured to:
  select images including at least one character from the local images as a target image set,
  extract facial features of each face of the at least one character included in images of the target image set,
  compare the extracted facial features of each face with each other, and
  establish the target character feature set for each of the at least one character separately based on a degree of similarity of the extracted facial features of each face.
- A computer readable storage medium storing a program which includes instructions for performing the method of any one of claims 1-6.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811469213.3 | 2018-11-27 | ||
CN201811469213.3A CN109474785A (en) | 2018-11-27 | 2018-11-27 | The focus of electronic device and electronic device tracks photographic method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020111776A1 true WO2020111776A1 (en) | 2020-06-04 |
Family
ID=65674916
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2019/016477 WO2020111776A1 (en) | 2018-11-27 | 2019-11-27 | Electronic device for focus tracking photographing and method thereof |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109474785A (en) |
WO (1) | WO2020111776A1 (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107241618B (en) * | 2017-08-07 | 2020-07-28 | 苏州市广播电视总台 | Recording method and recording apparatus |
CN110266941A (en) * | 2019-05-31 | 2019-09-20 | 维沃移动通信(杭州)有限公司 | A kind of panorama shooting method and terminal device |
CN110290324B (en) * | 2019-06-28 | 2021-02-02 | Oppo广东移动通信有限公司 | Device imaging method and device, storage medium and electronic device |
CN110830712A (en) * | 2019-09-16 | 2020-02-21 | 幻想动力(上海)文化传播有限公司 | Autonomous photographing system and method |
CN110581954A (en) * | 2019-09-30 | 2019-12-17 | 深圳酷派技术有限公司 | shooting focusing method and device, storage medium and terminal |
CN114143467B (en) * | 2021-12-20 | 2024-10-29 | 努比亚技术有限公司 | Shooting method based on automatic focusing and zooming, mobile terminal and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2002333652A (en) * | 2001-05-10 | 2002-11-22 | Oki Electric Ind Co Ltd | Photographing device and reproducing apparatus |
JP2009017038A (en) * | 2007-07-02 | 2009-01-22 | Fujifilm Corp | Digital camera |
EP1737216B1 (en) * | 2005-06-22 | 2009-05-20 | Omron Corporation | Object determination device, imaging device and monitor |
JP2010028720A (en) * | 2008-07-24 | 2010-02-04 | Sanyo Electric Co Ltd | Image capturing apparatus |
EP2187624A1 (en) * | 2008-11-18 | 2010-05-19 | Fujinon Corporation | Autofocus system |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010117663A (en) * | 2008-11-14 | 2010-05-27 | Fujinon Corp | Autofocus system |
JP5990951B2 (en) * | 2012-03-15 | 2016-09-14 | オムロン株式会社 | Imaging apparatus, imaging apparatus control method, imaging apparatus control program, and computer-readable recording medium recording the program |
CN106713734B (en) * | 2015-11-17 | 2020-02-21 | 华为技术有限公司 | Automatic focusing method and device |
CN105915782A (en) * | 2016-03-29 | 2016-08-31 | 维沃移动通信有限公司 | Picture obtaining method based on face identification, and mobile terminal |
CN107395986A (en) * | 2017-08-28 | 2017-11-24 | 联想(北京)有限公司 | Image acquiring method, device and electronic equipment |
- 2018-11-27: CN CN201811469213.3A patent/CN109474785A/en, active, Pending
- 2019-11-27: WO PCT/KR2019/016477 patent/WO2020111776A1/en, active, Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN109474785A (en) | 2019-03-15 |
Legal Events

Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19890282; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19890282; Country of ref document: EP; Kind code of ref document: A1