WO2016101481A1 - Autofocus Method and Apparatus - Google Patents
Autofocus Method and Apparatus
- Publication number
- WO2016101481A1 (PCT/CN2015/077963)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- coordinate
- spatial
- target object
- focus
- distance
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/144—Movement detection
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B7/00—Mountings, adjusting means, or light-tight connections, for optical elements
- G02B7/28—Systems for automatic generation of focusing signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
Definitions
- the present disclosure relates to the field of communication terminal technologies, and in particular, to an autofocus method and apparatus.
- A camera function is integrated on the smart terminal, through which the user can shoot scenes or people of interest at any time and in any place. When the camera function is turned on and the user frames a shot through the viewfinder, manual focus can be achieved by clicking a framing target in the viewfinder, such as a person's face.
- the present disclosure provides an autofocus method and apparatus to solve the problem in the related art that the cumbersome manual focus operation results in a poor shooting experience for the user.
- an autofocus method comprising:
- the target object is automatically focused according to the second spatial data.
- the acquiring the first spatial data of the target object includes:
- a first space vector angle of the first vector between the focus and the first position is calculated.
- the calculating the first vertical distance from the focus to the image sensor comprises:
- a difference between the image distance and a fixed focal length is calculated, the difference being used as the first vertical distance of the focus to the image sensor.
- the obtaining, by the first vertical distance, the first spatial coordinate of the first position of the target object imaged on the image sensor comprises:
- determining, according to the second two-dimensional coordinate and the first vertical distance, the first spatial coordinate at which the target object is imaged on the image sensor, wherein the X-axis coordinate value of the first spatial coordinate is the X-axis coordinate value of the second two-dimensional coordinate, the Y-axis coordinate value of the first spatial coordinate is the Y-axis coordinate value of the second two-dimensional coordinate, and the Z-axis coordinate value of the first spatial coordinate is the first vertical distance.
- acquiring location change data includes:
- the spatial change vector angle detected by the direction sensor as the position change data is acquired.
- the calculating the second spatial data of the target object according to the first spatial data and the position change data includes:
- the second space vector angle is a space vector angle of the second vector between the focus and the second position
- the second position is a position at which the target object is imaged on the image sensor after the autofocus is completed
- the performing autofocus according to the second spatial data includes:
- the lens group is moved until the distance from the lens group to the image sensor is the adjusted image distance.
- before the performing autofocusing according to the second spatial data, the method further includes:
- the correcting the second spatial coordinate according to the third spatial coordinate to obtain the corrected second spatial coordinate includes:
- an autofocus apparatus comprising:
- An acquiring module configured to acquire first spatial data of the target object after the user clicks on the target object in the viewfinder to complete manual focusing;
- a detecting module configured to acquire position change data when detecting that the framing content in the viewfinder changes
- a first calculating module configured to calculate second spatial data of the target object according to the first spatial data and the position change data
- a focusing module configured to perform auto focusing on the target object according to the second spatial data.
- the acquiring module includes:
- a first vertical distance calculation sub-module configured to calculate a first vertical distance of the focus to the image sensor, wherein the target object is imaged on the image sensor when the manual focus is completed;
- a first spatial coordinate obtaining submodule configured to use the focus as the origin of a three-dimensional rectangular coordinate system and obtain, according to the first vertical distance, the first spatial coordinate of the first position at which the target object is imaged on the image sensor;
- a first space vector angle calculation submodule configured to calculate a first space vector angle of the first vector between the focus and the first position.
- the first vertical distance calculation submodule includes:
- the image distance obtaining sub-module is used to obtain the image distance when the manual focus is completed
- the difference calculation sub-module is configured to calculate a difference between the image distance and the fixed focal length, and use the difference as the first vertical distance of the focus to the image sensor.
- the first spatial coordinate obtaining submodule includes:
- a first two-dimensional coordinate acquisition sub-module configured to acquire, by using a center of the viewfinder as an origin of a plane rectangular coordinate system, a first two-dimensional coordinate of the target object in the plane rectangular coordinate system, where The center of the viewfinder is in the same normal direction as the focus;
- a second two-dimensional coordinate obtaining sub-module configured to convert the first two-dimensional coordinates according to a preset ratio, to obtain a second two-dimensional coordinate that the target object is imaged on the image sensor;
- a first spatial coordinate determining submodule configured to determine, according to the second two-dimensional coordinate and the first vertical distance, the first spatial coordinate at which the target object is imaged on the image sensor, wherein the X-axis coordinate value of the first spatial coordinate is the X-axis coordinate value of the second two-dimensional coordinate, the Y-axis coordinate value of the first spatial coordinate is the Y-axis coordinate value of the second two-dimensional coordinate, and the Z-axis coordinate value of the first spatial coordinate is the first vertical distance.
- the detecting module includes:
- An acceleration detecting submodule configured to determine, by the acceleration data detected by the acceleration sensor, whether the viewfinder moves;
- a change vector angle acquisition submodule configured to acquire, when the viewfinder moves, the space change vector angle detected by the direction sensor as the position change data.
- the first calculating module includes:
- a first straight line distance calculation submodule configured to calculate a first straight line distance of the focus to the first position according to the first space coordinate
- a second space vector angle calculation submodule configured to calculate a second space vector angle according to the first space vector angle and the space change vector angle, where the second space vector angle is the space vector angle of the second vector between the focus and the second position, and the second position is the position at which the target object is imaged on the image sensor after the autofocus is completed;
- a second spatial coordinate calculation submodule configured to calculate a second spatial coordinate of the second position according to the first straight line distance and the second space vector angle.
- the focusing module includes:
- a second vertical distance obtaining submodule configured to obtain, according to the second spatial coordinate, the second vertical distance from the focus to the second position, wherein the second vertical distance is the Z-axis coordinate value of the second spatial coordinate;
- the lens group moving sub-module is configured to move the lens group until the distance from the lens group to the image sensor is the adjusted image distance.
- the device further includes:
- a second calculating module configured to calculate a third spatial coordinate of the second location by using an image recognition algorithm
- a correction module configured to correct the second spatial coordinate according to the third spatial coordinate to obtain the corrected second spatial coordinate.
- the calibration module includes:
- a correction threshold judging module configured to determine whether a distance between the third spatial coordinate and the second spatial coordinate is less than a preset correction threshold
- a corrected coordinate value calculation submodule configured to, when the distance is less than the correction threshold, calculate the average of the X-axis coordinate values of the third spatial coordinate and the second spatial coordinate as the X-axis coordinate value of the corrected second spatial coordinate, calculate the average of the Y-axis coordinate values of the third spatial coordinate and the second spatial coordinate as the Y-axis coordinate value of the corrected second spatial coordinate, and calculate the Z-axis coordinate value of the corrected second spatial coordinate according to the first straight line distance, the corrected X-axis coordinate value, and the corrected Y-axis coordinate value.
- another autofocus device comprising:
- a memory for storing instructions executable by the processor;
- wherein the processor is configured to:
- the target object is automatically focused according to the second spatial data.
- when the user clicks the target object in the viewfinder to complete manual focus, the first spatial data of the target object is acquired; when a change in the framing content of the viewfinder is detected, the position change data is acquired;
- and after the second spatial data of the target object is calculated according to the first spatial data and the position change data, autofocus can be completed according to the second spatial data. Therefore, when the user takes a picture, if the viewfinder moves but the target object does not move out of it, the target object can be automatically focused. This avoids a manual focus operation whenever the framing content changes, simplifies the focusing workflow, improves the focusing speed, and accordingly improves the user's shooting experience.
- when acquiring the first spatial data of the target object, the present disclosure obtains the image distance after manual focus is completed and, taking the focus as the origin of a three-dimensional rectangular coordinate system, obtains the first spatial coordinate at which the target object is imaged on the image sensor
- and the first space vector angle, so that the first spatial coordinate and the first space vector angle can later be used to calculate the spatial data after the position of the target object changes, thereby facilitating autofocus.
- the disclosure can also use the acceleration sensor integrated in the terminal to determine whether the viewfinder has moved; when the viewfinder moves, the direction sensor can detect the space change vector angle produced by the movement,
- so that the spatial data after the position of the target object changes can be calculated from the space change vector angle, the first spatial coordinate, and the first space vector angle, to achieve autofocus.
- the present disclosure can also correct the second spatial coordinate by the third spatial coordinate calculated by the image recognition algorithm before performing autofocus according to the second spatial coordinate, thereby further improving the accuracy of the auto focus.
- FIG. 1 is a flow chart of an autofocus method according to an exemplary embodiment of the present disclosure.
- FIG. 2 is a flow chart of another autofocus method according to an exemplary embodiment of the present disclosure.
- FIG. 3 is a schematic diagram of imaging after completion of focusing of a terminal according to an exemplary embodiment of the present disclosure.
- FIG. 4 is a block diagram of an autofocus apparatus according to an exemplary embodiment of the present disclosure.
- FIG. 5 is a block diagram of another auto-focusing device according to an exemplary embodiment of the present disclosure.
- FIG. 6 is a block diagram of another auto-focusing device according to an exemplary embodiment of the present disclosure.
- FIG. 7 is a block diagram of another auto-focusing device according to an exemplary embodiment of the present disclosure.
- FIG. 8 is a block diagram of another auto-focusing device according to an exemplary embodiment of the present disclosure.
- FIG. 9 is a block diagram of another auto-focusing device according to an exemplary embodiment of the present disclosure.
- FIG. 10 is a block diagram of another auto-focusing device according to an exemplary embodiment of the present disclosure.
- FIG. 11 is a block diagram of another auto-focusing device according to an exemplary embodiment of the present disclosure.
- FIG. 12 is a block diagram of another auto-focusing device according to an exemplary embodiment of the present disclosure.
- FIG. 13 is a schematic structural diagram of an autofocus device according to an exemplary embodiment of the present disclosure.
- first, second, third, etc. may be used in the present disclosure to describe various information, such information should not be limited to these terms. These terms are only used to distinguish the same type of information from each other.
- first information may also be referred to as second information without departing from the scope of the present disclosure.
- second information may also be referred to as first information.
- the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
- FIG. 1 is a flowchart of an auto focus method according to an exemplary embodiment. The method may be used in a terminal, including the following steps:
- step 101 when the user clicks on the target object in the viewfinder to complete manual focusing, the first spatial data of the target object is acquired.
- the terminal in the embodiment of the present disclosure mainly refers to various smart terminals integrated with camera functions, such as a smart phone, a tablet computer, a PDA (Personal Digital Assistant), and the like.
- the lens group implementing the camera function on the smart terminal usually has a fixed focal length (f), that is, optical zoom cannot be performed. During focusing, the terminal moves the lens group to change the distance between the lens group and the image sensor used for imaging,
- until that distance equals the image distance (v), that is, the focal plane of the image coincides with the vertical plane of the image sensor, so that the image is sharp and focusing is completed.
- when the user turns on the camera function of the terminal, the user can adjust the picture to be shot by viewing the framing content in the viewfinder, and can perform manual focus by clicking a target object in the viewfinder.
- the target object is imaged on the image sensor, and the image is sharp.
- if the terminal then moves, the framing content in the viewfinder changes, causing the position of the target object in the viewfinder to change; as long as the target object has not moved out of the viewfinder, the terminal can automatically focus on it.
- the first spatial data of the target object is acquired, and the first spatial data may include the first spatial coordinate and the first spatial vector angle, so as to complete the subsequent autofocus process by using the first spatial data.
- Specifically, the terminal may first calculate the first vertical distance from the focus to the image sensor and establish a three-dimensional rectangular coordinate system with the focus as the origin. Since manual focus has been completed at this point, the target object is imaged at a first position on the image sensor.
- the first spatial coordinate of that first position in the three-dimensional rectangular coordinate system is then obtained; it consists of an X-axis coordinate, a Y-axis coordinate,
- and a Z-axis coordinate, where the Z-axis coordinate value is the aforementioned first vertical distance. Then, based on this coordinate system, the first space vector angle of the first vector between the focus and the first position can be calculated using the vector angle formula in the related art; the first space vector angle includes the angle between the first vector and the X-axis, the angle between the first vector and the Y-axis, and the angle between the first vector and the Z-axis.
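Step 101 as a whole can be sketched as follows. This is a simplified illustrative model, not the patent's literal implementation: the function name is ours, and v1, f, and the sensor-plane coordinates (x1, y1) are assumed to come from the camera pipeline.

```python
import math

def first_spatial_data(v1, f, x1, y1):
    """Sketch of step 101: the first vertical distance d1 = v1 - f
    (image distance minus fixed focal length), the first spatial
    coordinate P1 = (x1, y1, d1) with the focus at the origin, and
    the first space vector angle of the vector from the focus to P1
    (angle to each axis, via cos(alpha) = component / length)."""
    d1 = v1 - f
    p1 = (x1, y1, d1)
    rho = math.sqrt(x1 ** 2 + y1 ** 2 + d1 ** 2)
    angles = tuple(math.acos(c / rho) for c in p1)
    return p1, angles

# Illustrative numbers only (inches for the sensor plane, arbitrary
# length unit for v1 and f).
p1, (ax1, ay1, az1) = first_spatial_data(4.05, 4.0, 0.09, 0.09)
```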
- step 102 when it is detected that the framing content in the finder changes, the position change data is acquired.
- the smart terminal usually integrates a plurality of sensors with different functions, which may include an acceleration sensor and a direction sensor.
- the acceleration sensor is used to detect the magnitude and direction of the acceleration received by the smart terminal, so that it can be judged whether the terminal has rotated, and the direction sensor is used to detect the angle through which each coordinate axis of the smart terminal moves in three-dimensional space.
- the direction sensor may be specifically a gyro sensor.
- after acquiring the acceleration data, the terminal may determine according to it whether the terminal has rotated, and thereby whether the viewfinder has moved; when it is determined that the viewfinder has moved and then stopped moving,
- the space change vector angle detected by the direction sensor is obtained; the space change vector angle consists of the X-axis change angle, the Y-axis change angle, and the Z-axis change angle of the current space vector angle relative to the space vector angle when manual focus was completed.
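A minimal sketch of the detection logic in step 102. The threshold and the helper names are hypothetical; a real terminal would read these values from the platform's accelerometer and direction-sensor APIs rather than from plain lists.

```python
def viewfinder_moved(accel_samples, threshold=0.5):
    """Decide from accelerometer magnitudes (gravity removed) whether
    the terminal - and hence the viewfinder - has moved."""
    return any(abs(a) > threshold for a in accel_samples)

def space_change_vector_angle(angles_before, angles_after):
    """Per-axis change of the space vector angle between the moment
    manual focus completed and the moment movement stops (radians)."""
    return tuple(a1 - a0 for a0, a1 in zip(angles_before, angles_after))

# Illustrative readings only.
moved = viewfinder_moved([0.02, 0.9, 0.1])
deltas = space_change_vector_angle((0.40, 0.50, 0.60), (0.45, 0.52, 0.61))
```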
- step 103 second spatial data of the target object is calculated based on the first spatial data and the position change data.
- the first space vector angle of the first vector between the focus and the first position was obtained in step 101, and the space change vector angle was obtained in step 102, so in this step the second space vector angle can be calculated from the first space vector angle
- and the space change vector angle. The second space vector angle is the space vector angle of the second vector between the focus and the second position, where the second position is the position at which the target object is imaged on the image sensor after the autofocus is completed.
- the X-axis angle of the second space vector angle is the sum of the X-axis angle of the first space vector angle and the X-axis change angle of the space change vector angle;
- the Y-axis angle of the second space vector angle is the sum of the Y-axis angle of the first space vector angle and the Y-axis change angle of the space change vector angle;
- the Z-axis angle of the second space vector angle is the sum of the Z-axis angle of the first space vector angle and the Z-axis change angle of the space change vector angle.
- the first straight line distance from the focus to the first position is calculated according to the first spatial coordinate, and the second spatial coordinate of the second position is then calculated according to the first straight line distance and the second space vector angle.
- step 104 the target object is automatically focused in accordance with the second spatial data.
- Specifically, the second vertical distance from the focus to the second position may be obtained from the second spatial coordinate, where the second vertical distance is the Z-axis coordinate value of the second spatial coordinate. The sum of the second vertical distance and the fixed focal length is then calculated and taken as the adjusted image distance.
- the terminal then moves the lens group until the distance from the lens group to the image sensor equals the adjusted image distance; the image of the target object then falls on the image sensor, the target object is imaged sharply, and autofocus is completed.
- Optionally, before autofocusing according to the second spatial data, the second spatial data may be corrected. That is, when the second spatial data is the second spatial coordinate of the second position, an image recognition
- algorithm may be used to calculate a third spatial coordinate of the second position, and the second spatial coordinate is corrected according to the third spatial coordinate to obtain the corrected second spatial coordinate.
- Specifically, the terminal may determine whether the distance between the third spatial coordinate and the second spatial coordinate is less than a preset correction threshold. When it is less than the correction threshold, the average of the X-axis coordinate values of the third and second spatial
- coordinates is taken as the X-axis coordinate value of the corrected second spatial coordinate, and the average of the Y-axis coordinate values of the third and second spatial coordinates is taken as the Y-axis coordinate value of the corrected second spatial coordinate.
- the Z-axis coordinate value of the corrected second spatial coordinate can then be calculated according to the first straight line distance, the corrected X-axis coordinate value, and the corrected Y-axis coordinate value.
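The correction step can be sketched as follows. The function name is ours, and the Z recomputation is our reading of the text: we assume the corrected point stays at the first straight line distance ρ from the focus, so z = √(ρ² − x² − y²).

```python
import math

def correct_p2(p2, p3, rho, threshold):
    """Correct the second spatial coordinate p2 using the
    image-recognition estimate p3. If the two points are closer than
    the correction threshold, average X and Y and recompute Z from the
    first straight line distance rho (assumption: the corrected point
    keeps distance rho from the focus, z = sqrt(rho^2 - x^2 - y^2))."""
    if math.dist(p2, p3) >= threshold:
        return p2  # estimates disagree too much: keep the computed coordinate
    x = (p2[0] + p3[0]) / 2.0
    y = (p2[1] + p3[1]) / 2.0
    z = math.sqrt(max(rho ** 2 - x ** 2 - y ** 2, 0.0))
    return (x, y, z)
```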
- As can be seen from the above, when the user takes a picture with the terminal and clicks the target object in the viewfinder to complete manual focus, the first spatial data of the target object is acquired; when a change in the framing content of the viewfinder is detected, the position change data is acquired;
- and after the second spatial data of the target object is calculated according to the first spatial data and the position change data, autofocus can be completed according to the second spatial data.
- the embodiment may further correct the second spatial coordinate by using the third spatial coordinate calculated by the image recognition algorithm before performing autofocus according to the second spatial coordinate, thereby The accuracy of autofocus can be further improved.
- FIG. 2 is a flowchart of another autofocus method according to an exemplary embodiment.
- the method can be used in a terminal, including the following steps:
- step 201 when the user clicks on the target object in the viewfinder to complete manual focusing, the first vertical distance of the focus to the image sensor is calculated.
- the terminal in the embodiment of the present disclosure mainly refers to various intelligent terminals integrated with camera functions.
- in optical imaging, the focal length (f), the object distance (u), and the image distance (v) satisfy the Gaussian imaging formula 1/f = 1/u + 1/v, wherein
- the focal length refers to the distance between the lens group and the focus,
- the object distance refers to the distance from the vertical plane of the photographed object to the lens group, and the image distance refers to the distance from the image formed to the lens group.
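As a quick numerical check of the Gaussian imaging formula, the relation can be solved for the image distance. The numbers below are made up for illustration and are not from the patent.

```python
def image_distance(f, u):
    """Solve the Gaussian imaging formula 1/f = 1/u + 1/v for the
    image distance v, given focal length f and object distance u
    (all three in the same length unit)."""
    return 1.0 / (1.0 / f - 1.0 / u)

# Illustrative values: f = 4 mm lens, object 2000 mm away.
v = image_distance(4.0, 2000.0)
# For a distant object, v comes out only slightly larger than f.
```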
- FIG. 3 is a schematic diagram of imaging after completion of focusing of a terminal according to an exemplary embodiment.
- the picture to be photographed can be adjusted by viewing the framing content in the viewfinder, and a certain target object in the viewfinder can be clicked to perform manual focus, as shown in FIG. 3.
- the target object is imaged on the image sensor, and the image is clear at this time.
- assume that when manual focus is completed the image distance is v1 and the fixed focal length is f;
- then the first vertical distance from the focus to the image sensor is d1 = v1 - f.
- step 202 the first spatial coordinate of the first position imaged by the target object on the image sensor is obtained according to the first vertical distance with the focus as the origin of the three-dimensional Cartesian coordinate system.
- Specifically, the center of the viewfinder may first be taken as the origin of a plane rectangular coordinate system, where the center of the viewfinder lies on the same normal line as the focus.
- the first two-dimensional coordinate of the target object in this plane rectangular coordinate system, denoted P(x, y), is acquired.
- since manual focus has been completed, the target object is imaged at a first position P1 on the image sensor.
- according to the size of the viewfinder and the size of the image sensor, the first two-dimensional coordinate P(x, y) is converted at a preset ratio to obtain the second two-dimensional coordinate at which the target object is imaged on the image sensor, assumed to be (x1, y1).
- for example, suppose the viewfinder has a pixel size of 1440×1080,
- and the image sensor has a length and width of 0.261 inches and 0.196 inches, respectively; then a first two-dimensional coordinate P(500px, 500px) of the target object on the viewfinder corresponds to a second two-dimensional coordinate of approximately (0.090 inches, 0.090 inches) on the image sensor.
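The viewfinder-to-sensor conversion described above can be sketched as follows; the helper name is ours, and the patent only specifies that the conversion uses a preset ratio between viewfinder size and sensor size.

```python
def to_sensor_coords(px, py, vf_w_px, vf_h_px, sensor_w_in, sensor_h_in):
    """Convert a point measured in viewfinder pixels (relative to the
    viewfinder center) into inches on the image sensor, scaling each
    axis by (sensor size / viewfinder size)."""
    return (px * sensor_w_in / vf_w_px, py * sensor_h_in / vf_h_px)

# The example from the text: 1440x1080 viewfinder, 0.261 x 0.196 inch sensor.
x1, y1 = to_sensor_coords(500, 500, 1440, 1080, 0.261, 0.196)
# Both values come out at roughly 0.090 inches, matching the example.
```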
- the first spatial coordinate P1(x1, y1, z1) at which the target object is imaged on the image sensor may then be determined from the second two-dimensional coordinate (x1, y1) and the first vertical distance d1, wherein
- the X-axis coordinate value of the first spatial coordinate is the X-axis coordinate value x1 of the second two-dimensional coordinate,
- the Y-axis coordinate value of the first spatial coordinate is the Y-axis coordinate value y1 of the second two-dimensional coordinate,
- and the Z-axis coordinate value z1 of the first spatial coordinate is the first vertical distance d1.
- step 203 a first space vector angle of the first vector between the focus and the first position is calculated.
- Specifically, the terminal can calculate the first space vector angle (αx1, αy1, αz1) of the vector from the focus to the first position P1 using the vector angle formula of the three-dimensional rectangular coordinate system, where the angle between the first vector and the X-axis is αx1, the angle between the first vector and the Y-axis is αy1, and the angle between the first vector and the Z-axis is αz1.
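The vector angle formula referred to here is the standard direction-angle relation cos α = component / length; a minimal sketch:

```python
import math

def space_vector_angle(x, y, z):
    """Direction angles (alpha_x, alpha_y, alpha_z), in radians, of the
    vector from the origin (the focus) to the point (x, y, z):
    cos(alpha_x) = x / rho, where rho is the vector length, and
    likewise for the Y and Z axes."""
    rho = math.sqrt(x * x + y * y + z * z)
    return tuple(math.acos(c / rho) for c in (x, y, z))

# The three direction cosines of any vector satisfy
# cos^2(ax) + cos^2(ay) + cos^2(az) = 1.
ax1, ay1, az1 = space_vector_angle(0.09, 0.09, 0.05)
```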
- step 204 it is determined whether the viewfinder has moved by the acceleration data detected by the acceleration sensor.
- a plurality of sensors having different functions are generally integrated on the terminal, among which the acceleration sensor can be used to detect the magnitude and direction of the acceleration received by the terminal.
- after the terminal acquires the acceleration data detected by the acceleration sensor, it can determine according to the acceleration data whether the terminal has rotated, and thereby whether the viewfinder has moved.
- step 205 when the viewfinder moves, the spatial change vector angle as the position change data detected by the direction sensor is acquired.
- the terminal can also integrate a direction sensor for detecting the moving angle of each coordinate axis of the terminal in the three-dimensional space.
- the direction sensor can be specifically a gyro sensor.
- Specifically, the space change vector angle detected by the direction sensor may be acquired when the movement stops; relative to the space vector angle when manual focus was completed,
- the space change vector angle consists of the X-axis change angle Δαx on the X-axis, the Y-axis change angle Δαy on the Y-axis, and the Z-axis change angle Δαz on the Z-axis.
- step 206 a first straight line distance of the focus to the first position is calculated based on the first spatial coordinates.
- the first spatial coordinate P1(x1, y1, z1) was obtained in the foregoing step 202, so the first straight line distance ρ from the focus to P1 can be calculated according to P1(x1, y1, z1) as follows:
- ρ = √(x1² + y1² + z1²)
- step 207 a second space vector angle is calculated based on the first space vector angle and the space change vector angle.
- Specifically, the second space vector angle may be calculated according to the first space vector angle obtained in step 203 and the space change vector angle obtained in step 205; the second space vector angle is the space vector angle of the second vector between the focus and the second position P2.
- the second position P2 is the position at which the target object is imaged on the image sensor after the terminal completes autofocus, in the case where the framing content in the viewfinder has changed after manual focus but the target object has not moved out of the viewfinder.
- the second space vector angle is calculated as follows:
- αx2 = αx1 + Δαx; αy2 = αy1 + Δαy; αz2 = αz1 + Δαz
- step 208 the second spatial coordinate of the second position is calculated according to the first straight line distance and the second space vector angle.
- Specifically, the second spatial coordinate P2(x2, y2, z2) of the second position can be calculated according to the first straight line distance ρ calculated in step 206 and the second space vector angle (αx2, αy2, αz2) calculated in step 207: ρ multiplied by the cosine of αx2 gives the X-axis coordinate value x2 of P2, ρ multiplied by the cosine of αy2 gives the Y-axis coordinate value y2, and ρ multiplied by the cosine of αz2 gives the Z-axis coordinate value z2. That is, the second spatial coordinate can be calculated according to the following formulas:
- x2 = ρ·cos αx2, y2 = ρ·cos αy2, z2 = ρ·cos αz2
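Putting steps 206 through 208 together, a sketch (the function name is ours) of recovering P2 from P1 and the space change vector angle:

```python
import math

def second_position(p1, deltas):
    """Compute the second spatial coordinate P2 from the first spatial
    coordinate P1 and the space change vector angle (d_ax, d_ay, d_az):
    keep the first straight line distance rho, add the change angles to
    the first space vector angle, and project rho back onto each axis."""
    x1, y1, z1 = p1
    rho = math.sqrt(x1 ** 2 + y1 ** 2 + z1 ** 2)       # first straight line distance
    angles1 = [math.acos(c / rho) for c in p1]         # first space vector angle
    angles2 = [a + d for a, d in zip(angles1, deltas)] # second space vector angle
    return tuple(rho * math.cos(a) for a in angles2)   # (x2, y2, z2)

# Sanity check: with no movement, the second position equals the first.
p2 = second_position((0.09, 0.09, 0.05), (0.0, 0.0, 0.0))
```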
- step 209 a second vertical distance of the focus to the second position is obtained according to the second spatial coordinate, wherein the second vertical distance is a Z-axis coordinate value of the second spatial coordinate.
- the second spatial coordinate P2(x2, y2, z2) of the second position was obtained in step 208, so the second vertical distance d2 from the focus to the second position P2 can be obtained according to the second spatial coordinate; the second vertical distance d2 is the Z-axis coordinate value z2 of the second spatial coordinate.
- step 210 the sum of the second vertical distance and the fixed focal length is calculated, and the sum is taken as the adjusted image distance:
- v2 = d2 + f (Formula 5)
- step 211 the lens group is moved until the distance from the lens group to the image sensor is the adjusted image distance.
- the adjusted image distance v2 is calculated in the foregoing steps according to the position at which the target object is imaged on the image sensor after autofocus.
- the terminal can therefore perform autofocus by controlling the movement of the lens group; autofocus is completed when the lens group has moved to a distance of the adjusted image distance v2 from the image sensor.
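Steps 209 through 211 reduce to a one-line image-distance update; a sketch with illustrative numbers (the function name and the values are ours):

```python
def adjusted_image_distance(z2, f):
    """Formula (5): the adjusted image distance v2 is the second
    vertical distance d2 (i.e. the Z-axis value z2 of the second
    spatial coordinate) plus the fixed focal length f."""
    d2 = z2        # second vertical distance
    return d2 + f  # v2

# Illustrative values only, in millimetres.
v2 = adjusted_image_distance(0.05, 4.0)
# The lens group would then be moved until it sits v2 from the sensor.
```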
- the present disclosure also provides an embodiment of an autofocus device and a terminal to which it is applied.
- FIG. 4 is a block diagram of an auto-focusing device according to an exemplary embodiment of the present disclosure.
- the device includes an acquisition module 410, a detection module 420, a first calculation module 430, and a focus module 440.
- the acquiring module 410 is configured to acquire first spatial data of the target object after the user clicks on the target object in the viewfinder to complete manual focusing.
- the detecting module 420 is configured to acquire position change data when detecting that the framing content in the viewfinder changes;
- the first calculating module 430 is configured to calculate second spatial data of the target object according to the first spatial data and the position change data;
- the focusing module 440 is configured to perform auto focusing on the target object according to the second spatial data.
- In this embodiment, when the user takes a picture with the terminal and clicks the target object in the viewfinder to complete manual focus, the first spatial data of the target object is acquired; when a change in the framing content of the viewfinder is detected, the position change data is acquired; and after the second spatial data of the target object is calculated from the first spatial data and the position change data, autofocus can be completed according to the second spatial data. Therefore, if the viewfinder moves while the user takes a picture but the target object does not move out of the viewfinder, the target object can be focused automatically. This avoids repeating the manual focus operation when the framing content changes, simplifies the focusing workflow, improves focusing speed, and accordingly improves the user's shooting experience.
- FIG. 5 is a block diagram of another auto-focusing device according to an exemplary embodiment of the present disclosure.
- Based on the embodiment shown in FIG. 4, the acquiring module 410 may include: a first vertical distance calculation sub-module 411, a first spatial coordinate acquisition sub-module 412, and a first space vector angle calculation sub-module 413.
- the first vertical distance calculation sub-module 411 is configured to calculate a first vertical distance of the focus to the image sensor, wherein the target object is imaged on the image sensor when the manual focus is completed;
- the first spatial coordinate obtaining sub-module 412 is configured to take the focus as the origin of a three-dimensional rectangular coordinate system and obtain, according to the first vertical distance, the first spatial coordinate of the first position at which the target object is imaged on the image sensor;
- the first space vector angle calculation sub-module 413 is configured to calculate a first space vector angle of the first vector between the focus and the first position.
- FIG. 6 is a block diagram of another auto-focusing device according to an exemplary embodiment of the present disclosure.
- This embodiment is based on the foregoing embodiment shown in FIG. 5, and the first vertical distance calculation sub-module 411 can include an image distance obtaining sub-module 4111 and a difference calculation sub-module 4112.
- the image distance obtaining sub-module 4111 is configured to obtain the image distance when the manual focus is completed;
- the difference calculation sub-module 4112 is configured to calculate a difference between the image distance and a fixed focus, using the difference as the first vertical distance of the focus to the image sensor.
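The difference computed by sub-module 4112 is the first vertical distance d1 = v1 - f. A minimal sketch, with a hypothetical function name and hypothetical millimetre values:

```python
def first_vertical_distance(image_distance, focal_length):
    """The focus lies one fixed focal length f in front of the image
    sensor, so the focus-to-sensor distance d1 is the image distance v1
    at the end of manual focus minus f."""
    return image_distance - focal_length

# Hypothetical values in millimetres: v1 = 4.5 mm, f = 4.0 mm, so d1 = 0.5 mm.
d1 = first_vertical_distance(4.5, 4.0)
```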
- FIG. 7 is a block diagram of another auto-focusing device according to an exemplary embodiment of the present disclosure.
- This embodiment is based on the foregoing embodiment shown in FIG. 5, and the first spatial coordinate obtaining sub-module 412 may include: a first two-dimensional coordinate acquisition sub-module 4121, a second two-dimensional coordinate obtaining sub-module 4122, and a first spatial coordinate determining sub-module 4123.
- the first two-dimensional coordinate acquisition sub-module 4121 is configured to acquire the first two-dimensional coordinate of the target object in a plane rectangular coordinate system whose origin is the center of the viewfinder, wherein the center of the viewfinder lies in the same normal direction as the focus;
- the second two-dimensional coordinate obtaining sub-module 4122 is configured to convert the first two-dimensional coordinate according to a preset ratio to obtain the second two-dimensional coordinate at which the target object is imaged on the image sensor;
- the first spatial coordinate determining sub-module 4123 is configured to determine, according to the second two-dimensional coordinate and the first vertical distance, the first spatial coordinate at which the target object is imaged on the image sensor, where the X-axis coordinate value of the first spatial coordinate is the X-axis coordinate value of the second two-dimensional coordinate, the Y-axis coordinate value of the first spatial coordinate is the Y-axis coordinate value of the second two-dimensional coordinate, and the Z-axis coordinate value of the first spatial coordinate is the first vertical distance.
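The three sub-modules above amount to a scale-then-lift operation. A minimal sketch, where the function name, the preset ratio, and the sample coordinates are illustrative assumptions:

```python
def first_spatial_coordinate(first_2d, scale, d1):
    """Scale the viewfinder-plane coordinate (origin at the viewfinder
    centre) by the preset viewfinder-to-sensor ratio to get the second
    two-dimensional coordinate, then take the first vertical distance d1
    as the Z-axis value of the first spatial coordinate."""
    x_vf, y_vf = first_2d
    x1, y1 = x_vf * scale, y_vf * scale  # second two-dimensional coordinate
    return (x1, y1, d1)

# Hypothetical: viewfinder coordinate (120, -80), preset ratio 0.005, d1 = 0.5
p1 = first_spatial_coordinate((120.0, -80.0), 0.005, 0.5)
```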
- In the above embodiments, using the image distance obtained when manual focus is completed, and taking the focus as the origin of the three-dimensional rectangular coordinate system, the first spatial coordinate and the first space vector angle of the target object's image on the image sensor are obtained, so that the spatial data after the target object's position changes can be calculated from the first spatial coordinate and the first space vector angle, thereby facilitating automatic focusing.
- FIG. 8 is a block diagram of another auto-focusing device according to an exemplary embodiment of the present disclosure.
- Based on the embodiment shown in FIG. 4, the detecting module 420 may include: an acceleration detection sub-module 421 and a change vector angle acquisition sub-module 422.
- the acceleration detecting sub-module 421 is configured to determine, by the acceleration data detected by the acceleration sensor, whether the viewfinder moves;
- the change vector angle acquisition sub-module 422 is configured to, when the viewfinder moves, acquire the spatial change vector angle detected by the direction sensor as the position change data.
- FIG. 9 is a block diagram of another auto-focusing device according to an exemplary embodiment of the present disclosure.
- This embodiment may be based on the foregoing embodiment shown in FIG. 8, and the first calculating module 430 may include: a first straight-line distance calculation sub-module 431, a second space vector angle calculation sub-module 432, and a second spatial coordinate calculation sub-module 433.
- the first straight-line distance calculation sub-module 431 is configured to calculate a first straight-line distance of the focus to the first position according to the first spatial coordinate;
- the second space vector angle calculation sub-module 432 is configured to calculate a second space vector angle according to the first space vector angle and the space change vector angle, where the second space vector angle is the focus and a spatial vector angle of a second vector between the second positions, the second position being a position at which the target object is imaged on the image sensor after the autofocus is completed;
- the second spatial coordinate calculation sub-module 433 is configured to calculate a second spatial coordinate of the second position according to the first linear distance and the second spatial vector angle.
- FIG. 10 is a block diagram of another auto-focusing device according to an exemplary embodiment of the present disclosure.
- the focus module 440 may include a second vertical distance obtaining sub-module 441, an adjusted image distance calculating sub-module 442, and a lens group moving sub-module 443.
- the second vertical distance obtaining submodule 441 is configured to obtain a second vertical distance from the focus to the second position according to the second spatial coordinate, wherein the second vertical distance is the Z-axis coordinate value of the second spatial coordinate;
- the adjustment image distance calculation sub-module 442 is configured to calculate a sum of the second vertical distance and a fixed focus, and use the sum as an adjusted image distance;
- the lens group moving sub-module 443 is configured to move the lens group until the distance of the lens group to the image sensor is the adjusted image distance.
- In the above embodiments, the acceleration sensor integrated in the terminal determines whether the viewfinder moves; when the viewfinder moves, the direction sensor detects the spatial change vector angle produced by the movement, so that the spatial data after the position of the target object changes can be calculated from the spatial change vector angle together with the first spatial coordinate and the first space vector angle, thereby achieving autofocus.
- FIG. 11 is a block diagram of another auto-focusing device according to an exemplary embodiment of the present disclosure.
- This embodiment may be based on the foregoing embodiment shown in FIG. 9 or FIG. 10, and the device further includes a second calculation module 450 and a correction module 460.
- the second calculation module 450 is configured to calculate a third spatial coordinate of the second position by using an image recognition algorithm;
- the correction module 460 is configured to correct the second spatial coordinate according to the third spatial coordinate to obtain the corrected second spatial coordinate.
- FIG. 12 is a block diagram of another auto-focusing apparatus according to an exemplary embodiment of the present disclosure.
- Based on the embodiment shown in FIG. 11, the correction module 460 may include: a correction threshold determination sub-module 461 and a corrected coordinate value calculation sub-module 462.
- the correction threshold determination sub-module 461 is configured to determine whether the distance between the third spatial coordinate and the second spatial coordinate is less than a preset correction threshold;
- the corrected coordinate value calculation sub-module 462 is configured to, when the distance is less than the correction threshold, calculate the average of the X-axis coordinate values of the third spatial coordinate and the second spatial coordinate as the X-axis coordinate value of the corrected second spatial coordinate, calculate the average of the Y-axis coordinate values of the third spatial coordinate and the second spatial coordinate as the Y-axis coordinate value of the corrected second spatial coordinate, and calculate the Z-axis coordinate value of the corrected second spatial coordinate according to the first straight-line distance, the corrected X-axis coordinate value of the second spatial coordinate, and the corrected Y-axis coordinate value of the second spatial coordinate.
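The correction step can be sketched as follows. The function name, threshold, and sample coordinates are hypothetical; recovering Z from the first straight-line distance via x² + y² + z² = δ² is one plausible reading of the Z-axis step, since the description does not spell out the exact expression:

```python
import math

def corrected_second_coordinate(p2, p3, delta, threshold):
    """Average the X and Y values of the sensor-model estimate p2 and the
    image-recognition estimate p3 when they agree to within `threshold`,
    then recompute Z from the first straight-line distance delta
    (assuming x^2 + y^2 + z^2 = delta^2). Otherwise keep p2 unchanged."""
    if math.dist(p2, p3) >= threshold:
        return p2  # the two estimates disagree too much; keep the original
    x = (p2[0] + p3[0]) / 2.0
    y = (p2[1] + p3[1]) / 2.0
    z = math.sqrt(max(delta * delta - x * x - y * y, 0.0))
    return (x, y, z)

# Hypothetical coordinates with delta = 3.0 and a correction threshold of 0.5
p = corrected_second_coordinate((1.0, 2.0, 2.0), (1.2, 1.8, 2.0), 3.0, 0.5)
```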
- In the above embodiments, the third spatial coordinate calculated by the image recognition algorithm is used to correct the second spatial coordinate, further improving the accuracy of autofocus.
- Correspondingly, the present disclosure also provides another auto-focusing device, the device including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to: acquire first spatial data of a target object after a user clicks on the target object in a viewfinder to complete manual focusing; acquire position change data when detecting that the framing content in the viewfinder changes; calculate second spatial data of the target object according to the first spatial data and the position change data; and perform autofocus on the target object according to the second spatial data.
- As for the device embodiments, since they basically correspond to the method embodiments, reference may be made to the corresponding parts of the method embodiments for relevant details.
- The device embodiments described above are merely illustrative. The modules described as separate components may or may not be physically separate, and the components displayed as modules may or may not be physical modules; that is, they may be located in one place, or may be distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the objectives of the present disclosure. Those of ordinary skill in the art can understand and implement the disclosure without creative effort.
- FIG. 13 is a schematic structural diagram of an apparatus 1300 for auto-focusing according to an exemplary embodiment of the present disclosure.
- device 1300 can be a mobile phone with routing functionality, a computer, a digital broadcast terminal, a messaging device, a gaming console, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
- device 1300 can include one or more of the following components: processing component 1302, memory 1304, power component 1306, multimedia component 1308, audio component 1310, input/output (I/O) interface 1313, sensor component 1314, And a communication component 1316.
- Processing component 1302 typically controls the overall operation of device 1300, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
- Processing component 1302 can include one or more processors 1320 to execute instructions to perform all or part of the steps described above.
- Processing component 1302 can include one or more modules to facilitate interaction between processing component 1302 and other components; for example, processing component 1302 can include a multimedia module to facilitate interaction between multimedia component 1308 and processing component 1302.
- Memory 1304 is configured to store various types of data to support operation at device 1300. Examples of such data include instructions for any application or method operating on device 1300, contact data, phone book data, messages, pictures, videos, and the like.
- Memory 1304 can be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read only memory (EEPROM), erasable Programmable Read Only Memory (EPROM), Programmable Read Only Memory (PROM), Read Only Memory (ROM), Magnetic Memory, Flash Memory, Disk or Optical Disk.
- Power component 1306 provides power to various components of device 1300.
- Power component 1306 can include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for device 1300.
- the multimedia component 1308 includes a screen that provides an output interface between the device 1300 and the user.
- the screen can include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen can be implemented as a touch screen to receive input signals from the user.
- the touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may sense not only the boundary of the touch or sliding action, but also the duration and pressure associated with the touch or slide operation.
- the multimedia component 1308 includes a front camera and/or a rear camera. When the device 1300 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
- the audio component 1310 is configured to output and/or input an audio signal.
- the audio component 1310 includes a microphone (MIC) that is configured to receive an external audio signal when the device 1300 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode.
- the received audio signal may be further stored in memory 1304 or transmitted via communication component 1316.
- the audio component 1310 also includes a speaker for outputting an audio signal.
- the I/O interface 1313 provides an interface between the processing component 1302 and the peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
- Sensor assembly 1314 includes one or more sensors for providing device 1300 with a status assessment of various aspects.
- Sensor assembly 1314 can detect the open/closed state of device 1300 and the relative positioning of components, such as the display and keypad of device 1300; sensor assembly 1314 can also detect a change in position of device 1300 or of a component of device 1300, the presence or absence of user contact with device 1300, the orientation or acceleration/deceleration of device 1300, and temperature changes of device 1300.
- Sensor assembly 1314 can include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
- Sensor assembly 1314 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
- the sensor component 1314 can also include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, a microwave sensor, or a temperature sensor.
- Communication component 1316 is configured to facilitate wired or wireless communication between device 1300 and other devices.
- the device 1300 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof.
- Communication component 1316 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel.
- the communication component 1316 also includes a near field communication (NFC) module to facilitate short range communication.
- the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
- Apparatus 1300 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
- In an exemplary embodiment, there is also provided a non-transitory computer readable storage medium comprising instructions, such as the memory 1304 comprising instructions executable by processor 1320 of apparatus 1300 to perform the above method.
- For example, the non-transitory computer readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
- A non-transitory computer readable storage medium: when instructions in the storage medium are executed by a processor of a terminal, the terminal is enabled to perform an autofocus method, the method comprising: acquiring first spatial data of a target object after the user clicks on the target object in the viewfinder to complete manual focusing; acquiring position change data when detecting that the framing content in the viewfinder changes; calculating second spatial data of the target object according to the first spatial data and the position change data; and performing autofocus according to the second spatial data.
Claims (19)
- An autofocus method, characterized in that the method comprises: after a user clicks on a target object in a viewfinder to complete manual focusing, acquiring first spatial data of the target object; when detecting that the framing content in the viewfinder changes, acquiring position change data; calculating second spatial data of the target object according to the first spatial data and the position change data; and performing autofocus on the target object according to the second spatial data.
- The method according to claim 1, wherein acquiring the first spatial data of the target object comprises: calculating a first vertical distance from a focus to an image sensor, wherein when the manual focusing is completed, the image of the target object is located on the image sensor; taking the focus as the origin of a three-dimensional rectangular coordinate system, obtaining, according to the first vertical distance, a first spatial coordinate of a first position at which the target object is imaged on the image sensor; and calculating a first space vector angle of a first vector between the focus and the first position.
- The method according to claim 2, wherein calculating the first vertical distance from the focus to the image sensor comprises: obtaining an image distance when the manual focusing is completed; and calculating a difference between the image distance and a fixed focal length, and taking the difference as the first vertical distance from the focus to the image sensor.
- The method according to claim 2, wherein obtaining, according to the first vertical distance, the first spatial coordinate of the first position at which the target object is imaged on the image sensor comprises: taking the center of the viewfinder as the origin of a plane rectangular coordinate system, acquiring a first two-dimensional coordinate of the target object in the plane rectangular coordinate system, wherein the center of the viewfinder lies in the same normal direction as the focus; converting the first two-dimensional coordinate according to a preset ratio to obtain a second two-dimensional coordinate at which the target object is imaged on the image sensor; and determining, according to the second two-dimensional coordinate and the first vertical distance, the first spatial coordinate at which the target object is imaged on the image sensor, wherein the X-axis coordinate value of the first spatial coordinate is the X-axis coordinate value of the second two-dimensional coordinate, the Y-axis coordinate value of the first spatial coordinate is the Y-axis coordinate value of the second two-dimensional coordinate, and the Z-axis coordinate value of the first spatial coordinate is the first vertical distance.
- The method according to claim 2, wherein acquiring the position change data when detecting that the framing content in the viewfinder changes comprises: determining, from acceleration data detected by an acceleration sensor, whether the viewfinder moves; and when the viewfinder moves, acquiring a spatial change vector angle detected by a direction sensor as the position change data.
- The method according to claim 5, wherein calculating the second spatial data of the target object according to the first spatial data and the position change data comprises: calculating a first straight-line distance from the focus to the first position according to the first spatial coordinate; calculating a second space vector angle according to the first space vector angle and the spatial change vector angle, the second space vector angle being the space vector angle of a second vector between the focus and a second position, the second position being the position at which the target object is imaged on the image sensor after the autofocus is completed; and calculating a second spatial coordinate of the second position according to the first straight-line distance and the second space vector angle.
- The method according to claim 6, wherein performing autofocus according to the second spatial data comprises: obtaining, according to the second spatial coordinate, a second vertical distance from the focus to the second position, wherein the second vertical distance is the Z-axis coordinate value of the second spatial coordinate; calculating the sum of the second vertical distance and the fixed focal length, and taking the sum as the adjusted image distance; and moving the lens group until the distance from the lens group to the image sensor equals the adjusted image distance.
- The method according to claim 6 or 7, wherein before performing autofocus according to the second spatial data, the method comprises: calculating a third spatial coordinate of the second position by an image recognition algorithm; and correcting the second spatial coordinate according to the third spatial coordinate to obtain the corrected second spatial coordinate.
- The method according to claim 8, wherein correcting the second spatial coordinate according to the third spatial coordinate to obtain the corrected second spatial coordinate comprises: determining whether the distance between the third spatial coordinate and the second spatial coordinate is less than a preset correction threshold; when it is less than the correction threshold, calculating the average of the X-axis coordinate values of the third spatial coordinate and the second spatial coordinate as the X-axis coordinate value of the corrected second spatial coordinate, and calculating the average of the Y-axis coordinate values of the third spatial coordinate and the second spatial coordinate as the Y-axis coordinate value of the corrected second spatial coordinate; and calculating the Z-axis coordinate value of the corrected second spatial coordinate according to the first straight-line distance, the corrected X-axis coordinate value of the second spatial coordinate, and the corrected Y-axis coordinate value of the second spatial coordinate.
- An autofocus device, characterized in that the device comprises: an acquiring module, configured to acquire first spatial data of a target object after a user clicks on the target object in a viewfinder to complete manual focusing; a detecting module, configured to acquire position change data when detecting that the framing content in the viewfinder changes; a first calculating module, configured to calculate second spatial data of the target object according to the first spatial data and the position change data; and a focusing module, configured to perform autofocus on the target object according to the second spatial data.
- The device according to claim 10, wherein the acquiring module comprises: a first vertical distance calculation sub-module, configured to calculate a first vertical distance from a focus to an image sensor, wherein when the manual focusing is completed, the image of the target object is located on the image sensor; a first spatial coordinate obtaining sub-module, configured to take the focus as the origin of a three-dimensional rectangular coordinate system and obtain, according to the first vertical distance, a first spatial coordinate of a first position at which the target object is imaged on the image sensor; and a first space vector angle calculation sub-module, configured to calculate a first space vector angle of a first vector between the focus and the first position.
- The device according to claim 11, wherein the first vertical distance calculation sub-module comprises: an image distance obtaining sub-module, configured to obtain an image distance when the manual focusing is completed; and a difference calculation sub-module, configured to calculate a difference between the image distance and a fixed focal length, and take the difference as the first vertical distance from the focus to the image sensor.
- The device according to claim 11, wherein the first spatial coordinate obtaining sub-module comprises: a first two-dimensional coordinate acquisition sub-module, configured to take the center of the viewfinder as the origin of a plane rectangular coordinate system and acquire a first two-dimensional coordinate of the target object in the plane rectangular coordinate system, wherein the center of the viewfinder lies in the same normal direction as the focus; a second two-dimensional coordinate obtaining sub-module, configured to convert the first two-dimensional coordinate according to a preset ratio to obtain a second two-dimensional coordinate at which the target object is imaged on the image sensor; and a first spatial coordinate determining sub-module, configured to determine, according to the second two-dimensional coordinate and the first vertical distance, the first spatial coordinate at which the target object is imaged on the image sensor, wherein the X-axis coordinate value of the first spatial coordinate is the X-axis coordinate value of the second two-dimensional coordinate, the Y-axis coordinate value of the first spatial coordinate is the Y-axis coordinate value of the second two-dimensional coordinate, and the Z-axis coordinate value of the first spatial coordinate is the first vertical distance.
- The device according to claim 11, wherein the detecting module comprises: an acceleration detection sub-module, configured to determine, from acceleration data detected by an acceleration sensor, whether the viewfinder moves; and a change vector angle acquisition sub-module, configured to, when the viewfinder moves, acquire a spatial change vector angle detected by a direction sensor as the position change data.
- The device according to claim 14, wherein the first calculating module comprises: a first straight-line distance calculation sub-module, configured to calculate a first straight-line distance from the focus to the first position according to the first spatial coordinate; a second space vector angle calculation sub-module, configured to calculate a second space vector angle according to the first space vector angle and the spatial change vector angle, the second space vector angle being the space vector angle of a second vector between the focus and a second position, the second position being the position at which the target object is imaged on the image sensor after the autofocus is completed; and a second spatial coordinate calculation sub-module, configured to calculate a second spatial coordinate of the second position according to the first straight-line distance and the second space vector angle.
- The device according to claim 15, wherein the focusing module comprises: a second vertical distance obtaining sub-module, configured to obtain, according to the second spatial coordinate, a second vertical distance from the focus to the second position, wherein the second vertical distance is the Z-axis coordinate value of the second spatial coordinate; an adjusted image distance calculation sub-module, configured to calculate the sum of the second vertical distance and the fixed focal length, and take the sum as the adjusted image distance; and a lens group moving sub-module, configured to move the lens group until the distance from the lens group to the image sensor equals the adjusted image distance.
- The device according to claim 15 or 16, wherein the device further comprises: a second calculation module, configured to calculate a third spatial coordinate of the second position by an image recognition algorithm; and a correction module, configured to correct the second spatial coordinate according to the third spatial coordinate to obtain the corrected second spatial coordinate.
- The device according to claim 17, wherein the correction module comprises: a correction threshold determination sub-module, configured to determine whether the distance between the third spatial coordinate and the second spatial coordinate is less than a preset correction threshold; and a corrected coordinate value calculation sub-module, configured to, when the distance is less than the correction threshold, calculate the average of the X-axis coordinate values of the third spatial coordinate and the second spatial coordinate as the X-axis coordinate value of the corrected second spatial coordinate, calculate the average of the Y-axis coordinate values of the third spatial coordinate and the second spatial coordinate as the Y-axis coordinate value of the corrected second spatial coordinate, and calculate the Z-axis coordinate value of the corrected second spatial coordinate according to the first straight-line distance, the corrected X-axis coordinate value of the second spatial coordinate, and the corrected Y-axis coordinate value of the second spatial coordinate.
- An autofocus device, characterized by comprising: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to: after a user clicks on a target object in a viewfinder to complete manual focusing, acquire first spatial data of the target object; when detecting that the framing content in the viewfinder changes, acquire position change data; calculate second spatial data of the target object according to the first spatial data and the position change data; and perform autofocus on the target object according to the second spatial data.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020157016842A KR101678483B1 (ko) | 2014-12-26 | 2015-04-30 | 자동 핀트 맞춤 방법, 장치, 프로그램 및 기록매체 |
RU2015129487A RU2612892C2 (ru) | 2014-12-26 | 2015-04-30 | Способ автоматической фокусировки и устройство автоматической фокусировки |
MX2015009132A MX358881B (es) | 2014-12-26 | 2015-04-30 | Método de auto-enfoque y dispositivo de auto-enfoque. |
JP2016565542A JP6348611B2 (ja) | 2014-12-26 | 2015-04-30 | 自動ピント合わせ方法、装置、プログラム及び記録媒体 |
BR112015019722A BR112015019722A2 (pt) | 2014-12-26 | 2015-04-30 | método de autofocagem e dispositivo de autofocagem |
US14/809,591 US9729775B2 (en) | 2014-12-26 | 2015-07-27 | Auto-focusing method and auto-focusing device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410832108.7A CN104469167B (zh) | 2014-12-26 | 2014-12-26 | 自动对焦方法及装置 |
CN201410832108.7 | 2014-12-26 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/809,591 Continuation US9729775B2 (en) | 2014-12-26 | 2015-07-27 | Auto-focusing method and auto-focusing device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016101481A1 true WO2016101481A1 (zh) | 2016-06-30 |
Family
ID=52914463
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2015/077963 WO2016101481A1 (zh) | 2014-12-26 | 2015-04-30 | 自动对焦方法及装置 |
Country Status (9)
Country | Link |
---|---|
US (1) | US9729775B2 (zh) |
EP (1) | EP3038345B1 (zh) |
JP (1) | JP6348611B2 (zh) |
KR (1) | KR101678483B1 (zh) |
CN (1) | CN104469167B (zh) |
BR (1) | BR112015019722A2 (zh) |
MX (1) | MX358881B (zh) |
RU (1) | RU2612892C2 (zh) |
WO (1) | WO2016101481A1 (zh) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104469167B (zh) * | 2014-12-26 | 2017-10-13 | 小米科技有限责任公司 | 自动对焦方法及装置 |
CN105100624B (zh) * | 2015-08-28 | 2019-03-01 | Oppo广东移动通信有限公司 | 一种拍摄方法及终端 |
JP6335394B2 (ja) | 2015-09-25 | 2018-05-30 | 富士フイルム株式会社 | 撮像システム及び撮像制御方法 |
CN105262954B (zh) * | 2015-11-17 | 2019-07-19 | 腾讯科技(深圳)有限公司 | 触发摄像头自动聚焦的方法和装置 |
CN106534702A (zh) * | 2016-12-22 | 2017-03-22 | 珠海市魅族科技有限公司 | 一种对焦的方法以及对焦装置 |
CN110276767B (zh) | 2019-06-28 | 2021-08-31 | Oppo广东移动通信有限公司 | 图像处理方法和装置、电子设备、计算机可读存储介质 |
CN110248096B (zh) | 2019-06-28 | 2021-03-12 | Oppo广东移动通信有限公司 | 对焦方法和装置、电子设备、计算机可读存储介质 |
CN110267041B (zh) | 2019-06-28 | 2021-11-09 | Oppo广东移动通信有限公司 | 图像编码方法、装置、电子设备和计算机可读存储介质 |
CN110660090B (zh) | 2019-09-29 | 2022-10-25 | Oppo广东移动通信有限公司 | 主体检测方法和装置、电子设备、计算机可读存储介质 |
CN110796041B (zh) | 2019-10-16 | 2023-08-18 | Oppo广东移动通信有限公司 | 主体识别方法和装置、电子设备、计算机可读存储介质 |
CN112136312A (zh) * | 2019-10-21 | 2020-12-25 | 深圳市大疆创新科技有限公司 | 一种获取目标距离的方法、控制装置及移动平台 |
CN110996003B (zh) * | 2019-12-16 | 2022-03-25 | Tcl移动通信科技(宁波)有限公司 | 一种拍照定位方法、装置及移动终端 |
WO2022151473A1 (zh) * | 2021-01-18 | 2022-07-21 | 深圳市大疆创新科技有限公司 | 拍摄控制方法、拍摄控制装置及云台组件 |
CN116547578A (zh) * | 2021-06-24 | 2023-08-04 | 深圳市大疆创新科技有限公司 | 镜头的对焦控制方法和装置、拍摄装置 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000066086A (ja) * | 1998-08-20 | 2000-03-03 | Nikon Corp | 自動焦点調節装置 |
CN1973231A (zh) * | 2004-05-07 | 2007-05-30 | 株式会社理光 | 图像获取相机 |
CN103747183A (zh) * | 2014-01-15 | 2014-04-23 | 北京百纳威尔科技有限公司 | 一种手机拍摄对焦方法 |
CN104243806A (zh) * | 2013-06-20 | 2014-12-24 | 索尼公司 | 成像装置、信息显示方法和信息处理单元 |
CN104469167A (zh) * | 2014-12-26 | 2015-03-25 | 小米科技有限责任公司 | 自动对焦方法及装置 |
Family Cites Families (49)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3513950B2 (ja) * | 1993-12-14 | 2004-03-31 | 株式会社ニコン | 像振れ補正カメラ |
JP2001235675A (ja) * | 2000-02-22 | 2001-08-31 | Eiji Kawamura | 焦点制御システム |
US6968094B1 (en) * | 2000-03-27 | 2005-11-22 | Eastman Kodak Company | Method of estimating and correcting camera rotation with vanishing point location |
US20020080257A1 (en) * | 2000-09-27 | 2002-06-27 | Benjamin Blank | Focus control system and process |
JP2002131624A (ja) * | 2000-10-25 | 2002-05-09 | Olympus Optical Co Ltd | 多点自動焦点カメラ |
CN1315098C (zh) * | 2001-05-25 | 2007-05-09 | 松下电器产业株式会社 | 广角图像生成装置 |
US20030103067A1 (en) * | 2001-12-05 | 2003-06-05 | Trell Erik Y. | Method and device for material, graphical and computer/holography-animated structural reproduction, rendition and exploration of real space elementary particle states, transitions, properties and processes |
WO2003098922A1 (en) * | 2002-05-15 | 2003-11-27 | The Board Of Governors For Higher Education, State Of Rhode Island And Providence Plantations | An imaging system and method for tracking the motion of an object |
JP4211292B2 (ja) * | 2002-06-03 | 2009-01-21 | ソニー株式会社 | 画像処理装置および画像処理方法、プログラム並びにプログラム記録媒体 |
JP3922543B2 (ja) * | 2002-06-05 | 2007-05-30 | ソニー株式会社 | 撮像装置、および画像表示装置 |
JP2005338352A (ja) * | 2004-05-26 | 2005-12-08 | Fujinon Corp | オートフォーカスシステム |
JP3829144B2 (ja) * | 2004-11-25 | 2006-10-04 | シャープ株式会社 | 合焦エリア調節カメラ付携帯端末 |
DE102004060609A1 (de) * | 2004-12-16 | 2006-06-29 | Yxlon International Security Gmbh | Verfahren zum Messen des Impulsübertragungsspektrums von elastisch gestreuten Röntgenquanten |
US7924311B2 (en) * | 2004-12-21 | 2011-04-12 | Panasonic Corporation | Camera terminal and monitoring system |
US7409149B2 (en) * | 2005-11-03 | 2008-08-05 | International Business Machines Corporation | Methods for improved autofocus in digital imaging systems |
US7627240B2 (en) * | 2006-03-27 | 2009-12-01 | Nokia Corporation | Optical device with improved autofocus performance and method related thereto |
WO2008026434A1 (fr) * | 2006-08-30 | 2008-03-06 | Nikon Corporation | Appareil et procédé d'alignement d'image, ainsi que caméra |
EP2168079B1 (en) * | 2007-01-23 | 2015-01-14 | Valeo Schalter und Sensoren GmbH | Method and system for universal lane boundary detection |
JP2009049810A (ja) * | 2007-08-21 | 2009-03-05 | Canon Inc | 撮像装置及びその制御方法及びプログラム |
JP5268433B2 (ja) * | 2008-06-02 | 2013-08-21 | キヤノン株式会社 | 撮像装置、及び撮像装置の制御方法 |
JP2009294509A (ja) * | 2008-06-06 | 2009-12-17 | Sony Corp | 3次元像表示装置 |
JP5366454B2 (ja) * | 2008-06-30 | 2013-12-11 | Canon Inc. | Optical apparatus |
US8237807B2 (en) * | 2008-07-24 | 2012-08-07 | Apple Inc. | Image capturing device with touch screen for adjusting camera settings |
JP2010050603A (ja) | 2008-08-20 | 2010-03-04 | Casio Comput Co Ltd | Imaging apparatus and program |
US8134597B2 (en) * | 2008-12-05 | 2012-03-13 | Sony Ericsson Mobile Communications Ab | Camera system with touch focus and method |
JP2011030008A (ja) * | 2009-07-27 | 2011-02-10 | Canon Inc | Imaging apparatus |
JP5574650B2 (ja) * | 2009-09-11 | 2014-08-20 | Furuno Electric Co., Ltd. | Image processing device, radar device equipped with the same, image processing method, and image processing program |
JP5654223B2 (ja) * | 2009-09-11 | 2015-01-14 | Furuno Electric Co., Ltd. | Image processing device, radar device equipped with the same, image processing method, and image processing program |
TWI413854B (zh) * | 2009-10-07 | 2013-11-01 | Altek Corp | A digital camera capable of detecting the name of the subject being used and a method thereof |
JP2011139379A (ja) | 2009-12-28 | 2011-07-14 | Canon Inc | Image processing apparatus, image processing method, and program |
CN101762871B (zh) * | 2009-12-30 | 2011-04-27 | Beijing Institute of Control Engineering | Optical system for an attitude sensor |
JP5589527B2 (ja) * | 2010-04-23 | 2014-09-17 | Ricoh Co., Ltd. | Imaging device and tracking-subject detection method |
JP5728393B2 (ja) * | 2010-06-24 | 2015-06-03 | Panasonic Corp. | Omnidirectional imaging system |
JP5594157B2 (ja) * | 2011-01-14 | 2014-09-24 | JVC Kenwood Corp. | Imaging device and imaging method |
EP2716030A1 (en) * | 2011-05-30 | 2014-04-09 | Sony Ericsson Mobile Communications AB | Improved camera unit |
KR101784523B1 (ko) | 2011-07-28 | 2017-10-11 | LG Innotek Co., Ltd. | Touch-type portable terminal |
US10099614B2 (en) | 2011-11-28 | 2018-10-16 | Magna Electronics Inc. | Vision system for vehicle |
JP5370542B1 (ja) * | 2012-06-28 | 2013-12-18 | Casio Computer Co., Ltd. | Image processing device, imaging device, image processing method, and program |
JP5409873B2 (ja) * | 2012-10-22 | 2014-02-05 | Canon Inc. | Information processing apparatus, control method therefor, program, and storage medium |
JP6271990B2 (ja) * | 2013-01-31 | 2018-01-31 | Canon Inc. | Image processing apparatus and image processing method |
KR101431373B1 (ko) * | 2013-02-26 | 2014-08-18 | Kyungpook National University Industry-Academic Cooperation Foundation | Apparatus for measuring vehicle movement using stereo matching |
JP6103526B2 (ja) * | 2013-03-15 | 2017-03-29 | Olympus Corp. | Imaging device, image display device, and display control method for image display device |
US9877011B2 (en) * | 2013-03-19 | 2018-01-23 | Hitachi Kokusai Electric Inc. | Image display apparatus and image display method |
CA2819956C (en) * | 2013-07-02 | 2022-07-12 | Guy Martin | High accuracy camera modelling and calibration method |
CN103699592B (zh) * | 2013-12-10 | 2018-04-27 | Tianjin Samsung Telecom Technology Research Co., Ltd. | Video-shooting positioning method for a portable terminal, and portable terminal |
JP2015167603A (ja) * | 2014-03-05 | 2015-09-28 | Konica Minolta Inc. | Imaging stand |
JP6415196B2 (ja) * | 2014-09-08 | 2018-10-31 | Canon Inc. | Imaging apparatus and control method therefor |
JP6445026B2 (ja) * | 2014-10-08 | 2018-12-26 | LG Electronics Inc. | Camera parameter processing method and apparatus in 3D video coding |
US9684830B2 (en) * | 2014-11-14 | 2017-06-20 | Intel Corporation | Automatic target selection for multi-target object tracking |
- 2014
- 2014-12-26 CN CN201410832108.7A patent/CN104469167B/zh active Active
- 2015
- 2015-04-30 WO PCT/CN2015/077963 patent/WO2016101481A1/zh active Application Filing
- 2015-04-30 MX MX2015009132A patent/MX358881B/es active IP Right Grant
- 2015-04-30 JP JP2016565542A patent/JP6348611B2/ja active Active
- 2015-04-30 KR KR1020157016842A patent/KR101678483B1/ko active IP Right Grant
- 2015-04-30 RU RU2015129487A patent/RU2612892C2/ru active
- 2015-04-30 BR BR112015019722A patent/BR112015019722A2/pt not_active IP Right Cessation
- 2015-07-27 US US14/809,591 patent/US9729775B2/en active Active
- 2015-11-17 EP EP15194854.4A patent/EP3038345B1/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000066086A (ja) * | 1998-08-20 | 2000-03-03 | Automatic focus adjustment device |
CN1973231A (zh) * | 2004-05-07 | 2007-05-30 | Image-capture camera |
CN104243806A (zh) * | 2013-06-20 | 2014-12-24 | Imaging device, information display method, and information processing unit |
CN103747183A (zh) * | 2014-01-15 | 2014-04-23 | Mobile phone shooting and focusing method |
CN104469167A (zh) * | 2014-12-26 | 2015-03-25 | Autofocus method and apparatus |
Also Published As
Publication number | Publication date |
---|---|
RU2015129487A (ru) | 2017-01-23 |
MX2015009132A (es) | 2016-08-17 |
EP3038345B1 (en) | 2022-09-14 |
US20160191783A1 (en) | 2016-06-30 |
US9729775B2 (en) | 2017-08-08 |
KR20160091246A (ko) | 2016-08-02 |
BR112015019722A2 (pt) | 2017-07-18 |
CN104469167B (zh) | 2017-10-13 |
MX358881B (es) | 2018-08-31 |
CN104469167A (zh) | 2015-03-25 |
RU2612892C2 (ru) | 2017-03-13 |
JP2017505466A (ja) | 2017-02-16 |
EP3038345A1 (en) | 2016-06-29 |
JP6348611B2 (ja) | 2018-06-27 |
KR101678483B1 (ko) | 2016-11-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2016101481A1 (zh) | Autofocus method and apparatus | |
CN110493526B (zh) | Image processing method, apparatus, device and medium based on multiple camera modules | |
JP6267363B2 (ja) | Method and apparatus for capturing images | |
CN108419016B (zh) | Shooting method, apparatus and terminal | |
WO2016008246A1 (zh) | Shooting parameter adjustment method and apparatus | |
CN110557547B (zh) | Lens position adjustment method and apparatus | |
EP2991336B1 (en) | Image capturing method and apparatus | |
CN106210496B (zh) | Photo shooting method and apparatus | |
JP6335289B2 (ja) | Method and apparatus for generating an image filter | |
CN105282441B (zh) | Photographing method and apparatus | |
WO2018205902A1 (zh) | Anti-shake control method and apparatus | |
EP3544286B1 (en) | Focusing method, device and storage medium | |
WO2017124899A1 (zh) | Information processing method and apparatus, and electronic device | |
WO2018133388A1 (zh) | Photographing method for a smart flying device, and smart flying device | |
CN113364965A (zh) | Multi-camera-based shooting method and apparatus, and electronic device | |
WO2018053722A1 (zh) | Panoramic photo shooting method and apparatus | |
WO2016155227A1 (zh) | Method and apparatus for displaying framing information | |
US11555696B2 (en) | Electronic terminal, photographing method and device, and storage medium | |
CN106331463B (zh) | Camera focal length magnifying method and apparatus | |
CN112866555B (zh) | Shooting method, apparatus, device and storage medium | |
CN114244999A (zh) | Autofocus method and apparatus, camera device, and storage medium | |
CN114666490A (zh) | Focusing method and apparatus, electronic device, and storage medium | |
CN107295229B (zh) | Photographing method and apparatus for a mobile terminal | |
WO2019134513A1 (zh) | Photographing focus method and apparatus, storage medium, and electronic device | |
CN106131403B (zh) | Touch focusing method and apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase | Ref document number: 20157016842; Country of ref document: KR; Kind code of ref document: A |
ENP | Entry into the national phase | Ref document number: 2016565542; Country of ref document: JP; Kind code of ref document: A |
WWE | Wipo information: entry into national phase | Ref document number: MX/A/2015/009132; Country of ref document: MX |
ENP | Entry into the national phase | Ref document number: 2015129487; Country of ref document: RU; Kind code of ref document: A |
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 15871557; Country of ref document: EP; Kind code of ref document: A1 |
REG | Reference to national code | Ref country code: BR; Ref legal event code: B01A; Ref document number: 112015019722; Country of ref document: BR |
NENP | Non-entry into the national phase | Ref country code: DE |
ENP | Entry into the national phase | Ref document number: 112015019722; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20150817 |
122 | Ep: pct application non-entry in european phase | Ref document number: 15871557; Country of ref document: EP; Kind code of ref document: A1 |