WO2022126430A1 - Auxiliary focusing method, device and system - Google Patents
Auxiliary focusing method, device and system
- Publication number
- WO2022126430A1 (PCT/CN2020/136829)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- depth distribution
- depth
- focus
- scene
- Prior art date
Classifications
- H—ELECTRICITY
  - H04—ELECTRIC COMMUNICATION TECHNIQUE
    - H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
      - H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
        - H04N23/60—Control of cameras or camera modules
          - H04N23/62—Control of parameters via user interfaces
          - H04N23/63—Control of cameras or camera modules by using electronic viewfinders
            - H04N23/633—Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
          - H04N23/67—Focus control based on electronic image sensor signals
            - H04N23/671—Focus control based on electronic image sensor signals in combination with active ranging signals, e.g. using light or sound signals emitted toward objects
        - H04N23/95—Computational photography systems, e.g. light-field imaging systems
          - H04N23/958—Computational photography systems for extended depth of field imaging
Definitions
- the present application relates to the technical field of image acquisition, and in particular, to an auxiliary focusing method, device and system.
- in some shooting scenarios, autofocus does not produce ideal results, so the user must focus the camera device manually.
- when focusing manually, the user needs to adjust the position of the focus ring according to the distance between the target object and the camera device, thereby adjusting the position of the focal point so that it falls on the plane of the target object the user wants to shoot; only then can the target object be imaged clearly by the camera device.
- the present application provides a method, device and system for assisting focusing.
- a method for assisting focusing, comprising:
- when a camera device shoots a target scene, generating an auxiliary focus image according to the depth information of each object in the target scene, the auxiliary focus image being used to show the depth distribution of the objects and the current position of the camera's focal point in the target scene;
- displaying the auxiliary focus image to a user through an interactive interface; and
- receiving the user's adjustment of the focus position, and updating the auxiliary focus image according to the adjusted focus position.
- an auxiliary focusing device, including a processor, a memory, and a computer program stored on the memory for execution by the processor, the processor implementing the following steps when executing the computer program:
- when a camera device shoots a target scene, generating an auxiliary focus image according to the depth information of each object in the target scene, the auxiliary focus image being used to show the depth distribution of the objects and the current position of the camera's focal point in the target scene;
- displaying the auxiliary focus image to a user through an interactive interface; and
- receiving the user's adjustment of the focus position, and updating the auxiliary focus image according to the adjusted focus position.
- an auxiliary focusing system, including the auxiliary focusing device mentioned in the second aspect above, a camera device, and a ranging device.
- applying the solution provided by the present application, when a user shoots a target scene with a camera device, an auxiliary focus image can be generated according to the depth information of each object in the target scene. The auxiliary focus image intuitively shows the depth distribution of the objects in the target scene and the position in the scene corresponding to the camera's focal point, so the user can intuitively see where the current focal point lies and adjust its position according to the depth distribution of the objects, allowing the objects of interest to be imaged clearly. The auxiliary focus image thus provides a reference for manual focusing, making manual focusing more convenient and efficient for the user.
- FIG. 1 is a flowchart of an auxiliary focusing method according to an embodiment of the present application.
- FIG. 2 is a schematic diagram of a depth distribution image according to an embodiment of the present application.
- FIG. 3 is a schematic diagram of a depth distribution image according to an embodiment of the present application.
- FIGS. 4(a)-4(c) are schematic diagrams showing the focus position through a depth distribution image according to an embodiment of the present application.
- FIG. 5 is a schematic diagram of a lens imaging principle according to an embodiment of the present application.
- FIGS. 6(a)-6(b) are schematic diagrams showing a focus area through a depth distribution image according to an embodiment of the present application.
- FIGS. 7(a)-7(b) are schematic diagrams of auxiliary focus images according to an embodiment of the present application.
- FIG. 8 is a schematic diagram of a depth distribution image composed of multiple layers according to an embodiment of the present application.
- FIG. 9 is a schematic diagram of an application scenario of an embodiment of the present application.
- FIG. 10 is a schematic diagram of an auxiliary focus image according to an embodiment of the present application.
- FIG. 11 is a schematic diagram of a logical structure of an auxiliary focusing device according to an embodiment of the present application.
- before capturing images, the camera device must be focused so that its focal point is on the plane of the target object the user wants to shoot, ensuring the target object is sharp in the captured image. The lens of a camera device is generally composed of multiple lens groups; by adjusting the distance between one or more of these groups and the imaging plane (i.e., the photosensitive element), the position of the focal point can be adjusted.
- the lens group used to change the focus position is called the focusing lens group;
- the position of the focal point can be changed by adjusting the position of the focusing lens group, for example moving the focal point forward or backward so that it is aimed at the target object.
- the camera device can include a focus ring
- the focus ring generally includes a distance scale, which indicates the focus ring position corresponding to different distances between the target object and the camera device; adjusting the focus ring adjusts the position of the focusing lens group, and thereby the position of the focal point.
- there are currently two focusing modes: autofocus and manual focus. With autofocus, the camera device determines the focus position by itself and automatically drives the focus ring to the corresponding position, without manual adjustment by the user.
- however, in scenes with weak light or low contrast, or in macro shooting, autofocus may not work well, so manual focusing is required.
- when focusing manually, the position of the focus ring on the camera device can be adjusted according to the distance between the target object and the camera device, thereby changing the focus position so that the objects of interest to the user are imaged clearly.
- at present, to adjust the focus position more accurately, ranging devices such as depth sensors or lidar can be used to measure the distance between each target object in the shooting scene and the camera device; these devices produce a depth image or a point cloud of the scene. In a depth image, however, the depth values must be mapped to image pixel values according to some rule, so the image cannot intuitively reflect the depth distribution of the objects in the scene.
- a point cloud, in turn, must be manually dragged around to reveal the 3D layout of the scene on a 2D screen, which is awkward to operate. Therefore, if the depth of each object in the shooting scene is shown only through the depth image or point cloud collected by the ranging device, the user cannot intuitively grasp the depth distribution of the objects, making manual focusing inconvenient.
- based on this, an embodiment of the present application provides a method for assisting focusing.
- when a user shoots a target scene with a camera device, an auxiliary focus image can be generated according to the depth information of each object in the target scene. The auxiliary focus image intuitively shows the depth distribution of the objects and the position in the scene corresponding to the camera's focal point, so the user can intuitively see where the current focal point lies and determine, from the depth distribution of the objects, how to adjust the focus position so that the objects of interest are imaged clearly.
- the assisted focusing method of the present application can be performed by the camera device itself, or by another device communicatively connected to the camera device.
- for example, in some scenarios a dedicated follow-focus device is used to adjust the focus position of the camera device;
- in such scenarios, the method may also be performed by the follow-focus device with which the camera device is equipped.
- specifically, as shown in FIG. 1 , the method includes the following steps:
- S102: When the camera device shoots a target scene, generate an auxiliary focus image according to the depth information of each object in the target scene, the auxiliary focus image being used to show the depth distribution of the objects and the current position of the camera's focal point in the target scene.
- in manual focus mode, the user needs to know the distance between the target object and the camera device and adjust the focus position according to that distance. Therefore, when the user shoots the target scene with the camera device, the depth information of each object in the target scene can be obtained first.
- the depth information of each object in the target scene can be obtained through ranging devices;
- for example, the camera device can be equipped with ranging devices such as lidar, a depth camera, a depth sensor, or an infrared rangefinder, and the depth information of each object in the target scene can be obtained through them.
- in some embodiments, the camera device may be an integrated device including a color camera and a depth camera, or a device obtained by combining the two cameras.
- an auxiliary focus image can be generated according to the depth information of each object.
- the auxiliary focus image can be various forms of images.
- for example, the auxiliary focus image can display the depth value of each object as well as the depth value of the current focus position;
- alternatively, the auxiliary focus image can display only the relative distance relationship between each object and the camera device, as long as the image intuitively shows the depth distribution of each object in the target scene and the current position of the camera's focal point in that scene.
- S104: Display the auxiliary focus image to the user through an interactive interface.
- after the auxiliary focus image is generated, it can be shown to the user through an interactive interface;
- when the auxiliary focusing method is performed by the camera device, the interactive interface can be the one provided by the camera device; when the method is performed by another device such as a follow-focus device, the interactive interface may be provided by the camera device or by the follow-focus device;
- the interactive interface may also be provided by another device communicatively connected to the camera device or the follow-focus device, which is not limited in this embodiment of the present application.
- S106: Receive the user's adjustment of the focus position, and update the auxiliary focus image according to the adjusted focus position.
- after seeing the auxiliary focus image displayed on the interactive interface, the user can intuitively see the current focus position;
- the user can determine where the object of interest is located according to the depth distribution of the objects, and then adjust the focus position according to the position of that object;
- after the user's focus adjustment instruction is received, the auxiliary focus image can be updated according to the adjusted focus position, so that the focus position is displayed in the image in real time as a reference for the user.
- in this embodiment of the present application, an auxiliary focus image is generated according to the depth information of each object in the target scene and displayed to the user through an interactive interface. The auxiliary focus image shows the depth distribution of the objects and the position of the current focal point in the target scene, so the user can intuitively see where the focal point lies in the scene and how to adjust it to the plane of the object of interest. This greatly facilitates manual focusing, improves its efficiency, and thereby improves the user experience.
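- To make the flow of steps S102-S106 concrete, the following minimal Python sketch wires the loop together. It is an illustration only: camera, rangefinder and ui stand for whatever capture, ranging and interactive-interface components an implementation provides, and every name used on them (focus_depth, capture_frame, get_depth_map, assist_mode_enabled, display, poll_focus_adjustment), as well as the render_aux_focus_image callable, is a hypothetical placeholder, not an API defined by the patent.

```python
def assisted_focus_loop(camera, rangefinder, ui, render_aux_focus_image):
    """Sketch of S102-S106: generate, display, and refresh the aux image."""
    focus_depth_m = camera.focus_depth()         # current focal-plane distance
    while ui.assist_mode_enabled():
        scene = camera.capture_frame()           # color scene image
        depth = rangefinder.get_depth_map()      # per-pixel depth of the scene
        aux = render_aux_focus_image(scene, depth, focus_depth_m)   # S102
        ui.display(aux)                          # S104: show on the interface
        adjustment = ui.poll_focus_adjustment()  # S106: user moves focus ring
        if adjustment is not None:
            focus_depth_m = adjustment           # aux image refreshes next pass
```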
- in some embodiments, the auxiliary focus image may show only the depth distribution of each object in the target scene and the position of the focal point.
- in other embodiments, the auxiliary focus image may also display the scene image of the target scene collected by the camera device. For example, when generating the auxiliary focus image, the scene image collected by the camera device can be obtained, a depth distribution image can then be generated according to the depth information of each object in the target scene to show the depth distribution of the objects, and the auxiliary focus image can finally be generated from the scene image and the depth distribution image.
- in some embodiments, the depth distribution of each object in the target scene can be shown by the projection points corresponding to each object in the depth distribution image.
- for example, the depth distribution image can be obtained by projecting each object in the target scene along a specified axis, each target object corresponding to one or more projection points in the depth distribution image, wherein the specified axis does not coincide with the optical axis of the camera device. In this way, the depth information of each object in the target scene is preserved during projection and no depth information is lost.
- for example, as shown in FIG. 2 , the target scene includes three objects: a pedestrian 22, a vehicle 23, and a house 24;
- assume the optical axis of the camera device 21 coincides with the Z-axis;
- the distances between the three objects and the camera device 21 are Z1, Z2 and Z3, respectively.
- the camera device 21 collects images of the target scene to obtain the scene image 25.
- the depth information of the target scene can be obtained through the ranging device; for example, a three-dimensional point cloud of the scene can be acquired with lidar, or a depth image of the scene with a depth camera.
- by projecting each object along the Y-axis or X-axis direction, a depth distribution image showing the depth distribution of the objects can be obtained.
- the depth distribution image 26 in FIG. 2 is obtained by projecting each object in the target scene along the Y-axis direction;
- the horizontal axis of the depth distribution image 26 represents the position distribution of each object in the X-axis direction, and its vertical axis represents the depth distance between each object and the camera device 21.
- the depth distribution image 27 in FIG. 2 is obtained by projecting each object along the X-axis direction; its horizontal axis represents the distance between each object and the camera device 21, and its vertical axis represents the position distribution of each object in the Y-axis direction.
- so that the user can view the depth distribution of the objects in the target scene more intuitively, in some embodiments the horizontal or vertical axis of the depth distribution image can be used to show the depth of the projection points corresponding to each object.
- for example, the further right (or left) a projection point lies along the horizontal axis of the image, the larger its depth value; or the further up (or down) it lies along the vertical axis, the smaller its depth value.
- in some embodiments, to let the user read off the exact depth distance between each object and the camera device, the horizontal or vertical axis of the depth distribution image may carry scale marks, each labeled with a corresponding depth value;
- the depth value of each projection point, and hence of each object, can then be determined from the labeled scale marks;
- the depth distribution image may also omit the scale, as long as it indicates which direction along the horizontal or vertical axis corresponds to increasing depth.
- the scale carried by the horizontal or vertical axis of the depth distribution image may be uniform; in some embodiments it may also be non-uniform, for example with scale marks and depth values only at the depth positions corresponding to the objects.
- in some embodiments, the vertical axis of the depth distribution image can represent the depth value of a projection point, and the horizontal axis can represent the position distribution in three-dimensional space of the object corresponding to the projection point; for example, it can indicate whether the object lies toward the left or the right (or toward the top or the bottom) along the axis that is neither the projection axis nor the depth axis.
- the attributes of each projection point in the depth distribution image can be used to represent the number of spatial three-dimensional points of each object that project onto it.
- each object in three-dimensional space can be regarded as composed of many spatial three-dimensional points;
- each projection point is obtained by projecting the three-dimensional points that share a fixed X coordinate and a fixed depth value but have different Y coordinates, so the number of three-dimensional points projected onto a projection point can be conveyed through that point's attributes.
- as shown in FIG. 3 , the target scene contains a small cuboid 32 and a large cuboid 33; the camera 31 collects images of the two objects along the Z-axis, and their depth distances from the camera 31 are Z1 and Z2, respectively.
- the depth distribution image 34 is obtained by projecting the two objects along the Y-axis direction;
- the projection point corresponding to the small cuboid 32 in the depth distribution image 34 is 341, and the projection point corresponding to the large cuboid 33 is 342.
- the vertical axis of the depth distribution image can represent the depth values of the two objects at their projection points, while the horizontal axis can represent the position distribution of the two objects along the X-axis in three-dimensional space;
- for example, an object located toward the left of the X-axis in three-dimensional space appears toward the left along the horizontal axis of the depth distribution image, and an object located toward the right appears toward the right.
- the attribute of a projection point may be any of its gray value, its color, or its shape. For example, a larger gray value of the projection point can indicate that more three-dimensional points of the object project onto it, or a darker color can indicate that more three-dimensional points project onto it.
- the grayscale value of each projected point in the depth profile image is positively correlated with the number of three-dimensional points of the object projected onto that projected point. That is, the larger the gray value of the projection point, the greater the number of three-dimensional points projected to the projection point.
- as shown in FIG. 3 , the small cuboid 32 is short in the Y-axis direction while the large cuboid 33 is tall. Therefore, for the same X-axis position, the large cuboid 33 corresponds to many spatial three-dimensional points and the small cuboid 32 to relatively few; the brightness of the projection points corresponding to the large cuboid 33 can accordingly be set higher, indicating that each of its projection points corresponds to many three-dimensional points, and the brightness of the projection points corresponding to the small cuboid 32 can be set lower, indicating fewer corresponding three-dimensional points.
- in some embodiments, the distribution range of an object's projection points along the horizontal axis of the depth distribution image is positively correlated with the object's size: the wider the range, the larger the object is along the corresponding axis. As shown in FIG. 3 , the large cuboid 33 is longer in the X-axis direction than the small cuboid 32, so in the depth distribution image obtained by projecting along the Y-axis, the projection points of the large cuboid 33 span a wider range along the horizontal axis of the image.
- of course, if the depth distribution image is obtained by projecting along the X-axis, with its horizontal axis representing the depth value and its vertical axis representing the position distribution of the objects in three-dimensional space, then since the small cuboid 32 is taller than the large cuboid 33 in the Y-axis direction, the projection points of the small cuboid 32 span a wider range along the vertical axis of the depth distribution image.
- in some embodiments, the vertical axis of the depth distribution image can represent the depth value, with the height of an object's projection points along the vertical axis positively correlated with the object's depth distance from the camera device; that is, the lower a projection point lies along the vertical axis, the closer the corresponding object is to the camera device.
- as shown in FIG. 3 , the large cuboid 33 is farther from the camera than the small cuboid 32, so in the depth distribution image obtained by projecting along the Y-axis, the projection points of the large cuboid 33 appear toward the top of the vertical axis.
- in some embodiments, the scale of the vertical axis of the depth distribution image can be obtained by quantizing the depth values of the objects in the target scene: the depth distribution range of the objects can be determined first, and the depth values can then be quantized according to the height of the depth distribution image.
- after quantization, objects located at different depths may end up with the same quantized depth value.
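- As a minimal sketch of this quantization, assuming a scene depth range [z_near, z_far] and a convention that nearer objects are drawn lower in the image (both illustrative assumptions, not requirements of the patent), depths can be mapped to the rows of the depth distribution image as follows; note how distinct depths can collapse onto the same row, as described above:

```python
import numpy as np

def depth_to_row(depth_m, z_near, z_far, height):
    """Quantize metric depth to a row index; nearer objects land lower in the
    image (larger row index), matching the convention described above."""
    t = (np.asarray(depth_m, dtype=float) - z_near) / (z_far - z_near)
    rows = (height - 1) - np.round(t * (height - 1)).astype(int)
    return np.clip(rows, 0, height - 1)
```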
- the corresponding position of the focal point of the camera in the target scene can be shown by the depth distribution image.
- the depth corresponding to the focus of the camera device may be identified in the depth distribution image by specifying a marker.
- the depth value corresponding to the focus can be marked on the depth distribution image.
- as shown in FIG. 4(a) , the focus position can be indicated by a straight line pointing to the depth value corresponding to the focal point in the depth distribution image (the black line in the figure);
- alternatively, as shown in FIG. 4(b) , the projection points of the objects on the focal plane can be rendered in a different color (the black projection points in the figure), so that the focus position can be identified.
- the current position of the focus may also be displayed through the scene image, for example, the object on the plane where the focus is located may be identified in the scene image.
- as shown in FIG. 4(c) , the object corresponding to the focal point can be highlighted in the scene image (the object marked by the black box in the figure); by relating the scene image to the depth distribution image, the depth of the focus position can be determined.
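- A FIG. 4(a)-style focus marker can be drawn in a few lines; the sketch below reuses the hypothetical depth_to_row() helper from the quantization sketch above and assumes a grayscale depth distribution image:

```python
import numpy as np

def mark_focus_line(dist_img, focus_depth_m, z_near, z_far, value=255):
    """Draw a horizontal line across the depth distribution image at the row
    corresponding to the focal-plane depth."""
    row = depth_to_row(focus_depth_m, z_near, z_far, dist_img.shape[0])
    dist_img[row, :] = value      # line at the depth of the focal plane
    return dist_img
```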
- generally, as shown in FIG. 5 , after the camera device has focused, not only the objects on the focal plane but also objects within a certain distance range in front of and behind the focal point are imaged clearly; this range is the camera's depth of field, and the region it covers can be called the focus area. The depth distribution image can also show the current focus area of the camera device.
- objects located in the focus area are imaged clearly by the camera device;
- by displaying the focus area in the depth distribution image, it is convenient for the user to adjust the focus position so as to bring one or more objects of interest into the focus area.
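- The patent does not give formulas for the extent of the focus area; as one plausible illustration, the standard thin-lens depth-of-field approximations (via the hyperfocal distance) can be used to compute its near and far limits:

```python
def focus_area_limits(focus_dist_m, focal_len_mm, f_number, coc_mm=0.03):
    """Return (near_m, far_m) limits of acceptable sharpness; far may be inf."""
    f = focal_len_mm / 1000.0              # focal length in meters
    c = coc_mm / 1000.0                    # circle of confusion in meters
    H = f * f / (f_number * c) + f         # hyperfocal distance
    s = focus_dist_m
    near = H * s / (H + (s - f))
    far = float("inf") if s >= H else H * s / (H - (s - f))
    return near, far
```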
- there are several ways to show the focus area in the depth distribution image: for example, the depth range corresponding to the focus area may be marked, or a marquee may be used to frame the focus area in the depth distribution image.
- as shown in FIG. 6(a) , a marquee can frame the focus area in the depth distribution image, and the objects inside the marquee can all be imaged clearly.
- in some embodiments, the projection points of the objects located in the focus area may also be marked in the depth distribution image, so that by combining the marked projection points with the scene image, the user can determine which objects in the current target scene can be imaged clearly.
- for example, the projection points of the objects in the focus area may be rendered in a specified color (such as the black projection points in FIG. 6(b) ); in some embodiments, marquees, characters, or other marks can also be placed around these projection points so that the user can identify them.
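- Continuing the sketches above (a grayscale depth distribution image, the depth_to_row() helper, and near/far limits as from the depth-of-field sketch, all illustrative assumptions), the projection points inside the focus area could be rendered in a specified color like this:

```python
import numpy as np

def highlight_focus_area(dist_img, near_m, far_m, z_near, z_far,
                         color=(255, 0, 0)):
    """Paint projection points whose depth lies inside the focus area."""
    far_m = min(far_m, z_far)                 # clamp an infinite far limit
    h = dist_img.shape[0]
    rgb = np.stack([dist_img] * 3, axis=-1)   # grayscale -> RGB copy
    top = depth_to_row(far_m, z_near, z_far, h)      # far limit sits higher
    bottom = depth_to_row(near_m, z_near, z_far, h)  # near limit sits lower
    band = slice(int(top), int(bottom) + 1)
    mask = dist_img[band, :] > 0              # nonzero pixels = projections
    rgb[band, :][mask] = color                # render them the given color
    return rgb
```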
- since the depth distribution image can display both the focus position and the focus area, the user can intuitively see whether the target object of interest currently lies in the focus area, or whether the focal point has been adjusted to the plane where that object is located.
- in some embodiments, one or more target objects of interest to the user may also be identified from the scene image collected by the camera device, and the projection points of those target objects may then be marked in the depth distribution image. In this way, the user can tell from the depth distribution image whether the object of interest can currently be imaged clearly and, if not, how to adjust the focus position.
- in some embodiments, the user can select the target objects of interest by entering a selection instruction in the interactive interface; for example, the user can frame-select or tap one or more objects in the scene image displayed on the interface as objects of interest.
- in some embodiments, the target objects of interest can also be identified automatically by the device performing the auxiliary focusing method; for example, objects of a specified type, such as human faces, living bodies, or objects occupying more than a certain proportion of the frame, can be recognized automatically as objects of interest, and their projection points can then be marked in the depth distribution image.
- in some embodiments, when generating the auxiliary focus image from the scene image and the depth distribution image, the two images may be spliced side by side to obtain the auxiliary focus image (as shown in FIG. 4(a) );
- the splicing may be top-bottom or left-right, as long as a single image can simultaneously display the scene image of the target scene, the depth distribution of each object, and the focus position of the camera device, which is not limited in this application.
- in some embodiments, the depth distribution image may be an image with a certain transparency;
- when generating the auxiliary focus image, the depth distribution image may be superimposed on the scene image (as in FIGS. 7(a) and 7(b)).
- the depth distribution image can be superimposed on an area of the scene image with few objects so that the objects in the scene image are not occluded.
- the size of the depth distribution image may be the same as that of the scene image (as shown in FIG. 7(a) ), and in some embodiments it may also be smaller (as shown in FIG. 7(b) );
- for example, when the depth distribution image is superimposed on the scene image, making it smaller than the scene image means only a small area of the scene image is overlapped, avoiding blocking its content.
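- A minimal alpha-blending sketch of this superimposition, assuming both inputs are RGB arrays and that the transparency alpha and the overlay corner are free illustrative parameters:

```python
import numpy as np

def overlay_depth_distribution(scene_rgb, dist_rgb, alpha=0.6, corner=(0, 0)):
    """Blend a semi-transparent depth distribution image onto the scene image."""
    out = scene_rgb.astype(np.float32)             # working copy of the scene
    h, w = dist_rgb.shape[:2]
    y, x = corner                                  # top-left of the overlay
    roi = out[y:y + h, x:x + w]
    roi[:] = alpha * dist_rgb + (1.0 - alpha) * roi  # alpha blend in place
    return out.astype(scene_rgb.dtype)
```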
- the depth distribution image may include only one layer, and the projection points corresponding to all objects in the target scene are displayed in one layer.
- the depth distribution image may also include multiple layers, and each layer may display a projection point corresponding to an object (as shown in FIG. 8 ) . In this way, the projection points of different objects can be located in different layers and will not be stacked together, which is convenient for users to view.
- each layer may also display multiple projection points of objects with relatively close depth distances.
- if multiple layers are used, they can be staggered, and their order can be determined by the degree of attention the corresponding objects receive; for example, the layers of objects the user cares about more can be arranged in front, and the layers of objects of less interest behind.
- since the depth distribution image shows the depth distribution of the objects by means of projection points, it is not easy for the user to tell from the depth distribution image alone which object a projection point belongs to, so the scene image can be used in combination to determine this.
- in some embodiments, to help the user associate the projection points in the depth distribution image with the objects in the scene image, a target pixel point in the scene image and a target projection point in the depth distribution image corresponding to the same object can be determined when generating the auxiliary focus image, and the two can be displayed in association in the auxiliary focus image, so that the user can quickly identify, through the auxiliary focus image, which object in the scene image a projection point corresponds to.
- for example, when displaying the target pixel point and target projection point of the same object in association, selection boxes of the same color may frame them both;
- since the scene image is a color image, the target projection point can also be rendered in the color of the target pixel point (i.e., of the object), so that the pixel points and projection points of the same object can be associated by color. The same character may also be marked next to both; any implementation that links an object's pixel points in the scene image with its projection points in the depth distribution image will do, and this application does not limit it.
- the viewing angle range corresponding to the scene image is consistent with the viewing angle range corresponding to the depth distribution image.
- the depth distribution image can be generated according to the depth image collected by the depth camera.
- the camera device and the depth camera can be fixed on the same gimbal at the same time.
- both cameras rotate with the gimbal, so the viewing ranges of the images they collect change together as the gimbal rotates; in this way, the content displayed in the scene image and in the depth distribution image both change as the gimbal turns.
- in some implementations, the viewing angle range of the scene image may be only a part of that of the depth distribution image;
- for example, the viewing angle of the depth camera may be fixed so that it captures the depth image of the entire target scene, while the camera device captures images of only some objects in the scene;
- in this way, when the viewing angle of the camera device moves, the scene image changes while the content of the depth distribution image stays fixed;
- with this arrangement, when the subject is a moving object, the depth distribution image can also clearly show the subject's movement through the whole scene.
- in some embodiments, the auxiliary focus image may be displayed only after the user turns on the auxiliary focus mode;
- when the mode is off, the interactive interface displays only the scene image collected by the camera device, which is convenient for viewing the captured picture;
- after an instruction from the user to turn on the auxiliary focus mode is received, the auxiliary focus image is displayed on the interactive interface so that the user can adjust the focus position according to it. To further explain the auxiliary focusing method provided by the embodiments of the present application, a specific embodiment is described below.
- as shown in FIG. 9 , which illustrates an application scenario of an embodiment of the present application, the camera device is an integrated device including a depth camera 91 and a color camera 92, and the relative position parameters of the two cameras can be calibrated in advance.
- the camera device also includes an interactive interface for displaying the scene image captured by the color camera in real time. After the camera device starts, the color camera captures the scene image and the depth camera captures the depth image of the scene; then, according to the relative position parameters of the two cameras, the coordinate systems of the two images can be unified, for example both to the coordinate system of the color camera, or both to that of the depth camera.
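- As a sketch of this coordinate unification, assuming the pre-calibrated relative position parameters are given as a rotation matrix R and translation vector t from the depth camera frame to the color camera frame (an illustrative convention, not one the patent specifies):

```python
import numpy as np

def depth_points_to_color_frame(points_xyz, R, t):
    """Transform (N, 3) points from the depth camera frame to the color
    camera frame: X_color = R @ X_depth + t, applied to the whole batch."""
    R = np.asarray(R, dtype=float)
    t = np.asarray(t, dtype=float)
    return points_xyz @ R.T + t
```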
- after the coordinate systems of the two images are unified, the depth distribution image of the objects in the scene can be generated from the depth image. Assume the Z-axis direction in three-dimensional space is the depth direction, the X-axis direction coincides with the X-axis of the depth image, and the Y-axis direction coincides with the Y-axis of the depth image.
- the depth distribution image can be regarded as an image obtained by projecting each object in the scene along the Y-axis direction.
- the X-axis of the depth distribution image can correspond to the X-axis of the depth image, that is, it represents the position distribution of the objects along the X-axis in three-dimensional space; the larger an object is in the X-axis direction in three-dimensional space, the larger its image along the X-axis of the depth distribution image.
- the Y-axis of the depth distribution image can represent the depth of each object in the scene.
- the Y-axis of the depth distribution image can carry scales, and each scale can identify a corresponding depth value, wherein the scales can be uniformly distributed or non-uniformly distributed.
- to generate the depth distribution image from the depth image, each column of the depth image can be traversed: according to the depth value of each pixel in the column and the Y-axis scale of the depth distribution image, points are drawn on the depth distribution image to form the depth distribution of that column. Because the depth values and the Y-axis coordinates of the depth distribution image require a scale conversion, pixels at different positions may share the same quantized depth value after the depths are quantized according to the image height.
- when drawing the depth distribution image, the gray value of each of its pixels represents how frequently three-dimensional points of the objects occur in that column at that depth: the brighter a pixel of the depth distribution image, the more numerous and dense the points in space at that depth in that column.
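- The per-column traversal just described can be sketched as follows, reusing the hypothetical depth_to_row() quantization helper from earlier; the gray value of each output pixel counts how many depth-map pixels of that column fall at that quantized depth:

```python
import numpy as np

def build_depth_distribution(depth_map, z_near, z_far, height):
    """depth_map: (H, W) metric depths; returns a (height, W) grayscale image."""
    _, w = depth_map.shape
    valid = np.isfinite(depth_map) & (depth_map > 0)   # skip holes/invalid
    safe = np.where(valid, depth_map, z_near)          # placeholder depths
    rows = depth_to_row(safe, z_near, z_far, height)   # quantize each pixel
    dist = np.zeros((height, w), dtype=np.float32)
    for col in range(w):                               # traverse the columns
        r = rows[valid[:, col], col]
        np.add.at(dist[:, col], r, 1.0)                # count occurrences
    if dist.max() > 0:
        dist *= 255.0 / dist.max()                     # brighter = denser
    return dist.astype(np.uint8)
```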
- the auxiliary focus image can be obtained according to the depth distribution image and the scene image.
- the depth distribution image may be an image with a certain transparency, and the depth distribution image may be superimposed on the scene image to obtain an auxiliary focus image, and then the auxiliary focus image is displayed on the interactive interface of the camera device.
- FIG. 10 shows an auxiliary focus image generated from the scene image collected by the color camera and the depth distribution image.
- to let the user intuitively see the current focus position, the position of the focal point can also be marked on the depth distribution image;
- the depth value pointed to by the arrowed line in FIG. 10 is the depth value corresponding to the focus position;
- the focus area can also be marked on the depth distribution image, for example by framing it, as in the area selected by the white box in FIG. 10 , or by rendering the pixels of the depth distribution image corresponding to the objects in the focus area in a specified color.
- through the auxiliary focus image, the user can see the current focus position and adjust it according to the position of the objects of interest; after the user adjusts the focus position, the auxiliary focus image is updated according to the new position.
- a control can be provided in the interactive interface of the camera device for turning the auxiliary focus mode on or off;
- when the auxiliary focus mode is on, the interactive interface displays the auxiliary focus image;
- when it is off, the interface displays only the scene image captured by the color camera.
- with the auxiliary focusing method provided by the embodiments of the present application, the user can intuitively see the depth distribution of each object in the scene, the focus position, and the focus area from the auxiliary focus image, and can therefore adjust the focus position to bring the objects of interest into the focus area so that they are imaged clearly.
- This method can facilitate the user to focus and improve the focus efficiency.
- the present application also provides an auxiliary focusing device.
- as shown in FIG. 11 , the auxiliary focusing device 110 includes a processor 111, a memory 112, and a computer program stored on the memory 112 for execution by the processor 111;
- the processor 111 implements the following steps when executing the computer program:
- when the camera device shoots a target scene, generating an auxiliary focus image according to the depth information of each object in the target scene, the auxiliary focus image being used to show the depth distribution of the objects and the current position of the camera's focal point in the target scene;
- displaying the auxiliary focus image to the user through an interactive interface;
- receiving the user's adjustment of the focus position, and updating the auxiliary focus image according to the adjusted focus position.
- in some embodiments, when the processor is configured to generate the auxiliary focus image according to the depth information of each object in the target scene, it is specifically configured to:
- obtain the scene image of the target scene collected by the camera device;
- generate a depth distribution image according to the depth information of each object in the target scene, the depth distribution image being used to show the depth distribution of the objects;
- generate the auxiliary focus image according to the scene image and the depth distribution image.
- the depth distribution of an object is shown by its corresponding projection points in the depth distribution image, the projection points being obtained by projecting the object along a specified axis that does not coincide with the optical axis of the camera device.
- the horizontal or vertical axis of the depth distribution image is used to show the depth of the projection points.
- the horizontal axis or the vertical axis of the depth distribution image carries scales, and each scale is marked with a corresponding depth value.
- the scale is a non-uniform scale.
- the vertical axis of the depth distribution image represents the depth value of a projection point, the horizontal axis represents the position distribution in space of the object corresponding to the projection point, and the attributes of the projection point are used to characterize the number of spatial three-dimensional points of the object projected onto it.
- the property of the projected point includes any of the following: the gray value of the projected point, the color of the projected point, or the shape of the projected point.
- the grayscale value of the projection point is positively related to the number of spatial three-dimensional points corresponding to the object projected to the projection point.
- the distribution range of the projection points corresponding to the object in the horizontal axis direction of the depth distribution image is positively correlated with the size of the object.
- the height of the distribution position of the projection point corresponding to the object in the direction of the longitudinal axis of the depth distribution image is positively correlated with the distance between the object and the camera device.
- the scale of the vertical axis of the depth distribution image is obtained by quantifying the depth value of the object.
- the position in the scene corresponding to the focal point of the camera device is shown by the depth distribution image.
- when the processor is configured to show the position of the camera's focal point in the scene in the depth distribution image, it is specifically configured to:
- identify the depth corresponding to the camera's focal point in the depth distribution image with a specified marker.
- the depth distribution image is further used to show the focus area of the camera, wherein the object located in the focus area is clearly imaged in the camera.
- when the processor is configured to show the focus area of the camera device in the depth distribution image, it is specifically configured to:
- frame-select the focus area in the depth distribution image; or
- identify, in the depth distribution image, the projection points corresponding to the objects located in the focus area.
- when the processor is configured to identify, in the depth distribution image, the projection points corresponding to the objects located in the focus area, it is specifically configured to:
- render the projection points corresponding to the objects in the focus area in a specified color.
- the processor is further configured to:
- determine a target object that the user is interested in from the scene image;
- identify the projection points corresponding to the target object in the depth distribution image.
- when the processor is configured to determine the target object that the user is interested in from the scene image, it is specifically configured to:
- identify an object of a specified type from the scene image as the target object.
- the depth distribution image includes a plurality of layers, each used to display the projection points corresponding to one object.
- the multiple layers are arranged in a staggered manner, and the arrangement order of the multiple layers is determined based on the attention degree of the objects corresponding to the layers.
- the processor is further configured to:
- determine, in the scene image and the depth distribution image respectively, a target pixel point and a target projection point corresponding to the same object;
- display the target pixel point and the target projection point in association in the auxiliary focus image.
- when the processor is configured to display the target pixel point and the target projection point in association in the auxiliary focus image, it is specifically configured to:
- mark the same character at positions adjacent to the target projection point and the target pixel point.
- the depth distribution image is an image with a certain transparency
- when the processor is configured to generate the auxiliary focus image according to the scene image and the depth distribution image, it is specifically configured to:
- the depth distribution image is superimposed on the scene image to generate the auxiliary focus image.
- the size of the depth distribution image is the same as the size of the scene image.
- the size of the depth distribution image is smaller than that of the scene image.
- when the processor is configured to generate the auxiliary focus image according to the scene image and the depth distribution image, it is specifically configured to:
- the depth distribution image and the scene image are stitched side by side to generate the auxiliary focus image.
- the viewing angle range corresponding to the scene image is consistent with that corresponding to the depth distribution image.
- the viewing angle range corresponding to the scene image is a part of the viewing angle range corresponding to the depth distribution image.
- before the processor is configured to display the auxiliary focus image to the user, it is further configured to:
- An instruction input by the user is received, where the instruction is used to instruct to turn on the auxiliary focus mode.
- the present application further provides an auxiliary focusing system, including the auxiliary focusing device, the camera device and the ranging device mentioned in the above embodiments.
- the auxiliary focusing system further includes a pan/tilt, and the camera device and the ranging device are fixed to the pan/tilt.
- an embodiment of the present specification further provides a computer storage medium storing a program which, when executed by a processor, implements the auxiliary focusing method of any of the foregoing embodiments.
- Embodiments of the present specification may take the form of a computer program product embodied on one or more storage media having program code embodied therein, including but not limited to disk storage, CD-ROM, optical storage, and the like.
- Computer-usable storage media includes permanent and non-permanent, removable and non-removable media, and storage of information can be accomplished by any method or technology.
- Information may be computer readable instructions, data structures, modules of programs, or other data.
- Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), Flash Memory or other memory technology, Compact Disc Read Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, Magnetic tape cassettes, magnetic tape magnetic disk storage or other magnetic storage devices or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
Abstract
An auxiliary focusing method, device and system. When a user shoots a target scene with a camera device, an auxiliary focus image can be generated according to the depth information of each object in the target scene. The auxiliary focus image intuitively shows the depth distribution of the objects in the target scene and the position in the scene corresponding to the focal point of the camera device, so that the user can intuitively see where the current focal point lies and adjust its position according to the depth distribution of the objects, allowing the objects of interest to be imaged clearly. The auxiliary focus image thus provides a reference for manual focusing, making manual focusing more convenient and efficient for the user.
Description
本申请涉及图像采集技术领域,具体而言,涉及一种辅助对焦方法、装置及系统。
在一些拍摄场景中,采用自动对焦的方式对摄像装置进行对焦的效果不太理想,因而需要通过用户手动对焦的方式对摄像装置进行对焦。用户在手动对焦时,需要根据目标对象与摄像装置的距离调整对焦环的位置,以便调整焦点的位置,使焦点对准用户想要拍摄的目标对象所在的平面,从而目标对象在摄像装置中才能清晰成像。为了方便用户调整焦点的位置,有必要提供一种辅助用户手动对焦的方案。
发明内容
有鉴于此,本申请提供一种辅助对焦方法、装置及系统。
根据本申请的第一方面,提供一种辅助对焦方法,所述方法包括:
在摄像装置对目标场景进行拍摄的情况下,根据目标场景中各对象的深度信息生成辅助对焦图像,所述辅助对焦图像用于展示所述对象的深度分布以及所述摄像装置的焦点当前在所述目标场景中的位置;
将所述辅助对焦图像通过交互界面展示给用户;
接收用户对所述焦点的位置的调整,根据调整后的焦点的位置更新所述辅助对焦图像
根据本申请的第二方面,提供一种辅助对焦装置,所述辅助对焦装置包括处理器、存储器、存储在所述存储器上可供所述处理器执行的计算机程序,所述处理器执行所述计算机程序时实现以下步骤:
在摄像装置对目标场景进行拍摄的情况下,根据目标场景中各对象的 深度信息生成辅助对焦图像,所述辅助对焦图像用于展示所述对象的深度分布以及所述摄像装置的焦点当前在所述目标场景中的位置;
将所述辅助对焦图像通过交互界面展示给用户;
接收用户对所述焦点的位置的调整,根据调整后的焦点的位置更新所述辅助对焦图像。
根据本申请的第三方面,提供一种辅助对焦辅助系统,所述系统包括如上述第二方面提及的辅助对焦装置、摄像装置以及测距装置
应用本申请提供的方案,在用户使用摄像装置对目标场景进行拍摄时,可以根据目标场景中的各对象的深度信息生成辅助对焦图像,通过辅助对焦图像可以直观地展示目标场景各对象的深度分布以及摄像装置的焦点在目标场景中对应的位置,从而用户可以从辅助对焦图像中直观的了解当前焦点所在的位置,以及根据各对象的深度分布调整焦点的位置,使得用户感兴趣的对象可以清晰成像,通过辅助对焦图像为用户手动对焦提供参考,方便用户手动对焦,提升用户手动对焦的效率。
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1是本申请一个实施例的辅助对焦方法的流程图。
图2是本申请一个实施例的一种深度分布图像的示意图。
图3是本申请一个实施例的一种深度分布图像的示意图。
图4(a)-4(c)是本申请一个实施例的一种通过深度分布图像展示焦点位置的示意图。
图5是本申请一个实施例的镜头成像原理的示意图。
图6(a)-6(b)是本申请一个实施例的通过深度分布图像展示对焦区域的示意图。
图7(a)-7(b)是本申请一个实施例的辅助对焦图像的示意图。
图8是本申请一个实施例的深度分布图像由多个图层构成的示意图。
图9是本申请一个实施例的应用场景的示意图。
图10是本申请一个实施例的辅助对焦图像的示意图。
图11是本申请一个实施例的辅助对焦装置的逻辑结构的示意图。
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
在采用摄像装置采集图像之前,需要对摄像装置进行对焦处理,使摄像装置的焦点对准用户想要拍摄的目标对象所在的平面,以确保采集的图像中目标对象是清晰的。摄像装置的镜头一般由多组镜片组成,通过调整其中的一组或多组镜片组与成像平面(即感光元件)的距离,即可以调整焦点的位置,其中,用于改变焦点位置的镜片组,称为对焦镜组,通过调整对焦镜组的位置可以改变焦点的位置,比如将焦点前移或后移,使其对准目标对象。通常,摄像装置可以包括对焦环,对焦环一般包括标尺,标尺可以指示目标对象与摄像装置的距离不同时对焦环所对应的位置,通过调节对焦环的位置即可以调节对焦镜组的位置,进而改变焦点的位置。
目前,对焦方式有两种,自动对焦和手动对焦。自动对焦可以由摄像装置自行确定焦点的位置,并自动驱动对焦环调整到对应的位置,无需用 户手动调节。但是,在拍摄场景光线较弱、拍摄场景反差较小、微距拍摄等场景,使用自动对焦的效果可能不太理想,因而需要采用手动对焦的方式。
在进行手动对焦时,可以根据目标对象与摄像装置的距离调整摄像装置上的对焦环的位置,从而改变焦点的位置,使得用户感兴趣的对象可以清晰成像。目前,为了更加准确地调整焦点的位置,可以采用一些测距装置测量拍摄场景中的各目标对象和摄像装置的距离,比如深度传感器、激光雷达等,通过这些测距装置可以得到拍摄场景的深度图像或者点云等,对于深度图像,深度值需要按一定规律映射为图像像素值,不能直观的反应场景中各对象的深度分布情况。而点云则需要人为拖动,才能在2D屏幕上看出3D景物的分布,不易操作。所以,仅通过测距装置采集的深度图像或点云展示拍摄场景中各对象的深度,用户无法直观地了解各对象的深度分布情况,不方便用户手动对焦。
基于此,本申请实施例提供一种辅助对焦的方法,在用户使用摄像装置对目标场景进行拍摄时,可以根据目标场景中的各对象的深度信息生成辅助对焦图像,通过辅助对焦图像可以直观地展示目标场景各对象的深度分布以及摄像装置的焦点在目标场景中对应的位置,从而用户可以从辅助对焦图像中直观的了解当前焦点所在的位置,以及根据各对象的深度分布确定该如何调整焦点的位置,使得用户感兴趣的对象可以清晰成像。
本申请的辅助对焦方法可以由于摄像装置执行,也可以由与摄像装置通信连接的其他设备执行,比如,在某些场景中,可以采用专门的跟焦设备调节摄像装置的焦点位置,这种场景下,该方法也可以由所述摄像装置配备的跟焦设备执行。
具体的,所述方法如图1所示,包括以下步骤:
S102、在摄像装置对目标场景进行拍摄的情况下,根据目标场景中各对象的深度信息生成辅助对焦图像,所述辅助对焦图像用于展示所述对象的深度分布以及所述摄像装置的焦点当前在所述目标场景中的位置;
手动对焦的模式下,用户需知道目标对象与摄像装置的距离,根据距离调整焦点的位置,所以,在用户采用摄像装置对目标场景进行拍摄时,可以先获取目标场景中各对象的深度信息,其中,目标场景各对象的深度信息可以通过一些测距装置获取,比如,可以给摄像装置配备激光雷达、深度相机、深度传感器、红外测距仪等测距装置,通过这些测距装置获取目标场景中各对象的深度信息。比如,在某些实施例中,摄像装置可以是包括彩色相机和深度相机的一体化设备,或者有两中相机组合得到的设备。
获取到各对象的深度信息后,可以根据各对象的深度信息生成辅助对焦图像,辅助对焦图像可以是各种形式的图像,比如,辅助对焦图像可以显示各对象的深度值,以及当前焦点所在位置的深度值,辅助对焦图像也可以只仅显示各对象与摄像装置的距离远近关系,只要通过该图像可以直观地展示目标场景中各对象的深度分布以及当前时刻摄像装置的焦点在该目标场景的位置即可。
S104、将所述辅助对焦图像通过交互界面展示给用户;
在生成辅助对焦图像后,可以通过交互界面将辅助对焦图像展示给用户,在该辅助对焦的方法由摄像装置执行的场景,交互界面可以是摄像装置提供的交互界面,在该辅助对焦的方法由其他设备(比如跟焦设备)执行的场景,该交互界面可以是摄像装置提供的交互界面,也可以是跟焦设备的提供的交互界面。当然,在一些场景中,交互界面也可以是与摄像装置或者跟焦设备通信连接的其他设备提供的交互界面,本申请实施例不作限制。
S106、接收用户对所述焦点的位置的调整,根据调整后的焦点的位置更新所述辅助对焦图像。
用户在看到交互界面展示的辅助对焦图像后,即可以直观地看到当前焦点所在的位置,用户可以根据各对象的深度分布确定自己感兴趣的对象所处的位置,然后根据自己感兴趣的对象所处的位置调整焦点的位置。在接收到用户对焦点位置的调整指令后,可以根据调整后的焦点的位置更新 辅助对焦图像,以便在图像中实时显示焦点的位置,为用户调整焦点位置提供参考。
本申请实施例通过根据目标场景各对象的深度信息生成辅助对焦图像,并通过交互界面展示给用户,利用辅助对焦图像展示目标场景中各对象的深度分布以及当前焦点在目标场景中的位置,这样用户根据辅助对象图像便可以直观地知道当前焦点位于场景的哪个位置,以及要将焦点调整至感兴趣对象所在平面应该如何调整,为用户手动对焦提供了极大的便利,提高了用户手动对焦的效率,从而提升了用户体验。
在某些实施例中,辅助对焦图像可以仅展示目标场景中各对象的深度分布以及焦点的位置。在某些实施例中,辅助对焦图像也可以同时展示摄像装置采集的目标场景的场景图像,比如,在生成辅助对焦图像时,可以获取摄像装置采集的目标场景的场景图像,然后根据目标场景中各对象的深度信息生成深度分布图像,深度分布图像可以用于展示各对象的深度分布,然后根据场景图像和深度分布图像生成辅助对焦图像。
在某些实施例中,目标场景中各对象的深度分布可以通过各对象在深度分布图像上对应的投影点展示。比如,深度分布图像可以通过将目标场景中各对象在指定轴向上投影得到,每个目标对象可以对应深度分布图像上的一个或者多个投影点,其中,指定轴向与摄像装置的光轴轴向不重合。这样,在投影过程中,便可以保留目标场景中各对象的深度信息,避免深度信息损失。
举个例子,如图2所示,目标场景中包括行人22、车辆23以及房子24三个对象,假设摄像装置21的光轴轴向与Z轴重合,上述三个对象与摄像装置21的距离分别为Z1、Z2和Z3,摄像装置21对目标场景进行图像采集得到场景图像25,同时可以通过测距装置获取目标场景的深度信息,比如通过激光雷达获取目标场景的三维点云或者通过深度相机获取的目标场景的深度图像,然后根据三维点云或者深度图像确定将目标场景中各对象在非Z轴方向上进行投影得到的展示各对象深度分布的深度分布图 像。比如,可以将各对象在Y轴或X轴方向上投影,即可以得到可以展示各对象深度分布的深度分布图像。如图2中的深度分布图像26即为将目标场景中各对象沿Y轴方向投影得到,深度分布图像26的横轴表示各对象在X轴方向上的位置分布情况,深度分布图像26的纵轴表示各对象的与摄像装置21的深度距离。如图2中的深度分布图像27即为将目标场景中各对象沿X轴方向投影得到,深度分布图像27的横轴表示各对象与摄像装置21的距离,深度分布图像27的纵轴表示各对象在Y轴方向上的位置分布情况。
为了用户可以更直观地查看目标场景中各对象的深度分布情况,在某些实施例中,深度分布图像的横轴或者纵轴可以用于展示各对象对应的投影点的深度。比如,投影点位于图像横轴越右边(或左边)的位置,则说明该投影点的深度值越大,或者投影点位于图像纵轴越上方(或下方)的位置,则说明投影点的深度值越小。
在某些实施例中,为了方便用户知道目标场景中各对象与摄像装置的深度距离的具体数值,深度分布图像的横轴或纵轴可以携带刻度,每个刻度标识有对应的深度值,通过各个刻度标识的深度值,即可以确定投影点的深度值,从而确定各对象的深度值。当然,深度分布图像也可以不携带刻度,只要标识出横轴或者纵轴的哪个方向表示深度增大的方向即可。
在某些实施例中,深度分布图像的横轴或者纵轴携带的刻度可以是均匀的刻度。在某些实施例中,深度分布图像的横轴或者纵轴携带的刻度也可以是非均匀的刻度,比如,只在各对象所对应的深度位置标识刻度和深度值。
在某些实施例中,深度分布图像的纵轴可以用于表示投影点的深度值,深度分布图像的横轴可以表示投影点对应的对象在三维空间中的位置分布,比如,可以表示投影点对应的对象是在三维空间中非投影轴向且非深度轴向上偏左还是偏右的位置(或者是偏上还是偏下的位置)。深度分布图像中各投影点的属性可以用于表征投影到该投影点的各对象对应的空间三 维点的数量,比如,三维空间的各对象可以看成由许多空间三维点构成,深度图像上的每个投影点可以由固定的X坐标、固定的深度值下,不同Y坐标对应的三维点投影得到,因而可以通过投影点的属性来表示投影到该投影点的空间三维点数量。
如图3所示,目标场景中有一个小长方体32和一个大长方体33,摄像装置31沿着Z轴的方向采集两个对象的图像,两个对象与摄像装置31的深度距离分别为Z1和Z2,其中,深度分布图像34通过将两个对象沿着Y轴方向投影得到,小长方体32在深度分布图像34上对应的投影点为341,大长方体33在深度分布图像34上对应的投影点为342,其中,可以用深度分布图像的纵轴表示两个对象在深度分布图像对应的投影点的深度值,而深度分布图像的横轴则可以表示两个对象在三维空间中X轴向上的位置分布情况,比如,位于三维空间X轴偏左边的物体,则在深度分布图像横轴方向偏左边的位置,位于三维空间X轴偏右边的物体,则在深度分布图像横轴方向偏右边的位置。
在某些实施例中,投影点的属性可以是投影点的灰度值、投影点的颜色或投影点的形状中的任一种。比如,投影点的灰度值越大表示投影到该投影点的对象的三维点的数量越多,或者投影点的颜色越深表示投影到该投影点的三维点的数量越多。
在某些实施例中,深度分布图像中各投影点的灰度值与投影到该投影点的对象的三维点的数量正相关。即投影点的灰度值越大,则表示投影到该投影点的三维点的数量越多。
如图3所示,小长方体32在Y轴方向上的高度较小,而大长方体33在Y轴方向上的高度较高,因而,针对X轴同一位置,大长方体33对应的空间三维点数量较多,小长方体22对应的空间三维点数量较少,因而可以将大长方体33对应的投影点的亮度值设置的大一些,表示每个投影点对应的三维点数量较多,将小长方体32对应的投影点的亮度值设置的小一些,表示其对应的三维点数量较少。
在某些实施例中,各对象对应的投影点在深度分布图像横轴方向上的分布范围与各对象的尺寸正相关,比如,各对象在深度分布图像横轴方向上的分布范围越宽,表示该对象在对应轴向上的尺寸越大。如图3所示,相比于小长方体32,大长方体33在X轴方向的长度更长,因而沿着Y轴投影得到的深度分布图像上,大长方体33对应的投影点在图像的横轴方向上分布范围也更宽。当然,如果沿X轴投影得到深度分布图像,采用深度分布图像的横轴表示深度值,纵轴表示投影点对应的对象在三维空间的位置分布情况,则小长方体32相比与大长方体33在Y轴方向的高度更高,因而小长方32体对应的投影点在深度分布图像的纵轴方向上分布范围更宽。
在某些实施例中,可以采用深度分布图像的纵轴表示深度值,各对象对应的投影点在深度分布图像纵轴方向上的分布位置的高度与各对象与摄像装置深度距离正相关,即位于深度分布图像纵轴方向上越下方的位置的投影点所对应的对象与摄像装置的距离越近。如图3所示,相比于小长方体32,大长方体33与摄像装置的距离较远,因而沿着Y轴投影得到的深度分布图像上,大长方体33对应的投影点在深度分布图像纵轴靠上方位置。
在某些实施例中,深度分布图像的纵轴的刻度可以通过对目标场景中各对象的深度值进行量化得到。比如,可以确定目标场景中各对象的深度分布范围,然后根据深度分布图像的高度,对目标场景中各对象的深度值进行量化。当然,在一些实施例中,将深度值按照深度分布图像的高度量化后,位于不同深度的对象,量化后的深度值也可能相同。
在某些实施例中,摄像装置的焦点在目标场景中对应的位置可以通过所述深度分布图像展示。其中,通过深度分布图像展示焦点的位置的方式可以有多种,比如,在某些实施例中,通可以通过指定标识在深度分布图像中标识摄像装置的焦点对应的深度。比如,可以在深度分布图像上标注焦点对应的深度值,如图4(a)所示,可以在深度分布图像中通过一条指 向焦点对应的深度值的直线指示焦点的位置(如图中的黑色线条),或者在深度分布图像上将焦点所在平面的对象标识出来,如图4(b)所示,可以将焦点所在平面的对象对应的投影点标识成不同的颜色(如图中的黑色投影点),从而可以确定焦点的位置。当然,在某些实施例中,也可以通过场景图像展示焦点当前的位置,比如,可以在场景图像中标识焦点所在平面上的对象。如图4(c)所示,可以在场景图像中显示焦点对应的对象(如图中黑框标识的对象),通过将场景图像和深度分布图像对应,即可以确定焦点所在位置的深度。
通常而言,如图5所示,摄像装置在完成对焦后,不仅仅是焦点所在平面的物体在摄像装置中的成像是清晰的,焦点前后一定距离范围内的物体在摄像装置中的成像也是清晰的,这个距离范围即为摄像装置的景深。当摄像装置的焦点位置确定后,焦点前后一定距离范围内的对象在摄像装置中的成像都是清晰的,我们可以称景深范围对应的区域为对焦区域。为了让用户可以直观地知道当前目标场景中哪些对象在摄像装置的成像是清晰的,在某些实施例中,深度分布图像还可以展示摄像装置当前的对焦区域,对焦区域即为位于所述摄像装置景深范围内的区域,位于对焦区域内的对象在摄像装置中的成像是清晰的。通过在深度分布图像中显示对焦区域,方便用户调整焦点的位置,以便将感兴趣的一个或者多个对象调整至对焦区域内。
在深度分布图像展示对焦区域的方式也可以有多种,比如,在某些实施例中,可以在深度图像中标识对焦区域对应的深度范围,或者可以采用选框在深度分布图像中框选出对焦区域。如图6(a)所示,可以用选框在深度分布图像中框选出对焦区域,位于选框内的对象都可以清晰成像。某些实施例中,也可以在深度分布图像中标识出位于对焦区域的对象对应的投影点,以便用户结合深度分布图像中标识出的投影点以及场景图像便可以确定当前目标场景中的哪些对象可以清晰成像。
在深度分布图像中标识出位于对焦区域的对象对应的投影点的方式可 以有多种,比如,在一些实施例中,可以将位于对焦区域的对象对应的投影点渲染成指定颜色,(比如图6(b)中的黑色投影点),在一些实施例中,也可以在位于对焦区域内的对象对应的投影点周边标注选框、字符或者其他标识,以便用户可以识别出这些投影点。
由于深度分布图像可以显示焦点位置和对焦区域,为了让用户可以直观地看到感兴趣的目标对象当前是否位于对焦区域,或者焦点是否调整到了感兴趣的目标对象所在的平面。在一些实施例中,还可以从摄像装置采集的场景图像中识别出用户感兴趣的一个或者多个目标对象,然后在深度分布图像中标识出这些目标对象对应的投影点。这样用户根据深度分布图像便可以清楚的知道自己感兴趣的对象当前是否可以清晰成像,如果无法清晰成像,应如何调整焦点的位置。
在一些实施例中,用户可以通过在交互界面输入选择指令选择自己感兴趣的目标对象,比如,用户可以在交互界面显示的场景图像中框选或者点击一个或者多个对象作为自己感兴趣的对象。在一些实施例中,也可以由执行该辅助对焦方法的设备自动识别出用户感兴趣的目标对象,比如,可以自动识别出指定类型的对象,比如、人脸、活体或者在画面中占比大于一定阈值的对象,作为用户感兴趣的对象,然后在深度分布图像中标识出这些对象对应的投影点。
在某些实施例中,在根据场景图像和深度分布图像生成辅助对焦图像时,可以将场景图像和深度分布图像并排拼接,得到辅助对焦图像(如图4(a)所示),其中,拼接形式可以是上下拼接,左右拼接,只要在一个图像中可以同时显示目标场景的场景图像和目标场景中各对象的深度分布以及摄像装置的焦点位置即可,本申请不作限制。
在某些实施例中,深度分布图像可以是具有一定透明度的图像,在根据场景图像和深度分布图像生成辅助对焦图像时,可以将深度分布图像叠加到场景图像上,以生成辅助对焦图像(如图7(a)和7(b))。比如,可以将深度分布图像叠加到场景图像中的对象较少的区域,这样便不会遮挡 场景图像中的各对象。
在一些实施例中,深度分布图像的尺寸可以和场景图像的尺寸一致(如图7(a)),在一些实施例中,深度分布图像的尺寸也可以小于场景图像的尺寸(如图7(b)),比如在将深度分布图像叠加到场景图像的场景,深度分布图像可以小于场景图像的尺寸,这样便只重叠场景图像的小部分区域,避免遮挡场景图像中的内容。
在一些实施例中,深度分布图像可以只包括一个图层,在一个图层中显示目标场景中的所有对象对应的投影点。在一些实施例中,为了方便用户从深度分布图像中区分出不同的对象,深度分布图像也可以包括多个图层,每个图层可以显示一个对象对应的投影点(如图8所示)。这样不同对象的投影点可以位于不同图层,不会堆叠到一起,方便用户查看。当然,在一些实施例中,也可以每个图层显示多个深度距离比较接近的对象的投影点。
在一些实施例中,如果采用多个图层显示目标场景中各对象对应的投影点,多个图层可以错开排布,多个图层的排布顺序可以基于各图像对应的对象的被关注度确定。比如,可以将用户比较关注的对象对应的图层排在前后,用户不感兴趣的对象对应的图层排在后面。
Of course, since the depth distribution image presents the depth distribution of the objects by way of projection points, it is not easy for the user to tell from the depth distribution image alone which object a projection point corresponds to, so the scene image can be consulted to determine this. In some embodiments, to make it convenient for the user to associate the projection points in the depth distribution image with the objects in the scene image of the target scene, when generating the auxiliary focus image, the target pixels in the scene image and the target projection points in the depth distribution image that correspond to the same object may be determined, and the target pixels and target projection points corresponding to the same object displayed in association in the auxiliary focus image, so that the user can quickly identify from the auxiliary focus image which object in the scene image a projection point in the depth distribution image corresponds to.
In some embodiments, when displaying in association the target pixels and target projection points corresponding to the same object, selection boxes of the same color may frame the target projection points and the target pixels. Alternatively, since the scene image is a color image, the target projection points may be rendered in the color of the target pixels (that is, of the object), so that the pixels and projection points of the same object are associated by color. The same characters may also be marked at positions adjacent to the target projection points and the target pixels to associate the two. Many concrete implementations are possible; any scheme that maps an object's pixels in the scene image to its projection points in the depth distribution image will do, and the present application places no limitation on this.
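As one possible realization of the color-based association above, the projection points could be painted with the object's mean color; this is a sketch under the assumption that binary masks for the object's pixels and projection points are available, and the names `color_link`, `obj_mask`, and `proj_mask` are illustrative:

```python
import numpy as np

def color_link(scene_bgr, obj_mask, dist_bgr, proj_mask):
    """Associate an object's pixels and its projection points by
    color: paint the projection points with the object's mean color.
    obj_mask / proj_mask: boolean masks over the two images."""
    mean_color = scene_bgr[obj_mask].mean(axis=0)
    out = dist_bgr.copy()
    out[proj_mask] = mean_color.astype(np.uint8)
    return out
```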
In some embodiments, the viewing angle range corresponding to the scene image coincides with that of the depth distribution image. For example, the depth distribution image may be generated from depth images captured by a depth camera, and the camera apparatus and the depth camera may both be fixed on the same gimbal; the two cameras rotate with the gimbal, and the viewing angle ranges of the images they capture change together as the gimbal rotates. In this way, as the gimbal rotates, the content displayed in the scene image and the content displayed in the depth distribution image both change. In some embodiments, the viewing angle range corresponding to the scene image may instead be only a part of that corresponding to the depth distribution image. For example, the viewing angle range of the depth camera may be fixed, capturing depth images of the entire target scene, while the camera apparatus captures images of only some of the objects in the target scene; the scene image then changes as the view of the camera apparatus moves, while the content displayed in the depth distribution image remains fixed. With this approach, when the subject being shot is a moving object, the depth distribution image can also clearly show the subject's motion across the entire scene.
In some embodiments, the auxiliary focus image may be displayed only after the user enables the auxiliary focusing mode. For example, when the user has not enabled the auxiliary focusing mode, the interactive interface displays only the scene image captured by the camera apparatus, making it convenient for the user to view the captured image; upon receiving an instruction input by the user to enable the auxiliary focusing mode, the interactive interface displays the auxiliary focus image, making it convenient for the user to adjust the focus position according to the auxiliary focus image. To further explain the auxiliary focusing method provided by the embodiments of the present application, a specific embodiment is described below.
FIG. 9 is a schematic diagram of an application scenario of one embodiment of the present application. The camera apparatus is an integrated device comprising a depth camera 91 and a color camera 92, and the relative position parameters of the two cameras can be calibrated in advance. The camera apparatus further comprises an interactive interface for displaying in real time the scene images captured by the color camera. After the camera apparatus starts, the color camera can capture scene images of the scene, and the depth camera can capture depth images of the scene. The coordinate systems of the two kinds of images can then be unified according to the relative position parameters of the two cameras, for example both unified into the coordinate system of the color camera, or both into the coordinate system of the depth camera.
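A minimal sketch of such a coordinate unification is given below, assuming the pre-calibrated extrinsics are a rotation `R` and translation `t` such that `p_color = R @ p_depth + t`, and that `K_depth` is the depth camera's intrinsic matrix; all names are assumptions of this illustration:

```python
import numpy as np

def unify_coordinates(depth_map, K_depth, R, t):
    """Back-project a depth map to 3-D points in the depth-camera
    frame, then move them into the color-camera frame using the
    pre-calibrated extrinsics (R, t): p_color = R @ p_depth + t."""
    h, w = depth_map.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_map.reshape(-1).astype(np.float64)
    valid = z > 0                                # drop empty pixels
    pix = np.stack([u.reshape(-1)[valid] * z[valid],
                    v.reshape(-1)[valid] * z[valid],
                    z[valid]], axis=1)
    pts_depth = pix @ np.linalg.inv(K_depth).T   # depth-camera frame
    return pts_depth @ R.T + t                   # color-camera frame
```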
After the coordinate systems of the two kinds of images are unified, the depth distribution image of the objects in the scene can be generated from the depth image. Suppose the Z axis of three-dimensional space is the depth direction, the X axis coincides with the X axis of the depth image, and the Y axis coincides with the Y axis of the depth image. The depth distribution image can be regarded as the image obtained by projecting the objects in the scene along the Y axis. The X axis of the depth distribution image can correspond to the X axis of the depth image, that is, it represents the position distribution of the objects along the X axis in three-dimensional space; the larger an object is along the X axis in three-dimensional space, the larger its image along the X axis of the depth distribution image.
The Y axis of the depth distribution image can represent the depth of the objects in the scene. The Y axis can carry a scale, each graduation marking a corresponding depth value; the graduations may be uniformly or non-uniformly distributed. When generating the depth distribution image from the depth image, each column of the depth image can be traversed, and the depth value of every pixel in that column plotted onto the depth distribution image according to the Y-axis scale, forming the depth distribution of that column of pixels. Because the depth values and the Y-axis coordinates of the depth distribution image require a scale conversion, pixels at different positions may share the same quantized depth value after the depth values are quantized by image height. When drawing the depth distribution image, the gray value of each of its pixels represents the frequency with which three-dimensional points of objects occur in that column at that depth: the brighter a pixel of the depth distribution image, the more numerous and dense the spatial points at that column and depth.
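Putting the column traversal and the quantization together, the generation step described above might look like the following sketch; `depth_distribution_image` and its parameters are illustrative names, and the bottom-is-near orientation is an assumption carried over from the earlier discussion:

```python
import numpy as np

def depth_distribution_image(depth_map, z_min, z_max, out_height):
    """Build the depth distribution image: for every column of the
    depth map, count how many pixels fall into each quantized depth
    bin; the count becomes the brightness at (bin, column)."""
    h, w = depth_map.shape
    dist = np.zeros((out_height, w), dtype=np.float32)
    span = max(z_max - z_min, 1e-9)
    for col in range(w):                      # traverse each column
        z = depth_map[:, col]
        z = z[z > 0]                          # skip empty pixels
        rows = np.clip(((z - z_min) / span * (out_height - 1))
                       .astype(np.int32), 0, out_height - 1)
        np.add.at(dist[:, col], rows, 1.0)    # count points per bin
    dist = np.flipud(dist)                    # nearer depths at bottom
    # Normalize counts to 0-255 gray values.
    return (255 * dist / max(dist.max(), 1)).astype(np.uint8)
```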
After the depth distribution image is generated, the auxiliary focus image can be obtained from the depth distribution image and the scene image. The depth distribution image may be an image with a certain degree of transparency and may be overlaid on the scene image to obtain the auxiliary focus image, which is then displayed on the interactive interface of the camera apparatus. FIG. 10 shows an auxiliary focus image generated from the scene image captured by the color camera and the depth distribution image.
To let the user intuitively know the current focus position, the focus position can also be marked on the depth distribution image; in FIG. 10, the depth value pointed to by the arrowed line is the depth value corresponding to the focus position. Of course, the in-focus region can also be marked on the depth distribution image, for example by framing the in-focus region with a selection box in the depth distribution image, as in the region framed by the white box in FIG. 10, or by rendering the pixels corresponding to objects inside the in-focus region in the depth distribution image in a specified color.
Through the auxiliary focus image, the user can know the current focus position and adjust the focus position according to where the objects of interest are located; after the user adjusts the focus position, the auxiliary focus image is updated according to the new focus position.
Of course, a control can be provided on the interactive interface of the camera apparatus for enabling or disabling the auxiliary focusing mode: when the auxiliary focusing mode is enabled, the interactive interface displays the auxiliary focus image; when it is disabled, the interactive interface displays only the scene image captured by the color camera.
With the auxiliary focusing method provided by the embodiments of the present application, the user can intuitively learn from the auxiliary focus image the depth distribution of the objects in the scene, the focus position, and the in-focus region, and can accordingly adjust the focus position so that the objects of interest fall within the in-focus region and are imaged sharply. This method makes focusing convenient for the user and improves focusing efficiency.
Correspondingly, the present application further provides an auxiliary focusing apparatus. As shown in FIG. 11, the auxiliary focusing apparatus 110 comprises a processor 111, a memory 112, and a computer program stored on the memory 112 and executable by the processor 111; when executing the computer program, the processor 111 implements the following steps:
when a camera apparatus is shooting a target scene, generating an auxiliary focus image according to depth information of objects in the target scene, the auxiliary focus image being used to show the depth distribution of the objects and the current position of the focus of the camera apparatus in the target scene;
displaying the auxiliary focus image to a user through an interactive interface;
receiving the user's adjustment of the position of the focus, and updating the auxiliary focus image according to the adjusted position of the focus.
In some embodiments, when generating the auxiliary focus image according to the depth information of the objects in the target scene, the processor is specifically configured to:
acquire a scene image of the target scene captured by the camera apparatus;
generate a depth distribution image according to the depth information of the objects in the target scene, the depth distribution image being used to show the depth distribution of the objects;
generate the auxiliary focus image according to the scene image and the depth distribution image.
In some embodiments, the depth distribution of an object is shown by the object's corresponding projection points in the depth distribution image, the projection points being obtained by projecting the object along a specified axis, the specified axis not coinciding with the optical axis of the camera apparatus.
In some embodiments, the horizontal or vertical axis of the depth distribution image is used to show the depth of the projection points.
In some embodiments, the horizontal or vertical axis of the depth distribution image carries a scale, each graduation marking a corresponding depth value.
In some embodiments, the scale is non-uniform.
In some embodiments, the vertical axis of the depth distribution image represents the depth values of the projection points, the horizontal axis of the depth distribution image represents the spatial position distribution of the objects corresponding to the projection points, and an attribute of a projection point is used to characterize the number of spatial three-dimensional points of the object projected onto that projection point.
In some embodiments, the attribute of a projection point includes any one of: the gray value of the projection point, the color of the projection point, or the shape of the projection point.
In some embodiments, the gray value of a projection point is positively correlated with the number of spatial three-dimensional points of the object projected onto the projection point.
In some embodiments, the distribution range of an object's projection points along the horizontal axis of the depth distribution image is positively correlated with the size of the object.
In some embodiments, the height of the position of an object's projection points along the vertical axis of the depth distribution image is positively correlated with the distance between the object and the camera apparatus.
In some embodiments, the scale of the vertical axis of the depth distribution image is obtained by quantizing the depth values of the objects.
In some embodiments, the position in the scene corresponding to the focus of the camera apparatus is shown through the depth distribution image.
In some embodiments, when showing in the depth distribution image the position of the focus of the camera apparatus in the scene, the processor is specifically configured to:
mark, with a designated marker in the depth distribution image, the depth corresponding to the focus of the camera apparatus.
In some embodiments, the depth distribution image is further used to show the in-focus region of the camera apparatus, wherein objects located in the in-focus region are imaged sharply by the camera apparatus.
In some embodiments, when showing the in-focus region of the camera apparatus in the depth distribution image, the processor is specifically configured to:
frame the in-focus region in the depth distribution image with a selection box; or
mark, in the depth distribution image, the projection points corresponding to objects located in the in-focus region.
In some embodiments, when marking in the depth distribution image the projection points corresponding to objects located in the in-focus region, the processor is specifically configured to:
render the projection points corresponding to objects in the in-focus region in the depth distribution image in a specified color.
In some embodiments, the processor is further configured to:
determine, from the scene image, a target object of interest to the user;
mark, in the depth distribution image, the projection points corresponding to the target object.
In some embodiments, when determining from the scene image the target object of interest to the user, the processor is specifically configured to:
determine the target object of interest from the scene image based on a selection instruction input by the user through the interactive interface; or
recognize objects of a specified type from the scene image as the target object.
In some embodiments, the depth distribution image comprises multiple layers, each layer being used to display the depth distribution of the projection points corresponding to one object.
In some embodiments, the multiple layers are arranged in a staggered fashion, and the ordering of the multiple layers is determined based on the degree of attention paid to the objects corresponding to the layers.
In some embodiments, the processor is further configured to:
determine target pixels in the scene image and target projection points in the depth distribution image that correspond to the same object;
display the target pixels and the target projection points in association in the auxiliary focus image.
In some embodiments, when displaying the target pixels and the target projection points in association in the auxiliary focus image, the processor is specifically configured to:
frame the target pixels and the target projection points with selection boxes of the same color; or
render the color of the target projection points as the color corresponding to the target pixels; or
mark the same characters at positions adjacent to the target projection points and the target pixels.
In some embodiments, the depth distribution image is an image with a certain degree of transparency, and, when generating the auxiliary focus image according to the scene image and the depth distribution image, the processor is specifically configured to:
overlay the depth distribution image on the scene image to generate the auxiliary focus image.
In some embodiments, the size of the depth distribution image is the same as that of the scene image; or
the size of the depth distribution image is smaller than that of the scene image.
In some embodiments, when generating the auxiliary focus image according to the scene image and the depth distribution image, the processor is specifically configured to:
stitch the depth distribution image and the scene image side by side to generate the auxiliary focus image.
In some embodiments, the viewing angle range corresponding to the scene image coincides with that of the depth distribution image; or
the viewing angle range corresponding to the scene image is a part of that of the depth distribution image.
In some embodiments, before displaying the auxiliary focus image to the user, the processor is further configured to:
receive an instruction input by the user, the instruction being used to indicate enabling of the auxiliary focusing mode.
Further, the present application also provides an auxiliary focusing system, comprising the auxiliary focusing apparatus mentioned in the above embodiments, a camera apparatus, and a distance-measuring apparatus.
In some embodiments, the auxiliary focusing system further comprises a gimbal, and the camera apparatus and the distance-measuring apparatus are fixed on the gimbal.
Correspondingly, the embodiments of this specification further provide a computer storage medium storing a program which, when executed by a processor, implements the auxiliary focusing method of any of the above embodiments.
The embodiments of this specification may take the form of a computer program product implemented on one or more storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing program code. Computer-usable storage media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to: phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
As for the apparatus embodiments, since they substantially correspond to the method embodiments, reference may be made to the description of the method embodiments for relevant details. The apparatus embodiments described above are merely illustrative; units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solutions of the embodiments. Those of ordinary skill in the art can understand and implement them without creative effort.
It should be noted that, herein, relational terms such as first and second are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. The terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a..." does not exclude the existence of additional identical elements in the process, method, article, or device comprising that element.
The method and apparatus provided by the embodiments of the present invention have been described in detail above. Specific examples have been used herein to illustrate the principles and implementations of the present invention, and the description of the above embodiments is intended only to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementations and the scope of application based on the idea of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.
Claims (58)
- 1. An auxiliary focusing method, characterized in that the method comprises: when a camera apparatus is shooting a target scene, generating an auxiliary focus image according to depth information of objects in the target scene, the auxiliary focus image being used to show the depth distribution of the objects and the current position of the focus of the camera apparatus in the target scene; displaying the auxiliary focus image to a user through an interactive interface; and receiving the user's adjustment of the position of the focus, and updating the auxiliary focus image according to the adjusted position of the focus.
- 2. The method according to claim 1, characterized in that generating the auxiliary focus image according to the depth information of the objects in the target scene comprises: acquiring a scene image of the target scene captured by the camera apparatus; generating a depth distribution image according to the depth information of the objects in the target scene, the depth distribution image being used to show the depth distribution of the objects; and generating the auxiliary focus image according to the scene image and the depth distribution image.
- 3. The method according to claim 2, characterized in that the depth distribution of an object is shown by the object's corresponding projection points in the depth distribution image, the projection points being obtained by projecting the object along a specified axis, the specified axis not coinciding with the optical axis of the camera apparatus.
- 4. The method according to claim 3, characterized in that the horizontal or vertical axis of the depth distribution image is used to show the depth of the projection points.
- 5. The method according to claim 4, characterized in that the horizontal or vertical axis of the depth distribution image carries a scale, each graduation marking a corresponding depth value.
- 6. The method according to claim 5, characterized in that the scale is non-uniform.
- 7. The method according to any one of claims 4-6, characterized in that the vertical axis of the depth distribution image represents the depth values of the projection points, the horizontal axis of the depth distribution image represents the spatial position distribution of the objects corresponding to the projection points, and an attribute of a projection point is used to characterize the number of spatial three-dimensional points of the object projected onto that projection point.
- 8. The method according to claim 7, characterized in that the attribute of a projection point includes any one of: the gray value of the projection point, the color of the projection point, or the shape of the projection point.
- 9. The method according to claim 8, characterized in that the gray value of a projection point is positively correlated with the number of spatial three-dimensional points of the object projected onto the projection point.
- 10. The method according to claim 7, characterized in that the distribution range of an object's projection points along the horizontal axis of the depth distribution image is positively correlated with the size of the object.
- 11. The method according to claim 7, characterized in that the height of the position of an object's projection points along the vertical axis of the depth distribution image is positively correlated with the distance between the object and the camera apparatus.
- 12. The method according to any one of claims 7-11, characterized in that the scale of the vertical axis of the depth distribution image is obtained by quantizing the depth values of the objects.
- 13. The method according to any one of claims 2-12, characterized in that the position in the scene corresponding to the focus of the camera apparatus is shown through the depth distribution image.
- 14. The method according to claim 13, characterized in that showing in the depth distribution image the position of the focus of the camera apparatus in the scene comprises: marking, with a designated marker in the depth distribution image, the depth corresponding to the focus of the camera apparatus.
- 15. The method according to any one of claims 2-14, characterized in that the depth distribution image is further used to show the in-focus region of the camera apparatus, wherein objects located in the in-focus region are imaged sharply by the camera apparatus.
- 16. The method according to claim 15, characterized in that showing the in-focus region of the camera apparatus in the depth distribution image comprises: framing the in-focus region in the depth distribution image with a selection box; or marking, in the depth distribution image, the projection points corresponding to objects located in the in-focus region.
- 17. The method according to claim 16, characterized in that marking in the depth distribution image the projection points corresponding to objects located in the in-focus region comprises: rendering the projection points corresponding to objects in the in-focus region in the depth distribution image in a specified color.
- 18. The method according to any one of claims 2-17, characterized by further comprising: determining, from the scene image, a target object of interest to the user; and marking, in the depth distribution image, the projection points corresponding to the target object.
- 19. The method according to claim 18, characterized in that determining from the scene image the target object of interest to the user comprises: determining the target object of interest from the scene image based on a selection instruction input by the user through the interactive interface; or recognizing objects of a specified type from the scene image as the target object.
- 20. The method according to any one of claims 2-19, characterized in that the depth distribution image comprises multiple layers, each layer being used to display the depth distribution of the projection points corresponding to one object.
- 21. The method according to claim 20, characterized in that the multiple layers are arranged in a staggered fashion, and the ordering of the multiple layers is determined based on the degree of attention paid to the objects corresponding to the layers.
- 22. The method according to any one of claims 2-21, characterized by further comprising: determining target pixels in the scene image and target projection points in the depth distribution image that correspond to the same object; and displaying the target pixels and the target projection points in association in the auxiliary focus image.
- 23. The method according to claim 22, characterized in that displaying the target pixels and the target projection points in association in the auxiliary focus image comprises: framing the target pixels and the target projection points with selection boxes of the same color; or rendering the color of the target projection points as the color corresponding to the target pixels; or marking the same characters at positions adjacent to the target projection points and the target pixels.
- 24. The method according to any one of claims 2-23, characterized in that the depth distribution image is an image with a certain degree of transparency, and generating the auxiliary focus image according to the scene image and the depth distribution image comprises: overlaying the depth distribution image on the scene image to generate the auxiliary focus image.
- 25. The method according to claim 24, characterized in that the size of the depth distribution image is the same as that of the scene image; or the size of the depth distribution image is smaller than that of the scene image.
- 26. The method according to any one of claims 2-23, characterized in that generating the auxiliary focus image according to the scene image and the depth distribution image comprises: stitching the depth distribution image and the scene image side by side to generate the auxiliary focus image.
- 27. The method according to any one of claims 2-26, characterized in that the viewing angle range corresponding to the scene image coincides with that of the depth distribution image; or the viewing angle range corresponding to the scene image is a part of that of the depth distribution image.
- 28. The method according to any one of claims 1-27, characterized by, before displaying the auxiliary focus image to the user, further comprising: receiving an instruction input by the user, the instruction being used to indicate enabling of the auxiliary focusing mode.
- 29. An auxiliary focusing apparatus, characterized in that the auxiliary focusing apparatus comprises a processor, a memory, and a computer program stored on the memory and executable by the processor, the processor implementing the following steps when executing the computer program: when a camera apparatus is shooting a target scene, generating an auxiliary focus image according to depth information of objects in the target scene, the auxiliary focus image being used to show the depth distribution of the objects and the current position of the focus of the camera apparatus in the target scene; displaying the auxiliary focus image to a user through an interactive interface; and receiving the user's adjustment of the position of the focus, and updating the auxiliary focus image according to the adjusted position of the focus.
- 30. The apparatus according to claim 29, characterized in that, when generating the auxiliary focus image according to the depth information of the objects in the target scene, the processor is specifically configured to: acquire a scene image of the target scene captured by the camera apparatus; generate a depth distribution image according to the depth information of the objects in the target scene, the depth distribution image being used to show the depth distribution of the objects; and generate the auxiliary focus image according to the scene image and the depth distribution image.
- 31. The apparatus according to claim 30, characterized in that the depth distribution of an object is shown by the object's corresponding projection points in the depth distribution image, the projection points being obtained by projecting the object along a specified axis, the specified axis not coinciding with the optical axis of the camera apparatus.
- 32. The apparatus according to claim 31, characterized in that the horizontal or vertical axis of the depth distribution image is used to show the depth of the projection points.
- 33. The apparatus according to claim 32, characterized in that the horizontal or vertical axis of the depth distribution image carries a scale, each graduation marking a corresponding depth value.
- 34. The apparatus according to claim 33, characterized in that the scale is non-uniform.
- 35. The apparatus according to any one of claims 33-34, characterized in that the vertical axis of the depth distribution image represents the depth values of the projection points, the horizontal axis of the depth distribution image represents the spatial position distribution of the objects corresponding to the projection points, and an attribute of a projection point is used to characterize the number of spatial three-dimensional points of the object projected onto that projection point.
- 36. The apparatus according to claim 35, characterized in that the attribute of a projection point includes any one of: the gray value of the projection point, the color of the projection point, or the shape of the projection point.
- 37. The apparatus according to claim 36, characterized in that the gray value of a projection point is positively correlated with the number of spatial three-dimensional points of the object projected onto the projection point.
- 38. The apparatus according to claim 35, characterized in that the distribution range of an object's projection points along the horizontal axis of the depth distribution image is positively correlated with the size of the object.
- 39. The apparatus according to claim 35, characterized in that the height of the position of an object's projection points along the vertical axis of the depth distribution image is positively correlated with the distance between the object and the camera apparatus.
- 40. The apparatus according to any one of claims 35-39, characterized in that the scale of the vertical axis of the depth distribution image is obtained by quantizing the depth values of the objects.
- 41. The apparatus according to any one of claims 30-40, characterized in that the position in the scene corresponding to the focus of the camera apparatus is shown through the depth distribution image.
- 42. The apparatus according to claim 41, characterized in that, when showing in the depth distribution image the position of the focus of the camera apparatus in the scene, the processor is specifically configured to: mark, with a designated marker in the depth distribution image, the depth corresponding to the focus of the camera apparatus.
- 43. The apparatus according to any one of claims 30-42, characterized in that the depth distribution image is further used to show the in-focus region of the camera apparatus, wherein objects located in the in-focus region are imaged sharply by the camera apparatus.
- 44. The apparatus according to claim 43, characterized in that, when showing the in-focus region of the camera apparatus in the depth distribution image, the processor is specifically configured to: frame the in-focus region in the depth distribution image with a selection box; or mark, in the depth distribution image, the projection points corresponding to objects located in the in-focus region.
- 45. The apparatus according to claim 44, characterized in that, when marking in the depth distribution image the projection points corresponding to objects located in the in-focus region, the processor is specifically configured to: render the projection points corresponding to objects in the in-focus region in the depth distribution image in a specified color.
- 46. The apparatus according to any one of claims 30-45, characterized in that the processor is further configured to: determine, from the scene image, a target object of interest to the user; and mark, in the depth distribution image, the projection points corresponding to the target object.
- 47. The apparatus according to claim 46, characterized in that, when determining from the scene image the target object of interest to the user, the processor is specifically configured to: determine the target object of interest from the scene image based on a selection instruction input by the user through the interactive interface; or recognize objects of a specified type from the scene image as the target object.
- 48. The apparatus according to any one of claims 30-47, characterized in that the depth distribution image comprises multiple layers, each layer being used to display the depth distribution of the projection points corresponding to one object.
- 49. The apparatus according to claim 48, characterized in that the multiple layers are arranged in a staggered fashion, and the ordering of the multiple layers is determined based on the degree of attention paid to the objects corresponding to the layers.
- 50. The apparatus according to any one of claims 30-49, characterized in that the processor is further configured to: determine target pixels in the scene image and target projection points in the depth distribution image that correspond to the same object; and display the target pixels and the target projection points in association in the auxiliary focus image.
- 51. The apparatus according to claim 50, characterized in that, when displaying the target pixels and the target projection points in association in the auxiliary focus image, the processor is specifically configured to: frame the target pixels and the target projection points with selection boxes of the same color; or render the color of the target projection points as the color corresponding to the target pixels; or mark the same characters at positions adjacent to the target projection points and the target pixels.
- 52. The apparatus according to any one of claims 30-51, characterized in that the depth distribution image is an image with a certain degree of transparency, and, when generating the auxiliary focus image according to the scene image and the depth distribution image, the processor is specifically configured to: overlay the depth distribution image on the scene image to generate the auxiliary focus image.
- 53. The apparatus according to claim 52, characterized in that the size of the depth distribution image is the same as that of the scene image; or the size of the depth distribution image is smaller than that of the scene image.
- 54. The apparatus according to any one of claims 30-53, characterized in that, when generating the auxiliary focus image according to the scene image and the depth distribution image, the processor is specifically configured to: stitch the depth distribution image and the scene image side by side to generate the auxiliary focus image.
- 55. The apparatus according to any one of claims 30-54, characterized in that the viewing angle range corresponding to the scene image coincides with that of the depth distribution image; or the viewing angle range corresponding to the scene image is a part of that of the depth distribution image.
- 56. The apparatus according to any one of claims 29-55, characterized in that, before displaying the auxiliary focus image to the user, the processor is further configured to: receive an instruction input by the user, the instruction being used to indicate enabling of the auxiliary focusing mode.
- 57. An auxiliary focusing system, characterized by comprising the auxiliary focusing apparatus according to any one of claims 29-56, a camera apparatus, and a distance-measuring apparatus.
- 58. The auxiliary focusing system according to claim 57, characterized by further comprising a gimbal, the camera apparatus and the distance-measuring apparatus being fixed on the gimbal.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410416935.1A CN118283418A (zh) | 2020-12-16 | 2020-12-16 | Auxiliary focusing method and apparatus |
PCT/CN2020/136829 WO2022126430A1 (zh) | 2020-12-16 | 2020-12-16 | Auxiliary focusing method, apparatus, and system |
CN202080077437.9A CN114788254B (zh) | 2020-12-16 | 2020-12-16 | Auxiliary focusing method, apparatus, and system |
US18/210,110 US20230328400A1 (en) | 2020-12-16 | 2023-06-15 | Auxiliary focusing method, apparatus, and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2020/136829 WO2022126430A1 (zh) | 2020-12-16 | 2020-12-16 | Auxiliary focusing method, apparatus, and system |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/210,110 Continuation US20230328400A1 (en) | 2020-12-16 | 2023-06-15 | Auxiliary focusing method, apparatus, and system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022126430A1 true WO2022126430A1 (zh) | 2022-06-23 |
Family
ID=82059903
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/136829 WO2022126430A1 (zh) | 2020-12-16 | 2020-12-16 | 辅助对焦方法、装置及系统 |
Country Status (3)
Country | Link |
---|---|
US (1) | US20230328400A1 (zh) |
CN (2) | CN118283418A (zh) |
WO (1) | WO2022126430A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115294508B (zh) * | 2022-10-10 | 2023-01-06 | 成都唐米科技有限公司 | Focus tracking method and system based on static spatial three-dimensional reconstruction, and camera system |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008256826A (ja) * | 2007-04-03 | 2008-10-23 | Nikon Corp | Camera |
CN102348058A (zh) * | 2010-07-27 | 2012-02-08 | 三洋电机株式会社 | Electronic device |
CN104052925A (zh) * | 2013-03-15 | 2014-09-17 | 奥林巴斯映像株式会社 | Display device and control method of display device |
CN105474622A (zh) * | 2013-08-30 | 2016-04-06 | 高通股份有限公司 | Method and apparatus for generating an all-in-focus image |
CN106134176A (zh) * | 2014-04-03 | 2016-11-16 | 高通股份有限公司 | System and method for multifocal imaging |
US20180249092A1 (en) * | 2015-10-27 | 2018-08-30 | Olympus Corporation | Imaging device, endoscope apparatus, and method for operating imaging device |
CN106060358A (zh) * | 2016-07-20 | 2016-10-26 | 成都微晶景泰科技有限公司 | Scene continuous analysis method, device, and imaging apparatus |
CN106226975A (zh) * | 2016-07-20 | 2016-12-14 | 成都微晶景泰科技有限公司 | Autofocus method, device, and imaging apparatus |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116074624A (zh) * | 2022-07-22 | 2023-05-05 | 荣耀终端有限公司 | Focusing method and apparatus |
CN116074624B (zh) * | 2022-07-22 | 2023-11-10 | 荣耀终端有限公司 | Focusing method and apparatus |
WO2024138648A1 (zh) * | 2022-12-30 | 2024-07-04 | 深圳市大疆创新科技有限公司 | Auxiliary focusing method and apparatus for photographing device, gimbal, and focus-following device |
Also Published As
Publication number | Publication date |
---|---|
CN114788254B (zh) | 2024-04-30 |
CN118283418A (zh) | 2024-07-02 |
CN114788254A (zh) | 2022-07-22 |
US20230328400A1 (en) | 2023-10-12 |
Legal Events
Code | Title | Description |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20965448; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 20965448; Country of ref document: EP; Kind code of ref document: A1 |