
CN115190236B - Image shooting method, device, computer equipment and storage medium - Google Patents

Image shooting method, device, computer equipment and storage medium

Info

Publication number
CN115190236B
CN115190236B
Authority
CN
China
Prior art keywords
image
preview image
terminal
composition
indication information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110370466.0A
Other languages
Chinese (zh)
Other versions
CN115190236A (en)
Inventor
阿克谢亚·巴拉德瓦
马纳夫·拉奥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oneplus Technology Shenzhen Co Ltd
Original Assignee
Oneplus Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oneplus Technology Shenzhen Co Ltd filed Critical Oneplus Technology Shenzhen Co Ltd
Priority to CN202110370466.0A
Publication of CN115190236A
Application granted
Publication of CN115190236B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The application relates to an image shooting method and apparatus, a computer device, and a storage medium. The method comprises: performing scene detection on a preview image, captured by a terminal, of a shooting object, to obtain the scene type corresponding to the preview image; determining target image features in the preview image according to that scene type; generating and outputting composition indication information for the shooting object according to the target image features, the composition indication information being used to instruct the user to adjust the shooting position of the terminal to a composition position at which a captured image conforms to a preset composition rule for the shooting object; and acquiring a target image shot by the terminal at the composition position. With this method, symmetrical images can be captured directly on the terminal, without post-capture image processing.

Description

Image shooting method, device, computer equipment and storage medium
Technical Field
The present application relates to the field of image capturing technologies, and in particular, to an image capturing method, an image capturing device, a computer device, and a storage medium.
Background
Image shooting is a standard function of existing handheld terminals (such as mobile phones). To let users shoot images with different effects, developers have added many auxiliary shooting functions to the terminal's camera, such as a beautification function, a time-lapse shooting function, a multi-person composition function, a panoramic shooting function, and a horizontal image shooting function.
However, current terminals provide no symmetrical image capturing function. A user who wants a symmetrical image can only estimate by eye whether the image about to be captured is symmetrical, or, after capturing it, obtain a symmetrical image through post-processing with an auxiliary means such as a third-party application.
Disclosure of Invention
The embodiments of the application provide an image shooting method, an image shooting device, a computer device, and a storage medium, with which symmetrical images can be shot directly on the terminal without post-processing.
An image capturing method, the method comprising:
performing scene detection processing on a preview image captured by a terminal and aiming at a shooting object to obtain a scene type corresponding to the preview image;
determining target image characteristics in the preview image according to the scene type corresponding to the preview image;
generating and outputting composition indication information aiming at the shooting object according to the target image characteristics; the composition indication information is used for indicating a user to adjust the shooting position of the terminal to a composition position, and an image shot when the terminal is positioned at the composition position accords with a preset composition rule aiming at the shooting object;
and acquiring a target image shot by the terminal at the composition position.
In one embodiment, the performing scene detection processing on the preview image captured by the terminal and directed at the shooting object to obtain a scene type corresponding to the preview image includes:
acquiring a depth image corresponding to the preview image;
acquiring depth values of all pixel points in an image area where the shooting object is located in the depth image, and determining difference values between the depth values of two adjacent pixel points in the image area;
and determining the scene type corresponding to the preview image according to the difference value.
In one embodiment, the determining the target image feature in the preview image according to the scene type corresponding to the preview image includes:
if the scene type corresponding to the preview image is a three-dimensional scene, detecting vanishing points and vanishing lines in the preview image as target image features in the preview image;
The generating and outputting composition indication information for the shooting object according to the target image features includes:
acquiring the number of vanishing points detected in the preview image;
determining the perspective type corresponding to the preview image according to the number of vanishing points;
and generating and outputting composition indication information aiming at the shooting object according to the perspective type.
In one embodiment, the generating and outputting composition indication information for the photographic subject according to the perspective type includes:
if the perspective type corresponding to the preview image is single-point perspective, displaying in the preview image an auxiliary line passing through the vanishing point, and a longitudinal center line; the single-point perspective indicates that there is one vanishing point in the preview image; the auxiliary line is parallel to the longitudinal center line;
generating and outputting first composition indication information for the shooting object according to the relative position relation between the vanishing point and the longitudinal center line; the first composition indication information is used for indicating a user to adjust the terminal to a position where the auxiliary line coincides with the longitudinal center line.
In one embodiment, the generating and outputting composition indication information for the shooting object according to the perspective type further includes:
if the perspective type corresponding to the preview image is single-point perspective, displaying all vanishing lines of the preview image and the longitudinal center line; the vanishing lines are used for determining the vanishing point of the preview image;
generating and outputting second composition indication information for the shooting object according to the relative position relation between the vanishing lines and the longitudinal center line; the second composition indication information is used for instructing the user to adjust the terminal to a position where the numbers of vanishing lines on the two sides of the longitudinal center line are equal.
In one embodiment, the generating and outputting composition indication information for the shooting object according to the perspective type further includes:
if the perspective type corresponding to the preview image is two-point perspective, displaying in the preview image a vertical line passing through the shot object, and a longitudinal center line; the two-point perspective indicates that there are two vanishing points in the preview image;
generating and outputting third composition indication information for the shot object according to the relative position relation between the vertical line and the longitudinal center line; the third composition indication information is used for instructing the user to adjust the terminal to a position where the vertical line and the longitudinal center line coincide.
In one embodiment, after generating and outputting the third composition instruction information for the photographic subject according to the relative positional relationship between the vertical line and the longitudinal center line, further comprising:
determining the position of each vanishing point in the preview image;
acquiring a first vertical distance between the left vanishing point and the longitudinal center line, and a second vertical distance between the right vanishing point and the longitudinal center line, in the preview image;
generating and outputting fourth composition indication information for the shooting object according to a numerical comparison of the first vertical distance and the second vertical distance; the fourth composition indication information is used for instructing the user to adjust the terminal to a position where the vertical line and the longitudinal center line coincide and the first vertical distance and the second vertical distance are equal.
In one embodiment, the determining the target image feature in the preview image according to the scene type corresponding to the preview image further includes:
if the scene type corresponding to the preview image is a two-dimensional scene, acquiring a region of interest to be shot in the preview image as the target image feature in the preview image;
the generating and outputting composition indication information for the shooting object according to the target image features further includes:
if the camera shooting mode of the terminal is detected to be the manual shooting mode, acquiring, according to the region of interest, information on the included angle between the terminal and the plane where the shot object is located;
generating and outputting fifth composition indication information for the shot object according to the included-angle information; the fifth composition indication information is used for instructing the user to adjust the terminal to a position where the terminal is parallel to the plane where the shooting object is located.
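The patent does not specify how the included angle is computed; one plausible sketch (all names and the use of the horizontal depth gradient are assumptions for illustration) estimates the tilt from the depth values inside the region of interest, since a plane parallel to the sensor has constant depth:

```python
import numpy as np

def plane_tilt_degrees(depth_roi: np.ndarray, pixel_pitch: float = 1.0) -> float:
    """Estimate the angle between the terminal and the subject plane.

    depth_roi holds the depth value of each pixel in the region of
    interest.  The mean horizontal depth gradient (depth change per
    pixel column, in the same length unit as pixel_pitch) gives the
    yaw angle; 0 degrees means the terminal is already parallel to
    the subject plane, so no fifth-indication adjustment is needed.
    """
    grad = np.diff(depth_roi, axis=1).mean()  # depth change per column
    return float(np.degrees(np.arctan2(grad, pixel_pitch)))
```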
In one embodiment, after acquiring the region of interest to be photographed in the preview image, the method further includes:
if the camera shooting mode of the terminal is detected to be the automatic shooting mode, shooting an image of interest corresponding to the region of interest;
and inputting the image of interest into a trained symmetric conversion model to obtain a target image corresponding to the region of interest.
In one embodiment, the composition position is a symmetrical image composition position; the obtaining the target image shot by the terminal at the composition position comprises the following steps:
and acquiring the symmetrical image shot by the terminal at the symmetrical image composition position.
An image capturing apparatus, the apparatus comprising:
the scene detection module is used for carrying out scene detection processing on a preview image captured by the terminal and aiming at a shooting object to obtain a scene type corresponding to the preview image;
the feature determining module is used for determining target image features in the preview image according to the scene types corresponding to the preview image;
The information generation module is used for generating and outputting composition indication information aiming at the shooting object according to the target image characteristics; the composition indication information is used for indicating a user to adjust the shooting position of the terminal to a composition position, and an image shot when the terminal is positioned at the composition position accords with a preset composition rule aiming at the shooting object;
and the image acquisition module is used for acquiring the target image shot by the terminal at the composition position.
A computer device comprising a memory storing a computer program and a processor which when executing the computer program performs the steps of:
performing scene detection processing on a preview image captured by a terminal and aiming at a shooting object to obtain a scene type corresponding to the preview image;
Determining target image characteristics in the preview image according to the scene type corresponding to the preview image;
generating and outputting composition indication information aiming at the shooting object according to the target image characteristics; the composition indication information is used for indicating a user to adjust the shooting position of the terminal to a composition position, and an image shot when the terminal is positioned at the composition position accords with a preset composition rule aiming at the shooting object;
and acquiring a target image shot by the terminal at the composition position.
A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
performing scene detection processing on a preview image captured by a terminal and aiming at a shooting object to obtain a scene type corresponding to the preview image;
Determining target image characteristics in the preview image according to the scene type corresponding to the preview image;
generating and outputting composition indication information aiming at the shooting object according to the target image characteristics; the composition indication information is used for indicating a user to adjust the shooting position of the terminal to a composition position, and an image shot when the terminal is positioned at the composition position accords with a preset composition rule aiming at the shooting object;
and acquiring a target image shot by the terminal at the composition position.
According to the image shooting method, apparatus, computer device, and storage medium, scene detection is performed on the preview image of the shooting object captured by the terminal to obtain the scene type corresponding to the preview image; target image features in the preview image are determined according to that scene type; and composition indication information for the shooting object is generated and output according to the target image features. The composition indication information instructs the user to adjust the shooting position of the terminal to the composition position, so that the target image shot by the terminal at the composition position conforms to the preset composition rule for the shooting object. In this way, the acquisition of a symmetrical image is completed directly at the composition position, and no post-capture image processing is needed. This overcomes the drawbacks of the traditional post-processing approach to obtaining symmetrical images, which is time- and labor-consuming, complex to operate, and prone to losing important information in the image.
Drawings
FIG. 1 is a flow chart of an image capturing method according to an embodiment;
FIG. 2 is a schematic diagram of a process of adjusting a terminal position through a vanishing point in one embodiment;
FIG. 3 is a schematic diagram of a process of adjusting a terminal position through a vanishing line in one embodiment;
Fig. 4 is a schematic diagram of a process of adjusting a terminal by a vertical line passing through a photographing object in one embodiment;
FIG. 5 is a schematic illustration of a process for adjusting the position of a terminal by the distance between the vanishing point and the longitudinal centerline in one embodiment;
FIG. 6a is a schematic diagram of a relationship between different shooting positions and a plane of a shooting object in an embodiment;
FIG. 6b is a schematic diagram of images captured by a terminal at different capturing positions in one embodiment;
FIG. 7 is a schematic diagram of an image conversion process of interest in an automatic photographing mode in one embodiment;
FIG. 8 is a flowchart of an image capturing method according to another embodiment;
FIG. 9 is a schematic diagram of a complete flow of an image capturing method according to one embodiment;
FIG. 10 is a block diagram showing the structure of an image capturing apparatus according to an embodiment;
FIG. 11 is an internal block diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
In one embodiment, as shown in fig. 1, an image capturing method is provided, and this embodiment is applied to a terminal for illustration, where the terminal refers to a device with an image capturing function, and may be, but not limited to, various smartphones. In this embodiment, the method includes the steps of:
Step S102, performing scene detection processing on the preview image of the shooting object captured by the terminal to obtain a scene type corresponding to the preview image.
The scene types include three-dimensional scenes and two-dimensional scenes.
In a preview image of a three-dimensional scene, the depth-value variation between pixel points exceeds a difference threshold; the image shown in fig. 4, for example, corresponds to a three-dimensional scene.
In a preview image of a two-dimensional scene, the depth-value variation between pixel points is smaller than or equal to the difference threshold; the image shown in fig. 7, for example, corresponds to a two-dimensional scene.
Specifically, a depth sensor is arranged in the terminal. If the current shooting mode of the terminal is detected to be the symmetrical image shooting mode, the symmetrical image shooting flow is entered: scene detection is performed on the preview image of the shooting object captured by the terminal based on the data acquired by the depth sensor, and the scene type corresponding to the preview image is determined, so that the target image features in the preview image can then be determined according to that scene type.
The terminal can be further provided with a selection entry of a scene type corresponding to the preview image, the selection entry comprises a two-dimensional scene selection entry and a three-dimensional scene selection entry, and the scene type corresponding to the preview image is determined by receiving a triggering operation for the selection entry.
In one embodiment, before step S102, the current shooting mode of the terminal is further detected, and if the current shooting mode of the terminal is detected to be the symmetrical image shooting mode, the scene detection process is performed on the preview image captured by the terminal. The shooting mode of the terminal can be various, such as a portrait shooting mode, a panoramic shooting mode, a common shooting mode, a night scene shooting mode, a horizontal image shooting mode and the like. In a specific implementation, the symmetrical image capturing mode may be set in a camera application installed in the terminal as an additional capturing function of the camera.
Further, in one embodiment, step S102 specifically includes: obtaining a depth image corresponding to the preview image; obtaining depth values of all pixel points in the image area where the shooting object is located in the depth image, and determining the differences between the depth values of adjacent pixel points in that area; and determining the scene type corresponding to the preview image according to those differences.
Specifically, each pixel value of the depth map represents the distance between a point in the scene and the camera. The depth map corresponding to the preview image can be obtained through the depth sensor arranged in the terminal, which provides a depth value (that is, a z-axis coordinate value) for each pixel point in the preview image; the scene type corresponding to the preview image is then determined according to how those depth values vary. More specifically, the depth value of each pixel point in the image area of the preview image where the shot object is located can be obtained, and the depth-value change between every two adjacent pixel points is represented by the difference of their depth values. If every such difference is smaller than or equal to the difference threshold, the depth values vary little, meaning that the distances between the points of the shot object and the camera are almost equal and the plane where the shot object is located is basically parallel to the camera; the scene type corresponding to the preview image is then determined to be a two-dimensional scene. If a depth-value difference is larger than the difference threshold, the depth values vary considerably and the points of the shot object lie at clearly different distances from the camera; the scene type corresponding to the preview image is then judged to be a three-dimensional scene.
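The adjacent-pixel depth comparison above can be sketched as follows (a minimal illustration; the function name and the concrete threshold value are assumptions, since the patent leaves the threshold unspecified):

```python
import numpy as np

def classify_scene(depth_roi: np.ndarray, threshold: float = 0.05) -> str:
    """Classify the subject's image area as a '2d' or '3d' scene.

    depth_roi holds the depth value of each pixel point in the area
    where the shooting object is located.  If any difference between
    two horizontally or vertically adjacent depth values exceeds the
    threshold (here an assumed 0.05, in the depth map's units), the
    subject plane is not parallel to the camera, so the scene is
    treated as three-dimensional; otherwise as two-dimensional.
    """
    dx = np.abs(np.diff(depth_roi, axis=1))  # horizontal neighbours
    dy = np.abs(np.diff(depth_roi, axis=0))  # vertical neighbours
    max_diff = max(dx.max(initial=0.0), dy.max(initial=0.0))
    return "3d" if max_diff > threshold else "2d"
```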
Step S104, determining target image characteristics in the preview image according to the scene type corresponding to the preview image.
Wherein the target image feature is a feature for determining a photographing position of the terminal.
Specifically, if the scene type corresponding to the preview image is a three-dimensional scene, the vanishing point and vanishing lines in the preview image may be detected as the target image features in the preview image; if the scene type corresponding to the preview image is a two-dimensional scene, a region of interest (ROI) selected in the preview image may be taken as the target image feature. The vanishing point is the visual intersection of several parallel lines in a perspective view; for example, point 20 shown in fig. 2 is the vanishing point of that image, and each of the parallel lines used to determine the vanishing point is referred to as a vanishing line.
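Once the vanishing lines are available, the vanishing point is their common intersection. A small sketch (not from the patent; in a real pipeline the lines would come from an edge-detection and Hough-transform stage, whereas here they are supplied directly) estimates it by least squares:

```python
import numpy as np

def estimate_vanishing_point(lines):
    """Least-squares intersection of the given vanishing lines.

    Each line is ((x1, y1), (x2, y2)) in image coordinates.  Every
    line is rewritten as a*x + b*y = c with (a, b) the unit normal,
    and the stacked system is solved for the point closest to all
    lines, which is the estimated vanishing point.
    """
    A, c = [], []
    for (x1, y1), (x2, y2) in lines:
        dx, dy = x2 - x1, y2 - y1
        n = np.array([-dy, dx], dtype=float)
        n /= np.linalg.norm(n)
        A.append(n)
        c.append(n @ np.array([x1, y1], dtype=float))
    solution, *_ = np.linalg.lstsq(np.array(A), np.array(c), rcond=None)
    return float(solution[0]), float(solution[1])
```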
Step S106, generating and outputting composition indication information aiming at a shooting object according to the target image characteristics; the composition indication information is used for indicating a user to adjust the shooting position of the terminal to a composition position, and when the terminal is at the composition position, the shot image accords with a preset composition rule aiming at a shooting object.
Specifically, after the target image features of the preview image are determined, composition indication information for the shooting object may be generated and output according to those features. Through the composition indication information, the user is instructed to adjust the shooting position of the terminal to a composition position at which the shot image conforms to the preset composition rule for the shooting object, so that a target image meeting the requirement is shot at the composition position. More specifically, if the scene type corresponding to the preview image is determined to be a three-dimensional scene, the perspective type corresponding to the preview image may be further detected, and composition indication information for the shooting object generated and output according to the perspective type. Because the image is shot by a single camera of the terminal, the perspective type corresponding to the preview image is only single-point perspective or two-point perspective. If the scene type corresponding to the preview image is determined to be a two-dimensional scene, it is further detected whether the camera shooting mode of the terminal is the automatic shooting mode or the manual shooting mode, and composition indication information for the shooting object is generated and output with the corresponding processing method.
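The branching of step S106 can be summarized in a small dispatcher (a sketch only; the strategy names are illustrative labels, not identifiers from the patent):

```python
def composition_flow(scene_type: str, perspective_type: str = None,
                     camera_mode: str = None) -> str:
    """Select the guidance strategy for step S106.

    3D scenes branch on the detected perspective type; 2D scenes
    branch on whether the camera is in manual or automatic mode.
    """
    if scene_type == "3d":
        return {
            "one-point": "align_vanishing_point_with_centerline",
            "two-point": "align_subject_vertical_with_centerline",
        }[perspective_type]
    # Two-dimensional scene.
    if camera_mode == "manual":
        return "level_terminal_parallel_to_subject_plane"
    return "auto_capture_and_symmetrize_roi"
```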
Step S108, acquiring a target image shot by the terminal at the composition position.
Specifically, the target image may be a symmetrical image of the shooting object. After the composition indication information is generated, it may be displayed in the terminal interface to instruct the user to adjust the terminal to the shooting position at which the target image of the shooting object can be shot, and the target image of the shooting object is then shot at that position.
Further, in one embodiment, the composition position is a symmetrical image composition position, and obtaining the target image shot by the terminal at the composition position includes: obtaining the symmetrical image shot by the terminal at the symmetrical image composition position. The symmetrical image of the shot object is thus obtained by direct shooting, without post-capture image processing, which greatly improves the convenience of the symmetrical image shooting method.
In the image shooting method, scene detection is performed on the preview image of the shooting object captured by the terminal to obtain the scene type corresponding to the preview image; the target image features in the preview image are determined according to that scene type; and composition indication information for the shooting object is generated and output according to the target image features. Through the composition indication information, the user is instructed to adjust the shooting position of the terminal to the composition position, so that the target image shot by the terminal at the composition position conforms to the preset composition rule for the shooting object. With the target image shooting function, corresponding composition indication information is generated and output according to the scene type and target image features of the preview image, instructing the user to adjust the terminal to a shooting position at which the shot image conforms to the preset composition rule. The acquisition of the symmetrical image is therefore completed directly at the composition position, without post-capture image processing, which overcomes the drawbacks of the traditional post-processing approach: it is time- and labor-consuming, complex to operate, and prone to losing important information in the image.
In one embodiment, the step S104 includes: if the scene type corresponding to the preview image is a three-dimensional scene, detecting the vanishing point and vanishing lines in the preview image as the target image features in the preview image. The step S106 then includes: acquiring the number of vanishing points detected in the preview image; determining the perspective type corresponding to the preview image according to the number of vanishing points; and generating and outputting composition indication information for the shooting object according to the perspective type.
Single-point perspective is the perspective of a square or rectangular object with one face parallel to the picture; only one vanishing point exists.
Two-point perspective, also called angular perspective, means that two groups of vertical faces of the shot object are not parallel to the picture and form an included angle with it; two vanishing points exist.
Specifically, since the perspective type of an image can be determined from the number of its vanishing points, after the scene type corresponding to the preview image is determined to be a three-dimensional scene, the perspective type corresponding to the preview image can be determined by detecting the number of vanishing points in it. More specifically, a perspective type detection model may be trained in advance; after the preview image captured by the terminal is obtained, it is input into the trained model to obtain the corresponding perspective type. For example, the image shown in fig. 2 has one vanishing point, so its perspective type is single-point perspective; the image shown in fig. 5 has two vanishing points, so its perspective type is two-point perspective.
Further, in one embodiment, the step of generating and outputting composition indication information for a photographing object according to the perspective type includes: if the perspective type corresponding to the preview image is single-point perspective, displaying an auxiliary line and a longitudinal center line which pass through the vanishing point in the preview image; single point perspective is used to represent that the vanishing point in the preview image is one; the auxiliary line is parallel to the longitudinal center line; generating and outputting first composition indication information aiming at a shooting object according to the relative position relation between the vanishing point and the longitudinal center line; the first composition indication information is used for indicating a user to adjust the terminal to a position where the auxiliary line coincides with the longitudinal center line.
Specifically, the first composition indication information includes: if the vanishing point is positioned at the left side of the longitudinal center line, the user is instructed to move the terminal leftwards; if the vanishing point is positioned on the right side of the longitudinal center line, the user is instructed to move the terminal rightward, and the user is instructed to adjust the terminal to a position where an auxiliary line passing through the vanishing point coincides with the longitudinal center line through the relative position relationship between the vanishing point and the longitudinal center line, wherein the position is a position where a symmetrical image of a shooting object can be shot. The first composition indication information may be text information, or may be information of a symbol (such as a left arrow, a right arrow, etc.) indicating a direction.
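As a hypothetical sketch of the first composition indication logic (names and the coordinate convention are illustrative: x grows rightward and the longitudinal center line sits at half the image width):

```python
def first_composition_hint(vanishing_x: float, image_width: float) -> str:
    """Return a movement hint by comparing the vanishing point's horizontal
    position with the longitudinal center line of the preview image."""
    center_x = image_width / 2.0
    if vanishing_x < center_x:
        return "move left"   # vanishing point left of the center line
    if vanishing_x > center_x:
        return "move right"  # vanishing point right of the center line
    return "aligned"         # auxiliary line coincides with the center line
```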
For example, referring to fig. 2, fig. 2 is a schematic diagram of the process of adjusting the terminal position through the vanishing point in one embodiment, in which the upper half represents the image before the terminal is adjusted and the lower half represents the image after adjustment. In the drawing, the dot 20 represents the vanishing point of the preview image shown in fig. 2, the line 22 represents the auxiliary line passing through the vanishing point 20, and the line 24 represents the longitudinal center line of the preview image. As can be seen from the image before adjustment in fig. 2, the vanishing point 20 is located on the left side of the longitudinal center line 24, so first composition indication information for moving the terminal to the left can be displayed in the terminal, instructing the user to move the terminal to the position where the auxiliary line 22 coincides with the longitudinal center line 24, thereby obtaining the symmetrical image shown in the lower half.
In this embodiment, when the scene type corresponding to the preview image is a three-dimensional scene, the vanishing point theory is used to capture the symmetry point, and the shooting position of the terminal is then adjusted with the assistance of the symmetry point, so that the terminal is adjusted to the position where the auxiliary line passing through the vanishing point coincides with the longitudinal center line. This realizes the shooting of a symmetrical image of a single-point perspective image in a three-dimensional scene, overcomes the inaccuracy of symmetrical images shot by rough human estimation, avoids post-processing, reduces the consumption of time and energy, and retains important information in the image.
In one embodiment, the step of generating and outputting composition indication information for a photographic subject according to the perspective type includes: if the perspective type corresponding to the preview image is single-point perspective, displaying all vanishing lines and the longitudinal center line of the preview image; the vanishing lines are used for determining the vanishing point of the preview image; generating and outputting second composition indication information for the shooting object according to the relative positional relationship between the vanishing lines and the longitudinal center line; the second composition indication information is used for instructing the user to adjust the terminal to a position having an equal number of vanishing lines on both sides of the longitudinal center line.
Specifically, if the perspective type corresponding to the preview image is single-point perspective, then in addition to capturing a symmetrical image by adjusting the position of the terminal according to the vanishing point, this embodiment also proposes a method of adjusting the capturing position of the terminal according to the vanishing lines used for determining the vanishing point. More specifically, the specific implementation procedure of generating and outputting the second composition indication information for the photographic subject according to the relative positional relationship between the vanishing lines and the longitudinal center line includes: acquiring the number of vanishing lines on each side of the longitudinal center line; if the number of vanishing lines on the left side is larger than the number on the right side, instructing the user to move the terminal leftwards; if the number on the left side is smaller than the number on the right side, instructing the user to move the terminal rightwards. By comparing the numbers of vanishing lines on the left and right sides, the user is thus guided to move the terminal to the position where the number of vanishing lines on the left side equals the number on the right side. If the total number of vanishing lines is odd, the user is instructed to move the terminal to the position where the numbers of vanishing lines on the left and right sides are equal and the middle vanishing line coincides with the longitudinal center line. Similar to the first composition indication information, the second composition indication information may be text information, or may be information of a symbol indicating the direction (such as a left arrow, a right arrow, etc.).
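The vanishing-line comparison above can be sketched as follows, assuming each vanishing line is summarized by a single representative x-coordinate (for example its midpoint) — an illustrative simplification, not the application's actual implementation:

```python
def second_composition_hint(line_xs, image_width: float) -> str:
    """Compare the number of vanishing lines on each side of the
    longitudinal center line and return a movement hint."""
    center_x = image_width / 2.0
    left = sum(1 for x in line_xs if x < center_x)
    right = sum(1 for x in line_xs if x > center_x)
    if left > right:
        return "move left"   # more vanishing lines on the left side
    if left < right:
        return "move right"  # more vanishing lines on the right side
    return "aligned"         # equal numbers on both sides
```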
For example, referring to fig. 3, fig. 3 is a schematic diagram of the process of adjusting the terminal position through vanishing lines in one embodiment, in which the upper half represents the image before the terminal is adjusted and the lower half represents the image after adjustment. The lines 30 in the figure represent the vanishing lines, and the line 32 represents the longitudinal center line. As can be seen from the image before adjustment in fig. 3, all four vanishing lines are located on the left side of the longitudinal center line, so second composition indication information for moving the terminal to the left can be displayed in the terminal, instructing the user to move the terminal to the position with two vanishing lines on each side, thereby obtaining the symmetrical image shown in the lower half of fig. 3.
In this embodiment, in addition to the method of adjusting the shooting position of the terminal according to the vanishing point and the longitudinal center line, a method of adjusting the shooting position of the terminal according to the vanishing line and the longitudinal center line is also provided, so that flexibility and diversity of the image shooting method of the single-point perspective image are realized, on the basis of shooting symmetrical images of the single-point perspective image in a three-dimensional scene, the defect that the shot symmetrical images are inaccurate by a shooting method estimated approximately by people is overcome, post processing is not needed, time and energy consumption is reduced, and important information in the images can be reserved.
In one embodiment, the step of generating and outputting composition indication information for the photographic subject according to the perspective type further includes: if the perspective type corresponding to the preview image is two-point perspective, displaying, in the preview image, a vertical line passing through the shot object and the longitudinal center line; two-point perspective is used to represent that there are two vanishing points in the preview image; generating and outputting third composition indication information for the shooting object according to the relative positional relationship between the vertical line and the longitudinal center line; the third composition indication information is used for instructing the user to adjust the terminal to the position where the vertical line and the longitudinal center line coincide.
Specifically, the third composition indication information includes: if the vertical line passing through the shooting object is positioned at the left side of the longitudinal center line, the user is instructed to move the terminal leftwards; if the vertical line passing through the shooting object is positioned on the right side of the longitudinal center line, the user is instructed to move the terminal rightward, and the user is instructed to adjust the terminal to a position where the vertical line passing through the shooting object coincides with the longitudinal center line through the relative position relation between the vertical line and the longitudinal center line. The third composition indication information may be text information, or information indicating a sign of the direction (e.g., an arrow sign such as a left arrow or a right arrow), similar to the first composition indication information.
For example, referring to fig. 4, fig. 4 is a schematic diagram of the process of adjusting the terminal through a vertical line passing through the photographing object in one embodiment, in which the upper half represents the image before the terminal is adjusted and the lower half represents the image after adjustment. In the drawing, the line 40 represents the vertical line passing through the subject, and the line 42 represents the longitudinal center line. As can be seen from the image before adjustment in fig. 4, the vertical line passing through the subject is located on the right side of the longitudinal center line; therefore, composition indication information for moving the terminal to the right can be displayed in the terminal, instructing the user to move the terminal to the right so that the vertical line passing through the subject coincides with the longitudinal center line.
Further, after generating and outputting the third composition instruction information for the photographic subject, it further includes: determining the position of each vanishing point in the preview image; acquiring a first vertical distance between a left vanishing point and a longitudinal center line and a second vertical distance between a right vanishing point and the longitudinal center line in a preview image; generating and outputting fourth composition indication information aiming at a shooting object according to the numerical comparison result of the first vertical distance and the second vertical distance; the fourth composition indication information is used for indicating a user to adjust the terminal to a position where the vertical line and the longitudinal center line coincide, and the first vertical distance and the second vertical distance are equal.
Specifically, the fourth composition indication information includes: if the first vertical distance is smaller than the second vertical distance, the user is instructed to rotate the terminal anticlockwise under the condition that a vertical line passing through the shooting object coincides with the longitudinal center line; if the first vertical distance is greater than the second vertical distance, under the condition that a vertical line passing through the shooting object coincides with the longitudinal center line, the user is instructed to rotate the terminal clockwise, and the user is instructed to adjust the terminal to a position where the first vertical distance is equal to the second vertical distance under the condition that the vertical line passing through the shooting object coincides with the longitudinal center line through the numerical comparison result of the first vertical distance and the second vertical distance. The fourth composition indication information may include a value of the first vertical distance and a value of the second vertical distance, and may be text information, or may be information of a symbol indicating a direction (such as an arrow symbol of a counterclockwise rotation arrow, a clockwise rotation arrow, etc.).
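The fourth composition indication rule can be sketched as a hypothetical helper (names are illustrative); it assumes the vertical line through the subject has already been made to coincide with the longitudinal center line, as the text above requires:

```python
def fourth_composition_hint(dist_left: float, dist_right: float) -> str:
    """Return a rotation hint from the vertical distances of the left and
    right vanishing points to the longitudinal center line."""
    if dist_left < dist_right:
        return "rotate counterclockwise"  # left vanishing point is nearer
    if dist_left > dist_right:
        return "rotate clockwise"         # right vanishing point is nearer
    return "aligned"                      # the two distances are equal
```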
For example, referring to fig. 5, fig. 5 is a schematic diagram of the process of adjusting the terminal position through the distances between the vanishing points and the longitudinal center line in one embodiment, in which the upper half represents the image before the terminal is adjusted and the lower half represents the image after adjustment. As can be seen from the image before adjustment in fig. 5, the first vertical distance A of the left vanishing point from the longitudinal center line is smaller than the second vertical distance B of the right vanishing point from the longitudinal center line. Therefore, fourth composition indication information for keeping the vertical line passing through the photographing object coincident with the longitudinal center line and rotating the terminal counterclockwise can be displayed in the terminal, instructing the user to adjust the terminal to the position where the vertical line coincides with the longitudinal center line and the first vertical distance equals the second vertical distance, resulting in the image shown in the lower half of fig. 5.
In this embodiment, the terminal is first adjusted to the position where the vertical line passing through the shooting object coincides with the longitudinal center line of the preview image; then, by comparing the first vertical distance between the left vanishing point and the longitudinal center line with the second vertical distance between the right vanishing point and the longitudinal center line, the terminal is adjusted to the position where the vertical line and the longitudinal center line coincide and the first vertical distance equals the second vertical distance. This realizes the shooting of a symmetrical image of a two-point perspective image in a three-dimensional scene, overcomes the inaccuracy of symmetrical images shot by rough human estimation, requires no post-processing, reduces the consumption of time and energy, and retains important information in the image.
In one embodiment, the step S106 further includes: if the scene type corresponding to the preview image is a two-dimensional scene, acquiring an interested region to be shot in the preview image as a target image characteristic in the preview image; the step S104 further includes: if the camera shooting mode of the terminal is detected to be a manual shooting mode, acquiring the information of an included angle between the terminal and a plane where a shot object is located according to the region of interest; generating and outputting fifth composition indication information aiming at a shooting object according to the included angle information; the fifth composition indication information is used for indicating a user to adjust the terminal to a position where the terminal is parallel to a plane where the shooting object is located.
Further, in one embodiment, after acquiring the region of interest to be photographed in the preview image, the method further includes: if the camera shooting mode of the terminal is detected to be an automatic shooting mode, shooting an interested image corresponding to the interested region; and inputting the interested image into the trained symmetrical conversion model to obtain a target image corresponding to the interested region.
Wherein the included angle information may include an included angle and an included angle direction, wherein the included angle direction may represent a direction in which the included angle opening is oriented.
Specifically, the region of interest is an area that can be adjusted according to requirements. If the scene type corresponding to the preview image is a two-dimensional scene, whether the camera shooting mode of the terminal is a manual shooting mode or an automatic shooting mode is detected first; if it is the manual shooting mode, the included angle information between the terminal and the plane where the shot object is located is acquired according to the preview image corresponding to the region of interest. It will be understood that when the terminal is not parallel to the plane in which the subject is located, an included angle is formed between the terminal and that plane. For example, as shown in fig. 6a, when the terminal is located in plane 1, the terminal and the plane in which the subject is located form an included angle α whose opening faces to the left, and the corresponding preview image is shown as the 1st image in fig. 6b; when the terminal is located in plane 2, an included angle β is formed whose opening faces to the right, and the corresponding preview image is shown as the 2nd image in fig. 6b; when the terminal is in plane 3, the terminal is parallel to the plane where the shooting object is located, and the corresponding preview image is a regular quadrangle. Fifth composition indication information for the shooting object is generated and output according to the included angle information, and may include: according to the included angle direction and the included angle, instructing the user to rotate the terminal by the included angle in the same direction as the included angle direction. For example, as shown in fig. 6a, when the terminal is in plane 1, the opening of the angle α is directed to the left, so the user is instructed to rotate the terminal to the left, i.e., counterclockwise, to a position parallel to the plane in which the subject is located. If the terminal is in plane 2, since the opening of the included angle β faces to the right, the user is instructed to rotate the terminal to the right, i.e., clockwise, to a position parallel to the plane in which the subject is located.
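The angle-to-instruction mapping for the manual mode can be sketched as a hypothetical helper; the parameter names and the direction labels are illustrative, and in practice the angle would come from the terminal's sensors:

```python
def fifth_composition_hint(angle_deg: float, opening: str) -> str:
    """Map the included angle between the terminal and the subject plane
    to a rotation instruction; 'opening' is the direction the angle faces."""
    if angle_deg == 0:
        return "parallel"  # terminal already parallel to the subject plane
    if opening == "left":
        return f"rotate {angle_deg} degrees counterclockwise"
    return f"rotate {angle_deg} degrees clockwise"
```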
If the camera shooting mode of the terminal is detected to be the automatic shooting mode, after a triggering operation of the user on the shooting button is received, the terminal extracts the region of interest as a single image; at this moment, the center line of the region of interest is a symmetry line. When the image of interest is input into the trained symmetrical conversion model, the image of interest is converted according to the included angle information, acquired by the sensor at shooting time, between the terminal and the plane where the shooting object is located, and the obtained image is a symmetrical image corresponding to the image of interest. For example, referring to fig. 7, fig. 7 is a schematic diagram of the conversion process of the image of interest in the automatic shooting mode; the upper half of fig. 7 represents the image before conversion and the lower half represents the image after conversion. As can be seen from the image before conversion in fig. 7, the captured image is an inverted trapezoid because of the included angle between the terminal and the plane where the shooting object is located; therefore, the image in the upper half of fig. 7 can be input into the trained symmetrical conversion model to obtain the converted image shown in the lower half of fig. 7, which is presented as a regular quadrangle.
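The symmetrical conversion in this application is performed by a trained model; purely as a classical point of reference, a comparable rectification can be expressed as a planar homography that maps the four corners of the captured trapezoid back to a rectangle. The sketch below (pure NumPy, direct linear transform) is illustrative and is not the patent's implementation:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography H mapping four src points to four dst
    points via the direct linear transform (DLT)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null vector of the 8x9 linear system.
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

def apply_homography(h, pt):
    """Apply homography h to a 2D point via homogeneous coordinates."""
    x, y, w = h @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)
```

Mapping the corners of an inverted trapezoid (like the upper half of fig. 7) to the corners of a rectangle, and warping every pixel through the resulting H, yields a regular quadrangle comparable to the lower half of fig. 7.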
It should be noted that this embodiment takes a regular quadrilateral region of interest only as an example and is not limited thereto; the region of interest may have other shapes. For example, if the region of interest is circular, the captured image of interest may be approximately elliptical because of the included angle between the terminal and the plane where the subject is located, so the elliptical image of interest needs to be converted to obtain the corresponding circular image of interest.
In this embodiment, when the scene type corresponding to the preview image is a two-dimensional scene, different processing methods are adopted for the manual shooting mode and the automatic shooting mode, which improves the diversity of the image shooting method in a two-dimensional scene. In the manual shooting mode, the terminal is adjusted according to the included angle information between the terminal and the plane where the shooting object is located, so that the terminal becomes parallel to that plane, realizing the shooting of a symmetrical image in the manual shooting mode in a two-dimensional scene. In the automatic shooting mode, the symmetrical conversion model is trained in advance so that, after the image of interest corresponding to the region of interest is obtained, it is converted into a symmetrical image, realizing the acquisition of a symmetrical image in the automatic shooting mode in a two-dimensional scene. Both overcome the inaccuracy of symmetrical images shot by rough human estimation, require no post-processing, reduce the consumption of time and energy, and retain important information in the image. Furthermore, by using customizable regions of interest, more flexibility and accuracy are provided in capturing a photographic subject in a particular scene.
In another embodiment, as shown in fig. 8, there is provided an image photographing method including the steps of:
step S802, detecting a current shooting mode of a terminal;
step S804, if the current shooting mode is detected to be a symmetrical image shooting mode, performing scene detection processing on the preview image captured by the terminal and aiming at the shooting object to obtain a scene type corresponding to the preview image;
step S806, if the scene type corresponding to the preview image is a three-dimensional scene, detecting vanishing points and vanishing lines in the preview image as target image features in the preview image;
Step S808, determining the perspective type corresponding to the preview image according to the number of vanishing points obtained by detection; generating and outputting composition indication information aiming at a shooting object according to the perspective type;
Step S810, if the scene type corresponding to the preview image is a two-dimensional scene, acquiring an interested region to be shot in the preview image as a target image feature in the preview image;
step S812, detecting a camera shooting mode of the terminal, and generating and outputting composition indication information for a shooting object according to the detected camera shooting mode;
step S814, instruct the user to adjust the terminal to a corresponding photographing position and photograph a symmetrical image of the photographing object through the composition instruction information.
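Steps S802 to S814 can be summarized by the following hypothetical dispatcher; the strings and parameter names are illustrative labels for the branches described above, not actual interface names:

```python
def composition_branch(scene_type: str, camera_mode: str = "manual",
                       num_vanishing_points: int = 0) -> str:
    """Select the composition-guidance branch, mirroring steps S802-S814."""
    if scene_type == "3D":
        if num_vanishing_points == 1:
            return "single-point: guide by vanishing point or vanishing lines"
        if num_vanishing_points == 2:
            return "two-point: guide by vertical line and vanishing-point distances"
        return "no guidance"
    if scene_type == "2D":
        if camera_mode == "manual":
            return "manual: guide by included angle with the subject plane"
        return "auto: extract ROI and apply symmetrical conversion model"
    return "no guidance"
```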
According to the image shooting method, through the set symmetrical image shooting function, when an image is shot, corresponding composition indication information is generated according to the scene type and the target image characteristics corresponding to the preview image, so that the user is instructed to adjust the terminal to the position for shooting a symmetrical image of the shot object. The acquisition of the symmetrical image is thus completed directly at capture time, without obtaining it through post-image processing after shooting, thereby overcoming the defects of the traditional post-processing approach, which consumes time and labor, involves complex operation, and easily loses important information in the image.
In one embodiment, to facilitate understanding of embodiments of the application by those skilled in the art, a specific example will be described below in conjunction with the accompanying drawings. Referring to fig. 9, a complete flow diagram of an image capturing method is shown. The image comprises a processing method for the case that the scene type corresponding to the preview image is a three-dimensional scene and a processing method for the case that the scene type corresponding to the preview image is a two-dimensional scene.
Firstly, scene detection is carried out on a preview image, if the scene type corresponding to the preview image is judged to be a three-dimensional scene, a vanishing point theory is adopted to obtain a symmetrical image, and the method comprises the following steps:
(1) And determining a shooting object or a region of interest of the main body, and performing perspective detection on a preview image of the shooting object.
(2A) If the perspective type corresponding to the preview image is single-point perspective, determining a vanishing line and a vanishing point of the preview image, and adjusting the terminal according to the vanishing line or the vanishing point.
The process of adjusting the terminal according to the vanishing point comprises the following steps: if the auxiliary line passing through the vanishing point is overlapped with the central line (namely, the longitudinal central line) of the preview image, the shooting position of the terminal is not required to be adjusted, and the shooting object can be directly shot; if the auxiliary line is not coincident with the central line, indicating the user to move the terminal leftwards if the vanishing point is positioned at the left side of the central line; if the vanishing point is positioned on the right side of the central line, the user is instructed to move the terminal rightward, and finally the terminal is adjusted to the position where the auxiliary line coincides with the central line.
The process of adjusting the terminal according to the vanishing line comprises the following steps: if the numbers of vanishing lines at two sides of the center line are equal, the shooting position of the terminal is not required to be adjusted, and the shooting object can be directly shot; if the numbers of vanishing lines on the two sides of the central line are not equal, indicating the user to move the terminal leftwards if the numbers of vanishing lines on the left side of the central line are more; if the number of vanishing lines on the right side of the center line is more, indicating the user to move the terminal rightward; and finally, adjusting the terminal to a position with the same number of vanishing lines at the left side of the central line and the right side of the central line.
(2B) If the perspective type corresponding to the preview image is two-point perspective, determining a vertical line and a vanishing point passing through the shooting object, and adjusting the terminal according to the vertical line and the vanishing point passing through the shooting object.
The process of adjusting the terminal according to a vertical line passing through the photographing object includes: if the vertical line coincides with the central line, the shooting position of the terminal does not need to be adjusted, and the shooting object can be directly shot; if the vertical line is not coincident with the central line, indicating the user to move the terminal leftwards if the vertical line is positioned at the left side of the central line; if the vertical line is positioned on the right side of the central line, the user is instructed to move the terminal rightward, and finally the terminal is adjusted to the position where the vertical line coincides with the central line.
Then judging whether the distances between the vanishing points and the center line are equal, if not, if the distances between the left vanishing point and the center line are larger, and under the condition of keeping the vertical line coincident with the center line, indicating the user to rotate the terminal clockwise; if the distance between the right vanishing point and the central line is larger, under the condition of keeping the coincidence of the vertical line and the central line, the user is instructed to rotate the terminal anticlockwise, and finally the terminal is adjusted to a position where the vertical line coincides with the central line and the distance between each vanishing point and the central line is equal.
If the scene type corresponding to the preview image is determined to be a two-dimensional scene, a symmetrical image is acquired by adopting the symmetry line concept, and the method comprises the following steps:
(1) And receiving the adjustment operation of the user on the region of interest, determining the region of interest, and then detecting whether the terminal is parallel to the 2D plane where the shooting object is located.
(2) If not, adopting a corresponding processing method according to different camera shooting modes.
(3A) If the camera shooting mode is an automatic shooting mode, extracting the region of interest, inputting the extracted image of interest into the trained symmetrical conversion model, and performing post-processing and angle correction to obtain the processed image of interest, namely the symmetrical image corresponding to the extracted region of interest.
(3B) If the camera shooting mode is a manual shooting mode, the user is instructed to rotate the terminal to a plane parallel to a 2D plane where the shooting object is located, and the region of interest is extracted to obtain a symmetrical image corresponding to the region of interest.
It should be understood that, although the steps in the flowcharts of fig. 1 and figs. 8-9 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution order of these steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in fig. 1 and figs. 8-9 may include sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments; the execution order of these sub-steps or stages is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 10, there is provided an image photographing apparatus including: a scene detection module 1002, a feature determination module 1004, an information generation module 1006, and an image acquisition module 1008, wherein:
The scene detection module 1002 is configured to perform scene detection processing on a preview image captured by the terminal and directed against a shooting object, to obtain a scene type corresponding to the preview image;
A feature determining module 1004, configured to determine a target image feature in the preview image according to a scene type corresponding to the preview image;
An information generating module 1006 for generating and outputting composition indication information for a shooting subject according to the target image characteristics; the composition indication information is used for indicating a user to adjust the shooting position of the terminal to a composition position, and an image shot when the terminal is positioned at the composition position accords with a preset composition rule aiming at the shooting object;
And the image acquisition module 1008 is used for acquiring a target image shot by the terminal at the composition position.
In one embodiment, the above-mentioned scene detection module 1002 is specifically configured to obtain a depth image corresponding to the preview image, and obtain a depth value of each pixel point in an image area where the shooting object is located in the depth image; obtaining the difference value between the depth values of two adjacent pixel points in the image area; and determining the scene type corresponding to the preview image according to the difference value.
In one embodiment, the feature determining module 1004 is specifically configured to detect, if the scene type corresponding to the preview image is a three-dimensional scene, a vanishing point and a vanishing line in the preview image as the target image feature in the preview image; the information generating module 1006 is configured to obtain the number of vanishing points detected in the preview image; determining the perspective type corresponding to the preview image according to the number of vanishing points; composition indication information for a photographic subject is generated and output according to the perspective type.
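The vanishing-point count that drives the perspective-type decision can be sketched as follows; representing each vanishing line by homogeneous coefficients, and grouping the lines into scene-parallel families, are assumptions of this sketch rather than details from the embodiments:

```python
import numpy as np

def vanishing_point(family):
    """Vanishing point of one family of scene-parallel vanishing lines.

    family: at least two lines given as (a, b, c) with a*x + b*y + c = 0 in
    image coordinates. Returns the (x, y) intersection of the first two
    lines, or None when they remain parallel in the image (no finite
    vanishing point). A real detector would fit all lines in the family.
    """
    p = np.cross(np.asarray(family[0], float), np.asarray(family[1], float))
    if abs(p[2]) < 1e-9:
        return None
    return p[:2] / p[2]

def perspective_type(families):
    # Count finite vanishing points: one -> single-point perspective,
    # two -> two-point perspective, matching the mapping in the embodiments.
    points = [vanishing_point(f) for f in families]
    points = [p for p in points if p is not None]
    return len(points), {1: "single-point", 2: "two-point"}.get(len(points), "other")

# Corridor edges converging at the image centre: one vanishing point.
corridor = [[(1, -1, 0), (1, 1, 0)]]
# A box seen corner-on: two line families, vanishing left and right.
box = [[(1, -1, 100), (1, 1, 100)], [(1, -1, -100), (1, 1, -100)]]
print(perspective_type(corridor))  # (1, 'single-point')
print(perspective_type(box))       # (2, 'two-point')
```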
In one embodiment, the information generating module 1006 is further configured to display, in the preview image, an auxiliary line passing through the vanishing point and a longitudinal center line if the perspective type corresponding to the preview image is single-point perspective; single-point perspective indicates that there is one vanishing point in the preview image; the auxiliary line is parallel to the longitudinal center line; generating and outputting first composition indication information for the shooting object according to the relative position relation between the vanishing point and the longitudinal center line; the first composition indication information is used for indicating the user to adjust the terminal to a position where the auxiliary line coincides with the longitudinal center line.
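The first composition indication can be sketched as a comparison between the vanishing point's horizontal coordinate and the centre line; the pixel tolerance, the message strings, and the left/right direction mapping are all illustrative assumptions, since the embodiments only require that the indication guide the user until the two lines coincide:

```python
def single_point_guidance(vp_x, image_width, tol_px=5.0):
    """First composition indication (sketch): compare the auxiliary line
    through the vanishing point with the longitudinal centre line and tell
    the user which way to adjust until the two coincide."""
    offset = vp_x - image_width / 2.0  # signed distance to the centre line
    if abs(offset) <= tol_px:
        return "hold: auxiliary line coincides with the centre line"
    return "move right" if offset > 0 else "move left"

print(single_point_guidance(320, 640))  # hold: auxiliary line coincides with the centre line
print(single_point_guidance(400, 640))  # move right
```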
In one embodiment, the information generating module 1006 is further configured to display each vanishing line and the longitudinal center line of the preview image if the perspective type corresponding to the preview image is single-point perspective; the vanishing lines are used for determining the vanishing point of the preview image; generating and outputting second composition indication information for the shooting object according to the relative position relation between the vanishing lines and the longitudinal center line; the second composition indication information is used for indicating the user to adjust the terminal to a position having an equal number of vanishing lines on both sides of the longitudinal center line.
In one embodiment, the information generating module 1006 is further configured to display, in the preview image, a vertical line passing through the photographed object and a longitudinal center line if the perspective type corresponding to the preview image is two-point perspective; two-point perspective indicates that there are two vanishing points in the preview image; generating and outputting third composition indication information for the shooting object according to the relative position relation between the vertical line and the longitudinal center line; the third composition indication information is used for indicating the user to adjust the terminal to a position where the vertical line and the longitudinal center line coincide.
In one embodiment, the information generating module 1006 is further configured to determine a location of each vanishing point in the preview image; acquiring a first vertical distance between a left vanishing point and a longitudinal center line and a second vertical distance between a right vanishing point and the longitudinal center line in a preview image; generating and outputting fourth composition indication information aiming at a shooting object according to the numerical comparison result of the first vertical distance and the second vertical distance; the fourth composition indication information is used for indicating a user to adjust the terminal to a position where the vertical line and the longitudinal center line coincide, and the first vertical distance and the second vertical distance are equal.
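The third and fourth composition indications combine into a two-stage check, sketched below; the tolerance, the message strings, and the rotation-direction mapping are illustrative assumptions, since the embodiments only require that the vertical line coincide with the centre line and that the two vertical distances become equal:

```python
def two_point_guidance(subject_x, left_vp_x, right_vp_x, image_width, tol_px=5.0):
    """Fourth composition indication (sketch) for two-point perspective.

    Stage 1: the vertical line through the subject must coincide with the
    longitudinal centre line. Stage 2: the left and right vanishing points
    must be equidistant from that centre line.
    """
    center_x = image_width / 2.0
    if abs(subject_x - center_x) > tol_px:
        return "align the subject's vertical line with the centre line"
    d1 = abs(center_x - left_vp_x)   # first vertical distance
    d2 = abs(right_vp_x - center_x)  # second vertical distance
    if abs(d1 - d2) <= tol_px:
        return "hold: composition position reached"
    return "rotate left" if d1 > d2 else "rotate right"

print(two_point_guidance(320, 120, 520, 640))  # hold: composition position reached
print(two_point_guidance(320, 20, 520, 640))   # rotate left
```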
In one embodiment, the feature determining module 1004 is further configured to obtain, if the scene type corresponding to the preview image is a two-dimensional scene, a region of interest to be photographed in the preview image as the target image feature in the preview image; the information generating module 1006 is further configured to, if it is detected that the camera shooting mode of the terminal is a manual shooting mode, obtain, according to the region of interest, information on the included angle between the terminal and the plane where the shooting object is located; and to generate and output fifth composition indication information for the shooting object according to the included angle information; the fifth composition indication information is used for indicating the user to adjust the terminal to a position where the terminal is parallel to the plane where the shooting object is located.
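The embodiments do not disclose how the included angle is derived from the region of interest; one common geometric cue, sketched below purely as an assumption, is the keystone distortion of the region's opposite edges, which vanishes when the terminal is parallel to the subject plane:

```python
def parallelism_guidance(left_edge_px, right_edge_px, tol_px=2.0):
    """Fifth composition indication (hypothetical sketch): when the terminal
    is parallel to the plane of a flat subject, the region of interest's left
    and right edges project to equal pixel lengths; unequal edges (keystone)
    reveal a tilt toward the side with the longer edge."""
    diff = left_edge_px - right_edge_px
    if abs(diff) <= tol_px:
        return "hold: terminal parallel to the subject plane"
    return "tilt away from the left side" if diff > 0 else "tilt away from the right side"

print(parallelism_guidance(200, 200))  # hold: terminal parallel to the subject plane
print(parallelism_guidance(220, 200))  # tilt away from the left side
```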
In one embodiment, the apparatus further includes an image conversion module, configured to capture an image of interest corresponding to the region of interest if it is detected that the camera capturing mode of the terminal is an automatic capturing mode; and inputting the interested image into the trained symmetrical conversion model to obtain a target image corresponding to the interested region.
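The automatic-mode flow can be sketched as follows; the mirror-blend `symmetrize` function is only a stand-in for the trained symmetrical conversion model, whose architecture the embodiments do not specify, and the `(x, y, w, h)` region layout is likewise an assumption:

```python
import numpy as np

def symmetrize(roi_image):
    # Stand-in for the trained symmetrical conversion model: blends the
    # crop with its left-right mirror to yield a symmetrical target image.
    return (roi_image + roi_image[:, ::-1]) / 2.0

def auto_capture(preview, roi, model=symmetrize):
    """Automatic shooting mode: crop the region of interest from the preview
    and pass the resulting image of interest through the conversion model."""
    x, y, w, h = roi  # region of interest in pixel coordinates
    return model(preview[y:y + h, x:x + w])

preview = np.arange(36, dtype=float).reshape(6, 6)
target = auto_capture(preview, (1, 1, 4, 4))
print(target.shape)  # (4, 4)
```

The output of the stand-in model is left-right symmetric by construction, mirroring the "symmetrical image" the embodiment's conversion model is trained to produce.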
In one embodiment, the composition position is a symmetrical image composition position; the image acquisition module 1008 is specifically configured to acquire a symmetrical image captured by the terminal at the symmetrical image composition position.
It should be noted that the image capturing device of the present application corresponds one-to-one with the image capturing method of the present application, and the technical features and beneficial effects described in the embodiments of the image capturing method are equally applicable to the embodiments of the image capturing device. For specific content, reference may be made to the description in the method embodiments of the present application, which is not repeated here.
In addition, each of the modules in the image photographing device described above may be implemented in whole or in part by software, hardware, or a combination thereof. Each of the above modules may be embedded, in hardware form, in or independent of a processor in the computer device, or may be stored, in software form, in a memory of the computer device, so that the processor can call and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a terminal, and whose internal structure may be as shown in FIG. 11. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless mode may be implemented through Wi-Fi, an operator network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements an image shooting method. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device may be a touch layer covering the display screen, or keys, a trackball, or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad, mouse, or the like.
It will be appreciated by those skilled in the art that the structure shown in FIG. 11 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
Those skilled in the art will appreciate that implementing all or part of the methods described above may be accomplished by a computer program stored on a non-transitory computer-readable storage medium, which, when executed, may include the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, or the like. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration, and not limitation, RAM can take various forms such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered within the scope of this description.
The above examples illustrate only a few embodiments of the application, which are described in detail but are not to be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the spirit of the application, all of which fall within the protection scope of the application. Accordingly, the scope of protection of the present application shall be determined by the appended claims.

Claims (12)

1. An image capturing method, the method comprising:
Performing scene detection processing on a preview image captured by a terminal and aiming at a shooting object to obtain a scene type corresponding to the preview image; the scene type comprises a three-dimensional scene and a two-dimensional scene;
Determining target image characteristics in the preview image according to the scene type corresponding to the preview image;
generating and outputting composition indication information aiming at the shooting object according to the target image characteristics; the composition indication information is used for indicating a user to adjust the shooting position of the terminal to a composition position, and an image shot when the terminal is positioned at the composition position accords with a preset composition rule aiming at the shooting object;
acquiring a target image shot by the terminal at the composition position;
If the scene type corresponding to the preview image is a three-dimensional scene, detecting the perspective type corresponding to the preview image, and generating and outputting composition indication information aiming at the shooting object according to the perspective type;
Further comprises: determining the position of each vanishing point in the preview image under the condition that the perspective type is two-point perspective and the vertical line passing through the shooting object in the preview image is coincident with the longitudinal center line of the preview image; acquiring a first vertical distance between a left vanishing point and the longitudinal center line and a second vertical distance between a right vanishing point and the longitudinal center line in the preview image; generating and outputting fourth composition indication information aiming at the shooting object according to the numerical comparison result of the first vertical distance and the second vertical distance; the fourth composition indication information is used for indicating a user to adjust the terminal to a position where the vertical line coincides with the longitudinal center line and the first vertical distance is equal to the second vertical distance; the two-point perspective indicates that the vanishing points in the preview image are two; the vanishing point represents a visual intersection of a plurality of parallel lines in the perspective view, each parallel line used to determine the vanishing point being referred to as a vanishing line.
2. The method according to claim 1, wherein the performing scene detection processing on the preview image of the shooting object captured by the terminal to obtain a scene type corresponding to the preview image includes:
Acquiring a depth image corresponding to the preview image;
Acquiring depth values of all pixel points in an image area where the shooting object is located in the depth image, and determining difference values between the depth values of two adjacent pixel points in the image area;
and determining the scene type corresponding to the preview image according to the difference value.
3. The method according to claim 1, wherein the determining the target image feature in the preview image according to the scene type corresponding to the preview image includes:
if the scene type corresponding to the preview image is a three-dimensional scene, detecting vanishing points and vanishing lines in the preview image as target image features in the preview image;
The generating and outputting composition indication information for the shooting object according to the target image features includes:
acquiring the number of vanishing points detected in the preview image;
Determining the perspective type corresponding to the preview image according to the number of vanishing points;
and generating and outputting composition indication information aiming at the shooting object according to the perspective type.
4. A method according to claim 3, wherein the generating and outputting composition indication information for the photographic subject according to the perspective type includes:
If the perspective type corresponding to the preview image is single-point perspective, displaying an auxiliary line and a longitudinal center line which pass through the vanishing point in the preview image; the single-point perspective is used for indicating that vanishing points in the preview image are one; the auxiliary line is parallel to the longitudinal centerline;
generating and outputting first composition indication information for the shooting object according to the relative position relation between the vanishing point and the longitudinal center line; the first composition indication information is used for indicating a user to adjust the terminal to a position where the auxiliary line coincides with the longitudinal center line.
5. The method according to claim 3, wherein the generating and outputting composition indication information for the photographic subject according to the perspective type further comprises:
If the perspective type corresponding to the preview image is single-point perspective, displaying all vanishing lines and longitudinal center lines of the preview image; the vanishing line is used for determining vanishing points of the preview image;
generating and outputting second composition indication information for the shooting object according to the relative position relation between the vanishing line and the longitudinal center line; the second composition indication information is used for indicating a user to adjust the terminal to a position with an equal number of vanishing lines on both sides of the longitudinal center line.
6. The method according to claim 3, wherein the generating and outputting composition indication information for the photographic subject according to the perspective type further comprises:
if the perspective type corresponding to the preview image is two-point perspective, displaying, in the preview image, a vertical line passing through the shooting object and a longitudinal center line;
Generating and outputting third composition indication information for the photographing object according to a relative positional relationship between the vertical line and the longitudinal center line; the third composition indication information is used for indicating a user to adjust the terminal to a position where the vertical line and the longitudinal center line coincide.
7. The method of claim 1, wherein determining the target image feature in the preview image according to the scene type corresponding to the preview image further comprises:
If the scene type corresponding to the preview image is a two-dimensional scene, acquiring an interested region to be shot in the preview image as a target image feature in the preview image;
the generating and outputting composition indication information for the shooting object according to the target image features further includes:
If the camera shooting mode of the terminal is detected to be a manual shooting mode, acquiring information on the included angle between the terminal and the plane where the shooting object is located according to the region of interest;
Generating and outputting fifth composition indication information aiming at the shooting object according to the included angle information; the fifth composition indication information is used for indicating a user to adjust the terminal to a position where the terminal is parallel to a plane where the shooting object is located.
8. The method of claim 7, further comprising, after acquiring the region of interest to be photographed in the preview image:
if the camera shooting mode of the terminal is detected to be an automatic shooting mode, shooting an interested image corresponding to the interested region;
And inputting the interested image into a trained symmetrical conversion model to obtain a target image corresponding to the interested region.
9. The method of claim 1, wherein the composition position is a symmetrical image composition position; the obtaining the target image shot by the terminal at the composition position comprises the following steps:
and acquiring the symmetrical image shot by the terminal at the symmetrical image composition position.
10. An image capturing apparatus, the apparatus comprising:
The scene detection module is used for carrying out scene detection processing on a preview image captured by the terminal and aiming at a shooting object to obtain a scene type corresponding to the preview image; the scene type comprises a three-dimensional scene and a two-dimensional scene;
the feature determining module is used for determining target image features in the preview image according to the scene types corresponding to the preview image;
The information generation module is used for generating and outputting composition indication information aiming at the shooting object according to the target image characteristics; the composition indication information is used for indicating a user to adjust the shooting position of the terminal to a composition position, and an image shot when the terminal is positioned at the composition position accords with a preset composition rule aiming at the shooting object;
the image acquisition module is used for acquiring a target image shot by the terminal at the composition position;
The information generation module is further configured to detect a perspective type corresponding to the preview image if the scene type corresponding to the preview image is a three-dimensional scene, and generate and output composition indication information for the shooting object according to the perspective type;
The information generating module is further configured to determine a position of each vanishing point in the preview image when the perspective type is two-point perspective and a vertical line passing through the shooting object in the preview image is coincident with a longitudinal center line of the preview image; acquiring a first vertical distance between a left vanishing point and the longitudinal center line and a second vertical distance between a right vanishing point and the longitudinal center line in the preview image; generating and outputting fourth composition indication information aiming at the shooting object according to the numerical comparison result of the first vertical distance and the second vertical distance; the fourth composition indication information is used for indicating a user to adjust the terminal to a position where the vertical line coincides with the longitudinal center line and the first vertical distance is equal to the second vertical distance; the two-point perspective indicates that the vanishing points in the preview image are two; the vanishing point represents a visual intersection of a plurality of parallel lines in the perspective view, each parallel line used to determine the vanishing point being referred to as a vanishing line.
11. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 9 when the computer program is executed.
12. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 9.
CN202110370466.0A 2021-04-07 2021-04-07 Image shooting method, device, computer equipment and storage medium Active CN115190236B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110370466.0A CN115190236B (en) 2021-04-07 2021-04-07 Image shooting method, device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN115190236A CN115190236A (en) 2022-10-14
CN115190236B true CN115190236B (en) 2024-05-28

Family

ID=83511972

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110370466.0A Active CN115190236B (en) 2021-04-07 2021-04-07 Image shooting method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115190236B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024195182A1 (en) * 2023-03-23 2024-09-26 パナソニックIpマネジメント株式会社 Information processing device, information processing method, and information processing program

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003040036A (en) * 2001-07-31 2003-02-13 Mitsubishi Motors Corp Vehicle-circumstances viewing apparatus
US8897600B1 (en) * 2013-12-20 2014-11-25 I.R.I.S. Method and system for determining vanishing point candidates for projective correction
JPWO2013030932A1 (en) * 2011-08-29 2015-03-23 パイオニア株式会社 Navigation device, image display control device, server, adjustment device, and front image display control method
CN105389536A (en) * 2014-08-22 2016-03-09 现代摩比斯株式会社 Lane detection apparatus and operating method for the same
CN105933606A (en) * 2016-05-24 2016-09-07 青岛海信移动通信技术股份有限公司 Photographing method, device and mobile terminal
CN107509032A (en) * 2017-09-08 2017-12-22 维沃移动通信有限公司 One kind is taken pictures reminding method and mobile terminal
CN107547789A (en) * 2016-06-24 2018-01-05 聚晶半导体股份有限公司 The method of video capturing device and its photography composition
CN107680138A (en) * 2017-09-21 2018-02-09 珠海中视科技有限公司 Interactive photography machine scaling method based on facial image
CN108769419A (en) * 2018-06-04 2018-11-06 Oppo(重庆)智能科技有限公司 Photographic method, mobile terminal and computer readable storage medium
CN108810418A (en) * 2018-07-16 2018-11-13 Oppo广东移动通信有限公司 Image processing method, device, mobile terminal and computer readable storage medium
CN110086991A (en) * 2019-04-28 2019-08-02 深圳回收宝科技有限公司 Image pickup method, device, portable terminal and the storage medium of terminal detection image
CN112135047A (en) * 2020-09-23 2020-12-25 努比亚技术有限公司 Image processing method, mobile terminal and computer storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013030932A (en) * 2011-07-27 2013-02-07 Denso Corp Communication system and sub-master node used for the communication system
JP2018165912A (en) * 2017-03-28 2018-10-25 アイシン精機株式会社 Support apparatus


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
K. Bitsakos, et al. Bilateral symmetry of object silhouettes under perspective projection. 2008 19th International Conference on Pattern Recognition. 2009, full text. *
Research on Key Technologies of Machine-Vision-Based Advanced Driver Assistance Systems; Fan Yanjun; China Doctoral Dissertations Full-text Database, Engineering Science and Technology II; full text *

Also Published As

Publication number Publication date
CN115190236A (en) 2022-10-14

Similar Documents

Publication Publication Date Title
US10915998B2 (en) Image processing method and device
US20190228263A1 (en) Training assistance using synthetic images
US20190089910A1 (en) Dynamic generation of image of a scene based on removal of undesired object present in the scene
US20130051673A1 (en) Portable electronic and method of processing a series of frames
US20120194636A1 (en) Information processing apparatus, information processing method, program, and imaging apparatus
CN111462503B (en) Vehicle speed measuring method and device and computer readable storage medium
US9361731B2 (en) Method and apparatus for displaying video on 3D map
CN106997579B (en) Image splicing method and device
CN111915483B (en) Image stitching method, device, computer equipment and storage medium
US20200027222A1 (en) Panoramic photographing method for unmanned aerial vehicle and unmanned aerial vehicle using the same
WO2018040180A1 (en) Photographing method and apparatus
US10482571B2 (en) Dual fisheye, hemispherical image projection and stitching method, device and computer-readable medium
US20220358619A1 (en) Automatic dolly zoom image processing device
CN110361005A (en) Positioning method, positioning device, readable storage medium and electronic equipment
KR101703013B1 (en) 3d scanner and 3d scanning method
KR20120066567A (en) Image processing device and program
CN115190236B (en) Image shooting method, device, computer equipment and storage medium
KR100934211B1 (en) How to create a panoramic image on a mobile device
KR101597915B1 (en) Image processing apparatus and image processing method
CN111654624B (en) Shooting prompting method and device and electronic equipment
US20200137326A1 (en) Applying an annotation to an image based on keypoints
CN113034347A (en) Oblique photographic image processing method, device, processing equipment and storage medium
Ha et al. Embedded panoramic mosaic system using auto-shot interface
EP3216005A1 (en) Image processing device and method for geometric calibration of images
JP2018125658A (en) Portable information processing device having camera function, display control method thereof, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant