
CN111402136B - Panorama generation method and device, computer readable storage medium and electronic equipment - Google Patents


Info

Publication number
CN111402136B
CN111402136B (application CN202010196117.7A)
Authority
CN
China
Prior art keywords
image, images, mapping, determining, panorama
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010196117.7A
Other languages
Chinese (zh)
Other versions
CN111402136A (en)
Inventor
饶童
杨永林
陈昱彤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
You Can See (Beijing) Technology Co., Ltd.
Original Assignee
You Can See (Beijing) Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by You Can See (Beijing) Technology Co., Ltd.
Priority to CN202010196117.7A
Publication of CN111402136A
Priority to US17/200,659 (published as US11146727B2)
Priority to US17/383,157 (published as US11533431B2)
Priority to US17/981,056 (published as US20230056036A1)
Application granted
Publication of CN111402136B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 3/4038: Image mosaicing, e.g. composing plane images from plane sub-images (geometric image transformations; scaling of whole images or parts thereof)
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods (image analysis)
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration (image analysis)
    • G06T 2207/10016: Video; image sequence (image acquisition modality)
    • G06T 2207/20221: Image fusion; image merging (image combination)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the disclosure disclose a panorama generation method and apparatus. The method includes: acquiring at least two image sequences for generating a panorama; determining effective images in the at least two image sequences based on the manner in which the image sequences were captured, and determining the connection relationships between the effective images; determining the internal parameters of the camera; determining a camera attitude angle corresponding to each effective image based on the internal parameters and the connection relationships; and mapping the effective images, based on the camera attitude angles, onto a mapping surface centered on the camera to obtain the panorama. According to the embodiments of the disclosure, a panorama with a large field of view can be generated in different manners according to the habits of different users, improving the flexibility and efficiency of panorama generation.

Description

Panorama generation method and device, computer readable storage medium and electronic equipment
Technical Field
The disclosure relates to the field of computer technology, and in particular to a panorama generation method and apparatus, a computer-readable storage medium, and an electronic device.
Background
Currently, panoramas are widely used in VR scenes. In fields such as maps, house rental, and interior decoration, the surrounding environment can be presented to a user in a near-real-scene manner using a panorama. Moreover, a panorama contains a large amount of scene information and can be applied effectively in depth-map estimation algorithms.
In the process of photographing a panorama, the user is generally required to hold the camera in place and rotate it one full turn about the vertical axis; the field of view of the resulting panorama in the vertical direction is therefore limited.
Disclosure of Invention
The present disclosure has been made in order to solve the above technical problems. Embodiments of the present disclosure provide a panorama generating method, apparatus, computer-readable storage medium, and electronic device.
The embodiment of the disclosure provides a panorama generation method, including: acquiring at least two image sequences for generating a panorama, the at least two image sequences including at least one image sequence captured in a discrete manner; determining effective images in the at least two image sequences based on the manner in which they were captured, and determining the connection relationships between the effective images; determining the internal parameters of the camera; determining a camera attitude angle corresponding to each effective image based on the internal parameters and the connection relationships; and mapping the effective images, based on the camera attitude angles, onto a mapping surface centered on the camera to obtain the panorama.
In some embodiments, determining valid images in the at least two image sequences and determining a connection relationship between the valid images based on a manner of capturing the at least two image sequences includes: and in response to determining that each image sequence in the at least two image sequences is shot in a transverse discrete mode, determining that each image in the image sequences is a valid image for each image sequence in the at least two image sequences, and determining the connection relation between the images in the image sequences through feature extraction and feature matching.
In some embodiments, mapping the effective images to a camera-centered mapping surface based on the camera attitude angles to obtain a panorama includes: adjusting the yaw angles of vertically arranged images to be consistent; for each image sequence in the at least two image sequences, mapping each image in the image sequence to the mapping surface based on the camera attitude angle of each image after the yaw-angle adjustment, to obtain a sub-panorama corresponding to the image sequence; determining the features of each obtained sub-panorama; and merging the sub-panoramas based on their features to obtain a final panorama.
In some embodiments, determining valid images in the at least two image sequences and determining a connection relationship between the valid images based on a manner of capturing the at least two image sequences includes: in response to determining that each of the at least two image sequences is captured in a longitudinal discrete manner, determining, for each of the at least two image sequences, a mapping relationship between a target image and other images in the image sequence; based on the mapping relation, fusing other images to the target image to obtain a fused image corresponding to the image sequence as an effective image; and determining the connection relation between the obtained fusion images through feature extraction and feature matching.
In some embodiments, mapping the effective images to a camera-centered mapping surface based on the camera attitude angles to obtain a panorama includes: mapping each fused image to the mapping surface to obtain a sub-panorama corresponding to that fused image; determining the features of each obtained sub-panorama; and merging the sub-panoramas based on their features to obtain a final panorama.
In some embodiments, determining valid images in the at least two image sequences and determining a connection relationship between the valid images based on a manner of capturing the at least two image sequences includes: in response to determining that a first image sequence in at least two image sequences is obtained by shooting in a transverse discrete mode, and other image sequences are obtained by shooting in a transverse continuous mode, determining that each image included in the first image sequence is a valid image; determining a connection relation between each image in the first image sequence through feature extraction and feature matching; for each image sequence in other image sequences, determining that a key frame image in the image sequence is a valid image, and determining the connection relation between the key frame image and the first image sequence through feature extraction and feature matching.
In some embodiments, mapping the effective image to a camera-centered mapping surface based on the camera pose angle to obtain a panoramic image includes: mapping each image in the first image sequence to a mapping surface to obtain a sub-panorama corresponding to the first image sequence; mapping the key frame image to a mapping surface to obtain a mapping image; determining the characteristics of the mapping image and the sub-panorama; and combining the mapping image and the sub-panorama based on the characteristics of the mapping image and the sub-panorama to obtain a final panorama.
According to another aspect of an embodiment of the present disclosure, there is provided a panorama generating apparatus, including: the acquisition module is used for acquiring at least two image sequences for generating a panoramic image, wherein the at least two image sequences comprise at least one image sequence shot in a discrete mode; the first determining module is used for determining effective images in at least two image sequences based on the shooting modes of the at least two image sequences and determining the connection relation between the effective images; the second determining module is used for determining internal parameters of the camera; the third determining module is used for determining a camera attitude angle corresponding to the effective image based on the internal parameters and the connection relation; and the mapping module is used for mapping the effective image to a mapping surface taking the camera as a center based on the camera attitude angle to obtain a panoramic image.
In some embodiments, the first determination module comprises: a first determining unit for determining, in response to determining that each of the at least two image sequences is captured in a laterally discrete manner, that each image in each image sequence is a valid image, and for determining the connection relationships between the images in the image sequences through feature extraction and feature matching.
In some embodiments, the mapping module comprises: an adjusting unit for adjusting the yaw angles of vertically arranged images to be consistent; a first mapping unit for mapping, for each image sequence, each image in the image sequence to the mapping surface based on the camera attitude angle of each image after the yaw-angle adjustment, to obtain a sub-panorama corresponding to the image sequence; a second determining unit for determining the features of each obtained sub-panorama; and a first merging unit for merging the sub-panoramas based on their features to obtain a final panorama.
In some embodiments, the first determination module comprises: the fusion unit is used for determining the mapping relation between the target image and other images in the image sequences for each image sequence in the at least two image sequences in response to the fact that each image sequence in the at least two image sequences is shot in a longitudinal discrete mode; based on the mapping relation, fusing other images to the target image to obtain a fused image corresponding to the image sequence as an effective image; and a third determining unit for determining the connection relation between the obtained fusion images through feature extraction and feature matching.
In some embodiments, the mapping module comprises: a second mapping unit for mapping each fused image to the mapping surface to obtain a sub-panorama corresponding to that fused image; a fourth determining unit for determining the features of the obtained sub-panoramas; and a second merging unit for merging the sub-panoramas based on their features to obtain a final panorama.
In some embodiments, the first determination module comprises: a fifth determining unit, configured to determine, in response to determining that a first image sequence of the at least two image sequences is obtained by capturing in a laterally discrete manner, and other image sequences are obtained by capturing in a laterally continuous manner, that each image included in the first image sequence is a valid image; determining a connection relation between each image in the first image sequence through feature extraction and feature matching; and a sixth determining unit, configured to determine, for each of the other image sequences, that the key frame image in the image sequence is a valid image, and determine a connection relationship between the key frame image and the first image sequence by feature extraction and feature matching.
In some embodiments, the mapping module comprises: the third mapping unit is used for mapping each image in the first image sequence to a mapping surface to obtain a sub-panorama corresponding to the first image sequence; a fourth mapping unit, configured to map the key frame image to a mapping plane, to obtain a mapped image; a seventh determining unit, configured to determine a feature of the mapping image and the sub-panorama; and the third merging unit is used for merging the mapping image and the sub-panorama based on the characteristics of the mapping image and the sub-panorama to obtain a final panorama.
According to another aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing a computer program for executing the panorama generating method described above.
According to another aspect of an embodiment of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; and the processor is used for reading the executable instructions from the memory and executing the instructions to realize the panorama generating method.
According to the panorama generation method and apparatus, the computer-readable storage medium, and the electronic device provided by the embodiments of the disclosure, effective images are determined from the captured image sequences according to the different capture modes, the connection relationships between the effective images are determined, the camera is calibrated to obtain its internal parameters, the camera attitude angles of the effective images are determined based on the internal parameters and the connection relationships, and finally each effective image is mapped into a panorama based on its camera attitude angle. A panorama with a large field of view can thus be generated in different manners according to the habits of different users, improving the flexibility and efficiency of panorama generation.
The technical scheme of the present disclosure is described in further detail below through the accompanying drawings and examples.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing embodiments thereof in more detail with reference to the accompanying drawings. The accompanying drawings are included to provide a further understanding of embodiments of the disclosure, and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure, without limitation to the disclosure. In the drawings, like reference numerals generally refer to like parts or steps.
Fig. 1 is a system diagram to which the present disclosure is applicable.
Fig. 2 is a flowchart illustrating a panorama generating method according to an exemplary embodiment of the present disclosure.
Fig. 3 is a flowchart illustrating a panorama generating method according to another exemplary embodiment of the present disclosure.
Fig. 4 is an exemplary schematic diagram of a laterally discrete manner of shooting of a panorama generating method of an embodiment of the present disclosure.
Fig. 5 is a flowchart illustrating a panorama generating method according to still another exemplary embodiment of the present disclosure.
Fig. 6 is an exemplary schematic view of a longitudinal discrete manner of photographing of a panorama generating method according to an embodiment of the present disclosure.
Fig. 7 is a flowchart illustrating a panorama generating method according to still another exemplary embodiment of the present disclosure.
Fig. 8 is an exemplary schematic view of photographing in a laterally continuous manner in a panorama generating method according to an embodiment of the present disclosure.
Fig. 9 is a schematic structural view of a panorama generating apparatus according to an exemplary embodiment of the present disclosure.
Fig. 10 is a schematic structural view of a panorama generating apparatus according to another exemplary embodiment of the present disclosure.
Fig. 11 is a block diagram of an electronic device provided in an exemplary embodiment of the present disclosure.
Detailed Description
Hereinafter, example embodiments according to the present disclosure will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present disclosure and not all of the embodiments of the present disclosure, and that the present disclosure is not limited by the example embodiments described herein.
It should be noted that: the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless it is specifically stated otherwise.
It will be appreciated by those of skill in the art that the terms "first," "second," etc. in embodiments of the present disclosure are used merely to distinguish between different steps, devices or modules, etc., and do not represent any particular technical meaning nor necessarily logical order between them.
It should also be understood that in embodiments of the present disclosure, "plurality" may refer to two or more, and "at least one" may refer to one, two or more.
It should also be appreciated that any component, data, or structure referred to in the presently disclosed embodiments may be generally understood as one or more without explicit limitation or the contrary in the context.
In addition, the term "and/or" in this disclosure is merely an association relationship describing an association object, and indicates that three relationships may exist, for example, a and/or B may indicate: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" in the present disclosure generally indicates that the front and rear association objects are an or relationship.
It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the various embodiments, and that the same or similar features may be referred to each other, and for brevity, will not be described in detail.
Meanwhile, it should be understood that the sizes of the respective parts shown in the drawings are not drawn in actual scale for convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, the techniques, methods, and apparatus should be considered part of the specification.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
Embodiments of the present disclosure may be applicable to electronic devices such as terminal devices, computer systems, servers, etc., which may operate with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known terminal devices, computing systems, environments, and/or configurations that may be suitable for use with the terminal device, computer system, server, or other electronic device include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network personal computers, minicomputer systems, mainframe computer systems, and distributed cloud computing technology environments that include any of the above systems, and the like.
Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc., that perform particular tasks or implement particular abstract data types. The computer system/server may be implemented in a distributed cloud computing environment in which tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computing system storage media including memory storage devices.
Summary of the application
Conventional panorama photographing apparatus suffer from a small vertical field of view (VFOV), which degrades depth estimation and limits the viewing angle when the panorama is presented to the end user; it is therefore desirable to expand the vertical field of view when photographing a panorama.
Exemplary System
Fig. 1 illustrates an exemplary system architecture 100 to which the panorama generating method or panorama generating apparatus of an embodiment of the present disclosure may be applied.
As shown in fig. 1, a system architecture 100 may include a terminal device 101, a network 102, and a server 103. Network 102 is a medium used to provide communication links between terminal device 101 and server 103. Network 102 may include various connection types such as wired, wireless communication links, or fiber optic cables, among others.
A user may interact with the server 103 via the network 102 using the terminal device 101 to receive or send messages or the like. The terminal device 101 may have various communication client applications installed thereon, such as a photographing-type application, a map-type application, a three-dimensional model application, and the like.
The terminal device 101 may be various electronic devices including, but not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like.
The server 103 may be a server providing various services, such as a background image processing server processing an image sequence uploaded by the terminal device 101. The background image processing server can process the received image sequence to obtain information such as panoramic images.
It should be noted that, the panorama generating method provided by the embodiment of the present disclosure may be performed by the server 103 or may be performed by the terminal device 101, and accordingly, the panorama generating apparatus may be provided in the server 103 or may be provided in the terminal device 101.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. In the case where the image sequence does not need to be acquired from a remote location, the system architecture described above may not include a network, but only a server or terminal device.
Exemplary method
Fig. 2 is a flowchart illustrating a panorama generating method according to an exemplary embodiment of the present disclosure. The present embodiment is applicable to an electronic device (such as the terminal device 101 or the server 103 shown in fig. 1), and as shown in fig. 2, the method includes the steps of:
at step 201, at least two image sequences for generating a panorama are acquired.
In this embodiment, the electronic device may obtain at least two image sequences for generating the panoramic image locally or remotely. The image sequences may be obtained by photographing surrounding scenes by an integrated camera on the electronic device or a camera connected with the electronic device, and the at least two image sequences include at least one image sequence photographed in a discrete manner. Discrete photographing refers to photographing an image by a camera at a certain position in a certain posture, then changing the posture and/or position of the camera, photographing an image again, and repeating the operation to obtain an image sequence. The arrangement of the images in the image sequence may be either landscape or portrait.
As an example, a row of images arranged laterally may be a sequence of images, or a column of images arranged longitudinally may be a sequence of images.
Step 202, determining effective images in at least two image sequences based on the shooting modes of the at least two image sequences, and determining connection relations among the effective images.
In this embodiment, the electronic device may determine effective images in the at least two image sequences based on the manner in which they were captured, and determine the connection relationships between the effective images. The camera may capture images discretely (i.e., the camera is stopped at a position to take an image) or continuously (e.g., as video). An effective image is an image that is mapped onto a three-dimensional mapping surface to generate the panorama, for example a key frame of a sequence captured as video.
In step 203, the internal parameters of the camera are determined.
In this embodiment, the electronic device may determine the internal parameters of the camera, typically represented by the intrinsic matrix (Camera Intrinsics) K. The internal parameters may be fixed, i.e., known in advance, in which case the electronic device obtains pre-entered values; they may also be obtained through calibration, in which case the electronic device calibrates the camera using the image sequences obtained in step 201. Camera intrinsic calibration is a widely used, well-known technique and is not described in detail here.
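As an illustrative sketch only (the patent does not prescribe a calibration procedure), the following Python/OpenCV snippet recovers the intrinsic matrix K with a standard checkerboard calibration; the board size, image paths, and the use of a checkerboard target are assumptions.

```python
# Illustrative sketch only: the patent does not fix a calibration procedure.
# This uses OpenCV's standard checkerboard calibration to recover the
# intrinsic matrix K; the board size and image paths are assumptions.
import glob
import cv2
import numpy as np

PATTERN = (9, 6)  # inner corners of the assumed checkerboard

# 3D coordinates of the checkerboard corners in the board's own plane (z = 0)
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib/*.jpg"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K is the 3x3 intrinsic matrix; dist holds the lens distortion coefficients
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("intrinsic matrix K:\n", K)
```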
And 204, determining a camera attitude angle corresponding to the effective image based on the internal parameters and the connection relation.
In this embodiment, the electronic device may determine the camera pose angle corresponding to the effective image based on the internal reference and the connection relationship. The camera attitude angle is used for representing the shooting direction of the camera under the three-dimensional coordinate system. The three-dimensional coordinate system may be a rectangular coordinate system established with the camera position as the origin. Attitude angles may include pitch angle (pitch), yaw angle (yaw), roll angle (roll). The pitch angle is used for representing deflection of the optical axis of the camera along the vertical direction, the yaw angle is used for representing deflection of the optical axis of the camera on the horizontal plane, and the roll angle is used for representing the rolling degree of the camera along the optical axis.
The electronic device can determine the camera attitude angle according to various existing methods based on the internal parameters and the connection relationships between the images, for example by minimizing at least one of: photometric error, reprojection error, 3D geometric error, and so on.
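For illustration, the sketch below decomposes a 3x3 camera rotation matrix into the pitch, yaw, and roll angles described above, using a common ZYX (yaw-pitch-roll) convention; the patent does not fix an angle convention, so this ordering is an assumption.

```python
# Hedged sketch: one common ZYX decomposition of a camera rotation matrix R
# into the attitude angles described above.
import numpy as np

def rotation_to_attitude(R: np.ndarray):
    """Return (pitch, yaw, roll) in degrees from a 3x3 rotation matrix."""
    pitch = np.degrees(np.arcsin(-R[2, 0]))           # deflection in the vertical direction
    yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))    # deflection in the horizontal plane
    roll = np.degrees(np.arctan2(R[2, 1], R[2, 2]))   # rotation about the optical axis
    return pitch, yaw, roll
```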
Step 205, mapping the effective image to a mapping surface centered on the camera based on the camera attitude angle, to obtain a panoramic image.
In this embodiment, the electronic device may map the effective images, based on the camera attitude angles, onto a mapping surface centered on the camera to obtain the panorama. Specifically, the panorama can be obtained by mapping onto surfaces of various shapes (e.g., spherical, cylindrical). A reference coordinate system is established in a certain direction with the sphere center (or cylinder axis) as the origin; a conversion relationship, represented by the camera attitude angle, exists between the coordinate system of a planar image captured by the camera and this reference coordinate system, and the attitude angle thus indicates which part of the panorama the planar image maps to. Mapping a two-dimensional image onto a three-dimensional mapping surface is a well-known technique and is not described in detail here.
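The following hedged sketch shows one common way to realize such a mapping: an inverse warp from a spherical (equirectangular) panorama back into the planar image through the intrinsic matrix K and a rotation R derived from the attitude angles. The panorama resolution and the equirectangular layout are illustration choices, not requirements of the patent.

```python
# Hedged sketch of mapping a planar image onto a sphere centered on the
# camera, producing an equirectangular panorama.
import numpy as np
import cv2

def warp_to_equirect(img, K, R, pano_w=2048, pano_h=1024):
    # Longitude/latitude of every panorama pixel
    lon = (np.arange(pano_w) / pano_w - 0.5) * 2 * np.pi
    lat = (0.5 - np.arange(pano_h) / pano_h) * np.pi
    lon, lat = np.meshgrid(lon, lat)

    # Unit ray for each panorama pixel in the world frame
    rays = np.stack([np.cos(lat) * np.sin(lon),
                     -np.sin(lat),                  # y points down, matching image coords
                     np.cos(lat) * np.cos(lon)], axis=-1)

    # Rotate rays into the camera frame (rays @ R applies R.T to each ray),
    # then project through the intrinsic matrix K
    cam = rays @ R
    uv = cam @ K.T
    z = cam[..., 2]
    valid = z > 1e-6                                # rays in front of the camera
    z_safe = np.where(valid, z, 1.0)
    map_x = (uv[..., 0] / z_safe).astype(np.float32)
    map_y = (uv[..., 1] / z_safe).astype(np.float32)
    map_x[~valid] = -1                              # rays behind the camera map nowhere
    map_y[~valid] = -1
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)
```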
According to the method provided by the embodiment of the disclosure, through determining the effective images arranged in different modes from the shot image sequence in different shooting modes, determining the connection relation between the effective images, calibrating the camera to obtain the camera internal parameters, determining the camera attitude angles of the effective images based on the internal parameters and the connection relation, and finally mapping each effective image into a panoramic image based on the camera attitude angles, the panoramic images with large field angles can be generated in different modes according to habits of different users, and the flexibility and efficiency of generating the panoramic images are improved.
With further reference to fig. 3, a flow diagram of yet another embodiment of a panorama generating method is shown. As shown in fig. 3, the panorama generating method includes the steps of:
step 301, at least two image sequences for generating a panorama are acquired.
In this embodiment, step 301 is substantially identical to step 201 in the corresponding embodiment of fig. 2, and will not be described herein.
In step 302, in response to determining that each image sequence in the at least two image sequences is captured in a laterally discrete manner, for each image sequence in the at least two image sequences, determining that each image in the image sequence is a valid image, and determining a connection relationship between images in the image sequences through feature extraction and feature matching.
In this embodiment, the images obtained in the laterally discrete mode may be distributed in at least two rows from top to bottom, each row being an image sequence. As an example, as shown in fig. 4, when the camera is rotated horizontally through 360° in three-dimensional space in the direction of the arrow, three rows of images, i.e., three image sequences, can be obtained, each row corresponding to one pitch angle.
Each image included in each image sequence in this embodiment is a valid image, i.e. the images can be mapped into a panorama.
The electronic device may extract feature points in each image using existing feature-extraction methods. As an example, the feature-extraction algorithm may include, but is not limited to, at least one of: SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), ORB (Oriented FAST and Rotated BRIEF), etc. After the feature points are obtained, they can be matched, and points that represent the same point in space are connected, thereby determining the connection relationships between the images.
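As a minimal sketch of this step, the snippet below uses ORB (one of the algorithms listed above) with brute-force Hamming matching to decide whether two images are connected; the match-count and distance thresholds are assumed values.

```python
# Minimal sketch: declare two images "connected" when they share enough
# good ORB feature matches. Thresholds are assumptions for illustration.
import cv2

def are_connected(img_a, img_b, min_matches=30):
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return False
    # Hamming distance suits ORB's binary descriptors; cross-check for symmetry
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    good = [m for m in matches if m.distance < 50]
    return len(good) >= min_matches
```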
In step 303, the camera's internal parameters are determined.
In this embodiment, step 303 is substantially identical to step 203 in the corresponding embodiment of fig. 2, and will not be described herein.
And step 304, determining a camera attitude angle corresponding to the effective image based on the internal parameters and the connection relation.
In this embodiment, step 304 is substantially identical to step 204 in the corresponding embodiment of fig. 2, and will not be described herein.
In step 305, the yaw angles of the vertically aligned images are adjusted to be uniform.
As shown in fig. 4, the yaw angles of the rows of images may deviate from one another, so that vertically adjacent images are misaligned; by adjusting the yaw angles, vertically adjacent images can be aligned, which improves the accuracy of the generated panorama.
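A minimal sketch of the adjustment follows, assuming the attitude angles are kept in a dictionary keyed by (row, column) with values [pitch, yaw, roll]; this data layout is purely for illustration.

```python
# Hedged sketch: snap the yaw of every image in a vertical stack to the yaw
# of the reference row, so that upper and lower neighbors line up.
def align_yaw(poses, ref_row=0):
    cols = {col for (_, col) in poses}
    for col in cols:
        ref_yaw = poses[(ref_row, col)][1]
        for (row, c), angles in poses.items():
            if c == col:
                angles[1] = ref_yaw  # make the yaw of the whole column uniform
    return poses
```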
Step 306, for each image sequence in the at least two image sequences, mapping each image in the image sequences to a mapping surface based on the camera attitude angle of each image in the image sequences after the yaw angle adjustment, so as to obtain a sub-panorama corresponding to the image sequence.
In this embodiment, for a certain image sequence, based on the camera pose angle of each image in the image sequence, the mapping relationship between the image and the panorama (that is, the position of the pixel point in the image mapped onto the mapping surface of the panorama) may be determined, and according to the mapping relationship, a sub-panorama corresponding to the image sequence may be generated.
Step 307, determining the characteristics of each obtained sub-panorama.
In this embodiment, the electronic device may determine the features of each sub-panorama according to existing feature extraction methods (e.g., the various algorithms described in step 302 above).
Step 308, merging the sub-panoramas based on their features to obtain the final panorama.
In this embodiment, the electronic device may connect the sub-panoramas together based on their features and fuse the pixels of the connected sub-panoramas, thereby obtaining the final panorama. As an example, the color values of pixels representing the same three-dimensional spatial point in two interconnected sub-panoramas may be averaged (or weighted and summed with other weights) to obtain the color value of the corresponding pixel in the final panorama.
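A hedged sketch of the averaging-based fusion described above, assuming the sub-panoramas are already aligned on a common canvas and that empty pixels are zero-valued; both assumptions are illustration choices, not patent requirements.

```python
# Hedged sketch: average overlapping pixels of two aligned sub-panoramas;
# elsewhere the non-empty sub-panorama wins.
import numpy as np

def merge_sub_panoramas(pano_a, pano_b):
    a = pano_a.astype(np.float32)
    b = pano_b.astype(np.float32)
    mask_a = a.sum(axis=-1, keepdims=True) > 0
    mask_b = b.sum(axis=-1, keepdims=True) > 0
    both = mask_a & mask_b
    out = np.where(mask_a, a, b)            # take whichever side has content
    out = np.where(both, (a + b) / 2, out)  # average in the overlap
    return out.astype(np.uint8)
```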
According to the method of the embodiment corresponding to fig. 3, when the capture mode is laterally discrete, a sub-panorama corresponding to each image sequence is generated based on the connection relationships between the images in that sequence, and the sub-panoramas are then merged into the final panorama. Capture in the laterally discrete mode is fast, and the generation of the sub-panoramas can be executed in parallel, which improves the efficiency of panorama generation.
With further reference to fig. 5, a flow diagram of yet another embodiment of a panorama generating method is shown. As shown in fig. 5, the panorama generating method includes the steps of:
at step 501, at least two image sequences for generating a panorama are acquired.
In this embodiment, step 501 is substantially identical to step 201 in the corresponding embodiment of fig. 2, and will not be described herein.
Step 502, in response to determining that each image sequence in at least two image sequences is shot in a longitudinal discrete mode, determining a mapping relationship between a target image and other images in the image sequence for each image sequence in at least two image sequences; and based on the mapping relation, fusing other images to the target image to obtain a fused image corresponding to the image sequence as an effective image.
In this embodiment, at least two columns of images captured in a longitudinal discrete manner may be distributed from left to right, and each column is an image sequence. As an example, as shown in fig. 6, the camera may be horizontally rotated in three-dimensional space by 360 ° in the direction of the arrow in the figure, and a plurality of columns of images, each column being an image sequence, may be obtained.
The target image may be a pre-specified image, and for example, for a column of images as shown in fig. 6, an image located in the middle may be a target image. The electronic device can extract the characteristic point of each image in an image sequence by using the existing characteristic extraction method, and perform characteristic matching by using the characteristic point to obtain a homography matrix between the images, so as to determine the mapping relation between other images and the target image.
The electronic device can fuse other images to the target image by using the mapping relation, so that a fused image is obtained as an effective image.
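The following sketch illustrates this fusion for one image pair, using SIFT matches, Lowe's ratio test, and a RANSAC homography; these particular choices, and keeping the output canvas at the target image's size, are assumptions rather than patent requirements.

```python
# Hedged sketch: fuse one "other" image onto the target image via the
# homography recovered from matched features.
import cv2
import numpy as np

def fuse_onto_target(target, other):
    sift = cv2.SIFT_create()
    kp_t, des_t = sift.detectAndCompute(target, None)
    kp_o, des_o = sift.detectAndCompute(other, None)
    matches = cv2.BFMatcher().knnMatch(des_o, des_t, k=2)
    # Lowe ratio test to keep only distinctive matches
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]

    src = np.float32([kp_o[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_t[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the other image into the target's coordinate frame and paste it in
    h, w = target.shape[:2]
    warped = cv2.warpPerspective(other, H, (w, h))
    fused = np.where(warped.sum(axis=-1, keepdims=True) > 0, warped, target)
    return fused
```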
In step 503, the connection relationship between the obtained fusion images is determined through feature extraction and feature matching.
In this embodiment, the electronic device may determine the connection relationship between the obtained fusion images according to the feature extraction and feature matching method described in step 302 in the corresponding embodiment of fig. 3.
At step 504, the camera's internal parameters are determined.
In this embodiment, step 504 is substantially identical to step 203 in the corresponding embodiment of fig. 2, and will not be described herein.
Step 505, determining a camera attitude angle corresponding to the effective image based on the internal parameters and the connection relation.
In this embodiment, step 505 is substantially identical to step 204 in the corresponding embodiment of fig. 2, and will not be described herein.
And step 506, mapping the fusion image to a mapping surface for each fusion image in the fusion images to obtain a sub-panorama corresponding to the fusion image.
In this embodiment, for a certain fusion image, based on the camera pose angle of the fusion image, the mapping relationship between the fusion image and the panorama (that is, the position of the pixel point in the fusion image mapped onto the mapping surface of the panorama) may be determined, and according to the mapping relationship, the sub-panorama corresponding to the fusion image may be generated.
In step 507, the characteristics of the resulting sub-panoramas are determined.
In this embodiment, the electronic device may determine the features of each sub-panorama according to existing feature extraction methods (e.g., the various algorithms described in step 302 above).
Step 508, merging the sub-panoramas based on their features to obtain the final panorama.
In this embodiment, step 508 is substantially identical to step 308 in the corresponding embodiment of fig. 3, and will not be described herein.
According to the method of the embodiment corresponding to fig. 5, when the capture mode is longitudinally discrete, the images in each image sequence are first fused into a fused image, each fused image is then mapped onto the mapping surface of the panorama to generate a sub-panorama corresponding to its image sequence, and the sub-panoramas are finally merged into the final panorama.
With further reference to fig. 7, a flow diagram of yet another embodiment of a panorama generating method is shown. As shown in fig. 7, the panorama generating method includes the steps of:
At step 701, at least two image sequences for generating a panorama are acquired.
In this embodiment, step 701 is substantially identical to step 201 in the corresponding embodiment of fig. 2, and will not be described herein.
Step 702, in response to determining that a first image sequence in at least two image sequences is obtained by shooting in a horizontal discrete mode and other image sequences are obtained by shooting in a horizontal continuous mode, determining that each image included in the first image sequence is a valid image; and determining a connection relationship between each image in the first image sequence through feature extraction and feature matching.
In this embodiment, the first image sequence is an image sequence captured by the camera first. As shown in fig. 8, 801 is a first image sequence. After the first image sequence is captured, the image sequence is continuously captured in a horizontal continuous mode by changing the pitch angle of the camera, and the image sequence captured in the horizontal continuous mode can be an image frame sequence captured in a video mode. As shown in fig. 8, 802 and 803 are image sequences photographed in a laterally continuous manner.
The electronic device may determine a connection relationship between each image in the first image sequence according to the feature extraction and feature matching method described in step 302.
Step 703, for each image sequence in the other image sequences, determining that the key frame image in the image sequence is a valid image, and determining the connection relationship between the key frame image and the first image sequence through feature extraction and feature matching.
In this embodiment, as shown in fig. 8, the frames marked with "#" are key frames. A key frame (also called an I-frame) is a frame whose image data is completely retained in a compressed video; decoding a key frame requires only the data of that frame. In a video, a key frame is generally a frame at which the scene or the image of an object changes significantly, i.e., it contains the key information of the frames in the surrounding time range. In general, the time interval between temporally adjacent key frames is reasonable, neither too long nor too short. By extracting key frames, a small number of images can be selected from many image frames; these images contain many feature points corresponding to different spatial points, and adjacent key frames share enough matched feature points. The electronic device may extract key frames in various ways, such as color-feature-based methods, motion-analysis-based methods, clustering-based methods, and so on.
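As one hedged example of the color-feature-based strategy mentioned above, the sketch below marks a frame as a key frame when its color histogram correlates weakly with the last key frame's; the histogram binning and the 0.4 threshold are assumed values.

```python
# Hedged sketch: histogram-based key-frame selection from a video file.
import cv2

def extract_keyframes(video_path, threshold=0.4):
    cap = cv2.VideoCapture(video_path)
    keyframes, last_hist = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        # Low correlation with the previous key frame = new scene content
        if last_hist is None or cv2.compareHist(
                last_hist, hist, cv2.HISTCMP_CORREL) < 1 - threshold:
            keyframes.append(frame)
            last_hist = hist
    cap.release()
    return keyframes
```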
The electronic device may determine a connection relationship between the key frame image and the first image sequence according to the feature extraction and feature matching method described in step 302.
At step 704, internal parameters of the camera are determined.
In this embodiment, step 704 is substantially identical to step 203 in the corresponding embodiment of fig. 2, and will not be described herein.
Step 705, determining a camera attitude angle corresponding to the effective image based on the internal parameters and the connection relation.
In this embodiment, step 705 is substantially identical to step 204 in the corresponding embodiment of fig. 2, and will not be described herein.
And step 706, mapping each image in the first image sequence to a mapping surface to obtain a sub-panorama corresponding to the first image sequence.
In this embodiment, for a certain image sequence, based on the camera pose angle of the image, the mapping relationship between each image in the image sequence and the panorama (that is, the position of the pixel point in the image mapped onto the mapping surface of the panorama) may be determined, and according to the mapping relationship, a sub-panorama corresponding to the image sequence may be generated.
Step 707, mapping the key frame image to a mapping surface to obtain a mapping image.
In this embodiment, each key frame image may be mapped to the mapping surface according to the same method as step 706, to obtain a mapping image corresponding to each key frame image.
At step 708, the features of the mapped images and the sub-panorama are determined.
In this embodiment, the electronic device may determine the features of the mapping image and the sub-panorama according to the feature extraction and feature matching method described in step 302 in the corresponding embodiment of fig. 3.
Step 709, merging the mapping image and the sub-panorama based on the features of the mapping image and the sub-panorama to obtain a final panorama.
In this embodiment, the electronic device may connect the mapped images and the sub-panorama together based on their features and fuse the pixels of the connected images, thereby obtaining the final panorama.
In the method of the embodiment corresponding to fig. 7, when the first image sequence is captured in a laterally discrete manner and the other image sequences are captured in a laterally continuous manner, a sub-panorama is first generated from the first image sequence, key frame images are extracted from the other image sequences and mapped onto the mapping surface of the panorama, and finally the mapped images and the sub-panorama are merged into the final panorama. Because a video carries much more information than discrete images and the selection of key frames is flexible, the success rate of generating the panorama by image stitching can be improved.
Exemplary apparatus
Fig. 9 is a schematic structural view of a panorama generating apparatus according to an exemplary embodiment of the present disclosure. The embodiment may be applied to an electronic device, as shown in fig. 9, where the panorama generating apparatus includes: an acquisition module 901, configured to acquire at least two image sequences for generating a panorama, where the at least two image sequences include at least one image sequence captured in a discrete manner; a first determining module 902, configured to determine valid images in at least two image sequences based on a capturing manner of the at least two image sequences, and determine a connection relationship between the valid images; a second determining module 903, configured to determine an internal parameter of the camera; a third determining module 904, configured to determine a camera pose angle corresponding to the effective image based on the internal reference and the connection relationship; the mapping module 905 is configured to map the effective image to a mapping surface centered on the camera based on the camera pose angle, to obtain a panorama.
In this embodiment, the acquisition module 901 may acquire at least two image sequences for generating a panorama from locally or remotely. The image sequences may be obtained by shooting surrounding scenes by a camera integrated on the panorama generating device or a camera connected with the device, and the at least two image sequences include at least one image sequence shot in a discrete manner. Discrete photographing refers to photographing an image by a camera at a certain position in a certain posture, then changing the posture and/or position of the camera, photographing an image again, and repeating the operation to obtain an image sequence. The arrangement of the images in the image sequence may be either landscape or portrait.
As an example, a row of images arranged laterally may be a sequence of images, or a column of images arranged longitudinally may be a sequence of images.
In this embodiment, the first determining module 902 may determine effective images in the at least two image sequences based on the manner in which they were captured, and determine the connection relationships between the effective images. The camera may capture images discretely (i.e., the camera is stopped at a position to take an image) or continuously (e.g., as video). An effective image is an image that is mapped onto a three-dimensional mapping surface to generate the panorama, for example a key frame of a sequence captured as video.
In this embodiment, the second determination module 903 may determine the internal parameters of the camera, typically represented by the intrinsic matrix (Camera Intrinsics) K. The internal parameters may be fixed, i.e., known in advance, in which case the second determination module 903 obtains pre-entered values; they may also be obtained through calibration, in which case the camera is calibrated using the acquired image sequences. Camera intrinsic calibration is a widely used, well-known technique and is not described in detail here.
In this embodiment, the third determining module 904 may determine the camera pose angle corresponding to the effective image based on the internal parameters and the connection relationship. The camera attitude angle is used for representing the shooting direction of the camera under the three-dimensional coordinate system. The three-dimensional coordinate system may be a rectangular coordinate system established with the camera position as the origin. Attitude angles may include pitch angle (pitch), yaw angle (yaw), roll angle (roll). The pitch angle is used for representing deflection of the optical axis of the camera along the vertical direction, the yaw angle is used for representing deflection of the optical axis of the camera on the horizontal plane, and the roll angle is used for representing the rolling degree of the camera along the optical axis.
The third determining module 904 may determine the camera attitude angle according to various existing methods based on the internal parameters and the connection relationships between the images, for example by minimizing at least one of: photometric error, reprojection error, 3D geometric error, and so on.
In this embodiment, the mapping module 905 may map the effective images, based on the camera attitude angles, onto a mapping surface centered on the camera to obtain the panorama. Specifically, the panorama can be obtained by mapping onto surfaces of various shapes (e.g., spherical, cylindrical). A reference coordinate system is established in a certain direction with the sphere center (or cylinder axis) as the origin; a conversion relationship, represented by the camera attitude angle, exists between the coordinate system of a planar image captured by the camera and this reference coordinate system, and the attitude angle thus indicates which part of the panorama the planar image maps to. Mapping a two-dimensional image onto a three-dimensional mapping surface is a well-known technique and is not described in detail here.
Referring to fig. 10, fig. 10 is a schematic structural view of a panorama generating apparatus according to another exemplary embodiment of the present disclosure.
In some alternative implementations, the first determining module 902 may include: the first determining unit 9021 is configured to determine, for each of the at least two image sequences, that each of the images of the image sequences is a valid image in response to determining that each of the at least two image sequences is captured in a laterally discrete manner, and determine a connection relationship between the images of the image sequences by feature extraction and feature matching.
In some alternative implementations, the mapping module 905 may include: an adjustment unit 90501 for adjusting the yaw angles of vertically arranged images to be consistent; a first mapping unit 90502 for mapping, for each image sequence of the at least two image sequences, each image in the image sequence to the mapping surface based on the camera attitude angle of each image after the yaw-angle adjustment, to obtain a sub-panorama corresponding to the image sequence; a second determining unit 90503 for determining the features of the obtained sub-panoramas; and a first merging unit 90504 for merging the sub-panoramas based on their features to obtain a final panorama.
In some alternative implementations, the first determining module 902 may include: a fusion unit 9022, configured to determine, in response to determining that each of the at least two image sequences is captured in a longitudinally discrete manner, a mapping relationship between a target image and other images in the image sequence for each of the at least two image sequences; based on the mapping relation, fusing other images to the target image to obtain a fused image corresponding to the image sequence as an effective image; a third determining unit 9023 for determining a connection relationship between the respective obtained fusion images by feature extraction and feature matching.
In some alternative implementations, the mapping module 905 may include: a second mapping unit 90505 for mapping each fused image to the mapping surface to obtain a sub-panorama corresponding to that fused image; a fourth determining unit 90506 for determining the features of the obtained sub-panoramas; and a second merging unit 90507 for merging the sub-panoramas based on their features to obtain a final panorama.
In some alternative implementations, the first determining module 902 may include: a fifth determining unit 9024, configured to determine, in response to determining that a first image sequence of the at least two image sequences is captured in a laterally discrete manner, and other image sequences are captured in a laterally continuous manner, that each image included in the first image sequence is a valid image; determining a connection relation between each image in the first image sequence through feature extraction and feature matching; a sixth determining unit 9025 is configured to determine, for each of the other image sequences, that a key frame image in the image sequence is a valid image, and determine a connection relationship between the key frame image and the first image sequence by feature extraction and feature matching.
In some alternative implementations, the mapping module 905 may include: a third mapping unit 90508 configured to map each image of the first image sequence onto the mapping surface, to obtain a sub-panorama corresponding to the first image sequence; a fourth mapping unit 90509 configured to map each key frame image onto the mapping surface, to obtain a mapped image; a seventh determining unit 90510 configured to determine the features of the mapped image and the sub-panorama; and a third merging unit 90511 configured to merge the mapped image and the sub-panorama based on those features, to obtain the final panorama.
The panorama generating apparatus provided by this embodiment of the disclosure determines, according to the shooting manner used, the effective images in the captured image sequences; determines the connection relationships between those effective images; calibrates the camera to obtain its intrinsic parameters; determines the camera attitude angle of each effective image based on the intrinsics and the connection relationships; and finally maps each effective image onto a panorama based on its camera attitude angle. A panorama with a large field of view can thus be generated under whichever shooting manner suits a user's habits, improving both the flexibility and the efficiency of panorama generation.
Exemplary electronic device
Next, an electronic device according to an embodiment of the present disclosure is described with reference to fig. 11. The electronic device may be either or both of the terminal device 101 and the server 103 shown in fig. 1, or a stand-alone device independent of them that communicates with the terminal device 101 and the server 103 to receive input signals from them.
Fig. 11 illustrates a block diagram of an electronic device according to an embodiment of the disclosure.
As shown in fig. 11, the electronic device 1100 includes one or more processors 1101 and memory 1102.
The processor 1101 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 1100 to perform desired functions.
Memory 1102 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory. Non-volatile memory may include, for example, Read-Only Memory (ROM), hard disks, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 1101 may execute those instructions to implement the panorama generating methods of the various embodiments of the present disclosure described above and/or other desired functions. Various contents such as image sequences may also be stored in the computer-readable storage medium.
In one example, the electronic device 1100 may further include: an input device 1103 and an output device 1104, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, when the electronic device is the terminal device 101 or the server 103, the input means 1103 may be a device such as a camera, a mouse, a keyboard, or the like for inputting a sequence of images. When the electronic device is a stand-alone device, the input means 1103 may be a communication network connector for receiving the input image sequence from the terminal device 101 and the server 103.
The output device 1104 may output various information to the outside, including the generated panorama. The output device 1104 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, etc.
Of course, only some of the components of the electronic device 1100 that are relevant to the present disclosure are shown in fig. 11, with components such as buses, input/output interfaces, etc. omitted for simplicity. In addition, the electronic device 1100 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer-readable storage medium
In addition to the methods and apparatus described above, embodiments of the present disclosure may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps in a panorama generating method according to various embodiments of the present disclosure described in the "exemplary methods" section of the present description.
The computer program product may write program code for performing the operations of embodiments of the present disclosure in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium, having stored thereon computer program instructions, which when executed by a processor, cause the processor to perform the steps in a panorama generating method according to various embodiments of the present disclosure described in the above-mentioned "exemplary method" section of the present disclosure.
The computer-readable storage medium may employ any combination of one or more readable media. A readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, Random Access Memory (RAM), Read-Only Memory (ROM), Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present disclosure have been described above in connection with specific embodiments; however, it should be noted that the advantages, benefits, effects, and the like mentioned in the present disclosure are merely examples and not limitations, and should not be considered necessary to the various embodiments of the present disclosure. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only; the disclosure is not limited to practice with these specific details.
In this specification, the embodiments are described in a progressive manner, each focusing on its differences from the other embodiments; for the same or similar parts, the embodiments may be referred to one another. The system embodiments are described relatively simply because they essentially correspond to the method embodiments; for relevant points, refer to the description of the method embodiments.
The block diagrams of the devices, apparatuses, equipment, and systems referred to in this disclosure are merely illustrative examples and are not intended to require or imply that connections, arrangements, or configurations must be made in the manner shown. As will be appreciated by one of skill in the art, these devices, apparatuses, equipment, and systems may be connected, arranged, or configured in any manner. Words such as "including," "comprising," and "having" are open-ended, mean "including but not limited to," and may be used interchangeably. The terms "or" and "and" as used herein refer to, and may be used interchangeably with, the term "and/or," unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and may be used interchangeably with, the phrase "such as, but not limited to."
The methods and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, firmware. The above-described sequence of steps for the method is for illustration only, and the steps of the method of the present disclosure are not limited to the sequence specifically described above unless specifically stated otherwise. Furthermore, in some embodiments, the present disclosure may also be implemented as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
It is also noted that in the apparatus, devices, and methods of the present disclosure, the components or steps may be decomposed and/or recombined. Such decompositions and/or recombinations should be considered equivalent solutions of the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the disclosure to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (14)

1. A panorama generating method, comprising:
acquiring at least two image sequences for generating a panorama, wherein the at least two image sequences comprise at least one image sequence shot in a discrete mode, the at least two image sequences comprise at least two rows of images and at least two columns of images, the at least one image sequence shot in a discrete mode comprises at least one row of images or at least one column of images, each row of images in the at least one row of images corresponds to the same camera pitch angle, and each column of images in the at least one column of images corresponds to the same camera yaw angle;
determining effective images in the at least two image sequences based on the shooting modes of the at least two image sequences, and determining a connection relation between the effective images;
determining intrinsic parameters of the camera;
determining a camera attitude angle corresponding to the effective image based on the intrinsic parameters and the connection relation;
mapping the effective image, based on the camera attitude angle, to a mapping surface centered on the camera to obtain a panorama;
wherein the mapping the effective image, based on the camera attitude angle, to the mapping surface centered on the camera to obtain the panorama includes:
mapping each row of images in the at least one row of images to the mapping surface based on the camera attitude angle corresponding to each row of images in the at least one row of images to obtain a sub-panorama corresponding to each row of images, or mapping each column of images in the at least one column of images to the mapping surface based on the camera attitude angle corresponding to each column of images in the at least one column of images to obtain a sub-panorama corresponding to each column of images;
determining the characteristics of each obtained sub-panorama;
combining the sub-panoramas based on the characteristics of the sub-panoramas to obtain a final panorama;
the determining the effective images in the at least two image sequences and determining the connection relation between the effective images based on the shooting modes of the at least two image sequences comprises the following steps:
in response to determining that each image sequence in the at least two image sequences is shot in a longitudinal discrete mode, determining, for each image sequence in the at least two image sequences, a mapping relationship between a target image and the other images in the image sequence; based on the mapping relationship, fusing the other images onto the target image to obtain a fused image corresponding to the image sequence as an effective image;
and determining the connection relation between the obtained fused images through feature extraction and feature matching.
2. The method of claim 1, wherein the determining the effective images in the at least two image sequences based on the shooting modes of the at least two image sequences and determining the connection relation between the effective images includes:
in response to determining that each image sequence in the at least two image sequences is shot in a transverse discrete mode, determining, for each image sequence in the at least two image sequences, that each image in the image sequence is an effective image, and determining the connection relation between the images in the image sequence through feature extraction and feature matching.
3. The method of claim 2, wherein the mapping the effective image, based on the camera attitude angle, to the mapping surface centered on the camera to obtain the panorama comprises:
adjusting the yaw angles of the images arranged in the vertical direction to be consistent;
for each image sequence in the at least two image sequences, mapping each image in the image sequence to the mapping surface based on the camera attitude angle of each image in the image sequence after the yaw angle adjustment to obtain a sub-panorama corresponding to the image sequence;
determining the characteristics of each obtained sub-panorama;
and merging all the sub-panoramas based on the characteristics of all the sub-panoramas to obtain a final panorama.
4. The method of claim 1, wherein the mapping the effective image, based on the camera attitude angle, to the mapping surface centered on the camera to obtain the panorama comprises:
for each fused image of the fused images, mapping the fused image to the mapping surface to obtain a sub-panorama corresponding to the fused image;
determining the characteristics of each obtained sub-panorama;
and merging all the sub-panoramas based on the characteristics of all the sub-panoramas to obtain a final panorama.
5. The method of claim 1, wherein the determining the effective images in the at least two image sequences based on the shooting modes of the at least two image sequences and determining the connection relation between the effective images includes:
in response to determining that a first image sequence in the at least two image sequences is obtained by shooting in a transverse discrete mode and other image sequences are obtained by shooting in a transverse continuous mode, determining that each image included in the first image sequence is an effective image; determining the connection relation between the images in the first image sequence through feature extraction and feature matching;
and for each image sequence in the other image sequences, determining a key frame image in the image sequence as an effective image, and determining the connection relation between the key frame image and the first image sequence through feature extraction and feature matching.
6. The method of claim 5, wherein the mapping the effective image, based on the camera attitude angle, to the mapping surface centered on the camera to obtain the panorama comprises:
mapping each image in the first image sequence to the mapping surface to obtain a sub-panorama corresponding to the first image sequence;
mapping the key frame image to the mapping surface to obtain a mapping image;
determining features of the mapping image and the sub-panorama;
and merging the mapping image and the sub-panorama based on the characteristics of the mapping image and the sub-panorama to obtain a final panorama.
7. A panorama generating apparatus, comprising:
an acquisition module, configured to acquire at least two image sequences for generating a panorama, where the at least two image sequences include at least one image sequence captured in a discrete manner, the at least one image sequence captured in a discrete manner includes at least one row of images or at least one column of images, each row of images in the at least one row of images corresponds to a same camera pitch angle, and each column of images in the at least one column of images corresponds to a same camera yaw angle;
a first determining module, configured to determine effective images in the at least two image sequences based on the shooting modes of the at least two image sequences, and to determine the connection relation between the effective images;
a second determining module, configured to determine intrinsic parameters of the camera;
a third determining module, configured to determine a camera attitude angle corresponding to the effective image based on the intrinsic parameters and the connection relation;
a mapping module, configured to map the effective image, based on the camera attitude angle, to a mapping surface centered on the camera to obtain a panorama;
wherein the mapping module is further configured to:
mapping each row of images in the at least one row of images to the mapping surface based on the camera attitude angle corresponding to each row of images in the at least one row of images to obtain a sub-panorama corresponding to each row of images, or mapping each column of images in the at least one column of images to the mapping surface based on the camera attitude angle corresponding to each column of images in the at least one column of images to obtain a sub-panorama corresponding to each column of images;
determining the characteristics of each obtained sub-panorama;
combining the sub-panoramas based on the characteristics of the sub-panoramas to obtain a final panorama;
wherein the first determining module includes:
a fusion unit, configured to determine, in response to determining that each of the at least two image sequences is captured in a longitudinally discrete manner, a mapping relationship between a target image and the other images in the image sequence for each of the at least two image sequences, and, based on the mapping relationship, to fuse the other images onto the target image to obtain a fused image corresponding to the image sequence as an effective image;
and a third determining unit, configured to determine the connection relation between the obtained fused images through feature extraction and feature matching.
8. The apparatus of claim 7, wherein the first determination module comprises:
a first determining unit, configured to determine, in response to determining that each image sequence in the at least two image sequences is shot in a transverse discrete manner, that each image in each of the image sequences is an effective image, and to determine the connection relation between the images in the image sequences through feature extraction and feature matching.
9. The apparatus of claim 8, wherein the mapping module comprises:
an adjusting unit, configured to adjust the yaw angles of the vertically arranged images to be consistent;
a first mapping unit, configured to map, for each image sequence of the at least two image sequences, each image of the image sequence to the mapping surface based on the camera attitude angle of each image after yaw angle adjustment, to obtain a sub-panorama corresponding to the image sequence;
a second determining unit for determining the characteristics of each obtained sub-panorama;
and a first merging unit, configured to merge the sub-panoramas based on the characteristics of the sub-panoramas to obtain a final panorama.
10. The apparatus of claim 7, wherein the mapping module comprises:
a second mapping unit, configured to map each fused image of the fused images to the mapping surface to obtain a sub-panorama corresponding to the fused image;
a fourth determining unit configured to determine characteristics of the obtained respective sub-panoramas;
and a second merging unit, configured to merge the sub-panoramas based on the characteristics of the sub-panoramas to obtain a final panorama.
11. The apparatus of claim 7, wherein the first determination module comprises:
a fifth determining unit, configured to determine, in response to determining that a first image sequence in the at least two image sequences is obtained by capturing in a laterally discrete manner and other image sequences are obtained by capturing in a laterally continuous manner, that each image included in the first image sequence is an effective image, and to determine the connection relation between the images in the first image sequence through feature extraction and feature matching;
and a sixth determining unit, configured to determine, for each image sequence in the other image sequences, that a key frame image in the image sequence is an effective image, and to determine the connection relation between the key frame image and the first image sequence through feature extraction and feature matching.
12. The apparatus of claim 11, wherein the mapping module comprises:
a third mapping unit, configured to map each image in the first image sequence to the mapping surface, to obtain a sub-panorama corresponding to the first image sequence;
a fourth mapping unit, configured to map the key frame image to the mapping surface, to obtain a mapped image;
a seventh determining unit, configured to determine features of the mapping image and the sub-panorama;
and a third merging unit, configured to merge the mapping image and the sub-panorama based on the characteristics of the mapping image and the sub-panorama to obtain a final panorama.
13. A computer readable storage medium storing a computer program for performing the method of any one of the preceding claims 1-6.
14. An electronic device, the electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement the method of any of the preceding claims 1-6.
CN202010196117.7A 2020-03-16 2020-03-19 Panorama generation method and device, computer readable storage medium and electronic equipment Active CN111402136B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN202010196117.7A CN111402136B (en) 2020-03-19 2020-03-19 Panorama generation method and device, computer readable storage medium and electronic equipment
US17/200,659 US11146727B2 (en) 2020-03-16 2021-03-12 Method and device for generating a panoramic image
US17/383,157 US11533431B2 (en) 2020-03-16 2021-07-22 Method and device for generating a panoramic image
US17/981,056 US20230056036A1 (en) 2020-03-16 2022-11-04 Method and device for generating a panoramic image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010196117.7A CN111402136B (en) 2020-03-19 2020-03-19 Panorama generation method and device, computer readable storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN111402136A CN111402136A (en) 2020-07-10
CN111402136B (en) 2023-12-15

Family

ID=71431024

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010196117.7A Active CN111402136B (en) 2020-03-16 2020-03-19 Panorama generation method and device, computer readable storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111402136B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111833250B (en) * 2020-07-13 2024-09-03 北京爱笔科技有限公司 Panoramic image stitching method, device, equipment and storage medium
CN113012290B (en) * 2021-03-17 2023-02-28 展讯通信(天津)有限公司 Terminal posture-based picture display and acquisition method and device, storage medium and terminal
CN113689482B (en) * 2021-10-20 2021-12-21 贝壳技术有限公司 Shooting point recommendation method and device and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9984494B2 (en) * 2015-01-26 2018-05-29 Uber Technologies, Inc. Map-like summary visualization of street-level distance data and panorama data

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001167249A (en) * 1999-12-06 2001-06-22 Sanyo Electric Co Ltd Method and device for synthesizing image and recording medium stored with image synthesizing program
CN101123722A (en) * 2007-09-25 2008-02-13 北京智安邦科技有限公司 Panorama video intelligent monitoring method and system
CN102201115A (en) * 2011-04-07 2011-09-28 湖南天幕智能科技有限公司 Real-time panoramic image stitching method of aerial videos shot by unmanned plane
CN103176347A (en) * 2011-12-22 2013-06-26 百度在线网络技术(北京)有限公司 Method and device for shooting panorama and electronic device
CN103118230A (en) * 2013-02-28 2013-05-22 腾讯科技(深圳)有限公司 Panorama acquisition method, device and system
CN104463956A (en) * 2014-11-21 2015-03-25 中国科学院国家天文台 Construction method and device for virtual scene of lunar surface
KR101642975B1 (en) * 2015-04-27 2016-07-26 주식회사 피씨티 Panorama Space Modeling Method for Observing an Object
CN105611169A (en) * 2015-12-31 2016-05-25 联想(北京)有限公司 Image obtaining method and electronic device
CN106357976A (en) * 2016-08-30 2017-01-25 深圳市保千里电子有限公司 Omni-directional panoramic image generating method and device
CN107451952A (en) * 2017-08-04 2017-12-08 追光人动画设计(北京)有限公司 A kind of splicing and amalgamation method of panoramic video, equipment and system
CN109076158A (en) * 2017-12-22 2018-12-21 深圳市大疆创新科技有限公司 Panorama photographic method, photographing device and machine readable storage medium
CN110874818A (en) * 2018-08-31 2020-03-10 阿里巴巴集团控股有限公司 Image processing and virtual space construction method, device, system and storage medium
CN110111241A (en) * 2019-04-30 2019-08-09 北京字节跳动网络技术有限公司 Method and apparatus for generating dynamic image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Chen Siyong. Research and Implementation of a Panorama Stitching System. Fujian Computer. 2019, Vol. 35, No. 11, pp. 21-24. *

Also Published As

Publication number Publication date
CN111402136A (en) 2020-07-10

Similar Documents

Publication Publication Date Title
CN112927362B (en) Map reconstruction method and device, computer readable medium and electronic equipment
CN111402136B (en) Panorama generation method and device, computer readable storage medium and electronic equipment
JP6902122B2 (en) Double viewing angle Image calibration and image processing methods, equipment, storage media and electronics
CN111432119B (en) Image shooting method and device, computer readable storage medium and electronic equipment
CN112489114B (en) Image conversion method, image conversion device, computer readable storage medium and electronic equipment
CN111008985B (en) Panorama picture seam detection method and device, readable storage medium and electronic equipment
CN106357991A (en) Image processing method, image processing apparatus, and display system
CN112102199B (en) Depth image cavity region filling method, device and system
CN111612842B (en) Method and device for generating pose estimation model
CN111402404B (en) Panorama complementing method and device, computer readable storage medium and electronic equipment
US11044398B2 (en) Panoramic light field capture, processing, and display
CN115690382B (en) Training method of deep learning model, and method and device for generating panorama
US11533431B2 (en) Method and device for generating a panoramic image
CN111415386B (en) Shooting device position prompting method and device, storage medium and electronic device
WO2023005170A1 (en) Generation method and apparatus for panoramic video
CN113592940B (en) Method and device for determining target object position based on image
CN112995491B (en) Video generation method and device, electronic equipment and computer storage medium
WO2018100230A1 (en) Method and apparatuses for determining positions of multi-directional image capture apparatuses
CN108920598B (en) Panorama browsing method and device, terminal equipment, server and storage medium
CN116708862A (en) Virtual background generation method for live broadcasting room, computer equipment and storage medium
WO2018150086A2 (en) Methods and apparatuses for determining positions of multi-directional image capture apparatuses
US9898486B2 (en) Method, a system, an apparatus and a computer program product for image-based retrieval
WO2021073562A1 (en) Multipoint cloud plane fusion method and device
CN114900742A (en) Scene rotation transition method and system based on video plug flow
CN112465716A (en) Image conversion method and device, computer readable storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20200921

Address after: 100085 Floor 102-1, Building No. 35, Xierqi West Road, Haidian District, Beijing

Applicant after: Seashell Housing (Beijing) Technology Co.,Ltd.

Address before: 300457 Unit 5, Room 112, Floor 1, Office Building C, Nangang Industrial Zone, Binhai New Area Economic and Technological Development Zone, Tianjin

Applicant before: BEIKE TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20220328

Address after: 100085 8th floor, building 1, Hongyuan Shouzhu building, Shangdi 6th Street, Haidian District, Beijing

Applicant after: As you can see (Beijing) Technology Co.,Ltd.

Address before: 100085 Floor 101, 102-1, Building No. 35, Courtyard No. 2, Xierqi West Road, Haidian District, Beijing

Applicant before: Seashell Housing (Beijing) Technology Co.,Ltd.

GR01 Patent grant