Detailed Description
Hereinafter, example embodiments according to the present disclosure will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present disclosure and not all of the embodiments of the present disclosure, and that the present disclosure is not limited by the example embodiments described herein.
It should be noted that the relative arrangement of the components and steps, the numerical expressions, and the numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
It will be appreciated by those of skill in the art that the terms "first," "second," etc. in embodiments of the present disclosure are used merely to distinguish between different steps, devices, modules, etc., and represent neither any particular technical meaning nor any necessary logical order between them.
It should also be understood that in embodiments of the present disclosure, "plurality" may refer to two or more, and "at least one" may refer to one, two or more.
It should also be appreciated that any component, data, or structure referred to in the presently disclosed embodiments may generally be understood as one or more unless explicitly limited or the context clearly indicates otherwise.
In addition, the term "and/or" in this disclosure merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may indicate the three cases where A exists alone, A and B exist together, or B exists alone. In addition, the character "/" in the present disclosure generally indicates an "or" relationship between the associated objects before and after it.
It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the embodiments; for the same or similar features, the embodiments may be referred to one another, and such features are, for brevity, not described repeatedly.
Meanwhile, it should be understood that, for convenience of description, the sizes of the respective parts shown in the drawings are not drawn to actual scale.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, the techniques, methods, and apparatus should be considered part of the specification.
It should be noted that like reference numerals and letters denote like items in the following figures; thus, once an item is defined in one figure, it need not be discussed further in subsequent figures.
Embodiments of the present disclosure may be applicable to electronic devices such as terminal devices, computer systems, servers, etc., which may operate with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known terminal devices, computing systems, environments, and/or configurations that may be suitable for use with the terminal device, computer system, server, or other electronic device include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network personal computers, minicomputer systems, mainframe computer systems, and distributed cloud computing technology environments that include any of the above systems, and the like.
Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc., that perform particular tasks or implement particular abstract data types. The computer system/server may be implemented in a distributed cloud computing environment in which tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computing system storage media including memory storage devices.
Summary of the Application
Conventional panorama photographing apparatuses have a small vertical field of view (VFOV), which degrades the quality of depth estimation and limits the viewing angle that can be presented to the end user; it is therefore desirable to expand the vertical field of view when photographing a panorama.
Exemplary System
Fig. 1 illustrates an exemplary system architecture 100 to which the panorama generating method or panorama generating apparatus of an embodiment of the present disclosure may be applied.
As shown in fig. 1, a system architecture 100 may include a terminal device 101, a network 102, and a server 103. Network 102 is a medium used to provide communication links between terminal device 101 and server 103. Network 102 may include various connection types such as wired, wireless communication links, or fiber optic cables, among others.
A user may interact with the server 103 via the network 102 using the terminal device 101 to receive or send messages or the like. The terminal device 101 may have various communication client applications installed thereon, such as a photographing-type application, a map-type application, a three-dimensional model application, and the like.
The terminal device 101 may be various electronic devices including, but not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like.
The server 103 may be a server providing various services, such as a background image processing server processing an image sequence uploaded by the terminal device 101. The background image processing server can process the received image sequence to obtain information such as panoramic images.
It should be noted that the panorama generating method provided by embodiments of the present disclosure may be performed by the server 103 or by the terminal device 101; accordingly, the panorama generating apparatus may be provided in the server 103 or in the terminal device 101.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. In the case where the image sequence does not need to be acquired from a remote location, the system architecture described above may not include a network, but only a server or terminal device.
Exemplary Method
Fig. 2 is a flowchart illustrating a panorama generating method according to an exemplary embodiment of the present disclosure. The present embodiment is applicable to an electronic device (such as the terminal device 101 or the server 103 shown in fig. 1), and as shown in fig. 2, the method includes the steps of:
At step 201, at least two image sequences for generating a panorama are acquired.
In this embodiment, the electronic device may obtain, locally or remotely, at least two image sequences for generating the panorama. The image sequences may be obtained by photographing the surrounding scene with a camera integrated on the electronic device or a camera connected to the electronic device, and the at least two image sequences include at least one image sequence photographed in a discrete manner. Discrete photographing refers to photographing an image with the camera at a certain position and in a certain posture, then changing the posture and/or position of the camera and photographing another image, and repeating this operation to obtain an image sequence. The images in an image sequence may be arranged either horizontally or vertically.
As an example, a row of images arranged horizontally may be one image sequence, or a column of images arranged vertically may be one image sequence.
Step 202, determining effective images in at least two image sequences based on the shooting modes of the at least two image sequences, and determining connection relations among the effective images.
In this embodiment, the electronic device may determine the effective images in the at least two image sequences based on the photographing manner of the at least two image sequences, and determine the connection relationships between the effective images. The camera may photograph in a discrete manner (i.e., the camera is stopped at a position to take an image) or in a continuous manner (e.g., video). An effective image is an image to be mapped onto a three-dimensional mapping surface to generate the panorama, for example, a key frame of a sequence captured as video.
In step 203, the internal parameters of the camera are determined.
In this embodiment, the electronic device may determine the internal parameters of the camera. The internal parameters of the camera are typically represented by an intrinsic matrix (camera intrinsics) K. The internal parameters may be fixed, i.e., known in advance, in which case the electronic device may obtain pre-entered internal parameters. The internal parameters can also be obtained through calibration: the electronic device may calibrate the camera using the image sequences obtained in step 201. Camera calibration is a widely used, well-known technique and is not described in detail herein.
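For illustration only, the following minimal Python sketch (assuming OpenCV and NumPy; the function name, pattern size, and chessboard-based procedure are illustrative assumptions rather than part of this disclosure) shows the standard pinhole form of K and one common way to calibrate it:

```python
import cv2
import numpy as np

def intrinsics_from_calibration(chessboard_images, pattern_size=(9, 6)):
    """Estimate the intrinsic matrix K from several chessboard views.
    pattern_size is the number of inner corners per row and column."""
    # 3D corner coordinates on the chessboard plane (z = 0), in square units.
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)

    obj_points, img_points = [], []
    for img in chessboard_images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern_size)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    h, w = chessboard_images[0].shape[:2]
    _, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, (w, h), None, None)
    return K, dist  # K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]
```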
Step 204, determining a camera attitude angle corresponding to the effective image based on the internal parameters and the connection relationships.
In this embodiment, the electronic device may determine the camera attitude angle corresponding to the effective image based on the internal parameters and the connection relationships. The camera attitude angle represents the shooting direction of the camera in a three-dimensional coordinate system. The three-dimensional coordinate system may be a rectangular coordinate system established with the camera position as the origin. The attitude angles may include a pitch angle, a yaw angle, and a roll angle. The pitch angle represents the deflection of the camera's optical axis in the vertical direction, the yaw angle represents the deflection of the optical axis in the horizontal plane, and the roll angle represents the rotation of the camera about its optical axis.
The electronic device can determine the camera attitude angle according to various existing methods based on the internal parameters and the connection relationships between the images. For example, the method of determining the camera attitude angle may include, but is not limited to, at least one of: minimizing photometric error, minimizing reprojection error, minimizing 3D geometric error, etc.
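As a hedged, non-authoritative sketch of one such method (not necessarily the one used by this disclosure): for a camera that only rotates, the homography H between two views satisfies H ≈ K·R·K⁻¹, so the relative rotation can be recovered as R ≈ K⁻¹·H·K and then decomposed into attitude angles. The helper names and the Euler-axis convention below are assumptions:

```python
import cv2
import numpy as np
from scipy.spatial.transform import Rotation

def relative_rotation(pts_a, pts_b, K):
    """Recover the relative rotation between two views of a purely
    rotating camera from matched pixel coordinates (Nx2 arrays)."""
    H, _ = cv2.findHomography(np.float32(pts_a), np.float32(pts_b), cv2.RANSAC, 3.0)
    R = np.linalg.inv(K) @ H @ K
    # H is only defined up to scale, so project R onto the closest
    # true rotation matrix via SVD.
    U, _, Vt = np.linalg.svd(R)
    R = U @ Vt
    if np.linalg.det(R) < 0:  # guard against an improper rotation
        R = -R
    return R

def to_attitude_angles(R):
    """Decompose R into (pitch, yaw, roll) in radians, assuming a camera
    frame with x right, y down, z along the optical axis and an intrinsic
    Y-X-Z rotation order; other conventions are equally valid."""
    yaw, pitch, roll = Rotation.from_matrix(R).as_euler("YXZ")
    return pitch, yaw, roll
```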
Step 205, mapping the effective image to a mapping surface centered on the camera based on the camera attitude angle, to obtain a panoramic image.
In this embodiment, the electronic device may map the effective image, based on the camera attitude angle, onto a mapping surface centered on the camera to obtain the panorama. Specifically, the effective images can be mapped onto mapping surfaces of various shapes (e.g., spherical, cylindrical, etc.). A reference coordinate system is established in a certain direction with the sphere center (or cylinder axis center) as the origin; a conversion relationship exists between the coordinate system of a planar image shot by the camera and this reference coordinate system, the conversion relationship can be represented by the camera attitude angle, and the camera attitude angle indicates which part of the panorama the planar image is mapped to. It should be noted that mapping a two-dimensional image onto a three-dimensional mapping surface is a currently known technique and is not described in detail herein.
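A minimal sketch of such a mapping onto a spherical (equirectangular) surface by inverse lookup, assuming OpenCV and NumPy; the camera-frame convention (x right, y down, z along the optical axis) and the function name are illustrative assumptions:

```python
import cv2
import numpy as np

def paste_image_on_sphere(pano, img, K, R):
    """Paint one effective image onto an equirectangular panorama.
    R rotates camera coordinates into the reference (world) frame."""
    H, W = pano.shape[:2]
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    lon = (u / W - 0.5) * 2.0 * np.pi  # longitude in [-pi, pi)
    lat = (0.5 - v / H) * np.pi        # latitude in [-pi/2, pi/2]
    # Unit viewing ray for every panorama pixel, in the reference frame.
    d = np.stack([np.cos(lat) * np.sin(lon),
                  -np.sin(lat),
                  np.cos(lat) * np.cos(lon)], axis=-1)
    d_cam = d @ R                      # equivalent to R.T @ d per pixel
    z = d_cam[..., 2]
    p = d_cam @ K.T                    # pinhole projection
    safe_z = np.where(z > 1e-6, z, 1.0)
    map_x = np.where(z > 1e-6, p[..., 0] / safe_z, -1.0)
    map_y = np.where(z > 1e-6, p[..., 1] / safe_z, -1.0)
    warped = cv2.remap(img, map_x.astype(np.float32), map_y.astype(np.float32),
                       cv2.INTER_LINEAR)
    inside = ((map_x >= 0) & (map_x < img.shape[1]) &
              (map_y >= 0) & (map_y < img.shape[0]))
    pano[inside] = warped[inside]
    return pano
```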
According to the method provided by this embodiment of the present disclosure, effective images arranged in different manners are determined from image sequences captured in different shooting modes, the connection relationships between the effective images are determined, the camera is calibrated to obtain its internal parameters, the camera attitude angles of the effective images are determined based on the internal parameters and the connection relationships, and finally each effective image is mapped into a panorama based on its camera attitude angle. Panoramas with a large field of view can thus be generated in different manners according to the habits of different users, which improves the flexibility and efficiency of panorama generation.
With further reference to fig. 3, a flow diagram of yet another embodiment of a panorama generating method is shown. As shown in fig. 3, the panorama generating method includes the steps of:
Step 301, at least two image sequences for generating a panorama are acquired.
In this embodiment, step 301 is substantially identical to step 201 in the corresponding embodiment of fig. 2, and will not be described herein.
In step 302, in response to determining that each image sequence in the at least two image sequences is captured in a horizontally discrete manner, for each image sequence in the at least two image sequences, determining that each image in the image sequence is an effective image, and determining the connection relationships between the images in the image sequences through feature extraction and feature matching.
In this embodiment, the images obtained in the horizontally discrete manner may be distributed in at least two rows from top to bottom, each row being one image sequence. As an example, as shown in fig. 4, by horizontally rotating the camera 360° in three-dimensional space in the direction of the arrow in the figure, three rows of images, i.e., three image sequences, can be obtained, each row corresponding to one pitch angle.
In this embodiment, each image included in each image sequence is an effective image, i.e., every image is mapped into the panorama.
The electronic device may extract feature points in each image using existing feature extraction methods. As an example, the feature extraction algorithm may include, but is not limited to, at least one of: SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), ORB (Oriented FAST and Rotated BRIEF), etc. After the feature points are obtained, they can be matched so that feature points representing the same point in space are connected, thereby determining the connection relationships between the images.
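A minimal sketch of this step, assuming OpenCV with SIFT available (the helper name match_images and the 0.75 ratio-test threshold are illustrative choices):

```python
import cv2

def match_images(img_a, img_b):
    """Return matched point pairs between two images (SIFT + ratio test)."""
    sift = cv2.SIFT_create()
    gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)
    kp_a, des_a = sift.detectAndCompute(gray_a, None)
    kp_b, des_b = sift.detectAndCompute(gray_b, None)
    knn = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    good = []
    for pair in knn:
        # Lowe's ratio test filters ambiguous matches.
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    pts_a = [kp_a[m.queryIdx].pt for m in good]
    pts_b = [kp_b[m.trainIdx].pt for m in good]
    return pts_a, pts_b
```

Two images can then be treated as connected when the number of surviving matches exceeds a chosen threshold.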
In step 303, the camera's internal parameters are determined.
In this embodiment, step 303 is substantially identical to step 203 in the corresponding embodiment of fig. 2, and will not be described herein.
Step 304, determining a camera attitude angle corresponding to the effective image based on the internal parameters and the connection relationships.
In this embodiment, step 304 is substantially identical to step 204 in the corresponding embodiment of fig. 2, and will not be described herein.
In step 305, the yaw angles of the vertically aligned images are adjusted to be uniform.
As shown in fig. 4, the yaw angles of the images in each row may deviate from one another, so that vertically adjacent images are misaligned; by adjusting the yaw angles, vertically adjacent images can be aligned, which helps improve the accuracy of the generated panorama.
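One hedged way to make the yaw angles uniform, assuming the attitude angles are arranged in a rows-by-columns grid matching the image layout (the per-column circular mean is an illustrative choice, not prescribed by the method):

```python
import numpy as np

def align_column_yaw(yaw_grid):
    """yaw_grid: rows x cols array of yaw angles (radians); images in one
    column share a single yaw after adjustment."""
    yaw = np.asarray(yaw_grid, dtype=np.float64)
    # Circular mean per column avoids wrap-around problems near +/- pi.
    col_yaw = np.arctan2(np.sin(yaw).mean(axis=0), np.cos(yaw).mean(axis=0))
    return np.broadcast_to(col_yaw, yaw.shape).copy()
```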
Step 306, for each image sequence in the at least two image sequences, mapping each image in the image sequence to the mapping surface based on the camera attitude angle of each image after the yaw angle adjustment, so as to obtain a sub-panorama corresponding to the image sequence.
In this embodiment, for a certain image sequence, based on the camera pose angle of each image in the image sequence, the mapping relationship between the image and the panorama (that is, the position of the pixel point in the image mapped onto the mapping surface of the panorama) may be determined, and according to the mapping relationship, a sub-panorama corresponding to the image sequence may be generated.
Step 307, determining the features of each obtained sub-panorama.
In this embodiment, the electronic device may determine the features of each sub-panorama according to existing feature extraction methods (e.g., the various algorithms described in step 302 above).
Step 308, merging the sub-panoramas based on the features of the sub-panoramas to obtain the final panorama.
In this embodiment, the electronic device may connect the sub-panoramas together based on their features, and fuse the pixels of the connected sub-panoramas, thereby obtaining the final panorama. As an example, the color values of pixels representing the same three-dimensional spatial point in two interconnected sub-panoramas may be averaged (or combined as a weighted sum) to obtain the color values of the corresponding pixels in the final panorama.
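A minimal sketch of this fusion by per-pixel averaging, assuming the sub-panoramas have already been aligned into a common panorama frame (the function name and mask representation are assumptions):

```python
import numpy as np

def merge_panoramas(panos, masks):
    """Average overlapping pixels of aligned sub-panoramas.
    panos: list of HxWx3 float arrays in the same panorama frame;
    masks: list of HxW boolean arrays marking valid pixels."""
    acc = np.zeros_like(panos[0], dtype=np.float64)
    weight = np.zeros(panos[0].shape[:2], dtype=np.float64)
    for pano, mask in zip(panos, masks):
        acc[mask] += pano[mask]
        weight[mask] += 1.0
    weight = np.maximum(weight, 1.0)  # avoid divide-by-zero in empty gaps
    return (acc / weight[..., None]).astype(panos[0].dtype)
```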
According to the method of the embodiment corresponding to fig. 3, when the shooting mode is the horizontally discrete mode, a sub-panorama corresponding to each image sequence is generated based on the connection relationships between the images in that sequence, and the sub-panoramas are then merged to obtain the final panorama. Shooting in the horizontally discrete mode is fast, and the generation of the individual sub-panoramas can be executed in parallel, which improves the efficiency of panorama generation.
With further reference to fig. 5, a flow diagram of yet another embodiment of a panorama generating method is shown. As shown in fig. 5, the panorama generating method includes the steps of:
At step 501, at least two image sequences for generating a panorama are acquired.
In this embodiment, step 501 is substantially identical to step 201 in the corresponding embodiment of fig. 2, and will not be described herein.
Step 502, in response to determining that each image sequence in the at least two image sequences is shot in a vertically discrete manner, for each image sequence in the at least two image sequences, determining the mapping relationships between a target image and the other images in the image sequence; and, based on the mapping relationships, fusing the other images onto the target image to obtain a fused image corresponding to the image sequence as an effective image.
In this embodiment, the at least two columns of images captured in the vertically discrete manner may be distributed from left to right, each column being one image sequence. As an example, as shown in fig. 6, the camera may be horizontally rotated 360° in three-dimensional space in the direction of the arrow in the figure to obtain a plurality of columns of images, each column being one image sequence.
The target image may be a pre-specified image; for example, for a column of images as shown in fig. 6, the image located in the middle may be the target image. The electronic device can extract the feature points of each image in an image sequence using an existing feature extraction method, and perform feature matching using the feature points to obtain homography matrices between the images, thereby determining the mapping relationships between the other images and the target image.
The electronic device can fuse the other images onto the target image using the mapping relationships, thereby obtaining a fused image as an effective image.
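A hedged sketch of this fusion, reusing the hypothetical match_images helper sketched earlier; all names are illustrative, and a real implementation would enlarge the canvas to retain the extended vertical field of view, which is omitted here for brevity:

```python
import cv2
import numpy as np

def fuse_column_to_target(images, target_idx):
    """Warp every image of one vertical sequence onto the target image
    plane via a RANSAC homography, then average the overlapping pixels.
    Assumes color images and the match_images helper sketched earlier."""
    target = images[target_idx]
    h, w = target.shape[:2]
    canvas = target.astype(np.float64)
    count = np.ones((h, w), dtype=np.float64)
    for i, img in enumerate(images):
        if i == target_idx:
            continue
        pts_img, pts_tgt = match_images(img, target)
        H, _ = cv2.findHomography(np.float32(pts_img), np.float32(pts_tgt),
                                  cv2.RANSAC, 3.0)
        warped = cv2.warpPerspective(img, H, (w, h)).astype(np.float64)
        valid = warped.sum(axis=-1) > 0  # pixels actually covered by the warp
        canvas[valid] += warped[valid]
        count[valid] += 1.0
    return (canvas / count[..., None]).astype(target.dtype)
```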
In step 503, the connection relationships between the obtained fused images are determined through feature extraction and feature matching.
In this embodiment, the electronic device may determine the connection relationships between the obtained fused images according to the feature extraction and feature matching method described in step 302 in the embodiment corresponding to fig. 3.
At step 504, the camera's internal parameters are determined.
In this embodiment, step 504 is substantially identical to step 203 in the corresponding embodiment of fig. 2, and will not be described herein.
Step 505, determining a camera attitude angle corresponding to the effective image based on the internal parameters and the connection relation.
In this embodiment, step 505 is substantially identical to step 204 in the corresponding embodiment of fig. 2, and will not be described herein.
Step 506, for each fused image among the fused images, mapping the fused image to the mapping surface to obtain a sub-panorama corresponding to the fused image.
In this embodiment, for a given fused image, the mapping relationship between the fused image and the panorama (that is, the positions on the panorama's mapping surface to which the pixels of the fused image are mapped) may be determined based on the camera attitude angle of the fused image, and the sub-panorama corresponding to the fused image may be generated according to this mapping relationship.
In step 507, the features of the resulting sub-panoramas are determined.
In this embodiment, the electronic device may determine the features of each sub-panorama according to existing feature extraction methods (e.g., the various algorithms described in step 302 above).
Step 508, merging the sub-panoramas based on the features of the sub-panoramas to obtain the final panorama.
In this embodiment, step 508 is substantially identical to step 308 in the corresponding embodiment of fig. 3, and will not be described herein.
According to the method of the embodiment corresponding to fig. 5, when the shooting mode is the vertically discrete mode, the images in each image sequence are first fused to obtain fused images; each fused image is then mapped onto the mapping surface of the panorama to generate a sub-panorama corresponding to each image sequence, and the sub-panoramas are then merged to obtain the final panorama.
With further reference to fig. 7, a flow diagram of yet another embodiment of a panorama generating method is shown. As shown in fig. 7, the panorama generating method includes the steps of:
At step 701, at least two image sequences for generating a panorama are acquired.
In this embodiment, step 701 is substantially identical to step 201 in the corresponding embodiment of fig. 2, and will not be described herein.
Step 702, in response to determining that a first image sequence in the at least two image sequences is obtained by shooting in a horizontally discrete manner and the other image sequences are obtained by shooting in a horizontally continuous manner, determining that each image included in the first image sequence is an effective image; and determining the connection relationships between the images in the first image sequence through feature extraction and feature matching.
In this embodiment, the first image sequence is the image sequence captured by the camera first. As shown in fig. 8, 801 is the first image sequence. After the first image sequence is captured, the pitch angle of the camera is changed and further image sequences are captured in the horizontally continuous manner; an image sequence captured in the horizontally continuous manner may be a sequence of image frames captured as video. As shown in fig. 8, 802 and 803 are image sequences photographed in the horizontally continuous manner.
The electronic device may determine a connection relationship between each image in the first image sequence according to the feature extraction and feature matching method described in step 302.
Step 703, for each image sequence in the other image sequences, determining that the key frame images in the image sequence are effective images, and determining the connection relationships between the key frame images and the first image sequence through feature extraction and feature matching.
In this embodiment, as shown in fig. 8, the frames marked with "#" are key frames. A key frame (also called an I-frame) is a frame whose image data is completely retained in a compressed video, so decoding it requires only the image data of that frame. In a video, a key frame is generally a frame at which the scene, the photographed objects, etc. change significantly; that is, a key frame contains the key information of the frames in its corresponding time range. In general, the time interval between temporally adjacent key frames is neither too long nor too short. By extracting key frames, a small number of images containing feature points corresponding to many different spatial points can be selected from the many image frames, with adjacent key frames still sharing enough matched feature points. The electronic device may extract the key frames in various ways, such as color-feature-based methods, motion-analysis-based methods, clustering-based methods, and so forth.
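As a hedged illustration, one simple greedy strategy keeps a frame as a new key frame when its feature matches to the previous key frame drop below a threshold, so adjacent key frames still share enough matched points (the threshold and the reuse of the hypothetical match_images helper sketched earlier are assumptions):

```python
def select_key_frames(frames, min_matches=80):
    """Greedy key-frame selection: start a new key frame once the number
    of feature matches to the previous key frame falls below min_matches."""
    key_frames = [frames[0]]
    for frame in frames[1:]:
        pts_prev, _ = match_images(key_frames[-1], frame)
        if len(pts_prev) < min_matches:
            key_frames.append(frame)
    return key_frames
```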
The electronic device may determine a connection relationship between the key frame image and the first image sequence according to the feature extraction and feature matching method described in step 302.
At step 704, internal parameters of the camera are determined.
In this embodiment, step 704 is substantially identical to step 203 in the corresponding embodiment of fig. 2, and will not be described herein.
Step 705, determining a camera attitude angle corresponding to the effective image based on the internal parameters and the connection relation.
In this embodiment, step 705 is substantially identical to step 204 in the corresponding embodiment of fig. 2, and will not be described herein.
Step 706, mapping each image in the first image sequence to the mapping surface to obtain a sub-panorama corresponding to the first image sequence.
In this embodiment, for the first image sequence, the mapping relationship between each image in the sequence and the panorama (that is, the positions on the panorama's mapping surface to which the pixels of the image are mapped) may be determined based on the camera attitude angle of that image, and the sub-panorama corresponding to the image sequence may be generated according to these mapping relationships.
Step 707, mapping the key frame images to the mapping surface to obtain mapped images.
In this embodiment, each key frame image may be mapped to the mapping surface in the same manner as in step 706, to obtain a mapped image corresponding to each key frame image.
At step 708, the features of the mapped images and the sub-panorama are determined.
In this embodiment, the electronic device may determine the features of the mapped images and the sub-panorama according to the feature extraction and feature matching method described in step 302 in the embodiment corresponding to fig. 3.
Step 709, merging the mapped images and the sub-panorama based on the features of the mapped images and the sub-panorama to obtain the final panorama.
In this embodiment, the electronic device may connect the mapped images and the sub-panorama together based on their features, and fuse the pixels of the connected images, thereby obtaining the final panorama.
In the method provided in the embodiment corresponding to fig. 7, when the first image sequence is obtained by shooting in the horizontally discrete manner and the other image sequences are obtained by shooting in the horizontally continuous manner, a sub-panorama is first generated based on the first image sequence, the key frame images in the other image sequences are extracted and mapped onto the mapping surface of the panorama, and finally the mapped images and the sub-panorama are combined to obtain the final panorama. Because a video carries much more information than discrete images and the selection of key frames is flexible, the success rate of generating the panorama by image stitching can be improved.
Exemplary Apparatus
Fig. 9 is a schematic structural view of a panorama generating apparatus according to an exemplary embodiment of the present disclosure. The embodiment may be applied to an electronic device. As shown in fig. 9, the panorama generating apparatus includes: an acquisition module 901, configured to acquire at least two image sequences for generating a panorama, where the at least two image sequences include at least one image sequence captured in a discrete manner; a first determining module 902, configured to determine effective images in the at least two image sequences based on the capturing manner of the at least two image sequences, and determine the connection relationships between the effective images; a second determining module 903, configured to determine the internal parameters of the camera; a third determining module 904, configured to determine a camera attitude angle corresponding to the effective image based on the internal parameters and the connection relationships; and a mapping module 905, configured to map the effective image, based on the camera attitude angle, onto a mapping surface centered on the camera to obtain the panorama.
In this embodiment, the acquisition module 901 may acquire, locally or remotely, at least two image sequences for generating a panorama. The image sequences may be obtained by photographing the surrounding scene with a camera integrated on the panorama generating apparatus or a camera connected to the apparatus, and the at least two image sequences include at least one image sequence shot in a discrete manner. Discrete photographing refers to photographing an image with the camera at a certain position and in a certain posture, then changing the posture and/or position of the camera and photographing another image, and repeating this operation to obtain an image sequence. The images in an image sequence may be arranged either horizontally or vertically.
As an example, a row of images arranged horizontally may be one image sequence, or a column of images arranged vertically may be one image sequence.
In this embodiment, the first determining module 902 may determine the effective images in the at least two image sequences based on the capturing modes of the at least two image sequences, and determine the connection relationships between the effective images. The camera may photograph in a discrete manner (i.e., the camera is stopped at a position to take an image) or in a continuous manner (e.g., video). An effective image is an image to be mapped onto a three-dimensional mapping surface to generate the panorama, for example, a key frame of a sequence captured as video.
In this embodiment, the second determining module 903 may determine the internal parameters of the camera. The internal parameters of the camera are typically represented by an intrinsic matrix (camera intrinsics) K. The internal parameters may be fixed, i.e., known in advance, in which case the second determining module 903 may obtain pre-entered internal parameters. The internal parameters can also be obtained through calibration: the camera may be calibrated using the acquired image sequences. Camera calibration is a widely used, well-known technique and is not described in detail herein.
In this embodiment, the third determining module 904 may determine the camera attitude angle corresponding to the effective image based on the internal parameters and the connection relationships. The camera attitude angle represents the shooting direction of the camera in a three-dimensional coordinate system. The three-dimensional coordinate system may be a rectangular coordinate system established with the camera position as the origin. The attitude angles may include a pitch angle, a yaw angle, and a roll angle. The pitch angle represents the deflection of the camera's optical axis in the vertical direction, the yaw angle represents the deflection of the optical axis in the horizontal plane, and the roll angle represents the rotation of the camera about its optical axis.
The third determining module 904 may determine the camera attitude angle according to various existing methods based on the internal parameters and the connection relationships between the images. For example, the method of determining the camera attitude angle may include, but is not limited to, at least one of: minimizing photometric error, minimizing reprojection error, minimizing 3D geometric error, etc.
In this embodiment, the mapping module 905 may map the effective image, based on the camera attitude angle, onto a mapping surface centered on the camera to obtain the panorama. Specifically, the effective images can be mapped onto mapping surfaces of various shapes (e.g., spherical, cylindrical, etc.). A reference coordinate system is established in a certain direction with the sphere center (or cylinder axis center) as the origin; a conversion relationship exists between the coordinate system of a planar image shot by the camera and this reference coordinate system, the conversion relationship can be represented by the camera attitude angle, and the camera attitude angle indicates which part of the panorama the planar image is mapped to. It should be noted that mapping a two-dimensional image onto a three-dimensional mapping surface is a currently known technique and is not described in detail herein.
Referring to fig. 10, fig. 10 is a schematic structural view of a panorama generating apparatus according to another exemplary embodiment of the present disclosure.
In some alternative implementations, the first determining module 902 may include: a first determining unit 9021, configured to determine, in response to determining that each of the at least two image sequences is captured in a horizontally discrete manner, for each of the at least two image sequences, that each image of the image sequence is an effective image, and to determine the connection relationships between the images of the image sequences through feature extraction and feature matching.
In some alternative implementations, the mapping module 905 may include: an adjustment unit 90501, configured to adjust the yaw angles of the vertically aligned images to be uniform; a first mapping unit 90502, configured to map, for each image sequence of the at least two image sequences, each image of the image sequence to the mapping surface based on the camera attitude angle of each image after the yaw angle adjustment, to obtain a sub-panorama corresponding to the image sequence; a second determining unit 90503, configured to determine the features of the respective obtained sub-panoramas; and a first merging unit 90504, configured to merge the sub-panoramas based on the features of the sub-panoramas, to obtain the final panorama.
In some alternative implementations, the first determining module 902 may include: a fusion unit 9022, configured to determine, in response to determining that each of the at least two image sequences is captured in a vertically discrete manner, for each of the at least two image sequences, the mapping relationships between a target image and the other images in the image sequence, and, based on the mapping relationships, fuse the other images onto the target image to obtain a fused image corresponding to the image sequence as an effective image; and a third determining unit 9023, configured to determine the connection relationships between the respective obtained fused images through feature extraction and feature matching.
In some alternative implementations, the mapping module 905 may include: a second mapping unit 90505, configured to map, for each of the fused images, the fused image to the mapping surface to obtain a sub-panorama corresponding to the fused image; a fourth determining unit 90506, configured to determine the features of the respective obtained sub-panoramas; and a second merging unit 90507, configured to merge the sub-panoramas based on the features of the sub-panoramas, to obtain the final panorama.
In some alternative implementations, the first determining module 902 may include: a fifth determining unit 9024, configured to determine, in response to determining that a first image sequence of the at least two image sequences is captured in a horizontally discrete manner and the other image sequences are captured in a horizontally continuous manner, that each image included in the first image sequence is an effective image, and to determine the connection relationships between the images in the first image sequence through feature extraction and feature matching; and a sixth determining unit 9025, configured to determine, for each of the other image sequences, that the key frame images in the image sequence are effective images, and to determine the connection relationships between the key frame images and the first image sequence through feature extraction and feature matching.
In some alternative implementations, the mapping module 905 may include: a third mapping unit 90508, configured to map each image in the first image sequence to the mapping surface, to obtain a sub-panorama corresponding to the first image sequence; a fourth mapping unit 90509, configured to map the key frame images to the mapping surface to obtain mapped images; a seventh determining unit 90510, configured to determine the features of the mapped images and the sub-panorama; and a third merging unit 90511, configured to merge the mapped images and the sub-panorama based on the features of the mapped images and the sub-panorama, to obtain the final panorama.
According to the panorama generating apparatus provided by this embodiment of the present disclosure, effective images arranged in different manners are determined from image sequences captured in different shooting modes, the connection relationships between the effective images are determined, the camera is calibrated to obtain its internal parameters, the camera attitude angles of the effective images are determined based on the internal parameters and the connection relationships, and finally each effective image is mapped into a panorama based on its camera attitude angle. Panoramas with a large field of view can thus be generated in different manners according to the habits of different users, which improves the flexibility and efficiency of panorama generation.
Exemplary Electronic Device
Next, an electronic device according to an embodiment of the present disclosure is described with reference to fig. 11. The electronic device may be either or both of the terminal device 101 and the server 103 as shown in fig. 1, or a stand-alone device independent thereof, which may communicate with the terminal device 101 and the server 103 to receive the acquired input signals therefrom.
Fig. 11 illustrates a block diagram of an electronic device according to an embodiment of the disclosure.
As shown in fig. 11, the electronic device 1100 includes one or more processors 1101 and memory 1102.
The processor 1101 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 1100 to perform desired functions.
Memory 1102 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. Volatile memory can include, for example, random access memory (RAM) and/or cache memory. Non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium, and the processor 1101 may execute the program instructions to implement the panorama generating method of the various embodiments of the present disclosure described above and/or other desired functions. Various contents such as image sequences may also be stored in the computer-readable storage medium.
In one example, the electronic device 1100 may further include: an input device 1103 and an output device 1104, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, when the electronic device is the terminal device 101 or the server 103, the input means 1103 may be a device such as a camera, a mouse, a keyboard, or the like for inputting a sequence of images. When the electronic device is a stand-alone device, the input means 1103 may be a communication network connector for receiving the input image sequence from the terminal device 101 and the server 103.
The output device 1104 may output various information to the outside, including the generated panorama. The output device 1104 may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, etc.
Of course, only some of the components of the electronic device 1100 that are relevant to the present disclosure are shown in fig. 11, with components such as buses, input/output interfaces, etc. omitted for simplicity. In addition, the electronic device 1100 may include any other suitable components depending on the particular application.
Exemplary Computer Program Product and Computer-Readable Storage Medium
In addition to the methods and apparatus described above, embodiments of the present disclosure may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps in a panorama generating method according to various embodiments of the present disclosure described in the "exemplary methods" section of the present description.
The program code of the computer program product, for performing the operations of embodiments of the present disclosure, may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium, having stored thereon computer program instructions, which when executed by a processor, cause the processor to perform the steps in a panorama generating method according to various embodiments of the present disclosure described in the above-mentioned "exemplary method" section of the present disclosure.
The computer readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present disclosure have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present disclosure are merely examples and not limiting, and these advantages, benefits, effects, etc. are not to be considered as necessarily possessed by the various embodiments of the present disclosure. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, since the disclosure is not necessarily limited to practice with the specific details described.
In this specification, the embodiments are described in a progressive manner, each embodiment focusing on its differences from the other embodiments, so that the same or similar parts of the embodiments may be referred to one another. The description of the system embodiments is relatively brief because they substantially correspond to the method embodiments; for relevant details, reference should be made to the description of the method embodiments.
The block diagrams of the devices, apparatuses, equipment, and systems referred to in this disclosure are merely illustrative examples and are not intended to require or imply that connections, arrangements, or configurations must be made in the manner shown in the block diagrams. As will be appreciated by one of skill in the art, these devices, apparatuses, equipment, and systems may be connected, arranged, or configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended words that mean "including but not limited to" and are used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or" unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as, but not limited to."
The methods and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, firmware. The above-described sequence of steps for the method is for illustration only, and the steps of the method of the present disclosure are not limited to the sequence specifically described above unless specifically stated otherwise. Furthermore, in some embodiments, the present disclosure may also be implemented as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
It is also noted that in the apparatus, devices and methods of the present disclosure, components or steps may be disassembled and/or assembled. Such decomposition and/or recombination should be considered equivalent to the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the disclosure to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.