CN118840520A - Camera distortion simulation method and device and electronic equipment - Google Patents
- Publication number
- CN118840520A (application CN202411330360.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- simulation
- virtual perspective
- pixel point
- distorted
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
Landscapes
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Architecture (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Image Processing (AREA)
Abstract
The present application provides a camera distortion simulation method, a camera distortion simulation device, and an electronic device. In the present application, N virtual perspective cameras in a 3D rendering environment each render an undistorted image of the same 3D scene at the same moment, and the simulated pixels of a distortion simulation image are sampled from these N undistorted images based on a single distorted image and the fields of view of the N cameras, rather than by directly processing an existing undistorted image. Compared with conventional image post-processing methods, this avoids blurring of heavily distorted image regions during camera distortion simulation.
Description
Technical Field
The present application relates to image processing, and in particular to a camera distortion simulation method, a camera distortion simulation device, and an electronic device.
Background Art
Images captured by real cameras are often distorted, for example because the magnification at the center of the lens differs from that at its edge. This distortion is more pronounced in images captured with wide-angle or fisheye lenses.
At present, the common way to simulate real camera distortion is image post-processing: an undistorted image taken by a normal perspective camera is warped to obtain a distorted image. However, when this method is used to simulate images with obvious (severe) distortion, such as those captured by the wide-angle or fisheye lenses mentioned above, warping the undistorted image so that its central region exhibits severe distortion can leave the entire central region too blurry, degrading the distortion simulation result.
Summary of the Invention
The present application provides a camera distortion simulation method, a camera distortion simulation device, and an electronic device, so as to avoid blurring of heavily distorted image regions during camera distortion simulation.
An embodiment of the present application provides a camera distortion simulation method, the method comprising:
obtaining N undistorted images, the N undistorted images comprising undistorted images respectively rendered by N virtual perspective cameras for the same 3D scene at the same moment, wherein N virtual perspective cameras are deployed in the 3D scene, N is greater than 1, the N virtual perspective cameras are deployed at the same position, the orientations of their lenses satisfy a set similarity requirement, and the N virtual perspective cameras have different fields of view;
obtaining a distorted image; and
generating a distortion simulation image based on the N undistorted images and the distorted image, wherein each simulated pixel in the distortion simulation image is sampled from the N undistorted images based on the mapped pixel in the distorted image that has a mapping relationship with that simulated pixel and on the fields of view of the N virtual perspective cameras.
An embodiment of the present application further provides a camera distortion simulation device, the device comprising:
an obtaining unit, configured to obtain N undistorted images, the N undistorted images comprising undistorted images respectively rendered by N virtual perspective cameras for the same 3D scene at the same moment, wherein N virtual perspective cameras are deployed in the 3D scene, N is greater than 1, the N virtual perspective cameras are deployed at the same position, the orientations of their lenses satisfy a set similarity requirement, and the N virtual perspective cameras have different fields of view;
the obtaining unit being further configured to obtain a distorted image; and
a distortion simulation unit, configured to generate a distortion simulation image based on the N undistorted images and the distorted image, wherein each simulated pixel in the distortion simulation image is sampled from the N undistorted images based on the mapped pixel in the distorted image that has a mapping relationship with that simulated pixel and on the fields of view of the N virtual perspective cameras.
An embodiment of the present application further provides an electronic device. The electronic device comprises a processor and a machine-readable storage medium;
the machine-readable storage medium stores machine-executable instructions that can be executed by the processor; and
the processor is configured to execute the machine-executable instructions to implement the steps of the method disclosed above.
It can be seen from the above technical solutions that, in the present application, N virtual perspective cameras in a 3D rendering environment each render an undistorted image of the same 3D scene at the same moment, and the simulated pixels of the distortion simulation image are sampled from the N undistorted images based on a distorted image and the fields of view of the N cameras, rather than by directly processing an existing undistorted image. Compared with conventional image post-processing methods, this avoids blurring of heavily distorted image regions during camera distortion simulation.
Furthermore, in this embodiment, sampling the simulated pixels of the distortion simulation image from the N undistorted images based on a distorted image and the fields of view of the N virtual perspective cameras amounts to stitching and fusing pixels from the N undistorted images during real-time rendering, which eliminates the central-region blur caused by post-processing a single ready-made image.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the present disclosure.
FIG. 1 is a flowchart of a method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of undistorted images provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of a distorted image provided in an embodiment of the present application;
FIG. 4 is a flowchart of distortion simulation image generation provided in an embodiment of the present application;
FIG. 5 is a blending mask provided in an embodiment of the present application;
FIG. 6 and FIG. 7 are simulation effect diagrams provided in embodiments of the present application;
FIG. 8 is a structural diagram of a device provided in an embodiment of the present application;
FIG. 9 is a structural diagram of an electronic device provided in an embodiment of the present application.
DETAILED DESCRIPTION
To enable those skilled in the art to better understand the technical solutions provided by the embodiments of the present application, and to make the above purposes, features and advantages of the embodiments more apparent, the technical solutions in the embodiments of the present application are described in further detail below with reference to the accompanying drawings.
Referring to FIG. 1, FIG. 1 is a flowchart of a method provided in an embodiment of the present application. The method is applied to an electronic device, such as a server running a 3D scene, which is not specifically limited in this embodiment. The process is applicable to camera distortion simulation in the field of real-time rendering.
In one embodiment, the method is applied to a 3D scene in which N virtual perspective cameras are created, where N is greater than 1, for example N = 2.
In one embodiment, the N virtual perspective cameras are deployed at the same position and the orientations of their lenses satisfy a set similarity requirement (for example, identical orientation).
In one embodiment, the N virtual perspective cameras have different fields of view.
Based on the above, as shown in FIG. 1, the process may include the following steps.
Step 101: obtain N undistorted images.
In this embodiment, the N undistorted images comprise the undistorted images respectively rendered by the N virtual perspective cameras for the same 3D scene at the same moment. FIG. 2 shows two undistorted images for the case N = 2. As described above, the N virtual perspective cameras have different fields of view; when N is 2, the two cameras therefore have different fields of view, for example 90° for one and 60° for the other, so one field of view is necessarily larger than the other. For ease of description, when N is 2, the undistorted image rendered by the camera with the larger field of view (e.g. 90°) may be called the wide-field-of-view image, and the undistorted image rendered by the other camera the narrow-field-of-view image. FIG. 2 shows an example of a wide-field-of-view image (wide-angle image for short) and a narrow-field-of-view image (narrow-angle image for short).
It should be noted that, in this embodiment, the undistorted images rendered by the N virtual perspective cameras have the same resolution. Because the cameras' fields of view differ, the undistorted image rendered by a camera with a larger field of view covers a wider portion of the scene and may contain all or part of the region covered by the image rendered by a camera with a smaller field of view. For example, the wide-field-of-view image in FIG. 2 contains the entire content of the narrow-field-of-view image.
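The trade-off between field of view and per-pixel detail described above can be illustrated with a short sketch (not part of the patent; the pinhole model, the 1920-pixel width and the 90°/60° angles are assumptions for illustration only):

```python
import math

def degrees_per_pixel(fov_deg: float, width_px: int) -> float:
    """Horizontal angle subtended by one pixel at the image center
    for a pinhole (perspective) camera."""
    # Pixel-unit focal length: f = (width / 2) / tan(fov / 2).
    f = (width_px / 2) / math.tan(math.radians(fov_deg) / 2)
    return math.degrees(math.atan(1 / f))

wide = degrees_per_pixel(90, 1920)    # wide-FOV camera
narrow = degrees_per_pixel(60, 1920)  # narrow-FOV camera, same resolution

# At equal resolution, the narrow-FOV camera spends more pixels per degree
# of scene, i.e. its image carries finer detail -- the property the
# sampling in step 103 exploits for the overlap region.
assert narrow < wide
```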
However, at the same resolution, the undistorted image rendered by the camera with the smaller field of view captures richer scene detail than that rendered by the camera with the larger field of view.
Based on the above, the subsequent generation of the distortion simulation image adaptively samples from the undistorted images rendered by the cameras at the various fields of view, as described in step 103 below.
Step 102: obtain a distorted image.
In one embodiment, the distorted image is generated by transforming a normal undistorted image with a set distortion algorithm, for example one based on OpenCV. FIG. 3 shows an example of a distorted image.
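As one hedged illustration of such a distortion transform (a minimal sketch, not the patent's specific algorithm; the one-term radial model and its coefficient k1 are assumptions), the per-pixel mapping that turns normalized undistorted coordinates into distorted ones can look like this:

```python
def distort(x: float, y: float, k1: float) -> tuple[float, float]:
    """One-term radial distortion of normalized image coordinates:
    (x, y) -> (x, y) * (1 + k1 * r^2), where r^2 = x^2 + y^2."""
    r2 = x * x + y * y
    s = 1.0 + k1 * r2
    return x * s, y * s

# The optical axis is a fixed point; displacement grows with distance
# from the center, as with real lens distortion.
assert distort(0.0, 0.0, 0.2) == (0.0, 0.0)
```

For a whole image, such a mapping would typically be evaluated per destination pixel to build a remap table and then applied with an image-warping routine such as OpenCV's `cv2.remap`.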
Optionally, the resolution of the distorted image is the same as that of the N undistorted images.
Alternatively, the resolution of the distorted image is a multiple of the resolution of the N undistorted images (with the distorted image having the same aspect ratio as the N undistorted images).
Step 103: generate a distortion simulation image based on the N undistorted images and the distorted image, wherein each simulated pixel in the distortion simulation image is sampled from the N undistorted images based on the mapped pixel in the distorted image that has a mapping relationship with that simulated pixel and on the fields of view of the N virtual perspective cameras.
Given the characteristics, described above, of the undistorted images rendered by the large- and small-field-of-view cameras, this embodiment generates the distortion simulation image so as to favor, wherever possible, the source with higher pixel precision and a clearer post-distortion result. For example, for a region where the N undistorted images overlap, each simulated pixel in the simulation region that has a mapping relationship with the overlapping region is sampled from the undistorted image rendered by the camera with the smallest field of view; specifically, the pixel information of the mapped pixel that has a mapping relationship with the simulated pixel is sampled from that image and used as the pixel information of the simulated pixel.
That is, when the N undistorted images overlap, each simulated pixel in the simulation region that has a mapping relationship with the overlapping region is sampled from the undistorted image rendered by the camera with the smallest field of view. Because that image captures richer scene detail, the simulated pixels in the simulation region achieve higher precision and a clearer post-distortion result. Step 103 is described by example below.
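The rule of sampling the overlap region from the smallest covering field of view can be sketched as follows (an illustrative simplification, assuming the cameras share a position and orientation and that a pixel can be characterized by its ray angle from the optical axis; the helper name `pick_source` is hypothetical):

```python
def pick_source(angle_deg: float, fovs_deg: list[float]) -> int:
    """Return the index (into the FOV list sorted ascending) of the
    narrowest-FOV camera whose half field of view still covers the
    ray at `angle_deg` from the shared optical axis."""
    covering = [i for i, fov in enumerate(sorted(fovs_deg))
                if angle_deg <= fov / 2]
    # covering is non-empty as long as the ray lies inside the widest FOV
    return covering[0]

# With the 60-degree and 90-degree example cameras from the text:
assert pick_source(10, [90, 60]) == 0  # overlap region -> narrow camera
assert pick_source(40, [90, 60]) == 1  # outside narrow FOV -> wide camera
```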
This completes the process shown in FIG. 1.
It can be seen from the process of FIG. 1 that, in this embodiment, N virtual perspective cameras in a 3D rendering environment each render an undistorted image of the same 3D scene at the same moment, and the simulated pixels of the distortion simulation image are sampled from the N undistorted images based on a distorted image and the fields of view of the N cameras, rather than by directly processing an existing undistorted image. Compared with conventional image post-processing methods, this avoids blurring of heavily distorted image regions during camera distortion simulation.
Furthermore, sampling the simulated pixels of the distortion simulation image from the N undistorted images based on a distorted image and the fields of view of the N virtual perspective cameras amounts to stitching and fusing pixels from the N undistorted images during real-time rendering, which eliminates the central-region blur caused by post-processing a single ready-made image.
Step 103 is described below.
In one embodiment, in step 103, generating the distortion simulation image based on the N undistorted images and the distorted image comprises performing the following step (a) for each simulated pixel in a simulation image template to obtain the distortion simulation image:
Step (a): determine a sampling image from the N undistorted images based on the mapped pixel in the distorted image that has a mapping relationship with the simulated pixel and on the fields of view of the N virtual perspective cameras; then correct the pixel parameters of the simulated pixel in the simulation image template based on the pixel parameters of the mapped pixel in the sampling image that has a mapping relationship with the simulated pixel.
In one embodiment, the pixel parameters here are, for example, pixel values; this embodiment does not specifically limit them.
To facilitate understanding of step 103, FIG. 4 shows an example flowchart of generating the distortion simulation image provided in an embodiment of the present application.
Referring to FIG. 4, FIG. 4 is a flowchart of distortion simulation image generation provided in an embodiment of the present application. As shown in FIG. 4, the process may include the following steps.
Step 401: traverse the not-yet-traversed simulated pixels in the simulation image template in order, taking the currently traversed simulated pixel as the current pixel.
For example, this embodiment traverses the not-yet-traversed simulated pixels in the simulation image template in order from front to back and from left to right, taking each traversed simulated pixel as the current pixel.
Step 402: determine a sampling image from the N undistorted images based on the mapped pixel in the distorted image that has a mapping relationship with the current simulated pixel and on the fields of view of the N virtual perspective cameras.
In one embodiment, the pixel information of any pixel in the distorted image includes a first attribute value for a first specified attribute and a second attribute value for a second specified attribute. For example, the first specified attribute corresponds to the blue channel and the second specified attribute to the alpha (opacity) channel; correspondingly, the first attribute value is the blue-channel value and the second attribute value is the alpha-channel value.
On this basis, this embodiment can determine the sampling image from the N undistorted images using the coordinate information of the mapped pixel in the distorted image that has a mapping relationship with the simulated pixel, the first and second attribute values in that mapped pixel's pixel information, and the fields of view of the N virtual perspective cameras.
For example, a reference decision coefficient is determined based on the coordinate information of the mapped pixel in the distorted image that has a mapping relationship with the simulated pixel and on the first and second attribute values in that mapped pixel's pixel information. The reference decision coefficient is used to determine the target sampling decision coefficient that selects the sampling image, as detailed below. The target sampling decision coefficient is then determined based on the reference decision coefficient and the fields of view of the N virtual perspective cameras; finally, the sampling image is determined from the N undistorted images based on the target sampling decision coefficient.
In one embodiment, the reference decision coefficient is determined from the coordinate information of the mapped pixel in the distorted image that has a mapping relationship with the simulated pixel and from the vector formed by the first attribute value and the second attribute value in that mapped pixel's pixel information.
For example, denoting the reference decision coefficient r, the coordinate information of the mapped pixel in the distorted image that has a mapping relationship with the simulated pixel (u, v), and the vector formed by the first attribute value (e.g. the blue-channel value) and the second attribute value (e.g. the alpha-channel value) of that mapped pixel (b, a), the reference decision coefficient r is computed from (u, v) and (b, a).
In one embodiment, determining the target sampling decision coefficient based on the reference decision coefficient and the fields of view of the N virtual perspective cameras can be implemented in many ways, for example: determine the value of a specified trigonometric function of each camera's field of view; then determine the target sampling decision coefficient based on the reference decision coefficient and on sampling decision sub-coefficients, where any sampling decision sub-coefficient is obtained by performing a specified operation on the trigonometric-function values corresponding to the fields of view of two of the virtual perspective cameras.
Optionally, determining the value of the specified trigonometric function of each camera's field of view may be, for example, determining the tangent of each camera's field of view.
Optionally, determining the target sampling decision coefficient based on the reference decision coefficient and the sampling decision sub-coefficients may proceed as follows: input the sub-coefficients and the reference decision coefficient into a specified algorithm, which can be set according to actual needs and is not specifically limited in this embodiment. For example: multiply each sampling decision sub-coefficient by its proportional weight to obtain a first result; add the first result to the reference decision coefficient to obtain a second result; divide the second result by the sum of the sampling decision sub-coefficients; and take the final value as the target sampling decision coefficient. The proportional weights of the sub-coefficients may all be equal, for example 1/N, or at least two of them may differ and be set according to actual needs; this embodiment does not specifically limit them.
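The combination just described (weight each sub-coefficient, add the reference decision coefficient, divide by the sum of the sub-coefficients) can be sketched as follows (an illustrative sketch; the function name and the use of the sub-coefficient count for the default equal weights are assumptions):

```python
def target_coefficient(r: float, subs: list[float], weights=None) -> float:
    """Combine the reference decision coefficient r with the sampling
    decision sub-coefficients: weight each sub-coefficient, add r, and
    divide by the sum of the sub-coefficients."""
    if weights is None:
        # Equal proportional weights, analogous to the 1/N in the text.
        weights = [1.0 / len(subs)] * len(subs)
    weighted = sum(s * w for s, w in zip(subs, weights))
    return (r + weighted) / sum(subs)

# One sub-coefficient 2.0 with weight 1.0 and reference coefficient 1.0:
# (1.0 + 2.0 * 1.0) / 2.0 = 1.5
assert abs(target_coefficient(1.0, [2.0], [1.0]) - 1.5) < 1e-9
```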
For ease of understanding, the specified algorithm is illustrated below for N = 2.
For example, the target sampling decision coefficient may be expressed by the following specified algorithm:
d = (r + w · s) / s
where θmin is the horizontal field of view of the virtual perspective camera with the smallest field of view, θmax is the horizontal field of view of the virtual perspective camera with the largest field of view, d denotes the target sampling decision coefficient, s denotes the sampling decision sub-coefficient obtained from the tangent values of θmin and θmax (for example, their ratio), w denotes the proportional weight of s, and r denotes the reference decision coefficient described above.
需要说明的是,上述只是以N为2为例进行的举例描述,N为其它值的方式如上对指定算法的描述,这里不再赘述。It should be noted that the above description is only an example description using N as 2. The method of specifying the algorithm when N is other values is the same as the description of the specified algorithm above, which will not be repeated here.
作为一个实施例,基于目标采样决策系数,从N张无畸变图像中确定采样图像比如可为:从已获得的各虚拟透视相机的视场角当前对应的决策系数范围内,确定所述目标采样决策系数所处的目标决策系数范围;将当前具有与该目标决策系数范围对应的视场角的虚拟透视相机所渲染出的无畸变图像确定为所述采样图像。As an embodiment, based on the target sampling decision coefficient, a sampling image is determined from N undistorted images, for example: from the decision coefficient range currently corresponding to the field of view angle of each virtual perspective camera that has been obtained, the target decision coefficient range in which the target sampling decision coefficient is located is determined; and the undistorted image rendered by the virtual perspective camera that currently has a field of view angle corresponding to the target decision coefficient range is determined as the sampling image.
For example, taking N = 2, the sampling image may be determined from the N undistorted images based on the target sampling decision coefficient as follows: if the target sampling decision coefficient falls within the decision coefficient range corresponding to the narrow field of view, for example the range 0 to 1, the undistorted image rendered by the virtual perspective camera with the smallest field of view is determined as the sampling image; otherwise, the undistorted image rendered by the virtual perspective camera with the largest field of view is determined as the sampling image.
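The range lookup just described can be sketched as follows; the list-of-ranges representation and the fallback to the widest field of view are illustrative assumptions:

```python
def select_sample_image(k_target, ranges_to_images):
    """ranges_to_images: list of ((low, high), image) pairs, ordered from the
    narrowest to the widest field of view. Returns the undistorted image whose
    decision coefficient range contains k_target."""
    for (low, high), image in ranges_to_images:
        if low <= k_target <= high:
            return image
    return ranges_to_images[-1][1]  # fall back to the widest-FOV image
```

For N = 2 this matches the example above: a coefficient in [0, 1] selects the narrow-FOV image, anything larger selects the wide-FOV image.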
Step 403: correct the pixel parameters of the current simulated pixel point in the simulated image template based on the pixel parameters of the mapped pixel point in the sampling image that has a mapping relationship with the current simulated pixel point. If the current simulated pixel point is the last simulated pixel point traversed in the simulated image template, the simulated image template at this point is determined as the above-mentioned distortion simulation image; otherwise, return to the step of traversing, in order, the not-yet-traversed simulated pixel points in the simulated image template.
As an embodiment, correcting the pixel parameters of the simulated pixel point in the simulated image template based on the pixel parameters of the mapped pixel point in the sampling image that has a mapping relationship with that simulated pixel point may specifically be: modifying the pixel value of the simulated pixel point in the simulated image template to the pixel value of the mapped pixel point in the sampling image that has a mapping relationship with that simulated pixel point.
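The traverse-and-correct loop above can be sketched as follows. The explicit (row, col) mapping array is a stand-in for the mapping relation the patent derives from the distorted image; the function name is illustrative:

```python
import numpy as np

def fill_simulated_image(template, sampled, mapping):
    """Traverse every simulated pixel point of the template and overwrite its
    value with the value of its mapped pixel point in the sampled image.
    `mapping` is an (H, W, 2) integer array of (row, col) source coordinates."""
    out = template.copy()
    h, w = out.shape[:2]
    for y in range(h):
        for x in range(w):
            sy, sx = mapping[y, x]
            out[y, x] = sampled[sy, sx]
    return out
```

In practice the per-pixel loop would run in a post-processing shader; the Python loop only mirrors the traversal order of steps 401 to 403.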
At this point, the process shown in FIG. 4 is completed.
Through the process shown in FIG. 4, the generation of the distortion simulation image based on the N undistorted images and the distorted image is finally realized.
In this embodiment, to avoid seams caused by stitching in the distortion simulation image, for the region to be adjusted, which is determined starting from the boundary line and extending toward the central region of the image, the simulated pixel points in that region may be adjusted based on the undistorted images rendered by the respective virtual perspective cameras, thereby avoiding such seams. Here, the boundary line is determined based on adjacent boundary simulated pixel points in the distortion simulation image, where the adjacent boundary simulated pixel points are sampled from undistorted images rendered by two different virtual perspective cameras.
For example, taking N = 2, the boundary line is the dividing line between the pixel points taken from the wide-field-of-view undistorted image and the pixel points taken from the narrow-field-of-view undistorted image.
As an embodiment, adjusting the simulated pixel points in the region to be adjusted may be as follows: for each simulated pixel point in the region to be adjusted, select the mapped pixel point to which that simulated pixel point maps from the undistorted image rendered by each virtual perspective camera, fuse the pixel parameters of the mapped pixel points, such as their pixel values, and determine the fusion result as the pixel parameter, such as the pixel value, of that simulated pixel point.
Still taking N = 2 as an example, the region to be adjusted is the region of the distortion simulation image from the boundary line to the center of the image. Of course, if N is greater than 2, the region to be adjusted is the region of the distortion simulation image between the boundary line and the next boundary line in the direction toward the central region of the image.
As an embodiment, for each simulated pixel point in the region to be adjusted, the mapped pixel point to which that simulated pixel point maps may be selected from the undistorted image rendered by each virtual perspective camera, the pixel parameters of the mapped pixel points, such as their pixel values, may be linearly fused, and the fusion result may be determined as the pixel parameter, such as the pixel value, of that simulated pixel point. This step can be implemented with a blend mask map as shown in FIG. 5. In the blend mask map of FIG. 5, pure white indicates that the pixel point from the narrow-field-of-view image is used entirely, black indicates that the pixel point from the wide-field-of-view image is used entirely, and the gray transition region at the junction holds the result of linearly fusing the pixel points at the same position in the narrow-field-of-view image and the wide-field-of-view image.
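The linear fusion driven by the blend mask of FIG. 5 can be sketched as follows, assuming a single-channel mask normalized to [0, 1] (an illustrative convention; the text does not fix the mask's value range):

```python
import numpy as np

def blend_with_mask(narrow_img, wide_img, mask):
    """Linear fusion controlled by a blend mask: a mask value of 1.0 (white)
    takes the narrow-FOV pixel entirely, 0.0 (black) takes the wide-FOV pixel
    entirely, and in-between values blend the two linearly. All arrays share
    the same height and width."""
    m = mask.astype(np.float64)
    return m * narrow_img + (1.0 - m) * wide_img
```

A smooth gray ramp in the mask over the region to be adjusted is what removes the visible seam at the boundary line.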
The method provided by the embodiments of the present application has been described above. To make the effect of the method more apparent, FIG. 6 and FIG. 7 show, by way of example, a clear rendering after distortion simulation of a narrow-field-of-view image, and a clear rendering after distortion simulation combining a wide-field-of-view image with a narrow-field-of-view image. By performing distortion simulation on a narrow-field-of-view image alone, or on a combination of wide- and narrow-field-of-view images, blurring of strongly distorted image regions during camera distortion simulation can be avoided, keeping the entire image clear.
The device provided by the embodiments of the present application is described below:
Referring to FIG. 8, FIG. 8 is a structural diagram of a device provided by an embodiment of the present application. As shown in FIG. 8, the device may include:
an obtaining unit, configured to obtain N undistorted images, the N undistorted images including undistorted images respectively rendered at the same moment for the same 3D scene by N virtual perspective cameras; N virtual perspective cameras are deployed in the 3D scene, where N is greater than 1, the N virtual perspective cameras are deployed at the same position, the orientations of the lenses of the N virtual perspective cameras meet a set similarity requirement, and the N virtual perspective cameras have different fields of view; and configured to
obtain a distorted image;
a distortion simulation unit, configured to generate a distortion simulation image based on the N undistorted images and the distorted image, where each simulated pixel point in the distortion simulation image is sampled from the N undistorted images based on the mapped pixel point in the distorted image that has a mapping relationship with that simulated pixel point and on the fields of view of the N virtual perspective cameras.
As an embodiment, the N undistorted images have the same resolution;
the distorted image is generated according to a set distortion algorithm; and
the image resolution of the distorted image is the same as the image resolution of the N undistorted images, or the image resolution of the distorted image is a multiple of the image resolution of the N undistorted images.
As an embodiment, the pixel information of any pixel point in the distorted image includes a first attribute value corresponding to a first specified attribute and a second attribute value corresponding to a second specified attribute; and generating the distortion simulation image based on the N undistorted images and the distorted image includes performing the following steps for each simulated pixel point in the simulated image template to obtain the distortion simulation image:
determining a reference decision coefficient based on the coordinate information of the mapped pixel point in the distorted image that has a mapping relationship with that simulated pixel point, and on the first attribute value corresponding to the first specified attribute and the second attribute value corresponding to the second specified attribute in the pixel information of that mapped pixel point, the reference decision coefficient being used to determine a target sampling decision coefficient for selecting a sampling image;
determining the target sampling decision coefficient based on the reference decision coefficient and the fields of view of the N virtual perspective cameras;
determining a sampling image from the N undistorted images based on the target sampling decision coefficient; and
correcting the pixel parameters of that simulated pixel point in the simulated image template based on the pixel parameters of the mapped pixel point in the sampling image that has a mapping relationship with that simulated pixel point.
As an embodiment, the first specified attribute corresponds to the blue (Blue) channel, and the first attribute value corresponding to the first specified attribute is a blue channel value;
the second specified attribute corresponds to the opacity (alpha) channel, and the second attribute value corresponding to the second specified attribute is an alpha channel value; and
the reference decision coefficient is determined based on the coordinate information of the mapped pixel point in the distorted image that has a mapping relationship with the simulated pixel point, and on a vector composed of the first attribute value corresponding to the first specified attribute and the second attribute value corresponding to the second specified attribute in the pixel information of that mapped pixel point.
As an embodiment, the reference decision coefficient may be expressed by the following formula:

k_ref = F(p, v);

where k_ref denotes the reference decision coefficient, p denotes the coordinate information of the mapped pixel point in the distorted image that has a mapping relationship with the simulated pixel point, v denotes the vector composed of the first attribute value corresponding to the first specified attribute and the second attribute value corresponding to the second specified attribute in the pixel information of that mapped pixel point, and F denotes the set operation mapping p and v to the reference decision coefficient.
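As an illustration only: the reference decision coefficient is derived in part from the mapped pixel's blue and alpha channel values, and one common way two 8-bit channels jointly encode a coefficient is a coarse/fine split. The decoding below is purely an assumption about how such a vector could be read back, not an operation stated in the text:

```python
def decode_blue_alpha(b, a):
    """Decode a coefficient stored across two 8-bit channels: the blue channel
    holds the coarse part and the alpha channel the fine part. Both inputs are
    integers in [0, 255]; the result lies in [0, 1 + 1/255)."""
    return b / 255.0 + (a / 255.0) / 255.0
```

Packing a coefficient this way survives 8-bit texture storage with roughly 16 bits of precision, which is why the split is a common shader idiom.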
As an embodiment, determining the target sampling decision coefficient based on the reference decision coefficient and the fields of view of the N virtual perspective cameras includes:
determining the trigonometric function value corresponding to a specified trigonometric function of the field of view of each virtual perspective camera; and
determining the target sampling decision coefficient based on the reference decision coefficient and the sampling decision sub-coefficients, where any sampling decision sub-coefficient is obtained by performing a specified operation on the trigonometric function values corresponding to the fields of view of any two virtual perspective cameras.
As an embodiment, determining the sampling image from the N undistorted images based on the target sampling decision coefficient includes:
determining, from the obtained decision coefficient ranges currently corresponding to the fields of view of the respective virtual perspective cameras, the target decision coefficient range within which the target sampling decision coefficient falls; and determining, as the sampling image, the undistorted image rendered by the virtual perspective camera whose current field of view corresponds to that target decision coefficient range.
As an embodiment, the distortion simulation unit further determines, in the distortion simulation image, a region to be adjusted starting from a boundary line and extending toward the central region of the image, where the boundary line is determined based on adjacent boundary simulated pixel points in the distortion simulation image, the adjacent boundary simulated pixel points being sampled from undistorted images rendered by two different virtual perspective cameras; and
for each simulated pixel point in the region to be adjusted, selects the mapped pixel point to which that simulated pixel point maps from the undistorted image rendered by each virtual perspective camera, fuses the pixel parameters of the mapped pixel points, and determines the fusion result as the pixel parameter of that simulated pixel point.
As an embodiment, if N is 2, the region to be adjusted is the region of the distortion simulation image from the boundary line to the center of the image.
At this point, the structural description of the device shown in FIG. 8 is completed.
An embodiment of the present application also provides a hardware structure for the device shown in FIG. 8. Referring to FIG. 9, FIG. 9 is a structural diagram of an electronic device provided by an embodiment of the present application. As shown in FIG. 9, the hardware structure may include a processor and a machine-readable storage medium, the machine-readable storage medium storing machine-executable instructions executable by the processor; the processor is configured to execute the machine-executable instructions to implement the methods disclosed in the above examples of the present application.
Based on the same application concept as the above methods, an embodiment of the present application also provides a machine-readable storage medium storing a number of computer instructions which, when executed by a processor, implement the methods disclosed in the above examples of the present application.
By way of example, the machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device capable of containing or storing information such as executable instructions and data. For example, the machine-readable storage medium may be RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (such as a hard disk drive), a solid-state drive, any type of storage disc (such as an optical disc or DVD), a similar storage medium, or a combination thereof.
The above are merely embodiments of the present application and are not intended to limit it. Various changes and variations of the present application will occur to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within the scope of the claims of the present application.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411330360.8A CN118840520B (en) | 2024-09-23 | 2024-09-23 | Camera distortion simulation method, device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN118840520A true CN118840520A (en) | 2024-10-25 |
CN118840520B CN118840520B (en) | 2024-12-06 |
Family
ID=93143032
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202411330360.8A Active CN118840520B (en) | 2024-09-23 | 2024-09-23 | Camera distortion simulation method, device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118840520B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107407554A (en) * | 2015-02-16 | 2017-11-28 | 英特尔公司 | Simulating Multi-Camera Imaging Systems |
CN109461213A (en) * | 2018-11-16 | 2019-03-12 | 京东方科技集团股份有限公司 | Image processing method, device, equipment and storage medium based on virtual reality |
CN113989467A (en) * | 2021-10-28 | 2022-01-28 | 杭州海康威视数字技术股份有限公司 | Virtual distortion image generation method and device |
CN114332246A (en) * | 2021-12-30 | 2022-04-12 | 北京五一视界数字孪生科技股份有限公司 | Virtual simulation method and device for camera distortion |
CN115200509A (en) * | 2021-04-08 | 2022-10-18 | 北京理工大学 | A measurement method, device and control device based on fringe projection measurement model |
US20230162332A1 (en) * | 2020-06-28 | 2023-05-25 | Huawei Technologies Co., Ltd. | Image Transformation Method and Apparatus |
CN117456006A (en) * | 2023-09-27 | 2024-01-26 | 北京耐德佳显示技术有限公司 | Method and device for acquiring predistortion mapping relation of near-eye display equipment |
Non-Patent Citations (1)
Title |
---|
ZENG Jiyong, SU Xianyu: "Elimination of lens distortion in a catadioptric panoramic imaging system without distortion of horizontal scenes", Acta Optica Sinica, No. 06, 17 July 2004 (2004-07-17) *
Also Published As
Publication number | Publication date |
---|---|
CN118840520B (en) | 2024-12-06 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||