CN115841539A - Three-dimensional light field generation method and device based on visual shell - Google Patents
- Publication number
- CN115841539A (application CN202211505959.1A)
- Authority
- CN
- China
- Prior art keywords
- dimensional
- light field
- virtual camera
- reconstructed
- target model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Processing Or Creating Images (AREA)
- Image Generation (AREA)
Abstract
The present invention provides a method and device for generating a three-dimensional light field based on a visual hull, including: capturing continuous synchronized video streams of a target model to be reconstructed with multiple RGB cameras; determining the arrangement of a virtual camera array from the parameters of a three-dimensional light field display device; using a light field multi-viewpoint encoding method to determine all sub-pixels required to generate the composite image and their corresponding virtual camera information; controlling the corresponding virtual cameras to emit rendering rays based on that information; determining, from the visual hull algorithm and the multiple synchronized video frames, the three-dimensional intersection and non-intersection points between the rendering rays and the target model, together with the colors and coordinates of the intersection points and the background color of the non-intersection points; generating the composite image from these colors and coordinates; and generating the three-dimensional light field from the composite image. Rapid generation of the three-dimensional light field is thereby achieved.
Description
Technical Field
The present invention relates to the technical field of three-dimensional imaging, and in particular to a method and device for generating a three-dimensional light field based on a visual hull.
Background
In recent years, three-dimensional light field display technology has developed rapidly. Because it requires no extra wearable equipment and can provide viewers with a wide viewing angle, high resolution, and richly detailed display, it has been widely applied in fields such as education, medicine, defense, and augmented reality, attracting broad attention from researchers at home and abroad.
However, real-time dynamic display of real scenes remains difficult for three-dimensional light field display methods. Traditional light field content generation methods fall into three categories: three-dimensional reconstruction and multi-viewpoint rendering of real scenes, multi-viewpoint rendering of simulated models, and dense viewpoint generation. The first category performs three-dimensional reconstruction, multi-viewpoint rendering, and composite image computation on the captured multi-viewpoint images. It can display real scenes in three dimensions, but modeling, rendering, and related operations take a great deal of time and cannot meet the demand for rapid light field generation. The second category can display static scenes in real time, but its simulated models are man-made in advance and lack realism, and dynamic scenes require per-frame model production, which is costly and time-consuming. The third category generates dense viewpoints with deep learning; it usually requires a training process, is highly sensitive to changes in the environment and the subjects, and lacks robustness and stability. None of these methods meets the demand for rapid generation of three-dimensional light fields.
Summary of the Invention
The present invention provides a method and device for generating a three-dimensional light field based on a visual hull, which addresses the long generation time of three-dimensional light fields of real scenes in the prior art and achieves rapid light field generation.
The present invention provides a visual-hull-based three-dimensional light field generation method, including: capturing, with multiple RGB cameras, continuous synchronized video streams of a target model to be reconstructed in a three-dimensional scene, the streams comprising multiple synchronized video frames; determining the arrangement of a virtual camera array, comprising a plurality of virtual cameras, from the parameters of a three-dimensional light field display device; determining, from the arrangement of the virtual camera array and the parameters of the display device, all sub-pixels required to generate a composite image and their corresponding virtual camera information, using a light field multi-viewpoint encoding method; controlling the corresponding virtual cameras to emit rendering rays based on that sub-pixel and virtual camera information; determining, based on the visual hull algorithm and the multiple synchronized video frames, the three-dimensional intersection and non-intersection points of the rendering rays with the target model, and determining the colors and coordinates of the intersection points and the background color of the non-intersection points; generating the composite image from these colors, coordinates, and background color; and generating the three-dimensional light field from the composite image.
In some embodiments, capturing the continuous synchronized video streams includes: synchronously capturing and saving, with the multiple RGB cameras, video streams of the continuously moving target model in the three-dimensional scene, and performing distortion correction, color correction, and instance segmentation to obtain the continuous video streams of the target model to be reconstructed, the multiple RGB cameras having been calibrated in the three-dimensional scene before synchronized capture.
In some embodiments, the arrangement of the virtual camera array includes the number, positions, and orientations of the virtual cameras.
In some embodiments, determining all required sub-pixels and their corresponding virtual camera information includes: determining the viewing angle range of the composite image from the maximum viewing angle range of the three-dimensional light field display device; and determining the required sub-pixels and their corresponding virtual camera information from the arrangement of the virtual camera array and that viewing angle range.
In some embodiments, controlling the corresponding virtual cameras to emit rendering rays includes: according to the virtual camera number corresponding to each required sub-pixel, controlling the virtual camera with that number to emit the corresponding rendering ray.
In some embodiments, determining the intersection and non-intersection points includes: perspective-projecting the coordinate point at the front of the rendering ray, as the ray searches forward for the surface of the target model to be reconstructed, into the multiple synchronized video frames, and computing all two-dimensional projection point coordinates; determining whether, during the forward search, all two-dimensional projection points lie within the silhouette of the target model; if all projection points lie within the silhouette, determining that the ray front has reached and intersected the model surface, taking the ray-front coordinate as the coordinate of the three-dimensional intersection point, and obtaining the intersection point's color from a corresponding single video frame; if not all projection points lie within the silhouette, determining that the ray front has not yet reached the surface and continuing the forward search; and if the ray-front coordinate exceeds the bounds of the three-dimensional scene containing the target model, stopping the forward search, determining that this rendering ray does not intersect the target model, taking the ray-front coordinate as a non-intersection point, and setting the non-intersection point's color to the background color.
The present invention also provides a visual-hull-based three-dimensional light field generation device, including: multiple RGB cameras for synchronously capturing continuous synchronized video streams of the target model to be reconstructed in a three-dimensional scene, the streams comprising multiple synchronized video frames; an optical computing device for determining the arrangement of a virtual camera array, comprising a plurality of virtual cameras, from the parameters of a three-dimensional light field display device, for determining, from that arrangement and those parameters, all sub-pixels required to generate a composite image and their corresponding virtual camera information using a light field multi-viewpoint encoding method, for controlling the corresponding virtual cameras to emit rendering rays based on that information, for determining, based on the visual hull algorithm and the synchronized video frames, the three-dimensional intersection and non-intersection points of the rendering rays with the target model together with the colors and coordinates of the intersection points and the background color of the non-intersection points, and for generating the composite image from these; and a three-dimensional light field display device for generating the three-dimensional light field from the composite image.
The present invention also provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of any of the above visual-hull-based three-dimensional light field generation methods.
The present invention also provides a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the above visual-hull-based three-dimensional light field generation methods.
The present invention also provides a computer program product, including a computer program which, when executed by a processor, implements the steps of any of the above visual-hull-based three-dimensional light field generation methods.
In the method and device provided by the present invention, the light field multi-viewpoint encoding method determines the sub-pixel information necessary for the composite image, and the visual hull algorithm then reconstructs only the three-dimensional model points and colors corresponding to those sub-pixels. This completes the computation of exactly the necessary sub-pixel information, speeds up composite image computation to achieve real-time display, and makes the displayed content closer to reality, with good transferability and robustness. It improves the perception and utilization of three-dimensional information in light field display, makes light field display of dynamic scenes more flexible, and saves time and computation cost.
Brief Description of the Drawings
To describe the technical solutions of the present invention or the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a visual-hull-based three-dimensional light field generation method provided by the present invention;
Fig. 2 is a schematic diagram of the arrangement of the virtual camera array provided by some embodiments of the present invention;
Fig. 3 is a schematic diagram of the GPU multi-thread computation process provided by the present invention;
Fig. 4 is a schematic diagram of a visual-hull-based three-dimensional light field generation device provided by the present invention;
Fig. 5 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
To make the purpose, technical solutions, and advantages of the present invention clearer, the technical solutions of the present invention are described clearly and completely below with reference to the drawings. The described embodiments are obviously some, not all, of the embodiments of the present invention. Based on these embodiments, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
A visual-hull-based three-dimensional light field generation method of the present invention is described below with reference to Figs. 1-3.
Fig. 1 is a schematic flow chart of a visual-hull-based three-dimensional light field generation method provided by the present invention, including steps S101 to S107:
Step S101: capture, with multiple RGB cameras, continuous synchronized video streams of the target model to be reconstructed in a three-dimensional scene, the streams comprising multiple synchronized video frames.
A multi-camera RGB capture rig is built in the real-world three-dimensional scene, and the cameras are calibrated within it. The cameras synchronously capture and save video streams of the continuous motion of the target model to be reconstructed, and after distortion correction, color correction, and instance segmentation, the continuous synchronized video streams are obtained. The target model to be reconstructed is a three-dimensional model.
In some embodiments, a synchronized multi-camera RGB array can be built to capture a wide-angle, multi-viewpoint light field in three-dimensional space. The array does not require precise adjustment of each camera's position and orientation; an arbitrary distribution is allowed.
In some embodiments, OBS (Open Broadcaster Software), an open-source tool, can be used to capture and locally save the multiple synchronized video streams of the continuously moving target model, and an incremental camera calibration method can be used for multi-camera intrinsic and extrinsic calibration and distortion correction.
In some embodiments, color correction can be performed on the captured video streams based on a color calibration board.
In some embodiments, the target model to be reconstructed can be segmented in each frame of each stream using an instance segmentation algorithm, such as a deep learning algorithm or a green-screen matting method.
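As a concrete illustration of the green-screen matting option, the sketch below derives a silhouette mask from an RGB frame by simple green-dominance thresholding. The `dominance` factor and the function name are hypothetical parameters for illustration, not taken from the patent:

```python
import numpy as np

def green_screen_silhouette(frame: np.ndarray, dominance: float = 1.3) -> np.ndarray:
    """Return a boolean foreground mask for an H x W x 3 uint8 RGB frame.

    A pixel counts as background when its green channel exceeds both the
    red and blue channels by the (hypothetical) dominance factor; all
    remaining pixels form the target model's silhouette.
    """
    r = frame[..., 0].astype(np.float32)
    g = frame[..., 1].astype(np.float32)
    b = frame[..., 2].astype(np.float32)
    background = (g > dominance * r) & (g > dominance * b)
    return ~background
```

A real deployment would typically use a learned matting or segmentation model instead; this thresholding variant only conveys the idea of turning each synchronized frame into a silhouette for the visual hull stage.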
Step S102: determine the arrangement of the virtual camera array, which comprises a plurality of virtual cameras, from the parameters of the three-dimensional light field display device.
A three-dimensional light field display device displays the three-dimensional light field. Different display devices have different parameters, for example different display viewing angles, nearest viewing distances, farthest viewing distances, and optimal viewing distances. In some embodiments, the arrangement of the virtual camera array can be determined from parameters such as the number of viewpoints and the optimal viewing distance of the display device.
In some embodiments, the arrangement of the virtual camera array includes the number, positions, and orientations of the virtual cameras. For example, the virtual cameras may be oriented in a parallel, converging, or off-axis configuration. As an example, Fig. 2 is a schematic diagram of the arrangement of the virtual camera array provided by some embodiments of the present invention. As shown in Fig. 2, the positions of the virtual cameras can be set by choosing the perpendicular distance d from the virtual camera array to the center of the target model and the spacing l between adjacent virtual cameras, according to the optimal viewing distance of the display device and a manually selected viewing angle.
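The placement just described (perpendicular distance d, spacing l between adjacent cameras) can be sketched as follows. The converging orientation, the coordinate frame, and the function name are illustrative assumptions:

```python
import math

def virtual_camera_array(num_cams: int, d: float, l: float,
                         center=(0.0, 0.0, 0.0)):
    """Place num_cams virtual cameras on a horizontal line at perpendicular
    distance d from the model center, spaced l apart, each aimed at the
    center (a converging arrangement; parallel or off-axis layouts would
    only change how the orientation is computed)."""
    cams = []
    cx, cy, cz = center
    for i in range(num_cams):
        # Offset along x so the array is centered on the model.
        x = cx + (i - (num_cams - 1) / 2.0) * l
        pos = (x, cy, cz + d)
        # Unit view direction from the camera toward the model center.
        vx, vy, vz = cx - pos[0], cy - pos[1], cz - pos[2]
        norm = math.sqrt(vx * vx + vy * vy + vz * vz)
        cams.append({"position": pos,
                     "direction": (vx / norm, vy / norm, vz / norm)})
    return cams
```

The number of cameras would come from the display device's viewpoint count, and d from its optimal viewing distance, as the text above describes.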
In some embodiments, the virtual camera arrangement satisfies the following conditions: the overall viewing angle range of the virtual camera array is smaller than the overall capture angle range of the RGB camera array; the total rendering angle range of the virtual camera array lies within the viewing angle range provided by the display device; and the virtual camera array is positioned between the nearest and farthest viewing distances provided by the display device.
Step S103: determine, from the arrangement of the virtual camera array and the parameters of the three-dimensional light field display device, all sub-pixels required to generate the composite image and their corresponding virtual camera information, using a light field multi-viewpoint encoding method.
The required sub-pixels and their corresponding virtual camera information specify each sub-pixel needed for the composite image and which virtual camera provides it. For example, the corresponding virtual camera information can be a sub-image index. As an example, a 4K composite image requires 3840 x 2160 x 3 sub-pixels; if a given sub-pixel has sub-image index 10, its color comes from virtual camera No. 10. Each required sub-pixel and its corresponding virtual camera information can be determined with the light field multi-viewpoint encoding method.
Determining the required sub-pixels and their corresponding virtual camera information through the light field multi-viewpoint encoding method avoids a complete target model reconstruction task and renders only the multi-viewpoint information the composite image actually needs, so the resulting composite image involves no redundant computation. As an example, to generate a 4K composite image, a traditional method must first complete the three-dimensional reconstruction of the target model, render 4K sub-images from multiple viewpoints, and then interleave them into a 4K composite image. The complete reconstruction contains much redundant information. Moreover, since each viewpoint's sub-image has the same 4K resolution as the composite image, only a small fraction of each sub-image's sub-pixels is used: with 60 virtual viewpoints, only 1/60 of each 4K sub-image's sub-pixels contributes to the composite image, while the remaining 59/60 cannot be used, which also slows down the composite image computation.
The present invention instead starts from the composite image: it first determines all sub-pixels required for the composite image and their corresponding virtual camera information, and then computes only the required three-dimensional model point information to obtain those sub-pixels, greatly improving synthesis efficiency and avoiding the waste of redundant pixel information during composite image computation.
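The patent does not give the encoding formula itself, so the sketch below uses a simplified form of the widely used slanted-lenticular interleaving rule to illustrate how a sub-pixel (column x, row y, color channel c) can be mapped to the viewpoint/virtual camera index that supplies its color. The `slope` and `views_per_lens` values are hypothetical display parameters that would come from the concrete light field display device:

```python
def subpixel_view_index(x: int, y: int, c: int, num_views: int,
                        slope: float = 1.0 / 3.0,
                        views_per_lens: float = 4.5) -> int:
    """Map a sub-pixel (pixel column x, row y, channel c in {0,1,2}) to
    the index of the virtual camera whose sub-image supplies its color.

    Simplified slanted-lenticular interleaving: the sub-pixel's phase
    under the slanted lens is computed, then scaled to the viewpoint
    count. All lens parameters here are illustrative assumptions.
    """
    u = (x * 3 + c - y * 3 * slope) % views_per_lens
    return int(u * num_views / views_per_lens) % num_views
```

Evaluating this once per sub-pixel of the composite image yields exactly the (sub-pixel, virtual camera) pairs that step S104 below consumes, without ever rendering full per-viewpoint images.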
In some embodiments, the viewing angle range of the composite image is determined from the maximum viewing angle range of the display device, and the required sub-pixels and their corresponding virtual camera information are determined from the arrangement of the virtual camera array and that range. For example, if the maximum viewing angle of the display device is 60 degrees, the invention computes only the information needed for a 60-degree viewing angle, i.e., the sub-pixel indices and corresponding virtual camera information required for the composite image. By computing only the part of the target model within the effective viewing angle range, rather than a complete three-dimensional model, computational efficiency is improved and real-time performance of the overall pipeline can be guaranteed.
Step S104: control the corresponding virtual cameras to emit rendering rays, based on all sub-pixels required for the composite image and their corresponding virtual camera information.
In some embodiments, according to the virtual camera number corresponding to each required sub-pixel, the virtual camera with that number is controlled to emit the corresponding rendering ray. For example, the corresponding virtual camera can be selected according to the viewpoint index of each required sub-pixel and made to emit a rendering ray to probe the coordinates and color of the three-dimensional model point corresponding to that sub-pixel; that color is the sub-pixel's color. For example, if sub-pixel No. 1 has sub-image index 10, its color comes from virtual camera No. 10, so the pair (sub-pixel No. 1, virtual camera No. 10) causes virtual camera No. 10 to emit a rendering ray probing the coordinates and color of the model point corresponding to sub-pixel No. 1.
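The ray a virtual camera emits for one of its required sub-pixels can be built with a standard pinhole model. The field of view, the camera basis vectors, and the function name below are illustrative assumptions rather than parameters from the patent:

```python
import math

def rendering_ray(cam_pos, cam_dir, right, up, px, py, width, height, fov_deg):
    """Build the rendering ray a virtual camera emits for pixel (px, py).

    cam_pos is the camera position; cam_dir, right, and up form an
    orthonormal camera basis; fov_deg is a hypothetical horizontal field
    of view. Returns (origin, unit direction) of the ray that probes the
    3D model point whose color this sub-pixel should take.
    """
    aspect = height / width
    half_w = math.tan(math.radians(fov_deg) / 2.0)
    # Normalized image-plane coordinates for the pixel center.
    ndc_x = (2.0 * (px + 0.5) / width - 1.0) * half_w
    ndc_y = (1.0 - 2.0 * (py + 0.5) / height) * half_w * aspect
    d = [cam_dir[i] + ndc_x * right[i] + ndc_y * up[i] for i in range(3)]
    norm = math.sqrt(sum(v * v for v in d))
    return cam_pos, tuple(v / norm for v in d)
```

Since each required sub-pixel names exactly one virtual camera, each such ray is generated once, which is what allows the per-sub-pixel work to be distributed across GPU threads as in the Fig. 3 pipeline.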
The sub-image index indicates which sub-image viewpoint the three-dimensional light field display device presents at a given viewing position, and equivalently identifies the virtual camera that generates that sub-pixel. The sub-image index is computed by the light field multi-view coding method.
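The patent does not give the coding formula, but one common form of light field multi-view coding for slanted-lenticular displays (a van Berkel-style mapping) assigns each sub-pixel a viewpoint number from its position under its lenticule. The sketch below is illustrative only; `x_off`, `tan_theta`, and `pitch_subpix` are placeholder calibration parameters, not values from the patent:

```python
import math

def subimage_index(x, y, k, num_views, x_off, tan_theta, pitch_subpix):
    """Sub-image (viewpoint) index of sub-pixel k (0=R, 1=G, 2=B) of
    pixel (x, y), for a slanted-lenticular display.

    Hypothetical parameters: x_off is a horizontal calibration offset,
    tan_theta the lenticule slant, pitch_subpix the lenticule pitch in
    sub-pixel units. A real display would calibrate all three.
    """
    # Horizontal sub-pixel position, shifted by the row's slant offset.
    u = 3 * x + k - x_off - 3 * y * tan_theta
    # Fractional position under one lenticule, scaled to a view number.
    return int(math.floor((u % pitch_subpix) * num_views / pitch_subpix))
```

The returned index plays exactly the role described above: it names the virtual camera whose rendering ray supplies this sub-pixel's color.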
Controlling the corresponding virtual cameras to emit rendering rays amounts to multi-viewpoint rendering of the target model to be reconstructed. Multi-viewpoint rendering is a real-time rendering method for three-dimensional models that can generate virtual viewpoints of the target object under arbitrary observation conditions.
Step S105: based on the visual hull algorithm and the multiple synchronized video frames, determine the three-dimensional points where the rendering rays intersect the target model to be reconstructed and the points where they do not, and determine the color and coordinates of each intersection point and the background color of each non-intersection point.
The visual hull algorithm is a real-time three-dimensional model reconstruction method. From multi-viewpoint images of the target model to be reconstructed and the silhouette contours extracted from them, it computes, via the mathematics of perspective projection, the convex intersection volume of the target object in three-dimensional space, thereby reconstructing the target model.
In some embodiments, as a rendering ray searches forward for the surface of the target model to be reconstructed, the coordinate point at the ray front is perspective-projected into each of the synchronized video frames, and the coordinates of all resulting two-dimensional projection points are computed.
During the forward search, it is checked whether all of these two-dimensional projection points lie inside the silhouette contour of the target model to be reconstructed.
If all projection points lie inside the contour, the ray front is deemed to have reached and intersected the model surface: the coordinate point at the ray front becomes the coordinates of the three-dimensional intersection point, and the color of that point is obtained from a single video frame corresponding to one of the projection points.
If any projection point lies outside the contour, the ray front has not yet reached the surface of the target model, and the forward search continues. If the ray front leaves the bounds of the three-dimensional scene containing the target model, the search stops: the ray is deemed not to intersect the model, the ray front is recorded as a non-intersection point, and the color of that point is set to the background color.
Step S106: generate a composite image based on the colors and coordinates of the three-dimensional intersection points and the background color of the non-intersection points.
Once the color of every sub-pixel in the composite image has been solved, the composite image can be generated from the colors and coordinates of the three-dimensional intersection points and the background color of the non-intersection points.
Step S107: generate a three-dimensional light field based on the composite image.
The three-dimensional light field display device generates a three-dimensional light field from one composite image, thereby displaying one frame. By computing composite images continuously, the light field can generate and display successive frames in real time, achieving real-time video display.
To improve computational efficiency, this three-dimensional light field generation method is implemented with GPU parallel computing: each GPU thread handles the computation for one virtual ray, and the computation is further accelerated with CUDA (Compute Unified Device Architecture). The continuous video streams captured by the real RGB cameras can thus be processed in real time to generate a real-time three-dimensional light field. FIG. 3 is a schematic diagram of the GPU multi-thread computation flow provided by the present invention. As shown in FIG. 3, after the GPU starts, it first reads the data and initializes parameters, then launches the GPU kernel for parallel computation. Through synchronized multi-threaded computation, the coordinates and textures of all model points required for the composite image are obtained, that is, the color of every sub-pixel in the composite image is computed, and the composite image is then sent to the three-dimensional light field display device for display. Each GPU thread handles one virtual ray, and the task for each ray comprises: casting a rendering ray from a virtual camera in the virtual camera array; computing, with the visual hull algorithm, the coordinates of the three-dimensional point where the ray intersects the model to be reconstructed; and computing the texture of that model point. If the ray does not intersect the model, the corresponding non-intersection point is assigned the background color. This reconstructs the model point for that ray and computes the corresponding sub-pixel color in the composite image. Finally, the tasks for many virtual rays are computed synchronously across GPU threads, yielding the coordinates and textures of many three-dimensional model points and producing one composite image.
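As a CPU stand-in for this one-thread-per-ray CUDA decomposition (the actual kernel is not given in the patent), each GPU thread's task can be modeled as one independent call per sub-pixel; `ray_task` is a placeholder for the visual hull probe described above:

```python
def render_composite(width, height, ray_task):
    """CPU stand-in for the patent's CUDA decomposition: one GPU thread
    per virtual ray becomes one independent call per sub-pixel here.
    ray_task(x, y, k) returns the color of sub-pixel k (R, G, or B) of
    pixel (x, y): a hull hit's texture, or the background color.
    """
    image = [[[None] * 3 for _ in range(width)] for _ in range(height)]
    for y in range(height):
        for x in range(width):
            for k in range(3):                 # R, G, B sub-pixels
                # Each iteration is independent of all others, which is
                # what makes the per-ray GPU parallelization possible.
                image[y][x][k] = ray_task(x, y, k)
    return image
```

Because no iteration depends on another, the triple loop maps directly onto a flat grid of GPU threads, one thread per (x, y, k) sub-pixel, which is the structure FIG. 3 describes.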
Based on the same inventive concept, an embodiment of the present invention provides a visual hull-based three-dimensional light field generation apparatus, as shown in FIG. 4. The apparatus described below and the three-dimensional light field generation method described above may be referred to in correspondence with each other.
A visual hull-based three-dimensional light field generation apparatus comprises: multiple RGB cameras 41, configured to synchronously capture continuous synchronized video streams of the target model to be reconstructed in a three-dimensional scene, the streams comprising multiple synchronized video frames; an optical computing device 42, configured to determine the arrangement of a virtual camera array, comprising multiple virtual cameras, based on the parameters of a three-dimensional light field display device; further configured to determine, based on that arrangement and those parameters, using the light field multi-view coding method, the sub-pixels required to generate a composite image and their corresponding virtual camera information; further configured to control the corresponding virtual cameras to emit rendering rays based on all the required sub-pixels and their corresponding virtual camera information; further configured to determine, based on the visual hull algorithm and the synchronized video frames, the three-dimensional intersection points and non-intersection points of the rendering rays with the target model to be reconstructed, along with the color and coordinates of each intersection point and the background color of each non-intersection point; and further configured to generate a composite image from these; and a three-dimensional light field display device 43, configured to generate a three-dimensional light field based on the composite image.
FIG. 5 illustrates the physical structure of an electronic device. As shown in FIG. 5, the electronic device may comprise a processor 510, a communications interface 520, a memory 530, and a communication bus 540, where the processor 510, the communications interface 520, and the memory 530 communicate with one another via the communication bus 540. The processor 510 may invoke logic instructions in the memory 530 to execute the three-dimensional light field generation method, the method comprising: capturing, with multiple RGB cameras, continuous synchronized video streams of the target model to be reconstructed in a three-dimensional scene, the streams comprising multiple synchronized video frames; determining the arrangement of a virtual camera array, comprising multiple virtual cameras, based on the parameters of a three-dimensional light field display device; determining, based on that arrangement and those parameters, using the light field multi-view coding method, all sub-pixels required to generate a composite image and their corresponding virtual camera information; controlling the corresponding virtual cameras to emit rendering rays based on that information; determining, based on the visual hull algorithm and the synchronized video frames, the three-dimensional intersection points and non-intersection points of the rendering rays with the target model, along with the color and coordinates of each intersection point and the background color of each non-intersection point; generating a composite image from the colors and coordinates of the intersection points and the background color of the non-intersection points; and generating a three-dimensional light field based on the composite image.
Furthermore, when the logic instructions in the memory 530 are implemented as software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the solution, may be embodied as a software product stored on a storage medium and comprising instructions that cause a computer device (a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods of the embodiments of the present invention. The storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
In another aspect, the present invention further provides a computer program product comprising a computer program, which may be stored on a non-transitory computer-readable storage medium. When the computer program is executed by a processor, the computer can perform the three-dimensional light field generation method provided above, the method comprising: capturing, with multiple RGB cameras, continuous synchronized video streams of the target model to be reconstructed in a three-dimensional scene, the streams comprising multiple synchronized video frames; determining the arrangement of a virtual camera array, comprising multiple virtual cameras, based on the parameters of a three-dimensional light field display device; determining, based on that arrangement and those parameters, using the light field multi-view coding method, all sub-pixels required to generate a composite image and their corresponding virtual camera information; controlling the corresponding virtual cameras to emit rendering rays based on that information; determining, based on the visual hull algorithm and the synchronized video frames, the three-dimensional intersection points and non-intersection points of the rendering rays with the target model to be reconstructed, and determining the color and coordinates of each intersection point and the background color of each non-intersection point; generating a composite image from the colors and coordinates of the intersection points and the background color of the non-intersection points; and generating a three-dimensional light field based on the composite image.
In yet another aspect, the present invention further provides a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, performs the three-dimensional light field generation method provided above, the method comprising: capturing, with multiple RGB cameras, continuous synchronized video streams of the target model to be reconstructed in a three-dimensional scene, the streams comprising multiple synchronized video frames; determining the arrangement of a virtual camera array, comprising multiple virtual cameras, based on the parameters of a three-dimensional light field display device; determining, based on that arrangement and those parameters, using the light field multi-view coding method, all sub-pixels required to generate a composite image and their corresponding virtual camera information; controlling the corresponding virtual cameras to emit rendering rays based on that information; determining, based on the visual hull algorithm and the synchronized video frames, the three-dimensional intersection points and non-intersection points of the rendering rays with the target model to be reconstructed, and determining the color and coordinates of each intersection point and the background color of each non-intersection point; generating a composite image from the colors and coordinates of the intersection points and the background color of the non-intersection points; and generating a three-dimensional light field based on the composite image.
The apparatus embodiments described above are merely illustrative. Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement this without creative effort.
From the description of the embodiments above, those skilled in the art will clearly understand that each embodiment may be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware. Based on this understanding, the technical solution, in essence, or the part contributing to the prior art, may be embodied as a software product stored on a computer-readable storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and comprising instructions that cause a computer device (a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments or parts thereof.
Finally, it should be noted that the above embodiments merely illustrate the technical solution of the present invention and do not limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions of the foregoing embodiments may still be modified, or some of their technical features replaced by equivalents, without these modifications or replacements departing in essence from the spirit and scope of the technical solutions of the embodiments of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211505959.1A CN115841539A (en) | 2022-11-28 | 2022-11-28 | Three-dimensional light field generation method and device based on visual shell |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115841539A true CN115841539A (en) | 2023-03-24 |
Family
ID=85577391
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116152417A (en) * | 2023-04-19 | 2023-05-23 | 北京天图万境科技有限公司 | Multi-viewpoint perspective space fitting and rendering method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||