CN103136793A - Live-action fusion method based on augmented reality and device using the same - Google Patents
- Publication number
- CN103136793A CN103136793A CN2011103961666A CN201110396166A CN103136793A CN 103136793 A CN103136793 A CN 103136793A CN 2011103961666 A CN2011103961666 A CN 2011103961666A CN 201110396166 A CN201110396166 A CN 201110396166A CN 103136793 A CN103136793 A CN 103136793A
- Authority
- CN
- China
- Prior art keywords
- data
- image processor
- video
- augmented reality
- camera
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Processing Or Creating Images (AREA)
Abstract
The present invention relates to a live-action fusion method and device based on augmented reality. The device comprises an image processor, a camera, and a display, wherein the image data output of the camera is connected to the data input of the image processor, and the output of the image processor is connected to the display. The method is as follows: use the camera to capture video data of the real scene and feed it into the image processor for processing; convert the video data into a sequence of single-frame images, and convert each frame into graphics texture data; use a 3D graphics engine to map this texture data onto a rectangular surface placed beyond the distance of all virtual scenery; render this rectangular surface together with the virtual scenery generated by the image processor in the 3D graphics engine, producing image video that combines the virtual and the real. The invention greatly improves the realism of simulation training for observation, aiming and tracking devices, and extends such simulation training from indoors to outdoors.
Description
Technical Field
The present invention relates to image-based augmented reality vision technology, and in particular to a live-action fusion method and device based on augmented reality.
Background Art
The simulation training equipment currently in wide use for various observation, aiming and tracking devices is mainly based on virtual reality technology, in which purely virtual two-dimensional or three-dimensional scenes are generated by computer. The depth, layering, weather effects and scene changes of such scenes differ markedly from real scenery, so scene fidelity is poor. In addition, because purely virtual scenes are used, the vast majority of such equipment is used indoors and cannot be combined with other outdoor training, which limits the scope of use of the training devices.
Summary of the Invention
In view of the shortcomings of the visual simulation produced by existing simulation training equipment, namely poor scene fidelity and the inability to combine with other outdoor training, the technical problem to be solved by the present invention is to provide an augmented-reality live-action fusion method and device that is highly realistic, easy to use, applicable outdoors, and closer to a real combat environment.
To solve the above technical problem, the technical solution adopted by the present invention is as follows:
The live-action fusion device based on augmented reality of the present invention comprises an image processor, a camera and a display, wherein the image data output of the camera is connected to the data input of the image processor, and the output of the image processor is connected to the display.
The camera is an industrial color digital camera with progressive scanning, a sensor aspect ratio of 4:3, and an IEEE-1394b output interface; it can output RGB video data at a resolution of no less than 640×480 pixels and a frame rate of no less than 25 fps. Shutter and gain are controllable, with automatic exposure control.
The display is a binocular glasses-type display with a resolution of no less than 640×480 pixels, a refresh rate of no less than 60 Hz, a color depth of no less than 16 bits, and a VGA signal input interface.
The live-action fusion method based on augmented reality of the present invention comprises the following steps:
use the camera to capture video data of the real scene and feed it into the image processor for processing;
convert the video data into a sequence of single-frame images, and convert each frame into graphics texture data;
use a 3D graphics engine to map this texture data onto a rectangular surface placed beyond the distance of all virtual scenery;
render this rectangular surface together with the virtual scenery generated by the image processor in the 3D graphics engine, producing image video that combines the virtual and the real.
The process of mapping this texture data onto a rectangular surface beyond the distance of all virtual scenery with the 3D graphics engine is as follows: use the 3D graphics engine to draw, in the space beyond the distance of all virtual scenery, a rectangular surface of the same size as the engine's FOV cross-section; apply texture filtering to the texture data with the 3D graphics engine; and bind this texture data to the rectangular surface.
When the camera captures video data of the real scene, a multi-threaded capture method is used, with two asynchronously executing capture threads created for each video stream to be captured.
The beneficial effects of the present invention are:
1. The realism of simulation training for observation, aiming and tracking devices is greatly improved.
At present, the vast majority of computer visual simulation training equipment for observation, aiming and tracking devices generates purely virtual two-dimensional or three-dimensional scenes by computer. The depth, layering, weather effects and scene changes of such scenes differ markedly from real scenery, so scene fidelity is poor. The present invention combines real outdoor scenery with computer-generated virtual scenery; the main body of the scene is the real outdoor view, which greatly improves the realism of the training scene.
2. Simulation training for observation, aiming and tracking devices is extended from indoors to outdoors.
Because most current simulation training equipment for observation, aiming and tracking devices uses purely virtual training scenes, the vast majority of it can only be used indoors. The device of the present invention can be used outdoors, which not only broadens the scope of use of the simulation training device, but also allows simulation training to be combined with other outdoor training subjects, helping to improve the comprehensive skills of trainees.
Brief Description of the Drawings
Fig. 1 is a block diagram of the structure of the device of the present invention;
Fig. 2 is a flowchart of the live-action fusion method of the present invention;
Fig. 3 is a schematic diagram of the CVid class framework in the method of the present invention;
Fig. 4 is a schematic diagram of the CVid class model.
Detailed Description of the Embodiments
The present invention is described in further detail below with reference to the accompanying drawings.
As shown in Fig. 1, the live-action fusion device based on augmented reality of the present invention comprises an image processor, a camera and a display, wherein the image data output of the camera is connected to the data input of the image processor, and the output of the image processor is connected to the display.
In this embodiment, the image processor is a high-performance computer connected to the color digital camera through an IEEE-1394b interface and to the binocular glasses-type display through a VGA interface. The graphics engine is a software platform or software environment based on a graphics library (such as OpenGL or DirectX) running on the computer. The camera is an industrial color digital camera with progressive scanning, a sensor aspect ratio of 4:3, and an IEEE-1394b output interface; it can output RGB video data at a resolution of no less than 640×480 pixels and a frame rate of no less than 25 fps. Shutter and gain are controllable, with automatic exposure control (automatic shutter, automatic gain, and maximum-gain exposure control). This embodiment uses a Flea2 color digital camera produced by Point Grey Research. During installation, the optical axis of the digital color camera coincides with the line of sight of the simulated observation, aiming and tracking device. The binocular glasses-type display is a glasses-type display with a resolution of no less than 640×480 pixels, a refresh rate of no less than 60 Hz, a color depth of no less than 16 bits, and a VGA signal input interface. The binocular glasses-type display used in this embodiment has a resolution of 800×600, 32-bit color depth, a refresh rate of 60 Hz, and a VGA signal input interface.
The live-action fusion method based on augmented reality of the present invention comprises the following steps:
use the camera to capture video data of the real scene and feed it into the image processor for processing;
convert the video data into a sequence of single-frame images, and convert each frame into texture data;
use a 3D graphics engine to map this texture data onto a rectangular surface placed beyond the distance of all virtual scenery;
render this rectangular surface together with the virtual scenery generated by the image processor in the 3D graphics engine, producing image video that combines the virtual and the real.
The process of mapping this texture data onto a rectangular surface beyond the distance of all virtual scenery with the 3D graphics engine is as follows: use the 3D graphics engine to draw, in the space beyond the distance of all virtual scenery, a rectangular surface of the same size as the engine's FOV cross-section; apply texture filtering to the texture data with the 3D graphics engine; and bind this texture data to the rectangular surface.
When the camera captures video data of the real scene, a multi-threaded capture method is used, with two asynchronously executing capture threads created for each video stream to be captured.
As shown in Fig. 2, the implementation of the live-action fusion method in this embodiment is as follows:
The video data from the digital camera is processed by a CVid class object into a sequence of texture data.
Fig. 3 is a schematic diagram of the CVid class framework. The CVid class is built on the ARToolkit and DSVL libraries and implements video capture and the conversion of single video frames into texture data. The CVid class object mainly implements continuous capture of camera video and acquisition of single-frame images.
Figs. 2 and 4 show the CVid class model. The constructor CVid() obtains the .xml file describing the camera settings. RunCapture() reads these settings, creates the capture buffers, starts the capture threads, and begins capturing video. The high-performance capture method is an asynchronously executing multi-threaded capture method: two asynchronously executing capture threads are created for each video stream to be captured, to guarantee continuous, uninterrupted video capture. DummyThreadProc(LPVOID lpParameter) is the dedicated capture thread procedure; to guarantee the performance of the capture process, two DummyThreadProc threads are started so that video can be captured without interruption. SceneBlend() blends the texture data of the real scene with the virtual scene.
Its processing is as follows: call the VideoGetImage() method of the CVid object to obtain the current video frame and save it in a global variable; turn the obtained frame into a texture; use the 3D graphics engine to draw, in the space beyond the distance of all virtual scenery (set to 10000 meters in this embodiment), a rectangular surface exactly the same size as the engine's FOV cross-section; apply texture filtering to the texture data with the 3D graphics engine; bind the texture data to this rectangular surface and blend it with the virtual scene. Then call the VideoGetNext() method of the CVid object to obtain the next frame. After SceneBlend() is called and executed, the drawn rectangle and the mapped texture enter the frame buffer, and the 3D graphics engine then internally performs depth testing, rendering and other subsequent operations, finally producing one frame of combined virtual-and-real graphics. Executing this process cyclically in the main loop of the simulation software produces video that combines the virtual and the real, which is finally sent to the binocular glasses-type display.
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011103961666A CN103136793A (en) | 2011-12-02 | 2011-12-02 | Live-action fusion method based on augmented reality and device using the same |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011103961666A CN103136793A (en) | 2011-12-02 | 2011-12-02 | Live-action fusion method based on augmented reality and device using the same |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103136793A true CN103136793A (en) | 2013-06-05 |
Family
ID=48496576
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2011103961666A Pending CN103136793A (en) | 2011-12-02 | 2011-12-02 | Live-action fusion method based on augmented reality and device using the same |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103136793A (en) |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103761734A (en) * | 2014-01-08 | 2014-04-30 | 北京航空航天大学 | Binocular stereoscopic video scene fusion method for keeping time domain consistency |
CN104539925A (en) * | 2014-12-15 | 2015-04-22 | 北京邮电大学 | 3D scene reality augmentation method and system based on depth information |
CN105005836A (en) * | 2014-04-18 | 2015-10-28 | 北京睿蓝空信息技术有限公司 | Site integrated there-dimensional system and site integrated three-dimensional management platform |
CN105488840A (en) * | 2015-11-26 | 2016-04-13 | 联想(北京)有限公司 | Information processing method and electronic equipment |
CN105787994A (en) * | 2016-01-26 | 2016-07-20 | 王创 | Entertainment method using 3D technology for simulating street scenery |
CN106027855A (en) * | 2016-05-16 | 2016-10-12 | 深圳迪乐普数码科技有限公司 | Method and terminal for realizing virtual rocker arm |
CN106130886A (en) * | 2016-07-22 | 2016-11-16 | 聂迪 | The methods of exhibiting of extension information and device |
CN106713988A (en) * | 2016-12-09 | 2017-05-24 | 福建星网视易信息系统有限公司 | Beautifying method and system for virtual scene live |
CN107154197A (en) * | 2017-05-18 | 2017-09-12 | 河北中科恒运软件科技股份有限公司 | Immersion flight simulator |
CN107222718A (en) * | 2017-06-20 | 2017-09-29 | 中国人民解放军78092部队 | Actual situation combination remote exhibition device and method based on augmented reality |
CN107519640A (en) * | 2016-12-07 | 2017-12-29 | 福建蓝帽子互动娱乐科技股份有限公司 | One kind fishing toy, system and method |
CN107682688A (en) * | 2015-12-30 | 2018-02-09 | 视辰信息科技(上海)有限公司 | Video real time recording method and recording arrangement based on augmented reality |
CN108536286A (en) * | 2018-03-22 | 2018-09-14 | 上海皮格猫信息科技有限公司 | A kind of VR work auxiliary system, method and the VR equipment of fusion real-world object |
WO2019041351A1 (en) * | 2017-09-04 | 2019-03-07 | 艾迪普(北京)文化科技股份有限公司 | Real-time aliasing rendering method for 3d vr video and virtual three-dimensional scene |
CN114185320A (en) * | 2020-09-15 | 2022-03-15 | 中国科学院软件研究所 | Evaluation method, device and system for unmanned system cluster and storage medium |
CN114419293A (en) * | 2022-01-26 | 2022-04-29 | 广州鼎飞航空科技有限公司 | Augmented reality data processing method, device and equipment |
CN115661419A (en) * | 2022-12-26 | 2023-01-31 | 广东新禾道信息科技有限公司 | Real-scene 3D augmented reality visualization method and system |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1609895A (en) * | 2003-10-20 | 2005-04-27 | 上海科技馆 | Method for generating animal image moved along with person |
CN101021952A (en) * | 2007-03-23 | 2007-08-22 | 北京中星微电子有限公司 | Method and apparatus for realizing three-dimensional video special efficiency |
- 2011-12-02 CN CN2011103961666A patent/CN103136793A/en active Pending
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1609895A (en) * | 2003-10-20 | 2005-04-27 | 上海科技馆 | Method for generating animal image moved along with person |
CN101021952A (en) * | 2007-03-23 | 2007-08-22 | 北京中星微电子有限公司 | Method and apparatus for realizing three-dimensional video special efficiency |
Non-Patent Citations (1)
Title |
---|
WANG XUEWEI et al.: "Research on Dynamic Infrared Scene Generation Based on Augmented Reality", Infrared and Laser Engineering, vol. 37, 30 June 2008 (2008-06-30), pages 358-361 *
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103761734B (en) * | 2014-01-08 | 2016-09-28 | 北京航空航天大学 | A kind of binocular stereoscopic video scene fusion method of time domain holding consistency |
CN103761734A (en) * | 2014-01-08 | 2014-04-30 | 北京航空航天大学 | Binocular stereoscopic video scene fusion method for keeping time domain consistency |
CN105005836A (en) * | 2014-04-18 | 2015-10-28 | 北京睿蓝空信息技术有限公司 | Site integrated there-dimensional system and site integrated three-dimensional management platform |
CN104539925A (en) * | 2014-12-15 | 2015-04-22 | 北京邮电大学 | 3D scene reality augmentation method and system based on depth information |
CN105488840B (en) * | 2015-11-26 | 2019-04-23 | 联想(北京)有限公司 | A kind of information processing method and electronic equipment |
CN105488840A (en) * | 2015-11-26 | 2016-04-13 | 联想(北京)有限公司 | Information processing method and electronic equipment |
CN107682688A (en) * | 2015-12-30 | 2018-02-09 | 视辰信息科技(上海)有限公司 | Video real time recording method and recording arrangement based on augmented reality |
CN105787994A (en) * | 2016-01-26 | 2016-07-20 | 王创 | Entertainment method using 3D technology for simulating street scenery |
CN106027855A (en) * | 2016-05-16 | 2016-10-12 | 深圳迪乐普数码科技有限公司 | Method and terminal for realizing virtual rocker arm |
CN106027855B (en) * | 2016-05-16 | 2019-06-25 | 深圳迪乐普数码科技有限公司 | A kind of implementation method and terminal of virtual rocker arm |
CN106130886A (en) * | 2016-07-22 | 2016-11-16 | 聂迪 | The methods of exhibiting of extension information and device |
CN107519640A (en) * | 2016-12-07 | 2017-12-29 | 福建蓝帽子互动娱乐科技股份有限公司 | One kind fishing toy, system and method |
CN106713988A (en) * | 2016-12-09 | 2017-05-24 | 福建星网视易信息系统有限公司 | Beautifying method and system for virtual scene live |
CN107154197A (en) * | 2017-05-18 | 2017-09-12 | 河北中科恒运软件科技股份有限公司 | Immersion flight simulator |
CN107222718A (en) * | 2017-06-20 | 2017-09-29 | 中国人民解放军78092部队 | Actual situation combination remote exhibition device and method based on augmented reality |
US11076142B2 (en) | 2017-09-04 | 2021-07-27 | Ideapool Culture & Technology Co., Ltd. | Real-time aliasing rendering method for 3D VR video and virtual three-dimensional scene |
WO2019041351A1 (en) * | 2017-09-04 | 2019-03-07 | 艾迪普(北京)文化科技股份有限公司 | Real-time aliasing rendering method for 3d vr video and virtual three-dimensional scene |
CN108536286A (en) * | 2018-03-22 | 2018-09-14 | 上海皮格猫信息科技有限公司 | A kind of VR work auxiliary system, method and the VR equipment of fusion real-world object |
CN114185320A (en) * | 2020-09-15 | 2022-03-15 | 中国科学院软件研究所 | Evaluation method, device and system for unmanned system cluster and storage medium |
CN114185320B (en) * | 2020-09-15 | 2023-10-24 | 中国科学院软件研究所 | Evaluation method, device and system for unmanned system cluster and storage medium |
CN114419293A (en) * | 2022-01-26 | 2022-04-29 | 广州鼎飞航空科技有限公司 | Augmented reality data processing method, device and equipment |
CN115661419A (en) * | 2022-12-26 | 2023-01-31 | 广东新禾道信息科技有限公司 | Real-scene 3D augmented reality visualization method and system |
CN115661419B (en) * | 2022-12-26 | 2023-04-28 | 广东新禾道信息科技有限公司 | Real-scene 3D augmented reality visualization method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103136793A (en) | Live-action fusion method based on augmented reality and device using the same | |
CN110650368B (en) | Video processing method, apparatus and electronic device | |
CN100594519C (en) | A Method of Real-time Generating Augmented Reality Environment Illumination Model Using Spherical Panoramic Camera | |
CN113099204B (en) | Remote live-action augmented reality method based on VR head-mounted display equipment | |
CN112533002A (en) | Dynamic image fusion method and system for VR panoramic live broadcast | |
CN110060351B (en) | RGBD camera-based dynamic three-dimensional character reconstruction and live broadcast method | |
CN101277454A (en) | A real-time stereoscopic video generation method based on binocular cameras | |
CN113891060B (en) | Free viewpoint video reconstruction method, play processing method, device and storage medium | |
CN101631257A (en) | Method and device for realizing three-dimensional playing of two-dimensional video code stream | |
CN104427230B (en) | The method of augmented reality and the system of augmented reality | |
US11783445B2 (en) | Image processing method, device and apparatus, image fitting method and device, display method and apparatus, and computer readable medium | |
CN107330964A (en) | A kind of display methods and system of complex three-dimensional object | |
CN107240147A (en) | Image rendering method and system | |
CN111091491B (en) | Panoramic video pixel redistribution method and system for equidistant cylindrical projection | |
CN114494559A (en) | A 3D rendering fusion method, system and medium based on multi-GPU collaboration | |
CN106453913A (en) | Method and apparatus for previewing panoramic contents | |
CN114998559A (en) | Real-time remote rendering method for mixed reality binocular stereoscopic vision image | |
EP3057316B1 (en) | Generation of three-dimensional imagery to supplement existing content | |
CN107562185A (en) | It is a kind of based on the light field display system and implementation method of wearing VR equipment | |
CN105578172A (en) | Naked-eye 3D video displaying method based on Unity 3D engine | |
CN108564654B (en) | Picture entering mode of three-dimensional large scene | |
CN108765582B (en) | Panoramic picture display method and device | |
CN202331865U (en) | Real scene fusion device based on augmented reality | |
CN109961395A (en) | The generation of depth image and display methods, device, system, readable medium | |
CN109272445A (en) | Panoramic video joining method based on Sphere Measurement Model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C12 | Rejection of a patent application after its publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20130605 |