CN113115015A - Multi-source information fusion visualization method and system - Google Patents
- Publication number
- CN113115015A (application CN202110214911.4A)
- Authority
- CN
- China
- Prior art keywords
- dimensional
- video
- map
- source information
- fusion
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/29—Geographical information databases
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/293—Generating mixed stereoscopic images; Generating mixed monoscopic and stereoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Databases & Information Systems (AREA)
- Software Systems (AREA)
- Geometry (AREA)
- Signal Processing (AREA)
- General Engineering & Computer Science (AREA)
- Remote Sensing (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Computer Graphics (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention provides a multi-source information fusion visualization method and system, comprising the following steps: acquiring sensor data; fusing the sensor data with three-dimensional geographic information; performing linked switching between the fused three-dimensional geographic information and a two-dimensional planar map; and performing realistic simulation and video fusion on a three-dimensional map roaming scene. Based on a three-dimensional geographic information system, the method and system fuse video into a large scene while locating it on the three-dimensional map, play surveillance video in windows for multi-window combined monitoring, and switch to roaming within a specific small scene for fine-grained monitoring, thereby improving the application value of video analysis and artificial intelligence.
Description
Technical Field
The invention relates to the field of three-dimensional visualization of geographic information, and in particular to a multi-source information fusion visualization method and system.
Background
At present, augmented reality systems based on video fusion give people a fast, intuitive way to understand multi-channel video imagery and are widely applied in fields such as video surveillance, urban virtual simulation, and remote robot control. Current methods, however, simply embed video into a three-dimensional scene: they rarely consider the spatial position and spatial information of the video, so the sense of realism is insufficient and the results fall short of practical application. Meanwhile, when rendering a video-enhanced three-dimensional scene, loading a large-scale three-dimensional model together with video data slows scene display considerably, because video imagery carries a large data volume and must be updated dynamically in real time, consuming substantial system resources. In summary, fusing multiple videos with a three-dimensional scene at the correct spatial positions to improve realism, and improving the efficiency of scene display, are the two major problems that current multi-video and three-dimensional scene fusion technology urgently needs to solve; research on these two problems therefore has important practical significance and application value.
In the prior art, video fusion based on a single three-dimensional engine has a limited application range compared with video fusion in a three-dimensional geographic information system: it cannot form a city-wide, national, or larger-scale three-dimensional visualization platform, display is confined to a single scene, only stitched views within a small-range scene or a limited display area are provided on top of the video fusion, and the information and functions offered are also very limited.
Disclosure of Invention
Aiming at the problems in the prior art, the embodiment of the invention provides a multi-source information fusion visualization method and system.
The invention provides a multi-source information fusion visualization method, which comprises the following steps:
acquiring sensor data;
fusing the sensor data with three-dimensional geographic information;
performing linked switching between the fused three-dimensional geographic information and the two-dimensional planar map;
and performing realistic simulation and video fusion on the three-dimensional map roaming scene.
According to the multi-source information fusion visualization method provided by the invention, the sensor data specifically includes:
video data and indoor and outdoor positioning data of vehicles and personnel.
According to the multi-source information fusion visualization method provided by the invention, fusing the sensor data with the three-dimensional geographic information specifically comprises:
converting the video data into an image model with a 16:9 aspect ratio using Blender;
stitching the image models using the osgEarth engine;
and loading the stitched image models onto the three-dimensional earth.
According to the multi-source information fusion visualization method provided by the invention, the linked switching between the fused three-dimensional geographic information and the two-dimensional planar map specifically comprises:
switching between the three-dimensional earth and the two-dimensional planar map using UDP messages;
and calling a third-party online map to link the three-dimensional earth with the two-dimensional planar map.
According to the multi-source information fusion visualization method provided by the invention, performing realistic simulation and video fusion on the three-dimensional map roaming scene specifically comprises:
performing realistic simulation of the three-dimensional map roaming scene using the Enviro - Sky and Weather plug-in for Unity3D;
and stitching the video data using the osgEarth engine.
The invention also provides a multi-source information fusion visualization system, which comprises:
the data acquisition module is used for acquiring sensor data;
the three-dimensional information fusion module is used for fusing the sensor data with three-dimensional geographic information;
the map switching module is used for performing linked switching between the fused three-dimensional geographic information and the two-dimensional planar map;
and the three-dimensional map fusion module is used for performing realistic simulation and video fusion on the three-dimensional map roaming scene.
According to the multi-source information fusion visualization system provided by the invention, the three-dimensional information fusion module is specifically used for:
converting the video data into an image model with a 16:9 aspect ratio using Blender;
stitching the image models using the osgEarth engine;
and loading the stitched image models onto the three-dimensional earth.
According to the multi-source information fusion visualization system provided by the invention, the three-dimensional map fusion module is specifically used for:
performing realistic simulation of the three-dimensional map roaming scene using the Enviro - Sky and Weather plug-in for Unity3D;
and stitching the video data using the osgEarth engine.
The invention also provides an electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of any one of the multi-source information fusion visualization methods described above.
The present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the multi-source information fusion visualization method as described in any of the above.
The multi-source information fusion visualization method and system provided by the invention realize the function of video fusion in a large scene and positioning in a three-dimensional map based on a three-dimensional geographic information system, can play a monitoring video to a window, realize multi-window combined monitoring, can switch to specific small scene roaming, and realize fine monitoring, thereby improving the application values of video analysis and artificial intelligence.
Drawings
To illustrate the technical solutions of the present invention or the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of a multi-source information fusion visualization method provided by the invention;
FIG. 2 is a schematic structural diagram of a multi-source information fusion visualization system provided by the present invention;
fig. 3 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that in the description of the embodiments of the present invention, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises it. Terms such as "upper" and "lower" indicate orientations or positional relationships based on those shown in the drawings; they are used only for convenience and simplicity of description, and do not indicate or imply that the devices or elements referred to must have a specific orientation or be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention. Unless expressly stated or limited otherwise, the terms "mounted," "connected," and "coupled" are to be understood broadly: for example, a connection may be fixed, detachable, or integral; mechanical or electrical; direct, or indirect through an intermediate medium, or internal between two elements. Those skilled in the art can understand the specific meanings of the above terms in the present invention according to the specific situation.
The multi-source information fusion visualization method and system provided by the embodiment of the invention are described below with reference to fig. 1 to 3.
Fig. 1 is a schematic flow chart of a multi-source information fusion visualization method provided by the present invention, as shown in fig. 1, including but not limited to the following steps:
step S1: sensor data is acquired.
Specifically, the invention employs various sensors to collect sensor data, such as video data, indoor and outdoor positioning data of vehicles and personnel, and alarm data.
The video data is processed within the three-dimensional map scene and fused with the three-dimensional scene. Each video corresponds to a spatial position in the three-dimensional scene and is placed at the real location it films: for example, surveillance video of an intersection is pasted at the corresponding intersection in the three-dimensional scene, achieving the effect of fusing the video into three-dimensional space.
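The anchoring idea described above can be sketched as a small registry that records, for each video feed, the geographic pose of the place it films. This is an illustrative sketch only, not the patent's implementation; all names (`CameraFeed`, `register_feed`, the sample feed id) are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CameraFeed:
    """A surveillance feed anchored at the real-world pose it films (hypothetical model)."""
    feed_id: str
    lon: float      # longitude, degrees
    lat: float      # latitude, degrees
    alt: float      # altitude, metres
    heading: float  # camera azimuth, degrees clockwise from north

# Registry: feed id -> geographic pose, so a renderer could paste each
# video quad at the intersection it actually shows.
registry: dict[str, CameraFeed] = {}

def register_feed(feed: CameraFeed) -> None:
    registry[feed.feed_id] = feed

def pose_of(feed_id: str) -> tuple[float, float, float]:
    f = registry[feed_id]
    return (f.lon, f.lat, f.alt)

register_feed(CameraFeed("crossing-07", lon=116.397, lat=39.909, alt=45.0, heading=180.0))
print(pose_of("crossing-07"))  # (116.397, 39.909, 45.0)
```

In a real system the pose would come from camera calibration or survey data; the point is only that every feed carries an explicit spatial position rather than being embedded ad hoc.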
Step S2: and fusing the sensor data with the three-dimensional geographic information.
Specifically, step S2 comprises:
converting the video data into an image model with a 16:9 aspect ratio using Blender;
stitching the image models using the osgEarth engine;
and finally, loading the stitched image models onto the three-dimensional earth.
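The 16:9 conversion step amounts to computing the smallest 16:9 box that encloses an arbitrary video frame, padding (letterboxing) the short side. A minimal sketch of that geometry, independent of Blender:

```python
def fit_16_9(width: int, height: int) -> tuple[int, int]:
    """Return the smallest 16:9 box enclosing a width x height frame,
    padding (letterboxing) the shorter dimension. Illustrative only."""
    if width * 9 >= height * 16:           # frame too wide -> pad height
        return width, -(-width * 9 // 16)  # ceil(width * 9 / 16)
    return -(-height * 16 // 9), height    # frame too tall -> pad width

print(fit_16_9(1920, 1080))  # (1920, 1080): already 16:9, unchanged
print(fit_16_9(1440, 1080))  # (1920, 1080): 4:3 frame widened to 16:9
```

The actual pixel resampling would be done in Blender (or any image pipeline); this only shows the target dimensions the conversion aims for.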
Step S3: and performing linkage switching on the fused three-dimensional geographic information and the two-dimensional plane map.
Specifically, step S3 specifically includes:
switching between the three-dimensional earth and the two-dimensional planar map using UDP messages;
and calling a third-party online map to link the three-dimensional earth with the two-dimensional planar map.
Preferably, the called third-party map links the three-dimensional earth with the two-dimensional planar map: moving the viewpoint in the three-dimensional scene drives the corresponding viewpoint in the two-dimensional map, so that the intuitive three-dimensional scene and the two-dimensional geographic information are displayed simultaneously.
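The UDP-based linkage can be sketched as a tiny viewpoint message that the 3D globe sends to the 2D map process whenever the camera moves. The JSON payload format below is an assumption for illustration; the patent does not specify the message layout.

```python
import json

def encode_viewpoint(lon: float, lat: float, alt: float, mode: str) -> bytes:
    """Serialise a camera viewpoint as a UDP payload (payload format is hypothetical)."""
    if mode not in ("3d", "2d"):
        raise ValueError("unknown view mode")
    return json.dumps({"lon": lon, "lat": lat, "alt": alt, "mode": mode}).encode()

def decode_viewpoint(payload: bytes) -> dict:
    msg = json.loads(payload.decode())
    if msg["mode"] not in ("3d", "2d"):
        raise ValueError("unknown view mode")
    return msg

# The 3D globe would send such a datagram to the 2D map process, e.g.:
#   sock.sendto(encode_viewpoint(116.4, 39.9, 500.0, "2d"), ("127.0.0.1", 9999))
msg = decode_viewpoint(encode_viewpoint(116.4, 39.9, 500.0, "2d"))
print(msg["mode"])  # 2d
```

Because UDP is connectionless, a lost viewpoint message is simply superseded by the next one, which suits continuous view linkage well.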
Step S4: and performing reality simulation and video fusion on the three-dimensional map roaming scene.
Specifically, step S4 specifically includes:
the navigation scene of the three-dimensional map is actually simulated by adopting the Enviro-Sky and Weather plug-in of unity3D, and the real environment including cloudy, sunny, rainy and snowy scenes and the like is simulated in a small scene, so that the change of time and Weather is realized.
And performing video splicing on each video data by using an osgearth engine, and performing fixed-point real-time monitoring.
The invention adopts a plurality of video windows to realize the simultaneous monitoring of the conditions of a plurality of places on the three-dimensional earth and the video windows. The video in the three-dimensional scene can be called out through clicking the video by a right button, and a certain window below is selected to be played, wherein the real-time monitoring video is played. The video is played by calling the FFmpeg library.
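The right-click-to-window routing can be modelled as a dispatcher mapping feed ids onto a fixed set of playback windows. This is a hypothetical sketch of the bookkeeping only; actual playback would hand the stream to FFmpeg.

```python
class VideoWallDispatcher:
    """Route a feed selected in the 3D scene to one of N playback windows.
    Hypothetical sketch; real playback would pass the stream URL to FFmpeg."""

    def __init__(self, n_windows: int):
        self.windows: list[str | None] = [None] * n_windows

    def play(self, feed_id: str, window: int) -> None:
        # Assigning a feed replaces whatever was playing in that window.
        self.windows[window] = feed_id

    def playing(self) -> dict[int, str]:
        # Map of occupied windows -> feed currently shown there.
        return {i: f for i, f in enumerate(self.windows) if f is not None}

wall = VideoWallDispatcher(4)
wall.play("crossing-07", 0)
wall.play("lobby-cam", 2)
print(wall.playing())  # {0: 'crossing-07', 2: 'lobby-cam'}
```

Keeping this mapping separate from the renderer lets several windows show the same feed or be reassigned without touching the 3D scene graph.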
The three-dimensional geographic information system is combined with a game engine: the former displays large outdoor scenes, while the latter displays more detailed indoor scenes. The platform can thus present a three-dimensional map, display three-dimensional geographic information such as terrain and cities in the manner of Google Earth, and present campus and indoor information in the manner of a game. A two-dimensional map excels at showing macroscopic abstract information, the three-dimensional geographic information system excels at showing macroscopic concrete information, and the game engine excels at showing campus and indoor scenes. A game engine's rendering quality is clearly better than that of a three-dimensional geographic information system, making it more suitable for displaying three-dimensional detail; combined with a building information system, it can faithfully present in-building information such as rooms and pipelines.
The invention integrates the strengths of WPF (Windows Presentation Foundation), OSG (OpenSceneGraph), and Unity: the surveillance video is pasted as a dynamic texture at its real position in OSG, while all current surveillance videos can be viewed in WPF and played in small windows. Unity presents a specific small scene, so that video fusion in the large scene is accompanied by positioning on the three-dimensional map; surveillance video can be played to a window for multi-window combined monitoring, and the view can switch to roaming within a specific small scene, where video fusion is also performed, realizing fine-grained monitoring.
Based on a three-dimensional geographic information system (3D GIS), the invention realizes visual fusion of three-dimensional geographic information over a global wide area, and its video fusion technology based on three-dimensional geographic information has natural temporal and spatial advantages in mining and processing three-dimensional visual data. With a built-in geographic coordinate system following unified international standards (longitude, latitude, altitude, and so on), the platform achieves accurate positioning and spatio-temporal dynamic analysis of the fused three-dimensional video, improving the application value of video analysis and artificial intelligence.
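Positioning by longitude, latitude, and altitude ultimately means converting those geodetic coordinates into the Cartesian frame in which the three-dimensional earth is rendered. The standard WGS84 geodetic-to-ECEF conversion shows that step; this is textbook geodesy offered for illustration, not code from the patent.

```python
import math

def geodetic_to_ecef(lon_deg: float, lat_deg: float, alt_m: float) -> tuple[float, float, float]:
    """WGS84 geodetic coordinates -> Earth-centred, Earth-fixed (ECEF) metres.
    Standard formula, used here only to illustrate how longitude, latitude,
    and altitude give an unambiguous position on the three-dimensional earth."""
    a = 6378137.0            # WGS84 semi-major axis, m
    f = 1 / 298.257223563    # WGS84 flattening
    e2 = f * (2 - f)         # first eccentricity squared
    lon, lat = math.radians(lon_deg), math.radians(lat_deg)
    n = a / math.sqrt(1 - e2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - e2) + alt_m) * math.sin(lat)
    return x, y, z

x, y, z = geodetic_to_ecef(0.0, 0.0, 0.0)
print(round(x))  # 6378137: equator at the prime meridian sits one semi-major axis from the centre
```

Engines such as osgEarth perform an equivalent conversion internally when placing models on the globe.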
Based on any of the above embodiments, fig. 2 is a schematic structural diagram of the multi-source information fusion visualization system provided by the present invention, as shown in fig. 2, including but not limited to:
the data acquisition module is used for acquiring sensor data;
the three-dimensional information fusion module is used for fusing the sensor data with three-dimensional geographic information;
the map switching module is used for performing linked switching between the fused three-dimensional geographic information and the two-dimensional planar map;
and the three-dimensional map fusion module is used for performing realistic simulation and video fusion on the three-dimensional map roaming scene.
Based on any of the above embodiments, further, the three-dimensional information fusion module is specifically configured to:
converting the video data into an image model with a 16:9 aspect ratio using Blender;
stitching the image models using the osgEarth engine;
and loading the stitched image models onto the three-dimensional earth.
Based on any of the above embodiments, further, the three-dimensional map fusion module is specifically configured to:
performing realistic simulation of the three-dimensional map roaming scene using the Enviro - Sky and Weather plug-in for Unity3D;
and stitching the video data using the osgEarth engine.
It should be noted that, in specific execution, the multi-source information fusion visualization system provided in the embodiment of the present invention may be implemented based on the multi-source information fusion visualization method described in any of the above embodiments, and details of this embodiment are not described herein.
Fig. 3 is a schematic structural diagram of an electronic device provided by the present invention. As shown in fig. 3, the electronic device may include a processor 310, a communication interface 320, a memory 330, and a communication bus 340, where the processor 310, the communication interface 320, and the memory 330 communicate with each other via the communication bus 340. The processor 310 may invoke logic instructions in the memory 330 to perform a multi-source information fusion visualization method comprising:
acquiring sensor data;
fusing the sensor data with three-dimensional geographic information;
performing linked switching between the fused three-dimensional geographic information and the two-dimensional planar map;
and performing realistic simulation and video fusion on the three-dimensional map roaming scene.
In addition, the logic instructions in the memory 330 may be implemented as software functional units and, when sold or used as an independent product, stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied as a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
In another aspect, the present invention also provides a computer program product, which includes a computer program stored on a non-transitory computer-readable storage medium, the computer program including program instructions, when executed by a computer, the computer being capable of executing the multi-source information fusion visualization method provided by the above methods, the method including:
acquiring sensor data;
fusing the sensor data with three-dimensional geographic information;
performing linked switching between the fused three-dimensional geographic information and the two-dimensional planar map;
and performing realistic simulation and video fusion on the three-dimensional map roaming scene.
In yet another aspect, the present invention further provides a non-transitory computer-readable storage medium, on which a computer program is stored, the computer program being implemented by a processor to perform the multi-source information fusion visualization method provided in the foregoing embodiments, the method including:
acquiring sensor data;
fusing the sensor data with three-dimensional geographic information;
performing linked switching between the fused three-dimensional geographic information and the two-dimensional planar map;
and performing realistic simulation and video fusion on the three-dimensional map roaming scene.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. A multi-source information fusion visualization method is characterized by comprising the following steps:
acquiring sensor data;
fusing the sensor data with three-dimensional geographic information;
performing linked switching between the fused three-dimensional geographic information and the two-dimensional planar map;
and performing realistic simulation and video fusion on the three-dimensional map roaming scene.
2. The multi-source information fusion visualization method according to claim 1, wherein the sensor data specifically includes:
video data and indoor and outdoor positioning data of vehicles and people.
3. The multi-source information fusion visualization method according to claim 1, wherein fusing the sensor data with the three-dimensional geographic information specifically comprises:
converting the video data into an image model with a 16:9 aspect ratio using Blender;
stitching the image models using the osgEarth engine;
and loading the stitched image models onto the three-dimensional earth.
4. The multi-source information fusion visualization method according to claim 1, wherein the linked switching between the fused three-dimensional geographic information and the two-dimensional planar map specifically comprises:
switching between the three-dimensional earth and the two-dimensional planar map using UDP messages;
and calling a third-party online map to link the three-dimensional earth with the two-dimensional planar map.
5. The multi-source information fusion visualization method according to claim 1, wherein performing realistic simulation and video fusion on the three-dimensional map roaming scene specifically comprises:
performing realistic simulation of the three-dimensional map roaming scene using the Enviro - Sky and Weather plug-in for Unity3D;
and stitching the video data using the osgEarth engine.
6. A multi-source information fusion visualization system, comprising:
the data acquisition module is used for acquiring sensor data;
the three-dimensional information fusion module is used for fusing the sensor data with three-dimensional geographic information;
the map switching module is used for performing linked switching between the fused three-dimensional geographic information and the two-dimensional planar map;
and the three-dimensional map fusion module is used for performing realistic simulation and video fusion on the three-dimensional map roaming scene.
7. The multi-source information fusion visualization system according to claim 6, wherein the three-dimensional information fusion module is specifically configured to:
converting the video data into an image model with a 16:9 aspect ratio using Blender;
stitching the image models using the osgEarth engine;
and loading the stitched image models onto the three-dimensional earth.
8. The multi-source information fusion visualization system according to claim 6, wherein the three-dimensional map fusion module is specifically configured to:
performing realistic simulation of the three-dimensional map roaming scene using the Enviro - Sky and Weather plug-in for Unity3D;
and stitching the video data using the osgEarth engine.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the multi-source information fusion visualization method steps according to any one of claims 1 to 5 when executing the computer program.
10. A non-transitory computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the multi-source information fusion visualization method steps according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110214911.4A CN113115015A (en) | 2021-02-25 | 2021-02-25 | Multi-source information fusion visualization method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110214911.4A CN113115015A (en) | 2021-02-25 | 2021-02-25 | Multi-source information fusion visualization method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113115015A true CN113115015A (en) | 2021-07-13 |
Family
ID=76709443
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110214911.4A Pending CN113115015A (en) | 2021-02-25 | 2021-02-25 | Multi-source information fusion visualization method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113115015A (en) |
- 2021-02-25: application CN202110214911.4A filed in China; published as CN113115015A; status: Pending
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110193966A1 (en) * | 2010-01-14 | 2011-08-11 | Oren Golan | Systems and methods for managing and displaying video sources |
CN103795976A (en) * | 2013-12-30 | 2014-05-14 | 北京正安融翰技术有限公司 | Full space-time three-dimensional visualization method |
US20180139416A1 (en) * | 2015-05-26 | 2018-05-17 | Panasonic Intellectual Property Management Co., Ltd. | Tracking support apparatus, tracking support system, and tracking support method |
CN106874436A (en) * | 2017-01-31 | 2017-06-20 | 杭州市公安局上城区分局 | The Multi-Source Image Data Fusion imaging system of three-dimensional police geographical information platform |
CN108803876A (en) * | 2018-06-08 | 2018-11-13 | 华北水利水电大学 | Hydraulic engineering displaying exchange method based on augmented reality and system |
CN109068103A (en) * | 2018-09-17 | 2018-12-21 | 北京智汇云舟科技有限公司 | Dynamic video space-time virtual reality fusion method and system based on three-dimensional geographic information |
US20200175251A1 (en) * | 2018-11-29 | 2020-06-04 | International Business Machines Corporation | Estimating a height of a cloud depicted in an image |
CN110379010A (en) * | 2019-06-25 | 2019-10-25 | 北京邮电大学 | Three-dimensional geographic information method for visualizing and system based on video fusion |
CN111274337A (en) * | 2019-12-31 | 2020-06-12 | 北方信息控制研究院集团有限公司 | Two-dimensional and three-dimensional integrated GIS system based on live-action three-dimension |
CN111586351A (en) * | 2020-04-20 | 2020-08-25 | 上海市保安服务(集团)有限公司 | Visual monitoring system and method for fusion of three-dimensional videos of venue |
CN112383746A (en) * | 2020-10-29 | 2021-02-19 | 北京软通智慧城市科技有限公司 | Video monitoring method and device in three-dimensional scene, electronic equipment and storage medium |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113902866A (en) * | 2021-09-24 | 2022-01-07 | 广州市城市规划勘测设计研究院 | Double-engine driven digital twin system |
CN114779978A (en) * | 2022-04-27 | 2022-07-22 | 天津大学 | Fusion method, system, device and medium for data visualization |
CN115526064A (en) * | 2022-11-03 | 2022-12-27 | 西安羚控电子科技有限公司 | Battlefield simulation multi-source data fusion method and device and storage medium |
CN115526064B (en) * | 2022-11-03 | 2023-03-28 | 西安羚控电子科技有限公司 | Battlefield simulation multi-source data fusion method and device and storage medium |
CN117495694A (en) * | 2023-11-09 | 2024-02-02 | 大庆安瑞达科技开发有限公司 | Method for fusing video and map three-dimensional scene, electronic equipment and storage medium |
CN117495694B (en) * | 2023-11-09 | 2024-05-31 | 大庆安瑞达科技开发有限公司 | Method for fusing video and map three-dimensional scene, electronic equipment and storage medium |
CN118033662A (en) * | 2024-01-09 | 2024-05-14 | 大庆安瑞达科技开发有限公司 | Method for projecting 360-degree radar image based on three-dimensional geographic information |
CN118334286A (en) * | 2024-06-17 | 2024-07-12 | 中科星图智慧科技安徽有限公司 | Method, device, equipment and medium for linking VR and Cesium |
Similar Documents
Publication | Title |
---|---|
CN113115015A (en) | Multi-source information fusion visualization method and system |
US10255726B2 (en) | Systems and methods for augmented reality representations of networks | |
CA2790842C (en) | Integrated gis system with interactive 3d interface | |
CN106296815B (en) | Construction and display method of interactive three-dimensional digital city | |
CN107885096B (en) | Unmanned aerial vehicle patrols and examines three-dimensional emulation monitored control system of flight path | |
CN110379010A (en) | Three-dimensional geographic information method for visualizing and system based on video fusion | |
CN112598993B (en) | City map platform visualization method and device based on CIM and related products | |
CN112711458B (en) | Method and device for displaying prop resources in virtual scene | |
CN108269305A (en) | A kind of two dimension, three-dimensional data linkage methods of exhibiting and system | |
CN109242966B (en) | 3D panoramic model modeling method based on laser point cloud data | |
CN114756937A (en) | Visualization system and method based on UE4 engine and Cesium framework | |
CN112529022A (en) | Training sample generation method and device | |
CN109741431B (en) | Two-dimensional and three-dimensional integrated electronic map frame | |
CN115439635A (en) | Method and equipment for presenting mark information of target object | |
WO2023231793A9 (en) | Method for virtualizing physical scene, and electronic device, computer-readable storage medium and computer program product | |
CN111127661A (en) | Data processing method and device and electronic equipment | |
CN112070901A (en) | AR scene construction method and device for garden, storage medium and terminal | |
US11585672B1 (en) | Three-dimensional representations of routes | |
US11726740B2 (en) | Immersive audio tours | |
Ha et al. | DigiLog Space: Real-time dual space registration and dynamic information visualization for 4D+ augmented reality | |
CN114090713A (en) | Service providing method and system based on augmented reality and electronic equipment | |
WO2024221306A1 (en) | Operation planning method and apparatus for movable platform, and storage medium | |
CN112927327B (en) | Three-dimensional visualization method for biomedical platform data map | |
US20220375175A1 (en) | System For Improving The Precision and Accuracy of Augmented Reality | |
Blanco Pons | Analysis and development of augmented reality applications for the dissemination of cultural heritage |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210713 |