CN111680118A - System and method for fusing graphic visual expression
- Publication number
- CN111680118A (application CN202010521598.4A)
- Authority
- CN
- China
- Prior art keywords
- subunit
- data
- unit
- map
- graphic
- Prior art date: 2020-06-10
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/29—Geographical information databases
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/2431—Multiple classes
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Databases & Information Systems (AREA)
- Probability & Statistics with Applications (AREA)
- Remote Sensing (AREA)
- Processing Or Creating Images (AREA)
- Instructional Devices (AREA)
Abstract
The invention provides a system and a method for fusing graphic visual expression. The system comprises a graphic data classification module for classifying and defining graphic data; an expression mode classification module for defining the classification of the expression modes of the graphic data; a GIS service classification module for defining the classification of GIS services; a graphic expression assembly model for assembling the graphic data, the expression modes and the GIS services; a formation configuration model for forming the assembled result into a configuration file and storing it in a database; and a page rendering model for rendering the assembled graphic data, expression mode and GIS services to a WEB terminal for a user to browse and use. The invention standardizes the flow of fused graphic visual expression, organically combines the graphic data, the expression modes and the GIS services, and, when the graphic data or the user's requirements change, allows the required products to be developed quickly according to the standardized flow and the previously formed configuration files.
Description
Technical Field
The invention belongs to the field of communication, and particularly relates to a system and a method for fusing graphic visual expression.
Background
Fused graphic visual expression is a visualization technology that assembles and renders graphic data according to the expression requirements of a user. It converts data from numbers and characters into concise graphs and charts while preserving the value of the data as far as possible, which facilitates data mining and data display and advances both visualization and big-data analysis. To achieve faster information acquisition, more intuitive data presentation and analysis, a clearer connection between operations and business systems, and a stronger ability to perceive emerging trends, a method for fusing graphic visual expression is indispensable. However, the existing methods for fusing graphic visual expression do not cope well with frequent change, and their problems are that:
(1) when the graphic data or the user's requirements change, the whole product needs to be rebuilt, so universality and extensibility are poor;
(2) the finished product cannot be changed at will according to the user's requirements and cannot adapt to today's rapidly changing market demands;
(3) when the user changes, a higher learning cost must be invested.
Disclosure of Invention
In view of the above-mentioned deficiencies in the prior art, the present invention provides a system and method for fusing graphic visual expressions to solve the problems in the prior art.
In order to achieve the above purpose, the invention adopts the technical scheme that:
the scheme provides a system for fusing graphic visual expression, which comprises a graphic data classification module, an expression mode classification module, a GIS service classification module, a graphic expression assembly model, a formation configuration model and a page rendering model;
the graphic data classification module is used for classifying and defining the graphic data;
the expression mode classification module is used for defining the classification of the expression modes of the graphic data;
the GIS service classification module is used for defining the classification of the GIS service;
the graphic expression assembly model is used for assembling graphic data, expression modes and GIS service;
the formation configuration model is used for forming the assembled result into a configuration file and storing the configuration file in a database;
and the page rendering model is used for rendering the assembled graphic data, expression mode and GIS services to a WEB terminal for the user to browse and use; an illustrative sketch of the assembly and configuration flow is given below.
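The following TypeScript sketch is illustrative only: the type names, field names and the JSON-based configuration format are editorial assumptions used to make the assembly-to-configuration flow concrete, not the claimed implementation.

```typescript
// Illustrative sketch: category names mirror the classifications defined
// above, but every identifier here is an assumption, not the patented code.
type GraphicDataKind = "two-dimensional" | "three-dimensional" | "POI" | "stream";
type ExpressionKind = "theme" | "static" | "dynamic" | "real-time" | "streaming";
type GisServiceKind =
  | "map" | "scene" | "terrain" | "3DTiles" | "model-data"
  | "spatial-query" | "spatial-analysis" | "spatial-aggregation"
  | "path-planning" | "route-track" | "graphic-plotting";

// Output of the graphic expression assembly model: one selection from each
// classification plus free-form options chosen by the user.
interface AssembledExpression {
  data: { kind: GraphicDataKind; source: string };     // e.g. a tile or feature URL
  expression: { kind: ExpressionKind; options?: Record<string, unknown> };
  services: GisServiceKind[];
}

// The formation configuration model serializes the assembled result into a
// configuration file (here a JSON string) that can be stored in a database
// and replayed later by the page rendering model.
function formConfiguration(assembled: AssembledExpression): string {
  return JSON.stringify(assembled, null, 2);
}

// Example: a real-time POI layer on an online map with point aggregation.
const config = formConfiguration({
  data: { kind: "POI", source: "https://example.invalid/poi.geojson" },
  expression: { kind: "real-time", options: { refreshSeconds: 5 } },
  services: ["map", "spatial-aggregation"],
});
console.log(config);
```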
The invention has the beneficial effects that: the invention standardizes the flow of fused graphic visual expression and organically combines graphic data, expression modes and GIS services. It improves research and development efficiency, saves development and maintenance costs, shortens the development cycle and improves the data utilization rate. When the graphic data or the user's requirements change, the required products can be developed quickly according to the standardized flow and the previously formed configuration files, so the system adapts rapidly to changing market demands, and the standardized flow reduces both learning and development costs.
Further, the graphic data classification module comprises a two-dimensional data unit, a three-dimensional data unit, a POI data unit and a stream data unit;
the two-dimensional data unit is used for displaying a plane map;
the three-dimensional data unit is used for simulating a three-dimensional map;
the POI data unit is used for displaying information point data required by a user;
and the stream data unit is used for displaying the dynamic data set.
The beneficial effects of the further scheme are as follows: the scheme standardizes the classification of graphic data according to its visual characteristics; the classification covers all current graphic data, so every user of graphic data can place the data they use, or plan to use, into this classification.
Still further, the two-dimensional data unit comprises a first terrain data subunit, a first image data subunit and a first vector data subunit;
the first terrain data subunit is used for preprocessing terrain data and processing the preprocessed terrain data to form map data;
the first image data subunit is used for acquiring an image of an object by using an imaging device and recording map data of the image of the object by using a digital signal through optical/electrical conversion;
the first vector data subunit is used for describing the calculated undistorted map data by using a straight line or a curve;
the three-dimensional data unit comprises a second topographic data subunit, a second image data subunit, a second vector data subunit, an artificial model subunit, a point cloud subunit, an oblique photography subunit and a BIM data subunit;
the second topographic data subunit is used for preprocessing topographic data and processing the preprocessed topographic data to form map data;
the second image data subunit is used for acquiring the image of the object by using the imaging equipment and recording the map data of the image of the object by using a digital signal through optical/electrical conversion;
the second vector data subunit is used for describing the calculated undistorted map data by using a straight line or a curve;
the artificial model subunit is used for representing any object as polygons;
the point cloud subunit is used for drawing an object as a grid of points with three-dimensional coordinates, laser reflection intensity and color information;
the oblique photography subunit is used for acquiring high-resolution textures of the top and side surfaces of buildings by synchronously acquiring images from five different viewing angles (one vertical and four oblique);
and the BIM data subunit is used for describing the geometric information, professional attributes and state information of building components and the state information of non-component objects with a three-dimensional model of the building engineering; a hedged data-model sketch of this classification follows.
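As a hedged illustration of the classification above (all type names are assumptions; the patent does not prescribe a data model), the units and subunits could be expressed as a discriminated union against which the graphic data classification module validates incoming data descriptions:

```typescript
// Assumed tagging scheme: each graphic data item records its unit and
// subunit so that the classification module can place it in the taxonomy.
type TwoDimensionalSubunit = "terrain" | "image" | "vector";
type ThreeDimensionalSubunit =
  | "terrain" | "image" | "vector"
  | "artificial-model" | "point-cloud" | "oblique-photography" | "BIM";

type GraphicData =
  | { unit: "two-dimensional"; subunit: TwoDimensionalSubunit; uri: string }
  | { unit: "three-dimensional"; subunit: ThreeDimensionalSubunit; uri: string }
  | { unit: "POI"; name: string; category: string; lon: number; lat: number }
  | { unit: "stream"; topic: string };   // continuously arriving dynamic data

// Example classification of an oblique-photography model and a POI.
const samples: GraphicData[] = [
  { unit: "three-dimensional", subunit: "oblique-photography", uri: "tileset.json" },
  { unit: "POI", name: "Station", category: "transport", lon: 104.06, lat: 30.66 },
];
console.log(samples.map(s => s.unit).join(", "));
```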
The beneficial effects of the further scheme are as follows: the scheme refines the classification of graphic data according to the different characteristics of the data, so that when a user or developer needs graphic data, the required type can be found quickly and accurately and existing graphic data can be matched to its proper category; the scheme therefore supports fast searching and accurate positioning.
Still further, the expression mode classification module comprises a theme unit, a static unit, a dynamic unit, a real-time unit and a streaming unit;
the theme unit is used for representing data by using a map or a chart; the theme unit comprises a map subunit and a chart subunit;
the map subunit is used for expressing the graphic data by taking a map as a carrier;
the chart subunit is used for expressing the graphic data by taking the statistical chart as a carrier;
the static unit is used for representing the frozen (static) state of map information;
the dynamic unit is used for showing the dynamic traces of objects on the map, forming a dynamic expression mode from their tracks at different times;
the real-time unit is used for dynamically updating, in real time, the expression of the target object's position and attribute changes;
and the streaming unit is used for representing the dynamic track of an object over a fixed time period, including the expression of a time start point and a time end point.
The beneficial effects of the further scheme are as follows: the scheme standardizes the classification of the expression modes of graphic data, and a user can quickly locate the required expression mode within the classification according to their requirements.
Still further, the GIS service classification module comprises a map service unit, a scene service unit, a terrain service unit, a 3DTiles service unit, a model data service unit, a space query unit, a space analysis unit, a space aggregation unit, a path planning unit, a route track unit and a graphic plotting unit;
the map service unit is used for providing online and offline map services; the map service unit comprises an online map subunit and an offline map subunit;
the online map subunit is used for realizing the online information service of the map according to the geographic information requirement proposed by the user;
the off-line map subunit is used for off-line map viewing according to the geographic information requirement proposed by the user;
the scene service unit is used for viewing real-time video information or historical records through multiple visual angles;
the terrain service unit is used for representing the features on the ground as a map;
the 3DTiles service unit is used for representing three-dimensional models of buildings, trees, point clouds and vector data;
the model data service unit is used for representing the static characteristics, dynamic characteristics and constraint conditions of objects;
the space query unit is used for displaying the spatial distribution information of query results on a map; the space query unit comprises a keyword space query subunit and a graph-on-map subunit;
the keyword space query subunit is used for displaying the geographical distribution of what the user searches for on a map;
the graph-on-map subunit is used for displaying the search results on a map;
the space analysis unit is used for analyzing the length, distance, area, line-of-sight visibility, visual field range, sunshine information and earthwork volume of target objects displayed on the map;
the space aggregation unit is used for displaying the data required by a user in the form of clusters of adjacent point elements; the space aggregation unit comprises an aggregation point subunit and a heat map subunit;
the aggregation point subunit is used for displaying adjacent point elements in clustered form;
the heat map subunit is used for displaying the clustered data in highlighted form;
the path planning unit is used for guiding route planning from one point to another point;
the route track unit is used for representing the movement and posture change of objects;
the graphic plotting unit is used for drawing points, lines and surfaces on a map.
The beneficial effects of the further scheme are as follows: the scheme standardizes the detailed classification of GIS services, and a user can rapidly and accurately find the required GIS service within it, which gives the scheme the advantages of high speed, accurate targeting and rapid development.
Still further, the 3DTiles service unit comprises an oblique photography subunit, an Shp building gray model subunit, an artificial modeling subunit and a BIM subunit;
the oblique photography subunit is used for establishing a three-dimensional model of the earth surface;
the Shp building gray model subunit is used for representing three-dimensional buildings in a single color;
the artificial modeling subunit is used for constructing 3D models with modeling software, scanners and photographs;
and the BIM subunit is used for the services of describing the geometric information, professional attributes and state information of building components and the state information of non-component objects.
The beneficial effects of the further scheme are as follows: the scheme can be used for quickly constructing a three-dimensional model for representing buildings, trees, point clouds and vector data.
Still further, the space analysis unit comprises a measurement and calculation subunit, a position analysis subunit, a visibility analysis subunit, a visual field analysis subunit, a sunshine analysis subunit and an earthwork volume analysis subunit;
the measurement and calculation subunit is used for measuring and calculating the distance and area of target objects (illustrated in the sketch after this list);
the position analysis subunit is used for analyzing the optimal distance from a specified point to another point, line or surface;
the visibility analysis subunit is used for judging whether a straight sight line can reach the target without obstruction;
the visual field analysis subunit is used for determining the line-of-sight range on the three-dimensional map;
the sunshine analysis subunit is used for judging the sunshine information of a building at a fixed time or over a fixed time period by simulating the trajectory of the sun;
and the earthwork volume analysis subunit is used for calculating the volume of earth excavated within a specified range and depth.
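A minimal sketch of the measurement and calculation subunit, assuming standard geodesy formulas rather than any method claimed here: great-circle distance by the haversine formula, and an approximate area for small polygons obtained by projecting to a local plane and applying the shoelace formula.

```typescript
const EARTH_RADIUS_M = 6_371_000;
const toRad = (deg: number) => (deg * Math.PI) / 180;

// Great-circle distance between two lon/lat points (haversine formula).
function distanceMeters(aLon: number, aLat: number, bLon: number, bLat: number): number {
  const dLat = toRad(bLat - aLat);
  const dLon = toRad(bLon - aLon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(aLat)) * Math.cos(toRad(bLat)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(h));
}

// Approximate area of a small lon/lat polygon: project to a local plane
// (equirectangular) and apply the shoelace formula. Adequate for the
// building-scale measurements discussed here, not for continental polygons.
function areaSquareMeters(ring: Array<[lon: number, lat: number]>): number {
  if (ring.length < 3) return 0;
  const lat0 = toRad(ring[0][1]);
  const pts = ring.map(([lon, lat]) => [
    EARTH_RADIUS_M * toRad(lon) * Math.cos(lat0),
    EARTH_RADIUS_M * toRad(lat),
  ]);
  let twiceArea = 0;
  for (let i = 0; i < pts.length; i++) {
    const [x1, y1] = pts[i];
    const [x2, y2] = pts[(i + 1) % pts.length];
    twiceArea += x1 * y2 - x2 * y1;
  }
  return Math.abs(twiceArea) / 2;
}

console.log(distanceMeters(104.06, 30.66, 104.07, 30.67).toFixed(1), "m");
```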
Still further, the route track unit comprises a historical track subunit, a real-time track subunit, a skeleton animation subunit and a posture display subunit;
the historical track subunit is used for displaying the route along which a target moves from one point to another within a fixed time;
the real-time track subunit is used for displaying the moving track of a target;
the skeleton animation subunit is used for displaying partial (non-whole-body) movements of a person or animal;
and the posture display subunit is used for displaying the overall posture change of a person or animal.
The beneficial effects of the further scheme are as follows: the scheme analyzes and defines the classification of target objects in the time and space dimensions, and has the advantages of wide coverage, detailed classification, accurate targeting and clear expression.
Based on the above system, the invention also provides a method for fusing graphic visual expression, which comprises the following steps:
s1, defining the type, expression mode and GIS service classification of the graphic data;
s2, assembling the graphic data, the expression mode and the GIS service classification selected by the user;
s3, forming a configuration file by the assembled result and storing the configuration file in a database, and providing a plurality of sets of style models for a user to select according to the assembled result;
and S4, rendering the data through the display terminal according to the configuration file, and finishing the visual expression of the fusion graph.
Further, the display terminal in step S4 is a WEB end; an illustrative pipeline sketch of steps S2 to S4 is given below.
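The pipeline sketch below is an editorial illustration of steps S2 to S4; every function and type name is an assumption, and the database is mocked with an in-memory map rather than a real persistence layer.

```typescript
// Hypothetical pipeline for steps S2-S4; all identifiers are assumptions.
interface StyleModel { id: string; palette: string[]; basemap: "light" | "dark" }

// S3: several style models offered for the user to choose from.
const styleModels: StyleModel[] = [
  { id: "default", palette: ["#1f77b4", "#ff7f0e"], basemap: "light" },
  { id: "night",   palette: ["#66d9ef", "#f92672"], basemap: "dark" },
];

// Mock "database": configuration files keyed by an id.
const configStore = new Map<string, string>();

// S2 + S3: assemble the user's selections and persist them as a config file.
function assembleAndStore(id: string, selection: Record<string, unknown>, style: StyleModel): void {
  const configFile = JSON.stringify({ ...selection, style }, null, 2);
  configStore.set(id, configFile);   // stand-in for a database write
}

// S4 entry point: the WEB end reloads the configuration file by id before rendering.
function loadConfig(id: string): unknown {
  const raw = configStore.get(id);
  return raw ? JSON.parse(raw) : undefined;
}

assembleAndStore("demo", { data: "POI", expression: "real-time", services: ["map"] }, styleModels[0]);
console.log(loadConfig("demo"));
```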
The invention has the beneficial effects that: the method standardizes the flow of fused graphic visual expression and organically combines graphic data, expression modes and GIS services. It improves research and development efficiency, saves development and maintenance costs, shortens the development cycle and improves the data utilization rate. When the graphic data or the user's requirements change, the required products can be developed quickly according to the standardized flow and the previously formed configuration files, so the method adapts rapidly to changing market demands, and the standardized flow reduces both learning and development costs.
Drawings
FIG. 1 is a system block diagram of the present invention.
FIG. 2 is a schematic diagram of the graphic data classification module in the present invention.
FIG. 3 is a schematic diagram of the expression mode classification module in the present invention.
FIG. 4 is a schematic diagram of the GIS service classification module in the present invention.
FIG. 5 is a diagram illustrating a structure of a map service unit according to the present invention.
Fig. 6 is a schematic diagram of a 3DTiles service unit structure in the present invention.
FIG. 7 is a schematic structural diagram of a spatial analysis unit according to the present invention.
Fig. 8 is a schematic diagram of a route track unit structure according to the present invention.
FIG. 9 is a flow chart of a method of the present invention.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate understanding by those skilled in the art, but it should be understood that the invention is not limited to the scope of these embodiments. To those skilled in the art, various changes that fall within the spirit and scope of the invention as defined and limited by the appended claims are obvious, and all inventions and creations that make use of the inventive concept are protected.
Example 1
As shown in fig. 1, the present invention provides a system for fusing graphic visual expression, which includes a graphic data classification module, an expression mode classification module, a GIS service classification module, a graphic expression assembly model, a formation configuration model and a page rendering model; the graphic data classification module is used for classifying and defining the graphic data; the expression mode classification module is used for defining the classification of the expression modes of the graphic data; the GIS service classification module is used for defining the classification of GIS services; the graphic expression assembly model is used for assembling the graphic data, expression modes and GIS services; the formation configuration model is used for forming the assembled result into a configuration file and storing it in a database; and the page rendering model is used for rendering the assembled graphic data, expression mode and GIS services to a WEB terminal for the user to browse and use.
In this embodiment, the graphic data classification module, the expression mode classification module and the GIS service classification module form the foundation; the graphic expression assembly model and the formation configuration model carry out the process of assembling and shaping the data; and the page rendering model is the means by which the data are presented on the web side.
As shown in fig. 2, the graphic data classification module includes a two-dimensional data unit, a three-dimensional data unit, a POI data unit and a stream data unit; the two-dimensional data unit is used for displaying a plane map; the three-dimensional data unit is used for simulating a three-dimensional map; the POI data unit is used for displaying the information point data required by the user; and the stream data unit is used for displaying dynamic data sets. The two-dimensional data unit comprises a first terrain data subunit, a first image data subunit and a first vector data subunit; the first terrain data subunit is used for preprocessing terrain data and processing the preprocessed terrain data into map data; the first image data subunit is used for acquiring an image of an object with an imaging device and recording the image as map data in digital signals through optical/electrical conversion; and the first vector data subunit is used for describing calculated, undistorted map data with straight lines or curves. The three-dimensional data unit comprises a second topographic data subunit, a second image data subunit, a second vector data subunit, an artificial model subunit, a point cloud subunit, an oblique photography subunit and a BIM data subunit; the second topographic data subunit is used for preprocessing topographic data and processing the preprocessed topographic data into map data; the second image data subunit is used for acquiring an image of an object with an imaging device and recording the image as map data in digital signals through optical/electrical conversion; the second vector data subunit is used for describing calculated, undistorted map data with straight lines or curves; the artificial model subunit is used for representing any object as polygons; the point cloud subunit is used for drawing an object as a grid of points with three-dimensional coordinates, laser reflection intensity and color information; the oblique photography subunit is used for acquiring high-resolution textures of the top and side surfaces of buildings by synchronously acquiring images from five different viewing angles (one vertical and four oblique); and the BIM data subunit is used for describing the geometric information, professional attributes and state information of building components and the state information of non-component objects with a three-dimensional model of the building engineering.
In this embodiment, the two-dimensional data unit is a plane map and the three-dimensional data unit is a simulated three-dimensional map with one more dimension than the two-dimensional data unit; the POI data unit contains information point data with name, category, coordinates and classification; and the stream data unit is a sequential, massive, fast and continuously arriving dynamic data set that continues over time and grows without bound, used for information display relating to real-time status, history and change tracks, such as network monitoring, sensor networks, aerospace, meteorological measurement and control, and financial services.
In this embodiment, the first terrain data subunit preprocesses the sampled data by plane normalization, data sorting, data gridding and the like, so as to form regular, easily accessed map data with only the two dimensions of length and width; the first image data subunit has, in addition to length and width, a height dimension; and the first vector data subunit describes, with straight lines and curves, map data that are computed from mathematical formulas and composed of elements such as points, lines, rectangles, polygons, circles and arcs; such data are not distorted by enlargement, reduction or rotation and carry both magnitude and direction.
In this embodiment, the second topographic data subunit provides regular, easily accessed map data; the second image data subunit provides map data obtained by imaging devices; the artificial model subunit is the polygonal representation of an object produced by a computer or other video device; the point cloud subunit is the set of points on the outer surface of a product obtained by a measuring instrument; the oblique photography subunit synchronously acquires images from several different viewing angles to obtain high-resolution textures of the top and side surfaces of buildings; and the BIM data subunit describes the geometric information, professional attributes and state information of building components and the state information of non-component objects (such as space and motion behavior) by establishing a virtual three-dimensional model of the building engineering with digital technology.
As shown in FIG. 3, the expression mode classification module includes a theme unit, a static unit, a dynamic unit, a real-time unit and a streaming unit. The theme unit is used for representing data with a map or a chart, i.e. the map or chart expression mode, which is also the basis of the static, dynamic, real-time and streaming expression modes; the theme unit comprises a map subunit and a chart subunit; the map subunit is used for expressing graphic data with a map as the carrier; the chart subunit is used for expressing graphic data with a statistical chart as the carrier. The static unit is used for representing the frozen (static) state of map information; the dynamic unit is used for showing the dynamic traces of objects on the map, forming a dynamic expression mode from their tracks at different times; the real-time unit is used for dynamically updating, in real time, the expression of the target object's position and attribute changes; and the streaming unit is used for representing the dynamic track of an object over a fixed time period, including the expression of a time start point and a time end point.
In this embodiment, the dynamic unit represents the dynamic trace of an objective object and reflects the transition and movement of natural and human phenomena; it is still visually static, and the reader associates the tracks at different times to form the impression of a dynamic expression. The streaming unit contains sequential, massive, fast and continuous expressions between a time start point and a time end point.
As shown in figs. 4-5, the GIS service classification module includes a map service unit, a scene service unit, a terrain service unit, a 3DTiles service unit, a model data service unit, a space query unit, a space analysis unit, a space aggregation unit, a path planning unit, a route track unit and a graphic plotting unit. The map service unit is used for providing online and offline map services and comprises an online map subunit and an offline map subunit; the online map subunit is used for realizing the online information service of the map according to the geographic information requirements put forward by the user; the offline map subunit is used for offline map viewing according to the geographic information requirements put forward by the user. The scene service unit is used for viewing real-time video information or historical records from multiple viewing angles. The terrain service unit is used for representing the features on the ground as a map. The 3DTiles service unit is used for representing three-dimensional models of buildings, trees, point clouds and vector data. The model data service unit is used for representing the static characteristics, dynamic characteristics and constraint conditions of objects. The space query unit is used for displaying the spatial distribution information of query results on a map and comprises a keyword space query subunit and a graph-on-map subunit; the keyword space query subunit is used for displaying the geographical distribution of what the user searches for on a map; the graph-on-map subunit is used for displaying the search results on a map. The space analysis unit is used for analyzing the length, distance, area, line-of-sight visibility, visual field range, sunshine information and earthwork volume of target objects displayed on the map. The space aggregation unit is used for displaying the data required by a user in the form of clusters of adjacent point elements and comprises an aggregation point subunit and a heat map subunit; the aggregation point subunit is used for displaying adjacent point elements in clustered form; the heat map subunit is used for displaying the clustered data in highlighted form. The path planning unit is used for guiding route planning from one point to another. The route track unit is used for representing the movement and posture changes of objects. The graphic plotting unit is used for drawing points, lines and surfaces on a map.
In this embodiment, the map service unit, the terrain service unit and the 3DTiles service unit are the basis of the GIS services; the scene service unit, the model data service unit, the path planning unit and the route track unit are the dynamic services; and the space query unit, the space analysis unit, the space aggregation unit and the graphic plotting unit are the real-time feedback services.
In this embodiment, the online map subunit: according to the geographic information requirements put forward by the user, the map service side provides convenient, fast and accurate online information services through automatic search, manual inquiry, online communication and other means. The offline map subunit: allows the user to view the map while offline. The scene service unit, also called scene camera and viewpoint: realizes the function of viewing real-time video or historical records from multiple angles. The terrain service unit: provides the map functions representing the landforms, water systems, vegetation, engineering buildings, residential areas and so on of the ground.
In this embodiment, spatial aggregation is divided into aggregation points and heat maps. Aggregation point: the data the user needs to display are shown by creating a face element around each cluster of adjacent point elements. Heat map: the data the user needs to display are shown in a specially highlighted form.
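A minimal grid-based clustering sketch, one assumed way (not the patented method) of backing both the aggregation point markers and the heat map weights described above:

```typescript
interface Point { lon: number; lat: number }

// Assumed grid aggregation: snap each point to a cell of `cellDeg` degrees
// and count occupants. The counts can drive one aggregation marker per cell
// or serve as heat map intensities.
function aggregate(points: Point[], cellDeg: number) {
  const cells = new Map<string, { lon: number; lat: number; count: number }>();
  for (const p of points) {
    const key = `${Math.floor(p.lon / cellDeg)}:${Math.floor(p.lat / cellDeg)}`;
    const cell = cells.get(key);
    if (cell) {
      // running mean keeps the marker near the center of its members
      cell.lon += (p.lon - cell.lon) / (cell.count + 1);
      cell.lat += (p.lat - cell.lat) / (cell.count + 1);
      cell.count += 1;
    } else {
      cells.set(key, { lon: p.lon, lat: p.lat, count: 1 });
    }
  }
  return cells;
}

const clusters = aggregate(
  [{ lon: 104.06, lat: 30.66 }, { lon: 104.061, lat: 30.661 }, { lon: 104.2, lat: 30.8 }],
  0.01,
);
console.log([...clusters.values()]);
```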
As shown in fig. 6, the 3DTiles service unit includes an oblique photography subunit, an Shp building gray model subunit, an artificial modeling subunit and a BIM subunit; the oblique photography subunit is used for quickly establishing a fine three-dimensional model of the earth surface; the Shp building gray model subunit is used for representing three-dimensional buildings in a single color; the artificial modeling subunit is used for constructing 3D models with modeling software, scanners and photographs; and the BIM subunit is used for the services of describing the geometric information, professional attributes and state information of building components and the state information of non-component objects.
In this embodiment, the oblique photography subunit best matches human vision, the Shp building gray model is the most intuitive, the artificial modeling subunit is the most detailed, and the BIM subunit describes the detailed information of building construction.
In this embodiment, the oblique photography subunit simultaneously acquires images from five different angles (one vertical and four oblique) by carrying several sensors on the same flight platform, presenting the user with a real and intuitive world that matches human vision. The artificial modeling subunit: a service in which modeling software builds 3D models. The BIM subunit describes the services of the geometric information, professional attributes and state information of building components and the state information of non-component objects (such as space and motion behavior) by establishing a virtual three-dimensional model of the building engineering with digital technology.
As shown in fig. 7, the space analysis unit includes a measurement and calculation subunit, a position analysis subunit, a visibility analysis subunit, a visual field analysis subunit, a sunshine analysis subunit and an earthwork volume analysis subunit; the measurement and calculation subunit is used for measuring and calculating the distance and area of target objects; the position analysis subunit is used for analyzing the optimal distance from a specified point to another point, line or surface; the visibility analysis subunit is used for judging whether a straight sight line can reach the target without obstruction; the visual field analysis subunit is used for determining the line-of-sight range on the three-dimensional map; the sunshine analysis subunit is used for judging the sunshine information of a building at a fixed time or over a fixed time period by simulating the trajectory of the sun; and the earthwork volume analysis subunit is used for calculating the volume of earth excavated within a specified range and depth.
In this embodiment, the visual field analysis subunit lets the user define a range on the three-dimensional map and analyzes which areas fall inside the line of sight and which fall outside it; the point-to-point check of the visibility analysis subunit is illustrated in the sketch below.
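The point-to-point visibility check can be pictured with the following sketch, which assumes a terrain height sampler and a simply discretized sight line; it is illustrative only and not the claimed algorithm.

```typescript
// heightAt is assumed to sample terrain elevation (meters) at a lon/lat.
type HeightSampler = (lon: number, lat: number) => number;

// True if the straight sight line from observer to target clears the sampled
// terrain at every intermediate step; h is the eye/target height in meters.
function hasLineOfSight(
  heightAt: HeightSampler,
  obs: { lon: number; lat: number; h: number },
  tgt: { lon: number; lat: number; h: number },
  steps = 100,
): boolean {
  for (let i = 1; i < steps; i++) {
    const f = i / steps;
    const lon = obs.lon + f * (tgt.lon - obs.lon);
    const lat = obs.lat + f * (tgt.lat - obs.lat);
    const sightHeight = obs.h + f * (tgt.h - obs.h);    // height of the sight line here
    if (heightAt(lon, lat) > sightHeight) return false; // terrain blocks the view
  }
  return true;
}

// Toy terrain: a 50 m ridge between the two points blocks the view.
const ridge: HeightSampler = (lon) => (Math.abs(lon - 104.065) < 0.001 ? 50 : 0);
console.log(hasLineOfSight(ridge, { lon: 104.06, lat: 30.66, h: 10 }, { lon: 104.07, lat: 30.66, h: 10 }));
```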
As shown in fig. 8, the route track unit includes a historical track subunit, a real-time track subunit, a skeleton animation subunit and a posture display subunit; the historical track subunit is used for showing the route along which a target moves from one point to another within a fixed time; the real-time track subunit is used for displaying the moving track of a target; the skeleton animation subunit is used for displaying partial (non-whole-body) movements of a person or animal; and the posture display subunit is used for displaying the overall posture change of a person or complex animal.
In this embodiment, the historical track subunit is a service that presents the route along which a data object moves from one point to another over a period of time; the real-time track subunit displays the moving track of a data object as time passes; the skeleton animation subunit is a service that dynamically shows animation effects for characters, complex animals and the like whose movement is partial rather than whole-body; and the posture display subunit is a service for the overall posture change of the model. The track playback common to the historical and real-time track subunits is illustrated below.
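A hedged sketch of the track playback shared by the historical and real-time track subunits, assuming timestamped samples and linear interpolation (an editorial simplification, not the claimed method):

```typescript
interface TrackSample { t: number; lon: number; lat: number }   // t = epoch milliseconds

// Assumed playback helper: for samples sorted by time, return the linearly
// interpolated position at time t, clamped to the span of the track.
function positionAt(track: TrackSample[], t: number): { lon: number; lat: number } {
  if (track.length === 0) throw new Error("empty track");
  if (t <= track[0].t) return { lon: track[0].lon, lat: track[0].lat };
  const last = track[track.length - 1];
  if (t >= last.t) return { lon: last.lon, lat: last.lat };
  for (let i = 1; i < track.length; i++) {
    if (t <= track[i].t) {
      const a = track[i - 1], b = track[i];
      const f = (t - a.t) / (b.t - a.t);
      return { lon: a.lon + f * (b.lon - a.lon), lat: a.lat + f * (b.lat - a.lat) };
    }
  }
  return { lon: last.lon, lat: last.lat };   // unreachable for sorted input
}

const track: TrackSample[] = [
  { t: 0, lon: 104.06, lat: 30.66 },
  { t: 60_000, lon: 104.07, lat: 30.67 },
];
console.log(positionAt(track, 30_000));      // midpoint of the two samples
```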
Example 2
As shown in fig. 9, the present invention further provides a method for fusing graphic visual expression, which is implemented as follows:
s1, defining the type, expression mode and GIS service classification of the graphic data;
s2, assembling the graphic data, the expression mode and the GIS service classification selected by the user;
s3, forming a configuration file by the assembled result and storing the configuration file in a database, and providing a plurality of sets of style models for a user to select according to the assembled result;
and S4, rendering the data through the display terminal according to the configuration file to complete the fused graphic visual expression, wherein the display terminal is a WEB end.
Through the above design, the method standardizes the flow of fused graphic visual expression and organically combines graphic data, expression modes and GIS services. It improves research and development efficiency, saves development and maintenance costs, shortens the development cycle and improves the data utilization rate. When the graphic data or the user's requirements change, the required products can be developed quickly according to the standardized flow and the previously formed configuration files, so the method adapts rapidly to changing market demands, and the standardized flow reduces both learning and development costs. A hedged sketch of the WEB-end rendering step is given below.
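Step S4 can be pictured as a dispatch on the configured expression mode. The renderer below is a stand-in that only logs its decisions; a real WEB end would delegate each branch to a map or charting component, and every identifier here is an assumption.

```typescript
// Assumed configuration shape, matching the sketches above.
interface RenderConfig {
  data: { kind: string; source: string };
  expression: { kind: "theme" | "static" | "dynamic" | "real-time" | "streaming" };
  services: string[];
}

// Stand-in renderer: each branch would normally call into the WEB end's
// map or chart component; here it only describes what would happen.
function renderToWeb(config: RenderConfig): void {
  switch (config.expression.kind) {
    case "theme":
    case "static":
      console.log(`draw ${config.data.kind} from ${config.data.source} once`);
      break;
    case "dynamic":
      console.log("draw the layer and animate it along its time track");
      break;
    case "real-time":
      console.log("draw the layer and subscribe to position and attribute updates");
      break;
    case "streaming":
      console.log("draw the layer for the configured start/end time window");
      break;
  }
  console.log(`attach GIS services: ${config.services.join(", ")}`);
}

renderToWeb({
  data: { kind: "POI", source: "https://example.invalid/poi.geojson" },
  expression: { kind: "real-time" },
  services: ["map", "spatial-aggregation"],
});
```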
Claims (10)
1. A system for fusing graphic visual expression is characterized by comprising a graphic data classification module, an expression mode classification module, a GIS service classification module, a graphic expression assembly model, a formation configuration model and a page rendering model;
the graphic data classification module is used for classifying and defining the graphic data;
the expression mode classification module is used for defining the classification of the expression modes of the graphic data;
the GIS service classification module is used for defining the classification of the GIS service;
the graphic expression assembly model is used for assembling graphic data, expression modes and GIS service;
the formation configuration model is used for forming the assembled result into a configuration file and storing the configuration file in a database;
and the page rendering model is used for rendering the assembled graphic data, expression mode and GIS service to a WEB terminal for a user to browse and use.
2. The system for fusing graphic visual expression according to claim 1, wherein the graphic data classification module comprises a two-dimensional data unit, a three-dimensional data unit, a POI data unit and a stream data unit;
the two-dimensional data unit is used for displaying a plane map;
the three-dimensional data unit is used for simulating a three-dimensional map;
the POI data unit is used for displaying information point data required by a user;
and the stream data unit is used for displaying the dynamic data set.
3. The system for fusing graphic visual expression according to claim 2, wherein the two-dimensional data unit comprises a first terrain data subunit, a first image data subunit and a first vector data subunit;
the first terrain data subunit is used for preprocessing terrain data and processing the preprocessed terrain data to form map data;
the first image data subunit is used for acquiring an image of an object by using an imaging device and recording map data of the image of the object by using a digital signal through optical/electrical conversion;
the first vector data subunit is used for describing the calculated undistorted map data by using a straight line or a curve;
the three-dimensional data unit comprises a second topographic data subunit, a second image data subunit, a second vector data subunit, an artificial model subunit, a point cloud subunit, an oblique photography subunit and a BIM data subunit;
the second topographic data subunit is used for preprocessing topographic data and processing the preprocessed topographic data to form map data;
the second image data subunit is used for acquiring the image of the object by using the imaging equipment and recording the map data of the image of the object by using a digital signal through optical/electrical conversion;
the second vector data subunit is used for describing the calculated undistorted map data by using a straight line or a curve;
the artificial model subunit is used for representing any object as polygons;
the point cloud subunit is used for drawing an object as a grid of points with three-dimensional coordinates, laser reflection intensity and color information;
the oblique photography subunit is used for acquiring high-resolution textures of the top and side surfaces of buildings by synchronously acquiring images from five different viewing angles (one vertical and four oblique);
and the BIM data subunit is used for describing the geometric information, professional attributes and state information of building components and the state information of non-component objects with a three-dimensional model of the building engineering.
4. The system for fusing graphic visual expression according to claim 1, wherein the expression mode classification module comprises a theme unit, a static unit, a dynamic unit, a real-time unit and a streaming unit;
the theme unit is used for representing data by using a map or a chart; the theme unit comprises a map subunit and a chart subunit;
the map subunit is used for expressing the graphic data by taking a map as a carrier;
the chart subunit is used for expressing the graphic data by taking the statistical chart as a carrier;
the static unit is used for representing the frozen (static) state of map information;
the dynamic unit is used for showing the dynamic traces of objects on the map, forming a dynamic expression mode from their tracks at different times;
the real-time unit is used for dynamically updating, in real time, the expression of the target object's position and attribute changes;
and the streaming unit is used for representing the dynamic track of an object over a fixed time period, including the expression of a time start point and a time end point.
5. The system for fusing graphic visual expression according to claim 1, wherein the GIS service classification module includes a map service unit, a scene service unit, a terrain service unit, a 3DTiles service unit, a model data service unit, a space query unit, a space analysis unit, a space aggregation unit, a path planning unit, a route track unit and a graphic plotting unit;
the map service unit is used for providing online and offline map services; the map service unit comprises an online map subunit and an offline map subunit;
the online map subunit is used for realizing the online information service of the map according to the geographic information requirement proposed by the user;
the off-line map subunit is used for off-line map viewing according to the geographic information requirement proposed by the user;
the scene service unit is used for viewing real-time video information or historical records through multiple visual angles;
the terrain service unit is used for representing the features on the ground as a map;
the 3DTiles service unit is used for representing three-dimensional models of buildings, trees, point clouds and vector data;
the model data service unit is used for representing the static characteristics, dynamic characteristics and constraint conditions of objects;
the space query unit is used for displaying the spatial distribution information of query results on a map; the space query unit comprises a keyword space query subunit and a graph-on-map subunit;
the keyword space query subunit is used for displaying the geographical distribution of what the user searches for on a map;
the graph-on-map subunit is used for displaying the search results on a map;
the space analysis unit is used for analyzing the length, distance, area, line-of-sight visibility, visual field range, sunshine information and earthwork volume of target objects displayed on the map;
the space aggregation unit is used for displaying the data required by a user in the form of clusters of adjacent point elements; the space aggregation unit comprises an aggregation point subunit and a heat map subunit;
the aggregation point subunit is used for displaying adjacent point elements in clustered form;
the heat map subunit is used for displaying the clustered data in highlighted form;
the path planning unit is used for guiding route planning from one point to another point;
the route track unit is used for representing the movement and posture change of objects;
the graphic plotting unit is used for drawing points, lines and surfaces on a map.
6. The system for fusing graphic visual expression according to claim 5, wherein the 3DTiles service unit includes an oblique photography subunit, an Shp building gray model subunit, an artificial modeling subunit and a BIM subunit;
the oblique photography subunit is used for establishing a three-dimensional model of the earth surface;
the Shp building gray model subunit is used for representing three-dimensional buildings in a single color;
the artificial modeling subunit is used for building 3D models;
and the BIM subunit is used for the services of describing the geometric information, professional attributes and state information of building components and the state information of non-component objects.
7. The system for fusing graphic visual expression according to claim 5, wherein the space analysis unit includes a measurement and calculation subunit, a position analysis subunit, a visibility analysis subunit, a visual field analysis subunit, a sunshine analysis subunit and an earthwork volume analysis subunit;
the measurement and calculation subunit is used for measuring and calculating the distance and area of target objects;
the position analysis subunit is used for analyzing the optimal distance from a specified point to another point, line or surface;
the visibility analysis subunit is used for judging whether a straight sight line can reach the target without obstruction;
the visual field analysis subunit is used for determining the line-of-sight range on the three-dimensional map;
the sunshine analysis subunit is used for judging the sunshine information of a building at a fixed time or over a fixed time period by simulating the trajectory of the sun;
and the earthwork volume analysis subunit is used for calculating the volume of earth excavated within a specified range and depth.
8. The system for fusing graphic visual expression according to claim 5, wherein the route track unit comprises a historical track subunit, a real-time track subunit, a skeleton animation subunit and a posture display subunit;
the historical track subunit is used for displaying the route along which a target moves from one point to another within a fixed time;
the real-time track subunit is used for displaying the moving track of a target;
the skeleton animation subunit is used for displaying partial (non-whole-body) movements of a person or animal;
and the posture display subunit is used for displaying the overall posture change of a person or animal.
9. A method for fusing graphic visual expression, comprising the following steps:
s1, defining the type, expression mode and GIS service classification of the graphic data;
s2, assembling the graphic data, the expression mode and the GIS service classification selected by the user;
s3, forming a configuration file by the assembled result and storing the configuration file in a database, and providing a plurality of sets of style models for a user to select according to the assembled result;
and S4, rendering the data through the display terminal according to the configuration file, and finishing the visual expression of the fusion graph.
10. The method for fusing graphic visual expression according to claim 9, wherein the display terminal in the step S4 is a WEB end.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010521598.4A CN111680118B (en) | 2020-06-10 | 2020-06-10 | System and method for fusing graphic visual expression |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111680118A true CN111680118A (en) | 2020-09-18 |
CN111680118B CN111680118B (en) | 2023-04-18 |
Family
ID=72454509
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010521598.4A Active CN111680118B (en) | 2020-06-10 | 2020-06-10 | System and method for fusing graphic visual expression |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111680118B (en) |
Patent Citations (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08147455A (en) * | 1994-11-21 | 1996-06-07 | Hitachi Ltd | Solid graphic data managing method |
WO2000014482A1 (en) * | 1998-09-09 | 2000-03-16 | Ibs Integrierte Business Systeme Gmbh | System for processing geographic position data and images and circuit for said system |
JP2008176808A (en) * | 2008-03-07 | 2008-07-31 | Hitachi Ltd | Graphic data retrieval system and method, and screen display method of graphic data |
CA2686847A1 (en) * | 2008-12-02 | 2010-06-02 | Oculus Info Inc. | System and method for visualizing connected temporal and spatial information as an integrated visual representation on a user interface |
CN105758412A (en) * | 2009-07-09 | 2016-07-13 | 通腾科技股份有限公司 | Navigation device using map data with route search acceleration data |
CN103150754A (en) * | 2011-12-06 | 2013-06-12 | 泰瑞数创科技(北京)有限公司 | Moving object locating and tracking technique based on three-dimensional geographic information technique |
WO2016054605A2 (en) * | 2014-10-02 | 2016-04-07 | Reylabs Inc. | Systems and methods involving diagnostic monitoring, aggregation, classification, analysis and visual insights |
US20160127641A1 (en) * | 2014-11-03 | 2016-05-05 | Robert John Gove | Autonomous media capturing |
CN105069020A (en) * | 2015-07-14 | 2015-11-18 | 国家信息中心 | 3D visualization method and system of natural resource data |
MX2016000520A (en) * | 2015-12-21 | 2017-06-20 | Sbc Tecnologias S A De C V | Process for the detection of graphic markers for the interaction with increased reality objects. |
CN105787000A (en) * | 2016-02-19 | 2016-07-20 | 云南电网有限责任公司电力科学研究院 | Method and system for generating SVG/CIM graph model file based on spatial data |
CN108022273A (en) * | 2016-10-28 | 2018-05-11 | 中国测绘科学研究院 | A kind of figure number Detachable drafting method and system |
CN106933961A (en) * | 2017-01-31 | 2017-07-07 | 杭州市公安局上城区分局 | Based on the three-dimensional police geographical information platform that commanding elevation automatically analyzes |
US9939381B1 (en) * | 2017-04-07 | 2018-04-10 | Vidrio Technologies, Llc | Automated scanning path planner with path calibration for high frame rate multi photon laser scanning microscope with wide field of view |
CN109829022A (en) * | 2019-01-08 | 2019-05-31 | 桂林电子科技大学 | A kind of the Internet map service system and construction method of fusion monitoring video information |
CN110136219A (en) * | 2019-04-17 | 2019-08-16 | 太原理工大学 | A kind of two three-dimensional map methods of exhibiting based on multisource data fusion |
CN110059151A (en) * | 2019-04-26 | 2019-07-26 | 北京百度网讯科技有限公司 | Map rendering method, map rendering device, map server and storage medium |
CN110136259A (en) * | 2019-05-24 | 2019-08-16 | 唐山工业职业技术学院 | A kind of dimensional Modeling Technology based on oblique photograph auxiliary BIM and GIS |
US10636209B1 (en) * | 2019-09-13 | 2020-04-28 | Bongfish GmbH | Reality-based three-dimensional infrastructure reconstruction |
CN111125585A (en) * | 2019-12-24 | 2020-05-08 | 武汉珞珈灵智科技有限公司 | Visualization method and system for Web-end three-dimensional model |
Non-Patent Citations (6)
Title |
---|
KAIXU LIU, ET.AL.: "A 2D and 3D Indoor Mapping Approach for Virtual Navigation Services" * |
YANG GAO,ET.AL: "Design and Typical Application of Three-Dimension Visualization Simulation Platform Based on Digital Earth" * |
ZELU JIA,ET.AL.: "Visualized Geology Spatial Data Classifying Based on Integrated Techniques between GIS and SDM" * |
何龙: "面向城市管网网络三维GIS关键技术与实现" * |
刘欣: "Web三维专题地图可视化框架研究与设计" * |
罗显刚: "数字地球三维空间信息服务关键技术研究" * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112634303A (en) * | 2020-12-29 | 2021-04-09 | 北京深睿博联科技有限责任公司 | Method, system, device and storage medium for assisting blind person in visual reconstruction |
CN112634303B (en) * | 2020-12-29 | 2022-02-25 | 北京深睿博联科技有限责任公司 | Method, system, device and storage medium for assisting blind person in visual reconstruction |
Also Published As
Publication number | Publication date |
---|---|
CN111680118B (en) | 2023-04-18 |
Legal Events

Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |