WO2018061010A1 - Point cloud transformation in large-scale urban modeling - Google Patents
Point cloud transformation in large-scale urban modeling
- Publication number
- WO2018061010A1 (PCT/IL2017/051100)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- map tile
- map
- overlap
- tile
- digital image
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/10—Geometric CAD
- G06F30/13—Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F7/00—Methods or arrangements for processing data by operating upon the order or content of the data handled
- G06F7/22—Arrangements for sorting or merging computer data on continuous record carriers, e.g. tape, drum, disc
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/579—Depth or shape recovery from multiple images from motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
Definitions
- The invention relates to three-dimensional (3D) mapping, and more specifically to generating 3D urban models from images.
- Three dimensional (3D) urban models are digital models of urban areas showing terrain surfaces, buildings, roads, and the like. Components of the model may be encoded in vector format and stored in a database, optionally with texturing images for presentation of the models on a display of a user interface, semantic metadata for names of objects, and/or the like.
- A 3D urban model may comprise different levels of detail (LOD) to allow different levels of abstraction and resolution.
- Spatio-semantic coherence, texture resolution, and the like may be considered a part of the LOD.
- CityGML defines five LODs for building models: LOD 0: 2.5D footprints; LOD 1: buildings represented by block models (usually extruded footprints); LOD 2: building models with standard roof structures; LOD 3: detailed (architectural) building models; and LOD 4: LOD 3 building models supplemented with interior features.
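The five CityGML building LODs above can be sketched as a simple lookup; the distance-based selection heuristic below is an illustrative assumption for choosing an abstraction level, not part of the CityGML standard.

```python
# The five CityGML building LODs as listed above.
CITYGML_BUILDING_LODS = {
    0: "2.5D footprints",
    1: "block models (usually extruded footprints)",
    2: "building models with standard roof structures",
    3: "detailed (architectural) building models",
    4: "LOD 3 models supplemented with interior features",
}

def pick_lod(view_distance_m: float) -> int:
    """Choose a coarser LOD as the viewer moves away.
    The distance thresholds are hypothetical, for illustration only."""
    thresholds = [(100, 4), (500, 3), (2000, 2), (10000, 1)]
    for limit, lod in thresholds:
        if view_distance_m < limit:
            return lod
    return 0

print(pick_lod(50))      # nearby viewer: most detailed model
print(pick_lod(50000))   # distant viewer: footprints only
```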
- A 3D urban model may comprise geographical information system (GIS) base data, such as digital terrain models, road networks, land use maps, and related geo-referenced data.
- GIS data may also include cadastral data that may be converted into simple 3D models, for example in the case of building footprints.
- Digital terrain models (DTMs), represented for example by TINs or grids, form core components of 3D urban models.
- A 3D urban model may comprise computer-aided drafting (CAD) data, such as models of buildings, sites, and infrastructure elements.
- CAD data may provide a high level of detail, possibly not required by 3D city model applications, but may be incorporated either by exporting their geometry or as encapsulated objects.
- Building information model (BIM) data may represent another category of geo-spatial data that may be integrated into a 3D urban model, providing the highest level of detail for building components.
- Building model construction may comprise extruding the footprint polygons of buildings, e.g., taken from the cadaster, by pre-computing average building heights.
- 3D models of buildings of urban regions may be generated by capturing and analyzing 3D point clouds (e.g., sampled by terrestrial or aerial laser scanning) or by photogrammetric approaches.
- Digital terrain surfaces and 2D footprint polygons may be required by automated building reconstruction tools such as BREC.
- Statistical approaches are common for roof reconstruction based on airborne laser scanning point clouds.
- Structure from Motion (SFM) techniques may estimate the parameters of a set of cameras, such as position, orientation, focal length, distortion, and/or the like, and may estimate the 3D position of objects observed by them. This may be done by computing the relationships between images and estimating initial camera poses, camera positions, camera intrinsic parameters, and/or the like.
- Camera pose computation may be performed incrementally or globally. Incremental camera pose computation introduces unsolved cameras iteratively, estimating their initial poses from the already solved cameras. Global camera pose computation may be performed on the entire set of image files simultaneously. Either technique may use bundle adjustment optimization to decrease the re-projection error.
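The re-projection error that bundle adjustment decreases can be sketched as follows; the rotation-free, distortion-free pinhole camera looking down the Z axis is a simplifying assumption for illustration, not the full camera model SFM estimates.

```python
import math

def project(point3d, cam_pos, focal):
    """Project a 3D point with a simplified pinhole camera
    (no rotation or distortion -- an illustrative assumption)."""
    x, y, z = (p - c for p, c in zip(point3d, cam_pos))
    return (focal * x / z, focal * y / z)

def reprojection_error(points3d, observations, cam_pos, focal):
    """RMS distance between observed pixels and re-projected points --
    the quantity bundle adjustment drives down by refining camera
    parameters and 3D point positions."""
    sq = 0.0
    for pt, obs in zip(points3d, observations):
        u, v = project(pt, cam_pos, focal)
        sq += (u - obs[0]) ** 2 + (v - obs[1]) ** 2
    return math.sqrt(sq / len(points3d))

pts = [(0.0, 0.0, 10.0), (1.0, 1.0, 10.0)]
obs = [(0.0, 0.0), (100.0, 100.0)]          # observed pixels for focal = 1000
print(reprojection_error(pts, obs, (0.0, 0.0, 0.0), 1000.0))  # → 0.0
```

A real bundle adjuster minimizes this residual jointly over all camera poses, intrinsics, and 3D points, typically with a sparse nonlinear least-squares solver.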
- A computerized method comprises using one or more hardware processors for receiving two or more digital image files, each comprising a digital image depicting a geographical location. For each digital image file, a camera location, camera intrinsic parameter values, and/or a camera pose that acquired the digital image, and two or more structural feature locations depicted in the digital image, are computed.
- The computerized method further comprises an action of segmenting the digital image files according to respective camera pose and geographical location into one of two or more map tiles, wherein each map tile is associated with a subset of the digital image files, and wherein the map tiles overlap each other.
- The computerized method further comprises an action of computing, for each map tile, a structure from motion (SFM) analysis on the subset to produce a point cloud for that map tile, wherein the SFM additionally produces (a) a refined camera pose and (b) refined camera intrinsic parameter values for each digital image file.
- The computerized method further comprises an action of, for one or more map tiles, computing an alignment transformation based on iteratively computing a discrepancy score of the overlaps between that map tile and the SFM values from surrounding map tiles, wherein the iterative computation results in a decrease of the discrepancy score.
- The computerized method further comprises an action of, for one or more map tiles, generating transformed SFM values based on the alignment transformation.
- The computerized method further comprises an action of, for one or more map tiles, computing a three-dimensional (3D) urban model of that map tile based on the transformed SFM values.
- The computerized method further comprises bundle adjusting the SFM values of the map tiles and/or the overlap.
- The computerized method further comprises using the one or more hardware processors for model aligning at least some of the 3D urban models, each associated with the respective map tile, to produce a large-scale 3D urban model.
- The computerized method further comprises separating at least some of the 3D urban models into two or more building models and two or more terrain models, wherein the model aligning is performed separately for the terrain models.
- The computerized method further comprises separating at least some of the 3D urban models into two or more building models and two or more terrain models, wherein the model aligning of the building models is performed using bundle adjustment of the transformed SFM value subset associated with the building models.
- One or more of the map tiles is sized and shaped to match one or more of: (i) specific features at the borders of that map tile, and (ii) the number of points in the point cloud at the structural feature locations.
- A computerized system comprises one or more hardware processors and a non-transitory computer-readable storage medium having program code stored thereon.
- The program code is configured, when executed on the one or more hardware processors, to receive two or more digital image files, each comprising a digital image depicting a geographical location.
- The program code is configured to compute a camera location, camera intrinsic parameter values, and/or a camera pose that acquired the digital image, and two or more structural feature locations depicted in the digital image.
- The program code is configured to segment the digital image files according to respective camera pose and geographical location into one of two or more map tiles, wherein each map tile is associated with a subset of the digital image files, and wherein the map tiles overlap each other.
- The program code is configured to compute, for each map tile, a structure from motion (SFM) analysis on the subset to produce a point cloud for that map tile, wherein the SFM additionally produces (a) a refined camera pose and (b) refined camera intrinsic parameter values for each digital image file.
- The program code is configured to, for one or more map tiles, compute an alignment transformation based on iteratively computing a discrepancy score of the overlaps between that map tile and the SFM values from surrounding map tiles, wherein the iterative computation results in a decrease of the discrepancy score.
- The program code is configured to, for one or more map tiles, generate transformed SFM values based on the alignment transformation.
- The program code is configured to, for one or more map tiles, compute a three-dimensional (3D) urban model of that map tile based on the transformed SFM values.
- The computerized system further comprises program code configured to bundle adjust the SFM values of the map tiles and/or the overlap.
- The computerized system further comprises program code configured to model align at least some of the 3D urban models, each associated with the respective map tile, to produce a large-scale 3D urban model.
- The computerized system further comprises program code configured to separate at least some of the 3D urban models into two or more building models and two or more terrain models, wherein the model aligning is performed separately for the terrain models.
- The computerized system further comprises program code configured to separate at least some of the 3D urban models into two or more building models and two or more terrain models, wherein the model aligning of the building models is performed using bundle adjustment of the transformed SFM value subset associated with the building models.
- A computer program product comprises a non-transitory computer-readable storage medium having program code embodied therewith.
- The program code is executable by one or more hardware processors to receive two or more digital image files, each comprising a digital image depicting a geographical location.
- The program code is executable by one or more hardware processors to, for each digital image file, compute one or more of a camera location, camera intrinsic parameter values, and a camera pose that acquired the digital image, and two or more structural feature locations depicted in the digital image.
- The program code is executable by one or more hardware processors to segment the digital image files according to respective camera pose and geographical location into one of two or more map tiles, wherein each map tile is associated with a subset of the digital image files, and wherein the map tiles overlap each other.
- The program code is executable by one or more hardware processors to compute, for each map tile, a structure from motion (SFM) analysis on the subset to produce a point cloud for that map tile, wherein the SFM additionally produces (a) a refined camera pose and (b) refined camera intrinsic parameter values for each digital image file.
- The alignment transformation is computed according to two or more sub-transformations, each respective sub-transformation computed from one of two or more discrepancy scores.
- The computer program product further comprises program code configured to bundle adjust the SFM values of the map tiles and/or the overlap.
- The computer program product further comprises program code configured to model align at least some of the 3D urban models, each associated with the respective map tile, to produce a large-scale 3D urban model.
- The computer program product further comprises program code configured to separate at least some of the 3D urban models into two or more building models and two or more terrain models, wherein the model aligning is performed separately for the terrain models.
- The computer program product further comprises program code configured to separate at least some of the 3D urban models into two or more building models and two or more terrain models, wherein the model aligning of the building models is performed using bundle adjustment of the transformed SFM value subset associated with the building models.
- The discrepancy score is based on the point cloud, the camera poses, and/or the camera intrinsic parameter values associated with the map tile overlap.
- The overlap is between 1% and 20% of a linear dimension distance substantially orthogonal to an edge of the map tile at that overlap location.
- The overlap is sized and shaped such that between 1% and 20% of the digital image files of that tile depict the map tile overlap.
- The overlap is sized and shaped to match one or more of: (i) specific features at the borders of that map tile, and (ii) the number of points in the point cloud at the structural feature locations.
- Each map tile is sized and shaped to match one or more of: (i) specific features at the borders of that map tile, and (ii) the number of points in the point cloud at the structural feature locations.
- One or more of the map tiles is shaped at least in part as one or more of a square, a rectangle, a circle, a triangle, a trapezoid, a parallelepiped, a polygon, and a blob, wherein the shape is selected according to the structural feature locations.
- The above method may be implemented as a computerized method, incorporated into a computerized system, as a computer program product, as software-as-a-service, and/or the like.
- FIG. 1 is a schematic illustration of a system for tiling and point cloud transformations in 3D urban modeling
- FIG. 2A is a flowchart of a method for gridding and point cloud transformations in 3D urban modeling
- FIG. 2B is a flowchart of a second method for gridding and point cloud transformations in 3D urban modeling
- FIG. 3A is a flowchart of a method for map tile bundle adjustment using map tile overlap discrepancy scores
- FIG. 3B is a flowchart of a method for point cloud classification
- FIG. 4 is a schematic illustration of a vertical view of a map tile overlap
- FIG. 5 is a schematic illustration of a top view of a map tile overlap
- FIG. 6 is a schematic illustration of a map tile overlap and camera pose discrepancy
- FIG. 7 is a schematic illustration of a global coordinate system
- FIG. 8 is a schematic illustration of map tile overlap weighting values.
- Images of urban areas are received from one or more cameras, such as ground-based cameras, airborne cameras, car-mounted cameras, and/or the like.
- the images may also comprise a global positioning system (GPS) tag indicating the geographical location of the image.
- the images may be analyzed using structure from motion (SFM) techniques to calculate camera poses of each image, image coordinates of structure features in each image, camera intrinsic parameters, and/or the like.
- The images are arranged according to geographical location in overlapping map tiles, where a 3D model is computed for each tile separately and the 3D models of all tiles are combined later.
- The term "tile" means a geographical map tile.
- The size and shape of the map tiles and the amount of overlap may be determined using a uniform map tile and overlap size, using features of the images, using the number of images available in each tile, and/or the like.
- The overlap is between 1% and 20% of a linear dimension distance substantially orthogonal to an edge of the map tile at that overlap location, such that any linear dimension has a minimum amount of overlap.
- The overlap is sized and shaped such that between 1% and 20% of the digital image files of that tile depict the map tile overlap; for example, when the map tile has 1000 images, 20% of those images are in the overlap region with neighboring tiles.
- The percentage of overlap may be 5%, 10%, 15%, 20%, 25%, 30%, or the like, depending on the images available, the features depicted, and/or the like.
- The overlap and/or the map tile are sized and shaped to match features of the 3D urban model, image features, the number of points in the point cloud, and/or the like.
- At least part of each map tile may be shaped as a square, a rectangle, a circle, a triangle, a trapezoid, a parallelepiped, a polygon, a blob, and/or the like.
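A minimal sketch of segmenting images into overlapping map tiles by GPS position, assuming a uniform square grid and a fractional overlap margin — both simplifications of the sizing options described above. An image whose position falls inside the margin is assigned to every tile whose expanded box contains it.

```python
def assign_to_tiles(images, tile_size, overlap_frac):
    """Assign each (name, x, y) image to every tile whose expanded
    boundaries contain it. Tiles lie on a uniform grid of side
    `tile_size` with an overlap margin of `overlap_frac * tile_size`
    on each edge (illustrative sketch)."""
    margin = tile_size * overlap_frac
    tiles = {}
    for name, x, y in images:
        # Index range of tiles whose expanded box [i*ts - m, (i+1)*ts + m]
        # contains the image position on each axis.
        ix0, ix1 = int((x - margin) // tile_size), int((x + margin) // tile_size)
        iy0, iy1 = int((y - margin) // tile_size), int((y + margin) // tile_size)
        for ix in range(ix0, ix1 + 1):
            for iy in range(iy0, iy1 + 1):
                tiles.setdefault((ix, iy), []).append(name)
    return tiles

imgs = [("a", 50.0, 50.0), ("b", 95.0, 50.0)]   # tile_size 100, 10% overlap
t = assign_to_tiles(imgs, 100.0, 0.10)
print(sorted(t[(0, 0)]))   # "b" sits in the overlap of tiles (0,0) and (1,0)
```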
- The transformed point clouds for each tile may be segmented into building points and terrain points.
- The building points undergo bundle adjustment and 3D modelling separately from the 3D modelling of the terrain points.
- The two 3D models, terrain and buildings, may be combined for the final 3D urban model.
- The discrepancy scores may be cost functions that quantify the discrepancy of the point clouds of each tile at the tile overlaps.
- The discrepancy score cost function is minimized to determine a transformation of the camera poses and/or point cloud data.
- Alternatively, a coherence score benefit function is maximized to determine a transformation of the camera poses and/or point cloud data.
- The discrepancy scores may be a function of two adjacent tile overlaps, a corner overlap of four tiles, a 4-side overlap of a tile (5 tiles total), and the like.
- When the overlap is a sensitive geographical area that needs a higher-accuracy 3D model, more tiles may participate in the discrepancy score function.
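As a minimal sketch of a discrepancy score and its minimization, the example below assumes known point correspondences in the overlap of two adjacent tiles and restricts the alignment transformation to a translation, for which the minimizer has a closed form. The specific cost function and transformation family are illustrative assumptions; the text does not fix either.

```python
def discrepancy(points_a, points_b):
    """Mean squared distance between corresponding overlap points of
    two adjacent tiles -- one possible discrepancy score."""
    n = len(points_a)
    return sum(sum((pa - pb) ** 2 for pa, pb in zip(a, b))
               for a, b in zip(points_a, points_b)) / n

def best_translation(points_a, points_b):
    """Translation of tile A's points that minimizes the score above;
    for mean squared error this is simply the mean per-point offset."""
    n = len(points_a)
    return tuple(sum(b[i] - a[i] for a, b in zip(points_a, points_b)) / n
                 for i in range(3))

a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
b = [(0.5, 0.0, 0.0), (1.5, 0.0, 0.0)]   # neighbor's overlap, shifted by +0.5 in x
t = best_translation(a, b)
shifted = [tuple(p[i] + t[i] for i in range(3)) for p in a]
print(discrepancy(shifted, b))   # → 0.0 after applying the alignment transformation
```

With rotation and scale included, the same idea leads to iterative alignment schemes in which each step decreases the discrepancy score, as the text describes.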
- The technique further allows incremental addition of new images to a 3D urban model without completely re-computing the 3D model.
- The map tile(s) of the new images to be added are located, and only those specific map tiles may be recomputed, thereby allowing a crowdsourcing-like approach to the addition of new images from multiple cameras, vehicles, image qualities, weather conditions, lighting conditions, and/or the like.
- These improvements over conventional techniques allow the efficient computation of high-quality 3D models on a large scale, such as a city scale, country scale, region scale, global scale, and/or the like.
- FIG. 1 is a schematic illustration of a system 100 for tiling and point cloud transformations in 3D urban modeling.
- System 100 comprises one or more hardware processors 101 for executing processor instructions stored in modules on a storage medium 102, such as a non-transitory computer-readable storage medium.
- A Structure from Motion (SFM) Analyzer 102A receives digital image files from one or more camera systems 130 through a network interface 110, such as a group of files from each camera system, and processes the files to determine for each one a camera pose, image features, camera intrinsic parameters, and/or the like.
- A camera system 130 may be a video system aboard a drone that flies over an urban area to create a 3D urban model.
- A camera system 130 may be a vehicle-mounted video system.
- A camera system 130 may be a series of end-user photographs, such as a crowdsourced camera system.
- Each digital image file comprises a digital image and metadata related to the camera intrinsic parameters, camera position, camera location, camera pose, and/or the like.
- SFM Analyzer 102A may produce a refined camera pose, refined camera intrinsic parameter values, a refined point cloud, and/or the like.
- A map tiler 102B separates files into geographical map tiles, where each map tile may be processed separately, such as in parallel by hardware processors 101.
- Adjacent map tiles overlap each other.
- The size of the map tiles and the size of the overlap may be determined by the number of image files, the existence of a previous 3D urban model, the quality of the existing model and image files, the desired accuracy of the model, and/or the like.
- Point clouds may be calculated from the image feature coordinates and camera poses, such as by triangulation.
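Triangulating a 3D point from camera poses and matched feature rays can be sketched with the classic midpoint method, assuming two known camera centers and viewing-ray directions (in practice the rays come from the image feature coordinates and the refined camera poses).

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate_midpoint(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two viewing rays
    (camera centers p1, p2; ray directions d1, d2) -- one simple way to
    turn matched image features plus camera poses into a 3D point."""
    w0 = tuple(a - b for a, b in zip(p1, p2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b                 # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = tuple(p + t * x for p, x in zip(p1, d1))   # closest point on ray 1
    q2 = tuple(p + s * x for p, x in zip(p2, d2))   # closest point on ray 2
    return tuple((u + v) / 2 for u, v in zip(q1, q2))

# Two cameras 2 m apart, both observing a feature 10 m ahead:
print(triangulate_midpoint((-1, 0, 0), (1, 0, 10), (1, 0, 0), (-1, 0, 10)))
# → (0.0, 0.0, 10.0)
```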
- A discrepancy transformer 102C may transform the camera pose coordinates and point cloud coordinates to new coordinates based on a discrepancy function computed for the overlap between tiles. For example, the transformation using the discrepancy function smooths the transition of the 3D urban model across tiles.
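One simple way to smooth the transition across a tile overlap is to blend the two tiles' values over the overlap region; the linear weight ramp below is an illustrative choice, not a weighting scheme specified by the text.

```python
def blend_height(x, h_a, h_b, overlap_start, overlap_end):
    """Linearly blend two tiles' surface heights across their overlap so
    the stitched model transitions smoothly: full weight to tile A
    before the overlap, full weight to tile B after it."""
    if x <= overlap_start:
        return h_a
    if x >= overlap_end:
        return h_b
    w = (x - overlap_start) / (overlap_end - overlap_start)
    return (1 - w) * h_a + w * h_b

print(blend_height(5.0, 10.0, 12.0, 0.0, 10.0))   # midway → 11.0
```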
- A 3D model stitcher 102D may classify some of the points of the point cloud as belonging to a building or terrain, and separately process the building points and the terrain points. For example, the point cloud of buildings is bundle adjusted separately, and an adjusted point cloud is generated for the buildings. The adjusted point cloud may be used to generate a separate building model, or combined with the processed or unprocessed terrain model for further processing and/or modelling. For example, the point cloud of terrain is modelled and stitched across map tiles separately from the buildings.
- Computerized system 100 may include a user interface 111 to control the processing, display the results, monitor the system, and the like.
- Modules and/or models may be stored on a network attached storage 120.
- Hardware processor(s) 101 receive 201 digital image files from camera system(s) 130, such as images in groups, each group associated with a capture session, such as a video clip, a series of drone-captured images from a flight, a series of images from car-mounted cameras on mapping vehicles, a series of crowdsourced photos, and/or the like.
- Each image may comprise one or more GPS coordinates, two-dimensional (2D) image data, metadata tags, and the like.
- The term "image file" means the digital image file, and the term "image" means the 2D image data, such as a digital still, a frame of digital video, and/or the like.
- The groups of images may undergo feature extraction 202 to determine camera poses, image features, a point cloud, and the like, each associated with one of the digital image files.
- The images may be split 203 into sub-groups, each associated with a geographical map tile.
- Each map tile may comprise an overlap with adjacent tiles; the selection of the size of the tiles and the area of the overlaps is explained in detail hereinbelow.
- When an image is outside the currently computed tile but contains features of objects within the current tile, it is included in the processing of the current tile, such as when the image was computed previously for a different tile.
- The images may be solved for camera positions, such as by using SFM analysis 204.
- The discrepancies across tiles may be computed 205, and the camera poses may be transformed 206 to minimize the discrepancy within the tile overlap, such as by computing 205 a discrepancy score, discrepancy function, cost function, benefit function, coherence function, and the like.
- A point cloud may then be created 207 from the transformed camera poses, and the point cloud may be transformed 208 using a new or existing computation 205 of the cross-tile overlap discrepancies.
- The transformed point cloud may be used to create 209 a 3D urban model.
- FIG. 2B is a flowchart of a second method 210 for gridding and point cloud transformations in 3D urban modeling.
- The actions of receiving 201 images, extracting 202 features, and splitting 203 images into tiles may be performed by hardware processor(s) 101.
- Camera poses may be solved, such as by using SFM analysis 214, creating a new point cloud of structural features.
- Each point of the cloud may be classified 215 as belonging to terrain or buildings.
- The building point cloud may be bundle adjusted 216, including adjusting camera poses, optionally including points of the same building from nearby tiles.
- Bundle adjustment 216 is performed on a subset of the point cloud, such as a sparse point cloud of the extracted 202 features.
- A dense building point cloud may be created 217 based on this bundle adjustment 216.
- The terrain point cloud may be used to model the 3D terrain and stitch 218 the 3D terrain model between map tiles.
- A new set of terrain point cloud data from the model may be combined 219 with the adjusted building point cloud, and used to create 220 a 3D urban model.
- The adjusted dense building point cloud is used to create building models, so that a building does not need alignment of building parts across tiles, and the building models may be combined with the 3D terrain model to create a 3D urban model.
- Each of the identified building point clouds is re-computed using the images in which the building is observed.
- FIG. 3A is a flowchart of a method 310 for map tile bundle adjustment using tile overlap discrepancy scores.
- Different discrepancy scores may be used multiple times during the bundle adjusting 313, camera pose transforming, point cloud transforming, and/or the like.
- A first discrepancy score is minimized 311 to determine a bundle adjust transform.
- A second discrepancy score is minimized 312 to determine a bundle adjust transform, and combined with the first transform.
- An entire tile is bundle adjusted 313, and the first discrepancy score is again minimized 314 to transform the bundle-adjusted building point cloud.
- Each discrepancy score may use a cost/benefit function computed on the point cloud overlap between tiles to transform the point cloud within a tile, and thus achieve a smoother tile-to-tile transition of the 3D models, with benefits of shorter computation time, less memory usage, easier integration of new images into an existing 3D model, and/or the like.
- The point cloud may be resampled 321 on a regular grid, and each point identified 322 based on its 3D traits, such as coordinates, colors, adjacent neighbors, and/or the like.
- The points may be projected 323 onto the images, such as by converting the 3D coordinates to 2D image coordinates.
- The points may be classified 324 based on radiometric traits, such as view, adjacency, color, computer vision techniques, and/or the like.
- The classification may then be used to locate 325 the object boundaries in the original point cloud.
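Steps 321–324 above can be sketched as follows; the highest-point-per-cell resampling rule and the height-above-ground classification rule are deliberately crude stand-ins for the radiometric and adjacency traits described in the text.

```python
def resample_on_grid(points, cell):
    """Resample a point cloud on a regular XY grid, keeping the highest
    point per cell (the per-cell reduction rule is an assumption)."""
    grid = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        if key not in grid or z > grid[key][2]:
            grid[key] = (x, y, z)
    return grid

def classify_terrain_building(grid, height_threshold):
    """Label each grid point as terrain or building by its height above
    the lowest cell -- an illustrative classification rule only."""
    ground = min(p[2] for p in grid.values())
    return {k: ("building" if p[2] - ground > height_threshold else "terrain")
            for k, p in grid.items()}

pts = [(0.2, 0.1, 0.0), (0.4, 0.3, 0.1), (5.1, 5.2, 12.0)]
grid = resample_on_grid(pts, 1.0)
labels = classify_terrain_building(grid, 2.0)
print(labels[(5, 5)])   # → building
```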
- A world-scale 3D model may present unconventional problems: not all images are provided to the algorithm at once; image groups cover different areas, such as areas far from each other; and changes in the scene over time may render the previously gathered data and/or model at least partially obsolete.
- The proposed methods may overcome such problems using computed map tiles to determine subgroups of image files for processing together, such as during bundle adjustment, modelling, and/or the like.
- The techniques disclosed herein provide technical solutions for stitching data between adjacent tiles, such as ground point cloud data, classification data, model data, and the like, without the use of control points.
- Ground control points may additionally be used to generate and combine the 3D models.
- Map tiles may be geographically bounded areas, such as a bounding box, with an infinite height, containing images and all related data, such as properties of the camera sensor(s) used for the acquisition of the images, images analysis products, the generated model, and the like.
- the generated model within the tile's bounding box is hereafter referred to as the tile's Area of Interest (AOI).
- the spatial boundaries of the computation tile may be defined using the center location of the tile and its width, referred to as easting, and height, referred to as northing.
- the location coordinates of the map tile may be expressed in a geodetic coordinate system (i.e. latitude and longitude), with the World Geodetic System 1984 (WGS84) datum, and the like.
- the computation map tiles may not be required to be of the same size, but it may be simpler to maintain and process the entire dataset when they are.
- when determining the size of a tile, the number of images it contains and the area it covers may be considered. Too few images may not converge well to a correct solution, and too many images may have a negative impact on performance. Similarly, too small an area may prove inefficient in later steps of the reconstruction pipeline, and too large an area may show major inconsistency between neighboring tiles, due to the spherical nature of the surface of the earth. Since images may not have the same size (width and height), the same shape, or cover the same area, such as aerial images vs. terrestrial images, and/or the like, robust criteria pertaining to the amount of image data in a map tile may be considered, for example, the sum of pixels, the number of certain features extracted from the images, and/or the like.
- a tile covering, for example, 384,000 square meters is approximately 620 m in width and height (the square root of 384,000). Overlap may range from very small, where features are sparse and map tile stitching is minimal, up to very large overlaps.
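A map tile defined by its center (easting, northing) and its width and height, with a containment test that optionally grows the tile to model the expanded boundaries, may be sketched as follows (class and parameter names are illustrative):

```python
import math

class MapTile:
    """A geographically bounded computation tile, defined by its center
    (easting, northing, in metres) and its width and height."""
    def __init__(self, center_e, center_n, width, height):
        self.center_e, self.center_n = center_e, center_n
        self.width, self.height = width, height

    def contains(self, e, n, margin=0.0):
        """True when (e, n) falls inside the tile, optionally grown by a
        margin to model the expanded boundaries."""
        return (abs(e - self.center_e) <= self.width / 2 + margin and
                abs(n - self.center_n) <= self.height / 2 + margin)

# A tile covering roughly 384,000 m^2: sqrt(384000) ~ 620 m on each side.
side = math.sqrt(384000)                      # ~619.7 m
tile = MapTile(0.0, 0.0, side, side)
print(tile.contains(300.0, 0.0))              # True  (inside)
print(tile.contains(350.0, 0.0))              # False (outside)
print(tile.contains(350.0, 0.0, margin=50.0)) # True  (within expanded bounds)
```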
- For each tile a model may be created, and the map tile models stitched together to create a uniform 3D urban model. Images affecting the tile's AOI may be outside the tile's boundaries, and the bounding box, hereafter referred to as the expanded boundaries, may include images with GPS tags outside the map tile boundaries.
- additional cameras may be added from that tile to the computed tile, even when they are outside the cameras bounding box, as they may contribute to the tile's AOI. These additional cameras may also be used for the process of aligning the tile's AOI to its surrounding.
- the embodiments described herein may be scalable since the map tile computations, possibly being of an unlimited number, may be easily deployed in a multicomputer environment, such as public or private cloud infrastructures, parallel computers, and/or the like.
- FIG. 4 is a schematic illustration of a vertical view of a map tile overlap.
- the figure shows a camera (clear dot marked A) within the tile boundaries contributing to the AOI, a camera (red dot marked B) within the tile boundaries, not contributing to the AOI and dropped from tile, and a camera (blue dot marked C) in solved adjacent tile contributing to the AOI.
- Let T = {T_1, ..., T_m} denote a set of computation tiles.
- Let P = {P_1, ..., P_n} denote a group of images.
- Let ExRect(T_q) denote the coordinates of the expanded bounding rectangle of the tile.
- Let P(T_q) ⊆ P denote the set of images in T_q, such that the GPS coordinates of all images in P(T_q) are contained within ExRect(T_q).
- Let Rect(T_q) denote the coordinates of the bounding rectangle of the tile, containing the reconstructed scene (AOI).
- GPS tags may be expressed in a geodetic coordinate system having latitude and longitude using the WGS84 datum.
- coordinates may be converted to a Cartesian system, such as ECEF (Earth-Centered, Earth-Fixed), to the appropriate local UTM (Universal Transverse Mercator) zone, and the like.
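The geodetic-to-ECEF conversion mentioned above is a standard closed-form computation over the WGS84 ellipsoid; a minimal sketch:

```python
import math

# WGS84 ellipsoid constants.
WGS84_A = 6378137.0                    # semi-major axis [m]
WGS84_F = 1.0 / 298.257223563          # flattening
WGS84_E2 = WGS84_F * (2.0 - WGS84_F)   # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h=0.0):
    """Convert a WGS84 geodetic coordinate (degrees, metres) to
    Earth-Centered, Earth-Fixed Cartesian coordinates."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - WGS84_E2) + h) * math.sin(lat)
    return x, y, z

x, y, z = geodetic_to_ecef(0.0, 0.0)   # on the equator, prime meridian
print(round(x, 1), round(y, 1), round(z, 1))  # 6378137.0 0.0 0.0
```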
- Overlap may exist between tiles, and an image may be associated with more than one tile, such as i ∈ P(T_q) ∩ P(T_r).
- Each tile may be processed independently as detailed in the following.
- pairs of matching key points, such as feature points, object points, and the like, may be found by comparing feature vectors computed for those points. These pair-wise matches may include incorrect matches (outliers), and thus a filtering step may be carried out.
- a fundamental or an essential matrix may be computed, using Random Sample Consensus (RANSAC), and used to filter out key points not complying with the transformation, under a given threshold.
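The hypothesize-and-verify filtering step above can be sketched as follows; for brevity a 2D translation model stands in for the fundamental or essential matrix, but the RANSAC logic of rejecting matches not complying with the transformation, under a given threshold, is the same (all names are illustrative):

```python
import numpy as np

def ransac_filter_matches(pts1, pts2, n_iters=200, thresh=1.0, seed=0):
    """Hypothesize-and-verify outlier rejection for pair-wise matches.
    A 2D translation is fit from a minimal sample; matches whose residual
    exceeds the threshold are rejected as outliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(pts1), dtype=bool)
    for _ in range(n_iters):
        i = rng.integers(len(pts1))        # minimal sample: a single match
        t = pts2[i] - pts1[i]              # hypothesized translation
        residual = np.linalg.norm(pts1 + t - pts2, axis=1)
        inliers = residual < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

rng = np.random.default_rng(1)
pts1 = rng.random((10, 2)) * 100.0
pts2 = pts1 + np.array([5.0, 2.0])         # consistent (5, 2) image shift
pts2[7] += np.array([40.0, -30.0])         # one injected incorrect match
mask = ransac_filter_matches(pts1, pts2)
print(int(mask.sum()), bool(mask[7]))  # 9 False
```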
- N-view correspondences may be inferred using the pair-wise matches, for example, when key point k_i in image P_1 matches key point k_j in image P_2, and k_j matches key point k_h in image P_3, then the 3-view correspondence would be the set of pairs: {<P_1, k_i>, <P_2, k_j>, <P_3, k_h>}.
- These N-view correspondences may be referred to as Tracks.
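Chaining pair-wise matches into N-view Tracks is naturally expressed with a disjoint-set (union-find) structure; a sketch with illustrative names:

```python
class UnionFind:
    """Disjoint-set structure used to chain pair-wise matches into Tracks."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:                      # path halving
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def build_tracks(pairwise_matches):
    """Merge pair-wise matches of (image, key point) observations into
    N-view correspondences (Tracks)."""
    uf = UnionFind()
    for a, b in pairwise_matches:
        uf.union(a, b)
    groups = {}
    for obs in list(uf.parent):
        groups.setdefault(uf.find(obs), set()).add(obs)
    return [sorted(g) for g in groups.values()]

# k11 in P1 matches k2 in P2, and k2 in P2 matches k3 in P3.
matches = [(("P1", "k11"), ("P2", "k2")), (("P2", "k2"), ("P3", "k3"))]
tracks = build_tracks(matches)
print(tracks)  # [[('P1', 'k11'), ('P2', 'k2'), ('P3', 'k3')]]
```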
- camera poses may be solved for the entire set, using incremental or global structure from motion techniques.
- the bundle adjustment steps may assure that the solution is (at worst) locally optimized, such that it may have a substantially minimal re-projection error with respect to the images' key points.
- Let C(P_i, T_q) = <Intrinsic<i, q>, Pose<i, q>> denote the solved camera parameters of image P_i in tile T_q.
- Camera intrinsic parameters may include focal length, principal point, skew, lens distortion (radial and tangential), and the like.
- Pose is the camera translation and rotation denoted as t and R respectively.
- neighboring tiles may not align perfectly. This means that for any image i, such that i ∈ P(T_q) for some q and i ∈ P(T_r) for q ≠ r, the solved camera parameters C(P_i, T_q) ≠ C(P_i, T_r).
- This inequality may be referred to as cross-tiles cameras discrepancies. For example, 3D models computed in two neighboring tiles may converge to different solutions in each of the tiles.
- Discrepancy may be measured in multiple ways, as may the optimal transformation which results from a minimization of the discrepancies.
- a discrepancy score function may be used, such as a distance function.
- Let Overlap(T_q) = {i | P_i ∈ P(T_q), ||T(P_i)|| > 1} denote the list of indices of all images in tile T_q that may also be associated with other tiles, i.e. images in overlapping regions of the tile.
- FIG. 5 is a schematic illustration of a top view of a map tile overlap, such as from different cameras.
- the cross-tiles cameras discrepancies may be minimized to find a rigid transformation and a scaling factor, such as one that minimizes a score function of the form: DiscrepancyScoreT(T_q, T_r) = Σ_{i ∈ Overlap(T_q)} dist(F(C(P_i, T_q)), C(P_i, T_r))
- the transformation function may be defined as F(x) = s·R·x + t, where s is the scaling factor, and R and t the rotation and translation.
- the minimal discrepancy transform may be computed using the minimal mean squared error
- a non-linear optimization such as Levenberg-Marquardt (LM) algorithm may be applied.
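The text names Levenberg-Marquardt for the non-linear minimization; as an illustrative alternative for the same rigid-plus-scale model, a closed-form least-squares fit (Umeyama's method) over corresponding camera positions in the overlap may be sketched as follows (not the patent's exact procedure):

```python
import numpy as np

def similarity_transform(src, dst):
    """Closed-form least-squares scale s, rotation R, translation t mapping
    src points onto dst points (Umeyama's method)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(src.shape[1])
    if np.linalg.det(U @ Vt) < 0:          # keep R a proper rotation
        S[-1, -1] = -1.0
    R = U @ S @ Vt
    var_s = (xs ** 2).sum() / len(src)     # variance of the source points
    s = np.trace(np.diag(D) @ S) / var_s
    t = mu_d - s * R @ mu_s
    return s, R, t

# Camera positions from one tile, and the same cameras as solved in a
# neighboring tile: scaled by 2, rotated 90 deg about Z, and shifted.
rng = np.random.default_rng(0)
src = rng.random((20, 3))
R_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
dst = 2.0 * src @ R_true.T + np.array([1.0, 2.0, 3.0])
s, R, t = similarity_transform(src, dst)
print(round(s, 6))  # 2.0
```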
- LM Levenberg-Marquardt
- FIG. 6 is a schematic illustration of a map tile
- the discrepancy score function may be denoted as: DiscrepancyScoreF(T_q, T_r) = Σ_{i ∈ Overlap(T_q)} dist(F_f(C(P_i, T_q), s), C(P_i, T_r)). This function may be minimized by searching for the ratio s, with the transformation function F_f(cam, s) = cam.focal · s, and the distance function dist computed over the cameras' focal lengths.
- the rest of the cameras' parameters may be adjusted in accordance. This may be done for example by applying a bundle adjustment step in which the focal length is kept fixed.
- The discrepancy score may be minimized by combining different optimization steps, for example, first minimizing DiscrepancyScoreF, and then minimizing DiscrepancyScoreT.
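Assuming a squared-difference distance between focal lengths, the ratio s has a closed-form minimizer, sketched below in place of an explicit search (an illustrative assumption, not stated in the text):

```python
def focal_scale(focals_q, focals_r):
    """Closed-form minimizer of sum_i (s * fq_i - fr_i)^2: the ratio s that
    best aligns focal lengths of cameras shared by two overlapping tiles."""
    num = sum(fq * fr for fq, fr in zip(focals_q, focals_r))
    den = sum(fq * fq for fq in focals_q)
    return num / den

s = focal_scale([100.0, 200.0], [110.0, 220.0])
print(s)  # 1.1
```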
- Denote Pixel(q, j) = {<i_1, row_1, col_1>, ..., <i_m, row_m, col_m>}, where i_1, ..., i_m refer to images P_i1, ..., P_im ∈ P(T_q) in which the vertex v_{q,j} is projected at <row_1, col_1>, ..., <row_m, col_m> respectively, for example, a look-up table referencing a vertex v_{q,j} to all images observing it. Also denote Vertex(q, i, row, col) = j, the reverse look-up table, referencing pixel <row, col> of image i ∈ T_q to the computed vertex v_{q,j} ∈ V_q.
- the discrepancy score is computed iteratively, where each iteration may use a different discrepancy score variation. For example, a first iteration computes a discrepancy score that is near optimal, and no further iterations may be needed. For example, a first iteration computes a first discrepancy score for the overlap with a first adjacent map tile, a second iteration computes a second discrepancy score for the overlap with a second adjacent map tile, and/or the like. When the discrepancy score decreases with each iteration, the iterations may continue until the discrepancy score decrease is below a threshold value (i.e. the discrepancy score did not decrease enough or increased).
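The stopping rule above — iterate until the score decrease falls below a threshold, or the score increases — can be sketched generically (the score and alignment-step functions are illustrative stand-ins):

```python
def iterate_alignment(score_fn, step_fn, state, threshold=1e-3, max_iters=50):
    """Apply alignment steps while the discrepancy score keeps improving;
    stop when the decrease falls below `threshold` (i.e. the score did not
    decrease enough, or increased)."""
    prev = score_fn(state)
    for _ in range(max_iters):
        state = step_fn(state)
        cur = score_fn(state)
        if prev - cur < threshold:
            break
        prev = cur
    return state

# Toy stand-ins: the "score" is a squared 1-D offset; each step halves it.
final = iterate_alignment(lambda x: x * x, lambda x: x / 2.0, 8.0)
print(0.0 < final < 0.5)  # True
```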
- the transformation of the SFM values is computed according to sub-transformations, where each sub-transformation uses a different discrepancy score.
- Before smoothing the tile's seams, for alignment within the seam neighborhood, the point cloud may be uniformly sampled. Manmade structures, such as buildings, bridges, and the like, may be extracted. These two steps may be interchangeable, depending on the techniques used for classification and segmentation. Such manmade objects split at the tiles' boundaries may result in visible defects in the surface of the object, such as geometry misalignment, wrong texture projection, and other deformations.
- the density threshold may be determined using Otsu's technique, such as based on the histogram of local densities across all vertices in the point cloud.
- local density may also be defined in ℝ².
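Otsu's technique applied to the histogram of local densities picks the threshold that maximizes the between-class variance; a sketch with illustrative bimodal test data:

```python
import numpy as np

def otsu_threshold(values, bins=64):
    """Otsu's technique: pick the histogram threshold that maximizes the
    between-class variance, here applied to per-vertex local densities."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    best_thr, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()   # class probabilities
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (p[:k] * centers[:k]).sum() / w0
        mu1 = (p[k:] * centers[k:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2    # between-class variance
        if var > best_var:
            best_thr, best_var = centers[k], var
    return best_thr

# Two density populations: sparse terrain (~1.0) and dense structures (~10.0).
rng = np.random.default_rng(0)
densities = np.concatenate([rng.normal(1.0, 0.2, 500),
                            rng.normal(10.0, 0.5, 500)])
thr = otsu_threshold(densities)
print(1.2 < thr < 9.0)  # True: the threshold separates the two populations
```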
- FIG. 7 is a schematic illustration of a global coordinate system. Projecting ECEF coordinate system to a tile's local tangent plane is shown.
- the grid sample rate (resolution) is denoted Rate( q ).
- the tile may be uniformly resampled to produce a grid on the XY plane, where each cell contains its height. To prevent extrapolating the heights at the exact boundaries of the tile, additional rows and columns may be added at the left, right, top and bottom of the tile.
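The uniform resampling to a padded height grid may be sketched as follows (the cell size and the max-height aggregation per cell are illustrative choices):

```python
import numpy as np

def resample_height_grid(points, cell, pad=1):
    """Resample a point cloud to a regular XY grid of heights (max Z per
    cell), with `pad` extra rows/columns on every side so that heights at
    the exact tile boundaries need not be extrapolated."""
    xy = np.floor(points[:, :2] / cell).astype(int)
    xy -= xy.min(axis=0)                   # shift cell indices to start at 0
    nx, ny = xy.max(axis=0) + 1
    grid = np.full((nx + 2 * pad, ny + 2 * pad), np.nan)
    for (ix, iy), z in zip(xy, points[:, 2]):
        cur = grid[ix + pad, iy + pad]
        grid[ix + pad, iy + pad] = z if np.isnan(cur) else max(cur, z)
    return grid

pts = np.array([[0.2, 0.3, 5.0], [0.4, 0.1, 7.0], [1.5, 1.5, 2.0]])
g = resample_height_grid(pts, cell=1.0)
print(g.shape, g[1, 1], g[2, 2])  # (4, 4) 7.0 2.0
```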
- the newly computed tile may be smoothed along the seams with adjacent tiles.
- the smoothing process may ensure that the grid generated from adjacent tiles will align along the joint borders of these tiles, and that the alignment is gradually smoothed from the border towards the tile center.
- a height grid may be constructed from the sampled tiles around the newly computed tile and the tile itself.
- the tiles may be re-sampled based on the highest available rate among all participating tiles.
- a smoothing operator may be applied on the seam-lines.
- FIG. 8 is a schematic illustration of map tile overlap weighting values. The smoothing operator may be combined with some gradient of weights such as in FIG. 8, to determine which seams are most smoothed, with a decreasing weight as the operator moves away from the seam.
- the shaded cells are the center tile and the darker shaded cells are the tile boundaries.
- the smoothing operator may be a Gaussian filter, for example.
- Grid(i) = Gauss(i)*f(weight(i)) + Grid(i)*(1 - f(weight(i)))
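The weighted blending of the smoothed grid with the original, with full weight on the seams and a decreasing weight toward the tile center, may be sketched as follows (a box blur stands in for the Gaussian filter; the linear falloff is an illustrative choice of f):

```python
import numpy as np

def blur3(grid):
    """3x3 box blur with edge padding, standing in for the Gaussian filter."""
    p = np.pad(grid, 1, mode="edge")
    out = np.zeros_like(grid)
    for dx in (0, 1, 2):
        for dy in (0, 1, 2):
            out += p[dx:dx + grid.shape[0], dy:dy + grid.shape[1]]
    return out / 9.0

def smooth_seams(grid, seam_dist, falloff=3.0):
    """Grid = Gauss(Grid)*f(w) + Grid*(1 - f(w)), where the weight f(w) is 1
    on the seam and decays linearly toward the tile center."""
    w = np.clip(1.0 - seam_dist / falloff, 0.0, 1.0)
    return blur3(grid) * w + grid * (1.0 - w)

grid = np.arange(25, dtype=float).reshape(5, 5)
idx = np.arange(5)
# Distance (in cells) of every cell from the nearest tile border.
dist = np.minimum.outer(np.minimum(idx, 4 - idx), np.minimum(idx, 4 - idx))
out = smooth_seams(grid, dist)
print(round(out[0, 0], 6), round(out[2, 2], 6))  # 2.0 12.0
```

The border cell (0, 0) is fully smoothed (weight 1), while the center cell keeps most of its original value, matching the decreasing-weight gradient of FIG. 8.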
- images in tiles that show that building may be identified using the key point matchings. These images may be re-computed with respect to camera poses, and a new dense point cloud may be generated on a per-building basis.
- the buildings may be re-inserted into relevant tiles as points, or kept as a separate layer. Further processing may include surface estimation, plane alignment, simplification, heuristics, and/or the like, relying on the a-priori knowledge that the point cloud represents only buildings.
- Since the tile may have undergone alignment transformations for dealing with cross-tiles discrepancies, buildings may be re-aligned before being combined with the terrain. This may be achieved by analyzing a building's principal components and aligning its up axis with the tile's up axis.
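The principal-component re-alignment of a building's up axis may be sketched as follows (assuming, for illustration, that the dominant-variance principal axis is the building's height axis):

```python
import numpy as np

def rotation_between(a, b):
    """Rotation matrix taking unit vector a onto unit vector b (Rodrigues)."""
    v = np.cross(a, b)
    c = float(a @ b)
    if np.allclose(v, 0.0):                 # already parallel / antiparallel
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

def align_up_axis(points):
    """Take the principal component of largest variance (assumed to be the
    building's height axis) and rotate the cloud so it coincides with +Z."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)
    up = eigvecs[:, np.argmax(eigvals)]     # dominant principal axis
    if up[2] < 0:
        up = -up
    R = rotation_between(up, np.array([0.0, 0.0, 1.0]))
    return centered @ R.T

# A synthetic "building" elongated along X, stood up so Z becomes dominant.
rng = np.random.default_rng(0)
pts = rng.normal(0.0, [10.0, 1.0, 1.0], (500, 3))
aligned = align_up_axis(pts)
print(int(np.argmax(aligned.var(axis=0))))  # 2
```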
- the tile may undergo other processing, such as surface estimation and texture assignment, in order to complete a 3D textured mesh.
- Other applications may require the generation of the same model in several Levels of Detail (LOD), for better viewing or faster analysis.
- the present invention may be a system, a method, and/or a computer program product.
- the computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
- various embodiments of this invention may be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range.
- the computer readable storage medium may be a tangible device that may retain and store instructions for use by an instruction execution device.
- the computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- a non- exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device having instructions recorded thereon, and any suitable combination of the foregoing.
- a computer readable storage medium is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Rather, the computer readable storage medium is a non-transient (i.e., non-volatile) medium.
- Computer readable program instructions described herein may be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.
- the network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers.
- a network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
- Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
- the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
- These computer readable program instructions may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- These computer readable program instructions may also be stored in a computer readable storage medium that may direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
- the computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
- each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
- the functions noted in the block may occur out of the order noted in the figures.
- two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
- each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration may be implemented by special purpose hardware -based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
Abstract
A computerized method comprising using at least one hardware processor for adjusting a digital urban three-dimensional model by receiving a plurality of digital image files, and for each, computing initial camera parameters, segmenting each of a plurality of overlapping map tiles, and, for each map tile, computing a structure-from-motion (SFM) analysis. The method further comprises, for at least one map tile, computing an alignment transformation based on an iterative computation of a discrepancy score of the overlaps between said map tile and the SFM values from surrounding map tiles. The method further comprises, for at least one map tile, generating transformed SFM data values based on the alignment transformation. The method further comprises, for at least one map tile, computing a three-dimensional (3D) urban model of said map tile based on the transformed SFM values.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201662400684P | 2016-09-28 | 2016-09-28 | |
US62/400,684 | 2016-09-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018061010A1 true WO2018061010A1 (fr) | 2018-04-05 |
Family
ID=61760328
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IL2017/051100 WO2018061010A1 (fr) | 2016-09-28 | 2017-09-28 | Transformation de nuage de points dans une modélisation urbaine à grande échelle |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2018061010A1 (fr) |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110060343A (zh) * | 2019-04-24 | 2019-07-26 | 百度在线网络技术(北京)有限公司 | 地图构建方法及系统、服务器、计算机可读介质 |
CN110298103A (zh) * | 2019-06-25 | 2019-10-01 | 中国电建集团成都勘测设计研究院有限公司 | 基于无人机机载三维激光扫描仪的高陡危岩体调查方法 |
CN111383354A (zh) * | 2020-04-02 | 2020-07-07 | 西安因诺航空科技有限公司 | 一种基于sfm的三维点云朝向修正方法 |
CN111627061A (zh) * | 2020-06-03 | 2020-09-04 | 贝壳技术有限公司 | 位姿检测方法、装置以及电子设备、存储介质 |
CN111723573A (zh) * | 2020-06-16 | 2020-09-29 | 郑州星空北斗导航服务有限公司 | 时空基准统一下的多卫星影像数据语义化处理方法 |
CN111984026A (zh) * | 2019-05-23 | 2020-11-24 | 广州极飞科技有限公司 | 无人机的控制方法和装置 |
CN111984875A (zh) * | 2019-05-22 | 2020-11-24 | 赫尔环球有限公司 | 用于识别建筑物访问机构的方法、设备和计算机程序产品 |
CN112285733A (zh) * | 2020-10-21 | 2021-01-29 | 郑州中核岩土工程有限公司 | 一种城乡规划核实测绘数据处理方法 |
CN112446951A (zh) * | 2020-11-06 | 2021-03-05 | 杭州易现先进科技有限公司 | 三维重建方法、装置、电子设备及计算机存储介质 |
CN113178000A (zh) * | 2021-03-26 | 2021-07-27 | 杭州易现先进科技有限公司 | 三维重建方法、装置、电子设备及计算机存储介质 |
DE102020122010A1 (de) | 2020-08-24 | 2022-02-24 | Bareways GmbH | Verfahren und system zur bestimmung einer beschaffenheit einer geografischen linie |
CN114119892A (zh) * | 2021-11-30 | 2022-03-01 | 云南云岭高速公路工程咨询有限公司 | 一种基于bim和gis技术的三维数字路网建设方法 |
CN114219717A (zh) * | 2021-11-26 | 2022-03-22 | 杭州三坛医疗科技有限公司 | 点云配准方法、装置、电子设备及存储介质 |
WO2023284715A1 (fr) * | 2021-07-15 | 2023-01-19 | 华为技术有限公司 | Procédé de reconstruction d'objet et dispositif associé |
CN115661495A (zh) * | 2022-09-28 | 2023-01-31 | 中国测绘科学研究院 | 一种紧凑划分及多层次合并策略的大规模SfM方法 |
CN115795579A (zh) * | 2022-12-23 | 2023-03-14 | 岭南师范学院 | 一种无特征复杂曲面误差分析的快速坐标对齐方法 |
CN116824273A (zh) * | 2023-08-28 | 2023-09-29 | 成都飞机工业(集团)有限责任公司 | 一种航空制造件任意视角二维投影图像面片属性判断方法 |
CN116883251A (zh) * | 2023-09-08 | 2023-10-13 | 宁波市阿拉图数字科技有限公司 | 基于无人机视频的图像定向拼接与三维建模方法 |
US11954797B2 (en) | 2019-01-10 | 2024-04-09 | State Farm Mutual Automobile Insurance Company | Systems and methods for enhanced base map generation |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080103699A1 (en) * | 2005-02-10 | 2008-05-01 | Barbara Hanna | Method and apparatus for performing wide area terrain mapping |
US20110181589A1 (en) * | 2010-01-28 | 2011-07-28 | The Hong Kong University Of Science And Technology | Image-based procedural remodeling of buildings |
US20120041722A1 (en) * | 2009-02-06 | 2012-02-16 | The Hong Kong University Of Science And Technology | Generating three-dimensional models from images |
US20160154999A1 (en) * | 2014-12-02 | 2016-06-02 | Nokia Technologies Oy | Objection recognition in a 3d scene |
- 2017-09-28 WO PCT/IL2017/051100 patent/WO2018061010A1/fr active Application Filing
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17855170 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17855170 Country of ref document: EP Kind code of ref document: A1 |