AU2010200145A1 - Extraction processes - Google Patents
- Publication number
- AU2010200145A1
- Authority
- AU
- Australia
- Prior art keywords
- cell
- values
- cells
- value
- sub
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/05—Geographic models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
Abstract
EXTRACTION PROCESSES

A method of and apparatus for extracting an object (8, 10) or terrain feature (6), comprising: defining an area (4); dividing the area (4) into cells (12, 14, 16); measuring a value of a parameter (for example the height) at different locations in each cell (12, 14, 16); for each cell (12, 14, 16), determining a value of a function of the parameter values; identifying a cell as corresponding only to an object (8, 10) or terrain feature (6) if the function value for that cell is in a range of values; defining, for the cells not identified as corresponding only to the object (8, 10) or terrain feature (6), one or more sub-cells, each having at least one of the different locations; and identifying a sub-cell as corresponding at least in part to the object (8, 10) or terrain feature (6) if one or more of the parameter values in that sub-cell is in the range of values. (Figure 1)

2165738_1 (GHMatters) 14/01/10
Description
AUSTRALIA
Patents Act 1990
COMPLETE SPECIFICATION
Standard Patent
Applicant(s): University of Sydney
Invention Title: Extraction processes
The following statement is a full description of this invention, including the best method for performing it known to me/us:

EXTRACTION PROCESSES

FIELD OF THE INVENTION

The present invention relates to extraction, extraction processes, extraction algorithms, and the like.

BACKGROUND

Data corresponding to the geometry of an area of terrain, and of any natural and/or artificial features or objects in the area, may be generated. For example, a laser scanner, such as a Riegl laser scanner, may be used to scan the area of terrain and generate 3D point cloud data corresponding to the terrain and the features.

Various algorithms for processing 3D point cloud data of a terrain area are known. Such algorithms are typically used to construct 3D terrain models of the terrain area for use in, for example, path planning or analysing mining environments. The terrain models conventionally used include the Mean Elevation Map, the Min-Max Elevation Map, the Multi-Level Elevation Map, the Volumetric Density Map, Ground Modelling via Plane Extraction, and Surface Based Segmentation.

Mean Elevation Maps are commonly classified as 2½D models because the third dimension (height) is only partially modelled. In these models the terrain is represented by a grid having a number of cells. The heights of the laser scanner returns falling in each grid cell are averaged to produce a single height value for that cell. An advantage of averaging the heights of the laser returns is that noisy returns can be filtered out. However, this technique cannot capture overhanging structures, such as tree canopies.

Min-Max Elevation Maps are also used to capture the height of the returns in each grid cell. The difference between the maximum and the minimum height of the laser scanner returns falling in a cell is computed.
A cell is declared occupied if its calculated height difference exceeds a pre-defined threshold. These height differences provide a computationally efficient approximation to the terrain gradient in a cell. Cells which contain too steep a slope or are occupied by an object will be characterized by a strong gradient and can be identified as occupied. An advantage of this technique is that approximations are not made, i.e. averaging is avoided. However, this technique is more sensitive to noise than a Mean Elevation Map.

Multi-Level Elevation Maps are an extension of elevation maps. Such algorithms are capable of capturing overhanging structures by discretising the vertical dimension. They also allow for the generation of large scale 3D maps by recursively registering local maps. Typically, however, the discrete classes chosen for the vertical dimension may not facilitate segmentation. Also, typically the ground is not used as a reference for vertical height.

Volumetric Density Maps discriminate between soft and hard obstacles. This technique breaks the terrain area into a set of voxels and counts, in each voxel, the number of sensor hits and misses. A hit corresponds to a return that terminates in a given voxel. A miss corresponds to a laser beam going through a voxel. Regions containing soft obstacles, such as vegetation, correspond to a small ratio of hits over misses. Regions containing hard obstacles correspond to a large ratio of hits over misses. While this technique does allow the identification of soft obstacles (the canopy of trees, for instance), segmenting a scene based on the representation it provides would not be straightforward, since parts of objects would be disregarded (windows in buildings or patches of vegetation, for instance).

A Ground Modelling via Plane Extraction approach is suitable for extracting multi-resolution planar surfaces.
This involves discretising the terrain area into two superimposed 2D grids of different resolutions, i.e. one grid has larger cells than the other. Each grid cell in each of the two grids is represented by a plane fitted to the corresponding laser returns via least square regression. A least square error for each plane in each grid is computed. By comparing the different error values, several types of region can be identified. In particular, both error values are small in sections corresponding to the ground. Also, the error value of the larger-celled plane is small while the error value of the smaller-celled plane is large in areas containing a flat surface with a spike (a thin pole, for instance). Also, both error values are large in areas containing an obstacle. This method is able to identify the ground while not averaging out thin vertical obstacles (unlike a Mean Elevation Map). However, it is not able to represent overhanging structures.

Surface Based Segmentation performs segmentation of 3D point clouds based on the notion of surface continuity. Surface continuity is evaluated using a mesh built from the data. The mesh is generated by exploiting the physical ordering of the measurements, which implies that longer edges in the mesh, or more acute angles formed by two consecutive edges, directly correspond to surface discontinuities. While this approach performs 3D segmentation, it does not identify the ground surface.

Thus, there is a need for an algorithm for performing segmentation of 3D point cloud data that jointly provides a representation of the ground, and representations of objects.
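The plane-extraction comparison described above can be sketched in a few lines. The following is a minimal illustration, not taken from any cited method (the function name and the mean-squared-error choice are assumptions): it fits a plane z = ax + by + c to the returns in one grid cell by least squares and reports the residual error.

```python
import numpy as np

def plane_fit_error(points):
    """Fit z = a*x + b*y + c to a cell's laser returns by least squares
    and return the mean squared residual (the 'least square error')."""
    pts = np.asarray(points, dtype=float)
    # design matrix [x, y, 1] for the plane parameters (a, b, c)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    residuals = pts[:, 2] - A @ coeffs
    return float(np.mean(residuals ** 2))
```

On a flat cell the error is near zero, while a cell dominated by a thin spike yields a large error, which is the contrast the two-resolution comparison exploits.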
SUMMARY OF THE INVENTION

In a first aspect, the present invention provides an extraction process for extracting an object or terrain feature, the extraction process comprising: defining an area to be processed; dividing the area into a plurality of cells; measuring a value of a parameter at a plurality of different locations in each cell; for each cell, determining a value of a function of the measured parameter values in that cell; identifying a cell as corresponding only to a particular object or terrain feature if the determined function value for that cell is in a range of values that corresponds to the particular object or terrain feature; defining, for the cells that are not identified as corresponding only to a particular object or terrain feature, one or more sub-cells, each sub-cell having in it at least one of the plurality of different locations; and identifying a sub-cell as corresponding at least in part to the particular object or terrain feature if one or more of the measured parameter values for the at least one of the plurality of different locations in that sub-cell is in the range of values.

The step of identifying a sub-cell as corresponding at least in part to the particular object or terrain feature may comprise: identifying a sub-cell as corresponding only to the particular object or terrain feature if the measured parameter value for each of the at least one of the plurality of different locations in that sub-cell is in the range of values; and identifying a sub-cell as corresponding in part to the particular object or terrain feature if one or more of the measured parameter values for the at least one of the plurality of different locations in that sub-cell is in the range of values and if one or more of the measured parameter values for the at least one of the plurality of different locations in that sub-cell is not in the range of values.
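As a purely illustrative sketch of the first aspect, assuming the parameter is height and the function is the mean, the coarse-to-fine identification can be written as follows (all names are hypothetical, and each measurement location is treated as its own sub-cell):

```python
def extract(cells, value_range):
    """Coarse-to-fine extraction sketch. `cells` maps a cell id to a list
    of (location, value) measurements; `value_range` is the (low, high)
    range that corresponds to the particular object or terrain feature."""
    lo, hi = value_range
    whole_cells, sub_cell_hits = [], []
    for cell_id, samples in cells.items():
        mean = sum(v for _, v in samples) / len(samples)
        if lo <= mean <= hi:
            # the whole cell corresponds to the object or terrain feature
            whole_cells.append(cell_id)
        else:
            # refine: test each measurement location individually
            for location, value in samples:
                if lo <= value <= hi:
                    sub_cell_hits.append((cell_id, location))
    return whole_cells, sub_cell_hits
```

A cell whose mean falls in the range is taken wholly; otherwise only the locations whose individual values fall in the range are kept, which is the partial identification described above.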
The process may further comprise identifying a sub-cell as corresponding at least in part to a different object or terrain feature if one or more of the measured parameter values for the at least one of the plurality of different locations in that sub-cell is not in the range of values.

The process may further comprise identifying a sub-cell as corresponding only to a different object or terrain feature if each of the measured parameter values for the at least one of the plurality of different locations in that sub-cell is not in the range of values.

The step of determining a value of a function of the measured parameter values in that cell may comprise determining an average value of the values of a parameter measured at the plurality of different locations in each cell.

The step of determining a value of a function of the measured parameter values in that cell may further comprise: determining a gradient value for each cell using the determined average value for that cell and determined average values for each of the cells surrounding that cell.

The step of determining a value of a function of the measured parameter values in that cell may further comprise grouping together directly adjacent cells having a gradient value below a first threshold value to form one or more clusters of cells.

The step of determining a value of a function of the measured parameter values in that cell may further comprise determining an average value of the values of a parameter measured in each cell of each cluster of cells.

The step of identifying a cell as corresponding only to a particular object or terrain feature if the determined function value for that cell is in a range of values may comprise: identifying the cluster of cells having the largest number of cells; and determining the range of values using the determined average value of the largest cluster and a second threshold value.
The process may further comprise identifying noisy measured values of the parameter, and disregarding the noisy measured values.

The parameter may be a height of the terrain.

The terrain feature may be a ground surface of the terrain.

The process may further comprise classifying the particular object or terrain feature as belonging to a certain class of objects using values of the parameter measured in cells or sub-cells that are identified as corresponding only to the particular object or terrain feature.

In a further aspect, the present invention provides an apparatus for generating a model of terrain in accordance with the method of the above aspect, the apparatus comprising a scanning and measuring apparatus for measuring the plurality of values of a parameter, and a processor arranged to perform the processing steps of the above aspect.

In a further aspect, the present invention provides a computer program or plurality of computer programs arranged such that when executed by a computer system it/they cause the computer system to operate in accordance with a method of any of the above aspects.

In a further aspect, the present invention provides a machine readable storage medium storing a computer program or at least one of the plurality of computer programs according to the above aspect.
BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 is a schematic illustration of an example terrain modelling scenario in which a laser scanner is used to scan a terrain area;

Figure 2 is a process flowchart showing certain steps of a terrain modelling algorithm performed by a processor;

Figure 3 is a process flowchart showing certain steps of the ground extraction process of step s2 of the algorithm;

Figure 4 is a schematic illustration of three cells of a grid of a Mean Elevation Map;

Figure 5 is a process flowchart showing certain steps of the object segmentation process of step s4 of the algorithm;

Figure 6 is a schematic illustration of a first cell, a second cell, and a third cell of the grid and the range of height values assigned to each of these cells in a Min-Max Elevation Map;

Figure 7 is a schematic illustration of the first cell, the second cell, and the third cell of the Mean Elevation Map grid, after performing step s28; and

Figure 8 is a schematic illustration of the laser scanner scanning an object that is hidden behind a further object.

DETAILED DESCRIPTION

The terminology "terrain" and "terrain features" is used herein to refer to a geometric configuration of an underlying supporting surface of an environment or a region of an environment. The terminology "object" is used herein to refer to any objects or structures that exist above (or below) this surface. The underlying supporting surface may, for example, include surfaces such as the underlying geological terrain in a rural setting, or the artificial support surface in an urban setting, either indoors or outdoors. The geometric configuration of other objects or structures above this surface may, for example, include naturally occurring objects such as trees or people, or artificial objects such as buildings or cars.
Some examples of terrain and objects are as follows: rural terrain having hills, cliffs, and plains, together with objects such as rivers, trees, fences, buildings, and dams; outdoor urban terrain having roads and footpaths, together with buildings, lampposts, traffic lights, cars, and people; outdoor urban terrain such as a construction site having partially laid foundations, together with objects such as partially constructed buildings, people, and construction equipment; and indoor terrain having a floor, together with objects such as walls, a ceiling, people, and furniture.

Figure 1 is a schematic illustration of an example terrain modelling scenario in which a laser scanner 2 is used to scan a terrain area 4. In this embodiment, the laser scanner 2 used to scan the terrain area 4 is a Riegl laser scanner.

The laser scanner 2 generates dense 3D point cloud data for the terrain area 4 in a conventional way. This data is sent from the laser scanner 2 to a processor 3.

In this example, the terrain area 4 comprises an area of ground 6 (or terrain surface), and two objects, namely a building 8 and a tree 10.

The generated 3D point cloud data for the terrain area 4 is processed by the processor 3 using an embodiment of a novel terrain modelling algorithm, hereinafter referred to as the "segmentation algorithm". The segmentation algorithm advantageously tends to provide a representation of the ground 6, as well as representations of the various objects 8, 10 above the ground 6, and also allows refinements to be made to the representation of the ground 6 using the representations of the objects 8, 10, as described in more detail later below.

Figure 2 is a process flowchart showing certain steps of an embodiment of a process implemented by the segmentation algorithm performed by the processor 3.

At step s2, a ground extraction process is performed on the 3D point cloud data.
The ground extraction process explicitly separates 3D point cloud data corresponding to the ground 6 from that corresponding to the other objects, i.e. here the building 8 and the tree 10, and is described in more detail later below with reference to Figure 3.

At step s4, an object segmentation process is performed on the 3D point cloud data. The object segmentation process segments the 3D point cloud data such that each segment of data corresponds to a single object, as described in more detail later below with reference to Figure 5.

Figure 3 is a process flowchart showing certain steps of the ground extraction process of step s2 of the segmentation algorithm.

At step s6, a Mean Elevation Map of the terrain area 4 is computed. This is a conventional Mean Elevation Map. The resolution of the grid underlying the map may be any appropriate value. In this embodiment, the Mean Elevation Map is a grid having a plurality of cells. Each cell has assigned to it a height value determined from the height values corresponding to laser sensor returns from that cell. In this embodiment, the height value for a cell is the average of the height values corresponding to laser sensor returns from that cell.

At step s8, a surface gradient value is computed for each cell in the grid. A surface gradient value for a particular cell is obtained by first computing the gradients between that cell and each of the surrounding cells. The gradient with the largest absolute value is retained as the gradient at the particular cell.

At step s10, cells corresponding to relatively flat surfaces are identified. In this embodiment, this is achieved by selecting cells having a surface gradient value below a gradient-threshold value. In this embodiment, the gradient-threshold value is 0.5. This corresponds to a slope angle of approximately 27 degrees. However, in other embodiments a different gradient-threshold value is used.
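Steps s8 and s10 above can be sketched as follows (an illustrative implementation, not the patented embodiment; the grid is assumed to be a 2D array of mean cell heights and the 8 surrounding cells are used as neighbours):

```python
import numpy as np

def surface_gradients(heights, cell_size=1.0):
    """For each cell, compute the gradient to each of its 8 neighbours
    and retain the one with the largest absolute value (step s8)."""
    H = np.asarray(heights, dtype=float)
    rows, cols = H.shape
    grad = np.zeros_like(H)
    for i in range(rows):
        for j in range(cols):
            best = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ni, nj = i + di, j + dj
                    if (di or dj) and 0 <= ni < rows and 0 <= nj < cols:
                        # diagonal neighbours are sqrt(2) cells away
                        dist = cell_size * (di * di + dj * dj) ** 0.5
                        g = (H[ni, nj] - H[i, j]) / dist
                        if abs(g) > abs(best):
                            best = g
            grad[i, j] = best
    return grad

# A gradient-threshold of 0.5 corresponds to a slope of atan(0.5), i.e.
# approximately 27 degrees; cells below it are "relatively flat" (step s10).
```

Flat cells then satisfy `abs(grad) < 0.5`, which is the selection made at step s10.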
At step s12, the cells identified as corresponding to the relatively flat surfaces, i.e. the cells that have a surface gradient value below the gradient-threshold, are grouped together with any adjacent cells having a surface gradient value below the gradient-threshold value. This forms clusters of cells that correspond to relatively flat areas.

At step s14, the largest cluster of cells that correspond to a relatively flat area, i.e. the cluster formed at step s12 containing the largest number of cells, is identified.

At step s16, the identified largest cluster is used as a reference cluster with respect to which it can be determined whether the other, smaller clusters formed at step s12 correspond to the ground 6 of the terrain area 4. The reference cluster is used because locally smooth clusters that do not correspond to the ground 6 may exist. These cases are filtered out using the reference to the ground 6 provided by the largest ground cluster.

In this embodiment, the identified largest cluster is assumed to correspond to the ground 6. Thus, any of the smaller clusters of cells whose cells have substantially smaller or larger height values than those of the largest cluster are assumed not to correspond to the ground 6. In other words, in this embodiment the cells corresponding to the ground 6 are defined to be the union of the largest cluster of cells with a surface gradient value below the gradient-threshold and those other clusters of cells, also with surface gradient values below the gradient-threshold, in which the absolute value of the average height of the cells minus the average height of the cells in the largest cluster is smaller than a height-threshold. In this embodiment, this height-threshold is 0.2m.

At step s18, a correction of errors generated during the computation of the surface gradient values is performed.
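The clustering of steps s12 to s16 above can be sketched as follows (illustrative code assuming 4-connectivity between directly adjacent cells; function and argument names are hypothetical):

```python
from collections import deque

def ground_clusters(flat, cell_height, height_threshold=0.2):
    """Flood-fill flat cells into clusters (step s12), then keep the
    largest cluster (step s14) and any cluster whose mean height is
    within height_threshold of it (step s16). `flat` is a 2D boolean
    grid; `cell_height` is a 2D grid of mean cell heights."""
    rows, cols = len(flat), len(flat[0])
    seen, clusters = set(), []
    for i in range(rows):
        for j in range(cols):
            if flat[i][j] and (i, j) not in seen:
                # breadth-first flood fill over one flat cluster
                queue, cluster = deque([(i, j)]), []
                seen.add((i, j))
                while queue:
                    ci, cj = queue.popleft()
                    cluster.append((ci, cj))
                    for ni, nj in ((ci + 1, cj), (ci - 1, cj),
                                   (ci, cj + 1), (ci, cj - 1)):
                        if 0 <= ni < rows and 0 <= nj < cols and \
                                flat[ni][nj] and (ni, nj) not in seen:
                            seen.add((ni, nj))
                            queue.append((ni, nj))
                clusters.append(cluster)
    largest = max(clusters, key=len)
    ref = sum(cell_height[i][j] for i, j in largest) / len(largest)
    return [c for c in clusters
            if abs(sum(cell_height[i][j] for i, j in c) / len(c) - ref)
            < height_threshold]
```

The returned clusters are the cells taken to correspond to the ground, with the 0.2m height-threshold given above.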
One source of such errors, and the correction of those errors, will now be explained with reference to Figure 4.

Figure 4 is a schematic illustration of three cells of the grid of the Mean Elevation Map, namely the first cell 12, the second cell 14, and the third cell 16.

The height value for the first cell 12, i.e. the average of the height values corresponding to laser sensor returns from the first cell 12, is hereinafter referred to as the "first height value 18". Likewise, the height value for the second cell 14 is hereinafter referred to as the "second height value 20", and the height value for the third cell 16 is hereinafter referred to as the "third height value 22".

In this example the first height value 18 and the second height value 20 are substantially equal. Also, the third height value 22 is substantially greater than the first height value 18 and the second height value 20.

The surface gradient value for the second cell 14, which is determined at step s8 as described above, is obtained by first computing the gradients between that cell and each of the surrounding cells, and retaining the gradient with the largest absolute value. Thus, in this example the surface gradient value for the second cell 14 is the slope between the height levels of the second cell 14 and the third cell 16 (since the gradient between the first cell 12 and the second cell 14 is zero). This gradient is indicated in Figure 4 by the reference numeral 24. Thus, in this example the second cell has a relatively large surface gradient value. In particular, the surface gradient value of the second cell 14 is above the gradient-threshold.
Thus, the second cell 14 is not included in the same cluster of cells as the first cell 12, despite the second height value 20 being substantially equal to the first height value 18.

Such errors are corrected at step s18 of the ground extraction process as follows. Each cell identified as not belonging to the ground is inspected. The neighbour cells of the cell being inspected that correspond to the ground 6 are identified, and their average height is computed. If the absolute value of the difference between this average height and the height in the inspected cell is less than a correction-threshold value, the inspected cell is identified as corresponding to the ground 6. For example, returning to Figure 4, the first cell 12 corresponds to the ground 6, whereas the third cell 16 corresponds to an object 8, 10. The difference between the height of the first cell 12, i.e. the first height value 18, and the height of the second cell 14, i.e. the second height value 20, is zero. In this example the correction-threshold is 0.1m. Thus, since zero is less than 0.1m, the second cell 14 is identified as corresponding to the ground 6.

At step s20, steps s12 and s14 as described above are repeated. The correction of errors carried out at step s18 modifies the clusters of cells that correspond to the ground. Thus, the operations carried out at steps s12 and s14, i.e. the forming of clusters of cells that correspond to areas of relatively flat terrain and the identification of the largest cluster of cells, are repeated after performing the function of step s18 to accommodate the changes made.

The correction steps s18 and s20 allow for the reconstruction of a larger portion of the ground 6 of the terrain area 4. This is because a reconstruction of the ground 6 obtained without this correction comprises a number of "holes" that are not identified as either the ground 6 or an obstacle 8, 10.
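The neighbour-averaging correction of step s18 above can be sketched as follows (an illustrative implementation assuming an 8-cell neighbourhood; the 0.1m correction-threshold is the one given above):

```python
def correct_ground_cells(height, is_ground, threshold=0.1):
    """Step s18 sketch: a non-ground cell whose height is within the
    correction-threshold of the average height of its neighbouring
    ground cells is reclassified as ground."""
    rows, cols = len(height), len(height[0])
    corrected = [row[:] for row in is_ground]
    for i in range(rows):
        for j in range(cols):
            if is_ground[i][j]:
                continue
            # average height of the surrounding cells marked as ground
            neighbours = [height[ni][nj]
                          for ni in range(max(0, i - 1), min(rows, i + 2))
                          for nj in range(max(0, j - 1), min(cols, j + 2))
                          if (ni, nj) != (i, j) and is_ground[ni][nj]]
            if neighbours:
                avg = sum(neighbours) / len(neighbours)
                if abs(height[i][j] - avg) < threshold:
                    corrected[i][j] = True
    return corrected
```

In the Figure 4 scenario, a cell at ground height that was excluded only because it borders a tall cell is reclassified as ground, while the tall cell itself remains an object cell.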
The performance of the correction steps s18 and s20 advantageously tends to remove these holes. This may, for example, allow a path planner to find paths going through areas of the map previously marked as containing obstacles.

Thus, the ground extraction process of step s2 is performed. Returning to Figure 2, this ground extraction process is followed by the object segmentation process of step s4.

Figure 5 is a process flowchart showing certain steps of the object segmentation process of step s4 of the segmentation algorithm.

At step s22, a Min-Max Elevation Map of the terrain area 4 is computed. This is a conventional Min-Max Elevation Map. The resolution of the grid underlying the map may be any appropriate value. In this embodiment, the grid of the Min-Max Elevation Map is the same as that of the Mean Elevation Map. This Min-Max Elevation Map of the terrain area 4 is hereinafter referred to as the global map.

In this embodiment, the Min-Max Elevation Map is a grid having a plurality of cells. Each cell has assigned to it a range of height values. The range of height values assigned to a particular cell runs from the minimum to the maximum height value corresponding to laser sensor returns from that cell.

Figure 6 is a schematic illustration of the first cell 12, the second cell 14, and the third cell 16 of the grid and the range of height values assigned to each of these cells in the Min-Max Elevation Map.

In this example, the first cell 12 is assigned a range of height values, hereinafter referred to as the "first range 26". The first range 26 has a minimum, indicated in Figure 6 by the reference numeral 260, and a maximum, indicated in Figure 6 by the reference numeral 262.

Also, the second cell 14 is assigned a range of height values, hereinafter referred to as the "second range 28".
The second range 28 has a minimum, indicated in Figure 6 by the reference numeral 280, and a maximum, indicated in Figure 6 by the reference numeral 282.

Also, the third cell 16 is assigned a range of height values, hereinafter referred to as the "third range 30". The third range 30 has a minimum, indicated in Figure 6 by the reference numeral 300, and a maximum, indicated in Figure 6 by the reference numeral 302.

Thus, the first, second, and third cells 12, 14, 16 each have a volume assigned to them that represents the range of the heights corresponding to the laser returns from that cell.

At step s24, adjacent cells corresponding to an object 8, 10, i.e. the sets of cells not identified as corresponding to the ground 6 at step s16 of the ground extraction process, are connected together to form clusters of object cells.

At step s26, for each identified object cluster a second Min-Max Elevation Map is built from the laser returns contained in that cluster. These second Min-Max Elevation Maps are hereinafter referred to as "local maps". The local maps have a higher resolution than the global map generated at step s22. For example, the cell size in the local maps is 0.2m by 0.2m, whereas the cell size in the global map is 0.4m by 0.4m.

At step s28, for each local map, the range of height values of each cell in the local map is divided into segments, or voxels. Each voxel for a cell corresponds to a sub-range of the range of height values. In this embodiment, the height of each voxel is 0.2m. However, in other embodiments a different voxel height is used.

Each voxel contains the height values, within the corresponding sub-range, of the laser returns from that cell. Voxels that do not contain any laser returns are disregarded. Also, voxels of a particular cell are merged with other voxels of that cell if they are in contact with those other voxels.
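The voxelisation of step s28 above can be sketched as follows (illustrative code assuming non-negative return heights; the 0.2m voxel height is the one given above). Empty voxels are discarded, and voxels in contact are merged into continuous intervals:

```python
def voxelise(return_heights, voxel_height=0.2):
    """Bin one cell's return heights into fixed-height voxels, discard
    empty voxels, and merge voxels that are in contact (step s28).
    Returns a list of (bottom, top) height intervals."""
    if not return_heights:
        return []
    # small epsilon guards against floating-point edge effects at
    # exact voxel boundaries; heights are assumed non-negative
    occupied = sorted({int(z / voxel_height + 1e-9)
                       for z in return_heights})
    runs, start, prev = [], occupied[0], occupied[0]
    for k in occupied[1:]:
        if k == prev + 1:      # in contact with the previous voxel
            prev = k
        else:                  # gap: close the current merged voxel
            runs.append((start * voxel_height, (prev + 1) * voxel_height))
            start = prev = k
    runs.append((start * voxel_height, (prev + 1) * voxel_height))
    return runs
```

For the third cell 16 of Figure 7, this produces exactly two merged voxels, one near the ground and one at the canopy, with the empty middle of the range dropped.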
Figure 7 is a schematic illustration of the first cell 12, the second cell 14, and the third cell 16 of the Mean Elevation Map grid, after performing step s28.

In this example, the third cell 16 was identified as corresponding to an object, i.e. not corresponding to the ground. Thus, a higher resolution grid is defined over the third cell 16, and the range of values of laser returns in each of the cells of the higher resolution grid is divided into voxels, as shown in Figure 7. In this example, each of the cells of the higher resolution grid of the third cell 16 contains the same data. Also, only the voxels corresponding to the higher height values in the third range 30 and the lower height values in the third range 30 contain any laser scanner returns. Voxels in the middle of the third range 30 do not contain any laser scanner returns. Thus, in this example, each of the cells of the higher resolution grid of the third cell 16 contains two voxels: one containing laser scanner returns corresponding to relatively lower height values, and the other containing laser scanner returns corresponding to relatively higher height values. The voxels corresponding to lower height values are hereinafter referred to as the "lower voxels" and are indicated in Figure 7 by the reference numeral 36. The voxels corresponding to higher height values are hereinafter referred to as the "upper voxels" and are indicated in Figure 7 by the reference numeral 38.

At step s30, the voxels corresponding to the ground 6 are identified. The identification of these voxels is implemented as follows. For a given cell, a number of the closest cells corresponding to the ground 6 in the grid are identified.
If the absolute value of the difference between the mean height value of the lowest voxel in the given cell and the mean of the heights of those closest ground cells is less than a voxel-threshold, then that voxel is marked as corresponding to the ground 6.

For example, the lowest voxels in the third cell 16 are the lower voxels 36. The second cell 14 may be identified as a closest cell corresponding to the ground 6 for the third cell 16. In this example, the mean height of the second range 28 and the mean height of the lower voxels 36 are substantially the same, i.e. the difference between these values is below a voxel-threshold value of, for example, 0.2m. Thus, the lower voxels 36 are identified as corresponding to the ground 6.

This process advantageously tends to allow for the reconstruction of the ground 6 under overhanging structures, for example the canopy of the tree 10.

This process also advantageously allows the reconstruction of the ground 6 that was generated at step s2, as described above with reference to Figures 2 and 3, to be refined. This is carried out at step s32.

At step s32, the reconstruction of the ground 6 is refined. At this step, the fact that a voxel from a local map corresponds to the ground 6 is used to update the Mean Elevation Map generated in the ground extraction process of step s2. In particular, the cell in the Mean Elevation Map which most closely corresponds to the cell in the local map that contains the voxel corresponding to the ground 6 is identified. The identified cell is then updated by re-computing the mean height in that cell using only the laser returns that fall into the voxel corresponding to the ground 6. Thus, the reconstruction of the ground 6 under overhanging structures is performed.
This process advantageously exploits the interaction between the Mean Elevation Map of the ground extraction process of step s2 and the Min-Max Elevation Map of the object segmentation process of step s4.

At step s34, contacting voxels are grouped together to form voxel clusters. In this embodiment, voxels identified as belonging to the ground are interpreted as separators between clusters.

At step s36, noisy laser scanner returns are identified. In this embodiment, voxels which contain noisy returns are assumed to satisfy the following conditions. Firstly, the voxel belongs to a cluster which is not in contact with a cell or voxel corresponding to the ground 6. Secondly, the size of the cluster (in each of the x-, y-, and z-directions) that the voxel belongs to is smaller than a predetermined noise-threshold. In this embodiment, the noise-threshold is 0.1m.

At step s38, the identified noisy returns are removed or withdrawn from the map. This completes the segmentation algorithm performed by the processor 3.

The reconstruction of the terrain area 4 produced by performing the segmentation algorithm advantageously reconstructs portions of the ground that are under overhanging structures, for example the canopy of the tree 10. This is achieved by steps s28 to s30 as described above.

A further advantage of the segmentation algorithm is that fine details tend to be conserved. For example, the frames of windows of the building 8 are conserved by the segmentation algorithm.

The segmentation algorithm advantageously tends to benefit from the advantages of the Mean Elevation Map approach. In particular, the segmentation algorithm tends to be able to generate smooth surfaces by filtering out noisy returns. Also, the segmentation algorithm advantageously tends to benefit from the advantages of the Min-Max Elevation Map approach.
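Steps s34 to s38 can be sketched as a connected-component grouping over occupied voxels, with ground voxels acting as separators; the integer-coordinate representation and the expression of the noise-threshold in voxel units (rather than the embodiment's 0.1 m) are assumptions made for illustration:

```python
from collections import deque

def cluster_voxels(occupied, ground, noise_threshold_cells=1):
    """Group face-adjacent occupied voxels into clusters. `occupied`
    and `ground` are sets of integer (x, y, z) voxel coordinates;
    ground voxels separate clusters and are excluded. Clusters that
    touch no ground voxel and whose extent in every axis is below the
    noise threshold are discarded as noisy returns (steps s36/s38)."""
    def neighbours(v):
        x, y, z = v
        return ((x+1,y,z), (x-1,y,z), (x,y+1,z),
                (x,y-1,z), (x,y,z+1), (x,y,z-1))

    free = set(occupied) - set(ground)
    clusters, seen = [], set()
    for start in free:                      # breadth-first grouping
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            v = queue.popleft()
            comp.append(v)
            for n in neighbours(v):
                if n in free and n not in seen:
                    seen.add(n)
                    queue.append(n)
        clusters.append(comp)

    def touches_ground(comp):
        return any(n in ground for v in comp for n in neighbours(v))

    def small(comp):
        xs, ys, zs = zip(*comp)
        return all(max(a) - min(a) < noise_threshold_cells
                   for a in (xs, ys, zs))

    return [c for c in clusters if touches_ground(c) or not small(c)]
```

An isolated single-voxel blob with no path to the ground is dropped, while a column of voxels resting on a ground voxel is retained as an object cluster.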
In particular, the segmentation algorithm does not make an approximation of the height corresponding to the laser scanner return when separating the objects above the ground. Also, the local maps have a higher resolution than the global map, which tends to allow efficient reasoning in the ground extraction process at a lower resolution, yet provides a fine resolution object model.

A further advantage provided by the above described segmentation algorithm is that it tends to be able to achieve the following tasks. Firstly, the explicit extraction of the surface of the ground 6 is performed, as opposed to extracting 3-dimensional surfaces without explicitly specifying which of those surfaces correspond to the ground 6. Secondly, overhanging structures, such as the canopy of the tree 10, are represented. Thirdly, full 3-dimensional segmentation of the objects 8, 10 is performed. Conventional algorithms do not jointly perform all of these tasks.

A further advantage of the segmentation algorithm is that errors that occur when generating 3-dimensional surfaces corresponding to the ground 6 tend to be minimised. This is due to the ability of the ground-object approach implemented by the segmentation algorithm to separate the objects above the ground.

A further advantage is that, by separately classifying terrain features, the terrain model produced by performing the segmentation algorithm tends to reduce the complexity of, for example, path planning operations. Also, high-resolution terrain navigation and obstacle avoidance, particularly for obstacles with overhangs, is provided. Moreover, the segmentation algorithm tends to allow planning operations to be performed efficiently in a reduced, i.e. 2-dimensional, workspace.

Also, the provided segmentation algorithm allows a path planner to take advantage of the segmented ground model. For example, clearance around obstacles with complex geometry can be determined.
This allows for better navigation through regions with overhanging features.

In the above embodiments, the average of the plurality of values measured in the cells is used to determine clusters, to further process those clusters, and in various other processes. However, this need not be the case, and instead, in other embodiments, other functions may be used instead of the average value, for example an average of the parameter values that remain after certain extreme values have been filtered out, or statistical measures other than an average as such.

In the above embodiments, the measured parameter is the height of the terrain and/or objects above the ground. However, this need not be the case, and in other embodiments any other suitable parameter may be used instead, for example colour/texture properties, optical density, reflectivity, and so on.

Apparatus, including the processor 3, for implementing the above arrangement, and performing the method steps described above, may be provided by configuring or adapting any suitable apparatus, for example one or more computers or other processing apparatus or processors, and/or providing additional modules. The apparatus may comprise a computer, a network of computers, or one or more processors, for implementing instructions and using data, including instructions and data in the form of a computer program or plurality of computer programs stored in or on a machine readable storage medium such as computer memory, a computer disk, ROM, PROM etc., or any combination of these or other storage media.

In the above embodiments, the 3-dimensional point cloud data for the terrain area was provided by a Riegl laser scanner. However, in other embodiments the laser data is provided by a different means, for example by SICK or Velodyne sensors.
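As a sketch of one such alternative to the plain average mentioned above (the trim fraction is illustrative and not taken from the embodiments), a trimmed mean discards extreme values before averaging:

```python
def trimmed_mean(values, trim_fraction=0.1):
    """Average of the values remaining after discarding the given
    fraction of the smallest and largest values; a simple robust
    alternative to the plain mean for noisy laser returns."""
    vs = sorted(values)
    k = int(len(vs) * trim_fraction)        # values dropped per end
    kept = vs[k:len(vs) - k] if k else vs
    return sum(kept) / len(kept)
```

A single spurious return of 100 m among ground-level heights then no longer drags the cell's representative value upwards.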
Moreover, in other embodiments the data on the terrain area is not laser scanner data and is instead a different appropriate type of data, for example data generated by an infrared camera.

In the above embodiments, the terrain area is outdoors and comprises a building and a tree. However, in other embodiments the terrain area is a different appropriate area comprising any number of terrain features. In particular, the terrain features are not limited to trees and buildings.

In the above embodiments, the segmentation algorithm is performed by performing each of the above described method steps in the order provided above. However, in other embodiments certain method steps may be omitted. For example, steps s36 and s38 of the segmentation algorithm may be omitted; however, the resulting terrain model would tend to be less accurate than if these steps were included.

In the above embodiments, the segmentation algorithm does not take into account occluded, or partially hidden, objects. However, in other embodiments provision is made for partially hidden objects, as will now be described in more detail with reference to Figure 8.

Figure 8 is a schematic illustration of the laser scanner 2 scanning an object that is hidden behind a further object. The object being scanned by the laser scanner is hereinafter referred to as the "hidden object 40", and the object partially hiding, or occluding, the hidden object 40 is hereinafter referred to as the "non-hidden object 42".

In this example, the hidden object 40 can only be partially imaged by the laser scanner 2. Thus, a height of the hidden object observed by the laser scanner, hereinafter referred to as the "observed height 44", does not correspond to the actual object height 46. Accurate estimation of the ground height ideally considers occlusions such as these. Thus an estimation of the ground height is preferably based on non-occluded cells.
A cell can be assessed as non-occluded using a ray-tracing process. In a ray-tracing process, a set of cells, or trace, is computed to best approximate a straight line joining two given cells. If any of the cells in the trace do not correspond to the ground, the end cell of the trace is occluded. Using a ray-tracing process tends to allow occlusions to be taken into account and reliable estimates of the ground height to be computed.

In order to avoid resorting to an explicit ray-tracing process and to decrease the amount of computation, the following approach may be adopted. As described above, the ground is extracted by applying a threshold to the computed surface gradients. Thus, there is a "smoothness constraint" between neighbour cells identified as belonging to the ground. The terminology "smoothness constraint" is used to mean that the variation of height between two neighbour ground cells is limited. Thus, the closest ground cell to an obstacle will provide a reliable local estimate of the ground height, i.e. a given ground cell is connected (via "smoothness constraints") to the rest of the ground cells, which implies that this cell not only provides a local estimate of the ground height but in fact provides a globally constrained local estimate.

This approach advantageously tends to avoid the use of ray-tracing while providing reliable estimates of the ground height. Standard ray-tracing techniques do not use this reasoning simply because extracting the ground is not always possible.

The above embodiments of a segmentation algorithm provide a model of the ground surface and three-dimensional models of objects above the ground. In further embodiments, one or more of the object models that have been generated are then classified, i.e. the object model is assigned a particular object-class. Thus, the object corresponding to the object model is identified as a particular type of object.
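The ray-tracing occlusion test described above might be sketched as follows (a simple integer line walk stands in for whatever grid traversal an implementation would use, such as Bresenham's algorithm; the cell representation is an assumption):

```python
def trace_cells(a, b):
    """Cells approximating the straight line from grid cell a to
    grid cell b, inclusive of both endpoints."""
    (x0, y0), (x1, y1) = a, b
    steps = max(abs(x1 - x0), abs(y1 - y0))
    if steps == 0:
        return [a]
    return [(round(x0 + (x1 - x0) * t / steps),
             round(y0 + (y1 - y0) * t / steps)) for t in range(steps + 1)]

def is_occluded(sensor_cell, target_cell, ground_cells):
    """The end cell of the trace is treated as occluded if any
    intermediate cell of the trace does not correspond to the
    ground."""
    trace = trace_cells(sensor_cell, target_cell)
    return any(c not in ground_cells for c in trace[1:-1])
```

A target cell behind an object cell is thereby flagged, so its observed height is not used in the ground estimate.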
Classification processes that may be used for classifying the object model include those classification processes that compare features of the object model to features of one or more template models representative of one or more respective object classes. For example, conventional feature-based classification processes that incorporate Principal Component Analysis (PCA), a 'Spin Image', Moments Grids, and/or Spherical Harmonic Descriptors may be used.

Classification of the object models generated by the above described segmentation algorithms tends to produce more accurate and efficient results compared to classification of object models generated using conventional techniques. One reason for this is that better segmentation of the objects and the ground tends to result from the above described segmentation algorithm.

In the claims which follow and in the preceding description of the invention, except where the context requires otherwise due to express language or necessary implication, the word "comprise" or variations such as "comprises" or "comprising" is used in an inclusive sense, i.e. to specify the presence of the stated features but not to preclude the presence or addition of further features in various embodiments of the invention.

It is to be understood that, if any prior art publication is referred to herein, such reference does not constitute an admission that the publication forms a part of the common general knowledge in the art, in Australia or any other country.
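As an illustrative sketch of the PCA-based, feature-comparing classification route mentioned in the description above (only the use of PCA is from the description; the specific eigenvalue feature vector and the NumPy implementation are assumptions), the principal-component structure of a segmented object's points can be summarised and compared against template models as:

```python
import numpy as np

def pca_shape_features(points):
    """Sorted, normalised eigenvalues of the covariance of an
    object's 3D points. These describe whether the segmented object
    is predominantly linear, planar or volumetric, and can be
    compared against the same features of template models."""
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts.T)                      # 3x3 covariance matrix
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    # Normalise so features are scale-comparable between objects.
    return eigvals / eigvals.sum()
```

A thin vertical structure such as a tree trunk yields one dominant eigenvalue, whereas a building facade yields two; matching such signatures against class templates is one conventional classification step.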
Claims (15)
1. An extraction process for extracting an object (8, 10) or terrain feature (6), the extraction process comprising:
defining an area (4) to be processed;
dividing the area (4) into a plurality of cells (12, 14, 16);
measuring a value of a parameter at a plurality of different locations in each cell (12, 14, 16);
for each cell (12, 14, 16), determining a value of a function of the measured parameter values in that cell;
identifying a cell as corresponding only to a particular object (8, 10) or terrain feature (6) if the determined function value for that cell is in a range of values that corresponds to the particular object (8, 10) or terrain feature (6);
defining, for the cells that are not identified as corresponding only to a particular object (8, 10) or terrain feature (6), one or more sub-cells, each sub-cell having in it at least one of the plurality of different locations; and
identifying a sub-cell as corresponding at least in part to the particular object (8, 10) or terrain feature (6) if one or more of the measured parameter values for the at least one of the plurality of different locations in that sub-cell is in the range of values.
2. A process according to claim 1, wherein the step of identifying a sub-cell as corresponding at least in part to the particular object (8, 10) or terrain feature (6) comprises:
identifying a sub-cell as corresponding only to the particular object (8, 10) or terrain feature (6) if the measured parameter value for each of the at least one of the plurality of different locations in that sub-cell is in the range of values; and
identifying a sub-cell as corresponding in part to the particular object (8, 10) or terrain feature (6) if one or more of the measured parameter values for the at least one of the plurality of different locations in that sub-cell is in the range of values and if one or more of the measured parameter values for the at least one of the plurality of different locations in that sub-cell is not in the range of values.
3. A process according to claim 1 or 2, further comprising identifying a sub-cell as corresponding at least in part to a different object or terrain feature if one or more of the measured parameter values for the at least one of the plurality of different locations in that sub-cell is not in the range of values.
4. A process according to any of claims 1 to 3, further comprising identifying a sub-cell as corresponding only to a different object or terrain feature if each of the measured parameter values for the at least one of the plurality of different locations in that sub-cell is not in the range of values.
5. A process according to any of claims 1 to 3, wherein the step of determining a value of a function of the measured parameter values in that cell comprises:
determining an average value of the values of a parameter measured at the plurality of different locations in each cell (12, 14, 16).
6. A process according to claim 5, wherein the step of determining a value of a function of the measured parameter values in that cell further comprises:
determining a gradient value for each cell using the determined average value for that cell and the determined average values for each of the cells surrounding that cell.
7. A process according to claim 6, wherein the step of determining a value of a function of the measured parameter values in that cell further comprises:
grouping together directly adjacent cells (12, 14, 16) having a gradient value below a first threshold value to form one or more clusters of cells.
8. A process according to claim 7, wherein the step of determining a value of a function of the measured parameter values in that cell further comprises:
determining an average value of the values of a parameter measured in each cell of each cluster of cells.
9. A process according to claim 8, wherein the step of identifying a cell as corresponding only to a particular object (8, 10) or terrain feature (6) if the determined function value for that cell is in a range of values comprises:
identifying the cluster of cells having the largest number of cells (12, 14, 16); and
determining the range of values using the determined average value of the largest cluster and a second threshold value.
10. A process according to any of claims 1 to 9, wherein the parameter is a height of the terrain.
11. A process according to any of claims 1 to 10, wherein the terrain feature (6) is a ground surface of the terrain.
12. A process according to any of claims 1 to 11, further comprising classifying the particular object (8, 10) or terrain feature (6) as belonging to a certain class of objects using values of the parameter measured in cells or sub-cells that are identified as corresponding only to the particular object (8, 10) or terrain feature (6).
13. An apparatus for generating a model of terrain in accordance with the process of any of claims 1 to 12, the apparatus comprising a scanning and measuring apparatus (2) for measuring the plurality of values of a parameter, and a processor (3) arranged to perform the processing steps recited in claims 1 to 12.
14. A computer program or plurality of computer programs arranged such that when executed by a computer system it/they cause the computer system to operate in accordance with the process of any of claims 1 to 12.
15. A machine readable storage medium storing a computer program or at least one of the plurality of computer programs according to claim 14.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2010200145A AU2010200145A1 (en) | 2010-01-14 | 2010-01-14 | Extraction processes |
PCT/AU2011/000015 WO2011085436A1 (en) | 2010-01-14 | 2011-01-07 | Extraction processes |
Publications (1)
Publication Number | Publication Date |
---|---|
AU2010200145A1 true AU2010200145A1 (en) | 2011-07-28 |
Family
ID=44303719
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
AU2010200145A Abandoned AU2010200145A1 (en) | 2010-01-14 | 2010-01-14 | Extraction processes |
Country Status (2)
Country | Link |
---|---|
AU (1) | AU2010200145A1 (en) |
WO (1) | WO2011085436A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2894600B1 (en) * | 2014-01-14 | 2018-03-14 | HENSOLDT Sensors GmbH | Method of processing 3D sensor data to provide terrain segmentation |
US11195324B1 (en) | 2018-08-14 | 2021-12-07 | Certainteed Llc | Systems and methods for visualization of building structures |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7110604B2 (en) * | 2001-06-26 | 2006-09-19 | Anoto Ab | Processing of digital images |
US7400770B2 (en) * | 2002-11-06 | 2008-07-15 | Hrl Laboratories | Method and apparatus for automatically extracting geospatial features from multispectral imagery suitable for fast and robust extraction of landmarks |
- 2010-01-14: AU application AU2010200145A (published as AU2010200145A1), status: Abandoned
- 2011-01-07: WO application PCT/AU2011/000015 (published as WO2011085436A1), status: Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2011085436A1 (en) | 2011-07-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Park et al. | Creating 3D city models with building footprints and LIDAR point cloud classification: A machine learning approach | |
Forlani et al. | Complete classification of raw LIDAR data and 3D reconstruction of buildings | |
Lafarge et al. | Creating large-scale city models from 3D-point clouds: a robust approach with hybrid representation | |
Zhang et al. | Filtering airborne LiDAR data by embedding smoothness-constrained segmentation in progressive TIN densification | |
Jaboyedoff et al. | New insight techniques to analyze rock-slope relief using DEM and 3D imaging cloud points: COLTOP-3D software | |
Rottensteiner et al. | The ISPRS benchmark on urban object classification and 3D building reconstruction | |
You et al. | Urban site modeling from lidar | |
CN106127857B (en) | The on-board LiDAR data modeling method of integrated data driving and model-driven | |
US20130096886A1 (en) | System and Method for Extracting Features from Data Having Spatial Coordinates | |
WO2011085435A1 (en) | Classification process for an extracted object or terrain feature | |
CN109242862A (en) | A kind of real-time digital surface model generation method | |
Höfle et al. | Detection of building regions using airborne LiDAR: a new combination of raster and point cloud based GIS methods | |
Huber et al. | Fusion of LIDAR data and aerial imagery for automatic reconstruction of building surfaces | |
WO2011085434A1 (en) | Extraction processes | |
Li et al. | New methodologies for precise building boundary extraction from LiDAR data and high resolution image | |
AU2012229873A1 (en) | Extraction processes | |
WO2011085433A1 (en) | Acceptation/rejection of a classification of an object or terrain feature | |
Li et al. | Coarse-to-fine segmentation of individual street trees from side-view point clouds | |
WO2011085437A1 (en) | Extraction processes | |
Pfeifer et al. | Extraction of building footprints from airborne laser scanning: Comparison and validation techniques | |
WO2011066602A1 (en) | Extraction processes | |
Rutzinger et al. | Change detection of building footprints from airborne laser scanning acquired in short time intervals | |
AU2010200145A1 (en) | Extraction processes | |
Zeng | Automated Building Information Extraction and Evaluation from High-resolution Remotely Sensed Data | |
Cui et al. | A Review of Indoor Automation Modeling Based on Light Detection and Ranging Point Clouds. |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
MK1 | Application lapsed section 142(2)(a) - no request for examination in relevant period |