CN108399399B - Urban range extraction method based on noctilucent remote sensing image - Google Patents
- Publication number
- CN108399399B CN108399399B CN201810246441.8A CN201810246441A CN108399399B CN 108399399 B CN108399399 B CN 108399399B CN 201810246441 A CN201810246441 A CN 201810246441A CN 108399399 B CN108399399 B CN 108399399B
- Authority
- CN
- China
- Prior art keywords
- urban
- threshold
- value
- pixels
- dmsp
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/176—Urban or other man-made structures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
Abstract
A city range extraction method based on a noctilucent remote sensing image comprises: obtaining nighttime light DMSP/OLS data of a research area, land-use type data for part of the research area, and a MODIS water-body distribution map; masking the noctilucent image with the MODIS water-body distribution map and segmenting it; calculating the pixel count, noctilucent mean, maximum, minimum, and variance within each segmented object; selecting a subset of segmented objects, calculating their optimal light segmentation thresholds, and training a BP neural network model; and using the trained BP neural network model to calculate the optimal thresholds of all segmented objects, from which the final city range of the research area is obtained. The method extracts the city range from the noctilucent remote sensing image in combination with a BP neural network, making the extracted city range more accurate. It features a small amount of computation, low complexity, and high precision, and can provide timely, effective information for urban development and layout.
Description
Technical Field
The invention belongs to the field of applying noctilucent remote sensing to urban development research, and provides a new urban range extraction method based on noctilucent remote sensing images.
Background
China's urbanization began to accelerate in the 1980s. Urbanization is one of the main indicators for measuring a country's development status, but the rapid urbanization process has also placed many pressures on people's living environment and on ecosystems. Four questions, namely the relation between urbanization and industrialization, whether urbanization is lagging, how fast urbanization should proceed over the next 20 years, and what scale of cities should be developed, are the main issues in China's current urbanization process. To address them, the exact extent of cities and the spatio-temporal trends of urban change must be obtainable in a timely manner.
The development of aerospace remote sensing provides an efficient means for urban range extraction. Compared with traditional ground survey methods, it offers a smaller workload, lower cost, shorter cycles, and higher efficiency, and can meet the needs of current urbanization research. Traditional remote-sensing approaches to urban range extraction generally use higher-resolution multispectral images acquired within one year, and the extraction process mainly involves geometric correction, atmospheric correction, mosaicking, clipping, and classification. Multispectral images are strongly affected by weather, so images acquired at different times are difficult to handle during geometric correction and mosaicking. In addition, the high spatial resolution of commonly used multispectral images increases the data volume, causing difficulties in data processing, storage, management, and distribution.
Nighttime light remote sensing began in the 1970s with the United States Defense Meteorological Satellite Program's Operational Linescan System (DMSP/OLS), which initially acquired nighttime cloud-layer distribution information by detecting weak light reflected by cloud layers at night[1]. Scientists later found that, under cloudless conditions, the sensor can also record visible light emitted from the earth's surface, such as town lighting, fishing-boat lights, and flares from oil and gas wells. Unlike traditional daytime remote sensing, the noctilucent image is acquired at night, when the illumination of lighting facilities is what most directly reflects human activity. The noctilucent images currently available include the annual products of DMSP/OLS and the monthly and annual products of the Visible Infrared Imaging Radiometer Suite (VIIRS) carried by the Suomi National Polar-orbiting Partnership (Suomi NPP) satellite. Both released noctilucent products are globally composited, requiring no atmospheric correction or mosaicking, and their data volume is small, so urban range extraction from noctilucent imagery is an efficient new means of studying urban development.
At present, many scholars have studied methods for extracting the city range from noctilucent images, mainly based on threshold segmentation. Threshold segmentation methods include the single-threshold method, the zonal-threshold method, and object-based threshold methods.
The single-threshold method was first proposed by Imhoff et al.[2]: one light brightness value is selected as the urban segmentation threshold for the whole research area, so that pixels brighter than the threshold are urban and all others are non-urban. The method is simple and easy to implement, but as the research area grows, unbalanced development within it widens the differences in nighttime light distribution, and a single threshold can no longer meet the needs of large-area research[3].
To address the deficiency of the single-threshold method, Liu et al.[4] first proposed a zonal-threshold method: the research area is divided into several smaller regions according to economy, population, geographical position, and so on; a threshold is then selected independently in each region, and the city range of the whole research area is obtained from the regional thresholds. Although more reasonable than the single-threshold method, the resulting city range varies with the zoning criteria, and large differences in light distribution persist even within a single city.
To make up for the deficiencies of the single-threshold and zonal-threshold methods, documents [5-6] proposed object-based threshold methods: the noctilucent image is segmented according to light brightness values, and an optimal threshold is then calculated within each object. This object-based approach resolves the accuracy problems caused by differences in the spatial and brightness distribution of light. Document [5] further shows that the optimal light threshold is related not only to brightness values but also to factors such as the size of the segmented object and its internal mean, following a certain functional relationship. However, the functional model differs across data years, so its parameters must be redesigned for each year, and the relationship between threshold and light data is difficult to fit accurately from scatter plots and statistics.
An Artificial Neural Network (ANN) is a computational method developed from research on biological neural networks. It learns from training samples to complete a specific task, such as classification, and gradually improves its performance during training[7]. An artificial neural network is a mathematical model that attempts to imitate the structure and function of a biological neural network. Its basic component is the artificial neuron, a simple mathematical function involving three operations: multiplication, summation, and activation. At the neuron's input, each input value is multiplied by an individual weight; in the middle, the weighted inputs and a bias are summed; at the output, this sum is passed through an activation function to produce the final result.
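The three neuron operations described above, weighting (multiplication), summation, and activation, can be sketched in a few lines of Python; the input values, weights, and bias below are arbitrary illustrative numbers:

```python
import numpy as np

# One artificial neuron: multiply each input by its weight, sum the
# weighted inputs with a bias, then apply the activation function.
def neuron(inputs, weights, bias, activation=np.tanh):
    return activation(np.dot(inputs, weights) + bias)

out = neuron(np.array([0.5, -1.0, 2.0]),   # inputs
             np.array([0.1, 0.2, 0.3]),    # per-input weights
             0.05)                         # bias
print(out)  # tanh(0.5)
```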
To fully exploit the mathematical expressiveness that arises from interconnecting artificial neurons, while avoiding system complexity that becomes difficult to manage, the connections between artificial neurons usually follow certain rules and paradigms. Several "standardized" artificial neural network topologies have been proposed, and these predefined shapes help users solve problems more simply, quickly, and efficiently[8]. Different topologies suit different types of problems. Once the type of a given problem is determined, the topology to adopt is determined as well, and to get the best out of the network, the topology and its parameters must be fine-tuned. Constructing the topology, however, is only half the work: just as biological neural networks must learn how to produce appropriate outputs for given inputs in an environment, artificial neural networks must learn, by supervised, unsupervised, or reinforcement learning, how to map inputs to outputs, yielding an optimal network model from the sample data.
In recent years, various artificial neural network models have been proposed and studied intensively. Among them, 80-90% are the feedforward Back-Propagation network (BP network) or improved forms of it[9]. A BP neural network has an input layer, an output layer, and one or more hidden layers; neurons within the same layer are not connected, while neurons in adjacent layers are connected in the forward direction. By selecting an appropriate network structure according to the complexity of the problem, the mapping of any nonlinear function from input space to output space can be realized. BP neural networks are mainly used for function approximation, system identification and prediction, classification, data compression, and similar tasks[10].
Synthesizing previous research: there is currently no mature method for extracting the city range from noctilucent remote sensing images, and most methods require manually set thresholds, so the results carry substantial human-factor error. Meanwhile, object-based city range extraction indicates that a certain functional relationship exists between an object's threshold and its parameters. The present method uses a BP neural network model, trained on samples consisting of the maximum, minimum, mean, variance, pixel count, and optimal threshold within each segmented object, so that the trained network simulates the functional relationship between the input parameters and the output optimal threshold. The method sets thresholds and extracts the city range automatically, freeing extraction from noctilucent imagery from human interference and producing more accurate, reliable results.
The relevant documents are as follows:
[1] Croft T A. Nighttime Images of the Earth from Space [J]. Scientific American, 1978, 239(1): 86-98.
[2] Imhoff M L, Lawrence W T, Stutzer D C, et al. A technique for using composite DMSP/OLS "City Lights" satellite data to map urban area [J]. Remote Sensing of Environment, 1997, 61(3): 361-370.
[3] Zhang Q, Seto K C. Can Night-Time Light Data Identify Typologies of Urbanization? A Global Assessment of Successes and Failures [J]. Remote Sensing, 2013, 5(5): 3476-3494.
[4] Liu Z, He C, Zhang Q, et al. Extracting the dynamics of urban expansion in China using DMSP-OLS nighttime light data from 1992 to 2008 [J]. Landscape & Urban Planning, 2012, 106: 62-72.
[5] Zhou Y, Smith S J, Elvidge C D, et al. A cluster-based method to map urban area from DMSP/OLS nightlights [J]. Remote Sensing of Environment, 2014, 147: 173-185.
[6] Xie Y, Weng Q. Updating urban extents with nighttime light imagery by using an object-based thresholding method [J]. Remote Sensing of Environment, 2016, 187: 1-13.
[7] Whitley D, Starkweather T, Bogart C. Genetic algorithms and neural networks: optimizing connections and connectivity [J]. Parallel Computing, 1990, 14(3): 347-361.
[8] Li Shuangcheng, Zheng Du. Application of artificial neural network models in geoscience research [J]. Advances in Earth Science, 2003, 18(1): 68-76.
[9] Yang Zhaosheng, Zhu Zhong. BP neural network-based real-time path travel time prediction model [J]. Systems Engineering - Theory & Practice, 1999, 19(8): 59-64.
[10] Lemna, once ordered, Duoyanze, Jinxieli, Liuyanchun, Wang Hui.
Disclosure of Invention
Aiming at the deficiencies of current approaches to extracting the city range from noctilucent remote sensing images, and building on a comprehensive study of previous work, the invention provides a novel city range extraction method based on noctilucent remote sensing images.
In order to solve the technical problems, the invention adopts the following technical scheme:
a city range extraction method based on a noctilucent remote sensing image comprises the following steps:
step 1, acquiring night light DMSP/OLS data of a research area, land utilization type data of a part of the research area and an MODIS water body distribution map;
step 2, establishing a binary image by using the MODIS water body distribution diagram to perform mask processing on the DMSP/OLS noctilucent image;
step 3, segmenting the DMSP/OLS noctilucent image processed by the mask obtained in the step 2;
step 4, calculating the pixel number, the noctilucent mean value, the maximum value, the minimum value and the variance in each segmented object obtained in the step 3;
step 5, selecting the segmentation objects with corresponding land use type data, and calculating the optimal light segmentation threshold of each segmentation object according to the number of city pixels in the land use type data;
step 6, taking the pixel number, the average value, the maximum value, the minimum value and the variance in the segmented object obtained in the step 5 as input data, and taking the corresponding optimal light segmentation threshold value as an output value, and training a BP neural network model;
and 7, calculating the optimal threshold values of all the segmented objects by using the BP neural network model trained in the step 6, and obtaining the final city range of the research area by using the threshold values.
In step 5, the following substeps are performed for each of the divided objects,
step 5.1, taking the minimum value of the pixels in the object in the noctilucent image as a Threshold initial value Threshold _ urban, wherein the lamplight pixels larger than the Threshold are urban pixels, and otherwise, the lamplight pixels are non-urban pixels;
step 5.2, respectively calculating the number of Urban pixels Urban _ DMSP obtained by utilizing a threshold value in the object and the number of Urban pixels Urban _ NLCD in the land utilization type data;
step 5.3, comparing the city pixel number extracted by using the threshold value with the known city pixel number in the land use type data, and calculating an absolute error value T of the two to be | Urban _ DMSP-Urban _ NLCD |;
step 5.4, sequentially selecting the pixel values in the object from small to large as a new Threshold initial value Threshold _ urban, and repeating the steps 5.1 to 5.3 until the light brightness values of all the pixels in the object are calculated;
and 5.5, selecting the corresponding light brightness value when the T is minimum as the optimal light Threshold value Threshold _ urban.
In step 6, the BP neural network is set to have an input layer, 2 hidden layers, and 1 output layer, wherein the input layer has 5 neurons, each of the 2 hidden layers has 10 neurons, and the final output layer has 1 neuron.
In step 3, the DMSP/OLS image after the mask processing is segmented by adopting a marker control watershed segmentation algorithm.
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. By using the object-based method together with a neural network, the invention makes the calculation results more reasonable and accurate.
2. The invention uses a neural network instead of a specific function model, avoiding deviations in the calculation results caused by an inaccurate function model, and making data processing and calculation simpler.
Drawings
FIG. 1 is a schematic diagram of the extraction process of the present invention.
Fig. 2 is a schematic structural diagram of a BP neural network model and input/output variables according to an embodiment of the present invention.
Detailed Description
The technical solution of the present invention is explained below with reference to the drawings and examples.
Referring to fig. 1, an embodiment of the present invention includes the steps of:
step 1, acquiring a night light DMSP/OLS image of a research area, land utilization type data of a part of the research area and an MODIS water body distribution map;
preferably, the data of the same year is adopted, and the specific operation of the step is as follows:
respectively downloading DMSP/OLS luminous images of the same year in the research area, land utilization type data of partial areas in the research area (only the data of the partial areas are needed for training a network) and an MODIS water body distribution map.
Step 2, establishing a binary image by using the MODIS water body distribution diagram to perform mask processing on the DMSP/OLS noctilucent image;
the specific operation of this step is as follows:
In this step, light reflected by water bodies increases the apparent amount of surface light; this influence is eliminated by mask processing, with the calculation formula:
DMSP(i,j)=DMSP(i,j)×LC_Water(i,j).……(3)
In formula (3), DMSP is the noctilucent image (the invention uses DMSP/OLS noctilucent images) and LC_Water is the water-body binary image (1 for land pixels, 0 for water pixels); i, j are the row and column numbers of the image, so DMSP(i,j) is the value of pixel (i,j) in the noctilucent image and LC_Water(i,j) is the value of pixel (i,j) in the water-body binary image.
Step 3, segmenting the DMSP/OLS noctilucent image processed by the mask obtained in the step 2 by utilizing a mark control watershed segmentation algorithm;
the method comprises the following specific steps:
The DMSP/OLS image after mask processing is segmented by the marker-controlled watershed segmentation algorithm, with a 3×3 segmentation window. For the marker-controlled watershed segmentation algorithm, see: Parvati K, Rao B S P, Das M. Image Segmentation Using Gray-Scale Morphology and Marker-Controlled Watershed Transformation [J]. Discrete Dynamics in Nature and Society, 2008.
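A compact sketch of marker-controlled watershed segmentation, using SciPy's IFT watershed as a stand-in for the cited algorithm; the synthetic image and hand-placed marker positions are assumptions for illustration (in practice markers would be derived from the light image itself):

```python
import numpy as np
from scipy import ndimage

# Synthetic light image with two bright clusters on a dark background.
image = np.zeros((20, 20), dtype=np.uint8)
image[4:8, 4:8] = 50       # first "city" cluster
image[12:17, 12:17] = 60   # second, brighter cluster

# Markers: one positive label seeded inside each cluster, a negative
# label for the background. watershed_ift floods from these seeds over
# the inverted image, so bright clusters become catchment basins.
markers = np.zeros(image.shape, dtype=np.int16)
markers[5, 5] = 1
markers[14, 14] = 2
markers[0, 0] = -1         # background seed

labels = ndimage.watershed_ift(np.uint8(image.max() - image), markers)
```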
Step 4, calculating the pixel number, the noctilucent mean value, the maximum value, the minimum value and the variance in each segmented object obtained in the step 3;
the method comprises the following specific steps:
The pixel count, light mean, maximum, minimum, and variance of each segmented object are counted respectively, with the calculation formulas:
Obj_size=n.……(4)
Obj_mean=(1/n)∑DMSP(i),i=1,2,…,n.……(5)
Obj_max=MAX(DMSP(i)),i=1,2,…,n.……(6)
Obj_min=MIN(DMSP(i)),i=1,2,…,n.……(7)
Obj_var=(1/n)∑(DMSP(i)-Obj_mean)²,i=1,2,…,n.……(8)
where Obj_size, Obj_mean, Obj_max, Obj_min, and Obj_var are, respectively, the pixel count of the object and the mean, maximum, minimum, and variance of the pixels in the object; n is the number of pixels in the object; and DMSP(i) is the value of the i-th pixel of the object in the masked DMSP/OLS noctilucent image obtained in step 2.
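Given the masked light image and a label image from the segmentation, the five per-object features of step 4 can be computed directly with numpy; the toy arrays below are illustrative only:

```python
import numpy as np

# Toy masked light image and the corresponding segmentation labels
# (0 = background, 1 and 2 = two segmented objects).
dmsp = np.array([[10, 12,  0],
                 [14, 40, 44],
                 [ 0, 42,  0]], dtype=float)
labels = np.array([[1, 1, 0],
                   [1, 2, 2],
                   [0, 2, 0]])

def object_features(dmsp, labels, obj_id):
    values = dmsp[labels == obj_id]       # pixels belonging to this object
    return {
        "Obj_size": values.size,          # pixel count n
        "Obj_mean": values.mean(),        # light mean
        "Obj_max":  values.max(),         # maximum
        "Obj_min":  values.min(),         # minimum
        "Obj_var":  values.var(),         # population variance
    }

feats = object_features(dmsp, labels, 1)
print(feats)
```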
Step 5, selecting segmentation objects with corresponding land utilization type data, and calculating the optimal light threshold value according to the number of city pixels of the segmentation objects in the land utilization type data;
the step 5 further includes the following sub-steps for each object:
step 5.1, taking the minimum value of the pixels in the object in the noctilucent image as a Threshold initial value Threshold _ urban, wherein the lamplight pixels larger than the Threshold are urban pixels, and otherwise, the lamplight pixels are non-urban pixels;
step 5.2, respectively calculating the number of Urban pixels Urban _ DMSP obtained by utilizing a threshold value in the object and the number of Urban pixels Urban _ NLCD in the land utilization type data;
step 5.3, comparing the urban pixel number extracted by using the threshold value with the known urban pixel number in the land use type data, and calculating the absolute error value of the two: t ═ Urban _ DMSP-Urban _ NLCD |;
step 5.4, sequentially selecting the pixel values in the object as a new Threshold initial value Threshold _ urban from small to large, and repeating the steps 5.1, 5.2 and 5.3 until the light brightness values of all the pixels in the object are calculated;
and 5.5, selecting the light brightness value corresponding to the minimum T as the optimal light Threshold Threshold _ urban.
In the embodiment, the specific operations in this step are as follows:
Randomly select a subset of objects and calculate the optimal threshold of each object: the threshold corresponding to the minimum of |LC_sum-Obj_sum| is the optimal threshold, with the candidate thresholds taken as
Obj_threshold(j)∈[min(DMSP),max(DMSP)].……(9)
where DMSP is the segmented object currently processed, min(DMSP) is the minimum pixel value in the object, max(DMSP) is the maximum pixel value in the object, Obj_threshold(j) is the j-th candidate for the threshold initial value Threshold_urban of the current object, DMSP_Urban(i) is the value of the i-th pixel of the current object after threshold segmentation (1 for an urban pixel, 0 for a non-urban pixel), LC_Urban(i) is the land-use type of the i-th pixel of the current object (1 for an urban pixel, 0 for a non-urban pixel), LC_sum is the number of urban pixels in the land-use type data, Obj_sum is the number of urban pixels in the DMSP/OLS noctilucent image object under a given threshold, and n is the total number of pixels in the object. All Obj_threshold values are traversed from small to large; Threshold_urban = Obj_threshold(j) when |LC_sum-Obj_sum| reaches its minimum.
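The brute-force search of steps 5.1-5.5 can be sketched as follows; the sample arrays are hypothetical:

```python
import numpy as np

def optimal_threshold(obj_pixels, lc_urban):
    """Steps 5.1-5.5: try each brightness value in the object as the
    threshold and keep the one minimizing T = |Urban_DMSP - Urban_NLCD|.

    obj_pixels: 1-D brightness values of one segmented object.
    lc_urban:   1-D binary array, 1 = urban in the land-use reference."""
    urban_nlcd = int(np.sum(lc_urban))            # Urban_NLCD
    best_t, best_err = None, None
    for t in np.sort(np.unique(obj_pixels)):      # candidates, small to large
        urban_dmsp = int(np.sum(obj_pixels > t))  # pixels brighter than t
        err = abs(urban_dmsp - urban_nlcd)        # T for this candidate
        if best_err is None or err < best_err:
            best_t, best_err = t, err
    return best_t

pixels = np.array([3, 10, 20, 30, 40, 50])
lc = np.array([0, 0, 0, 1, 1, 1])     # reference says 3 urban pixels
print(optimal_threshold(pixels, lc))  # 20: exactly 3 pixels exceed it
```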
Step 6, taking the pixel number, the luminous mean value, the maximum value, the minimum value and the variance in the segmented object obtained in the step 5 as input data, and taking the corresponding optimal light segmentation threshold value as an output value, and taking the output value as a network training sample to train the constructed BP neural network model;
the step 6 further comprises the following substeps:
Step 6.1: first design the number of layers of the BP neural network and the activation function of each layer. The network comprises an input layer, two hidden layers, and an output layer; the input layer has five neurons, each hidden layer has ten neurons, and the output layer has one neuron. The activation functions of the hidden layers and the output layer are the "tansig" and "purelin" functions, respectively.
And 6.2, selecting partial objects, and training the model by respectively taking the maximum value, the minimum value, the mean value, the variance and the object size of the objects as input values, wherein the optimal threshold value of the selected partial objects is taken as an output value.
And 6.3, processing all the objects by using the BP neural network model trained by the selected part of the objects, and calculating to obtain the optimal threshold value of each object.
In the embodiment, the specific operations in this step are as follows:
As shown in fig. 2, a 4-layer BP neural network initial model is first constructed, in which the first layer is the input layer, the second and third layers are hidden layers with 10 nodes each, and the fourth layer is the output layer.
Randomly selecting part of the objects to train the BP neural network model, wherein the data composition of the samples is input data (Obj _ size, Obj _ mean, Obj _ max, Obj _ min, Obj _ var) and output data (Obj _ threshold).
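The 5-10-10-1 network of this embodiment can be sketched in plain numpy, with tanh hidden layers standing in for "tansig" and a linear output for "purelin"; the training data below are random stand-ins for the real per-object samples:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    return rng.normal(0.0, 0.5, (n_in, n_out)), np.zeros(n_out)

W1, b1 = layer(5, 10)    # input layer -> hidden layer 1
W2, b2 = layer(10, 10)   # hidden layer 1 -> hidden layer 2
W3, b3 = layer(10, 1)    # hidden layer 2 -> output layer

def forward(X):
    h1 = np.tanh(X @ W1 + b1)       # "tansig"-style activation
    h2 = np.tanh(h1 @ W2 + b2)
    return h1, h2, h2 @ W3 + b3     # "purelin": linear output

X = rng.random((64, 5))             # stand-in (size, mean, max, min, var)
y = X.mean(axis=1, keepdims=True)   # stand-in "optimal threshold" target

lr, losses = 0.05, []
for _ in range(500):
    h1, h2, out = forward(X)
    err = out - y
    losses.append(float((err ** 2).mean()))
    # Back-propagation: chain rule through the linear output and tanh layers.
    g3 = 2 * err / len(X)
    g2 = (g3 @ W3.T) * (1 - h2 ** 2)
    g1 = (g2 @ W2.T) * (1 - h1 ** 2)
    W3 -= lr * (h2.T @ g3); b3 -= lr * g3.sum(axis=0)
    W2 -= lr * (h1.T @ g2); b2 -= lr * g2.sum(axis=0)
    W1 -= lr * (X.T @ g1);  b1 -= lr * g1.sum(axis=0)

print(losses[0], "->", losses[-1])  # mean-squared error shrinks with training
```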
And 7, calculating the optimal threshold values of all the segmented objects by using the BP neural network model trained in the step 6, and obtaining the final city range of the research area by using the threshold values.
The specific operation of this step is as follows:
and 7.1, processing all the segmentation objects by using the trained network model to obtain the optimal threshold value of each object.
Step 7.2: calculate the urban range using the optimal thresholds obtained from the BP neural network model. Each object is extracted with its corresponding optimal threshold, light pixels brighter than the threshold being urban pixels and all others non-urban; the urban range is obtained by combining the extraction results of all objects.
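Steps 7.1-7.2 amount to applying each object's predicted threshold and merging the binary results. A sketch, where predict_threshold is a hypothetical stand-in for the trained BP model (a toy mean rule is used here):

```python
import numpy as np

def extract_urban(dmsp, labels, predict_threshold):
    """Apply a per-object threshold and merge the results into one map."""
    urban = np.zeros(dmsp.shape, dtype=np.uint8)
    for obj_id in np.unique(labels):
        if obj_id == 0:                      # 0 = background / masked water
            continue
        mask = labels == obj_id
        t = predict_threshold(dmsp[mask])    # per-object optimal threshold
        urban[mask] = dmsp[mask] > t         # brighter than t -> urban pixel
    return urban

dmsp = np.array([[10., 50.], [20., 60.]])
labels = np.array([[1, 1], [2, 2]])
urban = extract_urban(dmsp, labels, lambda v: v.mean())
print(urban)
```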
The technical solution and the advantages of the present invention will be further described with reference to specific examples.
DMSP/OLS noctilucent images of China for 2005, land-use type data for some Chinese cities, and the MODIS water-body distribution map are selected. First the DMSP/OLS noctilucent image is masked with the MODIS water-body image, and the masked image is then segmented with the image segmentation algorithm. A randomly selected subset of objects is used to train the BP neural network model, and the trained model is finally applied to the whole image to obtain the 2005 urban range. Regression analysis between the experimental result and the real land-use data at the provincial scale gives R² = 0.85, RMSE = 421.8.
In specific implementation, the process provided by the invention can be run automatically using software.
The foregoing is a detailed description of the present invention in conjunction with specific embodiments, and the practice of the invention is not limited to these descriptions. Those skilled in the art may make several simple deductions or substitutions without departing from the spirit of the invention, all of which shall fall within the protection scope of the invention.
Claims (5)
1. A city range extraction method based on a luminous remote sensing image is characterized by comprising the following steps:
step 1, acquiring night light DMSP/OLS data of a research area, land utilization type data of a part of the research area and an MODIS water body distribution map;
step 2, establishing a binary image by using the MODIS water body distribution diagram to perform mask processing on the DMSP/OLS noctilucent image;
step 3, segmenting the DMSP/OLS noctilucent image processed by the mask obtained in the step 2;
step 4, calculating the pixel number, the noctilucent mean value, the maximum value, the minimum value and the variance in each segmented object obtained in the step 3;
step 5, selecting the segmentation objects with corresponding land use type data, and calculating the optimal light segmentation threshold of each segmentation object according to the number of city pixels in the land use type data;
step 6, taking the pixel number, the average value, the maximum value, the minimum value and the variance in the segmented object obtained in the step 5 as input data, and taking the corresponding optimal light segmentation threshold value as an output value, and training a BP neural network model;
and 7, calculating the optimal threshold values of all the segmented objects by using the BP neural network model trained in the step 6, and obtaining the final city range of the research area by using the threshold values.
2. The urban area extraction method based on the noctilucent remote sensing image as claimed in claim 1, characterized in that in step 5, the following substeps are respectively executed for each segmented object: step 5.1, taking the minimum value of the pixels in the object in the noctilucent image as a Threshold initial value Threshold _ urban, wherein the light pixels larger than the threshold are urban pixels, and otherwise, the light pixels are non-urban pixels;
step 5.2, respectively calculating the number of Urban pixels Urban _ DMSP obtained by utilizing a threshold value in the object and the number of Urban pixels Urban _ NLCD in the land utilization type data;
step 5.3, comparing the city pixel number extracted by using the threshold value with the known city pixel number in the land use type data, and calculating an absolute error value T of the two to be | Urban _ DMSP-Urban _ NLCD |;
step 5.4, sequentially selecting the pixel values in the object from small to large as a new Threshold initial value Threshold _ urban, and repeating the steps 5.1 to 5.3 until the light brightness values of all the pixels in the object are calculated;
and 5.5, selecting the corresponding light brightness value when the T is minimum as the optimal light Threshold value Threshold _ urban.
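The substep 5.1 to 5.5 search is a brute-force scan over the object's own brightness values; a minimal sketch follows, where everything except the names Threshold_urban, Urban_DMSP and Urban_NLCD is an illustrative assumption:

```python
import numpy as np

def optimal_threshold(object_pixels, urban_nlcd):
    """Return the Threshold_urban minimizing T = |Urban_DMSP - Urban_NLCD|.

    Scans the object's brightness values from small to large (steps 5.1 and 5.4);
    duplicate values are skipped since they give identical urban pixel counts.
    """
    best_threshold, best_error = None, None
    for threshold_urban in np.unique(object_pixels):  # sorted ascending
        urban_dmsp = int(np.sum(object_pixels > threshold_urban))  # step 5.2: pixels brighter than threshold
        error = abs(urban_dmsp - urban_nlcd)                       # step 5.3: T
        if best_error is None or error < best_error:               # step 5.5: keep the minimum-T value
            best_threshold, best_error = threshold_urban, error
    return best_threshold

pixels = np.array([5, 12, 12, 20, 33, 41, 47])  # hypothetical object brightness values
thr = optimal_threshold(pixels, urban_nlcd=3)   # 3 urban pixels known from land use data
```

With 3 urban pixels expected, the scan settles on a threshold of 20, since exactly the three pixels brighter than 20 are labeled urban (T = 0).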
3. The urban range extraction method based on the noctilucent remote sensing image according to claim 1 or 2, characterized in that: in step 6, the BP neural network is set as an input layer, 2 hidden layers and 1 output layer, where the input layer contains 5 neurons, the 2 hidden layers contain 10 neurons, and the final output layer contains 1 neuron.
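For reference, the claimed 5-10-10-1 topology amounts to the forward pass below. This sketch assumes 10 neurons in each hidden layer and sigmoid activations, neither of which the claim pins down, and the weights are random placeholders rather than trained values:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Random placeholder weights: input(5) -> hidden(10) -> hidden(10) -> output(1).
W1, b1 = rng.normal(size=(5, 10)), np.zeros(10)
W2, b2 = rng.normal(size=(10, 10)), np.zeros(10)
W3, b3 = rng.normal(size=(10, 1)), np.zeros(1)

def forward(x):
    """x: (n, 5) rows of [pixel count, mean, max, min, variance] per object."""
    h1 = sigmoid(x @ W1 + b1)
    h2 = sigmoid(h1 @ W2 + b2)
    return h2 @ W3 + b3  # one predicted optimal threshold per object

# One hypothetical object; real inputs would be normalized before training.
features = np.array([[120, 35.0, 63.0, 4.0, 210.0]])
pred = forward(features)
```

Training such a network with backpropagation against the step 5 thresholds is what step 6 describes; any standard regressor with this layer layout would serve the same role.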
4. The urban range extraction method based on the noctilucent remote sensing image according to claim 1 or 2, characterized in that: in step 3, the masked DMSP/OLS image is segmented with a marker-controlled watershed segmentation algorithm.
5. The urban range extraction method based on the noctilucent remote sensing image as claimed in claim 3, characterized in that: in step 3, the masked DMSP/OLS image is segmented with a marker-controlled watershed segmentation algorithm.
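Claims 4 and 5 name marker-controlled watershed segmentation but no implementation; one possible sketch uses SciPy's image-foresting-transform watershed on an inverted brightness grid. The toy image and seed placement below are purely illustrative:

```python
import numpy as np
from scipy import ndimage

# Toy nighttime-light grid: two bright blobs separated by a dark valley.
image = np.array([
    [9, 9, 1, 7, 7],
    [9, 9, 1, 7, 7],
    [9, 9, 1, 7, 7],
], dtype=np.uint8)

# The watershed floods from low cost to high, so invert brightness to make
# each bright core a basin that grows outward.
cost = (image.max() - image).astype(np.uint8)

# One seed marker inside each bright blob (the "marker-controlled" step).
markers = np.zeros(image.shape, dtype=np.int16)
markers[1, 0] = 1  # left blob
markers[1, 4] = 2  # right blob

labels = ndimage.watershed_ift(cost, markers)  # per-pixel object labels
```

Each resulting label region is one segmented object whose brightness statistics feed steps 4 to 6 of claim 1.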
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810246441.8A CN108399399B (en) | 2018-03-23 | 2018-03-23 | Urban range extraction method based on noctilucent remote sensing image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810246441.8A CN108399399B (en) | 2018-03-23 | 2018-03-23 | Urban range extraction method based on noctilucent remote sensing image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108399399A CN108399399A (en) | 2018-08-14 |
CN108399399B true CN108399399B (en) | 2021-09-03 |
Family
ID=63091570
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810246441.8A Active CN108399399B (en) | 2018-03-23 | 2018-03-23 | Urban range extraction method based on noctilucent remote sensing image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108399399B (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109509154B (en) * | 2018-10-23 | 2021-05-18 | 东华理工大学 | Desaturation correction method for DMSP/OLS (Defense Meteorological Satellite Program/Operational Linescan System) annual stable noctilucent remote sensing image |
CN109670556B (en) * | 2018-12-27 | 2023-07-04 | 中国科学院遥感与数字地球研究所 | Global heat source heavy industry area identification method based on fire point and noctilucent data |
CN111862104B (en) * | 2019-04-26 | 2024-06-21 | 利亚德照明股份有限公司 | Video cutting method and system based on large-scale urban night scenes |
CN110765885B (en) * | 2019-09-29 | 2022-04-01 | 武汉大学 | City expansion detection method and device based on heterogeneous luminous remote sensing image |
CN111192298B (en) * | 2019-12-27 | 2023-02-03 | 武汉大学 | Relative radiation correction method for luminous remote sensing image |
CN111144340A (en) * | 2019-12-30 | 2020-05-12 | 中山大学 | Method and system for automatically monitoring human activities in natural reserve area based on night light and high-resolution remote sensing image |
CN112488820A (en) * | 2020-11-19 | 2021-03-12 | 建信金融科技有限责任公司 | Model training method and default prediction method based on noctilucent remote sensing data |
CN112561942B (en) * | 2020-12-16 | 2023-01-17 | 中国科学院地理科学与资源研究所 | Automatic extraction method of rural area ternary structure based on DMSP night light image |
CN112927354B (en) * | 2021-02-25 | 2022-09-09 | 电子科技大学 | Three-dimensional reconstruction method, system, storage medium and terminal based on example segmentation |
CN113158899B (en) * | 2021-04-22 | 2022-07-29 | 中国科学院地理科学与资源研究所 | Village and town development state measurement method based on remote sensing luminous dark target enhancement technology |
CN113378724A (en) * | 2021-06-15 | 2021-09-10 | 中南大学 | Multi-center city hot spot area rapid identification and dynamic monitoring method |
CN115713691B (en) * | 2022-11-21 | 2024-01-30 | 武汉大学 | Noctilucent remote sensing-based pixel-level power popularity rate estimation method and device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103955583A (en) * | 2014-05-12 | 2014-07-30 | 中国科学院城市环境研究所 | Method for determining threshold value of urban built-up area extracted through nighttime light data |
CN104318544A (en) * | 2014-09-25 | 2015-01-28 | 中国水产科学研究院东海水产研究所 | Method for estimating the number of light-induced trapping fishing boats based on satellite remote sensing data at night light |
CN106127121A (en) * | 2016-06-15 | 2016-11-16 | 四川省遥感信息测绘院 | A kind of built-up areas intellectuality extracting method based on nighttime light data |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9401977B1 (en) * | 2013-10-28 | 2016-07-26 | David Curtis Gaw | Remote sensing device, system, and method utilizing smartphone hardware components |
2018-03-23: CN application CN201810246441.8A filed; patent CN108399399B active.
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103955583A (en) * | 2014-05-12 | 2014-07-30 | 中国科学院城市环境研究所 | Method for determining threshold value of urban built-up area extracted through nighttime light data |
CN104318544A (en) * | 2014-09-25 | 2015-01-28 | 中国水产科学研究院东海水产研究所 | Method for estimating the number of light-induced trapping fishing boats based on satellite remote sensing data at night light |
CN106127121A (en) * | 2016-06-15 | 2016-11-16 | 四川省遥感信息测绘院 | A kind of built-up areas intellectuality extracting method based on nighttime light data |
Also Published As
Publication number | Publication date |
---|---|
CN108399399A (en) | 2018-08-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108399399B (en) | Urban range extraction method based on noctilucent remote sensing image | |
Verma et al. | Transfer learning approach to map urban slums using high and medium resolution satellite imagery | |
CN110135354B (en) | Change detection method based on live-action three-dimensional model | |
CN102184423B (en) | Full-automatic method for precisely extracting regional impervious surface remote sensing information | |
CN105787501A (en) | Vegetation classification method capable of automatically selecting features in power transmission line corridor area | |
Chen et al. | Agricultural remote sensing image cultivated land extraction technology based on deep learning | |
CN110765885A (en) | City expansion detection method and device based on heterogeneous luminous remote sensing image | |
Zhou | Application of artificial intelligence in geography | |
Zhang et al. | Road extraction from multi-source high-resolution remote sensing image using convolutional neural network | |
Stark | Using deep convolutional neural networks for the identification of informal settlements to improve a sustainable development in urban environments | |
Chao et al. | A spatio-temporal neural network learning system for city-scale carbon storage capacity estimating | |
Chen et al. | Remote sensing and urban green infrastructure: A synthesis of current applications and new advances | |
Wang et al. | Remote sensing image analysis and cyanobacterial bloom prediction method based on ACL3D-Pix2Pix | |
Shokri et al. | POINTNET++ Transfer Learning for Tree Extraction from Mobile LIDAR Point Clouds | |
Han et al. | A graph-based deep learning framework for field scale wheat yield estimation | |
Aslan et al. | Spatiotemporal land use change analysis and future urban growth simulation using remote sensing: A case study of antalya | |
CN114063063A (en) | Geological disaster monitoring method based on synthetic aperture radar and point-like sensor | |
Singh et al. | ENVINet5 deep learning change detection framework for the estimation of agriculture variations during 2012–2023 with Landsat series data | |
Jaroenchai et al. | Transfer learning with convolutional neural networks for hydrological streamline delineation | |
Munawar et al. | Application of Deep Learning on UAV-Based Aerial Images for Flood Detection. Smart Cities 2021, 4, 1220–1242 | |
Dwivedi et al. | Development of Population Distribution Map and Automated Human Settlement Map Using High Resolution Remote Sensing Images | |
Fu et al. | An efficient and accurate deep learning method for tree species classification that integrates depthwise separable convolution and dilated convolution using hyperspectral data | |
Cai et al. | Remote Sensing Image River Segmentation Method Based on U-Net | |
Song et al. | Urban landscape modeling and algorithms under machine learning and remote sensing data | |
Wu et al. | Seabird statistics in coastal wetlands based on aerial views from drones |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||