CN108399399A - A kind of city scope extracting method based on noctilucence remote sensing image - Google Patents
A city extent extraction method based on nighttime light remote sensing imagery
- Publication number
- CN108399399A CN108399399A CN201810246441.8A CN201810246441A CN108399399A CN 108399399 A CN108399399 A CN 108399399A CN 201810246441 A CN201810246441 A CN 201810246441A CN 108399399 A CN108399399 A CN 108399399A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/176—Urban or other man-made structures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
Abstract
A city extent extraction method based on nighttime light remote sensing imagery comprises: acquiring the DMSP/OLS nighttime light data of the study region, land use type data for part of the study region, and a MODIS water body distribution map; masking the nighttime light image with the MODIS water body distribution map and segmenting it; computing the pixel count, nighttime light mean, maximum, minimum, and variance within each segmentation object; selecting the optimal light segmentation threshold for a subset of the segmentation objects and training a BP neural network model on them; and computing the optimal threshold of every segmentation object with the trained BP neural network model, from which the final urban extent of the study region is obtained. The present invention extracts the urban extent from nighttime light imagery combined with a BP neural network, making the extracted urban extent more accurate. The method has a small computational load, low complexity, and high precision, and can provide timely and effective information for urban development and planning.
Description
Technical field
The invention belongs to the field of nighttime light remote sensing applied to urban studies, and proposes a completely new city extent extraction method based on nighttime light remote sensing imagery.
Background technology
China's urbanization began to accelerate in the 1980s. Urbanization is one of the main indicators for measuring a country's development. But this rapid urbanization has also brought many impacts on people's living environment and on the ecosystem. The relationship between urbanization and industrialization, whether urbanization is lagging, how fast urbanization should proceed in the next 20 years, and what scale of cities should be developed are the four main problems encountered in China's current urbanization process. To solve these problems, we must be able to obtain in time the exact extent of cities and the spatio-temporal trend of urban change.
The development of airborne and spaceborne remote sensing technology provides an efficient means of obtaining the urban extent. Compared with traditional ground surveying, remote sensing has a small workload, low cost, short cycle, and high efficiency, and can meet the demands of current urbanization research. Traditional remote sensing extracts the urban extent from relatively high-resolution multispectral imagery acquired within one year, through steps that mainly include geometric correction, atmospheric correction, mosaicking, clipping, and classification. Because multispectral image acquisition is strongly affected by weather, and the acquired scenes differ in imaging time, operations such as geometric correction and mosaicking are relatively difficult. Moreover, the high spatial resolution of the multispectral images commonly used makes the data volume large, which complicates data processing, storage, management, and distribution.
Nighttime light remote sensing began in the 1970s with the Operational Linescan System of the U.S. Defense Meteorological Satellite Program (Defense Meteorological Satellite Program/Operational Linescan System, DMSP/OLS). The system was originally designed to detect the faint light reflected by clouds at night in order to map nighttime cloud distribution[1], but researchers found that, under cloud-free conditions, the sensor can also capture visible light emitted from the Earth's surface, such as the glow of urban lighting facilities, fishing boat lights, and gas flares from oil and gas wells. Unlike conventional daytime remote sensing, nighttime light imagery is acquired at night, when artificial lighting is the only direct signal of human activity. Given this unique characteristic, nighttime light imagery largely reflects the spatio-temporal distribution of cities on the Earth's surface, providing a new data source for urban extent extraction. The nighttime light images available to date are the annual DMSP/OLS products and the monthly and annual products of the Visible Infrared Imaging Radiometer Suite (VIIRS) carried by the U.S. Suomi National Polar-orbiting Partnership (Suomi NPP) satellite. Both published nighttime light products are global composites that require no atmospheric correction or mosaicking, and because the data volume is small, extracting the urban extent from nighttime light imagery is a new and efficient means of studying urban development.
Many scholars have studied methods of extracting the urban extent from nighttime light imagery, mainly based on threshold segmentation. Threshold segmentation methods include the single threshold method, the subregion threshold method, and the object-based threshold method. The single threshold method was first proposed by Imhoff et al.[2]: one light brightness value is selected as the urban segmentation threshold over the whole study region, so that pixels brighter than the threshold are urban pixels and the rest are non-urban. This method is simple and easy to implement, but as the study region grows, uneven development within it causes larger differences in the nighttime light distribution, and a single threshold can no longer meet the requirements of large-area studies[3].
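The single threshold rule above can be sketched in a few lines. This is an illustrative sketch, not part of the patent; the toy DN values and the threshold of 30 are made up for demonstration (DMSP/OLS DN values range from 0 to 63).

```python
import numpy as np

def single_threshold_urban(dn: np.ndarray, threshold: int) -> np.ndarray:
    """Classify each pixel of a nighttime-light DN grid as urban (1) or
    non-urban (0) by comparing it against one global brightness threshold."""
    return (dn > threshold).astype(np.uint8)

# Toy 3x3 scene with DMSP/OLS-style DN values.
scene = np.array([[ 0,  5, 12],
                  [30, 55, 63],
                  [ 8, 40,  2]])
urban = single_threshold_urban(scene, threshold=30)
```

The method's weakness is visible even here: one threshold decides every pixel, regardless of how development varies across the scene.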
To address the defects of the single threshold method, the authors of [4] first proposed the subregion threshold method: the study region is divided into several smaller study regions according to economy, population, geographical location, and so on; a threshold is then selected independently for each subregion, and the urban extent of the whole study region is finally obtained from the subregion thresholds. Although this method is more reasonable than the single threshold method, the resulting urban extent changes with the partitioning rules, and even within a city the light distribution can still differ greatly.
To make up for the deficiencies of the single threshold and subregion threshold methods, documents [5-6] proposed the object-based threshold method: the nighttime light image is first segmented according to light brightness, and the optimal threshold is then computed within each object. This object-based approach can solve the urban extent accuracy problems caused by spatial and brightness differences in the light distribution. Document [5] also showed that the choice of the light threshold is related not only to the light brightness values but also to factors such as the size of the segmentation object and its mean value, following a certain functional relationship. However, the function model differs with the data period, its parameters must be designed for each period from scatter plots and statistics, and it is difficult to model the relationship between the threshold and the light data accurately.
An artificial neural network (ANN) is a computational method developed from the study of biological neural networks. It completes specific tasks such as classification by learning from training samples, and its performance improves gradually during training[7]. An artificial neural network is a mathematical model that attempts to simulate the structure and function of a biological neural network. Its basic component is the artificial neuron, a simple mathematical function. An artificial neuron involves three operations: multiplication, summation, and activation. At the input of the artificial neuron, each input value is multiplied by an individual weight; in the middle of the neuron, the weighted inputs and a bias are summed; at the output, the sum of the weighted inputs and the bias is passed through an activation function.
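The three operations (multiplication, summation, activation) amount to only a few lines of code. This is a minimal sketch, not from the patent; the tanh activation stands in for the "tansig" function mentioned later, and the example weights are arbitrary.

```python
import math

def neuron(inputs, weights, bias, activation=math.tanh):
    # Multiplication: each input times its own weight.
    # Summation: add the weighted inputs and the bias.
    # Activation: pass the sum through the transfer function.
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return activation(s)

out = neuron([1.0, 2.0], weights=[0.5, -0.25], bias=0.0)
```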
To fully exploit the mathematical power of interconnected artificial neurons while avoiding a rise in system complexity that would make it unmanageable, artificial neurons are usually connected according to certain rules and paradigms. A number of "standardized" artificial neural network topologies have been proposed, and these predefined forms help users solve problems more simply, quickly, and effectively[8]. Different types of topology suit different types of problem. Once the problem type is determined, the network topology to be used is chosen, and to obtain the optimum performance of the neural network, the topology and its parameters must be fine-tuned. Building the topology is only half of the task: just as a biological neural network must learn how to produce appropriate outputs for given inputs from its environment, an artificial neural network must learn, through supervised, unsupervised, or reinforcement learning, how to respond to inputs, finally yielding an optimal network model based on the sample data.
In recent years, many artificial neural network models have been proposed and studied in depth. About 80%-90% of them use the feedforward back-propagation network (Back-Propagation Network, BP network) or an improved form of it[9]. A BP neural network has one input layer, one output layer, and one or more hidden layers; neurons within a layer are not connected, while neurons in adjacent layers are connected in the forward direction. By choosing an appropriate network structure for the complexity of the problem, a mapping of any nonlinear function from the input space to the output space can be realized. BP neural networks are mainly used in fields such as function approximation, system identification and prediction, classification, and data compression[10].
Summarizing previous research, there is as yet no mature method for extracting the urban extent from nighttime light imagery, and most methods require a manually set threshold, which introduces large human-factor errors into the results. Meanwhile, the object-based urban extent extraction methods show that a certain functional relationship exists between the object threshold and the image parameters. This patent therefore uses a BP neural network model, trained with the maximum, minimum, mean, variance, and pixel count of each segmentation object together with its optimal threshold as sample data, so that the trained BP neural network simulates the functional relationship between the input parameters and the output optimal threshold. The method sets thresholds and extracts the urban extent automatically, freeing nighttime-light-based urban extent extraction from the interference of human factors and producing more accurate and reliable results.
Pertinent literature is as follows:
[1] Croft T A. Nighttime Images of the Earth from Space [J]. Scientific American, 1978, 239(1): 86-98.
[2] Imhoff M L, Lawrence W T, Stutzer D C, et al. A technique for using composite DMSP/OLS "City Lights" satellite data to map urban area [J]. Remote Sensing of Environment, 1997, 61(3): 361-370.
[3] Zhang Q, Seto K C. Can Night-Time Light Data Identify Typologies of Urbanization? A Global Assessment of Successes and Failures [J]. Remote Sensing, 2013, 5(5): 3476-3494.
[4] Liu Z, He C, Zhang Q, et al. Extracting the dynamics of urban expansion in China using DMSP-OLS nighttime light data from 1992 to 2008 [J]. Landscape & Urban Planning, 2012, 106: 62-72.
[5] Zhou Y, Smith S J, Elvidge C D, et al. A cluster-based method to map urban area from DMSP/OLS nightlights [J]. Remote Sensing of Environment, 2014, 147: 173-185.
[6] Xie Y, Weng Q. Updating urban extents with nighttime light imagery by using an object-based thresholding method [J]. Remote Sensing of Environment, 2016, 187: 1-13.
[7] Whitley D, Starkweather T, Bogart C. Genetic algorithms and neural networks: optimizing connections and connectivity [J]. Parallel Computing, 1990, 14(3): 347-361.
[8] Li Shuancheng, Zheng Du. Advances in the application of artificial neural network models in geoscience research [J]. Advances in Earth Science, 2003, 18(1): 68-76. (in Chinese)
[9] Yang Zhaosheng, et al. A real-time prediction model of route travel time based on a BP neural network [J]. Systems Engineering: Theory & Practice, 1999, 19(8): 59-64. (in Chinese)
[10] Li Ping, Zeng Lingke, et al. Design of a BP neural network prediction system based on MATLAB [J]. Computer Applications and Software, 2008(04): 149-150+184. (in Chinese)
Invention content
Summarizing previous research, and addressing the deficiencies of current nighttime-light-based urban extent extraction, the present invention proposes a new city extent extraction method based on nighttime light remote sensing imagery.
In order to solve the above technical problem, the present invention adopts the following technical scheme:
A city extent extraction method based on nighttime light remote sensing imagery comprises the following steps:
Step 1, acquire the DMSP/OLS nighttime light data of the study region, land use type data for part of the study region, and a MODIS water body distribution map;
Step 2, build a binary image from the MODIS water body distribution map and mask the DMSP/OLS nighttime light image with it;
Step 3, segment the masked DMSP/OLS nighttime light image obtained in step 2;
Step 4, compute the pixel count, nighttime light mean, maximum, minimum, and variance within each segmentation object obtained in step 3;
Step 5, select the segmentation objects that have corresponding land use type data, and compute the optimal light segmentation threshold of each such object from the number of urban pixels in the land use type data;
Step 6, train a BP neural network model with the pixel count, mean, maximum, minimum, and variance of each object from step 5 as input data and the corresponding optimal light segmentation threshold as the output value;
Step 7, compute the optimal threshold of every segmentation object with the BP neural network model trained in step 6, and use these thresholds to obtain the final urban extent of the study region.
Moreover, in step 5, the following sub-steps are executed for each segmentation object:
Step 5.1, take the minimum pixel value of the object in the nighttime light image as the initial threshold Threshold_urban; light pixels above the threshold are urban pixels, the rest are non-urban;
Step 5.2, count separately the urban pixels Urban_DMSP obtained with the threshold in the object and the urban pixels Urban_NLCD in the land use type data;
Step 5.3, compare the number of urban pixels extracted with the threshold against the known number of urban pixels in the land use type data, computing the absolute error of the two: T = |Urban_DMSP - Urban_NLCD|;
Step 5.4, take the pixel values of the object, from smallest to largest, in turn as the new initial threshold Threshold_urban, and repeat steps 5.1-5.3 until every light brightness value in the object has been tried;
Step 5.5, select the light brightness value corresponding to the minimum T as the optimal light threshold Threshold_urban.
Moreover, the BP neural network is set up with an input layer, 2 hidden layers, and 1 output layer, where the input layer has 5 neurons, each of the 2 hidden layers contains 10 neurons, and the output layer has 1 neuron.
Moreover, in step 3, the masked DMSP/OLS image is segmented with the marker-controlled watershed segmentation algorithm.
Compared with the prior art, the present invention has the following advantages and beneficial effects:
1. By combining an object-based method with a neural network, the present invention makes the computed result more reasonable and accurate.
2. By replacing a specific function model with a neural network, the present invention avoids deviations in the result caused by inaccuracies of a function model, and its data processing and computation are simpler.
Description of the drawings
Fig. 1 is the extraction flow diagram of the present invention.
Fig. 2 is the structural diagram and input/output variables of the BP neural network model in the embodiment of the present invention.
Specific implementation mode
The technical solution of the present invention is illustrated below with reference to the drawings and an embodiment.
Referring to Fig. 1, the embodiment of the present invention includes the following steps:
Step 1, acquire the DMSP/OLS nighttime light image of the study region, land use type data for part of the study region, and a MODIS water body distribution map;
Data from the same period are preferred. The concrete operations of this step are as follows:
Download the DMSP/OLS nighttime light image of the study region, the land use type data of part of the study region for the same period (only the data of a subregion are needed, for training the network), and the MODIS water body distribution map.
Step 2, build a binary image from the MODIS water body distribution map and mask the DMSP/OLS nighttime light image;
The concrete operations of this step are as follows:
This step accounts for the fact that light reflected by water bodies can inflate the surface light signal; that influence is eliminated by masking. The calculation formula is as follows:
DMSP(i, j) = DMSP(i, j) × (1 - LC_Water(i, j)). ……(3)
In formula (3), DMSP is the nighttime light image (the present invention uses the DMSP/OLS nighttime light image) and LC_Water is the water body binary image; i, j are the row and column numbers of the image, so that DMSP(i, j) is the value of pixel (i, j) in the nighttime light image and LC_Water(i, j) is the value of pixel (i, j) in the water body binary image (1 for water, 0 otherwise).
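The masking step can be sketched as follows. This is an illustrative sketch with toy arrays, assuming LC_Water is 1 on water and 0 on land, following formula (3).

```python
import numpy as np

def mask_water(dmsp: np.ndarray, lc_water: np.ndarray) -> np.ndarray:
    """Zero out nighttime-light pixels that fall on water:
    DMSP(i,j) * (1 - LC_Water(i,j))."""
    return dmsp * (1 - lc_water)

dmsp  = np.array([[10, 20], [30, 40]])
water = np.array([[ 0,  1], [ 0,  0]])   # pixel (0,1) is water
masked = mask_water(dmsp, water)
```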
Step 3, segment the masked DMSP/OLS nighttime light image from step 2 with the marker-controlled watershed segmentation algorithm;
The concrete operations of this step are as follows:
The masked DMSP/OLS image is segmented with the marker-controlled watershed segmentation algorithm, with a 3 × 3 split window. For the marker-controlled watershed segmentation algorithm, see Parvati K, Rao B S P, Das M M. Image Segmentation Using Gray-Scale Morphology and Marker-Controlled Watershed Transformation [J]. Discrete Dynamics in Nature & Society, 2008: 307-318.
Step 4, compute the pixel count, nighttime light mean, maximum, minimum, and variance within each segmentation object from step 3;
The concrete operations of this step are as follows:
After segmentation, the pixel count, light mean, maximum, minimum, and variance of each object are computed as follows:
Obj_size = n. ……(4)
Obj_mean = (1/n) × Σ_{i=1..n} DMSP(i). ……(5)
Obj_max = MAX(DMSP(i)), i = 1, 2, ..., n. ……(6)
Obj_min = MIN(DMSP(i)), i = 1, 2, ..., n. ……(7)
Obj_var = (1/n) × Σ_{i=1..n} (DMSP(i) - Obj_mean)². ……(8)
In these formulas, Obj_size, Obj_mean, Obj_max, Obj_min, and Obj_var are, respectively, the pixel count of the object and the mean, maximum, minimum, and variance of the pixels within it; n is the number of pixels in the object, and DMSP(i) is the value of the i-th pixel of the object in the DMSP/OLS nighttime light image masked in step 2.
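The per-object statistics can be sketched as follows, assuming a label image from the segmentation of step 3; the toy arrays and object IDs are illustrative.

```python
import numpy as np

def object_stats(dmsp: np.ndarray, labels: np.ndarray, obj_id: int):
    """Return (Obj_size, Obj_mean, Obj_max, Obj_min, Obj_var) for the
    masked nighttime-light values inside one segmentation object."""
    vals = dmsp[labels == obj_id]
    return (vals.size, vals.mean(), vals.max(), vals.min(), vals.var())

dmsp   = np.array([[10, 20], [30, 40]])
labels = np.array([[ 1,  1], [ 2,  2]])   # two objects from the segmentation
size, mean, vmax, vmin, var = object_stats(dmsp, labels, obj_id=1)
```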
Step 5, select the segmentation objects that have corresponding land use type data, and compute the optimal light threshold from the number of urban pixels of these objects in the land use type data;
The above step 5 further comprises executing the following sub-steps for each object:
Step 5.1, take the minimum pixel value of the object in the nighttime light image as the initial threshold Threshold_urban; light pixels above the threshold are urban pixels, the rest are non-urban;
Step 5.2, count separately the urban pixels Urban_DMSP obtained with the threshold in the object and the urban pixels Urban_NLCD in the land use type data;
Step 5.3, compare the number of urban pixels extracted with the threshold against the known number of urban pixels in the land use type data, computing the absolute error of the two: T = |Urban_DMSP - Urban_NLCD|;
Step 5.4, take the pixel values of the object, from smallest to largest, in turn as the new initial threshold Threshold_urban, and repeat steps 5.1, 5.2, and 5.3 until every light brightness value in the object has been tried;
Step 5.5, select the light brightness value corresponding to the minimum T as the optimal light threshold Threshold_urban.
In the embodiment, the concrete operations of this step are as follows:
A subset of the objects is selected at random and the optimal threshold of each object is computed; the threshold at which |LC_sum - Obj_sum| reaches its minimum is the optimal threshold. The calculation formulas are as follows:
DMSP_Urban(i) = 1 if DMSP(i) > Obj_threshold(j), otherwise 0.
Obj_sum = Σ_{i=1..n} DMSP_Urban(i).
LC_sum = Σ_{i=1..n} LC_Urban(i).
Obj_threshold(j) ∈ [min(DMSP), max(DMSP)]. ……(9)
Here DMSP is the currently processed segmentation object, min(DMSP) is the minimum pixel value in the object, max(DMSP) is the maximum pixel value in the object, and Obj_threshold(j) is the j-th candidate initial threshold Threshold_urban of the current object. DMSP_Urban(i) is the result of thresholding the value DMSP(i) of the i-th pixel of the current object, where 1 represents an urban pixel and 0 a non-urban pixel; LC_Urban(i) is the land use type value of the i-th pixel of the current object, where 1 represents an urban pixel and 0 a non-urban pixel; LC_sum is the number of urban pixels in the land use type data; Obj_sum is the number of urban pixels in the DMSP/OLS nighttime light object under a given threshold; and n is the total number of pixels in the object. All Obj_threshold values are traversed from smallest to largest, and Threshold_urban = Obj_threshold(j) when |LC_sum - Obj_sum| is at its minimum.
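The exhaustive search of steps 5.1-5.4 can be sketched as follows. This is an illustrative helper, assuming the object's DN values and the co-registered land-use urban mask (1 = urban) are given as flat arrays; the toy data are made up.

```python
import numpy as np

def optimal_threshold(obj_dn: np.ndarray, lc_urban: np.ndarray) -> int:
    """Try every DN value in the object (ascending) as the urban cut and
    keep the one whose urban-pixel count best matches the land-use
    reference, i.e. minimises T = |Urban_DMSP - Urban_NLCD|."""
    lc_sum = int(lc_urban.sum())            # Urban_NLCD: reference count
    best_t, best_err = None, None
    for t in np.sort(np.unique(obj_dn)):    # candidate Threshold_urban
        obj_sum = int((obj_dn > t).sum())   # Urban_DMSP under this cut
        err = abs(obj_sum - lc_sum)
        if best_err is None or err < best_err:
            best_t, best_err = int(t), err
    return best_t

t = optimal_threshold(np.array([5, 10, 20, 40]), np.array([1, 1, 0, 0]))
```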
Step 6, using the pixel count, nighttime light mean, maximum, minimum, and variance of each segmentation object from step 5 as input data and the corresponding optimal light segmentation threshold as the output value, train the constructed BP neural network model with these samples;
The above step 6 further comprises the following sub-steps:
Step 6.1, first design the number of layers of the BP neural network and the activation function of each layer. In this patent, the BP network consists of an input layer, two hidden layers, and an output layer; the input layer has five neurons, each of the two hidden layers contains ten neurons, and the output layer has one neuron. The activation functions of the hidden layers and the output layer are the "tansig" and "purelin" functions, respectively.
Step 6.2, select a subset of the objects and train the model with the maximum, minimum, mean, variance, and size of these objects as inputs and the optimal thresholds of this selected subset as output values.
Step 6.3, process all objects with the BP neural network model trained on the selected subset to compute the optimal threshold of each object.
In the embodiment, the concrete operations of this step are as follows:
As shown in Fig. 2, a four-layer BP neural network initial model is first constructed, in which the first layer is the input layer, the second and third layers are hidden layers with 10 nodes each, and the fourth layer is the output layer.
A randomly selected subset of objects is used to train the BP neural network model, each sample consisting of the input data (Obj_size, Obj_mean, Obj_max, Obj_min, Obj_var) and the output data (Obj_threshold).
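A from-scratch sketch of the 5-10-10-1 network follows, with tanh hidden layers as a stand-in for "tansig" and a linear output for "purelin", trained by plain gradient descent back-propagation. The learning rate, epoch count, and synthetic training data are illustrative assumptions, not from the patent; in practice real (Obj_size, Obj_mean, Obj_max, Obj_min, Obj_var) samples, normalised, would be used.

```python
import numpy as np

rng = np.random.default_rng(0)

# 5 inputs -> 10 -> 10 -> 1, matching the patent's layer sizes.
W1, b1 = rng.normal(0, 0.5, (5, 10)), np.zeros(10)
W2, b2 = rng.normal(0, 0.5, (10, 10)), np.zeros(10)
W3, b3 = rng.normal(0, 0.5, (10, 1)), np.zeros(1)

def forward(X):
    h1 = np.tanh(X @ W1 + b1)      # hidden layer 1 ("tansig")
    h2 = np.tanh(h1 @ W2 + b2)     # hidden layer 2 ("tansig")
    return h1, h2, h2 @ W3 + b3    # linear output ("purelin")

def train(X, y, lr=0.01, epochs=2000):
    """Full-batch back-propagation on the squared-error loss."""
    global W1, b1, W2, b2, W3, b3
    for _ in range(epochs):
        h1, h2, out = forward(X)
        d3 = (out - y) / len(X)          # error signal at the output
        d2 = (d3 @ W3.T) * (1 - h2**2)   # back through tanh layer 2
        d1 = (d2 @ W2.T) * (1 - h1**2)   # back through tanh layer 1
        W3 -= lr * h2.T @ d3; b3 -= lr * d3.sum(0)
        W2 -= lr * h1.T @ d2; b2 -= lr * d2.sum(0)
        W1 -= lr * X.T @ d1;  b1 -= lr * d1.sum(0)

# Synthetic, normalised stand-ins for the five object features and the
# optimal-threshold target.
X = rng.uniform(-1, 1, (30, 5))
y = X[:, [1]] * 0.5
mse_before = float(np.mean((forward(X)[2] - y) ** 2))
train(X, y)
mse_after = float(np.mean((forward(X)[2] - y) ** 2))
```

In the embodiment the patent would obtain the same effect with a standard BP toolbox; the sketch only makes the forward and backward passes explicit.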
Step 7, compute the optimal threshold of every segmentation object with the BP neural network model trained in step 6, and use the thresholds to obtain the final urban extent of the study region.
The concrete operations of this step are as follows:
7.1, process all segmentation objects with the trained network model to obtain the optimal threshold of each object.
7.2, compute the urban extent from the optimal thresholds output by the BP neural network model: each object is extracted with its own optimal threshold, light pixels above the threshold being urban pixels and the rest non-urban, and the extraction results of all objects are combined to obtain the urban extent.
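Step 7.2 can be sketched as follows, assuming the label image from step 3 and a dict of per-object thresholds as predicted by the trained network; the toy arrays and threshold values are illustrative.

```python
import numpy as np

def extract_urban_extent(dmsp, labels, thresholds):
    """Apply each object's own optimal threshold, then merge the
    per-object results into one urban/non-urban map (1 = urban)."""
    urban = np.zeros_like(dmsp, dtype=np.uint8)
    for obj_id, t in thresholds.items():
        m = labels == obj_id
        urban[m] = (dmsp[m] > t).astype(np.uint8)
    return urban

dmsp   = np.array([[10, 50], [20, 60]])
labels = np.array([[ 1,  1], [ 2,  2]])      # two segmentation objects
urban  = extract_urban_extent(dmsp, labels, {1: 30, 2: 15})
```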
The technical solution and beneficial effects of the present invention are further illustrated below with a concrete application example.
The 2005 DMSP/OLS nighttime light image of a region, the national urban land use type data, and the MODIS water body distribution map were selected. The DMSP/OLS nighttime light image was first masked with the MODIS water body image, and the masked image was then segmented with the image segmentation algorithm. A randomly selected subset was used to train the BP neural network model, and the trained model was finally applied to the entire image to obtain the 2005 urban extent. A regression analysis between the experimental result and the true land use data gives R² = 0.85 and RMSE = 421.8.
When implemented, the flow provided by the present invention can be run automatically using software.
The above further describes the present invention with reference to specific embodiments, but the specific implementation of the invention is not confined to these descriptions. For those of ordinary skill in the art to which the present invention belongs, a number of simple deductions or substitutions may be made without departing from the inventive concept, and all of these shall be regarded as falling within the protection scope of the present invention.
Claims (5)
1. A city extent extraction method based on nighttime light remote sensing imagery, characterized by comprising the following steps:
Step 1, acquire the DMSP/OLS nighttime light data of the study region, land use type data for part of the study region, and a MODIS water body distribution map;
Step 2, build a binary image from the MODIS water body distribution map and mask the DMSP/OLS nighttime light image with it;
Step 3, segment the masked DMSP/OLS nighttime light image obtained in step 2;
Step 4, compute the pixel count, nighttime light mean, maximum, minimum, and variance within each segmentation object obtained in step 3;
Step 5, select the segmentation objects that have corresponding land use type data, and compute the optimal light segmentation threshold of each such object from the number of urban pixels in the land use type data;
Step 6, train a BP neural network model with the pixel count, mean, maximum, minimum, and variance of each object from step 5 as input data and the corresponding optimal light segmentation threshold as the output value;
Step 7, compute the optimal threshold of every segmentation object with the BP neural network model trained in step 6, and use the thresholds to obtain the final urban extent of the study region.
2. The city extent extraction method based on nighttime light remote sensing imagery of claim 1, characterized in that in step 5 the following sub-steps are executed for each segmentation object:
Step 5.1, take the minimum pixel value of the object in the nighttime light image as the initial threshold Threshold_urban; light pixels above the threshold are urban pixels, the rest are non-urban;
Step 5.2, count separately the urban pixels Urban_DMSP obtained with the threshold in the object and the urban pixels Urban_NLCD in the land use type data;
Step 5.3, compare the number of urban pixels extracted with the threshold against the known number of urban pixels in the land use type data, computing the absolute error of the two: T = |Urban_DMSP - Urban_NLCD|;
Step 5.4, take the pixel values of the object, from smallest to largest, in turn as the new initial threshold Threshold_urban, and repeat steps 5.1-5.3 until every light brightness value in the object has been tried;
Step 5.5, select the light brightness value corresponding to the minimum T as the optimal light threshold Threshold_urban.
3. The urban extent extraction method based on noctilucent remote sensing imagery according to claim 1 or 2, wherein in Step 5 the BP neural network is configured with one input layer, 2 hidden layers and 1 output layer; the input layer has 5 neurons, each of the 2 hidden layers contains 10 neurons, and the output layer has 1 neuron.
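The 5-10-10-1 architecture of claim 3 can be sketched as a small NumPy network trained by backpropagation. The claim only fixes the layer sizes; the weight initialization, sigmoid hidden activations, linear output and learning rate below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from claim 3: 5 input neurons -> 10 -> 10 -> 1 output neuron.
SIZES = [5, 10, 10, 1]
W = [rng.normal(0, 0.1, (m, n)) for n, m in zip(SIZES[:-1], SIZES[1:])]
b = [np.zeros((m, 1)) for m in SIZES[1:]]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """x: (5, batch). Sigmoid hidden layers, linear output (a threshold value)."""
    a, acts = x, [x]
    for i in range(len(W)):
        z = W[i] @ a + b[i]
        a = z if i == len(W) - 1 else sigmoid(z)   # last layer stays linear
        acts.append(a)
    return acts

def backprop_step(x, y, lr=0.05):
    """One gradient-descent update on squared error (the 'BP' in BP network)."""
    acts = forward(x)
    delta = acts[-1] - y                            # dL/dz at the linear output
    for i in range(len(W) - 1, -1, -1):
        gW = delta @ acts[i].T / x.shape[1]
        gb = delta.mean(axis=1, keepdims=True)
        if i > 0:                                   # propagate through sigmoid'
            delta = (W[i].T @ delta) * acts[i] * (1 - acts[i])
        W[i] -= lr * gW
        b[i] -= lr * gb
```

In practice the five input features have very different scales (pixel counts vs. DN statistics), so they would be normalized before training.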
4. The urban extent extraction method based on noctilucent remote sensing imagery according to claim 1 or 2, wherein in Step 3 the DMSP/OLS image after masking is segmented using a marker-controlled watershed segmentation algorithm.
5. The urban extent extraction method based on noctilucent remote sensing imagery according to claim 3, wherein in Step 3 the DMSP/OLS image after masking is segmented using a marker-controlled watershed segmentation algorithm.
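Claims 4 and 5 invoke marker-controlled watershed segmentation. A compact priority-flood sketch in pure NumPy plus `heapq` (marker placement and image polarity are assumptions; production code would typically use a library implementation such as scikit-image's watershed):

```python
import heapq
import numpy as np

def marker_watershed(image, markers):
    """Marker-controlled watershed as a priority flood: pixels are flooded
    outward from the labelled marker seeds in order of increasing image
    value, so each unlabelled pixel joins the basin that reaches it first.
    image: 2-D float array; markers: 2-D int array with 0 = unlabelled."""
    labels = markers.copy()
    rows, cols = image.shape
    heap = []
    # Seed the priority queue with every marker pixel.
    for r in range(rows):
        for c in range(cols):
            if markers[r, c] != 0:
                heapq.heappush(heap, (image[r, c], r, c))
    while heap:
        _, r, c = heapq.heappop(heap)
        lab = labels[r, c]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and labels[nr, nc] == 0:
                labels[nr, nc] = lab        # claimed by the first basin to arrive
                heapq.heappush(heap, (image[nr, nc], nr, nc))
    return labels
```

For bright-city imagery the flood would typically run on the negated DN values so that city cores act as basin minima; that choice, like the marker selection, is not specified in the claim text.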
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810246441.8A CN108399399B (en) | 2018-03-23 | 2018-03-23 | Urban range extraction method based on noctilucent remote sensing image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108399399A true CN108399399A (en) | 2018-08-14 |
CN108399399B CN108399399B (en) | 2021-09-03 |
Family
ID=63091570
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810246441.8A Active CN108399399B (en) | 2018-03-23 | 2018-03-23 | Urban range extraction method based on noctilucent remote sensing image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108399399B (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109509154A (en) * | 2018-10-23 | 2019-03-22 | 东华理工大学 | A kind of stable noctilucence remote sensing image desaturation bearing calibration of DMSP/OLS |
CN109670556A (en) * | 2018-12-27 | 2019-04-23 | 中国科学院遥感与数字地球研究所 | Global heat source heavy industry region recognizer based on fire point and noctilucence data |
CN110765885A (en) * | 2019-09-29 | 2020-02-07 | 武汉大学 | City expansion detection method and device based on heterogeneous luminous remote sensing image |
CN111144340A (en) * | 2019-12-30 | 2020-05-12 | 中山大学 | Method and system for automatically monitoring human activities in natural reserve area based on night light and high-resolution remote sensing image |
CN111192298A (en) * | 2019-12-27 | 2020-05-22 | 武汉大学 | Relative radiation correction method for luminous remote sensing image |
CN111862104A (en) * | 2019-04-26 | 2020-10-30 | 利亚德照明股份有限公司 | Video cutting method and system based on large-scale urban night scene |
CN112488820A (en) * | 2020-11-19 | 2021-03-12 | 建信金融科技有限责任公司 | Model training method and default prediction method based on noctilucent remote sensing data |
CN112561942A (en) * | 2020-12-16 | 2021-03-26 | 中国科学院地理科学与资源研究所 | Automatic extraction method of rural area ternary structure based on DMSP night light image |
CN112927354A (en) * | 2021-02-25 | 2021-06-08 | 电子科技大学 | Three-dimensional reconstruction method, system, storage medium and terminal based on example segmentation |
CN113158899A (en) * | 2021-04-22 | 2021-07-23 | 中国科学院地理科学与资源研究所 | Village and town development state measurement method based on remote sensing luminous dark target enhancement technology |
CN113378724A (en) * | 2021-06-15 | 2021-09-10 | 中南大学 | Multi-center city hot spot area rapid identification and dynamic monitoring method |
CN115713691A (en) * | 2022-11-21 | 2023-02-24 | 武汉大学 | Pixel-level electric power popularity estimation method and device based on noctilucent remote sensing |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103955583A (en) * | 2014-05-12 | 2014-07-30 | 中国科学院城市环境研究所 | Method for determining threshold value of urban built-up area extracted through nighttime light data |
CN104318544A (en) * | 2014-09-25 | 2015-01-28 | 中国水产科学研究院东海水产研究所 | Method for estimating the number of light-induced trapping fishing boats based on satellite remote sensing data at night light |
US20160323431A1 (en) * | 2013-10-28 | 2016-11-03 | David Curtis Gaw | Remote sensing device, system and method utilizing smartphone hardware components |
CN106127121A (en) * | 2016-06-15 | 2016-11-16 | 四川省遥感信息测绘院 | A kind of built-up areas intellectuality extracting method based on nighttime light data |
Also Published As
Publication number | Publication date |
---|---|
CN108399399B (en) | 2021-09-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108399399A (en) | A kind of city scope extracting method based on noctilucence remote sensing image | |
WO2021258758A1 (en) | Coastline change identification method based on multiple factors | |
Quan et al. | GIS-based landslide susceptibility mapping using analytic hierarchy process and artificial neural network in Jeju (Korea) | |
CN102314546B (en) | Method for estimating plant growth biomass liveweight variation based on virtual plants | |
Xue et al. | Mapping the fine-scale spatial pattern of artificial light pollution at night in urban environments from the perspective of bird habitats | |
CN111307643A (en) | Soil moisture prediction method based on machine learning algorithm | |
CN111028255A (en) | Farmland area pre-screening method and device based on prior information and deep learning | |
CN113762090B (en) | Disaster monitoring and early warning method for ultra-high voltage dense transmission channel | |
CN108804394A (en) | A kind of construction method of city noctilucence total amount-urban population regression model | |
CN116595121B (en) | Data display monitoring system based on remote sensing technology | |
CN114357563A (en) | Layout generation method and application of south-of-the-river private garden landscape | |
Zhang et al. | Analysis on spatial structure of landuse change based on remote sensing and geographical information system | |
CN117390552A (en) | Intelligent irrigation system and method based on digital twin | |
CN100580692C (en) | Method for detecting change of water body and settlement place based on aviation video | |
CN110765885B (en) | City expansion detection method and device based on heterogeneous luminous remote sensing image | |
CN112967286B (en) | Method and device for detecting newly added construction land | |
Pandi et al. | Assessment of Land Use and Land Cover Dynamics Using Geospatial Techniques | |
Sharafi et al. | Evaluation of AquaCrop and intelligent models in predicting yield and biomass values of wheat | |
Li et al. | Design of the 3D Digital Reconstruction System of an Urban Landscape Spatial Pattern Based on the Internet of Things | |
Aslan et al. | Spatiotemporal land use change analysis and future urban growth simulation using remote sensing: A case study of antalya | |
Jaroenchai et al. | Transfer learning with convolutional neural networks for hydrological streamline delineation | |
Agarwal et al. | A Neural Network based Concept to Improve Downscaling Accuracy of Coarse Resolution Satellite Imagery for Parameter Extraction | |
Phonphan et al. | Evaluating Spatiotemporal Dynamics: A Comparative Study of Predictive Efficacy in Land Use Land Cover Change Models-Markov Chain, CA-ANN, and PLUS. | |
Zhou et al. | Research on segmentation algorithm of UAV remote sensing image based on deep learning | |
CN112883251B (en) | Agricultural auxiliary system based on multi-satellite combination |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||