CN118506201A - Remote sensing image classification method and system based on improved MobileNet v2 - Google Patents
Remote sensing image classification method and system based on improved MobileNet v2
- Publication number
- CN118506201A (application number CN202410646435.7A)
- Authority
- CN
- China
- Prior art keywords
- remote sensing
- sensing image
- image classification
- classification
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 34
- 238000007781 pre-processing Methods 0.000 claims abstract description 11
- 230000004044 response Effects 0.000 claims description 92
- 230000004913 activation Effects 0.000 claims description 30
- 238000012549 training Methods 0.000 claims description 23
- 230000010339 dilation Effects 0.000 claims description 17
- 230000003044 adaptive effect Effects 0.000 claims description 16
- 238000011176 pooling Methods 0.000 claims description 16
- 238000004364 calculation method Methods 0.000 claims description 13
- 230000004927 fusion Effects 0.000 claims description 13
- 238000013528 artificial neural network Methods 0.000 claims description 12
- 238000012360 testing method Methods 0.000 claims description 9
- 238000012790 confirmation Methods 0.000 claims description 8
- 238000001914 filtration Methods 0.000 claims description 2
- 230000007246 mechanism Effects 0.000 abstract description 10
- 230000006870 function Effects 0.000 description 22
- 230000009286 beneficial effect Effects 0.000 description 11
- 239000000284 extract Substances 0.000 description 6
- 238000004891 communication Methods 0.000 description 4
- 238000010586 diagram Methods 0.000 description 4
- 238000011156 evaluation Methods 0.000 description 4
- 230000000007 visual effect Effects 0.000 description 4
- 238000012986 modification Methods 0.000 description 3
- 230000004048 modification Effects 0.000 description 3
- 230000008569 process Effects 0.000 description 3
- 238000010276 construction Methods 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 238000013507 mapping Methods 0.000 description 2
- 238000005065 mining Methods 0.000 description 2
- 238000012544 monitoring process Methods 0.000 description 2
- 238000012216 screening Methods 0.000 description 2
- 238000012800 visualization Methods 0.000 description 2
- 238000013459 approach Methods 0.000 description 1
- 238000013145 classification model Methods 0.000 description 1
- 238000013527 convolutional neural network Methods 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000007613 environmental effect Effects 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 230000004720 fertilization Effects 0.000 description 1
- 230000036541 health Effects 0.000 description 1
- 238000003384 imaging method Methods 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
- 230000002262 irrigation Effects 0.000 description 1
- 238000003973 irrigation Methods 0.000 description 1
- 238000010801 machine learning Methods 0.000 description 1
- 238000007726 management method Methods 0.000 description 1
- 238000012067 mathematical method Methods 0.000 description 1
- 210000002569 neuron Anatomy 0.000 description 1
- 230000003121 nonmonotonic effect Effects 0.000 description 1
- 238000010606 normalization Methods 0.000 description 1
- 238000005457 optimization Methods 0.000 description 1
- 230000002265 prevention Effects 0.000 description 1
- 238000012545 processing Methods 0.000 description 1
- 230000000717 retained effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/13—Satellite images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/42—Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- General Engineering & Computer Science (AREA)
- Molecular Biology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Data Mining & Analysis (AREA)
- Computational Linguistics (AREA)
- Astronomy & Astrophysics (AREA)
- Remote Sensing (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The present invention provides a remote sensing image classification method and system based on an improved MobileNet v2. The method comprises: step 1: obtaining a remote sensing image classification data set; step 2: preprocessing the remote sensing image classification data set; step 3: obtaining a first output feature from the processed data; step 4: performing multi-scale mixed convolution on the first output feature to obtain a second output feature; step 5: cross-fusing the second output feature to obtain a third output feature; step 6: determining useful information according to the third output feature; step 7: performing classification prediction according to the useful information; step 8: iterating according to the classification prediction results, determining the target network model, and determining the classification result based on the remote sensing image classification task. The method and system cross-fuse feature information of different scales for higher classification accuracy and introduce an attention mechanism for higher recognition efficiency.
Description
Technical Field
The present invention relates to the field of deep learning, and in particular to a remote sensing image classification method and system based on an improved MobileNet v2.
Background Art
Classifying remote sensing images is a critical task in geographic information systems (GIS), environmental monitoring, urban planning, agricultural management, disaster assessment, and many other fields. Remote sensing image classification helps identify and monitor different types of land use and land cover, such as forests, farmland, urban areas, and water bodies. It can also assist urban planners in understanding urban expansion patterns, identifying illegal construction, and planning future infrastructure development. Farmers and agricultural experts can use remote sensing images to monitor crop health, assess crop yields, and manage irrigation and fertilization. In addition, remote sensing image classification can quickly assess the impact of natural disasters (such as floods, fires, and earthquakes) on the land surface, helping rescue teams locate affected areas and develop rescue plans.
MobileNet v2 is a lightweight yet accurate network built on MobileNet v1. It retains the simplicity of its predecessor and requires no special operators while significantly improving accuracy, and it is widely used across computer vision applications.
The invention patent with application number CN202210749334.3 discloses a remote sensing image mining method coupling a knowledge graph with a deep neural network. The method includes: step 1, constructing a remote sensing image knowledge expression model that takes into account the imaging mechanism of ground objects and the characteristics of remote sensing images; step 2, constructing a remote sensing image knowledge graph from multi-source data; step 3, using knowledge representation learning to mine entity and relationship knowledge in the knowledge graph and converting it into a low-dimensional dense vector representation in the knowledge space; step 4, using a deep convolutional neural network to extract typical visual features of remote sensing images and learning a mapping from visual-space features to knowledge-space features; step 5, computing the similarity between the mapped vectors and the knowledge representation learning vectors; step 6, mining knowledge from the remote sensing images to be processed using the feature extraction and mapping learning of steps 4 and 5. That invention further improves the interpretation and mining of remote sensing images.
However, the above prior art uses the MobileNet v2 model to extract visual features of remote sensing images. MobileNet v2 lacks multi-scale feature learning and does not fully exploit the information between features for deep fusion and interaction, so its accuracy drops when target objects differ greatly in scale. Moreover, when the target information is complex, MobileNet v2 lacks an attention mechanism to focus on the key parts of the image, which reduces the efficiency of subsequent image recognition.
In view of this, there is an urgent need for a remote sensing image classification method and system based on an improved MobileNet v2 that at least addresses the above shortcomings.
Summary of the Invention
One purpose of the present invention is to provide a remote sensing image classification method and system based on an improved MobileNet v2. The acquired remote sensing images are preprocessed, and the processed data are fed into the MobileNet v2 network to obtain a first output feature. Multi-scale mixed convolution, feature cross fusion, and weight calculation are then applied to the first output feature in sequence to determine the useful information. The useful information is fed into a fully connected layer for classification prediction, and multiple iterations are performed based on the difference between the predicted and actual results. The weight file obtained when the iteration requirement is met is used to configure the target network model, which performs the remote sensing image classification task and determines the classification result. By extracting feature information at multiple scales and cross-fusing it, classification accuracy is improved; by introducing an attention mechanism that ignores redundant information, recognition efficiency is improved.
An embodiment of the present invention provides a remote sensing image classification method based on an improved MobileNet v2, comprising:
Step 1: obtaining a remote sensing image classification data set and dividing it into a training set and a test set;
Step 2: preprocessing the remote sensing image classification data set to obtain processed data;
Step 3: feeding the processed data into the MobileNet v2 network to obtain a first output feature;
Step 4: performing multi-scale mixed convolution on the first output feature to obtain a second output feature;
Step 5: performing feature cross fusion on the second output feature to obtain a third output feature;
Step 6: performing weight calculation on the third output feature to determine the useful information;
Step 7: feeding the useful information into a fully connected layer for classification prediction;
Step 8: performing multiple iterations according to the classification prediction results; when the stopping criterion is met, saving the weight file to obtain the target network model, and determining the classification result based on the remote sensing image classification task of the target network model.
Preferably, step 2 (preprocessing the remote sensing image classification data set to obtain processed data) includes:
aligning the remote sensing images in the data set into RGB images of size 256*256 to obtain the processed data.
Preferably, step 4 (performing multi-scale mixed convolution on the first output feature to obtain a second output feature) includes:
configuring a multi-scale mixed convolution module;
feeding the first output feature into the multi-scale mixed convolution module to obtain the second output feature;
wherein configuring the multi-scale mixed convolution module includes:
setting a 1*1 ordinary convolution, a 3*3 ordinary convolution, a 3*3 dilated convolution with a dilation rate of 6, and a 3*3 dilated convolution with a dilation rate of 12;
setting a BN layer and a ReLU6 activation layer.
The remote sensing image classification method based on an improved MobileNet v2 provided by an embodiment of the present invention further comprises:
obtaining the required receptive field;
calculating the current receptive field;
adjusting the dilation rate of the dilated convolution according to the required receptive field and the current receptive field;
wherein calculating the current receptive field includes:
calculating the equivalent kernel size of the dilated convolution, specifically:
k' = k + (k - 1)(r - 1)
where k' is the equivalent kernel size, k is the size of the dilated convolution kernel, and r is the dilation rate of the dilated convolution;
calculating the current receptive field from the equivalent kernel size, specifically:
RF = k' + (k - 1)(r - 1)
where RF is the current receptive field.
Preferably, obtaining the required receptive field includes:
obtaining historical remote sensing image classification tasks and extracting the first task features of the historical tasks;
determining the classification accuracy of the historical remote sensing image classification tasks;
constructing training sample vectors from the first task features and the classification accuracy;
constructing a receptive field determination model from the training sample vectors;
parsing the current remote sensing image classification task to obtain the second task features and the required classification accuracy;
constructing an input vector from the second task features and the required classification accuracy;
feeding the input vector into the receptive field determination model to obtain the required receptive field.
Preferably, step 6 (performing weight calculation on the third output feature to determine the useful information) includes:
constructing a channel attention block;
constructing a spatial attention block;
feeding the third output feature into the channel attention block to obtain the channel attention feature output by the channel attention block;
multiplying the channel attention feature with the third output feature to obtain an input feature map;
feeding the input feature map into the spatial attention block to obtain the spatial attention feature;
multiplying the input feature map with the spatial attention feature to obtain the useful information;
wherein constructing the channel attention block includes:
setting a first adaptive global max pooling layer;
setting a first adaptive global average pooling layer;
setting a two-layer neural network, the first layer being a 1*1 convolution with a Mish activation function and the second layer being a 1*1 convolution;
setting a first Mish activation function layer;
wherein constructing the spatial attention block includes:
setting a second adaptive global max pooling layer;
setting a second adaptive global average pooling layer;
setting a 7*7 convolution;
setting a second Mish activation function layer.
Preferably, the Mish activation function is:
Mish(x) = x * tanh(ln(1 + e^x))
where Mish is the Mish activation function and x is the feature map fed into the activation function.
Preferably, step 1 (obtaining the remote sensing image classification data set) includes:
extracting the task features of the remote sensing image classification task;
constructing an acquisition index according to the task features;
acquiring a first remote sensing image classification data subset through the local node according to the acquisition index;
acquiring a second remote sensing image classification data subset through big data nodes according to the acquisition index;
taking the first and second remote sensing image classification data subsets together as the remote sensing image classification data set.
Preferably, acquiring the second remote sensing image classification data subset through big data nodes according to the acquisition index includes:
obtaining the node labels of the big data nodes;
determining the first trust value of each big data node according to its node label;
selecting the big data nodes whose first trust value is greater than or equal to a preset first target threshold as first target nodes;
taking the remaining big data nodes other than the first target nodes as second target nodes;
obtaining the feedback features of the data provision records of the second target nodes;
fitting a feedback characteristic curve from the feedback features;
differentiating the feedback characteristic curve to obtain a feedback growth-rate curve;
if the curve features of the feedback growth-rate curve match the features of a concave curve, taking the corresponding second target node as a third target node;
determining the up-adjustment value of the feedback growth-rate curve according to the concave curve features of the third target node and a preset up-adjustment value determination library;
determining the second trust value of the third target node according to the up-adjustment value and the first trust value of the third target node;
if the second trust value is greater than or equal to a preset second target threshold, taking the corresponding third target node as a fourth target node;
acquiring the second remote sensing image classification data subset through the first target nodes and the fourth target nodes according to the acquisition index.
The remote sensing image classification method based on an improved MobileNet v2 provided by an embodiment of the present invention further comprises:
Step 9: obtaining the warning rules of the remote sensing image classification task and issuing corresponding warnings according to the classification result and the warning rules;
wherein step 9 includes:
determining the warning classification features according to the warning rules;
determining the actual classification features according to the classification result;
matching the actual classification features against the warning classification features, and if a match exists, triggering the warning response measures of the corresponding warning classification features;
obtaining a response path set;
parsing the response path set to determine the response path features, which include: measure matching degree, path response time, and path emergency dispatch time;
determining the first response paths in the response path set according to the warning response measures;
obtaining the path confirmation information of each first response path;
parsing the path confirmation information, and if any first response path is not confirmed, determining a third response path according to the warning response measures, the path features of the confirmed first response paths, and the path features of the second response paths (the paths in the response path set other than the first response paths);
sending the warning response measures to the third response path.
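As a rough illustration of how step 9 might be realized, the following Python sketch matches classification results against warning rules and scores candidate response paths. The data structures (the `warning_rules` mapping, the path dictionaries) and the scoring weights are illustrative assumptions, not part of the claimed method.

```python
# Hypothetical sketch of step 9: warning rule matching and response path selection.
def trigger_warnings(actual_features, warning_rules):
    """warning_rules maps a warning classification feature to its response measure;
    returns the measures whose feature matches an actual classification feature."""
    return [measure for feature, measure in warning_rules.items() if feature in actual_features]

def choose_response_path(paths):
    """paths: list of dicts with 'match' (measure matching degree, higher is better),
    'response_time' and 'dispatch_time' (lower is better), and 'confirmed' (bool).
    The weighting of the three path features is an assumption."""
    candidates = [p for p in paths if p["confirmed"]] or paths   # fall back to unconfirmed paths
    return max(candidates,
               key=lambda p: p["match"] - 0.1 * (p["response_time"] + p["dispatch_time"]))

measures = trigger_warnings({"flood", "forest"}, {"flood": "notify emergency dispatch"})
print(measures)   # ['notify emergency dispatch']
```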
An embodiment of the present invention provides a remote sensing image classification system based on an improved MobileNet v2, comprising:
a data set acquisition subsystem, configured to obtain a remote sensing image classification data set and divide it into a training set and a test set;
a preprocessing subsystem, configured to preprocess the remote sensing image classification data set to obtain processed data;
a first output feature acquisition subsystem, configured to feed the processed data into the MobileNet v2 network to obtain a first output feature;
a multi-scale mixed convolution subsystem, configured to perform multi-scale mixed convolution on the first output feature to obtain a second output feature;
a feature cross fusion subsystem, configured to perform feature cross fusion on the second output feature to obtain a third output feature;
a parallel attention subsystem, configured to perform weight calculation on the third output feature to determine the useful information;
a classification prediction subsystem, configured to feed the useful information into a fully connected layer for classification prediction;
a classification subsystem, configured to perform multiple iterations according to the classification prediction results, save the weight file when the stopping criterion is met to obtain the target network model, and determine the classification result based on the remote sensing image classification task of the target network model.
The beneficial effects of the present invention are as follows:
The present invention preprocesses the acquired remote sensing images and feeds the processed data into the MobileNet v2 network to obtain a first output feature, then applies multi-scale mixed convolution, feature cross fusion, and weight calculation to the first output feature in sequence to determine the useful information. The useful information is fed into a fully connected layer for classification prediction, and multiple iterations are performed based on the difference between the predicted and actual results. The weight file obtained when the iteration requirement is met is used to configure the target network model, which performs the remote sensing image classification task and determines the classification result. By extracting feature information at multiple scales and cross-fusing it, classification accuracy is improved; by introducing an attention mechanism that ignores redundant information, recognition efficiency is improved.
Other features and advantages of the present invention will be described in the following specification, will in part become apparent from the specification, or will be understood by practicing the present invention. The objectives and other advantages of the present invention can be realized and obtained through the structures particularly pointed out in this application.
The technical solution of the present invention is described in further detail below with reference to the accompanying drawings and embodiments.
Brief Description of the Drawings
The accompanying drawings are provided for a further understanding of the present invention and constitute a part of the specification. Together with the embodiments, they serve to explain the present invention and do not limit it. In the drawings:
FIG. 1 is a schematic diagram of a remote sensing image classification method based on an improved MobileNet v2 in an embodiment of the present invention;
FIG. 2 is a schematic visualization of the classification results of remote sensing images in an embodiment of the present invention;
FIG. 3 is a schematic diagram of a remote sensing image classification system based on an improved MobileNet v2 in an embodiment of the present invention.
Detailed Description
Preferred embodiments of the present invention are described below with reference to the accompanying drawings. It should be understood that the preferred embodiments described herein are only intended to illustrate and explain the present invention, not to limit it.
An embodiment of the present invention provides a remote sensing image classification method based on an improved MobileNet v2, as shown in FIG. 1, comprising:
Step 1: obtaining a remote sensing image classification data set and dividing it into a training set and a test set; the remote sensing image classification data set is a collection of remote sensing images whose classification results have already been labeled, and the splitting rule between the training set and the test set is set manually;
Step 2: preprocessing the remote sensing image classification data set to obtain processed data; preprocessing turns the remote sensing images into data suitable for model training, for example by resizing them to the model's input size;
Step 3: feeding the processed data into the MobileNet v2 network to obtain a first output feature; the first output feature is the feature representation of the image output by the MobileNet v2 network;
Step 4: performing multi-scale mixed convolution on the first output feature to obtain a second output feature; the second output feature is obtained by extracting the first output feature with convolution kernels of different scales;
Step 5: performing feature cross fusion on the second output feature to obtain a third output feature; the third output feature is obtained by combining second output features from different levels or different sources;
Step 6: performing weight calculation on the third output feature to determine the useful information; adding an attention mechanism allows the weights of the feature maps to be learned automatically, so the network pays more attention to the key parts of the image and ignores redundant information; the useful information is the result obtained after removing redundant information from the third output feature;
Step 7: feeding the useful information into a fully connected layer for classification prediction; the classification prediction is the result of classifying the remote sensing image according to the useful information;
Step 8: performing multiple iterations according to the classification prediction results; when the stopping criterion is met, saving the weight file to obtain the target network model, and determining the classification result based on the remote sensing image classification task of the target network model. Performing multiple iterations according to the classification prediction results means testing on the test set, judging the classification accuracy, adjusting the model training parameters in the direction that improves accuracy, and iterating the model. The stopping criterion is set manually in advance, for example an accuracy of 98%; the weight file contains all the weights and bias parameters learned during training; the target network model is the remote sensing image classification model based on the improved MobileNet v2 with the weight file deployed; the remote sensing image classification task consists of the remote sensing images to be classified and the classification requirements fed into the target network model; the classification result is the output of the target network model for the remote sensing image classification task, and a visualization of the classification result is shown in FIG. 2.
The working principle and beneficial effects of the above technical solution are as follows:
The present application preprocesses the acquired remote sensing images and feeds the processed data into the MobileNet v2 network to obtain a first output feature, then applies multi-scale mixed convolution, feature cross fusion, and weight calculation to the first output feature in sequence to determine the useful information. The useful information is fed into a fully connected layer for classification prediction, and multiple iterations are performed based on the difference between the predicted and actual results. The weight file obtained when the iteration requirement is met is used to configure the target network model, which performs the remote sensing image classification task and determines the classification result. Extracting feature information at multiple scales and cross-fusing it improves classification accuracy, and introducing an attention mechanism that ignores redundant information improves recognition efficiency.
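To make the flow of steps 3-8 concrete, the following PyTorch sketch wires a MobileNet v2 backbone to the modules detailed later in this description. The placeholder modules (`nn.Identity`), the channel count, and the choice of WHU-RS19's 19 scene classes are assumptions for illustration only, not the patent's reference implementation.

```python
# Sketch of the improved-MobileNetV2 classification pipeline (steps 3-8).
# The multi-scale, cross-fusion and attention modules are placeholders here so the
# file runs on its own; concrete sketches of those modules follow later in the text.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class ImprovedMobileNetV2(nn.Module):
    def __init__(self, num_classes: int, fused_channels: int = 1280):
        super().__init__()
        self.backbone = mobilenet_v2(weights=None).features  # step 3: first output feature
        self.multi_scale = nn.Identity()                      # step 4: multi-scale mixed convolution
        self.cross_fusion = nn.Identity()                     # step 5: feature cross fusion
        self.attention = nn.Identity()                        # step 6: attention-based weighting
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(fused_channels, num_classes)  # step 7: classification prediction

    def forward(self, x):
        x = self.backbone(x)
        x = self.multi_scale(x)
        x = self.cross_fusion(x)
        x = self.attention(x)
        return self.classifier(self.pool(x).flatten(1))

model = ImprovedMobileNetV2(num_classes=19)   # WHU-RS19 contains 19 scene classes
dummy = torch.randn(2, 3, 256, 256)           # preprocessed 256*256 RGB input (step 2)
print(model(dummy).shape)                     # torch.Size([2, 19])
```

Step 8 then trains this model on the training set, evaluates it on the test set after each iteration, and saves the weight file once the stopping criterion (for example, the target accuracy) is reached.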
On the WHU-RS19, RSSCN7, and SIRI-WHU data sets, the overall classification accuracy (OA) of the present method reaches 98.19%, 94.18%, and 96.37%, respectively, as shown in Tables 1-3: Table 1 compares the overall classification accuracy of the present invention with other methods on WHU-RS19, Table 2 on RSSCN7, and Table 3 on SIRI-WHU.
Table 1: Overall classification accuracy comparison with other methods on the WHU-RS19 data set
Table 2: Overall classification accuracy comparison with other methods on the RSSCN7 data set
Table 3: Overall classification accuracy comparison with other methods on the SIRI-WHU data set
In one embodiment, step 2 (preprocessing the remote sensing image classification data set to obtain processed data) includes:
aligning the remote sensing images in the data set into RGB images of size 256*256 to obtain the processed data, where 256*256 refers to the image resolution, i.e. both the width and the height of the image are 256 pixels.
The working principle and beneficial effects of the above technical solution are as follows:
The present application aligns all remote sensing images in the classification data set into 256*256 RGB images, producing processed data that is ready to be fed into the machine learning model for training and improving the suitability of the subsequent training.
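A minimal sketch of this preprocessing step follows, assuming PIL and torchvision are used for loading and resizing; the library choice is an assumption, since the description only requires alignment to 256*256 RGB images.

```python
# Align every remote sensing image to a 256*256 RGB tensor (step 2).
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Lambda(lambda im: im.convert("RGB")),  # force a 3-channel RGB image
    transforms.Resize((256, 256)),                    # align to 256*256 pixels
    transforms.ToTensor(),                            # HWC uint8 -> CHW float in [0, 1]
])

def load_image(path: str):
    return preprocess(Image.open(path))               # tensor of shape (3, 256, 256)
```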
In one embodiment, step 4 (performing multi-scale mixed convolution on the first output feature to obtain a second output feature) includes:
configuring a multi-scale mixed convolution module;
feeding the first output feature into the multi-scale mixed convolution module to obtain the second output feature;
wherein configuring the multi-scale mixed convolution module includes:
setting a 1*1 ordinary convolution, a 3*3 ordinary convolution, a 3*3 dilated convolution with a dilation rate of 6, and a 3*3 dilated convolution with a dilation rate of 12; a dilated convolution enlarges the receptive field by inserting skipped positions into the convolution kernel, and the dilation rate describes the number and distribution of these holes, e.g. a dilation rate of 6 means one actual value for every 6 kernel positions, with the rest being holes;
setting a BN (batch normalization) layer and a ReLU6 activation layer.
The working principle and beneficial effects of the above technical solution are as follows:
To address the large scale differences of target features in remote sensing images, the present application proposes a multi-scale mixed convolution module to extract target semantic information at different scales. The module consists of four different convolutions: a 1*1 ordinary convolution, a 3*3 ordinary convolution, a 3*3 dilated convolution with a dilation rate of 6, and a 3*3 dilated convolution with a dilation rate of 12. Each convolution is followed by a BN layer and a ReLU6 activation function to produce the output. Branch 1 of the module has a receptive field of 1*1, branch 2 of 3*3, branch 3 of 23*23, and branch 4 of 47*47. Because the receptive fields of the four branches all differ, feature information at different scales can be extracted, solving the problem of large differences in target feature scales.
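A PyTorch sketch of such a multi-scale mixed convolution module is given below. The channel counts and the concatenation-plus-1*1-projection used to merge the four branches are assumptions, since the description does not specify how the branch outputs are combined.

```python
import torch
import torch.nn as nn

class MultiScaleMixedConv(nn.Module):
    """Four parallel branches: 1*1 conv, 3*3 conv, 3*3 dilated conv (rate 6),
    3*3 dilated conv (rate 12), each followed by BN + ReLU6."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        def branch(k: int, d: int) -> nn.Sequential:
            pad = d * (k - 1) // 2          # keep the spatial size unchanged
            return nn.Sequential(
                nn.Conv2d(in_ch, out_ch, k, padding=pad, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU6(inplace=True),
            )
        self.b1 = branch(1, 1)              # receptive field 1*1
        self.b2 = branch(3, 1)              # receptive field 3*3
        self.b3 = branch(3, 6)              # dilation 6: receptive field 23*23 as described
        self.b4 = branch(3, 12)             # dilation 12: receptive field 47*47 as described
        self.project = nn.Conv2d(4 * out_ch, out_ch, 1)  # assumed merge of the four branches

    def forward(self, x):
        return self.project(torch.cat([self.b1(x), self.b2(x), self.b3(x), self.b4(x)], dim=1))

module = MultiScaleMixedConv(in_ch=1280, out_ch=256)
print(module(torch.randn(1, 1280, 8, 8)).shape)   # torch.Size([1, 256, 8, 8])
```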
An embodiment of the present invention provides a remote sensing image classification method based on an improved MobileNet v2, further comprising:
obtaining the required receptive field, i.e. the minimum receptive field size determined by the technician according to the classification task;
calculating the current receptive field, i.e. the actual size of the input image region covered by a neuron;
adjusting the dilation rate of the dilated convolution according to the required receptive field and the current receptive field; a dilated convolution yields different receptive fields depending on its dilation rate, and adjusting the dilation rate makes the current receptive field approach the required receptive field;
wherein calculating the current receptive field includes:
calculating the equivalent kernel size of the dilated convolution, specifically:
k' = k + (k - 1)(r - 1)
where k' is the equivalent kernel size, k is the size of the dilated convolution kernel, and r is the dilation rate of the dilated convolution;
calculating the current receptive field from the equivalent kernel size, specifically:
RF = k' + (k - 1)(r - 1)
where RF is the current receptive field.
The working principle and beneficial effects of the above technical solution are as follows:
The present application obtains the required receptive field determined by the technician according to the classification task and, in addition, calculates the current receptive field. The dilation rate of the dilated convolution is adjusted according to the required and current receptive fields, making the receptive field setting more reasonable.
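The two formulas above can be checked with a small script. The search over dilation rates used to approach the required receptive field is an assumed strategy, since the description only states that the dilation rate is adjusted.

```python
def equivalent_kernel(k: int, r: int) -> int:
    """k' = k + (k - 1)(r - 1): equivalent kernel size of a dilated convolution."""
    return k + (k - 1) * (r - 1)

def current_receptive_field(k: int, r: int) -> int:
    """RF = k' + (k - 1)(r - 1), as defined in the description above."""
    return equivalent_kernel(k, r) + (k - 1) * (r - 1)

def adjust_dilation(k: int, required_rf: int, max_rate: int = 64) -> int:
    """Pick the smallest dilation rate whose receptive field reaches the required one
    (this particular search strategy is an assumption)."""
    for r in range(1, max_rate + 1):
        if current_receptive_field(k, r) >= required_rf:
            return r
    return max_rate

print(current_receptive_field(3, 6))        # 23, matching branch 3 of the multi-scale module
print(current_receptive_field(3, 12))       # 47, matching branch 4
print(adjust_dilation(3, required_rf=30))   # 8, giving a receptive field of 31
```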
In one embodiment, obtaining the required receptive field includes:
obtaining historical remote sensing image classification tasks and extracting their first task features; the historical tasks are remote sensing image classification jobs completed in the past, and the first task features are their feature representation, such as the task type and the receptive field size set for the neural network used;
determining the classification accuracy of the historical remote sensing image classification tasks, i.e. the quantified accuracy of the classification results relative to the requirements;
constructing training sample vectors from the first task features and the classification accuracy, where the positions of the vector elements are determined manually;
constructing a receptive field determination model from the training sample vectors; the receptive field determination model is an AI model that determines the optimal receptive field size;
parsing the current remote sensing image classification task to obtain the second task features (the feature representation of the task) and the required classification accuracy (the classification accuracy the task is expected to reach);
constructing an input vector from the second task features and the required classification accuracy, with the vector elements arranged in the same way as in the training sample vectors;
feeding the input vector into the receptive field determination model to obtain the required receptive field.
The working principle and beneficial effects of the above technical solution are as follows:
The present application extracts the first task features of the historical remote sensing image classification tasks and determines their classification accuracy, constructs training sample vectors from these features and accuracies, and introduces a receptive field determination model trained on the training sample vectors. Using the same construction rule as the training sample vectors, an input vector is built from the second task features and the required classification accuracy and fed into the receptive field determination model to obtain the required receptive field, improving the suitability of the required receptive field determination process.
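A hedged sketch of such a receptive field determination model follows. The feature encoding, the example numbers, and the choice of a random forest regressor are all assumptions, since the description only requires an AI model built from training sample vectors.

```python
# Train a regressor on (historical task features, achieved accuracy) -> receptive field used,
# then query it with the current task's features and required accuracy.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# each row: [task-type code, number of classes, image resolution, achieved accuracy]  (assumed encoding)
X_hist = np.array([[0, 19, 256, 0.98],
                   [1,  7, 256, 0.94],
                   [2, 12, 256, 0.96]])
y_hist = np.array([23, 31, 47])             # receptive field used in each historical task (illustrative)

rf_model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_hist, y_hist)

new_task = np.array([[0, 19, 256, 0.98]])   # second task features + required classification accuracy
print(round(rf_model.predict(new_task)[0])) # predicted required receptive field
```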
In one embodiment, step 6 (performing weight calculation on the third output feature to determine the useful information) includes:
constructing a channel attention block;
constructing a spatial attention block;
feeding the third output feature into the channel attention block to obtain the channel attention feature output by the channel attention block;
multiplying the channel attention feature with the third output feature to obtain an input feature map;
feeding the input feature map into the spatial attention block to obtain the spatial attention feature;
multiplying the input feature map with the spatial attention feature to obtain the useful information;
wherein constructing the channel attention block includes:
setting a first adaptive global max pooling layer;
setting a first adaptive global average pooling layer;
setting a two-layer neural network, the first layer being a 1*1 convolution with a Mish activation function and the second layer being a 1*1 convolution;
setting a first Mish activation function layer;
wherein constructing the spatial attention block includes:
setting a second adaptive global max pooling layer;
setting a second adaptive global average pooling layer;
setting a 7*7 convolution;
setting a second Mish activation function layer.
The working principle and beneficial effects of the above technical solution are as follows:
The channel attention block applies adaptive global max pooling and adaptive global average pooling to the input feature map over the height and width dimensions, obtaining two pooled feature maps, each of which is fed into a two-layer neural network whose first layer is a 1*1 convolution with an activation function and whose second layer is a 1*1 convolution. The two outputs are summed and passed through an activation function to generate the final channel attention feature. Finally, the channel attention feature is multiplied with the input feature, and the resulting feature map serves as the input feature map of the spatial attention block.
The spatial attention block applies channel-wise adaptive global max pooling and adaptive global average pooling to its input feature map, obtaining two feature maps that are concatenated along the channel dimension and passed through a 7*7 convolution and an activation function to generate the spatial attention feature, which is finally multiplied with the input feature map.
The present application introduces a channel attention block and a spatial attention block. Adding this attention mechanism allows the weights of the feature maps to be learned automatically, so the network pays more attention to the key parts of the image and ignores redundant information, which improves classification accuracy under complex backgrounds and addresses the sensitivity of detection networks to background interference.
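A PyTorch sketch of the two attention blocks as described above follows. The channel reduction ratio of 16 and the wrapper name `ParallelAttention` are assumptions; everything else mirrors the layer list given in this embodiment.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Mish(nn.Module):
    def forward(self, x):
        return x * torch.tanh(F.softplus(x))      # x * tanh(ln(1 + e^x))

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):   # reduction ratio is an assumption
        super().__init__()
        self.max_pool = nn.AdaptiveMaxPool2d(1)    # first adaptive global max pooling
        self.avg_pool = nn.AdaptiveAvgPool2d(1)    # first adaptive global average pooling
        self.mlp = nn.Sequential(                  # two-layer network of 1*1 convolutions
            nn.Conv2d(channels, channels // reduction, 1), Mish(),
            nn.Conv2d(channels // reduction, channels, 1),
        )
        self.act = Mish()                          # first Mish activation layer

    def forward(self, x):
        return self.act(self.mlp(self.max_pool(x)) + self.mlp(self.avg_pool(x)))

class SpatialAttention(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)  # 7*7 convolution
        self.act = Mish()                                      # second Mish activation layer

    def forward(self, x):
        max_map, _ = torch.max(x, dim=1, keepdim=True)   # channel-wise max pooling
        avg_map = torch.mean(x, dim=1, keepdim=True)     # channel-wise average pooling
        return self.act(self.conv(torch.cat([max_map, avg_map], dim=1)))

class ParallelAttention(nn.Module):
    """Third output feature -> channel attention -> multiply -> spatial attention -> multiply."""
    def __init__(self, channels: int):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()

    def forward(self, x):
        x = x * self.channel(x)       # input feature map for the spatial attention block
        return x * self.spatial(x)    # useful information

print(ParallelAttention(256)(torch.randn(1, 256, 8, 8)).shape)   # torch.Size([1, 256, 8, 8])
```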
In one embodiment, the Mish activation function is:
Mish(x) = x * tanh(ln(1 + e^x))
where Mish is the Mish activation function and x is the feature map fed into the activation function.
The working principle and beneficial effects of the above technical solution are as follows:
The present application introduces the Mish activation function to capture the relationships in the convolution outputs. Mish has no upper bound, which guarantees that there is no saturation region, so gradients do not vanish during training; it has no lower bound, which helps achieve a regularization effect; it is non-monotonic, so some small negative inputs are retained as negative outputs, improving the interpretability and the gradient flow of the network; and it is smooth, giving good generalization ability and effective optimization of the results, which can improve the quality of the results.
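The formula can be evaluated directly to see the behaviour described above (positive inputs pass through almost unchanged, small negative inputs survive as small negative outputs); the sample points are arbitrary and the printed values are approximate.

```python
import math

def mish(x: float) -> float:
    return x * math.tanh(math.log(1.0 + math.exp(x)))   # x * tanh(ln(1 + e^x))

print(round(mish(1.0), 4))    # ~0.8651: a positive input passes through almost unchanged
print(round(mish(-5.0), 4))   # ~-0.0336: a small negative input is retained as a small negative output
```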
In one embodiment, step 1 (obtaining the remote sensing image classification data set) includes:
extracting the task features of the remote sensing image classification task, i.e. its task type and task content;
constructing an acquisition index according to the task features; the acquisition index is an index for tracking and accessing remote sensing image classification data related to the task type and task content;
acquiring a first remote sensing image classification data subset through the local node according to the acquisition index; the local node is the communication node of the local remote sensing image library, which stores locally classified and labeled remote sensing images;
acquiring a second remote sensing image classification data subset through big data nodes according to the acquisition index; a big data node is a big data platform that stores remote sensing data, such as a GPS navigation platform or a forest fire prevention remote sensing monitoring platform;
taking the first and second remote sensing image classification data subsets together as the remote sensing image classification data set.
The working principle and beneficial effects of the above technical solution are as follows:
The present application introduces the task features of the remote sensing image classification task and, based on them, constructs an acquisition index for tracking and accessing remote sensing image classification data related to the task type and task content. According to the acquisition index, remote sensing image classification data are acquired from both the local node and the big data nodes, improving the comprehensiveness of data set acquisition.
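A minimal sketch of this acquisition flow follows, with hypothetical fetch functions standing in for the local node and big data node interfaces; the index format is also an assumption.

```python
# Build an acquisition index from the task features, then merge the subsets returned
# by the local node and the big data nodes into one classification data set.
from typing import Callable

def build_index(task_type: str, task_content: str) -> dict:
    return {"type": task_type, "keywords": task_content.split()}

def acquire_dataset(index: dict,
                    fetch_local: Callable[[dict], list],
                    fetch_big_data: Callable[[dict], list]) -> list:
    first_subset = fetch_local(index)        # labelled images already held locally
    second_subset = fetch_big_data(index)    # images from the screened big data nodes
    return first_subset + second_subset      # combined remote sensing image classification data set

index = build_index("scene classification", "land use forest water urban")
dataset = acquire_dataset(index,
                          fetch_local=lambda i: ["local_img_1"],
                          fetch_big_data=lambda i: ["remote_img_1"])
print(dataset)   # ['local_img_1', 'remote_img_1']
```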
In one embodiment, acquiring the second remote sensing image classification data subset through the big data node according to the acquisition index includes:
obtaining the node label of the big data node; wherein the node label is the network identifier of the big data node;
determining a first credibility value of the big data node according to the node label; wherein the first credibility value is the trustworthiness rating of the big data node, read via the node label, that is stored in the public rating system of the Internet platform;
selecting the big data nodes whose first credibility value is greater than or equal to a preset first target threshold as first target nodes; wherein the preset first target threshold is set manually in advance;
taking the remaining big data nodes other than the first target nodes as second target nodes;
obtaining the feedback features of the data provision records of the second target nodes; wherein a data provision record is a record of a second target node providing data on the big data platform, and the feedback features are the ratings given by the data users of those records;
fitting a feedback feature curve according to the feedback features; wherein fitting the feedback feature curve means using a mathematical method (for example, polynomial fitting) to build a curve describing how the feedback features change over time;
differentiating the feedback feature curve to obtain a feedback growth-rate curve; wherein the feedback growth-rate curve is the derivative of the feedback feature curve with respect to the time variable;
if the curve features of the feedback growth-rate curve match a concave curve feature, taking the corresponding second target node as a third target node;
determining the upward adjustment value of the feedback growth-rate curve according to the concave curve feature of the third target node and a preset upward adjustment value determination library; wherein the preset upward adjustment value determination library contains a plurality of one-to-one corresponding candidate concave curve features and candidate upward adjustment values;
determining a second credibility value of the third target node according to the upward adjustment value and the first credibility value of the third target node; wherein the second credibility value is the product of the upward adjustment value and the first credibility value;
if the second credibility value is greater than or equal to a preset second target threshold, taking the corresponding third target node as a fourth target node; wherein the preset second target threshold is set manually in advance;
acquiring the second remote sensing image classification data subset through the first target nodes and the fourth target nodes according to the acquisition index.
The working principle and beneficial effects of the above technical solution are as follows:
This application obtains the network identifier of each big data node and uses it to read the node's first credibility value stored in the public rating system of the Internet platform. Nodes whose first credibility value is greater than or equal to the first target threshold are selected as first target nodes. For the second target nodes, the feedback features of their data provision records are obtained and a feedback feature curve is fitted; differentiating this curve yields the feedback growth-rate curve. The curve features of the feedback growth-rate curve are extracted, and if they match a concave curve feature, the corresponding second target node becomes a third target node. The upward adjustment value determination library is introduced, and the upward adjustment value is determined from the concave curve feature of the third target node. Multiplying the upward adjustment value by the first credibility value gives the second credibility value, and the third target nodes whose second credibility value is greater than or equal to the second target threshold are selected as fourth target nodes. Based on the acquisition index, the second remote sensing image classification data subset is acquired jointly through the first target nodes and the fourth target nodes. This makes the screening of big data nodes more appropriate and improves the usability of the training data.
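A sketch of the node-screening arithmetic under stated assumptions: feedback ratings are timestamped scalars, the feedback feature curve is fitted with a cubic polynomial, a steadily increasing growth rate is used as a simple proxy for the concave-curve feature, and the upward adjustment (uplift) library is a plain dict keyed by a coarse curve-feature label. All names are illustrative, not the platform's actual interfaces.

```python
import numpy as np

def second_credibility_value(times, ratings, first_credibility, uplift_library, degree=3):
    """Return the second credibility value of a second target node, or None
    when its feedback growth-rate curve fails the concave-curve check."""
    # Fit the feedback feature curve over time (polynomial fitting, as in the text).
    curve = np.polynomial.Polynomial.fit(times, ratings, degree)
    # Differentiate with respect to time: feedback growth-rate curve.
    growth_rate = curve.deriv()
    grid = np.linspace(min(times), max(times), 50)
    increments = np.diff(growth_rate(grid))
    # Proxy for the concave-curve feature: the growth rate keeps increasing.
    if not np.all(increments > 0):
        return None                                   # node is not a third target node
    # Look up the upward adjustment value for this (coarse) curve feature.
    key = "strong" if increments.mean() > 0.05 else "mild"
    uplift = uplift_library.get(key, 1.0)
    return uplift * first_credibility                 # second credibility value

# Illustrative lookup library and usage; nodes whose result reaches the second
# target threshold would become fourth target nodes.
uplift_library = {"mild": 1.05, "strong": 1.15}
print(second_credibility_value([0, 1, 2, 3], [3.0, 3.1, 3.5, 4.4], 0.8, uplift_library))
```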
An embodiment of the present invention provides a remote sensing image classification method based on improved MobileNet v2, further comprising:
Step 9: obtaining the early warning rules of the remote sensing image classification task and issuing corresponding early warnings according to the classification results and the early warning rules; wherein the early warning rules are the criteria or conditions used in remote sensing image classification to decide when an early warning is triggered;
wherein step 9, obtaining the early warning rules of the remote sensing image classification task and issuing corresponding early warnings according to the classification results and the early warning rules, includes:
determining early warning classification features according to the early warning rules; wherein the early warning classification features are a featurized representation of the classification results that would trigger an early warning under the rules;
determining actual classification features according to the classification results; wherein the actual classification features are the result obtained by featurizing the classification results;
matching the actual classification features against the early warning classification features and, if a match exists, triggering the early warning response measures of the matched early warning classification feature; wherein the early warning response measures specify which actions are taken in response to the early warning;
obtaining a response path set; wherein the response path set is the set of communication node paths of the responders that the remote sensing image classification platform can dispatch;
parsing the response path set to determine the response path features, which include the measure matching degree, the path response time and the path emergency dispatch time; wherein the measure matching degree is determined by how well the measures a response path can execute match the early warning response measures, the path response time is the time window within which the response path can be dispatched, and the path emergency dispatch time is the time required to bring the response path online at short notice;
determining the first response paths in the response path set according to the early warning response measures; wherein a first response path is the communication node of a responder that can execute the early warning response measures;
obtaining path confirmation information for each first response path; wherein the path confirmation information is the information in which the responder confirms whether it will execute the assigned early warning response measures;
parsing the path confirmation information and, if any first response path does not confirm, determining a third response path according to the early warning response measures, the path features of the confirmed first response paths, and the path features of the second response paths, i.e. the response paths in the response path set other than the first response paths; wherein the third response path is the backup response path determined when a first response path does not answer; when determining the third response path, the unmatched portion of the measures is first determined from the early warning response measures and the path features of the confirmed first response paths, and a third response path is then matched according to the unmatched portion of the measures and the path features of the second response paths;
sending the early warning response measures to the third response path.
The working principle and beneficial effects of the above technical solution are as follows:
This application introduces early warning rules and determines the early warning classification features of the classification results that would trigger an early warning. The actual classification features are determined from the classification results and matched against the early warning classification features to determine the early warning response measures of the matched early warning classification feature. The set of communication node paths of the responders that the remote sensing image classification platform can dispatch (the response path set) is obtained, the response path features of each response path are determined, and the first response paths are preliminarily matched. Because early warnings arise at short notice, the first response paths cannot always give timely feedback. Therefore, the path confirmation information of each first response path is parsed; if any path does not confirm, the unmatched portion of the measures is determined from the early warning response measures and the path features of the confirmed first response paths, a third response path is matched according to this unmatched portion and the path features of the second response paths, and the early warning response measures are sent to the third response path. Determining an alternative when a response path cannot respond in time improves the reliability with which the early warning response measures are executed.
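A sketch of the dispatch logic of step 9 under loose assumptions: early warning classification features, required measures and path capabilities are modeled as plain string sets, a first response path is taken to be one that can execute all of the required measures, and the `confirmed` flag stands in for the path confirmation information. All names are illustrative rather than the platform's actual interfaces.

```python
from dataclasses import dataclass

@dataclass
class ResponsePath:
    node_id: str
    measures: set[str]            # measures this responder can execute
    response_time_h: float        # path response time (hours)
    emergency_dispatch_h: float   # time to bring the path online at short notice
    confirmed: bool = False       # stands in for the path confirmation information

def dispatch_warning(actual_features: set[str],
                     warning_rules: dict[frozenset, set[str]],
                     paths: list[ResponsePath]) -> list[ResponsePath]:
    """Match classification features to early warning rules, then select response paths."""
    # 1. Collect the response measures of every matched early warning classification feature.
    measures: set[str] = set()
    for feature_set, actions in warning_rules.items():
        if feature_set <= actual_features:
            measures |= actions
    if not measures:
        return []                                     # no early warning triggered
    # 2. First response paths: responders able to execute the required measures.
    first = [p for p in paths if measures <= p.measures]
    chosen = [p for p in first if p.confirmed]
    covered: set[str] = set()
    for p in chosen:
        covered |= p.measures & measures
    # 3. Coverage gap from unconfirmed paths: match third (backup) response paths
    #    from the second response paths, quickest emergency dispatch first.
    unmatched = measures - covered
    if unmatched:
        second = [p for p in paths if p not in first]
        for p in sorted(second, key=lambda p: p.emergency_dispatch_h):
            if p.measures & unmatched:
                chosen.append(p)                      # third response path
                unmatched -= p.measures
            if not unmatched:
                break
    return chosen

# Usage with illustrative rules and paths.
rules = {frozenset({"flood", "urban"}): {"notify_emergency_dept", "dispatch_drone"}}
paths = [ResponsePath("r1", {"dispatch_drone", "notify_emergency_dept"}, 2.0, 0.5),  # unconfirmed
         ResponsePath("r2", {"notify_emergency_dept"}, 1.0, 0.2),
         ResponsePath("r3", {"dispatch_drone"}, 3.0, 0.3)]
print([p.node_id for p in dispatch_warning({"flood", "urban", "water"}, rules, paths)])
```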
An embodiment of the present invention provides a remote sensing image classification system based on improved MobileNet v2, as shown in FIG. 3, comprising:
a dataset acquisition subsystem 1, configured to acquire a remote sensing image classification dataset and divide it into a training set and a test set;
a preprocessing subsystem 2, configured to preprocess the remote sensing image classification dataset to obtain processed data;
a first output feature acquisition subsystem 3, configured to input the processed data into the MobileNet v2 network to obtain first output features;
a multi-scale mixed convolution subsystem 4, configured to perform multi-scale mixed convolution on the first output features to obtain second output features;
a feature cross fusion subsystem 5, configured to perform feature cross fusion on the second output features to obtain third output features;
a parallel attention subsystem 6, configured to perform weight calculation on the third output features to determine the useful information;
a classification prediction subsystem 7, configured to input the useful information into the fully connected layer for classification prediction;
a classification subsystem 8, configured to perform multiple iterations according to the classification prediction results, save the weight file when the stopping criterion is met to obtain the target network model, and determine the classification result by performing the remote sensing image classification task with the target network model.
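To make the data flow between subsystems 3 to 7 concrete, the following is a minimal PyTorch-style sketch of the forward pass. The torchvision backbone, the dilation rates of the mixed convolution branches, the 1x1 fusion layer and the channel-attention module are all assumptions for illustration, not the exact network of the embodiment.

```python
import torch
import torch.nn as nn
import torchvision

class ImprovedMobileNetV2(nn.Module):
    """Illustrative pipeline: backbone -> multi-scale mixed convolution ->
    feature cross fusion -> parallel attention -> fully connected classifier."""
    def __init__(self, num_classes: int, channels: int = 1280):
        super().__init__()
        self.backbone = torchvision.models.mobilenet_v2(weights=None).features
        # Multi-scale mixed convolution: parallel dilated 3x3 branches (assumed rates 1, 2, 4).
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels // 4, 3, padding=d, dilation=d) for d in (1, 2, 4))
        self.fuse = nn.Conv2d(3 * (channels // 4), channels, 1)      # feature cross fusion
        self.attention = nn.Sequential(                              # channel attention branch
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.classifier = nn.Linear(channels, num_classes)           # classification prediction

    def forward(self, x):
        f1 = self.backbone(x)                                        # first output features
        f2 = torch.cat([b(f1) for b in self.branches], dim=1)        # second output features
        f3 = self.fuse(f2)                                           # third output features
        useful = f3 * self.attention(f3)                             # weighted useful information
        pooled = useful.mean(dim=(2, 3))                             # global average pooling
        return self.classifier(pooled)

model = ImprovedMobileNetV2(num_classes=45)                          # e.g. 45 scene classes
logits = model(torch.randn(1, 3, 224, 224))
```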
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the present invention. Thus, if these modifications and variations fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include them.
Priority Application (1)
CN202410646435.7A, filed 2024-05-23, priority date 2024-05-23: Remote sensing image classification method and system based on improved MobileNet v2

Publications (2)
CN118506201A, published 2024-08-16
CN118506201B, granted 2025-01-17

Family ID: 92246332