CN113344148A - Marine ship target identification method based on deep learning - Google Patents
Marine ship target identification method based on deep learning
- Publication number
- CN113344148A (application CN202110899130.3A)
- Authority
- CN
- China
- Prior art keywords
- loss
- image
- prediction
- target
- deep learning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Computational Linguistics (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a marine ship target identification method based on deep learning, which constructs a deep learning convolutional network suitable for remote sensing images, improves the detection of dense small targets by optimizing non-maximum suppression, and avoids confusion between different types of ships in ports; further, a region-of-interest optimization strategy based on bilinear interpolation improves both the detection precision and the convergence speed of the algorithm. The method can be effectively applied to the detection and identification of various marine vessel targets in satellite remote sensing images, and provides reliable technical support for applications such as maritime monitoring, marine operations and sea area safety.
Description
Technical Field
The invention belongs to the field of target identification and artificial intelligence, and particularly relates to a marine ship target identification method based on deep learning.
Background
Marine ships are an important means of ocean transportation and a key carrier supporting marine economic development, technological progress, environmental protection and the maintenance of maritime safety. Effective control of marine vessels is the main approach to achieving maritime safety and harmonious development, so accurate detection and supervision of marine vessels is necessary.
Most existing offshore target detection methods rely mainly on optical and radar images, and their detection precision, efficiency and coverage are very limited because they are based on local observation techniques. Meanwhile, traditional image processing methods are limited in extracting marine ship features: they cannot cover ships of different sizes and types, and for dense targets and small targets in the open ocean they are particularly susceptible to sea clutter and wave backgrounds, which reduces detection and identification precision.
Disclosure of Invention
The invention provides a marine ship target identification method based on deep learning, which constructs a deep learning convolutional network suitable for remote sensing images, improves the detection of dense small targets by optimizing non-maximum suppression, and avoids confusion between different types of ships in ports; further, a region-of-interest optimization strategy based on bilinear interpolation improves both the detection precision and the convergence speed of the algorithm. The method can be effectively applied to the detection and identification of various marine vessel targets in satellite remote sensing images, and provides reliable technical support for applications such as maritime monitoring, marine operations and sea area safety.
The method applies deep learning to marine ship target identification. It can effectively overcome the bottlenecks of traditional convolutional image identification methods, such as poor identification precision and difficult image matching under conditions of small targets, dense targets and complex sea states; it obtains rich marine ship features by constructing a large multi-type marine ship sample data set, realizes high-precision and fast marine ship target identification and situation judgment, and provides a new idea and method for the monitoring and effective control of marine ships.
The technical scheme of the invention is as follows: a marine ship target identification method based on deep learning comprises the following steps:
a, constructing a training set by using a known satellite remote sensing image data set of a marine ship, calibrating a ship target in an image to enable a target frame to reach pixel-level precision, and simultaneously carrying out scale transformation on an input image to enable the size of the image to be in the same range, so that features are convenient to extract;
b, constructing an image target recognition framework for deep learning, determining a feature extractor on the basis of adopting a convolutional neural network as a basic calculation unit, and extracting features of the training data set constructed in the step A;
c, carrying out target classification, position regression and mask prediction on the features obtained in the step B;
d, generating a candidate frame on the basis of the step C, screening the candidate frame by combining the real target markers of the training set, and calculating the loss of the candidate frame to enable the loss to reach the expected requirement;
step E, when the loss of the candidate frame in the step D meets the expected requirement, generating a training model parameter;
step F, carrying out scale transformation processing on the remote sensing image data set to be tested so as to conveniently carry out feature extraction;
step G, inputting the data set to be tested processed in the step F into the image target identification framework constructed in the step B, and performing feature extraction;
step H, performing region-of-interest processing on the features extracted in the step G, inputting the result into the model generated by training in the step E, and further performing convolution operations and flattened full connection;
step I, respectively carrying out mask prediction, category prediction and boundary frame prediction on the result of the step H, and calculating prediction loss;
step J, performing non-maximum suppression on the result of the step I for further screening;
and K, carrying out unified scale transformation on the result of the step J, and outputting the result as a final result.
Further, in the step A and the step F, the data sets are calibrated using high-precision masks; the data sets are satellite remote sensing images, and the ships involved include small ships such as small yachts, fishing boats and sailing boats, medium ships such as cargo ships, large cruise ships and warships, and large ships such as aircraft carriers.
Further, in the step B and the step F, MASK R-CNN is used as the basic deep learning framework; the framework supports the convolutional neural network required to complete ship identification, and in this process positive and negative samples are constructed using an intersection-over-union (IoU) threshold and used as input for sample identification prediction.
Further, in the step C, the feature extractor adopts ResNet-101; extraction of the ship target feature layers in the image is realized through the candidate box extraction network, and the target score and the regression parameters of the target bounding box are predicted. Dividing the size of the image by the size of the feature matrix gives the extent of the original image corresponding to each pixel of the feature matrix.
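As a minimal illustration of this stride mapping (the helper name and the sizes are our own, not from the patent), dividing the image size by the feature-matrix size gives the region of the original image covered by each feature-matrix cell:

```python
def cell_to_image_region(fx, fy, image_w, image_h, feat_w, feat_h):
    """Map feature-matrix cell (fx, fy) back to the region of the
    original image it covers (stride = image size / feature size)."""
    sx = image_w / feat_w  # horizontal stride in original-image pixels
    sy = image_h / feat_h  # vertical stride in original-image pixels
    return (fx * sx, fy * sy, (fx + 1) * sx, (fy + 1) * sy)

# e.g. a 1024x1024 image reduced to a 32x32 feature matrix:
# each feature cell corresponds to a 32x32-pixel patch of the original image.
region = cell_to_image_region(2, 3, 1024, 1024, 32, 32)
```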
Further, in step D, the candidate frame generation method includes:
wherein h and w represent the height and width of the input image, and h' and w' are the height and width of the candidate box after the scale change; ratio represents the ratio of the height to the width of the incoming image, and the superscripted ratio* represents that ratio after transformation. With the aim of keeping the area of the candidate box unchanged:

h' = sqrt(h · w · ratio*), w' = sqrt(h · w / ratio*)

The candidate box loss is calculated as follows. L_reg represents the regression prediction loss:

L_reg = Σ_i smooth_L1(t_i - t_i*)

where smooth_L1 is defined as:

smooth_L1(x) = 0.5·x^2 if |x| < 1, |x| - 0.5 otherwise

t_i represents the predicted values of the regression parameters and t_i* the true values, with t_i = (x_i - x_a)/w_a, where x_i and x_a respectively represent the candidate box predicted coordinate and the anchor box coordinate, and w_a represents the width of the anchor box.
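The area-preserving rescaling and the smooth L1 regression loss described in step D can be sketched in Python as follows (the helper names are illustrative, not from the patent):

```python
import math

def rescale_keep_area(h, w, ratio_star):
    """Resize a candidate box to aspect ratio ratio* (= h'/w')
    while keeping its area h*w unchanged."""
    area = h * w
    return math.sqrt(area * ratio_star), math.sqrt(area / ratio_star)

def smooth_l1(x):
    """smooth_L1(x) = 0.5*x^2 for |x| < 1, |x| - 0.5 otherwise."""
    return 0.5 * x * x if abs(x) < 1 else abs(x) - 0.5

def reg_loss(t_pred, t_true):
    """L_reg: smooth L1 summed over the regression parameters t_i."""
    return sum(smooth_l1(p - t) for p, t in zip(t_pred, t_true))
```

For example, rescaling a 100x100 box to ratio* = 4 yields a 200x50 box with the same 10000-pixel area.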
Further, in the step I, the loss calculation methods of the classification prediction, the regression prediction and the mask prediction are respectively as follows:
wherein L_cls represents the classification loss:

L_cls = -(1/N) · Σ_i p_i* · log(p_i)

p_i represents the predicted probability of the i-th class target, and p_i* can be regarded as the one-hot code of the true target type, which is 1 only for the correct target class and 0 otherwise; the classification loss accounts for a smaller proportion of the total loss.
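The classification loss just described — average cross entropy against a one-hot label — can be sketched as a small Python helper (name and normalization are our own assumptions; predicted probabilities are assumed nonzero):

```python
import math

def cls_loss(probs, onehots):
    """Average cross entropy over a batch of predictions; each label
    is a one-hot code, so only -log(p) of the true class contributes."""
    total = 0.0
    for p, t in zip(probs, onehots):
        total -= sum(ti * math.log(pi) for pi, ti in zip(p, t))
    return total / len(probs)
```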
The regression prediction loss method is the same as the candidate frame loss calculation method in the step D, and specifically comprises the following steps:
L_reg represents the regression prediction loss:

L_reg = Σ_i smooth_L1(t_i - t_i*)

where smooth_L1 is defined as:

smooth_L1(x) = 0.5·x^2 if |x| < 1, |x| - 0.5 otherwise

t_i represents the predicted values of the regression parameters and t_i* the true values, with t_i = (x_i - x_a)/w_a, where x_i and x_a respectively represent the candidate box predicted coordinate and the anchor box coordinate, and w_a represents the width of the anchor box.
Further, in the step J, for each feature layer, the top-ranked targets are screened according to prediction probability with a non-maximum suppression strategy. The specific process is to first traverse all prediction boxes and delete candidate boxes that cross the image boundary, then delete a number of small targets according to the non-maximum principle. The loss of the candidate box extraction network includes the classification loss, the bounding box regression loss and the mask loss. The classification loss measures the difference between the class judgment of the candidate box and the true result; since the classification output can be regarded as a one-hot code, it can be calculated as the average cross entropy. The boundary loss is calculated as the difference between the predicted and true regression parameters. The mask loss is a class-based binary cross-entropy loss.
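The screening in step J can be sketched as follows (a minimal greedy non-maximum suppression; the IoU threshold value and helper names are illustrative): out-of-range boxes are removed first, then lower-scoring boxes that overlap a kept box are suppressed.

```python
def iou(a, b):
    """Intersection over union of boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, img_w, img_h, iou_thresh=0.5):
    # 1. delete candidate boxes that cross the image boundary
    cand = [(b, s) for b, s in zip(boxes, scores)
            if b[0] >= 0 and b[1] >= 0 and b[2] <= img_w and b[3] <= img_h]
    # 2. greedily keep the highest-scoring box, suppress heavy overlaps
    cand.sort(key=lambda bs: bs[1], reverse=True)
    keep = []
    for b, s in cand:
        if all(iou(b, k) < iou_thresh for k, _ in keep):
            keep.append((b, s))
    return keep
```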
Compared with the prior art, the invention has the advantages that:
compared with the traditional marine remote sensing target identification method, the marine vessel detection method of the MASK R-CNN frame is established by constructing the feature data sets aiming at different types of marine vessels, and reasonable feature extractors can be selected aiming at different types of vessel targets; meanwhile, in consideration of the difference between the remote sensing image and the daily scene image, the method improves the detection precision of the marine dense small targets in the remote sensing image through a non-maximum inhibition way, introduces bilinear interpolation in the interest area through improving a characteristic diagram scale scaling method, and greatly improves the detection precision and the convergence speed of the marine ship targets.
Drawings
FIG. 1 is a flow chart of an implementation of a marine vessel target identification method based on deep learning according to the present invention;
FIG. 2 is a diagram illustrating the detection effect of the present invention on a small ship target;
FIG. 3 is a diagram illustrating the detection effect of the method of the present invention on dense targets;
FIG. 4 shows the detection effect of the method of the present invention under complex sea conditions;
FIG. 5 is a schematic diagram of the scaling performed by the method of the present invention.
Detailed Description
The invention will be described in detail below with reference to the accompanying drawings and specific embodiments, which are only intended to facilitate the understanding of the invention and are not intended to limit the invention.
The invention provides a marine ship target identification method based on deep learning, which constructs a deep learning convolutional network suitable for remote sensing images, improves the detection of dense small targets by optimizing non-maximum suppression, and avoids confusion between different types of ships in ports; further, a region-of-interest optimization strategy based on bilinear interpolation improves both the detection precision and the convergence speed of the algorithm. The method can be effectively applied to the detection and identification of various marine vessel targets in satellite remote sensing images, and provides reliable technical support for applications such as maritime monitoring, marine operations and sea area safety.
As shown in fig. 1, the marine vessel target identification method based on deep learning of the present invention specifically includes the following steps:
(1) Construct a satellite remote sensing image data set of marine ships and calibrate the ship targets in the images so that their target boxes reach pixel-level precision; the data set is calibrated with high-precision masks and consists of satellite remote sensing images. The ships involved include small ships such as small yachts, fishing boats and sailing boats, medium ships such as cargo ships, large cruise ships and warships, and large ships such as aircraft carriers.
(2) Construct an image target identification framework for deep learning, and determine the feature extractor on the basis of a convolutional neural network as the basic computing unit, so as to accurately extract the features of the ships to be detected; MASK R-CNN is used as the basic deep learning framework, which supports the convolutional neural network required for ship identification, and positive and negative samples are constructed using an intersection-over-union (IoU) threshold and used as input for sample identification prediction.
(3) Inputting the image into a feature extractor to obtain a feature map, and obtaining a feature matrix according to the feature map; and the feature extractor adopts a ResNet-101 extractor, and extracts the ship target feature layer in the image through a candidate frame extraction network.
(4) Predict the target score and the regression parameters of the target bounding box. Dividing the size of the image by the size of the feature matrix gives the extent of the original image corresponding to each pixel of the feature matrix.
(5) Generating a candidate box, wherein the width and the height of the candidate box have the following relation with the input width and the input height:
wherein h and w represent the height and width of the input image, h' and w' the height and width of the candidate box after the scale change, ratio the ratio of the height to the width of the incoming image, and the superscripted ratio* that ratio after transformation; with the aim of keeping the candidate box area unchanged:

h' = sqrt(h · w · ratio*), w' = sqrt(h · w / ratio*)

The candidate box loss is calculated as follows. L_reg represents the regression prediction loss:

L_reg = Σ_i smooth_L1(t_i - t_i*)

where smooth_L1 is defined as:

smooth_L1(x) = 0.5·x^2 if |x| < 1, |x| - 0.5 otherwise

t_i represents the predicted values of the regression parameters and t_i* the true values, with t_i = (x_i - x_a)/w_a, where x_i and x_a respectively represent the candidate box predicted coordinate and the anchor box coordinate, and w_a represents the width of the anchor box.
(6) And when the candidate frame loss meets the expected requirement, determining to generate training model parameters.
(7) And carrying out scale transformation processing on the image to be tested, adopting MASK R-CNN as a basic deep learning framework, supporting a convolutional neural network required by ship identification by the framework, and utilizing a ResNet-101 extractor to carry out feature extraction.
(8) Perform region-of-interest processing on the image features to be detected, input the resulting feature matrix into the pooling layer to obtain the fully connected layer, and carry out detection and identification on it; further, perform mask prediction, category prediction and bounding box prediction respectively, and calculate the prediction losses:
wherein L_cls represents the classification loss:

L_cls = -(1/N) · Σ_i p_i* · log(p_i)

p_i represents the predicted probability of the i-th class target, and p_i* can be regarded as the one-hot code of the true target type, which is 1 only for the correct target class and 0 otherwise; the classification loss accounts for a smaller proportion of the total loss.

The regression prediction loss is calculated in the same way as the candidate box loss above, specifically:

L_reg = Σ_i smooth_L1(t_i - t_i*), where smooth_L1(x) = 0.5·x^2 if |x| < 1 and |x| - 0.5 otherwise.

t_i represents the predicted values of the regression parameters and t_i* the true values, with t_i = (x_i - x_a)/w_a, where x_i and x_a respectively represent the candidate box predicted coordinate and the anchor box coordinate, and w_a represents the width of the anchor box.
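The region-of-interest optimization highlighted by the invention replaces the coordinate rounding of plain pooling with bilinear interpolation. A minimal sketch of sampling a feature map at a continuous coordinate (our own helper, not the patent's code):

```python
import math

def bilinear_sample(feat, x, y):
    """Sample feature map `feat` (a list of rows) at continuous (x, y)
    by bilinear interpolation, avoiding coordinate rounding."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    x1 = min(x0 + 1, len(feat[0]) - 1)
    y1 = min(y0 + 1, len(feat) - 1)
    dx, dy = x - x0, y - y0
    top = feat[y0][x0] * (1 - dx) + feat[y0][x1] * dx
    bottom = feat[y1][x0] * (1 - dx) + feat[y1][x1] * dx
    return top * (1 - dy) + bottom * dy
```

Averaging a few such samples inside each output bin is the core of the RoI Align idea that the bilinear interpolation strategy refers to.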
(9) For each feature layer, screen the top-ranked targets according to prediction probability with a non-maximum suppression strategy: first traverse all prediction boxes and delete candidate boxes that cross the image boundary, then delete a number of small targets according to the non-maximum principle. The loss of the candidate box extraction network includes the classification loss, the bounding box regression loss and the mask loss. The classification loss measures the difference between the class judgment of the candidate box and the true result; since the classification output can be regarded as a one-hot code, it can be calculated as the average cross entropy. The boundary loss is calculated as the difference between the predicted and true regression parameters. The mask loss is a class-based binary cross-entropy loss.
(10) And when the detection requirement is met, outputting the pixel coordinates of the rectangular frame of the ship target to be detected in the image.
Fig. 2 is a diagram illustrating a detection effect of a small ship target in an embodiment of the present invention, which shows that the method of the present invention can effectively detect and identify the small ship target; FIG. 3 is a diagram illustrating the detection effect of the method according to the present invention on dense targets, which shows that the detection of dense marine vessel targets can be effectively realized by the non-maximum suppression strategy in the method according to the present invention;
Fig. 4 shows the detection effect of the method of the present invention under complex sea conditions; compared with other methods, the method can accurately detect ship targets under complex sea conditions without sea-land segmentation. Table 1 compares the detection results of the method of the present invention with those of Faster R-CNN and YOLO-V3, showing that the mean average precision of the method of the present invention is superior to both.
FIG. 5 is a schematic diagram of the scale transformation of the present invention. In the scale transformation, an acceptable image range is given as input; each image is scaled into that range at a fixed ratio, then the largest height and width are found and each image is padded to that height and width, completing the scale transformation. Specifically, the shorter of the image width and height is selected first and compared with the target size, and the image is scaled by that ratio; if the whole scaled image lies within the acceptable range it is output directly, otherwise the longer side is used to compute the scaling ratio instead.
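The two-stage rule described above (scale by the short side, fall back to the long side if the result would exceed the acceptable range, then pad each image to the batch maximum) can be sketched as follows; the target sizes used are illustrative assumptions:

```python
def scaled_size(w, h, short_target, long_max):
    """Scale so the short side reaches short_target; if the long side
    would then exceed long_max, scale by the long side instead."""
    scale = short_target / min(w, h)
    if max(w, h) * scale > long_max:
        scale = long_max / max(w, h)
    return round(w * scale), round(h * scale)

def pad_to_batch_max(sizes):
    """Common padded size: the largest width and height in the batch."""
    max_w = max(w for w, _ in sizes)
    max_h = max(h for _, h in sizes)
    return max_w, max_h
```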
The above description is only exemplary of the present invention and should not be taken as limiting the scope of the present invention, and any modifications, equivalents, improvements and the like that are within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (8)
1. A marine ship target identification method based on deep learning is characterized by comprising the following steps:
a, constructing a training set by using a known satellite remote sensing image data set of a marine ship, calibrating a ship target in an image to enable a target frame to reach pixel-level precision, and simultaneously carrying out scale transformation on an input image to enable the size of the image to be in the same range, so that features are convenient to extract;
b, constructing an image target recognition framework for deep learning, determining a feature extractor on the basis of adopting a convolutional neural network as a basic calculation unit, and extracting features of the training data set constructed in the step A;
c, carrying out target classification, position regression and mask prediction on the features obtained in the step B;
d, generating a candidate frame on the basis of the step C, screening the candidate frame by combining the real target markers of the training set, and calculating the loss of the candidate frame to enable the loss to be smaller than a preset threshold value;
step E, when the loss of the candidate frame in the step D is less than a preset threshold value, generating a training model parameter to obtain a training model;
step F, performing scale transformation processing on the remote sensing image data set to be tested to facilitate feature extraction;
step G, inputting the data set to be tested processed in the step F into the image target identification framework constructed in the step B, and performing feature extraction;
step H, performing region-of-interest processing on the features extracted in the step G, inputting the result into the model generated by training in the step E, and further performing full convolution operations and flattened full connection;
step I, respectively performing classification prediction, regression prediction and mask prediction on the result of the step H, and calculating the prediction loss;
step J, performing non-maximum suppression on the result of the step I for further screening;
and K, carrying out unified scale transformation on the result of the step J, and outputting the result as a final result.
2. The marine vessel target identification method based on deep learning of claim 1, wherein: in the step A, the data set is calibrated with high-precision masks and consists of satellite remote sensing images, and the ships involved comprise small ships, medium ships and large ships; the small ships comprise small yachts, fishing boats and sailing ships, the medium ships comprise cargo ships, large cruise ships and warships, and the large ships comprise aircraft carriers. In the scale transformation, an acceptable image range is given as input, each image is scaled into that range at a fixed ratio, then the largest height and width are found and each image is padded to that height and width, completing the scale transformation; specifically, the shorter of the image width and height is selected first and compared with the target size, the image is scaled by that ratio, and if the whole scaled image lies within the acceptable range it is output directly, otherwise the longer side is used to compute the scaling ratio instead.
3. The marine vessel target identification method based on deep learning of claim 1, wherein: in the step B, MASK R-CNN is used as the basic deep learning framework, which supports the convolutional neural network required for ship identification, and positive and negative samples are constructed using an intersection-over-union (IoU) threshold and used as input for sample identification prediction.
4. The marine vessel target identification method based on deep learning of claim 1, wherein: in the step C, the feature extractor adopts a ResNet-101 extractor, the extraction of the ship target feature layer in the image is realized through the candidate frame extraction network, and the target score and the regression parameter of the target boundary frame are predicted.
5. The marine vessel target identification method based on deep learning of claim 1, wherein: in the step C, using the target score predicted by the feature extractor and the regression parameters of the target bounding box, the extent of the original image corresponding to each pixel of the feature matrix is obtained by dividing the size of the image by the size of the feature matrix, and the scaling ratio of the incoming image is thereby determined.
6. The marine vessel target identification method based on deep learning of claim 1, wherein: in the step D, the candidate frame generation method includes:
wherein h and w represent the height and width of the input image, ratio represents the ratio of the height to the width of the incoming image, the superscripted ratio* represents that ratio after transformation, and h' and w' are the height and width of the candidate box after the scale change; with the aim of keeping the candidate box area unchanged:

h' = sqrt(h · w · ratio*), w' = sqrt(h · w / ratio*)

the specific calculation method of the candidate box loss is as follows, where L_reg represents the regression prediction loss:

L_reg = Σ_i smooth_L1(t_i - t_i*), with smooth_L1(x) = 0.5·x^2 if |x| < 1 and |x| - 0.5 otherwise;

t_i represents the predicted values of the regression parameters and t_i* the true values, with t_i = (x_i - x_a)/w_a, where x_i and x_a respectively represent the candidate box predicted coordinate and the anchor box coordinate, and w_a represents the width of the anchor box; the anchor box is a result box obtained by dividing the feature map and is the basis for generating candidate boxes for different regions.
7. The marine vessel target identification method based on deep learning of claim 1, wherein: in the step I, the loss calculation methods of classification prediction, regression prediction and mask prediction are respectively as follows:
wherein L_cls represents the classification loss:

L_cls = -(1/N) · Σ_i p_i* · log(p_i)

p_i represents the predicted probability of the i-th class target, and p_i* is regarded as the one-hot code of the true target type, which is 1 only for the correct target class and 0 otherwise; the classification loss accounts for a smaller proportion of the total loss;

the regression prediction loss is calculated in the same way as the candidate box loss in the step D, specifically:

L_reg = Σ_i smooth_L1(t_i - t_i*), with smooth_L1(x) = 0.5·x^2 if |x| < 1 and |x| - 0.5 otherwise;

t_i represents the predicted values of the regression parameters and t_i* the true values, with t_i = (x_i - x_a)/w_a, where x_i and x_a respectively represent the candidate box predicted coordinate and the anchor box coordinate, and w_a represents the width of the anchor box.
8. The marine vessel target identification method based on deep learning of claim 1, wherein: in the step J, for each feature layer, the top-ranked targets are screened according to prediction probability with a non-maximum suppression strategy; the specific process is to traverse all prediction boxes, delete candidate boxes that cross the image boundary, and then delete some small targets according to the non-maximum principle; the loss of the candidate box extraction network comprises the classification loss, the bounding box regression loss and the mask loss; the classification loss measures the difference between the class judgment of the candidate box and the true result, and since the classification output can be regarded as a one-hot code, it can be calculated as the average cross entropy; the boundary loss is calculated as the difference between the predicted and true regression parameters; the mask loss is a class-based binary cross-entropy loss.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110899130.3A CN113344148A (en) | 2021-08-06 | 2021-08-06 | Marine ship target identification method based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110899130.3A CN113344148A (en) | 2021-08-06 | 2021-08-06 | Marine ship target identification method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113344148A true CN113344148A (en) | 2021-09-03 |
Family
ID=77481021
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110899130.3A Pending CN113344148A (en) | 2021-08-06 | 2021-08-06 | Marine ship target identification method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113344148A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107871124A (en) * | 2017-11-15 | 2018-04-03 | 陕西师范大学 | Remote sensing image target detection method based on a deep neural network |
CN108960143A (en) * | 2018-07-04 | 2018-12-07 | 北京航空航天大学 | Deep learning method for ship detection in high-resolution visible-light remote sensing images |
CN109766830A (en) * | 2019-01-09 | 2019-05-17 | 深圳市芯鹏智能信息有限公司 | Ship target identification system and method based on artificial intelligence image processing |
CN111563473A (en) * | 2020-05-18 | 2020-08-21 | 电子科技大学 | Remote sensing ship identification method based on dense feature fusion and pixel level attention |
CN111723748A (en) * | 2020-06-22 | 2020-09-29 | 电子科技大学 | Infrared remote sensing image ship detection method |
CN111985376A (en) * | 2020-08-13 | 2020-11-24 | 湖北富瑞尔科技有限公司 | Remote sensing image ship contour extraction method based on deep learning |
US20210073573A1 (en) * | 2018-11-15 | 2021-03-11 | Shanghai Advanced Avionics Co., Ltd. | Ship identity recognition method based on fusion of ais data and video data |
CN112802005A (en) * | 2021-02-07 | 2021-05-14 | 安徽工业大学 | Automobile surface scratch detection method based on improved Mask RCNN |
- 2021-08-06: Application CN202110899130.3A filed in China (publication CN113344148A); legal status: Pending
Non-Patent Citations (2)
Title |
---|
凌晨 et al.: "Remote sensing image processing technology based on the Mask R-CNN algorithm and its application", 《计算机科学》 (Computer Science) * |
吴金亮 et al.: "Research on ship target detection based on Mask R-CNN", 《无线电工程》 (Radio Engineering) * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114359720A (en) * | 2021-12-23 | 2022-04-15 | 湖南国科轩宇信息科技有限公司 | Marine target detection method, system and device based on satellite optical image |
CN114359720B (en) * | 2021-12-23 | 2024-04-26 | 湖南国科轩宇信息科技有限公司 | Marine target detection method, system and device based on satellite optical image |
CN116051548A (en) * | 2023-03-14 | 2023-05-02 | 中国铁塔股份有限公司 | Positioning method and device |
CN116051548B (en) * | 2023-03-14 | 2023-08-11 | 中国铁塔股份有限公司 | Positioning method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108961235B (en) | Defective insulator identification method based on YOLOv3 network and particle filter algorithm | |
CN113469177B (en) | Deep learning-based drainage pipeline defect detection method and system | |
CN107871119B (en) | Target detection method based on target space knowledge and two-stage prediction learning | |
CN111027511B (en) | Remote sensing image ship detection method based on region of interest block extraction | |
CN103400156B (en) | High-resolution SAR image ship detection method based on CFAR and sparse representation | |
CN109740460B (en) | Optical remote sensing image ship detection method based on depth residual error dense network | |
CN111091095B (en) | Method for detecting ship target in remote sensing image | |
CN108460382A (en) | Remote sensing image Ship Detection based on deep learning single step detector | |
CN113569667A (en) | Inland ship target identification method and system based on lightweight neural network model | |
CN112052817A (en) | Improved YOLOv3 model side-scan sonar sunken ship target automatic identification method based on transfer learning | |
CN107492094A (en) | UAV-based visual detection method for high-voltage line insulators | |
WO2018000252A1 (en) | Oceanic background modelling and restraining method and system for high-resolution remote sensing oceanic image | |
CN114627052A (en) | Infrared image air leakage and liquid leakage detection method and system based on deep learning | |
CN113538331A (en) | Metal surface damage target detection and identification method, device, equipment and storage medium | |
CN109829423A (en) | Infrared imaging detection method for ice-covered lakes | |
CN113486819A (en) | Ship target detection method based on YOLOv4 algorithm | |
CN113344148A (en) | Marine ship target identification method based on deep learning | |
CN113469097B (en) | Multi-camera real-time detection method for water surface floaters based on SSD network | |
CN116740528A (en) | Shadow feature-based side-scan sonar image target detection method and system | |
CN116665095B (en) | Method and system for detecting motion ship, storage medium and electronic equipment | |
Zhang et al. | Nearshore vessel detection based on Scene-mask R-CNN in remote sensing image | |
CN114565824B (en) | Single-stage rotating ship detection method based on full convolution network | |
CN113420594A (en) | SAR image ship detection method based on improved Faster R-CNN | |
Zou et al. | Maritime target detection of intelligent ship based on faster R-CNN | |
CN117994666A (en) | Sea ice identification method combining residual error network and edge detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 2021-09-03 |