CN115641507B - Remote sensing image small-scale surface target detection method based on self-adaptive multi-level fusion - Google Patents
- Publication number
- CN115641507B (application CN202211387533.0A)
- Authority
- CN
- China
- Prior art keywords: level, fusion, target, remote sensing image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
The invention discloses a method for detecting small-scale surface targets in remote sensing images based on adaptive multi-level fusion, comprising the following steps. Step 1: extract shallow and deep multi-level feature maps of the input image with a backbone feature extraction network, at downsampling factors of 4, 8, 16 and 32. Step 2: fuse the features of the different downsampling levels from Step 1 with a multi-level feature extraction architecture using adaptive fusion weights. Step 3: predict target positions and category information from the fused high-resolution feature layers at downsampling factors 4 and 8 to obtain the final detection result. The method effectively fuses semantic and structural information across levels, improves the network's feature extraction and localization capability for small-scale targets, and reduces the interference of false-alarm sources in the scene, thereby achieving small-scale target detection in remote sensing images with a high detection rate and a low false-alarm rate.
Description
Technical Field
The invention belongs to the technical field of target detection and recognition, relates to a method for detecting small-scale surface targets in remote sensing images, and particularly relates to a detection method based on adaptive multi-level feature fusion.
Background
Remote sensing image target detection is a key technology in the field of remote sensing image interpretation. By extracting features that distinguish targets from the background in aerospace remote sensing data, it enables effective classification and accurate localization of regions of interest or target instances, and plays a key role in military and civil applications such as timely rescue at sea, intelligent traffic management, and real-time regional monitoring. The continuous improvement of remote sensing image resolution makes it possible to detect and identify man-made objects of smaller scale, such as ships and vehicles.
Early detection methods required hand-designed image feature extraction operators for each target type; common features for small-scale targets such as vehicles include geometric contour features, texture edge features, and symmetry features. However, hand-crafted features are easily affected by factors such as illumination, scale, and color, and their capability to characterize targets is limited, so they suit only target detection in specific scenes. With increasing hardware computing power and rapidly growing data samples, neural-network-based detection methods can adaptively mine image features and learn target features that are more robust and more strongly characterizing. However, owing to the special overhead imaging mode of remote sensing, the constraints of imaging resolution, and imaging-link degradation, detail information such as geometric texture and edge contours of small-scale targets (ships, vehicles) is lost in the image, while complex and diverse scenes contain many false-alarm sources whose shape and texture resemble the targets. These factors all increase the difficulty of extracting effective features of small-scale remote sensing targets. Therefore, fast and accurate small-scale target detection methods that closely match practical application requirements need to be developed.
Disclosure of Invention
Aiming at the difficulties of small-scale targets in remote sensing images, namely small geometric size, weak texture features, and many suspected-target false-alarm sources in complex scenes, the invention provides a remote sensing image small-scale target detection method based on adaptive multi-level fusion. The method effectively fuses semantic and structural information across levels, improves the network's feature extraction and localization capability for small-scale targets, and reduces the interference of false-alarm sources on target detection in the scene, thereby achieving small-scale target detection in remote sensing images with a high detection rate and a low false-alarm rate, and effectively supporting weak and small target detection in complex scenes.
The invention aims at realizing the following technical scheme:
A remote sensing image small-scale surface target detection method based on self-adaptive multi-level fusion comprises the following steps:
Step 1: extracting shallow and deep multi-level feature images of an input image by using a trunk feature extraction network, wherein the downsampling levels are respectively 4 times, 8 times, 16 times and 32 times;
Step 2: the multi-level feature extraction architecture of the self-adaptive fusion weight is used for realizing the fusion of different downsampling series features in the step 1;
step 3: and predicting the target position and the category information by selecting the high-resolution characteristic layers with the downsampling progression of 4 times and 8 times after fusion to obtain a final detection result.
Compared with the prior art, the invention has the following advantages:
(1) The invention provides a method for detecting small-scale surface targets in remote sensing images that effectively addresses the weak feature extraction and poor detection performance caused by the small geometric scale and deficient texture features of such targets, and by the many false-alarm sources in complex scenes whose textures resemble the targets. It can be applied to small-scale surface target detection under complex ground-object interference and dense target arrangement.
(2) The invention provides an adaptive weighted-fusion multi-level feature extraction architecture that effectively fuses semantic and structural information across levels, improves the network's feature extraction and localization capability for small-scale targets, and effectively reduces the interference of false-alarm sources on target detection in the scene.
(3) By adopting the strategy of predicting only from the high-resolution feature layers, the method effectively improves detection performance for densely arranged targets.
Drawings
FIG. 1 is a flow chart of small-scale surface target detection in remote sensing images based on adaptive multi-level fusion;
FIG. 2 shows the ground truth and detection results for small-scale ship targets;
FIG. 3 shows the ground truth and detection results for small-scale vehicle targets.
Detailed Description
The invention is further described below with reference to the accompanying drawings, but is not limited to the following description; any modification or equivalent substitution that does not depart from the spirit and scope of the invention shall be included in the scope of protection of the invention.
The invention provides a remote sensing image small-scale surface target detection method based on adaptive multi-level fusion, as shown in FIG. 1; the specific implementation steps are as follows:
Step 1: and extracting a shallow layer and a deep layer multi-level feature map of the input image by using a trunk feature extraction network, wherein the downsampling levels are respectively 4 times, 8 times, 16 times and 32 times. The method comprises the following specific steps:
And extracting shallow and deep multi-level feature images in the remote sensing image through a trunk feature extraction network for the input remote sensing image, and respectively extracting feature images with downsampling levels of 4, 8, 16 and 32 times, namely P2, P3, P4 and P5 levels.
In this step, the optional backbone feature extraction networks include, but are not limited to, typical networks such as CSPDarknet, ResNet, and DenseNet.
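As a rough sketch of this step, the following NumPy snippet computes the (H, W, C) shapes of the four feature levels for a 512×512 input. The channel counts are illustrative assumptions for the sketch, not values specified by the patent.

```python
import numpy as np

def backbone_feature_shapes(h, w, channels=(64, 128, 256, 512)):
    """Return the (H, W, C) shapes of the P2-P5 feature maps for an
    h x w input at downsampling factors 4, 8, 16 and 32.
    The channel counts are illustrative placeholders."""
    factors = (4, 8, 16, 32)
    return {f"P{i + 2}": (h // f, w // f, c)
            for i, (f, c) in enumerate(zip(factors, channels))}

# P2 is the 4x-downsampled (highest-resolution) level, P5 the 32x level.
shapes = backbone_feature_shapes(512, 512)
```

For a 512×512 input this gives P2 at 128×128 down to P5 at 16×16, matching the 4/8/16/32 downsampling factors described above.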
Step 2: and (3) realizing the fusion of different downsampling progression characteristics in the step (1) by using a multi-level characteristic extraction framework of the self-adaptive fusion weight. The method comprises the following specific steps:
The fusion conveys high-level semantic information and low-level positioning information through top-down and bottom-up paths, respectively. In the top-down fusion path, the nth-level feature map Fn, whose length, width and channel number are (H, W, C), must be fused with the deeper (n+1)th-level feature map Fn+1, whose length, width and channel number are (H/2, W/2, C).
Step 2-1: the method utilizes CARAFE up-sampling operators to realize up-sampling of deep level feature graphs so as to realize aggregation of semantic information, and comprises the following specific steps:
In order to keep the length and width of the shallow-level feature map Fn and the deep-level feature map Fn+1 consistent, Fn+1 is upsampled with the CARAFE operator to aggregate semantic information, yielding the high-level feature map FnH, whose length, width and channel number are (H, W, C). Alternative upsampling methods include, but are not limited to, interpolation, deconvolution, and the CARAFE operator.
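A minimal stand-in for this upsampling step: CARAFE itself predicts content-aware reassembly kernels, which is beyond a short sketch, so the snippet below uses nearest-neighbour interpolation, one of the alternatives the text lists, to double the length and width of the deeper map.

```python
import numpy as np

def upsample_nearest(f, scale=2):
    """Nearest-neighbour upsampling of an (H, W, C) feature map by
    an integer factor: each pixel is repeated scale x scale times.
    A simple stand-in for the CARAFE operator."""
    return f.repeat(scale, axis=0).repeat(scale, axis=1)

f_deep = np.random.rand(8, 8, 16)    # F_{n+1}: (H/2, W/2, C)
f_high = upsample_nearest(f_deep)    # FnH:    (H, W, C)
```

After upsampling, `f_high` has the same spatial size as the shallow map Fn, so the two can be combined element-wise in the following steps.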
Step 2-2: shallow layer level feature map correction based on deep layer level feature supervision comprises the following specific steps:
Because low-level feature extraction in a convolutional neural network is insufficient, the low-level features contain considerable noise that hinders the fusion of effective features. With the high-level feature map FnH as supervision, the shallow-level feature map Fn is fine-tuned: the noise-rich low-level features are corrected by the high-level semantic features. Processing FnH with the Sigmoid nonlinear activation function yields a correction weight map, which is applied to the shallow-level feature map Fn to obtain the low-level feature map FnL, calculated by the following formula:
FnL = σ(FnH) ⊙ Fn;
where σ(·) is the Sigmoid activation function and ⊙ denotes element-wise multiplication.
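The correction can be sketched as follows, assuming it takes the form FnL = σ(FnH) ⊙ Fn as the description states (the Sigmoid of the upsampled deep map gates the noisy shallow map element-wise); shapes and values are toy data.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def correct_low_level(f_n, f_high):
    """FnL = sigmoid(FnH) * Fn: the upsampled deep-level map,
    squashed to (0, 1), weights the noisy shallow map point-wise."""
    return sigmoid(f_high) * f_n

f_n = np.random.rand(16, 16, 8)     # shallow-level map Fn
f_high = np.random.rand(16, 16, 8)  # upsampled deep-level map FnH
f_low = correct_low_level(f_n, f_high)
```

Because the Sigmoid output lies in (0, 1), the correction can only attenuate shallow-map responses, which is how high-level semantics suppress low-level noise here.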
Step 2-3: and the point-to-point fusion weight is adaptively generated based on the significance of the space and channel dimension characteristics and a broadcasting mechanism, so that the fusion of deep and shallow layer characteristics is realized.
After the upsampled high-level features and the corrected low-level features are obtained, the two can be fused. However, the feature maps of the various channels at different levels do not contribute equally to characterizing small-scale targets. If features of different levels are fused directly, channels rich in scene semantic information may mask the features of small-scale targets as interference. Following the idea of feature saliency, the importance of the features during training and inference is adaptively adjusted by assigning different weights to the features of each level. The specific steps are as follows:
step 2-3-1: the high-level and low-level feature graphs are spliced along the channel dimension, and the specific steps are as follows:
Concatenate the low-level feature map FnL and the high-level feature map FnH along the channel dimension to obtain the concatenated feature map FnC, used for the subsequent generation of channel saliency and spatial saliency.
Step 2-3-2: the generation channel is remarkable based on the spatial feature map pooling and the full-connection neural network, and the method comprises the following specific steps:
First aggregate the global information in FnC by spatial pooling, then learn the dependency among channels with a fully connected neural network, denoted as:
ωc = fFCN(AvePool(FnC));
where AvePool(·) is average pooling, fFCN(·) is the fully connected layer, and ωc is the generated channel saliency, of dimension (1, C).
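A toy sketch of the channel-saliency computation with randomly initialised weights. The single fully connected layer mapping the 2C concatenated channels down to C is an assumption for illustration; the patent only specifies a fully connected network applied after average pooling.

```python
import numpy as np

def channel_saliency(f_cat, w_fc):
    """omega_c = f_FCN(AvePool(FnC)): spatial average pooling collapses
    each channel to one scalar, then a fully connected layer (here a
    single random matrix) models cross-channel dependencies."""
    pooled = f_cat.mean(axis=(0, 1))   # (2C,) global average pool
    return pooled @ w_fc               # (C,) channel saliency

rng = np.random.default_rng(0)
f_cat = rng.random((16, 16, 32))       # concatenated map FnC, 2C = 32
w_fc = rng.standard_normal((32, 16))   # toy FC layer: 2C -> C
omega_c = channel_saliency(f_cat, w_fc)
```

The result has one value per channel of the fused map, ready to be broadcast against the spatial saliency in the following steps.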
Step 2-3-3: the spatial saliency is generated based on a 1×1 convolution and a 3×3 convolution, and the specific steps are as follows:
Adjust the channels of FnC with a 1×1 convolution, then aggregate channel-dimension information with a 3×3 convolution to generate the spatial saliency, denoted as:
ωs = conv3×3(conv1×1(FnC));
where conv3×3 and conv1×1 denote the 3×3 convolution and the 1×1 convolution, respectively, and ωs is the generated spatial saliency, of dimension (H, W, 1).
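A toy spatial-saliency sketch: a 1×1 convolution is a per-pixel linear mixing of channels, and the 3×3 convolution then collapses the result to a single-channel (H, W) map. The kernel sizes follow the text; all weights and the intermediate channel count are random placeholders.

```python
import numpy as np

def conv1x1(f, w):
    """1x1 convolution: a per-pixel linear map over channels."""
    return f @ w                       # (H, W, C_in) @ (C_in, C_out)

def conv3x3_single(f, k):
    """3x3 convolution with zero padding, producing one output channel."""
    h, w, _ = f.shape
    padded = np.pad(f, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            # shifted window times the (C,) kernel slice, summed over channels
            out += (padded[dy:dy + h, dx:dx + w] * k[dy, dx]).sum(axis=-1)
    return out

rng = np.random.default_rng(1)
f_cat = rng.random((16, 16, 32))        # FnC with 2C = 32 channels
w1 = rng.standard_normal((32, 8))       # 1x1 conv: channel reduction (toy)
k3 = rng.standard_normal((3, 3, 8))     # 3x3 conv -> 1 output channel
omega_s = conv3x3_single(conv1x1(f_cat, w1), k3)   # (H, W) spatial saliency
```

The single-channel output assigns each spatial position its own saliency score, complementary to the per-channel ωc above.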
Step 2-3-4: the method generates point-by-point fusion weight based on a broadcasting mechanism and comprises the following specific steps:
Although the channel saliency and the spatial saliency differ in dimension, adding them through a broadcasting mechanism generates a point-wise saliency of dimension (H, W, C); the fusion weight ω is then obtained with the Sigmoid nonlinear activation function, denoted as:
ω=σ(ωc+ωs);
step 2-3-5: based on the self-adaptively generated fusion weight, the deep level features and the shallow level features are weighted and fused, and the specific steps are as follows:
Based on the generated adaptive fusion weight, fuse the low-level feature map FnL and the upsampled high-level feature map FnH to obtain the nth-level feature fusion result FnF; the fusion formula is as follows:
FnF = ω ⊙ FnL + (1 − ω) ⊙ FnH.
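Putting the pieces together under stated assumptions: the channel and spatial saliencies are broadcast-added and squashed with a Sigmoid to give ω, and the two maps are blended point-wise. The complementary (1 − ω) weighting of FnH is an assumption of this sketch; the source states only that ω weights the fusion.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_fuse(f_low, f_high, omega_c, omega_s):
    """Broadcast-add the (1, 1, C) channel saliency and the (H, W, 1)
    spatial saliency, squash with a Sigmoid to get the point-wise
    fusion weight omega, then blend the two maps:
    FnF = omega * FnL + (1 - omega) * FnH  (complementary weighting
    is an assumption, not stated in the source)."""
    omega = sigmoid(omega_c + omega_s)   # broadcasts to (H, W, C)
    return omega * f_low + (1.0 - omega) * f_high

rng = np.random.default_rng(2)
f_low = rng.random((16, 16, 8))          # corrected shallow map FnL
f_high = rng.random((16, 16, 8))         # upsampled deep map FnH
omega_c = rng.standard_normal((1, 1, 8))   # channel saliency
omega_s = rng.standard_normal((16, 16, 1)) # spatial saliency
f_fused = adaptive_fuse(f_low, f_high, omega_c, omega_s)
```

With complementary weights the output is a point-wise convex combination, so every fused value lies between the corresponding low- and high-level values.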
step 3: and predicting the target position and the category information by selecting the high-resolution characteristic layers with the downsampling progression of 4 times and 8 times after fusion to obtain a final detection result.
In order to improve detection performance in dense scenes, only the high-resolution prediction layers P2 and P3, with downsampling factors 4 and 8, are used to predict target positions and category information. Prediction methods include, but are not limited to, the target information regression method of YOLO.
Fig. 2 and fig. 3 show the ground truth and detection results for small-scale ship targets and small-scale vehicle targets, respectively, in complex scenes with false-alarm-source interference and dense arrangement. As can be seen from the figures, the method detects all targets without any false alarm.
Claims (7)
1. The method for detecting the target of the small-scale surface of the remote sensing image based on the self-adaptive multi-level fusion is characterized by comprising the following steps of:
Step 1: extracting shallow and deep multi-level feature images of an input image by using a trunk feature extraction network, wherein the downsampling levels are respectively 4 times, 8 times, 16 times and 32 times;
Step 2: the multi-level feature extraction architecture of the self-adaptive fusion weight is used for realizing the fusion of different downsampling series features in the step 1, and the specific steps are as follows:
fusing high-level semantic and low-level positioning information conveyed through top-down and bottom-up paths, respectively, wherein in the top-down fusion path, the nth-level feature map Fn, whose length, width and channel number are (H, W, C), is fused with the deeper (n+1)th-level feature map Fn+1, whose length, width and channel number are (H/2, W/2, C);
Step 2-1: the method utilizes CARAFE up-sampling operators to realize up-sampling of deep level feature graphs so as to realize aggregation of semantic information, and comprises the following specific steps:
upsampling Fn+1 to aggregate semantic information, obtaining the high-level feature map FnH, whose length, width and channel number are (H, W, C);
step 2-2: shallow layer level feature map correction based on deep layer level feature supervision comprises the following specific steps:
with the high-level feature map FnH as supervision, fine-tuning the shallow-level feature map Fn, wherein the noise-rich low-level features are corrected by high-level semantic features: processing FnH with the Sigmoid nonlinear activation function yields a correction weight map, which is applied to the shallow-level feature map Fn to obtain the low-level feature map FnL;
Step 2-3: the point-to-point fusion weight is adaptively generated based on the significance of the space and channel dimension characteristics and a broadcasting mechanism, and the fusion of deep and shallow layer characteristics is realized, and the specific steps are as follows:
step 2-3-1: the high-level and low-level feature graphs are spliced along the channel dimension, and the specific steps are as follows:
concatenating the low-level feature map FnL and the high-level feature map FnH along the channel dimension to obtain the concatenated feature map FnC for the subsequent generation of channel saliency and spatial saliency;
step 2-3-2: the generation channel is remarkable based on the spatial feature map pooling and the full-connection neural network, and the method comprises the following specific steps:
firstly, aggregating the global information in FnC through spatial pooling, and then learning the dependency among channels through a fully connected neural network, denoted as:
ωc=fFCN(AvePool(FnC));
wherein AvePool(·) is average pooling, fFCN(·) is the fully connected layer, and ωc is the generated channel saliency, of dimension (1, C);
step 2-3-3: the spatial saliency is generated based on a 1×1 convolution and a 3×3 convolution, and the specific steps are as follows:
adjusting the channels of FnC through a 1×1 convolution, and aggregating channel-dimension information through a 3×3 convolution to generate the spatial saliency, denoted as:
ωs=conv3×3(conv1×1(FnC));
Wherein conv 3×3 and conv 1×1 represent 3×3 convolution and 1×1 convolution, respectively, ω s is the spatial significance of the generation, and the dimension is (H, W, 1);
step 2-3-4: the method generates point-by-point fusion weight based on a broadcasting mechanism and comprises the following specific steps:
The dimensions of the channel saliency and the space saliency are different, but point-by-point saliency with the dimensions of (H, W, C) can be generated by adding through a broadcasting mechanism, and then fusion weight omega can be obtained by utilizing a Sigmoid nonlinear activation function and is recorded as:
ω=σ(ωc+ωs);
step 2-3-5: based on the self-adaptively generated fusion weight, the deep level features and the shallow level features are weighted and fused, and the specific steps are as follows:
based on the generated adaptive fusion weight, fusing the low-level feature map FnL and the upsampled high-level feature map FnH to obtain the nth-level feature fusion result FnF;
Step 3: and predicting the target position and the category information by selecting the high-resolution characteristic layers with the downsampling progression of 4 times and 8 times after fusion to obtain a final detection result.
2. The method for detecting the target of the small-scale surface of the remote sensing image based on the self-adaptive multi-level fusion according to claim 1, wherein the specific steps of the step1 are as follows:
For the input remote sensing image, shallow and deep multi-level feature maps are extracted through the backbone feature extraction network, yielding feature maps at downsampling factors of 4, 8, 16 and 32, namely the P2, P3, P4 and P5 levels.
3. The method for detecting the target of the small-scale surface of the remote sensing image based on the adaptive multi-level fusion according to claim 1 or 2, wherein the backbone feature extraction network is CSPDarknet, ResNet or DenseNet.
4. The method for detecting the target of the small-scale surface of the remote sensing image based on the adaptive multi-level fusion according to claim 1, wherein in the step 2-1, the upsampling method is interpolation, deconvolution, or the CARAFE operator.
5. The method for detecting the target of the small-scale surface of the remote sensing image based on the adaptive multi-level fusion according to claim 1, wherein the low-level feature map FnL is calculated from the following formula:
FnL = σ(FnH) ⊙ Fn;
where σ(·) is the Sigmoid activation function and ⊙ denotes element-wise multiplication.
6. The method for detecting the target of the small-scale surface of the remote sensing image based on the adaptive multi-level fusion according to claim 1, wherein in the step 2-3-5, the low-level feature map FnL and the upsampled high-level feature map FnH are fused by the following formula:
FnF = ω ⊙ FnL + (1 − ω) ⊙ FnH.
7. The method for detecting the target of the small-scale surface of the remote sensing image based on the adaptive multi-level fusion according to claim 1, wherein in the step 3, the prediction method is a target information regression method of YOLO.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211387533.0A CN115641507B (en) | 2022-11-07 | 2022-11-07 | Remote sensing image small-scale surface target detection method based on self-adaptive multi-level fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115641507A CN115641507A (en) | 2023-01-24 |
CN115641507B true CN115641507B (en) | 2024-11-05 |
Family
ID=84949084
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117152546B (en) * | 2023-10-31 | 2024-01-26 | 江西师范大学 | Remote sensing scene classification method, system, storage medium and electronic equipment |
CN117994506B (en) * | 2024-04-07 | 2024-08-20 | 厦门大学 | Remote sensing image saliency target detection method based on dynamic knowledge integration |
CN118155106B (en) * | 2024-05-13 | 2024-08-09 | 齐鲁空天信息研究院 | Unmanned aerial vehicle pedestrian detection method, system, equipment and medium for mountain rescue |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111738110A (en) * | 2020-06-10 | 2020-10-02 | 杭州电子科技大学 | Remote sensing image vehicle target detection method based on multi-scale attention mechanism |
CN112365501A (en) * | 2021-01-13 | 2021-02-12 | 南京理工大学 | Weldment contour detection algorithm based on convolutional neural network |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108647668A (en) * | 2018-05-21 | 2018-10-12 | 北京亮亮视野科技有限公司 | The construction method of multiple dimensioned lightweight Face datection model and the method for detecting human face based on the model |
CN110956094B (en) * | 2019-11-09 | 2023-12-01 | 北京工业大学 | RGB-D multi-mode fusion personnel detection method based on asymmetric double-flow network |
CN112347859B (en) * | 2020-10-15 | 2024-05-24 | 北京交通大学 | Method for detecting significance target of optical remote sensing image |
CN115187820A (en) * | 2021-04-06 | 2022-10-14 | 中国科学院深圳先进技术研究院 | Light-weight target detection method, device, equipment and storage medium |
CN113723172A (en) * | 2021-06-11 | 2021-11-30 | 南京航空航天大学 | Fusion multi-level feature target detection method for weak and small targets of remote sensing images |
CN114638836B (en) * | 2022-02-18 | 2024-04-30 | 湖北工业大学 | Urban street view segmentation method based on highly effective driving and multi-level feature fusion |
CN114708511B (en) * | 2022-06-01 | 2022-08-16 | 成都信息工程大学 | Remote sensing image target detection method based on multi-scale feature fusion and feature enhancement |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN115641507B (en) | Remote sensing image small-scale surface target detection method based on self-adaptive multi-level fusion | |
CN112818903B (en) | Small sample remote sensing image target detection method based on meta-learning and cooperative attention | |
CN114202696B (en) | SAR target detection method and device based on context vision and storage medium | |
CN110879959B (en) | Method and device for generating data set, and testing method and testing device using same | |
Xu et al. | Bridging the domain gap for multi-agent perception | |
CN108038445B (en) | SAR automatic target identification method based on multi-view deep learning framework | |
CN111723693B (en) | Crowd counting method based on small sample learning | |
CN113468978B (en) | Fine granularity car body color classification method, device and equipment based on deep learning | |
CN113095277B (en) | Unmanned aerial vehicle aerial photography vehicle detection method based on target space distribution characteristics | |
Park et al. | Advanced wildfire detection using generative adversarial network-based augmented datasets and weakly supervised object localization | |
CN107247960A (en) | Method, object identification method and the automobile of image zooming-out specification area | |
CN117422971A (en) | Bimodal target detection method and system based on cross-modal attention mechanism fusion | |
CN114170531B (en) | Infrared image target detection method and device based on difficult sample transfer learning | |
CN114067142B (en) | Method for realizing scene structure prediction, target detection and lane-level positioning | |
Chu et al. | Change detection of remote sensing image based on deep neural networks | |
CN111666801A (en) | Large-scene SAR image ship target detection method | |
CN109977968A (en) | A kind of SAR change detecting method of deep learning classification and predicting | |
CN117011728A (en) | Unmanned aerial vehicle aerial photographing target detection method based on improved YOLOv7 | |
CN112084897A (en) | Rapid traffic large-scene vehicle target detection method of GS-SSD | |
Aghayan‐Mashhady et al. | Road damage detection with bounding box and generative adversarial networks based augmentation methods | |
CN115424072A (en) | Unmanned aerial vehicle defense method based on detection technology | |
Zhang et al. | Lateral distance detection model based on convolutional neural network | |
Fang et al. | A ViTDet based dual-source fusion object detection method of UAV | |
Xia et al. | Abnormal event detection method in surveillance video based on temporal CNN and sparse optical flow | |
CN111160282A (en) | Traffic light detection method based on binary Yolov3 network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |