
CN114863258B - Method for detecting small target based on visual angle conversion in sea-sky-line scene - Google Patents

Method for detecting small target based on visual angle conversion in sea-sky-line scene

Info

Publication number
CN114863258B
CN114863258B · CN202210786036.1A · CN202210786036A
Authority
CN
China
Prior art keywords
sea
coordinate information
image
target
sky
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210786036.1A
Other languages
Chinese (zh)
Other versions
CN114863258A (en)
Inventor
李非桃
冉欢欢
李和伦
陈益
王丹
褚俊波
陈春
李毅捷
赵瑞欣
莫桥波
王逸凡
李东晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Desheng Xinda Brain Intelligence Technology Co ltd
Original Assignee
Sichuan Desheng Xinda Brain Intelligence Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Desheng Xinda Brain Intelligence Technology Co ltd filed Critical Sichuan Desheng Xinda Brain Intelligence Technology Co ltd
Priority to CN202210786036.1A priority Critical patent/CN114863258B/en
Publication of CN114863258A publication Critical patent/CN114863258A/en
Application granted granted Critical
Publication of CN114863258B publication Critical patent/CN114863258B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267: Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/778: Active pattern-learning, e.g. online learning of image or video features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07: Target detection
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Radar Systems Or Details Thereof (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for detecting small targets based on visual angle conversion in a sea-sky-line scene, which comprises the following steps: acquiring an image to be detected; identifying the sea-sky-line; framing an effective rectangular area of the image to be detected; dividing the effective rectangular area into N² image blocks, where an overlapping area exists between every two adjacent image blocks; arranging the N² image blocks in N rows and N columns to obtain a recombined image; detecting sea surface targets in the recombined image with a deep learning network model, obtaining first coordinate information of each sea surface target, and combining the first coordinate information of all sea surface targets into a first coordinate information set; and converting the first coordinate information of each sea surface target into second coordinate information of that target in the image to be detected, forming a second coordinate information set. By selecting the effective area and recombining the image blocks, the invention improves the detection accuracy for small target ships and small target buoys near the sea-sky-line.

Description

Method for detecting small target based on visual angle conversion in sea-sky-line scene
Technical Field
The invention belongs to the technical field of target identification, and particularly relates to a method for detecting a small target based on visual angle conversion in a sea-sky-line scene.
Background
At present, target detection technology plays an increasingly important role in many fields and has gradually matured. Most mature target detection methods are based on deep learning. In general, a deep learning method scales the input image to a fixed size in the preprocessing stage and then detects targets in the scaled image with a loaded model, the widely used YOLO and SSD models being typical examples.
In the field of sea surface target detection, small target vessels, small target buoys and the like near the sea-sky-line appear extremely small in the image because of the long observation distance. When a conventional deep learning network model is used to detect such small targets near the sea-sky-line, the following problems arise:
1. The input image is a long-distance sea-sky-line image with a high resolution, for example 5472 × 3648 pixels, yet the pixels of a small target near the sea-sky-line occupy only a tiny fraction of the whole image; most pixels belong to regions of no interest such as waves and sky, so most of the computation of a conventional algorithm is spent on irrelevant regions.
2. In the preprocessing stage of the deep learning network model, the high-resolution image is compressed to a resolution of 640 × 640 pixels, or even 416 × 416 pixels to reduce the computational load; small targets near the sea-sky-line become even smaller and more interfering pixels are introduced, causing detection failure.
3. In a complex sea-sky-line scene, airplanes, birds and the like in the sky near the sea-sky-line strongly interfere with the recognition of small target ships, small target buoys and the like near the sea-sky-line, and this interference cannot be eliminated.
Disclosure of Invention
The invention aims to overcome one or more defects in the prior art and provides a method for detecting a small target based on view angle conversion in a sea-sky-line scene.
The purpose of the invention is realized by the following technical scheme:
the method for detecting the small target based on the view angle conversion in the sea-sky-line scene specifically comprises the following steps:
acquiring an image to be detected;
identifying a sea-sky line in an image to be detected;
framing an effective rectangular area of the image to be detected according to the sea-sky-line;
transversely dividing the effective rectangular area into N² image blocks, wherein an overlapping area exists between every two adjacent image blocks, all image blocks have the same transverse width, and N is a positive integer greater than one;
arranging the N² image blocks in N rows and N columns to obtain a recombined image of the image to be detected;
detecting sea surface targets in the recombined image by using a pre-constructed deep learning network model, obtaining first coordinate information of each sea surface target, and combining the first coordinate information of each sea surface target into a first coordinate information set, wherein the first coordinate information is the coordinate information of the sea surface target in the recombined image;
and respectively converting the first coordinate information of each sea surface target into second coordinate information of the sea surface target, and combining the second coordinate information of each sea surface target into a second coordinate information set, wherein the second coordinate information is the coordinate information of the sea surface target in the image to be detected.
In a further improvement, after the step of respectively converting the first coordinate information of each sea surface target into the second coordinate information of the sea surface target and combining the second coordinate information of each sea surface target into the second coordinate information set, the method further includes the following steps:
and removing the repeated sea surface target coordinate information in the second coordinate information set.
In a further improvement, identifying the sea-sky-line in the image to be detected specifically includes:
calculating the vertical gradient of the image to be detected, and extracting to obtain edge features;
obtaining an edge straight line segment according to the edge characteristics;
screening the edge straight line segments according to a preset first threshold value to obtain target straight line segments;
aggregating the target straight line segments by adopting a preset clustering algorithm to obtain a sea-sky line segment set;
and fitting the sea-sky-line segment set by adopting a least square method to obtain the sea-sky-line.
In a further improvement, framing the effective rectangular area of the image to be detected according to the sea-sky-line specifically includes:
identifying the intersection point of the sea-sky-line with the left boundary line of the image to be detected and recording its coordinates [formula image not reproduced];
identifying the intersection point of the sea-sky-line with the right boundary line of the image to be detected and recording its coordinates [formula not reproduced];
calculating the coordinates of the center point of the sea-sky-line, i.e. the midpoint of the two intersection points, from the coordinates of those intersection points [formula not reproduced];
judging whether a preset condition on the sea-sky-line inclination holds [the inequality is a formula image not reproduced]; if it holds, determining the first distance parameter d1 and the second distance parameter d2 by a first value mode, and if not, by a second value mode [both value formulas are images not reproduced], where W is the width of the image to be detected;
calculating the coordinates of the upper boundary line from the first distance parameter d1 and the coordinates of the lower boundary line from the second distance parameter d2 [formulas not reproduced];
and forming the effective rectangular area of the image to be detected from the upper boundary line, the lower boundary line, the left boundary line of the image to be detected and the right boundary line of the image to be detected.
In a further improvement, the first coordinate information of the sea surface object includes coordinate information of a center point of the sea surface object, height information of an object frame for framing the sea surface object, and width information of the object frame.
In a further improvement, the transverse widths of the overlapping regions are all first preset intervals, and the first preset intervals are positive integers.
In a further improvement, calculating the vertical gradient of the image to be detected specifically includes:
calculating the vertical gradient of the image to be detected with a kernel operator whose weight value is denoted k [the kernel matrix and the constraint on k are formula images not reproduced].
In a further improvement, the method further includes, after removing the repeated sea surface target coordinate information in the second coordinate information set, the following steps:
and removing the coordinate information of the interference target in the second coordinate information set to obtain a final coordinate information set of the sea surface target in the image to be detected, wherein the interference target comprises an airplane and a bird in the sky.
In a further improvement, removing the coordinate information of the interference targets from the second coordinate information set specifically includes:
calculating, based on the second coordinate information set, the vertical coordinate of the lower-right corner of each sea surface target and recording it [notation is a formula image not reproduced];
judging whether this vertical coordinate satisfies a preset condition with respect to the sea-sky-line [the inequality is a formula image not reproduced]; if it does, keeping the coordinate information of that sea surface target in the second coordinate information set, and if not, removing it from the second coordinate information set;
wherein [formula not reproduced] and m is the number of sea surface targets remaining after the repeated sea surface target coordinate information has been removed from the second coordinate information set.
The invention has the following beneficial effects:
1) The sea-sky-line in the image to be detected is fitted, an effective rectangular area is framed based on the sea-sky-line and transversely divided into several image blocks; to avoid cutting a sea surface target in two, adjacent image blocks partially overlap. All image blocks are then recombined, the recombined image is fed into a pre-constructed deep learning network model for target detection, and the detected coordinates of each sea surface target are converted back to its coordinate position in the image to be detected.
By selecting the effective area, the interference of most invalid information is eliminated; at the same time, compared with the original image to be detected, the recombined image increases the resolution of the sea surface targets fed into the deep learning network model. This improves the accuracy of sea surface target detection and accomplishes the detection of small target ships, small target buoys and the like near the sea-sky-line in complex sea-sky-line scenes.
2) The vertical gradient of the image to be detected is calculated with a kernel operator and combined with least-squares fitting to obtain an accurate fit of the sea-sky-line.
3) The precision of sea surface target detection is further improved by removing repeated targets and interference-target coordinate information from the second coordinate information set.
Drawings
FIG. 1 is a flow chart of a method for detecting small targets based on perspective transformation in a sea-sky-line scenario;
FIG. 2 is a schematic diagram of a division of an effective rectangular area;
fig. 3 is a schematic diagram of a recombined image obtained by recombining image blocks.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the following embodiments, and it should be understood that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without inventive effort based on the embodiments of the present invention, are within the scope of the present invention.
As shown in fig. 1, the present embodiment provides a method for detecting small targets based on view angle conversion in a sea-sky-line scene, used for detecting small sea surface targets such as small ships and small buoys near the sea-sky-line, and specifically comprising the following steps:
and S1, acquiring an image to be detected.
And S2, identifying the sea-sky-line in the image to be detected. In a common embodiment, before the sea-sky-line in the image to be detected is identified, filtering and denoising processing is further performed on the image to be detected.
In the present embodiment, S2 includes the following sub-steps:
and a substep S21 of calculating the vertical gradient of the image to be detected and extracting to obtain edge features.
And a substep S22 of obtaining an edge straight line segment according to the edge characteristics.
And a substep S23 of screening the edge straight line segment according to a preset first threshold value to obtain a target straight line segment.
And a substep S24, aggregating the target straight line segments with a preset k-means clustering algorithm to obtain a sea-sky-line segment set.
And a substep S25 of fitting the sea-sky-line segment set by adopting a least square method to obtain the sea-sky-line.
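A minimal sketch of step S2 under stated assumptions is given below. The patent's exact kernel matrix, first threshold and clustering procedure only survive as image placeholders, so the kernel here is an assumed Prewitt/Sobel-style vertical operator parameterised by the weight k, and the clustering of candidate segments is replaced for brevity by a simple angle and length screen before the least-squares fit named in substep S25; all function names, thresholds and parameters are illustrative, not the patent's.

```python
# Sketch of S2: vertical gradient -> edge segments -> screening -> least-squares fit.
import cv2
import numpy as np

def detect_sea_sky_line(gray: np.ndarray, k: float = 1.0,
                        min_len: int = 100, max_angle_deg: float = 10.0):
    """Return (slope, intercept) of the fitted sea-sky-line y = slope * x + intercept."""
    # Assumed vertical-gradient kernel parameterised by the weight value k.
    kernel = np.array([[-1, -k, -1],
                       [ 0,  0,  0],
                       [ 1,  k,  1]], dtype=np.float32)
    grad = cv2.filter2D(gray.astype(np.float32), -1, kernel)
    mag = np.abs(grad)
    edges = (mag > mag.mean() + 3 * mag.std()).astype(np.uint8) * 255

    # Extract straight edge segments and keep long, near-horizontal ones
    # (a stand-in for screening with the patent's "first threshold").
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                               minLineLength=min_len, maxLineGap=10)
    pts = []
    if segments is not None:
        for x1, y1, x2, y2 in segments[:, 0]:
            angle = np.degrees(np.arctan2(abs(y2 - y1), abs(x2 - x1) + 1e-6))
            if angle <= max_angle_deg:
                pts += [(x1, y1), (x2, y2)]
    if len(pts) < 2:
        raise RuntimeError("no sea-sky-line candidates found")

    # Least-squares fit over the retained segment endpoints (substep S25).
    xs, ys = np.array(pts, dtype=np.float32).T
    slope, intercept = np.polyfit(xs, ys, 1)
    return float(slope), float(intercept)
```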
And S3, framing the effective rectangular area of the image to be detected according to the sea-sky-line.
In the present embodiment, S3 includes the following sub-steps:
Substep S31, identifying the intersection point of the sea-sky-line with the left boundary line of the image to be detected and recording its coordinates [formula image not reproduced].
Substep S32, identifying the intersection point of the sea-sky-line with the right boundary line of the image to be detected and recording its coordinates [formula not reproduced].
Substep S33, calculating the coordinates of the center point of the sea-sky-line, i.e. the midpoint of the two intersection points, from the coordinates of those intersection points [formula not reproduced].
Substep S34, judging whether the preset inclination condition holds [the inequality is a formula image not reproduced]; if it holds, determining the first distance parameter d1 and the second distance parameter d2 by the first value mode, and if not, by the second value mode [both value formulas are images not reproduced], where W is the width of the image to be detected. This step selects the value mode of d1 and d2 according to the inclination angle of the sea-sky-line: when the sea-sky-line is level or its inclination angle is less than 10 degrees, the second value mode is used, and when the inclination angle is greater than or equal to 10 degrees, the first value mode is used. In the first value mode, the calculation coefficients 1.2 and 1.5 are empirical preferences and can be adjusted to the specific situation; sea surface targets close to the lens appear relatively large, so the distance parameter on the ocean side is set larger, while targets far from the lens appear relatively small, so the distance parameter on the sky side is set somewhat smaller. The calculation factor in the second distance parameter d2 is 1.5 and that in the first distance parameter d1 is 1.2.
Substep S35, calculating the coordinates of the upper boundary line from the first distance parameter d1 and the coordinates of the lower boundary line from the second distance parameter d2 [formulas not reproduced].
Substep S36, forming the effective rectangular area of the image to be detected from the upper boundary line, the lower boundary line, the left boundary line of the image to be detected and the right boundary line of the image to be detected.
In addition, the left boundary line and the right boundary line of the image to be detected have fixed coordinates [formulas not reproduced]; the values preferred in this embodiment are likewise given by formula images that are not reproduced here.
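The sketch below illustrates S3 under stated assumptions. Because the defining formulas for d1 and d2 only survive as image placeholders, they are left as plain pixel arguments here, and the interpretation of the effective area as a band of d1 pixels above and d2 pixels below the sea-sky-line center follows the surrounding text but is an assumption; all names are illustrative.

```python
# Sketch of S3: intersect the fitted line with the image borders and clip a band around it.
def effective_region(slope: float, intercept: float, width: int, height: int,
                     d1: int, d2: int):
    """Return (x_left, y_top, x_right, y_bottom) of the effective rectangular area."""
    y_left = intercept                              # intersection with the left border (x = 0)
    y_right = slope * (width - 1) + intercept       # intersection with the right border
    y_center = 0.5 * (y_left + y_right)             # ordinate of the sea-sky-line center point

    y_top = int(max(0, round(y_center - d1)))       # d1 above the line (sky side, smaller band)
    y_bottom = int(min(height - 1, round(y_center + d2)))  # d2 below (ocean side, larger band)
    return 0, y_top, width - 1, y_bottom
```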
S4, transversely dividing the effective rectangular area into N² image blocks, where an overlapping area of transverse width equal to a first preset interval d3 exists between any two adjacent image blocks and all image blocks have the same transverse width; N is a positive integer greater than one and the first preset interval d3 is a positive integer.
The transverse width w1 of an image block is calculated as follows:
calculate a first intermediate parameter [formula image not reproduced], where N is a positive integer;
calculate the transverse width of an image block [formula not reproduced]; w1 is a positive integer, specifically the positive integer closest to and greater than the computed value of w1.
N is chosen for the transverse division of the effective rectangular area as follows: if the first preset condition holds [inequality not reproduced], N takes the value 2; if the second condition holds, N takes the value 3; if the third condition holds, N takes the value 4; and if the fourth condition holds, N takes the value 5.
And S5, arranging the N² image blocks in N rows and N columns to obtain a recombined image of the image to be detected.
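A sketch of steps S4 and S5 follows. The patent's exact formulas for w1 and for choosing N only survive as image placeholders, so the ceiling-based block width below is an assumption that merely guarantees full coverage of the effective area with overlap d3; the block offsets are kept for the later coordinate conversion.

```python
# Sketch of S4-S5: split the effective band into N*N horizontally overlapping blocks
# of equal width and tile them into an N x N grid (row-major order, an assumption).
import math
import numpy as np

def reassemble(effective: np.ndarray, n: int, d3: int):
    """effective: H x W (x channels) crop of the effective rectangular area."""
    h, w = effective.shape[:2]
    blocks = n * n
    w1 = math.ceil((w + (blocks - 1) * d3) / blocks)   # equal block width with overlap d3
    step = w1 - d3
    tiles, offsets = [], []
    for i in range(blocks):
        x0 = max(0, min(i * step, w - w1))             # clamp the last block to the border
        tiles.append(effective[:, x0:x0 + w1])
        offsets.append(x0)                             # remembered for coordinate conversion
    rows = [np.hstack(tiles[r * n:(r + 1) * n]) for r in range(n)]
    return np.vstack(rows), offsets, w1
```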
S6, inputting the recombined image into a pre-constructed deep learning network model for target detection, identifying the sea surface targets in the recombined image, obtaining the first coordinate information of each sea surface target, and combining the obtained first coordinate information into a first coordinate information set. The first coordinate information is the coordinate information of a sea surface target in the recombined image. In this embodiment the deep learning network model is trained to detect sea surface targets. The first coordinate information of a sea surface target comprises the coordinates of its center point, the height of the target frame that frames it and the width of that target frame.
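The patent does not fix a particular network, so the sketch below uses an off-the-shelf YOLO detector from the ultralytics package purely as a stand-in (the package, the weight file name and the pre-trained weights are assumptions, not the patent's model) to collect first coordinate information as center point plus box width and height.

```python
# Sketch of S6: run a detector on the recombined image and gather (cx, cy, bw, bh) boxes.
from ultralytics import YOLO

def detect_first_coordinates(recombined, weights: str = "yolov8n.pt"):
    model = YOLO(weights)                      # stand-in for the pre-constructed model
    result = model(recombined)[0]
    # xywh gives center point plus box width/height, i.e. the first coordinate information.
    return result.boxes.xywh.cpu().numpy().tolist()
```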
And S7, converting the first coordinate information of each sea surface target into second coordinate information of the sea surface target respectively, and combining the second coordinate information of each sea surface target into a second coordinate information set. And the second coordinate information is the coordinate information of the sea surface target in the image to be detected. And the second coordinate information of the sea surface target comprises the coordinate information of the central point of the sea surface target in the image to be detected, the height information of a target frame for framing the sea surface target and the width information of the target frame.
Preferably, the following steps are also included after S7:
and S8, removing the repeated sea surface target coordinate information in the second coordinate information set. Because two adjacent image blocks have an overlapping area with the transverse width of the first preset interval d3, after the target detection is performed through the deep learning network model and then the coordinate conversion is performed, a plurality of sea surface targets with the same coordinate information may appear, and the precision of the sea surface target detection is improved by screening out the repeated sea surface target coordinate information.
Preferably, the vertical gradient of the image to be detected is calculated as follows:
the vertical gradient of the image to be detected is computed with a kernel operator whose weight value is denoted k [the kernel matrix and the constraint on k are formula images not reproduced]. The larger the weight value k, the larger the gradient value in the vertical direction; k takes the value 1 in this embodiment.
Preferably, the following steps are also included after S8:
and S9, removing the coordinate information of the interference target in the second coordinate information set to obtain a final coordinate information set of the sea surface target in the image to be detected. The interference target includes an airplane, a bird and the like in the sky near the sea-sky-line.
Removing the coordinate information of the interference targets from the second coordinate information set specifically comprises the following sub-steps:
S91, calculating, based on the second coordinate information set, the vertical coordinate of the lower-right corner of each sea surface target and recording it [notation is a formula image not reproduced];
S92, judging whether this vertical coordinate satisfies the preset condition with respect to the sea-sky-line [the inequality is a formula image not reproduced]; if it does, keeping the coordinate information of that sea surface target, and if not, removing it from the second coordinate information set;
wherein [formula not reproduced] and m is the number of sea surface target coordinates remaining after the repeated coordinate information has been removed from the second coordinate information set, i.e. the number of remaining sea surface targets. Screening out the coordinate information of interference targets further improves the accuracy of sea surface target detection.
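The exact inequality in S92 did not survive extraction. Assuming, per the stated purpose of the step, that a detection is kept only when its lower-right corner lies on or below the fitted sea-sky-line (sky objects such as aircraft and birds end up above the line), a sketch is:

```python
# Sketch of S9: keep detections whose lower-right corner is below the sea-sky-line.
# Image y grows downward, so "below the line" means y >= slope * x + intercept.
def remove_sky_interference(second_set, slope, intercept):
    kept = []
    for cx, cy, w, h in second_set:
        x_br = cx + w / 2.0
        y_br = cy + h / 2.0                       # lower-right corner ordinate
        if y_br >= slope * x_br + intercept:      # on or below the sea-sky-line: sea surface
            kept.append((cx, cy, w, h))
    return kept
```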
Based on the types of small target vessels and small target buoys to be detected, the first preset interval d3 is preferably greater than half the maximum width of such sea surface targets.
With reference to fig. 2 and fig. 3, the following describes, for the case where the effective rectangular area is transversely divided into four parts, the generation of the recombined image (four image blocks arranged in two rows and two columns) and the calculation that converts each piece of coordinate information in the first coordinate information set to the corresponding coordinate position in the image to be detected.
Generation of the recombined image:
the four image blocks obtained by transversely dividing the effective rectangular area are numbered, from left to right, the first, second, third and fourth image block;
the first image block is placed in the first row and first column of the arrangement, the second image block in the second row and first column, the third image block in the first row and second column, and the fourth image block in the second row and second column, giving the recombined image.
Conversion of each piece of coordinate information in the first coordinate information set to the corresponding coordinate position in the image to be detected:
the coordinate information of a sea surface target detected in the first image block of the recombined image is converted to its coordinate position in the image to be detected by [formula image not reproduced];
for a target detected in the second image block, by [formulas not reproduced];
for a target detected in the third image block, by [formulas not reproduced];
for a target detected in the fourth image block, by [formula not reproduced];
where [notation not reproduced] denotes the coordinates of the sea surface target in the recombined image, [notation not reproduced] denotes the coordinates of that target in the image to be detected, and [formula not reproduced].
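The per-block conversion formulas above only survive as image placeholders, so the sketch below reconstructs the bookkeeping for a general N x N grid from the block offsets remembered by the reassemble sketch earlier. The row-major block numbering matches that sketch and is an assumption (the patent's own 2 x 2 example may order the blocks differently); x_eff and y_eff are the top-left corner of the effective rectangular area in the image to be detected, with x_eff typically 0 since the area spans the full image width.

```python
# Sketch of the first-to-second coordinate conversion for an N x N recombined image.
def to_original(cx, cy, w, h, n, w1, block_h, offsets, x_eff, y_eff):
    row, col = int(cy // block_h), int(cx // w1)      # grid cell containing the box center
    block_idx = row * n + col                         # row-major block numbering (assumed)
    x = x_eff + offsets[block_idx] + (cx - col * w1)  # undo the tiling, re-add block offset
    y = y_eff + (cy - row * block_h)
    return x, y, w, h                                 # second coordinate information
```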
the method for detecting the small target ship in the sea-sky-line scene adopts three means of selecting an effective rectangular area, transversely dividing an image to be detected and recombining image blocks, belongs to a conversion realization of a machine visual angle in deep learning, saves the operation amount of a deep learning network model, increases the resolution ratio of a sea surface small target in image data input into the deep learning network model, greatly improves the detection precision of the small target ship and a small target buoy near the sea-sky-line, overcomes the defects of the existing target detection scheme mentioned in the background technology, and has a larger application prospect.
The foregoing is illustrative of the preferred embodiments of this invention, and it is to be understood that the invention is not limited to the precise form disclosed herein and that various other combinations, modifications, and environments may be resorted to, falling within the scope of the concept as disclosed herein, either as described above or as apparent to those skilled in the relevant art. And that modifications and variations may be effected by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (8)

1. A method for detecting small targets based on visual angle conversion in a sea-sky-line scene, characterized by comprising the following steps:
acquiring an image to be detected;
identifying a sea-sky line in an image to be detected;
framing an effective rectangular area of the image to be detected according to the sea-sky-line;
transversely dividing the effective rectangular area into N² image blocks, wherein an overlapping area exists between every two adjacent image blocks, all image blocks have the same transverse width, and N is a positive integer greater than one;
arranging the N² image blocks in N rows and N columns to obtain a recombined image of the image to be detected;
detecting sea surface targets in the recombined image by using a pre-constructed deep learning network model, obtaining first coordinate information of each sea surface target, and combining the first coordinate information of each sea surface target into a first coordinate information set, wherein the first coordinate information is the coordinate information of the sea surface target in the recombined image;
respectively converting the first coordinate information of each sea surface target into second coordinate information of the sea surface target, and combining the second coordinate information of each sea surface target into a second coordinate information set, wherein the second coordinate information is the coordinate information of the sea surface target in the image to be detected;
the effective rectangular area of waiting to detect the image is elected according to sea antenna frame specifically includes:
identifying the intersection point of the sea-sky-line and the left boundary line of the image to be detected, and recording the coordinates of the intersection point as
Figure 681555DEST_PATH_IMAGE001
Identifying the intersection point of the sea-sky-line and the right boundary line of the image to be detected, and recording the coordinates of the intersection point as
Figure 878182DEST_PATH_IMAGE002
Calculating the coordinates of the center point of the sea-sky-line according to the coordinates of the intersection point of the sea-sky-line and the left boundary line of the image to be detected and the coordinates of the intersection point of the sea-sky-line and the right boundary line of the image to be detected
Figure 570194DEST_PATH_IMAGE003
Wherein
Figure 244889DEST_PATH_IMAGE004
Figure 909220DEST_PATH_IMAGE005
Judgment equation
Figure 999449DEST_PATH_IMAGE006
Whether the result is true; if yes, determining a first distance parameter
Figure 178758DEST_PATH_IMAGE007
And a second distance parameter
Figure 391564DEST_PATH_IMAGE008
If not, determining a first distance parameter
Figure 972718DEST_PATH_IMAGE009
And a second distance parameter
Figure 245568DEST_PATH_IMAGE010
Wherein W is the width of the image to be detected;
the coordinates of the upper boundary line are calculated from the first distance parameter d1
Figure 115435DEST_PATH_IMAGE011
And calculating the coordinates of the lower boundary line based on the second distance parameter d2
Figure 131932DEST_PATH_IMAGE012
And forming an effective rectangular area of the image to be detected by the upper boundary line, the lower boundary line, the left boundary line of the image to be detected and the right boundary line of the image to be detected.
2. The method for detecting small targets based on view angle transformation in sea-sky-line scene as claimed in claim 1, wherein the step of converting the first coordinate information of each sea-surface target into the second coordinate information of the sea-surface target and combining the second coordinate information of each sea-surface target into the second coordinate information set further comprises the following steps:
and removing the repeated sea surface target coordinate information in the second coordinate information set.
3. The method for detecting the small target based on the view angle conversion in the sea-sky-line scene as claimed in claim 1, wherein the identifying the sea-sky-line in the image to be detected specifically comprises:
calculating the vertical gradient of the image to be detected, and extracting to obtain edge features;
obtaining an edge straight line segment according to the edge characteristics;
screening the edge straight line segments according to a preset first threshold value to obtain target straight line segments;
aggregating the target straight line segments by adopting a preset clustering algorithm to obtain a sea-sky line segment set;
and fitting the sea-sky-line segment set by adopting a least square method to obtain the sea-sky-line.
4. The method for detecting the small target based on the view angle conversion in the sea-sky-line scene according to claim 1, wherein the first coordinate information of the sea-surface target includes coordinate information of a center point of the sea-surface target, height information of a target frame for framing the sea-surface target, and width information of the target frame.
5. The method for detecting the small target based on the view angle conversion in the sea-sky-line scene as claimed in claim 1, wherein the horizontal widths of the overlapping regions are all a first preset interval, and the first preset interval is a positive integer.
6. The method for detecting the small target based on the view angle conversion in the sea-sky-line scene as claimed in claim 3, wherein calculating the vertical gradient of the image to be detected specifically comprises:
calculating the vertical gradient of the image to be detected with a kernel operator whose weight value is denoted k [the kernel matrix and the constraint on k are formula images not reproduced].
7. The method for detecting the small target based on the view angle transformation in the sea-sky-line scene according to claim 2, wherein the step of removing the repeated sea surface target coordinate information in the second coordinate information set further comprises the following steps:
and removing the coordinate information of the interference target in the second coordinate information set to obtain a final coordinate information set of the sea surface target in the image to be detected, wherein the interference target comprises an airplane and a bird in the sky.
8. The method for detecting the small target based on the perspective transformation in the sea-sky-line scene according to claim 7, wherein removing the coordinate information of the interfering targets from the second coordinate information set specifically comprises:
calculating, based on the second coordinate information set, the vertical coordinate of the lower-right corner of each sea surface target and recording it [notation is a formula image not reproduced];
judging whether this vertical coordinate satisfies the preset condition [the inequality is a formula image not reproduced]; if it does, keeping the coordinate information of that sea surface target in the second coordinate information set, and if not, removing it from the second coordinate information set;
wherein [formula not reproduced] and m is the number of sea surface targets remaining after the repeated sea surface target coordinate information has been removed from the second coordinate information set.
CN202210786036.1A 2022-07-06 2022-07-06 Method for detecting small target based on visual angle conversion in sea-sky-line scene Active CN114863258B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210786036.1A CN114863258B (en) 2022-07-06 2022-07-06 Method for detecting small target based on visual angle conversion in sea-sky-line scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210786036.1A CN114863258B (en) 2022-07-06 2022-07-06 Method for detecting small target based on visual angle conversion in sea-sky-line scene

Publications (2)

Publication Number Publication Date
CN114863258A CN114863258A (en) 2022-08-05
CN114863258B true CN114863258B (en) 2022-09-06

Family

ID=82625993

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210786036.1A Active CN114863258B (en) 2022-07-06 2022-07-06 Method for detecting small target based on visual angle conversion in sea-sky-line scene

Country Status (1)

Country Link
CN (1) CN114863258B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115049907B (en) * 2022-08-17 2022-10-28 四川迪晟新达类脑智能技术有限公司 FPGA-based YOLOV4 target detection network implementation method
CN115830140B (en) * 2022-12-12 2024-08-20 中国人民解放军海军工程大学 Offshore short-range photoelectric monitoring method, system, medium, equipment and terminal
CN118314331B (en) * 2024-06-06 2024-09-13 湖南大学 Sea surface scene-oriented target detection method and system
CN118334099B (en) * 2024-06-12 2024-09-24 湖南大学 Open sea surface scene offshore target depth estimation method and system thereof

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108846844A (en) * 2018-04-13 2018-11-20 上海大学 A kind of sea-surface target detection method based on sea horizon
CN111767856A (en) * 2020-06-29 2020-10-13 哈工程先进技术研究院(招远)有限公司 Infrared small target detection algorithm based on gray value statistical distribution model

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103679694B (en) * 2013-05-29 2016-06-29 哈尔滨工程大学 A kind of ship small targets detection method based on panoramic vision
CN104599273B (en) * 2015-01-22 2017-07-28 南京理工大学 Sea and sky background infrared small target detection method based on multi-scale wavelet crossing operation
CN108229342B (en) * 2017-12-18 2021-10-26 西南技术物理研究所 Automatic sea surface ship target detection method
CN111091024B (en) * 2018-10-23 2023-05-23 广州弘度信息科技有限公司 Small target filtering method and system based on video recognition result
CN110188696B (en) * 2019-05-31 2023-04-18 华南理工大学 Multi-source sensing method and system for unmanned surface equipment
US11132780B2 (en) * 2020-02-14 2021-09-28 Huawei Technologies Co., Ltd. Target detection method, training method, electronic device, and computer-readable medium
CN112258518B (en) * 2020-10-09 2022-05-03 国家海洋局南海调查技术中心(国家海洋局南海浮标中心) Sea-sky-line extraction method and device
CN112669332B (en) * 2020-12-28 2023-09-01 大连海事大学 Method for judging sea-sky conditions and detecting infrared targets based on bidirectional local maxima and peak value local singularities
CN113223000A (en) * 2021-04-14 2021-08-06 江苏省基础地理信息中心 Comprehensive method for improving small target segmentation precision
CN114494179A (en) * 2022-01-24 2022-05-13 深圳闪回科技有限公司 Mobile phone back damage point detection method and system based on image recognition

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108846844A (en) * 2018-04-13 2018-11-20 上海大学 A kind of sea-surface target detection method based on sea horizon
CN111767856A (en) * 2020-06-29 2020-10-13 哈工程先进技术研究院(招远)有限公司 Infrared small target detection algorithm based on gray value statistical distribution model

Also Published As

Publication number Publication date
CN114863258A (en) 2022-08-05

Similar Documents

Publication Publication Date Title
CN114863258B (en) Method for detecting small target based on visual angle conversion in sea-sky-line scene
CN108961235B (en) Defective insulator identification method based on YOLOv3 network and particle filter algorithm
CN108648161B (en) Binocular vision obstacle detection system and method of asymmetric kernel convolution neural network
CN108960135B (en) Dense ship target accurate detection method based on high-resolution remote sensing image
CN110084241B (en) Automatic ammeter reading method based on image recognition
CN105513064A (en) Image segmentation and adaptive weighting-based stereo matching method
CN106548153A (en) Video abnormality detection method based on graph structure under multi-scale transform
CN113435282B (en) Unmanned aerial vehicle image ear recognition method based on deep learning
CN106530271B (en) A kind of infrared image conspicuousness detection method
CN109087261A (en) Face antidote based on untethered acquisition scene
CN109886937A (en) Defects of insulator detection method based on super-pixel segmentation image recognition
CN111723464A (en) Typhoon elliptic wind field parametric simulation method based on remote sensing image characteristics
CN111598780A (en) Terrain adaptive interpolation filtering method suitable for airborne LiDAR point cloud
CN109754440A (en) A kind of shadow region detection method based on full convolutional network and average drifting
CN108460833A (en) A kind of information platform building traditional architecture digital protection and reparation based on BIM
CN116721228B (en) Building elevation extraction method and system based on low-density point cloud
CN105405138A (en) Water surface target tracking method based on saliency detection
CN107944497A (en) Image block method for measuring similarity based on principal component analysis
CN116758080A (en) Method and system for detecting screen printing defects of solar cell
CN117115359A (en) Multi-view power grid three-dimensional space data reconstruction method based on depth map fusion
CN106952262A (en) A kind of deck of boat analysis of Machining method based on stereoscopic vision
CN113658144A (en) Method, device, equipment and medium for determining pavement disease geometric information
CN113807238A (en) Visual measurement method for area of river surface floater
CN110322454B (en) High-resolution remote sensing image multi-scale segmentation optimization method based on spectrum difference maximization
CN116957935A (en) Side-scan sonar stripe image stitching method based on path line constraint

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant