CN108876855A - A kind of sea cucumber detection and binocular visual positioning method based on deep learning - Google Patents
A kind of sea cucumber detection and binocular visual positioning method based on deep learning
- Publication number: CN108876855A (application CN201810519615.3A)
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/73—Colour balance circuits, e.g. white balance circuits or colour temperature control
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Abstract
The invention proposes a sea cucumber detection and binocular vision localization method based on deep learning, suitable for underwater robots performing sea cucumber harvesting tasks on aquaculture seabeds. The method mainly comprises the following steps: calibrating the binocular camera to obtain the intrinsic and extrinsic parameters of the cameras; rectifying the binocular camera so that the imaging origins of the left and right views coincide, the optical axes of the two cameras are parallel, the left and right imaging planes are coplanar, and the epipolar lines are row-aligned; acquiring seabed image data with the calibrated binocular camera; enhancing the acquired image data with a dark channel prior algorithm based on white balance compensation; performing deep-learning-based sea cucumber target detection on the enhanced seabed images; and applying a binocular stereo feature point matching algorithm to the images, for which the two-dimensional regression box of the target has been obtained through image enhancement and deep learning, to obtain the three-dimensional localization coordinates of the target. The present invention achieves accurate localization of underwater sea cucumbers without requiring manual participation.
Description
Technical field
The invention belongs to the fields of machine vision and underwater target detection, and in particular relates to a new method for sea cucumber detection and localization based on deep learning.
Background art
In recent years, marine benthic organisms such as sea cucumbers, sea urchins, scallops and abalone have been highly valued for their nutritional value, worldwide demand for marine products has kept growing, and China's aquaculture fishery has continued to develop. Seabed harvesting technology is therefore becoming more and more important. At present the most common harvesting method is manual diving, which suffers from a high danger coefficient, long operation times and a high risk of bodily harm. Studying automatic detection methods for underwater marine biological targets addresses these problems of traditional harvesting technology, allows robots to replace humans in sea cucumber harvesting tasks, and also facilitates the development and utilization of marine resources. However, underwater images are blurred and color-distorted, so most underwater robots still need human assistance to complete harvesting operations, and the seabed catch rate is low. Therefore, in order to reduce human labor intensity and protect the delicate seabed ecological environment, developing a vision detection system that can automatically recognize sea cucumber targets is of great significance.
Deep learning can automatically learn to extract useful features of sea cucumbers from training data, making the features more abstract and more expressive, and its capacity for distributed and parallel computation is a great advantage. This patent takes the principles of deep learning as a guide and accomplishes the classification and three-dimensional localization of sea cucumbers. Aimed at the harvesting of marine products in complex seabed underwater environments, where traditional underwater target classification has low accuracy and three-dimensional localization has low time efficiency, it proposes a deep-learning-based seabed biological target detection method: the underwater image is first enhanced, deep learning is then used to classify the target and locate its two-dimensional regression box, and binocular localization is finally performed with the regression box as the region of interest, improving both the accuracy of target classification and the efficiency of localization. This patent can enrich the perception system of underwater robots and promote the development of fast underwater robots; replacing manual labor with artificial intelligence methods such as deep learning and with sophisticated robot technology to achieve accurate harvesting of precious underwater marine products is of great significance for protecting the seabed ecological environment and for safeguarding the health and personal safety of divers.
Since underwater sea cucumber images are hazy, low in contrast and color-degraded, and sea cucumber shapes are highly variable, traditional underwater detection techniques, which usually rely on manually screened and extracted features, suffer from poor sea cucumber classification accuracy and low three-dimensional localization time efficiency. Therefore, finding a target detection method that can both increase classification accuracy and shorten detection time has always been an important research direction in underwater vision. Deep learning has significant advantages in automatic feature extraction, and target detection based on deep learning can make up for the above shortcomings, with high practical value. The present invention therefore proposes a sea cucumber detection and binocular vision localization method based on deep learning.
Summary of the invention
The present invention is achieved by the following technical solutions:
A sea cucumber detection and binocular vision localization method based on deep learning, characterized in that it comprises the following steps:
(1) calibrating the binocular camera to obtain the intrinsic and extrinsic parameters of the cameras;
(2) rectifying the binocular camera so that the imaging origins of the left and right views coincide, the optical axes of the two cameras are parallel, the left and right imaging planes are coplanar, and the epipolar lines are row-aligned;
(3) acquiring seabed image data with the calibrated binocular camera;
(4) enhancing the collected underwater image data with a dark channel prior algorithm based on white balance compensation;
(5) performing deep-learning-based sea cucumber target detection on the enhanced seabed images, realizing target classification in the two-dimensional image and obtaining the regression box information of the target;
(6) applying a binocular stereo feature point matching algorithm to the images, for which the two-dimensional regression box of the target has been obtained through image enhancement and deep learning, to obtain the three-dimensional localization coordinates of the target.
The step (4) specifically includes:
(4.1) performing white balance compensation: at each pixel location x, the red channel Irc and the blue channel Ibc are compensated. The compensated red channel Irc is given by:

$$I_{rc}(x) = I_r(x) + \alpha\,(\bar I_g - \bar I_r)\,(1 - I_r(x))\,I_g(x)$$

where Ir and Ig denote the red and green channels of image I, $\bar I_r$ and $\bar I_g$ denote the mean values of Ir and Ig, and α is the constant 1. The compensated blue channel Ibc is given by:

$$I_{bc}(x) = I_b(x) + \alpha\,(\bar I_g - \bar I_b)\,(1 - I_b(x))\,I_g(x)$$

where Ib and Ig denote the blue and green channels of image I, $\bar I_g$ and $\bar I_b$ denote the mean values of Ig and Ib, and α is the constant 1;
(4.2) performing dark channel defogging;
(4.2.1) for a hazy image, computing its dark channel Idark:

$$I^{dark}(x) = \min_{y \in \Omega(x)} \Big( \min_{c \in \{r,g,b\}} I^c(y) \Big)$$

where Ic is one of the red, green and blue color channels of the image and Ω(x) is a window region centered at coordinate x;
(4.2.2) estimating the water vapor veil A: the pixels with the top 0.1% largest values in the dark channel are located, the corresponding points in the initial image I are found, the maximum pixel value of each channel in that region is obtained, and the average of these per-channel maxima is A;
(4.2.3) analyzing the transmission t of the hazy image to obtain an initial transmission map:

$$t(x) = 1 - \min_{y \in \Omega(x)} \Big( \min_{c} \frac{I^c(y)}{A^c} \Big)$$

where Ic is one of the red, green and blue color channels of the image and Ac is the water vapor veil value A of that channel;
(4.2.4) restoring the image with the known estimates: a threshold parameter t0 is set, and when t is less than t0, t = t0 is used, giving the image recovery formula:

$$J(x) = \frac{I(x) - A}{\max(t(x), t_0)} + A$$
The step (5) specifically includes:
(5.1) building a sea cucumber dataset from the data acquired in (3);
(5.2) performing data augmentation on the dataset established in (5.1);
(5.3) constructing the neural network;
(5.4) training the neural network built in (5.3) offline on the annotated dataset of (5.2);
(5.5) testing the trained model and predicting the target class and the regression box information of the target;
(5.6) removing redundant regression boxes by non-maximum suppression.
The step (6) specifically includes:
(6.1) feature point extraction, using the ORB descriptor:
(6.1.1) extracting FAST key points;
(6.1.2) computing BRIEF descriptors;
(6.2) performing feature point matching using the Euclidean distance between corresponding points computed by a brute-force feature matching method;
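As an illustrative sketch of step (6.2), brute-force nearest-neighbour matching can be written directly in NumPy. The descriptors and the ratio-test threshold below are assumptions for demonstration (real ORB descriptors are 256-bit binary strings, often compared with Hamming distance, though the claim specifies Euclidean distance):

```python
import numpy as np

def brute_force_match(desc_left, desc_right):
    """For each left descriptor, find the nearest right descriptor by
    Euclidean distance, with a ratio test against the second-nearest
    to reject ambiguous matches (Lowe-style threshold, assumed)."""
    matches = []
    for i, d in enumerate(desc_left):
        dists = np.linalg.norm(desc_right - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < 0.8 * dists[second]:
            matches.append((i, int(best)))
    return matches

# Toy descriptors: the two left descriptors are noisy copies of right rows 3 and 7.
rng = np.random.default_rng(1)
desc_right = rng.random((10, 32))
desc_left = desc_right[[3, 7]] + rng.normal(0.0, 0.01, (2, 32))
matches = brute_force_match(desc_left, desc_right)
print(matches)
```

In the full method this matching would be restricted to key points inside the regression box predicted by the network, which is what narrows the binocular search range.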
(6.3) performing three-dimensional target localization with the binocular camera:
Taking the upper-left corner of the left image as the coordinate origin, let the spatial point P(xc, yc, zc) project to x-axis coordinates Xleft and Xright in the left and right matched views; after epipolar rectification the y-axis coordinates are equal and denoted Y. With B and f the system parameters of the binocular stereo vision system obtained by calibration in water, where f is the camera focal length and B is the baseline distance, the three-dimensional coordinates are expressed as:

$$x_c = \frac{B\,X_{left}}{Disparity},\qquad y_c = \frac{B\,Y}{Disparity},\qquad z_c = \frac{B\,f}{Disparity}$$

where Disparity = Xleft − Xright is the positional difference between the matched point pair in the left and right images.
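The triangulation of step (6.3) reduces to a few lines once f, B and a row-aligned match are available. The focal length and baseline below are assumed demonstration values, not the patent's calibration results:

```python
import numpy as np

def triangulate(x_left, x_right, y, f, B):
    """Recover (x_c, y_c, z_c) from a row-aligned stereo match.
    Coordinates follow the text: pixel positions measured from the
    image upper-left corner, disparity d = X_left - X_right."""
    d = x_left - x_right
    if d <= 0:
        raise ValueError("disparity must be positive")
    return np.array([B * x_left / d, B * y / d, B * f / d])

# f = 700 px, B = 60 mm (assumed); disparity is 710 - 668 = 42 px.
P = triangulate(x_left=710.0, x_right=668.0, y=395.0, f=700.0, B=60.0)
print(P)  # depth z_c = 60 * 700 / 42 = 1000 mm
```

Because the disparity appears in every denominator, small matching errors at large depth produce large depth errors, which is one reason accurate feature matching matters.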
Compared with the prior art, the advantages of the present invention are as follows. During seabed sea cucumber harvesting, sea cucumber target detection is the core technology. For the problem that underwater imaging quality is far inferior to that above water, the fast white balance dark channel image enhancement technique proposed in this patent improves binocular image quality and thereby the accuracy of detection and localization. For the problem that traditional binocular matching takes a long time, the regression box obtained by deep learning is used as the region of interest to narrow the search range of binocular features, improving the speed of detection and localization. For the harvesting of marine products in complex seabed underwater environments, this patent improves on the low accuracy of traditional underwater target classification and the low time efficiency of three-dimensional localization.
Detailed description of the invention
Fig. 1 is the flow chart of the method of the present invention;
Fig. 2 is the flow chart of binocular calibration;
Fig. 3 is the flow chart of binocular rectification;
Fig. 4 is the flow chart of image enhancement by the dark channel prior algorithm with white balance compensation;
Fig. 5 shows the effect of the dark channel prior algorithm with white balance compensation and the histograms of the RGB color channels;
Fig. 6 is the schematic diagram of target detection by deep learning;
Fig. 7 shows the effect of data augmentation on the dataset;
Fig. 8 is the structure diagram of the neural network used for deep-learning target detection;
Fig. 9 is the flow chart of feature matching and three-dimensional target localization by binocular stereo vision;
Fig. 10 shows the results of various feature point detectors;
Fig. 11 is the schematic diagram of feature point matching and localization;
Fig. 12 is the hyperparameter selection table;
Fig. 13 is the soft-NMS flow chart;
Fig. 14 compares the number of detected feature points and the detection time for full-image and ROI feature point detection.
Specific embodiment
In order to make the objectives, technical solutions and advantages of the present invention clearer, the invention is described in further detail below with reference to Fig. 1 to Fig. 14.
As shown in Fig. 1, the sea cucumber detection and binocular vision three-dimensional localization method based on deep learning includes the following steps:
(1) Calibrate the binocular camera to obtain the intrinsic and extrinsic parameters of the cameras, as shown in Fig. 2.
In a stereo vision system, camera calibration determines the relationship between the pixels of the two-dimensional images collected by the image acquisition device and the three-dimensional coordinates of the target. The present invention uses Zhang Zhengyou's camera calibration method to accomplish the underwater calibration task. Both binocular cameras are Microsoft HD-3000 720P high-definition cameras, fixed in parallel. Calibrating the binocular camera not only obtains the internal parameters of each camera, such as focal length, imaging origin and distortion coefficients, but also measures the relative position between the two cameras, i.e., the three-dimensional translation and rotation parameters of the right camera relative to the left camera.
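To make the roles of the calibrated parameters concrete, the following sketch projects a 3-D point through a hypothetical parallel rig. The intrinsic matrix K, rotation R and translation t are assumed demonstration values, not the patent's calibration results; in practice they come from checkerboard calibration as described above:

```python
import numpy as np

K = np.array([[700.0,   0.0, 640.0],   # fx, skew, cx (pixels)
              [  0.0, 700.0, 360.0],   # fy, cy
              [  0.0,   0.0,   1.0]])

R = np.eye(3)                          # ideal parallel rig: no rotation
t = np.array([-60.0, 0.0, 0.0])        # baseline B = 60 mm along x

def project(K, R, t, P):
    """Project world point P (left-camera frame, mm) to pixel coordinates."""
    p = K @ (R @ P + t)
    return p[:2] / p[2]

P = np.array([100.0, 50.0, 1000.0])    # a point 1 m in front of the left camera
uv_left = project(K, np.eye(3), np.zeros(3), P)
uv_right = project(K, R, t, P)
disparity = uv_left[0] - uv_right[0]   # = f * B / Z = 700 * 60 / 1000 = 42
print(uv_left, uv_right, disparity)
```

For the parallel rig the horizontal disparity equals f·B/Z, which is exactly the relation used later for three-dimensional localization in step (6).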
(2) Rectify the binocular camera so that the imaging origins of the left and right views coincide, the optical axes of the two cameras are parallel, the left and right imaging planes are coplanar, and the epipolar lines are row-aligned, as shown in Fig. 3.
In binocular vision, binocular rectification uses the monocular intrinsic data obtained from calibration, including focal length, imaging origin and distortion coefficients, together with the relative position of the two cameras, including the rotation matrix and the translation vector, to remove distortion and row-align the left and right views respectively, so that the imaging origins of the left and right views coincide, the optical axes of the two cameras are parallel, the left and right imaging planes are coplanar, and the epipolar lines are row-aligned.
(3) Acquire seabed image data with the calibrated binocular camera.
Underwater RGB images are obtained with the binocular camera calibrated in (1) and rectified in (2). If a sea cucumber target is present in an image, the image contains the color and texture information of the sea cucumber surface, but no depth information.
(4) Perform image enhancement on the collected image data with the dark channel prior algorithm based on white balance compensation. The flow is shown in Fig. 4: white balance compensation is performed first, followed by dark channel defogging.
(4.1) White balance compensation
White balance aims to compensate for the color cast caused by depth-selective absorption. As light travels through water, it is attenuated selectively across the wavelength spectrum, which affects the intensity and appearance of colored surfaces. The main problem in the images acquired in this patent is the loss of the red and blue channels, so a white balance method that compensates the red and blue channels is applied first, followed by defogging enhancement.
To make up for the loss of the red channel, we establish the following four principles:
a. The green channel is relatively well preserved underwater compared with the red and blue channels.
b. The green channel carries color information complementary to the red channel, and compensating the stronger attenuation of red relative to green is especially important. Therefore, a small fraction of the green channel can be added to red to compensate for the red attenuation.
c. The compensation amount should be proportional to the difference between the average green and average red values, because under the grey-world assumption this difference reflects the disparity between the red and green attenuation.
d. Green channel information should not be transferred in regions where the red channel information is still significant. Essentially, red channel compensation should only be carried out in regions of high attenuation.
Mathematically, to account for the above observations, we express the compensated red channel Irc at each pixel location x as shown in formula (1):

$$I_{rc}(x) = I_r(x) + \alpha\,(\bar I_g - \bar I_r)\,(1 - I_r(x))\,I_g(x) \tag{1}$$

where Ir and Ig denote the red and green channels of image I, each channel normalized to the interval [0, 1] by the upper limit of its dynamic range; $\bar I_r$ and $\bar I_g$ denote the mean values of Ir and Ig, and α is a constant. Repeated experiments show that the value α = 1 is suitable for various lighting conditions and capture settings.
In turbid waters or at high plankton concentrations, the blue channel is also significantly attenuated by the absorption of organic matter. To address this, when blue is strongly attenuated and red channel compensation alone is insufficient, this patent also compensates the blue channel attenuation, computing the compensated blue channel Ibc as shown in formula (2):

$$I_{bc}(x) = I_b(x) + \alpha\,(\bar I_g - \bar I_b)\,(1 - I_b(x))\,I_g(x) \tag{2}$$

where Ib and Ig denote the blue and green channels of image I, and α is also set to 1.
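A minimal NumPy sketch of the compensation in formulas (1) and (2), assuming the standard underwater white-balance form in which the correction is proportional to the gap between the channel means, gated by (1 − channel) so it acts mainly where the channel is attenuated:

```python
import numpy as np

def compensate_channel(ch, green, alpha=1.0):
    """Compensate an attenuated channel (values in [0, 1]) using green."""
    return ch + alpha * (green.mean() - ch.mean()) * (1.0 - ch) * green

def white_balance_compensation(img, alpha=1.0):
    """img: float array (H, W, 3) in [0, 1], RGB order."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    rc = compensate_channel(r, g, alpha)   # formula (1)
    bc = compensate_channel(b, g, alpha)   # formula (2)
    return np.clip(np.stack([rc, g, bc], axis=-1), 0.0, 1.0)

# Toy example: a bluish-green underwater patch with a weak red channel.
img = np.zeros((4, 4, 3))
img[..., 0] = 0.1   # red, heavily attenuated
img[..., 1] = 0.6   # green, well preserved
img[..., 2] = 0.4   # blue
out = white_balance_compensation(img)
# Red is pulled toward green: 0.1 + (0.6 - 0.1) * (1 - 0.1) * 0.6 = 0.37
print(out[0, 0])
```

Note that the green channel passes through unchanged, consistent with principle (a) above.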
(4.2) Dark channel defogging
First, in computer vision and computer graphics, the hazy image model expressed by the following equation is generally used:

I(x) = J(x)t(x) + A(1 − t(x))  (3)

where I(x) is the image to be defogged, J(x) is the haze-free image to be restored, the parameter A is the water vapor component, and t(x) is the transmission. As prior knowledge we have I(x) and want to solve for J(x). By algebra alone, equation (3) has no unique solution; a determinate solution is obtained from the dark channel prior.
(4.2.1) For a hazy image, compute its dark channel Idark:

$$I^{dark}(x) = \min_{y \in \Omega(x)} \Big( \min_{c \in \{r,g,b\}} I^c(y) \Big) \tag{4}$$

where Ic is one of the red, green and blue color channels of the image and Ω(x) is a window region centered at coordinate x.
(4.2.2) Estimate the water vapor veil A. This parameter is obtained from the hazy image according to its dark channel image: the pixels with the top 0.1% largest values in the dark channel are located, the corresponding points in the initial image I are found, the maximum pixel value of each channel in that region is obtained, and the average of these per-channel maxima is A.
(4.2.3) Analyze the transmission t of the hazy image; this process yields the initial transmission map:

$$t(x) = 1 - \min_{y \in \Omega(x)} \Big( \min_{c} \frac{I^c(y)}{A^c} \Big) \tag{5}$$

(4.2.4) Restore the image with the known estimates. When the transmission t is very small, the restored J becomes excessively large and the image is over-whitened, so in general a threshold parameter t0 is added: when t is less than t0, t = t0 is used, e.g. t0 = 0.1. The final image restoration formula is therefore:

$$J(x) = \frac{I(x) - A}{\max(t(x), t_0)} + A \tag{6}$$
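The whole of step (4.2) can be sketched compactly. The window minimum, top-0.1% veil estimate, transmission map and thresholded recovery below follow the standard dark channel prior, which is assumed to match the patent's equation images (a guided-filter refinement of t, common in practice, is omitted):

```python
import numpy as np

def dark_channel(img, win=3):
    """Per-pixel min over channels, then min over a win x win neighborhood."""
    mins = img.min(axis=2)
    pad = win // 2
    padded = np.pad(mins, pad, mode='edge')
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + win, j:j + win].min()
    return out

def estimate_A(img, dark, frac=0.001):
    """Average, per channel, over the brightest 0.1% of dark-channel pixels."""
    n = max(1, int(frac * dark.size))
    idx = np.argsort(dark.ravel())[-n:]
    return img.reshape(-1, 3)[idx].max(axis=0)

def defog(img, win=3, t0=0.1):
    dark = dark_channel(img, win)
    A = estimate_A(img, dark)
    t = 1.0 - dark_channel(img / A, win)      # formula (5)
    t = np.maximum(t, t0)                     # threshold t0
    J = (img - A) / t[..., None] + A          # formula (6)
    return np.clip(J, 0.0, 1.0)

# Degenerate toy input: a uniform veil is recovered unchanged.
img = np.full((8, 8, 3), 0.7)
out = defog(img)
print(out.mean())
```

On real seabed images the effect is to strip the uniform water veil and stretch the remaining scene radiance, which is what Fig. 5's histograms illustrate.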
(5) Perform deep-learning-based sea cucumber target detection on the enhanced seabed images, realizing target classification in the two-dimensional image and obtaining the regression box information of the target, as shown in Fig. 6.
Considering the characteristics of the various deep learning target detection models, and weighing the accuracy, speed and portability required for sea cucumber target detection, the present invention adopts a target detection method based on the YOLO v2 convolutional neural network, which fuses candidate box selection, feature extraction, target classification and target localization into a single network. The convolutional neural network selects candidate regions in the two-dimensional three-channel color image while predicting sea cucumber position and probability from whole-image features. The sea cucumber detection problem is thus converted into a regression problem, truly realizing end-to-end detection.
(5.1) Build the sea cucumber dataset from the data acquired in (3).
In order to both train the convolutional neural network and evaluate the performance of the trained model, the sea cucumber dataset needs to be split into a training set, a validation set and a test set. The training set is used to obtain the weight parameters of the network model; the validation set is used for parameter tuning during training; and the test set is used to estimate the accuracy of the network model.
(5.2) Perform data augmentation on the dataset established in (5.1).
Training a deep learning model requires sufficient training samples to obtain a model with high accuracy and good performance. Therefore, this patent applies data augmentation to the samples before training the convolutional neural network model. Data augmentation adds small perturbations and variations to the training samples, which not only enlarges the training set and improves the generalization ability of the sea cucumber detection model, but also injects noise data that enhances the robustness of the model. The main data augmentation methods are: flipping (flip), random cropping (random crop), color jittering (color jittering), translation (shift), scaling (scale), noise perturbation (noise), rotation (rotation), etc. The effect is shown in Fig. 7.
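A few of the listed augmentations can be sketched directly on NumPy arrays (in a real detection pipeline the bounding-box annotations must be transformed along with the image, which this illustration omits):

```python
import numpy as np

rng = np.random.default_rng(0)

def hflip(img):
    """Horizontal flip."""
    return img[:, ::-1]

def random_crop(img, out_h, out_w):
    """Random crop to (out_h, out_w)."""
    h, w = img.shape[:2]
    top = rng.integers(0, h - out_h + 1)
    left = rng.integers(0, w - out_w + 1)
    return img[top:top + out_h, left:left + out_w]

def add_noise(img, sigma=0.02):
    """Gaussian noise perturbation, clipped back to [0, 1]."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

img = rng.random((64, 64, 3))
aug = [hflip(img), random_crop(img, 48, 48), add_noise(img)]
print([a.shape for a in aug])
```

Each transformed copy is added to the training set alongside the original, multiplying the effective sample count.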
(5.3) Construct the neural network, as shown in Fig. 8.
In an embodiment of the present invention, the backbone network of YOLO v2, Darknet-19, is built, with 19 convolutional layers and 5 max-pooling layers. The network uses a large number of 3×3 filters for feature extraction and places 1×1 convolution kernels between pairs of 3×3 convolution layers, which not only reduces the number of model parameters but also improves the non-linear expressiveness of the features. In addition, a fusion layer (merge layer) is added to the backbone network, which fuses shallow feature maps with deep feature maps. The YOLO v2 detector thus uses feature maps combining high and low resolution, giving it finer-grained features, i.e., features that localize and accurately describe the key parts of an object. This helps the detection of smaller-scale objects, so the performance of the model is improved.
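The parameter saving from placing a 1×1 kernel between 3×3 kernels can be checked with simple arithmetic; the channel widths below are assumed for illustration, not Darknet-19's exact configuration:

```python
def conv_params(k, c_in, c_out):
    """Weight count of a k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

# Two stacked 3x3 layers at 512 channels...
plain = conv_params(3, 512, 512) + conv_params(3, 512, 512)
# ...versus a 1x1 bottleneck to 256 channels between the 3x3 stages.
bottleneck = (conv_params(3, 512, 512)
              + conv_params(1, 512, 256)
              + conv_params(3, 256, 512))
print(plain, bottleneck)
```

Here the bottleneck variant needs about 22% fewer weights while keeping two 3×3 stages, illustrating why the 1×1 kernels reduce model size.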
(5.4) neural network built using (5.3) instructs the data set for having markup information of (5.2) offline
Practice.
The given sample pre-processed is inputted into neural network, and the true value file of given sample, is instructed using having to supervise
Practice 20000 times.The effect of neural network model and the target of optimization are defined by loss function, nerve net of the invention
Network is trained effect and is assessed using four class loss functions, they, according to the difference of weight, are to have target respectively
(object) it loses, without target (noobject) loss, classification (class) loss and coordinate (coord) loss.Overall loss
It is the quadratic sum of four parts, as shown in formula (7).
The first two terms of formula (7) are the losses between the predicted and ground-truth coordinates. The third term is the confidence loss where an object is present, the fourth term is the confidence loss where no object is present, and the last term is the classification loss. Since it would be unreasonable to treat localization accuracy and classification as equally important, the loss terms are weighted differently, with λcoord = 5 and λnoobj = 1, where the indicator 1_ij^obj denotes whether the j-th predicted box of the i-th grid cell of the feature map is responsible for this target. The weights are updated by stochastic gradient descent on this loss function.
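Formula (7) itself is not reproduced in this text extraction; a YOLO-style sum-of-squares loss consistent with the terms described above (coordinate, size, object confidence, no-object confidence, class) would take the form:

```latex
\begin{aligned}
L ={}& \lambda_{coord} \sum_{i=0}^{S^2}\sum_{j=0}^{B} \mathbb{1}_{ij}^{obj}
      \left[(x_i-\hat{x}_i)^2 + (y_i-\hat{y}_i)^2\right] \\
   &+ \lambda_{coord} \sum_{i=0}^{S^2}\sum_{j=0}^{B} \mathbb{1}_{ij}^{obj}
      \left[\left(\sqrt{w_i}-\sqrt{\hat{w}_i}\right)^2 + \left(\sqrt{h_i}-\sqrt{\hat{h}_i}\right)^2\right] \\
   &+ \sum_{i=0}^{S^2}\sum_{j=0}^{B} \mathbb{1}_{ij}^{obj} \left(C_i-\hat{C}_i\right)^2
    + \lambda_{noobj} \sum_{i=0}^{S^2}\sum_{j=0}^{B} \mathbb{1}_{ij}^{noobj} \left(C_i-\hat{C}_i\right)^2 \\
   &+ \sum_{i=0}^{S^2} \mathbb{1}_{i}^{obj} \sum_{c \in classes} \left(p_i(c)-\hat{p}_i(c)\right)^2
\end{aligned}
```

Here S² is the number of grid cells, B the number of boxes per cell, and C and p(c) the confidence and class probabilities; with λcoord = 5 and λnoobj = 1 as stated above.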
Figure 12 is the hyperparameter table. To train the network effectively and obtain the desired predictions, the network hyperparameters must be set appropriately; network training runs were carried out to tune the momentum value (momentum), the number of iterations (epoch), the batch size (batch size), and the learning rate (learning rate).
(5.5) The trained model is tested to predict the target class and the regression-box information of the target.
After training of the neural network model is complete, the convolutional layers extract the features of the enhanced two-dimensional underwater image, and the fully convolutional output layer then predicts the class probabilities and the regression-box coordinate information. For the sea cucumber data set, 5 box sizes are predicted; each box contains 5 values, the regression-box coordinates and a confidence score, plus 1 class, for a total of 5 × (5 + 1) = 30 output dimensions.
(5.6) Non-maximum suppression removes redundant regression boxes.
The YOLO v2 network performs target classification and localization; regression boxes that are redundant are then removed by non-maximum suppression (NMS). Traditional non-maximum suppression can cause missed detections of sea cucumbers, so this patent adopts soft-NMS, in which the score of a regression box is Gaussian-weighted: si ← si · exp(−IOU(M, bi)²/σ).
M is the regression box with the highest current confidence score and bi is the regression box being processed; the higher the IOU between bi and M, the faster the confidence score si of bi decays. Figure 13 shows the process by which soft-NMS handles redundant regression boxes.
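A minimal NumPy sketch of the Gaussian soft-NMS re-scoring described above (the box format, σ, and the score threshold are illustrative assumptions, not values fixed by the patent):

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, format [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian soft-NMS: instead of deleting boxes that overlap the current
    best box M, decay their scores by exp(-IoU(M, b_i)^2 / sigma)."""
    scores = scores.astype(float).copy()
    idx = np.arange(len(scores))
    keep = []
    while idx.size > 0:
        m = idx[np.argmax(scores[idx])]                 # M: highest-confidence box
        keep.append(int(m))
        idx = idx[idx != m]
        if idx.size == 0:
            break
        overlap = iou(boxes[m], boxes[idx])
        scores[idx] *= np.exp(-(overlap ** 2) / sigma)  # Gaussian re-weighting
        idx = idx[scores[idx] > score_thresh]           # drop near-zero scores
    return keep
```

Unlike hard NMS, a heavily overlapping box is only down-weighted, so a second sea cucumber partially occluded by the first is less likely to be suppressed outright.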
(6) As shown in Figure 9, a binocular stereo feature-point matching algorithm is applied to the image whose two-dimensional target regression-box information was obtained through image enhancement and deep learning, yielding the three-dimensional localization coordinates of the target. For matching precision, the underwater white-balance dark-channel image enhancement of step (4) increases the number of matched pairs and thereby raises the localization accuracy. For matching speed, the present invention uses the regression box obtained by the deep learning of step (5) as the region of interest, which shortens the binocular feature-point matching time.
(6.1) feature point extraction;
The present invention extracts feature points with the ORB descriptor, which can replace SIFT and SURF with respect to running speed, matching accuracy, and patent restrictions. The ORB descriptor combines an improved FAST corner detector with the BRIEF descriptor. Its extraction proceeds in two steps:
(6.1.1) FAST keypoint extraction: ORB detects FAST corners in the image and additionally estimates a principal orientation for each corner, which provides rotational invariance for the BRIEF descriptor;
(6.1.2) BRIEF description: a vector representation is computed for the improved (oriented) corners.
As shown in Figure 10, after white-balance dark-channel enhancement of the underwater image the number of feature points increases markedly, and SIFT, SURF, and ORB all achieve good results on both the full image and the region of interest (Region of Interest, ROI). Figure 14 shows that ORB has the shortest detection time, and also that taking the regression box returned by deep learning as the region of interest and extracting feature points only there greatly reduces the feature-point detection time and thus improves the efficiency of localization.
(6.2) Feature Points Matching;
Feature-point matching constructs correspondences from certain local or global characteristics of the descriptors and is used to check whether two descriptors refer to the same point. The present invention uses the Brute Force matching method to compute the Euclidean distance between corresponding points.
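A small NumPy sketch of brute-force matching by Euclidean distance with a mutual cross-check, as in step (6.2). In practice the descriptors would come from an ORB extractor (e.g. OpenCV's), and binary ORB descriptors are more commonly compared with Hamming distance; Euclidean distance is used here to follow the text:

```python
import numpy as np

def brute_force_match(desc_left, desc_right):
    """Brute-force matching: for each left descriptor find the right descriptor
    at minimum Euclidean distance, keeping only cross-consistent pairs."""
    # Pairwise Euclidean distance matrix, shape (n_left, n_right).
    dists = np.linalg.norm(desc_left[:, None, :] - desc_right[None, :, :], axis=2)
    best_right = dists.argmin(axis=1)   # nearest right descriptor for each left one
    best_left = dists.argmin(axis=0)    # nearest left descriptor for each right one
    matches = []
    for i, j in enumerate(best_right):
        if best_left[j] == i:           # cross-check: mutual nearest neighbours
            matches.append((i, int(j)))
    return matches
```

The cross-check discards one-sided matches, which is a simple way to raise the pairing precision mentioned above.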
(6.3) Target three-dimensional localization, as shown in Figure 11.
A binocular camera consists of two calibrated cameras capturing the same frame; from the difference between the left and right images, the three-dimensional information of a spatial point is obtained by the triangulation principle.
Taking the upper-left corner of the left image as the coordinate origin, let the projections of a spatial point P(xc, yc, zc) in the left and right matched views have x-axis coordinates Xleft and Xright; after rectification the epipolar lines coincide, so the y-axis coordinate takes the same value Y in both views. By the similar-triangles principle, formula (9) is expressed as:
In the formula, f is the camera focal length and B is the baseline, i.e. the distance between the optical centers of the binocular cameras; B and f are system parameters of the binocular stereo vision system, obtained by underwater binocular stereo calibration. Disparity = Xleft − Xright is the positional difference between the matched points of the left and right images, called the parallax (disparity), from which the three-dimensional coordinates of the spatial point are obtained as follows:
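Formulas (9) and (10) are not reproduced in this text extraction; the standard rectified-stereo relations consistent with the definitions above (origin at the left camera) are:

```latex
% similar triangles (formula (9)):
X_{left} = f\,\frac{x_c}{z_c}, \qquad
X_{right} = f\,\frac{x_c - B}{z_c}, \qquad
Y = f\,\frac{y_c}{z_c}

% solving for the spatial point with Disparity = X_{left} - X_{right} (formula (10)):
x_c = \frac{B \cdot X_{left}}{Disparity}, \qquad
y_c = \frac{B \cdot Y}{Disparity}, \qquad
z_c = \frac{B \cdot f}{Disparity}
```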
The above analysis shows that binocular feature-point matching amounts to computing correspondences between left-image and right-image match points; once the binocular system has found a matched pair, formula (10) can be evaluated to obtain the 3D coordinates and the depth of the matched point.
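As a sketch, the evaluation of formula (10) for a single rectified match looks as follows (the units are whatever f and B are expressed in; here f in pixels and the baseline in metres are assumed):

```python
def triangulate(x_left, x_right, y, f, b):
    """Formula (10): recover the 3D point of a rectified stereo match.
    x_left, x_right, y are pixel coordinates; f is the focal length in pixels
    and b the baseline; the origin is at the left camera."""
    disparity = x_left - x_right          # Disparity = Xleft - Xright
    if disparity <= 0:
        raise ValueError("a valid match in front of the cameras needs disparity > 0")
    z = b * f / disparity                 # depth
    x = b * x_left / disparity
    y3 = b * y / disparity
    return x, y3, z
```

Note how depth is inversely proportional to disparity: distant points produce small disparities, which is why accurate sub-pixel matching matters most for far targets.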
Claims (4)
1. A deep-learning-based sea cucumber detection and binocular vision localization method, characterized by comprising the following steps:
(1) calibrating a binocular camera to obtain the intrinsic and extrinsic parameters of the cameras;
(2) rectifying the binocular camera so that the imaging origins of the left and right views coincide, the optical axes of the two cameras are parallel, the left and right imaging planes are coplanar, and the epipolar lines are row-aligned;
(3) acquiring underwater image data with the calibrated binocular camera;
(4) enhancing the acquired underwater image data with a dark channel prior algorithm based on white-balance compensation;
(5) performing deep-learning-based sea cucumber target detection on the enhanced underwater image, realizing target classification in the two-dimensional image and obtaining the regression-box information of the target;
(6) applying a binocular stereo feature-point matching algorithm to the image whose two-dimensional target regression-box information was obtained through image enhancement and deep learning, to obtain the three-dimensional localization coordinates of the target.
2. The deep-learning-based sea cucumber detection and binocular vision localization method according to claim 1, characterized in that step (4) specifically comprises:
(4.1) performing white-balance compensation: at each pixel location x, the red channel Irc and the blue channel Ibc are compensated;
the red channel compensation Irc is given by the following formula:
where Ir and Ig denote the red and green channels of image I, Īr and Īg denote the means of Ir and Ig, and α is the constant 1;
the blue channel compensation Ibc is given by the following formula:
where Ib and Ig denote the blue and green channels of image I, Īg and Īb denote the means of Ig and Ib, and α is the constant 1;
(4.2) performing dark channel defogging:
(4.2.1) for a hazy image, computing its dark channel Idark:
where Ic denotes each of the red, green, and blue color channels of the image, and Ω(x) is a window region centered at coordinate x;
(4.2.2) estimating the atmospheric veil A: among the top 0.1% brightest pixels of the dark channel, the corresponding pixels of the original image I are located, the maximum value of each channel over this region is taken, and the average of these per-channel maxima is A;
(4.2.3) analyzing the transmission t of the hazy image to obtain the initial transmission map:
where Ic denotes each of the red, green, and blue color channels of the image, and Ac is the atmospheric-veil value A of the corresponding channel;
(4.2.4) recovering the image from the known estimates:
a threshold t0 is set; when t falls below t0, t = t0 is used, giving the following image-recovery formula:
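The per-pixel operations recited in steps (4.1)–(4.2.4) can be sketched in NumPy as follows. The formulas themselves are not reproduced in this extraction, so this follows the Ancuti-style compensation and dark-channel-prior dehazing the text describes; the window size, ω = 0.95, t0 = 0.1, and the mean-based estimate of A are illustrative assumptions:

```python
import numpy as np

def compensate_red_blue(img, alpha=1.0):
    """Step (4.1): compensate the red and blue channels from the green channel
    (Ancuti-style white balance); img has values in [0, 1], shape (H, W, 3)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    rc = r + alpha * (g.mean() - r.mean()) * (1 - r) * g
    bc = b + alpha * (g.mean() - b.mean()) * (1 - b) * g
    return np.clip(np.stack([rc, g, bc], axis=-1), 0.0, 1.0)

def dark_channel(img, win=3):
    """Step (4.2.1): per-pixel minimum over R, G, B followed by a local
    minimum filter over a win x win window Omega(x)."""
    mins = img.min(axis=2)
    pad = win // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.empty_like(mins)
    for i in range(mins.shape[0]):
        for j in range(mins.shape[1]):
            out[i, j] = padded[i:i + win, j:j + win].min()
    return out

def dehaze(img, win=3, omega=0.95, t0=0.1):
    """Steps (4.2.2)-(4.2.4): estimate the veil A from the brightest 0.1% of
    dark-channel pixels, build the transmission map t, and recover the image."""
    dark = dark_channel(img, win)
    n = max(1, int(dark.size * 0.001))
    rows, cols = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[rows, cols].mean(axis=0)              # per-channel veil estimate
    t = 1.0 - omega * dark_channel(img / A, win)  # initial transmission map
    t = np.maximum(t, t0)                         # threshold t0 from (4.2.4)
    return np.clip((img - A) / t[..., None] + A, 0.0, 1.0)
```

The green channel drives both compensations because it is the best-preserved channel underwater; red attenuates first with depth, which is why its mean falls furthest below the green mean.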
3. The deep-learning-based sea cucumber detection and binocular vision localization method according to claim 1, characterized in that step (5) specifically comprises:
(5.1) establishing a sea cucumber data set from the data acquired in (3);
(5.2) performing data augmentation on the data set established in (5.1);
(5.3) constructing the neural network;
(5.4) training the neural network built in (5.3) offline on the annotated data set of (5.2);
(5.5) testing the trained model to predict the target class and the regression-box information of the target;
(5.6) removing redundant regression boxes by non-maximum suppression.
4. The deep-learning-based sea cucumber detection and binocular vision localization method according to claim 1, characterized in that step (6) specifically comprises:
(6.1) feature point extraction: extracting feature points with the ORB descriptor;
(6.1.1) performing FAST keypoint extraction;
(6.1.2) computing the BRIEF descriptor;
(6.2) feature-point matching: computing the Euclidean distance between corresponding points with the Brute Force matching method;
(6.3) target three-dimensional localization with the binocular camera:
taking the upper-left corner of the left image as the coordinate origin, the projections of a spatial point P(xc, yc, zc) in the left and right matched views have x-axis coordinates Xleft and Xright; after epipolar rectification the y-axis coordinate takes the same value Y in both views, expressed as:
where B and f are system parameters of the binocular stereo vision system obtained by underwater binocular stereo calibration, f being the camera focal length and B the baseline; the three-dimensional space coordinates are expressed as follows:
where Disparity = Xleft − Xright is the positional difference between the left-right matched point pair.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810519615.3A CN108876855A (en) | 2018-05-28 | 2018-05-28 | A kind of sea cucumber detection and binocular visual positioning method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108876855A true CN108876855A (en) | 2018-11-23 |
Family
ID=64335035
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810519615.3A Pending CN108876855A (en) | 2018-05-28 | 2018-05-28 | A kind of sea cucumber detection and binocular visual positioning method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108876855A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107883875A (en) * | 2017-11-23 | 2018-04-06 | 哈尔滨工程大学 | Autonomous type sea cucumber finishing device visual detection positioning device and vision-based detection localization method |
CN108038459A (en) * | 2017-12-20 | 2018-05-15 | 深圳先进技术研究院 | A kind of detection recognition method of aquatic organism, terminal device and storage medium |
Non-Patent Citations (4)
Title |
---|
Codruta O. Ancuti et al., "Color Balance and Fusion for Underwater Image Enhancement", IEEE Transactions on Image Processing * |
Ethan Rublee et al., "ORB: an efficient alternative to SIFT or SURF", 2011 IEEE International Conference on Computer Vision * |
Wu Zhihuan et al., "Rapid Target Detection in High Resolution Remote Sensing Images Using YOLO Model", The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLII-3, 2018 ISPRS TC III Mid-term Symposium "Developments, Technologies and Applications in Remote Sensing" * |
Wang Di, "Research on Binocular Vision Ranging and Manipulator Visual Servo Control for Underwater Robots", Wanfang Data * |
Cited By (57)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109300154A (en) * | 2018-11-27 | 2019-02-01 | 郑州云海信息技术有限公司 | A kind of distance measuring method and device based on binocular solid |
CN110210279A (en) * | 2018-11-27 | 2019-09-06 | 腾讯科技(深圳)有限公司 | Object detection method, device and computer readable storage medium |
CN109465809A (en) * | 2018-12-17 | 2019-03-15 | 中北大学 | A kind of Intelligent garbage classification robot based on binocular stereo vision fixation and recognition |
CN111768449A (en) * | 2019-03-30 | 2020-10-13 | 北京伟景智能科技有限公司 | Object grabbing method combining binocular vision with deep learning |
CN111768449B (en) * | 2019-03-30 | 2024-05-14 | 北京伟景智能科技有限公司 | Object grabbing method combining binocular vision with deep learning |
CN110060299A (en) * | 2019-04-18 | 2019-07-26 | 中国测绘科学研究院 | Danger source identifies and positions method in passway for transmitting electricity based on binocular vision technology |
CN110132243A (en) * | 2019-05-31 | 2019-08-16 | 南昌航空大学 | A kind of modularization positioning system based on deep learning and ranging |
CN110232711B (en) * | 2019-06-05 | 2021-08-13 | 中国科学院自动化研究所 | Binocular vision real-time perception positioning method, system and device for marine product grabbing |
CN110232711A (en) * | 2019-06-05 | 2019-09-13 | 中国科学院自动化研究所 | The binocular vision real-time perception localization method of marine product crawl, system, device |
CN110222832A (en) * | 2019-06-19 | 2019-09-10 | 中国水产科学研究院东海水产研究所 | Entrance of Changjiang River salt marshes macrobenthos habitat simulation prediction technique |
CN110619660A (en) * | 2019-08-21 | 2019-12-27 | 深圳市优必选科技股份有限公司 | Object positioning method and device, computer readable storage medium and robot |
CN112419418A (en) * | 2019-08-22 | 2021-02-26 | 刘锐 | Positioning method based on camera mechanical aiming |
CN110599489A (en) * | 2019-08-26 | 2019-12-20 | 华中科技大学 | Target space positioning method |
CN110543859A (en) * | 2019-09-05 | 2019-12-06 | 大连海事大学 | sea cucumber autonomous recognition and grabbing method based on deep learning and binocular positioning |
CN110543859B (en) * | 2019-09-05 | 2023-08-18 | 大连海事大学 | Sea cucumber autonomous identification and grabbing method based on deep learning and binocular positioning |
CN110689118A (en) * | 2019-09-29 | 2020-01-14 | 华南理工大学 | Improved target detection method based on YOLO V3-tiny |
CN110702066A (en) * | 2019-10-15 | 2020-01-17 | 哈尔滨工程大学 | Underwater binocular camera vision positioning method |
CN110702066B (en) * | 2019-10-15 | 2022-03-18 | 哈尔滨工程大学 | Underwater binocular camera vision positioning method |
CN111046726B (en) * | 2019-10-25 | 2023-08-08 | 青岛农业大学 | Underwater sea cucumber identification and positioning method based on AI intelligent vision |
CN111046726A (en) * | 2019-10-25 | 2020-04-21 | 青岛农业大学 | AI intelligent vision-based underwater sea cucumber identification and positioning method |
CN110969158B (en) * | 2019-11-06 | 2023-07-25 | 中国科学院自动化研究所 | Target detection method, system and device based on underwater operation robot vision |
CN110969158A (en) * | 2019-11-06 | 2020-04-07 | 中国科学院自动化研究所 | Target detection method, system and device based on underwater operation robot vision |
CN111160213A (en) * | 2019-12-25 | 2020-05-15 | 广州方纬智慧大脑研究开发有限公司 | Illegal boarding and alighting detection method and system based on deep learning and storage medium |
CN111160213B (en) * | 2019-12-25 | 2024-06-25 | 广州方纬智慧大脑研究开发有限公司 | Illegal boarding and disembarking detection method, system and storage medium based on deep learning |
CN111340951A (en) * | 2020-02-26 | 2020-06-26 | 天津大学 | Ocean environment automatic identification method based on deep learning |
CN111696150A (en) * | 2020-05-19 | 2020-09-22 | 杭州飞锐科技有限公司 | Method for measuring phenotypic data of channel catfish |
CN111798496B (en) * | 2020-06-15 | 2021-11-02 | 博雅工道(北京)机器人科技有限公司 | Visual locking method and device |
CN111798496A (en) * | 2020-06-15 | 2020-10-20 | 博雅工道(北京)机器人科技有限公司 | Visual locking method and device |
CN112053324A (en) * | 2020-08-03 | 2020-12-08 | 上海电机学院 | Complex material volume measurement method based on deep learning |
CN112183640A (en) * | 2020-09-29 | 2021-01-05 | 无锡信捷电气股份有限公司 | Detection and classification method based on irregular object |
CN112183640B (en) * | 2020-09-29 | 2024-07-02 | 无锡信捷电气股份有限公司 | Detection and classification method based on irregular object |
CN112183485A (en) * | 2020-11-02 | 2021-01-05 | 北京信息科技大学 | Deep learning-based traffic cone detection positioning method and system and storage medium |
CN112183485B (en) * | 2020-11-02 | 2024-03-05 | 北京信息科技大学 | Deep learning-based traffic cone detection positioning method, system and storage medium |
CN114529811A (en) * | 2020-11-04 | 2022-05-24 | 中国科学院沈阳自动化研究所 | Rapid and automatic identification and positioning method for foreign matters in subway tunnel |
CN114529493A (en) * | 2020-11-04 | 2022-05-24 | 中国科学院沈阳自动化研究所 | Cable appearance defect detection and positioning method based on binocular vision |
CN112700499A (en) * | 2020-11-04 | 2021-04-23 | 南京理工大学 | Deep learning-based visual positioning simulation method and system in irradiation environment |
CN112700499B (en) * | 2020-11-04 | 2022-09-13 | 南京理工大学 | Deep learning-based visual positioning simulation method and system in irradiation environment |
CN112561996A (en) * | 2020-12-08 | 2021-03-26 | 江苏科技大学 | Target detection method in autonomous underwater robot recovery docking |
CN112529960A (en) * | 2020-12-17 | 2021-03-19 | 珠海格力智能装备有限公司 | Target object positioning method and device, processor and electronic device |
CN112767455B (en) * | 2021-01-08 | 2022-09-02 | 合肥的卢深视科技有限公司 | Calibration method and system for binocular structured light |
CN112767455A (en) * | 2021-01-08 | 2021-05-07 | 北京的卢深视科技有限公司 | Calibration method and system for binocular structured light |
CN114882346A (en) * | 2021-01-22 | 2022-08-09 | 中国科学院沈阳自动化研究所 | Underwater robot target autonomous identification method based on vision |
CN114882346B (en) * | 2021-01-22 | 2024-07-09 | 中国科学院沈阳自动化研究所 | Underwater robot target autonomous identification method based on vision |
CN112949389A (en) * | 2021-01-28 | 2021-06-11 | 西北工业大学 | Haze image target detection method based on improved target detection network |
CN113420704A (en) * | 2021-06-18 | 2021-09-21 | 北京盈迪曼德科技有限公司 | Object identification method and device based on visual sensor and robot |
CN113561178B (en) * | 2021-07-30 | 2024-02-13 | 燕山大学 | Intelligent grabbing device and method for underwater robot |
CN113561178A (en) * | 2021-07-30 | 2021-10-29 | 燕山大学 | Intelligent grabbing device and method for underwater robot |
CN113689484A (en) * | 2021-08-25 | 2021-11-23 | 北京三快在线科技有限公司 | Method and device for determining depth information, terminal and storage medium |
CN114485613B (en) * | 2021-12-31 | 2024-05-17 | 浙江大学海南研究院 | Positioning method for multi-information fusion underwater robot |
CN114485613A (en) * | 2021-12-31 | 2022-05-13 | 海南浙江大学研究院 | Multi-information fusion underwater robot positioning method |
WO2023207186A1 (en) * | 2022-04-27 | 2023-11-02 | 博众精工科技股份有限公司 | Target positioning method and apparatus, electronic device, and storage medium |
CN115375977A (en) * | 2022-10-27 | 2022-11-22 | 青岛杰瑞工控技术有限公司 | Deep sea cultured fish sign parameter identification system and identification method |
CN115984341A (en) * | 2023-03-20 | 2023-04-18 | 深圳市朗诚科技股份有限公司 | Marine water quality microorganism detection method, device, equipment and storage medium |
CN116255908A (en) * | 2023-05-11 | 2023-06-13 | 山东建筑大学 | Underwater robot-oriented marine organism positioning measurement device and method |
CN116255908B (en) * | 2023-05-11 | 2023-08-15 | 山东建筑大学 | Underwater robot-oriented marine organism positioning measurement device and method |
CN116681935B (en) * | 2023-05-31 | 2024-01-23 | 国家深海基地管理中心 | Autonomous recognition and positioning method and system for deep sea hydrothermal vent |
CN116681935A (en) * | 2023-05-31 | 2023-09-01 | 国家深海基地管理中心 | Autonomous recognition and positioning method and system for deep sea hydrothermal vent |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108876855A (en) | A kind of sea cucumber detection and binocular visual positioning method based on deep learning | |
Akkaynak et al. | Sea-thru: A method for removing water from underwater images | |
Bianco et al. | A new color correction method for underwater imaging | |
Pinto et al. | MARESye: A hybrid imaging system for underwater robotic applications | |
Bryson et al. | True color correction of autonomous underwater vehicle imagery | |
CN104700404B (en) | A kind of fruit positioning identifying method | |
Chiang et al. | Underwater image enhancement by wavelength compensation and dehazing | |
Prados et al. | A novel blending technique for underwater gigamosaicing | |
CN105184857B (en) | Monocular vision based on structure light ranging rebuilds mesoscale factor determination method | |
CN106780726A (en) | The dynamic non-rigid three-dimensional digital method of fusion RGB D cameras and colored stereo photometry | |
Lou et al. | Accurate multi-view stereo 3D reconstruction for cost-effective plant phenotyping | |
CN114067197B (en) | Pipeline defect identification and positioning method based on target detection and binocular vision | |
CN109816680A (en) | A kind of high-throughput calculation method of crops plant height | |
CN114241031A (en) | Fish body ruler measurement and weight prediction method and device based on double-view fusion | |
JP2023541102A (en) | Method and apparatus for underwater imaging | |
TW201308251A (en) | Underwater image enhancement system | |
CN112561996A (en) | Target detection method in autonomous underwater robot recovery docking | |
Zhang et al. | Deep learning for semantic segmentation of coral images in underwater photogrammetry | |
Wei et al. | Passive underwater polarization imaging detection method in neritic area | |
CN108460794A (en) | A kind of infrared well-marked target detection method of binocular solid and system | |
CN112465950A (en) | Device and method for measuring underwater distance of deep-sea net cage and fishing net, electronic equipment and medium | |
CN116012700A (en) | Real-time fish disease detection system based on YOLO-v5 | |
Gong et al. | Research on the method of color compensation and underwater image restoration based on polarization characteristics | |
Swirski et al. | Stereo from flickering caustics | |
CN117853370A (en) | Underwater low-light image enhancement method and device based on polarization perception |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20181123 ||