
CN109345547B - Traffic lane line detection method and device based on deep learning multitask network - Google Patents


Info

Publication number
CN109345547B
Authority
CN
China
Prior art keywords
image
coordinates
lane line
deep learning
lane
Prior art date
Legal status
Active
Application number
CN201811222879.9A
Other languages
Chinese (zh)
Other versions
CN109345547A (en)
Inventor
刘琰
高旭麟
薛超
白云飞
Current Assignee
Tiandy Technologies Co Ltd
Original Assignee
Tianjin Tiandi Weiye Investment Management Co ltd
Priority date
Filing date
Publication date
Application filed by Tianjin Tiandi Weiye Investment Management Co ltd filed Critical Tianjin Tiandi Weiye Investment Management Co ltd
Priority to CN201811222879.9A priority Critical patent/CN109345547B/en
Publication of CN109345547A publication Critical patent/CN109345547A/en
Application granted granted Critical
Publication of CN109345547B publication Critical patent/CN109345547B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

Compared with traditional lane line detection algorithms based on straight-line detection, this method introduces a deep learning multitask convolutional neural network (CNN) to extract lane line feature information, making full use of the detail information in each layer of the image. It first collects and evaluates the brightness and edge information of the image, adjusts the crop size according to the evaluation result, and divides the image into several image blocks. The blocks are then normalized and fed into the deep learning network, which outputs the lane line class and coordinates for each block; finally, the lane lines are fitted using the spatial correlation of the image blocks, accurately and quickly identifying lane line information under different scenes and brightness conditions. The method is suitable for bayonet (checkpoint) cameras and electronic police applications in the intelligent traffic field; it makes full use of a deep learning network while preserving real-time image analysis, effectively improving the adaptability and accuracy of lane line detection.

Description

Traffic lane line detection method and device based on deep learning multitask network
Technical Field
The invention belongs to the field of intelligent video monitoring, and particularly relates to a traffic scene lane line detection method and device based on a deep learning multitask network.
Background
Traditional lane line detection methods first convert the color image into a grayscale image, losing the color information, and then apply binarization, which inevitably discards a large amount of edge detail and further reduces detection accuracy. They also adapt poorly to different scenes, such as road reflections after rain, low brightness at night, shadow coverage, and vehicle occlusion, in which lane lines cannot be reliably identified.
Disclosure of Invention
The invention aims to provide a traffic lane line detection method and device based on a deep learning multitask network. Built on multitask deep learning network training, it adapts well to different brightness, color temperature, and weather conditions and to a variety of complex traffic scenes, meets the real-time requirements of intelligent traffic, and is accurate, fast, and highly adaptable.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
a traffic lane line detection method based on a deep learning multitask network comprises the following steps:
s1, collecting brightness and edge information: extracting average brightness information of the image and extracting the edge intensity of the image through a two-dimensional convolution operator;
s2, image self-adaptive cutting: calculating a unit area edge intensity threshold according to the proportional relation between the average brightness and the edge intensity of the image, so as to adaptively cut the image, divide the image into a plurality of image blocks and prepare for image normalization;
s3, image normalization: zooming the image cut according to the edge intensity threshold value according to a fixed size, and preparing to be sent to deep learning;
s4, deep learning: inputting image blocks with uniform size into the composite convolutional neural network, and generating classification results of transverse and longitudinal lane lines and non-lane lines and lane line coordinates by a network model;
s5, category analysis and coordinate restoration: respectively extracting the coordinates of the lane line blocks according to the classification information output by the deeply-learned network model, and calculating corresponding coordinate values in the original image according to the size of the image blocks in the original image;
s6, lane line fitting: and fitting the coordinates of each image block of the same lane line by using the spatial correlation of each image block according to the classification information and the coordinate values of each image block, thereby outputting the final lane line coordinates.
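The coordinate restoration of step S5 can be sketched as follows. This is a minimal sketch under our own assumptions: the function name is ours, and we assume the network predicts segment coordinates in the normalized 224 × 224 block frame, which must be undone before adding the block's offset in the original image.

```python
def restore_coords(pred, block_x0, block_y0, block_size, net_size=224):
    """Map a line segment predicted inside a normalized block back to the
    original image (step S5).

    pred          -- (x1, y1, x2, y2) in the net_size x net_size block frame
    block_x0/y0   -- top-left corner of the block in the original image
    block_size    -- side length S of the block before normalization
    """
    scale = block_size / net_size            # undo the 224x224 normalization
    x1, y1, x2, y2 = pred
    return (block_x0 + x1 * scale, block_y0 + y1 * scale,
            block_x0 + x2 * scale, block_y0 + y2 * scale)
```

For example, a prediction spanning the full normalized frame of a 112-pixel block at (224, 112) maps back to a segment from (224, 112) to (336, 224) in the original image.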
Further, the specific method of step S1 includes:
s11, sampling the image based on the spatial correlation of the pixels in the actual scene image, where the sampling rate is 1/(3 × 3), that is, the central point is collected every 3 rows and 3 columns, if the original size is W × H, the thumbnail with size (W/3) × (H/3) is obtained after sampling, and the padding processing is performed if the edge does not satisfy 3 × 3, so as to obtain the average brightness value:
[Formula image not reproduced: average brightness value Lavg]
s12, performing two-dimensional convolution on the image to extract edge information, wherein the convolution operator is optimized based on a scharr operator and is divided into a horizontal direction and a vertical direction:
[Formula images not reproduced: horizontal and vertical convolution operators]
padding the edges to respectively obtain horizontal and vertical gradients, and combining the gradients to obtain the average edge information intensity of the image:
[Formula image not reproduced: average edge information intensity Bavg]
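Step S1 can be sketched as follows. This is a minimal sketch under our own assumptions: the patent's convolution operator is only described as "optimized based on a scharr operator", so the standard Scharr kernels stand in for it, and the names `brightness_and_edges`, `Lavg`, and `Bavg` follow the surrounding text rather than the patent's formulas.

```python
import numpy as np

def _conv3(x, k):
    """'Same' 3x3 correlation with zero padding (loop version for clarity)."""
    xp = np.pad(x.astype(float), 1)
    out = np.empty_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = (xp[i:i + 3, j:j + 3] * k).sum()
    return out

def brightness_and_edges(img):
    """Collect average brightness Lavg and average edge intensity Bavg (step S1).

    img -- 2-D grayscale array.  The 1/(3x3) sampling keeps the centre pixel of
    each 3x3 cell; edges that do not fill a 3x3 cell are padded.
    """
    h, w = img.shape
    ph, pw = (-h) % 3, (-w) % 3              # padding so dims are multiples of 3
    padded = np.pad(img, ((0, ph), (0, pw)), mode="edge")
    thumb = padded[1::3, 1::3]               # (W/3) x (H/3) thumbnail of cell centres
    l_avg = thumb.mean()

    # standard Scharr kernels as a stand-in for the patent's optimized operator
    kx = np.array([[-3, 0, 3], [-10, 0, 10], [-3, 0, 3]], float)
    ky = kx.T
    gx = _conv3(img, kx)                     # horizontal gradient
    gy = _conv3(img, ky)                     # vertical gradient
    b_avg = np.hypot(gx, gy).mean()          # combine gradients into edge intensity
    return l_avg, b_avg
```

The ratio of the two returned values gives the correlation coefficient R used later for adaptive cropping.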
further, the specific method of step S2 includes: when the size S of the segmented image is obtained, a piecewise function is adopted:
[Formula image not reproduced: piecewise function for segmented image size S]
Here, Smin = 112 and Smax = 224; the length and width of the cropped block are both S.
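The piecewise function itself is only given as an image, so the sketch below assumes a clamped linear interpolation over the experimental range R ∈ [0.75, 3] described later in the specification: noisy or detail-rich scenes (small R) get small blocks, simple scenes (large R) get large blocks. The exact piecewise form in the patent may differ.

```python
S_MIN, S_MAX = 112, 224      # block size bounds from the specification
R_LO, R_HI = 0.75, 3.0       # empirical range of R = Bavg / Lavg

def crop_size(r):
    """Assumed piecewise crop size S from the brightness-edge ratio R.

    Below R_LO the scene is noisy/detailed, so the smallest blocks are used;
    above R_HI the scene is simple, so the largest blocks are used; in
    between we interpolate linearly (our assumption, not the patent's formula).
    """
    if r <= R_LO:
        return S_MIN
    if r >= R_HI:
        return S_MAX
    t = (r - R_LO) / (R_HI - R_LO)
    return int(round(S_MIN + t * (S_MAX - S_MIN)))
```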
Further, the deep learning network described in step S4 is evolved from the resnet and DARKnet networks, with residual transfer added after every CRP layer consisting of two convolution layers and one pooling layer, and a multitask training function added with reference to the MTCNN architecture, so as to output 3 classification results and 3 confidences for horizontal lane lines, vertical lane lines, and non-lane lines, together with the start-point and end-point coordinates of the line segment, for 10 output parameters in total.
Further, the specific method of step S6 is: and obtaining the position of each image block in the original image and the coordinates of line segments in the image blocks in the original image, then clustering in the full image range to obtain image blocks with the slopes and the spatial positions close to each other, fitting the coordinates of the line segments of the image blocks in the same class into a straight line by using a least square method, and finally outputting the coordinates of the lane lines to finish the automatic identification of the lane lines.
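The cluster-and-fit of step S6 can be sketched as follows. The patent does not specify the clustering rule, so the greedy grouping by slope and intercept below, and the tolerance parameters `slope_tol` and `b_tol`, are our stand-ins; `np.polyfit` performs the least-squares straight-line fit.

```python
import numpy as np

def fit_lane_lines(segments, slope_tol=0.15, b_tol=30.0):
    """Cluster block-level line segments and fit each cluster by least squares.

    segments -- list of (x1, y1, x2, y2) already mapped into original-image
    coordinates.  Returns one (slope, intercept) pair per fitted lane line.
    Near-vertical segments are handled crudely and are out of scope here.
    """
    clusters = []
    for x1, y1, x2, y2 in segments:
        m = (y2 - y1) / (x2 - x1) if x2 != x1 else float("inf")
        b = y1 - m * x1
        for c in clusters:                   # join the first compatible cluster
            if abs(c["m"] - m) < slope_tol and abs(c["b"] - b) < b_tol:
                c["pts"].extend([(x1, y1), (x2, y2)])
                break
        else:
            clusters.append({"m": m, "b": b, "pts": [(x1, y1), (x2, y2)]})
    lines = []
    for c in clusters:
        xs, ys = zip(*c["pts"])
        slope, intercept = np.polyfit(xs, ys, 1)   # least-squares line fit
        lines.append((float(slope), float(intercept)))
    return lines
```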
In another aspect of the present invention, a traffic lane line detection apparatus based on a deep learning multitasking network is further provided, including:
a brightness and edge information collection module: extracting average brightness information of the image and extracting the edge intensity of the image through a two-dimensional convolution operator;
the image self-adaptive clipping module: calculating a unit area edge intensity threshold according to the proportional relation between the average brightness and the edge intensity of the image, so as to adaptively cut the image, divide the image into a plurality of image blocks and prepare for image normalization;
an image normalization module: zooming the image cut according to the edge intensity threshold value according to a fixed size, and preparing to send the image into a deep learning model;
a composite convolutional neural network module: inputting image blocks with uniform size, and generating classification results of transverse and longitudinal lane lines and non-lane lines and lane line coordinates by a network model;
category analysis and coordinate restoration module: respectively extracting the coordinates of the lane line blocks according to the classification information output by the deep learning network model, and calculating corresponding coordinate values in the original image according to the size of the image blocks in the original image;
lane line fitting module: and fitting the coordinates of each image block of the same lane line by using the spatial correlation of each image block according to the classification information and the coordinate values of each image block, thereby outputting the final lane line coordinates.
Further, the brightness and edge information collecting module includes:
average luminance value unit: sampling the image based on the spatial correlation of pixels in the actual scene image, wherein the sampling rate is 1/(3 × 3), namely, 3 rows and 3 columns of the image are used for collecting a central point, if the size of the original image is W × H, a thumbnail with the size of (W/3) × (H/3) is obtained after sampling, padding processing is carried out on the edge which does not meet the size of 3 × 3, and the average brightness value is obtained:
[Formula image not reproduced: average brightness value Lavg]
mean edge information strength unit: performing two-dimensional convolution on the image to extract edge information, wherein the convolution operator is optimized based on a scharr operator and is divided into a horizontal direction and a vertical direction:
[Formula images not reproduced: horizontal and vertical convolution operators]
padding the edges to respectively obtain horizontal and vertical gradients, and combining the gradients to obtain the average edge information intensity of the image:
[Formula image not reproduced: average edge information intensity Bavg]
further, the image adaptive clipping module includes a segmentation function unit, and when the size S of the segmented image is obtained, the segmentation function is adopted:
[Formula image not reproduced: piecewise function for segmented image size S]
Here, Smin = 112 and Smax = 224; the length and width of the clipped image block are both S.
Further, the complex convolutional neural network module includes a deep learning network unit, evolved from the resnet and DARKnet networks, with residual transfer added after every CRP layer consisting of two convolution layers and one pooling layer; a multitask training function is added with reference to the MTCNN architecture, outputting 3 classification results and 3 confidences for horizontal lane lines, vertical lane lines, and non-lane lines, together with the start-point and end-point coordinates of the line segment, for 10 output parameters in total.
Further, the lane line fitting module includes a cluster fitting unit: and obtaining the position of each image block in the original image and the coordinates of line segments in the image blocks in the original image, then clustering in the full image range to obtain image blocks with the slopes and the spatial positions close to each other, fitting the coordinates of the line segments of the image blocks in the same class into a straight line by using a least square method, and finally outputting the coordinates of the lane lines to finish the automatic identification of the lane lines.
Compared with the prior art, the invention has the following beneficial effects:
the method is based on the multitask deep learning network training, has good adaptability in different brightness, color temperature and weather environments and various complex traffic scenes, meets the real-time requirement of intelligent traffic, and has the characteristics of accuracy, quickness and strong adaptability; compared with the traditional lane line detection method, the method has the advantages that the adaptability and the accuracy are obviously improved.
Drawings
FIG. 1 is a diagram illustrating the CRP layer, a common structure in the deep learning network, according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a training network structure according to an embodiment of the present invention;
fig. 3 is a schematic view of the overall structure of an embodiment of the present invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
The invention relates to a new traffic scene lane line detection method based on a deep learning multitask network, which has the following implementation mode:
in extracting brightness and edge informationBased on the spatial correlation of pixels in the actual scene image, firstly, down-sampling the image, wherein the sampling rate is 1/(3 × 3), namely, 3 rows and 3 columns of each image are used for collecting a central point, if the size of the original image is W × H, a thumbnail with the size of (W/3) × (H/3) is obtained after sampling, and the edge does not meet the padding processing of 3 × 3, so that the average brightness value is obtained:
[Formula image not reproduced: average brightness value Lavg]
and then, performing two-dimensional convolution on the image to extract edge information, wherein the convolution operator is optimized based on a scharr operator, so that the correlation of adjacent pixels is enhanced, and the method mainly comprises the following two directions:
[Formula images not reproduced: horizontal and vertical convolution operators]
padding the edge to obtain horizontal and vertical gradients, respectively, and combining the gradients to obtain the average edge information intensity of the image
[Formula image not reproduced: average edge information intensity Bavg]
The correlation coefficient of image brightness and edge information is R = Bavg / Lavg. According to experimental data, the brightness-edge correlation coefficient of a traffic scene lies between 0.75 and 3: a value below 0.75 indicates excessive noise or very rich detail in the image, and a value above 3 indicates a simple, uniform scene. The richer the noise and detail, the harder recognition becomes and the smaller the image blocks fed into the deep learning model must be. The segmented image size S is obtained with a piecewise function:
[Formula image not reproduced: piecewise function for segmented image size S]
here, Smin=112,SmaxAnd (224) after the length and the width of the cut image block are all S, normalizing the image, uniformly scaling the image block into 224 x 224, and sending the image block into the deep learning model.
The selected deep learning network is evolved from the resnet and DARKnet networks. Residual transfer is added after every CRP layer consisting of two convolution layers and one pooling layer, which increases the network depth and fully extracts image features while effectively avoiding the gradient vanishing problem. A multitask training function is added with reference to the MTCNN architecture, outputting 3 classification results and 3 confidences for horizontal lane lines, vertical lane lines, and non-lane lines, together with the start-point and end-point coordinates of the line segment, for 10 output parameters in total. The network structure is shown in fig. 1 and fig. 2: fig. 1 shows the CRP layer, the common building block of the network, comprising two 3 × 3 convolution layers, a ReLU layer, and a max pooling layer; fig. 2 is a diagram of the training network architecture according to the present invention.
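A single-channel forward pass of the CRP layer of fig. 1 can be sketched as follows. This is a minimal sketch under our own assumptions: the patent gives neither kernel weights, channel counts, nor the exact position of the ReLU, and the residual transfer across CRP layers is omitted here.

```python
import numpy as np

def _conv3(x, k):
    """'Same' 3x3 correlation with zero padding."""
    xp = np.pad(x.astype(float), 1)
    out = np.empty_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = (xp[i:i + 3, j:j + 3] * k).sum()
    return out

def crp_layer(x, w1, w2):
    """Single-channel CRP: conv3x3 -> conv3x3 -> ReLU -> 2x2 max pool."""
    h = _conv3(_conv3(x, w1), w2)
    h = np.maximum(h, 0.0)                   # ReLU
    H, W = h.shape
    h = h[:H - H % 2, :W - W % 2]            # trim odd edges before pooling
    Hp, Wp = h.shape
    # 2x2 max pooling with stride 2: halves each spatial dimension
    return h.reshape(Hp // 2, 2, Wp // 2, 2).max(axis=(1, 3))
```

Each CRP layer halves the spatial resolution, so a 224 × 224 input block shrinks by a factor of 2 per CRP stage along the feature extraction path.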
The classification and regression results output by the deep learning model are sent to the category analysis and coordinate restoration module, which obtains the position of each image block in the original image and the coordinates, in the original image, of the line segments within each block. Clustering is then performed over the whole image to group image blocks with similar slopes and spatial positions; the line segment coordinates of the image blocks in each class are fitted into a straight line by the least squares method, and finally the lane line coordinates are output, completing the automatic lane line identification process.
In summary, the new traffic scene lane line detection method based on the deep learning multitask network is composed of a brightness and edge information collecting module, an image self-adaptive clipping module, an image normalization module, a deep learning network feature extraction module, a category analysis and coordinate restoration module, a lane line fitting module and the like, has good adaptability in different brightness, color temperature and weather environments and various complex traffic scenes based on the multitask deep learning network training, meets the real-time requirements of intelligent traffic, and has the characteristics of accuracy, rapidness and strong adaptability. The structure of the traffic scene lane line detection system based on the deep learning multitask network is shown in fig. 3.
The invention can be implemented on mainstream embedded platforms equipped with a deep learning module. The training samples use 224 × 224 pictures of horizontal and vertical lane lines as positive samples and other pictures cropped from traffic scenes as negative samples; 20% of the positive samples are additionally selected as regression-coordinate samples. The training sample ratio is Hor Lane : Ver Lane : Neg Sample : Hor Landmark : Ver Landmark = 5 : 5 : 15 : 1 : 1. In figs. 1-2, after the common feature extraction layers CRP1 through CRP3, the classification and regression tasks extract features from a shared feature map, with classification feature extraction path CRP4-1 and CRP5-1 and regression feature extraction path CRP4-2 and CRP5-2. The classification training loss evaluation function adopts a sparse cross-entropy function:
[Formula image not reproduced: sparse cross-entropy classification loss]
where k is the number of classes and pk is the class confidence after softmax, with 0 ≤ pk ≤ 1; the sum of the cross entropies of the three classes gives the sparse cross-entropy classification loss value. The regression coordinate loss function adopts a sum-of-squared-Euclidean-distances function:
[Formula image not reproduced: sum-of-squared-Euclidean-distances regression loss]
where xk, yk and Xk, Yk respectively represent the predicted regression coordinates and the actual label-calibrated values. The final loss function is the weighted sum of the classification loss function and the regression loss function; the two proportionality coefficients weighting the loss values are both set to 0.5 in training:
[Formula image not reproduced: weighted total loss function]
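The multitask loss described above can be sketched as follows. Function name and signature are ours; the sketch applies softmax to raw logits before taking the cross entropy, which is an assumption about where softmax sits relative to the loss.

```python
import numpy as np

def total_loss(logits, cls_label, pred_xy, true_xy, w_cls=0.5, w_reg=0.5):
    """Weighted multitask loss; both weights are 0.5, as in training.

    logits    -- raw scores for the 3 classes (hor lane, ver lane, non-lane)
    cls_label -- integer ground-truth class index
    pred_xy / true_xy -- flat (x1, y1, x2, y2) segment coordinates
    """
    # sparse cross entropy: -log of the softmax probability of the true class
    z = logits - logits.max()                # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    ce = -np.log(p[cls_label])
    # regression loss: sum of squared Euclidean distances over the coordinates
    reg = np.sum((np.asarray(pred_xy, float) - np.asarray(true_xy, float)) ** 2)
    return w_cls * ce + w_reg * reg
```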
through tests, compared with the traditional lane line detection method, the new traffic scene lane line detection method based on the deep learning multitask network has the advantages that the adaptability and the accuracy are obviously improved, the performance is good under different brightness, color temperature and weather environments and various complex traffic scenes, the real-time requirement of intelligent traffic is met, the method has the characteristics of accuracy, rapidness and strong adaptability, and the requirement of front-end equipment of current traffic products is met.
Although the present invention has been described with reference to the preferred embodiments, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A traffic lane line detection method based on a deep learning multitask network is characterized by comprising the following steps:
s1, collecting brightness and edge information: extracting average brightness information of the image and extracting the edge intensity of the image through a two-dimensional convolution operator;
s2, image self-adaptive cutting: cutting the image in a self-adaptive manner according to the proportional relation between the average brightness and the edge intensity of the image, dividing the image into a plurality of image blocks and preparing for image normalization;
s3, image normalization: zooming the image cut according to the edge intensity threshold value according to a fixed size, and preparing to be sent to deep learning;
s4, deep learning: inputting image blocks with uniform sizes into a deep learning network, and generating classification results of transverse and longitudinal lane lines and non-lane lines and lane line coordinates by a network model;
s5, category analysis and coordinate restoration: respectively extracting the coordinates of the lane line blocks according to the classification information output by the deeply-learned network model, and calculating corresponding coordinate values in the original image according to the size of the image blocks in the original image;
s6, lane line fitting: and fitting the coordinates of each image block of the same lane line by using the spatial correlation of each image block according to the classification information and the coordinate values of each image block, thereby outputting the final lane line coordinates.
2. The method according to claim 1, wherein the specific method of step S1 includes:
s11, sampling the image based on the spatial correlation of the pixels in the actual scene image, where the sampling rate is 1/(3 × 3), that is, the central point is collected every 3 rows and 3 columns, if the original size is W × H, the thumbnail with size (W/3) × (H/3) is obtained after sampling, and the padding processing is performed if the edge does not satisfy 3 × 3, so as to obtain the average brightness value:
[Formula image not reproduced: average brightness value Lavg]
s12, performing two-dimensional convolution on the image to extract edge information, wherein the convolution operator is optimized based on a scharr operator and is divided into a horizontal direction and a vertical direction:
[Formula images not reproduced: horizontal and vertical convolution operators]
padding the edges to respectively obtain horizontal and vertical gradients, and combining the gradients to obtain the average edge information intensity of the image:
[Formula image not reproduced: average edge information intensity Bavg]
3. the method according to claim 1, wherein the specific method of step S2 includes: when the size S of the segmented image is obtained, a piecewise function is adopted:
[Formula image not reproduced: piecewise function for segmented image size S]
Here, Smin = 112 and Smax = 224; the length and width of the clipped image block are both S.
4. The method as claimed in claim 1, wherein the deep learning network of step S4 is evolved from resnet and DARKnet networks, residual transfer being added after every CRP layer consisting of two convolution layers and one pooling layer, and a multitask training function being added with reference to the MTCNN architecture, outputting 3 classification results and 3 confidences for horizontal lane lines, vertical lane lines, and non-lane lines, together with the line segment start-point and end-point coordinates, for 10 output parameters in total.
5. The method according to claim 1, wherein the specific method of step S6 is: and obtaining the position of each image block in the original image and the coordinates of line segments in the image blocks in the original image, then clustering in the full image range to obtain image blocks with the slopes and the spatial positions close to each other, fitting the coordinates of the line segments of the image blocks in the same class into a straight line by using a least square method, and finally outputting the coordinates of the lane lines to finish the automatic identification of the lane lines.
6. A traffic lane line detection device based on a deep learning multitask network is characterized by comprising:
a brightness and edge information collection module: extracting average brightness information of the image and extracting the edge intensity of the image through a two-dimensional convolution operator;
the image self-adaptive clipping module: cutting the image in a self-adaptive manner according to the proportional relation between the average brightness and the edge intensity of the image, dividing the image into a plurality of image blocks and preparing for image normalization;
an image normalization module: zooming the image cut according to the edge intensity threshold value according to a fixed size, and preparing to send the image into a deep learning model;
the deep learning network module: inputting image blocks with uniform size, and generating classification results of transverse and longitudinal lane lines and non-lane lines and lane line coordinates by a network model;
category analysis and coordinate restoration module: respectively extracting the coordinates of the lane line blocks according to the classification information output by the deep learning network model, and calculating corresponding coordinate values in the original image according to the size of the image blocks in the original image;
lane line fitting module: and fitting the coordinates of each image block of the same lane line by using the spatial correlation of each image block according to the classification information and the coordinate values of each image block, thereby outputting the final lane line coordinates.
7. The apparatus of claim 6, wherein the brightness and edge information collecting module comprises:
average luminance value unit: sampling the image based on the spatial correlation of pixels in the actual scene image, wherein the sampling rate is 1/(3 × 3), namely, 3 rows and 3 columns of the image are used for collecting a central point, if the size of the original image is W × H, a thumbnail with the size of (W/3) × (H/3) is obtained after sampling, padding processing is carried out on the edge which does not meet the size of 3 × 3, and the average brightness value is obtained:
[Formula image not reproduced: average brightness value Lavg]
mean edge information strength unit: performing two-dimensional convolution on the image to extract edge information, wherein the convolution operator is optimized based on a scharr operator and is divided into a horizontal direction and a vertical direction:
[Formula images not reproduced: horizontal and vertical convolution operators]
padding the edges to respectively obtain horizontal and vertical gradients, and combining the gradients to obtain the average edge information intensity of the image:
[Formula image not reproduced: average edge information intensity Bavg]
8. the apparatus of claim 6, wherein the image adaptive cropping module comprises a segmentation function unit, and when the size S of the segmented image is obtained, the segmentation function is adopted:
[Formula image not reproduced: piecewise function for segmented image size S]
Here, Smin = 112 and Smax = 224; the length and width of the clipped image block are both S.
9. The apparatus of claim 6, wherein the complex convolutional neural network module comprises a deep learning network unit evolved from resnet and DARKnet networks, with residual transfer added after every CRP layer consisting of two convolution layers and one pooling layer; a multitask training function is added with reference to the MTCNN architecture, outputting 3 classification results and 3 confidences for horizontal lane lines, vertical lane lines, and non-lane lines, together with the start-point and end-point coordinates of the line segment, for 10 output parameters in total.
10. The apparatus of claim 6, wherein the lane line fitting module comprises a cluster fitting unit: and obtaining the position of each image block in the original image and the coordinates of line segments in the image blocks in the original image, then clustering in the full image range to obtain image blocks with the slopes and the spatial positions close to each other, fitting the coordinates of the line segments of the image blocks in the same class into a straight line by using a least square method, and finally outputting the coordinates of the lane lines to finish the automatic identification of the lane lines.
CN201811222879.9A 2018-10-19 2018-10-19 Traffic lane line detection method and device based on deep learning multitask network Active CN109345547B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811222879.9A CN109345547B (en) 2018-10-19 2018-10-19 Traffic lane line detection method and device based on deep learning multitask network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811222879.9A CN109345547B (en) 2018-10-19 2018-10-19 Traffic lane line detection method and device based on deep learning multitask network

Publications (2)

Publication Number Publication Date
CN109345547A CN109345547A (en) 2019-02-15
CN109345547B true CN109345547B (en) 2021-08-24

Family

ID=65311326

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811222879.9A Active CN109345547B (en) 2018-10-19 2018-10-19 Traffic lane line detection method and device based on deep learning multitask network

Country Status (1)

Country Link
CN (1) CN109345547B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109961105B (en) * 2019-04-08 2020-03-27 上海市测绘院 High-resolution remote sensing image classification method based on multitask deep learning
TWI694019B (en) * 2019-06-05 2020-05-21 國立中正大學 Lane line detection and tracking method
CN110363182B (en) * 2019-07-24 2021-06-18 北京信息科技大学 Deep learning-based lane line detection method
CN111400040B (en) * 2020-03-12 2023-06-20 重庆大学 Industrial Internet system based on deep learning and edge calculation and working method
CN112966639B (en) * 2021-03-22 2024-04-26 新疆爱华盈通信息技术有限公司 Vehicle detection method, device, electronic equipment and storage medium
CN113516010B (en) * 2021-04-08 2024-09-06 柯利达信息技术有限公司 Intelligent internet access recognition and processing system for foreign matters on expressway
CN113313071A (en) * 2021-06-28 2021-08-27 浙江同善人工智能技术有限公司 Road area identification method and system
CN113822218B (en) * 2021-09-30 2024-08-02 厦门汇利伟业科技有限公司 Lane line detection method and computer readable storage medium
CN114022863B (en) * 2021-10-28 2022-10-11 广东工业大学 Deep learning-based lane line detection method, system, computer and storage medium
CN114554204A (en) * 2022-01-20 2022-05-27 珠海全志科技股份有限公司 Method and device for adjusting image quality of coded image
CN115376082B (en) * 2022-08-02 2023-06-09 北京理工大学 Lane line detection method integrating traditional feature extraction and deep neural network
CN116543365B (en) * 2023-07-06 2023-10-10 广汽埃安新能源汽车股份有限公司 Lane line identification method and device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102073848A (en) * 2010-12-31 2011-05-25 Shenzhen Yongda Electronics Co., Ltd. Intelligent optimization-based road recognition system and method
CN102567713A (en) * 2010-11-30 2012-07-11 Fuji Jukogyo Kabushiki Kaisha Image processing apparatus
CN106203398A (en) * 2016-07-26 2016-12-07 Neusoft Corporation Method, device and equipment for detecting a lane boundary
CN108009524A (en) * 2017-12-25 2018-05-08 Northwestern Polytechnical University Lane line detection method based on a fully convolutional network

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105046235B (en) * 2015-08-03 2018-09-07 Baidu Online Network Technology (Beijing) Co., Ltd. Lane line recognition modeling method and device, and recognition method and device
JP2018116369A (en) * 2017-01-16 2018-07-26 Soken, Inc. Lane recognition device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102567713A (en) * 2010-11-30 2012-07-11 Fuji Jukogyo Kabushiki Kaisha Image processing apparatus
CN102073848A (en) * 2010-12-31 2011-05-25 Shenzhen Yongda Electronics Co., Ltd. Intelligent optimization-based road recognition system and method
CN106203398A (en) * 2016-07-26 2016-12-07 Neusoft Corporation Method, device and equipment for detecting a lane boundary
CN108009524A (en) * 2017-12-25 2018-05-08 Northwestern Polytechnical University Lane line detection method based on a fully convolutional network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A novel variable block-size image compression based on edge detection; Thuniki Yashwanth Reddy et al.; 2017 Conference on Information and Communication Technology (CICT'17); 2018-04-19; full text *
Research on improving the image threshold segmentation algorithm; Jia Yun et al.; Optical Technique; 2005-01-20; Vol. 31, No. 1; full text *

Also Published As

Publication number Publication date
CN109345547A (en) 2019-02-15

Similar Documents

Publication Publication Date Title
CN109345547B (en) Traffic lane line detection method and device based on deep learning multitask network
CN111784685B (en) Power transmission line defect image identification method based on cloud edge cooperative detection
CN112101175A (en) Expressway vehicle detection and multi-attribute feature extraction method based on local images
CN113065558A (en) Lightweight small target detection method combined with attention mechanism
CN111160205B (en) Method for uniformly detecting multiple embedded types of targets in traffic scene end-to-end
CN110399840B (en) Rapid lawn semantic segmentation and boundary detection method
CN110084302B (en) Crack detection method based on remote sensing image
CN116030074A (en) Identification method, re-identification method and related equipment for road diseases
CN114973002A (en) Improved YOLOv5-based ear detection method
CN113723377A (en) Traffic sign detection method based on LD-SSD network
CN104361357B (en) Photo album categorizing system and sorting technique based on image content analysis
CN111915583A (en) Vehicle and pedestrian detection method based on vehicle-mounted thermal infrared imager in complex scene
CN113011308A (en) Pedestrian detection method introducing attention mechanism
CN115272204A (en) Bearing surface scratch detection method based on machine vision
CN112418087A (en) Underwater video fish identification method based on neural network
CN116129291A (en) Unmanned aerial vehicle animal husbandry-oriented image target recognition method and device
CN116052090A (en) Image quality evaluation method, model training method, device, equipment and medium
CN105893970A (en) Nighttime road vehicle detection method based on luminance variance characteristics
CN116824347A (en) Road crack detection method based on deep learning
CN116597270A (en) Road damage target detection method based on attention mechanism integrated learning network
CN111027564A (en) Low-illumination imaging license plate recognition method and device based on deep learning integration
CN110751667A (en) Method for detecting infrared dim small target under complex background based on human visual system
CN104299234B (en) Method and system for removing rain from video data
CN111597939B (en) High-speed rail line nest defect detection method based on deep learning
CN107992799A (en) Preprocessing method for smoke detection applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210901

Address after: No.8, Haitai Huake 2nd Road, Huayuan Industrial Zone, Binhai New Area, Tianjin, 300450

Patentee after: TIANDY TECHNOLOGIES Co.,Ltd.

Address before: 300384 a222, building 4, No. 8, Haitai Huake Second Road, Huayuan Industrial Zone (outside the ring), high tech Zone, Binhai New Area, Tianjin

Patentee before: TIANJIN TIANDI WEIYE INVESTMENT MANAGEMENT Co.,Ltd.
