
CN113408413B - Emergency lane identification method, system and device - Google Patents

Emergency lane identification method, system and device

Info

Publication number
CN113408413B
CN113408413B (application CN202110678895.4A)
Authority
CN
China
Prior art keywords
feature
target image
features
road
emergency lane
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110678895.4A
Other languages
Chinese (zh)
Other versions
CN113408413A (en)
Inventor
马伟
章勇
毛晓蛟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Keda Technology Co Ltd
Original Assignee
Suzhou Keda Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Keda Technology Co Ltd filed Critical Suzhou Keda Technology Co Ltd
Priority to CN202110678895.4A priority Critical patent/CN113408413B/en
Publication of CN113408413A publication Critical patent/CN113408413A/en
Application granted granted Critical
Publication of CN113408413B publication Critical patent/CN113408413B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses an emergency lane identification method, system and device. The method comprises: acquiring a road segmentation result of a target image and extracting a first feature from the road segmentation result, the road segmentation result presenting a road that contains an emergency lane; extracting a second feature from the target image and superimposing the first feature and the second feature, so as to identify the number of lanes in the target image based on the superimposed feature; and determining, according to the identified number of lanes, the area occupied by the emergency lane in the road of the target image. The technical scheme provided by the invention improves the identification accuracy of the emergency lane.

Description

Emergency lane identification method, system and device
Technical Field
The invention relates to the technical field of image recognition, and in particular to an emergency lane identification method, system and device.
Background
An emergency lane is reserved for temporary use by traffic authorities or accident vehicles in an emergency. In practice, however, some vehicles occupy the emergency lane without justification, so that rescue and emergency handling cannot proceed normally. Traffic authorities therefore need to identify vehicles that illegally occupy the emergency lane, and the precondition for doing so is that the area of the emergency lane can be accurately identified in the monitoring image.
Owing to their high mobility, unmanned aerial vehicles (UAVs) are increasingly used to identify violating vehicles. However, the viewing angle of a UAV camera differs greatly from that of an ordinary fixed surveillance camera, so existing algorithms cannot accurately identify the emergency lane in monitoring images captured by a UAV.
Disclosure of Invention
In view of this, embodiments of the present invention provide a method, a system, and a device for identifying an emergency lane, which can improve the identification accuracy of the emergency lane.
The invention provides an emergency lane identification method, comprising: acquiring a road segmentation result of a target image and extracting a first feature from the road segmentation result, the road segmentation result presenting a road that contains an emergency lane; extracting a second feature from the target image and superimposing the first feature and the second feature, so as to identify the number of lanes in the target image based on the superimposed feature; and determining, according to the identified number of lanes, the area occupied by the emergency lane in the road of the target image.
In one embodiment, acquiring the road segmentation result of the target image comprises: extracting multi-scale segmentation features of the target image with an initial model, and fusing the multi-scale segmentation features with a hierarchical model to generate a plurality of segmentation fusion features, wherein segmentation features of specified scales are introduced into the top layer and the bottom layer of the hierarchical model for feature fusion; and transforming each segmentation fusion feature into a transformation feature of a uniform scale, superimposing the transformation features, and generating the road segmentation result of the target image based on the superimposed transformation features.
Extracting features with both the initial model and the hierarchical model strengthens the semantic features of the shallow layers and the spatial features of the deep layers of the network.
In one embodiment, the hierarchical model comprises at least a first path and a second path arranged in parallel: the first path upsamples the multi-scale segmentation features layer by layer, and the second path downsamples the output features of each layer of the first path layer by layer. A first specified-scale segmentation feature is introduced into the top layer of the first path for feature fusion, and a second specified-scale segmentation feature is introduced into the bottom layer of the second path for feature fusion, the scale of the first specified-scale segmentation feature being smaller than that of the second.
In one embodiment, the segmentation fusion features output layer by layer by the second path are transformed into transformation features of a uniform scale.
In the present application, the output features of the first path are fused again through the second path, which reduces the loss of high-level information and improves the network's ability to learn features.
In one embodiment, a first sub-network extracts the first feature from the road segmentation result, a second sub-network extracts the second feature from the target image, and a classification layer predicts the number of lanes in the target image from the superimposed feature; in the superimposed feature, foreground features are highlighted and background features are suppressed.
Because the superimposed feature emphasizes the foreground and suppresses the background, the subsequent network is guided to focus on the foreground. This reduces the complexity of data processing on the one hand and improves the accuracy of the detected lane count on the other.
In one embodiment, determining the area occupied by the emergency lane in the road of the target image comprises: acquiring a first standard width of an emergency lane and a second standard width of a non-emergency lane, and calculating, from the first standard width and the second standard width, an emergency lane area proportion matching the number of lanes; and dividing, in the road of the target image, an area that meets the emergency lane area proportion, and taking the divided area as the area occupied by the emergency lane.
In one embodiment, the area occupied by the emergency lane is separated from the area occupied by the non-emergency lanes by an emergency lane line, which is determined as follows: drawing a plurality of reference lines in the road segmentation result, each reference line forming intersection points with the boundary of the road region in the road segmentation result; determining, on each reference line within the road region and according to the emergency lane area proportion, the target position corresponding to the emergency lane line; and taking the straight line drawn through the target positions as the emergency lane line.
Determining the emergency lane area from the number of lanes and the standard widths, rather than detecting lane lines directly, overcomes the inability of existing lane line detection methods to handle UAV images.
In one embodiment, the method further comprises: extracting multi-scale vehicle features of the target image with the initial model, fusing the multi-scale vehicle features with the hierarchical model to generate a plurality of vehicle fusion features, and identifying the vehicles contained in the target image based on the vehicle fusion features, wherein vehicle features of specified scales are introduced into the top layer and the bottom layer of the hierarchical model for feature fusion; and if a target vehicle identified in the target image is travelling in the area occupied by the emergency lane, judging the target vehicle to be a violating vehicle and outputting its vehicle information.
Extracting features with the initial model and the hierarchical model strengthens the semantic features of the shallow layers and the spatial features of the deep layers of the network, and combining the vehicle positions with the position of the emergency lane allows violating vehicles to be identified effectively.
In another aspect, the invention provides an emergency lane identification system, comprising: a feature extraction unit for acquiring a road segmentation result of a target image and extracting a first feature from the road segmentation result, the road segmentation result presenting a road that contains an emergency lane; a lane number recognition unit for extracting a second feature of the target image and superimposing the first feature and the second feature, so as to identify the number of lanes in the target image based on the superimposed feature; and an emergency lane determination unit for determining, according to the identified number of lanes, the area occupied by the emergency lane in the road of the target image.
The invention also provides an emergency lane identification device comprising a processor and a memory, the memory storing a computer program which, when executed by the processor, implements the emergency lane identification method described above.
In another aspect, the present invention further provides a computer-readable storage medium for storing a computer program, which when executed by a processor, implements the emergency lane identification method described above.
According to the technical scheme above, the original target image and its road segmentation result are analysed jointly to identify the area occupied by the emergency lane in the target image. Specifically, after the respective features of the road segmentation result and the target image are extracted, the two features are superimposed, and the number of lanes in the target image is identified from the superimposed feature. When the number of lanes is detected in this way, the features of the road segmentation result confine the detection to the foreground, which reduces the complexity of data processing and improves the accuracy of the detected lane count.
Once the number of lanes is identified, the area occupied by the emergency lane can be determined in the road of the target image. Because a target image shot by a UAV has a high, wide field of view, conventional lane line detection does not transfer well to such images. The present application avoids detecting the lane lines directly: it first identifies the number of lanes from the superimposed feature and then estimates the emergency lane area from that number. Inferring the emergency lane area from the number of lanes, combined with the features of the road segmentation result, sidesteps the shortcomings of prior-art algorithms and improves the detection accuracy of the emergency lane.
Drawings
The features and advantages of the present invention will be more clearly understood by reference to the accompanying drawings, which are illustrative and are not to be construed as limiting the invention in any way. In the drawings:
fig. 1 is a schematic diagram illustrating a method for recognizing an emergency lane according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating road segmentation results in one embodiment of the present invention;
FIG. 3 is a diagram illustrating the generation of road segmentation results according to an embodiment of the present invention;
FIG. 4 illustrates a schematic view of the identification of the number of lanes in one embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating a method for determining an emergency lane line according to an embodiment of the present invention;
FIG. 6 shows a functional block diagram of an emergency lane identification system in one embodiment of the present invention;
fig. 7 is a schematic structural view showing an emergency lane recognition apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments are described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the described embodiments without inventive effort fall within the scope of the present invention.
The emergency lane identification method provided by the present application can be applied to devices with a high shooting field of view, such as unmanned aerial vehicles. With this method, emergency lanes can be effectively identified in the images such devices acquire.
Referring to fig. 1, a method for identifying an emergency lane according to an embodiment of the present application may include the following steps.
S1: acquiring a road segmentation result of a target image, and extracting a first feature of the road segmentation result; the road segmentation result is used for displaying a road containing an emergency lane.
In this embodiment, for a target image to be detected, the road segmentation result of the target image is obtained first. The road segmentation result presents a road that contains an emergency lane and distinguishes the foreground of the target image from its background: the foreground is the road to be detected, and the background is the region of the target image other than that road. Referring to fig. 2, the road segmentation result may be a binary image corresponding to the target image, in which the region formed by white pixels is the foreground and the region formed by black pixels is the background.
In one embodiment, the target image may be processed with an instance segmentation network, for example Mask R-CNN, SOLOv2 or DeepMask, to generate the corresponding road segmentation result. In practice, the instance segmentation network can be chosen flexibly according to the scene. For example, Mask R-CNN usually detects a target frame first and then performs instance segmentation within it, whereas SOLOv2 outputs the segmented target area directly, so for roads with wide coverage SOLOv2 generally produces a better road segmentation result.
In a specific application example, when the SOLOv2 model is used to generate the road segmentation result, the original SOLOv2 model can serve as the initial model, and feature extraction is performed by the initial model and a hierarchical model in combination, in order to strengthen the semantic features of the shallow layers and the spatial features of the deep layers of the model.
Specifically, referring to fig. 3, the initial model extracts the multi-scale segmentation features of the target image, and the hierarchical model then fuses them. In this embodiment, the initial model serves as the backbone of the training process and is combined with the hierarchical model once its design is complete. As shown in fig. 3, the leftmost bottom-to-top multi-level downsampling is performed by the initial model, and each downsampling step produces a segmentation feature at the corresponding scale. After several downsampling steps, multi-scale segmentation features of different resolutions are obtained; in fig. 3, their resolutions are 76 × 76, 38 × 38 and 19 × 19. To strengthen the semantic features of the shallow layers and the spatial features of the deep layers of the initial model, the hierarchical model continues to fuse the multi-scale segmentation features, generating a plurality of segmentation fusion features.
Specifically, the hierarchical model comprises at least a first path and a second path arranged in parallel. As shown in fig. 3, the first path upsamples the multi-scale segmentation features layer by layer (from 19 × 19 to 76 × 76), and the second path downsamples the output features of each layer of the first path layer by layer (from 76 × 76 to 19 × 19). During the upsampling of the first path, the deep output feature of the initial model (the 19 × 19 feature) is processed by a 1 × 1 convolution to serve as the input feature; upsampling this input feature by a factor of 2 yields a 38 × 38 feature, which is combined with the 38 × 38 feature generated by the initial model (every feature taken from the initial model is first passed through a 1 × 1 convolution), giving the 38 × 38 fusion feature of the first path. By analogy, the first path and the second path perform feature fusion layer by layer. Note that a typical hierarchical structure contains only the first path; the present application adds the second path so that the output features of the first path are fused again, which reduces the loss of high-level information and improves the network's ability to learn features.
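To make the two-path structure concrete, the following PyTorch sketch reproduces the fusion just described for the 76 × 76 / 38 × 38 / 19 × 19 scales. It is a minimal sketch rather than the patented implementation: the channel widths, element-wise addition as the fusion operator, and bilinear resampling for the injected factor-4 downsample are assumptions of the sketch, not details fixed by the text.

```python
import torch.nn as nn
import torch.nn.functional as F

class TwoPathFusion(nn.Module):
    """Hierarchical model: top-down first path plus bottom-up second path."""

    def __init__(self, in_channels=(256, 512, 1024), mid=256):  # channel widths assumed
        super().__init__()
        # 1x1 convolutions applied to every backbone feature before fusion
        self.lateral = nn.ModuleList([nn.Conv2d(c, mid, 1) for c in in_channels])
        # stride-2 convolutions for the layer-by-layer downsampling of the second path
        self.down = nn.ModuleList([nn.Conv2d(mid, mid, 3, stride=2, padding=1) for _ in range(2)])

    @staticmethod
    def _resample(x, size):
        return F.interpolate(x, size=size, mode="bilinear", align_corners=False)

    def forward(self, c76, c38, c19):
        # Backbone (initial model) features at 76x76, 38x38 and 19x19.
        p76, p38, p19 = (conv(c) for conv, c in zip(self.lateral, (c76, c38, c19)))

        # First path, top-down (19 -> 38 -> 76). Its top layer also receives the
        # first specified-scale feature: the 76x76 backbone feature downsampled
        # by a factor of 4 to 19x19.
        top = p19 + self._resample(p76, p19.shape[-2:])
        f38 = p38 + self._resample(top, p38.shape[-2:])
        f76 = p76 + self._resample(f38, p76.shape[-2:])

        # Second path, bottom-up (76 -> 38 -> 19), fusing the first path's outputs
        # again. Its bottom layer receives the second specified-scale feature:
        # the 76x76 backbone feature itself.
        s76 = f76 + p76
        s38 = self.down[0](s76) + f38
        s19 = self.down[1](s38) + top
        return s76, s38, s19  # segmentation fusion features, output layer by layer
```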
In addition, in one embodiment, specified-scale segmentation features generated by the initial model are introduced into the feature fusion process, at the top layer and the bottom layer of the hierarchical model. The input features of the target image are thus fully exploited during feature fusion: the shallow input information generated by the initial model is acquired, and the deep information produced by the first path is combined with it, which benefits the generation of the road segmentation result.
Specifically, referring to fig. 3, a first specified-scale segmentation feature generated during the downsampling of the initial model is introduced into the top layer of the first path for feature fusion, and a second specified-scale segmentation feature is introduced into the bottom layer of the second path for feature fusion. Since the top layer of the first path corresponds to a smaller scale than the bottom layer of the second path, the scale of the first specified-scale segmentation feature should also be smaller than that of the second. In a specific application example, the first specified-scale segmentation feature may be the 19 × 19 feature obtained by downsampling the 76 × 76 feature of the initial model by a factor of 4, and the second specified-scale segmentation feature may be the 76 × 76 feature of the initial model itself.
In the present embodiment, extracting features of the target image with the initial model and the hierarchical model yields a plurality of segmentation fusion features, namely the features output by each layer of the second path. After the segmentation fusion features output by each layer are obtained, they are transformed into transformation features of a uniform scale. Specifically, referring to fig. 3, the segmentation fusion feature of each layer can be transformed into a transformation feature of consistent scale (in fig. 3, all are transformed to 76 × 76) by a CoordConv-based procedure, which in practice may comprise 3 × 3 convolution, group normalization, an activation function and up to two bilinear interpolations. Once the transformation features of each layer are obtained, they are superimposed, and the road segmentation result of the target image is generated from the superimposed transformation features.
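A minimal sketch of this uniform-scale transformation step follows, assuming the processing chain named above (3 × 3 convolution, group normalization, activation, bilinear upsampling). The explicit coordinate channels are what gives CoordConv its name; ReLU, the group count and the channel width are assumed choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoordConvUp(nn.Module):
    """Transform one segmentation fusion feature to the common 76x76 scale."""

    def __init__(self, channels=256, n_upsample=0):
        super().__init__()
        # two coordinate channels are concatenated, hence channels + 2 inputs
        self.conv = nn.Conv2d(channels + 2, channels, 3, padding=1)
        self.norm = nn.GroupNorm(32, channels)
        self.n_upsample = n_upsample  # 0 for the 76x76 layer, 1 for 38x38, 2 for 19x19

    def forward(self, x):
        b, _, h, w = x.shape
        # normalized x/y coordinate maps: the extra inputs that define CoordConv
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        x = F.relu(self.norm(self.conv(torch.cat([x, ys, xs], dim=1))))
        for _ in range(self.n_upsample):  # bilinear interpolation up to 76x76
            x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
        return x
```

The uniformly scaled outputs are then summed, and a final mask head turns the superimposed feature into the binary road segmentation result.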
Combining the initial model and the hierarchical model in this way further strengthens the shallow information of the target image, including shape and colour, so that the edge shape of the road is identified more accurately and the accuracy of the road segmentation result is improved.
Referring to fig. 4, in the present embodiment, after the road segmentation result of the target image is obtained, the first feature is extracted from it; specifically, the lightweight feature extraction network MobileNetV3 may be used. In the first feature, the foreground and the background are well separated, which provides a reference for the subsequent steps.
S3: extracting a second feature of the target image, and superimposing the first feature and the second feature, so as to identify the number of lanes in the target image based on the superimposed feature.
Referring to fig. 4, in the present embodiment, the second feature of the target image may be extracted in the same way as the first, for example with the lightweight feature extraction network MobileNetV3. The two feature extraction networks are, respectively, a first sub-network that extracts the first feature and a second sub-network that extracts the second feature. In practice, to ensure that the extracted first and second features have the same scale, the road segmentation result and the target image may be scaled to the same size before being fed to their respective sub-networks.
In this embodiment, after the first feature and the second feature are extracted by the two sub-networks, they are combined by a superposition operation. In practice, the superposition may be an addition or a multiplication of the feature values at corresponding positions in the two features. Because the road segmentation result represents foreground and background as a binary image, the difference between foreground and background in the first feature is also large (for example, the feature value of each foreground point may be 1 and that of each background point 0). Consequently, in the superimposed feature the foreground is emphasized and the background is suppressed, which guides the subsequent network to focus on the foreground. For example, if the superposition multiplies the feature values at corresponding positions, then, since each background feature value is 0, the background points are filtered out and the foreground points are retained. This reduces the complexity of data processing on the one hand and improves the accuracy of the detected lane count on the other. The superimposed feature is fed into a classification layer for prediction, yielding the number of lanes in the target image. The classification layer can be implemented with any mature classifier, such as a naive Bayes classifier, a support vector machine or a k-nearest-neighbour classifier; trained on a large number of samples, it learns to identify the number of lanes contained in the target image.
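Under stated assumptions, the lane-count head might look as follows: two torchvision MobileNetV3-Small backbones for the two sub-networks, element-wise multiplication as the superposition, and a linear classification layer sized for up to max_lanes lanes. The patent leaves the classifier open (naive Bayes, SVM and k-NN are equally admissible), so this head is one concrete choice, not the method itself.

```python
import torch.nn as nn
from torchvision.models import mobilenet_v3_small

class LaneCountNet(nn.Module):
    def __init__(self, max_lanes=8):  # upper bound on the lane count is assumed
        super().__init__()
        self.first_net = mobilenet_v3_small().features   # first sub-network (road mask)
        self.second_net = mobilenet_v3_small().features  # second sub-network (target image)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classify = nn.Linear(576, max_lanes + 1)    # 576 = MobileNetV3-Small feature width

    def forward(self, road_mask, image):
        # Both inputs are pre-scaled to the same size; the 1-channel binary mask
        # is repeated to 3 channels so the stock backbone accepts it.
        f1 = self.first_net(road_mask.repeat(1, 3, 1, 1))
        f2 = self.second_net(image)
        fused = f1 * f2  # multiplication zeroes out background positions
        return self.classify(self.pool(fused).flatten(1))  # logits over lane counts

# training, matching the cross-entropy loss mentioned below:
#   loss = torch.nn.functional.cross_entropy(net(mask, img), lane_count_labels)
```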
In the training stage, the number of lanes in the target image can be labeled manually, and the training error is computed by comparing the prediction with the labels. The two sub-networks and the classification layer are corrected continually according to the training error, yielding an accurate lane-count recognition network. The training error can be expressed by a cross-entropy loss function or a loss function of similar effect.
In a specific application scenario, a road usually comprises carriageways in two driving directions, so the lane count of each direction may be labeled separately during manual labeling, and the classification layer then outputs both lane counts. Each lane count is corrected in the manner described above, which is not repeated here.
S5: determining, according to the identified number of lanes, the area occupied by the emergency lane in the road of the target image.
In the present embodiment, once the number of lanes contained in the road is identified, the area occupied by the emergency lane can be determined in the road from the layout of emergency and non-emergency lanes and their respective standard widths.
Generally, there is only one emergency lane in the road, and it is usually the rightmost lane in the direction of travel. Furthermore, the first standard width (of the emergency lane) is typically 3.5 meters and the second standard width (of a non-emergency lane) is typically 3.75 meters. From this information, combined with the number of lanes in the road, the area occupied by the emergency lane can be determined.
In one embodiment, the emergency lane area proportion matching the number of lanes is calculated from the first standard width and the second standard width. The widths of the emergency lane and of a non-emergency lane stand in the ratio 3.5 : 3.75; for example, for three lanes (one emergency lane and two non-emergency lanes), the emergency lane accounts for 3.5 / (3.5 + 2 × 3.75) ≈ 0.32 of the road width. This proportion indicates the share of the road area occupied by the emergency lane, and an area meeting it can be divided off in the road of the target image as the area occupied by the emergency lane. For example, if the emergency lane is on the right side of the road, an area meeting the proportion may be divided off from the right side of the road.
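The proportion calculation is simple enough to state as a worked example. The function below assumes a single emergency lane per carriageway and the standard widths quoted above.

```python
def emergency_lane_proportion(num_lanes: int,
                              w_emergency: float = 3.5,
                              w_regular: float = 3.75) -> float:
    """Fraction of the road width taken up by the single emergency lane."""
    road_width = w_emergency + (num_lanes - 1) * w_regular
    return w_emergency / road_width

# e.g. three lanes in one direction: 3.5 / (3.5 + 2 * 3.75)
print(round(emergency_lane_proportion(3), 3))  # 0.318
```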
In another embodiment, since the area occupied by the emergency lane is separated from the area occupied by the non-emergency lanes by the emergency lane line, determining the emergency lane line suffices to determine the area. Specifically, referring to fig. 5, a plurality of mutually parallel reference lines may be drawn in the road segmentation result, each forming two intersection points with the boundary of the road region (only some of the intersection points are marked in the figure; in fact the left and right ends of every reference line each intersect the road boundary). Then, according to the calculated emergency lane area proportion, the target position corresponding to the emergency lane line is determined on each reference line within the road region; in fig. 5, the circle in the middle of each reference line marks that target position. Finally, the straight line drawn through the target positions serves as the emergency lane line, and the area bounded by the emergency lane line and the nearer road edge is the area occupied by the emergency lane.
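A sketch of this construction, assuming the emergency lane lies on the right and the road segmentation result is a binary numpy array with road pixels set to 1:

```python
import numpy as np

def emergency_lane_line(road_mask: np.ndarray, proportion: float, n_refs: int = 5):
    """Fit the emergency lane line x = a*y + b from horizontal reference lines."""
    h = road_mask.shape[0]
    xs, ys = [], []
    for y in np.linspace(0.1 * h, 0.9 * h, n_refs).astype(int):
        cols = np.flatnonzero(road_mask[y])  # road pixels on this reference line
        if cols.size < 2:
            continue  # this reference line misses the road region
        left, right = cols[0], cols[-1]  # intersections with the road boundary
        xs.append(right - proportion * (right - left))  # target position on this line
        ys.append(y)
    a, b = np.polyfit(ys, xs, 1)  # straight line through the target positions
    return a, b
```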
In this embodiment, after the area occupied by the emergency lane has been identified in the road of the target image, the vehicles travelling in the emergency lane can be further identified and their vehicle information output as a basis for penalizing the violation.
Specifically, to improve the accuracy of vehicle identification, vehicles may be identified in the target image by combining the initial model and the hierarchical model of fig. 3: the initial model extracts multi-scale vehicle features of the target image, and the hierarchical model fuses them to generate a plurality of vehicle fusion features. In practice, the initial model may be built on the YOLOv4 detection network. After feature extraction with the initial model and the hierarchical model, the vehicle fusion features are sent to a detector, whose output may include the model, license plate and position of each vehicle, the position being expressed in pixel coordinates. As before, vehicle features of specified scales may be introduced at the top layer and the bottom layer of the hierarchical model for feature fusion, which is not repeated here.
In the training process of vehicle detection, the vehicles in the target image can be labeled manually, and the predicted detections are compared with the manual labels to produce a training error on the detection box positions. The initial model, the hierarchical model and the detector are corrected continually according to this error, yielding an accurate vehicle detection network. The training error of the detection box positions can be expressed by a Complete IoU (CIoU) loss.
In the present embodiment, after the vehicles on the road have been recognized, a target vehicle that travels in the area occupied by the emergency lane is judged to be a violating vehicle, and its vehicle information is output. Concretely, the position of the vehicle is compared with the area occupied by the emergency lane; in the target image both are expressed in pixel coordinates. If the pixels representing the vehicle lie partly or wholly inside the emergency lane area, the vehicle is judged to be in violation. In practice, when the vehicle only partly overlaps the emergency lane area, it may be judged a violating vehicle only if the overlap proportion exceeds a threshold.
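The violation test can be sketched as follows, assuming the vehicle is reported as an axis-aligned pixel box and the emergency lane as a binary mask; the overlap threshold is an assumed tuning parameter, since the text names no value.

```python
import numpy as np

def is_violation(box, lane_mask: np.ndarray, overlap_threshold: float = 0.5) -> bool:
    """box = (x1, y1, x2, y2), integer pixel coordinates of a detected vehicle."""
    x1, y1, x2, y2 = box
    region = lane_mask[y1:y2, x1:x2]  # lane-mask pixels covered by the vehicle box
    if region.size == 0:
        return False
    return region.mean() > overlap_threshold  # fraction of the box on the emergency lane
```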
It should be noted that, in practice, the vehicle detection step may be performed at any point as required, not necessarily after the emergency lane has been identified; for example, it may be completed at the very beginning. Moreover, to reduce the amount of data processing, vehicle detection may be run, after the emergency lane area has been identified, on the sub-image covering that area alone, so that only vehicles in the emergency lane are detected and vehicles outside it are ignored, which speeds up the detection of violating vehicles.
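That speed-up can be sketched as below, with detect_vehicles standing in, hypothetically, for the detection network described above.

```python
import numpy as np

def detect_in_emergency_lane(image: np.ndarray, lane_mask: np.ndarray, detect_vehicles):
    ys, xs = np.nonzero(lane_mask)  # pixel coordinates of the emergency lane area
    y1, y2, x1, x2 = ys.min(), ys.max(), xs.min(), xs.max()
    sub = image[y1:y2 + 1, x1:x2 + 1]  # sub-image covering the emergency lane alone
    boxes = detect_vehicles(sub)  # run detection on the sub-image only
    # shift detections back into full-image coordinates
    return [(bx1 + x1, by1 + y1, bx2 + x1, by2 + y1) for (bx1, by1, bx2, by2) in boxes]
```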
Referring to fig. 6, an embodiment of the present application further provides an emergency lane identification system, including:
a feature extraction unit, used for acquiring a road segmentation result of a target image and extracting a first feature of the road segmentation result, the road segmentation result being used for displaying a road containing an emergency lane;
a lane number recognition unit configured to extract a second feature of the target image, and superimpose the first feature and the second feature to recognize the number of lanes in the target image based on the superimposed feature;
and the emergency lane determining unit is used for determining the area occupied by the emergency lane in the road of the target image according to the number of the recognized lanes.
Referring to fig. 7, an embodiment of the present application further provides an emergency lane recognition apparatus, where the emergency lane recognition apparatus includes a processor and a memory, where the memory is used to store a computer program, and when the computer program is executed by the processor, the method for recognizing an emergency lane as described above is implemented.
The processor may be a Central Processing Unit (CPU), or another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or a combination thereof.
The memory, as a non-transitory computer-readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs and modules, such as the program instructions/modules corresponding to the methods in the embodiments of the present invention. By running the non-transitory software programs, instructions and modules stored in the memory, the processor performs its various functional applications and data processing, that is, implements the method of the above method embodiment.
The memory may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created by the processor, and the like. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and such remote memory may be coupled to the processor via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
An embodiment of the present application further provides a computer-readable storage medium for storing a computer program, which when executed by a processor, implements the emergency lane identification method described above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program stored in a computer-readable storage medium; when executed, the program may include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a Read-Only Memory (ROM), a Random Access Memory (RAM), a flash memory, a Hard Disk Drive (HDD) or a Solid-State Drive (SSD); the storage medium may also comprise a combination of the above kinds of memory.
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (10)

1. A method of identifying an emergency lane, the method comprising:
acquiring a road segmentation result of a target image, and extracting a first feature of the road segmentation result; the road segmentation result is used for displaying a road comprising an emergency lane;
extracting a second feature of the target image, and overlapping the first feature and the second feature to identify the number of lanes in the target image based on the overlapped feature;
determining the area occupied by an emergency lane in the road of the target image according to the number of the recognized lanes;
the method for acquiring the road segmentation result of the target image comprises the following steps:
extracting multi-scale segmentation features of the target image by using an initial model, and fusing the multi-scale segmentation features by using a hierarchical model to generate a plurality of segmentation fusion features; introducing specified scale segmentation features to the top layer and the bottom layer of the hierarchical model for feature fusion;
and after each segmentation fusion feature is transformed into a transformation feature with a uniform scale, overlapping each transformation feature, and generating a road segmentation result of the target image based on the overlapped transformation features.
2. The method according to claim 1, wherein the hierarchical model comprises at least a first path and a second path arranged in parallel, the first path being used for performing layer-by-layer upsampling on the multi-scale segmentation features, and the second path being used for performing layer-by-layer downsampling on the output features of each layer of the first path; and a first specified-scale segmentation feature is introduced into the top layer of the first path for feature fusion, and a second specified-scale segmentation feature is introduced into the bottom layer of the second path for feature fusion, the scale of the first specified-scale segmentation feature being smaller than that of the second specified-scale segmentation feature.
3. The method of claim 2, wherein the second pass layer-by-layer output segmentation-fusion features are transformed into uniform-scale transformed features.
4. The method of claim 1, wherein a first sub-network is used to extract a first feature of the road segmentation result, a second sub-network is used to extract a second feature of the target image, and the superimposed features are predicted by classification layers to obtain the number of lanes in the target image; and highlighting foreground features in the overlapped features, and fading background features.
5. The method of claim 1, wherein determining the area of the target image's road occupied by the emergency lane comprises:
acquiring a first standard width of an emergency lane and a second standard width of a non-emergency lane, and calculating the area proportion of the emergency lane matched with the number of lanes according to the first standard width and the second standard width;
and dividing, in the road of the target image, an area that meets the emergency lane area proportion, and taking the divided area as the area occupied by the emergency lane.
6. The method according to claim 5, characterized in that the area occupied by the emergency lane is distinguished from the area occupied by the non-emergency lane by an emergency lane line, which is determined in the following way:
drawing a plurality of reference lines in the road segmentation result, wherein the reference lines and the region boundary of the road in the road segmentation result form an intersection point;
determining a target position corresponding to an emergency lane line on each reference line in the area of the road according to the emergency lane area proportion;
and taking the straight line drawn by each target position as the emergency lane line.
7. The method of claim 1, further comprising:
extracting multi-scale vehicle features of the target image by using an initial model, fusing the multi-scale vehicle features by using a hierarchical model to generate a plurality of vehicle fusion features, and identifying vehicles contained in the target image based on the plurality of vehicle fusion features; introducing vehicle features with specified dimensions into the top layer and the bottom layer of the hierarchical model for feature fusion;
and if the target vehicle identified in the target image runs in the area occupied by the emergency lane, judging the target vehicle as a violation vehicle, and outputting the vehicle information of the violation vehicle.
8. An emergency lane identification system, the system comprising:
the system comprises a feature extraction unit, a feature extraction unit and a feature extraction unit, wherein the feature extraction unit is used for acquiring a road segmentation result of a target image and extracting a first feature of the road segmentation result; the road segmentation result is used for displaying a road containing an emergency lane;
a lane number recognition unit configured to extract a second feature of the target image, and superimpose the first feature and the second feature to recognize the number of lanes in the target image based on the superimposed feature;
the emergency lane determining unit is used for determining the area occupied by the emergency lane in the road of the target image according to the number of the recognized lanes;
wherein the acquiring of the road segmentation result of the target image comprises: extracting multi-scale segmentation features of the target image by using an initial model, and fusing the multi-scale segmentation features by using a hierarchical model to generate a plurality of segmentation fusion features; introducing specified scale segmentation features to the top layer and the bottom layer of the hierarchical model for feature fusion; and after each segmentation fusion feature is transformed into a transformation feature with a uniform scale, overlapping each transformation feature, and generating a road segmentation result of the target image based on the overlapped transformation features.
9. An emergency lane identification device, characterized in that it comprises a processor and a memory for storing a computer program which, when executed by the processor, carries out the method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium is used to store a computer program which, when executed by a processor, implements the method of any one of claims 1 to 7.
CN202110678895.4A 2021-06-18 2021-06-18 Emergency lane identification method, system and device Active CN113408413B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110678895.4A CN113408413B (en) 2021-06-18 2021-06-18 Emergency lane identification method, system and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110678895.4A CN113408413B (en) 2021-06-18 2021-06-18 Emergency lane identification method, system and device

Publications (2)

Publication Number Publication Date
CN113408413A CN113408413A (en) 2021-09-17
CN113408413B true CN113408413B (en) 2023-03-24

Family

ID=77681523

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110678895.4A Active CN113408413B (en) 2021-06-18 2021-06-18 Emergency lane identification method, system and device

Country Status (1)

Country Link
CN (1) CN113408413B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114415708A (en) * 2022-01-24 2022-04-29 上海复亚智能科技有限公司 Road self-inspection method and device, unmanned aerial vehicle and storage medium
CN114998863B (en) * 2022-05-24 2023-12-12 北京百度网讯科技有限公司 Target road identification method, device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105702049A (en) * 2016-03-29 2016-06-22 成都理工大学 DSP-based emergency lane monitoring system and realizing method thereof
CN111311675A (en) * 2020-02-11 2020-06-19 腾讯科技(深圳)有限公司 Vehicle positioning method, device, equipment and storage medium
CN111680580A (en) * 2020-05-22 2020-09-18 北京格灵深瞳信息技术有限公司 A recognition method, device, electronic device and storage medium for running a red light
CN112562406A (en) * 2020-11-27 2021-03-26 众安在线财产保险股份有限公司 Method and device for identifying off-line driving

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109637151B (en) * 2018-12-31 2021-12-07 上海眼控科技股份有限公司 Method for identifying illegal driving of emergency lane on highway
CN111551957B (en) * 2020-04-01 2023-02-03 上海富洁科技有限公司 Park low-speed automatic cruise and emergency braking system based on laser radar sensing
CN112232285A (en) * 2020-11-05 2021-01-15 浙江点辰航空科技有限公司 Unmanned aerial vehicle system that highway emergency driveway was patrolled and examined

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105702049A (en) * 2016-03-29 2016-06-22 成都理工大学 DSP-based emergency lane monitoring system and realizing method thereof
CN111311675A (en) * 2020-02-11 2020-06-19 腾讯科技(深圳)有限公司 Vehicle positioning method, device, equipment and storage medium
CN111680580A (en) * 2020-05-22 2020-09-18 北京格灵深瞳信息技术有限公司 A recognition method, device, electronic device and storage medium for running a red light
CN112562406A (en) * 2020-11-27 2021-03-26 众安在线财产保险股份有限公司 Method and device for identifying off-line driving

Also Published As

Publication number Publication date
CN113408413A (en) 2021-09-17

Similar Documents

Publication Publication Date Title
CN108009543B (en) License plate recognition method and device
CN111126453B (en) A fine-grained image classification method and system based on attention mechanism and cutting and filling
US11527077B2 (en) Advanced driver assist system, method of calibrating the same, and method of detecting object in the same
KR101848019B1 (en) Method and Apparatus for Detecting Vehicle License Plate by Detecting Vehicle Area
Ohgushi et al. Road obstacle detection method based on an autoencoder with semantic segmentation
Li et al. Detection of road objects with small appearance in images for autonomous driving in various traffic situations using a deep learning based approach
CN111178211A (en) Image segmentation method and device, electronic equipment and readable storage medium
CN108154149B (en) License plate recognition method based on deep learning network sharing
CN105426861A (en) Method and device for determining lane line
CN113408413B (en) Emergency lane identification method, system and device
KR102489884B1 (en) Image processing apparatus for improving license plate recognition rate and image processing method using the same
Al Mamun et al. Efficient lane marking detection using deep learning technique with differential and cross-entropy loss.
CN111062347B (en) Traffic element segmentation method in automatic driving, electronic equipment and storage medium
CN111292331B (en) Image processing method and device
Merugu et al. Multi lane detection, curve fitting and lane type classification
CN111178158B (en) Rider detection method and system
Triki et al. A comprehensive survey and analysis of traffic sign recognition systems with hardware implementation
CN117237882A (en) Identification methods and related equipment for moving vehicles
CN114565764A (en) Port panorama sensing system based on ship instance segmentation
Harsha et al. Vehicle Detection and Lane Keep Assist System for Self-driving Cars
Erwin et al. Detection of highway lane using color filtering and line determination
Luo et al. Fe-det: An effective traffic object detection framework for fish-eye cameras
NGUYEN License plate detection and refinement based on deep convolutional neural network
US20250054268A1 (en) Image reconstruction system
Bahadure et al. Traffic Signal Detection and Recognition from Real-Scenes Using YOLO

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant