Detailed Description
The present invention is described below by way of specific embodiments, and other advantages and effects of the invention will be readily understood by those skilled in the art from the disclosure of this specification. The invention is capable of other and different embodiments, may be practiced or carried out in various ways, and its several details may be modified in various respects, all without departing from the spirit and scope of the present invention. It should be noted that the features of the following embodiments and examples may be combined with one another where no conflict arises.
It should also be noted that the drawings provided with the following embodiments merely illustrate the basic idea of the present invention. The drawings show only the components related to the invention and are not drawn according to the number, shape, and size of the components in an actual implementation; in practice, the type, quantity, and proportion of the components may vary freely, and the component layout may be more complicated.
Example one
The embodiment provides a method for detecting an indication line, wherein the indication line comprises a longitudinal vehicle position line and a lane line; the method for detecting the indication line comprises the following steps:
acquiring a parking lot image, and performing semantic segmentation on the parking lot image to acquire a parking lot segmentation image;
traversing the parking lot segmentation image, and respectively processing the parking lot segmentation image into a lane line semantic graph and a parking space line semantic graph;
acquiring lane line points in the lane line semantic graph and fitting the lane lines according to the lane line points, and acquiring longitudinal vehicle position line points in the parking space line semantic graph and fitting the longitudinal vehicle position lines according to the longitudinal vehicle position line points;
and integrating the fitted lane lines and the fitted longitudinal vehicle position lines and outputting the result.
The method for detecting the indication line provided in the present embodiment will be described in detail with reference to the drawings. The indication line comprises a longitudinal vehicle position line and a lane line. By detecting the longitudinal vehicle position line and the lane line, the vehicle can automatically search for a parking space according to the line information during automatic parking. In this embodiment, to realize automatic parking space search, the vehicle travels along the lane line until an empty parking space is found; where there is no lane line, it travels along the longitudinal vehicle position line. Once an empty parking space is found, other functions take over to complete the automatic parking. The transverse vehicle position line therefore needs to be filtered out in this scenario.
Please refer to fig. 1A, which is a schematic flow chart illustrating an exemplary method for detecting an indicator line. As shown in fig. 1A, the method for detecting the indicator line specifically includes the following steps:
S11, acquiring a parking lot image, and performing semantic segmentation on the parking lot image to obtain a parking lot segmentation image, for example, as shown in fig. 2A.
In this embodiment, the parking lot image is semantically segmented by a pre-stored semantic segmentation network model.
And S12, traversing the parking lot segmentation image, and processing the parking lot segmentation image into a lane line semantic graph and a parking space line semantic graph respectively.
Specifically, the S12 includes: according to the RGB threshold of the lane line, the lane line is extracted from the parking lot segmentation image to form the lane line semantic map, which is shown in fig. 2B.
Specifically, the S12 further includes: extracting the longitudinal vehicle position lines from the parking lot segmentation image according to the RGB threshold of the longitudinal vehicle position lines to form the parking space line semantic graph, as shown in fig. 2C.
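As a hedged illustration of the RGB-threshold extraction in S12, the sketch below copies into a binary semantic map every pixel whose colour lies within a tolerance of an assumed class colour. The class colours, the tolerance, and the function name are illustrative assumptions, not values from this disclosure.

```python
# Sketch of the RGB-threshold extraction in S12.  The segmentation network
# paints each class in a fixed colour; pixels whose colour is within `tol`
# of the class colour are copied into a binary semantic map.
# NOTE: the class colours below are assumptions for illustration only.
LANE_LINE_RGB = (255, 0, 0)     # assumed colour of lane-line pixels
SPACE_LINE_RGB = (0, 255, 0)    # assumed colour of parking-space-line pixels

def extract_semantic_map(segmentation, target_rgb, tol=10):
    """Return a binary map: 1 where the pixel colour matches `target_rgb`
    within `tol` per channel, 0 elsewhere."""
    h, w = len(segmentation), len(segmentation[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if all(abs(c - t) <= tol
                   for c, t in zip(segmentation[y][x], target_rgb)):
                out[y][x] = 1
    return out
```

Running the extraction once per class colour yields the lane line semantic map and the parking space line semantic map from the same segmentation image.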
And S13, calculating an included angle between the lane line and the advancing direction of the vehicle. In one embodiment, the calculated angle between the lane line and the vehicle heading direction is used to achieve a fit of the lane line by rotating the lane line semantic map.
Specifically, the S13 includes:
traversing half of the lane line semantic graph from the left side and the right side respectively in a transverse direction by a preset step length;
recording the number of pixels of each traversed row, and storing the pixel coordinates of the pixel points of each row;
adding up the pixels of each row, classifying the row pixel sums (specifically, rows whose pixel sums are the same or similar are grouped into one class), and selecting the largest class, i.e., the class containing the most rows (the largest class corresponds to the pixel sums that occur most frequently);
calculating the average coordinate of each line in the maximum class;
calculating the tangent angle between two adjacent coordinates, averaging all the calculated tangent angles, and defining the averaged tangent angle as the included angle between the lane line and the advancing direction of the vehicle.
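The angle-estimation steps above can be sketched as follows, assuming the rows of the largest class have already been selected. `row_points` (a mapping from row index to the x-coordinates of that row's lane pixels) is a hypothetical intermediate introduced for illustration.

```python
import math

# Sketch of the angle computation in S13: the mean coordinate of each
# selected row is computed, and the lane-line angle is the average of the
# tangent angles between adjacent mean coordinates.
def lane_angle(row_points):
    rows = sorted(row_points)
    # mean pixel coordinate of each row: (mean x, row index)
    means = [(sum(row_points[r]) / len(row_points[r]), r) for r in rows]
    # tangent angle between each pair of adjacent mean coordinates
    angles = [math.atan2(r2 - r1, x2 - x1)
              for (x1, r1), (x2, r2) in zip(means, means[1:])]
    return math.degrees(sum(angles) / len(angles))
```

A perfectly vertical lane line (constant mean x across rows) yields 90°, i.e., the line is parallel to the vehicle advancing direction in image coordinates.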
And S14, obtaining lane line points in the lane line semantic graph, and fitting the lane line according to the lane line points.
In this embodiment, the S14 includes:
S141, rotating the lane line semantic graph about the center of the graph according to the calculated included angle between the lane line and the advancing direction of the vehicle;
S142, longitudinally traversing the rotated lane line semantic graph, searching for the lane lines, and acquiring the fitting point coordinates corresponding to the lane lines;
S143, reversely rotating the lane line points to obtain the coordinate points corresponding to the lane line semantic graph, so that the lane line is fitted, through the lane line points, to the coordinate points corresponding to the lane line semantic graph.
And S15, acquiring longitudinal vehicle position line points from the parking space line semantic graph, and fitting the longitudinal vehicle position line according to the longitudinal vehicle position line points.
Specifically, the S15 includes:
rotating the parking space line semantic graph about the center of the graph according to the calculated included angle between the lane line and the advancing direction of the vehicle;
traversing the rotated parking space line semantic graph transversely, and filtering out the transverse parking space lines through a preset parking space line threshold;
longitudinally traversing the rotated parking space line semantic graph, searching for the longitudinal parking space lines, and acquiring the corresponding fitting point coordinates of the longitudinal parking space lines;
and reversely rotating the longitudinal vehicle position line points to obtain the coordinate points corresponding to the parking space line semantic graph, so that the longitudinal vehicle position line is fitted, through the longitudinal vehicle position line points, to the coordinate points corresponding to the parking space line semantic graph.
S16, integrating and outputting the fitted lane lines and the fitted longitudinal vehicle position lines; an example diagram of the fitting result is shown in fig. 3.
Please refer to fig. 1B, which is a schematic flow chart illustrating another exemplary method for detecting an indicator line. As shown in fig. 1B, the method for detecting the indicator line specifically includes:
s11', acquiring a parking lot image, and performing semantic segmentation on the parking lot image to acquire a parking lot segmentation image.
In this embodiment, the parking lot image is semantically segmented by a pre-stored semantic segmentation network model.
And S12', traversing the parking lot segmentation image, and processing the parking lot segmentation image into a lane line semantic graph and a parking space line semantic graph respectively.
Specifically, the S12' includes: and extracting the lane lines from the parking lot segmentation image according to RGB threshold values of the lane lines to form the lane line semantic graph.
Specifically, the S12' further includes: and extracting longitudinal vehicle position lines from the parking lot segmentation images according to RGB threshold values of the longitudinal vehicle position lines to form the parking space line semantic graph.
S13', obtaining lane line points in the lane line semantic graph, and fitting the lane line according to the lane line points.
In this embodiment, the S13' includes:
S131', traversing the lane line semantic graph transversely by a preset step length, recording the coordinates of the lane line boundary points at each horizontal position, and counting the number of lane line boundary points;
S132', classifying the horizontal positions having the same number of lane line boundary points, searching for the horizontal position of the largest class and the coordinates of the lane line boundary points at that position, and calculating the coordinates of the center position between every two adjacent lane line boundary points;
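The centre-point computation in S132' can be sketched as below, under the assumption that the boundary points at one horizontal position come in left-edge/right-edge pairs; the function name is illustrative.

```python
# Sketch of S132': at a chosen horizontal position the lane-line boundary
# points come in (left edge, right edge) pairs, and the fitted point is
# the centre of each pair.
def centre_points(boundary_xs):
    """`boundary_xs`: sorted x-coordinates of the lane-line boundary points
    in one row; returns the centre x of each consecutive pair."""
    return [(l + r) / 2
            for l, r in zip(boundary_xs[0::2], boundary_xs[1::2])]
```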
and S133', respectively fitting the central points belonging to the same lane line to fit the lane line, and calculating an included angle between the lane line and the advancing direction of the vehicle.
In this embodiment, the fitting of the lane line in S133' uses the OpenCV fitting function cv2.fitLine() with the distance parameter set to the L2 distance (DIST_L2), i.e., least-squares fitting.
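As noted, `cv2.fitLine()` with the `DIST_L2` distance performs ordinary least-squares fitting. A dependency-free sketch of the same computation for one set of centre points follows; the function name is an assumption, not an OpenCV API.

```python
# Least-squares (DIST_L2) line fit: returns the slope k and intercept b of
# one lane line fitted to its centre points.
def fit_line_l2(points):
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    k = sxy / sxx          # least-squares slope
    b = my - k * mx        # the fitted line passes through the centroid
    return k, b
```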
In another fitting mode of the lane line, an included angle between the lane line and the advancing direction of the vehicle is calculated according to the slope k of the fitted lane line.
Specifically, when the slope k is negative, the included angle between the lane line and the vehicle advancing direction is 90° + arctan(k);
when the slope k is positive, the included angle between the lane line and the vehicle advancing direction is arctan(k) - 90°.
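The slope-to-angle rule above can be transcribed directly; the helper name is illustrative.

```python
import math

# Slope-to-angle rule from the text: for negative k the included angle is
# 90 + arctan(k) degrees; for positive k it is arctan(k) - 90 degrees.
def lane_angle_from_slope(k):
    if k < 0:
        return 90 + math.degrees(math.atan(k))
    return math.degrees(math.atan(k)) - 90
```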
And S14', acquiring longitudinal vehicle position line points from the parking space line semantic graph, and fitting the longitudinal vehicle position line according to the longitudinal vehicle position line points.
Specifically, the S14' includes:
rotating the parking space line semantic graph about the center of the graph according to the calculated included angle between the lane line and the advancing direction of the vehicle;
traversing the rotated parking space line semantic graph transversely, and filtering out the transverse parking space lines through a preset parking space line threshold;
longitudinally traversing the rotated parking space line semantic graph, searching for the longitudinal parking space lines, and acquiring the corresponding fitting point coordinates of the longitudinal parking space lines;
and reversely rotating the longitudinal vehicle position line points to obtain the coordinate points corresponding to the parking space line semantic graph, so that the longitudinal vehicle position line is fitted, through the longitudinal vehicle position line points, to the coordinate points corresponding to the parking space line semantic graph.
And S15', integrating and outputting the fitted lane lines and the fitted longitudinal vehicle position lines.
Please refer to fig. 1C, which is a schematic flow chart illustrating another exemplary method for detecting an indicator line. As shown in fig. 1C, the method for detecting the indicator line specifically includes the following steps:
and S11', acquiring a parking lot image, and performing semantic segmentation on the parking lot image to acquire a parking lot segmentation image.
In this embodiment, the parking lot image is semantically segmented by a pre-stored semantic segmentation network model.
And S12″, traversing the parking lot segmentation image, and processing the parking lot segmentation image into a lane line semantic graph and a parking space line semantic graph respectively.
Specifically, the S12″ includes: extracting the lane lines from the parking lot segmentation image according to the RGB threshold of the lane lines to form the lane line semantic graph.
Specifically, the S12″ further includes: extracting the longitudinal vehicle position lines from the parking lot segmentation image according to the RGB threshold of the longitudinal vehicle position lines to form the parking space line semantic graph.
And S13″, calculating the included angle between the longitudinal vehicle position line and the longitudinal axis of the vehicle.
Specifically, the S13″ includes the following steps:
traversing half of the parking space line semantic graph from the left side and the right side respectively in a transverse direction by a preset step length;
recording the number of pixels of each traversed row, and storing the pixel coordinates of the pixel points of each row;
adding up the pixels of each row, classifying the row pixel sums (specifically, rows whose pixel sums are the same or similar are grouped into one class), and selecting the largest class, i.e., the class containing the most rows (the largest class corresponds to the pixel sums that occur most frequently);
calculating the average coordinate of each line in the maximum class;
calculating the angle between every two adjacent coordinates, averaging all the calculated angles, and judging whether the averaged angle is larger than a preset angle threshold; if so, defining the complementary angle of the averaged angle as the included angle between the longitudinal vehicle position line and the longitudinal axis of the vehicle; if not, defining the averaged angle itself as the included angle between the longitudinal vehicle position line and the longitudinal axis of the vehicle.
S14″, acquiring longitudinal vehicle position line points from the parking space line semantic graph and fitting the longitudinal vehicle position line according to those points, which comprises the following steps:
and S141', according to the calculated included angle between the longitudinal position line and the longitudinal axis of the vehicle, rotating the parking space line semantic graph by the center of the graph, such as the example graph of the rotated parking space line semantic graph shown in FIG. 4.
In this embodiment, a rotation matrix for the pixel coordinates can be calculated from the image center of the parking space line semantic graph, taking as the rotation angle either the calculated included angle between the longitudinal vehicle position line and the longitudinal axis of the vehicle or the included angle between the lane line and the advancing direction of the vehicle.
Specifically, with point O as the center of rotation, after point P rotates around point O by r radians, its coordinate is transformed into that of point Q according to the formula:
Q.x=P.x*cos(r)-P.y*sin(r)
Q.y=P.x*sin(r)+P.y*cos(r)
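The rotation formula above can be expressed in code as a straightforward transcription, with the rotation taken about the origin O:

```python
import math

# Rotating point P about the centre O (taken as the origin here) by
# r radians gives point Q, per the formula in the text.
def rotate_point(px, py, r):
    qx = px * math.cos(r) - py * math.sin(r)
    qy = px * math.sin(r) + py * math.cos(r)
    return qx, qy
```

In practice the image center is subtracted before the rotation and added back afterwards, so that the semantic map rotates about its own center.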
S142″, the bounding rectangle that just contains the rotated image can be obtained from the rotation angle and the original height and width, and regions containing no picture elements are filled with black.
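The rectangle enclosing the rotated image described in S142″ can be sketched from the rotation angle and the original width and height; the rounding guard against floating-point noise is an implementation assumption.

```python
import math

# Size of the rectangle that just contains a w x h image rotated by
# angle_deg; areas outside the original image are filled with black by the
# subsequent warp.  Rounding to 6 decimals before ceil guards against
# floating-point noise (an implementation assumption).
def rotated_bounds(w, h, angle_deg):
    r = math.radians(angle_deg)
    new_w = abs(w * math.cos(r)) + abs(h * math.sin(r))
    new_h = abs(w * math.sin(r)) + abs(h * math.cos(r))
    return math.ceil(round(new_w, 6)), math.ceil(round(new_h, 6))
```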
And S143″, traversing the rotated parking space line semantic graph transversely, and filtering out the transverse parking space lines through a preset parking space line threshold.
And S144″, longitudinally traversing the rotated parking space line semantic graph, searching for the longitudinal parking space lines, and acquiring the corresponding fitting point coordinates of the longitudinal parking space lines, for example, the longitudinal parking space line points shown in fig. 5.
In this embodiment, after the parking space line semantic graph is rotated, the longitudinal parking space lines are vertical. The picture is traversed longitudinally, and if the pixel sum at a certain longitudinal position exceeds a certain pixel threshold, a longitudinal parking space line is considered to exist at that position. After all such positions have been found by the longitudinal search, they are classified according to distance, positions close to each other being grouped into the same longitudinal parking space line. The fitting point coordinates of the longitudinal parking space line are then acquired: the position gives the abscissa x, and the ordinate y is sampled in a user-defined manner, i.e., one point is taken at a fixed interval from top to bottom.
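The longitudinal search and distance-based grouping described above can be sketched as follows; the pixel threshold and the merge distance `gap` are assumed parameters, not values from the disclosure.

```python
# Sketch of the longitudinal search in S144'': each column of the rotated
# binary semantic map is summed; columns whose pixel sum exceeds
# `pixel_threshold` are kept, and neighbouring columns (within `gap`) are
# merged into one longitudinal parking space line.
def find_longitudinal_lines(binary_map, pixel_threshold=5, gap=3):
    h, w = len(binary_map), len(binary_map[0])
    hits = [x for x in range(w)
            if sum(binary_map[y][x] for y in range(h)) > pixel_threshold]
    groups = []
    for x in hits:
        if groups and x - groups[-1][-1] <= gap:
            groups[-1].append(x)   # close to the previous column: same line
        else:
            groups.append([x])     # start a new longitudinal line
    # one representative abscissa per line (the mean column position)
    return [sum(g) / len(g) for g in groups]
```

Each returned abscissa is then paired with ordinates sampled at a fixed interval to form the fitting points of one longitudinal parking space line.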
S145″, reversely rotating the longitudinal vehicle position line points (see fig. 6, which shows the reversely rotated longitudinal vehicle position line points) to obtain the coordinate points corresponding to the parking space line semantic graph, so that the longitudinal vehicle position line is fitted through these points; the fitted longitudinal vehicle position line is shown in fig. 7.
And S15″, acquiring lane line points in the lane line semantic graph, and fitting the lane line according to the lane line points.
In this embodiment, the S15″ includes:
S151″, traversing the lane line semantic graph transversely by a preset step length, recording the coordinates of the lane line boundary points at each horizontal position, and counting the number of lane line boundary points; please refer to fig. 8, which shows a statistical schematic diagram of the lane line boundary points;
S152″, classifying the horizontal positions having the same number of lane line boundary points, searching for the horizontal position of the largest class and the coordinates of the lane line boundary points at that position, and calculating the coordinates of the center position between every two adjacent lane line boundary points;
S153″, respectively fitting the center points belonging to the same lane line (for example, the center points of the fitted lane lines shown in fig. 9) to fit the lane lines, and calculating the included angle between the lane line and the vehicle advancing direction.
In this embodiment, the fitting of the lane line in S153″ uses the OpenCV fitting function cv2.fitLine() with the distance parameter set to the L2 distance (DIST_L2), i.e., least-squares fitting. In practical applications, steps S14″ and S15″ may be executed simultaneously or sequentially; as shown in fig. 1C, steps S14″ and S15″ are executed simultaneously in this embodiment.
And S16″, integrating and outputting the fitted lane lines and the fitted longitudinal vehicle position lines.
The detection method of the indicator line has the following beneficial effects:
First, this embodiment greatly reduces the resource consumption of the detection task while still meeting the accuracy requirement. The lane line fitting method is conceptually simple and involves no complex transformation; the image is traversed only at preset horizontal positions rather than over the whole image, and only simple counting, addition, and subtraction are performed, so resource consumption is low.
Secondly, the method for fitting the longitudinal vehicle position line in this embodiment obtains the deflection angle and, by rotating the picture, converts the problem into a vertical search, which neatly reduces the difficulty of detecting an inclined longitudinal vehicle position line; the idea is simple and the amount of calculation is small.
Thirdly, this embodiment improves the fitting efficiency and precision of the lane lines and parking space lines during parking, so that the driving path of the vehicle is more accurate and the vehicle can automatically search for a parking space more conveniently.
Example two
The embodiment provides a method for detecting an indicator line, wherein the indicator line comprises a longitudinal vehicle position line; the method for detecting the indicator line comprises the following steps:
acquiring a parking lot image, and performing semantic segmentation on the parking lot image to acquire a parking lot segmentation image;
traversing the parking lot segmentation image, and processing the parking lot segmentation image into a parking space line semantic graph;
acquiring longitudinal vehicle position line points from the parking space line semantic graph, and fitting the longitudinal vehicle position line according to the longitudinal vehicle position line points;
and outputting the fitted longitudinal vehicle position line.
The method for detecting the indicator line provided in the present embodiment will be described in detail with reference to the drawings. This embodiment applies to the case where the indication line comprises only a longitudinal vehicle position line. Please refer to fig. 10, which is a flowchart illustrating a method for detecting an indicator line according to another embodiment. As shown in fig. 10, the method specifically includes the following steps:
s101, obtaining a parking lot image, and performing semantic segmentation on the parking lot image to obtain a parking lot segmentation image.
And S102, traversing the parking lot segmentation image, and processing the parking lot segmentation image into a parking space line semantic graph.
And extracting longitudinal vehicle position lines from the parking lot segmentation images according to RGB threshold values of the longitudinal vehicle position lines to form the parking space line semantic graph.
S103, calculating an included angle between the longitudinal vehicle position line and the longitudinal axis of the vehicle.
Specifically, S103 includes the steps of:
traversing half of the parking space line semantic graph from the left side and the right side respectively in a transverse direction by a preset step length;
recording the number of pixels of each traversed row, and storing the pixel coordinates of the pixel points of each row;
adding up the pixels of each row, classifying the row pixel sums (specifically, rows whose pixel sums are the same or similar are grouped into one class), and selecting the largest class, i.e., the class containing the most rows (the largest class corresponds to the pixel sums that occur most frequently);
calculating the average coordinate of each line in the maximum class;
calculating the angle between every two adjacent coordinates, averaging all the calculated angles, and judging whether the averaged angle is larger than a preset angle threshold; if so, defining the complementary angle of the averaged angle as the included angle between the longitudinal vehicle position line and the longitudinal axis of the vehicle; if not, defining the averaged angle itself as the included angle between the longitudinal vehicle position line and the longitudinal axis of the vehicle.
And S104, acquiring longitudinal vehicle position line points from the parking space line semantic graph, and fitting the longitudinal vehicle position line according to the longitudinal vehicle position line points.
Specifically, the S104 includes:
and rotating the parking space line semantic graph by using the center of the graph according to the calculated included angle between the longitudinal vehicle position line and the longitudinal axis of the vehicle or the included angle between the lane line and the advancing direction of the vehicle.
In this embodiment, a rotation matrix for the pixel coordinates can be calculated from the image center of the parking space line semantic graph, taking as the rotation angle either the calculated included angle between the longitudinal vehicle position line and the longitudinal axis of the vehicle or the included angle between the lane line and the advancing direction of the vehicle.
Specifically, with point O as the center of rotation, after point P rotates around point O by r radians, its coordinate is transformed into that of point Q according to the formula:
Q.x=P.x*cos(r)-P.y*sin(r)
Q.y=P.x*sin(r)+P.y*cos(r)
The bounding rectangle that just contains the rotated image can be obtained from the rotation angle and the original height and width, and regions containing no picture elements are filled with black.
And traversing the rotated parking space line semantic graph transversely, and filtering out the transverse parking space lines through a preset parking space line threshold. In this embodiment, filtering out the transverse parking space lines makes it easier to find the point coordinates of the longitudinal parking space lines.
And longitudinally traversing the rotated parking space line semantic graph, searching for the longitudinal parking space lines, and acquiring the corresponding fitting point coordinates of the longitudinal parking space lines.
In this embodiment, after the parking space line semantic graph is rotated, the longitudinal parking space lines are vertical. The picture is traversed longitudinally, and if the pixel sum at a certain longitudinal position exceeds a certain pixel threshold, a longitudinal parking space line is considered to exist at that position. After all such positions have been found by the longitudinal search, they are classified according to distance, positions close to each other being grouped into the same longitudinal parking space line. The fitting point coordinates of the longitudinal parking space line are then acquired: the position gives the abscissa x, and the ordinate y is sampled in a user-defined manner, i.e., one point is taken at a fixed interval from top to bottom.
And reversely rotating the longitudinal vehicle position line points to obtain the coordinate points corresponding to the parking space line semantic graph, so that the longitudinal vehicle position line is fitted, through the longitudinal vehicle position line points, to the coordinate points corresponding to the parking space line semantic graph.
Specifically, an inverse rotation matrix is obtained by inverting the acquired rotation matrix, and the longitudinal vehicle position line points are reversely rotated using this inverse rotation matrix.
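For a pure rotation, the inverse rotation matrix amounts to rotating by the negative angle, so the reverse rotation of the fitted points can be sketched as follows:

```python
import math

# The fitted points found in the rotated map are mapped back to the
# original parking space line semantic graph by applying the inverse
# rotation, i.e., rotation by -r.
def rotate_point(px, py, r):
    return (px * math.cos(r) - py * math.sin(r),
            px * math.sin(r) + py * math.cos(r))

def unrotate_point(qx, qy, r):
    # inverse rotation matrix of a pure rotation = rotation by -r
    return rotate_point(qx, qy, -r)
```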
EXAMPLE III
In an embodiment, the present application provides a computer-readable storage medium on which a computer program is stored, and the computer program, when executed by a processor, implements the method for detecting an indicator line according to the first embodiment or the second embodiment.
The present application may be embodied as a system, a method, and/or a computer program product, at any possible level of technical detail. The computer program product may include a computer-readable storage medium having computer-readable program instructions thereon for causing a processor to implement various aspects of the present application.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to the respective computing/processing devices, or to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network. The network may comprise copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium within the respective computing/processing device. The computer program instructions for carrying out operations of the present application may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, integrated circuit configuration data, or source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and procedural programming languages, such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
In some embodiments, electronic circuitry, such as a programmable logic circuit, a field-programmable gate array (FPGA), or a programmable logic array (PLA), can execute computer-readable program instructions by utilizing state information of the computer-readable program instructions to personalize the electronic circuitry, in order to implement aspects of the present application.
Example four
The embodiment provides a detection system for an indication line, wherein the indication line comprises a longitudinal vehicle position line and/or a lane line;
the detection system of the indicator line comprises:
the acquisition module is used for acquiring a parking lot image;
the segmentation module is used for performing semantic segmentation on the parking lot image to obtain a parking lot segmentation image;
the fitting processing module is used for, when the indication lines comprise longitudinal vehicle position lines and lane lines, traversing the parking lot segmentation image and processing it into a lane line semantic graph and a parking space line semantic graph respectively; acquiring lane line points in the lane line semantic graph and fitting the lane lines according to the lane line points, and acquiring longitudinal vehicle position line points in the parking space line semantic graph and fitting the longitudinal vehicle position lines according to the longitudinal vehicle position line points; or, when the indication line only comprises the longitudinal vehicle position line, acquiring longitudinal vehicle position line points from the parking space line semantic graph and fitting the longitudinal vehicle position line according to the longitudinal vehicle position line points;
and the output module is used for integrating and outputting the fitted lane line and longitudinal vehicle location line, or for outputting only the fitted longitudinal vehicle location line.
The detection system of the indication line provided in this embodiment will now be described in detail with reference to the drawings. Fig. 11 is a schematic structural diagram of the indication line detection system in an embodiment. As shown in Fig. 11, the detection system 11 of the indication line comprises: an acquisition module 111, a segmentation module 112, a fitting processing module 113, and an output module 114.
The acquisition module 111 is used for acquiring parking lot images.
The segmentation module 112 is configured to perform semantic segmentation on the parking lot image to obtain a parking lot segmentation image.
In this embodiment, the segmentation module 112 performs semantic segmentation on the parking lot image through a pre-stored semantic segmentation network model.
The fitting processing module 113 is configured to, when the indication line comprises a longitudinal vehicle location line and a lane line, traverse the parking lot segmentation image and process it into a lane line semantic graph and a parking space line semantic graph respectively; acquire lane line points in the lane line semantic graph and fit the lane line according to the lane line points; and acquire longitudinal vehicle location line points and fit the longitudinal vehicle location line according to the longitudinal vehicle location line points.
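As an illustration of the traversal-and-split step, the following sketch assumes the segmentation image is a 2D array of integer class labels; the label values `LANE` and `SLOT` are assumptions, since the text does not specify a label scheme:

```python
# Assumed class labels; the actual label scheme is not specified in the text.
LANE = 1
SLOT = 2

def split_semantic_maps(seg):
    """Traverse the parking lot segmentation image once and split it into a
    lane line semantic map and a parking space line semantic map, each a
    binary mask with the same dimensions as the input."""
    lane = [[1 if v == LANE else 0 for v in row] for row in seg]
    slot = [[1 if v == SLOT else 0 for v in row] for row in seg]
    return lane, slot
```

The two binary masks can then be processed independently by the angle-estimation and line-fitting steps described below.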
Specifically, the fitting processing module 113 calculates the included angle between the lane line and the vehicle advancing direction; rotates the lane line semantic graph about the center of the image by the calculated angle; traverses the rotated lane line semantic graph longitudinally, searches for the lane line, and acquires the coordinates of the corresponding fitting points; and reversely rotates the lane line points to obtain the coordinate points corresponding to the lane line semantic graph, so that the lane line is fitted through the lane line points at the coordinate points corresponding to the original lane line semantic graph.
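The rotate, scan, and rotate-back procedure can be sketched as follows. This is a simplified illustration in plain Python: the function names, the per-row averaging, and the row-rounding rule are assumptions, and a production implementation would rotate the raster image itself rather than individual pixel coordinates.

```python
import math

def rotate_point(x, y, cx, cy, theta):
    """Rotate the point (x, y) about (cx, cy) by theta radians."""
    dx, dy = x - cx, y - cy
    c, s = math.cos(theta), math.sin(theta)
    return cx + c * dx - s * dy, cy + s * dx + c * dy

def fit_lane_points(mask, angle_deg):
    """Rotate the lane pixels so the line becomes vertical, take one centre
    point per row of the rotated frame, then rotate the centres back into
    the original image coordinates."""
    h, w = len(mask), len(mask[0])
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    theta = math.radians(angle_deg)

    # Rotate every lane line pixel about the image centre.
    by_row = {}
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                rx, ry = rotate_point(x, y, cx, cy, theta)
                by_row.setdefault(round(ry), []).append(rx)

    # One fitting point per rotated row: the mean x of that row's pixels,
    # rotated back into the original frame.
    return [rotate_point(sum(xs) / len(xs), ry, cx, cy, -theta)
            for ry, xs in sorted(by_row.items())]
```

Fitting the line through the returned points (e.g. by least squares) then reproduces the lane line in the original, unrotated image.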
The process by which the fitting processing module 113 calculates the included angle between the lane line and the vehicle advancing direction comprises: transversely traversing half of the lane line semantic graph from the left side and from the right side respectively, at a preset step length; recording the number of lane line pixels in each traversed row and storing the coordinates of those pixels; grouping the rows by their pixel counts and selecting the largest group, i.e., the group containing the most rows; calculating the average coordinate of each row in the largest group; and calculating the tangent angle between each pair of adjacent average coordinates and averaging all the calculated tangent angles, the averaged tangent angle being defined as the included angle between the lane line and the vehicle advancing direction.
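One way to sketch this angle-estimation procedure in plain Python is shown below. The grouping of rows by identical pixel count and the `atan2` convention are assumptions made for illustration, and the text's left-half/right-half traversal is simplified to full rows:

```python
import math
from collections import defaultdict

def lane_angle(mask, step=4):
    """Estimate the included angle (degrees) between the lane line and the
    vehicle advancing direction from a binary lane line semantic map."""
    # Traverse rows at the preset step, recording each row's lane pixels.
    rows = {}
    for y in range(0, len(mask), step):
        xs = [x for x, v in enumerate(mask[y]) if v]
        if xs:
            rows[y] = xs

    # Group rows by pixel count and keep the largest group, which rejects
    # rows contaminated by crossings or other markings.
    groups = defaultdict(list)
    for y, xs in rows.items():
        groups[len(xs)].append(y)
    best = max(groups.values(), key=len)

    # Mean coordinate of each retained row, then the average tangent angle
    # between adjacent mean coordinates.
    centres = [(sum(rows[y]) / len(rows[y]), y) for y in sorted(best)]
    angles = [math.atan2(x2 - x1, y2 - y1)
              for (x1, y1), (x2, y2) in zip(centres, centres[1:])]
    return math.degrees(sum(angles) / len(angles)) if angles else 0.0
```

Because only a handful of rows are visited and only counts and sums are accumulated, the cost is a small fraction of a full-image traversal, which is the resource saving claimed below.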
The fitting processing module 113 is further configured to rotate the parking space line semantic graph about the center of the image by the calculated included angle between the lane line and the vehicle advancing direction; transversely traverse the rotated parking space line semantic graph and filter out the transverse parking space lines by a preset parking space line threshold; longitudinally traverse the rotated parking space line semantic graph, search for the longitudinal parking space lines, and acquire the coordinates of the corresponding fitting points; and reversely rotate the longitudinal vehicle location line points to obtain the coordinate points corresponding to the parking space line semantic graph, so that the longitudinal vehicle location line is fitted through the longitudinal vehicle location line points at the coordinate points corresponding to the original parking space line semantic graph.
The fitting processing module 113 is further configured to calculate the included angle between the longitudinal vehicle location line and the longitudinal axis of the vehicle; rotate the parking space line semantic graph about the center of the image by the calculated angle; transversely traverse the rotated parking space line semantic graph and filter out the transverse parking space lines by a preset parking space line threshold; longitudinally traverse the rotated parking space line semantic graph, search for the longitudinal parking space lines, and acquire the coordinates of the corresponding fitting points; and reversely rotate the longitudinal vehicle location line points to obtain the coordinate points corresponding to the parking space line semantic graph, so that the longitudinal vehicle location line is fitted through the longitudinal vehicle location line points at the coordinate points corresponding to the original parking space line semantic graph.
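The transverse filtering step can be illustrated as follows. The row-pixel-count criterion is an assumption; the text states only that transverse parking space lines are filtered by a preset parking space line threshold:

```python
def filter_transverse_lines(mask, threshold):
    """Blank out rows whose parking space line pixel count exceeds the
    preset threshold; such rows belong to transverse parking space lines,
    leaving only the longitudinal ones for the subsequent column scan."""
    return [[0] * len(row) if sum(row) > threshold else list(row)
            for row in mask]
```

After filtering, the longitudinal traversal sees only the (now near-vertical) longitudinal parking space lines, so each column scan yields fitting points for a single line.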
The process by which the fitting processing module 113 calculates the included angle between the longitudinal vehicle location line and the longitudinal axis of the vehicle comprises: transversely traversing half of the parking space line semantic graph from the left side and from the right side respectively, at a preset step length; recording the number of parking space line pixels in each traversed row and storing the coordinates of those pixels; grouping the rows by their pixel counts and selecting the largest group, i.e., the group containing the most rows; calculating the average coordinate of each row in the largest group; calculating the angle between each pair of adjacent average coordinates, averaging all the calculated angles, and judging whether the averaged angle is larger than a preset angle threshold; if so, defining the complementary angle of the averaged angle as the included angle between the longitudinal vehicle location line and the longitudinal axis of the vehicle; if not, defining the averaged angle itself as that included angle.
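Reading the threshold test as choosing between the averaged angle and its complement, the post-processing step might look like the following sketch (the function name and the 45° default threshold are assumptions):

```python
def axis_angle(avg_angle_deg, threshold_deg=45.0):
    """Return the included angle between the longitudinal vehicle location
    line and the vehicle's longitudinal axis: the complement of the
    averaged angle when it exceeds the threshold, otherwise the averaged
    angle itself."""
    if avg_angle_deg > threshold_deg:
        return 90.0 - avg_angle_deg
    return avg_angle_deg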
The output module 114 is configured to integrate the fitted lane line and longitudinal vehicle location line and output them.
The fitting processing module 113 is further configured to, when the indication line comprises only the longitudinal vehicle location line, acquire longitudinal vehicle location line points from the parking space line semantic graph and fit the longitudinal vehicle location line according to those points.
Specifically, when the indication line comprises only the longitudinal vehicle location line, the fitting processing module 113 calculates the included angle between the longitudinal vehicle location line and the longitudinal axis of the vehicle; rotates the parking space line semantic graph about the center of the image by the calculated angle; transversely traverses the rotated parking space line semantic graph and filters out the transverse parking space lines by a preset parking space line threshold; longitudinally traverses the rotated parking space line semantic graph, searches for the longitudinal parking space lines, and acquires the coordinates of the corresponding fitting points; and reversely rotates the longitudinal vehicle location line points to obtain the coordinate points corresponding to the parking space line semantic graph, so that the longitudinal vehicle location line is fitted through those points.
The output module 114 is configured to output the fitted longitudinal vehicle location line.
The detection system described above corresponds one-to-one with the detection method of the indication line, and details are not repeated here.
It should be noted that the division of the above system into modules is only a logical division; in an actual implementation, the modules may be wholly or partially integrated into one physical entity, or may be physically separate. The modules may all be implemented in the form of software called by a processing element, or all in the form of hardware, or partly in the form of software called by a processing element and partly in the form of hardware. For example, the x module may be a separately established processing element, or may be integrated in a chip of the system. In addition, the x module may be stored in the memory of the system in the form of program code and called by one of the processing elements of the system to execute its functions. The other modules are implemented similarly. All or some of the modules may be integrated together or implemented independently. The processing element described herein may be an integrated circuit having signal processing capability. In implementation, each step of the above method, or each of the above modules, may be completed by an integrated logic circuit of hardware in a processor element or by instructions in the form of software. The above modules may be one or more integrated circuits configured to implement the above method, for example: one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), one or more Field Programmable Gate Arrays (FPGAs), and the like. When a module is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a Central Processing Unit (CPU), or another processor capable of calling program code. The modules may also be integrated together and implemented in the form of a System-on-a-Chip (SoC).
Example five
The present embodiment provides a detection apparatus for an indication line, comprising: a processor, a memory, a transceiver, a communication interface, and/or a system bus; the memory is used for storing a computer program, the communication interface is used for communicating with other devices, and the processor and the transceiver are used for running the computer program to cause the detection apparatus of the indication line to execute the steps of the detection method of the indication line according to the first and second embodiments.
The above-mentioned system bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The system bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or one type of bus. The communication interface is used for realizing communication between the database access device and other devices (such as a client, a read-write library, and a read-only library). The memory may include a Random Access Memory (RAM), and may further include a non-volatile memory, such as at least one disk memory.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The protection scope of the method for detecting an indication line according to the present invention is not limited to the execution sequence of the steps listed in this embodiment; all solutions implemented by adding, omitting, or replacing steps in the prior art according to the principles of the present invention are included in the protection scope of the present invention.
The present invention also provides a detection system of an indication line, which can implement the method for detecting an indication line according to the present invention; however, the apparatus for implementing the method is not limited to the structure of the detection system recited in this embodiment, and all structural modifications and substitutions in the prior art made according to the principles of the present invention are included in the protection scope of the present invention.
In summary, the method, system, apparatus, and computer-readable storage medium for detecting an indication line according to the present invention have the following advantages:
First, the present invention greatly reduces the resource consumption of the detection task while still meeting the precision requirement. The lane line fitting method is conceptually simple and involves no complex transformation; the image is traversed only at preset transverse positions rather than over the whole image, and only simple counting, addition, and subtraction are performed, so resource consumption is low.
Second, the method for fitting the longitudinal vehicle location line first obtains the deflection angle and then, by rotating the image, converts the problem into a longitudinal search, which neatly simplifies the detection of inclined longitudinal vehicle location lines; the approach is conceptually simple and computationally light.
Third, the present invention improves the efficiency and precision of fitting lane lines and parking space lines during parking, so that the driving path of the vehicle is more accurate and the vehicle can conveniently search for a parking space automatically. The present invention effectively overcomes various defects in the prior art and has high industrial utilization value.
The foregoing embodiments are merely illustrative of the principles and utilities of the present invention and are not intended to limit the invention. Any person skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes made by those of ordinary skill in the art without departing from the spirit and technical concept disclosed by the present invention shall be covered by the claims of the present invention.