CN110675635B - Method and device for acquiring external parameters of camera, electronic equipment and storage medium - Google Patents
- Publication number
- CN110675635B (application CN201910953217.7A)
- Authority
- CN
- China
- Prior art keywords
- lane
- lane line
- discrete points
- line
- processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/017—Detecting movement of traffic to be counted or controlled identifying vehicles
- G08G1/0175—Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
Abstract
The present disclosure provides a method and apparatus for acquiring camera external parameters, an electronic device, and a storage medium, applicable to the field of autonomous driving. Lane line detection processing is performed on the image to be processed of the current frame to obtain lane line information; the lane line information is processed according to a preset lane line mask, and a lane line center line is extracted; an iterative operation based on closest point matching is then performed between the lane discrete points in a high-precision map and the lane line center line, and the camera external parameters are output. Because the lane line mask reduces the amount of lane line data, the center line extracted from the processed lane line information contains fewer data points, which improves the efficiency of the closest-point-matching iteration between the center line and the lane discrete points and meets the real-time requirement for camera external parameters.
Description
Technical Field
The present disclosure relates to data processing technologies, and in particular, to a method and an apparatus for acquiring external parameters of a camera, an electronic device, and a storage medium, which can be used in the field of automatic driving.
Background
In a V2X roadside perception scenario, road traffic information beyond the line of sight can be acquired through cameras mounted on light poles or traffic light poles. Using the camera external parameters, the image positions of vehicles or pedestrians captured by the camera can be converted into high-precision map data, yielding the real-world coordinates of those vehicles or pedestrians on the map. However, the camera may shake due to wind or passing heavy vehicles; once it shakes, its external parameters change and must be recalculated to meet the use requirements.
Disclosure of Invention
In view of the above technical problems, the present disclosure provides a method and an apparatus for obtaining external parameters of a camera, an electronic device, and a storage medium.
In a first aspect, the present disclosure provides a method for obtaining external parameters of a camera, including:
performing lane line detection processing on the image to be processed of the current frame to obtain lane line information;
processing the lane line information according to a preset lane line mask, and extracting a lane line center line; and
performing an iterative operation based on closest point matching between the lane discrete points in a high-precision map and the lane line center line, and outputting camera external parameters.
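The three steps of the first aspect can be sketched end to end. The following Python fragment is a minimal illustrative sketch only: the brightness-threshold detector, the hand-made mask, and the mean-offset "extrinsic" are hypothetical stand-ins for the real detection network, the map-derived mask, and the closest-point iteration of the disclosure.

```python
import numpy as np

def detect_lane_lines(frame):
    # Placeholder detector: any pixel brighter than a threshold is
    # treated as belonging to a lane marking (hypothetical stand-in
    # for a real lane-line detection model).
    return (frame > 200).astype(np.uint8)

def extract_centerline(lane_info, lane_mask):
    # Keep only lane pixels that also fall inside the precomputed
    # sparse lane-line mask, then return their coordinates.
    kept = lane_info & lane_mask
    return np.argwhere(kept == 1)

def estimate_extrinsics(centerline_pts, map_pts):
    # Stand-in for the closest-point iteration: here we just report
    # the mean offset between the two point sets as a toy "extrinsic".
    return map_pts.mean(axis=0) - centerline_pts.mean(axis=0)

frame = np.zeros((8, 8), dtype=np.uint8)
frame[3, 2:6] = 255                       # a short horizontal lane stripe
mask = np.zeros_like(frame); mask[3] = 1  # mask admits only row 3
lane = detect_lane_lines(frame)
pts = extract_centerline(lane, mask)
offset = estimate_extrinsics(pts.astype(float), pts.astype(float) + 1.0)
```

Each stand-in corresponds to one claimed step; the later embodiments refine every stage.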
In a second aspect, the present disclosure provides an apparatus for obtaining camera external parameters, including:
the detection module is used for carrying out lane line detection processing on the image to be processed of the current frame to obtain lane line information;
the processing module is used for processing the lane line information according to a preset lane line mask and extracting to obtain a lane line central line;
and the operation module is used for performing iterative operation based on closest point matching on the lane discrete points in the high-precision map and the lane line central line and outputting camera external parameters.
In a third aspect, the present disclosure provides an electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any of the methods described above.
In a fourth aspect, the present disclosure provides a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any of the above.
According to the method, apparatus, electronic device, and storage medium for acquiring camera external parameters provided by the present disclosure, lane line detection processing is performed on the image to be processed of the current frame to obtain lane line information; the lane line information is processed according to a preset lane line mask, and a lane line center line is extracted; an iterative operation based on closest point matching is performed between the lane discrete points in a high-precision map and the lane line center line, and camera external parameters are output. Because the lane line mask reduces the amount of lane line data, the center line extracted from the processed lane line information contains fewer data points, which improves the efficiency of the closest-point-matching iteration between the center line and the lane discrete points and meets the real-time requirement for camera external parameters.
Other effects of the above-described alternative will be described below with reference to specific embodiments.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a schematic diagram of a network architecture provided by the present disclosure;
fig. 2 is a schematic flowchart of a method for acquiring external parameters of a camera according to the present disclosure;
FIG. 3 is a schematic view of a lane line mask provided by the present disclosure;
FIG. 4 is a schematic illustration of lane line information provided by the present disclosure;
FIG. 5 is a schematic illustration of the intersection of lane line mask and lane line information provided by the present disclosure;
FIG. 6 is a schematic illustration of lane line centerline information provided by the present disclosure;
fig. 7 is a schematic flowchart of another method for acquiring camera external parameters provided by the present disclosure;
fig. 8 is a schematic structural diagram of an apparatus for acquiring camera external parameters provided by the present disclosure;
fig. 9 is a block diagram of an electronic device of a processing method of an embodiment of the disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In a V2X roadside perception scenario, road traffic information beyond the line of sight can be acquired through cameras mounted on light poles or traffic light poles. Using the camera external parameters, the image positions of vehicles or pedestrians captured by the camera can be converted into high-precision map data, yielding the real-world coordinates of those vehicles or pedestrians on the map. However, the camera may shake due to wind or passing heavy vehicles; once it shakes, its external parameters change and must be recalculated to meet the use requirements.
In the prior art, to ensure the accuracy of the calculated camera external parameters, a lane line matching approach is generally adopted: lane line detection is first performed on an input image; the result is thinned to extract a lane center line; closest point matching and optimization are then performed against the lane center discrete points in a high-precision map; and whether the reprojection error requirement is met is checked. If the requirement is not met, closest point matching is recalculated; if it is met, the external parameters are output and the ground equation is computed. However, when closest point matching is performed between the lane center line and the lane center discrete points in the high-precision map, all discrete points in the map must be traversed for each data point of the lane center line to obtain a matching result.
Because the high-precision map contains a very large number of lane center discrete points, the calculation and matching are slow and cannot meet real-time requirements.
To solve the above problems, the present disclosure provides a method and apparatus for acquiring camera external parameters, an electronic device, and a storage medium. In this processing method, the lane line information is processed with a lane line mask, which reduces the amount of lane line data; the center line extracted from the processed lane line information therefore contains fewer data points, which improves the efficiency of the closest-point-matching iteration between the lane center line and the lane discrete points and meets the real-time requirement for camera external parameters.
Fig. 1 is a schematic diagram of a network architecture provided by the present disclosure. As shown in fig. 1, the method for acquiring camera external parameters provided by the present disclosure may be applied to various scenarios that require camera external parameters, including but not limited to surveillance cameras at intersections or fixed road segments and the camera systems of autonomous vehicles. The network architecture may include a camera external parameter acquiring apparatus 1, a shooting device 2, and a network. The acquiring apparatus 1 may be a server or server cluster deployed in the cloud, or an electronic unit or module integrated into the shooting device 2. The shooting device 2 may be a terminal with an image capturing function, such as a surveillance camera or a camera system. Through a wireless or wired network, the acquiring apparatus 1 can exchange data with the shooting device 2: it receives image data uploaded by the shooting device 2 and sends the computed camera external parameters back to the shooting device 2.
It should be noted that fig. 1 shows only one of the network architectures provided by the present disclosure; the architecture varies with the application scenario.
In a first aspect, the present disclosure provides a method for acquiring camera external parameters, and fig. 2 is a schematic flow chart of the method for acquiring camera external parameters provided by the present disclosure. As shown in fig. 2, the acquiring method includes:
the execution subject of the method for acquiring the external parameter of the camera provided by the example of the present disclosure is the aforementioned acquiring apparatus of the external parameter of the camera, wherein the acquiring apparatus of the external parameter of the camera may specifically be composed of various types of hardware devices, such as a processor, a communicator, a memory, and the like.
Specifically, the device for acquiring camera external parameters provided by the present disclosure first performs lane line detection processing on a current true image to be processed to obtain lane line information. The lane line detection processing specifically refers to identifying the position of the lane line from the image by using a machine algorithm identification technology, an image identification technology or a pixel identification technology, and the like, and the position is used as lane line information to be subsequently processed, wherein the position can be specifically represented by a pixel coordinate or a plane coordinate of the image, and the disclosure does not limit the position.
Step 102, processing the lane line information according to a preset lane line mask, and extracting a lane line center line.
Specifically, the acquiring apparatus further processes the obtained lane line information according to a preset lane line mask, and extracts a lane line center line from the processed lane line information.
In an optional example, a high-precision map is, colloquially, an electronic map with higher precision and more data dimensions: its precision reaches the centimeter level, and besides road information it also contains static traffic-related information about the surroundings. In general, coordinates in a high-precision map are expressed in a real-world coordinate system, i.e., the three-dimensional coordinates of each object in the real world. The lane discrete points of a high-precision map are the set of three-dimensional coordinates of the points making up a lane; these discrete points can describe the lane's course, edge distribution, width, and other information.
Fig. 3 is a schematic diagram of a lane line mask provided by the present disclosure. As shown in fig. 3, in this example, the discrete points of the high-precision map may be preprocessed to obtain sparse discrete points serving as the lane line mask. The lane line mask can be used to process the previously obtained lane line information in order to reduce its data size. Specifically, fig. 4 is a schematic diagram of lane line information provided by the present disclosure, and fig. 5 is a schematic diagram of the intersection of the lane line mask and the lane line information. As shown in figs. 4 and 5, an AND operation may be performed on the lane line mask and the lane line information to obtain their intersection. Then, as shown in fig. 6, a schematic diagram of the lane line center line information provided by the present disclosure, a thinning process may be applied to the intersection of the lane line mask and the lane line information to obtain the lane line center line information.
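The AND-then-thin procedure of figs. 4 to 6 can be illustrated with binary arrays. In the sketch below, the run-midpoint thinning is a simplified stand-in for a real thinning/skeletonization algorithm, and both input arrays are hand-made examples:

```python
import numpy as np

def centerline_from_mask(lane_info, lane_mask):
    """AND the detected lane pixels with the sparse lane-line mask,
    then thin each horizontal run of the intersection down to its
    middle pixel (a crude stand-in for a real thinning step)."""
    inter = lane_info & lane_mask
    centers = []
    for r in range(inter.shape[0]):
        cols = np.flatnonzero(inter[r])
        if cols.size:
            # split the row into contiguous runs, keep each run's midpoint
            breaks = np.flatnonzero(np.diff(cols) > 1) + 1
            for run in np.split(cols, breaks):
                centers.append((r, run[len(run) // 2]))
    return centers

info = np.zeros((4, 8), dtype=np.uint8)
info[1, 1:7] = 1        # thick detected lane segment
mask = np.zeros_like(info)
mask[1, :] = 1          # mask keeps row 1 only
pts = centerline_from_mask(info, mask)
```

The intersection keeps only lane pixels the mask admits, so the thinning step operates on far fewer pixels than the raw detection.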
In other optional examples, the lane line mask may be acquired in advance, for example offline. Specifically, the acquiring apparatus may perform plane projection processing on the lane discrete points of the high-precision map according to offline external parameters.
The offline external parameters form a coordinate conversion matrix that converts the three-dimensional coordinates of objects in the high-precision map, expressed in the real-world coordinate system, into image-based plane coordinates; this matrix is generally related to the pose or motion state of the camera in the real-world coordinate system. When acquiring the lane line mask, the acquiring apparatus uses offline external parameters, which may be the coordinate conversion matrix corresponding to the camera in an ideal pose or motion state at its intended real-world position.
In other words, the acquiring apparatus may convert the world coordinates of the lane discrete points, based on the real-world coordinate system, into plane coordinates in the image-based plane coordinate system according to the offline external parameters.
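As a sketch of this conversion, the following assumes a simple pinhole camera model with a hypothetical rotation R, translation t, and intrinsic matrix K; all numeric values are illustrative and not taken from the disclosure:

```python
import numpy as np

# Hypothetical offline extrinsics: identity rotation, camera 10 m from
# the road points along the optical axis, plus a pinhole intrinsic K.
R = np.eye(3)
t = np.array([0.0, 0.0, 10.0])
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project_points(world_pts):
    """Project 3-D lane discrete points (world coordinates) into the
    image plane using the offline extrinsics R, t and intrinsics K."""
    cam = world_pts @ R.T + t        # world -> camera coordinates
    uvw = cam @ K.T                  # camera -> homogeneous pixel coords
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide

lane_pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
px = project_points(lane_pts)
```

A point on the optical axis lands at the principal point (320, 240); a point offset by 1 m shifts by focal length / depth = 50 pixels.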
The acquiring apparatus then performs sparse processing on the plane-projected lane discrete points to obtain the lane line mask.
Specifically, the acquiring apparatus may first calculate the distance between any two lane discrete points from their plane coordinates. The acquiring apparatus then screens the lane discrete points against a preset distance threshold and uses the discrete points retained after screening as the lane line mask, where the distances between any two retained discrete points are all greater than the distance threshold.
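This distance-threshold screening can be realized with a simple greedy pass. The sketch below assumes 2-D plane coordinates and an illustrative threshold; the disclosure does not prescribe a particular screening order:

```python
import numpy as np

def sparsify(points, min_dist):
    """Greedily keep a point only if it is farther than min_dist from
    every point already kept, so the retained discrete points are
    pairwise separated by more than the threshold."""
    kept = []
    for p in points:
        if all(np.linalg.norm(p - q) > min_dist for q in kept):
            kept.append(p)
    return np.array(kept)

pts = np.array([[0.0, 0.0], [0.5, 0.0], [2.0, 0.0], [2.2, 0.0], [5.0, 0.0]])
mask_pts = sparsify(pts, min_dist=1.0)
```

Points 0.5 m and 0.2 m from already-kept neighbors are dropped, leaving a sparse set that still traces the lane.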
Through the above example, a lane line mask can be obtained whose constituent discrete points are expressed in the image-based plane coordinate system and are relatively sparse, so the mask can effectively describe the lane distribution with a small amount of data. Correspondingly, when the lane line mask is ANDed with the lane line information, the resulting intersection effectively preserves the lane distribution described by the lane line information while reducing the data size from which the lane line center line is extracted.
Step 103, performing an iterative operation based on closest point matching between the lane discrete points in the high-precision map and the lane line center line, and outputting camera external parameters.
Specifically, similar to the prior art, the camera external parameters are obtained from the lane discrete points and the lane line center line obtained above by an iterative operation based on closest point matching.
Further, the acquiring apparatus may first perform a closest point matching calculation between the lane discrete points and the lane line center line to obtain camera external parameters; it then judges whether the calculated external parameters satisfy a preset reprojection error. If so, the acquiring apparatus outputs the camera external parameters; otherwise, it returns to the step of performing the closest point matching calculation between the lane discrete points and the lane line center line, and obtains and judges the camera external parameters again.
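The match-solve-check loop can be sketched as a toy 2-D iterative-closest-point routine. The Kabsch alignment step and the use of the mean matching error as the "reprojection error" are illustrative simplifications of the actual calculation:

```python
import numpy as np

def closest_point_iteration(src, dst, err_thresh=1e-3, max_iter=50):
    """Toy closest-point iteration in 2-D: repeatedly match each source
    point to its nearest destination point, solve for a rigid transform
    (Kabsch), and stop once the mean matching error (stand-in for the
    reprojection error) falls below the threshold."""
    R, t = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(max_iter):
        # brute-force nearest-neighbour matching (the step the
        # disclosure later accelerates with a KD tree)
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        if np.mean(np.linalg.norm(cur - matched, axis=1)) < err_thresh:
            break
        # Kabsch: best rotation + translation aligning cur to matched
        mu_s, mu_d = cur.mean(axis=0), matched.mean(axis=0)
        U, _, Vt = np.linalg.svd((cur - mu_s).T @ (matched - mu_d))
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:   # avoid reflections
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = mu_d - Ri @ mu_s
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti  # accumulate the total transform
    return R, t

dst = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 1.0]])
src = dst + np.array([0.1, -0.2])   # pure translation offset
R, t = closest_point_iteration(src, dst)
```

With a pure translation offset and correct nearest-neighbour correspondences, the loop recovers the inverse offset in one iteration and then terminates on the error check.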
In addition, when performing the closest point matching calculation between the lane discrete points and the lane line center line, the prior art uses traversal processing, that is, it traverses every lane discrete point for each data point of the lane line center line in turn to determine the closest matching point. In the disclosed example, to further improve processing efficiency, a KD tree is built for the lane line center line during the closest point matching calculation, and the original traversal search is replaced with a KD tree search, which effectively improves the efficiency of closest point matching.
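A KD tree replaces the linear scan with a branch-and-bound search. The following is a minimal, self-contained 2-D KD tree written for illustration only; a production system would typically use a library implementation:

```python
import numpy as np

def build_kdtree(pts, depth=0):
    """Minimal 2-D KD tree: split points on alternating axes.
    Each node is (point, axis, left_subtree, right_subtree)."""
    if len(pts) == 0:
        return None
    axis = depth % 2
    pts = sorted(pts, key=lambda p: p[axis])
    mid = len(pts) // 2
    return (pts[mid], axis,
            build_kdtree(pts[:mid], depth + 1),
            build_kdtree(pts[mid + 1:], depth + 1))

def nearest(node, query, best=None):
    """Branch-and-bound nearest-neighbour search in the KD tree."""
    if node is None:
        return best
    point, axis, left, right = node
    d = np.linalg.norm(np.subtract(point, query))
    if best is None or d < best[1]:
        best = (point, d)
    near, far = (left, right) if query[axis] < point[axis] else (right, left)
    best = nearest(near, query, best)
    # only descend the far side if the splitting plane is close enough
    if abs(query[axis] - point[axis]) < best[1]:
        best = nearest(far, query, best)
    return best

pts = [(0.0, 0.0), (2.0, 1.0), (5.0, 5.0), (1.0, 4.0)]
tree = build_kdtree(pts)
pt, dist = nearest(tree, (2.1, 1.2))
```

The pruning test on the splitting plane is what reduces the per-query cost from linear to roughly logarithmic on well-distributed points.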
In the present disclosure, the lane line information is processed with the lane line mask, which reduces the amount of lane line data; the lane line center line extracted from the processed lane line information therefore contains fewer data points, which improves the efficiency of the closest-point-matching iteration between the lane center line and the lane discrete points and meets the real-time requirement for camera external parameters.
Fig. 7 is a schematic flowchart of another method for acquiring camera external parameters provided by the present disclosure. As shown in fig. 7, the method includes:
Step 204, processing the lane line information according to a lane line mask corresponding to a preset region of interest, and extracting a lane line center line; and
Step 205, performing an iterative operation based on closest point matching between the lane discrete points in the high-precision map and the lane line center line, and outputting camera external parameters.
Different from the foregoing example, to further improve the output efficiency of the camera external parameters and ensure real-time operation, this example may perform the lane line detection processing on the image to be processed of the current frame and obtain lane line information as follows:
the acquisition device acquires a current frame image, wherein the current frame image can be acquired by shooting equipment; and then, the acquisition device processes the current frame image according to a pre-drawn region of interest, and takes the current frame image corresponding to the region of interest obtained after processing as the image to be processed. Specifically, the region of interest refers to a region including lane line information, and generally, in the current frame image obtained by acquisition, the area where the lane line is located has a small proportion, and the area with the large proportion is generally the area where the lane itself or the intersection or the sidewalk is located. Therefore, the area where the lane line is located can be previously circled to serve as the interested area of the current frame image. And after the acquisition device acquires the current frame image again, processing the current frame image based on the region of interest, taking the current frame image corresponding to the region of interest as the image to be processed, and taking only the information contained in the image to be processed as the information for the next processing.
Then, similarly to the foregoing example, the acquiring apparatus performs lane line detection processing on the image to be processed to obtain lane line information; processes the lane line information according to the lane line mask corresponding to the preset region of interest and extracts a lane line center line; performs an iterative operation based on closest point matching between the lane discrete points in the high-precision map and the lane line center line; and finally outputs the camera external parameters.
In this example, since only the image corresponding to the region of interest is processed, the lane line mask should likewise be the mask corresponding to the region of interest. It is obtained similarly to the previous example and is not described again here.
In the present disclosure, the lane line information is processed with the lane line mask, which reduces the amount of lane line data; the lane line center line extracted from the processed lane line information therefore contains fewer data points, which improves the efficiency of the closest-point-matching iteration between the lane center line and the lane discrete points and meets the real-time requirement for camera external parameters.
In a second aspect, the present disclosure provides an apparatus for acquiring camera external parameters, and fig. 8 is a schematic structural diagram of an apparatus for acquiring camera external parameters provided by the present disclosure.
As shown in fig. 8, the apparatus for acquiring external parameters of a camera includes:
the detection module 10 is configured to perform lane line detection processing on the image to be processed of the current frame to obtain lane line information;
the processing module 20 is configured to process the lane line information according to a preset lane line mask, and extract and obtain a lane line center line;
and the operation module 30 is used for performing iterative operation based on closest point matching on the lane discrete points in the high-precision map and the lane line central line and outputting camera external parameters.
In an optional example, the detection module 10 is specifically configured to acquire a current frame image, process the current frame image according to a pre-drawn region of interest, use the current frame image corresponding to the processed region of interest as the image to be processed, and perform lane line detection processing on the image to be processed to obtain lane line information.
In an optional example, the operation module 30 is specifically configured to: perform a closest point matching calculation between the lane discrete points and the lane line center line to obtain camera external parameters; judge whether the calculated external parameters satisfy a preset reprojection error; if so, output the camera external parameters; otherwise, return to the step of performing the closest point matching calculation between the lane discrete points and the lane line center line.
In an optional example, the acquiring apparatus further includes a preprocessing module 20, configured to perform plane projection processing on the lane discrete points of the high-precision map according to offline external parameters, and to perform sparse processing on the plane-projected lane discrete points to obtain the lane line mask.
In an optional example, the preprocessing module 20 is specifically configured to: convert the world coordinates of the lane discrete points, based on the real-world coordinate system, into plane coordinates in the image-based plane coordinate system according to the offline external parameters; calculate the distance between any two lane discrete points from their plane coordinates; and screen the lane discrete points against a preset distance threshold, using the discrete points retained after screening as the lane line mask, where the distances between any two retained discrete points are all greater than the distance threshold.
In an optional example, the processing module 20 is specifically configured to: perform an AND operation on the lane line mask and the lane line information to obtain their intersection; and thin the intersection of the lane line mask and the lane line information to extract the lane line center line.
The apparatus for acquiring camera external parameters provided by the present disclosure performs lane line detection processing on the image to be processed of the current frame to obtain lane line information; processes the lane line information according to a preset lane line mask and extracts a lane line center line; and performs an iterative operation based on closest point matching between the lane discrete points in a high-precision map and the lane line center line, outputting camera external parameters. Because the lane line mask reduces the amount of lane line data, the center line extracted from the processed lane line information contains fewer data points, which improves the efficiency of the closest-point-matching iteration between the center line and the lane discrete points and meets the real-time requirement for camera external parameters.
The present disclosure also provides an electronic device and a readable storage medium according to an embodiment of the present disclosure.
Fig. 9 is a block diagram of an electronic device for the acquiring method according to an embodiment of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. Electronic devices may also represent various forms of mobile devices, such as personal digital processing devices, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant as examples only and are not intended to limit the implementations of the disclosure described and/or claimed herein.
As shown in fig. 9, the electronic apparatus includes: one or more processors 901, a memory 902, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). One processor 901 is taken as an example in fig. 9.
The memory 902 may include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created according to the use of the electronic device of the acquisition method, and the like. Further, the memory 902 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 902 may optionally include memory located remotely from the processor 901, which may be connected to the electronic device via a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof. The electronic device of the acquisition method may further include an input device 903 and an output device 904. The processor 901, the memory 902, the input device 903, and the output device 904 may be connected by a bus or in other manners; connection by a bus is illustrated in fig. 9.
The input device 903 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus for the acquisition method; examples include a touch screen, a keypad, a mouse, a trackpad, a touchpad, a pointing stick, one or more mouse buttons, a trackball, and a joystick. The output device 904 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibrating motors), and the like. The display device may include, but is not limited to, a liquid crystal display (LCD), a light-emitting diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, ASICs (application-specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.
Claims (14)
1. A method for acquiring external parameters of a camera is characterized by comprising the following steps:
carrying out lane line detection processing on the image to be processed of the current frame to obtain lane line information;
processing the lane line information according to a preset lane line mask and extracting a lane line central line, wherein the lane line mask is obtained by performing sparse processing on lane discrete points of the high-precision map after plane projection;
and carrying out iterative operation based on closest point matching on the lane discrete points in the high-precision map and the lane line central line, and outputting camera external parameters.
2. The obtaining method according to claim 1, wherein the performing lane line detection processing on the image to be processed of the current frame to obtain lane line information includes:
acquiring a current frame image;
processing the current frame image according to a pre-drawn region of interest, and taking the current frame image corresponding to the region of interest obtained after processing as the image to be processed;
and carrying out lane line detection processing on the image to be processed to obtain lane line information.
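The region-of-interest step in claim 2 can be sketched as a simple crop, assuming the pre-drawn region is an axis-aligned rectangle; the `(top, bottom, left, right)` tuple is an illustrative representation, not the claimed one:

```python
import numpy as np

def crop_roi(frame: np.ndarray, roi):
    """Keep only the pre-drawn region of interest of the current frame.

    `roi` is a hypothetical (top, bottom, left, right) pixel rectangle;
    restricting processing to it discards irrelevant pixels (e.g. sky)
    before the lane line detector runs on the to-be-processed image.
    """
    top, bottom, left, right = roi
    return frame[top:bottom, left:right]

frame = np.arange(100 * 200).reshape(100, 200)   # stand-in for a frame image
to_process = crop_roi(frame, (60, 100, 0, 200))  # keep the lower 40 rows
print(to_process.shape)  # → (40, 200)
```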
3. The acquisition method according to claim 1, wherein performing an iterative operation based on closest point matching on the lane discrete points in the high-precision map and the lane line center line, and outputting camera external parameters comprises:
performing closest point matching calculation on the lane discrete points and the lane line central line to obtain camera external parameters;
judging whether the camera external parameters obtained by calculation meet a preset reprojection error or not;
if yes, outputting the camera external parameters; otherwise, returning to the step of performing closest point matching calculation on the lane discrete points and the lane line central line.
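The loop in claim 3 can be illustrated with a deliberately simplified closest-point iteration. A real implementation would estimate full 6-DoF camera extrinsics and check a true reprojection error; this sketch only recovers a 2-D translation between hypothetical map points and a central line, using the mean matching error as a stand-in for the convergence test:

```python
import numpy as np

def closest_point_translation(map_pts, center_pts, max_iter=50, tol=1e-6):
    """Toy 2-D iteration: match each map point to its nearest central-line
    point, estimate a translation from the matches, and repeat until the
    update is below the threshold (the 'output external parameters' branch).
    """
    offset = np.zeros(2)
    for _ in range(max_iter):
        moved = map_pts + offset
        # brute-force nearest-neighbour matching
        d = np.linalg.norm(moved[:, None, :] - center_pts[None, :, :], axis=2)
        nearest = center_pts[d.argmin(axis=1)]
        step = (nearest - moved).mean(axis=0)
        offset += step
        if np.linalg.norm(step) < tol:  # error threshold met: stop iterating
            break
    return offset

center = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
map_pts = center + np.array([0.0, -0.25])  # central line shifted downward
off = closest_point_translation(map_pts, center)
print(np.round(off, 3))
```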
4. The acquisition method according to any one of claims 1 to 3, characterized by further comprising:
according to off-line external parameters, carrying out plane projection processing on lane discrete points of the high-precision map;
and performing sparse processing on the lane discrete points after the plane projection processing to obtain a lane line mask.
5. The acquisition method according to claim 4, wherein the planar projection processing of the lane discrete points of the high-precision map according to the off-line external parameters comprises:
converting the world coordinates of the lane discrete points based on the real coordinate system into the plane coordinates of the plane coordinate system based on the image according to the off-line external parameters;
the performing sparse processing on the lane discrete points after the plane projection processing to obtain a lane line mask comprises:
calculating the distance between any two lane discrete points according to the plane coordinates of the lane discrete points;
and screening the discrete points of each lane according to a preset distance threshold, and taking the discrete points of each lane reserved after screening as a lane line mask, wherein the distances between the discrete points of each lane reserved after screening are all larger than the distance threshold.
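Claims 4 and 5 together can be sketched as a projection step followed by the distance-threshold screening. A pinhole model is one plausible reading of the claimed plane projection; the intrinsic matrix `K`, the extrinsics `R`, `t`, and the point coordinates below are all hypothetical values:

```python
import numpy as np

def project_points(world_pts, R, t, K):
    """Project 3-D lane discrete points into the image plane using offline
    extrinsics (R, t) and intrinsics K (standard pinhole model)."""
    cam = (R @ world_pts.T).T + t          # world frame -> camera frame
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]          # perspective divide

def sparsify(pts, min_dist):
    """Greedy screening: keep a point only if it is farther than min_dist
    from every point already kept; the survivors form the lane line mask."""
    kept = []
    for p in pts:
        if all(np.linalg.norm(p - q) > min_dist for q in kept):
            kept.append(p)
    return np.array(kept)

R, t = np.eye(3), np.array([0.0, 0.0, 5.0])                  # offline extrinsics
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # intrinsics
world = np.array([[0.0, 0, 0], [0.01, 0, 0], [1.0, 0, 0], [2.0, 0, 0]])

uv = project_points(world, R, t, K)
mask_pts = sparsify(uv, min_dist=10.0)     # near-duplicate point is dropped
print(len(mask_pts))  # → 3
```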
6. The method according to claim 1, wherein the processing the lane line information according to a preset lane line mask and extracting the lane line central line comprises:
performing an AND operation on the lane line mask and the lane line information to obtain an intersection of the lane line mask and the lane line information;
and performing thinning processing on the intersection to extract the lane line central line.
7. An apparatus for obtaining external parameters of a camera, comprising:
the detection module is used for carrying out lane line detection processing on the image to be processed of the current frame to obtain lane line information;
the processing module is used for processing the lane line information according to a preset lane line mask to extract a lane line central line, wherein the lane line mask is obtained by performing sparse processing on lane discrete points of the high-precision map after plane projection;
and the operation module is used for performing iterative operation based on closest point matching on the lane discrete points in the high-precision map and the lane line central line and outputting camera external parameters.
8. The obtaining apparatus according to claim 7, wherein the detecting module is specifically configured to acquire a current frame image, process the current frame image according to a pre-drawn region of interest, use the current frame image corresponding to the region of interest obtained after the processing as the image to be processed, and perform lane line detection processing on the image to be processed to obtain lane line information.
9. The obtaining device of claim 7, wherein the operation module is specifically configured to perform closest point matching calculation on the lane discrete points and the lane line central line to obtain camera external parameters; judge whether the camera external parameters obtained by calculation meet a preset reprojection error; if yes, output the camera external parameters; otherwise, return to the step of performing closest point matching calculation on the lane discrete points and the lane line central line.
10. The acquisition device according to any one of claims 7 to 9, characterized by further comprising: a preprocessing module;
the preprocessing module is used for performing plane projection processing on the lane discrete points of the high-precision map according to off-line external parameters, and performing sparse processing on the lane discrete points after the plane projection processing to obtain a lane line mask.
11. The obtaining apparatus according to claim 10, wherein the preprocessing module is specifically configured to: converting the world coordinates of the lane discrete points based on the real coordinate system into the plane coordinates of the plane coordinate system based on the image according to the off-line external parameters; calculating the distance between any two lane discrete points according to the plane coordinates of the lane discrete points; and screening the discrete points of each lane according to a preset distance threshold, and taking the discrete points of each lane reserved after screening as a lane line mask, wherein the distances between the discrete points of each lane reserved after screening are all larger than the distance threshold.
12. The obtaining apparatus according to claim 7, wherein the processing module is specifically configured to: perform an AND operation on the lane line mask and the lane line information to obtain an intersection of the lane line mask and the lane line information; and perform thinning processing on the intersection to extract the lane line central line.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910953217.7A CN110675635B (en) | 2019-10-09 | 2019-10-09 | Method and device for acquiring external parameters of camera, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110675635A CN110675635A (en) | 2020-01-10 |
CN110675635B true CN110675635B (en) | 2021-08-03 |
Family
ID=69081088
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910953217.7A Active CN110675635B (en) | 2019-10-09 | 2019-10-09 | Method and device for acquiring external parameters of camera, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110675635B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111291681B (en) * | 2020-02-07 | 2023-10-20 | 北京百度网讯科技有限公司 | Method, device and equipment for detecting lane change information |
CN111597987B (en) * | 2020-05-15 | 2023-09-01 | 阿波罗智能技术(北京)有限公司 | Method, apparatus, device and storage medium for generating information |
CN111612851B (en) * | 2020-05-20 | 2023-04-07 | 阿波罗智联(北京)科技有限公司 | Method, apparatus, device and storage medium for calibrating camera |
CN111650604B (en) * | 2020-07-02 | 2023-07-28 | 上海电科智能系统股份有限公司 | Method for realizing accurate detection of self-vehicle and surrounding obstacle by using accurate positioning |
CN113884089B (en) * | 2021-09-09 | 2023-08-01 | 武汉中海庭数据技术有限公司 | Camera lever arm compensation method and system based on curve matching |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102855759A (en) * | 2012-07-05 | 2013-01-02 | 中国科学院遥感应用研究所 | Automatic collecting method of high-resolution satellite remote sensing traffic flow information |
CN103345751A (en) * | 2013-07-02 | 2013-10-09 | 北京邮电大学 | Visual positioning method based on robust feature tracking |
CN105718870A (en) * | 2016-01-15 | 2016-06-29 | 武汉光庭科技有限公司 | Road marking line extracting method based on forward camera head in automatic driving |
CN106558080A (en) * | 2016-11-14 | 2017-04-05 | 天津津航技术物理研究所 | Join on-line proving system and method outside a kind of monocular camera |
CN106909937A (en) * | 2017-02-09 | 2017-06-30 | 北京汽车集团有限公司 | Traffic lights recognition methods, control method for vehicle, device and vehicle |
CN107554430A (en) * | 2017-09-20 | 2018-01-09 | 京东方科技集团股份有限公司 | Vehicle blind zone view method, apparatus, terminal, system and vehicle |
CN108629804A (en) * | 2017-03-20 | 2018-10-09 | 北京大学口腔医学院 | A kind of three-dimensional face symmetric reference plane extracting method with weight distribution mechanism |
CN109101957A (en) * | 2018-10-29 | 2018-12-28 | 长沙智能驾驶研究院有限公司 | Binocular solid data processing method, device, intelligent driving equipment and storage medium |
CN109523597A (en) * | 2017-09-18 | 2019-03-26 | 百度在线网络技术(北京)有限公司 | The scaling method and device of Camera extrinsic |
CN110210303A (en) * | 2019-04-29 | 2019-09-06 | 山东大学 | A kind of accurate lane of Beidou vision fusion recognizes and localization method and its realization device |
CN110298262A (en) * | 2019-06-06 | 2019-10-01 | 华为技术有限公司 | Object identification method and device |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102227855B1 (en) * | 2015-01-22 | 2021-03-15 | 현대모비스 주식회사 | Parking guide system and method for controlling the same |
US10990830B2 (en) * | 2016-09-13 | 2021-04-27 | Genetec Inc. | Auto-calibration of tracking systems |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110675635B (en) | Method and device for acquiring external parameters of camera, electronic equipment and storage medium | |
US20220270289A1 (en) | Method and apparatus for detecting vehicle pose | |
CN111612760A (en) | Method and apparatus for detecting obstacles | |
CN111767853B (en) | Lane line detection method and device | |
CN110929639A (en) | Method, apparatus, device and medium for determining position of obstacle in image | |
CN111739005B (en) | Image detection method, device, electronic equipment and storage medium | |
JP7273129B2 (en) | Lane detection method, device, electronic device, storage medium and vehicle | |
CN111738072A (en) | Training method and device of target detection model and electronic equipment | |
KR102694715B1 (en) | Method for detecting obstacle, electronic device, roadside device and cloud control platform | |
CN111797745B (en) | Training and predicting method, device, equipment and medium for object detection model | |
CN111578839B (en) | Obstacle coordinate processing method and device, electronic equipment and readable storage medium | |
CN112528786A (en) | Vehicle tracking method and device and electronic equipment | |
CN111666876B (en) | Method and device for detecting obstacle, electronic equipment and road side equipment | |
CN110717933B (en) | Post-processing method, device, equipment and medium for moving object missed detection | |
CN111601013B (en) | Method and apparatus for processing video frames | |
JP2022050311A (en) | Method for detecting lane change of vehicle, system, electronic apparatus, storage medium, roadside machine, cloud control platform, and computer program | |
CN110659600A (en) | Object detection method, device and equipment | |
CN111539347A (en) | Method and apparatus for detecting target | |
CN111652113A (en) | Obstacle detection method, apparatus, device, and storage medium | |
CN111027195B (en) | Simulation scene generation method, device and equipment | |
CN111191619A (en) | Method, device and equipment for detecting virtual line segment of lane line and readable storage medium | |
CN112509126A (en) | Method, device, equipment and storage medium for detecting three-dimensional object | |
CN111814651A (en) | Method, device and equipment for generating lane line | |
CN111597987A (en) | Method, apparatus, device and storage medium for generating information | |
CN112749701B (en) | License plate offset classification model generation method and license plate offset classification method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |