
CN111316288A - Road structure information extraction method, unmanned aerial vehicle and automatic driving system - Google Patents

Info

Publication number: CN111316288A
Application number: CN201980005571.5A
Authority: CN (China)
Prior art keywords: road, lane, image data, information, road structure
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 李鑫超 (Li Xinchao)
Current and original assignee: SZ DJI Technology Co Ltd
Application filed by SZ DJI Technology Co Ltd

Classifications

    • G06V 20/182 (Physics; Computing; Image or video recognition or understanding): Scenes; scene-specific elements; terrestrial scenes; network patterns, e.g. roads or rivers
    • G06F 18/214 (Physics; Computing; Electric digital data processing): Pattern recognition; analysing; design or setup of recognition systems or techniques; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V 10/267 (Physics; Computing; Image or video recognition or understanding): Image preprocessing; segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V 20/588 (Physics; Computing; Image or video recognition or understanding): Scenes; context or environment of the image exterior to a vehicle, using sensors mounted on the vehicle; recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road


Abstract

Embodiments of the invention provide a road structure information extraction method, an unmanned aerial vehicle, and an automatic driving system. The method comprises: acquiring at least one frame of image data of a road (S201); determining a semantic map of the road from the at least one frame of image data (S202); and determining road structure information of the road, including lane grouping information, from the semantic map and a road structure model (S203). Road structure information is thereby extracted automatically and in real time, without manual labeling, which improves extraction efficiency; and because the road structure information includes not only lane information but also lane grouping information, the description of the road structure is more detailed.

Description

Road structure information extraction method, unmanned aerial vehicle and automatic driving system
Technical Field
The embodiment of the invention relates to the technical field of unmanned driving, in particular to a road structure information extraction method, an unmanned aerial vehicle and an automatic driving system.
Background
In unmanned driving scenarios, road structure information is indispensable for safe driving. With the rapid development of unmanned driving technology, the requirements on map precision and information content keep growing; traditional maps can no longer meet them, and high-precision maps that provide accurate and detailed road structure information are required. A high-precision map must not only be accurate in its data but also contain detailed, well-organized road structure information.
In the prior art, after road scene data are collected with sensors such as cameras and lidars, the road structure information contained in the data is labeled manually, so the road structure information cannot be obtained in real time.
Disclosure of Invention
Embodiments of the invention provide a road structure information extraction method, an unmanned aerial vehicle, and an automatic driving system, to solve the prior-art problems that manual labeling is required and road structure information is obtained inefficiently.
In a first aspect, an embodiment of the present invention provides a method for extracting road structure information, including:
acquiring at least one frame of image data of a road;
determining a semantic map of the road according to the at least one frame of image data;
and determining road structure information of the road according to the semantic map and the road structure model, wherein the road structure information comprises lane grouping information.
In a second aspect, an embodiment of the present invention provides an unmanned aerial vehicle, including a fuselage and a processor;
the processor is configured to:
acquiring at least one frame of image data of a road;
determining a semantic map of the road according to the at least one frame of image data;
and determining road structure information of the road according to the semantic map and the road structure model, wherein the road structure information comprises lane grouping information.
In a third aspect, an embodiment of the present invention provides an automatic driving system, including a memory and a processor;
the processor is configured to:
acquiring at least one frame of image data of a road;
determining a semantic map of the road according to the at least one frame of image data;
and determining road structure information of the road according to the semantic map and the road structure model, wherein the road structure information comprises lane grouping information.
In a fourth aspect, an embodiment of the present invention provides an apparatus (e.g., a chip, an integrated circuit, etc.) for extracting road structure information, including: a memory and a processor. The memory stores code for performing an extraction method of the road structure information. The processor is configured to call the code stored in the memory, and execute the method for extracting road structure information according to the first aspect of the present invention.
In a fifth aspect, the present invention provides a computer-readable storage medium, where a computer program is stored, where the computer program includes at least one code that is executable by a computer to control the computer to execute the method for extracting road structure information according to the first aspect.
In a sixth aspect, an embodiment of the present invention provides a computer program, which is used to implement the method for extracting road structure information according to the first aspect when the computer program is executed by a computer.
According to the road structure information extraction method, the unmanned aerial vehicle, and the automatic driving system provided above, at least one frame of image data of a road is acquired, a semantic map of the road is determined from the at least one frame of image data, and road structure information of the road, including lane grouping information, is determined from the semantic map and a road structure model. Road structure information is thereby extracted automatically and in real time, without manual labeling, which improves extraction efficiency; and because the road structure information includes both lane information and lane grouping information, the description of the road structure is more detailed.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a schematic architecture diagram of a drone system provided in accordance with an embodiment of the present invention;
FIG. 2 is a flowchart illustrating an embodiment of a method for extracting road structure information according to the present invention;
fig. 3A to fig. 3C are schematic process diagrams of an embodiment of a method for extracting road structure information according to the present invention;
fig. 4 is a schematic structural diagram of an embodiment of the unmanned aerial vehicle provided in the present invention;
fig. 5 is a schematic structural diagram of an embodiment of an automatic driving system provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present. When a component is referred to as being "connected" to another component, it can be directly connected to the other component or intervening components may also be present.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Fig. 1 is a schematic architecture diagram of a drone system provided in accordance with an embodiment of the present invention. As shown in fig. 1, the drone system 100 provided by the present embodiment may include a drone 110, a display device 130, and a control end 140. The drone 110 may include, among other things, a power system 150, a motion control system 160, a frame (not shown), and a pan/tilt head 120 carried on the frame. The drone 110 may be in wireless communication with the control end 140 and the display device 130. The drone may be an unmanned vehicle or an unmanned aircraft; the embodiments described hereinafter take an unmanned vehicle as an example.
The power system 150 may include one or more electronic speed governors (referred to simply as electronic governors) 151 and one or more electric motors 152. The motor 152 is connected to the electronic governor 151; the electronic governor 151 is configured to receive a driving signal generated by the motion control system 160 and to supply a driving current to the motor 152 according to that signal, thereby controlling the rotation speed of the motor 152. The motor 152 drives the wheels, powering the movement of the drone 110 and enabling it to move in one or more degrees of freedom. It should be understood that the motor 152 may be a DC motor or an AC motor, and may be a brushless motor or a brushed motor.
The motion control system 160 may include a motion controller 161 and a sensing system 162. The sensing system 162 is used to measure motion information of the unmanned vehicle 110, for example its position and motion state in space, such as its three-dimensional position, three-dimensional angle, three-dimensional velocity, three-dimensional acceleration, and three-dimensional angular velocity. The sensing system 162 may include, for example, at least one of a gyroscope, an ultrasonic sensor, an electronic compass, an inertial measurement unit (IMU), a vision sensor, a global navigation satellite system, and a barometer. For example, the global navigation satellite system may be the Global Positioning System (GPS). The motion controller 161 is used to control the movement of the unmanned vehicle 110, for example based on the motion information measured by the sensing system 162. It should be understood that the motion controller 161 may control the unmanned vehicle 110 according to preprogrammed instructions, or in response to one or more control instructions from the control end 140.
The pan/tilt head 120 may include a motor 122. The pan/tilt head 120 can be used to carry a camera 123. The motion controller 161 may control the motion of the pan/tilt head 120 through the motor 122. Optionally, as another embodiment, the pan/tilt head 120 may further include a controller for controlling the movement of the pan/tilt head 120 by controlling the motor 122. It should be understood that the pan/tilt head 120 may be separate from the unmanned vehicle 110, or may be part of the unmanned vehicle 110. It should be understood that the motor 122 may be a dc motor or an ac motor. The motor 122 may be a brushless motor or a brush motor. It should also be understood that pan/tilt head 120 may be located on the top of unmanned vehicle 110, on the bottom of unmanned vehicle 110, or elsewhere.
The photographing device 123 may be, for example, a still camera, a video camera, a radar, or another device for capturing images, and may communicate with the motion controller 161 and take pictures under its control. The photographing device 123 of this embodiment includes at least a photosensitive element, such as a complementary metal-oxide-semiconductor (CMOS) sensor or a charge-coupled device (CCD) sensor. It is understood that the photographing device 123 may instead be fixed directly to the unmanned vehicle 110, in which case the pan/tilt head 120 may be omitted. The number of photographing devices 123 may be set as needed; when there are multiple photographing devices 123, they may be arranged according to a preset rule.
The display device 130 may communicate with the unmanned vehicle 110 in a wireless manner, and may be used to display posture information of the unmanned vehicle 110. In addition, an image photographed by the photographing device 123 may also be displayed on the display apparatus 130. It should be understood that the display device 130 may be a stand-alone device or may be integrated in the control terminal 140.
In some embodiments, the control end 140 may be a terminal device located at the ground end, including but not limited to a mobile phone, a computer, a digital broadcast terminal, a messaging device, a tablet device, a medical device, or a personal digital assistant; in other embodiments, the control end 140 may be a server located in the cloud, including but not limited to a single web server, a server group composed of multiple web servers, or a cloud-computing cluster composed of a large number of computers or web servers. The control end 140 may communicate wirelessly with the unmanned vehicle 110 to manipulate it remotely.
In addition, the unmanned vehicle 110 may further be mounted with a speaker (not shown in the figure) for playing audio files, and the speaker may be directly fixed to the unmanned vehicle 110 or may be mounted on the cradle head 120.
In some embodiments, the motion controller 161 may obtain the road structure information using the extraction method described in the following embodiments, and control the unmanned vehicle 110, for example for path planning, navigation, obstacle avoidance, lane changing, acceleration, and deceleration, in combination with the motion information of the unmanned vehicle 110 obtained by the sensing system 162.
In other embodiments, the unmanned vehicle 110 sends the image data acquired by the photographing device 123 to the control end 140, and the control end 140 acquires the road structure information by using the method for extracting the road structure information according to the acquired image data, and generates a control instruction for controlling the unmanned vehicle 110.
The display device 130 may also be used to display the acquired road structure information.
It should be understood that the above names for the components of the unmanned vehicle system are for identification only and should not be construed as limiting the embodiments of the present invention. The unmanned vehicle system provided by this embodiment can acquire road structure information using the extraction method provided by the following method embodiments, for example in order to construct a high-precision map.
Fig. 2 is a flowchart of an embodiment of a method for extracting road structure information according to the present invention. As shown in fig. 2, the method provided by this embodiment may include:
s201, acquiring at least one frame of image data of a road.
The road in this embodiment is a target road from which road structure information is to be extracted. The image data in this embodiment may be images captured by the same imaging device in time series, images captured by a plurality of imaging devices at the same time and at different angles, or images captured by a plurality of imaging devices at different angles in time series. The present embodiment does not limit the data type of the image data, and may include, but is not limited to, RGB image, grayscale image, depth image, point cloud data, and the like. The frame number of the image data in this embodiment can be set according to actual needs, for example, when real-time performance is pursued, the frame number of the image data can be reduced; when accuracy is pursued, the number of frames of image data can be increased. Optionally, at least one frame of image data of the road in this embodiment may be acquired in real time. For example, when the method provided by the present embodiment is applied to an unmanned vehicle, at least one frame of image data of a road may be acquired in real time by a photographing device installed on the unmanned vehicle. The present embodiment is not limited to the type of the photographing device, and may include, but is not limited to, an RGB camera, a grayscale camera, a depth camera, a laser radar, and the like. The number of frames of the acquired image data may be determined according to the vehicle speed of the unmanned vehicle and/or the photographing frame rate of the photographing device, for example, the number of frames may be inversely related to the vehicle speed, or the number of frames may be positively related to the photographing frame rate.
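As a concrete illustration of the frame-count choice described above, a minimal Python sketch follows. The patent states only the monotonic relationships (inverse to vehicle speed, positive to the photographing frame rate), so the formula, the constants, and the function name are hypothetical.
```python
# Illustrative sketch only: the embodiment states just the monotonic
# relationships (frame count inversely related to vehicle speed, positively
# related to the shooting frame rate); everything else is an assumption.
def choose_frame_count(speed_mps: float, frame_rate_hz: float,
                       k: float = 2.0, min_frames: int = 1,
                       max_frames: int = 10) -> int:
    """Pick how many frames of image data to use for one extraction pass."""
    raw = k * frame_rate_hz / max(speed_mps, 0.1)  # guard against standstill
    return max(min_frames, min(max_frames, round(raw)))
```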
S202, determining a semantic map of the road according to at least one frame of image data.
In this embodiment, after at least one frame of image data of a road is acquired, a semantic map of the road is determined according to the at least one frame of image data. The semantic map of the road comprises semantic information of the road.
S203, determining road structure information of the road according to the semantic map and the road structure model, wherein the road structure information comprises lane grouping information.
In this embodiment, after the semantic map of the road is determined, the road structure information of the road is determined according to the semantic map and the road structure model. The road structure model can output road structure information matched with the semantic map according to the input semantic map.
Optionally, the road structure model in this embodiment may be pre-trained and/or trained online. That is, the road structure model may be trained in advance (offline) before the road structure information is determined, trained online while the road structure information is determined, or determined by a combination of the two. Optionally, the road structure model in this embodiment may be obtained by neural network training, for example by convolutional neural network training. This embodiment does not limit the specific neural network used by the road structure model; for example, one of the deep convolutional neural networks such as AlexNet, VGGNet, GoogLeNet, and ResNet may be used, or an improvement of one of them, or a combination of several of them. A road structure model obtained by neural network training can extract road structure information in a variety of complex scenes. Optionally, the road structure information in this embodiment may further include lane information. The lane information can represent the association between road markers and lanes; the lane grouping information can represent the association between road markers and lane groups.
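For concreteness, a toy PyTorch sketch of such a convolutional road structure model follows. The patent does not disclose an architecture, so the input encoding (one channel per semantic class), the layer sizes, and the per-pixel lane-group output are assumptions chosen only to illustrate the semantic-map-in, road-structure-out shape of the model.
```python
import torch
import torch.nn as nn

class RoadStructureNet(nn.Module):
    """Toy stand-in for the road structure model: takes a rasterized semantic
    map (one channel per semantic class) and predicts per-pixel lane-group
    IDs. Layer sizes and outputs are illustrative, not from the patent."""
    def __init__(self, num_classes: int = 8, num_groups: int = 4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(num_classes, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(64, num_groups, 1)  # per-pixel lane-group logits

    def forward(self, semantic_map: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(semantic_map))
```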
Optionally, the lane grouping information may include one or more of the following information: which lane lines can be divided into the same group, the corresponding relation between the speed limit sign and the lane line group and the corresponding relation between the arrow and the lane line group. It is understood that the specific category included in the lane grouping information may be determined according to the road type, for example, when the road type is a speed limit road segment, the lane grouping information may include a corresponding relationship between a speed limit sign and a lane line group; when the road type is an intersection scene, the lane grouping information may include a correspondence relationship between an arrow and a lane group, and the like.
Optionally, the lane information may include one or more of the following information: the number of lane lines, the position information of the lane lines, the corresponding relation between the speed limit sign and the lane, the corresponding relation between the guide arrow and the lane and the corresponding relation between the lane type and the lane. It is understood that the specific category included in the lane information may be determined according to the road type, for example, when the road type is a speed-limiting road segment, the lane information may include a corresponding relationship between a speed-limiting sign and a lane; when the road type is an intersection scene, the lane information may include a corresponding relationship between a guide arrow and a lane.
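A hypothetical Python container mirroring the lane information and lane grouping information listed above might look like the following sketch; the field names and types are illustrative, since the embodiment specifies only which correspondences the information may include.
```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class LaneGroupInfo:
    """Hypothetical container for one lane group's information."""
    lane_line_ids: List[int] = field(default_factory=list)      # lines in this group
    speed_limit_signs: List[int] = field(default_factory=list)  # signs bound to group
    arrows: List[int] = field(default_factory=list)              # arrows bound to group

@dataclass
class RoadStructureInfo:
    """Hypothetical container pairing lane information with lane grouping."""
    lane_line_positions: Dict[int, List[Tuple[float, float]]]   # polyline per lane line
    lane_groups: List[LaneGroupInfo]
    lane_speed_limits: Dict[int, float]                         # lane id -> speed limit
    lane_guide_arrows: Dict[int, str]                           # lane id -> arrow type
    lane_types: Dict[int, str]                                  # lane id -> lane type
```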
According to the method for extracting the road structure information, the semantic map of the road is determined according to the at least one frame of image data of the road, the road structure information of the road is determined according to the semantic map and the road structure model, the road structure information comprises lane grouping information, automatic extraction of the road structure information is achieved, manual marking is not needed, the road structure information can be extracted in real time, extraction efficiency of the road structure information is improved, the road structure information comprises lane information and lane grouping information, and description of the road structure is more detailed.
The following describes the extraction method with a specific example. Fig. 3A to 3C are schematic process diagrams of an embodiment of a method for extracting road structure information according to the present invention. Fig. 3A shows the acquired image data of a road; as shown in fig. 3A, this embodiment uses two frames of image data captured in time sequence by the same photographing device. Fig. 3B is the semantic map of the road determined from the image data shown in fig. 3A. Fig. 3C is a schematic diagram of the road structure information determined from the semantic map shown in fig. 3B using the road structure model.
In some embodiments, from at least one frame of image data, one implementation of determining a semantic map of a road may be: identifying the road markers in each frame of image data; and determining a semantic map of the road according to the road markers in the at least one frame of image data.
Optionally, the road marker may include one or more of the following information: lane lines, arrows, no-parking areas, curbs, guardrails, and drivable areas.
In some embodiments, one implementation of identifying road markers in each frame of image data may be: and determining semantic identifications of all pixel points in each frame of image data according to a pre-trained semantic segmentation model, and identifying the road markers in each frame of image data.
The semantic segmentation model in this embodiment may determine the semantic identifier of each pixel point in the image data according to the input image data.
Optionally, before determining the semantic identifier of each pixel point in each frame of image data according to the pre-trained semantic segmentation model, the method may further include: and training the semantic segmentation model by adopting a training sample labeled with the semantic identifier of each pixel point in advance.
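Assuming a conventional segmentation network that outputs per-class logits (the embodiment does not fix an interface), per-pixel semantic identifiers for one frame can be obtained as in this sketch.
```python
import torch

def label_pixels(seg_model: torch.nn.Module, image: torch.Tensor) -> torch.Tensor:
    """Assign a semantic ID to every pixel of one (C, H, W) frame.

    `seg_model` is assumed to output per-class logits of shape (1, C, H, W),
    which mirrors common segmentation networks; the patent fixes no API."""
    seg_model.eval()
    with torch.no_grad():
        logits = seg_model(image.unsqueeze(0))   # (1, num_classes, H, W)
    return logits.argmax(dim=1).squeeze(0)       # (H, W) semantic identifiers
```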
In some embodiments, one implementation of determining a semantic map of a road from road markers in at least one frame of image data may be: determining a semantic map of a road according to a frame of image data and road markers in the image data;
or,
and performing fusion processing on the multi-frame image data and the road markers in the multi-frame image data to determine the semantic map of the road.
In this embodiment, when there is a single frame of image data, the semantic map of the road is determined directly from that frame and the road markers in it; when there are multiple frames, the frames must first be fused, and the semantic map of the road is then determined.
Optionally, the fusing the multiple frames of image data and the road markers in the multiple frames of image data to determine the semantic map of the road may include: based on the multi-frame image data and the road markers in the multi-frame image data, a semantic map of the road is determined by using a Simultaneous Localization and Mapping (SLAM) algorithm.
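A heavily simplified stand-in for this fusion step is sketched below. It assumes the localization half of SLAM is already solved, i.e. a global 3x3 homogeneous pose is given per frame, and that each frame contributes labelled 2D points; the labels are then merely voted into a shared top-down grid that stands in for the semantic map.
```python
import numpy as np

def fuse_semantic_frames(frames, poses, grid_shape=(500, 500), cell=0.1):
    """Vote each frame's labelled points into a shared top-down grid.

    frames: list of (N, 3) arrays of (x, y, semantic_label) in the local frame.
    poses:  list of 3x3 homogeneous transforms from local to map frame.
    This is an illustrative simplification of the SLAM-based fusion step."""
    grid_votes = np.zeros(grid_shape + (16,), dtype=np.int32)  # up to 16 classes
    for points, pose in zip(frames, poses):
        # transform frame-local (x, y) into the global map frame
        xy = points[:, :2] @ pose[:2, :2].T + pose[:2, 2]
        ij = np.clip((xy / cell).astype(int), 0, np.array(grid_shape) - 1)
        labels = points[:, 2].astype(int)
        np.add.at(grid_votes, (ij[:, 0], ij[:, 1], labels), 1)
    return grid_votes.argmax(axis=-1)  # majority semantic label per cell
```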
On the basis of any of the foregoing embodiments, before determining the road structure information of the road according to the semantic map and the road structure model, the method provided by this embodiment may further include: obtaining semantic maps and road structure information of a plurality of training samples, wherein the road structure information is labeled in advance; and taking the semantic maps of the training samples as input features of the road structure model, taking the road structure information of the training samples as expected output features of the road structure model, and training the road structure model.
It should be noted that, in this embodiment, the training samples are semantic maps and road structure information corresponding to the semantic maps appearing in pairs. The road structure information is labeled in advance, and may include lane information and lane grouping information.
Optionally, the loss function may be determined according to the expected output characteristic and the actual output characteristic of the road structure model, and the road structure model is trained until the value of the loss function satisfies the preset condition.
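A minimal training loop matching this description is sketched below; the cross-entropy loss and the particular stopping threshold are assumptions, since the embodiment requires only that a loss between expected and actual output features fall below a preset condition.
```python
import torch

def train_until_converged(model, optimizer, loader,
                          loss_threshold=0.05, max_epochs=100):
    """Train on paired (semantic map, expected road structure) samples until
    the average loss satisfies the preset condition. Loss choice is assumed."""
    criterion = torch.nn.CrossEntropyLoss()
    for epoch in range(max_epochs):
        total = 0.0
        for semantic_map, expected in loader:   # paired training samples
            optimizer.zero_grad()
            loss = criterion(model(semantic_map), expected)
            loss.backward()
            optimizer.step()
            total += loss.item()
        if total / len(loader) < loss_threshold:  # preset condition met
            break
    return model
```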
Optionally, the plurality of training samples cover one or more of the following scenarios: a straight-ahead scene, a turning scene, an incoming-outgoing scene, an intersection scene, a bifurcation scene, and a converging scene. It should be noted that the more scene types the training samples cover, the stronger the generalization ability of the trained road structure model, so that road structure information can be extracted in a variety of complex scenes. Road structure information has different characteristics in different scenes. Taking a straight-ahead scene and a bifurcation scene as examples: in the lane information, the bifurcation scene may contain more branch guide arrows than the straight-ahead scene; in the lane grouping information, all lane lines in the straight-ahead scene may belong to the same lane group, while in the bifurcation scene the lane lines before the bifurcation may belong to one lane group and the lane lines in each branch after the bifurcation may belong to different lane groups. In the bifurcation scene shown in fig. 3C, for example, the lane lines are divided into 3 lane groups.
In some embodiments, taking semantic maps of a plurality of training samples as input features of the road structure model, taking road structure information of a plurality of training samples as expected output features of the road structure model, and one implementation manner of training the road structure model may be:
according to the scene type, the training samples are divided into a training sample subset corresponding to the scene type. For example, the plurality of training samples may be divided into a straight-going scene training sample subset, a turning scene training sample subset, a merging-out scene training sample subset, an intersection scene training sample subset, a bifurcation scene training sample subset, and a merging scene training sample subset.
And training the road structure model matched with each training sample subset. For example, the straight-ahead scene road structure model is trained by using the straight-ahead scene training sample subset, the turning scene road structure model is trained by using the turning scene training sample subset, the merging-in and merging-out scene road structure model is trained by using the merging-in and merging-out scene training sample subset, the intersection scene road structure model is trained by using the intersection scene training sample subset, the bifurcation scene road structure model is trained by using the bifurcation scene training sample subset, and the merging scene road structure model is trained by using the merging-out scene training sample subset.
One implementation manner of determining the road structure information of the road according to the semantic map and the road structure model may be: determining a scene type according to the semantic map; determining a road structure model matched with the scene type according to the scene type; and determining road structure information of the road according to the semantic map and the road structure model matched with the scene type.
For example, if the scene type determined from the semantic map is a straight-ahead scene, the straight-ahead scene road structure model is used to determine the road structure information; if it is a turning scene, the turning scene road structure model is used.
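The scene-conditioned selection just illustrated amounts to a simple dispatch, sketched below; the scene classifier and the scene-type keys are hypothetical, since the embodiment says only that the scene type is determined from the semantic map and used to pick the matching model.
```python
def extract_road_structure(semantic_map, scene_classifier, models_by_scene):
    """Pick the road structure model matching the scene type, then apply it.

    `scene_classifier` and the keys of `models_by_scene` (e.g. "straight",
    "turning", "bifurcation") are illustrative assumptions."""
    scene = scene_classifier(semantic_map)  # scene type from the semantic map
    model = models_by_scene[scene]          # model trained on that scene subset
    return model(semantic_map)
```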
In the method for extracting road structure information provided by this embodiment, on the basis of any of the above embodiments, a scene type is determined according to a semantic map; determining a road structure model matched with the scene type according to the scene type; and determining road structure information of the road according to the semantic map and the road structure model matched with the scene type. The road structure information is determined by adopting the road structure model matched with different scene types, and the accuracy of extracting the road structure information is improved.
In some embodiments, the road structure model may include a lane grouping module and a lane information module. The lane grouping module is used for determining lane grouping information, and the lane information module is used for determining lane information.
Determining the road structure information of the road according to the semantic map and the road structure model, which may include: determining lane grouping information of a road according to a semantic map and a lane grouping module; and determining lane information of the road according to the semantic map, the lane grouping information and the lane information module.
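This two-stage flow, grouping first and lane information conditioned on the grouping, can be written as the following sketch; the module call signatures are assumptions.
```python
def run_road_structure_model(semantic_map, lane_grouping_module, lane_info_module):
    """Two-stage flow of the embodiment: determine lane grouping information,
    then determine lane information from the map plus the grouping."""
    grouping = lane_grouping_module(semantic_map)
    lane_info = lane_info_module(semantic_map, grouping)
    return grouping, lane_info
```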
On the basis of any of the above embodiments, the method provided by this embodiment may further include: and performing error correction processing and/or completion processing on the semantic map according to the road structure information.
For example, when a part of the semantic map may be missing due to the occlusion of an obstacle such as another vehicle, the semantic map may be supplemented according to the determined road structure information, such as supplementing the missing part of the lane line; according to the characteristics of the road structure information, such as the parallel characteristic of the lane lines belonging to the same lane line group, the error correction processing can be carried out on the non-parallel lane lines belonging to the same lane line group in the semantic map.
In order to further improve the accuracy of the road structure information, on the basis of the foregoing embodiment, the method provided in this embodiment may further include: and updating the road structure information of the road according to the semantic map and the road structure model after the error correction processing and/or the completion processing are/is carried out.
In this embodiment, the semantic map after the error correction processing and/or the completion processing may be used as the input feature of the road structure model, and the output feature of the road structure model may be used as the updated road structure information. And the accuracy of the road structure information is improved in a closed-loop processing mode.
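This closed loop, correcting or completing the semantic map from the extracted structure and then re-running the road structure model, can be sketched as below; the correction function is hypothetical, standing in for operations such as re-fitting non-parallel lines of one lane group as parallel or filling occluded lane-line segments.
```python
def refine(semantic_map, road_structure_model, correct_fn, max_rounds=2):
    """Closed-loop refinement: extract structure, fix the map with it, and
    re-extract. `correct_fn` is a hypothetical error-correction/completion
    step; the round count is an illustrative choice."""
    info = road_structure_model(semantic_map)
    for _ in range(max_rounds):
        semantic_map = correct_fn(semantic_map, info)  # error correction / completion
        info = road_structure_model(semantic_map)      # updated road structure info
    return semantic_map, info
```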
The road structure information extraction method provided by the embodiments of the invention has broad application prospects and can be used in fields such as automatic driving, high-precision maps, security, and inspection. For example, when the method is applied to automatic driving, an autonomous vehicle can extract road structure information in real time and use it to guide driving without relying on a predetermined high-precision map; this improves the vehicle's adaptability to its environment and its driving safety in unknown environments. When the method is applied to high-precision maps, the road structure information is extracted first and the high-precision map is then built from it, without manual labeling; this improves map production efficiency, reduces production cost, avoids errors introduced by manual labeling, and improves map accuracy.
Fig. 4 is a schematic structural diagram of an embodiment of the unmanned aerial vehicle provided in the present invention. As shown in fig. 4, the drone 400 provided by the present embodiment may include a fuselage 401 and a processor 402. Wherein the processor 402 may be configured to:
acquiring at least one frame of image data of a road;
determining a semantic map of a road according to at least one frame of image data;
and determining road structure information of the road according to the semantic map and the road structure model, wherein the road structure information comprises lane grouping information.
Optionally, the drone 400 may be an unmanned vehicle or an unmanned airplane.
The unmanned aerial vehicle provided by this embodiment can extract road structure information in real time by acquiring at least one frame of image data of a road, determining a semantic map of the road from that image data, and determining the road structure information of the road from the semantic map and a road structure model. A high-precision map can then be built from the extracted road structure information without manual labeling, which improves map production efficiency, reduces production cost, avoids errors introduced by manual labeling, and improves map accuracy.
Optionally, the processor 402 is configured to acquire at least one frame of image data of a road, and specifically may include:
at least one frame of image data of a road is acquired in real time.
Optionally, the road structure model is pre-trained and/or on-line trained.
Optionally, the road structure information further includes lane information.
Optionally, the road structure model is obtained based on neural network training.
Optionally, the road structure model is obtained based on convolutional neural network training.
Optionally, the processor 402 is configured to determine a semantic map of a road according to at least one frame of image data, and specifically may include:
identifying the road markers in each frame of image data;
and determining a semantic map of the road according to the road markers in the at least one frame of image data.
Optionally, the road marker may include one or more of the following information: lane lines, arrows, no-parking areas, curbs, guardrails, and drivable areas.
Optionally, the processor 402 is configured to identify the road marker in each frame of image data, and specifically includes:
and determining semantic identifications of all pixel points in each frame of image data according to a pre-trained semantic segmentation model, and identifying the road markers in each frame of image data.
Optionally, the processor 402 is configured to determine a semantic map of a road according to a road marker in at least one frame of image data, and specifically may include:
determining a semantic map of a road according to a frame of image data and road markers in the image data;
or,
and performing fusion processing on the multi-frame image data and the road markers in the multi-frame image data to determine the semantic map of the road.
Optionally, the processor 402 is configured to perform fusion processing on the multiple frames of image data and the road markers in the multiple frames of image data, and determine the semantic map of the road, where the fusion processing specifically includes:
and determining the semantic map of the road by utilizing a synchronous positioning and mapping algorithm SLAM based on the multi-frame image data and the road markers in the multi-frame image data.
Optionally, before the processor 402 is configured to determine the road structure information of the road according to the semantic map and the road structure model, the processor 402 may be further configured to:
obtaining semantic maps and road structure information of a plurality of training samples, wherein the road structure information is labeled in advance;
and taking the semantic maps of the training samples as input features of the road structure model, taking the road structure information of the training samples as expected output features of the road structure model, and training the road structure model.
Optionally, the plurality of training samples covers one or more of the following scenarios: a straight-ahead scene, a turning scene, an incoming-outgoing scene, an intersection scene, a bifurcation scene and a converging scene.
Optionally, the road structure model may include a lane grouping module and a lane information module, the lane grouping module is configured to determine lane grouping information, and the lane information module is configured to determine lane information; the processor 402 is configured to determine road structure information of a road according to the semantic map and the road structure model, and specifically may include:
determining lane grouping information of a road according to a semantic map and a lane grouping module;
and determining lane information of the road according to the semantic map, the lane grouping information and the lane information module.
Optionally, the lane grouping information may include one or more of the following information: which lane lines can be divided into the same group, the corresponding relation between the speed limit sign and the lane line group and the corresponding relation between the arrow and the lane line group.
Optionally, the lane information may include one or more of the following information: the number of lane lines, the position information of the lane lines, the corresponding relation between the speed limit sign and the lane, the corresponding relation between the guide arrow and the lane and the corresponding relation between the lane type and the lane.
Optionally, the processor 402 may be further configured to:
and performing error correction processing and/or completion processing on the semantic map according to the road structure information.
Optionally, the processor 402 may be further configured to:
and updating the road structure information of the road according to the semantic map and the road structure model after the error correction processing and/or the completion processing are/is carried out.
Fig. 5 is a schematic structural diagram of an embodiment of an automatic driving system provided by the present invention. As shown in fig. 5, the autopilot system 500 provided by this embodiment may include a memory 501 and a processor 502. The memory 501 and the processor 502 may be communicatively connected by a bus, which may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The processor 502 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The processor 502 may be configured to:
acquiring at least one frame of image data of a road;
determining a semantic map of the road according to the at least one frame of image data;
and determining road structure information of the road according to the semantic map and the road structure model, wherein the road structure information comprises lane grouping information.
The automatic driving system provided by this embodiment can be applied to an autonomous vehicle. It can extract road structure information in real time by acquiring at least one frame of image data of a road, determining a semantic map of the road from that image data, and determining the road structure information from the semantic map and a road structure model. The autonomous vehicle is guided by road structure information acquired in real time, without relying on a predetermined high-precision map, which improves its adaptability to the environment and its driving safety in unknown environments.
Optionally, the processor 502 is configured to acquire at least one frame of image data of a road, and specifically may include:
at least one frame of image data of a road is acquired in real time.
Optionally, the road structure model is pre-trained and/or on-line trained.
Optionally, the road structure information further includes lane information.
Optionally, the road structure model is obtained based on neural network training.
Optionally, the road structure model is obtained based on convolutional neural network training.
Optionally, the processor 502 is configured to determine a semantic map of a road according to at least one frame of image data, and specifically may include:
identifying the road markers in each frame of image data;
and determining a semantic map of the road according to the road markers in the at least one frame of image data.
Optionally, the road marker may include one or more of the following information: lane lines, arrows, no-parking areas, curbs, guardrails, and drivable areas.
Optionally, the processor 502 is configured to identify the road marker in each frame of image data, and specifically includes:
and determining semantic identifications of all pixel points in each frame of image data according to a pre-trained semantic segmentation model, and identifying the road markers in each frame of image data.
Optionally, the processor 502 is configured to determine a semantic map of a road according to a road marker in at least one frame of image data, and specifically may include:
determining a semantic map of a road according to a frame of image data and road markers in the image data;
or,
and performing fusion processing on the multi-frame image data and the road markers in the multi-frame image data to determine the semantic map of the road.
Optionally, the processor 502 is configured to perform fusion processing on the multiple frames of image data and the road markers in the multiple frames of image data, and determine the semantic map of the road, where the fusion processing specifically includes:
and determining the semantic map of the road by utilizing a synchronous positioning and mapping algorithm SLAM based on the multi-frame image data and the road markers in the multi-frame image data.
Optionally, before the processor 502 is configured to determine the road structure information of the road according to the semantic map and the road structure model, the processor 502 may be further configured to:
obtaining semantic maps and road structure information of a plurality of training samples, wherein the road structure information is labeled in advance;
and taking the semantic maps of the training samples as input features of the road structure model, taking the road structure information of the training samples as expected output features of the road structure model, and training the road structure model.
Optionally, the plurality of training samples covers one or more of the following scenarios: a straight-ahead scene, a turning scene, an incoming-outgoing scene, an intersection scene, a bifurcation scene and a converging scene.
Optionally, the road structure model may include a lane grouping module and a lane information module, the lane grouping module is configured to determine lane grouping information, and the lane information module is configured to determine lane information; the processor 502 is configured to determine road structure information of a road according to the semantic map and the road structure model, and specifically may include:
determining lane grouping information of a road according to a semantic map and a lane grouping module;
and determining lane information of the road according to the semantic map, the lane grouping information and the lane information module.
Optionally, the lane grouping information may include one or more of the following information: which lane lines can be divided into the same group, the corresponding relation between the speed limit sign and the lane line group and the corresponding relation between the arrow and the lane line group.
Optionally, the lane information may include one or more of the following information: the number of lane lines, the position information of the lane lines, the corresponding relation between the speed limit sign and the lane, the corresponding relation between the guide arrow and the lane and the corresponding relation between the lane type and the lane.
Optionally, the processor 502 may be further configured to:
and performing error correction processing and/or completion processing on the semantic map according to the road structure information.
Optionally, the processor 502 may be further configured to:
and updating the road structure information of the road according to the semantic map and the road structure model after the error correction processing and/or the completion processing are/is carried out.
An embodiment of the present invention further provides an apparatus (e.g., a chip, an integrated circuit, etc.) for extracting road structure information, including: a memory and a processor. The memory stores code for performing an extraction method of the road structure information. The processor is configured to call the code stored in the memory, and execute the method for extracting road structure information provided in any of the above embodiments.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: various media capable of storing program codes, such as a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, and an optical disk.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (55)

1. A method for extracting road structure information is characterized by comprising the following steps:
acquiring at least one frame of image data of a road;
determining a semantic map of the road according to the at least one frame of image data;
and determining road structure information of the road according to the semantic map and the road structure model, wherein the road structure information comprises lane grouping information.
2. The method of claim 1, wherein said obtaining at least one frame of image data of a roadway comprises:
at least one frame of image data of a road is acquired in real time.
3. The method according to claim 1, characterized in that the road structure model is pre-trained and/or trained online.
4. The method of claim 1, wherein the road structure information further comprises lane information.
5. The method of claim 1, wherein the road structure model is derived based on neural network training.
6. The method of claim 5, wherein the road structure model is based on convolutional neural network training.
7. The method of claim 1, wherein determining the semantic map of the road from the at least one frame of image data comprises:
identifying the road markers in each frame of image data;
and determining a semantic map of the road according to the road markers in the at least one frame of image data.
8. The method of claim 7, wherein the road marker comprises one or more of the following information:
lane lines, arrows, no-parking areas, curbs, guardrails, and drivable areas.
9. The method of claim 7, wherein the identifying the road markers in each frame of image data comprises:
and determining semantic identifications of all pixel points in each frame of image data according to a pre-trained semantic segmentation model, and identifying the road markers in each frame of image data.
10. The method of claim 7, wherein determining the semantic map of the road from the road markers in the at least one frame of image data comprises:
determining a semantic map of the road according to one frame of image data and the road marker in the image data;
or,
and carrying out fusion processing on the multi-frame image data and the road markers in the multi-frame image data to determine the semantic map of the road.
11. The method according to claim 10, wherein the fusing the multiple frames of image data and the road markers in the multiple frames of image data to determine the semantic map of the road comprises:
and determining the semantic map of the road by utilizing a synchronous positioning and mapping algorithm SLAM based on the multi-frame image data and the road markers in the multi-frame image data.
12. The method of claim 1, wherein before determining the road structure information of the road according to the semantic map and the road structure model, the method further comprises:
obtaining semantic maps and road structure information of a plurality of training samples, wherein the road structure information is labeled in advance;
and taking semantic maps of the training samples as input features of the road structure model, taking road structure information of the training samples as expected output features of the road structure model, and training the road structure model.
13. The method of claim 12, wherein the plurality of training samples cover one or more of the following scenarios:
a straight-ahead scene, a turning scene, an incoming-outgoing scene, an intersection scene, a bifurcation scene and a converging scene.
14. The method of claim 4, wherein the road structure model comprises a lane grouping module for determining lane grouping information and a lane information module for determining lane information;
and wherein determining the road structure information of the road according to the semantic map and the road structure model comprises:
determining lane grouping information of the road according to the semantic map and the lane grouping module;
and determining the lane information of the road according to the semantic map, the lane grouping information and the lane information module.
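A minimal sketch of the two-module layout of claim 14, assuming both modules are PyTorch networks and that the grouping output shares the spatial size of the semantic map so the two can be concatenated channel-wise; the concrete architectures are left open by the claim.

    import torch
    from torch import nn

    class RoadStructureModel(nn.Module):
        # Lane grouping module first; the lane information module then sees
        # the semantic map together with the grouping result.
        def __init__(self, grouping_module: nn.Module, lane_info_module: nn.Module):
            super().__init__()
            self.grouping_module = grouping_module
            self.lane_info_module = lane_info_module

        def forward(self, semantic_map: torch.Tensor):
            # semantic_map: (N, C, H, W)
            groups = self.grouping_module(semantic_map)
            lanes = self.lane_info_module(torch.cat([semantic_map, groups], dim=1))
            return groups, lanes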
15. The method of claim 14, wherein the lane grouping information comprises one or more of:
which lane lines belong to the same group, the correspondence between speed limit signs and lane line groups, and the correspondence between arrows and lane line groups.
16. The method of claim 14, wherein the lane information comprises one or more of the following:
the number of lane lines, position information of the lane lines, the correspondence between speed limit signs and lanes, the correspondence between guide arrows and lanes, and the correspondence between lane types and lanes.
17. The method of claim 1, further comprising:
performing error correction and/or completion on the semantic map according to the road structure information.
18. The method of claim 17, further comprising:
updating the road structure information of the road according to the road structure model and the semantic map after the error correction and/or completion.
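A minimal sketch of the feedback loop of claims 17 and 18, in which a hypothetical correct_fn repairs the semantic map under guidance of the extracted structure and the road structure information is then re-extracted from the repaired map.

    def refine(semantic_map, road_structure_model, correct_fn, rounds: int = 2):
        # correct_fn(semantic_map, structure) is a hypothetical repair routine,
        # e.g. completing broken lane lines so that they stay consistent with
        # the inferred lane groups.
        structure = road_structure_model(semantic_map)
        for _ in range(rounds):
            semantic_map = correct_fn(semantic_map, structure)  # error correction / completion
            structure = road_structure_model(semantic_map)      # updated structure information
        return semantic_map, structure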
19. An unmanned aerial vehicle, comprising a fuselage and a processor;
wherein the processor is configured to:
acquiring at least one frame of image data of a road;
determining a semantic map of the road according to the at least one frame of image data;
and determining road structure information of the road according to the semantic map and a road structure model, wherein the road structure information comprises lane grouping information.
20. The drone of claim 19, wherein the processor is configured to acquire the at least one frame of image data of the road in real time.
21. The drone of claim 19, wherein the road structure model is pre-trained and/or trained online.
22. The drone of claim 19, wherein the road structure information further includes lane information.
23. The drone of claim 19, wherein the road structure model is derived based on neural network training.
24. The drone of claim 23, wherein the road structure model is derived based on convolutional neural network training.
25. The drone of claim 19, wherein, to determine the semantic map of the road from the at least one frame of image data, the processor is configured to:
identify the road markers in each frame of image data;
and determine the semantic map of the road according to the road markers in the at least one frame of image data.
26. The drone of claim 25, wherein the road markers comprise one or more of the following:
lane lines, arrows, no-parking areas, curbs, guardrails, and drivable areas.
27. The drone of claim 25, wherein, to identify the road markers in each frame of image data, the processor is configured to:
determine a semantic label for each pixel in each frame of image data according to a pre-trained semantic segmentation model, thereby identifying the road markers in each frame of image data.
28. The drone of claim 25, wherein, to determine the semantic map of the road from the road markers in the at least one frame of image data, the processor is configured to:
determine the semantic map of the road according to one frame of image data and the road markers in that frame of image data; or
fuse multiple frames of image data and the road markers in the multiple frames of image data to determine the semantic map of the road.
29. The drone of claim 28, wherein, to fuse the multiple frames of image data and the road markers therein and determine the semantic map of the road, the processor is configured to:
determine the semantic map of the road based on the multiple frames of image data and the road markers therein by using a simultaneous localization and mapping (SLAM) algorithm.
30. The drone of claim 19, wherein, before determining the road structure information of the road according to the semantic map and the road structure model, the processor is further configured to:
obtain semantic maps and road structure information of a plurality of training samples, wherein the road structure information is annotated in advance;
and train the road structure model by taking the semantic maps of the training samples as input features and the road structure information of the training samples as expected output features.
31. The drone of claim 30, wherein the plurality of training samples cover one or more of the following scenarios:
a straight-ahead scene, a turning scene, an entry/exit scene, an intersection scene, a bifurcation scene, and a merging scene.
32. The drone of claim 22, wherein the road structure model includes a lane grouping module to determine lane grouping information and a lane information module to determine lane information;
and wherein, to determine the road structure information of the road according to the semantic map and the road structure model, the processor is configured to:
determine lane grouping information of the road according to the semantic map and the lane grouping module;
and determine the lane information of the road according to the semantic map, the lane grouping information, and the lane information module.
33. The drone of claim 32, wherein the lane grouping information comprises one or more of the following:
which lane lines belong to the same group, the correspondence between speed limit signs and lane line groups, and the correspondence between arrows and lane line groups.
34. The drone of claim 32, wherein the lane information includes one or more of the following:
the number of lane lines, position information of the lane lines, the correspondence between speed limit signs and lanes, the correspondence between guide arrows and lanes, and the correspondence between lane types and lanes.
35. The drone of claim 19, wherein the processor is further configured to:
perform error correction and/or completion on the semantic map according to the road structure information.
36. The drone of claim 35, wherein the processor is further configured to:
update the road structure information of the road according to the road structure model and the semantic map after the error correction and/or completion.
37. The drone of claim 19, wherein the drone comprises an unmanned vehicle or an unmanned aircraft.
38. An automatic driving system, comprising a memory and a processor;
wherein the processor is configured to:
acquiring at least one frame of image data of a road;
determining a semantic map of the road according to the at least one frame of image data;
and determining road structure information of the road according to the semantic map and a road structure model, wherein the road structure information comprises lane grouping information.
39. The system of claim 38, wherein the processor is configured to acquire the at least one frame of image data of the road in real time.
40. The system of claim 38, wherein the road structure model is pre-trained and/or on-line trained.
41. The system of claim 38, wherein the road structure information further comprises lane information.
42. The system of claim 38, wherein the road structure model is derived based on neural network training.
43. The system of claim 42, wherein the road structure model is based on convolutional neural network training.
44. The system of claim 38, wherein, to determine the semantic map of the road from the at least one frame of image data, the processor is configured to:
identify the road markers in each frame of image data;
and determine the semantic map of the road according to the road markers in the at least one frame of image data.
45. The system of claim 44, wherein the road markers comprise one or more of the following:
lane lines, arrows, no-parking areas, curbs, guardrails, and drivable areas.
46. The system of claim 44, wherein, to identify the road markers in each frame of image data, the processor is configured to:
determine a semantic label for each pixel in each frame of image data according to a pre-trained semantic segmentation model, thereby identifying the road markers in each frame of image data.
47. The system of claim 44, wherein, to determine the semantic map of the road from the road markers in the at least one frame of image data, the processor is configured to:
determine the semantic map of the road according to one frame of image data and the road markers in that frame of image data; or
fuse multiple frames of image data and the road markers in the multiple frames of image data to determine the semantic map of the road.
48. The system of claim 47, wherein, to fuse the multiple frames of image data and the road markers therein and determine the semantic map of the road, the processor is configured to:
determine the semantic map of the road based on the multiple frames of image data and the road markers therein by using a simultaneous localization and mapping (SLAM) algorithm.
49. The system of claim 38, wherein, before determining the road structure information of the road according to the semantic map and the road structure model, the processor is further configured to:
obtain semantic maps and road structure information of a plurality of training samples, wherein the road structure information is annotated in advance;
and train the road structure model by taking the semantic maps of the training samples as input features and the road structure information of the training samples as expected output features.
50. The system of claim 49, wherein the plurality of training samples cover one or more of the following scenarios:
a straight-ahead scene, a turning scene, an entry/exit scene, an intersection scene, a bifurcation scene, and a merging scene.
51. The system of claim 41, wherein the road structure model comprises a lane grouping module for determining lane grouping information and a lane information module for determining lane information;
and wherein, to determine the road structure information of the road according to the semantic map and the road structure model, the processor is configured to:
determine lane grouping information of the road according to the semantic map and the lane grouping module;
and determine the lane information of the road according to the semantic map, the lane grouping information, and the lane information module.
52. The system of claim 51, wherein the lane grouping information comprises one or more of the following:
which lane lines belong to the same group, the correspondence between speed limit signs and lane line groups, and the correspondence between arrows and lane line groups.
53. The system of claim 51, wherein the lane information comprises one or more of the following:
the number of lane lines, position information of the lane lines, the correspondence between speed limit signs and lanes, the correspondence between guide arrows and lanes, and the correspondence between lane types and lanes.
54. The system of claim 38, wherein the processor is further configured to:
perform error correction and/or completion on the semantic map according to the road structure information.
55. The system of claim 54, wherein the processor is further configured to:
update the road structure information of the road according to the road structure model and the semantic map after the error correction and/or completion.
CN201980005571.5A 2019-02-28 2019-02-28 Road structure information extraction method, unmanned aerial vehicle and automatic driving system Pending CN111316288A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/076568 WO2020172875A1 (en) 2019-02-28 2019-02-28 Method for extracting road structure information, unmanned aerial vehicle, and automatic driving system

Publications (1)

Publication Number Publication Date
CN111316288A (en) 2020-06-19

Family

ID=71147654

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980005571.5A Pending CN111316288A (en) 2019-02-28 2019-02-28 Road structure information extraction method, unmanned aerial vehicle and automatic driving system

Country Status (2)

Country Link
CN (1) CN111316288A (en)
WO (1) WO2020172875A1 (en)


Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112507891B (en) * 2020-12-12 2023-02-03 武汉中海庭数据技术有限公司 Method and device for automatically identifying high-speed intersection and constructing intersection vector
CN112560684B (en) * 2020-12-16 2023-10-24 阿波罗智联(北京)科技有限公司 Lane line detection method, lane line detection device, electronic equipment, storage medium and vehicle
CN112580511A (en) * 2020-12-18 2021-03-30 广州市城市规划设计所 Method, device, equipment and storage medium for estimating road area rate
CN112785610B (en) * 2021-01-14 2023-05-23 华南理工大学 Lane line semantic segmentation method integrating low-level features
CN113033301B (en) * 2021-02-07 2024-02-13 交信北斗科技有限公司 Method for acquiring road inspection facility data based on AI image recognition technology
CN114419592A (en) * 2022-01-18 2022-04-29 长沙慧联智能科技有限公司 Road area identification method, automatic driving control method and device
CN114620055B (en) * 2022-03-15 2022-11-25 阿波罗智能技术(北京)有限公司 Road data processing method and device, electronic equipment and automatic driving vehicle
CN114724108B (en) * 2022-03-22 2024-02-02 北京百度网讯科技有限公司 Lane line processing method and device
CN117115776A (en) * 2022-05-17 2023-11-24 华为技术有限公司 Method, device, storage medium and program product for predicting vehicle starting behavior
CN115438517B (en) * 2022-11-07 2023-03-24 阿里巴巴达摩院(杭州)科技有限公司 Simulation map generation method, electronic device and computer storage medium


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4861850B2 (en) * 2007-02-13 2012-01-25 アイシン・エィ・ダブリュ株式会社 Lane determination device and lane determination method
CN106802954B (en) * 2017-01-18 2021-03-26 中国科学院合肥物质科学研究院 Unmanned vehicle semantic map model construction method and application method thereof on unmanned vehicle

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170278402A1 (en) * 2016-03-25 2017-09-28 Toyota Jidosha Kabushiki Kaisha Understanding Road Scene Situation and Semantic Representation of Road Scene Situation for Reliable Sharing
CN106441319A (en) * 2016-09-23 2017-02-22 中国科学院合肥物质科学研究院 System and method for generating lane-level navigation map of unmanned vehicle
CN109059954A (en) * 2018-06-29 2018-12-21 广东星舆科技有限公司 The method and system for supporting high-precision map lane line real time fusion to update

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022001432A1 (en) * 2020-07-02 2022-01-06 华为技术有限公司 Method for inferring lane, and method and apparatus for training lane inference model
CN112326686A (en) * 2020-11-02 2021-02-05 坝道工程医院(平舆) Unmanned aerial vehicle intelligent cruise pavement disease detection method, unmanned aerial vehicle and detection system
CN112326686B (en) * 2020-11-02 2024-02-02 坝道工程医院(平舆) Unmanned aerial vehicle intelligent cruising pavement disease detection method, unmanned aerial vehicle and detection system
CN112464773A (en) * 2020-11-19 2021-03-09 浙江吉利控股集团有限公司 Road type identification method, device and system
CN112488009A (en) * 2020-12-05 2021-03-12 武汉中海庭数据技术有限公司 Lane linear point string extraction method and system in unmanned aerial vehicle data
CN113239960A (en) * 2021-04-09 2021-08-10 中用科技有限公司 Intelligent early warning method and system for road protection by fusing AI visual algorithm
CN113239960B (en) * 2021-04-09 2024-05-28 中用科技有限公司 Intelligent road protection early warning method and system integrating AI vision algorithm
CN113449692A (en) * 2021-07-22 2021-09-28 成都纵横自动化技术股份有限公司 Map lane information updating method and system based on unmanned aerial vehicle
CN113591730A (en) * 2021-08-03 2021-11-02 湖北亿咖通科技有限公司 Method, device and equipment for recognizing lane grouping line
CN113591730B (en) * 2021-08-03 2023-11-10 湖北亿咖通科技有限公司 Method, device and equipment for identifying lane grouping lines
WO2023060963A1 (en) * 2021-10-14 2023-04-20 华为技术有限公司 Method and apparatus for identifying road information, electronic device, vehicle, and medium
CN114927006A (en) * 2022-05-23 2022-08-19 东风汽车集团股份有限公司 Indoor passenger-replacing parking system based on unmanned aerial vehicle
CN114927006B (en) * 2022-05-23 2023-03-14 东风汽车集团股份有限公司 Indoor passenger-replacing parking system based on unmanned aerial vehicle

Also Published As

Publication number Publication date
WO2020172875A1 (en) 2020-09-03

Similar Documents

Publication Title
CN111316288A (en) Road structure information extraction method, unmanned aerial vehicle and automatic driving system
CN111448476B (en) Technique for sharing mapping data between unmanned aerial vehicle and ground vehicle
CN110136199B (en) Camera-based vehicle positioning and mapping method and device
CN111670339B (en) Techniques for collaborative mapping between unmanned aerial vehicles and ground vehicles
WO2020113423A1 (en) Target scene three-dimensional reconstruction method and system, and unmanned aerial vehicle
KR20220053513A (en) Image data automatic labeling method and device
CN108235815B (en) Imaging control device, imaging system, moving object, imaging control method, and medium
CN110136058B (en) Drawing construction method based on overlook spliced drawing and vehicle-mounted terminal
CN113240813B (en) Three-dimensional point cloud information determining method and device
CN111316285A (en) Object detection method, electronic device, and computer storage medium
CN111261016A (en) Road map construction method and device and electronic equipment
CN112560769B (en) Method for detecting obstacle, electronic device, road side device and cloud control platform
CN111982132B (en) Data processing method, device and storage medium
CN113063421A (en) Navigation method and related device, mobile terminal and computer readable storage medium
JP7501535B2 (en) Information processing device, information processing method, and information processing program
CN114792414A (en) Target variable detection method and system for carrier
CN113252066B (en) Calibration method and device for parameters of odometer equipment, storage medium and electronic device
CN116745722A (en) Unmanned aerial vehicle control method and device, unmanned aerial vehicle and storage medium
CN111433819A (en) Target scene three-dimensional reconstruction method and system and unmanned aerial vehicle
WO2019242611A1 (en) Control device, moving object, control method and program
CN117392234A (en) Calibration method and device for camera and laser radar
Thai et al. Application of edge detection algorithm for self-driving vehicles
WO2021035746A1 (en) Image processing method and device, and movable platform
JP6495560B2 (en) Point cloud processing system
CN115775325A (en) Pose determination method and device, electronic equipment and storage medium

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20200619)