WO2021052810A1 - Method for determining a model of a traffic barrier - Google Patents
- Publication number
- WO2021052810A1 (PCT/EP2020/075025)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- vehicles
- traffic barrier
- model
- barrier
- camera
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30236—Traffic on road, railway or crossing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
- G06T2207/30261—Obstacle
Abstract
According to a method for determining a model of a traffic barrier, a plurality of vehicles each having at least one camera and a processor for computer vision processing is provided. A respective image of a scene is captured by the respective at least one camera of each of the plurality of vehicles. The respective image is evaluated, and a respective preliminary model of the traffic barrier is generated by the respective processor of each of the vehicles. The respective preliminary model of the traffic barrier is transmitted from each of the vehicles to a server. The respective preliminary model received from each of the vehicles is evaluated, and the model of the traffic barrier is determined by an image processor of the server.
Description
Method for determining a model of a traffic barrier
Field of the invention
The disclosure relates to a method for determining a model of a traffic barrier, e.g. a Jersey Wall, which may be arranged on a road to separate various roadways of the road from each other.
Description of the related art
Detecting and modeling road furniture is a basic requirement for generating an exact road database that may be used for autonomous or robot-assisted driving. Road furniture includes, for example, traffic signs, poles, barriers, and traffic barriers, which may include Jersey Walls, etc. A traffic barrier like a Jersey Wall may be used in a construction site of a road for separating roadways of the road.
Road furniture may be captured and registered by special purpose vehicles using complex sensor systems, such as stereo-cameras, that capture images of a road and road furniture while driving. Modeling of road furniture means recovering and representing the 3D spatial information of the road furniture. To detect and model a traffic barrier, most of the known approaches adopt LIDAR or a stereo-camera, which can obtain 3D information directly.
US 20200184233 A1 discloses a method of detecting road barriers like Jersey walls. Here, vertical functions and horizontal functions are used which, when taken together, map to a plurality of image features. These functions are generated by a complex multi-camera system.
Theoretically, a simple optical sensor system, for example an optical system comprising a monocular camera, cannot reconstruct traffic barrier 3D information for several reasons. First, a monocular camera cannot obtain 3D information of a scene directly. The only known way to reconstruct 3D information with a monocular camera requires pixel correspondences between frames. Second, most of the traffic barrier is nearly textureless. As a result, it is nearly impossible to establish pixel correspondences between frames on the traffic barrier. Third, the disparity is too small to recover its spatial information. A monocular camera can recover spatial lines by multiview geometry, but only when the disparity is sufficient. However, there is almost no disparity on a traffic barrier edge in consecutive frames.
Summary of the invention
The problem to be solved by the invention is to provide a method for determining a model of a traffic barrier by using a cost-effective and simple sensor system, for example a monocular camera. Further, the method should provide an improved recognition accuracy.
Solutions of the problem are described in the independent claims. The dependent claims relate to further improvements of the invention.
In an embodiment, a method for determining a model of a traffic barrier by a plurality of vehicles, each having at least one camera and a processor for computer vision processing, is provided. A respective image of a scene is captured by the respective at least one camera of each of the plurality of vehicles. The respective image is evaluated, and a respective preliminary model of the traffic barrier is generated by the respective processor of each of the vehicles. The respective preliminary model of the traffic barrier is transmitted from each of the vehicles to a server. The respective preliminary model received from each of the vehicles is evaluated, and the model of the traffic barrier is determined by an image processor of the server. In a very simple embodiment, a single camera may be sufficient. Alternatively, a plurality of cameras may be provided.
To model an object, a series of successive images is usually taken by means of an optical sensor system, such as a stereo-camera system. Then, pixels belonging to the same position on the object are compared in the successive images to create a model of the object. The stereo-camera system must provide information about the object from different spatial positions. The problem with modeling the surface of a traffic barrier, e.g. a Jersey Wall, however, is that the surface usually has very little texture or is nearly textureless. This makes it difficult to identify, in successive images, the same pixels arranged at the same position on the surface of the traffic barrier to be modeled.
The method for determining a model of a traffic barrier allows images of a traffic barrier to be captured using simple cameras located at different locations. The individual cameras can, for example, be designed as simple monocular cameras, wherein a respective monocular camera is located in each of a plurality of vehicles. The plurality of vehicles is located at different positions in a scene so that each camera takes a picture of the traffic barrier from an individual position. A respective processor in each of the vehicles may evaluate the captured image information of the traffic barrier so that an individual preliminary/virtual model of the traffic barrier is generated in each vehicle.
The individual model information of the traffic barrier, i.e. the individual preliminary/virtual model of the traffic barrier, is sent by each of the vehicles to a server. The preliminary/virtual models of the traffic barrier received from the different vehicles are evaluated by the image processor of the server. In conclusion, according to an embodiment of the method for determining a model of a traffic barrier, the stereo vision thus takes place on the server, which can generate a precise model of the traffic barrier by evaluating and comparing the individual preliminary/virtual models generated by and received from each of the various vehicles in the scene.
In general, the embodiments are applicable to all kinds of road boundaries, curbs, railings, etc. A traffic barrier may include at least one of a Jersey wall, a Jersey barrier, a K-rail or a median barrier.
Description of Drawings
In the following, the embodiments will be described by way of example, without limitation of the general inventive concept, with reference to the drawings.
Figure 1 illustrates a system to perform a method for determining a model of a traffic barrier;
Figure 2 shows a flowchart illustrating method steps of a method for determining a model of a traffic barrier; and
Figure 3 illustrates a simplified example of the method for determining a model of a traffic barrier on a server by evaluating individual preliminary/vir tual models of the traffic barrier generated by individual vehicles.
In the following, the different steps of an embodiment of the method for determining a model of a traffic barrier are explained in general terms by means of Figures 1 and 2.
Figure 1 shows a system comprising a plurality of vehicles 100, 101 and 102, wherein each of the vehicles includes at least one camera 10 for capturing an image/frame of an environmental scene of the respective vehicle, a processor 20 for computer vision processing, and a storage device 30 to store a road database.
The various vehicles 100, 101 and 102 may be in communication with a server 200 including an image processor 201. In a very simple embodiment, a single camera may be sufficient.
An embodiment to model a traffic barrier in a road database system includes method steps V1, V2 and V3 performed by each of a plurality of vehicles and a method step S performed by the server, in the sequence V1, V2, V3, S. The method steps are illustrated in the flowchart of Figure 2.
According to an embodiment, in a method for determining a model of a traffic barrier, a plurality of vehicles 100, 101 and 102 is provided. Each of the vehicles includes a respective camera 10 and a respective processor 20 for computer vision processing. In step V1, performed by each of the plurality of vehicles, a respective image of an environmental scene of the respective vehicle is captured by the respective camera 10 of each of the plurality of vehicles 100, 101 and 102. The camera 10 may be embodied as a monocular camera installed in each of the vehicles.
The camera may be configured to capture a video of the environmental scene in which the respective vehicle is located in real time.
In step V2, performed by each of the plurality of vehicles, the respective image captured in step V1 is evaluated, and a respective preliminary/virtual model of the traffic barrier is generated by the respective processor 20 of each of the vehicles 100, 101 and 102. For this purpose, a road model may be provided in a respective storage device 30 of each of the vehicles 100, 101 and 102. According to an embodiment, a planar road surface with normal vector [0, 0, 1] may be assumed for the road model. Given the road model, an image pixel of the captured image may be projected to a 3D point on the road model.
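The projection of an image pixel onto the planar road model can be sketched as follows. This is a minimal illustration under the standard pinhole model, not taken from the patent: the intrinsic matrix `K`, the camera orientation `R_wc`, and all numeric values (focal length, principal point, 1.5 m mounting height) are assumed example values.

```python
import numpy as np

def pixel_to_road_point(u, v, K, R_wc, C, road_z=0.0):
    """Back-project pixel (u, v) onto the planar road model z = road_z.

    K    : 3x3 camera intrinsics
    R_wc : 3x3 rotation whose columns are the camera axes in world coordinates
    C    : camera center in world coordinates (C[2] is the mounting height)
    """
    # Viewing ray of the pixel, first in camera and then in world coordinates.
    ray = R_wc @ np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Intersect C + s * ray with the plane z = road_z (normal [0, 0, 1]).
    s = (road_z - C[2]) / ray[2]
    if s <= 0:
        raise ValueError("pixel does not map onto the road ahead")
    return C + s * ray

# Illustrative values (not from the patent): focal length 1000 px,
# principal point (640, 360), camera 1.5 m above the road, looking
# along the world x-axis (camera x -> -y, camera y -> -z, camera z -> x).
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R_wc = np.array([[0.0, 0.0, 1.0],
                 [-1.0, 0.0, 0.0],
                 [0.0, -1.0, 0.0]])
C = np.array([0.0, 0.0, 1.5])

# A pixel 200 rows below the principal point maps to a ground point
# 7.5 m ahead of the camera on the road plane.
point = pixel_to_road_point(640, 560, K, R_wc, C)
```

The same intersection works for any road normal; the plane z = 0 merely matches the [0, 0, 1] road model assumed above.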
According to an embodiment of the method, the respective captured image is evaluated by the respective processor 20 of each of the vehicles 100, 101 and 102 by computer vision processing to extract the pixels in the respective captured image representing an edge of the traffic barrier. For example, the extracted pixels in the respective captured image may represent an upper edge of the traffic barrier. In conventional methods for modeling the surface of a traffic barrier, computer vision processing may be used to detect the whole area of the traffic barrier. However, the boundary between wall and ground is in general not clear. According to an embodiment of the approach to model a traffic barrier, only the position of the traffic barrier edge, particularly the upper wall edge, is considered, by extracting the pixels representing the edge of the traffic barrier from each of the captured images.
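The patent does not specify a particular edge detector. A minimal gradient-based sketch of extracting a per-column upper-edge candidate might look as follows; the thresholding scheme and the synthetic test frame are illustrative assumptions, not the patent's method.

```python
import numpy as np

def upper_edge_rows(gray, grad_thresh=50.0):
    """For each image column, return the topmost row index at which the
    vertical intensity gradient exceeds grad_thresh, or -1 if no strong
    transition is found. The first strong sky-to-wall transition from the
    top is taken as the upper edge of the barrier."""
    # Vertical gradient between neighboring rows; shape (H-1, W).
    grad = np.abs(np.diff(gray.astype(float), axis=0))
    rows = np.full(gray.shape[1], -1, dtype=int)
    for col in range(gray.shape[1]):
        hits = np.nonzero(grad[:, col] > grad_thresh)[0]
        if hits.size:
            rows[col] = hits[0]
    return rows

# Synthetic frame: dark background above, bright wall from row 40 down.
# np.diff marks the transition at the row index just above it (39).
frame = np.zeros((100, 50))
frame[40:, :] = 200.0
edge = upper_edge_rows(frame)
```

In practice a learned detector or a robust edge filter would replace this toy gradient test, but the output shape is the same: one upper-edge pixel row per image column, ready for projection into the road model.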
The respective processor 20 of each of the vehicles 100, 101 and 102 generates the respective preliminary/virtual model of the traffic barrier by projecting the respective extracted pixels representing the edge of the traffic barrier into the road model to generate respective 3D points of the edge of the traffic barrier. According to an embodiment of the method, the respective processor 20 of each of the vehicles 100, 101 and 102 generates a respective spline curve of the edge of the traffic barrier. The respective spline curve represents the respective preliminary/virtual model of the traffic barrier generated in each of the vehicles. These preliminary/virtual models still do not include the real traffic barrier position.
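The curve-fitting step above can be sketched briefly. The patent calls for a spline; as a simple stand-in under stated assumptions, a least-squares polynomial is fitted to the projected 3D edge points here (the cubic degree and the sample edge shape are invented for the example).

```python
import numpy as np

def fit_edge_curve(points_3d, degree=3):
    """Fit a smooth curve y = f(x) through projected 3D edge points.

    points_3d : (N, 3) array of points on the road plane (z ~ 0),
                x running along the road, y across it.
    Returns a callable polynomial standing in for the spline model."""
    x, y = points_3d[:, 0], points_3d[:, 1]
    coeffs = np.polyfit(x, y, degree)  # least-squares fit
    return np.poly1d(coeffs)

# Projected edge points of a gently curving barrier: y = 0.01 * x^2.
x = np.linspace(0.0, 50.0, 25)
pts = np.column_stack([x, 0.01 * x**2, np.zeros_like(x)])
curve = fit_edge_curve(pts)
```

A compact curve like this, rather than the raw pixel set, is what each vehicle would transmit to the server, which is what keeps the communication load low.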
In a step V3 of the method for determining a model of a traffic barrier, the respective preliminary/virtual model of the traffic barrier is transmitted from each of the vehicles 100, 101 and 102 to the server 200. The respective image is captured in step V1 in an individual pose of the respective camera 10 of each of the vehicles 100, 101 and 102. In conclusion, the different preliminary/virtual models generated by the respective processors 20 of the different vehicles are generated from different viewpoints. Since the traffic barrier preliminary/virtual models are generated from different viewpoints, the camera position/pose is also important for determining the model of the traffic barrier by the server. According to an embodiment of the method, the pose of the respective camera 10 of each of the vehicles 100, 101 and 102 is transmitted together with the respective preliminary/virtual model of the traffic barrier from each of the vehicles to the server 200.
In step S, performed by the server, the respective preliminary/virtual model received from each of the vehicles 100, 101 and 102 is evaluated, and the model of the traffic barrier is determined by the image processor 201 of the server. After having collected the information, i.e. the reported individual traffic barrier preliminary/virtual models and the individual camera poses from each of the vehicles, the image processor 201 of the server 200 will recover the real spatial information of the traffic barrier. The model of the traffic barrier, determined by the image processor 201 of the server, may include at least information about the position and the height of the traffic barrier.
According to an embodiment of the method for determining a model of a traffic barrier, the image processor 201 of the server 200 evaluates at least two of the respective preliminary/virtual models of the traffic barrier received from at least two of the plurality of vehicles 101 and 102 to determine the model of the traffic barrier. Figure 3 illustrates how to recover a model of the traffic barrier, i.e. the real position of the traffic barrier, assuming that two reports of a respective individual preliminary/virtual model and a respective individual camera pose of the vehicles 101 and 102, passing in different lanes, have been received by the server 200.
Reference sign 104 represents the position of the traffic barrier preliminary/virtual model generated from the camera of the vehicle 101. A line 105 passes through the camera position of the vehicle 101 and the preliminary/virtual position 104 of the traffic barrier. The true position of the traffic barrier edge is on line 105. The reference sign 103 corresponds to the preliminary/virtual model generated by the camera of the vehicle 102. The true position of the traffic barrier is located on the line 106 connecting the position of the vehicle 102 and the preliminary/virtual model 103. The image processor 201 determines the intersection 107 as the true position of the traffic barrier. All the intersection points are fitted by a spline curve in order to model the traffic barrier.
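The construction of Figure 3 can be sketched on the ground plane. The function below is one geometrically consistent reading, not the patent's published implementation: each report contributes a line from the camera position through its preliminary/virtual barrier position, their intersection gives the true position, and similar triangles along the viewing ray recover the edge height. All coordinates and the 1.5 m camera mounting height are invented example values.

```python
def recover_barrier_point(c1, p1, c2, p2, cam_height):
    """Intersect the two ground-plane lines camera -> preliminary point
    (lines 105 and 106 in Figure 3) and derive the edge height.

    c1, c2     : (x, y) camera positions of the two vehicles
    p1, p2     : (x, y) preliminary/virtual barrier positions (the edge
                 projected onto the road plane, hence beyond the true wall)
    cam_height : mounting height of camera 1 above the road
    """
    d1 = (p1[0] - c1[0], p1[1] - c1[1])
    d2 = (p2[0] - c2[0], p2[1] - c2[1])
    # Solve c1 + t*d1 = c2 + u*d2 for t by Cramer's rule.
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-9:
        raise ValueError("viewing rays are (nearly) parallel")
    rx, ry = c2[0] - c1[0], c2[1] - c1[1]
    t = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    x, y = c1[0] + t * d1[0], c1[1] + t * d1[1]
    # The viewing ray drops from cam_height at the camera (t = 0) to the
    # road at the projected point (t = 1); at the true position (fraction
    # t along the ray) it is still at the height of the barrier edge.
    height = cam_height * (1.0 - t)
    return (x, y), height

# Two vehicles in different lanes observing the same 1 m high edge at (10, 2):
position, height = recover_barrier_point(
    c1=(0.0, 0.0), p1=(30.0, 6.0),   # vehicle 101 and its model 104
    c2=(0.0, 5.0), p2=(30.0, -4.0),  # vehicle 102 and its model 103
    cam_height=1.5)
```

The worked numbers also show why same-lane reports are unreliable: if c1 and c2 nearly coincide, the two lines become almost parallel, the determinant approaches zero, and the intersection has high variance, exactly as noted in the text.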
It has proven to be advantageous if the vehicles from which the individual preliminary/virtual models of the traffic barrier are generated are passing in different lanes. Conversely, if two reports of generated individual preliminary/virtual models are received from vehicles passing in the same lane, they have been generated from almost the same view. As a result, the intersection between the two lines has a high variance, so that the recovered model of the traffic barrier is in most cases unreliable.
The method for determining a model of a traffic barrier has several advantages. First, the reported information has a high efficiency. The method only needs the generated individual preliminary/virtual wall models and the respective camera pose of each of the vehicles. Moreover, the method saves communication bandwidth. Second, it is easy to cope with misdetection problems, where the traffic barrier failed to be detected in some frames. If detection has failed in some frames, it is easily possible to interpolate or fit the missing part when the preliminary/virtual model is generated.
The method to model a traffic barrier can be used for generating a road database used for autonomously driving cars, but can also be used in a plurality of other fields of machine vision and machine orientation, for example robot orientation outdoors or underwater.
Claims
1. A method for determining a model of a traffic barrier, comprising:
- providing a plurality of vehicles (100, 101, 102) each having at least one camera (10) and a processor (20) for computer vision processing,
- capturing a respective image of a scene by the respective at least one camera (10) of each of the plurality of vehicles (100, 101, 102),
- evaluating the respective image and generating a respective preliminary model of the traffic barrier by the respective processor (20) of each of the vehicles (100, 101, 102),
- transmitting the respective preliminary model of the traffic barrier from each of the vehicles (100, 101, 102) to a server (200),
- evaluating the respective preliminary model received from each of the vehicles (100, 101, 102) and determining the model of the traffic barrier by an image processor (201) of the server (200).
2. The method of claim 1, wherein the at least one camera (10) is a monocular camera.
3. The method of claim 1 or 2, wherein the at least one camera (10) is a single camera.
4. The method of any of the claims 1 to 3, comprising: providing a road model in a respective storage device (30) of each of the vehicles (100, 101, 102).
5. The method of any of the claims 1 to 4,
wherein the respective captured image is evaluated by the respective processor (20) of each of the vehicles (100, 101, 102) by computer vision processing to extract the pixels in the respective captured image representing an edge of the traffic barrier.
6. The method of claim 5, wherein the extracted pixels in the respective captured image represent an upper edge of the traffic barrier.
7. The method of claim 6, wherein the respective processor (20) of each of the vehicles (100, 101, 102) generates the respective preliminary model of the traffic barrier by projecting the respective extracted pixels representing the edge of the traffic barrier in the road model to generate respective 3D points of the edge of the traffic barrier.
8. The method of claim 7, wherein the respective processor (20) of each of the vehicles (100, 101, 102) generates a respective spline curve of the edge of the traffic barrier, the respective spline curve representing the respective preliminary model of the traffic barrier.
9. The method of any of the claims 1 to 8,
- wherein the respective image is captured in a pose of the respective at least one camera (10) of each of the vehicles (100, 101, 102),
- transmitting the pose of the respective at least one camera (10) of each of the vehicles (100, 101, 102) together with the respective preliminary model of the traffic barrier from each of the vehicles (100, 101, 102) to the server (200).
10. The method of any of the claims 1 to 9, wherein the image processor (201) of the server (200) evaluates at least two of the respective preliminary models of the traffic barrier received from at least two of the plurality of vehicles (100, 101, 102) to determine the model of the traffic barrier.
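As a minimal sketch of the server-side fusion of claim 10, assuming the preliminary models have already been registered so that their points correspond index-by-index, the consensus model can be a point-wise average; the registration step itself is not shown:

```python
def fuse_preliminary_models(models):
    """Server side: combine preliminary models from several vehicles into
    one model of the traffic barrier by point-wise averaging of matched
    3D edge points. Each model is a sequence of (x, y, z) tuples."""
    fused = []
    for matched in zip(*models):            # one tuple of matched points per position
        n = len(matched)
        fused.append(tuple(sum(c) / n for c in zip(*matched)))
    return fused
```

Averaging over observations from multiple vehicles suppresses per-vehicle noise in pose and detection, which is the usual motivation for crowdsourced map building.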
11. The method of any of the claims 1 to 10, wherein the model of the traffic barrier, determined by the image processor (201) of the server, includes at least information about the position and the height of the traffic barrier.
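The information of claim 11 can be read off a fused upper-edge model directly, assuming the road surface elevation is known; the helper below is illustrative only:

```python
def barrier_position_and_height(upper_edge_points, road_z=0.0):
    """Derive position and height from fused upper-edge points (x, y, z):
    position as the (x, y) polyline of the barrier, height as the mean
    elevation of the upper edge above the road surface."""
    position = [(x, y) for x, y, _ in upper_edge_points]
    height = sum(z for _, _, z in upper_edge_points) / len(upper_edge_points) - road_z
    return position, height
```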
12. The method of any of the claims 1 to 11, wherein the traffic barrier includes at least one of a Jersey wall, a Jersey barrier, a K-rail, or a median barrier.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP20768575.1A EP4052222A1 (en) | 2019-09-20 | 2020-09-08 | Method for determining a model of a traffic barrier |
CN202080080822.9A CN114730468A (en) | 2019-09-20 | 2020-09-08 | Method for determining a model of a traffic obstacle |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102019214397.0 | 2019-09-20 | ||
DE102019214397 | 2019-09-20 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021052810A1 true WO2021052810A1 (en) | 2021-03-25 |
Family
ID=72432908
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/EP2020/075025 WO2021052810A1 (en) | 2019-09-20 | 2020-09-08 | Method for determining a model of a traffic barrier |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP4052222A1 (en) |
CN (1) | CN114730468A (en) |
WO (1) | WO2021052810A1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2431917A1 (en) * | 2010-09-21 | 2012-03-21 | Mobileye Technologies Limited | Barrier and guardrail detection using a single camera |
US20180023960A1 (en) * | 2016-07-21 | 2018-01-25 | Mobileye Vision Technologies Ltd. | Distributing a crowdsourced sparse map for autonomous vehicle navigation |
WO2018069757A2 (en) * | 2016-10-11 | 2018-04-19 | Mobileye Vision Technologies Ltd. | Navigating a vehicle based on a detected barrier |
US20200184233A1 (en) | 2017-05-03 | 2020-06-11 | Mobileye Vision Technologies Ltd. | Detection and classification systems and methods for autonomous vehicle navigation |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100972041B1 (en) * | 2009-01-15 | 2010-07-22 | 한민홍 | Method for recognizing obstacle using cameras |
CN102508246B (en) * | 2011-10-13 | 2013-04-17 | 吉林大学 | Method for detecting and tracking obstacles in front of vehicle |
CN103176185B (en) * | 2011-12-26 | 2015-01-21 | 上海汽车集团股份有限公司 | Method and system for detecting road barrier |
US9972096B2 (en) * | 2016-06-14 | 2018-05-15 | International Business Machines Corporation | Detection of obstructions |
FR3067999B1 (en) * | 2017-06-23 | 2019-08-02 | Renault S.A.S. | METHOD FOR AIDING THE DRIVING OF A MOTOR VEHICLE |
2020
- 2020-09-08 CN CN202080080822.9A patent/CN114730468A/en active Pending
- 2020-09-08 WO PCT/EP2020/075025 patent/WO2021052810A1/en unknown
- 2020-09-08 EP EP20768575.1A patent/EP4052222A1/en active Pending
Non-Patent Citations (1)
Title |
---|
MASSOW K ET AL: "Deriving HD maps for highly automated driving from vehicular probe data", 2016 IEEE 19TH INTERNATIONAL CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS (ITSC), IEEE, 1 November 2016 (2016-11-01), pages 1745 - 1752, XP033028575, DOI: 10.1109/ITSC.2016.7795794 * |
Also Published As
Publication number | Publication date |
---|---|
EP4052222A1 (en) | 2022-09-07 |
CN114730468A (en) | 2022-07-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20768575 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
ENP | Entry into the national phase |
Ref document number: 2020768575 Country of ref document: EP Effective date: 20220420 |