
WO2024249384A1 - Aid to pavement marking detection in wet conditions - Google Patents


Info

Publication number
WO2024249384A1
WO2024249384A1 (PCT/US2024/031203)
Authority
WO
WIPO (PCT)
Prior art keywords
light
aid
image
wavelength
imager
Prior art date
Application number
PCT/US2024/031203
Other languages
French (fr)
Inventor
Shimon Maimon
Ronit SASON-MAIMON
Original Assignee
Netzvision Llc
Priority date
Filing date
Publication date
Application filed by Netzvision Llc filed Critical Netzvision Llc
Publication of WO2024249384A1 publication Critical patent/WO2024249384A1/en

Classifications

    • E: FIXED CONSTRUCTIONS
    • E01: CONSTRUCTION OF ROADS, RAILWAYS, OR BRIDGES
    • E01F: ADDITIONAL WORK, SUCH AS EQUIPPING ROADS OR THE CONSTRUCTION OF PLATFORMS, HELICOPTER LANDING STAGES, SIGNS, SNOW FENCES, OR THE LIKE
    • E01F9/00: Arrangement of road signs or traffic signals; Arrangements for enforcing caution
    • E01F9/30: Arrangements interacting with transmitters or receivers otherwise than by visible means, e.g. using radar reflectors or radio transmitters
    • E: FIXED CONSTRUCTIONS
    • E01: CONSTRUCTION OF ROADS, RAILWAYS, OR BRIDGES
    • E01C: CONSTRUCTION OF, OR SURFACES FOR, ROADS, SPORTS GROUNDS, OR THE LIKE; MACHINES OR AUXILIARY TOOLS FOR CONSTRUCTION OR REPAIR
    • E01C23/00: Auxiliary devices or arrangements for constructing, repairing, reconditioning, or taking-up road or like surfaces
    • E01C23/16: Devices for marking-out, applying, or forming traffic or like markings on finished paving; Protecting fresh markings
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/10: Image acquisition
    • G06V10/12: Details of acquisition arrangements; Constructional details thereof
    • G06V10/14: Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/143: Sensing or illuminating at different wavelengths
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road

Definitions

  • the present invention relates generally to detection and/or identification of pavement markings, and more particularly to a method and apparatus for detecting markings on pavement under wet conditions
  • the term “pavement” is used to denote roads, pavements, and generally any paved surface, and those terms may be used interchangeably.
  • the term “pavement marking” also relates to marking on certain non-paved surfaces such as trails and the like.
  • Pavement markings provide vital information for vehicles.
  • road lanes are outlined by elongated series of lines
  • arrows on the pavement indicate proper lane positioning according to the desired direction of travel
  • stop signs are often marked on the pavement, as are crosswalks, and the like.
  • ADAS Advanced Driver Assistance Systems
  • autonomous vehicles constitute a technology that promises to eliminate the need for a driver and allow safe and efficient operations of vehicles. While ADAS systems are mostly assisting drivers, to operate safely autonomous vehicles require the ability to recognize pavement markings under all conditions.
  • Glare is generally ambient light and light from other vehicles which is reflected from the water layer covering the pavement. Glare further hinders pavement marking identification by reducing the sensitivity of a camera operating in the visual range to the faint pavement markings. This is colloquially known as “blinding” of the camera.
  • roads covered by snow or by a mixture of snow and water layers also present similar difficulties in pavement marking detection and identification
  • this specification shall relate to a road covered wholly or partially by rain, snow and similar water-based covering as a “wet road” or equivalently as a “road under wet conditions”, and similar expressions.
  • the term “wet layer” should be construed to include layers of snow, ice, and water which are on the road.
  • vehicle and “car” shall be used interchangeably and should be construed as extending to all types of road vehicles, such as trucks, motorcycles, cars, and the like.
  • an aid for detecting pavement markings on a road under wet conditions, at least a portion of the road being illuminated by at least a first light having at least a first wavelength associated therewith and by a second light having at least a second wavelength associated therewith, wherein the first wavelength has a higher absorption coefficient in water than the absorption coefficient of the second wavelength.
  • the aid comprises a first imager having a field of view operationally directed at the road, the first imager is capable of producing a first image representing first light reflected from the road.
  • the aid further comprises a second imager having a field of view operationally at least partially congruent with the field of view of the first imager, the second imager is capable of producing a second image representing second light reflected from the road.
  • the aid further comprises an image differentiator capable of differentiating between light intensity sensed in pixels of the first image and light intensity sensed by corresponding pixels of the second image, and of generating a third image representing the result of the differentiation.
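The differentiation described above can be sketched as a per-pixel subtraction. This is a minimal illustration, assuming same-size 8-bit grayscale frames; the array values and function names are hypothetical, not from the patent:

```python
import numpy as np

def differentiate(first_image: np.ndarray, second_image: np.ndarray) -> np.ndarray:
    """Subtract first-light intensities from second-light intensities,
    clipping at zero, to form the third image."""
    diff = second_image.astype(np.int32) - first_image.astype(np.int32)
    return np.clip(diff, 0, 255).astype(np.uint8)

# Toy example: the water-covered road is dark in the first image (first
# light absorbed), while markings are faintly visible in the second image.
first = np.array([[10, 12], [11, 10]], dtype=np.uint8)    # near-null return
second = np.array([[40, 120], [42, 118]], dtype=np.uint8)  # markings in column 1
third = differentiate(first, second)
```

In the toy output the marking pixels (column 1) stand out against the suppressed background, which is the effect the third image is meant to capture.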
  • the second light includes at least one wavelength or wavelength range which differs from the wavelength range of the first light, and is defined by the inclusion of at least one such wavelength or wavelength range.
  • the second light wavelength range has a significantly lower water absorption coefficient than the absorption coefficient of the first light. In some embodiments the water absorption coefficient of the at least one wavelength range of the second light is lower than 50% of the water absorption coefficient of the first light.
  • the third image is displayed to the driver, such as by using a display, and/or by utilizing an augmented or virtual reality display.
  • the aid comprises an analyzer configured to analyze the third image and identify pavement markings thereon.
  • the third image is enhanced, either for display or for analysis purposes. Such enhancement may, by way of example, increase the image contrast or utilize edge detection, which may be utilized for graphical presentation or for use by other ADAS components.
  • Artificial Intelligence (AI) systems may be utilized to detect the pavement markings from the third image, or act as the differentiator or a portion thereof, to form and enhance the third image.
  • the identified pavement markings may be displayed to the driver
  • the information related to the identified pavement markings is utilized to alert the driver, and/or to assist in steering the vehicle
  • the information related to the identified pavement markings may be utilized for steering and/or driving the vehicle.
  • comparison of corresponding pixels between the first and second images may be done on individual pixels or on groups of adjacent pixels, and such groups of pixels are considered equivalent to individual pixels.
  • the first light is substantially fully absorbed by the water layer on the road.
  • the wavelength of the first light is selected to be absorbed by water at least twice as strongly as the wavelength of the second light. This will cause a layer of water as shallow as a few millimeters to absorb most of the first light energy directed at it, as compared with a much lower absorption of the second light.
  • the first light is selected to have an absorption coefficient greater than 10 cm⁻¹ in water.
  • the second light is selected to include at least one wavelength range having an absorption coefficient which is at most half of the absorption coefficient of the first light, or in the above embodiment, smaller than 5 cm⁻¹.
  • the second light may include other wavelength ranges, and in some cases may include wavelengths of the first light.
  • a broad spectrum light source may include both the first light (which in the example embodiment meets the requirement of an absorption coefficient greater than 10 cm⁻¹) and one or more wavelength ranges which meet the requirement of the second light (which, following the example embodiment, have an absorption coefficient equal to or smaller than 5 cm⁻¹).
  • Each of such wavelength ranges which meets the requirement of an absorption coefficient lower than 50% of the absorption coefficient of the first light may serve as the second light, the wavelength ranges being taken separately or in combination.
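The effect of these absorption thresholds can be illustrated with the Beer–Lambert law, where transmitted intensity falls off as exp(−α·d). The sketch below assumes a round trip through the water layer (doubling the path length); the depth and coefficients are illustrative:

```python
import math

def round_trip_transmission(alpha_per_cm: float, depth_cm: float) -> float:
    """Fraction of light surviving a round trip through a water layer of
    given depth, per the Beer-Lambert law I/I0 = exp(-alpha * path)."""
    return math.exp(-alpha_per_cm * 2.0 * depth_cm)

# A few-millimetre layer (0.3 cm) almost fully absorbs the first light
# (alpha = 10 cm^-1) while passing far more of the second (alpha = 5 cm^-1):
t_first = round_trip_transmission(10.0, 0.3)   # ~0.0025
t_second = round_trip_transmission(5.0, 0.3)   # ~0.05
```

Even with the second light's coefficient only halved, the exponential dependence makes its return roughly twenty times stronger, which is why a modest coefficient ratio suffices for the subtraction scheme.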
  • the imager of the second light may be capable of receiving and sensing any wavelength, including wavelengths that fall within the first light. Since the first light is absorbed by the water layer, the system is based on detecting a null return of the first light. Having a null return of the first light in the second imager would have no effect on the second image, as the second image includes the sensing of at least one wavelength which is absorbed in water significantly less than the first light.
  • any broad spectrum light, such as natural or simulated sunlight, may be used to provide the first and/or the second light; however, the imager of the first light would be made sensitive only to the light range or ranges which meet the first light requirement, and the imager of the second light may be made sensitive to any wavelength range or ranges, including wavelengths of the first light, as long as the second imager is sensitive to one or more wavelengths which meet the requirement of the second light.
  • the first light may have an absorption coefficient lower than 10 cm⁻¹, and in such embodiments the second light may be selected to have less than half of the absorption coefficient of the first light.
  • the first light may be selected to have an absorption coefficient of 8 cm⁻¹ or greater, and the second light may be selected to have an absorption coefficient of 4 cm⁻¹ or smaller, and the like.
  • the sensitivity selectivity of the first imager may be achieved by placing a wavelength sensitive filter in the optical path of the imager, by selection of detector material and/or construction, and the like. Since the second light may include the first light, no filter is necessary in the optical path of the second imager.
  • the aid comprises at least one light source disposed to illuminate at least a section of the road within the combined field of view of the first and second imagers, and the light received by the first imager and/or second imager is light arriving from the aid’s light source, or from a combination of the aid’s light source and ambient light.
  • a light source may be mounted on the vehicle, either with, or remotely to, the first and second imagers.
  • the light source resides on the upper portion of the vehicle, such as on top of the vehicle or on the upper third of the vehicle. Such placement of the light source increases detection range due to a better angle of incidence compared with regular headlights.
  • a light source mounted on the vehicle radiates in the Short Wave InfraRed (SWIR) range and thus avoids blinding drivers of oncoming vehicles.
  • SWIR Short Wave InfraRed
  • the first imager and the second imager are integrated. Such integration may be achieved by placing the first and second imager in a single enclosure. Even tighter integration may be achieved by the first imager and the second imager generating their respective images from a single Focal Plane Array (FPA) detector, wherein a first set of pixels of the FPA are exposed to the first light reflected from the road, and a second set of pixels of the FPA are exposed to the second light reflected from the road.
  • FPA Focal Plane Array
  • the first imager comprises the first set of pixels
  • the second imager comprises the second set of pixels.
  • the first and second imagers share most of the optical path between the road and the imagers. Multi-spectral imagers of differing design may also be utilized to embody the first and/or second imagers described above.
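The single-FPA embodiment above can be sketched as a slicing operation over one raw frame. The column-interleaved filter layout below is an assumption for illustration only; real filter mosaics vary:

```python
import numpy as np

def split_fpa(frame: np.ndarray):
    """Split a single FPA readout into the two sub-images, assuming
    alternating columns carry filters for the first and second light."""
    first_image = frame[:, 0::2]   # pixels filtered for the first light
    second_image = frame[:, 1::2]  # pixels filtered for the second light
    return first_image, second_image

frame = np.arange(16).reshape(4, 4)       # toy 4x4 raw readout
first, second = split_fpa(frame)          # two 4x2 sub-images
```

In such a layout the two sub-images inherently share the lens and most of the optical path, which is the tighter integration the bullet above describes.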
  • the image differentiator is analog in nature, and in such analog differentiator the image analyzer comprises an analog circuit operating to subtract an analog light intensity sensed by pixels in the first image from an analog light intensity sensed by corresponding pixels in the second image.
  • the image differentiator is digital in nature, where the light intensity sensed by pixels of the first and second imagers are converted to digital values, and the image differentiator comprises a digital controller configured to subtract digital data representing light intensity sensed by pixels of the first image from digital data representing light intensity sensed by corresponding pixels of the second image.
  • the image differentiator may comprise any combination of digital and analog circuitry.
  • the image differentiator comprises software and/or a software and hardware combination, where the software may, in whole or in part, be artificial intelligence type software.
  • the image differentiator and/or the analyzer are mounted remotely to the first and/or second imager.
  • the imagers may be housed in a camera unit, while the image differentiator and/or the analyzer may be embodied in a different portion of the vehicle.
  • the analyzer may be embodied within an artificial intelligence capacity of the vehicle.
  • the analyzer comprises a controller configured to utilize artificial intelligence software to recognize pavement markings in the third image
  • the analyzer comprises a controller configured to utilize image processing software to recognize pavement markings in the third image.
  • any combination of artificial intelligence, image processing, and/or image recognition software may be utilized.
  • the differentiator may comprise the analyzer and be embedded therein, such as when the first and second images are fed as a whole to the analyzer, which differentiates therebetween and provides the third image and/or an enhanced representation thereof.
  • the first light and/or the second light are polarized.
  • the aid further comprises a polarizing filter disposed between the road and the first imager, the second imager, or both imagers.
  • the first wavelength is selected between 1400 nm and 1500 nm or between 1900 nm and 2000 nm.
  • the second wavelength may include any light range including natural ambient light.
  • the second light may be selected between 900 nm and 1399 nm, or between 200 nm and 1399 nm, or between 1501 nm and 1700 nm.
  • the second light may be selected anywhere between the far UV and the far IR ranges, including all sub-spectra therebetween, as long as such wavelength range selection includes at least one wavelength with a water absorption coefficient smaller than 50% of the absorption coefficient of the first light.
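The 50% selection rule above amounts to a simple predicate over candidate wavelengths. A minimal sketch follows; the absorption table holds rough illustrative figures, not measured data, and the function name is hypothetical:

```python
# Rough illustrative water absorption coefficients (cm^-1) by wavelength (nm).
WATER_ABSORPTION_CM1 = {1000: 0.4, 1300: 1.2, 1445: 30.0, 1550: 10.0, 1650: 6.0}

def qualifies_as_second_light(candidate_nm, first_light_alpha):
    """A range qualifies as second light if at least one of its wavelengths
    has a water absorption coefficient below 50% of the first light's."""
    return any(WATER_ABSORPTION_CM1[w] < 0.5 * first_light_alpha
               for w in candidate_nm)

# With first light at 1445 nm (alpha ~30 cm^-1), a range that includes
# 1000 nm qualifies even if it also contains strongly absorbed wavelengths.
ok = qualifies_as_second_light([1000, 1445], 30.0)
```

Note the `any(...)`: per the definitions above, the second light may overlap the first light's band, as long as one qualifying wavelength is included.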
  • the first and second images are corresponding images in a plurality of images taken in temporal succession such as a video stream.
  • the first and second images are taken contemporaneously.
  • a plurality of first and second images may be utilized to form the third image
  • a plurality of third images may be displayed and/or analyzed, and/or acted upon, and a plurality of third images form a third video stream.
  • a sequence of the respective images is considered equivalent to individual images, even if the sequence is interrupted to an extent that would not hamper operationality of the invention.
  • a vehicle comprising a light source operationally directed to illuminate a portion of a road over which it travels, the light source emitting at least a first light having first wavelength range, and a second light having a second wavelength range; a first imager having a field of view operationally directed at the road, the first imager is capable of producing a first image representing first light reflected from the road; a second imager having a field of view operationally at least partially congruent with the field of view of the first imager, the second imager is capable of producing a second image representing second light reflected from the road; and an image differentiator capable of subtracting light intensity sensed by pixels of the first image from light intensity sensed by corresponding pixels of the second image, so as to generate a third image representing the result of the subtraction.
  • the vehicle further comprises a display configured to display the third image, an enhancement thereof, or a symbolic graphical representation thereof.
  • Such third image would be presented to the vehicle driver and significantly enhance the driver awareness of pavement markings.
  • the display may constitute only the third image; in certain embodiments the display may be embodied in an augmented reality display or in a virtual reality display. Such a display may present the enhanced road markings with other elements of the environment.
  • the display is embodied as a Heads-Up Display (HUD), which places the displayed information in the driver’s field of view while the driver views the road.
  • HUD Heads-Up Display
  • Processing of information added to the pavement markings may be carried out by a specialized controller or by a controller or controllers embedded in the vehicle and performing other tasks, and such controller(s) may be integrated with the imagers, or mounted remotely thereto.
  • the first imager and the second imager are integrated. Such integration may be made by sharing a single enclosure and optionally a single FPA.
  • the vehicle optionally comprises a controller or controllers executing an artificial intelligence algorithm, and the displayed image is generated by or enhanced by, the result of the artificial intelligence algorithm.
  • the image differentiator and/or the analyzer, if implemented, may be partially or wholly embodied as software, which is optionally artificial intelligence software.
  • the vehicle comprises an analyzer configured to analyze the third image and identify pavement markings thereon, and the vehicle further comprising a controller configured to utilize information related to the pavement markings identified by the analyzer for controlling motion of the vehicle, and/or for alerting a driver to deviation of the vehicle from desired behavior in accordance with the identified pavement markings.
  • alert may relate to the vehicle departing from a lane marked by the pavement marking.
  • a vehicle controller may utilize identified pavement markings information to steer the vehicle so as to stay within the lane.
  • the vehicle controller may slow and/or force a stop, if necessary, to comply with a pavement marking mandating a stop or speed reduction.
  • the vehicle controller may alert the driver of an upcoming stop marking such as 16A in Fig. 1, lane departure, a lane designated for a turn, and the like, and such alert may optionally be combined with the above mentioned steering, slowing, and/or stopping action.
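The controller behavior described in the bullets above reduces to a decision over identified markings. The sketch below is a hypothetical illustration; the marking labels, threshold, and action names are invented for the example, not from the patent:

```python
def react_to_marking(marking: str, lateral_offset_m: float) -> str:
    """Map an identified pavement marking and the vehicle's lateral offset
    from lane center to a controller action (illustrative labels)."""
    if marking == "stop_line":
        return "slow_and_stop"          # comply with a marking mandating a stop
    if marking == "lane_line" and abs(lateral_offset_m) > 1.5:
        return "alert_lane_departure"   # driver alert and/or steering correction
    return "none"

action = react_to_marking("lane_line", 2.0)
```

A real ADAS stack would combine such decisions with vehicle dynamics and confidence scores; the point here is only the mapping from identified markings to alerts or control actions.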
  • the analyzer and/or differentiator may be embodied at least in part, in the controller.
  • Fig. 1 depicts schematically a vehicle utilizing an aid to identifying pavement markings.
  • Fig. 1A depicts a schematic close-up of a water covered road, with light directed to, and reflected or scattered therefrom.
  • Fig. 2A depicts a schematic image of a road covered with a water layer, as sensed by an imager sensitive to a wavelength range centered at 1445 nm.
  • Fig. 2B depicts a schematic image of the same road taken simultaneously with the image of Fig. 2A, as sensed by an imager sensitive to a wavelength range centered at 1000 nm.
  • Fig. 2C represents schematically a third image resulting from subtraction of the images of Figs. 2A and 2B.
  • Fig. 2A’ is the source image of Fig. 2A, and Fig. 2B’ is the source image of Fig. 2B.
  • Fig. 3 depicts a simplified graph showing absorption of light in water as a function of wavelength.
  • Fig. 4 depicts schematically a simplified block diagram of a pavement marking detecting aid, showing several optional elements.
  • Fig. 5 depicts schematically a simplified block diagram of an optional embodiment of a pavement marking detecting aid utilizing a single lens.
  • Fig. 5 depicts schematically a simplified block diagram of an optional embodiment of a pavement marking detecting aid utilizing a single lens.
  • Fig. 1 depicts schematically a vehicle 10 utilizing a pavement marking detecting aid 100 mounted thereon, the vehicle 10 travelling on the road 15.
  • the first imager 20, second imager 25 and the optional light sources 35 are depicted.
  • the light source is depicted in the optional location on the top portion of the vehicle 10. While the light source may be disposed anywhere in the vehicle that would allow illuminating the road, disposing the light source as high as possible on the vehicle increases detection range. Therefore, in the depicted embodiment the light source is disposed at the uppermost portion of the vehicle that would allow the light source to illuminate at least a portion of the road ahead of the vehicle.
  • the dash-dotted lines emanating from the light source 35 symbolize the light cone projected from the light source, but the light cone may vary to fit desired design parameters.
  • the first 20 and second 25 imagers are depicted on two sides of the vehicle 10 for clarity of the drawing, but may be disposed in any manner which achieves at least partial overlap of the field of view of the first and second imagers, the field of view being symbolized by dotted lines emanating from the respective imagers, with arcs 80 and 85 symbolizing the respective field of view thereof.
  • a representation of a water layer is depicted by shape 7. It is seen that the road, and more specifically the road markings 16 and 16A, are faint as compared to road marks on areas of the road not covered by the water layer 7. It is noted that items in the environment which are not covered by water, such as vertical items by way of example, shed water and are therefore not affected by light reflection and absorption as much as areas covered by water.
  • Fig. 1A depicts schematically a cross section of a region of a road 15 with a road mark 16, a water layer 7 covering the road, and several light rays schematically depicting light paths.
  • a ray L1 of first light and a ray L2 of second light are depicted entering water layer 7.
  • a portion of the light, which in actuality is the majority of the light impinging on the water, is reflected away as R1 and R2 respectively.
  • a portion of the first light L1 enters the water layer 7 but is quickly absorbed by the water. Even if the first light reaches the road mark 16, its scattering still has to travel out of the water layer, and will be further absorbed thereby.
  • Figs. 2A and 2B are simplified schematic representations of the actual images presented in Figs. 2A’ and 2B’ respectively.
  • Fig. 2A depicts schematically an image captured by an imager sensitive to light at a wavelength range centered about 1445 nm.
  • Fig. 2B depicts schematically a contemporaneously captured image of the scene captured in Fig. 2A; however, the image of Fig. 2B was captured utilizing an imager sensitive to light at wavelengths centered around 1030 nm.
  • the pavement marking has been enhanced in Fig. 2B; however, in actual images road markings appear faint while under a water layer.
  • Fig. 2C depicts schematically a third image resulting from subtraction of the images of Figs. 2A and 2B. Fig. 2C is enhanced for clarity. Oftentimes, images taken by different imagers are normalized prior to the subtraction process, to compensate for spectral conditions, sensor sensitivity and output, and the like.
  • Fig. 3 depicts a simplified graph showing absorption of light in water as a function of wavelength.
  • This graph shows by way of example that the wavelength range between about 1400 nm and 1550 nm and the wavelength range between about 1850 nm and 2000 nm have an absorption coefficient greater than 10 cm⁻¹, which makes them highly efficient choices as bands for the first light.
  • the second light may include any wavelength having an absorption coefficient lower than 50% of the absorption coefficient selected for the first light in the present example, and thus any wavelength between 380 nm and 1390 nm or between 1560 nm and 1890 nm may be selected for the second light.
  • Fig. 4 depicts schematically a simplified block diagram of a pavement marking detecting aid 100, showing several optional elements. The first 20 and second 25 imagers are depicted.
  • the light source 35 is shown as a separate entity, however in other embodiments the light source may be integrated with the imagers in the same enclosure.
  • the light source in the drawing is depicted as being laterally displaced, however the light source may be disposed at any convenient location relative to the imagers as long as the light source illuminates the road area of interest, which is covered by the imagers.
  • the light source 35 emits polarized light. It is important to note that ambient light may be used as the light source, and the pavement marking detecting aid 100 does not necessitate its own light source. The ambient light may come from solar light, or the operational scene may be illuminated, such as by road lights.
  • the light may emanate from a plurality of light sources. If the light source is disposed in the upper portion of the vehicle the detection is improved, as the incidence angle between the light and the water layer increases, resulting in better light penetration and in higher intensity returns of the second light.
  • one or more polarizers 70 may be disposed between the imagers and the image scene.
  • the respective fields of view 80, 85 of the first and second imagers respectively are at least partially congruent as shown by the hatched triangle.
  • the field of view 80 of the first imager 20 is essentially fully congruent with the field of view 85 of the second imager 25.
  • the first 20 and second 25 imagers are depicted as being placed side by side, however this placement is merely one option of many.
  • the imagers may be embodied as different portions of a multi-spectral camera. Optionally, the imagers may share a single lens as depicted by lens 65 in Figs. 4 and 5; alternatively, each imager may be provided with its own lens and/or its own enclosure, such as when using two separate cameras, each camera responsive to one of the first and second light respectively.
  • the image reflected from the road may be split by an optical beam splitter 175 such as a mirror, a prism, a dichroic splitter, and the like.
  • the two imagers may be embodied on a single FPA utilizing filters to sensitize specific pixel sets to the first wavelength range or to the second wavelength range.
  • an optional polarizing filter 70 is placed between the image scene and the first imager 20 and the second imager 25.
  • a single polarizer or different polarizers may be utilized respectively.
  • a polarizing filter may be equivalently related to as a polarizer. While optional, such polarizers may reduce glare and reflection from unwanted light sources.
  • Polarizing filter 70 may optionally be split for the different imagers, and the polarizer for one of the imagers may optionally differ in polarization from the polarization for the other. While no polarizer appears in Fig. 5, polarizers may be disposed between optical beam splitter 175 and the respective imagers 20, 25.
  • a polarizer 70 may be disposed in the optical path of only one of the imagers 20, 25, or different polarizers may be disposed in the optical paths of each of the imagers.
  • one polarizer may provide polarization at 90 degrees to the other.
  • filters may be disposed in the respective optical path of each of the imagers, such that each imager receives a filter substantially transparent to its respective wavelength.
  • a polarizer may be disposed in front of light source 35, or if two light sources are used, a polarizer may be disposed in front of one light source or both.
  • the polarizers may or may not possess the same polarization characteristics.
  • the image differentiator 30 is integrated with the first and second imagers within enclosure 5, however such placement is optional and the respective images from the first and second imager may be transferred to another location, either in analog or digital form.
  • Wavelength related outputs of a single multi-spectral camera may be utilized to embody both the first and second imagers, if such outputs reflect separate sensing of the first and second light respectively.
  • the first and second imagers share at least a portion of the optical path between the road and the imagers. This is in contrast to the embodiment of Fig. 1, where the imagers are completely separate and partially share a field of view.
  • the image differentiator 30 is configured to operationally subtract the image of the first imager 20 from the image of the second imager 25. This subtraction may be achieved by analog circuitry, where the voltage outputted by individual pixels in the first imager 20 would be subtracted from the voltage outputted by corresponding pixels in the second imager 25.
  • the image differentiator 30 may also be embodied in digital form, where the output of pixels of the first and second imagers are converted to a digital representation which is then subtracted as digital values between corresponding parts of the first and second images.
  • compensation for respective differences due to the available spectrum and instrument differences may be easily accomplished during digital differentiation, as the compensation only requires adding and/or subtracting compensation values to the digitized pixel values of the image being compensated.
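The digital compensate-then-subtract step can be sketched as below. The gain/offset model and their values are an assumption for illustration; as the text notes, real compensation parameters would be determined empirically per setup:

```python
import numpy as np

def compensate_and_subtract(first: np.ndarray, second: np.ndarray,
                            gain: float = 1.2, offset: float = -5.0) -> np.ndarray:
    """Scale and offset the first image so the two imagers' responses match,
    then subtract it per pixel from the second image (clipped at zero)."""
    adjusted = first.astype(np.float64) * gain + offset
    return np.clip(second.astype(np.float64) - adjusted, 0.0, None)

# Toy example: after compensation, background pixels nearly cancel while
# marking pixels survive the subtraction.
first = np.array([[10.0, 10.0]])
second = np.array([[40.0, 8.0]])
out = compensate_and_subtract(first, second)
```

Modeling the compensation as an affine correction keeps it cheap (one multiply-add per pixel), consistent with the add/subtract compensation described above.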
  • an image fusion is required prior to subtraction, so as to correctly determine the offsets between corresponding pixels from the first and second imagers
  • a third image 60 is formed from the result of the differentiation, where pixels of the first image 50 are subtracted from corresponding pixels of the second image 55 to form the corresponding pixels in the third image 60.
  • An illustrative example of such a third image is depicted in Fig. 2C.
  • the first imager 20 produces images 50 representing the amplitude of the first light reflected from the image scene
  • the second imager 25 produces images 55 of the second light reflected from the image scene
  • the first light utilizes a wavelength selected such that it is substantially absorbed by the water layer on the road, and thus no first light is reflected back to the first imager, causing the corresponding pixels to generate low output levels.
  • the water on the road causes the road to appear as a ‘dark’ area in the first image 50.
  • the second light is absorbed less by the water layer, and thus some of the second light is reflected back from the image scene to the second imager, and the image 55 sensed by the second imager represents - at least to some extent - the road underneath the water layer.
  • since the water reflects most of the second light shone thereupon, such markings will be faint and hard to identify over a relatively dark background, which is caused by the dark color of the road surface not covered by pavement markings.
  • the pavement markings will be substantially invisible in the first image 50, yet at least faintly visible in the second image 55.
  • the compensation is performed to account for the respective characteristic response of the two imagers to the scene.
  • Such compensation takes into account, by way of example, light level within the utilized spectrum, absorption of the specific wavelength by the ambient environment, differing sensitivity of the imager to the respective light, and the like
  • the compensation parameters are commonly determined empirically once per given setup of the pavement marking detecting aid.
  • the pavement markings become more prominent in the third image 60, which represents the result of the subtraction. If desired, enhancement is carried out to further increase the contrast of the third image, or of the road markings detected therein.
  • the first light L1 is substantially completely absorbed by the water layer 7 over the road 15.
  • the second light L2 is scattered differently by the black road 15 and the road markings 16 thereon. A portion of the second light reaches the road markings through the water layer, and a portion of that light is scattered thereby.
  • Some of the scattered second light exits the water layer, and a portion of that light RET is sensed by the second imager 25.
  • the resulting third image 60 would have the pavement markings emphasized, as they would substantially stand alone against the road surface. This will make even the faintly sensed pavement markings easily identifiable. Under certain embodiments the intensity of all of the third image 60 pixels which lie on the road may be multiplied so as to increase the contrast, making the pavement markings more prominent.
  • the rest of the scene objects except for the road are sensed by the first 20 and second 25 imagers
  • the images of such objects may optionally be offset by the compensation specific to the imagers, as described above; under such embodiment, the representation of the other image objects in the first 50 and second 55 images are substantially equal and their subtraction will leave a null in the third image 60. Such embodiment will leave the third image substantially with only the road markings.
  • Fig. 4 further displays several optional components that may be present in the pavement marking detecting aid and/or in the vehicle, alone or in combination.
  • the third image 60 may be fed directly to a display 45.
  • the display may be a screen visible to a driver, a Heads-Up Display (HUD) which is displayed in the field of view of the driver, or in any other desired display device.
  • the amplitude of the displayed pixels of the third image are enhanced prior to being displayed, to ease recognition of the pavement marking.
  • Such enhancement may be performed by selecting a threshold value, where pixels with amplitude below the threshold are displayed in one color and pixels with amplitude above the threshold are displayed by a second color. A number of threshold values may be selected, and the displayed image may be displayed in more colors than two.
  • the third image 60 may be analyzed by an analyzer 40.
  • the analyzer may be integrated in the same enclosure as the imagers 20, 25 and/or the image differentiator 30, or be disposed remotely thereto
  • the analyzer comprises, at least in part, computing resources in the vehicle.
  • the analyzer utilizes Artificial Intelligence to analyze the third image and to identify pavement markings therein; however, common image processing techniques may also be utilized.
  • the analyzer 40 typically comprises memory 40A and one or more Central Processing Units (CPU) 40B. Images may be stored in the memory and the CPU executes software code to perform the functions described herein.
  • the road markings may be processed by the analyzer 40 and displayed on the display as a graphical representation. Such graphical representation may be derived by edge detection algorithms, by artificial intelligence, and the like.
  • the analyzer output may be utilized to guide an ADAS system.
  • ADAS system may provide an alarm 140 and/or actuate an actuator 135
  • the actuator may be used to provide additional alerting such as shaking the steering wheel, and/or to steer the vehicle in a desired direction
  • More than a single actuator may be activated by the analyzer and optionally a plurality of components may be intermediate to the activation of the actuator.
  • the analyzer may identify lane markings and a lane deviation of the vehicle.
  • the analyzer may activate the alarm 140, may activate a wheel shaker actuator, and/or steer the vehicle to stay within the lane.
  • the actuator may be activated by digital communications, relays, amplifiers, servos, duty cycle controllers, digital to analog (D/A) systems, and the like.
  • An analyzer 40 and an image differentiator 30 may be formed by one or more computers, having memory to store the images and perform processing thereupon.
  • the output of the analyzer 40 is utilized in an augmented reality system, or in a virtual reality system, such as will be beneficial in certain remotely operated vehicles.
  • When the pavement marking detecting aid 100 is operated in a dynamic environment, such as from a moving vehicle, there is an advantage to taking the images 50, 55 from the same point of view; however, such arrangement is not mandatory. Disposing the two imagers 20, 25 at a distance from one another is possible as long as their respective fields of view are at least partially congruent in the road area. Distance between the two imagers would require mapping of individual points from one image to the other to compensate for the parallax stemming from the differing points of view.
  • Image fusion is well known, and by casual reference, the reader is directed to the likes of “Image Fusion Algorithm at Pixel Level Based on Edge Detection”, Jiming Chen et al., Journal of Healthcare Engineering, Volume 2021, Article ID 5760660, Hindawi, London, August 10, 2021, and “Review of Pixel-Level Image Fusion”, Bo Young et al., Journal of Shanghai Jiaotong University (Science), February 2010.
  • a multi-spectral camera may provide both images from essentially the same point of view.
  • An example of imager which may provide images at the desired wavelength ranges from a single point of view may be embodied in an imager utilizing microlens array and common filters, as described by way of example in US Patent No. 7,433,042 to Cavanaugh et al.
  • Other multi-spectral imagers which use congruent field of view are known in the art, and such imagers may be utilized as well.
  • Fig. 5 schematically depicts one optional arrangement of the first and second imagers 20, 25.
  • the first 20 and second 25 imagers are disposed within a single enclosure 5 to share the optical path from the imager lens 65 to the road.
  • Light reflected from the road 15 passes the lens 65 and is directed at an optical beam splitter 175.
  • the optical beam splitter is constructed to pass the first light to the first imager 20 and reflect the second light to the second imager 25
  • since the field of view 80A of both imagers is substantially fully congruent, parallax compensation is not required.
  • the light source 35 is disposed behind and above the imagers, however, as described above the light source may be disposed at any convenient location.
  • any component of the pavement marking detecting aid 100 may be integrated with another component, such as by being placed in a single enclosure or in adjacent enclosures, or mounted remotely to other components. Therefore, by way of example, in the figures the first and second imagers are shown mounted remotely to the light source 35, but may be integrated in a single enclosure therewith.
  • when the pavement marking detecting aid 100 is in motion, as would be the case if it is mounted to a moving vehicle, there is a distinct advantage to acquiring both the first 50 and second 55 images simultaneously, as doing so will obviate the need for motion compensation that would be needed to compensate for a time lapse between the images, if taken at different times from the moving vehicle.
  • the first 50 and second images may thus be taken from corresponding frames of two video streams. Stated differently, consecutive images 50 captured by the first imager 20 form the frames of a first video stream, consecutive images 55 captured by the second imager 25 form the frames of the second video stream, and the subtracted corresponding frame images 60 form a third video stream. It is however not mandatory that each and every frame captured by the imagers will be incorporated in the video stream. By way of example, a plurality of consecutively captured images of either imager may be integrated by summation and/or averaging, and only the integrated image or images may be utilized for the subtraction.
  • the use of each consecutive image, or of integrated images, in the respective video stream may be decided at the time of system construction, or determined dynamically during use.
  • the number of captured images between consecutive video frames is arbitrary and the image integration is not mandatory.
  • imagers are devices which capture light reflected from a scene defined by the imager field of view, and generate an electronic representation of the intensity of light received from regions in the scene by a Focal Plane Array (FPA) within the imager.
  • the scene regions are mapped to discrete elements of the FPA known as pixels.
  • an imager has a characteristic frequency response which is centered around a center wavelength, with a continuum of wavelengths on both sides of the center wavelength. Therefore, when a wavelength of an imager is specified, it should be construed that the imager frequency response includes a reasonably prominent response at the specified wavelength; however, the imager may respond to other wavelengths, and the specified wavelength does not have to be identical to the center wavelength.
  • Such frequency response may be affected, by way of example, by a bandpass filter disposed in front of the imager, by selection of the FPA, and the like.
  • the first imager is constructed to detect only the first light.
  • One option for such construction is, by way of example, adding a bandpass filter in its optical path.
  • since the second light may include a broad spectrum which may include, inter alia, the first light, a filter is not necessary in the light path of the second imager.
  • Imagers may generate discrete images or a continuous stream of consecutive images, which are termed a “video” as described above
  • a video does not necessarily comprise each captured image, as a video may comprise any selected images which are captured one after the other regardless of the number of captured images therebetween.
  • consecutively captured images may be averaged to form a single image in the video stream.
  • the full potential of the pixel is calibrated in accordance with the capabilities of the FPA or its sensitivity to specific wavelengths, as well as to the relative amplitude of the specific wavelength within the spectrum.
  • a single FPA may be utilized to sense two separate wavelengths of equal intensity, but the first wavelength of the two generates a 90% output response while the second may generate only 70% output response, and thus the sensed images must be scaled to show similar results.
  • An aspect of the invention comprises a vehicle incorporating any of the embodiments disclosed herein
  • the third image analysis and/or the subtraction of the first and second images may be performed by vehicle-borne computers, or by dedicated hardware.
  • the vehicle may be an autonomous vehicle.
  • Some vehicles utilize computer systems adapted to utilize artificial intelligence algorithms, and such computer system, or a portion thereof, may be utilized as the controller 40
  • the term “dark”, as related to a pixel or a group of pixels of the imager, should be construed to correspond to a pixel which senses light amplitude that causes the pixel to output a level lower than ⁇ 5% or 10% or 20% of the full potential output of the pixel. Such dark pixel or group of pixels may be considered as sensing a null signal.
  • definition of dark output level may be dynamically adjusted to compensate for various conditions, and thus the exact output level corresponding to a ‘dark’ pixel or region should be construed as relative to current condition, where the output level is low in comparison to other regions of the image.
  • the term “absorption coefficient” relates to the absorption coefficient, in water, of at least the center wavelength of the respective light.
  • varying phases of water such as snow, ice and the like may respond differently to certain light conditions, and accordingly detection parameters may differ between various water phases.
  • first and second imagers are capable of sensing the light reflected from the first and second lights, respectively, however light in the specific wavelengths is absorbed at different levels by water on the road, and thus in some cases the specific wavelength is absorbed rather than reflected.
  • first and second wavelength or wavelength ranges associated with the first and second lights define the light characteristics which enable pavement marking detection
  • first light, first wavelength, and first wavelength range should be construed as equivalent and interchangeable
  • second light, second wavelength and second wavelength range should be construed as equivalent and interchangeable.
  • unless light, wavelength, and/or wavelength range are specifically differentiated or clearly understood from the text in view of the rest of the specification, such as when defining a light to have a wavelength or wavelength range, and the like.
  • Adjectives such as “about” and “substantially” that modify a condition or relationship characteristic of a feature or features of an embodiment of the present technology, are to be understood to mean that the condition or characteristic is defined to within tolerances that are acceptable for operation of the embodiment for an application for which it is intended, or within variations expected from the measurement being performed and/or from the measuring instrument being used, or sufficiently close to the location, disposition, or configuration of the relevant element to preserve operability of the element within the invention, provided this does not materially modify the invention.
  • an imager may sense low levels of wavelengths outside its intended range; however, those skilled in the art would recognize that such sensing falls within the tolerance of the technology selected for the implementation, and that such an imager falls within the scope of the invention. Similarly, unless specifically specified or clear from its context, numerical values should be construed to include certain tolerances that the skilled in the art would recognize as having negligible importance, as they do not materially change the operability of the invention. When the term “about” precedes a numerical value, it is intended to indicate +/-15%, or +/-10%, or even only +/-5% or +/-1%, and in some instances the precise value.
  • each of the verbs, “comprise” “include” and “have”, and conjugates thereof, are used to indicate that the object or objects of the verb are not necessarily a complete listing of features, members, components, elements, steps or parts of the subject or subjects of the verb
  • Positional or motional terms such as “upper”, “lower”, “right”, “left”, “bottom”, “below”, “lowered”, “low”, “top”, “above”, “elevated”, “high”, “vertical”, “horizontal”, “front”, “back”, “backward”, “forward”, “upstream” and “downstream”, as well as grammatical variations thereof, may be used herein for exemplary purposes only, to illustrate the relative positioning, placement or displacement of certain components, to indicate a first and a second component in present illustrations, or to do both. Such terms do not necessarily indicate that, for example, a “bottom” component is below a “top” component, as such directions, components or both may be flipped, rotated, moved in space, placed in a diagonal orientation or position, placed horizontally or vertically, or similarly modified.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Architecture (AREA)
  • Civil Engineering (AREA)
  • Structural Engineering (AREA)
  • Image Processing (AREA)

Abstract

Detecting and/or discriminating pavement markings covered by a water layer is facilitated by subtracting a first image taken at a wavelength which is highly absorbed by water from a second image taken at a wavelength which is absorbed by water to a lesser extent than the first wavelength. A third image, resulting from the subtraction of the first and second images, is displayed and/or utilized to detect and/or identify the pavement markings. Placing a light source in the upper portion of a vehicle may increase detection range.

Description

Aid to Pavement Marking Detection in Wet Conditions
Field of the invention
[0001] The present invention relates generally to detection and/or identification of pavement markings, and more particularly to a method and apparatus for detecting markings on pavement under wet conditions.
Background of the Invention
[0002] When used in this specification, the term “pavement” is used to denote roads, pavements, and generally any paved surface, and those terms may be used interchangeably. The term “pavement marking” also relates to marking on certain non-paved surfaces such as trails and the like.
[0003] Pavement markings provide vital information for vehicles. By way of example, road lanes are outlined by elongated series of lines, arrows on the pavement indicate proper lane positioning according to the desired direction of travel, and stop signs are often marked on the pavement, as are crosswalks, and the like.
[0004] Advanced Driver Assistance Systems (ADAS) are increasingly common in modern vehicles and enhance safe operation of the vehicle by reducing driver errors and facilitating improvement in the driver capabilities. Furthermore, autonomous vehicles constitute a technology that promises to eliminate the need for a driver and allow safe and efficient operations of vehicles. While ADAS systems are mostly assisting drivers, to operate safely autonomous vehicles require the ability to recognize pavement markings under all conditions.
[0005] Systems that aid in keeping the vehicle within a lane, and aid generally in pavement marking identification, are well known; however, those systems generally utilize data obtained by cameras operating in the visual range (about 380-750 nm). The effectiveness of such cameras, as well as of human drivers, is significantly reduced in wet and/or snowy road conditions, and those difficulties are far greater during nighttime.
[0006] Two major problems which contribute to the reduced visibility of pavement marking during wet night time conditions are reflection of light from the vehicle headlights by the water film on the road, and glare from other vehicles and/or other ambient lighting such as road lights, and the like.
[0007] Water on a road will reflect as much as 99% of a common vehicle's headlights. Most of the light will reflect away from the vehicle and some would reflect back toward the vehicle; however, as the light is reflected by the water covering the underlying road, only a small percentage will transmit through the water and will scatter from the road. As a result, when covered by water, pavement markings appear faint at the vehicle.
[0008] Glare is generally ambient light and light from other vehicles which is reflected from the water layer covering the pavement. Glare further hinders pavement marking identification by reducing the sensitivity of a camera operating in the visual range to the faint pavement markings. This is colloquially known as ‘blinding’ of the camera.
[0009] Notably, roads covered by snow or by a mixture of snow and water layers also present similar difficulties in pavement marking detection and identification. For brevity this specification shall relate to a road covered wholly or partially by rain, snow and similar water-based covering as a “wet road” or equivalently as a “road under wet conditions”, and similar expressions. The term “wet layer” should be construed to include layers of snow, ice, and water which are on the road. Furthermore, for brevity the terms “vehicle” and “car” shall be used interchangeably and should be construed as extending to all types of road vehicles, such as trucks, motorcycles, cars, and the like.
[0010] It is seen therefore, that there is a long-felt, yet heretofore unsolved need, for a method and an apparatus that aids in reliable detection and/or identification of pavement markings under wet conditions.
Summary of the invention
[0011] It is a goal of the present invention to aid the detection and/or identification of pavement markings under wet road conditions, either for facilitating safe driving by a human operating a vehicle, for facilitating automatic actions taken by the vehicle, and/or for enhancing autonomous vehicle operations. Assistance to a human operated vehicle may be obtained by displaying detected pavement markings to the vehicle driver on a display, an augmented-reality display, or a virtual-reality display.
[0012] To that end there is provided an aid for detecting pavement markings on a road under wet conditions, at least a portion of the road being illuminated by at least a first light having at least a first wavelength associated therewith and by a second light having at least a second wavelength associated therewith, the first wavelength having a higher absorption coefficient in water than the absorption coefficient of the second wavelength. The aid comprises a first imager having a field of view operationally directed at the road, the first imager being capable of producing a first image representing first light reflected from the road. The aid further comprises a second imager having a field of view operationally at least partially congruent with the field of view of the first imager, the second imager being capable of producing a second image representing second light reflected from the road. The aid further comprises an image differentiator capable of differentiating between light intensity sensed in pixels of the first image and light intensity sensed by corresponding pixels of the second image, and of generating a third image representing the result of the differentiation.
[0013] The second light includes at least one wavelength or wavelength range which differs from the wavelength range of the first light, and is defined by the inclusion of at least one such wavelength or wavelength range. The second light wavelength range has a significantly lower water absorption coefficient than the absorption coefficient of the first light. In some embodiments the water absorption coefficient of the at least one wavelength range of the second light is lower than 50% of the water absorption coefficient of the first light.
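By way of illustration only, the pixel-wise differentiation of [0012]-[0013] may be sketched in Python as follows. The function name and the `gain`/`offset` compensation parameters are hypothetical; they stand in for the empirically determined compensation values the specification describes, and images are modeled as lists of rows of intensity values.

```python
def differentiate(first_image, second_image, gain=1.0, offset=0.0):
    """Form the third image by subtracting the first image (taken at the
    highly water-absorbed wavelength) from the second image, pixel by
    pixel. `gain` and `offset` compensate the first image for differing
    imager sensitivity and ambient spectrum; negative results are
    clamped to zero."""
    third = []
    for row1, row2 in zip(first_image, second_image):
        third.append([max(0.0, p2 - (gain * p1 + offset))
                      for p1, p2 in zip(row1, row2)])
    return third

# Wet-road example: the marking is dark in the first image (first light
# absorbed by the water) but faintly visible in the second image, so the
# subtraction makes the marking stand out in the third image.
first = [[0.05, 0.05, 0.05]]
second = [[0.10, 0.30, 0.10]]
third = differentiate(first, second)
print(third)
```

The same subtraction applies unchanged whether the pixel values originate from an analog circuit followed by digitization or directly from a digital imager output.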
[0014] In some embodiments the third image is displayed to the driver, such as by using a display, and/or by utilizing an augmented or virtual reality display. Optionally, the aid comprises an analyzer configured to analyze the third image and identify pavement markings thereon. In some embodiments the third image is enhanced, either for display or for analysis purposes. Such enhancement may, by way of example, increase the image contrast or utilize edge detection, which may be utilized for graphical presentation or for use by other ADAS components.
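The threshold-based display enhancement described earlier in the specification can be sketched as follows; the two-threshold, three-color choice is purely illustrative, as any number of thresholds and colors may be selected.

```python
def enhance_for_display(image, thresholds, colors):
    """Map each pixel amplitude to a display color by threshold bands.
    `thresholds` is an ascending list of N cut-offs and `colors` a list
    of N+1 color names (both illustrative choices). Pixels below the
    first threshold get colors[0], and so on upward."""
    def classify(p):
        for i, t in enumerate(thresholds):
            if p < t:
                return colors[i]
        return colors[-1]
    return [[classify(p) for p in row] for row in image]

# Two thresholds yield three colors, easing recognition of markings.
image = [[0.02, 0.25, 0.80]]
print(enhance_for_display(image, [0.1, 0.5], ["black", "gray", "white"]))
# → [['black', 'gray', 'white']]
```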
Optionally, Artificial Intelligence (AI) systems may be utilized to detect the pavement markings from the third image, or act as the differentiator or a portion thereof, to form and enhance the third image. The identified pavement markings may be displayed to the driver. Optionally, in vehicles having ADAS systems, the information related to the identified pavement markings is utilized to alert the driver, and/or to assist in steering the vehicle. In autonomous vehicles, the information related to the identified pavement markings may be utilized for steering and/or driving the vehicle.
[0015] It is noted that the comparison of corresponding pixels between the first and second images may be done on individual pixels or on groups of adjacent pixels, and such groups of pixels are considered equivalent to individual pixels.
[0016] Optionally, the first light is substantially fully absorbed by the water layer on the road. Stated differently, the wavelength of the first light is selected to be absorbed by water at least twice as strongly as the wavelength of the second light. This will cause a layer of water as shallow as a few millimeters to absorb most of the first light energy directed at it, as compared with a much lower absorption of the second light.
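The effect described in [0016] can be quantified with the standard Beer-Lambert attenuation model (not named in the specification, but the usual way to relate absorption coefficient, layer depth, and transmitted light); the coefficients and the 3 mm depth below are illustrative values only.

```python
import math

def transmitted_fraction(alpha_cm, depth_mm):
    """Beer-Lambert: fraction of light remaining after one pass through
    `depth_mm` of water with absorption coefficient `alpha_cm` (cm^-1)."""
    return math.exp(-alpha_cm * depth_mm / 10.0)

# Light reflected from the road crosses the water layer twice, so the
# one-way fraction is squared.
two_way_first = transmitted_fraction(10.0, 3.0) ** 2   # first light
two_way_second = transmitted_fraction(5.0, 3.0) ** 2   # second light
print(two_way_first, two_way_second)
```

With these illustrative numbers, well under 1% of the first light survives the round trip while roughly 5% of the second light does, which is why the road appears ‘dark’ in the first image yet remains faintly visible in the second.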
[0017] In certain embodiments the first light is selected to have an absorption coefficient greater than 10 cm⁻¹ in water, while the second light is selected to include at least one wavelength range having an absorption coefficient which is at most half of the absorption coefficient of the first light, or in the above embodiments, smaller than 5 cm⁻¹. The second light may include other wavelength ranges, and in some cases may include wavelengths of the first light. Thus, a broad spectrum light source may include both the first light (which in the example embodiment meets the requirement of an absorption coefficient greater than 10 cm⁻¹) and one or more wavelength ranges which meet the requirement of the second light (which, following the example embodiment, have an absorption coefficient equal to or smaller than 5 cm⁻¹). Each of such wavelength ranges which meets the requirement of an absorption coefficient lower than 50% of the absorption coefficient of the first light may serve as the second light, the wavelength ranges being taken separately or in combination.
[0018] It is important to note that while the imager of the first light is sensitive only to the first light, the imager of the second light may be capable of receiving and sensing any wavelength, including wavelengths that fall within the first light. Since the first light is absorbed by the water layer, the system is based on detecting a null return of the first light. Having a null return of the first light in the second imager would have no effect on the second image, as the second image includes the sensing of at least one wavelength which is absorbed in water significantly less than the first light. Any broad spectrum light, such as natural or simulated sunlight, may be used to provide the first and/or the second light; however, the imager of the first light would be made sensitive only to the light range or ranges which meet the first light requirement, and the imager of the second light may be made sensitive to any wavelength range or ranges, including wavelengths of the first light, as long as the second imager is sensitive to one or more wavelengths which meet the requirement of the second light.
[0019] In certain embodiments the first light may have an absorption coefficient lower than 10 cm⁻¹, and in such embodiments the second light may be selected to have an absorption coefficient smaller than half of the absorption coefficient of the first light. By way of example, the first light may be selected to have an absorption coefficient of 8 cm⁻¹ or greater, and the second light may be selected to have an absorption coefficient of 4 cm⁻¹ or smaller, and the like.
[0020] The sensitivity selectivity of the first imager may be achieved by placing a wavelength sensitive filter in the optical path of the imager, by selection of detector material and/or construction, and the like. Since the second light may include the first light, no filter is necessary in the optical path of the second imager.
[0021] While in some embodiments the light source is ambient light such as solar light or light provided by roadside lighting, in other embodiments the aid comprises at least one light source disposed to illuminate at least a section of the road within the combined field of view of the first and second imagers, and the light received by the first imager and/or second imager is light arriving from the aid’s light source, or from a combination of the aid’s light source and ambient light. If the aid is mounted on a vehicle, a light source may be mounted on the vehicle, either with, or remotely to, the first and second imagers. Optionally, the light source resides on the upper portion of the vehicle, such as on top of the vehicle or on the upper third of the vehicle. Such placement of the light source increases detection range due to a better angle of incidence compared with regular headlights. Optionally, a light source mounted on the vehicle radiates in the Short Wave InfraRed (SWIR) range and thus avoids blinding drivers of oncoming vehicles.
[0022] In certain embodiments the first imager and the second imager are integrated. Such integration may be achieved by placing the first and second imagers in a single enclosure. Even tighter integration may be achieved by the first imager and the second imager generating their respective images from a single Focal Plane Array (FPA) detector, wherein a first set of pixels of the FPA are exposed to the first light reflected from the road, and a second set of pixels of the FPA are exposed to the second light reflected from the road. In an embodiment utilizing a single FPA, the first imager comprises the first set of pixels, and the second imager comprises the second set of pixels. In a single FPA embodiment the first and second imagers share most of the optical path between the road and the imagers. Multi-spectral imagers of differing design may also be utilized to embody the first and/or second imagers described above.
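A single-FPA arrangement as in [0022] can be illustrated with a hypothetical checkerboard filter pattern, where pixels at even (row + column) positions sit behind a first-light filter and the remaining pixels sense the second light; the pattern is an assumption for illustration, not one mandated by the specification.

```python
def split_fpa(frame):
    """Split a single FPA frame into the first and second images using a
    hypothetical checkerboard filter pattern: even (row + column) pixels
    belong to the first set (first light), odd ones to the second set.
    Returns two half-width images as lists of rows."""
    first, second = [], []
    for r, row in enumerate(frame):
        first.append([p for c, p in enumerate(row) if (r + c) % 2 == 0])
        second.append([p for c, p in enumerate(row) if (r + c) % 2 == 1])
    return first, second

frame = [[1, 2, 3, 4],
         [5, 6, 7, 8]]
print(split_fpa(frame))
# → ([[1, 3], [6, 8]], [[2, 4], [5, 7]])
```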
[0023] In certain embodiments the image differentiator is analog in nature, and in such an analog differentiator the image differentiator comprises an analog circuit operating to subtract an analog light intensity sensed by pixels in the first image from an analog light intensity sensed by corresponding pixels in the second image. In other embodiments the image differentiator is digital in nature, where the light intensity sensed by pixels of the first and second imagers is converted to digital values, and the image differentiator comprises a digital controller configured to subtract digital data representing light intensity sensed by pixels of the first image from digital data representing light intensity sensed by corresponding pixels of the second image. Optionally the image differentiator may comprise any combination of digital and analog circuitry. In certain embodiments the image differentiator comprises software and/or a software and hardware combination, where the software may, in whole or in part, be an artificial intelligence type software.
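A digital differentiator as described in [0023] first quantizes the analog pixel voltages; a minimal sketch follows, with the 3.3 V reference and 12-bit resolution being illustrative ADC parameters rather than values taken from the specification.

```python
def adc(voltage, vref=3.3, bits=12):
    """Quantize an analog pixel voltage to an n-bit digital code
    (illustrative reference voltage and bit depth), clamped to the
    valid code range."""
    code = int(voltage / vref * (2 ** bits - 1))
    return max(0, min(2 ** bits - 1, code))

# Digital differentiation: digitize both pixels, then subtract codes.
p_first, p_second = 0.20, 1.10   # volts sensed by corresponding pixels
diff = adc(p_second) - adc(p_first)
print(diff)
```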
[0024] Optionally, the image differentiator and/or the analyzer are mounted remotely to the first and/or second imager. By way of example, the imagers may be housed in a camera unit, while the image differentiator and/or the analyzer may be embodied as a different portion of the vehicle. In certain embodiments the analyzer may be embodied within an artificial intelligence capacity of the vehicle.
[0025] In some embodiments the analyzer comprises a controller configured to utilize artificial intelligence software to recognize pavement markings in the third image. In other embodiments the analyzer comprises a controller configured to utilize image processing software to recognize pavement markings in the third image. However, any combination of artificial intelligence and image processing and/or image recognition may be utilized. The differentiator may comprise the analyzer and be embedded therein, such as when the first and second images are fed as a whole to the analyzer, which differentiates therebetween and provides the third image and/or an enhanced representation thereof.
[0026] Optionally the first light and/or the second light are polarized. Further optionally, the aid comprises a polarizing filter disposed between the road and the first imager, the second imager, or both imagers.
[0027] Optionally, the first wavelength is selected between 1400 nm and 1500 nm or between 1900 nm and 2000 nm. The second wavelength may include any light range, including natural ambient light. By way of example, the second light may be selected between 900 nm and 1399 nm, or between 200 nm and 1399 nm, or between 1501 nm and 1700 nm. Generally, the second light may be selected anywhere between the far UV and the far IR ranges, including all sub-spectra therebetween, as long as such wavelength range selection includes at least one wavelength with a water absorption coefficient smaller than 50% of the absorption coefficient of the first light.
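The 50% absorption criterion of paragraph [0027] can be expressed as a simple filter over a table of water absorption coefficients. The coefficient values below are rough illustrative assumptions, not measured data:

```python
# Illustrative water absorption coefficients (cm^-1) at a few
# wavelengths (nm); order-of-magnitude placeholders only.
water_absorption = {1000: 0.4, 1200: 1.0, 1445: 30.0, 1550: 10.0, 1650: 6.0}

def second_light_candidates(table, first_wl, ratio=0.5):
    """Return wavelengths whose water absorption is below `ratio`
    of the absorption at the chosen first wavelength."""
    limit = table[first_wl] * ratio
    return sorted(wl for wl, a in table.items() if a < limit)

candidates = second_light_candidates(water_absorption, 1445)
```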
[0028] It is noted that, especially when the aid is embodied in a vehicle, the first and second images are corresponding images in a plurality of images taken in temporal succession, such as a video stream. Optionally the first and second images are taken contemporaneously. While the subtraction of the first image from the second image alone may form the third image, in certain embodiments a plurality of first and second images may be utilized to form the third image. Similar to the first and second images, a plurality of third images may be displayed and/or analyzed, and/or acted upon, and a plurality of third images forms a third video stream. A sequence of the respective images is considered equivalent to individual images, even if the sequence is interrupted to an extent that would not hamper operability of the invention.
[0029] In an aspect of the invention there is provided a vehicle comprising a light source operationally directed to illuminate a portion of a road over which it travels, the light source emitting at least a first light having a first wavelength range, and a second light having a second wavelength range; a first imager having a field of view operationally directed at the road, the first imager being capable of producing a first image representing first light reflected from the road; a second imager having a field of view operationally at least partially congruent with the field of view of the first imager, the second imager being capable of producing a second image representing second light reflected from the road; and an image differentiator capable of subtracting light intensity sensed by pixels of the first image from light intensity sensed by corresponding pixels of the second image, so as to generate a third image representing the result of the subtraction.
[0030] Optionally the vehicle further comprises a display configured to display the third image, an enhancement thereof, or a symbolic graphical representation thereof. Such third image would be presented to the vehicle driver and significantly enhance the driver's awareness of pavement markings. While in some embodiments the display may present only the third image, in certain embodiments the display may be embodied in an augmented reality display or in an artificial reality display. Such display may present the enhanced road markings with other elements of the environment. Optionally, the display is embodied as a Heads-Up Display (HUD), which places the displayed information in the driver's field of view while the driver views the road.
[0031] Processing of information added to the pavement markings (or of pavement markings added to other information) may be carried out by a specialized controller or by one or more controllers embedded in the vehicle and performing other tasks, and such controller(s) may be integrated with the imagers, or mounted remotely thereto. Optionally the first imager and the second imager are integrated. Such integration may be achieved by sharing a single enclosure and optionally a single FPA.
[0032] While displaying the third image to the vehicle driver is useful, the vehicle optionally comprises a controller or controllers executing an artificial intelligence algorithm, and the displayed image is generated by, or enhanced by, the result of the artificial intelligence algorithm. The image differentiator and/or the analyzer, if implemented, may be partially or wholly embodied in software, which is optionally artificial intelligence software.
[0033] Optionally, the vehicle comprises an analyzer configured to analyze the third image and identify pavement markings thereon, and the vehicle further comprises a controller configured to utilize information related to the pavement markings identified by the analyzer for controlling motion of the vehicle, and/or for alerting a driver to deviation of the vehicle from desired behavior in accordance with the identified pavement markings. By way of example such alert may relate to the vehicle departing from a lane marked by the pavement marking. In certain vehicles a vehicle controller may utilize identified pavement marking information to steer the vehicle so as to stay within the lane. Optionally, the vehicle controller may slow and/or force a stop, if necessary, to comply with a pavement marking mandating a stop or speed reduction. In certain embodiments, the vehicle controller may alert the driver of an upcoming stop marking such as 16A in Fig. 1, a lane departure, a lane designated for a turn, and the like, and such alert may optionally be combined with the above mentioned steering, slowing, and/or stopping action. Optionally the analyzer and/or differentiator may be embodied, at least in part, in the controller.
[0034] Other features described in relation to the aid aspect described above may be utilized in combination with the vehicle.
Short description of drawings
[0035] The summary above, and the following detailed description will be better understood in view of the enclosed drawings which depict details of preferred embodiments. It should however be noted that the invention is not limited to the precise arrangement shown in the drawings and that the drawings are provided merely as examples.
[0036] Fig. 1 depicts schematically a vehicle utilizing an aid to identifying pavement markings. Fig. 1A depicts a schematic close-up of a water-covered road, with light directed to, and reflected or scattered therefrom.
[0037] Fig. 2A depicts a schematic image of a road covered with a water layer, as sensed by an imager sensitive to a wavelength range centered at 1445 nm. Fig. 2B depicts a schematic image of the same road taken simultaneously with the image of Fig. 2A, as sensed by an imager sensitive to a wavelength range centered at 1000 nm. Fig. 2C represents schematically a third image resulting from subtraction of the images of Figs. 2A and 2B. Fig. 2A' is the source image of Fig. 2A and Fig. 2B' is the source image of Fig. 2B. [0038] Fig. 3 depicts a simplified graph showing absorption of light in water as a function of wavelength.
[0039] Fig. 4 depicts schematically a simplified block diagram of a pavement marking detecting aid, showing several optional elements.
[0040] Fig. 5 depicts schematically a simplified block diagram of an optional embodiment of a pavement marking detecting aid utilizing a single lens.
Detailed Description
[0041] Certain embodiments will now be described utilizing the enclosed figures, by way of non-limiting example.
[0042] Fig. 1 depicts schematically a vehicle 10 travelling on a road 15 and utilizing a pavement marking detecting aid 100 mounted thereon. The first imager 20, second imager 25, and the optional light source 35 are depicted. Notably the light source is depicted in an optional location on the top portion of the vehicle 10. While the light source may be disposed anywhere on the vehicle that would allow illuminating the road, disposing the light source as high as possible on the vehicle increases detection range. Therefore, in the depicted embodiment the light source is disposed at the uppermost portion of the vehicle that would allow the light source to illuminate at least a portion of the road ahead of the vehicle. The dash-dotted lines emanating from the light source 35 symbolize the light cone projected from the light source, but the light cone may vary to fit desired design parameters. The first 20 and second 25 imagers are depicted on two sides of the vehicle 10 for clarity of the drawing, but may be disposed in any manner which achieves at least partial overlap of the fields of view of the first and second imagers, the fields of view being symbolized by dotted lines emanating from the respective imagers, with arcs 80 and 85 symbolizing the respective fields of view thereof.
[0043] A representation of a water layer is depicted by shape 7. It is seen that the road, and more specifically the road markings 16 and 16A, are faint as compared to road marks on areas of the road not covered by the water layer 7. It is noted that items in the environment which are not covered by water, such as vertical items by way of example, shed water and are therefore not affected by light reflection and absorption as much as areas covered by water.
[0044] Fig. 1A depicts schematically a cross section of a region of a road 15 with a road mark 16, a water layer 7 covering the road, and several light rays schematically depicting light paths. A ray L1 of first light and a ray L2 of second light are depicted entering water layer 7. As may be seen, a portion of the light, which in actuality is the majority of the light impinging on the water, is reflected away as R1 and R2 respectively. A portion of the first light L1 enters the water layer 7 but is quickly absorbed by the water. Even if the first light reaches the road mark 16, light scattered therefrom still has to travel out of the water layer, and will be further absorbed thereby. In contrast, due to lower absorption by the water layer, a portion of the second light L2 does reach the road mark and is scattered therefrom. A portion of the second light scatters after it hits the road mark 16 and leaves the water. Some of the scattered light is wasted, as indicated schematically by ray S2; however, a portion of the second light, depicted as broken line RET, is returned, and may be sensed by the second imager 25.
[0045] Figs. 2A and 2B are simplified schematic representations of the actual images presented in Figs. 2A' and 2B' respectively. Fig. 2A depicts schematically an image captured by an imager sensitive to light at a wavelength range centered about 1445 nm, and Fig. 2B depicts schematically a contemporaneously captured image of the scene captured in Fig. 2A; however, the image of Fig. 2B was captured utilizing an imager sensitive to light at wavelengths centered around 1030 nm. To improve visibility, the pavement marking has been enhanced in Fig. 2B; however, in actual images road markings appear faint while under a water layer.
[0046] Both scenes depict a vehicle parked near a road marking line. It is clearly seen that in Fig. 2A the road 15 appears featureless, while in Fig. 2B the road appears more pronounced and the pavement markings may be seen. Items which are not covered by water, such as the car, and the like, may be easily correlated between the images taken by the first and second imagers, as they are visible in both images, despite potentially differing color. The main reason for this is that water usually accumulates on the road and not on objects like cars, houses, trees, signs, etc. Fig. 2C depicts schematically a third image resulting from subtraction of the images of Figs. 2A and 2B. Fig. 2C is enhanced for clarity. Oftentimes, images taken by different imagers are normalized prior to the subtraction process, to compensate for spectral conditions, sensor sensitivity and output, and the like.
[0047] Fig. 3 depicts a simplified graph showing absorption of light in water as a function of wavelength. This graph shows by way of example that the wavelength range between about 1400 nm and 1550 nm and the wavelength range between about 1850 nm and 2000 nm have an absorption coefficient greater than 10 cm-1, which makes them highly efficient choices as the band for the first light. The second light may include any wavelength having an absorption coefficient lower than 50% of the absorption coefficient selected for the first light in the present example, and thus any wavelength between 380 nm and 1390 nm or between 1560 nm and 1890 nm may be selected for the second light.
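The benefit of choosing a first-light band with a high absorption coefficient can be illustrated with the Beer-Lambert law for a round trip through the water layer. The coefficients used here (30 cm-1 for the first light, 0.4 cm-1 for the second) are assumed illustrative figures in the spirit of the graph, not measured values:

```python
import math

def round_trip_transmittance(alpha_cm, depth_cm):
    """Beer-Lambert transmittance for light passing down through a
    water layer and back up (path length = 2 * depth)."""
    return math.exp(-alpha_cm * 2.0 * depth_cm)

# A 5 mm water layer: the first light is effectively extinguished,
# while most of the second light survives the round trip.
depth = 0.5  # cm
t_first = round_trip_transmittance(30.0, depth)
t_second = round_trip_transmittance(0.4, depth)
```

This is why the road appears featureless in the first image but remains at least faintly visible in the second.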
[0048] Fig. 4 depicts schematically a simplified block diagram of a pavement marking detecting aid 100, showing several optional elements. The first 20 and second 25 imagers are depicted.
[0049] In the embodiment depicted in Fig. 4, the light source 35 is shown as a separate entity; however, in other embodiments the light source may be integrated with the imagers in the same enclosure. For clarity the light source in the drawing is depicted as being laterally displaced, however the light source may be disposed at any convenient location relative to the imagers as long as the light source illuminates the road area of interest, which is covered by the imagers. Optionally the light source 35 emits polarized light. It is important to note that ambient light may be used as the light source, and the pavement marking detecting aid 100 does not necessitate its own light source. The ambient light may come from solar light, or the operational scene may be illuminated, such as by road lights. Furthermore, the light may emanate from a plurality of light sources. If the light source is disposed in the upper portion of the vehicle, detection is improved as the incidence angle between the light and the water layer increases, resulting in better light penetration and in higher intensity returns of the second light. Optionally, one or more polarizers 70 may be disposed between the imagers and the image scene.
[0050] It is noted that the respective fields of view 80, 85 of the first and second imagers respectively are at least partially congruent as shown by the hatched triangle. In embodiments where the optical path of the two imagers is shared the field of view 80 of the first imager 20 is essentially fully congruent with the field of view 85 of the second imager 25.
[0051] In Fig. 4, the first 20 and second 25 imagers are depicted as being placed side by side; however, this placement is merely one option of many. Optionally the imagers may be embodied as different portions of a multi-spectral camera. While, optionally, the imagers may share a single lens, as depicted by lens 65 in Figs. 4 and 5, each imager may be provided with its own lens and/or its own enclosure, such as when using two separate cameras, each camera responsive to one of the first and second light respectively. Where a single lens is utilized, the image reflected from the road may be split by an optical beam splitter 175 such as a mirror, a prism, a dichroic splitter, and the like. The two imagers may be embodied on a single FPA utilizing filters to sensitize specific pixel sets to the first wavelength range or to the second wavelength range.
[0052] In Fig. 4, an optional polarizing filter 70 is placed between the image scene and the first imager 20 and the second imager 25. A single polarizer or different polarizers may be utilized respectively. A polarizing filter may be equivalently referred to as a polarizer. While optional, such polarizers may reduce glare and reflection from unwanted light sources. Polarizing filter 70 may optionally be split for the different imagers, and the polarizer for one of the imagers may optionally differ in polarization from that for the other. While no polarizer appears in Fig. 5, polarizers may be disposed between optical beam splitter 175 and the respective imagers 20, 25.
[0053] In embodiments which utilize polarizers, a polarizer 70 may be disposed in the optical path of only one of the imagers 20, 25, or different polarizers may be disposed in the optical paths of each of the imagers. By way of example, one polarizer may provide polarization at 90 degrees to the other. Furthermore, filters may be disposed in the respective optical path of each of the imagers, such that each imager receives a filter substantially transparent to its respective wavelength. Optionally a polarizer may be disposed in front of light source 35, or if two light sources are used, a polarizer may be disposed in front of one light source or both. If separate polarizers are placed in front of the respective light sources, the polarizers may or may not possess the same polarization characteristics. [0054] In Fig. 5, the image differentiator 30 is integrated with the first and second imagers within enclosure 5; however, such placement is optional and the respective images from the first and second imagers may be transferred to another location, either in analog or digital form.
[0055] As shown in Fig. 5, the image reflected from the road may be split by an optical beam splitter 175 such as a mirror, a prism, a dichroic splitter, and the like. Wavelength related outputs of a single multi-spectral camera may be utilized to embody both the first and second imagers, if such outputs reflect separate sensing of the first and second light respectively. In both the split image embodiment shown in Fig. 5, and the multi-spectral camera embodiment shown in Fig. 4, the first and second imagers share at least a portion of the optical path between the road and the imagers. This is in contrast to the embodiment of Fig. 1, where the imagers are completely separate and partially share a field of view.
[0056] The image differentiator 30 is configured to operationally subtract the image of the first imager 20 from the image of the second imager 25. This subtraction may be achieved by analog circuitry, where the voltage outputted by individual pixels in the first imager 20 would be subtracted from the voltage outputted by corresponding pixels in the second imager 25. The image differentiator 30 may also be embodied in digital form, where the outputs of pixels of the first and second imagers are converted to a digital representation, which is then subtracted as digital values between corresponding parts of the first and second images. Optionally, compensation for respective differences due to the available spectrum and instrument differences may be easily accomplished during digital differentiation, as the compensation requires only adding and/or subtracting compensation values to the digitized pixel values of the image being compensated. Commonly, but not necessarily, an image fusion is required prior to subtraction, so as to correctly determine the offsets between corresponding pixels from the first and second imagers.
[0057] Regardless of the method of differentiation, and the presence or absence of compensation, a third image 60 is formed from the result of the differentiation, where the pixels of the first image 50 are subtracted from corresponding pixels of the second image 55 to form the corresponding pixels in the third image 60. An illustrative example of such a third image is depicted in Fig. 2C.
[0058] The first imager 20 produces images 50 representing the amplitude of the first light reflected from the image scene, and the second imager 25 produces images 55 of the second light reflected from the image scene. The first light utilizes a wavelength selected such that it is substantially absorbed by the water layer on the road, and thus no first light is reflected back to the first imager, causing the corresponding pixels to generate low output levels. Stated differently, the water on the road causes the road to appear as a 'dark' area in the first image 50. In contrast, the second light is absorbed less by the water layer, and thus some of the second light is reflected back from the image scene to the second imager, and the image 55 sensed by the second imager represents, at least to some extent, the road underneath the water layer. However, since the water reflects most of the second light shone thereupon, such markings will be faint and hard to identify over a relatively dark background, which is caused by the dark color of the road surface not covered by pavement markings.
[0059] It is seen therefore that the pavement markings will be substantially invisible in the first image 50, yet at least faintly visible in the second image 55. After compensation is carried out, items which are not covered by water should differ only slightly between the first and second images. The compensation is performed to compensate for the respective characteristic responses of the two imagers to the scene. Such compensation takes into account, by way of example, the light level within the utilized spectrum, absorption of the specific wavelength by the ambient environment, differing sensitivity of the imagers to the respective light, and the like. The compensation parameters are commonly determined empirically once for a given setup of the pavement marking detecting aid.
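One way such empirical compensation parameters might be determined is a linear gain-and-offset fit between the two imagers' responses over a region known to be water-free, such as a vertical object. The helper name and the synthetic calibration data below are hypothetical illustrations, not the disclosed calibration procedure:

```python
import numpy as np

def fit_compensation(first_patch, second_patch):
    """Least-squares gain/offset mapping the first imager's response
    onto the second's, estimated from a patch known to be dry.
    Returns (gain, offset) so that gain * first + offset ~= second."""
    gain, offset = np.polyfit(first_patch.ravel(), second_patch.ravel(), 1)
    return gain, offset

# Synthetic calibration: pretend the second imager responds with
# 0.8x gain and a +5 offset relative to the first.
rng = np.random.default_rng(0)
dry_patch = rng.uniform(20, 200, size=(8, 8))
gain, offset = fit_compensation(dry_patch, 0.8 * dry_patch + 5.0)
```

With such parameters applied before subtraction, water-free objects largely cancel in the third image, leaving the markings prominent.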
[0060] By subtracting corresponding pixels or pixel groups of the first and second images, the pavement markings become more prominent in the third image 60, which represents the result of the subtraction. If desired, enhancement is carried out to further increase the contrast of the third image, or of the road markings detected therein.
[0061] For simplicity of explanation, under assumed ideal conditions, the first light L1 is substantially completely absorbed by the water layer 7 over the road 15. Usually the second light L2 is scattered differently by the black road 15 and the road markings 16 thereon. A portion of the second light reaches the road markings through the water layer, and a portion of that light is scattered thereby. Some of the scattered second light exits the water layer, and a portion of that light RET is sensed by the second imager 25. When a first image 50 sensed by the first imager 20 and a contemporaneously captured second image 55 captured by the second imager 25 are subtracted by the image differentiator 30, the resulting third image 60 would have the pavement markings emphasized, as they would substantially stand alone against the road surface. This will make even the faintly sensed pavement markings easily identifiable. Under certain embodiments the intensity of all of the third image 60 pixels which lie on the road may be multiplied so as to increase the contrast, making the pavement markings more prominent.
[0062] Under the conditions described above, the rest of the scene objects, except for the road, are sensed by both the first 20 and second 25 imagers. The images of such objects may optionally be offset by the compensation specific to the imagers, as described above; in such an embodiment, the representations of the other image objects in the first 50 and second 55 images are substantially equal, and their subtraction will leave a null in the third image 60. Such an embodiment will leave the third image substantially with only the road markings.
[0063] Fig. 4 further displays several optional components that may be present in the pavement marking detecting aid and/or in the vehicle, alone or in combination. [0064] The third image 60 may be fed directly to a display 45. The display may be a screen visible to a driver, a Heads-Up Display (HUD) which is displayed in the field of view of the driver, or any other desired display device. Optionally the amplitudes of the displayed pixels of the third image are enhanced prior to being displayed, to ease recognition of the pavement marking. Such enhancement may be performed by selecting a threshold value, where pixels with amplitude below the threshold are displayed in one color and pixels with amplitude above the threshold are displayed in a second color. A number of threshold values may be selected, and the displayed image may be displayed in more than two colors.
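The multi-threshold display enhancement described in paragraph [0064] can be sketched as a quantization of pixel amplitudes into display levels, each level then rendered in its own color. The threshold values chosen here are arbitrary illustrations:

```python
import numpy as np

def enhance_for_display(third_img, thresholds):
    """Quantize third-image amplitudes into len(thresholds) + 1
    display levels; a display would map each level to a color."""
    return np.digitize(third_img, thresholds)

# Two thresholds -> three display colors (e.g. dark / mid / bright).
levels = enhance_for_display(np.array([5, 40, 90, 200]), [30, 128])
```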
[0065] The third image 60 may be analyzed by an analyzer 40. The analyzer may be integrated in the same enclosure as the imagers 20, 25 and/or the image differentiator 30, or be disposed remotely thereto. In certain embodiments the analyzer comprises, at least in part, computing resources in the vehicle. In certain embodiments the analyzer utilizes Artificial Intelligence to analyze the third image and to identify pavement markings therein; however, common image processing techniques may also be utilized. The analyzer 40 typically comprises memory 40A and one or more Central Processing Units (CPU) 40B. Images may be stored in the memory, and the CPU executes software code to perform the functions described herein. Optionally, the road markings may be processed by the analyzer 40 and displayed on the display as a graphical representation. Such graphical representation may be derived by edge detection algorithms, by artificial intelligence, and the like.
[0066] Alternatively, or additionally, to displaying the analyzer 40 output on the display 45, the analyzer output may be utilized to guide an ADAS system. Such ADAS system may provide an alarm 140 and/or actuate an actuator 135. The actuator may be used to provide additional alerting, such as shaking the steering wheel, and/or to steer the vehicle in a desired direction. More than a single actuator may be activated by the analyzer, and optionally a plurality of components may be intermediate to the activation of the actuator. By way of example, the analyzer may identify lane markings and a lane deviation of the vehicle. In response, the analyzer may activate the alarm 140, may activate a wheel shaker actuator, and/or steer the vehicle to stay within the lane. The actuator may be activated by digital communications, relays, amplifiers, servos, duty cycle controllers, digital to analog (D/A) systems, and the like.
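The alarm/actuator chain might be driven by a simple decision ladder over the analyzer's lane-offset estimate. The function name, thresholds, and action labels below are hypothetical illustrations, not part of the disclosure:

```python
def react_to_lane_deviation(offset_m, warn_at=0.3, steer_at=0.6):
    """Toy ADAS decision ladder: a small lane offset (meters)
    triggers an alarm; a larger one also requests corrective
    steering. Threshold values are arbitrary."""
    actions = []
    if abs(offset_m) >= warn_at:
        actions.append("alarm")
    if abs(offset_m) >= steer_at:
        actions.append("steer_back")
    return actions
```

A real system would dispatch these actions through the alarm 140 and actuator 135 via relays, servos, or digital communications as described above.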
[0067] An analyzer 40 and an image differentiator 30 may be formed by one or more computers, having memory to store the images and perform processing thereupon.
[0068] Optionally the output of the analyzer 40 is utilized in an augmented reality system, or in a virtual reality system, such as will be beneficial in certain remotely operated vehicles.
[0069] When the pavement marking detecting aid 100 is operated in a dynamic environment, such as from a moving vehicle, there is an advantage to taking the images 50, 55 from the same point of view; however, such an arrangement is not mandatory. Disposing the two imagers 20, 25 at a distance from one another is possible as long as their respective fields of view are at least partially congruent in the road area. Distance between the two imagers would require mapping of individual points from one image to the other to compensate for the parallax stemming from the differing points of view. Image fusion is well known, and by way of reference, the reader is directed to the likes of "Image Fusion Algorithm at Pixel Level Based on Edge Detection", Jiming Chen et al., research article in the Journal of Healthcare Engineering, Volume 2021, article ID 5760660, Hindawi, London, August 10, 2021, and "Review of Pixel-Level Image Fusion", Bo Yang et al., Journal of Shanghai Jiaotong University (Science), February 2010.
[0070] Taking the first 50 and second 55 images from a single point of view obviates the need for parallax compensation. Several solutions may be utilized to obtain both the first 50 and second 55 images from the same point. By way of example, a multi-spectral camera may provide both images from essentially the same point of view. An example of an imager which may provide images at the desired wavelength ranges from a single point of view may be embodied in an imager utilizing a microlens array and common filters, as described by way of example in US Patent No. 7,433,042 to Cavanaugh et al. Other multi-spectral imagers which use a congruent field of view are known in the art, and such imagers may be utilized as well.
[0071] Fig. 5 depicts schematically an embodiment with the first and second imagers 20, 25 in one optional arrangement. In the depicted example, the first 20 and second 25 imagers are disposed within a single enclosure 5 to share the optical path from the imager lens 65 to the road. Light reflected from the road 15 passes the lens 65 and is directed at an optical beam splitter 175. The optical beam splitter is constructed to pass the first light to the first imager 20 and reflect the second light to the second imager 25. As the field of view 80A of both imagers is substantially fully congruent, parallax compensation is not required. It is noted that in Fig. 5 the light source 35 is disposed behind and above the imagers; however, as described above, the light source may be disposed at any convenient location.
[0072] Any component of the pavement marking detecting aid 100 may be integrated with another component, such as by being placed in a single enclosure or in adjacent enclosures, or mounted remotely to other components. Therefore, by way of example, in the figures the first and second imagers are shown mounted remotely to the light source 35, but may be integrated in a single enclosure therewith.
[0073] Furthermore, if the pavement marking detecting aid 100 is in motion, as would be the case if it is mounted to a moving vehicle, there is a distinct advantage to acquiring both the first 50 and second 55 images simultaneously, as doing so will obviate the need for motion compensation that would otherwise be required to account for the time lapse between images taken at different times from the moving vehicle.
[0074] When the pavement marking detecting aid 100 is operated in a dynamic environment, it is typical to utilize video imaging. The first 50 and second 55 images may thus be taken from corresponding frames of two video streams. Stated differently, consecutive images 50 captured by the first imager 20 form the frames of a first video stream, consecutive images 55 captured by the second imager 25 form the frames of a second video stream, and the subtracted corresponding frame images 60 form a third video stream. It is however not mandatory that each and every frame captured by the imagers be incorporated in the video stream. By way of example, a plurality of consecutively captured images of either imager may be integrated by summation and/or averaging, and only the integrated image or images may be utilized for the subtraction. Such an arrangement may be utilized for reducing the effect of noise, or for enhancing road images of poor quality. Using each consecutive image or integrated images in the respective video stream may be decided at the time of system construction, or determined dynamically during use. The number of captured images between consecutive video frames is arbitrary and the image integration is not mandatory.
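The frame-integration option described above reduces to averaging a group of consecutively captured frames before subtraction; a minimal sketch with synthetic frames (the function name and toy values are illustrative):

```python
import numpy as np

def integrate_frames(frames):
    """Average a list of consecutively captured frames into a single
    noise-reduced frame for use in the subtraction step."""
    return np.mean(np.stack(frames), axis=0)

# Three noisy captures of the same scene region average out.
noisy = [np.array([[10., 12.]]),
         np.array([[14., 12.]]),
         np.array([[12., 12.]])]
integrated = integrate_frames(noisy)
```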
[0075] To the extent necessary to understand and/or to complete the disclosure of the present invention, all publications, patents, and patent applications mentioned herein, including in particular the applications of the Applicant and/or inventor, are expressly incorporated by reference in their entirety as if fully set forth herein, to the extent that the disclosure provided thereby does not conflict with the present disclosure.
[0076] In this specification, imagers are devices which capture light reflected from a scene defined by the imager field of view, and generate an electronic representation of the intensity of light received from regions in the scene by a Focal Plane Array (FPA) within the imager. The scene regions are mapped to discrete elements of the FPA known as pixels. Typically, an imager has a characteristic frequency response which is centered around a center wavelength, with a continuum of wavelengths on both sides of the center wavelength. Therefore, when a wavelength of an imager is specified, it should be construed that the imager frequency response includes a reasonably prominent response at the specified wavelength; however, the imager may respond to other wavelengths, and the specified wavelength does not have to be identical to the center wavelength. Such frequency response may be affected, by way of example, by a bandpass filter disposed in front of the imager, by selection of the FPA, and the like. As explained above, the first imager is constructed to detect only the first light. Such construction may be made, by way of example, by adding a bandpass filter in its optical path. However, as the second light may include a broad spectrum which may include, inter alia, the first light, a filter is not necessary in the light path of the second imager.
[0077] Imagers may generate discrete images or a continuous stream of consecutive images, which is termed a "video" as described above. Notably, a video does not necessarily comprise each captured image, as a video may comprise any selected images which are captured one after the other, regardless of the number of captured images therebetween. Furthermore, in certain embodiments consecutively captured images may be averaged to form a single image in the video stream. It is noted that the full potential of a pixel is calibrated in accordance with the capabilities of the FPA, its sensitivity to specific wavelengths, and the relative amplitude of the specific wavelength within the spectrum. By way of example, a single FPA may be utilized to sense two separate wavelengths of equal intensity, where the first wavelength of the two generates a 90% output response while the second generates only a 70% output response; the sensed images must thus be scaled to show similar results.
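The scaling mentioned above can be illustrated, purely by way of example, by dividing each sensed image by the relative response of the FPA at its wavelength, so that equal incident intensities produce comparable pixel values. The 90% and 70% figures are the illustrative values from the text; the function name and array sizes are hypothetical.

```python
import numpy as np

# Assumed relative output responses of a shared FPA at the two wavelengths
# (illustrative values only, taken from the example in the text).
RESPONSE_FIRST = 0.90
RESPONSE_SECOND = 0.70

def normalize(image, response):
    """Scale a sensed image so equal incident intensity at either
    wavelength yields comparable pixel values."""
    return image.astype(np.float64) / response

# Equal incident intensity of 100 units, sensed through the two responses:
sensed_first = np.full((2, 2), 90.0)    # 100 units * 0.90 response
sensed_second = np.full((2, 2), 70.0)   # 100 units * 0.70 response

norm_first = normalize(sensed_first, RESPONSE_FIRST)
norm_second = normalize(sensed_second, RESPONSE_SECOND)
```

After normalization both images report the same incident intensity, so a subsequent pixel-wise subtraction reflects the road reflectance difference rather than the detector response difference.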
[0078] An aspect of the invention comprises a vehicle incorporating any of the embodiments disclosed herein. In such a vehicle the third image analysis and/or the subtraction of the first and second images may be performed by vehicle-borne computers, or by dedicated hardware. In certain embodiments the vehicle may be an autonomous vehicle. Some vehicles utilize computer systems adapted to utilize artificial intelligence algorithms, and such a computer system, or a portion thereof, may be utilized as the controller 40.
[0079] As stated above, it is desirable, but not mandatory, to place the light source 35 as high as practically possible on the vehicle in order to increase the penetration of the light through the water layer on the road, and thus also increase the reflection from the pavement markings. Therefore, in pavement marking detection aids which are mounted on a vehicle it is desirable to locate the light source 35 in the top third of the vehicle.
[0080] Figs. 2A and 2B are schematic, line-drawing representations of actual images that were captured utilizing a multispectral camera similar to the camera described in US Patent No. 7,433,042 to Cavanaugh, with the resulting sensed images translated to the visual range. It is noted that Application No. PCT/US2022/040299 to Maimon discloses a multispectral and LIDAR detector using light field optics, and that such an imager, or the Cavanaugh device, may also be advantageously utilized for the present application by configuring the detector to provide images at the desired wavelengths.
[0081] While an imager may be sensitive to wavelengths outside the visual range, the term "dark", as related to a pixel or a group of pixels of the imager, should be construed to correspond to a pixel which senses a light amplitude that causes the pixel to output a level lower than ~5%, 10%, or 20% of the full potential output of the pixel. Such a dark pixel or group of pixels may be considered as sensing a null signal. However, the definition of the dark output level may be dynamically adjusted to compensate for various conditions, and thus the exact output level corresponding to a 'dark' pixel or region should be construed as relative to the current conditions, where the output level is low in comparison to other regions of the image.
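The two readings of "dark" described above, a fixed fraction of full scale versus a level relative to the current image, may be sketched as follows. The 10% fraction, 8-bit full scale, and function names are illustrative assumptions, not limitations of the disclosure.

```python
import numpy as np

def dark_mask(image, fraction=0.10, full_scale=255):
    """Static rule: a pixel is 'dark' if its output is below a fixed
    fraction (e.g. ~5%, 10% or 20%) of the pixel's full-scale output."""
    return image < fraction * full_scale

def relative_dark_mask(image, fraction=0.10):
    """Dynamic rule: a pixel is 'dark' relative to the current frame,
    i.e. below a fraction of the brightest level sensed in this image."""
    return image < fraction * image.max()

# Hypothetical 2x2 frame mixing dim and bright regions.
frame = np.array([[10, 200],
                  [15, 180]], dtype=np.uint8)
```

Under the static rule only pixels below 25.5 counts are dark; under the dynamic rule the threshold tracks the brightest region of the frame (here 20 counts), so the same pixel may change classification as conditions change.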
[0082] It is important to note that while the first light and the second light are described as having a wavelength associated therewith, such wavelength should be considered nominal, and the actual light wavelength commonly spreads over a certain wavelength range which is sufficiently close to the nominal wavelength to be considered equivalent thereto for operational purposes. Furthermore, unless otherwise specified, the term "absorption coefficient" relates to the absorption coefficient in water of at least the center wavelength of the respective light. Furthermore, varying phases of water, such as snow, ice, and the like, may respond differently to certain light conditions, and accordingly detection parameters may differ between various water phases. Notably, the first and second imagers are capable of sensing the light reflected from the first and second lights, respectively; however, light at the specific wavelengths is absorbed at different levels by water on the road, and thus in some cases the specific wavelength is absorbed rather than reflected. As the first and second wavelengths or wavelength ranges associated with the first and second lights define the light characteristics which enable pavement marking detection, the terms first light, first wavelength, and first wavelength range should be construed as equivalent and interchangeable, and similarly the terms second light, second wavelength, and second wavelength range should be construed as equivalent and interchangeable, unless the terms light, wavelength, and/or wavelength range are specifically differentiated or clearly understood from the text in view of the rest of the specification, such as when defining a light to have a wavelength or wavelength range, and the like.
[0083] The term "contemporaneously", as related to images sensed by the first and second imagers, should be construed to mean occurring at sufficient temporal proximity to allow proper fusion of the first and second images when the vehicle velocity is considered.
[0084] Unless otherwise specified, relational terms used in these specifications should be construed to include certain tolerances that those skilled in the art would recognize as providing equivalent functionality. By way of example, the term perpendicular is not necessarily limited to 90.0°, but extends to any slight variation thereof that those skilled in the art would recognize as providing equivalent functionality for the purposes described for the relevant member or element. Wavelengths are a convenient approximation. By way of example, specifying a wavelength of 1445 nm relates similarly to a range of wavelengths allowing similarly effective results, ranging, by way of example, at any point between 1430 nm and 1460 nm, and extending at any point between two points selected by the water absorption coefficient of 10 cm-1, such as between 1400 nm and 1550 nm.
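The idea of selecting a wavelength range by a water absorption-coefficient threshold, as in the 10 cm-1 example above, may be sketched as follows. The coefficient samples in the table are coarse, hypothetical illustrative values, not measured data, and the function name is an assumption.

```python
# Hypothetical, coarse absorption-coefficient samples for liquid water
# (wavelength in nm -> coefficient in cm^-1); illustrative values only.
ABSORPTION = {1350: 5.0, 1400: 12.0, 1445: 30.0,
              1500: 20.0, 1550: 10.0, 1600: 6.0}

def band_above(threshold_cm1, table):
    """Return the (min, max) sampled wavelengths whose absorption
    coefficient meets or exceeds the threshold, or None if none do."""
    picks = [wl for wl, a in sorted(table.items()) if a >= threshold_cm1]
    return (min(picks), max(picks)) if picks else None

# Candidate 'first light' band, bounded by the 10 cm^-1 threshold.
band = band_above(10.0, ABSORPTION)
```

With these illustrative samples the selected band spans 1400 nm to 1550 nm, matching the example range in the text; a finer sampling of real absorption data would bound the band more precisely.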
[0085] Adjectives such as "about" and "substantially" that modify a condition or relationship characteristic of a feature or features of an embodiment of the present technology are to be understood to mean that the condition or characteristic is defined to within tolerances that are acceptable for operation of the embodiment for an application for which it is intended, or within variations expected from the measurement being performed and/or from the measuring instrument being used, or sufficiently close to the location, disposition, or configuration of the relevant element to preserve operability of the element within the invention, such that it does not materially modify the invention. By way of example, an imager may sense low levels of wavelengths outside its intended range; however, those skilled in the art would recognize that such sensing falls within the tolerance of the technology selected for the implementation, and that such an imager falls within the scope of the invention. Similarly, unless specifically specified or clear from its context, numerical values should be construed to include certain tolerances that those skilled in the art would recognize as having negligible importance, as they do not materially change the operability of the invention. When the term "about" precedes a numerical value, it is intended to indicate +/- 15%, or +/- 10%, or even only +/- 5% or +/- 1%, and in some instances the precise value.
[0086] Whenever the term 'and/or' is used in these specifications and the attached claims, it should be construed as any number, combination, or permutation of all, one, some, a plurality, or none of each of the items or list mentioned. It is also understood that "(s)" appended to the end of a word designates either the singular or the plural of the word. It is further understood that "or" is an inclusive "or", intended to include all items in a list and not intended to be limiting, and means any number, combination, or permutation of all, one, or a plurality of each of the items or list mentioned, unless the term 'or' is explicitly defined as exclusive, or the context would clearly indicate an exclusive or to the skilled artisan. In the description and claims of the present disclosure, each of the verbs "comprise", "include", and "have", and conjugates thereof, is used to indicate that the object or objects of the verb are not necessarily a complete listing of features, members, components, elements, steps, or parts of the subject or subjects of the verb.
[0087] In these specifications reference is often made to the accompanying drawings which form a part of the disclosure, and in which are shown by way of illustration and not of limitation, exemplary implementations and embodiments. Further, it should be noted that while the description provides various exemplary embodiments, as described below and as illustrated in the drawings, this disclosure is not limited to the implementations described and illustrated herein, but can extend to other embodiments as would be known or as would become known to those skilled in the art. Reference in the specification to "one embodiment", "this embodiment", "these embodiments", “several embodiments”, “selected embodiments”, "some embodiments" or conjugates thereof means that a particular feature, structure, or characteristic described in connection with the relevant embodiment(s) may be included in one or more implementations and/or embodiments, and the appearances of these phrases in various places in the specification are not necessarily all referring to the same embodiment(s). Additionally, in the description, numerous specific details are set forth in order to provide a thorough disclosure, guidance and/or to facilitate understanding of the invention or features thereof. However, it will be apparent to one of ordinary skill in the art that these specific details may not all be needed in each implementation. In certain embodiments, well-known structures, materials, circuits, processes and interfaces have not been described in detail, and/or may be illustrated schematically or in block diagram form, so as to not unnecessarily obscure the disclosure.
[0088] Positional or motional terms such as "upper", "lower", "right", "left", "bottom", "below", "lowered", "low", "top", "above", "elevated", "high", "vertical", "horizontal", "front", "back", "backward", "forward", "upstream" and "downstream", as well as grammatical variations thereof, may be used herein for exemplary purposes only, to illustrate the relative positioning, placement or displacement of certain components, to indicate a first and a second component in the present illustrations, or to do both. Such terms do not necessarily indicate that, for example, a "bottom" component is below a "top" component, as such directions, components or both may be flipped, rotated, moved in space, placed in a diagonal orientation or position, placed horizontally or vertically, or similarly modified.
[0089] Although the foregoing invention has been described in detail by way of illustration and example, it will be understood that the present invention is not limited to the particular embodiments, options, alternatives, and examples provided in the description and the drawings, or to the specific embodiments described, but may comprise any combination of the above disclosed elements and their equivalents and variations thereof, as well as those combinations, changes and/or modifications which will be obvious to those skilled in the art in view of the present disclosure. The invention extends to such variations and modifications as fall within the true spirit and scope of the invention.

Claims

1. An aid (100) for detecting pavement markings (16) on a road (15) under wet conditions, at least a portion of the road being lit by at least a first light having at least a first wavelength and by a second light having at least a second wavelength, the aid (100) comprising: a first imager (20) having a field of view (80) operationally directed at the road (15), the first imager being capable of producing a first image (50) representing first light reflected from the road (15); a second imager (25) having a field of view (85) operationally at least partially congruent with the field of view (80) of the first imager, the second imager being capable of producing a second image (55) representing second light reflected from the road (15); the aid being characterized by: the first wavelength having an absorption coefficient in water higher than an absorption coefficient in water of the second wavelength; and, an image differentiator (30) being configured to differentiate between light intensity sensed in pixels of the first image (50) and light intensity sensed in corresponding pixels of the second image (55), and generate a third image (60) representing a result of the differentiation.
2. The aid as claimed in claim 1, wherein the water absorption coefficient of the second wavelength is lower than 50% of the water absorption coefficient of the first wavelength.
3. The aid as claimed in claim 1 or 2, further comprising an analyzer (40) configured to analyze the third image (60) and identify pavement markings (16) therein.
4. The aid as claimed in claim 3 further comprising a display (45) configured to display the third image (60), or an enhanced third image having the pavement markings identified by the analyzer (40).
5. The aid as claimed in claim 3, wherein information related to the identified pavement markings is utilized to alert a driver of a vehicle (10), and/or to assist in steering of the vehicle.
6. The aid as claimed in claim 3, wherein the identified pavement markings are being transmitted to an Advanced Driver Assistance System (ADAS).
7. The aid as claimed in claim 3, wherein the analyzer comprises a controller configured to utilize artificial intelligence software to recognize pavement markings in the third image.
8. The aid as claimed in any preceding claim, wherein the first wavelength is selected to have an absorption coefficient greater than 10 cm-1 in water, and the second wavelength is selected to have an absorption coefficient which is at most half of the absorption coefficient of the first light, or, in the above embodiments, smaller than 5 cm-1.
9. The aid as claimed in any preceding claim, wherein the first imager (20) is sensitive only to the first light, and wherein the second imager (25) is capable of sensing any wavelength, including wavelengths that fall within the first light.
10. The aid as claimed in any preceding claim, further comprising at least one light source (35) disposed to irradiate at least a section of the road (15) within the combined field of view of the first (20) and second (25) imagers.
11. The aid as claimed in claim 10, wherein the aid is mounted on a vehicle (10) having a height, and wherein the at least one light source is positioned in or on the vehicle at a height equal to or greater than two thirds of the vehicle height.
12. The aid as claimed in claim 10 or 11, wherein the at least one light source (35) irradiates the road (15) with light having at least one wavelength that is within the Short Wave Infra-Red (SWIR) range.
13. The aid of any preceding claim, wherein the first (20) and the second (25) imagers are integrated.
14. The aid as claimed in claim 13, wherein the first imager (20) and the second imager (25) generate the first (50) and the second (55) images from a single Focal Plane Array (FPA) detector.
15. The aid as claimed in any preceding claim, wherein the first wavelength is between 1400 and 1500 nanometers (nm).
16. The aid as claimed in any preceding claim, wherein the first wavelength is between 1900 and 2000 nanometers.
17. The aid as claimed in any preceding claim, wherein the first image and the second image are images in respective first and second video streams.
18. A vehicle comprising an aid (100) for detecting pavement markings (16) on a road, the aid being as claimed in any preceding claim.
PCT/US2024/031203 2023-05-29 2024-05-27 Aid to pavement marking detection in wet conditions WO2024249384A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202363469474P 2023-05-29 2023-05-29
US63/469,474 2023-05-29

Publications (1)

Publication Number Publication Date
WO2024249384A1 true WO2024249384A1 (en) 2024-12-05

Family

ID=93658610

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2024/031203 WO2024249384A1 (en) 2023-05-29 2024-05-27 Aid to pavement marking detection in wet conditions

Country Status (1)

Country Link
WO (1) WO2024249384A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140180129A1 (en) * 2011-02-09 2014-06-26 Tel Hashomer Medical Research Infrastructure And Services Ltd. Methods and devices suitable for imaging blood-containing tissue
US20150054954A1 (en) * 2012-02-13 2015-02-26 Izumi Itoh Imaging unit and method for installing the same
US20170113664A1 (en) * 2015-10-23 2017-04-27 Harman International Industries, Incorporated Systems and methods for detecting surprising events in vehicles
US20200257915A1 (en) * 2017-10-27 2020-08-13 3M Innovative Properties Company Optical sensor systems
US20200366870A1 (en) * 2004-04-15 2020-11-19 Magna Electronics Inc. Vehicular control system with traffic lane detection
US20200408676A1 (en) * 2017-08-29 2020-12-31 Panasonic Intellectual Property Management Co., Ltd. Water content sensor and road surface state detection device
US20220406077A1 (en) * 2021-06-18 2022-12-22 Continental Automotive Gmbh Method and system for estimating road lane geometry



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24816264

Country of ref document: EP

Kind code of ref document: A1