
WO2020079698A1 - Adas systems functionality testing - Google Patents

ADAS systems functionality testing

Info

Publication number
WO2020079698A1
Authority
WO
WIPO (PCT)
Prior art keywords
objects
vut
roadway
targets
virtual
Prior art date
Application number
PCT/IL2019/051135
Other languages
French (fr)
Inventor
Jonathan ABIR
Arie ABIR
Shlomi ATHIAS
Amir SHAHAR SHPUND
Original Assignee
A.D Knight Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by A.D Knight Ltd.
Publication of WO2020079698A1

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/931: Radar or analogous systems specially adapted for anti-collision purposes of land vehicles
    • G01S13/726: Radar-tracking systems using numerical data; multiple target tracking
    • G01S13/862: Combination of radar systems with sonar systems
    • G01S13/865: Combination of radar systems with lidar systems
    • G01S13/867: Combination of radar systems with cameras
    • G01S15/931: Sonar systems specially adapted for anti-collision purposes of land vehicles
    • G01S15/66: Sonar tracking systems
    • G01S17/931: Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G01S17/87: Combinations of systems using electromagnetic waves other than radio waves
    • G01S7/4052: Means for monitoring or calibrating by simulation of echoes
    • G01S7/4086: Simulation of echoes using externally generated reference signals, e.g. via remote reflector or transponder, in a calibrating environment, e.g. anechoic chamber
    • G01S7/417: Target characterisation using analysis of echo signals, involving the use of neural networks
    • G01S7/497: Means for monitoring or calibrating (lidar systems)
    • G01S7/52004: Means for monitoring or calibrating (sonar systems)
    • G01S2013/9316: Anti-collision radar of land vehicles combined with communication equipment with other vehicles or with base stations
    • G01S2013/9318: Controlling the steering
    • G01S2013/93185: Controlling the brakes
    • G01S2013/9319: Controlling the accelerator
    • G01S2013/932: Using own vehicle data, e.g. ground speed, steering wheel direction
    • G01S2013/9323: Alternative operation using light waves
    • G01S2013/9324: Alternative operation using ultrasonic waves

Definitions

  • This invention relates to the field of advanced driver assistance systems (ADAS).
  • ADAS rely on sensors such as radio frequency (RF) detection and ranging (RADAR) sensors, cameras, light detection and ranging (LiDAR) sensors, and ultrasonic sensors. These systems perform decision tasks, such as path planning and obstacle avoidance, as well as actuation tasks, such as acceleration, deceleration, braking, and steering. Therefore, inspection, calibration, validation, verification, and failure and error detection are essential to ensure the safety and performance of ADAS systems.
  • a method comprising: receiving, from a first sensor system, information regarding a first set of objects within a scene; receiving, from a second sensor system, information regarding a second set of objects within said scene, wherein said second set of objects is located within a region of interest (ROI) of said second sensor system; selecting, from said first set of objects, a subset of targets comprising those objects in said first set of objects that are located within said ROI; and comparing said subset of targets with said second set of objects to determine a functionality status of said second sensor system.
  • a system comprising at least one hardware processor; and a non-transitory computer-readable storage medium having stored thereon program instructions, the program instructions executable by the at least one hardware processor to: receive, from a first sensor system, information regarding a first set of objects within a scene, receive, from a second sensor system, information regarding a second set of objects within said scene, wherein said second set of objects is located within a region of interest (ROI) of said second sensor system, select, from said first set of objects, a subset of targets comprising those objects in said first set of objects that are located within said ROI, and compare said subset of targets with said second set of objects to determine a functionality status of said second sensor system.
  • a computer program product comprising a non-transitory computer-readable storage medium having program instructions embodied therewith, the program instructions executable by at least one hardware processor to: receive, from a first sensor system, information regarding a first set of objects within a scene; receive, from a second sensor system, information regarding a second set of objects within said scene, wherein said second set of objects is located within a region of interest (ROI) of said second sensor system; select, from said first set of objects, a subset of targets comprising those objects in said first set of objects that are located within said ROI; and compare said subset of targets with said second set of objects to determine a functionality status of said second sensor system.
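  • By way of illustration only (not part of the claimed subject matter), a minimal Python sketch of this ROI-filter-and-compare step might look as follows, assuming simplified object records and an axis-aligned rectangular ROI:

```python
# Minimal sketch (hypothetical data structures): objects reported by a reference
# ("first") sensor system are filtered to the ROI of the sensor system under test
# ("second"), then matched against its detections to derive a functionality status.
from dataclasses import dataclass
from math import hypot


@dataclass
class Obj:
    x: float      # position in a shared reference frame [m]
    y: float
    kind: str     # e.g., "vehicle", "pedestrian"


def in_roi(obj: Obj, roi: tuple) -> bool:
    """ROI modelled here as an axis-aligned box (x_min, x_max, y_min, y_max)."""
    x_min, x_max, y_min, y_max = roi
    return x_min <= obj.x <= x_max and y_min <= obj.y <= y_max


def functionality_status(first: list, second: list, roi: tuple, tol_m: float = 1.0) -> dict:
    """Compare ROI-filtered reference targets with the detections of the system under test."""
    targets = [o for o in first if in_roi(o, roi)]   # subset of targets within the ROI
    unmatched = list(second)
    matched, missed = [], []
    for t in targets:
        hit = next((d for d in unmatched
                    if d.kind == t.kind and hypot(d.x - t.x, d.y - t.y) <= tol_m), None)
        if hit is None:
            missed.append(t)                         # target the second system failed to report
        else:
            matched.append((t, hit))
            unmatched.remove(hit)
    status = "OK" if not missed and not unmatched else "DEGRADED"
    return {"status": status, "matched": matched, "missed": missed, "false_detections": unmatched}
```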
  • At least one of said first and second sensor systems are located on board a vehicle under testing (VUT).
  • said VUT is a moving VUT and said scene is a roadway scene.
  • said comparing further comprises determining, with respect to said VUT, at least one of a position relative to each of said targets, a location relative to each of said targets, a trajectory relative to each of said targets, an orientation relative to each of said targets, VUT velocity, and VUT thrust line.
  • said objects are selected from the group consisting of: vehicles, trucks, roadway infrastructure elements, roadway signs, bridges, buildings, roadway hazards, pedestrians, bicycles, vulnerable road users, and animals.
  • said first and second sensor systems each comprise at least one of ultrasonic-based, camera-based, radar-based, and light detection and ranging (LiDAR)-based sensors.
  • said comparing comprises: (i) assigning a scoring function to each target and to each object in said second set of objects, (ii) associating each of said targets with a corresponding object in said second set of objects, and (iii) comparing said scoring functions assigned to each of said targets and its associated object.
  • said scoring function is based, at least in part, on a location, type, timestamp, and trajectory of each of said targets and said objects.
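  • One hedged way to realize such a scoring function and association is sketched below; the weights and the greedy matching are illustrative assumptions, not taken from the disclosure:

```python
# Hypothetical scoring/association sketch: each target/detection pair receives a score
# built from location, trajectory (velocity), timestamp, and type; lower is better.
from dataclasses import dataclass
from math import hypot


@dataclass
class Track:
    x: float
    y: float
    vx: float      # trajectory expressed as a velocity vector [m/s]
    vy: float
    kind: str
    t: float       # timestamp [s]


def pair_score(a: Track, b: Track) -> float:
    """Weighted mismatch in position, velocity and time, plus a penalty for a type mismatch."""
    return (hypot(a.x - b.x, a.y - b.y)
            + 0.5 * hypot(a.vx - b.vx, a.vy - b.vy)
            + 2.0 * abs(a.t - b.t)
            + (0.0 if a.kind == b.kind else 10.0))


def associate(targets: list, detections: list, gate: float = 5.0):
    """Greedy best-score association; pairs whose score exceeds the gate stay unmatched."""
    pairs, free = [], set(range(len(detections)))
    for tgt in targets:
        best = min(free, key=lambda j: pair_score(tgt, detections[j]), default=None)
        if best is not None and pair_score(tgt, detections[best]) <= gate:
            pairs.append((tgt, detections[best], pair_score(tgt, detections[best])))
            free.remove(best)
    return pairs, free   # associated pairs with their scores, plus unmatched detection indices
```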
  • said ROI is determined based, at least in part on one of: a field of sensing of said second sensor system, a velocity of said VUT, a trajectory of said VUT, a braking distance of said VUT, a mass of said VUT, dimensions of said VUT, weather conditions, visibility conditions, roadway conditions, roadway friction coefficient, roadway speed limit, and roadway type.
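  • As a purely illustrative example of how such factors could size the ROI, the following sketch derives the ROI's longitudinal extent from VUT speed, reaction time, and roadway friction using the standard stopping-distance formula; all constants are assumptions:

```python
# Illustrative heuristic only: longitudinal ROI extent from the standard stopping
# distance d = v * t_react + v^2 / (2 * mu * g), padded by a safety factor.
G = 9.81  # gravitational acceleration [m/s^2]


def roi_length_m(speed_mps: float, reaction_time_s: float = 1.5,
                 friction_coeff: float = 0.7, safety_factor: float = 1.2) -> float:
    braking_m = speed_mps ** 2 / (2.0 * friction_coeff * G)
    return safety_factor * (speed_mps * reaction_time_s + braking_m)


# Example: at 100 km/h (approx. 27.8 m/s) on dry asphalt (mu approx. 0.7), the ROI
# would extend roughly 118 m ahead of the VUT.
print(round(roi_length_m(27.8), 1))
```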
  • said ROI is determined by applying a trained machine learning algorithm to said scene, wherein said machine learning algorithm is trained using a training set comprising: (i) a plurality of roadway scenes comprising a VUT; and (ii) labels associated with an ROI assigned to each VUT in each of said roadway scenes.
  • said information regarding said first set of objects comprises at least one of: a priority level associated with at least some of said objects, and a danger level associated with at least some of said objects.
  • said priority level associated with at least one of said objects is determined based, at least in part, on an assessment of a deadliness associated with a possible collision between said VUT and said object, a source of said information with respect to said object, and a location of said object.
  • said priority level associated with at least one of said objects is determined by applying a trained machine learning algorithm to said scene, wherein said machine learning algorithm is trained using a training set comprising: (i) a plurality of roadway scenes, each comprising one or more objects; and (ii) labels associated with a priority level assigned to each of said objects in each of said roadway scenes.
  • said danger level associated with at least one of said objects is determined based, at least in part, on object motion trajectory, object perceived compliance with roadway rules, object speed variations, and object roadway lane adherence.
  • said danger level associated with at least one of said objects is determined by applying a trained machine learning algorithm to said scene, wherein said machine learning algorithm is trained using a training set comprising: (i) a plurality of roadway scenes, each comprising one or more objects; and (ii) labels associated with a danger level assigned to each of said objects.
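  • A hedged, rule-based sketch of how such priority/danger levels might be derived from the factors listed above is given below; all thresholds are illustrative assumptions, and the disclosure also contemplates trained machine learning models for this step:

```python
# Illustrative rule-based danger level (0 = low .. 3 = high) from object behavior cues.
from dataclasses import dataclass


@dataclass
class ObservedObject:
    closing_speed_mps: float     # positive when the object is closing on the VUT
    time_to_collision_s: float
    obeys_traffic_rules: bool    # perceived compliance with roadway rules
    speed_variation_mps: float   # short-window standard deviation of speed
    keeps_lane: bool             # roadway lane adherence


def danger_level(o: ObservedObject) -> int:
    level = 0
    if o.closing_speed_mps > 0 and o.time_to_collision_s < 3.0:
        level += 1               # imminent-collision geometry
    if not o.obeys_traffic_rules or not o.keeps_lane:
        level += 1               # erratic or non-compliant behavior
    if o.speed_variation_mps > 2.0:
        level += 1               # unstable speed profile
    return min(level, 3)
```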
  • a method comprising: generating, by a testing system, a virtual scene comprising a plurality of virtual objects; receiving an output from a sensor system representing a response of said sensor system to said virtual scene; and comparing said output with a simulated reference response generated for said virtual scene, to determine a functionality status of said sensor system.
  • a system comprising: at least one hardware processor; and a non-transitory computer-readable storage medium having stored thereon program instructions, the program instructions executable by the at least one hardware processor to: generate, by a testing system, a virtual scene comprising a plurality of virtual objects, receive an output from a sensor system representing a response of said sensor system to said virtual scene, and compare said output with a simulated reference response generated for said virtual scene, to determine a functionality status of said sensor system.
  • a computer program product comprising a non-transitory computer-readable storage medium having program instructions embodied therewith, the program instructions executable by at least one hardware processor to: generate, by a testing system, a virtual scene comprising a plurality of virtual objects; receive an output from a sensor system representing a response of said sensor system to said virtual scene; and compare said output with a simulated reference response generated for said virtual scene, to determine a functionality status of said sensor system.
  • said sensor system is located on board a vehicle under testing (VUT) and said virtual scene is a roadway scene.
  • said VUT is a moving VUT.
  • said virtual objects are selected from the group consisting of: vehicles, trucks, roadway infrastructure elements, roadway signs, bridges, buildings, roadway hazards, pedestrians, bicycles, and animals.
  • said sensor system comprises at least one of ultrasonic-based, camera-based, radar-based, and light detection and ranging (LiDAR)-based sensors.
  • At least some of said virtual objects are generated by a simulated reflected signal.
  • At least some of said virtual objects are generated by a transceiver configured to receive a signal transmitted from a sensor of said sensor system, and transmit, based at least in part on said received signal, an output signal representative of a virtual target, such that at least a portion of the output signal is received by said sensor.
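  • For a radar sensor, the physics behind such a transceiver-based virtual target can be illustrated as follows: to make a target appear at range R with radial speed v, the re-transmitted signal is delayed by the round-trip time 2R/c and shifted by the two-way Doppler frequency 2*v*fc/c. This is a simplified sketch that ignores amplitude shaping for radar cross section:

```python
# Simplified sketch of the delay/Doppler a target emulator would apply to mimic a
# virtual radar target at a given range and radial speed (amplitude/RCS shaping omitted).
C = 3.0e8  # speed of light [m/s]


def echo_parameters(range_m: float, radial_speed_mps: float, carrier_hz: float):
    delay_s = 2.0 * range_m / C                            # round-trip propagation delay
    doppler_hz = 2.0 * radial_speed_mps * carrier_hz / C   # two-way Doppler shift
    return delay_s, doppler_hz


# Example: a virtual vehicle 50 m ahead, closing at 10 m/s, seen by a 77 GHz radar,
# corresponds to a delay of about 333 ns and a Doppler shift of about 5.1 kHz.
delay, doppler = echo_parameters(50.0, 10.0, 77e9)
```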
  • said comparing comprises comparing parameters of each of said virtual targets with parameters of each detected virtual target in said output of said sensor system.
  • said parameters are selected from the group consisting of virtual target location, virtual target velocity, virtual target trajectory, virtual target type, and virtual target timing.
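  • A minimal sketch of this parameter-by-parameter comparison, assuming illustrative tolerances, could look like this:

```python
# Hypothetical pass/fail check: each generated virtual target is compared with the
# corresponding detection reported by the sensor system, parameter by parameter.
from dataclasses import dataclass


@dataclass
class VirtualTarget:
    location_m: float
    velocity_mps: float
    kind: str       # virtual target type
    t: float        # virtual target timing [s]


TOLERANCES = {"location_m": 0.5, "velocity_mps": 0.25, "t": 0.05}   # illustrative values


def compare_target(generated: VirtualTarget, detected: VirtualTarget) -> dict:
    deviations = {name: abs(getattr(generated, name) - getattr(detected, name))
                  for name in TOLERANCES}
    within = all(dev <= TOLERANCES[name] for name, dev in deviations.items())
    ok = within and generated.kind == detected.kind
    return {"status": "PASS" if ok else "FAIL", "deviations": deviations}
```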
  • FIG. 1 is a high level conceptual illustration of the interaction among the various modules of a vehicle comprising an ADAS suite and an ADAS testing system, according to an embodiment
  • FIG. 2 is a high level schematic block diagram of an embodiment of a system 110 for ADAS testing, verification and failure indication, according to an embodiment
  • FIG. 3 schematically illustrates a vehicle positioned within a reference coordinate system
  • Fig. 4 illustrates an exemplary ROI analysis method as a function of the vehicle’s speed, according to an embodiment
  • Fig. 5 shows an example of extended ROI derived from an action of lane changing, according to an embodiment.
  • Fig. 6 shows an example of NV sharing an object with a vehicle, according to an embodiment
  • Fig. 7 shows an example of LOS/NLOS analysis in an urban scenario, according to an embodiment
  • Fig. 8 shows an example of a high danger scenario, according to an embodiment
  • Fig. 9 shows an example of a high predicted deadliness level of an impact between a vehicle and a pedestrian, according to an embodiment
  • Fig. 10 shows an example where a target is located at the braking distance of the vehicle, according to an embodiment
  • Fig. 11 shows an example of a method for anomaly detection, according to an embodiment
  • Fig. 12 shows an example of a dangerous target that does not obey traffic rules and drives in the opposite lane, according to an embodiment
  • Fig. 13 shows an example of handling network communication disconnections, according to an embodiment
  • Fig. 14 shows a system for external inspection of a neighboring vehicle, according to an embodiment.
  • An ADAS system may include one or more of the following elements: Adaptive Cruise Control (ACC) system, adaptive high beam system, automatic light control system, automatic parking assistance system, automotive night vision system, blind spot monitor, collision avoidance systems, crosswind stabilization system, driver drowsiness detection system, lane departure warning systems, pedestrian protection systems, and the like.
  • ADAS systems may take on parts of the driving tasks of the vehicle driver, including the detection of environmental information relevant to the vehicle driving.
  • the ADAS system may also be a forward-orientated driver assistance system, for instance an augmented video/augmented reality system or a system for traffic light detection, an environment monitoring system (in particular for coupled vehicles, such as trailers), a night vision system (FIR, NIR, active gated sensing), and the like.
  • Driver assistance systems for interior monitoring, in particular driver monitoring systems, may also be tested according to the invention.
  • Autonomous vehicles (AV) and/or ADAS-equipped vehicle systems may be classified based on their level of autonomy.
  • Level 0 vehicles have ADAS systems which do not assume vehicle control, but may issue warnings to the driver of the vehicle.
  • Level 1 comprises systems where the driver must be ready to take control at any time, such as, Parking Assistance with automated steering, or Lane Keeping Assistance (LKA).
  • Level 2 ADAS systems may execute accelerating, braking, and steering, however, the driver is obliged to detect objects and events in the roadway environment and respond if the ADAS systems fail to respond properly.
  • Level 3 systems may operate completely autonomously within known, limited environments, such as freeways.
  • Level 4 systems can control the vehicle completely autonomously in all but a few environments, such as severe weather.
  • Level 5 systems may operate a vehicle completely autonomously in all environments and destinations.
  • Vehicles comprising ADAS systems usually include a sensor set comprising one or more sensors which enable the functioning of the ADAS systems, such as, but not limited to, camera-based, radar-based, and/or LiDAR-based sensor sets.
  • the data from the sensors may describe, e.g., the physical environment or roadway environment where the vehicle is located, static and dynamic objects within this physical environment, the position of the vehicle relative to the static and dynamic objects, the weather, other natural phenomena within the physical environment, and the operation of the suite of ADAS systems in response to the static and dynamic objects.
  • Static objects perceived by a sensor set of the ADAS suite may include one or more objects of the roadway environment that are either static or substantially static.
  • the static objects may include plants, trees, fire hydrants, traffic signs, other roadside structures, a sidewalk, and/or various equipment.
  • Dynamic objects may include one or more objects of the roadway environment that are dynamic in terms of their motion or operation, such as other vehicles present in the roadway, pedestrians, animals, traffic lights, and/or environmental factors (e.g., wind, water, ice, variation of sun light).
  • ADAS and/or autonomous vehicle environment perception is a critical component.
  • ADAS perception uses data from various sensors (e.g., camera, Radar, Lidar, etc.), to detect objects in the environment (e.g., other vehicles, pedestrians, signs, road hazards) which may be relevant to the operation of the ADAS systems, and by extension, the operation and safety of the vehicle and its occupants.
  • Accurate environmental perception enables ADAS systems to correctly determine operational commands to the vehicle, such as acceleration, deceleration, braking, and/or steering.
  • a potential safety issue may therefore arise when one or more components of an ADAS suite experience a failure, such as a software bug, algorithm error, hardware failure, hardware wear, sensors misalignment, sensor performance variations, and the like.
  • ADAS failures are difficult to predict, simulate, or detect.
  • inspection, calibration, and validation of ADAS systems require relatively costly and complex testing equipment, and/or deployment of actual vehicles on roadways to replicate desired scenarios.
  • the present disclosure provides for systems, methods, and computer program products for testing the sensing, perception, decision, and/or actuation subsystems of an ADAS suite installed in a vehicle, to detect and determine potential failures.
  • a potential advantage of the present disclosure is, therefore, in that it provides for simultaneous testing of multiple ADAS systems under a plurality of scenarios using relatively simple processes.
  • ADAS suite testing may be carried out, for instance, in the context of a routine and/or non-routine workshop inspection or main examination of the vehicle.
  • ADAS systems may undergo regulatory and/or certification testing by, e.g., governmental, quasi-governmental, and/or other organizations.
  • the testing may be carried out, at least in part, in the field, e.g., while the vehicle is travelling on a roadway.
  • the present disclosure may be executed during a workshop inspection of a vehicle.
  • continuous function testing can thereby be configured, such that malfunctions can be detected in real time rather than during a planned inspection.
  • the present disclosure may be configured to perceive, detect, identify, and/or generate parts or all of an environmental test scene comprising, e.g., one or more static and/or dynamic targets which are configured to exercise one or more corresponding ADAS suite sensing modalities in an ADAS-equipped vehicle.
  • the generated and/or perceived test scene may comprise one or more virtual static and/or dynamic targets generated by the present system.
  • virtual scene generation may comprise virtualizing a roadway environment comprising, e.g., a plurality of static and/or dynamic objects that realistically represent driving scenarios which may be encountered by a vehicle in the real world. Virtualized scenes so generated may be included in ADAS simulations configured to accurately measure and test the performance of different ADAS systems and system settings.
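  • One simple way such a virtualized scene could be described for simulation and replay is sketched below; the field names and values are illustrative assumptions, not a format defined by the disclosure:

```python
# Hypothetical scene description for generating/replaying a virtualized roadway scene.
from dataclasses import dataclass, field


@dataclass
class VirtualObject:
    kind: str                                  # "vehicle", "pedestrian", "roadway_sign", ...
    position_m: tuple                          # (x, y) in the scene frame
    velocity_mps: tuple = (0.0, 0.0)           # static objects keep (0.0, 0.0)


@dataclass
class VirtualScene:
    roadway_type: str                          # e.g., "urban", "freeway"
    friction_coefficient: float = 0.7
    objects: list = field(default_factory=list)


scene = VirtualScene(
    roadway_type="urban",
    objects=[
        VirtualObject("vehicle", (40.0, 0.0), (-5.0, 0.0)),     # oncoming car
        VirtualObject("pedestrian", (25.0, 3.5), (0.0, -1.2)),  # pedestrian crossing
    ],
)
```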
  • virtual targets may be represented as one or more signals produced and/or transmitted within the vehicle’s environment, or otherwise supplied directly to an ADAS subsystem module.
  • virtual target generation may comprise a plurality of individual targets configured to exercise and/or activate one or more of the ADAS suite sensing modalities, simultaneously and/or over a specified period of time within, e.g., a predefined testing cycle.
  • the present disclosure is configured to determine a precise position, pose, orientation, velocity, trajectory, and/or location of the vehicle relative to the test scene and/or a known coordinate system. In some embodiments, the present disclosure further provides for determining vehicle steering angle and/or thrust angle.
  • the present disclosure is further configured to process the response of one or more of the ADAS suite subsystems to the test scene, to determine whether the ADAS suite and/or relevant subsystems thereof correctly perceive the test scene and its targets.
  • this determination may be based, at least in part, on a comparison between the ADAS suite sensor modalities and a reference 'ground truth' response associated with the scene, to evaluate the accuracy and functionality of at least some of the ADAS suite subsystems.
  • the reference response corresponds to the theoretical response of a fully-functional ADAS suite in view of the test scene.
  • a reference response can be calculated based on, e.g., ADAS system technical specifications.
  • a reference response can be predicted using such tools as, e.g., machine learning algorithms and the like.
  • relevant components may include, e.g., optical components (e.g., cameras, lenses, filters), electrical circuits, and/or mechanical components.
  • vehicles with an ADAS suite being tested may be land, airborne, or water-borne vehicles.
  • the vehicle is a motor vehicle, in particular a passenger vehicle, a lorry or a motorcycle.
  • the vehicle may also be a ship or an aircraft.
  • the present disclosure further provides for testing, validating, verifying, and/or calibrating one or more of the ADAS systems comprising an ADAS suite in a vehicle based, at least in part, on a response of one or more of the sensing modalities associated with the ADAS systems.
  • testing, validation, and/or calibration are further based, at least in part, on the initial accurate determination of vehicle pose and orientation within the coordinate system.
  • FIG. 1 is a high level conceptual illustration of the interaction among the various modules of a vehicle comprising an ADAS suite and an ADAS testing system, according to an embodiment.
  • a vehicle comprising an ADAS suite 100 is under testing by a system 110 for ADAS testing, calibration, verification, validation, and failure indication.
  • the various modules of the ADAS suite and the testing platform as described herein are only an exemplary embodiment of the present invention, and in practice may have more or fewer components than shown, may combine two or more of the components, or may have a different configuration or arrangement of the components.
  • the various components described herein may be implemented in hardware, software, or a combination of both hardware and software. In various embodiments, these systems may comprise one or more dedicated hardware devices, may form an addition to and/or extension of an existing device, and/or may be provided as a cloud-based service.
  • ADAS suite 100 may comprise a sensor functionality comprising a plurality of relevant sensor modalities, such as, but not limited to, camera- based, radar-based, and/or LiDAR-based sensor modalities.
  • ADAS suite 100 comprises a perception function configured to obtain sensor inputs and fuse them, to understand and produce an accurate model of the vehicle’s environment.
  • ADAS suite 100 may further comprise a decision and/or a planning function, configured to produce a plan of action for the vehicle based on the perceived environment model generated by the perception function.
  • an actuation function of ADAS suite 100 is configured to take the plan generated by the decision/planning module and implement it by providing operational commands to the vehicle, such as acceleration, deceleration, braking, and/or steering commands.
  • system 110 may comprise a vehicle pose detection function configured to determine a vehicle’s location, pose, orientation, attitude, and/or bearing in relation to a reference coordinate system
  • vehicle pose detection module comprises one or more sensor sets, and/or is configured to receive and obtain relevant sensor inputs from sensor sets of the vehicle under test, and/or from any other external source.
  • a target generation function may be configured to generate one or more scenarios comprising a plurality of static and/or dynamic targets to exercise and/or activate one or more of the vehicle’s sensing modalities.
  • the target generation function may generate virtual targets by generating signals configured to be received by one or more sensors and interpreted as associated with a target in the roadway.
  • generating a virtual target may comprise generating an electrical, electromagnetic, visual, auditory, acoustic, sonic, visual, and/or another signal configured to be received by a relevant sensor.
  • generating a virtual target may comprise receiving a probing signal generated by a relevant sensor (e.g., a laser signal, an ultrasonic signal) and manipulating a reflection of the probing signal to generate a particular desired perception in the sensor.
  • an analytics function of ADAS testing platform 110 may be configured to obtain information regarding one or more of vehicle pose and orientation, vehicle ADAS sensor inputs, vehicle ADAS perception, vehicle decision and planning, vehicle actuation, and testing platform target generation and/or detection, and to perform a variety of analyses with respect to ADAS system and subsystem performance, detect potential hardware and/or software failures, and perform system testing, verification, validation, calibration, and failure detection in ADAS.
  • Fig. 2 is a high level schematic block diagram of an embodiment of a system 110 for ADAS testing, verification and failure indication.
  • ADAS suite 100 is installed onboard a vehicle.
  • ADAS suite 100 and system 110 as described herein is only an exemplary embodiment of the present invention, and in practice may have more or fewer components than shown, may combine two or more of the components, or may have a different configuration or arrangement of the components.
  • the various components described herein may be implemented in hardware, software, or a combination of both hardware and software. In various embodiments, these systems may comprise a dedicated hardware device, or may form an addition to and/or extension of an existing device.
  • System 110 may store in a storage device software instructions or components configured to operate a hardware processor (also "CPU,” or simply “processor”).
  • the software components may include an operating system, including various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitating communication between various hardware and software components.
  • system 110 may also comprise, e.g., a communications module, a user interface, an imaging device, an image processing module, and/or a machine learning module.
  • the produced data, representing a stream of images, can be in a format operable by a computer device for diverse purposes, such as displaying the data, storing the image data, editing the data, and the like.
  • the data may be used in the analysis process of the video sequence.
  • such data can be used to derive various information aspects, which can be utilized in a number of processes such as detecting regions of interest, segmentation, feature calculation, and the like.
  • such information can refer to color channels, e.g., red, green, and blue.
  • the color channels may be used to calculate various metrics such as the intensity levels of at least some of the wavelengths, levels of brightness of at least some of the color channels, and the like.
  • a user interface comprises one or more of a control panel for controlling system 110, buttons, display monitor, and/or speaker for providing audio commands.
  • system 110 includes one or more user input control devices, such as a physical or virtual joystick, mouse, and/or click wheel.
  • system 110 comprises one or more of peripheral interfaces, RF circuitry, audio circuitry, a microphone, an input/output (I/O) subsystem, other input or control devices, optical or other sensors, and an external port.
  • I/O input/output subsystem
  • Each of the above identified modules and applications correspond to a set of instructions for performing one or more functions described above. These modules (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules may be combined or otherwise re-arranged in various embodiments.
  • a communications module may be configured to connect system 110 to a network, such as the internet, a local area network, a wide area network, and/or a wireless network.
  • the communications module facilitates communications with other devices over one or more external ports, and also includes various software components for handling data received by system 110.
  • an image processing module may be configured to receive video stream data, and process the data to extract and/or calculate a plurality of values and/or features derived from the data.
  • image processing module may be configured to perform at least some of object detection, image segmentation, and/or object tracking based on one or more image processing techniques.
  • the image processing module may also be configured to calculate a plurality of time-dependent features from the video stream data.
  • the features may represent various metrics derived from the video stream data, such as time domain, frequency domain, and/or other or similar features.
  • ADAS suite 100 comprises perception module 102.
  • Perception module 102 receives inputs from sensor unit 104 comprising an array of sensors including, for example: one or more ultrasonic sensors; one or more RADAR sensors; one or more Light Detection and Ranging (“LiDAR”) sensors; one or more surround cameras (typically located at various places on the vehicle body to image areas all around the vehicle body); one or more stereo cameras (e.g., to provide depth perception for object detection and object recognition in the vehicle path); one or more infrared cameras; a GPS unit that provides location coordinates; a steering sensor that detects the steering angle; speed sensors (one for each of the wheels); an inertial sensor or inertial measurement unit (“IMU”) that monitors movement of the vehicle body (this sensor can be, for example, an accelerometer(s) and/or a gyro-sensor(s) and/or a magnetic compass(es)); tire vibration sensors; and/or microphones placed around and inside the vehicle.
  • sensor unit 104 may comprise, e.g., one or more of a global positioning system sensor; an infrared detector; a motion detector; a thermostat; a sound detector, a carbon monoxide sensor; a carbon dioxide sensor; an oxygen sensor; a mass air flow sensor; an engine coolant temperature sensor; a throttle position sensor; a crank shaft position sensor; an automobile engine sensor; a valve timer; an air-fuel ratio meter; a blind spot meter; a curb feeler; a defect detector; a Hall effect sensor, a manifold absolute pressure sensor; a parking sensor; a radar gun; a speedometer; a speed sensor; a tire-pressure monitoring sensor; a torque sensor; a transmission fluid temperature sensor; a turbine speed sensor (TSS); a variable reluctance sensor; a vehicle speed sensor (VSS); a water sensor; a wheel speed sensor; and any other type of automotive sensor.
  • other sensors may be used, as is known to persons of ordinary skill in the art.
  • one or more cameras or other imaging devices of sensor unit 104 may comprise any one or more devices that capture a stream of images and represent them as data.
  • Imaging devices may be optics-based, but may also include depth sensors, infrared imaging sensors, and the like.
  • the imaging device may be a Kinect or a similar motion sensing device, capable of, e.g., IR imaging.
  • the imaging device may be configured to detect RGB (red-green-blue) spectral bands.
  • the imaging device may be configured to detect at least one of monochrome, ultraviolet (UV), near infrared (NIR), short-wave infrared (SWIR), multiple spectral bands, and/or hyperspectral imaging techniques.
  • the imaging device comprises a digital imaging sensor selected from a group consisting of: complementary metal-oxide-semiconductor (CMOS), charge-coupled device (CCD), indium gallium arsenide (InGaAs), and polarization-sensitive sensor elements.
  • ADAS suite 100 further comprises decision/planning module 106 which uses the data from perception module 102 for forward planning of the vehicle path. Decision/planning module 106 decisions are then communicated to one or more of the vehicle actuation system 108, to provide operational commands to the vehicle, such as acceleration, deceleration, braking, and/or steering.
  • actuation system 108 sends command signals to operate vehicle brakes via one or more braking actuators, operate steering mechanism via a steering actuator, and operate propulsion unit which also receives an accelerator/throttle actuation signal.
  • actuation is performed by methods known to persons of ordinary skill in the art, with signals typically sent via a Controller Area Network data interface (“CAN bus”), a network inside modern cars used to control brakes, acceleration, steering, windshield wipers, etc.
  • actuation system 108 may be implemented with dedicated hardware and software, allowing control of throttle, brake, steering, and shifting.
  • the hardware provides a bridge between the vehicle's CAN bus and the controller, forwarding vehicle data to controller including the turn signal, wheel speed, acceleration, pitch, roll, yaw, Global Positioning System (“GPS”) data, tire pressure, fuel level, sonar, brake torque, and others.
  • Similar actuation controllers may be configured for any other make and type of vehicle, including special-purpose patrol and security cars, long-haul trucks including tractor-trailer configurations, tiller trucks, agricultural vehicles, industrial vehicles, and buses, including but not limited to articulated buses.
  • system 110, which may be installed onboard the vehicle and/or externally to the vehicle (e.g., in the cloud), comprises a testing and verification subsystem configured to initiate testing and verification of ADAS suite 100, and a continuous failure and errors identification sub-system 128, which may be installed onboard the vehicle and/or externally to the vehicle (e.g., in the cloud), and is configured to constantly check ADAS suite 100 for failures and errors continuously during usage of the vehicle.
  • system 110 may be configured to accurately detect a vehicle's pose, orientation, bearing, and/or location, as well as other parameters such as wheel angles and thrust line, for purposes of ADAS testing, verification, and failure indication. In some embodiments, these parameters are used as a reference to the vehicle’s maneuvering sensors and various safety system sensors. Accordingly, in some embodiments, system 110 comprises a vehicle pose calculation module 114, which utilizes one or more sensors such as, but not limited to, camera-based, radar-based, and/or LiDAR-based sensor sets. In some embodiments, system 110 and/or pose calculation module 114 may obtain external sensor inputs using, e.g., V2X communication. Accordingly, in some embodiments, the present disclosure provides for determining vehicle pose and related parameters using a dedicated sub-system for this purpose.
  • pose calculation module 114 components and/or sensors may be mounted onboard the vehicle and/or externally to the vehicle. Using multiple sensors requires transforming each sensor's coordinate system to a world coordinate system. Data from the various sensors of pose calculation module 114 are transferred to, e.g., a Coordinate System Transformation Algorithm, where the data from each sensor, in a camera coordinate system, may be transformed to a reference coordinate system, to obtain accurate vehicle pose and orientation relative to the reference coordinate system.
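  • A minimal sketch of this sensor-to-reference coordinate transformation (2-D case, numpy assumed; real implementations would use full 6-DoF transforms) is shown below:

```python
# Sketch: map points reported in a sensor (camera) frame into the reference/world frame
# with a rigid 2-D homogeneous transform (rotation + translation).
import numpy as np


def make_transform(yaw_rad: float, tx_m: float, ty_m: float) -> np.ndarray:
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    return np.array([[c, -s, tx_m],
                     [s,  c, ty_m],
                     [0.0, 0.0, 1.0]])


def to_reference(points_sensor: np.ndarray, T: np.ndarray) -> np.ndarray:
    """points_sensor: N x 2 array in sensor coordinates -> N x 2 array in reference coordinates."""
    homogeneous = np.hstack([points_sensor, np.ones((points_sensor.shape[0], 1))])
    return (T @ homogeneous.T).T[:, :2]


# Example: a camera mounted 1.2 m ahead of the reference origin and yawed by 5 degrees.
T = make_transform(np.deg2rad(5.0), 1.2, 0.0)
points_ref = to_reference(np.array([[10.0, 0.5]]), T)
```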
  • the vehicle pose calculation algorithm can fuse the data from all sensors of pose calculation module 114 and then calculate the vehicle pose, or can calculate the pose based on each individual camera and fuse the multiple vehicle poses.
  • pose calculation module 114 may implement various techniques, including but not limited to, computer vision algorithms and/or signal processing algorithms, which detect key points and features in the vehicle and its surroundings.
  • the algorithms may use a database of predefined parameters to improve accuracy, precision, and processing time.
  • a database may include data regarding physical dimensions of vehicles and tires, and sensor and feature locations and orientations.
  • the algorithms may use user input regarding the vehicle under measurement such as make, model, model year, etc.
  • Fig. 3 schematically illustrates a vehicle 200 positioned within a reference coordinate system.
  • pose calculation module 114 may be configured to measure vehicle steering angle a of vehicle 200.
  • pose calculation module 114 may be configured to calculate a thrust line T of vehicle 200 in Fig. 3.
  • Thrust line T is a nominal line that is perpendicular to the rear axle of the vehicle.
  • Thrust angle γ is the angle between the thrust line and the centerline C of the vehicle.
  • the algorithms for calculating vehicle coordinate system or steering angle can be used to calculate the vehicle thrust line and thrust angle.
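  • As a worked example using standard wheel-alignment geometry (illustrative only, not a step taken from the disclosure): with individual rear toe angles measured against the vehicle centerline (toe-in positive), the thrust angle is commonly approximated as half their difference:

```python
# Illustrative thrust-angle calculation from individual rear toe angles (degrees).
def thrust_angle_deg(rear_left_toe_deg: float, rear_right_toe_deg: float) -> float:
    return (rear_left_toe_deg - rear_right_toe_deg) / 2.0


# Example: rear-left toe of +0.20 deg and rear-right toe of -0.10 deg give a thrust
# angle of +0.15 deg, i.e., the thrust line T points slightly off the centerline C.
gamma_deg = thrust_angle_deg(0.20, -0.10)
```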
  • the output of pose calculation module 114 can be used to calibrate and validate various aspects of the vehicle ADAS suite.
  • pose calculation module 114 may also employ data from vehicle sensors.
  • the system and its algorithms may include further output, such as but not limited to, wheel toe and/or camber angles, steering box position, parallelism and axle offset, wheel caster, KPI, toe out on turns, and maximum turn angle.
  • system 110 may further comprise a testing infrastructure comprising acceleration and/or force and/or strain and/or temperature and/or pressure transducers, to create an accurate perception of the tested vehicle and to determine its location, pose, orientation, attitude, and/or bearing in relation to a reference coordinate system.
  • System 110 further comprises a target generator 118 configured to use vehicle pose calculation module 114 data and desired user scenarios, fed, e.g., via HMI (Human Machine Interface) module 120 and/or stored in database 122, for generating one or more virtual targets with one or more unique properties such as, but not limited to, timing, location, distance, orientation, dimensions, velocity, radar cross section, optical properties, electromagnetic properties and chemical properties, for the tested vehicle ADAS suite 100.
  • the virtual targets may be transmitted to tested vehicle ADAS sensor unit 104 as one or more signals configured to activate one or more sensors such as, but not limited to, camera-based, radar-based, and/or LiDAR-based sensors.
  • the virtual targets may also be injected directly to tested vehicle ADAS perception module 102 and/or decision/planning module 106, bypassing ADAS sensor unit 104, via a communication bus such as, but not limited to, CAN (Controller Area Network).
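  • By way of illustration only, injecting such a virtual target over CAN could be sketched as below, assuming the python-can package and a made-up message ID and payload layout (real ADAS perception buses use proprietary, vehicle-specific formats):

```python
# Hypothetical sketch of injecting a 'virtual radar target' frame onto a CAN bus,
# bypassing the sensor front-end; IDs, scaling and layout are illustrative only.
import struct

import can  # python-can (pip install python-can)


def encode_virtual_target(range_m: float, azimuth_deg: float, speed_mps: float) -> bytes:
    """Pack a made-up 8-byte payload: range [cm], azimuth [0.01 deg], relative speed [cm/s], flags."""
    return struct.pack(">HhhBB",
                       int(range_m * 100),
                       int(azimuth_deg * 100),
                       int(speed_mps * 100),
                       0x01,    # 'valid target' flag (illustrative)
                       0x00)    # reserved


if __name__ == "__main__":
    # python-can's 'virtual' interface is an in-process loopback, convenient for bench
    # testing without hardware; on a test rig this might instead be a SocketCAN channel.
    bus = can.interface.Bus(channel="adas_test", bustype="virtual")
    payload = encode_virtual_target(range_m=35.0, azimuth_deg=-2.5, speed_mps=-4.0)
    message = can.Message(arbitration_id=0x300, data=payload, is_extended_id=False)
    bus.send(message)
```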
  • Target generator 118 may be configured to receive transmissions from one or more active sensors of the tested vehicle, analyze signal characteristics such as frequency, phase, power, bandwidth, and modulation, and shape the transmitted virtual target signal accordingly.
  • System 110 further comprises an analysis module 126 which is configured to compare the desired tested vehicle behavior in response to one or more virtual targets generated by target generator 118, to the actual tested vehicle behavior which may be concluded from the image of the tested vehicle perceived via perception module 102 and/or decision/planning module 106.
  • the actual tested vehicle behavior is analyzed by analysis module 136 of the continuous failures and errors identification subsystem (described in detail hereinafter), which in turn reports to analysis module 126.
  • the results of the comparison may be reported to the user via HMI module 120 and/or stored in database 122.
  • continuous failures and errors identification sub system 128 comprises ROI (Region of Interest) & LOS (Line of Sight) calculation module 130, which is configured to calculate the region in which an object is considered as a potential target for an ADAS suite, according to factors such as, but not limited to, weather and visibility conditions, vehicle dimensions, vehicle braking distance, vehicle mass, vehicle speed, vehicle sensors technology and location, road conditions, road friction coefficient, road speed limit, and safety factors.
  • ROI & LOS calculation module 130 is also configured to calculate the areas to which the vehicle has LOS and the areas to which the vehicle does not have LOS, defined hereinafter as NLOS (Non Line of Sight), according to factors such as, but not limited to, topographic data, cover and relief maps, and vehicle sensors technology and location.
  • Subsystem 128 further comprises an object tracking module 132, which is configured to utilize one or more sensors of sensor unit 134, such as, but not limited to, camera-based, radar-based, and/or LiDAR-based sensors, to recognize and track objects in the vehicle's vicinity.
  • object tracking 132 is also configured to prioritize objects as targets according to factors such as, but not limited to, object source, object location, object speed and relative speed to vehicle, object maneuvers and scenario risk level.
  • Subsystem 128 further comprises an analysis module 136, which is configured to compare the prioritized targets received from object tracking 132 to the output of tested vehicle ADAS perception module 102, according to factors such as, but not limited to, target identification number, target type, target location, target speed, target trajectory, target maneuvers, and acquiring sensor.
  • Analysis module 136 is further configured to calculate a comparison score according to discovered anomalies such as, but not limited to, target location, target type, target trajectory, target maneuvers, timestamp, and acquiring sensor.
  • Analysis module 136 is also configured to report found anomalies and/or dangerous targets to tested vehicle ADAS perception module 102 and/or decision/planning module 106, which may lead to alteration of tested vehicle behavior, such as, but not limited to, speed reduction and fail safe operation.
  • a system for ADAS testing comprises an inspection target, ADAS system configured to acquire an inspection target, and an inspection parameter determiner configured to determine an inspection parameter based at least in part on ADAS sensor data, wherein the inspection parameter enables determination of the ADAS system performance.
  • An inspection target can be, but is not limited to, passive, active, static, dynamic, virtual, real (physical), fixed, or having variable characteristics.
  • the inspection target is generated by, e.g., target generator 118.
  • Maps and HD maps used by ADAS may include data generated by the virtual target generator. GPS, GNSS, and IMU signals may be modified to support the virtual target's validity, e.g., by generating spoofed signals.
  • the present disclosure may provide for an exemplary system for ADAS testing comprising, e.g., at least a link to a network, such as but not limited to Local Interconnect Network (LIN), Controller Area Network (CAN), FlexRay, Media Oriented System Transport (MOST), Ethernet, Onboard-Diagnostics (OBD), V2X (V2V, V2I, V2P, V2C), Mobile Network (2G, 3G, 4G, 5G, LTE), Dedicated Short-Range Communications (DSRC), WiFi, and Zigbee.
  • the present disclosure may be configured to provide for generating virtual scenes with targets, for the purpose of inspecting, calibrating, verifying, and validating ADAS systems.
  • a system, such as virtual target generator 118 of system 110 in Fig. 2, provides for generating virtual targets for at least one sensing modality in a vehicle ADAS suite.
  • virtual target generator 118 may provide for generating one or more signals which may be received by ADAS sensing modalities and identified as representing one or more actual, physical targets, for inspection, calibration, verification, and validation purposes.
  • virtual target generation comprises generating synthetic sensor data, such as synthetic camera, radar, lidar, and/or sonar data from three dimensional (3D) scene data that may be custom designed for a desired scene.
  • target generator 118 comprises a target generator for at least one sensing modality, including but not limited to, camera-based, radar-based, and/or LiDAR-based sensors.
  • virtual target generator 118 may communicate with the vehicle under test using, e.g., V2X communication (Vehicle to Vehicle V2V, Vehicle to Infrastructure V2I, Vehicle to Network V2N, Vehicle to Pedestrian V2P, Vehicle to Device V2D, and Vehicle to Grid V2G).
  • virtual target generator 118 may generate a virtual target on mapping and/or HD mapping services.
  • target generator 118 may be configured to virtualize real-life objects, such as but not limited to bridges, signs, buildings, and the like.
  • these targets are detected by the ADAS systems, and comparing the detection properties with the expected values can be used for inspection, calibration, verification, and validation.
  • highly accurate sensors used for HD mapping may be employed for generating targets.
  • data from the sensors, which can be processed online or offline, is sent to a target generator algorithm.
  • Prominent objects which can be easily identified, detected, and measured are chosen by the algorithm.
  • Their accurate properties, such as but not limited to location, orientation, dimensions, velocity, radar cross section, optical properties, electromagnetic properties, and chemical properties, are extracted from the sensor data. Properties which cannot be extracted are estimated using simulations and modeling techniques.
  • Each target can be used for one or more sensing modalities. Once a viable target and its properties are identified, the target is stored on an HD map service, which can be accessible by vehicles equipped with ADAS.
  • generating a virtual target for more than one sensing modality and/or sensing technique requires synchronization and a communication bus between the virtual target generators.
  • synchronization takes into consideration timing and spatial properties of the virtual target generator.
  • system 110 may include a database used to store a library of predefined targets, sensor configurations and parameters, vehicle parameters, and/or setup parameters.
  • the virtual target generator can be portable, hence can be deployed in the field, allowing generation of virtual targets for static and dynamic vehicles. For dynamic cases, where the velocity, orientation, and relative distance between the virtual target generator and the vehicle under test are not constant during the process, these parameters shall be used by the target generator algorithm.
  • target data is stored on a database which can be accessible by the ADAS.
  • Target data can be transmitted to the vehicle using various communication protocols, and in one example can be V2X.
  • once an inspection target is generated, it is acquired by the ADAS sensors, and the sensor data is transferred to the perception algorithm 102, which is responsible for object recognition and object tracking. From the perception algorithm 102, data is transferred to the decision algorithm 106, which is responsible for path planning and obstacle avoidance, and then to the actuation system 108.
  • the following stages: sensing, perception, decision, and actuation are internal stages within an ADAS system under test. The output from each stage is transferred to an inspection algorithm, which compares the system performance to an inspection determiner.
  • System performance can be, but not limited to, sensor alignment, transmitter/receiver/antenna characteristics, target location, target dimensions, target orientation, target velocity, target perception (recognition and tracking), decision, and actuation command (acceleration, steering, braking).
  • the inspection algorithm can be implemented on an inspection system. If the ADAS meets the required performance, the system passes the inspection, otherwise the system under test is required to be replaced or calibrated.
  • the present disclosure may be configured to detect, e.g., an electromagnetic, optical, sonic, ultrasonic, and/or another form of signal from one or more sensors of an ADAS-equipped vehicle, and to manipulate and/or modify attributes of the signal to simulate one or more virtual objects in the path of the signal.
  • this method may be applicable to radar- and LiDAR-based sensor modalities.
  • an ultrasonic sensor may transmit sound waves that reflect off of nearby objects, wherein the reflected waves are received by applicable sensors, and the distance from the vehicle to the object is calculated.
  • the virtual target generator receives the ultrasonic sensor transmitted signal, and analyzes its characteristics (such as frequency, phase, power, bandwidth, modulation, etc.).
  • the signal characteristic, along with target unique properties (such as timing, location, distance, orientation, dimensions, velocity, radar cross section, optical properties, electromagnetic properties, chemical properties) and the target generator properties (location of virtual target generator in respect to the vehicle) are used as inputs for signal generator algorithm.
  • the generated signal is then transmitted using an appropriate transmitter, to be detected by the ultrasound sensor and perceived in a manner consistent with the generating parameters.
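  • A minimal sketch of the ultrasonic case is given below, assuming a synthetic ping and round-trip propagation at the nominal speed of sound; the sample rate, burst frequency, and attenuation are illustrative values only.

```python
# Minimal sketch (assumed parameters): synthesizing a delayed replica of a received
# ultrasonic ping so that the sensor perceives a virtual object at a chosen distance.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def synthesize_echo(ping: np.ndarray, fs: float, virtual_distance_m: float,
                    attenuation: float = 0.2) -> np.ndarray:
    """Return an attenuated copy of `ping` delayed by the round-trip time
    corresponding to `virtual_distance_m`."""
    round_trip_s = 2.0 * virtual_distance_m / SPEED_OF_SOUND
    delay_samples = int(round(round_trip_s * fs))
    echo = np.zeros(len(ping) + delay_samples)
    echo[delay_samples:] = attenuation * ping
    return echo

# Example: a 40 kHz burst sampled at 1 MHz, virtual object at 1.5 m.
fs = 1_000_000
t = np.arange(0, 0.5e-3, 1 / fs)
ping = np.sin(2 * np.pi * 40_000 * t)
echo = synthesize_echo(ping, fs, virtual_distance_m=1.5)
```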
  • Generating and transmitting the signal can be made using an active system (transmitter) or a passive system (by modifying the original signal using mechanical, electrical, chemical, or optical methods).
  • a radar target generator may use various techniques to modify the received signal frequency, phase, amplitude, etc., e.g., through causing a virtual Doppler shift.
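  • The following hedged sketch illustrates the virtual Doppler shift idea by mixing captured complex baseband samples with the frequency shift implied by a requested radial velocity; the carrier frequency and sample rate are assumed values, not specified by the disclosure.

```python
# Illustrative sketch: imposing a virtual Doppler shift on captured radar I/Q data
# so the sensor under test perceives a target closing at a chosen radial velocity.
import numpy as np

C = 3.0e8  # speed of light, m/s

def apply_virtual_doppler(iq_samples: np.ndarray, fs: float,
                          carrier_hz: float, radial_velocity_mps: float) -> np.ndarray:
    """Mix complex baseband samples with the Doppler frequency implied by the
    requested radial velocity (f_d = 2 * v * f_c / c)."""
    f_doppler = 2.0 * radial_velocity_mps * carrier_hz / C
    n = np.arange(len(iq_samples))
    return iq_samples * np.exp(2j * np.pi * f_doppler * n / fs)

# Example: 77 GHz automotive radar, 10 MHz baseband sampling, target closing at 20 m/s.
fs = 10e6
samples = np.ones(4096, dtype=complex)  # stand-in for captured I/Q data
shifted = apply_virtual_doppler(samples, fs, carrier_hz=77e9, radial_velocity_mps=20.0)
```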
  • a LiDAR target generator may use a technique similar to that used for the ultrasonic sensor.
  • the present disclosure may be configured to determine a set of targets for a vehicle under test, by detecting a plurality of objects within a region of interest (ROI) surrounding the vehicle, and screening the objects to determine, identify, and prioritize targets for testing.
  • the present disclosure may define a Region of Interest (ROI) around the vehicle-under-test (VUT), whose parameters may then be used to determine whether a detected object within the ROI is a potential target.
  • ROI may have a defined shape, e.g., circle, rectangle, oval, or an abstract shape around the vehicle.
  • the present system may acquire data from a network in order to detect and identify objects, such as but not limited to vehicles, trucks, road infrastructure, signs, road hazards, pedestrians, and bicycles, surrounding the vehicle.
  • Objects may be identified based on Basic Safety Message (BSM) and/or Cooperative Awareness Messages (CAM) protocols.
  • the system may use the vehicle’s sensors data in order to detect and identify objects based on signal and image processing algorithms.
  • the present disclosure may take into consideration at least some of the following parameters: BSM data, CAM data, VUT braking distance, weather condition, VUT mass, visibility conditions, VUT dimensions, road conditions, road friction coefficient, speed limit, urban/highway road, VUT dynamics, VUT velocity, VUT sensor modalities, VUT sensor location, VUT perception and sensor data, nearby vehicles, nearby pedestrians, vulnerable road users, and driving history.
  • ROI determination may be based, at least in part, on a line-of-sight (LOS) and/or non-line-of-sight (NLOS) analysis with respect to a VUT, to determine which of the objects may constitute targets which should be perceived by the VUT ADAS suite.
  • objects located within the NLOS cannot be physically sensed by the vehicle sensors, and thus may not be defined as potential targets.
  • the LOS/NLOS analysis may be accomplished using mapping data.
  • a circular ROI with respect to a vehicle may be calculated as:
  • ROI(V, SM) = F * V^a * SM, where V is the vehicle's speed, a is a factor, SM is a safety factor, and F is a correction constant.
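  • A possible reading of the circular-ROI formula above is sketched below; the exponent, correction constant, and safety-margin values are placeholders and are not taken from the disclosure.

```python
# Hedged sketch of the circular-ROI radius formula above; the exponent `a`,
# correction constant `F`, and safety margin values are assumed placeholders.
def roi_radius(speed_mps: float, safety_margin: float = 1.5,
               a: float = 2.0, correction: float = 0.05) -> float:
    """Radius (m) of a circular region of interest around the vehicle,
    growing with speed (roughly like braking distance when a == 2)."""
    return correction * (speed_mps ** a) * safety_margin

# Example: at 30 m/s (~108 km/h) the ROI radius is 0.05 * 900 * 1.5 = 67.5 m.
print(roi_radius(30.0))
```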
  • Fig. 4 illustrates an exemplary ROI analysis method as a function of vehicle speed.
  • the present disclosure may define an object located in the ROI as a target.
  • the system may define areas around the vehicle that are in Line of Sight (LOS) and Not in Line of Sight (NLOS) of the vehicle's sensors.
  • the analysis may use topographic data, cover and relief maps, sensor locations on the vehicle, and the vehicle's sensor performance, such as but not limited to Field of View (FOV) and detection range.
  • ROI determination may be achieved using a machine learning algorithm.
  • the present disclosure may incorporate a machine learning model, such as a deep neural network, Bayesian network, or other type of machine learning model.
  • the present disclosure may comprise one or more statistical models which may define an approach for updating a probability for the computed parameters of an ROI outside of a vehicle. In some embodiments, these statistical models may be evaluated and modified by processing simulated sensor outputs that simulate perception of a scenario by the sensors of the vehicle.
  • the present disclosure may execute a training engine configured for training the machine learning model based, at least in part, on a training set comprising a plurality of scenarios, each comprising input data such as, but not limited to, BSM data, CAM data, VUT braking distance, weather condition, VUT mass, visibility conditions, VUT dimensions, road conditions, road friction coefficient, speed limit, urban/highway road, VUT dynamics, VUT velocity, VUT sensor modalities, VUT sensor location, VUT perception and sensor data, nearby vehicles, nearby pedestrians, vulnerable road users, and driving history.
  • input data may be labelled with manual delineation of an ROI consistent with the input data.
  • a trained machine learning model of the present disclosure may be applied to a target roadway scene, to determine the ROI associated with the roadway scene.
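  • For illustration only, the following sketch (assuming scikit-learn) trains a regressor that maps a few of the listed scene features to a manually delineated ROI radius; the feature set and label values are invented for the example and do not come from the disclosure.

```python
# Hedged sketch: mapping scene features (speed, mass, friction, visibility, speed limit)
# to a manually delineated ROI radius using a random forest regressor. Toy data only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Each row: [VUT speed (m/s), VUT mass (kg), road friction, visibility (m), speed limit (m/s)]
X_train = np.array([
    [10.0, 1500.0, 0.9, 500.0, 13.9],
    [30.0, 1500.0, 0.7, 200.0, 27.8],
    [25.0, 2500.0, 0.4, 100.0, 22.2],
])
y_train = np.array([25.0, 95.0, 120.0])  # manually delineated ROI radii (m)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
roi_estimate = model.predict([[20.0, 1800.0, 0.8, 300.0, 22.2]])
```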
  • the training set may be based on, e.g., manually generated scenarios and/or manual inputs specifying relevant data.
  • scenarios may be modeled based on images, a video sequence, or other measurements of an actual location, e.g., observations of a location, movements of vehicles in the location, the location of other objects, etc.
  • a scenario generation module may read a file specifying locations and/or orientations for various elements of a scenario, and create a model of the scenario having models of the elements positioned as instructed in the file. In this manner, manually or automatically generated files may be used to define a wide range of scenarios.
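  • A minimal sketch of such a scenario generation module is shown below; the JSON file format and field names are assumptions made for the example.

```python
# Minimal sketch (file format assumed): a scenario generation helper that reads a JSON
# file listing element types, positions, and orientations, and builds a simple
# in-memory scene model. Field names are illustrative, not defined by the disclosure.
import json
from dataclasses import dataclass

@dataclass
class SceneElement:
    kind: str           # e.g. "pedestrian", "vehicle", "sign"
    position: tuple     # (x, y, z) in metres
    heading_deg: float

def load_scenario(path: str) -> list[SceneElement]:
    with open(path) as fh:
        spec = json.load(fh)
    return [SceneElement(kind=e["type"],
                         position=tuple(e["position"]),
                         heading_deg=e.get("heading_deg", 0.0))
            for e in spec["elements"]]
```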
  • ROI determination may be updated dynamically based, at least in part, on a predicted location of the VUT upon performing one or more planned actions (e.g., lane change, speed increase, braking, turning, etc.).
  • the present disclosure may use data from the VUT decision subsystem for analyzing the safety of an action it is planning to perform. In such a case, the present disclosure may analyze a predicted "virtual" location of the VUT following the execution of a planned action, and, based on this analysis, may update the ROI to include one or more relevant targets.
  • Fig. 5 shows an example of extended ROI derived from an action of lane changing, where Target 1 was added.
  • an ROI for a VUT may be dynamically updated and/or extended based, at least in part, on data shared from other vehicles in the vicinity.
  • the nearby vehicle may share perceived objects to extend and/or improve an ROI of a VUT.
  • the nearby vehicle may comprise a different perception system developed by different manufacturers, or it may perceive potential targets from a different and/or improved vantage point.
  • perception sharing may include sharing raw and/or processed sensor data.
  • objects detected by a nearby vehicle (NV) perception subsystem may be screened and prioritized.
  • Fig. 6 shows an example of NV sharing an Object x (e.g., a pedestrian) with a VUT.
  • the present disclosure may be configured for detecting a plurality of objects surrounding a VUT, and performing object screening based on specified criteria, to determine a subset of relevant targets from the group of objects. In some embodiments, the present disclosure may then perform target prioritization from among the subset of screened targets, based on specified prioritization criteria.
  • the present disclosure may make an initial object screening within a defined ROI around the vehicle. The present disclosure may then detect objects located within the ROI as potential targets. In some embodiments, the present disclosure may further track potential targets based on available data or by predicting location and telemetry.
  • a target database may include target location, telemetry, dimension, type, and material.
  • object screening may be based, at least in part, on the following parameters:
  • object screening and target prioritization processes may be used.
  • the object screening process may use the following parameters:
  • Fig. 7 shows a VUT camera sensor LOS and NLOS analysis, where object 1 is within the VUT LOS, and therefore constitutes a potential target, while object 2 is within the NLOS, and therefore will not be deemed a potential target and will be screened out.
  • target prioritization is used to ensure that the most significant targets are identified by the VUT's perception system.
  • the prioritization process may use the following parameters:
  • Data source of the target, e.g., VUT sensor, DTD, nearby vehicle.
  • Fig. 8 shows an example of a high danger scenario, where the VUT and Target are approaching a T-junction; hence the Target should be highly prioritized.
  • Fig. 9 shows an example of a potentially high deadliness level of an impact between the VUT and a Target (e.g., a pedestrian).
  • Fig. 10 shows an example where the Target is located within the braking distance of the vehicle, and therefore should be highly prioritized.
  • the prioritization process may use a prioritization function, P(Ti), which allows analytical and predictable analysis of the prioritization process for validation and verification.
  • P(Ti) can be calculated as: P(Ti) = Pa(Ti) * Pb(Ti) * Pc(Ti) * Pd(Ti) * Pe(Ti), where Pa, Pb, Pc, Pd, and Pe are the scenario danger level, deadliness likelihood, location accuracy, source ranking, and relative location prioritization functions, respectively.
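  • The sketch below implements a multiplicative prioritization in the spirit of the formula above; the individual factor values are placeholders normalized to (0, 1] and are not specified by the disclosure.

```python
# Sketch of the multiplicative prioritization function described above; the individual
# factor values are placeholders returning values in (0, 1].
def prioritize(target: dict) -> float:
    pa = target.get("danger_level", 0.5)        # scenario danger level
    pb = target.get("deadliness", 0.5)          # predicted deadliness of an impact
    pc = target.get("location_accuracy", 0.5)   # confidence in reported position
    pd = target.get("source_rank", 0.5)         # trust in the data source (VUT sensor, DTD, NV)
    pe = target.get("relative_location", 0.5)   # e.g. inside braking distance -> close to 1
    return pa * pb * pc * pd * pe

# Targets are then processed in descending priority order.
targets = [{"id": 1, "danger_level": 0.9, "deadliness": 0.8},
           {"id": 2, "danger_level": 0.2}]
ranked = sorted(targets, key=prioritize, reverse=True)
```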
  • target prioritization as well as related parameters such as scenario danger level, predicted deadliness level, location accuracy, and the source ranking functions may be performed using a trained machine learning algorithm.
  • one or more machine learning classifiers may be trained to predict one or more of the relevant scene prioritization parameters, based on a relevant training set comprising a plurality of scenarios, each including one or more targets.
  • the training sets may be based, at least in part on relevant input data such as: BSM data, CAM data, VUT braking distance, weather condition, VUT mass, visibility conditions, VUT dimensions, road conditions, road friction coefficient, speed limit, urban/highway road, VUT dynamics, VUT velocity, VUT sensor modalities, VUT sensor location, VUT perception and sensor data, nearby vehicles, nearby pedestrians, vulnerable road users, and driving history.
  • each scenario in a training set may be labelled, e.g., based on a priority level assigned to each target within the scenario.
  • one or more trained machine learning algorithms may be applied to a target roadway scene, to determine a prioritization of targets within the target scene.
  • the present disclosure may analyze and indicate anomalies between VUT perception of objects and/or targets within its ROI and a 'ground truth' reference set of targets determined for the VUT.
  • these anomalies may provide for an indication of, e.g., hardware failure, software failure, bug, and/or algorithm error, in the context of testing, verification, validation, calibration, and failure detection in ADAS.
  • the analysis may be based on e.g., a pull architecture, where VUT perception data is provided for analysis, and/or a push architecture, where the VUT responds to queries from the system regarding perceived targets.
  • the objects-targets comparison and analysis may be performed using at least one of system CPU, system GPU, system ECU, VUT CPU, VUT GPU, and/or VUT ECU.
  • analysis comparison may be performed for at least some of the following parameters:
  • Vehicle identification number (if identified by the vehicle’s perception subsystem).
  • Target / object type at least: vehicle, truck, infrastructure, sign, obstacle, pedestrian.
  • Target / object global or relative coordinates - latitude, longitude, height.
  • An example of a method for anomaly detection is shown in Fig. 11.
  • the system may perform a high-level comparison, e.g., comparing target/object location, and if no anomaly was detected then performing a low-level comparison, e.g., comparing timestamp, velocity, and trajectory.
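  • The following hedged sketch illustrates such a two-stage comparison, with a coarse location check followed by timestamp, velocity, and trajectory checks; all tolerances are assumed values.

```python
# Illustrative sketch of the two-stage anomaly check: a coarse location comparison first,
# followed by finer checks on timestamp, velocity, and trajectory. Thresholds are assumed.
import math

def compare_target_object(target: dict, obj: dict,
                          loc_tol_m: float = 2.0, t_tol_s: float = 0.1,
                          v_tol_mps: float = 1.0) -> list[str]:
    anomalies = []
    # High-level comparison: position.
    dx = target["x"] - obj["x"]
    dy = target["y"] - obj["y"]
    if math.hypot(dx, dy) > loc_tol_m:
        anomalies.append("location")
        return anomalies  # low-level checks are skipped when location already disagrees
    # Low-level comparison: timestamp, velocity, trajectory (heading).
    if abs(target["t"] - obj["t"]) > t_tol_s:
        anomalies.append("timestamp")
    if abs(target["v"] - obj["v"]) > v_tol_mps:
        anomalies.append("velocity")
    if target.get("heading_deg") is not None and obj.get("heading_deg") is not None:
        if abs(target["heading_deg"] - obj["heading_deg"]) > 10.0:
            anomalies.append("trajectory")
    return anomalies
```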
  • a comparison score and a pass/fail criterion may be computed using the following parameters:
  • the comparison process may use a scoring function, S(Ti, Oj), which allows analytical and predictable analysis of the comparison process for validation and verification.
  • the scoring function S(Ti, Oj) can be calculated
  • Sa, Sb, Sc, Sd, and Se are the location, type, timestamp, trajectory, and acquired sensor scoring functions, respectively.
  • Oj is the j-th object
  • the location anomaly function, Sa(Ti, Oj), can be calculated
  • one or more machine learning algorithms may be used to determine, e.g., the location, type, timestamp, trajectory, and/or acquired sensor scoring functions. For example, at a training stage, one or more machine learning classifiers may be trained to predict any of these functions based, at least in part, on a training set comprising a plurality of roadway scenarios, each comprising input data such as: target/object type, time stamp, target/object global or relative coordinates (e.g., latitude, longitude, height), location accuracy, velocity, velocity accuracy, trajectory, acceleration, acceleration accuracy, vehicle size, road conditions, and/or acquired sensor.
  • Training set scenarios may be manually labelled according to these functions.
  • the trained machine learning classifiers may be applied to a target roadway scene, to determine anomalies between perception objects and targets associated with said roadway scene.
  • the present system may report the found anomalies, which may be indicative of hardware failure, software bug, and/or algorithm error.
  • the report may be used by the VUT perception, decision, actuation, and/or any other of the VUT functions and/or ADAS subsystems.
  • the report may include an analysis which allows root cause analysis of the anomaly, as well as deeper insights on hardware failures, software bugs, and/or algorithm errors.
  • the report may lead to a fail safe operation of the vehicle.
  • the report may be used by a database of hardware failures, software bugs, and algorithm errors (VFDB).
  • the system may perform a validity check on the data received from the vehicle's perception subsystem, for example, a CRC check.
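  • A minimal sketch of such a validity check is given below, assuming a frame whose last four bytes carry a CRC-32 of the preceding payload; the framing convention is hypothetical.

```python
# Minimal sketch (framing assumed): validating a perception payload whose last four
# bytes carry a CRC-32 of the preceding data.
import zlib

def perception_frame_valid(frame: bytes) -> bool:
    payload, received_crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    return zlib.crc32(payload) == received_crc

# Example with a well-formed frame.
data = b"object:pedestrian;x=12.3;y=-4.1"
frame = data + zlib.crc32(data).to_bytes(4, "big")
assert perception_frame_valid(frame)
```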
  • identifying abnormal or dangerous behavior of a target on a roadway may be used in order to better prioritize the targets.
  • the vehicle’s decision subsystem may use a dangerous targets dataset to choose a cautious strategy (e.g., reducing speed and extending safety distance).
  • this system may be used to identify hardware failure, software bugs, and algorithm errors in nearby vehicles.
  • the output of the system may be stored on a Dangerous Targets Database (DTD), and the analysis may receive data from the DTD.
  • the present disclosure may track identified targets surrounding the VUT, and indicate for potential and/or predicted danger level associated with each target, based on, e.g., dangerous driving and/or other behaviors detected with respect to the target.
  • a dangerous targets database may include targets identified and/or classified as potentially dangerous, based, e.g., on driving and/or other behavioral parameters.
  • the DTD may be used by the present disclosure and/or by other vehicles.
  • danger assessment analysis may use one or more of the following parameters:
  • Fig. 12 shows an example of a target vehicle Target not obeying the traffic rules by driving on the wrong side of the road.
  • danger level assessment may be performed using one or more machine learning algorithms.
  • a machine learning classifier may be trained on a training set comprising, e.g., a plurality of roadway scenarios including one or more targets, wherein each scenario provides for input data such as target’s route, target speed, target’s physical conditions, target’s lane, etc.
  • each such training scenario may be manually labelled according to a perceived danger level associated with, e.g., one or more targets within the scenario and/or the scenario as a whole.
  • a trained machine learning algorithm of the present disclosure may be applied to a target roadway scene, to determine danger level associated with one or more targets within the scenario.
  • network communication may experience disconnections due to, e.g., Quality of Service (QOS) issues.
  • loss of communication ability may be considered as a critical failure leading to a safety hazard.
  • the present system and method for object screening and target prioritization may include a system and method for object tracking estimation.
  • Vt1 = Vt0 + at0 * (t1 - t0)
  • Lt1 = Lt0 + Vt0 * (t1 - t0) + at0 * (t1 - t0)^2 / 2
  • the tracking estimator may be used when an object loses communication without an appropriate notification (e.g., parking and turning off the engine); once the object regains communication, the tracking estimator stops.
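  • The constant-acceleration estimator above can be sketched as follows; the example numbers are illustrative only.

```python
# Sketch of the constant-acceleration tracking estimator above, used to propagate the
# last known state of an object while its network connection is lost.
def estimate_state(l0: float, v0: float, a0: float, t0: float, t1: float) -> tuple[float, float]:
    """Return (estimated position, estimated velocity) at time t1, given the
    position l0, velocity v0, and acceleration a0 last reported at time t0."""
    dt = t1 - t0
    v1 = v0 + a0 * dt
    l1 = l0 + v0 * dt + 0.5 * a0 * dt ** 2
    return l1, v1

# Example: last report at t0=0 s with l0=100 m, v0=15 m/s, a0=-1 m/s^2; estimate at t1=2 s.
print(estimate_state(100.0, 15.0, -1.0, 0.0, 2.0))  # (128.0, 13.0)
```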
  • an estimation function may use other and/or additional parameters, such as target driving history, target predicted route, traffic history, etc.
  • target location estimation may be performed using a trained machine learning algorithm.
  • one or more machine learning algorithms may be trained to predict target location.
  • such a machine learning algorithm may be trained on a training set comprising a plurality of scenarios, each comprising one or more targets, with input data such as target route, target speed, target physical condition, target current lane, driving history, traffic history, etc. In each case, training set scenarios may be labeled using the predicted target location at various prospective times.
  • the trained machine learning algorithm may be applied to a roadway scenario comprising one or more targets, to estimate object location, speed, and/or acceleration parameters with respect to the target.
  • vehicles may suffer from external damage such as, but not limited to, a flat tire, damaged glass and windshield, a damaged mirror, dents and dings, and chips and scratches. Such damage may be due to wear and tear or due to collisions and/or other roadway incidents.
  • the present system allows for detecting external damage to one or more nearby vehicles (NV) on the roadway, without the requirement to send the vehicle for assessment at a dedicated worksite.
  • the system may use a communication network to locate the NV; the system will then send a request to the NV to perform an external inspection of the NV using the vehicle's sensors.
  • a vehicle is located behind the NV, hence it performs an external inspection of the rear part of the NV, wherein the results of the inspection are sent to a database.
  • the analysis of the NV’s inspection may be processed on the vehicle’s subsystems, on the NV’s subsystems, or on a remote computing platform such as cloud service.
  • the analysis may use image processing algorithms in order to detect and assess the damage to the NV based on the vehicles’ sensors data.
  • the results of the analysis may be sent to the NV and/or sent to a remote server.
  • the vehicle may perform a 360° inspection of the NV.
  • the inspection may be performed by several vehicles to achieve 360° coverage.
  • vehicle damage estimation may be performed using a trained machine learning algorithm.
  • one or more machine learning algorithms may be trained to detect and/or assess external damage to a vehicle.
  • such machine learning algorithm may be trained on a training set comprising a plurality of images of vehicles exhibiting external damage, such as a flat tire, damaged bodywork, damaged windshield and/or windows, etc.
  • the training set may be labelled consistent with the damage exhibited in each image.
  • a trained machine learning algorithm may be applied to images of vehicles, to analyze external damage.
  • aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Acoustics & Sound (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Traffic Control Systems (AREA)

Abstract

A method comprising: receiving, from a first sensor system, information regarding a first set of objects within a scene; receiving, from a second sensor system, information regarding a second set of objects within said scene, wherein said second set of objects is located within a region of interest (ROI) of said second sensor system; selecting, from said first set of objects, a subset of targets comprising those objects in said first set of objects that are located within said ROI; and comparing said subset of targets with said second set of objects to determine a functionality status of said second sensor system.

Description

ADAS SYSTEMS FUNCTIONALITY TESTING
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority from U.S. Patent Application No. 62/747,673, filed on October 19, 2018, entitled "METHODS AND SYSTEMS FOR CALIBRATING VEHICLE EQUIPMENT," and from U.S. Patent Application No. 62/796,109, filed on January 24, 2019, entitled "METHODS AND SYSTEMS FOR ENSURING SAFETY OF HIGH LEVEL OF AUTONOMY ADAS/AD SYSTEMS", the contents of all of which are incorporated by reference herein in their entirety.
BACKGROUND
[0002] This invention relates to the field of driver assistance systems.
[0003] In recent years, various types of advanced driver assistance systems (ADAS) have been developed and applied to vehicles.
[0004] ADAS require sensors such as radio frequency (RF) detection and ranging (RADAR) sensors, cameras, light detection and ranging (LiDAR) sensors, and ultrasonic sensors. These systems perform decision tasks, such as path planning and obstacle avoidance, as well as actuation tasks, such as acceleration, deceleration, braking, and steering. Therefore, inspection, calibration, validation, verification, and failure and error detection are essential to assure the safety and performance of ADAS systems.
[0005] The foregoing examples of the related art and limitations related therewith are intended to be illustrative and not exclusive. Other limitations of the related art will become apparent to those of skill in the art upon a reading of the specification and a study of the figures.
SUMMARY OF THE INVENTION
[0006] The following embodiments and aspects thereof are described and illustrated in conjunction with systems, tools and methods which are meant to be exemplary and illustrative, not limiting in scope.
[0007] There is provided, in an embodiment, a method comprising: receiving, from a first sensor system, information regarding a first set of objects within a scene; receiving, from a second sensor system, information regarding a second set of objects within said scene, wherein said second set of objects is located within a region of interest (ROI) of said second sensor system; selecting, from said first set of objects, a subset of targets comprising those objects in said first set of objects that are located within said ROI; and comparing said subset of targets with said second set of objects to determine a functionality status of said second sensor system.
[0008] There is also provided, in an embodiment, a system comprising at least one hardware processor; and a non-transitory computer-readable storage medium having stored thereon program instructions, the program instructions executable by the at least one hardware processor to: receive, from a first sensor system, information regarding a first set of objects within a scene, receive, from a second sensor system, information regarding a second set of objects within said scene, wherein said second set of objects is located within a region of interest (ROI) of said second sensor system, select, from said first set of objects, a subset of targets comprising those objects in said first set of objects that are located within said ROI, and compare said subset of targets with said second set of objects to determine a functionality status of said second sensor system.
[0009] There is further provided, in an embodiment, a computer program product comprising a non-transitory computer-readable storage medium having program instructions embodied therewith, the program instructions executable by at least one hardware processor to: receive, from a first sensor system, information regarding a first set of objects within a scene; receive, from a second sensor system, information regarding a second set of objects within said scene, wherein said second set of objects is located within a region of interest (ROI) of said second sensor system; select, from said first set of objects, a subset of targets comprising those objects in said first set of objects that are located within said ROI; and compare said subset of targets with said second set of objects to determine a functionality status of said second sensor system.
[0010] In some embodiments, at least one of said first and second sensor systems are located on board a vehicle under testing (VUT).
[0011] In some embodiments, said VUT is a moving VUT and said scene is a roadway scene.
[0012] In some embodiments, said comparing further comprises determining, with respect to said VUT, at least one of a position relative to each of said targets, a location relative to each of said targets, a trajectory relative to each of said targets, an orientation relative to each of said targets, VUT velocity, and VUT thrust line.
[0013] In some embodiments, said objects are selected from the group consisting of: vehicles, trucks, roadway infrastructure elements, roadway signs, bridges, buildings, roadway hazards, pedestrians, bicycles, vulnerable road users, and animals.
[0014] In some embodiments, said first and second sensor systems each comprise at least one of ultrasonic-based, camera-based, radar-based, and light detection and ranging (LiDAR)-based sensors.
[0015] In some embodiments, said comparing comprises: (i) assigning a scoring function to each target and to each object in said second set of objects, (ii) associating each of said targets with a corresponding object in said second set of objects, and (iii) comparing said scoring functions assigned to each of said targets and its associated object.
[0016] In some embodiments, said scoring function is based, at least in part, on a location, type, timestamp, and trajectory of each of said targets and said objects.
[0017] In some embodiments, said ROI is determined based, at least in part on one of: a field of sensing of said second sensor system, a velocity of said VUT, a trajectory of said VUT, a braking distance of said VUT, a mass of said VUT, dimensions of said VUT, weather conditions, visibility conditions, roadway conditions, roadway friction coefficient, roadway speed limit, and roadway type.
[0018] In some embodiments, said ROI is determined by applying a trained machine learning algorithm to said scene, wherein said machine learning algorithm is trained using a training set comprising: (i) a plurality of roadway scenes comprising a VUT; and (ii) labels associated with an ROI assigned to each VUT in each of said roadway scenes.
[0019] In some embodiments, said information regarding said first set of objects comprises at least one of: a priority level associated with at least some of said objects, and a danger level associated with at least some of said objects.
[0020] In some embodiments, said priority level associated with at least one of said objects is determined based, at least in part, on an assessment of a deadliness associated with a possible collision between said VUT and said object, a source of said information with respect to said object, and a location of said object. [0021] In some embodiments, said priority level associated with at least one of said objects is determined by applying a trained machine learning algorithm to said scene, wherein said machine learning algorithm is trained using a training set comprising: (i) a plurality of roadway scenes, each comprising one or more objects; and (ii) labels associated with a priority level assigned to each of said objects in each of said roadway scenes.
[0022] In some embodiments, said danger level associated with at least one of said objects is determined based, at least in part, on object motion trajectory, object perceived compliance with roadway rules, object speed variations, and object roadway lane adherence.
[0023] In some embodiments, said danger level associated with at least one of said objects is determined by applying a trained machine learning algorithm to said scene, wherein said machine learning algorithm is trained using a training set comprising: (i) a plurality of roadway scenes, each comprising one or more objects; and (ii) labels associated with a danger level assigned to each of said objects.
[0024] There is further provided, in an embodiment, a method comprising: generating, by a testing system, a virtual scene comprising a plurality of virtual objects; receiving an output from a sensor system representing a response of said sensor system to said virtual scene; and comparing said output with a simulated reference response generated for said virtual scene, to determine a functionality status of said sensor system.
[0025] There is further provided, in an embodiment, a system comprising: at least one hardware processor; and a non-transitory computer-readable storage medium having stored thereon program instructions, the program instructions executable by the at least one hardware processor to: generate, by a testing system, a virtual scene comprising a plurality of virtual objects, receive an output from a sensor system representing a response of said sensor system to said virtual scene, and compare said output with a simulated reference response generated for said virtual scene, to determine a functionality status of said sensor system.
[0026] There is further provided, in an embodiment, a computer program product comprising a non-transitory computer-readable storage medium having program instructions embodied therewith, the program instructions executable by at least one hardware processor to: generate, by a testing system, a virtual scene comprising a plurality of virtual objects; receive an output from a sensor system representing a response of said sensor system to said virtual scene; and compare said output with a simulated reference response generated for said virtual scene, to determine a functionality status of said sensor system.
[0027] In some embodiments, said sensor system is located on board a vehicle under testing (VUT) and said virtual scene is a roadway scene.
[0028] In some embodiments, said VUT is a moving VUT.
[0029] In some embodiments, said virtual objects are selected from the group consisting of: vehicles, trucks, roadway infrastructure elements, roadway signs, bridges, buildings, roadway hazards, pedestrians, bicycles, and animals.
[0030] In some embodiments, at least some of said virtual objects are generated based, at least in part, on mapping data obtained from a mapping service.
[0031] In some embodiments, said sensor systems comprises at least one of ultrasonic- based, camera-based, radar-based, and light detection and ranging (LiDAR)-based sensors.
[0032] In some embodiments, at least some of said virtual objects are generated by a simulated reflected signal.
[0033] In some embodiments, at least some of said virtual objects are generated by a transceiver configured to receive a signal transmitted from a sensor of said sensor system, and transmit, based at least in part on said received signal, an output signal representative of a virtual target, such that at least a portion of the output signal is received by said sensor.
[0034] In some embodiments, said comparing comprises comparing parameters of each of said virtual targets with parameters of each detected virtual target in said output of said sensor system.
[0035] In some embodiments, said parameters are selected from the group consisting of virtual target location, virtual target velocity, virtual target trajectory, virtual target type, and virtual target timing. [0036] In addition to the exemplary aspects and embodiments described above, further aspects and embodiments will become apparent by reference to the figures and by study of the following detailed description.
BRIEF DESCRIPTION OF THE FIGURES
[0037] The present invention will be understood and appreciated more comprehensively from the following detailed description taken in conjunction with the appended drawings in which:
[0038] Fig. 1 is a high level conceptual illustration of the interaction among the various modules of a vehicle comprising an ADAS suite and an ADAS testing system, according to an embodiment;
[0039] Fig. 2 is a high level schematic block diagram of an embodiment of a system 110 for ADAS testing, verification and failure indication, according to an embodiment;
[0040] Fig. 3 schematically illustrates a vehicle positioned within a reference coordinate system;
[0041] Fig. 4 illustrates an exemplary ROI analysis method as a function of the vehicle’s speed, according to an embodiment;
[0042] Fig. 5 shows an example of extended ROI derived from an action of lane changing, according to an embodiment.
[0043] Fig. 6 shows an example of NV sharing an object with a vehicle, according to an embodiment;
[0044] Fig. 7 shows an example of LOS/NLOS analysis in an urban scenario, according to an embodiment;
[0045] Fig. 8 shows an example of a high danger scenario, according to an embodiment;
[0046] Fig. 9 shows an example of a high predicted deadliness level of an impact between a vehicle and a pedestrian, according to an embodiment;
[0047] Fig. 10 shows an example where a target is located at the breaking distance of the vehicle, according to an embodiment;
[0048] Fig. 11 shows an example of a method for anomaly detection, according to an embodiment; [0049] Fig. 12 shows an example of a dangerous target not obeying the traffic rules and driving in the opposite lane, according to an embodiment;
[0050] Fig. 13 shows an example of handling network communication disconnections, according to an embodiment; and
[0051] Fig. 14 shows a system for external inspection of a neighboring vehicle, according to an embodiment.
DETAILED DESCRIPTION
[0052] Disclosed are a system, method, and computer program product for testing, verification, validation, calibration, and failure detection in advanced driver assistance systems (ADAS).
[0053] Most current vehicles have a suite of ADAS systems that provide a combination of enhanced safety, driver assistance, and autonomous driving features. An ADAS system may include one or more of the following elements: Adaptive Cruise Control (ACC) system, adaptive high beam system, automatic light control system, automatic parking assistance system, automotive night vision system, blind spot monitor, collision avoidance systems, crosswind stabilization system, driver drowsiness detection system, lane departure warning systems, pedestrian protection systems, and the like.
[0054] ADAS systems may take on parts of the driving tasks of the vehicle driver, including the detection of environmental information relevant to the vehicle driving. The ADAS system may also be a forward-orientated driver assistance system, for instance an augmented video/augmented reality system or a system for traffic light detection, an environment monitoring system (in particular for coupled vehicles, such as trailers), a night vision system (FIR, NIR, active gated sensing), and the like. Driver assistance systems for interior monitoring, in particular driver monitoring systems, may also be tested according to the invention.
[0055] Autonomous vehicles (AV) and/or ADAS-equipped vehicles systems may be classified based on their level of autonomy. For example, level 0 vehicles have ADAS systems which do not assume vehicle control, but may issue warnings to the driver of the vehicle. Level 1 comprises systems where the driver must be ready to take control at any time, such as, Parking Assistance with automated steering, or Lane Keeping Assistance (LKA). Level 2 ADAS systems may execute accelerating, braking, and steering, however, the driver is obliged to detect objects and events in the roadway environment and respond if the ADAS systems fail to respond properly. Level 3 systems may operate completely autonomously within known, limited environments, such as freeways. Level 4 systems can control the vehicle completely autonomously in all but a few environments, such as severe weather. Level 5 systems may operate a vehicle completely autonomously in all environments and destinations.
[0056] Vehicles comprising ADAS systems usually include a sensor set comprising one or more sensors which enable the functioning of the ADAS systems, such as, but not limited to, camera-based, radar-based, and/or LiDAR-based sensor sets. The data from the sensors may describe, e.g., the physical environment or roadway environment where the vehicle is located, static and dynamic objects within this physical environment, the position of the vehicle relative to the static and dynamic objects, the weather, other natural phenomena within the physical environment, and the operation of the suite of ADAS systems in response to the static and dynamic objects.
[0057] Static objects perceived by a sensor set of the ADAS suite may include one or more objects of the roadway environment that are either static or substantially static. For example, the static objects may include plants, trees, fire hydrants, traffic signs, other roadside structures, a sidewalk, and/or various equipment. Dynamic objects may include one or more objects of the roadway environment that are dynamic in terms of their motion or operation, such as other vehicles present in the roadway, pedestrians, animals, traffic lights, and/or environmental factors (e.g., wind, water, ice, variation of sun light).
[0058] ADAS and/or autonomous vehicle environment perception is a critical component. ADAS perception uses data from various sensors (e.g., camera, Radar, Lidar, etc.) to detect objects in the environment (e.g., other vehicles, pedestrians, signs, road hazards) which may be relevant to the operation of the ADAS systems, and by extension, the operation and safety of the vehicle and its occupants. Accurate environmental perception enables ADAS systems to correctly determine operational commands to the vehicle, such as acceleration, deceleration, braking, and/or steering.
[0059] A potential safety issue may therefore arise when one or more components of an ADAS suite experience a failure, such as a software bug, algorithm error, hardware failure, hardware wear, sensors misalignment, sensor performance variations, and the like. However, ADAS failures are difficult to predict, simulate, or detect. Typically, inspection, calibration, and validation of ADAS systems require relatively costly and complex testing equipment, and/or deployment of actual vehicles in roadway to replicate desired scenarios.
[0060] Accordingly, the present disclosure, in some embodiments, provides for systems, methods, and computer program products for testing the sensing, perception, decision, and/or actuation subsystems of an ADAS suite installed in a vehicle, to detect and determine potential failures.
[0061] A potential advantage of the present disclosure is, therefore, in that it provides for simultaneous testing of multiple ADAS systems under a plurality of scenarios using relatively simple processes.
[0062] In some embodiments, an ADAS suite testing may be carried out, for instance in the context of a routine and/or non-routine workshop inspection or main examination of the vehicle. For example, ADAS systems may undergo regulatory and/or certification testing by, e.g., governmental, quasi-govemmental, and/or other organizations. In some embodiments, the testing may be carried out, at least in part, in the field, e.g., while the vehicle is travelling on a roadway. In some embodiments, the present disclosure may be executed during a workshop inspection of a vehicle. In some embodiments, a continuous function testing can thereby be configured, such that malfunctions can be detected in real time rather than during a planned inspection.
[0063] In some embodiments, the present disclosure may be configured to perceive, detect, identify, and/or generate parts of all of an environmental test scene comprising, e.g., one or more static and/or dynamic targets which are configured to exercise one or more corresponding ADAS suite sensing modalities in an ADAS -equipped vehicle.
[0064] In some embodiments, the generated and/or perceived test scene may comprise one or more virtual static and/or dynamic targets generated by the present system. In some embodiments, virtual scene generation may comprise virtualizing a roadway environment comprising, e.g., a plurality of static and/or dynamic objects that realistically represent driving scenarios which may be encountered by a vehicle in the real world. Virtualized scenes so generated may be included in ADAS simulations configured to accurately measure and test the performance of different ADAS systems and system settings. In some embodiments, virtual targets may be represented as one or more signals produced and/or transmitted within the vehicle’s environment, or otherwise supplied directly to an ADAS subsystem module. In some embodiments, virtual target generation may comprise a plurality of individual targets configured to exercise and/or activate one or more of the ADAS suite sensing modalities, simultaneously and/or over a specified period of time within, e.g., a predefined testing cycle.
[0065] In some embodiments, at a preliminary step, the present disclosure is configured to determine a precise position, pose, orientation, velocity, trajectory, and/or location of the vehicle relative to the test scene and/or a known coordinate system. In some embodiments, the present disclosure further provides for determining vehicle steering angle and/or thrust angle.
[0066] In some embodiments, the present disclosure is further configured to process the response of one or more of the ADAS suite subsystems to the test scene, to determine whether the ADAS suite and/or relevant subsystems thereof correctly perceive the test scene and its targets.
[0067] In some embodiments, this determination may be based, at least in part, on a comparison between the ADAS suite sensor modalities and a reference 'ground truth' response associated with the scene, to evaluate the accuracy and functionality of at least some of the ADAS suite subsystems.
[0068] In some embodiments, the reference response ('ground truth') corresponds to the theoretical response of a fully-functional ADAS suite in view of the test scene. In some embodiments, a reference response can be calculated based on, e.g., ADAS system technical specifications. In other cases, a reference response can be predicted using such tools as, e.g., machine learning algorithms and the like. Using the comparison of the stimulation response and reference response, it is possible to draw conclusions relating to at least one of:
• The functionality state of the ADAS suite;
• ageing and/or wear and tear of ADAS suite components and subsystems;
• physical state of various components and subsystems, such as, but not limited to, optical components (e.g., cameras, lenses, filters), electrical circuits, and/or mechanical components; and
• software bugs and malfunctions.
[0069] In some embodiments, vehicles with an ADAS suite under test may be land, airborne, or water-borne vehicles. Preferably, the vehicle is a motor vehicle, in particular a passenger vehicle, a lorry, or a motorcycle. The vehicle may also be a ship or an aircraft.
[0070] In some embodiments, the present disclosure further provides for testing, validating, verifying, and/or calibrating one or more of the ADAS systems comprising an ADAS suite in a vehicle based, at least in part, on a response of one or more of the sensing modalities associated with the ADAS systems.
[0071] In some embodiments, the testing, validation, and/or calibration are further based, at least in part, on the initial accurate determination of vehicle pose and orientation within the coordinate system.
[0072] Fig. 1 is a high level conceptual illustration of the interaction among the various modules of a vehicle comprising an ADAS suite and an ADAS testing system, according to an embodiment. As can be seen, a vehicle comprising an ADAS suite 100 is under testing by a system 110 for ADAS testing, calibration, verification, validation, and failure indication.
[0073] The various modules of the ADAS suite and the testing platform as described herein are only an exemplary embodiment of the present invention, and in practice may have more or fewer components than shown, may combine two or more of the components, or may have a different configuration or arrangement of the components. The various components described herein may be implemented in hardware, software, or a combination of both hardware and software. In various embodiments, these systems may comprise one or more dedicated hardware devices, may form an addition to and/or extension of an existing device, and/or may be provided as a cloud-based service.
[0074] In some embodiments, ADAS suite 100 may comprise a sensor functionality comprising a plurality of relevant sensor modalities, such as, but not limited to, camera- based, radar-based, and/or LiDAR-based sensor modalities.
[0075] In some embodiments, ADAS suite 100 comprises a perception function configured to obtain sensor inputs and fuse them, to understand and produce an accurate model of the vehicle’s environment.
[0076] In some embodiments, ADAS suite 100 may further comprise a decision and/or a planning function, configured to produce a plan of action for the vehicle based on the perceived environment model generated by the perception function.
[0077] In some embodiments, an actuation function of ADAS suite 100 is configured to take the plan generated by the decision/planning module and implement it by providing operational commands to the vehicle, such as acceleration, deceleration, braking, and/or steering commands.
[0078] In some embodiments, system 110 may comprise a vehicle pose detection function configured to determine a vehicle's location, pose, orientation, attitude, and/or bearing in relation to a reference coordinate system. In some embodiments, the vehicle pose detection module comprises one or more sensor sets, and/or is configured to receive and obtain relevant sensor inputs from sensor sets of the vehicle under test, and/or from any other external source.
[0079] In some embodiments, a target generation function may be configured to generate one or more scenarios comprising a plurality of static and/or dynamic targets to exercise and/or activate one or more of the vehicle's sensing modalities. In some embodiments, the target generation function may generate virtual targets by generating signals configured to be received by one or more sensors and interpreted as associated with a target in the roadway. In some embodiments, generating a virtual target may comprise generating an electrical, electromagnetic, visual, auditory, acoustic, sonic, and/or another signal configured to be received by a relevant sensor. In some embodiments, generating a virtual target may comprise receiving a probing signal generated by a relevant sensor (e.g., a laser signal, an ultrasonic signal) and manipulating a reflection of the probing signal to generate a particular desired perception in the sensor.
[0080] In some embodiments, ADAS testing platform 110 may comprise an analytics function configured to obtain information regarding one or more of vehicle pose and orientation, vehicle ADAS sensor inputs, vehicle ADAS perception, vehicle decision and planning, vehicle actuation, and testing platform target generation and/or detection, and to perform a variety of analyses with respect to ADAS system and subsystem performance, detect potential hardware and/or software failures, and perform system testing, verification, validation, calibration, and failure detection in ADAS.
[0081] Reference is now made to Fig. 2, which is a high-level schematic block diagram of an embodiment of a system 110 for ADAS testing, verification, and failure indication.
[0082] In some embodiments, ADAS suite 100 is installed onboard a vehicle. ADAS suite 100 and system 110 as described herein are only an exemplary embodiment of the present invention, and in practice may have more or fewer components than shown, may combine two or more of the components, or may have a different configuration or arrangement of the components. The various components described herein may be implemented in hardware, software, or a combination of both hardware and software. In various embodiments, these systems may comprise a dedicated hardware device, or may form an addition to and/or extension of an existing device.
[0083] System 110 may store in a storage device software instructions or components configured to operate a hardware processor (also "CPU," or simply "processor"). In some embodiments, the software components may include an operating system, including various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitating communication between various hardware and software components.
[0084] In some embodiments, the software components of system 110 may comprise an operating system, including various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage system control, power management, etc.) and facilitating communication between various hardware and software components.
[0085] In some embodiments, system 110 may also comprise, e.g., a communications module, a user interface, an imaging device, an image processing module, and/or a machine learning module.
[0086] In some cases, the produced data representing a stream of images can be in a format operable by a computer device for diverse purposes such as displaying the data, storing the image data, editing the data, and the like. In some embodiments, the data may be used in the analysis process of the video sequence. In some embodiments, such data can be used to derive various information aspects, which can be utilized in a number of processes such as detecting regions of interest, segmentation, feature calculation, and the like. In some embodiments, such information can refer to color channels, e.g., red, green, and blue. In some embodiments, the color channels may be used to calculate various metrics such as the intensity levels of at least some of the wavelengths, levels of brightness of at least some of the color channels, and the like.
[0087] In some embodiments, a user interface comprises one or more of a control panel for controlling system 110, buttons, a display monitor, and/or a speaker for providing audio commands. In some embodiments, system 110 includes one or more user input control devices, such as a physical or virtual joystick, mouse, and/or click wheel. In other variations, system 110 comprises one or more of peripheral interfaces, RF circuitry, audio circuitry, a microphone, an input/output (I/O) subsystem, other input or control devices, optical or other sensors, and an external port. Each of the above identified modules and applications corresponds to a set of instructions for performing one or more functions described above. These modules (i.e., sets of instructions) need not be implemented as separate software programs, procedures, or modules, and thus various subsets of these modules may be combined or otherwise re-arranged in various embodiments.
[0088] In some embodiments, a communications module may be configured to connect system 110 to a network, such as the internet, a local area network, a wide area network, and/or a wireless network. In some embodiments, the communications module facilitates communications with other devices over one or more external ports, and also includes various software components for handling data received by system 110.
[0089] In some embodiments, an image processing module may be configured to receive video stream data, and process the data to extract and/or calculate a plurality of values and/or features derived from the data. In some embodiments, image processing module may be configured to perform at least some of object detection, image segmentation, and/or object tracking based on one or more image processing techniques. In some embodiments, the image processing module may also be configured to calculate a plurality of time-dependent features from the video stream data. In some embodiments, the features may represent various metrics derived from the video stream data, such as time domain, frequency domain, and/or other or similar features.
[0090] In some embodiments, ADAS suite 100 comprises perception module 102. Perception module 102 receives inputs from sensor unit 104 comprising an array of sensors including, for example: one or more ultrasonic sensors; one or more RADAR sensors; one or more Light Detection and Ranging ("LiDAR") sensors; one or more surround cameras (typically located at various places on the vehicle body to image areas all around the vehicle body); one or more stereo cameras (e.g., to provide depth perception for object detection and object recognition in the vehicle path); one or more infrared cameras; a GPS unit that provides location coordinates; a steering sensor that detects the steering angle; speed sensors (one for each of the wheels); an inertial sensor or inertial measurement unit ("IMU") that monitors movement of the vehicle body (this sensor can be, for example, an accelerometer(s) and/or a gyro-sensor(s) and/or a magnetic compass(es)); tire vibration sensors; and/or microphones placed around and inside the vehicle. In some embodiments, sensor unit 104 may comprise, e.g., one or more of a global positioning system sensor; an infrared detector; a motion detector; a thermostat; a sound detector; a carbon monoxide sensor; a carbon dioxide sensor; an oxygen sensor; a mass air flow sensor; an engine coolant temperature sensor; a throttle position sensor; a crank shaft position sensor; an automobile engine sensor; a valve timer; an air-fuel ratio meter; a blind spot meter; a curb feeler; a defect detector; a Hall effect sensor; a manifold absolute pressure sensor; a parking sensor; a radar gun; a speedometer; a speed sensor; a tire-pressure monitoring sensor; a torque sensor; a transmission fluid temperature sensor; a turbine speed sensor (TSS); a variable reluctance sensor; a vehicle speed sensor (VSS); a water sensor; a wheel speed sensor; and any other type of automotive sensor. In some embodiments, other sensors may be used, as is known to persons of ordinary skill in the art.
[0091] In some embodiments, one or more cameras or other imaging devices of sensor unit 104 may comprise any one or more devices that capture a stream of images and represent them as data. Imaging devices may be optic-based, but may also include depth sensors, infrared imaging sensors, and the like. In some embodiments, the imaging device may be a Kinect or a similar motion sensing device, capable of, e.g., IR imaging. In some embodiments, the imaging device may be configured to detect RGB (red-green-blue) spectral bands. In other embodiments, the imaging device may be configured to detect at least one of monochrome, ultraviolet (UV), near infrared (NIR), short-wave infrared (SWIR), and/or multiple spectral bands, and/or to employ hyperspectral imaging techniques. In some embodiments, the imaging device comprises a digital imaging sensor selected from a group consisting of: complementary metal-oxide-semiconductor (CMOS), charge-coupled device (CCD), indium gallium arsenide (InGaAs), and polarization-sensitive sensor elements.
[0092] ADAS suite 100 further comprises decision/planning module 106, which uses the data from perception module 102 for forward planning of the vehicle path. Decision/planning module 106 decisions are then communicated to one or more of the vehicle actuation system 108, to provide operational commands to the vehicle, such as acceleration, deceleration, braking, and/or steering. In some embodiments, actuation system 108 sends command signals to operate vehicle brakes via one or more braking actuators, operate the steering mechanism via a steering actuator, and operate the propulsion unit, which also receives an accelerator/throttle actuation signal. In some embodiments, actuation is performed by methods known to persons of ordinary skill in the art, with signals typically sent via a Controller Area Network data interface ("CAN bus"), a network inside modern cars used to control brakes, acceleration, steering, windshield wipers, etc. In some embodiments, actuation system 108 may be implemented with dedicated hardware and software, allowing control of throttle, brake, steering, and shifting. The hardware provides a bridge between the vehicle's CAN bus and the controller, forwarding vehicle data to the controller, including the turn signal, wheel speed, acceleration, pitch, roll, yaw, Global Positioning System ("GPS") data, tire pressure, fuel level, sonar, brake torque, and others. Similar actuation controllers may be configured for any other make and type of vehicle, including special-purpose patrol and security cars, long-haul trucks including tractor-trailer configurations, tiller trucks, agricultural vehicles, industrial vehicles, and buses, including but not limited to articulated buses.
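By way of illustration only, the following sketch shows how an actuation command might be placed on a CAN bus using the open-source python-can library; the arbitration ID, one-byte payload encoding, and channel name are hypothetical placeholders, since real message layouts are defined by each manufacturer's CAN database and are not specified by the present disclosure.

```python
# Illustrative sketch only: the arbitration ID, payload scaling, and channel are
# hypothetical; real layouts come from the vehicle manufacturer's CAN database.
import can

def send_brake_command(bus: can.BusABC, brake_pct: float) -> None:
    """Encode a braking request (0-100%) into a one-byte payload and transmit it."""
    payload = [max(0, min(255, int(brake_pct * 255 / 100)))]
    msg = can.Message(arbitration_id=0x123, data=payload, is_extended_id=False)
    bus.send(msg)

if __name__ == "__main__":
    # Virtual SocketCAN channel, e.g. created with `ip link add dev vcan0 type vcan`
    bus = can.interface.Bus(channel="vcan0", bustype="socketcan")
    send_brake_command(bus, 30.0)  # request 30% braking effort
```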
[0093] In some embodiments, system 110, which may be installed onboard the vehicle and/or externally to the vehicle (e.g., in the cloud), comprises a testing and verification subsystem configured to initiate testing and verification of ADAS suite 100, and a continuous failure and error identification subsystem 128, which may be installed onboard the vehicle and/or externally to the vehicle (e.g., in the cloud), and which is configured to check ADAS suite 100 for failures and errors continuously during usage of the vehicle.
[0094] In some embodiments, system 110 may be configured to accurately detect a vehicle's pose, orientation, bearing, and/or location, as well as other parameters such as wheel angles and thrust line, for purposes of ADAS testing, verification, and failure indication. In some embodiments, these parameters are used as a reference for the vehicle's maneuvering sensors and various safety system sensors. Accordingly, in some embodiments, system 110 comprises a vehicle pose calculation module 114, which utilizes one or more sensors such as, but not limited to, camera-based, radar-based, and/or LiDAR-based sensor sets. In some embodiments, system 110 and/or pose calculation module 114 may obtain external sensor inputs using, e.g., V2X communication. Accordingly, in some embodiments, the present disclosure provides for determining vehicle pose and related parameters using a dedicated sub-system for this purpose.
[0095] In some embodiments, at least some of pose calculation module 114 components and/or sensors may be mounted onboard the vehicle and/or externally to the vehicle. Using multiple sensors requires transforming each sensor coordinate system to a world coordinate system. Data from the various sensors of pose calculation module 114 are transferred to, e.g., a Coordinate System Transformation Algorithm, where the data from each sensor, in a camera coordinate system, may be transformed to a reference coordinate system, to obtain accurate vehicle pose and orientation relative to the reference coordinate system. In some embodiments, the vehicle pose calculation algorithm can fuse the data from all sensors of pose calculation module 114 and then calculate the vehicle pose, or can calculate the pose based on each individual camera and fuse the multiple vehicle poses.
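A minimal sketch of such a sensor-to-reference coordinate transformation is given below (Python/NumPy); the rotation, translation, and detected key point are invented values for illustration, and a real implementation would obtain them from the calibration of each sensor of pose calculation module 114.

```python
import numpy as np

def sensor_to_world(points_sensor: np.ndarray, R_ws: np.ndarray, t_ws: np.ndarray) -> np.ndarray:
    """Transform Nx3 points from a sensor (e.g., camera) frame into the reference
    (world) frame using rotation R_ws (3x3) and translation t_ws (3,)."""
    return points_sensor @ R_ws.T + t_ws

# Hypothetical camera mounted 1.5 m above the reference origin, yawed by 10 degrees.
yaw = np.deg2rad(10.0)
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
t = np.array([0.0, 0.0, 1.5])
key_points_cam = np.array([[2.0, 0.5, 0.0]])   # detected key point, camera frame
print(sensor_to_world(key_points_cam, R, t))   # same point in the reference frame
```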
[0096] In some embodiments, pose calculation module 114 may implement various techniques, including but not limited to computer vision algorithms and/or signal processing algorithms, which detect key points and features in the vehicle and its surroundings. In some embodiments, the algorithms may use a database of predefined parameters to improve accuracy, precision, and processing time. Such a database may include data regarding the physical dimensions of vehicles and tires, and sensor and feature locations and orientations. Furthermore, the algorithms may use user input regarding the vehicle under measurement, such as make, model, model year, etc.
[0097] Fig. 3 schematically illustrates a vehicle 200 positioned within a reference coordinate system. In some embodiments, as illustrated in Fig. 3, pose calculation module 114 may be configured to measure a vehicle steering angle α of vehicle 200.
[0098] In some embodiments, pose calculation module 114 may be configured to calculate a thrust line T of vehicle 200 in Fig. 3. Thrust line T is a nominal line that is perpendicular to the rear axle of the vehicle. Thrust angle γ is the angle between the thrust line and the centerline C of the vehicle. The algorithms for calculating the vehicle coordinate system or steering angle can also be used to calculate the vehicle thrust line and thrust angle.
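As a hedged illustration of the thrust line and thrust angle computation, the following sketch uses the common wheel-alignment approximation that the thrust angle equals half the difference of the individual rear toe angles; the toe values and sign convention are assumptions for the example, not measurements prescribed by the present disclosure.

```python
import numpy as np

def thrust_angle_deg(rear_left_toe_deg: float, rear_right_toe_deg: float) -> float:
    """Approximate thrust angle as half the difference of the rear toe angles
    (toe-in positive); sign conventions vary between alignment standards."""
    return (rear_left_toe_deg - rear_right_toe_deg) / 2.0

def thrust_line_direction(centerline_dir: np.ndarray, gamma_deg: float) -> np.ndarray:
    """Rotate the centerline direction C (unit 2D vector) by the thrust angle to
    obtain the thrust line direction T in the same coordinate system."""
    g = np.deg2rad(gamma_deg)
    rot = np.array([[np.cos(g), -np.sin(g)],
                    [np.sin(g),  np.cos(g)]])
    return rot @ centerline_dir

gamma = thrust_angle_deg(0.15, -0.05)                      # hypothetical toe readings
print(gamma, thrust_line_direction(np.array([1.0, 0.0]), gamma))
```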
[0099] In some embodiments, the output of pose calculation module 114, including, but not limited to, vehicle pose and orientation with respect to a reference coordinate system as well as, e.g., steering angle and thrust line, can be used to calibrate and validate various aspects of the vehicle ADAS suite.
[0100] In some embodiments, pose calculation module 114 may also employ data from vehicle sensors. The system and its algorithms may produce further outputs, such as, but not limited to, wheel toe and/or camber angles, steering box position, parallelism and axle offset, wheel caster, KPI, toe-out on turns, and maximum turn angle.
[0101] In some embodiments, system 110 may further comprise a testing infrastructure comprising acceleration, force, strain, temperature, and/or pressure transducers, to create an accurate perception of the tested vehicle and to determine its location, pose, orientation, attitude, and/or bearing in relation to a reference coordinate system. System 110 further comprises a target generator 118 configured to use vehicle pose calculation module 114 data and desired user scenarios, fed, e.g., via HMI (Human Machine Interface) module 120 and/or stored in database 122, for generating one or more virtual targets with one or more unique properties such as, but not limited to, timing, location, distance, orientation, dimensions, velocity, radar cross section, optical properties, electromagnetic properties, and chemical properties, for the tested vehicle ADAS suite 100. The virtual targets may be transmitted to tested vehicle ADAS sensor unit 104 as one or more signals configured to activate one or more sensors such as, but not limited to, camera-based, radar-based, and/or LiDAR-based sensors.
[0102] The virtual targets may also be injected directly into tested vehicle ADAS perception module 102 and/or decision/planning module 106, bypassing ADAS sensor unit 104, via a communication bus such as, but not limited to, CAN (Controller Area Network). Target generator 118 may be configured to receive transmissions from one or more active sensors of the tested vehicle, analyze signal characteristics such as frequency, phase, power, bandwidth, and modulation, and shape the transmitted virtual target signal accordingly. System 110 further comprises an analysis module 126, which is configured to compare the desired tested vehicle behavior in response to one or more virtual targets generated by target generator 118, to the actual tested vehicle behavior, which may be concluded from the image of the tested vehicle perceived via perception module 102 and/or decision/planning module 106. According to an embodiment of the present invention, the actual tested vehicle behavior is analyzed by continuous failure and error identification subsystem analysis module 136 (described in detail hereinafter), which in turn reports to analysis module 126. The results of the comparison may be reported to the user via HMI module 120 and/or stored in database 122.
[0103] In some embodiments, continuous failure and error identification subsystem 128 comprises an ROI (Region of Interest) & LOS (Line of Sight) calculation module 130, which is configured to calculate the region in which an object is considered a potential target for an ADAS suite, according to factors such as, but not limited to, weather and visibility conditions, vehicle dimensions, vehicle braking distance, vehicle mass, vehicle speed, vehicle sensor technology and location, road conditions, road friction coefficient, road speed limit, and safety factors. ROI & LOS calculation module 130 is also configured to calculate the areas to which the vehicle has LOS and the areas to which the vehicle does not have LOS, defined hereinafter as NLOS (Non Line of Sight), according to factors such as, but not limited to, topographic data, cover and relief maps, and vehicle sensor technology and location. Subsystem 128 further comprises an object tracking module 132, which is configured to utilize one or more sensors of sensor unit 134 such as, but not limited to, camera-based, radar-based, and/or LiDAR-based sensors, to recognize and track objects in the vehicle vicinity. Taking into account ROI & LOS module 130 outcomes, object tracking module 132 is also configured to prioritize objects as targets according to factors such as, but not limited to, object source, object location, object speed and relative speed to the vehicle, object maneuvers, and scenario risk level. Subsystem 128 further comprises an analysis module 136, which is configured to compare the prioritized targets received from object tracking module 132 to the output of tested vehicle ADAS perception module 102, according to factors such as, but not limited to, target identification number, target type, target location, target speed, target trajectory, target maneuvers, and acquiring sensor. Analysis module 136 is further configured to calculate a comparison score according to discovered anomalies such as, but not limited to, target location, target type, target trajectory, target maneuvers, timestamp, and acquiring sensor. Such anomalies may indicate a hardware failure and/or software error of tested vehicle ADAS 100 and its subsystems. Analysis module 136 is also configured to report found anomalies and/or dangerous targets to tested vehicle ADAS perception module 102 and/or decision/planning module 106, which may lead to alteration of tested vehicle behavior, such as, but not limited to, speed reduction and fail-safe operation.
[0104] In some embodiments, a system for ADAS testing comprises an inspection target, an ADAS system configured to acquire the inspection target, and an inspection parameter determiner configured to determine an inspection parameter based at least in part on ADAS sensor data, wherein the inspection parameter enables determination of the ADAS system performance.
[0105] An inspection target can be, but is not limited to, passive, active, static, dynamic, virtual, real (physical), fixed, or having variable characteristics. The inspection target is generated by, e.g., target generator 118.
[0106] Maps and HD maps, which are used by ADAS, may include data generated by the virtual target generator. GPS, GNSS, and IMU signals may be modified to support the virtual target validity; this can be done by, e.g., generating spoofed signals.
[0107] In some embodiments, the present disclosure may provide for an exemplary system for ADAS testing comprising, e.g., at least a link to a network, such as but not limited to Local Interconnect Network (LIN), Controller Area Network (CAN), FlexRay, Media Oriented Systems Transport (MOST), Ethernet, On-Board Diagnostics (OBD), V2X (V2V, V2I, V2P, V2C), Mobile Network (2G, 3G, 4G, 5G, LTE), Dedicated Short-Range Communications (DSRC), WiFi, and Zigbee.
Virtual Target Generation
[0108] In some embodiments, the present disclosure may be configured to provide for generating virtual scenes with targets, for the purpose of inspecting, calibrating, verifying, and/or validating ADAS systems.
[0109] In some embodiments, a system, such as virtual target generator 118 of system 110 in Fig. 2, provides for virtual target generation for at least one sensing modality in a vehicle ADAS suite.
[0110] In some embodiments, virtual target generator 118 may provide for generating one or more signals which may be received by ADAS sensing modalities and identified as representing one or more actual, physical targets, for inspection, calibration, verification, and validation purposes. In some embodiments, virtual target generation comprises generating synthetic sensor data, such as synthetic camera, radar, LiDAR, and/or sonar data, from three-dimensional (3D) scene data that may be custom designed for a desired scene.
[0111] In some embodiments, target generator 118 comprises a target generator for at least one sensing modality, including but not limited to camera-based, radar-based, and/or LiDAR-based sensors.
[0112] In some embodiments, virtual target generator 118 may communicate with the vehicle under test using, e.g., V2X communication (Vehicle to Vehicle V2V, Vehicle to Infrastructure V2I, Vehicle to Network V2N, Vehicle to Pedestrian V2P, Vehicle to Device V2D, and Vehicle to Grid V2G).
[0113] In some embodiments, virtual target generator 118 may generate a virtual target on a mapping and/or HD mapping service.
[0114] In some embodiments, target generator 118 may be configured to virtualize real-life objects, such as, but not limited to, bridges, signs, buildings, and the like. In some embodiments, these targets are detected by the ADAS systems, and comparing the detection properties with the expected values can be used for inspection, calibration, verification, and validation.
[0115] In one example, highly accurate sensors used for HD mapping may be employed for generating targets. In some embodiments, data from sensors, which can be processed online or offline, is sent to a target generator algorithm. Prominent objects, which can be easily identified, detected, and measured, are chosen by the algorithm. Their accurate properties, such as, but not limited to, location, orientation, dimensions, velocity, radar cross section, optical properties, electromagnetic properties, and chemical properties, are extracted from the sensor data. Properties which cannot be extracted are estimated using simulations and modeling techniques. Each target can be used for one or more sensing modalities. Once a viable target and its properties are identified, the target is stored on an HD map service, which can be accessible by vehicles equipped with ADAS.
[0116] In some embodiments, generating a virtual target for more than one sensing modality and/or sensing technique requires synchronization and a communication bus between the virtual target generators. In some embodiments, synchronization takes into consideration timing and spatial properties of the virtual target generator. In some embodiments, system 110 may include a database used to store a library of predefined targets, sensor configurations and parameters, vehicle parameters, and/or setup parameters. In some embodiments, the virtual target generator can be portable, hence can be deployed in the field, allowing generation of virtual targets for static and dynamic vehicles. For dynamic cases, where the velocity, orientation, and relative distance between the virtual target generator and the vehicle under test are not constant during the process, these parameters shall be used by the target generator algorithm.
[0117] In some embodiments, target data is stored on a database which can be accessible by the ADAS. Target data can be transmitted to the vehicle using various communication protocols, and in one example can be V2X.
[0118] In some embodiments, once an inspection target is generated, it is acquired by the ADAS sensors, and the sensor data is then transferred to the perception algorithm 102 responsible for object recognition and object tracking. From the perception algorithm 102, data is transferred to the decision algorithm 106, which is responsible for path planning and obstacle avoidance, and then the data is transferred to the actuation system 108. The following stages: sensing, perception, decision, and actuation, are internal stages within an ADAS system under test. The output from each stage is transferred to an inspection algorithm, which compares the system performance to an inspection determiner. System performance can be, but is not limited to, sensor alignment, transmitter/receiver/antenna characteristics, target location, target dimensions, target orientation, target velocity, target perception (recognition and tracking), decision, and actuation command (acceleration, steering, braking). The inspection algorithm can be implemented on an inspection system. If the ADAS meets the required performance, the system passes the inspection; otherwise the system under test is required to be replaced or calibrated.
Target Generation by Active Signal Manipulation
[0119] In some embodiments, the present disclosure may be configured to detect, e.g., an electromagnetic, optical, sonic, ultrasonic, and/or another form of signal from one or more sensors of an ADAS-equipped vehicle, and to manipulate and/or modify attributes of the signal to simulate one or more virtual objects in the path of the signal. In some embodiments, this method may be applicable to radar- and LiDAR-based sensor modalities.
[0120] In the case of ultrasonic sensors, in some embodiments, an ultrasonic sensor may transmit sound waves that reflect off nearby objects, wherein the reflected waves are received by applicable sensors, and the distance from the vehicle to the object is calculated. The virtual target generator receives the ultrasonic sensor transmitted signal, and analyzes its characteristics (such as frequency, phase, power, bandwidth, modulation, etc.). The signal characteristics, along with target unique properties (such as timing, location, distance, orientation, dimensions, velocity, radar cross section, optical properties, electromagnetic properties, and chemical properties) and the target generator properties (location of the virtual target generator with respect to the vehicle), are used as inputs for a signal generator algorithm. The generated signal is then transmitted using an appropriate transmitter, to be detected by the ultrasound sensor and perceived in a manner consistent with the generating parameters. Generating and transmitting the signal can be made using an active system (transmitter) or a passive system (by modifying the original signal using mechanical, electrical, chemical, or optical methods).
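The timing aspect of such an ultrasonic virtual target can be illustrated as follows; this is only a sketch under the assumption of a stationary generator and sensor, with hardware capture and retransmission abstracted away.

```python
# Sketch: delay to apply before re-emitting a captured ultrasonic pulse so that
# the sensor perceives a virtual obstacle at a chosen range.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def echo_delay_s(virtual_range_m: float, generator_range_m: float) -> float:
    """The pulse travels sensor -> generator -> sensor (2 * generator_range_m);
    the added delay makes the total flight time match a round trip to the
    virtual target: delay = 2 * (d_virtual - d_generator) / c."""
    return max(0.0, 2.0 * (virtual_range_m - generator_range_m) / SPEED_OF_SOUND)

# Emulate an obstacle 1.2 m ahead with the generator placed 0.5 m from the sensor.
print(f"retransmit after {echo_delay_s(1.2, 0.5) * 1e3:.2f} ms")   # ~4.08 ms
```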
[0121] In one example, a radar target generator may use various techniques to modify the received signal frequency, phase, amplitude, etc., e.g., by causing a virtual Doppler shift. A LiDAR target generator may use a technique similar to that of the ultrasonic sensor.
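For the radar case, the magnitude of the virtual Doppler shift follows the standard two-way Doppler relation; the sketch below assumes a 77 GHz automotive radar carrier, which is an illustrative choice rather than a requirement of the present disclosure.

```python
C = 3.0e8  # speed of light, m/s

def doppler_shift_hz(radial_velocity_mps: float, carrier_hz: float = 77e9) -> float:
    """Two-way Doppler shift observed by a monostatic radar for a target closing
    at radial_velocity_mps: f_d = 2 * v * f_c / c."""
    return 2.0 * radial_velocity_mps * carrier_hz / C

# Frequency offset to impose on the retransmitted signal so the radar perceives
# a virtual target closing at 20 m/s.
print(f"{doppler_shift_hz(20.0):.0f} Hz")   # about 10.3 kHz at 77 GHz
```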
Object Detection and Target Screening
[0122] In some embodiments, the present disclosure may be configured to determine a set of targets for a vehicle under test, by detecting a plurality of objects within a region of interest (ROI) surrounding the vehicle, and screening the objects to determine, identify, and prioritize targets for testing.
Determining a Region of Interest (ROI)
[0123] In some embodiments, the present disclosure may define a Region of Interest (ROI) around the vehicle-under-test (VUT), whose parameters may then be used to determine whether a detected object within the ROI is a potential target. In some embodiments, the ROI may have a defined shape, e.g., circle, rectangle, oval, or an abstract shape around the vehicle.
[0124] In some embodiments, the present system may acquire data from a network in order to detect and identify objects, such as but not limited to vehicles, trucks, road infrastructure, signs, road hazards, pedestrians, and bicycles, surrounding the vehicle. Objects may be identified based on Basic Safety Message (BSM) and/or Cooperative Awareness Message (CAM) protocols. The system may use the vehicle's sensor data in order to detect and identify objects based on signal and image processing algorithms.
[0125] In order to determine the ROI, the present disclosure may take into consideration at least some of the following parameters: BSM data, CAM data, VUT braking distance, weather conditions, VUT mass, visibility conditions, VUT dimensions, road conditions, road friction coefficient, speed limit, urban/highway road, VUT dynamics, VUT velocity, VUT sensor modalities, VUT sensor location, VUT perception and sensor data, nearby vehicles, nearby pedestrians, vulnerable road users, and driving history.
[0126] In some embodiments, ROI determination may be based, at least in part, on a line-of-sight (LOS) and/or non-line-of-sight (NLOS) analysis with respect to a VUT, to determine which of the objects may constitute targets which should be perceived by the VUT ADAS suite. For example, objects located within the NLOS cannot be physically sensed by the vehicle sensors, and thus may not be defined as potential targets. In some embodiments, the LOS/NLOS analysis may be accomplished using mapping data.
[0127] In one example, a circular ROI with respect to a vehicle may be calculated as:
ROI(V, SM) = f * V^a * SM,
where V is the vehicle's speed, a is a factor, SM is a safety factor, and f is a correction constant. Fig. 4 illustrates an exemplary ROI analysis method as a function of vehicle speed.
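A minimal sketch of this circular ROI calculation is shown below; the exact functional form, the exponent a, and the correction constant f are placeholders reconstructed from the definitions above, and would be tuned per vehicle and sensor suite.

```python
def roi_radius_m(speed_mps: float, safety_factor: float,
                 f: float = 1.0, a: float = 1.5) -> float:
    """Circular ROI radius following ROI(V, SM) = f * V**a * SM, where V is the
    vehicle speed, a is a factor, SM is a safety factor, and f is a correction
    constant (f and a are illustrative placeholder values)."""
    return f * speed_mps ** a * safety_factor

for v in (10.0, 20.0, 30.0):   # vehicle speeds in m/s
    print(f"V={v:>4} m/s -> ROI radius {roi_radius_m(v, safety_factor=1.2):7.1f} m")
```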
[0128] In some embodiments, the present disclosure may define an object located in the ROI as a target. The system may define areas around the vehicle that are in Line of Sight (LOS) and Not in Line of Sight (NLOS) of the vehicle's sensors. The analysis may use topographic data, cover and relief maps, and sensor locations on the vehicle. Furthermore, it may use the vehicle's sensor performance characteristics, such as, but not limited to, Field of View (FOV) and detection range.
[0129] In some embodiments, ROI determination may be achieved using a machine learning algorithm. For example, in some embodiments, the present disclosure may incorporate a machine learning model, such as a deep neural network, Bayesian network, or other type of machine learning model. In some embodiments, the present disclosure may comprise one or more statistical models which may define an approach for updating a probability for the computed parameters of an ROI outside of a vehicle. In some embodiments, these statistical models may be evaluated and modified by processing simulated sensor outputs that simulate perception of a scenario by the sensors of the vehicle.
[0130] In some embodiments, the present disclosure may execute a training engine configured for training the machine learning model based, at least in part, on a training set comprising a plurality of scenarios, each comprising input data such as, but not limited to, BSM data, CAM data, VUT braking distance, weather conditions, VUT mass, visibility conditions, VUT dimensions, road conditions, road friction coefficient, speed limit, urban/highway road, VUT dynamics, VUT velocity, VUT sensor modalities, VUT sensor location, VUT perception and sensor data, nearby vehicles, nearby pedestrians, vulnerable road users, and driving history. In some embodiments, such input data may be labelled with manual delineation of an ROI consistent with the input data. In some embodiments, at an inference stage, a trained machine learning model of the present disclosure may be applied to a target roadway scene, to determine the ROI associated with the roadway scene. In some embodiments, the training set may be based on, e.g., manually generated scenarios and/or manual inputs specifying relevant data. In some embodiments, scenarios may be modeled based on images, a video sequence, or other measurements of an actual location, e.g., observations of a location, movements of vehicles in the location, the location of other objects, etc. In some embodiments, a scenario generation module may read a file specifying locations and/or orientations for various elements of a scenario, and create a model of the scenario having models of the elements positioned as instructed in the file. In this manner, manually or automatically generated files may be used to define a wide range of scenarios.
[0131] In some embodiments, ROI determination may be updated dynamically based, at least in part, on a predicted location of the VUT upon performing one or more planned actions (e.g., lane change, speed increase, braking, turning, etc.). In some embodiments, the present disclosure may use data from the VUT decision subsystem for analyzing the safety of an action it is planning to perform. In such a case, the present disclosure may analyze a predicted "virtual" location of the VUT following the execution of a planned action, and, based on this analysis, may update the ROI to include one or more relevant targets. Fig. 5 shows an example of an extended ROI derived from a lane-changing action, where Target 1 was added.
[0132] In some embodiments, an ROI for a VUT may be dynamically updated and/or extended based, at least in part, on data shared from other vehicles in the vicinity. For example, a nearby vehicle (NV) may share perceived objects to extend and/or improve an ROI of a VUT. In some embodiments, the nearby vehicle may comprise a different perception system developed by a different manufacturer, or it may perceive potential targets from a different and/or improved vantage point. In some embodiments, perception sharing may include sharing raw and/or processed sensor data. In some embodiments, objects detected by nearby vehicle (NV) perception subsystems may be screened and prioritized. Fig. 6 shows an example of an NV sharing an Object x (e.g., a pedestrian) with a VUT.
Object Screening and Target Prioritization
[0133] In some embodiments, the present disclosure may be configured for detecting a plurality of objects surrounding a VUT, and performing object screening based on specified criteria, to determine a subset of relevant targets from the group of objects. In some embodiments, the present disclosure may then perform target prioritization from among the subset of screened targets, based on specified prioritization criteria.
[0134] In some embodiments, the present disclosure may make an initial object screening within a defined ROI around the vehicle. The present disclosure may then detect objects located within the ROI as potential targets. In some embodiments, the present disclosure may further track potential targets based on available data or by predicting location and telemetry. A target database may include target location, telemetry, dimension, type, and material.
[0135] In some embodiments, object screening may be based, at least in part, on the following parameters:
• Vehicle ROI and line-of-sight (LOS), which determine which objects in the roadway environment should be perceived by the vehicle’s various sensor modalities.
• Object distance from the vehicle, based on known sensing parameters.
• Validity check on the data transferred, for example, a CRC check.
[0136] In some embodiments, in order to reduce the computing resources required for anomaly analysis, object screening and target prioritization processes may be used. The object screening process may use the following parameters:
• Objects which are located outside the LOS of the VUT may be screened out.
• Objects located beyond the vehicle's specified sensing distance may be omitted.
[0137] Fig. 7 shows a VUT camera sensor LOS and NLOS analysis, where object1 is within the VUT LOS, and therefore constitutes a potential target, while object2 is within the NLOS, and therefore will not be deemed a potential target and will be screened out.
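Combining the LOS, sensing-distance, and data-validity rules above, a simple screening pass over tracked objects might look as follows; the object fields and thresholds are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    obj_id: int
    distance_m: float   # distance from the VUT
    in_los: bool        # result of the LOS/NLOS analysis
    crc_ok: bool        # validity check on the transferred data

def screen_objects(objects, roi_radius_m: float, max_sensing_range_m: float):
    """Keep only objects the VUT can plausibly perceive: valid data, within the
    LOS, and inside both the ROI and the specified sensing distance."""
    return [o for o in objects
            if o.crc_ok and o.in_los
            and o.distance_m <= min(roi_radius_m, max_sensing_range_m)]

candidates = [TrackedObject(1, 35.0, True, True),    # kept: potential target
              TrackedObject(2, 35.0, False, True),   # screened out: NLOS
              TrackedObject(3, 400.0, True, True)]   # screened out: beyond sensing range
print([o.obj_id for o in screen_objects(candidates, roi_radius_m=150.0, max_sensing_range_m=200.0)])
```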
[0138] In some embodiments, target prioritization is used to ensure that the most significant targets are identified by the VUT's perception system. The prioritization process may use the following parameters:
• Scenario danger level
• Predicted deadliness level of impact between the VUT and a target.
• Location and distance of target relative to the VUT
• Location accuracy of a target.
• Data source of the target (e.g., VUT sensor, DTD, nearby vehicle).
• Location of the target relative to the VUT.
• Roadway conditions.
[0139] Fig. 8 shows an example of a high-danger scenario, where the VUT and Target are approaching a T junction; hence, the target should be highly prioritized.
[0140] Fig. 9 shows an example of a potentially high deadliness level of impact between the VUT and Target (e.g., a pedestrian).
[0141] Fig. 10 shows an example where Target is located within the braking distance of the vehicle, and should therefore be highly prioritized. The prioritization process may use a prioritization function, P(Ti), which allows analytical and predictable analysis of the prioritization process for validation and verification. In one example, the prioritization function P(Ti) can be calculated as
P(Ti) = Pa(Ti) * Pb(Ti) * Pc(Ti) * Pd(Ti) * Pe(Ti), where Pa, Pb, Pc, Pd, and Pe are the scenario danger level, deadliness likelihood, location accuracy, source ranking, and relative location prioritization functions, respectively.
[0142] In one example, the location accuracy prioritization function, Pc(Ti), can be calculated as Pc(Ti) = 1/a, where a is the location accuracy.
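A sketch of this multiplicative prioritization is given below; the individual sub-scores are hypothetical inputs, since the disclosure only specifies the overall product form and the 1/a location accuracy term.

```python
def location_accuracy_priority(accuracy_m: float) -> float:
    """Pc(Ti) = 1 / a, where a is the reported location accuracy (smaller a -> higher priority)."""
    return 1.0 / max(accuracy_m, 1e-6)

def prioritize(pa: float, pb: float, pc: float, pd: float, pe: float) -> float:
    """P(Ti) = Pa * Pb * Pc * Pd * Pe over the scenario danger level, deadliness
    likelihood, location accuracy, source ranking, and relative location terms."""
    return pa * pb * pc * pd * pe

# Two hypothetical targets; the higher P(Ti) is handled first by the perception check.
p_t1 = prioritize(0.9, 0.8, location_accuracy_priority(0.5), 1.0, 0.7)
p_t2 = prioritize(0.4, 0.3, location_accuracy_priority(2.0), 0.8, 0.5)
print(sorted([("T1", p_t1), ("T2", p_t2)], key=lambda kv: kv[1], reverse=True))
```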
[0143] In some embodiments, target prioritization as well as related parameters such as scenario danger level, predicted deadliness level, location accuracy, and the source ranking functions may be performed using a trained machine learning algorithm.
[0144] In some embodiments, at a training stage, one or more machine learning classifiers may be trained to predict one or more of the relevant scene prioritization parameters, on a relevant training set comprising a plurality of scenarios, each including one or more targets. In some embodiments, the training sets may be based, at least in part, on relevant input data such as: BSM data, CAM data, VUT braking distance, weather conditions, VUT mass, visibility conditions, VUT dimensions, road conditions, road friction coefficient, speed limit, urban/highway road, VUT dynamics, VUT velocity, VUT sensor modalities, VUT sensor location, VUT perception and sensor data, nearby vehicles, nearby pedestrians, vulnerable road users, and driving history. In some embodiments, each scenario in a training set may be labelled, e.g., based on a priority level assigned to each target within the scenario.
[0145] In some embodiments, at an inference stage, one or more trained machine learning algorithms may be applied to a target roadway scene, to determine a prioritization of targets within the target scene.
Target Perception Analysis
[0146] In some embodiments, the present disclosure may analyze and indicate anomalies between the VUT perception of objects and/or targets within its ROI and a 'ground truth' reference set of targets determined for the VUT. In some embodiments, these anomalies may provide for an indication of, e.g., a hardware failure, software failure, bug, and/or algorithm error, in the context of testing, verification, validation, calibration, and failure detection in ADAS.
[0147] In some embodiments, the analysis may be based on, e.g., a push architecture, where VUT perception data is provided for analysis, and/or a pull architecture, where the VUT responds to queries from the system regarding perceived targets. In some embodiments, the objects-targets comparison and analysis may be performed using at least one of a system CPU, system GPU, system ECU, VUT CPU, VUT GPU, and/or VUT ECU.
[0148] In some embodiments, the analysis comparison may be performed for at least some of the following parameters:
• Vehicle identification number (if identified by the vehicle’s perception subsystem).
• Target / object type, at least: vehicle, truck, infrastructure, sign, obstacle, pedestrian.
• Time Stamp.
• Target / object global or relative coordinates - latitude, longitude, height.
• Location accuracy.
• Velocity.
• Velocity accuracy.
• Trajectory.
• Acceleration.
• Acceleration accuracy.
• Vehicle size.
• Acquired sensor
[0149] An example of a method for anomaly detection is shown in Fig. 11. The system may perform a high-level comparison, e.g., comparing target/object location, and, if no anomaly was detected, then perform a low-level comparison, e.g., comparing timestamp, velocity, and trajectory. A comparison score and a pass/fail criterion may be computed using the following parameters:
• Location anomaly
• Type anomaly
• Timestamp anomaly
• Trajectory anomaly
• Acquired sensor anomaly
[0150] The comparison process may use a scoring function, S(Ti, Oj), which allows analytical and predictable analysis of the comparison process for validation and verification. In one example, the scoring function S(Ti, Oj) can be calculated as
S(Ti, Oj) = Sa(Ti, Oj) * Sb(Ti, Oj) * Sc(Ti, Oj) * Sd(Ti, Oj) * Se(Ti, Oj),
[0151] where Sa, Sb, Sc, Sd, and Se are the location, type, timestamp, trajectory, and acquired sensor scoring functions, respectively. Ti is the i-th target and Oj is the j-th object.
[0152] In one example, the location anomaly function, Sa(Ti, Oj), can be calculated as
Sa(Ti, Oj) = abs(LTi - LOj), where LTi and LOj are the locations of Ti and Oj, respectively.
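The following sketch illustrates the scoring and the high-level/low-level comparison flow described around Fig. 11; the tolerance value and the non-location terms are placeholders, as the disclosure does not fix them.

```python
import math

def location_anomaly(l_target, l_object) -> float:
    """Sa(Ti, Oj): absolute difference between the reference target location and
    the location reported by the VUT perception (2D Euclidean distance here)."""
    return math.dist(l_target, l_object)

def comparison_score(sa: float, sb: float, sc: float, sd: float, se: float) -> float:
    """S(Ti, Oj) = Sa * Sb * Sc * Sd * Se over the location, type, timestamp,
    trajectory, and acquired-sensor anomaly terms."""
    return sa * sb * sc * sd * se

LOCATION_TOLERANCE_M = 0.5                          # illustrative pass/fail threshold
sa = location_anomaly((12.0, 3.0), (12.3, 3.1))     # high-level comparison first
if sa > LOCATION_TOLERANCE_M:
    print("location anomaly detected -> fail")
else:                                               # then the low-level terms (placeholders here)
    print("pass, score:", comparison_score(max(sa, 1e-6), 1.0, 1.0, 1.0, 1.0))
```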
[0153] In some embodiments, one or more machine learning algorithms may be used to determine, e.g., the location, type, timestamp, trajectory, and/or acquired sensor scoring functions. For example, at a training stage, one or more machine learning classifiers may be trained to predict any of these functions based, at least in part, on a training set comprising a plurality of roadway scenarios, each comprising input data such as: target/object type, time stamp, target/object global or relative coordinates (e.g., latitude, longitude, height), location accuracy, velocity, velocity accuracy, trajectory, acceleration, acceleration accuracy, vehicle size, road conditions, and/or acquired sensor. In some embodiments, training set scenarios may be manually labelled according to these functions.
[0154] In some embodiments, at an inference stage, the trained machine learning classifiers may be applied to a target roadway scene, to determine anomalies between perception objects and targets associated with said roadway scene.
[0155] In some embodiments, the present system may report the found anomalies, which may be indicative of a hardware failure, software bug, and/or algorithm error. The report may be used by the VUT perception, decision, actuation, and/or any other of the VUT functions and/or ADAS subsystems. Furthermore, the report may include an analysis which allows root cause analysis of the anomaly, as well as deeper insights into hardware failures, software bugs, and/or algorithm errors. The report may lead to a fail-safe operation of the vehicle. The report may be used by a hardware failures, software bugs, and algorithm errors database (VFDB).
[0156] The system may perform a validity check on the data received from the vehicle's perception, for example, a CRC check.
[0157] In some embodiments, identifying abnormal or dangerous behavior of a target on a roadway, e.g., a vehicle, may be used in order to better prioritize the targets. Furthermore, the vehicle’s decision subsystem may use a dangerous targets dataset to choose a cautious strategy (e.g., reducing speed and extending safety distance).
[0158] In some embodiments, this system may be used to identify hardware failure, software bugs, and algorithm errors in nearby vehicles. The output of the system may be stored on a Dangerous Targets Database (DTD), and the analysis may receive data from the DTD.
Target Danger Level Assessment
[0159] In some embodiments, the present disclosure may track identified targets surrounding the VUT, and indicate a potential and/or predicted danger level associated with each target, based on, e.g., dangerous driving and/or other behaviors detected with respect to the target. In some embodiments, a dangerous targets database (DTD) may include targets identified and/or classified as potentially dangerous, based, e.g., on driving and/or other behavioral parameters. In some embodiments, the DTD may be used by the present disclosure and/or by other vehicles.
[0160] In some embodiments, danger assessment analysis may use one or more of the following parameters:
• Driver non-compliance with roadway rules and regulation;
• irregular driving and road behavior patterns;
• target speed deviates significantly from speed limits;
• target changes lanes frequently; and/or
• vehicle state of repair.
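As a purely illustrative aggregation of the parameters above, a rule-based danger score might be sketched as follows; the weights and thresholds are invented for the example and are not prescribed by the present disclosure.

```python
def danger_score(speed_mps: float, speed_limit_mps: float,
                 lane_changes_per_min: float, wrong_way: bool,
                 poor_repair: bool) -> float:
    """Toy rule-based aggregation of the behavioural cues listed above."""
    score = 0.0
    if abs(speed_mps - speed_limit_mps) > 0.3 * speed_limit_mps:
        score += 0.4                                # significant speed deviation
    score += min(lane_changes_per_min, 4.0) * 0.1   # frequent lane changes
    if wrong_way:
        score += 1.0                                # non-compliance with roadway rules
    if poor_repair:
        score += 0.2                                # visible state-of-repair issues
    return score

# Hypothetical target driving well above the limit with a visible defect.
print(danger_score(30.0, 14.0, lane_changes_per_min=3.0, wrong_way=False, poor_repair=True))
```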
[0161] Fig. 12 shows an example of a target vehicle Target not obeying the traffic rules by driving on the wrong side of the road.
[0162] In some embodiments, danger level assessment may be performed using one or more machine learning algorithms. In some embodiments, at a training stage, a machine learning classifier may be trained on a training set comprising, e.g., a plurality of roadway scenarios including one or more targets, wherein each scenario provides for input data such as target’s route, target speed, target’s physical conditions, target’s lane, etc. In some embodiments, each such training scenario may be manually labelled according to a perceived danger level associated with, e.g., one or more targets within the scenario and/or the scenario as a whole.
[0163] In some embodiments, at an inference stage, a trained machine learning algorithm of the present disclosure may be applied to a target roadway scene, to determine danger level associated with one or more targets within the scenario.
Target Tracking Estimation
[0164] In some embodiments, network communication may experience disconnections due to, e.g., Quality of Service (QoS) issues. In some cases, loss of communication ability may be considered a critical failure leading to a safety hazard. In some embodiments, the present system and method for object screening and target prioritization may include a system and method for object tracking estimation.
[0165] In Fig. 13, an object's location Lt0, velocity Vt0, and acceleration at0 are known at t0; however, due to limited QoS, updated parameters are unknown at t1. Hence, using the tracking estimation function, the location Lt1 and velocity Vt1 at t1 may be estimated by:
Vt1 = Vt0 + at0(t1 - t0)
Lt1 = Lt0 + Vt0(t1 - t0) + at0(t1 - t0)^2/2
[0166] The tracking estimator may be used when an object loses communication without an appropriate notification (e.g., parking and engine turn-off); once the object regains communication, the tracking estimator stops. In some embodiments, an estimation function may use other and/or additional parameters, such as target driving history, target predicted route, traffic history, etc.
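A minimal sketch of this constant-acceleration tracking estimator follows; the state values are hypothetical, and a production estimator would also incorporate the additional parameters mentioned above (driving history, predicted route, etc.).

```python
def estimate_state(l_t0, v_t0, a_t0, t0: float, t1: float):
    """Dead-reckon position and velocity between the last update t0 and query time t1:
    V_t1 = V_t0 + a_t0*(t1-t0); L_t1 = L_t0 + V_t0*(t1-t0) + a_t0*(t1-t0)**2/2."""
    dt = t1 - t0
    v_t1 = tuple(v + a * dt for v, a in zip(v_t0, a_t0))
    l_t1 = tuple(l + v * dt + 0.5 * a * dt ** 2
                 for l, v, a in zip(l_t0, v_t0, a_t0))
    return l_t1, v_t1

# Object last reported at t0 = 0 s at (100 m, 5 m), 15 m/s, braking at 1 m/s^2.
loc, vel = estimate_state((100.0, 5.0), (15.0, 0.0), (-1.0, 0.0), t0=0.0, t1=2.0)
print(loc, vel)   # estimated position and velocity 2 s after the last update
```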
[0167] In some embodiments, target location estimation may be performed using a trained machine learning algorithm. In some embodiments, at a training stage, one or more machine learning algorithms may be trained to predict target location. In some embodiments, such a machine learning algorithm may be trained on a training set comprising a plurality of scenarios, each comprising one or more targets, wherein input data such as target route, target speed, target physical condition, target current lane, driving history, traffic history, etc., are provided. In each case, training set scenarios may be labeled using the predicted target location at various prospective times. In some embodiments, at an inference stage, the trained machine learning algorithm may be applied to a roadway scenario comprising one or more targets, to estimate object location, speed, and/or acceleration parameters with respect to the target.
System and method for detecting external damages to nearby vehicles
[0168] In some embodiments, vehicles may suffer from external damage such as, but not limited to, a flat tire, damaged glass and windshield, a damaged mirror, dents and dings, chips, and scratches. This damage may be due to wear and tear or due to collisions and/or other roadway incidents. In some embodiments, the present system allows for detecting external damage to one or more nearby vehicles (NV) on the roadway, without the requirement to send them for an assessment at a dedicated worksite.
[0169] In some embodiments, the system may use a communication network to locate the NV, and the system may send a request to perform an external inspection of the NV using the vehicle's sensors. For example, as shown in Fig. 14, a vehicle is located behind the NV, and hence performs an external inspection of the rear part of the NV, wherein the results of the inspection are sent to a database. The analysis of the NV's inspection may be processed on the vehicle's subsystems, on the NV's subsystems, or on a remote computing platform such as a cloud service. The analysis may use image processing algorithms in order to detect and assess the damage to the NV based on the vehicle's sensor data. The results of the analysis may be sent to the NV and/or sent to a remote server. The vehicle may perform a 360° inspection of the NV. The inspection may be performed by several vehicles to achieve 360° coverage.
[0170] In some embodiments, vehicle damage estimation may be performed using a trained machine learning algorithm. In some embodiments, at a training stage, one or more machine learning algorithms may be trained to detect and/or assess external damage to a vehicle. In some embodiments, such a machine learning algorithm may be trained on a training set comprising a plurality of images of vehicles exhibiting external damage, such as a flat tire, damaged bodywork, damaged windshield and/or windows, etc. In some embodiments, the training set may be labelled consistent with the damage exhibited in each image. In some embodiments, at an inference stage, a trained machine learning algorithm may be applied to images of vehicles, to analyze external damage.
[0171] As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit," "module" or "system." Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
[0172] Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
[0173] A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.

[0174] Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
[0175] Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
[0176] Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a hardware processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
[0177] These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.

[0178] The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
[0179] The flowcharts and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
[0180] The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
[0181] In the description and claims of the application, each of the words "comprise" "include" and "have", and forms thereof, are not necessarily limited to members in a list with which the words may be associated. In addition, where there are inconsistencies between this application and any document incorporated by reference, it is hereby intended that the present application controls.

Claims

CLAIMS
What is claimed is:
1. A method comprising:
receiving, from a first sensor system, information regarding a first set of objects within a scene;
receiving, from a second sensor system, information regarding a second set of objects within said scene, wherein said second set of objects is located within a region of interest (ROI) of said second sensor system;
selecting, from said first set of objects, a subset of targets comprising those objects in said first set of objects that are located within said ROI; and
comparing said subset of targets with said second set of objects to determine a functionality status of said second sensor system.
2. The method of claim 1, wherein at least one of said first and second sensor systems are located on board a vehicle under testing (VUT).
3. The method of claim 2, wherein said VUT is a moving VUT and said scene is a roadway scene.
4. The method of any one of claims 1-3, wherein said comparing further comprises determining, with respect to said VUT, at least one of a position relative to each of said targets, a location relative to each of said targets, a trajectory relative to each of said targets, an orientation relative to each of said targets, VUT velocity, and VUT thrust line.
5. The method of any one of claims 1-4, wherein said objects are selected from the group consisting of: vehicles, trucks, roadway infrastructure elements, roadway signs, bridges, buildings, roadway hazards, pedestrians, bicycles, vulnerable road users, and animals.
6. The method of any one of claims 1-5, wherein said first and second sensor systems each comprise at least one of ultrasonic-based, camera-based, radar-based, and light detection and ranging (LiDAR)-based sensors.
7. The method of any one of claims 1-6, wherein said comparing comprises:
(i) assigning a scoring function to each of said targets and to each object in said second set of objects,
(ii) associating each of said targets with a corresponding object in said second set of objects, and
(iii) comparing said scoring functions assigned to each of said targets and its associated object.
8. The method of claim 7, wherein said scoring function is based, at least in part, on a location, type, timestamp, and trajectory of each of said targets and said objects.
9. The method of any one of claims 1-8, wherein said ROI is determined based, at least in part, on one of: a field of sensing of said second sensor system, a velocity of said VUT, a trajectory of said VUT, a braking distance of said VUT, a mass of said VUT, dimensions of said VUT, weather conditions, visibility conditions, roadway conditions, roadway friction coefficient, roadway speed limit, and roadway type.
10. The method of any one of claims 1-9, wherein said ROI is determined by applying a trained machine learning algorithm to said scene, wherein said machine learning algorithm is trained using a training set comprising:
(i) a plurality of roadway scenes comprising a VUT; and
(ii) labels associated with an ROI assigned to each VUT in each of said roadway scenes.
11. The method of any one of claims 1-10, wherein said information regarding said first set of objects comprises at least one of: a priority level associated with at least some of said objects, and a danger level associated with at least some of said objects.
12. The method of claim 11, wherein said priority level associated with at least one of said objects is determined based, at least in part, on an assessment of a deadliness associated with a possible collision between said VUT and said object, a source of said information with respect to said object, and a location of said object.
13. The method of claim 11, wherein said priority level associated with at least one of said objects is determined by applying a trained machine learning algorithm to said scene, wherein said machine learning algorithm is trained using a training set comprising:
(i) a plurality of roadway scenes, each comprising one or more objects; and
(ii) labels associated with a priority level assigned to each of said objects in each of said roadway scenes.
14. The method of claim 11, wherein said danger level associated with at least one of said objects is determined based, at least in part, on object motion trajectory, object perceived compliance with roadway rules, object speed variations, and object roadway lane adherence.
15. The method of claim 11, wherein said danger level associated with at least one of said objects is determined by applying a trained machine learning algorithm to said scene, wherein said machine learning algorithm is trained using a training set comprising:
(i) a plurality of roadway scenes, each comprising one or more objects; and
(ii) labels associated with a danger level assigned to each of said objects.
16. A system comprising:
at least one hardware processor; and
a non-transitory computer-readable storage medium having stored thereon program instructions, the program instructions executable by the at least one hardware processor to:
receive, from a first sensor system, information regarding a first set of objects within a scene,
receive, from a second sensor system, information regarding a second set of objects within said scene, wherein said second set of objects is located within a region of interest (ROI) of said second sensor system,
select, from said first set of objects, a subset of targets comprising those objects in said first set of objects that are located within said ROI, and
compare said subset of targets with said second set of objects to determine a functionality status of said second sensor system.
17. The system of claim 16, wherein at least one of said first and second sensor systems are located on board a vehicle under testing (VUT).
18. The system of claim 17, wherein said VUT is a moving VUT and said scene is a roadway scene.
19. The system of any one of claims 16-18, wherein said comparing further comprises determining, with respect to said VUT, at least one of a position relative to each of said targets, a location relative to each of said targets, a trajectory relative to each of said targets, an orientation relative to each of said targets, VUT velocity, and VUT thrust line.
20. The system of any one of claims 16-19, wherein said objects are selected from the group consisting of: vehicles, trucks, roadway infrastructure elements, roadway signs, bridges, buildings, roadway hazards, pedestrians, bicycles, vulnerable road users, and animals.
21. The system of any one of claims 16-20, wherein said first and second sensor systems each comprise at least one of ultrasonic-based, camera-based, radar-based, and light detection and ranging (LiDAR)-based sensors.
22. The system of any one of claims 16-21, wherein said comparing comprises:
(i) assigning a scoring function to each of said targets and to each object in said second set of objects,
(ii) associating each of said targets with a corresponding object in said second set of objects, and
(iii) comparing said scoring functions assigned to each of said targets and its associated object.
23. The system of claim 22, wherein said scoring function is based, at least in part, on a location, type, timestamp, and trajectory of each of said targets and said objects.
24. The system of any one of claims 16-23, wherein said ROI is determined based, at least in part, on one of: a field of sensing of said second sensor system, a velocity of said VUT, a trajectory of said VUT, a braking distance of said VUT, a mass of said VUT, dimensions of said VUT, weather conditions, visibility conditions, roadway conditions, roadway friction coefficient, roadway speed limit, and roadway type.
25. The system of any one of claims 16-24, wherein said ROI is determined by applying a trained machine learning algorithm to said scene, wherein said machine learning algorithm is trained using a training set comprising:
(i) a plurality of roadway scenes comprising a VUT; and
(ii) labels associated with an ROI assigned to each VUT in each of said roadway scenes.
26. The system of any one of claims 16-25, wherein said information regarding said first set of objects comprises at least one of: a priority level associated with at least some of said objects, and a danger level associated with at least some of said objects.
27. The system of claim 26, wherein said priority level associated with at least one of said objects is determined based, at least in part, on an assessment of a deadliness associated with a possible collision between said VUT and said object, a source of said information with respect to said object, and a location of said object.
28. The system of claim 26, wherein said priority level associated with at least one of said objects is determined by applying a trained machine learning algorithm to said scene, wherein said machine learning algorithm is trained using a training set comprising:
(i) a plurality of roadway scenes, each comprising one or more objects; and
(ii) labels associated with a priority level assigned to each of said objects in each of said roadway scenes.
29. The system of claim 26, wherein said danger level associated with at least one of said objects is determined based, at least in part, on object motion trajectory, object perceived compliance with roadway rules, object speed variations, and object roadway lane adherence.
30. The system of claim 26, wherein said danger level associated with at least one of said objects is determined by applying a trained machine learning algorithm to said scene, wherein said machine learning algorithm is trained using a training set comprising:
(i) a plurality of roadway scenes, each comprising one or more objects; and
(ii) labels associated with a danger level assigned to each of said objects.
31. A computer program product comprising a non-transitory computer-readable storage medium having program instructions embodied therewith, the program instructions executable by at least one hardware processor to:
receive, from a first sensor system, information regarding a first set of objects within a scene;
receive, from a second sensor system, information regarding a second set of objects within said scene, wherein said second set of objects is located within a region of interest (ROI) of said second sensor system;
select, from said first set of objects, a subset of targets comprising those objects in said first set of objects that are located within said ROI; and
compare said subset of targets with said second set of objects to determine a functionality status of said second sensor system.
32. The computer program product of claim 31, wherein at least one of said first and second sensor systems are located on board a vehicle under testing (VUT).
33. The computer program product of claim 32, wherein said VUT is a moving VUT and said scene is a roadway scene.
34. The computer program product of any one of claims 31-33, wherein said comparing further comprises determining, with respect to said VUT, at least one of a position relative to each of said targets, a location relative to each of said targets, a trajectory relative to each of said targets, an orientation relative to each of said targets, VUT velocity, and VUT thrust line.
35. The computer program product of any one of claims 31-34, wherein said objects are selected from the group consisting of: vehicles, trucks, roadway infrastructure elements, roadway signs, bridges, buildings, roadway hazards, pedestrians, bicycles, vulnerable road users, and animals.
36. The computer program product of any one of claims 31-35, wherein said first and second sensor systems each comprise at least one of ultrasonic-based, camera-based, radar-based, and light detection and ranging (LiDAR)-based sensors.
37. The computer program product of any one of claims 31-36, wherein said comparing comprises:
(i) assigning a scoring function to each of said targets and to each object in said second set of objects,
(ii) associating each of said targets with a corresponding object in said second set of objects, and
(iii) comparing said scoring functions assigned to each of said targets and its associated object.
38. The computer program product of claim 37, wherein said scoring function is based, at least in part, on a location, type, timestamp, and trajectory of each of said targets and said objects.
39. The computer program product of any one of claims 31-38, wherein said ROI is determined based, at least in part, on one of: a field of sensing of said second sensor system, a velocity of said VUT, a trajectory of said VUT, a braking distance of said VUT, a mass of said VUT, dimensions of said VUT, weather conditions, visibility conditions, roadway conditions, roadway friction coefficient, roadway speed limit, and roadway type.
40. The computer program product of any one of claims 31-39, wherein said ROI is determined by applying a trained machine learning algorithm to said scene, wherein said machine learning algorithm is trained using a training set comprising:
(i) a plurality of roadway scenes comprising a VUT; and
(ii) labels associated with an ROI assigned to each VUT in each of said roadway scenes.
41. The computer program product of any one of claims 31-40, wherein said information regarding said first set of objects comprises at least one of: a priority level associated with at least some of said objects, and a danger level associated with at least some of said objects.
42. The computer program product of claim 41, wherein said priority level associated with at least one of said objects is determined based, at least in part, on an assessment of a deadliness associated with a possible collision between said VUT and said object, a source of said information with respect to said object, and a location of said object.
43. The computer program product of claim 41, wherein said priority level associated with at least one of said objects is determined by applying a trained machine learning algorithm to said scene, wherein said machine learning algorithm is trained using a training set comprising:
(i) a plurality of roadway scenes, each comprising one or more objects; and
(ii) labels associated with a priority level assigned to each of said objects in each of said roadway scenes.
44. The computer program product of claim 41, wherein said danger level associated with at least one of said objects is determined based, at least in part, on object motion trajectory, object perceived compliance with roadway rules, object speed variations, and object roadway lane adherence.
45. The computer program product of claim 41, wherein said danger level associated with at least one of said objects is determined by applying a trained machine learning algorithm to said scene, wherein said machine learning algorithm is trained using a training set comprising:
(i) a plurality of roadway scenes, each comprising one or more objects; and
(ii) labels associated with a danger level assigned to each of said objects.
46. A method comprising:
generating, by a testing system, a virtual scene comprising a plurality of virtual objects;
receiving an output from a sensor system representing a response of said sensor system to said virtual scene; and
comparing said output with a simulated reference response generated for said virtual scene, to determine a functionality status of said sensor system.
47. The method of claim 46, wherein said sensor system is located on board a vehicle under testing (VUT) and said virtual scene is a roadway scene.
48. The method of claim 47, wherein said VUT is a moving VUT.
49. The method of any one of claims 46-48, wherein said virtual objects are selected from the group consisting of: vehicles, trucks, roadway infrastructure elements, roadway signs, bridges, buildings, roadway hazards, pedestrians, bicycles, and animals.
50. The method of any one of claims 46-49, wherein at least some of said virtual objects are generated based, at least in part, on mapping data obtained from a mapping service.
51. The method of any one of claims 46-50, wherein said sensor system comprises at least one of ultrasonic-based, camera-based, radar-based, and light detection and ranging (LiDAR)-based sensors.
52. The method of any one of claims 46-51, wherein at least some of said virtual objects are generated by a simulated reflected signal.
53. The method of any one of claims 46-52, wherein at least some of said virtual objects are generated by a transceiver configured to receive a signal transmitted from a sensor of said sensor system, and transmit, based at least in part on said received signal, an output signal representative of a virtual target, such that at least a portion of the output signal is received by said sensor.
54. The method of any one of claims 46-52, wherein said comparing comprises comparing parameters of each of said virtual targets with parameters of each detected virtual target in said output of said sensor system.
55. The method of claim 54, wherein said parameters are selected from the group consisting of: virtual target location, virtual target velocity, virtual target trajectory, virtual target type, and virtual target timing.
56. A system comprising:
at least one hardware processor; and
a non-transitory computer-readable storage medium having stored thereon program instructions, the program instructions executable by the at least one hardware processor to:
generate, by a testing system, a virtual scene comprising a plurality of virtual objects,
receive an output from a sensor system representing a response of said sensor system to said virtual scene, and
compare said output with a simulated reference response generated for said virtual scene, to determine a functionality status of said sensor system.
57. The system of claim 56, wherein said sensor system is located on board a vehicle under testing (VUT) and said virtual scene is a roadway scene.
58. The system of claim 57, wherein said VUT is a moving VUT.
59. The system of any one of claims 56-58, wherein said virtual objects are selected from the group consisting of: vehicles, trucks, roadway infrastructure elements, roadway signs, bridges, buildings, roadway hazards, pedestrians, bicycles, and animals.
60. The system of any one of claims 56-59, wherein at least some of said virtual objects are generated based, at least in part, on mapping data obtained from a mapping service.
61. The system of any one of claims 56-60, wherein said sensor system comprises at least one of ultrasonic-based, camera-based, radar-based, and light detection and ranging (LiDAR)-based sensors.
62. The system of any one of claims 56-61, wherein at least some of said virtual objects are generated by a simulated reflected signal.
63. The system of any one of claims 56-62, wherein at least some of said virtual objects are generated by a transceiver configured to receive a signal transmitted from a sensor of said sensor system, and transmit, based at least in part on said received signal, an output signal representative of a virtual target, such that at least a portion of the output signal is received by said sensor.
64. The system of any one of claims 56-63, wherein said comparing comprises comparing parameters of each of said virtual targets with parameters of each detected virtual target in said output of said sensor system.
65. The system of claim 64, wherein said parameters are selected from the group consisting of: virtual target location, virtual target velocity, virtual target trajectory, virtual target type, and virtual target timing.
66. A computer program product comprising a non-transitory computer-readable storage medium having program instructions embodied therewith, the program instructions executable by at least one hardware processor to:
generate, by a testing system, a virtual scene comprising a plurality of virtual objects;
receive an output from a sensor system representing a response of said sensor system to said virtual scene; and
compare said output with a simulated reference response generated for said virtual scene, to determine a functionality status of said sensor system.
67. The computer program product of claim 66, wherein said sensor system is located on board a vehicle under testing (VUT) and said virtual scene is a roadway scene.
68. The computer program product of claim 67, wherein said VUT is a moving VUT.
69. The computer program product of any one of claims 66-68, wherein said virtual objects are selected from the group consisting of: vehicles, trucks, roadway infrastructure elements, roadway signs, bridges, buildings, roadway hazards, pedestrians, bicycles, and animals.
70. The computer program product of any one of claims 66-69, wherein at least some of said virtual objects are generated based, at least in part, on mapping data obtained from a mapping service.
71. The computer program product of any one of claims 66-70, wherein said sensor system comprises at least one of ultrasonic-based, camera-based, radar-based, and light detection and ranging (LiDAR)-based sensors.
72. The computer program product of any one of claims 66-71, wherein at least some of said virtual objects are generated by a simulated reflected signal.
73. The computer program product of any one of claims 66-72, wherein at least some of said virtual objects are generated by a transceiver configured to receive a signal transmitted from a sensor of said sensor system, and transmit, based at least in part on said received signal, an output signal representative of a virtual target, such that at least a portion of the output signal is received by said sensor.
74. The computer program product of any one of claims 66-73, wherein said comparing comprises comparing parameters of each of said virtual targets with parameters of each detected virtual target in said output of said sensor system.
75. The computer program product of claim 74, wherein said parameters are selected from the group consisting of: virtual target location, virtual target velocity, virtual target trajectory, virtual target type, and virtual target timing.
PCT/IL2019/051135 2018-10-19 2019-10-20 Adas systems functionality testing WO2020079698A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201862747673P 2018-10-19 2018-10-19
US62/747,673 2018-10-19
US201962796109P 2019-01-24 2019-01-24
US62/796,109 2019-01-24

Publications (1)

Publication Number Publication Date
WO2020079698A1 true WO2020079698A1 (en) 2020-04-23

Family

ID=70283735

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2019/051135 WO2020079698A1 (en) 2018-10-19 2019-10-20 Adas systems functionality testing

Country Status (1)

Country Link
WO (1) WO2020079698A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2484795A (en) * 2010-10-21 2012-04-25 Gm Global Tech Operations Inc Operation of a vehicle sensor
EP2490175A1 (en) * 2011-02-15 2012-08-22 Adasens Automotive S.L.U. Method for calibrating and/or aligning a camera mounted in an automobile vehicle and corresponding camera
WO2017079301A1 (en) * 2015-11-04 2017-05-11 Zoox, Inc. Calibration for autonomous vehicle operation
US20170169627A1 (en) * 2015-12-09 2017-06-15 Hyundai Motor Company Apparatus and method for failure diagnosis and calibration of sensors for advanced driver assistance systems

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210394794A1 (en) * 2020-06-22 2021-12-23 Zenuity Ab Assessment of a vehicle control system
CN112612261A (en) * 2020-12-21 2021-04-06 广州小鹏自动驾驶科技有限公司 Simulation test system and method for assisting lane change
EP4033460A1 (en) * 2021-01-22 2022-07-27 Aptiv Technologies Limited Data recording for adas testing and validation
CN113009900A (en) * 2021-02-06 2021-06-22 武汉光庭信息技术股份有限公司 Hardware-in-loop simulation system of ADAS controller
EP4047515A1 (en) * 2021-02-19 2022-08-24 Zenseact AB Platform for perception system development for automated driving systems
EP4047514A1 (en) * 2021-02-19 2022-08-24 Zenseact AB Platform for perception system development for automated driving system
US11983918B2 (en) 2021-02-19 2024-05-14 Zenseact Ab Platform for perception system development for automated driving system
CN113254283A (en) * 2021-05-20 2021-08-13 中国兵器装备集团自动化研究所有限公司 Multi-CAN port test system optimization method, device, equipment and storage medium
CN113155173A (en) * 2021-06-02 2021-07-23 福瑞泰克智能系统有限公司 Perception performance evaluation method and device, electronic device and storage medium
CN113155173B (en) * 2021-06-02 2022-08-30 福瑞泰克智能系统有限公司 Perception performance evaluation method and device, electronic device and storage medium
CN113946146A (en) * 2021-08-03 2022-01-18 上海和夏新能源科技有限公司 Intelligent driving and ADAS test data acquisition system and method with scene data
CN114679581A (en) * 2022-03-17 2022-06-28 福思(杭州)智能科技有限公司 Image data processing method, system, electronic device and storage medium
US20230417888A1 (en) * 2022-06-22 2023-12-28 Nxp B.V. Object detection system and method for identifying faults therein
EP4385848A1 (en) * 2022-12-14 2024-06-19 Aptiv Technologies AG Perception sensor processing method and processing unit for performing the same
WO2024215849A1 (en) * 2023-04-13 2024-10-17 Atieva, Inc. Positioning application for adas calibration target
CN116147686A (en) * 2023-04-19 2023-05-23 江西鼎铁自动化科技有限公司 Automobile ADAS calibration method, system, computer and storage medium

Similar Documents

Publication Publication Date Title
WO2020079698A1 (en) Adas systems functionality testing
US10755007B2 (en) Mixed reality simulation system for testing vehicle control system designs
CN112740188B (en) Log-based simulation using bias
US20180267538A1 (en) Log-Based Vehicle Control System Verification
CN111795832B (en) Intelligent driving vehicle testing method, device and equipment
US11042758B2 (en) Vehicle image generation
US11897505B2 (en) In-vehicle operation of simulation scenarios during autonomous vehicle runs
Belbachir et al. Simulation-driven validation of advanced driving-assistance systems
CN114510018B (en) Metric back propagation for subsystem performance evaluation
US20220198107A1 (en) Simulations for evaluating driving behaviors of autonomous vehicles
US20220204009A1 (en) Simulations of sensor behavior in an autonomous vehicle
US11270164B1 (en) Vehicle neural network
US20220073104A1 (en) Traffic accident management device and traffic accident management method
KR20230159308A (en) Method, system and computer program product for calibrating and validating an advanced driver assistance system (adas) and/or an automated driving system (ads)
CN111094095A (en) Automatically receiving a travel signal
Bruggner et al. Model in the loop testing and validation of embedded autonomous driving algorithms
US10991178B2 (en) Systems and methods for trailer safety compliance
CN116466697A (en) Method, system and storage medium for a vehicle
US11610412B2 (en) Vehicle neural network training
CN116710809A (en) System and method for monitoring LiDAR sensor health
US20240160804A1 (en) Surrogate model for vehicle simulation
WO2023021755A1 (en) Information processing device, information processing system, model, and model generation method
WO2024044305A1 (en) Efficient and optimal feature extraction from observations
US20230356733A1 (en) Increasing autonomous vehicle log data usefulness via perturbation
CN117008574A (en) Intelligent network allies oneself with car advanced auxiliary driving system and autopilot system test platform

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19874730

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19874730

Country of ref document: EP

Kind code of ref document: A1