CN114511833A - Assistance system, method and storage medium for training an image recognition model - Google Patents
- Publication number
- CN114511833A (application CN202011180369.7A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- queue
- image
- time
- risk level
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06N3/04 — Computing arrangements based on biological models; neural networks; architecture, e.g. interconnection topology
- G06N3/08 — Computing arrangements based on biological models; neural networks; learning methods
- G06Q10/04 — Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
Abstract
The invention provides an assistance system for a vehicle, a system for training an image recognition model, a vehicle comprising the same, and corresponding methods, computer devices and computer-readable storage media. The assistance system includes: a real-time image acquisition unit configured to acquire one or more real-time images of a cut-in vehicle in the vicinity of the current vehicle; a real-time image analysis unit configured to analyze the real-time images using a pre-trained image recognition model to determine a risk level of the cut-in behavior of the cut-in vehicle; and a control unit configured to send, according to the risk level, reminder information and/or control information for controlling the current vehicle and/or the cut-in vehicle to adjust its driving state. With the scheme of the invention, the risk level of a cut-in vehicle's cut-in behavior can be determined and, in response to that risk level, the vehicles can be alerted or their driving states adjusted, thereby improving driving safety.
Description
Technical Field
The present invention relates to the field of vehicle technology, and more particularly to an assistance system for a vehicle, a system for training an image recognition model, corresponding methods, computer devices and computer-readable storage media.
Background
While a vehicle is driving on a road, other vehicles may change lanes or cut in ahead of it. Because existing vehicle radar systems have detection blind spots, and the intelligent assistance systems of some vehicles fail to provide timely cut-in warnings and corresponding control, the current vehicle may cause a traffic accident because it detects other vehicles' cut-in behavior inaccurately or responds too late.
It is therefore desirable to provide a solution that can more efficiently determine the risk level of other vehicles' cut-in behavior.
Disclosure of Invention
To solve the above technical problem, the present invention proposes at least an assistance system for a vehicle that can determine the risk level of a cut-in vehicle's cut-in behavior based on acquired images of that vehicle, and can alert the vehicle and/or adjust its driving state in response to the determined risk level, thereby improving driving safety.
According to a first aspect of the present invention, an assistance system for a vehicle is provided, wherein the assistance system comprises:
a real-time image acquisition unit configured to acquire one or more real-time images of a cut-in vehicle in the vicinity of the current vehicle;
a real-time image analysis unit configured to analyze the real-time images using a pre-trained image recognition model to determine a risk level of the cut-in behavior of the cut-in vehicle; and
a control unit configured to send, according to the risk level, reminder information and/or control information for controlling the current vehicle and/or the cut-in vehicle to adjust its driving state.
In one embodiment, the real-time image analysis unit is further configured to:
inputting the acquired real-time images into the pre-trained image recognition model;
searching the pre-trained image recognition model for a historical image matching the acquired real-time images;
if a matching historical image is found, determining the risk level of the cut-in vehicle's cut-in behavior from the risk level associated with the matching historical image; or,
if no matching historical image is found, analyzing the acquired real-time images to determine driving data of the current vehicle and the cut-in vehicle, simulating, from that driving data, the trajectories of both vehicles over a set future time period, predicting whether they will collide within that period, and determining the risk level of the cut-in behavior at least from the prediction result.
In one embodiment, the control unit is further configured to:
when the risk level is lower than or equal to a set risk threshold, send reminder information indicating the risk level of the cut-in behavior to the current vehicle and/or the cut-in vehicle; and/or
when the risk level is higher than the set risk threshold, send control information for controlling the current vehicle and/or the cut-in vehicle to adjust its driving state.
According to a second aspect of the present invention, there is provided an assist method for a vehicle, wherein the assist method includes:
acquiring one or more real-time images of a cut-in vehicle in the vicinity of the current vehicle;
analyzing the real-time images using a pre-trained image recognition model to determine a risk level of the cut-in behavior of the cut-in vehicle; and
sending, according to the risk level, reminder information and/or control information for controlling the current vehicle and/or the cut-in vehicle to adjust its driving state.
In one embodiment, analyzing the real-time images using a pre-trained image recognition model to determine a risk level of the cut-in behavior of the cut-in vehicle further comprises:
inputting the acquired real-time images into the pre-trained image recognition model;
searching the pre-trained image recognition model for a historical image matching the acquired real-time images;
if a matching historical image is found, determining the risk level of the cut-in vehicle's cut-in behavior from the risk level associated with the matching historical image; or,
if no matching historical image is found, analyzing the acquired real-time images to determine driving data of the current vehicle and the cut-in vehicle, simulating, from that driving data, the trajectories of both vehicles over a set future time period, predicting whether they will collide within that period, and determining the risk level of the cut-in behavior at least from the prediction result.
In one embodiment, sending reminder information to the current vehicle and/or the cut-in vehicle and/or sending information controlling them to adjust their driving state according to the risk level further comprises:
when the risk level is lower than or equal to a set risk threshold, sending reminder information indicating the risk level of the cut-in behavior to the current vehicle and/or the cut-in vehicle; and/or
when the risk level is higher than the set risk threshold, sending control information for controlling the current vehicle and/or the cut-in vehicle to adjust its driving state.
According to a third aspect of the present invention, there is provided a system for training an image recognition model, comprising:
a historical image acquisition unit configured to acquire one or more historical images of one or more host vehicles and cut-in vehicles in their vicinity;
a historical image analysis unit configured to analyze the historical images to determine the respective driving data of the host vehicle and the cut-in vehicle and result data indicating whether they collided, and to assign a corresponding risk level to each historical image at least according to the result data; and
an image recognition model training unit configured to train the image recognition model based at least on the historical images and their corresponding risk levels.
In one embodiment, the historical image analysis unit is further configured to:
if it is determined from the result data that the host vehicle and the cut-in vehicle in the historical image collided, record the collision position and collision time and set the risk level of the historical image according to them; and/or
if it is determined from the result data that the host vehicle and the cut-in vehicle in the historical image did not collide, determine from the historical image the closest position between the two vehicles and the time at which it was reached, and set the risk level of the historical image according to that closest position and time.
In one embodiment, the historical image analysis unit is further configured to:
if the historical image does not include images sufficient to determine whether the host vehicle and the cut-in vehicle collided, simulate the driving trajectories of the two vehicles in the historical image to predict whether they collided.
According to a fourth aspect of the present invention, there is provided a method for training an image recognition model, comprising:
acquiring one or more historical images of one or more host vehicles and cut-in vehicles in their vicinity;
analyzing the historical images to determine the respective driving data of the host vehicle and the cut-in vehicle and result data indicating whether they collided, and assigning a corresponding risk level to each historical image at least according to the result data; and
training the image recognition model based at least on the historical images and their corresponding risk levels.
In one embodiment, analyzing the historical images to determine the respective driving data of the host vehicle and the cut-in vehicle and result data indicating whether they collided further comprises:
if it is determined from the result data that the host vehicle and the cut-in vehicle in the historical image collided, recording the collision position and collision time and setting the risk level of the historical image according to them; and/or
if it is determined from the result data that the host vehicle and the cut-in vehicle in the historical image did not collide, determining from the historical image the closest position between the two vehicles and the time at which it was reached, and setting the risk level of the historical image according to that closest position and time.
In one embodiment, the analyzing further comprises:
if the historical image does not include images sufficient to determine whether the host vehicle and the cut-in vehicle collided, simulating the driving trajectories of the two vehicles in the historical image to predict whether they collided.
According to a fifth aspect of the invention, a vehicle is provided which comprises the assistance system according to the invention.
According to a sixth aspect of the invention, there is provided a computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the method of the invention when executing the computer program.
According to a seventh aspect of the invention, a computer-readable storage medium is provided, on which a computer program is stored, wherein the computer program, when being executed by a processor, carries out the method of the invention.
In some of the schemes provided by the invention, the pre-trained image recognition model can be generated using the disclosed system or method for training an image recognition model, and can be trained on historical images of real road traffic, which ensures the reliability of the model. The assistance system can then use the model to rapidly analyze acquired real-time images and determine the risk level of a cut-in behavior, greatly improving the accuracy and efficiency of risk assessment. The scheme thus helps the vehicle respond in time to the determined risk level, either by alerting the vehicle or by adjusting its driving state, thereby improving driving safety.
Drawings
Non-limiting and non-exhaustive embodiments of the present invention are described, by way of example, with reference to the following drawings, in which:
FIG. 1 is a schematic diagram illustrating one example application scenario to which the present invention is applicable;
FIG. 2 is a schematic diagram illustrating an assistance system for a vehicle according to one embodiment of the present invention;
FIG. 3 is a flow chart illustrating an assistance method for a vehicle according to one embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating a system for training an image recognition model according to one embodiment of the present invention;
FIG. 5 is a flow diagram illustrating a method for training an image recognition model according to one embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
FIG. 1 illustrates an example scenario with a current vehicle and a cut-in vehicle about to cut in, a situation that presents a potential collision hazard. The assistance system disclosed herein may be provided, in part or in whole, on the current vehicle, or on an intelligent terminal device or a remote server in communication with it, so that the current vehicle can analyze acquired real-time images of the cut-in vehicle using a pre-trained image recognition model to determine the risk level of its cut-in behavior. The pre-trained image recognition model can be obtained by training with the system for training an image recognition model disclosed herein. In one embodiment, a receiving means may be provided on the current vehicle for receiving the pre-trained image recognition model, for example through an application (app) installed on an in-vehicle terminal or on a smart device of the vehicle user. The current vehicle and/or the cut-in vehicle may be manually driven or autonomous. A number of sensors and positioning devices may be mounted on the vehicle, such as cameras, lidar, millimeter-wave radar, ultrasonic sensors, vehicle-to-everything (V2X) communication devices, and a highly automated driving (HAD) map. These sensors can detect the environment surrounding the vehicle, such as surrounding objects, obstacles and infrastructure; for example, they can detect the driving data of other nearby vehicles and, based on that data, determine whether any of them intends to cut in ahead of the current vehicle.
The following driving data relating to the vehicles can be acquired by means of the sensors, the positioning devices and/or through communication with an online server: vehicle position, turn-signal information, heading, lateral and longitudinal speed, acceleration, lane marking information, and/or the lateral distance between the current vehicle and the cut-in vehicle.
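As an illustration only, the driving data enumerated above could be gathered into a simple container type; the field names and units below are assumptions for the sketch, not terms from the patent:

```python
from dataclasses import dataclass

@dataclass
class DrivingData:
    """Hypothetical container for the driving data listed above;
    field names and units are illustrative assumptions."""
    position: tuple          # (x, y) in meters, e.g. from GPS/HAD map
    turn_signal: str         # "left", "right" or "off"
    heading: float           # heading angle in degrees
    lateral_speed: float     # m/s
    longitudinal_speed: float  # m/s
    acceleration: float      # m/s^2
    lane_id: str             # lane marking / lane identification
    lateral_distance: float  # lateral gap to the cut-in vehicle, m
```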
Fig. 2 shows an assistance system 200 for a vehicle according to an embodiment of the invention. As shown in fig. 2, the assistance system 200 includes a real-time image acquisition unit 210, a real-time image analysis unit 220, and a control unit 230.
The real-time image acquisition unit 210 may be configured to acquire one or more real-time images of a cut-in vehicle in the vicinity of the current vehicle. In one embodiment, the unit may obtain the real-time images from a camera: for example a camera mounted on the current vehicle; a camera mounted on another vehicle that can capture the current vehicle's surroundings and communicate with it; or a camera on infrastructure that can communicate with the current vehicle (e.g. a road-mounted monitoring camera). Preferably, the real-time images are acquired through one or more cameras mounted on the current vehicle; when several cameras are used, they may be placed at the front, both sides, top and rear of the vehicle. The real-time images may include, but are not limited to, pictures or video. The "current vehicle" does not refer to one particular vehicle; any vehicle that may be cut in on while driving on a road can serve as the current vehicle here. The cut-in vehicle is a vehicle actively cutting in (e.g. merging or squeezing in) ahead of the current vehicle. The vicinity of the current vehicle is a preset distance range around it, for example within 3 m or 5 m, though this is not limiting and may be set as circumstances require.
The real-time image analysis unit 220 may be configured to analyze the real-time images using a pre-trained image recognition model to determine the risk level of the cut-in behavior of the cut-in vehicle.
In one embodiment, the acquired real-time images may be input into the pre-trained image recognition model, which is then searched for historical images matching them.
In one embodiment, the image recognition model may extract feature images from the input real-time image and then screen the historical images based on the extracted features to find a match. The model may, for example, be based on a neural network: the network extracts feature images from the input real-time image, classifies them, and matches corresponding historical images based on the classification result. "Matching" means that the real-time image and the historical image show the same or similar relative positions and relative speeds; specifically, for example, the relative position and relative speed of the current vehicle and the cut-in vehicle in the real-time image are the same as or similar to those of the corresponding vehicles in the historical image.
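The matching step described above, screening stored historical examples for the closest one to the features extracted from the real-time image, could be sketched as follows. The patent does not fix a concrete matching metric; cosine similarity over feature vectors and the 0.9 threshold are assumptions:

```python
import numpy as np

def match_historical_image(realtime_feat, historical_feats, threshold=0.9):
    """Illustrative nearest-neighbour matching: compare a feature vector
    from the real-time image against stored historical feature vectors
    using cosine similarity (an assumed metric, not a patent detail)."""
    best_idx, best_sim = None, -1.0
    q = realtime_feat / np.linalg.norm(realtime_feat)
    for i, h in enumerate(historical_feats):
        sim = float(q @ (h / np.linalg.norm(h)))
        if sim > best_sim:
            best_idx, best_sim = i, sim
    # Only treat it as a "match" if similarity clears the threshold.
    return best_idx if best_sim >= threshold else None
```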
Depending on the situation, the real-time image analysis unit 220 may or may not find a history image matching the real-time image.
In one case, if a historical image matching the real-time image is found, the risk level of the cut-in vehicle's cut-in behavior is determined from the risk level associated with the matching historical image. The risk levels of the historical images can be divided in many possible ways; for example, into levels I, II and III, where level I indicates that a collision between the cut-in vehicle and the current vehicle is unlikely (e.g. a collision probability of 10%), level II that a collision is uncertain (e.g. 50%), and level III that a collision is likely (e.g. 80%).
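A minimal sketch of mapping a collision probability to the three example levels; the cut-off values 0.3 and 0.7 are assumptions chosen to bracket the example probabilities of 10%, 50% and 80%:

```python
def risk_level(collision_probability):
    """Map a collision probability to the illustrative levels I/II/III.
    Cut-offs are assumptions; the patent only gives example probabilities."""
    if collision_probability < 0.3:
        return "I"    # collision unlikely
    elif collision_probability < 0.7:
        return "II"   # collision uncertain
    else:
        return "III"  # collision likely
```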
In the other case, if no matching historical image is found, the acquired real-time images are analyzed to determine the driving data of the current vehicle and the cut-in vehicle; the trajectories of both vehicles over a set future time period are then simulated from that driving data to predict whether they will collide within that period, and the risk level of the cut-in behavior is determined at least from the prediction result. Here, the real-time image analysis unit 220 may analyze the acquired real-time images with the help of an online server of the traffic management department, its own processing module, or another suitable processing module, in order to acquire the driving data and run the trajectory simulation. The real-time image is preferably a video that records the driving data of the current vehicle and/or the cut-in vehicle. The future time period may be adjusted to circumstances: when the cut-in vehicle's speed is high, the period may be set short, and conversely long. The prediction result may be either collision or no collision, with collision corresponding to the higher risk level.
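The trajectory simulation and collision prediction described above might, under strong simplifying assumptions (constant-velocity motion, a fixed horizon, a fixed safety gap), look like this sketch; none of these modeling choices are specified by the patent:

```python
def predict_collision(host, cut_in, horizon=3.0, dt=0.1, safe_gap=2.0):
    """Minimal constant-velocity trajectory simulation.  `host` and
    `cut_in` are (x, y, vx, vy) tuples in meters and m/s.  The motion
    model, 3 s horizon and 2 m safety gap are illustrative assumptions."""
    hx, hy, hvx, hvy = host
    cx, cy, cvx, cvy = cut_in
    t = 0.0
    while t <= horizon:
        dx = (hx + hvx * t) - (cx + cvx * t)
        dy = (hy + hvy * t) - (cy + cvy * t)
        if (dx * dx + dy * dy) ** 0.5 < safe_gap:
            return True, t   # predicted collision and its time
        t += dt
    return False, None
```

In a real system the simulated trajectories would also use heading, acceleration and lane information from the driving data rather than a straight-line model.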
The control unit 230 may be configured to send, according to the risk level, reminder information to the current vehicle and/or the cut-in vehicle and/or information controlling them to adjust their driving state.
In one embodiment, the control unit 230 may be further configured to: when the risk level is lower than or equal to a set risk threshold, send reminder information indicating the risk level of the cut-in behavior to the current vehicle and/or the cut-in vehicle; and/or, when the risk level is higher than the set risk threshold, send control information for controlling the current vehicle and/or the cut-in vehicle to adjust its driving state. The risk threshold may be set according to the risk levels or according to empirical values; alternatively, both the risk level and the threshold may be expressed as probabilities. For example, with a threshold of 45%: when the risk level is at most 45%, the cut-in can be allowed without changing the current vehicle's driving state, and only reminder information is sent; when the risk level exceeds 45%, control information must be sent to make the current vehicle decelerate, or even stop, or change direction. The reminder and control information can be presented in many ways. For the user of a non-autonomous vehicle, one or any combination of the following may be used: voice broadcast, holographic projection, augmented reality display, display on an in-vehicle terminal, or display on a portable smart device worn by the user (smartphone, smartwatch, smart wristband). For an autonomous vehicle, a carrier signal carrying the reminder information can be sent to the vehicle by wired or wireless transmission, the latter for example over a mobile network or Wi-Fi.
The control information may include instructions for controlling the current vehicle to run normally, decelerate, emergency brake or steer, etc.
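The threshold-based decision described above, with the 45% example threshold, can be sketched as follows; the returned action structure and command names are purely illustrative:

```python
def decide_action(risk, threshold=0.45):
    """At or below the threshold only a reminder is issued; above it,
    control information (e.g. decelerate) is sent.  The 45% default
    follows the example in the text; action names are assumptions."""
    if risk <= threshold:
        return {"type": "reminder", "message": f"cut-in risk {risk:.0%}"}
    return {"type": "control", "command": "decelerate"}
```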
Fig. 3 shows an assistance method 300 for a vehicle according to an embodiment of the invention. As shown in fig. 3, the assist method 300 for a vehicle includes:
S310, acquiring one or more real-time images of a cut-in vehicle near the current vehicle;
S320, analyzing the real-time images using a pre-trained image recognition model to determine the risk level of the cut-in behavior of the cut-in vehicle;
S330, sending, according to the risk level, reminder information and/or control information for controlling the current vehicle and/or the cut-in vehicle to adjust its driving state.
It should be understood that the assistance method 300 for a vehicle can be performed by the assistance system 200 described above. The specific features described above in relation to the assistance system apply analogously to the assistance method; for brevity they are not repeated here.
FIG. 4 illustrates a system 400 for training an image recognition model according to an embodiment of the present invention. The system 400 includes a historical image acquisition unit 410, a historical image analysis unit 420, and an image recognition model training unit 430, which are communicatively coupled to one another. The system 400 may be located, in part or in whole, on a vehicle or, preferably, on a server that can communicate with the vehicle.
Specifically, the historical image acquisition unit 410 may be configured to acquire one or more historical images of one or more host vehicles and cut-in vehicles in their vicinity.
In one embodiment, the historical image acquisition unit 410 may acquire the historical images in any one or more of several possible ways: (1) the unit communicates with cameras on available infrastructure, e.g. via a mobile network, and acquires the historical images from them; (2) the unit connects, by wire or wirelessly, to a source that can provide road image information, such as an online server of the traffic management department, and acquires the historical images from that source; (3) the unit communicates with other vehicles traveling on the road, e.g. via a mobile network or Wi-Fi, and acquires the historical images from them. The historical images may comprise pictures or video, preferably video.
The historical image analysis unit 420 may be configured to analyze the historical images to determine the respective driving data of the host vehicle and the queue-inserting vehicle, and result data indicating whether the host vehicle and the queue-inserting vehicle collided, and to set a corresponding risk level for the historical images according to at least the result data.
In one embodiment, the historical image analysis unit 420 may analyze the acquired historical images by means of the traffic management department's online server, its own processing module, or another suitable processing module, to obtain the driving data and the result data. Taking a traffic monitoring video as an example of the historical image, the acquired driving data may include the complete dynamic parameters of the queue-insertion process, such as, but not limited to, heading, speed, acceleration, identification information of the lane in which the host vehicle or the queue-inserting vehicle is located, and/or the change in lateral distance between the host vehicle and the queue-inserting vehicle. Optionally, a corresponding risk level may be set for the historical image based on the driving data, for example by determining whether the speed of the queue-inserting vehicle exceeds a preset speed threshold: if it does, a higher risk level may be set for the historical image; otherwise, a lower risk level may be set.
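By way of illustration only, the speed-threshold rule above might be sketched as follows in Python; the threshold value and the two-level scale are assumptions of this sketch, not values defined by the embodiment:

```python
# Illustrative sketch of the speed-threshold labelling rule described above.
# The threshold value and the two-level scale are assumed, not from the text.
SPEED_THRESHOLD_KMH = 60.0  # assumed preset speed threshold

def risk_level_from_speed(queue_insert_speed_kmh: float) -> str:
    """Label a historical image from the queue-inserting vehicle's speed:
    greater than the preset threshold -> higher risk level, else lower."""
    if queue_insert_speed_kmh > SPEED_THRESHOLD_KMH:
        return "high"
    return "low"
```

In practice such a rule would only supplement the result-data-based labelling described below.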
The result data may include information representing a collision or a non-collision. In one case, if it is determined from the result data that the host vehicle and the queue-inserting vehicle in the historical image collided, the collision position and the collision time may be recorded, and the risk level of the historical image may be set according to the collision position and the collision time. Both the collision position and the collision time may be determined by analyzing the historical image.
Taking a traffic monitoring video as an example of the historical image, the historical image analysis unit 420 may perform image segmentation and feature extraction on the video frames to determine the respective collision positions on the host vehicle and the queue-inserting vehicle. The collision time may be determined from the time information of the video, for example by calculating the difference between the point in time at which the queue-inserting vehicle collides with the host vehicle and the point in time at which it begins the queue-insertion maneuver. The risk level may then be set as follows: the closer the collision position is to a vehicle door, the higher the risk level, and vice versa; the shorter the collision time, the higher the risk level, and vice versa; or the greater the collision speed, the higher the risk level, and vice versa.
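The three monotonic rules above could, for instance, be folded into a single score as sketched below; the particular weights and the additive combination are assumptions of this sketch:

```python
def collision_risk_score(dist_to_door_m: float,
                         collision_time_s: float,
                         collision_speed_mps: float) -> float:
    """Monotonic risk score for a collision case: rises as the impact point
    nears a vehicle door, as the collision time shortens, and as the
    collision speed grows. The weights are illustrative assumptions."""
    score = 1.0 / (1.0 + dist_to_door_m)      # nearer the door -> higher
    score += 1.0 / (1.0 + collision_time_s)   # shorter time -> higher
    score += collision_speed_mps / 30.0       # faster impact -> higher
    return score
```

A continuous score like this could then be bucketed into the discrete risk levels used elsewhere in the embodiment.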
In another case, if it is determined from the result data that the host vehicle and the queue-inserting vehicle in the historical image did not collide, the closest position between the two vehicles and the time at which each reaches that position are determined from the historical image, and the risk level of the historical image is set according to the closest position and the time to reach it. Here, the historical image analysis unit 420 may obtain the driving data of the host vehicle and the queue-inserting vehicle by analyzing the historical image, and calculate from the driving data the closest position and the respective times at which the two vehicles reach it. The risk level may be set as follows: the closer together the times at which the host vehicle and the queue-inserting vehicle reach the closest position, the higher the risk level, and vice versa; the closer the closest position is to the host vehicle's current position, the higher the risk level, and vice versa.
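Under a constant-velocity assumption (a simplification not stated in the embodiment), the closest position and the time to reach it can be computed in closed form from the driving data, e.g.:

```python
import math

def closest_approach(p_host, v_host, p_cut, v_cut):
    """Time of closest approach and minimum separation between the host
    vehicle and the queue-inserting vehicle, given 2-D positions and
    velocities estimated from the driving data and assumed constant."""
    rx, ry = p_cut[0] - p_host[0], p_cut[1] - p_host[1]   # relative position
    vx, vy = v_cut[0] - v_host[0], v_cut[1] - v_host[1]   # relative velocity
    v2 = vx * vx + vy * vy
    # Closest approach at t = -(r . v) / |v|^2, clamped to the future.
    t = 0.0 if v2 == 0.0 else max(0.0, -(rx * vx + ry * vy) / v2)
    dx, dy = rx + vx * t, ry + vy * t
    return t, math.hypot(dx, dy)
```

A short time to closest approach and a small minimum separation would then map to a higher risk level, per the rules above.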
In an embodiment, the historical image analysis unit 420 is further configured to: if the historical image does not include an image that can be used to determine whether the host vehicle and the queue-inserting vehicle collide, simulate the travel trajectories of the two vehicles in the historical image to predict whether they collide. Here, the historical image analysis unit 420 may analyze the historical image by means of a traffic management online server, its own processing module, or another applicable processing module, to obtain the driving data and perform the trajectory simulation. For a description of the driving data, refer to the above. If the travel trajectories intersect, the host vehicle and the queue-inserting vehicle may be considered to collide; otherwise, they may be considered not to collide.

The image recognition model training unit 430 is configured to train and generate the image recognition model based at least on the determined historical images and their corresponding risk levels.
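The trajectory-based collision prediction described above could be approximated as below; sampling both simulated trajectories at common time steps and using a proximity threshold as a stand-in for the "trajectories intersect" criterion are both assumptions of this sketch:

```python
import math

def trajectories_collide(traj_host, traj_cut, collision_dist_m=2.0):
    """Predict a collision if the two simulated trajectories -- lists of
    (x, y) points sampled at the same time steps -- ever come within
    collision_dist_m of each other (an assumed proxy for intersecting
    travel trajectories)."""
    for (hx, hy), (cx, cy) in zip(traj_host, traj_cut):
        if math.hypot(hx - cx, hy - cy) <= collision_dist_m:
            return True
    return False
```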
The relevant extension of the image recognition model may refer to the embodiment described in connection with fig. 2, and will not be described here in detail. As known to those skilled in the art, the image recognition model may also be understood as a model defined by a machine learning algorithm for image recognition (e.g., a neural network image recognition model or other known types of machine learning models) capable of performing the functions defined herein.
In an alternative embodiment, the system 400 for training an image recognition model may be applied in conjunction with the assistance system 200. For example, when the image recognition model is trained and applied to the assistance system 200 according to the present invention, if the real-time images acquired by the assistance system include images that do not match the historical images in the image recognition model, the system 400 for training the image recognition model may perform training based on the real-time images to optimize the image recognition model.
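A minimal sketch of this feedback loop follows; all four callables are hypothetical placeholders for the units of systems 200 and 400, not interfaces defined by the embodiment:

```python
def refine_model(model, real_time_image, find_match, label_image, retrain):
    """Sketch of the joint use of systems 200 and 400: when a real-time
    image has no matching historical image in the model, label it and
    retrain. find_match, label_image and retrain are hypothetical
    placeholders for the matching, analysis and training units."""
    if find_match(model, real_time_image) is None:
        risk_level = label_image(real_time_image)                 # analysis
        model = retrain(model, [(real_time_image, risk_level)])   # training
    return model
```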
FIG. 5 illustrates a method 500 for training an image recognition model according to an embodiment of the present invention. As shown in fig. 5, the method 500 for training an image recognition model includes:
S510, acquiring one or more historical images of one or more host vehicles and the queue-inserting vehicles near them;
S520, analyzing the historical images to determine the respective driving data of the host vehicle and the queue-inserting vehicle and result data of whether they collided, and setting corresponding risk levels for the historical images according to at least the result data;
S530, training and generating the image recognition model based at least on the determined historical images and their corresponding risk levels.
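Purely as a toy illustration of how the three steps chain together: the record layout, the labelling callable, and the lookup-table "model" below are all assumptions of this sketch (a real implementation would train, e.g., a neural-network image recognition model):

```python
def train_pipeline(historical_records, set_risk_level):
    """Toy end-to-end sketch of method 500. historical_records are assumed
    already acquired (S510); set_risk_level stands in for the analysis and
    labelling of S520; the returned feature -> risk lookup table stands in
    for the trained recognition model of S530."""
    labelled = [(rec, set_risk_level(rec)) for rec in historical_records]  # S520
    model = {rec["feature"]: risk for rec, risk in labelled}               # S530
    return model
```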
It should be appreciated that the method 500 for training an image recognition model may be performed by the system 400 for training an image recognition model described above. The specific features described above with respect to that system, and their extensions, apply similarly to the method; for the sake of brevity they are not repeated here.
It should be understood that the various units of the assistance system and of the system for training an image recognition model of the present invention may be implemented in whole or in part by software, hardware, firmware, or a combination thereof. The units may be embedded in a processor of a computer device in hardware or firmware form, or be independent of the processor, or may be stored in a memory of the computer device in software form to be called by the processor to execute their operations. Each unit may be implemented as a separate component or module, or two or more units may be implemented as a single component or module.
It will be appreciated by those skilled in the art that the schematic diagram of the assistance system shown in fig. 2 and the schematic diagram of the system for training an image recognition model shown in fig. 4 are merely exemplary illustrative block diagrams of partial structures associated with aspects of the present invention and do not constitute a limitation of a computer device, processor or computer program embodying aspects of the present invention. A particular computer device, processor or computer program may include more or fewer components or modules than shown in the figures, or may combine or split certain components or modules, or may have a different arrangement of components or modules.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored thereon computer instructions executable by the processor; the computer instructions, when executed by the processor, instruct the processor to perform the steps of the assistance method or of the method of training an image recognition model of the invention. The computer device may broadly be a server, a vehicle-mounted terminal, or any other electronic device having the necessary computing and/or processing capabilities. In one embodiment, the computer device may include a processor, a memory, a network interface, and a communication interface connected by a system bus. The processor of the computer device provides the necessary computing, processing and/or control capabilities. The memory of the computer device may include a non-volatile storage medium and internal memory. An operating system, a computer program, and the like may be stored in or on the non-volatile storage medium, and the internal memory may provide an environment in which the operating system and the computer program run. The network interface and the communication interface of the computer device may be used to connect and communicate with external devices via a network. The computer program, when executed by the processor, performs the steps of the assistance method or of the method of training an image recognition model of the invention.
The invention may be implemented as a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, causes the steps of the method of the invention to be performed. In one embodiment, the computer program is distributed across a plurality of computer devices or processors coupled by a network such that the computer program is stored, accessed, and executed by one or more computer devices or processors in a distributed fashion. One or more method steps/operations may be performed by one or more computer devices or processors, and one or more other method steps/operations may be performed by one or more other computer devices or processors. One or more computer devices or processors may perform a single method step/operation, or two or more method steps/operations.
It will be understood by those skilled in the art that all or part of the steps of the assistance method and of the method for training an image recognition model of the present invention may be performed through a computer program instructing related hardware such as a computer device or a processor; the computer program may be stored in a non-transitory computer-readable storage medium, and when executed causes the steps of the assistance method or of the method for training an image recognition model of the present invention to be performed. Any reference herein to memory, storage, databases, or other media may include non-volatile and/or volatile memory, as appropriate. Examples of non-volatile memory include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), flash memory, magnetic tape, floppy disk, magnetic storage device, optical storage device, hard disk, solid-state disk, and the like. Examples of volatile memory include random access memory (RAM), external cache memory, and the like.
The respective technical features described above may be arbitrarily combined. Although not all possible combinations of features are described, any combination of features should be considered to be covered by the present specification as long as there is no contradiction between such combinations.
While the invention has been described in connection with the embodiments, it is to be understood by those skilled in the art that the foregoing description and drawings are merely illustrative and not restrictive of the broad invention, and that this invention is not limited to the disclosed embodiments. Various modifications and variations are possible without departing from the spirit of the invention.
Claims (15)
1. An assistance system for a vehicle, characterized in that the assistance system comprises:
a real-time image acquisition unit configured to acquire one or more real-time images of a queue-inserting vehicle in the vicinity of a current vehicle;
a real-time image analysis unit configured to analyze the real-time images using a pre-trained image recognition model to determine a risk level of the queue-insertion behavior of the queue-inserting vehicle;
a control unit configured to send, according to the risk level, reminding information and/or control information controlling the current vehicle and/or the queue-inserting vehicle to adjust the driving state.
2. The assistance system of claim 1, wherein the real-time image analysis unit is further configured for:
inputting the acquired real-time image into the pre-trained image recognition model;
searching a historical image matched with the real-time image in the pre-trained image recognition model according to the acquired real-time image;
if the historical image matched with the real-time image is found, determining the risk level of the queue-inserting behavior of the queue-inserting vehicle according to the risk level corresponding to the matched historical image; or,
if the historical images matched with the real-time images are not found, the acquired real-time images are analyzed to determine the running data of the current vehicle and the queue-inserting vehicles, the running tracks of the current vehicle and the queue-inserting vehicles in a set time period in the future are simulated according to the determined running data of the current vehicle and the queue-inserting vehicles, whether the current vehicle and the queue-inserting vehicles collide in the set time period in the future is predicted, and the risk level of the queue-inserting behaviors of the queue-inserting vehicles is determined at least according to the prediction result.
3. The assistance system of claim 1 or 2, wherein the control unit is further configured for:
when the risk level is lower than or equal to a set risk threshold value, sending reminding information prompting the risk level of the queue-inserting behavior to the current vehicle and/or the queue-inserting vehicle; and/or
when the risk level is higher than a set risk threshold, sending control information controlling the current vehicle and/or the queue-inserting vehicle to adjust its driving state.
4. An assist method for a vehicle, characterized by comprising:
acquiring one or more real-time images of a queue-inserting vehicle near a current vehicle;
analyzing the real-time images using a pre-trained image recognition model to determine a risk level of the queue-insertion behavior of the queue-inserting vehicle;
sending, according to the risk level, reminding information and/or control information controlling the current vehicle and/or the queue-inserting vehicle to adjust the driving state.
5. The assistance method of claim 4, wherein the analyzing the real-time images using a pre-trained image recognition model to determine a risk level of the queue-insertion behavior of the queue-inserting vehicle further comprises:
inputting the acquired real-time image into the pre-trained image recognition model;
searching a historical image matched with the real-time image in the pre-trained image recognition model according to the acquired real-time image;
if the historical image matched with the real-time image is found, determining the risk level of the queue-inserting behavior of the queue-inserting vehicle according to the risk level corresponding to the matched historical image; or,
if the historical images matched with the real-time images are not found, the acquired real-time images are analyzed to determine the running data of the current vehicle and the queue-inserting vehicles, the running tracks of the current vehicle and the queue-inserting vehicles in a set time period in the future are simulated according to the determined running data of the current vehicle and the queue-inserting vehicles, whether the current vehicle and the queue-inserting vehicles collide in the set time period in the future is predicted, and the risk level of the queue-inserting behaviors of the queue-inserting vehicles is determined at least according to the prediction result.
6. The assistance method according to claim 4 or 5, wherein the sending, according to the risk level, of reminding information to the current vehicle and/or the queue-inserting vehicle and/or of control information controlling it to adjust the driving state further comprises:
when the risk level is lower than or equal to a set risk threshold value, sending reminding information prompting the risk level of the queue-inserting behavior to the current vehicle and/or the queue-inserting vehicle; and/or
when the risk level is higher than a set risk threshold, sending control information controlling the current vehicle and/or the queue-inserting vehicle to adjust its driving state.
7. A system for training an image recognition model, comprising:
a historical image acquisition unit configured to acquire one or more historical images of one or more host vehicles and the queue-inserting vehicles near them;
a historical image analysis unit configured to analyze the historical images to determine respective driving data of the host vehicle and the queue-inserting vehicle and result data of whether they collide, and to set a corresponding risk level for the historical images according to at least the result data;
an image recognition model training unit configured to train and generate the image recognition model based at least on the determined historical images and their corresponding risk levels.
8. The system for training an image recognition model of claim 7, wherein the historical image analysis unit is further configured for:
if it is determined from the result data that the host vehicle and the queue-inserting vehicle in the historical image collided, recording the collision position and the collision time, and setting the risk level of the historical image according to the collision position and the collision time; and/or
if it is determined from the result data that the host vehicle and the queue-inserting vehicle in the historical image did not collide, determining from the historical image the closest position between the two vehicles and the time to reach it, and setting the risk level of the historical image according to the closest position and the time to reach it.
9. The system of training an image recognition model according to claim 7 or 8, wherein the historical image analysis unit is further configured for:
if the historical image does not include an image that can be used to determine whether the host vehicle and the queue-inserting vehicle collide, the travel trajectories of the host vehicle and the queue-inserting vehicle in the historical image are simulated to predict whether they collide in the historical image.
10. A method for training an image recognition model, comprising:
acquiring one or more historical images of one or more host vehicles and the queue-inserting vehicles near them;
analyzing the historical images to determine respective driving data of the host vehicle and the queue-inserting vehicle and result data of whether they collide, and setting corresponding risk levels for the historical images according to at least the result data;
training to generate the image recognition model based at least on the determined historical images and their corresponding risk levels.
11. The method of training an image recognition model of claim 10, wherein the analyzing the historical images to determine respective driving data of the host vehicle and the queue-inserting vehicle and result data of whether they collided further comprises:
if it is determined from the result data that the host vehicle and the queue-inserting vehicle in the historical image collided, recording the collision position and the collision time, and setting the risk level of the historical image according to the collision position and the collision time; and/or
if it is determined from the result data that the host vehicle and the queue-inserting vehicle in the historical image did not collide, determining from the historical image the closest position between the two vehicles and the time to reach it, and setting the risk level of the historical image according to the closest position and the time to reach it.
12. The method of training an image recognition model according to claim 10 or 11, wherein the analyzing the historical images to determine respective driving data of the host vehicle and the queue-inserting vehicle and result data of whether they collide further comprises:
if the historical image does not include an image that can be used to determine whether the host vehicle and the queue-inserting vehicle collide, the travel trajectories of the host vehicle and the queue-inserting vehicle in the historical image are simulated to predict whether they collide in the historical image.
13. A vehicle, characterized in that it comprises an assistance system according to any one of claims 1-3.
14. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the method of any of claims 4 to 6 or any of claims 10 to 12 when executing the computer program.
15. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method of any one of claims 4 to 6 or of any one of claims 10 to 12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011180369.7A CN114511833A (en) | 2020-10-29 | 2020-10-29 | Assistance system, method and storage medium for training an image recognition model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114511833A true CN114511833A (en) | 2022-05-17 |
Family
ID=81546601
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011180369.7A Pending CN114511833A (en) | 2020-10-29 | 2020-10-29 | Assistance system, method and storage medium for training an image recognition model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114511833A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008158588A (en) * | 2006-12-20 | 2008-07-10 | Toyota Central R&D Labs Inc | Inter-vehicle communication device and inter-vehicle communication system |
JP2019123449A (en) * | 2018-01-18 | 2019-07-25 | 本田技研工業株式会社 | Travel control device, travel control method and program |
CN111091591A (en) * | 2019-12-23 | 2020-05-01 | 百度国际科技(深圳)有限公司 | Collision detection method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3104284B1 (en) | Automatic labeling and learning of driver yield intention | |
CN108944939B (en) | Method and system for providing driving directions | |
US20200307589A1 (en) | Automatic lane merge with tunable merge behaviors | |
US20170369072A1 (en) | Apparatus, system and method for personalized settings for driver assistance systems | |
US12077171B2 (en) | Vehicle control device, automated driving vehicle development system, vehicle control method, and storage medium for verifying control logic | |
JP7540338B2 (en) | Information processing device, information processing system, and information processing method | |
JP2018501543A (en) | Method and apparatus for identifying the current driving situation | |
US20230289980A1 (en) | Learning model generation method, information processing device, and information processing system | |
CN115909783A (en) | Lane-level driving assistance method and system based on traffic flow | |
CN112955361A (en) | Prediction of expected driving behavior | |
US11983918B2 (en) | Platform for perception system development for automated driving system | |
US20220266856A1 (en) | Platform for perception system development for automated driving systems | |
CN113335311B (en) | Vehicle collision detection method and device, vehicle and storage medium | |
EP4082862A1 (en) | Platform for path planning system development for automated driving system | |
CN113771845B (en) | Method and device for predicting vehicle track, vehicle and storage medium | |
CN109313851B (en) | Method, device and system for retrograde driver identification | |
CN114360289A (en) | Assistance system for a vehicle, corresponding method, vehicle and storage medium | |
CN109344776B (en) | Data processing method | |
CN115981344B (en) | Automatic driving method and device | |
CN113095344A (en) | Evaluation and optimization device, system and method, vehicle, server and medium | |
US11628859B1 (en) | Vehicle placement on aerial views for vehicle control | |
CN114511833A (en) | Assistance system, method and storage medium for training an image recognition model | |
WO2021193103A1 (en) | Information processing device, information processing method, and program | |
CN115017967A (en) | Detecting and collecting accident-related driving experience event data | |
CN114511834A (en) | Method and device for determining prompt information, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||