Disclosure of Invention
The present application mainly aims to provide an augmented reality head-up display method, an augmented reality device, and a readable storage medium, so as to solve the problem of the low level of intelligence in the vehicle driving process.
In order to achieve the above object, the present application provides an augmented reality head-up display method, which is applied to an augmented reality device, comprising:
establishing a communication link with a vehicle, and acquiring operating condition information and driving environment information of the vehicle through the communication link;
determining whether the vehicle enters a risk scene according to the operating condition information and the driving environment information; and
after determining that the vehicle is in the risk scene, generating risk early-warning image content corresponding to the risk scene, and presenting the risk early-warning image content on a current window of the augmented reality device.
Optionally, the step of determining whether the vehicle enters a risk scene according to the operating condition information and the driving environment information includes:
determining a risk probability value of a collision between the vehicle and a target object according to the operating condition information and the driving environment information, wherein the target object is a pedestrian, a road pit, a building, or another vehicle whose distance from the vehicle is smaller than or equal to a preset distance;
if the risk probability value is smaller than a preset probability threshold, determining that the vehicle is not in a risk scene; and
if the risk probability value is greater than or equal to the preset probability threshold, determining that the vehicle is in a risk scene.
Optionally, the step of presenting the risk early-warning image content on the current window of the augmented reality device includes:
presenting the risk early-warning image content in a preset target area on the current window of the augmented reality device, wherein the preset target area is an area matching the direction of the target object.
Optionally, before the step of generating the risk early-warning image content corresponding to the risk scene, the method further includes:
determining a probability interval in which the risk probability value falls;
querying a preset scene type mapping table according to the probability interval to obtain the risk level mapped to the probability interval;
if the mapped risk level is a first risk level, executing the step of generating the risk early-warning image content corresponding to the risk scene; and
if the mapped risk level is a second risk level, controlling a vibration module of the augmented reality device to vibrate for a preset duration, and executing the step of generating the risk early-warning image content corresponding to the risk scene, wherein the second risk level is higher than the first risk level.
Optionally, after the step of querying the risk level mapped to the probability interval from the preset scene type mapping table, the method further includes:
if the mapped risk level is a third risk level, generating response strategy information for avoiding a collision between the vehicle and the target object, and sending the response strategy information to the vehicle, wherein the response strategy information is used to trigger the vehicle to actively intervene and execute a response strategy for avoiding the collision risk, and the third risk level is higher than the second risk level.
Optionally, the step of generating the response strategy information for avoiding a collision between the vehicle and the target object includes:
acquiring the type of the target object; and
generating the response strategy information for avoiding a collision between the vehicle and the target object according to the type of the target object.
Optionally, the step of generating the response strategy information for avoiding a collision between the vehicle and the target object according to the type of the target object includes:
if the type of the target object is a road pit, determining the response strategy information as: decelerating the vehicle until its speed is below a preset speed threshold; and
if the type of the target object is a pedestrian, a building, or another vehicle, determining the response strategy information as: braking the vehicle with deceleration.
Optionally, the risk early-warning image content includes at least one of: risk image identification information, position and distance information between the vehicle and the target object, environment image information of the area corresponding to the target object, and risk-avoidance response strategy information.
The present application also provides an augmented reality device, which is a physical device, comprising: a memory, a processor, and a program of the augmented reality head-up display method stored in the memory and executable on the processor, wherein the program, when executed by the processor, implements the steps of the augmented reality head-up display method described above.
The present application also provides a readable storage medium, namely a computer-readable storage medium, on which a program implementing the augmented reality head-up display method is stored, wherein the program, when executed by a processor, implements the steps of the augmented reality head-up display method described above.
The present application also provides a computer program product, comprising a computer program which, when executed by a processor, implements the steps of the augmented reality head-up display method described above.
The present application provides an augmented reality head-up display method, an augmented reality device, and a readable storage medium. In the technical solution of the present application, a communication link is established with a vehicle, and the operating condition information and driving environment information of the vehicle are acquired through the communication link; whether the vehicle enters a risk scene is determined according to the operating condition information and the driving environment information; and after the vehicle is determined to be in the risk scene, risk early-warning image content corresponding to the risk scene is generated and presented on the current window of the augmented reality device. This addresses the safety hazard caused by the user driving distractedly and failing to observe the road conditions comprehensively. For example, when the user drives through an intersection without traffic lights and turns the head to the right to observe the road, while another vehicle approaches rapidly on the left with a collision risk, the present application presents a warning on the current window of the augmented reality device, so that the user learns of the danger on the left in time without turning the head, and an accident is avoided.
Compared with an existing vehicle-mounted HUD, which only displays simple information such as vehicle speed and navigation route, the present application determines whether the vehicle enters a risk scene according to the operating condition information and driving environment information of the vehicle, generates risk early-warning image content corresponding to the risk scene after determining that the vehicle is in the risk scene, and varies the risk early-warning image content with different risk scenes. The content of the augmented reality head-up display is therefore rich and variable, and the risk early-warning message can be pushed in front of the user in time, solving the safety hazard caused by the user driving distractedly and failing to observe the road conditions comprehensively, and improving the level of intelligence of the vehicle driving process.
Detailed Description
In order to make the above objects, features, and advantages of the present invention more comprehensible, the embodiments are described in detail below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
Example 1
With the development of automobiles, head-up display and augmented reality technologies have been applied in vehicles. Combining head-up display with augmented reality allows a driver to see an image fusing virtual content with the real environment without lowering the head, which provides driving convenience. At present, however, automobiles only use augmented reality head-up display to show navigation routes, vehicle speed, and the like, so the level of intelligence of the driving process remains low.
Based on this, referring to fig. 1, the present embodiment provides an augmented reality head-up display method, where the augmented reality head-up display method is applied to an augmented reality device, the method includes:
Step S10, establishing a communication link with a vehicle, and acquiring operating condition information and driving environment information of the vehicle through the communication link;
In this embodiment, the operating condition information refers to the operating conditions during the running of the vehicle. In terms of the movement of the automobile, these may include driving conditions such as acceleration, deceleration, turning, ascending and descending slopes, and stopping. In terms of the driver's control, these may include conditions such as gear shifting and speed changing, coasting (coasting in gear, coasting in neutral, coasting to a stop), braking (emergency braking, speed-control braking), accelerator speed control, steering, and reversing. The driving environment information is road environment information in the area surrounding the vehicle (the area within a preset distance from the vehicle), and includes the distance from the vehicle to surrounding pedestrians or traffic objects such as other vehicles, as well as the condition of the road surface on which the vehicle is running (for example, whether the road surface is uneven or slippery). It is thus easy to understand that the current driving scene of the vehicle can be obtained by analyzing the current operating condition information and driving environment information of the vehicle, which makes it convenient to determine whether the vehicle enters a risk scene according to the current driving scene.
In this embodiment, the operating condition information and driving environment information of the vehicle may be monitored by vehicle-mounted sensors, for example, a steering sensor, a wheel sensor, a sideslip sensor, an acceleration sensor, an accelerator/brake sensor, a millimeter-wave radar, a camera, and a pressure sensor, and the monitored operating condition information and driving environment information may be transmitted to the augmented reality device.
Step S20, determining whether the vehicle enters a risk scene according to the operating condition information and the driving environment information;
In this embodiment, a risk scenario refers to a scenario in which there is a high probability that the vehicle will be involved in a collision or another safety accident. Risk scenarios include, but are not limited to: fatigued driving by the driver, the vehicle driving over the speed limit, the vehicle driving on a severely bumpy road, the vehicle sideslipping, the wheels locking, AEB (Autonomous Emergency Braking) being activated, ESP (Electronic Stability Program) being activated, and the driver's hands leaving the steering wheel for a preset period of time. It is readily understood that when the vehicle is not in a risk scenario, it is in a safety scenario. It should be noted that when the vehicle collides, and within a preset time period after the collision (for example, 60 seconds), the situation still belongs to the risk scenario category.
Step S30, after determining that the vehicle is in the risk scene, generating risk early-warning image content corresponding to the risk scene, and presenting the risk early-warning image content on the current window of the augmented reality device.
Illustratively, the risk early-warning image content includes at least one of risk image identification information, position distance information between the vehicle and the target object, environment image information of a region corresponding to the target object, and risk avoidance response policy information.
The risk early-warning image content may be presented through a dynamic display area of the current window of the augmented reality device, where the dynamic display area serves as the area in which the driver observes the traffic information of the real environment. When a pedestrian or another vehicle passes in front of the vehicle and a potential safety risk exists, the system of the augmented reality device senses the position of the pedestrian through its sensors and marks it, and displays the early-warning information as a dynamic marker superimposed on the area coinciding with the real pedestrian, thereby prompting the driver to observe carefully and take evasive action so that an accident is avoided.
In this embodiment, the augmented reality device is worn on the head of the user.
The current window of the augmented reality device refers to the XR (Extended Reality) content image of the maximum range that the user can see in the current head pose. The current head pose may include the spatial position and the angles of the head, where the angles may include a pitch angle (rotation about the X-axis), a yaw angle (rotation about the Y-axis), and a roll angle (rotation about the Z-axis).
In an embodiment, the head pose information (i.e., the current head pose) of the user may be dynamically detected by an inertial sensor and/or a camera onboard the augmented reality device, where the camera may be one or more of a TOF (Time of Flight) camera, an infrared camera, a millimeter-wave camera, and an ultrasonic camera. In another embodiment, the head pose information of the user may be sent to the augmented reality device in real time by other devices communicatively connected to it, thereby accomplishing the dynamic detection. For example, a camera installed in the venue where the augmented reality device is used tracks and locates the augmented reality device (or the user's head) to obtain the head pose information, and sends it to the augmented reality device in real time, so that the device acquires the dynamically detected head pose information in real time.
In this embodiment, as is known to those skilled in the art, augmented reality technology simulates how human visual perception changes in the real world in order to improve the user's immersion in the augmented reality content, so the visual field images the user sees differ under different head pose information. The current window is the visual field window visible at the current head pose (different head pose information corresponds to different windows). That is, at the current head pose (i.e., a particular eye position), the maximum range of the XR content image that the user can see is the user's current window image. Those skilled in the art will readily appreciate that during content display the user's head pose information may change in real time, and the augmented reality device acquires the head pose information in real time to update the current window.
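As an illustrative sketch only (not part of the disclosed method), the relationship between head pose and current window can be reduced to a simple angular check. The function name, the yaw-only simplification, and the 45° horizontal FOV are editorial assumptions for illustration:

```python
def in_current_window(target_azimuth_deg: float,
                      head_yaw_deg: float,
                      horizontal_fov_deg: float = 45.0) -> bool:
    """Return True if a world direction (azimuth, degrees) lies inside the
    user's current window, i.e. within half the horizontal FOV of the
    current head yaw. The 45-degree FOV is an illustrative assumption."""
    # Wrap the angular difference into [-180, 180).
    delta = (target_azimuth_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
    return abs(delta) <= horizontal_fov_deg / 2.0
```

When the head pose updates, re-evaluating this check against each marked target determines whether its early-warning marker currently falls inside the window.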
Currently, a vehicle-mounted AR HUD (augmented reality head-up display) reflects an image projected onto a film on the windshield into the human eye. Due to optical limitations, the display area is only a small region in front of the driver, with a horizontal FOV of 20° and a vertical FOV of 7°. The field of view (FOV) refers to the range covered by a lens (objects beyond this angle are not captured by the lens); the range of the scene a camera lens can cover is usually expressed by this angle, as shown in fig. 2.
Meanwhile, more and more vehicle manufacturers are beginning to integrate and adapt AR/VR devices in their new energy vehicles. Compared with the vehicle-mounted HUD, an augmented reality device (such as AR glasses) has a larger FOV, a longer virtual image distance, and 3D display, and its 360° head-following display is not limited by a screen, making it more suitable for intelligent driving assistance, as shown in fig. 3.
That is, compared with an AR HUD, the augmented reality device (e.g., AR glasses) follows the rotation of the head, so the display is always in front of the user and the user can notice prompt information more promptly. On this basis, the embodiment of the present application provides an augmented reality head-up display method, an augmented reality device, and a readable storage medium. In the technical solution of the present application, a communication link is established with a vehicle, and the operating condition information and driving environment information of the vehicle are acquired through the communication link; whether the vehicle enters a risk scene is determined according to the operating condition information and the driving environment information; and after the vehicle is determined to be in the risk scene, risk early-warning image content corresponding to the risk scene is generated and presented on the current window of the augmented reality device, which addresses the safety hazard caused by the user driving distractedly and failing to observe the road conditions comprehensively. For example, when the user drives through an intersection without traffic lights and turns the head to the right to observe the road, while another vehicle approaches rapidly on the left with a collision risk, the present application presents a warning on the current window of the augmented reality device, as shown in fig. 4, so that the user learns of the danger on the left in time without turning the head, and an accident is avoided.
Compared with an existing vehicle-mounted HUD, which only displays simple information such as vehicle speed and navigation route, the embodiment of the present application determines whether the vehicle enters a risk scene according to the operating condition information and driving environment information of the vehicle, generates risk early-warning image content corresponding to the risk scene after determining that the vehicle is in the risk scene, and varies the risk early-warning image content with different risk scenes. The content of the augmented reality head-up display is therefore rich and variable, and the risk early-warning message can be pushed in front of the user in time, solving the safety hazard caused by the user driving distractedly and failing to observe the road conditions comprehensively, and further improving the level of intelligence of the vehicle driving process.
In one embodiment, the step of determining whether the vehicle enters a risk scene according to the operating condition information and the driving environment information includes:
Step A10, determining a risk probability value of a collision between the vehicle and a target object according to the operating condition information and the driving environment information, wherein the target object is a pedestrian, a road pit, a building, or another vehicle whose distance from the vehicle is smaller than or equal to a preset distance;
Step A20, if the risk probability value is smaller than a preset probability threshold, determining that the vehicle is not in a risk scene; and
Step A30, if the risk probability value is greater than or equal to the preset probability threshold, determining that the vehicle is in a risk scene.
If the risk probability value is smaller than the preset probability threshold, the vehicle is determined to be in a safety scene. The preset probability threshold may be set by those skilled in the art according to actual conditions, and is not specifically limited in this embodiment.
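The threshold comparison of steps A20/A30 can be sketched as follows. This is an editorial illustration, not code from the disclosure; the function name and the 0.6 threshold are assumptions (the disclosure leaves the threshold to be set according to actual conditions):

```python
# Hypothetical preset probability threshold on a [0, 1] risk scale.
RISK_PROBABILITY_THRESHOLD = 0.6

def is_risk_scene(risk_probability: float,
                  threshold: float = RISK_PROBABILITY_THRESHOLD) -> bool:
    """Return True if the vehicle is deemed to be in a risk scene.

    Step A30: at or above the preset threshold -> risk scene.
    Step A20: below the threshold -> safety scene.
    """
    return risk_probability >= threshold
```

For instance, a value at exactly the threshold still counts as a risk scene, matching the "greater than or equal to" wording of step A30.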
In this embodiment, the current driving scene of the vehicle may be determined according to the operating condition information and the driving environment information, and then whether the vehicle enters a risk scene is determined according to the current driving scene. It can be understood that different driving scenes correspond to different risk probability values of a traffic accident. For example, if the current driving scene is the vehicle driving at high speed or over the speed limit with its distance to the vehicle or pedestrian ahead smaller than the safety distance, the risk probability value is high; if the vehicle is driving at a speed complying with traffic regulations (not over the speed limit) and keeping a safe distance from the vehicle or pedestrian ahead, the risk probability value is low. As another example, if the vehicle is driving on a bumpy road with many surrounding obstacles, the risk probability value is high; if it is driving on a flat road with few surrounding obstacles, the risk probability value is low. Similarly, driving on a sharp bend or a steep slope gives a high risk probability value while driving on a straight road gives a low one, and locked or sideslipping tires give a high risk probability value.
The above examples of the current driving scene are merely intended to aid understanding of this embodiment and do not limit it.
In this embodiment, the current operating condition information and driving environment information of the vehicle are dynamically acquired, the current driving scene of the vehicle is determined from them, and the risk probability value of a traffic accident corresponding to the current driving scene is analyzed, so that whether the vehicle enters a risk scene is accurately detected.
In addition, as those skilled in the art can appreciate, the current operating condition information and driving environment information of the host vehicle can be considered comprehensively to identify other vehicles, pedestrians, or other obstacles that may collide with the host vehicle, predict a collision or a collision trend, evaluate the potential collision risk of the host vehicle, and thereby obtain the risk probability value of a traffic accident in the current driving scene. In one possible embodiment, the step of determining the risk probability value of a traffic accident corresponding to the current driving scene includes:
Step B10, determining a traffic object within a preset range around the vehicle according to the driving environment information, and determining a time interval between the vehicle and the traffic object according to the operating condition information;
It should be noted that the time interval between the vehicle and the traffic object refers to the estimated time until the vehicle and the traffic object arrive at the same position, i.e., the estimated time until a future collision, determined from the movement trend of the vehicle and the movement trend of the traffic object. Vehicle-mounted sensors detect the speed and direction of the vehicle to predict its movement trend, and the time interval between the vehicle and the traffic object is determined from the movement trend of the vehicle and that of the traffic object.
Step B20, evaluating the risk probability value of a traffic accident of the vehicle according to the time interval.
In this embodiment, the risk probability value is evaluated by the time interval (i.e., the evaluation criterion of the risk level). Taking into account the complexity of the traffic environment in which the vehicle is located, the driving environment information is collected by vehicle-mounted sensors, the time interval between the vehicle and the traffic object is determined from the driving environment information and the operating condition information, and finally the risk probability value of a traffic accident is evaluated from the time interval. This abandons the previous practice of estimating the risk probability value solely from the distance between the vehicle and an obstacle and instead also takes the movement speed and direction of the obstacle into account, avoiding misjudgment or missed judgment of potential collision risks and accurately detecting whether the vehicle enters a risk scene.
Illustratively, the step of presenting the risk early-warning image content on the current window of the augmented reality device includes:
Step C10, presenting the risk early-warning image content in a preset target area on the current window of the augmented reality device, wherein the preset target area is an area matching the direction of the target object.
For example, when the user drives wearing the augmented reality device and a radar sensor mounted on the vehicle detects an accident risk on the left side (the distance between vehicles is too short and there is a collision risk), a warning is presented in the area of the augmented reality device corresponding to the side where the accident may occur, as shown in fig. 5.
In this embodiment, the risk early-warning image content is presented in a preset target area on the current window of the augmented reality device, where the preset target area matches the direction of the target object, so that the information is pushed to the user in front of the eyes in a timely, accurate, and distinctive manner, solving the safety risk caused by the user driving distractedly and failing to observe the road conditions comprehensively. For example, when the user passes an intersection without traffic lights and turns the head to the right to observe the road, and the vehicle detects another vehicle approaching rapidly on the left with a collision risk, a reminder can be presented in the left display area of the augmented reality device, so that the user learns of the danger on the left in time without having to turn the head, takes subsequent countermeasures, and avoids an accident.
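Step C10 can be sketched as choosing a display region by the target object's bearing relative to the head. The region names and the ±15° central sector are editorial assumptions, not part of the disclosure:

```python
def preset_target_area(target_azimuth_deg: float, head_yaw_deg: float) -> str:
    """Pick the display region of the current window that matches the
    direction of the target object. Region names ("left"/"center"/"right")
    and the 30-degree central sector are illustrative assumptions."""
    # Bearing of the target relative to the current head yaw, in [-180, 180).
    delta = (target_azimuth_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
    if -15.0 <= delta <= 15.0:
        return "center"
    if delta < -15.0:
        return "left"   # e.g. another vehicle approaching fast on the left
    return "right"
```

In the intersection example above, a vehicle approaching from the left maps to the "left" region even while the user's head is turned, because the bearing is computed relative to the current head yaw.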
In a possible implementation, before the step of generating the risk early-warning image content corresponding to the risk scene, the method further includes:
Step D10, determining the probability interval in which the risk probability value falls;
Step D20, querying a preset scene type mapping table according to the probability interval to obtain the risk level mapped to the probability interval;
Step D30, if the mapped risk level is the first risk level, executing the step of generating the risk early-warning image content corresponding to the risk scene; and
Step D40, if the mapped risk level is the second risk level, controlling the vibration module of the augmented reality device to vibrate for a preset duration, and executing the step of generating the risk early-warning image content corresponding to the risk scene, wherein the second risk level is higher than the first risk level.
In this embodiment, the scene type mapping table contains a plurality of probability intervals; each probability interval maps to one type of risk scene, and different probability intervals map to different types of risk scenes. It is easy to understand that different types of risk scenes correspond to different risk levels; that is, the probability of a traffic accident differs between the types of risk scenes. For example, the risk probability values corresponding to scenes such as fatigued driving by the driver, the vehicle driving at high speed, and the vehicle driving on a severely bumpy road may fall into a first probability interval; the risk probability values corresponding to scenes such as a predicted collision probability greater than a first preset value but smaller than a second preset value, the vehicle sideslipping or its wheels locking, and the vehicle driving over the speed limit fall into a second probability interval; the risk probability values corresponding to a predicted collision probability greater than the second preset value, AEB activation, and ESP activation fall into a third probability interval; and the risk probability value corresponding to the vehicle having collided falls into a fourth probability interval. It is easy to understand that the first probability interval is lower than the second, the second lower than the third, and the third lower than the fourth. This example of dividing the probability intervals is merely intended to aid understanding of this embodiment and does not limit the division.
In this way, the probability interval in which the risk probability value falls is determined, and the risk level mapped by that interval is queried from the preset scene type mapping table. After it is determined that the vehicle has entered a risk scene, the current driving scene can thus be accurately mapped to the corresponding risk level, so that early-warning prompt information matching that level can subsequently be output. For example, when the risk level is high, the vibration module of the augmented reality device is controlled to vibrate, which makes the warning to the driver more salient and further improves driving safety.
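The interval-to-level lookup described above can be sketched as follows. This is a minimal illustration only: the boundary values (0.4 and 0.7), the use of three levels, and all names are assumptions, since the embodiment deliberately does not fix concrete interval values.

```python
# Illustrative sketch of the preset scene type mapping table.
# Each entry: (lower bound inclusive, upper bound exclusive, mapped risk level).
# The boundaries 0.4 / 0.7 are assumed values, not specified by the embodiment.
SCENE_TYPE_MAPPING_TABLE = [
    (0.0, 0.4, "first"),   # e.g. fatigue driving, severely bumpy road surface
    (0.4, 0.7, "second"),  # e.g. sideslip, wheel locking, overspeed driving
    (0.7, 1.01, "third"),  # e.g. imminent collision, AEB/ESP activation
]

def map_risk_level(risk_probability: float) -> str:
    """Return the risk level mapped by the probability interval containing the value."""
    for lower, upper, level in SCENE_TYPE_MAPPING_TABLE:
        if lower <= risk_probability < upper:
            return level
    raise ValueError(f"risk probability out of range: {risk_probability}")
```

In practice the table, intervals, and levels would be calibrated per deployment; the sketch only shows the query step performed after the risk probability value is computed.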
In an embodiment, after the step of querying the preset scene type mapping table to obtain the risk level mapped by the probability interval, the method further includes:
Step E10: if the mapped risk level belongs to a third risk level, generating coping strategy information for avoiding a collision between the vehicle and the target object, and sending the coping strategy information to the vehicle, wherein the coping strategy information is used to trigger the vehicle to actively intervene and execute a coping strategy that avoids the collision risk, and the third risk level is greater than the second risk level.
In this embodiment, it should be noted that the third risk level is greater than the second risk level and is the highest risk level.
In this way, when the mapped risk level is determined to be the third risk level, coping strategy information for avoiding a collision between the vehicle and the target object is generated and sent to the vehicle, where it triggers the vehicle to actively intervene and execute the coping strategy. When the current risk level is identified as the highest level, an early-warning prompt alone may reach the driver too late for awareness and remedial action; by actively intervening in the vehicle to perform intelligent unmanned control, the driving risk is averted in an emergency manner and driving safety is improved.
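The three-level dispatch across steps D30/D40 and E10 can be summarized as follows. The action names are placeholders invented for illustration; the embodiment specifies the behaviors but not any API.

```python
# Assumed action names summarizing steps D30 (first level), D40 (second level),
# and E10 (third level); these identifiers are illustrative, not from the source.
RISK_LEVEL_ACTIONS = {
    "first": ["present_warning_image"],
    "second": ["vibrate_preset_duration", "present_warning_image"],
    "third": ["generate_coping_strategy", "send_strategy_to_vehicle"],
}

def handle_risk_level(level: str) -> list:
    """Return the ordered early-warning actions for a mapped risk level."""
    try:
        return RISK_LEVEL_ACTIONS[level]
    except KeyError:
        raise ValueError(f"unknown risk level: {level}") from None
```

The escalation is monotone: each higher level adds a more intrusive channel (image only, then haptic plus image, then active vehicle intervention).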
Example two
In another embodiment of the present application, for content that is the same as or similar to the first embodiment, reference may be made to the description above, and it is not repeated. On this basis, the step of generating the coping strategy information for avoiding a collision between the vehicle and the target object includes the following steps:
Step F10: obtaining the type of the target object;
Step F20: generating coping strategy information for avoiding a collision between the vehicle and the target object according to the type of the target object.
The types of the target object may include road surface pits, pedestrians, vehicles, buildings, and the like. The coping strategy information can be determined according to the type of the target object; it is understood that the coping strategy information corresponds to the object type, and the obstacle-avoidance decision for each object type is calibrated in advance by a person skilled in the art and stored in the system of the augmented reality device.
In an embodiment, if the type of the target object is identified as a road surface pit, the coping strategy information is to pass through at a reduced speed. If the type is identified as a pedestrian, the coping strategy information is to decelerate and brake, and the vehicle is controlled to continue driving after the pedestrian has moved a preset distance away. If the type is identified as a stationary vehicle, the coping strategy information is to re-plan the driving path or to decelerate and brake. If the type is identified as a moving vehicle, it is determined whether the moving vehicle's direction of travel is consistent with that of the host vehicle; if consistent, a following mode is started and the host vehicle follows the moving vehicle; if not, the host vehicle decelerates and brakes, and is controlled to continue driving after the moving vehicle has traveled a preset distance.
In one possible embodiment, the step of generating the coping strategy information for avoiding a collision between the vehicle and the target object according to the type of the target object includes:
Step G10: if the type of the target object is a road surface pit, determining that the coping strategy information for avoiding a collision between the vehicle and the target object is: decelerate the vehicle until its speed is less than a preset speed threshold;
Step G20: if the type of the target object is a pedestrian, a building, or another vehicle, determining that the coping strategy information for avoiding a collision between the vehicle and the target object is: decelerate and brake the vehicle.
Here, a road surface pit refers to a pothole in the road surface.
In this embodiment, when the type of the target object is determined to be a road surface pit, the coping strategy information for avoiding a collision between the vehicle and the target object is determined to be: decelerate the vehicle until its speed is less than a preset speed threshold. When the type of the target object is a pedestrian, a building, or another vehicle, the coping strategy information is determined to be: decelerate and brake the vehicle. A targeted collision-avoidance strategy is thus generated according to the type of the current obstacle, which improves the vehicle's obstacle-avoidance capability for different obstacle types and improves the intelligence and safety with which the augmented reality device guides the vehicle.
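The per-type decision in steps F10 through G20 can be sketched as a simple lookup. All string identifiers here are illustrative assumptions; the stationary/moving-vehicle branches follow the richer variant described earlier in this embodiment.

```python
def coping_strategy(object_type: str, same_direction: bool = False) -> str:
    """Map a detected target object type to coping strategy information (sketch).

    `same_direction` only matters for a moving vehicle: it states whether the
    moving vehicle travels in the same direction as the host vehicle.
    All names are assumed identifiers, not from the source text.
    """
    if object_type == "road_surface_pit":
        return "decelerate_below_speed_threshold"   # step G10
    if object_type in ("pedestrian", "building"):
        return "decelerate_and_brake"               # step G20
    if object_type == "stationary_vehicle":
        return "replan_path_or_brake"
    if object_type == "moving_vehicle":
        return "follow_mode" if same_direction else "decelerate_and_brake"
    raise ValueError(f"unknown target object type: {object_type}")
```

A production system would of course return structured strategy information (target speed, braking profile, re-planned path) rather than a label; the sketch shows only the type-to-decision mapping.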
Example III
An embodiment of the present invention further provides an augmented reality head-up display device, referring to fig. 5, where the augmented reality head-up display device is applied to an augmented reality apparatus, the device includes:
an obtaining module 10, configured to establish a communication link connection with a vehicle, and obtain operation condition information and running environment information of the vehicle through the communication link;
A determining module 20 configured to determine whether the vehicle enters a risk scenario according to the operation condition information and the driving environment information;
The presentation module 30 is configured to generate risk early-warning image content corresponding to a risk scene after determining that the vehicle is in the risk scene, and to present the risk early-warning image content on a current window of the augmented reality device.
Optionally, the determining module 20 is further configured to:
determining a risk probability value of collision between a vehicle and a target object according to the running condition information and the running environment information, wherein the target object is a pedestrian, a pavement pit, a building or other vehicles, and the distance between the pedestrian, the pavement pit, the building or other vehicles and the vehicle is smaller than or equal to a preset distance;
If the risk probability value is smaller than a preset probability threshold value, determining that the vehicle is not in a risk scene;
and if the risk probability value is greater than or equal to a preset probability threshold value, determining that the vehicle is in a risk scene.
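The threshold decision performed by the determining module can be sketched in two lines. The concrete threshold value is an assumption for illustration; the embodiment only requires that some preset probability threshold exist.

```python
PRESET_PROBABILITY_THRESHOLD = 0.5  # illustrative value; not fixed by the embodiment

def in_risk_scene(risk_probability: float,
                  threshold: float = PRESET_PROBABILITY_THRESHOLD) -> bool:
    """The vehicle is in a risk scene iff the collision risk probability
    value is greater than or equal to the preset probability threshold."""
    return risk_probability >= threshold
```

Note the boundary: a value exactly equal to the threshold counts as a risk scene, matching the "greater than or equal to" wording above.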
Optionally, the presentation module 30 is further configured to:
Presenting the risk early-warning image content in a preset target area on the current window of the augmented reality device, wherein the preset target area is an area matched with the azimuth of the target object.
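One way to realize an azimuth-matched target area is to bucket the object's bearing relative to the vehicle heading into window regions. The three-region split, the boundary angles, and the region names below are all assumptions for illustration; the embodiment only requires that the area match the object's azimuth.

```python
def preset_target_area(azimuth_deg: float) -> str:
    """Pick the window region matched to the target object's azimuth (sketch).

    Azimuth is measured clockwise from the vehicle heading in degrees.
    The 30-degree center cone and left/right split are assumed values.
    """
    azimuth_deg = azimuth_deg % 360.0
    if azimuth_deg <= 30.0 or azimuth_deg >= 330.0:
        return "center"
    return "right" if azimuth_deg < 180.0 else "left"
```

Placing the warning where the hazard actually lies in the driver's field of view is what distinguishes this from a fixed on-screen banner.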
Optionally, the determining module 20 is further configured to:
Determining a probability interval in which the risk probability value is located;
inquiring from a preset scene type mapping table according to the probability interval to obtain the risk level mapped by the probability interval;
if the mapped risk level belongs to a first risk level, executing the step of generating the risk early-warning image content corresponding to the risk scene;
if the mapped risk level belongs to a second risk level, controlling the vibration module of the augmented reality device to vibrate for a preset duration and executing the step of generating the risk early-warning image content corresponding to the risk scene, wherein the second risk level is greater than the first risk level.
Optionally, the determining module 20 is further configured to:
if the mapped risk level belongs to a third risk level, generating coping strategy information for avoiding a collision between the vehicle and the target object, and sending the coping strategy information to the vehicle, wherein the coping strategy information is used to trigger the vehicle to actively intervene and execute a coping strategy that avoids the collision risk, and the third risk level is greater than the second risk level.
Optionally, the determining module 20 is further configured to:
acquiring the type of the target object;
and generating coping strategy information for avoiding a collision between the vehicle and the target object according to the type of the target object.
Optionally, the determining module 20 is further configured to:
if the type of the target object is a road surface pit, determining that the coping strategy information for avoiding a collision between the vehicle and the target object is: decelerate the vehicle until its speed is less than a preset speed threshold;
if the type of the target object is a pedestrian, a building, or another vehicle, determining that the coping strategy information for avoiding a collision between the vehicle and the target object is: decelerate and brake the vehicle.
Optionally, the risk early-warning image content includes at least one of: risk image identification information, position and distance information between the vehicle and the target object, environment image information of the area corresponding to the target object, and risk-avoidance coping strategy information.
By adopting the augmented reality head-up display method of the first or second embodiment, the augmented reality head-up display device provided by this embodiment of the invention can solve the problem of a low degree of intelligence in the vehicle driving process. Compared with the prior art, its beneficial effects are the same as those of the augmented reality head-up display method of the above embodiments, and its other technical features are the same as those disclosed in the method of those embodiments, so they are not repeated here.
Example IV
An embodiment of the present invention provides an augmented reality apparatus, including: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, which, when executed by the at least one processor, enable the at least one processor to perform the augmented reality head-up display method of the first embodiment.
Referring now to fig. 6, a schematic diagram of an augmented reality device suitable for implementing embodiments of the present disclosure is shown. The augmented reality device in embodiments of the present disclosure includes, but is not limited to, an augmented reality (AR) device such as AR glasses or an AR helmet. The augmented reality device shown in fig. 6 is only an example and should not impose any limitation on the functionality and scope of use of the embodiments of the present disclosure.
As shown in fig. 6, the augmented reality device may include a processing device 1001 (e.g., a central processing unit or a graphics processing unit) that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage device into a random access memory (RAM) 1004. The RAM 1004 also stores various programs and data required for the operation of the augmented reality device. The processing device 1001, the ROM 1002, and the RAM 1004 are connected to one another by a bus 1005. An input/output (I/O) interface 1006 is also connected to the bus 1005.
In general, the following devices may be connected to the I/O interface 1006: input devices 1007 including, for example, a touch screen, a touchpad, a keyboard, a mouse, an image sensor, a microphone, an accelerometer, a gyroscope, and the like; output devices 1008 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, and the like; a storage device 1003 including, for example, a magnetic tape, a hard disk, and the like; and a communication device 1009. The communication device 1009 may allow the augmented reality device to communicate wirelessly or by wire with other devices to exchange data. While fig. 6 shows an augmented reality device having various components, it should be understood that not all of the illustrated components are required; more or fewer components may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 1009, installed from the storage device 1003, or installed from the ROM 1002. When the computer program is executed by the processing device 1001, the above-described functions defined in the method of the embodiments of the present disclosure are performed.
By adopting the augmented reality head-up display method of the above embodiments, the augmented reality device provided by the invention can solve the problem of a low degree of intelligence in the vehicle driving process. Compared with the prior art, its beneficial effects are the same as those of the augmented reality head-up display method provided by those embodiments, and its other technical features are the same as those disclosed in the method of the previous embodiments, so they are not repeated here.
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the description of the above embodiments, particular features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples.
Example five
An embodiment of the present invention provides a computer-readable storage medium having computer-readable program instructions stored thereon for performing the augmented reality head-up display method of the above embodiment.
The computer-readable storage medium according to the embodiments of the present invention may be, for example, a USB flash drive, but is not limited thereto, and may be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this embodiment, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer-readable storage medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber-optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The above-described computer-readable storage medium may be embodied in an augmented reality device; or may exist alone without being assembled into an augmented reality device.
The computer-readable storage medium carries one or more programs that, when executed by an augmented reality device, cause the augmented reality device to: establishing communication link connection with a vehicle, and acquiring operation condition information and running environment information of the vehicle through the communication link; determining whether the vehicle enters a risk scene according to the running condition information and the running environment information; and after determining that the vehicle is in the risk scene, generating risk early-warning image content corresponding to the risk scene, and presenting the risk early-warning image content on a current window of the augmented reality equipment.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented in software or hardware. The name of a module does not, in some cases, constitute a limitation on the module itself.
The computer-readable storage medium provided by the invention stores computer-readable program instructions for executing the augmented reality head-up display method, and can solve the problem of a low degree of intelligence in the vehicle driving process. Compared with the prior art, its beneficial effects are the same as those of the augmented reality head-up display method provided by the first or second embodiment, and they are not described in detail here.
Example six
The embodiment of the invention also provides a computer program product, which comprises a computer program, wherein the computer program realizes the steps of the augmented reality head-up display method when being executed by a processor.
The computer program product provided by the application can solve the problem of a low degree of intelligence in the vehicle driving process. Compared with the prior art, its beneficial effects are the same as those of the augmented reality head-up display method provided by the first or second embodiment, and they are not described in detail here.
The foregoing description is only of the preferred embodiments of the present application, and is not intended to limit the scope of the application, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein, or any application, directly or indirectly, within the scope of the application.