CN118306322A - Vehicle and display method thereof - Google Patents
- Publication number: CN118306322A
- Application number: CN202410743897.0A
- Authority
- CN
- China
- Prior art keywords
- target
- vehicle
- candidate
- sight
- driver
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R11/00—Arrangements for holding or mounting articles, not otherwise provided for
- B60R11/02—Arrangements for holding or mounting articles, not otherwise provided for for radio sets, television sets, telephones, or the like; Arrangement of controls thereof
- B60R11/0229—Arrangements for holding or mounting articles, not otherwise provided for for radio sets, television sets, telephones, or the like; Arrangement of controls thereof for displays, e.g. cathodic tubes
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R16/00—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
- B60R16/02—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
- B60R16/037—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements for occupant comfort, e.g. for automatic adjustment of appliances according to personal settings, e.g. seats, mirrors, steering wheel
- B60R16/0373—Voice control
Abstract
The application discloses a vehicle and a display method thereof, and relates to the technical field of intelligent driving. The method can acquire at least one candidate object that is located within the line-of-sight region of a target occupant and outside the vehicle, and project and display the introduction information of each candidate object on the target glass, i.e., the glass in the vehicle toward which the target occupant's line-of-sight direction points, so that the target occupant can learn about each candidate object. This effectively improves the intelligence of the vehicle cabin and the driving experience of drivers and passengers.
Description
Technical Field
The application relates to the technical field of intelligent driving, in particular to a vehicle and a display method thereof.
Background
The cabin of a vehicle is generally equipped with an air conditioner, speakers, and the like. Although current vehicles support automatic adjustment of the seats, air conditioning, and audio according to an occupant's habits and preferences, the cabin intelligence of current vehicles is still low.
Disclosure of Invention
The application provides a vehicle and a display method thereof, which can improve the intelligence of the vehicle cabin. The technical solution is as follows:
In one aspect, a display method of a vehicle is provided, the method including:
acquiring at least one candidate object that is located within the line-of-sight region of a target occupant and outside the vehicle;
and projecting and displaying the introduction information of each candidate object on target glass of the vehicle, where the target glass is the glass in the vehicle toward which the line-of-sight direction of the target occupant points.
In another aspect, a controller is provided, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the display method of the vehicle according to the above aspect when executing the computer program.
In yet another aspect, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the display method of a vehicle described in the above aspect.
In a further aspect, there is provided a vehicle comprising a controller as described in the above aspect.
The technical solution provided by the application has at least the following beneficial effects:
The application provides a vehicle and a display method thereof. The method can acquire at least one candidate object that is located within the line-of-sight region of a target occupant and outside the vehicle, and project and display the introduction information of each candidate object on the target glass toward which the target occupant's line-of-sight direction points, so that the target occupant can learn about each candidate object. This effectively improves the intelligence of the vehicle cabin and the driving experience of drivers and passengers.
Additional aspects and advantages of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the application.
Drawings
Fig. 1 is a flowchart of a display method of a vehicle according to an embodiment of the present application;
FIG. 2 is a flow chart of another method of displaying a vehicle according to an embodiment of the present application;
FIG. 3 is a schematic illustration of candidate objects within a target occupant's line-of-sight region provided by an embodiment of the present application;
FIG. 4 is a schematic view of an object of interest of a target occupant provided by an embodiment of the present application;
FIG. 5 is a schematic illustration of a question-and-answer interaction between a vehicle and a target occupant provided by an embodiment of the present application;
FIG. 6 is a schematic illustration of another question-and-answer interaction between a vehicle and a target occupant provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a navigation path projected and displayed on the front passenger window glass provided by an embodiment of the present application;
Fig. 8 is a schematic structural diagram of a controller according to an embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative and intended to explain the present application and should not be construed as limiting the application.
The embodiment of the application provides a display method of a vehicle, which is applied to a vehicle, for example executed by a controller of the vehicle. Referring to fig. 1, the method includes:
Step 101: acquire at least one candidate object that is located within the line-of-sight region of the target occupant and outside the vehicle.
The line-of-sight region refers to the three-dimensional spatial region that the target occupant's eyes can see when the target occupant looks outside the vehicle. The target occupant may be the driver, the front passenger, the left rear passenger, or the right rear passenger of the vehicle.
Each candidate object is an object that the target occupant may see, and may be a landmark building, an educational facility (e.g., a school, library, or gymnasium), a medical facility (e.g., a hospital or clinic), a residence (e.g., a residential community), a commercial facility (e.g., a shopping mall, office building, parking lot, or bank), an energy facility (e.g., a gas station or charging station), a transportation facility (e.g., a bridge, airport, bus station, or subway station), a public activity facility (e.g., a park), a scenic spot, or the like.
Step 102: project and display the introduction information of each candidate object on the target glass of the vehicle.
The vehicle may employ head-up display (HUD) technology to project and display the introduction information of each of the at least one candidate object on the target glass, so that the target occupant can learn about the candidate objects. The target glass is the glass in the vehicle toward which the line-of-sight direction of the target occupant points. The introduction information of each candidate object describes the specific situation of that candidate object.
In summary, the embodiment of the application provides a display method of a vehicle, which can acquire at least one candidate object located within the line-of-sight region of a target occupant and outside the vehicle, and project and display the introduction information of each candidate object on the target glass toward which the target occupant's line-of-sight direction points, so that the target occupant can learn about each candidate object. This effectively improves the intelligence of the vehicle cabin and the driving experience of drivers and passengers.
Fig. 2 is a schematic diagram of another vehicle display method according to an embodiment of the present application, which may be applied to a vehicle. As shown in fig. 2, the method may include:
Step 201: acquire at least one candidate object that is located within the line-of-sight region of the target occupant and outside the vehicle.
After the vehicle is started, it may acquire at least one candidate object that is located within the line-of-sight region of the target occupant and outside the vehicle. The line-of-sight region refers to the three-dimensional spatial region that the target occupant's eyes can see when the target occupant looks outside the vehicle. The target occupant may be the driver, the front passenger, the left rear passenger, or the right rear passenger of the vehicle.
Each candidate object may be a landmark building, an educational facility (e.g., a school, library, or gymnasium), a medical facility (e.g., a hospital or clinic), a residence (e.g., a residential community), a commercial facility (e.g., a shopping mall, office building, parking lot, or bank), an energy facility (e.g., a gas station or charging station), a transportation facility (e.g., a bridge, airport, bus station, or subway station), a public activity facility (e.g., a park), a scenic spot, or the like.
In the embodiment of the application, the vehicle may determine a plurality of objects around the vehicle based on an electronic map and the position of the vehicle, and then determine, from those objects, at least one candidate object located within the line-of-sight region of the target occupant. The electronic map may be an ordinary map or a high-definition map (HD map), and its coordinate system is the geodetic coordinate system. The position of the vehicle may be the vehicle's coordinates (i.e., longitude and latitude) in the geodetic coordinate system. The plurality of objects around the vehicle may refer to objects located within a circular region centered on the position of the vehicle with a radius equal to a first target length, which may be pre-stored by the vehicle.
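As a minimal sketch of this circular-region filtering, assuming purely for illustration that each map object carries `lat`/`lon` fields and that the first target length is given in meters:

```python
import math

EARTH_RADIUS_M = 6_371_000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two geodetic coordinates, in meters."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def objects_around_vehicle(map_objects, veh_lat, veh_lon, first_target_length_m):
    """Keep the map objects inside the circle centered on the vehicle."""
    return [o for o in map_objects
            if haversine_m(veh_lat, veh_lon, o["lat"], o["lon"]) <= first_target_length_m]
```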
It will be appreciated that the vehicle may acquire the position of the target occupant's line-of-sight region before determining the candidate objects from the plurality of objects. The position of the line-of-sight region may be characterized by the line-of-sight angle of the target occupant, the visual angle of the target occupant (which may also be referred to as the viewing angle), and a target position. The line-of-sight angle is the angle between the target occupant's line-of-sight direction and a reference direction, where the line-of-sight direction is the direction in which the target occupant's eyes are looking. The reference direction may be parallel to the X-axis of the geodetic coordinate system, which in turn may be parallel to the earth's north-south direction. The visual angle reflects the size of the spatial range the target occupant can observe when looking outside the vehicle, and is typically expressed in degrees. The target position is the position of the vehicle or the position of the target occupant's eyes, for example the latter. In this way, higher accuracy of the acquired position of the line-of-sight region can be ensured, and thus higher accuracy in acquiring the at least one candidate object.
It is also understood that the position of the target occupant's eyes may be the position of the center of the eyes in the geodetic coordinate system, which the vehicle may determine based on the position of the center of the eyes in the vehicle coordinate system and the conversion relationship between the vehicle coordinate system and the geodetic coordinate system.
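A minimal sketch of this frame conversion, assuming a simplified planar rigid transform (yaw rotation plus translation) rather than the full 3D pose and map projection a production system would use:

```python
import math

def eye_position_world(eye_xy_vehicle, vehicle_xy_world, vehicle_yaw_rad):
    """Convert an eye-center position from the vehicle coordinate system to a
    world frame aligned with the geodetic axes (planar assumption)."""
    ex, ey = eye_xy_vehicle    # from the cabin camera, vehicle frame
    vx, vy = vehicle_xy_world  # vehicle origin in the world frame
    c, s = math.cos(vehicle_yaw_rad), math.sin(vehicle_yaw_rad)
    # Rotate by the vehicle's yaw, then translate by the vehicle's position.
    return (vx + c * ex - s * ey, vy + s * ex + c * ey)
```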
In an alternative implementation, the vehicle may obtain the target occupant's head yaw angle, the vehicle's attitude angle, and the vehicle speed. The vehicle may then determine the line-of-sight angle of the target occupant based on the head yaw angle and the attitude angle, and determine the visual angle of the target occupant based on the vehicle speed, where the visual angle is inversely related to the vehicle speed. Thereafter, the vehicle may acquire the position of the target occupant's line-of-sight region based on the line-of-sight angle, the visual angle, and the target position.
The head yaw angle of the target occupant refers to the rotation of the target occupant's head relative to a reference position, and can be used to indicate the orientation of the target occupant's face. The reference position may be the position of the head when the target occupant is seated in a relaxed, upright posture, and may be stored in advance by the vehicle. The attitude angle of the vehicle may include: a pitch angle, a roll angle, and a yaw angle.
Alternatively, the vehicle may store in advance a correspondence between speeds and angles, and determine the angle corresponding to the current vehicle speed in that correspondence as the visual angle of the target occupant.
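A minimal sketch of these two determinations; the table values below are invented for illustration, since the application only states that the correspondence is pre-stored and that the visual angle is inversely related to speed:

```python
def line_of_sight_angle(head_yaw_deg, vehicle_yaw_deg):
    """Line-of-sight angle relative to the reference direction (the geodetic
    X-axis): the vehicle's yaw plus the occupant's head yaw, wrapped to [0, 360)."""
    return (vehicle_yaw_deg + head_yaw_deg) % 360.0

# Hypothetical pre-stored speed-to-angle correspondence; only the inverse
# relation (faster speed, narrower visual angle) is taken from the application.
SPEED_TO_VISUAL_ANGLE_DEG = [(0, 120.0), (30, 90.0), (60, 65.0), (90, 45.0), (120, 30.0)]

def visual_angle(speed_kmh):
    """Return the angle for the highest table speed not exceeding speed_kmh."""
    angle = SPEED_TO_VISUAL_ANGLE_DEG[0][1]
    for threshold, deg in SPEED_TO_VISUAL_ANGLE_DEG:
        if speed_kmh >= threshold:
            angle = deg
    return angle
```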
Assuming that the target position is the position of the target occupant's eyes, in another alternative implementation a line-of-sight region acquisition model is stored in advance in the vehicle. The vehicle may input the state data of the target occupant and the state data of the vehicle into the model, and obtain the position of the target occupant's line-of-sight region output by the model.
The state data of the target occupant may include: the head yaw angle and the eye position of the target occupant. The state data of the vehicle may include: the attitude angle of the vehicle and the vehicle speed.
It should be appreciated that before the state data of the target occupant and the state data of the vehicle are input into the model, the vehicle may acquire a plurality of training data and perform model training on them to obtain the line-of-sight region acquisition model. Each training data may include: sample state data of the vehicle, sample state data of the target occupant, and a sample position of the target occupant's line-of-sight region under those sample state data.
In an embodiment of the present application, the vehicle may include: a camera and a body sensor. The vehicle may acquire the head yaw angle of the target occupant through the camera. The vehicle may also recognize the target occupant's face through the camera, acquire the position of the center of the target occupant's eyes in the vehicle coordinate system, and obtain the position of the eyes in the geodetic coordinate system based on the conversion relationship between the vehicle coordinate system and the geodetic coordinate system. The vehicle may acquire the vehicle speed and the attitude angle of the vehicle through the body sensor. The origin O of the vehicle coordinate system may be a point of the vehicle (such as its center point), the positive X-axis direction may be parallel to the length direction of the vehicle, the positive Y-axis direction may be parallel to the width direction of the vehicle, and the Z-axis may be perpendicular to the XOY plane.
In an embodiment of the present application, the process of determining, from the plurality of objects, at least one candidate object located within the line-of-sight region of the target occupant may include: the vehicle directly determines each object among the plurality of objects that is located within the line-of-sight region as a candidate object; or the vehicle acquires a plurality of initial objects located within the line-of-sight region from the plurality of objects and then selects at least one candidate object from those initial objects. The number of the at least one candidate object may be less than or equal to a preset value, where the preset value is less than the total number of the initial objects. The preset value may be pre-stored before the vehicle leaves the factory, or may be customized by an occupant.
Alternatively, the vehicle may randomly select at least one candidate object from the plurality of initial objects. Or the vehicle may take, as candidate objects, up to the preset number of initial objects with the highest probabilities, where the probability of an initial object is the probability that it is the reference object, i.e., the object on which the target occupant's gaze point falls. For example, the vehicle may acquire the distance from each initial object to the target occupant's line of sight and determine, based on that distance, the probability that the initial object is the reference object. The vehicle may then determine up to the preset number of initial objects with the highest probabilities as candidate objects.
The probability is inversely related to the distance: the larger the distance, the smaller the probability that the initial object is the reference object; the smaller the distance, the greater the probability.
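A minimal sketch of this distance-based ranking in two dimensions; the 1/(1+d) scoring function is just one possible inverse relation, since the application does not fix a formula:

```python
import math

def rank_candidates(initial_objects, eye_pos, sight_dir, preset_count):
    """Keep up to preset_count initial objects with the highest probability of
    being the reference object, scored by distance to the line-of-sight ray."""
    ox, oy = eye_pos
    dx, dy = sight_dir  # unit vector of the occupant's line-of-sight direction

    def distance_to_ray(obj):
        px, py = obj["x"] - ox, obj["y"] - oy
        t = max(0.0, px * dx + py * dy)  # project onto the ray; clamp points behind the eye
        return math.hypot(px - t * dx, py - t * dy)

    scored = [(1.0 / (1.0 + distance_to_ray(o)), o) for o in initial_objects]
    scored.sort(key=lambda pair: pair[0], reverse=True)  # larger distance -> smaller probability
    return [o for _, o in scored[:preset_count]]
```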
It should be understood that the approach of acquiring a plurality of initial objects and selecting at least one candidate object from them suits scenarios where there are many initial objects and the area available on the target glass for displaying introduction information is limited.
Optionally, the vehicle has a gaze-tracking projection function. After the vehicle is started, it may detect whether this function is enabled. If the vehicle determines that the function is enabled, step 201 described above may be performed. If the vehicle determines that the function is not enabled, it may continue to detect whether the function becomes enabled.
It will be appreciated that if the target occupant wants the vehicle to project introduction information of candidate objects on the vehicle glass, the vehicle may be controlled to turn on the gaze-tracking projection function. Later, if the target occupant no longer needs the vehicle to project introduction information on the target glass, the vehicle may be controlled to turn off the function.
It will be appreciated that the number of target occupants may be one or more. When there are multiple target occupants, the vehicle may acquire, for each target occupant, the candidate objects located within that occupant's line-of-sight region.
Step 202: project and display the introduction information of each candidate object on the target glass of the vehicle.
In the embodiment of the application, the vehicle may use HUD technology to project and display the introduction information of each of the at least one candidate object on the target glass, so that the target occupant can learn about the candidate objects. The target glass is the glass in the vehicle toward which the target occupant's line-of-sight direction points. For example, if the target occupant is the front passenger and looks outside the vehicle through the front passenger window, the target glass is the front passenger window toward which the front passenger's line of sight points. If the target occupant is the left rear passenger and looks outside through the left rear window, the target glass is the left rear window toward which the left rear passenger's line of sight points. If the target occupant is the driver and looks outside through the front windshield, the target glass is the front windshield.
The introduction information of each candidate object describes the specific situation of that candidate object, and may include: basic information and operation information of the candidate object. The basic information may include at least an icon and a name of the candidate object. The operation information may include at least the operating hours. For example, if the candidate object is a landmark building, a scenic spot, or a park, the introduction information may further include the historical background, cultural background, and features of the candidate object. If the candidate object involves spending, its operation information may further include relevant consumption information and people-flow information, such as ticket prices and visitor flow for scenic spots and parks, current fuel prices at gas stations, and parking fees at parking lots.
Alternatively, the icon of the candidate object may be a two-dimensional image or a three-dimensional image of the candidate object. The two-dimensional image may be a photograph of the candidate object. The three-dimensional image may be a three-dimensional contour model of the candidate object.
In the embodiment of the application, the vehicle may project and display the introduction information of each candidate object over the whole area of the target glass, or within a projection area of the target glass. The projection area is smaller than the area of the target glass, and its center point may be the intersection of the target occupant's line-of-sight direction with the target glass.
Alternatively, the projection area may be a rectangular area, a polygonal area, or a circular area. For example, the projection area may be a circular area centered on the intersection point and having a radius of the second target length. The second target length may be pre-stored by the vehicle.
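A minimal sketch of locating that center point, assuming the target glass is modeled locally as a plane with a known point and normal; all vectors are 3D tuples in a common frame:

```python
def projection_center(eye, sight_dir, glass_point, glass_normal):
    """Intersection of the occupant's line-of-sight ray with the glass plane,
    used as the center of the circular projection area."""
    denom = sum(d * n for d, n in zip(sight_dir, glass_normal))
    if abs(denom) < 1e-9:
        return None  # line of sight is parallel to the glass
    t = sum((g - e) * n for e, g, n in zip(eye, glass_point, glass_normal)) / denom
    if t < 0:
        return None  # the glass is behind the occupant
    return tuple(e + t * d for e, d in zip(eye, sight_dir))
```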
In the embodiment of the application, when there are multiple candidate objects, their introduction information displayed on the target glass may be arranged in order, for example in descending order of the candidate objects' probabilities. This can effectively improve the target occupant's experience.
The probability of each candidate object refers to the probability that the candidate object is the reference object.
Step 203: if a selection operation on the introduction information of a target object among the plurality of candidate objects is received, determine the target object as an object of interest of the target occupant.
When there are multiple candidate objects, the target occupant may select an object of interest from the candidate objects displayed on the target glass according to the occupant's own intention. Accordingly, the vehicle may determine the target object as the target occupant's object of interest in response to the occupant's selection operation on the introduction information of a target object among the plurality of candidate objects. That is, the vehicle determines the target occupant's object of interest based on the occupant's selection operation.
Alternatively, the selection operation may be a voice selection operation, such as "select the first one"; in this case, a sound collection device may be installed in the vehicle. Or the selection operation may be a touch operation on the introduction information of the target object; in this case, a touch film layer may be disposed in the target glass. The embodiment of the application does not limit the type of the selection operation.
In the embodiment of the application, if the vehicle receives a selection operation on the introduction information of the target object among the plurality of candidate objects before the display duration of the introduction information reaches a target duration, the target object may be determined as the object of interest. If no selection operation has been received after the display duration reaches the target duration, step 201 may be re-executed. The target duration may be stored in advance by the vehicle, and may be, for example, 10 seconds (s).
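A minimal sketch of this timed selection window; receive_selection() is a hypothetical non-blocking check for a pending selection operation:

```python
import time

def await_selection(receive_selection, target_duration_s=10.0):
    """Wait for a selection operation until the display duration reaches the
    target duration; None means step 201 should be re-executed."""
    deadline = time.monotonic() + target_duration_s
    while time.monotonic() < deadline:
        selected = receive_selection()  # non-blocking: returns an object or None
        if selected is not None:
            return selected             # becomes the occupant's object of interest
        time.sleep(0.05)                # avoid busy-waiting
    return None                         # timeout reached without a selection
```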
It should be appreciated that if there is only one candidate object and that object interests the target occupant, the target occupant may directly select it as the target object.
Step 204: in response to a voice interaction request for the object of interest, switch the display on the target glass to the response data of the voice interaction request.
After the object of interest is selected, the target occupant may interact with the vehicle according to the occupant's own intention to obtain detailed information about the object of interest. Specifically, the target occupant may issue a voice interaction request; the vehicle may collect the request through the voice collection component and, in response, switch the display on the target glass to the response data of the request.
The voice interaction request requests the detailed information of interest, and the response data is that detailed information. Switching the display to the response data of the voice interaction request means displaying the response data and removing the content that was displayed before it.
In an embodiment of the present application, the cabin of the vehicle may include a plurality of sound zones, each corresponding to a physical space of the cabin. The plurality of sound zones may include: a driver sound zone, a front passenger sound zone, a first rear sound zone (also referred to as the left rear sound zone), and a second rear sound zone (also referred to as the right rear sound zone). In addition, for the target glass toward which the target occupant's line of sight points, the vehicle generally updates the displayed content based on voice interaction requests issued by the target occupant, but not based on requests issued by other occupants. This avoids the situation where the response data displayed on the target glass is not the data the target occupant expected, and thus effectively improves the target occupant's experience.
Based on the above, after receiving the voice interaction request for the object of interest, the vehicle may localize the sound source of the request to determine the sound zone in which the sound source lies. If the vehicle determines that this sound zone is the one in which the target occupant is located, it may conclude that the request was issued by the target occupant, respond to the request, and switch the display on the target glass to the response data. If the sound zone is not the target occupant's, the vehicle may conclude that the request was not issued by the target occupant and that the content displayed on the target glass need not be updated for this request.
Thus, the vehicle provided by the embodiment of the application can use sound-source localization over the cabin's intelligent voice zones (i.e., the sound zones) to determine the sound zone of the occupant who issued the voice interaction request, and then determine from that sound zone whether the occupant is the target occupant.
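A minimal sketch of this gating logic; VoiceRequest, locate_sound_zone, and answer are hypothetical names standing in for the vehicle platform's localization and question-answering components:

```python
from dataclasses import dataclass

@dataclass
class VoiceRequest:
    text: str
    source_position: tuple  # sound-source position estimated by localization

def handle_voice_request(request, locate_sound_zone, target_zone, answer):
    """Update the target glass only for requests from the target occupant's
    sound zone; requests from other zones leave the display unchanged."""
    if locate_sound_zone(request.source_position) != target_zone:
        return None              # not the target occupant: ignore the request
    return answer(request.text)  # response data to switch-display on the glass
```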
It will be appreciated that the vehicle is also equipped with speakers and may support intelligent voice question answering. After receiving the voice interaction request for the object of interest, the vehicle may also confirm the target and give feedback through intelligent voice question answering. Specifically, the vehicle may confirm the type of the object of interest (e.g., park, gas station, bank, etc.) and inform the target occupant that the relevant detailed information has been projected onto the target glass.
Step 205: in response to a navigation instruction for the object of interest, generate a navigation path to the position of the object of interest.
If the target occupant wants to travel to the position of the object of interest, the occupant may issue a navigation instruction for the object of interest. Correspondingly, after receiving the navigation instruction, the vehicle may generate a navigation path to the position of the object of interest in response.
Alternatively, the navigation instruction may be a voice instruction. Or a navigation control may be displayed on the target glass, and the navigation instruction may be triggered by a touch operation on that control.
Step 206: project and display the navigation path on the target glass.
After the vehicle generates the navigation path to the position of the object of interest, the path may be projected and displayed on the target glass for the target occupant to view.
In addition, if the vehicle is an autonomous vehicle, after the navigation path is generated it may automatically drive to the position of the object of interest along that path. If the vehicle is not an autonomous vehicle, after the navigation path is generated the vehicle may display the path (e.g., on the central control screen) and broadcast navigation guidance voice based on it, so that the driver can control the vehicle to travel to the position of the object of interest.
The method provided by the embodiment of the application is illustrated below by way of example with reference to the accompanying drawings:
During travel, occupants of the vehicle often pay attention to objects outside (e.g., landmark buildings), and their line of sight may linger on objects that interest them. The vehicle may estimate an occupant's line-of-sight region by recognizing the occupant's head yaw angle and combining it with the vehicle's own attitude angle and speed. Based on the vehicle's position at that moment, the vehicle may then refine the locations of the surrounding environment via a high-definition map to obtain at least one candidate object appearing within the occupant's line-of-sight region.
For example, referring to fig. 3 and 4, the line-of-sight region of the vehicle's left rear passenger is region S1, and the at least one candidate object within region S1 includes: a parking lot and a gas station. The line-of-sight region of the vehicle's front passenger is region S2, and the at least one candidate object within region S2 includes: a parking lot and a park.
As can be seen from fig. 4, the left rear passenger's head is turned toward the left rear window, looking through it to the outside of the vehicle, and the front passenger's head is turned toward the front passenger window, looking through it to the outside. Accordingly, the vehicle may project and display the introduction information of the at least one candidate object within region S1 in the projection area of the left rear window, and the introduction information of the at least one candidate object within region S2 in the projection area of the front passenger window.
Further, referring to fig. 3, the line of sight of the left rear passenger (the arrow in region S1) points at the gas station (i.e., the left rear passenger is looking at the gas station), and the line of sight of the front passenger (the arrow in region S2) points at the parking lot. Therefore, if the introduction information is displayed in descending order of probability, the introduction information of the gas station may be displayed above that of the parking lot on the left rear window, i.e., the gas station's information on the side near the roof and the parking lot's information on the side away from the roof. Similarly, on the front passenger window, the introduction information of the parking lot may be displayed above that of the park.
Then, suppose the vehicle determines, based on the left rear passenger's selection operation, that the left rear passenger's object of interest is the gas station, and determines, based on the front passenger's selection operation, that the front passenger's object of interest is the park. Suppose the left rear passenger issues a voice interaction request for the gas station: "Hello, Edi, is that a gas station ahead?" Referring to fig. 5, the vehicle (e.g., the head unit) may feed back: "Hello, left rear passenger. Ahead is the xx gas station. The gas station's fuel price details have been projected onto the left rear window, please take a look."
Suppose the front passenger issues a voice interaction request for the park: "Hello, Edison, what scenic spot is ahead?" Referring to fig. 6, the vehicle may feed back to the front passenger: "Hello, front passenger. Ahead is the xx park. The park is open to the public free of charge during xxx, and the related information has been projected onto the front passenger window for you, please take a look."
Suppose the front passenger wants to travel to the xx park. With continued reference to fig. 6, the front passenger can issue a voice navigation instruction: "Navigate to the park ahead." The vehicle may then reply to the front passenger: "Received, the optimal path has been planned for you." Further, referring to fig. 7, the vehicle may also project and display a navigation path D to the xx park on the front passenger window.
As can be seen from the above description, the method provided by the embodiment of the present application may be applied to the following scenarios:
Scenario 1: while the vehicle is traveling, occupants need to acquire introduction information for nearby objects they can see, in order to meet their sightseeing needs.
Scenario 2: while driving, occupants cannot quickly obtain information about, or navigate to, a building they happen to see and are interested in. For example, suppose the vehicle is traveling fast and the driver, needing to charge the vehicle, glances at a charging station ahead and asks, "Is that a charging station? Is it open now?" The vehicle can feed back whether it is a charging station, whether it is open, and so on, and can project price information onto the front windshield. Then, if the driver says "navigate there", the vehicle can generate a navigation path and display it on the central control screen.
It can be understood that the order of the steps of the display method of the vehicle provided by the embodiment of the application may be adjusted appropriately, and steps may be added or removed as circumstances require. For example, step 204 may be omitted; or steps 205 and 206 may be omitted; or steps 203 to 206 may be omitted. Any variation readily conceivable by those skilled in the art within the technical scope disclosed by the present application shall fall within the protection scope of the present application and is not repeated here.
In summary, the embodiment of the application provides a display method of a vehicle, which can acquire at least one candidate object located within the line-of-sight region of a target occupant and outside the vehicle, and project and display the introduction information of each candidate object on the target glass toward which the target occupant's line-of-sight direction points, so that the target occupant can learn about each candidate object. This effectively improves the intelligence of the vehicle cabin and the driving experience of drivers and passengers.
An embodiment of the present application provides a controller. Referring to fig. 8, the controller 300 may include a processor 301, and the processor 301 is configured to:
acquire at least one candidate object that is located within the line-of-sight region of a target occupant and outside the vehicle;
and project and display the introduction information of each candidate object on target glass of the vehicle, where the target glass is the glass in the vehicle toward which the line-of-sight direction of the target occupant points.
Alternatively, the processor 301 may be configured to:
determining a plurality of objects around the vehicle based on the electronic map and the location of the vehicle;
at least one candidate object is determined from the plurality of objects that is located within the line of sight region of the target occupant.
Optionally, the processor 301 may be further configured to:
acquire a head yaw angle of the target occupant, an attitude angle of the vehicle, and a speed of the vehicle before determining, from the plurality of objects, at least one candidate object located within the line-of-sight region of the target occupant;
determine a line-of-sight angle of the target occupant based on the head yaw angle and the attitude angle, and determine a visual angle of the target occupant based on the vehicle speed, where the visual angle is inversely related to the vehicle speed;
and acquire the position of the line-of-sight region based on the line-of-sight angle, the visual angle, and the target position;
Wherein the target position is the position of the vehicle, or the position of the eyes of the target occupant.
Optionally, when the number of the at least one candidate object is plural, the introduction information of the plurality of candidate objects displayed on the target glass is arranged in order of the probabilities of the plurality of candidate objects;
where the probability of each candidate object refers to the probability that the candidate object is the reference object, the reference object being the object on which the target occupant's gaze point falls.
Optionally, the processor 301 may be further configured to:
After the introduction information of each candidate object is projected and displayed on the target glass of the vehicle, when the number of the at least one candidate object is plural, if a selection instruction for the introduction information of a target object among the plurality of candidate objects is received, determine the target object as the object of interest of the target occupant.
Alternatively, the processor 301 may be configured to:
if a selection instruction for the introduction information of the target object among the candidate objects is received before the display duration of the introduction information of each candidate object reaches the target duration, determine the target object as the object of interest of the target occupant.
Optionally, the processor 301 may be further configured to:
in response to a voice interaction request for the object of interest, switch the display on the target glass to the response data of the voice interaction request.
Optionally, the processor 301 may be further configured to:
if a navigation instruction for the object of interest is received, generate a navigation path to the position of the object of interest;
and project and display the navigation path on the target glass.
Optionally, the processor 301 may be further configured to:
if the gaze-tracking projection function of the vehicle is enabled, acquire at least one candidate object located within the line-of-sight region of the target occupant and outside the vehicle.
In summary, the embodiment of the present application provides a controller, which can acquire at least one candidate object located within the line-of-sight region of a target occupant and outside the vehicle, and project and display the introduction information of each candidate object on the target glass toward which the target occupant's line-of-sight direction points, so that the target occupant can learn about each candidate object. This effectively improves the intelligence of the vehicle cabin and the driving experience of drivers and passengers.
With continued reference to fig. 8, the controller 300 includes: a memory 303. Wherein the processor 301 is coupled to the memory 303, such as via a bus 302.
The processor 301 may be a CPU (central processing unit), a general-purpose processor, a DSP (digital signal processor), an ASIC (application-specific integrated circuit), an FPGA (field-programmable gate array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various exemplary logical blocks, modules, and circuits described in connection with the present disclosure. The processor 301 may also be a combination that implements computing functionality, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
Bus 302 may include a path for transferring information between the components. Bus 302 may be a PCI (peripheral component interconnect) bus, an EISA (extended industry standard architecture) bus, or the like. Bus 302 may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 8, but this does not mean there is only one bus or one type of bus.
The memory 303 is used to store the computer program corresponding to the display method of the vehicle in the above embodiments of the application, and the program is executed under the control of the processor 301. The processor 301 is configured to execute the computer program stored in the memory 303 to implement what is shown in the foregoing method embodiments.
An embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements a display method for a vehicle as provided in the above method embodiment.
The embodiment of the application provides a vehicle, which includes the controller provided by the above apparatus embodiment.
It should be noted that the logic and/or steps represented in the flowcharts or otherwise described herein may be considered an ordered listing of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program is printed, as the program may be electronically captured, for instance by optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, the steps may be implemented using any one or a combination of the following techniques known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
In the present application, unless explicitly specified and limited otherwise, the terms "mounted," "connected," "secured," and the like are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally formed; can be mechanically or electrically connected; either directly or indirectly, through intermediaries, or both, may be in communication with each other or in interaction with each other, unless expressly defined otherwise. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art according to the specific circumstances.
While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and are not to be construed as limiting the application, and that variations, modifications, substitutions, and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the application.
Claims (12)
1. A display method of a vehicle, the method comprising:
acquiring at least one candidate object that is located within the line-of-sight region of a target occupant and outside the vehicle;
and projecting and displaying introduction information of each candidate object on target glass of the vehicle, wherein the target glass is the glass in the vehicle toward which a line-of-sight direction of the target occupant points.
2. The method of claim 1, wherein acquiring at least one candidate object located within the line-of-sight region of the target occupant and outside the vehicle comprises:
determining a plurality of objects around the vehicle based on the electronic map and the location of the vehicle;
At least one candidate object located within the line of sight region of the target occupant is determined from the plurality of objects.
3. The method of claim 2, wherein, before determining, from the plurality of objects, at least one candidate object located within the line-of-sight region of the target occupant, the method further comprises:
acquiring a head yaw angle of the target occupant, an attitude angle of the vehicle, and a speed of the vehicle;
Determining a line-of-sight angle of the target occupant based on the head yaw angle and the attitude angle, and determining a visual angle of the target occupant based on the vehicle speed, the visual angle being inversely related to the vehicle speed;
Acquiring the position of the sight line region based on the sight line angle, the vision angle and the target position;
wherein the target position is a position of the vehicle or a position of eyes of the target occupant.
4. A method according to any one of claims 1 to 3, wherein,
when the number of the at least one candidate object is plural, the introduction information of the plurality of candidate objects displayed on the target glass is arranged in order of the probabilities of the plurality of candidate objects;
wherein the probability of each of the candidate objects refers to: the probability that the candidate object is a reference object, the reference object being the object on which a gaze point of the target occupant falls.
5. A method according to any one of claims 1 to 3, wherein after projection display of the introduction information of each of the candidate objects on the target glass of the vehicle, the method further comprises:
when the number of the at least one candidate object is plural, if a selection instruction for introduction information of a target object among the plurality of candidate objects is received, determining the target object as an object of interest of the target occupant.
6. The method of claim 5, wherein determining the target object as the object of interest of the target occupant if a selection instruction for introduction information of a target object of the candidate objects is received, comprises:
if a selection instruction for the introduction information of the target object among the candidate objects is received before the display duration of the introduction information of each candidate object reaches a target duration, determining the target object as the object of interest of the target occupant.
7. The method of claim 5, wherein the method further comprises:
Responsive to a voice interaction request for the object of interest, switching display of response data of the voice interaction request in the target glass.
8. The method of claim 5, wherein the method further comprises:
if a navigation instruction for the object of interest is received, generating a navigation path to a position of the object of interest;
and projecting and displaying the navigation path on the target glass.
9. A method according to any one of claims 1 to 3, wherein acquiring at least one candidate object located within the line of sight area of the target occupant and located outside the vehicle comprises:
if a gaze-tracking projection function of the vehicle is enabled, acquiring at least one candidate object located within the line-of-sight region of the target occupant and outside the vehicle.
10. A controller, the controller comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of any one of claims 1-9 when the computer program is executed.
11. A computer readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the method according to any of claims 1-9.
12. A vehicle comprising the controller of claim 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410743897.0A CN118306322A (en) | 2024-06-11 | 2024-06-11 | Vehicle and display method thereof |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410743897.0A CN118306322A (en) | 2024-06-11 | 2024-06-11 | Vehicle and display method thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118306322A true CN118306322A (en) | 2024-07-09 |
Family
ID=91731776
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410743897.0A Pending CN118306322A (en) | 2024-06-11 | 2024-06-11 | Vehicle and display method thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118306322A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200073520A1 (en) * | 2018-08-30 | 2020-03-05 | Sony Corporation | Display control of interactive content based on direction-of-view of occupant in vehicle |
US20200150432A1 (en) * | 2017-07-31 | 2020-05-14 | Nippon Seiki Co., Ltd. | Augmented real image display device for vehicle |
KR20210091394A (en) * | 2020-01-13 | 2021-07-22 | 엘지전자 주식회사 | Autonomous Driving Control Device and Control Method based on the Passenger's Eye Tracking |
US20220063510A1 (en) * | 2020-08-27 | 2022-03-03 | Naver Labs Corporation | Head up display and control method thereof |
CN115065818A (en) * | 2022-06-16 | 2022-09-16 | 南京地平线集成电路有限公司 | Projection method and device of head-up display system |
CN116932934A (en) * | 2023-07-25 | 2023-10-24 | 南京地平线集成电路有限公司 | Method and device for determining user interest points, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||