WO2019131296A1 - Head-up display device - Google Patents
- Publication number
- WO2019131296A1 (PCT/JP2018/046440)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- vehicle
- display
- virtual
- control unit
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60K—ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
- B60K35/00—Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
- B60K35/20—Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor
- B60K35/21—Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor using visual output, e.g. blinking lights or matrix displays
- B60K35/23—Head-up displays [HUD]
- B60K35/235—Head-up displays [HUD] with means for detecting the driver's gaze direction or eye points
- B60K35/28—Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor characterised by the type of the output information, e.g. video entertainment or vehicle dynamics information; characterised by the purpose of the output information, e.g. for attracting the attention of the driver
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
Definitions
- the present invention relates to a head-up display device.
- A head-up display (HUD: Head-Up Display) device that displays an image as a virtual image in front of a light-transmissive member, such as the windshield of a vehicle, by projecting display light representing the image onto that member is disclosed, for example, in Patent Document 1.
- Patent Document 1 discloses a display device that conveys the behavior of the vehicle to the driver visually, mainly by displaying, directly on the display window of a display provided in the vehicle, a stripe pattern that moves according to the vehicle speed.
- Because the HUD device displays an image (virtual image) superimposed on a real scene, such as a road viewed through the light-transmissive member, simply moving a pattern that mainly assumes display on a physical screen, as in Patent Document 1, according to the vehicle speed may interfere with the visibility of the real scene.
- The present invention has been made in view of the above circumstances, and an object thereof is to provide a head-up display device capable of presenting a display that evokes the behavior of the vehicle while securing the visibility of the real scene.
- In order to achieve the above object, a head-up display device according to the present invention is a head-up display device that is mounted on a vehicle and that displays an image as a virtual image on a virtual surface set in front of a light-transmissive member by projecting display light representing the image onto the light-transmissive member, the device comprising:
- a display unit that emits the display light;
- a control unit configured to control the image displayed on the virtual surface by controlling the operation of the display unit;
- the virtual surface is set to be inclined forward with respect to the vertical direction of the vehicle,
- the image displayed on the virtual surface includes a surface-evoking image that is reminiscent of a surface through a combination of linear or dot-like image elements,
- the control unit acquires a traveling speed of the vehicle, and
- displays the surface-evoking image in the form of a moving image in which the image elements move in accordance with the acquired traveling speed.
- FIG. 1 is a diagram showing how the head-up display device according to one embodiment of the present invention is mounted on a vehicle. FIG. 2 is a schematic side view for explaining the configuration of the head-up display device according to the embodiment. FIG. 3 is a block diagram showing the head-up display device and related components according to the embodiment. FIG. 4 is a schematic diagram for explaining the gradient angle of a road surface and the like.
- FIG. 5: (a) is a schematic diagram showing the virtual image displayed superimposed on the road surface seen through the windshield of the vehicle, and (b) is a diagram for explaining each image constituting the virtual image.
- FIG. 6: (a) is a diagram for explaining the virtual grid, and (b) is a diagram showing the surface-evoking image.
- FIG. 7: (a) is a diagram showing the surface-evoking image in a special mode, and (b) shows a display example of the virtual image in the special mode.
- FIG. 8 is a flowchart showing an example of display control processing.
- FIG. 9: (a) and (b) are diagrams showing surface-evoking images according to modifications.
- a head-up display device according to an embodiment of the present invention will be described with reference to the drawings.
- A head-up display (HUD: Head-Up Display) device 100 is disposed, for example, in the dashboard 3 of a vehicle 1, as shown in FIG. 1.
- The HUD device 100 emits display light L toward the windshield 2, which is treated as a combiner.
- the display light L reflected by the windshield 2 travels toward the user 4 (mainly the driver of the vehicle 1).
- the user 4 can visually recognize the image represented by the display light L as a virtual image V in front of the windshield 2 by placing the viewpoint in the eye box Eb. That is, the HUD device 100 displays the virtual image V in front of the windshield 2.
- the user 4 can observe the virtual image V superimposed on the landscape.
- the virtual image V displays various information on the vehicle 1 (hereinafter referred to as vehicle information).
- vehicle information includes not only the information of the vehicle 1 itself but also the external information of the vehicle 1.
- As shown in FIG. 1, the HUD device 100 displays the virtual image V on a virtual surface A that is set in front of the windshield 2 and is inclined forward with respect to the vertical direction of the vehicle 1.
- the virtual surface A corresponds to the display surface 31 of the screen 30 described later, and is a displayable area of the virtual image V.
- the virtual surface A has a rectangular shape when viewed in the normal direction.
- One end of the virtual surface A, closest to the vehicle 1, is set at a position of roughly 5 m (for example, 6 m) from the vehicle 1, and the other end, far from the vehicle 1, is set at a position of roughly 10 m (for example, 12 m) from the vehicle 1.
- The positions of the virtual surface A and the eye box Eb are preset based on the optical system constituted by the screen 30 described later, the various mirrors in the HUD device 100, and the windshield 2 subjected to combiner processing.
- As shown in FIG. 2, the HUD device 100 includes the display 10, the first to third plane mirrors 21 to 23, the screen 30, the concave mirror 40, the housing 50, and the control device 60. First, each part shown in FIG. 2 will be described.
- The display 10 generates and emits the display light L, and includes, for example, a projector using a reflective display device such as a DMD (Digital Micromirror Device) or LCOS (Liquid Crystal On Silicon).
- the display 10 emits the display light L generated under the control of the control device 60 toward the first plane mirror 21.
- the first plane mirror 21 is, for example, a cold mirror, and is disposed obliquely on the light path of the display light L from the display 10.
- the display light L from the display 10 is reflected by the first plane mirror 21 and travels to the screen 30.
- the screen 30 is made of, for example, a transmissive screen such as a holographic diffuser, a microlens array, or a diffusion plate.
- The screen 30 displays the image carried by the display light L on its display surface 31, which is on the opposite side from the surface that receives the display light L, and emits display light L corresponding to that image toward the second plane mirror 22.
- In order to set the virtual surface A, which is the display surface of the virtual image V, to be inclined forward with respect to the vertical direction of the vehicle 1, a known method can be adopted as appropriate; for example, the inclined virtual surface A may be realized by adjusting the inclination and curvature of the reflecting portions located on the optical path of the display light L.
- the second plane mirror 22 reflects the display light L from the screen 30 toward the third plane mirror 23.
- the third plane mirror 23 reflects the display light L from the second plane mirror 22 toward the concave mirror 40.
- Each of the second plane mirror 22 and the third plane mirror 23 is, for example, a cold mirror.
- In this embodiment, three plane mirrors 21 to 23 are used to fold the optical path of the display light L, but any number (one or more) of plane mirrors may be used. The number of plane mirrors and the manner of folding the optical path of the display light L can be changed as appropriate according to the design.
- the concave mirror 40 reflects the display light L from the third plane mirror 23 toward the windshield 2 while enlarging it.
- the virtual image V visually recognized by the user 4 is a magnified image of the image displayed on the screen 30.
- The housing 50 is formed of synthetic resin or metal in a box shape having light-shielding properties.
- the housing 50 is provided with an opening for securing the optical path of the display light L, and a translucent cover 51 is attached to close the opening.
- the translucent cover 51 is formed of a translucent resin such as acrylic.
- the display light L reflected by the concave mirror 40 passes through the translucent cover 51 and travels to the windshield 2.
- the display light L is emitted from the HUD device 100 toward the windshield 2.
- a virtual image V is displayed in front of the windshield 2 as viewed from the user 4.
- The concave mirror 40 may be provided so as to be rotatable or translatable by an actuator (not shown).
- For example, the concave mirror 40 may be rotatable clockwise and counterclockwise in FIG. 2 so that the display position (height) of the virtual image V can be adjusted by rotating the mirror to change the reflection angle of the display light L.
- This adjustment may be performed under the control of the control device 60 in accordance with a user operation from an operation unit (not shown) or with the viewer's viewpoint position detected by viewpoint detection means (not shown).
- the control device 60 controls the entire operation of the HUD device 100, and includes a control unit 61, a storage medium 62, and an I / F (InterFace) 63.
- the control device 60 can communicate with various systems in the vehicle 1 through the I / F 63 by, for example, CAN (Controller Area Network).
- a power supply is connected to the HUD device 100, and for example, operating power is supplied to the control device 60 when the ignition of the vehicle 1 is turned on.
- The control unit 61 includes a microcomputer having: a ROM (Read Only Memory) that stores operation programs and various image data; a RAM (Random Access Memory) that temporarily stores various calculation results; a CPU (Central Processing Unit) that executes the operation programs stored in the ROM; a GPU (Graphics Processing Unit) that performs image processing in cooperation with the CPU; and a drive circuit that drives the display 10 under the control of the CPU and the GPU.
- the ROM stores an operation program for executing display control processing described later.
- Part of the control unit 61 may be configured by a dedicated circuit such as an ASIC (Application Specific Integrated Circuit).
- the storage medium 62 is configured of a solid state drive (SSD), a hard disk drive (HDD), a DVD-ROM, a CD-ROM, and the like.
- The storage medium 62 stores digital map data composed of map information and three-dimensional coordinate information (three-dimensional information) indicating road shapes.
- The digital map data includes, for each predetermined position (predetermined latitude and longitude) on a road, various data such as road shape data indicating the shape of the road, height data indicating a reference height of the road (for example, height above sea level), slope data indicating the inclination of the road in the longitudinal direction, cant data indicating the inclination of the road in the width direction, curvature data indicating the curvature of the road, and speed data indicating the speed limit of the road.
- the digital map data is used when the control unit 61 executes a drawing process.
- In cooperation with the GPU, the CPU of the control unit 61 controls the display 10 (controls generation of the display light L) based on the various image data stored in the ROM and the digital map data stored in the storage medium 62.
- the GPU determines the control content of the display operation on the display 10 based on a display control command from the CPU. For example, the GPU performs control to execute various displays by determining the switching timing of the image displayed on the screen 30 by the display light L from the display 10.
- the control unit 61 performs display control of the virtual image V.
- layers are assigned in advance to the respective images constituting the virtual image V, and the control unit 61 can perform individual display control of the respective images.
- The I/F 63 is a circuit for electrically connecting the control unit 61 to each of the operation unit 70, the vehicle speed sensor 81, the vehicle attitude detection unit 82, the GPS (Global Positioning System) device 83, the wireless communication unit 84, the forward situation detection unit 85, and the ECU (Electronic Control Unit) 86.
- the configuration of the HUD device 100 is as described above. Subsequently, various configurations for communicating with the control device 60 of the HUD device 100 will be described.
- A vehicle display system is constituted by the HUD device 100 and the following various components.
- the operation unit 70 receives various operations by the user 4 and supplies a signal indicating the content of the received operation to the control unit 61.
- For example, the operation unit 70 receives from the user 4 an operation to enlarge or reduce the first notification image V1 (described later) indicating map information and the like, an operation to switch the display mode of the virtual image V, and so on.
- the vehicle speed sensor 81 detects the traveling speed (vehicle speed) of the vehicle 1, and outputs a signal corresponding to the vehicle speed to the control unit 61.
- The vehicle speed sensor 81 includes, for example, a Hall element that detects a target rotating in synchronization with the wheels (for example, gear teeth or metal protrusions), and supplies the control unit 61 with a vehicle speed signal whose frequency corresponds to the vehicle speed.
- The control unit 61 performs A/D (Analog-to-Digital) conversion on the acquired vehicle speed signal, and calculates the vehicle speed from the frequency of the signal.
- the host vehicle attitude detection unit 82 detects an attitude of the vehicle 1 (hereinafter, also referred to as “host vehicle 1”), and is formed of, for example, a gyro sensor.
- The gyro sensor detects the direction (traveling direction) of the vehicle 1 and the vehicle inclination angle φ, and outputs a signal indicating the detection result to the control unit 61.
- As shown in FIG. 4, the vehicle inclination angle φ is the angle between the horizontal plane H and the vehicle 1.
- The positive direction of the vehicle inclination angle φ is clockwise in the same drawing (the same applies to the gradient angle θ described later).
- The vehicle inclination angle φ can also be calculated based on position information from the GPS device 83 described later and the digital map data in the storage medium 62.
- the host vehicle attitude detection unit 82 may include a steering angle sensor or a yaw rate sensor.
- The GPS device 83 obtains the latitude and longitude of the current position of the vehicle 1. It includes a GPS receiving antenna and an amplifier circuit: the receiving antenna receives the high-frequency radio signal carrying position information transmitted from satellites, and the amplified signal is output to the control unit 61.
- Based on the position information from the GPS device 83, the control unit 61 reads map information in the vicinity of the current position, road shape data, and the like from the storage medium 62, and also functions as a car navigation controller that determines a guidance route to the destination set by the user 4 (mainly the driver).
- the wireless communication unit 84 includes an antenna, a high frequency circuit, and the like to perform road-to-vehicle communication.
- Through roadside wireless devices installed as infrastructure, the wireless communication unit 84 receives road information (including slope information indicating the slope angles of various roads, as well as information on speed limits, lanes, road widths, intersections, curves, branches, and the like) and outputs it to the control unit 61.
- For example, the wireless communication unit 84 acquires the road information from a traffic-control base station (for example, a base station of the Driving Safety Support Systems (DSSS)) via the roadside devices.
- The control unit 61 can grasp the gradient angle θ of the road surface R based on the gradient information acquired from the wireless communication unit 84. As shown in FIG. 4, the gradient angle θ is the angle between the horizontal plane H and the road surface R.
- The road surface R is the roadway ahead of the vehicle 1, at least the portion that the user 4 can see through the windshield 2, as shown in FIG. 5A.
- The front situation detection unit 85 includes, for example, an imaging unit (such as a stereo camera) that captures the scenery ahead of the vehicle 1 (including the road surface R), an image analysis unit that analyzes the captured images, and a distance sensor or the like that measures the distance to objects ahead.
- the front situation detection unit 85 detects various objects in front of the vehicle 1 by analyzing the captured image by a known method such as a pattern matching method.
- The various objects include objects on the road surface R (such as preceding vehicles and obstacles) and road shape information (including the slope of the road surface R, as well as information on lanes, road widths, intersections, curves, branches, and the like).
- the front situation detection unit 85 may be configured to include a sonar, an ultrasonic sensor, a millimeter wave radar, and the like.
- The ECU 86 controls each part of the vehicle 1 and, in this embodiment, in particular switches the vehicle 1 between an automatic driving mode and a manual driving mode. The ECU 86 outputs to the control unit 61 operation mode information indicating whether the vehicle 1 is currently in the manual driving mode or the automatic driving mode.
- The automatic driving level when the vehicle 1 is set to the manual driving mode is level 0 or level 1.
- At level 0, the driver performs all of the main control tasks (acceleration, steering, and braking).
- At level 1, the system assists with any one of acceleration, steering, and braking.
- The automatic driving level when the vehicle 1 is set to the automatic driving mode is level 3 or higher. At level 3, the system performs acceleration, steering, and braking, but only in limited environments or traffic conditions, and the driver responds when the system so requests.
- In order to properly display the component images of the virtual image V described later, the control unit 61 first specifies the shape of the road surface R ahead of the vehicle 1, from the set position of the virtual surface A (for example, several meters from the vehicle 1) to several tens of meters ahead, based on the position information from the GPS device 83 and the data stored in the storage medium 62. Further, the control unit 61 specifies the gradient angle θ of the road surface R based on gradient data obtainable from the position information from the GPS device 83 and the data stored in the storage medium 62, and on the gradient information acquired from the wireless communication unit 84. Further, the control unit 61 specifies the vehicle inclination angle φ based on the detection signal from the vehicle attitude detection unit 82.
- The control unit 61 then calculates (estimates) the relative angle θ1 of the road surface R ahead with respect to the vehicle 1 by subtracting the vehicle inclination angle φ from the gradient angle θ of the road surface R.
- Alternatively, the control unit 61 may calculate (estimate) the slope of the road surface R as seen from the host vehicle 1, which corresponds to the relative angle θ1, based on the information from the front situation detection unit 85.
- Since the slope of the road surface R detected by the front situation detection unit 85 is based on captured images from the imaging unit mounted on the vehicle 1, it corresponds directly to the relative angle θ1.
- The virtual image V includes a surface-evoking image VS, a first notification image V1, and a second notification image V2.
- The virtual image V shown in FIGS. 5A and 5B represents the view from the user 4 (driver) seated in the driver's seat of the vehicle 1 (the same applies to FIGS. 6 and 7, described later).
- The control unit 61 performs display control of the virtual image V in consideration of projection onto the virtual surface A. For example, in order for the user 4 to visually recognize a rectangular virtual image V directly facing the user, the control unit 61 displays the trapezoidal virtual image V obtained by projecting that rectangle from the viewpoint of the user 4 onto the virtual surface A.
- Each of the images constituting the virtual image V described below is display-controlled in consideration of the fact that the virtual surface A is inclined in this way.
- As the viewpoint position of the user 4, the control unit 61 may use an assumed viewpoint position stored in advance in the ROM, or may specify it as appropriate based on a detection signal from viewpoint detection means (not shown), such as a camera that images the user 4.
- The surface-evoking image VS is an image that reminds the user 4 of a surface through a combination of the linear image elements E shown in FIG. 5B; as shown in FIG. 5A, it is viewed by the user 4 as lying along the road surface R ahead of the vehicle 1.
- The control unit 61 performs display control of the display 10 based on the specified shape of the road surface R, the calculated gradient angle θ of the road surface R, and the relative angle θ1, so that the surface-evoking image VS is viewed by the user 4 as substantially parallel to a portion of the road surface R.
- The image elements E are drawn, for example, according to the virtual grid G shown in FIG. 6A.
- the virtual grid G is generated along the road surface R identified as described above.
- The virtual grid G is, for example, a combination of a plurality of lines extending from a set vanishing point P toward the user 4 and a plurality of lines extending in the left-right direction, drawn in consideration of perspective.
- The vanishing point P may be set in advance in consideration of the relationship between the viewpoint position of the user 4 and the virtual surface A and stored in the ROM, or it may be calculated from the result of identifying, through image analysis in the front situation detection unit 85, the far end of the road that marks the limit of visibility. Further, the arrangement interval of the plurality of lines extending in the left-right direction in the virtual grid G becomes shorter as the lines approach the vanishing point P from the user 4 side.
- That is, the pattern of the virtual grid G is set in consideration of the "texture gradient", by which texture appears finer with increasing distance.
- The virtual grid G itself is not actually displayed as the virtual image V; it is used by the control unit 61 to draw the surface-evoking image VS.
- Specifically, the control unit 61 draws the linear image elements E so that they lie on the plurality of lines extending from the vanishing point P of the virtual grid G toward the user 4. Each linear image element E is also drawn so that its length becomes shorter as it approaches the vanishing point P, in consideration of the arrangement interval of the plurality of lines extending in the left-right direction in the virtual grid G. As a result, as shown in FIG. 6B, the surface-evoking image VS composed of the plurality of linear image elements E lets the user 4 perceive a surface in which perspective is taken into account.
- The control unit 61 displays the surface-evoking image VS in a moving-image mode in which the image elements E move toward the user 4 in accordance with the traveling speed (vehicle speed) of the vehicle 1. The behavior of the vehicle 1 can thereby be conveyed to the user 4 visually. Further, since the surface-evoking image VS is composed of linear (or, as described later, dot-like) image elements E, the visibility of the real scene can be ensured.
- As shown in FIGS. 5A and 5B, the first notification image V1 indicates map information in the vicinity of the current position of the vehicle 1 and a guidance route.
- The control unit 61 performs display control of the first notification image V1 based on the position information from the GPS device 83 and the digital map data in the storage medium 62.
- The first notification image V1 is displayed along the virtual grid G generated as described above, that is, along the surface-evoking image VS.
- Unlike the surface-evoking image VS, the first notification image V1 as a whole does not move according to the traveling speed (vehicle speed).
- Note that the first notification image V1 includes an own-vehicle image representing the current position of the host vehicle 1, and this own-vehicle image may be moved in accordance with the vehicle speed.
- The first notification image V1 can be enlarged under the control of the control unit 61 in response to an enlargement operation by the user 4 from the operation unit 70.
- In this case, the control unit 61 widens the left-right arrangement pitch of the image elements E constituting the surface-evoking image VS in accordance with the enlarged display of the first notification image V1.
- As a result, the first notification image V1 appears to be enlarged in conjunction with the surface-evoking image VS.
- Similarly, the first notification image V1 can be reduced under the control of the control unit 61 in response to a reduction operation by the user 4 from the operation unit 70.
- In this case, the control unit 61 narrows the left-right arrangement pitch of the image elements E constituting the surface-evoking image VS in accordance with the reduced display of the first notification image V1. As a result, the first notification image V1 appears to be reduced in conjunction with the surface-evoking image VS.
- The control unit 61 may also change the arrangement pitch or the length of the plurality of lines extending from the vanishing point P toward the user 4 in accordance with the enlargement or reduction of the first notification image V1.
- The second notification image V2 is, for example, an image for notifying the speed limit of the road on which the vehicle 1 is traveling.
- The control unit 61 performs display control of the second notification image V2 based on the position information from the GPS device 83, the digital map data of the storage medium 62, and the data indicating the speed limit received by the wireless communication unit 84.
- Although the second notification image V2 is displayed on the virtual surface A, the control unit 61 performs display control so that, when viewed from the user 4, the second notification image V2 is visually recognized as a pseudo-standing image that appears to rise up from the surface-envisioning image VS.
- Thus, the second notification image V2 is visually recognized by the user 4 as substantially facing the user head-on.
- Unlike the surface-envisioning image VS, the second notification image V2 as a whole also does not move according to the traveling speed (vehicle speed).
- Alternatively, the second notification image V2 may be a vehicle speed display or the like that numerically represents the vehicle speed detected by the vehicle speed sensor 81.
- In this case, the numerical value of the vehicle speed display naturally changes according to the vehicle speed, but the display itself still does not move according to the vehicle speed.
- FIGS. 5A and 5B show an example in which the surface-envisioning image VS is also displayed in the display area of the first notification image V1. To give priority to the display of the first notification image V1, however, the display luminance or the like of the surface-envisioning image VS in that area may be lowered to make it less conspicuous than in other display areas, or the surface-envisioning image VS may be hidden there.
- The display priority of the surface-envisioning image VS, the first notification image V1, and the second notification image V2 may be determined in advance by layers.
- For example, the display priority may be set in descending order (from the uppermost layer) of the second notification image V2, the first notification image V1, and the surface-envisioning image VS, with the surface-envisioning image VS given the lowest priority.
- The virtual image V displayed by the HUD device 100 has the characteristic that its appearance changes depending on the object on which it is superimposed. For example, when a building, a preceding vehicle, or the like (hereinafter referred to as a front object) is present ahead, the virtual image V may appear to shift for the user 4 as the user 4 focuses on the front object. Therefore, if only the first notification image V1 and the second notification image V2 were displayed as the virtual image V, their appearance could change in this manner. On the other hand, human beings respond sensitively to motion in an image that conveys movement. Therefore, by means of the surface-envisioning image VS, which can be animated according to the vehicle speed, the user 4 can be made to envision a surface without being drawn to the front object as much as possible.
- That is, the HUD device 100 causes the user 4 to perceive the surface-envisioning image VS, which conveys motion in this way, as a reference plane, and causes the first notification image V1 and the second notification image V2 to be visually recognized with respect to that reference plane.
- As a result, the first notification image V1 and the second notification image V2 can be visually recognized well.
- When a predetermined switching trigger occurs in the mode in which the surface-envisioning image VS is displayed along the road surface R (hereinafter referred to as the "normal mode"), the control unit 61 switches the display mode of the virtual image V to a special mode.
- In the special mode, as shown in FIG. 7B, the virtual image V changes so that the first notification image V1 represents the map information and the guide route in plan view, and the surface-envisioning image VS correspondingly changes so as to evoke a plan-view surface.
- In this case, the control unit 61 generates the virtual grid G such that the vertical lines and the horizontal lines are orthogonal to each other, as shown in FIG. 7A.
- Then, the control unit 61 draws linear image elements E positioned on the plurality of lines extending in the vertical direction in the virtual grid G. Note that the special mode based on plan view does not require a sense of perspective, so the control unit 61 draws the surface-envisioning image VS without considering the vanishing point P or the "gradient of texture."
- The trigger for switching from the normal mode to the special mode may be, for example, that the control unit 61 receives a display mode switching operation from the operation unit 70 by the user 4, or that the control unit 61 receives, from the ECU 86, operation mode information indicating that the vehicle 1 has entered the automatic driving mode.
- The trigger for switching from the special mode to the normal mode may be, for example, that the control unit 61 receives a display mode switching operation from the operation unit 70 by the user 4, or that the control unit 61 receives, from the ECU 86, operation mode information indicating that the vehicle 1 has entered the manual operation mode.
- When the possibility of danger is identified from the front situation detected by the front situation detection unit 85, the control unit 61 may switch the virtual image V from the special mode to the normal mode, or may end either the normal mode or the special mode (for example, the first notification image V1 or the surface-envisioning image VS may be erased).
- Also in the special mode, the control unit 61 displays the surface-envisioning image VS in a moving-image mode in which the image elements E move toward the user 4 (from bottom to top in FIGS. 7A and 7B) according to the vehicle speed.
- the second notification image V2 may be displayed at an appropriate position also in the special mode.
- The special mode is not limited to the aspect in which the first notification image V1 and the surface-envisioning image VS are displayed in plan view.
- For example, the first notification image V1 and the surface-envisioning image VS may be represented in a bird's-eye view.
- In this case, the control unit 61 may generate a bird's-eye view of the first notification image V1 by calculation, as a projection from a preset virtual viewpoint (a viewpoint located in a virtual sky) onto the map plane, based on the virtual viewpoint and the digital map data stored in the storage medium 62. It is then sufficient to generate the bird's-eye-view surface-envisioning image VS in accordance with the generated bird's-eye view of the first notification image V1.
- the vanishing point P may be set within the display area or outside the display area.
- The control unit 61 generates a virtual grid G composed of a combination of a plurality of lines extending from the vanishing point P toward the virtual viewpoint and a plurality of lines intersecting them, and may draw the surface-envisioning image VS with linear image elements E positioned on those lines of the generated virtual grid G that extend from the vanishing point P toward the virtual viewpoint. Also in this case, the pattern of the virtual grid G may be set in consideration of the "gradient of texture," which becomes finer with distance (closer to the vanishing point P). Further, also in the bird's-eye view, the control unit 61 may display the surface-envisioning image VS in a moving-image mode in which the image elements E move from the vanishing point P side in a predetermined direction according to the vehicle speed.
- Also in the bird's-eye view, the control unit 61 changes the arrangement pitch and the length of the linear image elements E in accordance with the enlargement/reduction display of the first notification image V1.
- As a result, the first notification image V1 appears to be enlarged or reduced in conjunction with the surface-envisioning image VS.
- The display control process is repeatedly performed, for example, in a predetermined cycle while the ignition of the vehicle 1 is on.
- the display control process of the virtual image V in the normal mode shown in FIGS. 5A and 5B will be described below as an example.
- First, the control unit 61 acquires information necessary for image generation (step S1). Specifically, the control unit 61 acquires the position coordinates of the vehicle 1 from the GPS device 83. Further, the control unit 61 calculates the traveling direction of the vehicle 1 based on the time change of the position coordinates acquired from the GPS device 83 and the signal from the gyro sensor of the host vehicle posture detection unit 82. Further, the control unit 61 specifies the shape of the road surface R at the set position of the virtual surface A ahead of the vehicle 1, based on the position information from the GPS device 83 and the data stored in the storage medium 62.
- The control unit 61 also calculates or acquires the inclination angle θ of the road surface, the vehicle inclination angle φ, and the relative angle θ1 of the road surface R with respect to the vehicle 1. Further, the control unit 61 acquires the vanishing point P necessary for the calculation of the virtual grid G from the ROM, or calculates it based on image analysis by the forward situation detection unit 85.
- Subsequently, the control unit 61 generates the surface-envisioning image VS, the first notification image V1, and the second notification image V2, and drives and controls the display 10 to display a virtual image V consisting of these various images on the virtual surface A (step S2).
- Specifically, the control unit 61 calculates the virtual grid G along the road surface R based on the shape of the road surface R and the relative angle θ1 acquired in step S1, and generates the surface-envisioning image VS by drawing linear image elements E on the virtual grid G.
- Based on the position information from the GPS device 83 and the digital map data of the storage medium 62, the control unit 61 generates the first notification image V1 representing map information around the current location of the vehicle 1 and a guide route. Further, the control unit 61 controls the display of the second notification image V2 based on the position information from the GPS device 83, the digital map data of the storage medium 62, and the data indicating the speed limit received by the wireless communication unit 84.
- Next, the control unit 61 acquires the vehicle speed from the vehicle speed sensor 81 (step S3), and determines whether the vehicle 1 is stopped based on the acquired vehicle speed (step S4). Note that whether the vehicle 1 is stopped may instead be determined based on brake information obtainable from the ECU 86 or the time change of the position coordinates acquired from the GPS device 83.
- When it is determined in step S4 that the vehicle 1 is not stopped (step S4; No), the control unit 61 calculates the moving speed of the image elements E for displaying the surface-envisioning image VS in the moving-image mode (step S5).
- For example, the control unit 61 refers to table data, stored in advance in the ROM, that associates the moving speed of the image elements E with the vehicle speed, and acquires the moving speed corresponding to the vehicle speed acquired in step S3.
- Alternatively, the vehicle speed acquired in step S3 may be used as the moving speed as it is, or the moving speed of the image elements E may be calculated by multiplying the vehicle speed acquired in step S3 by a predetermined coefficient α (0 < α < 1 or α > 1).
- Subsequently, the control unit 61 drives and controls the display 10, and displays the surface-envisioning image VS in the moving-image mode so that the image elements E move toward the user 4 at the moving speed calculated in step S5 (step S6).
- As a result, the surface-envisioning image VS is displayed in a moving-image mode in which the image elements E move according to the vehicle speed of the vehicle 1.
- When it is determined in step S4 that the vehicle 1 is stopped (step S4; Yes), the control unit 61 displays the surface-envisioning image VS in the moving-image mode so that the image elements E move toward the user 4 at a set speed stored in advance in the ROM (step S8). As a result, even when the vehicle 1 is stopped, motion can be felt in the surface-envisioning image VS, and the user 4 can be made to envision a surface without being drawn to the front object as much as possible. After execution of step S8, the control unit 61 returns to step S1 and executes the process again.
- the moving speed of the image element E at the time of stopping may be the moving speed immediately before stopping (that is, the moving speed calculated at step S5 immediately before it is determined as Yes at step S4).
- While the vehicle is stopped, the image elements E may be swung or blinked instead of being moved toward the user 4.
- After step S6, the control unit 61 determines whether the vehicle speed acquired in step S3 exceeds a threshold stored in advance in the ROM (step S7).
- the threshold is a threshold for determining whether the vehicle is traveling at a high speed. If the vehicle speed is equal to or less than the threshold (step S7; No), the control unit 61 returns to step S1 and executes the process.
- If the vehicle speed exceeds the threshold (step S7; Yes), the control unit 61 executes the visibility securing process (step S9).
- The visibility securing process is a process for preventing the image elements E, which move at high speed according to the vehicle speed, from becoming distracting to the user 4.
- Specifically, the control unit 61 performs at least one of the following controls: lowering the visibility of the surface-envisioning image VS compared to when the vehicle speed is equal to or less than the threshold (for example, reducing the display luminance, or decreasing the lightness and saturation), and reducing the number of image elements E included in the surface-envisioning image VS.
- After execution of step S9, the control unit 61 returns to step S1 and executes the process again.
- The above is the display control process. Even when the display mode of the virtual image V is switched from the normal mode to the plan-view or bird's-eye-view special mode during execution of the display control process, the control unit 61 repeatedly executes the display control process in the same manner.
- The surface-envisioning image may also be a lattice-like surface-envisioning image VS1 composed of linear image elements E along each of the lines extending from the vanishing point P toward the user 4 in the virtual grid G and the lines extending in the left-right direction.
- The surface-envisioning image VS1 may be superimposed on at least a part of the plurality of lines forming the virtual grid G or, although not shown, may coincide with all of the plurality of lines forming the virtual grid G.
- Alternatively, it may be a surface-envisioning image VS2 composed of a plurality of cross-shaped image elements E superimposed on the lines forming the virtual grid G.
- Further, the surface-envisioning image is not limited to a combination of linear image elements E; as long as it can cause the user 4 to envision a surface, it may be composed of a combination of dot-like image elements, or a combination of linear image elements and dot-like image elements.
- The virtual grid G and the surface-envisioning images VS, VS1, and VS2 (hereinafter the symbols VS1 and VS2 are omitted) generated on the virtual grid G are not limited to those composed of straight lines.
- For example, the control unit 61 may configure the virtual grid G and the surface-envisioning image VS as curves along the road surface R ahead, based on the position information from the GPS device 83 and the digital map data of the storage medium 62.
- Not only the gradient in the longitudinal direction of the road surface R ahead but also the inclination and curvature in the width direction of the road surface R may be specified based on the position information from the GPS device 83 and the digital map data of the storage medium 62, and the virtual grid G and the surface-envisioning image VS may be generated in consideration of the inclination and curvature in the width direction.
- Further, the control unit 61 may calculate the expected traveling locus of the vehicle 1 based on detection signals from the steering angle sensor and the yaw rate sensor included in the host vehicle attitude detection unit 82, and configure the virtual grid G and the surface-envisioning image VS as curves along the calculated traveling locus.
- In the above description, the drawing control of the surface-envisioning image VS is performed based on the virtual grid G generated by the control unit 61, but the present invention is not limited thereto. The drawing of the surface-envisioning image VS may also be controlled without using the virtual grid G.
- For example, the surface-envisioning image VS may be drawn without using the vanishing point P or the gradient of texture.
- In short, the manner in which the surface-envisioning image VS is drawn can be changed as appropriate.
- Further, the angle of the screen 30 may be made adjustable by an actuator (not shown), and the virtual surface A itself may be controlled so as to be parallel to a part of the road surface R ahead.
- the display unit for displaying the display image that is the source of the display light L is not limited to the combination of the display 10 made of a reflective display device such as DMD and the screen 30.
- the display unit may be configured of a liquid crystal display, an organic EL (Electro Luminescence) display, or the like.
- The projection target (light-transmitting member) of the display light L is not limited to the windshield 2 of the vehicle 1, and may be a combiner constituted by a plate-like half mirror, a hologram element, or the like.
- the type of the vehicle 1 on which the HUD device 100 is mounted is not limited, and the invention can be applied to various vehicles such as a four-wheeled motor vehicle and a two-wheeled motor vehicle.
- As described above, the HUD device 100 is mounted on the vehicle 1, and displays an image as a virtual image V on the virtual surface A set in front of the light-transmitting member by projecting display light L representing the image onto the light-transmitting member (windshield 2).
- the HUD device 100 includes a display unit (for example, the display 10) that emits display light L, and a control unit 61 that controls an image displayed on the virtual surface A by controlling the operation of the display unit.
- The virtual surface A is set to be inclined forward with respect to the vertical direction of the vehicle 1, and the image displayed on the virtual surface A includes the surface-envisioning image VS, which evokes a surface by a combination of linear or dot-like image elements E.
- The control unit 61 acquires the traveling speed of the vehicle 1 and displays the surface-envisioning image VS in a moving-image mode in which the image elements E move in accordance with the acquired traveling speed. Since this is done, as described above, it is possible to perform display that evokes the behavior of the vehicle while securing the visibility of the real scene. Further, since the virtual surface A on which the virtual image V is displayed is set to be inclined forward with respect to the vertical direction of the vehicle 1, compared to a case in which the virtual surface A is set to face the user 4 along the vertical direction, the virtual image V can be displayed without making the user 4 feel that it obstructs the real scene as much as possible.
- The image displayed on the virtual surface A includes notification images (the first notification image V1 and the second notification image V2) that do not move according to the traveling speed.
- The notification images include the second notification image V2 as a pseudo-standing image that is visually recognized as if rising up from the surface-envisioning image VS.
- Since this is done, the first notification image V1 and the second notification image V2, which do not move according to the traveling speed, are visually recognized with the surface-envisioning image VS, which makes the user feel motion, as a reference; thus, the user 4 can grasp the first notification image V1 and the second notification image V2 well while being made to feel the behavior of the vehicle.
- The control unit 61 moves the image elements E even when the vehicle 1 is stopped. As a result, even when the vehicle 1 is stopped, motion can be felt in the surface-envisioning image VS, and the user 4 can be made to envision a surface without being drawn to the front object as much as possible.
- The control unit 61 displays the surface-envisioning image VS so as to follow the road surface R that is seen ahead through the light-transmitting member (display in the normal mode). Thereby, display using AR (Augmented Reality) is possible.
- Further, the control unit 61 can display the surface-envisioning image VS in a first mode (the normal mode) in which the surface-envisioning image VS is displayed along the road surface R, and in a second mode (the special mode) different from the first mode.
- In the first mode, the image elements E move in the direction from the predetermined vanishing point P toward the user 4, and in the second mode, the image elements E move in a direction different from that in the first mode. Since this is done, various kinds of notification can be performed with interesting effects added.
- When the traveling speed exceeds the threshold, the control unit 61 performs at least one of the following controls: lowering the visibility of the surface-envisioning image VS compared to when the traveling speed is equal to or less than the threshold, and reducing the number of image elements E included in the surface-envisioning image VS. Thereby, the visibility of the real scene during high-speed traveling can be secured.
- 1: vehicle, 2: windshield, 3: dashboard, 4: user, 86: ECU, L: display light, A: virtual surface, V: virtual image, VS, VS1, VS2: surface-envisioning image, E: image element, G: virtual grid, P: vanishing point, V1: first notification image, V2: second notification image, R: road surface, θ: inclination angle, θ1: relative angle, φ: vehicle inclination angle
Landscapes
- Engineering & Computer Science (AREA)
- Chemical & Material Sciences (AREA)
- Combustion & Propulsion (AREA)
- Transportation (AREA)
- Mechanical Engineering (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Optics & Photonics (AREA)
- Instrument Panels (AREA)
- Controls And Circuits For Display Device (AREA)
Abstract
Provided is a head-up display (HUD) device that can provide a display which enables envisioning of the behavior of a vehicle while ensuring visibility of the actual view. A HUD device 100 is mounted on a vehicle 1, and displays an image as a virtual image V on a virtual surface A which is set in front of a windshield 2 by projecting display light L representing the image onto the windshield 2. The HUD device 100 is provided with: a display unit that emits the display light L; and a control unit that controls the image displayed on the virtual surface A by controlling an operation of the display unit. The virtual surface A is set so as to incline forward with respect to the vertical direction of the vehicle 1, and the image displayed on the virtual surface A includes a surface-envisioning image that causes a surface to be envisioned by combining linear or dot-like image elements. The control unit acquires the traveling speed of the vehicle 1, and displays the surface-envisioning image in a moving-image mode where the image elements move in accordance with the acquired traveling speed.
Description
本発明は、ヘッドアップディスプレイ装置に関する。
The present invention relates to a head-up display device.
車両のフロントガラス等の透光部材に画像を表す表示光を投射することで、透光部材の前方に画像を虚像として表示するヘッドアップディスプレイ(HUD:Head-Up Display)装置が、例えば、特許文献1に開示されている。
A head-up display (HUD: Head-Up Display) device that displays an image as a virtual image in front of a light-transmitting member, such as the windshield of a vehicle, by projecting display light representing the image onto the light-transmitting member is disclosed, for example, in Patent Document 1.
特許文献1は、図4(a)においてHUD装置について開示しているが、主には、車内に設けられたディスプレイの表示窓に直接、車速に応じて動く縞パターンを表示することで、車両の挙動を運転者の視覚を介して伝達する表示装置について開示している。
Although Patent Document 1 discloses a HUD device in FIG. 4(a), it mainly discloses a display device that conveys the behavior of the vehicle to the driver visually by directly displaying, on the display window of a display provided in the vehicle, a stripe pattern that moves according to the vehicle speed.
HUD装置では、透光部材越しに見える道路などの景色(実景)に重ねて画像(虚像)を表示するため、特許文献1のように主に実画面による表示を想定したパターンを車速に応じて動かすだけでは、実景の視認性を妨げる虞がある。
Since a HUD device displays an image (virtual image) superimposed on a scene (real scene), such as a road, viewed through the light-transmitting member, simply moving, according to the vehicle speed, a pattern that mainly assumes display on a real screen as in Patent Document 1 may interfere with the visibility of the real scene.
本発明は、上記実情に鑑みてなされたものであり、実景の視認性を確保しつつも、車両の挙動を想起させる表示を行うことができるヘッドアップディスプレイ装置を提供することを目的とする。
The present invention has been made in view of the above circumstances, and an object thereof is to provide a head-up display device capable of performing display that evokes the behavior of a vehicle while securing the visibility of a real scene.
上記目的を達成するため、本発明に係るヘッドアップディスプレイ装置は、
車両に搭載され、透光部材に画像を表す表示光を投射することで、前記透光部材の前方に設定された仮想面に前記画像を虚像として表示するヘッドアップディスプレイ装置であって、
前記表示光を発する表示部と、
前記表示部の動作を制御することで、前記仮想面に表示される前記画像を制御する制御部と、を備え、
前記仮想面は、前記車両の上下方向に対して前方に傾いて設定され、
前記仮想面に表示される前記画像は、線状又は点状の画像要素の組み合わせにより面を想起させる面想起画像を含み、
前記制御部は、前記車両の走行速度を取得し、
前記面想起画像を、取得した前記走行速度に応じて前記画像要素が移動する動画態様で表示する。 In order to achieve the above object, a head-up display device according to the present invention is:
A head-up display device mounted on a vehicle and displaying the image as a virtual image on a virtual surface set in front of the light transmitting member by projecting display light representing the image onto the light transmitting member.
A display unit that emits the display light;
A control unit configured to control the image displayed on the virtual surface by controlling the operation of the display unit;
The virtual plane is set to be inclined forward with respect to the vertical direction of the vehicle,
The image displayed on the virtual surface includes a surface-envisioning image that evokes a surface by a combination of linear or dot-like image elements,
The control unit acquires a traveling speed of the vehicle.
The surface-envisioning image is displayed in a moving-image mode in which the image elements move in accordance with the acquired traveling speed.
本発明によれば、実景の視認性を確保しつつも、車両の挙動を想起させる表示を行うことができる。
According to the present invention, it is possible to perform display that evokes the behavior of a vehicle while securing the visibility of a real scene.
本発明の一実施形態に係るヘッドアップディスプレイ装置について図面を参照して説明する。
A head-up display device according to an embodiment of the present invention will be described with reference to the drawings.
本実施形態に係るヘッドアップディスプレイ装置(HUD:Head-Up Display)装置100は、図1に示すように、例えば、車両1のダッシュボード3内に配設される。
A head-up display (HUD: Head-Up Display) device 100 according to the present embodiment is disposed, for example, in the dashboard 3 of the vehicle 1, as shown in FIG. 1.
HUD装置100は、コンバイナ処理されたフロントガラス2に向けて表示光Lを射出する。フロントガラス2で反射した表示光Lは、ユーザ4(主に車両1の運転者)側へと向かう。ユーザ4は、視点をアイボックスEb内におくことで、フロントガラス2の前方に、表示光Lが表す画像を虚像Vとして視認することができる。つまり、HUD装置100は、フロントガラス2の前方に虚像Vを表示する。これにより、ユーザ4は、虚像Vを風景と重畳させて観察することができる。虚像Vは、車両1に関する各種情報(以下、車両情報と言う。)を表示する。なお、車両情報は、車両1自体の情報のみならず、車両1の外部情報も含む。
The HUD device 100 emits display light L toward the combined windshield 2. The display light L reflected by the windshield 2 travels toward the user 4 (mainly the driver of the vehicle 1). The user 4 can visually recognize the image represented by the display light L as a virtual image V in front of the windshield 2 by placing the viewpoint in the eye box Eb. That is, the HUD device 100 displays the virtual image V in front of the windshield 2. Thus, the user 4 can observe the virtual image V superimposed on the landscape. The virtual image V displays various information on the vehicle 1 (hereinafter referred to as vehicle information). The vehicle information includes not only the information of the vehicle 1 itself but also the external information of the vehicle 1.
本実施形態に係るHUD装置100は、図1に示すように、フロントガラス2の前方に設定されるとともに、車両1の上下方向に対して前方に傾いて設定された仮想面Aに虚像Vを表示する。なお、仮想面Aは、後述のスクリーン30の表示面31に対応し、虚像Vの表示可能領域となる。仮想面Aは、その法線方向から見て矩形状となる。例えば、仮想面Aは、車両1に最も近い一端が、車両1から5m程度(例えば6m)の位置に設定され、車両1から最も遠い他端が、車両1から10m程度(例えば12m)の位置に設定される。仮想面A及びアイボックスEbは、後述のスクリーン30の大きさや、HUD装置100内の各種の鏡や、コンバイナ処理されたフロントガラス2によって構成される光学系に基づいて予め設定される。
As shown in FIG. 1, the HUD device 100 according to the present embodiment displays the virtual image V on a virtual surface A that is set in front of the windshield 2 and is inclined forward with respect to the vertical direction of the vehicle 1. The virtual surface A corresponds to the display surface 31 of the screen 30 described later, and is the displayable area of the virtual image V. The virtual surface A is rectangular when viewed from its normal direction. For example, the end of the virtual surface A closest to the vehicle 1 is set at a position about 5 m (for example, 6 m) from the vehicle 1, and the end farthest from the vehicle 1 is set at a position about 10 m (for example, 12 m) from the vehicle 1. The virtual surface A and the eye box Eb are preset based on the size of the screen 30 described later, the various mirrors in the HUD device 100, and the optical system constituted by the combiner-processed windshield 2.
HUD装置100は、図2及び図3に示すように、表示器10と、第1~第3平面鏡21~23と、スクリーン30と、凹面鏡40と、筐体50と、制御装置60と、を備える。まず、図2に示した各部について説明する。
As shown in FIGS. 2 and 3, the HUD device 100 includes a display 10, first to third plane mirrors 21 to 23, a screen 30, a concave mirror 40, a housing 50, and a control device 60. First, each part shown in FIG. 2 will be described.
表示器10は、表示光Lを生成・射出するものであり、例えば、DMD(Digital Micro mirror Device)やLCOS(Liquid Crystal On Silicon)などの反射型表示デバイスを用いたプロジェクタなどからなる。表示器10は、制御装置60の制御の下で生成した表示光Lを第1平面鏡21に向けて射出する。
The display 10 generates and emits the display light L, and is composed of, for example, a projector using a reflective display device such as a DMD (Digital Micromirror Device) or LCOS (Liquid Crystal On Silicon). The display 10 emits the display light L generated under the control of the control device 60 toward the first plane mirror 21.
第1平面鏡21は、例えばコールドミラーからなり、表示器10からの表示光Lの光路上に傾いて配置される。表示器10からの表示光Lは、第1平面鏡21で反射し、スクリーン30へと向かう。
The first plane mirror 21 is, for example, a cold mirror, and is disposed obliquely on the light path of the display light L from the display 10. The display light L from the display 10 is reflected by the first plane mirror 21 and travels to the screen 30.
The screen 30 is composed of, for example, a transmissive screen such as a holographic diffuser, a microlens array, or a diffusion plate. The screen 30 displays the image carried by the display light L on its display surface 31, which is the side opposite the surface that receives the display light L. The display light L corresponding to the image displayed on the display surface 31 is thereby emitted toward the second plane mirror 22.
In this embodiment, by tilting the screen 30 forward with respect to the vertical direction of the vehicle 1, the virtual surface A serving as the display surface of the virtual image V is set inclined forward with respect to the vertical direction of the vehicle 1. Any known technique may be adopted to set the virtual surface A in this way; for example, an inclined virtual surface A may also be realized by adjusting the inclination or curvature of a reflecting element located on the optical path of the display light L.
The second plane mirror 22 reflects the display light L from the screen 30 toward the third plane mirror 23, and the third plane mirror 23 reflects the display light L from the second plane mirror 22 toward the concave mirror 40. Each of the second plane mirror 22 and the third plane mirror 23 is composed of, for example, a cold mirror. Although this embodiment uses three plane mirrors, the first to third plane mirrors 21 to 23, to fold the optical path of the display light L, one or more plane mirrors suffice. The number of plane mirrors and the way the optical path of the display light L is folded can be changed as appropriate according to the design.
The concave mirror 40 reflects the display light L from the third plane mirror 23 toward the windshield 2 while magnifying it. The virtual image V visually recognized by the user 4 is therefore a magnified version of the image displayed on the screen 30.
The housing 50 is formed in a light-shielding box shape from synthetic resin or metal. The housing 50 has an opening that secures the optical path of the display light L, and a translucent cover 51 is attached so as to close this opening. The translucent cover 51 is formed of a translucent resin such as acrylic.
The display light L reflected by the concave mirror 40 passes through the translucent cover 51 and travels to the windshield 2. In this way, the display light L is emitted from the HUD device 100 toward the windshield 2. This display light L is reflected by the windshield 2, so that the virtual image V is displayed in front of the windshield 2 as seen by the user 4.
The concave mirror may be mounted so that it can be rotated or translated by an actuator (not shown). For example, the concave mirror may be rotatable clockwise and counterclockwise in FIG. 2, so that rotating it to change the reflection angle of the display light L adjusts the display position (height) of the virtual image V. This adjustment may be performed under the control of the control device 60 in response to a user operation on an operation unit (not shown) or to the viewer's viewpoint position detected by viewpoint detection means (not shown).
Next, the control configuration of the HUD device 100 will be described, mainly with reference to FIG. 3.
The control device 60 controls the overall operation of the HUD device 100 and includes a control unit 61, a storage medium 62, and an I/F (interface) 63.
The control device 60 can communicate with various systems in the vehicle 1 through the I/F 63, for example over a CAN (Controller Area Network). A power supply is connected to the HUD device 100; for example, operating power is supplied to the control device 60 when the ignition of the vehicle 1 is turned on.
The control unit 61 is composed of a microcomputer and includes a ROM (Read Only Memory) storing an operation program and various image data, a RAM (Random Access Memory) that temporarily stores various calculation results, a CPU (Central Processing Unit) that executes the operation program stored in the ROM, a GPU (Graphics Processing Unit) that performs image processing in cooperation with the CPU, and a drive circuit that drives the display 10 under the control of the CPU and GPU. In particular, the ROM stores an operation program for executing the display control process described later. Part of the control unit 61 may be constituted by a dedicated circuit such as an ASIC (Application Specific Integrated Circuit).
The storage medium 62 is composed of an SSD (Solid State Drive), HDD (Hard Disk Drive), DVD-ROM, CD-ROM, or the like. The storage medium 62 stores digital map data consisting of map information and three-dimensional coordinate information (three-dimensional information) representing roadway shapes. The digital map data holds, for each predetermined position (predetermined latitude and longitude) on a roadway, various data such as shape data indicating the shape of the roadway, height data indicating its reference height (for example, above sea level), grade data indicating its longitudinal inclination (grade), inclination data indicating its inclination in the width direction, curvature data indicating its curvature, and speed-limit data indicating its speed limit. The digital map data is used when the control unit 61 executes drawing processes.
The CPU of the control unit 61 cooperates with the GPU to perform display control of the display 10 (control of the generation of the display light L) based on the various image data stored in the ROM and the digital map data stored in the storage medium 62. The GPU determines the details of the display operation of the display 10 based on display control commands from the CPU. For example, the GPU performs control for executing various displays, such as determining the switching timing of the image displayed on the screen 30 by the display light L from the display 10. In this way, the control unit 61 performs display control of the virtual image V. A layer is assigned in advance to each image constituting the virtual image V, which allows the control unit 61 to control the display of each image individually.
The I/F 63 is a circuit for electrically connecting the control unit 61 to each of the operation unit 70, a vehicle speed sensor 81, a host-vehicle attitude detection unit 82, a GPS (Global Positioning System) device 83, a wireless communication unit 84, a forward-situation detection unit 85, and an ECU (Electronic Control Unit) 86, all disposed in the vehicle 1.
This completes the description of the configuration of the HUD device 100. Next, the various components that communicate with the control device 60 of the HUD device 100 will be described. Within the vehicle 1, the HUD device 100 and the following components constitute a vehicular display system.
The operation unit 70 receives various operations from the user 4 and supplies signals indicating the received operations to the control unit 61. For example, the operation unit 70 receives an operation by the user 4 to enlarge or reduce the first notification image V1, which shows map information and the like as described later, and an operation to switch the display mode of the virtual image V.
The vehicle speed sensor 81 detects the traveling speed (vehicle speed) of the vehicle 1 and outputs a signal corresponding to the vehicle speed to the control unit 61. The vehicle speed sensor 81 is composed of, for example, a Hall element that detects a target rotating in synchronization with a wheel (for example, gear teeth or metal projections), and supplies the control unit 61 with a vehicle speed signal whose frequency corresponds to the vehicle speed. The control unit 61 performs A/D (analog-to-digital) conversion on the acquired vehicle speed signal and calculates the vehicle speed from the frequency of the signal.
The host-vehicle attitude detection unit 82 detects the attitude of the vehicle 1 (hereinafter also referred to as the "host vehicle 1") and is composed of, for example, a gyro sensor. The gyro sensor detects the heading (traveling direction) of the host vehicle 1 and the host-vehicle grade angle θ, and outputs signals indicating the detection results to the control unit 61. As shown in FIG. 4, the host-vehicle grade angle θ is the angle between the horizontal plane H and the host vehicle 1. As an example, the positive direction of the grade angle θ is clockwise in that figure (the same applies to the grade angle γ described later). The host-vehicle grade angle θ can also be calculated from position information from the GPS device 83 described later and the digital map data in the storage medium 62. The host-vehicle attitude detection unit 82 may also include a steering angle sensor or a yaw rate sensor.
The GPS device 83 determines the latitude and longitude of the current position of the host vehicle 1. It includes a GPS receiving antenna and an amplifier circuit, and outputs to the control unit 61 an amplified high-frequency signal derived from the radio waves carrying position information that the antenna receives from satellites. Based on the position information from the GPS device 83, the control unit 61 reads map information near the current position, roadway shape data, and the like from the storage medium 62, and also functions as a car navigation controller that determines a guidance route to a destination set by the user 4 (typically the driver).
The wireless communication unit 84 includes an antenna, a high-frequency circuit, and the like, and performs road-to-vehicle communication. Via roadside wireless devices installed as infrastructure, the wireless communication unit 84 receives road information (including grade information indicating the grade angles of various roads, as well as information on speed limits, lanes, road widths, intersections, curves, branch roads, and so on) and outputs it to the control unit 61. For example, the wireless communication unit 84 acquires road information from a traffic-control base station (for example, a base station of Driving Safety Support Systems (DSSS)) via a roadside wireless device. From the grade information acquired through the wireless communication unit 84, the control unit 61 can determine the grade angle γ of the road surface R. As shown in FIG. 4, the grade angle γ is the angle between the horizontal plane H and the road surface R. The road surface R is the roadway ahead of the vehicle 1 and, as shown in FIG. 5(a), is at least visible to the user 4 through the windshield 2.
The forward-situation detection unit 85 is composed of, for example, imaging means (such as a stereo camera) that captures the scenery ahead of the host vehicle 1 (including the road surface R), an image analysis unit that analyzes the images captured by the imaging means, and a distance sensor that measures the distance to a subject. The forward-situation detection unit 85 detects various targets ahead of the host vehicle 1 by analyzing the captured images with a known method such as pattern matching. These targets include information on objects on the road surface R (preceding vehicles and obstacles) and road shape information (including the inclination of the road surface R, as well as information on lanes, road widths, intersections, curves, branch roads, and so on). The forward-situation detection unit 85 may also include a sonar, an ultrasonic sensor, a millimeter-wave radar, or the like.
The ECU 86 controls each part of the vehicle 1; in this embodiment, in particular, it switches the vehicle 1 between an automatic driving mode and a manual driving mode. The ECU 86 outputs to the control unit 61 driving-mode information indicating whether the vehicle 1 is currently in the manual driving mode or the automatic driving mode. For example, the automation level when the vehicle 1 is set to the manual driving mode is level 0 or level 1. At level 0, the driver performs all of the main control operations (acceleration, steering, and braking). At level 1, the system assists with any one of acceleration, steering, and braking. The automation level when the vehicle 1 is set to the automatic driving mode is, for example, level 3 or higher. At level 3, the system performs acceleration, steering, and braking only in limited environments or traffic conditions, and the driver takes over when the system so requests.
In order to display the constituent images of the virtual image V (described later) appropriately, the control unit 61 first identifies, based on the position information from the GPS device 83 and the data stored in the storage medium 62, the shape of the road surface R on the road ahead at the position where the virtual surface A is set (for example, several meters to a dozen or so meters from the host vehicle 1). The control unit 61 also identifies the grade angle γ of the identified road surface R based on grade data obtainable from the position information from the GPS device 83 and the data stored in the storage medium 62, on the grade information acquired from the wireless communication unit 84, and so on. Furthermore, the control unit 61 identifies the host-vehicle grade angle θ based on the detection signal from the host-vehicle attitude detection unit 82. The control unit 61 then calculates (estimates) the relative angle γ1 of the road surface R ahead with respect to the host vehicle 1 by subtracting the host-vehicle grade angle θ from the grade angle γ of the road surface R. Alternatively, the control unit 61 may calculate (estimate) the inclination of the road surface R as seen from the host vehicle 1, which corresponds to the relative angle γ1, based on information from the forward-situation detection unit 85. Because the inclination of the road surface R detected by the forward-situation detection unit 85 is based on images captured by imaging means mounted on the host vehicle 1, it corresponds to the relative angle γ1.
Next, the virtual image V displayed on the virtual surface A as the control device 60 of the HUD device 100 controls the operation of the display 10 will be described.
As shown in FIGS. 5(a) and 5(b), the virtual image V includes a surface-evoking image VS, a first notification image V1, and a second notification image V2.
The virtual image V shown in FIGS. 5(a) and 5(b) is depicted as seen by the user 4 (the driver) seated in the driver's seat of the vehicle 1 (the same applies to FIGS. 6, 7, and 9 described later). Because the virtual surface A is set obliquely as described above, in order for the user 4 to perceive the virtual image V as in FIGS. 5(a) and 5(b), the control unit 61 controls the display of the virtual image V taking into account the projection, onto the virtual surface A, of the image that is to be perceived from the viewpoint of the user 4. For example, to make the user 4 perceive a rectangular virtual image V facing the user squarely, the trapezoidal virtual image V obtained by projecting that rectangle from the viewpoint of the user 4 onto the virtual surface A is displayed. Each of the images constituting the virtual image V described below is display-controlled taking into account that the virtual surface A is oblique in this way. As the viewpoint position of the user 4, the control unit 61 may use an assumed viewpoint position stored in advance in the ROM, or may determine it as appropriate from detection signals of viewpoint detection means (not shown; for example, a camera that images the user 4).
The surface-evoking image VS is an image that evokes a surface for the user 4 through a combination of the linear image elements E shown in FIG. 5(b); as shown in FIG. 5(a), it is perceived by the user 4 as lying along the road surface R ahead of the vehicle 1.
As described above, based on the identifiable shape of the road surface R and the calculable grade angle γ and relative angle γ1 of the road surface R, the control unit 61 controls the display 10 so that the surface-evoking image VS is perceived by the user 4 as roughly parallel (including exactly parallel) to part of the road surface R.
The image elements E are drawn, for example, according to a virtual grid G shown in FIG. 6(a). The virtual grid G is generated so as to follow the road surface R ahead, identified as described above.
The virtual grid G is constructed with perspective in mind, for example by one-point perspective, as a combination of a plurality of lines running from a set vanishing point P toward the user 4 and a plurality of lines extending in the left-right direction. The vanishing point P may be set in advance, taking into account the relationship between the viewpoint position of the user 4 and the virtual surface A, and stored in the ROM; alternatively, it may be calculated from the results of image analysis by the forward-situation detection unit 85 that identifies the horizon in the forward scenery or the far end of the visible roadway. The arrangement interval of the left-right lines of the virtual grid G becomes shorter the closer they are to the vanishing point P from the user 4's side. In other words, the pattern of the virtual grid G is set with the "texture gradient" in mind, whereby texture becomes finer with distance. The virtual grid G is not itself displayed as part of the virtual image V; it is used by the control unit 61 to draw the surface-evoking image VS.
In this embodiment, as shown in FIG. 6(b), the control unit 61 draws the linear image elements E so that they lie on the lines of the virtual grid G that run from the vanishing point P toward the user 4. In addition, taking into account the arrangement interval of the left-right lines of the virtual grid G, the linear image elements E are drawn shorter the closer they are to the vanishing point P. As a result, the surface-evoking image VS composed of the plurality of linear image elements E can evoke for the user 4 a surface with a sense of perspective, as shown in FIG. 6(b).
The control unit 61 also displays the surface-evoking image VS as an animation in which the image elements E move toward the user 4 at a rate corresponding to the traveling speed (vehicle speed) of the vehicle 1. The behavior of the vehicle 1 can thereby be conveyed to the user 4 visually. Moreover, because the surface-evoking image VS is composed of linear (or, as described later, dot-like) image elements E, the visibility of the real scene is preserved.
As shown in FIGS. 5(a) and 5(b), the first notification image V1 shows map information near the current position of the vehicle 1 and a guidance route. Based on the position information from the GPS device 83 and the digital map data in the storage medium 62, the control unit 61 controls the display of the first notification image V1 showing the map information near the current position of the vehicle 1 and the guidance route. The first notification image V1 is displayed along the virtual grid G generated as described above, that is, along the surface-evoking image VS. Unlike the surface-evoking image VS, the first notification image V1 as a whole does not move according to the traveling speed (vehicle speed). However, if the first notification image V1 includes a host-vehicle image representing the current position of the host vehicle 1, that host-vehicle image may be moved according to the vehicle speed.
The first notification image V1 can be displayed enlarged, under the control of the control unit 61, in response to an enlargement operation by the user 4 on the operation unit 70. In step with the enlarged display of the first notification image V1, the control unit 61 widens the left-right arrangement pitch of the image elements E constituting the surface-evoking image VS. This makes the first notification image V1 feel as if it were enlarged together with the surface-evoking image VS. Likewise, the first notification image V1 can be displayed reduced, under the control of the control unit 61, in response to a reduction operation by the user 4 on the operation unit 70. In step with the reduced display of the first notification image V1, the control unit 61 narrows the left-right arrangement pitch of the image elements E constituting the surface-evoking image VS, making the first notification image V1 feel as if it were reduced together with the surface-evoking image VS. The control unit 61 may also change the arrangement pitch or length of the lines running from the vanishing point P toward the user 4 in step with the enlargement or reduction of the first notification image V1.
As shown in FIGS. 5(a) and 5(b), the second notification image V2 is, as an example, an image for notifying the user of the speed limit of the roadway the vehicle 1 is traveling on. The control unit 61 controls the display of the second notification image V2 based on the position information from the GPS device 83, the digital map data in the storage medium 62, and the speed-limit data received by the wireless communication unit 84. Although the second notification image V2 is displayed on the virtual surface A, the control unit 61 controls its display as a pseudo-upright image that, as seen by the user 4, appears to stand up from the surface-evoking image VS. As an example, the second notification image V2 is perceived by the user 4 as facing the user almost squarely. Like the first notification image V1, and unlike the surface-evoking image VS, the second notification image V2 as a whole does not move according to the traveling speed (vehicle speed). The second notification image V2 may also be, for example, a numeric display of the vehicle speed detected by the vehicle speed sensor 81. Such a vehicle-speed display naturally changes with the vehicle speed, but it still does not move with the vehicle speed.
Although the example shown in FIGS. 5(a) and 5(b) displays the surface-evoking image VS even within the display area of the first notification image V1, in that area the display luminance or the like of the surface-evoking image VS may be lowered relative to other display areas to make it unobtrusive, or the surface-evoking image VS may be hidden. The display priority of the surface-evoking image VS, the first notification image V1, and the second notification image V2 may also be determined in advance by their layers. For example, in descending order of display priority (highest layer first), the order may be the second notification image V2, the first notification image V1, and the surface-evoking image VS, giving the surface-evoking image VS the lowest display priority.
Here, the virtual image V displayed by the HUD device 100 has the characteristic that its appearance changes depending on what it is superimposed on. For example, when there is a building, a preceding vehicle, or the like ahead (hereinafter, a forward object), the virtual image V may feel to the user 4 as if it had risen up slightly when the user 4 focuses on that object. Therefore, when only the first notification image V1 and the second notification image V2 are displayed as the virtual image V, their appearance may change in this way. On the other hand, humans react sensitively to images that convey motion. With the surface-evoking image VS, which can be animated according to the vehicle speed, the user 4 can therefore perceive a surface while being drawn to forward objects as little as possible. By having the user 4 treat the motion-conveying surface-evoking image VS as a reference surface and perceive the first notification image V1 and the second notification image V2 against that reference surface, the HUD device 100 lets the user 4 sense the behavior of the vehicle 1 while still viewing the first notification image V1 and the second notification image V2 clearly.
Further, at a predetermined switching trigger, the control unit 61 switches the display mode of the virtual image V from the mode shown in FIGS. 5A and 5B, in which the surface-evoking image VS is displayed in VR (Virtual Reality) fashion along the road surface R (hereinafter referred to as the "normal mode"), to a special mode. In the special mode, as shown in FIG. 7B, the first notification image V1 represents map information and a guide route in plan view, and the surface-evoking image VS changes correspondingly so as to convey a plan view. Specifically, as shown in FIG. 7A, the control unit 61 generates the virtual grid G such that its vertical lines and its horizontal lines are orthogonal to each other. Then, as shown in FIG. 7A, the control unit 61 draws linear image elements E positioned on a plurality of the vertically extending lines of the virtual grid G. In the special mode, which relates to plan view, there is no need to create a sense of perspective, so the control unit 61 draws the surface-evoking image VS without considering the vanishing point P or the "texture gradient."
The trigger for switching from the normal mode to the special mode may be, for example, that the control unit 61 receives a display-mode switching operation from the user 4 via the operation unit 70, or that it receives, from the ECU 86, driving-mode information indicating that the vehicle 1 has entered the automatic driving mode. The trigger for switching from the special mode back to the normal mode may be, for example, that the control unit 61 receives a display-mode switching operation from the user 4 via the operation unit 70, or that it receives, from the ECU 86, driving-mode information indicating that the vehicle 1 has entered the manual driving mode. In addition, when the control unit 61 identifies a probability of danger from the forward situation detected by the forward situation detection unit 85, it may switch the virtual image V from the special mode to the normal mode, or may end the display in either mode (for example, by erasing the first notification image V1 or the surface-evoking image VS).
Also in the special mode, the control unit 61 displays the surface-evoking image VS in a moving-image manner in which the image elements E move toward the user 4 (from top to bottom in FIGS. 7A and 7B) according to the vehicle speed. Although not illustrated, the second notification image V2 may also be displayed at an appropriate position in the special mode.
The special mode is not limited to representing the first notification image V1 and the surface-evoking image VS in plan view. The first notification image V1 and the surface-evoking image VS may also be represented in a bird's-eye view.
For example, the control unit 61 may use a preset, variable virtual viewpoint (a viewpoint located in a virtual sky) and compute the projection from the virtual viewpoint onto the map plane based on the virtual viewpoint and the digital map data stored in the storage medium 62, thereby generating a bird's-eye view for the first notification image V1. In this case, a bird's-eye-style surface-evoking image VS corresponding to the generated bird's-eye view of the first notification image V1 may be generated. In the virtual grid G used to draw the bird's-eye-style surface-evoking image VS, the vanishing point P may be set either inside or outside the display area. In either case, the control unit 61 generates a virtual grid G composed of a combination of a plurality of lines extending from the vanishing point P toward the virtual viewpoint and a plurality of lines intersecting those lines, and draws the surface-evoking image VS with linear image elements E positioned on the lines of the generated virtual grid G that extend from the vanishing point P toward the virtual viewpoint. In this case as well, the pattern of the virtual grid G may be set in consideration of the "texture gradient," in which the texture becomes finer with distance (the closer to the vanishing point P, the finer the texture). Also in the bird's-eye style, the control unit 61 may display the surface-evoking image VS in a moving-image manner in which the image elements E move from the vanishing-point-P side in a predetermined direction according to the vehicle speed.
In both the plan-view and bird's-eye-view special modes, the control unit 61 may vary the arrangement pitch and the length of the linear image elements E in step with the enlargement or reduction of the first notification image V1, so that the first notification image V1 feels as if it were enlarged or reduced in conjunction with the surface-evoking image VS.
Next, an example of the display control process executed by the control unit 61 of the HUD device 100 will be described with reference to FIG. 8. The display control process is repeatedly executed at a predetermined cycle, for example, while the ignition of the vehicle 1 is on. As an example, the display control process for the virtual image V in the normal mode shown in FIGS. 5A and 5B is described below.
(Display control process)
When the display control process starts, the control unit 61 acquires the information necessary for image generation (step S1). Specifically, the control unit 61 acquires the position coordinates of the vehicle 1 from the GPS device 83. The control unit 61 also calculates the traveling direction of the vehicle 1 based on the change over time of the position coordinates acquired from the GPS device 83 and on the signal from the gyro sensor of the vehicle attitude detection unit 82. Further, the control unit 61 identifies the shape of the road surface R at the set position of the virtual plane A relative to the vehicle 1, based on the position information from the GPS device 83 and the data stored in the storage medium 62. As described above, the control unit 61 also calculates and acquires the gradient angle γ of the road surface, the vehicle gradient angle θ, and the relative angle γ1 of the road surface R with respect to the vehicle 1. In addition, the control unit 61 either acquires the vanishing point P needed to compute the virtual grid G from the ROM, or calculates it based on image analysis by the forward situation detection unit 85.
Next, the control unit 61 generates the surface-evoking image VS, the first notification image V1, and the second notification image V2, and drives and controls the display 10 so that the virtual image V composed of these images is displayed on the virtual plane A (step S2). Specifically, the control unit 61 computes the virtual grid G along the road surface R based on the shape of the road surface R ahead and the relative angle γ1 acquired in step S1, and generates the surface-evoking image VS by drawing the linear image elements E on that virtual grid G. Based on the position information from the GPS device 83 and the digital map data in the storage medium 62, the control unit 61 also generates the first notification image V1, which shows map information around the current position of the vehicle 1 and the guide route. Further, the control unit 61 controls the display of the second notification image V2 based on the position information from the GPS device 83, the digital map data in the storage medium 62, and the data indicating the speed limit received by the wireless communication unit 84.
Next, the control unit 61 acquires the vehicle speed from the vehicle speed sensor 81 (step S3) and determines, based on the acquired vehicle speed, whether the vehicle 1 is stopped (step S4). Whether the vehicle 1 is stopped may also be determined based on brake information obtainable from the ECU 86, or on the change over time of the position coordinates acquired from the GPS device 83.
When it is determined in step S4 that the vehicle 1 is not stopped (step S4; No), the control unit 61 calculates the movement speed of the image elements E for displaying the surface-evoking image VS in a moving-image manner (step S5). For example, the control unit 61 refers to table data stored in advance in the ROM that associates vehicle speeds with movement speeds of the image elements E, and acquires the movement speed corresponding to the vehicle speed acquired in step S3. Alternatively, the vehicle speed acquired in step S3 may be used directly as the movement speed, or the movement speed of the image elements E may be calculated by multiplying the vehicle speed acquired in step S3 by a predetermined coefficient α (where either 0 < α < 1 or α > 1 is acceptable).
Next, the control unit 61 drives and controls the display 10 to display the surface-evoking image VS in a moving-image manner in which the image elements E move toward the user 4 at the movement speed calculated in step S5 (step S6). That is, the surface-evoking image VS is displayed in a moving-image manner in which the image elements E move according to the vehicle speed of the vehicle 1.
When it is determined in step S4 that the vehicle 1 is stopped (step S4; Yes), the control unit 61 displays the surface-evoking image VS in a moving-image manner in which the image elements E move toward the user 4 at a set speed stored in the ROM in advance (step S8). As a result, the surface-evoking image VS conveys motion even while the vehicle 1 is stopped, and the user 4 can perceive a surface without being drawn to the forward object as much as possible. After executing step S8, the control unit 61 returns to step S1 and executes the process again.
The movement speed of the image elements E while the vehicle is stopped may instead be the movement speed immediately before the stop (that is, the movement speed calculated in step S5 immediately before Yes is determined in step S4). Also, while the vehicle is stopped, the image elements E may be swung or blinked rather than moved toward the user 4.
Following step S6, the control unit 61 determines whether the vehicle speed acquired in step S3 exceeds a threshold stored in advance in the ROM (step S7). This threshold is used to determine whether the vehicle is traveling at high speed. If the vehicle speed is at or below the threshold (step S7; No), the control unit 61 returns to step S1 and executes the process again.
On the other hand, when the vehicle speed exceeds the threshold (step S7; Yes), that is, when the vehicle can be regarded as traveling at a predetermined high speed, the control unit 61 executes a visibility-securing process (step S9). The visibility-securing process suppresses the annoyance that the user 4 would otherwise experience from the image elements E moving at high speed in proportion to the vehicle speed. As the visibility-securing process, the control unit 61 performs at least one of the following controls: lowering the visibility of the surface-evoking image VS compared with when the vehicle speed is at or below the threshold (for example, lowering the display luminance, or lowering the brightness or saturation), or reducing the number of image elements E included in the surface-evoking image VS. After step S9, the control unit 61 returns to step S1 and executes the process again. This concludes the display control process. Note that even when the display mode of the virtual image V is switched from the normal mode to the plan-view or bird's-eye-view special mode while the display control process is running, the control unit 61 repeatedly executes the display control process in the same way.
The present invention is not limited to the embodiment and drawings described above. Changes (including deletion of components) may be made as appropriate without departing from the gist of the present invention.
(Modifications)
In the examples above, the surface-evoking image VS is composed of linear image elements E positioned on the lines of the virtual grid G that extend from the vanishing point P toward the user 4, but the invention is not limited to this.
For example, as shown in FIG. 9A, a grid (lattice)-shaped surface-evoking image VS1 may be composed of linear image elements E along both the lines of the virtual grid G extending from the vanishing point P toward the user 4 and the lines extending in the left-right direction. As shown in FIG. 9A, the surface-evoking image VS1 may be superimposed on at least some of the lines constituting the virtual grid G, or, although not illustrated, it may coincide with all of the lines constituting the virtual grid G. Alternatively, as shown in FIG. 9B, a surface-evoking image VS2 may be composed of a plurality of cross-shaped image elements E superimposed on the lines constituting the virtual grid G. Moreover, the image elements need not be limited to combinations of linear elements as above: as long as they can evoke a surface for the user 4, a surface-evoking image may be composed of a combination of dot-shaped image elements, or of linear and dot-shaped image elements together.
Further, the virtual grid G and the surface-evoking images VS, VS1, and VS2 generated on it (hereinafter the reference signs VS1 and VS2 are omitted) are not limited to being composed of straight lines. The control unit 61 may construct the virtual grid G and the surface-evoking image VS from curves so as to follow the road surface R ahead, based on the position information from the GPS device 83 and the digital map data in the storage medium 62. For example, not only the longitudinal gradient of the road surface R ahead but also the inclination and curvature of the road surface R in its width direction may be identified based on the position information from the GPS device 83 and the digital map data in the storage medium 62, and the virtual grid G and the surface-evoking image VS may be generated with that widthwise inclination and curvature taken into account as well.
The control unit 61 may also compute the expected traveling trajectory of the vehicle 1 from the detection signals of the steering angle sensor and the yaw rate sensor included in the vehicle attitude detection unit 82, and construct the virtual grid G and the surface-evoking image VS from curves that follow the computed trajectory.
In the above, the drawing of the surface-evoking image VS is controlled with the virtual grid G generated by the control unit 61 as a reference, but the invention is not limited to this. The drawing of the surface-evoking image VS may be controlled without using the virtual grid G. Also, as long as it can evoke a surface for the user 4, the surface-evoking image VS may be drawn without using the vanishing point P or the texture gradient. The manner in which the surface-evoking image VS is drawn can be changed as appropriate depending on the purpose.
Alternatively, the angle of the screen 30 may be made adjustable by an actuator (not shown), so that the virtual plane A itself is controlled to be parallel to a part of the road surface R ahead.
Inside the housing 50 of the HUD device 100, the display unit that displays the display image from which the display light L originates is not limited to the combination of the display 10, which is a reflective display device such as a DMD, and the screen 30. The display unit may instead be composed of a liquid crystal display, an organic EL (Electro Luminescence) display, or the like.
The projection target (light-transmitting member) of the display light L is not limited to the windshield 2 of the vehicle 1, and may be a combiner composed of a plate-shaped half mirror, a hologram element, or the like.
The type of vehicle 1 on which the HUD device 100 is mounted is not limited either; the invention is applicable to various vehicles such as four-wheeled motor vehicles and motorcycles.
(1) The HUD device 100 described above is mounted on the vehicle 1 and displays an image as a virtual image V on a virtual plane A set in front of a light-transmitting member (windshield 2) by projecting display light L representing the image onto the light-transmitting member. The HUD device 100 includes a display unit (for example, the display 10) that emits the display light L, and a control unit 61 that controls the image displayed on the virtual plane A by controlling the operation of the display unit. The virtual plane A is set tilted forward with respect to the up-down direction of the vehicle 1, and the image displayed on the virtual plane A includes a surface-evoking image VS that evokes a surface by a combination of linear or dot-shaped image elements E. The control unit 61 acquires the traveling speed of the vehicle 1 and displays the surface-evoking image VS in a moving-image manner in which the image elements E move according to the acquired traveling speed.
With this configuration, as described above, a display that evokes the behavior of the vehicle can be provided while the visibility of the real scene is secured. Moreover, because the virtual plane A on which the virtual image V is displayed is set tilted forward with respect to the up-down direction of the vehicle 1, the virtual image V can be displayed with as little annoyance to the user 4 as possible in contrast with the real scene, compared with the case where the virtual plane A is set facing the user 4 along the up-down direction.
(2) The image displayed on the virtual plane A also includes notification images (the first notification image V1 and the second notification image V2) that do not move according to the traveling speed.
(3) The notification images include the second notification image V2 as a pseudo-upright image that is visually recognized as if standing up relative to the surface-evoking image VS.
In this way, by having the user 4 view the first notification image V1 and the second notification image V2, which do not move according to the traveling speed, against the motion-conveying surface-evoking image VS as a reference, the user 4 can grasp the first notification image V1 and the second notification image V2 well while still sensing the behavior of the vehicle 1.
(4) The control unit 61 also moves the image elements E even when the vehicle 1 has stopped. As a result, the surface-evoking image VS conveys motion even while the vehicle 1 is stopped, and the user 4 can perceive a surface without being drawn to the forward object as much as possible.
(5) The control unit 61 also displays the surface-evoking image VS so as to follow the road surface R seen ahead through the light-transmitting member (the display in the normal mode). This enables display using AR (Augmented Reality).
(6) The control unit 61 also switches, according to conditions, between a first manner (the normal mode) in which the surface-evoking image VS is displayed along the road surface R and a second manner (the special mode) in which the surface-evoking image VS is displayed in a manner different from the first. In the first manner, the image elements E move in the direction from the predetermined vanishing point P toward the user 4; in the second manner, the image elements E move in a direction different from that of the first manner. This makes it possible to provide the various notifications with an engaging presentation added.
(7) When the acquired traveling speed exceeds a predetermined threshold, the control unit 61 also performs at least one of the following controls, compared with when the traveling speed is at or below the threshold: lowering the visibility of the surface-evoking image VS, or reducing the number of image elements E included in the surface-evoking image VS. This secures the visibility of the real scene during high-speed travel.
In the above description, explanations of well-known technical matters have been omitted as appropriate to facilitate understanding of the present invention.
DESCRIPTION OF SYMBOLS
100 Head-up display (HUD) device
10 Display
21-23 First to third plane mirrors
30 Screen, 31 Display surface
40 Concave mirror
50 Housing, 51 Light-transmitting cover
60 Control device
61 Control unit, 62 Storage medium, 63 I/F
70 Operation unit
81 Vehicle speed sensor
82 Vehicle attitude detection unit
83 GPS device
84 Wireless communication unit
85 Forward situation detection unit
86 ECU
L Display light
A Virtual plane
V Virtual image
VS, VS1, VS2 Surface-evoking images
E Image element, G Virtual grid, P Vanishing point
V1 First notification image
V2 Second notification image
R Road surface, γ Gradient angle, γ1 Relative angle, θ Vehicle gradient angle
1 Vehicle, 2 Windshield, 3 Dashboard, 4 User
Claims (7)
- A head-up display device mounted on a vehicle, which displays an image as a virtual image on a virtual plane set in front of a translucent member by projecting display light representing the image onto the translucent member, the device comprising:
a display unit that emits the display light; and
a control unit that controls the image displayed on the virtual plane by controlling operation of the display unit,
wherein the virtual plane is set tilted forward with respect to the up-down direction of the vehicle,
the image displayed on the virtual plane includes a surface-evoking image that evokes a surface through a combination of linear or dot-like image elements, and
the control unit acquires a traveling speed of the vehicle and displays the surface-evoking image in an animated manner in which the image elements move in accordance with the acquired traveling speed.
- The head-up display device according to claim 1, wherein the image displayed on the virtual plane includes a notification image that does not move in accordance with the traveling speed.
- The head-up display device according to claim 2, wherein the notification image includes a pseudo-upright image that appears to stand up relative to the surface-evoking image.
- The head-up display device according to any one of claims 1 to 3, wherein the control unit moves the image elements even when the vehicle is stopped.
- The head-up display device according to any one of claims 1 to 4, wherein the control unit displays the surface-evoking image so as to lie along the road surface seen ahead through the translucent member.
- The head-up display device according to claim 5, wherein the control unit switches, depending on conditions, between a first mode in which the surface-evoking image is displayed along the road surface and a second mode in which the surface-evoking image is displayed in a manner different from the first mode, the image elements moving in a direction from a predetermined vanishing point toward the viewer in the first mode, and in a direction different from the first mode in the second mode.
- The head-up display device according to any one of claims 1 to 6, wherein, when the acquired traveling speed exceeds a predetermined threshold, the control unit performs at least one of lowering the visibility of the surface-evoking image and reducing the number of image elements included in the surface-evoking image, compared to when the traveling speed is equal to or less than the threshold.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2019563022A JP7276152B2 (en) | 2017-12-28 | 2018-12-18 | head-up display device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2017254519 | 2017-12-28 | ||
JP2017-254519 | 2017-12-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019131296A1 true WO2019131296A1 (en) | 2019-07-04 |
Family
ID=67063066
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2018/046440 WO2019131296A1 (en) | 2017-12-28 | 2018-12-18 | Head-up display device |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP7276152B2 (en) |
WO (1) | WO2019131296A1 (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08253059A (en) * | 1995-03-17 | 1996-10-01 | Honda Motor Co Ltd | Vehicular operation supporting system |
JP2005207781A (en) * | 2004-01-20 | 2005-08-04 | Mazda Motor Corp | Image display apparatus, method, and program for vehicle |
JP2013112269A (en) * | 2011-11-30 | 2013-06-10 | Toshiba Alpine Automotive Technology Corp | In-vehicle display device |
JP2015092346A (en) * | 2014-11-18 | 2015-05-14 | 株式会社デンソー | Display device |
WO2017138432A1 (en) * | 2016-02-12 | 2017-08-17 | 日本精機株式会社 | Head-up display device |
JP2017149222A (en) * | 2016-02-23 | 2017-08-31 | 株式会社 ミックウェア | Performance device, vehicle, and computer program |
JP2017173536A (en) * | 2016-03-23 | 2017-09-28 | 日本精機株式会社 | Display device for vehicle |
US9809165B1 (en) * | 2016-07-12 | 2017-11-07 | Honda Motor Co., Ltd. | System and method for minimizing driver distraction of a head-up display (HUD) in a vehicle |
2018
- 2018-12-18 WO PCT/JP2018/046440 patent/WO2019131296A1/en active Application Filing
- 2018-12-18 JP JP2019563022A patent/JP7276152B2/en active Active
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2021020519A (en) * | 2019-07-25 | 2021-02-18 | 株式会社デンソー | Display control device for vehicle and display control method for vehicle |
JP7400242B2 (en) | 2019-07-25 | 2023-12-19 | 株式会社デンソー | Vehicle display control device and vehicle display control method |
JP2021081586A (en) * | 2019-11-19 | 2021-05-27 | 株式会社デンソー | Display control device |
JP7238739B2 (en) | 2019-11-19 | 2023-03-14 | 株式会社デンソー | display controller |
Also Published As
Publication number | Publication date |
---|---|
JPWO2019131296A1 (en) | 2020-12-24 |
JP7276152B2 (en) | 2023-05-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6201690B2 (en) | Vehicle information projection system | |
JP6123761B2 (en) | Vehicle display device | |
US9164281B2 (en) | Volumetric heads-up display with dynamic focal plane | |
US20140268353A1 (en) | 3-dimensional (3-d) navigation | |
US20140362195A1 (en) | Enhanced 3-dimensional (3-d) navigation | |
JP2015069656A (en) | Three-dimensional (3d) navigation | |
US20080091338A1 (en) | Navigation System And Indicator Image Display System | |
US11525694B2 (en) | Superimposed-image display device and computer program | |
US20230221569A1 (en) | Virtual image display device and display system | |
JP2010143520A (en) | On-board display system and display method | |
JP6225379B2 (en) | Vehicle information projection system | |
JP6883759B2 (en) | Display systems, display system control methods, programs, and mobiles | |
JP2012035745A (en) | Display device, image data generating device, and image data generating program | |
WO2019207965A1 (en) | Head-up display device | |
JP6796806B2 (en) | Display system, information presentation system, display system control method, program, and mobile | |
JP7310560B2 (en) | Display control device and display control program | |
WO2019131296A1 (en) | Head-up display device | |
JP2018020779A (en) | Vehicle information projection system | |
JP2020024561A (en) | Display device, display control method, and program | |
JP2018167669A (en) | Head-up display device | |
JP7014206B2 (en) | Display control device and display control program | |
JP7318431B2 (en) | Display control device and display control program | |
JP6610376B2 (en) | Display device | |
JPWO2020040276A1 (en) | Display device | |
JP2021066197A (en) | Display control apparatus, display control program and on-vehicle system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18894530 Country of ref document: EP Kind code of ref document: A1 |
ENP | Entry into the national phase |
Ref document number: 2019563022 Country of ref document: JP Kind code of ref document: A |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 18894530 Country of ref document: EP Kind code of ref document: A1 |