WO2018109991A1 - Display device, electronic mirror, display device control method, program, and storage medium - Google Patents


Info

Publication number
WO2018109991A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
depth
occupant
display screen
display device
Prior art date
Application number
PCT/JP2017/031455
Other languages
French (fr)
Japanese (ja)
Inventor
武彦 中川
奈緒樹 白石
良充 稲森
えり 柳川
岳洋 村尾
亮 菊地
山田 貴之
Original Assignee
シャープ株式会社
Priority date
Filing date
Publication date
Application filed by シャープ株式会社 (Sharp Corporation)
Publication of WO2018109991A1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R 1/20: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R 1/31: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles providing stereoscopic vision
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R 1/20: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R 1/22: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R 1/23: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
    • B60R 1/26: Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view to the rear of the vehicle
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present invention relates to a display device for an electronic mirror that displays video captured by an in-vehicle camera, and to an electronic mirror using the display device.
  • 3D video has natural depth, so it is easy for the driver to intuitively grasp the position and speed of an object (visual target) shown on the display screen.
  • In a 3D video that appears farther away than the display screen as seen from the driver, the driver's viewpoint is farther than with a 2D video. It is therefore easy for the driver to recognize the visual target shown on the display screen when moving the line of sight from in front of the vehicle to the display screen.
  • However, because a conventional electronic mirror must be equipped with a twin-lens camera in order to display 3D video, its cost is increased.
  • In addition, because the conventional electronic mirror combines the two images captured by the twin-lens camera through parallax mapping, it takes time to display the 3D video on the display screen of the electronic mirror. Since the driver drives the vehicle while watching the 3D video on the display screen, a delay in the timing at which the 3D video is displayed is a problem.
  • An object of one embodiment of the present invention is to quickly display a video having a sense of depth or a pop-out effect.
  • A display device according to one aspect is arranged in front of an occupant, including the driver of a vehicle, and displays a visual target included in a captured video of the surroundings of the vehicle so that the target is recognized on the far side, farther from the occupant than the display screen, or on the near side, closer to the occupant. The display device includes a depth setting unit that sets a depth from the display screen at which the visual target is to be recognized on the far side or the near side, and a generation unit that generates, from the captured video, a first output video containing left-eye pixel data and right-eye pixel data corresponding to the depth set by the depth setting unit.
  • A display device according to another aspect is arranged in front of an occupant, including the driver of a vehicle, and displays a visual target included in a captured video of the surroundings of the vehicle so that the target is recognized on the far side, farther from the occupant than the display screen, or on the near side, closer to the occupant. The display device includes a depth setting unit that sets a depth from the display screen at which the visual target is to be recognized on the far side or the near side, and a generation unit that generates a first output video in which data at positions in the captured video shifted by the pixel shift amount corresponding to the set depth are alternately arranged as left-eye pixel data and right-eye pixel data, and the display screen is inclined in a direction away from the occupant.
  • A display device according to another aspect is arranged in front of an occupant, including the driver of a vehicle, and displays a visual target included in a captured video of the surroundings of the vehicle so that the target is recognized on the far side, farther from the occupant than the display screen, or on the near side, closer to the occupant. The display device includes a depth setting unit that sets a depth from the display screen at which the visual target is to be recognized on the far side or the near side, and a generation unit that generates a first output video in which data at positions in the captured video shifted by the pixel shift amount corresponding to the set depth are alternately arranged as left-eye pixel data and right-eye pixel data, and the display screen is a liquid crystal display device using a narrow viewing angle liquid crystal that blocks light in directions deviated by a predetermined angle or more from the front of the display screen.
  • A display device according to another aspect is arranged in front of an occupant, including the driver of a vehicle, and displays a visual target included in a captured video of the surroundings of the vehicle so that the target is recognized on the far side, farther from the occupant than the display screen, or on the near side, closer to the occupant. The display device includes a depth setting unit that sets a depth from the display screen at which the visual target is to be recognized on the far side or the near side, and a generation unit that generates a first output video in which data at positions in the captured video shifted by the pixel shift amount corresponding to the set depth are alternately arranged as left-eye pixel data and right-eye pixel data, and a veil view image, which is not visible when the display screen is viewed from the front and is visible when viewed from a direction deviated by a predetermined angle or more from the front, is displayed on the display screen.
  • A display device according to another aspect is arranged in front of an occupant, including the driver of a vehicle, and displays a visual target included in a captured video of the surroundings of the vehicle so that the target is recognized on the far side, farther from the occupant than the display screen, or on the near side, closer to the occupant. The display device includes a depth setting unit that sets a depth from the display screen at which the visual target is to be recognized on the far side or the near side, and a generation unit that generates a first output video in which data at positions in the captured video shifted by the pixel shift amount corresponding to the set depth are alternately arranged as left-eye pixel data and right-eye pixel data, and an optical film that blocks light in directions deviated by a predetermined angle or more from the front of the display screen is attached to the display screen.
  • A display device according to another aspect is a display device that is arranged in front of an occupant, including the driver of a vehicle, and that displays a visual target included in a captured video of the surroundings of the vehicle. The display device includes a detection unit that detects the position of the occupant's face relative to the display screen, a trimming position determination unit that determines, according to the position of the occupant's face detected by the detection unit, the position at which a trimmed image is cut out from the captured video, and a generation unit that generates an output video from the trimmed image cut out from the captured video in accordance with the position determined by the trimming position determination unit.
  • A display device control method according to one aspect is a control method for a display device that is arranged in front of an occupant, including the driver of a vehicle, and that displays a visual target included in a captured video of the surroundings of the vehicle so that the target is recognized on the far side, farther from the occupant than the display screen, or on the near side, closer to the occupant. The method includes a setting step of setting a depth from the display screen at which the visual target is to be recognized on the far side or the near side, and a generation step of generating a corresponding first output video from the captured video.
  • A display device control method according to another aspect is a control method for a display device that is arranged in front of an occupant, including the driver of a vehicle, and that displays a visual target included in a captured video of the surroundings of the vehicle so that the target is recognized on the far side, farther from the occupant than the display screen, or on the near side, closer to the occupant, the display screen being inclined in a direction away from the occupant. The method includes a setting step of setting a depth from the display screen, and a generation step of generating a first output video in which data at positions shifted by the pixel shift amount corresponding to the set depth are alternately arranged as left-eye pixel data and right-eye pixel data.
  • A display device control method according to another aspect is a control method for a display device that is arranged in front of an occupant, including the driver of a vehicle, and that displays a visual target included in a captured video of the surroundings of the vehicle. The method includes a detection step of detecting the position of the occupant's face relative to the display screen, a trimming position determination step of determining, according to the position of the occupant's face detected in the detection step, the position at which a trimmed image is cut out from the captured video, and a generation step of generating an output video from the trimmed image cut out from the captured video in accordance with the position determined in the trimming position determination step.
  • FIG. 1 is a block diagram illustrating the configuration of the electronic mirror according to Embodiment 1.
  • FIG. 2 is a diagram schematically showing the arrangement of the electronic mirror according to Embodiment 1 on a vehicle.
  • FIG. 3 is a diagram showing a part of the vehicle interior provided with the electronic mirror according to Embodiment 1 and the arrangement of its display screens.
  • FIG. 4 is a diagram illustrating the arrangement of the video display device according to Embodiment 1.
  • (a) to (c) of FIG. 5 are diagrams showing the depth of the output video displayed on the display screen.
  • (a) to (d) of FIG. 6 are diagrams illustrating generation of the output video: (a) shows a captured video, (b) shows the right-eye video included in the output video trimmed from the captured video, (c) shows the left-eye video included in the output video trimmed from the captured video, and (d) shows the generated output video.
  • FIG. 9 is a flowchart showing the flow of video processing executed by the video control device of the electronic mirror according to Embodiment 1.
  • FIG. 10 is a table showing the correspondence between the depth of the output video and the pixel shift amount in the electronic mirrors according to Embodiments 1 and 2.
  • FIG. 11 is a diagram showing the arrangement of the display screen of the video display device that is tilted away from the driver.
  • FIG. 12 is a block diagram showing the configuration of the electronic mirror according to Embodiment 2.
  • FIG. 13 is a flowchart showing the flow of video processing executed by the video control device of the electronic mirror according to Embodiment 2.
  • (a) to (c) of FIG. 14 are diagrams showing the range of the output video trimmed (cut out) from the captured video: (a) shows the range of the reference output video, and (b) and (c) show the ranges after the driver's face or eyes have moved.
  • (a) and (b) of FIG. 16 are diagrams showing examples of viewing angle control in the electronic mirror according to Embodiment 5: (a) shows an example of viewing angle control by a directional diffusion film, and (b) shows an example of viewing angle control by a luminance light-shielding film.
  • Embodiment 1: Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.
  • FIG. 1 is a block diagram showing the configuration of the electronic mirror 1.
  • the electronic mirror 1 includes an imaging unit 11 (imaging unit), a video control device 10, a parallax barrier-compatible video display device 25 (display device; hereinafter referred to as the video display device 25), a parallax barrier device 30, a vehicle information acquisition device 40, an eye tracking camera 50, a driver information acquisition device 52, and a parallax barrier control device 55. The video control device 10 and the parallax barrier-compatible video display device 25 according to the present embodiment constitute a display device.
  • FIG. 2 shows a vehicle to which the electronic mirror 1 according to this embodiment is attached.
  • the imaging unit 11 is attached to the rear part of the vehicle so as to face the direction opposite to the traveling direction of the vehicle.
  • the imaging unit 11 may be attached so as to face the traveling direction of the vehicle.
  • the imaging unit 11 is a monocular camera.
  • alternatively, the imaging unit 11 may include a plurality of monocular cameras, for example at least three monocular cameras that respectively capture the area directly behind the vehicle, the left rear, and the right rear.
  • the imaging unit 11 may further include a monocular camera for a drive recorder that captures the front of the vehicle.
  • a monocular camera directed to the rear right side is arranged on the right door of the vehicle.
  • This monocular camera captures the right rear side of the vehicle. While the electronic mirror 1 is activated, that is, while the vehicle is running, the imaging unit 11 captures the periphery of the vehicle (in FIG. 2, the rear of the vehicle) and generates captured video data. The captured image includes a visual target that exists around the vehicle. In some cases, the captured image includes a visual target in the vehicle. For example, when the electronic mirror 1 is used as a substitute for a room mirror, the imaging unit 11 includes a monocular camera that captures the interior of the vehicle. The imaging unit 11 outputs the generated captured video to the video control device 10.
  • the video control device 10 acquires a captured video from the imaging unit 11 and generates an output video to be output to the video display device 25 from the acquired captured video.
  • the output video includes a video corresponding to the right eye of the driver (hereinafter referred to as a video for the right eye) and a video corresponding to the left eye of the driver (hereinafter referred to as a video for the left eye).
  • the video control apparatus 10 includes a depth determination unit 12 (depth setting unit) and an output video generation unit 13 (generation unit). Each part of the video control apparatus 10 will be described later.
  • the video display device 25 displays the output video generated by the video control device 10 on the display screens 20 and 20 ′.
  • the video display device 25 may be a liquid crystal display device, for example.
  • the video display device 25 displays on the display screen 20 an output video based on the captured video taken by the imaging unit 11.
  • the video display device 25 displays an output video based on a digital video including information such as speed, engine speed, and used gear on the display screen 20 ′.
  • FIG. 3 shows a part of the inside of the vehicle shown in FIG. 2, particularly the instrument panel of the driver's seat (cockpit).
  • the display screens 20 and 20 ′ of the video display device 25 are arranged in the vicinity of the driver's seat inside the vehicle.
  • the display screen 20 of the video display device 25 may be incorporated in the dashboard. Alternatively, it may be arranged on the dashboard and/or on the ceiling of the vehicle (at the position of the room mirror).
  • the video display device 25 displays the output video acquired from the video control device 10 on the display screens 20 and 20 ′ arranged at the positions illustrated in FIG. 3.
  • the parallax barrier device 30 is disposed in front of each display screen 20, 20 ′ of the video display device 25.
  • the parallax barrier device 30 controls the viewing angle of the output video displayed on the display screens 20 and 20 ′ of the video display device 25 by being controlled by the parallax barrier control device 55.
  • the video display device 25 may instead be another image-directional liquid crystal display device capable of controlling the viewing angle of the video. In that case, the viewing angle of the output video may be controlled by controlling the directivity of the backlight of the video display device 25.
  • the electronic mirror 1 may not include the parallax barrier device 30 and the parallax barrier control device 55.
  • the vehicle information acquisition device 40 acquires various information indicating the state of the vehicle.
  • the information acquired by the vehicle information acquisition device 40 includes, for example, a seat offset amount (corresponding to the seat position), a vehicle speed, and gear information.
  • the eye tracking camera 50 tracks the driver's face or eye position.
  • the eye tracking camera outputs the detected driver face or eye position information to the driver information acquisition device 52 and the parallax barrier control device 55.
  • the driver information acquisition device 52 acquires various information indicating the states of the driver and other passengers.
  • the information acquired by the driver information acquisition device 52 includes, for example, information on the driver's face or eye position.
  • the driver information acquisition device 52 acquires information on the driver's face or eye position detected by the eye tracking camera 50.
  • the driver information acquisition device 52 may include, for example, a biosensor attached to a vehicle seat. In this configuration, the driver information acquisition device 52 acquires information such as the body temperature, sweating, and movement of the occupant seated on the seat from the biometric sensor.
  • the parallax barrier control device 55 controls the parallax barrier device 30 based on information on the driver's face or eye position detected by the eye tracking camera 50.
  • the video control apparatus 10 includes a depth determination unit 12 and an output video generation unit 13.
  • the depth determination unit 12 determines the depth of the output video based on at least one of information indicating the vehicle state acquired by the vehicle information acquisition device 40 and information indicating the driver state acquired by the driver information acquisition device 52. To decide.
  • the depth of the output video is a parameter representing the depth of the position of the visual target object in the output video when viewed from the driver (that is, the distance from the display screen 20, 20 'to the visual target object). The depth can be positive or negative or zero.
  • the depth determination unit 12 determines the pixel shift amount ⁇ pix between the right-eye pixel data and the left-eye pixel data based on the determined depth of the output video. The depth and the pixel shift amount ⁇ pix will be described later.
  • based on the pixel shift amount Δpix determined by the depth determination unit 12, the output video generation unit 13 extracts, from a single captured video, pixel data at positions shifted from each other by Δpix and generates an output video by alternately arranging that data as left-eye pixel data and right-eye pixel data.
  • the output video generated in this way has a sense of depth (when the depth is positive) or a pop-out feeling (when the depth is negative) like a 3D video when viewed through the parallax barrier device 30. Further, the processing time for generating the output video is shortened compared to the processing time for generating the 3D video because the parallax mapping processing is not performed.
  • the driver can perceive the output video as a planar image at the distance given by the depth, and by focusing on that plane can grasp the image instantly and find the visual target.
  • since the viewpoint movement relative to the forward field of view can be reduced depending on the depth, the visual target in the output video can be recognized quickly. Therefore, the driver can judge and act more quickly based on information included in the output video than when viewing a normal 2D video.
  • the output video generation unit 13 outputs the generated output video to the video display device 25.
  • the output video generation unit 13 may adjust the angle of view or the trimming range of the output video according to which video display device 25 displays the output video.
  • for example, the output video generation unit 13 may output, to the video display device 25 used as a substitute for the room mirror, an output video having a wider angle of view than the output video output to another video display device 25 used as a substitute for a door mirror (side mirror).
  • this is because the room mirror has the role of conveying a wide range of information behind the vehicle to the driver.
  • alternatively, the output video generation unit 13 may generate a 3D video from two captured videos with parallax captured by a twin-lens camera, using a general 3D video generation method, based on the depth (the pop-out amount p and the depth amount d) determined by the depth determination unit 12.
  • FIG. 4 shows an example of the arrangement of the video display device 25 (see FIG. 3) arranged in the dashboard.
  • the video display device 25 is arranged instead of a mechanical meter such as a speedometer.
  • the video display device 25 displays the output video output from the video control device 10 on the display screen 20 ′. Since the entire image display device 25 is smaller than the mechanical meter, the space in the dashboard can be expanded as compared with the conventional case.
  • the display screen 20 ′ of the video display device 25 faces the driver. Therefore, when the depth of the output video changes, the viewpoint of the driver who views the output video moves in the traveling direction of the vehicle or in the opposite direction.
  • FIGS. 5A to 5C show the display screen 20 of the video display device 25 displaying the output video.
  • the viewing angle of the output video is controlled by the parallax barrier device 30 and the parallax barrier control device 55.
  • the right eye of the driver sees only the right-eye video composed of the right-eye pixel data included in the output video.
  • the driver's left eye sees only the left-eye video composed of the left-eye pixel data included in the output video.
  • the driver recognizes the visual target as if it were present at the position where the line of sight of the right eye and the line of sight of the left eye shown in (a) to (c) of FIG. 5 intersect.
  • (a) of FIG. 5 shows an output video when the depth is 0, that is, when the pixel shift amount Δpix is 0.
  • the visual target is on the display screen 20 of the video display device 25 as viewed from the driver. That is, the driver views the output video in the same manner as when the 2D video is displayed on the video display device 25.
  • the parallax barrier control device 55 stops or disables the parallax barrier device 30.
  • (b) of FIG. 5 shows an output video when the depth is negative, that is, when the pixel shift amount Δpix is negative; in this case, the visual target appears to jump out of the display screen 20.
  • the depth of the output video is represented by the pop-out amount p.
  • the depth determination unit 12 may make the depth of the output video smaller than zero.
  • (c) of FIG. 5 shows an output video when the depth is positive, that is, when the pixel shift amount Δpix is positive.
  • the depth of the output video is represented by a depth amount d.
  • the visual target is located behind the display screen 20 of the video display device 25 as viewed from the driver. In this case, the viewpoint of the driver is farther than the display screen 20 of the video display device 25.
  • the depth determination unit 12 may change the depth of the output video based on the vehicle state information acquired from the vehicle information acquisition device 40. For example, when the vehicle is moving at high speed, the driver's viewpoint is usually far from the vehicle. Therefore, the depth determination unit 12 may increase the depth of the output video as the speed of the vehicle increases or as a higher gear is used. Further, when the R (reverse) gear is used, the depth determination unit 12 may make the depth of the output video shallower, that is, make the pixel shift amount Δpix smaller, than when other gears are used. This makes it easier for the driver to visually recognize the vicinity of the vehicle. Further, the depth determination unit 12 may increase the depth of the output video as the position of the seat comes closer to the display screen 20 of the video display device 25. Thereby, regardless of the position of the seat, the distance from the driver to the visual target in the output video can be maintained appropriately. (A sketch of such heuristics follows below.)
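A minimal sketch of such depth heuristics, assuming illustrative thresholds and scale factors that are not taken from the patent:

```python
def determine_depth_mm(speed_kmh, gear, seat_offset_mm, eye_tracking_ok=True):
    """Illustrative stand-in for the depth determination unit 12.

    Returns a signed depth in millimetres: positive values correspond to the
    depth amount d (behind the screen), negative values to the pop-out
    amount p (in front of it), and zero to a plain 2D display (fail-safe).
    """
    if not eye_tracking_ok:          # fail-safe: fall back to 2D
        return 0.0
    if gear == "R":                  # reversing: keep the image shallow
        return 50.0

    # Deeper image at higher speed (the driver's gaze is farther ahead).
    depth = min(speed_kmh, 120.0) / 120.0 * 400.0        # 0 .. 400 mm

    # Seat closer to the screen: add a little depth so the apparent distance
    # from the driver to the visual target stays roughly constant.
    depth += max(0.0, 300.0 - seat_offset_mm) * 0.5
    return depth
```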
  • the depth determination unit 12 may change the depth of the output video based on the driver information acquired from the driver information acquisition device 52. For example, when the driver information acquisition device 52 fails in eye tracking, the depth determination unit 12 may set the depth of the output video to a default value, that is, zero (fail safe). In addition, the driver may be able to manually change the depth of the output video to zero. Thereby, it is possible to prevent the output video from appearing as a double image to the driver and other occupants.
  • the depth of the output video determined by the depth determination unit 12 is constant in the vertical direction (vertical direction) and the horizontal direction (horizontal direction) of the output video.
  • the method by which the depth determination unit 12 determines the depth of the output video is not limited to the method described in this embodiment.
  • the depth determination unit 12 may change the depth of the output video in the vertical direction (vertical direction) or the horizontal direction (horizontal direction) of the output video (see Embodiment 2).
  • the parallax barrier control device 55 stops, that is, invalidates the operation of the parallax barrier device 30.
  • the depth determination unit 12 may change the depth of the output video according to the surrounding environment detected by a sensor or the like.
  • the depth determination unit 12 may determine the depth of the output video according to the brightness of the environment (for example, backlight or twilight) or the time zone (for example, daytime or night).
  • the depth determination unit 12 may determine the depth of the output video according to information included in the output video. For example, the depth determination unit 12 may analyze which of the sky, road, and other vehicles is included in the output video, and may determine the depth of the output video according to the analysis result. In particular, when the output video includes many roads or other vehicles, the depth determination unit 12 may determine the depth of the output video according to the distance from the vehicle to the road or other vehicles.
  • the depth determination unit 12 may determine the depth of the output video according to the arrangement of the display screens 20 and 20 ′ of the video display device 25. For example, when the video display device 25 arranged near the driver's face or a part of the face (eyes, nose, etc.) displays the output video, the depth determination unit 12 may increase the depth of the output video.
  • the depth determination unit 12 may determine the depth of the output video according to the position of the driver's face or a part of the face (eyes, nose, etc.). For example, when the driver's face moves back and forth, left and right, or up and down (for example, swinging or stretching the head), the depth determination unit 12 may determine the depth of the output video according to the position of the driver's face after the movement.
  • the depth determination unit 12 may determine the depth of the output video according to the driver's action. For example, when the driver performs an action for moving the vehicle backward (for example, putting an arm on the passenger seat, or opening a window or door of the driver's seat), the depth determination unit 12 may determine the depth of the output video according to the type of the driver's action.
  • Output video generation method: A method by which the output video generation unit 13 generates the output video will be described with reference to (a) to (d) of FIG. 6, FIG. 7, and FIG. 8.
  • the captured video shown in (a) of FIG. 6 includes a visual target. The captured video has a pixel size of, for example, (1920 + α) × 1080, where α is equal to the pixel shift amount Δpix.
  • (b) of FIG. 6 shows the range of the right-eye video included in the output video trimming range.
  • (c) of FIG. 6 shows the range of the left-eye video included in the output video trimming range.
  • the range of the left-eye video is shifted to the left by the pixel shift amount ⁇ pix determined by the depth determination unit 12 from the range of the right-eye video.
  • the output video generation unit 13 trims the pixel data of a (1920 + Δpix) × 1080 area extending from the left end of the left-eye video range shown in (c) of FIG. 6 to the right end of the right-eye video range shown in (b) of FIG. 6. Then, the output video generation unit 13 extracts pixel data at positions shifted from each other by the pixel shift amount Δpix from the trimmed pixel data and alternately arranges them as left-eye pixel data and right-eye pixel data to generate the output video.
  • FIG. 6D shows an output video in which pixel data are alternately arranged.
  • the position of the visual target object included in the right-eye image and the position of the same visual object included in the left-eye image are shifted from each other by ⁇ pix. That is, the size of the output video is 1920 ⁇ 1080, similar to the range of the right-eye video and the range of the left-eye video.
  • FIG. 7 is a diagram showing numbers assigned to pixels of a captured video acquired by the output video generation unit 13.
  • FIG. 8 is a diagram showing an output video generated from the captured video shown in FIG. 7.
  • FIG. 8 shows each pixel of the output video numbered with the corresponding right-eye video pixel number or the corresponding left-eye video pixel number.
  • in FIG. 7, the pixels corresponding to the right-eye video are numbered sequentially from 1 in the horizontal direction, and the pixels corresponding to the left-eye video are numbered sequentially from 51 in the horizontal direction. That is, in the example shown in FIG. 7, the pixel shift amount Δpix between the right-eye video and the left-eye video is 50 pixels.
  • although FIG. 7 illustrates only the pixels in the top row of the right-eye video and the left-eye video, the pixels in the other rows of the captured video are numbered by the same rule as the pixels in the top row.
  • the output video generation unit 13 alternately arranges the odd-numbered pixels (1, 3, ...) of the right-eye video and the even-numbered pixels (2, 4, ...) of the left-eye video in the horizontal direction to generate the output video.
  • although FIG. 8 illustrates only the pixels in the top row of the output video, the pixels in the other rows of the output video are arranged by the same rule as the pixels in the top row. That is, in this embodiment, the pixel shift amount Δpix in the trimming range does not change in the vertical direction or the horizontal direction of the output video. (A compact sketch of this interleaving follows below.)
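The column interleaving just described can be written compactly. The sketch below is a NumPy rendition under assumptions: a captured frame that is Δpix columns wider than the output, one trimming window per eye offset horizontally by Δpix, and a simple even/odd column split; the exact numbering convention of FIGS. 7 and 8 may differ in detail.

```python
import numpy as np

def interleave_for_parallax_barrier(captured, width=1920, d_pix=50):
    """Build one output frame from a single monocular captured frame.

    captured : ndarray of shape (H, width + d_pix, channels)
    Returns an (H, width, channels) frame whose even columns come from one
    eye's trimming window and whose odd columns come from the other window,
    the two windows being offset horizontally by d_pix pixels.
    """
    right_eye = captured[:, 0:width]              # right-eye trimming window
    left_eye = captured[:, d_pix:d_pix + width]   # left-eye window, shifted by d_pix

    out = np.empty_like(right_eye)
    out[:, 0::2] = right_eye[:, 0::2]             # even output columns
    out[:, 1::2] = left_eye[:, 1::2]              # odd output columns
    return out
```

Viewed through the parallax barrier, the even and odd columns reach different eyes, which is what produces the sense of depth without any parallax mapping.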
  • the viewing angle of the output video displayed by the video display device 25 is controlled by the parallax barrier device 30.
  • the right eye of the driver sees the right-eye video (video composed of even-numbered pixel data) included in the output video.
  • the driver's left eye sees a left-eye image (image composed of odd-numbered pixel data) included in the output image.
  • the video for the right eye and the video for the left eye are synthesized in the driver's brain. Therefore, the driver feels as if he is watching only one output image. In addition, since the size of one pixel is very small, the driver hardly recognizes the space between pixels.
  • FIG. 9 is a flowchart showing the operation of the video control apparatus 10.
  • the video control device 10 acquires a captured video captured by the imaging unit 11 (S2).
  • the depth determination unit 12 acquires information indicating the state of the vehicle from the vehicle information acquisition device 40, and acquires information indicating the state of the driver from the driver information acquisition device 52 (S4).
  • the depth determination unit 12 determines the depth of the output video based on at least one of information indicating the state of the vehicle and information indicating the state of the driver. Further, a pixel shift amount ⁇ pix corresponding to the determined depth of the output video is determined with reference to a table (see FIG. 10) described later (S6, setting step). The depth determination unit 12 notifies the output video generation unit 13 of information on the determined pixel shift amount ⁇ pix.
  • the output video generation unit 13 generates an output video by the above-described method based on the information of the pixel shift amount ⁇ pix determined by the depth determination unit 12 (S8, generation step).
  • the output video generation unit 13 outputs the generated output video to the video display device 25.
  • the video display device 25 displays the output video received from the output video generation unit 13 on the display screen 20 (S10). Thus, the operation of the video control apparatus 10 is finished.
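Steps S2 to S10 form a short pipeline. The sketch below mirrors only that sequencing; the objects it calls (imaging_unit, vehicle_info, and so on) are hypothetical stand-ins for the patent's components, not an actual API.

```python
def video_control_pass(imaging_unit, vehicle_info, driver_info,
                       depth_unit, generator, display):
    """One pass through the flow of FIG. 9 (S2 -> S4 -> S6 -> S8 -> S10)."""
    captured = imaging_unit.capture()                            # S2: captured video
    vehicle_state = vehicle_info.acquire()                       # S4: vehicle state
    driver_state = driver_info.acquire()                         #     and driver state
    d_pix = depth_unit.pixel_shift(vehicle_state, driver_state)  # S6: depth -> Δpix
    output = generator.generate(captured, d_pix)                 # S8: first output video
    display.show(output)                                         # S10: display screen 20
```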
  • FIG. 10 is a table showing a correspondence relationship between the depth (the pop-out amount p and the depth amount d) set in the output video and the pixel shift amount ⁇ pix.
  • ⁇ x (unit: mm) is a shift width corresponding to different pixel shift amounts ⁇ pix.
  • the pop-out amount p (see (b) of FIG. 5) represents how far the visual target in the output video appears to protrude forward from the display screen 20, that is, in the direction toward the driver.
  • the depth amount d (see (c) of FIG. 5) represents how far the visual target in the output video appears to recede from the display screen 20 in the depth direction, that is, along the driver's line of sight. (A geometric illustration of the relation between depth and Δpix follows below.)
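The patent obtains Δpix from the device-specific table of FIG. 10. As a rough illustration of why such a table is monotonic in the depth, ordinary similar-triangle stereo geometry can be used; the eye separation, viewing distance, and pixel pitch below are assumed example values, not figures from the patent.

```python
EYE_SEPARATION_MM = 65.0   # assumed interpupillary distance

def pixel_shift_from_depth(depth_mm, viewing_distance_mm, pixel_pitch_mm):
    """Approximate Δpix for a desired depth (positive = depth amount d behind
    the screen, negative = pop-out amount p in front of it)."""
    if depth_mm == 0:
        return 0
    disparity_mm = EYE_SEPARATION_MM * depth_mm / (viewing_distance_mm + depth_mm)
    return round(disparity_mm / pixel_pitch_mm)

# Example: a target 400 mm behind a screen viewed from 700 mm with 0.1 mm
# pixels gives pixel_shift_from_depth(400, 700, 0.1) == 236.
```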
  • the display screen of the video display device may not face the driver.
  • when the driver sees the display screen of the video display device at an angle, it may be difficult to accurately grasp the sense of depth of the output video.
  • therefore, the depth of the output video is changed according to the tilt angle of the display screen of the video display device. This makes it easy to accurately grasp the sense of depth of the output video.
  • FIG. 11 shows an arrangement of the display screen 20 ′ of the video display device 25 according to the present embodiment.
  • the display screen 20 ′ of the video display device 25 according to the present embodiment is inclined with respect to an ideal plane facing the driver.
  • the angle formed between the display screen 20 ′ of the video display device 25 and the ideal plane is ⁇ .
  • the upper end of the display screen 20 ′ is closest to the driver, and the screen tilts away from the driver toward its lower end; however, the display screen 20 ′ may be tilted in the opposite direction. That is, the upper end of the display screen 20 ′ may be farther from the driver, with the screen coming closer to the driver toward its lower end.
  • a space (a triangle indicated by a broken line in FIG. 11) is generated between the dashboard and the display screen 20 ′ of the video display device 25.
  • This space can be used for arranging electronic devices, HUD (Head-Up-Display), and the like, similarly to the space in the dashboard.
  • in addition, the hood that shields the display screen from sunlight can be made shorter than before.
  • the depth determination unit 12 of the video control device 10 refers to the table shown in FIG. 10 and gradually changes the pixel shift amount Δpix (see (d) of FIG. 6) along the vertical direction (the direction indicated by the arrow of the height h). Specifically, the depth determination unit 12 determines, by referring to the table, the pixel shift amount Δpix corresponding to the depth d determined according to the position a along the height h.
  • the ideal depth d corresponding to the position a is the distance from the display screen 20 ′ to the ideal plane where the driver wants to recognize the visual target.
  • here, L is the length measured from the lower end of the display screen 20 ′ of the video display device 25 to the position a.
  • since the pixel shift amount Δpix is determined according to the ideal depth d and the position a, the visual target in the output video appears to the driver to lie on the ideal plane. (One possible geometric reading of this compensation is sketched below.)
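One possible reading of this per-row compensation is sketched below. The geometry (an ideal plane a fixed distance behind the screen's nearest edge, with the row offset interpolated linearly along the tilt) and the disparity conversion are assumptions made for illustration; the patent itself only says that Δpix is read from the table of FIG. 10 according to the depth d at position a.

```python
import math

EYE_SEPARATION_MM = 65.0   # assumed interpupillary distance

def per_row_pixel_shift(row, n_rows, screen_length_mm, tilt_deg,
                        ideal_depth_mm, viewing_distance_mm, pixel_pitch_mm):
    """Row-wise Δpix for a display tilted away from the occupant.

    Row 0 is taken to be the upper edge nearest the occupant; rows lower on
    the screen already sit farther away, so they receive a shallower
    residual depth and the whole image is perceived on the ideal plane.
    """
    theta = math.radians(tilt_deg)
    frac = row / max(1, n_rows - 1)
    offset_mm = frac * screen_length_mm * math.sin(theta)   # extra distance of this row
    depth_mm = max(0.0, ideal_depth_mm - offset_mm)         # shallower for farther rows
    if depth_mm == 0:
        return 0
    disparity_mm = EYE_SEPARATION_MM * depth_mm / (
        viewing_distance_mm + offset_mm + depth_mm)
    return round(disparity_mm / pixel_pitch_mm)
```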
  • the electronic mirror changes the field angle range of the captured video displayed as the output video according to the position of the driver's face or eyes.
  • FIG. 12 is a block diagram showing the configuration of the electronic mirror 2 according to the present embodiment. As shown in FIG. 12, in addition to the configuration of the video control device 10 described in Embodiment 1, the video control device 210 of the electronic mirror 2 includes an angle-of-view range determination unit 14 (trimming position determination unit). The video control device 210 and the parallax barrier-compatible video display device 25 according to the present embodiment constitute a display device.
  • the angle-of-view range determination unit 14 of the electronic mirror 2 acquires information on the driver's face or eye position from the driver information acquisition device 52. Then, the view angle range determination unit 14 determines the view angle range of the output video according to the position of the driver's face or eyes. More specifically, the angle-of-view range determination unit 14 determines a range of pixel data (trimming image range) to be extracted in order to generate an output video from a captured video.
  • the output video generation unit 13 generates an output video by trimming a part of the captured video according to the angle-of-view range determined by the angle-of-view range determination unit 14.
  • FIG. 13 is a flowchart showing the operation of the video control apparatus 210.
  • the video control device 210 acquires a captured video captured by the imaging unit 11 (S2).
  • the angle-of-view range determination unit 14 acquires information indicating the position of the driver's face or eyes detected by the driver information acquisition device 52 (S32, detection step). Subsequently, the angle-of-view range determination unit 14 determines the trimming range of the output video based on the position of the driver's face or eyes (S34, trimming position determination step). The angle-of-view range determination unit 14 outputs information on the determined trimming range to the output video generation unit 13. When the driver information acquisition device 52 fails in the eye tracking process, the view angle range determination unit 14 may return the view angle of the output video to a preset default value (fail safe).
  • the depth determination unit 12 acquires information indicating the state of the vehicle and information indicating the state of the driver (S4). The specific example of the information acquired by the depth determination unit 12 has been described in the first embodiment. Next, the depth determination unit 12 determines the pixel shift amount ⁇ pix as described in the first embodiment (S6).
  • the output video generation unit 13 generates an output video based on the trimming range information determined by the angle-of-view range determination unit 14 and the pixel shift amount ⁇ pix determined by the depth determination unit 12. (S8 ').
  • the output video generation unit 13 outputs the generated output video to the video display device 25.
  • the video display device 25 displays the output video received from the output video generation unit 13 on the display screens 20 and 20 '(see FIG. 3) arranged in the vicinity of the driver's seat (S10).
  • the operation of the video control device 210 is completed.
  • (a) to (c) of FIG. 14 are diagrams showing the angle-of-view range of the output video i3 according to the position of the driver's face or eyes.
  • (a) of FIG. 14 shows the angle-of-view range of the output video i3 when the driver's face or eyes are at a predetermined reference position.
  • the reference position may be set, for example, to the position of the driver's face or eyes when the driver is seated in an appropriate posture on the vehicle seat.
  • (b) of FIG. 14 shows the angle-of-view range of the output video i3 when the driver's face or eyes move leftward from the reference position shown in (a) of FIG. 14. In this case, the angle-of-view range of the output video i3 in the captured video moves to the right from the position shown in (a) of FIG. 14.
  • (c) of FIG. 14 shows the angle-of-view range of the output video i3 when the driver's face or eyes move upward from the reference position shown in (a) of FIG. 14. In this case, the angle-of-view range of the output video i3 in the captured video moves downward from the position shown in (a) of FIG. 14.
  • as described above, when the driver's face or eyes move, the angle-of-view range (trimming position) of the output video displayed by the video display device 25 changes. Therefore, the driver can naturally change the visible area by moving his or her face or eyes, just as when looking at a mirror. Furthermore, according to the configuration of the present embodiment, the angle of view of the output video can be changed according to the direction the driver wants to see, without changing the shooting direction of the imaging unit 11, that is, the monocular camera. In the present embodiment, when displaying an output video obtained by shifting pixels of the video captured by the imaging unit 11, that is, the monocular camera, the angle-of-view range of the output video is changed according to the position of the driver's face or eyes (see the sketch below).
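The mirror-like behaviour of FIG. 14 (face moves left, window moves right; face moves up, window moves down) can be expressed as a small window-origin calculation. The gain factor and the clamping below are assumptions; the patent specifies only the direction of the movement.

```python
def trimming_origin(face_dx_mm, face_dy_mm, captured_size, window_size,
                    reference_origin, gain=2.0):
    """Top-left corner of the trimming window inside the captured frame.

    face_dx_mm / face_dy_mm : face displacement from the reference position
    (positive = left / up as seen by the driver).  The window moves the
    opposite way, like the reflection in a physical mirror; gain is an
    illustrative pixels-per-millimetre factor.
    """
    cap_w, cap_h = captured_size
    win_w, win_h = window_size
    ref_x, ref_y = reference_origin

    x = ref_x + int(gain * face_dx_mm)   # face moves left -> window moves right
    y = ref_y + int(gain * face_dy_mm)   # face moves up   -> window moves down
    # Keep the trimming window inside the captured frame.
    x = max(0, min(x, cap_w - win_w))
    y = max(0, min(y, cap_h - win_h))
    return x, y
```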
  • the output video may be a 2D video or a 3D video described in the fourth embodiment later.
  • the type of video displayed by the video display device is switched according to at least one of the state of the vehicle and the state of the driver.
  • FIG. 15 is a block diagram showing a configuration of the electronic mirror 3 according to the present embodiment.
  • the video control device 310 of the electronic mirror 3 further includes a display switching unit 15 as compared with the configuration of the video control device 10 of the electronic mirror 1 according to the first embodiment.
  • the electronic mirror 3 further includes a 3D video generation device that generates a normal 3D video.
  • the video control apparatus 310 may further include the angle-of-view range determination unit 14 described in the second embodiment.
  • the video control device 310 and the parallax barrier-compatible video display device 25 according to the present embodiment constitute a display device.
  • the output video generation unit 13 generates an output video by the method described in the first embodiment.
  • the output video generation unit 13 outputs the generated output video to the display switching unit 15.
  • the display switching unit 15 acquires information indicating the state of the vehicle from the vehicle information acquisition device 40. In addition, the display switching unit 15 acquires information indicating the state of the driver from the driver information acquisition device 52. Then, according to at least one of the state of the vehicle and the state of the driver, the display switching unit 15 switches the video output to the video display device 25 among the output video (first output video), the 2D video (second output video), and the 3D video (third output video).
  • the display switching unit 15 also acquires 2D video data.
  • the 2D video data may be, for example, video content data recorded on a recording device (not shown).
  • the 2D video may be generated from a captured video generated by the imaging unit 11. In this case, the depth is not set in the 2D video like the output video.
  • the conditions under which the display switching unit 15 switches the video output to the video display device 25 are not particularly limited. For example, the display switching unit 15 may switch the video according to the state of the vehicle or of the driver, as in the following examples.
  • the display switching unit 15 outputs the output video generated by the output video generation unit 13 to the video display device 25.
  • the video display device 25 can quickly display an image having a sense of depth or a feeling of popping out.
  • in another case, the display switching unit 15 outputs the captured video (2D video) captured by the imaging unit 11 to the video display device 25 after matching it to the output format of the video display device 25.
  • the display switching unit 15 outputs 2D video other than captured video, such as a television program, a navigation screen, and content, to the video display device 25.
  • the parallax barrier control device 55 stops the parallax barrier device 30 of the electronic mirror 3.
  • the display switching unit 15 outputs the 3D video generated by the 3D video generation device to the video display device 25.
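The switching among the three video types, together with the matching parallax barrier state, can be condensed into a small selector. The specific conditions below (eye-tracking failure, a parked state, a low-speed threshold) are illustrative assumptions; the patent leaves the switching criteria open.

```python
def select_video(vehicle_state, driver_state,
                 first_output, video_2d, video_3d, barrier):
    """Pick the video handed to the video display device 25.

    first_output : pixel-shifted video with a set depth (first output video)
    video_2d     : captured or content video without depth (second output video)
    video_3d     : binocular-parallax 3D video, may be None (third output video)
    barrier      : hypothetical handle to the parallax barrier control device 55
    """
    if not driver_state.get("eye_tracking_ok", True):
        barrier.disable()          # fail-safe: plain 2D without the barrier
        return video_2d
    if vehicle_state.get("parked", False):
        barrier.disable()          # parked: content or navigation in 2D
        return video_2d
    if video_3d is not None and vehicle_state.get("speed_kmh", 0) < 10:
        barrier.enable()
        return video_3d            # e.g. low-speed manoeuvring
    barrier.enable()
    return first_output            # default: quickly generated depth video
```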
  • the field angle range of the output video is dynamically changed according to the position of the driver's face so that the driver can always see a normal output video.
  • on the other hand, the output video seen by a person (passenger) seated in the passenger seat of the vehicle changes among a normal view, a reverse view, and a double image. Therefore, the passenger may find such changes in the output video uncomfortable.
  • therefore, the viewing angle of the output video is controlled so that the passenger cannot see the output video. The passenger may see no video at all, or may see something other than the output video.
  • (a) of FIG. 16 shows one example of viewing angle control.
  • the video display device 25 includes a directional diffusion film that is a kind of optical film.
  • the graph shown in FIG. 16A shows the diffusion viewing angle characteristic of the output video in this embodiment.
  • near the front of the display screen, the output video is hardly diffused, but as the viewing angle increases, the output video is diffused more strongly by the directional diffusion film. Accordingly, the driver D sees a normal output video, while the person P sitting in the passenger seat always sees a double image of the right-eye output video and the left-eye output video. However, since the double image seen by the person P has no luminance flicker, the discomfort of the person P is reduced.
  • (b) of FIG. 16 shows another example of viewing angle control.
  • the video display device 25 includes a viewing angle control film that is a kind of optical film.
  • the graph shown in FIG. 16B shows the luminance viewing angle characteristics of the output video in this embodiment.
  • the output image is more strongly shielded by the viewing angle control film as the viewing angle ⁇ increases. Accordingly, the driver D can normally see the output video, while the person P sitting in the passenger seat does not see the output video and sees a black screen. Therefore, at the viewing angle of the person P, even if the output video changes, the person P hardly senses such a change in the output video.
  • the luminance viewing angle control film may block light at both the positive and negative viewing angles θ, or, in order to block only the output video that enters the eyes of a person seated in the passenger seat, may shield the output video at only one of the positive and negative viewing angles θ.
  • alternatively, the video display device 25 may display a so-called veil view.
  • when a veil view image, which is not visible when viewed from the front of the display screen 20 and becomes visible when viewed from a direction deviated by a predetermined angle or more from the front, is displayed on the display screen 20, the person seated in the passenger seat cannot see the output video. Therefore, the person seated in the passenger seat does not feel discomfort due to changes in the output video.
  • the electronic mirror device for a vehicle has been described.
  • the control blocks (particularly the video control devices 10, 210, 310) of the electronic mirrors 1, 2, 3 may be realized by a logic circuit (hardware) formed in an integrated circuit (IC chip) or the like, It may be realized by software using a CPU (Central Processing Unit).
  • for example, a smartphone equipped with a 3D display (video display device 25), an eye tracking camera (eye tracking camera 50, driver information acquisition device 52), and a photographing camera (imaging unit 11) can realize the functions of the electronic mirrors 1, 2, and 3 by means of the CPU and an application included in the smartphone.
  • in the latter case, the video control device 10 includes a CPU that executes the instructions of a program, which is software realizing each function, a ROM (Read Only Memory) or storage device (these are referred to as "recording media") in which the program and various data are recorded so as to be readable by the computer (or CPU), and a RAM (Random Access Memory) into which the program is loaded.
  • the object of one embodiment of the present invention is achieved when the computer (or CPU) reads the program from the recording medium and executes it.
  • as the recording medium, a "non-transitory tangible medium" such as a tape, a disk, a card, a semiconductor memory, or a programmable logic circuit can be used.
  • the program may be supplied to the computer via an arbitrary transmission medium (such as a communication network or a broadcast wave) that can transmit the program.
  • one embodiment of the present invention can also be realized in the form of a data signal embedded in a carrier wave, in which the program is embodied by electronic transmission.
  • The display device (video control devices 10, 210, 310 and the parallax barrier-compatible video display device 25) according to the first aspect of the present invention is arranged in front of an occupant, including the driver of a vehicle, and displays a visual target included in a captured video of the surroundings of the vehicle so that the target is recognized on the far side, farther from the occupant than the display screen (20, 20 ′), or on the near side, closer to the occupant. The display device includes a depth setting unit (depth determination unit 12) that sets a depth (pop-out amount p or depth amount d) from the display screen at which the visual target is to be recognized on the far side or the near side, and a generation unit (output video generation unit 13) that generates, from the captured video, a first output video containing left-eye pixel data and right-eye pixel data corresponding to the depth set by the depth setting unit.
  • According to the above configuration, a depth is set in the output video, and an output video containing left-eye pixel data and right-eye pixel data shifted in the pixel arrangement direction by the pixel shift amount corresponding to the set depth is generated. As a result, the visual target in the output video appears behind the display screen or appears to pop out of the display screen.
  • In the display device according to another aspect of the present invention, in the display device according to aspect 1, the captured video may be a video captured by a monocular camera, and the generation unit may generate the first output video by alternately arranging, as the left-eye pixel data and the right-eye pixel data, pixel data at positions in the captured video shifted by a pixel shift amount (Δpix) corresponding to the depth set by the depth setting unit.
  • According to the above configuration, an output video having a sense of depth or a pop-out effect can be generated from a video shot with a monocular camera, without requiring a twin-lens camera as in the prior art.
  • In the display device according to another aspect of the present invention, the depth setting unit may set the depth based on at least one of the state of the vehicle and the state of the occupant.
  • According to the above configuration, an appropriate depth can be set in the output video based on at least one of the state of the vehicle and the state of the occupant.
  • The display device according to another aspect of the present invention, in any one of aspects 1 to 3, may further include a display switching unit (15) that selects, based on at least one of the state of the vehicle and the state of the occupant, the output video to be displayed on the display screen from among the first output video generated by the generation unit, a second output video generated from the captured video and displayed so that the visual target is recognized at the position of the display screen, and a third output video generated from the captured video and reproducing human binocular parallax.
  • The display device according to another aspect of the present invention is the display device according to aspect 3, wherein the state of the vehicle may be either the speed of the vehicle or the position of a seat included in the vehicle, and the state of the occupant may be either the position of the occupant's face relative to the display screen or the binocular parallax of the occupant.
  • According to the above configuration, an appropriate depth can be set in the output video based on any one of the vehicle speed, the seat position, the position of the occupant's face, and the occupant's binocular parallax.
  • The display device (video control devices 10, 210, and 310 and the parallax barrier-compatible video display device 25) according to aspect 6 of the present invention is arranged in front of an occupant, including the driver of a vehicle, and displays a visual target included in a captured video of the surroundings of the vehicle so that the target is recognized on the far side, farther from the occupant than the display screen (20, 20 ′), or on the near side, closer to the occupant. The display device includes a depth setting unit (depth determination unit 12) that sets a depth from the display screen at which the visual target is to be recognized on the far side or the near side, and a generation unit (output video generation unit 13) that generates a first output video in which data at positions in the captured video shifted by the pixel shift amount corresponding to the depth set by the depth setting unit are alternately arranged as left-eye pixel data and right-eye pixel data, and the display screen is inclined in a direction away from the occupant.
  • According to the above configuration, a pixel farther from the occupant can be given a shallower depth from the display screen, and a pixel closer to the occupant a deeper depth. Therefore, regardless of the pixel position in the tilt direction of the display screen, the output video on the tilted display screen can be shown as if it faced the occupant.
  • The depth setting unit may set a shallower depth from the display screen for a pixel farther from the occupant, according to the inclination of the display screen.
  • According to the above configuration, the distance from the occupant to the image becomes shorter for an image corresponding to a pixel for which a shallow depth from the display screen is set. Therefore, when the display screen is tilted, a pixel farther from the occupant can be given a shallower depth from the display screen and a pixel closer to the occupant a deeper depth, so that, regardless of the pixel position in the tilt direction of the display screen, the output video on the tilted display screen can be shown as if it faced the occupant.
  • The display device (video control device 310 and parallax barrier-compatible video display device 25) according to aspect 8 of the present invention is arranged in front of an occupant including the driver of a vehicle and displays a visual recognition object included in a captured video of the surroundings of the vehicle so that the object is recognized on the far side of the display screen (20, 20'), farther from the occupant, or on the near side, closer to the occupant. The display device includes a depth setting unit that sets a depth from the display screen at which the object is to be recognized on the far side or the near side, and a generation unit (output video generation unit 13) that generates a first output video in which pixel data at positions in the captured video shifted by a pixel shift amount corresponding to the depth set by the depth setting unit are alternately arranged as left-eye pixel data and right-eye pixel data. In addition, the display screen is a liquid crystal display device using a narrow-viewing-angle liquid crystal that blocks light in directions deviating from the front of the display screen by a predetermined angle or more.
  • According to the above configuration, since the liquid crystal display device using the narrow-viewing-angle liquid crystal limits the viewing angle of the output video, only the left-eye pixels are visible to the occupant's left eye and only the right-eye pixels are visible to the occupant's right eye. As a result, to the occupant (user) viewing the output video, the visual recognition object in the output video appears behind the display screen or appears to pop out of the display screen, according to the depth set for the output video.
  • The display device (video control device 310 and parallax barrier-compatible video display device 25) according to aspect 9 of the present invention is arranged in front of an occupant including the driver of a vehicle and displays a visual recognition object included in a captured video of the surroundings of the vehicle so that the object is recognized on the far side of the display screen, farther from the occupant, or on the near side, closer to the occupant. The display device includes a depth setting unit that sets a depth from the display screen at which the object is to be recognized on the far side or the near side, and a generation unit that generates a first output video in which pixel data at positions in the captured video shifted by a pixel shift amount corresponding to the depth set by the depth setting unit are alternately arranged as left-eye pixel data and right-eye pixel data. In addition, an image that is not visible when the display screen is viewed from the front but is visible when viewed from a direction deviating from the front by a predetermined angle or more is displayed on the display screen.
  • According to the above configuration, the visual recognition object in the output video appears behind the display screen or appears to pop out of the display screen.
  • The display device (video control device 310 and parallax barrier-compatible video display device 25) according to aspect 10 of the present invention is arranged in front of an occupant including the driver of a vehicle and displays a visual recognition object included in a captured video of the surroundings of the vehicle so that the object is recognized on the far side of the display screen, farther from the occupant, or on the near side, closer to the occupant. The display device includes a depth setting unit that sets a depth from the display screen at which the object is to be recognized on the far side or the near side, and a generation unit that generates a first output video in which pixel data at positions in the captured video shifted by a pixel shift amount corresponding to the depth set by the depth setting unit are alternately arranged as left-eye pixel data and right-eye pixel data. In addition, an optical film that blocks light in directions deviating from the front of the display screen by a predetermined angle or more is attached to the display screen.
  • According to the above configuration, an occupant viewing the display screen from the front can see the visual recognition object in the output video behind the display screen or popping out of the display screen.
  • Moreover, an occupant viewing the display screen from a direction deviating from the front by a predetermined angle or more cannot see the output video. It is therefore possible to prevent such an occupant from feeling discomfort due to flickering of the luminance of the output video.
  • The display device (video control device 210 and parallax barrier-compatible video display device 25) according to aspect 11 of the present invention is arranged in front of an occupant including the driver of a vehicle and displays a visual recognition object included in a captured video of the surroundings of the vehicle. The display device includes a detection unit that detects the position of the occupant's face relative to the display screen, a trimming position determination unit that determines, according to the position of the occupant's face relative to the display screen detected by the detection unit, a position at which a trimmed image is cut out of the captured video, and a generation unit that generates an output video from the trimmed image cut out of the captured video according to the position determined by the trimming position determination unit.
  • According to the above configuration, an output video with an appropriate depth can be shown to the occupant according to the position of the occupant's face.
  • The trimming position determination unit may move the position at which the trimmed image is cut out of the captured video in the direction opposite to the direction of movement when the occupant's face moves relative to the display screen.
  • According to the above configuration, the occupant can easily change the range of the trimmed image displayed as the output video on the display screen by moving his or her face relative to the display screen, in the same way as when looking at an ordinary mirror.
  • An electronic mirror according to aspect 13 of the present invention includes the display device according to any one of aspects 1 to 12 above and an imaging unit (imaging unit 11) that captures the front of the vehicle, the area directly behind the vehicle, the left rear, or the right rear, and outputs the captured video to the display device.
  • According to the above configuration, the visual recognition object included in the captured video of the surroundings of the vehicle can be displayed on the display screen arranged in front of the occupant including the driver of the vehicle.
  • A display device control method according to an aspect of the present invention is a method for controlling a display device that is arranged in front of an occupant including the driver of a vehicle and displays a visual recognition object included in a captured video of the surroundings of the vehicle so that the object is recognized on the far side of the display screen, farther from the occupant, or on the near side, closer to the occupant. The method includes a depth setting step of setting a depth from the display screen at which the object is to be recognized, and a generation step of generating a first output video in which pixel data at positions shifted by a pixel shift amount corresponding to the set depth are alternately arranged as left-eye pixel data and right-eye pixel data.
  • A display device control method according to another aspect of the present invention is a method for controlling a display device that is arranged in front of an occupant including the driver of a vehicle and displays a visual recognition object included in a captured video of the surroundings of the vehicle.
  • DESCRIPTION OF SYMBOLS: 1, 2, 3 Electronic mirror; 10, 210, 310 Video control device (display device); 11 Imaging unit (monocular camera, imaging unit); 12 Depth determination unit (depth setting unit); 13 Output video generation unit (generation unit); 14 Angle-of-view range determination unit (trimming position determination unit); 15 Display switching unit; 20, 20' Display screen; 25 Parallax barrier-compatible video display device; 52 Driver information acquisition device (detection unit)

Abstract

The present invention quickly displays video having a sense of depth or a pop-out effect. A depth determination unit (12) determines a depth indicating how far behind or in front of the display screens (20, 20') of a video display device (25) a visual object in the output video is to be displayed, and an output video generation unit (13) generates the output video by alternately arranging pixel data offset by a pixel shift amount corresponding to the depth determined by the depth determination unit (12).

Description

表示装置、電子ミラー、表示装置の制御方法、プログラム、記録媒体Display device, electronic mirror, display device control method, program, and recording medium
 本発明は、車載カメラで撮影した映像を表示する電子ミラー用の表示装置及び当該表示装置を用いた電子ミラーに関する。 The present invention relates to a display device for an electronic mirror that displays an image taken by an in-vehicle camera, and an electronic mirror using the display device.
 近年、車両に設置されたカメラが撮影した車両周辺の映像を、運転手の前方に配置された表示画面に表示する車両が開発されている。上記カメラおよび表示画面の組合せは、電子ミラーと呼ばれている。例えば、車両の直後の映像を表示するバック表示画面は、電子ミラーに含まれる。また、電子ミラーは、ルームミラーまたはドアミラー（サイドミラー）の代用として使用される。 In recent years, vehicles have been developed that display images of the vehicle's surroundings, taken by a camera installed on the vehicle, on a display screen arranged in front of the driver. The combination of such a camera and display screen is called an electronic mirror. For example, a back-up display screen that displays an image of the area immediately behind the vehicle is one form of electronic mirror. An electronic mirror is also used as a substitute for a room mirror or a door mirror (side mirror).
 電子ミラーの中には、3D映像(立体映像)を表示するものがある(特許文献1および2参照)。3D映像は、自然な奥行き感を持つ。したがって、運転手が、表示画面に映る物体(視認対象物)の位置や速度を直感的に理解しやすい。加えて、運転手から見て表示画面よりも遠い奥側に表示する3D映像は、2D映像と比較すると、運転手の視点が遠くなる。そのため、運転手が、車両の前方から表示画面へ視線を移動して、表示画面に表示される視認対象物を認識することを楽に感じる。 Some electronic mirrors display 3D video (stereoscopic video) (see Patent Documents 1 and 2). 3D images have a natural depth. Therefore, it is easy for the driver to intuitively understand the position and speed of the object (viewing target object) reflected on the display screen. In addition, the viewpoint of the driver is farther in the 3D image displayed on the far side than the display screen when viewed from the driver as compared with the 2D image. Therefore, it is easy for the driver to recognize his / her visual object displayed on the display screen by moving his / her line of sight from the front of the vehicle to the display screen.
日本国公開特許公報「特開2013-026770号」(2013年 2月 4日公開)Japanese Published Patent Publication “JP 2013-026770” (published February 4, 2013) 日本国公開特許公報「特開2003-339060号」(2003年11月28日公開)Japanese Patent Publication “Japanese Patent Laid-Open No. 2003-339060” (published on November 28, 2003)
 しかしながら、従来の電子ミラーは、3D映像を表示するために、2眼カメラを備える必要があるので、コストがかさむ。また、従来の電子ミラーは、2眼カメラで撮影した2つの撮影映像を、視差マッピング処理により合成するので、電子ミラーの表示画面に3D映像を表示するまでに時間がかかる。運転手は、表示画面に表示される3D映像を見ながら車両を運転するので、表示画面に3D映像を表示するタイミングが遅延することは問題である。 However, since a conventional electronic mirror needs to be equipped with a twin-lens camera in order to display 3D video, the cost is increased. In addition, since the conventional electronic mirror combines two captured images captured by the twin-lens camera by the parallax mapping process, it takes time to display the 3D image on the display screen of the electronic mirror. Since the driver drives the vehicle while watching the 3D video displayed on the display screen, it is a problem that the timing for displaying the 3D video on the display screen is delayed.
 本発明の一態様は、奥行き感または飛び出し感を持った映像を迅速に表示することを目的とする。 An object of one embodiment of the present invention is to quickly display an image having a sense of depth or popping out.
 本発明の一態様に係る表示装置は、車両の運転手を含む乗員の前方に配置され、撮影された当該車両の周囲の撮影映像に含まれる視認対象物を、表示画面よりも上記乗員から遠い奥側または上記乗員に近い手前側に認識されるように表示する表示装置であって、上記視認対象物を上記奥側または上記手前側に認識させる上記表示画面からの深さを設定する深さ設定部と、上記撮影映像から、上記深さ設定部が設定した上記深さに応じた左眼用の画素データおよび右眼用の画素データを含む第1出力映像を生成する生成部と、を備えている。 A display device according to one embodiment of the present invention is disposed in front of an occupant including a driver of a vehicle, and a visual target included in a photographed image around the photographed vehicle is farther from the occupant than a display screen. Depth for setting a depth from the display screen for displaying the object to be recognized on the back side or the near side, which is displayed on the back side or the near side close to the occupant. A setting unit, and a generation unit that generates a first output video including pixel data for left eye and pixel data for right eye according to the depth set by the depth setting unit from the captured video. I have.
 また、本発明の一態様に係る表示装置は、車両の運転手を含む乗員の前方に配置され、撮影された当該車両の周囲の撮影映像に含まれる視認対象物を、表示画面よりも上記乗員から遠い奥側または上記乗員に近い手前側に認識されるように表示する表示装置であって、上記視認対象物を上記奥側または上記手前側に認識させる上記表示画面からの深さを設定する深さ設定部と、上記撮影映像における、上記深さ設定部が設定した上記深さに応じた画素ずらし量だけずれた位置のデータが、左眼用の画素データおよび右眼用の画素データとして、交互に配列した第1出力映像を生成する生成部と、を備えるとともに、上記表示画面は、上記乗員から遠ざかる方向に傾斜している。 In addition, a display device according to one embodiment of the present invention is arranged in front of an occupant including a driver of a vehicle, and the sighted object included in a captured image around the vehicle is displayed on the occupant rather than the display screen. A depth of the display screen that allows the object to be recognized to be recognized on the back side or the near side. The data of the position shifted by the pixel shift amount corresponding to the depth set by the depth setting unit and the depth set by the depth setting unit in the captured image is the pixel data for the left eye and the pixel data for the right eye And generating units that generate alternately arranged first output images, and the display screen is inclined in a direction away from the occupant.
 また、本発明の一態様に係る表示装置は、車両の運転手を含む乗員の前方に配置され、撮影された当該車両の周囲の撮影映像に含まれる視認対象物を、表示画面よりも上記乗員から遠い奥側または上記乗員に近い手前側に認識されるように表示する表示装置であって、上記視認対象物を上記奥側または上記手前側に認識させる上記表示画面からの深さを設定する深さ設定部と、上記撮影映像における、上記深さ設定部が設定した上記深さに応じた画素ずらし量だけずれた位置のデータが、左眼用の画素データおよび右眼用の画素データとして、交互に配列した第1出力映像を生成する生成部と、を備えるとともに、上記表示画面は、当該表示画面の正面から所定角度以上ずれた方向への光を遮光する狭視野角液晶を用いた液晶表示装置である。 In addition, a display device according to one embodiment of the present invention is arranged in front of an occupant including a driver of a vehicle, and the sighted object included in a captured image around the vehicle is displayed on the occupant rather than the display screen. A depth of the display screen that allows the object to be recognized to be recognized on the back side or the near side. The data of the position shifted by the pixel shift amount corresponding to the depth set by the depth setting unit and the depth set by the depth setting unit in the captured image is the pixel data for the left eye and the pixel data for the right eye A generation unit that generates first output video arranged alternately, and the display screen uses a narrow viewing angle liquid crystal that shields light in a direction deviated by a predetermined angle or more from the front of the display screen. A liquid crystal display .
 また、本発明の一態様に係る表示装置は、車両の運転手を含む乗員の前方に配置され、撮影された当該車両の周囲の撮影映像に含まれる視認対象物を、表示画面よりも上記乗員から遠い奥側または上記乗員に近い手前側に認識されるように表示する表示装置であって、上記視認対象物を上記奥側または上記手前側に認識させる上記表示画面からの深さを設定する深さ設定部と、上記撮影映像における、上記深さ設定部が設定した上記深さに応じた画素ずらし量だけずれた位置のデータが、左眼用の画素データおよび右眼用の画素データとして、交互に配列した第1出力映像を生成する生成部と、を備えるとともに、上記表示画面を当該表示画面の正面から見た場合は見えず、正面から所定角度以上ずれた方向から見た場合は見える画像を、上記表示画面に表示する。 In addition, a display device according to one embodiment of the present invention is arranged in front of an occupant including a driver of a vehicle, and the sighted object included in a captured image around the vehicle is displayed on the occupant rather than the display screen. A depth of the display screen that allows the object to be recognized to be recognized on the back side or the near side. The data of the position shifted by the pixel shift amount corresponding to the depth set by the depth setting unit and the depth set by the depth setting unit in the captured image is the pixel data for the left eye and the pixel data for the right eye A generation unit that generates first alternately output video images, and is not visible when the display screen is viewed from the front of the display screen, and when viewed from a direction deviated by a predetermined angle or more from the front. The visible image It is displayed on the serial display screen.
 また、本発明の一態様に係る表示装置は、車両の運転手を含む乗員の前方に配置され、撮影された当該車両の周囲の撮影映像に含まれる視認対象物を、表示画面よりも上記乗員から遠い奥側または上記乗員に近い手前側に認識されるように表示する表示装置であって、上記視認対象物を上記奥側または上記手前側に認識させる上記表示画面からの深さを設定する深さ設定部と、上記撮影映像における、上記深さ設定部が設定した上記深さに応じた画素ずらし量だけずれた位置のデータが、左眼用の画素データおよび右眼用の画素データとして、交互に配列した第1出力映像を生成する生成部と、を備えるとともに、上記表示画面に、当該表示画面の正面から所定角度以上ずれた方向への光を遮光する光学フィルムが貼られている。 In addition, a display device according to one embodiment of the present invention is arranged in front of an occupant including a driver of a vehicle, and the sighted object included in a captured image around the vehicle is displayed on the occupant rather than the display screen. A depth of the display screen that allows the object to be recognized to be recognized on the back side or the near side. The data of the position shifted by the pixel shift amount corresponding to the depth set by the depth setting unit and the depth set by the depth setting unit in the captured image is the pixel data for the left eye and the pixel data for the right eye And a generation unit that generates first output video arranged alternately, and an optical film that blocks light in a direction shifted by a predetermined angle or more from the front of the display screen is pasted on the display screen. .
 また、本発明の一態様に係る表示装置は、車両の運転手を含む乗員の前方に配置され、撮影された当該車両の周囲の撮影映像に含まれる視認対象物を表示する表示装置であって、上記乗員の顔の表示画面に対する位置を検出する検出部と、上記検出部が検出した、上記乗員の顔の上記表示画面に対する位置に応じて、上記撮影映像においてトリミング画像を切り出す位置を決定するトリミング位置決定部と、上記トリミング位置決定部が決定した上記位置に従って、上記撮影映像から切り出した上記トリミング画像から、出力映像を生成する生成部と、を備えている。 A display device according to an aspect of the present invention is a display device that is disposed in front of an occupant including a driver of a vehicle and displays a visual target object included in a photographed image around the photographed vehicle. A detection unit that detects a position of the occupant's face with respect to the display screen; and a position at which the trimmed image is cut out in the captured video is determined according to the position of the occupant's face with respect to the display screen detected by the detection unit. A trimming position determination unit; and a generation unit that generates an output video from the trimmed image cut out from the captured video in accordance with the position determined by the trimming position determination unit.
 また、本発明の一態様に係る表示装置の制御方法は、車両の運転手を含む乗員の前方に配置され、撮影された当該車両の周囲の撮影映像に含まれる視認対象物を、表示画面よりも上記乗員から遠い奥側または上記乗員に近い手前側に認識されるように表示する表示装置の制御方法であって、上記視認対象物を上記奥側または上記手前側に認識させる上記表示画面からの深さを設定する深さ設定ステップと、上記撮影映像における、上記深さ設定ステップにて設定した上記深さに応じた左眼用の画素データおよび右眼用の画素データを含む第1出力映像を生成する生成ステップと、を含む。 In addition, the display device control method according to one embodiment of the present invention is arranged in front of an occupant including a driver of a vehicle, and a visual object included in a captured image of the surrounding of the vehicle is displayed on a display screen. Is a control method for a display device that is displayed so as to be recognized on the far side far from the occupant or the near side close to the occupant, from the display screen for recognizing the visible object on the far side or the near side. A depth setting step for setting a depth of the first eye, and a first output including pixel data for the left eye and pixel data for the right eye according to the depth set in the depth setting step in the captured image Generating a video.
 また、本発明の一態様に係る表示装置の制御方法は、車両の運転手を含む乗員の前方に配置され、撮影された当該車両の周囲の撮影映像に含まれる視認対象物を、表示画面よりも上記乗員から遠い奥側または上記乗員に近い手前側に認識されるように表示する表示装置の制御方法であって、上記表示画面は、上記乗員から遠ざかる方向に傾斜しており、上記視認対象物を上記奥側または上記手前側に認識させる上記表示画面からの深さを設定する深さ設定ステップと、上記撮影映像における、上記深さ設定ステップにて設定した上記深さに応じた画素ずらし量だけずれた位置のデータが、左眼用の画素データおよび右眼用の画素データとして、交互に配列した第1出力映像を生成する生成ステップと、を含む。 In addition, the display device control method according to one embodiment of the present invention is arranged in front of an occupant including a driver of a vehicle, and a visual object included in a captured image of the surrounding of the vehicle is displayed on a display screen. Is a control method for a display device that displays so as to be recognized on the far side away from the occupant or on the near side close to the occupant, the display screen being inclined in a direction away from the occupant, A depth setting step for setting a depth from the display screen for recognizing an object on the back side or the near side, and a pixel shift according to the depth set in the depth setting step in the captured image The generation step of generating the first output video alternately arranged as the pixel data for the left eye and the pixel data for the right eye includes data at positions shifted by the amount.
 また、本発明の一態様に係る表示装置の制御方法は、車両の運転手を含む乗員の前方に配置され、撮影された当該車両の周囲の撮影映像に含まれる視認対象物を表示する表示装置の制御方法であって、上記乗員の顔の表示画面に対する位置を検出する検出ステップと、上記検出ステップにて検出した、上記乗員の顔の上記表示画面に対する位置に応じて、上記撮影映像においてトリミング画像を切り出す位置を決定するトリミング位置決定ステップと、上記トリミング位置決定ステップにて決定した上記位置に従って、上記撮影映像から切り出した上記トリミング画像から、出力映像を生成する生成ステップと、を含む。 The display device control method according to one aspect of the present invention is a display device that is disposed in front of an occupant including a driver of a vehicle and displays a visual target object included in a captured image of the surroundings of the vehicle. And detecting the position of the occupant's face relative to the display screen, and trimming the captured video according to the position of the occupant's face relative to the display screen detected in the detection step. A trimming position determining step for determining a position to cut out the image; and a generating step for generating an output video from the trimmed image cut out from the captured video in accordance with the position determined in the trimming position determining step.
 本発明の一態様によれば、奥行き感または飛び出し感を持った映像を迅速に表示することができる。 According to one embodiment of the present invention, it is possible to quickly display an image having a sense of depth or a feeling of popping out.
FIG. 1 is a block diagram showing the configuration of an electronic mirror according to Embodiment 1.
FIG. 2 is a diagram schematically showing the arrangement of the imaging unit and the video display device of the electronic mirror according to Embodiment 1.
FIG. 3 is a diagram showing a part of the interior of a vehicle provided with the electronic mirror according to Embodiment 1 and the arrangement of the video display devices.
FIG. 4 is a diagram showing the arrangement of the video display device according to Embodiment 1.
FIG. 5(a) to (c) are diagrams showing the depth of the output video displayed on the video display device, where (a) shows zero depth, (b) shows negative depth, and (c) shows positive depth.
FIG. 6(a) shows a captured video, (b) shows the right-eye video included in the output video trimmed from the captured video, (c) shows the left-eye video included in the output video trimmed from the captured video, and (d) shows the output video generated from the right-eye video and the left-eye video.
FIG. 7 is a diagram in which the pixels of the captured video are numbered.
FIG. 8 is a diagram showing the output video generated from the captured video of FIG. 7, with each pixel of the output video labeled with the number of the corresponding pixel of the captured video.
FIG. 9 is a flowchart showing the flow of video processing executed by the video control device of the electronic mirror according to Embodiment 1.
FIG. 10 is a table showing the correspondence between the depth of the output video and the pixel shift amount in the electronic mirrors according to Embodiments 1 and 2.
FIG. 11 is a diagram showing the arrangement of the video display device according to Embodiment 2.
FIG. 12 is a block diagram showing the configuration of the electronic mirror according to Embodiment 2.
FIG. 13 is a flowchart showing the flow of video processing executed by the video control device of the electronic mirror according to Embodiment 2.
FIG. 14(a) to (c) are diagrams showing the range of the output video trimmed (cut out) from the captured video, where (a) is the reference range of the output video, (b) is the range when the position of the driver's eyes moves to the left, and (c) is the range when the driver's viewpoint moves upward.
FIG. 15 is a block diagram showing the configuration of the electronic mirror according to Embodiment 3.
FIG. 16(a) and (b) are diagrams showing examples of viewing angle control in the electronic mirror according to Embodiment 5, where (a) shows viewing angle control using a directional diffusion film and (b) shows viewing angle control using a luminance light-shielding film.
 〔実施形態1〕
 以下、本発明の実施の形態について、図1~図4を用いて詳細に説明する。
Embodiment 1
Hereinafter, embodiments of the present invention will be described in detail with reference to FIGS. 1 to 4.
 (電子ミラー1)
 図1~図3を用いて、本実施形態に係る電子ミラー1の構成を説明する。図1は、電子ミラー1の構成を示すブロック図である。図1に示すように、電子ミラー1は、撮像部11(撮影部)、映像制御装置10、視差バリア対応映像表示装置25(表示装置)(以下、映像表示装置25と呼ぶ)、視差バリア装置30、車両情報取得装置40、アイトラッキングカメラ50、運転手情報取得装置52、および視差バリア制御装置55を備えている。本実施形態に係る映像制御装置10および視差バリア対応映像表示装置25は、表示装置を構成する。
(Electronic mirror 1)
The configuration of the electronic mirror 1 according to the present embodiment will be described with reference to FIGS. 1 to 3. FIG. 1 is a block diagram showing the configuration of the electronic mirror 1. As shown in FIG. 1, the electronic mirror 1 includes an imaging unit 11 (photographing unit), a video control device 10, a parallax barrier-compatible video display device 25 (display device) (hereinafter referred to as the video display device 25), a parallax barrier device 30, a vehicle information acquisition device 40, an eye tracking camera 50, a driver information acquisition device 52, and a parallax barrier control device 55. The video control device 10 and the parallax barrier-compatible video display device 25 according to the present embodiment constitute a display device.
 図2は、本実施形態に係る電子ミラー1が取り付けられた車両を示す。図2に示すように、撮像部11は、車両の後部において、車両の進行方向とは逆方向を向いて取り付けられている。しかしながら、撮像部11は、車両の進行方向を向くように取り付けられていてもよい。撮像部11は、具体的には、単眼カメラである。撮像部11は、複数の単眼カメラを含んでおり、車両の正面後方、左側後方、および、右側後方をそれぞれ撮影する少なくとも3つの単眼カメラを含む。撮像部11は、車両の前方を撮影するドライブレコーダ用の単眼カメラをさらに含んでいてもよい。例えば、車両の右ドアには、右側後方に向けられた単眼カメラが配置されている。この単眼カメラは、車両の右側後方を撮影する。撮像部11は、電子ミラー1が起動している間、すなわち車両が走行している間、車両の周辺(図2では、車両の後方)を撮影して、撮影映像のデータを生成する。撮影映像には、車両の周辺に存在する視認対象物が含まれる。撮影映像には、車内の視認対象物が含まれる場合もある。例えば、電子ミラー1がルームミラーの代用として使用される場合、撮像部11は、車内を撮影する単眼カメラを含む。撮像部11は、生成した撮影映像を映像制御装置10に出力する。 FIG. 2 shows a vehicle to which the electronic mirror 1 according to this embodiment is attached. As shown in FIG. 2, the imaging unit 11 is attached to the rear part of the vehicle so as to face the direction opposite to the traveling direction of the vehicle. However, the imaging unit 11 may be attached so as to face the traveling direction of the vehicle. Specifically, the imaging unit 11 is a monocular camera. The imaging unit 11 includes a plurality of monocular cameras, and includes at least three monocular cameras that respectively capture the front rear, left rear, and right rear of the vehicle. The imaging unit 11 may further include a monocular camera for a drive recorder that captures the front of the vehicle. For example, a monocular camera directed to the rear right side is arranged on the right door of the vehicle. This monocular camera captures the right rear side of the vehicle. While the electronic mirror 1 is activated, that is, while the vehicle is running, the imaging unit 11 captures the periphery of the vehicle (in FIG. 2, the rear of the vehicle) and generates captured video data. The captured image includes a visual target that exists around the vehicle. In some cases, the captured image includes a visual target in the vehicle. For example, when the electronic mirror 1 is used as a substitute for a room mirror, the imaging unit 11 includes a monocular camera that captures the interior of the vehicle. The imaging unit 11 outputs the generated captured video to the video control device 10.
 映像制御装置10は、撮像部11から撮影映像を取得して、取得した撮影映像から、映像表示装置25に出力する出力映像を生成する。出力映像は、運転手の右眼に対応する映像(以下では、右眼用映像と呼ぶ)と、運転手の左眼に対応する映像(以下では、左眼用映像と呼ぶ)とを含む。図1に示すように、映像制御装置10は、深度決定部12(深さ設定部)、および出力映像生成部13(生成部)を含む。映像制御装置10の各部については、後で説明する。 The video control device 10 acquires a captured video from the imaging unit 11 and generates an output video to be output to the video display device 25 from the acquired captured video. The output video includes a video corresponding to the right eye of the driver (hereinafter referred to as a video for the right eye) and a video corresponding to the left eye of the driver (hereinafter referred to as a video for the left eye). As illustrated in FIG. 1, the video control apparatus 10 includes a depth determination unit 12 (depth setting unit) and an output video generation unit 13 (generation unit). Each part of the video control apparatus 10 will be described later.
 映像表示装置25は、映像制御装置10が生成した出力映像を表示画面20、20’に表示する。映像表示装置25は、例えば液晶表示装置であってよい。映像表示装置25は、撮像部11が撮影した撮影映像に基づく出力映像を表示画面20に表示する。映像表示装置25は、速度、エンジン回転数、および使用されているギアなどの情報を含むデジタル映像に基づく出力映像を表示画面20’に表示する。 The video display device 25 displays the output video generated by the video control device 10 on the display screens 20 and 20 ′. The video display device 25 may be a liquid crystal display device, for example. The video display device 25 displays on the display screen 20 an output video based on the captured video taken by the imaging unit 11. The video display device 25 displays an output video based on a digital video including information such as speed, engine speed, and used gear on the display screen 20 ′.
 図3は、図2に示す車両の内側の一部、特に運転席(コックピット)のインストルメントパネルを示す。図3に示すように、映像表示装置25の表示画面20、20’は、車両の内部において、運転席の近傍に配置されている。例えば、映像表示装置25の表示画面20は、ダッシュボードに組み込まれていてもよい。あるいは、ダッシュボード内、および/または、車両の天井(ルームミラーの位置)に配置されていてもよい。映像表示装置25は、映像制御装置10から取得した出力映像を、図3に例示する位置に配置された表示画面20、20’に表示する。 FIG. 3 shows a part of the inside of the vehicle shown in FIG. 2, particularly the instrument panel of the driver's seat (cockpit). As shown in FIG. 3, the display screens 20 and 20 ′ of the video display device 25 are arranged in the vicinity of the driver's seat inside the vehicle. For example, the display screen 20 of the video display device 25 may be incorporated in a dashboard. Alternatively, it may be arranged in the dashboard and / or on the ceiling of the vehicle (position of the room mirror). The video display device 25 displays the output video acquired from the video control device 10 on the display screens 20 and 20 ′ arranged at the positions illustrated in FIG. 3.
 視差バリア装置30は、映像表示装置25の各々の表示画面20、20’の手前に配置されている。視差バリア装置30は、視差バリア制御装置55により制御されることによって、映像表示装置25の表示画面20、20’に表示される出力映像の視野角を制限する。あるいは、映像表示装置25が、映像の視野角を制御可能な他影像指向性液晶表示装置である場合、映像表示装置25のバックライトの指向性を制御することで、出力映像の視野角を制御してもよい。この場合、電子ミラー1は、視差バリア装置30および視差バリア制御装置55を備えていなくてもよい。 The parallax barrier device 30 is disposed in front of each display screen 20, 20 ′ of the video display device 25. The parallax barrier device 30 controls the viewing angle of the output video displayed on the display screens 20 and 20 ′ of the video display device 25 by being controlled by the parallax barrier control device 55. Alternatively, when the video display device 25 is another image directional liquid crystal display device capable of controlling the viewing angle of the video, the viewing angle of the output video is controlled by controlling the directivity of the backlight of the video display device 25. May be. In this case, the electronic mirror 1 may not include the parallax barrier device 30 and the parallax barrier control device 55.
 車両情報取得装置40は、車両の状態を示す各種情報を取得する。車両情報取得装置40が取得する情報には、例えば、シートのオフセット量(シートの位置に対応する)、車両の速度、およびギア情報が含まれる。 The vehicle information acquisition device 40 acquires various information indicating the state of the vehicle. The information acquired by the vehicle information acquisition device 40 includes, for example, a seat offset amount (corresponding to the seat position), a vehicle speed, and gear information.
 アイトラッキングカメラ50は、運転手の顔または目の位置をトラッキングする。アイトラッキングカメラは、検知した運転手の顔または目の位置の情報を、運転手情報取得装置52および視差バリア制御装置55に出力する。 The eye tracking camera 50 tracks the driver's face or eye position. The eye tracking camera outputs the detected driver face or eye position information to the driver information acquisition device 52 and the parallax barrier control device 55.
 運転手情報取得装置52(検出部)は、運転手およびその他の乗員の状態を示す各種情報を取得する。運転手情報取得装置52が取得する情報には、例えば、運転手の顔または目の位置の情報が含まれる。運転手情報取得装置52は、アイトラッキングカメラ50が検知した運転手の顔または目の位置の情報を取得する。運転手情報取得装置52は、例えば、車両のシートに取り付けられた生体センサを備えていてもよい。この構成では、運転手情報取得装置52は、生体センサから、シートに着座した乗員の体温、発汗、および動作などの情報を取得する。 The driver information acquisition device 52 (detection unit) acquires various information indicating the states of the driver and other passengers. The information acquired by the driver information acquisition device 52 includes, for example, information on the driver's face or eye position. The driver information acquisition device 52 acquires information on the driver's face or eye position detected by the eye tracking camera 50. The driver information acquisition device 52 may include, for example, a biosensor attached to a vehicle seat. In this configuration, the driver information acquisition device 52 acquires information such as the body temperature, sweating, and movement of the occupant seated on the seat from the biometric sensor.
 視差バリア制御装置55は、アイトラッキングカメラ50が検知した運転手の顔または目の位置の情報に基づいて、視差バリア装置30を制御する。 The parallax barrier control device 55 controls the parallax barrier device 30 based on information on the driver's face or eye position detected by the eye tracking camera 50.
 (映像制御装置10の構成)
 図1に示すように、映像制御装置10は、深度決定部12および出力映像生成部13を含んでいる。
(Configuration of video control apparatus 10)
As shown in FIG. 1, the video control apparatus 10 includes a depth determination unit 12 and an output video generation unit 13.
 深度決定部12は、車両情報取得装置40が取得した車両の状態を示す情報、および、運転手情報取得装置52が取得した運転手の状態を示す情報の少なくとも一方に基づいて、出力映像の深度を決定する。出力映像の深度は、運転手から見た場合における出力映像中の視認対象物の位置の深さ(すなわち、表示画面20、20’から視認対象物までの距離)を表すパラメータである。深度は、正負の値またはゼロになり得る。本実施形態では、深度決定部12は、決定した出力映像の深度に基づいて、右眼用の画素データと左眼用の画素データとの間の画素ずらし量Δpixを決定する。なお、深度と画素ずらし量Δpixについては後述する。 The depth determination unit 12 determines the depth of the output video based on at least one of information indicating the vehicle state acquired by the vehicle information acquisition device 40 and information indicating the driver state acquired by the driver information acquisition device 52. To decide. The depth of the output video is a parameter representing the depth of the position of the visual target object in the output video when viewed from the driver (that is, the distance from the display screen 20, 20 'to the visual target object). The depth can be positive or negative or zero. In the present embodiment, the depth determination unit 12 determines the pixel shift amount Δpix between the right-eye pixel data and the left-eye pixel data based on the determined depth of the output video. The depth and the pixel shift amount Δpix will be described later.
 出力映像生成部13は、深度決定部12が決定した画素ずらし量Δpixに基づいて、1つの撮影映像から、互いに画素ずらし量Δpixだけずれた位置の画素データを、左眼用の画素データおよび右眼用の画素データとして、交互に配列して出力映像を生成する。こうして生成された出力映像は、視差バリア装置30を通して見ることにより、3D映像のような奥行き感(深度が正の場合)または飛び出し感(深度が負の場合)を持つ。また、出力映像を生成するための処理時間は、視差マッピング処理を行わないので、3D映像を生成するための処理時間と比べて短縮される。加えて、出力映像には、3D映像のような視差がないため、運転手は、深度だけ移動した距離の平面映像として捉えることができ、その平面へ焦点を合せることで映像を瞬時に捉え、視認対象物を発見することができる。また、深度によって前方視野との視点移動を小さくできるので、出力映像内の視認対象物を素早く認識できる。したがって、運転手は、出力映像に含まれる情報に基づいて、通常の2D映像を視認する場合よりも、素早く判断および行動することができる。 Based on the pixel shift amount Δpix determined by the depth determination unit 12, the output video generation unit 13 converts pixel data at a position shifted from each other by the pixel shift amount Δpix from one captured video, pixel data for the left eye, and right As the eye pixel data, an output video is generated by alternately arranging the pixel data. The output video generated in this way has a sense of depth (when the depth is positive) or a pop-out feeling (when the depth is negative) like a 3D video when viewed through the parallax barrier device 30. Further, the processing time for generating the output video is shortened compared to the processing time for generating the 3D video because the parallax mapping processing is not performed. In addition, since there is no parallax like 3D video in the output video, the driver can capture it as a plane image of the distance moved by the depth, and by capturing the image instantly by focusing on that plane, A visual target can be found. In addition, since the viewpoint movement with respect to the front visual field can be reduced depending on the depth, the visual recognition object in the output video can be quickly recognized. Therefore, the driver can make a judgment and act more quickly based on information included in the output video than when viewing a normal 2D video.
 出力映像生成部13は、生成した出力映像を映像表示装置25に出力する。なお、出力映像生成部13は、出力映像をどの映像表示装置25が表示するかに応じて、出力映像の画角あるいはトリミング範囲を調節してもよい。例えば、出力映像生成部13は、ルームミラーの代用として使用される映像表示装置25には、ドアミラー(サイドミラー)の代用として使用される別の映像表示装置25に出力する出力映像よりも、画角の大きい出力映像を出力してもよい。ルームミラーは車両後方の広範囲の情報を運転手に伝達する役割を持つからである。また、電子ミラー1が2眼カメラを備えている場合、出力映像生成部13は、深度決定部12が決定した深度(飛び出し量p、奥行き量d)に基づいて、2眼カメラが撮影した視差のある2つの撮影映像から、一般的な3D映像の生成方法で、3D映像を生成してもよい。 The output video generation unit 13 outputs the generated output video to the video display device 25. Note that the output video generation unit 13 may adjust the angle of view or the trimming range of the output video according to which video display device 25 displays the output video. For example, the output video generation unit 13 may display an image on the video display device 25 used as a substitute for a room mirror rather than an output video output to another video display device 25 used as a substitute for a door mirror (side mirror). An output video having a large corner may be output. This is because the rearview mirror has a role of transmitting a wide range of information behind the vehicle to the driver. In addition, when the electronic mirror 1 includes a twin-lens camera, the output video generation unit 13 is a parallax captured by the twin-lens camera based on the depth (the pop-out amount p and the depth amount d) determined by the depth determination unit 12. A 3D image may be generated from two captured images with a general 3D image generation method.
 (映像表示装置25の配置)
 図4は、ダッシュボード内に配置された映像表示装置25(図3参照)の配置の一例を示す。映像表示装置25は、スピードメータ等の機械式メータの代わりに配置される。映像表示装置25は、映像制御装置10から出力された出力映像を表示画面20’に表示する。映像表示装置25は機械式メータよりも装置全体が小さいため、ダッシュボード内のスペースを、従来よりも拡大することができる。本実施形態では、映像表示装置25の表示画面20’は、運転手と正対している。したがって、出力映像の深度が変化すると、出力映像を見る運転手の視点は、車両の進行方向または反対方向に移動する。映像表示装置25に表示する出力映像の深度をゼロよりも深くした場合、運転手が、車両の前方から映像表示装置25の表示画面20’に表示されたメータに視点を移動するとき、視点の移動距離が短くなる。したがって、運転手は視線の移動を楽に感じる。
(Arrangement of video display device 25)
FIG. 4 shows an example of the arrangement of the video display device 25 (see FIG. 3) placed in the dashboard. The video display device 25 is arranged in place of a mechanical meter such as a speedometer, and displays the output video output from the video control device 10 on the display screen 20'. Since the video display device 25 as a whole is smaller than a mechanical meter, the space in the dashboard can be made larger than before. In the present embodiment, the display screen 20' of the video display device 25 directly faces the driver. Therefore, when the depth of the output video changes, the viewpoint of the driver viewing the output video moves in the traveling direction of the vehicle or in the opposite direction. When the depth of the output video displayed on the video display device 25 is set deeper than zero, the distance the driver's viewpoint must move when shifting from the front of the vehicle to the meter displayed on the display screen 20' becomes shorter. Therefore, the driver finds the movement of the line of sight easy.
 (画素ずらし量Δpixの決定方法)
 図5の(a)~(c)を用いて、深度決定部12が、前述した右眼用の画素データと左眼用の画素データとの間の画素ずらし量Δpixを決定する方法について説明する。図5の(a)~(c)は、出力映像を表示している映像表示装置25の表示画面20を示す。出力映像の視野角は、視差バリア装置30および視差バリア制御装置55によって制御されている。図5の(a)~(c)において、運転手の右眼は、出力映像に含まれる右眼用の画素データによって構成される右眼用映像のみを見ている。運転手の左眼は、出力映像に含まれる右眼用の画素データによって構成される左眼用映像のみを見ている。運転手は、図5の(a)~(c)内に示す右眼の視線と左眼の視線とが交差する位置に、視認対象物が存在するかのように認識する。
(Determination method of pixel shift amount Δpix)
A method by which the depth determination unit 12 determines the above-described pixel shift amount Δpix between the right-eye pixel data and the left-eye pixel data will be described with reference to FIGS. 5A to 5C. FIGS. 5A to 5C show the display screen 20 of the video display device 25 displaying the output video. The viewing angle of the output video is controlled by the parallax barrier device 30 and the parallax barrier control device 55. In FIGS. 5A to 5C, the driver's right eye sees only the right-eye video composed of the right-eye pixel data included in the output video, and the driver's left eye sees only the left-eye video composed of the left-eye pixel data included in the output video. The driver perceives the visual recognition object as if it were present at the position where the lines of sight of the right eye and the left eye shown in FIGS. 5A to 5C intersect.
 図5の(a)は、深度が0、すなわち画素ずらし量Δpixが0である場合の出力映像を示す。この場合、運転手から見て、視認対象物は映像表示装置25の表示画面20上にある。つまり、運転手は、映像表示装置25に2D映像が表示されている場合と同じように、出力映像を見る。この場合、視差バリア制御装置55によって、視差バリア装置30は停止または無効化される。 (A) of FIG. 5 shows an output image when the depth is 0, that is, the pixel shift amount Δpix is 0. In this case, the visual target is on the display screen 20 of the video display device 25 as viewed from the driver. That is, the driver views the output video in the same manner as when the 2D video is displayed on the video display device 25. In this case, the parallax barrier control device 55 stops or disables the parallax barrier device 30.
 図5の(b)は、深度が負、すなわち、画素ずらし量Δpixが負であり、視認対象物が表示画面20から飛び出して見える場合の出力映像を示す。この場合、出力映像の深度は、飛び出し量pで表される。図5の(b)に示すように、深度が負である場合、運転手から見て、視認対象物は映像表示装置25の表示画面20よりも手前にある。この場合、運転手は、視認対象物が映像表示装置25の表示画面20から飛び出しているように感じる。例えば、映像表示装置25が電子ミラー1のオープニング画面を表示している場合、深度決定部12は、出力映像の深度をゼロよりも小さくしてよい。 (B) of FIG. 5 shows an output image when the depth is negative, that is, the pixel shift amount Δpix is negative, and the visually recognized object appears to jump out of the display screen 20. In this case, the depth of the output video is represented by the pop-out amount p. As shown in FIG. 5B, when the depth is negative, the visual target is in front of the display screen 20 of the video display device 25 as viewed from the driver. In this case, the driver feels that the visually recognized object is popping out from the display screen 20 of the video display device 25. For example, when the video display device 25 displays the opening screen of the electronic mirror 1, the depth determination unit 12 may make the depth of the output video smaller than zero.
 図5の(c)は、深度が正、すなわち画素ずらし量Δpixが正である場合の出力映像を示す。この場合、出力映像の深度は、奥行き量dで表される。図5の(c)に示すように、深度が正である場合、運転手から見て、視認対象物は映像表示装置25の表示画面20よりも奥にある。この場合、運転手の視点は、映像表示装置25の表示画面20よりも遠くなる。 (C) of FIG. 5 shows an output image when the depth is positive, that is, the pixel shift amount Δpix is positive. In this case, the depth of the output video is represented by a depth amount d. As shown in (c) of FIG. 5, when the depth is positive, the visual target is located behind the display screen 20 of the video display device 25 as viewed from the driver. In this case, the viewpoint of the driver is farther than the display screen 20 of the video display device 25.
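The geometric relationship behind the pop-out amount p and the depth amount d in FIGS. 5B and 5C can be pictured with the usual similar-triangles model for stereoscopic displays. The sketch below is only an illustration of that model; the eye separation, viewing distance, and pixel pitch are hypothetical example values, not figures taken from this document.

```python
# Illustrative similar-triangles model for Fig. 5 (not the patent's implementation).
# eye_sep_mm, view_dist_mm and pixel_pitch_mm are assumed example values.

def perceived_offset_mm(delta_pix: int,
                        pixel_pitch_mm: float = 0.1,
                        eye_sep_mm: float = 65.0,
                        view_dist_mm: float = 700.0) -> float:
    """Apparent offset of the visual object from the display screen.

    Positive result: the object appears behind the screen (depth amount d).
    Negative result: the object appears in front of the screen (pop-out amount p).
    """
    s = delta_pix * pixel_pitch_mm            # on-screen disparity in mm (signed)
    if s >= eye_sep_mm:
        raise ValueError("disparity too large: the lines of sight no longer converge")
    # Similar triangles between the two eyes and the two on-screen pixel positions.
    return view_dist_mm * s / (eye_sep_mm - s)

if __name__ == "__main__":
    for dpix in (-50, 0, 50):
        print(f"Δpix = {dpix:+d} -> offset = {perceived_offset_mm(dpix):+.1f} mm")
```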
 深度決定部12は、車両情報取得装置40から取得する車両の状態の情報に基づいて、出力映像の深度を変化させてもよい。例えば、車両が高速で移動している場合、通常、運転手の視点は車両から遠くなる。そこで、深度決定部12は、車両の速度が速くなるほど、または、使用されているギアが高くなるほど、出力映像の深度を深くしてもよい。また、Rギアが使用されている場合、深度決定部12は、他のギアが使用されている場合よりも出力深度の深度を浅く、すなわち画素ずらし量Δpixを小さくしてもよい。これにより、運転手は、車両の近傍を視認し易くなる。また、深度決定部12は、シートの位置が映像表示装置25の表示画面20に近いほど、出力映像の深度を深くしてもよい。これにより、シートの位置によらず、運転手から出力映像内の視認対象物までの距離を適切に保つことができる。 The depth determination unit 12 may change the depth of the output video based on the vehicle state information acquired from the vehicle information acquisition device 40. For example, when the vehicle is moving at high speed, the driver's viewpoint is usually far from the vehicle. Therefore, the depth determination unit 12 may increase the depth of the output image as the speed of the vehicle increases or as the used gear increases. Further, when the R gear is used, the depth determination unit 12 may make the output depth shallower, that is, reduce the pixel shift amount Δpix than when other gears are used. This makes it easier for the driver to visually recognize the vicinity of the vehicle. Further, the depth determination unit 12 may increase the depth of the output video as the position of the sheet is closer to the display screen 20 of the video display device 25. Thereby, irrespective of the position of the seat, the distance from the driver to the visual recognition object in the output video can be appropriately maintained.
 また、深度決定部12は、運転手情報取得装置52から取得する運転手の情報に基づいて、出力映像の深度を変化させてもよい。例えば、深度決定部12は、運転手情報取得装置52がアイトラッキングに失敗した場合、出力映像の深度をデフォルト値すなわちゼロにしてもよい(フェールセーフ)。また、運転手が、手動で出力映像の深度をゼロに変更することが可能であってもよい。これにより、運転手およびその他の乗員から出力映像が二重に見えることを防止することができる。 Also, the depth determination unit 12 may change the depth of the output video based on the driver information acquired from the driver information acquisition device 52. For example, when the driver information acquisition device 52 fails in eye tracking, the depth determination unit 12 may set the depth of the output video to a default value, that is, zero (fail safe). In addition, the driver may be able to manually change the depth of the output video to zero. Thereby, it is possible to prevent the output image from being seen twice by the driver and other occupants.
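Putting the policies of the two preceding paragraphs together (deeper depth at higher speed or higher gear, shallow depth in reverse, depth tied to seat position, and a fail-safe of zero when eye tracking fails), a minimal rule-based sketch might look as follows. The thresholds, coefficients, and field names are hypothetical and only illustrate the kind of mapping described; they are not values from this document.

```python
# Hypothetical depth-selection policy sketch; all numbers and field names are examples.
from dataclasses import dataclass

@dataclass
class VehicleState:
    speed_kmh: float
    gear: str              # e.g. "R", "D", "1", "2", ...
    seat_offset_mm: float  # how far the seat is moved toward the display screen

@dataclass
class DriverState:
    eye_tracking_ok: bool

def decide_depth_mm(vehicle: VehicleState, driver: DriverState) -> float:
    """Return a depth in mm (positive: behind the screen, zero: flat 2D display)."""
    if not driver.eye_tracking_ok:
        return 0.0                              # fail-safe: fall back to a flat image
    if vehicle.gear == "R":
        return 100.0                            # keep the depth shallow while reversing
    depth = 200.0 + 4.0 * vehicle.speed_kmh     # deeper depth at higher speed
    depth += 0.5 * vehicle.seat_offset_mm       # deeper when the seat is closer to the screen
    return depth
```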
 (深度決定部12の変形例)
 本実施形態では、深度決定部12が決定する出力映像の深度は、出力映像の縦方向(上下方向)および横方向(左右方向)には一定である。しかしながら、深度決定部12が出力映像の深度を決定する方法は、本実施形態で説明した方法に限定されない。
(Modification of depth determination unit 12)
In the present embodiment, the depth of the output video determined by the depth determination unit 12 is constant in the vertical direction (vertical direction) and the horizontal direction (horizontal direction) of the output video. However, the method by which the depth determination unit 12 determines the depth of the output video is not limited to the method described in this embodiment.
 一変形例では、深度決定部12は、出力映像の縦方向(上下方向)または横方向(左右方向)に、出力映像の深度を変化させてもよい(実施形態2参照)。 In a modification, the depth determination unit 12 may change the depth of the output video in the vertical direction (vertical direction) or the horizontal direction (horizontal direction) of the output video (see Embodiment 2).
 なお、深度決定部12が出力映像の深度をゼロにする場合、視差バリア制御装置55が、視差バリア装置30の動作を停止すなわち無効化させる。 In addition, when the depth determination unit 12 sets the depth of the output video to zero, the parallax barrier control device 55 stops, that is, invalidates the operation of the parallax barrier device 30.
 他の変形例では、深度決定部12は、センサ等で検知した周囲の環境に応じて、出力映像の深度を変化させてもよい。例えば、深度決定部12は、環境の明るさ(例えば、逆光、薄暮)または時間帯(例えば、昼、夜)に応じて、出力映像の深度を決定してもよい。 In another modification, the depth determination unit 12 may change the depth of the output video according to the surrounding environment detected by a sensor or the like. For example, the depth determination unit 12 may determine the depth of the output video according to the brightness of the environment (for example, backlight or twilight) or the time zone (for example, daytime or night).
 他の変形例では、深度決定部12は、出力映像に含まれる情報に応じて、出力映像の深度を決定してもよい。例えば、深度決定部12は、出力映像に空、道路、および他の車両のいずれが多く含まれるかを解析し、解析結果に応じて、出力映像の深度を決定してもよい。特に、出力映像が道路または他の車両を多く含む場合、深度決定部12は、車両から測った道路または他の車両までの距離に応じて、出力映像の深度を決定してもよい。 In another modification, the depth determination unit 12 may determine the depth of the output video according to information included in the output video. For example, the depth determination unit 12 may analyze which of the sky, road, and other vehicles is included in the output video, and may determine the depth of the output video according to the analysis result. In particular, when the output video includes many roads or other vehicles, the depth determination unit 12 may determine the depth of the output video according to the distance from the vehicle to the road or other vehicles.
 他の変形例では、深度決定部12は、映像表示装置25の表示画面20、20’の配置に応じて、出力映像の深度を決定してもよい。例えば、運転手の顔または顔の一部(目、鼻など)の近くに配置された映像表示装置25が出力映像を表示する場合、深度決定部12は、出力映像の深度を深くしてもよい。 In another modification, the depth determination unit 12 may determine the depth of the output video according to the arrangement of the display screens 20 and 20 ′ of the video display device 25. For example, when the video display device 25 arranged near the driver's face or a part of the face (eyes, nose, etc.) displays the output video, the depth determination unit 12 may increase the depth of the output video. Good.
 他の変形例では、深度決定部12は、運転手の顔または顔の一部(目、鼻など)の位置に応じて、出力映像の深度を決定してもよい。例えば、運転手の顔が前後、左右、または上下に移動(例えば、頭振り、伸び上がり)した場合、深度決定部12は、移動後の運転手の位置に応じて、出力映像の深度を決定してもよい。 In another modification, the depth determination unit 12 may determine the depth of the output video according to the position of the driver's face or a part of the face (eyes, nose, etc.). For example, when the driver's face moves back and forth, left and right, or up and down (for example, head swing and stretch), the depth determination unit 12 determines the depth of the output video according to the position of the driver after the movement. May be.
 他の変形例では、深度決定部12は、運転手の動作に応じて、出力映像の深度を決定してもよい。例えば、運転手が車両を後退させるための動作(例えば、助手席に腕を掛ける。運転席の窓またはドアを開ける)をした場合、深度決定部12は、運転手の動作の種別に応じて、出力映像の深度を決定してもよい。 In another modification, the depth determination unit 12 may determine the depth of the output video according to the driver's action. For example, when the driver performs an operation for moving the vehicle backward (for example, putting an arm on the passenger seat, opening a window or door of the driver seat), the depth determining unit 12 determines the type of the driver's operation. The depth of the output video may be determined.
 (出力映像の生成方法)
 図6の(a)~(d)、図7、および図8を用いて、出力映像生成部13が出力映像を生成する方法を説明する。
(Output video generation method)
A method by which the output video generation unit 13 generates the output video will be described with reference to FIGS. 6A to 6D, FIG. 7, and FIG. 8.
 図6の(a)は、撮像部11が撮影した撮影映像を示す。図6の(a)に示す撮影映像には、視認対象物が含まれている。図6の(a)に示す撮影映像は、例えば、(1920+α)×1080の画素サイズを有する。ここで、αは、画素ずらし量Δpixに等しい。 (A) of FIG. 6 shows the imaging | photography image | video which the imaging part 11 image | photographed. The captured image shown in FIG. 6A includes a visual recognition object. The captured video shown in FIG. 6A has a pixel size of (1920 + α) × 1080, for example. Here, α is equal to the pixel shift amount Δpix.
 図6の(b)は、出力映像のトリミング範囲に含まれる右眼用映像の範囲を示す。図6の(c)は、出力映像のトリミング範囲に含まれる左眼用映像の範囲を示す。 FIG. 6B shows the range of the right-eye video included in the output video trimming range. FIG. 6C shows the range of the left-eye video included in the output video trimming range.
 図6の(c)に示すように、左眼用映像の範囲は、右眼用映像の範囲から、深度決定部12が決定した画素ずらし量Δpixだけ左へずれている。出力映像生成部13は、図6の(c)に示す左眼用映像の範囲の左端から、図6の(b)に示す右眼用映像の範囲の右端までの、(1920+Δpix)×1080の領域の画素データをトリミングする。そして、出力映像生成部13は、トリミングした画素データから、互いに画素ずらし量Δpixだけずれた位置の画素データを取り出し、左眼用の画素データおよび右眼用の画素データとして交互に配列することにより、出力映像を生成する。 As shown in FIG. 6C, the range of the left-eye video is shifted to the left by the pixel shift amount Δpix determined by the depth determination unit 12 from the range of the right-eye video. The output video generation unit 13 is (1920 + Δpix) × 1080 from the left end of the left eye video range shown in FIG. 6C to the right end of the right eye video range shown in FIG. Trim the pixel data of the area. Then, the output video generation unit 13 extracts pixel data at positions shifted by the pixel shift amount Δpix from the trimmed pixel data, and alternately arranges them as pixel data for the left eye and pixel data for the right eye. , Generate output video.
 図6の(d)は、画素データが交互に配列さられた出力映像を示す。図6の(d)では、右眼用映像に含まれる視認対象物の位置と、左眼用映像に含まれる同じ視認対象物の位置とが、Δpixだけ互いにずれている。すなわち、出力映像のサイズは、右眼用映像の範囲および左眼用映像の範囲と同じく、1920×1080である。 (D) in FIG. 6 shows an output video in which pixel data are alternately arranged. In FIG. 6D, the position of the visual target object included in the right-eye image and the position of the same visual object included in the left-eye image are shifted from each other by Δpix. That is, the size of the output video is 1920 × 1080, similar to the range of the right-eye video and the range of the left-eye video.
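The trimming step of FIGS. 6A to 6C amounts to cutting two 1920 x 1080 windows, offset horizontally by Δpix, out of a capture that is Δpix pixels wider than the output. The numpy sketch below assumes a non-negative Δpix and one fixed orientation of the offset (the direction would flip with the sign of the depth); it is an illustration, not the patent's implementation.

```python
import numpy as np

def crop_eye_views(frame: np.ndarray, delta_pix: int,
                   out_w: int = 1920, out_h: int = 1080):
    """Cut the left-eye and right-eye ranges of Fig. 6 out of a wider captured frame.

    `frame` is expected to be at least (out_h, out_w + delta_pix); the two returned
    windows have identical size and are offset from each other by delta_pix columns.
    Assumes delta_pix >= 0 for simplicity.
    """
    assert delta_pix >= 0
    assert frame.shape[0] >= out_h and frame.shape[1] >= out_w + delta_pix
    left_eye = frame[:out_h, 0:out_w]                        # one end of the trim region
    right_eye = frame[:out_h, delta_pix:delta_pix + out_w]   # shifted by delta_pix columns
    return left_eye, right_eye
```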
 図7~図8を用いて、出力映像生成部13が出力映像を生成する方法をより詳細に説明する。図7は、出力映像生成部13が取得する撮影映像の画素に番号を振って示した図である。図8は、図7に示す撮影映像から生成される出力映像を示す図であり、出力映像の画素に対し、対応する右眼用映像の画素の番号、および対応する左眼用映像の画素の番号を振って示した図である。 A method for generating the output video by the output video generation unit 13 will be described in more detail with reference to FIGS. FIG. 7 is a diagram showing numbers assigned to pixels of a captured video acquired by the output video generation unit 13. FIG. 8 is a diagram showing an output video generated from the captured video shown in FIG. 7. The output video pixel is associated with the corresponding right-eye video pixel number and the corresponding left-eye video pixel. It is the figure which numbered and showed.
 図7では、右眼用映像に対応する画素に対し、1番から横方向に順番に、番号を振り、左眼用映像に対応する画素に対し、51番から横方向に順番に、番号を振っている。すなわち、図7に示す例では、右眼用映像と左眼用映像との間の画素ずらし量Δpixは50画素である。図7には、右眼用映像および左眼用映像の一番上の行の画素のみを図示しているが、撮影映像の他の行の画素にも、一番上の行の画素と同じ規則で、番号が付与される。 In FIG. 7, the pixels corresponding to the right-eye video are numbered sequentially in the horizontal direction starting from 1, and the pixels corresponding to the left-eye video are numbered sequentially in the horizontal direction starting from 51. That is, in the example shown in FIG. 7, the pixel shift amount Δpix between the right-eye video and the left-eye video is 50 pixels. Although FIG. 7 shows only the pixels in the top row of the right-eye video and the left-eye video, the pixels in the other rows of the captured video are numbered according to the same rule as the pixels in the top row.
 図8に示すように、出力映像生成部13は、右眼用映像の奇数番目の画素(1、3、・・・)と、左眼用映像の偶数番目の画素(2、4、・・・)とを、横方向に交互に配置することによって、出力映像を生成する。図8には、出力映像の一番上の行の画素のみを図示しているが、出力映像の他の行の画素も、一番上の行の画素と同じ規則で配置される。すなわち、本実施形態では、出力映像の縦方向および横方向において、トリミング範囲の画素ずらし量Δpixは変わらない。映像表示装置25が表示する出力映像は、視差バリア装置30によって視野角を制御される。そのため、運転手の右眼は、出力映像に含まれる右眼用映像(偶数列の画素データによって構成される映像)を見る。一方、運転手の左眼は、出力映像に含まれる左眼用映像(奇数列の画素データによって構成される映像)を見る。右眼用映像と左眼用映像とは、運転手の脳内で合成される。したがって、運転手は、ただ一つの出力映像を見ているように感じる。なお、一つの画素のサイズはとても小さいため、運転手は、画素間の空白をほとんど認識しない。 As shown in FIG. 8, the output video generation unit 13 includes the odd-numbered pixels (1, 3,...) Of the right-eye video and the even-numbered pixels (2, 4,...) Of the left-eye video. .) Are alternately arranged in the horizontal direction to generate an output video. Although FIG. 8 illustrates only the pixels in the top row of the output video, the pixels in the other rows of the output video are also arranged in the same rule as the pixels in the top row. That is, in this embodiment, the pixel shift amount Δpix in the trimming range does not change in the vertical direction and the horizontal direction of the output video. The viewing angle of the output video displayed by the video display device 25 is controlled by the parallax barrier device 30. Therefore, the right eye of the driver sees the right-eye video (video composed of even-numbered pixel data) included in the output video. On the other hand, the driver's left eye sees a left-eye image (image composed of odd-numbered pixel data) included in the output image. The video for the right eye and the video for the left eye are synthesized in the driver's brain. Therefore, the driver feels as if he is watching only one output image. In addition, since the size of one pixel is very small, the driver hardly recognizes the space between pixels.
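The column interleaving of FIGS. 7 and 8 can then be expressed in a few lines of numpy: odd output columns take one eye's pixels and even output columns take the other's, and the parallax barrier routes each set of columns to the matching eye. This is an illustrative sketch built on the cropping helper above, not the patent's implementation.

```python
import numpy as np

def interleave_columns(right_eye: np.ndarray, left_eye: np.ndarray) -> np.ndarray:
    """Build the output video of Fig. 8 by alternating columns of the two eye views.

    Odd output columns (1st, 3rd, ...) keep the right-eye pixels at those columns and
    even output columns (2nd, 4th, ...) keep the left-eye pixels, following the
    numbering shown in Figs. 7 and 8.
    """
    assert right_eye.shape == left_eye.shape
    out = np.empty_like(right_eye)
    out[:, 0::2] = right_eye[:, 0::2]   # 1st, 3rd, 5th, ... columns (0-based even indices)
    out[:, 1::2] = left_eye[:, 1::2]    # 2nd, 4th, 6th, ... columns
    return out
```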
 (Operation of the video control device 10)
 The operation of the video control device 10 according to the present embodiment will be described with reference to FIG. 9. FIG. 9 is a flowchart showing the operation of the video control device 10. As shown in FIG. 9, the video control device 10 acquires the captured video shot by the imaging unit 11 (S2). The depth determination unit 12 also acquires information indicating the state of the vehicle from the vehicle information acquisition device 40 and information indicating the state of the driver from the driver information acquisition device 52 (S4).
 The depth determination unit 12 determines the depth of the output video based on at least one of the information indicating the state of the vehicle and the information indicating the state of the driver. It then determines the pixel shift amount Δpix corresponding to the determined depth of the output video by referring to a table described later (see FIG. 10) (S6, setting step). The depth determination unit 12 notifies the output video generation unit 13 of the determined pixel shift amount Δpix.
 The output video generation unit 13 generates the output video by the method described above, based on the pixel shift amount Δpix determined by the depth determination unit 12 (S8, generation step). The output video generation unit 13 outputs the generated output video to the video display device 25. The video display device 25 displays the output video received from the output video generation unit 13 on the display screen 20 (S10). The operation of the video control device 10 then ends.
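 As a reading aid, one pass through steps S2 to S10 can be sketched as follows. This is not part of the patent disclosure: the helper objects and the speed-based depth rule are illustrative assumptions, the driver-state inputs are omitted for brevity, and the interleave_views function is the sketch given above; the patent leaves the concrete depth policy to the depth determination unit 12.

```python
def run_frame(imaging, vehicle_info, depth_table, display, width=1920):
    """One pass of the S2 -> S4 -> S6 -> S8 -> S10 flow (all helper names are illustrative)."""
    frame = imaging.capture()                            # S2: acquire the captured video
    speed_kmh = vehicle_info.speed()                     # S4: vehicle state
    # Hypothetical policy: render the scene deeper behind the screen at higher speed.
    depth_mm = 200 if speed_kmh > 60 else 100
    delta_pix = depth_table[depth_mm]                    # S6: table lookup (see FIG. 10)
    output = interleave_views(frame, delta_pix, width)   # S8: generation step
    display.show(output)                                 # S10: display on the screen 20
    return output
```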
 (Correspondence between depth and pixel shift amount)
 FIG. 10 is a table showing the correspondence between the depth set for the output video (the pop-out amount p and the depth amount d) and the pixel shift amount Δpix. In FIG. 10, Δx (unit: mm) is the shift width corresponding to each pixel shift amount Δpix. In FIG. 10, the pop-out amount p (see (b) of FIG. 5) expresses how far, to the driver, the visual target in the output video appears to protrude from the display screen 20 toward the near side, that is, toward the driver. The depth amount d (see (c) of FIG. 5) expresses how far, to the driver, the visual target in the output video appears to be offset from the display screen 20 toward the far side, that is, in the direction of the driver's line of sight.
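 A lookup of this kind can be represented directly as a table in software. The values below are illustrative placeholders only, not the contents of FIG. 10; the real figures depend on the panel pitch, the viewing distance, and the assumed interocular distance.

```python
# Illustrative stand-in for the FIG. 10 table (depth in mm -> pixel shift).
DEPTH_TO_DELTA_PIX = {
    0: 0,     # visual target appears on the display screen
    50: 10,
    100: 25,
    200: 50,  # matches the 50-pixel shift used in the FIG. 7 example
}

def lookup_delta_pix(depth_mm: int) -> int:
    """Return the pixel shift for the nearest tabulated depth (no interpolation)."""
    nearest = min(DEPTH_TO_DELTA_PIX, key=lambda k: abs(k - depth_mm))
    return DEPTH_TO_DELTA_PIX[nearest]
```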
 [Embodiment 2]
 Another embodiment of the present invention is described below with reference to FIG. 11. For convenience of explanation, members having the same functions as those described in the above embodiment are given the same reference signs, and their description is omitted.
 Depending on the arrangement of the video display device, the display screen of the video display device may not directly face the driver. In that case, the driver views the display screen at an angle, which may make it difficult to accurately grasp the sense of depth of the output video. In the present embodiment, the depth of the output video is varied according to the tilt angle of the display screen of the video display device. This makes it easier to accurately grasp the sense of depth of the output video.
 FIG. 11 shows the arrangement of the display screen 20' of the video display device 25 according to the present embodiment. As a comparison with the arrangement of the display screen 20 of the video display device 25 according to Embodiment 1 shows, the display screen 20' of the video display device 25 according to the present embodiment is tilted with respect to the ideal plane that directly faces the driver. The angle formed between the display screen 20' of the video display device 25 and the ideal plane is θ. In FIG. 11, the display screen 20' is tilted so that its upper end is closest to the driver and it recedes from the driver toward its lower end; however, the display screen 20' may be tilted in the opposite direction, that is, so that its upper end is farthest from the driver and it approaches the driver toward its lower end.
 With the arrangement of the video display device 25 in the present embodiment, a space (the triangle indicated by the broken line in FIG. 11) is created between the dashboard and the display screen 20' of the video display device 25. Like the space inside the dashboard, this space can be used to house electronic equipment, a HUD (Head Up Display), and the like. In addition, the angle of incidence of sunlight on the display screen 20' of the video display device 25 becomes smaller and the reflection of sunlight on the display screen 20' is reduced, so the hood that shields the screen from sunlight can be made shorter than before.
 In the present embodiment, the depth determination unit 12 of the video control device 10 refers to the table shown in FIG. 10 and gradually varies the pixel shift amount Δpix (see (d) of FIG. 6) in the vertical direction (the direction indicated by the arrow for the height h). Specifically, the depth determination unit 12 determines, by referring to the table, the pixel shift amount Δpix corresponding to the depth d determined according to the position a at the height h. Here, the ideal depth d corresponding to the position a is the distance from the display screen 20' to the ideal plane on which the driver is intended to perceive the visual target.
 In FIG. 11, the ideal depth d is calculated by the formula d = L × sin θ, where L is the length from the lower end of the display screen 20' of the video display device 25 to the position a. Thus, in the present embodiment, the pixel shift amount Δpix is determined according to the ideal depth d and the position a. As a result, the driver perceives the visual target in the output video as lying on the ideal plane.
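 A per-row version of the lookup illustrates how the shift can be varied along the height of a tilted screen. This is a sketch under the stated geometry only; the row convention and the lookup callable are assumptions, not details taken from the patent.

```python
import math

def per_row_delta_pix(row: int, rows: int, screen_height_mm: float,
                      theta_deg: float, lookup) -> int:
    """Pixel shift for one row of a tilted screen (Embodiment 2 geometry).

    Rows are counted from the lower end of the screen, so the distance L grows
    with the row index, and the ideal depth for the row is d = L * sin(theta).
    `lookup` maps a depth in millimetres to a pixel shift, for example the
    illustrative DEPTH_TO_DELTA_PIX table sketched earlier.
    """
    length_mm = (row / rows) * screen_height_mm               # L for this row
    depth_mm = length_mm * math.sin(math.radians(theta_deg))  # d = L * sin(theta)
    return lookup(round(depth_mm))
```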
 [Embodiment 3]
 Another embodiment of the present invention is described below with reference to FIGS. 12 to 14. For convenience of explanation, members having the same functions as those described in the above embodiments are given the same reference signs, and their description is omitted.
 In this embodiment, the electronic mirror changes the angle-of-view range of the captured video displayed as the output video according to the position of the driver's face or eyes.
 FIG. 12 is a block diagram showing the configuration of the electronic mirror 2 according to the present embodiment. As shown in FIG. 12, the video control device 210 of the electronic mirror 2 according to the present embodiment includes an angle-of-view range determination unit 14 (trimming position determination unit) in addition to the configuration of the video control device 10 described in Embodiment 1. The video control device 210 and the parallax-barrier-compatible video display device 25 according to the present embodiment constitute a display device.
 The angle-of-view range determination unit 14 of the electronic mirror 2 acquires information on the position of the driver's face or eyes from the driver information acquisition device 52. The angle-of-view range determination unit 14 then determines the angle-of-view range of the output video according to the position of the driver's face or eyes. More specifically, the angle-of-view range determination unit 14 determines the range of pixel data to be extracted from the captured video in order to generate the output video (the range of the trimmed image).
 The output video generation unit 13 according to the present embodiment generates the output video by trimming a partial region of the captured video according to the angle-of-view range determined by the angle-of-view range determination unit 14 (see (a) to (d) of FIG. 6).
 (Operation of the video control device 210)
 The operation of the video control device 210 according to the present embodiment will be described with reference to FIG. 13. FIG. 13 is a flowchart showing the operation of the video control device 210.
 As shown in FIG. 13, the video control device 210 acquires the captured video shot by the imaging unit 11 (S2).
 Next, the angle-of-view range determination unit 14 acquires information indicating the position of the driver's face or eyes detected by the driver information acquisition device 52 (S32, detection step). The angle-of-view range determination unit 14 then determines the trimming range of the output video based on the position of the driver's face or eyes (S34, trimming position determination step), and outputs information on the determined trimming range to the output video generation unit 13. If the driver information acquisition device 52 fails in the eye-tracking process, the angle-of-view range determination unit 14 may return the angle of view of the output video to a preset default value (fail-safe).
 The depth determination unit 12 acquires information indicating the state of the vehicle and information indicating the state of the driver (S4). Specific examples of the information acquired by the depth determination unit 12 were described in Embodiment 1. Next, the depth determination unit 12 determines the pixel shift amount Δpix as described in Embodiment 1 (S6).
 In the present embodiment, the output video generation unit 13 generates the output video based on the trimming range information determined by the angle-of-view range determination unit 14 and the pixel shift amount Δpix determined by the depth determination unit 12 (S8'). The output video generation unit 13 outputs the generated output video to the video display device 25. The video display device 25 displays the output video received from the output video generation unit 13 on the display screen 20 or 20' arranged near the driver's seat (see FIG. 3) (S10). The operation of the video control device 210 then ends.
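 One pass through this flow, including the fail-safe branch, can be sketched as below. This is not part of the patent disclosure: the helper names are illustrative, the depth is passed in directly for brevity, interleave_views and lookup_delta_pix are the sketches given earlier, and trimming_origin is a hypothetical mapping sketched after the discussion of FIG. 14 below.

```python
def run_frame_with_trimming(imaging, driver_info, depth_mm, display,
                            default_origin=(0, 0), width=1920, height=1080):
    """One pass of the S2 -> S32/S34 -> S6 -> S8' -> S10 flow (helper names are illustrative)."""
    frame = imaging.capture()                                   # S2: captured video
    delta_pix = lookup_delta_pix(depth_mm)                      # S6: table lookup as in Embodiment 1
    face = driver_info.face_position()                          # S32: None models a tracking failure
    if face is None:
        y, x = default_origin                                   # fail-safe: preset default angle of view
    else:
        y, x = trimming_origin(face, frame.shape,               # S34: trimming position determination
                               width + delta_pix, height)
    crop = frame[y:y + height, x:x + width + delta_pix]         # trimmed region incl. the shift margin
    output = interleave_views(crop, delta_pix, width)           # S8': generation step
    display.show(output)                                        # S10: display near the driver's seat
    return output
```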
 (Method of determining the angle-of-view range of the output video)
 The processing performed by the angle-of-view range determination unit 14 to determine the angle-of-view range of the output video will be described in detail with reference to (a) to (c) of FIG. 14. (a) to (c) of FIG. 14 are diagrams showing the angle-of-view range of the output video i3 according to the position of the driver's face or eyes.
 (a) of FIG. 14 shows the angle-of-view range of the output video i3 when the driver's face or eyes are at a predetermined reference position. The reference position may be set, for example, to the position of the driver's face or eyes when the driver is seated on the vehicle seat in an appropriate posture.
 (b) of FIG. 14 shows the angle-of-view range of the output video i3 when the driver's face or eyes have moved leftward from the reference position shown in (a) of FIG. 14. In this case, the angle-of-view range of the output video i3 within the captured video moves rightward from the position shown in (a) of FIG. 14.
 (c) of FIG. 14 shows the angle-of-view range of the output video i3 when the driver's face or eyes have moved upward from the reference position shown in (a) of FIG. 14. In this case, the angle-of-view range of the output video i3 within the captured video moves downward from the position shown in (a) of FIG. 14.
 According to the configuration of the present embodiment, when the driver's face or eyes move, the angle-of-view range (trimming position) of the output video displayed by the video display device 25 changes. The driver can therefore naturally change the viewed region by moving his or her face or eyes, just as when looking at a mirror. Furthermore, according to the configuration of the present embodiment, the angle of view of the output video can be changed according to the direction the driver wants to see, without changing the shooting direction of the imaging unit 11, that is, the monocular camera. In the present embodiment, the processing of changing the angle-of-view range of the output video according to the position of the driver's face or eyes has been described for the case of displaying an output video obtained by shifting pixels of the video shot by the imaging unit 11, that is, the monocular camera; however, the output video may instead be the 2D video or 3D video described later in Embodiment 4.
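 The mirror-like mapping from face position to trimming position described for FIG. 14 could look like the following. Everything here is an assumption for illustration: the sign conventions (leftward motion negative, upward motion positive), the reference position, and the gain are tuning choices, not values disclosed in the patent.

```python
def trimming_origin(face_xy_mm, frame_shape, width=1920, height=1080,
                    reference_xy_mm=(0.0, 0.0), gain_px_per_mm=2.0):
    """Map the driver's face position to a crop origin (y, x) in the captured frame.

    The crop moves opposite to a horizontal face movement and moves down when the
    face moves up, mimicking what a real mirror would show.
    """
    frame_h, frame_w = frame_shape[:2]
    dx_mm = face_xy_mm[0] - reference_xy_mm[0]   # leftward face motion -> negative dx
    dy_mm = face_xy_mm[1] - reference_xy_mm[1]   # upward face motion -> positive dy
    x = (frame_w - width) // 2 - int(gain_px_per_mm * dx_mm)   # face left -> crop right
    y = (frame_h - height) // 2 + int(gain_px_per_mm * dy_mm)  # face up -> crop down
    x = max(0, min(frame_w - width, x))          # clamp the crop to the captured frame
    y = max(0, min(frame_h - height, y))
    return y, x
```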
 [Embodiment 4]
 Another embodiment of the present invention is described below with reference to FIG. 15. For convenience of explanation, members having the same functions as those described in the above embodiments are given the same reference signs, and their description is omitted.
 In the present embodiment, the type of video displayed by the video display device is switched according to at least one of the state of the vehicle and the state of the driver.
 FIG. 15 is a block diagram showing the configuration of the electronic mirror 3 according to the present embodiment. As shown in FIG. 15, the video control device 310 of the electronic mirror 3 further includes a display switching unit 15 compared with the configuration of the video control device 10 of the electronic mirror 1 according to Embodiment 1. Although not shown, the electronic mirror 3 further includes a 3D video generation device that generates ordinary 3D video. The video control device 310 may further include the angle-of-view range determination unit 14 described in Embodiment 3. The video control device 310 and the parallax-barrier-compatible video display device 25 according to the present embodiment constitute a display device.
 As shown in FIG. 15, the output video generation unit 13 generates the output video by the method described in Embodiment 1 and outputs the generated output video to the display switching unit 15.
 The 3D video generation device generates 3D video using two pieces of video data having parallax with each other. Specifically, when the electronic mirror 3 includes a twin-lens camera, the 3D video generation device can generate 3D video from the two parallax-bearing captured videos shot by the twin-lens camera. Since methods of generating 3D video from two captured videos having parallax are well known, their description is omitted here. The 3D video generation device outputs the generated 3D video to the display switching unit 15. The 3D video generation device may also generate 3D video as the opening screen displayed when the electronic mirror 3 starts up.
 The display switching unit 15 acquires information indicating the state of the vehicle from the vehicle information acquisition device 40 and information indicating the state of the driver from the driver information acquisition device 52. The display switching unit 15 then switches the video output to the video display device 25 among the output video (first output video), the 2D video (second output video), and the 3D video (third output video), according to at least one of the state of the vehicle and the state of the driver.
 As noted above, the display switching unit 15 also acquires 2D video data. The 2D video data may be, for example, video content data recorded on a recording device (not shown). Alternatively, the 2D video may be generated from the captured video produced by the imaging unit 11. In this case, no depth is set for the 2D video, unlike for the output video.
 There is no particular limitation on which video the display switching unit 15 outputs to the video display device 25 in which situation. As a specific example, the display switching unit 15 may switch the video output to the video display device 25 according to the cases described below.
 First, while the vehicle is traveling, the display switching unit 15 outputs the output video generated by the output video generation unit 13 to the video display device 25. In this case, the video display device 25 can quickly display video that has a sense of depth or a sense of popping out.
 Second, while the vehicle is stopped, the display switching unit 15 converts the captured video (2D video) shot by the imaging unit 11 into the output format of the video display device 25 and outputs it to the video display device 25. The display switching unit 15 also outputs 2D video other than the captured video, such as television programs, navigation screens, and other content, to the video display device 25. In these cases, the parallax barrier control device 55 deactivates the parallax barrier device 30 of the electronic mirror 3.
 Third, when the video display device 25 displays the opening screen of the electronic mirror 3, the display switching unit 15 outputs the 3D video generated by the 3D video generation device to the video display device 25.
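 The three cases above amount to a small piece of selection logic; a sketch is given below. The enum names and the speed threshold are illustrative assumptions, not terminology from the patent.

```python
from enum import Enum, auto

class Source(Enum):
    FIRST_OUTPUT = auto()   # depth-rendered output video from the generation unit 13
    SECOND_OUTPUT = auto()  # plain 2D video, shown with the parallax barrier stopped
    THIRD_OUTPUT = auto()   # true 3D video, e.g. the opening screen

def select_source(showing_opening: bool, vehicle_speed_kmh: float) -> Source:
    """Case analysis matching the three examples above."""
    if showing_opening:
        return Source.THIRD_OUTPUT    # third case: opening screen -> 3D video
    if vehicle_speed_kmh > 0.0:
        return Source.FIRST_OUTPUT    # first case: vehicle traveling -> depth-rendered output video
    return Source.SECOND_OUTPUT       # second case: vehicle stopped -> 2D video
```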
 [Embodiment 5]
 Another embodiment of the present invention is described below with reference to FIG. 16. For convenience of explanation, members having the same functions as those described in the above embodiments are given the same reference signs, and their description is omitted.
 In the configuration described in Embodiment 3, the angle-of-view range of the output video changes dynamically according to the position of the driver's face and the like, so that the driver can always see a normal output video. In this case, the output video seen by a person seated in the passenger seat of the vehicle (a fellow passenger) changes among a normal state, a reversed (pseudoscopic) view, and a double image. The passenger may therefore find such changes in the output video unpleasant.
 In the present embodiment, the viewing angle of the output video is controlled so that the passenger cannot see the output video. The passenger may be shown no video at all, or may be shown any video other than the output video.
 (a) of FIG. 16 is a diagram showing one example of viewing angle control. In this example, the video display device 25 includes a directional diffusion film, which is a kind of optical film. The graph shown in (a) of FIG. 16 shows the diffusion viewing angle characteristic of the output video in this example. As shown in (a) of FIG. 16, when the viewing angle θ is small, the output video is hardly diffused; as the viewing angle increases, however, the output video is diffused more and more strongly by the directional diffusion film. Consequently, the driver D sees a normal output video, whereas the person P seated in the passenger seat always sees a double image of the right-eye output video and the left-eye output video. However, since the double image seen by the person P has no luminance flicker, the discomfort of the person P is reduced.
 (b) of FIG. 16 is a diagram showing another example of viewing angle control. In this example, the video display device 25 includes a viewing angle control film, which is a kind of optical film. The graph shown in (b) of FIG. 16 shows the luminance viewing angle characteristic of the output video in this example. As shown in (b) of FIG. 16, the viewing angle control film blocks the output video more strongly as the viewing angle θ increases. Consequently, the driver D can see the output video normally, whereas the person P seated in the passenger seat cannot see the output video and sees a black screen. Therefore, even if the output video changes, the person P hardly perceives such changes at his or her viewing angle.
 As shown in (b) of FIG. 16, the luminance viewing angle control film may block light at both positive and negative viewing angles θ, or may block the output video at only one of the positive and negative viewing angles θ so as to keep the output video from reaching the eyes of the person seated in the passenger seat.
 The video display device 25 may be a liquid crystal display device including a narrow-viewing-angle liquid crystal. In this case, the video display device 25 can control the range of viewing angles θ over which the output video is visible by controlling the transmittance of the narrow-viewing-angle liquid crystal. With this configuration, the output video at large viewing angles θ can be blocked without a viewing angle control film. Since the person seated in the passenger seat therefore cannot see the output video, that person does not feel discomfort due to changes in the output video.
 The video display device 25 may display a so-called veil view. In this configuration, a veil-view image that is invisible when the display screen 20 is viewed from its front and visible when the screen is viewed from a direction deviated from the front by a predetermined angle or more is displayed on the display screen 20, so the output video cannot be seen from the passenger seat. Therefore, the person seated in the passenger seat does not feel discomfort due to changes in the output video.
 [Implementation by software]
 Embodiments 1 to 5 above have been described in terms of an electronic mirror device for a vehicle. However, the control blocks of the electronic mirrors 1, 2, and 3 (in particular the video control devices 10, 210, and 310) may be realized by logic circuits (hardware) formed in an integrated circuit (IC chip) or the like, or may be realized by software using a CPU (Central Processing Unit). For example, a smartphone equipped with a 3D display (video display device 25), an eye-tracking camera 50 (driver information acquisition device 52), and a shooting camera (imaging unit 11) can realize the same configuration as the control blocks of the electronic mirrors 1, 2, and 3 by means of the smartphone's CPU and an application.
 In the latter case, the video control device 10 includes a CPU that executes the instructions of a program, which is software realizing each function, a ROM (Read Only Memory) or storage device (referred to as a "recording medium") on which the program and various data are recorded so as to be readable by a computer (or a CPU), and a RAM (Random Access Memory) into which the program is loaded. The object of the present invention is achieved by the computer (or CPU) reading the program from the recording medium and executing it. As the recording medium, a "non-transitory tangible medium" such as a tape, a disk, a card, a semiconductor memory, or a programmable logic circuit can be used. The program may also be supplied to the computer via any transmission medium capable of transmitting it (such as a communication network or a broadcast wave). One aspect of the present invention can also be realized in the form of a data signal, embedded in a carrier wave, in which the program is embodied by electronic transmission.
 [Summary]
 A display device (video control device 10, 210, 310 and parallax-barrier-compatible video display device 25) according to Aspect 1 of the present invention is a display device that is arranged in front of an occupant, including the driver of a vehicle, and displays a visual target included in a captured video of the surroundings of the vehicle so that the visual target is perceived on a far side farther from the occupant than the display screen (20, 20') or on a near side closer to the occupant, the display device including: a depth setting unit (depth determination unit 12) that sets a depth from the display screen (pop-out amount p, depth amount d) at which the visual target is to be perceived on the far side or the near side; and a generation unit (output video generation unit 13) that generates, from the captured video, a first output video including pixel data for the left eye and pixel data for the right eye corresponding to the depth set by the depth setting unit.
 According to the above configuration, a depth is set for the output video, and an output video is generated that includes pixel data for the left eye and pixel data for the right eye shifted in the pixel arrangement direction by a pixel shift amount corresponding to the set depth. To the occupant (user) viewing the output video, the visual target in the output video appears either behind the display screen or popping out of the display screen, depending on the depth set for the output video.
 In a display device according to Aspect 2 of the present invention, in Aspect 1, the captured video may be video shot by a monocular camera, and the generation unit may generate the first output video by alternately arranging, as the pixel data for the left eye and the pixel data for the right eye, pixel data at positions in the captured video that are shifted from each other by a pixel shift amount (Δpix) corresponding to the depth set by the depth setting unit.
 According to the above configuration, an output video having a sense of depth or a sense of popping out can be generated from video shot with a monocular camera, without requiring a twin-lens camera as in the conventional art.
 In a display device according to Aspect 3 of the present invention, in Aspect 1 or 2, the depth setting unit may set the depth based on at least one of the state of the vehicle and the state of the occupant.
 According to the above configuration, an appropriate depth can be set for the output video based on at least one of the state of the vehicle and the state of the occupant.
 A display device according to Aspect 4 of the present invention may, in any one of Aspects 1 to 3, further include a display switching unit (15) that selects, as the output video to be displayed on the display screen, one of the first output video generated by the generation unit, a second output video, generated from the captured video, that displays the visual target so that it is perceived at the position of the display screen, and a third output video, generated from the captured video, that reproduces human binocular parallax, based on at least one of the state of the vehicle and the state of the occupant.
 According to the above configuration, the output video to be displayed can be switched appropriately based on at least one of the state of the vehicle and the state of the occupant.
 In a display device according to Aspect 5 of the present invention, in Aspect 3, the state of the vehicle may be either the speed of the vehicle or the position of a seat provided in the vehicle, or the state of the occupant may be either the position of the occupant's face relative to the display screen or the binocular parallax of the occupant.
 According to the above configuration, an appropriate depth can be set for the output video based on any one of the speed of the vehicle, the position of the seat, the position of the occupant's face, and the binocular parallax of the occupant.
 A display device (video control device 10, 210, 310 and parallax-barrier-compatible video display device 25) according to Aspect 6 of the present invention is a display device that is arranged in front of an occupant, including the driver of a vehicle, and displays a visual target included in a captured video of the surroundings of the vehicle so that the visual target is perceived on a far side farther from the occupant than the display screen (20, 20') or on a near side closer to the occupant, the display device including: a depth setting unit (depth determination unit 12) that sets a depth from the display screen at which the visual target is to be perceived on the far side or the near side; and a generation unit (output video generation unit 13) that generates a first output video in which data at positions in the captured video shifted from each other by a pixel shift amount corresponding to the depth set by the depth setting unit are alternately arranged as pixel data for the left eye and pixel data for the right eye, wherein the display screen is tilted in a direction away from the occupant.
 In the above configuration, in accordance with the tilt of the display screen, pixels located farther from the occupant can be set to a shallower depth from the display screen, and pixels located closer to the occupant can be set to a deeper depth from the display screen. Therefore, regardless of the pixel position along the tilt direction of the display screen, the output video on the tilted display screen can be shown to the occupant as if the screen directly faced the occupant.
 In a display device according to Aspect 7 of the present invention, in Aspect 6, the depth setting unit may set a shallower depth from the display screen for pixels located farther from the occupant, according to the tilt of the display screen.
 According to the above configuration, the image corresponding to a pixel for which a shallower depth from the display screen is set appears at a shorter distance from the occupant. Therefore, when the display screen is tilted, pixels located farther from the occupant can be set to a shallower depth from the display screen, and pixels located closer to the occupant can be set to a deeper depth from the display screen. Consequently, regardless of the pixel position along the tilt direction of the display screen, the output video on the tilted display screen can be shown to the occupant as if the screen directly faced the occupant.
 A display device (video control device 310 and parallax-barrier-compatible video display device 25) according to Aspect 8 of the present invention is a display device that is arranged in front of an occupant, including the driver of a vehicle, and displays a visual target included in a captured video of the surroundings of the vehicle so that the visual target is perceived on a far side farther from the occupant than the display screen (20, 20') or on a near side closer to the occupant, the display device including: a depth setting unit (depth determination unit 12) that sets a depth from the display screen at which the visual target is to be perceived on the far side or the near side; and a generation unit (output video generation unit 13) that generates a first output video in which data at positions in the captured video shifted from each other by a pixel shift amount corresponding to the depth set by the depth setting unit are alternately arranged as pixel data for the left eye and pixel data for the right eye, wherein the display screen is a liquid crystal display device using a narrow-viewing-angle liquid crystal that blocks light in directions deviated from the front of the display screen by a predetermined angle or more.
 According to the above configuration, by restricting the field of view of the output video with the liquid crystal display device using the narrow-viewing-angle liquid crystal, only the left-eye pixels can be made visible to the occupant's left eye and only the right-eye pixels visible to the occupant's right eye. Therefore, to the occupant (user) viewing the output video, the visual target in the output video appears either behind the display screen or popping out of the display screen, depending on the depth set for the output video.
 A display device (video control device 310 and parallax-barrier-compatible video display device 25) according to Aspect 9 of the present invention is a display device that is arranged in front of an occupant, including the driver of a vehicle, and displays a visual target included in a captured video of the surroundings of the vehicle so that the visual target is perceived on a far side farther from the occupant than the display screen or on a near side closer to the occupant, the display device including: a depth setting unit that sets a depth from the display screen at which the visual target is to be perceived on the far side or the near side; and a generation unit that generates a first output video in which data at positions in the captured video shifted from each other by a pixel shift amount corresponding to the depth set by the depth setting unit are alternately arranged as pixel data for the left eye and pixel data for the right eye, wherein the display device displays on the display screen an image that is invisible when the display screen is viewed from its front and visible when the screen is viewed from a direction deviated from the front by a predetermined angle or more.
 According to the above configuration, the output video can be made visible only to an occupant viewing the display screen from a direction other than the front. To an occupant viewing the display screen from a direction other than the front, the visual target in the output video appears either behind the display screen or popping out of the display screen.
 A display device (video control device 310 and parallax-barrier-compatible video display device 25) according to Aspect 10 of the present invention is a display device that is arranged in front of an occupant, including the driver of a vehicle, and displays a visual target included in a captured video of the surroundings of the vehicle so that the visual target is perceived on a far side farther from the occupant than the display screen or on a near side closer to the occupant, the display device including: a depth setting unit that sets a depth from the display screen at which the visual target is to be perceived on the far side or the near side; and a generation unit that generates a first output video in which data at positions in the captured video shifted from each other by a pixel shift amount corresponding to the depth set by the depth setting unit are alternately arranged as pixel data for the left eye and pixel data for the right eye, wherein an optical film that blocks light in directions deviated from the front of the display screen by a predetermined angle or more is attached to the display screen.
 According to the above configuration, to an occupant viewing the display screen from its front, the visual target in the output video appears either behind the display screen or popping out of the display screen. On the other hand, an occupant viewing the display screen from a direction deviated from the front by a predetermined angle or more cannot see the output video. It is therefore possible to prevent an occupant viewing the display screen from such a direction from feeling discomfort due to, for example, luminance flicker of the output video.
 A display device (video control device 210 and parallax-barrier-compatible video display device 25) according to Aspect 11 of the present invention is a display device that is arranged in front of an occupant, including the driver of a vehicle, and displays a visual target included in a captured video of the surroundings of the vehicle, the display device including: a detection unit that detects the position of the occupant's face relative to the display screen; a trimming position determination unit that determines, according to the position of the occupant's face relative to the display screen detected by the detection unit, the position at which a trimmed image is cut out of the captured video; and a generation unit that generates an output video from the trimmed image cut out of the captured video according to the position determined by the trimming position determination unit.
 According to the above configuration, an output video having an appropriate depth can be shown to the occupant according to the position of the occupant's face.
 In a display device according to Aspect 12 of the present invention, in Aspect 11, when the occupant's face moves relative to the display screen, the trimming position determination unit may move the position at which the trimmed image is cut out of the captured video in the direction opposite to the direction of the movement. According to the above configuration, the occupant can easily change the range of the trimmed image displayed as the output video on the display screen (the trimming range) by moving his or her face relative to the display screen, just as when viewing an ordinary mirror.
 An electronic mirror according to Aspect 13 of the present invention includes the display device according to any one of Aspects 1 to 12, and an imaging unit (imaging unit 11) that shoots the area ahead of the vehicle, directly behind it, behind and to its left, or behind and to its right, and outputs the captured video to the display device.
 According to the above configuration, the output video can be generated based on the captured video obtained by shooting the area ahead of the vehicle, directly behind it, behind and to its left, or behind and to its right.
 A method of controlling a display device according to Aspect 14 of the present invention is a method of controlling a display device that is arranged in front of an occupant, including the driver of a vehicle, and displays a visual target included in a captured video of the surroundings of the vehicle so that the visual target is perceived on a far side farther from the occupant than the display screen or on a near side closer to the occupant, the method including: a depth setting step of setting a depth from the display screen at which the visual target is to be perceived on the far side or the near side; and a generation step of generating, from the captured video, a first output video including pixel data for the left eye and pixel data for the right eye corresponding to the depth set in the depth setting step. This configuration provides the same effects as the display device according to Aspect 1.
 A method of controlling a display device according to Aspect 15 of the present invention is a method of controlling a display device that is arranged in front of an occupant, including the driver of a vehicle, and displays a visual target included in a captured video of the surroundings of the vehicle so that the visual target is perceived on a far side farther from the occupant than the display screen or on a near side closer to the occupant, wherein the display screen is tilted in a direction away from the occupant, the method including: a depth setting step of setting a depth from the display screen at which the visual target is to be perceived on the far side or the near side; and a generation step of generating a first output video in which data at positions in the captured video shifted from each other by a pixel shift amount corresponding to the depth set in the depth setting step are alternately arranged as pixel data for the left eye and pixel data for the right eye. This configuration provides the same effects as the display device according to Aspect 6.
 A method of controlling a display device according to Aspect 16 of the present invention is a method of controlling a display device that is arranged in front of an occupant, including the driver of a vehicle, and displays a visual target included in a captured video of the surroundings of the vehicle, the method including: a detection step of detecting the position of the occupant's face relative to the display screen; a trimming position determination step of determining, according to the position of the occupant's face relative to the display screen detected in the detection step, the position at which a trimmed image is cut out of the captured video; and a generation step of generating an output video from the trimmed image cut out of the captured video according to the position determined in the trimming position determination step. This configuration provides the same effects as the display device according to Aspect 11.
 The display devices (video control devices 10, 210, 310) and the electronic mirrors (1, 2, 3) according to the aspects of the present invention may be realized by a computer. In that case, a control program for the electronic mirror that causes the electronic mirror to be realized by a computer by operating the computer as the units (software elements) included in the electronic mirror, and a computer-readable recording medium on which the program is recorded, also fall within the scope of the present invention.
 The present invention is not limited to the embodiments described above, and various modifications are possible within the scope indicated by the claims. Embodiments obtained by appropriately combining the technical means disclosed in different embodiments are also included in the technical scope of the present invention. Furthermore, new technical features can be formed by combining the technical means disclosed in the respective embodiments.
 DESCRIPTION OF SYMBOLS
 1, 2, 3 Electronic mirror
 10, 210, 310 Video control device (display device)
 11 Imaging unit (monocular camera, shooting unit)
 12 Depth determination unit (depth setting unit)
 13 Output video generation unit (generation unit)
 14 Angle-of-view range determination unit (trimming position determination unit)
 15 Display switching unit
 20, 20' Display screen
 25 Parallax-barrier-compatible video display device (display device)
 52 Driver information acquisition device (detection unit)

Claims (18)

  1. A display device that is arranged in front of an occupant, including a driver of a vehicle, and displays a visual target included in a captured video of surroundings of the vehicle so that the visual target is perceived on a far side farther from the occupant than a display screen or on a near side closer to the occupant, the display device comprising:
     a depth setting unit that sets a depth from the display screen at which the visual target is to be perceived on the far side or the near side; and
     a generation unit that generates, from the captured video, a first output video including pixel data for a left eye and pixel data for a right eye corresponding to the depth set by the depth setting unit.
  2. The display device according to claim 1, wherein
     the captured video is video shot by a monocular camera, and
     the generation unit generates the first output video by alternately arranging, as the pixel data for the left eye and the pixel data for the right eye, pixel data at positions in the captured video that are shifted from each other by a pixel shift amount corresponding to the depth set by the depth setting unit.
  3.  上記深さ設定部は、上記深さを、上記車両の状態および上記乗員の状態の少なくとも一方に基づいて設定することを特徴とする請求項1または2に記載の表示装置。 3. The display device according to claim 1, wherein the depth setting unit sets the depth based on at least one of the state of the vehicle and the state of the occupant.
  4.  The display device according to any one of claims 1 to 3, further comprising a display switching unit that selects, on the basis of at least one of the state of the vehicle and the state of the occupant, one of the following as an output video to be displayed on the display screen:
     the first output video generated by the generation unit;
     a second output video that is generated from the captured video and displays the visual target so that it is perceived at the position of the display screen; and
     a third output video that is generated from the captured video and reproduces human binocular parallax.
  5.  The display device according to claim 3, wherein the state of the vehicle is either the speed of the vehicle or the position of a seat provided in the vehicle, or the state of the occupant is either the position of the occupant's face relative to the display screen or the binocular parallax of the occupant.
  6.  A display device that is arranged in front of occupants including a driver of a vehicle and displays a visual target included in a captured video of the surroundings of the vehicle so that the visual target is perceived on a far side farther from the occupant than a display screen or on a near side closer to the occupant than the display screen, the display device comprising:
     a depth setting unit that sets a depth from the display screen at which the visual target is to be perceived on the far side or the near side; and
     a generation unit that generates a first output video in which pixel data at positions in the captured video shifted by a pixel shift amount corresponding to the depth set by the depth setting unit are alternately arranged as left-eye pixel data and right-eye pixel data,
     wherein the display screen is inclined in a direction away from the occupant.
  7.  The display device according to claim 6, wherein, in accordance with the inclination of the display screen, the depth setting unit sets a shallower depth from the display screen for a pixel located farther from the occupant.
  8.  A display device that is arranged in front of occupants including a driver of a vehicle and displays a visual target included in a captured video of the surroundings of the vehicle so that the visual target is perceived on a far side farther from the occupant than a display screen or on a near side closer to the occupant than the display screen, the display device comprising:
     a depth setting unit that sets a depth from the display screen at which the visual target is to be perceived on the far side or the near side; and
     a generation unit that generates a first output video in which pixel data at positions in the captured video shifted by a pixel shift amount corresponding to the depth set by the depth setting unit are alternately arranged as left-eye pixel data and right-eye pixel data,
     wherein the display screen is a liquid crystal display device using a narrow-viewing-angle liquid crystal that blocks light traveling in directions deviating from the front of the display screen by a predetermined angle or more.
  9.  A display device that is arranged in front of occupants including a driver of a vehicle and displays a visual target included in a captured video of the surroundings of the vehicle so that the visual target is perceived on a far side farther from the occupant than a display screen or on a near side closer to the occupant than the display screen, the display device comprising:
     a depth setting unit that sets a depth from the display screen at which the visual target is to be perceived on the far side or the near side; and
     a generation unit that generates a first output video in which pixel data at positions in the captured video shifted by a pixel shift amount corresponding to the depth set by the depth setting unit are alternately arranged as left-eye pixel data and right-eye pixel data,
     wherein an image that is invisible when the display screen is viewed from the front and is visible when the display screen is viewed from a direction deviating from the front by a predetermined angle or more is displayed on the display screen.
  10.  A display device that is arranged in front of occupants including a driver of a vehicle and displays a visual target included in a captured video of the surroundings of the vehicle so that the visual target is perceived on a far side farther from the occupant than a display screen or on a near side closer to the occupant than the display screen, the display device comprising:
     a depth setting unit that sets a depth from the display screen at which the visual target is to be perceived on the far side or the near side; and
     a generation unit that generates a first output video in which pixel data at positions in the captured video shifted by a pixel shift amount corresponding to the depth set by the depth setting unit are alternately arranged as left-eye pixel data and right-eye pixel data,
     wherein an optical film that blocks light traveling in directions deviating from the front of the display screen by a predetermined angle or more is attached to the display screen.
  11.  A display device that is arranged in front of occupants including a driver of a vehicle and displays a visual target included in a captured video of the surroundings of the vehicle, the display device comprising:
     a detection unit that detects a position of the occupant's face relative to a display screen;
     a trimming position determination unit that determines, in accordance with the position of the occupant's face relative to the display screen detected by the detection unit, a position at which a trimmed image is cut out from the captured video; and
     a generation unit that generates an output video from the trimmed image cut out from the captured video according to the position determined by the trimming position determination unit.
  12.  The display device according to claim 11, wherein, when the occupant's face moves relative to the display screen, the trimming position determination unit moves the position at which the trimmed image is cut out from the captured video in a direction opposite to the direction of the movement.
  13.  An electronic mirror comprising:
     the display device according to any one of claims 1 to 12; and
     an image capturing unit that captures a video of the area in front of the vehicle, directly behind the vehicle, behind on the left side of the vehicle, or behind on the right side of the vehicle, and outputs the captured video to the display device.
  14.  A method for controlling a display device that is arranged in front of occupants including a driver of a vehicle and displays a visual target included in a captured video of the surroundings of the vehicle so that the visual target is perceived on a far side farther from the occupant than a display screen or on a near side closer to the occupant than the display screen, the method comprising:
     a depth setting step of setting a depth from the display screen at which the visual target is to be perceived on the far side or the near side; and
     a generation step of generating, from the captured video, a first output video including left-eye pixel data and right-eye pixel data corresponding to the depth set in the depth setting step.
  15.  A method for controlling a display device that is arranged in front of occupants including a driver of a vehicle and displays a visual target included in a captured video of the surroundings of the vehicle so that the visual target is perceived on a far side farther from the occupant than a display screen or on a near side closer to the occupant than the display screen, wherein the display screen is inclined in a direction away from the occupant, the method comprising:
     a depth setting step of setting a depth from the display screen at which the visual target is to be perceived on the far side or the near side; and
     a generation step of generating a first output video in which pixel data at positions in the captured video shifted by a pixel shift amount corresponding to the depth set in the depth setting step are alternately arranged as left-eye pixel data and right-eye pixel data.
  16.  A method for controlling a display device that is arranged in front of occupants including a driver of a vehicle and displays a visual target included in a captured video of the surroundings of the vehicle, the method comprising:
     a detection step of detecting a position of the occupant's face relative to a display screen;
     a trimming position determination step of determining, in accordance with the position of the occupant's face relative to the display screen detected in the detection step, a position at which a trimmed image is cut out from the captured video; and
     a generation step of generating an output video from the trimmed image cut out from the captured video according to the position determined in the trimming position determination step.
  17.  A program for causing a computer to execute the method for controlling a display device according to any one of claims 14 to 16.
  18.  A computer-readable storage medium on which the program according to claim 17 is recorded.
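
The column-interleaved generation described in claim 2 can be pictured with a small sketch. The following Python/NumPy snippet is only an illustration under assumed conventions (a column-striped parallax barrier, an assumed depth-to-pixel conversion factor px_per_mm_of_depth, and edge wrap-around ignored); it is not the implementation disclosed in the application.

```python
import numpy as np

def make_parallax_barrier_frame(mono_frame: np.ndarray, depth_mm: float,
                                px_per_mm_of_depth: float = 0.1) -> np.ndarray:
    """Interleave left/right columns of a single monocular frame (cf. claim 2).

    The same source image feeds both eyes; shifting it horizontally by a
    depth-dependent amount creates the disparity that makes the whole scene
    appear behind (positive depth) or in front of (negative depth) the screen.
    """
    # Pixel shift amount corresponding to the set depth (assumed linear mapping).
    shift = int(round(depth_mm * px_per_mm_of_depth))

    # Opposite horizontal shifts for the two eyes; np.roll wraps at the edges,
    # which a real implementation would crop or pad instead.
    left_view = np.roll(mono_frame, -(shift // 2), axis=1)
    right_view = np.roll(mono_frame, (shift + 1) // 2, axis=1)

    # Alternate columns: even columns carry left-eye pixels, odd columns carry
    # right-eye pixels, matching a column-striped parallax barrier panel.
    out = np.empty_like(mono_frame)
    out[:, 0::2] = left_view[:, 0::2]
    out[:, 1::2] = right_view[:, 1::2]
    return out
```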
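
Claims 3 to 5 tie the depth and the choice of displayed video to the vehicle state or the occupant state. A minimal sketch of one such selection policy follows; the speed threshold, the seat-position formula, and the mode labels are illustrative assumptions, not values taken from the application.

```python
def choose_output(vehicle_speed_kmh: float, seat_position_mm: float,
                  stereo_available: bool) -> tuple:
    """Pick an output video and a depth from vehicle/occupant state (cf. claims 3-5).

    Returns (mode, depth_mm), where mode is 'first' (fixed-depth stereoscopic
    video), 'second' (flat video at the screen plane) or 'third' (video that
    reproduces human binocular parallax).
    """
    if vehicle_speed_kmh < 10.0:
        # Parking / very low speed: a flat image at the screen plane is easiest
        # to read, so show the 'second' output video.
        return "second", 0.0
    if stereo_available:
        # If binocular-parallax video can be produced, prefer the 'third' output.
        return "third", 0.0
    # Otherwise show the 'first' output and push the scene farther behind the
    # screen when the seat is set farther back (both numbers are assumptions).
    depth_mm = 1500.0 + 0.5 * seat_position_mm
    return "first", depth_mm
```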
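
Claim 7 makes the depth shallower for pixels farther from the occupant when the screen is tilted away from the occupant (claim 6). A rough per-row version of that compensation could look like the sketch below; the row pitch and the simple sine-based geometry are assumptions for illustration only.

```python
import numpy as np

def row_depths(base_depth_mm: float, n_rows: int, tilt_deg: float,
               row_pitch_mm: float = 0.2) -> np.ndarray:
    """Per-row depth for a screen tilted away from the occupant (cf. claims 6-7).

    Rows that sit physically farther from the occupant (higher row indices here)
    are given a shallower depth from the screen, so the perceived image plane
    stays roughly upright even though the panel itself is tilted.
    """
    # Extra viewing distance added to each row by the tilt of the panel.
    tilt_offset_mm = np.arange(n_rows) * row_pitch_mm * np.sin(np.radians(tilt_deg))
    # Subtract that offset from the requested depth, never going past the screen.
    return np.clip(base_depth_mm - tilt_offset_mm, 0.0, None)
```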
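
Claims 11 and 12 move the trimming (cropping) position within the captured video in the direction opposite to the movement of the occupant's face, which mimics the motion parallax of a physical mirror. A hedged sketch of such a crop-window computation follows; the gain factor and the coordinate conventions are assumptions.

```python
def trim_window(face_offset_px, frame_size, window_size, gain: float = 0.5):
    """Crop rectangle for the output video, tracking the occupant's face (cf. claims 11-12).

    The crop window is moved opposite to the movement of the face, so that
    moving the head to one side reveals more of the scene on the other side,
    roughly as a physical mirror would.
    """
    frame_w, frame_h = frame_size   # size of the captured video frame
    win_w, win_h = window_size      # size of the trimmed (output) image
    dx, dy = face_offset_px         # face position relative to a reference position

    # Start from a centred window and shift it against the face movement.
    x0 = (frame_w - win_w) // 2 - int(gain * dx)
    y0 = (frame_h - win_h) // 2 - int(gain * dy)

    # Keep the window fully inside the captured frame.
    x0 = max(0, min(frame_w - win_w, x0))
    y0 = max(0, min(frame_h - win_h, y0))
    return x0, y0, win_w, win_h
```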
PCT/JP2017/031455 2016-12-12 2017-08-31 Display device, electronic mirror, display device control method, program, and storage medium WO2018109991A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016-240769 2016-12-12
JP2016240769 2016-12-12

Publications (1)

Publication Number Publication Date
WO2018109991A1 true WO2018109991A1 (en) 2018-06-21

Family

ID=62558211

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/031455 WO2018109991A1 (en) 2016-12-12 2017-08-31 Display device, electronic mirror, display device control method, program, and storage medium

Country Status (1)

Country Link
WO (1) WO2018109991A1 (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004084559A1 (en) * 2003-03-20 2004-09-30 Seijiro Tomita Video display for vehicle
JP2007503603A (en) * 2003-08-22 2007-02-22 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Display device and viewing angle control unit used therefor
WO2009011199A1 (en) * 2007-07-19 2009-01-22 Sharp Kabushiki Kaisha Display and view angle control element employed therein
JP2010171608A (en) * 2009-01-21 2010-08-05 Nikon Corp Image processing device, program, image processing method, recording method, and recording medium
JP2010259017A (en) * 2009-04-28 2010-11-11 Nikon Corp Display device, display method and display program
JP2011188028A (en) * 2010-03-04 2011-09-22 Denso Corp Vehicle surrounding monitoring system
WO2012053030A1 (en) * 2010-10-19 2012-04-26 三菱電機株式会社 Three-dimensional display device
JP2013104976A (en) * 2011-11-11 2013-05-30 Denso Corp Display device for vehicle
JP2013237320A (en) * 2012-05-14 2013-11-28 Toshiba Alpine Automotive Technology Corp Discomfort reduction display device and method for controlling display thereof

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021062756A (en) * 2019-10-15 2021-04-22 株式会社国際電気通信基礎技術研究所 Display device for electronic mirror
JP7398237B2 (en) 2019-10-15 2023-12-14 株式会社国際電気通信基礎技術研究所 Display device for electronic mirror

Similar Documents

Publication Publication Date Title
JP4857885B2 (en) Display device
EP2914002B1 (en) Virtual see-through instrument cluster with live video
CN105988220B (en) Method and control device for operating an autostereoscopic field display for a vehicle
JP4367212B2 (en) Virtual image display device and program
US10882454B2 (en) Display system, electronic mirror system, and moving body
JP2014010418A (en) Stereoscopic display device and stereoscopic display method
JP2008015188A (en) Image presenting system and image presenting method
CN111225830B (en) Full display mirror with adjustment correction
JP2007052304A (en) Video display system
US20190166357A1 (en) Display device, electronic mirror and method for controlling display device
JP2019133116A (en) Display system, movable body, and design method
JP2020078051A (en) Rear display device, rear display method, and program
JP5117613B1 (en) Video processing apparatus, video processing method, and storage medium
JP2018203245A (en) Display system, electronic mirror system, and mobile body
JP4929768B2 (en) Visual information presentation device and visual information presentation method
US10896017B2 (en) Multi-panel display system and method for jointly displaying a scene
US20190166358A1 (en) Display device, electronic mirror and method for controlling display device
JP2021067909A (en) Stereoscopic display device and head-up display apparatus
US20190141314A1 (en) Stereoscopic image display system and method for displaying stereoscopic images
JP2020136947A (en) Driving support device
WO2018109991A1 (en) Display device, electronic mirror, display device control method, program, and storage medium
US12088952B2 (en) Image processing apparatus, image processing method, and image processing system
WO2018101170A1 (en) Display device and electronic mirror
JP6697747B2 (en) Display system, electronic mirror system and moving body
JP7398237B2 (en) Display device for electronic mirror

Legal Events

Date Code Title Description
DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17880273

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17880273

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP