
WO2005084027A1 - Image generation device, image generation program, and image generation method - Google Patents


Info

Publication number
WO2005084027A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
virtual viewpoint
display
viewpoint
space
Prior art date
Application number
PCT/JP2005/002150
Other languages
French (fr)
Japanese (ja)
Inventor
Hidekazu Iwaki
Takashi Miyoshi
Akio Kosaka
Original Assignee
Olympus Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Olympus Corporation
Publication of WO2005084027A1


Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
    • B60R1/27 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view providing all-round vision, e.g. using omnidirectional cameras
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/16 - Anti-collision systems
    • G08G1/165 - Anti-collision systems for passive traffic, e.g. including static obstacles, trees
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/28 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with an adjustable field of view
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00 - Arrangements for holding or mounting articles, not otherwise provided for
    • B60R11/02 - Arrangements for holding or mounting articles, not otherwise provided for for radio sets, television sets, telephones, or the like; Arrangement of controls thereof
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R21/00 - Arrangements or fittings on vehicles for protecting or preventing injuries to occupants or pedestrians in case of accidents or other traffic risks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 - General purpose image data processing
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/16 - Anti-collision systems
    • G08G1/166 - Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/16 - Anti-collision systems
    • G08G1/168 - Driving aids for parking, e.g. acoustic or visual feedback on parking space
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/14 - Display of multiple viewports
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/37 - Details of the operation on graphic patterns
    • G09G5/377 - Details of the operation on graphic patterns for mixing or overlaying two or more graphic patterns
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 - Television systems
    • H04N7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • Image generation device, image generation program, and image generation method
  • The present invention relates to an apparatus and a method that, rather than displaying a plurality of images taken by one or several cameras independently of each other, display them as a combined image so that the overall state of the captured area can be intuitively grasped.
  • The present invention relates to a technique suitably applicable to a monitoring device in a store, a vehicle periphery monitoring device for assisting safety confirmation when driving a vehicle, and the like.
  • An image generation device that displays images captured by a plurality of cameras in an easily viewable manner has been disclosed (for example, Patent Document 1).
  • Patent Document 1 discloses an image generating apparatus in which an area captured by a plurality of cameras (for example, the vicinity of a vehicle) is synthesized as one continuous image and the synthesized image is displayed.
  • In that apparatus, a space model is created by a space model creation means, either set appropriately in advance or set in accordance with the distance to obstacles around the vehicle detected by an obstacle detection means.
  • The image of the vehicle periphery, input by a camera installed on the vehicle via an image input unit, is mapped onto the space model by a mapping unit.
  • One image viewed from the viewpoint determined by a viewpoint conversion unit is then synthesized from the mapped image data and displayed by a display unit.
  • Patent Document 1 Japanese Patent No. 3286306
  • Patent Document 2 Japanese Patent Application Laid-Open No. 05-265547
  • Patent Document 3 JP-A-06-266828
  • An image generating apparatus according to the present invention includes: space reconstructing means that maps an input image from one or a plurality of cameras onto a predetermined space model in a three-dimensional space; viewpoint conversion means that generates a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the mapped spatial data; and display control means that controls display of a plurality of the virtual viewpoint images viewed from mutually different viewpoints.
  • Further, an image generating apparatus according to the present invention includes: space reconstructing means that maps an input image from one or a plurality of cameras mounted on a vehicle onto a predetermined space model in a three-dimensional space; viewpoint conversion means that generates a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the spatial data mapped by the space reconstructing means; display control means that controls display of a plurality of the virtual viewpoint images; and display form selection means that selects the display form used by the display control means according to the state of the vehicle.
  • Further, an image generating apparatus according to the present invention includes: space reconstructing means that maps an input image from one or a plurality of cameras mounted on a vehicle onto a predetermined space model in a three-dimensional space; viewpoint conversion means that generates a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the spatial data mapped by the space reconstructing means; display control means that controls display of a plurality of the virtual viewpoint images; and display form selection means that selects the display form used by the display control means according to an operation state of the vehicle.
  • Further, an image generating apparatus according to the present invention includes: space reconstructing means that maps an input image from one or a plurality of cameras mounted on a vehicle onto a predetermined space model in a three-dimensional space; viewpoint conversion means that generates a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the spatial data mapped by the space reconstructing means; display control means that controls display of a plurality of the virtual viewpoint images; and display form selection means that selects the display form used by the display control means.
  • Further, the image generation device according to the present invention includes: space reconstructing means that maps an input image from one or a plurality of cameras onto a predetermined space model in a three-dimensional space; viewpoint conversion means that generates a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the mapped spatial data; color classifying means that color-codes objects of the same type captured in the virtual viewpoint image; and display control means that displays the image converted by the color classifying means.
  • Further, an image generating apparatus according to the present invention includes: space reconstructing means that maps an input image from one or a plurality of cameras onto a predetermined space model in a three-dimensional space; viewpoint conversion means that generates a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the mapped spatial data; and display control means that displays the image. The viewpoint conversion means generates the virtual viewpoint image based on user information, which is information specific to the user or information on the state of the user.
  • A computer-readable recording medium on which an image generation program according to the present invention is recorded causes a computer to execute: a space reconstruction process that maps an input image from one or a plurality of cameras onto a predetermined space model in a three-dimensional space; a viewpoint conversion process that generates a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the spatial data mapped by the space reconstruction process; and a display control process that controls display of a plurality of the virtual viewpoint images viewed from mutually different viewpoints.
  • Further, a computer-readable recording medium on which an image generation program according to the present invention is recorded causes a computer to execute: a space reconstruction process that maps an input image from one or a plurality of cameras onto a predetermined space model in a three-dimensional space; a color-coding process that color-codes objects of the same type photographed in the image; and a display process that displays the image converted by the color-coding process.
  • Further, a computer-readable recording medium on which an image generation program according to the present invention is recorded causes a computer to execute: a space reconstruction process that maps an input image from one or a plurality of cameras onto a predetermined space model in a three-dimensional space; a viewpoint conversion process that generates a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the spatial data mapped by the space reconstruction process; and a display control process that displays the converted image. The viewpoint conversion process generates the virtual viewpoint image based on user information, which is information specific to the user or information on the state of the user.
  • Further, a computer-readable recording medium on which an image generation program according to the present invention is recorded causes a computer to execute: a space reconstruction process that maps an input image from one or a plurality of cameras mounted on a vehicle onto a predetermined space model in a three-dimensional space; a viewpoint conversion process that generates a virtual viewpoint image viewed from an arbitrary virtual viewpoint in the three-dimensional space with reference to the mapped spatial data; a display control process that controls display of a plurality of the virtual viewpoint images viewed from different viewpoints; and a display form selection process that selects the display form used by the display control process according to the state of the vehicle.
  • Further, a computer-readable recording medium on which an image generation program is recorded causes a computer to execute: a space reconstruction process that maps an input image from one or a plurality of cameras mounted on a vehicle onto a predetermined space model in a three-dimensional space; a viewpoint conversion process that generates a virtual viewpoint image viewed from an arbitrary virtual viewpoint in the three-dimensional space with reference to the mapped spatial data; a display control process that controls display of a plurality of the virtual viewpoint images viewed from different viewpoints; and a display form selection process that selects the display form used by the display control process according to an operation state of the vehicle.
  • Further, a computer-readable recording medium on which an image generation program according to the present invention is recorded causes a computer to execute a space reconstruction process that maps an input image from one or a plurality of cameras mounted on a vehicle onto a predetermined space model in a three-dimensional space, and a viewpoint conversion process that generates an image viewed from an arbitrary virtual viewpoint in the three-dimensional space with reference to the spatial data mapped by the space reconstruction process.
  • In the image generation method according to the present invention, an input image from one or a plurality of cameras is mapped onto a predetermined space model in a three-dimensional space, a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, is generated with reference to the mapped spatial data, and display of a plurality of the virtual viewpoint images viewed from mutually different viewpoints is controlled.
  • Further, in the image generation method, the input image from one or a plurality of cameras is mapped onto a predetermined space model in a three-dimensional space, a virtual viewpoint image viewed from an arbitrary virtual viewpoint in the three-dimensional space is generated with reference to the mapped spatial data, objects of the same type photographed in the virtual viewpoint image are color-coded, and the color-coded image is displayed.
  • Further, in the image generation method, the input image from one or a plurality of cameras is mapped onto a predetermined space model in a three-dimensional space, and a virtual viewpoint image viewed from an arbitrary virtual viewpoint in the three-dimensional space is generated with reference to the mapped spatial data and displayed. The virtual viewpoint image is generated based on user information, which is information specific to the user or information on the state of the user.
  • Further, in an image generation method, an input image from one or a plurality of cameras mounted on a vehicle is mapped onto a predetermined space model in a three-dimensional space, a virtual viewpoint image viewed from an arbitrary virtual viewpoint in the three-dimensional space is generated with reference to the mapped spatial data, display of a plurality of the virtual viewpoint images viewed from different viewpoints is controlled, and the display form is selected according to the state of the vehicle.
  • Further, in the image generation method, input images from a plurality of cameras mounted on a vehicle are mapped onto a predetermined space model in a three-dimensional space, a virtual viewpoint image viewed from an arbitrary virtual viewpoint in the three-dimensional space is generated with reference to the mapped spatial data, display of the image is controlled, and the display form is selected according to the operation state of the vehicle.
  • Further, in the image generation method, an input image from one or a plurality of cameras mounted on a vehicle is mapped onto a predetermined space model in a three-dimensional space, a virtual viewpoint image viewed from an arbitrary virtual viewpoint in the three-dimensional space is generated with reference to the mapped spatial data, display of a plurality of the virtual viewpoint images viewed from different viewpoints is controlled, and the display form is selected according to the operation state of the operator.
  • FIG. 1 is a diagram showing an image generation device according to a first embodiment.
  • FIG. 2 is a diagram showing a display flow of a virtual viewpoint image in the first embodiment.
  • FIG. 3 is a diagram showing a virtual viewpoint image superimposed and displayed in the first embodiment.
  • FIG. 4 is a diagram showing an example in which a plurality of virtual viewpoint images according to the first embodiment are panorama-synthesized.
  • FIG. 5 is a diagram in which four virtual viewpoint images according to the first embodiment are displayed side by side.
  • FIG. 6 is a diagram showing the relationship between the switching of the operation mode and the display mode in the first embodiment.
  • FIG. 7 is a diagram showing a correspondence relationship between a vehicle state and a display mode in the second embodiment.
  • FIG. 8 is a diagram showing an example of a mode of an operation state of a vehicle according to a third embodiment.
  • FIG. 9A is a diagram showing buttons used for mode switching in the third embodiment.
  • FIG. 9B is a diagram showing a mode button whose arrangement has been changed by an arrangement change (automatic change / manual change) function by learning according to the third embodiment.
  • FIG. 10 is a diagram illustrating an image generation device according to a fourth embodiment.
  • FIG. 11 is a diagram showing a processing flow in a fourth embodiment.
  • FIG. 12 is a diagram showing an example of displaying both a wide-angle virtual viewpoint image and a virtual viewpoint image in which an obstacle is enlarged in proximity to the wide-angle virtual viewpoint image in the fifth embodiment.
  • FIG. 13 is a diagram illustrating an image generation device according to a sixth embodiment.
  • FIG. 14 is a diagram showing three virtual viewpoint images superimposed on a part of a car navigation image in the seventh embodiment.
  • FIG. 15 is a configuration block diagram of a hardware environment of the image generation device according to the first to eighth embodiments.
  • In Patent Document 1, images of an area (for example, near a vehicle) captured by a plurality of cameras are synthesized as one continuous image, and the synthesized image is mapped into a virtual three-dimensional space model.
  • Its main theme is how to generate an image (a virtual viewpoint image) in which the viewpoint of the mapped data is virtually changed in three dimensions; it does not offer sufficiently specific suggestions about the display method and display form, and thus about improving the usability of the user interface.
  • FIG. 1 shows an image generating apparatus 10000 according to an embodiment of the present invention.
  • The image generation apparatus 10000 according to the present invention includes a plurality of cameras 101, a camera parameter table 103, spatial reconstruction means 104, a spatial data buffer 105, viewpoint conversion means 106, display control means 10001, display form selection means 10002, and display means 107.
  • the plurality of cameras 101 are provided in a state suitable for grasping the status of the monitoring target area.
  • The cameras 101 are, for example, television cameras that capture images of the space to be monitored, such as the situation around a vehicle. Usually, it is preferable to use cameras having a large angle of view so that a wide field of view is obtained.
  • For the cameras, a known arrangement such as that disclosed in Patent Document 1 may be used.
  • the camera parameter table 103 stores camera parameters indicating the characteristics of the camera 101.
  • The image generation device 10000 is provided with calibration means (not shown) for performing camera calibration.
  • Camera calibration determines and corrects the camera parameters representing the characteristics of a camera 101 placed in a three-dimensional space, such as the camera mounting position and mounting angle in the three-dimensional space, the camera lens distortion correction value, and the camera lens focal length.
  • The calibration means and the camera parameter table 103 are described in detail in, for example, Patent Document 1.
  • The spatial reconstruction means 104 creates spatial data in which the input image from the camera 101 is mapped onto a three-dimensional space model based on the camera parameters. That is, based on the camera parameters calculated by the calibration means (not shown), it associates each pixel constituting the input image from the camera 101 with a point in the three-dimensional space. The spatial reconstruction means 104 thereby calculates where each object included in the image captured by the camera 101 exists in the three-dimensional space, and stores the resulting spatial data in the spatial data buffer 105.
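As a concrete illustration of this pixel-to-space mapping, the sketch below backprojects a pixel through a pinhole camera model and intersects the viewing ray with a flat ground plane used as the space model. This is a minimal, hypothetical example only: the patent relies on the space model and calibration of Patent Document 1, and the function name, the world-to-camera (R, t) convention, and the choice of a ground plane are all assumptions.

```python
import numpy as np

def backproject_to_ground(u, v, K, R, t):
    """Map an image pixel (u, v) to a point on the ground plane z = 0.

    K: 3x3 intrinsic matrix; R, t: camera rotation and translation
    (world-to-camera convention). Returns the 3-D world point, or None
    when the viewing ray is parallel to or points away from the ground.
    """
    # Ray direction in camera coordinates for this pixel.
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Rotate the ray into world coordinates; the camera centre is -R^T t.
    ray_world = R.T @ ray_cam
    origin = -R.T @ t
    if abs(ray_world[2]) < 1e-9:
        return None
    s = -origin[2] / ray_world[2]          # intersect with the plane z = 0
    if s <= 0:
        return None                         # ground is behind the camera
    return origin + s * ray_world
```

Running this for every pixel of every camera image, and storing the resulting 3-D points with their pixel colors, would yield spatial data of the kind held in the spatial data buffer 105.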
  • the spatial data buffer 105 temporarily stores the spatial data created by the spatial reconstruction means 104. This spatial data buffer 105 is also described in detail in Patent Document 1, for example.
  • The viewpoint conversion means 106 creates an image viewed from an arbitrary viewpoint with reference to the spatial data. That is, referring to the spatial data created by the spatial reconstruction means 104, it creates the image that would be captured by a camera installed at an arbitrary viewpoint.
  • This viewpoint conversion means 106 may have the configuration described in detail in Patent Document 1, for example.
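To make the viewpoint conversion concrete, the following sketch projects colored 3-D spatial data into a virtual camera using a simple z-buffer. This point-based rendering is an assumed illustration, not the configuration described in Patent Document 1; the function name and camera conventions are hypothetical.

```python
import numpy as np

def render_virtual_view(points, colors, K, R, t, width, height):
    """Project colored 3-D spatial data into a virtual camera image.

    points: (N, 3) world coordinates; colors: (N, 3) RGB values.
    K, R, t describe the virtual camera (world-to-camera convention).
    Nearer points win via a z-buffer; unseen pixels stay black.
    """
    image = np.zeros((height, width, 3), dtype=np.uint8)
    zbuf = np.full((height, width), np.inf)
    cam = (R @ points.T).T + t              # world -> camera coordinates
    for p, c in zip(cam, colors):
        if p[2] <= 0:                        # behind the virtual camera
            continue
        uvw = K @ p
        u, v = int(uvw[0] / uvw[2]), int(uvw[1] / uvw[2])
        if 0 <= u < width and 0 <= v < height and p[2] < zbuf[v, u]:
            zbuf[v, u] = p[2]
            image[v, u] = c
    return image
```

Changing K, R, and t moves the virtual viewpoint freely through the three-dimensional space without moving any physical camera, which is the essence of the viewpoint conversion described above.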
  • the display control unit 10001 controls the display mode when displaying the image converted by the viewpoint conversion unit 106.
  • the display control unit 10001 controls the display of an image by using the selection operation of the display mode selection unit 10002 as a trigger.
  • For example, a plurality of images converted by the viewpoint conversion unit 106 can be displayed in a superimposed manner, displayed as one continuous image, or displayed side by side.
  • A plurality of images from different viewpoints can also be displayed at the same time; for example, a virtual viewpoint image as seen in a rear-view mirror and a virtual viewpoint image viewed from a bird's-eye viewpoint can be displayed simultaneously. In this way, images having different contents can be displayed on a divided screen, displayed in a superimposed manner, displayed side by side, and so on.
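Two of the display forms named above, the superimposed (picture-in-picture) form and the side-by-side form, can be sketched with simple array operations. This assumes NumPy images in height x width x 3 layout and is purely an illustration of the compositing, not the device's actual display control means:

```python
import numpy as np

def picture_in_picture(parent, child, top=10, left=10):
    """Overlay a child virtual-viewpoint image onto a parent image."""
    out = parent.copy()
    h, w = child.shape[:2]
    out[top:top + h, left:left + w] = child
    return out

def side_by_side(images):
    """Tile four equally sized virtual-viewpoint images in a 2x2 grid."""
    top = np.hstack(images[:2])
    bottom = np.hstack(images[2:4])
    return np.vstack([top, bottom])
```

A divided-screen display is the same tiling idea with each tile rendered from a different virtual viewpoint.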
  • The display form selection means 10002 is used to instruct a change of the viewpoint and a change of the angle of view. These instructions are transmitted to the viewpoint conversion means 106 via the display control means 10001 (or may be transmitted directly from the display form selection means 10002 to the viewpoint conversion means 106), and the corresponding virtual viewpoint image is created.
  • the display mode selection unit 10002 may instruct not only the change of the viewpoint and the change of the angle of view but also the change of the zoom, focus, exposure, shutter speed, and the like. Further, the display mode selection unit 10002 can also select a display mode as described later.
  • the display unit 107 is, for example, a display or the like, and displays an image controlled by the display control unit 10001.
  • FIG. 2 shows a display flow of the virtual viewpoint image in the present embodiment.
  • the respective fields of view of a plurality of cameras are integrated by the following procedure and combined as one image.
  • First, the spatial reconstruction means 104 calculates the correspondence between each pixel constituting the image obtained from the camera 101 and a point in the three-dimensional coordinate system, and creates spatial data. This calculation is performed for all pixels of the image obtained from each camera 101 (S1).
  • For this processing, a known embodiment such as that disclosed in Patent Document 1 can be applied.
  • Next, a desired viewpoint is designated via the viewpoint conversion means 106 (S2).
  • The viewpoint conversion means 106 then reproduces, from the spatial data, an image viewed from the viewpoint specified in S2, and the display control means 10001 controls the display form of the reproduced image (S3). The image is output to the display means 107 and displayed (S4).
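The S1-S4 flow can be summarized as a small pipeline sketch. The class and stage names are illustrative assumptions; each stage is injected as a plain callable so that any concrete reconstruction, conversion, and display implementation could be plugged in:

```python
class VirtualViewpointPipeline:
    """Hypothetical sketch of the S1-S4 display flow described above."""

    def __init__(self, reconstruct, convert, control, show):
        self.reconstruct = reconstruct  # S1: camera image -> spatial data
        self.convert = convert          # S2: spatial data + viewpoint -> image
        self.control = control          # S3: apply the selected display form
        self.show = show                # S4: hand the image to display means

    def run(self, camera_images, viewpoint):
        spatial = [pt for img in camera_images
                   for pt in self.reconstruct(img)]
        image = self.convert(spatial, viewpoint)
        return self.show(self.control(image))
```

With trivial stand-in callables, the pipeline simply threads data through the four stages in order, mirroring the S1 to S4 sequence.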
  • FIG. 3 shows a virtual viewpoint image superimposed and displayed in the present embodiment.
  • The virtual viewpoint images 10010 and 10011 are superimposed; that is, the virtual viewpoint image 10011 can be displayed as a child screen within the virtual viewpoint image 10010 using the "Picture in Picture" function.
  • Although two screens are superimposed in the figure, the present invention is not limited to this, and more screens may be superimposed.
  • FIG. 4 shows an example in which a plurality of virtual viewpoint images according to the present embodiment are subjected to panoramic synthesis.
  • The two virtual viewpoint images 10020 and 10021 are joined at the part where their angles of view overlap (the overlapping boundary part in the image) and combined into a single continuous image (hereinafter referred to as a seamless image).
  • Although two screens are joined in the figure, the present invention is not limited to this, and a plurality of screens with different angles of view may be further joined to form a seamless image.
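One common way to realize such a seamless image is to cross-fade the two virtual viewpoint images in their overlapping strip. The linear alpha blend below is an assumed, simplified technique; the patent does not specify the blending method:

```python
import numpy as np

def blend_overlap(left, right, overlap):
    """Stitch two images whose last/first `overlap` columns cover the
    same field of view, cross-fading within the shared strip."""
    alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]
    strip = (alpha * left[:, -overlap:] +
             (1 - alpha) * right[:, :overlap]).astype(left.dtype)
    return np.hstack([left[:, :-overlap], strip, right[:, overlap:]])
```

Repeating the blend pairwise extends the idea to more than two screens, as the text above allows.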
  • FIG. 5 shows a diagram in which four virtual viewpoint images 10030, 10031, 10032, and 10033 according to the present embodiment are displayed side by side.
  • Unlike FIG. 4, the virtual viewpoint images displayed here need not be related to each other. Although four screens are arranged in the figure, the arrangement is not limited to this, and any number of screens may be arranged.
  • The display forms shown in FIGS. 3 to 5 are controlled by the display control unit 10001, and the display form can be selected by the display form selection unit 10002.
  • FIG. 6 shows the relationship between the switching of the operation mode and the display mode in the present embodiment.
  • The driving modes include, for example, a right turn mode, a left turn mode, a forward mode, a back mode, a high-speed running mode, and the like.
  • The viewpoint, the angle of view, and the display form (superimposed display (see Fig. 3), seamless display (see Fig. 4), side-by-side display (see Fig. 5), etc.) are determined according to each driving mode.
  • For example, in the right turn mode, a wide-angle virtual viewpoint image viewed from a rightward viewpoint may be displayed superimposed on the car navigation image.
  • Alternatively, the virtual viewpoint image of the rear side may be superimposed on the seamless image of the front side and displayed.
  • the display mode selecting means 10002 detects the operation mode.
  • The situation of the vehicle at that time can be detected from the gear, the speed, the turn signal, the steering angle, and the like.
  • Patent Document 1 discloses that a joystick is used to arbitrarily adjust a viewpoint.
  • an appropriate virtual viewpoint image can be displayed more easily by switching to a preset viewpoint, angle of view, display mode, or the like in accordance with the driving operation.
  • zoom, focus, exposure, shutter speed, and the like may be added to the conditions of the displayed image by switching.
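The preset switching described above can be pictured as a lookup from the detected driving mode to a stored viewpoint, angle of view, and display form. All preset values and thresholds below are invented placeholders, the patent fixes no concrete angles or forms; only the idea of mode-triggered presets follows the text:

```python
# Hypothetical presets mapping a driving mode to display settings.
DISPLAY_PRESETS = {
    "right_turn": {"viewpoint": "right", "angle_of_view": 120, "form": "superimposed"},
    "left_turn":  {"viewpoint": "left",  "angle_of_view": 120, "form": "superimposed"},
    "back":       {"viewpoint": "rear",  "angle_of_view": 140, "form": "seamless"},
    "high_speed": {"viewpoint": "front", "angle_of_view": 60,  "form": "side_by_side"},
}

def select_preset(gear, steering_angle_deg, speed_kmh):
    """Detect the driving mode from simple vehicle signals (illustrative
    thresholds) and return the matching preset."""
    if gear == "reverse":
        return DISPLAY_PRESETS["back"]
    if steering_angle_deg > 30:
        return DISPLAY_PRESETS["right_turn"]
    if steering_angle_deg < -30:
        return DISPLAY_PRESETS["left_turn"]
    if speed_kmh > 80:
        return DISPLAY_PRESETS["high_speed"]
    return {"viewpoint": "front", "angle_of_view": 90, "form": "single"}
```

Zoom, focus, exposure, and shutter speed could be added as further keys in each preset in the same way.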
  • Although the virtual viewpoint image here is generated based on the invention of Patent Document 1, the present invention is not limited to this. That is, any known technique may be used as long as a virtual viewpoint image is obtained.
  • control of the display mode of the virtual viewpoint image when the image generation device 10000 is mounted on a vehicle in a mode different from that of the first embodiment will be described.
  • the conceptual configuration of image generating apparatus 10000 itself in the present embodiment is the same as that in FIG.
  • the display mode selection unit 10002 selects the display mode according to the state of the vehicle, the operation status, and the like as described below. Note that this function may be added to the display mode selection unit 10002 of the first embodiment.
  • FIG. 7 shows the correspondence between the vehicle state and the display mode in the present embodiment.
  • Vehicles are equipped with sensors (temperature, humidity, pressure, illuminance, etc.), cameras (for in-vehicle use or body photography), and measuring instruments (existing instruments such as a tachometer, speedometer, coolant temperature gauge, oil pressure gauge, fuel gauge, etc. may be used).
  • the vehicle state "running steering angle” indicates that the display mode changes according to the degree of the steering angle during running. For example, when the steering wheel is turned by a predetermined steering angle or more, an image in the direction of the steering angle (hereinafter, referred to as an image such as a virtual viewpoint image) is displayed on the display.
  • the vehicle state "speed" indicates that the display mode changes in accordance with the degree of the speed. For example, a distant image (for example, linked to the safety stop distance) is displayed when the speed increases, and an entire surrounding image is displayed when the vehicle starts at a low speed or starts.
  • the vehicle state "acceleration" indicates that the display mode changes according to the degree of the acceleration. For example, a backward image is displayed during deceleration, and a far forward image is displayed during acceleration.
  • the vehicle state “gear” indicates that the display mode changes according to the gear position (e.g., 1st, 2nd, 3rd, ..., reverse, etc.). For example, the back image is displayed during backing.
  • the vehicle state “wiper” indicates that the display mode changes according to the operating state of the wiper (for example, whether or not the wiper is operating, the operating speed of the wiper, and the like). For example, an image in rain mode (a virtual viewpoint image on which processing such as specular reflection component removal and water droplet removal has been performed) is displayed in conjunction with the wiper.
  • the vehicle state "headlight, vehicle interior light ONZOFF brightness, etc.” indicates that the display mode changes according to the brightness of the headlight, vehicle interior light, and the like. For example, the brightness of the liquid crystal of the display is adjusted.
  • the volume of voice guidance is adjusted according to the stereo volume.
  • the display mode may be changed according to the stereo sound volume.
  • the vehicle state "dirt" indicates that the display mode changes according to the degree of dirt on the vehicle. ing. For example, warning of dirt on the vehicle (for example, front, rear, bumper, roof, bonnet, door, window, wheel, tire, etc.) (especially if the camera light-receiving part that may interfere with the system) Will be displayed.
  • the vehicle state “temperature, humidity” indicates that the display mode changes according to the temperature and humidity inside or outside the vehicle. For example, a warning is displayed for a place where the temperature or humidity inside the vehicle or on the road surface is abnormally high (or low, for example, when the temperature outside the vehicle is low and the road surface is covered with ice). Thereby, for example, slip prevention due to freezing can be achieved.
  • the vehicle state “oil amount” indicates that the display mode changes according to the remaining amount of oil. For example, when there is a possibility of an oil leak, an image behind or directly below the vehicle is displayed so that it is possible to confirm whether or not oil is leaking.
  • the vehicle state "number of passengers, riding position, and weight of loaded luggage” indicates that the display mode changes according to the number of passengers, the sitting position of the occupant, and the weight of loaded luggage. For example, a warning for correcting the safe stopping distance (specifically displaying the stopping distance superimposed on the video) is displayed.
  • the vehicle state "open / close window” indicates that the display mode changes according to the open / close state of the window. For example, the surrounding image is displayed so that the user can confirm whether there is no danger due to opening and closing, and the image of the window that can be opened or closed can be monitored with his / her hand or head.
  • the display related to the vehicle state is not limited to those described above; the braking distance, calculated in consideration of the vehicle state, the road surface state, and the weather, may be displayed in the traveling direction of the own vehicle as a bar-graph-like pattern.
  • a preset display mode can be selected according to the vehicle state as described above, so that an appropriate virtual viewpoint image can be displayed more easily.
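The braking-distance display mentioned above can be sketched with the usual reaction-plus-kinetic model; the friction coefficients and reaction time below are illustrative assumptions, not values from the document:

```python
# Hedged sketch of the braking-distance estimate behind the bar-graph display.
# Friction coefficients and reaction time are assumed illustrative values.
FRICTION = {"dry": 0.7, "wet": 0.4, "icy": 0.1}  # road-surface friction (assumed)
G = 9.8           # gravitational acceleration [m/s^2]
REACTION_S = 1.0  # assumed driver reaction time [s]

def braking_distance_m(speed_kmh, surface="dry"):
    """Reaction distance plus kinetic braking distance v^2 / (2 * mu * g)."""
    v = speed_kmh / 3.6                           # km/h -> m/s
    reaction = v * REACTION_S                     # distance covered before braking
    braking = v * v / (2.0 * FRICTION[surface] * G)
    return reaction + braking
```

The display side would then scale this distance into the bar-graph pattern projected ahead of the own vehicle.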
  • This embodiment is a modification of the second embodiment.
  • a display mode is selected for each state of each part of the vehicle.
  • a macro operation state of the vehicle is detected, and a display mode according to the operation state is selected. That is, in the present embodiment, the display mode selection unit 10002 focuses on the operation status of the vehicle and selects the display mode corresponding to that status. Note that the image generation device in the present embodiment is the same as in the first or second embodiment.
  • FIG. 8 shows an example of a mode of the operation state of the vehicle in the present embodiment.
  • the operation status modes of the present embodiment include “right turn mode”, “left turn mode”, “surrounding monitoring mode at start”, “in-vehicle monitoring mode”, “high-speed driving mode”, “backward monitoring mode”, “rainy driving mode”, “parallel parking mode”, and “garage parking mode”. Each mode will now be described.
  • the "right turn mode” displays an image in a direction in which the vehicle turns to the front. Specifically, when the vehicle makes a right turn, an image in front and an image in the right turn direction are displayed. “Left turn mode” displays images in the forward and ⁇ directions. More specifically, when the vehicle is turning left, an image in front and an image in the direction of left turn are displayed.
  • the “starting surrounding monitoring mode” displays a monitoring video around the vehicle when the vehicle starts.
  • the “in-vehicle monitoring mode” displays a monitoring image of the inside of the vehicle.
  • the “high-speed running mode” displays an image far ahead in the high-speed running mode.
  • in addition, a video image is displayed to confirm whether sudden braking can be applied safely, that is, whether there is enough distance from the following vehicle to stop with sudden braking.
  • Such a mode change may be automatically recognized by the image generation device 10000 according to the operation of the vehicle, or may be set by the user. In addition, these modes can be freely combined.
  • FIGS. 9A and 9B show an example of a setting format in the case where the user can set the above mode in the present embodiment.
  • the "right turn” / "left turn mode” button is It is displayed as a button for “surrounding monitoring at start-up and in-vehicle monitoring mode”, a button for “far and highway at high speed”, and a button for “rain”. Then, by pressing each selection button, the mode is switched to the mode.
  • the display mode (arrangement mode) of the buttons can be changed.
  • variations of the display form include “selection by selection button”, “arrangement of modes according to the situation”, and “frequently used functions (modes) displayed first (on top) by the learning function”.
  • “Selection by selection button” is an arrangement of normal selection buttons, set by default. This arrangement can be changed arbitrarily by the user, so the selection buttons can be arranged according to the user's own preference.
  • “The arrangement of the modes is changed according to the situation” means that the arrangement of the selection buttons is changed according to the operation state of the vehicle, the surrounding environment, the weather, and the like. That is, by detecting the operation of the vehicle, the surrounding environment, the weather, and the like, the arrangement of the selection buttons is changed, and a selection button that better suits the situation is displayed, for example, at the top. The arrangement of the selection buttons may also be changed according to the situation of each part as in the second embodiment.
  • “Frequently used functions (modes) are displayed first (on top) by the learning function” means that the user's selection history of the selection buttons is recorded sequentially, the selection frequency of each mode is calculated statistically from this history, and the selection buttons are arranged in order of selection frequency. More specifically, this is shown in FIGS. 9A and 9B.
  • as shown in FIGS. 9A and 9B, a car navigation information display frame 10035, on which images of the vehicle's surroundings and course can be superimposed, is placed outside (in the present embodiment, at the bottom), and the icons of the buttons "Back" 10037, "Left turn" 10038, and "Right turn" 10036 are adaptively arranged in order of selection frequency (in this embodiment, from left to right).
  • FIG. 9A shows the arrangement of the mode buttons before the rearrangement. For example, when the frequency of pressing "Back" 10037 increases as in FIG. 9A, "Back" 10037, now the most frequently selected button, is placed at the left end as shown in FIG. 9B.
  • the display mode can be changed according to the operation state of the vehicle and the user's preference. Therefore, it is possible to display an optimal virtual viewpoint image for the user.
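The learning function that moves frequently used buttons to the front can be sketched as a small frequency counter; the button names follow the example in the text, everything else is an assumption:

```python
# Sketch of the learning function: record button selections and reorder the
# buttons by selection frequency (most frequent first / leftmost).
from collections import Counter

class ButtonBar:
    def __init__(self, buttons):
        self.default_order = list(buttons)   # default arrangement
        self.history = Counter()             # sequential selection history

    def press(self, button):
        """Record one selection of `button`."""
        self.history[button] += 1

    def arrangement(self):
        """Buttons sorted by descending selection frequency; ties keep the default order."""
        return sorted(self.default_order,
                      key=lambda b: (-self.history[b], self.default_order.index(b)))
```

With no history the default arrangement is kept; once "Back" is pressed most often it moves to the front, mirroring the FIG. 9A to FIG. 9B transition.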
  • In Patent Document 1, there is no particular mention of how an image subjected to image processing such as viewpoint conversion is made easy to view or displayed clearly. Therefore, in the present embodiment, a clearer display is realized by a simple color-coded display.
  • FIG. 10 shows an image generation device 10000 according to the present embodiment.
  • the image generating apparatus 10000 of the figure is the same as the image generating apparatus 10000 of the first embodiment, except that a color coding / simplification unit 10040 and an object recognition unit 10041 are added.
  • the display mode selection unit 10002 is removed in the present embodiment, it may be attached according to the application.
  • the object recognizing means 10041 is means for recognizing each object (object) captured in the captured image.
  • Methods for recognizing these objects include methods that use only images obtained from a single-eye camera, methods that use the distance obtained from a laser range finder together with camera images, and methods based on the stereo method.
  • the stereo method is a method in which the same object is imaged by a plurality of cameras, corresponding points in the imaged images are extracted, and a distance image is calculated by triangulation.
  • a device that obtains a range image using these methods and the like is referred to as a stereo sensor.
  • An image including the distance information and the luminance or color information acquired by such a stereo sensor is referred to as a stereo image or a stereoscopic image.
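For rectified parallel cameras, the stereo method mentioned above reduces to the relation depth = focal length × baseline / disparity for each matched point pair. A minimal sketch (the focal length and baseline used in the check are illustrative, not values from the document):

```python
# Minimal sketch of the stereo triangulation principle: the same point imaged
# by two rectified cameras shifts horizontally by the disparity, from which
# the depth follows directly.

def disparity(u_left, u_right):
    """Horizontal shift of the same point between rectified left/right images [px]."""
    return u_left - u_right

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth Z = f * B / d for focal length f [px], baseline B [m], disparity d [px]."""
    if disparity_px <= 0:
        raise ValueError("point at infinity or invalid correspondence")
    return focal_px * baseline_m / disparity_px
```

Applying this to every matched pixel pair yields the distance image that, together with luminance or color, forms the stereo image referred to above.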
  • In Patent Document 2, a vehicle exterior monitoring device is disclosed in which a distance distribution is obtained over the entire captured image, and a three-dimensional object or a road shape is detected reliably, together with its accurate position and size, based on the information of the distance distribution.
  • the type of an object can be identified based on the shape and size and position of a contour image of the detected object.
  • In Patent Document 3, a vehicle exterior monitoring device is disclosed that reliably detects side walls, that is, continuous three-dimensional objects serving as road boundaries such as guardrails, plantings, and pylon rows, as data in a form that is easy to process.
  • the stereo optical system captures an image of an object within the installation range outside the vehicle, processes the image captured by the stereo optical system, and calculates the distance distribution over the entire image. Then, the three-dimensional position of each part of the subject corresponding to the distance distribution information is calculated, and the shape of the road and a plurality of three-dimensional objects are detected using the three-dimensional position information.
  • the object recognizing unit 10041 in the present embodiment can also recognize each object (object) captured in the image captured by the stereo image method as described above.
  • the processing itself can be performed by a conventional method, and a detailed description of the processing is omitted.
  • a plurality of cameras different from the camera 101 may be used, or the camera 101 may also be used.
  • in that case, care must be taken that the synchronization between the plurality of images is not lost.
  • in order for the object recognizing unit 10041 to color, in the virtual viewpoint image, the same object as the one it has recognized, a table (association table 10042) for associating objects between the stereo image and the virtual viewpoint image is provided.
  • the color coding / simplification means 10040 performs a process of performing color coding for each type of object such as a road surface, an obstacle, and a vehicle in the virtual viewpoint image.
  • FIG. 11 shows a processing flow in the present embodiment.
  • the spatial data creation processing (S11) and the viewpoint specification processing (S12) are the same as the processing of S1 and S2 in FIG. 2, respectively.
  • the object is recognized (S13). As described above, these processes themselves can use known methods exemplified in Patent Documents 2 and 3.
  • the recognized objects are classified by type.
  • the virtual viewpoint image acquired in S12 is associated with the object recognized in S13 (S14). This association is performed using an association table.
  • the position of each pixel constituting the image captured by the camera 101 is generally represented as coordinates on the UV plane including the CCD image plane.
  • a point on the UV plane where the pixel captured by the camera 101 exists is associated with a point in the world coordinate system.
  • the processing is performed by the spatial reconstruction means 104.
  • the position of the object recognized by the stereo image camera can be specified on the world coordinate system via the UV coordinate system. This makes it possible to associate each object in the stereo image with the same object in the virtual viewpoint image.
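The UV-plane / world-coordinate association can be sketched with a simple pinhole projection; the intrinsic parameters and the nearest-pixel matching rule below are assumptions for illustration, not the patent's actual association table:

```python
# Sketch of associating objects via the UV plane: a world point (in
# camera-centred coordinates, Z forward) projects to pixel (u, v), and objects
# whose projections coincide are paired in a toy association table.

def project_to_uv(point_w, f=500.0, cx=320.0, cy=240.0):
    """Pinhole projection of a 3-D point to UV coordinates (assumed intrinsics)."""
    x, y, z = point_w
    if z <= 0:
        raise ValueError("point behind the camera")
    return (f * x / z + cx, f * y / z + cy)

def associate(objects_stereo, objects_virtual, tol=5.0):
    """Pair stereo-image objects with virtual-viewpoint objects whose UV centres
    lie within `tol` pixels (toy stand-in for association table 10042)."""
    table = {}
    for name, pw in objects_stereo.items():
        u, v = project_to_uv(pw)
        for vname, (vu, vv) in objects_virtual.items():
            if abs(u - vu) <= tol and abs(v - vv) <= tol:
                table[name] = vname
    return table
```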
  • coloring is performed for each type of object in the virtual viewpoint image (S15).
  • this process is performed by the color coding / simplification means 10040, which performs color coding for each type of object, such as a road surface, an obstacle, and a vehicle, in the virtual viewpoint image associated in S14.
  • each object recognized by the object recognition means 10041 is colored differently according to the type of object, such as road or vehicle, classified in S13.
  • for example, the road surface in the virtual viewpoint image is colored gray, vehicles red, and other obstacles blue.
  • as for the coloring of each object in the virtual viewpoint image, the texture may be rendered monotone so that each object is displayed in a single color, or a semi-transparent color may be superimposed on it.
  • the display mode of the image after the processing of S15 is controlled (S16).
  • the display mode is controlled for output to the display means 107.
  • a display mode may be selected as in the first embodiment.
  • the image is output to the display means 107 and displayed on the display means 107 (S17).
  • although the processing here follows the flow of Fig. 11, it is not limited to this; any flow that yields an equivalent result is acceptable.
  • for example, a stereo sensor may be used in S11 and S12 to acquire three-dimensional information at the same time.
  • the space model used for the space model generation in S11 described above may be a space model composed of five planes, a bowl-shaped space model, a space model configured by combining a plane and a curved surface, a space model into which a curved surface is introduced, or a combination thereof. Note that the space model is not limited to these, and is not particularly limited as long as it is composed of planes, a curved surface, or a combination of a plane and a curved surface.
  • the image projected for each space model may be similarly colored for each object type.
  • the display is made clearer by color-coding etc. according to each object model of the space model.
  • objects in the projected image may be not only colored but also simplified, that is, displayed as figures abstracted into circles, squares, triangles, rectangles, trapezoids, or the like.
  • the present invention is not limited to these figures, and is not particularly limited as long as a simplified figure, symbol, mark, or the like is used.
  • the objects in the virtual viewpoint image are color-coded and simplified, so that an obstacle or the like can be easily recognized even during driving.
  • the virtual viewpoint image may be displayed as a wide-angle virtual viewpoint image and an enlarged virtual viewpoint image.
  • FIG. 12 shows an example in which a wide-angle virtual viewpoint image 10050 and a virtual viewpoint image 10051 in which an obstacle is enlarged in the vicinity thereof are displayed together in the present embodiment.
  • Patent Document 1 merely discloses, with respect to viewpoint adjustment when the driver changes or the seating position is adjusted, that an arbitrary point can be displayed. Therefore, in the present embodiment, by using a viewpoint preset for each driver, the displayed image can be viewed more easily.
  • FIG. 13 shows an image generating apparatus 10000 according to the present embodiment.
  • the image generating apparatus 10000 of the figure is obtained by adding a user information acquisition unit 10060 to the image generating apparatus 10000 of the first embodiment.
  • although the display form selection means 10002 is removed here, it may be attached according to the application.
  • a face image of a driver is obtained by a camera that monitors the inside of a vehicle, and a viewpoint is obtained by extracting an eyeball position from the face image by a general image processing technique.
  • the position of the virtual viewpoint for displaying the virtual viewpoint image can be customized based on the measurement result.
  • the driver's visual field may be measured from the face image, and a virtual visual point image having an angle of view corresponding to the visual field may be displayed.
  • the measurement of the visual field may be estimated based on the degree of opening of the eyes in the face image.
  • as another example, the user information acquisition unit 10060 may be configured to estimate the driver's posture or the like and calculate the user's viewpoint from it. For example, since the driver's head position can be measured from the driver's height (or sitting height) registered in advance and the current seat inclination angle, the approximate viewpoint position of the driver can be determined from this.
  • the viewpoint position may be calculated in this manner.
  • further, the viewpoint position or the angle of view may be calculated in consideration of the position of the seat (forward/backward slide position, height, etc.) or whether the driver is wearing glasses.
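The head-position estimate from registered sitting height and seat recline could be sketched as below; the rigid-torso model and the 0.9 eye-height factor are illustrative assumptions, not values from the document:

```python
# Hedged sketch: estimate the driver's eye position in vehicle coordinates
# (x forward, z up) from pre-registered sitting height and seat recline angle.
import math

def eye_position(seat_x, seat_height, sitting_height_m, recline_deg):
    """Model the torso as a rigid segment tilted backward by `recline_deg`;
    the eyes are assumed to sit at 0.9 of the sitting height along the torso."""
    torso = 0.9 * sitting_height_m
    tilt = math.radians(recline_deg)
    x = seat_x - torso * math.sin(tilt)   # leaning back moves the eyes rearward
    z = seat_height + torso * math.cos(tilt)
    return (x, z)
```

The resulting position would be handed to the display control means 10001 as the customized virtual viewpoint.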
  • the user information obtained in this way by the user information obtaining means 10060 is transmitted to the display control means 10001. Then, the display control means 10001 displays the optimal virtual viewpoint image for the user based on this information.
  • thus, each driver can view the virtual viewpoint image without feeling uncomfortable.
  • a virtual viewpoint image from a virtual viewpoint is displayed so as to be superimposed on a part of the image of the car navigation system.
  • the virtual viewpoint image is displayed superimposed at an appropriate position according to the map information.
  • the main body of the image generating apparatus according to the present embodiment is the same as that of the first embodiment.
  • the car navigation image is a kind of virtual viewpoint image, and another virtual viewpoint image is superimposed on this image in the same manner as in the other embodiments.
  • FIG. 14 shows virtual viewpoint images 10071, 10072, and 10073 superimposed on a part of the car navigation image 10070 in the present embodiment.
  • Such control is performed by the display control means 10001.
  • the virtual viewpoint images 10071, 10072, and 10073 are displayed semi-transparently so as to be superimposed on the image 10070 of the car navigation system.
  • the virtual viewpoint image can be viewed while being navigated by the car navigation system.
  • since the virtual viewpoint image is controlled to be displayed at a position that does not interfere with the navigation, the user is not inconvenienced in using the navigation.
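The semi-transparent superimposition onto the car navigation image is, in essence, alpha blending of a patch at a chosen position; a pure-Python sketch (the image-as-nested-list layout and the 0.5 opacity are assumptions):

```python
# Sketch of semi-transparent superimposition: blend a virtual viewpoint image
# `patch` over the navigation image `base` at (top, left) with opacity `alpha`.

def overlay(base, patch, top, left, alpha=0.5):
    """Return a copy of `base` with `patch` alpha-blended in at (top, left)."""
    out = [row[:] for row in base]
    for y, prow in enumerate(patch):
        for x, (pr, pg, pb) in enumerate(prow):
            br, bg, bb = out[top + y][left + x]
            out[top + y][left + x] = (
                int(alpha * pr + (1 - alpha) * br),
                int(alpha * pg + (1 - alpha) * bg),
                int(alpha * pb + (1 - alpha) * bb),
            )
    return out
```

Choosing (top, left) so the patch avoids the route and guidance text corresponds to the "position that does not interfere with the navigation" above.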
  • in Patent Document 1, there is no mention of focus adjustment of a camera, and no particular consideration is given to appropriately displaying an image of an obstacle, particularly one at a short distance.
  • the camera is provided with an AF (auto focus) function, and when the object is closer in the stereoscopic view, the focus of the lens is adjusted to the short-distance side. That is, the camera is set to what is generally called a macro mode, in which a large image is taken at a position close to the object to be imaged. In this way, an image focused for three-dimensional reconstruction can be obtained even at a short distance.
  • similarly, a highly focused image can be obtained for a long-distance view, which leads to an improvement in long-range accuracy.
  • This embodiment can be combined with the first to seventh embodiments.
  • the present invention is not limited to this, and the respective embodiments of the first to eighth embodiments can be combined with each other depending on the application.
  • the space model and the screen display shown in FIGS. 3 to 5, 9A, 9B, 12 and 14 can be applied between the embodiments.
  • FIG. 15 is a configuration block diagram of a hardware environment of the image generation device 10000 according to the first to eighth embodiments.
  • an image generation device 10000 includes at least a control device 10080 such as a central processing unit (CPU), and a storage device 10081 such as a read-only memory (ROM), a random access memory (RAM), or a large-capacity storage device.
  • the devices connected to the input I/F include, for example, the camera 101, an in-vehicle camera, a stereo sensor and various sensors (see the second to fourth embodiments), input devices such as a keyboard and a mouse, portable storage medium reading devices such as CD-ROM and DVD drives, and other peripheral devices.
  • the device connected to the communication I/F 10084 is, for example, a car navigation system, or a communication device connected to the Internet or GPS.
  • the communication medium may be a communication network such as the Internet, a LAN, a WAN, a dedicated line, a wired line, a wireless line, or the like.
  • as the storage device 10081, various types of storage devices such as a hard disk and a magnetic disk can be used; the program of the flow described in the above embodiments, the tables (for example, the table relating the vehicle state to the display form), various setting values, and the like are stored therein. This program is read by the control device 10080, and each process of the flow is executed.
  • this program may be provided from a program provider via the Internet through the communication I/F 10084 and stored in the storage device 10081. Further, this program may be stored in a portable storage medium that is sold and distributed; the portable storage medium in which the program is stored can then be set in the reading device and executed by the control device.
  • as the portable storage medium, various types of storage media such as a CD-ROM, DVD, flexible disk, optical disk, magneto-optical disk, and IC card can be used, and the program stored in such a storage medium is read by the reading device.
  • as the input device, a keyboard, a mouse, an electronic camera, a microphone, a scanner, a sensor, a tablet, or the like can be used.
  • other peripheral devices can be connected.
  • the entire content of Patent Document 1 is incorporated into this specification by reference.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An image generation device includes: space reconfiguration means for mapping an input image from one or more cameras onto a predetermined space model of a 3D space; viewpoint conversion means for referencing the space data mapped by the space reconfiguration means and generating a virtual viewpoint image as an image viewed from an arbitrary virtual viewpoint in the 3D space; and display control means for controlling display of the virtual viewpoint images viewed from different viewpoints.

Description

Image generation device, image generation program, and image generation method
Technical field
[0001] The present invention relates to an apparatus and a method for displaying, as a single combined image, a plurality of images captured by one or several cameras, rather than displaying the images independently of one another, so that the overall state of the area covered by the one or several cameras can be grasped intuitively. For example, the present invention relates to a technique suitably applicable to a monitor device in a store, a vehicle surroundings monitor device that assists safety confirmation when driving a vehicle, and the like.
Background art
[0002] In recent years, image generation devices that display images captured by a plurality of cameras in an easily viewable manner have been disclosed (for example, Patent Document 1). Patent Document 1 discloses an image generation device that synthesizes images of an area captured by a plurality of cameras (for example, the vicinity of a vehicle) into one continuous image and displays the synthesized image.
[0003] Further, in Patent Document 1, a space model set appropriately in advance is created by space model creation means, or a space model set according to the distance to obstacles around the vehicle detected by obstacle detection means is created by the space model creation means. The image of the vehicle surroundings, input by image input means from cameras installed on the vehicle, is mapped onto the space model by mapping means. Subsequently, a single image seen from the viewpoint determined by viewpoint conversion means is synthesized from the mapped images and displayed by display means.
[0004] According to the device installed in a vehicle described in Patent Document 1, what kind of objects are present in the vicinity of the vehicle can be synthesized into a single image over the entire periphery of the vehicle, in a manner as close to reality and as easy to understand as possible, and the image can be provided to the driver. At this time, the viewpoint conversion means makes it possible to display an image from the driver's desired viewpoint. Patent Document 1: Japanese Patent No. 3286306
Patent Document 2: Japanese Patent Application Laid-Open No. 05-265547
Patent Document 3: Japanese Patent Application Laid-Open No. 06-266828
Disclosure of the invention
[0005] An image generation device according to the present invention includes: space reconstruction means for mapping an input image from one or a plurality of cameras onto a predetermined space model in a three-dimensional space; viewpoint conversion means for generating, with reference to the space data mapped by the space reconstruction means, a virtual viewpoint image that is an image seen from an arbitrary virtual viewpoint in the three-dimensional space; and display control means for controlling display of a plurality of the virtual viewpoint images seen from mutually different viewpoints.
[0006] Further, an image generation device according to the present invention includes: space reconstruction means for mapping an input image from one or a plurality of cameras mounted on a vehicle onto a predetermined space model in a three-dimensional space; viewpoint conversion means for generating, with reference to the space data mapped by the space reconstruction means, a virtual viewpoint image that is an image seen from an arbitrary virtual viewpoint in the three-dimensional space; display control means for controlling display of a plurality of the virtual viewpoint images seen from mutually different viewpoints; and display form selection means for selecting a display form used by the display control means according to the state of the vehicle.
[0007] また、本発明にかかる画像生成装置は、車両に搭載された1または複数のカメラからの入力画像を3次元空間の予め決められた空間モデルにマッピングする空間再構成手段と、前記空間再構成手段によってマッピングされた空間データを参照して前記3次元空間における任意の仮想視点から見た画像である仮想視点画像を生成する視点変換手段と、それぞれ互いに異なった視点から見た複数の前記仮想視点画像を表示することを制御する表示制御手段と、前記車両に係る動作状況に応じて、前記表示制御手段による表示形態を選択する表示形態選択手段と、を備える。  [0007] An image generation apparatus according to the present invention includes: space reconstruction means for mapping input images from one or more cameras mounted on a vehicle onto a predetermined space model in a three-dimensional space; viewpoint conversion means for generating, with reference to the spatial data mapped by the space reconstruction means, a virtual viewpoint image that is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space; display control means for controlling display of a plurality of the virtual viewpoint images viewed from mutually different viewpoints; and display mode selection means for selecting a display mode used by the display control means in accordance with an operation status of the vehicle.
[0008] また、本発明にかかる画像生成装置は、車両に搭載された1または複数のカメラからの入力画像を3次元空間の予め決められた空間モデルにマッピングする空間再構成手段と、前記空間再構成手段によってマッピングされた空間データを参照して前記3次元空間における任意の仮想視点から見た画像である仮想視点画像を生成する視点変換手段と、それぞれ互いに異なった視点から見た複数の前記仮想視点画像を表示することを制御する表示制御手段と、操作者による操作状況に応じて、前記表示制御手段による表示形態を選択する表示形態選択手段と、を備える。  [0008] An image generation apparatus according to the present invention includes: space reconstruction means for mapping input images from one or more cameras mounted on a vehicle onto a predetermined space model in a three-dimensional space; viewpoint conversion means for generating, with reference to the spatial data mapped by the space reconstruction means, a virtual viewpoint image that is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space; display control means for controlling display of a plurality of the virtual viewpoint images viewed from mutually different viewpoints; and display mode selection means for selecting a display mode used by the display control means in accordance with an operation status by an operator.
[0009] また、本発明にかかる画像生成装置は、1または複数のカメラからの入力画像を3次元空間の予め決められた空間モデルにマッピングする空間再構成手段と、前記空間再構成手段によってマッピングされた空間データを参照して、前記3次元空間における任意の仮想視点から見た画像である仮想視点画像を生成する視点変換手段と、前記仮想視点画像に撮影されている同種の物体毎に色分けする色分け手段と、前記色分け手段にて変換された画像を表示する表示制御手段と、を備える。  [0009] An image generation apparatus according to the present invention includes: space reconstruction means for mapping input images from one or more cameras onto a predetermined space model in a three-dimensional space; viewpoint conversion means for generating, with reference to the spatial data mapped by the space reconstruction means, a virtual viewpoint image that is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space; color-coding means for color-coding each kind of object captured in the virtual viewpoint image; and display control means for displaying the image converted by the color-coding means.
[0010] また、本発明にかかる画像生成装置は、1または複数のカメラからの入力画像を3次元空間の予め決められた空間モデルにマッピングする空間再構成手段と、前記空間再構成手段によってマッピングされた空間データを参照して、前記3次元空間における任意の仮想視点から見た画像である仮想視点画像を生成する視点変換手段と、前記視点変換手段にて変換された画像を表示する表示制御手段と、を備え、前記視点変換手段は、ユーザの固有の情報またはユーザの状態に関する情報であるユーザ情報に基づいて前記仮想視点画像を生成する。  [0010] An image generation apparatus according to the present invention includes: space reconstruction means for mapping input images from one or more cameras onto a predetermined space model in a three-dimensional space; viewpoint conversion means for generating, with reference to the spatial data mapped by the space reconstruction means, a virtual viewpoint image that is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space; and display control means for displaying the image converted by the viewpoint conversion means, wherein the viewpoint conversion means generates the virtual viewpoint image based on user information, which is information unique to a user or information on a state of the user.
[0011] 本発明にかかる画像生成プログラムを記録したコンピュータ読み取り可能な記録媒体は、1または複数のカメラからの入力画像を3次元空間の予め決められた空間モデルにマッピングする空間再構成処理と、前記空間再構成処理によってマッピングされた空間データを参照して、前記3次元空間における任意の仮想視点から見た画像である仮想視点画像を生成する視点変換処理と、それぞれ相互に異なった視点から見た複数の前記仮想視点画像を表示することを制御する表示制御処理と、を、コンピュータに実行させる。  [0011] A computer-readable recording medium according to the present invention records an image generation program that causes a computer to execute: space reconstruction processing for mapping input images from one or more cameras onto a predetermined space model in a three-dimensional space; viewpoint conversion processing for generating, with reference to the spatial data mapped by the space reconstruction processing, a virtual viewpoint image that is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space; and display control processing for controlling display of a plurality of the virtual viewpoint images viewed from mutually different viewpoints.
[0012] また、本発明にかかる画像生成プログラムを記録したコンピュータ読み取り可能な記録媒体は、1または複数のカメラからの入力画像を3次元空間の予め決められた空間モデルにマッピングする空間再構成処理と、前記空間再構成処理によってマッピングされた空間データを参照して、前記3次元空間における任意の仮想視点から見た画像である仮想視点画像を生成する視点変換処理と、前記仮想視点画像に撮影されている同種の物体毎に色分けする色分け処理と、前記色分け処理にて変換された画像を表示する表示処理と、を、コンピュータに実行させる。  [0012] A computer-readable recording medium according to the present invention records an image generation program that causes a computer to execute: space reconstruction processing for mapping input images from one or more cameras onto a predetermined space model in a three-dimensional space; viewpoint conversion processing for generating, with reference to the spatial data mapped by the space reconstruction processing, a virtual viewpoint image that is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space; color-coding processing for color-coding each kind of object captured in the virtual viewpoint image; and display processing for displaying the image converted by the color-coding processing.
[0013] また、本発明にかかる画像生成プログラムを記録したコンピュータ読み取り可能な記録媒体は、1または複数のカメラからの入力画像を3次元空間の予め決められた空間モデルにマッピングする空間再構成処理と、前記空間再構成処理によってマッピングされた空間データを参照して、前記3次元空間における任意の仮想視点から見た画像である仮想視点画像を生成する視点変換処理と、前記視点変換処理にて変換された画像を表示する表示制御処理と、を、コンピュータに実行させ、前記視点変換処理は、ユーザの固有の情報またはユーザの状態に関する情報であるユーザ情報に基づいて前記仮想視点画像を生成する。  [0013] A computer-readable recording medium according to the present invention records an image generation program that causes a computer to execute: space reconstruction processing for mapping input images from one or more cameras onto a predetermined space model in a three-dimensional space; viewpoint conversion processing for generating, with reference to the spatial data mapped by the space reconstruction processing, a virtual viewpoint image that is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space; and display control processing for displaying the image converted by the viewpoint conversion processing, wherein the viewpoint conversion processing generates the virtual viewpoint image based on user information, which is information unique to a user or information on a state of the user.
[0014] また、本発明にかかる画像生成プログラムを記録したコンピュータ読み取り可能な記録媒体は、車両に搭載された1または複数のカメラからの入力画像を3次元空間の予め決められた空間モデルにマッピングする空間再構成処理と、前記空間再構成処理によってマッピングされた空間データを参照して前記3次元空間における任意の仮想視点から見た画像である仮想視点画像を生成する視点変換処理と、それぞれ互いに異なった視点から見た複数の前記仮想視点画像を表示することを制御する表示制御処理と、前記車両の状態に応じて、前記表示制御処理による表示形態を選択する表示形態選択処理と、を、コンピュータに実行させる。  [0014] A computer-readable recording medium according to the present invention records an image generation program that causes a computer to execute: space reconstruction processing for mapping input images from one or more cameras mounted on a vehicle onto a predetermined space model in a three-dimensional space; viewpoint conversion processing for generating, with reference to the spatial data mapped by the space reconstruction processing, a virtual viewpoint image that is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space; display control processing for controlling display of a plurality of the virtual viewpoint images viewed from mutually different viewpoints; and display mode selection processing for selecting a display mode used by the display control processing in accordance with a state of the vehicle.
[0015] また、本発明にかかる画像生成プログラムを記録したコンピュータ読み取り可能な記録媒体は、車両に搭載された1または複数のカメラからの入力画像を3次元空間の予め決められた空間モデルにマッピングする空間再構成処理と、前記空間再構成処理によってマッピングされた空間データを参照して前記3次元空間における任意の仮想視点から見た画像である仮想視点画像を生成する視点変換処理と、それぞれ互いに異なった視点から見た複数の前記仮想視点画像を表示することを制御する表示制御処理と、前記車両に係る動作状況に応じて、前記表示制御処理による表示形態を選択する表示形態選択処理と、を、コンピュータに実行させる。  [0015] A computer-readable recording medium according to the present invention records an image generation program that causes a computer to execute: space reconstruction processing for mapping input images from one or more cameras mounted on a vehicle onto a predetermined space model in a three-dimensional space; viewpoint conversion processing for generating, with reference to the spatial data mapped by the space reconstruction processing, a virtual viewpoint image that is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space; display control processing for controlling display of a plurality of the virtual viewpoint images viewed from mutually different viewpoints; and display mode selection processing for selecting a display mode used by the display control processing in accordance with an operation status of the vehicle.
[0016] また、本発明にかかる画像生成プログラムを記録したコンピュータ読み取り可能な記録媒体は、車両に搭載された1または複数のカメラからの入力画像を3次元空間の予め決められた空間モデルにマッピングする空間再構成処理と、前記空間再構成処理によってマッピングされた空間データを参照して前記3次元空間における任意の仮想視点から見た画像である仮想視点画像を生成する視点変換処理と、それぞれ互いに異なった視点から見た複数の前記仮想視点画像を表示することを制御する表示制御処理と、操作者による操作状況に応じて、前記表示制御処理による表示形態を選択する表示形態選択処理と、を、コンピュータに実行させる。  [0016] A computer-readable recording medium according to the present invention records an image generation program that causes a computer to execute: space reconstruction processing for mapping input images from one or more cameras mounted on a vehicle onto a predetermined space model in a three-dimensional space; viewpoint conversion processing for generating, with reference to the spatial data mapped by the space reconstruction processing, a virtual viewpoint image that is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space; display control processing for controlling display of a plurality of the virtual viewpoint images viewed from mutually different viewpoints; and display mode selection processing for selecting a display mode used by the display control processing in accordance with an operation status by an operator.
[0017] 本発明にかかる画像生成方法は、1または複数のカメラからの入力画像を3次元空間の予め決められた空間モデルにマッピングし、前記マッピングされた空間データを参照して、前記3次元空間における任意の仮想視点から見た画像である仮想視点画像を生成し、それぞれ相互に異なった視点から見た複数の前記仮想視点画像を表示することを制御する。  [0017] An image generation method according to the present invention maps input images from one or more cameras onto a predetermined space model in a three-dimensional space, generates, with reference to the mapped spatial data, a virtual viewpoint image that is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, and controls display of a plurality of the virtual viewpoint images viewed from mutually different viewpoints.
[0018] また、本発明にかかる画像生成方法は、1または複数のカメラからの入力画像を3次元空間の予め決められた空間モデルにマッピングし、前記マッピングされた空間データを参照して、前記3次元空間における任意の仮想視点から見た画像である仮想視点画像を生成し、前記仮想視点画像に撮影されている同種の物体毎に色分けし、前記色分けされた画像を表示する。  [0018] An image generation method according to the present invention maps input images from one or more cameras onto a predetermined space model in a three-dimensional space, generates, with reference to the mapped spatial data, a virtual viewpoint image that is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, color-codes each kind of object captured in the virtual viewpoint image, and displays the color-coded image.
[0019] また、本発明にかかる画像生成方法は、1または複数のカメラからの入力画像を3次元空間の予め決められた空間モデルにマッピングし、前記マッピングされた空間データを参照して、前記3次元空間における任意の仮想視点から見た画像である仮想視点画像を生成し、前記仮想視点画像を表示する画像生成方法であって、前記仮想視点画像は、ユーザの固有の情報またはユーザの状態に関する情報であるユーザ情報に基づいて生成される。  [0019] An image generation method according to the present invention maps input images from one or more cameras onto a predetermined space model in a three-dimensional space, generates, with reference to the mapped spatial data, a virtual viewpoint image that is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, and displays the virtual viewpoint image, wherein the virtual viewpoint image is generated based on user information, which is information unique to a user or information on a state of the user.
[0020] また、本発明にかかる画像生成方法は、車両に搭載された1または複数のカメラからの入力画像を3次元空間の予め決められた空間モデルにマッピングし、前記マッピングされた空間データを参照して前記3次元空間における任意の仮想視点から見た画像である仮想視点画像を生成し、それぞれ互いに異なった視点から見た複数の前記仮想視点画像の表示形態を制御し、前記車両の状態に応じて、前記表示形態を選択する。  [0020] An image generation method according to the present invention maps input images from one or more cameras mounted on a vehicle onto a predetermined space model in a three-dimensional space, generates, with reference to the mapped spatial data, a virtual viewpoint image that is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, controls a display mode of a plurality of the virtual viewpoint images viewed from mutually different viewpoints, and selects the display mode in accordance with a state of the vehicle.
[0021] また、本発明にかかる画像生成方法は、車両に搭載された1または複数のカメラからの入力画像を3次元空間の予め決められた空間モデルにマッピングし、前記マッピングされた空間データを参照して前記3次元空間における任意の仮想視点から見た画像である仮想視点画像を生成し、それぞれ互いに異なった視点から見た複数の前記仮想視点画像の表示形態を制御し、前記車両に係る動作状況に応じて、前記表示形態を選択する。  [0021] An image generation method according to the present invention maps input images from one or more cameras mounted on a vehicle onto a predetermined space model in a three-dimensional space, generates, with reference to the mapped spatial data, a virtual viewpoint image that is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, controls a display mode of a plurality of the virtual viewpoint images viewed from mutually different viewpoints, and selects the display mode in accordance with an operation status of the vehicle.
[0022] また、本発明にかかる画像生成方法は、車両に搭載された1または複数のカメラからの入力画像を3次元空間の予め決められた空間モデルにマッピングし、前記マッピングされた空間データを参照して前記3次元空間における任意の仮想視点から見た画像である仮想視点画像を生成し、それぞれ互いに異なった視点から見た複数の前記仮想視点画像の表示形態を制御し、操作者による操作状況に応じて、前記表示形態を選択する。  [0022] An image generation method according to the present invention maps input images from one or more cameras mounted on a vehicle onto a predetermined space model in a three-dimensional space, generates, with reference to the mapped spatial data, a virtual viewpoint image that is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, controls a display mode of a plurality of the virtual viewpoint images viewed from mutually different viewpoints, and selects the display mode in accordance with an operation status by an operator.
図面の簡単な説明  Brief Description of Drawings
[0023] [図 1]第 1の実施形態における画像生成装置を示す図である。 FIG. 1 is a diagram showing an image generation device according to a first embodiment.
[図 2]第 1の実施形態における仮想視点画像の表示フローを示す図である。  FIG. 2 is a diagram showing a display flow of a virtual viewpoint image in the first embodiment.
[図 3]第 1の実施形態における重畳表示させた仮想視点画像を示す図である。  FIG. 3 is a diagram showing a virtual viewpoint image superimposed and displayed in the first embodiment.
[図 4]第 1の実施形態における複数の仮想視点画像をパノラマ合成した例を示す図 である。  FIG. 4 is a diagram showing an example in which a plurality of virtual viewpoint images according to the first embodiment are panorama-synthesized.
[図 5]第 1の実施形態における 4つの仮想視点画像を並べて表示した図である。  FIG. 5 is a diagram in which four virtual viewpoint images according to the first embodiment are displayed side by side.
[図 6]第 1の実施形態における運転モードの切り換えと表示形態との関係を示す図で ある。  FIG. 6 is a diagram showing the relationship between the switching of the operation mode and the display mode in the first embodiment.
[図 7]第 2の実施形態における車両状態と表示形態との対応関係を示す図である。  FIG. 7 is a diagram showing a correspondence relationship between a vehicle state and a display mode in the second embodiment.
[図 8]第 3の実施形態における車両の動作状況のモードの一例を示す図である。  FIG. 8 is a diagram showing an example of a mode of an operation state of a vehicle according to a third embodiment.
[図 9A]第 3の実施形態におけるモード切り換えに用いる各ボタンを示す図である。  FIG. 9A is a diagram showing buttons used for mode switching in the third embodiment.
[図 9B]第 3の実施形態における学習による配置変更(自動変更 ·手動変更)機能によ り配置が変更されたモードボタンを示す図である。  FIG. 9B is a diagram showing a mode button whose arrangement has been changed by an arrangement change (automatic change / manual change) function by learning according to the third embodiment.
[図 10]第 4の実施形態における画像生成装置を示す図である。  FIG. 10 is a diagram illustrating an image generation device according to a fourth embodiment.
[図 11]第 4の実施形態における処理フローを示す図である。  FIG. 11 is a diagram showing a processing flow in a fourth embodiment.
[図 12]第 5の実施形態における広角の仮想視点画像とその近接して障害物が拡大さ れた仮想視点画像とを共に表示した一例を示す図である。 [図 13]第 6の実施形態における画像生成装置を示す図である。 FIG. 12 is a diagram showing an example of displaying both a wide-angle virtual viewpoint image and a virtual viewpoint image in which an obstacle is enlarged in proximity to the wide-angle virtual viewpoint image in the fifth embodiment. FIG. 13 is a diagram illustrating an image generation device according to a sixth embodiment.
[図 14]第 7の実施形態におけるカーナビゲーシヨンの画像の一部に重畳させた 3つ の仮想視点画像を示す図である。  FIG. 14 is a diagram showing three virtual viewpoint images superimposed on a part of a car navigation image in the seventh embodiment.
[図15]第1~第8の実施形態における画像生成装置のハードウェア環境の構成ブロック図である。  FIG. 15 is a block diagram of the hardware environment of the image generation devices according to the first to eighth embodiments.
発明を実施するための最良の形態  BEST MODE FOR CARRYING OUT THE INVENTION
[0024] 特許文献1は、複数のカメラで撮影した領域(例えば、車両近辺)の画像を連続的な1枚の画像として合成し、その合成した画像を仮想の3次元空間モデルにマッピングして、そのマッピングしたデータから仮想的に3次元上で視点を変えた画像(仮想視点画像)をどのようにして生成するかに主題をおいた技術であり、その表示の方法や表示形態等についてユーザインターフェースにおける利便性を向上させるということについては十分に具体的な提案をするものではない。  [0024] Patent Document 1 is a technique whose main subject is how to combine images of an area (for example, the vicinity of a vehicle) captured by a plurality of cameras into one continuous image, map the combined image onto a virtual three-dimensional space model, and generate from the mapped data an image whose viewpoint is virtually changed in three dimensions (a virtual viewpoint image); it does not make a sufficiently concrete proposal for improving the convenience of the user interface in terms of the display method, display mode, and so on.
[0025] 本発明にかかる実施形態では、仮想視点画像をユーザの利便性を考慮して表示させる画像生成装置を説明する。  [0025] In the embodiments according to the present invention, an image generation apparatus that displays virtual viewpoint images in consideration of user convenience will be described.
<第 1の実施形態 >  <First embodiment>
先ず、仮想視点画像を複数表示させる技術について説明する。  First, a technique for displaying a plurality of virtual viewpoint images will be described.
[0026] 図 1は、本発明の一実施形態における画像生成装置 10000を示す。同図において 、本発明による画像生成装置 10000は、複数台のカメラ 101、カメラパラメータテー ブル 103、空間再構成手段 104、空間データバッファ 105、視点変換手段 106、表 示制御手段 10001、表示形態選択手段 10002、及び表示手段 107を含んで構成さ れる。 FIG. 1 shows an image generating apparatus 10000 according to an embodiment of the present invention. In the figure, an image generation apparatus 10000 according to the present invention includes a plurality of cameras 101, a camera parameter table 103, a spatial reconstruction means 104, a spatial data buffer 105, a viewpoint conversion means 106, a display control means 10001, a display mode selection. Means 10002 and display means 107.
[0027] 複数台のカメラ101は、監視対象領域の状況を把握するのに適合した状態で設けられている。カメラ101は、例えば、車両の周囲の状況など監視すべき空間の画像を取り込む複数のテレビカメラである。このカメラ101は、大きな視野を得ることができるよう、通常、画角が大きいものを使うのが好ましい。このカメラ101の設置台数、設置状態等については、例えば特許文献1に開示のような公知の態様でもよい。  [0027] The plurality of cameras 101 are provided in a state suited to grasping the status of the monitored area. The cameras 101 are, for example, a plurality of television cameras that capture images of the space to be monitored, such as the surroundings of the vehicle. It is usually preferable to use cameras 101 with a wide angle of view so that a large field of view can be obtained. The number and arrangement of the cameras 101 may follow a known configuration such as that disclosed in Patent Document 1.
[0028] カメラパラメータテーブル103には、カメラ101の特性を示すカメラパラメータを格納してある。ここで、カメラパラメータについて説明する。画像生成装置10000にはキャリブレーション手段(不図示)が設けられており、カメラキャリブレーションを行う。カメラキャリブレーションとは、3次元空間に配置されたカメラについての、その3次元空間における、カメラの取り付け位置、カメラの取り付け角度、カメラのレンズ歪み補正値、カメラのレンズの焦点距離などといった、カメラ101の特性を表すカメラパラメータを決定して、補正することである。このキャリブレーション手段及びカメラパラメータテーブル103については、例えば特許文献1にも詳述されている。  [0028] The camera parameter table 103 stores camera parameters indicating the characteristics of the cameras 101. Camera calibration is performed by calibration means (not shown) provided in the image generation apparatus 10000. Camera calibration determines and corrects, for a camera placed in a three-dimensional space, the camera parameters that represent the characteristics of the camera 101 in that space, such as the mounting position, mounting angle, lens distortion correction value, and lens focal length. The calibration means and the camera parameter table 103 are also described in detail in, for example, Patent Document 1.
[0029] 空間再構成手段 104では、カメラパラメータに基づいて、カメラ 101からの入力画像 を 3次元空間の空間モデルにマッピングした空間データを作成する。すなわち、空間 再構成手段 104は、キャリブレーション手段 (不図示)によって計算されたカメラパラメ ータに基づいて、カメラ 101からの入力画像を構成する各々の画素を 3次元空間の 点に対応づけた空間データを作成する。すなわち、空間再構成手段 104では、カメ ラ 101で撮影された画像に含まれる各々の物体が、 3次元空間のどこに存在するか を計算し、その計算結果として得られた空間データを空間データバッファ 105に格納 する。 [0029] The spatial reconstruction means 104 creates spatial data in which an input image from the camera 101 is mapped to a three-dimensional spatial model based on the camera parameters. That is, the spatial reconstruction means 104 associates each pixel constituting the input image from the camera 101 with a point in the three-dimensional space based on the camera parameters calculated by the calibration means (not shown). Create spatial data. That is, the spatial reconstruction means 104 calculates where each object included in the image captured by the camera 101 exists in the three-dimensional space, and stores the spatial data obtained as a result of the calculation in the spatial data buffer. Store in 105.
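As an illustration of the mapping performed by the spatial reconstruction means 104, the following sketch back-projects an image pixel through a pinhole camera model and intersects the resulting ray with a road-surface plane (z = 0) used as the space model. The function name, the plane model, and all parameter values are assumptions for illustration only; the patent does not specify a particular implementation.

```python
# Hypothetical sketch of pixel-to-space mapping: each pixel is turned into
# a viewing ray via a pinhole model and intersected with the ground plane.

def pixel_to_ground(u, v, fx, fy, cx, cy, cam_pos, R):
    """Map pixel (u, v) to a point on the ground plane z = 0.

    fx, fy, cx, cy : intrinsic parameters (focal lengths, principal point)
    cam_pos        : camera position (x, y, z) in world coordinates
    R              : 3x3 camera-to-world rotation matrix, rows as tuples
    Returns the (x, y, 0) intersection, or None if the ray never hits it.
    """
    # Ray direction in camera coordinates (pinhole model).
    d_cam = ((u - cx) / fx, (v - cy) / fy, 1.0)
    # Rotate the ray into world coordinates.
    d_world = tuple(sum(R[i][j] * d_cam[j] for j in range(3)) for i in range(3))
    if d_world[2] >= 0:           # ray points upward: no ground intersection
        return None
    t = -cam_pos[2] / d_world[2]  # solve cam_pos.z + t * d.z = 0
    return (cam_pos[0] + t * d_world[0], cam_pos[1] + t * d_world[1], 0.0)

# Example: a camera 2 m above the origin looking straight down
# (camera z axis pointing along world -z).
R_down = ((1.0, 0.0, 0.0), (0.0, -1.0, 0.0), (0.0, 0.0, -1.0))
p = pixel_to_ground(320, 240, 500.0, 500.0, 320.0, 240.0, (0.0, 0.0, 2.0), R_down)
print(p)  # the principal ray lands directly below the camera: (0.0, 0.0, 0.0)
```

Repeating this for every pixel of every camera yields the spatial data that the patent stores in the spatial data buffer 105.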
[0030] なお、カメラ101により取得された入力画像を構成するそれぞれの画素のすべてを利用して空間データを構成する必要はない。たとえば入力画像に水平線より上に位置する領域が写っている場合は、その水平線より上の領域に含まれる画素を路面にマッピングする必要はない。あるいは車体を写している画素をマッピングする必要もない。また、入力画像が高解像度な場合などは、数画素毎に飛ばして空間データにマッピングすることにより処理を高速化することも考えられる。この空間再構成手段104については例えば特許文献1に詳述されている。  [0030] Note that it is not necessary to construct the spatial data from every pixel of the input image acquired by the camera 101. For example, if the input image contains a region above the horizon, the pixels in that region need not be mapped onto the road surface, nor do pixels showing the vehicle body. When the input image has a high resolution, the processing can also be sped up by skipping every few pixels when mapping to the spatial data. The spatial reconstruction means 104 is described in detail in, for example, Patent Document 1.
[0031] 空間データバッファ 105では、空間再構成手段 104にて作成された空間データを 一時的に格納する。この空間データバッファ 105についても例えば特許文献 1に詳 述されている。  [0031] The spatial data buffer 105 temporarily stores the spatial data created by the spatial reconstruction means 104. This spatial data buffer 105 is also described in detail in Patent Document 1, for example.
[0032] 視点変換手段106では、空間データを参照して、任意の視点から見た画像を作成する。すなわち、空間再構成手段104によって作成された空間データを参照して、任意の視点にカメラを設置して撮影した画像を作成する。この視点変換手段106についても、例えば特許文献1に詳述された構成をとり得る。  [0032] The viewpoint conversion means 106 creates an image viewed from an arbitrary viewpoint by referring to the spatial data. That is, referring to the spatial data created by the spatial reconstruction means 104, it creates the image that would be captured by a camera installed at an arbitrary viewpoint. The viewpoint conversion means 106 may also have the configuration described in detail in, for example, Patent Document 1.  [0033] 表示制御手段10001では、視点変換手段106にて変換された画像を表示するにあたり、その表示形態を制御する。この表示制御手段10001は、表示形態選択手段10002の選択動作をトリガーにして画像の表示に関する制御を行う。  [0033] The display control means 10001 controls the display mode when displaying the image converted by the viewpoint conversion means 106. It performs this control of image display triggered by the selection operation of the display mode selection means 10002.
[0034] 後述するように、本実施形態においては視点変換手段106にて変換された複数の画像を重畳して表示させたり、連続した1枚の画像として表示させたり、並べて表示させたりする。また、異なる視点の画像を複数同時に表示することができるので、例えば、バックミラーで見える仮想視点画像と俯瞰視点から見た仮想視点画像とを同時に表示することができる。このように内容の異なる画像を画面上で分割して表示したり、重畳して表示したり、並べて表示したり等することができる。  [0034] As described later, in the present embodiment, a plurality of images converted by the viewpoint conversion means 106 can be displayed superimposed, displayed as one continuous image, or displayed side by side. Since a plurality of images from different viewpoints can be displayed simultaneously, for example, a virtual viewpoint image corresponding to the rear-view mirror and a virtual viewpoint image seen from an overhead viewpoint can be shown at the same time. Images with different contents can thus be displayed on a split screen, superimposed, arranged side by side, and so on.
[0035] また、仮想視点画像同士を重畳して表示させるだけでなく、通常の画像、例えばカメラで撮影したままの生の映像やカーナビゲーション画像等に重畳表示させることもできる。  [0035] Further, a virtual viewpoint image can be superimposed not only on another virtual viewpoint image but also on an ordinary image, for example, raw video as captured by a camera or a car navigation image.
表示形態選択手段10002は、視点の変更、画角の変更を指示するためのものである。これらの指示は、表示制御手段10001を介して視点変換手段106に送信され(または、表示形態選択手段10002から直接視点変換手段106に送信しても良い)、その指示に基づいた仮想視点画像が作成される。  The display mode selection means 10002 is used to instruct changes of the viewpoint and the angle of view. These instructions are transmitted to the viewpoint conversion means 106 via the display control means 10001 (or may be transmitted directly from the display mode selection means 10002 to the viewpoint conversion means 106), and a virtual viewpoint image based on the instructions is created.
[0036] なお、表示形態選択手段 10002では、視点の変更、画角の変更に限らず、ズーム 、フォーカス、露出、シャッタースピード等の変更を指示するようにしてもよい。また、 表示形態選択手段 10002では、後述するように表示形態を選択することもできる。  Note that the display mode selection unit 10002 may instruct not only the change of the viewpoint and the change of the angle of view but also the change of the zoom, focus, exposure, shutter speed, and the like. Further, the display mode selection unit 10002 can also select a display mode as described later.
[0037] 表示手段 107は、例えばディスプレイ等であり、表示制御手段 10001により制御さ れた画像が表示される。  The display unit 107 is, for example, a display or the like, and displays an image controlled by the display control unit 10001.
図2は、本実施形態における仮想視点画像の表示フローを示す。本実施形態の画像生成装置10000では、以下の手順によって、複数台設置されているそれぞれのカメラ視野を統合し、一枚の画像として合成する。  FIG. 2 shows the display flow of the virtual viewpoint image in the present embodiment. In the image generation apparatus 10000 of the present embodiment, the fields of view of the plurality of installed cameras are integrated and combined into a single image by the following procedure.
[0038] まず、空間再構成手段104において、カメラ101から得られた画像を構成する各々の画素と、3次元座標系の点との対応関係を計算し、空間データを作成する。この計算は各々のカメラ101から得られた画像のすべての画素に対して実施する(S1)。この処理自体については、例えば特許文献1にも開示された公知の態様のものを適用できる。  [0038] First, the spatial reconstruction means 104 calculates the correspondence between each pixel of the image obtained from each camera 101 and a point in the three-dimensional coordinate system, creating the spatial data. This calculation is performed for all pixels of the images obtained from the cameras 101 (S1). For this processing itself, a known technique such as that disclosed in Patent Document 1 can be applied.
[0039] 次に、視点変換手段106において、所望の視点を指定する(S2)。すなわち、S1での3次元座標系の、どの位置から、どの角度で、どれだけの倍率で、画像を見たいかを指定する。  [0039] Next, a desired viewpoint is designated in the viewpoint conversion means 106 (S2). That is, the user specifies from which position in the three-dimensional coordinate system of S1, at which angle, and at what magnification the image is to be viewed.
[0040] 次に、同じく視点変換手段106において、S2で指定した視点からの画像を上記の空間データから再現し、表示制御手段10001によってその再現した画像の表示形態を制御する(S3)。そして、その画像を表示手段107へ出力し、表示手段107にて表示される(S4)。  [0040] Next, also in the viewpoint conversion means 106, the image from the viewpoint specified in S2 is reproduced from the above spatial data, and the display control means 10001 controls the display mode of the reproduced image (S3). The image is then output to the display means 107 and displayed there (S4).
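Steps S2 and S3 above can be sketched as re-projecting the spatial data into a virtual pinhole camera. The following minimal example places a downward-looking virtual camera above the scene; the function name, the downward-only orientation, and the lack of a z-buffer are simplifying assumptions, not the patent's actual rendering pipeline.

```python
# Hypothetical sketch of viewpoint conversion: colored world points from
# the spatial data are projected into an arbitrary virtual camera.

def project_to_virtual_view(points, cam_pos, fx, fy, cx, cy, width, height):
    """Project colored world points into a downward-looking virtual camera.

    points  : list of ((x, y, z), color) tuples in world coordinates
    cam_pos : virtual camera position; here it looks straight down (-z)
    Returns a dict mapping (u, v) pixel coordinates to colors.
    """
    image = {}
    for (x, y, z), color in points:
        depth = cam_pos[2] - z          # distance along the viewing axis
        if depth <= 0:
            continue                    # behind the virtual camera
        u = int(cx + fx * (x - cam_pos[0]) / depth)
        v = int(cy + fy * (y - cam_pos[1]) / depth)
        if 0 <= u < width and 0 <= v < height:
            image[(u, v)] = color       # later points overwrite earlier ones
    return image

# Example: a bird's-eye virtual viewpoint 10 m above the origin.
spatial_data = [((0.0, 0.0, 0.0), "road"), ((1.0, 0.0, 0.0), "lane-mark")]
view = project_to_virtual_view(spatial_data, (0.0, 0.0, 10.0),
                               100.0, 100.0, 160.0, 120.0, 320, 240)
print(view)  # {(160, 120): 'road', (170, 120): 'lane-mark'}
```

Changing `cam_pos` (and, in a fuller model, the camera orientation) yields the differently-positioned virtual viewpoints that the display control means then arranges on screen.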
[0041] 図3は、本実施形態における重畳表示させた仮想視点画像を示す。仮想視点画像10010と10011とが重畳しており、すなわち、「Picture In Picture」機能を利用して仮想視点画像10010の中に子画面である仮想視点画像10011を表示させることができる。なお、同図では2画面が重畳しているが、これに限らず、さらに複数重畳させてもよい。  [0041] FIG. 3 shows virtual viewpoint images displayed in a superimposed manner in the present embodiment. The virtual viewpoint images 10010 and 10011 are superimposed; that is, using a "Picture In Picture" function, the virtual viewpoint image 10011 can be displayed as a child screen within the virtual viewpoint image 10010. Although two screens are superimposed in the figure, more screens may be superimposed.
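The "Picture In Picture" superposition of Fig. 3 amounts to pasting a child image into a parent image at a chosen offset. The sketch below models images as 2-D lists of pixel values; this representation and the `overlay` helper are illustrative assumptions only.

```python
# Hypothetical sketch of the Picture-In-Picture composition of Fig. 3.

def overlay(parent, child, top, left):
    """Return a copy of `parent` with `child` pasted at row `top`, column `left`."""
    out = [row[:] for row in parent]          # do not modify the parent image
    for i, row in enumerate(child):
        for j, pixel in enumerate(row):
            if 0 <= top + i < len(out) and 0 <= left + j < len(out[0]):
                out[top + i][left + j] = pixel
    return out

main_view = [["m"] * 6 for _ in range(4)]     # e.g. virtual viewpoint image 10010
sub_view = [["s"] * 2 for _ in range(2)]      # e.g. child image 10011
composited = overlay(main_view, sub_view, 1, 3)
print(composited[1])  # ['m', 'm', 'm', 's', 's', 'm']
```

Repeating the paste with further child images gives the "more screens may be superimposed" case.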
[0042] 図4は、本実施形態における複数の仮想視点画像をパノラマ合成した例を示す。2つの仮想視点画像10020, 10021で画角が重なる部分(画像内で境界部分が重なる部分)を重ねて、パノラマ合成し、1つの連続した画像(以下、シームレス画像という)としている。なお、同図では2画面を重ねているが、これに限らずさらに複数の画角の重なる画面を重ねて、シームレス画像にしても良い。  [0042] FIG. 4 shows an example in which a plurality of virtual viewpoint images are panorama-combined in the present embodiment. The portions where the fields of view of the two virtual viewpoint images 10020 and 10021 overlap (the overlapping boundary portions within the images) are superimposed and panorama-combined into one continuous image (hereinafter referred to as a seamless image). Although two screens are combined in the figure, more screens with overlapping fields of view may likewise be combined into a seamless image.
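The seamless composition of Fig. 4 can be sketched as joining two images whose fields of view share some columns, blending the shared region. Assuming a fixed, known column overlap (a simplification; real panorama stitching aligns the images first) the join looks like this:

```python
# Hypothetical sketch of the seamless (panorama) composition of Fig. 4.

def stitch(left_img, right_img, overlap):
    """Join two same-height images, averaging `overlap` shared columns."""
    out = []
    for lrow, rrow in zip(left_img, right_img):
        blended = [(a + b) / 2 for a, b in zip(lrow[-overlap:], rrow[:overlap])]
        out.append(lrow[:-overlap] + blended + rrow[overlap:])
    return out

img_a = [[10, 10, 20, 20]]   # e.g. image 10020
img_b = [[20, 20, 30, 30]]   # e.g. image 10021, sharing its first 2 columns
print(stitch(img_a, img_b, 2))  # [[10, 10, 20.0, 20.0, 30, 30]]
```

Chaining `stitch` over further images with overlapping fields of view extends the seamless image, as the paragraph above allows.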
[0043] 図5は、本実施形態における4つの仮想視点画像10030, 10031, 10032, 10033を並べて表示した図を示す。ここで表示される仮想視点画像は、図4とは異なり、相互に関連がなくてもよい。なお、同図では4画面を並べているが、これに限らず、さらに複数の画面を並べてもよい。  [0043] FIG. 5 shows four virtual viewpoint images 10030, 10031, 10032, and 10033 displayed side by side in the present embodiment. Unlike FIG. 4, the virtual viewpoint images displayed here need not be related to each other. Although four screens are arranged in the figure, more screens may be arranged.
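The side-by-side arrangement of Fig. 5 is a simple tiling of independent images. The sketch below tiles four equal-size images into a 2x2 grid; the list-of-rows image model and the fixed grid shape are illustrative assumptions.

```python
# Hypothetical sketch of the side-by-side layout of Fig. 5 (2x2 grid).

def tile_2x2(a, b, c, d):
    """Tile four equal-size images: a | b on top, c | d below."""
    top = [ra + rb for ra, rb in zip(a, b)]
    bottom = [rc + rd for rc, rd in zip(c, d)]
    return top + bottom

# Four 2x2 dummy images standing in for 10030-10033.
imgs = [[[i] * 2 for _ in range(2)] for i in range(4)]
grid = tile_2x2(*imgs)
print(grid)  # [[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 3, 3], [2, 2, 3, 3]]
```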
[0044] 上述したように、図 3—図 5の表示形態の制御は、表示制御手段 10001により制御 され、表示形態の選択は、表示形態選択手段 10002により行うことができる。  As described above, the control of the display mode in FIGS. 3 to 5 is controlled by the display control unit 10001, and the display mode can be selected by the display mode selection unit 10002.
次に、本実施形態における画像生成装置 10000を車両に搭載した場合について 説明する。  Next, a case where the image generation device 10000 according to the present embodiment is mounted on a vehicle will be described.
[0045] 図 6は、本実施形態における運転モードの切り換えと表示形態との関係を示す。同 図において、運転モードには、例えば、右折モード、左折モード、前進モード、バック モード、高速走行モード等がある。そして、各運転モードに応じて、視点や画角、表 示形態 (重畳表示(図 3参照)、シームレス表示(図 4参照)、並べて表示(図 5参照) 等)が決められている。 FIG. 6 shows the relationship between the switching of the operation mode and the display mode in the present embodiment. In the figure, the driving modes include, for example, a right turn mode, a left turn mode, a forward mode, and a back mode. Mode, high-speed running mode, and the like. The viewpoint, angle of view, display mode (superimposed display (see Fig. 3), seamless display (see Fig. 4), side-by-side display (see Fig. 5), etc.) are determined according to each driving mode.
[0046] 例えば、右折モードならば、右向きの視点からの、広い画角の仮想視点画像を、カーナビゲーション画像に重畳させて表示させてもよい。また、高速走行モードならば、前方のシームレス画像に後方の仮想視点画像を重畳して表示させてもよい。  [0046] For example, in the right-turn mode, a wide-angle virtual viewpoint image from a rightward viewpoint may be displayed superimposed on the car navigation image. In the high-speed driving mode, a rear virtual viewpoint image may be displayed superimposed on the seamless image of the area ahead.
[0047] The driving mode is detected by the display mode selection means 10002. As a specific detection method, the current situation of the vehicle can be detected from, for example, the gear position, the speed, the turn signal, the steering angle, and the like. Patent Document 1 discloses adjusting the viewpoint arbitrarily with a joystick; in the present embodiment, by contrast, no such cumbersome operation is required while driving. Instead, by switching to a preset viewpoint, angle of view, display form, and so on in accordance with the driving operation, an appropriate virtual viewpoint image can be displayed more easily. Zoom, focus, exposure, shutter speed, and the like may also be added to the conditions of the displayed image that are switched. In the present embodiment, the virtual viewpoint image is generated based on the invention of Patent Document 1, but this is not limiting; any known technique may be used as long as a virtual viewpoint image is obtained.
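The mode detection described above can be sketched as a simple rule-based mapping from vehicle signals to a preset viewpoint, angle of view, and display form. The signal names, thresholds, and preset values below are illustrative assumptions for the sketch, not values disclosed in the patent:

```python
def detect_driving_mode(gear, speed_kmh, winker, steering_deg):
    """Infer the current driving mode from basic vehicle signals.

    The thresholds (80 km/h for high-speed travel, 30 degrees of
    steering) are illustrative assumptions only.
    """
    if gear == "reverse":
        return "back"
    if winker == "right" or steering_deg > 30:
        return "right_turn"
    if winker == "left" or steering_deg < -30:
        return "left_turn"
    if speed_kmh >= 80:
        return "high_speed"
    return "forward"

# Each mode maps to a preset viewpoint, angle of view, and display form
# (superimposed, seamless, or side-by-side), in the spirit of FIG. 6.
MODE_PRESETS = {
    "right_turn": {"viewpoint": "right", "fov_deg": 120, "layout": "superimposed"},
    "left_turn":  {"viewpoint": "left",  "fov_deg": 120, "layout": "superimposed"},
    "back":       {"viewpoint": "rear",  "fov_deg": 100, "layout": "side_by_side"},
    "high_speed": {"viewpoint": "front_far", "fov_deg": 40, "layout": "seamless"},
    "forward":    {"viewpoint": "front", "fov_deg": 60, "layout": "seamless"},
}
```

With such a table, switching the display requires no operation by the driver: the detected mode simply indexes the preset.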
[0048] As described above, a plurality of viewpoint-converted images can be displayed in a more convenient manner.
<Second Embodiment>

In the present embodiment, control of the display form of the virtual viewpoint image when the image generation device 10000 is mounted on a vehicle is described in a manner different from the first embodiment. The conceptual configuration of the image generation device 10000 itself is the same as in FIG. 1. Unlike the first embodiment, however, the display mode selection means 10002 selects the display form according to the state and operating conditions of the vehicle, as described below. This function may also be added to the display mode selection means 10002 of the first embodiment.
[0049] FIG. 7 shows the correspondence between vehicle states and display forms in the present embodiment. Sensors (for temperature, humidity, pressure, illuminance, and so on), cameras (for the vehicle interior and for photographing the vehicle body), and measuring instruments are attached to each part of the vehicle so that its state can be detected (existing instruments such as a tachometer, speedometer, coolant thermometer, oil pressure gauge, and fuel gauge may also be used).

[0050] When a vehicle state is detected as described above, the display form corresponding to the detected state, registered in advance in the display mode selection means 10002, is selected. The selection information is then transmitted to the display control means, and the subsequent processing is the same as in the first embodiment. FIG. 7 is now described in detail.
[0051] The vehicle state "steering angle" indicates that the display form changes according to the steering angle during travel. For example, when the steering wheel is turned beyond a predetermined steering angle, video in the direction of that steering angle (meaning video such as a virtual viewpoint image; the same applies hereinafter) is shown on the display.

[0052] The vehicle state "speed" indicates that the display form changes according to the speed. For example, as the speed increases, a more distant view is displayed (linked, for example, to the safe stopping distance), and at low speed or when starting off, a view of the entire surroundings is displayed.

[0053] The vehicle state "acceleration" indicates that the display form changes according to the acceleration. For example, a rear view is displayed during deceleration, and a distant forward view is displayed during acceleration.
The vehicle state "gear" indicates that the display form changes according to which gear is engaged (for example, first, second, third, ..., reverse). For example, a rear view is displayed when reversing.

[0054] The vehicle state "wiper" indicates that the display form changes according to the operating state of the wipers (for example, whether the wipers are running, their operating speed, and so on). For example, rain-mode video (a virtual viewpoint image processed to remove specular reflection components, water droplets, and the like) is displayed in conjunction with the wipers.

[0055] The vehicle state "headlights, interior lights ON/OFF, brightness, etc." indicates that the display form changes according to the brightness of the headlights, interior lights, and so on. For example, the brightness of the liquid-crystal display is adjusted.

[0056] For the vehicle state "stereo volume", the volume of voice guidance (warnings), for example, is adjusted according to the audio system volume. The display form may also be changed according to the volume.
The vehicle state "dirt" indicates that the display form changes according to how dirty the vehicle is. For example, a warning is displayed about dirt on the vehicle (for example, on the front, rear, bumper, roof, bonnet, doors, windows, wheels, tires, and so on), and in particular dirt on the camera's light-receiving portion that could interfere with the present system.

[0057] The vehicle state "temperature, humidity" indicates that the display form changes according to the temperature and humidity inside or outside the vehicle. For example, a warning is displayed for locations where the temperature or humidity inside the vehicle or on the road surface is abnormally high (or low; for example, when the outside temperature is low and the road surface is icy). This makes it possible, for example, to prevent slipping on ice.

[0058] The vehicle state "oil level" indicates that the display form changes according to the remaining amount of oil. For example, when an oil leak is possible, video of the area behind or directly beneath the vehicle is displayed so that it can be checked whether oil is leaking.

[0059] The vehicle state "number of occupants, seating positions, cargo weight" indicates that the display form changes according to the number of occupants, their seating positions, and the weight of the loaded cargo. For example, a corrected safe-stopping-distance warning is displayed (the stopping distance is concretely superimposed on the video).

[0060] The vehicle state "opening and closing of windows and doors" indicates that the display form changes according to the opening and closing of the windows and doors. For example, video of the surroundings is displayed so that it can be confirmed that opening or closing poses no danger, and video is displayed for monitoring whether a hand or head is sticking out of an open window.
[0061] The display related to the vehicle state is not limited to those described above; for example, the braking distance, calculated by taking into account the vehicle state described above together with the road surface condition, the weather, and so on, may be displayed as a bar-graph-like pattern extending in the direction of travel of the host vehicle.
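The braking-distance calculation mentioned above can be sketched with standard kinematics: the stopping distance is the reaction distance v·t plus the braking distance v²/(2µg). The reaction time, friction coefficient, and load correction below are illustrative assumptions, not values from the patent:

```python
def stopping_distance_m(speed_kmh, reaction_s=1.0, mu=0.7, load_factor=1.0):
    """Estimate the stopping distance shown in the bar-graph display.

    Reaction distance v*t plus braking distance v^2 / (2*mu*g).
    `mu` would be lowered for wet or icy road surfaces, and
    `load_factor` is a hypothetical correction for occupants and
    cargo weight that scales the braking term.
    """
    g = 9.81                     # gravitational acceleration, m/s^2
    v = speed_kmh / 3.6          # convert km/h to m/s
    reaction = v * reaction_s
    braking = load_factor * v * v / (2 * mu * g)
    return reaction + braking
```

The resulting distance could then be drawn as the length of the bar-graph pattern ahead of the host vehicle.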
[0062] As described above, there is no need to perform cumbersome operations to change the display form while driving. Since a preset display form can be selected in accordance with the vehicle state as described above, an appropriate virtual viewpoint image can be displayed more easily.
[0063] <Third Embodiment>

The present embodiment is a modification of the second embodiment. In the second embodiment, a display form was selected for each state of each part of the vehicle. In the present embodiment, by contrast, the macroscopic operating situation of the vehicle is detected, and a display form corresponding to that situation is selected. That is, the display mode selection means 10002 focuses on the operating situation of the vehicle and selects the display form corresponding to it. The image generation device of the present embodiment is the same as in the first or second embodiment.

[0064] FIG. 8 shows an example of the operating-situation modes of the vehicle in the present embodiment. The modes of the present embodiment include a "right-turn mode", a "left-turn mode", a "surroundings monitoring mode at start", an "in-vehicle monitoring mode", a "high-speed driving mode", a "rear monitoring mode", a "rainy driving mode", a "parallel parking mode", and a "garage parking mode". Each mode is described below.
[0065] In the "right-turn mode", video of the area ahead and of the turning direction is displayed; specifically, when the vehicle turns right, forward video and video in the right-turn direction are displayed. In the "left-turn mode", video of the area ahead and of the turning direction is displayed; specifically, when the vehicle turns left, forward video and video in the left-turn direction are displayed.

[0066] In the "surroundings monitoring mode at start", monitoring video of the vehicle's surroundings is displayed when the vehicle starts off. In the "in-vehicle monitoring mode", monitoring video of the vehicle interior is displayed. In the "high-speed driving mode", video of the distant area ahead is displayed during high-speed travel. In the "rear monitoring mode", rear video is displayed for checking whether sudden braking is possible, that is, whether there is enough distance from the following vehicle to stop under sudden braking.

[0067] In the "rainy driving mode", since visibility deteriorates in rain and directions that are easily overlooked may arise, video in such easily overlooked directions and/or video from which water droplets have been removed by image processing is displayed. The easily overlooked directions may be determined statistically or empirically, or may be set arbitrarily by the user.

[0068] In the "parallel parking mode", video of the areas ahead and behind on the side being approached is displayed so that the vehicle does not contact the vehicles in front and behind during parallel parking. In the "garage parking mode", video in the directions where contact with the garage walls is likely is displayed when putting the vehicle into a garage.

[0069] Such mode changes may be recognized automatically by the image generation device 10000 according to the operation of the vehicle, or may be set by the user. These modes can also be freely combined.
[0070] FIGS. 9A and 9B show an example of the setting format of the present embodiment in which the user can set the above modes. In the present embodiment, buttons such as a "right-turn/left-turn mode" button, a "surroundings monitoring at start / in-vehicle monitoring mode" button, a "high-speed distant/rear mode" button, and a "rain mode" button are shown on the display, and pressing a selection button switches to the corresponding mode.

[0071] In the present embodiment, the display form (arrangement) of these buttons can be changed. As described with reference to FIGS. 9A and 9B, the variations of the display form include "selection by selection buttons", "the arrangement of the modes changes according to the situation", and "frequently used functions (modes) are displayed toward the front (top) by a learning function".

[0072] "Selection by selection buttons" is the normal arrangement of the selection buttons, that is, the arrangement set by default. This arrangement can be changed arbitrarily by the user, so that the arrangement of the selection buttons can be adapted to the user's own preferences.

[0073] "The arrangement of the modes changes according to the situation" means that the arrangement of the selection buttons changes according to the operating situation of the vehicle, the surrounding environment, the weather, and so on. That is, by detecting the operation of the vehicle, the surrounding environment, the weather, and the like, the arrangement of the selection buttons is changed, and the selection buttons better suited to the situation are displayed, for example, toward the top. The arrangement of the selection buttons may also be changed according to the state of each part as in the second embodiment.

[0074] "Frequently used functions (modes) are displayed toward the front (top) by a learning function" means that the history of the user's button selections is recorded sequentially, which buttons are used more often is computed statistically from this history, and the selection buttons are arranged in order of selection frequency. More specifically, as shown in FIGS. 9A and 9B, the icons of the "reverse" button 10037, the "left-turn" button 10038, and the "right-turn" button 10036 on the touch panel outside (in the present embodiment, below) the information display frame 10035 of the car navigation system, on which video of the surroundings and course of the host vehicle can be superimposed, are adaptively arranged in a sequence following the selection frequency (in the present embodiment, from left to right).
[0075] FIG. 9A shows the arrangement of the mode buttons before rearrangement. As shown in the figure, when, for example, the "reverse" button 10037 comes to be pressed frequently, the frequently selected "reverse" button 10037 is placed at the left end, as shown in FIG. 9B.
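The learning function described above amounts to a frequency-sorted layout: selections are counted, and buttons are laid out left to right in descending frequency, with ties keeping the default order. A minimal sketch, with hypothetical button names:

```python
from collections import Counter

class ModeButtonBar:
    """Order mode buttons by how often each has been pressed."""

    def __init__(self, default_order):
        self.default_order = list(default_order)
        self.history = Counter()   # selection history, per button

    def press(self, button):
        self.history[button] += 1

    def layout(self):
        # Sort by frequency, descending; Python's stable sort keeps
        # the default order among equally frequent buttons.
        return sorted(self.default_order,
                      key=lambda b: -self.history[b])

bar = ModeButtonBar(["right_turn", "back", "left_turn"])
for _ in range(3):
    bar.press("back")
bar.press("left_turn")
```

After the presses above, `bar.layout()` places "back" first, matching the rearrangement from FIG. 9A to FIG. 9B.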
[0076] As described above, the display form can be changed according to the operating situation of the vehicle and the user's preferences, so that a virtual viewpoint image optimal for the user can be displayed.

<Fourth Embodiment>

The disclosure of Patent Document 1 makes no particular mention of how to present an image that has undergone image processing such as viewpoint conversion in a clear, easily viewable display. In the present embodiment, therefore, a clearer display is realized by simple color-coded display.

[0077] FIG. 10 shows the image generation device 10000 of the present embodiment. The image generation device 10000 in the figure is the image generation device 10000 of the first embodiment with a color-coding/simplification means 10040 and an object recognition means 10041 added. Although the display mode selection means 10002 is removed in the present embodiment, it may be included depending on the application.
[0078] The object recognition means 10041 is a means for recognizing each object captured in a photographed image. Various methods of recognizing such objects are conceivable: a method using only the image obtained from a monocular camera, a method using the distance obtained from a laser range finder together with the image obtained from a camera, and a method using a distance image obtained by the stereo method together with the image. The stereo method, for example, captures the same object with a plurality of cameras, extracts corresponding points in the captured images, and calculates a distance image by triangulation. In the following, a device that obtains a distance image using these and similar methods is referred to as a stereo sensor, and an image containing the distance information acquired by such a stereo sensor together with luminance or color information is referred to as a stereo image, or an image by stereo vision.
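The triangulation step of the stereo method can be sketched for a rectified camera pair, where the depth of a pair of corresponding points follows Z = f·B/d from the focal length f (in pixels), the baseline B between the cameras, and the disparity d. The numeric values used below are illustrative assumptions:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulate depth from one corresponding-point pair of a
    rectified stereo pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("corresponding point must have positive disparity")
    return focal_px * baseline_m / disparity_px
```

Repeating this for every matched point yields the distance image; nearer objects produce larger disparities and hence smaller depths.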
[0079] For example, Patent Document 2 discloses a vehicle exterior monitoring device that obtains a distance distribution over the entire captured image and, from the information of this distance distribution, reliably detects three-dimensional objects and the road shape together with their accurate positions and sizes. Patent Document 2 also describes identifying the type of a detected object from the shape, dimensions, and position of its contour image.

[0080] Also, for example, Patent Document 3 discloses a vehicle exterior monitoring device that reliably detects side walls, that is, continuous three-dimensional objects forming road boundaries such as guardrails, plantings, and rows of pylons, and detects the presence, position, and direction of the side walls in a data form that is easy to process. [0081] In Patent Documents 2 and 3, a stereo optical system images objects within an installation range outside the vehicle, the images captured by the stereo optical system are processed to calculate a distance distribution over the entire image, the three-dimensional position of each part of the subject corresponding to the distance-distribution information is calculated, and the road shape and a plurality of three-dimensional objects are detected using this three-dimensional position information.

[0082] The object recognition means 10041 of the present embodiment can likewise recognize each object captured in an image photographed by the stereo image method described above. As noted, conventional techniques can be applied to this processing itself, and a detailed description is therefore omitted. To obtain images for the stereo image method, a plurality of cameras separate from the camera 101 may be used, or the camera 101 may double as one of them. In the latter case, since the position is changed and images are captured multiple times, simultaneity between the images is lost.

[0083] As described later, the object recognition means 10041 colors the object in the virtual viewpoint image that is identical to the object recognized by this means, so a table (association table 10042) is provided for associating objects in the stereo image with objects in the virtual viewpoint image.

[0084] The color-coding/simplification means 10040 performs processing that color-codes each type of object in the virtual viewpoint image, such as the road surface, obstacles, and vehicles.
FIG. 11 shows the processing flow of the present embodiment. First, the spatial data creation processing (S11) and the viewpoint designation processing (S12) are performed. S11 and S12 are the same as the processing of S1 and S2 in FIG. 2, respectively.

[0085] Next, objects are recognized (S13). As described above, known methods such as those exemplified in Patent Documents 2 and 3 can be used for this processing, and the recognized objects are classified by type.

[0086] Next, the virtual viewpoint image acquired in S12 is associated with the objects recognized in S13 (S14). This association is performed using the association table. In the known example of Patent Document 1, the position of each pixel constituting an image captured by the camera 101 is generally expressed as coordinates on a U-V plane containing the CCD image plane.

[0087] Therefore, to associate each pixel of the input image with a point in the world coordinate system, the spatial reconstruction means 104 performs processing that maps the point on the U-V plane at which a pixel captured by the camera 101 exists to a point in the world coordinate system.
[0088] Accordingly, if the U-V coordinate system and the coordinates of the stereo image are associated in advance, the position of an object recognized from the stereo image can be identified in the world coordinate system via the U-V coordinate system. This makes it possible to associate an object in the stereo image with the identical object in the virtual viewpoint image.
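The association of S14 can be sketched as two chained lookup tables: one (a stand-in for association table 10042) from stereo-image coordinates to U-V coordinates, and one from U-V coordinates to world coordinates, as built by the spatial reconstruction means. All coordinate values below are illustrative assumptions:

```python
# Hypothetical association table 10042: stereo-image pixel -> U-V pixel.
# In practice this would be derived from the calibration of both cameras.
stereo_to_uv = {(10, 20): (100, 200), (30, 40): (110, 210)}

# Mapping built by the spatial reconstruction means: U-V pixel ->
# world coordinate (X, Y, Z) on the space model.
uv_to_world = {(100, 200): (1.0, 0.0, 5.0), (110, 210): (1.5, 0.0, 6.0)}

def locate_object_in_world(stereo_pixel):
    """Identify the world position of an object recognized in the
    stereo image, going through the U-V coordinate system (S14)."""
    uv = stereo_to_uv[stereo_pixel]
    return uv_to_world[uv]
```

Once an object's world position is known, the same object can be found in the virtual viewpoint image rendered from that world model.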
[0089] Next, coloring is performed for each type of object in the virtual viewpoint image (S15). This processing is performed by the color-coding/simplification means 10040, which color-codes each type of object, such as the road surface, obstacles, and vehicles, in the virtual viewpoint image associated in S14. Each object recognized by the object recognition means 10041 is colored differently according to the type of object, such as road or vehicle, into which it was classified in S13.
[0090] For example, the road surface in the virtual viewpoint image is colored gray, vehicles red, other obstacles blue, and so on. As for the coloring, for each object in the virtual viewpoint image, the texture tone may be made monotone per object for a single-color display, or a semi-transparent color may be superimposed.
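The per-type coloring of S15 can be sketched as an alpha blend of a class color over each pixel belonging to a recognized object. The palette follows the example in the text (road gray, vehicles red, other obstacles blue); the blending factor is an illustrative assumption:

```python
# Illustrative palette: object type -> RGB overlay color.
CLASS_COLORS = {"road": (128, 128, 128),   # gray
                "vehicle": (255, 0, 0),    # red
                "obstacle": (0, 0, 255)}   # blue

def overlay_color(pixel_rgb, object_class, alpha=0.5):
    """Blend a semi-transparent class color over one image pixel.

    alpha=1.0 gives the monotone single-color display; alpha=0.5
    gives the semi-transparent superimposition described in the text.
    """
    if object_class not in CLASS_COLORS:
        return pixel_rgb   # unclassified pixels are left untouched
    over = CLASS_COLORS[object_class]
    return tuple(round(alpha * o + (1 - alpha) * p)
                 for o, p in zip(over, pixel_rgb))
```

Applying this to every pixel of each object region produces the color-coded virtual viewpoint image.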
[0091] Next, the display form of the image after the processing of S15 is controlled (S16). Here, the display form is controlled for output to the display means 107; for example, the display form may be selected as in the first embodiment. The image is then output to the display means 107 and displayed there (S17).

[0092] In the present embodiment, the color-coding of each type of object in the virtual viewpoint image is performed according to the processing flow of FIG. 11, but this is not limiting; any technique that achieves the same effect may be used. For example, a stereo sensor may be used in S11 and S12 to acquire three-dimensional information at the same time.

[0093] The space model generated in S11 may be, as described in Patent Document 1, a space model consisting of five planes, a bowl-shaped space model, a space model combining planes and curved surfaces, a space model introducing a partition surface, or a combination of these. The space model is not limited to these; any space model is acceptable as long as it is a combination of planes, a curved surface, or a combination of planes and curved surfaces.

[0094] At this time, the image projected onto each space model may likewise be color-coded by object type. That is, to make the display of the space model clearer, color-coding and the like are applied according to each object model of the space model so as to produce a more distinct display. In addition to color-coding, the objects in the projected image may be simplified, that is, displayed as figures abstracted into circles, squares, triangles, rectangles, trapezoids, and the like. The figures are not limited to these; any simplified diagram, mark, or symbol may be used.

[0095] As described above, since the objects in the virtual viewpoint image are color-coded and simplified, obstacles and the like can be easily recognized even while driving.
<Fifth Embodiment>

When an obstacle comes close, it occupies a larger proportion of the display screen, making it difficult to grasp the surrounding situation through the screen. In the present embodiment, therefore, the display control means 10001 displays a wide-angle virtual viewpoint image together with a virtual viewpoint image in which the nearby obstacle is enlarged, thereby presenting the information more clearly.

[0096] To realize this, the wide-angle virtual viewpoint image and the enlarged virtual viewpoint image may be displayed in the same manner as in the first embodiment.

FIG. 12 shows an example in which a wide-angle virtual viewpoint image 10050 and a virtual viewpoint image 10051 in which a nearby obstacle is enlarged are displayed together in the present embodiment.

[0097] By displaying the wide-angle virtual viewpoint image together with the enlarged virtual viewpoint image in this way, the surrounding situation can be easily grasped.
<第 6の実施形態 >  <Sixth embodiment>
特許文献 1では、運転者が変更になったり、着座位置の調整が異なったりしたときの視点の調整については、任意の点が表示できるという開示に留まっている。そこで、本実施形態では、運転者ごとにプリセットした視点を用いることで、それぞれに適した画像をより簡便に表示できるようにする。  Patent Literature 1 merely discloses that an arbitrary point can be displayed when it comes to adjusting the viewpoint for a different driver or a different seating position. Therefore, in this embodiment, a viewpoint preset for each driver is used so that an image suited to each driver can be displayed more easily.
[0098] 図 13は、本実施形態における画像生成装置 10000を示す。同図の画像生成装置 10000は、第 1の実施形態の画像生成装置 10000にユーザ情報取得手段 10060を追加したものである。なお、本実施形態では、表示形態選択手段 10002を取り除いているが、用途に応じて取り付けてもよい。  [0098] FIG. 13 shows the image generating apparatus 10000 according to this embodiment. It is the image generating apparatus 10000 of the first embodiment with user information acquisition means 10060 added. Although the display form selection means 10002 is omitted in this embodiment, it may be included depending on the application.
[0099] ユーザ情報取得手段 10060の一例として、例えば、車内を監視するカメラにより運転者の顔画像を取得し、一般的な画像処理技術によりその顔画像から眼球位置を抽出することで視点を計測し、その結果を基に仮想視点画像を表示するための仮想視点の位置をカスタマイズできる。  [0099] As an example of the user information acquisition means 10060, a camera monitoring the vehicle interior acquires the driver's face image, the eyeball position is extracted from the face image by common image processing techniques to measure the viewpoint, and based on the result the position of the virtual viewpoint used for displaying the virtual viewpoint image can be customized.
[0100] また、その顔画像から運転者の視野を計測し、その視野に対応した画角の仮想視点画像を表示するようにしてもよい。視野の計測は、例えば、顔画像での目の開き具合で推測するようにしてもよい。  [0100] Alternatively, the driver's field of view may be measured from the face image, and a virtual viewpoint image with an angle of view corresponding to that field of view may be displayed. The field of view may be estimated, for example, from how wide the eyes are open in the face image.
[0101] また、ユーザ情報取得手段 10060の別の例として、例えば、運転者の姿勢等を推測して、そこからユーザの視点を算出するようにしてもよい。例えば、予め登録してある運転者の身長(又は座高)と、現在のシートの傾斜角度から運転者の頭の位置が計測できるので、そこからおおよその運転者の視点の位置がわかる。  [0101] As another example of the user information acquisition means 10060, the driver's posture or the like may be estimated, and the user's viewpoint may be calculated from it. For example, since the position of the driver's head can be measured from the driver's height (or sitting height) registered in advance and the current recline angle of the seat, the approximate position of the driver's viewpoint can be determined.
[0102] よって、このようにして視点位置を算出するようにしてもよい。さらに、シートの位置(前後のスライド位置や高さ等)、または運転者が眼鏡をかけているか等も考慮して視点位置または画角を算出してもよい。  [0102] The viewpoint position may thus be calculated in this manner. Furthermore, the viewpoint position or the angle of view may be calculated taking into account the seat position (how far it is slid forward or backward, its height, etc.) or whether the driver wears glasses.
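The head-position estimate of [0101]–[0102] can be sketched with simple trigonometry. The geometry below is an assumption for illustration (torso pivoting at the seat base, eyes about 10 cm below the top of the head), not the patent's formula.

```python
import math

def eye_position(seat_x, seat_z, sitting_height_m, recline_deg):
    """Approximate the driver's eye position (x: fore-aft, z: height).

    A torso of roughly sitting_height leans back by recline_deg from
    vertical at the seat base; the eyes sit ~0.10 m below the head top.
    """
    torso = sitting_height_m - 0.10
    x = seat_x - torso * math.sin(math.radians(recline_deg))
    z = seat_z + torso * math.cos(math.radians(recline_deg))
    return x, z

x, z = eye_position(seat_x=1.2, seat_z=0.3, sitting_height_m=0.9, recline_deg=20.0)
print(round(x, 3), round(z, 3))  # eyes move back and down as the seat reclines
```

The resulting point would then be passed to the viewpoint conversion means as the preset virtual viewpoint for that driver.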
[0103] このようにしてユーザ情報取得手段 10060で取得されたユーザ情報は、表示制御手段 10001へ送信される。表示制御手段 10001では、その情報に基づいて、当該ユーザに最適な仮想視点画像を表示する。  [0103] The user information acquired by the user information acquisition means 10060 in this way is transmitted to the display control means 10001, which then displays, based on that information, the virtual viewpoint image best suited to the user.
[0104] 以上より、各運転者の視点または画角に合った仮想視点画像を当該運転者に提供することができるので、各運転者は違和感のない仮想視点画像を観賞することができる。  [0104] As described above, a virtual viewpoint image matching each driver's viewpoint or angle of view can be provided to that driver, so each driver can view the virtual viewpoint image without a sense of incongruity.
<第 7の実施形態 >  <Seventh embodiment>
本実施形態では、カーナビゲーションの画像の一部に重畳して仮想視点からの仮想視点画像を表示する。このとき、地図情報に応じて適切な位置に仮想視点画像を重畳表示する。本実施形態における画像生成装置の本体部は、第 1の実施形態と同様である。なお、カーナビゲーションの画像は仮想視点画像の一種であり、これに更に別の仮想視点画像を重畳している点は、他の実施形態と同様である。  In this embodiment, a virtual viewpoint image seen from a virtual viewpoint is displayed superimposed on part of the car navigation image. At this time, the virtual viewpoint image is superimposed at an appropriate position according to the map information. The main body of the image generating apparatus in this embodiment is the same as in the first embodiment. Note that the car navigation image is itself a kind of virtual viewpoint image, and superimposing a further virtual viewpoint image on it is the same as in the other embodiments.
[0105] 図 14は、本実施形態におけるカーナビゲーションの画像 10070の一部に重畳させた仮想視点画像 10071, 10072, 10073を示す。このような制御は、表示制御手段 10001により行われる。また、このとき、例えば、仮想視点画像 10071, 10072, 10073は、半透明表示として、カーナビゲーションの画像 10070に重畳して表示させることもできる。  [0105] FIG. 14 shows virtual viewpoint images 10071, 10072, and 10073 superimposed on part of a car navigation image 10070 in this embodiment. Such control is performed by the display control means 10001. At this time, for example, the virtual viewpoint images 10071, 10072, and 10073 can also be displayed semi-transparently, superimposed on the car navigation image 10070.
[0106] 以上より、表示手段としてカーナビゲーションのディスプレイを流用することができるので、カーナビゲーションシステムによるナビゲートを受けながら、仮想視点画像を見ることができる。また、仮想視点画像はナビゲートの邪魔にならない位置に表示されるように制御されているので、ナビゲートに対して不自由を感じることもない。  [0106] As described above, the display of the car navigation system can be reused as the display means, so the virtual viewpoint image can be viewed while receiving guidance from the car navigation system. Moreover, since the virtual viewpoint images are controlled so as to be displayed at positions that do not obstruct the navigation, the user is not inconvenienced in navigating.
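The semi-transparent superimposition of [0105] is, at the pixel level, ordinary alpha blending. The sketch below shows the blend for one RGB pixel; the 50% opacity is an assumed default, not a value from the patent.

```python
def blend(nav_pixel, vvp_pixel, alpha=0.5):
    """Blend one RGB pixel of the virtual viewpoint image over the map.

    alpha is the opacity of the virtual viewpoint image: 0.0 shows only
    the navigation image, 1.0 shows only the virtual viewpoint image.
    """
    return tuple(round((1 - alpha) * n + alpha * v)
                 for n, v in zip(nav_pixel, vvp_pixel))

# A gray map pixel with a blue virtual-viewpoint pixel laid over it at 50%.
print(blend((200, 200, 200), (0, 0, 255)))
```

Applying this per pixel over the regions occupied by images 10071–10073 leaves the underlying map legible while the camera-derived views remain visible.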
[0107] <第 8の実施形態 >  <Eighth Embodiment>
特許文献 1では、カメラのフォーカス調整などには言及されず、特に近距離になったときの障害物の画像を適切に表示することについては別段の配慮がない。  Patent Literature 1 makes no mention of camera focus adjustment, and gives no particular consideration to appropriately displaying the image of an obstacle when it comes close.
[0108] そこで、本実施形態では、AF (オートフォーカス)機能をカメラに持たせ、ステレオ視により被写体が近接していると判断される場合は、近距離側にレンズのフォーカスを調整するようにする。つまり、一般にマクロモードと呼ばれる、撮影対象に近接した位置で大きく撮影する場合のモードに調整する。このようにすることで、3次元再構成用にフォーカスのあった画像が近距離において得られる。  [0108] Therefore, in this embodiment, the camera is given an AF (autofocus) function, and when stereo vision indicates that the subject is close, the lens focus is adjusted to the near side. That is, the camera is set to what is generally called macro mode, the mode for shooting a subject large at close range. In this way, an image in focus for three-dimensional reconstruction is obtained at short range.
[0109] また、遠距離にある被写体を撮影する場合には、AF機能により当該被写体にフォーカスを合わせることで、遠距離画像についても同様に高精度の画像が得られ、遠距離の精度向上につながる。  [0109] When photographing a subject at a long distance, focusing on that subject with the AF function likewise yields a high-precision image for the long-distance case, leading to improved long-range accuracy.
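The focus policy of [0108]–[0109] reduces to a choice driven by the stereo range measurement. The sketch below illustrates it; the 1 m macro threshold and the mode names are assumptions, not values from the patent.

```python
MACRO_THRESHOLD_M = 1.0  # assumed distance below which macro mode is used

def choose_focus(distance_m):
    """Return (mode, focus distance in meters) for a stereo-measured range.

    Near subjects switch the lens to macro (near-side) focus; otherwise
    the AF focuses at the measured subject distance.
    """
    if distance_m < MACRO_THRESHOLD_M:
        return ("macro", distance_m)
    return ("normal", distance_m)

print(choose_focus(0.4))   # nearby obstacle -> macro focus
print(choose_focus(12.0))  # distant subject -> normal AF
```

In either branch the lens is focused at the measured range, which is what keeps the images sharp for the three-dimensional reconstruction.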
[0110] なお、本実施形態は第1～第7の実施形態に組み合わせることができる。また、これに限らず、第1～第8の実施形態の各実施形態相互間で用途に応じて、それぞれの実施形態を組み合わせることができる。例えば、空間モデルや図3～図5、図9A、図9B、図12、図14で示した画面表示は、各実施形態相互間で適用することができる。また、このような空間モデル及び画面表示以外のものについても、可能であれば各実施形態相互間で適用できる。  [0110] This embodiment can be combined with the first through seventh embodiments. The invention is not limited to this; the first through eighth embodiments can be combined with one another depending on the application. For example, the space models and the screen displays shown in FIGS. 3-5, 9A, 9B, 12 and 14 can be applied across the embodiments. Matters other than such space models and screen displays can also be applied across the embodiments where possible.
[0111] 次に、第1～第8の実施形態における画像生成装置 10000のハードウェア構成を示す。  Next, the hardware configuration of the image generating apparatus 10000 in the first through eighth embodiments will be described.
図 15は、第1～第8の実施形態における画像生成装置 10000のハードウェア環境の構成ブロック図である。同図において画像生成装置 10000は、少なくとも、例えば、中央処理装置(CPU)等の制御装置 10080と、リードオンリメモリ(ROM)、ランダムアクセスメモリ(RAM)、または大容量記憶装置等の記憶装置 10081と、出力 I/F(インターフェース、以下同じ) 10082、入力 I/F 10083、通信 I/F 10084及びこれらを接続するバス 10085から構成され、それに、ディスプレイやスピーカ等の出力手段 107や、入力 I/Fまたは通信 I/Fに接続される各種の装置がある。  FIG. 15 is a block diagram of the hardware environment of the image generating apparatus 10000 in the first through eighth embodiments. In the figure, the image generating apparatus 10000 comprises at least a control device 10080 such as a central processing unit (CPU), a storage device 10081 such as a read-only memory (ROM), a random access memory (RAM), or a mass storage device, an output I/F (interface; the same applies hereinafter) 10082, an input I/F 10083, a communication I/F 10084, and a bus 10085 connecting them, together with output means 107 such as a display and speakers and various devices connected to the input I/F or the communication I/F.
[0112] 入力 I/Fに接続される装置としては、例えば、カメラ 101、車内用カメラ、ステレオセンサー及び各種センサ(第2～第4の実施形態を参照)、キーボードやマウス等の入力装置、CD-ROMやDVD等の可搬型記憶媒体の読み取り装置、その他周辺機器等が挙げられる。  Devices connected to the input I/F include, for example, the camera 101, an in-vehicle camera, a stereo sensor and various other sensors (see the second through fourth embodiments), input devices such as a keyboard and mouse, readers for portable storage media such as CD-ROMs and DVDs, and other peripheral devices.
[0113] 通信 I/F 10084に接続される装置は、例えば、カーナビゲーションシステムや、インターネット又は GPSと接続する通信機器である。なお、通信メディアとしては、インターネット、LAN、WAN、専用線、有線、無線等の通信網であってよい。  [0113] Devices connected to the communication I/F 10084 are, for example, a car navigation system or communication equipment connecting to the Internet or GPS. The communication medium may be a network such as the Internet, a LAN, a WAN, a dedicated line, a wired line, or a wireless line.
[0114] 記憶装置 10081の一例としてはハードディスク、磁気ディスクなど様々な形式の記憶装置を使用することができ、上記の実施形態で述べたフローのプログラム、テーブル(例えば、車両状況等と表示形態との関連テーブル)、各種設定値等が格納されている。このプログラムは、制御装置 10080によって読み込まれ、フローの各処理が実行される。  [0114] As the storage device 10081, various types of storage devices such as hard disks and magnetic disks can be used; the programs of the flows described in the above embodiments, tables (for example, a table associating vehicle conditions with display forms), various set values, and the like are stored there. These programs are read by the control device 10080, and each process of the flows is executed.
[0115] このプログラムは、プログラム提供者側から通信 I/F 10084を介して、インターネット経由で提供され、記憶装置 10081に格納されてもよい。また、このプログラムは、市販されて流通している可搬型記憶媒体に格納されていてもよい。そして、このプログラムが格納された可搬型記憶媒体は、読み取り装置にセットされて、制御装置によって実行されることも可能である。可搬型記憶媒体としては CD-ROM、DVD、フレキシブルディスク、光ディスク、光磁気ディスク、ICカードなど様々な形式の記憶媒体を使用することができ、このような記憶媒体に格納されたプログラムが読み取り装置によって読み取られる。  [0115] The programs may be provided by a program provider via the Internet through the communication I/F 10084 and stored in the storage device 10081. The programs may also be stored on commercially sold and distributed portable storage media; a portable storage medium storing the programs can be set in a reader and executed by the control device. As portable storage media, various types such as CD-ROMs, DVDs, flexible disks, optical disks, magneto-optical disks, and IC cards can be used, and a program stored on such a medium is read by the reader.
[0116] また、入力装置には、キーボード、マウス、または電子カメラ、マイク、スキャナ、センサ、タブレットなどを用いることが可能である。さらに、その他の周辺機器も接続することができる。  [0116] As the input device, a keyboard, a mouse, an electronic camera, a microphone, a scanner, a sensor, a tablet, or the like can be used. Other peripheral devices can also be connected.
[0117] 以上より、本発明によれば、仮想視点画像の表示に係るユーザインターフェースに ついて、さらに利便性を高めるための技術を具現させることができる。  [0117] As described above, according to the present invention, it is possible to embody a technology for further improving the convenience of the user interface related to the display of the virtual viewpoint image.
なお、特許文献 1は、この明細書で参照されることにより、この明細書に組み込まれ ている。  It should be noted that Patent Document 1 is incorporated in this specification by reference to this specification.

Claims

請求の範囲 The scope of the claims
[1] 1または複数のカメラからの入力画像を 3次元空間の予め決められた空間モデルに マッピングする空間再構成手段と、  [1] spatial reconstruction means for mapping an input image from one or more cameras to a predetermined spatial model in a three-dimensional space,
前記空間再構成手段によってマッピングされた空間データを参照して、前記 3次元空間における任意の仮想視点から見た画像である仮想視点画像を生成する視点変換手段と、  viewpoint conversion means for generating a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the spatial data mapped by the space reconstruction means;
それぞれ相互に異なった視点から見た複数の前記仮想視点画像を表示することを制御する表示制御手段と、  display control means for controlling display of a plurality of the virtual viewpoint images viewed from mutually different viewpoints;
を備えることを特徴とする画像生成装置。  An image generating apparatus comprising:
[2] 前記表示制御手段は、前記複数の仮想視点画像を重畳して表示させる [2] The display control means superimposes and displays the plurality of virtual viewpoint images.
ことを特徴とする請求項 1に記載の画像生成装置。  2. The image generating device according to claim 1, wherein:
[3] 前記表示制御手段は、前記仮想視点画像相互間で画角が重なる部分を重畳させ て連続した前記仮想視点画像として表示させる [3] The display control means superimposes a portion having an angle of view between the virtual viewpoint images and displays the overlapped portion as the continuous virtual viewpoint image.
ことを特徴とする請求項 1に記載の画像生成装置。  2. The image generating device according to claim 1, wherein:
[4] 前記表示制御手段は、前記複数の仮想視点画像を並列して表示させる [4] The display control means displays the plurality of virtual viewpoint images in parallel.
ことを特徴とする請求項 1に記載の画像生成装置。  2. The image generating device according to claim 1, wherein:
[5] 前記画像生成装置は、車両に搭載されており、 [5] The image generation device is mounted on a vehicle,
前記車両の状態または走行状況に基づ 、て、前記表示制御手段により前記複数 の仮想視点画像を重畳または並列して表示される表示形態を選択する表示形態選 択手段  Display mode selection means for selecting a display mode in which the plurality of virtual viewpoint images are superimposed or displayed in parallel by the display control means based on the state or running state of the vehicle
を、さらに備えることを特徴とする請求項 1に記載の画像生成装置。  The image generation device according to claim 1, further comprising:
[6] 前記車両の状態または走行状況に基づ 、て、画角、視点、ズーム、フォーカス、露 出、もしくはシャッタースピード、またはこれらの組み合わせを撮影条件として撮影を 行うことにより取得した前記仮想視点画像を表示する [6] The virtual viewpoint acquired by performing photographing based on the state of the vehicle or the driving condition based on the angle of view, viewpoint, zoom, focus, exposure, or shutter speed, or a combination thereof, as a photographing condition. Display image
ことを特徴とする請求項 5に記載の画像生成装置。  6. The image generation device according to claim 5, wherein:
[7] 車両に搭載された 1または複数のカメラからの入力画像を 3次元空間の予め決められた空間モデルにマッピングする空間再構成手段と、  [7] spatial reconstruction means for mapping input images from one or more cameras mounted on a vehicle to a predetermined spatial model in a three-dimensional space;
前記空間再構成手段によってマッピングされた空間データを参照して前記 3次元空間における任意の仮想視点から見た画像である仮想視点画像を生成する視点変換手段と、  viewpoint conversion means for generating a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the spatial data mapped by the space reconstruction means;
それぞれ互いに異なった視点から見た複数の前記仮想視点画像を表示することを制御する表示制御手段と、  display control means for controlling display of a plurality of the virtual viewpoint images viewed from mutually different viewpoints;
前記車両の状態に応じて、前記表示制御手段による表示形態を選択する表示形 態選択手段と、  Display mode selecting means for selecting a display mode by the display control means according to a state of the vehicle;
を備えることを特徴とする画像生成装置。  An image generating apparatus comprising:
[8] 前記表示形態選択手段は、前記車両の状態として、当該車両の走行時の舵角、速度、加速度、変速機における変速比、ワイパーの挙動、照明装置の動作状況、音響装置の動作状況、ウィンドウその他の汚れの状況、当該車両が検出している部位に係る温度および/または湿度、潤滑油の状態、乗車人数、乗車者の着座位置、貨物の積載状況、窓乃至ドアの開閉状態の各状態のうちの一以上の状態を認識可能になされ、当該認識に基づいて前記表示制御手段による表示形態を選択するように構成されたものであることを特徴とする請求項 7に記載の画像生成装置。  [8] The image generating apparatus according to claim 7, wherein the display form selection means is configured to be capable of recognizing, as the state of the vehicle, one or more of: the steering angle, speed, and acceleration of the vehicle while traveling, the gear ratio of the transmission, the behavior of the wipers, the operating status of the lighting devices, the operating status of the audio devices, the soiling of the windows and other parts, the temperature and/or humidity at parts sensed by the vehicle, the condition of the lubricating oil, the number of occupants, the seating positions of the occupants, the cargo loading status, and the open/closed state of the windows and doors, and to select the display form by the display control means based on that recognition.
[9] 車両に搭載された 1または複数のカメラからの入力画像を 3次元空間の予め決められた空間モデルにマッピングする空間再構成手段と、  [9] spatial reconstruction means for mapping input images from one or more cameras mounted on a vehicle to a predetermined spatial model in a three-dimensional space;
前記空間再構成手段によってマッピングされた空間データを参照して前記 3次元空間における任意の仮想視点から見た画像である仮想視点画像を生成する視点変換手段と、  viewpoint conversion means for generating a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the spatial data mapped by the space reconstruction means;
それぞれ互いに異なった視点から見た複数の前記仮想視点画像を表示することを制御する表示制御手段と、  display control means for controlling display of a plurality of the virtual viewpoint images viewed from mutually different viewpoints;
前記車両に係る動作状況に応じて、前記表示制御手段による表示形態を選択する 表示形態選択手段と、  A display mode selection unit that selects a display mode by the display control unit in accordance with an operation state of the vehicle;
を備えることを特徴とする画像生成装置。  An image generating apparatus comprising:
[10] 前記表示形態選択手段は、前記車両に係る動作状況として、当該車両の右折モード、左折モード、発進時周囲監視モード、車内監視モード、後方監視モード、雨天走行モード、縦列駐車モード、車庫入れモードの各動作状況のうちの何れの動作状態にあるかを弁別可能になされ、当該弁別に基づいて前記表示制御手段による表示形態を選択するように構成されたものであることを特徴とする請求項 9に記載の画像生成装置。  [10] The image generating apparatus according to claim 9, wherein the display form selection means is configured to be capable of discriminating which of the following operating modes the vehicle is in: a right-turn mode, a left-turn mode, a surroundings-monitoring-at-start mode, an in-vehicle monitoring mode, a rear monitoring mode, a rainy-weather driving mode, a parallel parking mode, and a garage parking mode, and to select the display form by the display control means based on that discrimination.
[11] 車両に搭載された 1または複数のカメラからの入力画像を 3次元空間の予め決められた空間モデルにマッピングする空間再構成手段と、  [11] spatial reconstruction means for mapping input images from one or more cameras mounted on a vehicle to a predetermined spatial model in a three-dimensional space;
前記空間再構成手段によってマッピングされた空間データを参照して前記 3次元空間における任意の仮想視点から見た画像である仮想視点画像を生成する視点変換手段と、  viewpoint conversion means for generating a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the spatial data mapped by the space reconstruction means;
それぞれ互いに異なった視点から見た複数の前記仮想視点画像を表示することを制御する表示制御手段と、  display control means for controlling display of a plurality of the virtual viewpoint images viewed from mutually different viewpoints;
操作者による操作状況に応じて、前記表示制御手段による表示形態を選択する表 示形態選択手段と、  Display form selection means for selecting a display form by the display control means in accordance with an operation state by an operator;
を備えることを特徴とする画像生成装置。  An image generating apparatus comprising:
[12] 前記表示形態選択手段は、当該車両に係る所定の操作部への操作に基づいて当 該操作状況を弁別し、該弁別に基づいて前記表示制御手段による表示形態を選択 するように構成されたものであることを特徴とする請求項 11に記載の画像生成装置。 [12] The display form selection means is configured to discriminate the operation state based on an operation on a predetermined operation unit of the vehicle, and to select a display form by the display control means based on the discrimination. The image generation device according to claim 11, wherein
[13] 前記表示形態選択手段は、当該車両に係る状況に応じて所定の各操作に関する配列が変化するようになされた操作部への操作に基づいて当該操作状況を弁別し、該弁別に基づいて前記表示制御手段による表示形態を選択するように構成されたものであることを特徴とする請求項 11に記載の画像生成装置。  [13] The image generating apparatus according to claim 11, wherein the display form selection means discriminates the operation status based on an operation on an operation unit in which the arrangement of predetermined operations changes according to the situation of the vehicle, and selects the display form by the display control means based on the discrimination.
[14] 前記表示形態選択手段は、操作状況の履歴に対して学習する学習機能を備えた 操作部への操作に基づいて当該操作状況を弁別し、該弁別に基づいて前記表示制 御手段による表示形態を選択するように構成されたものであることを特徴とする請求 項 11に記載の画像生成装置。 [14] The display mode selection unit discriminates the operation state based on an operation on an operation unit having a learning function of learning a history of operation states, and the display control unit performs the discrimination based on the discrimination. 12. The image generation device according to claim 11, wherein the image generation device is configured to select a display mode.
[15] 1または複数のカメラからの入力画像を 3次元空間の予め決められた空間モデルに マッピングする空間再構成手段と、 [15] spatial reconstruction means for mapping an input image from one or a plurality of cameras to a predetermined spatial model of a three-dimensional space,
前記空間再構成手段によってマッピングされた空間データを参照して、前記 3次元空間における任意の仮想視点から見た画像である仮想視点画像を生成する視点変換手段と、前記仮想視点画像に撮影されている同種の物体毎に色分けする色分け手段と、前記色分け手段にて変換された画像を表示する表示制御手段と、  viewpoint conversion means for generating a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the spatial data mapped by the space reconstruction means; color classification means for color-coding each object of the same type captured in the virtual viewpoint image; and display control means for displaying the image converted by the color classification means,
を備えることを特徴とする画像生成装置。  An image generating apparatus comprising:
[16] 前記画像生成装置は、さらに、  [16] The image generation device further comprises:
前記仮想視点画像として生成された画像における同種の物体毎に色分けする色分 け手段  A color separation unit for performing color classification for each object of the same kind in the image generated as the virtual viewpoint image
を備えることを特徴とする請求項 1に記載の画像生成装置。  The image generation device according to claim 1, further comprising:
[17] 前記空間モデルは、平面または曲面を組み合わせて形成されたものであり、 [17] The space model is formed by combining planes or curved surfaces,
前記空間モデルに投影される画像は単純化される  the image projected onto the space model is simplified
ことを特徴とする請求項 1に記載の画像生成装置。  2. The image generating device according to claim 1, wherein:
[18] 前記空間モデルは、平面または曲面を組み合わせて形成されたものであり、 [18] The space model is formed by combining flat surfaces or curved surfaces,
前記空間モデルに投影される画像は単純化される  the image projected onto the space model is simplified
ことを特徴とする請求項 7に記載の画像生成装置。  8. The image generation device according to claim 7, wherein:
[19] 前記空間モデルは、平面または曲面を組み合わせて形成されたものであり、 [19] The space model is formed by combining planes or curved surfaces.
前記空間モデルに投影される画像は単純化される  the image projected onto the space model is simplified
ことを特徴とする請求項 9に記載の画像生成装置。  10. The image generation device according to claim 9, wherein:
[20] 前記空間モデルは、平面または曲面を組み合わせて形成されたものであり、 [20] The space model is formed by combining planes or curved surfaces,
前記空間モデルに投影される画像は単純化される  the image projected onto the space model is simplified
ことを特徴とする請求項 11に記載の画像生成装置。  12. The image generation device according to claim 11, wherein:
[21] 前記空間モデルは、平面または曲面を組み合わせて形成されたものであり、 [21] The space model is formed by combining planes or curved surfaces,
前記空間モデルに投影される画像は単純化される  the image projected onto the space model is simplified
ことを特徴とする請求項 15に記載の画像生成装置。  16. The image generation device according to claim 15, wherein:
[22] 前記表示手段は、近接する被写体の前記仮想視点画像を表示させる場合、該被 写体を含む広角の前記仮想視点画像と、該被写体の拡大された前記仮想視点画像 とを表示させる [22] The display means, when displaying the virtual viewpoint image of the close subject, displays the wide-angle virtual viewpoint image including the subject and the enlarged virtual viewpoint image of the subject.
ことを特徴とする請求項 1に記載の画像生成装置。  2. The image generating device according to claim 1, wherein:
[23] 前記表示手段は、近接する被写体の前記仮想視点画像を表示させる場合、該被写体を含む広角の前記仮想視点画像と、該被写体の拡大された前記仮想視点画像とを表示させる  [23] The display means, when displaying the virtual viewpoint image of a nearby subject, displays the wide-angle virtual viewpoint image including the subject and the enlarged virtual viewpoint image of the subject,
ことを特徴とする請求項 7に記載の画像生成装置。  The image generation device according to claim 7, wherein:
[24] 前記表示手段は、近接する被写体の前記仮想視点画像を表示させる場合、該被 写体を含む広角の前記仮想視点画像と、該被写体の拡大された前記仮想視点画像 とを表示させる [24] The display means displays the wide-angle virtual viewpoint image including the object and the enlarged virtual viewpoint image of the object when displaying the virtual viewpoint image of the nearby object.
ことを特徴とする請求項 9に記載の画像生成装置。  10. The image generation device according to claim 9, wherein:
[25] 前記表示手段は、近接する被写体の前記仮想視点画像を表示させる場合、該被 写体を含む広角の前記仮想視点画像と、該被写体の拡大された前記仮想視点画像 とを表示させる [25] The display means displays the wide-angle virtual viewpoint image including the object and the enlarged virtual viewpoint image of the object when displaying the virtual viewpoint image of the nearby object.
ことを特徴とする請求項 11に記載の画像生成装置。  12. The image generation device according to claim 11, wherein:
[26] 前記表示手段は、近接する被写体の前記仮想視点画像を表示させる場合、該被 写体を含む広角の前記仮想視点画像と、該被写体の拡大された前記仮想視点画像 とを表示させる [26] The display means, when displaying the virtual viewpoint image of the close subject, displays the wide-angle virtual viewpoint image including the subject and the enlarged virtual viewpoint image of the subject.
ことを特徴とする請求項 15に記載の画像生成装置。  16. The image generation device according to claim 15, wherein:
[27] 1または複数のカメラからの入力画像を 3次元空間の予め決められた空間モデルに マッピングする空間再構成手段と、 [27] spatial reconstruction means for mapping input images from one or more cameras to a predetermined spatial model of a three-dimensional space,
前記空間再構成手段によってマッピングされた空間データを参照して、前記 3次元空間における任意の仮想視点から見た画像である仮想視点画像を生成する視点変換手段と、  viewpoint conversion means for generating a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the spatial data mapped by the space reconstruction means;
前記視点変換手段にて変換された画像を表示する表示制御手段と、  Display control means for displaying the image converted by the viewpoint conversion means,
を備え、  With
前記視点変換手段は、ユーザの固有の情報またはユーザの状態に関する情報で あるユーザ情報に基づいて前記仮想視点画像を生成する  The viewpoint conversion means generates the virtual viewpoint image based on user information which is information specific to the user or information on the state of the user.
ことを特徴とする画像生成装置。  An image generating apparatus, characterized in that:
[28] 前記視点変換手段は、前記ユーザ情報としての該ユーザの視点に関する情報に 基づ 、て前記仮想視点画像を生成する [28] The viewpoint conversion unit generates the virtual viewpoint image based on information on the user's viewpoint as the user information.
ことを特徴とする請求項 27に記載の画像生成装置。  28. The image generation device according to claim 27, wherein:
[29] 前記視点変換手段は、前記ユーザ情報としての該ユーザの視野に対応した画角に関する情報に基づいて前記仮想視点画像を生成する  [29] The viewpoint conversion means generates the virtual viewpoint image based on information on an angle of view corresponding to the user's field of view as the user information
[30] 前記仮想視点画像は、カーナビゲーシヨンシステムの表示装置に表示される [30] The virtual viewpoint image is displayed on a display device of a car navigation system.
ことを特徴とする請求項 1に記載の画像生成装置。  2. The image generating device according to claim 1, wherein:
[31] 1または複数のカメラからの入力画像を 3次元空間の予め決められた空間モデルに マッピングする空間再構成処理と、 [31] spatial reconstruction processing that maps input images from one or more cameras to a predetermined spatial model in a three-dimensional space,
前記空間再構成処理によってマッピングされた空間データを参照して、前記 3次元空間における任意の仮想視点から見た画像である仮想視点画像を生成する視点変換処理と、  a viewpoint conversion process of generating a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the spatial data mapped by the spatial reconstruction process;
それぞれ相互に異なった視点から見た複数の前記仮想視点画像を表示することを制御する表示制御処理と、  a display control process of controlling display of a plurality of the virtual viewpoint images viewed from mutually different viewpoints;
を、コンピュータに実行させるための画像生成プログラムを記録したコンピュータ読 み取り可能な記録媒体。  A computer-readable recording medium that stores an image generation program for causing a computer to execute the program.
[32] 1または複数のカメラからの入力画像を 3次元空間の予め決められた空間モデルに マッピングする空間再構成処理と、 [32] spatial reconstruction processing that maps input images from one or more cameras to a predetermined spatial model in a three-dimensional space,
前記空間再構成処理によってマッピングされた空間データを参照して、前記 3次元空間における任意の仮想視点から見た画像である仮想視点画像を生成する視点変換処理と、  a viewpoint conversion process of generating a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the spatial data mapped by the spatial reconstruction process;
前記仮想視点画像に撮影されている同種の物体毎に色分けする色分け処理と、 前記色分け処理にて変換された画像を表示する表示処理と、  A color-coding process for color-coding each same type of object captured in the virtual viewpoint image, and a display process for displaying an image converted by the color-coding process.
を、コンピュータに実行させるための画像生成プログラムを記録したコンピュータ読 み取り可能な記録媒体。  A computer-readable recording medium that stores an image generation program for causing a computer to execute the program.
[33] 1または複数のカメラからの入力画像を 3次元空間の予め決められた空間モデルに マッピングする空間再構成処理と、 [33] spatial reconstruction processing that maps input images from one or more cameras to a predetermined spatial model in a three-dimensional space,
前記空間再構成処理によってマッピングされた空間データを参照して、前記 3次元空間における任意の仮想視点から見た画像である仮想視点画像を生成する視点変換処理と、  a viewpoint conversion process of generating a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the spatial data mapped by the spatial reconstruction process;
前記視点変換処理にて変換された画像を表示する表示制御処理と、 を、コンピュータに実行させるための画像生成プログラムを記録したコンピュータ読 み取り可能な記録媒体であって、 Display control processing for displaying the image converted in the viewpoint conversion processing, A computer-readable recording medium on which an image generation program for causing a computer to execute is recorded,
前記視点変換処理は、ユーザの固有の情報またはユーザの状態に関する情報で あるユーザ情報に基づいて前記仮想視点画像を生成する  In the viewpoint conversion process, the virtual viewpoint image is generated based on user information which is information specific to a user or information on a state of the user.
ことを特徴とする画像生成プログラムを記録したコンピュータ読み取り可能な記録媒 体。  A computer-readable recording medium on which an image generation program is recorded.
[34] 車両に搭載された 1または複数のカメラからの入力画像を 3次元空間の予め決められた空間モデルにマッピングする空間再構成処理と、  [34] a spatial reconstruction process of mapping input images from one or more cameras mounted on a vehicle to a predetermined spatial model in a three-dimensional space;
前記空間再構成処理によってマッピングされた空間データを参照して前記 3次元空間における任意の仮想視点から見た画像である仮想視点画像を生成する視点変換処理と、  a viewpoint conversion process of generating a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the spatial data mapped by the spatial reconstruction process;
それぞれ互いに異なった視点から見た複数の前記仮想視点画像を表示することを制御する表示制御処理と、  a display control process of controlling display of a plurality of the virtual viewpoint images viewed from mutually different viewpoints;
前記車両の状態に応じて、前記表示制御処理による表示形態を選択する表示形 態選択処理と、  A display mode selection process for selecting a display mode by the display control process according to the state of the vehicle;
を、コンピュータに実行させるための画像生成プログラムを記録したコンピュータ読 み取り可能な記録媒体。  A computer-readable recording medium that stores an image generation program for causing a computer to execute the program.
[35] 車両に搭載された 1または複数のカメラからの入力画像を 3次元空間の予め決められた空間モデルにマッピングする空間再構成処理と、  [35] a spatial reconstruction process of mapping input images from one or more cameras mounted on a vehicle to a predetermined spatial model in a three-dimensional space;
前記空間再構成処理によってマッピングされた空間データを参照して前記 3次元空間における任意の仮想視点から見た画像である仮想視点画像を生成する視点変換処理と、  a viewpoint conversion process of generating a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the spatial data mapped by the spatial reconstruction process;
それぞれ互いに異なった視点から見た複数の前記仮想視点画像を表示することを制御する表示制御処理と、  a display control process of controlling display of a plurality of the virtual viewpoint images viewed from mutually different viewpoints;
前記車両に係る動作状況に応じて、前記表示制御処理による表示形態を選択する 表示形態選択処理と、  A display mode selection process for selecting a display mode by the display control process according to an operation state of the vehicle;
を、コンピュータに実行させるための画像生成プログラムを記録したコンピュータ読 み取り可能な記録媒体。 A computer-readable recording medium that stores an image generation program for causing a computer to execute the program.
[36] 車両に搭載された1または複数のカメラからの入力画像を3次元空間の予め決められた空間モデルにマッピングする空間再構成処理と、  [36] a spatial reconstruction process that maps input images from one or more cameras mounted on the vehicle onto a predetermined spatial model in a three-dimensional space;
前記空間再構成処理によってマッピングされた空間データを参照して前記3次元空間における任意の仮想視点から見た画像である仮想視点画像を生成する視点変換処理と、  a viewpoint conversion process that generates a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the spatial data mapped by the spatial reconstruction process;
それぞれ互いに異なった視点から見た複数の前記仮想視点画像を表示することを制御する表示制御処理と、  a display control process that controls display of the plurality of virtual viewpoint images, each viewed from a mutually different viewpoint;
操作者による操作状況に応じて、前記表示制御処理による表示形態を選択する表示形態選択処理と、  a display form selection process that selects, according to an operation state by an operator, the display form used by the display control process;
を、コンピュータに実行させるための画像生成プログラムを記録したコンピュータ読 み取り可能な記録媒体。  A computer-readable recording medium that stores an image generation program for causing a computer to execute the program.
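Claims [34]–[36] each pair the display control process with a display form selection driven by vehicle state, vehicle operation, or operator input. As one hypothetical illustration of such a selection — the viewpoint names, state fields, and speed threshold below are invented for the sketch, not taken from the patent — it can be a simple state-to-viewpoint lookup:

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    gear: str          # e.g. "P", "R", "N", "D"
    speed_kmh: float
    turn_signal: str   # "left", "right", or "off"

def select_display_form(state: VehicleState) -> list[str]:
    """Return which virtual viewpoints to display for the current state."""
    if state.gear == "R":                        # reversing: bird's-eye + rear view
        return ["bird_eye", "rear_wide"]
    if state.turn_signal != "off":               # turning: add the blind-spot view
        return ["bird_eye", state.turn_signal + "_blind_spot"]
    if state.speed_kmh < 10.0:                   # parking / creeping: bird's-eye only
        return ["bird_eye"]
    return ["forward"]                           # normal driving
```

The selection layer only chooses *which* of the already-generated virtual viewpoint images to show; the spatial reconstruction and viewpoint conversion processes are unchanged across display forms.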
[37] 1または複数のカメラからの入力画像を 3次元空間の予め決められた空間モデルに マッピングし、 [37] Input images from one or more cameras are mapped to a predetermined spatial model in 3D space,
前記マッピングされた空間データを参照して、前記 3次元空間における任意の仮想 視点から見た画像である仮想視点画像を生成し、  With reference to the mapped spatial data, a virtual viewpoint image which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space is generated,
それぞれ相互に異なった視点から見た複数の前記仮想視点画像を表示することを制御する  controlling display of the plurality of virtual viewpoint images, each viewed from a mutually different viewpoint,
ことを特徴とする画像生成方法。  An image generation method characterized by the following.
[38] 1または複数のカメラからの入力画像を 3次元空間の予め決められた空間モデルに マッピングし、 [38] Input images from one or more cameras are mapped to a predetermined spatial model in 3D space,
前記マッピングされた空間データを参照して、前記 3次元空間における任意の仮想 視点から見た画像である仮想視点画像を生成し、  With reference to the mapped spatial data, a virtual viewpoint image which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space is generated,
前記仮想視点画像に撮影されている同種の物体毎に色分けし、  color-coding, by kind, objects of the same kind captured in the virtual viewpoint image,
前記色分けされた画像を表示する  Display the color-coded image
ことを特徴とする画像生成方法。  An image generation method characterized by the following.
[39] 1または複数のカメラからの入力画像を 3次元空間の予め決められた空間モデルに マッピングし、 [39] Input images from one or more cameras are mapped to a predetermined spatial model in 3D space,
前記マッピングされた空間データを参照して、前記3次元空間における任意の仮想視点から見た画像である仮想視点画像を生成し、  generating, with reference to the mapped spatial data, a virtual viewpoint image that is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space,
前記仮想視点画像を表示する  Displaying the virtual viewpoint image
ことを行う画像生成方法であって、  An image generation method performing the above, wherein
前記仮想視点画像は、ユーザの固有の情報またはユーザの状態に関する情報で あるユーザ情報に基づいて生成される  The virtual viewpoint image is generated based on user information that is information specific to the user or information on the state of the user.
ことを特徴とする画像生成方法。  An image generation method characterized by the following.
[40] 車両に搭載された1または複数のカメラからの入力画像を3次元空間の予め決められた空間モデルにマッピングし、  [40] mapping input images from one or more cameras mounted on the vehicle onto a predetermined spatial model in a three-dimensional space,
前記マッピングされた空間データを参照して前記 3次元空間における任意の仮想 視点から見た画像である仮想視点画像を生成し、  Generating a virtual viewpoint image that is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space with reference to the mapped spatial data;
それぞれ互いに異なった視点から見た複数の前記仮想視点画像の表示形態を制御し、  controlling a display form of the plurality of virtual viewpoint images, each viewed from a mutually different viewpoint, and
前記車両の状態に応じて、前記表示形態を選択する  Selecting the display mode according to the state of the vehicle
ことを特徴とする画像生成方法。  An image generation method characterized by the following.
[41] 車両に搭載された1または複数のカメラからの入力画像を3次元空間の予め決められた空間モデルにマッピングし、  [41] mapping input images from one or more cameras mounted on the vehicle onto a predetermined spatial model in a three-dimensional space,
前記マッピングされた空間データを参照して前記3次元空間における任意の仮想視点から見た画像である仮想視点画像を生成し、  generating a virtual viewpoint image, which is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space, with reference to the mapped spatial data,
それぞれ互いに異なった視点から見た複数の前記仮想視点画像の表示形態を制御し、  controlling a display form of the plurality of virtual viewpoint images, each viewed from a mutually different viewpoint, and
前記車両に係る動作状況に応じて、前記表示形態を選択する  Selecting the display mode according to the operation status of the vehicle
ことを特徴とする画像生成方法。  An image generation method characterized by the following.
[42] 車両に搭載された1または複数のカメラからの入力画像を3次元空間の予め決められた空間モデルにマッピングし、  [42] mapping input images from one or more cameras mounted on the vehicle onto a predetermined spatial model in a three-dimensional space,
前記マッピングされた空間データを参照して前記 3次元空間における任意の仮想 視点から見た画像である仮想視点画像を生成し、  Generating a virtual viewpoint image that is an image viewed from an arbitrary virtual viewpoint in the three-dimensional space with reference to the mapped spatial data;
それぞれ互いに異なった視点から見た複数の前記仮想視点画像の表示形態を制御し、操作者による操作状況に応じて、前記表示形態を選択することを特徴とする画像生成方法。  controlling a display form of the plurality of virtual viewpoint images, each viewed from a mutually different viewpoint, and selecting the display form according to an operation situation by an operator. An image generation method characterized by the above.
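Method claims [37]–[42] all rest on the same three steps: map camera images onto a space model, re-render from a virtual viewpoint, and display the result. The sketch below illustrates those steps under the simplifying assumption that the space model is a flat ground plane (Z = 0), in which case both the camera-to-model mapping and the virtual-viewpoint rendering reduce to 3×3 plane homographies. All intrinsics, poses, and image sizes are illustrative values, not taken from the patent:

```python
import numpy as np

def plane_homography(K, R, t):
    """Homography taking ground-plane points (X, Y), Z = 0, to pixels:
    x ~ K [r1 r2 t] [X Y 1]^T, for the camera model X_cam = R X_world + t."""
    return K @ np.column_stack((R[:, 0], R[:, 1], t))

def apply_h(H, p):
    """Apply a 3x3 homography to a 2-D point."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

def warp(src, H, out_shape):
    """Nearest-neighbour inverse warp of src through H (src pixel -> out pixel)."""
    Hinv = np.linalg.inv(H)
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack((xs.ravel(), ys.ravel(), np.ones(h * w)))
    sp = Hinv @ pts                               # look up each output pixel in src
    sx = np.round(sp[0] / sp[2]).astype(int)
    sy = np.round(sp[1] / sp[2]).astype(int)
    ok = (sx >= 0) & (sx < src.shape[1]) & (sy >= 0) & (sy < src.shape[0])
    out = np.zeros(out_shape, dtype=src.dtype)
    out[ys.ravel()[ok], xs.ravel()[ok]] = src[sy[ok], sx[ok]]
    return out

# Shared intrinsics (illustrative): 320x240 image, focal length 200 px.
K = np.array([[200.0, 0.0, 160.0],
              [0.0, 200.0, 120.0],
              [0.0, 0.0, 1.0]])

# Virtual viewpoint: a straight-down "bird's-eye" camera 10 m above the ground.
R_virt = np.diag([1.0, -1.0, -1.0])
t_virt = np.array([0.0, 0.0, 10.0])

# Real camera: same optical centre, tilted 30 degrees about the x axis.
a = np.deg2rad(30.0)
Rx = np.array([[1.0, 0.0, 0.0],
               [0.0, np.cos(a), -np.sin(a)],
               [0.0, np.sin(a), np.cos(a)]])
R_real, t_real = Rx @ R_virt, Rx @ t_virt

H_real = plane_homography(K, R_real, t_real)   # ground plane -> real image
H_virt = plane_homography(K, R_virt, t_virt)   # ground plane -> virtual image
H_to_virt = H_virt @ np.linalg.inv(H_real)     # real image -> virtual image

# Synthetic "camera frame": a checkerboard standing in for a real input image.
src = ((np.indices((240, 320)).sum(axis=0) // 20) % 2).astype(float)
bird = warp(src, H_to_virt, (240, 320))        # the virtual viewpoint image
```

With a non-planar space model, as the claims allow, the per-pixel mapping would go through the 3-D model instead of a single homography, but the real-image → space-model → virtual-image structure is the same; displaying several `bird`-style renders from different virtual viewpoints gives the multi-view display the claims describe.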
PCT/JP2005/002150 2004-02-26 2005-02-14 Image generation device, image generation program, and image generation method WO2005084027A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2004050784A JP2005242606A (en) 2004-02-26 2004-02-26 Image generation system, image generation program and image generation method
JP2004-050784 2004-02-26

Publications (1)

Publication Number Publication Date
WO2005084027A1 true WO2005084027A1 (en) 2005-09-09

Family

ID=34908603

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2005/002150 WO2005084027A1 (en) 2004-02-26 2005-02-14 Image generation device, image generation program, and image generation method

Country Status (2)

Country Link
JP (1) JP2005242606A (en)
WO (1) WO2005084027A1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008087706A1 (en) * 2007-01-16 2008-07-24 Pioneer Corporation Display device for vehicle, display method for vehicle, and display program for vehicle
WO2011129275A1 (en) * 2010-04-12 2011-10-20 住友重機械工業株式会社 Processing target image generation device, processing target image generation method, and operation support system
JP2019079173A (en) * 2017-10-23 2019-05-23 パナソニックIpマネジメント株式会社 Three-dimensional intrusion detection system and three-dimensional intrusion detection method
WO2021079517A1 (en) * 2019-10-25 2021-04-29 日本電気株式会社 Image generation device, image generation method, and program
DE102018106039B4 (en) 2017-04-26 2024-01-04 Denso Ten Limited IMAGE REPRODUCTION APPARATUS, IMAGE REPRODUCTION SYSTEM AND IMAGE REPRODUCTION METHOD
WO2024070203A1 (en) * 2022-09-30 2024-04-04 株式会社東海理化電機製作所 Remote monitoring system

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007318198A (en) * 2006-05-23 2007-12-06 Alpine Electronics Inc Vehicle perimeter image generating apparatus and photometry adjustment method of imaging apparatus
JP4858017B2 (en) * 2006-08-31 2012-01-18 パナソニック株式会社 Driving assistance device
JP4808119B2 (en) * 2006-09-28 2011-11-02 クラリオン株式会社 Navigation device for displaying predicted stop position
JP4888831B2 (en) * 2006-12-11 2012-02-29 株式会社デンソー Vehicle periphery monitoring device
JP5277603B2 (en) * 2007-10-09 2013-08-28 株式会社デンソー Image display device
JP5102718B2 (en) * 2008-08-13 2012-12-19 株式会社Ihi Vegetation detection apparatus and method
US8583406B2 (en) 2008-11-06 2013-11-12 Hoya Lens Manufacturing Philippines Inc. Visual simulator for spectacle lens, visual simulation method for spectacle lens, and computer readable recording medium recording computer readable visual simulation program for spectacle lens
JP5494372B2 (en) * 2010-09-08 2014-05-14 株式会社デンソー Vehicle display device
US9911257B2 (en) 2011-06-24 2018-03-06 Siemens Product Lifecycle Management Software Inc. Modeled physical environment for information delivery
JP6115278B2 (en) 2013-04-16 2017-04-19 株式会社デンソー Vehicle display device
JP6287583B2 (en) * 2014-05-27 2018-03-07 株式会社デンソー Driving assistance device
JP6565148B2 (en) * 2014-09-05 2019-08-28 アイシン精機株式会社 Image display control device and image display system
US10134112B2 (en) 2015-09-25 2018-11-20 Hand Held Products, Inc. System and process for displaying information from a mobile computer in a vehicle
JP6723820B2 (en) 2016-05-18 2020-07-15 株式会社デンソーテン Image generation apparatus, image display system, and image display method
JP6812181B2 (en) * 2016-09-27 2021-01-13 キヤノン株式会社 Image processing device, image processing method, and program
JPWO2018101227A1 (en) * 2016-11-29 2019-10-31 シャープ株式会社 Display control device, head mounted display, display control device control method, and control program
KR101855345B1 (en) * 2016-12-30 2018-06-14 도로교통공단 divided display method and apparatus by multi view points for simulated virtual image
JP6427258B1 (en) * 2017-12-21 2018-11-21 キヤノン株式会社 Display control device, display control method
KR102566896B1 (en) * 2018-03-20 2023-08-11 스미토모 겐키 가부시키가이샤 Indicator for shovel and shovel
JP7353782B2 (en) 2019-04-09 2023-10-02 キヤノン株式会社 Information processing device, information processing method, and program
JP7378243B2 (en) * 2019-08-23 2023-11-13 キヤノン株式会社 Image generation device, image display device, and image processing method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000064175A1 (en) * 1999-04-16 2000-10-26 Matsushita Electric Industrial Co., Ltd. Image processing device and monitoring system
JP2002083284A (en) * 2000-06-30 2002-03-22 Matsushita Electric Ind Co Ltd Drawing apparatus
JP2003116125A (en) * 2001-10-03 2003-04-18 Auto Network Gijutsu Kenkyusho:Kk Apparatus for visually confirming surrounding of vehicle
JP2004048295A (en) * 2002-07-10 2004-02-12 Toyota Motor Corp Image processor, parking assist apparatus, and image processing method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05265547A (en) * 1992-03-23 1993-10-15 Fuji Heavy Ind Ltd On-vehicle outside monitoring device
JP3324821B2 (en) * 1993-03-12 2002-09-17 富士重工業株式会社 Vehicle exterior monitoring device
JP3463144B2 (en) * 1996-04-26 2003-11-05 アイシン・エィ・ダブリュ株式会社 Building shape map display device
EP2259220A3 (en) * 1998-07-31 2012-09-26 Panasonic Corporation Method and apparatus for displaying image
JP2001282824A (en) * 2000-03-31 2001-10-12 Pioneer Electronic Corp Menu display system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000064175A1 (en) * 1999-04-16 2000-10-26 Matsushita Electric Industrial Co., Ltd. Image processing device and monitoring system
JP2002083284A (en) * 2000-06-30 2002-03-22 Matsushita Electric Ind Co Ltd Drawing apparatus
JP2003116125A (en) * 2001-10-03 2003-04-18 Auto Network Gijutsu Kenkyusho:Kk Apparatus for visually confirming surrounding of vehicle
JP2004048295A (en) * 2002-07-10 2004-02-12 Toyota Motor Corp Image processor, parking assist apparatus, and image processing method

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008087706A1 (en) * 2007-01-16 2008-07-24 Pioneer Corporation Display device for vehicle, display method for vehicle, and display program for vehicle
JPWO2008087706A1 (en) * 2007-01-16 2010-05-06 パイオニア株式会社 VEHICLE DISPLAY DEVICE, VEHICLE DISPLAY METHOD, AND VEHICLE DISPLAY PROGRAM
WO2011129275A1 (en) * 2010-04-12 2011-10-20 住友重機械工業株式会社 Processing target image generation device, processing target image generation method, and operation support system
JP2011223409A (en) * 2010-04-12 2011-11-04 Sumitomo Heavy Ind Ltd Processing target image generation device, processing target image generation method, and operation support system
US8848981B2 (en) 2010-04-12 2014-09-30 Sumitomo Heavy Industries, Ltd. Processing-target image generation device, processing-target image generation method and operation support system
DE102018106039B4 (en) 2017-04-26 2024-01-04 Denso Ten Limited IMAGE REPRODUCTION APPARATUS, IMAGE REPRODUCTION SYSTEM AND IMAGE REPRODUCTION METHOD
JP2019079173A (en) * 2017-10-23 2019-05-23 パナソニックIpマネジメント株式会社 Three-dimensional intrusion detection system and three-dimensional intrusion detection method
WO2021079517A1 (en) * 2019-10-25 2021-04-29 日本電気株式会社 Image generation device, image generation method, and program
JPWO2021079517A1 (en) * 2019-10-25 2021-04-29
JP7351345B2 (en) 2019-10-25 2023-09-27 日本電気株式会社 Image generation device, image generation method, and program
WO2024070203A1 (en) * 2022-09-30 2024-04-04 株式会社東海理化電機製作所 Remote monitoring system

Also Published As

Publication number Publication date
JP2005242606A (en) 2005-09-08

Similar Documents

Publication Publication Date Title
WO2005084027A1 (en) Image generation device, image generation program, and image generation method
JP6806156B2 (en) Peripheral monitoring device
US8754760B2 (en) Methods and apparatuses for informing an occupant of a vehicle of surroundings of the vehicle
JP6148887B2 (en) Image processing apparatus, image processing method, and image processing system
JP4323377B2 (en) Image display device
EP2763407B1 (en) Vehicle surroundings monitoring device
CN104163133B (en) Use the rear view camera system of position of rear view mirror
JP4364471B2 (en) Image processing apparatus for vehicle
US8514282B2 (en) Vehicle periphery display device and method for vehicle periphery image
US9706175B2 (en) Image processing device, image processing system, and image processing method
US20100054580A1 (en) Image generation device, image generation method, and image generation program
US10647256B2 (en) Method for providing a rear mirror view of a surroundings of a vehicle
WO2012169355A1 (en) Image generation device
EP2631696B1 (en) Image generator
JP5516998B2 (en) Image generation device
JP7523196B2 (en) Peripheral image display device
JP2004240480A (en) Operation support device
JP3876761B2 (en) Vehicle periphery monitoring device
JP5966513B2 (en) Rear side photographing device for vehicle
JP5516997B2 (en) Image generation device
JP2005269010A (en) Image creating device, program and method
JP4552525B2 (en) Image processing apparatus for vehicle
JP2007302238A (en) Vehicular image processing device
JP7253708B2 (en) Information processing device, vehicle, and information processing method
JP2012116400A (en) Corner pole projection device

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase